U.S. patent application number 15/616878 was published by the patent office on 2017-12-14 for providing content items in response to a natural language query. The applicant listed for this patent is Apple Inc. The invention is credited to Samir Bajaj, John M. Hornkvist, Eric Koebler, Jennifer Moore, Benjamin S. Phipps, and Ron Santos.
Publication Number: 20170357661
Application Number: 15/616878
Family ID: 60572728
Publication Date: 2017-12-14
United States Patent Application 20170357661
Kind Code: A1
Hornkvist; John M.; et al.
December 14, 2017
PROVIDING CONTENT ITEMS IN RESPONSE TO A NATURAL LANGUAGE QUERY
Abstract
Described is a system that may search for content items in
response to a voice-based natural language query. The system may
provide search results for content associated with various types of
user actions such as sending or receiving a document, sharing a
content item, printing a document, etc. For example, the system may
provide search results to a query such as "Show me the last
spreadsheet I sent to Bill," or "Find all emails from Bill in
April." In addition, the system may search for content associated
with a particular application. For example, the user may provide a
search query including "Open my `NewApp` documents." Accordingly,
one or more aspects of the system may provide an intuitive search
mechanism for content by allowing a user to provide natural
language search queries.
Inventors: Hornkvist; John M.; (Cupertino, CA); Santos; Ron; (San Jose, CA); Koebler; Eric; (Santa Cruz, CA); Moore; Jennifer; (Mountain View, CA); Bajaj; Samir; (San Ramon, CA); Phipps; Benjamin S.; (San Francisco, CA)

Applicant:
  Name: Apple Inc.
  City: Cupertino
  State: CA
  Country: US

Family ID: 60572728
Appl. No.: 15/616878
Filed: June 7, 2017
Related U.S. Patent Documents
  Application Number: 62349106
  Filing Date: Jun 12, 2016
Current U.S. Class: 1/1
Current CPC Class: G06F 16/2423 20190101; G06F 16/148 20190101; G06F 16/164 20190101; G06F 16/2428 20190101; G06F 40/40 20200101
International Class: G06F 17/30 20060101 G06F017/30; G06F 17/28 20060101 G06F017/28
Claims
1. A non-transitory machine-readable medium storing instructions
which, when executed by one or more processors of a computing
device, cause the computing device to perform operations
comprising: in response to detecting, at a device, a user action
performed with a content item, storing one or more attributes
characterizing the user action performed, wherein the attributes
are stored as metadata; receiving a search query for one or more
content items; identifying one or more references within the search
query, wherein the references include at least a reference to the
user action; identifying search criteria associated with the one or
more references, wherein the search criteria correspond to one or
more attributes stored as metadata; and identifying one or more
content items based on performing a search for content items
associated with the one or more attributes corresponding to the
search criteria.
2. The medium of claim 1, wherein the metadata is stored as part of
the content item.
3. The medium of claim 2, further comprising creating an index of
the metadata for content items stored on the device, and wherein
the performing the search includes searching the index.
4. The medium of claim 1, wherein the user action performed
includes sharing the content item between a first user and a second
user, and wherein the one or more attributes include an identifier
for the first user and an identifier for the second user.
5. The medium of claim 1, wherein the user action performed
comprises sending the content item from a first user to a second
user, and wherein the one or more attributes characterizing the
user action performed include an identifier for the first user as a
sender and an identifier for the second user as a recipient.
6. The medium of claim 5, wherein sending the content item from the
first user to the second user includes sending the content item
using an application, and the one or more attributes characterizing
the user action performed further include an identifier for the
application, and a time stamp of when the content item was
sent.
7. The medium of claim 5, wherein the search query is received as a
voice input and the reference to the user action includes an
utterance of a term referencing sharing, sending, or emailing.
8. The medium of claim 7, wherein the search query further includes
a reference to the recipient, wherein the reference to the
recipient includes an utterance of an identifier associated with
the recipient.
9. The medium of claim 8, wherein the search query further includes
a reference to the content item, wherein the reference to the
content item includes an utterance of a term referencing the
content item.
10. The medium of claim 2, further comprising securing the
attributes characterizing the user action performed by encrypting a
corresponding portion of the metadata stored as part of the content
item.
11. The medium of claim 3, further comprising securing the
attributes characterizing the user action performed by encrypting a
corresponding portion of the index.
12. The medium of claim 1, wherein the search query further
includes one or more filtering terms, and the search performed
filters the content based on the one or more filtering terms,
wherein the one or more filtering terms reference at least one of a
time period, recency, username, or folder name.
13. The medium of claim 1, wherein the search query is received as
part of an interaction with a digital assistant, and wherein the
identified content items are displayed by the digital assistant as
search results.
14. The medium of claim 1, wherein the content item is a document
and the user action performed includes printing the document, and
the one or more attributes characterizing the user action performed
includes a time stamp of when the document was printed.
15. A method comprising: detecting, by a device, a user action
performed with a document by a first user; storing one or more
attributes characterizing the user action performed with the
document, wherein the attributes are stored as part of the document
as metadata; receiving a search query including at least a
reference to the user action; in response to recognizing the search
query includes the reference to the user action, identifying search
criteria associated with one or more terms of the search query,
wherein the search criteria correspond to one or more of the
attributes characterizing the user action; and identifying one or
more documents associated with the identified search criteria.
16. The method of claim 15, wherein the user action performed
includes sending the document from the first user to a second user
using an application, and the one or more attributes include an
identifier for the first user as a sender, an identifier for the
second user as a recipient, an identifier for the application, and
a time stamp of when the document was sent.
17. The method of claim 16, wherein the search query is received as
a voice input and the reference to the user action includes an
utterance of a term referencing sharing, sending, or emailing.
18. The method of claim 17, wherein the search query further
includes a reference to the second user, wherein the reference to
the second user includes an utterance of an identifier associated
with the second user.
19. A device, comprising: a processor; and a memory coupled to the
processor, the memory storing instructions, which when executed by
the processor, cause the processor to perform operations
comprising: detecting, by a device, an action performed with a
content item by a first user; storing one or more attributes
characterizing the action performed with the content item, wherein
the attributes are stored as part of the content item as metadata;
receiving a search query including at least a reference to the
action; identifying one or more of the attributes characterizing at
least the user action; and identifying one or more content items
associated with the identified one or more attributes.
20. The device of claim 19, wherein the action performed includes
sending the content item from the first user to a second user using
an application, and the one or more attributes include an
identifier for the first user as a sender, an identifier for the
second user as a recipient, an identifier for the application, and
a time stamp of when the content item was sent.
21. The device of claim 20, wherein the search query is received as
a voice input and the reference to the user action includes an
utterance of a term referencing sharing, sending, or emailing, and
wherein the search query further includes a reference to the
recipient, wherein the reference to the recipient includes an
utterance of an identifier associated with the recipient.
22. The device of claim 20, wherein the search query further
includes a reference to the content item, wherein the reference to
the content item includes an utterance of a term referencing the
content item.
23. A non-transitory machine-readable medium storing instructions
which, when executed by one or more processors of a computing
device, cause the computing device to perform operations
comprising: recognizing a query, received by a search application,
includes a request for content of an application, wherein the
recognizing includes determining the received query includes at
least a reference to an identifying name of the application; in
response to recognizing the request for content of the application,
identifying content associated with the application by determining
a content item type; and performing a search for content items
having the identified content item type, wherein the content items
are provided to the search application as search results for the
query.
24. The medium of claim 23, wherein identifying content associated
with the application includes accessing one or more files provided
with an installation package of the application and stored locally
on the device or accessing a file extension or identifier.
25. The medium of claim 24, wherein determining the content item
type includes accessing only the one or more files stored locally
on the device and without accessing a file on another device or
server.
26. The medium of claim 25, wherein the one or more files include a
first list indicating which content item types the application
supports reading or opening, and a second list, different from the
first list, indicating which content item types the application
supports creating or exporting.
27. The medium of claim 26, wherein the content item types the
application supports creating or exporting are identified with a
Universal Type Identifier (UTI) that identifies a content item type
specific to the application and usable by other applications.
28. The medium of claim 23, further comprising determining the
identifying name of the application in response to detecting an
installation of the application on the device.
29. The medium of claim 23, wherein the search application is a
digital assistant, and the query is received as a voice input to
the digital assistant.
30. The medium of claim 29, wherein the identifying name is an
utterance within the query.
31. The medium of claim 30, wherein the recognizing further
includes determining the received query includes a reference to
content, wherein the reference to the content is an utterance of a
term referencing a document, email, file, or item.
32. The medium of claim 31, wherein the query further includes an
utterance of one or more filtering terms, and the search performed
filters the content items based on the one or more filtering terms,
wherein the one or more filtering terms reference at least one of a
time period, recency, username, filename, folder name, storage
location, and action attribute.
33. A non-transitory machine-readable medium storing instructions
which, when executed by one or more processors of a computing
device, cause the computing device to perform operations
comprising: identifying, for an application installed on the
device, a content item type the application has authority over by
determining the content item type is included in one or more lists
of content item type capabilities for the application; receiving,
by a search application, a query for content of the application; in
response to determining the query includes a reference to the
content and a reference to the application, providing, as a result
for the query, a set of content items from a search performed for
content of the identified content item type.
34. The medium of claim 33, wherein determining a content item type the
application has authority over includes identifying a content item
type that is included in both a first list of content item types
the application is configured to open or edit, and a second list of
content item types the application is configured to create or
export.
35. The medium of claim 34, wherein the first list and the second
list are stored in one or more files provided with an installation
package of the application and stored locally on the device.
36. The medium of claim 35, wherein determining the content item
type the application has authority over includes accessing only the
one or more files stored locally on the device and without
accessing a file on another device or server.
37. The medium of claim 33, wherein identifying the content item
type the application has authority over occurs in response to
determining the application has been installed on the device.
38. The medium of claim 33, wherein the search application is a
digital assistant, and the query is received as a voice input.
39. The medium of claim 38, wherein the reference to the
application includes an utterance of an identifying name of the
application.
40. The medium of claim 38, wherein the reference to the content
includes an utterance of a term referencing a document, email,
file, or item.
41. A device, comprising: a processor; and a memory coupled to the
processor, the memory storing instructions, which when executed by
the processor, cause the processor to perform operations
comprising: determining an identifying name of an application
installed on the device; identifying a content item type the
application has authority over by determining the content item type
is included in both a first list of content item types the
application supports opening or editing and a second list,
different from the first list, of content item types the
application supports creating or exporting; determining a received
search query is a request for content the application has authority
over by determining the query includes a reference to content and
the identifying name of the application; and providing, in response
to the search query, one or more content items from a search
performed for content items having the identified content item
type.
42. The device of claim 41, wherein identifying the content item
type the application has authority over is performed in response to
installing the application on the device.
43. The device of claim 42, wherein the search query is received as
a voice input and the reference to content includes an utterance of
a term referencing a document, file, content, or item.
Description
RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional
Patent Application No. 62/349,106, filed Jun. 12, 2016, the
entirety of which is incorporated herein by reference.
TECHNICAL FIELD
[0002] This disclosure relates generally to the field of content
item searching. More particularly, the disclosure relates to
searching content items in response to a natural language
query.
BACKGROUND
[0003] Many traditional tools exist to search for various types of
content within different environments. For example, in an online
environment, a search engine may employ various algorithms for
ranking search results such as websites. However, when searches are
initiated in a local (e.g. offline) environment, traditional tools
often rely on search capabilities of a file management application
(e.g. file explorer). For example, these file management
applications often allow a user to search for files often limited
to a particular set of file types. Moreover, such file management
applications often search through the same information regardless
of the type of file, which typically includes the file name, the
type of file, the date of creation, and certain other basic
parameters that are maintained for the file.
[0004] Current devices, however, often store vast amounts of
content including content items not typically accessed by users
from a file management application. For example, certain types of
content items (although saved as a file in some cases) are
typically accessed directly from one or more applications, and not
organized in directories traditionally navigated by file management
applications. Accordingly, current devices often have interfaces
that include a system-wide search mechanism (e.g. a "finder"
program) that users initiate as a primary source to access content
items. These search mechanisms, however, often rely on searches
based solely on a filename or explicit attributes that are defined
by a creator of a content item (e.g. title, author, etc.).
Accordingly, there is a continued need to improve mechanisms for
searching for content items in an intuitive manner.
SUMMARY
[0005] Described is a system (and method) for searching for content
items in response to a query such as a voice-based natural language
query. For example, the query may be provided as part of an
interaction with a voice-based digital assistant.
[0006] In a first aspect, a user may perform a search for content
items associated with various types of user actions performed with
a content item such as sending or receiving a document, sharing a
content item, printing a document, or other types of user actions.
In order to provide such search capabilities, the system may store
implicit attributes associated with a content item in response to
detecting such user actions. For example, these attributes may
store information characterizing a type of action performed with a
content item. For example, these attributes may identify the
application used to perform the action, a recipient or sender of a
content item, the time the action was performed, and other
characteristics. The system may store these attributes as metadata
associated with a content item including metadata that may be
stored as part of the content item itself and/or metadata that is
stored as part of a searchable index. In order to protect
information that may be derived from these stored attributes, the
system may also secure portions (or all) of the metadata using
various techniques including, for example, encryption.
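The attribute-capture step described above can be sketched as follows. This is a minimal illustration only; the dictionary layout and attribute names are assumptions for the sketch, not the patent's actual metadata schema:

```python
import time

def record_action_metadata(content_item, action, app=None,
                           sender=None, recipient=None):
    """Attach implicit attributes characterizing a user action to a
    content item's metadata (a plain dict stands in here for
    file-resident metadata or a searchable index entry)."""
    metadata = content_item.setdefault("metadata", [])
    metadata.append({
        "action": action,          # e.g. "send", "share", "print"
        "application": app,        # application used to perform the action
        "sender": sender,
        "recipient": recipient,
        "timestamp": time.time(),  # when the action occurred
    })
    return content_item
```

For example, detecting that a spreadsheet was emailed would record `record_action_metadata(item, "send", app="Mail", sender="me", recipient="bill")`, leaving the attributes available to later searches.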
[0007] When performing a search, the system may use natural
language processing capabilities to identify search criteria for
one or more of these implicit attributes. Thus, the system may
provide a mechanism to answer various types of queries that may not
necessarily be provided in a predefined search query format. For
example, the system may provide search results in response to
natural language voice-based query such as "Show me my most recent
documents." In addition, the query may include actions that are
associated with another user such as a recipient or a sender of a
content item. For example, a user may provide a query such as "Show
me the last spreadsheet I sent to Bill," or "Find all emails from
Bill in April."
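The mapping from query references to search criteria can be illustrated with a toy sketch. The term lists and token matching below are deliberate simplifications standing in for real natural language processing, and the criteria keys are assumptions:

```python
# Hypothetical term list; a real system would use trained NLP models.
ACTION_TERMS = {"sent": "send", "shared": "share", "emailed": "email",
                "printed": "print", "received": "receive"}

def extract_search_criteria(query):
    """Identify references within a natural language query and map them
    to search criteria over stored action attributes."""
    tokens = query.lower().replace(",", "").split()
    criteria = {}
    for i, tok in enumerate(tokens):
        if tok in ACTION_TERMS:
            criteria["action"] = ACTION_TERMS[tok]
            # "... sent to Bill" -> the token after "to" names the recipient
            if i + 2 < len(tokens) and tokens[i + 1] == "to":
                criteria["recipient"] = tokens[i + 2]
    if "last" in tokens:
        criteria["order"] = "most_recent"
    return criteria
```

Applied to "Show me the last spreadsheet I sent to Bill", the sketch yields an action criterion, a recipient criterion, and a recency ordering.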
[0008] In a second aspect, a user may perform a search for content
items associated with a particular application. For example, a user
may be interested in content originating from the application, or
those the user generally associates with a particular
application. For example, the user may provide a search query
including "Show me my `NewApp` items" (where "NewApp" is the name
of a particular application). Providing such a search capability,
however, may require determining which content item types a user
associates with a particular application because multiple
applications may support a particular content item type. For
example, an application may typically open various types of content
items (e.g. various document formats), but those content items may
not be of interest to a user. Accordingly, in one embodiment, the
system may determine content items the application has authority
over (or content items belonging to the particular application).
For example, in one embodiment, "authority over" may include
content item types the application may not only open (e.g. open,
read, import, etc.), but also content item types the application
may create (e.g. create, export, write, etc.). For example, a
content item an application may create may include content item
types that are usable by other applications.
[0009] In order to determine content items associated with a
particular application, the system may reference one or more files
associated with the particular application such as a manifest (or
property list) type file. For example, the system may access such
files to cross-reference a list of content items that an
application may read with a list of content items the application
may create.
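The cross-referencing step amounts to intersecting the two lists. In this sketch a dict stands in for the manifest (or property list) file, and the key names are assumptions rather than actual property-list keys:

```python
def content_types_with_authority(manifest):
    """Determine which content item types an application has 'authority
    over' by intersecting the types it can read or open with the types
    it can create or export, as listed in a manifest-style file."""
    readable = set(manifest.get("readable_types", []))
    creatable = set(manifest.get("creatable_types", []))
    return readable & creatable
```

For instance, an application that can read both its own document type and plain text, but can only create its own document type, would have authority over only the latter.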
[0010] Accordingly, one or more aspects of the system as further
described herein may provide an intuitive search mechanism for
content items by allowing a user to provide natural language search
queries.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] Embodiments of the disclosure are illustrated by way of
example, and not by way of limitation, in the figures of the
accompanying drawings in which like reference numerals refer to
similar elements.
[0012] FIG. 1 is a block diagram illustrating an example of an
operating environment according to an embodiment of the
disclosure.
[0013] FIG. 2 is a block diagram illustrating an example of a
digital assistant according to an embodiment of the disclosure.
[0014] FIG. 3 is a process flow diagram illustrating a process of
identifying a set of content items associated with a user action
according to an embodiment of the disclosure.
[0015] FIGS. 4A-4B are diagrams illustrating examples of stored
attributes according to some embodiments of the disclosure.
[0016] FIGS. 5A-5D are diagrams illustrating examples of processing
a query requesting content associated with a user action according
to some embodiments of the disclosure.
[0017] FIGS. 6A-6D are diagrams illustrating examples of an
interface for providing search results for content items associated
with a user action as part of an interaction with a digital
assistant according to some embodiments of the disclosure.
[0018] FIG. 7 is an example flow diagram illustrating a method of
providing search results of content items associated with a user
action based on a natural language query according to an embodiment
of the disclosure.
[0019] FIG. 8 is a process flow diagram illustrating a process of
identifying content items associated with a particular application
according to an embodiment of the disclosure.
[0020] FIG. 9 is a diagram illustrating an example of processing a
query requesting content associated with a particular application
according to some embodiments of the disclosure.
[0021] FIG. 10 is a diagram illustrating an example of information
relating to capabilities supported by a particular application
according to some embodiments of the disclosure.
[0022] FIGS. 11A-11B are diagrams illustrating examples of an
interface for providing search results for content items associated
with a particular application as part of an interaction with a
digital assistant according to some embodiments of the
disclosure.
[0023] FIG. 12 is an example flow diagram illustrating a method of
providing search results of content items associated with a
particular application based on a natural language query according
to an embodiment of the disclosure.
[0024] FIG. 13 is a block diagram illustrating an example computing
system, which may be used in conjunction with one or more of the
embodiments of the disclosure.
DETAILED DESCRIPTION
[0025] Various embodiments and aspects will be described with
reference to details discussed below, and the accompanying drawings
will illustrate the various embodiments. The following description
and drawings are illustrative and are not to be construed as
limiting. Numerous specific details are described to provide a
thorough understanding of various embodiments. However, in certain
instances, well-known or conventional details are not described in
order to provide a concise discussion of embodiments. References to
"one embodiment" or "an embodiment" or "some embodiments" means
that a particular feature, structure, or characteristic described
in conjunction with the embodiment can be included in at least one
embodiment. The appearances of the phrase "embodiment" in various
places in the specification do not necessarily refer to the same
embodiment. The processes depicted in the figures that follow are
performed by processing logic that comprises hardware (e.g.
circuitry, dedicated logic, etc.), software, or a combination of
both. Although the processes are described below in terms of some
sequential operations, it should be appreciated that some of the
operations described may be performed in a different order, or some
operations may be performed in parallel rather than
sequentially.
[0026] As described above, the disclosure relates to searching for
content items, which may be performed within an operating
environment.
[0027] FIG. 1 is a block diagram illustrating an example of an
operating environment according to an embodiment of the disclosure.
The system 100 may include a client device 110 and server 120,
which may be connected via a network 113. The network 113 may be
any suitable type of wired or wireless network such as a local area
network (LAN), a wide area network (WAN), or combination thereof.
The network may also provide access to public content items 175
(e.g. internet content).
[0028] The client device 110 may be any type of computing device
such as a smartphone, tablet, laptop, desktop, wearable device
(e.g. smartwatch), set-top-box, interactive speaker, etc., and the
server 120 may be any kind of server (or computing device, or
another client device 110), which may be a standalone device, or
part of a cluster of servers, and may include a cloud-based server,
application server, backend server, or a combination thereof.
[0029] As shown, the client device 110 may include various
components or modules to perform various operations as described
herein. The client device 110 may include a metadata processing
module 140 that may process and collect various forms of metadata
160. Accordingly, the metadata processing module 140 may access
metadata 160, as well as one or more indexes.
[0030] As referred to herein, metadata 160 may include any
information that may include characteristics, attributes, and the
like, that may be associated with particular content items (local
content items 170 or public content items 175). For example, the
metadata 160 may include attributes including implicit attributes
as further described herein. The metadata 160 may be stored as part
of a content item, and/or may be stored separately (e.g. within a
database or file). The metadata 160 may include information stored
in any suitable format (e.g. metadata entries, fields, objects,
files, etc.). In one embodiment, the metadata (e.g. when stored as a
metadata object or file) may itself contain entries or fields. In addition,
the metadata may include information from different applications
and a specific type of metadata may be created for each of the
applications. In one embodiment, metadata may include a persistent
identifier that uniquely identifies its associated content item.
For example, this identifier remains the same even if the name of
the file is changed or the file is modified. This allows for the
persistent association between the particular content item and its
metadata.
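The persistent-association idea might be sketched as follows, assuming a UUID-style identifier and dictionary-based items (both assumptions of this sketch, not the patent's actual representation):

```python
import uuid

metadata_store = {}  # persistent_id -> metadata for that content item

def new_content_item(name):
    """Create a content item carrying a persistent identifier that stays
    fixed for the item's lifetime, so its metadata association survives
    renames and modifications."""
    return {"name": name, "persistent_id": str(uuid.uuid4())}

def rename(item, new_name):
    """Rename the item; the persistent_id (and thus the metadata link)
    is deliberately left untouched."""
    item["name"] = new_name
    return item
```

Renaming an item after its metadata has been recorded leaves the lookup by persistent identifier intact.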
[0031] In addition, non-limiting examples of metadata may be found
in commonly assigned U.S. Pat. No. 7,437,358, issued Oct. 14, 2008,
the entirety of which is incorporated herein by reference.
[0032] The operating environment 100 may also include a local
content items index 150 for local content items 170 and/or metadata
160, and a public content items index 155 for public content items
175. As referred to herein, content items may include content items
stored on a particular device (e.g. local content items 170) such
as documents, emails, messages, pictures, media, applications,
contacts, calendar events, reminders, folders, browser history,
bookmarks, posts, and the like, as well as content items from
public sources (e.g. public content items 175) including websites,
webpages, applications, map information, reviews, retail items
(e.g. from a particular online retailer), streamed media (e.g.
music, videos, eBooks, etc.), pictures, social media content (e.g.
posts, pictures, messages, contacts, etc.), and the like. It should
be noted that the local and public content items are not mutually
exclusive, and accordingly, public content items 175 may also be
local content items 170 (and vice versa).
[0033] The indexes (e.g. indexes 150 and 155) may include
identifiers or representations of the content items and these
indexes may be designed to allow a user to rapidly locate a wide
variety of content items. These indexes may index metadata 160
associated with content items (e.g. local content items 170 and
public content items 175), as well as index the contents of these
content items. In some embodiments, the local content items index
150 may be updated continuously (e.g. as content items are shared,
created, modified, printed, downloaded, etc.) using a background
process (e.g. daemon) executing on a device. In addition, the
public content items index 155 may be updated using a crawler 157
such as an internet crawler. For example, crawler 157 may retrieve
(e.g. "crawl") information from various websites, as well as
from third-party providers. For example, crawler 157 may retrieve
metadata relating to, for example, multimedia content items (e.g.
music, movies, audio book, etc.) provided by third-party providers
that may be accessed via a user account with the third-party
provider. For instance, the system (e.g. crawler 157) may
coordinate with a third-party provider (via an API) to provide
searchable metadata for online content items to which the user may
subscribe. In some embodiments, the public content index 155 (or
portions thereof) may be stored locally to allow the system to
index interactions with public content items 175 (e.g. visited
webpage history, media playlist history, etc.). Accordingly, in
some embodiments, the system may maintain a history of user actions
performed with content that may be stored publicly as further
discussed herein.
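The role of such an index can be illustrated with a minimal inverted index over stored action attributes. This is a toy stand-in for the local content items index 150; the item and record layouts are assumptions of the sketch:

```python
from collections import defaultdict

def build_metadata_index(items):
    """Build a simple inverted index mapping (attribute, value) pairs to
    the names of content items whose stored action metadata carries
    them, allowing attribute-based searches without scanning every item."""
    index = defaultdict(list)
    for item in items:
        for record in item.get("metadata", []):
            for key, value in record.items():
                if value is not None:
                    index[(key, value)].append(item["name"])
    return index
```

A background process would rebuild or incrementally update such an index as items are shared, created, modified, or printed.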
[0034] The local content items index 150 and public content item
index 155 may be stored on the same device, or on separate devices
as shown. In one embodiment, the system may distinguish between
private data (e.g. local content items 170) stored on the device,
and the public content items 175 (e.g. non-local content items)
that may be accessed from public sources such as the internet or
third-parties. In one embodiment, the system may secure the local
content items 170 and metadata 160, by incorporating a firewall or
other features to maintain the privacy of user content. In one
embodiment, components that are part of a server may also be part
of the client device 110, and accordingly, the system may secure
these components using various techniques such as "sandboxing." For
example, sandboxing may include compartmentalizing resources (e.g.
data, memory addresses, processes, features, etc.) that one or more
components access on the client device 110. In addition, various
encryption mechanisms may also be employed to secure content items
170, metadata 160, or attributes as further described herein. It
should be noted that although the content items are shown as a
local (e.g. private) versus public dichotomy, in some embodiments,
metadata (or content items) from public sources may be stored
locally on the client device 110, and some local content items 170
may be accessed from a remote source (e.g. another client device
110, or server 120).
[0035] The system may include a digital assistant 132. The digital
assistant 132 may reside on the client
device 110, the server 120, or both, for example, as a
client-server implementation. For example, in a client-server
implementation, certain functionality of the digital assistant 132,
such as user interface and searching components, may reside on the
client device 110, while functionality such as query and natural
language processing may occur on the server 120. For example, as
further described herein a query received from a user may be
transmitted to a server 120 for processing, and the server 120 may
instruct client device 110 (e.g. provide search criteria) to
perform a search for content items (e.g. local content items
170).
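The client-server split described in paragraph [0035] can be illustrated with a toy example: the "server" side derives search criteria from the raw query, and the "client" side applies those criteria to local content items. The keyword matching and field names below are simplifying assumptions; the actual query processing would use natural language processing as described later.

```python
# Hypothetical sketch of the client-server split: the server turns a raw
# query into search criteria, and the client applies those criteria to
# its local content items. Function and field names are assumptions.

def server_process_query(query):
    """Server side: derive search criteria from a natural language query."""
    criteria = {}
    if "document" in query:
        criteria["content_item_type"] = "document"
    if "downloaded" in query:
        criteria["user_action"] = "download"
    return criteria

def client_local_search(criteria, local_items):
    """Client side: return local items whose attributes match all criteria."""
    return [item for item in local_items
            if all(item.get(k) == v for k, v in criteria.items())]

local_items = [
    {"name": "report.pdf", "content_item_type": "document",
     "user_action": "download"},
    {"name": "photo.jpg", "content_item_type": "image",
     "user_action": "capture"},
]
criteria = server_process_query("Display the document I last downloaded")
results = client_local_search(criteria, local_items)
```

Keeping the local search on the client is consistent with the privacy discussion above: only the query, not the private content items, need leave the device.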
[0036] Referring to FIG. 2, the digital assistant 132 may include a
speech-to-text processing module 134, a natural language processing
module 136, and a vocabulary 138.
[0037] The speech-to-text processing module 134 may process
received speech input (e.g. a user utterance) using various
acoustic and language models to recognize the speech input as a
sequence of phonemes, and ultimately, a sequence of words or tokens
written in one or more languages. The speech-to-text processing
module 134 may use any suitable speech recognition techniques,
acoustic models, and language models, such as Hidden Markov Models,
Dynamic Time Warping (DTW)-based speech recognition, and other
statistical and/or analytical techniques. In one embodiment, the
speech-to-text processing can be performed by the server 120,
client device 110, or both. Once the speech-to-text processing
module 134 obtains the result of the speech-to-text processing
(e.g. a sequence of words or tokens), it may provide the result to
the natural language processing module 136 for intent
deduction.
[0038] The natural language processing module 136 (or natural
language processor) of the digital assistant 132 may take a
sequence of words or tokens (e.g. token sequence) generated by the
speech-to-text processing module 134, and attempt to associate the
token sequence with one or more intents recognized by the digital
assistant 132. For example, the natural language processing module
136 may receive a token sequence (e.g., a text string) from the
speech-to-text processing module 134, and determine what intents
or attributes are implicated by the words in the token sequence,
which may include referring to a vocabulary (or vocabulary index)
138. For example, an intent represents a task that can be performed
by the digital assistant 132 or the system 100. For example, within
the context of some embodiments described herein, the intent may
include locating or searching for content items. In some
embodiments, the natural language processing includes identifying
one of the one or more terms as a pronoun and determining a noun to
which the pronoun refers. For example, terms such as "me" and "my"
may be associated with a particular user and user actions performed
by the particular user. Accordingly, the digital assistant 132 may
use natural language processing to disambiguate ambiguous terms.
For example, disambiguating may include identifying that one or
more terms have multiple candidate meanings; prompting a user for
additional information about terms; receiving the additional
information from the user in response to the prompt; and
disambiguating the terms in accordance with the additional
information. In some embodiments, prompting the user for additional
information includes providing a voice-based prompt to the
user.
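The pronoun resolution and ambiguity detection described in paragraph [0038] can be sketched as below. The vocabulary contents and the simple token-level matching are assumptions for illustration; a real implementation would be far more sophisticated.

```python
# Illustrative sketch of resolving pronouns in a token sequence against
# a vocabulary, as described above. The vocabulary contents and the
# detection of ambiguous terms are simplifying assumptions.

PRONOUNS = {"me": "current_user", "my": "current_user", "i": "current_user"}

def resolve_pronouns(tokens, current_user):
    """Replace pronoun tokens with the user to which they refer."""
    return [current_user if t.lower() in PRONOUNS else t for t in tokens]

def ambiguous_terms(tokens, vocabulary):
    """Report terms with multiple candidate meanings (would trigger a prompt)."""
    return [t for t in tokens if len(vocabulary.get(t.lower(), [])) > 1]

# "pages" is ambiguous: it may name an application or a document type.
vocabulary = {"pages": ["application", "document_type"],
              "email": ["content_type"]}
tokens = "show me the last pages document".split()
resolved = resolve_pronouns(tokens, "user1")
ambiguous = ambiguous_terms(tokens, vocabulary)
```

In this sketch, an ambiguous term such as "pages" would be the trigger for the voice-based prompt for additional information described above.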
[0039] Referring back to FIG. 1, once a query is processed by the
digital assistant 132, the search module 124 may work in
conjunction with a metadata processing module 140 to search for
content items by searching metadata 160, or one or more of the
indexes (or content items themselves). The search module 124 may
also rank relevant search results. The search module 124 may also
include, or work in conjunction with, a search application
(including the digital assistant 132), which may provide an
interface for receiving a search query and may initiate a search
for content items as further described herein. In one embodiment,
search results may be provided within an interface provided with
the digital assistant as further described herein. The search
application (or application) may be provided as part of an
operating system of the device, or may be part of an installed
application (e.g. third-party application). In addition, the search
application may be part of an application that provides built-in
search functionality. For example, an application (e.g. email
application) may communicate with the components (e.g. modules)
described herein via an API 130 to provide an interface for
searching for content items. In some embodiments, the API 130 may
also provide the ability for third-party applications to work in
conjunction with the digital assistant 132. For example, a
voice-based search may be initiated from within a third-party
application.
[0040] In one embodiment, the system may primarily (or exclusively)
search for local content items, but may also include a set of
predefined (or user defined) public content items. For example, in
one embodiment, the system may search a predefined set or type of
public content items (e.g. maps, wiki pages, particular websites,
etc.). Accordingly, in one embodiment, the system may search for
content items that are stored on a local device, as well as for
content items that are from public sources (e.g. internet). It
should be noted that as described above, the search functionality
described herein may be performed on the client device 110, or in
conjunction with the server 120. For example, the search query may
be transmitted to a server 120 for processing, and the server may
instruct client device 110 to perform a local search.
[0041] As described, the client device 110 may also include an API
130, to allow components (e.g. third-party applications) to access,
for example, the metadata processing module 140 and other
components shown. For example, the API 130 may provide a method for
transferring data and commands between the metadata processing
module 140 and these components. The metadata processing module 140
may also receive data from an importer/exporter via the API 130
that may communicate with various components to provide metadata
for certain types of content items. As referred to herein, a
third-party refers to a manufacturer or company other than the
manufacturer or company providing the operating system for the
device or the device itself. For example, a developer may utilize
the API 130 to import or provide an
indication of extractable metadata for content items specific to a
particular application (e.g. third-party application).
[0042] The client device 110 may also store various applications
123, including applications that may be installed or provided from
third-party providers. When an application is installed, it may
store various application components 162 including various files,
resources, etc. As further described herein, files stored as part
of the application components 162 may include various manifest
files that provide information regarding the characteristics,
capabilities, and other information regarding a particular
application that the system (e.g. operating system, API 130,
various components) may access.
[0043] It should be noted that the configurations described herein
are examples, and various other configurations may be used without
departing from the embodiments described herein. For instance,
components may be added, omitted, and may interact in various ways
known to an ordinary person skilled in the art.
[0044] As described above, in one aspect, a system may allow a user
to perform a search for content items associated with various
types of user actions such as sending or receiving a document,
sharing a content item, printing a document, or other types of user
actions.
[0045] FIG. 3 is a process flow diagram illustrating a process of
identifying a set of content items associated with a user action
according to an embodiment of the disclosure. Process 300 may be
performed, for example, by one or more components of system
100.
[0046] In 310, the system may detect a user action (or action)
performed with a content item. For example, the system may include
one or more processes (e.g. daemon), which may be part of an
operating system, that detect various actions performed
by a user (e.g. user actions). As referred to herein, performing a
user action with a content item may include various operations,
commands, instructions, and the like that may be performed in
conjunction with a content item. For example, the user action may
include copying, sending, sharing, printing, moving, deleting,
editing, modifying, creating, opening, downloading, saving,
posting, archiving, playing, transferring, capturing (e.g. taking a
picture), and like types of actions performed with a content item.
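The detection step 310 can be sketched as a daemon-style hook that records recognized user actions performed with a content item. The event format and the action set below are illustrative assumptions, not the actual operating-system mechanism.

```python
# Minimal sketch of a process (e.g. daemon) that detects user actions
# performed with content items and records them. The event format and
# tracked-action set are illustrative assumptions.

USER_ACTIONS = {"copy", "send", "share", "print", "move", "delete",
                "edit", "create", "open", "download", "save", "post",
                "archive", "play", "transfer", "capture"}

detected = []

def on_event(action, content_item):
    """Daemon-style hook: record recognized user actions with a content item."""
    if action in USER_ACTIONS:
        detected.append({"user_action": action, "content_item": content_item})

on_event("download", "report.pdf")
on_event("hover", "report.pdf")   # not a tracked user action; ignored
```

Only events matching the tracked action set are recorded, which corresponds to the system detecting "various operations, commands, instructions, and the like" performed with a content item.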
[0047] In one embodiment, the user action may include sharing (or
sending) a content item. The sharing may be performed by
transmitting the content item within a network or through a direct
communication link. For example, the user action may include
sending a content item (e.g. from a first user) to a second user.
For instance, sending a content item may include attaching a
content item to an email sent to the second user as a recipient. In
another example, sharing may include transmitting the content item
to one or more users via a messaging or chat application. In
another example, sharing a content item may include storing the
content item in a file repository (e.g. virtual drop box) or cloud
account that allows the content to be accessed by authorized users.
In yet another example, sharing a content item may include
providing the content item to a collaborative work environment or
platform.
[0048] In addition, sharing a content item may include using a
third-party application that may provide various sharing
mechanisms. For example, an environment may allow a user to select
a particular content item (e.g. right click) to provide a sharing
menu including various applications that may be selected to perform
a particular sharing action. Accordingly, the system may detect
sharing a content item using such a mechanism.
[0049] In 315, the system may create and store one or more
attributes characterizing the user action detected in 310. As
referred to herein, attributes characterizing the action performed
may include any attributes to provide information related to the
user action. These attributes may be stored as any type of suitable
value (e.g. number, string, object, etc.), and may be selected from
a predefined set, or may be provided as a new value derived from
the user action. The system may store these attributes as part of
the metadata (e.g. metadata 160). In one embodiment, the attributes
may be stored as part of the content item itself. For example, if a
user were to transfer content items to a second device, the
attributes would transfer with the content items, and accordingly,
the second device may determine particular user actions performed.
In addition, or as an alternative, the attributes may be stored as
part of an index (e.g. indexes 150 or 155) to provide an efficient
retrieval method as described above. It should be noted that as
described above, when content items are transferred to a second
device, a new index may also be created on the second device.
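Step 315 can be sketched as follows: the attributes characterizing a detected user action are stored both in an index (for efficient retrieval) and as part of the content item itself (so they transfer with it to a second device). The record layout is an assumption consistent with the implicit attributes described below.

```python
# Sketch of step 315: storing attributes that characterize a detected
# user action, both in an index and alongside the content item itself.
# The record layout is an illustrative assumption.

index = []  # fast-lookup index of attribute records (cf. indexes 150/155)

def store_attributes(content_item, action, user, application, time):
    attrs = {"content_item_id": content_item["id"], "user_action": action,
             "user": user, "application": application, "time": time}
    index.append(attrs)                                  # part of an index
    content_item.setdefault("attributes", []).append(attrs)  # part of item

doc = {"id": "doc-1", "name": "notes.pages"}
store_attributes(doc, "create", "user1", "Pages", "4/1/16 17:01")
```

Because the attributes are embedded in the item as well as indexed, transferring `doc` to a second device would carry its action history along, and the second device could rebuild its own index from those embedded attributes.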
[0050] FIGS. 4A-4B are diagrams illustrating an example of stored
attributes according to some embodiments of the disclosure. As
shown in FIGS. 4A-B, the system may store one or more attributes
including, for example, a content item ID 405 attribute, a content
item type 406 attribute, and various other types of attributes
including implicit attributes. For example, in one embodiment,
attributes characterizing the user action may be stored as implicit
attributes 400 as opposed to explicit attributes. For example,
implicit attributes 400 may be created automatically by the system
(e.g. in response to detecting a type of user action). In contrast,
the system may also store explicit attributes that may be specified
by a user or a creator of the content item. For example, explicit
attributes may include attributes such as the author, title, genre,
etc. of a content item that may be specified by the creator of the
content item (e.g. user or application).
[0051] As shown in FIG. 4A, the implicit attributes 400 may include
attributes for a user action 408, user 415, application 413, and
time 414. A user action 408 attribute may specify the type of user
action performed including, for example, the user actions described
above. For example, as shown in the entries of FIG. 4A, the user
action 408 attributes indicate that one or more content items were
created, printed, and downloaded. A user attribute 415 may indicate
the user (or user account) that performed the user action 408. The
application attribute 413 may indicate the application used to
perform the action (if any). The time attribute 414 may indicate
when the user action 408 was performed. A time attribute 414 may
correspond to various characteristics such as, for example, a time,
hour, day, week, month, year, etc. The time attribute may also
reference a recency (e.g. "last," "last 10," "most recent," "just
working on," etc.). For example, the implicit attributes 400 for
entry 41 indicate that a pages document was created by a first user
(e.g. user1) using the Pages application at 17:01 on 4/1/16. As
another example, the implicit attributes 400 for entry 42 indicate
that a pdf document was downloaded by a second user (e.g. user2)
using a Browser application at 18:45 on 4/7/16.
[0052] The example of FIG. 4B shows implicit attributes 400 that
may be created in the context where a user action may include a
recipient and/or sender such as when sending, receiving, or sharing
a content item. Accordingly, in one embodiment, in addition to
attributes for a user action 408, application 413 and time 414 as
described above, the implicit attributes 400 may also include a
sender 409, a sender handle 410 (e.g. an email address, contact
name, etc.), a recipient 411, and recipient handle 412. For
example, the implicit attributes 400 for entry 43 indicate that an
email was shared (or sent) by user1 (e.g. sender) to Bill (e.g.
recipient) using the mail application at 13:01 on 4/1/16. As
another example, the implicit attributes 400 for entry 44 indicate
that a picture was shared by Bill (e.g. sender) to user1 (e.g.
recipient) using a mail application (e.g. picture attached or
included with an email) at 9:11 on 4/3/16. Accordingly, as shown in
the examples of entries 43 and 44, the system may characterize
sending an email as well as sending a document as part of an email
(e.g. as an attachment). In another example of sharing a content
item, the implicit attributes 400 for entry 45 indicate that a
pages document was shared (or sent) by user1 (e.g. sender) to Jane
(e.g. recipient) using the Messages application at 7:15 on 4/4/16.
For example, such a user action may include sending the pages
document within the messages application during a communication
session (e.g. chat, video conference, call, etc.) between user1 and
Jane.
[0053] It should be noted that the attributes shown in FIGS. 4A-B
are merely examples, and other attributes may be created and stored
depending on the context and user actions. For example, any number
of attributes may be stored including user-defined attributes.
Moreover, these figures display a representation of the attributes
in table format, but the attributes may be stored in any suitable
format or configuration (e.g. as objects, flat file, etc.). In
addition, the attributes may be stored as part of an index, or as
part of the content item itself, or both as described above.
[0054] In one embodiment, because information such as various user
actions performed by a particular user may be considered private,
the system may encrypt or secure one or more attributes. For
example, in one embodiment, only the implicit attributes (e.g.
attributes 400) may be encrypted.
[0055] Returning to FIG. 3, once the system has stored attributes
characterizing various user actions (e.g. implicit attributes 400),
the system may perform a search for content items based on a
natural language query. In 320, the system may process a received
query (or search query) that may include a reference to a user
action. The query may be received from a user as an utterance (e.g.
voice-based input), or as a typed entry. As described above, the
query (and displayed results) may work in conjunction with a
digital assistant (e.g. digital assistant 132). When processing the
query, the system may determine whether the query includes one or
more particular types of references. In 325, the system may
identify one or more references in the search query using natural
language processing as described above. For example, in one
embodiment, the system may identify a reference to a user action.
As referred to herein, a reference may include one or more words,
phrases, clauses, etc. that may be associated with search criteria
or attributes stored by the system. Accordingly, the system may
identify references within a query to identify one or more
attributes that may be used to perform a search for content
items.
[0056] As mentioned, the system may identify particular types of
references within the query. One type of reference may include a
reference to a user action as described above. Accordingly, a
reference to a user action may include an utterance corresponding
to terms such as "shared, sent, copied, printed, moved, deleted,
modified, created, edited, saved, opened, downloaded, posted,
archived, played, transferred, captured," and like type terms. For
example, if a query is the phrase "show me the last document I
opened," the system may determine the utterance includes the term
"opened" referencing a user action (e.g. user action attribute
408) of opening a document.
[0057] The system may also identify various other references within
a query for natural language processing. For example, another type
of reference may include a reference to a content item. For
example, a reference to a content item may include an utterance
corresponding to any type of content item as described above. For
instance, a reference to a content item may include a term such as
"content item, document, file, spreadsheet, presentation, reminder,
note, appointment, paper, email, message, picture, image, song,
movie, playlist, video," etc. For instance, using the same example
of "Show me the last document I opened," the system may determine
the query includes the term "document" referencing a content item
(e.g. content item type attribute 406). In addition, the system may
determine whether the query or reference to the content item
includes an application specific type of content item such as a
pdf, word or word document, pages or pages document, etc.
Accordingly, the system may also detect a particular file type that
may be associated with a content item (e.g. "word document," or
"pages document," "pdf document," etc.).
[0058] In 330, the system may identify search criteria associated
with the one or more stored attributes (e.g. implicit attributes
400) characterizing the user action referenced. Using the example
above, the system may identify search criteria characterizing
opening a document based on identifying the term "opened" in the
query as described in 320. When determining search criteria
corresponding to attributes characterizing a user action, various
other references may also be used. For example, the type of content
item, a reference to a time period (e.g. day, month, etc.), recency
(e.g. most recent), user identifier, folder name, etc. may also be
used.
[0059] FIGS. 5A-5D are diagrams illustrating examples of processing
a query requesting content associated with a user action according
to an embodiment of the disclosure. As shown in FIGS. 5A-5D, a
query (or search query) 501 may include one or more words or
phrases. Accordingly, the system may parse the query 501 using the
speech-to-text and/or natural language processing as described
above, to identify one or more references 503. As described, one or
more references 503 may correspond to one or more search criteria
504.
[0060] As shown in the examples of FIGS. 5A-5B, the query may
relate to searching for content items based on a user action. For
example, as shown in FIG. 5A, the query "Display the documents I
most recently modified" may be parsed to determine one or more
search criteria 504. As shown in this example, the query may
include one or more references 503 such as the terms: "documents"
that may correspond to search criteria for a content item type
(e.g. content item type attribute 406=document); "most recently"
that may correspond to search criteria for a time (e.g. time
attribute 414=last/most recent); and "modified" that may correspond
to search criteria for a user action (e.g. user action attribute
408=modify).
[0061] As shown in another example of FIG. 5B, the query "Display
the document I last downloaded" may be processed to determine
search criteria 504 for a content item type (e.g. content item type
attribute 406=document), time (e.g. time attribute 414=last/most
recent), and user action (e.g. user action attribute 408=download).
Accordingly, the system may perform a search for content items
based on the determined search criteria 504.
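The reference-to-criteria mapping illustrated in FIGS. 5A-5B can be reduced to a toy keyword table. The tables and the whitespace tokenization below are simplified assumptions; a real implementation would use the full natural language processing described above.

```python
# Toy version of the reference-to-criteria mapping of FIGS. 5A-5B.
# The keyword tables are simplified assumptions; a real system would
# use natural language processing rather than keyword lookup.

CONTENT_TYPES = {"document": "document", "documents": "document",
                 "email": "email", "emails": "email", "picture": "image"}
ACTIONS = {"modified": "modify", "downloaded": "download",
           "printed": "print", "created": "create"}
RECENCY = {"last", "recently", "recent"}

def query_to_criteria(query):
    """Map references in a query to search criteria (attribute=value)."""
    criteria = {}
    for word in query.lower().split():
        word = word.strip(".,")
        if word in CONTENT_TYPES:
            criteria["content_item_type"] = CONTENT_TYPES[word]
        elif word in ACTIONS:
            criteria["user_action"] = ACTIONS[word]
        elif word in RECENCY:
            criteria["time"] = "most_recent"
    return criteria

criteria = query_to_criteria("Display the document I last downloaded")
```

For the FIG. 5B query, the sketch yields the same three criteria described above: a content item type of document, a time of most recent, and a user action of download.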
[0062] As shown in the example of FIGS. 5C-5D, the query may also
relate to searching for content items shared with another user as a
user action. For example, as shown in FIG. 5C, the query "Show me
the last email from Bill" may be parsed to identify various types
of references 503 to determine one or more search criteria 504
including a sender and/or recipient of a shared content item. As
shown in this example, the term "email" may correspond to search
criteria for a content item type (e.g. content item type attribute
406=email), search criteria for a user action (e.g. user action
attribute 408=receive email), and/or search criteria for an
application (e.g. application attribute 413=email application). In
addition, the system may process various terms to determine a
sender and/or recipient of the content item. When determining
whether a user or a reference to a user is a sender or a recipient,
the system may evaluate (e.g. via natural language processing)
various terms associated with sending/receiving a content item
(e.g. "to," "from," "sent," "received," etc.). For example, the
system may determine the terms "from" and/or "Bill" correspond to
search criteria for a sender (e.g. sender attribute 409=Bill), and
accordingly, the recipient of the email is the current user (e.g.
recipient attribute 411=current user). In some embodiments, the
current user may be the user currently logged-on to a device,
and/or a particular user account currently associated with the
device. In this example, the system may determine that the user is
being referenced by the term "me," and accordingly, the term "from"
may be used in conjunction with the user identifier "Bill," to
determine that the current user is the recipient and that Bill is
the sender of the content item.
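The directional reasoning of FIGS. 5C-5D can be sketched as a small heuristic: the term following "from" names the sender (the current user is then the recipient), and the term following "to" names the recipient (the current user is then the sender). This token-adjacency heuristic is a simplifying assumption standing in for the natural language processing described above.

```python
# Sketch of resolving sender vs. recipient from directional terms, as
# in the "from Bill" / "sent to Bill" examples above. The adjacency
# heuristic is a simplifying assumption.

def resolve_direction(query, current_user):
    """Return (sender, recipient) based on "from"/"to" and a named user."""
    tokens = query.rstrip(".").split()
    lowered = [t.lower() for t in tokens]
    if "from" in lowered:
        other = tokens[lowered.index("from") + 1]
        return other, current_user      # the other user sent it to me
    if "to" in lowered:
        other = tokens[lowered.index("to") + 1]
        return current_user, other      # I sent it to the other user
    return None, None

sender, recipient = resolve_direction(
    "Show me the last email from Bill", "user1")
```

Applied to the FIG. 5D query "Show me the emails sent to Bill in April," the same sketch would make the current user the sender and Bill the recipient.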
[0063] As shown in another example in FIG. 5D, the query 501 may
include the phrase "Show me the emails sent to Bill in April."
Accordingly, in a similar manner as described above, the term
"email" may correspond to search criteria for a content item type
(e.g. content item type attribute 406=email), search criteria for a
user action (e.g. user action attribute 408=send email), and/or
search criteria for an application (e.g. application attribute
413=email application). The term "April" may correspond to search
criteria for time or time period (e.g. time attribute 414=send date
in April). In addition, the system may determine the terms "sent
to" and/or "Bill" correspond to search criteria for a recipient
(e.g. recipient attribute 411=Bill), and accordingly, the sender of
the email is the current user (e.g. sender attribute 409=current
user).
[0064] It should be noted that although the above examples use a
term referencing a name as a user identifier (e.g. "Bill"), other
suitable identifiers may also be used (e.g. account name, alias,
email address, contact name, group name, family relation such as
wife, father, etc.).
[0065] It should also be noted the reference terms shown in the
examples above are provided as simplified examples. The system may
use various other mechanisms for natural language processing to
determine a particular set of search criteria within a search
query. For example, the system may use various disambiguation
techniques for disambiguating various terms including pronouns such
as "me," "my," "I," etc., in combination with terms associated with
a sender or recipient such as "to/send" or "from/receive" to
determine whether a particular user is a sender or recipient of a
content item.
[0066] In addition, in some embodiments, the system may also
provide multiple results sets based on the type of query. For
example, particular queries may provide multiple interpretations
and the system may account for such circumstances. For instance,
the query "Show me all my emails read yesterday" may be interpreted
in multiple ways, including emails that are read by the user and
were received yesterday (e.g. isRead=Yes && date=yesterday),
as well as emails that were read by the user yesterday (e.g.
userRead=yesterday). In another example, the system may provide
multiple results based on a requested type of content item. For
instance, the query "Messages received from Bill" may include
content item types such as an email, as well as content item types
such as chat messages, and/or sms messages, and the like.
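The multiple-interpretation behavior of paragraph [0066] can be sketched as returning every plausible criteria set for a known-ambiguous query. The interpretation records below reuse the isRead/userRead readings from the example above but are otherwise illustrative assumptions.

```python
# Sketch of producing multiple result sets for an ambiguous query, as
# in the "emails read yesterday" example above. The interpretation
# records are illustrative assumptions.

def interpretations(query):
    """Return every plausible criteria set for a known-ambiguous query."""
    results = []
    if "read" in query and "yesterday" in query:
        # Reading 1: emails that are read AND were received yesterday.
        results.append({"is_read": True, "received": "yesterday"})
        # Reading 2: emails the user read yesterday, whenever received.
        results.append({"user_read": "yesterday"})
    return results

options = interpretations("Show me all my emails read yesterday")
```

A system accounting for such circumstances could run a search per interpretation and present each result set, rather than silently committing to one reading.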
[0067] Returning once again to FIG. 3, in 340, the system may
identify content items associated with the one or more attributes
corresponding to the search criteria identified in 330. In one
embodiment, the content items may be identified based on performing
a search of content items. Accordingly, the system may identify
particular content items by searching an index of metadata for
attributes corresponding to the determined search criteria. For
example, the system may filter content items based on various
attribute values as described above.
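The search of step 340 can be sketched as filtering an index of metadata by the determined search criteria. The index layout below is an assumption modeled on the implicit attributes of FIGS. 4A-4B.

```python
# Sketch of step 340: filtering an index of metadata by the determined
# search criteria. The index layout is an assumption modeled on the
# implicit attributes of FIGS. 4A-4B.

def search_index(index, criteria):
    """Return index entries whose attributes match every criterion."""
    return [entry for entry in index
            if all(entry.get(attr) == value
                   for attr, value in criteria.items())]

index = [
    {"content_item_id": 41, "content_item_type": "document",
     "user_action": "create", "user": "user1"},
    {"content_item_id": 42, "content_item_type": "document",
     "user_action": "download", "user": "user2"},
]
matches = search_index(index, {"user_action": "download"})
```

Adding further criteria (e.g. a user or time attribute) simply narrows the match set, which is the filtering behavior described above.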
[0068] As described above, some embodiments may work in conjunction
with a digital assistant. FIGS. 6A-6D are diagrams illustrating an
example interface 60 for providing search results for content items
associated with a user action as part of an interaction with a
digital assistant (e.g. digital assistant 132) according to an
embodiment of the disclosure. As shown in the example of FIG. 6A,
an operating environment may allow a user to initiate the digital
assistant with a menu item 61, or button, icon, etc. Accordingly,
in response to initiating the digital assistant, the digital
assistant may provide a visual and/or voice-based prompt 62 such as
"What can I help you with?" In response, the user may provide a
search query 63 as a voice-based input such as "Display the
document I last downloaded" that may also be displayed using
speech-to-text processing as described above. Once a digital
assistant (or system) receives the search query 63, the system may
process the search query 63 and perform a search for content items
as described above. Accordingly, as shown in FIG. 6B, the digital
assistant may display search results, which in this case is a pdf
document 64 (i.e. the last document downloaded by the user).
[0069] As another example, FIG. 6C shows a search query 63 for
content items shared with another user such as "Show me my emails
from Bill in April." Accordingly, as shown, the digital assistant
may display a set of search results, which in this case are a set
of emails 66 (i.e. emails received from Bill in April).
[0070] In addition, once an initial set of search results is
displayed, the user may provide additional filtering terms. For
example, the filtering terms may include a further date
specification (e.g. "Show me only the emails before April 15th"),
subject matter (e.g. "Show me only the emails that include report
summary in the subject line"), characteristics of the content item
(e.g. "Show me only emails with attachments"), file location ("Show
me documents saved in my documents folder"), and any other
filtering terms that may correspond to one or more attributes
stored as metadata for content items.
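The incremental filtering of paragraph [0070] can be sketched as applying additional attribute filters to an existing result set, e.g. for a follow-up such as "Show me only emails with attachments." The result fields below are illustrative assumptions.

```python
# Sketch of narrowing an initial result set with follow-up filtering
# terms, e.g. "Show me only emails with attachments." The result
# fields are illustrative assumptions.

def refine(results, **filters):
    """Apply additional attribute filters to an existing result set."""
    return [r for r in results
            if all(r.get(k) == v for k, v in filters.items())]

initial = [
    {"subject": "report summary", "has_attachment": True},
    {"subject": "lunch", "has_attachment": False},
]
refined = refine(initial, has_attachment=True)
```

Because each refinement operates on the previous result set, the user can keep narrowing the displayed results by date, subject matter, file location, or any other attribute stored as metadata.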
[0071] In some embodiments, the digital assistant may also be
initiated within a particular application via an API. For example,
the digital assistant may be initiated from within an application
(including a third-party application) and provide contextual search
results. For example, if the digital assistant is initiated from an
email application, search results in response to a query that
includes a request for "files" may include a set of emails (as a
contextual response to the term file) that may be displayed within
the email application. Similarly, particular types of documents may
be provided depending on the application from which the digital
assistant is initiated such as a particular document type based on
the word processing application (e.g. pages, pdf, word, etc.).
[0072] FIG. 7 is an example flow diagram illustrating a method of
providing search results based on a natural language query
according to an embodiment of the disclosure. Process 700 may be
performed by a system or device as described herein (e.g. system
100, client device 110, or server 120).
[0073] As shown, in 701, the system may detect a user action
performed with a content item. As described herein, the user action
may include various actions that may be performed with a content
item (e.g. sending, sharing, printing, downloading, modifying,
etc.).
[0074] In 702, the system may store one or more attributes (e.g.
implicit attributes 400) characterizing the user action performed.
For example, the user action performed may include sharing (or
sending) the content item between a first user and a second user,
and accordingly, the one or more attributes may include an
identifier for the first user (e.g. sender attribute 409) and an
identifier for the second user (e.g. recipient attribute 411). In
one embodiment, sending the content item may include sending the
content item using an application, and the attributes
characterizing the user action may further include an identifier
for the application (e.g. application attribute 413), as well as a
time or time stamp of when the content was sent (e.g. time
attribute 414). In one embodiment, the attributes may be stored as
metadata (e.g. metadata 160), which may be stored as part of the
content item and/or part of an index.
[0075] In 703, the system may receive a search query (e.g. query
501) for one or more content items. In 704, the system may identify
one or more references (e.g. references 503) within the search
query including at least a reference to the user action.
Accordingly, the system may use natural language processing to
determine one or more words, phrases, clauses, etc. that may
correspond to search criteria. For example, the reference to the
user action may include an utterance of a term referencing sharing,
sending, or emailing. The search query may further include a
reference to a recipient such as an identifier (e.g. name, email
address, user ID, etc.). In addition, the search query may further
include a reference to the content item as described above ("file,"
"document," "email," etc.).
[0076] In 705, the system may identify search criteria (e.g. search
criteria 504) associated with the one or more references. In one
embodiment, the search criteria may correspond to one or more
attributes stored as metadata. In 706, the system may identify
content items based on performing a search for content items
associated with the attributes corresponding to the search
criteria. In one embodiment, the system may perform the search by
searching one or more indexes (e.g. indexes 150 and/or 155).
Accordingly, the system may display the identified content items as
search results in response to the search query as described
above.
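Steps 705 and 706 can be sketched as a match of search criteria against indexed attributes. The index contents and attribute names below are hypothetical stand-ins for the metadata index described in the disclosure.

```python
# Each indexed item maps to its stored attributes (names are illustrative).
index = {
    "budget.xlsx": {"recipient": "Bill", "action": "send",
                    "type": "spreadsheet"},
    "notes.txt":   {"recipient": "Ann", "action": "share",
                    "type": "document"},
}

def search(criteria: dict) -> list[str]:
    """Return items whose stored attributes match every search criterion."""
    return [item for item, attrs in index.items()
            if all(attrs.get(k) == v for k, v in criteria.items())]

# References identified in "the last spreadsheet I sent to Bill" map to:
criteria = {"action": "send", "recipient": "Bill", "type": "spreadsheet"}
results = search(criteria)  # ["budget.xlsx"]
```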
[0077] As described above, in a second aspect, a system may allow a
user to perform a search for content items that the user associates
with a particular application.
[0078] FIG. 8 is a process flow diagram illustrating a process of
identifying content items associated with a particular application
according to an embodiment of the disclosure. Process 800 may be
performed, for example, by one or more components of system
100.
[0079] In 810, the system may detect installation of a new
application on a client device. For example, the system may detect
a user has installed a new application (including a third-party
application), for example, from an application store (e.g. app
store). The application may be installed from an installation
package, which may include various types of files. In one
embodiment, an application may be developed as an application
bundle that includes a structured format for an application. For
example, the bundle may include executables, resource files, and
other support files, along with one or more manifest type files. In
one embodiment, a manifest file (e.g. information property list
file or info.plist file) may be a structured file that includes
configuration information for the application. For example, the
system may rely on the presence of this file to identify relevant
information about a particular application. In one embodiment, the
system may access such a file to determine content item types the
application supports.
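Reading such a manifest-type file can be sketched with Python's standard `plistlib`. The `CFBundleDocumentTypes` and `LSItemContentTypes` keys below follow the conventions of information property list files, but this sketch does not depend on any particular schema, and the application and type names are hypothetical.

```python
import plistlib

# A minimal information-property-list-style manifest for "NewApp".
manifest_bytes = plistlib.dumps({
    "CFBundleName": "NewApp",
    "CFBundleDocumentTypes": [
        {"LSItemContentTypes": ["com.newapp.sheet"],
         "CFBundleTypeRole": "Editor"},
        {"LSItemContentTypes": ["com.adobe.pdf"],
         "CFBundleTypeRole": "Viewer"},
    ],
})

def supported_types(data: bytes) -> list[str]:
    """Collect every content item type the manifest declares support for."""
    info = plistlib.loads(data)
    types = []
    for doc in info.get("CFBundleDocumentTypes", []):
        types.extend(doc.get("LSItemContentTypes", []))
    return types

print(supported_types(manifest_bytes))  # ['com.newapp.sheet', 'com.adobe.pdf']
```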
[0080] In 815, the system may determine an identifying name for the
application. In one embodiment, the system may be able to recognize
the identifying name from a server or other source. For example, a
server may work in conjunction with a digital assistant (e.g.
digital assistant 132) to recognize identifying names of newly
installed applications by referencing an application ID or other
form of unique identifier for the application. As another example,
the identifying name may be provided and managed by an application
store or service. In yet another example, the system may determine
the identifying name from local resources such as accessing the
manifest type file or other form of indicator. In one embodiment,
the system may determine the identifying name of an application in
response to detecting the installation of the application.
Accordingly, the system may now be aware of the application's
existence on a device. As a result, the system may now recognize
the identifying name of the particular application in utterances
received by the digital assistant.
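The name-resolution fallbacks described above can be sketched as follows. The server registry, bundle identifiers, and manifest keys here are illustrative assumptions, not a specification of any real lookup service.

```python
# Hypothetical server-side registry; a real system might query a server,
# an application store, or read the name from the app's manifest file.
server_names = {"com.example.newapp": "NewApp"}

def identifying_name(bundle_id: str, local_manifest: dict) -> str:
    """Resolve an application's identifying name, preferring a server-side
    registry and falling back to the locally installed manifest."""
    if bundle_id in server_names:
        return server_names[bundle_id]
    return local_manifest.get("CFBundleDisplayName",
                              local_manifest.get("CFBundleName", bundle_id))

print(identifying_name("com.example.newapp", {}))                       # NewApp
print(identifying_name("com.other.app", {"CFBundleName": "OtherApp"}))  # OtherApp
```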
[0081] In 820, the system may process a received query (or search
query) for content items associated with a particular application.
The query may be received from a user as an utterance (e.g.
voice-based input), or as a typed entry. As described above, the
query (and displayed results) may work in conjunction with a
digital assistant (e.g. digital assistant 132). When processing the
query, the system may determine whether the query includes one or
more particular types of references. For example, the system may
identify one or more references to the particular application (e.g.
reference to the identifying name) and a reference to content items
in the search query using natural language processing as described
above. As referred to herein, a reference may include one or more
words, phrases, clauses, etc. that may be associated with search
criteria or attributes stored by the system. Accordingly, the
system may identify references within a query to identify one or
more attributes that may be used to perform a search for content
items.
[0082] FIG. 9 is a diagram illustrating an example of processing a
query requesting content associated with a particular application
according to an embodiment of the disclosure. As shown in FIG. 9, a
query (or search query) 901 may include one or more words or
phrases. Accordingly, the system may parse the query 901 using the
speech-to-text and/or natural language processing as described
above, to identify one or more references 903. As described, one or
more references 903 may correspond to one or more search criteria
904. As shown in this example, the query "Show me my NewApp files"
may be parsed to determine one or more search criteria 904. As
shown, the query may include one or more references 903 such as the
term "NewApp," which may be recognized as an identifying name
of the NewApp application. In addition, the term "files" may
correspond to a reference to content. Accordingly, in a similar
manner as described above, the terms "NewApp" and "files" may
correspond to search criteria for a content item type (e.g. content
item type attribute=NewApp Sheet).
[0083] In 825, the system may determine one or more content item
types associated with the application. In one embodiment, the
system may determine content item types associated with the
application in response to receiving the query in 820. For example,
the system may identify a content item type only after a user
requests such content in a query. The system may thus make such a
determination at the time of the search. In such embodiments, the
system may not be required to maintain a database of corresponding
associations and may instead make the determination on an as-needed
basis. Moreover, by making the determination on an as-needed basis,
the system need not be concerned with which applications are
currently installed on the device or which have been removed.
[0084] In one embodiment, content items associated with a
particular application may include content items the application
has authority over (or content items belonging to the particular
application), or a particular content item type specific to a
particular application (e.g. a content item type supported only by
the particular application). For example, a user may associate
particular content item types with a particular application.
Accordingly, the system may respond to queries such as "Show me my
`NewApp` items," where "NewApp" is the name (or identifying name)
of the particular application. Typically, multiple applications may
support a content item type in some manner (e.g. merely read or
open), but those content item types may not be of interest to a
user. Instead, a user may only be interested in those content items
that the user associates with a particular application, such as
those the application has authority over. Accordingly, in one
embodiment, the system may determine content item types that are
supported in multiple ways by the application to determine which
content item types the application has authority over. In one
embodiment, authority over may include content item types the
application may create (e.g. not merely read or open). In addition,
in one embodiment, authority over may include content item types
the application may open (e.g. open, read, import, etc.), and also
content item types the application may create (e.g. create, export,
write, etc.). For example, the content item types the application
may create may be specified by a unique identifier for a content
item type such as a Universal Type Identifier (UTI). In one
embodiment, the identifier may identify a content item type
specific to the application and usable by other applications.
[0085] In one embodiment, the system may access one or more files
(e.g. a manifest file) to determine the content items a particular
application may support. In one embodiment, the one or more files
may include a first list indicating which content item types the
application supports reading or opening, and a second list,
different from the first list, indicating which content item types
the application supports creating or exporting. In addition, in one
embodiment, the system may perform such a determination by
accessing only the one or more files stored locally on the device
(e.g. client device 110) and without accessing a file on another
device or server (e.g. server 120). For example, in one embodiment,
when a new application (e.g. "NewApp") is installed, it may be
bundled with a manifest type file. Accordingly, the system may
determine the content items the application has authority over
without having to query a server, which may include a database that
would need to be periodically updated to determine which files an
application has authority over. Accordingly, in one embodiment,
such information may be determined at the time of installation,
without having to communicate with a server.
[0086] FIG. 10 is a diagram illustrating an example of capabilities
supported by a particular application (e.g. "NewApp") according to
an embodiment of the disclosure.
[0087] As shown, a file 90 (e.g. manifest type file, information
property list file, etc.) indicates data (or content) created by
the particular application NewApp. As shown, the file 90 may
indicate a content item type 906, a corresponding UTI 921 (or
identifier), along with other information such as a file extension
922 for a particular application 920. As shown, NewApp may be
capable of creating various data (or content, or content item
types), but all of these may not be of interest to a user. For
example, a data file or a log file may be used internally by the
application or system and would not be the content typically
accessed or used by a user. Accordingly, in order to determine
which content item types would be of interest to a user, the
system may cross-reference these content item types with content
item types the application may support in various ways. For
example, file 91 (e.g. manifest type file, information property
list file, etc.) indicates capabilities supported for various
content item types 906 by the NewApp application. In this example,
the capabilities include the ability to open 908 (e.g. read,
import, etc.), edit 909, and save 910, etc. In this example, the
system may determine an application has authority over content item
types it may create as well as either open 908 or edit 909, which
in this example is a NewApp Sheet 92. In addition, as shown in this
example, the NewApp application may support either opening or
editing other content item types (e.g. pages and pdf documents),
but does not create such content item types, and thus, is not
deemed to have authority over these content items. It should be
noted that when determining the content item types the application
supports, various lists may be cross-referenced, including those
stored in multiple files as shown in this example, or those
stored in a single file.
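The cross-referencing in the example of FIG. 10 reduces to a set intersection: the types the application can create, intersected with those it can also open or edit. The capability lists below are hypothetical values standing in for files 90 and 91.

```python
# Capability lists as might be read from files 90 and 91 in FIG. 10
# (type identifiers are illustrative, not taken from a real manifest).
creates = {"com.newapp.sheet", "com.newapp.data", "com.newapp.log"}
opens   = {"com.newapp.sheet", "com.apple.pages", "com.adobe.pdf"}
edits   = {"com.newapp.sheet", "com.apple.pages"}

def authority_types(creates: set, opens: set, edits: set) -> set:
    """A type the app has authority over is one it can create AND can
    also either open or edit, per the cross-referencing described above."""
    return creates & (opens | edits)

print(authority_types(creates, opens, edits))  # {'com.newapp.sheet'}
```

Internal data and log file types drop out because they are never opened or edited, while pages and pdf types drop out because the application does not create them; only the NewApp Sheet survives.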
[0088] It should be noted that FIG. 10 is merely an example and the
system may cross-reference various other combinations of actions
performed (e.g. edit, import, output, share, etc.) with content
item types that are supported by the application. Such other
various combinations are also contemplated by one or more
embodiments of the system. Moreover, the figure merely displays a
representation of the lists in table format, but the attributes may
be stored in any suitable format or configuration (e.g. as objects,
flat file, etc.). In addition, the supported content item types may
be stored as part of an index.
[0089] Returning once again to FIG. 8, in 830, the system may
perform a search for content items that are associated with a
particular application (e.g. an identified content item type the
particular application has authority over). For instance, based on
the example of FIGS. 9 and 10, the system may perform a search for
NewApp Sheets (e.g. content item types NewApp has authority over).
In one embodiment, when performing the search, the system may
identify particular content items by searching an index of metadata
for attributes corresponding to the determined search criteria
(e.g. content item type=NewApp Sheet).
[0090] As described above, some embodiments may work in conjunction
with a digital assistant. FIGS. 11A-11B are diagrams illustrating
an example interface 650 for providing search results for content
items associated with a particular application as part of an
interaction with a digital assistant (e.g. digital assistant 132).
As shown in the example of FIG. 11A, an operating environment may
allow a user to initiate the digital assistant with a menu item
651, or button, icon, etc. Accordingly, in response to initiating
the digital assistant, the digital assistant may provide a visual
and/or voice-based prompt 652 such as "What can I help you with?"
In response, the user may provide a search query 655 as a
voice-based input such as "Show me my NewApp files" that may also
be displayed using speech-to-text processing as described above.
Once a digital assistant (or system) receives the search query, the
system may process the search query and perform a search for
content items as described above. Accordingly, as shown in FIG.
11B, the digital assistant may display search results, which in
this case are a set of NewApp files 656 (e.g. NewApp Sheets)
associated with the NewApp application.
[0091] In addition, once an initial set of search results is
displayed, the user may provide additional filtering terms. For
example, the filtering terms may include a further date
specification (e.g. "Show me only the documents saved before April
15th"), subject matter (e.g. "Show me only the documents that
include the title report summary"), characteristics of the content
item (e.g. "Show me only content items modified recently"), file
location ("Show me documents saved in my documents folder"), and
any other filtering terms that may correspond to one or more
attributes stored as metadata for content items.
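The follow-up refinement described above can be sketched as reapplying a predicate to the existing result set. The result attributes and dates here are hypothetical.

```python
from datetime import date

# Initial result set with stored attributes (illustrative values only).
results = [
    {"name": "report.sheet", "saved": date(2017, 4, 10),
     "folder": "Documents"},
    {"name": "draft.sheet", "saved": date(2017, 4, 20),
     "folder": "Desktop"},
]

def refine(results: list, predicate) -> list:
    """Apply one follow-up filtering term to an existing result set."""
    return [r for r in results if predicate(r)]

# "Show me only the documents saved before April 15th"
filtered = refine(results, lambda r: r["saved"] < date(2017, 4, 15))
print([r["name"] for r in filtered])  # ['report.sheet']
```

Each additional filtering term composes the same way, e.g. a further `refine(filtered, lambda r: r["folder"] == "Documents")` for a file-location term.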
[0092] FIG. 12 is an example flow diagram illustrating a method of
providing search results of content items associated with a
particular application based on a natural language query according
to an embodiment of the disclosure. Process 850 may be performed by
a system or device as described herein (e.g. system 100, client
device 110, or server 120).
[0093] In 851, the system may recognize that a query (or search
query) includes a request for content of the application. In one
embodiment, the search query may be received by a search
application. For example, the search query may be received as part
of an interaction with a digital assistant. In one embodiment,
recognizing that a query includes a request for content of the
application may include determining that the received query includes
at least a reference to an identifying name of the application. In
addition, the system may also recognize that the query includes a
reference to content items.
[0094] In 852, the system may identify content associated with an
application. In one embodiment, the system may identify the
associated content in response to recognizing the query includes a
request for content of the application. In one embodiment, content
associated with an application may include a content item type the
application has authority over. In one embodiment, determining a
content item type the application has authority over may include
determining that the content item type is included in one or more
lists of content item type capabilities for the application. In one
embodiment, the
system may determine content item types the application is capable
of creating. In another embodiment, the system may determine
content item types the application is capable of both opening or
editing, and creating or exporting. In one embodiment, the content
item types the application supports creating or exporting may be
identified with an identifier such as a Universal Type Identifier
(UTI). In one embodiment, the identifier may identify a content
item type specific to the application and usable by other
applications. For example, the content item type specific to the
application may be a newly added content item type the system may
be capable of processing (e.g. opening, reading, editing,
exporting, etc.) in response to the installation of the particular
application.
[0095] In 853, the system may perform a search for content items
having the identified content item type. Accordingly, the content
items may be provided to the search application as search results
for the query.
[0096] FIG. 13 is a block diagram illustrating an example computing
system, which may be used in conjunction with one or more of the
embodiments of the disclosure. For example, computing system 1200
(or system, or computing device, or device) may represent any of
the systems (e.g. system 100), or devices described herein (e.g.
client device 110 or server 120) that perform any of the processes,
operations, or methods of the disclosure. Note that while the
computing system illustrates various components, it is not intended
to represent any particular architecture or manner of
interconnecting the components as such details are not germane to
the present disclosure. It will also be appreciated that other
types of systems that have fewer or more components than shown may
also be used with the present disclosure.
[0097] As shown, the computing system 1200 may include a bus 1205
which may be coupled to a processor 1210, ROM (Read Only Memory)
1220, RAM (or volatile memory) 1225, and storage (or non-volatile
memory) 1230. The processor 1210 may retrieve stored instructions
from one or more of the memories 1220, 1225, and 1230 and execute
the instructions to perform processes, operations, or methods
described herein. These memories represent examples of a
non-transitory machine-readable medium or storage containing
instructions which when executed by a computing system (or a
processor), cause the computing system (or processor) to perform
operations, processes, or methods described herein. The RAM 1225
may be implemented as, for example, dynamic RAM (DRAM), or other
types of memory that require power continually in order to refresh
or maintain the data in the memory. Storage 1230 may include, for
example, magnetic, semiconductor, tape, optical, removable,
non-removable, and other types of storage that maintain data even
after power is removed from the system. It should be appreciated
that storage 1230 may be remote from the system (e.g. accessible
via a network).
[0098] A display controller 1250 may be coupled to the bus 1205 in
order to receive display data to be displayed on a display device
1255, which can display any one of the user interface features or
embodiments described herein and may be a local or a remote display
device. The computing system 1200 may also include one or more
input/output (I/O) components 1265 including mice, keyboards, touch
screen, network interfaces, printers, speakers, and other devices.
Typically, the input/output components 1265 are coupled to the
system through an input/output controller 1260.
[0099] Modules 1270 (or components, units, or logic) may represent
any of the modules described above, such as, for example, digital
assistant 132, applications 122, search module 124, metadata
processing module 140, and crawler 157 (and related modules, and
sub-modules). Modules 1270 may reside, completely or at least
partially, within the memories described above, or within a
processor during execution thereof by the computing system. In
addition, modules 1270 can be implemented as software, firmware, or
functional circuitry within the computing system, or as
combinations thereof.
[0100] The present disclosure recognizes that the use of personal
information data, in the present technology, can be used to the
benefit of users. For example, the personal information data can be
used to deliver targeted content that is of greater interest to the
user. Accordingly, use of such personal information data enables
calculated control of the delivered content. Further, other uses
for personal information data that benefit the user are also
contemplated by the present disclosure.
[0101] The present disclosure further contemplates that the
entities responsible for the collection, analysis, disclosure,
transfer, storage, or other use of such personal information data
will comply with well-established privacy policies and/or privacy
practices. In particular, such entities should implement and
consistently use privacy policies and practices that are generally
recognized as meeting or exceeding industry or governmental
requirements for maintaining personal information data private and
secure. For example, personal information from users should be
collected for legitimate and reasonable uses of the entity and not
shared or sold outside of those legitimate uses. Further, such
collection should occur only after receiving the informed consent
of the users. Additionally, such entities would take any needed
steps for safeguarding and securing access to such personal
information data and ensuring that others with access to the
personal information data adhere to their privacy policies and
procedures. Further, such entities can subject themselves to
evaluation by third parties to certify their adherence to widely
accepted privacy policies and practices.
[0102] Despite the foregoing, the present disclosure also
contemplates embodiments in which users selectively block the use
of, or access to, personal information data. That is, the present
disclosure contemplates that hardware and/or software elements can
be provided to prevent or block access to such personal information
data. For example, in the case of advertisement delivery services,
the present technology can be configured to allow users to select
to "opt in" or "opt out" of participation in the collection of
personal information data during registration for services. In
another example, users can select not to provide location
information for targeted content delivery services. In yet another
example, users can select to not provide precise location
information, but permit the transfer of location zone
information.
[0103] In the foregoing specification, example embodiments of the
disclosure have been described. It will be evident that various
modifications may be made thereto without departing from the
broader spirit and scope of the disclosure as set forth in the
following claims. The specification and drawings are, accordingly,
to be regarded in an illustrative sense rather than a restrictive
sense.
* * * * *