U.S. patent application number 15/319306 was published by the patent office on 2017-05-18 for methods, systems, and media for performing personalized actions on mobile devices associated with a media presentation device.
This patent application is currently assigned to Google Inc. The applicant listed for this patent is Google Inc. Invention is credited to Raunaq Shah and Matt Van Der Staay.
Publication Number | 20170139657 |
Application Number | 15/319306 |
Family ID | 53539942 |
Publication Date | 2017-05-18 |
United States Patent Application | 20170139657 |
Kind Code | A1 |
Shah; Raunaq; et al. | May 18, 2017 |
METHODS, SYSTEMS, AND MEDIA FOR PERFORMING PERSONALIZED ACTIONS ON
MOBILE DEVICES ASSOCIATED WITH A MEDIA PRESENTATION DEVICE
Abstract
Methods, systems, and media for performing personalized actions
on mobile devices associated with a media presentation device are
provided. In some implementations, the method comprises: causing a
slideshow of images to be presented on a media presentation device;
detecting a presence of a first mobile device associated with a
first user and a second mobile device associated with a second
user, wherein the first mobile device and the second mobile device
are in a proximity of the media presentation device; receiving
first user preferences associated with the first user and second
user preferences associated with the second user in response to the
detection; receiving metadata associated with an image from the
slideshow of images that is currently being presented on the media
presentation device; determining a first action for the first
mobile device based on the first user preferences associated with
the first mobile device and the metadata associated with the image
currently being presented on the media presentation device and a
second action based on the second user preferences associated with
the second mobile device and the metadata associated with the image
currently being presented on the media presentation device, wherein
the first action is a different action than the second action; and
causing the first action to be performed by the first mobile device
and the second action to be performed by the second mobile
device.
Inventors: | Shah; Raunaq; (San Francisco, CA); Van Der Staay; Matt; (San Jose, CA) |
Applicant: | Google Inc. (Mountain View, CA, US) |
Assignee: | Google Inc. (Mountain View, CA) |
Family ID: | 53539942 |
Appl. No.: | 15/319306 |
Filed: | June 24, 2015 |
PCT Filed: | June 24, 2015 |
PCT No.: | PCT/US15/37517 |
371 Date: | December 15, 2016 |
Related U.S. Patent Documents
Application Number | Filing Date | Patent Number |
62016421 | Jun 24, 2014 | |
62016580 | Jun 24, 2014 | |
62016575 | Jun 24, 2014 | |
62016428 | Jun 24, 2014 | |
Current U.S. Class: | 1/1 |
Current CPC Class: | G06F 3/14 20130101; H04L 67/34 20130101; G06F 3/0481 20130101; G06F 16/435 20190101; G06F 16/4393 20190101; H04L 67/306 20130101; H04L 65/607 20130101; G06F 16/285 20190101; G06F 16/438 20190101 |
International Class: | G06F 3/14 20060101 G06F003/14; G06F 17/30 20060101 G06F017/30 |
Claims
1. A method for performing personalized actions on mobile devices,
the method comprising: causing a slideshow of images to be
presented on a media presentation device; detecting a presence of a
first mobile device associated with a first user and a second
mobile device associated with a second user, wherein the first
mobile device and the second mobile device are in a proximity of
the media presentation device; receiving first user preferences
associated with the first user and second user preferences
associated with the second user in response to the detection;
receiving metadata associated with an image from the slideshow of
images that is currently being presented on the media presentation
device; determining a first action for the first mobile device
based on the first user preferences associated with the first
mobile device and the metadata associated with the image currently
being presented on the media presentation device and a second
action based on the second user preferences associated with the
second mobile device and the metadata associated with the image
currently being presented on the media presentation device, wherein
the first action is a different action than the second action; and
causing the first action to be performed by the first mobile device
and the second action to be performed by the second mobile
device.
2. The method of claim 1, further comprising: generating a group
identifier; associating the group identifier with the first mobile
device and the second mobile device; generating combined user
preferences based on the first user preferences and the second user
preferences; and associating the combined user preferences with the
group identifier, wherein each image in the slideshow of images has
been selected based on the combined user preferences.
3. The method of claim 1, further comprising determining a third
action to be performed by the first mobile device and a fourth
action to be performed by the second mobile device in response to
causing a second image from the slideshow of images to be presented
on the media playback device.
4. The method of claim 1, wherein detecting the presence further
comprises receiving a first indication that the first mobile device
is associated with the media presentation device over a wireless
communications network and a second indication that the second
mobile device is associated with the media presentation device over
the wireless communications network.
5. The method of claim 1, wherein detecting the presence further
comprises: receiving location information indicating relative
proximities of the first mobile device and the second mobile device
with the media presentation device; and determining that the first
mobile device and the second mobile device are within a
predetermined proximity of the media presentation device based on
the received location information.
6. The method of claim 1, further comprising determining a third
action to be performed by the first mobile device in response to
determining that a predetermined amount of time has elapsed since
the first action was performed by the first mobile device.
7. A system for performing personalized actions on mobile devices,
the system comprising: a hardware processor that is configured to:
cause a slideshow of images to be presented on a media presentation
device; detect a presence of a first mobile device associated with
a first user and a second mobile device associated with a second
user, wherein the first mobile device and the second mobile device
are in a proximity of the media presentation device; receive first
user preferences associated with the first user and second user
preferences associated with the second user in response to the
detection; receive metadata associated with an image from the
slideshow of images that is currently being presented on the media
presentation device; determine a first action for the first mobile
device based on the first user preferences associated with the
first mobile device and the metadata associated with the image
currently being presented on the media presentation device and a
second action based on the second user preferences associated with
the second mobile device and the metadata associated with the image
currently being presented on the media presentation device, wherein
the first action is a different action than the second action; and
cause the first action to be performed by the first mobile device
and the second action to be performed by the second mobile
device.
8. The system of claim 7, wherein the hardware processor is further
configured to: generate a group identifier; associate the group
identifier with the first mobile device and the second mobile
device; generate combined user preferences based on the first user
preferences and the second user preferences; and associate the
combined user preferences with the group identifier, wherein each
image in the slideshow of images has been selected based on the
combined user preferences.
9. The system of claim 7, wherein the hardware processor is further
configured to determine a third action to be performed by the first
mobile device and a fourth action to be performed by the second
mobile device in response to causing a second image from the
slideshow of images to be presented on the media playback
device.
10. The system of claim 7, wherein the hardware processor is
further configured to receive a first indication that the first
mobile device is associated with the media presentation device over
a wireless communications network and a second indication that the
second mobile device is associated with the media presentation
device over the wireless communications network.
11. The system of claim 7, wherein the hardware processor is
further configured to: receive location information indicating
relative proximities of the first mobile device and the second
mobile device with the media presentation device; and determine
that the first mobile device and the second mobile device are
within a predetermined proximity of the media presentation device
based on the received location information.
12. The system of claim 7, wherein the hardware processor is
further configured to determine a third action to be performed by
the first mobile device in response to determining that a
predetermined amount of time has elapsed since the first action was
performed by the first mobile device.
13. A non-transitory computer-readable medium containing computer
executable instructions that, when executed by a processor, cause
the processor to perform a method for performing personalized
actions on mobile devices, the method comprising: causing a
slideshow of images to be presented on a media presentation device;
detecting a presence of a first mobile device associated with a
first user and a second mobile device associated with a second
user, wherein the first mobile device and the second mobile device
are in a proximity of the media presentation device; receiving
first user preferences associated with the first user and second
user preferences associated with the second user in response to the
detection; receiving metadata associated with an image from the
slideshow of images that is currently being presented on the media
presentation device; determining a first action for the first
mobile device based on the first user preferences associated with
the first mobile device and the metadata associated with the image
currently being presented on the media presentation device and a
second action based on the second user preferences associated with
the second mobile device and the metadata associated with the image
currently being presented on the media presentation device, wherein
the first action is a different action than the second action; and
causing the first action to be performed by the first mobile device
and the second action to be performed by the second mobile
device.
14. The non-transitory computer-readable medium of claim 13,
wherein the method further comprises: generating a group
identifier; associating the group identifier with the first mobile
device and the second mobile device; generating combined user
preferences based on the first user preferences and the second user
preferences; and associating the combined user preferences with the
group identifier, wherein each image in the slideshow of images has
been selected based on the combined user preferences.
15. The non-transitory computer-readable medium of claim 13,
wherein the method further comprises determining a third action to
be performed by the first mobile device and a fourth action to be
performed by the second mobile device in response to causing a
second image from the slideshow of images to be presented on the
media playback device.
16. The non-transitory computer-readable medium of claim 13,
wherein the method further comprises receiving a first indication
that the first mobile device is associated with the media
presentation device over a wireless communications network and a
second indication that the second mobile device is associated with
the media presentation device over the wireless communications
network.
17. The non-transitory computer-readable medium of claim 13,
wherein the method further comprises: receiving location
information indicating relative proximities of the first mobile
device and the second mobile device with the media presentation
device; and determining that the first mobile device and the second
mobile device are within a predetermined proximity of the media
presentation device based on the received location information.
18. The non-transitory computer-readable medium of claim 13,
wherein the method further comprises determining a third action to
be performed by the first mobile device in response to determining
that a predetermined amount of time has elapsed since the first
action was performed by the first mobile device.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional
Patent Application No. 62/016,421, filed Jun. 24, 2014, U.S.
Provisional Patent Application No. 62/016,428, filed Jun. 24, 2014,
U.S. Provisional Patent Application No. 62/016,575, filed Jun. 24,
2014, and U.S. Provisional Patent Application No. 62/016,580, filed
Jun. 24, 2014, which are hereby incorporated by reference herein in
their entireties.
TECHNICAL FIELD
[0002] The disclosed subject matter relates to methods, systems,
and media for performing personalized actions on mobile devices
associated with a media presentation device.
BACKGROUND
[0003] Media presentation devices that present background content,
such as a slideshow of images, can access the content from a user
account by receiving and storing a username and password of the
user account. The username and password are used to retrieve
content associated with the account, which is then presented by the
media presentation device. However, this requires user credentials
to be stored by the media presentation device and is also limited
to presenting content of only one user.
[0004] While consuming media content being displayed on a
television device, a viewer of the media content is often
interested in information relating to the media content, such as
additional information about a location related to the media
content, information about a topic related to the media content,
etc. Moreover, multiple viewers that are consuming the media
content on the television may be interested in different types of
information relating to the media content.
[0005] Accordingly, new methods, systems, and media for associating
multiple users with a media presentation device are desirable.
SUMMARY
[0006] In accordance with some implementations of the disclosed
subject matter, mechanisms for performing personalized actions on
mobile devices associated with a media presentation device are
provided.
[0007] In accordance with some implementations of the disclosed
subject matter, a method for performing personalized actions on
mobile devices is provided, the method comprising: causing a
slideshow of images to be presented on a media presentation device;
detecting a presence of a first mobile device associated with a
first user and a second mobile device associated with a second
user, wherein the first mobile device and the second mobile device
are in a proximity of the media presentation device; receiving
first user preferences associated with the first user and second
user preferences associated with the second user in response to the
detection; receiving metadata associated with an image from the
slideshow of images that is currently being presented on the media
presentation device; determining a first action for the first
mobile device based on the first user preferences associated with
the first mobile device and the metadata associated with the image
currently being presented on the media presentation device and a
second action based on the second user preferences associated with
the second mobile device and the metadata associated with the image
currently being presented on the media presentation device, wherein
the first action is a different action than the second action; and
causing the first action to be performed by the first mobile device
and the second action to be performed by the second mobile
device.
[0008] In accordance with some implementations of the disclosed
subject matter, a system for performing personalized actions on
mobile devices is provided, the system comprising: a hardware
processor that is configured to: cause a slideshow of images to be
presented on a media presentation device; detect a presence of a
first mobile device associated with a first user and a second
mobile device associated with a second user, wherein the first
mobile device and the second mobile device are in a proximity of
the media presentation device; receive first user preferences
associated with the first user and second user preferences
associated with the second user in response to the detection;
receive metadata associated with an image from the slideshow of
images that is currently being presented on the media presentation
device; determine a first action for the first mobile device based
on the first user preferences associated with the first mobile
device and the metadata associated with the image currently being
presented on the media presentation device and a second action
based on the second user preferences associated with the second
mobile device and the metadata associated with the image currently
being presented on the media presentation device, wherein the first
action is a different action than the second action; and cause the
first action to be performed by the first mobile device and the
second action to be performed by the second mobile device.
[0009] In accordance with some implementations of the disclosed
subject matter, a non-transitory computer-readable medium
containing computer executable instructions that, when executed by
a processor, cause the processor to perform a method for performing
personalized actions on mobile devices is provided, the method
comprising: causing a slideshow of images to be presented on a
media presentation device; detecting a presence of a first mobile
device associated with a first user and a second mobile device
associated with a second user, wherein the first mobile device and
the second mobile device are in a proximity of the media
presentation device; receiving first user preferences associated
with the first user and second user preferences associated with the
second user in response to the detection; receiving metadata
associated with an image from the slideshow of images that is
currently being presented on the media presentation device;
determining a first action for the first mobile device based on the
first user preferences associated with the first mobile device and
the metadata associated with the image currently being presented on
the media presentation device and a second action based on the
second user preferences associated with the second mobile device
and the metadata associated with the image currently being
presented on the media presentation device, wherein the first
action is a different action than the second action; and causing
the first action to be performed by the first mobile device and the
second action to be performed by the second mobile device.
[0010] In accordance with some implementations of the disclosed
subject matter, a system for performing personalized actions on
mobile devices is provided, the system comprising: means for
causing a slideshow of images to be presented on a media
presentation device; means for detecting a presence of a first
mobile device associated with a first user and a second mobile
device associated with a second user, wherein the first mobile
device and the second mobile device are in a proximity of the media
presentation device; means for receiving first user preferences
associated with the first user and second user preferences
associated with the second user in response to the detection; means
for receiving metadata associated with an image from the slideshow
of images that is currently being presented on the media
presentation device; means for determining a first action for the
first mobile device based on the first user preferences associated
with the first mobile device and the metadata associated with the
image currently being presented on the media presentation device
and a second action based on the second user preferences associated
with the second mobile device and the metadata associated with the
image currently being presented on the media presentation device,
wherein the first action is a different action than the second
action; and means for causing the first action to be performed by
the first mobile device and the second action to be performed by
the second mobile device.
[0011] In some implementations, the system further comprises: means
for generating a group identifier; means for associating the group
identifier with the first mobile device and the second mobile
device; means for generating combined user preferences based on the
first user preferences and the second user preferences; and means
for associating the combined user preferences with the group
identifier, wherein each image in the slideshow of images has been
selected based on the combined user preferences.
[0012] In some implementations, the system further comprises means
for determining a third action to be performed by the first mobile
device and a fourth action to be performed by the second mobile
device in response to causing a second image from the slideshow of
images to be presented on the media playback device.
[0013] In some implementations, the system further comprises means
for receiving a first indication that the first mobile device is
associated with the media presentation device over a wireless
communications network and a second indication that the second
mobile device is associated with the media presentation device over
the wireless communications network.
[0014] In some implementations, the system further comprises: means
for receiving location information indicating relative proximities
of the first mobile device and the second mobile device with the
media presentation device; and means for determining that the first
mobile device and the second mobile device are within a
predetermined proximity of the media presentation device based on
the received location information.
[0015] In some implementations, the system further comprises means
for determining a third action to be performed by the first mobile
device in response to determining that a predetermined amount of
time has elapsed since the first action was performed by the first
mobile device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] Various objects, features, and advantages of the disclosed
subject matter can be more fully appreciated with reference to the
following detailed description of the disclosed subject matter when
considered in connection with the following drawings, in which like
reference numerals identify like elements.
[0017] FIG. 1 shows an example of a generalized schematic diagram
of a system on which the mechanisms for performing personalized
actions on mobile devices associated with a media presentation
device that are based on presented media content as described
herein can be implemented in accordance with some
implementations.
[0018] FIG. 2 shows an example of hardware that can be used to
implement one or more user devices, presentation devices and
servers depicted in FIG. 1 in accordance with some implementations
of the disclosed subject matter.
[0019] FIG. 3 shows an example of a process for performing
personalized actions on mobile devices associated with a media
presentation device that are based on presented media content in
accordance with some implementations of the disclosed subject
matter.
[0020] FIG. 4 shows an example of a display device presenting
content and multiple user devices that are each performing
personalized actions based on the presented content in accordance
with some implementations of the disclosed subject matter.
[0021] FIG. 5 shows an example of a process for associating user
preferences with a presentation device in accordance with some
implementations of the disclosed subject matter.
[0022] FIG. 6 shows an example of a process for presenting
customized content on a presentation device in accordance with some
implementations of the disclosed subject matter.
DETAILED DESCRIPTION
[0023] In accordance with some implementations, as described in
more detail below, mechanisms, which can include methods, systems,
and/or computer readable media, for performing personalized actions
using a second screen device are provided.
[0024] In some implementations, these mechanisms can allow user
preferences for multiple users to be used when selecting content to
be presented without requiring each of the users to manually select
content for presentation. In some implementations, a media
presentation device, such as a digital media receiver or media
streaming device (which may not include a display), can request
content to be presented when the media presentation device is on
and outputting video data but lacks image and/or video content to
be presented. For example, when the media presentation device first
starts (e.g., before any content is requested for presentation),
after a predetermined period of time has elapsed with no activity,
when presentation of content in a queue of content to be presented
has been completed, etc., the media presentation device can request
personalized content to be presented. In such an example,
personalized content can be, for example, a slideshow of images
that are determined to be of interest to a user or users associated
with the media presentation device.
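The trigger conditions above (first start, an elapsed idle period, an exhausted content queue) can be sketched as follows. This is a minimal illustration only; the class name, the 300-second threshold, and the queue representation are assumptions, not details from the application.

```python
import time

IDLE_TIMEOUT_SECONDS = 300  # hypothetical inactivity threshold

class MediaPresentationDevice:
    """Minimal sketch of the idle-trigger logic described above."""

    def __init__(self):
        self.queue = []  # queue of content items awaiting presentation
        self.last_activity = time.monotonic()

    def should_request_personalized_content(self):
        """Request a personalized slideshow when there is nothing to show:
        at first start (empty queue), after the queue is exhausted, or
        after a predetermined period with no activity."""
        idle = time.monotonic() - self.last_activity > IDLE_TIMEOUT_SECONDS
        return not self.queue or idle
```

When the check returns true, the device would issue a request for personalized content such as a slideshow of images.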
[0025] In some implementations, the media presentation device can
detect the presence of one or more user devices on a network that
is common to the media presentation device. Additionally, in some
implementations, identifying information of user devices that the
media presentation device detects as present can be used when
requesting the personalized content from, for example, a server.
For example, the media presentation device can detect that a
smartphone associated with a first user and a tablet computer
associated with a second user are connected to a Wi-Fi network to
which the media presentation device is also connected. In such an
example, the media presentation device can send identifying
information of the smartphone and tablet, such as a MAC address, a
device ID, etc., to the server with a request for content.
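One plausible shape for such a request is a payload listing the identifying information of each detected device. The field names below are illustrative assumptions; the application does not specify a wire format.

```python
def build_content_request(detected_devices):
    """Assemble a hypothetical personalized-content request from the
    identifiers of user devices detected on the same network as the
    media presentation device."""
    return {
        "request": "personalized_content",
        "devices": [
            {"device_id": d["device_id"], "mac": d["mac"]}
            for d in detected_devices
        ],
    }

# e.g., a smartphone and a tablet detected on the shared Wi-Fi network
detected = [
    {"device_id": "phone-1", "mac": "AA:BB:CC:DD:EE:01"},
    {"device_id": "tablet-2", "mac": "AA:BB:CC:DD:EE:02"},
]
request = build_content_request(detected)
```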
[0026] In some implementations, a server can select content based
on the identifying information of devices that are detected by the
media presentation device as being present. The server can, for
example, identify user preferences for users associated with the
devices that are present and select content based on a combination
of the user preferences. Such user preferences can include, for
example, search history, travel information, user-stated interests,
user interactions with one or more media content items (e.g., a
song, a video, a web page, etc.), and/or any other suitable
information related to the user of the user device. In some
implementations, the server can cause the media presentation device
to present the selected content.
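One plausible reading of "a combination of the user preferences" is a simple union of each present user's interest keywords. The union below is an assumption; weighting or intersecting preferences would also fit the description.

```python
def combine_user_preferences(preferences_by_device):
    """Merge the preference keywords of every detected user device
    into one combined set used to select slideshow content."""
    combined = set()
    for prefs in preferences_by_device.values():
        combined |= set(prefs)
    return combined

# hypothetical preferences keyed by detected device identifier
prefs = {
    "phone-1": {"travel", "food"},
    "tablet-2": {"food", "architecture"},
}
combined = combine_user_preferences(prefs)
```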
[0027] In some implementations, in response to detecting the
presence of one or more user devices within a predetermined
proximity of the media presentation device and in response to
determining that content is being presented (e.g., an image in a
slideshow of images), the server can determine one or more actions
that may be performed on the user device (e.g., a second screen
device). For example, the server can determine one or more search
terms based on keywords, phrases, etc. contained in the user
preferences associated with the user device and metadata about the
media content being presented on the media presentation device. The
server can then perform a search based on the search terms and
cause the search results to be presented on the corresponding user
device. As another example, the server can identify content that
relates to the media content being presented on the media
presentation device and that may interest the user based on the
user preferences associated with the user device and the metadata
associated with the media content being presented. In a more
particular example, the server can determine that the user
preferences associated with the user device indicate a likelihood
that the user of the user device is interested in food-related
content and, as the metadata includes terms like "Paris, France"
and "Eiffel Tower," the server can assign the corresponding action to
be a search for French restaurants near the user device. The server
can then cause information about the identified content, such as
search results, to be presented on the user device.
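The per-device action selection described above can be sketched as matching a user's interest keywords against the metadata of the currently presented image. The matching rule and the returned action format are illustrative assumptions, not the application's method.

```python
def determine_action(user_preferences, image_metadata):
    """Choose a personalized action for one user device based on that
    user's preference keywords and metadata of the image currently on
    the media presentation device."""
    if "food" in user_preferences and "Paris, France" in image_metadata["keywords"]:
        # Mirrors the example above: a food interest plus Paris metadata
        # yields a restaurant search near the user device.
        return {"type": "search", "query": "French restaurants nearby"}
    matches = set(user_preferences) & set(image_metadata["keywords"])
    if matches:
        return {"type": "search", "query": " ".join(sorted(matches))}
    return {"type": "none"}  # no overlap: take no action on this device

metadata = {"keywords": {"Paris, France", "Eiffel Tower"}}
action = determine_action({"food"}, metadata)
```

Because each device's preferences differ, two devices viewing the same image can receive different actions, as the claims require.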
[0028] It should be noted that any suitable number of user devices
can be in communications with the media presentation device. As
such, for each connected user device, the server can determine one
or more personalized actions that may be performed on that user
device.
[0029] FIG. 1 shows an example 100 of a generalized schematic
diagram of a system on which the mechanisms for performing
personalized actions on mobile devices associated with a media
presentation device that are based on presented media content as
described herein can be implemented in accordance with some
implementations. As illustrated, system 100 can include one or more
user devices 102. User devices 102 can be local to each other or
remote from each other. User devices 102 can be connected by one or
more communications links 104 to a communication network 106 that
can be linked to a server 120 via a communications link 112.
[0030] Although three user devices 102 are shown in FIG. 1 to avoid
over-complicating the drawing, any suitable number of these
devices, and any suitable types of these devices, can be used in
some implementations.
[0031] System 100 can include one or more presentation devices 110.
Presentation devices 110 can be local to each other or remote from
each other. Presentation devices 110 can be connected by one or
more communications links 108 to communication network 106 that can
be linked to server 120 via communications link 112 and/or user
devices 102 via communications link 104.
[0032] System 100 can include one or more servers 120. Server 120
can be any suitable server or servers for providing access to the
mechanisms described herein for associating multiple users with a
presentation device, such as a processor, a computer, a data
processing device, or any suitable combination of such devices. For
example, the mechanisms for performing personalized actions on
mobile devices associated with a media presentation device that are
based on presented media content can be distributed into multiple
backend components and multiple frontend components and/or user
interfaces. In a more particular example, backend components, such
as mechanisms for receiving requests to associate user preferences
with a presentation device, requests identifying user preferences
associated with a particular presentation device, receiving
metadata relating to content being presented on a presentation
device, requests to present content based on the user preferences,
etc., can be performed on one or more servers 120. In another
particular example, frontend components, such as mechanisms for
presenting content, requesting content to be presented, identifying
user devices that are present, setting user preferences, causing a
user device to be associated with a presentation device, causing an
action to be performed on a user device and/or presenting the
results of such an action, etc., can be performed on one or more
user devices 102 and/or presentation device 110.
[0033] In some implementations, each of user devices 102,
presentation device 110 and server 120 can be any of a general
purpose device such as a computer or a special purpose device such
as a client, a server, etc. Any of these general or special purpose
devices can include any suitable components such as a hardware
processor (which can be a microprocessor, digital signal processor,
a controller, etc.), memory, communication interfaces, display
controllers, input devices, etc. For example, user device 102 can
be implemented as a smartphone, a tablet computer, a wearable
computer, a laptop computer, a portable game console, any other
suitable computing device, or any suitable combination thereof. As
another example, presentation device 110 can be implemented as a
digital media receiver, a media streaming device, a game console, a
set-top box, a television, a projector, any other suitable
computing device, or any suitable combination thereof.
[0034] Communications network 106 can be any suitable computer
network or combination of such networks including the Internet, an
intranet, a wide-area network (WAN), a local-area network (LAN), a
wireless network, a Wi-Fi network, a digital subscriber line (DSL)
network, a frame relay network, an asynchronous transfer mode (ATM)
network, a virtual private network (VPN), one or more
peer-to-peer connections, etc. Each of communications links 104,
108, and 112 can be any communications links suitable for
communicating data among user devices 102, presentation device 110
and server 120, such as network links, dial-up links, wireless
links, hard-wired links, any other suitable communications links,
or any suitable combination of such links. Note that, in some
implementations, multiple servers 120 can be used to provide access
to different mechanisms associated with the mechanisms described
herein for associating multiple users with a media presentation
device. For example, system 100 can include a user preferences
server 120 that stores user preferences associated with one or more
users and/or one or more user devices 102, a user preferences
database server 120 that maintains one or more databases of
correspondence between users and/or user devices 102 with which a
particular presentation device 110 is associated, a content
delivery server 120 that determines which content to cause to be
presented by the particular presentation device 110 based on the
user preferences of users and/or user devices 102 associated with
presentation device 110, and/or any other suitable servers for
performing any suitable functions of the mechanisms described
herein.
[0035] In some implementations, user device 102 can be associated
with user identifying information 130. User identifying information
130 can identify a user of user device 102 and/or can identify user
device 102. For example, in some implementations, user identifying
information 130 can be a token or other data associated with a user
of user device 102. For example, the token or other data can
identify a user associated with a particular user account of a
product and/or service. In a more particular example, such a token
or other information can include a string of characters that is
associated with a particular email address that was used to sign in
to an application on the user device. As another example, user
identifying information 130 can be identifying information of user
device 102, such as a MAC address, a device ID, a serial number,
and/or any other suitable identifying information of user device
102. As yet another example, user identifying information 130 can
be a combination of identifying information of a user and
identifying information of user device 102.
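As a non-authoritative sketch of the identifier combinations described above, the following illustrates collapsing an account token and/or device identifiers into a single opaque value. The class name, field names, and the hashing step are hypothetical illustrations and do not appear in the application:

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class UserIdentifyingInfo:
    # Hypothetical fields: the text allows an account token, device
    # identifiers such as a MAC address or device ID, or a combination.
    account_token: Optional[str] = None
    mac_address: Optional[str] = None
    device_id: Optional[str] = None

    def opaque_id(self) -> str:
        """Collapse whichever parts are set into one opaque identifier."""
        parts = [p for p in (self.account_token, self.mac_address,
                             self.device_id) if p]
        return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()
```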
[0036] In some implementations, presentation device 110 can be
associated with presentation device identifying information 140.
Presentation device identifying information 140 can identify a user
of presentation device 110 and/or can identify presentation device
110 itself.
[0037] For example, in some implementations, presentation device
identifying information 140 can be a token or other data associated with a user
of presentation device 110. For example, the token or other data
can identify a user associated with a particular user account of a
product and/or service. In a more particular example, such a token
or other information can include a string of characters (which can
be, for example, randomly assigned) that is associated with a
particular email address that was used as a credential to log in to
an application on the presentation device. As another example,
presentation device identifying information 140 can be identifying
information of presentation device 110, such as a MAC address, a
device ID, a serial number, and/or any other suitable identifying
information of presentation device 110. As yet another example,
presentation device identifying information 140 can be a
combination of identifying information of a user and identifying
information of presentation device 110. In some implementations,
presentation device identifying information 140 can include
semantically meaningful identifying information, such as a user
assigned name (e.g., "Brett's Living Room Streaming Device").
[0038] In some implementations, presentation device identifying
information 140 can include a persistent identifier for
presentation device 110 that can be assigned based on any suitable
conditions. For example, a device ID of presentation device 110 can
be assigned when presentation device 110 is initialized and/or
reinitialized. In a more particular example, during initialization
presentation device 110 can contact a server to request a
persistent device ID. In some implementations, this device ID can
be assigned by the server such that each presentation device has a
unique device ID. Additionally, presentation device 110 can receive
a different device ID upon presentation device 110 being reset or
otherwise reinitialized. In some implementations, such a device ID
can be used to associate user preferences and/or any other suitable
information (e.g., at a server) with presentation device 110 for
later use in determining content to be presented using presentation
device 110.
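The persistent device ID assignment in paragraph [0038] can be sketched as a minimal server-side allocator; the class and method names below are hypothetical, and a real server would of course persist the assigned IDs:

```python
import uuid

class DeviceIdServer:
    """Hands out one unique, persistent device ID per initialization;
    a reset or reinitialized device simply requests again and
    receives a fresh ID, as described in paragraph [0038]."""

    def __init__(self):
        self._assigned = set()

    def assign_device_id(self) -> str:
        # uuid4 makes collisions negligible; the loop guards uniqueness anyway
        new_id = uuid.uuid4().hex
        while new_id in self._assigned:
            new_id = uuid.uuid4().hex
        self._assigned.add(new_id)
        return new_id
```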
[0039] In some implementations, server 120 can store a user
preferences database 150. User preferences database 150 can include
user preferences associated with a user of a particular user device
102 (e.g., associated with user identifying information 130 of that
user device 102). Additionally or alternatively, in some
implementations, user preferences database 150 can include
information identifying which users are associated with which
presentation devices (e.g., by relating user identifying
information 130 and presentation device identifying information
140). In some implementations, information in user preferences
database 150 can be organized using any suitable technique or
combination of techniques. For example, user preferences database
150 can be organized as a relational database.
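One way the relational organization of user preferences database 150 could look is sketched below with SQLite; the table and column names are hypothetical illustrations of relating user identifying information 130 to presentation device identifying information 140:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- per-user preferences, keyed by user identifying information 130
    CREATE TABLE user_preferences (
        user_id    TEXT NOT NULL,
        pref_key   TEXT NOT NULL,
        pref_value TEXT,
        PRIMARY KEY (user_id, pref_key)
    );
    -- which users are associated with which presentation devices
    -- (user identifying info 130 <-> presentation device identifying info 140)
    CREATE TABLE device_associations (
        user_id   TEXT NOT NULL,
        device_id TEXT NOT NULL,
        PRIMARY KEY (user_id, device_id)
    );
""")
conn.executemany("INSERT INTO device_associations VALUES (?, ?)",
                 [("user_a", "dev_1"), ("user_b", "dev_1")])

def users_for_device(device_id):
    """Look up which users' preferences are associated with a device."""
    rows = conn.execute(
        "SELECT user_id FROM device_associations WHERE device_id = ?",
        (device_id,))
    return sorted(r[0] for r in rows)
```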
[0040] In situations in which the mechanisms described herein
collect personal information about users, or can make use of
personal information, the users can be provided with an opportunity
to control whether programs or features collect user information
(e.g., information about cached device details on a user's user
device, devices discovered on networks to which the user device is
connected, an address from which a database query is sent, a social
network, social actions or activities, profession, a user's
preferences, or a user's current location), or to control whether
and/or how to receive content from the server that can be more
relevant to the user. In addition, certain data can be treated in
one or more ways before it is stored or used, so that personally
identifiable information is removed. For example, a user's identity
can be treated so that no personally identifiable information can
be determined for the user, or a user's geographic location can be
generalized where location information is obtained (such as to a
city, ZIP code, or state level), so that a particular location of a
user cannot be determined. Thus, the user can have control over how
information is collected about the user and used by a content
server.
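The location generalization described above (coarsening to a city, ZIP code, or state level) can be sketched as a simple filter; the field names below are hypothetical:

```python
def generalize_location(location: dict) -> dict:
    """Strip precise coordinates, keeping only coarse fields such as
    city, ZIP code, or state, so a particular location of a user
    cannot be determined. Field names are illustrative only."""
    coarse_fields = {"city", "zip", "state"}
    return {k: v for k, v in location.items() if k in coarse_fields}
```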
[0041] In some implementations, information stored in user
preferences database can be stored such that personal information
of a user is obscured. For example, user identifying information
130 and/or presentation device identifying information 140 can be
an assigned identification number and/or code name and user
preferences can be associated with such an identification number
and/or code name.
[0042] FIG. 2 shows an example 200 of hardware that can be used to
implement one or more of user devices 102, presentation devices 110
and servers 120 depicted in FIG. 1 in accordance with some
implementations of the disclosed subject matter. Referring to FIG.
2, user device 102 can include a hardware processor 202, a
display/input device 204, memory 206 and a transmitter/receiver
208, which can be interconnected. In some implementations, memory
206 can include a storage device (such as a computer-readable
medium) for storing a user device program for controlling hardware
processor 202.
[0043] Hardware processor 202 can use the user device program to
execute and/or interact with the mechanisms described herein for
performing personalized actions on mobile devices associated with a
media presentation device that are based on presented media
content, associating multiple devices with a media presentation
device, controlling presentation of the content on the media
presentation device, disassociating user preferences from the media
presentation device, setting user preferences, causing one or more
determined actions to be performed on a user device, etc. In some
implementations, the user device program can cause hardware
processor 202 to, for example, interact with a device executing at
least a portion of process 300 as described below in connection
with FIG. 3. In some implementations, hardware processor 202 can
send and receive data through communications link 104 or any other
communication links using, for example, a transmitter, a receiver,
a transmitter/receiver, a transceiver, or any other suitable
communication device, such as transmitter/receiver 208.
Display/input device 204 can include a touchscreen, a flat panel
display, a cathode ray tube display, a projector, a speaker or
speakers, and/or any other suitable display and/or presentation
devices, and/or can include a computer keyboard, a computer mouse,
one or more physical buttons, a microphone, a touchpad, a voice
recognition circuit, a touch interface of a touchscreen, a camera,
a motion sensor such as an optical motion sensor and/or an
accelerometer, a temperature sensor, a near field communication
sensor, a biometric data sensor, and/or any other suitable input
device. Transmitter/receiver 208 can include any suitable
transmitter and/or receiver for transmitting and/or receiving,
among other things, instructions for presenting content,
instructions for setting user preferences, instructions for
associating user preferences with a presentation device, etc., and
can include any suitable hardware, firmware and/or software for
interfacing with one or more communication networks, such as
network 106 shown in FIG. 1. For example, transmitter/receiver 208
can include network interface card circuitry, wireless
communication circuitry, and/or any other suitable type of
communication network circuitry, one or more antennas, and/or any
other suitable hardware, firmware and/or software for transmitting
and/or receiving signals.
[0044] Presentation device 110 can include a hardware processor
212, a display/input device 214, memory 216 and a
transmitter/receiver 218, which can be interconnected. In some
implementations, memory 216 can include a storage device (such as a
computer-readable medium) for storing a presentation device program
for controlling hardware processor 212.
[0045] Hardware processor 212 can use the presentation device
program to execute and/or interact with the mechanisms described
herein for performing personalized actions on mobile devices
associated with a media presentation device that are based on
presented media content, associating multiple users with a media
presentation device, requesting content to present based on user
preferences of associated users, requesting and/or transmitting
presentation device identifying information 140, etc. In some
implementations, the presentation device program can cause hardware
processor 212 to, for example, interact with a device executing at
least a portion of processes 300, 500, and 600 as described below
in connection with FIGS. 3, 5, and 6, respectively. In some
implementations, hardware processor 212 can send and receive data
through communications link 108 or any other communication links
using, for example, a transmitter, a receiver, a
transmitter/receiver, a transceiver, or any other suitable
communication device, such as transmitter/receiver 218.
Display/input device 214 can include a touchscreen, a flat panel
display, a cathode ray tube display, a projector, a speaker or
speakers, and/or any other suitable display and/or presentation
devices, and/or can include a computer keyboard, a computer mouse,
one or more physical buttons, a microphone, a touchpad, a voice
recognition circuit, a touch interface of a touchscreen, a camera,
a motion sensor such as an optical motion sensor and/or an
accelerometer, a temperature sensor, a near field communication
sensor, a biometric data sensor, and/or any other suitable input
device. In some implementations, display/input device 214 of
presentation device 110 can be omitted. Transmitter/receiver 218
can include any suitable transmitter and/or receiver for
transmitting and/or receiving, among other things, requests for
content to be presented, content to be presented, signals to
determine whether one or more user devices 102 are present, etc.,
and can include any suitable hardware, firmware and/or software for
interfacing with one or more communication networks, such as
network 106 shown in FIG. 1. For example, transmitter/receiver 218
can include network interface card circuitry, wireless
communication circuitry, USB input and/or output circuitry, HDMI
input and/or output circuitry, and/or any other suitable type of
communication network circuitry, one or more antennas, and/or any
other suitable hardware, firmware and/or software for transmitting
and/or receiving signals.
[0046] Server 120 can include a hardware processor 222, a
display/input device 224, memory 226 and a transmitter/receiver
228, which can be interconnected. In some implementations, memory
226 can include a storage device for storing data received through
communications link 112 or through other links. The storage device
can further include a server program for controlling hardware
processor 222. In some implementations, memory 226 can include
information stored as a result of user activity and/or activity by
a presentation device (e.g., user preferences, user identifying
information 130, presentation device identifying information 140,
user preferences database 150, content to be presented, requests
for content to be presented, user credentials for use in accessing
content to be presented, etc.). In some implementations, the server
program can cause hardware processor 222 to, for example, execute at
least a portion of processes 300, 500, and 600 as described below in
connection with FIGS. 3, 5, and 6, respectively.
[0047] Hardware processor 222 can use the server program to
communicate with user devices 102 and/or presentation device 110 as
well as provide access to and/or copies of the mechanisms described
herein. It should also be noted that data received through
communications link 112 or any other communications links can be
received from any suitable source. In some implementations,
hardware processor 222 can send and receive data through
communications link 112 or any other communications links using,
for example, a transmitter, a receiver, a transmitter/receiver, a
transceiver, or any other suitable communication device, such as
transmitter/receiver 228. In some implementations, hardware
processor 222 can receive commands and/or values transmitted by one
or more user devices 102, presentation device 110, one or more
other servers 120, and/or one or more users of server 120, such as
a user that makes changes to adjust settings associated with the
mechanisms described herein for associating multiple users with a
media presentation device. Display/input device 224 can include a touchscreen, a
flat panel display, a cathode ray tube display, a projector, a
speaker or speakers, and/or any other suitable display and/or
presentation devices, and/or can include a computer keyboard, a
computer mouse, one or more physical buttons, a microphone, a
touchpad, a voice recognition circuit, a touch interface of a
touchscreen, a camera, a motion sensor such as an optical motion
sensor and/or an accelerometer, a temperature sensor, a near field
communication sensor, a biometric data sensor, and/or any other
suitable input device. Transmitter/receiver 228 can include any
suitable transmitter and/or receiver for transmitting and/or
receiving, among other things, content to be presented, user
preferences, user identifying information 130, presentation device
identifying information 140, requests for content, etc., and can
include any suitable hardware, firmware and/or software for
interfacing with one or more communication networks, such as
network 106 shown in FIG. 1. For example, transmitter/receiver 228
can include network interface card circuitry, wireless
communication circuitry, and/or any other suitable type of
communication network circuitry, one or more antennas, and/or any
other suitable hardware, firmware and/or software for transmitting
and/or receiving signals.
[0048] In some implementations, server 120 can be implemented in
one server or can be distributed as any suitable number of servers.
For example, multiple servers 120 can be implemented in various
locations to increase reliability and/or increase the speed at
which the server can communicate with user devices 102 and/or
presentation device 110. Additionally or alternatively, as
described above in connection with FIG. 1, multiple servers 120 can
be implemented to perform different tasks associated with the
mechanisms described herein.
[0049] FIG. 3 shows an example 300 of a process for performing
personalized actions on mobile devices associated with a media
presentation device that are based on presented media content in
accordance with some implementations of the disclosed subject
matter. As shown in FIG. 3, process 300 can begin, at 302, by
receiving new and/or updated user preferences to be used in
processes involving devices with which the user preferences have
been associated. In some implementations, such user preferences
can, for example, include a user's stated interests, a user's
implied interests, media content that the user has consumed, media
content and/or products about which the user has commented on
and/or that the user has rated, and/or any other suitable
information about the user. In some implementations, a user's
implied interests can be based on user actions such as what types
of media content the user consumes, what types of products the user
buys, the user's actions with relation to the content and/or
products (e.g., whether the user is engaged with the
content/product by commenting and/or "liking" the content/product,
a rating given to the content/product, etc.). In some
implementations, a user is given an opportunity to determine which
information is used in determining user preferences. For example,
in some implementations, user preferences can be manually entered
by a user. As another example, in some implementations, a user can
select one or more sources of information that may or may not be
used in determining user preferences. In some implementations, user
preferences can be updated in response to a user instruction to
update user preferences (e.g., in response to a user editing user
preferences, making changes to permissions of which sources of
information can be used for determining user preferences, etc.).
Additionally or alternatively, user preferences can be updated
automatically based on any suitable criteria or criterion, such as
in response to an event (e.g., in response to the user taking an
action with relation to content and/or a product), in response to a
particular period of time having elapsed, etc.
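The implied-interest updating described in paragraph [0049] could be sketched as a weighted accumulation over user actions, honoring the user's choice of which information sources may be used. The weights, source names, and event shape below are hypothetical illustrations, not details from the application:

```python
from collections import defaultdict

# Hypothetical weights for the kinds of user actions named in the text
ACTION_WEIGHTS = {"consumed": 1.0, "commented": 1.5, "liked": 2.0, "rated": 2.5}

def update_implied_interests(interests, events, allowed_sources=None):
    """Fold (source, topic, action) events into per-topic interest
    scores, skipping sources the user has not permitted."""
    scores = defaultdict(float, interests)
    for source, topic, action in events:
        if allowed_sources is not None and source not in allowed_sources:
            continue  # the user excluded this source of information
        scores[topic] += ACTION_WEIGHTS.get(action, 0.0)
    return dict(scores)
```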
[0050] At 304, process 300 can cause the new and/or updated user
preferences to be stored in connection with the user to which the
user preferences pertain. Any suitable technique or combination of
techniques can be used to determine to which user the user
preferences received at 302 pertain. For example, user identifying
information (e.g., user identifying information 130) associated
with information used to set the new and/or updated user
preferences received at 302 can be used to associate the user
preferences with a particular user. As another example, identifying
information of an account that caused the new and/or updated user
settings to be submitted to a device executing process 300 can be
used to associate the user preferences with a particular user. In
some implementations, actions performed on multiple devices can be
associated with the same user, based on identifying information of
the user device used to perform an action and/or identifying
information of an account to which the device was logged in when
the action was performed. For example, multiple user devices can be
associated with a user by registering each user device to the user.
In a more particular example, actions performed on such a user
device can be attributed to the user by using identifying
information associated with each user device (e.g., user
identifying information 130). As another example, in cases when an
application used to perform an action is associated with a user
account (e.g., by logging in to the user account through the
application), an action can be associated with the user regardless
of whether the device used to perform the action is otherwise
associated with the user (or another user).
[0051] At 306, process 300 can determine whether a request to
associate particular user preferences with a particular
presentation device has been received. In some implementations,
such a request can be associated with a particular user based on
user identifying information (e.g., user identifying information
130) associated with the request. Additionally, in some
implementations, such a request can be associated with a particular
presentation device based on presentation device identifying
information (e.g., presentation device identifying information 140)
associated with the request. In some implementations, such a
request can be initiated by any suitable device (e.g., user device
102, presentation device 110, etc.) and in response to any suitable
action performed on such a device. For example, user device 102 can
cause a request to associate a user of user device 102 (e.g., based
on user identifying information 130) with a particular presentation
device 110 to be sent to server 120 executing at least a portion of
process 300 in response to and/or as part of a request initiated by
a user to present particular content on presentation device 110. As
another example, presentation device 110 can cause a request to
associate a user of user device 102 with presentation device 110 to
be sent to server 120 executing at least a portion of process 300
in response to presentation device 110 receiving an instruction
from such a user device 102 to perform any suitable action. As yet
another example, user device 102 can cause a request to associate a
user of user device 102 (e.g., based on user identifying
information 130) with a particular presentation device 110 to be
sent to server 120 executing at least a portion of process 300 in
response to any suitable user action initiating such an
association.
[0052] If process 300 determines that such a request has been
received ("YES" at 306), process 300 can proceed to 308. At 308,
process 300 can cause user preferences associated with the user
(e.g., a user identified by user identifying information received
with the request) that caused the request received at 306 to be
sent, to be associated with the presentation device (e.g., a presentation
device identified by presentation device identifying information
received with the request). Such an association can, for example,
be stored in a database or list (e.g., user preferences database
150) such that the association can be determined at a later
time.
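Step 308 above can be sketched as a small in-memory association store; a real implementation would persist this in a database such as user preferences database 150, and the class and method names here are hypothetical:

```python
class AssociationStore:
    """Records, per presentation device, which users' preferences are
    associated with it, so the association can be determined later."""

    def __init__(self):
        self._users_by_device = {}

    def associate(self, user_id: str, device_id: str) -> None:
        # e.g. user_id from user identifying information 130 and
        # device_id from presentation device identifying information 140
        self._users_by_device.setdefault(device_id, set()).add(user_id)

    def users_for(self, device_id: str) -> set:
        return set(self._users_by_device.get(device_id, set()))
```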
[0053] If process 300 determines that a request to associate user
preferences with a presentation device has not been received ("NO"
at 306), process 300 can proceed to 310. At 310, process 300 can
determine whether a request to present content based on user
preferences associated with a presentation device has been
received. In some implementations, such a request can be a request
for content to be presented by the presentation device, where the
content that is to be presented is determined, at least in part, by
a server. In such implementations, the server (which may or may not
be a server executing process 300) can use user preferences to
determine which content is to be presented. In some
implementations, a determination as to which user preferences to
use in determining the content to be presented can be based, at
least in part, on which users are associated with the presentation
device that is requesting the content.
[0054] If process 300 determines that such a request has been
received ("YES" at 310), process 300 can proceed to 312. At 312,
process 300 can receive identifying information of user devices
detected as being present by the requesting presentation device
(e.g., a device associated with the request received at 310). In
some implementations, any suitable technique or combination of
techniques can be used to detect the presence of a particular user
device. For example, presentation device 110 can detect user
devices that are connected to a same local network as presentation
device 110 (e.g., a LAN including a Wi-Fi network). As another
example, presentation device 110 can broadcast and/or unicast one
or more messages targeting nearby user devices 102 using any
suitable communication techniques, such as peer-to-peer
communication techniques. In a more particular example,
presentation device 110 can use transmitter/receiver 218 to
transmit one or more signals (e.g., using any suitable
communication standard such as Bluetooth, wireless USB, etc.) to
any nearby user devices 102 which can, in some cases, receive the
signal using transmitter/receiver 208 and respond with a message
indicating that the user device is present. In another more
particular example, presentation device 110 can use a speaker to
emit a signal as sound waves, which can be outside the range of
human hearing, to any nearby user devices 102 which can, in some
cases, receive the signal using a microphone and respond with a
message indicating that the user device is present.
[0055] In some implementations, in lieu of or in addition to a user
device 102 responding to a signal from presentation device 110 to
detect presence of user devices 102, user device 102 can transmit a
signal to server 120 identifying itself as being in the presence of
presentation device 110.
[0056] In a more particular example, a client application can be
loaded on any suitable user device, such as a smartphone, a tablet
computer, a wearable computer, etc. Once the client application is
loaded, the client application can initiate presentation device
discovery in some implementations. For example, presentation device
discovery can be initiated on a network to which the user device is
connected. In a more particular example, the client application can
cause the user device to search for presentation devices on a
network (e.g., a Wi-Fi network) utilizing the Discovery And Launch
(DIAL) protocol. In another more particular example, a full
discovery protocol can be executed that causes the computing device
to send a User Datagram Protocol (UDP) multicast message on a
network to which the user device is connected. In some
implementations, the UDP multicast message can include an M-Search message directed
to presentation devices, such as digital media renderers and/or
digital media servers, digital media players, or any other suitable
presentation device that outputs, processes, and/or presents media
content. In some implementations, the UDP multicast message can
include an address of the device sending the message (e.g., the
network address of the user device), and can include a time period
during which replies are to be sent. Such a time period can be any
suitable time period, such as one second, two seconds, etc., and
can be set based on any suitable factors.
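The UDP multicast discovery described in paragraph [0056] can be sketched as an SSDP M-SEARCH exchange of the kind used by the DIAL protocol. This is an illustrative sketch, not the application's implementation; the search target and timeout are examples:

```python
import socket

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def build_msearch(mx_seconds: int = 2) -> bytes:
    """Build an SSDP M-SEARCH request of the kind used by DIAL
    discovery; MX bounds the time period during which replies are
    to be sent."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        f"MX: {mx_seconds}",
        "ST: urn:dial-multiscreen-org:service:dial:1",
        "", "",
    ]
    return "\r\n".join(lines).encode("ascii")

def discover(timeout: float = 2.0):
    """Send the multicast M-SEARCH and collect unicast replies,
    each of which identifies a responding presentation device."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(int(timeout)), (SSDP_ADDR, SSDP_PORT))
    replies = []
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            replies.append((addr[0], data))
    except socket.timeout:
        pass
    return replies
```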
[0057] As another example, presentation device discovery can be
initiated to determine whether presentation devices are in a
proximity of the user device. In another more particular example,
the client application can execute a BLUETOOTH Service Discovery
Protocol (SDP) and/or any other suitable SDP that allows a device
to discover other devices through a short-range connection.
[0058] It should be noted that, prior to initiating presentation
device discovery or performing any action on the user device, the
client application can provide a user of the user device with an
opportunity to provide affirmative consent or authorization to
perform actions on the user device, such as detecting presentation
devices connected to the user device, retrieving user preferences
associated with the user, retrieving media content associated with
the user, performing a personalized action, etc. For example, upon
loading the client application on the user device, the client
application can prompt the user to provide authorization for
retrieving user preferences associated with the user device and/or
a presentation device. In a more particular example, in response to
downloading the client application and/or loading the client
application on the user device, the user can be prompted with a
message that requests (or requires) that the user provide consent
prior to performing these actions. Additionally or alternatively,
in response to installing the client application, the user can be
prompted with a permission message that requests (or requires) that
the user provide consent prior to performing these actions.
[0059] In some implementations, upon detecting the presence of a
presentation device (e.g., presentation device 110), the client
application can receive information about customized content being
presented by the presentation device. In some implementations, the
information can be received from the presentation device, a server,
and/or any other suitable source.
[0060] In some implementations, the received information can
include identifying information about the customized content (e.g.,
a content identifier, a URI, and/or any other suitable information
that can be used to identify the customized content), identifying
information about the presentation device, and/or any other
suitable information.
[0061] Additionally or alternatively, in some implementations,
process 300 can detect the presence of a user based on image data
of the user. In a more particular example, process 300 can receive
image data of the user from one or more suitable cameras. Process
300 can then detect the presence of the user using a suitable
object detection technique, an object tracking technique, and/or any other
suitable technique or combination of techniques.
[0062] Process 300 can cause content to be presented by the
presentation device based on user preferences of users associated
with user devices identified by information received at 312. In
some implementations, process 300 can compare identifying
information for each user received at 312 to user preferences
stored in association with the presentation device that requested
the content. In such implementations, for users not associated with
the presentation device, process 300 can inhibit any user
preferences of the non-associated users from being used in
determining which content to present. For example, presentation
device 110 and/or user devices 102 can send identifying information
(e.g., a MAC address, a device ID, etc.) of all user devices that
receive the signal from presentation device 110 and/or respond to
such a signal, and server 120 executing process 300 can determine
which of those devices are associated with a user that has user
preferences associated with presentation device 110.
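As an illustrative sketch of the filtering described above (the function name, variable names, and data layout are assumptions for illustration, not part of the disclosure), the step of inhibiting preferences of non-associated devices could look like:

```python
def filter_associated_devices(detected_device_ids, preferences_by_device):
    """Keep only detected devices that have user preferences stored in
    association with the presentation device; preferences of
    non-associated devices are inhibited from influencing content
    selection."""
    return [d for d in detected_device_ids if d in preferences_by_device]

# Hypothetical identifying information (e.g., MAC addresses) of user
# devices that responded to the presentation device's signal.
detected = ["aa:bb:cc:01", "aa:bb:cc:02", "aa:bb:cc:03"]
stored = {"aa:bb:cc:01": {"topics": ["travel"]},
          "aa:bb:cc:03": {"topics": ["music"]}}
print(filter_associated_devices(detected, stored))
```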
[0063] In some implementations, process 300 can cause user
preferences of users that are associated with the presentation
device that sent the request for content to be retrieved for use in
determining which content is to be presented by the requesting
presentation device. Additionally, in some implementations, a device
executing
process 300 (e.g., a first server 120) or any other suitable device
or combination of devices (e.g., one or more other servers 120) can
use the retrieved user preferences in any suitable combination to
determine content that is to be presented by a requesting
presentation device.
[0064] In some implementations, identifying information of user
devices received at 312 can be grouped by process 300 using a group
ID and/or any other suitable information to identify a particular
combination of devices that are present. In such implementations,
the group ID can correspond to user devices that are detected and
for which there are user preferences associated with presentation
device 110. When a combination of user devices that is different
from combinations of user devices represented by existing group IDs
is in proximity to a presentation device, a new group ID can be
associated with the new combination. User preferences corresponding
to all user devices represented by a group ID can be combined and
associated with the group ID. These user preferences can then be
used when that combination of devices is present. For example, when
a first group of user devices is present, user preferences
associated with a first group ID can be used to determine content
that is to be presented by a presentation device. In such an
example, when another user device that has user preferences
associated with the presentation device becomes present (e.g., a
new user associates preferences with the user device, a user device
with user preferences already associated comes into proximity of
the device, etc.) or when a device that is present is no longer
present (e.g., a user disassociates their user device from the
presentation device, a user device leaves a proximity of the
presentation device, etc.), user preferences associated with a
different group ID can be used in determining which content is to be
presented. In some implementations, user preferences associated
with a group ID can be updated in response to any suitable action
(e.g., user preferences of a particular user being updated, a user
disassociating from the presentation device, after a predetermined
period of time has elapsed, etc.).
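The group-ID mechanism described in this paragraph can be sketched as follows (the hashing scheme, cache, and preference layout are assumptions chosen for illustration; any suitable group-ID derivation would serve):

```python
import hashlib

_group_prefs_cache = {}  # group ID -> combined user preferences

def group_id_for(device_ids):
    """Derive a stable group ID for a particular combination of
    devices; the order in which devices were detected should not
    matter, so the IDs are sorted before hashing."""
    key = "|".join(sorted(device_ids))
    return hashlib.sha256(key.encode()).hexdigest()[:16]

def combined_preferences(device_ids, prefs_by_device):
    """Combine (here, union) the topic preferences of every device in
    the group and cache the result under the group ID for reuse while
    that exact combination of devices remains present."""
    gid = group_id_for(device_ids)
    if gid not in _group_prefs_cache:
        topics = set()
        for d in device_ids:
            topics.update(prefs_by_device.get(d, {}).get("topics", []))
        _group_prefs_cache[gid] = sorted(topics)
    return gid, _group_prefs_cache[gid]
```

When a device joins or leaves, the sorted-and-hashed key changes, so a different group ID (and thus a different combined preference set) is selected automatically.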
[0065] In some implementations, in response to the user preferences
being used to identify which content to present, a device executing
process 300 and/or any other suitable processes, can cause the
requesting presentation device to present the identified
content.
[0066] In some implementations, process 300 can retrieve metadata
associated with the media content being presented by a media
presentation device at 314. As described above, process 300 can
cause content to be presented by the presentation device based on
user preferences, can cause default content to be presented by the
presentation device, and/or can cause content to be presented based
on any other suitable criterion. In some implementations, along
with the retrieval and presentation of media content, which can
include one or more images, video content, audio content, text,
graphics, and/or any other suitable content, process 300 can also
retrieve metadata relating to each piece of media content.
[0067] In some implementations, metadata can include any suitable
information about the media content. For example, the metadata can
include one or more topics related to the media content. In some
implementations, a topic related to the media content can be
"arts," "sports," "weather," "personal photos," "travel," "stocks,"
"news," "fashion," and/or any other suitable topic. As another
example, the metadata can include any suitable information about
the subject of the media content, such as a description of what is
depicted in an image. As yet another example, the metadata can
include any geographic information related to the media content,
such as a name of a location where the media content was captured,
a name of a landmark that appears in the media content, etc. In a
more particular example, metadata about an image of the Eiffel
Tower can include the phrases "Paris," "France," "Europe," "Eiffel
Tower," etc. As still another example, the metadata can include any
suitable information about one or more sources related to the media
content, such as a social media post, a web page, a URI, etc.
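Drawing on the Eiffel Tower example above, a metadata record might be represented as follows (the field names and the example URL are hypothetical; the disclosure does not prescribe a particular schema):

```python
# Hypothetical metadata record for the Eiffel Tower image example.
image_metadata = {
    "topics": ["travel", "arts"],
    "description": "The Eiffel Tower at dusk",      # subject of the image
    "location": {"landmark": "Eiffel Tower", "city": "Paris",
                 "country": "France", "continent": "Europe"},
    "keywords": ["Paris", "France", "Europe", "Eiffel Tower"],
    "sources": ["https://example.com/photos/eiffel-tower"],
}
```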
[0068] In some implementations, process 300 can determine one or
more actions to be performed by user devices based on user
preferences and the metadata associated with the presented content.
As described above, process 300 can cause user preferences of users
that are associated with the presentation device that sent the
request for content to be retrieved for use in determining which
content is to be presented by the requesting presentation device. For
example, user preferences can include a user's stated interests, a
user's implied interests, media content that the user has consumed,
media content and/or products about which the user has commented on
and/or that the user has rated, and/or any other suitable
information about the user. In some implementations, a user's
implied interests can be based on user actions such as what types
of media content the user consumes, what types of products the user
buys, the user's actions with relation to the content and/or
products (e.g., whether the user is engaged with the
content/product by commenting and/or "liking" the content/product,
a rating given to the content/product, etc.). In some
implementations, a user is given an opportunity to determine which
information is used in determining user preferences. For example,
in some implementations, user preferences can be manually entered
by a user. As another example, in some implementations, a user can
select one or more sources of information that may or may not be
used in determining user preferences. In some implementations, user
preferences can be updated in response to a user instruction to
update user preferences (e.g., in response to a user editing user
preferences, making changes to permissions of which sources of
information can be used for determining user preferences, etc.).
Additionally or alternatively, user preferences can be updated
automatically based on any suitable criteria or criterion, such as
in response to an event (e.g., in response to the user taking an
action with relation to content and/or a product), in response to a
particular period of time having elapsed, etc.
[0069] As another example, the user preferences can include any
suitable information about search history associated with the user,
such as search queries performed by the user, search results that
have been accessed by the user, a preferred search application, one
or more user preferences related to a search application (e.g.,
language preferences, location preferences, preferred filtering
options, etc.), etc. As yet another example, user preferences can
include any suitable information about the user's interactions with
one or more media content items (e.g., a web page, a song, a video,
etc.). In a more particular example, user preferences can include
any suitable information relating to a media content item with
which the user has interacted, such as a description of the media
content item, a link to the media content item (e.g., a URL), an
identifier that can identify the media content item (e.g., a URI, a
program identifier, etc.), an author of the media content item, an
artist related to the media content item, etc. In another more
particular example, user preferences can include any suitable
information related to a user interaction with a media content
item, such as a type of the user interaction (e.g., consuming the
media content item, publishing the media content item via a social
networking service or any other suitable service, sharing the media
content item with other users, liking the media content item via a
social networking service or any other suitable service, commenting
on the media content item, etc.), timing information related to the
user interaction (e.g., a duration of the user interaction, a time
corresponding to the user interaction, etc.). As still another
example, user preferences can include any suitable information
about the user's travel behavior, such as a place that the user has
visited, a place that the user is from, a place that the user
intends to visit, etc. As a further example, user preferences can
include any suitable information about one or more topics that may
interest the user, such as "news," "stocks," "weather," "travel,"
"arts," "sports," "fashion," "movies," "music," "food," etc.
[0070] At least a portion of the user preferences associated with a
user of a user device can be used to determine which actions would
be suitable for the user device to perform based on the media
content currently being presented. For example, based on the user
preferences associated with a user of a user device, process 300
can determine which types of actions would be of interest to the
user (e.g., actions that are executed by a particular application
on the user device, actions that relate to playing back media
content, actions that relate to particular types of content,
actions that relate to the present location information of the user
device, etc.). In another example, based on the user preferences
associated with a user of a user device, process 300 can determine
how such an action should be presented to the user (e.g., in the
form of a recommendation interface, in the form of content that
results from the automatic performance of the action, in the form
of a notification, etc.).
[0071] In some implementations, these actions can include any
suitable action that can be performed by a mobile device, a server,
and/or any other suitable device. For example, these actions can
include performing a search by the server based on user preferences
associated with an identified user device and the metadata about
the media content and/or causing one or more search results to be
presented by the mobile device. In a more particular example,
process 300 can determine that the user associated with the
identified user device is interested in foreign movies based on the
user profile (e.g., information indicating that the user searched
for, consumed, shared, etc. foreign movies, information indicating
that the user is associated with the topic "movies," etc.) and that
the metadata about the media content includes the keyword "France."
Process 300 can then determine that a search for French movies can
be performed for the identified user device and/or that a list of
French movies can be presented on the user device. Alternatively to
performing the search for French movies available for playback on
the mobile device or available for viewing at a theater near the
location of the user device, the mechanisms can generate a
recommendation card that, upon selection, performs the search
action on the mobile device. For example, the recommendation card
can inquire as to whether the user of the user device is interested
in finding out about French movies to watch and, in response to
selecting the recommendation card, can initiate a search using a
particular search service.
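The matching of user preferences against content metadata described in this paragraph could be sketched as a rule lookup (the rule table, field names, and return format are illustrative assumptions; the disclosure does not limit how the determination is made):

```python
def determine_action(user_preferences, metadata):
    """Pick an action whose interest topic matches a user preference
    and whose trigger keyword appears in the content metadata."""
    rules = [
        {"interest": "movies", "keyword": "France",
         "action": "search", "query": "French movies"},
        {"interest": "food", "keyword": "France",
         "action": "search", "query": "French restaurants nearby"},
    ]
    for rule in rules:
        if (rule["interest"] in user_preferences.get("topics", [])
                and rule["keyword"] in metadata.get("keywords", [])):
            return {"type": rule["action"], "query": rule["query"]}
    return None  # no personalized action for this user/content pair
```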
[0072] In some implementations, a search can be performed for a
user device using one or more search terms based on keywords,
phrases, etc. contained in the user preferences associated with the
user device and the metadata. In some implementations, a set of
search results can be obtained by performing a search based on one
or more keywords, phrases, etc. contained in the metadata about the
media content. The search results can then be refined and/or
filtered using one or more keywords, phrases, etc. contained in the
user preferences. Alternatively, a set of search results can be
obtained by performing a search based on one or more keywords,
phrases, etc. contained in the user preferences. The search results
can be refined and/or filtered based on one or more keywords,
phrases, etc. contained in the metadata about the media
content.
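One of the two orderings described in this paragraph (search on metadata keywords first, then refine with preference keywords) can be sketched as below; the in-memory "index" and substring matching are simplifying assumptions standing in for any suitable search service:

```python
def personalized_search(index, metadata_keywords, preference_keywords):
    """Obtain results matching the content metadata, then refine them
    with keywords from the user preferences, falling back to the
    unrefined results if refinement removes everything."""
    initial = [doc for doc in index
               if any(k.lower() in doc.lower() for k in metadata_keywords)]
    refined = [doc for doc in initial
               if any(k.lower() in doc.lower() for k in preference_keywords)]
    return refined or initial

index = ["French movies playing nearby",
         "Weather in France this week",
         "Top French restaurants in your area"]
print(personalized_search(index, ["France", "French"], ["movies"]))
```

The alternative ordering simply swaps which keyword set builds the initial result set and which refines it.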
[0073] As another example, process 300 can determine that an
identified user is interested in travelling based on the user
preferences (e.g., information about the identified user's travel
behavior, information about topics that interest the identified
user, etc.) and that the metadata about the media content includes
the keyword "France." Process 300 can then determine that media
content (e.g., an advertisement) including information related to
travel to France (e.g., an airline advertisement, hotel rates,
etc.) can be presented to the identified user on the corresponding
user device.
[0074] At 318, process 300 can cause the determined action or
actions to be performed on mobile devices associated with the
identified users. It should be noted that process 300 can cause the
determined action or actions to be performed in any suitable
manner. For example, process 300 can transmit one or more
instructions to instruct a mobile device associated with an
identified user to perform the action or actions determined for the
identified user. As another example, process 300 can transmit any
suitable data that can be used to perform one or more determined
actions on a mobile device associated with an identified user. In a
more particular example, the data can include one or more search
terms that can be used to perform a search for the identified user.
In another more particular example, the data can include one or
more search results for presentation to the identified user and/or
any suitable information that can be used to present the search
results (e.g., a URL to the search results, a snippet of the search
results, etc.). In yet another more particular example, the data
can include one or more HyperText Markup Language (HTML) files,
scripts, style sheets, and/or any other suitable data that can be
used to render a web page and/or any suitable portion or portions
of a web page in order to perform the determined action(s).
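The data transmitted at 318 could be serialized as a simple payload along the lines below (the field names are hypothetical; the disclosure only requires that the data be usable to perform the determined action):

```python
import json

def build_action_payload(action):
    """Serialize an instruction that a mobile device could act on,
    e.g., search terms to run locally or a URL to pre-fetched
    results."""
    payload = {
        "instruction": "perform_action",
        "action_type": action["type"],        # e.g., "search"
        "search_terms": action.get("query"),  # terms for the device
        "results_url": action.get("url"),     # optional pre-fetched results
    }
    return json.dumps(payload)
```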
[0075] In a more particular example, the client application
executing on the user device can transmit a request for a
personalized action related to the presented content to the server.
In some implementations, the request can include user preference
information or portions of the user preference information
associated with the user device, identifying information, and/or
any other suitable information.
[0076] Upon receiving the request, the server can retrieve metadata
and/or any other suitable data related to the presented content.
For example, based on an image in a slideshow that is currently
being presented on the presentation device, the server can retrieve
a data blob or any other suitable information about the currently
presented image and use the data blob to retrieve metadata
associated with the image. In some implementations, metadata
related to the media content can be identified and/or retrieved
based on the identifying information related to the presented
content, the identifying information related to the presentation
device, and/or any other suitable information.
[0077] In some implementations, the metadata can contain any
suitable information relating to the presented content, such as one
or more topics related to the presented content, information about
the type of information contained in the presented content (e.g.,
an image, a video, a file type, etc.), information about the
subject of the presented content (e.g., a description of what is
depicted in an image), a source where the presented content
originates (for example, a social media post, a web page, a URI,
etc.), information about one or more users related to the presented
content (e.g., a user that appears in a photo), information about
one or more authors of the presented content, etc.
[0078] The server can obtain one or more personalized actions
related to the presented content based on the retrieved metadata.
For example, the personalized actions can be obtained by performing
a search based on the metadata (e.g., using the server and/or any
other suitable device and/or service) and obtaining one or more
search results. In a more particular example, if the presented
content includes an image of a painting, the server can retrieve
web pages, news articles, and/or any other suitable content related
to the painting, the artist of the painting, etc., by performing a
search based on the metadata related to the painting. Using the
information from the web pages, news articles, and/or any other
suitable content relating to the presented content, the server can
obtain a larger set of keywords relating to the presented content
and compare this with the retrieved user preferences to determine
an action that may be of interest to the user of the user
device.
[0079] The server can transmit a response to the user device for
presenting the personalized action. In some implementations, the
response can include any suitable data that can be used to present
the personalized action. For example, the response can include a
link (e.g., a uniform resource locator (URL)), a barcode (e.g., a
quick response (QR) code), and/or any other suitable mechanism
directed to a web page including information related to the
presented content, etc. As another example, the metadata can
include a snippet of web content (e.g., a web page, text, video,
etc.) related to the presented content.
[0080] In some implementations, the client application can receive
the response. In some implementations, the client application can
cause the personalized action related to the customized content to
be performed by user device 102. In some implementations, the
personalized action can include presenting content using text,
images, icons, graphics, videos, animations, audio clips,
hypertext, hyperlinks, sounds, and/or any other suitable
content.
[0081] FIG. 4 shows an example of a display device 410 presenting
media content 420 and mobile devices 102-A, 102-B, and 102-C each
performing a personalized action or presenting results from a
personalized action relating to the presented media content in
accordance with some implementations of the disclosed subject
matter. In some implementations, display device 410 can be
operatively coupled to a media presentation device and/or a media
presentation device can be incorporated into display device 410.
For example, a media presentation device, such as a digital media
receiver or media streaming device, can request content to be
presented when the media presentation device is on and outputting
video data but lacks image and/or video content to be presented. In
some implementations, media content 420 can be default content
associated with display device 410, such as a collection of images
from one source or multiple sources (e.g., a locally stored
database of images, images stored on the presentation device, a
server, etc.). Additionally or alternatively, media content 420 can
be content that has been selected for presentation based on the
presence of user devices 102-A, 102-B, and/or 102-C (e.g., user
preferences associated with accounts authenticated on each of the
user devices). In some implementations, the collection of images
can be displayed as a slideshow of images. For example, each of the
images in the slideshow can be presented one (or many) at a time
for a predetermined period of time (e.g., fifteen seconds, thirty
seconds, etc.).
[0082] In some implementations, the mechanisms described herein can
detect the presence of mobile devices 102-A, 102-B, and/or 102-C in
a proximity of display device 410 (e.g., within a
predetermined proximity of display device 410, on a
local area network to which display device 410 is
connected, etc.).
[0083] In some implementations, upon detecting the presence of one
or more user devices 102-A, 102-B, and/or 102-C that are associated
with the presentation device, the mechanisms can retrieve one or
more user preferences associated with user device(s) 102-A, 102-B,
and/or 102-C (e.g., as described above in connection with process
300 of FIG. 3) and can cause customized content to be presented on
display device 410 based on the retrieved user preferences. For
example, the mechanisms can identify a media source (e.g., a
service, a local storage device, etc.) designated by the retrieved
user preferences and can cause media content provided by the media
source to be presented on display device 410. As another example,
the mechanisms can identify one or more topics in which one or more
users associated with user device(s) 102-A, 102-B, and/or 102-C
might be interested based on the user preferences and can cause media
content related to the identified topic(s) to be presented by
display device 410. In a more particular example, the mechanisms
can cause media content about weather or traffic information to be
presented in response to determining that the user(s) may be
interested in such information based on the user preferences. In
another more particular example, the mechanisms can cause images
from one or more users associated with user device(s) 102-A, 102-B,
and/or 102-C (e.g., photos published by a user associated with user
device(s) 102-A, 102-B, and/or 102-C via a social networking
service) to be presented in response to determining that the
user(s) might be interested in personal photos based on the
retrieved user preferences.
[0084] In some implementations, the mechanisms can cause a
personalized action related to media content 420 to be performed on
one or more user devices that are in a proximity of display device
410, such as user device(s) 102-A, 102-B, and/or 102-C associated
with the media presentation device, a user device that is not
associated with the media presentation device, and/or any other
suitable user device. In some implementations, media content 420
can include images, text, video content, audio content, and/or any
other suitable content. In some implementations, media content 420
can be associated with any suitable metadata, such as one or more
topics related to media content 420, information about the type of
information contained in media content 420 (e.g., an image, a
video, a file type, etc.), information about the subject of media
content 420 (e.g., a description of what is depicted in an image),
a source where media content 420 originates (for example, a social
media post, a web page, a URI, etc.), information about one or more
authors of media content 420, etc.
[0085] For example, as shown in FIG. 4, when presenting an image
420 depicting the Eiffel Tower on display device 410, the
mechanisms can cause a personalized action related to image 420
(e.g., French restaurant recommendations, French music
recommendations, French movie recommendations, content about the
creator of the Eiffel Tower, reviews related to visiting the Eiffel
tower, a link to information about the Eiffel Tower, etc.) to be
presented by user devices 102-A, 102-B, and/or 102-C. In a more
particular example, the mechanisms can determine that the user
associated with mobile device 102-A is interested in "food" or
food-related content (e.g., based on information of user-stated
interests, search history related to the user, etc.) and that the
metadata about media content 420 includes the keyword "France" or
relates to the topic "French things." The mechanisms can then
perform a search for French restaurants near the location of mobile
device 102-A and cause the search results 430 to be presented on
mobile device 102-A. Alternatively to performing the search for
French restaurants near the location of mobile device 102-A, the
mechanisms can generate a recommendation card that, upon selection,
performs the search action on the mobile device. For example, the
recommendation card can inquire as to whether the user of mobile
device 102-A is interested in French restaurants in the area and,
in response to selecting the recommendation card, can initiate a
search using a particular search service. As another more
particular example, the mechanisms can determine that the user
associated with mobile device 102-B is interested in music based on
a user profile associated with the user and authenticated on mobile
device 102-B and that the metadata about media content 420 includes
the keywords "Paris" and "French." The mechanisms can then search
for and/or retrieve information relating to French music and/or
Parisian music and cause the information 440 to be presented on
mobile device 102-B. For example, the mechanisms can recommend a
playlist of French songs for playback on mobile device 102-B. As
yet another more particular example, the mechanisms can determine
that the user associated with mobile device 102-C is interested in
movies (e.g., based on information about the user's search history,
media content consumed by the user, etc.) and that the metadata
about media content 420 includes the keyword "France." The
mechanisms can then search for and/or retrieve information 450
about French movies and cause the information to be presented on
mobile device 102-C. For example, the mechanisms can determine a
French movie that the user of mobile device 102-C may be interested
in, determine the media playback applications that are installed on
mobile device 102-C, and present the user with links for purchasing
and/or playing back the particular French movie using an
application associated with a media service.
[0086] As another example, when presenting an image 420 depicting
the Eiffel Tower on display device 410, the mechanisms can retrieve
metadata regarding image 420 that includes topic information,
location information, creator information, and/or source
information. Such metadata can be used by the mechanisms to
determine one or more suitable actions from a list of actions that
should be performed for the user based on user preferences--e.g., a
particular recommendation card presented on a particular
application executing on mobile device 102-A, 102-B, and/or
102-C.
[0087] It should be noted that, in some implementations, an
application executing on mobile devices 102-A, 102-B, and 102-C can
present information about the media content that is currently being
presented on display device 410 and information about the performed
actions. For example, as shown in FIG. 4, mobile device 102-A can
present a representation of the media content currently being
presented on display device 410 (e.g., an image of the Eiffel
Tower) concurrently with search results 430 relating to French
restaurants within a particular proximity of the location
information associated with mobile device 102-A.
[0088] It should also be noted that, in some implementations, the
application executing on mobile devices 102-A, 102-B, and 102-C can
present a new and/or updated action in response to a particular
period of time elapsing (e.g., thirty seconds). For example, in
response to determining particular topics of interest associated
with a user device based on user preference information, the
mechanisms can rank the particular topics with a first personalized
action corresponding to the highest ranked topic (e.g., French
restaurant recommendations) and a second personalized action
corresponding to the next ordered topic (e.g., French cooking class
recommendations). Additionally or alternatively, the application
executing on mobile devices 102-A, 102-B, and 102-C can present a
new and/or updated action in response to the presentation device
causing the currently presented media content to be replaced with
another piece of media content.
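The topic ranking that drives the first and second personalized actions in the example above could be sketched as follows (the interest scores are hypothetical stand-ins for whatever weighting the user preference information yields):

```python
def rank_topics(topic_scores):
    """Order topics of interest so the first personalized action
    targets the highest-ranked topic and later actions walk down the
    ranking."""
    return [t for t, _ in sorted(topic_scores.items(),
                                 key=lambda kv: kv[1], reverse=True)]

# Hypothetical interest scores derived from user preference information.
scores = {"French restaurants": 0.9, "French cooking classes": 0.7,
          "French movies": 0.4}
print(rank_topics(scores))
```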
[0089] Turning to FIG. 5, an example 500 of a process for
associating user preferences with a presentation device in
accordance with some implementations of the disclosed subject
matter is shown.
[0090] As illustrated, process 500 can begin by receiving
identifying information about a user at 502. In some
implementations, the identifying information can include any
suitable information that can be used to identify a user and/or a
user device associated with the user. For example, the identifying
information can be user identifying information 130 as discussed
above in connection with FIG. 1. In a more particular example, the
identifying information can be and/or include an email address, a
username, a pass code, an image, a uniform resource identifier
(URI), a fingerprint, and/or any other suitable information that
can be used to identify the user and/or an account associated with
the user (e.g., a user account with a social networking service, a
video sharing service, a file hosting service, a photo sharing
service, a messaging service, etc.). As another more particular
example, the identifying information can include a device
identifier, a media access control (MAC) address, a serial number,
a product identifier, and/or any other suitable information that
can be used to identify a user device associated with the user.
[0091] At 504, process 500 can associate the identifying
information with a user identifier. In some implementations, a user
identifier can be a string, a number, or any suitable combination
of numbers, letters, characters, symbols, etc. that can be used to
uniquely identify a user and/or a user device associated with the
user. In some implementations, the user identifier can have any
suitable length and value.
[0092] In some implementations, process 500 can identify an
existing user identifier that has been associated with the user and
can then associate the identifying information with the existing
user identifier. Additionally or alternatively, process 500 can
generate a user identifier upon receiving the identifying
information using a hash function, a random number generator, a
pseudorandom number generator, and/or any other suitable mechanism
that can be used to generate a user identifier.
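The two identifier-generation options mentioned in this paragraph (hashing the identifying information, or drawing from a random source) can be sketched as below; the choice of SHA-256 and a 32-byte token are illustrative assumptions:

```python
import hashlib
import secrets

def generate_user_identifier(identifying_info=None):
    """Generate a user identifier either by hashing the identifying
    information (stable across calls for the same input) or, absent
    such information, from a cryptographically secure random source."""
    if identifying_info:
        return hashlib.sha256(identifying_info.encode()).hexdigest()
    return secrets.token_hex(32)
```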
[0093] At 506, process 500 can identify a presentation device to be
associated with the user. In some implementations, the presentation
device can be identified using any suitable identifying information
related to the presentation device (e.g., as described above in
connection with 308 of FIG. 3), such as a device identifier, a
media access control (MAC) address, a serial number, a product
identifier, an IP address, and/or any other suitable information
that can be used to identify the presentation device.
[0094] In some implementations, the identifying information can be
obtained in any suitable manner. For example, process 500 can
discover a presentation device using any suitable device discovery
protocol. In a more particular example, a presentation device that
is in a proximity of a user device associated with the user and/or
that is connected to a given network (e.g., a Wi-Fi network) can be
discovered. Additionally or alternatively, process 500 can prompt
the user to provide identifying information related to the
presentation device.
[0095] At 508, process 500 can receive one or more user preferences
for presenting customized content using the presentation device. In
some implementations, a user preference for presenting customized
content can include one or more topics that the user is interested
in, such as "personal photos," "arts," "news," "lifestyle,"
"weather," "stocks," etc.
[0096] In some implementations, a user preference for presenting
customized content can indicate one or more media sources that can
provide content for presentation. In a more particular example, the
media source(s) can be and/or include a service associated with
and/or designated by the user, such as a social networking service,
a video sharing service, a photo sharing service, a file sharing
and/or storage service, a media streaming service, a messaging
service, a website, etc. In another more particular example, the
media source(s) can be and/or include a device associated with
and/or designated by the user, such as a user device associated
with the user, a storage device, etc. that can provide media
content for presentation.
[0097] At 510, process 500 can associate the received user
preferences with the user and/or the presentation device. For
example, process 500 can store the user preferences in a database
indexed by user and/or presentation device (e.g., as described
above in connection with 308 of FIG. 3). In a more particular
example, the user preferences can be stored in association with the
user identifier, identifying information related to the
presentation device, and/or any other suitable information, such
that, in response to receiving a subsequent request for customized
content and/or supplemental information related to customized
content relating to a particular presentation device, a service can
retrieve and/or determine customized content and/or supplemental
information based on user preferences and/or any other suitable
information associated with the presentation device.
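The association at 510 can be pictured as a store indexed by user identifier and presentation-device identifier. The following in-memory sketch is illustrative (the class and method names are hypothetical, and a deployed system would use a persistent database as described above):

```python
from collections import defaultdict


class PreferenceStore:
    """User preferences indexed by (user identifier, device identifier)."""

    def __init__(self):
        # (user_id, device_id) -> set of preference topics/sources
        self._prefs = defaultdict(set)

    def associate(self, user_id, device_id, preferences):
        """Store preferences in association with a user and a device."""
        self._prefs[(user_id, device_id)].update(preferences)

    def preferences_for_device(self, device_id):
        """Retrieve the preferences of every user associated with a given
        presentation device, e.g., to serve a later request for
        customized content relating to that device."""
        return {
            user_id: prefs
            for (user_id, dev_id), prefs in self._prefs.items()
            if dev_id == device_id
        }
```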
[0098] Note that, in some implementations in which the mechanisms
described herein collect information about a particular user, the
user can be provided with an opportunity to control whether the
mechanisms collect information about particular users and/or how
collected user information is used by the mechanisms. Examples of
information about a user can include the user's interests and
identifying information of the user (e.g., a user profile, user
credentials, device identification, etc.). Additionally, certain
information about the user can be stored locally (e.g., not
shared), encrypted, and/or treated in one or more ways before it is
stored to remove personally identifiable information. For example,
the mechanisms described herein can store user preferences and/or
user interests for a particular user with an anonymous user
identifier (e.g., a user identifier that is not associated with the
user's name, the user's username and/or password, the user's email
address, etc.). Using these techniques, the user can have control
over what information is collected about the user and/or how that
information is used by the mechanisms described herein.
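The anonymous-identifier technique described above can be sketched with a keyed one-way hash, so that stored preferences are linked only to an opaque value rather than to the user's name, username, or email address. The key-management scheme below is an illustrative assumption, not part of the disclosure:

```python
import hashlib
import hmac
import secrets

# A server-side secret; in practice this would be stored securely and
# never shared, so the mapping cannot be reproduced elsewhere.
SECRET_KEY = secrets.token_bytes(32)


def anonymous_user_id(account_email):
    """Map an account to an opaque identifier so that preferences can be
    stored without personally identifiable information."""
    digest = hmac.new(SECRET_KEY, account_email.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()
```

Because the digest is one-way and keyed, the stored identifier reveals nothing about the underlying account even if the preference database is inspected.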
[0099] FIG. 6 shows an example 600 of a process for presenting
customized content on a presentation device in accordance with some
implementations of the disclosed subject matter.
[0100] As illustrated, process 600 can begin by causing content
from a presentation device to be presented by a display device at
602. Note that, as described above in connection with FIGS. 1, 2,
and 4, a presentation device (e.g., presentation device 110) can be
operatively connected to and/or incorporated into a display device
(e.g., display device 402). In some implementations, process 600
can cause content to be presented as part of 314 described above in
connection with FIG. 3. In some implementations, the content can
include images, video content, audio content, text, etc. For
example, process 600 can present a collection of images as a
slideshow of images. In some implementations, each of the images in
the slideshow can be presented one at a time for a predetermined
period of time (e.g., fifteen seconds, thirty seconds, etc.). In
such implementations, the slideshow can include images from one
source or multiple sources (e.g., a locally stored database of
images, images stored on the presentation device, a server, etc.).
In some implementations, the content can be presented periodically
(e.g., every fifteen seconds, every minute, etc.).
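The slideshow behavior described above, in which each image is presented one at a time for a predetermined period, can be sketched as a simple loop. The function names and the `display_fn` callback are hypothetical stand-ins for whatever rendering path the presentation device uses:

```python
import itertools
import time


def present_slideshow(images, display_fn, interval_seconds=15, cycles=1):
    """Present each image in the collection one at a time for a fixed
    period, cycling through the collection a given number of times."""
    shown = []
    total = cycles * len(images)
    for image in itertools.islice(itertools.cycle(images), total):
        display_fn(image)          # e.g., render the image on the display device
        shown.append(image)
        time.sleep(interval_seconds)
    return shown
```

The images passed in could come from one source or several (a local database, the presentation device's storage, a server, etc.), as the paragraph above notes.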
[0101] At 604, process 600 can determine whether a user is present
in a proximity of the presentation device. For example, process 600
can detect the presence of a user by detecting one or more user
devices associated with the user. In a more particular example,
process 600 can detect the presence of a user device that is in a
proximity of the presentation device using a BLUETOOTH Service
Discovery Protocol (SDP) and/or any other suitable SDP that allows
a device to discover other devices through a short-range
connection. In another more particular example, process 600 can
initiate device discovery on a network to which the presentation
device is connected. More particularly, for example, process 600
can search for user devices on a network (e.g., a Wi-Fi network)
utilizing the Discovery And Launch (DIAL) protocol and/or any other
suitable protocol.
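DIAL device discovery of the kind referenced above is built on SSDP: a client multicasts an M-SEARCH request and collects responses from devices on the network. A minimal sketch, assuming the standard SSDP multicast address and the DIAL search target (the `discover` helper is illustrative and requires network access):

```python
import socket

SSDP_ADDR = ("239.255.255.250", 1900)
DIAL_SEARCH_TARGET = "urn:dial-multiscreen-org:service:dial:1"


def build_msearch(search_target=DIAL_SEARCH_TARGET, mx=2):
    """Build the SSDP M-SEARCH request used to discover DIAL devices."""
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        f"HOST: {SSDP_ADDR[0]}:{SSDP_ADDR[1]}\r\n"
        "MAN: \"ssdp:discover\"\r\n"
        f"MX: {mx}\r\n"
        f"ST: {search_target}\r\n"
        "\r\n"
    )


def discover(timeout=3.0):
    """Multicast the M-SEARCH and collect responding device addresses."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch().encode("utf-8"), SSDP_ADDR)
    devices = []
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            devices.append((addr[0], data.decode(errors="replace")))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return devices
```

Each response typically includes a LOCATION header pointing at the device's description, which supplies the identifying information used at 606.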
[0102] As another example, process 600 can detect the presence of a
user based on image data of the user. In a more particular example,
process 600 can receive image data of the user from one or more
suitable cameras. Process 600 can then detect the presence of the
user using a suitable object detection technique, an object tracking
technique, and/or any other suitable technique or combination of
techniques.
[0103] In some implementations, process 600 can loop back to 602 in
response to failing to detect a user in a proximity of the
presentation device ("NO" at 604). Alternatively, in response to
detecting the presence of one or more users in the proximity of the
presentation device, process 600 can identify the detected user or
users at 606. For example, process 600 can compare identifying
information related to a user device associated with a detected
user (e.g., a device identifier, an IP address, a URI, a MAC address, etc.)
to known identifying information related to known user devices that
are associated with the presentation device to find a match. As
another example, process 600 can identify the detected user(s)
using any suitable facial recognition technique or combination of
techniques. In a more particular example, process 600 can generate
a set of facial features based on image data of the detected
user(s) and can compare the generated facial features with known
facial features of known users that are associated with the
presentation device to find a match.
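The device-based matching described at 606 amounts to comparing detected identifying information against a table of known devices associated with the presentation device. A minimal sketch, assuming MAC-address-style identifiers normalized to lowercase (an illustrative choice):

```python
def identify_users(detected_devices, known_devices):
    """Match identifying information of detected user devices against
    known devices associated with the presentation device.

    detected_devices: iterable of identifying strings (e.g., MAC addresses).
    known_devices: mapping of known identifying string -> user identifier.
    Returns the list of identified users.
    """
    identified = []
    for info in detected_devices:
        user = known_devices.get(info.lower())  # normalize before comparing
        if user is not None:
            identified.append(user)
    return identified
```

The facial-recognition path described above is analogous: generated feature vectors are compared against known features, with a similarity threshold replacing the exact-match lookup.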
[0104] In some implementations, any suitable user can be considered
a user associated with the presentation device. In some
implementations, one or more users and/or one or more user devices
associated with the user(s) can be associated with the presentation
device using process 300 of FIG. 3, process 500 of FIG. 5 and/or
any other suitable process.
[0105] At 608, process 600 can receive one or more user preferences
associated with the identified user(s). In some implementations,
upon identifying multiple users at 606, process 600 can retrieve a
user preference for each of the identified users. In some
implementations, the received user preferences can include any
suitable information for presenting customized content using the
presentation device. In some implementations, the user preferences
can be received from one or more users and/or be associated with
the presentation device as described above in connection with
process 500 of FIG. 5 and/or in any other suitable manner.
[0106] In some implementations, identifying information of user
devices and/or users identified at 606 can be grouped by process
600 using a group ID and/or any other suitable information to
identify devices that are present. For example, a group ID can be
used as described above in connection with 312 of FIG. 3.
[0107] At 610, process 600 can present customized content using the
presentation device based on the user preferences. In some
implementations, the customized content can be presented by causing
any suitable media content, such as images, video content, audio
content, multimedia content, text, etc., to be presented by a
display device.
[0108] In some implementations, the customized content can be
presented in any suitable manner. For example, process 600 can
identify one or more topics in which one or more of the identified
users may be interested based on the user preferences, such as a
topic included in a user preference of a particular user associated
with the presentation device, a common topic included in user
preferences of multiple users associated with the presentation
device, etc. Process 600 can then cause media content related to
the identified topic(s) to be presented using the presentation
device. In a more particular example, in response to determining
that the user preferences associated with the identified users
include a common topic of "stocks," process 600 can cause
information about one or more stocks and/or news about one or more
companies associated with the one or more stocks to be presented on
the display device. In another more particular example, in response
to determining that the user preferences indicate that the
identified user(s) are interested in "personal photos," process 600
can cause photos of one or more of the identified users to be
presented on the display device. In some implementations, these
photos can include photos that are published by a user on a social
networking service, photos that are stored in a user device
associated with the users, etc.
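Finding a common topic among the identified users' preferences, as in the "stocks" example above, is a set intersection. The fallback to the union when no common topic exists is an illustrative design choice, not something specified by the disclosure:

```python
def common_topics(user_preferences):
    """Given each identified user's topic preferences, return the topics
    shared by all present users; fall back to the union of all topics
    if there is no common topic."""
    topic_sets = [set(prefs) for prefs in user_preferences.values()]
    if not topic_sets:
        return set()
    common = set.intersection(*topic_sets)
    return common if common else set.union(*topic_sets)
```

Process 600 could then select media content related to the returned topics for presentation on the display device.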
[0109] As another example, process 600 can identify a media source
associated with one or more of the identified users based on the
user preferences, such as a website, a service (e.g., a video
hosting service, a photo sharing service, a file sharing service, a
social networking service, etc.), a device (e.g., a user device, a
storage device, etc.), and/or any other suitable media source
designated by one or more user preferences. Process 600 can then
cause media content provided by the identified media source to be
presented by the presentation device. In a more particular example,
process 600 can cause media content (e.g., photos, video content,
audio content, etc.) stored in a user device designated by the user
preferences to be presented by the presentation device. In another
more particular example, process 600 can cause media content
published on a social network (e.g., social media posts, photos, videos,
etc. associated with a user account of one or more of the
identified users) to be presented by the presentation device.
[0110] In some implementations, the mechanisms described herein can
include software, firmware, hardware, or any suitable combination
thereof. For example, the mechanisms described herein can encompass
a computer program written in a programming language recognizable
by one or more of hardware processors 202, 212 and 222 (e.g., a
program written in a programming language, such as Java, C,
Objective-C, C++, C#, JavaScript, Visual Basic, or any other
suitable programming language). As another example, the mechanisms described
herein can encompass code corresponding to one or more Web pages or
Web page portions (e.g., via any suitable encoding, such as Hyper
Text Markup Language ("HTML"), Dynamic Hyper Text Markup Language
("DHTML"), Extensible Markup Language ("XML"), JavaServer Pages
("JSP"), Active Server Pages ("ASP"), Cold Fusion, or any other
suitable approaches).
[0111] In situations in which the mechanisms described herein
collect personal information about users, or can make use of
personal information, the users can be provided with an opportunity
to control whether programs or features collect user information
(e.g., information about cached device details on a user's user
device, devices discovered on networks to which the user device is
connected, an address from which a database query is sent, a social
network, social actions or activities, profession, a user's
preferences, or a user's current location), or to control whether
and/or how to receive content from the server that can be more
relevant to the user. In addition, certain data can be treated in
one or more ways before it is stored or used, so that personally
identifiable information is removed. For example, a user's identity
can be treated so that no personally identifiable information can
be determined for the user, or a user's geographic location can be
generalized where location information is obtained (such as to a
city, ZIP code, or state level), so that a particular location of a
user cannot be determined. Thus, the user can have control over how
information is collected about the user and used by a content
server.
[0112] In some implementations, any suitable computer readable
media can be used for storing instructions for performing the
functions and/or processes described herein. For example, in some
implementations, computer readable media can be transitory or
non-transitory. For example, non-transitory computer readable media
can include media such as magnetic media (such as hard disks,
floppy disks, etc.), optical media (such as compact discs, digital
video discs, Blu-ray discs, etc.), semiconductor media (such as
flash memory, electrically programmable read only memory (EPROM),
electrically erasable programmable read only memory (EEPROM),
etc.), any suitable media that is not fleeting or devoid of any
semblance of permanence during transmission, and/or any suitable
tangible media. As another example, transitory computer readable
media can include signals on networks, in wires, conductors,
optical fibers, circuits, any suitable media that is fleeting and
devoid of any semblance of permanence during transmission, and/or
any suitable intangible media.
[0113] It should be understood that the above-described steps of
the processes of FIGS. 3, 5, and 6 can be executed or performed in
any order or sequence not limited to the order and sequence shown
and described in the figures. Also, some of the above steps of the
processes of FIGS. 3, 5, and 6 can be executed or performed
substantially simultaneously where appropriate or in parallel to
reduce latency and processing times.
[0114] It should also be noted that, as used herein, the term
mechanism can encompass hardware, software, firmware, or any
suitable combination thereof.
[0115] Accordingly, methods, systems, and media for performing
personalized actions using mobile devices associated with a media
presentation device are provided.
[0116] Although the invention has been described and illustrated in
the foregoing illustrative implementations, it is understood that
the present disclosure has been made only by way of example, and
that numerous changes in the details of implementation of the
invention can be made without departing from the spirit and scope
of the invention, which is limited only by the claims that follow.
Features of the disclosed implementations can be combined and
rearranged in various ways.
* * * * *