U.S. patent application number 13/535061 was filed with the patent office on 2013-01-03 for automated facial detection and eye tracking techniques implemented in commercial and consumer environments.
This patent application is currently assigned to 3G STUDIOS, INC. Invention is credited to JAMES PETER KOSTA, DYLAN S. PETTY, and DEAN E. WOLF.
Application Number: 20130005443 (13/535061)
Family ID: 47391183
Filed Date: 2013-01-03

United States Patent Application 20130005443
Kind Code: A1
KOSTA, JAMES PETER; et al.
January 3, 2013

AUTOMATED FACIAL DETECTION AND EYE TRACKING TECHNIQUES IMPLEMENTED IN COMMERCIAL AND CONSUMER ENVIRONMENTS
Abstract
Various aspects described or referenced herein are directed to
different methods, systems, and computer program products for
facilitating and/or implementing automated facial detection and eye
tracking techniques implemented in commercial and consumer
environments. Various types of commercial devices may be configured
or designed to include facial detection & eye tracking
component(s) which may be operable to facilitate and/or provide one
or more of the following operation(s)/action(s)/feature(s) (or
combinations thereof): facial feature detection functionality for
identifying facial features associated with a user that is
interacting with the commercial device; facial expression
recognition functionality for detecting and recognizing facial
expressions associated with a user that is interacting with the
commercial device; and eye tracking functionality for detecting and
tracking eye movements associated with a user that is interacting
with the commercial device.
Inventors: KOSTA, JAMES PETER (Gardnerville, NV); WOLF, DEAN E. (Boulder, CO); PETTY, DYLAN S. (Reno, NV)
Assignee: 3G STUDIOS, INC. (Reno, NV)
Family ID: 47391183
Appl. No.: 13/535061
Filed: June 27, 2012
Related U.S. Patent Documents

Application Number: 61504141
Filing Date: Jul 1, 2011
Patent Number: (none)
Current U.S. Class: 463/25
Current CPC Class: A63F 13/20 (20140902); G07F 17/3206 (20130101); A63F 13/87 (20140902); A63F 13/25 (20140902); G07F 9/023 (20130101)
Class at Publication: 463/25
International Class: A63F 13/02 (20060101) A63F013/02
Claims
1. A gaming device in a gaming network, comprising: a gaming
controller; memory; a first display; at least one interface for
communicating with at least one other device in the gaming network;
at least one facial detection component; the gaming device being
operable to: control a wager-based game played at the gaming
device; recognize facial features associated with a first user that
is interacting with the gaming device; and initiate a first action
in response to identifying a recognized facial feature associated
with the first user.
2. The gaming device of claim 1 being further operable to:
recognize facial expressions associated with the first user; and
influence an outcome of at least one event of the first gaming
session in response to identifying a recognized facial expression
associated with the first user.
3. The gaming device of claim 1 being further operable to: enable
the first user to participate in a first gaming session at the
gaming device; recognize facial features associated with the first
user; and influence an outcome of at least one event of the first
gaming session in response to identifying a recognized facial
feature associated with the first user.
4. The gaming device of claim 1 being further operable to: enable
the first user to participate in a first gaming session at the
gaming device; monitor events of the first gaming session in which
the first user is a participant; monitor the first user's facial
expressions during participation of at least one event of the first
gaming session; and create a first association between a first
identified event of the first gaming session and a first facial
expression made by the first user while participating in the first
identified event.
5. The gaming device of claim 1 being further operable to: enable
the first user to participate in a first gaming session at the
gaming device; monitor events of the first gaming session in which
the first user is a participant; interpret a first facial
expression made by the first user during participation in a first
event of the first gaming session; and create a first association
between the first identified event of the first gaming session and
the interpretation of the user's first facial expression made by
the first user while participating in the first identified
event.
6. The gaming device of claim 1 being further operable to: track
one or more eye movements associated with the first user during a
first time interval; and identify at least one object being
observed by the first user during the first time interval in
response to tracking one or more eye movements associated with the
user.
7. The gaming device of claim 1 further comprising a first
multi-layer display (MLD), the first MLD including a first display
screen and a second display screen, the first MLD being configured
or designed to display a first portion of game-related content on
the first display screen, and being further operable to display a
second portion of game-related content on the second display
screen; the gaming device being further operable to: identify a
current position or location of the first user's eyes; dynamically
adjust display of content displayed on at least one display screen
of the MLD in a manner which facilitates improved visual alignment
of content displayed on the first and second display screens as
observed from the identified current position or location of the
first user's eyes.
8. The gaming device of claim 1 further comprising a first
multi-layer display (MLD), the first MLD including a first display
screen and a second display screen, the first MLD being configured
or designed to display a first portion of game-related content on
the first display screen, and being further operable to display a
second portion of game-related content on the second display
screen; the gaming device being further operable to: identify a
current position or location of the first user's eyes; determine an
amount of adjustment to be made to content displayed on at least
one display screen of the MLD for facilitating improved visual
alignment of content displayed on the first and second display
screens as observed from the identified current position or
location of the first user's eyes.
9. The gaming device of claim 1 further comprising a first display;
the gaming device being further operable to: identify a current
position or location of the first user's head; dynamically
influence, using information relating to the current position or
location of the first user's head, a behavior of a selected virtual
character displayed at the first display.
10. The gaming device of claim 1 further comprising a first
display; the gaming device being further operable to: identify a
current position or location of the first user's head; dynamically
influence a behavior of a first virtual character displayed at the
first display to thereby cause the first virtual character to
appear to acknowledge a presence of the first user at the
identified current position or location.
11. The gaming device of claim 1 being further operable to:
identify a first set of facial features associated with the first
user; and automatically determine, using the first set of
identified facial features, user demographic information relating
to the first user.
12. The gaming device of claim 1 being further operable to:
identify a first set of facial features associated with the first
user; automatically determine, using the first set of identified
facial features, user demographic information relating to the first
user; automatically identify, using the user demographic
information, a first portion of user targeted content specifically
targeted toward a first portion of the user demographic
information; and dynamically cause the first portion of user
targeted information to be displayed at the first display in
response to determining the first user's demographic information
using the first set of identified facial features.
13. A computer implemented method for operating a gaming device in
a gaming network, the gaming device including at least one facial
detection component, the method comprising: controlling a
wager-based game played at the gaming device; recognizing, using
the at least one facial detection component, facial features
associated with a first user that is interacting with the gaming
device; and initiating a first action in response to identifying a
recognized facial feature associated with the first user.
14. The method of claim 13 further comprising: recognizing, using
the at least one facial detection component, facial expressions
associated with the first user; and influencing an outcome of at
least one event of the first gaming session in response to
identifying a recognized facial expression associated with the
first user.
15. The method of claim 13 further comprising: enabling the first
user to participate in a first gaming session at the gaming device;
recognizing, using the at least one facial detection component,
facial features associated with the first user; and influencing an
outcome of at least one event of the first gaming session in
response to identifying a recognized facial feature associated with
the first user.
16. The method of claim 13 further comprising: enabling the first
user to participate in a first gaming session at the gaming device;
monitoring events of the first gaming session in which the first
user is a participant; monitoring, using the at least one facial
detection component, the first user's facial expressions during
participation of at least one event of the first gaming session;
and creating a first association between a first identified event
of the first gaming session and a first facial expression made by
the first user while participating in the first identified
event.
17. The method of claim 13 further comprising: enabling the first
user to participate in a first gaming session at the gaming device;
monitoring events of the first gaming session in which the first
user is a participant; interpreting, using the at least one facial
detection component, a first facial expression made by the first
user during participation in a first event of the first gaming
session; and creating a first association between the first
identified event of the first gaming session and the interpretation
of the user's first facial expression made by the first user while
participating in the first identified event.
18. The method of claim 13 further comprising: tracking, using the
at least one facial detection component, one or more eye movements
associated with the first user; and identifying at least one object
being observed by the first user in response to tracking one or
more eye movements associated with the user.
19. The method of claim 13 wherein the gaming device includes a
first multi-layer display (MLD), the first MLD including a first
display screen for displaying a first portion of game-related
content, the first MLD further including a second display screen
for displaying a second portion of game-related content, the method
further comprising: identifying, using the at least one facial
detection component, a current position or location of the first
user's eyes; dynamically adjusting display of content displayed on
at least one display screen of the MLD in a manner which
facilitates improved visual alignment of content displayed on the
first and second display screens as observed from the identified
current position or location of the first user's eyes.
20. The method of claim 13 wherein the gaming device includes a
first multi-layer display (MLD), the first MLD including a first
display screen for displaying a first portion of game-related
content, the first MLD further including a second display screen
for displaying a second portion of game-related content, the method
further comprising: identifying, using the at least one facial
detection component, a current position or location of the first
user's eyes; determining an amount of adjustment to be made to
content displayed on at least one display screen of the MLD for
facilitating improved visual alignment of content displayed on the
first and second display screens as observed from the identified
current position or location of the first user's eyes.
21. The method of claim 13 wherein the gaming device includes a first
display, the method further comprising: identifying, using the at least one
facial detection component, a current position or location of the
first user's head; dynamically influencing, using information
relating to the current position or location of the first user's
head, a behavior of a selected virtual character displayed at the
first display.
22. The method of claim 13 wherein the gaming device includes a first
display, the method further comprising: identifying, using the at least one
facial detection component, a current position or location of the
first user's head; dynamically influencing a behavior of a first
virtual character displayed at the first display to thereby cause
the first virtual character to appear to acknowledge a presence of
the first user at the identified current position or location.
23. The method of claim 13 further comprising: identifying, using
the at least one facial detection component, a first set of facial
features associated with the first user; and automatically
determining, using the first set of identified facial features,
user demographic information relating to the first user.
24. The method of claim 13 further comprising: identifying, using
the at least one facial detection component, a first set of facial
features associated with the first user; automatically determining,
using the first set of identified facial features, user demographic
information relating to the first user; automatically identifying,
using the user demographic information, a first portion of user
targeted content specifically targeted toward a first portion of
the user demographic information; and dynamically causing the first
portion of user targeted information to be displayed at a first
display of the gaming device in response to determining the first
user's demographic information using the first set of identified
facial features.
Description
RELATED APPLICATION DATA
[0001] The present application claims benefit, pursuant to the
provisions of 35 U.S.C. § 119, of U.S. Provisional Application
Ser. No. 61/504,141 (Attorney Docket No. 3GSTP001P), titled "USER
BEHAVIOR, SIMULATION AND GAMING TECHNIQUES", naming Kosta et al. as
inventors, and filed 1 Jul. 2011, the entirety of which is
incorporated herein by reference for all purposes.
COPYRIGHT NOTICE/PERMISSION
[0002] A portion of the disclosure of this patent document contains
material which is subject to copyright protection. The copyright
owner has no objection to the facsimile reproduction by anyone of
the patent document or the patent disclosure as it appears in the
Patent and Trademark Office patent file or records, but otherwise
reserves all copyright rights whatsoever. The following notice
applies to the software and data as described below and in the
drawings hereto: Copyright © 2010-2012, Dean E. Wolf, All
Rights Reserved.
BACKGROUND
[0003] The present disclosure relates to facial detection and eye
tracking. More particularly, the present disclosure relates to
automated facial detection and eye tracking techniques implemented
in commercial and consumer environments.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] FIG. 1 illustrates a simplified block diagram of a specific
example embodiment of a portion of a Computer Network 100.
[0005] FIG. 2 is a simplified block diagram of an exemplary gaming
machine 200 in accordance with a specific embodiment.
[0006] FIG. 3 shows a diagrammatic representation of a machine in the
exemplary form of a client (or end user) computer system 300.
[0007] FIG. 4 is a simplified block diagram of an exemplary
Facial/Eye-Enabled Commercial Device 400 in accordance with a
specific embodiment.
[0008] FIG. 5 illustrates an example embodiment of a Server System
580 which may be used for implementing various aspects/features
described herein.
[0009] FIG. 6 illustrates an example of a functional block diagram
of a Server System 600 in accordance with a specific
embodiment.
[0010] FIG. 7 shows an illustrative example of a gaming machine 710
which has been configured or designed to include facial detection
and eye tracking functionality in accordance with a specific
embodiment.
[0011] FIG. 8 shows an illustrative example of an F/E Commercial
Device 810 which has been configured or designed to include facial
detection and eye tracking functionality in accordance with a
specific embodiment.
[0012] FIG. 9 shows an illustrative example of an F/E Commercial
Device 910 which has been configured or designed to include facial
detection and eye tracking functionality in accordance with a
specific embodiment.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
Overview
[0013] Various aspects described or referenced herein are directed
to different methods, systems, and computer program products for
facilitating and/or implementing automated facial detection and eye
tracking techniques implemented in commercial and consumer
environments.
[0014] According to different embodiments, various types of
commercial devices may be configured or designed to include facial
detection & eye tracking component(s) which may be operable to
facilitate and/or provide one or more of the following
operation(s)/action(s)/feature(s) (or combinations thereof): facial
feature detection functionality for identifying facial features
associated with a user that is interacting with the commercial
device; facial expression recognition functionality for detecting
and recognizing facial expressions associated with a user that is
interacting with the commercial device; and eye tracking functionality
for detecting and tracking eye movements associated with a user that is
interacting with the commercial device. According to different
embodiments, examples of various types of commercial devices may
include, but are not limited to, one or more of the following (or
combinations thereof): gaming machines, vending machines,
televisions, kiosks, consumer devices, smart phones, video game
consoles, personal computer systems, electronic display systems,
etc.
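For illustration only (the application discloses no source code), the component behavior described above, detecting a facial feature, expression, or gaze event and then initiating an action in response, can be sketched as a small event-dispatch skeleton. The class name, event types, and handler interface below are hypothetical assumptions, not part of the application:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class FacialEyeComponent:
    """Hypothetical facial detection & eye tracking component for a
    commercial device. Detection results arrive as events ("feature",
    "expression", "gaze"); registered handlers decide which action to
    initiate in response."""
    handlers: dict = field(default_factory=dict)

    def on(self, event_type: str, handler: Callable) -> None:
        # Register a handler for one of the event types above.
        self.handlers.setdefault(event_type, []).append(handler)

    def emit(self, event_type: str, payload: dict) -> list:
        # Dispatch a detected event to every registered handler and
        # collect the actions they initiate.
        return [h(payload) for h in self.handlers.get(event_type, [])]

component = FacialEyeComponent()
component.on("expression", lambda p: f"log:{p['label']}")
actions = component.emit("expression", {"label": "smile", "confidence": 0.9})
print(actions)  # ['log:smile']
```

An event type with no registered handlers simply yields no actions, which keeps each capability optional, mirroring the "one or more of the following" language above.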
[0015] A first aspect is directed to different methods, systems,
and computer program products for operating a gaming device which
includes at least one facial detection component. According to
different embodiments, the gaming device may be operable to
facilitate, enable, initiate, and/or perform one or more of the
following operation(s), action(s), and/or feature(s) (or
combinations thereof): control a wager-based game played at the
gaming device; recognize facial features associated with a first
user that is interacting with the gaming device; initiate a first
action in response to identifying a recognized facial feature
associated with the first user; recognize facial expressions
associated with the first user; initiate a second action in
response to identifying a recognized facial expression associated
with the first user; enable the first user to participate in a
first gaming session at the gaming device; influence an outcome of
at least one event of the first gaming session in response to
identifying a recognized facial feature associated with the first
user; monitor events of the first gaming session in which the first
user is a participant; monitor the first user's facial expressions
during participation of at least one event of the first gaming
session; interpret a first facial expression made by the first user
during participation in a first event of the first gaming session;
create a first association between a first identified event of the
first gaming session and a first facial expression made by the
first user while participating in the first identified event;
create a first association between the first identified event of
the first gaming session and the interpretation of the user's first
facial expression made by the first user while participating in the
first identified event; track one or more eye movements associated
with the first user; identify at least one object being observed by
the first user in response to tracking one or more eye movements
associated with the user; identify a current position or location
of the first user's eyes; determine an amount of adjustment to be
made to content displayed on at least one display screen of the MLD
for facilitating improved visual alignment of content displayed on
the first and second display screens as observed from the
identified current position or location of the first user's eyes;
dynamically adjust display of content displayed on at least one
display screen of a multi-layer display (MLD) in a manner which
facilitates improved visual alignment of content displayed on the
first and second display screens as observed from the identified
current position or location of the first user's eyes; identify a
current position or location of the first user's head; dynamically
influence a behavior of a first virtual character displayed at the
first display to thereby cause the first virtual character to
appear to acknowledge a presence of the first user at the
identified current position or location; automatically determine,
using the first set of identified facial features, user demographic
information relating to the first user; automatically identify,
using the user demographic information, a first portion of user
targeted content specifically targeted toward a first portion of
the user demographic information; dynamically cause the first
portion of user targeted information to be displayed at the first
display in response to determining the first user's demographic
information using the first set of identified facial features.
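The MLD alignment operations recited above amount to a parallax correction: content on the rear display layer must be shifted so it lines up behind the front-layer content as seen from the tracked eye position. A minimal sketch, assuming a simple similar-triangles model with a hypothetical gap between the two layers and a hypothetical eye distance (the application does not specify the geometry):

```python
def rear_layer_offset(x_front: float, x_eye: float,
                      layer_gap: float, eye_distance: float) -> float:
    """Horizontal shift (same units as x_front) to apply to rear-layer
    content so it appears directly behind front-layer content at x_front,
    as observed from an eye at x_eye, eye_distance in front of the front
    layer. By similar triangles, the sight line through x_front crosses
    the rear layer at x_front + (x_front - x_eye) * layer_gap / eye_distance."""
    return (x_front - x_eye) * layer_gap / eye_distance

# Eye directly in line with the content: no shift needed.
print(rear_layer_offset(100.0, 100.0, layer_gap=10.0, eye_distance=500.0))  # 0.0
# Eye 100 units to the left: rear content shifts slightly right.
print(rear_layer_offset(100.0, 0.0, layer_gap=10.0, eye_distance=500.0))    # 2.0
```

The same calculation, applied per axis as the eye position updates, covers both the "determine an amount of adjustment" and "dynamically adjust display of content" operations above.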
[0016] A second aspect is directed to different methods, systems,
and computer program products for operating a product vending
machine which includes a display of one or more products. According
to different embodiments, the vending machine may include facial
detection and eye tracking component(s), and may be configured or
designed to facilitate, enable, initiate, and/or perform one or
more of the following operation(s), action(s), and/or feature(s)
(or combinations thereof): analyze captured image data for
recognition of one or more facial features of a consumer; detect
the presence of a person within a predefined proximity; record eye
tracking activity and related data, such as, for example, one or
more of the following (or combinations thereof):
region(s)/location(s) where consumer has observed (or is currently
observing); item(s)/product(s) which consumer has observed; length
of time consumer has observed each particular item/product; analyze
and process the received consumer viewing information; associate at
least a portion of the processed consumer viewing information with
the profile of a selected/identified consumer; report at least a
portion of the processed consumer viewing information to third-party
entities; automatically and/or dynamically generate one or
more targeted advertisements or promotions based on at least a
portion of the processed consumer viewing information; dynamically
adjust pricing information relating to one or more items viewed by
the consumer; dynamically adjust inventory management information
based on at least a portion of the processed consumer viewing
information; identify, recognize, and record the facial
characteristics of one or more consumer(s) in a manner which
enables the vending machine to automatically determine the identity
of a subsequently returning consumer; automatically record the
purchasing activity and/or viewing activity associated with an
identified consumer, and may associate such activities with that
consumer's profile; use the consumer's profile information (e.g.,
purchasing activity and/or viewing activity associated with the
identified consumer) to automatically generate one or more
dynamically generated, targeted promotions or purchase suggestions
to be presented to the consumer.
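The vending-machine viewing records described above, which item the consumer observed and for how long, reduce to a dwell-time aggregation over gaze samples. The sampling model and field names below are assumptions for illustration, not taken from the application:

```python
from collections import defaultdict

def dwell_times(gaze_samples, sample_period: float = 0.1) -> dict:
    """Aggregate per-product observation time from a stream of
    (timestamp, product_id) gaze samples captured every sample_period
    seconds. product_id is None when the consumer looks away from the
    product display."""
    totals = defaultdict(float)
    for _timestamp, product_id in gaze_samples:
        if product_id is not None:
            totals[product_id] += sample_period
    return dict(totals)

samples = [(0.0, "cola"), (0.1, "cola"), (0.2, None), (0.3, "chips")]
print(dwell_times(samples))
```

The resulting per-product totals are the kind of "length of time consumer has observed each particular item/product" record that could then be attached to a consumer profile or fed into targeted-promotion logic.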
[0017] A third aspect is directed to different methods, systems,
and computer program products for operating an intelligent TV which
includes facial detection and eye tracking component(s), and may be
configured or designed to facilitate, enable, initiate, and/or
perform one or more of the following operation(s), action(s),
and/or feature(s) (or combinations thereof): analyze captured image
data for recognition of one or more facial features of a viewer;
detect the presence of a person within a predefined proximity;
record eye tracking activity and related data, such as, for
example, one or more of the following (or combinations thereof):
region(s)/location(s) of the Intelligent TV display where viewer
has observed (or is currently observing); timestamp information;
concurrent content and/or program information being presented at
the Intelligent TV display (e.g., during times when a viewer's
viewing activities are being recorded); length of time viewer has
observed the Intelligent TV display (and/or specific regions
therein); automatically and/or dynamically monitor and record
information relating to: detection of one or more sets of eyes
viewing Intelligent TV display; timestamp information of detected
events; content being displayed on Intelligent TV display at
time(s) when viewer's eyes detected as viewing Intelligent TV
display; automatically and/or dynamically monitor and record
information relating to: detection of person(s) NOT viewing
Intelligent TV display; timestamp information of detected events;
content being displayed on Intelligent TV display at time(s) when
person(s) detected as NOT viewing Intelligent TV display; associate
at least a portion of the processed viewer viewing information with
the profile of a selected/identified viewer; report at least a
portion of the processed viewer viewing information to third-party
entities; automatically and/or dynamically generate one or
more targeted advertisements or promotions based on at least a
portion of the processed viewer viewing information; identify,
recognize, and record the facial characteristics of one or more
viewer(s) in a manner which enables the Intelligent TV to
automatically determine the identity of a subsequently returning
viewer; automatically identify and recognize the facial features of
the viewer, and may compare the recognized facial features to those
stored in the viewer profile database(s) in order to automatically
determine the identity of the viewer who is currently interacting
with the Intelligent TV; use the viewer's profile information
and/or demographic information to automatically generate one or
more dynamically generated, targeted promotions or viewing
suggestions to be presented to the viewer; automatically and/or
dynamically lower its audio output volume if no persons are
detected to be watching the Intelligent TV display; automatically
and/or dynamically increase its audio output volume (or return it
to its previous level) if at least one person is detected to be
watching the Intelligent TV display. Various objects, features and
advantages of the various aspects described or referenced herein
will become apparent from the following descriptions of its example
embodiments, which descriptions should be taken in conjunction with
the accompanying drawings.
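The volume behavior described above, lower the audio when no viewers are detected and restore the previous level when at least one viewer returns, can be sketched as a small state machine. The class name, volume levels, and update interface are hypothetical:

```python
class AutoVolume:
    """Hypothetical auto-volume behavior for the Intelligent TV: duck
    the volume when no viewers are detected, restore the previous
    level when at least one viewer is detected again."""

    def __init__(self, volume: int, ducked: int = 5):
        self.volume = volume
        self.ducked = ducked
        self._saved = None  # level to restore when a viewer returns

    def update(self, viewers_detected: int) -> int:
        if viewers_detected == 0 and self._saved is None:
            # No one watching: remember the level, then duck.
            self._saved = self.volume
            self.volume = min(self.volume, self.ducked)
        elif viewers_detected > 0 and self._saved is not None:
            # A viewer returned: restore the remembered level.
            self.volume = self._saved
            self._saved = None
        return self.volume

tv = AutoVolume(volume=30)
print(tv.update(0))  # 5  (no viewers detected: volume lowered)
print(tv.update(2))  # 30 (viewer detected: previous level restored)
```

Calling `update` on each detection cycle keeps the behavior idempotent: repeated "no viewer" frames do not overwrite the saved level, so the original volume survives until someone looks back at the display.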
Specific Example Embodiments
[0018] Various techniques will now be described in detail with
reference to a few example embodiments thereof as illustrated in
the accompanying drawings. In the following description, numerous
specific details are set forth in order to provide a thorough
understanding of one or more aspects and/or features described or
referenced herein. It will be apparent, however, to one skilled in
the art, that one or more aspects and/or features described or
referenced herein may be practiced without some or all of these
specific details. In other instances, well known process steps
and/or structures have not been described in detail in order to not
obscure some of the aspects and/or features described or referenced
herein.
[0019] One or more different inventions may be described in the
present application. Further, for one or more of the invention(s)
described herein, numerous embodiments may be described in this
patent application, and are presented for illustrative purposes
only. The described embodiments are not intended to be limiting in
any sense. One or more of the invention(s) may be widely applicable
to numerous embodiments, as is readily apparent from the
disclosure. These embodiments are described in sufficient detail to
enable those skilled in the art to practice one or more of the
invention(s), and it is to be understood that other embodiments may
be utilized and that structural, logical, software, electrical and
other changes may be made without departing from the scope of the
one or more of the invention(s). Accordingly, those skilled in the
art will recognize that the one or more of the invention(s) may be
practiced with various modifications and alterations. Particular
features of one or more of the invention(s) may be described with
reference to one or more particular embodiments or figures that
form a part of the present disclosure, and in which are shown, by
way of illustration, specific embodiments of one or more of the
invention(s). It should be understood, however, that such features
are not limited to usage in the one or more particular embodiments
or figures with reference to which they are described. The present
disclosure is neither a literal description of all embodiments of
one or more of the invention(s) nor a listing of features of one or
more of the invention(s) that must be present in all
embodiments.
[0020] Headings of sections provided in this patent application and
the title of this patent application are for convenience only, and
are not to be taken as limiting the disclosure in any way.
[0021] Devices that are in communication with each other need not
be in continuous communication with each other, unless expressly
specified otherwise. In addition, devices that are in communication
with each other may communicate directly or indirectly through one
or more intermediaries.
[0022] A description of an embodiment with several components in
communication with each other does not imply that all such
components are required. To the contrary, a variety of optional
components are described to illustrate the wide variety of possible
embodiments of one or more of the invention(s).
[0023] Further, although process steps, method steps, algorithms or
the like may be described in a sequential order, such processes,
methods and algorithms may be configured to work in alternate
orders. In other words, any sequence or order of steps that may be
described in this patent application does not, in and of itself,
indicate a requirement that the steps be performed in that order.
The steps of described processes may be performed in any order
practical. Further, some steps may be performed simultaneously
despite being described or implied as occurring non-simultaneously
(e.g., because one step is described after the other step).
Moreover, the illustration of a process by its depiction in a
drawing does not imply that the illustrated process is exclusive of
other variations and modifications thereto, does not imply that the
illustrated process or any of its steps are necessary to one or
more of the invention(s), and does not imply that the illustrated
process is preferred.
[0024] When a single device or article is described, it will be
readily apparent that more than one device/article (whether or not
they cooperate) may be used in place of a single device/article.
Similarly, where more than one device or article is described
(whether or not they cooperate), it will be readily apparent that a
single device/article may be used in place of the more than one
device or article.
[0025] The functionality and/or the features of a device may be
alternatively embodied by one or more other devices that are not
explicitly described as having such functionality/features. Thus,
other embodiments of one or more of the invention(s) need not
include the device itself.
[0026] Techniques and mechanisms described or referenced herein will
sometimes be described in singular form for clarity. However, it
should be noted that particular embodiments include multiple
iterations of a technique or multiple instantiations of a mechanism
unless noted otherwise.
[0027] Various aspects described or referenced herein are directed
to different methods, systems, and computer program products for
automated facial detection and eye tracking techniques implemented
in commercial and consumer environments. Examples of such
commercial and/or consumer environments may include, but are not
limited to, one or more of the following (or combinations thereof):
retail commercial environments; business/office environments;
gaming environments; consumer shopping environments; home
environments; etc.
[0028] FIG. 1 illustrates a simplified block diagram of a specific
example embodiment of a portion of a Computer Network 100. As
described in greater detail herein, different embodiments of
computer networks may be configured, designed, and/or operable to
provide various different types of operations, functionalities,
and/or features generally relating to facial detection and/or eye
tracking technology. Further, as described in greater detail
herein, many of the various operations, functionalities, and/or
features of the Computer Network(s) disclosed herein may enable or provide different types of advantages and/or benefits
to different entities interacting with the Computer Network(s).
[0029] According to different embodiments, the Computer Network 100
may include a plurality of different types of components, devices,
modules, processes, systems, etc., which, for example, may be
implemented and/or instantiated via the use of hardware and/or
combinations of hardware and software. For example, as illustrated
in the example embodiment of FIG. 1, the Computer Network 100 may
include one or more of the following types of systems, components,
devices, processes, etc. (or combinations thereof): [0030] Server
System(s) 120--In at least one embodiment, the Server System(s) may
be operable to perform and/or implement various types of functions,
operations, actions, and/or other features such as those described
or referenced herein (e.g., such as those illustrated and/or
described with respect to FIG. 6). [0031] Publisher/Content
Provider System component(s) 140 [0032] Client Computer System(s)
130 [0033] 3rd Party System(s) 150 [0034] Internet &
Cellular Network(s) 110 [0035] Remote Database System(s) 180 [0036]
Remote Server System(s)/Service(s) 170, which, for example, may
include, but are not limited to, one or more of the following (or
combinations thereof): [0037] Content provider servers/services
[0038] Media Streaming servers/services [0039] Database
storage/access/query servers/services [0040] Financial transaction
servers/services [0041] Payment gateway servers/services [0042]
Electronic commerce servers/services [0043] Event
management/scheduling servers/services [0044] Etc. [0045]
Commercial Device(s) 160, which, for example, may include, but are
not limited to, one or more of the following (or combinations
thereof): gaming machines, vending machines, televisions, kiosks,
consumer devices, smart phones, video game consoles, personal
computer systems, electronic display systems, etc. In at least one
embodiment, the Commercial Device(s) may be operable to perform
and/or implement various types of functions, operations, actions,
and/or other features such as those described or referenced herein
(e.g., such as those illustrated and/or described with respect to
FIG. 4). [0046] etc.
[0047] In at least one embodiment, a Commercial Device may be
configured or designed to include Facial Detection & Eye
Tracking Component(s) 192 which may be operable to facilitate
and/or provide one or more of the following
operation(s)/action(s)/feature(s) (or combinations thereof): [0048]
Facial Feature Detection functionality for identifying facial
features associated with a user that is interacting with the F/E
Commercial Device. [0049] Facial Expression Recognition
functionality for detecting and recognizing facial expressions
associated with a user that is interacting with the F/E Commercial
Device. [0050] Eye Tracking functionality for detecting and
tracking eye movements associated with a user that is interacting
with the F/E Commercial Device.
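By way of illustration only (this sketch is not part of the application, and all class, field, and method names are hypothetical), the three capabilities listed above might be organized behind a single facade:

```python
from dataclasses import dataclass

@dataclass
class FacialObservation:
    """One frame's worth of facial data captured by a device camera."""
    features: dict      # e.g. {"eyes": (x, y), "mouth": (x, y)}
    expression: str     # e.g. "smile", "frown", "neutral"
    gaze_point: tuple   # (x, y) screen coordinate the user is looking at

class FacialEyeTracker:
    """Hypothetical facade for the three F/E capabilities described above."""
    def __init__(self):
        self.gaze_history = []

    def detect_features(self, obs):
        # Facial Feature Detection: report the identified landmarks.
        return obs.features

    def recognize_expression(self, obs):
        # Facial Expression Recognition: classify the current expression.
        return obs.expression

    def track_eyes(self, obs):
        # Eye Tracking: accumulate gaze points over time.
        self.gaze_history.append(obs.gaze_point)
        return self.gaze_history

tracker = FacialEyeTracker()
obs = FacialObservation({"eyes": (120, 80)}, "smile", (300, 200))
```

A real implementation would derive `FacialObservation` from camera image analysis; here the observation is supplied directly, purely to show the division of responsibilities.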
[0051] For reference purposes, a commercial device which has been
configured or designed to provide Facial Feature Detection
functionality, Facial Expression Recognition functionality, and/or
Eye Tracking functionality may be referred to herein as a
"Facial/Eye-Enabled Commercial Device" or "F/E Commercial Device".
Similarly, a computer network which includes components for
providing Facial Feature Detection functionality, Facial Expression
Recognition functionality, and/or Eye Tracking functionality may be
referred to herein as a "Facial/Eye-Enabled Computer Network" or
"F/E Computer Network". The Computer Network 100 of FIG. 1
illustrates an example embodiment of an F/E Computer Network.
[0052] According to different embodiments, Facial Detection &
Eye Tracking Component(s) (e.g., 192, FIG. 1; 292, FIG. 2; 492,
FIG. 4) and/or Facial/Eye Tracking Analysis and Interpretation
Component(s) (e.g., 190, FIG. 1; 294, FIG. 2; 494, FIG. 4; 692,
FIG. 6) may be configured or designed to facilitate, initiate
and/or perform one or more of the following types of
operation(s)/action(s)/function(s) (or combinations thereof):
[0053] Detect the presence of a user within a predefined proximity
of the F/E Commercial Device. [0054] Detect the presence of a user
observing or interacting with the F/E Commercial Device. [0055]
Facial Feature Detection functionality for identifying facial
features associated with a user that is interacting with the F/E
Commercial Device. [0056] Facial Expression Recognition
functionality for detecting and recognizing facial expressions
associated with a user that is interacting with the F/E Commercial
Device. [0057] Eye Tracking functionality for detecting and
tracking eye movements associated with a user that is interacting
with the F/E Commercial Device. [0058] Map an identified facial expression
(e.g., performed by a user interacting with the F/E Commercial
Device) to one or more function(s). [0059] Initiate and/or perform
one or more action(s)/operation(s) in response to identifying a
recognized facial feature associated with a user interacting with
the F/E Commercial Device. [0060] Initiate and/or perform one or
more action(s)/operation(s) in response to identifying a recognized
facial expression associated with a user interacting with the F/E
Commercial Device. [0061] Influence game-related activities and/or
outcomes in response to identifying a recognized facial expression
associated with a user interacting with the F/E Commercial Device.
[0062] Initiate and/or perform one or more action(s)/operation(s)
in response to tracking one or more eye movements associated with a
user interacting with the F/E Commercial Device. [0063] Identify
one or more items being observed by a user (interacting with the
F/E Commercial Device) in response to tracking one or more eye
movements associated with the user. [0064] Create an association
between an identified facial expression (e.g., performed by a user
interacting with the F/E Commercial Device) and the user who
performed that facial expression. [0065] Automatically and/or
dynamically adjust the display of content being displayed on a
multi-layer display (MLD) device in response to detecting a
location of a user's eyes (e.g., wherein the user is interacting
with a F/E Commercial Device which includes the MLD display).
[0066] Track a user's head movements/positions to automatically
and/or dynamically adjust (e.g., in real-time) output display of
MLD content on each MLD screen in a manner which results in
improved alignment and display of MLD content from the perspective
of the user's current eyes/head position. [0067] Track a user's
head movements/positions to automatically and/or dynamically
improve alignment of front and/or rear (e.g., mask) displayed
content (e.g., in real time) in a manner which results in improved
visibility/presentation of the displayed content as viewed by the
user (e.g., as viewed from the perspective of the user's current
eyes/head position). [0068] Automatically and/or dynamically align
on screen objects to a viewer's perspective, creating a virtual
window effect. For example, in one embodiment, objects displayed in
the background will pan and move differently than objects displayed
in the foreground, based on the user's detected head movements. [0069]
Automatically and/or dynamically adjust display of characters
and/or objects (e.g., on an F/E Commercial Device display screen)
to reference a user's detected position or location (e.g., in
real-time). For example, a character may be automatically and/or
dynamically adjusted (e.g., in real-time) to look in the direction
of a user viewing the display screen, and to wave at the user.
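The virtual-window effect of paragraph [0068] amounts to shifting each on-screen layer opposite to the viewer's head movement, by an amount that shrinks with the layer's apparent depth. A minimal sketch, with a hypothetical linear depth scaling (the patent does not specify one):

```python
def parallax_offset(head_offset_px, layer_depth):
    """
    Shift an on-screen object opposite to the viewer's detected head
    movement, scaled by how 'deep' the object sits behind the virtual
    window. layer_depth: 0.0 = foreground (tracks the head fully),
    1.0 = distant background (barely moves). The linear scaling here
    is illustrative only.
    """
    return -head_offset_px * (1.0 - layer_depth)

# Viewer's head moves 40 px to the right of screen center:
fg = parallax_offset(40, 0.0)   # foreground shifts fully
bg = parallax_offset(40, 0.8)   # background shifts far less
```

Because foreground and background layers pan by different amounts, the display behaves like a window onto a 3D scene from the viewer's perspective.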
[0070] Automatically and/or dynamically adjust display of
characters and/or objects (e.g., on an F/E Commercial Device
display screen) based on the detected number of live (e.g.,
in-person) viewers looking at (or observing) the screen. [0071]
Automatically and/or dynamically adjust the size of displayed
characters and/or objects (e.g., on an F/E Commercial Device
display screen) based on detected location and/or detected distance
of a user interacting with the device. For example, in one
embodiment, the F/E Commercial Device may be configured or designed
to determine how far a user's head (or body) is from the display
screen, and may respond by automatically and/or dynamically
resizing (e.g., in real-time) displayed characters and/or objects
so that they are more easily readable/recognizable by the user.
[0072] Capture image data using F/E Commercial Device camera
component(s), and analyze captured image data for recognition of
facial features such as, for example, one or more of the following
(or combinations thereof): eyes; nostrils; nose; mouth region; chin
region; etc. [0073] Enable independent/individual facial/eye
tracking activities to be simultaneously performed for multiple
different users (e.g., who are standing in front of a multiple
display array). [0074] Coordinate identification and tracking of
movements of a given user across different displays of a multiple
display array (e.g., as the user walks past the different displays
of the multiple display array). [0075] Automatically and
dynamically modify content displayed on selected displays of a
multiple display array in response to tracked movements and/or
recognized facial expressions of a given user. [0076] Detect and
analyze facial features of a user that is interacting with the F/E
Commercial Device in order to identify and/or determine user
demographic information relating to the user. For example, in one
embodiment, the F/E Commercial Device (working in conjunction with
a server system) may be configured or designed to detect and
analyze facial features of a user and automatically and/or
dynamically (e.g., in real-time) determine that the user is a
Caucasian female whose age is estimated to be between 21-29 years
old. [0077] Automatically and dynamically generate, alter, and/or
supplement advertising or displayed content based on the
demographics of the audience deemed to be viewing the selected
display. For example, in at least one embodiment, the F/E
Commercial Device (working in conjunction with a server system) may
be configured or designed to detect and analyze facial features of
a user and automatically and/or dynamically determine that the user
is a Caucasian female whose age is estimated to be between 21-29
years old. Using this identified user demographic information, the
F/E Commercial Device may automatically and/or dynamically display
(e.g., in real-time) updated content (e.g., game-related content,
marketing/promotional content, advertising content, etc.) which is
specifically targeted towards one or more demographic groups
associated with the identified user, such as, for example, one or
more of the following (or combinations thereof): Caucasian females;
Persons between the ages of 21-29 years old; Caucasian females
between the ages of 21-29 years old; etc. [0078] Automatically and
dynamically alter or supplement advertising or displayed content
based on the demographics of the audience deemed to be viewing the
selected display. [0079] Display an avatar or character which
interacts with the viewing audience based on relative location,
movement and demographics of the audience deemed to be viewing the
selected display. [0080] Record eye tracking activity and related
data, such as, for example, one or more of the following (or
combinations thereof): region(s)/location(s) where user has
observed; item(s)/product(s) which user has observed; length of
time user has observed a particular item/product. [0081] Determine,
using user eye tracking data, identity of object(s) which user is
observing or viewing. [0082] Automatically and/or dynamically lower
TV audio output volume if no persons are detected to be watching a
TV display. [0083] Automatically and/or dynamically increase TV
audio output volume if at least one person is detected to be
watching a TV display. [0084] Monitor and record information
relating to: detection of one or more sets of eyes viewing TV
display; timestamp information of detected events; content being
displayed on TV display at time(s) when viewer's eyes detected as
viewing TV display. [0085] Monitor and record information relating
to: detection of person(s) NOT viewing TV display; timestamp
information of detected events; content being displayed on TV
display at time(s) when person(s) detected as NOT viewing TV
display.
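The TV volume behavior of paragraphs [0082]-[0083] can be sketched as a small rule: mute down when no viewers are detected, restore when at least one is. The specific volume levels are hypothetical, not taken from the application.

```python
def adjust_tv_volume(current_volume, viewers_detected,
                     idle_volume=5, normal_volume=30):
    """
    Lower the TV audio output when no one is detected watching the
    display, and restore it when at least one viewer is detected.
    The idle/normal levels are illustrative placeholders.
    """
    if viewers_detected == 0:
        return min(current_volume, idle_volume)
    return max(current_volume, normal_volume)

muted = adjust_tv_volume(30, viewers_detected=0)      # room empties
restored = adjust_tv_volume(muted, viewers_detected=2)  # viewers return
```

In practice the viewer count would come from the eye-detection events that paragraphs [0084]-[0085] describe recording, along with their timestamps.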
[0086] In at least one embodiment, a F/E Commercial Device may be
operable to detect gross motion or gross movement of a user. For
example, in one embodiment, a F/E Commercial Device may include
motion detection component(s) which may be operable to detect gross
motion or gross movement of a user's body and/or appendages such
as, for example, hands, fingers, arms, head, etc.
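Gross-motion detection of the kind described in paragraph [0086] is often approximated by frame differencing. The sketch below is a deliberately simplified, dependency-free version (real motion detection components would be considerably more sophisticated); frames are modeled as lists of grayscale pixel rows, and all thresholds are hypothetical.

```python
def detect_gross_motion(prev_frame, curr_frame, threshold=10, min_changed=4):
    """
    Compare two grayscale frames and flag gross motion when enough
    pixels change by more than `threshold` intensity levels.
    """
    changed = sum(
        1
        for prev_row, curr_row in zip(prev_frame, curr_frame)
        for p, c in zip(prev_row, curr_row)
        if abs(p - c) > threshold
    )
    return changed >= min_changed

still = [[50] * 8 for _ in range(8)]
moved = [row[:] for row in still]
for r in range(2, 6):          # a hand-sized region brightens
    for c in range(2, 6):
        moved[r][c] = 200
```

Here `detect_gross_motion(still, moved)` reports motion because sixteen pixels changed, while comparing a frame against itself does not.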
[0087] According to different embodiments, at least some F/E
Computer Network(s) may be configured, designed, and/or operable to
provide a number of different advantages and/or benefits and/or may
be operable to initiate, and/or enable various different types of
operations, functionalities, and/or features, such as, for example,
one or more of those described or referenced herein.
[0088] According to different embodiments, at least a portion of
the various types of functions, operations, actions, and/or other
features provided by the F/E Computer Network 100 may be
implemented at one or more client system(s), at one or more server
system(s), and/or combinations thereof.
[0089] According to different embodiments, the F/E Computer Network
may be operable to utilize and/or generate various different types
of data and/or other types of information when performing specific
tasks and/or operations. This may include, for example, input
data/information and/or output data/information. For example, in at
least one embodiment, the F/E Computer Network may be operable to
access, process, and/or otherwise utilize information from one or
more different types of sources, such as, for example, one or more
local and/or remote memories, devices and/or systems. Additionally,
in at least one embodiment, the F/E Computer Network may be
operable to generate one or more different types of output
data/information, which, for example, may be stored in memory of
one or more local and/or remote devices and/or systems. Examples of
different types of input data/information and/or output
data/information which may be accessed and/or utilized by the F/E
Computer Network may include, but are not limited to, one or more
of those described and/or referenced herein.
[0090] According to specific embodiments, multiple instances or
threads of the F/E Computer Network may be concurrently implemented
and/or initiated via the use of one or more processors and/or other
combinations of hardware and/or hardware and software. For example,
in at least some embodiments, various aspects, features, and/or
functionalities of the F/E Computer Network may be performed,
implemented and/or initiated by one or more of the various systems,
components, systems, devices, procedures, processes, etc.,
described and/or referenced herein.
[0091] In at least one embodiment, a given instance of the F/E
Computer Network may access and/or utilize information from one or
more associated databases. In at least one embodiment, at least a
portion of the database information may be accessed via
communication with one or more local and/or remote memory devices.
Examples of different types of data which may be accessed by the
F/E Computer Network may include, but are not limited to, one or
more of those described and/or referenced herein.
[0092] According to different embodiments, one or more different
threads or instances of the F/E Computer Network may be initiated
in response to detection of one or more conditions or events
satisfying one or more different types of minimum threshold
criteria for triggering initiation of at least one instance of the
F/E Computer Network. Various examples of conditions or events
which may trigger initiation and/or implementation of one or more
different threads or instances of the F/E Computer Network may
include, but are not limited to, one or more of those described
and/or referenced herein.
[0093] It will be appreciated that the F/E Computer Network of FIG.
1 is but one example from a wide range of Computer Network
embodiments which may be implemented. Other embodiments of the F/E
Computer Network (not shown) may include additional, fewer and/or
different components/features than those illustrated in the example
Computer Network embodiment of FIG. 1.
[0094] Generally, the facial detection and eye tracking techniques
described herein may be implemented in hardware and/or
hardware+software. For example, they can be implemented in an
operating system kernel, in a separate user process, in a library
package bound into network applications, on a specially constructed
machine, or on a network interface card. In a specific embodiment,
various aspects described herein may be implemented in software
such as an operating system or in an application running on an
operating system.
[0095] Hardware and/or software+hardware hybrid embodiments of the
facial detection and eye tracking techniques described herein may
be implemented on a general-purpose programmable machine
selectively activated or reconfigured by a computer program stored
in memory. Such programmable machines may include, for example,
mobile or handheld computing systems, PDAs, smart phones, notebook
computers, tablets, netbooks, desktop computing systems, server
systems, cloud computing systems, network devices, etc.
[0096] FIG. 2 is a simplified block diagram of an exemplary gaming
machine 200 in accordance with a specific embodiment. As
illustrated in the embodiment of FIG. 2, gaming machine 200
includes at least one processor 210, at least one interface 206,
and memory 216.
[0097] In one implementation, processor 210 and master game
controller 212 are included in a logic device 213 enclosed in a
logic device housing. The processor 210 may include any
conventional processor or logic device configured to execute
software allowing various configuration and reconfiguration tasks
such as, for example: a) communicating with a remote source via
communication interface 206, such as a server that stores
authentication information or games; b) converting signals read by
an interface to a format corresponding to that used by software or
memory in the gaming machine; c) accessing memory to configure or
reconfigure game parameters in the memory according to indicia read
from the device; d) communicating with interfaces, various
peripheral devices 222 and/or I/O devices; e) operating peripheral
devices 222 such as, for example, card readers, paper ticket
readers, etc.; f) operating various I/O devices such as, for
example, displays 235, input devices 230; etc. For instance, the
processor 210 may send messages including game play information to
the displays 235 to inform players of cards dealt, wagering
information, and/or other desired information.
[0098] The gaming machine 200 also includes memory 216 which may
include, for example, volatile memory (e.g., RAM 209), non-volatile
memory 219 (e.g., disk memory, FLASH memory, EPROMs, etc.),
unalterable memory (e.g., EPROMs 208), etc. The memory may be
configured or designed to store, for example: 1) configuration
software 214 such as all the parameters and settings for a game
playable on the gaming machine; 2) associations 218 between
configuration indicia read from a device with one or more
parameters and settings; 3) communication protocols allowing the
processor 210 to communicate with peripheral devices 222 and I/O
devices 211; 4) a secondary memory storage device 215 such as a
non-volatile memory device, configured to store gaming software
related information (the gaming software related information and
memory may be used to store various audio files and games not
currently being used and invoked in a configuration or
reconfiguration); 5) communication transport protocols (such as,
for example, TCP/IP, USB, FireWire, IEEE 1394, Bluetooth, IEEE
802.11x (IEEE 802.11 standards), HiperLAN/2, HomeRF, etc.) for
allowing the gaming machine to communicate with local and non-local
devices using such protocols; etc. In one implementation, the
master game controller 212 communicates using a serial
communication protocol. A few examples of serial communication
protocols that may be used to communicate with the master game
controller include but are not limited to USB, RS-232 and Netplex
(a proprietary protocol developed by IGT, Reno, Nev.).
[0099] A plurality of device drivers 242 may be stored in memory
216. Examples of different types of device drivers may include
device drivers for gaming machine components, device drivers for
peripheral components 222, etc. Typically, the device drivers 242
utilize a communication protocol of some type that enables
communication with a particular physical device. The device driver
abstracts the hardware implementation of a device. For example, a
device driver may be written for each type of card reader that may
be potentially connected to the gaming machine. Examples of
communication protocols used to implement the device drivers
include Netplex, USB, serial, Ethernet 275, FireWire, I/O
debouncer, direct memory map, PCI, parallel, RF,
Bluetooth™, near-field communications (e.g., using near-field
magnetics), 802.11 (WiFi), etc. Netplex is a proprietary IGT
standard while the others are open standards. According to a
specific embodiment, when one type of a particular device is
exchanged for another type of the particular device, a new device
driver may be loaded from the memory 216 by the processor 210 to
allow communication with the device. For instance, one type of card
reader in gaming machine 200 may be replaced with a second type of
card reader where device drivers for both card readers are stored
in the memory 216.
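The device-driver exchange just described (swapping one card reader type for another, with drivers for both already stored in memory 216) can be sketched as a registry lookup. All names here are hypothetical, and the stored "drivers" are stand-in objects.

```python
class DriverRegistry:
    """
    Sketch of the driver-swap behavior described above: drivers for
    every supported device type are kept in memory, and when the
    installed device changes, the matching driver is loaded.
    """
    def __init__(self):
        self.stored = {}      # device type -> driver object
        self.loaded = None

    def store(self, device_type, driver):
        self.stored[device_type] = driver

    def attach(self, device_type):
        # On device exchange, look up and load the new device's driver.
        if device_type not in self.stored:
            raise KeyError(f"no driver stored for {device_type!r}")
        self.loaded = self.stored[device_type]
        return self.loaded

registry = DriverRegistry()
registry.store("cardreader_type_a", "driver_a")
registry.store("cardreader_type_b", "driver_b")
registry.attach("cardreader_type_a")
registry.attach("cardreader_type_b")   # reader swapped; new driver loaded
```

The key point mirrored from the text is that both drivers reside in memory before the swap, so attaching the second reader requires only a lookup, not an install.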
[0100] In some embodiments, the software units stored in the memory
216 may be upgraded as needed. For instance, when the memory 216 is
a hard drive, new games, game options, various new parameters, new
settings for existing parameters, new settings for new parameters,
device drivers, and new communication protocols may be uploaded to
the memory from the master game controller 212 or from some other
external device. As another example, when the memory 216 includes a
CD/DVD drive including a CD/DVD designed or configured to store
game options, parameters, and settings, the software stored in the
memory may be upgraded by replacing a first CD/DVD with a second
CD/DVD. In yet another example, when the memory 216 uses one or
more flash memory 219 or EPROM 208 units designed or configured to
store games, game options, parameters, settings, the software
stored in the flash and/or EPROM memory units may be upgraded by
replacing one or more memory units with new memory units which
include the upgraded software. In another embodiment, one or more
of the memory devices, such as the hard-drive, may be employed in a
game software download process from a remote software server.
[0101] In some embodiments, the gaming machine 200 may also include
various authentication and/or validation components 244 which may
be used for authenticating/validating specified gaming machine
components such as, for example, hardware components, software
components, firmware components, information stored in the gaming
machine memory 216, etc. Examples of various authentication and/or
validation components are described in U.S. Pat. No. 6,620,047,
titled, "ELECTRONIC GAMING APPARATUS HAVING AUTHENTICATION DATA
SETS," incorporated herein by reference in its entirety for all
purposes.
[0102] Peripheral devices 222 may include several device interfaces
such as, for example: transponders 254, wired/wireless power
distribution components 258, input device(s) 230, sensors 260,
audio and/or video devices 262 (e.g., cameras, speakers, etc.),
wireless communication components 256, mobile device function
control components 262, side wagering management components 264,
etc.
[0103] Sensors 260 may include, for example, optical sensors,
pressure sensors, RF sensors, Infrared sensors, image sensors,
thermal sensors, biometric sensors, etc. Such sensors may be used
for a variety of functions such as, for example, detecting the
presence and/or identity of various persons (e.g., players, casino
employees, etc.), devices (e.g., mobile devices), and/or systems
within a predetermined proximity to the gaming machine. In one
implementation, at least a portion of the sensors 260 and/or input
devices 230 may be implemented in the form of touch keys selected
from a wide variety of commercially available touch keys used to
provide electrical control signals. Alternatively, some of the
touch keys may be implemented in another form, such as the touch
sensors provided by a touchscreen display. For
example, in at least one implementation, the gaming machine player
displays and/or mobile device displays may include input
functionality for allowing players to provide desired information
(e.g., game play instructions and/or other input) to the gaming
machine, game table and/or other gaming system components using the
touch keys and/or other player control sensors/buttons.
Additionally, such input functionality may also be used for
allowing players to provide input to other devices in the casino
gaming network (such as, for example, player tracking systems, side
wagering systems, etc.).
[0104] Wireless communication components 256 may include one or
more communication interfaces having different architectures and
utilizing a variety of protocols such as, for example, 802.11
(WiFi), 802.15 (including Bluetooth™), 802.16 (WiMax), 802.22,
Cellular standards such as CDMA, CDMA2000, WCDMA, Radio Frequency
(e.g., RFID), Infrared, Near Field Magnetic communication
protocols, etc. The communication links may transmit electrical,
electromagnetic or optical signals which carry digital data streams
or analog signals representing various types of information.
[0105] Power distribution components 258 may include, for example,
components or devices which are operable for providing wired or
wireless power to other devices. For example, in one
implementation, the power distribution components 258 may include a
magnetic induction system which is adapted to provide wireless
power to one or more mobile devices near the gaming machine. In one
implementation, a mobile device docking region may be provided
which includes a power distribution component that is able to
recharge a mobile device without requiring metal-to-metal
contact.
[0106] In at least one embodiment, mobile device function control
components 262 may be operable to control operating mode selection
functionality, features, and/or components associated with one or
more mobile devices (e.g., 250). In at least one embodiment, mobile
device function control components 262 may be operable to remotely
control and/or configure components of one or more mobile devices
250 based on various parameters and/or upon detection of specific
events or conditions such as, for example: time of day, player
activity levels; location of the mobile device; identity of mobile
device user; user input; system override (e.g., emergency condition
detected); proximity to other devices belonging to same group or
association; proximity to specific objects, regions, zones,
etc.
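The event/condition-driven control of paragraph [0106] is essentially a prioritized rule evaluation. A minimal sketch, in which every rule, context key, and action name is hypothetical:

```python
def mobile_control_action(context):
    """
    Map detected events/conditions (system override, time of day,
    restricted zone) to a mobile-device configuration action.
    Rule set and priorities are illustrative only.
    """
    if context.get("emergency"):
        return "system_override"
    if context.get("hour") is not None and not (8 <= context["hour"] < 22):
        return "mute_notifications"
    if context.get("zone") == "restricted":
        return "disable_camera"
    return "no_change"

# An emergency condition overrides all other rules:
action = mobile_control_action({"emergency": True, "hour": 12})
```

The ordering of the checks encodes priority, so a system override (e.g., an emergency condition) always wins over time-of-day or location rules.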
[0107] In at least one embodiment, side wagering management
components 264 may be operable to manage side wagering activities
associated with one or more side wager participants. Side wagering
management components 264 may also be operable to manage or control
side wagering functionality associated with one or more mobile
devices 250. In accordance with at least one embodiment, side
wagers may be associated with specific events in a wager-based game
whose outcomes are uncertain at the time the side wager is made. The events
may also be associated with particular players, gaming devices
(e.g., EGMs), game themes, bonuses, denominations, and/or
paytables. In embodiments where the wager-based game is being
played by multiple players, in one embodiment the side wagers may
be made by participants who are not players of the game, and who
are thus at least one level removed from the actual play of the
game.
[0108] In instances where side wagers are made on events that
depend at least in part on the skill of a particular player, it may
be beneficial to provide observers (e.g., side wager participants)
with information which is useful for determining whether a
particular side wager should be placed, and/or for helping to
determine the amount of such side wager. In at least one
embodiment, side wagering management components 264 may be operable
to manage and/or facilitate data access to player ratings,
historical game play data, historical payout data, etc. For
example, in one embodiment, a player rating for a player of the
wager-based game may be computed based on historical data
associated with past play of the wager-based game by that player in
accordance with a pre-determined algorithm. The player rating for
a particular player may be displayed to other players and/or
observers, possibly at the option (or permission) of the player. By
using player ratings in the consideration of making side wagers,
decisions by observers to make side wagers on certain events need
not be made completely at random. Player ratings may also be
employed by the players themselves to aid them in determining
potential opponents, for example.
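The application leaves the rating algorithm unspecified; one possible (purely illustrative) choice is an exponentially weighted average of past game outcomes, so that a player's recent play counts most toward the rating shown to observers.

```python
def player_rating(history, decay=0.9):
    """
    Hypothetical rating: exponentially weighted average of past game
    outcomes (1 = win, 0 = loss), ordered newest first, so recent
    results dominate. The decay constant is illustrative.
    """
    if not history:
        return 0.0
    weights = [decay ** i for i in range(len(history))]
    return sum(w * o for w, o in zip(weights, history)) / sum(weights)

# A player who won three of the last four games, including the most recent:
rating = player_rating([1, 1, 0, 1])
```

An observer deciding whether to place a side wager could compare such ratings across players, rather than wagering at random as the text notes.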
[0109] Facial Detection & Eye Tracking Component(s) 292 may
include one or more camera(s) and/or other types of image capturing
component(s). In at least one embodiment, Facial Detection &
Eye Tracking Component(s) 292 and/or Facial/Eye Tracking Analysis
and Interpretation Component(s) 294 may be configured or designed
to facilitate and/or provide one or more of the following
operation(s)/action(s)/feature(s) (or combinations thereof): [0110]
Facial Feature Detection functionality for identifying facial
features associated with a user that is interacting with the Gaming
Machine. [0111] Facial Expression Recognition functionality for
detecting and recognizing facial expressions associated with a user
that is interacting with the Gaming Machine. [0112] Eye Tracking
functionality for detecting and tracking eye movements associated with
a user that is interacting with the Gaming Machine. [0113] Other
types of functions, features, operations, and/or procedures
described herein.
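One step of the eye-tracking functionality listed above, interpreting raw gaze samples into per-region dwell times, can be sketched as follows; the fixation logic, region names, and data layout are illustrative assumptions, not the application's actual algorithm:

```python
def dwell_times(gaze_samples, regions):
    """Accumulate per-region dwell time from timestamped gaze samples.

    gaze_samples: list of (t_seconds, x, y) tuples from an eye tracker,
                  ordered by time.
    regions: dict mapping a region name to an (x0, y0, x1, y1) screen rectangle.
    Returns a dict of region name -> total seconds the gaze stayed inside it.
    """
    totals = {name: 0.0 for name in regions}
    # Credit each inter-sample interval to the region containing the
    # earlier sample's gaze point.
    for (t0, x, y), (t1, _, _) in zip(gaze_samples, gaze_samples[1:]):
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x < x1 and y0 <= y < y1:
                totals[name] += t1 - t0
                break
    return totals
```

Output of this form is what downstream analysis and interpretation components could consume, e.g., to infer which display element a user of the Gaming Machine is attending to.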
[0114] In other embodiments (not shown), other peripheral devices
may include player tracking devices, card readers, bill
validator/paper ticket readers, etc. Such devices may each comprise
resources for handling and processing configuration indicia such as
a microcontroller that converts voltage levels for one or more
scanning devices to signals provided to processor 210. In one
embodiment, application software for interfacing with peripheral
devices 222 may store instructions (such as, for example, how to
read indicia from a portable device) in a memory device such as,
for example, non-volatile memory, hard drive or a flash memory.
[0115] In at least one implementation, the gaming machine may
include card readers, such as those used with credit cards, or other
identification code reading devices to allow or require player
identification in connection with play of the card game and
associated recording of game action. Such a user identification
interface can be implemented in the form of a variety of magnetic
card readers commercially available for reading user-specific
identification information. The user-specific information can be
provided on specially constructed magnetic cards issued by a
casino, or magnetically coded credit cards or debit cards
frequently used with national credit organizations such as
VISA.TM., MASTERCARD.TM., banks and/or other institutions.
[0116] The gaming machine may include other types of participant
identification mechanisms, such as a fingerprint reader, an eye
blood vessel image reader, or another device that captures suitable
biological information to confirm the identity of the user. Still
further, it is possible to
provide such participant identification information by having the
dealer manually code in the information in response to the player
indicating his or her code name or real name. Such additional
identification could also be used to confirm credit use of a smart
card, transponder, and/or player's mobile device.
[0117] It will be apparent to those skilled in the art that other
memory types, including various computer readable media, may be
used for storing and executing program instructions pertaining to
the operation of the EGMs described herein. Because such information and
program instructions may be employed to implement the
systems/methods described herein, example embodiments may relate to
machine-readable media that include program instructions, state
information, etc. for performing various operations described
herein. Examples of machine-readable media include, but are not
limited to, magnetic media such as hard disks, floppy disks, and
magnetic tape; optical media such as CD-ROM disks; magneto-optical
media such as floptical disks; and hardware devices that are
specially configured to store and perform program instructions,
such as read-only memory devices (ROM) and random access memory
(RAM). Example embodiments may also be embodied in a carrier wave
traveling over an appropriate medium such as airwaves, optical
lines, electric lines, etc. Examples of program instructions
include both machine code, such as produced by a compiler, and
files including higher level code that may be executed by the
computer using an interpreter.
[0118] FIG. 3 shows a diagrammatic representation of a machine in the
exemplary form of a client (or end user) computer system 300 within
which a set of instructions, for causing the machine to perform any
one or more of the methodologies discussed herein, may be executed.
In alternative embodiments, the machine operates as a standalone
device or may be connected (e.g., networked) to other machines. In
a networked deployment, the machine may operate in the capacity of
a server or a client machine in a server-client network environment,
or as a peer machine in a peer-to-peer (or distributed) network
environment. The machine may be a personal computer (PC), a tablet
PC, a set-top box (STB), a Personal Digital Assistant (PDA), a
cellular telephone, a web appliance, a network router, switch or
bridge, or any machine capable of executing a set of instructions
(sequential or otherwise) that specify actions to be taken by that
machine. Further, while only a single machine is illustrated, the
term "machine" shall also be taken to include any collection of
machines that individually or jointly execute a set (or multiple
sets) of instructions to perform any one or more of the
methodologies discussed herein.
[0119] The exemplary computer system 300 includes a processor 302
(e.g., a central processing unit (CPU), a graphics processing unit
(GPU) or both), a main memory 304 and a static memory 306, which
communicate with each other via a bus 308. The computer system 300
may further include a video display unit 310 (e.g., a liquid
crystal display (LCD) or a cathode ray tube (CRT)). The computer
system 300 also includes an alphanumeric input device 312 (e.g., a
keyboard), a user interface (UI) navigation device 314 (e.g., a
mouse), a disk drive unit 316, a signal generation device 318
(e.g., a speaker) and a network interface device 320.
[0120] The disk drive unit 316 includes a machine-readable medium
322 on which is stored one or more sets of instructions and data
structures (e.g., software 324) embodying or utilized by any one or
more of the methodologies or functions described herein. The
software 324 may also reside, completely or at least partially,
within the main memory 304 and/or within the processor 302 during
execution thereof by the computer system 300, the main memory 304
and the processor 302 also constituting machine-readable media.
[0121] The software 324 may further be transmitted or received over
a network 326 via the network interface device 320 utilizing any
one of a number of well-known transfer protocols (e.g., HTTP).
[0122] While the machine-readable medium 322 is shown in an
exemplary embodiment to be a single medium, the term
"machine-readable medium" should be taken to include a single
medium or multiple media (e.g., a centralized or distributed
database, and/or associated caches and servers) that store the one
or more sets of instructions. The term "machine-readable medium"
shall also be taken to include any medium that is capable of
storing, encoding or carrying a set of instructions for execution
by the machine and that cause the machine to perform any one or
more of the methodologies of the present invention, or that is
capable of storing, encoding or carrying data structures utilized
by or associated with such a set of instructions. The term
"machine-readable medium" shall accordingly be taken to include,
but not be limited to, solid-state memories, optical and magnetic
media, and carrier wave signals. Although an embodiment of the
present invention has been described with reference to specific
exemplary embodiments, it will be evident that various
modifications and changes may be made to these embodiments without
departing from the broader spirit and scope of the invention.
Accordingly, the specification and drawings are to be regarded in
an illustrative rather than a restrictive sense.
[0123] According to various embodiments, Client Computer System 300
may include a variety of components, modules and/or systems for
providing various types of functionality. For example, in at least
one embodiment, Client Computer System 300 may include a web
browser application which is operable to process, execute, and/or
support the use of scripts (e.g., JavaScript, AJAX, etc.),
Plug-ins, executable code, virtual machines, vector-based web
animation (e.g., Adobe Flash), etc.
[0124] In at least one embodiment, the web browser application may
be configured or designed to instantiate components and/or objects
at the Client Computer System in response to processing scripts,
instructions, and/or other information received from a remote
server such as a web server. Examples of such components and/or
objects may include, but are not limited to, one or more of the
following (or combinations thereof): [0125] UI Components such as
those illustrated, described, and/or referenced herein. [0126]
Database Components such as those illustrated, described, and/or
referenced herein. [0127] Processing Components such as those
illustrated, described, and/or referenced herein. [0128] Other
Components which, for example, may include components for
facilitating and/or enabling the Client Computer System to perform
and/or initiate various types of operations, activities, functions
such as those described herein.
[0129] In at least one embodiment, Client Computer System 300 may
be configured or designed to include Facial Detection & Eye
Tracking Component(s) (not shown) which, for example, may include
one or more camera(s) and/or other types of image capturing
component(s). In some embodiments, Client Computer System may also
be configured or designed to include Facial/Eye Tracking Analysis
and Interpretation Component(s) (not shown). According to different
embodiments, the Facial Detection & Eye Tracking Component(s)
and/or Facial/Eye Tracking Analysis and Interpretation Component(s)
may be configured or designed to facilitate and/or provide one or
more of the following operation(s)/action(s)/feature(s) (or
combinations thereof): [0130] Facial Feature Detection
functionality for identifying facial features associated with a
user that is interacting with the Client Computer System. [0131]
Facial Expression Recognition functionality for detecting and
recognizing facial expressions associated with a user that is
interacting with the Client Computer System. [0132] Eye Tracking
functionality for detecting and tracking eye movements associated with
a user that is interacting with the Client Computer System. [0133]
Other types of functions, features, operations, and/or procedures
described herein.
[0134] FIG. 4 is a simplified block diagram of an exemplary
Facial/Eye-Enabled Commercial Device 400 in accordance with a
specific embodiment. In at least one embodiment, the F/E Commercial
Device may be configured or designed to include hardware components
and/or hardware+software components for enabling or implementing at
least a portion of the various facial detection and/or eye tracking
techniques described and/or referenced herein.
[0135] According to specific embodiments, various aspects,
features, and/or functionalities of the F/E Commercial Device may
be performed, implemented and/or initiated by one or more of the
following types of systems, components, devices,
procedures, processes, etc. (or combinations thereof): Processor(s)
410; Device Drivers 442; Memory 416; Interface(s) 406; Power
Source(s)/Distribution 443; Geolocation module 446; Display(s) 435;
I/O Devices 430; Audio/Video devices(s) 439; Peripheral Devices
431; Motion Detection module 440; User
Identification/Authentication module 447; Client App Component(s)
460; Other Component(s) 468; UI Component(s) 462; Database
Component(s) 464; Processing Component(s) 466; Software/Hardware
Authentication/Validation 444; Wireless communication module(s)
445; Information Filtering module(s) 449; Operating mode selection
component 448; Speech Processing module 454; Scanner/Camera 452;
OCR Processing Engine 456; Facial Detection & Eye Tracking
Component(s) 492; etc.
[0136] As illustrated in the example of FIG. 4, F/E Commercial
Device 400 may include a variety of components, modules and/or
systems for providing various types of functionality. For example,
as illustrated in FIG. 4, F/E Commercial Device 400 may include
Commercial Device Application components (e.g., 460), which, for
example, may include, but are not limited to, one or more of the
following (or combinations thereof): [0137] UI Components 462 such
as those illustrated, described, and/or referenced herein. [0138]
Database Components 464 such as those illustrated, described,
and/or referenced herein. [0139] Processing Components 466 such as
those illustrated, described, and/or referenced herein. [0140]
Other Components 468 which, for example, may include components for
facilitating and/or enabling the F/E Commercial Device to perform
and/or initiate various types of operations, activities, functions
such as those described herein.
[0141] In at least one embodiment, the F/E Commercial Device
Application component(s) may be operable to perform and/or
implement various types of functions, operations, actions, and/or
other features such as, for example, one or more of those described
and/or referenced herein.
[0142] According to specific embodiments, multiple instances or
threads of the F/E Commercial Device Application component(s) may
be concurrently implemented and/or initiated via the use of one or
more processors and/or other combinations of hardware and/or
hardware and software. For example, in at least some embodiments,
various aspects, features, and/or functionalities of the F/E
Commercial Device Application component(s) may be performed,
implemented and/or initiated by one or more of the various systems,
components, devices, procedures, processes, etc.,
described and/or referenced herein.
[0143] According to different embodiments, one or more different
threads or instances of the F/E Commercial Device Application
component(s) may be initiated in response to detection of one or
more conditions or events satisfying one or more different types of
minimum threshold criteria for triggering initiation of at least
one instance of the F/E Commercial Device Application component(s).
Various examples of conditions or events which may trigger
initiation and/or implementation of one or more different threads
or instances of the F/E Commercial Device Application component(s)
may include, but are not limited to, one or more of those described
and/or referenced herein.
[0144] In at least one embodiment, a given instance of the F/E
Commercial Device Application component(s) may access and/or
utilize information from one or more associated databases. In at
least one embodiment, at least a portion of the database
information may be accessed via communication with one or more
local and/or remote memory devices. Examples of different types of
data which may be accessed by the F/E Commercial Device Application
component(s) may include, but are not limited to, one or more of
those described and/or referenced herein.
[0145] According to different embodiments, F/E Commercial Device
400 may further include, but is not limited to, one or more of the
following types of components, modules and/or systems (or
combinations thereof): [0146] At least one processor 410. In at
least one embodiment, the processor(s) 410 may include one or more
commonly known CPUs which are deployed in many of today's consumer
electronic devices, such as, for example, CPUs or processors from
the Motorola or Intel family of microprocessors, etc. In an
alternative embodiment, at least one processor may be specially
designed hardware for controlling the operations of the client
system. In a specific embodiment, a memory (such as non-volatile
RAM and/or ROM) also forms part of the CPU. When acting under the
control of appropriate software or firmware, the CPU may be
responsible for implementing specific functions associated with the
functions of a desired network device. The CPU preferably
accomplishes all these functions under the control of software
including an operating system, and any appropriate applications
software. [0147] Memory 416, which, for example, may include
volatile memory (e.g., RAM), non-volatile memory (e.g., disk
memory, FLASH memory, EPROMs, etc.), unalterable memory, and/or
other types of memory. In at least one implementation, the memory
416 may include functionality similar to at least a portion of
functionality implemented by one or more commonly known memory
devices such as those described herein and/or generally known to
one having ordinary skill in the art. According to different
embodiments, one or more memories or memory modules (e.g., memory
blocks) may be configured or designed to store data, program
instructions for the functional operations of the client system
and/or other information relating to the functionality of the
various facial detection and eye tracking techniques described
herein. The program instructions may control the operation of an
operating system and/or one or more applications, for example. The
memory or memories may also be configured to store data structures,
metadata, timecode synchronization information, audio/visual media
content, asset file information, keyword taxonomy information,
advertisement information, and/or information/data relating to
other features/functions described herein. Because such information
and program instructions may be employed to implement at least a
portion of the facial detection and eye tracking techniques
described herein, various aspects described herein may be
implemented using machine readable media that include program
instructions, state information, etc. Examples of machine-readable
media include, but are not limited to, magnetic media such as hard
disks, floppy disks, and magnetic tape; optical media such as
CD-ROM disks; magneto-optical media such as floptical disks; and
hardware devices that are specially configured to store and perform
program instructions, such as read-only memory devices (ROM) and
random access memory (RAM). Examples of program instructions
include both machine code, such as produced by a compiler, and
files containing higher level code that may be executed by the
computer using an interpreter. [0148] Interface(s) 406 which, for
example, may include wired interfaces and/or wireless interfaces.
In at least one implementation, the interface(s) 406 may include
functionality similar to at least a portion of functionality
implemented by one or more computer system interfaces such as those
described herein and/or generally known to one having ordinary
skill in the art. For example, in at least one implementation, the
wireless communication interface(s) may be configured or designed
to communicate with selected electronic game tables, computer
systems, remote servers, other wireless devices (e.g., PDAs, cell
phones, player tracking transponders, etc.), etc. Such wireless
communication may be implemented using one or more wireless
interfaces/protocols such as, for example, 802.11 (WiFi), 802.15
(including Bluetooth.TM.), 802.16 (WiMax), 802.22, Cellular
standards such as CDMA, CDMA2000, WCDMA, Radio Frequency (e.g.,
RFID), Infrared, Near Field Magnetics, etc. [0149] Device driver(s)
442. In at least one implementation, the device driver(s) 442 may
include functionality similar to at least a portion of
functionality implemented by one or more computer system driver
devices such as those described herein and/or generally known to
one having ordinary skill in the art. [0150] At least one power
source (and/or power distribution source) 443. In at least one
implementation, the power source may include at least one mobile
power source (e.g., battery) for allowing the client system to
operate in a wireless and/or mobile environment. For example, in
one implementation, the power source 443 may be implemented using a
rechargeable, thin-film type battery. Further, in embodiments where
it is desirable for the device to be flexible, the power source 443
may be designed to be flexible. [0151] Geolocation module 446
which, for example, may be configured or designed to acquire
geolocation information from remote sources and use the acquired
geolocation information to determine information relating to a
relative and/or absolute position of the client system. [0152]
Motion detection component 440 for detecting motion or movement of
the client system and/or for detecting motion, movement, gestures
and/or other input data from user. In at least one embodiment, the
motion detection component 440 may include one or more motion
detection sensors such as, for example, MEMS (Micro Electro
Mechanical System) accelerometers that can detect the acceleration
and/or other movements of the client system as it is moved by a
user. [0153] User Identification/Authentication module 447. In one
implementation, the User Identification module may be adapted to
determine and/or authenticate the identity of the current user or
owner of the client system. For example, in one embodiment, the
current user may be required to perform a log in process at the
client system in order to access one or more features.
Alternatively, the client system may be adapted to automatically
determine the identity of the current user based upon one or more
external signals such as, for example, an RFID tag or badge worn by
the current user which provides a wireless signal to the client
system for determining the identity of the current user. In at
least one implementation, various security features may be
incorporated into the client system to prevent unauthorized users
from accessing confidential or sensitive information. [0154] One or
more display(s) 435. According to various embodiments, such
display(s) may be implemented using, for example, LCD display
technology, OLED display technology, and/or other types of
conventional display technology. In at least one implementation,
display(s) 435 may be adapted to be flexible or bendable.
Additionally, in at least one embodiment the information displayed
on display(s) 435 may utilize e-ink technology (such as that
available from E Ink Corporation, Cambridge, Mass., www.eink.com),
or other suitable technology for reducing the power consumption of
information displayed on the display(s) 435. [0155] One or more
user I/O Device(s) 430 such as, for example, keys, buttons, scroll
wheels, cursors, touchscreen sensors, audio command interfaces,
magnetic strip reader, optical scanner, etc. [0156] Audio/Video
device(s) 439 such as, for example, components for displaying
audio/visual media which, for example, may include cameras,
speakers, microphones, media presentation components, wireless
transmitter/receiver devices for enabling wireless audio and/or
visual communication between the client system 400 and remote
devices (e.g., radios, telephones, computer systems, etc.). For
example, in one implementation, the audio system may include
componentry for enabling the client system to function as a cell
phone or two-way radio device. [0157] Other types of peripheral
devices 431 which may be useful to the users of various client
systems, such as, for example: PDA functionality; memory card
reader(s); fingerprint reader(s); image projection device(s);
social networking peripheral component(s); etc. [0158] Information
filtering module(s) 449 which, for example, may be adapted to
automatically and dynamically generate, using one or more filter
parameters, filtered information to be displayed on one or more
displays of the mobile device. In one implementation, such filter
parameters may be customizable by the player or user of the device.
In some embodiments, information filtering module(s) 449 may also
be adapted to display, in real-time, filtered information to the
user based upon a variety of criteria such as, for example,
geolocation information, casino data information, player tracking
information, etc. [0159] Wireless communication module(s) 445. In
one implementation, the wireless communication module 445 may be
configured or designed to communicate with external devices using
one or more wireless interfaces/protocols such as, for example,
802.11 (WiFi), 802.15 (including Bluetooth.TM.), 802.16 (WiMax),
802.22, Cellular standards such as CDMA, CDMA2000, WCDMA, Radio
Frequency (e.g., RFID), Infrared, Near Field Magnetics, etc. [0160]
Software/Hardware Authentication/validation components 444 which,
for example, may be used for authenticating and/or validating local
hardware and/or software components, hardware/software components
residing at a remote device, game play information, wager
information, user information and/or identity, etc. Examples of
various authentication and/or validation components are described
in U.S. Pat. No. 6,620,047, titled, "ELECTRONIC GAMING APPARATUS
HAVING AUTHENTICATION DATA SETS," incorporated herein by reference
in its entirety for all purposes. [0161] Operating mode selection
component 448 which, for example, may be operable to automatically
select an appropriate mode of operation based on various parameters
and/or upon detection of specific events or conditions such as, for
example: the mobile device's current location; identity of current
user; user input; system override (e.g., emergency condition
detected); proximity to other devices belonging to same group or
association; proximity to specific objects, regions, zones, etc.
Additionally, the mobile device may be operable to automatically
update or switch its current operating mode to the selected mode of
operation. The mobile device may also be adapted to automatically
modify accessibility of user-accessible features and/or information
in response to the updating of its current mode of operation.
[0162] Scanner/Camera Component(s) (e.g., 452) which may be
configured or designed for use in scanning identifiers and/or other
content from other devices and/or objects such as for example:
mobile device displays, computer displays, static displays (e.g.,
printed on tangible mediums), etc. [0163] OCR Processing Engine
(e.g., 456) which, for example, may be operable to perform image
processing and optical character recognition of images such as
those captured by a mobile device camera, for example. [0164]
Speech Processing module (e.g., 454) which, for example, may be
operable to perform speech recognition, and may be operable to
perform speech-to-text conversion. [0165] Facial Detection &
Eye Tracking Component(s) 492 (e.g., which may include, for
example, one or more camera(s) and/or other types of image
capturing components), and/or Facial/Eye Tracking Analysis and
Interpretation Component(s) 494. In at least one embodiment, Facial
Detection & Eye Tracking Component(s) 492 and/or Facial/Eye
Tracking Analysis and Interpretation Component(s) 494 may be
configured or designed to facilitate and/or provide one or more of
the following operation(s)/action(s)/feature(s) (or combinations
thereof): [0166] Facial Feature Detection functionality for
identifying facial features associated with a user that is
interacting with the F/E Commercial Device. [0167] Facial
Expression Recognition functionality for detecting and recognizing
facial expressions associated with a user that is interacting with
the F/E Commercial Device. [0168] Eye Tracking functionality
for detecting and tracking eye movements associated with a user that is
interacting with the F/E Commercial Device. [0169] Other types of
functions, features, operations, and/or procedures described
herein. [0170] Etc.
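Of the components enumerated above, the operating mode selection component 448 lends itself to a brief sketch. The mode names and rule ordering below are assumptions for illustration; the application describes only the triggering conditions, not a particular selection algorithm:

```python
def select_operating_mode(in_gaming_zone, user_authenticated, emergency=False):
    """Pick a device operating mode from simple prioritized rules (illustrative).

    A system override (e.g., a detected emergency condition) takes priority;
    otherwise user identity and the device's current location gate access to
    gaming features, mirroring the conditions described for component 448.
    """
    if emergency:
        return "lockdown"    # system override, e.g., emergency condition detected
    if not user_authenticated:
        return "restricted"  # identity of current user unknown: limit features
    if in_gaming_zone:
        return "gaming"      # authorized user inside an approved zone
    return "lobby"           # authorized user outside gaming areas
```

On each rule evaluation the device could then automatically switch its current operating mode to the selected mode and adjust the accessibility of user-accessible features accordingly.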
[0171] According to a specific embodiment, the F/E Commercial
Device may be adapted to implement at least a portion of the
features associated with the mobile game service system described
in U.S. patent application Ser. No. 10/115,164, which is now U.S.
Pat. No. 6,800,029, issued Oct. 5, 2004, (previously incorporated
by reference in its entirety). For example, in one embodiment, the
F/E Commercial Device may be comprised of a hand-held game service
user interface device (GSUID) and a number of input and output
devices. The GSUID is generally comprised of a display screen which
may display a number of game service interfaces. These game service
interfaces are generated on the display screen by a microprocessor
of some type within the GSUID. Examples of a hand-held GSUID which
may accommodate the game service interfaces are manufactured by
Symbol Technologies, Incorporated of Holtsville, N.Y.
[0172] The game service interfaces may be used to provide a variety
of game service transactions and gaming operations services. The
game service interfaces may include a login interface, an
input/output interface, a transaction reconciliation interface, a
ticket validation interface, a prize services interface, a food
services interface, an accommodation services interface, a gaming
operations interface, a multi-game/multi-denomination meter data
transfer interface, etc. Each interface may be accessed via a main
menu with a number of sub-menus that allow a game service
representative to access the different display screens relating to
the particular interface. Using the different display screens
within a particular interface, the game service representative may
perform various operations needed to provide a particular game
service. For example, the login interface may allow the game
service representative to enter a user identification of some type
and verify the user identification with a password. When the
display screen is a touch screen, the user may enter the
user/operator identification information on a display screen
comprising the login interface using the input stylus and/or using
the input buttons. Using a menu on the display screen of the login
interface, the user may select other display screens relating to
the login and registration process. For example, another display
screen obtained via a menu on a display screen in the login
interface may allow the GSUID to scan a finger print of the game
service representative for identification purposes or scan the
finger print of a game player.
[0173] The user identification information and user validation
information may allow the game service representative to access all
or some subset of the available game service interfaces available
on the GSUID. For example, certain users, after logging into the
GSUID (e.g., entering a user identification and a valid
password), may be able to access a variety of
different interfaces, such as, for example, one or more of:
input/output interface, communication interface, food services
interface, accommodation services interface, prize service
interface, gaming operation services interface, transaction
reconciliation interface, voice communication interface, gaming
device performance or metering data transfer interface, etc.; and
perform a variety of services enabled by such interfaces. Other
users, by contrast, may only be able to access the award ticket
validation interface and perform EZ pay ticket validations. The
GSUID may also output game service transaction information to a
number of different devices (e.g., card reader, printer, storage
devices, gaming machines and remote transaction servers, etc.).
[0174] In addition to the features described above, various
embodiments of mobile devices described herein may also include
additional functionality for displaying, in real-time, filtered
information to the user based upon a variety of criteria such as,
for example, geolocation information, casino data information,
player tracking information, etc.
[0175] FIG. 5 illustrates an example embodiment of a Server System
580 which may be used for implementing various aspects/features
described herein. In at least one embodiment, the Server System 580
includes at least one network device 560, and at least one storage
device 570 (such as, for example, a direct attached storage
device). In one embodiment, Server System 580 may be suitable for
implementing at least some of the facial detection and eye tracking
techniques described herein.
[0176] According to one embodiment, network device 560 may
include a master central processing unit (CPU) 562, interfaces 568,
and a bus 567 (e.g., a PCI bus). When acting under the control of
appropriate software or firmware, the CPU 562 may be responsible
for implementing specific functions associated with the functions
of a desired network device. For example, when configured as a
server, the CPU 562 may be responsible for analyzing packets;
encapsulating packets; forwarding packets to appropriate network
devices; instantiating various types of virtual machines, virtual
interfaces, virtual storage volumes, virtual appliances; etc. The
CPU 562 preferably accomplishes at least a portion of these
functions under the control of software including an operating
system (e.g., Linux), and any appropriate system software (such as,
for example, AppLogic.TM. software).
[0177] CPU 562 may include one or more processors 563 such as, for
example, one or more processors from the AMD, Motorola, Intel
and/or MIPS families of microprocessors. In an alternative
embodiment, processor 563 may be specially designed hardware for
controlling the operations of Server System 580. In a specific
embodiment, a memory 561 (such as non-volatile RAM and/or ROM) also
forms part of CPU 562. However, there may be many different ways in
which memory could be coupled to the system. Memory block 561 may
be used for a variety of purposes such as, for example, caching
and/or storing data, programming instructions, etc.
[0178] The interfaces 568 may typically be provided as interface
cards (sometimes referred to as "line cards"). Alternatively, one
or more of the interfaces 568 may be provided as on-board interface
controllers built into the system motherboard. Generally, they
control the sending and receiving of data packets over the network
and sometimes support other peripherals used with the Server System
580. Among the interfaces that may be provided may be FC
interfaces, Ethernet interfaces, frame relay interfaces, cable
interfaces, DSL interfaces, token ring interfaces, Infiniband
interfaces, and the like. In addition, various very high-speed
interfaces may be provided, such as fast Ethernet interfaces,
Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS
interfaces, FDDI interfaces, ASI interfaces, DHEI interfaces and
the like. Other interfaces may include one or more wireless
interfaces such as, for example, 802.11 (WiFi) interfaces, 802.15
interfaces (including Bluetooth.TM.), 802.16 (WiMax) interfaces,
802.22 interfaces, Cellular standards such as CDMA interfaces,
CDMA2000 interfaces, WCDMA interfaces, TDMA interfaces, Cellular 3G
interfaces, etc.
[0179] Generally, one or more interfaces may include ports
appropriate for communication with the relevant media. In some
cases, they may also include an independent processor and, in some
instances, volatile RAM. The independent processors may control
such communications intensive tasks as packet switching, media
control and management. By providing separate processors for the
communications intensive tasks, these interfaces allow the master
microprocessor 562 to efficiently perform routing computations,
network diagnostics, security functions, etc.
[0180] In at least one embodiment, some interfaces may be
configured or designed to allow the Server System 580 to
communicate with other network devices associated with various
local area network (LANs) and/or wide area networks (WANs). Other
interfaces may be configured or designed to allow network device
560 to communicate with one or more direct attached storage
device(s) 570.
[0181] Although the system shown in FIG. 5 illustrates one specific
network device described herein, it is by no means the only network
device architecture on which one or more embodiments can be
implemented. For example, an architecture having a single processor
that handles communications as well as routing computations, etc.
may be used. Further, other types of interfaces and media could
also be used with the network device.
[0182] Regardless of the network device's configuration, it may employ
one or more memories or memory modules (such as, for example,
memory block 565, which, for example, may include random access
memory (RAM)) configured to store data, program instructions for
the general-purpose network operations and/or other information
relating to the functionality of the various facial detection and
eye tracking techniques described herein. The program instructions
may control the operation of an operating system and/or one or more
applications, for example. The memory or memories may also be
configured to store data structures, and/or other specific
non-program information described herein.
[0183] Because such information and program instructions may be
employed to implement the systems/methods described herein, one or
more embodiments relate to machine-readable media that include
program instructions, state information, etc. for performing
various operations described herein. Examples of machine-readable
storage media include, but are not limited to, magnetic media such
as hard disks, floppy disks, and magnetic tape; optical media such
as CD-ROM disks; magneto-optical media such as floptical disks; and
hardware devices that may be specially configured to store and
perform program instructions, such as read-only memory devices
(ROM) and random access memory (RAM). Some embodiments may also be
embodied in transmission media such as, for example, a carrier wave
travelling over an appropriate medium such as airwaves, optical
lines, electric lines, etc. Examples of program instructions
include both machine code, such as produced by a compiler, and
files containing higher level code that may be executed by the
computer using an interpreter.
[0184] FIG. 6 illustrates an example of a functional block diagram
of a Server System 600 in accordance with a specific embodiment. In
at least one embodiment, the Server System 600 may be operable to
perform and/or implement various types of functions, operations,
actions, and/or other features such as, for example, one or more of
those illustrated, described, and/or referenced herein.
[0185] In at least one embodiment, the Server System may include a
plurality of components operable to perform and/or implement
various types of functions, operations, actions, and/or other
features such as, for example, one or more of the following (or
combinations thereof): [0186] Context Interpreter (e.g., 602)
which, for example, may be operable to automatically and/or
dynamically analyze contextual criteria relating to one or more
detected event(s) and/or condition(s), and automatically determine
or identify one or more contextually appropriate response(s) based
on the contextual interpretation of the detected
event(s)/condition(s). According to different embodiments, examples
of contextual criteria which may be analyzed may include, but are
not limited to, one or more of the following (or combinations
thereof): location-based criteria (e.g., geolocation of client
device, geolocation of agent device, etc.); time-based criteria;
identity of Client user; identity of Agent user; user profile
information; transaction history information; recent user
activities; proximate business-related criteria (e.g., criteria
which may be used to determine whether the client device is
currently located at or near a recognized business establishment
such as a bank, gas station, restaurant, supermarket, etc.); etc.
[0187] Time Synchronization Engine (e.g., 604) which, for example,
may be operable to manage universal time synchronization (e.g.,
via NTP and/or GPS) [0188] Search Engine (e.g., 628) which, for
example, may be operable to search for transactions, logs, items,
accounts, options in the TIS databases [0189] Configuration Engine
(e.g., 632) which, for example, may be operable to determine and
handle configuration of various customized configuration parameters
for one or more devices, component(s), system(s), process(es), etc.
[0190] Time Interpreter (e.g., 618) which, for example, may be
operable to automatically and/or dynamically modify or change
identifier activation and expiration time(s) based on various
criteria such as, for example, time, location, transaction status,
etc. [0191] Authentication/Validation Component(s) (e.g., 647)
(password, software/hardware info, SSL certificates) which, for
example, may be operable to perform various types of
authentication/validation tasks such as, for example, one or more
of the following (or combinations thereof):
verifying/authenticating devices, verifying passwords, passcodes,
SSL certificates, biometric identification information, and/or
other types of security-related information; verifying/validating
activation and/or expiration times; etc. In one implementation, the
Authentication/Validation Component(s) may be adapted to determine
and/or authenticate the identity of the current user or owner of
the mobile client system. For example, in one embodiment, the
current user may be required to perform a log in process at the
mobile client system in order to access one or more features. In
some embodiments, the mobile client system may include biometric
security components which may be operable to validate and/or
authenticate the identity of a user by reading or scanning the
user's biometric information (e.g., fingerprints, face, voice,
eye/iris, etc.). In at least one implementation, various security
features may be incorporated into the mobile client system to
prevent unauthorized users from accessing confidential or sensitive
information. [0192] Transaction Processing Engine (e.g., 622)
which, for example, may be operable to handle various types of
transaction processing tasks such as, for example, one or more of
the following (or combinations thereof): identifying/determining
transaction type; determining which payment gateway(s) to use;
associating databases information to identifiers; etc. [0193] OCR
Processing Engine (e.g., 634) which, for example, may be operable
to perform image processing and optical character recognition of
images such as those captured by a mobile device camera, for
example. [0194] Database Manager (e.g., 626) which, for example,
may be operable to handle various types of tasks relating to
database updating, database management, database access, etc. In at
least one embodiment, the Database Manager may be operable to
manage TISS databases, Gaming Device Application databases, etc.
[0195] Log Component(s) (e.g., 610) which, for example, may be
operable to generate and manage transaction history logs, system
errors, connections from APIs, etc. [0196] Status Tracking
Component(s) (e.g., 612) which, for example, may be operable to
automatically and/or dynamically determine, assign, and/or report
updated transaction status information based, for example, on the
state of the transaction. In at least one embodiment, the status of
a given transaction may be reported as one or more of the following
(or combinations thereof): Completed, Incomplete, Pending, Invalid,
Error, Declined, Accepted, etc. [0197] Gateway Component(s) (e.g.,
614) which, for example, may be operable to facilitate and manage
communications and transactions with external Payment Gateways.
[0198] Web Interface Component(s) (e.g., 608) which, for example,
may be operable to facilitate and manage communications and
transactions with TIS web portal(s). [0199] API Interface(s) to
Server System(s) (e.g., 646) which, for example, may be operable to
facilitate and manage communications and transactions with API
Interface(s) to Server System(s). [0200] API Interface(s) to 3rd
Party Server System(s) (e.g., 648) which, for example, may be
operable to facilitate and manage communications and transactions
with API Interface(s) to 3rd Party Server System(s). [0202] At
least one processor 610. In at least one
embodiment, the processor(s) 610 may include one or more commonly
known CPUs which are deployed in many of today's consumer
electronic devices, such as, for example, CPUs or processors from
the Motorola or Intel family of microprocessors, etc. In an
alternative embodiment, at least one processor may be specially
designed hardware for controlling the operations of the mobile
client system. In a specific embodiment, a memory (such as
non-volatile RAM and/or ROM) also forms part of CPU. When acting
under the control of appropriate software or firmware, the CPU may
be responsible for implementing specific functions associated with
the functions of a desired network device. The CPU preferably
accomplishes all these functions under the control of software
including an operating system, and any appropriate applications
software. [0203] Memory 616, which, for example, may include
volatile memory (e.g., RAM), non-volatile memory (e.g., disk
memory, FLASH memory, EPROMs, etc.), unalterable memory, and/or
other types of memory. In at least one implementation, the memory
616 may include functionality similar to at least a portion of
functionality implemented by one or more commonly known memory
devices such as those described herein and/or generally known to
one having ordinary skill in the art. According to different
embodiments, one or more memories or memory modules (e.g., memory
blocks) may be configured or designed to store data, program
instructions for the functional operations of the mobile client
system and/or other information relating to the functionality of
the various Mobile Transaction techniques described herein. The
program instructions may control the operation of an operating
system and/or one or more applications, for example. The memory or
memories may also be configured to store data structures, metadata,
identifier information/images, and/or information/data relating to
other features/functions described herein. Because such information
and program instructions may be employed to implement at least a
portion of the F/E Computer Network techniques described herein,
various aspects described herein may be implemented using machine
readable media that include program instructions, state
information, etc. Examples of machine-readable media include, but
are not limited to, magnetic media such as hard disks, floppy
disks, and magnetic tape; optical media such as CD-ROM disks;
magneto-optical media such as floptical disks; and hardware devices
that are specially configured to store and perform program
instructions, such as read-only memory devices (ROM) and random
access memory (RAM). Examples of program instructions include both
machine code, such as produced by a compiler, and files containing
higher level code that may be executed by the computer using an
interpreter. [0204] Interface(s) 606 which, for example, may
include wired interfaces and/or wireless interfaces. In at least
one implementation, the interface(s) 606 may include functionality
similar to at least a portion of functionality implemented by one
or more computer system interfaces such as those described herein
and/or generally known to one having ordinary skill in the art.
[0205] Device driver(s) 642. In at least one implementation, the
device driver(s) 642 may include functionality similar to at least
a portion of functionality implemented by one or more computer
system driver devices such as those described herein and/or
generally known to one having ordinary skill in the art. [0206] One
or more display(s) 635. According to various embodiments, such
display(s) may be implemented using, for example, LCD display
technology, OLED display technology, and/or other types of
conventional display technology. In at least one implementation,
display(s) 635 may be adapted to be flexible or bendable.
Additionally, in at least one embodiment the information displayed
on display(s) 635 may utilize e-ink technology (such as that
available from E Ink Corporation, Cambridge, Mass., www.eink.com),
or other suitable technology for reducing the power consumption of
information displayed on the display(s) 635. [0207] Email Server
Component(s) 636, which, for example, may be configured or designed
to provide various functions and operations relating to email
activities and communications. [0208] Web Server Component(s) 637,
which, for example, may be configured or designed to provide
various functions and operations relating to web server activities
and communications. [0209] Messaging Server Component(s) 638,
which, for example, may be configured or designed to provide
various functions and operations relating to text messaging and/or
other social network messaging activities and/or communications.
[0210] Facial/Eye Tracking Analysis and Interpretation Component(s)
694, which, for example, may be configured or designed to
facilitate and/or provide one or more of the following feature(s)
(or combinations thereof). [0211] Facial Feature Detection
functionality for analyzing user image data (e.g., images of users
captured by one or more cameras of a F/E Commercial Device) and
identifying facial features. [0212] Facial Expression Recognition
functionality for detecting and recognizing facial expressions
associated with one or more users that may be interacting with one
or more F/E Commercial Device(s). [0213] Eye Tracking functionality
for analyzing tracked eye movement data associated with a given
user that is interacting with a particular F/E Commercial Device.
[0214] Functionality for monitoring, tracking, recording and/or
storing information relating to facial feature detection, facial
expression recognition, and/or eye tracking functionality.
[0215] Functionality for mapping an identified facial expression
(e.g., performed by a user interacting with a F/E Commercial
Device) to one or more function(s). [0216] Functionality for
initiating and/or performing one or more action(s)/operation(s) in
response to identifying a recognized facial feature associated with
a user interacting with a F/E Commercial Device. [0217]
Functionality for initiating and/or performing one or more
action(s)/operation(s) in response to identifying a recognized
facial expression associated with a user interacting with a F/E
Commercial Device. [0218] Functionality for initiating and/or
performing one or more action(s)/operation(s) in response to
tracking one or more eye movements associated with a user
interacting with a F/E Commercial Device. [0219] Functionality for
identifying one or more items being observed by a user (interacting
with a F/E Commercial Device) in response to tracking one or more
eye movements associated with the user. [0220] Functionality for
creating an association between an identified facial expression
(e.g., performed by a user interacting with a F/E Commercial
Device) and the user who performed that facial expression. [0221]
Functionality for automatically and/or dynamically adjusting the
display of content being displayed on a multi-layer display (MLD)
device in response to detecting a location of a user's eyes (e.g.,
wherein the user is interacting with a F/E Commercial Device which
includes the MLD display). [0222] Functionality for tracking a
user's head movements/positions to automatically and/or dynamically
adjust (e.g., in real-time) output display of MLD content on each
MLD screen in a manner which results in improved alignment and
display of MLD content from the perspective of the user's current
eyes/head position. [0223] Functionality for tracking a user's head
movements/positions to automatically and/or dynamically improve
alignment of front and/or rear (e.g., mask) displayed content
(e.g., in real time) in a manner which results in improved
visibility/presentation of the displayed content as viewed by the
user (e.g., as viewed from the perspective of the user's current
eyes/head position). [0224] Functionality for automatically and/or
dynamically aligning on screen objects to a viewer's perspective,
creating a virtual window effect. For example, in one embodiment,
objects displayed in the background will pan and move differently
than objects displayed in the foreground, based on user's detected
head movements. [0225] Functionality for automatically and/or
dynamically adjusting display of characters and/or objects (e.g.,
on an F/E Commercial Device display screen) to reference a user's
detected position or location (e.g., in real-time). For example, a
character may be automatically and/or dynamically adjusted (e.g.,
in real-time) to look in the direction of a user viewing the
display screen, and to wave at the user. [0226] Functionality for
automatically and/or dynamically adjusting display of characters
and/or objects (e.g., on an F/E Commercial Device display screen)
based on the detected number of live (e.g., in-person) viewers
looking at (or observing) the screen. [0227] Functionality for
automatically and/or dynamically adjusting the size of displayed
characters and/or objects (e.g., on an F/E Commercial Device
display screen) based on detected location and/or detected distance
of a user interacting with the device. For example, in one
embodiment, a F/E Commercial Device may be configured or designed
to determine how far a user's head (or body) is from the display
screen, and may respond by automatically and/or dynamically
resizing (e.g., in real-time) displayed characters and/or objects
so that they are more easily readable/recognizable by the user.
[0228] Functionality for capturing image data using F/E Commercial
Device camera component(s), and analyzing captured image data for
recognition of facial features such as, for example, one or more of
the following (or combinations thereof): eyes; nostrils; nose;
mouth region; chin region; etc. [0229] Functionality for enabling
independent/individual facial/eye tracking activities to be
simultaneously performed for multiple different users (e.g., who
are standing in front of a multiple display array). [0230]
Functionality for coordinating identification and tracking of
movements of a given user across different displays of a multiple
display array (e.g., as the user walks past the different displays
of the multiple display array). [0231] Functionality for
automatically and dynamically modifying content displayed on
selected displays of a multiple display array in response to
tracked movements and/or recognized facial expressions of a given
user. [0232] Functionality for recording eye tracking activity and
related data, such as, for example, one or more of the following
(or combinations thereof): region(s)/location(s) where user has
observed; item(s)/product(s) which user has observed; length of
time user has observed a particular item/product. [0233]
Functionality for determining, using user eye tracking data,
identity of object(s) which user is observing or viewing. [0234]
Functionality for detecting and analyzing facial features of a user
that is interacting with the F/E Commercial Device in order to
identify and/or determine user demographic information relating to
the user. [0235] Functionality for automatically and dynamically
altering or supplementing advertising or displayed content based on
the demographics of the audience deemed to be viewing the selected
display. [0236] Functionality for influencing game-related
activities and/or outcomes in response to identifying one or more
recognized facial expression(s) associated with a user interacting
with the F/E Commercial Device. [0237] Etc.
Illustrative Examples of Facial-Eye Enabled Commercial Device
Embodiments
Example Facial Detection/Eye Tracking in Gaming Environments
[0238] FIG. 7 shows an illustrative example of a gaming machine 710
which has been configured or designed to include facial detection
and eye tracking functionality in accordance with a specific
embodiment.
[0239] As illustrated in the example embodiment of FIG. 7, gaming
machine 710 has been adapted to include one or more cameras (e.g.,
712a-d) capable of capturing images (and/or videos) of one or more
players interacting with the gaming machine, and/or capable of
capturing images/videos of other persons within a given proximity
to the gaming machine.
[0240] In at least one embodiment, at least one camera (e.g., 712a)
may be installed in the front portion of the gaming machine cabinet
for viewing/monitoring user/player movements, facial expressions,
eye tracking, etc. The camera may be used to capture images, and
the gaming machine may be operable to analyze the captured image
data for recognition of one or more facial features of a user
(e.g., 740) such as, for example, eyes; nostrils; nose; mouth
region; chin region; and/or other facial features.
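A minimal sketch of locating the facial features named above, assuming a face bounding box has already been obtained from some detector (for example, a Haar-cascade or neural-network face detector); the proportion constants are illustrative assumptions, not values from this application:

```python
# Given a detected face bounding box, estimate sub-regions for the facial
# features named in the text (eyes, nose, mouth, chin) using typical facial
# proportions. All proportion constants below are assumptions.

def feature_regions(face):
    """face = (x, y, w, h); returns a dict of (x, y, w, h) sub-regions."""
    x, y, w, h = face
    return {
        "eyes":  (x, y + h // 4, w, h // 4),                      # upper-middle band
        "nose":  (x + w // 4, y + h // 3, w // 2, h // 3),        # central band
        "mouth": (x + w // 5, y + 2 * h // 3, 3 * w // 5, h // 4),
        "chin":  (x + w // 4, y + 5 * h // 6, w // 2, h // 6),    # lower band
    }
```

Each sub-region could then be passed to a feature-specific classifier or tracker.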
[0241] In some embodiments, the gaming machine may be configured or
designed to automatically and/or dynamically align displayed
objects (e.g., viewable at one or more display screens of the
gaming machine) to a viewer's perspective, thereby creating a
virtual window effect. For example, in one embodiment, objects
displayed in the background may be caused to pan and move
differently than objects displayed in the foreground, based on a
player's detected head movements and detected locations/positions
of the user's head and/or eyes.
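The virtual window effect can be sketched as depth-weighted parallax, in which an object's horizontal shift decreases with its assigned depth; the gain constant and depth convention below are assumptions for illustration only:

```python
# Depth-weighted parallax sketch: objects in the background pan less than
# objects in the foreground as the viewer's head moves, approximating a
# view through a window. Gain and depth scale are illustrative assumptions.

def parallax_offset(head_dx, depth, gain=1.0):
    """
    head_dx: viewer head displacement from screen center (pixels).
    depth:   0.0 = foreground plane, 1.0 = farthest background plane.
    Returns the horizontal offset to apply to an object at that depth;
    nearer objects shift more, creating the window-like effect.
    """
    return -head_dx * gain * (1.0 - depth)

foreground_shift = parallax_offset(head_dx=40, depth=0.0)  # shifts the most
background_shift = parallax_offset(head_dx=40, depth=0.9)  # shifts the least
```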
[0242] In some embodiments, the gaming machine may be configured or
designed to automatically and/or dynamically adjust display of
characters and/or objects (e.g., displayed on one or more displays
of the gaming machine) to reference a user's detected position or
location (e.g., in real-time). For example, in one example scenario
where it is assumed that a fictional character is being displayed
on a display screen of the gaming machine, when the gaming machine
detects that a person is interacting with the gaming machine (or
that the person is observing the display screen), the gaming
machine may respond by automatically and dynamically causing (e.g.,
in real-time) the displayed character to look in the direction of
the person, and to wave at that person. In some embodiments, the
gaming machine may be configured or designed to automatically
and/or dynamically change or adjust the quantity and/or appearance
of displayed characters/objects in response to detecting
specific activities, events and/or conditions at the gaming machine
such as, for example, one or more of the following (or combinations
thereof): [0243] the number of persons (e.g., physically present
persons) looking at or observing the gaming machine display screen;
[0244] the facial expressions of a player interacting with the
gaming machine; [0245] movement(s) of one or more players at or
near the gaming machine; [0246] eye tracking activity (e.g.,
associated with a player who is interacting with the gaming
machine); etc.
[0247] In some embodiments, the gaming machine may be configured or
designed to automatically and/or dynamically adjust the size and/or
appearance of displayed characters and/or objects based on the
detected proximity of a person/player to the gaming machine. For
example, in one embodiment, the gaming machine may be configured or
designed to determine how far a player's head (or body) is from the
display screen, and may respond by automatically and/or dynamically
resizing (e.g., in real-time) displayed characters and/or objects
so that they are more easily readable/recognizable by the
player.
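One hedged way to realize such distance-adaptive resizing is a pinhole-camera distance estimate from the detected face width, followed by a clamped linear scale; the focal length, face width, and reference distance below are assumed values, not parameters from this application:

```python
# Distance-adaptive resizing sketch. All constants are assumptions.
ASSUMED_FACE_WIDTH_MM = 150.0   # typical adult face width
ASSUMED_FOCAL_PX = 800.0        # hypothetical camera focal length in pixels
REFERENCE_DISTANCE_MM = 600.0   # distance at which the scale factor is 1.0

def estimate_distance_mm(face_px_width):
    """Pinhole-camera estimate: distance = focal * real_width / pixel_width."""
    return ASSUMED_FOCAL_PX * ASSUMED_FACE_WIDTH_MM / face_px_width

def display_scale(face_px_width, max_scale=3.0):
    """Scale displayed characters/objects up linearly as the player steps back."""
    scale = estimate_distance_mm(face_px_width) / REFERENCE_DISTANCE_MM
    return min(max(scale, 1.0), max_scale)
```

In practice the constants would be calibrated per camera and cabinet; the sketch only shows the scaling relationship.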
[0248] In at least one embodiment, the gaming machine may be
configured or designed to record and store (e.g., for subsequent
analysis) facial expressions of a player interacting with the
gaming machine (e.g., during game play). In at least one
embodiment, the gaming machine may also record and store concurrent
game play information. The recorded player facial expression
information and related game play information may be provided to a
server system (e.g., Server System 600), where the information may
be analyzed to determine which portions or aspects of the game play
the player enjoyed (e.g., which portions of the game play made the
player smile), and which portions or aspects of the game play the
player disliked (e.g., which portions of the game play made the
player frown or look unhappy).
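The server-side analysis described above might be sketched as counting expression samples per game-play segment; the segment names and expression labels below are hypothetical:

```python
# Correlate timestamped expression samples with game-play segments to report
# which segments made the player smile and which made the player frown.
# Field names and labels are illustrative assumptions.

def rate_segments(segments, samples):
    """
    segments: list of (name, t_start, t_end) game-play segments.
    samples:  list of (t, expression) recognized-expression samples.
    Returns {name: {"smile": count, "frown": count}} per segment.
    """
    report = {name: {"smile": 0, "frown": 0} for name, _, _ in segments}
    for t, expr in samples:
        for name, t0, t1 in segments:
            if t0 <= t < t1 and expr in report[name]:
                report[name][expr] += 1
    return report
```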
[0249] In at least one embodiment, a gaming machine or gaming
system may be configured or designed to facilitate, initiate and/or
perform one or more of the following operation(s)/action(s) (or
combinations thereof): identify facial features of a person or
player interacting with the gaming machine/system (e.g., during
game play mode); recognize and/or interpret facial expressions made
by the identified person/player; and initiate or perform one or
more action(s)/operation(s) in response to detecting a recognized
facial expression made by the person/player. For example, in one
embodiment, the content displayed at the gaming machine/system
(and/or the outcome of specific event(s) during game play) may be
dynamically determined and/or modified (e.g., in real-time) based
on the recognized facial expressions of the player who is
interacting with that gaming machine/system. In some embodiments
where the game outcome is known or predetermined (e.g., within the
gaming system) before the end of current round of game play, the
displayed game play content and/or game play events presented to
the player during the game may be automatically and/or dynamically
determined and/or modified (e.g., in real-time) to encourage the
player to exhibit a happy facial expression (e.g., a smile). If,
during the game play, it is detected that the user is not smiling,
the gaming machine/system may respond by dynamically changing or
modifying (e.g., in real-time) the player's game play experience to
encourage the player to exhibit a happy facial expression.
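The real-time feedback loop described in this paragraph might be sketched as follows; the expression labels and content names are illustrative assumptions only:

```python
# Expression-driven presentation sketch: when the recognized expression
# during play is not a happy one, swap in more engaging presentation content
# to encourage a smile. Labels and content names are hypothetical.

def choose_presentation(expression, base_content="standard animation"):
    """Pick game-play presentation based on the player's current expression."""
    if expression in ("smile", "laugh"):
        return base_content  # player appears to be enjoying play; no change
    # Not smiling: dynamically enhance the experience in real time.
    return "celebratory animation with bonus fanfare"
```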
[0250] In some embodiments, the gaming machine may be adapted to
include an array of cameras for improved tracking and multi-player
detection/monitoring. In one embodiment, a gaming system may be
provided which includes an array of separate display screens and an
array of cameras which may be used for facilitating tracking and
multi-player detection/monitoring. In some embodiments, the gaming
system may be configured or designed to automatically and/or
dynamically enable independent/individual facial/eye tracking
activities to be simultaneously performed for multiple different
players or persons (e.g., who are standing in front of the multiple
display array). In some embodiments, the gaming system may be
configured or designed to automatically and/or dynamically
coordinate identification and tracking of movements of a given
person across different displays of a multiple display array (e.g.,
as the person walks past the different displays of the multiple
display array). In some embodiments, the gaming system may be
configured or designed to automatically and/or dynamically modify
content displayed on selected displays of a multiple display array
in response to tracked movements and/or recognized facial
expressions of a detected person/player.
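Coordinating a tracked person across a horizontal display array might be sketched as mapping the person's camera-space x position to a display index so that content can follow the person; the display width and count are assumed values:

```python
# Multi-display tracking sketch. Display geometry is an assumption.
DISPLAY_WIDTH_PX = 1920
NUM_DISPLAYS = 4

def display_for_position(person_x):
    """Index of the display the person is standing in front of, clamped."""
    idx = int(person_x // DISPLAY_WIDTH_PX)
    return max(0, min(idx, NUM_DISPLAYS - 1))

def follow_person(positions):
    """Given successive x positions, return the sequence of display indices."""
    return [display_for_position(x) for x in positions]
```

Content handlers for each display could then activate as the person's index changes.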
Multi-Layered Displays
[0251] Various embodiments of devices and/or systems described
herein (including gaming machines and/or gaming systems) may be
configured or designed to include at least one multi-layered
display (MLD) system which includes a plurality of multiple layered
display screens.
[0252] As the term is used herein, a display device refers to any
device configured to adaptively output a visual image to a person
in response to a control signal. In one embodiment, the display
device includes a screen of a finite thickness, also referred to
herein as a display screen. For example, LCD display devices often
include a flat panel that includes a series of layers, one of which
includes a layer of pixelated light transmission elements for
selectively filtering red, green and blue data from a white light
source. Numerous exemplary display devices are described below.
[0253] The display device is adapted to receive signals from a
processor or controller included in the gaming system and to
generate and display graphics and images to a person near the
gaming system. The format of the signal will depend on the device.
In one embodiment, all the display devices in a layered arrangement
respond to digital signals. For example, the red, green and blue
pixelated light transmission elements for an LCD device typically
respond to digital control signals to generate colored light, as
desired.
[0254] In one embodiment, the gaming system comprises a
multi-touch, multi-player interactive display system which includes
two display devices, including a first, foremost or exterior
display device and a second, underlying or interior display device.
For example, the exterior display device may include a transparent
LCD panel while the interior display device includes a digital
display device with a curved surface.
[0255] In another embodiment, the gaming system comprises a
multi-touch, multi-player interactive display system which includes
three or more display devices, including a first, foremost or
exterior display device, a second or intermediate display device,
and a third, underlying or interior display device. The display
devices are mounted, oriented and aligned within the gaming system
such that at least one--and potentially numerous--common lines of
sight intersect portions of a display surface or screen for each
display device. Several exemplary display device systems and
arrangements that each include multiple display devices along a
common line of sight will now be discussed.
[0256] Layered display devices may be described according to their
position along a common line of sight relative to a viewer. As the
terms are used herein, `proximate` refers to a display device that
is closer to a person, along a common line of sight, than another
display device. Conversely, `distal` refers to a display device
that is farther from a person, along the common line of sight, than
another.
[0257] In at least one embodiment, one or more of the MLD display
screens may include a flat display screen incorporating flat-panel
display technology such as, for example, one or more of the
following (or combinations thereof): a liquid crystal display
(LCD), a transparent light emitting diode (LED) display, an
electroluminescent display (ELD), and a microelectromechanical
device (MEM) display, such as a digital micromirror device (DMD)
display or a grating light valve (GLV) display, etc. In some
embodiments, one or more of the display screens may utilize organic
display technologies such as, for example, an organic
electroluminescent (OEL) display, an organic light emitting diode
(OLED) display, a transparent organic light emitting diode (TOLED)
display, a light emitting polymer display, etc. In addition, at
least one display device may include a multipoint touch-sensitive
display that facilitates user input and interaction between a
person and the gaming system.
[0258] In one embodiment, the display screens are relatively flat
and thin, such as, for example, less than about 0.5 cm in
thickness. In one embodiment, the relatively flat and thin display
screens, having transparent or translucent capacities, are liquid
crystal displays (LCDs). It should be appreciated that the display
screen can be any suitable display screen, such as one employing
lead lanthanum zirconate titanate (PLZT) panel technology or any
other suitable technology which involves a matrix of selectively
operable light modulating structures, commonly known as pixels or
picture elements.
[0259] Various companies have developed relatively flat display
screens which have the capacity to be transparent or translucent.
One such company is Tralas Technologies, Inc., which sells display
screens which employ time multiplex optical shutter (TMOS)
technology. This TMOS display technology involves: (a) selectively
controlled pixels which shutter light out of a light guidance
substrate by violating the light guidance conditions of the
substrate; and (b) a system for repeatedly causing such violation
in a time multiplex fashion. The display screens which embody TMOS
technology are inherently transparent and they can be switched to
display colors in any pixel area. Certain TMOS display technology
is described in U.S. Pat. No. 5,319,491.
[0260] Another company, Deep Video Imaging Ltd., has developed
various types of multi-layered displays and related technology.
Various types of volumetric and multi-panel/multi-screen displays
are described, for example, in one or more patents and/or patent
publications assigned to Deep Video Imaging such as, for example,
U.S. Pat. No. 6,906,762, and PCT Pub. Nos.: WO99/42889,
WO03/040820A1, WO2004/001488A1, WO2004/002143A1, and
WO2004/008226A1, each of which is incorporated herein by reference
in its entirety for all purposes.
[0261] It should be appreciated that various embodiments of
multi-touch, multi-player interactive displays may employ any
suitable display material or display screen which has the capacity
to be transparent or translucent. For example, such a display
screen can include holographic shutters or other suitable
technology.
[0262] In some embodiments, gaming machines and/or gaming systems
which include one or more multi-layer display(s) (MLDs) may be
configured or designed to automatically and/or dynamically adjust
the display and/or appearance of content being displayed on a
multi-layer display (MLD) in response to tracking a player's eye
movements. In some embodiments, the gaming machine/system may be
configured or designed to automatically and/or dynamically track a
player's head movements/positions to automatically and/or
dynamically adjust (e.g., in real-time) output display of MLD
content on each MLD screen in a manner which results in improved
alignment and viewing of displayed MLD content from the
perspective of the player's current eyes/head position. In some
embodiments, the gaming machine/system may be configured or
designed to automatically and/or dynamically track a player's head
movements/positions to automatically and/or dynamically improve
alignment of front and/or rear (e.g., mask) displayed content
(e.g., in real time) in a manner which results in improved
visibility/presentation of the displayed content as viewed by the
player (e.g., as viewed from the perspective of the player's
current eyes/head position).
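The head-tracked alignment adjustment described above can be illustrated with a simple parallax calculation. The following is a hypothetical sketch under a pinhole viewing model (the function name and parameters are illustrative assumptions): for a display layer sitting a depth d behind the front screen, content viewed from a head offset x at viewing distance D appears shifted by roughly x * d / D, so shifting the rear layer's content by the opposite amount keeps front and rear content aligned along the line of sight.

```python
def mld_layer_offsets(head_x_mm, head_y_mm, viewing_distance_mm, layer_depths_mm):
    """For each MLD layer at depth `d` (mm) behind the front screen,
    compute the (x, y) content shift (mm) that keeps that layer aligned
    with the front layer along the viewer's current line of sight.

    Assumes a simple pinhole model: apparent parallax of a layer at
    depth d, seen from a head offset h at distance D, is h * d / D,
    so the compensating shift is the negative of that."""
    offsets = []
    for d in layer_depths_mm:
        scale = d / viewing_distance_mm
        offsets.append((-head_x_mm * scale, -head_y_mm * scale))
    return offsets
```

A real system would convert these millimeter shifts into pixels using the panel's dot pitch and re-run the calculation each frame as the tracked head position updates.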
[0263] According to different embodiments, the various gaming
machine features described above (as well as other features
described and/or referenced herein) may be implemented at the
gaming machine during game play mode and/or attract mode.
Example Facial/Eye Tracking in Kiosk/Consumer Environments
[0264] FIG. 8 shows an illustrative example of an F/E Commercial
Device 810 which has been configured or designed to include facial
detection and eye tracking functionality in accordance with a
specific embodiment. As illustrated in the example embodiment of
FIG. 8, F/E Commercial Device 810 may be configured as a
consumer-type vending machine which has been adapted to include one
or more cameras (e.g., 812a, 812b) capable of capturing images
(and/or videos) of one or more customers interacting with the
vending machine, and/or capable of capturing images/videos of other
persons within a given proximity to the vending machine.
[0265] In at least one embodiment, at least one camera (e.g., 812a)
may be installed in the front portion of the vending machine
cabinet for viewing/monitoring customer movements, facial
expressions, eye tracking, etc. The camera may be used to capture
images and/or videos, and the vending machine (and/or server
system) may be operable to analyze the captured image data for
recognition of one or more facial features of a consumer (e.g.,
840) such as, for example, eyes; nostrils; nose; mouth region; chin
region; and/or other facial features.
[0266] According to different embodiments, the vending machine may
be configured or designed to detect the presence of a person within
a predefined proximity. In at least one embodiment, the vending
machine may be configured or designed to automatically and/or
dynamically record eye tracking activity and related data, such as,
for example, one or more of the following (or combinations
thereof): region(s)/location(s) where consumer has observed (or is
currently observing); item(s)/product(s) which consumer has
observed; length of time consumer has observed each particular
item/product; etc. For example, as illustrated in the example
embodiment of FIG. 8, vending machine 810 may be configured or
designed to track (e.g., using camera 812a) the eye positions and
movements of a consumer (840), and determine and record which items
of the product display (820) the consumer has viewed and for how
long.
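The per-item dwell-time recording described above may be sketched as follows. This is a minimal hypothetical Python sketch (class and method names such as GazeDwellRecorder are illustrative assumptions): the eye tracker repeatedly reports which product slot, if any, the consumer's gaze currently rests on, and the recorder accumulates total observation time per slot.

```python
class GazeDwellRecorder:
    """Accumulates how long the tracked gaze rests on each product slot
    (e.g., grid cells such as 'C1') of a vending machine display."""

    def __init__(self):
        self.dwell = {}           # slot -> accumulated seconds
        self._current = None      # (slot, start_time) of the open interval

    def update(self, slot, now):
        """Feed the slot currently under the consumer's gaze, or None
        if the gaze has left the product display."""
        if self._current and self._current[0] != slot:
            # Gaze moved away from the previous slot: close its interval.
            prev_slot, start = self._current
            self.dwell[prev_slot] = self.dwell.get(prev_slot, 0.0) + (now - start)
            self._current = None
        if slot is not None and self._current is None:
            self._current = (slot, now)

    def report(self, now):
        """Close any open interval and return total dwell time per slot."""
        self.update(None, now)
        return dict(self.dwell)
```

The resulting per-slot totals are exactly the kind of record (items observed, and for how long) that could be transmitted to the server system for analysis.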
[0267] In at least one embodiment, the recorded consumer viewing
information may be transmitted from the vending machine to a remote
system such as server system 600. In at least one embodiment, the
server system may analyze and process the received consumer viewing
information, and in response, may facilitate, initiate and/or
perform one or more of the following operation(s)/action(s) (or
combinations thereof): [0268] associate at least a portion of the
processed consumer viewing information with the profile of a
selected/identified consumer; [0269] report at least a portion of
the processed consumer viewing information to 3rd party
entities; [0270] automatically and/or dynamically generate one or
more targeted advertisements or promotions based on at least a
portion of the processed consumer viewing information; [0271]
dynamically adjust pricing information relating to one or more
items viewed by the consumer; [0272] dynamically adjust inventory
management information based on at least a portion of the processed
consumer viewing information; [0273] etc.
[0274] For example, in the specific example embodiment of FIG. 8,
it is assumed that the vending machine is tracking the consumer's
eye movements, and has determined that the consumer 840 has viewed
item 821 (e.g., located at vending machine display grid position
C1) for an amount of time exceeding a predefined threshold value
(e.g., 10 seconds). In order to incentivize the consumer to
purchase the identified item 821, the vending machine (e.g., in
communication with server system 600) may automatically display a
dynamically generated, targeted promotion 815 to the consumer such
as, for example, "Receive a 10% discount if you purchase item C1 in
the next 60 seconds." During the next 60 seconds, the vending
machine may dynamically reduce the purchase price of the identified
item 821 by 10%. In one embodiment, if the consumer walks away
before the 60 seconds has expired, the vending machine may detect
such activity, and may respond by automatically restoring the
purchase price of the identified item 821 back to its original
value. In at least one embodiment, if the vending machine detects
that the consumer is walking away without completing a purchase, it
may automatically and/or dynamically generate one or more visual
and/or audio signals to capture the attention of the consumer, and
may additionally display one or more dynamically generated,
targeted promotions to the consumer based on the consumer's prior
viewing activities.
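The timed-discount behavior in this example can be sketched as a small state machine. The following hypothetical Python sketch (names such as TimedPromotion are illustrative assumptions) captures the three cases described above: the promotion is triggered, the discounted price applies while the window is open and the consumer remains present, and the original price is restored when the window lapses or the consumer walks away.

```python
class TimedPromotion:
    """A dwell-triggered discount that applies for a fixed window and
    restores the original price when the window lapses or the consumer
    walks away."""

    def __init__(self, item, base_price, discount=0.10, window_s=60.0):
        self.item = item
        self.base_price = base_price
        self.discount = discount
        self.window_s = window_s
        self.expires_at = None            # None means no active promotion

    def trigger(self, now):
        """Start the discount window and return the promotion message."""
        self.expires_at = now + self.window_s
        return (f"Receive a {self.discount:.0%} discount if you purchase "
                f"item {self.item} in the next {self.window_s:.0f} seconds.")

    def current_price(self, now, consumer_present=True):
        """Discounted price while the window is open and the consumer is
        still present; otherwise the original price is restored."""
        active = (self.expires_at is not None
                  and now < self.expires_at
                  and consumer_present)
        if not active:
            self.expires_at = None        # restore original price
            return self.base_price
        return round(self.base_price * (1 - self.discount), 2)
```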
[0275] In some embodiments, the vending machine may identify,
recognize, and record the facial characteristics of one or more
consumer(s) in a manner which enables the vending machine to
automatically determine the identity of a subsequently returning
consumer. In at least one embodiment, the vending machine (and/or
server system) may maintain consumer profiles which include, for a
given consumer, that consumer's unique facial feature
characteristics. In at least one embodiment, the vending machine
may automatically record the purchasing activity and/or viewing
activity associated with an identified consumer, and may associate
such activities with that consumer's profile. When a consumer
approaches the vending machine to view the product display, the
vending machine may automatically identify and recognize the facial
features of the consumer, and may compare the recognized facial
features to those stored in the consumer profile database(s) in
order to automatically determine the identity of the consumer who
is currently interacting with the vending machine. In at least one
embodiment, if the vending machine is able to determine the
identity of the consumer who is currently interacting with the
vending machine, it may use the consumer's profile information
(e.g., purchasing activity and/or viewing activity associated with
the identified consumer) to automatically generate one or more
dynamically generated, targeted promotions or purchase suggestions
to be presented to the consumer.
[0276] It will be appreciated that various aspects and features of
the facial detection and eye tracking functionality described
herein may be implemented in other types of kiosk/consumer
environments such as, for example, one or more of the following (or
combinations thereof): [0277] Drive-through restaurant
environments; [0278] Public transportation environments; [0279]
Private transportation environments; [0280] Environments involving
automated sales of products and/or tickets; [0281] etc.
Example Facial/Eye Tracking in Television (TV) Viewing
Environments
[0282] FIG. 9 shows an illustrative example of an F/E Commercial
Device 910 which has been configured or designed to include facial
detection and eye tracking functionality in accordance with a
specific embodiment. As illustrated in the example embodiment of
FIG. 9, F/E Commercial Device 910 may be configured as an
Intelligent TV device which has been adapted to include one or more
cameras (e.g., 912a, 912b) capable of capturing images (and/or
videos) of one or more viewers interacting with the Intelligent TV,
and/or capable of capturing images/videos of other persons within a
given proximity to the Intelligent TV. Additionally, in some
embodiments, the Intelligent TV may include infrared flash
component(s) (e.g., 914), which may be configured or designed to
facilitate detection and/or tracking of the eyes of one or more
viewers.
[0283] In at least one embodiment, at least one camera (e.g., 912a)
may be installed in the front portion of the Intelligent TV frame
for viewing/monitoring viewer movements, facial expressions, eye
tracking, etc. In other embodiments, one or more external camera(s)
(e.g., a Microsoft Kinect unit) may be connected to the TV. The
camera may be used to capture images and/or videos, and the
Intelligent TV (and/or server system) may be operable to analyze
the captured image data for recognition of one or more facial
features of a viewer (e.g., 940) such as, for example, eyes;
nostrils; nose; mouth region; chin region; and/or other facial
features.
[0284] According to different embodiments, the Intelligent TV may
be configured or designed to detect the presence of a person within
a predefined proximity. In at least one embodiment, the Intelligent
TV may be configured or designed to automatically and/or
dynamically record eye tracking activity and related data, such as,
for example, one or more of the following (or combinations
thereof): region(s)/location(s) of the Intelligent TV display where
viewer has observed (or is currently observing); timestamp
information; concurrent content and/or program information being
presented at the Intelligent TV display (e.g., during times when a
viewer's viewing activities are being recorded); length of time
viewer has observed the Intelligent TV display (and/or specific
regions therein); etc. According to different embodiments, the
Intelligent TV may be configured or designed to automatically
and/or dynamically monitor and record information relating to:
detection of one or more sets of eyes viewing Intelligent TV
display; timestamp information of detected events; content being
displayed on Intelligent TV display at time(s) when viewer's eyes
detected as viewing Intelligent TV display. Additionally, the
Intelligent TV may be configured or designed to automatically
and/or dynamically monitor and record information relating to:
detection of person(s) NOT viewing Intelligent TV display;
timestamp information of detected events; content being displayed
on Intelligent TV display at time(s) when person(s) detected as NOT
viewing Intelligent TV display.
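The viewing/not-viewing event recording described above may be sketched as a transition log. The following is a hypothetical Python sketch (names such as ViewingEventLog are illustrative assumptions): each time the detected state changes, the log records the new state, a timestamp, and the content being displayed at that moment, which is exactly the information enumerated in this paragraph.

```python
class ViewingEventLog:
    """Records transitions between 'viewing' and 'not viewing' states,
    with timestamps and the content on screen at the time."""

    def __init__(self):
        self.events = []
        self._viewing = None      # last known state (None until first update)

    def update(self, eyes_on_screen, timestamp, content_id):
        """Feed the current detection result; only state changes are
        logged, so sustained viewing produces a single event."""
        if eyes_on_screen != self._viewing:
            self.events.append({
                "event": "viewing" if eyes_on_screen else "not_viewing",
                "timestamp": timestamp,
                "content": content_id,
            })
            self._viewing = eyes_on_screen
```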
[0285] In at least one embodiment, the recorded viewer viewing
information may be transmitted from the Intelligent TV to a remote
system such as server system 600. In at least one embodiment, the
server system may analyze and process the received viewer viewing
information, and in response, may facilitate, initiate and/or
perform one or more of the following operation(s)/action(s) (or
combinations thereof): [0286] associate at least a portion of the
processed viewer viewing information with the profile of a
selected/identified viewer; [0287] report at least a portion of the
processed viewer viewing information to 3rd party entities;
[0288] automatically and/or dynamically generate one or more
targeted advertisements or promotions based on at least a portion
of the processed viewer viewing information; [0289] etc.
[0290] In some embodiments, the Intelligent TV may identify,
recognize, and record the facial characteristics of one or more
viewer(s) in a manner which enables the Intelligent TV to
automatically determine the identity of a subsequently returning
viewer. In at least one embodiment, the Intelligent TV (and/or
server system) may maintain viewer profiles which include, for a
given viewer, that viewer's unique facial feature characteristics.
In at least one embodiment, the Intelligent TV may automatically
record the viewing activity associated with an identified viewer,
and may associate such activities with that viewer's profile. The
Intelligent TV may automatically identify and recognize the facial
features of the viewer, and may compare the recognized facial
features to those stored in the viewer profile database(s) in order
to automatically determine the identity of the viewer who is
currently interacting with the Intelligent TV. In at least one
embodiment, if the Intelligent TV is able to determine the identity
of the viewer who is currently interacting with the Intelligent TV,
it may use the viewer's profile information to automatically
generate one or more dynamically generated, targeted promotions or
viewing suggestions to be presented to the viewer.
[0291] In at least one embodiment, the Intelligent TV may be
configured or designed to automatically and/or dynamically lower
its audio output volume if no persons are detected to be watching
the Intelligent TV display. Similarly, the Intelligent TV may be
configured or designed to automatically and/or dynamically increase
its audio output volume (or return it to its previous level) if at
least one person is detected to be watching the Intelligent TV
display.
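The attention-driven volume behavior described above may be sketched as follows (a hypothetical Python sketch; the class name AttentiveVolumeControl and the default levels are illustrative assumptions): the volume is lowered when no viewers are detected, the previous level is remembered, and it is restored as soon as at least one viewer is detected again.

```python
class AttentiveVolumeControl:
    """Lowers TV audio volume when no viewers are detected and restores
    the previous level once at least one viewer returns."""

    def __init__(self, volume=50, lowered_volume=10):
        self.volume = volume
        self.lowered_volume = lowered_volume
        self._saved = None        # volume to restore once a viewer returns

    def update(self, viewers_detected):
        """Feed the current viewer count; returns the resulting volume."""
        if viewers_detected == 0 and self._saved is None:
            self._saved = self.volume
            self.volume = self.lowered_volume
        elif viewers_detected > 0 and self._saved is not None:
            self.volume = self._saved
            self._saved = None
        return self.volume
```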
[0292] According to different embodiments, at least a portion of
the various types of functions, operations, actions, and/or other
features provided by one or more facial detection/eye tracking
procedure(s) may be implemented at one or more client systems(s),
at one or more server system(s), and/or combinations thereof. In
at least one embodiment, the facial detection/eye tracking
procedure(s) may be operable to perform and/or implement various
types of functions, operations, actions, and/or other features such
as one or more of those described and/or referenced herein.
[0293] In at least one embodiment, the facial detection/eye
tracking procedure(s) may be operable to utilize and/or generate
various different types of data and/or other types of information
when performing specific tasks and/or operations. This may include,
for example, input data/information and/or output data/information.
For example, in at least one embodiment, the facial detection/eye
tracking procedure(s) may be operable to access, process, and/or
otherwise utilize information from one or more different types of
sources, such as, for example, one or more local and/or remote
memories, devices and/or systems. Additionally, in at least one
embodiment, the facial detection/eye tracking procedure(s) may be
operable to generate one or more different types of output
data/information, which, for example, may be stored in memory of
one or more local and/or remote devices and/or systems. Examples of
different types of input data/information and/or output
data/information which may be accessed and/or utilized by the
facial detection/eye tracking procedure(s) may include, but are not
limited to, one or more of those described and/or referenced
herein.
[0294] In at least one embodiment, a given instance of the facial
detection/eye tracking procedure(s) may access and/or utilize
information from one or more associated databases. In at least one
embodiment, at least a portion of the database information may be
accessed via communication with one or more local and/or remote
memory devices. Examples of different types of data which may be
accessed by the facial detection/eye tracking procedure(s) may
include, but are not limited to, one or more of those described
and/or referenced herein.
[0295] According to specific embodiments, multiple instances or
threads of the facial detection/eye tracking procedure(s) may be
concurrently implemented and/or initiated via the use of one or
more processors and/or other combinations of hardware and/or
hardware and software. For example, in at least some embodiments,
various aspects, features, and/or functionalities of the facial
detection/eye tracking procedure(s) may be performed, implemented
and/or initiated by one or more of the various systems, components,
devices, procedure(s), processes, etc., described and/or
referenced herein.
[0296] According to different embodiments, one or more different
threads or instances of the facial detection/eye tracking
procedure(s) may be initiated in response to detection of one or
more conditions or events satisfying one or more different types of
minimum threshold criteria for triggering initiation of at least
one instance of the facial detection/eye tracking procedure(s).
Various examples of conditions or events which may trigger
initiation and/or implementation of one or more different threads
or instances of the facial detection/eye tracking procedure(s) may
include, but are not limited to, one or more of those described
and/or referenced herein.
[0297] According to different embodiments, one or more different
threads or instances of the facial detection/eye tracking
procedure(s) may be initiated and/or implemented manually,
automatically, statically, dynamically, concurrently, and/or
combinations thereof. Additionally, different instances and/or
embodiments of the facial detection/eye tracking procedure(s) may
be initiated at one or more different time intervals (e.g., during
a specific time interval, at regular periodic intervals, at
irregular periodic intervals, upon demand, etc.).
[0298] In at least one embodiment, initial configuration of a given
instance of the facial detection/eye tracking procedure(s) may be
performed using one or more different types of initialization
parameters. In at least one embodiment, at least a portion of the
initialization parameters may be accessed via communication with
one or more local and/or remote memory devices. In at least one
embodiment, at least a portion of the initialization parameters
provided to an instance of the facial detection/eye tracking
procedure(s) may correspond to and/or may be derived from the input
data/information.
[0299] Although several example embodiments of one or more aspects
and/or features have been described in detail herein with reference
to the accompanying drawings, it is to be understood that aspects
and/or features are not limited to these precise embodiments, and
that various changes and modifications may be effected therein by
one skilled in the art without departing from the scope or spirit
of the invention(s) as defined, for example, in the appended
claims.
* * * * *