U.S. patent application number 14/747637 was filed with the patent office on 2016-12-29 for facilitating dynamic game surface adjustment.
This patent application is currently assigned to INTEL CORPORATION. The applicant listed for this patent is INTEL CORPORATION. Invention is credited to GLEN J. ANDERSON, KEVIN W. BROSS, MARK R. FRANCIS, RAVISHANKAR IYER, DAVID I. POISNER, DANIEL P. SHEIL, OMESH TICKOO, YEVGENIY Y. YARMOSH.
Application Number | 14/747637 |
Publication Number | 20160375354 |
Family ID | 57586072 |
Filed Date | 2016-12-29 |
[Drawing sheets D00000 through D00008 of US 2016/0375354 A1]
United States Patent Application | 20160375354 |
Kind Code | A1 |
FRANCIS; MARK R.; et al. | December 29, 2016 |
FACILITATING DYNAMIC GAME SURFACE ADJUSTMENT
Abstract
A mechanism is described for facilitating dynamic game surface
adjustment at smart play surfaces of smart play sets according to
one embodiment. A method of embodiments, as described herein,
includes receiving one or more inputs to perform an action at a
portion of a play surface of a play set; evaluating the one or more
inputs for generating an action plan to perform the action at the
portion of the play surface, where the action plan is to affect one
or more objects acting on the surface. The method may further
include executing the action at the portion of the surface, wherein
the action to adjust one or more properties of the play
surface.
Inventors: | FRANCIS; MARK R.; (Portland, OR); TICKOO; OMESH; (Portland, OR); IYER; RAVISHANKAR; (Portland, OR); ANDERSON; GLEN J.; (Beaverton, OR); BROSS; KEVIN W.; (Tigard, OR); POISNER; DAVID I.; (Carmichael, CA); YARMOSH; YEVGENIY Y.; (Portland, OR); SHEIL; DANIEL P.; (Portland, OR) |
Applicant: | INTEL CORPORATION, Santa Clara, CA, US |
Assignee: | INTEL CORPORATION, Santa Clara, CA |
Family ID: | 57586072 |
Appl. No.: | 14/747637 |
Filed: | June 23, 2015 |
Current U.S. Class: | 463/16 |
Current CPC Class: | A63H 2200/00 20130101; A63F 9/16 20130101; A63H 30/04 20130101; A63H 1/00 20130101 |
International Class: | A63F 9/16 20060101 A63F009/16; A63H 1/04 20060101 A63H001/04 |
Claims
1. An apparatus comprising: detection/reception logic to receive
one or more inputs to perform an action at a portion of a play
surface of the apparatus; management logic to evaluate the one or
more inputs to generate an action plan to perform the action at the
portion of the play surface, wherein the action plan is to affect
one or more objects acting on the surface; and
application/execution logic to execute the action at the portion of
the surface, wherein the action to adjust one or more properties of
the play surface.
2. The apparatus of claim 1, wherein the play surface to host the
one or more objects including one or more moveable objects having
object sensors, wherein the play surface includes at least one of
surface sensors, actuators, and physical effects detectors, wherein
the apparatus includes a play set comprising one or more of a smart
toy, a smart game set, a smart field, and a smart play area, and
wherein the play surface includes a play arena associated with the
play set, and wherein the play surface is further to host one or
more users holding or wearing the one or more moveable objects.
3. The apparatus of claim 1, wherein the sensory input comprises at
least one of surface-related sensory data as retrieved via one or
more of the surface sensors, moveable object-related sensory data
as retrieved via one or more of the object sensors, and physical
effects-related information at the surface as retrieved via one or
more of the physical effects detectors, and wherein the one or more
inputs include at least one of a user command, a predetermined
criterion, a sensory input, and an audio/visual input.
4. The apparatus of claim 2, wherein an actuator of the actuators
to activate the portion of the play surface to trigger the action
at the portion of the play surface, wherein the actuator is placed
beneath the portion of the play surface, and wherein the actuator
is further to activate other one or more actions of the action plan
on the play surface, wherein the other one or more actions include
at least one of vibrating, moving, swinging, tilting, booming,
sinking, and bumping of the play surface.
5. The apparatus of claim 1, further comprising sensors data
processor of the management logic to process the sensory input
prior to evaluating the one or more inputs to generate the action
plan.
6. The apparatus of claim 1, further comprising video/audio
analytics logic to evaluate the audio/video input to analyze
activities of the one or more moveable objects operating on the
surface, wherein the audio/video input includes at least one of
sounds, images, and videos relating to the activities of the one or
more moveable objects, wherein the audio/video input is captured
via at least one of a two-dimensional (2D) camera, a three-dimensional
(3D) camera, a microphone of capturing/sensing components.
7. The apparatus of claim 6, further comprising output components
including one or more projectors to facilitate a projection at the
portion of the play surface, wherein the projection to reflect an
environment relevant to the action, wherein the projection includes
one or more of numbers, letters, characters, messages, lights,
images, videos, and colors.
8. The apparatus of claim 1, wherein the user command is placed by
a user via a user interface at a computing device over a
communication medium, wherein the communication medium includes at
least one of a Cloud network, an intranet, a proximity network, and
the Internet.
9. The apparatus of claim 1, further comprising a database to store
the one or more inputs, wherein the database to further store rules
or policies relating to at least one of the user, the action, and
the play surface, wherein the database includes at least one of a
Cloud database or a non-Cloud database.
10. A method comprising: receiving one or more inputs to perform an
action at a portion of a play surface of a play set; evaluating the
one or more inputs for generating an action plan to perform the
action at the portion of the play surface, wherein the action plan
is to affect one or more objects acting on the surface; and
executing the action at the portion of the surface, wherein the
action to adjust one or more properties of the play surface.
11. The method of claim 10, wherein the play surface to host the
one or more objects including one or more moveable objects having
object sensors, wherein the play surface includes at least one of
surface sensors, actuators, and physical effects detectors, wherein
the play set comprises one or more of a smart toy, a smart game
set, a smart field, and a smart play area, and wherein the play
surface includes a play arena associated with the play set, and
wherein the play surface is further to host one or more users
holding or wearing the one or more moveable objects.
12. The method of claim 10, wherein the sensory input comprises at
least one of surface-related sensory data as retrieved via one or
more of the surface sensors, moveable object-related sensory data
as retrieved via one or more of the object sensors, and physical
effects-related information at the surface as retrieved via one or
more of the physical effects detectors, and wherein the one or more
inputs include at least one of a user command, a predetermined
criterion, a sensory input, and an audio/visual input.
13. The method of claim 12, wherein an actuator of the actuators to
activate the portion of the play surface to trigger the action at
the portion of the play surface, wherein the actuator is placed
beneath the portion of the play surface, and wherein the actuator
is further to activate other one or more actions of the action plan
on the play surface, wherein the other one or more actions include
at least one of vibrating, moving, swinging, tilting, booming,
sinking, and bumping of the play surface.
14. The method of claim 10, further comprising processing, via
sensors data processor, the sensory input prior to evaluating the
one or more inputs to generate the action plan.
15. The method of claim 10, further comprising evaluating the
audio/video input to analyze activities of the one or more moveable
objects operating on the surface, wherein the audio/video input
includes at least one of sounds, images, and videos relating to the
activities of the one or more moveable objects, wherein the
audio/video input is captured via at least one of a two-dimensional
(2D) camera, a three-dimensional (3D) camera, a microphone of
capturing/sensing components.
16. The method of claim 15, further comprising facilitating, via
one or more projectors of output components, a projection at the
portion of the play surface, wherein the projection to reflect an
environment relevant to the action, wherein the projection includes
one or more of numbers, letters, characters, messages, lights,
images, videos, and colors.
17. The method of claim 10, further comprising storing, at a
database, the one or more inputs, wherein the database to further
store rules or policies relating to at least one of the user, the
action, and the play surface, wherein the database includes at
least one of a Cloud database or a non-Cloud database, wherein the
user command is placed by a user via a user interface at a
computing device over a communication medium, wherein the
communication medium includes at least one of a Cloud network, an
intranet, a proximity network, and the Internet.
18. At least one machine-readable medium comprising a plurality of
instructions, executed on a computing device, to facilitate the
computing device to perform one or more operations comprising:
receiving one or more inputs to perform an action at a portion of a
play surface of a play set; evaluating the one or more inputs for
generating an action plan to perform the action at the portion of
the play surface, wherein the action plan is to affect one or more
objects acting on the surface; and executing the action at the
portion of the surface, wherein the action to adjust one or more
properties of the play surface.
19. The machine-readable medium of claim 18, wherein the play
surface to host the one or more objects including one or more
moveable objects having object sensors, wherein the play surface
includes at least one of surface sensors, actuators, and physical
effects detectors, wherein the play set comprises one or more of a
smart toy, a smart game set, a smart field, and a smart play area,
and wherein the play surface includes a play arena associated with
the play set, and wherein the play surface is further to host one
or more users holding or wearing the one or more moveable
objects.
20. The machine-readable medium of claim 18, wherein the sensory
input comprises at least one of surface-related sensory data as
retrieved via one or more of the surface sensors, moveable
object-related sensory data as retrieved via one or more of the
object sensors, and physical effects-related information at the
surface as retrieved via one or more of the physical effects
detectors, and wherein the one or more inputs include at least one
of a user command, a predetermined criterion, a sensory input, and
an audio/visual input.
21. The machine-readable medium of claim 20, wherein an actuator of
the actuators to activate the portion of the play surface to
trigger the action at the portion of the play surface, wherein the
actuator is placed beneath the portion of the play surface, and
wherein the actuator is further to activate other one or more
actions of the action plan on the play surface, wherein the other
one or more actions include at least one of vibrating, moving,
swinging, tilting, booming, sinking, and bumping of the play
surface.
22. The machine-readable medium of claim 18, wherein the one or
more operations further comprise processing, via sensors data
processor, the sensory input prior to evaluating the one or more
inputs to generate the action plan.
23. The machine-readable medium of claim 18, wherein the one or
more operations further comprise evaluating the audio/video input
to analyze activities of the one or more moveable objects operating
on the surface, wherein the audio/video input includes at least one
of sounds, images, and videos relating to the activities of the one
or more moveable objects, wherein the audio/video input is captured
via at least one of a two-dimensional (2D) camera, a three-dimensional
(3D) camera, a microphone of capturing/sensing components.
24. The machine-readable medium of claim 23, wherein the one or
more operations further comprise facilitating, via one or more
projectors of output components, a projection at the portion of the
play surface, wherein the projection to reflect an environment
relevant to the action, wherein the projection includes one or more
of numbers, letters, characters, messages, lights, images, videos,
and colors.
25. The machine-readable medium of claim 18, wherein the one or
more operations further comprise storing, at a database, the one or
more inputs, wherein the database to further store rules or
policies relating to at least one of the user, the action, and the
play surface, wherein the database includes at least one of a Cloud
database or a non-Cloud database, wherein the user command is
placed by a user via a user interface at a computing device over a
communication medium, wherein the communication medium includes at
least one of a Cloud network, an intranet, a proximity network, and
the Internet.
Description
FIELD
[0001] Embodiments described herein generally relate to computers.
More particularly, embodiments relate to facilitating dynamic game
surface adjustment.
BACKGROUND
[0002] Conventional techniques do not provide for adjustment of
gaming surfaces (e.g., Beyblade.TM. arenas) and thus, users (e.g.,
game players) of such games are unable to enjoy a full gaming experience.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] Embodiments are illustrated by way of example, and not by
way of limitation, in the figures of the accompanying drawings in
which like reference numerals refer to similar elements.
[0004] FIG. 1 illustrates a computing device employing a game
surface adjustment mechanism according to one embodiment.
[0005] FIG. 2A illustrates a game surface adjustment mechanism
according to one embodiment.
[0006] FIG. 2B illustrates an architectural placement according to
one embodiment.
[0007] FIG. 3 illustrates a use case scenario according to one
embodiment.
[0008] FIG. 4A illustrates a method for facilitating game surface
adjustment according to one embodiment.
[0009] FIG. 4B illustrates a method for facilitating game surface
adjustment according to one embodiment.
[0010] FIG. 4C illustrates a method for facilitating game surface
adjustment according to one embodiment.
[0011] FIG. 5 illustrates a computer environment suitable for
implementing embodiments of the present disclosure according to one
embodiment.
[0012] FIG. 6 illustrates a method for facilitating dynamic
targeting of users and communicating of message according to one
embodiment.
DETAILED DESCRIPTION
[0013] In the following description, numerous specific details are
set forth. However, embodiments, as described herein, may be
practiced without these specific details. In other instances,
well-known circuits, structures and techniques have not been shown
in detail in order not to obscure the understanding of this
description.
[0014] Embodiments provide for a novel technique for dynamic
interaction with surfaces to facilitate a surface to change its
properties in real-time. Embodiments are not limited to any
particular type of surface; however, for brevity, clarity, and ease
of understanding, terms like "surface", "game surface", "toy
surface", "play surface", "play arena" or simply "arena" are
referenced interchangeably throughout this document.
[0015] For example, many of today's toys, games, etc., involve
movements of various objects on specific surfaces, such as electric
cars racing on race tracks, trains running on train tracks, Beyblade.TM. tops fighting it out in battle arenas, Magic.TM. cards
on play surfaces with graphics, etc. Embodiments provide for
instrumentation of such play surfaces to enable the users to have a
unique and new user experience where a surface can change its
properties, such as generate a bump or a depression, etc.,
on-demand or as predetermined.
[0016] FIG. 1 illustrates a computing device 100 employing a game
surface adjustment mechanism 110 according to one embodiment.
Computing device 100 serves as a host machine for hosting game
surface adjustment mechanism ("surface mechanism") 110 that
includes any number and type of components, as illustrated in FIG.
2A, to facilitate real-time and dynamic adjustment of properties of
play surfaces as will be further described throughout this
document.
[0017] Computing device 100 may include any number and type of data
processing devices, such as large computing systems, such as server
computers, desktop computers, etc., and may further include set-top
boxes (e.g., Internet-based cable television set-top boxes, etc.),
global positioning system (GPS)-based devices, etc. Computing
device 100 may include mobile computing devices serving as
communication devices, such as cellular phones including
smartphones, personal digital assistants (PDAs), tablet computers,
laptop computers (e.g., Ultrabook.TM. system, etc.), e-readers,
media internet devices (MIDs), media players, smart televisions,
television platforms, intelligent devices, computing dust, media
players, head-mounted displays (HMDs) (e.g., wearable glasses, such
as Google.RTM. Glass.TM., head-mounted binoculars, gaming displays,
military headwear, etc.), and other wearable devices (e.g.,
smartwatches, bracelets, smartcards, jewelry, clothing items,
etc.), and/or the like.
[0018] Computing device 100 may include an operating system (OS)
106 serving as an interface between hardware and/or physical
resources of the computing device 100 and a user. Computing device
100 further includes one or more processors 102, memory devices
104, network devices, drivers, or the like, as well as input/output
(I/O) sources 108, such as touchscreens, touch panels, touch pads,
virtual or regular keyboards, virtual or regular mice, etc.
[0019] It is to be noted that terms like "node", "computing node",
"server", "server device", "cloud computer", "cloud server", "cloud
server computer", "machine", "host machine", "device", "computing
device", "computer", "computing system", and the like, may be used
interchangeably throughout this document. It is to be further noted
that terms like "application", "software application", "program",
"software program", "package", "software package", "code",
"software code", and the like, may be used interchangeably
throughout this document. Also, terms like "job", "input",
"request", "message", and the like, may be used interchangeably
throughout this document. It is contemplated that the term "user"
may refer to an individual or a group of individuals using or
having access to computing device 100.
[0020] FIG. 2A illustrates a game surface adjustment mechanism 110
according to one embodiment. In one embodiment, surface mechanism
110 may include any number and type of components, such as (without
limitation): detection/reception logic 201; management logic
("surface/AR logic") 203 including sensors data processor 205;
video/audio analytics logic ("analytics logic") 207;
application/execution logic 209; and communication/compatibility
logic 211.
[0021] Computing device 100 may be part of smart play set 200,
where "smart play set" may be interchangeably referred to as "play
set", "play device", "play setup", "game set", "game", "toy set",
or simply "toy" throughout this document. As illustrated, in some
embodiments, play set 200 may further include various embedded,
connected, and/or loose parts, such as moveable objects/attachment
("objects") 240, active play arena ("surface") 250, etc. In some
embodiments, objects 240, having object sensors 241, may include
players' toys, toy characters, pieces, add-ons, and/or other
moveable objects or attachments that may be placed on surface 250,
where surface 250 representing a play surface may include any
number and type of components, such as (without limitation) surface
sensors 251, surface actuators ("actuators") 253, physical effects
detectors ("detectors") 255, etc. As aforementioned, throughout
this document, active play arena 250 may be referred to as
"surface", "game surface", "toy surface", "play surface", "play
arena" or simply "arena".
[0022] Computing device 100 may include input/output sources 108
including capturing/sensing components 221 and output components
223 which, as will be further described below, may also include any
number and type of components, sensor arrays, etc. For example,
capturing/sensing components 221 may include cameras (e.g.,
two-dimensional (2D) cameras, three-dimensional (3D) cameras,
etc.), sensors array, etc. Similarly, output components 223 may
include display screens, display/projection areas, projectors,
etc.
[0023] Computing device 100 may be further in communication with
any number and type of other computing devices, such as computing
devices (also referred to as "personal devices") 270A, 270B, 270C
(e.g., mobile computer, such as tablet computer, smartphone, etc.),
that may be accessed by their corresponding users (also referred to
as "players" or "participants") using user interfaces, such as user
interface 271, to serve as input console to not only participate in
playing the game, but also choose to alter or adjust surface
250.
[0024] It is contemplated that surface 250 is capable of
interacting with other parts of play set 200, such as a touch screen, a joystick, and other input controls, etc., along with personal
devices 270A-270C such that one or more properties of surface 250
may be altered using digital inputs from one or more players using
their corresponding personal devices 270A-270C.
[0025] As will be further described in this document, in one
embodiment, computing devices 270A-270C may be used by their
corresponding players to input commands or data to have surface 250
behave differently where certain properties of surface 250 may be
altered (e.g., create a bump, a ditch, a bridge, an obstacle, etc.)
such that one or more of other components, such as a projector, may
be used to project or create AR visualizations corresponding to
the changes in surface 250. Such surface changes may be enabled and
facilitated by surface mechanism 110. In one embodiment, play set
200 may be in communication with personal devices 270A-270C over
communication medium 260, such as a Cloud network, the Internet, an
intranet, a proximity network, etc.
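As a rough, non-authoritative sketch of what a surface-adjustment command sent from a personal device over communication medium 260 might look like, the JSON field names below are assumptions introduced for illustration, not a format defined by the application.

```python
# Hypothetical command payload; field names are assumptions, not from the application.
import json

command = {
    "player_id": "player_1",
    "action": "create_bump",                   # e.g., bump, ditch, bridge, obstacle
    "surface_region": {"x": 0.25, "y": 0.75},  # normalized surface coordinates
    "duration_ms": 3000,
}

payload = json.dumps(command)  # sent over communication medium 260 (Wi-Fi, Cloud, etc.)
print(payload)
```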
[0026] In one embodiment, computing device 100 may be in
communication with one or more repositories or data sources or
databases, such as database 265, to obtain, communicate, store, and
maintain any amount and type of data (e.g., media, metadata,
templates, real-time data, historical contents, user and/or device
identification tags and other information, resources, policies,
criteria, rules, regulations, upgrades, etc.). In some embodiments,
communication medium 260 may include any number and type of
communication channels or networks, such as Cloud network, the
Internet, intranet, Internet of Things ("IoT"), proximity network,
Bluetooth, etc.). It is contemplated that embodiments are not
limited to any particular number or type of computing devices,
media sources, databases, personal devices, networks, etc.
[0027] Computing device 100 may further include I/O sources 108
having any number and type of capturing/sensing components 221
(e.g., sensor array (such as context/context-aware sensors and
environmental sensors, such as camera sensors, ambient light
sensors, Red Green Blue (RGB) sensors, movement sensors, etc.),
depth sensing cameras, 2D cameras, 3D cameras, image sources,
audio/video/signal detectors, microphones, eye/gaze-tracking
systems, head-tracking systems, etc.) and output components 223
(e.g., audio/video/signal sources, display planes, display panels,
display screens/devices, projectors, display/projection areas,
speakers, etc.).
[0028] Capturing/sensing components 221 may further include one or
more of vibration components, tactile components, conductance
elements, biometric sensors, chemical detectors, signal detectors,
electroencephalography, functional near-infrared spectroscopy, wave
detectors, force sensors (e.g., accelerometers), illuminators,
eye-tracking or gaze-tracking system, head-tracking system, etc.,
that may be used for capturing any amount and type of visual data,
such as images (e.g., photos, videos, movies, audio/video streams,
etc.), and non-visual data, such as audio streams or signals (e.g.,
sound, noise, vibration, ultrasound, etc.), radio waves (e.g.,
wireless signals, such as wireless signals having data, metadata,
signs, etc.), chemical changes or properties (e.g., humidity, body
temperature, etc.), biometric readings (e.g., fingerprints, etc.),
brainwaves, brain circulation, environmental/weather conditions,
maps, etc. It is contemplated that "sensor" and "detector" may be
referenced interchangeably throughout this document. It is further
contemplated that one or more capturing/sensing components 221 may
further include one or more of supporting or supplemental devices
for capturing and/or sensing of data, such as illuminators (e.g.,
infrared (IR) illuminator), light fixtures, generators, sound
blockers, etc.
[0029] It is further contemplated that in one embodiment,
capturing/sensing components 221 may further include any number and
type of context sensors (e.g., linear accelerometer) for sensing or
detecting any number and type of contexts (e.g., estimating
horizon, linear acceleration, etc., relating to a mobile computing
device, etc.). For example, capturing/sensing components 221 may
include any number and type of sensors, such as (without
limitations): accelerometers (e.g., linear accelerometer to measure
linear acceleration, etc.); inertial devices (e.g., inertial
accelerometers, inertial gyroscopes, micro-electro-mechanical
systems (MEMS) gyroscopes, inertial navigators, etc.); gravity
gradiometers to study and measure variations in gravitation
acceleration due to gravity, etc.
[0030] Further, for example, capturing/sensing components 221 may
include (without limitations): audio/visual devices (e.g., cameras,
microphones, speakers, etc.); context-aware sensors (e.g.,
temperature sensors, facial expression and feature measurement
sensors working with one or more cameras of audio/visual devices,
environment sensors (such as to sense background colors, lights,
etc.), biometric sensors (such as to detect fingerprints, etc.),
calendar maintenance and reading device), etc.; global positioning
system (GPS) sensors; resource requestor; and trusted execution
environment (TEE) logic. TEE logic may be employed separately or be
part of resource requestor and/or an I/O subsystem, etc.
Capturing/sensing components 221 may further include voice
recognition devices, photo recognition devices, facial and other
body recognition components, voice-to-text conversion components,
etc.
[0031] Computing device 100 may further include one or more output
components 223 in communication with one or more capturing/sensing
components 221 and one or more components of surface mechanism 110
for facilitating playing and/or visualizing of varying contents,
such as images, videos, texts, audios, animations, interactive
representations, visualization of fingerprints, visualization of
touch, smell, and/or other sense-related experiences, etc. For
example, output components 223 may further include one or more
telepresence projectors to project a real image's virtual
representation capable of being floated in mid-air while being
interactive and having the depth of a real-life object.
[0032] Further, output components 223 may include dynamic tactile
touch screens having tactile effectors as an example of presenting
visualization of touch, where an embodiment of such may be
ultrasonic generators that can send signals in space which, when
reaching, for example, human fingers, can cause a tactile sensation or similar feeling on the fingers. Further, for example and in one
embodiment, output components 223 may include (without limitation)
one or more of light sources, display devices and/or screens, audio
speakers, tactile components, conductance elements, bone conducting
speakers, olfactory or smell visual and/or non-visual presentation
devices, haptic or touch visual and/or non-visual presentation
devices, animation display devices, biometric display devices,
X-ray display devices, high-resolution displays, high-dynamic range
displays, multi-view displays, and head-mounted displays (HMDs) for
at least one of virtual reality (VR) and augmented reality (AR),
etc.
[0033] For example, in case of play set 200 being a Beyblade.TM.
toy, various spinning tops, such as objects 240, are configured by
players using mechanical attachments and made to spin in an arena, such as surface 250, which causes the tops to bump into each other and
the last remaining top wins the battle. In one embodiment, applying
surface mechanism 110 to a Beyblade.TM.-like arena, such as surface
250, may facilitate surface 250 to actively participate in the game
by allowing the players to strategize by, for example, connecting
various objects 240 (e.g., other players' spinning tops, obstacles,
lights, additional weapons, etc.) to best battle the opponent
player's style of top and facilitating surface 250 or one or more
portions of it to behave in certain ways, such as turn soft, erect
a hill, build a bridge, blow up, turn into a hole, create a bump,
etc., making the game more exciting and/or challenging by having
these changes made to surface 250 serve as favorable to the player,
obstacles to other players, or remain neutral, such as serving as a
passive scenery or equally challenging obstacles, etc.
[0034] Similarly, play device 200 may include other forms of toys,
games, setup, etc., such as a train set having a set of train
tracks serving as surface 250, race cars having a race track
serving as surface 250, sailboats having an area of water serving
as surface 250, airplanes having a runway serving as surface 250,
etc., where these various forms of surface 250 may be altered,
on-demand or as predetermined, to become an active part of using a
game/toy, such as play device 200, in order to obtain a unique user
experience as facilitated by surface mechanism 110.
[0035] In one embodiment, detection/reception logic 201 may be used
to receive user requests, such as those placed via user interface 271 of
personal device 270A over communication medium 260, to modify one
or more properties of one or more portions of surface 250, such as
generating bumps in surface 250 for increased difficulty for one or
more opponent players or all players, etc. Similarly, in another
embodiment, detection/reception logic 201 may detect predetermined
criteria to be applied to surface 250, where such criteria may set
forth any amount of information and rules regarding the type of changes to be applied to surface 250, when and for how long the changes may be applied, and which other features (e.g., lights) are to be turned on or turned off before, after, or during the application of those changes, and/or the like.
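A minimal sketch of how detection/reception logic 201 might normalize user requests and predetermined criteria into a common form is shown below; the InputEvent structure and the source labels are illustrative assumptions, not taken from the application.

```python
# Illustrative normalization of incoming inputs; names are hypothetical.
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class InputEvent:
    source: str          # "user_command", "predetermined_criteria", or "sensor"
    payload: Dict[str, Any]

def receive_input(raw: Dict[str, Any]) -> InputEvent:
    """Wrap a raw user request or detected criterion into a uniform event."""
    if "player_id" in raw:
        return InputEvent(source="user_command", payload=raw)
    if "schedule" in raw:
        return InputEvent(source="predetermined_criteria", payload=raw)
    return InputEvent(source="sensor", payload=raw)

event = receive_input({"player_id": "p1", "action": "create_bump"})
print(event.source)  # user_command
```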
[0036] In one embodiment, these predetermined criteria and any
applicable or relevant rules may be stored at database 265. For
example and in some embodiments, the rules may be used to define
any number and type of features, such as how many actions or
changes to be applied to surface 250, what actions, what time an
action may be activated by a player, etc. Further, these rules at
database 265 may be player-specific or play-specific, such as a
player with a high record may get to have and apply more actions
per game to surface 250, a player may earn and/or buy additional
actions or a right to have more actions per play, and/or the
like.
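The per-player rule lookup described above might resemble the following sketch; the rule schema, tiers, and thresholds are hypothetical and are kept in a plain dictionary here rather than in database 265.

```python
# Hypothetical per-player action budget; schema and thresholds are illustrative.
rules_db = {
    "default": {"actions_per_game": 3},
    "high_record": {"actions_per_game": 5},
}

def allowed_actions(player):
    """Return how many surface actions this player may trigger per game."""
    tier = "high_record" if player.get("record", 0) >= 10 else "default"
    return rules_db[tier]["actions_per_game"] + player.get("purchased_actions", 0)

print(allowed_actions({"record": 12, "purchased_actions": 1}))  # 6
```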
[0037] Upon receiving the user request or detecting predetermined
criteria along with detecting and accessing any relevant rules at
database 265, this information may then be forwarded on to
surface/AR logic 203 for further processing. In one embodiment,
surface/AR logic 203 may be used to control and manage surface
characteristics and/or AR effects based on user inputs, surface
sensor measurements as determined by surface sensors 251, video
recognition results, and sensor readings from one or more movement
objects 240 placed on surface 250 as facilitated by their
corresponding movement sensors 241 and surface sensors 251,
respectively. For example, sensors data processor 205 of surface/AR
logic 203 may be used to process any amount and type of data
received from sensors 241, 251, detectors 255, one or more sensors
of sensor array of capturing/sensing components 221, etc., which
may then be used by surface/AR logic 203 to perform its various
tasks.
[0038] Further, in one embodiment, surface 250 may employ actuators
253 (such as under or beneath surface 250) and physical effects
detectors 255 to appropriately facilitate changes in properties
relating to surface 250, such as tilting, creating bumps, applying
various physical effects (e.g., magnetic fields, air jets), etc.,
with precision and in accordance with one or more of user requests,
predetermined criteria, and/or rules at database 265. For example,
actuators 253 and detectors 255 may be placed under surface 250,
where actuators 253 may be used to translate and actualize the
requested/predetermined actions (such as based on user inputs,
predetermined criteria, rules, etc.) to ensure that surface 250
behaves accordingly, such as vibrate, tilt to the right, etc.
Similarly, in one embodiment, detectors 255 may be used to detect
and project the corresponding physical effects, such as increased
or decreased lighting or sound, creating augmented reality around
and at surface 250, and/or the like, and then communicate this
information with actuators 253 so they may perform their tasks.
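A possible translation of an action plan into actuator commands, in the spirit of the paragraph above, is sketched below; the command table, actuator identifiers, and drive() helper are assumptions made for illustration only.

```python
# Illustrative plan-to-actuator translation; commands and identifiers are assumed.
ACTUATOR_COMMANDS = {
    "vibrate":     {"mode": "vibrate", "amplitude": 0.6},
    "tilt_right":  {"mode": "tilt", "axis": "x", "degrees": 5},
    "create_bump": {"mode": "raise", "height_mm": 8},
}

def drive(actuator_id, command):
    # Placeholder for the hardware interface to an actuator beneath the surface.
    print(f"actuator {actuator_id}: {command}")

def execute_plan(plan):
    """Translate each plan step into a command for the actuator under that region."""
    for step in plan:
        drive(step["actuator_id"], ACTUATOR_COMMANDS[step["action"]])

execute_plan([{"action": "create_bump", "actuator_id": "a_ne"},
              {"action": "vibrate", "actuator_id": "a_sw"}])
```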
[0039] Further, in one embodiment, play surface 250 may be a surface
upon which one or more people may operate. For example, a person
may wear or hold an object that is monitored by surface mechanism
110 such that play surface 250 may be adjusted in some way to
affect the play of the game, such as a surface change on surface
250 may be caused to move a ball that the person is trying to pick
up during the game.
[0040] Further, in one embodiment, any data collected by camera,
sensors, etc., of capturing/sensing components 221 and by any other
sensors, such as sensors 241, 251, detectors 255, etc., may be
forwarded on to surface/AR logic 203, via sensors data processor
205, to determine an appropriate action and corresponding augmented
reality to be performed at surface 250 using actuators 253 and one
or more output components 223, such as projectors, tactile
displays, etc. For example, in some embodiments, in an AR-enabled system, such as play set 200, a projector of output components 223
may be used to project images, videos, etc., on and around surface
250 during the execution of an action, as facilitated by
application/execution logic 209, to create a more realistic user
experience, such as making AR characters, such as moveable objects
240, blend in with the chosen environment or scenery (such as
choosing beach, downtown, mountains, etc., for racing cars, etc.)
for the game for a convincingly realistic experience.
[0041] In one embodiment, one or more cameras (e.g., 2D/3D cameras)
of capturing/sensing components 221 may be used to capture images,
videos, etc., of AR characters, such as moveable objects 240,
placed on surface 250; similarly, any relevant and surrounding
sounds or audio may be captured by one or more microphones of
capturing/sensing components 221; subsequently, these images,
videos, sounds, etc., may be forwarded on to analytics logic 207 to
identify one or more movable objects 240 to determine their
relevant environmental characteristics, such as recognizing
horizontal levels of surface 250 to bind any AR character actions
of objects 240 so that, for example, they do not fall through the
floor of surface 250.
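The binding of AR character motion to the recognized floor level of surface 250 could be as simple as clamping the vertical coordinate, as in the sketch below; the coordinate convention and floor height are illustrative assumptions.

```python
# Illustrative clamp of an AR character to the detected floor level of the surface.
def bind_to_surface(position, floor_height):
    """Keep the vertical coordinate at or above the recognized floor of surface 250."""
    x, y, z = position
    return (x, y, max(z, floor_height))

print(bind_to_surface((0.4, 0.2, -0.03), floor_height=0.0))  # (0.4, 0.2, 0.0)
```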
[0042] In one embodiment, movable objects 240 may be sensed using
their corresponding sensors 241, and/or identified using a Radio
Frequency Identification (RFID) technique to scan any embedded RFID
tags of movable objects 240. In one embodiment, objects 240 may be
sensed directly by detection/reception logic 201 and their report
may then be provided to sensors data processor 205 or, in another
embodiment, sensors 241 may report directly to sensors data
processor 205 of the movements of their corresponding objects 240.
For example, sensors 241 may be embedded into their corresponding
moveable objects 240 and each time these objects 240 move, their
sensors 241 may sense their movements and provide movement reports
to sensors data processor 205 for further processing, where these
movement reports may include data relating to direction, rotation,
velocity, etc., of movable objects 240.
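A movement report of the kind described above might be assembled as follows; the report fields and the simple speed/heading estimate are assumptions for illustration rather than the actual sensor protocol of objects 240.

```python
# Hypothetical movement report from an embedded object sensor.
import math

def movement_report(prev_xy, curr_xy, dt_s, rotation_rpm):
    """Summarize speed, heading, and rotation between two position samples."""
    dx, dy = curr_xy[0] - prev_xy[0], curr_xy[1] - prev_xy[1]
    speed = math.hypot(dx, dy) / dt_s
    heading_deg = math.degrees(math.atan2(dy, dx))
    return {"speed_m_s": speed, "heading_deg": heading_deg, "rotation_rpm": rotation_rpm}

print(movement_report((0.10, 0.10), (0.13, 0.14), dt_s=0.1, rotation_rpm=4200))
```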
[0043] Further, as illustrated, personal devices 270A-C may
include user interfaces, such as user interface 271 at personal
device 270A, to serve as user input controls for accepting
requests, commands, inputs, etc., by various users/players, where
each user interface 271 may include a dedicated built-in touch
screen, and where such personal devices 270A-C may include any
number and type of computing devices, such as desktop computers,
laptops, tablet computers, smartphones, wearable devices (e.g.,
wearable glasses, bracelets, etc.) connected over communication
medium 260 (e.g., WiFi, Bluetooth link, etc.).
[0044] Further, as aforementioned and in one embodiment, output
components 223 may include tactile displays that create special vibration patterns, use electrostatic approaches, or perform dynamic surface morphing (such as using microfluidics, etc.) to allow for proper sensation of contours and raised surfaces on flat screens. Moreover, such tactile displays may include dynamic tactile touch displays in which raised bumps may appear and disappear based on electrical charges across a special surface, such as surface 250.
[0045] In one embodiment, upon having various components, such as
surface/AR logic 203, analytics logic 207, etc., perform their
corresponding tasks, such as analyzing and evaluating all the available and/or relevant data detected or obtained from sensors 241, 251, detectors 255, capturing/sensing components 221 (e.g., cameras, sensor arrays, etc.), user inputs via personal devices 270A-270C, predetermined criteria and/or rules being maintained at
database 265, etc., surface/AR logic 203 may then forward the
evaluation results of these findings to application/execution logic
209 which may then be triggered to apply these results and execute
any corresponding actions at surface 250.
[0046] In one embodiment, an action that may be performed to alter
properties, conditions, etc., at surface 250 may include a player
choosing a power play to change one or more portions of surface 250
such that they are inhibited or constrained for opponent players,
where such changing surface conditions may include (without
limitation) haptic/vibration beneath surface 250 near the
opponent's top, tilting of part of surface 250, generating physical bumps on surface 250, creating adjustable magnetic fields under surface 250 to slow the top, and/or the like. Similarly, in some
embodiments, other actions may include (without limitation): 1)
vibration under a specific area of surface 250, which may be
achieved through one or more projectors of output components 223
which may be used to project a texture on that specific area of
surface 250 so that the specific area appears rough; 2) player
scores, which may be achieved by having one or more projectors of
output components 223 project or indicate the score at a particular
spot of scoring on surface 250; and 3) top moves across surface
250, which may be achieved by having one or more projectors of
output components 223 project a previous or anticipated path of the top; and/or the like.
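To make the "power play" idea above concrete, the sketch below composes a few surface effects around an opponent's position; the effect names, parameters, and region radius are hypothetical choices, not values specified by the application.

```python
# Illustrative composition of a "power play" into localized surface effects.
def power_play(opponent_xy, kind="slow_down", radius=0.05):
    """Build a list of surface effects centered on the opponent's top."""
    effects = {
        "slow_down":   [{"effect": "magnetic_field", "strength": 0.8}],
        "destabilize": [{"effect": "vibrate", "amplitude": 0.5},
                        {"effect": "bump", "height_mm": 6}],
        "tilt":        [{"effect": "tilt", "degrees": 4}],
    }
    return [{"center": opponent_xy, "radius": radius, **e} for e in effects[kind]]

for step in power_play((0.7, 0.3), kind="destabilize"):
    print(step)
```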
[0047] Moreover, in one embodiment, communication/compatibility
logic 211 may include various components relating to communication,
messaging, compatibility, etc., such as connectivity and messaging
logic, to facilitate communication and exchange of messages and
data, etc., between surface mechanism 110 at computing device 100,
movable objects 240, surface 250, personal devices 270A-270C,
and/or the like.
[0048] Communication/compatibility logic 211 may be used to
facilitate dynamic communication and compatibility between
computing device 100, personal devices 270A-C, database(s) 265,
etc., and any number and type of other computing devices (such as
wearable computing devices, mobile computing devices, desktop
computers, server computing devices, etc.), processing devices
(e.g., central processing unit (CPU), graphics processing unit
(GPU), etc.), capturing/sensing components (e.g., non-visual data
sensors/detectors, such as audio sensors, olfactory sensors, haptic
sensors, signal sensors, vibration sensors, chemicals detectors,
radio wave detectors, force sensors, weather/temperature sensors,
body/biometric sensors, scanners, etc., and visual data
sensors/detectors, such as cameras, etc.), user/context-awareness
components and/or identification/verification sensors/devices (such
as biometric sensors/detectors, scanners, etc.), memory or storage
devices, data sources, and/or database(s) (such as data storage
devices, hard drives, solid-state drives, hard disks, memory cards
or devices, memory circuits, etc.), network(s) (e.g., Cloud
network, the Internet, intranet, cellular network, proximity
networks, such as Bluetooth, Bluetooth low energy (BLE), Bluetooth
Smart, Wi-Fi proximity, Radio Frequency Identification (RFID), Near
Field Communication (NFC), Body Area Network (BAN), etc.), wireless
or wired communications and relevant protocols (e.g., Wi-Fi.RTM.,
WiMAX, Ethernet, etc.), connectivity and location management
techniques, software applications/websites, (e.g., social and/or
business networking websites, business applications, games and
other entertainment applications, etc.), programming languages,
etc., while ensuring compatibility with changing technologies,
parameters, protocols, standards, etc.
[0049] Throughout this document, terms like "logic", "component",
"module", "framework", "engine", "tool", and the like, may be
referenced interchangeably and include, by way of example,
software, hardware, and/or any combination of software and
hardware, such as firmware. Further, any use of a particular brand,
word, term, phrase, name, and/or acronym, such as "play set", "play
device", "surface", "arena", "movable objects", "attachments",
"game surface adjustment", Beyblade.TM.", "personal device", "toy",
"game", "player", "smart device", "mobile computer", "wearable
device", etc., should not be read to limit embodiments to software
or devices that carry that label in products or in literature
external to this document.
[0050] It is contemplated that any number and type of components
may be added to and/or removed from surface mechanism 110 to
facilitate various embodiments including adding, removing, and/or
enhancing certain features. For brevity, clarity, and ease of
understanding of surface mechanism 110, many of the standard and/or
known components, such as those of a computing device, are not
shown or discussed here. It is contemplated that embodiments, as
described herein, are not limited to any particular technology,
topology, system, architecture, and/or standard and are dynamic
enough to adopt and adapt to any future changes.
[0051] FIG. 2B illustrates an architectural placement 280 according
to one embodiment. As an initial matter, for brevity, clarity, and
ease of understanding, many of the components and processes
discussed above with reference to FIGS. 1-2A may not be repeated or
discussed hereafter.
[0052] It is contemplated and to be noted that embodiments are not
limited to any particular architecture setup, such as architectural
placement 280, or as described with reference to FIG. 2A, and that
any number and type of components may be employed, placed, and used
in any manner or form to perform the relevant tasks for
facilitating game surface adjustment.
[0053] As illustrated here, in one embodiment, surface mechanism
110 may include any number and type of components, such as (without
limitation): surface/AR logic 203; V/A logic 207; sensors data
processor 205; and application/execution logic 209. As further
illustrated and previously described with reference to FIG. 2A, in
one embodiment, surface mechanism 110 includes
communication/compatibility logic 211 which includes connectivity
and messaging module 281 for performing various tasks relating to
connectivity and messaging between various components of surface
mechanism 110 as well as other components and devices, such as
moveable objects 240, surface 250, personal device 270A,
communication medium 260 of FIG. 2A, etc.
[0054] Similarly, as illustrated and described with reference to
FIG. 2A, detection/reception logic 201 may include user input
console 283 to allow a user/player to provide inputs, place
requests, set preferences, etc., in addition to or in lieu of using
personal device 270A. As further illustrated, in one embodiment,
database 265, serving as rules database/storage, may be part of
surface mechanism 110 at computing device 100 of FIG. 2A.
[0055] In some embodiments, other components include 2D/3D camera
291 of capturing/sensing components 221, and projector 293 of output
components 223 of FIG. 2A. Similarly, as illustrated, additional
components include moveable objects/attachments 240 having sensors
241, and active play arena/surface 250 having surface sensors 251,
actuators 253, and physical effects detectors 255 as previously
described with reference to FIG. 2A.
[0056] FIG. 3 illustrates a use case scenario 300 according to one
embodiment. As an initial matter, for brevity, clarity, and ease of
understanding, many of the components and processes discussed above
with reference to FIGS. 1-2B may not be repeated or discussed
hereafter. Further, it is contemplated and to be noted that
embodiments are not limited to this particular use case scenario
300 or any of its particular components, such as Beyblade.TM.-like
games, toys, surfaces, etc., and that any number and type of other
play sets and components may be used for facilitating game surface
adjustment as facilitated by surface mechanism 110 of FIG. 2A. As
aforementioned, some examples of other play sets using surface
mechanism 110 may include (without limitation) train sets, race car
sets, boating sets, plane sets, Hot Wheels.RTM. race track sets,
Matchbox.RTM. track sets, and even board games like Chutes and
Ladders.TM., etc.
[0057] Referring now to the illustrated embodiment, for example, a
Beyblade.TM.-like play set may be used where surface 250 is shown to
have a couple of spinning tops, such as moveable objects 240. In
this embodiment and as discussed with reference to FIG. 2A, various
commands for actions to be performed at surface 250 may be received
from moveable objects 240, such as using their sensors 241 of FIG.
2A, and/or from user 301 placing one or more commands directly
using user interface 271 of personal device 270A of FIG. 2A and/or
user input console 283 of detection/reception logic 201 of FIG. 2B.
In one embodiment, such commands may then be processed by various
components of surface mechanism 110 of FIG. 2B, resulting in an
action being performed with respect to one or more
characteristics/properties of surface 250, such as (as illustrated)
projecting an image, such as projected surface image 303, on a
particular area of surface 250 using projector 293 and actuating
the action within the area using one or more actuators 253 that are
typically placed beneath surface 250.
[0058] FIG. 4A illustrates a method 400 for facilitating game
surface adjustment according to one embodiment. Method 400 may be
performed by processing logic that may comprise hardware (e.g.,
circuitry, dedicated logic, programmable logic, etc.), software
(such as instructions run on a processing device), or a combination
thereof. In one embodiment, method 400 may be performed by surface
mechanism 110 of FIG. 2A. The processes of method 400 are
illustrated in linear sequences for brevity and clarity in
presentation; however, it is contemplated that any number of them
can be performed in parallel, asynchronously, or in different
orders. For brevity, many of the details discussed with reference
to the previous figures may not be discussed or repeated
hereafter.
[0059] Method 400 begins with block 401 with one or more sensors,
such as moveable object sensors, surface sensors, cameras, etc.,
tracking the location of a game piece, such as a moveable object, on a
surface of a play set. At block 403, a determination is made as to
whether the tracked location triggers an effect of an action being
performed on the surface. If not, method 400 continues with the
process of block 401. If yes, method 400 continues at block 405
with an application of the effect to facilitate the action on the
surface.
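A minimal sketch of the loop of method 400 (blocks 401-405) follows; the trigger zone, the stand-in read_location() source, and the printed effect are illustrative assumptions rather than the claimed method.

```python
# Illustrative loop for method 400; zone, effect, and read_location() are stand-ins.
def read_location():
    # Stand-in for object-sensor, surface-sensor, or camera tracking (block 401).
    yield from [(0.1, 0.1), (0.45, 0.52), (0.8, 0.8)]

def triggers_effect(xy, zone_center=(0.5, 0.5), zone_radius=0.1):
    # Block 403: does the tracked location fall inside a trigger zone?
    return (xy[0] - zone_center[0]) ** 2 + (xy[1] - zone_center[1]) ** 2 <= zone_radius ** 2

for location in read_location():
    if triggers_effect(location):
        print(f"apply effect at {location}")   # block 405
    else:
        print(f"keep tracking {location}")     # return to block 401
```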
[0060] FIG. 4B illustrates a method 430 for facilitating game
surface adjustment according to one embodiment. Method 430 may be
performed by processing logic that may comprise hardware (e.g.,
circuitry, dedicated logic, programmable logic, etc.), software
(such as instructions run on a processing device), or a combination
thereof. In one embodiment, method 430 may be performed by surface
mechanism 110 of FIG. 2A. The processes of method 430 are
illustrated in linear sequences for brevity and clarity in
presentation; however, it is contemplated that any number of them
can be performed in parallel, asynchronously, or in different
orders. For brevity, many of the details discussed with reference
to the previous figures may not be discussed or repeated
hereafter.
[0061] Method 430 begins at block 431 with a player using a play
set initiating and placing a command (e.g., spell command) to
facilitate an action at a surface of the play set. At block 433,
the command is detected and, at block 435, the effect (e.g., spell
effect) is looked up. At block 437, the effect is implemented to
facilitate the action at the surface of the play set.
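Method 430 (blocks 431-437) might be sketched as a simple lookup-and-apply flow; the spell table and apply() helper below are hypothetical names introduced for illustration.

```python
# Illustrative lookup-and-apply flow for method 430; names are hypothetical.
SPELL_EFFECTS = {
    "quake": {"actuator": "vibrate", "amplitude": 0.7},
    "wall":  {"actuator": "raise", "height_mm": 12},
}

def apply(effect):
    print(f"implementing effect: {effect}")    # block 437

def handle_command(command):
    effect = SPELL_EFFECTS.get(command)        # block 435: look up the effect
    if effect is not None:
        apply(effect)

handle_command("quake")                        # blocks 431-433: command placed and detected
```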
[0062] FIG. 4C illustrates a method 450 for facilitating game
surface adjustment according to one embodiment. Method 450 may be
performed by processing logic that may comprise hardware (e.g.,
circuitry, dedicated logic, programmable logic, etc.), software
(such as instructions run on a processing device), or a combination
thereof. In one embodiment, method 450 may be performed by surface
mechanism 110 of FIG. 2A. The processes of method 450 are
illustrated in linear sequences for brevity and clarity in
presentation; however, it is contemplated that any number of them
can be performed in parallel, asynchronously, or in different
orders. For brevity, many of the details discussed with reference
to the previous figures may not be discussed or repeated
hereafter.
[0063] Method 450 begins at block 451 with one or more components
of surface mechanism 110 of FIG. 2B receiving or detecting user
inputs/commands and/or predetermined criteria relating to an action
to be performed on a surface of a play set and continues, at block
453, with detection or reception of sensory data, as retrieved by
various sensors, detectors, cameras, etc. At block 455, the user
inputs/commands, predetermined criteria, and/or sensor data are
evaluated and, as a result, at block 457, an action plan (e.g., an AR-based action plan) to facilitate the action on the surface is prepared.
At block 459, the action plan is applied and the action is executed
at the surface of the play set.
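A rough sketch of the pipeline of method 450 (blocks 451-459) is given below; the evaluation heuristic, plan format, and AR overlay names are assumptions for illustration and not the claimed method itself.

```python
# Illustrative pipeline for method 450; heuristics and plan format are assumptions.
def evaluate(user_inputs, criteria, sensor_data):
    """Blocks 451-455: merge inputs, criteria, and sensory data into chosen actions."""
    actions = [i["action"] for i in user_inputs]
    if sensor_data.get("speed_m_s", 0) > 1.0 and criteria.get("auto_bump"):
        actions.append("create_bump")
    return actions

def prepare_plan(actions):
    """Block 457: turn selected actions into an ordered, AR-annotated plan."""
    return [{"step": n, "action": a, "ar_overlay": f"{a}_texture"}
            for n, a in enumerate(actions, start=1)]

def execute(plan):
    """Block 459: apply the plan at the surface."""
    for step in plan:
        print(step)

execute(prepare_plan(evaluate([{"action": "tilt_right"}],
                              {"auto_bump": True},
                              {"speed_m_s": 1.4})))
```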
[0064] FIG. 5 illustrates an embodiment of a computing system 500
capable of supporting the operations discussed above. Computing
system 500 represents a range of computing and electronic devices
(wired or wireless) including, for example, desktop computing
systems, laptop computing systems, cellular telephones, personal
digital assistants (PDAs) including cellular-enabled PDAs, set top
boxes, smartphones, tablets, wearable devices, etc. Alternate
computing systems may include more, fewer and/or different
components. Computing system 500 may be the same as, similar to, or include computing device 100 described with reference to FIG. 1.
[0065] Computing system 500 includes bus 505 (or, for example, a
link, an interconnect, or another type of communication device or
interface to communicate information) and processor 510 coupled to
bus 505 that may process information. While computing system 500 is
illustrated with a single processor, it may include multiple
processors and/or co-processors, such as one or more of central
processors, image signal processors, graphics processors, and
vision processors, etc. Computing system 500 may further include
random access memory (RAM) or other dynamic storage device 520
(referred to as main memory), coupled to bus 505 and may store
information and instructions that may be executed by processor 510.
Main memory 520 may also be used to store temporary variables or
other intermediate information during execution of instructions by
processor 510.
[0066] Computing system 500 may also include read only memory (ROM)
and/or other storage device 530 coupled to bus 505 that may store
static information and instructions for processor 510. Data storage device 540 may be coupled to bus 505 to store information and instructions. Data storage device 540, such as a magnetic disk or optical disc and a corresponding drive, may be coupled to computing system 500.
[0067] Computing system 500 may also be coupled via bus 505 to
display device 550, such as a cathode ray tube (CRT), liquid
crystal display (LCD) or Organic Light Emitting Diode (OLED) array,
to display information to a user. User input device 560, including
alphanumeric and other keys, may be coupled to bus 505 to
communicate information and command selections to processor 510.
Another type of user input device 560 is cursor control 570, such
as a mouse, a trackball, a touchscreen, a touchpad, or cursor
direction keys to communicate direction information and command
selections to processor 510 and to control cursor movement on
display 550. Camera and microphone arrays 590 of computer system
500 may be coupled to bus 505 to observe gestures, record audio and
video and to receive and transmit visual and audio commands.
[0068] Computing system 500 may further include network
interface(s) 580 to provide access to a network, such as a local
area network (LAN), a wide area network (WAN), a metropolitan area
network (MAN), a personal area network (PAN), Bluetooth, a cloud
network, a mobile network (e.g., 3.sup.rd Generation (3G), etc.),
an intranet, the Internet, etc. Network interface(s) 580 may
include, for example, a wireless network interface having antenna
585, which may represent one or more antenna(e). Network
interface(s) 580 may also include, for example, a wired network
interface to communicate with remote devices via network cable 587,
which may be, for example, an Ethernet cable, a coaxial cable, a
fiber optic cable, a serial cable, or a parallel cable.
[0069] Network interface(s) 580 may provide access to a LAN, for
example, by conforming to IEEE 802.11b and/or IEEE 802.11g
standards, and/or the wireless network interface may provide access
to a personal area network, for example, by conforming to Bluetooth
standards. Other wireless network interfaces and/or protocols,
including previous and subsequent versions of the standards, may
also be supported.
[0070] In addition to, or instead of, communication via the
wireless LAN standards, network interface(s) 580 may provide
wireless communication using, for example, Time Division Multiple
Access (TDMA) protocols, Global System for Mobile Communications
(GSM) protocols, Code Division Multiple Access (CDMA) protocols,
and/or any other type of wireless communications protocols.
[0071] Network interface(s) 580 may include one or more
communication interfaces, such as a modem, a network interface
card, or other well-known interface devices, such as those used for
coupling to the Ethernet, token ring, or other types of physical
wired or wireless attachments for purposes of providing a
communication link to support a LAN or a WAN, for example. In this
manner, the computer system may also be coupled to a number of
peripheral devices, clients, control surfaces, consoles, or servers
via a conventional network infrastructure, including an Intranet or
the Internet, for example.
[0072] It is to be appreciated that a lesser or more equipped
system than the example described above may be preferred for
certain implementations. Therefore, the configuration of computing
system 500 may vary from implementation to implementation depending
upon numerous factors, such as price constraints, performance
requirements, technological improvements, or other circumstances.
Examples of the electronic device or computer system 500 may
include without limitation a mobile device, a personal digital
assistant, a mobile computing device, a smartphone, a cellular
telephone, a handset, a one-way pager, a two-way pager, a messaging
device, a computer, a personal computer (PC), a desktop computer, a
laptop computer, a notebook computer, a handheld computer, a tablet
computer, a server, a server array or server farm, a web server, a
network server, an Internet server, a work station, a
mini-computer, a main frame computer, a supercomputer, a network
appliance, a web appliance, a distributed computing system,
multiprocessor systems, processor-based systems, consumer
electronics, programmable consumer electronics, television, digital
television, set top box, wireless access point, base station,
subscriber station, mobile subscriber center, radio network
controller, router, hub, gateway, bridge, switch, machine, or
combinations thereof.
[0073] Embodiments may be implemented as any or a combination of:
one or more microchips or integrated circuits interconnected using
a motherboard, hardwired logic, software stored by a memory device
and executed by a microprocessor, firmware, an application specific
integrated circuit (ASIC), and/or a field programmable gate array
(FPGA). The term "logic" may include, by way of example, software
or hardware and/or combinations of software and hardware.
[0074] Embodiments may be provided, for example, as a computer
program product which may include one or more machine-readable
media having stored thereon machine-executable instructions that,
when executed by one or more machines such as a computer, network
of computers, or other electronic devices, may result in the one or
more machines carrying out operations in accordance with
embodiments described herein. A machine-readable medium may
include, but is not limited to, floppy diskettes, optical disks,
CD-ROMs (Compact Disc-Read Only Memories), and magneto-optical
disks, ROMs, RAMs, EPROMs (Erasable Programmable Read Only
Memories), EEPROMs (Electrically Erasable Programmable Read Only
Memories), magnetic or optical cards, flash memory, or other type
of media/machine-readable medium suitable for storing
machine-executable instructions.
[0075] Moreover, embodiments may be downloaded as a computer
program product, wherein the program may be transferred from a
remote computer (e.g., a server) to a requesting computer (e.g., a
client) by way of one or more data signals embodied in and/or
modulated by a carrier wave or other propagation medium via a
communication link (e.g., a modem and/or network connection).
[0076] References to "one embodiment", "an embodiment", "example
embodiment", "various embodiments", etc., indicate that the
embodiment(s) so described may include particular features,
structures, or characteristics, but not every embodiment
necessarily includes the particular features, structures, or
characteristics. Further, some embodiments may have some, all, or
none of the features described for other embodiments.
[0077] In the following description and claims, the term "coupled,"
along with its derivatives, may be used. "Coupled" is used to
indicate that two or more elements co-operate or interact with each
other, but they may or may not have intervening physical or
electrical components between them.
[0078] As used in the claims, unless otherwise specified, the use of
the ordinal adjectives "first", "second", "third", etc., to describe
a common element merely indicates that different instances of like
elements are being referred to, and is not intended to imply that
the elements so described must be in a given sequence, either
temporally, spatially, in ranking, or in any other manner.
[0079] FIG. 6 illustrates an embodiment of a computing environment
600 capable of supporting the operations discussed above. The
modules and systems can be implemented in a variety of different
hardware architectures and form factors including that shown in
FIG. 4.
[0080] The Command Execution Module 601 includes a central
processing unit to cache and execute commands and to distribute
tasks among the other modules and systems shown. It may include an
instruction stack, a cache memory to store intermediate and final
results, and mass memory to store applications and operating
systems. The Command Execution Module may also serve as a central
coordination and task allocation unit for the system.
[0081] The Screen Rendering Module 621 draws objects on one or more
of the multiple screens for the user to see. It can be adapted to
receive data from the Virtual Object Behavior Module 604, described
below, and to render the virtual object and any other objects and
forces on the appropriate screen or screens. Thus, the data from the
Virtual Object Behavior Module would determine the position and
dynamics of the virtual object and associated gestures, forces, and
objects, for example, and the Screen Rendering Module would depict
the virtual object and associated objects and environment on a
screen accordingly. The Screen Rendering Module could further be
adapted to receive data from the Adjacent Screen Perspective Module
607, described below, to depict a target landing area for the
virtual object if the virtual object could be moved to the display
of the device with which the Adjacent Screen Perspective Module is
associated. Thus, for example, if the virtual object is being moved
from a main screen to an auxiliary screen, the Adjacent Screen
Perspective Module 607 could send data to the Screen Rendering
Module to suggest, for example in shadow form, one or more target
landing areas for the virtual object that track a user's hand
movements or eye movements.
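By way of a non-limiting illustration only, the hand-off just described might resemble the following Python sketch, in which the behavior data and an optional target landing area are reduced to simple draw commands; the function name and data structures are hypothetical and are not the Screen Rendering Module's actual interface.

# Illustrative sketch only: reduce behavior data and an optional landing area
# (from the adjacent-screen logic) to draw commands, with the landing area
# drawn "in shadow form". All names and structures are assumptions.
def render_frame(behavior_data, landing_area=None):
    commands = [("draw_object", behavior_data["position"], behavior_data["velocity"])]
    if landing_area is not None:
        # Suggested target region on the adjacent screen, rendered as a shadow.
        commands.append(("draw_shadow", landing_area))
    return commands

print(render_frame({"position": (120, 80), "velocity": (300, -40)},
                   landing_area=("aux", (40, 60, 200, 180))))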
[0082] The Object and Gesture Recognition System 622 may be adapted
to recognize and track hand and arm gestures of a user. Such a
module may be used to recognize hands, fingers, finger gestures,
hand movements and a location of hands relative to displays. For
example, the Object and Gesture Recognition System could determine
that a user made a body part gesture to drop or throw a virtual
object onto one or the other of the multiple screens, or that the
user made a body part gesture to move the virtual object to a bezel
of one or the other of the multiple screens. The Object and Gesture
Recognition System may be coupled to a camera or camera array, a
microphone or microphone array, a touch screen or touch surface, or
a pointing device, or some combination of these items, to detect
gestures and commands from the user.
[0083] The touch screen or touch surface of the Object and Gesture
Recognition System may include a touch screen sensor. Data from the
sensor may be fed to hardware, software, firmware or a combination
of the same to map the touch gesture of a user's hand on the screen
or surface to a corresponding dynamic behavior of a virtual object.
The sensor data may be used with momentum and inertia factors to
allow a variety of momentum behaviors for a virtual object based on
input from the user's hand, such as the swipe rate of a user's
finger relative to the screen. Pinching gestures may be interpreted as a
command to lift a virtual object from the display screen, or to
begin generating a virtual binding associated with the virtual
object or to zoom in or out on a display. Similar commands may be
generated by the Object and Gesture Recognition System using one or
more cameras without benefit of a touch surface.
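As a minimal, non-limiting sketch of the kind of mapping just described, the following Python code converts a swipe's rate into a virtual object's velocity and then lets the object coast with a simple friction term; the class names, the gain, and the friction model are illustrative assumptions rather than the described system's implementation.

# Illustrative sketch only: convert a swipe's rate into a virtual object's
# velocity, then advance the object with a simple friction term so it coasts.
# The class names, gain, and friction model are assumptions.
from dataclasses import dataclass

@dataclass
class Swipe:
    dx: float          # finger displacement along x (pixels)
    dy: float          # finger displacement along y (pixels)
    duration_s: float  # time the finger was in contact (seconds)

@dataclass
class VirtualObject:
    x: float = 0.0
    y: float = 0.0
    vx: float = 0.0
    vy: float = 0.0

def apply_swipe(obj, swipe, gain=1.0):
    """Map swipe rate (pixels per second) to object velocity."""
    obj.vx = gain * swipe.dx / swipe.duration_s
    obj.vy = gain * swipe.dy / swipe.duration_s

def step(obj, dt, friction=2.0):
    """Advance position and decay velocity to mimic momentum with friction."""
    obj.x += obj.vx * dt
    obj.y += obj.vy * dt
    decay = max(0.0, 1.0 - friction * dt)
    obj.vx *= decay
    obj.vy *= decay

obj = VirtualObject()
apply_swipe(obj, Swipe(dx=300, dy=0, duration_s=0.25))  # fast horizontal flick
for _ in range(10):
    step(obj, dt=0.016)                                  # ~60 Hz update loop
print(round(obj.x, 1), round(obj.vx, 1))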
[0084] The Direction of Attention Module 623 may be equipped with
cameras or other sensors to track the position or orientation of a
user's face or hands. When a gesture or voice command is issued,
the system can determine the appropriate screen for the gesture. In
one example, a camera is mounted near each display to detect
whether the user is facing that display. If so, then the Direction
of Attention Module's information is provided to the Object and
Gesture Recognition System 622 to ensure that the gestures or
commands are associated with the appropriate library for the active
display. Similarly, if the user is looking away from all of the
screens, then commands can be ignored.
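A minimal sketch of this routing follows, assuming a per-display face-detection flag produced by the cameras mounted near each display; the function names and return values are hypothetical.

# Illustrative sketch only: route a command to the display the user is facing,
# or ignore it when the user faces none. The per-display flags are assumed to
# come from face detection on cameras mounted near each display.
from typing import Dict, Optional

def select_active_display(facing: Dict[str, bool]) -> Optional[str]:
    """Return the id of a display the user is facing, or None."""
    for display_id, is_facing in facing.items():
        if is_facing:
            return display_id
    return None

def dispatch_command(command: str, facing: Dict[str, bool]) -> str:
    display = select_active_display(facing)
    if display is None:
        return "ignored: user is looking away from all screens"
    return "'%s' routed to gesture library for %s" % (command, display)

print(dispatch_command("pinch", {"main": False, "aux": True}))
print(dispatch_command("pinch", {"main": False, "aux": False}))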
[0085] The Device Proximity Detection Module 625 can use proximity
sensors, compasses, GPS (global positioning system) receivers,
personal area network radios, and other types of sensors, together
with triangulation and other techniques to determine the proximity
of other devices. Once a nearby device is detected, it can be
registered to the system and its type can be determined as an input
device or a display device or both. For an input device, received
data may then be applied to the Object and Gesture Recognition
System 622. For a display device, it may be considered by the
Adjacent Screen Perspective Module 607.
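A minimal sketch of the registration and routing step follows, assuming discovery has already produced a device identifier and a set of roles; the DeviceRegistry class is illustrative, and the discovery techniques themselves (proximity sensors, radios, triangulation) are not modeled.

# Illustrative sketch only: once a nearby device is detected and typed, keep
# it in a registry and route its data to the gesture system (input role) or
# to the adjacent-screen logic (display role). Names are assumptions.
class DeviceRegistry:
    def __init__(self):
        self.devices = {}  # device_id -> set of roles, e.g. {"input", "display"}

    def register(self, device_id, roles):
        self.devices[device_id] = set(roles)

    def route(self, device_id, data):
        roles = self.devices.get(device_id, set())
        targets = []
        if "input" in roles:
            targets.append("Object and Gesture Recognition System 622")
        if "display" in roles:
            targets.append("Adjacent Screen Perspective Module 607")
        return (data, targets or ["unregistered device"])

registry = DeviceRegistry()
registry.register("tablet-1", {"input", "display"})
print(registry.route("tablet-1", {"touch": (120, 45)}))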
[0086] The Virtual Object Behavior Module 604 is adapted to receive
input from the Object and Velocity and Direction Module, and to
apply such input to a virtual object being shown in the display.
Thus, for example, the Object and Gesture Recognition System would
interpret a user gesture by mapping the captured movements of a
user's hand to recognized movements; the Virtual Object Tracker
Module would associate the virtual object's position and movements
to the movements recognized by the Object and Gesture Recognition
System; the Object and Velocity and Direction Module would capture
the dynamics of the virtual object's movements; and the Virtual
Object Behavior Module would receive the input from the Object and
Velocity and Direction Module to generate data that would direct the
movements of the virtual object to correspond to that input.
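The chain of modules just described can be pictured as the following Python sketch, with each module reduced to one function and the dynamics estimated by finite differences; this is an illustrative simplification, not the modules' actual interfaces.

# Illustrative sketch only: recognized hand movements drive the tracked object
# position, finite differences give its dynamics, and the behavior step emits
# the data a renderer would consume. Everything here is a simplification.
def recognize_gesture(raw_hand_positions):
    """Object and Gesture Recognition: map raw samples to recognized movements."""
    return [(float(x), float(y)) for x, y in raw_hand_positions]

def track_object(movements):
    """Virtual Object Tracker: associate the object's position with the hand."""
    return movements[-1]  # object follows the most recent hand position

def estimate_dynamics(movements, dt):
    """Object and Velocity and Direction: finite-difference velocity estimate."""
    (x0, y0), (x1, y1) = movements[-2], movements[-1]
    return ((x1 - x0) / dt, (y1 - y0) / dt)

def behavior(position, velocity):
    """Virtual Object Behavior: data that directs the on-screen movement."""
    return {"position": position, "velocity": velocity}

samples = [(0, 0), (2, 1), (5, 3)]
movements = recognize_gesture(samples)
print(behavior(track_object(movements), estimate_dynamics(movements, dt=0.033)))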
[0087] The Virtual Object Tracker Module 606, on the other hand, may
be adapted to track where a virtual object should be located in
three-dimensional space in a vicinity of a display, and which body
part of the user is holding the virtual object, based on input from
the Object and Gesture Recognition System. The Virtual Object
Tracker Module 606 may for example track a virtual object as it
moves across and between screens and track which body part of the
user is holding that virtual object. Tracking the body part that is
holding the virtual object allows a continuous awareness of the
body part's air movements, and thus an eventual awareness as to
whether the virtual object has been released onto one or more
screens.
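A minimal sketch of the release-awareness step follows, assuming a hypothetical "hand open" flag from gesture recognition and rectangular screen regions; both are illustrative assumptions.

# Illustrative sketch only: the tracker follows which hand holds the virtual
# object and, when the hand opens over a screen's region, reports the object
# as released onto that screen. Regions and the flag are assumptions.
def detect_release(hand_pos, hand_open, screen_regions):
    """Return the id of the screen the object was released onto, or None."""
    if not hand_open:
        return None  # object is still being held
    x, y = hand_pos
    for screen_id, (x0, y0, x1, y1) in screen_regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return screen_id
    return None

regions = {"main": (0, 0, 1920, 1080), "aux": (1920, 0, 3840, 1080)}
print(detect_release((2200, 500), hand_open=True, screen_regions=regions))  # aux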
[0088] The Gesture to View and Screen Synchronization Module 608
receives the selection of the view and screen or both from the
Direction of Attention Module 623 and, in some cases, voice
commands to determine which view is the active view and which
screen is the active screen. It then causes the relevant gesture
library to be loaded for the Object and Gesture Recognition System
622. Various views of an application on one or more screens can be
associated with alternative gesture libraries or a set of gesture
templates for a given view. As an example, in FIG. 1A a
pinch-release gesture launches a torpedo, but in FIG. 1B the same
gesture launches a depth charge.
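A minimal sketch of keying the gesture library on the active view and screen follows, so that the same gesture yields different commands, as in the torpedo and depth-charge example above; the view names and library contents are illustrative assumptions.

# Illustrative sketch only: the loaded gesture library is keyed on the active
# view and screen, so one physical gesture maps to different commands per
# view. View names and library contents are hypothetical.
GESTURE_LIBRARIES = {
    ("surface_view", "main"): {"pinch_release": "launch torpedo"},
    ("periscope_view", "main"): {"pinch_release": "launch depth charge"},
}

def command_for_gesture(active_view, active_screen, gesture):
    library = GESTURE_LIBRARIES.get((active_view, active_screen), {})
    return library.get(gesture, "unmapped gesture")

print(command_for_gesture("surface_view", "main", "pinch_release"))
print(command_for_gesture("periscope_view", "main", "pinch_release"))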
[0089] The Adjacent Screen Perspective Module 607, which may
include or be coupled to the Device Proximity Detection Module 625,
may be adapted to determine an angle and position of one display
relative to another display. A projected display includes, for
example, an image projected onto a wall or screen. The ability to
detect a proximity of a nearby screen and a corresponding angle or
orientation of a display projected therefrom may for example be
accomplished with either an infrared emitter and receiver, or
electromagnetic or photo-detection sensing capability. For
technologies that allow projected displays with touch input, the
incoming video can be analyzed to determine the position of a
projected display and to correct for the distortion caused by
displaying at an angle. An accelerometer, magnetometer, compass, or
camera can be used to determine the angle at which a device is
being held while infrared emitters and cameras could allow the
orientation of the screen device to be determined in relation to
the sensors on an adjacent device. The Adjacent Screen Perspective
Module 607 may, in this way, determine coordinates of an adjacent
screen relative to its own screen coordinates. Thus, the Adjacent
Screen Perspective Module may determine which devices are in
proximity to each other, and further potential targets for moving
one or more virtual objects across screens. The Adjacent Screen
Perspective Module may further allow the position of the screens to
be correlated to a model of three-dimensional space representing
all of the existing objects and virtual objects.
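As a minimal illustration of expressing an adjacent screen's coordinates in the local screen's frame, the following Python sketch applies a plain rotation and translation given an estimated relative angle and offset; calibration and correction for projection distortion are omitted, and the function name is hypothetical.

# Illustrative sketch only: express a point from an adjacent screen's 2D frame
# in this screen's frame, given the adjacent screen's estimated angle and
# origin offset (however those were sensed).
import math

def adjacent_to_local(point, angle_deg, offset):
    """Rotate a point by the relative screen angle, then translate by offset."""
    x, y = point
    a = math.radians(angle_deg)
    xr = x * math.cos(a) - y * math.sin(a)
    yr = x * math.sin(a) + y * math.cos(a)
    return (xr + offset[0], yr + offset[1])

# Adjacent screen rotated 90 degrees and sitting 1920 px to the right.
print(adjacent_to_local((100, 50), angle_deg=90.0, offset=(1920.0, 0.0)))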
[0090] The Object and Velocity and Direction Module 603 may be
adapted to estimate the dynamics of a virtual object being moved,
such as its trajectory, velocity (whether linear or angular),
momentum (whether linear or angular), etc. by receiving input from
the Virtual Object Tracker Module. The Object and Velocity and
Direction Module may further be adapted to estimate dynamics of any
physics forces, by for example estimating the acceleration,
deflection, degree of stretching of a virtual binding, etc. and the
dynamic behavior of a virtual object once released by a user's body
part. The Object and Velocity and Direction Module may also use
image motion, size, and angle changes to estimate the velocity of
objects, such as the velocity of hands and fingers.
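A minimal sketch of such an estimate from successive tracked samples using finite differences follows; the mass, the sampling interval, and the returned fields are illustrative assumptions.

# Illustrative sketch only: estimate linear velocity, angular velocity, and
# linear momentum from two tracked samples by finite differences.
import math

def estimate_motion(p0, p1, theta0, theta1, dt, mass=1.0):
    vx, vy = (p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt
    speed = math.hypot(vx, vy)
    omega = (theta1 - theta0) / dt  # angular velocity (radians per second)
    return {
        "velocity": (vx, vy),
        "speed": speed,
        "angular_velocity": omega,
        "linear_momentum": (mass * vx, mass * vy),
    }

print(estimate_motion(p0=(0, 0), p1=(3, 4), theta0=0.0, theta1=0.2, dt=0.1))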
[0091] The Momentum and Inertia Module 602 can use image motion,
image size, and angle changes of objects in the image plane or in a
three-dimensional space to estimate the velocity and direction of
objects in the space or on a display. The Momentum and Inertia
Module is coupled to the Object and Gesture Recognition System 622
to estimate the velocity of gestures performed by hands, fingers,
and other body parts, and then to apply those estimates to determine
momentum and velocities of virtual objects that are to be affected
by the gesture.
[0092] The 3D Image Interaction and Effects Module 605 tracks user
interaction with 3D images that appear to extend out of one or more
screens. The influence of objects in the z-axis (towards and away
from the plane of the screen) can be calculated together with the
relative influence of these objects upon each other. For example,
an object thrown by a user gesture can be influenced by 3D objects
in the foreground before the virtual object arrives at the plane of
the screen. These objects may change the direction or velocity of
the projectile or destroy it entirely. The object can be rendered
by the 3D Image Interaction and Effects Module in the foreground on
one or more of the displays.
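A minimal sketch of that z-axis interaction follows: a thrown object starts in front of the screen plane (z greater than zero), moves toward z equal to zero, and foreground obstacles it passes through may deflect or destroy it; the geometry and the deflection rule are illustrative assumptions.

# Illustrative sketch only: advance a thrown virtual object toward the screen
# plane at z = 0; foreground obstacles deflect or destroy it along the way.
def fly_toward_screen(pos, vel, obstacles, dt=0.05, steps=200):
    """Advance the object until it reaches z <= 0 or is destroyed."""
    x, y, z = pos
    vx, vy, vz = vel
    for _ in range(steps):
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        for ob in obstacles:
            if abs(x - ob["x"]) < ob["r"] and abs(y - ob["y"]) < ob["r"] and abs(z - ob["z"]) < ob["r"]:
                if ob["solid"]:
                    return ("destroyed", (x, y, z))
                vx, vy = -vx, vy  # glancing hit: deflect along x
        if z <= 0:
            return ("reached screen plane", (x, y, 0.0))
    return ("in flight", (x, y, z))

obstacles = [{"x": 0.0, "y": 0.0, "z": 0.5, "r": 0.1, "solid": False}]
print(fly_toward_screen(pos=(0.0, 0.0, 1.0), vel=(0.0, 0.0, -1.0), obstacles=obstacles))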
[0093] The following clauses and/or examples pertain to further
embodiments or examples. Specifics in the examples may be used
anywhere in one or more embodiments. The various features of the
different embodiments or examples may be variously combined with
some features included and others excluded to suit a variety of
different applications. Examples may include subject matter such as
a method, means for performing acts of the method, at least one
machine-readable medium including instructions that, when performed
by a machine, cause the machine to perform acts of the method, or an
apparatus or system for facilitating dynamic game surface adjustment
according to embodiments and examples described herein.
[0094] Some embodiments pertain to Example 1 that includes an
apparatus to facilitate dynamic game surface adjustment,
comprising: detection/reception logic to receive one or more inputs
to perform an action at a portion of a play surface of the
apparatus; management logic to evaluate the one or more inputs for
generating an action plan to perform the action at the portion of
the play surface, wherein the action plan is to affect one or more
objects acting on the surface; and application/execution logic to
execute the action at the portion of the surface, wherein the
action to adjust one or more properties of the play surface.
[0095] Example 2 includes the subject matter of Example 1, wherein
the play surface to host the one or more objects including one or
more moveable objects having object sensors, wherein the play
surface includes at least one of surface sensors, actuators, and
physical effects detectors, wherein the apparatus includes a play
set comprising one or more of a smart toy, a smart game set, a
smart field, and a smart play area, and wherein the play surface
includes a play arena associated with the play set, and wherein the
play surface is further to host one or more users holding or
wearing the one or more moveable objects.
[0096] Example 3 includes the subject matter of Example 1, wherein
the sensory input comprises at least one of surface-related sensory
data as retrieved via one or more of the surface sensors, moveable
object-related sensory data as retrieved via one or more of the
object sensors, and physical effects-related information at the
surface as retrieved via one or more of the physical effects
detectors, and wherein the one or more inputs include at least one
of a user command, a predetermined criterion, a sensory input, and
an audio/visual input.
[0097] Example 4 includes the subject matter of Example 1 or 2,
wherein an actuator of the actuators to activate the portion of the
play surface to trigger the action at the portion of the play
surface, wherein the actuator is placed beneath the portion of the
play surface, and wherein the actuator is further to activate other
one or more actions of the action plan on the play surface, wherein
the other one or more actions include at least one of vibrating,
moving, swinging, tilting, booming, sinking, and bumping of the
play surface.
[0098] Example 5 includes the subject matter of Example 1, further
comprising sensors data processor of the management logic to
process the sensory input prior to evaluating the one or more
inputs to generate the action plan.
[0099] Example 6 includes the subject matter of Example 1, further
comprising video/audio analytics logic to evaluate the audio/video
input to analyze activities of the one or more moveable objects
operating on the surface, wherein the audio/video input includes at
least one of sounds, images, and videos relating to the activities
of the one or more moveable objects, wherein the audio/video input
is captured via at least one of a two-dimensional (2D) camera, a
three-dimensional (3D) camera, and a microphone of capturing/sensing
components.
[0100] Example 7 includes the subject matter of Example 1 or 6,
further comprising output components including one or more
projectors to facilitate a projection at the portion of the play
surface, wherein the projection to reflect an environment relevant
to the action, wherein the projection includes one or more of
numbers, letters, characters, messages, lights, images, videos, and
colors.
[0101] Example 8 includes the subject matter of Example 1, wherein
the user command is placed by a user via a user interface at a
computing device over a communication medium, wherein the
communication medium includes at least one of a Cloud network, an
intranet, a proximity network, and the Internet.
[0102] Example 9 includes the subject matter of Example 1 or 8,
further comprising a database to store the one or more inputs,
wherein the database to further store rules or policies relating to
at least one of the user, the action, and the play surface, wherein
the database includes at least one of a Cloud database or a
non-Cloud database.
[0103] Some embodiments pertain to Example 10 that includes a
method for facilitating dynamic game surface adjustment,
comprising: receiving one or more inputs to perform an action at a
portion of a play surface of a play set; evaluating the one or more
inputs for generating an action plan to perform the action at the
portion of the play surface, wherein the action plan is to affect
one or more objects acting on the surface; and executing the action
at the portion of the surface, wherein the action to adjust one or
more properties of the play surface.
[0104] Example 11 includes the subject matter of Example 10,
wherein the play surface to host the one or more objects including
one or more moveable objects having object sensors, wherein the
play surface includes at least one of surface sensors, actuators,
and physical effects detectors, wherein the play set comprises one
or more of a smart toy, a smart game set, a smart field, and a
smart play area, and wherein the play surface includes a play arena
associated with the play set, and wherein the play surface is
further to host one or more users holding or wearing the one or
more moveable objects.
[0105] Example 12 includes the subject matter of Example 10,
wherein the sensory input comprises at least one of surface-related
sensory data as retrieved via one or more of the surface sensors,
moveable object-related sensory data as retrieved via one or more
of the object sensors, and physical effects-related information at
the surface as retrieved via one or more of the physical effects
detectors, and wherein the one or more inputs include at least one
of a user command, a predetermined criterion, a sensory input, and
an audio/visual input.
[0106] Example 13 includes the subject matter of Example 10 or 12,
wherein an actuator of the actuators to activate the portion of the
play surface to trigger the action at the portion of the play
surface, wherein the actuator is placed beneath the portion of the
play surface, and wherein the actuator is further to activate other
one or more actions of the action plan on the play surface, wherein
the other one or more actions include at least one of vibrating,
moving, swinging, tilting, booming, sinking, and bumping of the
play surface.
[0107] Example 14 includes the subject matter of Example 10,
further comprising processing, via sensors data processor, the
sensory input prior to evaluating the one or more inputs to
generate the action plan.
[0108] Example 15 includes the subject matter of Example 10,
further comprising evaluating the audio/video input to analyze
activities of the one or more moveable objects operating on the
surface, wherein the audio/video input includes at least one of
sounds, images, and videos relating to the activities of the one or
more moveable objects, wherein the audio/video input is captured
via at least one of a two-dimensional (2D) camera, a
three-dimensional (3D) camera, and a microphone of capturing/sensing
components.
[0109] Example 16 includes the subject matter of Example 10 or 15,
further comprising facilitating, via one or more projectors of
output components, a projection at the portion of the play surface,
wherein the projection to reflect an environment relevant to the
action, wherein the projection includes one or more of numbers,
letters, characters, messages, lights, images, videos, and
colors.
[0110] Example 17 includes the subject matter of Example 10,
further comprising storing, at a database, the one or more inputs,
wherein the database to further store rules or policies relating to
at least one of the user, the action, and the play surface, wherein
the database includes at least one of a Cloud database or a
non-Cloud database.
[0111] Example 18 includes the subject matter of Example 10 or 17,
wherein the user command is placed by a user via a user interface
at a computing device over a communication medium, wherein the
communication medium includes at least one of a Cloud network, an
intranet, a proximity network, and the Internet.
[0112] Example 19 includes at least one machine-readable medium
comprising a plurality of instructions, when executed on a
computing device, to implement or perform a method or realize an
apparatus as claimed in any preceding claims or examples.
[0113] Example 20 includes at least one non-transitory or tangible
machine-readable medium comprising a plurality of instructions,
when executed on a computing device, to implement or perform a
method or realize an apparatus as claimed in any preceding claims
or examples.
[0114] Example 21 includes a system comprising a mechanism to
implement or perform a method or realize an apparatus as claimed in
any preceding claims or examples.
[0115] Example 22 includes an apparatus comprising means to perform
a method as claimed in any preceding claims or examples.
[0116] Example 23 includes a computing device arranged to implement
or perform a method or realize an apparatus as claimed in any
preceding claims or examples.
[0117] Example 24 includes a communications device arranged to
implement or perform a method or realize an apparatus as claimed in
any preceding claims or examples.
[0118] Some embodiments pertain to Example 25 that includes a system
comprising a storage device having instructions, and a processor to
execute the instructions to facilitate a mechanism to perform one
or more operations comprising: receiving one or more inputs to
perform an action at a portion of a play surface of a play set;
evaluating the one or more inputs for generating an action plan to
perform the action at the portion of the play surface, wherein the
action plan is to affect one or more objects acting on the surface;
and executing the action at the portion of the surface, wherein the
action to adjust one or more properties of the play surface.
[0119] Example 26 includes the subject matter of Example 25,
wherein the play surface to host the one or more objects including
one or more moveable objects having object sensors, wherein the
play surface includes at least one of surface sensors, actuators,
and physical effects detectors, wherein the play set comprises one
or more of a smart toy, a smart game set, a smart field, and a
smart play area, and wherein the play surface includes a play arena
associated with the play set, and wherein the play surface is
further to host one or more users holding or wearing the one or
more moveable objects.
[0120] Example 27 includes the subject matter of Example 25,
wherein the sensory input comprises at least one of surface-related
sensory data as retrieved via one or more of the surface sensors,
moveable object-related sensory data as retrieved via one or more
of the object sensors, and physical effects-related information at
the surface as retrieved via one or more of the physical effects
detectors, and wherein the one or more inputs include at least one
of a user command, a predetermined criterion, a sensory input, and
an audio/visual input.
[0121] Example 28 includes the subject matter of Example 25 or 27,
wherein an actuator of the actuators to activate the portion of the
play surface to trigger the action at the portion of the play
surface, wherein the actuator is placed beneath the portion of the
play surface, and wherein the actuator is further to activate other
one or more actions of the action plan on the play surface, wherein
the other one or more actions include at least one of vibrating,
moving, swinging, tilting, booming, sinking, and bumping of the
play surface.
[0122] Example 29 includes the subject matter of Example 25,
wherein the one or more operations further comprise processing, via
sensors data processor, the sensory input prior to evaluating the
one or more inputs to generate the action plan.
[0123] Example 30 includes the subject matter of Example 25,
wherein the one or more operations further comprise evaluating the
audio/video input to analyze activities of the one or more moveable
objects operating on the surface, wherein the audio/video input
includes at least one of sounds, images, and videos relating to the
activities of the one or more moveable objects, wherein the
audio/video input is captured via at least one of a two-dimensional
(2D) camera, a three-dimensional (3D) camera, and a microphone of
capturing/sensing components.
[0124] Example 31 includes the subject matter of Example 25 or 30,
wherein the one or more operations further comprise facilitating,
via one or more projectors of output components, a projection at
the portion of the play surface, wherein the projection to reflect
an environment relevant to the action, wherein the projection
includes one or more of numbers, letters, characters, messages,
lights, images, videos, and colors.
[0125] Example 32 includes the subject matter of Example 25,
wherein the one or more operations further comprise storing, at a
database, the one or more inputs, wherein the database to further
store rules or policies relating to at least one of the user, the
action, and the play surface, wherein the database includes at
least one of a Cloud database or a non-Cloud database.
[0126] Example 33 includes the subject matter of Example 25 or 32,
wherein the user command is placed by a user via a user interface
at a computing device over a communication medium, wherein the
communication medium includes at least one of a Cloud network, an
intranet, a proximity network, and the Internet.
[0127] Some embodiments pertain to Example 34 that includes an apparatus
comprising: means for receiving one or more inputs to perform an
action at a portion of a play surface of a play set; means for
evaluating the one or more inputs for generating an action plan to
perform the action at the portion of the play surface, wherein the
action plan is to affect one or more objects acting on the surface;
and means for executing the action at the portion of the surface,
wherein the action to adjust one or more properties of the play
surface.
[0128] Example 35 includes the subject matter of Example 34,
wherein the play surface to host the one or more objects including
one or more moveable objects having object sensors, wherein the
play surface includes at least one of surface sensors, actuators,
and physical effects detectors, wherein the play set comprises one
or more of a smart toy, a smart game set, a smart field, and a
smart play area, and wherein the play surface includes a play arena
associated with the play set, and wherein the play surface is
further to host one or more users holding or wearing the one or
more moveable objects.
[0129] Example 36 includes the subject matter of Example 34,
wherein the sensory input comprises at least one of surface-related
sensory data as retrieved via one or more of the surface sensors,
moveable object-related sensory data as retrieved via one or more
of the object sensors, and physical effects-related information at
the surface as retrieved via one or more of the physical effects
detectors, and wherein the one or more inputs include at least one
of a user command, a predetermined criterion, a sensory input, and
an audio/visual input.
[0130] Example 37 includes the subject matter of Example 34 or 36,
wherein an actuator of the actuators to activate the portion of the
play surface to trigger the action at the portion of the play
surface, wherein the actuator is placed beneath the portion of the
play surface, and wherein the actuator is further to activate other
one or more actions of the action plan on the play surface, wherein
the other one or more actions include at least one of vibrating,
moving, swinging, tilting, booming, sinking, and bumping of the
play surface.
[0131] Example 38 includes the subject matter of Example 34,
further comprising means for processing, via sensors data
processor, the sensory input prior to evaluating the one or more
inputs to generate the action plan.
[0132] Example 39 includes the subject matter of Example 34,
further comprising means for evaluating the audio/video input to
analyze activities of the one or more moveable objects operating on
the surface, wherein the audio/video input includes at least one of
sounds, images, and videos relating to the activities of the one or
more moveable objects, wherein the audio/video input is captured
via at least one of a two-dimensional (2D) camera, a
three-dimensional (3D) camera, and a microphone of capturing/sensing
components.
[0133] Example 40 includes the subject matter of Example 34 or 39,
further comprising means for facilitating, via one or more
projectors of output components, a projection at the portion of the
play surface, wherein the projection to reflect an environment
relevant to the action, wherein the projection includes one or more
of numbers, letters, characters, messages, lights, images, videos,
and colors.
[0134] Example 41 includes the subject matter of Example 34,
further comprising means for storing, at a database, the one or
more inputs, wherein the database to further store rules or
policies relating to at least one of the user, the action, and the
play surface, wherein the database includes at least one of a Cloud
database or a non-Cloud database.
[0135] Example 42 includes the subject matter of Example 34 or 41,
wherein the user command is placed by a user via a user interface
at a computing device over a communication medium, wherein the
communication medium includes at least one of a Cloud network, an
intranet, a proximity network, and the Internet.
[0136] Example 43 includes at least one non-transitory or tangible
machine-readable medium comprising a plurality of instructions,
when executed on a computing device, to implement or perform a
method as claimed in any of claims or examples 10-18.
[0137] Example 44 includes at least one machine-readable medium
comprising a plurality of instructions, when executed on a
computing device, to implement or perform a method as claimed in
any of claims or examples 10-18.
[0138] Example 45 includes a system comprising a mechanism to
implement or perform a method as claimed in any of claims or
examples 10-18.
[0139] Example 46 includes an apparatus comprising means for
performing a method as claimed in any of claims or examples
10-18.
[0140] Example 47 includes a computing device arranged to implement
or perform a method as claimed in any of claims or examples
10-18.
[0141] Example 48 includes a communications device arranged to
implement or perform a method as claimed in any of claims or
examples 10-18.
[0142] The drawings and the foregoing description give examples of
embodiments. Those skilled in the art will appreciate that one or
more of the described elements may well be combined into a single
functional element. Alternatively, certain elements may be split
into multiple functional elements. Elements from one embodiment may
be added to another embodiment. For example, orders of processes
described herein may be changed and are not limited to the manner
described herein. Moreover, the actions of any flow diagram need not
be implemented in the order shown; nor do all of the acts
necessarily need to be performed. Also, those acts that are not
dependent on other acts may be performed in parallel with the other
acts. The scope of embodiments is by no means limited by these
specific examples. Numerous variations, whether explicitly given in
the specification or not, such as differences in structure,
dimension, and use of material, are possible. The scope of
embodiments is at least as broad as given by the following
claims.
* * * * *