U.S. patent application number 14/458169 was filed with the patent office on 2014-08-12 and published on 2016-02-18 as publication number US 2016/0045834 A1 for "overlay of avatar onto live environment for recording a video."
The applicant listed for this patent is Fuel Industries, Inc. The invention is credited to Michel A. Burns.
United States Patent Application 20160045834
Kind Code: A1
Inventor: Burns; Michel A.
Publication Date: February 18, 2016
OVERLAY OF AVATAR ONTO LIVE ENVIRONMENT FOR RECORDING A VIDEO
Abstract
Systems and methods enable a user of a mobile device to overlay
or integrate a puppet or avatar on or into a live environment data
feed provided by an input device such as a camera of the mobile
device. In one embodiment, a method includes: creating a puppet;
activating at least one input device that provides data from a live
environment; controlling motions of the puppet; presenting an
overlay of the puppet onto the live environment; and recording a
video that integrates the motions of the puppet with the data from
the live environment.
Inventors: Burns; Michel A. (Malibu, CA)
Applicant: Fuel Industries, Inc. (Ottawa, CA)
Family ID: 55301426
Appl. No.: 14/458169
Filed: August 12, 2014
Current U.S. Class: 446/268
Current CPC Class: A63H 3/28 (20130101); A63H 3/52 (20130101); A63H 2200/00 (20130101)
International Class: A63H 3/52 (20060101); A63H 3/28 (20060101)
Claims
1. A method, comprising: creating, by at least one processor of a
mobile device, a puppet; activating, by the at least one processor,
at least one input device that provides data from a live
environment; controlling, based on input from a user interface of
the mobile device, motions of the puppet; presenting, on a display
of the mobile device, an overlay of the puppet onto the live
environment; and recording, by the at least one processor, a video
that integrates the motions of the puppet with the data from the
live environment.
2. The method of claim 1, wherein the user interface is a touch
interface, and the method further comprising playing back the
recorded video on the display of the mobile device.
3. The method of claim 1, wherein the at least one input device
includes a microphone to record voice data of a person and the
motions of the puppet include movement of a mouth of the puppet,
the method further comprising synchronizing the voice data with
movement of the mouth during the recording of the video.
4. The method of claim 3, wherein the user interface is a touch
interface, and the mouth of the puppet is activated by a person
using the touch interface prior to recording the voice data.
5. The method of claim 1, wherein the user interface is a touch
interface, and the controlling the motions of the puppet comprises
changing a position of the puppet in the display in response to
physical interaction of a person with the touch interface.
6. The method of claim 1, wherein the user interface is a touch
interface, and during the recording, visual indicators are
presented on the display, the visual indicators corresponding to
portions of the touch interface that, when touched, cause a motion
of the puppet.
7. The method of claim 1, wherein the at least one input device
comprises a camera of the mobile device, and the data from the live
environment includes image data captured by the camera during the
recording.
8. The method of claim 7, wherein the at least one input device
further comprises a microphone of the mobile device, and the data
from the live environment includes sound data recorded by the
microphone.
9. A system, comprising: a display; a touch interface; at least one
input device; at least one processor; and memory storing
instructions configured to instruct the at least one processor to:
activate the at least one input device to provide data from a live
environment; control, based on user input via the touch interface,
motions of a graphical image on the display; present, on the
display, an overlay of the graphical image onto the live
environment; and record a digital file that integrates data for the
motions of the graphical image with the data from the live
environment.
10. The system of claim 9, wherein the at least one input device
comprises a camera and a microphone integrated into a mobile
device.
11. The system of claim 9, wherein the graphical image is a puppet,
and the instructions further instruct the at least one processor to
create the puppet by receiving selections from a user of at least
one component of the puppet.
12. The system of claim 11, wherein the at least one component
comprises a body part of the puppet.
13. The system of claim 9, wherein the instructions further
instruct the at least one processor to send the digital file to a
social network server.
14. The system of claim 9, wherein the display is part of a mobile
device, and the at least one input device comprises a sensor to
provide data regarding motion of the mobile device.
15. The system of claim 14, wherein motion of the graphical image
during the recording corresponds at least in part to movement of
the mobile device as determined based on the data from the
sensor.
16. A non-transitory machine-readable medium storing instructions,
which when executed, cause a mobile device to: activate, by at
least one processor, at least one input device that provides data
from a live environment; control, based on input received via a
touch interface, motions of a puppet; present, on a display, an
overlay of the puppet onto the live environment; and record, by the
at least one processor, a video that integrates the motions of the
puppet with the data from the live environment.
17. The machine-readable medium of claim 16, wherein the
instructions further cause the mobile device to create, prior to
the recording, the puppet based on body parts selected by a user
using the touch interface.
18. The machine-readable medium of claim 16, wherein the
instructions further cause the mobile device to convert touch
information received from the touch interface to coordinate
data.
19. The machine-readable medium of claim 16, wherein the at least
one input device comprises a camera, and the data from the live
environment includes image data captured by the camera.
20. The machine-readable medium of claim 19, wherein the at least
one input device further comprises a microphone, and the data from
the live environment further includes sound data recorded by the
microphone.
Description
FIELD OF THE TECHNOLOGY
[0001] At least some embodiments disclosed herein relate to
computer control of graphical images in general, and more
particularly, but not limited to, overlaying an avatar onto or
integrated with a live environment (e.g., when recording a video of
motions of the avatar as controlled by a user of a mobile
device).
BACKGROUND
[0002] Prior forms of animation create the illusion of continuous
motion and shape change by means of the rapid display of a sequence
of static images that differ minimally from each other. Such
animations can be recorded on analog media, such as a flip book,
motion picture film, or video tape, or on digital media, including
formats such as animated GIF, Flash animation, or digital video. A
digital camera, computer, or projector is used to display them.
[0003] Computer animation or CGI animation is a process used for
generating animated images by using computer graphics. The more
general term computer-generated imagery typically encompasses both
static scenes and dynamic images, while computer animation
typically refers to moving images.
[0004] Modern computer animation usually uses 3D computer graphics,
although 2D computer graphics are still used for stylistic,
low-bandwidth, and faster real-time renderings. Sometimes the
target of the animation is the computer itself, but sometimes the
target is another medium, such as film. Computer-generated
animations are more controllable than other, more physically based
processes because they allow the creation of images that would not
be feasible using other technology. Low-bandwidth animations
transmitted via the Internet (e.g., 2D Flash, X3D) often use
software on the end-user's computer to render in real time as an
alternative to streaming or pre-loaded high-bandwidth
animations.
[0005] The term "motion graphics" was popularized by Trish and
Chris Meyer's book about the use of ADOBE After Effects, titled
"Creating Motion Graphics". This was considered the beginning of
desktop applications which specialized in video production, but
were not editing or 3D programs. These new programs collected
together special effects, compositing, and color correction
toolsets, and primarily came between edit and 3D in the production
process. Motion graphics continues to evolve as an art form with
the incorporation of 3D elements. Motion graphics has begun to
integrate many traditional animation techniques as well.
SUMMARY OF THE DESCRIPTION
[0006] Systems and methods to overlay or integrate an avatar on or
into a live environment are described herein (e.g., an avatar may
have its motion on a computer screen manipulated like a puppet by a
user). Some embodiments are summarized in this section. Embodiments
below describe a puppet for purposes of illustrating user
manipulation of avatars and other graphical images in a data
processing system (e.g., an APPLE iPad, ANDROID tablet device,
mobile smartphone, or laptop computer).
[0007] In one embodiment, a method implemented in a data processing
system includes: creating, by at least one processor of a mobile
device, a puppet; activating, by the at least one processor, at
least one input device that provides data from a live environment;
controlling, based on input from a touch interface of the mobile
device, motions of the puppet; presenting, on a display of the
mobile device, an overlay of the puppet onto the live environment;
and recording, by the at least one processor, a video that
integrates the motions of the puppet with the data from the live
environment.
[0008] In one embodiment, a system includes: a display; a touch
interface; at least one input device; at least one processor; and
memory storing instructions configured to instruct the at least one
processor to: activate the at least one input device to provide
data from a live environment; control, based on user input via the
touch interface, motions of a graphical image on the display;
present, on the display, an overlay of the graphical image onto the
live environment; and record a digital file that integrates data
for the motions of the graphical image with the data from the live
environment.
[0009] The disclosure includes methods and apparatuses which
perform these methods, including data processing systems which
perform these methods, and computer readable media containing
instructions which when executed on data processing systems cause
the systems to perform these methods.
[0010] Other features will be apparent from the accompanying
drawings and from the detailed description which follows.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The embodiments are illustrated by way of example and not
limitation in the figures of the accompanying drawings in which
like references indicate similar elements.
[0012] FIG. 1 shows a main screen of a computer-generated puppet
manipulation environment as presented to a user on a display of a
mobile device, according to one embodiment.
[0013] FIG. 2 shows a different screen of the puppet manipulation
environment of FIG. 1 in which a pre-built puppets selection tab is
presented to the user, according to one embodiment.
[0014] FIG. 3 shows a different screen of the puppet manipulation
environment of FIG. 1 in which a user-created puppets selection tab
is presented to the user, according to one embodiment.
[0015] FIG. 4 shows a different screen of the puppet manipulation
environment of FIG. 1 in which a set selection tab is presented to
the user for selecting the set (e.g., the background environment)
in which one or more puppets are presented, according to one
embodiment.
[0016] FIG. 5 shows a screen of the puppet manipulation environment
of FIG. 1 which presents an options panel to the user for selecting
options related to operation of the software that provides the
puppet manipulation environment, according to one embodiment.
[0017] FIG. 6 shows a screen presented by the software of FIG. 5 in
which the user can interact with a random or pseudo-random gaming
feature or mechanism that provides credits for use by the user in
the puppet manipulation environment (e.g., buying puppet parts for
assembling the user's custom puppet by paying with credits),
according to one embodiment.
[0018] FIG. 7 shows a screen of a puppet manipulation environment
in which the user has selected an augmented reality option that
utilizes a camera of a mobile device in order to superimpose one or
more puppets over images of a live environment as captured by the
camera during user control of the puppet, according to one
embodiment.
[0019] FIG. 8 shows a system, including a puppet services server
and one or more user terminals, to overlay or integrate an avatar
on or into a live environment for each of the user terminals,
according to one embodiment.
[0020] FIG. 9 shows a block diagram of a data processing system,
including various input devices such as a camera and microphone,
which can be used in various embodiments of the puppet manipulation
system.
[0021] FIG. 10 shows a block diagram of a user device, according to
one embodiment.
[0022] FIG. 11 shows a method to overlay or integrate an avatar on
or into a live environment, according to one embodiment.
DETAILED DESCRIPTION
[0023] The following description and drawings are illustrative and
are not to be construed as limiting. Numerous specific details are
described to provide a thorough understanding. However, in certain
instances, well known or conventional details are not described in
order to avoid obscuring the description. References to "one" or
"an" embodiment in the present disclosure are not necessarily
references to the same embodiment; such references mean at least one.
[0024] Reference in this specification to "one embodiment" or "an
embodiment" means that a particular feature, structure, or
characteristic described in connection with the embodiment is
included in at least one embodiment of the disclosure. The
appearances of the phrase "in one embodiment" in various places in
the specification are not necessarily all referring to the same
embodiment, nor are separate or alternative embodiments mutually
exclusive of other embodiments. Moreover, various features are
described which may be exhibited by some embodiments and not by
others. Similarly, various requirements are described which may be
requirements for some embodiments but not other embodiments.
[0025] As used herein, a "puppet" includes, for example, a digital
avatar or a character, whether based in fantasy and/or realism,
intended to have some motion characteristic(s) that can be
controlled by a user. A puppet is merely one example of a
"graphical image".
[0026] As used herein, "mobile devices" include, for example,
mobile phones, tablet computers, and laptop computers.
[0027] In one embodiment, a user operates software that is
executing on a mobile device in order to create a performance that
integrates motions of a customizable puppet or avatar (as directed
by a user on a touch interface or otherwise) and live camera
footage. The camera footage is drawn from a built-in camera on the
mobile device (see FIG. 9, discussed below), and the customizable
avatar is an overlay on the live images provided by the camera. The
user is then able to manipulate the avatar as well as add a voice
track (e.g., recorded by a microphone of the mobile device). The
integration of the avatar and the live footage provides a
performance that is recorded in a file for playback and/or sharing
with others.
[0028] In one example, a child creates a puppet avatar using a
built-in puppet customizer included in software that is running on
the mobile device. The child chooses the appearance of the puppet
(e.g., selects body parts) and then loads in a live camera feed
from the camera of the mobile device. With, for example, an APPLE
iPhone pointing at his sister, the child is then able to create and
record a performance of his puppet interacting with his live sister
via the camera of the iPhone.
[0029] In one embodiment, the software provides recreational
creation of digital avatar-based performances. Users are able to
create virtually any graphical image they imagine within the
constraints of the software system, and then to share their
creations with friends/family/others via the Internet or online
services. In other cases, the puppet manipulation system can be
used by a teacher to create a performance to help reinforce a
lesson. The application may be useful with young children who might
respond well to seeing a digital avatar "interacting" with a live
person (e.g., a teacher) through the camera. In one embodiment, the
puppet manipulation system is provided as a digital application
that is created and distributed on several platforms, including
iOS and Android.
[0030] In one embodiment, the system enables a user to blend
together digital creations with data regarding the analog or real
world. While prior applications allow users to create and share
digital avatars, these applications do not provide the ability to
overlay that avatar on live camera footage or other data regarding
a live environment as provided from input devices during the
recording of puppet motions directed by a user. Using the present
disclosure, users can combine their customizable digital creations
with anything they can view through, for example, their mobile
device camera. During recording of a performance, a user may be
shown a time indicator so that the length of the performance can be
controlled by the user.
[0031] In another embodiment, the user creates his or her own puppet
show. Controls are provided for the user to customize and build a
puppet. Camera controls allow the user to record a voice track, as
well as manipulate the puppet, the scene, and the set in which the
puppet is placed (e.g., the background imagery). The user can move
the puppet back and forth using a finger on a touch screen. The
motions and the live environment feed are integrated into a single
recording, which can be pushed out over, for example, YouTube or
another distribution channel.
[0032] In this embodiment, users can create their own puppets by
piecing together various parts, such as heads and noses. The puppet
can be overlaid on top of the live camera information that is being
streamed into the mobile device. The user sees the puppet overlaid
on top of the real world during recording.
[0033] A mobile device camera is initialized so that built-in
camera footage is activated for reading by the software application
that integrates the live and puppet data. The camera footage and
the visual creation of the avatar are blended together. In one
example, a recorded performance appears as though the puppet is
actually interviewing a person in the real world.
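As a rough illustration of this blending step, the following Kotlin sketch alpha-composites a rendered puppet layer over a camera frame. The Frame type and the CPU-side RGBA-buffer representation are assumptions made for illustration; a production application would more likely blend on the GPU through the platform's camera and graphics APIs.

```kotlin
// Minimal per-frame compositing sketch: the puppet layer (transparent
// where empty) is alpha-blended over the live camera frame.
data class Frame(val width: Int, val height: Int, val rgba: IntArray)

fun composite(camera: Frame, overlay: Frame): Frame {
    require(camera.width == overlay.width && camera.height == overlay.height)
    val out = IntArray(camera.rgba.size)
    for (i in camera.rgba.indices) {
        val src = overlay.rgba[i]           // puppet pixel (ARGB)
        val dst = camera.rgba[i]            // camera pixel (ARGB)
        val a = (src ushr 24) and 0xFF      // puppet alpha, 0..255
        fun mix(shift: Int): Int {
            val s = (src ushr shift) and 0xFF
            val d = (dst ushr shift) and 0xFF
            return (s * a + d * (255 - a)) / 255
        }
        out[i] = (0xFF shl 24) or (mix(16) shl 16) or (mix(8) shl 8) or mix(0)
    }
    return Frame(camera.width, camera.height, out)
}
```

Applying composite to each camera frame as it arrives, with the puppet layer re-rendered from the current puppet pose, yields the integrated stream that is recorded.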
[0034] In one embodiment, a recording of motions of a puppet as
directed by an end-user using a touch interface of a mobile device
is generated using rag-doll physics and recorded as a video file
(e.g., stored in a memory of the mobile device). The video may
further record live environment data captured by a camera of the
mobile device (e.g., data from the camera showing the real-world
surroundings of the user that is holding the mobile device during
recording) that is integrated with the puppet motion during the
recording, as described in more detail below.
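The patent does not detail its rag-doll model, but a minimal sketch of the general technique, using an illustrative Verlet-integration scheme with arbitrary constants, might look like this:

```kotlin
import kotlin.math.hypot

// Toy rag-doll step: points carry inertia (Verlet integration) and are
// joined by fixed-length "sticks" that are iteratively relaxed.
data class Node(var x: Double, var y: Double, var px: Double, var py: Double)
data class Stick(val a: Node, val b: Node, val length: Double)

fun step(nodes: List<Node>, sticks: List<Stick>, gravity: Double = 0.5) {
    for (n in nodes) {
        val vx = n.x - n.px                 // implied velocity
        val vy = n.y - n.py
        n.px = n.x; n.py = n.y
        n.x += vx; n.y += vy + gravity      // inertia plus gravity
    }
    repeat(3) {                             // constraint-relaxation passes
        for (s in sticks) {
            val dx = s.b.x - s.a.x
            val dy = s.b.y - s.a.y
            val d = hypot(dx, dy).coerceAtLeast(1e-9)
            val push = (d - s.length) / d / 2
            s.a.x += dx * push; s.a.y += dy * push
            s.b.x -= dx * push; s.b.y -= dy * push
        }
    }
}
```

A touch drag would simply pin one node to the finger's converted coordinates each frame; the remaining nodes follow through the constraints, producing the floppy, puppet-like motion.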
[0035] In one embodiment, the end-user is essentially put into an
"animator role". In this embodiment, animations are procedurally
generated based on a model, a rig, and further based on input from
the user. As such there is no "animator" in the conventional sense,
as the user doesn't have control over the model or rig. In this
embodiment, the model and rig are generated as prefabs (or
pre-builts) and included with the software provided to the user.
The user then manipulates the prefabs to form the final animation
(e.g., the final animation that is recorded in a video or other
type of file). Thus, the software allows the user to create his or
her own animation performance with few restrictions.
[0036] FIG. 1 shows a main screen of a computer-generated puppet
manipulation environment as presented to a user on a display of a
mobile device, according to one embodiment. A puppet 126 is shown
as part of a set 128. Set 128 includes images of the background
such as the tower and the trees in the distance. A user may
activate an icon 120 on a touch interface of the mobile device in
order to create a new puppet.
[0037] As discussed below, the user may touch portions of the
screen on or in close proximity to puppet 126 to activate motions
of the puppet, such as rotation left or right or movement from one
portion of the set 128 to another. A record button 124 may be used
to begin the recording of the video in order to record the motions
of the puppet caused by the user. A button 122 lets a user access a
list of previously recorded video files.
[0038] FIG. 2 shows a different screen of the puppet manipulation
environment of FIG. 1 in which a pre-built puppets selection tab
220 is presented to the user, according to one embodiment. Tab 220
presents various pre-built puppets, which the user may select for
inclusion in the current scene. Each puppet may have a cost in
credits associated with it, which is charged to a credit account
of the user. The current credit balance may be indicated by
graphical feature 222.
[0039] Visual indicators 224 and 226 may be provided on the screen
to indicate to the user where a finger may touch in order to cause
movement of puppet 126. For example, touching the screen at the
point of indicator 224 causes the puppet to rotate in one
direction. Touching the screen at the point of indicator 226 causes
the puppet to rotate in the opposite direction. The puppet may also
be moved by touching and dragging by the user so that the puppet
moves to a different portion of the screen. In addition, a portion
of the puppet may be touched by the user, such as touching the
puppet's mouth, in order to activate synchronization of sounds
recorded by a microphone of the mobile device with movement of the
mouth of the puppet.
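A minimal sketch of this touch dispatch, assuming circular hit regions around the indicators and a small fixed action set, is shown below; the region shapes, radius, and action names are illustrative rather than taken from the patent.

```kotlin
data class Point(val x: Float, val y: Float)

sealed class PuppetAction {
    object RotateLeft : PuppetAction()      // e.g., indicator 224
    object RotateRight : PuppetAction()     // e.g., indicator 226
    object ActivateMouth : PuppetAction()   // touch on the mouth
    data class Drag(val to: Point) : PuppetAction()
    object None : PuppetAction()
}

class TouchMapper(
    private val leftIndicator: Point,
    private val rightIndicator: Point,
    private val mouth: Point,
    private val hitRadius: Float = 40f      // arbitrary hit-test radius
) {
    private fun near(p: Point, q: Point): Boolean {
        val dx = p.x - q.x; val dy = p.y - q.y
        return dx * dx + dy * dy <= hitRadius * hitRadius
    }

    fun onTouch(p: Point, isDragging: Boolean): PuppetAction = when {
        isDragging -> PuppetAction.Drag(p)  // touch-and-drag moves the puppet
        near(p, leftIndicator) -> PuppetAction.RotateLeft
        near(p, rightIndicator) -> PuppetAction.RotateRight
        near(p, mouth) -> PuppetAction.ActivateMouth
        else -> PuppetAction.None
    }
}
```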
[0040] FIG. 3 shows a different screen of the puppet manipulation
environment of FIG. 1 in which a user-created puppets selection tab
302 is presented to the user, according to one embodiment. A user
may create a custom puppet 300 by selecting a main body 304 and
then selecting various body parts such as a nose and ears that will
be added to the puppet. Each body part may have a credit cost
associated with it. The user may use shopping cart 306 to purchase
additional credits that can be exchanged for body parts of a custom
puppet.
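One plausible data model for this assembly flow, sketched in Kotlin with assumed part names and a simple affordability rule, is:

```kotlin
// Illustrative model of building a custom puppet from priced body parts.
data class BodyPart(val name: String, val costInCredits: Int)

class PuppetBuilder(mainBody: BodyPart, private var credits: Int) {
    private val parts = mutableListOf(mainBody)

    // Adds a part only if the user's credit balance covers its cost.
    fun addPart(part: BodyPart): Boolean {
        if (part.costInCredits > credits) return false
        credits -= part.costInCredits
        parts += part
        return true
    }

    fun remainingCredits(): Int = credits
    fun build(): List<BodyPart> = parts.toList()
}

fun main() {
    val builder = PuppetBuilder(BodyPart("round body", 0), credits = 30)
    builder.addPart(BodyPart("long nose", 10))     // succeeds, 20 credits left
    builder.addPart(BodyPart("dragon wings", 50))  // rejected: too costly
    println(builder.build().map { it.name })       // [round body, long nose]
}
```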
[0041] FIG. 4 shows a different screen of the puppet manipulation
environment of FIG. 1 in which a set selection tab is presented to
the user for selecting the set (e.g., the background environment)
in which one or more puppets are presented, according to one
embodiment. The set selection tab may be used to select a
particular set, such as set 128, by selecting an icon 402 that
provides a small-scale image of the set. An augmented reality icon
400 can be selected or activated by the user in order to provide a
background for puppet 126 that is sourced from the camera on the
mobile device (see FIG. 7 discussed below).
[0042] FIG. 5 shows a screen of the puppet manipulation environment
of FIG. 1 which presents an options panel to the user for selecting
options related to operation of the software that provides the
puppet manipulation environment, according to one embodiment. In
particular, the user may activate a button 500 in order to access
information regarding credits of the user. An icon 502 may be used
to activate a gaming feature included in the software. An icon 504
may be used to activate the connection to a social network server
such as FACEBOOK.
[0043] FIG. 6 shows a screen presented by the software of FIG. 5 in
which the user can interact with a random or pseudo-random gaming
feature or mechanism 600 that provides credits for use by the user
in the puppet manipulation environment (e.g., buying puppet parts
for assembling the user's custom puppet by paying with credits),
according to one embodiment. Gaming feature 600 may be activated by
icon 502. In one example, the gaming feature is a spinning wheel
that randomly lands on a particular credit amount 602 that is added
to the credit account balance of the user.
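As a sketch of this mechanic, with invented segment values, the spin reduces to a uniform random pick whose result is added to the balance:

```kotlin
import kotlin.random.Random

// Hypothetical wheel segments; the real credit amounts are not specified.
val wheelSegments = listOf(5, 10, 10, 25, 50, 100)

// Returns the amount landed on and the updated credit balance.
fun spinWheel(balance: Int, rng: Random = Random.Default): Pair<Int, Int> {
    val won = wheelSegments[rng.nextInt(wheelSegments.size)]
    return won to (balance + won)
}
```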
[0044] FIG. 7 shows a screen of a puppet manipulation environment
in which the user has selected an augmented reality option that
utilizes a camera of a mobile device in order to superimpose one or
more puppets over images of a live environment as captured by the
camera during user control of the puppet, according to one
embodiment. In FIG. 7, the live environment includes a camera image
with a chair 702 and other real-world office furniture pieces.
Puppet 126 appears to be overlaid on this live camera image. The
user can touch record button 124 to begin recording motions of
puppet 126, as described earlier. For example, visual
indicators 700 appear during recording in order to guide the user
as to where the screen may be touched to cause motion of the
puppet.
[0045] As the user touches various portions of the puppet and/or
the screen around the puppet, the puppet moves and appears to talk
in a way that is synchronized with a voice of the user that is
recorded using a microphone of the mobile device. The motions of
the puppet occur, and are recorded, while the real-world environment
captured by the camera may itself be changing, such as when
movement of real persons appears in the live camera
footage. Sounds from the live environment may be captured by the
microphone of the mobile device so that the voices of the persons
and/or other sounds being generated from the live environment are
integrated into the recorded video.
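One simple way to realize this synchronization, assuming the mouth opening is driven by the loudness of the most recent slice of microphone audio, is an RMS-amplitude mapping; the window size, gain, and 0-to-1 openness scale below are illustrative assumptions:

```kotlin
import kotlin.math.sqrt

// Maps one window of recorded voice samples (normalized to -1.0..1.0)
// to a mouth openness value: 0.0 = closed, 1.0 = fully open.
fun mouthOpenness(samples: DoubleArray, gain: Double = 4.0): Double {
    require(samples.isNotEmpty())
    val rms = sqrt(samples.map { it * it }.average())
    return (rms * gain).coerceIn(0.0, 1.0)
}
```

Evaluating this per audio window (e.g., once per video frame) and posing the puppet's mouth accordingly makes the recorded mouth movement track the voice track.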
[0046] FIG. 8 shows a system, including a puppet services server
102 and one or more user terminals 141, 143, 145, to overlay or
integrate an avatar on or into a live environment for each of the
user terminals, according to one embodiment. In FIG. 8, the user
terminals (e.g., 141, 143, . . . , 145) are used to access server
102 over a communication network 121 (e.g., the Internet or a
wide-area network).
[0047] The puppet services server 102 may include one or more web
servers (or other types of data communication servers) to
communicate with the user terminals (e.g., 141, 143, . . . , 145).
Examples of the user terminals include mobile phones, tablet
computers (e.g., APPLE iPad), desktop computers, and laptop
computers.
[0048] The puppet services server 102 may be connected to a data
storage facility (e.g., repository 104) to store user-provided
content, such as puppets created by a user on a user terminal
and/or user options or preferences as indicated by the user on the
user terminal.
[0049] In FIG. 8, the users may use the terminals (e.g., 141, 143,
. . . , 145) to create puppets using, for example, an application
that has been downloaded to the user terminal from puppet services
server 102 or from another server such as an application
marketplace. This application also may be used by the user to
manipulate a puppet in a gaming or other simulated environment on
the user terminal.
[0050] The manipulation of the puppet may be recorded as a file
(e.g., a video file). The recorded file may be sent for storage by
server 102 in repository 104. The recorded file may also be sent to
a social network server 106 (e.g., FACEBOOK) for sharing with other
persons. These persons may have an account on puppet services
server 102. Optionally, during manipulation of the puppet and/or
creation of the recorded file, cloud services 108 may be provided
to the user terminal and/or puppet services server 102.
[0051] The cloud services 108 may include providing of data for
integration into the recorded file. The data from cloud services
108 may be, for example, data relating to a live environment
associated with the user of user terminal 145 and/or a live
environment associated with other users interacting with the user
during manipulation of the puppet. As one example, other users may
interact with the user in a multi-player online gaming or
simulation environment.
[0052] In one embodiment, the user terminal includes a digital
still-picture camera or a digital video camera. The user terminal
can be used to create multimedia content for integration with the
data regarding the manipulated puppet for the recorded file.
Alternatively, multimedia content can be created using a separate
device and loaded into the user terminal or server 102 for
integration with a puppet manipulation file created by the
user.
[0053] Although FIG. 8 illustrates an example system implemented in
client-server architecture, embodiments of the disclosure can be
implemented in various alternative architectures. For example, the
online social network can be implemented via a peer to peer network
of user terminals, where live environment data and/or puppet
manipulation data are shared via peer to peer communication
connections.
[0054] In some embodiments, a combination of client server
architecture and peer to peer architecture can be used, in which
one or more centralized servers may be used to provide some of the
information and/or services and the peer to peer network is used to
provide other information and/or services. Thus, embodiments of the
disclosure are not limited to a particular architecture.
[0055] FIG. 9 shows a block diagram of a data processing system,
including various input devices such as a camera 216 and microphone
214, which can be used in various embodiments of the puppet
manipulation system above. While FIG. 9 illustrates various
components of a computer system, it is not intended to represent
any particular architecture or manner of interconnecting the
components. Other systems that have fewer or more components may
also be used.
[0056] In FIG. 9, the system 201 includes an inter-connect 202
(e.g., bus and system core logic), which interconnects a
microprocessor(s) 203 and memory 208. The microprocessor 203 is
coupled to cache memory 204 in the example of FIG. 9.
[0057] The inter-connect 202 interconnects the microprocessor(s)
203 and the memory 208 together and also interconnects them to a
display controller and display device 207 and to peripheral devices
such as input/output (I/O) devices 205 through an input/output
controller(s) 206. Typical I/O devices include mice, keyboards,
modems, network interfaces, printers, scanners, video cameras and
other devices which are well known in the art. Sensor(s) 212 may
also serve as input devices that provide data regarding a live
environment of system 201.
[0058] The inter-connect 202 may include one or more buses
connected to one another through various bridges, controllers
and/or adapters. In one embodiment the I/O controller 206 includes
a USB (Universal Serial Bus) adapter for controlling USB
peripherals, and/or an IEEE-1394 bus adapter for controlling
IEEE-1394 peripherals.
[0059] The memory 208 may include ROM (Read Only Memory), volatile
RAM (Random Access Memory), and non-volatile memory, such as a hard
drive, flash memory, etc.
[0060] Volatile RAM is typically implemented as dynamic RAM (DRAM)
which requires power continually in order to refresh or maintain
the data in the memory. Non-volatile memory is typically a magnetic
hard drive, a magneto-optical drive, or an optical drive (e.g., a
DVD-RAM), or another type of memory system which maintains data even
after power is removed from the system. The non-volatile memory may
also be a random access memory.
[0061] The non-volatile memory can be a local device coupled
directly to the rest of the components in the data processing
system. A non-volatile memory that is remote from the system, such
as a network storage device coupled to the data processing system
through a network interface such as a modem or Ethernet interface,
can also be used.
[0062] In one embodiment, a data processing system as illustrated
in FIG. 9 is used to implement the puppet services server 102. In
one embodiment, a data processing system as illustrated in FIG. 9
is used to implement a user terminal. A user terminal may be, for
example, in the form of a personal digital assistant (PDA), a
cellular phone, a notebook computer or a personal desktop
computer.
[0063] In some embodiments, one or more servers of the system can
be replaced with the service of a peer to peer network of a
plurality of data processing systems, or a network of distributed
computing systems. The peer to peer network, or a distributed
computing system, can be collectively viewed as a server data
processing system.
[0064] Embodiments of the disclosure can be implemented via the
microprocessor(s) 203 and/or the memory 208. For example, the
functionalities described can be partially implemented via hardware
logic in the microprocessor(s) 203 and partially using the
instructions stored in the memory 208. Some embodiments are
implemented using the microprocessor(s) 203 without additional
instructions stored in the memory 208. Some embodiments are
implemented using the instructions stored in the memory 208 for
execution by one or more general purpose microprocessor(s) 203.
Thus, the disclosure is not limited to a specific configuration of
hardware and/or software.
[0065] FIG. 10 shows a block diagram of a user device, according to
one embodiment. In FIG. 10, the user device includes an
inter-connect 221 connecting the presentation device 229, user
input device 231, a processor 233, a memory 227, a position
identification unit 225 and a communication device 223.
[0066] In FIG. 10, the position identification unit 225 is used to
identify a geographic location for user recordings created for
sharing. The position identification unit 225 may include a
satellite positioning system receiver, such as a Global Positioning
System (GPS) receiver, to automatically identify the current
position of the user device.
[0067] In FIG. 10, the communication device 223 is configured to
communicate with puppet services server 102 to provide user
recordings or options. In one embodiment, the user input device 231
is configured to generate user data which is to be tagged with the
location information for sharing. The user input device 231 may
include a text input device, a still image camera, a video camera,
and/or a sound recorder, etc.
[0068] In one embodiment, the user input device 231 and the
position identification unit 225 are configured to automatically
tag the user puppet recordings created by the user input device 231
with information provided from the position identification unit
225.
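A minimal sketch of this auto-tagging, with assumed type and field names, is:

```kotlin
// The position unit keeps the latest GPS fix; each new recording is
// stamped with whatever fix is current (or none, if no fix yet).
data class GeoFix(val latitude: Double, val longitude: Double)
data class Recording(val fileName: String, val location: GeoFix?)

class PositionIdentificationUnit {
    var lastFix: GeoFix? = null     // updated by the GPS receiver callback
}

fun tagRecording(fileName: String, unit: PositionIdentificationUnit): Recording =
    Recording(fileName, unit.lastFix)
```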
[0069] FIG. 11 shows a method to overlay or integrate an avatar on
or into a live environment, according to one embodiment. In block
1102, a puppet is created and customized by a user. In block 1104,
live environment inputs are activated. An example is use of a
camera and/or microphone as discussed above. In block 1106, a user
controls motions and behaviors of the puppet in real time.
[0070] In block 1108, the user views an overlay of the puppet onto
the live environment on a screen or display of a mobile device or
other user terminal as described above. In block 1110, puppet
motions and behaviors as controlled or caused by the user are
recorded and integrated with the data stream from the live
environment. This data stream comprises data from input devices
such as a camera, microphone, or other sensors. In block 1112, the
recorded file is played back for user viewing on the mobile
device.
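The sequencing of these blocks can be summarized in code. Every interface below is a stand-in assumption; the sketch only shows the order in which creation, activation, control, overlay, and recording occur:

```kotlin
interface LiveInput { fun activate(); fun read(): ByteArray }  // camera, mic, sensors
interface Puppet { fun applyUserInput(touch: ByteArray) }
interface Recorder {
    fun addFrame(puppet: Puppet, live: ByteArray)
    fun finish(): String                                       // path of recorded file
}

fun recordPerformance(
    puppet: Puppet,                  // block 1102: puppet created and customized
    inputs: List<LiveInput>,         // block 1104: live-environment inputs
    touchStream: Sequence<ByteArray>,
    recorder: Recorder
): String {
    inputs.forEach { it.activate() }
    for (touch in touchStream) {
        puppet.applyUserInput(touch)          // block 1106: real-time control
        val live = inputs.first().read()      // block 1108: overlay displayed live
        recorder.addFrame(puppet, live)       // block 1110: motions + live data recorded
    }
    return recorder.finish()                  // block 1112: file available for playback
}
```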
[0071] In one embodiment, a method includes creating a puppet;
activating at least one input device that provides data from a live
environment; controlling motions of the puppet using a touch or
other user interface (e.g., another form of user control, such as a
mouse, a digital pen, or the depth-sense camera described below); presenting
an overlay of the puppet onto or integrated with the live
environment; and recording a video that integrates the motions of
the puppet with the data from the live environment.
[0072] In one embodiment, the user interface (for controlling
puppet motion) is a system of input in which the user is moving his
or her hand (or other body part) in physical (real-world) space
without touching the mobile device. That motion is captured by a
three-dimensional (3D) depth sense camera (e.g., this camera may be
part of the mobile device, or separate and coupled to the mobile
device). This camera translates the physical motion into machine
readable input used to control the puppet on the screen or display
of the mobile device.
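A plausible translation step, assuming the camera reports a normalized hand position plus depth and that depth drives puppet scale, might look like this:

```kotlin
// Hand pose from a depth-sense camera: x and y normalized to -1..1,
// depth in meters. Both the ranges and the depth-to-scale mapping are
// illustrative assumptions.
data class HandPose(val x: Double, val y: Double, val depthMeters: Double)
data class PuppetPose(val screenX: Double, val screenY: Double, val scale: Double)

fun handToPuppet(hand: HandPose, screenW: Double, screenH: Double): PuppetPose {
    val sx = (hand.x + 1.0) / 2.0 * screenW        // map -1..1 to pixels
    val sy = (1.0 - hand.y) / 2.0 * screenH        // flip y for screen space
    val scale = (1.0 / hand.depthMeters.coerceAtLeast(0.3)).coerceIn(0.5, 2.0)
    return PuppetPose(sx, sy, scale)
}
```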
[0073] In one embodiment, the method may further include playing
back the recorded video on the display of the mobile device. The at
least one input device may include a microphone to record voice
data of a person and the motions of the puppet may include movement
of a mouth of the puppet. The method may further include
synchronizing the voice data with movement of the mouth during the
recording of the video.
[0074] In one embodiment, the mouth of the puppet is activated by a
person using the touch interface prior to recording the voice data.
The controlling of the motions of the puppet may comprise changing
a position of the puppet in the display in response to physical
interaction of a person with the touch interface.
[0075] In one embodiment, during the recording, visual indicators
are presented on the display, the visual indicators corresponding
to portions of the touch interface that, when touched, cause a
motion of the puppet. The at least one input device may include a
camera of the mobile device, and the data from the live environment
may include image data captured by the camera during the recording.
The at least one input device may further include a microphone of
the mobile device, and the data from the live environment may
include sound data recorded by the microphone.
[0076] In another embodiment, the system includes: a display of a
user terminal; a touch interface or other user interface input
device; at least one input device to provide data from a live
environment; at least one processor; and memory storing
instructions configured to instruct the at least one processor to:
activate the at least one input device to provide data from a live
environment; control, based on user input via the touch interface,
motions of a graphical image or puppet on the display; present, on
the display, an overlay of the graphical image or puppet onto the
live environment; and record a video that integrates data for the
motions of the graphical image or puppet with the data from the
live environment.
[0077] The at least one input device may include a camera and a
microphone integrated into a mobile device. The instructions may
further instruct the at least one processor to create the puppet by
receiving selections from a user of at least one component of the
puppet. The at least one component may comprise a body part of the
puppet.
[0078] The instructions may further instruct the at least one
processor to send the digital file to a social network server. The
display may be part of a mobile device, and the at least one input
device may include a sensor to provide data regarding motion of the
mobile device. The motion of the graphical image during the
recording may correspond at least in part to movement of the mobile
device as determined based on the data from the sensor.
[0079] In another embodiment, a non-transitory machine-readable
medium stores instructions, which when executed, cause a mobile
device to: activate, by at least one processor, at least one input
device that provides data from a live environment; control, based
on input received via a touch interface, motions of a puppet;
provide an overlay of the puppet onto the live environment; and
record, by the at least one processor, a video that integrates the
motions of the puppet with the data from the live environment.
[0080] The instructions further cause the mobile device to create,
prior to the recording, the puppet based on body parts selected by
a user using the touch interface. The instructions may further
cause the mobile device to convert touch information received from
the touch interface to coordinate data. In one embodiment, the at
least one input device further comprises a microphone, and the data
from the live environment further includes sound data recorded by
the microphone.
[0081] In an alternative embodiment, the motions of a puppet may be
integrated with data from a pre-recorded file of a prior live
event. For example, a video of a historic speech by a political
figure may be played on the screen of the mobile device as the
background set or environment for a puppet. The user may move the
puppet as described above to integrate of motions of the puppet
with the video of the prior live event. The integration of this
puppet data and prior live event data is recorded as a video file
for sharing as discussed above. Similarly, a previously-recorded
audio file may be played during the recording of motions of the
puppet in order to create a new performance recording that
integrates sound from the audio file with the motions of the
puppet.
[0082] In one embodiment, a recorded puppet performance is uploaded
to the puppet services server 102. Server 102 then uploads the
performance to a social network or other sharing channel.
Information regarding comments on the performance after it has been
shared is collected by server 102 (e.g., "likes" are counted).
Server 102 updates rewards data for a specific user, and this user
receives rewards based on ratings or comments on the performance
from others.
[0083] In one embodiment, a user manipulates a puppet through a
touch interface. Touch information generated by the user is
captured. The touch information is converted to coordinate data.
The puppet position is updated to reflect user-generated movements.
The puppet is modified and re-displayed.
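A compact sketch of this capture-convert-update loop, using a simple linear screen-to-scene mapping as a stand-in for whatever transform the application actually applies, is:

```kotlin
data class Touch(val pixelX: Int, val pixelY: Int)
data class SceneCoord(val x: Double, val y: Double)   // normalized 0..1

class PuppetController(private val screenW: Int, private val screenH: Int) {
    var position = SceneCoord(0.5, 0.5)
        private set

    // Convert raw touch pixels to scene coordinates.
    private fun toScene(t: Touch) = SceneCoord(
        t.pixelX.toDouble() / screenW,
        t.pixelY.toDouble() / screenH
    )

    // One loop iteration: capture, convert, update; the UI layer then
    // re-renders the puppet at the new position.
    fun onTouch(t: Touch) {
        position = toScene(t)
    }
}
```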
[0084] In this description, various functions and operations may be
described as being performed by or caused by software code to
simplify description. However, those skilled in the art will
recognize that what is meant by such expressions is that the
functions result from execution of the code by a processor, such as a
microprocessor. Alternatively, or in combination, the functions and
operations can be implemented using special purpose circuitry, with
or without software instructions, such as using an
Application-Specific Integrated Circuit (ASIC) or a
Field-Programmable Gate Array (FPGA). Embodiments can be
implemented using hardwired circuitry without software
instructions, or in combination with software instructions. Thus,
the techniques are limited neither to any specific combination of
hardware circuitry and software, nor to any particular source for
the instructions executed by the data processing system.
[0085] While some embodiments can be implemented in fully
functioning computers and computer systems, various embodiments are
capable of being distributed as a computing product in a variety of
forms and are capable of being applied regardless of the particular
type of machine or computer-readable media used to actually effect
the distribution.
[0086] At least some aspects disclosed can be embodied, at least in
part, in software. That is, the techniques may be carried out in a
computer system or other data processing system in response to its
processor, such as a microprocessor, executing sequences of
instructions contained in a memory, such as ROM, volatile RAM,
non-volatile memory, cache or a remote storage device.
[0087] Routines executed to implement the embodiments may be
implemented as part of an operating system, middleware, service
delivery platform, SDK (Software Development Kit) component, web
services, or other specific application, component, program,
object, module or sequence of instructions referred to as "computer
programs." Invocation interfaces to these routines can be exposed
to a software development community as an API (Application
Programming Interface). The computer programs typically comprise
one or more instructions set at various times in various memory and
storage devices in a computer, and that, when read and executed by
one or more processors in a computer, cause the computer to perform
operations necessary to execute elements involving the various
aspects.
[0088] A machine readable medium can be used to store software and
data which when executed by a data processing system causes the
system to perform various methods. The executable software and data
may be stored in various places including for example ROM, volatile
RAM, non-volatile memory and/or cache. Portions of this software
and/or data may be stored in any one of these storage devices.
Further, the data and instructions can be obtained from centralized
servers or peer to peer networks. Different portions of the data
and instructions can be obtained from different centralized servers
and/or peer to peer networks at different times and in different
communication sessions or in a same communication session. The data
and instructions can be obtained in entirety prior to the execution
of the applications. Alternatively, portions of the data and
instructions can be obtained dynamically, just in time, when needed
for execution. Thus, it is not required that the data and
instructions be on a machine readable medium in entirety at a
particular instance of time.
[0089] Examples of computer-readable media include but are not
limited to recordable and non-recordable type media such as
volatile and non-volatile memory devices, read only memory (ROM),
random access memory (RAM), flash memory devices, floppy and other
removable disks, magnetic disk storage media, optical storage media
(e.g., Compact Disk Read-Only Memory (CD ROMS), Digital Versatile
Disks (DVDs), etc.), among others. The computer-readable media may
store the instructions.
[0090] The instructions may also be embodied in digital and analog
communication links for electrical, optical, acoustical or other
forms of propagated signals, such as carrier waves, infrared
signals, digital signals, etc. However, propagated signals, such as
carrier waves, infrared signals, digital signals, etc., are not
tangible machine-readable media and are not configured to store
instructions.
[0091] In general, a tangible machine readable medium includes any
mechanism that provides (e.g., stores) information in a form
accessible by a machine (e.g., a computer, network device, personal
digital assistant, manufacturing tool, any device with a set of one
or more processors, etc.).
[0092] In various embodiments, hardwired circuitry may be used in
combination with software instructions to implement the techniques.
Thus, the techniques are neither limited to any specific
combination of hardware circuitry and software nor to any
particular source for the instructions executed by the data
processing system.
[0093] Although some of the drawings illustrate a number of
operations in a particular order, operations which are not order
dependent may be reordered and other operations may be combined or
broken out. While some reorderings or other groupings are
specifically mentioned, others will be apparent to those of ordinary
skill in the art, so an exhaustive list of alternatives is not
presented. Moreover, it should be recognized that the stages
could be implemented in hardware, firmware, software or any
combination thereof.
[0094] In the foregoing specification, the disclosure has been
described with reference to specific exemplary embodiments thereof.
It will be evident that various modifications may be made thereto
without departing from the broader spirit and scope as set forth in
the following claims. The specification and drawings are,
accordingly, to be regarded in an illustrative sense rather than a
restrictive sense.
* * * * *