U.S. patent application number 12/112,975 was filed with the patent office on 2008-04-30 for a method of creating video in a virtual world and method of distributing and using same. Invention is credited to Michael Brook and Ken Mok.
Publication Number: 20080268961
Application Number: 12/112,975
Family ID: 39887644
Filed Date: 2008-04-30
United States Patent Application 20080268961
Kind Code: A1
Brook; Michael; et al.
October 30, 2008

METHOD OF CREATING VIDEO IN A VIRTUAL WORLD AND METHOD OF DISTRIBUTING AND USING SAME
Abstract
A video of a virtual world is created utilizing stored gameplay
data and game states. The gameplay data is used to re-create a
given gameplay sequence in order to create video of the gameplay
sequence. The gameplay sequence may be re-created using enhanced
graphics to create new video files from any number of visual
perspectives.
Inventors: Brook; Michael (Los Angeles, CA); Mok; Ken (Los Angeles, CA)
Correspondence Address: KLEINBERG & LERNER, LLP, 2049 CENTURY PARK EAST, SUITE 1080, LOS ANGELES, CA 90067, US
Family ID: 39887644
Appl. No.: 12/112,975
Filed: April 30, 2008
Related U.S. Patent Documents
Application Number: 60/915,073
Filing Date: Apr 30, 2007
Current U.S. Class: 463/42
Current CPC Class: H04N 21/4781 20130101; A63F 13/12 20130101; A63F 13/63 20140902; H04N 21/854 20130101; H04N 21/2743 20130101; A63F 13/497 20140902; H04N 21/254 20130101; A63F 2300/6018 20130101; A63F 2300/538 20130101; A63F 2300/577 20130101; A63F 13/355 20140902; A63F 2300/6669 20130101; A63F 13/5252 20140902
Class at Publication: 463/42
International Class: A63F 9/24 20060101 A63F009/24
Claims
1. A method of capturing and transmitting video game sequences for
custom editing and retransmitting of virtual worlds, comprising the
steps of: generating gameplay data by game software during the
playing of a game; sending said gameplay data to a datacasting API
embedded in said game software; logging said gameplay data by said
datacasting API; sending said logged gameplay data to data capture
software; converting said logged gameplay data in said data capture
software into a state of variables in a reproducible format;
sending said state of variables to data rendering software;
converting said state of variables into replay game data; sending
said replay game data to replay game software; reproducing gameplay
of the game from said replay game software; sending reproduced
gameplay of the game to video rendering software; and creating a
plurality of video files by said video rendering software from said
reproduced gameplay.
2. The method of claim 1 in which said plurality of video files are
sent to a video distribution channel.
3. The method of claim 1 in which said plurality of video files are
sent to video editing software and including the additional steps
of: editing said plurality of video files to create a plurality of
edited video files; and sending said plurality of edited video
files to a video distribution channel.
4. A method of capturing and transmitting video game sequences for
custom editing and retransmitting of virtual worlds, comprising the
steps of: generating gameplay data by game software during the
playing of a game; sending said gameplay data to a datacasting API;
logging said gameplay data by said datacasting API; sending said
logged gameplay data to data capture software; converting said
logged gameplay data in said data capture software into a state of
variables in a reproducible format; sending said state of variables
to data rendering software; converting said state of variables into
replay game data; sending said replay game data to replay game
software; reproducing gameplay of the game from said replay game
software; sending reproduced gameplay of the game to video
rendering software; and creating a plurality of video files by said
video rendering software from said reproduced gameplay.
5. The method of claim 4 in which said plurality of video files are
sent to a video distribution channel.
6. The method of claim 4 including the additional steps of: sending
said files to video editing software; editing said plurality of
video files to create a plurality of edited video files; and
sending said plurality of edited video files to a video
distribution channel.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] The present application is a continuation-in-part, claiming priority from U.S. Provisional Patent Application No. 60/915,073, filed Apr. 30, 2007, and entitled METHOD OF CREATING VIDEO IN A VIRTUAL WORLD AND METHOD OF DISTRIBUTING AND USING SAME.
BACKGROUND
[0002] 1. Field of the Invention
[0003] The present invention relates to computer software and more
specifically to a method of creating video in a virtual world and a
method of distributing and using same. The present invention
generally describes a novel method of creating a video presentation
of an interactive video game (video of other virtual worlds may
also be created) using data that has been created and stored as the
interactive video game is played.
[0004] 2. Description of the Related Art
[0005] There exist other systems and methods whereby a video
presentation of interactive video games may be created. For
example, there are numerous software products available which
simply "record" the actions taken on a user's computer or video
game console. These software products are capable of creating a
video presentation of interactive video games.
[0006] These software products are typically purchased and
downloaded as stand-alone applications for use in recording
whatever actions are taking place on the user's screen. This
provides the functionality necessary for recording video game
gameplay, but it is not designed specifically with the re-creation
of video game gameplay in mind.
[0007] However, these programs are generally only capable of
creating video presentation of exactly what was displayed on a
given player's screen (or a portion thereof). There is no
capability, short of running similar software on multiple
additional machines, to switch perspectives, user viewpoints or to
alter dynamically, as the game is on-going, the viewpoint of the
video being created, free of any game-created constraints.
[0008] From a television or video production standpoint, this results in a substantial limitation on the ability to create compelling "replays" or even compelling video presentations of gameplay events. While the gameplay or action within the game may be very exciting to a player or to a group of players, post-production videos based upon a single recorded perspective of that gameplay or action are often very dull.
A very rudimentary understanding of video production and
post-production informs content-creators that a single perspective
of an entire event is, generally, uninteresting. In most modern
motion pictures and television shows, a multiplicity of angles and
perspectives on the same event are used in order to tell a much
more compelling story. The existing software programs are not
capable, absent some substantial pre-planning, of providing this
functionality.
[0009] Furthermore, the existing programs often result in user-created videos that are substantially inferior in quality. In contrast, video game manufacturers' videos of their own games are often of very high quality, but they take a substantial amount of time and resources to create, due to the need to generate multiple perspectives and to plan and orchestrate beforehand, in pre-production, the ways in which various "shots" will be laid out, among other relevant factors.
[0010] There are no means by which a game player or game manufacturer can easily create compelling videos based upon an interactive video game "match" or gameplay sequence. Further, there are no means by which such videos may be created after a gameplay sequence has taken place; the programs of the prior art provide only a single view, or complex pre-planned views, of a gameplay experience. For these and other reasons, there exists a need for a software product capable of addressing these limitations of the available technology.
SUMMARY OF THE INVENTION
[0011] The present invention provides a method of creating a video
record of virtual worlds and a method of distributing and using
same. The preferred embodiment of the present invention provides
numerous benefits over the prior art.
[0012] The present invention generally provides a means by which a
stand-alone or "add-in" software tool may be employed in two parts
to re-create from stored data a three dimensional gameplay
experience (or other rendition of a three-dimensional world) after
it has occurred. The experience or sequence of events may then be
stored in a video format after-the-fact, reproducing the event
faithfully, but allowing for much more thoughtful pre-production,
post-production and point of view, lighting, and sound
placement.
[0013] The present invention generally works by means of a two-application process. The invention could be implemented using more or fewer software applications, but in the preferred embodiment only two are utilized. The first is a software application that is either built into a video game (or other, similar) software or provided as a stand-alone application. This first software application is designed to "grab" or "log" the data that is created as a video game is played.
[0014] This application is designed to log (or store) all of the
data created. For example, in a first person shooter (FPS) game,
the application would log shooter positions, weapons used, the
locations of every player at each moment (typically x, y, z
coordinates), the number of shots fired by each player, each
player's name or "nickname" in the game, the angle of firing and
location, the location in which other players are hit by "bullets"
or "lasers" and various other data.
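As a rough illustration of the kind of log this first application might produce for an FPS game, the following sketch records the data items listed above. The field names, function signature, and JSON serialization are hypothetical; the specification does not define any schema.

```python
import json

def log_event(log, timestamp, player, position, weapon, shots_fired, hit=None):
    """Append one gameplay-data record, roughly as the logging application might.

    All field names are illustrative; the specification defines no schema.
    """
    log.append({
        "t": timestamp,        # moment of the snapshot
        "player": player,      # player name or "nickname"
        "pos": position,       # (x, y, z) coordinates of the player
        "weapon": weapon,      # weapon currently in use
        "shots": shots_fired,  # cumulative shots fired by this player
        "hit": hit,            # where another player was hit, if anywhere
    })

log = []
log_event(log, 0.000, "Player1", (10.0, 2.0, 5.0), "laser", 0)
log_event(log, 0.033, "Player1", (10.4, 2.0, 5.1), "laser", 1, hit=(22.0, 1.5, 7.0))

# The log can be serialized to a reproducible text format for later replay.
serialized = json.dumps(log)
```

Each record captures one moment of gameplay; the second application would later read such records back to reconstruct the sequence.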
[0015] The second application is designed, in the preferred embodiment, in conjunction with the game software (though in alternative embodiments it may stand alone as well) to recreate the event exactly as it originally occurred during gameplay. This application also provides the secondary functions of allowing a user to view the event at whatever speed is desired, to place a virtual camera (or multiple virtual cameras) throughout the recreated gameplay sequence, and to similarly place virtual sound receivers or virtual lighting effects in various places throughout the gameplay sequence. This software is capable of moving through the gameplay sequence both in time and space, so as to provide the best possible recreation of the event for "filming" and to provide a subsequent "director" of the filming with the most resources to recreate and edit the event in a compelling manner.
[0016] The first benefit is the ability to drastically improve
rendering quality. Most video capture software captures only the
quality of the image displayed on the player's screen (and only
from the player's perspective). As it turns out, in order to
increase response time, many gamers intentionally degrade the
quality of the graphics displayed by the game. Similarly, they
intentionally degrade the video capture software's resolution so
that it does not needlessly hinder the gameplay experience by
spending CPU cycles transcoding video in real time.
[0017] By "filming" (analogous to the operation of a camera) a re-creation of the gameplay sequence based upon saved data, the graphics may be "cranked up" to the maximum levels of resolution and graphical quality. Furthermore, the video of those high-quality graphics may itself be captured at a very high resolution.
[0018] There is no concern, at the time of creating the video of
the gameplay sequence, for compromising the performance of the
machine upon which the game is being played. It is being done
after-the-fact. This leads to an overall drastic improvement in the
resulting video presentation. Higher-quality videos are much more
suitable for broadcast by high definition television or for sharing
via the web.
[0019] A second benefit of the present invention is to allow a "director" of the video file or virtual film creation to review the sequence and to select, much like a director would for a television or movie sequence, the best angles and virtual camera locations (POV). The "director" composing the subsequent film creation may be any user of the software, from a game player to a game software manufacturer to a third-party media creator or an interactive entertainment competition organizer. The ability to direct allows the most compelling or "best" angles for exciting moments in the game to be selected by the director, resulting in substantially better quality and more compelling video files being created.
[0020] An additional benefit is that the director of a video file
or film creation based upon the gameplay sequence may also create
multiple virtual cameras for use in capturing different angles and
various times. Cutting between angles and scenes is an excellent
way for a director to make lengthy gameplay sequences more exciting
to watch and to provide the best angles for multiple different
events or actions. The present invention provides the only current
method whereby virtual camera locations and virtual camera angles
(including multiple virtual cameras) may be used to record an
event, other than scripting an event prior to recording or capture.
Scripting is rarely desirable in gaming competitions or other
similar situations.
[0021] It is therefore an object of the present invention to
provide means for any user of stored data pertaining to a gameplay
sequence to recreate the gameplay sequence using that data, to
insert a multiplicity of "points of view" (using virtual cameras
and virtual "microphones") at various locations within the gameplay
sequence and world, to record with multiple virtual cameras at
various times throughout the gameplay sequence, to select those
locations and times in which each virtual camera will be present
and recording video at various locations, and to provide, as an
output, a video file or video files suitable for post-production
editing, viewing and distribution.
[0022] The novel features which are characteristic of the
invention, both as to structure and method of the operation
thereof, together with further objects and advantages thereof, will
be understood from the following description, considered in
connection with the accompanying drawings, in which the preferred
embodiment of the invention is illustrated by way of example. It is
to be expressly understood, however, that the drawings are for the
purpose of illustration and description only, and they are not
intended as a definition of the limits of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] FIG. 1 is a block diagram of the components and data flow in
a preferred embodiment of the present invention;
[0024] FIG. 2 is a block diagram of an alternative embodiment;
[0025] FIG. 3 illustrates the placement of several virtual cameras
and a virtual microphone in a gameplay sequence;
[0026] FIG. 4 is flowchart of the steps involved in the
video-creation process;
[0027] FIG. 5 is a flowchart of the steps involved in the method of
creating and distributing a video file; and
[0028] FIG. 6 is a flowchart of an alternative embodiment of the
method of creating and distributing a video file.
DETAILED DESCRIPTION OF THE INVENTION
[0029] Several definitions may be useful during the reading of this document. These are defined in the following paragraphs. "Director" or "user" as used throughout the specification refers to any one of a number of individuals or companies who act to select "locations" at which to place virtual cameras and microphones, and to provide any motion to either, during the course of recording a gameplay sequence. The director or user may be an individual who has taken part in a gameplay sequence, a third-party game software manufacturer or developer, a media source or a third-party game competition organizer.
[0030] "Gameplay sequence" as used herein generally refers to the
parsing of any saved gameplay data with a game software application
to create (or re-create) a series of events over the course of
time, within a game software application. "Gameplay sequence" is
also intended to apply to the use of any data to create (or
re-create) any events or activity within a virtual "world"
generated by a computer. Gameplay sequence is not intended to be
limited to video games only, as it may be applicable to any virtual
world that is created by a computer in conjunction with suitable
software and hardware.
[0031] Turning first to FIG. 1, the components and data flow of a
preferred embodiment of the present invention are shown in block
diagram form. The present invention generally relates to a method
used to create video files or a "film" of a virtual world. This
figure shows one of the ways in which the video files are
created.
[0032] A consumer computer 10 may be a conventional desktop or
laptop computer. As can be seen, this consumer computer 10 has at
least one software application installed, in this instance a game
software application 12. The game software application is
preferably a video game of some type. The game software 12 may take
the form of an online multiplayer game, a first person shooter
(FPS), a massively multiplayer online (MMO) game, a real-time
strategy (RTS) game or virtually any other form of video game. It
may also take the form of any other virtual world environment
capable of creation by a computer. In general, both multiplayer as
well as single player games will be recorded or filmed using this
method, it being understood that by "filming", digital recording is
intended.
[0033] In this embodiment, the game software 12 contains a
datacasting API (Application Programming Interface) 14. The
datacasting API 14 in the embodiment depicted in this figure is an
"add on" or a "tool" which has been integrated into the game
software 12 itself such that the game software 12 is capable,
inherently, of utilizing the datacasting functionality.
[0034] The datacasting API 14 is the component part of the game which is capable of parsing and translating gameplay sequence data created by the game software 12. For example, in the case of a first person shooter, the datacasting API 14 would "log" data created by the game software representing player locations, guns or other weapons picked up or used, the locations and angles of bullets, projectiles or lasers fired or other weapons used, the results of any successful and unsuccessful attacks, player movement, player names, player statistics, animation data and virtually every piece of data used or manipulated by the game software 12 in carrying on the gameplay sequence (hereinafter referred to as "gameplay data").
[0035] In the event that the game has an online multiplayer component, all or a portion of this datacasting API 14 may instead be a part of the game server software (not pictured). As a user connects to the game server software at a remote location (or at one of the players' computers), the datacasting API begins logging information pertaining to that player. In either case, the datacasting API "logs" or otherwise parses gameplay sequence data, which may then be provided to data capture software 16. In the preferred embodiment, the datacasting API 14 and the data capture software 16 capture data at a rate of 30 frames per second (FPS). This matches the standard television rate (approximately 30 FPS) and exceeds the motion picture rate (24 FPS), and at 30 FPS the human eye is virtually incapable of seeing any "chop" in the video subsequently created from this gameplay data.
[0036] The data capture software 16 is software which may be
resident on the consumer computer 10, resident at a remote server
location or resident at a third party site or computer and is used
to capture all of the gameplay data. The data capture software 16
creates a game sequence data file that may be used to reproduce the
video game gameplay sequence at a later time. The data capture
software 16 creates a detailed log of the entire gameplay sequence
for storage.
[0037] In order to create the gameplay data, the datacasting API 14
and the data capture software 16 together take "snapshots" of the
game "state." A state is a term of art used in computer science to
refer to the status of the entire program in memory at a given
time. It may also refer to the current setting of every (or a
portion of every) variable currently defined and in use by the
program as it runs. The total data pertaining to the on-going and
running software program is the "state" of that program.
[0038] The present invention takes a "snapshot" of the "state" of
all variables, the memory registers and values in memory locations
allocated to the game program at a rate of 30 times per second. The
game state includes any animation state, game data state, sound
replay state, player location and action state, and various other
states. This data is then written, in a reproducible format such as
text, XML or directly as a RAM dump, to a state log for use in
rendering later using a rendering computer 18. As the state is
loaded into the rendering computer 18, the gameplay sequence is
re-created, exactly, from the moment that the game state data was
captured, including all avatar and character action.
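The snapshot process described above can be sketched as follows, assuming for illustration that the tracked game variables are held in a simple dictionary. The 30-per-second rate and the text serialization follow the description in this paragraph; the function names and the frame representation are hypothetical.

```python
import json

SNAPSHOT_RATE = 30  # snapshots per second, per the preferred embodiment

def snapshot_state(game_vars):
    """Freeze the current 'state' -- every tracked variable -- as one record."""
    return dict(game_vars)  # a shallow copy captures the values at this instant

def run_capture(frames):
    """Capture a state log over a sequence of frames (frames are illustrative)."""
    state_log = []
    for i, game_vars in enumerate(frames):
        record = snapshot_state(game_vars)
        record["t"] = i / SNAPSHOT_RATE  # timestamp of this snapshot
        state_log.append(record)
    return state_log

frames = [
    {"player_pos": (0, 0, 0), "animation": "run", "sound": "footsteps"},
    {"player_pos": (1, 0, 0), "animation": "run", "sound": "footsteps"},
]
state_log = run_capture(frames)
text_form = json.dumps(state_log)  # reproducible text form of the state log
```

The resulting text log is what the rendering computer would later load to re-create the gameplay sequence.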
[0039] The state data may be created or stored in a compressed format using differential analysis to record only the data that has changed since the last moment at which a game state was saved. This differential analysis quickly determines which elements have changed from moment to moment, then saves only those elements in the new game state. For example, from moment to moment a player's nickname would not change, but the player's location may be changing constantly. Therefore, the player's name need not be logged at every instance. However, data pertaining to movement and action would be saved often for accuracy. This may result in a significantly smaller overall file size for the resulting saved gameplay data.
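The differential approach can be sketched as two operations: computing the changed fields when a state is saved, and merging them back when the sequence is replayed. The dictionary representation of a game state is an assumption for illustration.

```python
def diff_state(previous, current):
    """Return only the fields that changed since the last saved state."""
    return {k: v for k, v in current.items() if previous.get(k) != v}

def apply_diff(base, delta):
    """Reconstruct a full state from the prior state plus a saved delta."""
    merged = dict(base)
    merged.update(delta)
    return merged

s0 = {"nickname": "Player1", "pos": (0, 0, 0)}
s1 = {"nickname": "Player1", "pos": (1, 0, 0)}  # name unchanged, position moved

delta = diff_state(s0, s1)        # only "pos" needs to be written to the log
restored = apply_diff(s0, delta)  # the full state is recovered for replay
```

Because the unchanging nickname never appears in the delta, the stored log grows only with the data that actually varies, as the paragraph above describes.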
[0040] The next component is the rendering computer 18. It is to be expressly understood that the rendering computer 18 serves at least two separate and distinct rendering functions; it is, notably, not only a computer capable of rendering video from data. The first function is to accept a data file produced by the data capture software 16 and to "re-create" a gameplay sequence based upon that data. Its second, and distinctly separate, function is to record the re-created gameplay sequence according to a user's direction. The methods of the prior art perform only the second function (and only in part), in that they record only the event as it is happening (instead of a re-created gameplay sequence) and from a single perspective; it is this prior art that is overcome by the present invention.
[0041] First, the rendering computer 18 (which may, depending upon
the implementation, be the same as the consumer computer 10, but is
pictured separately for ease of explanation) is capable of
recreating the gameplay sequence as if it is being played again.
This is accomplished by data rendering software 20. The data
rendering software 20 is software designed to interpret the
gameplay data (such as player location, name, weapons used and the
like) and to re-create the gameplay sequence based upon that
data.
[0042] In order to accomplish this, the data rendering software 20 is used, virtually always, in conjunction with replay game software 22. The replay game software 22 is the same as, or closely related to, the game software 12, but may include higher-resolution graphics and/or advertising audio and graphics, for example. The data rendering software 20 feeds the data pertaining to the gameplay sequence that was created by the data capture software 16 back into the replay game software 22 on the rendering computer 18. The replay game software 22, under the direction of the data rendering software 20, interprets the data and presents the gameplay sequence, in its entirety, subject to the user's direction, exactly as it occurred.
[0043] It is as if the gameplay is occurring all over again. However, at this stage, the data rendering results in a three-dimensional world which the data rendering software 20 allows a user to move through and view from literally any location in the game world or map. This is not a single-viewpoint, locked-camera follow of one or more players; it is an exact re-creation of the gameplay sequence, built from the complete gameplay data created as the gameplay sequence was first completed.
[0044] At this point, the user may use the rendering computer 18 and the data rendering software 20 to select a multiplicity of locations within the game world or gameplay sequence at which to place one or more "virtual" cameras, and to record video from one time to another at any one of those cameras. Similarly, a "virtual" microphone may be placed at any point in the gameplay space, or at a multiplicity of locations if so desired. This process may be more readily understood when described with reference to FIG. 3 below.
[0045] The user may then, once all of the virtual cameras and
microphones are placed, "run" the entire gameplay sequence while
one or more of the cameras record video and the microphones record
sound. In the preferred embodiment, these virtual cameras and
microphones output to individual time-stamped video files, audio
files or audio/video files. These files may then be used for "post
production" editing of a video. Video rendering software 24 is used
in connection with the data rendering software 20 and the replay
game software 22 to appropriately create video files of the
sequence from each location.
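The placement of virtual cameras, each with its own recording window and each producing its own time-stamped output, might be sketched as follows. The class, its fields, and the string-valued "scene" stand-ins are all hypothetical; the specification describes the behavior, not an implementation.

```python
class VirtualCamera:
    """A virtual camera with a position and a recording window (hypothetical)."""

    def __init__(self, name, position, start_t, end_t):
        self.name = name
        self.position = position
        self.start_t = start_t  # when this camera begins recording
        self.end_t = end_t      # when it stops
        self.frames = []        # stands in for a time-stamped video file

    def maybe_record(self, t, scene):
        if self.start_t <= t <= self.end_t:
            self.frames.append((t, scene))

def run_sequence(cameras, timeline):
    """Replay the gameplay sequence; each camera records only in its window."""
    for t, scene in timeline:
        for cam in cameras:
            cam.maybe_record(t, scene)
    # One output "file" per camera, as the preferred embodiment describes.
    return {cam.name: cam.frames for cam in cameras}

timeline = [(0.0, "kick"), (0.5, "ball in flight"), (1.0, "goal")]
cameras = [
    VirtualCamera("fixed", (0, 10, 0), start_t=0.0, end_t=0.5),
    VirtualCamera("behind_player", (5, 1, 0), start_t=0.0, end_t=1.0),
]
outputs = run_sequence(cameras, timeline)
```

Running the whole sequence once yields a separate recording per camera, ready for the "post production" editing step.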
[0046] The output from the video rendering software 24 is a video
file 26. Of course this video file 26 may in fact be a multiplicity
of video files, should a multiplicity of virtual cameras be used.
Video file 26 is intended to represent the collective output of
this process, including any audio files or audio/video files
created. It is also intended to represent the possibility that
analog audio and/or video files may be created using this
functionality.
[0047] The video file 26 may be immediately shared via a video
distribution channel 32. This video distribution channel 32 may be
a professional video game website, a television show or compilation
video of video game highlights. It may be a television network or
an internet video sharing site. The video file 26 is ready to be
shared, broadly, the moment it has been rendered by the video
rendering software 24.
[0048] However, a user may, optionally, choose to use a video
editing computer 28 equipped with video editing software 30 to
combine the various video files created by the video rendering
software 24 into a single, composite video. The optional nature of
this choice is indicated by dotted connecting lines. The user may
choose to use the video editing software 30 to insert graphics,
video effects, transitions between cuts or to integrate multiple
angles from which the video was created into a single video file
for sharing.
[0049] In the film industry, this process is typically considered
"post production". It is also known as "editing." This process
allows a user to select the "best" shot, given the series of angles
from which video was created. Even further, if the user is
dissatisfied with the angles he or she has chosen or the outcome of
one or more video files, the user may go back to the gameplay
sequence data and use the data rendering software 20, the replay
game software 22 and the video rendering software 24 to create a
new angle or new portion of video for the resultant video file 26
or for subsequent distribution.
[0050] One of the advances over the prior art is that the user may
return, directly, to the source of the video in order to create
additional angles, movements, pans, or data points from which video
may be created for subsequent inclusion in a post-production, final
cut video program to be created. Once the final cut is created
using the video editing software 30, the video may be distributed
via the video distribution channel 32.
[0051] Referring next to FIG. 2, an alternative components and data
flow diagram is shown. The first component is a consumer computer
34. This may be the same consumer computer found in FIG. 1 (as
element 10). The next component is game software 36. It is notable,
however, that in this case, the game software 36 is separate from a
datacasting API 38 unlike the combination shown in FIG. 1. This is
the primary difference between this embodiment and the embodiment
depicted in FIG. 1.
[0052] The datacasting API 38 in this embodiment is an application separate from the game software 36. This means that the datacasting API 38 in this embodiment is designed in such a way as to "listen" to the game software 36 as it works. The datacasting API 38 still creates a log, which it passes to data capture software 40, of all relevant information happening within the game. The capture of the gameplay data takes place using whatever means are available. In some instances, a "plug-in" to a game will be used in connection with the datacasting API 38 when the datacasting API is not integrated into the game software 36.
[0053] In other cases, the datacasting API 38 will employ "listening" techniques (which will be described further below) to gather the information from the game software 36. In any event, the datacasting API 38, while not a part of the game software 36, is still capable of gathering all relevant gameplay data and passing it on to data capture software 40.
[0054] A rendering computer 42, corresponding to the rendering
computer 18 in FIG. 1, performs the same two-part function. First,
it renders the game using the gameplay data and data rendering
software 44 and replay game software 46, then it renders a video
file from that gameplay data using video rendering software 48,
according to user direction.
[0055] The resultant video file 50 (which may, in fact, be a
multiplicity of audio and/or video files) may be edited according
to user wishes or may be immediately distributed via a video
distribution channel 56. The video editing may take place on a
video editing computer 52 using video editing software 54.
[0056] Referring next to FIG. 3, a simple example gameplay scenario
is shown in order to better explain the process shown in FIGS. 1
and 2. FIG. 3 is intended to represent a soccer game field 58
within a video game. This soccer game field 58 is shown from a
"top-down" perspective as a two-dimensional field. However, it is
intended to represent a three-dimensional "game world" that may be
filmed using the method of this invention.
[0057] The depicted field 58 includes only two players. Player one
60 and player two 62 are both shown on the field 58. This entire
display is intended to represent the "gameplay area" or "gameplay
space" and a simple gameplay sequence which may be recreated and
filmed using the method of this invention.
[0058] A soccer ball 64 (at time=0) and 64' (representing the same
soccer ball at time=1) is also shown. For purposes of explanation,
assume that the gameplay sequence is made up of two time periods,
time=0 and time=1. There is, however, a smooth transition between
these two time periods. It is apparent that player two 62 has just
struck the ball 64 with his on-screen character or avatar. The ball
64 is traveling toward the goal 66.
[0059] In the preferred embodiment of the present invention, all of
the data pertaining to player two 62, player one 60, the ball 64,
the goal 66 and the field 58 are being stored thirty times in a
single second. As the ball 64 moves toward the goal 66, the
datacasting API 14 (See FIG. 1) captures all of this data and
provides it to the data capture software 16.
[0060] After the event or gameplay sequence has occurred (and presumably after the game has ended), the data capture software 16 may input its data into the rendering computer 18 for use by the data rendering software 20. This software allows for the placement of a multiplicity of "virtual cameras" around the gameplay field.
[0061] The data rendering software 20 places these virtual cameras
around the field 58. The first, fixed camera 68, is used to "film"
the ball and its movement from a single angle for a period of time.
For example, fixed camera 68 may be set to record only the portion
of the transition from time=0 to time=1 in which the ball 64
remains in the camera's frame. Alternatively, it may be set to film
the entire period from time=0 to time=1.
[0062] The next camera, pan camera 70, is placed, using scripting
protocols or "drag and drop" methodologies, such that it pans
across the field moving past player one 60 and following the ball
64 from time=0 to time=1. This shot would follow the goal-scoring
moment in the match from player two's 62 shot until the ball 64 (or
later 64') enters the net of the goal 66. As depicted, there is a
single virtual microphone 72 (and at time=1, microphone 72') which
also follows or pans with the ball as it moves past player one
60.
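The fixed and panning cameras just described may be modeled as simple objects that report a viewpoint for any time in the re-created sequence. The class names, coordinates and linear pan path in this sketch are assumptions for illustration, not the invention's actual implementation.

```python
def lerp(a, b, u):
    """Linear interpolation between two points, u in [0, 1]."""
    return tuple(ai + (bi - ai) * u for ai, bi in zip(a, b))

class FixedCamera:
    """Films from one spot for a set window of the sequence (like camera 68)."""
    def __init__(self, pos, start=0.0, end=1.0):
        self.pos, self.start, self.end = pos, start, end
    def position_at(self, t):
        # Returns None outside its recording window: the ball has left the frame.
        return self.pos if self.start <= t <= self.end else None

class PanCamera:
    """Sweeps from pos_a to pos_b over the sequence, following the ball (like camera 70)."""
    def __init__(self, pos_a, pos_b, start=0.0, end=1.0):
        self.pos_a, self.pos_b, self.start, self.end = pos_a, pos_b, start, end
    def position_at(self, t):
        u = (t - self.start) / (self.end - self.start)
        return lerp(self.pos_a, self.pos_b, min(max(u, 0.0), 1.0))

fixed = FixedCamera(pos=(10.0, 5.0), start=0.0, end=0.6)
pan = PanCamera(pos_a=(60.0, 0.0), pos_b=(5.0, 0.0))  # pans past player one toward the goal
```

"Drag and drop" placement would simply construct such objects with user-chosen positions; a scripting protocol would construct them from a text description.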
[0063] It is to be understood that a multiplicity of microphones
may be placed throughout the game field 58. Each of these
microphones may record time-stamped, position-based audio. If the
microphone is "near" the stands of the match, for example, the
voices of the "crowd" in the stands may be louder than if the
microphone is further away. Similarly, if sound effects of the kick
of the ball 64 are generated by the software in a spatial manner, a
microphone closer to the ball 64 may record that sound more loudly
and accurately, so as to indicate close proximity to the kick.
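The position-based audio described above amounts to attenuating each sound by its distance from the microphone. A plausible sketch, assuming an inverse-square falloff; the patent does not specify a falloff model, so both the formula and the coordinates are illustrative assumptions.

```python
import math

def mic_level(mic_pos, source_pos, loudness):
    """Distance-attenuated recording level: an inverse-square falloff so a
    microphone nearer the kick records it louder. The clamp prevents a
    source sitting on the microphone from producing an unbounded level."""
    d = math.dist(mic_pos, source_pos)
    return loudness / max(d * d, 1.0)

near = mic_level((58.0, 30.0), (60.0, 30.0), 100.0)  # mic two units from the ball
far = mic_level((20.0, 30.0), (60.0, 30.0), 100.0)   # mic forty units away
```

The nearer microphone records the kick several hundred times louder, which is exactly the cue of proximity the specification describes.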
[0064] Finally, a third camera 74 is placed directly behind player
two 62. This third camera 74, similar to the "player following"
cameras typically employed in video games, may record the player's
movements precisely and watch as player two 62 watches the ball 64
(and later ball 64') go into the net.
[0065] It is to be understood that the placement of these "cameras"
around the game field 58 takes place after the events of the game
have already ended. The re-creation of the gameplay sequence is
done after the fact using the data rendering software 20, not while
the actual match is taking place. Similarly, no physical camera
is placed anywhere. These are virtual cameras in the sense that
they record action from a certain perspective within the gameplay
sequence and the gameplay world in accordance with user requests or
direction.
[0066] The cameras (and microphones) may or may not
"simultaneously" record the event as it is re-created by the data
rendering software 20. Because the re-created event is based upon
stored data, not upon events as they occur, the user may review and
replay it as many times as necessary in order to get an appropriate
or desired shot of a particular portion (or all) of a gameplay
sequence. Time-stamping of the video files is used to sync up the
rendered video and audio files for later editing.
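The time-stamp synchronization just mentioned may be sketched as pairing frames by their shared time-stamps. The dictionary-based matching and the field names here are assumptions for illustration:

```python
def sync_tracks(video_frames, audio_frames):
    """Pair each rendered video frame with the audio chunk carrying the same
    time-stamp, so separately rendered files line up for later editing."""
    audio_by_t = {a["t"]: a for a in audio_frames}
    return [(v, audio_by_t.get(v["t"])) for v in video_frames]

# Hypothetical separately rendered tracks, both stamped at 30 frames per second.
video = [{"t": t / 30, "image": f"frame_{t}"} for t in range(3)]
audio = [{"t": t / 30, "samples": f"chunk_{t}"} for t in range(3)]
pairs = sync_tracks(video, audio)
```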
[0067] Because the video recording only occurs after the gameplay
sequence is completed, and is based upon re-rendered or re-created
gameplay performed by the data rendering software, the perfect shot
and perfect "pre-production" (actually occurring after the event,
but in effect, planning the best shots to use, and in that sense
"pre-production") may be done. The cameras may be placed numerous
times and in different locations. If the shot is not quite right,
the user may alter the camera location only slightly. If the shot
is not acceptable, the user may alter it completely or simply not
use any video created from that "virtual" camera in a final video
that is created.
[0068] It is to be understood that this technology may be applied
to gameplay sequences of any length, type or kind. Video of
two-dimensional games, three-dimensional games and blends of the
two may be created using this method. Complex games involving any
number of players, online or connected by means of a local area
network or connected to a large-scale server or multiple servers
may be filmed using the methodology described herein. For example,
as described more fully below, the game server itself may be
equipped with the ability to capture video game data (the data
capture software 16) in some embodiments, such that it records data
locally, and videos may subsequently be created from that data
either locally or at remote locations.
[0069] Referring now to FIG. 4, a flowchart of the steps in the
preferred embodiment is shown. This flowchart includes the steps
used to capture game data, re-create a gameplay sequence and to
subsequently create video from that gameplay data and re-created
gameplay sequence.
[0070] A first step 76 is to play a game, including the creation of
gameplay data. The gameplay data, as has previously been discussed, is
gathered by a datacasting API 14 and captured by data capture
software 16 (See FIG. 1). During this step one or more players
plays a game and thereby generates data pertaining to the gameplay
sequence. As described above, this data may include: player
location within the game world, player weapons, player or computer
player actions, items used, injuries or losses sustained, movement
of player characters or units around the game world and virtually
any and all data generated or used to generate a gameplay sequence
(and as further described above "gameplay data").
[0071] A next step 78 requires that the gameplay data be captured.
In the preferred embodiment, a piece of software "listens" to the
game as the gameplay sequence moves forward. This is done by the
datacasting API 14, either incorporated into the game or as a
stand-alone module. The data capture software 16 captures the data
and stores it for later use in re-creating the gameplay
sequence.
[0072] It is to be understood that the gameplay data that is
captured may be stored in virtually any location. In the preferred
embodiment, the data is stored on the user's computer. For example,
in the instance of a game player playing a game and using the
datacasting API 14 (see FIG. 1), the data may be stored on the
consumer computer 10. However, in some cases the data may be stored
remotely, for example, on a multiplayer hosting server or on a
media outlet's web server. The data may be stored in any location
and may be stored for a set period of time or indefinitely.
[0073] Similarly, the gameplay data may be saved remotely. A user
may take part in a gameplay sequence, continue playing for several
minutes or hours, then request, after-the-fact, that the remote
server provide him or her with the gameplay data created in the
course of that gameplay sequence from a first time to a second
time. The user may then be provided with that gameplay data, either
as a downloadable file or through access on a remote server. This
type of configuration would allow a user (or other party), not
knowing in advance that a particular sequence will be meaningful,
to create, after-the-fact, an excellent video of the meaningful
gameplay sequence.
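Retrieving the gameplay data "from a first time to a second time" reduces to slicing the logged frames by their time-stamps. This hypothetical sketch assumes frames stored as time-stamped records, consistent with the thirty-per-second capture described earlier:

```python
def extract_sequence(frames, t_first, t_second):
    """Return only the logged frames falling between a first and a second time,
    as a player might request after the fact from a remote server."""
    return [f for f in frames if t_first <= f["t"] <= t_second]

session = [{"t": i / 30} for i in range(3600)]  # two minutes of logged play
clip = extract_sequence(session, 30.0, 31.0)    # the one meaningful second
```

The server need only run this filter and hand back the resulting slice as a downloadable file; the full session log can then be discarded or retained as desired.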
[0074] A next step 80 is to provide the gameplay data to data
rendering software. The data rendering software 20 (described with
reference to FIGS. 1 and 2) can use the gameplay data to recreate
the gameplay sequence exactly as it originally happened. After the
gameplay sequence has been recreated, the user or director of the
process can move on to a next step 82 performing video
preproduction. This is the process wherein the user or director may
select the various locations for virtual cameras within the
gameplay sequence, the places for microphones, any movement of
cameras or microphones and the like.
[0075] As can be seen, this step 82 of preproduction does not
actually occur before the gameplay sequence has completed. Instead, it
occurs during the re-creation of the gameplay sequence, but prior
to the video rendering. In the sense that various shots may be
selected, microphones placed, thoughtfulness given to the ways in
which to film action sequences, it is a pre-production step 82.
However, in the sense that it occurs after the gameplay sequence
has completed or partially completed, it is not truly
pre-production. It is pre-video-rendering production, which
provides virtually all of the benefits of true pre-production,
benefits not previously available in the video creation process for
interactive video games (and similar computer-created
environments).
[0076] It is to be understood that the step 80 of providing the
gameplay data to data rendering software may be carried out by any
party: the user or game player, a gameplay host, a game
manufacturer or developer, a media outlet or a third-party video
game competition organizer. The gameplay data may be readily
available to the public on a website or available only to the
gameplay host server or user.
[0077] The data rendering software 20 in conjunction with the game
software 22 may also be used to place advertisements or avatars or
sounds, not present in the original gameplay sequence, in various
locations throughout the shot or shots to be taken. So, while a
player never experiences an advertisement for a particular product
in the in-game world, the advertisement may be added after the fact
to the re-created game world for use in the recorded video, for
example, if the video is to be played on a television network or
displayed on-line.
[0078] The type of advertisements or avatars that may be placed are
virtually limitless. For example, an object already present in the
game world may be replaced, upon a subsequent rendering (at the
user's request or automatically) by a different object, such as a
billboard or recognizable product. This could act similarly to a
"product placement" in a television or movie sequence.
[0079] Similarly, an object need not be present or "visible" within
the game world in order to be placed during a subsequent rendering
of a gameplay sequence. For example, in a massively multiplayer
online game, a particular boss may be known to be on the
"progression" list as a player or group of players progresses
through the game. It is highly likely, therefore, that a video may
be rendered of an encounter against that boss at some point. The
game developer may therefore, create one or more "invisible"
objects within the area immediately surrounding that boss.
[0080] Once a user recreates a video based upon the gameplay data
of an encounter with this boss, the "invisible" objects may be
replaced (automatically or manually) with any number of other
objects or avatars. They may be replaced with advertisements of
various types, or even with in-game avatars, for example, of
commentators provided by a third party renderer (or the user
themselves). This allows for "play-by-play" like functionality,
"present" in the re-creation of the gameplay sequence, without it
being intrusively present during the actual gameplay sequence as a
user is playing.
[0081] It is to be further understood that any object or
"invisible" object may also contain position-based audio as well.
So, as a user moves a camera closer (or as a user approaches) a
visible or invisible object, a sound or series of sounds may become
louder, as if the user is actually closer to the object, in any
subsequently rendered video based upon the gameplay data. For
example, as a soccer ball moves closer to or further from the goal
or one player moves closer to or further from another, during a
gameplay sequence, a sound associated with either may grow louder
or softer, building suspense or anticipation of a gameplay event
within the gameplay sequence.
[0082] Finally, in-game avatars of commentators may also be an
example of an "invisible" object that, when rendered, actually
appears in the subsequent video. A commentator may follow a
particularly good player, for example, as an invisible object. This
commentary may take place as the game is ongoing or after the
gameplay sequence has ended. However, in order not to distract the
player, the commentator may remain invisible, while data pertaining
to that commentator, the avatar's location and positional audio,
is saved. In subsequent rendering, an avatar of the commentator and
associated positional audio may be rendered, if desired, into the
re-created gameplay sequence as if the commentator was there
throughout the gameplay sequence.
[0083] During the re-creation and preproduction process, the user
may also insert "instant replay"-like functionality into the video
being captured. For example, the video may replay a portion of the
re-created sequence so as to better show a particular event or
action. This may be scripted into the
re-creation during the preproduction step 82. A portion of the
re-creation may also be slowed down so as to appear to be in "slow
motion" or sped up so as to appear in "double time" as it is
filmed. This may be used to create dramatic effects or to speed
through lengthy portions of gameplay in the resulting video.
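The slow-motion and "double time" effects amount to resampling the re-created sequence along a remapped timeline. A sketch under the assumption of nearest-frame sampling; an actual renderer might instead interpolate between logged states for smoother results.

```python
def remap_time(frames, rate):
    """Resample a re-created sequence so it plays back at `rate` x real speed:
    rate=0.5 yields roughly twice as many output frames (slow motion),
    rate=2.0 yields half as many (double time). Each output frame is taken
    from the nearest logged source frame."""
    n_out = int((len(frames) - 1) / rate) + 1
    return [frames[min(int(j * rate + 0.5), len(frames) - 1)] for j in range(n_out)]

slow = remap_time(list(range(5)), 0.5)  # nine output frames from five source frames
fast = remap_time(list(range(5)), 2.0)  # every other source frame
```

Scripting a replay during preproduction would then be a matter of splicing a remapped copy of one portion of the frame list back into the main sequence.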
[0084] Also during the re-creation and preproduction, a user or
director (as defined above) is given complete control over the
level of detail or the resolution of the gameplay. In normal play,
individual players often turn down the resolution settings and
"high quality" graphics features in order to aid in better
performance. Because at the stage of video rendering of gameplay
sequences, there is no concern for the quality of gameplay (the
gameplay sequence has already occurred), the director or user may
turn the graphics setting to the maximum level. These settings
often would reduce the ability of a player to play a game
effectively, but for purposes of creating a re-creation of the
event after-the-fact as a video file, it results in a
higher-quality, better-looking video presentation.
[0085] The user may also add voice-over effects in this
pre-production stage. There may be "announcers" added in or, in the
case of a game manufacturer, voice-over regarding the plot of the
game or the status of the event. These voice-overs may be
synchronized with the re-creation of the gameplay sequence or may
be added to the video after it is created.
[0086] Similarly, at this stage, a director or user may add
"overlays" to the video, no matter from what angle within the
gameplay sequence it is taken. These overlays, for example, may
include the score of a sports game being played or, in the case of
a video game competition, a leader board. Similarly, these overlays
may contain advertisements or the "station identification" labels
common during televised sports games.
[0087] These overlays may recreate portions of game data (such as
player scores, timers or other relevant information) or may be
manually set and edited by users. The overlays may further contain
graphics or "picture in picture" functionality showing other
locations or angles in the gameplay sequence or the actions of
other players simultaneously. Similarly, transitions between
multiple perspectives may be automatically inserted by means of the
data rendering software 20.
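Such an overlay may be composed from logged frame data plus manually set labels. The field names below are hypothetical, chosen only to mirror the examples of score, clock and station identification given above:

```python
def build_overlay(frame, extra=None):
    """Compose an on-screen overlay from logged frame data (clock, score)
    plus manually set labels such as a station identification."""
    overlay = {"clock": f"{frame['t']:.2f}",           # game clock from the frame's time-stamp
               "score": frame.get("score", "0 - 0")}   # score recreated from logged game data
    if extra:
        overlay.update(extra)                          # manually set or edited labels
    return overlay

hud = build_overlay({"t": 0.5, "score": "1 - 0"}, extra={"station": "GAME-TV"})
```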
[0088] Once the preproduction process is complete, planning out
each camera location, any slow motions, any instant replays (or
rewinds to replay), any advertisements or other indicia added to
the "background" of the game, all microphone locations and all
camera and microphone pans or movements; the next step 84 is to
create the video. In this step, the re-creation is set to "run"
such that it may be filmed from the various camera locations. The
gameplay sequence runs exactly as it did originally, and video
files are created for each of the camera locations, simultaneously
or sequentially. These video files are
stored and saved in a format suitable for subsequent editing,
transmission or storage.
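Filming from every camera location may be sketched as running the re-created frame list once per virtual camera, each camera mapping a frame to its own view. The camera names and view functions here are assumptions standing in for real rendering:

```python
def render_all(frames, cameras):
    """'Film' the re-created sequence once per virtual camera, producing one
    time-stamped frame list (a stand-in for a video file) per camera location."""
    return {
        name: [{"t": f["t"], "image": view(f)} for f in frames]
        for name, view in cameras.items()
    }

# A few re-created frames and two hypothetical camera views.
frames = [{"t": t / 30, "ball_x": 60.0 - 55.0 * t / 30} for t in range(4)]
cameras = {
    "fixed_68": lambda f: f"wide shot, ball at x={f['ball_x']:.1f}",
    "behind_player_74": lambda f: "over-the-shoulder view",
}
videos = render_all(frames, cameras)
```

Each resulting per-camera frame list carries the same time-stamps, which is what makes the later splicing and synchronization steps possible.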
[0089] A next step 86 is to perform video postproduction. At this
point the user or director selects pieces of video files to splice
together (typically with software) in order to create a single
video (or a few videos) of the gameplay sequence. In some embodiments, this
process may be automated. In the preferred embodiment, the video
rendering software 24 is capable of some small-scale editing either
automatically or subject to user input, such that a video may be
created. For more complex video editing a stand-alone or separate
application may be used for postproduction.
[0090] A final step 88 is to distribute the video. In this step a
user may place the video on a video file sharing website, on an
"own" website, on a game-related website or make it available for
purchase or review by some other means. Similarly, if the user of
the software is the game manufacturer or software company that made
the game, it may make the video available on its website as a
marketing tool. Finally, if the user of the software is a
television network, a professional gamer group or an online
game-related site, the video may be placed in a "highlights reel"
or broadcast by any number of means to end users.
[0091] The gameplay data generated may be used to quickly provide
commentary and/or add "preproduction" elements, as described above,
prior to the broadcast of the gameplay sequence. This "almost
real-time" broadcast, delaying the gameplay sequence by only a few
seconds, allows a broadcaster the opportunity to add commentary, to
insert advertisements, to select which "cameras" to use for
broadcast and to perform a multiplicity of other "pre-production"
tasks, as previously described.
[0092] A user viewing this gameplay sequence would not, strictly
speaking, be watching the game in "real time," but would receive
the gameplay sequence broadcast sufficiently soon thereafter, and
with excellent added detail, so as to appear to be broadcast in
virtually real time. This application of the software provides the
best of both worlds, providing pre-production to a
previously-impossible pre-production environment of interactive
games, while providing the content as it is happening to expectant
viewers.
[0093] The game state captured by the data capture software 16 may
also be used in other unique ways. Because the game state data is
captured at a rate of 30 frames per second, each of these
"frames" may be used as a save file. A user may then "pick up" the
game from that point onward as a save file. The gameplay data may
be used as a series of rapidly-created game save files.
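Treating each captured "frame" as a save file may be sketched as picking the logged snapshot nearest a requested time and handing its state back to the game engine. The names and structure here are illustrative assumptions:

```python
def save_state_at(frames, t):
    """Treat the logged frame nearest time t as a save file: a complete
    game state from which play may resume."""
    return min(frames, key=lambda f: abs(f["t"] - t))

# Hypothetical one-second log at thirty states per second.
frames = [{"t": i / 30, "entities": {"ball": {"x": 60.0 - i}}} for i in range(31)]
resume = save_state_at(frames, 0.5)  # "pick up" the game from mid-sequence
```

A game client resuming from `resume["entities"]` would place every avatar and object exactly where it stood at that instant, with human players or artificial intelligence substituted as described below.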
[0094] For example, in single player games a user may resume
playing, taking control of any avatar or player who was previously
present during the original gameplay sequence. In the case of
multiplayer games, a suitable replacement for each of the
player-controlled avatars may be found or, alternatively, a
computer using artificial intelligence may take the place of any
non-player characters in the game's saved state.
[0095] In large multiplayer games, a single user may play from a
particular point in the gameplay data forward, where each of the
other players in the multiplayer sequence simply mimic the actions
in the previously-recorded gameplay state or, alternatively, are
replaced with computer artificial intelligence.
[0096] Additionally, this gameplay data may be "created" and
subsequently distributed for later play. For example, a television
company televising a particular football game may create gameplay
data with a series of save states representing the game's
exact status at any point in an on-going televised game. Users at
home may then download that gameplay data and play the game they
just watched or are watching from any point in the game as it
progresses, taking the place of any one of the players on the
field. The other players may be replaced with player characters via
a network or by computer-controlled artificial intelligence.
[0097] Referring next to FIG. 5, a method of distribution of the
software embodying the method described above is disclosed. A first
element is the software manufacturer 90. This is the company or
individual(s) which produces the software used to enable
datacasting and data rendering. A first software package is the
datacasting and data capture software 92. As described above, this
software 92 is used to gather gameplay sequence data for later
rendering. The second software package is a data rendering software
94, which is used to render a re-creation of the gameplay sequence
based upon the gameplay data captured by the datacasting and data
capture software 92.
[0098] In this embodiment, the datacasting and data capture
software 92 is provided to a game manufacturer 96, while the data
rendering software 94 is provided to a game consumer 100. The data
rendering software 94 may be provided through a free download,
through a license, or for free along with the game client, enabling
the game consumer 100 to render gameplay sequences that are created
using the datacasting and data capture software 92 as he or she
plays the game.
[0099] The game manufacturer creates a game client incorporating
the datacasting API 98. The consumer simply plays the game client
102 which includes the datacasting API 98. In the course of playing
the game, the consumer creates gameplay data 104. This data is
automatically created, logged and stored by the game client and the
datacasting and data capture software 92. Using this gameplay data,
the data rendering software 94, previously provided to the
consumer, may be used so that the consumer may create game video
106. The consumer may then share that video 108 with whomever he or
she desires.
[0100] This distribution model provides the data rendering software
94 to the consumer for his or her own enjoyment, filming and
distributing video of his or her gameplay sequences to whomever he
or she chooses. The datacasting and data capture software 92, however,
is provided to the game manufacturer, for integration into games
such that the game manufacturer may provide the benefit of this
software's capability to the game consumer, through integrated
datacasting and data capture software 92.
[0101] Referring now to FIG. 6, an alternative distribution model
is disclosed. In this distribution model a software manufacturer
110 remains the primary creator of datacasting and data capture
software 112 and data rendering software 114. However, in this
embodiment, both pieces of software are provided (by any number of
means) to a game manufacturer 116. It is to be understood that
"game manufacturer" herein could refer to anyone that provides
online game hosting services for a game, to a licensee of a game
manufacturer, to the parent company of a game manufacturer, to a
player group, to a professional video game league or to one of many
other groups responsible for video game competition.
[0102] A game client integrating datacasting and data capture
software 118 is created by the game manufacturer 116, as is a game
server (or series of game servers) integrating the datacasting and
data capture software 120. The consumer plays the game client 122
from a game created by the game manufacturer. As the consumer
plays, the game server integrating datacasting and data capture
software 120 and the game client integrating datacasting and data
capture software 118 work together to create consumer-created data
124 of the gameplay sequence.
[0103] In this embodiment, the game manufacturer may create video
126 using the data, typically stored on one or more of the game
servers 120. The video may be put through some post production 128
and may subsequently be distributed 130 by the game manufacturer,
for example, to a media outlet for additional press of the game. As
discussed previously, advertisements may be inserted so that the
game manufacturer receives some additional benefit from its
broadcast in the form of advertising revenue.
[0104] The distinction between these two business models is the
involvement or lack of involvement of the consumer in the video
creation process. In the first embodiment, the video is created by
the consumer, with tools granted by a game manufacturer. In the
second, the video is created by a party other than the video game
consumer. The first embodiment is purely for enjoyment and sharing
of gameplay experiences with others by the consumer. The second
embodiment may be for those purposes, but may also be for
advertising or marketing purposes or simply to better display
gameplay in action or an exciting portion of the game that a game
manufacturer (or media outlet) wishes to call to the attention of
game players or the general public.
[0105] Accordingly, a method of creating video of virtual worlds
and method of distributing and using same has been described. It
provides numerous benefits over the video recording of interactive
games of the prior art. It is to be understood that the foregoing
description has been made with respect to specific embodiments
thereof for illustrative purposes only. The overall spirit and
scope of the present invention is limited only by the following
claims.
* * * * *