U.S. patent application number 11/286124, for a method and program for scenario provision in a simulation system, was published by the patent office on 2006-05-18. The application is assigned to VirTra Systems, Inc. The invention is credited to Robert D. Ferris and Robert L. Hill.
Application Number: 20060105299 / 11/286124
Family ID: 36386776
Publication Date: 2006-05-18

United States Patent Application 20060105299
Kind Code: A1
Ferris; Robert D.; et al.
May 18, 2006

Method and Program for Scenario Provision in a Simulation System
Abstract
A simulation system (20) facilitates training for trainees (26)
subject to multi-directional threats. A method for providing a
scenario (211) for use in the simulation system (20) utilizes a
scenario provision process (202) executable on a computing system
(26). The process (202) calls for choosing a background image (234)
for the scenario (211), and selecting a video clip(s) (386) of one
or more actors (266) from a database (203) of video clips (386).
The video clip(s) (386) are filmed using a green or blue screen
technique, and include a mask portion (394) of the actor (266) and
a transparent portion (396). The video clip(s) (386) are combined
with the background image (234) to create the scenario (211), with
the mask portion (394) forming a foreground image over the
background image (234). The scenario (211) is displayed on a
display of the simulation system (20).
Inventors: Ferris; Robert D. (Mesa, AZ); Hill; Robert L. (Phoenix, AZ)
Correspondence Address: MESCHKOW & GRESHAM, P.L.C., 5727 North Seventh Street, Suite 409, Phoenix, AZ 85014, US
Assignee: VirTra Systems, Inc.
Family ID: 36386776
Appl. No.: 11/286124
Filed: November 22, 2005
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
10800942 | Mar 15, 2004 |
11286124 | Nov 22, 2005 |
60633087 | Dec 3, 2004 |
Current U.S. Class: 434/11
Current CPC Class: G09B 19/00 20130101
Class at Publication: 434/011
International Class: F41A 33/00 20060101 F41A033/00
Claims
1. A method for providing a scenario for use in a scenario playback
system, said method utilizing a computing system executing scenario
creation code, and said method comprising: choosing, at said
computing system, a background image for said scenario; selecting a
video clip from a database of video clips stored in said computing
system, said video clip having a mask portion and a transparent
portion; combining said video clip with said background image to
create said scenario, said mask portion forming a foreground image
over said background image; and displaying said scenario on a
display of said scenario playback system for interaction with a
user.
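Though the claim language is procedural, the combining operation it recites amounts to per-pixel compositing: wherever the clip's matte marks a mask-portion (actor) pixel, that pixel replaces the background, while the transparent portion lets the background show through. A minimal sketch in Python, using nested lists of RGB tuples for images; `composite_frame` is an illustrative name, not a function from the application:

```python
def composite_frame(background, clip_frame, mask):
    """Combine one video-clip frame with a background image.

    background, clip_frame: images as lists of rows of RGB tuples.
    mask: matching rows of booleans -- True marks the mask portion
    (the actor), False the transparent portion.
    Returns a new image; the inputs are left unchanged.
    """
    return [
        [clip_px if is_actor else bg_px
         for bg_px, clip_px, is_actor in zip(bg_row, clip_row, mask_row)]
        for bg_row, clip_row, mask_row in zip(background, clip_frame, mask)
    ]
```

Applying this to every frame of the clip, at the clip's placed location, yields the foreground image over the background that the claim describes.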
2. A method as claimed in claim 1 wherein said background image is
chosen from a plurality of background images stored in said
computing system, each of which portrays an environment.
3. A method as claimed in claim 1 wherein said background image is
a panoramic format image, said display includes a first screen and
a second screen adjacent said first screen, and said displaying
operation comprises: presenting a first view of said background
image on said first screen; and presenting a second view of said
background image on said second screen, said first and second views
being adjacent portions of said background image.
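The first and second views recited here are simply adjacent slices of the panoramic background, one per screen. A rough sketch under the assumption of equal-width screens, again treating an image as a list of pixel rows; `split_panorama` is a hypothetical helper:

```python
def split_panorama(panorama, num_screens):
    """Split a panoramic image into equal-width adjacent views.

    panorama: image as a list of pixel rows.
    Returns one sub-image per screen; concatenating the views
    left to right reproduces the panorama.
    """
    width = len(panorama[0])
    view_w = width // num_screens
    return [
        [row[i * view_w:(i + 1) * view_w] for row in panorama]
        for i in range(num_screens)
    ]
```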
4. A method as claimed in claim 3 further comprising manipulating
said background image relative to said first and second screens to
form said first and second views.
5. A method as claimed in claim 1 further comprising specifying a
foreground layer from a portion of said background image, said
foreground layer overlaying said mask portion of said video clip in said scenario.
6. A method as claimed in claim 1 further comprising recording,
prior to said selecting operation, said video clips in said
database, said video clips portraying an actor performing animation
sequences.
7. A method as claimed in claim 6 wherein said actor is a first
actor, and said method further comprises: recording, prior to said
selecting operation, additional video clips into said database,
said additional video clips portraying a second actor performing said
animation sequences; and enabling a selection of said video clips
for said first actor and said additional video clips for said
second actor for combination with said background image.
8. A method as claimed in claim 6 further comprising distinguishing
each of said video clips in said database by an identifier
characterizing one of said animation sequences.
9. A method as claimed in claim 6 wherein said recording operation
comprises: filming said actor performing said animation sequences
against a backdrop having a single color to obtain said video
clips; and creating a matte defining said transparent portion and
said mask portion such that an image of said actor forms said mask
portion.
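The matte in this claim can be derived by chroma keying: pixels close to the single backdrop color become the transparent portion, and everything else becomes the mask portion. A deliberately simplified sketch (production keyers work in a chroma-friendly color space and soften matte edges); `chroma_key_matte` and its tolerance parameter are illustrative assumptions:

```python
def chroma_key_matte(frame, key_color, tol=40):
    """Build a boolean matte from a frame shot against a single-color backdrop.

    frame: image as rows of RGB tuples; key_color: backdrop RGB.
    True marks the mask portion (the actor); False marks pixels within
    tol of the backdrop color, i.e. the transparent portion.
    """
    def is_key(px):
        return all(abs(c - k) <= tol for c, k in zip(px, key_color))
    return [[not is_key(px) for px in row] for row in frame]
```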
10. A method as claimed in claim 6 wherein for one of said
animation sequences, said method further comprises: defining zones
on said mask portion; storing, in said computing system, zone
information corresponding to said zones in association with said
one of said animation sequences; detecting, in response to said displaying operation, an event within one of said zones; and
initiating, within said scenario, a response to said event.
11. A method as claimed in claim 10 wherein: said defining
operation defines hit zones; said detecting operation detects
discharge of a weapon into one of said hit zones; and said
initiating operation includes branching to one of said animation
sequences portraying a reaction of said actor to said
discharge.
12. A method as claimed in claim 10 wherein: said defining
operation defines a first hit zone and a second hit zone; said
detecting operation detects discharge of a weapon into one of said
first and second hit zones; and said initiating operation includes:
when said discharge is detected in said first hit zone, branching
to a first one of said animation sequences portraying a first
reaction of said actor to said discharge; and when said discharge
is detected in said second hit zone, branching to a second one of
said animation sequences portraying a second reaction of said actor
to said discharge.
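The hit-zone handling of claims 10 through 12 reduces to mapping a detected discharge location to the reaction clip of whichever zone it lands in. A sketch assuming zones are axis-aligned rectangles; the data layout and names here are invented for illustration:

```python
def reaction_for_hit(hit_zones, x, y):
    """Map a detected weapon discharge at (x, y) to a reaction clip.

    hit_zones: list of (x0, y0, x1, y1, reaction_clip_id) rectangles
    defined on the mask portion. Returns the reaction clip id for the
    first zone containing the point, or None for a miss.
    """
    for x0, y0, x1, y1, reaction in hit_zones:
        if x0 <= x < x1 and y0 <= y < y1:
            return reaction
    return None
```

The returned clip id would then drive the branch to the animation sequence portraying the actor's reaction.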
13. A method as claimed in claim 1 wherein said video clips portray
an actor performing animation sequences, said video clip is a first
video clip of a first one of said animation sequences, and said
method further comprises: selecting a second video clip from said
database, said second video clip being a second one of said
animation sequences of said actor; and linking said first and
second video clips in a logic flow of said first and second video
clips to form a behavior for said actor, and said combining
operation combines said first and second video clips in said logic
flow with said background image to create said scenario of said
actor exhibiting said behavior.
14. A method as claimed in claim 13 further comprising: selecting a
third video clip from said database, said third video clip being a
third one of said animation sequences of said actor; selectively
linking said third video clip with said first and second video
clips in said logic flow; assigning an event to said first video
clip; initiating said third video clip when said event occurs
during said displaying operation; and initiating said second video
clip when said event fails to occur.
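The logic flow of claims 13 and 14 behaves like a small branching state machine: each clip has one successor to play when its assigned event occurs and another when it does not. A compact sketch with hypothetical clip identifiers:

```python
def playback_order(links, start, clips_with_event):
    """Walk a behavior logic flow and return the clips played in order.

    links: maps clip id -> (next_clip_on_event, next_clip_otherwise),
    where None ends the flow.
    clips_with_event: set of clip ids during which the assigned event
    occurred (e.g. a trigger activated by the user).
    """
    played, clip = [], start
    while clip is not None:
        played.append(clip)
        on_event, otherwise = links.get(clip, (None, None))
        clip = on_event if clip in clips_with_event else otherwise
    return played
```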
15. A method as claimed in claim 1 wherein said database of video clips includes video clips of a plurality of actors performing a plurality of
animation sequences, said database further includes a plurality of
behaviors formed from logic flows of ones of said animation
sequences, and: said selecting operation comprises: choosing a
first actor from said plurality of actors; assigning a first
behavior to said first actor from said plurality of behaviors;
choosing a second actor from said plurality of actors; and
assigning a second behavior to said second actor from said
plurality of behaviors; and said combining operation includes
combining said video clips of said first actor exhibiting said
first behavior and said video clips of said second actor exhibiting
said second behavior with said background image to create said
scenario that includes said first and second actors.
16. A method as claimed in claim 15 further comprising: assigning
an event to one of said video clips of said first behavior; and
effecting said second behavior for said second actor in response to
initiation of said event within said first behavior for said first
actor.
17. A method as claimed in claim 1 wherein said combining operation
comprises employing a drag-and-drop function to determine a
location of said mask portion of said video clip against said
background image.
18. A method as claimed in claim 1 wherein said combining operation
comprises resizing said mask portion of said video clip relative to
said background image to characterize a distance of said mask
portion from said user.
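The resizing in this claim is a perspective cue: the mask portion is scaled inversely with the simulated distance of the actor from the user. A one-function sketch; the reference-distance model is an assumption for illustration, not a formula from the application:

```python
def scaled_size(base_width, base_height, reference_distance, distance):
    """Scale the mask portion inversely with simulated distance.

    base_width, base_height: clip dimensions at reference_distance.
    An actor placed twice as far away renders at half size.
    """
    scale = reference_distance / distance
    return round(base_width * scale), round(base_height * scale)
```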
19. A method as claimed in claim 1 wherein said combining operation
comprises imposing a time delay on an appearance of said video clip
in combination with said background image.
20. A method as claimed in claim 1 wherein said method further
comprises: assigning an event to said video clip; linking a trigger
with said video clip, said trigger being associated with said
event; and said displaying operation includes displaying a second
video clip when said trigger is activated indicating an occurrence
of said event.
21. A method as claimed in claim 1 wherein said video clip includes
an audio signal, and said displaying operation comprises
broadcasting said audio signal.
22. A computer-readable storage medium containing executable code
for instructing a processor to create a scenario for interactive
use in a scenario playback system, said executable code instructing
said processor to perform operations comprising: receiving a first
input indicating choice of a background image for said scenario,
said background image being one of a plurality of background images
stored in a memory associated with said processor, each of said
background images portraying an environment; receiving a second
input indicating selection of an actor from a plurality of actors
stored in said memory; receiving a third input indicating
assignment of a behavior from a plurality of behaviors stored in
said memory; accessing video clips of said actor from a database of
said video clips stored in said memory, said video clips portraying
said actor performing animation sequences in accordance with said
behavior, each of said video clips having a mask portion and a
transparent portion; combining said video clips with said
background image to create said scenario, said mask portion forming
a foreground image over said background image; and saving said
scenario for presentation on a display of said scenario playback
system for interaction with a user.
23. A computer-readable storage medium as claimed in claim 22
wherein said actor is a first actor, said behavior is a first
behavior, and said executable code instructs said processor to
perform further operations comprising: receiving a fourth input
indicating selection of a second actor from said plurality of
actors; receiving a fifth input indicating assignment of a second
behavior from said plurality of behaviors; accessing additional
video clips of said second actor from said database, said
additional video clips portraying said second actor performing said
animation sequences in accordance with said second behavior, each
of said additional video clips having a mask portion and a
transparent portion; and said combining operation includes
combining said video clips of said first actor and said additional
video clips of said second actor with said background image to
create said scenario that includes said first and second
actors.
24. A computer-readable storage medium as claimed in claim 23
wherein said executable code instructs said processor to perform
further operations comprising: assigning an event to one of said
video clips of said first behavior; and effecting said second
behavior for said second actor in response to initiation of said
event within said first behavior for said first actor.
25. A method for providing a scenario for use in a scenario
playback system, said method utilizing a computing system executing
scenario creation code, and said method comprising: choosing, at
said computing system, a background image for said scenario;
selecting a video clip from a database of video clips stored in
said computing system, said video clip having a mask portion and a
transparent portion; combining said video clip with said background
image to create said scenario, said mask portion forming a
foreground image over said background image, said combining
operation including: employing a drag-and-drop function to
determine a location of said mask portion of said video clip
against said background image; and specifying a foreground layer
from a portion of said background image, said foreground layer
overlaying said mask portion of said video clip at said location;
and displaying said scenario on a display of said scenario playback
system for interaction with a user.
26. A method as claimed in claim 25 wherein said background image
is a panoramic format image, said display includes a first screen
and a second screen adjacent said first screen, and said displaying operation comprises: presenting a first view of said background
image on said first screen; and presenting a second view of said
background image on said second screen, said first and second views
being adjacent portions of said background image.
27. A method as claimed in claim 25 wherein said combining
operation comprises resizing said mask portion of said video clip
relative to said background image to characterize a distance of
said mask portion from said user.
28. A method as claimed in claim 25 wherein said combining
operation comprises imposing a time delay on an appearance of said
video clip in combination with said background image.
29. A method as claimed in claim 25 wherein said method further
comprises: assigning an event to said video clip; linking a trigger
with said video clip, said trigger being associated with said
event; and initiating said event when said trigger is activated through interaction by said user during said displaying operation.
30. A method as claimed in claim 25 wherein said video clip
includes an audio signal, and said displaying operation comprises
broadcasting said audio signal.
31. A method for providing a scenario for use in a scenario
playback system, said method utilizing a computing system executing
scenario creation code, and said method comprising: filming an
actor performing animation sequences against a backdrop having a
single color to obtain video clips; creating a matte defining a
transparent portion and a mask portion of said video clips such
that an image of said actor forms said mask portion;
differentiating said video clips by identifiers characterizing said
animation sequences; storing, at said computing system, said video
clips in connection with said identifiers in a database; selecting one of said video clips from said database for combination with a background image to create said scenario; and
displaying said scenario on a display of a scenario playback
system, said mask portion forming a foreground image over said
background image.
32. A method as claimed in claim 31 wherein said database of video clips includes video clips of a plurality of actors performing a plurality of
animation sequences, said database further includes a plurality of
behaviors formed from logic flows of ones of said animation
sequences, and: said selecting operation comprises selecting said
actor from said plurality of actors and assigning a behavior from
said plurality of behaviors; and said combining operation comprises
combining said video clips of said actor exhibiting said behavior
with a background image to create said scenario.
Description
RELATED INVENTION
[0001] The present application is a continuation-in-part (CIP) of "Multiple Screen Simulation System and Method for Situational Response Training," U.S. patent application Ser. No. 10/800,942, filed 15 Mar. 2004, which is incorporated by reference herein.
[0002] In addition, the present application claims priority under 35 U.S.C. § 119(e) to "Video Hybrid Computer-Generated Imaging Software," U.S. Provisional Patent Application Ser. No. 60/633,087, filed 3 Dec. 2004, which is incorporated by reference herein.
TECHNICAL FIELD OF THE INVENTION
[0003] The present invention relates to the field of simulation
systems for weapons training. More specifically, the present
invention relates to scenario authoring and provision in a
simulation system.
BACKGROUND OF THE INVENTION
[0004] Due to current world events, there is an urgent need for
highly effective law enforcement, security, and military training.
Training involves practicing marksmanship skills with lethal and/or
non-lethal weapons. Additionally, training involves the development
of decision-making skills in situations that are stressful and
potentially dangerous. Indeed, perhaps the greatest challenges
faced by a trainee are when to use force and how much force to use.
If an officer is unprepared to make rapid decisions under the
various threats he or she faces, injury to the officer or citizens
may result.
[0005] Although scenario training is essential for preparing a
trainee to react safely with appropriate force and judgment, such
training under various real-life situations is a difficult and
costly endeavor. Live-fire weapons training may be utilized in
firing ranges, but it is inherently dangerous, tightly regulated for safety, costly in terms of training ammunition, and firing ranges may not be readily available in all regions. Moreover, live-fire weapons cannot be safely utilized in many real-life training situations.
[0006] One technique that has been in use for many years is the
utilization of simulation systems to conduct training exercises.
Simulation provides a cost effective means of teaching initial
weapon handling skills and some decision-making skills, and
provides training in real-life situations in which live-fire may be
undesirable due to safety or other restrictions.
[0007] A conventional simulation system includes a single screen
projection system to simulate reality. A trainee views the single
screen with video projected thereon, and must decide whether to
shoot or not to shoot at the subject. The weapon utilized in a
simulation system typically employs a laser beam or light energy to
simulate firearm operation and to indicate simulated projectile
impact locations on a target.
[0008] Single screen simulators utilize technology which restricts
realism in tactical training situations and restricts the ability
for thorough performance measurements. For example, in reality,
lethal threats can come from any direction or from multiple
directions. Unfortunately, a conventional single screen simulator
does not expand or stimulate a trainee's awareness to these
multi-directional threats because the trainee is compelled to focus
on a situation directly in front of the trainee, as presented on
the single screen. Accordingly, many instructors feel that the
industry is encouraging "tunnel vision" by having the trainees
focus on an 8-10 foot screen directly in front of them.
[0009] One simulation system proposes the use of one screen
directly in front of the trainee and a second screen directly
behind the trainee. This dual screen simulation system simulates
the "feel" of multi-directional threats. However, the trainee is
not provided with peripheral stimulation in such a dual screen
simulation system. Peripheral vision is used for detecting objects
and movement outside of the direct line of vision. Accordingly,
peripheral vision is highly useful for avoiding threats or
situations from the side. The front screen/rear screen simulation
system also suffers from the "tunnel vision" problem mentioned
above. That is, a trainee does not employ his or her peripheral
vision when assessing and reacting to a simulated real-life
situation.
[0010] In addition, prior art simulation systems utilize projection
systems for presenting prerecorded video, and detection cameras for
tracking shots fired, that operate at standard video rates and
resolution based on National Television Standards Committee (NTSC)
for analog television standard. Training scenarios based on NTSC
analog television standards suffer from poor realism due to low
resolution images that are expanded to fit the large screen of the
simulator system. In addition, detection cameras based on NTSC
standards suffer from poor tracking accuracy, again due to low
resolution.
[0011] While effective training can increase the potential for
officer safety and can teach better decision-making skills for
management of use of force against others, law enforcement,
security, and military training managers must devote more and more
of their limited resources to equipment purchases and costly
training programs. Consequently, the need to provide cost
effective, yet highly realistic, simulation systems for situational
response training in austere budget times has presented additional
challenges to the simulation system community.
[0012] Accordingly, what is needed is a simulation system that
provides realistic, multi-directional threats for situational
response training. In addition, what is needed is a simulation
system that includes the ability for high-accuracy trainee
performance measurements. Moreover, the simulation system should
support a number of configurations and should be cost
effective.
SUMMARY OF THE INVENTION
[0013] It is an advantage of the present invention that a
simulation system is provided for situational response
training.
[0014] It is another advantage of the present invention that a
simulation system is provided in which a trainee can face multiple
risks from different directions, thus encouraging teamwork and
reinforcing the use of appropriate tactics.
[0015] Another advantage of the present invention is that a
simulation system is provided having realistic scenarios in which a
trainee may practice observation techniques, practice time-critical
judgment and target identification, and improve decision-making
skills.
[0016] Yet another advantage of the present invention is that a
cost-effective simulation system is provided that can be configured
to enable situational response training, marksmanship training,
and/or can be utilized for weapons qualification testing.
[0017] The above and other advantages of the present invention are
carried out in one form by a simulation system. The simulation
system includes a first screen for displaying a first view of a
scenario, and a second screen for displaying a second view of the
scenario. The first and second views of the scenario occur at a
same instant, and the scenario is a visually presented situation.
The simulation system further includes a device for selective
actuation toward a target within the scenario displayed on the
first and second screens, a detection subsystem for detecting an
actuation of the device toward the first and second screens, and a
processor in communication with the detection subsystem for
receiving information associated with the actuation of the device
and processing the received information to evaluate user response
to the situation.
[0018] The above and other advantages of the present invention are
carried out in another form by a method of training a participant
utilizing a simulation system, the participant being enabled to
selectively actuate a device toward a target. The method calls for
displaying a first view of a scenario on a first screen of the
simulation system and displaying a second view of the scenario on a
second screen of the simulation system. The first and second views
of the scenario occur at a same instant, the scenario is
prerecorded video of a situation, and the first and second views
are adjacent portions of the prerecorded video. The method further
calls for detecting an actuation of the device toward a target
within the scenario displayed on the first and second screens, and
evaluating user response to the situation in response to the
actuation of the device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] A more complete understanding of the present invention may
be derived by referring to the detailed description and claims when
considered in connection with the Figures, wherein like reference
numbers refer to similar items throughout the Figures, and:
[0020] FIG. 1 shows a block diagram of a full surround simulation
system in accordance with a preferred embodiment of the present
invention;
[0021] FIG. 2 shows a block diagram of components that form the
simulation system of FIG. 1;
[0022] FIG. 3 shows a side view of a rear projection system of the
simulation system;
[0023] FIG. 4 shows a block diagram of a portion of the simulation
system of FIG. 1 arranged in a firing range configuration;
[0024] FIG. 5 shows a table of a highly simplified exemplary
scenario pointer database;
[0025] FIG. 6 shows a flowchart of an exemplary video playback
process for a scenario that includes video branching to
subscenarios;
[0026] FIG. 7 shows an illustrative representation of adjacent
views of a prerecorded scenario;
[0027] FIG. 8 shows a block diagram of a half surround simulation
system in accordance with another preferred embodiment of the
present invention;
[0028] FIG. 9 shows a block diagram of a three hundred degree
surround simulation system in accordance with yet another preferred
embodiment of the present invention;
[0029] FIG. 10 shows a flowchart of a training process of the
present invention;
[0030] FIG. 11 shows a diagram of an exemplary calibration
pattern;
[0031] FIG. 12 shows a diagram of a detector of the simulation
system zoomed in to a small viewing area for qualification
testing;
[0032] FIG. 13 shows a block diagram of a simulation system in
accordance with an alternative embodiment of the present
invention;
[0033] FIG. 14 shows a simplified block diagram of a computing
system for executing a scenario provision process to generate a
scenario for playback in a simulation system;
[0034] FIG. 15 shows a flow chart of a scenario provision
process;
[0035] FIG. 16 shows a screen shot image of a main window presented
in response to execution of the scenario provision process;
[0036] FIG. 17 shows a screen shot image of a library window from
the main window exposing a list of background images for the
scenario;
[0037] FIG. 18 shows a screen shot image of the main window
following selection of one of the background images of FIG. 17;
[0038] FIG. 19 shows a screen shot image of the library window from
the main window exposing a list of actors for the scenario;
[0039] FIG. 20 shows a screen shot image of the library window from
the main window exposing a list of behaviors for assignment to an
actor from the list of actors;
[0040] FIG. 21 shows a screen shot image of an exemplary drop-down
menu of behaviors supported by a selected one of the actors from
the list of actors;
[0041] FIG. 22 shows a screen shot image of the main window
following selection of actors and behaviors for the scenario;
[0042] FIG. 23 shows a screen shot image of a scenario logic window
from the main window for configuring the scenario logic of the
scenario;
[0043] FIG. 24 shows a table of a key of exemplary symbols utilized
within the scenario logic window of FIG. 23;
[0044] FIG. 25 shows a screen shot image of an exemplary drop down
menu of events associated with the scenario logic window of FIG.
23;
[0045] FIG. 26 shows a screen shot image of an exemplary drop down
menu of triggers associated with the scenario logic window of FIG.
23;
[0046] FIG. 27 shows a screen shot image of a background editor
window of the scenario provision process with a pan tool enabling a
pan capability;
[0047] FIG. 28 shows a screen shot image of the background editor
window with a foreground marking tool enabling a layer
capability;
[0048] FIG. 29 shows a screen shot image of the background editor
window with a background image selected for saving into a
database;
[0049] FIG. 30 shows an exemplary table of animation sequences
associated with actors for use within the scenario provision
process;
[0050] FIGS. 31a-d show an illustration of a single frame of an
exemplary video clip undergoing video filming and editing;
[0051] FIG. 32 shows a screen shot image of a behavior editor
window showing a behavior logic flow for a first behavior;
[0052] FIG. 33 shows a table of a key of exemplary symbols utilized
within the behavior editor window; and
[0053] FIG. 34 shows a partial screen shot image of the behavior
editor window showing a behavior logic flow for a second
behavior.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0054] FIG. 1 shows a diagram of a full surround simulation system
20 in accordance with a preferred embodiment of the present
invention. Full surround simulation system 20 includes multiple
screens 22 that fully surround a participation location 24 in which
one or more participants, i.e., trainees 26, may be positioned.
Since multiple screens 22 fully surround participation location 24,
at least one of screens 22 is configured to swing open to enable
ingress and egress. For example, screens 22 may be hingedly coupled
to one another, and one of screens 22 may be mounted on casters that enable it to roll outwardly enough to allow passage of
trainees 26 and/or trainers (not shown).
[0055] Each of multiple screens 22 has a rear projection system 28
associated therewith. Rear projection system 28 is operable from, and the actions of trainees 26 may be monitored from, a workstation 30 located remote from participation location 24. Workstation 30 is illustrated as being positioned proximate screens 22. However, it should be understood that workstation 30 need not be proximate
screens 22, but may instead be located more distantly, for example,
in another room. When workstation 30 is located in another room,
bi-directional audio may be provided for communication between
trainees 26 and trainers located at workstation 30. In addition,
video monitoring of participation location 24 may be provided to
the trainer located at workstation 30.
[0056] Full surround simulation system 20 includes a total of six
screens 22 arranged such that an angle 27 formed between
corresponding faces 29 of screens 22 is approximately one hundred
and twenty degrees. As such, the six screens 22 are arranged in a
hexagonal pattern. In addition, each of screens 22 may be
approximately ten feet wide by seven and a half feet high. Of
course, those skilled in the art will recognize that other sizes of
screens 22 may be provided. For example, a twelve foot wide by six
foot nine inch high screen may be utilized for high definition
formatted video. Thus, the configuration of simulation system 20
provides a multi-directional simulated environment in which a
situation, or event, is unfolding. Although screens 22 are shown as
being generally flat, the present invention may be adapted to
include screens 22 that are curved. In such a configuration,
screens 22 would form a generally circular pattern rather than the
illustrated hexagonal pattern.
[0057] Full surround simulation system 20 provides a visually
presented situation onto each of screens 22 so that trainees 26 in
participation location 24 are fully immersed in the situation. In
such a configuration, trainees 26 can train to respond to
peripheral visual cues, multi-directional auditory cues, and the
like. In a preferred embodiment, the visually presented situation
is full motion, pre-recorded video. However, it should be
understood that other techniques may be employed, such as video overlay, computer-generated imagery, and the like.
[0058] The situation presented by simulation system 20 is pertinent
to the type of training and the trainees 26 participating in the
training experience. Trainees 26 may be law enforcement, security,
military personnel, and the like. Accordingly, training scenarios
projected via rear projection system 28 onto associated screens 22
correspond to real life situations in which trainees 26 might find
themselves. For example, law enforcement scenarios could include
response to shots fired at a facility, domestic disputes, hostage
situations, and so forth. Security scenarios might include action
in a crowded airport departure/arrival terminal, the jet way, or in
an aircraft. Military scenarios could include training for a
pending mission, a combat situation, an ambush, and so forth.
[0059] Trainees 26 are provided with a weapon 31. Weapon 31 may be
implemented by any firearm (e.g., handgun, rifle, shotgun, etc.)
and/or a non-lethal weapon (e.g., pepper spray, tear gas, stun gun,
etc.) that may be utilized by trainees 26 in the course of duty.
However, for purposes of the simulation, weapon 31 is equipped with
a laser insert instead of actual ammunition. Trainees 26 actuate
weapon 31 to selectively project a laser beam, represented by an
arrow 33, toward any of screens 22 in response to the situation
presented by simulation system 20. In a preferred embodiment,
weapon 31 is a laser device that projects infrared (IR) light,
although a visible red laser device may also be used.
Alternatively, other non-live fire weaponry and/or live-fire
weaponry may be employed.
[0060] Referring to FIG. 2 in connection with FIG. 1, FIG. 2 shows
a block diagram of components that form simulation system 20.
Workstation 30 generally includes a simulation controller 32, and a
tracking processor 34 in communication with simulation controller
32. In addition, simulation controller 32 is in communication with
each of multiple projection controllers 36. Each of projection
controllers 36 is in communication with one of rear projection
systems 28. Although separate controller/processor elements are
utilized herein for different functions, those skilled in the art
will readily appreciate that many of the computing functions
performed by simulation controller 32, tracking processor 34, and
projection controllers 36 may alternatively be combined into a
comprehensive computing platform.
[0061] Each rear projection system 28 includes a projector 38
having a video input 40 in communication with a video output 42 of
its respective projection controller 36, and a sound device, i.e.,
a speaker 44, having an audio input 46 in communication with an
audio output 48 of its respective projection controller 36. Each
rear projection system 28 further includes a detector 50, in
communication with tracking processor 34 via a high speed serial
bus 51. Thus, the collection of detectors 50 defines a detection
subsystem of simulation system 20. Projector 38 and detector 50
face a mirror 52 of rear projection system 28.
[0062] In general, simulation controller 32 may include a scenario
pointer database 54 that is an index to a number of scenarios
(discussed below) that are prerecorded full motion video of various
situations that are to be presented to trainees 26. In addition,
each of projection controllers 36 may include a scenario library 56
pertinent to their location within simulation system 20. Each
scenario library 56 includes a portion of the video and audio to be
presented via the associated one of projectors 38 and speakers
44.
[0063] An operator at workstation 30 selects one of the scenarios
to present to trainees 26 and simulation controller 32 accesses
scenario pointer database 54 to index to the appropriate video
identifiers (discussed below) that correspond to the scenario to be
presented. Simulation controller 32 then commands each of
projection controllers 36 to concurrently present corresponding
video, represented by an arrow 58, and any associated audio,
represented by arced lines 60.
[0064] Video 58 is projected toward a reflective surface 62 of
mirror 52 where video 58 is thus reflected onto screen 22 in
accordance with conventional rear projection methodology. Depending
upon the scenario, trainee 26 may elect to shoot his or her weapon
31, i.e. project laser beam 33, toward an intended target within
the scenario. An impact location (discussed below) of laser beam 33
is detected by detector 50 via reflective surface 62 of mirror 52
when laser beam 33 is projected onto screen 22. Information
regarding the impact location is subsequently communicated to
tracking processor 34 to evaluate trainee response to the presented
scenario (discussed below). Optionally, trainee response may then
be concatenated into a report 64.
[0065] Simulation controller 32 is a conventional computing system
that includes, for example, input devices (keyboard, mouse, etc.),
output devices (monitor, printers, etc.), a data reader, memory,
programs stored in memory, and so forth. Simulation controller 32
and projection controllers 36 operate under a primary/secondary
computer networking communication protocol in which simulation
controller 32 (the primary device) controls projection controllers
36 (the secondary devices).
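The primary/secondary arrangement described above lends itself to a simple command-dispatch model. The following Python sketch is purely illustrative, and its class and message names are assumptions rather than part of the disclosed system: the simulation controller broadcasts one synchronized "play" command, and each projection controller extracts only its own assigned view.

```python
class ProjectionController:
    """Secondary device: plays the view of a scenario assigned to it."""

    def __init__(self, controller_id):
        self.controller_id = controller_id
        self.playing = None

    def handle_command(self, command):
        if command["action"] == "play":
            # Each secondary looks up only its own view of the scenario.
            self.playing = command["views"][self.controller_id]


class SimulationController:
    """Primary device: selects a scenario and commands all secondaries."""

    def __init__(self, secondaries):
        self.secondaries = secondaries

    def start_scenario(self, views_by_controller):
        command = {"action": "play", "views": views_by_controller}
        # Broadcast the same command so playback starts concurrently.
        for secondary in self.secondaries:
            secondary.handle_command(command)


secondaries = [ProjectionController(i) for i in range(6)]
primary = SimulationController(secondaries)
primary.start_scenario({i: f"1-{i + 1}" for i in range(6)})
print([s.playing for s in secondaries])
```

In practice the broadcast would travel over a network link rather than direct method calls, but the single-command fan-out is what keeps the six views starting at the same instant.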
[0066] Simulation system 20, illustrated in FIG. 1, includes a
quantity of six screens 22 and six rear projection systems 28
arranged in a hexagonal configuration to form the full surround
configuration of system 20. However, simulation system 20 need not
be limited to only six screens 22, but may include more or fewer
than six screens 22. Accordingly, the block diagram representation
of simulation system 20 is shown having a quantity of "N"
projection controllers 36 and their associated "N" rear projection
systems 28 to illustrate this point.
[0067] In a preferred embodiment, each of projectors 38 is capable
of playing high definition video. The term "high definition" refers
to being or relating to a television system that has twice as many
scan lines per frame as a conventional system, a proportionally
sharper image, and a wide-screen format. The high-definition format
uses a 16:9 aspect ratio (an image's width divided by its height),
although the 4:3 aspect ratio of conventional television may also
be used. The high resolution images (1024×768 or 1280×720) allow
much more detail to be shown. Simulation
system 20 places trainees 26 close to screens 22, so that trainees
26 can see more detail. Consequently, the high resolution video
images are advantageously utilized to provide more realistic
imagery to trainees 26. Although the present invention is described
in terms of its use with known high definition video formats, the
present invention may further be adapted for future higher
resolution video formats.
[0068] In a preferred embodiment, each of detectors 50 is an
Institute of Electrical and Electronics Engineers (IEEE)
1394-compliant digital video camera in communication with tracking
processor 34 via high speed serial bus 51. IEEE 1394 is a digital
video serial bus interface standard that offers high-speed
communications and isochronous real-time data services. An IEEE
1394 system is advantageously used in place of the more common
universal serial bus (USB) due to its faster speed. However, those
skilled in the art will recognize that existing and upcoming
standards that offer high-speed communications, such as USB 2.0,
may alternatively be employed.
[0069] Each of detectors 50 further includes an infrared (IR)
filter 66 removably covering a lens 68 of detector 50. IR filter 66
may be hingedly affixed to detector 50 or may be pivotally affixed
to detector 50. IR filter 66 covers lens 68 when simulation system
20 is functioning so as to accurately detect the impact location of
laser beam 33 (FIG. 1) on screen 22 by filtering all light except
IR. However, prior to onset of the simulation scenario, it is first
necessary to calibrate each of detectors 50 relative to their
associated one of projectors 38. IR filter 66 is removed from lens
68 so that visible light can be let in during the calibration
process. IR filter 66 may be manually or automatically moved from
in front of lens 68 as represented by detector 50, labeled
"DETECTOR N." An exemplary calibration process will be described in
connection with the training process of FIG. 10.
[0070] FIG. 3 shows a side view of one of rear projection systems
28, i.e., a first projection system 28', of simulation system 20
(FIG. 1). Only one of rear projection systems 28 is described in
detail herein. However, the following description applies equally
to each of rear projection systems 28 depicted in FIG. 1. First
rear projection system 28' includes a frame structure 70 for
placement behind a first screen 22'. A first mirror 52' is coupled
to a first end 72 of frame structure 70 with a first reflective
surface 62' facing a rear face 74 of first screen 22'. Frame
structure 70 retains first mirror 52' in a fixed orientation that
is substantially parallel to first screen 22'.
[0071] A first projector 38' is situated at a second end 76 of
frame structure 70 at a distance, d, from first reflective surface
62' of first mirror 52'. First projector 38' is preferably equipped
with an adjustment mechanism which can be employed to adjust first
projector 38' so that a center of a first view 78 of the projected
video 58 (FIG. 2) is approximately centered on first screen 22'.
First projector 38' projects first view 78 of video 58 toward first
mirror 52', and first view 78 reflects from first mirror 52' onto
first screen 22'. A first detector 50' is also positioned on frame
structure 70. First detector 50' may also be equipped with an
adjustment mechanism which may be employed to adjust first detector
50' so that first detector 50' has an appropriate view of first
screen 22' via first mirror 52'.
[0072] The utilization of first rear projection system 28' in
simulation system 20 (FIG. 1) advantageously saves space by
shortening the distance between first projector 38' and first
screen 22'. The distance, d, between first mirror 52' and first
projector 38' is approximately one half the throw distance of first
projector 38' to maximize space savings. Furthermore, the use of a
rear projection technique effectively frees participation location 24
(FIG. 1) of the clutter and distraction of components that would be
found in a front projection configuration, and avoids the problem
of casting shadows that can occur in a front projection
configuration.
[0073] The relationship of components on frame structure 70
simplifies system configuration and calibration, and makes
adjusting of first projector 38' simpler. As shown, frame structure
70 further includes casters 82 mounted to a bottom thereof. Through
the use of casters 82, simulation system 20 (FIG. 1) can be readily
repositioned into different arrangements of screens 22 and rear
projection systems 28.
[0074] FIG. 4 shows a block diagram of a portion of the simulation
system 20 arranged in a firing range configuration 84. The
configuration of simulation system 20 shown in FIG. 1
advantageously surrounds and immerses a participant in a realistic,
multi-directional environment for situational response training. In
addition to the development of decision-making skills in situations
that are stressful and potentially dangerous, a comprehensive
training program may also involve practicing marksmanship skills
with lethal and/or non-lethal weapons and weapons qualification
testing.
[0075] In firing range configuration 84, screens 22 are arranged
such that corresponding viewing faces 29 of screens 22 are aligned
to be substantially coplanar. Additionally, rear projection systems
28 are readily repositioned behind the aligned screens 22 via
casters 82 (FIG. 3). Trainees 26 may then face screens 22, and
project laser beam 33 of their respective weapons 31, toward
targets presented on screens 22 via rear projection systems 28.
Although firing range configuration 84 shows one of trainees 26 at
each of screens 22, it is equally likely that each of screens 22 can
accommodate more than one trainee 26 for marksmanship training
and/or weapons qualification testing. Further discussion regarding
the use of full surround simulation system 20 for marksmanship
training and/or qualification testing is presented below in
connection with FIG. 12.
[0076] FIG. 5 shows a table of a highly simplified exemplary
scenario pointer database 54. As discussed briefly in connection
with FIG. 2, scenario pointer database 54 provides an index to a
number of scenarios of prerecorded full motion video of various
situations that are to be presented to trainees 26. Simulation
controller 32 (FIG. 2) accesses scenario pointer database 54 to
index to the appropriate video identifiers that correspond to the
scenario to be presented. Simulation controller 32 then commands
projection controllers 36 to concurrently present corresponding
video 58 (FIG. 2) and any associated audio 60 (FIG. 2) stored
within their respective scenario libraries 56 (FIG. 2).
[0077] Exemplary scenario pointer database 54 includes four
exemplary scenarios 86, labeled "1", "2", "3", and "4", and
referenced in a scenario identifier field 87. Each of scenarios 86
is pre-recorded video 58 corresponding to a real life situation in
which trainees 26 might find themselves, as discussed above. In
addition, each of scenarios 86 is split into adjacent portions,
i.e., adjacent views 88, referenced in a video index identifier
field 90, and assigned to particular projection controllers 36,
referenced in a projection controller identifier field 92. For
example, a first projection controller 36' is assigned a first view
88', identified in video index identifier field 90, by the label
1-1. Similarly, a second projection controller 36'' is assigned a
second view 88'', identified in video index identifier field 90, by
1-2.
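The structure of scenario pointer database 54 can be modeled with a small Python sketch. The table below is an assumption patterned on the "1-1" and "1-2" labels above; the "PC-n" controller names are hypothetical stand-ins for projection controller identifier field 92.

```python
# Each scenario identifier indexes (video index identifier,
# projection controller identifier) pairs for its adjacent views.
SCENARIO_POINTER_DB = {
    "1": [("1-1", "PC-1"), ("1-2", "PC-2"), ("1-3", "PC-3"),
          ("1-4", "PC-4"), ("1-5", "PC-5"), ("1-6", "PC-6")],
    "2": [("2-1", "PC-1"), ("2-2", "PC-2"), ("2-3", "PC-3"),
          ("2-4", "PC-4"), ("2-5", "PC-5"), ("2-6", "PC-6")],
}

def views_for_scenario(scenario_id):
    """Return {projection controller id: video index identifier}."""
    return {pc: video_id
            for video_id, pc in SCENARIO_POINTER_DB[scenario_id]}

print(views_for_scenario("1")["PC-2"])  # the second adjacent view
```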
[0078] In a preferred embodiment, pre-recorded video 58 may be
readily filmed utilizing multiple high-definition format cameras
with lenses outwardly directed from the same location, or a
compound motion picture camera, in order to achieve a 360-degree
field-of-view. Post-production processing entails stitching, or
seaming, the individual views to form a panoramic view. The
panoramic view is subsequently split into adjacent views 88 that
are presented, via rear projection systems 28 (FIG. 2), onto
adjacent screens 22. Through the use of digital video editing
software, adjacent views 88 can be time locked, for example,
through the assignment of appropriate time codes so that adjacent
views 88 of scenario 86 are played back at the same instant.
[0079] The video is desirably split so that the primary subject or
subjects of interest in the video are not split over adjacent
screens 22. The splitting of video into adjacent views 88 for
presentation on adjacent screens 22 need not be a one to one
correlation. For example, during post-production processing a
stitched panoramic video having a 270-degree field-of-view may be
projected onto five screens to yield a 300-degree
field-of-view.
[0080] Audio 60 may simply be recorded at the time of video
production. During post-production processing, particular portions
of the audio are assigned to particular slices of the video so that
audio relevant to the view is provided. For example, audio 60 (FIG.
2) of a door opening should come from speaker 44 (FIG. 2)
associated with one of screens 22 (FIG. 2) at which the door is
shown, while audio 60 of a person's voice should come from speaker
44 associated with another of screens 22 at which the person is
presented. Thus, audio 60 is cost effectively produced using an
emulation of three-dimensional audio to match the video. Such an
approach is much less expensive, often more realistic, and scales
better with system configurations than more complex surround sound
techniques.
[0081] Although one video and audio production technique is
described above that cost-effectively yields a high resolution
emulation of a real-life situation, it should be apparent that
other video and audio production techniques may be employed. For
example, the pre-recorded video may be filmed utilizing a digital
camera system having a lens system that can record 360-degree
video. Post-production processing then merely entails splitting the
360-degree video into adjacent views to be presented on adjacent
screens. Similarly, audio may be produced utilizing one of several
surround sound techniques known to those skilled in the art.
[0082] Simulation system 20 (FIG. 1) may employ a branching video
technology. Branching video technology enables control of multiple
playback paths through a video database. As such, scenarios 86 may
optionally branch to a different outcome, i.e., a subscenario 94
based on the action or inaction of trainee 26 (FIG. 1).
[0083] Referring to FIG. 6 in connection with FIG. 5, FIG. 6 shows
a flowchart of an exemplary video playback process 93 for a second
scenario 86'', labeled "2", that includes video branching to
subscenarios 94. At an onset of the simulation training, an
operator initiates second scenario 86'', labeled "2". At a
particular junction within the playback of second scenario 86'', a
branching decision 96 may be required. If no branch is to occur at
branching decision 96, second scenario 86'' continues. However, if
the video is to branch at branching decision 96, a first
subscenario 94', labeled 2A, may be presented to trainee 26 (FIG.
1).
[0084] In addition, second scenario 86'' shows that following
initiation of first subscenario 94', another branching decision 98
may be required. When no branching is to occur at branching
decision 98, first subscenario 94' continues. Alternatively, when
branching is to occur at branching decision 98, a second
subscenario 94'', labeled 2C is presented. Following the completion
of second scenario 86'', first subscenario 94', or second
subscenario 94'', video playback process 93 for second scenario
86'' is finished.
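Video playback process 93 amounts to walking a branch graph. The Python sketch below is a deliberate simplification: the graph encodes branching decisions 96 and 98 for second scenario 86'', and the decision inputs, which in the system described would be driven by trainee action or inaction, are supplied externally as plain strings.

```python
# Illustrative branch graph for scenario "2" of FIG. 6.
BRANCH_GRAPH = {
    "2":  {"branch": "2A", "continue": None},   # branching decision 96
    "2A": {"branch": "2C", "continue": None},   # branching decision 98
    "2C": {},                                   # terminal subscenario
}

def play(scenario, decisions):
    """Follow branch decisions through the graph; return clips played."""
    played = [scenario]
    for decision in decisions:
        options = BRANCH_GRAPH.get(played[-1], {})
        nxt = options.get(decision)
        if nxt is None:
            break            # no branch: current clip runs to completion
        played.append(nxt)
    return played

print(play("2", ["branch", "branch"]))   # 2 -> 2A -> 2C
print(play("2", ["continue"]))           # 2 plays through unbranched
```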
[0085] An exemplary scenario 86 in which video branching might
occur is as follows: detectors 50 (FIG. 1) are surveying their
respective screens 22 (FIG. 1) for an infrared (IR) spot,
indicating that at least one of weapons 31 (FIG. 1) has been
"fired" to project laser beam 33 (FIG. 1) onto one of screens 22.
Tracking processor 34 (FIG. 2) may determine coordinates for a best
estimated location of the "shot." These coordinates are
communicated from tracking processor 34 to simulation controller 32
(FIG. 2). Simulation controller 32 time links the impact location of
the "shot" to video 58 (FIG. 2) and controls branching of the video
accordingly. For example, if a person within scenario 86 is
"shot", scenario 86 may branch to a subscenario 94 showing the
person falling.
[0086] Referring back to FIG. 5, the presentation of scenarios 86
can be tailored to the type and complexity of the desired training.
For example, scenario 86, labeled "1", may optionally take a single
branch. As described above, second scenario 86'' may optionally
branch to first subscenario 94', and then optionally branch from
first subscenario 94' to second subscenario 94''. Another scenario
86, labeled "3", need not branch at all, and yet another scenario
86, labeled "4", may optionally branch to one of two subscenarios
94.
[0087] The present invention contemplates the provision of custom
authoring capability of scenarios 86 to the training organization.
To that end, scenario creation software permits a scenario
developer to construct situations that can be displayed on screens
22 from "stock" footage without the demands to perform extensive
camera work. In a preferred embodiment, the scenario creation
software employs a technique known as compositing. Compositing is
the post-production combination of two or more video/film/digital
clips into a single image.
[0088] In compositing, two images (or clips) are combined in one of
several ways using a mask. The most common way is to place one
image (the foreground) over another (the background). Where the
mask indicates transparency, the background image will show through
the foreground. Blue/green screening, also known as chroma keying
is a type of compositing where the mask is calculated from the
foreground image. Where the image is blue (or green for green
screen), the mask is considered to be transparent. This technique
is useful when shooting film and video, as a blue or green screen
can be placed behind the object being shot and some other image
then inserted in that space later.
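The chroma keying just described reduces to a per-pixel rule. The toy Python example below assumes RGB tuples and exact-match keying for clarity; a production keyer would instead use a chroma-distance threshold so that shades near the key color are also made transparent.

```python
# Wherever the foreground pixel matches the key color, the mask is
# transparent and the background shows through; elsewhere the
# foreground is kept.
GREEN = (0, 255, 0)

def composite(foreground, background):
    """Overlay foreground on background, keying out green pixels."""
    return [bg if fg == GREEN else fg
            for fg, bg in zip(foreground, background)]

fg = [(200, 50, 50), GREEN, GREEN, (10, 10, 10)]
bg = [(0, 0, 255)] * 4
print(composite(fg, bg))
# green pixels are replaced by the blue background; others are kept
```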
[0089] The scenario creation software provides the scenario
developer with a library of background still and/or motion images.
These background images are desirably panoramic images, so that one
large picture is continued from one view on one of screens 22 (FIG.
1) to the adjacent one of screens 22, and so forth. Green (or blue)
screen video clips may be captured by the user, or may be provided
within scenario creation software. These video clips may include
threatening or non-threatening individuals opening doors, coming
around corners, appearing from behind objects, and so forth.
[0090] The scenario creation software then enables the scenario
developer to display the background image with various foreground
clips to form the scenario. In addition, the scenario developer may
optionally determine the "logic" behind when and where the clips
may appear. For example, the scenario developer could determine
that foreground image "A" is to appear at a predetermined and/or
random time. In addition, the scenario developer may add "hit
zones" to the clips. These "hit zones" are areas where the clip
would branch due to interaction by the user. The scenario developer
can instruct the scenario to branch to clip "C" if a "hit zone" was
activated on clip "B".
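The "hit zone" logic above can be sketched as a point-in-rectangle test. The zone coordinates and clip names in this Python sketch are hypothetical; the scenario creation software described would let the developer draw such zones interactively.

```python
def branch_for_hit(hit_zones, impact_x, impact_y):
    """Return the branch clip whose zone contains the impact, else None."""
    for (x0, y0, x1, y1), branch_clip in hit_zones:
        if x0 <= impact_x <= x1 and y0 <= impact_y <= y1:
            return branch_clip
    return None

# Clip "B" carries one hit zone that branches to clip "C".
zones_for_clip_b = [((100, 40, 180, 160), "C")]
print(branch_for_hit(zones_for_clip_b, 150, 90))   # inside the zone
print(branch_for_hit(zones_for_clip_b, 10, 10))    # outside: no branch
```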
[0091] Through the use of scenario creation software, the scenario
developer is enabled to add, modify, and remove video clips,
still images, and/or audio clips to or from the scenario that they
are creating. The scenario developer may then be able to preview
and test their scenario during the scenario creation process. Once
the scenario developer is satisfied with the content, the scenario
creation software can create the files needed by simulation system
20 (FIG. 1), and automatically set up the scenario to be presented
on screens 22.
[0092] FIG. 7 shows an illustrative representation of adjacent
views 88 of prerecorded video 58 of one of scenarios 86. Adjacent
views 88 are presented on adjacent screens 22. For example, first
screen 22' shows first view 88', second screen 22'' shows second
view 88'', and so forth. Of course, as described above, screens 22
are arranged in a hexagonal configuration. Accordingly, adjacent
views 88 surround and immerse trainee 26 (FIG. 1) into the
situation presented in scenario 86. Upon being presented with such
a situation in scenario 86, it is incumbent upon trainee 26 to
determine what course of action he or she might take in response to
the situation.
[0093] FIG. 7 further illustrates an exemplary impact location 100
of laser beam 33 (FIG. 1) projected onto first screen 22'. In this
instance, trainee 26 has determined that a subject 102 was an
imminent threat to trainee 26 and/or to a second subject 104.
Hence, for purposes of demonstration, subject 102 is a target 105
within scenario 86 displayed on the multiple screens 22.
[0094] Trainee 26 responded to perceived aggressive behavior
exhibited by subject 102 with the force that he or she deemed to be
reasonably necessary during the course of the situation unfolding
within scenario 86. As discussed previously, detector 50 (FIG. 1)
associated with first screen 22' detects impact location 100. Tracking
processor 34 (FIG. 1) receives information from detector 50
associated with impact location 100 indicating that weapon 31 was
actuated by trainee 26. The received information may entail receipt
of the raw digital video, which tracking processor 34 then converts
to processed information, for example, X and Y coordinates of
impact location 100. The X and Y coordinates can then be presented
to trainee 26 in the form of report 64 (FIG. 2), and/or can be
communicated to simulation controller 32 (FIG. 2) for subsequent
video branching, as discussed above.
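The conversion from raw detector video to the X and Y coordinates of impact location 100 can be illustrated with a bright-spot centroid computation. The threshold value and the hand-built frame below are illustrative assumptions; the actual tracker would operate on digital video frames from detector 50.

```python
def impact_coordinates(frame, threshold=200):
    """Return (x, y) centroid of pixels at or above threshold, or None."""
    hits = [(x, y)
            for y, row in enumerate(frame)
            for x, value in enumerate(row)
            if value >= threshold]
    if not hits:
        return None                      # no shot detected in this frame
    x_mean = sum(x for x, _ in hits) / len(hits)
    y_mean = sum(y for _, y in hits) / len(hits)
    return (x_mean, y_mean)

frame = [[0] * 5 for _ in range(5)]
frame[2][3] = 255          # a single bright pixel at x=3, y=2
print(impact_coordinates(frame))
```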
[0095] FIG. 8 shows a diagram of a half surround simulation system
106 in accordance with another preferred embodiment of the present
invention. The components presented in full surround simulation
system 20 are modular and can be readily incorporated into other
simulation systems dependent upon training requirements. In this
situation, a 180-degree field of view is accomplished. As such,
half surround simulation system 106 includes three screens 22, each
of which has associated therewith one of projectors 38, one of
detectors 50, and one of speakers 44. However, unlike full surround
simulation system 20, half surround simulation system 106 utilizes
a conventional front projection technique. In this case, projectors
38 and detectors 50 are desirably mounted on a ceiling and out of
the way of trainee 26.
[0096] The 180-degree field of view enables trainee 26 to utilize
peripheral visual and auditory cues. However, space and cost savings
are realized relative to full surround simulation system 20. Space
savings is realized because the overall footprint of half surround
simulation system 106 is approximately half that of full surround
simulation system 20, and cost savings is realized by utilizing a
smaller number of components.
[0097] FIG. 9 shows a diagram of a three hundred degree surround
simulation system 108 in accordance with yet another preferred
embodiment of the present invention. As shown, 300-degree surround
simulation system 108 includes a total of five screens 22 and five
rear projection systems 28. Three hundred degree surround
simulation system 108 enables nearly full surround and effective
immersion for trainees 26. However, by using one less screen 22, an
opening 110 is formed between screens 22 for easy ingress, egress,
and trainee observation purposes.
[0098] System 108 is further shown as including a remote debrief
station 111. Remote debrief station 111 may be located in a
different room, as represented by dashed lines 113. Station 111 is
in communication with workstation 30, and more particularly with
tracking processor 34 (FIG. 2) and/or simulation controller 32
(FIG. 2), via a wireline or wireless link 115. In an exemplary
situation, software resident at workstation 30 compiles and
transfers pertinent files for off-line review of trainee 26
response following a simulation experience. Off-line review could
entail review and/or playback of the scenario, video/audio files of
trainee 26, results, and so forth.
[0099] Although each of the simulation systems of FIGS. 1, 4, 8,
and 9 show the use of either front projection systems or rear
projection systems, it should be understood that a single
simulation system may include a combination of front and rear
projection systems in order to better accommodate size limitations
of the room in which the simulation system is to be housed.
[0100] Referring to FIGS. 1 and 10, FIG. 10 shows a flowchart of a
training process 112 of the present invention. Training process 112
is performed utilizing, for example, full surround simulation
system 20. Training process 112 will be described herein in
connection with a single one of trainees 26 utilizing full surround
simulation system 20 for simplicity of illustration. However, as
discussed above, more than one trainee 26 may participate in
training process 112 at a given session. In addition, training
process 112 applies equivalently when utilizing half surround
simulation system 106 (FIG. 8) or three hundred degree surround
simulation system 108 (FIG. 9).
[0101] Training process 112 presents one of scenarios 86 (FIG. 5),
in the form of full motion, realistic video. Trainee 26, with
weapon 31, is immersed into scenario 86 and is enabled to react to
a threatening situation. The object of such training is to learn to
react safely, and with appropriate use of force and judgment.
[0102] Training process 112 begins with a task 114. At task 114, an
operator calibrates simulation system 20. As such, calibration task
114 is a preliminary activity that can occur prior to positioning
trainee 26 within participation location 24 of simulation system
20. Calibration task 114 is employed to calibrate each of detectors
50 with their associated projectors 38. In addition, calibration
task 114 may be employed to calibrate, i.e., zero, weapon 31
relative to projectors 38.
[0103] Referring to FIG. 11 in connection with task 114, FIG. 11
shows a diagram of an exemplary calibration pattern 116 of squares
that may be utilized to calibrate each of detectors 50 with their
associated projectors 38. In order to branch from scenario 86 (FIG.
5) to an appropriate subscenario 94 (FIG. 5), and/or to obtain high
accuracy trainee performance measurements, it is essential that the
detection accuracy of detectors 50 corresponds with a known
standard, i.e., calibration pattern 116 presented via one of
projectors 38. For example, if projector 38 illuminates one pixel
at, for example, X-Y coordinates of 4-4, and the associated
detector 50 detects the illuminated pixel (dot) at, for example,
X-Y coordinates of 5-5, then tracking software, resident in
tracking processor 34 (FIG. 2) must determine the appropriate
mathematical adjustments to ensure that detector 50 is coordinated
with projector 38.
[0104] To that end, at calibration task 114, IR filter 66 (FIG. 2)
is removed from lens 68 (FIG. 2) of detector 50 and visible light
is allowed in so that detector 50 can detect calibration pattern
116. As mentioned before, IR filter 66 may be removed manually by
the operator or by automatic means. Following
IR filter 66 removal, projector 38 projects calibration pattern 116
for detection by detector 50, and tracking processor 34 (FIG. 2)
correlates detected coordinates with projected coordinates. Weapon
zeroing may entail projecting laser beam 33 (FIG. 1) from weapon 31
toward a predetermined position, i.e., a "zero" position, on
calibration pattern 116. Interpolation can subsequently be employed
to correlate projected coordinates for impact location 100 (FIG. 7)
with detected coordinates for impact location 100. Calibration task
114 is performed for each projector 38 and detector 50 pair, either
sequentially or concurrently.
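The mathematical adjustment that coordinates detector 50 with projector 38 can be sketched as a per-axis linear fit between projected and detected coordinates. The two-point fit and the sample coordinates below are illustrative assumptions; a deployed system would fit many calibration points, one per square of calibration pattern 116, and interpolate between them.

```python
def fit_axis(projected, detected):
    """Two-point linear fit: detected = a * projected + b."""
    (p0, d0) = (projected[0], detected[0])
    (p1, d1) = (projected[1], detected[1])
    a = (d1 - d0) / (p1 - p0)
    b = d0 - a * p0
    return a, b

def detected_to_projected(d, a, b):
    """Invert the fit to map a detected coordinate back to the projector."""
    return (d - b) / a

# Projector lights pixels at x = 4 and x = 12; detector sees 5 and 13.
a, b = fit_axis([4, 12], [5, 13])
print(detected_to_projected(5, a, b))   # recovers projected x = 4
```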
[0105] With reference back to FIGS. 1 and 10, following task 114,
trainee 26 involvement can begin at a task 118. At task 118,
trainee 26 moves into participation location 24, and the operator
at workstation 30 displays a selected one of scenarios 86 (FIG. 5).
In particular, simulation controller 32 (FIG. 2) commands
projection controllers 36 (FIG. 2) to access their respective
scenario libraries 56 (FIG. 2) to obtain a portion of video 58
(FIG. 2) associated with the desired scenario 86. Adjacent views 88
of scenario 86 are subsequently displayed on adjacent screens 22,
as described in connection with FIG. 7.
[0106] In conjunction with task 118, a query task 120 determines
whether laser beam 33 is detected on one of screens 22. That is, at
query task 120, each of detectors 50 monitors for laser beam 33
projected on one of screens 22 in response to actuation of weapon
31. When one of detectors 50 detects laser beam 33, this
information is communicated to tracking processor 34 (FIG. 2), in
the form of, for example, a digital video signal.
[0107] When laser beam 33 is detected at query task 120, process
flow proceeds to a task 122. At task 122, tracking processor 34
determines coordinates describing impact location 100 (FIG. 7).
[0108] Following task 122, or alternatively, when laser beam 33 is
not detected at query task 120, process flow proceeds to a query
task 124. Query task 124 determines whether to branch to one of
subscenarios 94 (FIG. 5). In particular, simulation controller 32
(FIG. 2) determines from received information associated with
impact location 100, i.e., X-Y coordinates, or from the absence of
X-Y coordinates, whether to command projection controllers 36 (FIG.
2) to branch to one of subscenarios 94. As such, depending upon the
desired training approach, the branching at query task 124 may be due
to action (i.e., detection of laser beam 33) or inaction (i.e., no
laser beam 33 detected) on the part of trainee 26.
[0109] Process 112 proceeds to a task 126 when a determination is
made at query task 124 to branch to one of subscenarios 94. At task
126, simulation controller 32 commands projection controllers 36
(FIG. 2) to access their respective scenario libraries 56 (FIG. 2)
to obtain a portion of video 58 (FIG. 2) associated with the
desired subscenario 94 (FIG. 5). Adjacent views 88 of subscenario
94 are subsequently displayed on adjacent screens 22.
[0110] When query task 124 determines not to branch to one of
subscenarios 94, process 112 continues with a query task 128. Query
task 128 determines whether playback of scenario 86 is complete.
When playback of scenario 86 is not complete, program control loops
back to query task 120 to continue monitoring for laser beam 33.
Thus, training process 112 allows for the capability of detecting
multiple shots fired from weapon 31. Alternatively, when playback
of scenario 86 is complete, process control proceeds to a query
task 130 (discussed below).
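The playback, detection, and branching logic of tasks 118 through 130 can be sketched, with purely illustrative names and a simplified frame loop standing in for video playback, as follows:

```python
# Illustrative sketch (all names hypothetical) of the control flow of
# tasks 118-130: play a scenario, poll for laser-beam detections, record
# impact coordinates, and branch to a subscenario when the supplied rule
# says to.

def run_scenario(scenario, detector, branch_rule):
    """Return (list of impact coordinates, final clip played).

    detector() -> (x, y) or None            # query tasks 120/132, tasks 122/134
    branch_rule(clip, impact) -> clip/None  # query tasks 124/136
    """
    impacts = []
    current = scenario
    frame = 0
    while frame < current["frames"]:        # query tasks 128/138
        impact = detector()
        if impact is not None:
            impacts.append(impact)          # coordinates of impact location 100
        branch = branch_rule(current, impact)
        if branch is not None:              # task 126: switch to subscenario
            current, frame = branch, 0
            continue
        frame += 1
    return impacts, current
```

Because detection is polled on every pass of the loop, the sketch preserves the capability of recording multiple shots fired from the weapon.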
[0111] Referring back to task 126, a query task 132 is performed in
conjunction with task 126. Query task 132 determines whether laser
beam 33 is detected on one of screens 22 in response to the
presentation of subscenario 94. When one of detectors 50 detects
laser beam 33, this information is communicated to tracking
processor 34 (FIG. 2), in the form of, for example, a digital video
signal.
[0112] When laser beam 33 is detected at query task 132, process
flow proceeds to a task 134. At task 134, tracking processor 34
determines coordinates describing impact location 100 (FIG. 7).
[0113] Following task 134, or alternatively, when laser beam 33 is
not detected at query task 132, process flow proceeds to query task
136. Query task 136 determines whether to branch to another one of
subscenarios 94 (FIG. 5). In particular, simulation controller 32
(FIG. 2) determines from received information associated with
impact location 100, i.e., X-Y coordinates, or from the absence of
X-Y coordinates, whether to command projection controllers 36 (FIG.
2) to branch to another one of subscenarios 94.
[0114] Process 112 loops back to task 126 when a determination is
made at query task 136 to branch to another one of subscenarios 94.
The next one of subscenarios 94 is subsequently displayed, and
detectors 50 continue to monitor for laser beam 33. However, when
query task 136 determines not to branch to another one of
subscenarios 94, process 112 continues with a query task 138.
[0115] Query task 138 determines whether playback of subscenario 94
is complete. When playback of subscenario 94 is incomplete, program
control loops back to query task 132 to continue monitoring for
laser beam 33. Alternatively, when playback of subscenario 94 is
complete, process control proceeds to query task 130.
[0116] Following completion of playback of either scenario 86, as
determined at query task 128, or subscenario 94, as determined at
query task 138, query task 130 determines whether report 64 (FIG. 2)
is to be generated. A
determination can be made when one of tracking processor 34 (FIG.
2) or simulation controller 32 detects an affirmative or negative
response to a request for report 64 presented to the operator. When
no report 64 is desired, process 112 exits. However, when report 64
is desired, process 112 proceeds to a task 140.
[0117] At task 140, report 64 is provided. In an exemplary
embodiment, tracking processor 34 (FIG. 2) may process the received
information regarding impact location 100, associate the received
information with the displayed scenario 86 and any displayed
subscenarios 94, and combine the information into a format, i.e.,
report 64, that can be used for review and de-briefing. Report 64
may be formatted for display and provided via a monitor at, for
example, remote debrief station 111 (FIG. 9) in communication with
tracking processor 34. Alternatively, or in addition, report 64 may
be printed out. Report 64 may include various information
pertaining to trainee 26 performance including, for example,
location of first and second subjects 102 and 104, respectively
(FIG. 7), versus impact location 100, the desired response to
scenario 86 versus the actual response of trainee 26 to scenario
86, and/or the state of the wellbeing of trainee 26 in response to
scenario 86. The state of wellbeing might indicate whether the
trainee's response to scenario 86 could have caused trainee 26 to
be injured or killed in a real life situation simulated by scenario
86.
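One possible shape for report 64, with entirely hypothetical field names, associates the tracking data with the scenario and any subscenarios that were displayed, together with the debrief fields listed above:

```python
# Hypothetical sketch of the report-assembly step of task 140; the
# field names and structure are assumptions, not the disclosed format.

def build_report(scenario_name, subscenarios, impacts, desired, actual):
    """Combine tracking and scenario data into a debrief record."""
    return {
        "scenario": scenario_name,
        "subscenarios_shown": list(subscenarios),
        "impact_locations": list(impacts),     # impact location 100 data
        "desired_response": desired,
        "actual_response": actual,
        "responses_match": desired == actual,
    }
```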
[0118] Following task 140, training process 112 exits. Of course,
it should be apparent that training process 112 can be optionally
repeated utilizing the same one of scenarios 86 or another one of
scenarios 86.
[0119] Training process 112 describes methodology associated with
situational response training for honing a trainee's
decision-making skills in situations that are stressful and
potentially dangerous. Of course, as discussed above, a
comprehensive training program may also encompass marksmanship
training and/or weapons qualification testing. Full surround
simulation system 20 may be configured for marksmanship training
and weapons qualification testing, as discussed in connection with
FIG. 4. That is, screens 22 may be arranged coplanar with one
another to form firing range configuration 84 (FIG. 4).
[0120] FIG. 12 shows a diagram of detector 50 of simulation system
20 (FIG. 1) zoomed in to a small viewing area 142 for weapons
qualification testing. In a preferred embodiment, at least one of
detectors 50 is outfitted with a zoom lens 144. Zoom lens 144 is
adjustable to decrease an area of one of screens 22, for example,
first screen 22', that is viewed by detector 50. By either
automatically or manually zooming and focusing in on small viewing
area 142, higher-resolution tracking of laser beam 33 (FIG. 1) can
be achieved. Although only one is shown, there may additionally be
multiple detectors 50, each configured to detect shots fired in an
associated viewing area 142 on first screen 22'.
[0121] Targets 146 presented on first screen 22' via one of
projectors 38 (not shown) are proportionately correct and sized to
fit within small viewing area 142. Thus, the size of targets 146
may be reduced by fifty percent relative to their appearance when
zoomed out. As shown, there may be multiple targets 146 presented
on first screen 22'. Additional information pertinent to
qualification testing may also be provided on first screen 22'.
This additional information may include, for example, distance to
the target (for example, 75 meters), wind speed (for example, 5
mph), and so forth. In addition, an operator may optionally enter,
via workstation 30, information for use by a software ballistic
calculator to compute, for example, the effects of wind, barometric
pressure, altitude, bullet characteristics, and so forth, on the
location of a "shot" fired toward targets 146.
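A minimal sketch of the kind of software ballistic calculator mentioned above might compute gravity drop and crosswind drift for a displayed shot. The formula (drag is ignored) and all parameter names are illustrative simplifications, not the calculator actually employed:

```python
# Hedged illustration of a software ballistic calculator: flight time is
# approximated as distance over muzzle velocity, ignoring drag.

def ballistic_offset(distance_m, muzzle_velocity_ms, crosswind_ms):
    """Return (drop_m, drift_m) of the "shot" at the target distance."""
    g = 9.81                                  # gravitational acceleration
    t = distance_m / muzzle_velocity_ms       # flight time, no drag
    drop = 0.5 * g * t * t                    # vertical drop
    drift = crosswind_ms * t                  # lateral crosswind drift
    return drop, drift
```

The computed offsets would then shift the scored location of the "shot" relative to targets 146.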
[0122] Report 64 (FIG. 2) may be generated in response to
qualification testing that includes data pertinent to shooting
accuracy, such as average impact location for laser beam 33, offset
of laser beam 33 from center, a score, and so forth.
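The accuracy figures listed above could be computed along the following lines; the scoring rule and ring size are invented for this sketch:

```python
import math

# Hypothetical computation of shooting-accuracy data for report 64:
# average impact location, offset from the target center, and a simple
# score that loses one point per scoring ring of offset.

def accuracy_stats(impacts, center, max_score=100.0, ring_m=0.05):
    xs = [p[0] for p in impacts]
    ys = [p[1] for p in impacts]
    avg = (sum(xs) / len(xs), sum(ys) / len(ys))
    offset = math.hypot(avg[0] - center[0], avg[1] - center[1])
    score = max(0.0, max_score - offset / ring_m)
    return {"average_impact": avg,
            "offset_from_center": offset,
            "score": score}
```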
[0123] FIG. 13 shows a block diagram of a simulation system 150 in
accordance with an alternative embodiment of the present invention.
Simulation system 150 includes many of the components of the
previously described simulation systems. That is, simulation system
150 includes multiple screens 22 surrounding participation location
24, a rear projection system 28 associated with each screen 22, a
workstation 30, and so forth. Accordingly, a description of these
components need not be repeated.
[0124] In contrast to the aforementioned simulation systems,
simulation system 150 utilizes a non-laser-based weapon 152. Like
weapon 31 (FIG. 1), weapon 152 may be implemented by any firearm
(i.e., hand-gun, rifle, shotgun, etc.) and/or a non-lethal weapon
(i.e., pepper spray, tear gas, stun gun, etc.) that may be utilized
by trainees 26 in the course of duty. However, rather than a laser
insert, weapon 152 is outfitted with at least two tracking markers
154. Simulation system 150 further includes a detection subsystem
formed from multiple tracking cameras 156 encircling, and desirably
positioned above, participation location 24.
[0125] In a preferred embodiment, tracking markers 154 are
reflective markers coupled to weapon 152 that are detectable by
tracking cameras 156. Thus, tracking cameras 156 can continuously
track the movement of weapon 152. Continuous tracking of weapon 152
provides a ready "aim trace," whereby the position of weapon 152 (or
even trainee 26) can be monitored and then replayed during a
debrief. Reflective tracking markers 154 require no power, and
tracking cameras 156 can track movement of weapon 152 in three
dimensions, as opposed to two dimensions for projected laser beam
tracking. In addition, reflective tracking is not affected by metal
objects in close proximity, and reflective tracking operates at a
very high update rate.
[0126] Accurate reflective tracking calls for a minimum of two
reflective markers 154 per weapon 152 and at least three tracking
cameras 156, although four to six tracking cameras 156 are
preferred. Each of tracking cameras 156 emits light (often infrared
light) from directly next to the lens of tracking camera 156.
Reflective tracking markers 154 then reflect the light back to
tracking cameras 156. A tracking processor (not shown) at
workstation 30 then performs various calculations and combines each
view from tracking cameras 156 to create a highly accurate
three-dimensional position for weapon 152. Of course, as known to
those skilled in the art, a calibration process is required for both
tracking cameras 156 and weapon 152, and if any of tracking cameras
156 are moved or bumped, simulation system 150 should be
recalibrated.
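The combination of camera views into a three-dimensional position can be sketched with a standard multi-ray triangulation: each tracking camera contributes a ray toward a reflective marker, and the least-squares point closest to all rays is taken as the marker position. The patent does not disclose the actual calculations; this is one well-known possibility:

```python
import numpy as np

# Sketch of multi-camera triangulation: for rays (origin o_i, unit
# direction d_i), minimize the summed squared distance to all rays by
# solving the normal equations sum(P_i) x = sum(P_i o_i), where P_i
# projects onto the plane normal to d_i.

def triangulate(origins, directions):
    """Least-squares intersection point of two or more camera rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector perpendicular to ray
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)
```

At least two non-parallel rays are needed for a unique solution, which is consistent with the minimum of three tracking cameras 156 called for above.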
[0127] Weapon 152 may be a pistol, for example, loaded with blank
rounds. Actuation of weapon 152 is thus detectable by tracking
cameras 156 as a sudden movement of tracking markers 154 caused by
the recoil of weapon 152 in a direction opposite from the direction
of the "shot" fired, as signified by a bi-directional arrow 158. By
using such a technique, multiple weapons 152 can be tracked in
participation location 24, and the position of weapons 152, as well
as the projection of where a "shot" fired would go, can be
calculated with high accuracy. Additional markers 154 may
optionally be coupled to trainee 26, for example, on the head
region to track trainee 26 movement and to correlate the movement
of trainee 26 with the presented scenario.
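Detecting actuation as a sudden movement of tracking markers 154 might, for example, flag a shot whenever the marker's frame-to-frame speed spikes well above its recent average; the thresholds below are invented for the sketch:

```python
# Illustrative recoil-based shot detection: a shot is flagged when the
# marker speed exceeds both an absolute floor and a multiple of the
# running average speed. All constants are assumptions.

def detect_shots(positions, dt=1.0 / 120.0, spike_factor=5.0, floor=0.5):
    """Return frame indices where a recoil-like speed spike occurs.

    positions: sequence of (x, y, z) marker positions, one per frame.
    """
    shots = []
    speeds = []
    for i in range(1, len(positions)):
        (x0, y0, z0), (x1, y1, z1) = positions[i - 1], positions[i]
        v = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5 / dt
        baseline = sum(speeds) / len(speeds) if speeds else 0.0
        if v > max(floor, spike_factor * baseline):
            shots.append(i)
        speeds.append(v)
    return shots
```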
[0128] If weapon 152 is one that does not typically recoil when
actuated, weapon 152 could further be configured to transmit a
signal, via a wired or wireless link, indicating actuation of
weapon 152. Alternatively, a weapon may be adapted to include both
a laser insert and tracking markers, both of which may be employed
to detect actuation of the weapon.
[0129] FIG. 14 shows a simplified block diagram of a computing
system 200 for executing a scenario provision process 202 to
generate a scenario for playback in a simulation system, such as
those described above. As mentioned in connection with FIG. 6, the
present invention contemplates the provision of custom authoring
capability of scenarios to the training organization. To that end,
the present invention entails scenario creation code executable on
computing system 200 and methodology for providing a scenario for
use in the simulation systems described above.
[0130] Traditional training authoring software for instructional
use-of-force training and military simulation can provide
three-dimensional components. That is, conventional authoring
software enables the manipulation of three-dimensional geometry
that represents, for example, human beings. However, due to current
technological limitations, computer-generated human characters lack
realism in both look and movement, especially in real-time
applications. If a trainee believes they are shooting a non-person,
rather than an actual person, they may be more likely to use deadly
force, even when deadly force is unwarranted. Consequently, a
trainee having trained with video game-like "cartoon" characters,
may overreact when faced with minimal or non-threats. Similarly,
the trainee may be less effective against real threats.
[0131] Other current training approaches utilize interactive
full-frame video. This type of video can provide very realistic
human look and movement, at least on single screen applications.
However, simulations based on full-frame video have limitations
with respect to branching because the producers of such content
must film every possible branch that may be needed during the
simulation. In a practical setting, this means that training
courseware becomes increasingly difficult to film as additional
threats (i.e., characters) are added. The usual practice is to set
up a branching point within the video, then further down the
timeline, set up another branching point. This effectively limits
the number of characters "on-screen" at any one time to usually a
maximum of one or two. Moreover, such video has limited ability for
reuse since the actions of the actors are not independent from the
background. For video-based applications within the multi-screen
simulation systems described above, these limitations are
unacceptable.
[0132] As discussed in detail below, the scenario creation code
permits a scenario developer to construct situations that can be
displayed on screens 22 (FIG. 1) from stock footage without the
demands of performing extensive camera work. The present invention
may be utilized to create scenarios for the simulation systems
described above, as well as other use-of-force training and
military simulation systems. Use-of-force training can include
firearms as well as less lethal options, such as chemical spray,
TASER, baton, and so forth. In addition, the present invention may
be utilized to create scenarios for playback in other playback
systems that are not related to use-of-force or military training,
such as teaching or behavioral therapy environments, sales
training, and the like. Moreover, the present invention may be
adapted for scenario creation for use within video games.
[0133] Computing system 200 includes a processor 204 on which the
methods according to the invention can be practiced. Processor 204
is in communication with a data input 206, a display 208, and a
memory 210 for storing at least one scenario 211 (discussed below)
generated in response to the execution of scenario provision
process 202. These elements are interconnected by a bus structure
212.
[0134] Data input 206 can encompass a keyboard, mouse, pointing
device, and the like for user-provided input to processor 204.
Display 208 provides output from processor 204 in response to
execution of scenario provision process 202. Computing system 200
can also include network connections, modems, or other devices used
for communications with other computer systems or devices.
[0135] Computing system 200 further includes a computer-readable
storage medium 214. Computer-readable storage medium 214 may be a
magnetic disc, optical disc, or any other volatile or non-volatile
mass storage system readable by processor 204. Scenario provision
process 202 is executable code recorded on computer-readable
storage medium 214 for instructing processor 204 to create scenario
211 for interactive use in a scenario playback system for
visualization and interactive use by trainees 26 (FIG. 1). A
database 203 may be provided in combination with scenario provision
process 202. Database 203 includes actor video clips, objects,
sounds, background images, and the like that can be utilized to
create scenario 211.
[0136] FIG. 15 shows a flow chart of scenario provision process
202. Process 202 is executed to create scenario 211 for playback in
a simulation system, such as those described above. For clarity of
illustration, process 202 is executed to create scenario 211 for
playback in three hundred degree surround simulation system 108
(FIG. 9). However, process 202 may alternatively be executed to
create scenario 211 for full surround simulation system 20 (FIG.
1), half surround simulation system 106 (FIG. 8), and other single
screen or multi-screen simulation systems. In general, process 202
allows a scenario author to customize scenario 211 by choosing and
combining elements, such as actor video clips, objects, sounds, and
background images. Process 202 further allows the scenario author
to define the logic (i.e., a relationship between the elements)
within scenario 211.
[0137] Scenario provision process 202 begins with a task 216. At
task 216, process 202 is initiated. Initiation of process 202
occurs by conventional program start-up techniques and yields the
presentation of a main window on display 208 (FIG. 14).
[0138] Referring to FIG. 16 in connection with task 216, FIG. 16
shows a screen shot image 218 of a main window 220 presented in
response to execution of scenario provision process 202. Main
window 220 is the primary opening view of process 202, and includes
a number of sub-windows such as a scenario layout window 222, a
library window 224, a scenario logic window 226, and a properties
window 228. Main window 220 further includes a number of user
fields, referred to as buttons, for determining the behavior of
process 202 and controlling its execution. The functions of the
sub-windows and buttons within main window 220 will be revealed
below in connection with the execution of scenario provision
process 202. In response to task 216, scenario provision process
202 awaits receipt of commands from a scenario author (not shown)
in order to generate scenario 211 (FIG. 14).
[0139] Referring to FIG. 15, a task 230 is performed in response to
the receipt of a first input, via data input 206 (FIG. 14), from the
scenario author. The first input indicates choice of a background
image for scenario 211.
[0140] Referring to FIG. 17 in connection with task 230, FIG. 17
shows a screen shot image 232 of library window 224 from main
window 220 (FIG. 16) exposing a list 233 of background images 234
for scenario 211 (FIG. 14). Interactive buttons within library
window 224 can include a "background images" button 236, an
"actors" button 238, and a "behaviors" button 240. Additional
buttons include a "new folder" button 242 and a "create new" button
244. List 233 is revealed when the scenario author clicks on
background images button 236. As shown, list 233 may be organized
in folders representing image categories 246, such as rural, urban,
interior, and the like. However, it should be understood that list
233 may be organized in various ways pertinent to the particular
organization executing scenario provision process 202 with the
creation of new or different folders and image categories 246.
[0141] Background images 234 may be chosen from those provided
within list 233 stored in database 203 (FIG. 14). Alternatively,
new background images 234 may be imported utilizing "create new"
button 244. In a preferred embodiment, background images 234 can be
obtained utilizing a camera and creating still images within an
actual, or real, environment. Background images 234 may be in a
panoramic format utilizing conventional panoramic photographic
techniques and processing for use within the large field-of-view of
three hundred degree surround simulation system 108 (FIG. 9). The
creation, editing, and storage of background images 234 will be
described in greater detail in connection with a background editor
illustrated in FIGS. 27-29.
[0142] The scenario author may utilize a conventional pointer 248
to point to one of background images 234. A short description 250,
in the form of text and/or a thumbnail image, may optionally be
presented at the bottom of library window 224 to assist the
scenario author in his or her choice of one of background images
234. Once the scenario author has chosen one of background images
234, the scenario author can utilize a conventional drag-and-drop
technique by clicking on one of background images 234 and dragging
it into scenario layout window 222 (FIG. 16). Those skilled in the
art will recognize that other conventional techniques, rather than
drag-and-drop, may be employed for choosing one of background
images 234 and placing it within scenario layout window 222.
[0143] Referring to FIG. 18 in connection with task 230 (FIG. 15)
of scenario provision process 202, FIG. 18 shows a screen shot
image 252 of main window 220 following selection of one of background
images 234 provided in list 233 (FIG. 17). A first background image
234' is shown in scenario layout window 222. First background image
234' is presented in five adjacent panels 254 within scenario
layout window 222. These five adjacent panels 254 correspond to the
five adjacent screens 22 (FIG. 9) of three hundred degree surround
simulation system 108 (FIG. 9). As shown, first background image
234' can be seamlessly presented across panels 254, hence the five
screens 22 of system 108. A sixth panel 256 in scenario layout
window 222 may include a portion of one of background images 234
when creating scenario 211 (FIG. 14) for utilization within full
surround simulation system 20 (FIG. 1).
[0144] Referring back to scenario provision process 202 (FIG. 15),
following task 230 in which first background image 234' (FIG. 18)
is chosen and displayed in scenario layout window 222 (FIG. 18),
process 202 proceeds to a video clip selection segment 258. Video
clip selection segment 258 includes a task 260. Task 260 is
performed in response to the receipt of a second input, via data
input 206 (FIG. 14), from the scenario author. The second input
indicates selection of an actor that may be utilized within
scenario 211.
[0145] Referring to FIG. 19 in connection with task 260, FIG. 19
shows a screen shot image 262 of library window 224 from main
window 220 (FIG. 16) exposing a list 264 of actors 266 for scenario
211. List 264 is revealed when the scenario author clicks on actors
button 238. List 264 may be organized in folders representing actor
categories 268, such as friendlies, hostiles, targets, and so
forth. However, it should be understood that list 264 may be
organized in various ways pertinent to the particular organization
executing scenario provision process 202 with the creation of new
or different folders and actor categories 268.
[0146] Actors 266 may be chosen from those provided within list 264
stored in database 203 (FIG. 14). Alternatively, new actors 266 may
be imported utilizing "create new" button 244, and importing one or
more video clips of an actor or actors performing activities, or
animation sequences. In a preferred embodiment, video clips of
actors 266 can be obtained by filming an actor against a blue or
green screen, and performing post-production processing to create a
"mask" or "matte", of the area that the actor occupies against the
blue or green screen. The creation, editing, and storage of video
clips of actors 266 will be described in greater detail in
connection with FIGS. 30-34.
[0147] At task 260 of process 202, the scenario author may utilize
pointer 248 to point to one of actors 266, for example a first
actor 266', labeled "Offender 1". A short description 270, in the
form of text and/or a thumbnail image, may optionally be presented
at the bottom of library window 224 to assist the scenario author
in his or her selection of one of actors 266.
[0148] With reference back to process 202 (FIG. 15), once the
scenario author has selected one of actors 266 at task 260, video
clip selection segment 258 proceeds to a task 272. Task 272 is
performed in response to the receipt of a third input, via data
input 206 (FIG. 14), from the scenario author. The third input
indicates assignment of a behavior to the selected one of actors
266.
[0149] Referring to FIG. 20 in connection with task 272, FIG. 20
shows a screen shot image 274 of library window 224 from main
window 220 (FIG. 16) exposing a list 276 of behaviors 278 for
assignment to an actor 266 (FIG. 19) from list 264 (FIG. 19). In
this exemplary illustration, list 276 may be revealed when the
scenario author clicks on behaviors button 240. List 276 may be
organized in folders representing behavior categories 280, such as
aggressive, alert, civil, and so forth. However, it should be
understood that list 276 may be organized in various ways pertinent
to the particular organization executing scenario provision process
202 with the creation of new or different folders and behavior
categories 280.
[0150] In accordance with a preferred embodiment of the present
invention, each of behaviors 278 within list 276 is the aggregate
of actions and/or movements made by an object irrespective of the
situation. Behaviors 278 within list 276 are not linked with
particular actors 266 (FIG. 19). Rather, they are the aggregate of
possible behaviors provided within database 203 (FIG. 14) that may
be assigned to particular actors. For example, one of behaviors
278, i.e., a first behavior 278' labeled "Hostile A", may be a
hostile behavior that includes stand, shoot, and fall if shot, as
indicated by its description 282. By way of another example, a
second behavior 278'', labeled "Civil A" may be a civil, or
non-hostile, behavior that includes stand, turn, and flee. Again,
it is important to note that behaviors 278 are not linked with
particular actors, but rather are defined by the provider of
scenario provision process 202 as possible actions and/or movements
that may be undertaken within scenario 211.
[0151] List 264 (FIG. 19) of actors 266 (FIG. 19) is illustrated
herein to show the presentation of an aggregate of actors 266 (FIG.
19) that may be selected when creating scenario 211. Similarly,
list 276 of behaviors 278 is illustrated herein to show the
presentation of an aggregate of behaviors 278 (FIG. 20) that may be
assigned to actors 266 when creating scenario 211. In actuality,
certain behaviors 278 can only be assigned to actors 266 if the
actors 266 were initially filmed against a green or blue screen
performing those behaviors 278. That is, each of behaviors 278
represents a script that may be performed by any of a number of
actors 266 and filmed to create video clips for use within scenario
provision process 202 (FIG. 15). Thus, a particular one of actors
266 may support a subset of behaviors 278 within list 276, rather
than the totality of behaviors 278 in list 276.
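A minimal data model for this constraint, with hypothetical names and sample entries, would treat each behavior as a script and offer, for a given actor, only the intersection of the behavior list with the clips that actor was actually filmed performing, as in the drop-down menu of FIG. 21:

```python
# Hypothetical data model: behaviors are scripts; an actor supports only
# the behaviors for which green/blue-screen clips exist in database 203.
# The sample entries below are invented for illustration.

BEHAVIORS = {
    "Hostile A": "stand, shoot, fall if shot",
    "Civil A": "stand, turn, flee",
}

ACTOR_CLIPS = {
    "Offender 1": {"Hostile A"},              # filmed only as hostile
    "Bystander 1": {"Civil A", "Hostile A"},  # filmed performing both
}

def supported_behaviors(actor):
    """Behaviors offerable in the drop-down menu for this actor."""
    return sorted(ACTOR_CLIPS.get(actor, set()) & set(BEHAVIORS))
```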
[0152] Referring now to FIG. 21 in connection with task 272 (FIG.
15) of scenario provision process 202 (FIG. 15), FIG. 21 shows a
screen shot image 284 of an exemplary drop-down menu 286 of
behaviors 278 supported by a selected one of the actors 266.
Drop-down menu 286 represents a subset of behaviors 278 in which
the selected one of actors 266 was filmed and for which video clips
of those behaviors 278 exist in database 203 (FIG. 14). When the
scenario author selects, for example, first actor 266' (FIG. 19) at
task 260, drop-down menu 286 may appear to facilitate assignment of
one of behaviors 278. By utilizing pointer 248 to point to and
select one of behaviors 278, the scenario author may assign one of
behaviors 278, for example, first behavior 278', to first actor
266'.
[0153] Although the above description indicates the selection of
one of actors 266 and the subsequent assignment of one of behaviors
278 to the selected actor 266, it should be understood that the
present invention enables the opposite occurrence. For example, the
scenario author may select one of behaviors 278 from list 276. In
response, a drop-down menu may appear that includes a subset of
actors 266 from list 264 (FIG. 19), each of which supports the
selected one of behaviors 278. The scenario author may subsequently
select one of actors 266 that supports the selected one of
behaviors 278.
[0154] With reference back to scenario provision process 202 (FIG.
15), following behavior assignment task 272, process flow proceeds
to a task 288. At task 288, the selected actor and the assigned
behavior, i.e., first actor 266' (FIG. 19) performing first
behavior 278', are combined with the selected background image,
i.e., first background image 234' (FIG. 18), in scenario layout
window 222 (FIG. 18). It should be understood that task 288 causes
the combination of first background image 234' with video clips
corresponding to first actor 266' performing first behavior 278'.
Due to the blue or green screen filming technique of first actor
266', first actor 266' is a mask portion that forms a foreground
image over first background image 234', with first background image
234' being visible in the transparent portion (the blue or green
screen background) of the video clips of first actor 266'.
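The combining of task 288 amounts to per-pixel alpha compositing: the mask portion of the actor clip overlays the background, while the transparent (keyed-out blue or green screen) portion lets the background show through. The following sketch operates on small nested-list "images"; an actual implementation would use an imaging library:

```python
# Illustrative per-pixel alpha blend: alpha 1.0 is the actor mask
# portion, alpha 0.0 is the transparent (keyed-out) portion.

def composite(background, actor, alpha):
    """background, actor: rows of (r, g, b); alpha: rows of 0..1 mask."""
    out = []
    for bg_row, fg_row, a_row in zip(background, actor, alpha):
        row = []
        for bg, fg, a in zip(bg_row, fg_row, a_row):
            row.append(tuple(round(a * f + (1 - a) * b)
                             for f, b in zip(fg, bg)))
        out.append(row)
    return out
```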
[0155] In a preferred embodiment, the scenario author can utilize a
conventional drag-and-drop technique by clicking on first actor
266' and dragging it into scenario layout window 222. By utilizing
the drag-and-drop technique, the scenario author can determine a
location within first background image 234' in which the author
wishes first actor 266' to appear. Those skilled in the art will
recognize that other conventional techniques, rather than
drag-and-drop, may be employed for choosing one of actors 266 and
placing it within scenario layout window 222.
[0156] In addition, the scenario author can resize first actor 266'
relative to first background image 234' to characterize a distance
of first actor 266' from trainee 26 (FIG. 9) utilizing simulation
system 108 (FIG. 9). For example, the scenario author may alter the
pixel dimension of the digital image of first actor 266' by using
up/down keys on the keyboard of data input 206 (FIG. 14).
Alternatively, the scenario author may select first actor 266'
within first background image 234', position pointer 248 (FIG. 17)
over a conventional selection handle displayed around first actor
266', and resize first actor 266' by clicking on the handle and
dragging. In yet another alternative embodiment, scenario provision
process 202 may enable the entry of a desired distance of first
actor 266' from trainee 26. Process 202 may then automatically
calculate a height of first actor 266' within first background
image 234' relative to the desired distance.
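The automatic height calculation mentioned above could use simple perspective scaling from a calibrated reference, e.g., an actor filmed at a known distance occupying a known pixel height; the reference values here are assumptions for illustration:

```python
# Hedged sketch: on-screen actor height shrinks in inverse proportion
# to the desired distance from the trainee. Reference values (an actor
# at 3 m spanning 600 px) are invented for this example.

def actor_pixel_height(distance_m, ref_distance_m=3.0, ref_height_px=600):
    """Pixel height of the actor within the background image."""
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    return round(ref_height_px * ref_distance_m / distance_m)
```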
[0157] Following combining task 288, scenario provision process 202
proceeds to a query task 290. At query task 290, the scenario
author determines whether scenario 211 is to include another one of
actors 266 (FIG. 19). When another one of actors 266 is to be
utilized within scenario 211 (FIG. 14), process 202 loops back to
task 260 so that another one of actors 266, for example, a second
actor 266'' (FIG. 19) is selected, assignment of one of behaviors
278 (FIG. 20) is made at task 272, for example, second behavior
278'' (FIG. 20), and video clips of second actor 266'' performing
second behavior 278'' are combined with first background image 234'
(FIG. 18). Consequently, repetition of tasks 260, 272, and 288
enables the scenario author to determine a quantity of actors 266
that would be appropriate for scenario 211.
[0158] FIG. 22 shows a screen shot image 294 of a portion of main
window 220 following selection of first and second actors 266' and
266'', respectively, and their associated first and second
behaviors 278' and 278'' (FIG. 20), respectively, for scenario 211.
Since each of first and second actors 266' and 266'' is defined as
a mask portion, or matte, during post-production processing, each
of first and second actors 266' and 266'' overlays first background
image 234'.
[0159] It should be noted that both first and second actors 266'
and 266'' appear to be behind portions of first background image
234'. For example, first actor 266' appears to be partially hidden
by a rock 296, and second actor 266'' appears to be partially
hidden by shrubbery 298. During a background editing process,
portions of first background image 234' can be specified as
foreground layers. Thus rock 296 and shrubbery 298 are each defined
as a foreground layer within first background image 234'. When
regions within a background image are defined as foreground layers,
these foreground layers will overlay the mask portion of the video
clips corresponding to first and second actors 266' and 266''. This
layering feature is described in greater detail in connection with
background editing of FIGS. 27-29.
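The layering rule described above, with the background image at the bottom, actor mask portions above it, and marked foreground layers on top, can be illustrated with a minimal painter's-algorithm sketch. The color and layer names here are hypothetical stand-ins for pixel data.

```python
def composite_pixel(layers):
    """Painter's algorithm for a single pixel.

    `layers` is ordered bottom (background image) to top (foreground
    layer); each entry is a color, or None where that layer is
    transparent at this pixel.  The topmost non-transparent layer wins.
    """
    color = None
    for layer_color in layers:
        if layer_color is not None:
            color = layer_color
    return color

# Where a region such as rock 296 was marked as a foreground layer it
# hides the actor; elsewhere the actor's mask portion overlays the
# background image.
```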
[0160] With reference back to FIG. 15, when a determination is made
at query task 290 that no further actors 266 are to be selected,
program flow proceeds to a task 300. At task 300, the scenario
author has the opportunity to build the scenario logic flow for
scenario 211 (FIG. 14). That is, although actors and behaviors have
been selected, there is as yet no definition of when the actors may
appear, nor of the interaction, or lack thereof, between the actors
and behaviors. That capability is provided to the scenario author to
further customize scenario 211 in accordance with his or her
particular training agenda.
[0161] Referring to FIGS. 23-24 in connection with task 300, FIG.
23 shows a screen shot image 302 of scenario logic window 226 from
main window 220 (FIG. 16) for configuring the scenario logic of
scenario 211 (FIG. 14), and FIG. 24 shows a table 304 of a key of
exemplary symbols 306 utilized within scenario logic window 226.
Symbols 306 represent actions, events, and activities within a
logic flow for scenario 211. By interconnecting symbols 306 within
scenario logic window 226, the "logic", or relationship between the
elements can be readily constructed.
[0162] Table 304 includes a "start point" symbol 308, an "external
command" symbol 310, a "trigger" symbol 312, an "event" symbol 314,
an "actor/behavior" symbol 316, an "ambient sound" symbol 318, and
a "delay" symbol 320. Symbols 306 are provided herein for
illustrative purposes. Those skilled in the art will recognize that
symbols 306 could take on a great variety of shapes. Alternatively,
color coding could be utilized to differentiate the various
symbols.
[0163] As shown in FIG. 23, a scenario logic flow 322 for scenario
211 (FIG. 14) includes a number of interconnected symbols 306.
Start point symbol 308 is automatically presented within scenario
logic window 226, and provides command and control to the scenario
playback system, in this case three hundred degree surround
simulation system 108 (FIG. 9), to load and initialize scenario
211. Actor/behavior symbol(s) 316 may appear in scenario logic
window 226 when actors 266 (FIG. 19) performing behaviors 278 (FIG.
20) are combined with one of background images 234. However,
actor/behavior symbol(s) 316 are "floating" or unconnected with
regard to any other symbols appearing in scenario logic window 226
until the scenario author creates those connections.
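The floating-until-connected behavior of symbols in scenario logic window 226 suggests a simple directed-graph representation. The sketch below, with assumed class and method names not drawn from the disclosure, models symbols as nodes and author-drawn solid arrows as edges.

```python
from dataclasses import dataclass, field

@dataclass
class LogicSymbol:
    kind: str    # e.g. "start", "external_command", "trigger", "event",
                 # "actor_behavior", "ambient_sound", or "delay"
    label: str
    successors: list = field(default_factory=list)

    def connect(self, other):
        """Record an author-drawn interconnection (a solid arrow)."""
        self.successors.append(other)

start = LogicSymbol("start", "Start point")
offender = LogicSymbol("actor_behavior", "Offender 1")
# The actor/behavior symbol floats unconnected until the author links it:
start.connect(offender)
```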
[0164] Interactive buttons within scenario logic window 226 can
include an "external command" button 324, a "timer" button 326, and
a "sound" button 328. External command symbol 310 is created in
scenario logic window 226 when the scenario author clicks on
external command button 324. External commands are interactions
that may be created within scenario logic flow 322 that occur from
outside of simulation system 108 (FIG. 9). These external commands
may be stored within database 203 (FIG. 14), and may be listed,
for example, in properties window 228 (FIG. 16) of main window 220
(FIG. 16) when external command symbol 310 is created in scenario
logic window 226. The scenario author can then select one of the
external commands listed in properties window 228. Exemplary
external commands can include an instructor start command, which
starts the motion video of scenario 211, and a shoot command, which
causes an actor to be shot, although not by trainee 26 (FIG. 1).
Other exemplary external commands could include initiating a shoot
back device toward trainee 26, initiating random appearance of
another actor, initiating a specialized sound, and so forth. In
operation of scenario 211, these external commands can be displayed
for ease of use by the instructor.
[0165] Delay symbol 320 is created in scenario logic window 226
when the scenario author clicks on timer button 326. The use of
timer button 326 allows the scenario author to input a time delay
into scenario logic flow 322. Appropriate text may appear in, for
example, properties window 228 of main window 220 when delay symbol 320
is created in scenario logic window 226. This text can allow the
author to enter a duration of the delay, or can allow the author to
select from a number of pre-determined durations of the delay.
[0166] Ambient sound symbol 318 is created in scenario logic window
226 when the scenario author clicks on sound button 328. The use of
sound button 328 allows the scenario author to input ambient sound
into scenario logic flow 322. Text may appear in, for example,
properties window 228 of main window 220 when ambient sound symbol 318
is created in scenario logic window 226. This text may be a list of
sound files that are stored within database 203 (FIG. 14). The
scenario author can then select one of the sound files listed in
properties window 228. Exemplary sound files include wilderness
sounds, warfare sounds, street noise, traffic, and so forth.
Alternatively, properties window 228 may present a browse
capability when the scenario author clicks on sound button 328 so
that the author is enabled to browse within computing system 200
(FIG. 14) or over a network connection for a particular sound
file.
[0167] Trigger symbol 312 within scenario logic flow 322 represents
notification to actor/behavior symbol 316 that something has
occurred. In contrast, event symbol 314 within scenario logic flow 322
represents an occurrence of something within an actor's behavior
that will cause a reaction within scenario logic flow 322. In this
exemplary embodiment, trigger symbol 312 and event symbol 314 can
be generated when the scenario author "right clicks" on
actor/behavior symbol 316.
[0168] FIG. 25 shows a screen shot image 330 of an exemplary
drop-down menu 332 of events 334 associated with scenario logic
window 226 (FIG. 23). When the scenario author "right clicks" on
actor/behavior symbol 316 representing first actor 266', drop-down
menu 332 is revealed and one of events 334 can be selected.
Drop-down menu 332 reveals a set of events 334 that can occur
within scenario logic flow 322 in response to an actor's behavior.
By utilizing pointer 248 to point to and select one of events 334,
the scenario author may assign one of events 334, for example, a
"Fall" event 334', to first actor 266' within scenario logic flow
322.
[0169] FIG. 26 shows a screen shot image 336 of exemplary drop-down
menu 332 of triggers 338 associated with scenario logic window 226
(FIG. 23). When the scenario author "right clicks" on
actor/behavior symbol 316 representing second actor 266'',
drop-down menu 332 is again revealed and one of the listed triggers
338 can be selected. Drop-down menu 332 reveals a set of triggers
338 that can provide notification to an associated actor/behavior
symbol 316. By utilizing pointer 248 to point to and select one of
triggers 338, the scenario author may assign one of triggers 338,
for example, a "Shot" trigger 338', to second actor 266'' within
scenario logic flow 322.
[0170] Referring back to FIG. 23, the various symbols 306 within
scenario logic flow 322 are interconnected by arrows to define the
various relationships and interactions. Solid arrows 340 represent
the interconnections made by the scenario author, whereas dashed
arrows 342 are automatically generated when events 334 and/or
triggers 338 are assigned to various actor/behavior symbols 316
within scenario logic flow 322.
[0171] Scenario logic flow 322 describes a "script" for scenario
211 (FIG. 14). The "script" is as follows: scenario 211 starts
(Start point 308), the instructor initiates events (Instructor
start 310), ambient sound immediately begins (Ambient Sound 318),
and first actor 266' immediately begins performing his behavior
(Offender 1 316). If first actor 266' falls (Fall 314), a delay is
imposed (Delay 320). Second actor 266'' begins performing his
behavior (Guard 1 316) following expiration of the delay. The
instructor shoots second actor 266'' (Shoot Guard 1 310) which
causes a trigger (Shot 312) notifying second actor 266'' to react.
The reaction of second actor 266'' is logged as an event (Fall
314).
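The "script" recited above can be paraphrased as a short interpreter sketch. This is a deliberately linearized illustration, with the Offender 1 Fall event hardcoded rather than detected from playback, and is not the disclosed playback engine.

```python
def run_scenario(instructor_commands):
    """Trace the exemplary scenario logic flow of FIG. 23."""
    trace = ["load and initialize scenario"]        # Start point 308
    if "instructor start" not in instructor_commands:
        return trace                                # Instructor start 310
    trace += ["ambient sound begins",               # Ambient Sound 318
              "Offender 1 begins behavior"]         # Offender 1 316
    trace.append("event: Offender 1 falls")         # Fall 314 (assumed here)
    trace.append("delay expires")                   # Delay 320
    trace.append("Guard 1 begins behavior")         # Guard 1 316
    if "shoot guard 1" in instructor_commands:      # Shoot Guard 1 310
        trace.append("trigger: Guard 1 shot")       # Shot 312
        trace.append("event: Guard 1 falls")        # Fall 314
    return trace
```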
[0172] Scenario logic flow 322 is highly simplified for clarity of
understanding. However, in general it should be understood that
scenario logic can be generated such that the behavior of a first
actor can affect the behavior of a second actor and/or that an
external command can affect the behavior of either of the actors.
The behaviors of the actors can also be affected by interaction of
trainee 26 within scenario 211. This interaction can occur at the
behavior level of the actors, and is described in greater detail in
connection with FIGS. 33-34.
[0173] Returning to FIG. 15, after scenario logic flow 322 (FIG.
23) is built at task 300, scenario provision process 202 proceeds
to a task 344. At task 344, scenario 211 (FIG. 14) is saved into
memory 210 (FIG. 14). Following task 344, a task 346 is performed.
At task 346, scenario 211 is displayed on the scenario playback
system, for example, three hundred degree surround simulation
system 108 (FIG. 9), for interaction with trainee 26 (FIG. 1).
[0174] Scenario provision process 202 includes ellipses 348
separating scenario save task 344 and scenario display task 346.
Ellipses 348 indicate an omission of standard processing tasks for
simplicity of illustration. These processing tasks may include
saving scenario 211 in a format compatible for playback at
simulation system 108, writing scenario 211 to a storage medium
that is readable by simulation system 108, conveying scenario 211
to simulation system 108, and so forth. Following task 346,
scenario provision process 202 exits.
[0175] Referring to FIGS. 27-29, FIG. 27 shows a screen shot image
350 of a background editor window 352 with a pan tool 354 enabling
a pan capability. FIG. 28 shows a screen shot image 356 of
background editor window 352 with a foreground marking tool 358
enabling a layer capability, and FIG. 29 shows a screen shot image
360 of background editor window 352 with first background image
234' selected for saving into database 203 (FIG. 14).
[0176] As mentioned briefly above, background images 234 (FIG. 17)
can be obtained utilizing a camera and creating still images within
an actual, or real environment. These still images are desirably in
a panoramic format. In accordance with the present invention, a
still image may be manipulated in a digital environment through
background editor window 352 to achieve a desired one of background
images 234.
[0177] Interactive buttons within background editor window 352
include a "load panoramic" button 362, a "pan" button 364, and a
"layer" button 366. Load panoramic button 362 allows a user to
browse within computing system 200 (FIG. 14), over a network
connection, or to load from a digital camera, a particular still
image 368. Once selected, still image 368 will be presented on
adjacent panels 370 within background editor window 352, that
represent panels 254 (FIG. 18) within scenario layout window 222
(FIG. 18).
[0178] As illustrated in FIG. 27, the user can click on pan button
364 to reveal pan tool 354. Pan tool 354 allows the user to
manipulate still image 368 horizontally and vertically for optimal
placement of adjacent views within panels 370. A horizontal lock
372 and a vertical lock 373 can be selected after still image 368 has
been manipulated to a desired position. A zoom adjustment element
374 may also be provided to enable the user to move still image 368
inward and outward at an appropriate depth.
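The division of a panned panoramic still image into adjacent panel views can be sketched as follows. The panorama width and panel count used in the example are hypothetical; the disclosure does not specify these values.

```python
def panel_viewports(panorama_width_px, panel_count, pan_offset_px=0):
    """Compute the horizontal slice of the panorama each panel displays.

    Each adjacent panel shows an equal-width window of the still
    image; panning shifts every window by the same offset, wrapping
    around the panorama.  Returns (start, end) pixel columns per
    panel; end may wrap below start when a window crosses the seam.
    """
    panel_width = panorama_width_px // panel_count
    viewports = []
    for i in range(panel_count):
        start = (i * panel_width + pan_offset_px) % panorama_width_px
        end = (start + panel_width) % panorama_width_px
        viewports.append((start, end if end != 0 else panorama_width_px))
    return viewports
```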
[0179] As illustrated in FIG. 28, the user can click on layer
button 366 to reveal foreground marking tool 358. Foreground
marking tool 358 allows the user to cover or "paint" over areas
within still image 368 that he or she wishes to be specified as a
foreground layer. Foreground marking tool 358 may take on a variety
of forms for encircling a region, creating a "feathered" edge,
subtracting a region, and so forth known to those skilled in the
art. In this image, the foreground layer is designated by a shaded
region 376 created by movement of foreground marking tool 358.
Shaded region 376 will be saved as a data file in association with
still image 368 to define a foreground layer 378 (FIG. 29).
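The statement that shaded region 376 will be saved as a data file in association with still image 368 might be realized, in one simple form, as a per-pixel bitmask written beside the image. The file format here is an assumption for illustration, not the disclosed format.

```python
def save_foreground_mask(painted_pixels, image_size, path):
    """Write the painted foreground region as a '0'/'1' bitmask file.

    `painted_pixels` is the set of (x, y) coordinates swept by the
    foreground marking tool; the file holds one character per pixel,
    one row per line.
    """
    width, height = image_size
    rows = ("".join("1" if (x, y) in painted_pixels else "0"
                    for x in range(width))
            for y in range(height))
    with open(path, "w") as fh:
        fh.write("\n".join(rows))
```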
[0180] As illustrated in FIG. 29, after still image 368 has been
manipulated into a desired position, as needed, and foreground
layer(s) 378 have been specified, the user can select save still
image as first background image 234' by conventional procedures
using a "save" button 380. It should be noted that foreground
layers 378 will not appear as shaded region 376 (FIG. 28), but
instead foreground layers 378 within first background image 234'
will appear as the image of the portion of still image 368 that was
marked in FIG. 28. Alternatively, shaded region 376 may be
optionally toggled visible, invisible, or partially
transparent.
[0181] FIG. 30 shows an exemplary table 382 of animation sequences
384 associated with actors 266 for use within scenario provision
process 202 (FIG. 15). Table 382 relates to information stored
within database 203 (FIG. 14) of scenario provision process
202.
[0182] In the context of the following description, animation
sequences 384 are the scripted actions that any of actors 266 may
perform. Video clips 386 may be recorded of actors 266 performing
animation sequences 384 against a blue or green screen. Information
regarding video clips 386 is subsequently recorded in association
with one of actors 266. In addition, video clips 386 are
distinguished by identifiers 388, such as a frame number sequence,
in table 382 characterizing one of animation sequences 384. Thus,
video clips 386 portray actors 266 performing particular animation
sequences 384.
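Table 382's association of actors, animation sequences, and frame-number identifiers can be modeled as a small lookup over records. The actor names and frame ranges below are hypothetical examples, not values from the disclosure.

```python
# A minimal in-memory analogue of table 382: each record associates an
# actor with an animation sequence and a frame-number identifier.
CLIP_TABLE = [
    {"actor": "Offender 1", "sequence": "Stand", "frames": "0001-0120"},
    {"actor": "Offender 1", "sequence": "Fall", "frames": "0121-0180"},
    {"actor": "Guard 1", "sequence": "Duck", "frames": "0001-0090"},
]

def clips_for(actor, sequence):
    """Return identifiers of clips portraying `actor` performing `sequence`."""
    return [record["frames"] for record in CLIP_TABLE
            if record["actor"] == actor and record["sequence"] == sequence]
```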
[0183] A logical grouping of animation sequences 384 defines one of
behaviors 278 (FIG. 20), as shown and discussed in connection with
FIGS. 32-34. When a user wishes to assign one of behaviors 278 to
one of actors 266 at task 272 (FIG. 15) of scenario provision
process 202 (FIG. 15), video clips 386 of the animation sequences
384 that make up the desired one of behaviors 278 must first be
recorded in database 203 (FIG. 14).
[0184] FIGS. 31a-d show an illustration of a single frame 390 of an
exemplary one of video clips 386 undergoing video filming and
editing. Motion picture video filming may be performed utilizing a
standard or high definition video camera. Video editing may be
performed utilizing video editing software for generating digital
"masks" of the actor's performance. Those skilled in the art will
recognize that video clips 386 contain many more than a single
frame. However, only a single frame 390 is shown to illustrate post
production processing that may occur to generate video clips 386
for use with scenario provision process 202 (FIG. 15).
[0185] At FIG. 31a, first actor 266' is filmed against a backdrop
392 having a single color, such as a green or blue screen. At FIG.
31b, a matte 393, sometimes referred to as an alpha channel, is
created that defines a mask portion 394 (i.e., the area that first
actor 266' occupies) and a transparent portion 396 (i.e., the
remainder of frame 390 in which backdrop 392 is visible). At FIG.
31c, zones, illustrated as shaded circular and oval regions 398,
are defined on mask portion 394. In an exemplary embodiment, these
zones 398 are hit zones that provide information so scenario 211
(FIG. 14) can detect discharge of a weapon into one of zones 398.
That is, scenario 211 can determine whether trainee 26 (FIG. 1)
hits or misses a target, such as first actor 266'.
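The matte creation of FIG. 31b can be sketched as a simple chroma-key threshold: pixels near the backdrop color become the transparent portion, and everything else becomes the mask portion. The distance metric and tolerance are assumptions; production keying software uses considerably more sophisticated methods.

```python
def make_matte(frame, key_color, tolerance):
    """Derive a matte (alpha channel) from a green- or blue-screen frame.

    Pixels within `tolerance` of the backdrop color become the
    transparent portion (alpha 0); all remaining pixels form the mask
    portion occupied by the actor (alpha 255).  `frame` is a list of
    rows of (r, g, b) tuples.
    """
    def color_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    return [[0 if color_distance(pixel, key_color) <= tolerance else 255
             for pixel in row]
            for row in frame]
```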
[0186] Zones 398 can be computed using matte 393, i.e., the alpha
channel, as a starting point. For example, in the area of frame 390
where the opacity exceeds approximately ninety-five percent, i.e.,
mask portion 394, it can be assumed that the image asset, i.e.
first actor 266', is "solid" and therefore can be hit by a bullet.
Any less opacity will cause the bullet to "miss" and hit the next
object in the path of the bullet. This hit zone information can be
enhanced by adding different types of zones 398 to different areas
of first actor 266'. For example, FIG. 31c shows circular hit zones
400 and oval hit zones 402. By using differing ones of zones 398,
behavior 278 for first actor 266' can generate an event related to
a strike in one of circular and oval hit zones 400 and 402 that
would affect the behavior's branching. The information regarding
zones 398 is stored in a file of hit zone information for each
frame 390 in a given one of video clips 386 (FIG. 30).
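The ninety-five-percent opacity rule and the typed hit zones described above lend themselves to a small sketch. The function shape and the zone representation are assumptions made for illustration.

```python
def resolve_shot(x, y, alpha, zones, threshold=0.95):
    """Resolve a simulated weapon discharge at pixel (x, y).

    Where the matte's opacity exceeds the threshold the actor is
    treated as solid; lower opacity lets the bullet pass to the next
    object behind.  Typed hit zones refine a solid hit for use in
    behavior branching.  `alpha` is rows of per-pixel opacity (0..1);
    each zone is a dict with a "type" and a "contains" predicate.
    """
    if alpha[y][x] <= threshold:
        return "pass-through"
    for zone in zones:
        if zone["contains"](x, y):
            return "hit:" + zone["type"]
    return "hit"
```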
[0187] At FIG. 31d, single frame 390 is shown with foreground layer
378 overlaying mask portion 394 representing first actor 266'. FIG.
31d is provided herein to demonstrate a situation in which
foreground layer 378 overlies mask portion 394. In such a
circumstance, only those hit zones left uncovered by foreground
layer 378, in this case two circular hit zones 400 and a single
oval hit zone 402, can be hit.
[0188] Referring to FIGS. 32-33, FIG. 32 shows a screen shot image
404 of a behavior editor window 406 showing behavior logic flow 408
for first behavior 278', and FIG. 33 shows a table 410 of a key of
exemplary symbols 412 utilized within behavior editor window 406.
Symbols 412 represent actions, events, activities, and video clips
within behavior logic flow 408. By interconnecting symbols 412
within behavior editor window 406, the "logic", or relationship
between the elements can be readily constructed for one of
behaviors 278.
[0189] Like table 304 (FIG. 24), table 410 includes "start point"
symbol 308, "trigger" symbol 312, and "event" symbol 314. In
addition, table 410 includes an "animation sequence" symbol 414, a
"random" symbol 416 and an "option" symbol 418. Symbols 412 are
provided herein for illustrative purposes. Those skilled in the art
will recognize that symbols 412 could take on a great variety of
shapes. Alternatively, color coding could be utilized to
differentiate the various symbols.
[0190] As shown in FIG. 32, behavior logic flow 408 for first
behavior 278' includes a number of interconnected symbols 412.
Start point symbol 308 is automatically presented within behavior
logic flow 408, and provides command and control to load and
initialize first behavior 278'.
[0191] A branching options window 420 facilitates generation of
behavior logic flow 408. Branching options window 420 includes a
number of user interactive buttons. For example, window 420
includes a "branch" button 422, an "event" button 424, a "trigger"
button 426, a "random" button 428, and an "option" button 430. In
general, selection of branch button 422 allows for a branch to
occur within behavior logic flow 408. Selection of event button 424
results in the generation of event symbol 314, and selection of
trigger button 426 results in the generation of trigger symbol 312
in behavior logic flow 408.
[0192] It should be noted that the definitions of trigger and
event symbols 312 and 314, respectively, when utilized within
behavior logic flow 408 differ slightly from their definitions set
forth in connection with scenario logic flow 322 (FIG. 23). That
is, trigger symbol 312 generated within behavior logic flow 408 is
a notification that something has occurred within that behavior
logic flow 408. A trigger within behavior logic flow 408 becomes an
event within scenario logic flow 322. Similarly, event symbol 314
generated within behavior logic flow 408 is an occurrence of
something that results in a reaction of the actor in accordance
with behavior logic flow 408. An event within behavior logic flow
408 becomes a trigger within scenario logic flow 322.
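The duality described in this paragraph, a behavior-level trigger surfacing as a scenario-level event and vice versa, can be expressed as a one-line translation. The tuple representation is an assumption for illustration.

```python
def cross_boundary(signal):
    """Translate a signal crossing the behavior/scenario boundary.

    A trigger generated inside a behavior logic flow surfaces as an
    event in the scenario logic flow, and an event inside a behavior
    logic flow corresponds to a trigger in the scenario logic flow.
    """
    kind, name = signal
    return ("event" if kind == "trigger" else "trigger", name)

# The Fall trigger generated within first behavior 278' is seen by the
# scenario logic flow as a Fall event.
```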
[0193] Selection of random button 428 results in the generation of
random symbol 416 in behavior logic flow 408. Similarly, selection
of option button 430 results in the generation of option symbol 418
in behavior logic flow 408. The introduction of random and/or
option symbols 416 and 418, respectively, into behavior logic flow
408 introduces random or unexpected properties to a behavior logic
flow. These random or unexpected properties will be discussed in
connection with FIG. 34.
[0194] A properties window 432 allows the selection of animation
sequences 384. In addition, properties window 432 allows the
behavior author to assign various properties to the selected one of
animation sequences 384. These various properties can include, for
example, selection of a particular sound associated with a gunshot.
When one of animation sequences 384 is generated, animation
sequence symbol 414 will appear in behavior editor window 406. The
various symbols 412 will be presented in behavior editor window 406
as "floating" or unconnected with regard to any other symbols 412
appearing in window 406 until the behavior author creates
those connections. Symbols 412 within behavior logic flow 408 are
interconnected by arrows to define the various relationships
and interactions.
[0195] Behavior logic flow 408 describes a "script" for one of
behaviors 278 (FIG. 20), in this case first behavior 278'. The
"script" is as follows: behavior flow 408 starts (Start point 308)
and animation sequence 384 is presented (Stand 414). If an event
occurs (Shot 314), a trigger is generated (Fall 312), and another
animation sequence 384 is presented (Fall 414). The trigger (Fall
312) is communicated as needed within scenario logic flow 322 (FIG.
23) as an event, Fall 314 (FIG. 23).
[0196] FIG. 34 shows a partial screen shot image 434 of behavior
editor window 406 showing a behavior logic flow 436 for another one
of behaviors 278. Behavior logic flow 436 is significantly more
complex than behavior logic flow 408 (FIG. 32). However, flow 436
is readily constructed utilizing symbols 412 (FIG. 33), and
introduces various random properties.
[0197] The "script" for behavior logic flow 436 is as follows:
behavior flow 436 starts (Start point 308) and animation sequence
384 is presented (Duck 414). Next, a random property is introduced
(Random 416). The random property (Random 416) allows behavior
logic flow 436 to branch to either an optional side logic flow
(Side 418) or an optional stand logic flow (Stand 418). Option
symbols 418 indicate that logic flow can include either side logic
flow, stand logic flow, or both side and stand logic flows when
implementing the random property (Random 416).
[0198] First reviewing side logic flow (Side 418), animation
sequence 384 is presented (From Duck: Side & Shoot 414). This
translates to "from the duck position, move sideways and shoot."
Next, animation sequence 384 is presented (From Side: Shoot 414),
meaning from the sideways position shoot weapon. Next, a random
property (Random 416) is introduced. The random property allows
behavior logic flow 436 to branch and present either animation
sequence 384 (From Side: Shoot 414) or animation sequence 384 (From
Side: Shoot & Duck 414).
[0199] During any of the three animation sequences, (From Duck:
Side & Shoot 414), (From Side: Shoot 414), and (From Side:
Shoot & Duck 414), an event can occur (Shot 314). If an event
occurs (Shot 314), a trigger is generated (Fall 312), and another
animation sequence 384 is presented (From Side: Shoot & Fall).
If another event occurs (Shot 314), another trigger is generated
(Fall 312), and yet another animation sequence 384 (Twitch 414) is
presented. If animation sequence 384 (From Side: Shoot & Duck
414) is presented for a period of time, and no event occurs, i.e.,
Shot 314 does not occur, behavior logic flow 436 loops back to
animation sequence 384 (Duck 414).
[0200] Next reviewing stand logic flow (Stand 418), animation
sequence 384 is presented (From Duck: Stand & Shoot 414). This
translates to "from the duck position, stand up and shoot." Next,
animation sequence 384 is presented (From Stand: Shoot and Duck
414), meaning from the standing position, shoot weapon, then duck.
If an event associated with animation sequences 384 (From Duck:
Stand & Shoot 414) and (From Stand: Shoot and Duck 414) does
not occur, i.e., Shot 314 does not occur, behavior logic flow 436
loops back to animation sequence 384 (Duck 414).
[0201] However, during either of the two animation sequences 384
(From Duck: Stand & Shoot 414) and (From Stand: Shoot and Duck
414), an event can occur (Shot 314). If an event occurs (Shot
314), a trigger is generated (Fall 312), and another animation
sequence 384 is presented (From Stand: Shoot & Fall 414). If
another event occurs (Shot 314), another trigger is generated (Fall
312), and yet another animation sequence 384 (Twitch 414) is
presented.
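The random branching recited above can be condensed into a short sketch. This simplification omits the side flow's inner random branch and the timing-based loop, and the function and sequence names are taken from the figure labels rather than from any disclosed code.

```python
import random

def run_behavior(shots, rng):
    """Condensed sketch of the behavior logic flow of FIG. 34.

    From Duck, a random property selects the side or stand logic
    flow.  One Shot event triggers Fall; a second triggers Twitch;
    with no Shot event the flow loops back to Duck.
    """
    sequence = ["Duck"]
    if rng.choice(["side", "stand"]) == "side":          # Random 416
        sequence += ["From Duck: Side & Shoot", "From Side: Shoot"]
    else:
        sequence += ["From Duck: Stand & Shoot", "From Stand: Shoot and Duck"]
    if shots == 0:
        sequence.append("Duck")                          # loop back
    else:
        sequence.append("Fall")                          # trigger Fall 312
        if shots >= 2:
            sequence.append("Twitch")                    # Twitch 414
    return sequence
```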
[0202] Although only two behavior logic flows for behaviors 278
(FIG. 20) are described herein, it should be apparent that a
variety of customized behavior logic flows can be developed and
stored within database 203 (FIG. 14). When actors 266 (FIG. 19) are
filmed performing particular animation sequences 384 (FIG. 30),
video clips 386 (FIG. 30) associated with these animation sequences
384 can be assembled in accordance with behaviors 278 to form an
actor/behavior definition for scenario 211 (FIG. 14).
[0203] In summary, the present invention teaches a method for
scenario provision in a simulation system that utilizes executable
code operable on a computing system. The executable code is in the
form of a scenario provision process that permits the user to
create new scenarios with the importation of sounds and image
objects, such as panoramic pictures, still digital pictures,
standard and high-definition video files, and green or blue screen
video. Green or blue screen based filming provides for extensive
reusability of content, as individual "actors" can be filmed and
then "dropped" into various settings with various other "actors."
In addition, the program and method permit the user to place the
image objects (for example, actor video clips) in a desired
location within a background image. The program and method further
allow a user to manipulate a panoramic image for use as a
background image in a single or multi-screen scenario playback
system. The program and method permit the user to assign sounds
and image objects to layers so that the user can define which
object is displayed in front of or behind another object. In
addition, the program and method enable the user to readily
construct a scenario logic flow defining a scenario through a
readily manipulated and understandable flowchart-style user
interface.
[0204] Although the preferred embodiments of the invention have
been illustrated and described in detail, it will be readily
apparent to those skilled in the art that various modifications may
be made therein without departing from the spirit of the invention
or from the scope of the appended claims. For example, the process
steps discussed and the images provided herein can take on a great
number of variations and can be performed and shown in a different
order than that which was presented.
* * * * *