U.S. patent application number 12/969844 was filed with the patent office on 2010-12-16 and published on 2012-06-21 for virtual shoot wall with 3D space and avatars reactive to user fire, motion, and gaze direction.
This patent application is currently assigned to LOCKHEED MARTIN CORPORATION. Invention is credited to Jeremy Aker, Eric Burns, David Easter, Ken Lane.
Application Number: 12/969844
Publication Number: 20120156652
Family ID: 46234871
Filed Date: 2010-12-16
Publication Date: 2012-06-21
United States Patent Application 20120156652
Kind Code: A1
Lane; Ken; et al.
June 21, 2012
VIRTUAL SHOOT WALL WITH 3D SPACE AND AVATARS REACTIVE TO USER FIRE,
MOTION, AND GAZE DIRECTION
Abstract
A simulator system includes functionality for dynamically
tracking position and orientation of one or more simulation
participants and objects as they move throughout a capture volume
using an array of motion capture video cameras so that two- or
three-dimensional ("2D" and "3D") views of a virtual environment,
which are unique to each participant's point of view, may be
generated by the system and rendered on a display. In 3D and/or
multi-participant usage scenarios, the unique views are decoded
from a commonly utilized display by equipping the participants with
glasses that are configured with shutter lenses, polarizing
filters, or a combination of both. The object tracking supports the
provision and use of an optical signaling capability that may be
added to an object so that manipulation of the object by the
participant can be communicated to the simulator system over the
optical communications path that is enabled by use of the video
cameras.
Inventors: Lane; Ken (Cary, NC); Aker; Jeremy (Raleigh, NC); Burns; Eric (Chapel Hill, NC); Easter; David (Raleigh, NC)
Assignee: LOCKHEED MARTIN CORPORATION (Orlando, FL)
Family ID: 46234871
Appl. No.: 12/969844
Filed: December 16, 2010
Current U.S. Class: 434/11
Current CPC Class: G09B 9/006 20130101; F41G 3/26 20130101; G09B 9/003 20130101; F41J 9/14 20130101
Class at Publication: 434/11
International Class: F41A 33/00 20060101 F41A033/00
Claims
1. A method for operating a simulation supported on a simulator,
the method comprising the steps of: tracking a participant in the
simulation to determine at least one of position, orientation, or
motion of the participant within a capture volume, the capture
volume being monitored by an optical motion capture system that is
configured to capture positions of participant markers within the
capture volume, the participant markers being positioned at known
locations on the participant; configuring an object with i) object
markers at known locations so that the optical motion capture
system can capture positions of the object markers within the
capture volume and ii) at least one participant-actuated light
source that is disposed at a known location on the object and
monitored by the optical capture system so as to implement an
optical communications path between the object and the optical
motion capture system over which a signal may be transmitted via
actuation of the light source; tracking the object to determine at
least one of position, orientation, or motion of the object within
a capture volume; and dynamically generating a virtual environment
utilized by the simulation, the virtual environment being generated
from the participant's point of view responsively to the
participant tracking and further being generated responsively to
the object tracking and signal transmitted over the optical
communications path.
2. The method of claim 1 including a further step of rendering the
virtual environment onto a display, the display being one of shoot
wall or CAVE.
3. The method of claim 1 in which the object comprises a simulated
weapon and the user-actuated light source is operatively coupled to
a trigger on the weapon, the light source being operated in
response to a trigger pull.
4. The method of claim 3 in which the light source is an IR light
source and the signal is indicative of the weapon being fired.
5. The method of claim 3 including a further step of determining a
trajectory of a simulated discharge of a round from the weapon
using the object tracking.
6. The method of claim 1 in which the optical motion capture system
utilizes an array of video cameras, each of the video cameras
including one or more IR light sources.
7. The method of claim 1 in which the virtual environment includes
one or more avatars that are responsive to the tracked participant,
the avatars being configured to dynamically change gaze or weapon
aim in response to the position, orientation, or motion of the
participant within the capture volume.
8. The method of claim 1 in which the participant tracking
comprises tracking the participant's head.
9. A computer-implemented method for providing a shoot wall
simulation, the method comprising the steps of: tracking a position
within a capture volume of each of one or more participants in the
shoot wall simulation using a motion capture system that is
configured to monitor the capture volume; generating a unique view
for each of one or more participants, each unique view being taken
from a point of view of the respective participant as the
participant moves within the capture volume; superimposing the
unique views onto a display device that is commonly utilized by
each of the one or more participants; tracking a position of a
weapon associated with one or more of the participants; and
detecting operation of the weapon using the motion capture system,
the detecting comprising monitoring actuation of a light affixed to
the weapon, the light being actuated when the weapon is fired.
10. The computer-implemented method of claim 9 in which the unique
views are encoded as 3D views with left-eye and right-eye
images.
11. The computer-implemented method of claim 10 in which the
left-eye and right-eye images are decoded using one of LCD shutter
glasses or polarizing filter glasses.
12. The computer-implemented method of claim 9 in which the
commonly utilized display device utilizes multiple walls in a CAVE
configuration.
13. The computer-implemented method of claim 9 in which the weapon
utilizes at least two markers disposed substantially along a long
axis defined by the barrel of the weapon.
14. The computer-implemented method of claim 9 in which the markers
comprise substantially spherical retro-reflectors.
15. One or more computer-readable storage media containing
instructions which, when executed by one or more processors
disposed in a computing device, implement a simulator system, the
instructions being logically grouped in modules, the modules
comprising: a camera module for interfacing with an array of
optical motion capture video cameras, the array being configured
for optically monitoring a capture volume and for receiving
captured images of tracked simulation participants and weapons
associated with respective participants; a head tracking module for
determining a position of a head of one or more of the tracked
simulation participants within the capture volume using the
captured images; a weapon tracking module for determining a
position of one or more tracked weapons within the capture volume
using the captured images; and a virtual environment generation
module for generating a virtual environment supported by the
simulator system, the virtual environment being corrected for
parallax distortion and trajectory parallax.
16. The one or more computer-readable storage media of claim 15
further comprising a virtual environment rendering module for
rendering the generated virtual environment onto a display.
17. The one or more computer-readable storage media of claim 15 in
which the parallax distortion correction comprises generating a
unique view for each participant based on each participant's point
of view within the capture volume.
18. The one or more computer-readable storage media of claim 15 in
which the trajectory parallax correction comprises determining a
trajectory of a round fired from one or more of the weapons using
the tracked position of the one or more weapons.
19. The one or more computer-readable storage media of claim 15 in
which the virtual environment generation module generates avatars
that are responsive to the position of the participants.
20. The one or more computer-readable storage media of claim 16 in
which the rendering is performed in 3D.
Description
BACKGROUND
[0001] Increased capabilities in computer processing, such as
improved real-time image and audio processing, have aided the
development of powerful training simulators such as vehicle,
weapon, and flight simulators, action games, and engineering
workstations, among other simulator types. Simulators are
frequently used as training devices which permit a participant to
interact with a realistic simulated environment without the
necessity of actually going out into the field to train in a real
environment. For example, different simulators may enable a live
participant, such as a police officer, pilot, or tank gunner to
acquire, maintain, and improve skills while minimizing costs and, in some cases, the risks and dangers that are often associated with live training.
[0002] Current simulators perform satisfactorily in many
applications. However, customers for simulators, such as branches
of the military, law enforcement agencies, industrial and
commercial entities, etc., have expressed a desire for more
realistic simulations so that training effectiveness can be
improved. In addition, simulator customers typically seek to
improve the quality of the simulated training environments
supported by simulators by increasing realism in simulations and
finding ways to make the simulated experiences more immersive. With
regard to shooting simulations in particular, customers have shown
a desire for more accurate and complex simulations that go beyond
the typical shoot/no shoot scenarios that are currently
available.
[0003] This Background is provided to introduce a brief context for
the Summary and Detailed Description that follow. This Background
is not intended to be an aid in determining the scope of the
claimed subject matter nor be viewed as limiting the claimed
subject matter to implementations that solve any or all of the
disadvantages or problems presented above.
SUMMARY
[0004] A simulator system includes functionality for dynamically
tracking position and orientation of one or more simulation
participants and objects as they move throughout a capture volume
using an array of motion capture video cameras so that two- or
three-dimensional ("2D" and "3D") views of a virtual environment,
which are unique to each participant's point of view, may be
generated by the system and rendered on a display. In 3D and/or
multi-participant usage scenarios, the unique views are decoded
from a commonly utilized display by equipping the participants with
glasses that are configured with shutter lenses, polarizing
filters, or a combination of both. The object tracking supports the
provision and use of an optical signaling capability that may be
added to an object so that manipulation of the object by the
participant can be communicated to the simulator system over the
optical communications path that is enabled by use of the video
cameras.
[0005] In various illustrative examples, the simulator system
supports a shoot wall simulation where the simulated personnel
(i.e., avatars) can be generated and rendered in the virtual
environment so they react to the position and/or motion of the
simulation participant. The gaze and/or weapon aim of the avatars,
for example, will move in response to the location of the
participant so that the avatars realistically appear to be looking
and/or aiming their weapons at the participant. The participant's
weapon may be tracked using the object tracking capability by
tracking markers affixed to the weapon at known locations. A light
source affixed to the weapon and operatively coupled to the
weapon's trigger is actuated by a trigger pull to optically
indicate to the simulator system that the participant has fired the
weapon. Using the known location of the weapon gained from the
motion capture, an accurate trajectory of discharged rounds from
the weapon can be calculated and then realistically simulated. Use
of the light source allows the motion capture system to detect
weapon fire without the need for cumbersome and restrictive
conventional wired or tethered interfaces.
[0006] The participant's head is tracked through motion capture of
markers that are affixed to a helmet or other garment/device worn
by the participant when interacting with the simulation. By
correlating head position in the capture volume to the
participant's gaze direction, an accurate estimate can be made as
to where the participant is looking. A dynamic view of the virtual
environment from the participant's point of view can then be
generated and rendered. Such dynamic view generation and rendering
from the point of view of the participant enables the participant
to interact with the virtual environment in a realistic and
believable manner by being enabled, for example, to change
positions in the capture volume to look around an obstacle to
reveal an otherwise hidden target.
[0007] Advantageously, the present simulator system supports a
richly immersive and realistic simulation by enabling interaction with the virtual environment that more closely matches interaction with an actual physical environment. In combination with accurate trajectory simulation, the participant-based point of view gives the virtual environment the appearance and response that would be expected of a real
environment--avatars react with gaze direction and weapon aim as
would their real world counterparts, rounds sent downrange hit
where expected, and the rendered virtual environment has realistic
depth well past the plane of the shoot wall.
[0008] This Summary is provided to introduce a selection of
concepts in a simplified form that are further described below in
the Detailed Description. This Summary is not intended to identify
key features or essential features of the claimed subject matter,
nor is it intended to be used as an aid in determining the scope of
the claimed subject matter.
DESCRIPTION OF THE DRAWINGS
[0009] FIG. 1 shows a pictorial view of an illustrative simulation
environment that may be facilitated by implementation of the
present simulator system with a 3D space and reactive avatars;
[0010] FIG. 2 shows an illustrative implementation of the present
simulator system using CAVE (Cave Automatic Virtual Environment)
configuration;
[0011] FIG. 3 shows an illustrative arrangement in which a capture
volume may be monitored for motion capture using an array of video
cameras;
[0012] FIG. 4 shows an illustrative six degree-of-freedom
coordinate system;
[0013] FIG. 5 shows an illustrative motion capture video
camera;
[0014] FIG. 6 shows a simplified block diagram of illustrative
functional components of a motion capture video camera;
[0015] FIG. 7 shows a set of illustrative markers that are applied
to a helmet worn by the participant at known locations;
[0016] FIG. 8 depicts an illustrative idealized object that is
arranged with multiple spherical retro-reflective markers that are
rigidly fixed to an object at known locations;
[0017] FIG. 9 shows an illustrative example of markers and light
sources as applied to a long arm weapon at known locations;
[0018] FIG. 10 shows a simulation participant wearing glasses that
may be configured with shutter lenses, polarizing filters, or both
to decode participant-specific views of a virtual environment;
[0019] FIG. 11 shows a pictorial representation of a modeled
environment;
[0020] FIG. 11A shows the modeled environment as rendered when
captured from a first point of view;
[0021] FIG. 12 shows a pictorial representation of a modeled
environment in which imaginary cameras which capture the
environment are located coincidentally with the participant's
head;
[0022] FIG. 12A shows the modeled environment as rendered from a
second point of view;
[0023] FIG. 13 illustrates the divergence between an actual
trajectory and perpendicular trajectory of a round discharged from
a weapon when the target is relatively close to the plane of the
shoot wall;
[0024] FIG. 14 illustrates the divergence between an actual
trajectory and perpendicular trajectory of a round discharged from
a weapon when the target is relatively distant from the plane of
the shoot wall;
[0025] FIG. 15 shows an illustrative architecture that may be used
to implement the present simulator system; and
[0026] FIG. 16 is a flowchart of an illustrative method of
operating the present simulator system.
[0027] Like reference numerals indicate like elements in the
drawings. Unless otherwise indicated, elements are not drawn to
scale.
DETAILED DESCRIPTION
[0028] FIG. 1 shows a pictorial view of an illustrative simulation
environment 100 that may be facilitated by implementation of the
present simulator system with a 3D space and reactive avatars. The
simulation environment 100 supports a participant 105 in the
simulation. In this particular illustrative example, the
participant 105 is a single soldier, using a simulated weapon 110,
who is engaging in training that is intended to provide a realistic
and immersive shooting simulation. It is emphasized, however, that
the present simulator system is not limited to military
applications or shooting simulations. The present simulator system
may be adapted to a wide variety of usage scenarios including, for
example, industrial, emergency response/911, law enforcement, air
traffic control, firefighting, education, sports, commercial,
engineering, medicine, gaming/entertainment, and the like.
[0029] The simulation environment 100 may also support multiple
participants to meet the needs of a particular training scenario. In many applications in which a 3D virtual environment is implemented, the present simulator system may be configured to support two participants, each of whom is provided with unique and independent 3D views of the virtual environment generated by the system. In applications in which a 2D virtual environment is implemented, a configuration may be utilized that supports
up to four participants, each of whom is provided with independent
2D views of the virtual environment generated by the system.
Discussion of the configurations used to support multiple
participants is provided in more detail below.
[0030] As shown in FIG. 1, the participant 105 trains within a
space (designated by reference numeral 115) that is termed a
"capture volume." The participant 105 is typically free to move
within the capture volume 115 as a given training simulation
unfolds. Although the capture volume 115 is indicated with a circle
in FIG. 1, it is noted that this particular shape is arbitrary and
various sizes, shapes, and configurations of capture volumes may be
utilized as may be needed to meet the requirements of a particular
implementation. As described in more detail below, the capture
volume 115 is monitored, in this illustrative example, by an
optical motion capture system. Motion capture is also referred to
as "motion tracking " Utilization of such a motion capture system
enables the simulator system to maintain knowledge of the position
and orientation of the soldier and weapon as the soldier moves
through the capture volume 115 during the course of the training
simulation.
[0031] A simulation display screen 120 is also supported in the
environment 100. The display screen 120 provides a dynamic view 125
of the virtual environment that is generated by the simulator
system. Typically a video projector is used to project the view 125
onto the display screen 120, although direct view systems using
flat panel emissive displays can also be utilized in some
applications. In FIG. 1, the view 125 shows a snapshot of an
illustrative avatar 130, who in this example is part of an enemy
force and thus a target of the shooting simulation. An avatar is
typically a model of a virtual person who is generated and animated
by the simulator system. In some applications, the avatar 130 may
be a representation of an actual person (i.e., a virtual alter ego)
and might take any of a variety of roles such as a member of a
friendly or opposing force, a civilian non-combatant, etc.
Furthermore, while a single avatar 130 is shown in the view 125,
the number of avatars utilized in any given simulation can vary as
needs dictate.
[0032] The simulation environment 100 shown in FIG. 1 is commonly
termed a "shoot wall" because a single display screen is utilized
in a vertical planar configuration that the participant 105 faces
to view the projected virtual environment. However, the present
simulator system is not necessarily limited to shoot wall
applications and can be arranged to support other configurations.
For example, as shown in FIG. 2, a CAVE configuration may be
supported in which four non-co-planar display screens 205.sub.1, 2
. . . 4 are typically utilized to provide a richly immersive
virtual environment that is projected across three walls and the
floor. As the projected virtual environment substantially surrounds
the participant 105, the capture volume 115 is coextensive with the
space enclosed by the CAVE projection screens, as shown in FIG.
2.
[0033] In some implementations of CAVE, the display screens
205.sub.1, 2 . . . 4 enclose a space that is approximately 10 feet
wide, 10 feet long, and 8 feet high; however, other dimensions may also be utilized as may be required by a particular implementation. The CAVE paradigm has also been extended to fifth and/or sixth
display screens (i.e., the rear wall and ceiling) to provide
simulations that may be even more encompassing for the participant
105. Video projectors 210.sub.1, 2 . . . 4 may be used to project
appropriate portions of the virtual environment onto the
corresponding display screens 205.sub.1, 2 . . . 4. In some CAVE
simulators, the virtual environment is projected stereoscopically
to support 3D observations for the participant 105 and interactive
experiences with substantially full-scale images.
[0034] As shown in FIG. 3, the capture volume 115 is within the
field of view of an array of multiple video cameras 305.sub.1, 2 .
. . N that are part of a motion capture system so that the position
and orientation of the participant 105 and weapon 110 (FIG. 1) may
be tracked within the capture volume as the participant moves as a
simulation unfolds. Such tracking utilizes images of markers (not
shown in FIG. 3) that are captured by the video cameras 305. The
markers are placed on the participant 105 and weapon 110 at known
locations. The centers of the marker images are matched from the
various camera views using triangulation to compute frame-to-frame
spatial positions of the participant 105 and weapon 110 within the
3D capture volume 115.
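By way of illustration only, the following sketch shows one common way the triangulation step described above can be implemented: given each camera's 3x4 projection matrix (from a prior calibration, assumed here) and the pixel centroid of a marker in each view, a linear least-squares solution recovers the marker's 3D position in the capture volume. The function names and the use of NumPy are illustrative assumptions, not part of the patent disclosure.

```python
# A minimal sketch (not the patent's implementation) of multi-camera linear
# triangulation: each camera contributes two linear constraints on the
# homogeneous 3D point, and the SVD gives the least-squares solution.
import numpy as np

def triangulate_marker(projections, pixels):
    """projections: list of 3x4 camera matrices; pixels: list of (u, v) marker centroids."""
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])   # constraint from the horizontal pixel coordinate
        rows.append(v * P[2] - P[1])   # constraint from the vertical pixel coordinate
    A = np.vstack(rows)
    _, _, vt = np.linalg.svd(A)        # smallest-singular-value vector solves A X ~ 0
    X = vt[-1]
    return X[:3] / X[3]                # de-homogenize to (x, y, z) in the capture volume
```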
[0035] The positions are defined by six degrees-of-freedom ("dof"),
as depicted by the coordinate system 400 shown in FIG. 4, including
translation along each of the x, y, and z axes, as well as rotation
about each axis. Thus, both location of an object in the capture
volume (i.e., "position") and its rotation about each of the axes
(i.e., "orientation") may be described using the coordinate system
400. Note that the term "position" will be used to refer to both
location and rotation in the description that follows unless stated
otherwise.
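As a concrete and purely illustrative representation of the six degree-of-freedom description above, a tracked pose can be held as three translations plus three rotations and converted to a single 4x4 rigid transform; the yaw-pitch-roll composition order below is an assumption of this sketch.

```python
# An illustrative 6-dof pose record (an assumption, not the patent's data model):
# translation along x, y, z plus rotation about each axis.
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose6DOF:
    x: float
    y: float
    z: float           # location in the capture volume (e.g., meters)
    roll: float
    pitch: float
    yaw: float         # rotation about the x, y, z axes (radians)

    def matrix(self) -> np.ndarray:
        """Return the equivalent 4x4 rigid transform (yaw-pitch-roll order assumed)."""
        cr, sr = np.cos(self.roll), np.sin(self.roll)
        cp, sp = np.cos(self.pitch), np.sin(self.pitch)
        cy, sy = np.cos(self.yaw), np.sin(self.yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx
        T[:3, 3] = [self.x, self.y, self.z]
        return T
```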
[0036] Returning again to FIG. 3, stands, trusses, or similar
supports, as representatively indicated by reference numeral 310,
are typically used to arrange the video cameras 305 around the
periphery 315 of the capture volume 115. The number of video
cameras N may vary from 6 to 24 in many typical applications. While
fewer cameras can be successfully used in some implementations, six
is generally considered to be the minimum number that can be
utilized to provide accurate head tracking since tracking markers
can be obscured from a given camera in some situations depending on
the movement and position of the participant 105. Additional
cameras can be utilized to provide full body tracking, additional
tracking robustness, and/or redundancy.
[0037] In this illustrative example, the video cameras 305 may be
configured as part of a reflective optical motion capture system.
As shown in FIG. 5, reflective systems typically use multiple IR
LEDs (infra-red light emitting diodes), as representatively
indicated by reference numeral 505, that are arranged around the
perimeter of the lens 510 or aperture of a video camera 305. An
IR-pass filter may also be utilized over the lens 510 in some
camera designs. The IR LEDs 505 will function as light sources to
illuminate the markers on the participant 105 and weapon 110 (FIG.
1).
[0038] FIG. 6 shows a simplified block diagram of illustrative
functional components of a motion capture video camera 305. In
addition to the IR LED light sources 505, a video camera 305 will
generally include an image capture subsystem 605 comprising a
solid-state image sensor and optics such as one or more lenses. The
image capture subsystem, along with a processor 610 and memory 615
will typically be configured to give the video camera 305 the
capability to capture video with an appropriate resolution and
frame capture rate to enable motion tracking at the simulator
system level that meets a desired accuracy in real time. For
example, presently commercially available video cameras having
multiple megapixels of resolution and a 60 frames-per-second
capture rate may provide satisfactory performance in many typical
motion capture usage scenarios. The video cameras 305 will
typically include a high speed communications interface 620 that
facilitates operative connection and data exchange with external
subsystems and systems. For example, the interface 620 may be
embodied as a USB (Universal Serial Bus) interface.
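A rough sketch of the per-frame image processing that such a camera or its host might perform is shown below: threshold the IR image so that only the bright retro-reflective marker returns remain, then report the centroid of each bright blob for use in triangulation. The threshold value and the use of SciPy are assumptions of this sketch.

```python
# Illustrative marker-centroid extraction from a grayscale IR frame.
import numpy as np
from scipy import ndimage

def marker_centroids(ir_frame: np.ndarray, threshold: int = 200):
    """ir_frame: 2D grayscale image; returns a list of (row, col) blob centers."""
    bright = ir_frame > threshold                  # retro-reflectors return the IR LEDs strongly
    labels, count = ndimage.label(bright)          # connected-component labeling of bright pixels
    return ndimage.center_of_mass(bright, labels, range(1, count + 1))
```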
[0039] FIG. 7 shows a set of illustrative markers 705 that are
applied to a helmet 710 worn by the participant 105 and secured
with a chinstrap 715. In alternative implementations, the markers
705 can be applied to a hat, headband, skullcap, or other relatively tight-fitting device/garment so that the motion of the markers closely matches the motion of the participant (i.e., extraneous motion
of the markers is minimized). The markers 705 are substantially
spherically shaped in many typical applications and formed using
retro-reflective materials which reflect incident light back to a
light source with minimal scatter. The number of markers 705
utilized in a given implementation can vary, but generally a
minimum of three are used to enable six dof head tracking. The
markers 705 are rigidly mounted in known locations on the helmet
710 to enable the triangulation calculation to be performed to
determine position within the capture volume 115. More markers 705
may be utilized in some usage scenarios to provide redundancy when
markers would otherwise be obscured during the course of a
simulation (for example, the participant lies on the floor, ducks
behind cover when so provided in the capture volume, etc.), or to
enhance tracking accuracy and/or robustness in some cases.
[0040] In this illustrative example, the markers 705 are used to
dynamically track the position and orientation of the participant's
head during interaction with a simulation. Head position is
generally well correlated to gaze direction of the participant 105.
In other words, knowledge of the motion and position of the
participant's head enables an accurate inference to be drawn as to
what or who the participant is looking at within the virtual
environment. In alternative implementations, additional markers may
be applied to the participant, for example, using a body suit,
harness, or similar device, to enable full body tracking within the
capture volume 115. Real time full body tracking can typically be
expected to consume more processing cycles and system resources as
compared to head tracking, but may be desirable in some
applications where, for example, a simulation is operated over
distributed simulator infrastructure and avatars of local
participants need to be generated for display on remote
systems.
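The head tracking described above amounts to a rigid-body fit: the helmet markers have known coordinates in a helmet-fixed frame, triangulation supplies their measured coordinates in the capture volume, and a least-squares (Kabsch-style) alignment recovers the head's rotation and translation, from which a gaze direction can be inferred. The sketch below is illustrative; the choice of the helmet's "forward" axis is an assumption.

```python
# Illustrative rigid-body fit of helmet markers to their triangulated positions.
import numpy as np

def fit_head_pose(local_pts, world_pts):
    """local_pts, world_pts: Nx3 arrays of corresponding marker positions (N >= 3)."""
    lc, wc = local_pts.mean(axis=0), world_pts.mean(axis=0)
    H = (local_pts - lc).T @ (world_pts - wc)      # cross-covariance of centered point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = wc - R @ lc                                # translation of the helmet frame
    return R, t

def gaze_direction(R, forward=np.array([0.0, 0.0, 1.0])):
    """Estimate gaze as the helmet's (assumed) forward axis expressed in the capture volume."""
    return R @ forward
```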
[0041] FIG. 8 depicts an illustrative idealized object 805 that is
arranged with multiple spherical retro-reflective markers
810.sub.1, 2 . . . N that are rigidly fixed to the object 805 at
known locations. In applications where the object 805 is
implemented as a weapon such as a long arm, two markers 810 fixed
in positions along the long axis of the barrel are typically
sufficient to triangulate the location of the object 805 within the
capture volume 115 (FIG. 1), as the knowledge of the rotation of
the object about the long axis is generally unnecessary. However,
additional markers may be utilized to support marker redundancy,
for example, or when needed to meet the other requirements posed by
a particular implementation.
[0042] In this illustrative example, the object 805 is also
configured to support one or more light sources 815.sub.1, 2 . . .
N that may be selectively user-actuated via a switch 820 that is
operatively coupled to the lights, as indicated by line 825. The
light sources 815 may be implemented, for example, using IR LEDs
that are powered by a power source, such as a battery (not shown),
that is internally disposed in the object or arranged as an
externally-coupled power pack. The light sources 815 are used to
effectuate a relatively low-bandwidth optical communication path
for signaling or transmitting data from the object 805 (or from the
participant via interaction with the object) within the capture
volume 115 using the same optical motion capture system that is
utilized to track the position of the participant and object.
Advantageously, the light sources 815 implement the signal path
without the necessity of additional communications infrastructure
such as RF (radio frequency), magnetic sensing, or other equipment.
In addition, utilization of an optically-implemented communication
path obviates the need for wires, cables, or other tethers that
might restrict movement of the participant 105 within the capture
volume 115 or otherwise reduce the realism of the simulation.
[0043] As with the markers 810, the light sources 815 are rigidly
fixed to the object 805 at known locations. The light sources 815
may be located on object 805 both along the long axis as well as
off-axis, as shown. The number of light sources 815 utilized and
their location on the object 805 can vary by application.
Typically, however, at least one light source 815 will be utilized
to provide a one-bit, binary (i.e., on and off) signaling
capability.
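One illustrative way to read the one-bit optical signal, under the assumptions of this sketch, is to project the light source's known mounting point on the object into a camera using the object's tracked pose and test whether a bright blob appears at that pixel location. The inputs and the pixel tolerance below are assumptions, not values from the disclosure.

```python
# Illustrative check for the participant-actuated light source being "on".
import numpy as np

def light_source_active(object_pose_T, light_local, P, centroids, tol=6.0):
    """object_pose_T: 4x4 object-to-world transform; light_local: (x, y, z) LED mount point
    on the object; P: 3x4 camera matrix; centroids: detected bright-blob (u, v) pixel centers."""
    world = object_pose_T @ np.append(light_local, 1.0)   # where the LED should be in the volume
    uvw = P @ world
    expected = uvw[:2] / uvw[2]                            # expected pixel location of the LED
    # The signal is "on" if any detected blob lies within tol pixels of that location.
    return any(np.linalg.norm(np.asarray(c, float) - expected) < tol for c in centroids)
```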
[0044] FIG. 9 shows an illustrative example of markers 910 and
light sources 915 as particularly applied to the simulated weapon
110 shown in FIG. 1. Simulated weapons are typically similar in
appearance and weight to their real counterparts, but are not
capable of firing live ammunition. In some cases, simulated weapons
are real weapons that have been appropriately reconfigured and/or
temporarily modified for simulation purposes. In this example,
markers 910.sub.1, 910.sub.2, and 910.sub.3 are located along the
long axis defined by the barrel of the weapon 110 while marker
910.sub.N is located off the long axis. Light source 915.sub.1 is
located off axis and operatively coupled to the trigger 920 of the
weapon 110. Light source 915.sub.N is also located off-axis as
shown, and may be alternatively or optionally utilized. Generally,
at least two markers 910 located along the long axis of the weapon
and one light source 915 (either located on or off the long axis)
can be utilized in typical applications to track the position of
the weapon 110 in the capture volume 115 and implement the binary
signaling capability.
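A minimal sketch of deriving the weapon's firing ray from the two on-axis markers follows: the muzzle-end marker provides the origin of the simulated shot and the normalized vector between the two markers provides the aim direction. Which marker is nearer the muzzle is an assumption of this sketch.

```python
# Illustrative weapon position and aim direction from two barrel-axis markers.
import numpy as np

def weapon_ray(marker_rear, marker_front):
    """marker_rear, marker_front: 3D marker positions along the barrel (front = muzzle end)."""
    origin = np.asarray(marker_front, dtype=float)
    direction = origin - np.asarray(marker_rear, dtype=float)
    return origin, direction / np.linalg.norm(direction)   # shot origin and unit aim vector
```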
[0045] In operation during a simulation, the participant's
actuation of the trigger 920 will activate a light source 915 to
signal that the weapon has been virtually fired. In some cases,
different light activation patterns can signal different types of
discharge patterns such as a single round per trigger pull, 3-round
burst per trigger pull, fully automatic fire with a trigger pull,
and the like. Such patterns can be implemented, for example, by
various flash patterns using a single light source or multiple
light sources 915. Activation of the light source 915 will be
detected by one or more of the video cameras 305 (FIG. 3) that
monitor the capture volume 115. Since the position of the weapon
110 at the time it is fired can be known from motion capture, a
trajectory for the virtually discharged round (or rounds) in the
virtual environment can be determined and used for purposes of the
simulation. Further description of this aspect of the present
simulator system is provided below.
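The flash-pattern signaling mentioned above could, for example, be decoded by counting on-pulses over a short window of per-frame light detections; the particular pulse counts mapped to fire modes below are hypothetical and shown only to illustrate the idea.

```python
# Hypothetical flash-pattern decode: count rising edges in a window of frame samples.
def decode_fire_mode(samples):
    """samples: list of booleans, one per captured frame (True = light source detected)."""
    pulses = sum(1 for prev, cur in zip([False] + samples, samples) if cur and not prev)
    if pulses == 0:
        return None            # no trigger pull observed in this window
    if pulses == 1:
        return "single"        # one flash per trigger pull (assumed mapping)
    if pulses == 3:
        return "burst"         # three flashes signaling a 3-round burst (assumed mapping)
    return "auto"              # sustained or repeated flashing treated as automatic fire
```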
[0046] FIG. 10 shows the participant 105 wearing a pair of glasses
1005 that are used, in this illustrative example, to provide a 3D
view of the virtual environment that is projected onto the display
screen 120 (FIG. 1). Such 3D viewing is typically implemented by
providing an eye-specific view in which a unique view is projected
for each of the left eye and right eye to create the 3D effect by
using the participant's stereoscopic vision. That is, what each eye
sees is slightly mismatched (what is termed "binocular disparity")
and the human brain uses the mismatch to perceive depth. Thus, to
implement a 3D view, the projected virtual environment will comprise
two unique, separately-encoded dynamic views that are shown on the
display screen 120.
[0047] Eye-specific views can be generated by configuring the
left-eye and right-eye lenses (as respectively indicated by
reference numerals 1010 and 1015) as LCD (liquid crystal display)
shutter lenses. Liquid crystal display shutter lenses are also
known as "flicker glasses." Each shutter lens contains a liquid
crystal layer that alternately goes dark or transparent with the
respective application and absence of a voltage. The voltage is
controlled by a timing signal received at the glasses 1005 (e.g.,
via an optical or radio frequency communications link to a remote
imaging subsystem or module) that enables the shutter lenses to
alternately darken over one eye of the participant 105 and then the
other eye in synchronization with the refresh rate of the display
screen 120 (FIG. 1). The video displayed on the screen 120
alternately shows left view and right view images (also termed
"fields" when referring to video signals). When the participant 105
views the display screen 120, shutter lenses in the glasses 1005
are synchronously shuttered and un-shuttered to respectively
occlude the unwanted image and transmit the wanted image. Thus, the
left eye only sees the left view and the right eye only sees the
right view. The participant's inherent persistence of vision,
coupled with a sufficiently high refresh rate of the projected
display can typically be expected to result in the participant's
perception of stable and flicker-free 3D images.
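Schematically, the shutter-glasses scheme amounts to presenting left-eye and right-eye fields on alternating refreshes while signaling the glasses which lens to open, as in the sketch below. The callables render_view, send_shutter_sync, and present stand in for display and emitter interfaces that are assumed for illustration; they are not real APIs.

```python
# Schematic field-sequential render loop for shutter glasses (illustrative only).
def stereo_frame_loop(scene, head_pose, render_view, present, send_shutter_sync):
    for eye in ("left", "right"):                        # one field per display refresh
        image = render_view(scene, head_pose, eye)       # eye-specific point of view
        send_shutter_sync(open_eye=eye)                  # open the matching lens, occlude the other
        present(image)                                   # swap on the next vertical refresh
```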
[0048] In other implementations of the present simulator system,
the glasses 1005 may be configured to decode separate left- and
right-eye views by applying polarizing filters to the lenses 1010
and 1015. For example, left- and right-handed circular polarizing
filters may be respectively utilized in the lenses. Alternatively,
linear polarizing filters may be utilized that are orthogonally
oriented in respective lenses. As each lens only passes images
having like polarization, stereoscopic imaging can be implemented
by projecting two different views (each view being uniquely
polarized) that are superimposed onto the display screen 120. In
some applications, use of circular polarization may be particularly
advantageous to avoid image bleed between left and right views
and/or loss of stereoscopic perception that may occur while using
linear polarizing filters when the participant's head is tilted to
thus misalign the polarization axes of the glasses with the
projected display.
[0049] The glasses 1005 may alternatively be configured with both
shutter and polarizing components. By employing such a
configuration, the glasses 1005 can decode and disambiguate among
four unique and dynamic points of view of a virtual environment
shown on the display screen 120. That is, two unique viewpoints can
be supported using synchronous shuttering, and two additional
unique views can be supported using polarizing filters. In
combination with appropriate generation and projection of a virtual
environment on the display screen 120, the four unique views may be
used to provide, for example, each of two participants with unique
3D views of the virtual environment, or each of four participants
with unique 2D views of the virtual environment.
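One illustrative bookkeeping for the four decodable channels is to cross the two shutter time-slots with the two polarization states and assign each participant (or each eye, in the 3D case) one resulting channel; the sketch below shows the 2D, four-participant assignment and is an assumption of this description.

```python
# Illustrative assignment of participants to (shutter slot, polarization) channels.
SHUTTER_SLOTS = ("field_A", "field_B")
POLARIZATIONS = ("left_circular", "right_circular")

def assign_2d_channels(participants):
    """Assign up to four participants one unique (shutter slot, polarization) channel each."""
    channels = [(s, p) for s in SHUTTER_SLOTS for p in POLARIZATIONS]
    if len(participants) > len(channels):
        raise ValueError("at most four unique 2D views are supported")
    return dict(zip(participants, channels))
```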
[0050] The provision of a unique dynamic point of view per
participant is a feature of the present simulator system that may
provide additional realism to a simulation by addressing the issue
of parallax distortion that is frequently experienced when
interacting with conventional shoot wall simulators. Parallax
distortion occurs when a virtual environment is generated and
displayed using a point of view that is fixed and does not move
during the course of a simulation. In other words, assuming an
imaginary camera is used to capture the virtual environment that is
displayed on the screen 120 (FIG. 1), then parallax distortion can
occur when the camera's position is fixed at some arbitrary point
in space in the capture volume 115 and the position does not change
as a simulation unfolds.
[0051] This problem is illustrated in FIG. 11 which shows a
pictorial representation of the environment 1105 that is modeled
for the simulation shown in FIG. 1. As shown in FIG. 11, the
modeled environment 1105 can be thought of as being separated from
the capture volume 115 by the display screen 120 (i.e., the plane
of the shoot wall) and physically extending into a 3D space that is
adjacent to the capture volume. As with the capture volume, the
size and configuration of the modeled environment can vary by
application and be different from the arbitrary size and shape that
is illustrated in the drawing.
[0052] For the environment 1105 to appear realistic when projected
onto the display screen 120, the view on the screen would need to
appear differently depending on the position of the participant's
head in the capture volume 115. That is, the participant 105 would
expect the virtual environment to look different as his point of
view changes. For example, when the participant 105 is in position
"A" (as indicated by reference numeral 1110), his line of sight
along line 1115 to the enemy soldier 130 is obscured by the wall
1120. Assuming that the wall 1120 is co-planar with the display
screen 120, the dot 1125 shows that the sight line 1115 intersects
the front plane of the environment at the wall 1120. By contrast,
when the participant 105 moves to position "B" (as indicated by
reference numeral 1130), his line of sight 1135 to the enemy
soldier 130 is no longer obscured by the wall 1120. Thus, if the
modeled environment accurately matches its physical counterpart,
the participant could move to look around an obstacle to see if an
enemy is hidden behind it.
[0053] As shown in FIG. 11, when the imaginary camera 1140 is
positioned in the center of the capture volume 115, its line of
sight 1145 to the enemy soldier 130 is obscured by the wall.
Accordingly, if the view of the virtual environment was generated
using the image captured by the imaginary camera 1140, it would
show the wall 1120 but the enemy soldier 130 would be hidden from
view. This view from the imaginary camera 1140 as projected onto
the display screen 120 is shown in the inset drawing FIG. 11A. As
noted above, in conventional shoot wall simulators the position of
the imaginary camera 1140 is typically fixed. Thus, when rendered
by conventional simulators using such a fixed point of view, the
virtual environment would appear unnatural and unrealistic because
the projected display would not take the position of the
participant 105 into account. Thus, if the participant 105 moved
his position to attempt to see what is behind the wall 1120, the
display 120 would look the same regardless of the participant's
point of view, and the enemy would remain hidden by the wall.
[0054] By contrast, application of the principles of the present
simulator system enables an accurate and realistic display to be
generated and projected by tracking the position of the participant
105 in the capture volume 115. The imaginary camera 1140 is then
placed to be coincident with the participant's head so that the
captured view of the modeled environment 1105 matches the
participant's point of view as he moves through the capture volume
115. This feature is shown in FIG. 12. As shown, the imaginary
camera 1140 is located in substantially the same position and
orientation as the participant's head. That way, the displayed view
of the modeled environment 1105 will match the physical environment
more closely and meet the participant's expectation that movement
from position "A" to position "B" through the capture volume 115
will allow inspection of the area behind the wall 1120. The view
from the imaginary camera 1140 at position "B" in the capture
volume 115 (which corresponds to what the participant would see) is
shown in the inset drawing, FIG. 12A. As shown, the enemy soldier
130 is revealed in this view.
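In rendering terms, placing the imaginary camera at the participant's head means rebuilding the view transform each frame from the tracked head position and inferred gaze direction, as in the sketch below; the right-handed look-at construction is a common convention assumed here. For a fixed screen such as a shoot wall, this view matrix would typically be paired with an off-axis projection frustum anchored to the screen corners, a detail omitted from the sketch.

```python
# Illustrative per-frame view matrix built from the tracked head pose.
import numpy as np

def view_matrix(head_pos, gaze_dir, up=np.array([0.0, 1.0, 0.0])):
    f = gaze_dir / np.linalg.norm(gaze_dir)          # forward axis from the inferred gaze
    s = np.cross(f, up); s /= np.linalg.norm(s)      # right axis
    u = np.cross(s, f)                               # corrected up axis
    V = np.eye(4)
    V[0, :3], V[1, :3], V[2, :3] = s, u, -f          # rotate world coordinates into eye space
    V[:3, 3] = -V[:3, :3] @ np.asarray(head_pos, float)   # then translate by the head position
    return V
```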
[0055] In addition to supporting the generation and projection of a
virtual environment that is dynamically and continuously captured
from the participant's point of view as he moves through the
capture volume 115, the present simulator system supports
additional features which can add to the accuracy and realism of a
given simulation. The virtual environment may also be generated so
that rendered elements in the environment are responsive to the
participant's position in the capture volume. For example, the
avatar of the enemy soldier 130 can be rendered so that the
soldier's eyes and aim of his weapon track the participant 105. In
this way, the avatar 130 realistically appears to be looking at the
participant 105 and the avatar's gaze will dynamically change in
response to the participant's motion. This enhanced realism is in
contrast to simulations supported by conventional simulators where
avatars typically appear to stare into space or in an odd direction
when supposedly attempting to look at or aim a weapon at a
participant.
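The reactive-avatar behavior can be illustrated as a per-frame computation of the direction from the avatar's eye position in the modeled environment to the tracked position of the participant's head, expressed here as yaw and pitch angles; the axis convention (y up, z forward) is an assumption of this sketch.

```python
# Illustrative avatar gaze/aim angles toward the tracked participant.
import numpy as np

def avatar_aim_angles(avatar_eye_pos, participant_head_pos):
    """Return (yaw, pitch) in radians that point the avatar at the participant's head."""
    v = np.asarray(participant_head_pos, float) - np.asarray(avatar_eye_pos, float)
    yaw = np.arctan2(v[0], v[2])                       # rotation about the vertical (y) axis
    pitch = np.arctan2(v[1], np.hypot(v[0], v[2]))     # elevation toward the head
    return yaw, pitch
```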
[0056] Tracking the position of the weapon 110 (FIG. 1) also
enables enhanced simulation accuracy and realism. As described
above, knowledge of the position of the weapon 110 when it is fired
(for example, as indicated by actuation of a light source 915
responsively to a trigger pull as shown in FIG. 9) enables a
trajectory of the discharged round to be determined. Such
determination enables the present simulator system to overcome
another common shortcoming of conventional simulators, namely
unrealistic trajectory of discharged rounds.
[0057] As shown in FIG. 13, in conventional simulators, when the
weapon 110 is fired, the discharged round turns unrealistically from its actual trajectory 1305 at the point where it intersects the display screen 120 (i.e., the plane of the shoot wall) and flies exactly perpendicular to the display screen surface. This modified perpendicular
trajectory, as indicated by reference numeral 1310, is typically
implemented in conventional simulators without regard to the
starting incident angle of the incoming round since the position of
the weapon in the capture volume is unknown (the point of
intersection with the shoot wall by comparison is typically known
using a light source such as a laser in the weapon and a
photodetector at the shoot wall/display screen that detects the
location of the incident laser beam).
[0058] A parallax angle p between the actual trajectory 1305 and
the perpendicular trajectory 1310 is thus created. Conventional
simulators will typically rely on simple 3D scenarios where
elements in the modeled environment 1105 do not extend deeply past
the plane of the shoot wall in order to minimize the impact of the
parallax. Thus, as shown in FIG. 13, the perpendicular trajectory
1310 will still result in a hit on target since the enemy soldier
130 is positioned relatively close to the shoot wall 120. However,
such short-depth modeled environments can typically be expected to
constrain the types and quality of the simulations that are
supported.
[0059] As shown in FIG. 14, as the enemy soldier 130 moves deeper
into the modeled environment 1105 the impact of the parallax angle
p is magnified. In this case, the divergence between the actual
trajectory 1405 and the perpendicular trajectory 1410 is great
enough so that the perpendicular trajectory results in a miss of
the target. Such a miss can be expected to create a readily apparent loss of believability and
immersion for the participant 105. Thus, by utilizing the actual
trajectories (as indicated by reference numerals 1305 and 1405 in
the scenarios depicted in FIGS. 13 and 14), the present simulator
system avoids the problems associated with the perpendicular
trajectory described above.
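The divergence shown in FIGS. 13 and 14 can be made concrete with a small calculation: assuming the shoot wall is the plane z = 0 and the modeled environment extends toward negative z (an axis convention of this sketch), the actual impact point follows the weapon's own ray to the target's depth, while the conventional perpendicular impact keeps the wall-intersection point and travels straight back. The divergence grows with the target's depth behind the wall.

```python
# Illustrative comparison of actual vs. perpendicular impact points.
import numpy as np

def impact_points(weapon_origin, aim_dir, target_depth):
    """weapon_origin, aim_dir: the weapon's firing ray; target_depth: depth behind the wall (> 0)."""
    o, d = np.asarray(weapon_origin, float), np.asarray(aim_dir, float)
    t_wall = -o[2] / d[2]                     # ray parameter where it crosses the wall plane z = 0
    wall_hit = o + t_wall * d
    t_target = (-target_depth - o[2]) / d[2]  # ray parameter at the target's depth plane
    actual = o + t_target * d                 # impact following the weapon's own trajectory
    perpendicular = wall_hit.copy()
    perpendicular[2] = -target_depth          # same (x, y) as the wall hit, pushed straight back
    return actual, perpendicular
```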
[0060] FIG. 15 shows an illustrative architecture that may be used
to implement the present simulator system 1505. In many
applications, the simulator system 1505 is configured to operate
using a variety of software modules embodied as instructions on
computer-readable storage media, described below, that may execute
on general-purpose computing platforms such as personal computers
and workstations, or alternatively on purpose-built simulator
platforms. In other applications, the simulator system 1505 may be
implemented using various combinations of software, firmware, and
hardware. In some cases, the simulator system 1505 may be
configured as a plug-in to existing simulators in order to provide
the enhanced functionality described herein. For example, the
simulator system 1505 when configured with appropriate interfaces
may be used to augment the training scenarios afforded by an
existing ground combat simulation to make them more realistic and
more immersive.
[0061] A camera module 1510 is utilized to abstract the
functionality provided by the video cameras 305 (FIG. 3) which are
used to monitor the capture volume 115 (FIG. 1). Typically the
camera module 1510 will utilize an interface such as an API
(application programming interface) to expose functionality to the
video cameras 305 to enable operative communications over a
physical layer interface, such as USB. In some applications, the
camera module 1510 may enhance the native motion capture
functionality supported by the video cameras 305, and in other
applications the module functions essentially as a pass-through
communications interface.
[0062] A head tracking module 1515 is also included in the
simulator system 1505. In this illustrative example, head tracking
alone is utilized in order to minimize the resource costs and
latency that are typically associated with full body tracking.
However, in alternative implementations, full body tracking and
motion capture may be utilized. The head tracking module 1515 uses
images of the helmet markers captured by the camera module 1510 in
order to triangulate the position of the participant's head within
the capture volume 115 as a given simulation unfolds and the
participant moves throughout the volume.
[0063] Similarly, an object tracking module 1520 is included in the
simulator system 1505 which uses images of the weapon markers
captured by the camera module 1510 to triangulate the position of
the weapon within the capture volume 115 and detect trigger pulls.
For both head tracking and object tracking, the position
determination is performed substantially in real time to minimize
latency as the simulator system generates and renders the virtual
environment. Minimization of latency can typically be expected to
increase the realism and immersion of the simulation. In some
cases, the head tracking and object tracking modules can be
combined into a single module as indicated by dashed line 1525 in
FIG. 15.
[0064] The simulator system 1505 further supports the utilization
of a virtual environment generation module 1530. This module is
responsible for generating a virtual environment responsive to the
needs of a given simulation. In addition, module 1530 will generate
a virtual environment while correcting for point of view parallax
distortion and trajectory parallax, as respectively indicated by
reference numerals 1535 and 1540. That is, the virtual environment
generation module 1530 will dynamically generate one or more views
of a virtual environment that are consistent with the participant's
respective and unique points of view. As noted above, up to four
unique views may be generated and rendered depending on the
configuration of the glasses 1005 (FIG. 10) being utilized. In
addition, the virtual environment generation module 1530 will
determine the actual trajectory of rounds fired by weapon 110
downrange.
[0065] A virtual environment rendering module 1545 is utilized in
the simulator system 1505 to take the generated virtual environment
and pass it off in an appropriate format for projection or display
on the display screen 120. As described above, multiple views
and/or multiple screens may be utilized as needed to meet the
requirements of a particular implementation. Other hardware may be
abstracted in a hardware abstraction layer 1550 in some cases in
order for the simulator system 1505 to implement the necessary
interfaces with various other hardware components that may be
needed to implement a given simulation. For example, various other
types of peripheral equipment may be supported in a simulation, or
interfaces may need to be maintained to support the simulator
system 1505 across multiple platforms in a distributed computing
arrangement.
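Structurally, the modules described above can be pictured as a simple per-frame pipeline in which the camera module feeds the head and object trackers and their outputs drive environment generation and rendering; the class and method names in the sketch below are illustrative assumptions, not the product's API.

```python
# Illustrative wiring of the modules described for the simulator system.
class SimulatorSystem:
    def __init__(self, camera_module, head_tracker, object_tracker, env_generator, renderer):
        self.cameras = camera_module
        self.head_tracker = head_tracker
        self.object_tracker = object_tracker
        self.env_generator = env_generator
        self.renderer = renderer

    def step(self, scenario):
        frames = self.cameras.capture()                     # images from the camera array
        head_pose = self.head_tracker.update(frames)        # participant point of view
        weapon_state = self.object_tracker.update(frames)   # weapon pose plus trigger signal
        scene = self.env_generator.generate(scenario, head_pose, weapon_state)
        self.renderer.render(scene)                         # project onto the display screen(s)
```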
[0066] FIG. 16 is a flowchart 1600 of an illustrative method of
operating the simulator system 1505 shown in FIG. 15 and described
in the accompanying text. The method starts at block 1605. At block
1610 the position and orientation of the participant's head is
tracked as the participant 105 moves throughout the capture volume
115 during the course of a simulation. At block 1615 the position
and orientation of the weapon 110 is tracked as the participant 105
moves through the capture volume 115. In this illustrative example
a single participant and weapon are tracked, however, multiple
participants and weapons may be tracked when the simulator system
1505 is used to support multi-participant simulation scenarios.
[0067] The participant's point of view is determined, at block
1620, in response to the head tracking. At block 1625, the gaze
direction of one or more avatars 130 in the simulation will be
determined based on the location of the participant 105 in the
capture volume 115. Similarly the direction of the avatar's weapon
will be determined, at block 1630, so that the aim of the weapon
will track the motion of the participant and thus appear
realistic.
[0068] At block 1635, the simulator system 1505 will detect weapon
fire (and/or detect other communicated data transmitted over the
low-bandwidth communication path described above in the text
accompanying FIG. 8). At block 1640 the actual trajectory of
discharged rounds will be determined in response to the position of
the weapon 110 within the capture volume 115.
[0069] Data descriptive of a given simulation scenario is received,
as indicated at block 1645. Such data, for example, may be
descriptive of the storyline followed in the simulation, express
the actions and reactions of the avatars to the participant's
commands and/or actions, and the like. At block 1650, using the
captured information from the camera module, the various
determinations described in blocks 1625 through 1640, and the
received simulation data, the virtual environment will be generated
using the participant's point of view, having a realistic avatar
gaze and weapon direction, and using the actual trajectory for
weapon fire. At block 1655, the generated virtual environment will
be rendered by projecting or displaying the appropriate views on
the display screen 120. At block 1660 control is returned back to
the start and the method 1600 is repeated. The rate at which the
method repeats can vary by application; however, the various steps
of capturing, determining, generating, and rendering will be
performed with sufficient frequency to provide a smooth and
seamless simulation.
[0070] Although the subject matter has been described in language
specific to structural features and/or methodological acts, it is
to be understood that the subject matter defined in the appended
claims is not necessarily limited to the specific features or acts
described above. Rather, the specific features and acts described
above are disclosed as example forms of implementing the
claims.
* * * * *