U.S. patent application number 12/778437 was filed with the patent office on 2010-05-12 and published on 2010-12-09 for a simulator with enhanced depth perception.
The invention is credited to Timothy James Lock, Wallace Maass, Mark Michniewicz, Derek Smith, and Kristy Smith.
Publication Number: 20100311512
Application Number: 12/778437
Family ID: 43298024
Publication Date: 2010-12-09
United States Patent Application 20100311512
Kind Code: A1
Lock, Timothy James; et al.
December 9, 2010
SIMULATOR WITH ENHANCED DEPTH PERCEPTION
Abstract
A simulator system includes a user tracking device for detecting
a position of a user and generating a sensor signal representing
the position of the user, a processor for receiving the sensor
signal, analyzing the sensor signal, and generating an image signal
in response to the analysis of the sensor signal, wherein the
analyzing of the sensor signal includes determining a position of a
virtual camera corresponding to the position of the user, the
virtual camera being directed toward a reference look-at-point; and
an image generating device for receiving the image signal and
generating an image in response to the image signal, wherein the
image is modified in response to the position and an orientation of
the virtual camera relative to the reference look-at-point.
Inventors: Lock, Timothy James (Ann Arbor, MI); Maass, Wallace (Perrysburg, OH); Smith, Kristy (Ann Arbor, MI); Smith, Derek (Ann Arbor, MI); Michniewicz, Mark (Holly, MI)
Correspondence Address: FRASER CLEMENS MARTIN & MILLER LLC, 28366 KENSINGTON LANE, PERRYSBURG, OH 43551, US
Family ID: 43298024
Appl. No.: 12/778437
Filed: May 12, 2010
Related U.S. Patent Documents
Application Number: 61/184,127
Filing Date: Jun 4, 2009
Current U.S. Class: 473/199; 473/266
Current CPC Class: A63F 13/525 (20140902); A63F 2300/8011 (20130101); A63F 13/428 (20140902); A63F 2300/1087 (20130101); A63F 2300/6676 (20130101); A63F 13/00 (20130101)
Class at Publication: 473/199; 473/266
International Class: A63B 69/36 (20060101) A63B069/36
Claims
1. A simulator system comprising: a user tracking device for
detecting a position of a user and generating a sensor signal
representing the position of the user; a processor for receiving
the sensor signal, analyzing the sensor signal, and generating an
image signal in response to the analysis of the sensor signal,
wherein the analyzing of the sensor signal includes determining a
position of a virtual camera corresponding to the position of the
user, the virtual camera being directed toward a reference
look-at-point; and an image generating device for receiving the
image signal and generating an image in response to the image
signal, wherein the image is modified in response to the position
and an orientation of the virtual camera relative to the reference
look-at-point.
2. The simulator system according to claim 1, wherein the user
tracking device detects a position of a particular body part of the
user and the image is modified in response to a change in the
position of the particular body part.
3. The simulator system according to claim 1, wherein the image
generating device is a projector.
4. The simulator system according to claim 1, further comprising an
object tracking device for tracking a motion of an object
interacting with the user.
5. The simulator system according to claim 1, wherein a motion of
the user relative to the user tracking device produces a
translation of a point of view of the virtual camera relative to
the look-at-point and a rotation of the point of view of the
virtual camera about the look-at-point.
6. A simulator system comprising: a plurality of user tracking
devices arranged to track a position of a user and generate a
sensor signal representing the position of the user; a processor
for receiving the sensor signal, analyzing the sensor signal, and
generating an image signal in response to the analysis of the
sensor signal, wherein the analyzing of the sensor signal includes
determining a position of a virtual camera corresponding to the
position of the user, the virtual camera being directed toward a
reference look-at-point; and an image generating device for
receiving the image signal and generating an image in response to
the image signal, wherein the image is modified in response to a
change in at least one of the position and an orientation of the
virtual camera relative to the reference look-at-point.
7. The simulator system according to claim 6, wherein the user
tracking device detects a position of a particular body part of the
user and the image is modified in response to a change in the
position of the particular body part.
8. The simulator system according to claim 6, wherein the image
generating device is a projector.
9. The simulator system according to claim 6, wherein each of the
user tracking devices is a camera and each of the user tracking
devices captures a time synchronized image of the user, and wherein
the images are transmitted to the processor via the sensor
signal.
10. The simulator system according to claim 9, wherein the
processor performs an image processing of the time synchronized
images to produce a blob shape representing at least a portion of a
body of the user.
11. The simulator system according to claim 10, wherein the
processor compares the blob shape to a pre-defined criterion for a
particular body feature to determine at least one of a position and
orientation of the at least a portion of the body of the user
relative to the user tracking devices.
12. The simulator system according to claim 10, wherein the
processor analyzes the blob shape to determine a center of mass
thereof, wherein the blob shape and center of mass are compared to
a pre-defined criterion for a plurality of body features to match
the blob shape to one of the body features.
13. The simulator system according to claim 10, wherein a three
dimensional position of the blob shape is determined by a
geometrical analysis of an intersecting ray from each of the user
tracking devices.
14. The simulator system according to claim 6, wherein a motion of
the user relative to the user tracking devices produces a
translation of a point of view of the virtual camera relative to
the look-at-point and a rotation of the point of view of the
virtual camera about the look-at-point.
15. A method for providing an enhanced depth perception to a user
of a simulator, the method comprising the steps of: providing a
user tracking device to detect a position of a user and generate a
sensor signal representing the position of the user; analyzing the
sensor signal to determine a position of a virtual camera
corresponding to the position of the user, the virtual camera being
directed toward a reference look-at-point; and generating an image
in response to the analysis of the sensor signal, wherein the image
is modified in response to a change in at least one of the position
and an orientation of the virtual camera relative to the reference
look-at-point.
16. The method according to claim 15, wherein the user tracking
device detects a position of a particular body part of the user and
the image is modified in response to a change in the position of
the particular body part.
17. The method according to claim 15, wherein the user tracking
device includes a plurality of cameras, each of the cameras
capturing a time synchronized image of the user and transmitting
the images to the processor via the sensor signal.
18. The method according to claim 17, wherein the step of analyzing
the sensor signal includes an image processing of the time
synchronized images to produce a blob shape representing at least a
portion of a body of the user.
19. The method according to claim 18, wherein the step of analyzing
the sensor signal includes determining a three dimensional position
of the blob shape by a geometrical analysis of an intersecting ray
from each of the cameras of the user tracking device.
20. The method according to claim 16, wherein a motion of the user
relative to the user tracking device produces at least one of a
translation of a point of view of the virtual camera relative to
the look-at-point and a rotation of the point of view of the
virtual camera about the look-at-point.
Description
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims the benefit of U.S. provisional
patent application Ser. No. 61/184,127 filed Jun. 4, 2009, hereby
incorporated herein by reference in its entirety.
FIELD OF THE INVENTION
[0002] The present invention relates generally to simulators for
sports related activities. More particularly, the invention is
directed to a simulator system and a method for providing an
enhanced depth perception to a user of the simulator system.
BACKGROUND OF THE INVENTION
[0003] Various arrangements are used for simulating the playing of
a game of golf in small areas, such as indoors, to provide
opportunities for people to play who might not otherwise be able to
play because of crowded golf course conditions or because of bad
weather. In addition, such golf simulators can simulate play on
various famous golf courses not otherwise accessible to the
players.
[0004] Most golf simulation equipment includes at least three
components: a central control unit which keeps track of play and
calculates ball travel and lie, a sensor unit which senses how a
ball is hit to enable the control unit to calculate the trajectory
and resulting lie of the hit ball, and a projection unit for
projecting an image showing the green to which the ball is to be
hit from the location of the ball. Because the equipment senses how
a ball is hit and the distance and direction of travel of the ball,
such equipment could also be adapted to simulate various other
sport games, such as baseball or soccer, or at least various
practice aspects thereof.
[0005] U.S. Pat. Nos. 4,150,825 and 4,437,672 show a type of golf
simulation game. In the game of the patents, one to four players
initially enter information into the control unit regarding the
players and the men's, women's, or championship tees from which
each will play, and the particular course and holes to be played,
e.g., the front nine, back nine, etc. The control unit then
operates a display to show who is to tee off and operates a
projector to project an image on a screen in front of the players
showing the view toward the green from the tee.
[0006] A player hits a ball from the tee toward the green as he or
she would on a regular golf course. The ball moves toward and makes
contact with the screen which is specially designed for that
purpose and is usually located about twenty feet in front of the
player. Special sensors in the form of photosensor arrays are
arranged to detect passage of the ball through three separate
sensing planes, the third plane being positioned with respect to
the screen so as to sense the ball's movement toward the screen and
also the ball's rebound from the screen. With the information from
the sensors, the ball's trajectory can be calculated and the
position at which the ball lands along the fairway can be
determined relatively accurately. The control unit keeps track of
each player's ball and the position at which it landed. After all
players have teed off, the control unit determines which player's
ball is farthest from the hole and causes operation of the
projector to move to and project an image on the screen showing the
view from the position of the farthest ball looking toward the
green. The player again hits his or her ball toward the green shown
on the screen and again the trajectory of the ball is calculated
and the new position along the fairway determined. The control unit
then again determines the farthest ball from the hole, displays the
name of the player, and instructs the projector to provide the new
appropriate image. The identified player then hits his or her ball.
Play is continued in this manner until all players reach the green.
At that time, a simulated green is lighted and the players actually
putt the ball into a hole in the simulated green.
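The trajectory calculation described above rests on a simple velocity estimate between sensing planes. The following Python fragment is a hypothetical illustration, not drawn from the cited patents: it assumes each plane reports a crossing point and a crossing time, and derives an average launch speed and direction from two successive crossings.

```python
import numpy as np

def estimate_launch(p1, t1, p2, t2):
    """Estimate ball velocity from two successive sensing-plane crossings.

    p1, p2: (x, y, z) crossing points in meters; t1, t2: crossing times
    in seconds. Returns the speed (m/s) and a unit direction vector.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    velocity = (p2 - p1) / (t2 - t1)   # average velocity between the planes
    speed = float(np.linalg.norm(velocity))
    return speed, velocity / speed

# Example: planes 0.5 m apart along z, crossed 10 ms apart (assumed values).
speed, direction = estimate_launch((0.00, 0.20, 0.0), 0.000,
                                   (0.02, 0.25, 0.5), 0.010)
print(f"speed = {speed:.1f} m/s, direction = {direction}")
```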
[0007] However, current simulators provide an image on a planar
screen. The image has a minimal sense of dimension due to the
conventional limitations of creating a perception of depth on a
two-dimensional screen.
[0008] Accordingly, it would be desirable to develop a simulator
system and a method for providing enhanced depth perception to a
user of the simulator system, wherein the simulator system and the
method provide an individualized perception of depth based on a
position of the user.
SUMMARY OF THE INVENTION
[0009] Concordant and consistent with the present invention, a
simulator system and a method for providing enhanced depth
perception to a user of the simulator system, wherein the simulator
system and the method provide an individualized perception of depth
based on a position of the user, have surprisingly been discovered.
[0010] In one embodiment, a simulator system comprises: a user
tracking device for detecting a position of a user and generating a
sensor signal representing the position of the user; a processor
for receiving the sensor signal, analyzing the sensor signal, and
generating an image signal in response to the analysis of the
sensor signal, wherein the analyzing of the sensor signal includes
determining a position of a virtual camera corresponding to the
position of the user, the virtual camera being directed toward a
reference look-at-point; and an image generating device for
receiving the image signal and generating an image in response to
the image signal, wherein the image is modified in response to the
position and an orientation of the virtual camera relative to the
reference look-at-point.
[0011] In another embodiment, a simulator system comprises: a
plurality of user tracking devices arranged to track a position of
a user and generate a sensor signal representing the position of
the user; a processor for receiving the sensor signal, analyzing
the sensor signal, and generating an image signal in response to
the analysis of the sensor signal, wherein the analyzing of the
sensor signal includes determining a position of a virtual camera
corresponding to the position of the user, the virtual camera being
directed toward a reference look-at-point; and an image generating
device for receiving the image signal and generating an image in
response to the image signal, wherein the image is modified in
response to a change in at least one of the position and an
orientation of the virtual camera relative to the reference
look-at-point.
[0012] The invention also presents methods for providing enhanced
depth perception to a user of a simulator.
[0013] One method comprises the steps of: providing a user tracking
device to detect a position of a user and generate a sensor
signal representing the position of the user; analyzing the sensor
signal to determine a position of a virtual camera corresponding to
the position of the user, the virtual camera being directed toward
a reference look-at-point; and generating an image in response to
the analysis of the sensor signal, wherein the image is modified in
response to a change in at least one of the position and an
orientation of the virtual camera relative to the reference
look-at-point.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The above, as well as other advantages of the present
invention, will become readily apparent to those skilled in the art
from the following detailed description of the preferred embodiment
when considered in the light of the accompanying drawings in
which:
[0015] FIG. 1 is a schematic plan view representation of a
simulator system according to an embodiment of the present
invention; and
[0016] FIG. 2 is a schematic block diagram of the simulator system
of FIG. 1.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS OF THE INVENTION
[0017] The following detailed description and appended drawings
describe and illustrate various embodiments of the invention. The
description and drawings serve to enable one skilled in the art to
make and use the invention, and are not intended to limit the scope
of the invention in any manner. In respect of the methods
disclosed, the steps presented are exemplary in nature, and thus,
the order of the steps is not necessary or critical.
[0018] Referring to FIGS. 1 and 2, a simulator system 10 is
illustrated according to an embodiment of the present invention. As
shown, the simulator system 10 includes a display screen 12, a
plurality of user tracking devices 14, 16, a plurality of light
sources 18, 20, 22, a plurality of object tracking devices 24, 26,
28, a projector 30, and a processor 32. It is understood that any
number of display screens, user tracking devices, light sources, object tracking devices, projectors, and processors may be used. It is further understood that the specific positioning of the user tracking devices 14, 16, the light sources 18, 20, 22, the object tracking devices 24, 26, 28, the display screen 12 (or screens), and other equipment is not limited by the drawings. Other
configurations and relative positioning can be used.
[0019] The display screen 12 is positioned to receive an image from
the projector 30. It is understood that the display screen 12 may have any suitable size and shape. However, the display screen 12 is typically
formed from a substantially smooth material and positioned to
create a substantially flat resilient surface for withstanding an
impact and absorbing the energy of a moving sports object (e.g. a
golf ball or a baseball).
[0020] As shown, each of the user tracking devices 14, 16 is a
tracking camera in communication with the processor 32. The user
tracking devices 14, 16 are positioned such that a collective field
of view of the user tracking devices 14, 16 covers a pre-defined
field of activity 34 where user activity generally occurs. However,
it is understood that any other means of tracking a position of the
user may be used, such as an accelerometer/gyroscopic system, a transponder system, a sonic/sonar system, or structured
light/machine vision techniques known in the art, such as marked
attire (e.g. light emitting diode markers) or projected grid or
line patterns, for example. In certain embodiments, the user wears
an object such as a hat with one or more markers (e.g. dots or
other shape or pattern). As such, the markers are detected by the
user tracking devices 14, 16 as the user enters the field of
activity 34 and tracked as the user moves within a field of vision
of the user tracking devices 14, 16.
[0021] The light sources 18, 20, 22 may be any device or system for
illuminating at least the field of activity 34 where user activity
occurs. It is understood that in certain embodiments, the user
tracking devices 14, 16 may require a particular light source to
provide reliable tracking of the position of the user. It is
further understood that the light sources 18, 20, 22 may provide
aesthetic features to further enhance a simulated experience for
the user.
[0022] The object tracking devices 24, 26, 28 are positioned to
track a motion of any object such as sports implements used in
golf, tennis, and baseball for example. The object tracking devices
24, 26, 28 are typically high speed cameras for tracking at least a
speed, a direction, and a spin of a moving object. As a
non-limiting example, the object tracking devices 24, 26, 28 are similar to the 3Trak® high-speed photography technology used in
simulators manufactured by aboutGolf Ltd. (Maumee, Ohio). However,
other object tracking devices can be used, as appreciated by one
skilled in the art.
[0023] The projector 30 is positioned to project an image onto the
display screen 12. It is understood that a plurality of the
projectors 30 may be used to provide a panoramic or a surrounding
image. The projector 30 is adapted to receive an image signal from
the processor 32 to create and modify the image projected on the
display screen 12. It is understood that other displays can be used
to generate an image based upon the image signal.
[0024] The processor 32 is in data communication with the user
tracking devices 14, 16 for receiving a sensor signal therefrom,
analyzing the sensor signal, and generating the image signal in
response to the analysis of the sensor signal. As a non-limiting
example, the processor 32 analyzes the sensor signal based upon an
instruction set 36. The instruction set 36, which may be embodied
within any computer readable medium, includes processor executable
instructions for configuring the processor 32 to perform a variety
of tasks and calculations. As a non-limiting example, the
instruction set 36 includes processor executable algorithms and
commands relating to image processing, spatial representation,
geometrical analysis, three-dimensional physics, and a rendering of
digital graphics. It is understood that any suitable equations can be used to model the position of at least a portion of the user. It is
further understood that the processor 32 may execute a variety of
functions such as controlling various settings of the user tracking
devices 14, 16, the light sources 18, 20, 22, the object tracking
devices 24, 26, 28, and the projector 30, for example. In certain
embodiments, the processor 32 includes a software suite for
tracking a movement and trajectory of an object in the field of
activity 34.
[0025] As a non-limiting example, the processor 32 includes a
storage device 38. The storage device 38 may be a single storage
device or may be multiple storage devices. Furthermore, the storage
device 38 may be a solid state storage system, a magnetic storage
system, an optical storage system or any other suitable storage
system or device. It is understood that the storage device 38 is
adapted to store the instruction set 36. In certain embodiments,
data retrieved from at least one of the user tracking devices 14,
16 and the object tracking devices 24, 26, 28 is stored in the
storage device 38. It is further understood that certain known
parameters may be stored in the storage device 38 to be retrieved
by the processor 32.
[0026] As a further non-limiting example, the processor 32 includes
a programmable device or component 40. It is understood that the
programmable device or component 40 may be in communication with
any other component of the system 10 such as the user tracking
devices 14, 16 and the object tracking devices 24, 26, 28, for
example. In certain embodiments, the programmable component 40 is
adapted to manage and control processing functions of the processor
32. Specifically, the programmable component 40 is adapted to
control the analysis of the data signals (e.g. sensor signal
generated by the user tracking devices 14, 16) received by the
processor 32. It is understood that the programmable component 40
may be adapted to store data and information in the storage device
38, and retrieve data and information from the storage device 38.
In certain embodiments, the programmable component 40 includes a human
machine interface to allow the user to directly control certain
functions of the system 10.
[0027] In operation, the user tracking devices 14, 16 work in
concert such that a collective field of view of the user tracking
devices 14, 16 covers the entire field of activity 34 where user
activity is expected to occur. As the user enters the field of view
of each of the user tracking devices 14, 16, a plurality of time
synchronized images or representations are captured. Each of the
synchronized images captures at least a portion of a body of the
user, in particular the upper body. The images are processed (e.g.
binarization, thresholding, and the like) to produce "blob" shapes
representing a shape of the user, as appreciated by one skilled in
the art of image processing. The blob shapes are analyzed for features such as a head, a torso, and arms by determining blob extremities and applying pre-determined size and shape criteria.
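As an illustrative sketch of this image processing stage, the following Python fragment uses the OpenCV library to binarize a background-subtracted frame and keep the largest connected "blob." The threshold value, minimum blob area, and use of a fixed reference background are assumptions for the example, not details prescribed by the system 10.

```python
import cv2
import numpy as np

def find_user_blob(frame, background, min_area=5000):
    """Segment the user from a static background and return the largest blob.

    frame, background: grayscale uint8 images from one tracking camera.
    min_area filters out small noise blobs (in pixels). Returns a binary
    mask of the largest blob, or None if nothing exceeds min_area.
    """
    # Background subtraction followed by binarization (thresholding).
    diff = cv2.absdiff(frame, background)
    _, binary = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    # Connected-component labeling yields the candidate "blob" shapes.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    best, best_area = None, min_area
    for i in range(1, n):                 # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area > best_area:
            best, best_area = i, area
    if best is None:
        return None
    return (labels == best).astype(np.uint8) * 255
```

The extremity analysis that follows (head, torso, arms) would then operate on this mask, for example by scanning its topmost region against the pre-determined head criteria.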
[0028] As a non-limiting example, a center of mass calculation is
performed on the blob extremities to match a pre-determined "head"
criterion. In certain embodiments, a head center of mass position is determined in a plurality of images (one from each of the user tracking devices 14, 16), and a three dimensional position is subsequently
determined by a geometrical analysis of an intersecting ray
location from each of the user tracking devices 14, 16. It is
understood that a three dimensional position can be determined for
any portion of the body of the user. It is further understood that
a reference location of each of the user tracking devices 14, 16
relative to the display screen 12 is predetermined by calibrating
to a reference marker during a setup of the system 10.
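The geometrical analysis of intersecting rays can be sketched as a standard two-view triangulation. In this hypothetical Python fragment, each calibrated user tracking device contributes one ray through the feature (e.g. the head center of mass); because measured rays rarely intersect exactly, the midpoint of their closest approach is returned.

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Locate a feature in 3D from one ray per tracking camera.

    o1, o2: camera centers in world coordinates; d1, d2: unit direction
    vectors of the rays through the feature's image. Returns the midpoint
    of the shortest segment joining the two rays, which equals the exact
    intersection in the noise-free case. Assumes the rays are not parallel.
    """
    o1, d1 = np.asarray(o1, float), np.asarray(d1, float)
    o2, d2 = np.asarray(o2, float), np.asarray(d2, float)
    # Solve for s, t minimizing |(o1 + s*d1) - (o2 + t*d2)|.
    b = o2 - o1
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    s, t = np.linalg.solve(A, np.array([b @ d1, b @ d2]))
    return ((o1 + s * d1) + (o2 + t * d2)) / 2.0
```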
[0029] Once the user tracking devices 14, 16 have acquired the head
position of the user, the processor 32 and the user tracking
devices 14, 16 cooperate to perform real-time tracking of the head
position. Specifically, the user tracking devices 14, 16 transmit
positional information to the processor 32 in real-time via the
sensor signal. However, it is understood that a periodic transfer
of positional information may be used.
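The disclosure does not prescribe how the real-time positional stream is filtered; the sketch below assumes a simple exponential smoother, a common way to suppress per-frame jitter while keeping latency low. The smoothing factor is an illustrative tuning parameter.

```python
import numpy as np

class HeadFilter:
    """Exponentially smooth per-frame head positions to suppress jitter;
    heavier smoothing would make the virtual camera lag the user."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha        # 0 < alpha <= 1; higher = more responsive
        self.state = None

    def update(self, measured):
        """Fold one new (x, y, z) measurement into the smoothed estimate."""
        measured = np.asarray(measured, float)
        if self.state is None:
            self.state = measured
        else:
            self.state = self.alpha * measured + (1 - self.alpha) * self.state
        return self.state
```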
[0030] The processor 32 determines a position of a virtual camera
42 corresponding to the known player location and a known size of
the display screen 12. The virtual camera 42 is oriented and
directed at a reference look-at-point 44. As a non-limiting example, the reference look-at-point 44 is located at the position of the virtual camera 42, offset along the view direction by the distance from the head of the user to the display screen 12. A field of view of the
virtual camera 42 is maintained as a position of the virtual camera
42 is translated and rotated relative to the reference
look-at-point 44. The relative motion of the virtual camera 42
produces an effective rotation of a point of view of the virtual
camera 42 about the reference look-at-point 44 as the user moves in
the field of activity 34. Specifically, as a position of a head of
the user moves left-to-right, the virtual camera 42 translates a
corresponding left or right distance, and rotates slightly toward
the reference look-at-point 44. As the user moves toward or away from
the display screen 12, the virtual camera 42 is translated through
the projected image in the direction of the movement of the user.
As the user raises or lowers his/her head, the virtual camera 42 is
translated up or down a corresponding amount while rotating
slightly toward the reference look-at-point 44.
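A minimal sketch of this virtual camera update is shown below in Python, assuming a right-handed coordinate system with the user looking along -z; the head coordinates and screen distance are illustrative values only.

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a right-handed view matrix for a camera at `eye` aimed at
    `target`, here the tracked head position and the reference
    look-at-point."""
    eye, target = np.asarray(eye, float), np.asarray(target, float)
    f = target - eye
    f /= np.linalg.norm(f)                  # forward
    r = np.cross(f, up)
    r /= np.linalg.norm(r)                  # right
    u = np.cross(r, f)                      # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye       # translate world to eye space
    return view

# The look-at-point sits the head-to-screen distance in front of the user,
# so lateral head motion both translates and slightly rotates the camera.
head = np.array([0.1, 1.7, 2.5])            # assumed head position (m)
screen_distance = 2.5                        # assumed head-to-screen distance
look_at_point = head + np.array([0.0, 0.0, -screen_distance])
view = look_at(head, look_at_point)
```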
[0031] It is understood that in a conventional simulator
environment, one or more projectors display a "virtual world" on
one or more screens such that the user feels immersed in the
virtual environment. A common frame of reference between the
virtual world and the physical world must be identified as the
point of view, or position of the virtual camera 42. For example,
in a golf simulator environment the expected action location on the
hitting mat, from where the golf ball is hit, is the common frame
of reference. In the present invention, to achieve the feel of
three dimensional (3D) simulation, the position of the virtual
camera 42 is adjusted in real-time to match the head position of
the user as the user moves. In certain embodiments, the processor
32 receives head location updates at a rate of at least 60 Hz with less than one frame of latency so that the movement of the virtual
camera 42 can track the physical head position of the user without
a lag.
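Although the disclosure does not name a specific projection method, head-coupled perspective of this kind is commonly implemented with an asymmetric (off-axis) viewing frustum. The Python sketch below computes an OpenGL-style frustum matrix from the tracked head position and the physical screen size; the coordinate convention and clip distances are assumptions.

```python
import numpy as np

def off_axis_frustum(head, screen_w, screen_h, near=0.1, far=500.0):
    """Asymmetric projection frustum for head-coupled perspective.

    head: (x, y, z) head position in screen coordinates, with the origin
    at the screen center, x right, y up, and z toward the viewer.
    screen_w, screen_h: physical screen dimensions in meters. As the head
    moves, the frustum skews so the rendered scene stays registered with
    the real screen.
    """
    hx, hy, hz = head                       # hz > 0: distance to the screen
    scale = near / hz                       # map screen edges onto near plane
    left = (-screen_w / 2 - hx) * scale
    right = (screen_w / 2 - hx) * scale
    bottom = (-screen_h / 2 - hy) * scale
    top = (screen_h / 2 - hy) * scale
    # Standard OpenGL-style glFrustum matrix.
    m = np.zeros((4, 4))
    m[0, 0] = 2 * near / (right - left)
    m[1, 1] = 2 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)
    m[1, 2] = (top + bottom) / (top - bottom)
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2 * far * near / (far - near)
    m[3, 2] = -1.0
    return m
```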
[0032] A critical feature of the current invention is related to
the movement of the point of perspective from some arbitrary
location to that of a newly acquired position of the user. In a
multiple participant mode, the image projected on the display
screen 12 may change to a splash screen image displaying a name of
the "active" user (i.e. next user to enter the simulator field of
activity 34). Once a user is acquired and tracked, the screen
image changes to the position-rectified scene for a position of the
virtual camera 42 associated with a head position of the "active"
user.
[0033] In certain embodiments, the simulator system 10 is adapted
to track one or more users outside the simulator field of activity
34. Such multi-user tracking can be accomplished by the user
tracking devices 14, 16 or a separate tracking system, such that as
a user becomes "active", the simulator system 10 begins displaying
an image representing the scene relative to a position of the
"active" user. Therefore, as the "active" user approaches the field
of activity 34, the scene represented by the image on the display
screen 12 is already rectified to the position of the "active"
user.
[0034] Further, the initial position of the virtual camera 42 may
be set at a default location relative to the field of activity 34
and translated or faded to the location of the head of a new user when the new user is first tracked entering the field of activity 34.
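Such a translate-or-fade transition can be sketched as an eased interpolation between the default camera location and the newly tracked head position. The Python fragment below is illustrative only; the fade duration and smoothstep easing are assumed, not specified in the disclosure.

```python
import numpy as np

def blend_camera(default_pos, head_pos, t, fade_time=0.75):
    """Ease the virtual camera from a default location to a newly tracked
    head position over `fade_time` seconds; t is time since acquisition."""
    a = min(max(t / fade_time, 0.0), 1.0)
    a = a * a * (3.0 - 2.0 * a)             # smoothstep for a gentle ease
    default_pos = np.asarray(default_pos, float)
    head_pos = np.asarray(head_pos, float)
    return (1.0 - a) * default_pos + a * head_pos
```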
[0035] In a multiple player application, a set of images projected
on the display screen 12 is rectified to a point of view of a
specific individual user. As a non-limiting example, a unique
virtual camera view can be presented to each of the users (e.g. using cross-polarized glasses, or images strobed in sequence and synchronized with individual shutter glasses).
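The strobed, per-user presentation can be illustrated as round-robin frame scheduling. The hypothetical Python sketch below assumes the display refresh is divided evenly among the tracked users, with each user's shutter glasses opening only on that user's frames.

```python
def frame_view(users, frame_index):
    """Select which user's virtual-camera view to render on a given frame
    when views are strobed in sequence for individual shutter glasses.

    users: list of per-user camera states. A user's glasses would open
    only on frames where frame_index % len(users) matches their slot,
    so each user sees only the view rectified to their own head.
    """
    slot = frame_index % len(users)
    return slot, users[slot]

# Example: three tracked users sharing one 180 Hz display -> 60 Hz each.
cameras = ["cam_user_a", "cam_user_b", "cam_user_c"]
for f in range(6):
    slot, cam = frame_view(cameras, f)
    print(f"frame {f}: render {cam} (shutter slot {slot})")
```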
[0036] In many sports activities, a critical characteristic of
performance ability is related to accurate judgment of distance.
The present invention allows for a more realistic presentation of
depth in the virtual world displayed to the user by providing
relative motion cues that are typically used in real world
environments when judging mid- and far-field distances. The
simulator system 10 and method enhance the realistic feel of the simulator environment and allow coaching, training, and play to
occur with visual stimulus unavailable in typical projection sports
simulators.
[0037] In addition, in certain sporting activities, obstacles to play may occur. For example, far into the rough in a golf
event, one may encounter trees or shrubs. Using the simulator
system 10 according to the present invention, a participant may move his or her head or step to one side to see around the virtual obstacle image. In current state-of-the-art simulators without head
tracking, this is not possible.
[0038] Further, in certain sport activities, such as golf, accurate
judgment of terrain contour is critical to successful training and
performance. This is not realistically possible in simulators where
real-time motion interaction with the virtual world is not
obtained. However, activities such as kneeling and moving aside, which are common practices on golf greens, are sensed
by the simulator system 10 to provide terrain variations and an
enhanced perception of depth from the perspective of the user.
[0039] From the foregoing description, one ordinarily skilled in
the art can easily ascertain the essential characteristics of this
invention and, without departing from the spirit and scope thereof,
make various changes and modifications to the invention to adapt it
to various usages and conditions.
* * * * *