U.S. patent application number 11/433173 was filed with the patent office on 2006-12-07 for bimodal user interaction with a simulated object.
Invention is credited to Thomas G. Anderson.
Application Number: 20060277466 (11/433173)
Family ID: 38694353
Filed Date: 2006-12-07

United States Patent Application 20060277466
Kind Code: A1
Anderson; Thomas G.
December 7, 2006
Bimodal user interaction with a simulated object
Abstract
A method of providing user interaction with a computer
representation of a simulated object, where the user can control
the object in three dimensions. The method can provide for two
distinct states: a "holding" state, and a "released" state. The
holding state roughly corresponds to the user holding the simulated
object (although other metaphors such as holding a spring attached
to the object, or controlling the object at a distance can also be
suitable). The released state roughly corresponds to the user not
holding the object. A simple example of the two states can include
the holding, then throwing of a ball. While in the holding state,
the method provides force feedback to the user representative of
forces that the user might experience if the user were holding an
actual object. The forces are not applied when in the released
state.
Inventors: Anderson; Thomas G. (Albuquerque, NM)
Correspondence Address: V. Gerald Grafe, esq., P.O. Box 2689, Corrales, NM 87048, US
Family ID: 38694353
Appl. No.: 11/433173
Filed: May 13, 2006
Related U.S. Patent Documents

Application Number: 60681007
Filing Date: May 13, 2005
Current U.S. Class: 715/700
Current CPC Class: G06F 3/016 20130101; G06F 3/017 20130101
Class at Publication: 715/700
International Class: G06F 3/00 20060101 G06F003/00
Claims
1. A method of providing user interaction with a computer
representation of a simulated object, wherein the user controls a
user input device in three dimensions, comprising: a. Providing a
"holding" interaction state, wherein the user input device is
placed in correspondence with the simulated object, and applying
simulated object forces to the user input device, where simulated
object forces comprise forces representative of forces that would
be experienced by the user if the user were interacting with an
actual object with characteristics similar to the simulated object;
b. Providing a "released" interaction state, in which state
simulated object forces are either not applied to the user input
device or are significantly reduced from the forces that would be
applied in the holding state.
2. A method as in claim 1, further comprising switching from the
holding interaction state to the released interaction state
responsive to direction from the user.
3. A method as in claim 2, wherein the direction from the user
comprises one or more of the user activating a switch; the user
issuing a voice direction; the user moving the simulated object
within a simulated space to or past a defined boundary within the
simulated space; the user touching a virtual object in the
simulated space; the user imparting force, acceleration, or
velocity to the simulated object of at least a threshold value.
4. A method as in claim 1, further comprising switching from the
released interaction state to the holding interaction state
responsive to direction from the user.
5. A method as in claim 3, wherein the direction from the user
comprises one or more of the user causing a cursor to move within a
simulated space such that the cursor approaches within a threshold
distance in the simulated space of the simulated object; the user
activating a hardware switch; the user moving a cursor along a
defined path in a simulated space; touching a virtual object in the
simulated space; applying a predetermined force to an input device;
the user moving a cursor in the simulated space along a path that
substantially matches the path of the simulated object; the user
moving the input device along a defined path; the user issuing a
voice direction.
6. A method as in claim 1, wherein in the holding interaction
state, the simulated object forces comprise forces representative
of forces that would be experienced by the user if the user were
holding an actual object with characteristics similar to the
simulated object, including both pushing and pulling the
object.
7. A method as in claim 1, wherein the simulated object has a first
set of properties while in the holding interaction state, and a
second set of properties, different from the first set, while in
the released interaction state.
8. A method as in claim 1, wherein the simulated object interacts
with a simulated space in a first manner when in the holding
interaction state, and in a second manner, different from the first
manner, when in the released interaction state.
9. A method as in claim 1, wherein in the holding interaction
state, the simulated object forces communicate to the user inertia
and momentum of the simulated object.
10. A method as in claim 1, further comprising presenting to the
user a visible representation of the simulated object, and wherein
the visible representation of the simulated object, a displayed
space including the simulated object, or both, is different in the
holding interaction state than in the released interaction
state.
11. A method of representing a simulated object in a three
dimensional computer simulation, comprising: a. Determining whether
the simulated object is being directly controlled by the user; b.
If the simulated object is being directly controlled by the user,
then representing the object within the simulation according to a
first set of interaction properties; c. If the simulated object is
not being directly controlled by the user, then representing the
object within the simulation according to a second set of
interaction properties, different from the first set.
12. A method as in claim 11, wherein the first set of interaction
properties comprises a first set of object properties, and wherein
the second set of interaction properties comprises a second set of
object properties, different from the first set of object
properties.
13. A method as in claim 12, wherein the object is represented by
at least one of mass, size, gravitational constant, time step, or
acceleration, which has a different value in the first set of
properties than in the second set of properties.
14. A method as in claim 12, further comprising determining forces
to be applied to a user controlling the simulated object, which
forces are determined from the first set of properties.
15. A method as in claim 11, wherein the position of the simulated
object is represented substantially by P=Po+Vo*t+1/2*a*t*t; and
wherein t is different in the first and second interaction
properties.
16. A method of simulating an object in a computer-simulated
environment, wherein the object has a set of object properties and
the environment has a set of environment properties, and wherein
the simulation comprises determining interactions according to
real-world physics principles applied to the object and environment
properties, and wherein a user can control the object within the
environment, wherein at least one of the physics principles, the
object properties, and the environment properties is different when
the user is controlling the object than when the user is not
controlling the object.
17. A method of allowing a user to propel an object in a computer
game using a force-capable input device, comprising: a. Moving the
object within the game responsive to user motion of or force
applied to the input device; b. Communicating forces to the user
via the input device, which forces are representative of the
object's simulated properties, which properties include at least
one of mass, acceleration, gravitational force, or wind
resistance; c. Accepting a "release" indication from the user,
and then d. Moving the object within the game according to the
object's simulated energy, position, velocity, acceleration, or a
combination thereof, near the time of the release indication, and
according to interaction properties, at least one of which is
different after the release indication than before the release
indication.
18. A method as in claim 17, wherein the object comprises a
football in a computer football game.
19. A method as in claim 17, wherein the object comprises a
basketball in a computer basketball game.
20. A method as in claim 17, wherein the object comprises a bowling
ball in a computer bowling game.
21. A method as in claim 17, wherein the object comprises a dart in
a computer dart game.
22. A method as in claim 17, wherein the object comprises a
baseball in a computer baseball game.
23. A method as in claim 17, wherein the object comprises a soccer
ball in a computer soccer game.
24. A method as in claim 17, wherein the release indication
comprises motion of the object to a boundary within the game.
25. A method as in claim 17, further comprising displaying to the
user a visible representation of the object and the game, and
wherein the visible representation is different after the release
indication than before the release indication.
26. A method as in claim 17, wherein at least one of the different
property or properties is different in a manner that encourages the
propelled object more toward a desired result than would have been
the case had the property not been different.
27. A method as in claim 1, wherein the position, velocity,
attendant forces, or a combination thereof are determined at a
first rate when in the holding mode, and at a second rate, slower
than the first rate, when in the released mode.
28. A method as in claim 27, wherein the direction of motion of the
simulated object is changed on transition from the holding state to
the released state.
29. A method as in claim 1, wherein at least one of the
object's properties in the simulation, the simulation properties,
or the simulation time is different in the released state than in
the holding state.
30. A method as in claim 27, wherein the direction of the object's
velocity is different in a manner that encourages the object toward
a goal direction or destination.
31. A method as in claim 1, wherein at least one object property,
simulation property, or environment property is different in the
holding state than in the released state.
32. A method as in claim 1, wherein the simulated object forces are
determined from a first set of object, simulation, and environment
properties; and the simulated object's motion within a simulated
environment is determined from a second set of object, simulation,
and environment properties; wherein at least one of the object,
simulation, and environment properties in the second set is
different than in the first set.
33. A method as in claim 15, wherein at least one of Po, Vo, and a
is different in the released mode than in the holding mode.
34. A method as in claim 4, wherein at least one of the direction
of motion, the velocity, or the acceleration of the simulated
object is changed on transition from the holding state to the
released state.
35. A method as in claim 2, wherein the holding state is entered
responsive to a button pressed by the user, and the released state
is entered responsive to the user releasing the button.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. provisional
application 60/681,007, "Computer Interface Methods and
Apparatuses," filed May 13, 2005, incorporated herein by
reference.
BACKGROUND
[0002] The present invention relates to methods and apparatuses
related to user interaction with computer-simulated objects, and
more specifically to methods and apparatuses related to force
feedback in user interaction with different object behavior
dependent on whether or not a user is holding or controlling the
object.
[0003] Computing technology has seen a many-fold increase in
capability in recent years. Processors work at ever higher rates;
memories are ever larger and always faster; mass storage is larger
and cheaper every year. Computers now are essential elements in
many aspects of life, and are often used to present
three-dimensional worlds to users, in everything from games to
scientific visualization.
[0004] The interface between the user and the computer has not seen
the same rate of change. Screen windows, keyboard, monitor, and
mouse are the standard, and have seen little change since their
introduction. Many computers are purchased with great study as to
processor speed, memory size, and disk space. Often, little thought
is given to the human-computer interface, although most of the
user's experience with the computer will be dominated by the
interface (rarely does a user spend significant time waiting for a
computer to calculate, while every interaction must use the
human-computer interface).
[0005] As computers continue to increase in capability, the
human-computer interface becomes increasingly important. The
effective bandwidth of communication with the user will not be
sufficient using only the traditional mouse and keyboard for input
and monitor and speakers for output. More capable interface support
will be desired to accommodate more complex and demanding
applications. For example, six degree of freedom input devices,
force and tactile feedback devices, three dimensional sound, and
stereo or holographic displays can improve the human-computer
interface.
[0006] As these new interface capabilities become available, new
interface methods are needed to fully utilize new modes of
human-computer communication enabled. Specifically, new methods of
interaction can use the additional human-computer communication
paths to supplement or supplant conventional communication paths,
freeing up traditional keyboard input and visual feedback
bandwidth. The use of force feedback, or haptics, can be especially
useful in allowing a user to feel parts of the interface, reducing
the need for a user to visually manage interface characteristics
that can be managed by feel. Users interfacing with non-computer
tasks routinely exploit the combination of visual and haptic
feedback (seeing one side of a task while feeling the other);
bringing this sensory combination into human-computer interfaces
can make such interfaces more efficient and more intuitive for the
user. Accordingly, there is a need for new methods of
human-computer interfacing that make appropriate use of haptic and
visual feedback.
[0007] As a specific example, many contemporary computer games
require the user to throw or otherwise propel an object. The games
typically allow the user to specify a throw by pressing a button or
combination of buttons. The timing of the button press, often
relative to the timing of other actions occurring in the game,
controls the effect of the throw (e.g., the accuracy or distance of
the throw). Some games display a slider or power bar that indicates
direction or force of a throw; the user must press the appropriate
button when the slider or bar is at the right value for the desired
throw. The user can thereby control aspects of the throw, but not
with any of the skills learned in real world throwing.
Specifically, the direction of the user's hand motion and the force
applied by the user near the release of the throw generally are not
significant to the throwing action in the game. Also, the object
being thrown is generally represented within the game independent
of whether it is being held or has been released, forcing the user
to adjust the control of the object to the constraints of the
simulations in the game. These limitations in current approaches
can produce unrealistic and difficult to learn game
interaction.
SUMMARY OF THE INVENTION
[0008] The present invention can provide a method of providing user
interaction with a computer representation of a simulated object,
where the user can control the object in three dimensions. The
method can provide for two distinct states: a "holding" state, and
a "released" state. The holding state roughly corresponds to the
user holding the simulated object (although other metaphors such as
holding a spring attached to the object, or controlling the object
at a distance can also be suitable). The released state roughly
corresponds to the user not holding the object. A simple example of
the two states can include the holding, then throwing of a ball.
While in the holding state, the method provides force feedback to
the user representative of forces that the user might experience if
the user were holding an actual object. The forces are not applied
when in the released state.
[0009] The present invention can allow the user to direct
transitions from the holding state to the released state (e.g.,
releasing the ball at the end of the throwing motion), from the
released state to the holding state (e.g., picking up a ball), or
both. The present invention can also provide forces that represent
both pushing and pulling the simulated object. The present
invention can also accommodate different haptic and visual
expectations of the user by providing different interaction of the
object within a simulated space in the two modes. For example, a
ball can be simulated with a large mass when being held by the
user, to provide significant force feedback communication to the
user. Upon release, however, the ball's mass internal to the
simulation can be adjusted to a smaller value, to allow the ball to
move and interact with other objects on a scale more fitting to the
visual display capabilities.
[0010] The present invention can also provide a method of
representing a simulated object within a three dimensional computer
simulation. The object can be represented within the simulation
with a first set of properties when the user is directly
controlling the object (e.g., holding the object), and with a
second set of properties when the user is not directly controlling
the object (e.g., after the user releases the object). The
properties different between the two sets can comprise properties
of the simulation (e.g., time or gravity forces), properties of the
simulated object (e.g., mass or inertia), or a combination thereof.
The simulation can comprise a simulation of real-world physics
interactions, such as is supported by contemporary hardware
accelerators, with the physics principles, the object properties,
the environment properties, or a combination thereof, changed when
the user initiates or terminates direct control of the object.
[0011] The present invention can be applied to computer game
applications, where the present invention can provide for enhanced
user experience in propelling an object. When the user is holding
the object, the present invention can provide for a set of object
and interaction forces that optimize the user experience of holding
the object. When the user indicates a release of the object, the
present invention can provide for a set of object and simulation
properties that optimize the simulated object's behavior within the
game environment. The present invention can be applied to games
such as football, basketball, bowling, darts, and soccer.
[0012] The advantages and features of novelty that characterize the
present invention are pointed out with particularity in the claims
annexed hereto and forming a part hereof. However, for a better
understanding of the invention and the methods of its making and
using, reference should be made to the drawings which form a
further part hereof, and to the accompanying descriptive matter in
which there are illustrated and described preferred embodiments of
the present invention. The description below involves specific
examples; those skilled in the art will appreciate other examples
from the teachings herein, and combinations of the teachings of the
examples.
DESCRIPTION OF THE INVENTION
[0013] The present invention can provide a method of providing user
interaction with a computer representation of a simulated object,
where the user can control the object in three dimensions. The
method can provide for two distinct states: a "holding" state, and
a "released" state. The holding state roughly corresponds to the
user holding the simulated object (although other metaphors such as
holding a spring attached to the object, or controlling the object
at a distance can also be suitable). The released state roughly
corresponds to the user not holding the object. A simple example of
the two states can include the holding, then throwing of a ball.
While in the holding state, the method provides force feedback to
the user representative of forces that the user might experience if
the user were holding an actual object. The forces are not applied
when in the released state.
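The two-state interaction above can be pictured as a small update loop that renders forces only while holding. The following is an illustrative sketch only, not an implementation from the patent; the spring force model and the stiffness constant are assumptions chosen for clarity.

```python
# Illustrative sketch of the holding/released states described above.
# The spring-toward-object force model and the stiffness k are assumed
# for illustration; the patent does not prescribe a particular model.

HOLDING, RELEASED = "holding", "released"

def feedback_force(state, device_pos, object_pos, k=200.0):
    """Return the force vector sent to the haptic device for one tick."""
    if state == RELEASED:
        # Released state: simulated object forces are not applied.
        return (0.0, 0.0, 0.0)
    # Holding state: render a spring pulling the device toward the
    # simulated object, approximating the feel of holding it.
    return tuple(k * (o - d) for o, d in zip(object_pos, device_pos))
```

A caller would invoke this once per haptic update tick, switching the state variable on the user's release or grab direction.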
[0014] The present invention can allow the user to direct
transitions from the holding state to the released state (e.g.,
releasing the ball at the end of the throwing motion), from the
released state to the holding state (e.g., picking up a ball), or
both. The present invention can also provide forces that represent
both pushing and pulling the simulated object. The present
invention can also accommodate different haptic and visual
expectations of the user by providing different interaction of the
object within a simulated space in the two modes. For example, a
ball can be simulated with a large mass when being held by the
user, to provide significant force feedback communication to the
user. Upon release, however, the ball's mass internal to the
simulation can be adjusted to a smaller value, to allow the ball to
move and interact with other objects on a scale more fitting to the
visual display capabilities.
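The mass adjustment in the ball example above can be sketched as a property swap on the holding-to-released transition. This is a hypothetical illustration; the specific mass values are arbitrary and not taken from the patent.

```python
# Hypothetical sketch of the property swap described above: a large
# mass while held (strong force feedback), a smaller mass once
# released (livelier visual motion). Values are assumed.

HELD_MASS = 5.0        # kg, assumed for illustration
RELEASED_MASS = 0.5    # kg, assumed for illustration

class SimulatedBall:
    def __init__(self):
        self.held = True
        self.mass = HELD_MASS

    def release(self):
        """Switch to the released property set on state transition."""
        self.held = False
        self.mass = RELEASED_MASS
```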
[0015] The present invention can also provide a method of
representing a simulated object within a three dimensional computer
simulation. The object can be represented within the simulation
with a first set of properties when the user is directly
controlling the object (e.g., holding the object), and with a
second set of properties when the user is not directly controlling
the object (e.g., after the user releases the object). The
properties different between the two sets can comprise properties
of the simulation (e.g., time or gravity forces), properties of the
simulated object (e.g., mass or inertia), or a combination thereof.
The simulation can comprise a simulation of real-world physics
interactions, such as is supported by contemporary hardware
accelerators, with the physics principles, the object properties,
the environment properties, or a combination thereof, changed when
the user initiates or terminates direct control of the object.
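One concrete way the simulation properties could differ between the two states is the effective time step in the kinematic update P=Po+Vo*t+1/2*a*t*t recited in the claims. The sketch below is an assumption-laden illustration: the per-state time scale factors are invented for the example, not specified by the patent.

```python
# Sketch of the claimed kinematic update with the effective time
# scaled differently in the holding and released states (one way
# "simulation time" can differ between the two property sets).
# The scale factors are assumptions for illustration.

def position(p0, v0, a, t, holding):
    """Compute P = Po + Vo*t + 1/2*a*t*t with a per-state time scale."""
    time_scale = 1.0 if holding else 0.5   # assumed scaling per state
    ts = t * time_scale
    return p0 + v0 * ts + 0.5 * a * ts * ts
```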
[0016] The present invention can be applied to computer game
applications, where the present invention can provide for enhanced
user experience in propelling an object. When the user is holding
the object, the present invention can provide for a set of object
and interaction forces that optimize the user experience of holding
the object. When the user indicates a release of the object, the
present invention can provide for a set of object and simulation
properties that optimize the simulated object's behavior within the
game environment. The present invention can be applied to games
such as football, basketball, bowling, darts, and soccer.
[0017] Various terms may be referred to herein, and a discussion of
their respective meanings is presented in order to facilitate
understanding of the invention. Haptics is the field that studies
the sense of touch. In computing, haptics refers to giving a User a
sense of touch through a Haptic Device. A Haptic Device (or Device)
is the mechanism that allows a User to feel virtual objects and
sensations. The forces created from a Haptic Device can be
controlled through motors or any other way of transferring
sensations to a User. The position of a Device typically refers to
the position of a handle on the Device that is held by the User. Any of
the algorithms described can vary depending on where the handle of
the Device is within its workspace. Haptic Devices can have any
number of Degrees of Freedom (DOF), and can have a different number
of DOF for tracking than for forces. For example a Haptic Device
can track 3 DOF (x, y, and z), and output forces in 3 DOF (x, y,
and z), in which case the tracked DOF are the same as the forces
DOF. As another example, a Haptic Device can track 6 DOF (x, y, and
z, and rotation about x, y, and z), but only output forces in 3 DOF
(x, y, and z), in which case the tracked DOF are a superset of the forces DOF.
Additionally, any of a Device's DOF can be controlled by direct
movements not relative to a fixed point in space (like a standard
computer mouse), controlled by direct movements relative to a fixed
point in space (like a mechanical tracker, mechanically grounded to
a table it is resting on, where it can only move within a limited
workspace), or controlled by forces against springs, movements
around pivot points, twisting or turning a handle, or another type
of limited movements (joystick, spaceball, etc).
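The distinction above between tracked DOF and force DOF can be captured with simple bookkeeping, as in this hypothetical sketch (the class and axis names are invented for illustration):

```python
# Hypothetical sketch of DOF bookkeeping: a device may track more
# degrees of freedom than it can output forces on, in which case the
# tracked DOF are a superset of the force DOF, as described above.

from dataclasses import dataclass

@dataclass
class HapticDevice:
    tracked_dof: frozenset   # axes the device senses
    force_dof: frozenset     # axes with motor output

    def forces_are_subset(self):
        """True when every force axis is also tracked."""
        return self.force_dof <= self.tracked_dof

# A 6-DOF-tracking, 3-DOF-force device, like the second example above.
pen = HapticDevice(frozenset({"x", "y", "z", "rx", "ry", "rz"}),
                   frozenset({"x", "y", "z"}))
```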
[0018] A User is a person utilizing a Haptic Device to play a game
or utilize some other type of application that gives a sense of
touch. The User can experience a simulation or game in ways that
are consistent with a Character (described below) such that the
User feels, sees, and does what the Character does. The User can
also have any portion or all of the interactions with the
simulation be separate from the Character. For example, the User's
view (i.e. what is seen on the monitor) does not have to be lined
up with a Character's view (i.e. what a Character would see given
the environment and the location of the Character's eyes), whether
the Character is currently being controlled or not. The User can
directly feel what a Character feels, through the Haptic Device
(for example, the weight of picking up an object), or the User can
feel a representation of what the Character feels to varying
degrees (for example a haptic representation of the Character
spinning in the air). A Character is a person or object controlled
by a User in a video game or application. A Character can also be a
first person view and representation of the User. Characters can be
simple representations described only by graphics, or they can have
complex characteristics such as Inverse Kinematic equations, body
mass, muscles, energy, damage levels, artificial intelligence, or
can represent any type of person, animal, or object in real life in
varying degrees of realism. A Character can be a complex system
like a human, or it can simply be a simple geometric object like a
marble in a marble controlling game. Characters and their
information can be described and contained on a single computer, on
multiple computers, and in online environments. Characters can
interact with other Characters. A Character can be controlled by
the position of a Device or a Cursor, and a Character can be any
Cursor or any object.
[0019] A Cursor is a virtual object controlled by a User
controlling a Haptic Device. As the User moves the Haptic Device,
the Cursor can move in some relationship to the movement of the
Device. Typically, though not always, the Cursor moves
proportionally to the movement of the Haptic Device along each axis
(x,y,z). Those movements, however, can be scaled, rotated, or
skewed or modified by any other function, and can be modified in
these ways differently along different axes. For example, a Cursor
can be controlled through a pivot point, where a movement of the
Haptic Device to the right would move the Cursor to the left (the
amount of movement can depend on a simulation of where the pivot
point is located with respect to the Cursor). A Cursor can be a
point, a sphere, any other geometric shape, a polygonal surface, a
volumetric representation, a solids model, a spline based object,
or can be described in any other mathematical way. A Cursor can
also be a combination of any number of those objects. The
graphical, haptic, and sound representations of a Cursor can be
different from each other. For example, a Cursor can be a perfect
haptic sphere, but a polygonal graphical sphere.
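The per-axis scaling and pivot-style inversion described above amount to a mapping from device motion to Cursor motion. A minimal sketch, with all gains assumed for illustration:

```python
# Illustrative per-axis device-to-cursor mapping: motion scaled
# differently along each axis, with an optional pivot-style inversion
# on x (device right moves the Cursor left). Gains are assumptions.

def device_to_cursor(dx, dy, dz, scale=(2.0, 1.0, 1.0), pivot_x=True):
    sx, sy, sz = scale
    cx = -dx * sx if pivot_x else dx * sx   # pivot flips x motion
    return (cx, dy * sy, dz * sz)
```

Any other linear or non-linear transformation (rotation, skew) could replace the scaling here without changing the structure of the mapping.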
[0020] A Cursor can be controlled directly, or can be controlled
through the interactions of one or more virtual objects that
interact with another virtual object or other virtual objects. For
example, a Haptic Device can control a point that is connected to a
sphere with a spring, where the sphere is used as the Cursor. A
Cursor's movements can be constrained in the visual, haptic, audio,
or other sensory domain, preventing the Cursor, or its use, from
moving into a specified area. Cursor movements can be constrained
by objects, algorithms, or physical stops on a Device as examples.
The position of a Cursor and the forces created can be modified
with any type of linear or non-linear transformation (for example,
scaling in the x direction). Position can be modified through
transformations, and the forces created can be modified through an
inverse function to modify the perception of Cursor movements, to
modify objects (such as scale, rotation, etc), or to give
interesting effects to the User. A Cursor can have visual, haptic,
and sound representations, and any properties of any of those three
Cursor modalities can be different. For example, a Cursor can have
different sound, graphic, and haptic representations. A Cursor does
not need to be shown visually. Also, a Cursor can vary in any of
the ways described above differently at different times. For
example, a Cursor can have a consistent haptic and graphic position
when not touching objects, but a different haptic and graphic
position when objects are touched. A Cursor can be shown
graphically when preparing to perform an action (like beginning a
snowboard run, beginning a golf swing, or selecting an object), and
then not shown graphically when performing the action
(snowboarding, swinging a golf club, or holding an object,
respectively).
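The spring-connected Cursor example above (a device-controlled point attached to a sphere by a spring) can be sketched as one step of a damped spring simulation. Stiffness, damping, mass, and time step below are assumptions for illustration, shown in one dimension for brevity.

```python
# Minimal sketch of the indirect-control example above: the device
# drives a point, and the Cursor (a sphere) is coupled to it by a
# spring. All constants are assumed; shown in 1D for brevity.

def cursor_step(cursor_pos, cursor_vel, device_pos,
                k=50.0, damping=5.0, mass=1.0, dt=0.01):
    """Advance the spring-coupled Cursor by one simulation step."""
    force = k * (device_pos - cursor_pos) - damping * cursor_vel
    vel = cursor_vel + (force / mass) * dt
    return cursor_pos + vel * dt, vel
```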
[0021] A Cursor can also be a representation of a User's
interaction with a Character or an action, rather than a specific
object used to touch other objects. For example, a Cursor can be
the object used to implement a golf swing, and control the weight
and feel of the club, even though the User never actually sees the
Cursor directly. A Cursor can change haptic and graphic
characteristics as a function of time, or as a function of another
variable. A Cursor can be any object, any Character, or control any
part of a Character or object. A Cursor can be in the shape of a
human hand, foot, or any other part or whole of a human, animal, or
cartoon. The shape, function, and motion of a Cursor can be related
to that of an equivalent or similar object in the real world. For
example, a Cursor shaped like a human hand can change wrist, hand
and finger positioning in order to gesture, grasp objects, or
otherwise interact with objects similarly to how real hands interact
with real objects.
[0022] In video games and other computer applications that simulate
throwing--such as propelling an object in an intended direction
using a limb--there is a need to allow the computer User to
actually feel a representation of the throwing process. This haptic
experience of throwing can enable the User to more realistically
control a virtual object or Character, can give a simulation a
greater intensity or requirement of skill, and can make game-play
more enjoyable. There are a number of applications, simulations,
and games that simulate throwing, including overhand throwing
(football, water polo, Quidditch, knife or axe throwing, javelin,
darts, etc.), sideways throwing (baseball, skipping stones, etc.),
underhand throwing (softball, egg toss, bowling, rugby, skeeball,
horseshoes, etc.), heaving (throwing a heavy boulder, log toss in
Celtic games, pushing another Character into the air, soccer
throw-in, fish toss, etc.), overhand pushing (basketball, throwing
something onto a roof, shotput, etc.), spinning and throwing
(hammer throw, bolo, lasso, discus, etc.), cross body throwing
(frisbee, ring toss, etc.), mechanical throwing (atlatl, lacrosse,
fishing cast, etc.), dropping an object (tossing an object down to
another person, jacks, etc.), flipping an object off a finger
(marbles, flipping a coin, etc.), and skilled tossing (juggling,
baton twirling, etc.). All of those applications are examples of
what is meant herein by "to propel" an object.
[0023] Conventionally, the act of throwing is simulated by pressing
buttons and/or moving joysticks using video game controllers. These
types of interactions focus almost solely on the timing of game
controller interactions and often provide the User only visual
feedback. Although the timing of movements involves learning and
can give a basic sense of the task, the lack of haptic feedback,
and the limited control that buttons and joysticks offer, limit the
realism and the more complex learning that this operation can
afford with 3D force feedback interactions.
[0024] An interface in many games must provide the user with a
method of indicating discharge of an object, for example release of
a thrown ball. Conventional game interfaces use buttons or
switches, which are unrelated to the usual methods of releasing
objects and consequently do not provide a very realistic
interface. In the real
world, objects are thrown by imparting sufficient momentum to them.
A haptic interface can accommodate interaction that allows
intuitive release.
[0025] One or more force membranes can be presented to the user,
where a force membrane is a region of the haptic space accessible
by the user that imposes a force opposing motion toward the
membrane as the user approaches the membrane. For example, a
membrane placed in the direction of the intended target can
discourage the user from releasing the object accidentally, but can
allow intentional release by application of sufficient force by the
user to exceed the membrane's threshold. As another example,
consider a user throwing a football. The user brings the haptic
interaction device back (as if to cock the arm back to throw) past
a membrane, then pushes it forward (feeling the weight and drag of
the football haptically), and if the user pushes the football
forward fast enough to give it the required momentum, the football
is thrown. Motion of the object in a throwing direction can be
accompanied by a combination of dynamics forces and viscosity to
guide the user's movement. These forces can make directing the
object thrown much easier. The forces related to the membrane can
drop abruptly when the object is thrown, or can be decreased over
time, depending on the desired interface characteristics. As
examples, such a release mechanism can be used to throw balls or
other objects (e.g., by pushing the object forward through a force
barrier disposed between the user location and the target), to drop
objects (e.g., by pushing the object downward through a force
barrier between the user location and the floor), and to discharge
weapons or blows (e.g., by pushing a representation of a weapon or
character portion through a force barrier between the weapon or
character portion and the target).
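By way of illustration only, the membrane threshold described above can be sketched as follows. The function names, the linear stiffness model, and the numeric thresholds are assumptions chosen for the example, not details drawn from this application:

```python
def membrane_force(cursor_pos, membrane_pos, stiffness, max_force):
    """Opposing force as the cursor pushes into a 1-D membrane plane
    located at membrane_pos along the throw axis (illustrative model)."""
    penetration = cursor_pos - membrane_pos
    if penetration <= 0.0:
        return 0.0                      # cursor has not reached the membrane
    return min(stiffness * penetration, max_force)

def check_release(cursor_pos, membrane_pos, stiffness, max_force, user_force):
    """Release the held object when the user's applied force exceeds the
    membrane's maximum opposing force (one possible release criterion)."""
    opposing = membrane_force(cursor_pos, membrane_pos, stiffness, max_force)
    released = opposing >= max_force and user_force > max_force
    return released, opposing
```

A real interface would evaluate this at the haptic update rate and render `opposing` back to the device; this sketch shows only the threshold logic.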
[0026] Other triggers can be used to effect release of the object.
As an example, a membrane can apply an increasing force, and the
object released when the user-applied force reaches a certain
relation to the membrane's force (e.g., equals the maximum force,
or is double the present force). Release can also be triggered by
gesture recognition: a hand moving forward rapidly, then quickly
slowing, can indicate a desired throw.
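The gesture trigger just mentioned (fast forward motion followed by quick slowing) might be detected from a sequence of forward-velocity samples. The thresholds and function name here are hypothetical:

```python
def detect_throw_gesture(velocities, fast_threshold, slow_threshold):
    """Return True if the forward speed first exceeds fast_threshold and
    later drops to slow_threshold or below (illustrative throw gesture)."""
    was_fast = False
    for v in velocities:
        if v >= fast_threshold:
            was_fast = True             # the hand moved forward rapidly
        elif was_fast and v <= slow_threshold:
            return True                 # rapid motion then quick slowing
    return False
```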
[0027] The direction of the object can be determined in various
ways, some of which are discussed in more detail in U.S.
provisional application 60/681,007, "Computer Interface Methods and
Apparatuses," filed May 13, 2005, incorporated herein by reference.
As examples: the position, at release, pre-release, or both, can be
used to set direction; the object can be modeled as attached by a
spring to the cursor, and the direction of throw determined from
the relative positions.
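As a minimal sketch of the spring-attachment approach to direction, the throw direction can be taken from the displacement between the object and the Cursor at release. The object-to-cursor convention used below is one possible choice, not a requirement of the application:

```python
import math

def throw_direction(cursor_pos, object_pos):
    """Unit vector along the object-to-cursor displacement, treating the
    object as trailing the Cursor on a simulated spring (a sketch)."""
    d = [c - o for c, o in zip(cursor_pos, object_pos)]
    mag = math.sqrt(sum(x * x for x in d))
    if mag == 0.0:
        return (0.0, 0.0, 0.0)          # no displacement: direction undefined
    return tuple(x / mag for x in d)
```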
[0028] A visual indication can be combined with the haptic
information to indicate status of the release; for example, an
image of an arm about to throw can be displayed in connection with
the image of the ball when the ball is ready to be thrown (pulled
through the first membrane in the previous example). A haptic cue
can also be used, for example a vibration in the device or
perceptible bump in its motion.
[0029] Bimodal interface. The present invention can provide for two
separate modes of object simulation, optionally selectable
responsive to direction from the user. In a first mode, the user is
directly controlling a simulated object. This can correspond to, as
examples, the user holding the simulated object, the user holding a
spring or other intermediate structure that in turn holds the
object, or the user controlling the object at a distance (e.g.,
according to a magical levitation spell or a space-type tractor
beam). In a second mode, the user is not directly controlling the
simulated object. This can correspond to, as examples, the user
having released a thrown object, the user dropping an object, some
interaction with the simulated environment causing the user to lose
control of the object, or the distance control being disrupted
(e.g., the spell is broken). The object can be represented to the
user and to the simulation with different interaction properties in
the two states, where "interaction properties" that can be
different are aspects of the user's perception of the object that
change upon transition between the modes, including as examples the
properties of the object in the simulation; the properties of the
object as perceived by the user (e.g., texture, size); the rate of
determining object characteristics (e.g., position or deformation);
simulation properties (e.g., time scales, physics models);
environment properties (e.g., gravitational constants, relative
size of objects); and execution of a corresponding software program
on a different thread or a different processor. The transition
between the states can be at any point where the user associates
the state transition with the release of the object, including
simultaneous with the release of an object by the user or before or
after such release.
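The notion of distinct interaction properties per mode can be sketched as a simple lookup; every value below is invented purely for illustration:

```python
# Illustrative per-mode interaction properties; the specific numbers are
# assumptions for the example, not values taken from the application.
MODE_PROPERTIES = {
    "holding":  {"perceived_mass": 2.0, "time_scale": 1.0},
    "released": {"perceived_mass": 0.5, "time_scale": 0.25},
}

def properties_for(mode):
    """Return the interaction properties active in the given mode."""
    return MODE_PROPERTIES[mode]
```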
[0030] "Holding" mode. In the first mode, the user can generally be
allowed to hold the object, manipulate the object, and interact
with the object and, through the object, with the simulated
environment. After an object is touched or selected, a User can
determine that the object should be held. This can be accomplished
automatically (for example, an object can be automatically grabbed
when it is touched) or the object can be held or grabbed based on a
User input such as a button press, a switch press, a voice input, a
gesture that is recognized, or some other type of User input. When
an object is held, it can have dynamics properties such as weight,
inertia, momentum, or any other physical property. It can be
implemented as a weight, and a User can feel the weight and
reaction forces as it is moved. Objects that are held can have
interactions with the environment. For example, a heavy object
might need to be dragged across a virtual ground if it is too heavy
to be picked up by the Character. In this case, as in real life,
the User can feel the weight of the object, but that weight can be
less than if the object were not touching the ground. As it is
dragged across the ground, forces can be applied to the Cursor or
object representing the friction or resistance of the object as it
moves across the ground, bumps into things, or snags or gets caught
on other virtual objects.
[0031] Objects that are held can also have forces applied to them
(or directly to the Cursor) based on other virtual objects that
interact with the object that is held. For example, a User might
feed a virtual animal. As an apple is picked up the User might feel
the weight of the apple, then, when a virtual horse bites the
apple, the User can feel the apple being pulled and pushed, and the
weight can be adjusted to reflect the material removed by the bite.
Objects can be rotated while they are held. To rotate an object, a
User can rotate the handle of the Device, or can modify the
position of the Device so that the rotation of the object occurs
based on a change in position of the Cursor.
[0032] Objects that are held can have other haptic characteristics.
A User could hold an object that is spinning and feel the resulting
forces. For example, a virtual gyroscope could create directional
forces that the User would feel. A User can feel the acceleration
of an object being held. For example, if a User holds a virtual
firehose, the User might feel the force pushing back from the
direction that the firehose is spraying based on how much water is
coming out, how close another virtual object is that is being
sprayed, how tightly the hose is being held, how much pressure
there is in the spraying, or other aspects of the simulation of the
water being sprayed. If an object has its own forces, based on its
representation, the User could feel them. For example, if a User is
holding a popcorn popper, the User could feel the popcorn popping
within it. Each individual force of a popcorn popping could be
relayed to the User, or the forces could be represented through
some other representation (such as a random approximation of forces
that a popcorn popper would create). The forces that a User feels
can have any form or representation which can represent any action
or event. For example, a User might hold a virtual wand to cast
spells. The User can feel any type of force through the wand, which
can represent any type of spell.
[0033] The User can feel interactions of a virtual held object with
another object that hits the object. For example, a User might feel
the weight of a baseball bat, and a path constraint of how it can
move. Then when the bat hits a baseball, that feeling can be felt
by the User through a force applied to the bat or applied directly
to the Cursor. If a User is holding onto a matador cape, and a
virtual bull runs through the cape, the User would feel the pull of
the cape against the Cursor as the bull pulls on the cape. The User
might also feel the force adjusted if the cape were to rip, the
feeling of the cape being pulled out of the Character's hands, the
cape getting caught in the bull's horns and being pulled harder, or
any other interaction with the object being held. A User might hold
onto an object that is controlling the Character's movements. For
example, the Character might grab onto the bumper of a moving car.
Then, the User would feel the pull of the bumper relative to the
movement of the Character. The feeling of pull could also be
combined with other interactions such as maintaining balance. A
User can feel an object interact with another object that is caught
by the object. For example the User can hold a net and catch a
fish. The User can feel the forces of the fish flopping within the
net either directly through forces applied to the Cursor, or
through forces applied to the net through the fish's actions (and
therefore applied to the Cursor, through its interaction with the
net).
[0034] A User can control a held object to interact with an
environment. For example, a User could hold onto a fishing pole and
swing it forward to cast a line out into a lake. Then a User might
feel a tugging force representing a fish nibbling at the hook. The
User might pull on the fishing pole to hook the fish, and then feel
the weight of the fish along the line added to the weight of the
pole as the fish is reeled in. While an object is held, it can be
shaken. A User can feel the changing inertia of an object that is
moved back and forth at different rates. A virtual object might
have liquid inside it that can be felt by a User through a force
representation that varies based on how the liquid moves within the
object. A User might feel an object slipping in his hands. For
example, if a User is pulling on a virtual rope, then the User
might feel a force as the rope is pulled. Then the User could feel
a lesser force if the rope slips through his hands until the rope
is grabbed tighter, at which point the rope stops sliding and the
User feels a larger force again. The User might also feel force
variations representing the texture of an object that is sliding
through his hands. An object that is held can be swung as well. The
User can feel the weight of an object, and feel the force of the
object change based on the relative positions of the Cursor (to
which the object is attached) and the object.
[0035] "Released" mode. In the second mode, the User is no longer
directly controlling or manipulating the object. The object can
then interact with the remainder of the simulated environment as
any other simulated object. In many applications, forces and
motions that are most effective for efficient and intuitive User
control of an object are not the same as forces and motions that
are most suitable for the computer simulation. As an example, the
physical constraints of the range of motion of an input device can
be larger than the actual display, but smaller than the total
simulated space. Accordingly, motions of the User to control an
object that are directly mapped to motion in the simulated space
can be ill-suited for efficient User interaction. As another
example, forces applicable by the user and time over which such
forces are applied can be different in interacting with a haptic
input device than forces and times in the computer simulation. As a
specific example, a user of a haptic input device might have only a
few inches of motion to simulate shooting a basketball; while the
simulation might represent a more complete range of travel for the
shooting action. Also, the basketball experienced by the user might
be more effective if the mass represented by the force feedback to
the haptic device is larger than that in the simulation (moving a
larger basketball mass a shorter distance, in this simplified
example).
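The workspace mismatch described above is often handled by scaling device motion into simulation motion. The linear mapping below is a hypothetical sketch; practical mappings may be nonlinear or velocity-dependent:

```python
def map_device_to_sim(device_pos, device_travel, sim_travel):
    """Linearly scale a device displacement into a simulation displacement,
    so a few inches of device motion can span a full shooting action."""
    return device_pos / device_travel * sim_travel
```

For example, with 0.1 m of device travel mapped onto 2.0 m of simulated travel, a 0.05 m device motion covers half the simulated range.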
[0036] As another example, the time variable in a simulation can be
different when the user is holding an object than when the object
is released. Sophisticated user interaction experiences can be
implemented by changing such parameters (e.g., mass, time, scale).
The user can be provided with a sense of imparting energy to an
object by applying a relatively large force; changing the time
variable in the simulation when the user releases the object can
interface the user's control of the object with the simulated
environment in a manner that provides a unique user experience.
Even if the properties of the simulated object are unchanged, the
rate at which the object's position, velocity, or other
characteristics are determined can be different in the two modes.
For example, in
the holding mode, the object's position and force interactions can
be updated at a rate amenable to haptics interaction (e.g., 1000
Hz). In a released state, the object's position can be updated at a
rate amenable to the simulation or to the display characteristics
desired (e.g., 60 Hz).
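Choosing an integration time step per mode, using the 1000 Hz and 60 Hz rates from the example above, might look like the following sketch (the function name and mode strings are assumptions):

```python
HOLDING_HZ = 1000.0    # haptic-rate update while the object is held
RELEASED_HZ = 60.0     # display-rate update once the object is released

def time_step(mode):
    """Integration time step selected by interaction mode (illustrative)."""
    hz = HOLDING_HZ if mode == "holding" else RELEASED_HZ
    return 1.0 / hz
```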
[0037] In many computer simulations, the behavior of an object is
simulated by determining acceleration (a) by dividing the vector
sum of forces determined to be acting on the object by the mass
assigned to the object. The current velocity (V) of the object can
be determined by adding the previously determined velocity (Vo) to
the acceleration (a) times the time step (t). The position (P) of
the object can be determined from the previously determined
position (Po) by P=Po+Vo*t+1/2*a*t*t. In such a simulation,
changing the effective time (t) on transition between modes can
allow effective user interaction in some applications.
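The simulation step described in the preceding paragraph can be written directly from its equations; the 1-D form below is a minimal sketch (vector quantities work the same way, component by component):

```python
def step_object(p0, v0, forces, mass, t):
    """One simulation step per the text: a = sum(F)/m,
    V = Vo + a*t, P = Po + Vo*t + 1/2*a*t*t."""
    a = sum(forces) / mass              # acceleration from the net force
    p = p0 + v0 * t + 0.5 * a * t * t   # new position
    v = v0 + a * t                      # new velocity
    return p, v
```

Halving `t` on the transition between modes, as the paragraph suggests, changes how far the object advances per step without altering its assigned mass or the applied forces.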
[0038] The separation of the two modes can be especially beneficial
in a game or other simulation that relies on physics-based
simulations of objects within the game. The physics simulation,
sometimes accelerated with special purpose hardware, is often
designed for efficient implementation of the desired game
characteristics. These are often not directly translatable to
characteristics desired for efficient and intuitive haptic
interactions. By separation into two modes of interaction, the
human-computer interface can optimize the haptic interaction; e.g.,
by presenting to the user's touch an object with mass, inertia, or
other properties that give the user intuitive haptic control of the
object. These properties can be effectively decoupled from the
underlying physics simulation, so that the haptic interaction and
physics simulation can be separately optimized.
[0039] The present invention can be especially useful in games
where the user has experience or expectations in real world analogs
to the game activity. For example, many users have experience
throwing a football or shooting a basketball. The present invention
can allow the haptic control of the throwing or shooting motion to
be designed such that the user experiences forces and motions
building on the real world experience, while the underlying game
simulation can proceed at its slower iteration rate and with its
game-based physics properties. Another example is a baseball game,
where the speed and direction of motion of a haptic device, caused
by the user in opposition to resistive forces, can be designed to
provide a realistic throwing interaction. In all cases, the change
from holding to released mode can be an occasion for acceleration,
deceleration, error correction or amplification, or other changes
suitable to the change in modes. Computer simulations of darts,
bowling, and soccer throw-ins can also benefit from the present
invention. Games with momentary contact between the user
and an object can also benefit from the present invention; for
example in a tennis game the time when the ball is in contact with
the racket can be implemented as a holding mode; in a soccer game,
the foot of a character can be holding the ball for a brief time to
benefit from the present invention.
[0040] The particular sizes and equipment discussed above are cited
merely to illustrate particular embodiments of the invention. It is
contemplated that the use of the invention may involve components
having different sizes and characteristics. It is intended that the
scope of the invention be defined by the claims appended
hereto.
* * * * *