U.S. patent application number 13/147069 was published by the patent office on 2016-09-08 as publication number 20160259409 for a device for object manipulating with multi-input sources. This patent application is currently assigned to Samsung Electronics Co., Ltd. The applicants listed for this patent are Jeong Hwan Ahn, Wook Chang, Seung Ju Han, Hyun Jeong Lee, and Joon Ah Park. The invention is credited to Jeong Hwan Ahn, Wook Chang, Seung Ju Han, Hyun Jeong Lee, and Joon Ah Park.
Application Number | 13/147069
Publication Number | 20160259409
Document ID | /
Family ID | 42754397
Publication Date | 2016-09-08
United States Patent Application: 20160259409
Kind Code: A1
Lee; Hyun Jeong; et al.
September 8, 2016
DEVICE FOR OBJECT MANIPULATING WITH MULTI-INPUT SOURCES
Abstract
An object manipulation apparatus and method model an object for
manipulation of a virtual object, suggest an object operation schema,
define a physical and mental condition of an avatar, and set motion
data of the avatar.
Inventors: Lee; Hyun Jeong (Yongin-si, KR); Han; Seung Ju (Yongin-si, KR); Park; Joon Ah (Yongin-si, KR); Chang; Wook (Yongin-si, KR); Ahn; Jeong Hwan (Yongin-si, KR)

Applicant:
Name | City | State | Country | Type
Lee; Hyun Jeong | Yongin-si | | KR |
Han; Seung Ju | Yongin-si | | KR |
Park; Joon Ah | Yongin-si | | KR |
Chang; Wook | Yongin-si | | KR |
Ahn; Jeong Hwan | Yongin-si | | KR |

Assignee: Samsung Electronics Co., Ltd. (Suwon-si, KR)
Family ID: 42754397
Appl. No.: 13/147069
Filed: January 29, 2010
PCT Filed: January 29, 2010
PCT No.: PCT/KR2010/000571
371 Date: January 4, 2012
Current U.S. Class: 1/1
Current CPC Class: A63F 13/58 20140902; G06F 3/011 20130101; A63F 13/10 20130101; G06F 3/017 20130101; G06T 13/40 20130101; A63F 2300/65 20130101; A63F 2300/6607 20130101; A63F 2300/5553 20130101; G06T 17/10 20130101; A63F 2300/8082 20130101
International Class: G06G 7/48 20060101 G06G007/48; G06T 19/00 20060101 G06T019/00
Foreign Application Data

Date | Code | Application Number
Jan 29, 2009 | KR | 1020090007181
Jan 29, 2009 | KR | 1020090007182
Jan 28, 2010 | KR | 1020100008110
Claims
1. An object manipulation device comprising: an object modeling
unit configured by a processor to set a structure of a virtual
object; and an object operating unit configured by the processor to
select the virtual object and control an object operation of the
selected virtual object, wherein a model of the virtual object
comprises general information, an object identifier, and object
attributes, wherein the object identifier comprises an object
identification, an object state, and modifiable attributes, wherein
the object state comprises available status, selected status, and
unavailable status, wherein the modifiable attributes comprise
available status and unavailable status, and wherein the object
operating unit is responsive to one of the statuses of the object
state and one of the statuses of the modifiable attributes.
2. The object manipulation device of claim 1, wherein the virtual
object comprises at least one of general information on the virtual
object, an object identifier for identification of the virtual
object in a virtual world, and object attributes comprising at
least one attribute of the virtual object.
3. The object manipulation device of claim 2, wherein the object
identifier comprises at least one of an object ID allocated to the
virtual object, an object state for recognition of a state of the
virtual object, and modifiable attributes for determining
modifiability of attributes of the virtual object.
4. The object manipulation device of claim 2, wherein the object
attributes comprise at least one of spatial attributes, physical
attributes, temporal attributes, and combinational attributes.
5. The object manipulation device of claim 4, wherein the spatial
attributes comprise at least one of a shape, a location, and a size
of the virtual object, the physical attributes comprise at least
one of a tactile sensation, a pressure, a vibration, and a
temperature of the virtual object, and the temporal attributes
comprise at least one of a duration and a motion of the virtual
object.
6. The object manipulation device of claim 1, wherein the object
operating unit controls at least one performance of selection of
the virtual object, collection of object attributes of the virtual
object, modification of the object attributes of the virtual
object, and removal and storing of the object attributes of the
virtual object.
7-11. (canceled)
12. The object manipulation device of claim 1, further comprising:
a computer comprising the processor, wherein the object attributes
comprise spatial attributes, physical attributes, temporal
attributes, and combinations, wherein the spatial attributes
comprise shape, location, and size, wherein the physical attributes
comprise tactile, pressure or force, vibration, and temperature,
and wherein the temporal attributes comprise duration and
motion.
13. The object manipulation device of claim 12, wherein the virtual
object comprises at least one of general information on the virtual
object, an object identifier for identification of the virtual
object in a virtual world, and object attributes comprising at
least one attribute of the virtual object.
14. The object manipulation device of claim 13, wherein the object
identifier comprises at least one of an object ID allocated to the
virtual object, an object state for recognition of a state of the
virtual object, and modifiable attributes for determining
modifiability of attributes of the virtual object.
15. The object manipulation device of claim 13, wherein the object
attributes comprise at least one of spatial attributes, physical
attributes, temporal attributes, and combinational attributes.
16. An object manipulation method comprising: setting, by an object
modeling unit of a computer, a structure of a virtual object; and
selecting, by an object operating unit of the computer, the virtual
object and controlling an object operation of the selected virtual
object, wherein a model of the virtual object comprises general
information, an object identifier, and object attributes, wherein
the object identifier comprises an object identification, an object
state, and modifiable attributes, wherein the object state
comprises available status, selected status, and unavailable
status, wherein the modifiable attributes comprise available status
and unavailable status, and wherein the object operating unit is
responsive to one of the statuses of the object state and one of
the statuses of the modifiable attributes.
17. The object manipulation method of claim 16, wherein the virtual
object comprises at least one of general information on the virtual
object, an object identifier for identification of the virtual
object in a virtual world, and object attributes comprising at
least one attribute of the virtual object.
18. The object manipulation method of claim 17, wherein the object
identifier comprises at least one of an object ID allocated to the
virtual object, an object state for recognition of a state of the
virtual object, and modifiable attributes for determining
modifiability of attributes of the virtual object.
19. The object manipulation method of claim 17, wherein the object
attributes comprise at least one of spatial attributes, physical
attributes, temporal attributes, and combinational attributes.
20. A non-transitory computer-readable recording medium controlling
a computer to execute the method of claim 16.
21. An object manipulation device comprising: a processor
configured to import an object identifier from a virtual world
engine based on a predefined object model, wherein the predefined
object model defines that the object identifier comprises an object
state and modifiable attributes, wherein the object state
comprises available status, selected status, and unavailable
status, and wherein the modifiable attributes comprise available
status and unavailable status; import object information from
the virtual world engine based on the predefined object model,
wherein the predefined object model further defines that the object
information comprises spatial attributes, physical attributes, and
temporal attributes; manipulate the object information by
modifying, removing, restoring, or combining at least one of the
spatial attributes, the physical attributes, and the temporal
attributes; and export the manipulated object information to the
virtual world engine.
22. The object manipulation device of claim 21, wherein the
processor is further configured to: receive a first sensor input
command from a sensor, wherein the first sensor input command
corresponds to a selecting operation; import the object identifier,
based on the first sensor input command; check the object state
from the imported object identifier to determine whether the object
state has an available status, a selected status, or an unavailable
status; responsive to the object state having an available status,
set the object state to indicate a selected status; import the
object information; receive a second sensor input command from the
sensor, wherein the second sensor input command corresponds to a
manipulating operation; check the modifiable attributes from the
imported object identifier to determine whether the modifiable
attributes have an available status or an unavailable status; and
responsive to the modifiable attributes having an available status,
manipulate the object information based on the second sensor input
command, and export the manipulated object information.
23. The object manipulation device of claim 22, wherein: the
spatial attributes comprise shape, location, and size; the physical
attributes comprise tactile, pressure or force, vibration, and
temperature; and the temporal attributes comprise duration, and
motion.
24. The object manipulation device of claim 22, wherein, if the
object state indicates an unavailable status, the processor is
further configured not to perform the selecting operation; and if
the modifiable attributes indicate an unavailable status, the
processor is further configured not to perform the manipulating
operation.
Description
TECHNICAL FIELD
[0001] The present invention relates to a method for modeling a
structure of a virtual object and also modeling an avatar in a
virtual world.
BACKGROUND ART
[0002] Recent research has rapidly increased user interest in
interaction between humans and computers. Virtual reality (VR)
technology is being developed and applied in various fields,
particularly the entertainment field. The entertainment field is
commercialized, for example, in the form of 3-dimensional (3D)
virtual online communities such as Second Life and 3D game
stations. A 3D game station offers an innovative gaming experience
through a 3D input device. A sensor-based multi-modal interface may
be applied to a VR system to achieve control of a complicated 3D
virtual world. Here, a connection between the real world and the
virtual world may be achieved by a virtual to real-representation
of sensory effect (VR-RoSE) engine and a real to virtual-RoSE
(RV-RoSE) engine.
[0003] With the development of VR technology, there is a need for a
method that more effectively reflects a motion in the real world for
manipulation of an object of the virtual world and for navigating an
avatar in the virtual world.
DISCLOSURE OF INVENTION
[0004] According to an aspect of the present invention, there is
provided an object manipulation device including an object modeling
unit to set a structure of a virtual object, and an object
operating unit to select the virtual object and control an object
operation of the selected virtual object.
[0005] The virtual object may include at least one selected from
general information on the virtual object, an object identifier for
identification of the virtual object in a virtual world, and object
attributes including at least one attribute of the virtual
object.
[0006] The object identifier may include at least one selected from
an object ID allocated to the virtual object, an object state for
recognition of a state of the virtual object, and modifiable
attributes for determining modifiability of attributes of the
virtual object.
[0007] The object attributes may include at least one selected from
spatial attributes, physical attributes, temporal attributes, and
combinational attributes.
[0008] The spatial attributes may include at least one of a shape,
a location, and a size of the virtual object. The physical
attributes may include at least one of a tactile sensation, a
pressure, a vibration, and a temperature of the virtual object, and
the temporal attributes may include at least one of a duration and
a motion of the virtual object.
[0009] The object operating unit may control at least one
performance of selection of the virtual object, collection of
object attributes of the virtual object, modification of the object
attributes of the virtual object, and removal and storing of the
object attributes of the virtual object.
[0010] The object manipulation device may include an avatar
structure setting unit to set a structure of an avatar, and an
avatar navigation unit to control a motion of the avatar
corresponding to a motion of a user in a real world.
[0011] The avatar structure setting unit may include an avatar
identifying unit to set information for identifying the avatar, an
avatar condition managing unit to set a physical condition and a
mental condition of the avatar, and a motion managing unit to
manage the motion of the avatar.
[0012] The avatar navigation unit may include a general information
managing unit to manage general information of the avatar, and a
control data managing unit to control the motion of the avatar.
[0013] The control data managing unit may manage at least one of a
movement state, a movement direction, and a speed of the
avatar.
[0014] According to one embodiment of the present invention, there
is provided an object manipulation apparatus and method capable of
modeling an object for manipulation of a virtual object and
effectively reflecting a motion of the real world in manipulation of
an object of the virtual world.
[0015] According to one embodiment of the present invention, there
is provided an object manipulation apparatus and method capable of
effectively navigating an avatar in a virtual world by determining
a physical and mental condition of the avatar and setting motion
data of the avatar.
BRIEF DESCRIPTION OF DRAWINGS
[0016] FIG. 1 illustrates a block diagram of an object manipulation
apparatus according to an embodiment of the present invention;
[0017] FIG. 2 illustrates a diagram of a system connecting a
virtual world with a real world, according to an example
embodiment;
[0018] FIG. 3 illustrates a diagram describing an object modeling
operation according to an example embodiment;
[0019] FIG. 4 illustrates a diagram describing an object operation
model according to an example embodiment;
[0020] FIG. 5 illustrates a diagram describing an object operation
model according to another example embodiment;
[0021] FIG. 6 illustrates a diagram describing a process of
manipulating an object associated with a real to
virtual-representation of sensory effect (RV-RoSE) engine according
to an example embodiment;
[0022] FIG. 7 illustrates a block diagram describing an object
manipulation apparatus according to another example embodiment;
[0023] FIG. 8 illustrates a diagram showing a countenance and a
pose of an avatar, which are determined by an avatar condition
managing unit, according to an example embodiment;
[0024] FIG. 9 illustrates a diagram describing metadata control for
avatar navigation, according to an example embodiment; and
[0025] FIG. 10 illustrates a diagram describing an avatar
navigation process in association with an RV-RoSE engine, according
to an example embodiment.
BEST MODE FOR CARRYING OUT THE INVENTION
[0026] Reference will now be made in detail to embodiments of the
present invention, examples of which are illustrated in the
accompanying drawings, wherein like reference numerals refer to
like elements throughout. The embodiments are described below in
order to explain the present invention by referring to the
figures.
[0027] FIG. 1 illustrates a block diagram of an object manipulation
apparatus 100 according to an embodiment of the present
invention.
[0028] Referring to FIG. 1, the object manipulation apparatus 100
may include an object modeling unit 110 to set a structure of a
virtual object, and an object operating unit 120 to select the
virtual object and control an object operation of the selected
virtual object. Here, the object modeling refers to a process of
defining an object model that includes an identifier and attributes
for manipulation of the virtual object.
[0029] FIG. 2 illustrates a diagram of a system connecting a
virtual world with a real world, according to an example
embodiment. That is, FIG. 2 shows system architecture of sensor
input metadata and virtual element metadata. A connection between a
real world 220 and a virtual world 210 may be achieved via a
virtual to real-representation of sensory effect (VR-RoSE) engine
231 and a real to virtual-representation of sensory effect
(RV-RoSE) engine 232. Here, the virtual element metadata refers to
metadata related to structures of objects and avatars present in a
virtual space. The sensor input metadata refers to metadata for a
control function such as navigation and manipulation of the avatars
and the objects in a multimodal input device. The object modeling
and the object operation will be described in further detail. The
object modeling relates to the virtual element metadata while the
object operation relates to a sensor input command.
[0030] <Object Modeling (OM)>
[0031] The OM including the identifier and the attributes may be
defined for manipulation of the virtual object. All objects in the
virtual world 210 need to have a particular identifier for
controlling software capable of discriminating the objects. In
addition, all the objects may include spatial, physical, and
temporal attributes to provide reality. Hereinafter, an example of
the object modeling will be described with reference to FIG. 3.
[0032] FIG. 3 illustrates a diagram describing an object modeling
operation according to an example embodiment.
[0033] FIG. 3 shows an object 310 having a predetermined shape and
an object 320 having a predetermined tactile sensation. The object
modeling may define shape attributes and tactile attributes of the
objects.
[0034] The virtual world may provide a selection effect and a
manipulation effect. Variables related to the effects may include a
size, a shape, a tactile sensation, a density, a motion, and the
like.
[0035] A hierarchical diagram of the object modeling for the
selection and the manipulation is shown below.
[0036] The object may include general information, an object
identifier, and object attributes. The general information may
contain an overall description of the object.
[0037] The object identifier is used for discrimination of the
object in the virtual world. The object identifier may include an
object ID, an object state indicating a present state by selected,
selectable, and unselectable modes, and modifiable attributes
indicating modifiability of the attributes.
[0038] The object attributes may include spatial attributes such as
a shape, a location, and a size, physical attributes such as a
tactile sensation, a pressure or force, a vibration, and a
temperature, temporal attributes such as a duration and a motion,
and combinational attributes including a combination of the
aforementioned attributes.
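The object model hierarchy described above may be sketched as plain data structures. This is an illustrative sketch only: the class names, field names, and default values below are assumptions chosen for readability, not the normative schema elements.

```python
from dataclasses import dataclass, field
from enum import Enum

class ObjectState(Enum):
    AVAILABLE = "available"      # selectable
    SELECTED = "selected"        # currently selected
    UNAVAILABLE = "unavailable"  # unselectable

@dataclass
class ObjectIdentifier:
    object_id: str
    state: ObjectState = ObjectState.AVAILABLE
    modifiable: bool = True  # modifiable attributes: available/unavailable

@dataclass
class SpatialAttributes:
    shape: str = "box"
    location: tuple = (0.0, 0.0, 0.0)
    size: float = 1.0

@dataclass
class PhysicalAttributes:
    tactile: str = "smooth"
    pressure: float = 0.0
    vibration: float = 0.0
    temperature: float = 20.0

@dataclass
class TemporalAttributes:
    duration: float = 0.0
    motion: str = "static"

@dataclass
class ObjectModel:
    general_info: str  # overall description of the object
    identifier: ObjectIdentifier
    spatial: SpatialAttributes = field(default_factory=SpatialAttributes)
    physical: PhysicalAttributes = field(default_factory=PhysicalAttributes)
    temporal: TemporalAttributes = field(default_factory=TemporalAttributes)

car = ObjectModel("a virtual car", ObjectIdentifier("obj-001"))
print(car.identifier.state.value)  # available
```

Combinational attributes are omitted here; in this sketch they would simply combine the three attribute groups above.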
[0039] <Object Operation (OO)>
[0040] The object operation may include collection of information
through an interface, modification of the object attributes, and
removal and restoration of the object. Hereinafter, an example
object operation will be described with reference to FIG. 4.
[0041] FIG. 4 illustrates a diagram describing an OO model
according to an example embodiment.
[0042] FIG. 4 illustrates a process of generating a virtual car.
The virtual car may be generated by using initial models and
revising sizes, locations, and shapes of the initial models. That
is, the virtual car may be generated as desired by revising the
sizes, locations, and shapes of the initial models through
sequential operations 410, 420, and 430.
[0043] Reality may be provided to the virtual object according to a
weight, a roughness, and the like of the virtual object.
[0044] For example, FIG. 5 shows various states of a hand grasping
boxes of various weights. That is, with respect to objects having
the same shape, various motions may be expressed according to
weights, masses, and the like. FIG. 5 also shows various deformed
states of a rubber ball being grasped by a hand. That is, the
object may be deformed according to forces, pressures, and the like
applied to the object.
[0045] A hierarchical diagram of the OO will be described in
further detail below.
[0046] The OO may include object selection to select an object
desired to be deformed by a user, and object manipulation to modify
the attributes of the selected object. The object manipulation may
perform at least one of collection of object attributes of the
virtual object, modification of the object attributes of the
virtual object, and removal and storing of the object attributes of the
virtual object. Accordingly, the object manipulation may include
ObtainTargetInfo to obtain an ID of the selected object and
existing attributes, ModifyAttributes to modify the object
attributes, and Remove/RestoreObject to remove or restore the
object attributes.
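The three manipulation operations named above (ObtainTargetInfo, ModifyAttributes, and Remove/RestoreObject) can be sketched against a minimal object store. The dictionary-based store and function signatures are illustrative assumptions, not the document's normative interface.

```python
# Minimal sketch of the three object operations, using a plain dictionary
# as a stand-in object store (names and structure are illustrative).
object_store = {
    "obj-001": {"shape": "box", "size": 1.0, "state": "available"},
}
removed = {}  # holds removed objects so they can later be restored

def obtain_target_info(object_id):
    """ObtainTargetInfo: return the ID and existing attributes of an object."""
    return object_id, dict(object_store[object_id])

def modify_attributes(object_id, **changes):
    """ModifyAttributes: update attributes of the selected object."""
    object_store[object_id].update(changes)

def remove_object(object_id):
    """RemoveObject: take the object out of the store, keeping a copy."""
    removed[object_id] = object_store.pop(object_id)

def restore_object(object_id):
    """RestoreObject: put a previously removed object back."""
    object_store[object_id] = removed.pop(object_id)

oid, attrs = obtain_target_info("obj-001")
modify_attributes(oid, size=2.0)
remove_object(oid)
restore_object(oid)
print(object_store[oid]["size"])  # 2.0
```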
[0047] Hereinafter, the system architecture for the object
manipulation will be described.
[0048] The object manipulation may include operations of selecting
a target object according to a user preference, extracting the
object attributes of the selected object, modifying the extracted
object attributes, storing the modified object attributes, and
releasing the object.
[0049] FIG. 6 illustrates a diagram describing object manipulation
in association with an RV-RoSE engine according to an example
embodiment.
[0050] Referring to FIG. 6, the whole system includes a virtual
world engine 610, a real world interface 630, a sensor adaptation
preference 620, and an RV-RoSE engine 640.
[0051] The virtual world engine 610 is a system for connecting with
a virtual world such as Second Life. The real world interface 630
refers to a terminal enabling a user to control the virtual world.
For example, the real world interface 630 includes a 2D/3D mouse, a
keyboard, a joystick, a motion sensor, a heat sensor, a camera, a
haptic glove, and the like.
[0052] The sensor adaptation preference 620 refers to a component
that adds the user's intention, for example, by adjusting the range
of data values.
[0053] When the user selects the virtual object through various
versions of the real world interface 630, ID information of the
selected virtual object may be input to an importer of the RV-RoSE
engine 640. Additionally, spatial, physical, and temporal
information are input to a sub object variable through an object
manipulation controller. When the user modifies the object
attributes through various versions of the real world interface
630, the object manipulation controller of the RV-RoSE engine 640
adjusts and stores values of corresponding variables. Next, the
modified object attributes may be transmitted to the virtual world
engine 610 through an object information exporter.
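The importer, manipulation controller, and exporter flow just described can be sketched as follows. The engine here is a plain dictionary standing in for a real virtual world engine API; all names and the command format are hypothetical.

```python
# Hedged sketch of the manipulation flow: import the selected object's
# spatial/physical/temporal variables, adjust them in the controller based
# on a sensor input command, then export them back to the engine.
virtual_world_engine = {
    "obj-001": {"spatial": {"size": 1.0},
                "physical": {"temperature": 20.0},
                "temporal": {"motion": "static"}},
}

def import_object(object_id):
    """Importer: pull the object's sub-object variables from the engine."""
    return {group: dict(values)
            for group, values in virtual_world_engine[object_id].items()}

def manipulation_controller(variables, sensor_command):
    """Adjust and store values of corresponding variables per user input."""
    for group, changes in sensor_command.items():
        variables[group].update(changes)
    return variables

def export_object(object_id, variables):
    """Exporter: transmit the modified attributes back to the engine."""
    virtual_world_engine[object_id] = variables

variables = import_object("obj-001")
variables = manipulation_controller(variables, {"spatial": {"size": 3.0}})
export_object("obj-001", variables)
print(virtual_world_engine["obj-001"]["spatial"]["size"])  # 3.0
```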
[0054] <Metadata Schema>
[0055] Hereinafter, metadata schema, syntax, and semantics related
to the object modeling and the object operation will be
described.
[0056] <ObjectModel (OM) Schema>
[0057] 1. OM
[0058] The OM is a basic element of the virtual element
metadata.
[0059] Syntax
[0060] 2. ObjectIdentifier
[0061] Syntax
[0062] Semantic
[0063] 3. ObjectAttributes
[0064] Syntax
[0065] Semantic
[0066] 4. SpatialAttributes
[0067] Syntax
[0068] Semantic
[0069] 6. TemporalAttributes
[0070] Syntax
[0071] Semantic
[0072] <ObjectOperations (OO) Schema>
[0073] 1. OO
[0074] Syntax
[0075] Semantic
[0076] 2. ObjectManipulation
[0077] Syntax
[0078] Semantic
[0079] FIG. 7 illustrates a block diagram describing an object
manipulation apparatus 700 according to another example
embodiment.
[0080] Referring to FIG. 7, the object manipulation apparatus 700
includes an avatar structure setting unit 710 to set a structure of
an avatar, and an avatar navigation unit 720 to control a motion of
the avatar corresponding to a motion of the user of the real world.
Here, the avatar structure setting may be related to the virtual
element metadata whereas avatar motion control, that is, navigation
control may be related to a sensor input metadata.
[0081] <Avatar>
[0082] Virtual elements may include avatars, objects, geometries,
cameras, light conditions, and the like. The present embodiment
will define the structure of the avatar.
[0083] An avatar represents another identity of the user. In Second
Life or a 3D game, the avatar needs to have attributes including a
physical condition and a mental condition since the avatar behaves
in different manners according to the physical condition and the
mental condition of a user. Also, motion patterns of the avatar may
be varied by combining the physical condition and the mental
condition. For combination of information on the physical condition
and the mental condition, the avatar may include parameters related
to the physical condition and the mental condition.
[0084] For example, first, AvatarCondition may be defined as a main
element for the physical condition and the mental condition of the
avatar. The AvatarCondition may include PhysicalCondition and
MentalCondition as sub-parameters for the physical condition and
the mental condition, respectively.
[0085] A countenance and a pose of the avatar may be determined by
values of the AvatarCondition, which will be described in detail
with reference to FIG. 8.
[0086] FIG. 8 illustrates a diagram showing the countenance and the
pose of the avatar, which are determined by an avatar condition
managing unit, according to an example embodiment.
[0087] Referring to FIG. 8, various countenances and poses, such as
an expressionless face 810, a happy face 820, and a sitting pose
830, may be determined according to the values of the
AvatarCondition.
[0088] To generate various behavior patterns, the avatar metadata
may also include AvatarMotionData. The AvatarMotionData may
indicate a current motion state such as an on and off state when
motion data is allocated, and a degree of reaction of the avatar
with respect to the motion, such as a reaction range.
[0089] Accordingly, a hierarchical diagram of avatar information
may be expressed as follows.
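The avatar elements introduced above (AvatarCondition with its PhysicalCondition and MentalCondition sub-parameters, and AvatarMotionData with its on/off state and reaction range) may be sketched as follows. The concrete value types and the countenance mapping are assumptions for illustration; the hierarchical diagram itself is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class AvatarCondition:
    physical_condition: int = 100      # e.g. stamina level (assumed 0-100 scale)
    mental_condition: str = "neutral"  # e.g. "happy", "sad" (assumed labels)

@dataclass
class AvatarMotionData:
    active: bool = False         # on/off state when motion data is allocated
    reaction_range: float = 1.0  # degree of reaction of the avatar to the motion

def countenance(condition):
    """Pick a countenance from the avatar condition (assumed mapping)."""
    if condition.mental_condition == "happy":
        return "happy face"
    return "expressionless face"

avatar = AvatarCondition(physical_condition=80, mental_condition="happy")
motion = AvatarMotionData(active=True, reaction_range=0.5)
print(countenance(avatar))  # happy face
```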
[0090] <Navigation Control>
[0091] Avatar navigation is a basic operation among control
operations for a 3D virtual world. A multi-modal interface is
capable of recognizing context information related to a user or
user environments and also recognizing information necessary for
the navigation. When sensor input of the multi-modal interface is
systemized, the avatar navigation may be expressed in various
manners.
[0092] FIG. 9 illustrates a diagram describing metadata control for
the avatar navigation, according to an example embodiment.
[0093] Referring to FIG. 9, the avatar may use MoveState to check a
motion such as walking, running, and the like. Here, walking and
running may be discriminated by speed. RefMotionID provides
information on which motion is simultaneously performed with the
avatar navigation. In addition, various situations may be applied
to be navigable using context information by the multi-modal
interface.
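The navigation control described above can be sketched as a small state function: MoveState distinguishes walking from running by speed, and RefMotionID names a motion performed alongside navigation. The speed threshold and record layout are illustrative assumptions.

```python
# Sketch of MoveState classification: walking and running are
# discriminated by speed. The threshold value is an assumption.
WALK_RUN_THRESHOLD = 2.0  # meters per second; illustrative value

def move_state(speed):
    """Classify the avatar's motion state from its speed."""
    if speed <= 0.0:
        return "idle"
    return "running" if speed > WALK_RUN_THRESHOLD else "walking"

# A hypothetical navigation control record combining the elements above.
navigation = {
    "move_state": move_state(1.2),
    "direction": (0.0, 0.0, 1.0),
    "speed": 1.2,
    "ref_motion_id": "wave",  # motion performed simultaneously with navigation
}
print(navigation["move_state"])  # walking
```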
[0094] A hierarchical diagram of navigation control information
with respect to the sensor input may be expressed as follows.
[0095] FIG. 10 illustrates a diagram describing an avatar
navigation process in association with an RV-RoSE engine 1040,
according to an example embodiment.
[0096] Referring to FIG. 10, the whole system may include a virtual
world engine 1010, a real world interface 1030, a sensor adaptation
preference 1020, and the RV-RoSE engine 1040.
[0097] The virtual world engine 1010 is a system for connection
with the virtual world such as Second Life. The real world
interface 1030 refers to a terminal enabling a user to control the
virtual world. The sensor adaptation preference 1020 may add the
user's intention, for example, by adjusting the range of data
values.
[0098] When the user selects an avatar through various versions of
the real world interface 1030, ID information of the selected
avatar may be input to an importer of the RV-RoSE engine 1040.
Additionally, navigation information is input to a sub navigation
variable through an avatar navigation controller. When the user
modifies a navigation value through various types of the real world
interface 1030, the avatar navigation controller of the RV-RoSE
engine 1040 adjusts and stores a value of a corresponding variable.
Next, the modified navigation value of the avatar may be
transmitted to the virtual world engine 1010 through an avatar
information exporter.
[0099] <Virtual Element Schema>
[0100] 1. VE
[0101] VE is a basic element of virtual elements.
[0102] syntax
[0103] semantics
[0104] 2. Avatar
[0105] Avatar contains information on all parameters applicable to
characteristics of the avatar.
[0106] syntax
[0107] semantics
[0108] 3. AvatarIdentifier
[0109] AvatarIdentifier contains a specific type of information on
avatar identification.
[0110] syntax
[0111] semantics
[0112] 4. AvatarMotionData
[0113] AvatarMotionData contains a specific type of information on
an avatar motion.
[0114] syntax
[0115] semantics
[0116] 5. AvatarCondition
[0117] AvatarCondition contains a specific type of condition
information of the avatar. AvatarCondition includes
PhysicalCondition and MentalCondition.
[0118] syntax
[0119] semantics
[0120] <Navigation Control Schema>
[0121] 1. Navigation
[0122] Navigation contains information on all control parameters
and contextual states of the control parameters.
[0123] syntax
[0124] semantics
[0125] 2. NavigationDescription
[0126] NavigationDescription contains information for an initial
navigation state.
[0127] syntax
[0128] semantics
[0129] As described above, a motion in the real world may be
effectively reflected in manipulation of a virtual object of the
virtual world by modeling an object for manipulation of the virtual
object and suggesting an object operation schema.
[0130] In addition, an avatar in the virtual world may be
effectively navigated by defining a physical condition and a mental
condition of the avatar and setting motion data of the avatar.
[0131] The methods according to the above-described example
embodiments may be recorded in non-transitory computer-readable
media including program instructions to implement various
operations embodied by a computer. The media may also include,
alone or in combination with the program instructions, data files,
data structures, and the like. The program instructions recorded on
the media may be those specially designed and constructed for the
purposes of the example embodiments, or they may be of the kind
well-known and available to those having skill in the computer
software arts. Examples of non-transitory computer-readable media
include magnetic media such as hard discs, floppy disks, and
magnetic tape; optical media such as CD ROM disks and DVDs;
magneto-optical media such as optical disks; and hardware devices
that are specially configured to store and perform program
instructions, such as read-only memory (ROM), random access memory
(RAM), flash memory, and the like. The media may be transfer media
such as optical lines, metal lines, or waveguides including a
carrier wave for transmitting a signal designating the program
command and the data construction. Examples of program instructions
include both machine code, such as produced by a compiler, and
files containing higher level code that may be executed by the
computer using an interpreter. The described hardware devices may
be configured to act as one or more software modules in order to
perform the operations of the above-described example embodiments,
or vice versa.
[0132] Although a few embodiments of the present invention have
been shown and described, the present invention is not limited to
the described embodiments. Instead, it would be appreciated by
those skilled in the art that changes may be made to these
embodiments without departing from the principles and spirit of the
invention, the scope of which is defined by the claims and their
equivalents.
* * * * *