U.S. patent application number 12/667775 was filed with the patent office on 2010-08-12 for apparatus and method of avatar customisation.
This patent application is currently assigned to SONY COMPUTER ENTERTAINMENT EUROPE LIMITED. Invention is credited to John Foster, Andrew George Gill, Mark Horneff, Keith Thomas Ribbons, Nick Ryan.
Application Number: 12/667775
Publication Number: 20100203968
Kind Code: A1
Document ID: /
Family ID: 38440552
Publication Date: August 12, 2010
First Named Inventor: Gill; Andrew George; et al.
Apparatus And Method Of Avatar Customisation
Abstract
An entertainment device comprises skeletal modelling means to
control placement of a three dimensional mesh representing some or
all of a user avatar in response to the position of one or more
skeletal components of the user avatar, skeleton modification means
to modify one or more physical properties of one or more skeletal
components of the user avatar via a user interface, and rendering
means to render the user avatar responsive to the modified user
avatar skeleton.
Inventors: Gill; Andrew George; (London, GB); Ribbons; Keith
Thomas; (London, GB); Foster; John; (London, GB); Horneff; Mark;
(London, GB); Ryan; Nick; (London, GB)
Correspondence Address: LERNER, DAVID, LITTENBERG, KRUMHOLZ &
MENTLIK, 600 SOUTH AVENUE WEST, WESTFIELD, NJ 07090, US
Assignee: SONY COMPUTER ENTERTAINMENT EUROPE LIMITED (London, GB)
Family ID: 38440552
Appl. No.: 12/667775
Filed: July 4, 2008
PCT Filed: July 4, 2008
PCT No.: PCT/GB08/02321
371 Date: April 22, 2010
Current U.S. Class: 463/32
Current CPC Class: H04L 67/38 20130101; G06T 19/20 20130101; A63F
2300/66 20130101; A63F 2300/5553 20130101; G06T 13/40 20130101;
A63F 2300/6018 20130101; G06T 2219/2021 20130101
Class at Publication: 463/32
International Class: A63F 9/24 20060101 A63F009/24

Foreign Application Data
Date | Code | Application Number
Jul 6, 2007 | GB | 0713186.5
Claims
1. An entertainment device, comprising: skeletal modelling means
operable to configure a three dimensional mesh representing some or
all of a user avatar based upon at least a first property of one
or more skeletal components of the user avatar; skeleton
modification means operable to modify one or more properties of the
one or more skeletal components of the user avatar via a user
interface that enables asymmetric modifications of the one or more
skeletal components of the user avatar; and rendering means
operable to render the user avatar in accordance with the three
dimensional mesh as configured in response to the modified user
avatar skeleton; and wherein an asymmetric modification of the one
or more skeletal components of the user avatar asymmetrically
alters the rendered appearance of the user avatar.
2. An entertainment device according to claim 1, comprising:
transmission means to transmit data descriptive of the modified
user avatar skeleton to one or more remote entertainment devices;
reception means to receive data descriptive of respective modified
avatar skeletons corresponding to respective ones of the one or
more remote entertainment devices; and in which: the rendering
means is operable to render a plurality of respective avatars
corresponding to respective ones of the one or more remote
entertainment devices, the rendering of each avatar being
responsive to its respective modified avatar skeleton.
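Claim 2 recites transmitting "data descriptive of the modified user avatar skeleton" rather than the mesh itself, which implies a compact serialisation of bone parameters. A minimal sketch of such a wire format, assuming a simple JSON encoding (the field names and helper functions are illustrative, not taken from the application):

```python
import json

def encode_skeleton(bone_params):
    """Serialise only the modified bone parameters for transmission.

    bone_params maps a bone name to its adjusted parameters, e.g.
    independent left/right settings produced via the customisation UI.
    Sending parameters instead of geometry keeps the payload small.
    """
    return json.dumps(bone_params, sort_keys=True)

def decode_skeleton(payload):
    """Recover the bone parameters on a remote entertainment device."""
    return json.loads(payload)

# Hypothetical asymmetric modification: each eye adjusted separately.
mods = {"jaw": {"width": 1.2}, "eye_l": {"y": 0.05}, "eye_r": {"y": -0.02}}
wire = encode_skeleton(mods)
assert decode_skeleton(wire) == mods  # round-trips losslessly
```

A remote device receiving such a payload would apply the decoded parameters to its local copy of the skeleton before rendering.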
3. An entertainment device according to claim 2, in which
transmission to the one or more remote entertainment devices and
reception from the one or more remote entertainment devices is via
an on-line server.
4. (canceled)
5. An entertainment device according to claim 1, in which the
skeletal components that may be asymmetrically modified relate to
facial features comprising one or more of the following: i. lateral
eye position; ii. vertical eye position; iii. vertical ear
position; iv. nose position; v. nose profile; vi. upper cranial
shape; vii. upper face shape; viii. lower face shape; and ix. jaw
line.
6. An entertainment device according to claim 1, in which the user
interface of the skeletal modification means enables selection of
additional skeletal components for incorporation into the user
avatar.
7. An entertainment device according to claim 6, in which the
additional skeletal components comprise one or more of the
following: i. glasses; ii. hair; iii. hats; iv. headphones; v.
horns; vi. crests; and vii. trunks.
8. An entertainment device according to claim 1, further
comprising: texture component adjustment means to adjust one or
more parameters of one or more texture layers applied to the three
dimensional mesh representing some or all of a user avatar.
9. An entertainment device according to claim 8, in which one of
the one or more adjustable parameters is texture transparency.
10. An entertainment device according to claim 8, in which one of
the one or more adjustable parameters is a degree of bump-mapping
applied to a texture.
11. An entertainment device according to claim 1, further
comprising: mesh deformation means operable to alter the positions
of vertices of the three dimensional mesh via the user interface,
in which: the mesh deformation means is operable to alter the
positions of the vertices of the three dimensional mesh in
dependence upon the respective vertex positions of at least one of
a plurality of predetermined three dimensional meshes each defined
with respect to the three dimensional mesh representing some or all
of the user avatar; the extent to which the three dimensional mesh
is altered by a predetermined three dimensional mesh is dependent
upon a blend weight associated with the predetermined three
dimensional mesh and adjustable via the user interface; and the
rendering means is operable to render the user avatar in accordance
with the modified three dimensional mesh as modified by the mesh
deformation means.
12. An entertainment device according to claim 11, in which: each
vertex of each of the plurality of predetermined three dimensional
meshes is defined as a positional offset with respect to the
position of the corresponding vertex of an un-deformed three
dimensional mesh; and the mesh deformation means is operable to
modify the positions of the vertices of the three dimensional mesh
in dependence upon a sum of the positional offsets for each vertex
of the plurality of predetermined three dimensional meshes
multiplied by their respective blend weights plus the vertex
positions of the undeformed three dimensional mesh.
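The vertex computation recited in claim 12 is the familiar blend-shape (morph-target) sum: each deformed vertex is the un-deformed vertex plus the weighted sum of per-target offsets. A minimal sketch in Python, using plain tuples (the function and variable names are illustrative assumptions, not from the application):

```python
def blend_vertices(base, offsets, weights):
    """Claim 12 sketch: deformed vertex = base vertex + sum over the
    predetermined meshes of (blend weight x per-vertex offset).

    base    -- list of (x, y, z) vertex positions of the un-deformed mesh
    offsets -- one offset list per predetermined mesh, each entry being
               that mesh's positional offset from the matching base vertex
    weights -- one blend weight per predetermined mesh, set via the UI
    """
    deformed = []
    for i, (x, y, z) in enumerate(base):
        dx = dy = dz = 0.0
        for off, w in zip(offsets, weights):
            ox, oy, oz = off[i]
            dx += w * ox
            dy += w * oy
            dz += w * oz
        deformed.append((x + dx, y + dy, z + dz))
    return deformed

# One-vertex "mesh" deformed by two morph targets.
base = [(0.0, 0.0, 0.0)]
targets = [[(1.0, 0.0, 0.0)], [(0.0, 2.0, 0.0)]]
print(blend_vertices(base, targets, [0.5, 0.25]))  # [(0.5, 0.5, 0.0)]
```

Because each predetermined mesh is stored as offsets from the shared base mesh, a zero blend weight leaves the base mesh untouched, matching the adjustable-weight behaviour of the claim.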
13. An entertainment device according to claim 11, in which each of
the plurality of predetermined three dimensional meshes is
associated with an ethnic type.
14. A server operable to administer a multi-player online virtual
environment, the server comprising: reception means to receive data
descriptive of respective modified avatar skeletons from a
plurality of remote entertainment devices; and transmission means
to transmit data descriptive of the respective modified avatar
skeletons to a plurality of remote entertainment devices, wherein
at least a first one of the modified avatar skeletons comprises an
asymmetric modification of one or more skeletal components.
15. A server according to claim 14, the server being operable to
maintain a plurality of substantially similar on-line virtual
environments each comprising a distinct plurality of avatars,
wherein the data descriptive of each avatar's skeleton is
distributed between members of its respective on-line virtual
environment.
16. An on-line system comprising: first and second entertainment
devices; an on-line server operable to administer a multi-player
online virtual environment; the first entertainment device
comprising: skeletal modelling means operable to configure a three
dimensional mesh representing some or all of a user avatar based
upon at least a first property of one or more skeletal components
of the user avatar; skeleton modification means to modify one or
more properties of the one or more skeletal components of the user
avatar via a user interface that enables asymmetric modifications
of the one or more skeletal components of the user avatar;
rendering means operable to render the user avatar in accordance
with the three dimensional mesh as configured in response to the
modified user avatar skeleton, wherein an asymmetric modification
of the one or more skeletal components of the user avatar
asymmetrically alters the rendered appearance of the user avatar;
and transmission means to transmit data descriptive of the modified
user avatar skeleton to the second entertainment device, and the
second entertainment device comprising: reception means to receive
data descriptive of a modified user avatar skeleton from the first
entertainment device; and rendering means operable to render the
modified avatar of the first entertainment device in accordance
with the three dimensional mesh as configured based upon the
modified avatar skeleton, wherein an asymmetric modification of the
one or more skeletal components of the user avatar asymmetrically
alters the rendered appearance of the user avatar, the on-line
server comprising: reception means to receive data descriptive of
the modified user avatar skeleton from the first entertainment
device; and transmission means to transmit data descriptive of the
modified user avatar skeleton to the second entertainment
device.
17. A method of avatar customisation for an on-line virtual
environment comprising the steps of: selecting a user avatar for
use in the on-line virtual environment; modifying one or more
properties of one or more skeletal components of the user avatar
via a user interface that enables asymmetric modifications of the
one or more skeletal components of the user avatar; configuring a
three dimensional mesh representing some or all of the user avatar
based upon at least a first property of the one or more skeletal
components of the user avatar; and rendering the user avatar in
accordance with the three dimensional mesh as configured in
response to the modified user avatar skeleton, wherein an
asymmetric modification of the one or more skeletal components of
the user avatar asymmetrically alters the rendered appearance of
the user avatar.
18. A method of avatar customisation according to claim 17,
comprising the steps of: transmitting data descriptive of the
modified user avatar skeleton to one or more remote entertainment
devices; receiving data descriptive of respective modified avatar
skeletons corresponding to one or more respective ones of the
remote entertainment devices; and rendering a plurality of
respective avatars corresponding to the respective ones of the one
or more remote entertainment devices, the rendering of each avatar
being responsive to its respective modified avatar skeleton.
19. (canceled)
20. A method of avatar customisation according to claim 17, in
which the step of modifying one or more properties of the
one or more skeletal components of the user avatar enables
selection of additional skeletal components for incorporation into
the user avatar.
21. A method of avatar customisation according to claim 17,
further comprising the step of adjusting one or more parameters of
one or more texture layers applied to a three dimensional mesh
representing some or all of the user avatar.
22. A method of avatar customisation according to claim 21, in
which one of the adjustable parameters is texture transparency.
23. A method of avatar customisation according to claim 21, in
which one of the adjustable parameters is a degree of
bump-mapping applied to a texture.
24. A method of avatar customisation according to claim 17, further
comprising the steps of: deforming the positions of vertices of the
three dimensional mesh via the user interface in dependence upon
the respective vertex positions of at least one of a plurality of
predetermined three dimensional meshes each defined with respect to
the three dimensional mesh representing some or all of the user
avatar, wherein the extent to which the three dimensional mesh is
altered by a predetermined three dimensional mesh is dependent
upon a blend weight associated with the predetermined three
dimensional mesh and adjustable via the user interface; and
rendering the user avatar in accordance with the deformed three
dimensional mesh as deformed by the step of deforming the positions
of the three dimensional mesh.
25. A method of avatar customisation according to claim 24, in
which: each vertex of each of the plurality of predetermined three
dimensional meshes is defined as a positional offset with respect
to the position of the corresponding vertex of the un-deformed
three dimensional mesh; and the step of deforming the positions of
the vertices of the three dimensional mesh deforms the positions of
the vertices of the three dimensional mesh in dependence upon a sum
of the positional offsets for each vertex of the plurality of
predetermined three dimensional meshes multiplied by their
respective blend weights plus the vertex positions of the
un-deformed three dimensional mesh.
26. A method of administering a multi-player online virtual
environment, comprising the steps of: receiving data descriptive of
respective modified avatar skeletons from a plurality of remote
entertainment devices; and transmitting data descriptive of the
respective modified avatar skeletons to the plurality of remote
entertainment devices, wherein at least a first one of the
respective modified avatar skeletons comprises an asymmetric
modification of one or more skeletal components.
27. A method of administering a multi-player online virtual
environment according to claim 26, further comprising the step of
maintaining a plurality of substantially similar on-line virtual
environments each comprising a distinct plurality of avatars,
wherein the data descriptive of each avatar's skeleton is
distributed between members of its respective on-line virtual
environment.
28. A computer-readable medium having instructions stored thereon
which, when executed by a processor, cause the processor
to perform a method of avatar customisation for an on-line virtual
environment, the method comprising the steps of: selecting a user
avatar for use in the on-line virtual environment; modifying one or
more properties of one or more skeletal components of the user
avatar via a user interface that enables asymmetric modifications
of the one or more skeletal components of the user avatar;
configuring a three dimensional mesh representing some or all of
the user avatar based upon at least a first property of the one or
more skeletal components of the user avatar; and rendering the user
avatar in accordance with the three dimensional mesh as configured
in response to the modified user avatar skeleton, wherein an
asymmetric modification of the one or more skeletal components of
the user avatar asymmetrically alters the rendered appearance of
the user avatar.
29. A computer-readable medium having instructions stored thereon
which, when executed by a processor, cause the processor
to perform a method of administering a multi-player online virtual
environment, the method comprising the steps of: receiving data
descriptive of respective modified avatar skeletons from a
plurality of remote entertainment devices; and transmitting data
descriptive of the respective modified avatar skeletons to the
plurality of remote entertainment devices, wherein at least a first
one of the respective modified avatar skeletons comprises an
asymmetric modification of one or more skeletal components.
Description
[0001] This invention relates to an apparatus and method of avatar
customisation. In online-gaming, it is conventional for players to
adopt distinctive names for their in-game characters (generally
termed `avatars`). In addition, these avatars may also be
customised, for example according to race (real or fictional) or
gender, and a range of different heads and bodies are often
provided. In addition, features such as hair styles, skin tone and
age may be customised. The purpose of such naming and customisation
is typically to project the user's personality within the game,
and/or to be as distinctive as possible. For example, see
http://starwarsgalaxies.station.sony.com/players/guides.vm?id=70000.
[0002] As on-line gaming continues to grow there is an increasing
move to explore the social aspect of these virtual environments,
and consequently a greater need for the user's avatar within such
an environment to be distinctive enough to fulfil the requirements
of social interaction between many individuals (e.g. see
www.selectparks.net/blundell_charcust.pdf).
[0003] Conventional means of further customising a user's avatar
for such a purpose may include uploading the user's own face as a
texture to use on the avatar (for example, see
http://research.microsoft.com/~zhang/Face/redherringReport.htm),
or modifying the gestures and expressions of the avatar to reflect
a particular mood that the user wishes to express. However,
gestures and expressions are not instantly recognisable as they
first need to be carried out. Meanwhile, uploading images of users'
faces is potentially intrusive, and rendering the images in a
consistent manner when each face may be captured under different
lighting conditions and at different effective resolutions is
difficult. Moreover, the user may be dissatisfied with the result
if it is a poor approximation. Finally, many people online wish to
present a fictional appearance whilst remaining true to their
personality, or wish to appear appropriately `in character` within
the game environment; in this case a captured image is not a
desirable solution.
[0004] It has therefore been suggested that it would be desirable
if the end-user could have access to customisation options `down to
the level of the shape of a nostril` (see the introduction to
www.selectparks.net/blundell_charcust.pdf). However, this would
result in a bewildering array of options for the user to work
through, and significantly would also result in considerable work
in providing the different customisation options during initial
game development. In addition, a significant data overhead in terms
of transmission of configuration data in a massively multiplayer
game is also likely. Consequently such systems do not appear to
have been realised in-game (see again
http://starwarsgalaxies.station.sony.com/players/guides.vm?id=70000).
[0005] The present invention seeks to address the above
concerns.
[0006] In a first aspect of the present invention, an entertainment
device comprises skeletal modelling means to configure a three
dimensional mesh representing some or all of a user avatar in
response to at least a first property of one or more skeletal
components of the user avatar, skeleton modification means to
modify one or more properties of one or more skeletal components of
the user avatar via a user interface, and rendering means to render
the user avatar in accordance with the three dimensional mesh as
configured in response to the modified user avatar skeleton.
[0007] By configuring the three-dimensional mesh used to render the
user avatar in accordance with a skeletal model, then by
manipulation of one or more skeletal components the user can create
distinctive faces for their avatars in a comparatively simple
fashion before applying any further, more conventional changes such
as texture or accessory selection to the mesh.
[0008] In another aspect of the present invention, a server
operable to administer a multi-player online virtual environment
comprises reception means to receive data descriptive of respective
modified avatar skeletons from a plurality of remote entertainment
devices, and transmission means to transmit data descriptive of
respective modified avatar skeletons to a plurality of remote
entertainment devices.
[0009] In another aspect of the present invention, a system
comprising a server and two or more entertainment devices as
described in the above aspects co-operate to allow the two or more
entertainment devices to render the modified avatars of the users
of the respective other devices.
[0010] By enabling the distribution of user-modified skeletal
models for avatars, advantageously a population of avatars within
an on-line environment can therefore be more easily differentiated
when the populated environment is rendered by each participating
remote entertainment device.
[0011] Further respective aspects and features of the invention are
defined in the appended claims, including corresponding methods of
operation as appropriate.
[0012] Embodiments of the present invention will now be described
by way of example with reference to the accompanying drawings, in
which:
[0013] FIG. 1 is a schematic diagram of an entertainment
device;
[0014] FIG. 2 is a schematic diagram of a cell processor;
[0015] FIG. 3 is a schematic diagram of a video graphics
processor;
[0016] FIG. 4 is a schematic diagram of an interconnected set of
game zones in accordance with an embodiment of the present
invention;
[0017] FIG. 5 is a schematic diagram of a Home environment online
client/server arrangement in accordance with an embodiment of the
present invention;
[0018] FIG. 6a is a schematic diagram of a lobby zone in accordance
with an embodiment of the present invention;
[0019] FIG. 6b is a schematic diagram of a lobby zone in accordance
with an embodiment of the present invention;
[0020] FIG. 6c is a schematic diagram of a cinema zone in
accordance with an embodiment of the present invention;
[0021] FIG. 6d is a schematic diagram of a developer/publisher zone
in accordance with an embodiment of the present invention;
[0022] FIG. 7 is a flow diagram of a method of on-line transaction
in accordance with an embodiment of the present invention;
[0023] FIG. 8a is schematic diagram of an apartment zone in
accordance with an embodiment of the present invention;
[0024] FIG. 8b is schematic diagram of a trophy room zone in
accordance with an embodiment of the present invention;
[0025] FIG. 9 is a schematic diagram of a communication menu in
accordance with an embodiment of the present invention;
[0026] FIG. 10 is a schematic diagram of an interactive virtual
user device in accordance with an embodiment of the present
invention;
[0027] FIG. 11 is a schematic diagram of a user interface in
accordance with an embodiment of the present invention;
[0028] FIG. 12 is a schematic diagram of a user interface in
accordance with an embodiment of the present invention;
[0029] FIG. 13 is a schematic diagram of a user interface in
accordance with an embodiment of the present invention;
[0030] FIGS. 14A and 14B are schematic diagrams of a user interface
in accordance with an embodiment of the present invention;
[0031] FIGS. 15A, B &C are schematic diagrams of a user avatar
in accordance with an embodiment of the present invention; and
[0032] FIG. 16 is a flow diagram of a method of user identification
in accordance with an embodiment of the present invention.
[0033] An apparatus and method of avatar customisation are
disclosed. In the following description, a number of specific
details are presented in order to provide a thorough understanding
of the embodiments of the present invention. It will be apparent,
however, to a person skilled in the art that these specific details
need not be employed to practice the present invention. Conversely,
specific details known to the person skilled in the art are omitted
for the purposes of clarity where appropriate.
[0034] In a summary embodiment of the present invention, a user of
an entertainment device connected to an on-line virtual environment
selects and customises an avatar using conventional options such as
gender, clothing and skin-tone. In addition, however, the user can
also modify the three-dimensional mesh used to define the surface
of the user's avatar, upon which textures relating to gender, skin
tone, age etc., can be applied. This configuration is achieved
using a comparatively simple user interface by positioning vertices
of the avatar mesh in response to a skeletal model underpinning the
avatar's mesh structure. The user can therefore modify the mesh of
their avatar by making simple parametric adjustments to the
so-called `bones` of the skeletal models. Typically these bones are
interlinked so that modification to one bone or set of bones also
affects other related bones, so maintaining a harmonious set of
physical proportions for the mesh. The user interface provides a
hierarchy of adjustments, from whole-face skeletal adjustments
(e.g. by race) to partial face skeletal adjustments (e.g. upper
face, lower face, cranium) to individual features (e.g. cheek
bones). This allows a quick modification of the avatar without
compromising the ability to fine tune the results. Moreover, the
user interface allows the modification of bone parameters to
provide an asymmetric mesh, as this conveys a more naturalistic
appearance for the avatar, as well as providing an additional
source of distinctiveness and identity.
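The linked-bone behaviour described above, where adjusting one bone propagates a scaled change to related bones so the proportions stay harmonious, while separate left/right bones permit asymmetric tweaks, might be sketched as follows. The bone names, link factors and API are illustrative assumptions, not taken from the application:

```python
class Bone:
    """A skeletal component whose length can be adjusted via the UI."""

    def __init__(self, name, length=1.0):
        self.name = name
        self.length = length
        self.links = []  # (related bone, proportion of change to propagate)

    def link(self, other, factor):
        self.links.append((other, factor))

    def scale(self, factor, _visited=None):
        """Scale this bone, then propagate a damped version of the change
        to linked bones; _visited guards against cycles in the linkage."""
        _visited = _visited if _visited is not None else set()
        if self.name in _visited:
            return
        _visited.add(self.name)
        self.length *= factor
        for other, f in self.links:
            other.scale(1.0 + (factor - 1.0) * f, _visited)

jaw = Bone("jaw")
cheek_l, cheek_r = Bone("cheek_l"), Bone("cheek_r")
jaw.link(cheek_l, 0.5)   # cheeks follow half of any jaw change
jaw.link(cheek_r, 0.5)

jaw.scale(1.2)       # coarse whole-face adjustment ripples outwards...
cheek_r.scale(0.9)   # ...then an asymmetric tweak to one cheek only
print(jaw.length, cheek_l.length, cheek_r.length)
```

This mirrors the hierarchy of adjustments in the text: a high-level change moves several bones at once, after which individual bones can still be fine-tuned, including independently on each side of the face.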
[0035] FIG. 1 schematically illustrates the overall system
architecture of the Sony® Playstation 3® entertainment
device. A system unit 10 is provided, with various peripheral
devices connectable to the system unit.
[0036] The system unit 10 comprises: a Cell processor 100; a
Rambus® dynamic random access memory (XDRAM) unit 500; a
Reality Synthesiser graphics unit 200 with a dedicated video random
access memory (VRAM) unit 250; and an I/O bridge 700.
[0037] The system unit 10 also comprises a Blu Ray® Disk
BD-ROM® optical disk reader 430 for reading from a disk 440 and
a removable slot-in hard disk drive (HDD) 400, accessible through
the I/O bridge 700. Optionally the system unit also comprises a
memory card reader 450 for reading compact flash memory cards,
Memory Stick® memory cards and the like, which is similarly
accessible through the I/O bridge 700.
[0038] The I/O bridge 700 also connects to four Universal Serial
Bus (USB) 2.0 ports 710; a gigabit Ethernet port 720; an IEEE
802.11b/g wireless network (Wi-Fi) port 730; and a Bluetooth®
wireless link port 740 capable of supporting up to seven Bluetooth
connections.
[0039] In operation the I/O bridge 700 handles all wireless, USB
and Ethernet data, including data from one or more game controllers
751. For example when a user is playing a game, the I/O bridge 700
receives data from the game controller 751 via a Bluetooth link and
directs it to the Cell processor 100, which updates the current
state of the game accordingly.
[0040] The wireless, USB and Ethernet ports also provide
connectivity for other peripheral devices in addition to game
controllers 751, such as: a remote control 752; a keyboard 753; a
mouse 754; a portable entertainment device 755 such as a Sony
Playstation Portable® entertainment device; a video camera such
as an EyeToy® video camera 756; and a microphone headset 757.
Such peripheral devices may therefore in principle be connected to
the system unit 10 wirelessly; for example the portable
entertainment device 755 may communicate via a Wi-Fi ad-hoc
connection, whilst the microphone headset 757 may communicate via a
Bluetooth link.
[0041] The provision of these interfaces means that the Playstation
3 device is also potentially compatible with other peripheral
devices such as digital video recorders (DVRs), set-top boxes,
digital cameras, portable media players, Voice over IP telephones,
mobile telephones, printers and scanners.
[0042] In addition, a legacy memory card reader 410 may be
connected to the system unit via a USB port 710, enabling the
reading of memory cards 420 of the kind used by the
Playstation® or Playstation 2® devices.
[0043] In the present embodiment, the game controller 751 is
operable to communicate wirelessly with the system unit 10 via the
Bluetooth link. However, the game controller 751 can instead be
connected to a USB port, thereby also providing power by which to
charge the battery of the game controller 751. In addition to one
or more analogue joysticks and conventional control buttons, the
game controller is sensitive to motion in 6 degrees of freedom,
corresponding to translation and rotation in each axis.
Consequently gestures and movements by the user of the game
controller may be translated as inputs to a game in addition to or
instead of conventional button or joystick commands. Optionally,
other wirelessly enabled peripheral devices such as the Playstation
Portable device may be used as a controller. In the case of the
Playstation Portable device, additional game or control information
(for example, control instructions or number of lives) may be
provided on the screen of the device. Other alternative or
supplementary control devices may also be used, such as a dance mat
(not shown), a light gun (not shown), a steering wheel and pedals
(not shown) or bespoke controllers, such as a single or several
large buttons for a rapid-response quiz game (also not shown).
[0044] The remote control 752 is also operable to communicate
wirelessly with the system unit 10 via a Bluetooth link. The remote
control 752 comprises controls suitable for the operation of the
Blu Ray Disk BD-ROM reader 430 and for the navigation of disk
content.
[0045] The Blu Ray Disk BD-ROM reader 430 is operable to read
CD-ROMs compatible with the Playstation and PlayStation 2 devices,
in addition to conventional pre-recorded and recordable CDs, and
so-called Super Audio CDs. The reader 430 is also operable to read
DVD-ROMs compatible with the Playstation 2 and PlayStation 3
devices, in addition to conventional pre-recorded and recordable
DVDs. The reader 430 is further operable to read BD-ROMs compatible
with the Playstation 3 device, as well as conventional pre-recorded
and recordable Blu-Ray Disks.
[0046] The system unit 10 is operable to supply audio and video,
either generated or decoded by the Playstation 3 device via the
Reality Synthesiser graphics unit 200, through audio and video
connectors to a display and sound output device 300 such as a
monitor or television set having a display 305 and one or more
loudspeakers 310. The audio connectors 210 may include conventional
analogue and digital outputs whilst the video connectors 220 may
variously include component video, S-video, composite video and one
or more High Definition Multimedia Interface (HDMI) outputs.
Consequently, video output may be in formats such as PAL or NTSC,
or in 720p, 1080i or 1080p high definition.
[0047] Audio processing (generation, decoding and so on) is
performed by the Cell processor 100. The Playstation 3 device's
operating system supports Dolby® 5.1 surround sound, Dolby®
Theatre Surround (DTS), and the decoding of 7.1 surround sound from
Blu-Ray® disks.
[0048] In the present embodiment, the video camera 756 comprises a
single charge coupled device (CCD), an LED indicator, and
hardware-based real-time data compression and encoding apparatus so
that compressed video data may be transmitted in an appropriate
format such as an intra-image based MPEG (Motion Picture Experts
Group) standard for decoding by the system unit 10. The camera LED
indicator is arranged to illuminate in response to appropriate
control data from the system unit 10, for example to signify
adverse lighting conditions. Embodiments of the video camera 756
may variously connect to the system unit 10 via a USB, Bluetooth or
Wi-Fi communication port. Embodiments of the video camera may
include one or more associated microphones and also be capable of
transmitting audio data. In embodiments of the video camera, the
CCD may have a resolution suitable for high-definition video
capture. In use, images captured by the video camera may for
example be incorporated within a game or interpreted as game
control inputs.
[0049] In general, in order for successful data communication to
occur with a peripheral device such as a video camera or remote
control via one of the communication ports of the system unit 10,
an appropriate piece of software such as a device driver should be
provided. Device driver technology is well-known and will not be
described in detail here, except to say that the skilled man will
be aware that a device driver or similar software interface may be
required in the present embodiment described.
[0050] Referring now to FIG. 2, the Cell processor 100 has an
architecture comprising four basic components: external input and
output structures comprising a memory controller 160 and a dual bus
interface controller 170A,B; a main processor referred to as the
Power Processing Element 150; eight co-processors referred to as
Synergistic Processing Elements (SPEs) 110A-H; and a circular data
bus connecting the above components referred to as the Element
Interconnect Bus 180. The total floating point performance of the
Cell processor is 218 GFLOPS, compared with the 6.2 GFLOPS of the
Playstation 2 device's Emotion Engine.
[0051] The Power Processing Element (PPE) 150 is based upon a
two-way simultaneous multithreading Power 970 compliant PowerPC
core (PPU) 155 running with an internal clock of 3.2 GHz. It
comprises a 512 kB level 2 (L2) cache and a 32 kB level 1 (L1)
cache. The PPE 150 is capable of eight single precision operations
per clock cycle, translating to 25.6 GFLOPS at 3.2 GHz. The primary
role of the PPE 150 is to act as a controller for the Synergistic
Processing Elements 110A-H, which handle most of the computational
workload. In operation the PPE 150 maintains a job queue,
scheduling jobs for the Synergistic Processing Elements 110A-H and
monitoring their progress. Consequently each Synergistic Processing
Element 110A-H runs a kernel whose role is to fetch a job, execute
it and synchronise with the PPE 150.
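The job-queue arrangement described above can be sketched in outline. This is an illustrative model only: the class and function names are invented for the sketch, and the real SPE kernels operate on DMA-transferred work units rather than Python callables.

```python
from collections import deque

class PPEScheduler:
    """Toy model of the PPE's job queue (illustrative, not the real Cell API)."""
    def __init__(self):
        self.jobs = deque()
        self.completed = []

    def submit(self, job):
        self.jobs.append(job)

    def fetch(self):
        # An SPE kernel fetches the next job, or None when the queue is empty.
        return self.jobs.popleft() if self.jobs else None

    def synchronise(self, result):
        # The SPE kernel reports its result back to the PPE.
        self.completed.append(result)

def spe_kernel(scheduler):
    """Loop run by each SPE: fetch a job, execute it, synchronise with the PPE."""
    while (job := scheduler.fetch()) is not None:
        scheduler.synchronise(job())

scheduler = PPEScheduler()
for n in (1, 2, 3):
    scheduler.submit(lambda n=n: n * n)
spe_kernel(scheduler)
```

In this model the PPE never executes jobs itself; it only maintains the queue and collects results, mirroring its role as controller for the SPEs.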
[0052] Each Synergistic Processing Element (SPE) 110A-H comprises a
respective Synergistic Processing Unit (SPU) 120A-H, and a
respective Memory Flow Controller (MFC) 140A-H comprising in turn a
respective Direct Memory Access Controller (DMAC) 142A-H, a
respective Memory Management Unit (MMU) 144A-H and a bus interface
(not shown). Each SPU 120A-H is a RISC processor clocked at 3.2 GHz
and comprising 256 kB local RAM 130A-H, expandable in principle to
4 GB. Each SPE gives a theoretical 25.6 GFLOPS of single precision
performance. An SPU can operate on 4 single precision floating
point numbers, 4 32-bit integers, 8 16-bit integers, or 16 8-bit
integers in a single clock cycle. In the same clock cycle it can
also perform a memory operation. The SPU 120A-H does not directly
access the system memory XDRAM 500; the 64-bit addresses formed by
the SPU 120A-H are passed to the MFC 140A-H which instructs its DMA
controller 142A-H to access memory via the Element Interconnect Bus
180 and the memory controller 160.
[0053] The Element Interconnect Bus (EIB) 180 is a logically
circular communication bus internal to the Cell processor 100 which
connects the above processor elements, namely the PPE 150, the
memory controller 160, the dual bus interface 170A,B and the 8 SPEs
110A-H, totalling 12 participants. Participants can simultaneously
read and write to the bus at a rate of 8 bytes per clock cycle. As
noted previously, each SPE 110A-H comprises a DMAC 142A-H for
scheduling longer read or write sequences. The EIB comprises four
channels, two each in clockwise and anti-clockwise directions.
Consequently for twelve participants, the longest step-wise
data-flow between any two participants is six steps in the
appropriate direction. The theoretical peak instantaneous EIB
bandwidth for 12 slots is therefore 96B per clock, in the event of
full utilisation through arbitration between participants. This
equates to a theoretical peak bandwidth of 307.2 GB/s (gigabytes
per second) at a clock rate of 3.2 GHz.
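The quoted bandwidth figures follow directly from the stated parameters, as this short check shows:

```python
# Reproduce the EIB peak-bandwidth figures from the stated parameters.
participants = 12
bytes_per_clock_each = 8          # each participant reads/writes 8 bytes per cycle
clock_hz = 3.2e9

peak_bytes_per_clock = participants * bytes_per_clock_each   # 96 B per clock
peak_bandwidth_gb_s = peak_bytes_per_clock * clock_hz / 1e9  # GB per second
```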
[0054] The memory controller 160 comprises an XDRAM interface 162,
developed by Rambus Incorporated. The memory controller interfaces
with the Rambus XDRAM 500 with a theoretical peak bandwidth of 25.6
GB/s.
[0055] The dual bus interface 170A,B comprises a Rambus FlexIO.RTM.
system interface 172A,B. The interface is organised into 12
channels each being 8 bits wide, with five paths being inbound and
seven outbound. This provides a theoretical peak bandwidth of 62.4
GB/s (36.4 GB/s outbound, 26 GB/s inbound) between the Cell
processor and the I/O Bridge 700 via controller 170A and the
Reality Synthesiser graphics unit 200 via controller 170B.
[0056] Data sent by the Cell processor 100 to the Reality Synthesiser
graphics unit 200 will typically comprise display lists, being a
sequence of commands to draw vertices, apply textures to polygons,
specify lighting conditions, and so on.
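A display list of this kind can be pictured as an ordered command stream. The command names and tuple structure below are invented for illustration and do not reflect the RSX's actual binary command format.

```python
# Hypothetical command tuples standing in for the RSX's real binary commands.
display_list = [
    ("set_lighting", {"ambient": 0.2, "directional": (0.0, -1.0, 0.0)}),
    ("bind_texture", {"texture_id": 7}),
    ("draw_vertices", {"vertices": [(0, 0, 0), (1, 0, 0), (0, 1, 0)]}),
]

def execute(display_list):
    """Walk the list in order, as the graphics unit consumes commands
    produced by the Cell processor; returns the number of vertices drawn."""
    drawn = 0
    for command, args in display_list:
        if command == "draw_vertices":
            drawn += len(args["vertices"])
    return drawn
```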
[0057] Referring now to FIG. 3, the Reality Synthesiser graphics
(RSX) unit 200 is a video accelerator based upon the NVidia.RTM.
G70/71 architecture that processes and renders lists of commands
produced by the Cell processor 100. The RSX unit 200 comprises a
host interface 202 operable to communicate with the bus interface
controller 170B of the Cell processor 100; a vertex pipeline 204
(VP) comprising eight vertex shaders 205; a pixel pipeline 206 (PP)
comprising 24 pixel shaders 207; a render pipeline 208 (RP)
comprising eight render output units (ROPs) 209; a memory interface
210; and a video converter 212 for generating a video output. The
RSX 200 is complemented by 256 MB double data rate (DDR) video RAM
(VRAM) 250, clocked at 600 MHz and operable to interface with the
RSX 200 at a theoretical peak bandwidth of 25.6 GB/s. In operation,
the VRAM 250 maintains a frame buffer 214 and a texture buffer 216.
The texture buffer 216 provides textures to the pixel shaders 207,
whilst the frame buffer 214 stores results of the processing
pipelines. The RSX can also access the main memory 500 via the EIB
180, for example to load textures into the VRAM 250.
[0058] The vertex pipeline 204 primarily processes deformations and
transformations of vertices defining polygons within the image to
be rendered.
[0059] The pixel pipeline 206 primarily processes the application
of colour, textures and lighting to these polygons, including any
pixel transparency, generating red, green, blue and alpha
(transparency) values for each processed pixel. Texture mapping may
simply apply a graphic image to a surface, or may include
bump-mapping (in which the notional direction of a surface is
perturbed in accordance with texture values to create highlights
and shade in the lighting model) or displacement mapping (in which
the applied texture additionally perturbs vertex positions to
generate a deformed surface consistent with the texture).
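The distinction between bump mapping and displacement mapping can be illustrated with a toy one-dimensional surface: displacement mapping moves the vertices themselves, while bump mapping leaves the geometry flat and only perturbs the notional surface direction (approximated here by the texture's local slope). This is an illustrative sketch, not shader code.

```python
# A flat strip of vertices along x, and a height texture applied to it.
texture = [0.0, 0.1, 0.3, 0.1, 0.0]
vertices = [(x, 0.0) for x in range(5)]

def displacement_map(vertices, texture):
    """Displacement mapping: the applied texture perturbs vertex positions,
    generating a genuinely deformed surface."""
    return [(x, y + texture[i]) for i, (x, y) in enumerate(vertices)]

def bump_normals(texture):
    """Bump mapping: geometry is untouched; only the notional surface
    direction is perturbed, approximated here by the texture's forward slope."""
    return [texture[min(i + 1, len(texture) - 1)] - texture[i]
            for i in range(len(texture))]
```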
[0060] The render pipeline 208 performs depth comparisons between
pixels to determine which should be rendered in the final image.
Optionally, if the intervening pixel process will not affect depth
values (for example in the absence of transparency or displacement
mapping) then the render pipeline and vertex pipeline 204 can
communicate depth information between them, thereby enabling the
removal of occluded elements prior to pixel processing, and so
improving overall rendering efficiency. In addition, the render
pipeline 208 also applies subsequent effects such as full-screen
anti-aliasing over the resulting image.
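The depth comparison performed by the render pipeline can be sketched as a simple depth-buffer pass; the buffer layout and fragment format are invented for the sketch.

```python
def render_with_depth_test(fragments, width, height):
    """Keep, per pixel, only the fragment nearest the camera (smallest depth),
    as the render pipeline's depth comparison does."""
    far = float("inf")
    depth_buffer = [[far] * width for _ in range(height)]
    frame_buffer = [[None] * width for _ in range(height)]
    for x, y, depth, colour in fragments:
        if depth < depth_buffer[y][x]:
            depth_buffer[y][x] = depth
            frame_buffer[y][x] = colour
    return frame_buffer

# Two fragments contend for pixel (0, 0); the nearer one (depth 0.25) wins.
frame = render_with_depth_test(
    [(0, 0, 0.75, "red"), (0, 0, 0.25, "blue")], width=2, height=2)
```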
[0061] Both the vertex shaders 205 and pixel shaders 207 are based
on the shader model 3.0 standard. Up to 136 shader operations can
be performed per clock cycle, with the combined pipeline therefore
capable of 74.8 billion shader operations per second, outputting up
to 840 million vertices and 10 billion pixels per second. The total
floating point performance of the RSX 200 is 1.8 TFLOPS.
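The quoted shader throughput figures are mutually consistent, and are consistent with the RSX's generally reported 550 MHz core clock:

```python
# 136 shader operations per clock at 74.8 billion operations per second
# implies the core clock rate.
ops_per_clock = 136
ops_per_second = 74.8e9

implied_clock_hz = ops_per_second / ops_per_clock   # 550 MHz
```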
[0062] Typically, the RSX 200 operates in close collaboration with
the Cell processor 100; for example, when displaying an explosion,
or weather effects such as rain or snow, a large number of
particles must be tracked, updated and rendered within the scene.
In this case, the PPU 155 of the Cell processor may schedule one or
more SPEs 110A-H to compute the trajectories of respective batches
of particles. Meanwhile, the RSX 200 accesses any texture data
(e.g. snowflakes) not currently held in the video RAM 250 from the
main system memory 500 via the element interconnect bus 180, the
memory controller 160 and a bus interface controller 170B. The or
each SPE 110A-H outputs its computed particle properties (typically
coordinates and normals, indicating position and attitude) directly
to the video RAM 250; the DMA controller 142A-H of the or each SPE
110A-H addresses the video RAM 250 via the bus interface controller
170B. Thus in effect the assigned SPEs become part of the video
processing pipeline for the duration of the task.
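The division of particle work among SPEs described above can be sketched as follows; the batch assignment, physics step and output layout are all illustrative assumptions rather than the actual implementation.

```python
def split_into_batches(particles, n_spes):
    """Deal particles out to the available SPEs round-robin."""
    batches = [[] for _ in range(n_spes)]
    for i, particle in enumerate(particles):
        batches[i % n_spes].append(particle)
    return batches

def update_batch(batch, dt=0.1, gravity=-9.8):
    """Each SPE advances its batch, outputting per-particle coordinates and a
    normal (position and attitude), as would be written to video RAM."""
    return [{"pos": (x + vx * dt, y + vy * dt + 0.5 * gravity * dt * dt),
             "normal": (0.0, 1.0)}
            for (x, y, vx, vy) in batch]

# Six identical snowflake particles shared among three SPEs.
particles = [(0.0, 10.0, 1.0, 0.0)] * 6
batches = split_into_batches(particles, n_spes=3)
results = [update_batch(batch) for batch in batches]
```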
[0063] In general, the PPU 155 can assign tasks in this fashion to
six of the eight SPEs available; one SPE is reserved for the
operating system, whilst one SPE is effectively disabled. The
disabling of one SPE provides a greater level of tolerance during
fabrication of the Cell processor, as it allows for one SPE to fail
the fabrication process. Alternatively if all eight SPEs are
functional, then the eighth SPE provides scope for redundancy in
the event of subsequent failure by one of the other SPEs during the
life of the Cell processor.
[0064] The PPU 155 can assign tasks to SPEs in several ways. For
example, SPEs may be chained together to handle each step in a
complex operation, such as accessing a DVD, video and audio
decoding, and error masking, with each step being assigned to a
separate SPE. Alternatively or in addition, two or more SPEs may be
assigned to operate on input data in parallel, as in the particle
animation example above.
[0065] Software instructions implemented by the Cell processor 100
and/or the RSX 200 may be supplied at manufacture and stored on the
HDD 400, and/or may be supplied on a data carrier or storage medium
such as an optical disk or solid state memory, or via a
transmission medium such as a wired or wireless network or internet
connection, or via combinations of these.
[0066] The software supplied at manufacture comprises system
firmware and the Playstation 3 device's operating system (OS). In
operation, the OS provides a user interface enabling a user to
select from a variety of functions, including playing a game,
listening to music, viewing photographs, or viewing a video. The
interface takes the form of a so-called cross-media bar (XMB), with
categories of function arranged horizontally. The user navigates by
moving through the function icons (representing the functions)
horizontally using the game controller 751, remote control 752 or
other suitable control device so as to highlight a desired function
icon, at which point options pertaining to that function appear as
a vertically scrollable list of option icons centred on that
function icon, which may be navigated in analogous fashion.
However, if a game, audio or movie disk 440 is inserted into the
BD-ROM optical disk reader 430, the Playstation 3 device may select
appropriate options automatically (for example, by commencing the
game), or may provide relevant options (for example, to select
between playing an audio disk or compressing its content to the HDD
400).
[0067] In addition, the OS provides an on-line capability,
including a web browser, an interface with an on-line store from
which additional game content, demonstration games (demos) and
other media may be downloaded, and a friends management capability,
providing on-line communication with other Playstation 3 device
users nominated by the user of the current device; for example, by
text, audio or video depending on the peripheral devices available.
The on-line capability also provides for on-line communication,
content download and content purchase during play of a suitably
configured game, and for updating the firmware and OS of the
Playstation 3 device itself. It will be appreciated that the term
"on-line" does not imply the physical presence of wires, as the
term can also apply to wireless connections of various types.
[0068] In an embodiment of the present invention, the
above-mentioned online capability comprises interaction with a
virtual environment populated by avatars (graphical
representations) of the user of the PS3 10 and of other PS3 users
who are currently online.
[0069] The software to enable the virtual interactive environment
is typically resident on the HDD 400, and can be upgraded and/or
expanded by software that is downloaded, or stored on optical disk
440, or accessed by any other suitable means. Alternatively, the
software may reside on a flash memory card 420, optical disk 440 or
a central server (not shown). In an embodiment of the present
invention, the virtual interactive environment (hereafter called
the `Home` environment) is selected from the cross-media bar. The
Home environment then starts in a conventional manner similar to a
3D video game by loading and executing control software, loading 3D
models and textures into video memory 250, and rendering scenes
depicting the Home environment. Alternatively or in addition, the
Home environment can be initiated by other programs, such as a
separate game.
[0070] Referring now to FIG. 4, which displays a notional map of
the Home environment, and FIG. 5, which is a schematic diagram of a
Home environment online client/server arrangement, the user's
avatar is spawned within a lobby zone 1010 by default. However, a
user can select among the other zones 1020-1060 (detailed below) of the
map, causing the selected zone to be loaded and the avatar to be
spawned within that zone. In an embodiment of the present
invention, the map screen further comprises a sidebar on which the
available zones may be listed, together with management tools such
as a ranking option, enabling zones to be listed in order of user
preference, or such as most recently added and/or A-Z listings. In
addition a search interface may allow the user to search for a zone
by name. In an embodiment of the present invention, there may be
many more zones available than are locally stored on the user's PS3
at any one time; the local availability may be colour coded on the
list, or the list may be filtered to only display locally available
zones. If the user selects a locally unavailable zone, it can be
downloaded from a Home environment Server 2010.
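The zone-resolution behaviour described (use the local copy when present, otherwise download from a Home environment server) might be sketched as follows; the data structures and names are illustrative assumptions.

```python
def resolve_zone(zone_name, local_zones, server_zones):
    """Return the zone data and its origin, fetching from the Home environment
    server and caching locally when the zone is not locally stored."""
    if zone_name in local_zones:
        return local_zones[zone_name], "local"
    if zone_name in server_zones:
        local_zones[zone_name] = server_zones[zone_name]   # cache the download
        return local_zones[zone_name], "downloaded"
    raise KeyError(zone_name)

# Illustrative stores: only the lobby is held locally at first.
local = {"lobby": {"id": 1010}}
server = {"lobby": {"id": 1010}, "cinema": {"id": 1020}}
```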
[0071] Referring now to FIG. 6a, the lobby zone 1010 typically
resembles a covered piazza, and may comprise parkland (grass,
trees, sculptures etc.), and gathering spaces (such as open areas,
single benches or rows of seats etc.) where users can meet through
their avatars.
[0072] The lobby zone 1010 typically also comprises
advertisement-hoardings, for displaying either still or moving
adverts for games or other content or products. These may be on the
walls of the lobby, or may stand alone.
[0073] The lobby zone 1010 may also include an open-air cinema 1012
showing trailers, high-profile adverts or other content from
third-party providers. Such content is typically streamed or
downloaded from a Home environment server 2010 to which the PS3 10
connects when the Home environment is loaded, as described in more
detail later.
[0074] The cinema screen is accompanied by seating for avatars in
front of it, such that when an avatar sits down, the camera angle
perceived by the user of the avatar also encompasses the
screen.
[0075] Referring now also to FIG. 6b, the lobby zone 1010 may also
include general amusements 1014, such as functioning pool tables,
bowling alleys, and/or a video arcade. Games of pool or bowling may
be conducted via the avatar, such that the avatar holds the pool
cue or bowling ball, and is controlled in a conventional manner for
such games. In the video arcade, if an avatar approaches a
videogame machine, the home environment may switch to a
substantially full-screen representation of the videogame selected.
Such games may, for example, be classic arcade or console games
such as Space Invaders (.RTM.), or Pac-Man (.RTM.), which are
comparatively small in terms of memory and processing and can be
emulated by the PS3 within the Home environment or run as plug-ins
to the Home environment. In this case, typically the user will
control the game directly, without representation by the avatar.
The game will switch back to the default Home environment view if
the user quits the game, or causes the avatar to move away from the
videogame machine. In addition to classic arcade games,
user-created game content may be featured on one or more of the
virtual video game machines. Such content may be the subject of
on-line competitions to be featured in such a manner, with new
winning content downloaded on a regular basis.
[0076] In addition to the lobby zone 1010, other zones (e.g. zones
1020, 1030, 1040, 1050 and 1060, which may be rooms, areas or other
constructs) are available. These may be accessed either via a map
screen similar in nature to that of FIG. 4, or alternatively the
user can walk to these other areas by guiding their avatar to
various exits 1016 from the lobby.
[0077] Typically, an exit 1016 takes the form of a tunnel or
corridor (but may equally take the form of an anteroom) to the next
area. While the avatar is within the tunnel or anteroom, the next
zone is loaded into memory. Both the lobby and the next zone
contain identical models of the tunnel or anteroom, or the model is
a common resource to both. In either case, the user's avatar is
relocated from the lobby-based version to the new zone-based
version of the tunnel or anteroom at the same position. In this way
the user's avatar can apparently walk seamlessly throughout the
Home environment, without the need to retain the whole environment
in memory at the same time.
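The seamless hand-over through a tunnel model common to both zones can be sketched as follows; the zone and avatar structures are invented for illustration.

```python
class Zone:
    def __init__(self, name, tunnel_positions):
        self.name = name
        self.tunnel = tunnel_positions   # tunnel model shared by both zones

def hand_over(avatar, old_zone, new_zone):
    """Relocate the avatar to the same tunnel position in the newly loaded
    zone, so the walk appears seamless while only one zone is in memory."""
    assert avatar["pos"] in old_zone.tunnel and avatar["pos"] in new_zone.tunnel
    avatar["zone"] = new_zone.name
    return avatar

# Both zones contain identical copies of the connecting tunnel.
lobby = Zone("lobby", tunnel_positions={(10, 0), (11, 0)})
cinema = Zone("cinema", tunnel_positions={(10, 0), (11, 0)})
avatar = {"zone": "lobby", "pos": (10, 0)}
avatar = hand_over(avatar, lobby, cinema)
```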
[0078] Referring now also to FIG. 6c, one available zone is a
Cinema zone 1020. The Cinema zone 1020 resembles a multiplex
cinema, comprising a plurality of screens that may show content
such as trailers, movies, TV programmes, or adverts downloaded or
streamed from a Home environment server 2010 as noted previously
and detailed below, or may show content stored on the HDD 400 or on
an optical disk 440, such as a Blu-Ray disk.
[0079] Typically, the multiplex cinema will have an entrance area
featuring a screen 1022 on which high-profile trailers and adverts
may be shown to all visitors, together with poster adverts 1024,
typically but not limited to featuring upcoming movies. Specific
screens and the selection and display of the trailers and posters
can each be restricted according to the age of the user, as
registered with the PS3. This age restriction can be applied to any
displayed content to which an age restriction tag is associated, in
any of the zones within the Home environment.
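The age-restriction rule described above amounts to a simple filter over tagged content; the `age_rating` tag name is an assumption made for the sketch.

```python
def visible_content(items, user_age):
    """Filter out any displayed content whose age-restriction tag exceeds
    the age registered with the PS3; untagged content is unrestricted."""
    return [item for item in items
            if item.get("age_rating", 0) <= user_age]

posters = [{"title": "family film", "age_rating": 0},
           {"title": "horror trailer", "age_rating": 18}]
```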
[0080] In addition, in an embodiment of the present invention the
multiplex cinema provides a number of screen rooms in which
featured content is available, and amongst which the user can
select. Within a screen room downloaded, streamed or locally stored
media can be played within a virtual cinema environment, in which
the screen is set in a room with rows of seats, screen curtains,
etc. The cinema is potentially available to all users in the Home
environment, and so the avatars of other users may also be visible,
for example watching commonly streamed material such as a web
broadcast. Alternatively, the user can zoom in so that the screen
occupies the full viewing area.
[0081] Referring now also to FIG. 6d, another type of zone is a
developer or publisher zone 1030. Typically, there may be a
plurality of such zones available. Optionally, each may have its
own exit from the lobby area 1010, or alternatively some or all may
share an exit from the lobby and then have separate exits from
within a tunnel or ante-room model common to or replicated by each
available zone therein. Alternatively they may be selected from a
menu, either in the form of a pop-up menu, or from within the Home
environment, such as by selecting from a set of signposts. In these
latter cases the connecting tunnel or anteroom will appear to link
only to the selected developer or publisher zone 1030.
Alternatively or in addition, such zones may be selected via the map
screen, resulting in the zone being loaded into memory, and the
avatar re-spawning within the selected zone.
[0082] Developer or publisher zones 1030 provide additional virtual
environments, which may reflect the look and feel of the developer
or publisher's products, brands and marks.
[0083] The developer or publisher zones 1030 are supplementary
software modules to the Home environment and typically comprise
additional 3D models and textures to provide the structure and
appearance of the zone.
[0084] In addition, the software operable to implement the Home
environment supports the integration of third party software via an
application program interface (API). Therefore, developers can
integrate their own functional content within the Home environment
of their own zone. This may take the form of any or all of:
[0085] i. Downloading/streaming of specific content, such as game
trailers or celebrity endorsements;
[0086] ii. Changes in avatar appearance, behaviour and/or
communication options within the zone;
[0087] iii. The provision of one or more games, such as basketball
1032 or a golf range 1034, optionally branded or graphically
reminiscent of the developer's or publisher's games;
[0088] iv. One or more interactive scenes or vignettes
representative of the developer's or publisher's games, enabling
the player to experience an aspect of the game, hone a specific
skill of the game, or familiarise themselves with the controls of a
game;
[0089] v. An arena, ring, dojo, court or similar area 1036 in which
remotely played games may be represented live by avatars 1038, for
spectators to watch.
[0090] Thus, for example, a developer's zone may resemble a concourse
in the developer's signature colours, featuring their logos, onto
which gaming areas open, such as soccer nets or a skeet range for
shooting. In addition, a booth (not shown) manned by
game-specific characters allows the user's avatar to enter and
either temporarily change into the lead character of the game, or
zoom into a first person perspective, and enter a further room
resembling a scene from the featured game. Here the user interacts
with other characters from the game, and plays out a key scene.
Returning to the concourse, adverts for the game and other content
are displayed on the walls. At the end of the zone, the concourse
opens up into an arena where a 5-a-side football match is being
played, where the positions of the players and the ball correspond
to a game currently being played by a popular group, such as a
high-ranking game clan, in another country.
[0091] In embodiments of the present invention, developer/publisher
zones are available to download. Alternatively or in addition, to
reduce bandwidth they may be supplied as demo content on magazine
disks, or may be installed/upgraded from disk as part of the
installation process for a purchased game of the developer or
publisher. In the latter two examples, subsequent purchase or
registration of the game may result in further zone content being
unlocked or downloaded. In any event, further modifications, and
timely advert and trailer media, may be downloaded as required.
[0092] A similar zone is the commercial zone 1040. Again, there may
be a plurality of such commercial zones accessible in similar
manner to the developer and publisher zones. Like
developer/publisher zones 1030, commercial zones 1040 may comprise
representative virtual assets of one or more commercial vendors in
the form of 3D models, textures etc., enabling a rendering of their
real-world shops, brands and identities, and these may be
geographically and/or thematically grouped within zones.
[0093] Space within commercial zones may be rented as so-called
`virtual real-estate` by third parties. For example, a retailer may
pay to have a rendering of their shop included within a commercial
zone 1040 as part of a periodic update of the Home environment
supplied via the Home environment server 2010, for example on a
monthly or annual renewal basis. A retailer may additionally pay
for the commerce facilities described above, either on a periodic
basis or per item. In this way they can provide users of the Home
environment with a commercial presence.
[0094] Again, the commercial zone comprises supplementary software
that can integrate with the home environment via an API, to provide
additional communication options (shop-specific names, goods,
transaction options etc), and additional functionality, such as
accessing an online database of goods and services for purchase,
determining current prices, the availability of goods, and delivery
options. Such functions may be accessed either via a menu (either
as a pop-up or within the Home environment, for example on a wall)
or via communication with automated avatars. Communication between
avatars is described in more detail later.
[0095] It will be appreciated that developers and publishers can
also provide stores within commercial zones, and in addition that
connecting tunnels between developer/publisher and commercial zones
may be provided. For example, a tunnel may link a developer zone to
a store that sells the developer's games. Such a tunnel may be of a
`many to one` variety, such that exits from several zones emerge
from the same tunnel in-store. In this case, if re-used, typically
the tunnel would be arranged to return the user to the previous
zone rather than one of the possible others.
[0096] In an embodiment of the present invention, the software
implementing the Home environment has access to an online-content
purchase system provided by the PS3 OS. Developers, publishers and
store owners can use this system via an interface to specify the IP
address and query text that facilitates their own on-line
transaction. Alternatively, the user can allow their PS3
registration details and credit card details to be used directly,
such that by selecting a suitably enabled object, game, advert,
trailer or movie anywhere within the Home environment, they can
select to purchase that item or service. In particular, the Home
environment server 2010 can store and optionally validate the
user's credit card and other details so that the details are ready
to be used in a transaction without the user having to enter them.
In this way the Home environment acts as an intermediary in the
transaction. Alternatively such details can be stored at the PS3
and validated either by the PS3 or by the Home environment
server.
[0097] Thus, referring now also to FIG. 7, in an embodiment of the
present invention a method of sale comprises in a step s2102 a user
selecting an item (goods or a service) within the Home environment.
In step s2104, the PS3 10 transmits identification data
corresponding with the object to the Home environment server 2010,
which in step s2106 verifies the item's availability from a
preferred provider (preferably within the country corresponding to
the IP address of the user). If the item is unavailable then in
step s2107 it informs the user by transmitting a message to the
user's PS3 10. Alternatively, it first checks for availability from
one or more secondary providers, and optionally confirms whether
supply from one of these providers is acceptable to the user. In
step s2108, the Home environment server retrieves from data storage
the user's registered payment details and validates them. If there
is no valid payment method available, then the Home environment may
request that the user enters new details via a secure (i.e.
encrypted) connection. Once a valid payment method is available,
then in step s2110 the Home environment server requests from the
appropriate third party payment provider a transfer of payment from
the user's account. Finally, in s2112 the Home environment server
places an order for the item with the preferred provider, giving
the user's delivery address or IP address as applicable, and
transferring appropriate payment to the preferred provider's
account.
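The method of sale of FIG. 7 can be condensed into a sketch; the provider and payment structures are illustrative stand-ins for the server's actual records, and confirmation of secondary-provider supply with the user is omitted for brevity.

```python
def purchase(item, providers, payment_details):
    """Follow FIG. 7 in outline: check availability from the preferred then
    secondary providers (s2106), report unavailability (s2107), validate
    payment (s2108), then take payment and place the order (s2110, s2112)."""
    provider = next((p for p in providers if item in p["stock"]), None)
    if provider is None:
        return {"status": "unavailable"}
    if not payment_details.get("valid"):
        return {"status": "payment required"}
    return {"status": "ordered", "provider": provider["name"], "item": item}

# Providers are tried in order of preference.
providers = [{"name": "preferred", "stock": {"game"}},
             {"name": "secondary", "stock": {"game", "film"}}]
```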
[0098] In this way, commerce is not limited specifically to shops.
Similarly, it is not necessary for shops to provide their own
commerce applications if the preferred provider for goods or
services when displayed within a shop is set to be that shop's
owner. Where the goods or service may be digitally provided, then
optionally it is downloaded from the preferred provider directly or
via a Home environment server 2010.
[0099] In addition to the above public zones, there are additional
zones that are private to the individual user and may only be
accessed by them or by invitation from them. These zones also have
exits from the communal lobby area, but when entered by the avatar
(or chosen via the map screen), load a respective version of the
zone that is private to that user.
[0100] Referring to FIG. 8a, the first of these zones is an
apartment zone 1050. In an embodiment of the present invention,
this is a user-customisable zone in which such features 1052 as
wallpaper, flooring, pictures, furniture, outside scenery and
lighting may be selected and positioned. Some of the furniture is
functional furniture 1054, linked to PS3 functionality. For
example, a television may be placed in the apartment 1050 on which
can be viewed one of several streamed video broadcasts, or media
stored on the PS3 HDD 400 or optical disk 440. Similarly, a radio
or hi-fi may be selected that contains pre-selected links to
internet radio streams. In addition, user artwork or photos may be
imported into the room in the form of wall hangings and
pictures.
[0101] Optionally, the user (represented in FIG. 8a by their avatar
1056) may purchase a larger apartment, and/or additional goods such
as a larger TV, a pool table, or automated non-player avatars.
Other possible items include a gym, swimming pool, or disco area.
In these latter cases, additional control software or configuration
libraries to provide additional character functionality will
integrate with the home environment via the API in a similar
fashion to that described for the commercial and
developer/publisher zones 1030, 1040 described previously.
[0102] Such purchases may be made using credit card details
registered with the Home environment server. In return for a
payment, the server downloads an authorisation key to unlock the
relevant item for use within the user's apartment. Alternatively,
the 3D model, textures and any software associated with an item may
also be downloaded from the Home environment server or an
authorised third-party server, optionally again associated with an
authorisation key. The key may, for example, require correspondence
with a firmware digital serial number of the PS3 10, thereby
preventing unauthorised distribution.
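One plausible realisation of an authorisation key that requires correspondence with a firmware digital serial number is to derive the key from the item and serial together. The patent does not specify an algorithm, so the SHA-256 scheme below is purely illustrative.

```python
import hashlib

def make_key(item_id, console_serial):
    """Derive an unlock key bound to one console's serial number
    (an assumed scheme for illustration only)."""
    return hashlib.sha256(f"{item_id}:{console_serial}".encode()).hexdigest()

def unlock(item_id, key, console_serial):
    """The item unlocks only on the console whose serial the key was issued
    for, preventing unauthorised distribution of the downloaded item."""
    return key == make_key(item_id, console_serial)

key = make_key("pool_table", console_serial="PS3-0001")
```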
[0103] A user's apartment can only be accessed by others upon
invitation from the respective user. This invitation can take the
form of a standing invitation for particular friends from within a
friends list, or in the form of a single-session pass conferred on
another user, and only valid whilst that user remains in the
current Home environment session. Such invitations may take the
form of an association maintained by a Home environment server
2010, or a digital key supplied between PS3 devices on a
peer-to-peer basis that enables confirmation of status as an
invitee.
[0104] In an embodiment of the present invention, invited users can
only enter the apartment when the apartment's user is present
within the apartment, and are automatically returned to the lobby
if the apartment's user leaves. Whilst within the apartment, all
communication between the parties present (both user and positional
data) is purely peer-to-peer.
[0105] The apartment thus also provides a user with the opportunity
to share home created content such as artwork, slideshows, audio or
video with invited guests, and also to interact with friends
without potential interference from other users within the public
zones.
[0106] When invited guests enter a user's apartment, the
configuration of the room and the furnishings within it are
transmitted in a peer-to-peer fashion between the attendees using
ID codes for each object and positional data. Where a room or item
is not held in common between the user and a guest, the model,
textures and any code required to implement it on the guest's PS3
may also be transmitted, together with a single-use key or similar
constraint, such as use only whilst in the user's apartment and
whilst the user and guest remain online in this session.
[0107] Referring to FIG. 8b, a further private space that may
similarly be accessed only by invitation is the user's Trophy Room
1060. The Trophy Room 1060 provides a space within which trophies
1062 earned during game play may be displayed.
[0108] For example, a third-party game comprises seeking a magical
crystal. If the player succeeds in finding the crystal, the third
party game nominates this as a trophy for the Trophy Room 1060, and
places a 3D model and texture representative of the crystal in a
file area accessed by the Home environment software when loading
the Trophy Room 1060. The software implementing the Home
environment can then render the crystal as a trophy within the
Trophy Room.
[0109] When parties are invited to view a user's trophy room, the
models and textures required to temporarily view the trophies are
sent from the user's PS3 to those of the other parties on a
peer-to-peer basis. This may be done as a background activity
following the initial invitation, in anticipation of entering the
trophy room, or may occur when parties enter a connecting
tunnel/anteroom or select the user's trophy room from the map
screen. Optionally, where another party also has that trophy, they
will not download the corresponding trophy from the user they are
visiting. Therefore, in an embodiment of the present invention,
each trophy comprises an identifying code.
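By way of illustration, the identifying codes allow a host's PS3 to transmit only the trophy assets a visiting guest lacks. A minimal sketch (names illustrative):

```python
def trophies_to_send(host_trophies: dict, guest_trophy_ids: set) -> dict:
    """Return only the trophy assets (keyed by identifying code) that the
    visiting guest does not already hold, avoiding redundant downloads."""
    return {code: asset for code, asset in host_trophies.items()
            if code not in guest_trophy_ids}
```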
[0110] Alternatively or in addition, a trophy room may be shared
between members of a group or so-called `clan`, such that a trophy
won by any member of the clan is transmitted to other members of
the clan on a peer-to-peer basis. Therefore all members of the clan
will see a common set of trophies.
[0111] Alternatively or in addition, a user can have a standing
invitation to all members of the Home environment, allowing anyone
to visit their trophy room. As with the commercial and
developer/publisher zones, a plurality of rooms is therefore
possible, for example a private, a group-based and a public trophy
room. This may be managed either by selection from a pop-up menu or
signposts within the Home environment as described previously, or
by identifying a relevant user by walking up to their avatar, and
then selecting to enter their (public) trophy room upon using the
trophy room exit from the lobby.
[0112] Alternatively or in addition, a public trophy room may be
provided. This room may display the trophies of the person in the
current instance of the Home environment who has the most trophies
or a best overall score according to a trophy value scoring
scheme.
[0113] Alternatively it may be an aggregate trophy room, showing
the best, or a selection of, trophies from some or all of the users
in that instance of the Home environment, together with the ID of
the user. Thus, for example, a user could spot a trophy from a game
they are having difficulty with, identify who in the Home
environment won it, and then go and talk to them about how they won
it. Alternatively, a public trophy room could contain the best
trophies across a plurality of Home environments, identifying the
best gamers within a geographical, age-specific or game-specific
group, or even worldwide. Alternatively or in addition, a leader
board of the best-scoring gamers can be provided and updated
live.
[0114] It will be appreciated that potentially a large number of
additional third party zones may become available, each comprising
additional 3D models, textures and control software. As a result a
significant amount of space on HDD 400 may become occupied by Home
environment zones.
[0115] Consequently, in an embodiment of the present invention the
number of third party zones currently associated with a user's Home
environment can be limited. In a first instance, a maximum memory
allocation can be used to prevent additional third party zones
being added until an existing one is deleted. Alternatively or in
addition, third party zones may be limited according to
geographical relevance or user interests (declared on registration
or subsequently via an interface with the Home environment server
2010), such that only third party zones relevant to the user by
these criteria are downloaded. Under such a system, if a new third
party zone becomes available, its relevance to the user is
evaluated according to the above criteria, and if it is more
relevant than at least one of those currently stored, it replaces
the currently least relevant third party zone stored on the user's
PS3.
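By way of illustration, such relevance-based replacement of stored third-party zones may be sketched as follows (how relevance itself is scored is left open; the values here are illustrative):

```python
def maybe_install_zone(installed: dict, zone: str, relevance: float,
                       capacity: int) -> dict:
    """Install a new third-party zone if there is room; otherwise evict the
    currently least relevant stored zone, but only when the newcomer is
    more relevant than it."""
    if len(installed) < capacity:
        installed[zone] = relevance
    else:
        least = min(installed, key=installed.get)
        if relevance > installed[least]:
            del installed[least]
            installed[zone] = relevance
    return installed
```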
[0116] Other criteria for relevance may include interests or
installed zones of nominated friends, or the relevance of zones to
games or other media that have been played on the user's PS3.
[0117] Further zones may be admitted according to whether the user
explicitly installs them, either by download or by disk.
[0118] As noted above, within the Home environment users are
represented by avatars. The software implementing the Home
environment enables the customisation of a user's avatar from a
selection of pre-set options in a similar manner to the
customisation of the user's apartment. The user may select gender
and skin tone, and customise the facial features and hair by
combining available options for each. The user may also select from
a wide range of clothing. To support this facility, a wide range of
3D models and textures for avatars are provided. In an embodiment
of the present invention, users may import their own textures to
display on their clothing. Typically, the parameters defining the
appearance of each avatar only occupy around 40 bytes, enabling
fast distribution via the home server when joining a populated Home
environment.
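By way of illustration, a compact parameter record of roughly this size could be packed as a fixed 40-byte structure. The field layout below is purely an assumption for the sketch, not the actual format:

```python
import struct

# Hypothetical 40-byte layout: gender and skin tone (1 byte each), six
# ethnicity blend weights, sixteen facial-feature values and sixteen
# clothing/texture IDs, one byte each (2 + 6 + 16 + 16 = 40).
AVATAR_FORMAT = "<BB6B16B16B"

def pack_avatar(gender, skin_tone, ethnicity, features, clothing):
    """Serialise an avatar's appearance parameters for fast distribution."""
    return struct.pack(AVATAR_FORMAT, gender, skin_tone,
                       *ethnicity, *features, *clothing)
```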
[0119] Each avatar in the home environment can be identified by the
user's ID or nickname, displayed in a bubble above the avatar. To
limit the proliferation of bubbles, these fade into view when the
avatar is close enough that the text they contain could easily be
read, or alternatively when the avatar is close enough to interact
with and/or is close to the centre of the user's viewpoint.
[0120] The avatar is controlled by the user in a conventional
third-person gaming manner (e.g. using the game controller 751),
allowing them to walk around the Home environment. Some avatar
behaviour is contextual; thus for example the option to sit down
will only be available when the avatar is close to a seat. Other
avatar behaviour is available at all times, such as for example the
expression of a selected emotion or gesture, or certain
communication options. Avatar actions are determined by use of the
game controller 751, either directly for actions such as movement,
or by the selection of actions via a pop-up menu, summoned by
pressing an appropriate key on the game controller 751.
[0121] Options available via such a menu include further
modification of the avatar's appearance and clothing, and the
selection of emotions, gestures and movements. For example, the
user can select that their avatar smiles, waves and jumps up and
down when the user sees someone they know in the Home
environment.
[0122] Users can also communicate with each other via their avatars
using text or speech.
[0123] To communicate by text, in an embodiment of the present
invention, messages appear in pop-up bubbles above the relevant
avatar, replacing their name bubble if necessary.
[0124] Referring now also to FIG. 9, to generate a message the user
can activate a pop-up menu 1070 in which a range of preset messages
is provided. These may be complete messages, or alternatively or in
addition may take the form of nested menus, the navigation of which
generates a message by concatenating selected options.
[0125] Alternatively or in addition, a virtual keyboard may be
displayed, allowing free generation of text by navigation with the
game controller 751. If a real keyboard 753 is connected via
Bluetooth, then text may be typed into a bubble directly.
[0126] In an embodiment of the present invention, the lobby also
provides a chat channel hosted by the Home environment server,
enabling conventional chat facilities.
[0127] To communicate by speech, a user must have a microphone,
such as a Bluetooth headset 757, available. Then in an embodiment
of the present invention, either by selection of a speech option by
pressing a button on the game controller 751, or by use of a voice
activity detector within the software implementing the Home
environment, the user can speak within the Home environment. When
speaking, a speech icon may appear above the head of the avatar for
example to alert other users to adjust volume settings if
necessary.
[0128] The speech is sampled by the user's PS3, encoded using a
Code Excited Linear Prediction (CELP) codec (or other known VoIP
applicable codec), and transmitted in a peer-to-peer fashion to the
eight nearest avatars (optionally provided they are within a preset
area within the virtual environment surrounding the user's avatar).
Where more than eight other avatars are within the preset area, one
or more of the PS3s that received the speech may forward it to
other PS3s having respective user avatars within the area that did
not receive the speech, in an ad-hoc manner. To co-ordinate this
function, in an embodiment of the present invention the PS3 will
transmit a speech flag to all PS3s whose avatars are within the
preset area, enabling them to place a speech icon above the
relevant (speaking) avatar's head (enabling their user to identify
the speaker more easily) and also to notify the PS3s of a
transmission. Each PS3 can determine from the relative positions of
the avatars which ones will not receive the speech, and can elect
to forward the speech to the PS3 of whichever avatar they are
closest to within the virtual environment. Alternatively, the PS3s
within the area can ping each other, and whichever PS3 has the
lowest lag with a PS3 that has not received the speech can elect to
forward it.
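By way of illustration, selecting the eight nearest avatars within the preset area as initial speech recipients may be sketched as follows (positions are simple coordinate tuples; the radius value is illustrative):

```python
import math

def nearest_recipients(speaker_pos, avatar_positions, limit=8, radius=30.0):
    """Choose up to `limit` avatars within `radius` of the speaker, nearest
    first; these PS3s receive the encoded speech peer-to-peer. Any further
    avatars in the area rely on ad-hoc forwarding as described above."""
    in_range = sorted((math.dist(speaker_pos, pos), pid)
                      for pid, pos in avatar_positions.items()
                      if math.dist(speaker_pos, pos) <= radius)
    return [pid for _, pid in in_range[:limit]]
```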
[0129] It will be appreciated that the limitation to eight is
exemplary, and the actual number depends upon such factors as the
speech compression ratio and the available bandwidth.
[0130] In an embodiment of the present invention, such speech can
also be relayed to other networks, such as a mobile telephony
network, upon specification of a mobile phone number. This may be
achieved either by routing the speech via the Home environment
server to a gateway server of the mobile network, or by Bluetooth
transmission to the user's own mobile phone. In this latter case,
the mobile phone may require middleware (e.g. a Java applet) to
interface with the PS3 and route the call.
[0131] Thus a user can contact a person on their phone from within
the Home environment. In a similar manner, the user can also send a
text message to a person on their mobile phone.
[0132] In a similar manner to speech, in an embodiment of the
present invention users whose PS3s are equipped with a video camera
such as the Sony .RTM. Eye Toy .RTM. video camera can use a video
chat mode, for example via a pop-up screen, or via a TV or similar
device within the Home environment, such as a Sony Playstation
Portable (PSP) held by the avatar. In this case video codecs are
used in addition to or instead of the audio codecs.
[0133] Optionally, the avatars of users with whom the user has spoken
recently can be highlighted, and those with whom they have spoken
most may be highlighted more prominently, for example by an icon
next to their name, or a level of glow around their avatar.
[0134] Referring back to FIG. 5, when a user selects to activate
the Home environment on their PS3 10, the locally stored software
generates the graphical representation of the Home environment, and
connects to a Home environment server 2010 that assigns the user to
one of a plurality of online Home environments 2021, 2022, 2023,
2024. Only four home environments are shown for clarity.
[0135] It will be understood that potentially many tens of
thousands of users may be online at any one time. Consequently to
prevent overcrowding, the Home environment server 2010 will support
a large plurality of separate online Home environments. Likewise,
there may be many separate Home environment servers, for example in
different countries.
[0136] Once assigned to a Home environment, a PS3 initially uploads
information regarding the appearance of the avatar, and then in an
ongoing fashion provides the Home environment server with
positional data for its own avatar, and receives from the Home
environment server the positional data of the other avatars within
that online Home environment. In practice this positional update is
periodic (for example every 2 seconds) to limit bandwidth, so other
PS3s must interpolate movement. Such interpolation of character
movement is well-known in on-line games. In addition, each update
can provide a series of positions, improving the replication of
movement (with some lag), or improving the extrapolation of current
movement.
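By way of illustration, interpolating an avatar's position between two such periodic updates may be sketched as:

```python
def interpolate(p0, p1, t0, t1, now):
    """Estimate an avatar's position between two periodic positional
    updates (p0 received at time t0, p1 at time t1) by linear
    interpolation, smoothing movement between the 2-second updates."""
    if now >= t1:
        return p1
    a = (now - t0) / (t1 - t0)
    return tuple(x0 + a * (x1 - x0) for x0, x1 in zip(p0, p1))
```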
[0137] In addition, the IP addresses of the other PS3s 2031, 2032,
2033 within that Home environment 2024 are shared so that they can
transmit other data such as speech in a peer-to-peer fashion
between themselves, thereby reducing the required bandwidth of data
handled by the Home environment server.
[0138] To prevent overcrowding within the Home environments, each
will support a maximum of, for example, 64 users.
[0139] The selection of a Home environment to which a user will be
connected can take account of a number of factors, either supplied
by the PS3 and/or known to the Home environment server via a
registration process. These include but are not limited to:
[0140] i. The geographical location of the PS3;
[0141] ii. The user's preferred language;
[0142] iii. The user's age;
[0143] iv. Whether any users within the current user's `friends
list` are in a particular Home environment already;
[0144] v. What game disk is currently within the user's PS3;
[0145] vi. What games have recently been played on the user's
PS3.
[0146] Thus, for example, a Swiss teenager may be connected to a
Home environment on a Swiss server, with a maximum user age of 16
and a predominant language of French. In another example, a user
with a copy of `Revolution` mounted in their PS3 may be connected
to a home environment where a predominant number of other users
also currently have the same game mounted, thereby facilitating the
organisation of multiplayer games. In this latter case, the PS3 10
detects the game loaded within the BD-ROM 430 and informs the Home
environment server 2010. The server then chooses a Home environment
accordingly.
[0147] In a further example, a user is connected to a Home
environment in which three users identified on his friends list can
be found. In this latter example, the friends list is a list of
user names and optionally IP addresses that have been received from
other users whom the given user wishes to meet regularly. Where
different groups of friends are located on different Home
environment servers (e.g. where the current user is the only friend
common to both sets) then the user may either be connected to the
one with the most friends, or given the option to choose.
[0148] Conversely, a user may invite one or more friends to switch
between Home environments and join them. In this case, the user can
view their friends list via a pop-up menu or from within the Home
environment (for example via a screen on the wall or an information
booth) and determine who is on-line. The user may then broadcast an
invite to their friends, either using a peer-to-peer connection or,
if the friend is within a Home environment or the IP address is
unknown, via the Home environment server. The friend can then
accept or decline the invitation to join.
[0149] To facilitate invitation, generally a Home environment
server will assign fewer than the maximum supported number of users
to a specific home environment, thereby allowing such additional
user-initiated assignments to occur. This so-called `soft-limit`
may, for example, be 90% of capacity, and may be adaptive, for
example changing in the early evening or at weekends, when people
are more likely to meet up with friends on-line.
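By way of illustration, such a soft-limit admission check may be sketched as follows (the 90% figure follows the example above):

```python
def can_auto_assign(current_users, hard_cap=64, soft_fraction=0.9):
    """Stop automatic server-side assignment at the soft limit, reserving
    headroom for user-initiated friend invitations up to the hard cap."""
    return current_users < int(hard_cap * soft_fraction)
```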
[0150] Where several friends are within the same Home environment,
in an embodiment of the present invention the map screen may also
highlight those zones in which the friends can currently be found,
either by displaying their name on the map or in association with
the zone name on the side bar.
[0151] Referring now also to FIG. 10, in addition, preferences,
settings, functions of the Home environment and optionally other
functionality may be viewed, adjusted or accessed as appropriate by
use of a virtual Sony .RTM. Playstation Portable .RTM. (PSP)
entertainment device 1072 that can be summoned by use of the game
controller 751 to pop-up on screen. The user can then access these
options, settings and functionality via a PSP cross-media bar 1074
displayed on the virtual PSP. As noted above, the PSP could also be
used as an interface for video chat.
[0152] When a user wishes to leave the Home environment, in
embodiments of the present invention they may do so by selection of
an appropriate key on the game controller 751, by selection of an
exit option from a pop-up menu, by selection of an exit from within
the map screen, by selection of an option via their virtual PSP or
by walking through a master exit within the lobby zone.
[0153] Typically, exiting the Home environment will cause the PS3
10 to return to the PS3 cross media bar.
[0154] Finally, it will be appreciated that additional, separate
environments based upon the Home environment software and
separately accessible from the PS3 cross-media bar are envisaged.
For example, a supermarket may provide a free disk upon which a
Supermarket environment, supported in similar fashion by the Home
environment servers, is provided. Upon selection, the user's avatar
can browse displayed goods within a virtual rendition of the
supermarket (either as 3D models or textures applied to shelves)
and click on them to purchase as described above. In this way
retailers can provide and update online shopping facilities for
their own user base.
[0155] In an embodiment of the present invention, the avatar model
comprises two aspects: a mesh, or skin, defining the
three-dimensional surface upon which textures are placed, and a hierarchy
of so-called `bones` used to modify the vertices of the mesh. It
should be understood that these bones are typically one-dimensional
lines comprising a position, size and orientation, and are
associated with vertices or regions of a mesh and/or optionally
with other bones. As such, they do not correspond to human bones in
the conventional sense.
[0156] The mesh typically has a default design, e.g. for male and
female avatars. In conventional animation, this mesh can be
deformed by so called `blend shapes` (or `morph targets`). Blend
shapes are commonly used for facial animation of game characters or
avatars using known techniques. For example, during the designing
of a game, an artist will typically explicitly design a default
mesh describing the head of a game character or avatar together
with blend shapes that depict facial expressions of that game
character or avatar, such as left eyebrow raised, right eyebrow
raised, mouth saying "oo", mouth saying "ee", and the like. During
animation, the animator assigns a blend weight to each blend shape
so as to specify the degree to which that blend shape will
influence the distortion of the template mesh during the animation.
The rendered position of each vertex consequently corresponds to
the sum of the blend shape offset positions multiplied by their
respective weights plus the vertex position of the template mesh.
Therefore, for example, an animator might choose to create a
smiling avatar with a raised eyebrow by assigning an appropriate
weight to those blend shapes describing those facial
expressions.
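The weighted-sum rule described above (rendered vertex position equals the template vertex position plus the sum of the blend-shape offsets multiplied by their respective weights) may be sketched as:

```python
def blend_vertex(template, offsets, weights):
    """Rendered position of one vertex: template vertex plus the sum of
    each blend shape's offset for that vertex scaled by its blend weight."""
    return tuple(t + sum(w * off[i] for off, w in zip(offsets, weights))
                 for i, t in enumerate(template))
```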
[0157] Alternatively or in addition to blend shapes, animators
traditionally also use skeletal animation. In skeletal animation,
each bone in the skeleton is associated with one or more vertices
of the mesh. Conversely, these vertices can also be associated with
one or more bones, each association being determined by a vertex
weight. Consequently the displacement of the mesh is determined by
the weighted influence of the neighbouring bones.
[0158] Where one bone is linked to another bone, the relationship
is described as a `parent/child` relationship. In this
relationship, the positioning and orientation of the mesh node or
region associated with the child bone is a product of the
positioning, scaling and orientation of both the child and parent
bone.
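By way of illustration, a child bone's world placement as a product of its parent chain may be sketched as follows (simplified to translation plus uniform scale):

```python
def world_placement(bone, skeleton):
    """A bone's world position and scale are the product of its own local
    values and those of every ancestor in the parent/child hierarchy;
    simplified here to translation plus uniform scale."""
    local = skeleton[bone]
    if local["parent"] is None:
        return local["pos"], local["scale"]
    ppos, pscale = world_placement(local["parent"], skeleton)
    pos = tuple(pp + pscale * p for pp, p in zip(ppos, local["pos"]))
    return pos, pscale * local["scale"]
```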
[0159] In conventional so-called `skeletal` animation, this
hierarchy simplifies the positioning of a character frame-by-frame
since, for example, moving a thigh bone will also move a lower leg
bone, resulting in a change in the position of the associated mesh
of the whole leg.
[0160] In the present embodiment, a skeletal model is used to
enable the user to change bone parameters so as to modify, warp and
otherwise deform the default mesh of the avatar, independent of
whether skeletal animation is subsequently used to move the avatar
about.
[0161] Likewise, blend shapes can be used to provide global
modifications to the mesh of the avatar that can then be adjusted
by the skeletal model in a similar fashion, again independently of
whether blend shapes are used in subsequent animation.
[0162] Referring now to FIG. 11, to facilitate a fast customisation
of the user's avatar, sets of blend shapes (or morph targets) are
pre-determined for different ethnicities such as
[0163] Caucasian, Native American, African, Middle Eastern,
Oriental, and Native Australian. As noted above, a blend shape is a
deformed version of a mesh and in the present embodiment is defined
with respect to the vertex points that describe the default male or
female mesh. For example, the vertices of the blend shape could be
defined as the positional offset from the vertices that describe
the default, un-deformed mesh. To generate ethnic characteristics,
the blend shape associated with the brow of a Native Australian,
for example, will be different to that of a Caucasian.
[0164] According to the present embodiment, a user interface
enables the user to select the percentage of each ethnicity they
wish to include in their avatar, thereby providing a weighted
average of the different pre-determined blend shapes for each
ethnicity. Accordingly, a user can interact with the user interface
to adjust the blend weight of each blend shape associated with each
ethnic type so as to quickly and straightforwardly modify the
avatar's basic appearance. For example, an avatar could be 50%
Caucasian and 50% African, or alternatively 40% Native American,
30% Oriental and 30% Native Australian. Optionally predetermined
skin tones may be mixed in the same proportions, but preferably
these are also independently adjustable.
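By way of illustration, converting user-selected percentages into the blend weights of the weighted average may be sketched as:

```python
def ethnicity_blend_weights(percentages):
    """Normalise user-selected ethnicity percentages into blend weights
    summing to 1, giving a weighted average of the pre-determined blend
    shapes for each ethnicity."""
    total = sum(percentages.values())
    return {name: value / total for name, value in percentages.items()}
```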
[0165] Therefore, by modifying the blend weight associated with
each blend shape that describes each ethnic type, a wide range of
familiar face structures can be quickly imposed upon the default
mesh of the avatar, providing an initial source of distinctiveness
for the user.
[0166] It will be understood that different blend shape sets may be
used for male and female avatars as appropriate, and that the
available ethnicities are not limited to the above examples, and
indeed can extend to fictional races and species.
[0167] Weighted blend shapes may also be used to modify the appearance
of the avatar with respect to body type or amount of body fat. For example,
blend shapes for different neck thicknesses could be used or blend
shapes relating to fatter or thinner body types could be used to
modify the basic body mesh describing an avatar.
[0168] In addition to adjusting ethnicity and body type using
weighted blend shapes (where the skeleton is unchanged by altering
the blend weights), various bones and groups of bones can be
adjusted directly via respective user interfaces so as to modify or
further modify the appearance of the avatar. Thus, for example,
bones used to control the jaw-line can be adjusted by selecting
from a menu of adjustable features the option `lower jaw-line` and
then adjusting the bone parameters using the controller 751. As the
controller has at least two sets of directional controls, typically
two sets of parameters can be adjusted together. Referring now to
FIG. 12, these adjustments may comprise, for example, rear jaw
width (left-hand horizontal control) and the angle between the chin
and the rear of the jaw (left-hand vertical control), and chin
prominence (right-hand horizontal control) and degree of
double-chin (right-hand vertical control). These adjustments change
the position, size and orientation of bones associated with the jaw
accordingly, and can also affect any bones coupled to these jaw
bones in the skeletal hierarchy; thus for example the cheek, upper
jaw, nose, ears and neck may each be affected by changes to the jaw
line.
[0169] Notably, the position, scale and orientation of each
individual bone is not necessarily accessible via the user
interface; the parameters input by the user are translated into
parameters of the individual bones by the entertainment device.
This makes the adjustment of the facial features simple for the
user.
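By way of illustration, the translation of one high-level control into adjustments of several underlying bones may be sketched as follows (the per-bone weights and bone names are illustrative, not taken from the disclosure):

```python
def apply_feature_control(control_value, bone_weights, bones):
    """Translate one high-level UI control (e.g. 'chin prominence') into
    offsets on several underlying bones, each scaled by a per-bone
    weight, so the user never manipulates individual bones directly."""
    for bone, weight in bone_weights.items():
        bones[bone]["offset"] += weight * control_value
    return bones
```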
[0170] Other bone or bone groups can be adjusted in a similar
manner to modify facial features, including cheek bones, brow
ridge, eye socket size, lateral eye position, vertical eye
position, nose position, cranial shape, upper face shape and lower
face shape. As noted previously, bones do not correspond to literal
bones and so may also be used for such features as lip size and
shape, nose profile, vertical ear position and ear shape, as well
as for double chins as in the preceding example.
[0171] In an embodiment of the present invention, an additional
naturalistic effect is achieved by allowing modifications to one or
more bones or bone groups to be asymmetric. For example, ears are
rarely exactly matched, both in terms of size and vertical position
on the head.
[0172] Thus, for example, bones used to control the ears can be
adjusted by selecting from a menu of adjustable features the option
`ear disparity` or the like and then adjusting the bone parameters
using the controller 751. Referring now to FIG. 13, for example the
relative imbalance in vertical position of the ears can be
controlled by the left-hand horizontal control of the controller
such that a move to the right raises the right ear and lowers the
left, whilst a move to the left raises the left ear and lowers the
right. Meanwhile, the relative imbalance in ear size can be
controlled for example by the left-hand vertical control of the
controller such that a move upward increases the size of the right
ear and decreases the size of the left, and a move downward
increases the size of the left ear and decreases the size of the
right.
[0173] Other bones or bone groups can be adjusted to create
asymmetry in a similar manner, such as eye socket size, lateral and
vertical eye position, brow tilt, vertical and lateral cheek bone
position, nose position and profile, lateral jaw offset and jaw
tilt. Several of these features can also be further grouped to
provide quick asymmetry adjustments, typically based upon size and
angle disparity with respect to the centre line of the face.
Referring to FIGS. 14A and B, these may include for example upper-
and/or lower-face symmetry, head (cranial) symmetry and overall
face symmetry. In each case, the user input is very simple (e.g.
vertical axis adjustments affect size aspects and horizontal axis
adjustments affect balance aspects of the asymmetry), and are
coupled to parameters of a relevant subset of bones in the skeletal
model by a series of weights or transforms, so that for example
eye, ear and nose changes in a single `face symmetry` adjustment
are not necessarily to the same degree, but are proportionate with
respect to each other so as to create a plausible human face
following the subsequent deformation of the face mesh.
[0174] Again the specific position, scale and orientation of each
individual bone is not necessarily accessible via the user
interface; rather the user controls the degree of asymmetry with
respect to certain parameters of certain bones. Again, this makes
the adjustment of the facial features simple for the user when in
practice there may be nearly 100 bones within the avatar's
face.
[0175] It will be appreciated by a person skilled in the art that
the input convention outlined above is one option, but that other
suitable input conventions and methods are possible; for example,
using the EyeToy.RTM. video camera 756 to adjust one or more bone
or bone groups by gesturing with respect to an on-screen depiction
of the avatar.
[0176] In addition to adjustment of the bones in the avatar (and in
particular in the head and face of the avatar), the user interface
enables the addition of further meshes that may interact with bones
of the avatar or be associated with further bones. These meshes
typically provide accessories for the head and face, such as
spectacles, hair, hats, and headphones, and for exotic races or
amusing modifications, e.g. features such as horns, crests and
trunks.
[0177] Thus, for example, a spectacles mesh may be associated with
ear, cheek and nose bones, so that the position of the spectacles
automatically reacts to the structure of the face. Similarly, hair,
hats and headphones may be associated with ear and cranial bones so
that they stretch to fit the size of head and line up with the ear
position. Other facial features such as beards and moustaches would
be associated with the nose and jaw bones in a manner similar to
the facial mesh over which they are placed or which they
replace.
[0178] By using a skeletal model within the avatar (or at least
within the avatar's head and face) in this fashion, and enabling
bones or bone groups to be parametrically adjusted, and furthermore
enabling the asymmetric modification of such bones or bone groups,
a highly distinctive and naturalistic avatar can be generated.
Moreover, by grouping adjustments in decreasing scales of detail,
from overall ethnicity (which may be determined via weighted blend
shapes), to large scale adjustments (e.g. upper/lower face), to a
selection of intuitive adjustments to certain parameters of certain
bone groups and finally bones, distinctive and individual faces can
be quickly made for the avatar, whilst keeping its facial features
harmonious by virtue of the linkages between bones in the skeletal
model and the vertex weighting of the bones to the mesh.
[0179] By way of example, FIG. 15A shows a default female avatar
adjusted by weighted blend shapes to correspond to 50% Caucasian,
50% Native American. FIGS. 15B and 15C then show this female avatar
after various bone parameters have been adjusted in the manner
described herein, so as to produce two highly distinctive faces by
subsequent manipulation of the skeletal model.
[0180] The avatar may be further customised by the application of
texture layers and texture effects. Different texture layers,
adding wrinkles, freckles, moles, blemishes or scars, may be
applied with varying degrees of transparency via the user interface. Likewise,
skin texture may be introduced and varied by controlling the degree
of bump mapping (or other lighting effect) used in relation to one
or more of these textures, or other dedicated bump-mapping
textures. Thus scars, pock-marking and stubble may be added to
varying degrees.
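The layered texture compositing described above can be illustrated by a minimal alpha-blending sketch. Single grey values stand in for full RGB textures, and the particular layers and opacities are assumptions for illustration:

```python
# Minimal sketch of compositing optional texture layers (e.g. wrinkles,
# freckles) over a base skin value with user-controlled transparency.
# Scalar grey values stand in for full textures.

def blend_layers(base, layers):
    """Alpha-blend each (value, alpha) layer over the base value, in order."""
    out = base
    for value, alpha in layers:
        out = out * (1.0 - alpha) + value * alpha
    return out

skin = 0.8
layers = [(0.4, 0.25),   # wrinkle layer at 25% opacity
          (0.2, 0.10)]   # freckle layer at 10% opacity
print(round(blend_layers(skin, layers), 3))
# → 0.65
```

In practice the same per-pixel operation would run over whole texture maps, with bump mapping applied as a separate lighting pass.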
[0181] Optionally one or more texture layers may be alpha-blended,
bump-mapped or otherwise included on the avatar according to an
`age` slider, thereby providing a quick way to vary the age of the
character. Optionally certain bone parameters may also be coupled
to this slider, for example to cause sunken cheekbones, deeper eye
sockets, a thinner neck and larger ears as age progresses.
Optionally different age sliders or an age profile switch could be
provided to give different age effects; for example another age
profile could result in the character becoming fatter-faced,
red-cheeked and jowly with age, rather than gaunt.
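The coupling of a single `age` slider to both texture opacity and bone parameters may be sketched as below. The coupled parameters and coefficients are assumptions chosen to mirror the "gaunt" profile described above, not values from the application:

```python
# Illustrative sketch of an `age` slider (0.0 to 1.0) driving texture
# opacity and several bone parameters together. Parameter names and
# coefficients are assumptions for illustration.

def apply_age(age):
    return {
        "wrinkle_alpha":    age,               # wrinkle texture fades in
        "cheekbone_depth": -0.3 * age,         # sunken cheekbones
        "eye_socket_depth": -0.2 * age,        # deeper eye sockets
        "neck_width":      -0.1 * age,         # thinner neck
        "ear_scale":        1.0 + 0.15 * age,  # larger ears
    }

print(apply_age(0.5))
```

A different age profile would simply substitute a different coefficient table, for example positive cheek-width and jowl coefficients for the fatter-faced variant.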
[0182] Typically, the skeletal model and the mesh deformations
responsive to the skeletal model and to the weighted blend shapes
are computed by one or more SPEs 110A-H. The resulting modified
mesh is used in combination with one or more textures by the RSX
unit 200 to render the avatar. The modified mesh is further
manipulated to move the avatar, animate the face, perform
lip-syncing and display emotions in substantially the same manner
as would be done with a conventional or default avatar mesh.
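The two deformation stages computed by the SPEs can be sketched in simplified one-dimensional form: weighted blend shapes are applied to the default mesh first, then the skeletal offsets. The two-stage ordering and the scalar vertices are assumptions for illustration:

```python
# Hedged sketch of the deformation pipeline: weighted blend shapes are
# applied to the default mesh, then a skeletal (bone) offset, before the
# result is handed to the renderer. Scalar vertices stand in for 3D ones.

def deform(default_mesh, blend_shapes, weights, bone_offset):
    """Blend-shape the default mesh, then apply a uniform bone offset."""
    mesh = []
    for i, v in enumerate(default_mesh):
        x = v
        for shape, w in zip(blend_shapes, weights):
            x += w * (shape[i] - v)   # weighted blend toward each shape
        mesh.append(x + bone_offset)  # skeletal deformation
    return mesh

default = [0.0, 1.0]
caucasian = [0.2, 1.2]
native_american = [0.4, 0.8]
# 50% / 50% ethnicity blend, as in FIG. 15A, plus a small bone adjustment.
mesh = deform(default, [caucasian, native_american], [0.5, 0.5], 0.1)
print([round(v, 3) for v in mesh])
# → [0.4, 1.1]
```

The same deformed mesh then serves as the base for animation, lip-syncing and emotion display, exactly as a default mesh would.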
[0183] Referring back to FIG. 5, when the user logs into
the Home environment, the PS3 entertainment device transmits
configuration details of the user's avatar including the modified
bone parameters and blend shape weights to the Home environment
server 2010, or optionally in a peer-to-peer fashion to the other
PS3s 2031, 2032, 2033 in the same instance of the Home
environment.
[0184] Likewise, it also receives from either the Home environment
server or the peer PS3s the respective configuration details of the
other avatars within that instance of the Home environment, also
including their respective modified bone parameters and blend shape
weights.
[0185] The PS3 entertainment device 10 then computes the mesh
deformation for each avatar responsive to the relevant blend shape
weights and modified bone parameters, before rendering it according
to the other configuration details received for that avatar.
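The configuration exchange of paragraphs [0183] to [0185] amounts to serialising the bone parameters and blend shape weights on one device and reconstructing them on another. The JSON encoding and field names below are assumptions for illustration; the application does not specify a wire format:

```python
# Sketch of the avatar configuration exchange: bone parameters and blend
# shape weights are serialised, transmitted, and unpacked by a peer, which
# then recomputes the mesh locally. JSON and the field names are assumed.
import json

def pack_avatar_config(bone_params, blend_weights):
    return json.dumps({"bones": bone_params, "blends": blend_weights})

def unpack_avatar_config(payload):
    config = json.loads(payload)
    return config["bones"], config["blends"]

sent = pack_avatar_config({"jaw_width": 1.2},
                          {"caucasian": 0.5, "native_american": 0.5})
bones, blends = unpack_avatar_config(sent)
print(bones["jaw_width"], blends["caucasian"])
# → 1.2 0.5
```

Transmitting only these parameters keeps the payload small; each receiving device bears the cost of recomputing the mesh deformation.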
[0186] In an alternative embodiment, the data describing the
deformed mesh is transmitted rather than the data describing the
blend shape weights and bone parameters, thereby avoiding the need
for the recipient PS3 to compute the effect of the blend shape
weights and bone parameters on the mesh for each of the other
avatars. In this case, the mesh may be transmitted in a
conventional manner or as a set of deviations from a default mesh
(or a default male or female mesh as applicable).
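Transmitting the mesh as deviations from a default mesh, as in the alternative embodiment, may be sketched as a simple per-vertex delta encoding. The flat coordinate list is a simplification for illustration:

```python
# Sketch of the alternative embodiment: the customised mesh is sent as
# per-vertex deviations from a shared default mesh, and the recipient
# adds the deltas back to its own copy of the default.

def encode_deltas(default_mesh, custom_mesh):
    return [c - d for c, d in zip(custom_mesh, default_mesh)]

def decode_deltas(default_mesh, deltas):
    return [d + delta for d, delta in zip(default_mesh, deltas)]

default = [0.0, 1.0, 2.0]
custom = [0.1, 1.0, 1.8]
deltas = encode_deltas(default, custom)
assert decode_deltas(default, deltas) == custom
print([round(d, 3) for d in deltas])
# → [0.1, 0.0, -0.2]
```

Since most vertices of a customised avatar match the default, many deltas are zero, which makes the encoding compact; the recipient avoids recomputing blend shape and bone effects entirely.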
[0187] In an embodiment of the present invention the Home
environment server is therefore operable to receive data
descriptive of respective blend shape weights and modified avatar
skeletons from each remote entertainment device in an instance of
the Home environment, and to transmit this data to the respective
remaining PS3s. Alternatively it is operable to transmit mesh data
for each avatar, either in a conventional mesh format or as
deviations from a default mesh.
[0188] Thus an on-line system comprising a Home environment server
and two or more PS3s allows the users of each PS3 to customise
their own avatar by virtue of a skeletal modifier and blend shape
weight adjuster coupled to a user interface, and to then distribute
these modified avatars within the Home environment via the Home
environment server before each PS3 renders the Home environment
populated with said modified avatars.
[0189] It will be apparent to a person skilled in the art that
embodiments of the present invention are not limited to the Home
environment, but are also applicable to any multiplayer on-line
environment where a plurality of users may encounter each other
through avatars, such as, for example, an on-line game.
[0190] It will be appreciated by a person skilled in the art that
the ethnic changes to the avatar's face generated by weighted blend
shapes may also be achieved by other deformers, such as a lattice
deformer or sculpt deformer, or by suitably placed bones with
different parameter values, or by a combination of the above.
[0191] Referring now to FIG. 16, in an embodiment of the present
invention a method of user identification in an on-line virtual
environment comprises in a first step s10, selecting a user avatar
for use in the on-line virtual environment (for example selecting
an initial gender or character class). Then in a second step s20,
one or more physical properties of one or more skeletal components
of the user avatar are modified via a user interface. In a third
step s30, vertices in a three dimensional mesh representing some or
all of the user avatar are adjusted in response to the position of
one or more skeletal components of the user avatar, such that the
placement of such vertices is responsive to one or more bones of
the modified skeletal model. Then in a fourth step s40, the user
avatar is rendered responsive to the modified user avatar
skeleton.
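The four steps s10 to s40 above can be sketched as a simple pipeline. The avatar representation, property names and the one-dimensional, translation-only skinning are illustrative assumptions, not the claimed implementation:

```python
# Illustrative pipeline for steps s10-s40: select an avatar, modify
# skeletal components, adjust mesh vertices in response, then render.
# Scalar vertices and string-stub rendering are assumptions.

def customise_avatar(gender, bone_mods, default_mesh, vertex_bones):
    # s10: select a user avatar for the on-line virtual environment.
    avatar = {"gender": gender, "mesh": list(default_mesh)}
    # s20: modify physical properties of skeletal components via the UI.
    bone_offsets = {bone: 0.0 for bone in set(vertex_bones)}
    bone_offsets.update(bone_mods)
    # s30: adjust mesh vertices in response to the skeletal components.
    avatar["mesh"] = [v + bone_offsets[b]
                      for v, b in zip(avatar["mesh"], vertex_bones)]
    # s40: render responsive to the modified skeleton (stubbed as a string).
    avatar["render"] = f"rendered {gender} avatar, mesh {avatar['mesh']}"
    return avatar

a = customise_avatar("female", {"jaw": 0.2},
                     default_mesh=[0.0, 1.0], vertex_bones=["jaw", "brow"])
print(a["mesh"])
# → [0.2, 1.0]
```

Only the vertex weighted to the modified bone moves; the rest of the mesh is unchanged, mirroring the selective placement control of step s30.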
[0192] It will be apparent to a person skilled in the art that
variations in the above method corresponding to operation of the
various embodiments of the apparatus disclosed herein are
considered within the scope of the present invention, including but
not limited to:

[0193] transmitting data descriptive of the modified user avatar
skeleton to one or more remote entertainment devices;

[0194] receiving data descriptive of respective modified avatar
skeletons corresponding to one or more respective remote
entertainment devices;

[0195] rendering a plurality of respective avatars corresponding to
one or more respective remote entertainment devices, the rendering
of each avatar being responsive to its respective modified avatar
skeleton;

[0196] modifying one or more physical properties of one or more
skeletal components of the user avatar asymmetrically;

[0197] modifying vertex positions of the three dimensional mesh in
accordance with one or more blend shapes; and

[0198] selecting additional skeletal components for incorporation
into the user avatar.
[0199] It will be appreciated by a person skilled in the art that
in embodiments of the present invention, elements of the method of
user identification in an on-line virtual environment and
corresponding skeletal modelling, skeletal modification, blend
shape weighting, user input and rendering means of the apparatus
may be implemented in any suitable manner.
[0200] Thus the adaptation of existing parts of a conventional
equivalent entertainment device may be implemented in the form of a
computer program product comprising processor implementable
instructions stored on a data carrier such as a floppy disk,
optical disk, hard disk, PROM, RAM, flash memory or any combination
of these or other storage media, or transmitted via data signals on
a network such as an Ethernet, a wireless network, the Internet, or
any combination of these or other networks, or realised in hardware
as an ASIC (application specific integrated circuit) or an FPGA
(field programmable gate array) or other configurable circuit
suitable for use in adapting the conventional equivalent device.
* * * * *