U.S. patent application number 11/206664, for dynamic wrinkle mapping, was published by the patent office on 2006-02-02.
This patent application is currently assigned to Pixar. Invention is credited to John Anderson, Rick Sayre, Ferdi Scheepers.
United States Patent Application 20060022991
Kind Code: A1
Scheepers; Ferdi; et al.
February 2, 2006
Dynamic wrinkle mapping
Abstract
A method for a computer system includes retrieving a plurality
of base poses for an object, retrieving a plurality of base texture
maps associated with the plurality of base poses, receiving a
desired pose for the object, determining a plurality of
coefficients associated with the plurality of base poses in
response to the desired pose and to the plurality of base poses,
and determining a desired texture map in response to the plurality
of coefficients and to the plurality of base texture maps.
Inventors: Scheepers; Ferdi; (Pleasant Hill, CA); Anderson; John; (San Anselmo, CA); Sayre; Rick; (Kensington, CA)
Correspondence Address: TOWNSEND AND TOWNSEND AND CREW, LLP, TWO EMBARCADERO CENTER, EIGHTH FLOOR, SAN FRANCISCO, CA 94111-3834, US
Assignee: Pixar (Emeryville, CA)
Family ID: 35239028
Appl. No.: 11/206664
Filed: August 17, 2005
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
10841219 | May 6, 2004 |
11206664 | Aug 17, 2005 |
Current U.S. Class: 345/582
Current CPC Class: G06T 15/04 20130101; G06T 13/40 20130101
Class at Publication: 345/582
International Class: G09G 5/00 20060101 G09G005/00
Claims
1-15. (canceled)
16. A rendering apparatus comprises: a memory configured to store a
first plurality of component poses for a three-dimensional object,
wherein the memory is also configured to store a first plurality of
component two-dimensional images associated with the first
plurality of component poses; and a processor coupled to the
memory, wherein the processor is configured to receive a
specification of a desired pose for the three-dimensional object,
wherein the processor is configured to determine a weighted
combination of a second plurality of component poses from the first
plurality of component poses to approximately form the desired
pose, wherein the processor is configured to form a desired
two-dimensional image from a weighted combination of a second
plurality of two-dimensional images from the first plurality of
two-dimensional images, wherein the second plurality of
two-dimensional images are associated with the second plurality of
component poses.
17. The rendering apparatus of claim 16 wherein the weighted
combination of the second plurality of component poses comprises a
first coefficient associated with a first component pose, and a
second coefficient associated with a second component pose; and
wherein the weighted combination of the second plurality of
two-dimensional images comprises the first coefficient associated
with a first two-dimensional image, and the second coefficient
associated with a second two-dimensional image.
18. The rendering apparatus of claim 16 wherein each of the second
plurality of two-dimensional images comprises data selected from
the group: texture map, displacement map.
19. The rendering apparatus of claim 16 wherein the processor is
also configured to initiate a rendering pipeline process to
determine a plurality of shading values of surfaces of the
three-dimensional object in the desired pose in response to the
desired two-dimensional image; and wherein the processor is
configured to create an output image in response to the plurality
of shading values of the surfaces of the three-dimensional
object.
20. The rendering apparatus of claim 19 wherein the processor is
configured to determine the weighted combination of the second
plurality of component poses from the first plurality of component
poses to approximately form the desired pose within the rendering
pipeline process.
21. The rendering apparatus of claim 16 wherein the first plurality
of component poses comprise from 6 to 10 poses.
22. The rendering apparatus of claim 16 wherein the first plurality
of two-dimensional images comprises more than 2 two-dimensional
images.
23. The rendering apparatus of claim 16 wherein the memory
comprises random-access memory.
24. A computer system comprises: a memory configured to store a
geometric representation of an object, a specification of a first
pose for the object, and a first texture map associated with the
object in the first pose; and a processor coupled to the memory,
wherein the processor is configured to form a first image including
a rendering of at least a portion of the object in the first pose,
in response to the geometric representation of the object, the
specification of the first pose, and the first texture map; wherein
the first texture map associated with the object in the first pose
comprises a weighted combination of a first texture map associated
with a first base pose for the object and a second texture map
associated with a second base pose for the object, wherein the
weighted combination of the first texture map and the second
texture map is determined in response to a first weight and a
second weight; wherein the first weight and the second weight are
determined in response to decomposing the first pose for the object
to a weighted combination of the first base pose for the object and
the second base pose for the object, wherein the weighted
combination comprises the first weight associated with the first
base pose and the second weight associated with the second base
pose; and wherein the memory is also configured to store the first
image.
25. The computer system of claim 24 wherein the memory is also
configured to store a specification of a second pose for the
object, and a second texture map associated with the object in the
second pose; wherein the processor is also configured to form a
second image including a rendering of at least another portion of
the object in the second pose, in response to the geometric
representation of the object, the specification of the second pose,
and the second texture map; wherein the second texture map
associated with the object in the second pose comprises a weighted
combination of a third texture map associated with a third base
pose for the object and a fourth texture map associated with a
fourth base pose for the object, wherein the weighted combination
of the third texture map and the fourth texture map is determined
in response to a third weight and a fourth weight; wherein the
third weight and the fourth weight are determined in response to
decomposing the second pose for the object to a weighted
combination of the third base pose for the object and the fourth
base pose for the object, wherein the weighted combination
comprises the third weight associated with the third base pose and
the fourth weight associated with the fourth base pose; and wherein
the memory is also configured to store the second image.
26. The computer system of claim 25 wherein the first texture map
associated with the object in the first pose includes
representations of microscale geometry not represented in the
second texture map associated with the object in the second
pose.
27. The computer system of claim 25 wherein the second texture map
comprises a weighted combination of the third texture map
associated with the third base pose for the object, the fourth
texture map associated with the fourth base pose for the object,
and the first texture map associated with the first base pose for
the object, wherein the weighted combination of the third texture
map, the fourth texture map, and the first texture map is
determined in response to the third weight, a fourth weight, and a
fifth weight; and wherein the third weight, the fourth weight, and
the fifth weight are determined in response to decomposing the second
pose for the object to a weighted combination of the third base
pose for the object, the fourth base pose for the object, and the
first base pose for the object, wherein the weighted combination
comprises the third weight associated with the third base pose, the
fourth weight associated with the fourth base pose, and the fifth
weight associated with the first base pose.
28. The computer system of claim 24 wherein the processor is also
configured to store a representation of the first image on a
removable media; wherein the removable media is selected from a
group consisting of: film stock, DVD, magnetic media, print media,
optical storage media.
29. The computer system of claim 24 wherein the
weighted combination of the first texture map and the second
texture map comprises the first texture map weighted by the first
weight and the second texture map weighted by the second weight,
and wherein the weighted combination of the first base pose and the
second base pose comprises the first base pose weighted by the
first weight and the second base pose weighted by the second
weight.
30. A computer system comprises: a memory configured to store a
first texture map and an associated first base pose for an object,
a second texture map and an associated second base pose for the
object, a specification of a first desired pose for the object, and
a specification of the object; and a processor coupled to the
memory; wherein the memory also comprises: code that directs the
processor to decompose the first desired pose for the object into a
weighted combination of the first base pose and the second base
pose, wherein the weighted combination comprises a first weight for
the first base pose and a second weight for the second base pose;
and code that directs the processor to determine the texture map
associated with the object in the first desired pose in response to
a weighted combination of the first texture map and the second
texture map, wherein the weighted combination of the first texture
map and the second texture map is determined in response to the
first weight and the second weight.
31. The computer system of claim 30 wherein the memory is also
configured to store the texture map associated with the first
desired pose for the object.
32. The computer system of claim 31 wherein the memory is also
configured to store code that directs the processor to render an
image including the object in the first desired pose in response to
the texture map associated with the object in the first desired
pose.
33. The computer system of claim 30 wherein the memory is also
configured to store a third texture map and an associated third
base pose for an object, and a specification of a second desired
pose for the object; and wherein the memory also comprises: code
that directs the processor to decompose the second pose for the
object into a weighted combination of the first base pose and the
third base pose, wherein the weighted combination comprises a third
weight for the third base pose and a fourth weight for the first
base pose; and code that directs the processor to determine the
texture map associated with the object in the second desired pose
in response to a weighted combination of the first texture map and
the third texture map, wherein the weighted combination of the
first texture map and the third texture map is determined in
response to the third weight and the fourth weight.
34. The computer system of claim 33 wherein the memory is also
configured to store code that directs the processor to render an
image including at least a portion of the object in the second desired pose
in response to the texture map associated with the object in the
second desired pose.
35. The computer system of claim 34 wherein the texture map
associated with the object in the first desired pose includes
representations of geometry not represented in the texture map
associated with the object in the second desired pose.
Description
CROSS-REFERENCES TO RELATED APPLICATIONS
[0001] This application is a divisional patent application of U.S.
application Ser. No. 10/841,219, filed May 6, 2004, having the same
named inventors.
COPYRIGHT NOTICE
[0002] A portion of the disclosure of this patent document contains
material that is subject to copyright protection. The copyright
owner has no objection to the facsimile reproduction by anyone of
the patent document or the patent disclosure as it appears in the
Patent and Trademark Office patent file or records, but otherwise
reserves all copyright rights whatsoever.
BACKGROUND OF THE INVENTION
[0003] The present invention relates to computer animation. More
particularly, the present invention relates to techniques and
apparatus for rendering of more natural-looking wrinkles or creases
on a posed object.
[0004] Throughout the years, movie makers have often tried to tell
stories involving make-believe creatures, far away places, and
fantastic things. To do so, they have often relied on animation
techniques to bring the make-believe to "life." Two of the major
paths in animation have traditionally included drawing-based
animation techniques and stop motion animation techniques.
[0005] Drawing-based animation techniques were refined in the
twentieth century, by movie makers such as Walt Disney and used in
movies such as "Snow White and the Seven Dwarfs" (1937) and
"Fantasia" (1940). This animation technique typically required
artists to hand-draw (or paint) animated images onto a transparent
media or cels. After painting, each cel would then be captured or
recorded onto film as one or more frames in a movie.
[0006] Stop motion-based animation techniques typically required
the construction of miniature sets, props, and characters. The
filmmakers would construct the sets, add props, and position the
miniature characters in a pose. After the animator was happy with
how everything was arranged, one or more frames of film would be
taken of that specific arrangement. Stop motion animation
techniques were developed by movie makers such as Willis O'Brien
for movies such as "King Kong" (1933). Subsequently, these
techniques were refined by animators such as Ray Harryhausen for
movies including "Mighty Joe Young" (1948) and "Clash of the
Titans" (1981).
[0007] With the widespread availability of computers in the latter
part of the twentieth century, animators began to rely upon
computers to assist in the animation process. This included using
computers to facilitate drawing-based animation, for example, by
painting images, by generating in-between images ("tweening"), and
the like. This also included using computers to augment stop motion
animation techniques. For example, physical models could be
represented by virtual models in computer memory, and
manipulated.
[0008] One of the pioneering companies in the computer-aided
animation (CAA) industry was Pixar. Pixar developed both computing
platforms specially designed for CAA, and animation software now
known as RenderMan.RTM.. RenderMan.RTM. was particularly well
received in the animation industry and recognized with two Academy
Awards.RTM.. RenderMan.RTM. software is used to convert graphical
specifications of objects into one or more images.
This technique is known generally in the industry as rendering.
[0009] Previously, some methods were proposed to graphically
specify the appearance of fine wrinkles and/or fine creases on
objects for the rendering process. One method was to
fully mathematically define where the fine wrinkles and creases
would appear on the object and to fully physically simulate the
three-dimensional microscale geometry of the wrinkles. Another
method was to dynamically adjust surface geometry based upon
underlying object models, for example, skin on muscle models.
[0010] Drawbacks to these approaches for specifying fine wrinkles
and creases included that the mathematical definition of such fine
features for an object would require a large number of detailed
surfaces that would be difficult to represent. Another drawback
included that the simulation of the microscale geometry or
performing a surface mapping based upon an underlying model would be
computationally prohibitive. Yet another drawback was that if
rendered, the specified features may not appear natural in the full
and often extreme range of poses of the three-dimensional
object.
[0011] Another method that was used to specify wrinkles included
mapping of a two-dimensional image (texture maps) onto a
three-dimensional object surface. Using these techniques, the
wrinkles/creases are represented by a two-dimensional map, where
the intensity of pixels in the map specify "peaks" and "valleys" of
the surface. Another method, although not necessarily in the prior
art, decomposed the texture map into directional-based texture
maps. Next, at render time, the pose of the object is also
decomposed into directional-based poses. Finally, the texture map
is formed by combining the directional-based texture-maps and the
directional-based poses.
[0012] Drawbacks to this approach for rendering wrinkles included
that only one texture map would be used to specify wrinkles for all
poses of the three-dimensional object. Similar to the technique
described above, wrinkles that may appear natural in one character
pose, may be inappropriate and unnatural looking in another
character pose. For example, directional components often fade
visually on and off in unnatural ways. Additional drawbacks include
that the results are unintuitive and that the user cannot control
the appearance and disappearance of wrinkles in arbitrary
poses.
[0013] In light of the above, what is needed are improved
techniques for users to specify wrinkles and creases for objects
without the drawbacks described above.
BRIEF SUMMARY OF THE INVENTION
[0014] The present invention relates to computer animation. More
particularly, the present invention relates to novel methods and
apparatus for dynamically specifying natural-looking fine wrinkles
and/or fine creases on an object in arbitrary poses.
[0015] In various embodiments, a user typically inputs a series of
poses for an object and a series of texture maps, specifying
wrinkles, and the like, associated with each pose. Next, based upon
the poses and texture maps, common pose elements and common
elements from the texture map are identified and stored as a series
of base poses and associated base texture maps. Later, during
render time, a desired pose for the object is received and mapped
to a weighted combination of the base poses. Next, a weighted
combination of the base texture maps is formed as the texture map
for the object in the desired pose. The object is then rendered
using the formed texture map in the desired pose.
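The render-time procedure described above can be sketched numerically. The following is a minimal illustration, assuming poses are flattened vectors of animation-variable values and texture maps are small arrays; the least-squares fit and all names here are assumptions for illustration, not details taken from the patent.

```python
import numpy as np

def blend_textures(desired_pose, base_poses, base_textures):
    """Map a desired pose to a weighted combination of base poses,
    then form the texture map as the same weighted combination of
    the base texture maps."""
    A = np.stack(base_poses, axis=1)                # (n_avars, n_bases)
    # Least-squares coefficients that best reproduce the desired pose.
    coeffs, *_ = np.linalg.lstsq(A, desired_pose, rcond=None)
    # Apply the same coefficients to the base texture maps.
    texture = sum(c * t for c, t in zip(coeffs, base_textures))
    return coeffs, texture

# Two toy base poses (4 animation variables each) and 2x2 texture maps.
base_poses = [np.array([1.0, 0.0, 0.0, 0.0]),
              np.array([0.0, 1.0, 0.0, 0.0])]
base_textures = [np.full((2, 2), 1.0),    # wrinkled texture
                 np.full((2, 2), 0.5)]    # neutral texture

desired = np.array([0.7, 0.3, 0.0, 0.0])
coeffs, tex = blend_textures(desired, base_poses, base_textures)
print(np.round(coeffs, 3))   # weights of the base poses
print(np.round(tex, 3))      # blended texture map
```

In an actual renderer the decomposition would run inside the rendering pipeline; here it is a standalone function for clarity.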
[0016] In various embodiments, the process for specifying wrinkles
for an object may be an iterative process. For example, after
viewing the rendered results, the user may decide to modify the
wrinkle behavior of the object in a pose. To do this, the user
"trains" or creates a new texture map for the object in the new
pose. Next, the above process is repeated to define a new series of
base poses and associated base texture maps. Subsequently, when the
object is in the new pose, the texture map is substantially similar
to the new texture map.
[0017] According to one aspect of the invention, a method for a
computer system is disclosed. One technique includes retrieving a
plurality of base poses for an object, and retrieving a plurality
of base texture maps associated with the plurality of base poses.
Processes may include receiving a desired pose for the object,
determining a plurality of coefficients associated with the
plurality of base poses in response to the desired pose and to the
plurality of base poses, and determining a desired texture map in
response to the plurality of coefficients and to the plurality of
base texture maps.
[0018] According to another aspect of the invention, a computer
program product for a computer system including a processor is
described. The computer code may include code that directs the
processor to determine a plurality of base poses for an object,
code that directs the processor to determine a plurality of base
texture maps associated with the plurality of base poses, and code
that directs the processor to determine the desired pose for the
object. The code may also include code that directs the processor
to determine a weighted combination of the plurality of base poses
for the object to represent the desired pose for the object,
wherein the weighted combination comprises a plurality of
coefficients, and code that directs the processor to form a desired
texture map by forming a weighted combination of the plurality of
base texture maps in response to the plurality of coefficients and
the plurality of base texture maps. The codes typically reside on a
tangible media such as a magnetic media, optical media,
semiconductor media, and the like.
[0019] According to yet another aspect of the invention, a rendering
apparatus is discussed. The apparatus may include a memory
configured to store a first plurality of component poses for a
three-dimensional object, wherein the memory is also configured to
store a first plurality of component two-dimensional images
associated with the first plurality of component poses. The system
may also include a processor coupled to the memory, wherein the
processor is configured to receive a specification of a desired
pose for the three-dimensional object, wherein the processor is
configured to determine a weighted combination of a second
plurality of component poses from the first plurality of component
poses to approximately form the desired pose, wherein the processor
is configured to form a desired two-dimensional image from a
weighted combination of a second plurality of two-dimensional
images from the first plurality of two-dimensional images, wherein
the second plurality of two-dimensional images are associated with
the second plurality of component poses.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] In order to more fully understand the present invention,
reference is made to the accompanying drawings. Understanding that
these drawings are not to be considered limitations in the scope of
the invention, the presently described embodiments and the
presently understood best mode of the invention are described with
additional detail through use of the accompanying drawings in
which:
[0021] FIG. 1 illustrates a block diagram of a rendering system
according to one embodiment of the present invention;
[0022] FIGS. 2A-B illustrate a block diagram of a process according
to an embodiment of the present invention;
[0023] FIG. 3 illustrates a block diagram of a process according to
an embodiment of the present invention;
[0024] FIGS. 4A-C illustrate an example of an embodiment of the
present invention; and
[0025] FIGS. 5A-D illustrate examples of rendered wrinkles.
DETAILED DESCRIPTION OF THE INVENTION
[0026] FIG. 1 is a block diagram of typical computer rendering
system 100 according to an embodiment of the present invention.
[0027] In the present embodiment, computer system 100 typically
includes a monitor 110, computer 120, a keyboard 130, a user input
device 140, a network interface 150, and the like.
[0028] In the present embodiment, user input device 140 is
typically embodied as a computer mouse, a trackball, a track pad,
wireless remote, and the like. User input device 140 typically
allows a user to select objects, icons, text and the like that
appear on the monitor 110.
[0029] Embodiments of network interface 150 typically include an
Ethernet card, a modem (telephone, satellite, cable, ISDN),
(asynchronous) digital subscriber line (DSL) unit, and the like.
Network interface 150 is typically coupled to a computer network
as shown. In other embodiments, network interface 150 may be
physically integrated on the motherboard of computer 120, may be a
software program, such as soft DSL, or the like.
[0030] Computer 120 typically includes familiar computer components
such as a processor 160, and memory storage devices, such as a
random access memory (RAM) 170, disk drives 180, and system bus 190
interconnecting the above components.
[0031] In one embodiment, computer 120 is a PC compatible computer
having multiple microprocessors such as Xeon.TM. microprocessor
from Intel Corporation. Further, in the present embodiment,
computer 120 typically includes a UNIX-based operating system.
[0032] RAM 170 and disk drive 180 are examples of tangible media
for storage of data, audio/video files, computer programs,
embodiments of the herein described invention including scene
descriptors, object data files, shader descriptors, a rendering
engine, output image files, texture maps, displacement maps, object
pose data files, and the like. Other types of tangible media
include floppy disks, removable hard disks, optical storage media
such as CD-ROMS and bar codes, semiconductor memories such as flash
memories, read-only-memories (ROMS), battery-backed volatile
memories, networked storage devices, and the like.
[0033] In the present embodiment, computer system 100 may also
include software that enables communications over a network such as
the HTTP, TCP/IP, RTP/RTSP protocols, and the like. In alternative
embodiments of the present invention, other communications software
and transfer protocols may also be used, for example IPX, UDP or
the like.
[0034] FIG. 1 is representative of computer rendering systems
capable of embodying the present invention. It will be readily
apparent to one of ordinary skill in the art that many other
hardware and software configurations are suitable for use with the
present invention. For example, the use of other microprocessors
is contemplated, such as Pentium.TM. or Itanium.TM.
microprocessors; Opteron.TM. or AthlonXP.TM. microprocessors from
Advanced Micro Devices, Inc; PowerPC G3.TM., G4.TM. microprocessors
from Motorola, Inc.; and the like. Further, other types of
operating systems are contemplated, such as Windows.RTM. operating
system such as WindowsXP.RTM., WindowsNT.RTM., or the like from
Microsoft Corporation, Solaris from Sun Microsystems, LINUX, UNIX,
MAC OS from Apple Computer Corporation, and the like.
[0035] FIGS. 2A-B illustrate a block diagram of a process according
to an embodiment of the present invention. More specifically, FIGS.
2A-B illustrate a process of defining and processing of texture
maps.
[0036] Initially, a user opens a model of a three-dimensional
object in a working environment, step 200. In typical embodiments,
the model of the object is defined by another user such as an
object modeler in an object creation environment. The model of the
object is typically a geometric description of surfaces of the
object and includes a number of animation variables (avars) that
are used to control or pose the object.
[0037] Next, the user specifies a pose for the object, step
210. In embodiments of the present invention, the user specifies
the pose by manually entering values for the animation variables or
automatically via manipulation of keypoints associated with the
animation variables. In some embodiments, the pose is considered an
"extreme" pose, or a "reference" pose.
[0038] In some embodiments, based upon this pose, one or more views
of the object are then specified, step 220. In various embodiments,
a view may include a specification of a camera position and
orientation relative to the object in space. For example, a view
may include a default view such as a "front view" camera or a "top
view" camera; a perspective view, an isometric view, and the like.
Additionally, the view camera characteristics may be determined by
the user. Next, one or more two-dimensional images of the object
associated with the views are generated and stored, step 230. In
the present embodiment, the two-dimensional images are images
"taken" with the view camera(s) specified above.
[0039] In other embodiments of the present invention, default views
of the three-dimensional object in a default or "neutral" pose are
specified; accordingly, step 220 may not be performed. In various
embodiments, a two-dimensional image associated with a default view
of the object in a neutral pose is computed off-line, and may not
be part of a rendering process pipeline.
[0040] In FIGS. 2A-B, the next step includes the user using a
conventional two-dimensional paint-type program to "paint" a
texture map, step 240. In embodiments of the present invention, any
conventional paint program such as Adobe Photoshop may be used for
"painting" the image.
[0041] In some embodiments of the present invention, the
two-dimensional image formed in step 230 is opened in the paint
program, and the user "paints" the image in an overlay layer. In
other embodiments, a default view of the three-dimensional object
in a "neutral" pose is opened in the paint program, and again the
user "paints" the image in an overlay layer.
[0042] The values of the overlay image may represent any number of
characteristics of the surface of the object. For example, the
overlay image may represent a surface base color, a texture map or
displacement map (representing surface roughness, surface wrinkles,
surface creases, and the like), or other type of surface effect. As
merely an example of embodiments, the overlay represents
wrinkle-type data, with data values from 0 to 1. Specifically,
where the overlay data includes values above 0.5 and up to 1, these
areas indicate upward protrusions from the surface; where the
overlay data includes values from 0 up to just below 0.5, these
areas indicate indentations into the surface; and where the overlay
data is 0.5, the surface is unperturbed. In other embodiments,
different ways to represent protrusions and indentations for
wrinkles, cracks, or the like with a two-dimensional overlay image
are contemplated.
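A minimal sketch of the overlay convention just described, assuming a simple linear mapping from overlay values to signed displacements (the mapping and the `scale` parameter are illustrative assumptions):

```python
import numpy as np

def overlay_to_displacement(overlay, scale=1.0):
    """Map overlay values in [0, 1] to signed displacements:
    0.5 is unperturbed, values above 0.5 protrude from the
    surface, and values below 0.5 indent into it."""
    return (np.asarray(overlay, dtype=float) - 0.5) * 2.0 * scale

# Toy 2x2 overlay: unperturbed, full peak, full valley, half peak.
overlay = np.array([[0.5, 1.0],
                    [0.0, 0.75]])
print(overlay_to_displacement(overlay))
```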
[0043] In the present embodiment, the pose for the object, and the
overlay image are associated and stored in memory, step 250.
[0044] Next, the process described above typically repeats at least
once for a different pose of the three-dimensional object, step
260. In embodiments of the present invention, for a full range of
facial animation poses, the inventors believe that at least seven
to eight different poses and associated overlay images (texture
maps) are desired. To better capture wrinkle behavior for a full
range of facial poses, from eight to twelve different poses and
associated overlay images are believed to be more desirable.
Additionally, for a full range of facial poses, twelve to fifteen
or more different poses and associated overlay images are also
desirable for embodiments of the present invention. For embodiments
where only a portion of facial animation poses are to be "wrinkled,"
fewer poses and overlay images are required, e.g. adding wrinkles
to only the eyes. When an object is symmetric, fewer poses and
overlay images may be used, taking into account the symmetry, e.g.
wrinkles associated with raising a right eyebrow can be used to
specify wrinkles for raising a left eyebrow. In other embodiments,
specifying wrinkles of non-facial animation objects may also
require fewer poses and overlay images (texture maps). For example,
to specify wrinkles of elbows, as few as two or three poses and
overlay images can be used.
[0045] As the result of the above process, a number of "extreme"
poses and associated texture maps are specified. Next, in some
embodiments, the specified texture maps are reversed-mapped to the
object in a "neutral" pose, step 270. In various embodiments, this
may be done by projecting the two-dimensional texture maps back
upon the respective associated "extreme" poses; "un-posing" the
object from the "extreme" pose back to the "neutral" pose; and then
creating one or more two-dimensional views of the object in the
neutral pose. In another embodiment, the reverse-map may be
performed in two-dimensions by mapping a series of key points in
the overlay image to key points in a similar view of the object in
the neutral pose. In other embodiments of the present invention,
step 270 is not required when the user paints upon a view of the
object in the "neutral" pose, as was previously described.
[0046] In the present embodiments, a principal component analysis
is performed on the extreme poses to determine a number of "base"
poses for the object. In various implementations, this process
includes first determining the most common characteristic, or
principal component, of the object from the extreme poses of the
object, step 275. For example, the most common feature for a
face in a number of extreme poses may be a raised eyebrow.
[0047] Next, the process includes defining a "base" pose as the
three-dimensional object posed with the most common characteristic,
step 280. The base pose is typically a weighted combination of the
extreme poses. Continuing the example above, the first base pose
would be a face with a raised eyebrow. In this embodiment, the
associated base texture map is also a weighted combination of the
texture maps associated with the extreme poses, using the same
weights, step 290. For example, if a base pose is a 70% weight of a
first extreme pose and a 30% weight of a second extreme pose, the
associated base texture map would be approximately a 70% weight of
the first extreme texture map and 30% weight of the second extreme
texture map.
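The same-weights blend described above can be sketched as follows; the 2.times.2 arrays are hypothetical grayscale displacement values, not data from this application:

```python
import numpy as np

# Hypothetical 2x2 grayscale texture maps for two extreme poses.
extreme_map_1 = np.array([[0.9, 0.1],
                          [0.2, 0.8]])
extreme_map_2 = np.array([[0.3, 0.7],
                          [0.6, 0.4]])

# If a base pose is a 70% weight of extreme pose 1 and a 30% weight of
# extreme pose 2, the associated base texture map uses the same weights.
base_map = 0.7 * extreme_map_1 + 0.3 * extreme_map_2
```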
[0048] Finally, in this embodiment, the principal component (base
pose) is removed from the extreme poses, and the associated base
texture map is also removed from the associated extreme pose
texture maps, step 295. The process then repeats to identify the
next most common characteristic of the poses, etc., step 300. In
various embodiments, the number of base poses determined may be the
same as the number of extreme poses, and in other embodiments, the
number of base poses may be less. For example, from eight extreme
poses, six base poses may be determined; from twelve extreme poses,
eight base poses may be determined; from fifteen extreme poses, ten
base poses may be determined; and the like.
[0049] As a result of the above process, the base poses and the
corresponding associated base texture maps determined above are
stored, step 310. In other embodiments, the principal component
analysis may also be described as an eigen analysis. Other methods
for performing the decomposition of the extreme poses and texture
maps into base poses and base texture maps are contemplated.
[0050] In embodiments of the present invention, the principal
component analysis is performed as follows:
[0051] The pose inputs are defined as:

[0052] the rest pose $\overline{P}_j$, $j = 0,\ldots,v-1$ ($v \equiv$ nGeomValues), and

[0053] $n$ extreme poses $\overline{P}_{ij}$, $i = 0,\ldots,n-1$ ($n \equiv$ nSamples).

[0054] The corresponding wrinkle map inputs are defined as:

[0055] the rest map $\overline{W}_l$, $l = 0,\ldots,d-1$ ($d \equiv$ nDispValues), and

[0056] $n$ extreme maps $\overline{W}_{il}$.

[0057] Accordingly, the following inputs are determined:

[0058] $n$ delta poses $P_{ij} = \overline{P}_{ij} - \overline{P}_j$, and

[0059] $n$ delta maps $W_{il} = \overline{W}_{il} - \overline{W}_l$.

[0060] Next, solve for $C_i^k$ and $F_j^k$, $k = 0,\ldots,m-1$ with $m < n$, such that $P_{ij} \approx \sum_k C_i^k F_j^k$; then $F_j^k = \sum_i P_{ij} C_i^k$, where $C$ represents curves (time) and $F$ represents shapes (space).

[0061] Similarly, since poses motivate wrinkles, $W_{il} \approx \sum_k C_i^k D_l^k$; therefore solve $D_l^k = \sum_i W_{il} C_i^k$, where the $D_l^k$ represent wrinkle displacement variations.
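A minimal NumPy sketch of this decomposition follows. It obtains the coefficients $C_i^k$ from an SVD of the delta-pose matrix, which is one standard way to realize a principal component analysis; the function and variable names are illustrative, not from the application:

```python
import numpy as np

def decompose(extreme_poses, extreme_maps, rest_pose, rest_map, m):
    """Decompose n extreme poses and maps into m base poses and maps.

    extreme_poses: (n, v) array of pose samples (v = nGeomValues).
    extreme_maps:  (n, d) array of wrinkle maps (d = nDispValues).
    Returns (C, F, D): per-pose coefficients C (n, m), the base-pose
    basis F (m, v), and the wrinkle-displacement basis D (m, d).
    """
    P = extreme_poses - rest_pose        # n delta poses P_ij
    W = extreme_maps - rest_map          # n delta maps  W_il
    # One way to obtain C: the left singular vectors of the delta-pose
    # matrix, so that P ~= C @ F with mutually orthogonal rows F^k.
    U, _, _ = np.linalg.svd(P, full_matrices=False)
    C = U[:, :m]                         # C_i^k ("curves"/time)
    F = C.T @ P                          # F_j^k = sum_i P_ij C_i^k ("shapes"/space)
    D = C.T @ W                          # D_l^k = sum_i W_il C_i^k
    return C, F, D
```

With m equal to the number of extreme poses the decomposition reproduces the deltas exactly; choosing m smaller keeps only the most common characteristics, as in steps 275-300 above.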
[0062] In the present embodiment, the above process may be
performed "on-line" or "off-line." That is, the above process may
be performed in the actual rendering process pipeline or
separately, i.e. before the rendering process pipeline. In various
embodiments, the process below may be integrated into the rendering
process pipeline.
[0063] FIG. 3 illustrates a block diagram of a process according to
an embodiment of the present invention. More specifically, FIG. 3
illustrates a process of dynamically determining how wrinkles,
creases, or the like, are to be rendered.
[0064] Initially, typically within a rendering process pipeline,
the base poses for a three-dimensional object and base texture maps
determined above are retrieved into memory, step 400. Next, the
desired pose for the three-dimensional object is also retrieved
into memory, step 410. In the present embodiment, the desired pose
may be unique for every frame to be rendered.
[0065] In embodiments of the present invention, the desired pose is
decomposed into the base poses, and a weighting for the base poses
is determined, step 420. More specifically, a weighted combination
of the base poses is determined in this step that approximately
reproduces the desired pose. In various embodiments, the base poses
are "orthogonal" to each other, thus the weighted combination is
relatively unique for each desired pose.
[0066] Mathematically, the following is performed in various
embodiments to determine the weights:
[0067] Given the new pose (desired pose) $P_j$, determine the new delta pose $P'_j = P_j - \overline{P}_j$.

[0068] Next, find the amplitudes (weights) $a^k = \sum_j P'_j \hat{F}_j^k$, where $\hat{F}_j^k = F_j^k / \sum_j (F_j^k)^2$, such that $P'_j \approx \sum_k a^k \hat{F}_j^k$.
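Assuming mutually orthogonal base-pose directions, the amplitude computation can be sketched as follows (illustrative names; F holds one base-pose direction per row):

```python
import numpy as np

def pose_weights(desired_pose, rest_pose, F):
    """Project a new pose onto the base-pose basis F, shape (m, v).

    Computes a^k = sum_j P'_j F_j^k / sum_j (F_j^k)^2, where
    P'_j = desired_pose - rest_pose is the new delta pose.
    """
    P_delta = desired_pose - rest_pose       # P'_j
    norms = np.sum(F * F, axis=1)            # sum_j (F_j^k)^2, per component k
    return (F @ P_delta) / norms             # amplitudes a^k
```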
[0069] Next, in embodiments of the present invention, the weights
determined for the base poses are applied to the associated base
texture maps, step 430. In particular, the weighting of the base
poses is typically used to form a weighted combination of the base
texture maps. The weighted combination is the texture map
associated with the desired pose (the desired pose texture map). In
embodiments of the present invention, the base texture maps may be
combined using a weighted average, a grayscale logical function
such as OR, NOR, or AND, or the like.
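The application does not define the grayscale logical functions; one common fuzzy-logic reading treats a grayscale OR as an element-wise maximum and a grayscale AND as an element-wise minimum, sketched here with hypothetical values:

```python
import numpy as np

# Two hypothetical base texture maps, already scaled by their pose weights.
map_a = np.array([[0.0, 0.6],
                  [0.3, 1.0]])
map_b = np.array([[0.5, 0.2],
                  [0.3, 0.0]])

combined_avg = 0.5 * map_a + 0.5 * map_b   # weighted average
combined_or  = np.maximum(map_a, map_b)    # grayscale (fuzzy) OR
combined_and = np.minimum(map_a, map_b)    # grayscale (fuzzy) AND
```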
[0070] Mathematically, the following is performed in various
embodiments to determine the new (desired) texture map:
[0071] Determine the new delta map $W'_l \approx \sum_k a^k D_l^k \big/ \sum_j (F_j^k)^2$.

[0072] The corresponding new texture map is then $W_l = W'_l + \overline{W}_l$.
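A sketch of forming the desired texture map from the amplitudes, following the equations above (illustrative names; assumes the wrinkle basis D and pose basis F from the earlier decomposition):

```python
import numpy as np

def desired_texture_map(a, D, F, rest_map):
    """Form the desired-pose texture map W_l from amplitudes a (m,),
    wrinkle basis D (m, d), pose basis F (m, v), and the rest map (d,).

    W'_l ~= sum_k a^k D_l^k / sum_j (F_j^k)^2, then W_l = W'_l + rest map.
    """
    norms = np.sum(F * F, axis=1)       # sum_j (F_j^k)^2, one per component k
    W_delta = (a / norms) @ D           # new delta map W'_l
    return rest_map + W_delta
```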
[0073] In embodiments of the present invention, the desired pose
texture map and the desired pose are passed on to the rendering
process pipeline for rendering, step 440. In various embodiments,
the rendering engine used is Pixar's RenderMan.RTM. rendering
engine. The resulting two-dimensional image (frame) formed
by the rendering engine, step 450, thus includes the
three-dimensional object posed in the desired pose including
wrinkles, creases, or the like, specified by the desired pose
texture map.
[0074] In the present embodiment, the image is stored on media such
as a hard disk, optical disk, film media, printed media, or the
like, step 460. Subsequently, the image may be retrieved from the
media, step 470, and output to one or more users (e.g. audience,
animator), step 480.
[0075] FIGS. 4A-C illustrate an example of an embodiment of the
present invention. More specifically, FIGS. 4A-B illustrate an
example of a principal component decomposition.
[0076] In FIG. 4A, a number of "extreme" poses 500 and a number of
extreme pose texture maps 510 are illustrated. In this example, the
most common components of extreme poses 500 are an enlarged right
eye 515, an enlarged left eye 525, a smile 535, etc. As can be
seen, these components are associated with a right raised eyebrow
520, a left raised eyebrow 530, smile lines 540, etc. in extreme
pose texture maps 510.
[0077] In FIG. 4B, a number of base poses 550 and corresponding
base pose texture maps 560 are illustrated. In this example, base
poses 550 are derived from extreme poses 500, and base pose texture
maps 560 are derived from extreme pose texture maps 510. As
illustrated, base pose 570 is an enlarged right eye, base pose 580
is an enlarged left eye, base pose 590 is a smile, and the like.
The corresponding base pose texture maps 600, 610 and 620 formed
are also illustrated.
[0078] FIG. 4C illustrates the process of forming a desired pose
texture map from a desired pose 620. As can be seen, desired pose
620 includes raised left eye 630, raised right eye 640, and a smile
650. In this example, it is determined that desired pose 620 is
formed from base poses 570, 580 and 590, accordingly, weights 660
are determined. As illustrated, weights 660 are applied to the base
pose texture maps to form the desired pose texture map 670.
[0079] In the present embodiments, desired pose texture map 670 and
desired pose 620 are sent along the rendering process pipeline for
rendering.
[0080] FIGS. 5A-D illustrate examples of rendered wrinkles. More
specifically, FIG. 5A illustrates a base pose 700 of a character
face, including wrinkles 705 on a lip 710.
[0081] In the examples in FIGS. 5B and 5C, the character face is
posed to smile, and as shown, the lip stretches accordingly. In the
example in FIG. 5B, when only a single texture map for the wrinkle
is used, wrinkle 720 stretches along with lip 730. As a result, the
wrinkle appears to widen. Such a result is contrary to real life,
as wrinkles tend to disappear when the skin is stretched.
Accordingly, previous methods did not accurately simulate fine
wrinkles or lines.
[0082] As can be seen in the example in FIG. 5C, when embodiments
of the present invention are used, wrinkles tend to disappear from
the lip 740 in the smile pose.
[0083] In the example in FIG. 5D, with embodiments of the present
invention, when the character face is placed in other poses,
wrinkles 750 may appear that were not shown in base pose 700.
[0084] What is generally disclosed in the present application are
methods, apparatus, computer program products, and the like that
can associate dynamic textures onto a surface, without deforming or
otherwise animating geometry, through pose-based association. Many
changes or modifications are readily envisioned. In light of the
above disclosure, one of ordinary skill in the art would recognize
that the above embodiments are useful for specifying and rendering
microscale three-dimensional geometry such as cracked, wrinkled,
rusty, patterned, embossed, or the like materials such as skin,
cloth, paint, scales, hide, and the like. As an example, the above
embodiments may be applied to the seams of clothing. In one pose of
the cloth there is no binding or wrinkling at the seam; in other
poses, as the material stretches around the seam, wrinkles should
appear adjacent to the seam but not on the seam itself.
[0085] In embodiments of the present invention, the generation of
texture maps for wrinkles and creases can be implemented into a
rendering process pipeline, such as that provided by Pixar's
RenderMan.RTM. product. In the prior art, users such as animators
would not render fine wrinkles and creases because of the high
computational requirements. Alternatively, animators would render
fine wrinkles and creases with a single texture map, however with
unnatural results, as illustrated in FIG. 5B, above. Accordingly,
the inventors believe that embodiments of the present invention now
provide a usable system in which objects are rendered with fine
wrinkles in a realistic manner, and that frames of animation
including objects having such fine wrinkles will be noticeably more
realistic than was previously achieved for animated features.
[0086] It should be understood that "rendering" may refer to a
high-quality process of forming an image from a mathematical
description of a scene using a program such as RenderMan.RTM..
Additionally, "rendering" may refer to any graphical visualization
of the mathematical description of the object, or any conversion of
geometry to pixels, for example "rendering" with a lower quality
rendering engine, or the like. Examples of low-quality rendering
engines include GL and GPU hardware and software renderers, and the
like. Additionally, the rendering may be performed for any purpose,
such as for visualization purposes, for film production purposes,
for gaming purposes, and the like.
[0087] Further embodiments can be envisioned to one of ordinary
skill in the art after reading this disclosure. In other
embodiments, combinations or sub-combinations of the above
disclosed invention can be advantageously made. The block diagrams
of the architecture and flow charts are grouped for ease of
understanding. However it should be understood that combinations of
blocks, additions of new blocks, re-arrangement of blocks, and the
like are contemplated in alternative embodiments of the present
invention.
[0088] The specification and drawings are, accordingly, to be
regarded in an illustrative rather than a restrictive sense. It
will, however, be evident that various modifications and changes
may be made thereunto without departing from the broader spirit and
scope of the invention as set forth in the claims.
* * * * *