U.S. patent application number 12/635999 was filed with the patent office on December 11, 2009 and published on November 25, 2010 as publication number 20100295854, for viewpoint-invariant image matching and generation of three-dimensional models from two-dimensional imagery. This patent application is currently assigned to Animetrics Inc. Invention is credited to Michael Miller.
United States Patent Application 20100295854
Kind Code: A1
Inventor: MILLER; Michael
Published: November 25, 2010
VIEWPOINT-INVARIANT IMAGE MATCHING AND GENERATION OF
THREE-DIMENSIONAL MODELS FROM TWO-DIMENSIONAL IMAGERY
Abstract
A method and system for characterizing features in a source
multifeatured three-dimensional object and for locating a
best-matching three-dimensional object from a reference database of
such objects by performing a viewpoint-invariant search among the
reference objects. The invention further includes the creation of a
three-dimensional representation of the source object by deforming
a reference object.
Inventors: MILLER; Michael (Jackson, NH)
Correspondence Address: WILMERHALE/BOSTON, 60 STATE STREET, BOSTON, MA 02109, US
Assignee: ANIMETRICS INC. (Conway, NH)
Family ID: 32995971
Appl. No.: 12/635999
Filed: December 11, 2009
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
10794353 | Mar 5, 2004 | 7643685
12635999 | |
60452431 | Mar 6, 2003 |
60452430 | Mar 6, 2003 |
60452429 | Mar 6, 2003 |
Current U.S. Class: 345/427
Current CPC Class: G06K 9/00208 20130101; G06K 9/00288 20130101; G06K 9/6255 20130101
Class at Publication: 345/427
International Class: G06T 15/20 20060101 G06T015/20
Claims
1-76. (canceled)
77. A method involving at least one source 2D image of a source 3D object, the method comprising: providing a reference 3D representation; and simultaneously searching over rigid motions and deformations of the reference 3D representation to identify a best-match reference 3D representation most resembling the at least one source 2D image, wherein simultaneously searching comprises searching over rigid motions of the reference 3D representation and, for each orientation of the reference 3D representation in the search over rigid motions of the reference 3D representation, using a closed form expression to compute a deformed reference 3D representation that generates a best fit with the at least one source 2D image, and wherein the best-match 3D representation is the deformed reference 3D representation for the orientation in the search that yields the best fit.
78. The method of claim 77, wherein the closed form expression
corresponds to a back-projection into 3D space of the at least one
source 2D image.
79. The method of claim 77, wherein the closed form expression corresponds to de-projected positions in 3D space of feature items from the at least one source 2D image.
80. The method of claim 77, wherein simultaneously searching over rigid motions and deformations of the reference 3D representation to identify the best-match reference 3D representation most resembling the at least one source 2D image is performed without generating any 2D projections of the reference 3D representation or deformed versions thereof.
81. A method involving at least one source 2D image of a source 3D object, the method comprising: providing a reference 3D representation; and searching over rigid motions and deformations of the reference 3D representation to identify a best-match reference 3D representation most resembling the at least one source 2D image, wherein said searching involves applying both rigid motion and deformation operators to that reference 3D representation to generate multiple versions of the reference 3D representation, and for each version of the reference 3D representation, computing a measure of fit between that version of the reference 3D representation and the at least one source 2D image, wherein the best-fit 3D representation is the version of the reference 3D representation that yields a best measure of fit, and wherein the deformation operators are numerical representations of transformations that are of infinite dimension.
82. The method of claim 81, wherein searching over rigid motions and deformations of the reference 3D representation to identify a best-match reference 3D representation most resembling the at least one source 2D image is performed without actually generating any projections.
83. A method involving at least one source 2D image of a source 3D object, the method comprising: providing a reference 3D representation; and searching over rigid motions and deformations of the reference 3D representation to identify a best-match reference 3D representation most resembling the at least one source 2D image, wherein said searching involves applying both rigid motion and deformation operators to that reference 3D representation to generate multiple versions of the reference 3D representation, and for each version of the reference 3D representation, computing a measure of fit between that version of the reference 3D representation and the at least one source 2D image, wherein the best-fit 3D representation is the version of the reference 3D representation that yields a best measure of fit, and wherein searching over rigid motions and deformations of the reference 3D representation to identify a best-match reference 3D representation most resembling the at least one source 2D image is performed without actually generating any projections.
Description
RELATED APPLICATIONS
[0001] This application claims priority to and the benefits of U.S.
Provisional Applications Ser. Nos. 60/452,429, 60/452,430 and
60/452,431 filed on Mar. 6, 2003 (the entire disclosures of which
are hereby incorporated by reference).
FIELD OF THE INVENTION
[0002] The present invention relates to object modeling and
matching systems, and more particularly to the generation of a
three-dimensional model of a target object from two- and
three-dimensional input.
BACKGROUND OF THE INVENTION
[0003] In many situations, it is useful to construct a
three-dimensional (3D) model of an object when only a partial
description of the object is available. In a typical situation, one
or more two-dimensional (2D) images of the 3D object may be
available, perhaps photographs taken from different viewpoints. A
common method of creating a 3D model of a multi-featured object is
to start with a base 3D model which describes a generic or typical
example of the type of object being modeled, and then to add
texture to the model using one or more 2D images of the object. For
example, if the multi-featured object is a human face, a 3D
"avatar" (i.e., an electronic 3D graphical or pictorial
representation) would be generated by using a pre-existing,
standard 3D model of a human face, and mapping onto the model a
texture from one or more 2D images of the face. See U.S. Pat. No.
6,532,011 B1 to Francini et al., and U.S. Pat. No. 6,434,278 B1 to
Hashimoto. The main problem with this approach is that the 3D
geometry is not highly defined or tuned for the actual target
object which is being generated.
[0004] A common variant of the above approach is to use a set of 3D
base models and select the one that most resembles the target
object before performing the texture mapping step. Alternatively, a
single parameterized base model is used, and the parameters of the
model are adjusted to best approximate the target. See U.S. Pat.
No. 6,556,196 B1 to Blanz et al. These methods serve to refine the
geometry to make it fit the target, at least to some extent.
However, for any target object with a reasonable range of intrinsic
variability, the geometry of the model will still not be well tuned
to the target. This lack of geometric fit will detract from the
verisimilitude of the 3D model to the target object.
[0005] Conventional techniques typically also require that the 2D
images being used for texturing the model be acquired from known
viewpoints relative to the 3D object being modeled. This usually
limits the use of such approaches to situations where the model is
being generated in a controlled environment in which the target
object can be photographed. Alternatively, resort may be had to
human intervention to align 2D images to the 3D model to be
generated. See U.S. Patent Publication No. 2002/0012454 to Liu et
al. This manual step places a severe limit on the speed with which
a 3D model can be generated from 2D imagery.
[0006] Accordingly, a need exists for an automated approach that
systematically makes use of available 2D source data for a 3D
object to synthesize an optimal 3D model of the object.
SUMMARY OF THE INVENTION
[0007] The present invention provides an automated method and
system for generating an optimal 3D model of a target multifeatured
object when only partial source data describing the object is
available. The partial source data often consists of one or more 2D
projections of the target object or an obscuration of a single
projection, but may also include 3D data, such as from a 3D camera
or scanner. The invention uses a set of reference 3D
representations that span, to the extent practicable, the
variations of the class of objects to which the target object
belongs. The invention may automatically identify feature items
common to the source data and to the reference representations, and
establish correspondences between them. For example, if the target
object is a face, the system may identify points at the extremities
of the eyes and mouth, or the nose profile, and establish
correspondences between such features in the source data and in the
reference representations. Manual identification and matching of
feature items can also be incorporated if desired. Next, all
possible positions (i.e., orientations and translations) for each
3D reference representation are searched to identify the position
and reference representation combination whose projection most
closely matches the source data. The closeness of match is
determined by a measure such as the minimum mean-squared error
(MMSE) between the feature items in the projection of the 3D
representation, and the corresponding feature items in the source
projection. A comparison is performed in 3D between the estimated
deprojected positions of the feature items from the 2D source
projection and the corresponding feature items of the 3D
representation. The closest-fitting 3D reference representation may
then be deformed to optimize the correspondence with the source
projection. Each point in the mesh which defines the geometry of
the 3D representation is free to move during the deformation. The
search for the best-fitting position (i.e., orientation and
translation) is repeated using the deformed 3D representation, and
the deformation and search may be repeated iteratively until
convergence occurs, or may be terminated at any time.
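By way of a minimal sketch, the overall flow described above can be summarized as follows; all helper names (detect_features, search_rigid_motions, deform_to_features) are illustrative placeholders for the operations described in this summary, not prescribed routines:

    def build_model(source_images, references, max_iters=10, tol=1e-4):
        # Identify feature items (points, curves, subareas) in the source imagery.
        features = [detect_features(img) for img in source_images]
        # Coarse-grain choice: search all rigid motions (orientations and
        # translations) of every reference 3D representation and keep the
        # reference/position combination whose projection best matches the
        # source features (e.g., by MMSE between corresponding feature items).
        model, pose, err = min(
            (search_rigid_motions(ref, features) for ref in references),
            key=lambda result: result[2])
        # Fine-scale tuning: alternate deformation and rigid-motion search
        # until convergence; the loop may also be terminated at any time.
        for _ in range(max_iters):
            model = deform_to_features(model, pose, features)
            _, pose, new_err = search_rigid_motions(model, features)
            if abs(err - new_err) < tol:
                break
            err = new_err
        return model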
[0008] Thus the geometry of the 3D model is tailored to the target
object in two ways. First, when more than one reference
representation is available, the selection of the best-fitting
reference representation from a set of references enables the
optimal coarse-grain choice to be made. Second, deformation enables
fine scale tuning in which errors introduced by inaccurate choice
of viewpoint are progressively reduced by iteration. The invention
requires no information about the viewpoint from which the 2D
source projection was captured, because a search is performed over
all possible viewpoints, and the viewpoint is taken to be that
which corresponds to the closest fit between the projected 3D
representation and the 2D source data.
[0009] In a first aspect, the invention comprises a method of
comparing at least one source 2D projection of a source
multifeatured object to a reference library of 3D reference
objects. In accordance with the method, a plurality of reference 3D
representations of generically similar multifeatured objects is
provided, and a viewpoint-invariant search of the reference 3D
representations is performed to locate the reference 3D
representation having a 2D projection most resembling the source
projection(s). In some embodiments, resemblance is determined by a
degree of alignment between feature items in the 3D representation
and corresponding feature items in the source 2D projection(s).
Each reference 3D representation may be searched over a range of
possible 2D projections of the 3D representation without actually
generating any projections. The search over a range of possible 2D
projections may comprise computing a rigid motion of the reference
3D representation optimally consistent with a viewpoint of the
source multifeatured object in at least one of the 2D projections.
The rigid motions may comprise pitch, roll, yaw, and translation in
three dimensions. Automatic camera calibration may be performed by
estimation of camera parameters, such as aspect ratio and field of
view, from image landmarks.
[0010] In some embodiments, the optimum rigid motion may be
determined by estimating a conditional mean pose or geometric
registration as it relates to feature items comprising points,
curves, surfaces, and subvolumes in a 3D coordinate space
associated with the reference 3D representation such that the
feature items are projectionally consistent with feature items in
source 2D projection(s). MMSE estimates between the conditional
mean estimate of the projected feature items and corresponding
feature items of the reference 3D representation are generated. The
rigid motion may be constrained by known 3D position information
associated with the source 2D projection(s).
[0011] In some embodiments, the feature items may include curves as
well as points which are extracted from the source projection using
dynamic programming. Further, areas as well as surfaces and/or
subvolumes may be used as features generated via isocontouring
(such as via the Marching Cubes algorithm) or automated
segmentation algorithms. The feature items used in the matching
process may be found automatically by using correspondences between
the 2D source projection(s) and projected imagery of at least one
reference 3D object.
[0012] The invention may further comprise the step of creating a 3D
representation of the source 2D projection(s) by deforming the
located (i.e., best-fitting) reference 3D representation so as to
resemble the source multifeatured object. In one embodiment, the
deformation is a large deformation diffeomorphism, which serves to
preserve the geometry and topology of the reference 3D
representation. The deformation step may deform the located 3D
representation so that feature items in the source 2D projection(s)
align with corresponding features in the located reference 3D
representation. The deformation step may occur with or without
rigid motions and may include affine motions. Further, the
deformation step may be constrained by at least one of known 3D
position information associated with the source 2D projection(s),
and 3D data of the source object. The deformation may be performed
using a closed form expression.
[0013] In a second aspect, the invention comprises a system for
comparing at least one source 2D projection of a source
multifeatured object to a reference library of 3D reference
objects. The system comprises a database comprising a plurality of
reference 3D representations of generically similar multifeatured
objects and an analyzer for performing a viewpoint-invariant search
of the reference 3D representations to locate the reference 3D
representation having a 2D projection most resembling the source
projection(s). In some embodiments, the analyzer determines
resemblance by a degree of alignment between feature items in the
3D representation and corresponding feature items in the source 2D
projection(s). The analyzer may search each reference 3D
representation over a range of possible 2D projections of the 3D
representation without actually generating any projections. In some
embodiments, the analyzer searches over a range of possible 2D
projections by computing a rigid motion of the reference 3D
representation optimally consistent with a viewpoint of the source
multifeatured object in at least one of the 2D projections. The
rigid motions may comprise pitch, roll, yaw, and translation in
three dimensions. The analyzer may be configured to perform
automatic camera calibration by estimating camera parameters, such
as aspect ratio and field of view, from image landmarks.
[0014] In some embodiments, the analyzer is configured to determine
the optimum rigid motion by estimating a conditional mean of
feature items comprising points, curves, surfaces, and subvolumes
in a 3D coordinate space associated with the reference 3D
representation such that the feature items are projectionally
consistent with feature items in the source 2D projection(s). The
analyzer is further configured to generate MMSE estimates between
the conditional mean estimate of the projected feature items and
corresponding feature items of the reference 3D representation. The
rigid motion may be constrained by known 3D position information
associated with the source 2D projection(s).
[0015] In some embodiments, the analyzer is configured to extract
feature items from the source projection using dynamic programming.
In further embodiments, the analyzer may be configured to find
feature items used in the matching process automatically by using
correspondences between source imagery and projected imagery of at
least one reference 3D object.
[0016] The invention may further comprise a deformation module for
creating a 3D representation of the at least one source 2D
projection by deforming the located (i.e., best-fitting) reference
3D representation so as to resemble the source multifeatured
object. In one embodiment, the deformation module deforms the
located reference 3D representation using large deformation
diffeomorphism, which serves to preserve the geometry and topology
of the reference 3D representation. The deformation module may
deform the located 3D representation so that feature items in the
source 2D projection(s) align with corresponding features in the
located reference 3D representation. The deformation module may or
may not use rigid motions and may use affine motions. Further, the
deformation module may be constrained by at least one of known 3D
position information associated with the source 2D projection(s)
and 3D data of the source object. The deformation module may
operate in accordance with a closed form expression.
[0017] In a third aspect, the invention comprises a method of
comparing a source 3D object to at least one reference 3D object.
The method involves creating 2D representations of the source
object and the reference object(s) and using projective geometry to
characterize a correspondence between the source 3D object and a
reference 3D object. For example, the correspondence may be
characterized by a particular viewpoint for the 2D representation
of the 3D source object.
[0018] In a fourth aspect, the invention comprises a system for
comparing a source 3D object to at least one reference 3D object.
The system comprises a projection module for creating 2D
representations of the source object and the reference object(s)
and an analyzer which uses projective geometry to characterize a
correspondence between the source 3D object and a reference 3D
object.
[0019] In a fifth aspect, the above described methods and systems
are used for the case when the 3D object is a face and the
reference 3D representations are avatars.
[0020] In a sixth aspect, the invention comprises a method for
creating a 3D representation from at least one source 2D projection
of a source multifeatured object. In accordance with the method, at
least one reference 3D representation of a generically similar
object is provided, one of the provided representation(s) is
located, and a 3D representation of the source 2D projection(s) is
created by deforming the located reference representation in
accordance with the source 2D projection(s) so as to resemble the
source multifeatured object. In some embodiments, the source 2D
projection(s) is used to locate the reference representation. In
further embodiments, the set of reference representations includes
more than one member, and the reference most resembling the source
2D projection(s) is located by performing a viewpoint-invariant
search of the set of reference representations, without necessarily
actually generating any projections. The search may include
computing a rigid motion of the reference representation optimally
consistent with a viewpoint of the source multifeatured object in
at least one of the source projections.
[0021] In a preferred embodiment, a 3D representation of the source
projection(s) is created by deforming the located reference
representation so as to resemble the source multifeatured object.
The deformation may be a large deformation diffeomorphism. In some
embodiments, the deformation deforms the located reference so that
feature items in the source projection(s) align with corresponding
feature items in the located 3D reference representation. In some
embodiments, the deformation is performed in real time.
[0022] In a seventh aspect, the invention comprises a system for
creating a 3D representation from at least one source 2D projection
of a source multifeatured object. The system includes a database of
at least one reference 3D representation of a generically similar
object, and an analyzer for locating one of the provided
representation(s). The system further includes a deformation module
for creating a 3D representation of the source 2D projection(s) by
deforming the located reference representation in accordance with
the source 2D projection(s) so as to resemble the source
multifeatured object. In some embodiments, the analyzer uses the
source 2D projection(s) to locate the reference representation. In
further embodiments, the set of reference representations includes
more than one member, and the analyzer locates the reference most
resembling the source 2D projection(s) by performing a
viewpoint-invariant search of the set of reference representations,
without necessarily actually generating any projections. The search
may include computing a rigid motion of the reference
representation optimally consistent with a viewpoint of the source
multifeatured object in at least one of the source
projections.
[0023] In a preferred embodiment, the deformation module creates a
3D representation of the source projection(s) by deforming the
located reference representation so as to resemble the source
multifeatured object. The deformation may be a large deformation
diffeomorphism. In some embodiments, the deformation module deforms
the located reference so that feature items in the source
projection(s) align with corresponding feature items in the located
3D reference representation. In some embodiments, the deformation
module operates in real time.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] In the drawings, like reference characters generally refer
to the same parts throughout the different views. The drawings are
not necessarily to scale, emphasis instead generally being placed
upon illustrating the principles of the invention. In the following
description, various embodiments of the invention are described
with reference to the following drawings, in which:
[0025] FIG. 1 schematically illustrates the various components of
the invention, starting with the target object, the reference
objects, and yielding an optimal 3D model after performing a search
and deformation.
[0026] FIGS. 2A, 2B, and 2C schematically illustrate the components
of a 3D avatar.
[0027] FIGS. 3A and 3B schematically illustrate the matching of
feature items in the 2D imagery.
[0028] FIG. 4 is a block diagram showing a representative hardware
environment for the present invention.
[0029] FIG. 5 is a block diagram showing components of the analyzer
illustrated in FIG. 4.
[0030] FIG. 6 is a block diagram showing the key functions
performed by the analyzer.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0031] FIG. 1 illustrates the basic operation of the invention in
the case where the 3D target multifeatured object is a face and the
set of reference 3D representations are avatars. The matching
process starts with a set of reference 3D avatars which represent,
to the extent practicable, the range of different types of heads to
be matched. For example, the avatars may include faces of men and
women, faces with varying quantities and types of hair, faces of
different ages, and faces representing different races. Typically
the reference set includes numerous (e.g., several hundred or more)
avatars, though the invention works with as few as one reference
object, and with as many as storage space and computation time
permit. In some situations, only a single reference avatar will
be used. This case may arise, for example, when the best-fitting
avatar has been selected manually, or by some other means, or when
only one reference avatar is available. In FIG. 1, the source data
of the target face is illustrated as a single 2D photograph of the
target 3D face taken from an unknown viewpoint. First, selected
features to be used for matching are identified in the source
photograph. The features may be points, such as the extremities of
the mouth and eyes, or curves representing a profile, an eyebrow,
or other distinctive curve, or subareas such as an eyebrow, nose or
cheek. The corresponding features are identified in the reference
avatars. The selection of features in the target photograph, and
the identification of corresponding features in the reference
avatars may be done automatically according to the invention. Next,
a viewpoint-invariant search is conducted in which each 3D avatar
is notionally subjected to all possible rigid motions and the
features projected into 2D. The positions of the projected avatar
features are compared to the feature positions in the target
photograph. The avatar for which a particular rigid motion provides
the closest fit between projected features and those of the source
photograph is selected as the best reference avatar. FIG. 1
illustrates the best-fitting reference avatar to be the middle
one.
[0032] Next, the best reference avatar is deformed to match the
target photograph more closely. First the features of the
photograph are reverse projected to the coordinates of the best
reference avatar in the orientation and position corresponding to
the best match. The mesh points of the avatar are then deformed in
3D to minimize the distances between the reverse-projected features
of the photograph and the corresponding avatar features. The avatar
resulting from this deformation will be a closer approximation to
the target 3D face. The rigid motion search and deformation steps
may be repeated iteratively, e.g., until the quality of fit no
longer improves appreciably. The resulting 3D model is the optimal
match to the target face.
[0033] The invention can be used effectively even when the source
imagery includes only a part of the target face, or when the target
face is partially obscured, such as, for example, by sunglasses or
facial hair. The approach of the invention is suitable for any
multifeatured object, such as faces, animals, plants, or buildings.
For ease of explanation, however, the ensuing description will
focus on faces as an exemplary (and non-limiting) application.
[0034] FIGS. 2A, 2B, and 2C show the components of a representative
avatar. In one embodiment of the invention, the geometry of the
avatar is represented by a mesh of points in 3D which are the
vertices of a set of triangular polygons approximating the surface of
the avatar. FIG. 2A illustrates a head-on view 202 and a side view
204 of the triangular polygon representation. In one
representation, each vertex is given a color value, and each
triangular face may be colored according to an average of the color
values assigned to its vertices. The color values are determined
from a 2D texture map 206, illustrated in FIG. 2B, which may be
derived from a photograph. In FIG. 2C, the final avatar with
texture is illustrated in a head-on view 208 and side view 210. The
avatar is associated with a coordinate system which is fixed to it,
and is indexed by three angular degrees of freedom (pitch, roll,
and yaw), and three translational degrees of freedom of the rigid
body center in three-space. In addition, individual features of the
avatar, such as the chin, teeth and eyes may have their own local
coordinates (e.g., chin axis) which form part of the avatar
description. The present invention may be equally applied to
avatars for which a different data representation is used. For
example, texture values may be represented as RGB values, or using
other color representations, such as HSL. The data representing the
avatar vertices and the relationships among the vertices may vary.
For example, the mesh points may be connected to form
non-triangular polygons representing the avatar surface.
[0035] The invention may include a conventional rendering engine
for generating 2D imagery from a 3D avatar. The rendering engine
may be implemented in OpenGL, or in any other 3D rendering system,
and allows for the rapid projection of a 3D avatar into a 2D image
plane representing a camera view of the 3D avatar. The rendering
engine may also include the specification of the avatar lighting,
allowing for the generation of 2D projections corresponding to
varying illumination of the avatar. Lighting corresponding to a
varying number of light sources of varying colors, intensities, and
positions may be generated.
[0036] The feature items in the 2D source projection which are used
for matching are selected by hand or via automated methods. These
items may be points or curves. When the source projection includes
a front view, suitable points may be inflection points at the lips,
points on the eyebrows, points at the extremities of the eyes, or
extremities of nostrils, and suitable curves may include an eyebrow
or lip. When the source projection includes a side view, the
feature points corresponding to the profile are used and may
include the tip of the nose or chin. Suitable feature item curves
may include distinct parts of the profile, such as nose, forehead,
or chin.
[0037] When the feature items are determined manually, a user
interface is provided which allows the user to identify feature
points individually or to mark groups of points delineated by a
spline curve, or to select a set of points forming a line.
[0038] The automated detection of feature items on the 2D source
projection is performed by searching for specific features of a
face, such as eyeballs, nostrils, and lips. As understood by those
of ordinary skill in the art, the approach may use Bayesian
classifiers and decision trees in which hierarchical detection
probes are built from training data generated from actual avatars.
The detection probes are desirably stored at multiple pixel scales
so that specific parameters, such as the orientation of a feature,
are computed on finer scales only if the larger-scale probes yield a
positive detection. The feature detection probes may
be generated from image databases representing large numbers of
individuals who have had their features demarcated and segregated
so that the detection probes become specifically tuned to these
features. The automated feature detection approach may use pattern
classification, Bayes nets, neural networks, or other known
techniques for determining the location of features in facial
images.
[0039] The automated detection of curve features in the source
projection may use dynamic programming approaches to generate
curves from a series of points so as to reduce the amount of
computation required to identify an optimal curve and maximize a
sequentially additive cost function. Such a cost function
represents a sequence of features such as the contrast of the
profile against background, or the darkness of an eyebrow, or the
crease between lips. A path of N points can be thought of as consisting of a starting node $x_0$ and a set of vectors $v_0, v_1, \ldots, v_{N-1}$ connecting neighboring nodes. The nodes comprising this path are defined as

$x_i = \sum_{j=0}^{i-1} v_j + x_0.$
Rather than searching over all paths of length N, dynamic
programming may be used to generate maximum (or minimum) cost
paths. This reduces the complexity of the algorithm from $K^N$ to
$NK^2$, where N is the length of a path and K is the total number
of nodes, as dynamic programming takes advantage of the fact that
the cost is sequentially additive, allowing a host of sub-optimal
paths to be ignored. Dynamic programming techniques and systems are
well-characterized in the art and can be applied as discussed
herein without undue experimentation.
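A minimal sketch of this dynamic-programming recursion, in Python with NumPy, follows; the edge-cost matrix and function name are illustrative assumptions (per-node feature costs, such as local contrast along the candidate curve, can be folded into the edge costs):

    import numpy as np

    def max_cost_path(cost, N):
        """Maximum-cost path of N nodes for a sequentially additive cost.

        cost: K x K array, cost[i, j] = cost of stepping from node i to node j.
        Runs in O(N K^2) rather than the O(K^N) of exhaustive search, because
        sub-optimal path prefixes can be discarded at every step.
        """
        K = cost.shape[0]
        total = np.zeros(K)                     # best cost ending at each node
        back = np.zeros((N - 1, K), dtype=int)  # backpointers for path recovery
        for step in range(N - 1):
            candidate = total[:, None] + cost   # extend each path by one edge
            back[step] = np.argmax(candidate, axis=0)
            total = candidate[back[step], np.arange(K)]
        end = int(np.argmax(total))
        path = [end]
        for step in range(N - 2, -1, -1):
            path.append(int(back[step, path[-1]]))
        return float(total[end]), path[::-1]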
[0040] Next, the 3D rotation and translation from the avatar
coordinates to the source projection is determined. This
corresponds to finding the viewpoint from which the source
projection was captured. In preferred embodiments, this is achieved
by calculating the position of the avatar in 3D space that best
matches the set of selected feature items in the 2D source
projection. Generally, these feature items will be points, curves,
or subareas and the source projection will be a photograph on which
the position of these items can be measured, either manually or
automatically. The position calculation may be based on the
computation of the conditional mean estimate of the reverse
projection positions in 3D of the 2D feature items, followed by the
computation of MMSE estimates for the rotation and translation
parameters in 3D, given the estimates of the 3D positions of the
feature items. Since position in 3D space is a vector parameter,
the MMSE estimate for translation position is closed form; when
substituted back into the squared error function, it gives an
explicit function in terms of only the rotations. Since the
rotations are not vector parameters, they may be calculated using
non-linear gradient descent through the tangent space of the group
or via local representation using the angular velocities of the
skew-symmetric matrices.
[0041] In addition to or in lieu of the least squares or weighted
least squares techniques described herein, the distance metrics
used to measure the quality of fit between the reverse projections
of feature items from the source imagery and corresponding items in
the 3D avatar may be, for example, Poisson or other distance
metrics which may or may not satisfy the triangle inequality.
[0042] If feature items measured in 3D are available, such as from
actual 3D source data from 3D cameras or scanners, the feature item
matching may be performed directly, without the intermediate step
of calculating the conditional mean estimate of the deprojected 2D
features. The cost function used for positioning the 3D avatar can
be minimized using algorithms such as closed form quadratic
optimization, iterative Newton descent or gradient methods.
[0043] The 3D positioning technique is first considered without deformation of the reference avatar. In the following, a 3D reference avatar is referred to as a CAD (computer-aided design) model, or by the symbol CAD. The set of features $x_j = (x_j, y_j, z_j), j = 1, \ldots, N$ is defined on the CAD model. The projective geometry mapping is defined as either positive or negative $z$, i.e., projection occurs along the $z$ axis. In all cases the projected position of the point $x_j$ is

$p_j = \left(\frac{\alpha_1 x_j}{-z_j}, \frac{\alpha_2 y_j}{-z_j}\right)$ (for negative $z$-axis projection), or $p_j = \left(\frac{\alpha_1 x_j}{z_j}, \frac{\alpha_2 y_j}{z_j}\right)$ (for positive $z$-axis projection),

where the $\alpha$'s are determined by the projection angle. Let the rigid transformation be of the form $A = (O, b): x \mapsto Ox + b$ centered around $x_c = 0$. For positive (i.e., $z > 0$) mapping and $n = 0$,

$p_j = \left(\frac{\alpha_1 x_j}{z_j}, \frac{\alpha_2 y_j}{z_j}\right), \quad j = 1, \ldots, N,$

where $n$ is the cotangent of the projective angle. The following data structures are defined throughout:

$P_i = \begin{pmatrix} p_{i1}/\alpha_1 \\ p_{i2}/\alpha_2 \\ 1 \end{pmatrix}, \quad Q_i = I - \frac{P_i P_i'}{\|P_i\|^2}, \quad \bar{Q} = \sum_{i=1}^N Q_i, \quad X_Q = \sum_{i=1}^N Q_i X_i,$ (Equation 1)

$X_j = \begin{pmatrix} x_{j1} & x_{j2} & x_{j3} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_{j1} & x_{j2} & x_{j3} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & x_{j1} & x_{j2} & x_{j3} \end{pmatrix},$ (Equation 2)

with $(\cdot)'$ denoting matrix transpose and $I$ the $3 \times 3$ identity matrix. For negative (i.e., $z < 0$) mapping, $p_j = \left(\frac{\alpha_1 x_j}{-z_j}, \frac{\alpha_2 y_j}{-z_j}\right)$ and the change $P_i = \left(-\frac{p_{i1}}{\alpha_1}, -\frac{p_{i2}}{\alpha_2}, 1\right)'$ is made.
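These data structures are straightforward to realize in code. A minimal NumPy sketch for the positive $z$-axis case follows; the function names are illustrative assumptions:

    import numpy as np

    def project(x, alpha1, alpha2):
        # p = (alpha1 * x / z, alpha2 * y / z) for a 3D point x = (x, y, z)
        return np.array([alpha1 * x[0] / x[2], alpha2 * x[1] / x[2]])

    def ray_matrices(p, alpha1, alpha2):
        # P_i = (p_i1/alpha1, p_i2/alpha2, 1)'; Q_i = I - P_i P_i' / ||P_i||^2
        P = np.array([p[0] / alpha1, p[1] / alpha2, 1.0])
        Q = np.eye(3) - np.outer(P, P) / (P @ P)
        return P, Q

Because $Q_i$ annihilates the ray direction $P_i$, the quadratic form $(Ox_i + b)'Q_i(Ox_i + b)$ appearing below measures the squared distance from the transformed model point to the camera ray through the measured feature $p_i$.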
[0044] The basis vectors $Z_1, Z_2, Z_3$ at the tangent to the $3 \times 3$ rotation element $O$ are defined as:

$Z_1 = 1_1 O^{old} = [o_{21}, o_{22}, o_{23}, -o_{11}, -o_{12}, -o_{13}, 0, 0, 0]',$ (Equation 3)

$Z_2 = 1_2 O^{old} = [o_{31}, o_{32}, o_{33}, 0, 0, 0, -o_{11}, -o_{12}, -o_{13}]',$ (Equation 4)

$Z_3 = 1_3 O^{old} = [0, 0, 0, o_{31}, o_{32}, o_{33}, -o_{21}, -o_{22}, -o_{23}]',$ (Equation 5)

where

$1_1 = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad 1_2 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \quad 1_3 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}.$ (Equation 6)
[0045] The reverse projection of feature points from the 2D projection may now be performed. Given the feature points $p_j = (p_{j1}, p_{j2}), j = 1, 2, \ldots$ in the image plane, the minimum-norm estimates for $\hat{z}_i$ are given, via $\hat{O}, \hat{b}$, as

$\hat{z}_i = \frac{\langle \hat{O} x_i + \hat{b}, P_i \rangle}{\|P_i\|^2},$

and the MMSE $\hat{O}, \hat{b}$ satisfies

$\min_{z, O, b} \sum_{i=1}^N \|O x_i + b - z_i P_i\|^2 = \min_{O, b} \sum_{i=1}^N (O x_i + b)' Q_i (O x_i + b).$ (Equation 7)
[0046] During the process of matching a source image to reference
avatars, there may be uncertainty in the determined points x,
implying that cost matching is performed with a covariance
variability structure built into the formula. In this case, the
norm has within it a $3 \times 3$ matrix which represents this
variability in the norm.
[0047] The optimum rotation and translation may next be estimated from feature points. Given the projective points $p_j, j = 1, 2, \ldots$, the rigid transformation has the form $O, b: x \mapsto Ox + b$ (centered around $x_c = 0$). Then for positive ($z > 0$) mapping and $n = 0$, $p_j = \left(\frac{\alpha_1 x_j}{z_j}, \frac{\alpha_2 y_j}{z_j}\right)$ and

$\min_{z, O, b} \sum_{i=1}^N \|O x_i + b - z_i P_i\|^2 = \min_{O, b} \sum_{i=1}^N (O x_i + b)' Q_i (O x_i + b).$ (Equation 8)
[0048] The optimum translation and rotation solutions are preferably generated as follows. Compute the $3 \times 9$ matrix $M_i = X_i - \bar{Q}^{-1} X_Q$ and evaluate the cost function exhaustively, choosing the minimizing $O$ and computing the translation

$\hat{b} = -\left(\sum_{i=1}^N Q_i\right)^{-1} \sum_{i=1}^N Q_i \hat{O} x_i,$

with the minimizing $O$ attained, for example, via brute-force search over the orthogonal group (which may be parameterized by pitch, roll, and yaw) or by running the gradient search algorithm to convergence, as follows:

Brute force: $\hat{O} = \arg\min_O O' \left(\sum_{i=1}^N M_i' Q_i M_i\right) O;$ (Equation 9)

Gradient: $O^{new} = \sum_{i=1}^3 \alpha_i^{new} 1_i O^{old}, \quad \alpha_j^{new} = 2\left\langle \left(\sum_{i=1}^N M_i' Q_i M_i\right) O^{old}, Z_j \right\rangle, \; j = 1, 2, 3,$ (Equation 10)

with $\langle f, g \rangle = \sum_i f_i g_i$.
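A minimal sketch of this search follows, reusing the ray_matrices helper sketched earlier; the brute-force scan over pitch, roll, and yaw on a coarse grid is an illustrative stand-in for either the exhaustive or the gradient variant:

    import numpy as np
    from itertools import product

    def euler(pitch, roll, yaw):
        cx, sx = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(roll), np.sin(roll)
        cz, sz = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def best_rigid_motion(xs, ps, alpha1, alpha2, n_steps=24):
        # xs: N x 3 model feature points; ps: N x 2 measured image features.
        Qs = [ray_matrices(p, alpha1, alpha2)[1] for p in ps]
        Qbar_inv = np.linalg.inv(sum(Qs))
        best = (np.inf, None, None)
        angles = np.linspace(-np.pi, np.pi, n_steps, endpoint=False)
        for pitch, roll, yaw in product(angles, repeat=3):
            O = euler(pitch, roll, yaw)
            # Closed-form translation: b = -(sum_i Q_i)^-1 sum_i Q_i O x_i
            b = -Qbar_inv @ sum(Q @ (O @ x) for Q, x in zip(Qs, xs))
            cost = sum((O @ x + b) @ Q @ (O @ x + b) for Q, x in zip(Qs, xs))
            if cost < best[0]:
                best = (cost, O, b)
        return best  # (cost, rotation, translation)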
[0049] In a typical application some information about the position of the object in 3D space is known. For example, in a system which takes a succession of photographs of a moving source object, such as in a tracking system, the position from a previous image may be available. The invention may incorporate this information into the matching process as follows. Given a sequence of points $p_i, i = 1, \ldots, N$ and a rigid transformation of the form $A = (O, b): x \mapsto Ox + b$ (centered around $x_c = 0$), the MMSE of rotation and translation $\hat{O}, \hat{b}$ satisfies:

$\min_{z, O, b} \sum_{i=1}^N \|O x_i + b - z_i P_i\|^2 + (b - \mu)' \Sigma^{-1} (b - \mu) = \min_{O, b} \sum_{i=1}^N (O x_i + b)' Q_i (O x_i + b) + (b - \mu)' \Sigma^{-1} (b - \mu).$ (Equation 11)

The $3 \times 9$ matrix $M_i$ and the auxiliary quantities are computed:

$M_i = X_i - \bar{Q}_\Sigma^{-1} X_Q, \quad N = \bar{Q}_\Sigma^{-1} X_Q, \quad \bar{Q}_\Sigma = (\bar{Q} + \Sigma^{-1}), \quad \psi = \bar{Q}_\Sigma^{-1} \Sigma_\mu, \quad \phi = \bar{Q}_\Sigma^{-1} \Sigma_\mu - \mu, \quad \Sigma_\mu = \Sigma^{-1} \mu.$ (Equation 12)

The translation is then determined as $\hat{b} = -\bar{Q}_\Sigma^{-1} X_Q \hat{O} + \bar{Q}_\Sigma^{-1} \Sigma_\mu$ at the minimizing $\hat{O}$ obtained by exhaustive search or by the gradient algorithm run until convergence:

Brute force: $\hat{O} = \arg\min_O O' \left(\sum_{i=1}^N M_i' Q_i M_i + N' \Sigma^{-1} N\right) O + 2 O' \left(\sum_{i=1}^N M_i' Q_i \psi - N' \Sigma^{-1} \phi\right);$ (Equation 13)

Gradient: $O^{new} = \sum_{i=1}^3 \alpha_i^{new} 1_i O^{old}, \quad \alpha_j^{new} = 2\left\langle \left(\sum_{i=1}^N M_i' Q_i M_i + N' \Sigma^{-1} N\right) O^{old} + \sum_{i=1}^N M_i' Q_i \psi - N' \Sigma^{-1} \phi, Z_j \right\rangle,$ (Equation 14)

with the projection onto the basis vectors $Z_1, Z_2, Z_3$ of Equations 3-5 defined at the tangent to $O^{old}$ in the exponential representation, where the $\alpha^{new}$ are the directional derivatives of the cost.
[0050] The rotation/translation data may be indexed in many
different ways. For example, to index according to the rotation
around the center of the object, rather than fixed external world
coordinates, the coordinates are simply reparameterized by defining
$\tilde{x} \leftarrow x - x_c$. All of the techniques described
herein remain the same.
[0051] The preferred 3D algorithm for rigid motion efficiently
changes states of geometric pose for comparison to the measured
imagery. The preferred 3D algorithm for diffeomorphic
transformation of geometry matches the geometry to target 2D image
features. It should be understood, however, that other methods of
performing the comparison of the 3D representation to the source
imagery may be used, including those that do not make use of
specific image features.
[0052] Once the rigid motion (i.e., rotation and translation) that
results in the best fit between 2D source imagery and a selected 3D
avatar is determined, the 3D avatar may be deformed in order to
improve its correspondence with the source imagery. The allowed
deformations are generally limited to diffeomorphisms of the
original avatar. This serves to preserve the avatar topology,
guaranteeing that the result of the deformation will be a face. The
deformations may also enforce topological constraints, such as the
symmetry of the geometry. This constraint is especially useful in
situations where parts of the source object are obscured, and the
full geometry is inferred from partial source information.
[0053] FIGS. 3A and 3B illustrate the effect of avatar deformation
on the matching of the avatar to the source imagery. In FIG. 3A,
feature points are shown as black crosses on the source image 302.
An example is the feature point at the left extremity of the left
eyebrow 304. The projections of the corresponding feature points
belonging to the best-matching reference avatar with optimal rigid
motion prior to deformation are shown as white crosses. It can be
seen that the projected point corresponding to the left extremity
of the left eyebrow 306 is noticeably displaced from its
counterpart 304. In FIG. 3B, the same source image 302 is shown
with feature points again indicated by black crosses. This time,
the best-fitting avatar feature points shown as white crosses are
now projected after deformation. The correspondence between source
feature points and avatar feature points is markedly improved, as
shown, for example, by the improved proximity of the projected left
eyebrow feature point 308 to its source counterpart 304.
[0054] The 3D avatar diffeomorphism calculation starts with the
initial conditions for placement of the avatar determined by the
feature item detection and computation of the best-fitting rigid
motion and the original geometry of the avatar. It then proceeds by
allowing all of the points on the avatar to move independently
according to a predefined formula to minimize the distance between
the deformed avatar points in 3D and the conditional mean estimates
of the 2D landmark points reverse projected to the 3D coordinates.
Once this diffeomorphism is calculated, the 3D landmark rigid
motion algorithm is applied again to the source projections and
feature items to find the best guess of the camera positions given
this newly transformed avatar with its new vertex positions.
Subsequently, a new diffeomorphism is generated, and this process
is continued until it converges. Alternatively, iteration may not
be used, with the rigid motion calculation being performed only a
single time, and just one diffeomorphism transformation applied. In
the case where camera orientations (i.e., the viewpoint of the
measured source projections) are known precisely, these can be used
as fixed inputs to the calculation, with no rigid transformation
required. When the measured sets of feature items are in 3D, such
as from a cyber scan or 3D camera observations of the candidate
head, the avatar may be transformed onto the candidate sets of
points directly, without any intermediate generation of the
candidate points in 3D space via the conditional mean algorithm for
generating 3D points from 2D sets of points.
[0055] The diffeomorphic deformation of an avatar proceeds as follows. Given the set of feature items $x_j = (x_j, y_j, z_j), j = 1, \ldots, N$ defined on the CAD model, with the projective geometry mapping with

$\alpha_1 = \frac{2n}{w}, \quad \alpha_2 = \frac{2n}{h},$

where $n$ is the cotangent of the angle and $w, h$ are the aspect-ratio width and height,

$(x, y, z) \mapsto p(x, y, z) = \left(\frac{\alpha_1 x}{z}, \frac{\alpha_2 y}{z}\right),$

with observations of the feature items through the projective geometry $p_j = \left(\frac{\alpha_1 x_j}{z_j}, \frac{\alpha_2 y_j}{z_j}\right)$. The goal is to construct the deformation of the CAD model $x \mapsto x + u(x), x \in CAD$, with unknown camera rigid motions corresponding to the measured projective image feature items. The projective points for each orientation $v = 1, \ldots, V$, and smoothing matrices

$P_i^{(v)} = \begin{pmatrix} p_{i1}^{(v)}/\alpha_1 \\ p_{i2}^{(v)}/\alpha_2 \\ 1 \end{pmatrix}, \quad Q_i^{(v)} = I - \frac{P_i^{(v)} P_i^{(v)'}}{\|P_i^{(v)}\|^2}, \quad \bar{Q}_i = \sum_{v=1}^V O^{(v)'} Q_i^{(v)} O^{(v)}, \quad \bar{Q}_{Oi} = \sum_{v=1}^V O^{(v)'} Q_i^{(v)},$ (Equation 15)

$\bar{Q} = \begin{pmatrix} \bar{Q}_1 & 0 & \cdots & 0 \\ 0 & \bar{Q}_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \bar{Q}_N \end{pmatrix}_{3N \times 3N}, \quad \bar{Q}_O = \begin{pmatrix} \bar{Q}_{O1} & 0 & \cdots & 0 \\ 0 & \bar{Q}_{O2} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \bar{Q}_{ON} \end{pmatrix}_{3N \times 3N},$ (Equation 16)

$K = \begin{pmatrix} K_{11} & K_{12} & \cdots & K_{1N} \\ K_{21} & K_{22} & \cdots & K_{2N} \\ \vdots & & & \vdots \\ K_{N1} & K_{N2} & \cdots & K_{NN} \end{pmatrix}_{3N \times 3N}, \quad K_{ij} = \begin{pmatrix} K_{ij}(1,1) & K_{ij}(1,2) & K_{ij}(1,3) \\ K_{ij}(2,1) & K_{ij}(2,2) & K_{ij}(2,3) \\ K_{ij}(3,1) & K_{ij}(3,2) & K_{ij}(3,3) \end{pmatrix},$ (Equation 17)

are constructed, where for example $K_{ij} = \mathrm{diag}(e^{-a\|x_i - x_j\|}, e^{-a\|x_i - x_j\|}, e^{-a\|x_i - x_j\|})$ corresponds to the square-root inverse Laplacian operator $L = \mathrm{diag}(-\nabla^2 + c)$.
[0056] In one embodiment, the avatar may be deformed with small deformations only and no rigid motions. For this embodiment, it is assumed that the measured feature items are all points from a single camera view which generated the projected source image in which the feature points were measured. The goal is to construct the deformation of the CAD model via the mapping $x \mapsto x + u(x), x \in CAD$:

$\min_{u, z_n} \|Lu\|^2 + \sum_{n=1}^N \|(x_n + u(x_n)) - z_n P_n\|^2 = \min_u \|Lu\|^2 + \sum_{n=1}^N (x_n + u(x_n))' Q_n (x_n + u(x_n)).$ (Equation 18)

[0057] First, the transformation of the model $x \mapsto x + u(x)$ with $u(x) = \sum_{n=1}^N K(x_n, x) \beta_n$ is computed, where

$\begin{pmatrix} \beta_1 \\ \vdots \\ \beta_N \end{pmatrix} = K^{-1} \left( -\frac{1}{\sigma^2} \left( K^{-1} + \frac{1}{\sigma^2} \bar{Q} \right)^{-1} \bar{Q} \begin{pmatrix} x_1 \\ \vdots \\ x_N \end{pmatrix} \right).$ (Equation 19)

Next, rigid motions are added and the following equation is solved for the optimizer:

$\min_{u, z_n} \|Lu\|^2 + \sum_{n=1}^N \sum_{v=1}^V (O^{(v)}(x_n + u(x_n)) + b^{(v)})' Q_n^{(v)} (O^{(v)}(x_n + u(x_n)) + b^{(v)}).$ (Equation 20)

The transformation of the model using the small deformation $x \mapsto x + u(x)$ is computed, where $u(x) = \sum_{n=1}^N K(x_n, x) \beta_n$ and

$\begin{pmatrix} \beta_1 \\ \vdots \\ \beta_N \end{pmatrix} = K^{-1} \left( -\frac{1}{\sigma^2} \left( K^{-1} + \frac{1}{\sigma^2} \bar{Q} \right)^{-1} \left( \bar{Q} \begin{pmatrix} x_1 \\ \vdots \\ x_N \end{pmatrix} + \begin{pmatrix} \sum_{v=1}^V O^{(v)'} Q_1^{(v)} b^{(v)} \\ \vdots \\ \sum_{v=1}^V O^{(v)'} Q_N^{(v)} b^{(v)} \end{pmatrix} \right) \right).$ (Equation 21)
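A minimal NumPy sketch of the single-view, no-rigid-motion solve of Equations 18-19 follows; the kernel width a and noise scale sigma2 are illustrative assumptions:

    import numpy as np

    def small_deformation(xs, Qs, a=1.0, sigma2=1.0):
        # xs: N x 3 avatar features (in the best rigid position); Qs: 3x3 Q_n's.
        xs = np.asarray(xs, dtype=float)
        N = len(xs)
        # Kernel blocks K_ij = exp(-a ||x_i - x_j||) I_3 (Equation 17)
        d = np.linalg.norm(xs[:, None, :] - xs[None, :, :], axis=2)
        K = np.kron(np.exp(-a * d), np.eye(3))
        Qbar = np.zeros((3 * N, 3 * N))     # block-diagonal ray projectors
        for n, Q in enumerate(Qs):
            Qbar[3*n:3*n+3, 3*n:3*n+3] = Q
        x = xs.reshape(-1)
        Kinv = np.linalg.inv(K)
        # Equation 19: beta = K^-1(-(1/s^2)(K^-1 + (1/s^2) Qbar)^-1 Qbar x)
        beta = Kinv @ (-(1.0 / sigma2) *
                       np.linalg.solve(Kinv + Qbar / sigma2, Qbar @ x))
        u = (K @ beta).reshape(N, 3)        # displacement u(x_n)
        return xs + u                       # deformed feature positions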
[0058] In another embodiment, diffeomorphic deformations with no rigid motions of the avatar are applied. In the case that the change in shape of the face is extensive, the large deformation $\phi: x \mapsto \phi(x)$ satisfying $\phi = \phi_1$, with $\phi_t(x) = \int_0^t v_s(\phi_s(x)) \, ds + x$, $x \in CAD$, is generated. The deformation of the CAD model via the mapping $x \mapsto \phi(x), x \in CAD$ is constructed:

$\min_{v_t, t \in [0,1], z_n} \int_0^1 \|Lv_t\|^2 \, dt + \sum_{n=1}^N \|\phi(x_n) - z_n P_n\|^2 = \min_{v_t, t \in [0,1]} \int_0^1 \|Lv_t\|^2 \, dt + \sum_{n=1}^N \phi(x_n)' Q_n \phi(x_n).$ (Equation 22)

Using the initialization $v^{new} = 0, \phi^{new}(x) = x, x \in CAD$, mappings are repeatedly generated from the new vector field by running an iterative algorithm until convergence:

$v_t^{new}(\cdot) = \sum_{n=1}^N K(\phi_t^{new}(x_n), \cdot) \left( D_{\phi_t^{new}(x_n)} \phi_{t,1}^{new} \right)' Q_n \phi_1^{new}(x_n),$ (Equation 23)

$\phi^{new}(x) = \int_0^1 v_t^{new}(\phi_t^{new}(x)) \, dt + x,$ (Equation 24)

where

$D_{\phi_t(x_n)} \phi_{t,1} = \left( \frac{\partial (\phi_1 \circ \phi_t^{-1})_i}{\partial y_j} (\phi_t(x_n)) \right).$

The addition of rigid motions to the large deformation $x \mapsto \phi(x), x \in CAD$ is accomplished as follows:

$\min_{v_t: \dot{\phi}_t = v_t(\phi_t), t \in [0,1], z_n} \int_0^1 \|Lv_t\|^2 \, dt + \sum_{n=1}^N \sum_{v=1}^V \|O^{(v)} \phi(x_n) + b^{(v)} - z_n^{(v)} P_n^{(v)}\|^2 = \min_{v_t, t \in [0,1]} \int_0^1 \|Lv_t\|^2 \, dt + \sum_{n=1}^N \sum_{v=1}^V (O^{(v)} \phi(x_n) + b^{(v)})' Q_n^{(v)} (O^{(v)} \phi(x_n) + b^{(v)}).$ (Equation 25)

Using the initialization $v^{new} = 0, \phi^{new}(x) = x, x \in CAD$, a mapping is generated from the new vector field by running an iterative algorithm until convergence:

$v_t^{new}(\cdot) = \sum_{n=1}^N K(\phi_t^{new}(x_n), \cdot) \left( D_{\phi_t^{new}(x_n)} \phi_{t,1}^{new} \right)' \left( \sum_{v=1}^V O^{(v)'} Q_n^{(v)} (O^{(v)} \phi^{new}(x_n) + b^{(v)}) \right),$ (Equation 26)

$\phi^{new}(x) = \int_0^1 v_t^{new}(\phi_t^{new}(x)) \, dt + x, \quad \text{where} \quad D_{\phi_t(x_n)} \phi_{t,1} = \left( \frac{\partial (\phi_1 \circ \phi_t^{-1})_i}{\partial y_j} (\phi_t(x_n)) \right).$ (Equation 27)
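A heavily simplified sketch of this iteration follows, discretizing time into T steps and, for brevity, omitting the Jacobian term $D\phi_{t,1}$ (a greedy descent variant, not the full update of Equation 23); the kernel, step size, and iteration counts are illustrative assumptions:

    import numpy as np

    def kernel(y, x, a=1.0):
        # Scalar stand-in for the kernel K of Equation 17
        return np.exp(-a * np.linalg.norm(y - x))

    def diffeo_landmarks(xs, Qs, T=10, step=0.1, iters=50):
        # xs: N x 3 avatar features; Qs: list of 3x3 ray projectors Q_n.
        N = len(xs)
        traj = np.repeat(np.asarray(xs, dtype=float)[None, :, :], T + 1, axis=0)
        for _ in range(iters):
            end = traj[-1]                                     # phi_1(x_n)
            grad = np.array([Q @ e for Q, e in zip(Qs, end)])  # Q_n phi_1(x_n)
            for t in range(T):
                # Smoothed velocity field evaluated at the trajectory points
                v = np.array([sum(kernel(traj[t, n], traj[t, m]) * grad[m]
                                  for m in range(N))
                              for n in range(N)])
                traj[t + 1] = traj[t] - step * v               # flow downhill
        return traj[-1]                                        # phi(x_n)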
[0059] In a further embodiment, the deformation may be performed in real time for the case when the rigid motions (i.e., the rotation/translation) which bring the avatar into correspondence with the one or more source 2D projections are not known. A similar approach to the one above is used, with the addition of an estimation of the rigid motions using the techniques described herein. The initialization $u^{new} = 0$ is used. Rigid motions are calculated using the rotation/translation techniques above to register the CAD model $x \mapsto x + u^{new}(x)$ to each photograph, generating rigid motions $O^{(v)new}, b^{(v)new}, v = 1, 2, \ldots$. With $O^{(v)new}, b^{(v)new}$ fixed from the previous step, the deformation of the CAD model $x \mapsto x + u^{new}(x)$ or the large deformation $x \mapsto \phi(x)$ is computed using the above techniques to solve the real-time small-deformation or large-deformation problem:

(small) $\min_u \|Lu\|^2 + \sum_{n=1}^N \sum_{v=1}^V (O^{(v)}(x_n + u(x_n)) + b^{(v)})' Q_n^{(v)} (O^{(v)}(x_n + u(x_n)) + b^{(v)});$ (Equation 28)

(large) $\min_{v_t, t \in [0,1]} \int_0^1 \|Lv_t\|^2 \, dt + \sum_{n=1}^N \sum_{v=1}^V (O^{(v)} \phi(x_n) + b^{(v)})' Q_n^{(v)} (O^{(v)} \phi(x_n) + b^{(v)}).$ (Equation 29)
[0060] In another embodiment, the avatar is deformed in real-time
using diffeomorphic deformations. The solution to the real-time
deformation algorithm generates a deformation which may be used as
an initial condition for the solution of the diffeomorphic
deformation. Real-time diffeomorphic deformation is accomplished by
incorporating the real-time deformation solution as an initial
condition and then performing a small number (in the region of 1 to
10) iterations of the diffeomorphic deformation calculation.
[0061] The deformation may include affine motions. For the affine motion $A: x \mapsto Ax$, where $A$ is the $3 \times 3$ generalized linear matrix, so that

$\min_{A, z_n} \sum_{n=1}^N \sum_{v=1}^V \|O^{(v)} A x_n + b^{(v)} - z_n^{(v)} P_n^{(v)}\|^2 = \min_A \sum_{n=1}^N \sum_{v=1}^V (O^{(v)} A x_n + b^{(v)})' Q_n^{(v)} (O^{(v)} A x_n + b^{(v)}),$ (Equation 30)

the least-squares estimator $\hat{A}: x \mapsto \hat{A}x$ is computed:

$\hat{A} = -\left(\sum_{n=1}^N X_n' \bar{Q}_n X_n\right)^{-1} \left(\sum_{n=1}^N X_n' \sum_{v=1}^V O^{(v)'} Q_n^{(v)} b^{(v)}\right).$ (Equation 31)
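A minimal sketch of the estimator of Equation 31 for a single view follows; $X_n$ is the $3 \times 9$ feature matrix of Equation 2, so that $X_n \, \mathrm{vec}(A) = A x_n$, and the function names are illustrative assumptions:

    import numpy as np

    def feature_matrix(x):
        # The 3 x 9 matrix X_n of Equation 2
        X = np.zeros((3, 9))
        for r in range(3):
            X[r, 3*r:3*r+3] = x
        return X

    def affine_estimate(xs, Qs, O, b):
        # xs: N x 3 features; Qs: 3x3 ray projectors; O, b: known rigid motion.
        lhs = np.zeros((9, 9))
        rhs = np.zeros(9)
        for x, Q in zip(xs, Qs):
            Xn = feature_matrix(np.asarray(x, dtype=float))
            Qbar_n = O.T @ Q @ O            # per-feature Qbar for one view
            lhs += Xn.T @ Qbar_n @ Xn
            rhs += Xn.T @ (O.T @ Q @ b)
        a = -np.linalg.solve(lhs, rhs)      # vec(A-hat), per Equation 31
        return a.reshape(3, 3)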
[0062] In many cases, both feature items in the projective imagery as well as the imagery itself can be used to drive the deformation of the avatar. Augmentation of the source data to incorporate source imagery may improve the quality of the fit between the deformed avatar and the target face. To implement this, one more term is added to the deformation techniques. Let $I$ be the measured imagery, which in general includes multiple measured images $I^{(v)}, v = 1, 2, \ldots$, corresponding to a sequence of pixels indexed by $p \in [0,1]^2$, with the projection mapping points

$x = (x, y, z) \in \mathbb{R}^3 \mapsto p(x) = \left(p_1(x) = \frac{\alpha_1 x}{z}, \; p_2(x) = \frac{\alpha_2 y}{z}\right).$

For the discrete setting of pixels in the source image plane with a color (R,G,B) template, the observed projective $\Pi(p)$ is an (R,G,B) vector and the projective matrix becomes

$P_x = \begin{pmatrix} \frac{\alpha_1}{z+n} & 0 & 0 \\ 0 & \frac{\alpha_2}{z+n} & 0 \end{pmatrix},$

operating on points $(x, y, z) \in \mathbb{R}^3$ according to

$P_x: (x, y, z) \mapsto (p_1(x, y, z), p_2(x, y, z)) = \begin{pmatrix} \frac{\alpha_1}{z+n} & 0 & 0 \\ 0 & \frac{\alpha_2}{z+n} & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix},$

the point $x(p)$ being the revealed point which is not occluded (the closest point to the projection on the ray) on the 3D CAD model which projects to the point $p$ in the image plane. Next, the projected template matrices resulting from finite differences on the (R,G,B) components at the projective coordinate $p$ of the template value are required. The norm is interpreted componentwise:

$\nabla \Pi'(p) = \begin{pmatrix} \frac{\partial \Pi(p)^r}{\partial p_1} & \frac{\partial \Pi(p)^r}{\partial p_2} \\ \frac{\partial \Pi(p)^g}{\partial p_1} & \frac{\partial \Pi(p)^g}{\partial p_2} \\ \frac{\partial \Pi(p)^b}{\partial p_1} & \frac{\partial \Pi(p)^b}{\partial p_2} \end{pmatrix}, \quad \dot{I}(p) \doteq I(p) - \Pi(p),$ (Equation 32)

$\tilde{\nabla} \Pi'(p) = \nabla \Pi'(p) P_{x(p)} = \begin{pmatrix} \frac{\partial \Pi(p)^r}{\partial p_1} \frac{\alpha_1}{z(p)+n} & \frac{\partial \Pi(p)^r}{\partial p_2} \frac{\alpha_2}{z(p)+n} & 0 \\ \frac{\partial \Pi(p)^g}{\partial p_1} \frac{\alpha_1}{z(p)+n} & \frac{\partial \Pi(p)^g}{\partial p_2} \frac{\alpha_2}{z(p)+n} & 0 \\ \frac{\partial \Pi(p)^b}{\partial p_1} \frac{\alpha_1}{z(p)+n} & \frac{\partial \Pi(p)^b}{\partial p_2} \frac{\alpha_2}{z(p)+n} & 0 \end{pmatrix},$ (Equation 33)

with matrix norm $\|A - B\|^2 = |A^r - B^r|^2 + |A^g - B^g|^2 + |A^b - B^b|^2$.
[0063] Associated with each image is a translation/rotation assumed
already known from the previous rigid motion calculation
techniques. The following assumes there is one 2D image, with O,b
identity, and let any of the movements be represented as
x.fwdarw.x+u(x). Then u(x)=Ox-x is a rotational motion, u(x)=b is a
constant velocity, u(x)=.SIGMA..sub.ie.sub.iE.sub.i(x) is a
constrained motion to a basis function such as "chin rotation,"
"eyebrow lift," etc., and the general motion u is given by:
min u p .di-elect cons. [ 0 , 1 ] 2 I ( p ) - ( p ) - .gradient. t
( p ) P x ( p ) u ( x ( p ) ) 2 = min u p .di-elect cons. [ 0 , 1 )
2 I . ( p ) - .gradient. ~ t ( p ) u ( x ( p ) ) 2 ( Equation 34 )
= min u - 2 p .di-elect cons. [ 0 , 1 ] 2 u ( x ( p ) ) t I . ( p )
.gradient. ~ ( p ) + p .di-elect cons. [ 0 , 1 ] 2 u ( x ( p ) ) t
.gradient. ~ ( p ) .gradient. ~ t ( p ) u ( x ( p ) ) . ( Equation
35 ) ##EQU00038##
This is linear in $u$, so closed-form expressions exist for each of
the forms of $u$; for example, for the unconstrained general spline
motion,

\[ u(x(p)) = \left( \tilde{\nabla} \Pi(p)\, \tilde{\nabla}^t \Pi(p) \right)^{-1} \dot{I}(p)\, \tilde{\nabla} \Pi(p). \quad \text{(Equation 36)} \]
This approach can be incorporated into the other embodiments of the
present invention for the various possible deformations described
herein.
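For illustration, a minimal NumPy sketch of the per-pixel closed form
of Equation 36 follows. It assumes the matrices of Equation 33 have
been precomputed, treats each pixel independently (ignoring any
smoothness coupling through $L$), and adds a small ridge term because
the third column of $\tilde{\nabla}^t \Pi(p)$ is zero, making the
normal matrix singular; all names here are illustrative.

    import numpy as np

    def deformation_update(I, Pi, grad_tilde, ridge=1e-6):
        """Per-pixel closed-form deformation in the spirit of Equation 36.

        I, Pi:      (H, W, 3) measured image and projected template (R,G,B).
        grad_tilde: (H, W, 3, 3) the matrices grad~^t Pi(p) of Equation 33,
                    assumed precomputed via finite differences and P_x(p).
        Returns:    (H, W, 3) displacements u(x(p)).
        """
        I_dot = I - Pi                      # residual I.(p) = I(p) - Pi(p)
        H, W, _ = I.shape
        u = np.zeros((H, W, 3))
        for i in range(H):
            for j in range(W):
                G = grad_tilde[i, j]        # rows: R,G,B; third column is zero
                A = G.T @ G                 # grad~ Pi grad~^t Pi
                b = G.T @ I_dot[i, j]       # I.(p) grad~ Pi(p)
                # ridge term: A is singular because G's third column is zero
                u[i, j] = np.linalg.solve(A + ridge * np.eye(3), b)
        return u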
[0064] In the situation where large numbers of feature points,
curves, and subvolumes are to be automatically generated from the
source projection(s) and 3D data (if any), image matching is
performed directly on the source imagery or on the fundamental 3D
volumes into which the source object can be divided. For the case
where the avatar is generated from 2D projective photographic
images, the measured target projective image has labeled points,
curves, and subregions generated by diffeomorphic image matching.
Defining a template projective exemplar face with all of the
labeled submanifolds from the avatar, the projective exemplar can
be transformed bijectively via the diffeomorphisms onto the target
candidate photograph, thereby automatically labeling the target
photograph into its constituent submanifolds. Given these
submanifolds, the avatars can then be matched or transformed into
the labeled photographs. Accordingly, in the image plane a
deformation $\varphi : x \mapsto \varphi(x)$ satisfying
$\varphi = \int_0^1 v_t(\varphi_t(x))\,dt + x,\; x \in \mathbb{R}^2$,
is generated. The template and target images $I_0, I_1$ are
transformed to satisfy

\[ \min_{v_t,\, t \in [0,1]} \int_0^1 \|L v_t\|^2\, dt + \left\| I_0 \circ \varphi^{-1} - I_1 \right\|^2. \quad \text{(Equation 37)} \]
The given diffeomorphism is applied to labeled points, curves, and
areas in the template I.sub.0 thereby labeling those points,
curves, and areas in the target photograph.
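As an illustration of this label-transfer step, the following sketch
maps labeled template landmarks through a dense deformation field
assumed to have been produced by the matching of Equation 37; the
array layout and the bilinear interpolation are implementation
choices, not prescribed by the text.

    import numpy as np

    def transfer_labels(points, phi_grid):
        """Carry labeled template landmarks onto the target photograph.

        points:   (N, 2) float (row, col) positions of labeled points in I0.
        phi_grid: (H, W, 2) dense field giving phi(x) at each template pixel,
                  assumed produced by the matching of Equation 37.
        Returns:  (N, 2) positions of the same landmarks in the target I1,
                  obtained by bilinear interpolation of phi_grid.
        """
        H, W, _ = phi_grid.shape
        out = np.empty((len(points), 2))
        for k, (r, c) in enumerate(points):
            r0, c0 = int(np.floor(r)), int(np.floor(c))
            r1, c1 = min(r0 + 1, H - 1), min(c0 + 1, W - 1)
            fr, fc = r - r0, c - c0
            out[k] = ((1 - fr) * (1 - fc) * phi_grid[r0, c0]
                      + (1 - fr) * fc * phi_grid[r0, c1]
                      + fr * (1 - fc) * phi_grid[r1, c0]
                      + fr * fc * phi_grid[r1, c1])
        return out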
[0065] When the target source data are in 3D, the diffeomorphisms
are used to define bijective correspondence in the 3D background
space, and the matching is performed in the volume rather than in
the image plane.
[0066] The following techniques may be used to select avatar models
automatically. Given a collection of avatar models {CAD.sup.a,a=1,
2, . . . }, and a set of measured photographs of the face of one
individual, the task is to select the avatar model which is most
representative of the individual face being analyzed. Let the
avatar models be specified via points, curves, surfaces, or
subvolumes. Assume, for example, $N$ initial and target feature
points $x_n^a, x_n \in \mathbb{R}^d,\; n = 1, \ldots, N$, with
$x_n = x_n^a + u(x_n^a)$, one set for each avatar $a = 1, 2, \ldots$.
[0067] In one embodiment, the avatar is deformed with small
deformations only and no rigid motions. For this embodiment, it is
assumed that the measured feature items are all points from a
single camera view which generated the projected source image in
which the feature points were measured. The matching
$x_n^a \mapsto x_n^a + u(x_n^a),\; n = 1, \ldots, N$ is constructed,
and the $CAD^a$ model of smallest metric distance is selected; the
optimum avatar model is the one closest to the candidate in the
metric. Any of a variety of distance functions may be used in the
selection of the avatar, including the large deformation metric from
the diffeomorphism mapping technique described above, the real-time
metric described above, the Euclidean metric, and the similitude
metric of Kendall.
The technique is described herein using the real-time metric. When
there is no rigid motion, the CAD model is selected to minimize the
metric based on one or several sets of features from photographs,
here described for one photo:

\[ \widehat{CAD} = \arg\min_{CAD^a,\, a = 1, 2, \ldots}\; \min_{u,\, z_n} \|Lu\|^2 + \sum_{n=1}^N \left\| \left( x_n^a + u(x_n^a) \right) - z_n P_n \right\|^2 = \arg\min_{CAD^a,\, a = 1, 2, \ldots}\; \min_u \|Lu\|^2 + \sum_{n=1}^N \left( x_n^a + u(x_n^a) \right)^t Q_n \left( x_n^a + u(x_n^a) \right). \quad \text{(Equation 38)} \]
In other embodiments, metrics incorporating unknown or known rigid
motions, large deformation metrics, or affine motions can be used,
such as those described in equations 28 and 29, respectively.
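The selection loop itself is simple once a metric is fixed. The
sketch below is illustrative only: `feature_points` is a hypothetical
attribute, and `fit_cost` stands in for whichever metric (real-time,
large deformation, Euclidean, or similitude) is chosen.

    import numpy as np

    def select_avatar(cad_models, measurements, fit_cost):
        """Select the CAD model of smallest metric distance (Equation 38).

        cad_models:   list of avatars; `feature_points` is a hypothetical
                      (N, 3) attribute in correspondence with the measurements.
        measurements: the measured feature data (e.g., the Q_n terms of
                      Equation 38 or back-projected 3D points).
        fit_cost:     callable returning ||Lu||^2 plus the matching term
                      for the optimal deformation u.
        """
        costs = [fit_cost(m.feature_points, measurements) for m in cad_models]
        best = int(np.argmin(costs))
        return cad_models[best], costs[best]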
[0068] For selecting avatars given 3D information, such as point,
curve, surface, and/or subvolume features in the 3D volume, the
metric is selected which minimizes the distance between the
measurements and the family of candidate CAD models. First, the
matrix $K$ defining the quadratic form metric measuring the distance
is computed:

\[ K = \begin{pmatrix} K(x_1^a, x_1^a) & K(x_1^a, x_2^a) & \cdots & K(x_1^a, x_N^a) \\ \vdots & \vdots & \ddots & \vdots \\ K(x_N^a, x_1^a) & K(x_N^a, x_2^a) & \cdots & K(x_N^a, x_N^a) \end{pmatrix}; \quad \text{(Equation 39)} \]
$K$ may be, for example, $K(x_i, x_j) = \operatorname{diag}\!\left( e^{-a \|x_i - x_j\|} \right)$.
Next, the metric between the CAD models and the candidate
photographic feature points is computed according to
$x_n^a \mapsto x_n^1 = A x_n^a + b + u(x_n^a),\; n = 1, \ldots, N$,
and the $CAD^a$ of smallest distance is selected. Either exact
matching ($\sigma = 0$) or inexact matching ($\sigma \neq 0$) can be
used:

\[ \text{exact:} \quad \widehat{CAD} = \arg\min_{CAD^a,\, a = 1, 2, \ldots}\; \min_{A, b} \sum_{i,j=1}^N (A x_i + b - x_i^1)^t\, (K^{-1})_{ij}\, (A x_j + b - x_j^1); \quad \text{(Equation 40)} \]

\[ \text{inexact:} \quad \widehat{CAD} = \arg\min_{CAD^a,\, a = 1, 2, \ldots}\; \min_{A, b} \sum_{i,j=1}^N (A x_i + b - x_i^1)^t \left( (K + \sigma^2 I)^{-1} \right)_{ij} (A x_j + b - x_j^1). \quad \text{(Equation 41)} \]

The minimum norm is determined by the error between the CAD model
feature points and the photographic feature points.
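A sketch of the kernel construction and the quadratic form of
Equations 39-41 follows, treating the kernel as scalar-valued for
simplicity (the text allows a diagonal matrix-valued kernel) and
assuming the affine fit $A, b$ has already been solved:

    import numpy as np

    def kernel_matrix(points, a=1.0):
        """Scalar Gaussian kernel of Equation 39: K_ij = exp(-a ||x_i - x_j||)."""
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        return np.exp(-a * d)

    def quadratic_form_distance(residuals, K, sigma=0.0):
        """Evaluate Equations 40-41 for residuals r_i = A x_i + b - x_i^1.

        residuals: (N, d) array; sigma = 0 gives exact matching,
                   sigma != 0 the inexact variant.
        """
        Kinv = np.linalg.inv(K + sigma**2 * np.eye(len(K)))
        # sum_{i,j} (K^{-1})_{ij} r_i . r_j
        return float(np.einsum('id,ij,jd->', residuals, Kinv, residuals))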
[0069] The present invention may also be used to match to
articulated target objects. The diffeomorphism and real-time
mapping techniques carry the template 3D representations
bijectively onto the target models, carrying all of the information
in the template. The template models are labeled with different
regions corresponding to different components of the articulated
object. For example, in the case of a face, the articulated regions
may include teeth, eyebrows, eyelids, eyes, and jaw. Each of these
subcomponents can be articulated during motion of the model
according to an articulation model specifying allowed modes of
articulation. The mapping techniques carry these triangulated
subcomponents onto the targets, thereby labeling them with their
subcomponents automatically. The resulting selected CAD model
therefore has its constituent parts automatically labeled, thereby
allowing each avatar to be articulated during motion sequences.
[0070] In the case when direct 3D measurements of the source object
are available, $x_n \mapsto y_n \in \mathbb{R}^3,\; n = 1, \ldots, N$,
from points, curves, surfaces, or subvolumes, the techniques for
determining the rotation/translation correspondences are unchanged.
However, since the matching terms involve direct measurements in
the volume, there is no need for the intermediate step of determining
the dependence on the unknown z-depth via the MMSE technique.
[0071] Accordingly, the best matching rigid motion corresponds
to:

\[ \min_{O, b} \sum_{n=1}^N \|O x_n + b - y_n\|^2. \quad \text{(Equation 42)} \]

[0072] The real-time deformation corresponds to:

\[ \min_u \|Lu\|^2 + \sum_{n=1}^N \left\| \left( x_n + u(x_n) \right) - y_n \right\|^2. \quad \text{(Equation 43)} \]

[0073] The diffeomorphism deformation, with
$\varphi = \int_0^1 v_t(\varphi_t(x))\,dt + x,\; x \in \mathbb{R}^3$,
corresponds to:

\[ \min_{v_t,\, t \in [0,1]} \int_0^1 \|L v_t\|^2\, dt + \sum_{n=1}^N \|\varphi(x_n) - y_n\|^2. \quad \text{(Equation 44)} \]
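Equation 42 is the classical absolute-orientation problem. The text
does not prescribe a solver, so the standard SVD-based (Procrustes)
construction below is offered as one conventional choice, not as the
patent's method:

    import numpy as np

    def best_rigid_motion(x, y):
        """Solve min_{O,b} sum_n ||O x_n + b - y_n||^2 (Equation 42).

        x, y: (N, 3) corresponding model and measured 3D points.
        Returns the rotation O (3, 3) and translation b (3,).
        """
        xc, yc = x.mean(axis=0), y.mean(axis=0)
        H = (x - xc).T @ (y - yc)                    # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        O = Vt.T @ D @ U.T                           # proper rotation, det = +1
        b = yc - O @ xc
        return O, b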
[0074] The techniques described herein also allow for the automated
calibration of camera parameters, such as the aspect ratio and field
of view. The set of features $x_j = (x_j, y_j, z_j),\; j = 1, \ldots, N$
is defined on the CAD model. The positive-depth projective geometry
mapping, with

\[ a_1 = \frac{1}{\gamma_1}, \qquad a_2 = \frac{1}{\gamma_2}, \]

is defined according to

\[ (x, y, z) \mapsto p(x, y, z) = \left( \frac{x}{\gamma_1 z},\; \frac{y}{\gamma_2 z} \right), \qquad z \in [0, \infty),\; n > 0. \]

Given are observations of some features through the projective
geometry

\[ p_j = \left( \frac{x_j}{\gamma_1 z_j},\; \frac{y_j}{\gamma_2 z_j} \right). \]
[0075] The calibration of the camera is determined under the
assumption that there is no transformation (affine or other) of the
avatar. The z value is parameterized by incorporating the frustum
distance $n$, so that all depth coordinates are the above coordinates
plus the frustum depth. Videos can show different aspect ratios

\[ AR = \frac{a_1}{a_2} \]

and fields of view

\[ FOV = 2 \tan^{-1} \frac{1}{a_1}. \]

The technique estimates the aspect ratios $\gamma_1, \gamma_2$ from
measured points $P_i = (\gamma_1 P_{i1},\, \gamma_2 P_{i2},\, 1)^t,\; i = 1, \ldots, N$:

\[ \min_{O, b, \gamma_1, \gamma_2, z_i} \sum_{i=1}^N \|O x_i + b - z_i P_i\|^2 = \min_{O, b, \gamma_1, \gamma_2} \sum_{i=1}^N \|O x_i + b\|^2 - \sum_{i=1}^N \frac{\left| (O x_i + b)^t P_i \right|^2}{\|P_i\|^2}. \quad \text{(Equation 45)} \]
[0076] Using the initialization
$\gamma_1^{new} = \gamma_2^{new} = 1$, the calculation is run to
convergence.

[0077] In the first step, the data terms

\[ P_i = \begin{pmatrix} \gamma_1^{old} P_{i1} \\ \gamma_2^{old} P_{i2} \\ 1 \end{pmatrix} \]

are formed, and the following rotations/translations are
computed:

\[ Q_i = \left( I - \frac{P_i P_i^t}{\|P_i\|^2} \right), \quad \bar{Q} = \sum_{i=1}^N Q_i, \quad X_Q = \sum_{i=1}^N Q_i x_i, \quad M_i = x_i - \bar{Q}^{-1} X_Q; \quad \text{(Equation 46)} \]

\[ \hat{O} = \arg\min_O\; O^t \left( \sum_{i=1}^N M_i^t Q_i M_i \right) O, \quad \hat{b} = -\left( \sum_{i=1}^N Q_i \right)^{-1} \sum_{i=1}^N Q_i \hat{O} x_i, \quad \psi_i = \begin{pmatrix} (\hat{O} x_i + \hat{b})_1 P_{i1} \\ (\hat{O} x_i + \hat{b})_2 P_{i2} \\ (\hat{O} x_i + \hat{b})_3 \end{pmatrix}. \quad \text{(Equation 47)} \]
[0078] Next, the expression

\[ \max_{\gamma_1, \gamma_2} \sum_{i=1}^N \frac{\left| \psi_{i1} \gamma_1 + \psi_{i2} \gamma_2 + \psi_{i3} \right|^2}{P_{i1}^2 \gamma_1^2 + P_{i2}^2 \gamma_2^2 + 1} \]

is maximized using an optimization method, such as Newton-Raphson,
gradient, or conjugate gradient. Using the gradient algorithm, for
example, the calculation is run to convergence, and the first step
is repeated. The gradient method is shown here, with the step size
selected for stability:

\[ \begin{pmatrix} \gamma_1^{new} \\ \gamma_2^{new} \end{pmatrix} = \begin{pmatrix} \gamma_1^{old} \\ \gamma_2^{old} \end{pmatrix} + \partial_\gamma(\gamma^{old}) \cdot \text{step-size}, \quad \text{(Equation 48)} \]

\[ \partial_{\gamma_1} = \sum_{i=1}^N \frac{2 \psi_{i1} \left( \psi_{i1} \gamma_1 + \psi_{i2} \gamma_2 + \psi_{i3} \right)}{P_{i1}^2 \gamma_1^2 + P_{i2}^2 \gamma_2^2 + 1} - \sum_{i=1}^N \frac{2 P_{i1}^2 \gamma_1 \left( \psi_{i1} \gamma_1 + \psi_{i2} \gamma_2 + \psi_{i3} \right)^2}{\left( P_{i1}^2 \gamma_1^2 + P_{i2}^2 \gamma_2^2 + 1 \right)^2}, \quad \text{(Equation 49)} \]

\[ \partial_{\gamma_2} = \sum_{i=1}^N \frac{2 \psi_{i2} \left( \psi_{i1} \gamma_1 + \psi_{i2} \gamma_2 + \psi_{i3} \right)}{P_{i1}^2 \gamma_1^2 + P_{i2}^2 \gamma_2^2 + 1} - \sum_{i=1}^N \frac{2 P_{i2}^2 \gamma_2 \left( \psi_{i1} \gamma_1 + \psi_{i2} \gamma_2 + \psi_{i3} \right)^2}{\left( P_{i1}^2 \gamma_1^2 + P_{i2}^2 \gamma_2^2 + 1 \right)^2}. \quad \text{(Equation 50)} \]
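The inner gradient-ascent loop of Equations 48-50 can be sketched as
follows; the step size, iteration count, and array layout are
illustrative assumptions, and in the full procedure this loop
alternates with re-estimating $O$ and $b$ (Equations 46-47) until
both converge:

    import numpy as np

    def estimate_gammas(psi, P, gamma=(1.0, 1.0), step_size=1e-3, iters=500):
        """Gradient ascent of Equations 48-50 for gamma_1, gamma_2.

        psi: (N, 3) vectors from Equation 47; P: (N, 2) measured points.
        """
        gamma = np.asarray(gamma, dtype=float)
        for _ in range(iters):
            g1, g2 = gamma
            f = psi[:, 0] * g1 + psi[:, 1] * g2 + psi[:, 2]   # numerator root
            d = P[:, 0]**2 * g1**2 + P[:, 1]**2 * g2**2 + 1.0 # denominator
            grad = np.array([
                np.sum(2 * psi[:, 0] * f / d)
                - np.sum(2 * P[:, 0]**2 * g1 * f**2 / d**2),
                np.sum(2 * psi[:, 1] * f / d)
                - np.sum(2 * P[:, 1]**2 * g2 * f**2 / d**2),
            ])
            gamma = gamma + step_size * grad                  # Equation 48
        return gamma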
[0079] The techniques described herein may be used to compare a
source 3D object to a single reference object. 2D representations
of the source object and the reference object are created, and the
correspondence between them is characterized using mathematical
optimization and projective geometry. Typically, the correspondence
is characterized by specifying the viewpoint from which the 2D
source projection was captured.
[0080] Refer now to FIG. 4, which illustrates a hardware system 400
incorporating the invention. As indicated therein, the system
includes a video source 402 (e.g., a video camera or a scanning
device) which supplies a still input image to be analyzed. The
output of the video source 402 is digitized as a frame into an
array of pixels by a digitizer 404. The digitized images are
transmitted along the system bus 406 over which all system
components communicate, and may be stored in a mass storage device
(such as a hard disc or optical storage unit) 408 as well as in
main system memory 410 (specifically, within a partition defining a
series of identically sized input image buffers) 412.
[0081] The operation of the illustrated system is directed by a
central-processing unit ("CPU") 414. To facilitate rapid execution
of the image-processing operations hereinafter described, the
system preferably contains a graphics or image-processing board
416; this is a standard component well-known to those skilled in
the art.
[0082] The user interacts with the system using a keyboard 418 and
a position-sensing device (e.g., a mouse) 420. The output of either
device can be used to designate information or select particular
points or areas of a screen display 422 to direct functions
performed by the system.
[0083] The main memory 410 contains a group of modules that control
the operation of the CPU 414 and its interaction with the other
hardware components. An operating system 424 directs the execution
of low-level, basic system functions such as memory allocation,
file management and operation of mass storage devices 408. At a
higher level, the analyzer 426, implemented as a series of stored
instructions, directs execution of the primary functions performed
by the invention, as discussed below; and instructions defining a
user interface 428 allow straightforward interaction over screen
display 422. The user interface 428 generates words or graphical
images on the display 422 to prompt action by the user, and accepts
commands from the keyboard 418 and/or position-sensing device 420.
Finally, the memory 410 includes a partition 430 for storing a
database of 3D reference avatars, as described above.
[0084] The contents of each image buffer 412 define a "raster,"
i.e., a regular 2D pattern of discrete pixel positions that
collectively represent an image and may be used to drive (e.g., by
means of image-processing board 416 or an image server) screen
display 422 to display that image. The content of each memory
location in a frame buffer directly governs the appearance of a
corresponding pixel on the display 422.
[0085] It must be understood that although the modules of main
memory 410 have been described separately, this is for clarity of
presentation only; so long as the system performs all the necessary
functions, it is immaterial how they are distributed within the
system and the programming architecture thereof. Likewise, though
conceptually organized as grids, pixelmaps need not actually be
stored digitally in this fashion. Rather, for convenience of memory
utilization and transmission, the raster pattern is usually encoded
as an ordered array of pixels.
[0086] As noted above, execution of the key tasks associated with
the present invention is directed by the analyzer 426, which
governs the operation of the CPU 414 and controls its interaction
with main memory 410 in performing the steps necessary to match and
deform reference 3D representations to fit a target multifeatured
object. FIG. 5 illustrates the components of a preferred
implementation of the analyzer 426. The projection module 502 takes
a 3D model and makes a 2D projection of it onto any chosen plane.
In general, an efficient projection module 502 will be required in
order to create numerous projections over the space of rotations
and translations for each of the candidate reference avatars. The
deformation module 504 performs one or more types of deformation on
an avatar in order to make it more closely resemble the source
object. The deformation is performed in 3D space, with every point
defining the avatar mesh being free to move in order to optimize
the fit to the conditional mean estimates of the reverse projected
feature items from the source imagery. In general, deformation is
only applied to the best-fitting reference object, if more than one
reference object is supplied. The rendering module 506 allows for
the rapid projection of a 3D avatar into 2D with the option of
including the specification of the avatar lighting. The 2D
projection corresponds to the chosen lighting of the 3D avatar. The
feature detection module 508 searches for specific feature items in
the 2D source projection. The features may include eyes, nostrils,
lips, and may incorporate probes that operate at several different
pixel scales.
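The division of labor among modules 502-508 can be summarized as
interfaces. The sketch below uses Python protocols purely as
illustration; every class, type, and method name is a stand-in rather
than anything specified herein:

    from typing import Protocol, Sequence, Tuple

    Mesh = object                        # illustrative stand-in types
    Image = object
    Feature = Tuple[str, float, float]   # e.g., ("left_eye", px, py)

    class ProjectionModule(Protocol):
        """Module 502: project a 3D model onto any chosen image plane."""
        def project(self, model: Mesh, rotation, translation) -> Image: ...

    class DeformationModule(Protocol):
        """Module 504: move avatar mesh points in 3D toward the conditional
        mean estimates of reverse-projected feature items."""
        def deform(self, avatar: Mesh, targets_3d) -> Mesh: ...

    class RenderingModule(Protocol):
        """Module 506: rapid 2D projection under specified avatar lighting."""
        def render(self, avatar: Mesh, lighting) -> Image: ...

    class FeatureDetectionModule(Protocol):
        """Module 508: detect eyes, nostrils, lips, etc., possibly with
        probes at several pixel scales."""
        def detect(self, image: Image) -> Sequence[Feature]: ...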
[0087] FIG. 6 illustrates the functions of the invention performed
in main memory. In step 602, the system examines the source imagery
and automatically detects features of a face, such as eyeballs,
nostrils, and lips that can be used for matching purposes, as
described above. In step 604, the detected feature items are
reverse projected into the coordinate frame of the candidate
avatar, as described above and using equation 7. In step 606, the
optimum rotation/translation of the candidate avatar is estimated
using the techniques described above and using equations 8, 9 and
10. In step 608, any prior information that may be available about
the position of the source object with respect to the available 2D
projections is added into the computation, as described herein
using equations 11-13. When 3D measurements of the source are
available, this data is used to constrain the rigid motion search
as shown in step 610 and as described above with reference to
equations 41-43. When the rotation/translation search 606 is
completed over all the reference 3D avatars, the best-fitting
avatar is selected in step 612, as described above, with reference
to equations 38-40. Subsequently, the best-fitting avatar located
in step 612 is deformed in step 614. 3D measurements of the source
object 610, if any, are used to constrain the deformation 614. In
addition, portions of the source imagery 616 itself may be used to
influence the deformation 614.
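The flow of steps 602-618 can be condensed into pseudocode-style
Python. Every helper on the `ops` bundle below (detect,
reverse_project, rigid_fit, metric, deform) is a hypothetical
stand-in for the corresponding technique described above, not an API
defined by this disclosure:

    def match_and_deform(source_images, avatars, ops):
        """Condensed flow of FIG. 6 (steps 602-618); `ops` supplies
        implementations of the techniques described above."""
        features = ops.detect(source_images)                # step 602
        scored = []
        for avatar in avatars:
            pts3d = ops.reverse_project(features, avatar)   # step 604 (Eq. 7)
            O, b = ops.rigid_fit(pts3d, avatar)             # step 606 (Eqs. 8-10)
            cost = ops.metric(avatar, pts3d, O, b)          # Eqs. 38-40
            scored.append((cost, avatar, (O, b)))
        cost, best, pose = min(scored, key=lambda s: s[0])  # step 612
        return ops.deform(best, features, pose, source_images)  # steps 614-616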
[0088] The invention provides for several different kinds of
deformation which may be optionally applied to the best-fitting
reference avatar in order to improve its correspondence with the
target object. The deformations may include real-time deformation
without rigid motions in which a closed form expression is found
for the deformation, as described above using equations 18, 19. A
diffeomorphic deformation of the avatar with no rigid motions may
be applied (equations 22-24). Alternatively, a real-time deformation
with unknown rigid motion of the avatar may be deployed (equations
28, 29). A real-time diffeomorphic deformation may be applied to the
avatar by iterating the real-time deformation. The
avatar may be deformed using affine motions (equations 30, 31). The
deformation of the avatar may be guided by matching a projection to
large numbers of feature items in the source data, including the
identification of submanifolds within the avatar, as described
above with reference to equation 37. When the target object is
described by an articulated model, the deformations described above
may be applied to each articulated component separately.
[0089] The invention enables camera parameters, such as aspect
ratio and field of view, to be estimated, as shown in step 618 and
described above with reference to equations 44-49.
[0090] As noted previously, while certain aspects of the hardware
implementation have been described for the case where the target
object is a face and the reference object is an avatar, the
invention is not limited to the matching of faces, but may be used
for matching any multifeatured object using a database of reference
3D representations that correspond to the generic type of the
target object to be matched.
[0091] It will therefore be seen that the foregoing represents a
highly extensible and advantageous approach to the generation of 3D
models of a target multifeatured object when only partial
information describing the object is available. The terms and
expressions employed herein are used as terms of description and
not of limitation, and there is no intention, in the use of such
terms and expressions, of excluding any equivalents of the features
shown and described or portions thereof, but it is recognized that
various modifications are possible within the scope of the
invention claimed. For example, the various modules of the
invention can be implemented on a general-purpose computer using
appropriate software instructions, or as hardware circuits, or as
mixed hardware-software combinations (wherein, for example, pixel
manipulation and rendering is performed by dedicated hardware
components).
* * * * *