U.S. patent application number 13/774983 was filed with the patent office on 2013-02-22 and published on 2013-11-28 for systems and methods for generating a 3-D model of a user for a virtual try-on product.
This patent application is currently assigned to 1-800 CONTACTS, INC. The applicant listed for this patent is 1-800 CONTACTS, INC. Invention is credited to Ryan Engle and Darren Turetzky.
Application Number: 13/774983
Publication Number: 20130314401
Family ID: 49621242
Filed: February 22, 2013
Published: November 28, 2013

United States Patent Application 20130314401
Kind Code: A1
Engle; Ryan; et al.
November 28, 2013
SYSTEMS AND METHODS FOR GENERATING A 3-D MODEL OF A USER FOR A
VIRTUAL TRY-ON PRODUCT
Abstract
A computer-implemented method for generating a three-dimensional
(3-D) model of a user is described. A plurality of images of a user are
obtained. An angle of view relative to the user pictured in at
least one of the plurality of images is calculated. It is
determined whether the calculated angle of view matches a
predetermined viewing angle. Upon determining the calculated angle
of view matches the predetermined viewing angle, at least one of
the plurality of images is selected.
Inventors: Engle; Ryan; (Pflugerville, TX); Turetzky; Darren; (Cedar Park, TX)
Applicant: 1-800 CONTACTS, INC. (Draper, UT, US)
Assignee: 1-800 CONTACTS, INC. (Draper, UT)
Family ID: 49621242
Appl. No.: 13/774983
Filed: February 22, 2013
Related U.S. Patent Documents

Application Number    Filing Date     Patent Number
61650983              May 23, 2012    --
61735951              Dec 11, 2012    --
Current U.S. Class: 345/419
Current CPC Class: G06K 9/46 20130101; G06T 2210/61 20130101; G06T 15/04 20130101; G06T 17/30 20130101; G06K 9/00221 20130101; G06T 19/00 20130101; G06T 19/006 20130101; G06T 2210/16 20130101; G06T 17/00 20130101; G06T 2200/04 20130101; G06T 15/08 20130101; G02C 13/003 20130101
Class at Publication: 345/419
International Class: G06T 17/00 20060101 G06T017/00
Claims
1. A computer-implemented method for generating a three-dimensional
(3-D) model of a user, the method comprising: obtaining a plurality
of images of the user; calculating an angle of view relative to the
user pictured in at least one of the plurality of images;
determining whether the calculated angle of view matches a
predetermined viewing angle; and upon determining the calculated
angle of view matches the predetermined viewing angle, selecting at
least one of the plurality of images.
2. The method of claim 1, further comprising: displaying a
real-time image of the user on a display while obtaining the
plurality of images of the user; and displaying a guideline on the
display in relation to the displayed real-time image of the
user.
3. The method of claim 1, further comprising: performing a
cross-correlation algorithm to track a feature of the user in the
plurality of images, the tracked feature being used to generate a
3-D model of the user.
4. The method of claim 3, further comprising: generating texture
coordinate information from the determined 3-D structure of the
user, wherein the texture coordinate information relates a
two-dimensional (2-D) coordinate of each selected image to a 3-D
coordinate of the 3-D model of the user.
5. The method of claim 4, further comprising: generating at least
one geometry file to store data related to a 3-D structure, wherein
each at least one geometry file comprises a plurality of vertices
corresponding to a universal morphable model.
6. The method of claim 5, further comprising: calculating a
coefficient for each generated geometry file based on the
determined 3-D structure of the user.
7. The method of claim 6, further comprising: combining linearly
each generated geometry file based on each calculated coefficient
to generate a polygon mesh of the user.
8. The method of claim 7, further comprising: applying each
selected image to the generated polygon mesh of the user according
to the generated texture coordinate information.
9. The method of claim 1, wherein the predetermined viewing angle
comprises a plurality of evenly spaced 10-degree rotation
steps.
10. A computing device configured to generate a three-dimensional
(3-D) model of a user, comprising: a processor; memory in
electronic communication with the processor; instructions stored in
the memory, the instructions being executable by the processor to:
obtain a plurality of images of the user; calculate an angle of
view relative to the user pictured in at least one of the plurality
of images; determine whether the calculated angle of view matches a
predetermined viewing angle; and upon determining the calculated
angle of view matches the predetermined viewing angle, select at
least one of the plurality of images.
11. The computing device of claim 10, wherein the instructions are
executable by the processor to: display a real-time image of the
user on a display while obtaining the plurality of images of the
user; and display a guideline on the display in relation to the
displayed real-time image of the user.
12. The computing device of claim 10, wherein the instructions are
executable by the processor to: perform a cross-correlation
algorithm to track a feature of the user in the plurality of
images, the tracked feature being used to generate a 3-D model of
the user.
13. The computing device of claim 12, wherein the instructions are
executable by the processor to: generate texture coordinate
information from the determined 3-D structure of the user, wherein
the texture coordinate information relates a two-dimensional (2-D)
coordinate of each selected image to a 3-D coordinate of the 3-D
model of the user.
14. The computing device of claim 13, wherein the instructions are
executable by the processor to: generate at least one geometry file
to store data related to a 3-D structure, wherein each at least one
geometry file comprises a plurality of vertices corresponding to a
universal morphable model.
15. The computing device of claim 14, wherein the instructions are
executable by the processor to: calculate a coefficient for each
generated geometry file based on the determined 3-D structure of
the user.
16. The computing device of claim 15, wherein the instructions are
executable by the processor to: combine linearly each generated
geometry file based on each calculated coefficient to generate a
polygon mesh of the user.
17. The computing device of claim 16, wherein the instructions are
executable by the processor to: apply each selected image to the
generated polygon mesh of the user according to the generated
texture coordinate information.
18. The computing device of claim 10, wherein the predetermined
viewing angle comprises a plurality of evenly spaced 10-degree
rotation steps.
19. A computer-program product for generating a three-dimensional
(3-D) model of a user, the computer-program product comprising a
non-transitory computer-readable medium storing instructions
thereon, the instructions being executable by a processor to:
obtain a plurality of images of the user; calculate an angle of
view relative to the user pictured in at least one of the plurality
of images; determine whether the calculated angle of view matches a
predetermined viewing angle; and upon determining the calculated
angle of view matches the predetermined viewing angle, select at
least one of the plurality of images.
20. The computer-program product of claim 19, wherein the
instructions are executable by the processor to: perform a
cross-correlation algorithm to track a feature of the user in the
plurality of images, the tracked feature being used to generate a
3-D model of the user; generate texture coordinate information from
the determined 3-D structure of the user; generate at least one
geometry file to store data related to a 3-D structure; calculate a
coefficient for each generated geometry file based on the
determined 3-D structure of the user; and combine linearly each
generated geometry file based on each calculated coefficient to
generate a polygon mesh of the user.
Description
RELATED APPLICATIONS
[0001] This application claims priority to U.S. application Ser.
No. 61/650,983, entitled SYSTEMS AND METHODS TO VIRTUALLY TRY-ON
PRODUCTS, filed on May 23, 2012; and U.S. application Ser. No.
61/735,951, entitled SYSTEMS AND METHODS TO VIRTUALLY TRY-ON
PRODUCTS, filed on Dec. 11, 2012, both of which are incorporated
herein in their entirety by this reference.
BACKGROUND
[0002] The use of computer systems and computer-related
technologies continues to increase at a rapid pace. This increased
use of computer systems has influenced the advances made to
computer-related technologies. Indeed, computer systems have
increasingly become an integral part of the business world and the
activities of individual consumers. Computers have opened up an
entire industry of internet shopping. In many ways, online shopping
has changed the way consumers purchase products. For example, a
consumer may want to know what they will look like in and/or with a
product. On the webpage of a certain product, a photograph of a
model with the particular product may be shown. However, users may
want to see more accurate depictions of themselves in relation to
various products.
SUMMARY
[0003] According to at least one embodiment, a computer-implemented
method for generating a three-dimensional (3-D) model of a user is
described. A plurality of images of a user may be obtained. An
angle of view relative to the user pictured in at least one of the
plurality of images may be calculated. It may be determined whether
the calculated angle of view matches a predetermined viewing angle.
The predetermined viewing angle may include a plurality of evenly
spaced 10-degree rotation steps. Upon determining the calculated
angle of view matches the predetermined viewing angle, at least one
of the plurality of images may be selected.
[0004] In one embodiment, a real-time image of the user may be
displayed while obtaining the plurality of images of the user. A
guideline may be displayed in relation to the displayed real-time
image of the user. A cross-correlation algorithm may be performed
to track a feature between two or more of the plurality of images
of the user. A 3-D model of the user may be generated from the
detected features of the user. Texture coordinate information may
be generated from the determined 3-D structure of the user. The
texture coordinate information may relate a two-dimensional (2-D)
coordinate of each selected image to a 3-D coordinate of the 3-D
model of the user. At least one geometry file may be generated to
store data related to a 3-D structure, wherein each at least one
geometry file comprises a plurality of vertices corresponding to a
universal morphable model.
[0005] In some configurations, a coefficient may be calculated for
each generated geometry file based on the determined 3-D structure
of the user. Each generated geometry file may be combined linearly
based on each calculated coefficient to generate a polygon mesh of
the user. Each selected image may be applied to the generated
polygon mesh of the user according to the generated texture
coordinate information.
[0006] A computing device configured to generate a
three-dimensional (3-D) model of a user is also described. The
device may include a processor and memory in electronic
communication with the processor. The memory may store instructions
that are executable by the processor to obtain a plurality of
images of a user, calculate an angle of view relative to the user
pictured in at least one of the plurality of images, determine
whether the calculated angle of view matches a predetermined
viewing angle, and upon determining the calculated angle of view
matches the predetermined viewing angle, select at least one of the
plurality of images.
[0007] A computer-program product to generate a three-dimensional
(3-D) model of a user is also described. The computer-program
product may include a non-transitory computer-readable medium that
stores instructions. The instructions may be executable by a
processor to obtain a plurality of images of a user, calculate an
angle of view relative to the user pictured in at least one of the
plurality of images, determine whether the calculated angle of view
matches a predetermined viewing angle, and upon determining the
calculated angle of view matches the predetermined viewing angle,
select at least one of the plurality of images.
[0008] Features from any of the above-mentioned embodiments may be
used in combination with one another in accordance with the general
principles described herein. These and other embodiments, features,
and advantages will be more fully understood upon reading the
following detailed description in conjunction with the accompanying
drawings and claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The accompanying drawings illustrate a number of exemplary
embodiments and are a part of the specification. Together with the
following description, these drawings demonstrate and explain
various principles of the instant disclosure.
[0010] FIG. 1 is a block diagram illustrating one embodiment of an
environment in which the present systems and methods may be
implemented;
[0011] FIG. 2 is a block diagram illustrating another embodiment of
an environment in which the present systems and methods may be
implemented;
[0012] FIG. 3 is a block diagram illustrating one example of a
model generator;
[0013] FIG. 4 is a block diagram illustrating one example of an
image processor;
[0014] FIG. 5 illustrates an example arrangement for capturing an
image of a user;
[0015] FIG. 6 is a diagram illustrating an example of a device for
capturing an image of a user;
[0016] FIG. 7 illustrates an example arrangement of a virtual 3-D
space including a depiction of a 3-D model of a user;
[0017] FIG. 8 is a flow diagram illustrating one embodiment of a
method for generating a 3-D model of a user;
[0018] FIG. 9 is a flow diagram illustrating one embodiment of a
method for applying an image of a user to a polygon mesh model of
the user;
[0019] FIG. 10 is a flow diagram illustrating one embodiment of a
method for displaying a feedback image to a user; and
[0020] FIG. 11 depicts a block diagram of a computer system
suitable for implementing the present systems and methods.
[0021] While the embodiments described herein are susceptible to
various modifications and alternative forms, specific embodiments
have been shown by way of example in the drawings and will be
described in detail herein. However, the exemplary embodiments
described herein are not intended to be limited to the particular
forms disclosed. Rather, the instant disclosure covers all
modifications, equivalents, and alternatives falling within the
scope of the appended claims.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
[0022] The systems and methods described herein relate to virtually
trying on products. Three-dimensional (3-D) computer
graphics are graphics that use a 3-D representation of geometric
data that is stored in the computer for the purposes of performing
calculations and rendering two-dimensional (2-D) images. Such
images may be stored for viewing later or displayed in real-time. A
3-D space may include a mathematical representation of a 3-D
surface of an object. A 3-D model may be contained within a
graphical data file. A 3-D model may represent a 3-D object using a
collection of points in 3-D space, connected by various geometric
entities such as triangles, lines, curved surfaces, etc. Being a
collection of data (points and other information), 3-D models may
be created by hand, algorithmically (procedural modeling), or
scanned such as with a laser scanner. A 3-D model may be displayed
visually as a two-dimensional image through a process called 3-D
rendering, or used in non-graphical computer simulations and
calculations. In some cases, the 3-D model may be physically
created using a 3-D printing device.
[0023] A device may capture an image of the user and generate a 3-D
model of the user from the image. A 3-D polygon mesh of an object
may be placed in relation to the 3-D model of the user to create a
3-D virtual depiction of the user wearing the object (e.g., a pair
of glasses, a hat, a shirt, a belt, etc.). This 3-D scene may then
be rendered into a 2-D image to provide the user a virtual
depiction of the user in relation to the object. Although some of
the examples used herein describe articles of clothing,
specifically virtually trying on a pair of glasses, it is understood
that the systems and methods described herein may be used to
virtually try-on a wide variety of products. Examples of such
products may include glasses, clothing, footwear, jewelry,
accessories, hair styles, etc.
[0024] FIG. 1 is a block diagram illustrating one embodiment of an
environment 100 in which the present systems and methods may be
implemented. In some embodiments, the systems and methods described
herein may be performed on a single device (e.g., device 102). For
example, a model generator 104 may be located on the device 102.
Examples of devices 102 include mobile devices, smart phones,
personal computing devices, computers, servers, etc.
[0025] In some configurations, a device 102 may include a model
generator 104, a camera 106, and a display 108. In one example, the
device 102 may be coupled to a database 110. In one embodiment, the
database 110 may be internal to the device 102. In another
embodiment, the database 110 may be external to the device 102. In
some configurations, the database 110 may include polygon model
data 112 and texture map data 114.
[0026] In one embodiment, the model generator 104 may enable a user
to initiate a process to generate a 3-D model of the user. In some
configurations, the model generator 104 may obtain multiple images
of the user. For example, the model generator 104 may capture
multiple images of a user via the camera 106. For instance, the
model generator 104 may capture a video (e.g., a 5 second video)
via the camera 106. In some configurations, the model generator 104
may use polygon model data 112 and texture map data 114 to generate
a 3-D representation of a user. For example, the polygon model data
112 may include vertex coordinates of a polygon model of the user's
head. In some embodiments, the model generator 104 may use color
information from the pixels of multiple images of the user to
create a texture map of the user. In some configurations, the model
generator 104 may generate and/or obtain a 3-D representation of a
product. For example, the polygon model data 112 and texture map
data 114 may include a 3-D model of a pair of glasses. In some
embodiments, the polygon model data 112 may include a polygon model
of an object. In some configurations, the texture map data 114 may
define a visual aspect (e.g., pixel information) of the 3-D model
of the object such as color, texture, shadow, or transparency.
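The capture step above lends itself to a short sketch. The following Python snippet, a minimal illustration assuming OpenCV, the default device camera (index 0), and an assumed 30 frames-per-second rate, grabs roughly five seconds of frames, mirroring the 5-second-video example:

```python
import cv2

def capture_scan_frames(duration_s=5.0, fps=30):
    """Collect roughly duration_s seconds of frames from the camera.

    The 5-second duration echoes the example in the text; the camera
    index and frame rate are assumptions for illustration.
    """
    cap = cv2.VideoCapture(0)
    frames = []
    for _ in range(int(duration_s * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames
```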
[0027] In some configurations, the model generator 104 may generate
a virtual try-on image by rendering a virtual 3-D space that
contains a 3-D model of a user and a 3-D model of a product. In one
example, the virtual try-on image may illustrate the user with a
rendered version of the product. In some configurations, the model
generator 104 may output the virtual try-on image to the display
108 to be displayed to the user.
[0028] FIG. 2 is a block diagram illustrating another embodiment of
an environment 200 in which the present systems and methods may be
implemented. In some embodiments, a device 102-a may communicate
with a server 206 via a network 204. Examples of networks 204
include local area networks (LAN), wide area networks (WAN),
virtual private networks (VPN), wireless networks (using 802.11,
for example), cellular networks (using 3G and/or LTE, for example),
etc. In some configurations, the network 204 may include the
internet. In some configurations, the device 102-a may be one
example of the device 102 illustrated in FIG. 1. For example, the
device 102-a may include the camera 106, the display 108, and an
application 202. It is noted that in some embodiments, the device
102-a may not include a model generator 104. In some embodiments,
both a device 102-a and a server 206 may include a model generator
104 where at least a portion of the functions of the model
generator 104 are performed separately and/or concurrently on both
the device 102-a and the server 206.
[0029] In some embodiments, the server 206 may include the model
generator 104 and may be coupled to the database 110. For example,
the model generator 104 may access the polygon model data 112 and
the texture map data 114 in the database 110 via the server 206.
The database 110 may be internal or external to the server 206.
[0030] In some configurations, the application 202 may capture
multiple images via the camera 106. For example, the application
202 may use the camera 106 to capture a video. Upon capturing the
multiple images, the application 202 may process the multiple
images to generate result data. In some embodiments, the
application 202 may transmit the multiple images to the server 206.
Additionally or alternatively, the application 202 may transmit to
the server 206 the result data or at least one file associated with
the result data.
[0031] In some configurations, the model generator 104 may process
multiple images of a user to generate a 3-D model of the user. The
model generator 104 may render a 3-D space that includes the 3-D
model of the user and a 3-D polygon model of an object to render a
virtual try-on 2-D image of the object and the user. The
application 202 may output a display of the user to the display 108
while the camera 106 captures an image of the user.
[0032] FIG. 3 is a block diagram illustrating one example of a
model generator 104-a. The model generator 104-a may be one example
of the model generator 104 depicted in FIGS. 1 and/or 2. As
depicted, the model generator 104-a may include a scanning module
302, an image processor 304, and a display module 306.
[0033] In some configurations, the scanning module 302 may obtain a
plurality of images of a user. In some embodiments, the scanning
module 302 may activate the camera 106 to capture at least one
image of the user. Additionally, or alternatively, the scanning
module 302 may capture a video of the user.
[0034] In some embodiments, the image processor 304 may process an
image of the user captured by the scanning module 302. The image
processor 304 may be configured to generate a 3-D model of the user
from the processing of the image. Operations of the image processor
304 are discussed in further detail below.
[0035] In some configurations, the display module 306 may display a
real-time image of the user on a display (e.g., display 108) while
obtaining the plurality of images of the user. For example, as the
camera 106 captures an image of the user, the captured image of the
user may be displayed on the display 108 to provide a visual
feedback to the user. In some embodiments, the display module 306
may display a guideline on the display in relation to the displayed
real-time image of the user. For example, one or more guidelines
may provide a visual cue to the user. For instance, a guideline may
provide a visual cue of the direction in which the user should be
holding the device 102 (e.g., a tablet computing device in
landscape or portrait mode). Additionally, a guideline may provide
a visual leveling cue to assist the user in maintaining the device
relatively level or in the same plane while the user pans or
rotates the device 102 around him- or herself. Additionally, a
guideline may provide the user a visual depth cue to assist the
user in maintaining the device at a relatively same depth (e.g., at
arm's length) while the user pans or rotates the device 102 around
the user.
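A minimal sketch of how such guidelines might be overlaid on the real-time feedback image, assuming OpenCV; the particular cues drawn (a centered vertical line and a horizontal leveling line) are illustrative choices, not specified in the disclosure:

```python
import cv2

def show_feedback_with_guidelines():
    """Show a live camera feed with simple overlay guidelines."""
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        # Centered vertical line: cue for keeping the face centered.
        cv2.line(frame, (w // 2, 0), (w // 2, h), (0, 255, 0), 1)
        # Horizontal line: leveling cue while panning the device.
        cv2.line(frame, (0, h // 2), (w, h // 2), (0, 255, 0), 1)
        cv2.imshow("feedback", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
            break
    cap.release()
    cv2.destroyAllWindows()
```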
[0036] FIG. 4 is a block diagram illustrating one example of an
image processor 304-a. The image processor 304-a may be one example
of the image processor 304 illustrated in FIG. 3. As depicted, the
image processor 304-a may include a viewpoint module 402, a
comparison module 404, a selection module 406, and a
cross-correlation module 408. Additionally, the image processor
304-a may include a texture mapping module 410, a geometry module
412, a coefficient module 414, a linear combination module 416, and
an application module 418.
[0037] In some configurations, the viewpoint module 402 may
calculate an angle of view relative to the user pictured in at
least one of the plurality of images. For example, the viewpoint
module 402 may determine that in one image of the user's head, the
user held the device 102 10-degrees to the left of center of the
user's face. The comparison module 404 may determine whether the
calculated angle of view (e.g., 10-degrees to the left of center of
the user's face) matches a predetermined viewing angle. In some
embodiments, the predetermined viewing angle includes a plurality
of evenly spaced 10-degree rotation steps. For example, a head-on
image showing the user facing the camera directly may be selected
as a viewing angle reference point, or 0-degrees. The next
predetermined viewing angles in either direction may include
+/-10-degrees, +/-20-degrees, +/-30-degrees, and so forth, in
10-degree increments. Thus, the comparison module 404 may determine
that an image depicting the user holding the device 102 10-degrees
to the left of center of the user's face matches a predetermined
viewing angle of +10-degrees (or -10-degrees).
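The matching logic of the comparison module 404 reduces to snapping a calculated angle to the nearest 10-degree step and testing it against a tolerance. A sketch in Python; the tolerance value is an assumption, since the disclosure does not state how close a calculated angle must be to count as a match:

```python
STEP_DEG = 10.0       # evenly spaced predetermined viewing angles
TOLERANCE_DEG = 1.5   # assumed matching tolerance (not specified)

def match_viewing_angle(angle_deg):
    """Return the nearest predetermined angle if close enough, else None."""
    nearest = STEP_DEG * round(angle_deg / STEP_DEG)
    if abs(angle_deg - nearest) <= TOLERANCE_DEG:
        return nearest
    return None

def select_images(frames_with_angles):
    """Keep one frame per matched angle (..., -20, -10, 0, +10, ...)."""
    selected = {}
    for frame, angle in frames_with_angles:
        matched = match_viewing_angle(angle)
        if matched is not None and matched not in selected:
            selected[matched] = frame
    return selected
```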
[0038] In some embodiments, upon determining the calculated angle
of view matches the predetermined viewing angle, the selection
module 406 may select at least one of the plurality of images. For
example, the selection module 406 may select an image for further
processing. The cross-correlation module 408 may perform a
cross-correlation algorithm to track a feature between two or more
of the plurality of images of the user. For example, the image
processor 304-a, via the cross-correlation module 408, may perform
template matching. Additionally, or alternatively, the image
processor 304-a, via the cross-correlation module 408, may perform
a structure from motion algorithm to track features in the images
of the user. From the detected features of the user, the image
processor 304-a may construct a 3-D model of the user.
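Template matching via normalized cross-correlation is a standard way to track a patch between frames. A minimal sketch with OpenCV, assuming the feature is given as a bounding box in the earlier frame:

```python
import cv2

def track_feature(prev_frame, next_frame, feature_box):
    """Track one feature patch between two frames by normalized
    cross-correlation (template matching)."""
    x, y, w, h = feature_box                      # patch in prev_frame
    template = prev_frame[y:y + h, x:x + w]
    # Score every placement of the patch over the next frame.
    scores = cv2.matchTemplate(next_frame, template, cv2.TM_CCORR_NORMED)
    _, _, _, best_loc = cv2.minMaxLoc(scores)
    return best_loc                               # (x, y) top-left of best match
```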
[0039] In some configurations, the texture mapping module 410 may
generate texture coordinate information from the determined 3-D
structure of the user. The texture coordinate information may
relate a two-dimensional (2-D) coordinate (e.g., UV coordinates) of
each selected image to a 3-D coordinate (e.g., XYZ coordinates) of
the 3-D model of the user.
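One common way to derive such texture coordinates is to project each mesh vertex into the selected image through a camera model and normalize to the unit square. A sketch assuming NumPy, vertices already expressed in camera coordinates, and a pinhole intrinsic matrix K; none of these specifics come from the disclosure:

```python
import numpy as np

def generate_texture_coords(vertices_xyz, K, image_size):
    """Project (N, 3) vertices into an image to get (N, 2) UV coordinates."""
    w, h = image_size
    proj = (K @ vertices_xyz.T).T          # homogeneous pixel coordinates
    px = proj[:, :2] / proj[:, 2:3]        # perspective divide
    uv = np.empty_like(px)
    uv[:, 0] = px[:, 0] / w                # U in [0, 1], left to right
    uv[:, 1] = 1.0 - px[:, 1] / h          # V flipped: image rows run top-down
    return uv
```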
[0040] In one embodiment, the geometry module 412 generates at
least one geometry file to store data related to a 3-D structure.
Each at least one geometry file may include a plurality of vertices
corresponding to a universal morphable model. For instance, each
geometry file may include a different generic model of a user,
where each model depicts a user with certain features and
characteristics. For example, one geometry file may include a
polygon mesh depicting characteristics typical of a male-looking
face. Another geometry file may include a polygon mesh depicting
characteristics of a female-looking face, and so forth.
[0041] In some configurations, the coefficient module 414
calculates a coefficient for each generated geometry file based on
the determined 3-D structure of the user. The linear combination
module 416 may combine linearly each generated geometry file based
on each calculated coefficient to generate a polygon mesh of the
user. In other words, each coefficient may act as a weight to
determine how much each particular geometry file affects the
outcome of linearly combining each geometry file. For example, if
the user is a female, then the coefficient module 414 may associate
a relatively high coefficient (e.g., 1.0) to a geometry file that
depicts female characteristics, and may associate a relatively low
coefficient (e.g., 0.01) to a geometry file that depicts male
characteristics. Thus, each geometry file may be combined linearly,
morphing a 3-D polygon mesh to generate a realistic model of the
user based on the 3-D characteristics of the user calculated from
one or more captured images of the user. The application module 418
may apply each selected image to the generated polygon mesh of the
user according to the generated texture coordinate information,
resulting in a 3-D model of the user.
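The linear combination itself is a weighted sum over vertex arrays that share the universal morphable model's vertex ordering. A minimal NumPy sketch; the example weights echo the female/male illustration above and are not prescribed values:

```python
import numpy as np

def combine_geometry_files(geometries, coefficients):
    """Linearly combine (V, 3) vertex arrays into one user mesh.

    All geometry files share the same vertex count and ordering
    (the universal morphable model); each coefficient weights how
    strongly its file shapes the result. Fitting the coefficients
    to the user's determined 3-D structure is not shown here.
    """
    stacked = np.stack(geometries)                     # (K, V, 3)
    weights = np.asarray(coefficients, dtype=float).reshape(-1, 1, 1)
    return (weights * stacked).sum(axis=0)             # (V, 3)

# Illustrative weights only, per the example above:
# user_mesh = combine_geometry_files([female_verts, male_verts], [1.0, 0.01])
```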
[0042] FIG. 5 illustrates an example arrangement 500 for capturing
an image 504 of a user 502. In particular, the illustrated example
arrangement 500 may include the user 502 holding a device 102-b.
The device 102-b may include a camera 106-a and a display 108-a.
The device 102-b, camera 106-a, and display 108-a may be examples
of the device 102, camera 106, and display 108 depicted in FIGS. 1
and/or 2.
[0043] In one example, the user 502 holds the device 102-b at arm's
length with the camera 106-a activated. The camera 106-a may
capture an image 504 of the user and the display 108-a may show the
captured image 504 to the user 502 (e.g., a real-time feedback
image of the user). In some configurations, the camera 106-a may
capture a video of the user 502. In some embodiments, the user may
pan the device 102-b around the user's face to allow the camera
106-a to capture a video of the user from one side of the user's
face to the other side of the user's face. Additionally, or
alternatively, the user 502 may capture an image of other areas
(e.g., arm, leg, torso, etc.).
[0044] FIG. 6 is a diagram 600 illustrating an example of a device
102-c for capturing an image 602 of a user. The device 102-c may be
one example of the device 102 illustrated in FIGS. 1 and/or 2. As
depicted, the device 102-c may include a camera 106-b, a display
108-b, and an application 202-a. The camera 106-b, display 108-b,
and application 202-a may each be an example of the respective
camera 106, display 108, and application 202 illustrated in FIGS. 1
and/or 2.
[0045] In one embodiment, the user may operate the device 102-c.
For example, the application 202-a may allow the user to interact
with and/or operate the device 102-c. In one embodiment, the
application 202-a may allow the user to capture an image 602 of the
user. For example, the application 202-a may display the captured
image 602 on the display 108-b. In some cases, the application
202-a may permit the user to accept or decline the image 602 that
was captured.
[0046] FIG. 7 illustrates an example arrangement 700 of a virtual
3-D space 702. As depicted, the 3-D space 702 of the example
arrangement 700 may include a 3-D model of a user's head 704. In
some embodiments, the 3-D model of the user's head 704 may include
a polygon mesh model of the user's head, which may be stored in the
database 110 as polygon model data 112. The polygon model data 112 of the 3-D
model of the user may include 3-D polygon mesh elements such as
vertices, edges, faces, polygons, surfaces, and the like.
Additionally, or alternatively, the 3-D model of the user's head
704 may include at least one texture map, which may be stored in
the database 110 as texture map data 114.
[0047] FIG. 8 is a flow diagram illustrating one embodiment of a
method 800 for generating a 3-D model of a user. In some
configurations, the method 800 may be implemented by the model
generator 104 illustrated in FIGS. 1, 2, and/or 4. In some
configurations, the method 800 may be implemented by the
application 202 illustrated in FIG. 2.
[0048] At block 802, a plurality of images of a user may be
obtained. At block 804, an angle of view relative to the user
pictured in at least one of the plurality of images may be
calculated. At block 806, it may be determined whether the
calculated angle of view matches a predetermined viewing angle. In
some configurations, the predetermined viewing angle includes a
plurality of rotation steps. As explained above, in some
configurations, the predetermined viewing angle includes a
plurality of evenly spaced 10-degree rotation steps. At block 808,
upon determining the calculated angle of view matches the
predetermined viewing angle, at least one of the plurality of
images may be selected. Upon determining the calculated angle of
view does not match the predetermined viewing angle, the method
returns to block 804.
[0049] FIG. 9 is a flow diagram illustrating one embodiment of a
method 900 for applying an image of a user to a polygon mesh model
of the user. In some configurations, the method 900 may be
implemented by the model generator 104 illustrated in FIGS. 1, 2,
and/or 4. In some configurations, the method 900 may be implemented
by the application 202 illustrated in FIG. 2.
[0050] At block 902, a cross-correlation algorithm may be performed
to track features in the images of the user to determine a 3-D
structure of the user. At block 904, texture coordinate information may be
generated from the determined 3-D structure of the user. As
explained above, the texture coordinate information may relate a
2-D coordinate (e.g., UV coordinates) of each selected image to a
3-D coordinate (e.g., XYZ coordinates) of the 3-D model of the
user.
[0051] At block 906, at least one geometry file may be generated to
store data related to a 3-D structure. As explained above, each at
least one geometry file may include a plurality of vertices
corresponding to a universal morphable model. At block 908, a
coefficient for each generated geometry file based on the
determined 3-D structure of the user may be calculated. At block
910, each generated geometry file may be combined linearly based on
each calculated coefficient to generate a polygon mesh of the user.
At block 912, each selected image may be applied to the generated
polygon mesh of the user according to the generated texture
coordinate information.
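Applying the selected images via texture coordinates ultimately yields a textured mesh; a common, though here assumed, way to persist one is the Wavefront OBJ format, whose vt records carry the UV information described above:

```python
def write_textured_obj(path, vertices, uvs, faces):
    """Write a polygon mesh with per-vertex UVs as a Wavefront OBJ file.

    vertices: iterable of (x, y, z); uvs: iterable of (u, v) aligned
    with vertices; faces: iterable of 0-based vertex-index triples.
    The OBJ format is an assumed output choice, not named in the text.
    """
    with open(path, "w") as f:
        f.write("# 3-D model of the user with texture coordinates\n")
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for u, v in uvs:
            f.write(f"vt {u} {v}\n")
        for a, b, c in faces:
            # OBJ indices are 1-based; vertex and UV share an index here.
            f.write(f"f {a+1}/{a+1} {b+1}/{b+1} {c+1}/{c+1}\n")
```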
[0052] FIG. 10 is a flow diagram illustrating one embodiment of a
method 1000 for displaying a feedback image to a user. In some
configurations, the method 1000 may be implemented by the model
generator 104 illustrated in FIGS. 1, 2, and/or 4. In some
configurations, the method 1000 may be implemented by the
application 202 illustrated in FIG. 2.
[0053] At block 1002, a real-time image of the user may be
displayed on a display (e.g., display 108) while obtaining the
plurality of images of the user. As explained above, as the camera
106 captures an image of the user, the captured image of the user
may be displayed on the display 108 to provide a visual feedback to
the user. At block 1004, a guideline may be displayed on the display
in relation to the displayed real-time image of the user. One or
more guidelines may provide a visual cue to the user while an image
is being captured.
[0054] FIG. 11 depicts a block diagram of a computer system 1100
suitable for implementing the present systems and methods. The
depicted computer system 1100 may be one example of a server 206
depicted in FIG. 2. Alternatively, the system 1100 may be one
example of a device 102 depicted in FIGS. 1, 2, 5, and/or 6.
Computer system 1100 includes a bus 1102 which interconnects major
subsystems of computer system 1100, such as a central processor
1104, a system memory 1106 (typically RAM, but which may also
include ROM, flash RAM, or the like), an input/output controller
1108, an external audio device, such as a speaker system 1110 via
an audio output interface 1112, an external device, such as a
display screen 1114 via display adapter 1116, serial ports 1118 and
1120, a keyboard 1122 (interfaced with a keyboard controller
1124), multiple USB devices 1126 (interfaced with a USB controller
1128), a storage interface 1130, a host bus adapter (HBA) interface
card 1136A operative to connect with a Fibre Channel network 1138,
a host bus adapter (HBA) interface card 1136B operative to connect
to a SCSI bus 1140, and an optical disk drive 1142 operative to
receive an optical disk 1144. Also included are a mouse 1146 (or
other point-and-click device, coupled to bus 1102 via serial port
1118), a modem 1148 (coupled to bus 1102 via serial port 1120), and
a network interface 1150 (coupled directly to bus 1102).
[0055] Bus 1102 allows data communication between central processor
1104 and system memory 1106, which may include read-only memory
(ROM) or flash memory (neither shown), and random access memory
(RAM) (not shown), as previously noted. The RAM is generally the
main memory into which the operating system and application
programs are loaded. The ROM or flash memory can contain, among
other code, the Basic Input-Output system (BIOS) which controls
basic hardware operation such as the interaction with peripheral
components or devices. For example, a model generator 104-b to
implement the present systems and methods may be stored within the
system memory 1106. The model generator 104-b may be one example of
the model generator 104 depicted in FIGS. 1, 2, and/or 3.
Applications resident with computer system 1100 are generally
stored on and accessed via a non-transitory computer readable
medium, such as a hard disk drive (e.g., fixed disk 1152), an
optical drive (e.g., optical drive 1142), or other storage medium.
Additionally, applications can be in the form of electronic signals
modulated in accordance with the application and data communication
technology when accessed via network modem 1148 or interface
1150.
[0056] Storage interface 1130, as with the other storage interfaces
of computer system 1100, can connect to a standard computer
readable medium for storage and/or retrieval of information, such
as a fixed disk drive 1152. Fixed disk drive 1152 may be a part of
computer system 1100 or may be separate and accessed through other
interface systems. Modem 1148 may provide a direct connection to a
remote server via a telephone link or to the Internet via an
internet service provider (ISP). Network interface 1150 may provide
a direct connection to a remote server via a direct network link to
the Internet via a POP (point of presence). Network interface 1150
may provide such connection using wireless techniques, including
digital cellular telephone connection, Cellular Digital Packet Data
(CDPD) connection, digital satellite data connection or the
like.
[0057] Many other devices or subsystems (not shown) may be
connected in a similar manner (e.g., document scanners, digital
cameras and so on). Conversely, all of the devices shown in FIG. 11
need not be present to practice the present systems and methods.
The devices and subsystems can be interconnected in different ways
from that shown in FIG. 11. The operation of at least some of the
computer system 1100 such as that shown in FIG. 11 is readily known
in the art and is not discussed in detail in this application. Code
to implement the present disclosure can be stored in a
non-transitory computer-readable medium such as one or more of
system memory 1106, fixed disk 1152, or optical disk 1144. The
operating system provided on computer system 1100 may be
MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, Linux®, or
another known operating system.
[0058] Moreover, regarding the signals described herein, those
skilled in the art will recognize that a signal can be directly
transmitted from a first block to a second block, or a signal can
be modified (e.g., amplified, attenuated, delayed, latched,
buffered, inverted, filtered, or otherwise modified) between the
blocks. Although the signals of the above described embodiment are
characterized as transmitted from one block to the next, other
embodiments of the present systems and methods may include modified
signals in place of such directly transmitted signals as long as
the informational and/or functional aspect of the signal is
transmitted between blocks. To some extent, a signal input at a
second block can be conceptualized as a second signal derived from
a first signal output from a first block due to physical
limitations of the circuitry involved (e.g., there will inevitably
be some attenuation and delay). Therefore, as used herein, a second
signal derived from a first signal includes the first signal or any
modifications to the first signal, whether due to circuit
limitations or due to passage through other circuit elements which
do not change the informational and/or final functional aspect of
the first signal.
[0059] While the foregoing disclosure sets forth various
embodiments using specific block diagrams, flowcharts, and
examples, each block diagram component, flowchart step, operation,
and/or component described and/or illustrated herein may be
implemented, individually and/or collectively, using a wide range
of hardware, software, or firmware (or any combination thereof)
configurations. In addition, any disclosure of components contained
within other components should be considered exemplary in nature
since many other architectures can be implemented to achieve the
same functionality.
[0060] The process parameters and sequence of steps described
and/or illustrated herein are given by way of example only and can
be varied as desired. For example, while the steps illustrated
and/or described herein may be shown or discussed in a particular
order, these steps do not necessarily need to be performed in the
order illustrated or discussed. The various exemplary methods
described and/or illustrated herein may also omit one or more of
the steps described or illustrated herein or include additional
steps in addition to those disclosed.
[0061] Furthermore, while various embodiments have been described
and/or illustrated herein in the context of fully functional
computing systems, one or more of these exemplary embodiments may
be distributed as a program product in a variety of forms,
regardless of the particular type of computer-readable media used
to actually carry out the distribution. The embodiments disclosed
herein may also be implemented using software modules that perform
certain tasks. These software modules may include script, batch, or
other executable files that may be stored on a computer-readable
storage medium or in a computing system. In some embodiments, these
software modules may configure a computing system to perform one or
more of the exemplary embodiments disclosed herein.
[0062] The foregoing description, for purpose of explanation, has
been described with reference to specific embodiments. However, the
illustrative discussions above are not intended to be exhaustive or
to limit the invention to the precise forms disclosed. Many
modifications and variations are possible in view of the above
teachings. The embodiments were chosen and described in order to
best explain the principles of the present systems and methods and
their practical applications, to thereby enable others skilled in
the art to best utilize the present systems and methods and various
embodiments with various modifications as may be suited to the
particular use contemplated.
[0063] Unless otherwise noted, the terms "a" or "an," as used in
the specification and claims, are to be construed as meaning "at
least one of." In addition, for ease of use, the words "including"
and "having," as used in the specification and claims, are
interchangeable with and have the same meaning as the word
"comprising." In addition, the term "based on" as used in the
specification and the claims is to be construed as meaning "based
at least upon."
* * * * *