U.S. patent application number 14/503287 was filed with the patent office on 2014-09-30 and published on 2015-05-14 for omni-channel simulated digital apparel content display.
The applicants listed for this patent are Jatin Chhugani, Mihir Naware, and Jonathan Su. The invention is credited to Jatin Chhugani, Mihir Naware, and Jonathan Su.
Publication Number: 20150134495
Application Number: 14/503287
Family ID: 53043418
Filed Date: 2014-09-30
Publication Date: 2015-05-14
United States Patent Application 20150134495
Kind Code: A1
Naware; Mihir; et al.
May 14, 2015
OMNI-CHANNEL SIMULATED DIGITAL APPAREL CONTENT DISPLAY
Abstract
Techniques for an omni-channel approach for displaying simulated
digital apparel content are presented herein. A machine can detect
an available amount of a computing resource on a client device. A
determination that the client device is to render only a
three-dimensional body model, among a set of models that includes
the three-dimensional body model and a three-dimensional garment
model, and that a server is to render the three-dimensional garment
model, may occur based on the detected available amount of the
computing resource on the client device. The machine can provide
the client device with the three-dimensional garment model draped
on the three-dimensional body model. The machine can cause the
server to render at least a portion of the three-dimensional
garment model in accordance with the determination. The machine can
cause the client device to render at least a portion of the
three-dimensional body model in accordance with the
determination.
Inventors: Naware; Mihir (Redwood City, CA); Chhugani; Jatin (Santa Clara, CA); Su; Jonathan (San Jose, CA)

Applicant:
    Name               City            State   Country   Type
    Naware; Mihir      Redwood City    CA      US
    Chhugani; Jatin    Santa Clara     CA      US
    Su; Jonathan       San Jose        CA      US
Family ID: 53043418
Appl. No.: 14/503287
Filed: September 30, 2014
Related U.S. Patent Documents

    Application Number   Filing Date    Patent Number
    61905126             Nov 15, 2013
    61904263             Nov 14, 2013
    61904522             Nov 15, 2013
    61905118             Nov 15, 2013
    61905122             Nov 15, 2013
Current U.S. Class: 705/27.2
Current CPC Class: A41H 3/007 20130101; A41H 1/00 20130101; G06T 15/005 20130101; G06T 2210/16 20130101; G06T 2215/16 20130101; G06T 19/20 20130101; G06T 17/10 20130101; G06F 30/20 20200101; G06F 2111/02 20200101; G06T 17/00 20130101; G06K 9/6262 20130101; G06T 17/20 20130101; G06Q 30/0643 20130101; G06F 2113/12 20200101; G06T 19/00 20130101
Class at Publication: 705/27.2
International Class: G06T 17/10 20060101 G06T017/10; G06K 9/62 20060101 G06K009/62; G06T 7/00 20060101 G06T007/00; G06Q 30/06 20060101 G06Q030/06
Claims
1. A method comprising: detecting an available amount of a
computing resource on a client device; determining that the client
device is to render only a three-dimensional body model, among a
set of models that includes the three-dimensional body model and a
three-dimensional garment model, and that a server is to render the
three-dimensional garment model, the determining being based on the
detected available amount of the computing resource on the client
device; providing the client device with the three-dimensional
garment model draped on the three-dimensional body model; causing
the server to render at least a portion of the three-dimensional
garment model in accordance with the determining; and causing the
client device to render at least a portion of the three-dimensional
body model in accordance with the determining.
2. The method of claim 1, wherein the client device renders the
portion of the three-dimensional body model without rendering any
portion of the three-dimensional garment model, and wherein the
three-dimensional body model includes a face and hair.
3. The method of claim 1, wherein the server renders the portion of
the three-dimensional garment model without rendering any portion
of the three-dimensional body model.
4. The method of claim 1, wherein the three-dimensional garment
model includes garment points that represent a surface of a
garment, the method further comprising: draping the
three-dimensional garment model on the three-dimensional body model
by positioning at least a portion of the three-dimensional body model
inside the three-dimensional garment model that includes the
garment points.
5. The method of claim 4, further comprising: calculating a
simulated force acting on a subset of the garment points based on
the positioning of at least the portion of the three-dimensional
body model.
6. The method of claim 5, wherein the simulated force includes a
simulated gravitational force upon the draped three-dimensional
garment model.
7. The method of claim 5, wherein the simulated force includes a
simulated tension force upon the draped three-dimensional garment
model.
8. The method of claim 5, further comprising: selecting a size from
a set of available sizes for a garment based on the simulated force
calculated based on the positioning of at least the portion of the
three-dimensional body model.
9. The method of claim 1, further comprising: causing the server to
generate a group of images of the draped three-dimensional garment
model, the group of images depicting multiple viewpoints around the
draped three-dimensional garment model.
10. The method of claim 9, wherein the generated group of
images has a quantity based on the detected available amount of the
computing resource on the client device.
11. The method of claim 1, further comprising: providing the client
device with a partial image rendered by the server from at least a
portion of the three-dimensional garment model draped on the
three-dimensional body model.
12. The method of claim 11, wherein the client device is configured
to receive the partial image and generate a full image based on the
partial image and by rendering at least the portion of the
three-dimensional body model.
13. The method of claim 11, wherein the partial image is generated
using a graphics processing unit in the server.
14. The method of claim 11, further comprising determining a
resolution of the partial image based on the detected available
amount of the computing resource on the client device.
15. The method of claim 11, further comprising determining a frame
rate at which an animation of the draped three-dimensional model is
to be displayed, the frame rate being determined based on the
detected available amount of the computing resource on the client
device.
16. The method of claim 1, wherein the detecting of the available
amount of the computing resource includes detecting an available
capacity of a graphics processing unit within the client
device.
17. The method of claim 1, further comprising: generating and
providing a partial image that depicts a garment in which regions
of different tightness have different colors, the partial image
being generated by rendering at least a portion of the
three-dimensional garment model draped on the three-dimensional
body model.
18. The method of claim 17, wherein the generating of the partial
image includes calculating a simulated force that acts on a first
region of the three-dimensional garment model but not on a second
region of the three-dimensional garment model.
19. A system comprising: a garment simulation module configured to:
detect an available amount of a computing resource on a client
device; determine that the client device is to render only a
three-dimensional body model, among a set of models that includes
the three-dimensional body model and a three-dimensional garment
model, and that a server is to render the three-dimensional garment
model, the determining being based on the detected available amount
of the computing resource on the client device; a rendering module
comprising one or more processors and configured to: provide the
client device with the three-dimensional garment model draped on
the three-dimensional body model; cause the server to render at
least a portion of the three-dimensional garment model in
accordance with the determination; and cause the client device to
render at least a portion of the three-dimensional body model in
accordance with the determination.
20. A non-transitory machine-readable storage medium comprising
instructions that, when executed by one or more processors of a
machine, cause the machine to perform operations comprising:
detecting an available amount of a computing resource on a client
device; determining that the client device is to render only a
three-dimensional body model, among a set of models that includes
the three-dimensional body model and a three-dimensional garment
model, and that a server is to render the three-dimensional garment
model, the determining being based on the detected available amount
of the computing resource on the client device; providing the
client device with the three-dimensional garment model draped on
the three-dimensional body model; causing the server to render at
least a portion of the three-dimensional garment model in
accordance with the determining; and causing the client device to
render at least a portion of the three-dimensional body model in
accordance with the determining.
Description
[0001] This application claims the priority benefit of: (1) U.S.
Provisional Application No. 61/905,126, filed Nov. 15, 2013; (2)
U.S. Provisional Application No. 61/904,263, filed Nov. 14, 2013;
(3) U.S. Provisional Application No. 61/904,522, filed Nov. 15,
2013; (4) U.S. Provisional Application No. 61/905,118, filed Nov.
15, 2013; and (5) U.S. Provisional Application No. 61/905,122,
filed Nov. 15, 2013, which applications are incorporated herein by
reference in their entirety.
TECHNICAL FIELD
[0002] The present application relates generally to the technical
field of data processing and specifically to three-dimensional
(3-D) modeling and simulation.
BACKGROUND
[0003] Shopping for clothes and accessories in physical stores can
be an arduous task and, due to traveling and parking, can be very
time consuming. With the advent of online shopping, consumers may
purchase clothing while staying home, via a computer or any
electronic device connected to the Internet. Additionally,
purchasing clothes online can be different from purchasing clothes
in a store. One difference is the lack of a physical dressing room
to determine if and how an article of clothing fits the particular
consumer. Since different consumers can have different dimensions,
seeing how an article of clothing fits, by use of a virtual
dressing room, can be a very important aspect of a successful and
satisfying digital shopping experience.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] FIG. 1 illustrates an example system for extracting body
dimensions from two-dimensional photographs of garments, in
accordance with certain example embodiments.
[0005] FIG. 2 is a block diagram illustrating an example file
system, in accordance with certain example embodiments.
[0006] FIG. 3 is a block diagram illustrating an example simulation
module, in accordance with certain example embodiments.
[0007] FIG. 4 is a flow diagram of a process for displaying a
garment-draped avatar on a client device, in accordance with
certain example embodiments.
[0008] FIG. 5 illustrates a user interface presented by a display
module, in accordance with certain example embodiments.
[0009] FIG. 6 illustrates a user interface for a 360 degree view of
a garment-draped avatar, in accordance with certain example
embodiments.
[0010] FIG. 7 illustrates different viewpoints from the 360 degree
view, in accordance with certain example embodiments.
[0011] FIG. 8 illustrates a virtual fitting room example of the
garment-draped avatar, in accordance with certain example
embodiments.
[0012] FIG. 9 illustrates various omni-channels, in accordance with
certain example embodiments.
[0013] FIG. 10 illustrates an example of a tablet synchronized with
a television with the virtual fitting room, in accordance with
certain example embodiments.
[0014] FIG. 11 illustrates recommended sizes for different
garments, in accordance with certain example embodiments.
[0015] FIG. 12 illustrates a sample triangle associated with a
tessellated garment, in accordance with certain example
embodiments.
[0016] FIG. 13 illustrates an example of a fit map, in accordance
with certain example embodiments.
[0017] FIG. 14 illustrates another example of a fit map, in
accordance with example embodiments.
[0018] FIG. 15 illustrates an online purchase of a garment, in
accordance with certain example embodiments.
[0019] FIG. 16 is a high-level diagram for displaying an animation
of a garment-draped avatar, in accordance with certain example
embodiments.
[0020] FIG. 17 is a block diagram illustrating components of a
machine, according to some example embodiments, able to read
instructions from a machine-readable medium and perform any one or
more of the methodologies discussed herein.
DESCRIPTION OF EMBODIMENTS
[0021] Techniques for an omni-channel approach to apparel and
accessory e-commerce for displaying simulated digital apparel
content are provided, in accordance with various example
embodiments. The techniques described herein are specifically
tailored for apparel and accessory commerce and retail activity.
Omni-channel simulated digital apparel content includes displaying
images of garments draped on a user-specific avatar across multiple
access points (e.g., desktop, laptops, tablets, televisions, mobile
phones, and store fronts). Additionally, omni-channel simulated
digital apparel content includes images related to the look and fit
of garments on user-specific or generic 3-D human-form avatars. For
example, the fit map can display the look and fit of garments on an
avatar, and the fit map can be deployed for an omni-channel
approach. The term "omni-channel," for purposes herein, refers to a
technique for displaying content (e.g., simulated digital apparel)
on two or more devices (e.g., all available shopping channels) for
a particular user, which may provide the user with a consistent and
coherent consumer shopping experience as the user shifts attention
from one device to another. A consistent and coherent shopping
experience can be especially important when the multiple devices
have differences in computing, input/output, communication, and
security parameters. The techniques described herein can enable a
fully featured, consistent, and coherent commerce experience for
the customer, irrespective of which device or channel the customer
uses to engage with the business.
[0022] The displayed images are generated by rendering an image of
a garment draped on an avatar. A machine configured by a suitable
garment simulation module may perform such a rendering. The
rendering includes determining one or more forces (e.g.,
gravitational force or tension force) in the garment when the
garment is draped on the avatar, as the avatar performs one or more
animations (e.g., walking or stooping). The determination (e.g.,
calculation) of such a force can be computationally intensive;
therefore, the amount of rendering to be performed on a client
device may be determined (e.g., by a machine configured by a
suitable garment simulation module) based on the amount of a
computing resource on the client device.
[0023] In some example embodiments, a simulation of the garment on
an animated avatar (e.g., walking down a fashion show runway) can
be displayed (e.g., by a server machine or a client device). For
example, for each frame of an animation, a garment simulation
module of a server machine may compute and store vertex positions
of the simulated garment (e.g., vertices that constitute a model of
the garment), as draped on a user's avatar, as well as compute and
store one or more of the forces that may act on or be exerted on
the simulated garment. Additionally, for each frame of the
animation, the user's avatar can be rendered along with the
corresponding garment under well-lit conditions. The rendering may
generate and store a series of images of the simulated garment
(e.g., with or without the user's avatar being depicted in any
given image). The resolution of the image may be set dynamically
(e.g., by the garment simulation module). For example, each image
may be dynamically set to a resolution of 800×600 pixels based on
a determination that the client device of the user can render only
images 800×600 pixels in size or smaller for
real-time or near-real-time rendering (e.g., with a frame rate of
15 frames per second or greater).
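As a rough sketch of this kind of dynamic resolution selection, the following Python snippet picks the largest candidate resolution that fits within a client-reported real-time rendering limit. The candidate list and the `max_realtime_resolution` capability probe are illustrative assumptions, not details from this application.

```python
# Candidate resolutions for which pre-rendered image sets exist,
# ordered from largest to smallest.
CANDIDATE_RESOLUTIONS = [(1920, 1080), (1280, 720), (800, 600), (480, 360)]


def select_resolution(max_realtime_resolution):
    """Pick the largest candidate no bigger than the resolution the
    client reports it can render in real time (e.g., at 15 fps or
    greater). `max_realtime_resolution` is a hypothetical
    (width, height) capability estimate for the client device.
    """
    max_w, max_h = max_realtime_resolution
    for w, h in CANDIDATE_RESOLUTIONS:
        if w <= max_w and h <= max_h:
            return (w, h)
    return CANDIDATE_RESOLUTIONS[-1]  # fall back to the smallest set


# A client that can only sustain real-time rendering up to 800x600:
print(select_resolution((800, 600)))  # -> (800, 600)
```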
[0024] The series of images can be generated with varying
resolutions for display on different client devices. For example, a
range of image resolutions may be chosen and the corresponding sets
of images rendered by a garment simulation module. As an
alternative, each image may be rendered at a very high resolution
(e.g., 10,000×8,000 pixels), and sub-sampling (e.g., to
create a smaller resolution image) may be performed at run-time
(e.g., by the garment simulation module).
[0025] The garment simulation module may display to customers
static or dynamic information about the properties of garments
across multiple access points (e.g., client devices) by using a mix
of pre-processed content (e.g., processed on the server machine)
and run-time content (e.g., processed on the client device). In
some instances, the amount of run-time content to be processed on
the client device can be pre-determined (e.g., by the garment
simulation module) in order to provide a better user experience at
a particular client device.
[0026] According to some example embodiments, an access module
within a server machine may detect an available amount of computing
resources on a client device. The available amount of computing
resources can include communication resources (e.g., bandwidth).
Based on the detected available amount of the computing resource,
at least four different scenarios can occur. According to a first
scenario, both the rendering of a three-dimensional (3-D) body
model and the rendering of a 3-D garment model are processed on
(e.g., performed by) the server machine (e.g., by using a garment
simulation module or rendering module), and generated images based
on both renderings are then transmitted to the client device.
According to a second scenario, the rendering of the 3-D garment
model is processed on the server machine, and the rendering of the
3-D body model is processed on the client device. According to a
third scenario, the rendering of the 3-D body model is processed on
the server, and the rendering of the 3-D garment model is processed
on the client device. According to a fourth scenario, both the
rendering of the 3-D body model and the rendering of the 3-D
garment model are processed on the client device.
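One way to express this four-way split in code is sketched below in Python; the capability score and its thresholds are invented for illustration, and a real implementation would weigh bandwidth, GPU capacity, and the privacy constraints discussed later in this description.

```python
from enum import Enum


class RenderSplit(Enum):
    SERVER_BOTH = 1     # scenario 1: server renders body and garment
    SERVER_GARMENT = 2  # scenario 2: server renders garment, client renders body
    SERVER_BODY = 3     # scenario 3: server renders body, client renders garment
    CLIENT_BOTH = 4     # scenario 4: client renders body and garment


def choose_split(client_capability, keep_body_on_client):
    """Map a detected client capability (hypothetical 0..1 score) to
    one of the four rendering scenarios. Thresholds are illustrative.
    """
    if client_capability < 0.25:
        return RenderSplit.SERVER_BOTH
    if client_capability < 0.75:
        # A client with moderate resources renders one model; keeping
        # the body model on the client also keeps body measurements local.
        return (RenderSplit.SERVER_GARMENT if keep_body_on_client
                else RenderSplit.SERVER_BODY)
    return RenderSplit.CLIENT_BOTH
```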
[0027] In the second scenario described above, a determination can
be made by a garment simulation module (e.g., within the server
machine) that the client device is to render only a 3-D body model,
among a set of models that includes the 3-D body model and a 3-D
garment model, and that a server machine is to render the 3-D
garment model. Continuing with the second scenario, the garment
simulation module may provide the client device with the 3-D
garment model draped on the 3-D body model. Additionally, the
garment simulation module may cause the server machine to render at
least a portion of the 3-D garment model in accordance with the
determination. Furthermore, the garment simulation module may cause
the client device to render at least a portion of the 3-D body
model in accordance with the determination. In some instances, the
garment simulation module is the part of the server machine that
renders the 3-D garment model. In other instances, the garment
simulation module can cause a cloud-based server system (e.g., with
multiple server machines) to render at least a portion of the 3-D
garment model. The communication between the server and client can
include transmission of the 3-D body model (e.g., 3-D body model,
parts of the 3-D body model, or representative information of the
3-D body model) that can be used by a client device or a server to
create the 3-D body model. The representative information of the 3-D
body model can include salient dimensions (e.g., bust, waist, hip,
or height) or assets (e.g., photographs of the face or hair).
[0028] In some instances, the images transmitted from the server
machine to the client device can include a group of images of the
draped 3-D garment model. The group of images may illustrate
multiple viewpoints around the draped 3-D garment model, such as a
360 degree view. Additionally, a second group of images can be
transmitted from the server machine to the client device. The
second group of images may form all or part of an animation of the
garment-draped avatar. The quantity of images delivered from the
server machine to the client device may be determined by the server
machine based on the frame rate of the animation, and the frame
rate may be determined based on the available amount of computing
resources at the client device.
[0029] Additionally, simulated forces can be calculated and
displayed on a fit map based on the garment model being draped and
simulated on the body model. Using the fit map, a garment size can
be recommended (e.g., by the garment simulation module or other
recommendation engine within the server machine, the client device,
or both) to a specific user based on the calculated simulated
forces. Depending on the implementation, the simulated forces can
be calculated by the garment simulation module, the rendering
module, the server, or the client device.
[0030] Examples merely typify possible variations. Unless
explicitly stated otherwise, components and functions are optional
and may be combined or subdivided, and operations may vary in
sequence or be combined or subdivided. In the following
description, for purposes of explanation, numerous specific details
are set forth to provide a thorough understanding of example
embodiments. It will be evident to one skilled in the art, however,
that the present subject matter may be practiced without these
specific details.
[0031] Reference will now be made in detail to various example
embodiments, some of which are illustrated in the accompanying
drawings. In the following detailed description, numerous specific
details are set forth in order to provide a thorough understanding
of the present disclosure and the described embodiments. However,
the present disclosure may be practiced without these specific
details.
[0032] FIG. 1 is a block diagram illustrating a network environment
100 in accordance with some example embodiments. The network
environment 100 includes client devices (e.g., a client device
10-1, a client device 10-2, a client device 10-3) connected to a
server 202 (e.g., server machine) via a network 34 (e.g., the
Internet). The server 202 may include one or more processing units
222 (e.g., central processing units (CPUs)) for executing one or
more software modules, programs, or instructions stored in a memory
236 and thereby performing one or more of the processing operations
discussed herein; one or more communications interfaces 220; the
memory 236; and one or more communication buses 230 for
interconnecting these components. The communication buses 230 may
include circuitry (e.g., a chipset) that interconnects and controls
communications between system components. The server 202 also
optionally includes a power source 224 and a controller 212 coupled
to a mass storage 214. The network environment 100 optionally
includes a user interface 232 that includes a display device 226
and a keyboard 228.
[0033] The memory 236 may include high-speed random access memory,
such as dynamic random-access memory (DRAM), static random-access
memory (SRAM), double data rate random-access memory (DDR RAM), or
other random-access solid state memory devices. Additionally, the
memory 236 may include non-volatile memory, such as one or more
magnetic disk storage devices, optical disk storage devices, flash
memory devices, or other non-volatile solid state storage devices.
The memory 236 may optionally include one or more storage devices
remotely located from the CPU 222. The memory 236, or alternately
the non-volatile memory device within the memory 236, may be or
include a non-transitory computer-readable storage medium. In some
example embodiments, the memory 236, or the computer-readable
storage medium of the memory 236, stores the following programs,
modules, and data structures, or a subset thereof: an operating
system 240; a file system 242; an access module 244; a garment
simulation module 246; a rendering module 248; and a display module
250.
[0034] The operating system 240 is configured for handling various
basic system services and for performing hardware-dependent tasks.
The file system 242 can store and organize various files utilized
by various programs. The access module 244 can communicate with
client devices (e.g., the client device 10-1, the client device
10-2, or the client device 10-3) via the one or more communications
interfaces 220 (e.g., wired, or wireless), the network 34, other
wide area networks, local area networks, metropolitan area
networks, and so on. Additionally, the access module 244 can access
information in the memory 236 via the one or more communication
buses 230.
[0035] The garment simulation module 246 is configured to generate
a 3-D garment model. Additionally, the garment simulation module
246 can act on a generated 3-D body model. U.S. Non-Provisional
application Ser. No. 14/270,244, filed May 5, 2014, titled
"3-D DIGITAL MEDIA CONTENT CREATION FROM PLANAR GARMENT IMAGES,"
which is incorporated herein by reference, further describes
techniques for generating the 3-D body model based on salient
dimensions, photos of fitting garments, or self-identification
(e.g., user identifying representative bodies that are similar to
the body type of the user).
[0036] Alternatively, the garment simulation module 246 can
generate the 3-D body model based on the techniques described above
(e.g., salient dimensions). For example, the garment simulation
module 246 can cause the client device 10-1 or the server 202 to
generate the 3-D garment model or the 3-D body model.
[0037] Additionally, the garment simulation module 246 can drape or
cause a client device (e.g., client device 10-1) or the server 202
to drape the garment model on the body model. For example, the
garment simulation module 246 can position the body model inside
the garment model. Moreover, the garment simulation module 246 can
calculate or cause the client device 10-1 or the server 202 to
calculate one or more simulated forces acting on garment points
associated with (e.g., corresponding to or included in) the garment
model based on the positioning of the body model inside the garment
model. A fit map can be determined using the calculated simulated
forces. The fit map can be presented on a display of the client
device 10-1. The garment simulation module 246 and the rendering
module 248 can generate the fit map.
[0038] In some instances (e.g., the second scenario described
above), the body measurements may not be transmitted by the client
device 10-1 (e.g., for privacy reasons). In these instances, the
garment simulation module 246 can cause the client device 10-1 to
generate the 3-D body model. For example, the server 202 can
generate the garment model using the garment simulation module 246
and the rendering module 248, and the client device 10-1 can
generate the body model.
[0039] The rendering module 248 can generate an image of the 3-D
garment model draped on the 3-D body model based on the calculated
one or more simulated forces. The simulated forces can be
calculated, by the rendering module 248, based on methods described
herein (e.g., a three-spring implementation of a sample triangle
with three vertices).
[0040] Additionally, the garment simulation module 246 or the
rendering module 248 can determine the image resolution and the
number of frames per second for the animation, based on computing
resources of the client device. Furthermore, the garment simulation
module 246 or the rendering module 248 can determine the amount of
data (e.g., image data) that is pre-rendered, stored, and
transferred to the client device 10-1 on the fly in contrast to the
amount of data that is rendered on the client device 10-1.
[0041] The display module 250 can be configured to cause
presentation of one or more generated images on a display of a
device (e.g., client device 10-1). For example, the display module
250 may present the 3-D simulation discussed above on the display
of a mobile device. The presentation of the 3-D simulation may be
based on the actions of the garment simulation module 246 and the
rendering module 248.
[0042] The network 34 may be any network that enables communication
between or among machines, databases, and devices (e.g., the server
202 and the client device 10-1). Accordingly, the network 34 may be
a wired network, a wireless network (e.g., a mobile or cellular
network), or any suitable combination thereof. The network 34 may
include one or more portions that constitute a private network, a
public network (e.g., the Internet), or any suitable combination
thereof. Accordingly, the network 34 may include one or more
portions that incorporate a local area network (LAN), a wide area
network (WAN), the Internet, a mobile telephone network (e.g., a
cellular network), a wired telephone network (e.g., a plain old
telephone system (POTS) network), a wireless data network (e.g., a
Wi-Fi network or a WiMAX network), or any suitable combination
thereof. Any one or more portions of the network 34 may communicate
information via a transmission medium. As used herein,
"transmission medium" refers to any intangible (e.g., transitory)
medium that is capable of communicating (e.g., transmitting)
instructions for execution by a machine (e.g., by one or more
processors of such a machine), and includes digital or analog
communication signals or other intangible media to facilitate
communication of such software.
[0043] The server 202 and the client devices (e.g., the client
device 10-1, the client device 10-2, and the client device 10-3)
may each be implemented in a computer system, in whole or in part,
as described below with respect to FIG. 17.
[0044] Any of the machines, databases, or devices shown in FIG. 1
may be implemented in a computer modified (e.g., configured or
programmed) by software (e.g., one or more software modules) to be
a special-purpose computer to perform one or more of the functions
described herein for that machine, database, or device. For
example, a computer system able to implement any one or more of the
methodologies described herein is discussed below with respect to
FIG. 17. As used herein, a "database" is a data storage resource
and may store data structured as a text file, a table, a
spreadsheet, a relational database (e.g., an object-relational
database), a triple store, a hierarchical data store, or any
suitable combination thereof. Moreover, any two or more of the
machines, databases, or devices illustrated in FIG. 1 may be
combined into a single machine, and the functions described herein
for any single machine, database, or device may be subdivided among
multiple machines, databases, or devices.
[0045] FIG. 2 further describes the memory 236 in the server 202,
as initially described in FIG. 1. FIG. 2 includes an expanded
depiction of the file system 242. The file system 242 may store one
or more of the following data objects (e.g., files and databases):
garment model files 251; extracted geometry files 252; extracted
texture files 253; stitching information files 254; a garment
template database 255; draping parameters files 256; simulation
parameters files 257; and simulation result geometry files 258.
FIG. 4 further describes operations using the data objects from
FIG. 2.
[0046] FIG. 3 is a block diagram illustrating components of the
garment simulation module 246, according to some example
embodiments, as initially described in FIG. 1. The garment
simulation module 246 is shown as including a boundary extraction
module 261; a texture mapping module 262; a tessellation module
263; a stitching module 264; a draping module 265; and a simulation
module 266, all configured to communicate with each other (e.g.,
via a bus, shared memory, or a switch).
[0047] Any one or more of the modules described herein may be
implemented using hardware (e.g., one or more processors of a
machine) or a combination of hardware and software. For example,
any module described herein may configure a processor (e.g., among
one or more processors of a machine) to perform the operations
described herein for that module. Moreover, any two or more of
these modules may be combined into a single module, and the
functions described herein for a single module may be subdivided
among multiple modules. Furthermore, according to various example
embodiments, modules described herein as being implemented within a
single machine, database, or device may be distributed across
multiple machines, databases, or devices.
[0048] Each of the above identified elements may be stored in one
or more of the previously mentioned memory devices, and corresponds
to a set of instructions for performing a function described above.
The above identified modules or programs (e.g., sets of
instructions) need not be implemented as separate software
programs, procedures, or modules, and thus various subsets of these
modules may be combined or otherwise rearranged in various example
embodiments. In some example embodiments, the memory 236 may store
a subset of the modules and data structures identified above.
Furthermore, the memory 236 may store additional modules and data
structures not described above.
[0049] The number of servers used to implement the garment
simulation module 246 and the rendering module 248 and how features
are allocated among them will vary from one implementation to
another, and may depend in part on the amount of data traffic that
the network environment 100 handles during peak usage periods as
well as during average usage periods. Additionally, the number of
servers to implement the garment simulation module 246 and the
rendering module 248 may depend in part on the available amount of
computing resources on the client device (e.g., client device
10-1).
[0050] FIG. 4 is a flowchart representing a method 400 for
displaying a garment-draped avatar, according to some example
embodiments. The method 400 is governed by instructions stored in a
computer-readable storage medium and that are executed by one or
more processors of one or more servers. Each of the operations
shown in FIG. 4 may correspond to instructions stored in a computer
memory or computer-readable storage medium.
[0051] Operations in the method 400 may be performed by the server
202, using modules (e.g., garment simulation module 246, or
rendering module 248) described above with respect to FIGS. 1-3. As
shown in FIG. 4, the method 400 includes operations 410, 420, 430,
440, and 450. In certain example embodiments, the method 400
includes an operation for rendering a 3-D garment model and a 3-D
body model using various devices. Method 400 may be performed to
implement the second scenario that was previously mentioned, where
the server 202 renders the 3-D garment model and the client device
10-1 renders the 3-D body model.
[0052] In operation 410, the garment simulation module 246 detects
an available amount of computing resources on a client device. For
example, the garment simulation module 246 may utilize the access
module 244 to access the computing resources on the client device 10-1
using the communications interface 220 via the network 34. The
information relating to the detected computing resources may then
be stored by the garment simulation module 246 in the simulation
parameters files 257.
[0053] As used herein, a "computing resource" refers to any physical
or virtual component of limited availability within a computer
system. In some instances, one or more devices connected to a
computer system can be a resource. Additionally, one or more
internal system components (e.g., processor, memory, video card,
display screen, and power) may be a resource. For example,
processing speed or the resolution of the display screen may be a
factor in determining the percentages of pre-processed and run-time
content to be rendered by the server 202 or the client device 10-1.
Moreover, virtual system resources include files, network
connections, and memory areas. For example, the available amount of
random access memory (RAM) or virtual memory can be a factor in
determining the percentages of pre-processed and run-time content.
Furthermore, the amount of available computing resources can
correspond to computing resources in the graphics processing unit
(GPU) of the client device 10-1. In some embodiments,
the detecting of the available amount of the computing resource can
include detecting the available capacity of a graphics processing
unit within the client device 10-1.
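A minimal detection sketch in Python appears below; only the CPU count comes from the standard library, while the GPU, display, and bandwidth probes are left as hypothetical hooks because how a real client exposes them is platform-specific.

```python
import os


def detect_client_resources(gpu_probe=None, bandwidth_probe=None):
    """Collect a coarse snapshot of the resource types named above.

    `gpu_probe` and `bandwidth_probe` are hypothetical callables
    (e.g., a GPU-memory query or a small timed download); they are
    not part of any standard API.
    """
    return {
        "cpu_cores": os.cpu_count() or 1,
        "gpu_free_mb": gpu_probe() if gpu_probe else None,
        "bandwidth_kbps": bandwidth_probe() if bandwidth_probe else None,
        "display_resolution": None,  # platform-specific query
    }
```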
[0054] In operation 420, based on the detected available amount of
computing resources from operation 410, the garment simulation
module 246 determines that the client device 10-1 is to render only
a 3-D body model, among a set of models that includes the 3-D body
model and a 3-D garment model. The 3-D body model can include the
body of a user, the face of the user, the hair of the user and
other features (e.g., make-up and style). Additionally, the
garment simulation module 246 may determine that the server 202 is
to render the 3-D garment model based on the detected available
amount of computing resources. The garment simulation module 246
may configure at least one processor among the one or more
processors (e.g., the CPU 222) to perform this determination.
[0055] For example, based on the detected available amount of
computing resources at the client device 10-1, the garment
simulation module 246 may determine that for an optimal user
experience (e.g., animation displayed without delay during user
interaction), the garment model is to be rendered by the server 202
and the avatar is to be rendered by the client device 10-1. As
previously mentioned, rendering can be computationally intensive,
and therefore a partial image may be rendered (e.g., pre-processed)
by the server 202 and transmitted to the client device 10-1.
[0056] In operation 430, the garment simulation module 246 provides
the client device 10-1 with the 3-D garment model draped on the 3-D
body model. The garment simulation module 246 may configure at
least one processor among the one or more processors (e.g., the CPU
222) to provide the client device with the 3-D garment model draped
on the 3-D body model.
[0057] For example, the 3-D garment model can include garment
points (e.g., a set of points) that model or otherwise represent at
least one surface of a garment. The garment simulation module 246
can drape the 3-D garment model on the 3-D body model by
positioning at least a portion of the 3-D body model inside the 3-D
garment model that includes the garment points.
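The positioning step can be pictured as a rigid placement of the garment point cloud around the body. The Python sketch below shows only that initial placement (aligning centroids), on the assumption that the force simulation described later settles the actual drape.

```python
import numpy as np


def position_garment(garment_points, body_points):
    """Initial draping placement: translate the garment points so
    their centroid coincides with the centroid of the body region
    they should enclose. The force simulation described below then
    settles the cloth; this is only the rigid placement step.
    """
    offset = body_points.mean(axis=0) - garment_points.mean(axis=0)
    return garment_points + offset
```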
[0058] In some instances, the body profile can be stored on a cloud
server for the user to retrieve using the client device 10-1. In
some other instances, the body profile can be stored on a
third-party server (e.g., similar to server 202) of a merchant that
a user can access when browsing a virtual fitting room. In yet some
other instances, some or all aspects of the body profile can be
stored in the client device 10-1 and not transmitted to any server
(e.g., server 202) for privacy reasons.
[0059] In operation 440, the garment simulation module 246 causes
the server 202 to render at least a portion of the 3-D garment
model in accordance with the determination made in operation 420.
In some instances, the server 202 renders the portion of the 3-D
garment model without rendering any portion of the 3-D body model.
According to an example embodiment, the garment simulation module
246 is part of the server that renders at least a portion of the
3-D garment model. According to another example embodiment, the
garment simulation module 246 can cause a cloud-based server to
render at least a portion of the 3-D garment model.
[0060] Rendering the 3-D garment model may include calculating a
simulated force acting on a subset of the garment points based on
the positioning of at least the portion of the 3-D body model
within the 3-D garment model. A tessellation method can be used for
calculating the simulated force, as later described in FIG. 12. In
some instances, the simulated force includes a simulated
gravitational force upon the draped 3-D garment model. The
simulated force can also include a simulated tension force, elastic
force, aerodynamic force, or friction force upon the draped 3-D
garment model.
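As one plausible reading of this force calculation, the sketch below accumulates a gravitational force plus linear spring (tension) forces over a mass-spring approximation of the garment; the mass-spring model and the stiffness constant are illustrative stand-ins, not necessarily the cloth model used in this application.

```python
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])  # gravitational acceleration, m/s^2


def accumulate_forces(positions, edges, rest_lengths, masses, stiffness=50.0):
    """Sum gravity and spring (tension) forces on each garment point.

    `positions` is an (N, 3) array of garment points, `edges` pairs
    indices of connected points, and `rest_lengths` gives each
    spring's unstretched length.
    """
    forces = masses[:, None] * GRAVITY  # gravity acts on every point
    for (i, j), rest in zip(edges, rest_lengths):
        d = positions[j] - positions[i]
        length = np.linalg.norm(d)
        if length == 0.0:
            continue
        # Hooke's law: tension pulls stretched points back together.
        f = stiffness * (length - rest) * (d / length)
        forces[i] += f
        forces[j] -= f
    return forces
```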
[0061] The rendering module 248 is configured to generate an image
of the 3-D model based on one or more calculated simulated forces,
and the generated image may be descriptive of the garment-draped
avatar as influenced by these calculated simulated forces. The
rendering module 248 may configure at least one processor among the
one or more processors (e.g., the CPU 222) to generate the image
using the draping module 265 and the simulation module 266. The
garment-draped avatar may then be presented based on the one or
more simulated forces. The presentation may be performed by
digitally draping the 3-D model onto the avatar. In various example
embodiments, the rendering involves taking data from all previous
operations, combining the data, and inputting the data into a cloth
simulation engine. Additionally, the simulation results may be
stored by the rendering module 248 in the simulation result
geometry files 258.
[0062] Moreover, the garment simulation module 246 can be further
configured to calculate a simulated force acting on the subset of
the garment points based on a material property of the garment. The
material property of the garment may include a sheerness value, a
linear stiffness value, or a bending stiffness value.
[0063] In operation 450, the garment simulation module 246 causes
the client device 10-1 to render at least a portion of the 3-D body
model in accordance with the determination made in operation 420.
In some instances, the client device 10-1 renders the portion of
the 3-D body model without rendering any portion of the 3-D garment
model.
[0064] As illustrated in FIG. 5, the display module 250 presents to
a user a garment (e.g., jeans) 510 draped on an avatar 520. The
display module 250 may present the generated image 530 on a display
of a device 540 (e.g., client device 10-1). The display module 250
can configure the user interface 232 for the presentation. The
display module 250 can configure at least one processor among the
one or more processors (e.g., the CPU 222) to present the generated
image on the display of a mobile device (e.g., device 540).
[0065] In various example embodiments, the output is stored as a
series of images. Both the resolution and number of images can be
set dynamically. Additionally, the output can include other
content, such as videos, 3-D objects, or text description of the
simulation output.
[0066] As illustrated in FIG. 6, which is also referred to herein
as the V.360 example, the garment may be displayed in a static body
position. The garment 610 draped on an avatar 620 specific to a
user can be rendered under well-lit conditions by revolving the
viewpoint in a circle around a 3-D model that combines the garment
610 and the avatar 620 of the user. A user can select a user
interface button 630 for revolving the avatar 620 in a circle.
Additionally, a projected image can be rendered for each of the
viewpoints. As illustrated in FIG. 7, a first viewpoint 710, a
second viewpoint 720, a third viewpoint 730, and a fourth
viewpoint 740 can be selected using the user interface button
630.
[0067] In some instances, the rendered images form all or part of a
series of images (e.g., images that constitute an animation). Both
the resolution of the images and the number of images in the series
can be set dynamically (e.g., by the garment simulation module 246).
In one example, the garment simulation module 246 may generate
thirty images that are 12 degrees apart with a resolution of
800×600 pixels.
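The thirty-image example corresponds to camera positions spaced 360/30 = 12 degrees apart. A small generator for such a turntable of viewpoints might look like the following; the radius and camera height are arbitrary illustrative values.

```python
import math


def turntable_viewpoints(num_images=30, radius=2.0, height=1.5):
    """Yield camera positions evenly spaced on a circle around the
    model; num_images=30 gives viewpoints 12 degrees apart.
    """
    step = 360.0 / num_images
    for k in range(num_images):
        theta = math.radians(k * step)
        yield (radius * math.cos(theta), height, radius * math.sin(theta))


cameras = list(turntable_viewpoints())  # 30 positions, 12 degrees apart
```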
[0068] Furthermore, a whole range of image resolutions may be
chosen by the garment simulation module 246 for the same content
and the corresponding sets of images rendered. For example, as an
alternative to the example above, 360 images with viewpoints that
are one degree apart may be rendered at a high resolution (e.g.,
10,000×8,000 pixels). Additionally, these images can be
sub-sampled at run-time to create smaller-resolution images.
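Assuming an ordinary imaging library is available (Pillow is used here purely as an example), the run-time sub-sampling step reduces to a resize of the stored high-resolution render:

```python
from PIL import Image  # assumes the Pillow imaging library


def subsample(path_in, path_out, target=(800, 600)):
    """Run-time sub-sampling: shrink one high-resolution pre-render
    (e.g., 10,000 x 8,000 pixels) to a client-displayable size.
    """
    with Image.open(path_in) as img:
        img.resize(target, Image.LANCZOS).save(path_out)
```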
[0069] FIG. 8 illustrates the virtual fitting room example. In the
virtual fitting room example, customers can mix and match clothes
(e.g., shirts 810 and pants 820) virtually in-store, and digitally
try on inventory on an avatar 830. Similar to the V.360 example of
FIG. 6, a user interface button 840 can be selected by a user to
revolve the avatar 830 in a circle.
[0070] Additionally, as illustrated in FIG. 9, the omni-channel
techniques described herein can provide a user 910 with a personal
look-and-feel across multiple (e.g., all) media and channels (e.g.,
mobile device 920, tablet 930, laptop 940, desktop 950, social
network 960, brick-and-mortar 970, digital wall in a store front
980, and digital monitor 990).
[0071] Furthermore, as illustrated in FIG. 10, the omni-channel
techniques can be synchronized between or among different channels
(e.g., devices belonging to the same user). For example, a user
1010 can be shopping online using a tablet 1020, while the image is
also projected on a section 1030 of a television 1040. A television
show can be displayed on another section of the television 1040. In
this example, the garment simulation module 246 can determine the
resolution, frame rate, and the amount of information to be
rendered on each device (e.g., tablet 1020, or television
1040).
[0072] In various embodiments, as previously noted with respect to
the four scenarios described above, the images may be:
pre-rendered, stored and transferred on the fly; rendered on the
client device; or a combination of both.
[0073] When the images are pre-rendered, stored and transferred to
the client device on the fly, the corresponding images (e.g.,
thirty in the previous example) can be transferred and displayed to
the user (e.g., user 910 or user 1010) at run-time. The images can
be based on the user's dimensions and the garment of interest. The
images can be transferred via an interactive interface, in which
the user determines the image displayed by changing the view-point.
The pre-rendered technique can have a very low computational
overhead on the client device 10-1. For example, the client device
10-1 may just decompress and display the image, which can smoothly
be performed even on low-end mobile devices. The size of the image
can be chosen based on the amount of data transferred being within
a given budget of transfer bandwidth to the client. Additionally,
the size of the image can also be based on the display resolution
of the client device 10-1. The pre-rendered implementation allows
for an interactive experience on the client device 10-1 with high
fidelity garment images for any kind of device. The series of
images can be pre-rendered, stored, and transferred at run-time to
the client device 10-1 to reduce computational overhead at the
client device 10-1.
[0074] Alternatively, when the images are rendered on the client
device 10-1, the relevant garment's vertex positions can be
transferred to the client device 10-1, and, at run-time, the images
are rendered on the client device 10-1. The relevant garment's
vertex positions or parameters are transferred to the client device
10-1 at run-time in order for the rendering to occur at the client
device 10-1. The client device 10-1 can use the same scene settings
as in the pre-rendered implementation. This implementation may be
optimized for a client device (e.g., client 10-1) with powerful
computing resources.
[0075] In yet another implementation, the images can be based on a
combination of the pre-rendering on the server 202, and rendering
on the client device 10-1. A combination of pre-rendered images and
relevant vertex positions or parameters can be sent to the client
at run-time depending upon the desired computational overhead at
the client device 10-1. A whole spectrum of intermediate solutions
can exist using a combination of the pre-rendering on the server
202 and rendering on the client device 10-1. For example, the
draped garment can be pre-rendered on the server 202, but the body
can be rendered on the client device 10-1.
[0076] In the second and fourth scenarios, the user's avatar can be
rendered on the client device 10-1, which can help strengthen the
security aspect of virtual shopping using avatars since the body
measurements of a user are stored in the client device 10-1. In
such embodiments, the information about the user's dimensions is
less likely to be compromised, because the information is not sent
to a server (e.g., server 202). Additionally, rendering the avatar
on the client device 10-1 can allow for a more customizable
rendering experience for the user. Therefore, when the client
device 10-1 has a sufficient amount of computing resources, the
server 202 can deliver a whole range of customized personal garment
rendering to help the user make a more informed purchase decision.
In some instances, a secure transaction between the server and
client (e.g., a one-time, session-specific transfer of personal
information) can be used to convey private information about the
user for purposes of simulation, rendering and display.
[0077] In some instances, the server 202 can provide the client
device 10-1 with a partial image rendered from at least a portion
of the 3-D garment model draped on the 3-D body model. The client
device 10-1 can be configured to receive the partial image and
generate a full image based on the partial image and by rendering
at least the portion of the 3-D body model as described in
operation 450.
[0078] According to some example embodiments, the garment
simulation module 246 can be further configured to cause the server
202 to generate a group of images of the draped 3-D garment model.
The group of images can depict multiple viewpoints around the
draped 3-D garment model. Additionally, the generated group of
images can have a quantity based on the detected available amount
of the computing resource on the client device 10-1.
[0079] For example, when it is determined that the client device 10-1 is
a high-powered (e.g., high-end processor) desktop computer, the
server 202 can capture 360-degree images of the garment-draped
avatar at every one degree to create a high-quality, 360-degree
view of the garment-draped avatar. Alternatively, when it is
determined that the client device 10-1 is a low-end mobile device,
the server 202 can capture 36 images of the garment draped on the
avatar at every 10 degrees to create a low-quality 360-degree view
of the garment-draped avatar.
[0080] The rendered images can be generated using a GPU in the
server 202 or the client device 10-1. Additionally, in some
instances, the detection of the available amount of the computing
resource can be based on the available capacity of a GPU within the
client device 10-1.
[0081] According to some example embodiments, the server 202 can be
further configured, by the garment simulation module 246, to
transmit the image rendered in operation 440 at a specific
resolution based on the detected available amount of the computing
resource on the client device 10-1. For example, when it is
determined that the display of the client device 10-1 supports
only a low resolution, the server 202 can transmit the rendered
image at a correspondingly low resolution (e.g., 800×600 pixels).
[0082] According to some example embodiments, the server 202 can be
further configured, by the garment simulation module 246, to
transmit the images rendered in operation 440 with a specific frame
rate at which an animation of the draped avatar is to be displayed.
The frame rate may be determined based on the detected available
amount of computing resources on the client device 10-1, and may
even be derived directly from that amount. For example, the
higher the detected available amount of computing resources on the
client device 10-1, the higher the frame rate may be set.
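A direct, monotone mapping of that kind could be as simple as the tiered lookup below; the capability score and the tier boundaries are invented for illustration.

```python
def choose_frame_rate(client_capability):
    """Map a hypothetical 0..1 client capability estimate to an
    animation frame rate: more resources, higher frame rate.
    """
    tiers = [(0.75, 60), (0.50, 30), (0.25, 15)]
    for threshold, fps in tiers:
        if client_capability >= threshold:
            return fps
    return 10  # minimal rate for very constrained devices
```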
[0083] In the animation of the draped avatar example, the 3-D body
model, the 3-D garment model, or both as a combined model, can have
a first body position, and the garment simulation module 246 may be
further configured to change (e.g., reposition) the 3-D body model,
the 3-D garment model, or both, to a second body position.
Additionally, the garment simulation module 246 can reposition at
least a portion of the 3-D body model inside the garment points
based on the change of the 3-D body model to the second body
position, and calculate the simulated forces acting on a second
subset of the garment points based on the repositioning. The frame
rate can correspond to the number of images generated as the avatar
repositions from the first body position to the second body
position.
[0084] The rendering module 248 can be configured to animate the
generated image as the 3-D body model moves from the first body
position to the second body position, and the display module 250
can be configured to cause presentation of the animation on the
client device 10-1. Additionally, the rendering module 248 can
create a set of avatars (e.g., static, animated, or dynamic) for a
content stage (e.g., fashion performance, 360° view, fit
map, or suggest a size).
[0085] Additionally, the rendering module 248 can be further
configured to animate the generated image as the 3-D body model
moves from the first body position to the second body position, and
subsequently to a third body position, which can be presented using
the display module 250.
[0086] For example, the garment simulation module 246 and rendering
module 248 can animate body meshes of an avatar under different
animation sequences, such as swinging a golf club. In some
instances, the system can animate the body meshes to perform a
fashion presentation by superimposing motion-captured data (e.g.,
of different points on a body mesh) on the given mesh. Any kind of
motion can be superimposed to form a catalogue of motions that a
user can eventually choose from. For example, for a ten-second
motion clip when the frame rate is set at 30 frames per second,
the system can compute 300 frames (10 seconds × 30 frames per
second) of the avatar.
[0087] In various example embodiments, for each of the above
animation frames, the garment simulation module 246 and rendering
module 248 can perform the stable garment simulation to compute the
vertex positions of the garment. The garment positions can then be
stored. Likewise, the forces can be computed and stored by the
server 202 or client device 10-1 based on the available amount of
computing resources on the client device 10-1. The garment
simulation module 246 and rendering module 248 can exploit spatial
coherence within consecutive frames to speed up the simulation
run-time, for example by using the stable position of the previous
frame as the starting position for the current frame and computing
the resultant motion parameters of the garment.
[0088] For example, by simulating the garment model, the garment
simulation module 246 can simulate a fashion experience. In some
instances, simulation of the garment can include placing the
garment around the body at an appropriate position, and running
simulations based on calculations. The simulation can advance the
position and other related variables of the vertices of the garment
based on different criteria (e.g., the laws of physics, garment
material properties, body-garment interaction). The result is a
large system of equations (e.g., one variable for each force
component) that the garment simulation module 246 can solve in an
iterative fashion. The simulation can be completed when the
simulation becomes stable. For example, the simulation can become
stable when the garment reaches a steady state with a net force of
zero.
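Tying the last two paragraphs together, the sketch below advances the garment with damped explicit integration of the accumulate_forces() sketch shown earlier, stops when the net force is near zero (the stability criterion just described), and accepts the previous frame's stable positions as its starting point (the spatial-coherence warm start). The time step, damping, and tolerance values are illustrative assumptions.

```python
import numpy as np


def simulate_frame(positions, velocities, edges, rest_lengths, masses,
                   dt=1.0 / 30.0, damping=0.9, tol=1e-3, max_iters=1000):
    """Iterate the garment toward a steady state for one frame.

    Pass in the previous frame's stable `positions` to warm-start the
    solve; iteration stops once the net force is close to zero.
    """
    for _ in range(max_iters):
        forces = accumulate_forces(positions, edges, rest_lengths, masses)
        if np.linalg.norm(forces) < tol:  # steady state: net force ~ 0
            break
        velocities = damping * (velocities + dt * forces / masses[:, None])
        positions = positions + dt * velocities
    return positions, velocities
```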
[0089] Moreover, the precision can be adjusted to accommodate
varying levels of desired accuracy of the garment model and can be
based on the available amount of computing resources on the client
device 10-1. The precision can be automatically adjusted by the
garment simulation module 246 and rendering module 248 based on the
client device 10-1, 10-2, 10-3 (e.g., lower precision for a mobile
device, higher precision for a large screen display). In some
instances, the error tolerance is a parameter that can be set.
Tolerance can be measured in actual units of distance (e.g., 0.01
inches). Alternatively, tolerance can be measured in numbers of
pixels.
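For instance (an illustrative assumption rather than a disclosed
mapping), the tolerance could be chosen from a coarse device class:

    # Sketch: trade precision against client resources by device class.
    # The classes and tolerance values below are illustrative assumptions.
    DEVICE_TOLERANCE_INCHES = {
        "mobile": 0.05,         # lower precision for a mobile device
        "desktop": 0.02,
        "large_display": 0.01,  # higher precision for a large screen display
    }

    def tolerance_for(device_class: str) -> float:
        return DEVICE_TOLERANCE_INCHES.get(device_class, 0.02)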
[0090] According to some example embodiments, the garment
simulation module 246 can be further configured to distort the 3-D
garment model. For example, the rendering module 248 can distort
the 3-D garment model by stretching or twisting the 3-D garment
model. Distorting the digital garment model can generate 3-D models
that are representative of the family of sizes of a garment
typically carried and sold by retailers.
[0091] Distorting techniques can be used for recommending a size.
For example, tops are usually distributed in a few generic sizes
(e.g., XS, S, M, L, XL, or XXL). By computing the tension map for
each size for the user's avatar, a recommended size can be
suggested. The recommended size can be based on the size that fits
the avatar's dimensions the closest with minimum distortion to the
garment.
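A minimal sketch of this minimum-distortion rule (assuming a
hypothetical tension_map callback that returns the per-vertex force
magnitudes for a draped size) might be:

    # Sketch: recommend the size whose tension map shows the least aggregate
    # distortion of the garment on the user's avatar.
    def recommend_size(sizes, tension_map):
        return min(sizes, key=lambda size: sum(tension_map(size)))

    # e.g., recommend_size(["XS", "S", "M", "L", "XL", "XXL"], tension_map)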
[0092] The distortion of the 3-D digital garment model can be
uniform for the entire model (i.e., the entire model is grown or
shrunk), or specific to individual zones (e.g., specific garment
areas) with different distortions (e.g., scale factors) for the
individual zones. Furthermore, the scaling of dimensions of the
garments can be arbitrary (as in the case of creating a custom
size), or can be according to specifications provided by a garment
manufacturer. The specifications can be based on grading rules,
size charts, actual measurements, or digital measurements.
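As an illustration of zone-based distortion (a sketch; the zone
mapping and scale factors are assumptions, not disclosed values):

    import numpy as np

    # Sketch: distort a 3-D garment model by scaling vertices per zone
    # (e.g., following grading rules), or uniformly for the whole model.
    def grade_garment(vertices, zone_of, zone_scales):
        vertices = np.asarray(vertices, dtype=float)
        centroid = vertices.mean(axis=0)
        graded = vertices.copy()
        for i, v in enumerate(vertices):
            s = zone_scales.get(zone_of(i), 1.0)       # default: no distortion
            graded[i] = centroid + s * (v - centroid)  # scale about the centroid
        return graded

    # Uniform growth of 5%: grade_garment(v, lambda i: "all", {"all": 1.05})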
[0093] As illustrated in FIG. 11, using a generated fit map, the
garment simulation module 246 can determine the recommended size.
Accordingly, the display module 250 can present an avatar 1110 with
a recommended size 1120 to the user. Furthermore, the garment
simulation module 246 can determine a recommended size based on the
available garment sizes stored in the file system 242, or based on
the user's current wardrobe 1130. By computing the tension map for
each size for the user's avatar 1110, a recommended size 1120 can
be suggested. The recommended size 1120 can be based on the size
that fits the avatar's 1110 dimensions the closest with minimum
distortion to the garment.
[0094] In addition to suggesting a recommended size, techniques for
incorporating a user's fitting preferences (e.g., loose around the
waist) are also described. Algorithms to compute a personalized
size recommendation for the user can further be developed based on
a user's buying and return pattern. In some instances, the
personalized size recommendation can be based on dividing the body
into zones and having a list of acceptable sizes for each zone.
Furthermore, fit and size recommendation can be based on specific
information about the class or type of garment. For example, given
that yoga pants have a tight fit, when the class of garment is
determined to be yoga pants, the garment simulation module 246 can
infer that the garment has a tight fit based on parameters obtained
from the manufacturer or a lookup table. Similarly, the garment
simulation module 246 can infer that flare jeans have a loose fit
at the bottom of the jeans.
[0095] For example, the body can be divided into zones. For a
woman, the zones can include shoulders, bust, waist, hips, thighs,
calves, and so on. For a given size of a garment of a certain
category (e.g., jeans), the technique can determine if the garment
fits based on the user's buying and return pattern. When the
garment fits, the dimensions of the garment in each applicable zone
can be added to a list of acceptable dimensions for the user. When
the garment fits, the algorithm used by the garment simulation
module 246 may assume that all the dimensions fit the user.
Alternatively, when the garment does not fit (e.g., the user
returns the garment), the dimensions of the garment in each
applicable zone are added to a list of unacceptable dimensions and
stored in a database by the garment simulation module 246.
Similarly, when the garment does not fit, the algorithm may assume
that at least one of the dimensions did not fit the user.
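The bookkeeping described in this paragraph can be sketched as
follows (the data layout is an illustrative assumption):

    from collections import defaultdict

    # Sketch: per-zone lists of acceptable and unacceptable dimensions,
    # updated from the user's buying and return pattern.
    acceptable = defaultdict(list)    # zone -> dimensions that fit
    unacceptable = defaultdict(list)  # zone -> dimensions that did not fit

    def record_outcome(garment_dimensions, kept):
        """garment_dimensions: dict of zone -> measurement; kept: not returned."""
        for zone, dim in garment_dimensions.items():
            if kept:
                acceptable[zone].append(dim)    # assume every dimension fit
            else:
                unacceptable[zone].append(dim)  # at least one dimension misfit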
[0096] A classifier (e.g., sequential minimal optimization (SMO))
may be implemented by the garment simulation module 246 for each
garment category, based on the dimensions that either fit or do not
fit the user. For a given new garment in a specific
category, the garment simulation module 246 can predict the correct
size based on the classifier and recommend the size to the user.
Based on feedback (e.g., the user's buying and return pattern), the
user's preference and the classifiers can be updated by the garment
simulation module 246. In some instances, five to ten garments for
a given category can help achieve over 90% accuracy on the correct
user size. Accordingly, the number of garments needed to train and
converge on the user's preferences can be low (e.g., fewer than 10).
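For illustration, such a per-category classifier can be sketched
with scikit-learn's SVC, whose underlying libsvm solver is an
SMO-type algorithm; the feature layout (zone dimensions per
purchase) and training data below are assumptions:

    from sklearn.svm import SVC

    # Sketch: train on the zone dimensions of past purchases that the user
    # kept, labeled with the size that fit, then predict a new garment's size.
    X = [[30.0, 38.0, 30.0], [32.0, 40.0, 30.0], [34.0, 42.0, 32.0],
         [31.0, 39.0, 30.0], [33.0, 41.0, 31.0]]   # e.g., waist, hips, inseam
    y = ["S", "M", "L", "S", "M"]                   # sizes that fit

    classifier = SVC(kernel="linear").fit(X, y)
    print(classifier.predict([[32.5, 40.5, 31.0]]))  # e.g., ['M']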
[0097] Now referring to FIG. 12, the 3-D garment model rendered by
the server in operation 440 can be a tessellated 3-D garment model.
The tessellated 3-D garment model can include a group of vertices
associated with points on the surface of the garment. The garment
points can be generated using a tessellation technique by the
tessellation module 263. The tessellated geometric shapes can be
stored in the extracted geometry files 252. For example, a shirt
1210 can be tessellated with triangles (e.g., about 20,000
triangles when a triangle edge is around 1 centimeter), and the
vertices of the triangles can be the garment points of the 3-D
garment model. The garment points can include location information
such as an x, y, and z position value.
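A tessellated garment model of this kind can be represented, for
illustration (the coordinates below are placeholders), as garment
points plus triangles indexing into them:

    import numpy as np

    # Sketch: garment points (x, y, z per vertex) and a triangle tessellation.
    garment_points = np.array([
        [0.00, 1.50, 0.10],
        [0.01, 1.49, 0.11],
        [0.02, 1.50, 0.09],
        [0.03, 1.48, 0.10],
    ])
    triangles = np.array([[0, 1, 2], [1, 3, 2]])  # vertex indices per triangle

    # The spring edges used by the simulation can be derived from the mesh:
    edges = {tuple(sorted((t[i], t[(i + 1) % 3]))) for t in triangles for i in range(3)}
    print(len(edges))  # 5 unique edges for these two triangles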
[0098] The garment simulation module 246 can position at least a
portion of the generated avatar inside the garment points. In some
instances, positioning can include placing the garment on or around
the avatar, given that the avatar may be fixed in some embodiments.
In these instances, the garment can be stretched and deformed based
on the simulation.
[0099] The simulations can be implemented through specific modules
(e.g., the simulation module 266) stored in the memory 236. Some
examples of implementations and equations are described below. For
example, below is the system of equations to be used for a
three-spring implementation of a sample triangle 1250 with three
vertices (i.e., a vertex 1252, a vertex 1254, and a vertex 1256)
associated with the tessellated shirt 1210, as illustrated in FIG.
12.
springforce_1 = (k_s/restlength_1)*(|x_2-x_1|-restlength_1)*springdirection_1 + (k_d/restlength_1)*DotProduct(v_2-v_1, springdirection_1)*springdirection_1 (Equation 1)

springforce_2 = (k_s/restlength_2)*(|x_3-x_2|-restlength_2)*springdirection_2 + (k_d/restlength_2)*DotProduct(v_3-v_2, springdirection_2)*springdirection_2 (Equation 2)

springforce_3 = (k_s/restlength_3)*(|x_1-x_3|-restlength_3)*springdirection_3 + (k_d/restlength_3)*DotProduct(v_1-v_3, springdirection_3)*springdirection_3 (Equation 3)

[0100] where k_s is the elastic spring constant, k_d is the damping
spring constant, and each vertex has a position (x) and a velocity
(v).
[0101] In the equations above, the restlength value appears as a
denominator; for zero-length springs, a non-zero value can be
substituted in the denominator. Where the restlength appears
outside the denominator (and would be 0 for a zero-length spring),
a visual restlength value can be used instead. This allows the
system to handle zero-length springs without dividing by 0.
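A direct transcription of Equations 1-3 into Python, including the
zero-length-spring guard of this paragraph, might look like the
following sketch (the names and the guard value are illustrative):

    import numpy as np

    # Sketch: spring force on vertex 1 from the spring joining vertices 1 and 2,
    # per Equations 1-3, with a non-zero denominator for zero-length springs.
    def spring_force(x1, x2, v1, v2, restlength, k_s, k_d, eps=1e-8):
        delta = np.asarray(x2, dtype=float) - np.asarray(x1, dtype=float)
        length = np.linalg.norm(delta)                 # |x_2 - x_1|
        direction = delta / max(length, eps)           # spring direction
        denom = restlength if restlength > 0 else eps  # zero-length guard
        elastic = (k_s / denom) * (length - restlength) * direction
        relative_v = np.asarray(v2, dtype=float) - np.asarray(v1, dtype=float)
        damping = (k_d / denom) * np.dot(relative_v, direction) * direction
        return elastic + damping

    # For the three-spring triangle 1250, apply this to the edges
    # (vertex 1252, vertex 1254), (1254, 1256), and (1256, 1252).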
[0102] To further explain the equations above, a walkthrough of the
equations is described. The garment simulation module 246 and
rendering module 248 can maintain the positions and velocities of
all the points that represent the garment. At each iteration, the
simulator can update the positions of the points over time by
computing the net force on each point at each instant in time.
Then, based on the mass of the particle and the laws of motion,
F=ma, an acceleration can be calculated. The acceleration
determines a change in velocity, which can be used to update the
velocity of each point. Likewise, the velocity determines a change
in position, which can be used to update the positions. Therefore,
at each point in the simulation, the simulator can compute the net
force on each particle. The forces exerted on each particle can be
based on a gravitational force, spring forces, or other forces
(e.g., drag forces to achieve desired styling). The equation for
gravitational force is F=mg, and the spring force is described
above.
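The walkthrough above amounts to an explicit time-integration step,
sketched here (spring_forces is a hypothetical callback; masses is
an (N,) array of particle masses):

    import numpy as np

    # Sketch: net force -> acceleration (F = ma) -> velocity -> position.
    def step(x, v, masses, spring_forces, dt=1.0 / 30.0):
        g = np.array([0.0, -9.81, 0.0])                     # gravity: F = mg
        forces = spring_forces(x, v) + masses[:, None] * g  # net force per point
        a = forces / masses[:, None]  # acceleration from F = ma
        v = v + dt * a                # acceleration determines change in velocity
        x = x + dt * v                # velocity determines change in position
        return x, v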
[0103] The spring force F has two components, an elastic component
(e.g., the part of the equation multiplied by k_s) and a damping
component (e.g., the part of the equation multiplied by k_d). The
elastic component is related to the oscillation of the spring. The
strength of the elastic force is proportional to the amount the
spring is stretched from the restlength value, which can be
determined as |x_2-x_1| (e.g., the current length of the spring)
minus the restlength value. For example, the more the spring is
compressed or stretched, the higher the force pushing the spring to
return to its rest state. Additionally, k_s is a spring constant
that allows the force to be scaled up or down based on the strength
of the spring; the result is then multiplied by the spring
direction to give the force a direction (e.g., in the direction of
the spring).
[0104] The damping component models the damping effect (e.g., heat
being generated by the spring moving, drag). Damping can act as a
drag force, in which the higher the velocity, the higher the drag
force; accordingly, damping can be proportional to velocity. In the
case of a spring, there are two particles moving, so instead of a
single velocity the simulator computes a relative velocity between
the two endpoints (e.g., v_2-v_1 in FIG. 12). For example, the
larger the relative velocity, the faster the points are moving
apart or coming together, and as a result the larger the damping
force (e.g., the damping is proportional to the relative velocity).
Additionally, k_d is the damping spring constant that scales the
damping force up or down; the result is multiplied by the spring
direction to give the force a direction.
[0105] The resultant output can be stored or displayed to a user.
In some instances, for each of the bodies, the garment simulation
module 246 can capture the position of the vertices at the end of
the simulation, and store the information in a database. For a mesh
with K vertices, a total of 3K numbers are stored (the x, y, and z
positions for each vertex). These constitute the look of the given
garment on any given body.
[0106] In various example embodiments, at the steady state of each
simulation, the garment simulation module 246 can also compute the
forces being exerted in the springs (e.g., edges) of the mesh. For
example, for an edge between two vertices (e.g., V_1 and V_2), the
resultant force on V_1 (and correspondingly V_2) equals:
F(V_1)=k(V_1, V_2)*Delta(V_1, V_2), where (Equation 4) [0107] k(V_1,
V_2) is the spring constant of the spring joining V_1 and V_2
(e.g., a function of the material property of the garment); and
[0108] Delta(V_1, V_2) is a deformation-dependent force function
based on the change in the position vectors of V_1 and V_2 as
compared to their original rest state. These forces can then be
accumulated for each vertex to compute the resultant force.
[0109] In various example embodiments, for each of the bodies, the
garment simulation module 246 can store the resultant force on each
vertex (e.g., 1252, 1254, 1256) in the simulation result geometry
files 258. The resultant force on each vertex can serve as a
measure of the tightness (e.g., for large force magnitude) or
looseness in different regions of the garment. The resultant force
computed can be interpreted as a stress, pressure, or compression
on the garment. Additionally, the resultant force can be a
representation of a force felt by the body at the corresponding
point or region.
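For illustration, the per-vertex resultant force can be accumulated
from the edge forces of Equation 4 as follows (edge_force is a
hypothetical callback returning the force exerted on the first
vertex of an edge):

    import numpy as np

    # Sketch: accumulate edge (spring) forces into a resultant per vertex;
    # the magnitude serves as the tightness/looseness measure.
    def vertex_tightness(num_vertices, edges, edge_force):
        resultant = np.zeros((num_vertices, 3))
        for a, b in edges:
            f = edge_force(a, b)
            resultant[a] += f  # force on V_1 from the spring joining V_1, V_2
            resultant[b] -= f  # equal and opposite force on V_2
        return np.linalg.norm(resultant, axis=1)  # large magnitude = tight region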
[0110] FIGS. 13-14 illustrate the resultant forces in a fit map.
For example, the tight regions can be depicted using warm colors,
and the loose regions depicted using cool colors.
[0111] Techniques for displaying a fit map on a garment for the
same static position are provided, in accordance with example
embodiments. The fit map can illustrate tension forces, inferred
force, or pressure on the body. The fit map can show and convey
regions of the garment that can be tight or loose on a user. This
additional information can aid the user in making an informed
purchase decision without physically trying on the garment.
[0112] As illustrated by FIG. 13, the garment model can be draped
on the body model. According to some example embodiments, the
method 400 of FIG. 4 can further include operations where the
garment simulation module 246 is configured to generate a fit map
based on the calculated simulated forces, and the display module
250 can present the garment-draped avatar with a generated fit map
1310 as illustrated in FIG. 13.
[0113] A fit map can present fit information using display cues.
For example, a set of
output forces can be chosen. Each output force can correspond to a
range of forces (e.g., tight, loose) that can be displayed to the
user. Additionally, style information can be presented based on the
force. For example, loose or tight clothing may convey some style
information. FIG. 13 shows an example of a fit map 1310 with color
display cues. As illustrated in FIG. 13, the display cues can be
overlaid, by the rendering module 268, on the rendered garment
itself. As illustrated, the generated fit map 1310 can be based on
a magnitude of the calculated simulated forces. For example, when
the magnitude of the calculated simulated forces is high, the fit
map 1310 can label that section of the garment as a tight section
1320. Conversely, the fit map 1310 can label a section of the
garment as a loose section 1330 when the magnitude of the
calculated simulated forces is low.
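One simple realization of such display cues (the thresholds are
illustrative assumptions) bins each vertex's force magnitude into a
color label:

    # Sketch: map simulated force magnitudes to fit-map display cues.
    def fit_map_cues(force_magnitudes, tight_threshold=5.0, loose_threshold=1.0):
        cues = []
        for f in force_magnitudes:
            if f >= tight_threshold:
                cues.append("red")    # tight section (e.g., 1320)
            elif f <= loose_threshold:
                cues.append("blue")   # loose section (e.g., 1330)
            else:
                cues.append("green")  # neutral fit
        return cues

    print(fit_map_cues([0.2, 3.0, 7.5]))  # ['blue', 'green', 'red']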
[0114] Furthermore, the fit map can convey derivative information
such as the relative differences in force, style, and fit between
two garments. For example, a user can use the derivative
information from the fit map to select between two sizes or styles.
In some instances, the derivative information can be presented
using colors or cues.
[0115] As illustrated in FIG. 14, a fit map 1410 can be generated
by assigning a color to a garment point (e.g., a vertex in the
tessellated garment model). The color values can be determined
based on the calculated simulated force. Each color corresponds to
a range of forces. For each vertex, the corresponding color can be
computed and stored. The color information can be rendered from
revolving viewpoints around the body to compute a color-coded
tension map.
[0116] For example, in the fit map 1410, each vertex of the shape
(e.g., triangle) is assigned a red-green-blue (RGB) value. In some
instances, the generated fit map 1410 can be colored based on a
magnitude of the calculated simulated forces. For example, sections
of the garment that are tight around the body of a user can be
colored red 1420, while loose sections of the garment can be
colored blue 1430. Thus in the triangulation method, each triangle
potentially has three different RGB values. The rest of the points
of the triangle can then be interpolated. Interpolation allows for
the RGB values of the remaining points in the triangle to be filled
in using a linear combination method (e.g., the points of the
triangle are weighted based on the distance to the three vertices
and the RGB values are assigned accordingly).
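The linear-combination method described here is barycentric
interpolation; a minimal sketch (illustrative names, with 2-D
triangle coordinates for brevity) is:

    import numpy as np

    # Sketch: interpolate the RGB value of an interior point of a triangle
    # from the three vertex colors, weighted by barycentric coordinates.
    def interpolate_rgb(p, tri_xy, tri_rgb):
        a, b, c = (np.asarray(v, dtype=float) for v in tri_xy)
        p = np.asarray(p, dtype=float)

        def area(u, v, w):  # signed area of triangle (u, v, w)
            return 0.5 * ((v[0] - u[0]) * (w[1] - u[1]) - (w[0] - u[0]) * (v[1] - u[1]))

        total = area(a, b, c)
        w_a, w_b, w_c = area(p, b, c) / total, area(a, p, c) / total, area(a, b, p) / total
        return (w_a * np.asarray(tri_rgb[0]) + w_b * np.asarray(tri_rgb[1])
                + w_c * np.asarray(tri_rgb[2]))

    # Centroid of a red/green/blue triangle blends all three vertex colors:
    print(interpolate_rgb((1/3, 1/3), [(0, 0), (1, 0), (0, 1)],
                          [(255, 0, 0), (0, 255, 0), (0, 0, 255)]))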
[0117] As previously mentioned, for both of the above examples, the
output can be stored as a series of images. Both the resolution and
number of images can be set dynamically. According to one example
embodiment, the garment simulation module 246 can generate thirty
images that are 12 degrees apart with a resolution of 800×600
pixels. Furthermore, a whole range of image resolutions may be
chosen, and the corresponding sets of images rendered.
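For illustration, the example settings above can be expressed as:

    # Sketch: illustrative render settings reproducing the example above.
    num_images, resolution = 30, (800, 600)
    view_angles = [i * (360 / num_images) for i in range(num_images)]  # 12-degree steps
    print(view_angles[:4])  # [0.0, 12.0, 24.0, 36.0]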
[0118] In various embodiments, omni-channel marketing involves
delivering a consistent, correlated, connected look and feel across
all interactions between a business (e.g., a brand) and a consumer.
For example, in apparel retail, consumers can interact with a brand
or retailer via a physical brick-&-mortar store, an online web
store, a smart TV, a mobile application, an email message, a
digital kiosk, a digital wall, or a digital or physical catalog.
Using the techniques described herein, omni-channel marketing can
help ensure that each experience with the brand, on any access
channel, reinforces the brand image in the mind of the customer.
[0119] Conversely, customers can use multiple devices at various
times as they progress through the
impression-discovery-research-purchase-feedback cycle of commerce.
Delivering a consistent look and feel and coherent information
improves the customer's satisfaction and loyalty. In the present
disclosure, we describe a method (e.g., the method 400) of
delivering personalized simulated apparel content with a consistent
look, feel, and quality across any channel of engagement between
the brand and the customer.
[0120] In various embodiments, the role of brick-&-mortar
stores is evolving to incorporate the use of digital devices and
media in a physical store, in keeping with the evolution of retail
in an omni-channel engagement world. More broadly, the traditional
roles of a store as both a product discovery center and a
fulfillment center are being split, with the physical storefront
focusing on a product discovery experience that uses a combination
of digital technologies and physical products.
[0121] FIG. 15 illustrates an example embodiment of the retail
scenario. For example, e-commerce technologies like the virtual
fitting technologies described herein can be used in store in
conjunction with technologies like UPC code scanning, either on a
customer's mobile device or on a fixed asset like a digital wall,
dressing room, or kiosk. Customers can mix and match clothes 1510
virtually in store, or digitally try on 1520 inventory and concept
garments from other locations. Another implementation of this
technology is in-store personalized style 1530 and size
recommendations either based on inventory in the store or within
the retailer or brand's entire catalog. Virtual fitting technology
and size and style recommendation technology can be triggered based
on inputs such as location in a store, time of the year, or any
other relevant variable. Further, such recommendations can be
triggered by a customer action, such as the scan of a bar code or
location-based trigger. Additionally, the recommendation can be
triggered by the owner or operator of the store. Further, the
in-store virtual fitting experience can be performed by a customer
for another customer, such as a friend or a family member,
regardless of location (e.g., co-located in the store, or away from
the store). Furthermore, the technology can employ the body
information as well as the style and fit preference for the
digitally augmented in-store experience.
[0122] FIG. 16 is a high-level diagram for displaying an animation
or a garment-draped avatar, in accordance with certain example
embodiments. The garment simulation module 246 can take tessellated
garment data as input using the access module 244, and can output
3-D models of clothing on an avatar using the rendering module 248.
The
simulation module 246 can use digitization 1610, modeling 1620,
simulation 1630, and automated 1640 techniques to generate a 3-D
simulation. The 3-D simulation can include a catwalk 1650, a
360-degree view 1660, a recommended size 1670 suggestion, or a
virtual fitting room 1680 experience. The simulation module 246 can
move points around to fit an avatar based on a simulated force
(e.g., friction, stitching force). Additionally, based on this
modeling, the points are connected via springs and can be stretched
based on a simulated force (e.g., gravity, material property of
garment). The simulation module 246 and rendering module 248 can
solve a system of equations, given that the equations are all
interconnected. In one example, the system of equations can be
based on the spring force on each vertex. According to various
example embodiments, one or more of the methodologies described
herein may facilitate the online purchase of garments.
[0123] When these effects are considered in aggregate, one or more
of the methodologies described herein may obviate a need for
certain efforts or resources that otherwise would be involved in
determining body measurements of a user from garment images.
Efforts expended by a user in generating user-specific body models
may be reduced by one or more of the methodologies described
herein. Computing resources used by one or more machines,
databases, or devices (e.g., within the network environment 100)
may similarly be reduced since the different scenarios can be
dependent on the available amount of computing resources on the
client device or server. Examples of such computing resources
include processor cycles, network traffic, memory usage, data
storage capacity, power consumption, and cooling capacity.
[0124] FIG. 17 is a block diagram illustrating components of a
machine 1700, according to some example embodiments, able to read
instructions 1724 from a machine-readable medium 1722 (e.g., a
non-transitory machine-readable medium, a machine-readable storage
medium, a computer-readable storage medium, or any suitable
combination thereof) and perform any one or more of the
methodologies discussed herein, in whole or in part. Specifically,
FIG. 17 shows the machine 1700 in the example form of a computer
system (e.g., a computer) within which the instructions 1724 (e.g.,
software, a program, an application, an applet, an app, or other
executable code) for causing the machine 1700 to perform any one or
more of the methodologies discussed herein may be executed, in
whole or in part. The server 202 can be an example of the machine
1700.
[0125] In alternative embodiments, the machine 1700 operates as a
standalone device or may be connected (e.g., networked) to other
machines. In a networked deployment, the machine 1700 may operate
in the capacity of a server machine or a client machine in a
server-client network environment, or as a peer machine in a
distributed (e.g., peer-to-peer) network environment. The machine
1700 may be a server computer, a client computer, a personal
computer (PC), a tablet computer, a laptop computer, a netbook, a
cellular telephone, a smartphone, a set-top box (STB), a personal
digital assistant (PDA), a web appliance, a network router, a
network switch, a network bridge, or any machine capable of
executing the instructions 1724, sequentially or otherwise, that
specify actions to be taken by that machine. Further, while only a
single machine is illustrated, the term "machine" shall also be
taken to include any collection of machines that individually or
jointly execute the instructions 1724 to perform all or part of any
one or more of the methodologies discussed herein.
[0126] The machine 1700 includes a processor 1702 (e.g., a CPU, a
GPU, a digital signal processor (DSP), an application specific
integrated circuit (ASIC), a radio-frequency integrated circuit
(RFIC), or any suitable combination thereof), a main memory 1704,
and a static memory 1706, which are configured to communicate with
each other via a bus 1708. The processor 1702 may contain
microcircuits that are configurable, temporarily or permanently, by
some or all of the instructions 1724 such that the processor 1702
is configurable to perform any one or more of the methodologies
described herein, in whole or in part. For example, a set of one or
more microcircuits of the processor 1702 may be configurable to
execute one or more modules (e.g., software modules) described
herein.
[0127] The machine 1700 may further include a graphics display 1710
(e.g., a plasma display panel (PDP), a light emitting diode (LED)
display, a liquid crystal display (LCD), a projector, a cathode ray
tube (CRT), or any other display capable of displaying graphics or
video). The machine 1700 may also include an alphanumeric input
device 1712 (e.g., a keyboard or keypad), a cursor control device
1714 (e.g., a mouse, a touchpad, a trackball, a joystick, a motion
sensor, an eye tracking device, or other pointing instrument), a
storage unit 1716, an audio generation device 1718 (e.g., a sound
card, an amplifier, a speaker, a headphone jack, or any suitable
combination thereof), and a network interface device 1720.
[0128] The storage unit 1716 includes the machine-readable medium
1722 (e.g., a tangible and non-transitory machine-readable storage
medium) on which are stored the instructions 1724 embodying any one
or more of the methodologies or functions described herein. The
instructions 1724 may also reside, completely or at least
partially, within the main memory 1704, within the processor 1702
(e.g., within the processor's cache memory), or both, before or
during execution thereof by the machine 1700. Accordingly, the main
memory 1704 and the processor 1702 may be considered
machine-readable media (e.g., tangible and non-transitory
machine-readable media). The instructions 1724 may be transmitted
or received over the network 34 via the network interface device
1720. For example, the network interface device 1720 may
communicate the instructions 1724 using any one or more transfer
protocols (e.g., hypertext transfer protocol (HTTP)).
[0129] The machine-readable medium 1722 may include a magnetic or
optical disk storage device, solid state storage devices such as
flash memory, or other non-volatile memory device or devices. The
computer-readable instructions stored on the machine-readable
medium 1722 are in source code, assembly language code, object
code, or another instruction format that is interpreted by one or
more processors.
[0130] In some example embodiments, the machine 1700 may be a
portable computing device, such as a smartphone or tablet computer,
and have one or more additional input components 1730 (e.g.,
sensors or gauges). Examples of such input components 1730 include
an image input component (e.g., one or more cameras), an audio
input component (e.g., a microphone), a direction input component
(e.g., a compass), a location input component (e.g., a global
positioning system (GPS) receiver), an orientation component (e.g.,
a gyroscope), a motion detection component (e.g., one or more
accelerometers), an altitude detection component (e.g., an
altimeter), and a gas detection component (e.g., a gas sensor).
Inputs harvested by any one or more of these input components 1730
may be accessible and available for use by any of the modules
described herein.
[0131] As used herein, the term "memory" refers to a
machine-readable medium 1722 able to store data temporarily or
permanently and may be taken to include, but not be limited to,
random-access memory (RAM), read-only memory (ROM), buffer memory,
flash memory, and cache memory. While the machine-readable medium
1722 is shown in an example embodiment to be a single medium, the
term "machine-readable medium" should be taken to include a single
medium or multiple media (e.g., a centralized or distributed
database, or associated caches and servers) able to store the
instructions 1724. The term "machine-readable medium" shall also be
taken to include any medium, or combination of multiple media, that
is capable of storing the instructions 1724 for execution by the
machine 1700, such that the instructions 1724, when executed by one
or more processors of the machine 1700 (e.g., the processor 1702),
cause the machine 1700 to perform any one or more of the
methodologies described herein, in whole or in part. Accordingly, a
"machine-readable medium" refers to a single storage apparatus or
device, as well as cloud-based storage systems or storage networks
that include multiple storage apparatus or devices. The term
"machine-readable medium" shall accordingly be taken to include,
but not be limited to, one or more tangible (e.g., non-transitory)
data repositories in the form of a solid-state memory, an optical
medium, a magnetic medium, or any suitable combination thereof.
[0132] The foregoing description, for purposes of explanation, has
been described with reference to specific embodiments. However, the
illustrative discussions above are not intended to be exhaustive or
to limit the present disclosure to the precise forms disclosed.
Many modifications and variations are possible in view of the above
teachings. The embodiments were chosen and described in order to
best explain the principles of the present disclosure and its
practical applications, to thereby enable others skilled in the art
to best utilize the present disclosure and various embodiments with
various modifications as are suited to the particular use
contemplated.
[0133] Throughout this specification, plural instances may
implement components, operations, or structures described as a
single instance. Although individual operations of one or more
methods are illustrated and described as separate operations, one
or more of the individual operations may be performed concurrently,
and nothing requires that the operations be performed in the order
illustrated. Structures and functionality presented as separate
components in example configurations may be implemented as a
combined structure or component. Similarly, structures and
functionality presented as a single component may be implemented as
separate components. These and other variations, modifications,
additions, and improvements fall within the scope of the subject
matter herein.
[0134] Certain embodiments are described herein as including logic
or a number of components, modules, or mechanisms. Modules may
constitute software modules (e.g., code stored or otherwise
embodied on a machine-readable medium or in a transmission medium),
hardware modules, or any suitable combination thereof. A "hardware
module" is a tangible (e.g., non-transitory) unit capable of
performing certain operations and may be configured or arranged in
a certain physical manner. In various example embodiments, one or
more computer systems (e.g., a standalone computer system, a client
computer system, or a server computer system) or one or more
hardware modules of a computer system (e.g., a processor or a group
of processors) may be configured by software (e.g., an application
or application portion) as a hardware module that operates to
perform certain operations as described herein.
[0135] In some embodiments, a hardware module may be implemented
mechanically, electronically, or any suitable combination thereof.
For example, a hardware module may include dedicated circuitry or
logic that is permanently configured to perform certain operations.
For example, a hardware module may be a special-purpose processor,
such as a field programmable gate array (FPGA) or an ASIC. A
hardware module may also include programmable logic or circuitry
that is temporarily configured by software to perform certain
operations. For example, a hardware module may include software
encompassed within a general-purpose processor or other
programmable processor. It will be appreciated that the decision to
implement a hardware module mechanically, in dedicated and
permanently configured circuitry, or in temporarily configured
circuitry (e.g., configured by software) may be driven by cost and
time considerations.
[0136] Accordingly, the phrase "hardware module" should be
understood to encompass a tangible entity, and such a tangible
entity may be physically constructed, permanently configured (e.g.,
hardwired), or temporarily configured (e.g., programmed) to operate
in a certain manner or to perform certain operations described
herein. As used herein, "hardware-implemented module" refers to a
hardware module. Considering embodiments in which hardware modules
are temporarily configured (e.g., programmed), each of the hardware
modules need not be configured or instantiated at any one instance
in time. For example, where a hardware module comprises a
general-purpose processor configured by software to become a
special-purpose processor, the general-purpose processor may be
configured as respectively different special-purpose processors
(e.g., comprising different hardware modules) at different times.
Software (e.g., a software module) may accordingly configure one or
more processors, for example, to constitute a particular hardware
module at one instance of time and to constitute a different
hardware module at a different instance of time.
[0137] Hardware modules can provide information to, and receive
information from, other hardware modules. Accordingly, the
described hardware modules may be regarded as being communicatively
coupled. Where multiple hardware modules exist contemporaneously,
communications may be achieved through signal transmission (e.g.,
over appropriate circuits and buses) between or among two or more
of the hardware modules. In embodiments in which multiple hardware
modules are configured or instantiated at different times,
communications between such hardware modules may be achieved, for
example, through the storage and retrieval of information in memory
structures to which the multiple hardware modules have access. For
example, one hardware module may perform an operation and store the
output of that operation in a memory device to which it is
communicatively coupled. A further hardware module may then, at a
later time, access the memory device to retrieve and process the
stored output. Hardware modules may also initiate communications
with input or output devices, and can operate on a resource (e.g.,
a collection of information).
[0138] The various operations of example methods described herein
may be performed, at least partially, by one or more processors
that are temporarily configured (e.g., by software) or permanently
configured to perform the relevant operations. Whether temporarily
or permanently configured, such processors may constitute
processor-implemented modules that operate to perform one or more
operations or functions described herein. As used herein,
"processor-implemented module" refers to a hardware module
implemented using one or more processors.
[0139] Similarly, the methods described herein may be at least
partially processor-implemented, a processor being an example of
hardware. For example, at least some of the operations of a method
may be performed by one or more processors or processor-implemented
modules. As used herein, "processor-implemented module" refers to a
hardware module in which the hardware includes one or more
processors. Moreover, the one or more processors may also operate
to support performance of the relevant operations in a "cloud
computing" environment or as a "software as a service" (SaaS). For
example, at least some of the operations may be performed by a
group of computers (as examples of machines including processors),
with these operations being accessible via a network (e.g., the
Internet) and via one or more appropriate interfaces (e.g., an
application program interface (API)).
[0140] The performance of certain operations may be distributed
among the one or more processors, not only residing within a single
machine, but deployed across a number of machines. In some example
embodiments, the one or more processors or processor-implemented
modules may be located in a single geographic location (e.g.,
within a home environment, an office environment, or a server
farm). In other example embodiments, the one or more processors or
processor-implemented modules may be distributed across a number of
geographic locations.
[0141] Some portions of the subject matter discussed herein may be
presented in terms of algorithms or symbolic representations of
operations on data stored as bits or binary digital signals within
a machine memory (e.g., a computer memory). Such algorithms or
symbolic representations are examples of techniques used by those
of ordinary skill in the data processing arts to convey the
substance of their work to others skilled in the art. As used
herein, an "algorithm" is a self-consistent sequence of operations
or similar processing leading to a desired result. In this context,
algorithms and operations involve physical manipulation of physical
quantities. Typically, but not necessarily, such quantities may
take the form of electrical, magnetic, or optical signals capable
of being stored, accessed, transferred, combined, compared, or
otherwise manipulated by a machine. It is convenient at times,
principally for reasons of common usage, to refer to such signals
using words such as "data," "content," "bits," "values,"
"elements," "symbols," "characters," "terms," "numbers,"
"numerals," or the like. These words, however, are merely
convenient labels and are to be associated with appropriate
physical quantities.
[0142] Unless specifically stated otherwise, discussions herein
using words such as "processing," "computing," "calculating,"
"determining," "presenting," "displaying." or the like may refer to
actions or processes of a machine (e.g., a computer) that
manipulates or transforms data represented as physical (e.g.,
electronic, magnetic, or optical) quantities within one or more
memories (e.g., volatile memory, non-volatile memory, or any
suitable combination thereof), registers, or other machine
components that receive, store, transmit, or display information.
Furthermore, unless specifically stated otherwise, the terms "a" or
"an" are herein used, as is common in patent documents, to include
one or more than one instance. Finally, as used herein, the
conjunction "or" refers to a non-exclusive "or," unless
specifically stated otherwise.
* * * * *