U.S. patent application number 14/270244 was filed with the patent office on 2014-05-05 and published on 2015-05-14 as publication number 20150134302 for 3-dimensional digital garment creation from planar garment photographs. The applicant listed for this patent is Jatin Chhugani, Mihir Naware, Jonathan Su. The invention is credited to Jatin Chhugani, Mihir Naware, and Jonathan Su.

Application Number: 20150134302 / 14/270244
Document ID: /
Family ID: 53043418
Publication Date: 2015-05-14

United States Patent Application 20150134302
Kind Code: A1
Chhugani; Jatin; et al.
May 14, 2015
3-DIMENSIONAL DIGITAL GARMENT CREATION FROM PLANAR GARMENT
PHOTOGRAPHS
Abstract
Techniques for generating and presenting a three-dimensional
garment model are presented herein. A communication interface can
be configured to receive images, where all visible parts of the
garment may be captured by the received images. A garment creation
module can be configured to generate partial shapes of the garment
based on the received images. Additionally, the garment creation
module can determine a type of garment by comparing the generated
partial shapes to a database of reference garment shapes.
Furthermore, the garment creation module can generate a
three-dimensional garment model by joining the partial shapes based
on the determined type of garment, and can tessellate the generated
three-dimensional garment model. A user interface can be configured
to present the tessellated three-dimensional garment model on a
three-dimensional body model.
Inventors: Chhugani; Jatin (Santa Clara, CA); Su; Jonathan (San Jose, CA); Naware; Mihir (Redwood City, CA)

Applicant:
Name | City | State | Country | Type
Chhugani; Jatin | Santa Clara | CA | US |
Su; Jonathan | San Jose | CA | US |
Naware; Mihir | Redwood City | CA | US |

Family ID: 53043418
Appl. No.: 14/270244
Filed: May 5, 2014
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
61905126 | Nov 15, 2013 |
61904263 | Nov 14, 2013 |
61904522 | Nov 15, 2013 |
61905118 | Nov 15, 2013 |
61905122 | Nov 15, 2013 |
Current U.S. Class: 703/1
Current CPC Class: G06T 17/10 20130101; G06F 2113/12 20200101; G06K 9/6262 20130101; G06Q 30/0643 20130101; G06T 15/005 20130101; G06T 2215/16 20130101; G06T 17/00 20130101; A41H 1/00 20130101; G06F 2111/02 20200101; G06T 17/20 20130101; G06F 30/20 20200101; G06T 2210/16 20130101; A41H 3/007 20130101; G06T 19/00 20130101; G06T 19/20 20130101
Class at Publication: 703/1
International Class: G06F 17/50 20060101 G06F017/50
Claims
1. A system comprising: one or more processors; a communication
interface configured to: receive a first image depicting a first
view of a garment; and receive a second image depicting a second
view of the garment; a garment creation module that configures at
least one processor among the one or more processors to: generate a
first partial shape of the garment based on the received first
image; generate a second partial shape of the garment based on the received second image; determine a type of
garment by comparing the generated first and second partial shapes
to a database of reference garment shapes; generate a
three-dimensional garment model by joining the first partial shape
and the second partial shape based on the determined type of
garment, the generated three-dimensional garment model including a
first group of vertices; and tessellate the generated
three-dimensional garment model by adding a second group of
vertices to the generated three-dimensional garment model; and a
user interface configured to present the tessellated
three-dimensional garment model on a three-dimensional body model,
the tessellated three-dimensional garment model being presented
based on a simulated force.
2. The system of claim 1, wherein the garment creation module
configures the one or more processors to tessellate the generated
three-dimensional garment model by tessellating the generated
three-dimensional garment model into triangles, wherein vertices in
the triangles correspond to points that are interior to the
generated three-dimensional garment model.
3. The system of claim 2, wherein overlaps and gaps do not exist
between the triangles of the tessellated three-dimensional garment
model.
4. The system of claim 2, wherein a triangle among the triangles of
the tessellated three-dimensional garment model has a minimum
angle, and wherein the garment creation module further configures
the one or more processors to tessellate the generated
three-dimensional garment model by maximizing the minimum angle of
the triangle.
5. The system of claim 1, wherein the garment creation module
configures the one or more processors to apply a texture map to the
tessellated three-dimensional garment model by assigning a color to
a vertex in the second group of vertices based on the received first
image.
6. The system of claim 1, wherein the garment creation module
configures the one or more processors to determine actual
dimensions of the garment using a reference object placed near the
garment in the first image.
7. The system of claim 1, wherein the simulated force includes at
least one of a gravitational force, a frictional force, or
aerodynamic drag.
8. The system of claim 1, wherein the user interface is configured to
present the tessellated three-dimensional garment model on the
three-dimensional body model based on a material property of the
garment, wherein the material property includes at least one of
sheerness, linear stiffness, or bending stiffness.
9. The system of claim 1, wherein the garment creation module
configures the one or more processors to generate the first partial
shape of the garment by: calculating a difference in color between
a pixel and its adjacent pixels within the first image; and
determining that the pixel is on a first edge of the first partial
shape of the garment based on the difference transgressing a
predetermined threshold value.
10. The system of claim 9, wherein the garment creation module
configures the one or more processors to join the first and second
partial shapes by aligning the first edge of the first partial
shape with a second edge of the second partial shape.
11. The system of claim 9, wherein the garment creation module
configures the one or more processors to smooth the first edge of
the first partial shape into a smoothed curve by removing artifacts
in the first image, wherein the artifacts are due to a lighting effect in the first image and image compression of the first image.
12. The system of claim 1, wherein the garment creation module
configures the one or more processors to generate a first
flat-panel as the first partial shape and a second flat-panel as
the second partial shape.
13. The system of claim 12, wherein the garment creation module
configures the one or more processors to generate the
three-dimensional garment model by joining at least a portion of
the first flat-panel with at least a portion of the second
flat-panel.
14. The system of claim 1, wherein all visible parts of the garment
are captured in the received first image and received second
image.
15. The system of claim 1, wherein the communication interface is
further configured to receive a third image depicting a third view
of a garment, wherein all visible parts of the garment are captured by the received first image, received second image, and received third image.
16. The system of claim 1, wherein the garment creation module
configures the one or more processors to generate different sized
garment models by distorting the three-dimensional garment
model.
17. A method comprising: receiving a first image depicting a first
view of a garment; receiving a second image depicting a second view
of the garment; generating a first partial shape of the garment
based on the received first image; generating a second partial
shape of the garment based on the received second
image; determining, by one or more processors, a type of garment by
comparing the generated first and second partial shapes to a
database of reference garment shapes; generating a
three-dimensional garment model by joining the first partial shape
and the second partial shape based on the determined type of
garment, the generated three-dimensional garment model including a
first group of vertices; tessellating the generated
three-dimensional garment model by adding a second group of
vertices to the generated three-dimensional garment model; and
presenting the tessellated three-dimensional garment model on a
three-dimensional body model, the tessellated three-dimensional
garment model being presented based on a simulated force.
18. The method of claim 17, further comprising: applying a texture
map to the tessellated three-dimensional garment model by assigning
a color to a vertex in the second group of vertices based on the
received first image.
19. The method of claim 17, wherein the presenting the tessellated
three-dimensional garment model on the three-dimensional body model
is further based on calculations using a material property of the
garment, wherein the material property includes at least one of
sheerness, linear stiffness, or bending stiffness.
20. A non-transitory machine-readable storage medium comprising
instructions that, when executed by one or more processors of a
machine, cause the machine to perform operations comprising:
receiving a first image depicting a first view of a garment;
receiving a second image depicting a second view of the garment;
generating a first partial shape of the garment based on the
received first image; generating a second partial shape of the
garment based on the received second image;
determining a type of garment by comparing the generated first and
second partial shapes to a database of reference garment shapes;
generating a three-dimensional garment model by joining the first
partial shape and the second partial shape based on the determined
type of garment, the generated three-dimensional garment model
including a first group of vertices; tessellating the generated
three-dimensional garment model by adding a second group of
vertices to the generated three-dimensional garment model; and
presenting the tessellated three-dimensional garment model on a
three-dimensional body model, the tessellated three-dimensional
garment model being presented based on a simulated force.
Description
CLAIM OF PRIORITY
[0001] This application claims the priority benefit of: (1) U.S.
Provisional Application No. 61/905,126, filed Nov. 15, 2013; (2)
U.S. Provisional Application No. 61/904,263, filed Nov. 14, 2013;
(3) U.S. Provisional Application No. 61/904,522, filed Nov. 15,
2013; (4) U.S. Provisional Application No. 61/905,118, filed Nov.
15, 2013; (5) U.S. Provisional Application No. 61/905,122, filed
Nov. 15, 2013, which are incorporated herein by reference in their
entirety.
TECHNICAL FIELD
[0002] The present application relates generally to the technical
field of three-dimensional (3-D) modeling and, in one specific
example, to 3-D garment modeling for online shopping.
BACKGROUND
[0003] Shopping for clothes in conventional (e.g., non-online) stores can
be an arduous task and, due to travelling and parking, can be very
time consuming. With the advent of online shopping, consumers may
purchase clothing, while staying home, via a computer or any
electronic device connected to the Internet. Additionally,
purchasing clothes online can be different in comparison to
purchasing clothes in a store. One difference is the lack of a
physical dressing room to see if and how an article of clothing
fits the particular consumer. Since different consumers can have
different dimensions, seeing how an article of clothing fits, by
use of a dressing room, can be a very important aspect of a
successful and satisfying shopping experience.
[0004] The systems and methods described in the present disclosure
attempt to provide solutions to the problems presented above.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] FIG. 1 illustrates an exemplary system for three-dimensional
(3-D) digital garment creation from planar garment photographs, in
accordance with embodiments of the present disclosure.
[0006] FIG. 2 is a block diagram illustrating an exemplary file
system, in accordance with embodiments of the present
disclosure.
[0007] FIG. 3 is a block diagram illustrating an exemplary 3-D
digital garment creation module, in accordance with embodiments of
the present disclosure.
[0008] FIG. 4 is a flow diagram of a process for 3-D digital
garment creation, according to certain embodiments of the present
disclosure.
[0009] FIG. 5 is a flow diagram continuing the process for 3-D
digital garment creation from FIG. 4, according to certain
embodiments of the present disclosure.
[0010] FIGS. 6-8 illustrate examples of garments in a garment
template database, in accordance with embodiments of the present
disclosure.
[0011] FIG. 9 illustrates a method for creating 3-D digital jeans
based on a front image and a back image of the jeans and presenting
the digital jeans on a 3-D body model, in accordance with
embodiments of the present disclosure.
[0012] FIG. 10 illustrates a method for creating a 3-D digital dress
based on a front image and a back image of the dress and presenting
the digital dress on a 3-D body model, in accordance with
embodiments of the present disclosure.
[0013] FIG. 11 illustrates an example for joining partial shapes to
generate a 3-D digital shirt, in accordance with embodiments of the
present disclosure.
[0014] FIG. 12 illustrates another example for joining partial
shapes to generate a 3-D hooded sweatshirt without joining some of
the edges, in accordance with embodiments of the present
disclosure.
[0015] FIG. 13 illustrates a sample triangle associated with the
tessellated garment, in accordance with embodiments of the present
disclosure.
[0016] FIG. 14 illustrates an example of a triangulation method, in
accordance with embodiments of the present disclosure.
[0017] FIG. 15 illustrates a method for calibrating the size of the
garment based on a calibration object, in accordance with
embodiments of the present disclosure.
[0018] FIG. 16 is a block diagram illustrating components of a
machine, according to some example embodiments, able to read
instructions from a machine-readable medium and perform any one or
more of the methodologies discussed herein.
DESCRIPTION OF EMBODIMENTS
[0019] Example systems and methods for 3-dimensional (3-D) digital
garment creation from one or more planar garment images are
described. The systems can include instructions to produce a 3-D
garment model using one or more planar garment images (e.g.,
photographs). Additionally, the systems can present the garment
model on a 3-D body model based on various body shapes/dimensions,
the tension or force in the garment draped on a body, and how the
garment flows as the body performs actions.
[0020] Examples merely typify possible variations. Unless
explicitly stated otherwise, components and functions are optional
and may be combined or subdivided, and operations may vary in
sequence or be combined or subdivided. In the following
description, for purposes of explanation, numerous specific details
are set forth to provide a thorough understanding of example
embodiments. It will be evident to one skilled in the art, however,
that the present subject matter may be practiced without these
specific details.
[0021] Reference will now be made in detail to various embodiments,
examples of which are illustrated in the accompanying drawings. In
the following detailed description, numerous specific details are
set forth in order to provide a thorough understanding of the
present disclosure and the described embodiments. However, the
present disclosure may be practiced without these specific details.
In other instances, well-known methods, procedures, components, and
circuits have not been described in detail so as not to
unnecessarily obscure aspects of the embodiments.
[0022] FIG. 1 is a block diagram illustrating a system 100 in
accordance with one embodiment of the present disclosure. The
system 100 includes client devices (e.g., client device 10-1,
client device 10-2, client device 10-3) connected to server 202 via
network 34 (e.g., the Internet). Server 202 typically includes one
or more processing units (CPUs) 222 for executing modules, programs
and/or instructions stored in memory 236 and thereby performing
processing operations; one or more communications interfaces 220;
memory 236; and one or more communication buses 230 for
interconnecting these components. Communication buses 230
optionally include circuitry (e.g., a chipset) that interconnects
and controls communications between system components. Server 202
also optionally includes power source 224 and controller 212
coupled to mass storage 214. System 100 optionally includes a user
interface 232 comprising a display device 226 and a keyboard
228.
[0023] Memory 236 includes high-speed random access memory, such as
dynamic random-access memory (DRAM), static random-access memory
(SRAM), double data rate random-access memory (DDR RAM) or other
random access solid state memory devices; and may include
non-volatile memory, such as one or more magnetic disk storage
devices, optical disk storage devices, flash memory devices, or
other non-volatile solid state storage devices. Memory 236 may
optionally include one or more storage devices remotely located
from the CPU(s) 222. Memory 236, or alternately the non-volatile
memory device(s) within memory 236, comprises a non-transitory
computer readable storage medium. In some embodiments, memory 236,
or the computer readable storage medium of memory 236, stores the
following programs, modules and data structures, or a subset
thereof: an operating system 240; a file system 242; a network
communications module 244; and a 3-D digital garment creation
module 246.
[0024] The operating system 240 can include procedures for handling
various basic system services and for performing hardware dependent
tasks. The file system 242 can store and organize various files
utilized by various programs. The network communications module 244
can communicate with client devices (e.g., client device 10-1,
client device 10-2, client device 10-3) via the one or more
communications interfaces 220 (e.g., wired, wireless), the network
34, other wide area networks, local area networks, metropolitan
area networks, and so on.
[0025] The network 34 may be any network that enables communication
between or among machines, databases, and devices (e.g., the server
202 and the client device 10-1). Accordingly, the network 34 may be
a wired network, a wireless network (e.g., a mobile or cellular
network), or any suitable combination thereof. The network 34 may
include one or more portions that constitute a private network, a
public network (e.g., the Internet), or any suitable combination
thereof. Accordingly, the network 34 may include one or more
portions that incorporate a local area network (LAN), a wide area
network (WAN), the Internet, a mobile telephone network (e.g., a
cellular network), a wired telephone network (e.g., a plain old
telephone system (POTS) network), a wireless data network (e.g.,
Wi-Fi network or WiMAX network), or any suitable combination
thereof. Any one or more portions of the network 34 may communicate
information via a transmission medium. As used herein,
"transmission medium" refers to any intangible (e.g., transitory)
medium that is capable of communicating (e.g., transmitting)
instructions for execution by a machine (e.g., by one or more
processors of such a machine), and includes digital or analog
communication signals or other intangible media to facilitate
communication of such software.
[0026] The server 202 and the client devices (e.g., client device
10-1, client device 10-2, client device 10-3) may each be
implemented in a computer system, in whole or in part, as described
below with respect to FIG. 16.
[0027] Any of the machines, databases, or devices shown in FIG. 1
may be implemented in a general-purpose computer modified (e.g.,
configured or programmed) by software (e.g., one or more software
modules) to be a special-purpose computer to perform one or more of
the functions described herein for that machine, database, or
device. For example, a computer system able to implement any one or
more of the methodologies described herein is discussed below with
respect to FIG. 16. As used herein, a "database" is a data storage
resource and may store data structured as a text file, a table, a
spreadsheet, a relational database (e.g., an object-relational
database), a triple store, a hierarchical data store, or any
suitable combination thereof. Moreover, any two or more of the
machines, databases, or devices illustrated in FIG. 1 may be
combined into a single machine, and the functions described herein
for any single machine, database, or device may be subdivided among
multiple machines, databases, or devices.
[0028] Although FIG. 1 shows a system 100, FIG. 1 is intended more
as a functional description of the various features which may be
present in a set of servers than as a structural schematic of the
embodiments described herein. In practice, and as recognized by
those of ordinary skill in the art, items shown separately could be
combined and some items could be separated. For example, some items
shown separately in FIG. 1 could be implemented on a single server and single items could be implemented by one or more servers.
[0029] FIG. 2 further describes the exemplary memory 236 in server
202, as initially described in FIG. 1. FIG. 2 includes an expanded
depiction of exemplary file system 242. File system 242 may include
one or more of the following files: input image photo files 251;
extracted geometry files 252; extracted texture files 253;
stitching information files 254; garment template database 255;
draping parameter files 256; simulation parameter files 257; and
simulation result geometry files 258. FIGS. 4-5 further describe
operations using the files from FIG. 2.
[0030] FIG. 3 is a block diagram illustrating components of the 3-D
digital garment creation module 246, according to some example
embodiments, as initially described in FIG. 1. The 3-D digital
garment creation module 246 is shown as including a boundary
extraction module 261; a texture mapping module 262; a tessellation
module 263; a stitching module 264; a draping module 265; and a
simulation module 266 all configured to communicate with each other
(e.g., via a bus, shared memory, or a switch). FIGS. 4-5 further
describe operations using the modules from FIG. 3.
[0031] Any one or more of the modules described herein may be
implemented using hardware (e.g., one or more processors of a
machine) or a combination of hardware and software. For example,
any module described herein may configure a processor (e.g., among
one or more processors of a machine) to perform the operations
described herein for that module. Moreover, any two or more of
these modules may be combined into a single module, and the
functions described herein for a single module may be subdivided
among multiple modules. Furthermore, according to various example
embodiments, modules described herein as being implemented within a
single machine, database, or device may be distributed across
multiple machines, databases, or devices.
[0032] Each of the above identified elements may be stored in one
or more of the previously mentioned memory devices, and corresponds
to a set of instructions for performing a function described above.
The above identified modules or programs (i.e., sets of
instructions) need not be implemented as separate software
programs, procedures or modules, and thus various subsets of these
modules may be combined or otherwise rearranged in various
embodiments. In some embodiments, memory 236 may store a subset of
the modules and data structures identified above. Furthermore,
memory 236 may store additional modules and data structures not
described above.
[0033] The actual number of servers used to implement a 3-D digital
garment creation module 246 and how features are allocated among
them will vary from one implementation to another, and may depend
in part on the amount of data traffic that the system handles
during peak usage periods as well as during average usage
periods.
[0034] FIGS. 4-5 are flowcharts representing a method 400 for
3-dimensional digital garment creation from one or more planar
garment images, according to certain embodiments of the present
disclosure. Method 400 is governed by instructions stored in a
computer readable storage medium and executed by one or
more processors of one or more servers. Each of the operations
shown in FIGS. 4-5 may correspond to instructions stored in a
computer memory or computer readable storage medium.
[0035] Operations in the method 400 may be performed by the server
202, using modules described above with respect to FIG. 3. As shown
in FIGS. 4-5, the method 400 includes operations 410, 420, 430,
440, 450, 460, 470 and 480. Optionally, method 400 can include an
operation for calibrating the size of the garment and an operation
for applying a texture map on the digital garment.
[0036] The computer readable storage medium may include a magnetic
or optical disk storage device, solid state storage devices such as
flash memory, or other non-volatile memory device or devices. The
computer readable instructions stored on the computer readable
storage medium are in source code, assembly language code, object
code, or other instruction format that is interpreted by one or
more processors.
[0037] The foregoing description, for purposes of explanation, has
been described with reference to specific embodiments. However, the
illustrative discussions above are not intended to be exhaustive or
to limit the present disclosure to the precise forms disclosed.
Many modifications and variations are possible in view of the above
teachings. The embodiments were chosen and described in order to
best explain the principles of the present disclosure and its
practical applications, to thereby enable others skilled in the art
to best utilize the present disclosure and various embodiments with
various modifications as are suited to the particular use
contemplated.
[0038] At operation 410, 3-D digital garment creation module 246
can receive a first image depicting a first view of a garment. The
first image (e.g., planar garment photographs) can include input
image photo files 251. For example, a user can capture the front
view of a pair of jeans using a camera on a mobile device and
transmit the image, using a transmitter on the mobile device, to the
3-D digital garment creation module 246.
[0039] Similar to operation 410, the 3-D digital garment creation
module 246 can receive a second image depicting a second view of a
garment at operation 420. In some instances, two received images
can suffice, if all visible parts of the garment are captured in
the set of received images. In some other instances, one or more
other images (e.g., third image, fourth image) may be received by
the 3-D digital garment creation module in order to capture all
visible parts of the garment.
[0040] For example, a user can capture the front and the back view
of a pair of jeans with just two images using a camera and transmit
the images to the 3-D digital garment creation module 246. The first
and second images can be received from a client device (e.g.,
client device 10-1) or a third party vendor using network 34 (e.g.,
Bluetooth, cellular, internet).
[0041] A first and a second side of a garment can be determined
using the first and second images received at operations 410 and
420. The images received at operations 410 and 420 can be stored in
the input image photo files 251.
[0042] At operation 430, 3-D digital garment creation module 246
can generate a first partial shape of the garment based on the
received first image using boundary extraction module 261.
[0043] At operation 440, 3-D digital garment creation module 246
can generate a second partial shape of the garment based on the
received second image using boundary extraction module 261. The
partial shapes generated at operations 430 and 440 can be stored in
the extracted geometry files 252. Optionally, when texture
information is obtained from the received images, the texture
information associated with the generated partial shapes can be
stored in the extracted texture files 253.
[0044] In some instances, generating the partial shape can be based on an identified boundary or outline of the garment.
The boundary can be determined by identifying a discrete set of
points (e.g., set of vertices) using a boundary detection
algorithm.
[0045] One example of a boundary detection algorithm can be to
determine the color-range of the background of the image by
averaging out pixel values at the boundary (e.g., first row, first
column, last row, last column) of the input image. The background
color can be determined to be B (i.e., B_RED, B_GREEN, B_BLUE). Additionally, a pre-determined threshold value (t) can
be chosen. The threshold value can be set by the user or calculated
by the system (e.g., system 100). All pixel values in the received
images that are within a range of the background color (i.e.,
B_RED +/- t, B_GREEN +/- t, B_BLUE +/- t) are interpreted as
background pixels, and hence not part of the garment. Having
identified each pixel value as either foreground (i.e., part of
garment) or background, for each row of pixels, the pixel values
where there is a transition between foreground and background can
be identified as the contour/garment boundary pixels. Using these
boundary pixels, an outline can be used to generate a partial shape
of the garment.
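To make the thresholding procedure above concrete, the following Python sketch (not part of the original disclosure; the function name, array layout, and default threshold are assumptions) estimates the background color from the border pixels and marks per-row foreground/background transitions:

```python
import numpy as np

def extract_boundary_pixels(image, t=12):
    """Illustrative sketch of the background-thresholding algorithm.

    image: H x W x 3 RGB array; t: pre-determined threshold value.
    Returns a list of (row, col) contour/garment boundary pixels.
    """
    img = image.astype(np.int16)
    # Average the pixel values on the image border (first row, first
    # column, last row, last column) to estimate the background color
    # B = (B_RED, B_GREEN, B_BLUE).
    border = np.concatenate([img[0], img[-1], img[:, 0], img[:, -1]])
    B = border.mean(axis=0)
    # Pixels within B +/- t on every channel are background.
    foreground = ~np.all(np.abs(img - B) <= t, axis=2)
    # For each row, transitions between background and foreground are
    # the contour/garment boundary pixels.
    boundary = []
    for r in range(foreground.shape[0]):
        row = foreground[r]
        for c in np.flatnonzero(row[1:] != row[:-1]) + 1:
            boundary.append((r, c))
    return boundary
```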
[0046] In another example of a boundary detection algorithm, for
each row of pixels, the intensity (or color value) at each pixel is
compared to the intensity (or color value) of the previous pixel.
For a pre-determined threshold, once the difference between
consecutive pixel values exceeds the threshold, the identified
pixels can be classified as boundary pixels. In some instances, the
intensity values for the foreground and background can be assigned
via the scan line method. The scan line method includes traversing
individual pixels and assigning the designation of background to
the colors that match the outer edges of the photograph. In other instances, the boundary can be identified (e.g., extracted) using a
gradient calculation method. In the gradient calculation method,
differences in pixel color and intensity are calculated between
adjacent pixels. A boundary can be identified when the differences
are above a predetermined threshold value (e.g., sharp difference
in pixel color and/or intensity between adjacent pixels). In yet
other instances, the boundary can be determined using both the scan
line method and the gradient calculation method. Using both methods
can allow for a more accurate identification of the boundary.
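As a hedged illustration of the gradient calculation method (again a sketch, with an assumed NumPy layout and an illustrative threshold), adjacent-pixel color differences along each row can be computed in a vectorized way:

```python
import numpy as np

def gradient_boundary_mask(image, threshold=30):
    """Mark a pixel as a boundary pixel when its color differs sharply
    from the previous pixel in the same row (threshold illustrative)."""
    img = image.astype(np.int16)
    # Per-pixel difference from the left neighbor, summed over channels.
    diff = np.abs(np.diff(img, axis=1)).sum(axis=2)
    mask = np.zeros(img.shape[:2], dtype=bool)
    mask[:, 1:] = diff > threshold
    return mask
```

A combined detector, as the paragraph suggests, could intersect this mask with the scan-line classification for a more accurate boundary.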
[0047] Generating the partial shapes can include creating a
continuous curve using the identified boundary. As mentioned, the
identified boundary can be a discrete set of points. The discrete
set of points can be a set of vertices associated with pixels that
have been identified as boundary points using a boundary detection
algorithm. The curve can be created by joining the discrete set of
points that are determined to be boundaries of the garment and then
running a smoothing function to eliminate outliers. Additionally,
the curve can be modified based on a garment template from the
garment database. The curve can be smoothed out by eliminating
noise (e.g., removing outliers from the data). For example, noise can
refer to the artifacts in image acquisition (e.g., lighting, image
compression). Hence, the process of noise removal can help create a
smooth edge instead of a jagged edge.
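The disclosure leaves the smoothing function unspecified; as one hedged possibility, the following sketch drops outlier vertices and applies a moving average along the ordered contour (the window size and outlier test are assumptions):

```python
import numpy as np

def smooth_contour(points, window=5, outlier_sigma=2.0):
    """Sketch: drop outlier vertices, then moving-average the curve.

    points: N x 2 array of ordered (x, y) boundary vertices.
    """
    pts = np.asarray(points, dtype=float)
    # A vertex far from the midpoint of its two neighbors is treated as
    # noise (e.g., lighting or image-compression artifacts).
    mid = (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0)) / 2
    dev = np.linalg.norm(pts - mid, axis=1)
    pts = pts[dev < dev.mean() + outlier_sigma * dev.std()]
    # Moving average along the closed contour smooths jagged edges.
    kernel = np.ones(window) / window
    pad = window // 2
    x = np.convolve(np.pad(pts[:, 0], pad, mode="wrap"), kernel, "valid")
    y = np.convolve(np.pad(pts[:, 1], pad, mode="wrap"), kernel, "valid")
    return np.stack([x, y], axis=1)
```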
[0048] Moreover, the precision can be adjusted to accommodate
varying levels of desired accuracy of the created digital garment
and can be based on computation power. The precision can be
automatically adjusted by the system based on the client device
(e.g., lower precision for a mobile device, higher precision for a large-screen display). In some instances, the standard error of tolerance
is a parameter that can be set. Tolerance can be measured by actual
units of distance (e.g., 0.01 inches). Alternatively, tolerance can
be measured in number of pixels.
[0049] Furthermore, accuracy parameters can be received (e.g., from
a user) or determined (e.g., by 3-D digital garment creation module
246) to help identify the boundary of the garment. Accuracy
parameters can include, but are not limited to, extracted geometry
files 252, extracted texture files 253, stitching information files
254 and garment template database 255.
[0050] Optionally, texture and optical properties can be determined
from the images (e.g., photographs) at operations 430 and 440 and
stored in the extracted texture files 253. The texture information
can be used to determine the material properties of the garment and
can be used to generate the texture map. The material properties of
the garment can be used for calculating the simulated forces on the
3-D garment at operation 480. Furthermore, the material properties
can be matched to the garment template database 255 at operation
450 in order to determine the type of garment using the texture
mapping module 262. For example, the system can identify pleats in
a garment when every part of the garment is captured in one of the
input images. Moreover, the material property can be extracted even
if the images of the garment are stretched or sheared. The optical
properties can be used during the optional operations of applying a
texture map to the 3-D digital garment.
[0051] At operation 450, the 3-D digital garment creation module
246 can determine a type of garment by comparing the generated
first and second partial shapes to a database of reference garment
shapes using the garment template database 255 and the stitching
module 264.
[0052] The garment template database 255 can include stitching
information files 254. The stitching information files indicate which corresponding edges in the partial shapes are connected to each other. The draping parameter files 256 can also be extracted from the garment template database 255. Additionally, the simulation parameter files 257 can also be extracted from the garment template database 255.
[0053] FIGS. 6-8 illustrate examples of garments in a garment
template database 255 used in operation 450, in accordance with
embodiments of the present disclosure. For example, in FIG. 6, the
jeans garment template 505 can include information such as the
number of panels 510, stitching information 515 of the jeans, body
placement parameters 520 of the jeans, draping parameters 525,
simulation parameters 530, and other relevant information
associated with the jeans garment template.
[0054] In another example, in FIG. 7, the sleeveless dress garment
template 535 can include information such as the number of panels
540, stitching information 545 of the dress, body placement
parameters 550 of the dress, draping parameters 555, simulation
parameters 560, and other relevant information associated with the
sleeveless dress garment template.
[0055] FIG. 8 illustrates an exemplary garment template database
255, which includes the jeans garment template 505 of FIG. 6, and
the sleeveless dress template 535 of FIG. 7. Additionally, the
garment template database 255 can include other garment
templates.
[0056] Returning back to method 400, at operation 450, the 3-D
digital garment creation module 246 can extract the identified
boundary from the partial shapes and match the shape of the
extracted boundary to known databases of shapes (e.g., garment
template database 255) of categorized garments (e.g., jeans garment
template 505, sleeveless dress garment template 535) in order to
determine the type of garment.
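The disclosure does not specify the comparison metric for this matching step; one hypothetical realization is normalized silhouette overlap, sketched below (all names illustrative):

```python
import numpy as np

def classify_garment(silhouette, templates, size=(128, 128)):
    """Sketch: match an extracted garment silhouette against reference
    shapes; the overlap metric is an assumption.

    silhouette: 2-D boolean mask; templates: dict mapping a garment type
    (e.g., "jeans", "sleeveless dress") to a reference mask.
    """
    def normalize(mask):
        # Crop to the bounding box, then resample onto a fixed grid so
        # garments photographed at different scales are comparable.
        rows, cols = np.nonzero(mask)
        crop = mask[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
        ri = np.linspace(0, crop.shape[0] - 1, size[0]).astype(int)
        ci = np.linspace(0, crop.shape[1] - 1, size[1]).astype(int)
        return crop[np.ix_(ri, ci)]

    query = normalize(silhouette)

    def overlap(template_mask):
        t = normalize(template_mask)
        return np.logical_and(query, t).sum() / np.logical_or(query, t).sum()

    # The template with the highest intersection-over-union wins.
    return max(templates, key=lambda name: overlap(templates[name]))
```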
[0057] At operation 460, the 3-D digital garment creation module 246 can generate a 3-D garment model by joining the first partial shape and the second partial shape based on the determined type of garment. The generated 3-D garment model can include a first group of vertices based on the set of vertices from the partial shapes. The first group of vertices can be the outline of the 3-D garment model when the partial shapes have been joined (e.g., stitched).
[0058] For example, as illustrated in FIG. 9, two images (e.g.,
photographs) of the front of the jeans and the back of the jeans
can be sufficient when all parts of the garment are captured in the
images. Using the two images, the 3-D digital garment creation
module 246 can generate a first partial shape corresponding to the
front of the jeans 610 and a second partial shape corresponding to
the back of the jeans 620 at operations 430 and 440. Then, at
operation 450, the 3-D digital garment creation module 246 can
determine that the received images are images of a pair of jeans by
comparing the generated partial shapes to the jeans garment
template 505 in the garment template database 255. Moreover, based
on the determination that the garment is a pair of jeans, at
operation 460 the 3-D digital garment creation module 246 can join
the partial shapes to generate a 3-D pair of the digital jeans 630.
As will be further described at operation 470, the digital jeans
630 can be tessellated. Furthermore, the 3-D pair of digital jeans
630 can be presented on an avatar 640 at operation 480. The avatar
640 can have similar dimensions to the user that is interested in
purchasing the jeans. Moreover, a fit map 650 corresponding to the
tightness and/or looseness of the jeans on the avatar 640 can be
presented to the user.
[0059] In another example, as illustrated in FIG. 10, two partial
shapes of the front of a dress 710 and the back of a dress 720 are
generated based on received images. The 3-D digital garment
creation module 246 can generate a 3-D digital dress 730 with only
two received images and present the 3-D digital dress 730 on an
avatar 740. Similar to the example in FIG. 9, only two images may
be necessary because all parts of the dress are captured in the two
images. If all parts on the dress are not captured in the two
received images, then more images may be required to generate the
3-D digital garment. Additionally, the avatar 740 can illustrate
how the dress looks and feels by demonstrating a fashion
presentation 750 (e.g., catwalk) with the 3-D digital dress 730.
Alternatively, the avatar 740 can illustrate how the dress looks
and feels by demonstrating a lifestyle presentation. The lifestyle
presentation can show the garments in use in everyday
activities.
[0060] Continuing with operation 460, and as illustrated in FIG.
11, the 3-D digital garment creation module 246 can join the
partial shapes by digitally stitching together the shapes of the
different sides of the garment to produce a garment model. The
different sides may include a first side 810 and a second side 820
of the garment. For example, after generating a partial shape for
front and back of the garment at operations 430 and at 440, the two
partial shapes can be joined (e.g., stitched together digitally) as
illustrated by the joining of the digital shirt 830. As previously
mentioned, the digital garment can be presented on an avatar
840.
[0061] In some instances, when all parts of the garment are not
captured in the first two received images, more than two sides can
be joined to generate the 3-D garment. For example, in FIG. 11, the
different sides may also include a third side 850 and a fourth side
860 of the garment.
[0062] Continuing with operation 460, in some embodiments, a
digital stitch can be based on a line connecting two points. 3-D
digital garment creation module 246 can align the front side and
the back side versions of the garment by looking for analogous points on one side using the other side as a reference. The
3-D digital garment creation module 246 can recognize which edges
to join by matching a particular garment shape to a particular
entry already stored in the garment template database 255. An
exemplary garment database can hold entries for different garments
(e.g., jeans garment template 505, sleeveless dress garment
template 535, blouse garment template, sweater garment template,
shirt garment template). In some embodiments, if the shape does not
match a previously stored entry in the basic garment database, then
human intervention may be needed to provide guidance in sewing the sides
together for the particular new garment shape. Alternatively, the
intervention can be automated. The shape can then be stored as a
new entry into the basic garment database.
[0063] In some instances, the stitch length can be set to zero,
thus producing a zero length spring. A good stitching job can be
represented by setting the stitch length to zero. Additionally, in
some instances, the 3-D digital garment creation module 246 can
prevent bad stitching jobs by inhibiting stitching the front and
the back of a garment where the stitches are long and can be seen.
Accordingly, a stitch length equal to zero or close to zero length
allows for a better digitally stitched garment at operation 460.
However, setting the stitch length to zero or close to zero can be
computationally intensive, because the simulation may need to solve
a large number of equations. To illustrate this exemplary
simulation, when using equations representing springs, based on
Hooke's law, the denominator may be the length of the spring.
Therefore, when the length of the spring has been set to zero, the
equation solver has to solve equations with a zero in the
denominator, which is not possible. Accordingly, another more
computationally intensive formula for representing a spring,
without using a denominator equal to zero, may be used.
[0064] Additionally, 3-D digital garment creation module 246 can
recognize which points to stitch and which points not to stitch
based on a specific algorithm. For example, in FIG. 12, 3-D digital
garment creation module 246 recognizes that first edge 910 and
second edge 920 are not supposed to be joined (e.g., stitched)
because those edges are intended to be an opening (e.g., opening to
allow a user's head to fit through). Therefore, when the digital
hooded sweatshirt 930 is generated at operation 460, the first edge
910 and second edge 920 are not joined. In some instances, the 3-D
digital garment creation module 246 can recognize which edges to
not join by matching a particular garment shape to a particular
entry already stored in a basic garment database.
[0065] Returning to method 400, at operation 470, 3-D digital
garment creation module 246 can tessellate the generated 3-D
garment model by adding a second group of vertices to the generated
3-D garment model using the tessellation module 263. As illustrated
in FIG. 13, tessellation can include breaking down (e.g., tiling) a
garment into many tessellated geometric shapes (e.g., sample
triangle 950) to generate a tessellated garment 940. For example,
the shirt can be tessellated with triangles (e.g., about 20,000
triangles when the triangle edge is around 1 centimeter), and the
vertices (i.e., vertex 952, vertex 954, vertex 956) of the
triangles can be the second group of vertices in the generated 3-D
garment model. The vertices of the triangles can give location
information of certain points in the material. The location
information can be an x, y and z position value, and the location can be independent of the color and design of the garment.
[0066] Tessellation can be used to determine the location of
certain points in the material of the garment. The certain points
in the material of the garment can be represented by planar shapes.
For example, the interior of the boundary of the garment can be
filled with a plurality of similar geometric shapes. The points
used for the tessellation can be based on the vertices of the
shape. The shapes for the tessellation can be triangles, given that
triangles are an efficient way (e.g., less computational power,
faster tessellation speed) of representing a tessellated
garment.
[0067] Furthermore, the points of the tessellated geometric shape
can bend outside the shape, but not within. For example, if the
tessellated shape is a triangle, different triangles can be folded
over other triangles, but a triangle cannot be folded within
itself. In other words, the triangle itself remains planar. In such
example, the three vertices of the triangle determine the three
points. An example tessellation can be an extracted shape (e.g., a
shirt shape) being filled with a plurality of triangles, each with
edges that can be calibrated (e.g., 1 cm). Thus, each point on the
shirt can be approximated or located by reference to the nearest
vertex on the most proximate triangle to the location of the
determined position. In some embodiments, the triangles are
equilateral triangles to maximize efficiency. In some arrangements,
tessellation is consistent for each garment and thus, in the
example, the same 1 cm edge triangle shape is used for tessellation
of all extracted shapes. Alternatively, different tessellation
shapes are used for different extracted shapes. Furthermore,
tessellation can refer to the location of points of material and
can be independent of the color and design of the garment.
[0068] Continuing with operation 470, according to some
embodiments, the Delaunay triangulation method can be the
triangulation method used for tessellation. In the Delaunay
triangulation method, each iteration of the triangulation can try
to maximize the minimum angle of the triangles in order to make
close-to-uniform triangles. By maximizing the angles, the system
ensures that none of the triangles are too skewed, and ensures the
physical simulation runs efficiently.
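A sketch of this tessellation step using SciPy's Delaunay implementation follows; the interior seeding grid and ~1 cm spacing are assumptions, and a production version would clip seeds to the panel outline rather than its bounding box:

```python
import numpy as np
from scipy.spatial import Delaunay

def tessellate_panel(boundary_points, spacing=1.0):
    """Sketch of tessellating a flat garment panel with close-to-uniform
    triangles (spacing in calibrated units, e.g., ~1 cm edges)."""
    pts = np.asarray(boundary_points, dtype=float)
    xmin, ymin = pts.min(axis=0)
    xmax, ymax = pts.max(axis=0)
    # Seed interior vertices (the "second group of vertices") on a grid.
    xs, ys = np.meshgrid(np.arange(xmin, xmax, spacing),
                         np.arange(ymin, ymax, spacing))
    vertices = np.vstack([pts, np.column_stack([xs.ravel(), ys.ravel()])])
    # Delaunay triangulation maximizes the minimum angle over all
    # triangulations of a given point set, avoiding skewed triangles.
    tri = Delaunay(vertices)
    return vertices, tri.simplices
```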
[0069] For example, as illustrated in FIG. 14, Triangulation-2 980
can be better than Triangulation-1 970 for tessellation. As shown
in FIG. 14, the minimum angle 982 in Triangulation-2 980 is greater
than the minimum angle 972 in Triangulation-1 970. As a result, the
triangles in Triangulation-2 980 are close-to-uniform, and can help
with the draping and simulating the digitized garment.
[0070] In various embodiments, data of tessellation and boundary
can be compatible with single instruction multiple data (SIMD).
SIMD can be a type of vector processor that uses the same
instruction on multiple elements. SIMD compatibility can ensure
that the code is consistent with the hardware. Making the processes
SIMD friendly can allow for utilization of the hardware in a more
efficient manner because current hardware includes vector processors, or processors with SIMD units. Additionally, the tessellation can be done in parallel (e.g., performing the tessellation using multiple SIMD units in parallel) in order to increase the tessellation speed and to speed up the simulation of the garment under different scenarios.
[0071] Optionally, method 400 can include an operation for
calibration, as illustrated in FIG. 15. The first and/or second
images received at operations 410 and/or 420 can include an object
(e.g., credit card) with a known size for the 3-D digital garment
creation module 246 to calibrate the boundary of the garment. In
various embodiments, identifying the boundary can include computing
shape and size of the garment.
[0072] In FIG. 15, the calibration object 1010 can be placed near
the garment before the image is taken such that the one or more
planar garment images (e.g., image of the front side of the jeans
1020, image of the back side of the jeans 1030) also include the
calibration object 1010. In some instances, the calibration object
1010 can be placed on the garment, where the calibration object
1010 is clearly visible in the photograph and distinct from the
garment itself. A square object may be a better object for
calibration because of the straight lines, four equal sides and
four equal angles.
[0073] The calibration technique in method 400 can determine the
actual dimensions of the garment depicted in the one or more
photographs. The calibration technique can be achieved through
proportional comparison by utilizing any object of standard size
(e.g., grid paper of standard size, a standard credit card, a
CD).
[0074] Calibration can assign an x, y, z position value to each
pixel. Given the garment is laid out on a planar surface, the
system may need the relative position of three points to compute
the calibration (or projection mapping from image to object space).
For example, using the calibration object 1010, the system can
extract the four corner points, and given the dimensions of the
calibration object 1010, the system has enough information to
compute the calibration. Based on the calibration, the system can
present the garment on an avatar 1040 and display properties 1050
(e.g., rise measurement, inseam measurement, hips measurement,
thigh measurement, calf measurement) associated with the garment.
Similarly, with a grid paper as the calibration object 1010, the
system can use the relative positions of three points to compute
this calibration.
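For illustration, a minimal sketch of recovering a pixels-per-unit scale from the four extracted corner points of a rectangular calibration object (the corner ordering and function name are assumptions):

```python
import numpy as np

def pixels_per_unit(corners_px, width_units, height_units):
    """Sketch: recover image scale from a rectangular calibration object
    of known size (e.g., a credit card).

    corners_px: 4 x 2 corner pixels ordered top-left, top-right,
    bottom-right, bottom-left (ordering is an assumption).
    """
    c = np.asarray(corners_px, dtype=float)
    # Average the two horizontal and two vertical edge lengths in pixels.
    px_w = (np.linalg.norm(c[1] - c[0]) + np.linalg.norm(c[2] - c[3])) / 2
    px_h = (np.linalg.norm(c[3] - c[0]) + np.linalg.norm(c[2] - c[1])) / 2
    # Pixels per real-world unit, averaged over both directions.
    return (px_w / width_units + px_h / height_units) / 2

# e.g., inseam_inches = inseam_length_px / pixels_per_unit(corners, 3.37, 2.125)
```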
[0075] Optionally, method 400 can further include applying a
texture map to the 3-D garment model. In one or more arrangements,
the 3-D digital garment creation module 246 applies a texture map
to the tessellated three-dimensional garment model. The texture map
can include assigning a color to a vertex in the second group of
vertices based on the received first image. The color values can be
extracted from the received images, or alternatively, may be
assigned from a different image (e.g., a texture swatch applied to
the whole garment). Since a shape of the garment has already been
determined using the operations described above, texture mapping
can give the garment a texture and color. The texture can be
represented as color. For example, in texture mapping, each vertex
of the shape (e.g., triangle) is assigned a red-green-blue-alpha
(RGBA) value. Alpha can be the transparency value. Thus, in the triangulation method, each triangle potentially has three different RGBA values. The rest of the points of the triangle
can then be interpolated. Interpolation allows for the RGBA values
of the remaining points in the triangle to be filled in using a
linear combination method (e.g., the points of the triangle are
weighted based on the distance to the three vertices and the RGBA
values are assigned accordingly). The interpolated values can be
extracted from the received image, or alternatively, may be
assigned from a different image (e.g., a texture swatch applied to
the whole garment).
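The interpolation described above is standard barycentric weighting; a sketch under assumed vertex layouts:

```python
import numpy as np

def interpolate_rgba(p, tri_xy, tri_rgba):
    """Sketch of barycentric interpolation of vertex RGBA values for a
    point p inside a tessellation triangle (names illustrative).

    tri_xy: 3 x 2 vertex positions; tri_rgba: 3 x 4 vertex RGBA values.
    """
    a, b, c = np.asarray(tri_xy, dtype=float)
    rgba = np.asarray(tri_rgba, dtype=float)
    v0, v1 = b - a, c - a
    v2 = np.asarray(p, dtype=float) - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    w0 = 1.0 - w1 - w2
    # Weights sum to 1 and weight each vertex by proximity, as described.
    return w0 * rgba[0] + w1 * rgba[1] + w2 * rgba[2]
```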
[0076] At operation 480, 3-D digital garment creation module 246
can present the tessellated 3-D garment model on a body model using
the draping module 265 and the simulation module 266. The
tessellated 3-D garment model is presented based on a simulated
force. The presentation can be done by digitally draping the
tessellated 3-D garment model onto a 3-D body model. In some
embodiments, 3-D digital garment creation module 246 can put the
digitally stitched garment generated at operation 470 onto a
standard body, as illustrated by avatars 640 and 740. In various
embodiments, operation 480 involves taking data from all previous
operations and combining them and inputting them into a cloth
simulation engine. Additionally, the simulation results from
operation 480 can be stored in the simulation result geometry files
258.
[0077] Optionally, method 400 can include generating multiple sizes
of the same garment by scaling or distorting the 3-D digital
garment model. Scaling or distorting the 3-D digital garment model
can generate 3-D models that are representative of the family of
sizes of a garment typically carried and sold by retailers.
Alternatively, scaling or distorting the 3-D digital garment model
can generate a specific sized version of the garment. The
distortion of the 3-D digital garment model can be uniform for the
entire model (i.e., the entire model is grown or shrunk), or
specific to individual zones (e.g., specific garment areas) with
different distortions (e.g., scale factors) for the individual
zones. Additionally, the scaling of dimensions of the garments can
be arbitrary (as in the case of creating a custom size), or can be
according to specifications. The specifications can be based on
grading rules, size charts, actual measurements, and/or digital
measurements.
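As a hedged illustration of this distortion step (the zone assignment and scale factors are invented for the example; real grading would follow the size charts or grading rules mentioned above):

```python
import numpy as np

def grade_garment(vertices, scale=1.0, zone_of=None, zone_scales=None):
    """Sketch: produce a different garment size by distorting the model.

    vertices: N x 3 vertex positions; scale: uniform grow/shrink factor;
    zone_of: optional length-N array assigning each vertex to a zone;
    zone_scales: optional dict of per-zone factors (e.g., {"hips": 1.05}).
    """
    out = np.asarray(vertices, dtype=float) * scale    # uniform distortion
    if zone_of is not None and zone_scales is not None:
        zone_of = np.asarray(zone_of)
        for zone, factor in zone_scales.items():       # zone-specific distortion
            out[zone_of == zone] *= factor
    return out
```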
[0078] As illustrated in FIGS. 9-10, a cloth engine can take as
input tessellation and material properties and can output 3-D
models of clothing on avatars 640 and 740. The cloth engine can
move the points around to fit a 3-D body model based on a simulated
force (e.g., friction, stitching force). Additionally, based on
this modeling, the points are connected via springs and can be
stretched based on a simulated force (e.g., gravity, material
property of garment). The cloth engine can solve a system of
equations, given that the equations are all inter-connected. In one
example, the system of equations can be based on the spring force
on each vertex.
[0079] Various operations described in method 400 can be
implemented through specific modules stored in memory 236. Some
examples of implementations and equations are described below. For
example, below is the system of equations to be used with method
400 for a three-spring implementation of a sample triangle 950 with
three-vertices (i.e., vertex 952, vertex 954, vertex 956)
associated with a tessellated garment 940, as illustrated in FIG.
13.
$$\text{spring force}_1 = \left(\frac{k_s}{\text{restlength}_1}\right)\left(\lVert x_2 - x_1 \rVert - \text{restlength}_1\right)\hat{d}_1 + \left(\frac{k_d}{\text{restlength}_1}\right)\left[(v_2 - v_1)\cdot\hat{d}_1\right]\hat{d}_1 \quad \text{(Equation 1)}$$

$$\text{spring force}_2 = \left(\frac{k_s}{\text{restlength}_2}\right)\left(\lVert x_3 - x_2 \rVert - \text{restlength}_2\right)\hat{d}_2 + \left(\frac{k_d}{\text{restlength}_2}\right)\left[(v_3 - v_2)\cdot\hat{d}_2\right]\hat{d}_2 \quad \text{(Equation 2)}$$

$$\text{spring force}_3 = \left(\frac{k_s}{\text{restlength}_3}\right)\left(\lVert x_1 - x_3 \rVert - \text{restlength}_3\right)\hat{d}_3 + \left(\frac{k_d}{\text{restlength}_3}\right)\left[(v_1 - v_3)\cdot\hat{d}_3\right]\hat{d}_3 \quad \text{(Equation 3)}$$

[0080] where $k_s$ is the elastic spring constant, $k_d$ is the damping spring constant, $\hat{d}_i$ denotes spring direction $i$ (a unit vector), and each vertex has a position ($x$) and velocity ($v$).
[0081] In the equations above, when the denominator is a restlength value, a non-zero value can be used for zero-length springs. Specifically, the equations can substitute a non-zero visual restlength value for the restlength in the denominator, which in zero-length spring cases is 0. This allows the system to handle zero-length springs without dividing by 0.
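Putting Equations 1-3 and the zero-length guard together, a sketch of the per-spring force (the eps stand-in for the visual restlength is an assumption):

```python
import numpy as np

def spring_force(x1, x2, v1, v2, rest_length, ks, kd, eps=1e-6):
    """Sketch of Equations 1-3 for a single spring between two vertices.

    eps stands in for the non-zero "visual restlength" guard described
    above for zero-length stitching springs (value illustrative).
    """
    delta = np.asarray(x2, dtype=float) - np.asarray(x1, dtype=float)
    length = np.linalg.norm(delta)                     # ||x2 - x1||
    direction = delta / max(length, eps)               # unit spring direction
    denom = max(rest_length, eps)                      # avoid dividing by 0
    elastic = (ks / denom) * (length - rest_length) * direction
    rel_v = np.asarray(v2, dtype=float) - np.asarray(v1, dtype=float)
    damping = (kd / denom) * np.dot(rel_v, direction) * direction
    # Force on vertex 1; vertex 2 receives the equal and opposite force.
    return elastic + damping
```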
[0082] To further explain the equations above, a walkthrough of the
equations is described. The state that the simulator (e.g., 3-D
digital garment creation module 246) can maintain is the positions
and velocities of all the points that represent the garment. As the
simulator moves forward in time, the simulator can update the
position of the points over time by computing the net force on each
point at each instance in time. Then, based on the mass of the
particle, the simulator can use the equation based on the laws of
motion, f=ma, to calculate an acceleration. The acceleration
determines a change in velocity, which can be used to update the
velocity of each point. Likewise, the velocity determines the
change in position, which can be used to update the positions.
Therefore, at each point in the simulation, the simulator can
compute the net force on each particle. The forces exerted on each
particle can be based on a gravitational force, spring forces, or
other forces (e.g., drag forces to achieve desired styling). The
equation for gravitational force is f=mg, and the spring force is
described above.
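A minimal sketch of that update loop (semi-implicit Euler; the time step and gravity vector are illustrative):

```python
import numpy as np

def simulation_step(positions, velocities, masses, other_forces, dt=1.0 / 60):
    """Sketch of the update described above.

    positions, velocities, other_forces: N x 3 arrays; masses: length-N
    array. other_forces holds spring/drag forces; dt is illustrative.
    """
    gravity = np.array([0.0, -9.8, 0.0])
    forces = other_forces + masses[:, None] * gravity  # f = mg added in
    accelerations = forces / masses[:, None]           # from f = ma
    velocities = velocities + accelerations * dt       # change in velocity
    positions = positions + velocities * dt            # change in position
    return positions, velocities
```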
[0083] The spring force f has two components, an elastic component
(i.e., part of the equation multiplied by $k_s$) and a damping component (i.e., part of the equation multiplied by $k_d$). The
elastic component calculates the oscillation of the spring. The
strength of the elastic force is proportional to the amount the
spring is stretched from the restlength value, which can be
determined by $\lVert x_2 - x_1 \rVert$ (i.e., the current length of the spring) minus
the restlength value. For example, the more the spring is
compressed or stretched, the higher the force pushing the spring to
return to its rest state. Additionally, $k_s$ is a spring
constant that allows for scaling up/down the force based on the
strength of the spring, which is then multiplied by the spring
direction to give the force a direction (i.e., in the direction of
the spring).
[0084] The damping component calculates the damping effect (e.g.,
heat generated by the spring's motion, drag). Damping can be a drag
force, where the higher the velocity, the higher the drag/damping
force; accordingly, damping can be proportional to velocity. In the
case of a spring, there are two particles moving, so instead of a
single velocity the simulator computes a relative velocity between
the two endpoints (e.g., $v_2 - v_1$ in FIG. 13). The larger the
relative velocity, the faster the points are moving apart or coming
together, and as a result the larger the damping force (i.e., the
damping is proportional to relative velocity). Additionally, $k_d$
is the damping spring constant used to scale the damping force up or
down; the result is multiplied by the spring direction to give the
force a direction.
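Continuing the spring_force sketch above with arbitrary illustrative
values, the two components can be read off directly.

    # Endpoints 1.5 units apart along x (restlength 1.0), separating
    # at 0.2 units/s; k_s = 10.0 and k_d = 0.5 are arbitrary.
    x1, x2 = np.zeros(3), np.array([1.5, 0.0, 0.0])
    v1, v2 = np.zeros(3), np.array([0.2, 0.0, 0.0])
    f = spring_force(x1, x2, v1, v2, restlength=1.0, k_s=10.0, k_d=0.5)
    # Elastic part: (10/1.0)*(1.5-1.0) = 5.0; damping part:
    # (0.5/1.0)*0.2 = 0.1; both act along +x, so f is about
    # [5.1, 0, 0], pulling the first endpoint toward the second.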
[0085] According to various example embodiments, one or more of the
methodologies described herein may facilitate the online purchase
of garments. Moreover, one or more of the methodologies described
herein may facilitate the visualization of a garment on a 3-D body
model using 3-D digital garment creation module 246.
[0086] When these effects are considered in aggregate, one or more
of the methodologies described herein may obviate a need for
certain efforts or resources that otherwise would be involved in
digitizing the garment from images. Efforts expended by a user in
generating 3-D models may be reduced by one or more of the
methodologies described herein. Computing resources used by one or
more machines, databases, or devices (e.g., within the system 100)
may similarly be reduced. Examples of such computing resources
include processor cycles, network traffic, memory usage, data
storage capacity, power consumption, and cooling capacity.
[0087] FIG. 16 is a block diagram illustrating components of a
machine 1200, according to some example embodiments, able to read
instructions 1224 from a machine-readable medium 1222 (e.g., a
non-transitory machine-readable medium, a machine-readable storage
medium, a computer-readable storage medium, or any suitable
combination thereof) and perform any one or more of the
methodologies discussed herein, in whole or in part. Specifically,
FIG. 16 shows the machine 1200 in the example form of a computer
system (e.g., a computer) within which the instructions 1224 (e.g.,
software, a program, an application, an applet, an app, or other
executable code) for causing the machine 1200 to perform any one or
more of the methodologies discussed herein may be executed, in
whole or in part. Server 202 can be an example of machine 1200.
[0088] In alternative embodiments, the machine 1200 may operate as a
standalone device or may be connected (e.g., networked) to other
machines. In a networked deployment, the machine 1200 may operate
in the capacity of a server machine or a client machine in a
server-client network environment, or as a peer machine in a
distributed (e.g., peer-to-peer) network environment. The machine
1200 may be a server computer, a client computer, a personal
computer (PC), a tablet computer, a laptop computer, a netbook, a
cellular telephone, a smartphone, a set-top box (STB), a personal
digital assistant (PDA), a web appliance, a network router, a
network switch, a network bridge, or any machine capable of
executing the instructions 1224, sequentially or otherwise, that
specify actions to be taken by that machine. Further, while only a
single machine is illustrated, the term "machine" shall also be
taken to include any collection of machines that individually or
jointly execute the instructions 1224 to perform all or part of any
one or more of the methodologies discussed herein.
[0089] The machine 1200 includes a processor 1202 (e.g., a central
processing unit (CPU), a graphics processing unit (GPU), a digital
signal processor (DSP), an application specific integrated circuit
(ASIC), a radio-frequency integrated circuit (RFIC), or any
suitable combination thereof), a main memory 1204, and a static
memory 1206, which are configured to communicate with each other
via a bus 1208. The processor 1202 may contain microcircuits that
are configurable, temporarily or permanently, by some or all of the
instructions 1224 such that the processor 1202 is configurable to
perform any one or more of the methodologies described herein, in
whole or in part. For example, a set of one or more microcircuits
of the processor 1202 may be configurable to execute one or more
modules (e.g., software modules) described herein.
[0090] The machine 1200 may further include a graphics display 1210
(e.g., a plasma display panel (PDP), a light emitting diode (LED)
display, a liquid crystal display (LCD), a projector, a cathode ray
tube (CRT), or any other display capable of displaying graphics or
video). The machine 1200 may also include an alphanumeric input
device 1212 (e.g., a keyboard or keypad), a cursor control device
1214 (e.g., a mouse, a touchpad, a trackball, a joystick, a motion
sensor, an eye tracking device, or other pointing instrument), a
storage unit 1216, an audio generation device 1218 (e.g., a sound
card, an amplifier, a speaker, a headphone jack, or any suitable
combination thereof), and a network interface device 1220.
[0091] The storage unit 1216 includes the machine-readable medium
1222 (e.g., a tangible and non-transitory machine-readable storage
medium) on which are stored the instructions 1224 embodying any one
or more of the methodologies or functions described herein. The
instructions 1224 may also reside, completely or at least
partially, within the main memory 1204, within the processor 1202
(e.g., within the processor's cache memory), or both, before or
during execution thereof by the machine 1200. Accordingly, the main
memory 1204 and the processor 1202 may be considered
machine-readable media (e.g., tangible and non-transitory
machine-readable media). The instructions 1224 may be transmitted
or received over the network 34 via the network interface device
1220. For example, the network interface device 1220 may
communicate the instructions 1224 using any one or more transfer
protocols (e.g., hypertext transfer protocol (HTTP)).
[0092] In some example embodiments, the machine 1200 may be a
portable computing device, such as a smart phone or tablet
computer, and have one or more additional input components 1230
(e.g., sensors or gauges). Examples of such input components 1230
include an image input component (e.g., one or more cameras), an
audio input component (e.g., a microphone), a direction input
component (e.g., a compass), a location input component (e.g., a
global positioning system (GPS) receiver), an orientation component
(e.g., a gyroscope), a motion detection component (e.g., one or
more accelerometers), an altitude detection component (e.g., an
altimeter), and a gas detection component (e.g., a gas sensor).
Inputs harvested by any one or more of these input components may
be accessible and available for use by any of the modules described
herein.
[0093] As used herein, the term "memory" refers to a
machine-readable medium able to store data temporarily or
permanently and may be taken to include, but not be limited to,
random-access memory (RAM), read-only memory (ROM), buffer memory,
flash memory, and cache memory. While the machine-readable medium
1222 is shown in an example embodiment to be a single medium, the
term "machine-readable medium" should be taken to include a single
medium or multiple media (e.g., a centralized or distributed
database, or associated caches and servers) able to store
instructions 1224. The term "machine-readable medium" shall also be
taken to include any medium, or combination of multiple media, that
is capable of storing the instructions 1224 for execution by the
machine 1200, such that the instructions 1224, when executed by one
or more processors of the machine 1200 (e.g., processor 1202),
cause the machine 1200 to perform any one or more of the
methodologies described herein, in whole or in part. Accordingly, a
"machine-readable medium" refers to a single storage apparatus or
device, as well as cloud-based storage systems or storage networks
that include multiple storage apparatus or devices. The term
"machine-readable medium" shall accordingly be taken to include,
but not be limited to, one or more tangible (e.g., non-transitory)
data repositories in the form of a solid-state memory, an optical
medium, a magnetic medium, or any suitable combination thereof.
[0094] Throughout this specification, plural instances may
implement components, operations, or structures described as a
single instance. Although individual operations of one or more
methods are illustrated and described as separate operations, one
or more of the individual operations may be performed concurrently,
and nothing requires that the operations be performed in the order
illustrated. Structures and functionality presented as separate
components in example configurations may be implemented as a
combined structure or component. Similarly, structures and
functionality presented as a single component may be implemented as
separate components. These and other variations, modifications,
additions, and improvements fall within the scope of the subject
matter herein.
[0095] Certain embodiments are described herein as including logic
or a number of components, modules, or mechanisms. Modules may
constitute software modules (e.g., code stored or otherwise
embodied on a machine-readable medium or in a transmission medium),
hardware modules, or any suitable combination thereof. A "hardware
module" is a tangible (e.g., non-transitory) unit capable of
performing certain operations and may be configured or arranged in
a certain physical manner. In various example embodiments, one or
more computer systems (e.g., a standalone computer system, a client
computer system, or a server computer system) or one or more
hardware modules of a computer system (e.g., a processor or a group
of processors) may be configured by software (e.g., an application
or application portion) as a hardware module that operates to
perform certain operations as described herein.
[0096] In some embodiments, a hardware module may be implemented
mechanically, electronically, or any suitable combination thereof.
For example, a hardware module may include dedicated circuitry or
logic that is permanently configured to perform certain operations.
For example, a hardware module may be a special-purpose processor,
such as a field programmable gate array (FPGA) or an ASIC. A
hardware module may also include programmable logic or circuitry
that is temporarily configured by software to perform certain
operations. For example, a hardware module may include software
encompassed within a general-purpose processor or other
programmable processor. It will be appreciated that the decision to
implement a hardware module mechanically, in dedicated and
permanently configured circuitry, or in temporarily configured
circuitry (e.g., configured by software) may be driven by cost and
time considerations.
[0097] Accordingly, the phrase "hardware module" should be
understood to encompass a tangible entity, and such a tangible
entity may be physically constructed, permanently configured (e.g.,
hardwired), or temporarily configured (e.g., programmed) to operate
in a certain manner or to perform certain operations described
herein. As used herein, "hardware-implemented module" refers to a
hardware module. Considering embodiments in which hardware modules
are temporarily configured (e.g., programmed), each of the hardware
modules need not be configured or instantiated at any one instance
in time. For example, where a hardware module comprises a
general-purpose processor configured by software to become a
special-purpose processor, the general-purpose processor may be
configured as respectively different special-purpose processors
(e.g., comprising different hardware modules) at different times.
Software
(e.g., a software module) may accordingly configure one or more
processors, for example, to constitute a particular hardware module
at one instance of time and to constitute a different hardware
module at a different instance of time.
[0098] Hardware modules can provide information to, and receive
information from, other hardware modules. Accordingly, the
described hardware modules may be regarded as being communicatively
coupled. Where multiple hardware modules exist contemporaneously,
communications may be achieved through signal transmission (e.g.,
over appropriate circuits and buses) between or among two or more
of the hardware modules. In embodiments in which multiple hardware
modules are configured or instantiated at different times,
communications between such hardware modules may be achieved, for
example, through the storage and retrieval of information in memory
structures to which the multiple hardware modules have access. For
example, one hardware module may perform an operation and store the
output of that operation in a memory device to which it is
communicatively coupled. A further hardware module may then, at a
later time, access the memory device to retrieve and process the
stored output. Hardware modules may also initiate communications
with input or output devices, and can operate on a resource (e.g.,
a collection of information).
[0099] The various operations of example methods described herein
may be performed, at least partially, by one or more processors
that are temporarily configured (e.g., by software) or permanently
configured to perform the relevant operations. Whether temporarily
or permanently configured, such processors may constitute
processor-implemented modules that operate to perform one or more
operations or functions described herein. As used herein,
"processor-implemented module" refers to a hardware module
implemented using one or more processors.
[0100] Similarly, the methods described herein may be at least
partially processor-implemented, a processor being an example of
hardware. For example, at least some of the operations of a method
may be performed by one or more processors or processor-implemented
modules. As used herein, "processor-implemented module" refers to a
hardware module in which the hardware includes one or more
processors. Moreover, the one or more processors may also operate
to support performance of the relevant operations in a "cloud
computing" environment or as a "software as a service" (SaaS). For
example, at least some of the operations may be performed by a
group of computers (as examples of machines including processors),
with these operations being accessible via a network (e.g., the
Internet) and via one or more appropriate interfaces (e.g., an
application program interface (API)).
[0101] The performance of certain operations may be distributed
among the one or more processors, not only residing within a single
machine, but deployed across a number of machines. In some example
embodiments, the one or more processors or processor-implemented
modules may be located in a single geographic location (e.g.,
within a home environment, an office environment, or a server
farm). In other example embodiments, the one or more processors or
processor-implemented modules may be distributed across a number of
geographic locations.
[0102] Some portions of the subject matter discussed herein may be
presented in terms of algorithms or symbolic representations of
operations on data stored as bits or binary digital signals within
a machine memory (e.g., a computer memory). Such algorithms or
symbolic representations are examples of techniques used by those
of ordinary skill in the data processing arts to convey the
substance of their work to others skilled in the art. As used
herein, an "algorithm" is a self consistent sequence of operations
or similar processing leading to a desired result. In this context,
algorithms and operations involve physical manipulation of physical
quantities. Typically, but not necessarily, such quantities may
take the form of electrical, magnetic, or optical signals capable
of being stored, accessed, transferred, combined, compared, or
otherwise manipulated by a machine. It is convenient at times,
principally for reasons of common usage, to refer to such signals
using words such as "data," "content," "bits," "values,"
"elements," "symbols," "characters," "terms," "numbers,"
"numerals," or the like. These words, however, are merely
convenient labels and are to be associated with appropriate
physical quantities.
[0103] Unless specifically stated otherwise, discussions herein
using words such as "processing," "computing," "calculating,"
"determining," "presenting," "displaying," or the like may refer to
actions or processes of a machine (e.g., a computer) that
manipulates or transforms data represented as physical (e.g.,
electronic, magnetic, or optical) quantities within one or more
memories (e.g., volatile memory, non-volatile memory, or any
suitable combination thereof), registers, or other machine
components that receive, store, transmit, or display information.
Furthermore, unless specifically stated otherwise, the terms "a" or
"an" are herein used, as is common in patent documents, to include
one or more than one instance. Finally, as used herein, the
conjunction "or" refers to a non-exclusive "or," unless
specifically stated otherwise.
[0104] It will be understood that, although the terms "first,"
"second," etc. may be used herein to describe various elements,
these elements should not be limited by these terms. These terms
are only used to distinguish one element from another.
* * * * *