U.S. patent application number 13/466,612 was filed with the patent office on May 8, 2012, and published on 2015-06-18 as application 20150170398 for generating a reduced resolution textured model from a higher resolution model. This patent application is currently assigned to Google Inc. The applicant listed for this patent is Stephen Charles HSU. Invention is credited to Stephen Charles HSU.

Application Number: 13/466,612
Publication Number: 20150170398
Document ID: /
Family ID: 53176374
Publication Date: 2015-06-18
United States Patent Application 20150170398
Kind Code: A1
HSU; Stephen Charles
June 18, 2015

Generating Reduced Resolution Textured Model From Higher Resolution Model
Abstract
An exemplary method for simplifying a texture of a
three-dimensional model includes simplifying a first
three-dimensional model to determine a second three-dimensional
model. The first three-dimensional model has a higher resolution
than the second three-dimensional model. The method also includes
allocating a texture atlas for the second three-dimensional model.
The method further includes filling in the texture atlas for the
second three-dimensional model. Filling in the texture atlas may
include determining a location on the second three-dimensional
model corresponding to a pixel in the texture atlas for the second
three-dimensional model, determining a location on the first
three-dimensional model corresponding to the determined location on
the second three-dimensional model, determining a color value
texture mapped to the first three-dimensional model at the
determined location on the first three-dimensional model, and
setting the determined color value to the pixel in the texture
atlas for the second three-dimensional model.
Inventors: HSU; Stephen Charles (San Carlos, CA)
Applicant: HSU; Stephen Charles (San Carlos, CA, US)
Assignee: Google Inc. (Mountain View, CA)
Family ID: 53176374
Appl. No.: 13/466,612
Filed: May 8, 2012
Current U.S. Class: 345/420
Current CPC Class: G06T 15/06 20130101; G06T 15/04 20130101; G06T 7/90 20170101; G06T 17/20 20130101; G06T 2210/36 20130101
International Class: G06T 15/04 20110101 G06T015/04; G06T 15/06 20060101 G06T015/06; G06T 7/40 20060101 G06T007/40; G06T 17/00 20060101 G06T017/00
Claims
1. A method for simplifying a texture of a three-dimensional model,
comprising: (a) simplifying, with one or more computing devices, a
first three-dimensional model to determine a second
three-dimensional model, the first three-dimensional model having a
higher resolution than the second three-dimensional model, the
first three-dimensional model having a first texture atlas
associated therewith that corresponds to a mapping of color onto at
least a portion of a three-dimensional surface of the first
three-dimensional model; (b) allocating, with the one or more
computing devices, a second texture atlas for the second
three-dimensional model; and (c) filling in, with the one or more
computing devices, the second texture atlas for the second
three-dimensional model such that the second texture atlas
corresponds to a mapping of color onto at least a portion of a
three-dimensional surface of the second three-dimensional model,
wherein the filling in the second texture atlas (c) comprises: (d)
determining, with the one or more computing devices, a location on
the second three-dimensional model corresponding to a pixel in the
second texture atlas for the second three-dimensional model; (e)
determining, with the one or more computing devices, a ray
extending from the location on the second three-dimensional model
determined in (d) to a corresponding location on the first
three-dimensional model; (f) determining, with the one or more
computing devices, an intersection of the ray and the first
three-dimensional model; (g) determining, with the one or more
computing devices, a color value texture mapped to the first
three-dimensional model at the intersection determined in (f) based
on the first texture atlas; and (h) setting, with the one or more
computing devices, the color value determined in (g) to the pixel
in the second texture atlas for the second three-dimensional
model.
2. The method of claim 1, wherein the location on the second
three-dimensional model is a three-dimensional point on the second
three-dimensional model.
3. The method of claim 1, wherein the corresponding location on the
first three-dimensional model is a three-dimensional point on the
first three-dimensional model.
4. (canceled)
5. The method of claim 1, wherein the ray is normal to the second
three-dimensional model.
6. The method of claim 1, wherein the ray is normal to the first
three-dimensional model.
7. The method of claim 1, wherein the ray extends to the first
three-dimensional model in a fixed direction.
8. The method of claim 1, wherein the first texture atlas for the
first three-dimensional model is different from the second texture
atlas for the second three-dimensional model.
9. The method of claim 1, wherein the simplifying (a) includes
removing one or more vertices in the first three-dimensional model
and joining edges connected to the one or more removed vertices to
other vertices in the first three-dimensional model.
10. A system for simplifying a texture of a three-dimensional
model, comprising: a model simplifying engine configured to
simplify a first three-dimensional model to determine a second
three-dimensional model, the first three-dimensional model having a
higher resolution than the second three-dimensional model, the
first three-dimensional model having a first texture atlas
associated therewith that corresponds to a mapping of color onto at
least a portion of a three-dimensional surface of the first
three-dimensional model; and a texture engine configured to: (i)
allocate a second texture atlas for the second three-dimensional
model; and (ii) fill in the second texture atlas for the second
three-dimensional model such that the second texture atlas
corresponds to a mapping of color onto at least a portion of a
three-dimensional surface of the second three-dimensional model,
wherein when the texture engine fills in the second texture atlas
for the second three-dimensional model, the texture engine is
configured to: determine a location on the second three-dimensional
model corresponding to a pixel in the second texture atlas for the
second three-dimensional model; determine a ray extending from the
determined location on the second three-dimensional model to a
corresponding location on the first three-dimensional model;
determine an intersection of the ray and the first
three-dimensional model; determine a color value texture mapped to
the first three-dimensional model at the determined intersection
based on the first texture atlas; and set the determined color
value to the pixel in the second texture atlas for the second
three-dimensional model.
11. The system of claim 10, wherein the location on the second
three-dimensional model is a three-dimensional point on the second
three-dimensional model.
12. (canceled)
13. The system of claim 10, wherein the ray is normal to the second
three-dimensional model.
14. The system of claim 10, wherein the ray is normal to the first
three-dimensional model.
15. The system of claim 10, wherein the ray extends to the first
three-dimensional model in a fixed direction.
16. The system of claim 10, wherein the first texture atlas for the
first three-dimensional model is different from the second texture
atlas for the second three-dimensional model.
17. An apparatus comprising at least one non-transitory computer
readable storage medium encoding instructions thereon that, in
response to execution by a computing device, cause the computing
device to perform operations comprising: (a) simplifying a first
three-dimensional model to determine a second three-dimensional
model, the first three-dimensional model having a higher resolution
than the second three-dimensional model, the first
three-dimensional model having a first texture atlas associated
therewith that corresponds to a mapping of color onto at least a
portion of a three-dimensional surface of the first
three-dimensional model; (b) allocating a second texture atlas for
the second three-dimensional model; and (c) filling in the second
texture atlas for the second three-dimensional model such that the
second texture atlas corresponds to a mapping of color onto at
least a portion of a three-dimensional surface of the second
three-dimensional model, wherein the
operations further comprise: (d) determining a location on the
second three-dimensional model corresponding to a pixel in the
second texture atlas for the second three-dimensional model; (e)
determining a ray extending from the location on the second
three-dimensional model determined in (d) to a corresponding
location on the first three-dimensional model; (f) determining an
intersection of the ray
and the first three-dimensional model; (g) determining a color
value texture mapped to the first three-dimensional model at the
intersection determined in (f) based on the first texture atlas;
and (h) setting the color value determined in (g) to the pixel in
the second texture atlas for the second three-dimensional
model.
18. The apparatus of claim 17, wherein the location on the second
three-dimensional model is a three-dimensional point on the second
three-dimensional model.
19. (canceled)
20. The apparatus of claim 17, wherein the ray is normal to the
second three-dimensional model.
21. The apparatus of claim 17, wherein the ray is normal to the
first three-dimensional model.
22. The apparatus of claim 17, wherein the ray extends to the first
three-dimensional model in a fixed direction.
23. The apparatus of claim 17, wherein the first texture atlas for
the first three-dimensional model is different from the second
texture atlas for the second three-dimensional model.
Description
BACKGROUND
[0001] 1. Field
[0002] This disclosure generally relates to displaying
three-dimensional models.
[0003] 2. Background
[0004] A geographic information system (GIS) is a system that can
be used for storing, retrieving, manipulating, and displaying a
three-dimensional model. The three-dimensional model may include
satellite images texture mapped to terrain, such as mountains,
valleys, and canyons. The GIS uses a virtual camera to navigate
through a three-dimensional environment. The virtual camera defines
what portion of a three-dimensional model to display to a user in a
display area.
[0005] A client device may display the three-dimensional model in a
geographic information environment.
[0006] The three-dimensional model may
have any number of level-of-detail (LOD) representations that may
be displayed in the geographic information environment. Accounting
for an LOD of a three-dimensional model may increase or decrease
the complexity of a three-dimensional model as a virtual camera
moves closer to or farther from the model.
[0007] Texture is applied to a surface of the three-dimensional
model to give the three-dimensional model a more realistic
appearance. When a user switches from a higher resolution model to
a lower resolution model, the appearance of the texture applied to
the lower resolution model may vary significantly. The geometry of
the lower resolution model is simplified and includes fewer
vertices than the higher resolution model. Accordingly, it may be
computationally inefficient to apply the texture used for the
higher resolution model to the lower resolution model.
[0008] Further, texturing of coarse resolution models is
inefficient in space and time because each tile in the model spans
a large area and thus may require many source aerial images (e.g.,
1000 images). For example, each source image may require computing
and storing a corresponding depth buffer for occlusion
handling.
BRIEF SUMMARY
[0009] This disclosure generally relates to simplifying a texture
of a three-dimensional model and in particular to generating a
lower resolution version of a given textured three-dimensional
model.
[0010] An exemplary method for simplifying a texture of a
three-dimensional model includes simplifying a first
three-dimensional model to determine a second three-dimensional
model. The first three-dimensional model has a higher resolution
than the second three-dimensional model. The method also includes
allocating a texture atlas for the second three-dimensional model.
The method further includes filling in the texture atlas for the
second three-dimensional model. Filling in the texture atlas for
the second three-dimensional model may include: determining a
location on the second three-dimensional model corresponding to a
pixel in the texture atlas for the second three-dimensional model,
determining a location on the first three-dimensional model
corresponding to the determined location on the second
three-dimensional model, determining a color value texture mapped
to the first three-dimensional model at the determined location on
the first three-dimensional model, and setting the determined color
value to the pixel in the texture atlas for the second
three-dimensional model.
[0011] Other embodiments of these aspects include corresponding
systems, apparatuses, and computer program products configured to
perform the actions of these methods, encoded on computer storage
devices.
[0012] Further features and advantages of embodiments described
herein, as well as the structure and operation of various
embodiments, are described in detail below with reference to the
accompanying drawings. It is noted that the embodiments described
below are not limited to the specific embodiments described herein.
Such embodiments are presented herein for illustrative purposes
only. Additional embodiments will be apparent to persons skilled in
the relevant art based on the teachings contained herein.
BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES
[0013] Embodiments are described with reference to the accompanying
drawings. The accompanying drawings, which are incorporated herein
and form a part of the specification, illustrate embodiments and,
together with the description, further serve to explain the
principles of the embodiments and to enable a person skilled in the
relevant art to make and use the embodiments. In the drawings, like
reference numbers may indicate identical or functionally similar
elements. The drawing in which an element first appears is
generally indicated by the left-most digit in the corresponding
reference number.
[0014] FIG. 1 is a diagram illustrating a process flow diagram for
generating a textured three-dimensional model at different LODs,
according to an embodiment.
[0015] FIG. 2 is a diagram illustrating a texture simplifying
system for simplifying a texture of a three-dimensional model,
according to an embodiment.
[0016] FIG. 3 is an illustration of four example textured meshes,
according to an embodiment.
[0017] FIGS. 4A-4D are diagrams illustrating four texture atlases,
according to an embodiment.
[0018] FIG. 5 is a diagram illustrating a simplified output mesh,
according to an embodiment.
[0019] FIG. 6 is a diagram illustrating an initial texture atlas
for a simplified three-dimensional model, according to an
embodiment.
[0020] FIG. 7 is a diagram illustrating color values mapped to a
three-dimensional model, according to an embodiment.
[0021] FIG. 8 is an illustration of a simplified textured
three-dimensional model, according to an embodiment.
[0022] FIG. 9 is an illustration of a flowchart of an example
method for simplifying a texture of a three-dimensional model,
according to an embodiment.
[0023] FIG. 10 is an illustration of an example computer system in
which embodiments may be implemented as computer-readable code.
DETAILED DESCRIPTION
I. Overview
II. Generating a Textured Three-Dimensional Model
[0024] A. Data Extraction
[0025] B. Mesh Creation
[0026] C. Texture and Simplified Models
[0027] D. Texture Simplification
[0028] E. Render Appropriate Version of the Three-Dimensional
Model
III. Example Texture Simplifying System
[0029] A. Model Simplification
[0030] B. Texture Simplification
IV. Example Method
V. Example Computer Embodiment
I. OVERVIEW
[0031] This description generally relates to generating textured
meshes at multiple levels of geometry and texture resolution.
[0032] Generating a lower resolution version of a given textured
three-dimensional model may be useful. For example, a series of
different resolution models may be generated from one source model
so that the resolution appropriate for the current viewpoint is
rendered on the screen. Displaying the lower resolution model is
less computationally expensive than displaying the higher
resolution model. Further, it is useful to reduce not only the
complexity of the three-dimensional geometry of the model but also
the resolution of the image textures. A lower resolution version of
the textured three-dimensional geometry of the model may be more
efficient to render.
[0033] This disclosure describes methods and techniques to generate
the reduced resolution textures. In an embodiment, a first
three-dimensional model is simplified to determine a second
three-dimensional model. The first three-dimensional model has a
higher resolution than the second three-dimensional model.
[0034] A texture atlas is allocated for the second
three-dimensional model. A texture atlas may refer to a mapping of
a color onto a portion of a three-dimensional surface.
[0035] After the texture atlas is allocated for the second
three-dimensional model, the texture atlas for the second
three-dimensional model is filled in. To fill in the texture atlas
for the second three-dimensional model, a location on the second
three-dimensional model corresponding to a pixel in the texture
atlas for the second three-dimensional model may be determined.
After determining the location on the second three-dimensional
model, a location on the first three-dimensional model
corresponding to the determined location on the second
three-dimensional model may be determined, and a color value
texture mapped to the first three-dimensional model at the
determined location on the first three-dimensional model may be
determined. The determined color value may then be set to the pixel
in the texture atlas for the second three-dimensional model.
Respective pixels in the texture atlas for the second
three-dimensional model may be set to a color value texture mapped
to a corresponding location on the first three-dimensional
model.
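For concreteness, the per-texel loop just described can be sketched in a few lines of Python. This is a minimal illustration and not the patented implementation: the three callbacks stand in for the steps of this paragraph and are hypothetical names, and the toy usage at the end fabricates a flat one-to-one correspondence purely so the sketch runs.

    import numpy as np

    def fill_atlas(width, height, texel_to_point, point_to_source, source_color):
        # One pass over every texel of the freshly allocated atlas, applying
        # the four steps described above.
        atlas = np.zeros((height, width, 3), dtype=np.uint8)
        for v in range(height):
            for u in range(width):
                p = texel_to_point(u, v)        # location on the simplified model
                if p is None:
                    continue                    # texel covered by no triangle
                q = point_to_source(p)          # corresponding source location
                if q is None:
                    continue                    # no corresponding source point
                atlas[v, u] = source_color(q)   # copy the source-mapped color
        return atlas

    # Toy usage: identity correspondence and a red ramp as the "source texture".
    out = fill_atlas(
        16, 16,
        texel_to_point=lambda u, v: np.array([u / 15.0, v / 15.0, 0.0]),
        point_to_source=lambda p: p,
        source_color=lambda q: (int(q[0] * 255), 0, 0),
    )
    print(out.shape)  # (16, 16, 3)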
[0036] In an embodiment, the location on the first
three-dimensional model corresponding to the location on the second
three-dimensional model is determined by extending a ray from the
location on the second three-dimensional model to a corresponding
location on the first three-dimensional model and determining an
intersection of the ray and the first three-dimensional model. In
an embodiment, determining a color value texture mapped to the
first three-dimensional model at the determined location on the
first three-dimensional model includes determining a color value
texture mapped to the first three-dimensional model at the
determined intersection.
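The ray/surface test at the heart of this step is a standard ray-triangle intersection; in practice it would be run against the triangles of the first model, typically through a spatial index such as a BVH, which the disclosure does not specify. A minimal sketch using the well-known Moller-Trumbore algorithm, with a triangle and ray made up for the demo:

    import numpy as np

    def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
        # Moller-Trumbore intersection; returns the hit point or None.
        e1, e2 = v1 - v0, v2 - v0
        pvec = np.cross(direction, e2)
        det = np.dot(e1, pvec)
        if abs(det) < eps:                      # ray parallel to the triangle
            return None
        tvec = origin - v0
        u = np.dot(tvec, pvec) / det
        if u < 0.0 or u > 1.0:
            return None
        qvec = np.cross(tvec, e1)
        v = np.dot(direction, qvec) / det
        if v < 0.0 or u + v > 1.0:
            return None
        t = np.dot(e2, qvec) / det
        return origin + t * direction if t > eps else None

    tri = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0])]
    hit = ray_triangle(np.array([0.2, 0.2, 1.0]), np.array([0.0, 0.0, -1.0]), *tri)
    print(hit)  # [0.2 0.2 0. ]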
[0037] In the detailed description that follows, references to "one
embodiment", "an embodiment", "an example embodiment", etc.,
indicate that the embodiment described may include a particular
feature, structure, or characteristic, but every embodiment may not
necessarily include the particular feature, structure, or
characteristic. Moreover, such phrases are not necessarily
referring to the same embodiment. Further, when a particular
feature, structure, or characteristic is described in connection
with an embodiment, it is submitted that it is within the knowledge
of one skilled in the art to effect such feature, structure, or
characteristic in connection with other embodiments whether or not
explicitly described.
II. GENERATING A TEXTURED THREE-DIMENSIONAL MODEL
[0038] FIG. 1 is a diagram illustrating a process flow diagram 100
for generating a textured three-dimensional model at different
LODs, according to an embodiment.
A. Data Extraction
[0039] Diagram 100 includes a digital surface model (DSM)
extraction stage 110 that extracts data from images in image
database 105. The images may be taken of a geographic area and may
be taken from multiple positions and angles from a camera source
located, for example, in an aircraft. As the aircraft flies over
the geographic area, the camera source may take multiple images of
a particular point. Two or more images may overlap and the depths
between the images may be reconstructed.
[0040] DSM extraction stage 110 defines a set of three-dimensional
points. In an example, a three-dimensional geometry is a
three-dimensional model of the Earth. A grid covers an area of
space of the Earth, and for every location (x, y) of the grid a
height is determined. An output from DSM extraction 110 may be a
height field for each location of the grid.
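As a small illustration of that output data structure (not of the extraction algorithm itself), the sketch below bins hypothetical reconstructed (x, y, z) points into a regular grid and keeps one height per cell; the sample points and the keep-the-highest rule are assumptions made for the demo.

    import numpy as np

    # Hypothetical reconstructed points (x, y, z) over the unit square; in the
    # pipeline these would come from matching the overlapping aerial images.
    points = np.array([[0.2, 0.1, 5.0], [0.7, 0.4, 9.5], [0.72, 0.45, 8.0]])

    nx = ny = 4
    height = np.full((ny, nx), np.nan)          # NaN marks unobserved cells
    ix = np.clip((points[:, 0] * nx).astype(int), 0, nx - 1)
    iy = np.clip((points[:, 1] * ny).astype(int), 0, ny - 1)
    for x, y, z in zip(ix, iy, points[:, 2]):
        if np.isnan(height[y, x]) or z > height[y, x]:
            height[y, x] = z                    # keep the highest sample per cell
    print(height)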
[0041] In an example, a bundle adjustment is performed based on
extracting data from the images in image database 105. The bundle
adjustment calibrates cameras and aligns poses. Cameras may capture
a scene at different angles and each angle may capture specific
objects. A surface of a three-dimensional model may be
reconstructed from the direction that the cameras were looking at a
scene. For example, a camera at a 45-degree angle may capture
objects beneath a bridge, and hollow spaces may be reconstructed
separately for each camera. These reconstructed pieces may be
referred to as patches or mesh sections.
B. Mesh Creation
[0042] A mesh creation stage 115 combines the mesh sections into a
single three-dimensional mesh. In an example, an output of mesh
creation stage 115 is a polygonal mesh. A polygonal mesh represents
the geometry of a three-dimensional model and is a
three-dimensional graphics representation for a surface that has no
texture on it. The polygonal mesh may be a consistent untextured
mesh that is purely three-dimensional geometry information (e.g.,
the mesh has no color information).
C. Texture and Simplified Models
[0043] A texture is applied to a surface of the three-dimensional
model to enhance the appearance of the three-dimensional model. A
mesh texturing stage 120 may determine texture pixels for a surface
of a three-dimensional mesh. Mesh texturing stage 120 draws from
images in image database 105. Based on the three-dimensional mesh
and the images, mesh texturing stage 120 determines what each
location on the surface should look like (e.g., the color of that
location).
[0044] The three-dimensional model may be associated with a texture
atlas. A texture atlas may refer to a two-dimensional
representation of a three-dimensional model in which color values
are stored and mapped to a surface of the three-dimensional model.
In an embodiment, the texture atlas refers to space in an image in
which color values are stored and mapped to a surface of the
three-dimensional model. The texture may be defined on a surface of
a manifold. An output of mesh texturing stage 120 may be a mesh
having a texture map that is mapped to a surface of the
three-dimensional model.
[0045] An LOD generation stage 125 generates one or more versions
of a source mesh at different resolutions. A source mesh may refer
to the mesh that is used to generate one or more meshes at
different resolutions. Different LODs may correspond to different
resolutions. For example, first and second simplified LOD versions of
the source mesh may be generated. In this example, each of the
first and second LOD versions has a lower resolution than the
source mesh. Further, the first LOD version of the source mesh may
have a higher resolution than the second LOD version of the source
mesh.
[0046] It may be advantageous to display the coarser version of the
model when the final set of data to be rendered to a screen is dense
and the current viewpoint of the virtual camera makes the coarser
version appropriate.
Accordingly, the appropriate version of the model may be rendered
to a screen without unnecessary processing.
D. Texture Simplification
[0047] A series of different resolution models may be generated
from one source model, so that the resolution appropriate for the
current viewpoint may be rendered on the screen. Further, it may be
useful to reduce not only the complexity of the three-dimensional
geometry of the model but also the resolution of the image
textures. When the user zooms out, it is desirable to render a
coarser version of the geometry and the texture information.
Similarly, when the user zooms in, it is desirable to render a
finer version of the geometry and the texture information.
[0048] It may be undesirable to apply the source texture atlas of
the source mesh to a simplified LOD version of the source mesh
because the simplified LOD version may include merged triangles
that map to disjoint regions of the source texture atlas.
Accordingly, it may be undesirable to use the source texture atlas
and reassign texture coordinates of the vertices for the simplified
model.
[0049] This disclosure describes methods and techniques to generate
a reduced resolution texture for a simplified LOD version of the
source mesh. For example, in an embodiment, a lower resolution
version of a given textured three-dimensional model is generated.
In particular, the texture atlas and texture coordinate assignments
for the lower resolution version may be generated, resulting in an
enhanced user experience.
[0050] A texture simplification stage 130 determines a texture
atlas for simplified LOD versions of the source mesh at different
resolutions. In an example, a texture atlas is allocated for each
simplified LOD version of the source mesh. In an embodiment,
texture simplification stage 130 generates a new texture atlas and
texture coordinate assignments for one or more of the simplified
LOD versions of the source mesh. The texture atlas includes pixels,
and each pixel in the texture atlas is set to a color value that is
associated with both the source and simplified meshes (more details
are below). In this way, a consistent texture may be applied to the
simplified LOD version of the source mesh.
E. Render Appropriate Version of the Three-Dimensional Model
[0051] An ingestion stage 135 ingests one or more generated
versions of the source mesh at different resolutions along with the
texture information, and stores this data in a geographic database
140. After the appropriate version of the source mesh is generated,
the version is put into a data structure that is servable to a
client 145. For example, the data may be streamed to client 145 for
display in a geographic information environment. A user may
manipulate the three-dimensional model and the appropriate
resolution of the textured three-dimensional model may be
displayed.
III. EXAMPLE TEXTURE SIMPLIFYING SYSTEM
[0052] FIG. 2 is a diagram illustrating a texture simplifying
system 200 for simplifying a texture of a three-dimensional model,
according to an embodiment.
A. Model Simplification
[0053] System 200 includes a model simplifying engine 210. Model
simplifying engine 210 simplifies a first three-dimensional model
to determine a second three-dimensional model. The first
three-dimensional model has a higher resolution than the second
three-dimensional model. Similarly, the second three-dimensional
model has a lower resolution than the first three-dimensional
model.
[0054] In an embodiment, a three-dimensional model is simplified by
removing vertices in the three-dimensional model and joining the
edges connected to the removed vertices to other vertices in the
three-dimensional model. Accordingly, the simplified
three-dimensional model may include fewer vertices compared to the
original three-dimensional model.
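A minimal sketch of that vertex-removal step: merging one vertex into a neighbor rewrites the incident faces and drops the triangles that collapse to a line. Production simplifiers also decide which vertex to remove, typically via an error metric such as quadric error, which this sketch deliberately omits; the fan of faces below is made up for the demo.

    def remove_vertex(faces, removed, target):
        # Rewrite every face that references `removed` to reference `target`
        # instead, joining the removed vertex's edges to the surviving vertex,
        # and drop faces that become degenerate.
        out = []
        for f in faces:
            g = tuple(target if i == removed else i for i in f)
            if len(set(g)) == 3:                # keep only real triangles
                out.append(g)
        return out

    # A fan of four triangles around vertex 4; removing 4 into 0 leaves two.
    faces = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
    print(remove_vertex(faces, removed=4, target=0))
    # [(1, 2, 0), (2, 3, 0)]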
[0055] FIG. 3 is an illustration of four example textured meshes
310, 320, 330, and 340, according to an embodiment.
[0056] In an example, textured meshes 310, 320, 330, and 340 are
adjacent meshes that may be joined together into one simplified
output mesh and included in a rendering of a three-dimensional
model. In particular, each of the textured meshes 310, 320, 330,
and 340 may be simplified and merged together to generate a lower
resolution version of the original meshes. Mesh 320 includes a
three-dimensional point 350 (more details below).
[0057] FIGS. 4A-4D are diagrams illustrating four texture atlases,
one for each of the four meshes illustrated in FIG. 3, according to an
embodiment. The texture atlases include smaller sub-images that are
each a texture of a part of the source three-dimensional model.
[0058] FIG. 4A illustrates a texture atlas 410 for mesh 310, FIG.
4B illustrates a texture atlas 420 for mesh 320, FIG. 4C
illustrates a texture atlas 430 for mesh 330, and FIG. 4D
illustrates a texture atlas 440 for mesh 340. Each triangle in a
three-dimensional model is mapped to a particular color in the
texture atlas.
[0059] FIG. 5 is a diagram illustrating a simplified output
mesh 500, according to an embodiment. In an example, model
simplifying engine 210 simplifies a three-dimensional model
including texture meshes 310, 320, 330, and 340 to determine
simplified output mesh 500. Photorealistic colors may be added to a
surface of the simplified three-dimensional mesh to enhance the
realism of the three-dimensional model. More details regarding FIG.
5 are below.
B. Texture Simplification
[0060] Referring back to FIG. 2, texture simplifying system 200
also includes a texture engine 220. Texture engine 220 allocates a
texture atlas for the simplified three-dimensional model.
[0061] FIG. 6 is a diagram illustrating an initial output
texture atlas 600 for a simplified three-dimensional model,
according to an embodiment. The pixels in initial output texture
atlas 600 have not yet been assigned colors. In an example, texture
engine 220 allocates an output texture atlas for the simplified
three-dimensional model. The output texture atlas for the
simplified model may be empty, and as explained in further detail
below texture engine 220 may set a color value to each pixel in the
texture atlas for the simplified three-dimensional model.
[0062] In an embodiment, model simplifying engine 210 simplifies a
first three-dimensional model to determine a second
three-dimensional model. In an embodiment, the first
three-dimensional model is simplified by removing vertices in the
first three-dimensional model and joining the edges connected to
the removed vertices to other vertices in the first
three-dimensional model.
[0063] Texture engine 220 allocates a texture atlas for the
simplified three-dimensional model and fills in the texture atlas
for the simplified three-dimensional model.
[0064] To fill in the texture atlas for the simplified
three-dimensional model, texture engine 220 may determine a
location on the second three-dimensional model corresponding to a
pixel in the texture atlas for the second three-dimensional model,
determine a location on the first three-dimensional model
corresponding to the determined location on the second
three-dimensional model, determine a color value texture mapped to
the first three-dimensional model at the determined location on the
first three-dimensional model, and set the determined color value
to the pixel in the texture atlas for the second three-dimensional
model. Texture engine 220 may perform these actions for each pixel
in the texture atlas for the second three-dimensional model.
[0065] In an embodiment, for respective pixels in the texture atlas
for the second three-dimensional model, texture engine 220
determines a location on the second three-dimensional model
corresponding to the pixel. The location may be a three-dimensional
point on the second three-dimensional model.
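One standard way to perform that texel-to-surface lookup, sketched below, is to solve for barycentric weights in the triangle's texture coordinates and reuse them to interpolate the triangle's 3-D vertex positions. The disclosure does not prescribe this particular construction, and the triangle data here is invented for the demo.

    import numpy as np

    def texel_to_point(uv, tri_uv, tri_xyz):
        # Solve uv = w0*a + w1*b + w2*c with w0 + w1 + w2 = 1, then apply the
        # same weights to the triangle's 3-D vertices.
        a, b, c = tri_uv
        m = np.array([[b[0] - a[0], c[0] - a[0]],
                      [b[1] - a[1], c[1] - a[1]]])
        w1, w2 = np.linalg.solve(m, np.asarray(uv) - a)
        w0 = 1.0 - w1 - w2
        if min(w0, w1, w2) < 0.0:
            return None                         # texel lies outside this triangle
        return w0 * tri_xyz[0] + w1 * tri_xyz[1] + w2 * tri_xyz[2]

    tri_uv = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    tri_xyz = np.array([[0.0, 0.0, 2.0], [4.0, 0.0, 2.0], [0.0, 4.0, 3.0]])
    print(texel_to_point((0.25, 0.25), tri_uv, tri_xyz))  # [1.   1.   2.25]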
[0066] In an example, texture engine 220 determines that
three-dimensional point 510 on simplified output mesh 500
corresponds to pixel 610 in texture atlas 600. Further, each point
along the surface of simplified output mesh 500 may correspond to
an area in texture atlas 600.
[0067] Texture engine 220 may then determine the location on the
first three-dimensional model corresponding to the determined
location on the second three-dimensional model. In an embodiment,
to determine the location on the first three-dimensional model,
texture engine 220 determines a ray extending from the location on
the second three-dimensional model to a corresponding location on
the first three-dimensional model and determines an intersection of
the ray and the first three-dimensional model. In an embodiment,
the location on the first three-dimensional model is not just the
surface, but may be a three-dimensional point on the first
three-dimensional model. It may be advantageous for texture engine
220 to refer back to the first three-dimensional model (e.g.,
higher resolution model) because the second three-dimensional model
includes simplified meshes that lack detail in three-dimensional
space. In this way, texture engine 220 may determine an appropriate
color value for the pixel in the texture atlas for the second
three-dimensional model.
[0068] In an example, texture engine 220 determines a ray extending
from three-dimensional point 510 on simplified output mesh 500 to a
corresponding location on meshes 310, 320, 330, or 340. For
example, the ray may extend from three-dimensional point 510 to
three-dimensional point 350 on mesh 320.
[0069] In an embodiment, the ray is normal to the second
three-dimensional model. For example, to fill in some of the pixels
in the texture atlas, a three-dimensional point on the second
three-dimensional model corresponding to a particular pixel is
computed. The three-dimensional point on the second
three-dimensional model may be projected perpendicular to the mesh
face to the nearest face in the first three-dimensional model.
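The projection direction for that embodiment can be taken from the unit normal of the simplified face containing the point; since the source surface may lie on either side of the simplified one, an implementation would plausibly probe along both +n and -n (an assumption here, not something the disclosure states). A small sketch with a made-up triangle:

    import numpy as np

    def face_normal(v0, v1, v2):
        # Unit normal of a triangle from the cross product of two edges.
        n = np.cross(v1 - v0, v2 - v0)
        return n / np.linalg.norm(n)

    tri = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0])]
    n = face_normal(*tri)
    point = np.array([0.25, 0.25, 0.0])         # point on the simplified face
    rays = [(point, n), (point, -n)]            # probe both sides of the surface
    print(n)                                    # [0. 0. 1.]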
[0070] In another embodiment, the ray is normal to the first
three-dimensional model. In another embodiment, the ray extends to
the first three-dimensional model in a fixed direction (e.g., (0, 0,
1)), i.e., along a fixed straight line.
[0071] After determining a ray extending from the location on the
second three-dimensional model to a corresponding location on the
first three-dimensional model, texture engine 220 determines an
intersection of the ray and the first three-dimensional model. This
may include determining a three-dimensional point on the first
three-dimensional model that intersects with the ray.
[0072] After determining the location on the first
three-dimensional model, texture engine 220 determines a color
value texture mapped to the first three-dimensional model at the
determined location on the first three-dimensional model. In an
embodiment, texture engine 220 determines a color value texture
mapped to the first three-dimensional model at the determined
intersection.
[0073] FIG. 7 is a diagram 700 illustrating color values mapped to
a three-dimensional model, according to an embodiment. The color
values are set to pixels in a texture atlas for a simplified
version of the three-dimensional model. FIG. 7 includes texture
atlases 410, 420, 430, and 440 for meshes 310, 320, 330, and 340,
respectively (see FIGS. 3 and 4), and a texture atlas 750 for the
second three-dimensional model. Texture atlas 750 includes space in
an image in which color values are stored and mapped to a surface
of the second three-dimensional model.
[0074] A color value texture in texture atlas 410 referenced by
arrow 710 is mapped to a location on the first three-dimensional
model. In an example, texture engine 220 determines the color value
texture referenced by arrow 710. Texture engine 220 then sets the
determined color value to the pixel in texture atlas 750 referenced
by arrow 710. For example, the pixel color may be copied from the
point's texture atlas position for the first three-dimensional
model to the pixel in the texture atlas for the second
three-dimensional model.
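Paragraph [0074] describes copying the color found at the source atlas position. When that position falls between texel centers, a common choice (an assumption here, not something the disclosure mandates) is to blend the four surrounding texels bilinearly; the 2x2 atlas below is fabricated for the demo.

    import numpy as np

    def sample_bilinear(atlas, u, v):
        # Read a color at texture coordinate (u, v) in [0, 1]^2, blending the
        # four surrounding texels.
        h, w = atlas.shape[:2]
        x, y = u * (w - 1), v * (h - 1)
        x0, y0 = int(x), int(y)
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = x - x0, y - y0
        top = (1 - fx) * atlas[y0, x0] + fx * atlas[y0, x1]
        bot = (1 - fx) * atlas[y1, x0] + fx * atlas[y1, x1]
        return (1 - fy) * top + fy * bot

    atlas = np.array([[[0, 0, 0], [255, 255, 255]],
                      [[255, 0, 0], [0, 0, 255]]], dtype=float)
    print(sample_bilinear(atlas, 0.5, 0.0))     # [127.5 127.5 127.5]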
[0075] Similarly, texture engine 220 sets the color value
referenced by arrow 720 in texture atlas 420 to the referenced pixel in
texture atlas 750, and texture engine 220 sets the color value
referenced by arrow 730 in texture atlas 440 to the referenced pixel in
texture atlas 750.
[0076] FIG. 8 is an illustration of a simplified textured
three-dimensional model, according to an embodiment. For example,
the simplified textured three-dimensional model may be the second
three-dimensional model determined by model simplifying engine 210.
The source texture atlas for the first three-dimensional model is
different from the texture atlas for the second three-dimensional
model.
[0077] Model simplifying engine 210 and texture engine 220 may be
implemented as software, hardware, firmware, or any combination
thereof.
IV. EXAMPLE METHOD
[0078] FIG. 9 is an illustration of a flowchart 900 of an example
method for simplifying a texture of a three-dimensional model,
according to an embodiment. Method 900 may be used in operation of
system 200 in FIG. 2. Although method 900 is described with respect
to system 200, it is not meant to be limited to system 200.
[0079] At stage 905, a first three-dimensional model is simplified
to determine a second three-dimensional model. The first
three-dimensional model has a higher resolution than the second
three-dimensional model. In an example, model simplifying engine
210 simplifies a first three-dimensional model to determine a
second three-dimensional model, the first three-dimensional model
having a higher resolution than the second three-dimensional
model.
[0080] In an embodiment, the first three-dimensional model is
simplified by removing vertices in the first three-dimensional
model and joining the edges connected to the removed vertices to
other vertices in the first three-dimensional model. The resulting
simplified three-dimensional model may be the second
three-dimensional model. Accordingly, the second three-dimensional
model may include fewer vertices compared to the first
three-dimensional model.
[0081] At stage 910, a texture atlas for the second
three-dimensional model is allocated. In an example, texture engine
220 allocates a texture atlas for the second three-dimensional
model. The first three-dimensional model also has a texture atlas.
The texture atlas for the first three-dimensional model is
different from the texture atlas for the second three-dimensional
model.
[0082] At stage 920, the texture atlas for the second
three-dimensional model is filled in. In an example, texture engine
220 fills in the texture atlas for the second three-dimensional
model. Stage 920 may include stages 930-960.
[0083] At stage 930, a location on the second three-dimensional
model corresponding to a pixel in the texture atlas for the second
three-dimensional model may be determined. In an example, texture
engine 220 determines a location on the second three-dimensional
model corresponding to a pixel in the texture atlas for the second
three-dimensional model. The location on the second
three-dimensional model may be a three-dimensional point on the
second three-dimensional model.
[0084] At stage 940, a location on the first three-dimensional
model corresponding to the location on the second three-dimensional
model determined in stage 930 is determined. In an example, texture
engine 220 determines a location on the first three-dimensional
model corresponding to the determined location on the second
three-dimensional model. The location on the first
three-dimensional model may be a three-dimensional point on the
first three-dimensional model.
[0085] At stage 950, a color value texture mapped to the first
three-dimensional model at the location on the first
three-dimensional model determined in stage 940 is determined. In
an example, texture engine 220 determines a color value texture
mapped to the first three-dimensional model at the determined
location on the first three-dimensional model.
[0086] In an embodiment, the location on the first
three-dimensional model is determined by determining a ray extending from
the location on the second three-dimensional model to a
corresponding location on the first three-dimensional model, and
determining an intersection of the ray and the first
three-dimensional model. The ray may be normal to the second
three-dimensional model, normal to the first three-dimensional
model, or may extend to the first three-dimensional model in a fixed
direction.
[0087] In an embodiment, texture engine 220 determines the color
value texture mapped to the first three-dimensional model at the
determined location on the first three-dimensional model by
determining a color value texture mapped to the first
three-dimensional model at the determined intersection.
[0088] At stage 960, the color value determined in stage 950 is set
to the pixel in the texture atlas for the second three-dimensional
model. In an example, texture engine 220 sets the determined color
value to the pixel in the texture atlas for the second
three-dimensional model.
[0089] Stages 905-960 may be implemented as software, hardware,
firmware, or any combination thereof. Further, operations for the
above-described embodiments may be further described with reference
to one or more logic flows. It may be appreciated that the
representative logic flows do not necessarily have to be executed
in the order presented, or in any particular order, unless
otherwise indicated. Moreover, various activities described with
respect to the logic flows can be executed in serial or parallel
fashion. The logic flows may be implemented using one or more
hardware elements and/or software elements of the described
embodiments or alternative elements as desired for a given set of
design and performance constraints. For example, the logic flows
may be implemented as logic (e.g., computer program instructions)
for execution by a logic device (e.g., a general-purpose or
specific-purpose computer).
V. EXAMPLE COMPUTER EMBODIMENT
[0090] In an embodiment, the system and components of embodiments
described herein are implemented using well known computers. For
example, process flow diagram 100 and texture simplifying system
200 may be implemented using system 1000.
[0091] FIG. 10 is an illustration of an example computer system
1000 in which embodiments may be implemented as computer-readable
code. Hardware, software, or any combination of such may embody any
of the modules and components in FIGS. 1 and 2. Further,
embodiments of model simplifying engine 210 and texture engine 220
can also be implemented as computer-readable code executed on one
or more computing devices capable of carrying out the functionality
described herein.
[0092] If programmable logic is used, such logic may execute on a
commercially available processing platform or a special purpose
device. One of ordinary skill in the art may appreciate that
embodiments of the disclosed subject matter can be practiced with
various computer system configurations, including multi-core
multiprocessor systems, minicomputers, mainframe computers,
computers linked or clustered with distributed functions, as well
as pervasive or miniature computers that may be embedded into
virtually any device.
[0093] For instance, a computing device having at least one
processor device and a memory may be used to implement the
above-described embodiments. A processor device may be a single
processor, a plurality of processors, or combinations thereof.
Processor devices may have one or more processor "cores."
[0094] Various embodiments are described in terms of this example
computer system 1000. After reading this description, it will
become apparent to a person skilled in the relevant art how to
implement embodiments using other computer systems and/or computer
architectures. Although operations may be described as a sequential
process, some of the operations may in fact be performed in
parallel, concurrently, and/or in a distributed environment, and
with program code stored locally or remotely for access by single
or multi-processor machines. In addition, in some embodiments the
order of operations may be rearranged without departing from the
spirit of the disclosed subject matter.
[0095] Processor device 1004 may be a special purpose or a
general-purpose processor device. As will be appreciated by persons
skilled in the relevant art, processor device 1004 may also be a
single processor in a multi-core/multiprocessor system, such system
operating alone, or in a cluster of computing devices operating in
a cluster or server farm. Processor device 1004 is connected to a
communication infrastructure 1006, for example, a bus, message
queue, network, or multi-core message-passing scheme. Computer
system 1000 may also include display interface 1002 and display
unit 1030. Display interface 1002 allows results of the computer
operations to be displayed to a user or an application developer
via display unit 1030.
[0096] Computer system 1000 also includes a main memory 1008, for
example, random access memory (RAM), and may also include a
secondary memory 1010. Secondary memory 1010 may include, for
example, a hard disk drive 1012 and a removable storage drive 1014.
Removable storage drive 1014 may include a floppy disk drive, a
magnetic tape drive, an optical disk drive, a flash memory, or the
like. The removable storage drive 1014 reads from and/or writes to
a removable storage unit 1018 in a well-known manner. Removable
storage unit 1018 may include a floppy disk, magnetic tape, optical
disk, etc. which is read by and written to by removable storage
drive 1014. As will be appreciated by persons skilled in the
relevant art, removable storage unit 1018 includes a computer
usable storage medium having stored therein computer software
and/or data.
[0097] In alternative implementations, secondary memory 1010 may
include other similar means for allowing computer programs or other
instructions to be loaded into computer system 1000. Such means may
include, for example, a removable storage unit 1022 and an
interface 1020. Examples of such means may include a program
cartridge and cartridge interface (such as that found in video game
devices), a removable memory chip (such as an EPROM, or PROM) and
associated socket, and other removable storage units 1022 and
interfaces 1020 which allow software and data to be transferred
from the removable storage unit 1022 to computer system 1000.
[0098] Computer system 1000 may also include a communications
interface 1024. Communications interface 1024 allows software and
data to be transferred between computer system 1000 and external
devices. Communications interface 1024 may include a modem, a
network interface (such as an Ethernet card), a communications
port, a PCMCIA slot and card, or the like. Software and data
transferred via communications interface 1024 may be in the form of
signals, which may be electronic, electromagnetic, optical, or
other signals capable of being received by communications interface
1024. These signals may be provided to communications interface
1024 via a communications path 1026. Communications path 1026
carries signals and may be implemented using wire or cable, fiber
optics, a phone line, a cellular phone link, an RF link or other
communications channels.
[0099] In this document, the terms "computer program medium" and
"computer usable medium" are used to generally refer to media such
as removable storage unit 1018, removable storage unit 1022, and a
hard disk installed in hard disk drive 1012. Computer program
medium and computer usable medium may also refer to memories, such
as main memory 1008 and secondary memory 1010, which may be memory
semiconductors (e.g. DRAMs, etc.).
[0100] Computer programs (also called computer control logic) are
stored in main memory 1008 and/or secondary memory 1010. Computer
programs may also be received via communications interface 1024.
Such computer programs, when executed, enable computer system 1000
to implement embodiments as discussed herein. In particular, the
computer programs, when executed, enable processor device 1004 to
implement the processes, such as the stages in the method
illustrated by flowchart 900 of FIG. 9 discussed above.
Accordingly, such computer programs represent controllers of the
computer system 1000. Where embodiments are implemented using
software, the software may be stored in a computer program product
and loaded into computer system 1000 using removable storage drive
1014, interface 1020, hard disk drive 1012, or communications
interface 1024.
[0101] Embodiments also may be directed to computer program
products including software stored on any computer useable medium.
Such software, when executed in one or more data processing devices,
causes the data processing device(s) to operate as described herein.
Embodiments employ any computer useable or readable medium.
Examples of computer usable media include, but are not limited
to, primary storage devices (e.g., any type of random access
memory), secondary storage devices (e.g., hard drives, floppy
disks, CD-ROMs, ZIP disks, tapes, magnetic storage devices, optical
storage devices, MEMS, nanotechnological storage devices, etc.).
[0102] The Summary and Abstract sections may set forth one or more
but not all exemplary embodiments as contemplated by the
inventor(s), and thus, are not intended to limit the embodiments
and the appended claims in any way.
[0103] The present disclosure has been described above with the aid
of functional building blocks illustrating the implementation of
specified functions and relationships thereof. The boundaries of
these functional building blocks have been arbitrarily defined
herein for the convenience of the description. Alternate boundaries
can be defined so long as the specified functions and relationships
thereof are appropriately performed.
[0104] The foregoing description of the specific embodiments will
so fully reveal the general nature of the embodiments that others
can, by applying knowledge within the skill of the art, readily
modify and/or adapt for various applications such specific
embodiments, without undue experimentation, without departing from
the general concept of the present disclosure. Therefore, such
adaptations and modifications are intended to be within the meaning
and range of equivalents of the disclosed embodiments, based on the
teaching and guidance presented herein. It is to be understood that
the phraseology or terminology herein is for the purpose of
description and not of limitation, such that the terminology or
phraseology of the present specification is to be interpreted by
the skilled artisan in light of the teachings and guidance.
[0105] The breadth and scope of the present disclosure should not
be limited by any of the above-described exemplary embodiments, but
should be defined only in accordance with the following claims and
their equivalents.
* * * * *