U.S. patent application number 13/256695 was published by the patent office on 2012-03-01 as publication number 20120050285 for 3D building generalization for digital map applications. Invention is credited to Oliver Kannenberg.

United States Patent Application: 20120050285
Kind Code: A1
Family ID: 42083933
Inventor: Kannenberg; Oliver
Publication Date: March 1, 2012
3D BUILDING GENERALIZATION FOR DIGITAL MAP APPLICATIONS
Abstract
A digital map application enables display of a large number of
3D buildings or 3D structures to provide enhanced display and
navigation features. The 3D models (116, 216, 316) are composed
from a detailed set of attributes which, when combined, portray a
highly detailed visual rendering of a physical object as it exists
in real life. By selectively suppressing attributes, and in
appropriate cases deriving new attributes from existing data,
the 3D model (116, 216) can be represented at varying lower
levels of detail with reduced processing resources, achieving
a more realistic depiction. The generalization of the 3D
models can be structured as a function of the distance between the
3D model and an imaginary observer datum or other suitable
reference point. In one embodiment, a plurality of contemporaneous
rendering zones (34, 36, 38) are established so that a 3D model
(116, 216, 316) is displayed with a particular combination or set
of attributes depending upon which rendering zone (34, 36, 38) it is
in.
Inventors: Kannenberg; Oliver (Wennigsen, DE)
Family ID: 42083933
Appl. No.: 13/256695
Filed: March 15, 2010
PCT Filed: March 15, 2010
PCT No.: PCT/EP2010/053292
371 Date: November 21, 2011
Related U.S. Patent Documents

Application Number: 61202585
Filing Date: Mar 16, 2009
Current U.S. Class: 345/419
Current CPC Class: G09B 29/10 (20130101); G01C 21/206 (20130101); G01C 21/32 (20130101); G09B 29/003 (20130101)
Class at Publication: 345/419
International Class: G06T 15/00 (20110101)
Claims
1. A method for rendering three-dimensional (3D) objects on a
display screen (12) for digital mapping applications, said method
comprising the steps of: providing a digital map having at least
one 3D model corresponding to a physical object in reality;
providing a navigation device (10) having a display screen (12) and
being configured to display rendered images of the 3D model;
associating a plurality of attributes with the 3D model which, when
combined, portray on the display screen (12) a detailed visual
rendering of the physical object as it exists in real life;
establishing a reference point in the digital map; and selectively
displaying the 3D model in the display screen (12) with different
attributes depending upon the distance between the 3D model and the
reference point.
2. A method for rendering three-dimensional (3D) objects on a
display screen (12) for digital mapping applications, said method
comprising the steps of: providing a digital map having at least
first and second 3D models corresponding to two different physical
objects in reality; associating a detailed set of attributes with
the first 3D model which, when combined, portray a detailed visual
rendering of the corresponding physical object as it exists in real
life; associating a detailed set of attributes with the second 3D
model which, when combined, portray a detailed visual rendering of
the corresponding physical object as it exists in real life;
providing a navigation system having a display screen (12) capable
of presenting a portion of the digital map including the first and
second 3D models; establishing a reference point in the digital
map, the first 3D model being spatially closer to the reference
point than the second 3D model; and displaying the first 3D model
with substantially all of its detailed set of attributes while
simultaneously displaying the second 3D model with a modified set
of attributes generalized from its detailed set of attributes.
3. A method for rendering three-dimensional (3D) objects on a
display screen (12) for digital mapping applications, said method
comprising the steps of: providing a navigation system having a
display screen (12); providing a digital map having at least one 3D
model corresponding to a physical object in reality; associating a
plurality of attributes with the 3D model which, when combined,
portray on the display screen (12) a detailed visual rendering of
the physical object as it exists in real life; establishing a
plurality of contemporaneous rendering zones in the digital map as
viewed in the display screen (12), the contemporaneous rendering
zones including a proximal rendering zone (34, 34') and a distal
rendering zone (38, 38'); and selectively displaying the 3D model
in the display screen (12) with different attributes depending upon
which rendering zone the 3D model is in, wherein the 3D model is
displayed with the most detailed attributes when located in the
proximal rendering zone (34, 34') and with generalized attributes
when located in the distal rendering zone (38, 38').
4. The method according to claim 3, including establishing an
intermediate rendering zone spaced between the proximal and distal
rendering zones, and when the 3D model is in the intermediate
rendering zone displaying the 3D model on the display screen (12)
with more generalized attributes than when located in the proximal
rendering zone and less generalized attributes than when located in
the distal rendering zone.
5. The method according to claim 3, further including establishing
a reference point in relation to the digital map, the proximal
rendering zone disposed adjacent the reference point and the distal
rendering zone spaced farthest from the reference point.
6. The method according to claim 3, wherein said step of
establishing a plurality of rendering zones includes arranging the
rendering zones generally parallel to a road centerline.
7. The method according to claim 1, further including the step of
rendering at least two adjacent 3D models as a single building
group at a lower level of detail setting.
8. The method according to claim 1, wherein the attributes of the
3D model include object shape, roof shape, average color, and
facade detail.
9. The method according to claim 1, further including the step of
deriving an average facade color attribute from a facade texture
attribute.
10. The method according to claim 1, further including the step of
deriving an average roof color attribute from a roof texture
attribute.
11. The method according to claim 1 further including the step of
moving the 3D model relative to the reference point in the screen,
and changing the attributes displayed with the 3D model if the 3D
model moves to a different rendering zone.
12. The method according to claim 1, wherein said step of
establishing a reference point includes designating a centerline of
a road segment as the reference point.
13. A navigation-capable device (10) configured to display a
driving route on the display screen (12) according to claim 1.
14. A storage medium used to store 3D model attributes for
augmenting a digital map according to claim 1.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Patent
Application No. 61/202,585 filed Mar. 16, 2009, the entire
disclosure of which is hereby incorporated by reference and relied
upon.
STATEMENT OF COPYRIGHTED MATERIAL
[0002] A portion of the disclosure of this patent document contains
material which is subject to copyright protection. The copyright
owner has no objection to the facsimile reproduction by anyone of
the patent document or the patent disclosure, as it appears in the
official patent file or records, but otherwise reserves all
copyright rights whatsoever.
BACKGROUND OF THE INVENTION
[0003] 1. Field of the Invention
[0004] This invention relates to digital maps, and more
particularly toward a method for rendering three-dimensional models
of real-life physical objects in a digital map.
[0005] 2. Related Art
[0006] Personal navigation devices and/or map reading devices 10,
like that shown for example in FIG. 1, are configured to utilize
digital maps for information, route planning and navigation
purposes. The navigation system 10 includes a display screen 12 or
suitable graphic user interface that portrays a network of road
segments 14. For the sake of clarity, it is to be understood that
the term "road" or "street" refers in the broadest sense to any
geometry that supports transportation--including motorized vehicle
traffic, bicycle traffic, pedestrian traffic, and the like. When
configured as a portable device with position determining
capabilities, the navigation device 10 can be correlated to the
digital map and displayed on or referenced in the images portrayed
on the display screen 12. Examples of such devices include in-car
navigation systems, hand-held navigation systems, some PDAs, cell
phones, and the like. Alternatively, the navigation device 10 may
not be configured with position determining capabilities, such as
is the case with many personal computers, some PDAs and other more
basic computing devices.
[0007] The navigation device 10 shown in FIG. 1 displays a bird's
eye mosaic on the left-hand side of the screen 12 and a 3D
(three-dimensional) simulation or rendering on the right-hand side
of the screen 12. Many navigation devices 10 include or execute
software programs which enable both bird's eye mosaic and 3D
renderings on the display screen 12--either simultaneously as shown
in FIG. 1, or alternatively. Interest in 3D rendering in connection
with navigation devices 10 is growing in the marketplace. Because a
navigation device display screen 12 has a limited amount of space,
however, it is most efficient to construct the underlying software
programs and functionality so that large amounts of data are loaded
only when there are enough pixels to display the data adequately.
This relates to a concept known generally as Level of Detail (LoD).
LoD is a mechanism which allows systems developers to specify a
data set with lower resolution to be substituted for full
resolution data in appropriate circumstances. The lower resolution
data set loads faster and occupies a smaller portion of the display
screen. Regions may be used in connection with LoDs for this
purpose. In a Region, it is possible to specify an area of the
screen which an object must occupy in order to be visible. Once the
projected size of the region goes outside of these limits, it is no
longer visible and the Region becomes inactive. See, for example,
FIG. 2 which specifies three Regions (1-3) which are effectively
nested within a digital map. Typically, the larger Region 1 is
associated with a coarse or low resolution. The smaller, inside
Regions 2 and 3 are associated with increasingly finer Levels of
Detail. Each Region, therefore, has a set LoD that specifies the
projected screen size of the region in pixels that are required for
the associated region to be active. Thus, as the user's viewpoint
moves closer, Regions with finer LoD become active because the
Region takes up more screen space. Regions with finer LoD replace
the previously loaded Regions with coarser LoDs. As successive
nested Regions become active, they can either accumulate data
associated with each preceding Region, or replace entirely the data
of the previously loaded Region.
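As a rough illustration of this Region mechanism, the following sketch activates the finest nested Region whose projected screen size falls within its LoD limits. The class, function names, and pixel limits are hypothetical assumptions for illustration only, not drawn from the patent:

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    min_pixels: float  # projected size at which the Region becomes active
    max_pixels: float  # projected size beyond which it goes inactive

    def is_active(self, projected_pixels):
        return self.min_pixels <= projected_pixels < self.max_pixels

# Three nested Regions with increasingly finer LoD, as in FIG. 2.
regions = [
    Region("Region 1 (coarse)", 16, 128),
    Region("Region 2 (medium)", 128, 512),
    Region("Region 3 (fine)", 512, float("inf")),
]

def active_region(projected_pixels):
    """Return the finest Region whose pixel limits are satisfied, if any."""
    for region in reversed(regions):
        if region.is_active(projected_pixels):
            return region.name
    return None  # object occupies too little screen space to be visible

print(active_region(600))  # → Region 3 (fine)
print(active_region(200))  # → Region 2 (medium)
```

As the viewpoint moves closer and the projected size grows, a finer Region replaces the coarser one, mirroring the accumulate-or-replace behavior described above.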
[0008] Although the use of Regions with an associated LoD provides
benefits in terms of efficient utilization of computer processing
power, it still has many drawbacks, particularly in the field of
navigation and 3D model renderings, where it is important that the
renderings simulate live, fluid motion rather than abrupt
transitions and snapshots. For example, FIG. 3 illustrates, in
exemplary form, a 3D view of a city area rendered at a particular
LoD. This particular LoD includes not only building size and shape,
but also roof colors, pediment details, and facade details such as
windows, doors and other exterior features. For purposes of
navigation, an image like this provides more detail than is needed
and does not justify the processing resources and time required to
generate the view. In other words, the average person utilizing a
navigation device 10 is interested primarily in traveling to a
particular destination, and particularly how to get there. A highly
detailed city view like that shown in FIG. 3, while visually
interesting, presents substantially more information about far-away
buildings than is required to effectively assist the person in
reaching their destination. This is also quite unrealistic; in real
life visual details diminish with distance.
[0009] Therefore, it is to be understood that when a personal
navigation device 10 is required to display a large number of 3D
buildings or 3D structures, the available memory or power of the
enabling computer presents a formidable technical limitation and/or
cost factor. A typical prior art approach addresses this issue in
the software that is responsible for presenting the display image.
As described earlier, the approach is to load only the area that is
currently in the view port of the display screen 12, and to use
multiple Level of Detail resolutions. These
techniques, however, either fail to provide an optimal Level of
Detail for navigation purposes or provide too much detail such that
performance is wasted processing large amounts of unnecessary data
that can slow or even overload the memory of the personal
navigation device 10. They also result in a non-realistic
presentation of distant objects rendered with the same level of
detail as near objects.
[0010] Therefore, there is a need for an improved method for
generating 3D model images of physical objects, such as buildings
and points of interest (POI), and presenting such 3D models in a
digital map application in an efficient, optimal, and realistic
manner.
SUMMARY OF THE INVENTION
[0011] This invention relates to methods and techniques for
rendering three-dimensional (3D) objects on a display screen with
varying degrees of detail for digital mapping applications. A
digital map is provided having at least one 3D model corresponding
to a physical object in reality. A plurality of attributes are
associated with the 3D model which, when combined, portray on the
display screen a detailed visual rendering of the physical object
as it exists in real life. According to one embodiment of the
invention, a plurality of contemporaneous rendering zones are
established in the digital map as viewed in the display screen 12
of a navigation device 10. These contemporaneous rendering zones
include at least a proximal rendering zone and a distal rendering
zone. The 3D model is selectively displayed in the display screen
with varying levels of attributes depending upon which rendering
zone the 3D model is in. When the 3D model is displayed in the
proximal rendering zone, all or most of its attributes are used in
the rendering thereby creating a detailed, lifelike image of the
physical object in reality. However, when the 3D model is located
in the distal rendering zone, it is portrayed with a minimal number
of its attributes, which requires fewer processing resources. The
attributes can either be stored directly or derived from stored
attributes.
[0012] In one embodiment, the invention is distinguished from prior
art techniques by enabling the addition or removal of attributes,
rather than changes in pixel resolution, to determine the Level of
Detail (LoD) at which a particular 3D model is rendered on the
display screen. The subject method is particularly well adapted for
use in guiding a traveler along a predetermined route in a digital
map. The display screen of the navigation device will show 3D
models with varying levels of attributes depending upon their
distance away from the observer datum or other suitable reference
point.
[0013] The invention also contemplates a navigation device
configured to display a generalized, i.e., simplified, 3D model on
its display screen in which the attributes used for the rendering
are derived from existing attribute data and then attached to the
3D model. The method of generalizing 3D model attributes is
beneficial for display purposes, and also advantageous for data
storage/processing purposes.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] These and other features and advantages of the present
invention will become more readily appreciated when considered in
connection with the following detailed description and appended
drawings, wherein:
[0015] FIG. 1 is an exemplary view of a portable navigation device
according to one embodiment of this invention including a display
screen for presenting map data information;
[0016] FIG. 2 is a schematic illustration of the prior art
technique of Regions which are nested and associated with different
levels of detail (LoD) which become active depending upon the zoom
level of the observer;
[0017] FIG. 3 represents the view port or display screen of a prior
art navigation device 10 on which a 3D rendering of buildings in a
city appear at the same Level of Detail (LoD);
[0018] FIG. 4 is an exemplary view of a 3D model rendered in
connection with the digital map at a lowest LoD according to this
invention;
[0019] FIG. 5 is a view as in FIG. 4 showing the same 3D model
rendered in a second or intermediate LoD;
[0020] FIG. 6 is a view of the same 3D model rendered in a third or
full LoD;
[0021] FIG. 7 is a depiction of a building facade as used in a 3D
model showing the manner in which individual tiles may be arranged
to portray a high LoD rendering;
[0022] FIG. 8A depicts a cluster of 3D models rendered with
attributes to provide the highest level of detail (LoD-3) and which
include facades, pediment and roof textures, etc.;
[0023] FIG. 8B is a view as in FIG. 8A showing the same building
objects rendered as 3D models in LoD-2 using another set of
attributes, e.g., average derived colors for the facades and/or
pediments generalized from the texture attributes of LoD-3;
[0024] FIG. 8C is a view showing the building objects of FIG. 8B
rendered as 3D models at LoD-1 based on a set of attributes
generalized from LoD-2 data or other generalized rules, and which
in this case result in a noticeable change in the model shapes and
footprints;
[0025] FIG. 9 is a view of a personal navigation device according
to this invention having a display screen on which three
contemporaneous rendering zones (LoD-1, LoD-2, LoD-3) are
provided;
[0026] FIGS. 10A-C represent a sequence of images as may be
portrayed on the display screen of the device shown in FIG. 9,
wherein 3D models are displayed in different LoDs and wherein the
LoD for a particular 3D model will change as the model moves to a
different rendering zone in the display screen;
[0027] FIG. 11A is a view of a display screen for a personal
navigation device according to an alternative embodiment of this
invention wherein contemporaneous rendering zones extend generally
parallel to a road centerline; and
[0028] FIG. 11B is a view as in FIG. 11A showing the depiction of
3D objects on the display screen located in different rendering
zones and thus rendered with different LoDs.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0029] Referring to the figures, wherein like numerals indicate
like or corresponding parts throughout the several views, this
invention pertains to digital maps as used by navigation systems,
as well as other map applications which may include those viewable
through internet enabled computers, PDAs, cellular phones, and the
like. 3D models can be rendered from numerous individual attributes
which, when combined together, result in a highly detailed,
realistic visual depiction of the physical object to which they
correspond. However, these same 3D models can be rendered from
fewer or different attributes which result in a less detailed
visual depiction of the physical object, as compared with the
full-attribute rendering. And still further, the same 3D models can
be rendered with a minimum number or selection of attributes which
result in a very basic, coarse visual depiction of the physical
object. Generally, a 3D model rendered with fewer attributes
requires less computing resources than one rendered with more
attributes.
[0030] Attributes are features well known to those skilled in
digital map fields for other (non-3D model) applications. When
applied to 3D models, the attributes may for example include meta
information pertaining to object position (x, y, z), object shape,
pediment shape, roof detail, and facade detail. Other attributes
are certainly possible. Some attributes can even be derived from
given attributes. An average color, for example, can be derived
from a textured image of the facade (by analyzing single or some
characteristic pixels of the image) or composed from single roof
elements having different colors.
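The color derivation just described can be sketched as follows; this is an illustrative example only, and the function name and the tiny texture are assumptions, not data from the patent:

```python
def average_color(pixels):
    """Derive an average (R, G, B) color attribute from texture pixels."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) // n
    g = sum(p[1] for p in pixels) // n
    b = sum(p[2] for p in pixels) // n
    return (r, g, b)

# A tiny 2x2 "facade texture": two brick-red and two window-grey pixels.
facade = [(180, 60, 40), (180, 60, 40), (90, 90, 100), (90, 90, 100)]
print(average_color(facade))  # → (135, 75, 70)
```

The single derived color can then be attached to the 3D model as a new attribute for lower-LoD rendering, in place of the full texture.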
[0031] FIG. 4 depicts a simplified example of a 3D model 116 as may
be represented in a digital map which, in this particular case,
corresponds to a multilevel building structure in real life. The
model 116 is shown here in its lowest level of detail, LoD-1,
wherein the only attribute represented is its block shape and
position. In other words, the model 116 shows the three-dimensional
exterior structure and elevation without any other attributes
relating to details or colorings. As will be described
subsequently, it may be desirable to group nearby or clustered
objects for the purpose of rendering them as a common model 116.
The same building structure is shown in FIG. 5 at a second level of
detail, LoD-2, and this 3D model is generally indicated at 216. At
LoD-2, the rendered 3D model 216 includes more attribute content,
such as average (i.e., derived) roof color, pediment and average
wall color attributes. LoD-2 may include the block and position
data attributes of the 3D model 116, or may use different
attributes and/or derivations to achieve placement and footprint
details. Thus, the 3D model 216 at LoD-2 more closely approximates
the physical object as it exists in reality, as compared to the 3D
model 116 shown in FIG. 4. Therefore, it follows naturally that the
processing resources needed to render the 3D model 216 are greater
than those required to render the 3D model 116.
[0032] FIG. 6 is a view of the same building structure but this
time the model is shown in LoD-3 and generally indicated at 316. In
this LoD-3, roof texture, pediment texture and facade attributes
have been included with the rendering of the building to provide a
visually accurate, life-like image of the physical object as it
exists in real life. It should be reiterated that not all of the
attributes identified above need necessarily be stored as existing
or pre-processed data in the memory of the navigation device 10.
Rather, some of the attributes may be derived or calculated from
other attributes either on-the-fly or as pre-processed data which
are later read and put on the display 12. For example, the average
color attributes (e.g., roof and facade) which, in this example,
are associated with the model 216 of LoD-2 (FIG. 5), may be
calculated from the respective coloring attributes associated with
textures of LoD-3. Of course, many other attributes in addition to
those described above may be included depending upon the
circumstances.
[0033] Consistent with known teachings, an attribute (like the
facade texture for example) may be composed of numerous assembled
components for purpose of data compression. Thus, in the example of
FIG. 7, facade tiles are shown to represent separate library
elements which, in the example shown here, include six discrete
library components 18-28. Of course, other rendering techniques and
attribute types may be applicable, which attributes can be
selectively added or removed (i.e., generalized) from the model
rendering in response to the distance of the 3D model from the
observer datum or other suitable reference point on the display
screen 12.
[0034] The concepts of this invention enable the selective
generalization of attributes used to render 3D models 116, 216, 316
of physical objects. These data can either be pre-processed and
stored, so that an application may simply "read" the pre-processed
data and put it on the display screen 12, or the original data can
be read, the additional attributes calculated, and the result
displayed without the results being stored. Naturally,
several forms in between are possible as well, so that one may
pre-process some of the data and calculate the remaining attributes
on-the-fly. This, of course, depends on the application and
hardware preferences, memory and storage availability, CPU power,
time to calculate on-the-fly, and so forth.
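One way to sketch these "forms in between" is an attribute store that serves pre-processed values directly and computes derived attributes on-the-fly, memoizing the result. All class and attribute names here are hypothetical illustrations, not part of the patent:

```python
class AttributeStore:
    """Serve stored attributes directly; derive the rest on demand."""

    def __init__(self, stored, derivations):
        self.stored = stored            # pre-processed attribute data
        self.derivations = derivations  # name -> function of stored data
        self._cache = {}                # results of on-the-fly derivations

    def get(self, name):
        if name in self.stored:
            return self.stored[name]
        if name not in self._cache:     # compute once, then reuse
            self._cache[name] = self.derivations[name](self.stored)
        return self._cache[name]

store = AttributeStore(
    stored={"facade_texture": [(200, 0, 0), (0, 0, 200)]},
    derivations={
        "average_facade_color": lambda s: tuple(
            sum(p[i] for p in s["facade_texture"]) // len(s["facade_texture"])
            for i in range(3)
        )
    },
)
print(store.get("average_facade_color"))  # → (100, 0, 100)
```

Moving entries between `stored` and `derivations` trades storage for CPU time, reflecting the application and hardware preferences mentioned above.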
[0035] An appropriate storage medium is provided to store the 3D
model attributes and data needed for augmenting a digital map
according to these principles. Such data can be converted in
different formats for the display such as, for example, in KMZ/KML
files. Accordingly, maps with three-dimensional information about
the buildings and structures can be delivered in different formats
(shape, database, files, etc.) and then accessed by an application
and further processed.
[0036] Based on a set of textured buildings 316 acquired from the
full use of attributes, various actions can be executed on a single
building so that one gathers additional information that can be
added as new attributes or features and used to render lower
resolution 3D images. These might include, for example, computing the
representative color of the building based on LoD-3 digital texture
images (e.g., facade, pediment, eaves, basements, etc.) for LoD-2
presentations, or computing the representative building height from
geometric details of the building element (e.g., building body,
building roof, etc.) for LoD-1 views.
[0037] FIG. 8A shows a cluster of 3D models of building objects 316
placed within the context of a digital map. In this view, all of
the 3D models are rendered with the highest level of detail (LoD-3)
using all or most of the available attribute data, or using the
stored attribute data containing the most details of facades and
textures. FIG. 8B shows these same building objects rendered as 3D
models 216 using fewer attributes and/or using another set of
attributes, e.g., average derived colors for the facades and/or
pediments, generalized from the texture attributes of LoD-3. In
this case, the level of detail is LoD-2. FIG. 8C shows these same
building objects again rendered as 3D models 116 at LoD-1. LoD-1
provides 3D model renderings using a relatively few or minimal
number of attribute data. The building objects rendered as 3D
models at LoD-1 are based in whole or in part on a set of
attributes generalized from LoD-2 (or LoD-3) data so as to provide
only a very rough approximation of the objects in real life. These
LoD-1 renderings can appear so generalized as to result in
noticeable changes in the model shapes and footprints, for
example.
[0038] Based on certain defined rules which will be apparent to
those of skill in the art (functional, physical, geographical,
etc.), single buildings can be composed into building groups at the
lower level of detail settings. This is shown, for example, in
FIG. 8C, where multiple building objects have been clustered
together and rendered as a unified block 116. For the building
groups 116, additional information can be gathered based on the
data that was collected on single buildings as described above.
These might include the average building group color, the average
building group height, the average roof color, the composed
building group footprint, etc. Based on the additional calculated
structures or additional attributes, a process is able to generate
a new data structure and it is possible to set up different levels
of detail (LoD) based on the requirements. This data structure can
be stored in a database or in other formats for persistent storage.
As a result, complex 3D models can be accessed by a digital map
application and converted to an output format in several levels of
generalization. One possible scenario is storing the processed data
in an Oracle database, accessing the data through a web service,
and producing multiple KMZ files as output that can be visualized
in Google Earth.
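The gathering of group-level attributes from single-building data might be sketched as follows. This is a simplified illustration with assumed attribute names; the composed footprint in particular is kept trivially simple:

```python
def compose_group(buildings):
    """Derive building-group attributes from single-building attributes."""
    n = len(buildings)
    return {
        "group_height": sum(b["height"] for b in buildings) / n,
        "group_color": tuple(
            sum(b["avg_color"][i] for b in buildings) // n for i in range(3)
        ),
        # composed group footprint, kept trivially simple as a point list
        "group_footprint": [pt for b in buildings for pt in b["footprint"]],
    }

block = compose_group([
    {"height": 12.0, "avg_color": (120, 100, 90), "footprint": [(0, 0), (10, 0)]},
    {"height": 18.0, "avg_color": (140, 120, 110), "footprint": [(10, 0), (20, 0)]},
])
print(block["group_height"])  # → 15.0
print(block["group_color"])   # → (130, 110, 100)
```

The resulting dictionary stands in for the "new data structure" mentioned above, which could then be persisted in a database for the different LoD settings.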
[0039] FIG. 9 shows an exemplary navigation system 10 having a
display screen 12 according to one embodiment of this invention.
The display screen 12 is arranged in this example to depict a
digital map in three-dimensional form so that both the road
segments and any physical objects in reality are rendered using 3D
modeling techniques. As shown here, a horizon line 30 coincides
with a vanishing point VP. Construction lines 32 converge on the
vanishing point VP for purposes of illustration only. According to
the principles of this invention, a plurality of contemporaneous
rendering zones are established in the digital map as viewed in the
display screen 12. These contemporaneous rendering zones include a
proximal rendering zone 34, an intermediate rendering zone 36, and
a distal rendering zone 38. These rendering zones 34-38 may be
associated with distance from an imaginary observer which, in this
particular example, is presumed to be in line with the bottom edge
of the display screen 12. The lower edge (0 m mark) of the display
screen 12 thus functions as a reference point from which the
rendering zones are gauged. As shown by the markings at left, the
proximal rendering zone 34 may span a distance (relative to the
observer) from 0 to 1,000 meters; the intermediate rendering zone
36 from 1,000 to 3,000 meters; and the distal rendering zone 38
from 3,000 to 15,000 meters.
[0040] Of course, these spans are offered here for exemplary
purposes only and may be adjusted to suit the particular
application. Furthermore, it is not essential that an intermediate
rendering zone 36 be used, as adequate functionality may be
achieved with only proximal 34 and distal 38 rendering zones.
Similarly, more than one intermediate rendering zone 36 may be
included so that four or more rendering zones are active, each
rendering models with varying Levels of Detail and attribute data.
Different rendering zone geometries can be established, and the
rendering zone boundaries can be dynamic rather than fixed.
[0041] 3D models 116, 216, 316 that appear in the display screen 12
will be selectively rendered with varying levels of attributes
(i.e., different LoDs) depending upon which rendering zone the 3D
model is in. 3D models 316 displayed in the proximal rendering zone
34 will be displayed with the most attributes, corresponding to
LoD-3 in the example of FIG. 6. 3D models 216 appearing in the
intermediate rendering zone 36 will be presented with fewer
attributes, i.e., at LoD-2. 3D models 116 located in the distal
rendering zone 38 will be presented or rendered with the fewest
attributes, corresponding generally to LoD-1 as
shown in FIG. 4. For 3D models that extend across two or more
rendering zones, rules can be established to determine which zone
will have priority.
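Using the exemplary spans of FIG. 9, the zone-to-LoD assignment described above can be sketched as a simple distance-threshold function. This is a minimal illustration under the stated example boundaries, not the patent's implementation:

```python
def rendering_zone(distance_m):
    """Map distance from the observer reference point to a LoD label."""
    if distance_m < 1_000:
        return "LoD-3"       # proximal zone 34: most attributes
    if distance_m < 3_000:
        return "LoD-2"       # intermediate zone 36: generalized attributes
    if distance_m < 15_000:
        return "LoD-1"       # distal zone 38: minimal attributes
    return "not rendered"    # beyond the distal zone

print(rendering_zone(500))    # → LoD-3
print(rendering_zone(2_500))  # → LoD-2
print(rendering_zone(9_000))  # → LoD-1
```

As a model's distance value crosses a boundary, the returned LoD changes, and attributes are added or subtracted accordingly.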
[0042] FIGS. 10A-C represent a sequence in which the navigation
device 10 is transported relative to a road 14 in reality so that
the 3D models 116, 216, 316 move relative to the display screen 12.
As these models move from one rendering zone to the next,
attributes are either added or subtracted so that the renderings
change between LoD-1, -2 and -3. Thus, a 3D model 216 appearing in
FIG. 10A in the intermediate rendering zone 36 passes into the
proximal rendering zone 34 when the navigation system 10 is
transported forwardly a sufficient distance. As the 3D model enters
the proximal rendering zone 34, additional attributes are used in
its rendering so that the 3D model becomes more realistic in its
appearance. Accordingly, the data processing resources of the
navigation system 10 are not burdened to fully process and render
models 116, 216 outside of the proximal rendering zone 34. And
models 116 located in the distal rendering zone 38 receive the
lowest processing attention and as such are rendered with the
lowest level of detail (LoD-1).
[0043] In the example of FIGS. 9 and 10A-C, a reference point is
established in relation to the digital map at the lower edge (0 m)
of the display screen 12. The proximal rendering zone 34 is
disposed directly adjacent this reference point and the distal
rendering zone 38 is spaced farthest from this reference point.
FIGS. 11A and 11B depict an alternative example wherein the
reference point is selected as a road center line 40. In this
example, where prime designations are added for convenience, the
proximal rendering zone 34' comprises those stretches of real
estate flanking the center line 40 approximately 20 meters to each
side. Specific measurements are provided for exemplary purposes
only. Likewise, the intermediate rendering zones 36' are also
arranged parallel to the road center line 40, outlying the proximal
rendering zones 34'. In this example, the intermediate rendering
zones 36' comprise 30 meter bands along the outer edges of the
proximal rendering zone 34'. The distal rendering zone 38'
comprises those portions of the digital map which are visible in
the display screen 12 and lie outside of the proximal 34' and
intermediate 36' rendering zones. In other words, the distal
rendering zones 38' comprise those spaces lying laterally outwardly
from the intermediate rendering zones 36'.
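Using the exemplary widths above (a 20-meter proximal band each side of the center line 40, with 30-meter intermediate bands outside it), the centerline-based classification can be sketched as follows. The function name and return labels are illustrative assumptions:

```python
def zone_from_centerline(lateral_offset_m):
    """Classify a model by its lateral distance from the road centerline."""
    d = abs(lateral_offset_m)          # bands flank both sides of the road
    if d <= 20:
        return "proximal (34', LoD-3)"
    if d <= 50:                        # 20 m proximal + 30 m intermediate band
        return "intermediate (36', LoD-2)"
    return "distal (38', LoD-1)"

print(zone_from_centerline(-15))  # → proximal (34', LoD-3)
print(zone_from_centerline(35))   # → intermediate (36', LoD-2)
print(zone_from_centerline(120))  # → distal (38', LoD-1)
```

In a real system the lateral offset would come from projecting the model's position onto the route geometry; that projection step is omitted here.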
[0044] As in the preceding examples, all 3D models 316 located in
the proximal rendering zone 34' will be rendered with the highest
level of detail, LoD-3. 3D models 216 residing in the intermediate
rendering zones 36' will be rendered with an intermediate level of
detail, LoD-2, like that shown in FIG. 5. 3D models 116 residing in
the distal rendering zone 38' will be rendered with the lowest
level of detail, LoD-1. When a 3D model crosses a rendering zone
boundary, e.g., when the navigation device 10 is in motion and 3D
models change position on the screen 12, the 3D model is re-rendered
with a different number, combination and/or derivation of
attributes based on its new rendering zone.
[0045] FIG. 11B provides an example where 3D models 116, 216 and
316 are located in their respective rendering zones 38', 36', 34'.
Directional arrow 42 represents navigation instructions provided by
the system 10. 3D models 316 residing directly along the routed
path 42 are the only models rendered in the highest level of
detail, LoD-3, because they are presumed to provide the highest
degree of navigational assistance and to more closely approximate
a realistic viewing experience. Buildings and other physical objects
spaced farther (laterally) from the road center line 40 are
rendered in progressively less detail (LoD-2 or LoD-1) because
they are less significant or less useful for navigational purposes,
and to more closely approximate a realistic viewing experience.
[0046] The foregoing invention has been described in accordance
with the relevant legal standards, thus the description is
exemplary rather than limiting in nature. Variations and
modifications to the disclosed embodiment may become apparent to
those skilled in the art and fall within the scope of the
invention.
* * * * *