U.S. patent application number 14/168,149 was filed with the patent office on 2014-01-30 and published on 2014-10-30 as application publication number 20140320485 for a system for generating geocoded three-dimensional (3D) models.
This patent application is currently assigned to HOVER, INC. The applicant listed for this patent is HOVER, INC. Invention is credited to Marc L. Herbert, Neophytos Neophytou, and James L. Pittel.

United States Patent Application 20140320485
Kind Code: A1
Neophytou, Neophytos; et al.
Published: October 30, 2014
SYSTEM FOR GENERATING GEOCODED THREE-DIMENSIONAL (3D) MODELS
Abstract
Embodiments of the invention relate to the visualization of
geographical information and the combination of image information
to generate geographical information. Specifically, embodiments of
the invention relate to a process and system for correlating
oblique images data and terrain data without extrinsic information
about the oblique imagery. Embodiments include a visualization tool
to allow simultaneous and coordinated viewing of the correlated
imagery. The visualization tool may also provide distance
measuring, three-dimensional lens, structure identification, path
finding, visibility and similar tools to allow a user to determine
distance between imaged objects.
Inventors: Neophytou, Neophytos (Glen Cove, NY); Herbert, Marc L. (Wantagh, NY); Pittel, James L. (Glen Head, NY)

Applicant: HOVER, INC., Los Altos, CA, US

Assignee: HOVER, INC., Los Altos, CA

Family ID: 48049205

Appl. No.: 14/168,149

Filed: January 30, 2014
Related U.S. Patent Documents

Application Number   Filing Date   Patent Number
13/858,707           Apr 8, 2013   8,649,632     (parent of 14/168,149)
12/265,656           Nov 5, 2008   8,422,825     (parent of 13/858,707)
Current U.S. Class: 345/419

Current CPC Class: G06K 9/00214 (20130101); G06T 19/20 (20130101); G06T 2200/24 (20130101); G06T 2207/10028 (20130101); G06T 2219/2008 (20130101); G06F 3/14 (20130101); G06T 17/05 (20130101); G06T 17/00 (20130101); G06T 2210/04 (20130101); G06K 9/00637 (20130101); G06T 19/003 (20130101); G06T 2200/08 (20130101); G06T 2210/56 (20130101); G06T 15/20 (20130101); G06T 15/04 (20130101)

Class at Publication: 345/419

International Class: G06T 17/05 (20060101)
Claims
1. A system of generating three-dimensional (3D) models, the system
comprising: persistent storage for storing at least orthogonal and
oblique images corresponding to at least one structure; a geocoding
engine correlating one or more images of the at least orthogonal
and oblique images corresponding to the at least one structure and
producing a solution set output comprising a plurality of x, y, z
coordinates for each of the correlated one or more oblique images;
and a visualization and analysis tool receiving the solution set
and including at least a structure identification component
configured to extract a 3D model by: identification of a facade of
the at least one structure; identification of walls of the at least
one structure; and texturing the identified facade and walls of the
at least one structure.
2. The system of claim 1, wherein the oblique images include at
least ground based oblique images.
3. The system of claim 1, wherein for two or more of the oblique
images, the geocoding engine matches multiple boundary points to
the one or more at least orthogonal images.
4. The system of claim 3, wherein the multiple boundary points
identify one or more aspects of the at least one structure.
5. The system of claim 3, wherein the identification of the facade
comprises extracting a bounded space of the multiple boundary
points for at least one of the aspects of the at least one
structure.
6. The system of claim 3, wherein the identification of the walls
comprises extracting a bounded space of the multiple boundary
points for at least one of the aspects of the at least one
structure.
7. The system of claim 3, wherein the structure identification
component extraction reveals 3D positions of one or more of the
multiple boundary points.
8. The system of claim 1, wherein the texturing comprises retrieving
textures from corresponding ones of the one or more images of the
at least orthogonal and oblique images corresponding to the at
least one structure.
9. The system of claim 1, further comprising a distance measurement
component configured to measure distances between two or more of
the plurality of x, y, z coordinates.
10. The system of claim 1, wherein the structure identification
component is further configured to match texturing patterns to
determine occlusions.
11. A system of generating three-dimensional (3D) models, the
system comprising: storage for storing at least orthogonal and
ground based oblique images corresponding to at least one
structure; a geocoding engine correlating one or more images of the
at least orthogonal and ground based oblique images corresponding
to the at least one structure and producing a solution set output
comprising a plurality of coordinates for each of the correlated
one or more ground based oblique images; and a visualization and
analysis tool receiving the solution set and including at least
a structure identification component configured to extract a 3D
model by: identification of a facade of the at least one structure;
identification of walls of the at least one structure; and
texturing the identified facade and walls of the at least one
structure.
12. The system of claim 11, wherein the identification of the
facade includes identification of a plurality of facade boundary
points.
13. The system of claim 12, wherein the visualization and analysis
tool is further configured for dragging one or more of the
plurality of facade boundary points.
14. The system of claim 12, wherein the structure identification
component is further configured, during texturing, to recover 3D
coordinate data of a texture pattern determined from one or more
texture pattern matching algorithms to correct for inaccurate
structure identification.
15. A system of generating three-dimensional (3D) models, the
system comprising: storage for storing at least orthogonal and
ground based oblique images corresponding to at least one
structure; a geocoding engine correlating one or more images of the
at least orthogonal and ground based oblique images corresponding
to the at least one structure and producing a solution set output
comprising a plurality of coordinates for each of the correlated
one or more ground based oblique images; and a visualization and
analysis tool receiving the solution set and including at least a
structure identification component configured to extract a 3D model
by: identification of a facade of the at least one structure;
identification of walls of the at least one structure; texturing
the identified facade and walls of the at least one structure; and
overlaying the texture onto the 3D model.
16. The system of claim 15, wherein the structure identification
component is further configured to adjust for orientation of the
texturing.
17. The system of claim 15, wherein the structure identification
component is further configured, during texturing, to locate
textures using one or more pattern matching algorithms.
18. The system of claim 17, wherein the structure identification
component is further configured to recover 3D coordinate data of a
pattern determined from the one or more pattern matching algorithms
to correct for inaccurate structure identification.
19. The system of claim 15, wherein the identification of the
facade includes identification of a plurality of facade boundary
points.
20. The system of claim 19, wherein the visualization and analysis
tool is further configured for dragging one or more of the
plurality of facade boundary points.
Description
CROSS REFERENCE TO PATENT APPLICATIONS
[0001] The present U.S. Utility patent application claims priority
pursuant to 35 U.S.C. § 121, as a divisional of U.S. Utility
patent application Ser. No. 13/858,707, entitled "A System and
Method for Correlating Oblique Images to 3D Building Models," filed
Apr. 8, 2013, to be issued as U.S. Pat. No. 8,649,632, which is a
divisional of U.S. Utility patent application Ser. No. 12/265,656,
entitled "Method and System for Geometry Extraction, 3D
Visualization and Analysis Using Arbitrary Oblique Imagery," filed
Nov. 5, 2008, now U.S. Pat. No. 8,422,825, all of which are hereby
incorporated herein by reference in their entirety and made part of
the present U.S. Utility patent application for all purposes.
BACKGROUND
[0002] Images of a geographic region are used for construction and
military purposes. Construction planners utilize detailed maps and
images of a potential construction site during development planning.
Military intelligence uses image data to identify or monitor
potential military targets or strategic locations. Satellite images
of an area are available for these purposes, but due to their
"bird's eye" or orthogonal view point, it is difficult to use these
images for determining the height of imaged structures or
characteristics of imaged structures. These aspects of structures
are visible from an angled or "oblique" view point. Oblique images
can be captured through aerial photography. To correlate
information between different oblique images, terrain maps and
orthogonal images, it is necessary to have precise information
about each of the oblique images and the sources of the oblique
images. For each image, the camera location, speed of travel, lens
focal length, camera angle, altitude, range finding information and
similar information are needed to correlate the images to a terrain
map. Images captured from moving vehicles must be taken while the
vehicle travels in a straight path, and similar restrictions on
information requirements are necessary to correlate the images to a
terrain map. Systems
for correlating images to terrain maps are not able to utilize
images if this extrinsic information is unavailable.
FIELD OF THE INVENTION
[0003] Embodiments of the invention relate to the visualization and
correlation of geographical information and image information.
Specifically, embodiments of the invention relate to a process and
system for correlating a set of oblique images to real world
coordinates and providing interactive tools to utilize the
correlated images.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] Embodiments of the invention are illustrated by way of
example and not by way of limitation in the figures of the
accompanying drawings in which like references indicate similar
elements. It should be noted that references to "an" or "one"
embodiment in this disclosure are not necessarily to the same
embodiment, and such references mean at least one.
[0005] FIG. 1 is a diagram of one embodiment of a geocoding
engine.
[0006] FIG. 2A is an illustration of one embodiment of an
orthophoto.
[0007] FIG. 2B is an illustration of one embodiment of an oblique
photo.
[0008] FIG. 3 is a flowchart of one embodiment of a process for
geo-locating images.
[0009] FIG. 4 is a flowchart of one embodiment of a process for
determining camera parameters.
[0010] FIG. 5 is a flowchart of one embodiment of a process for
structure extraction.
[0011] FIG. 6 is a diagram of one embodiment of an interface for
inputting the coordinates of a tie point on a structure.
[0012] FIG. 7 is a diagram of one embodiment of an interface for
displaying automatic structure detection.
[0013] FIG. 8 is a diagram of one embodiment of an interface for
three dimensional structure display.
[0014] FIG. 9 is a flowchart of one embodiment of a process for
lens projection.
[0015] FIG. 10 is a diagram of one embodiment of an interface for
displaying a three-dimensional lens.
[0016] FIG. 11 is a diagram of one embodiment of an integrated
visualization and analysis interface.
[0017] FIG. 12 is a diagram of one embodiment of an interface for
displaying line of sight analysis.
[0018] FIG. 13 is a diagram of one embodiment of an interface for
path finding visibility analysis.
[0019] FIG. 14 is a diagram of one embodiment of an interface for
first-person navigation.
DETAILED DESCRIPTION
[0020] FIG. 1 is a diagram of one embodiment of a geocoding engine.
"Geocoding" as used herein is a correlation of image data to world
coordinate data. The world coordinates may be real world coordinate
data, virtual world coordinate data or similar coordinate system
data. In one embodiment, the geocoding and visualization system 121
has access to multiple types of geographical data and imaging data.
This data is stored in an electronic storage medium such as a
persistent storage system 109 in communication with the system 121.
Geocoding and visualization system 121 can operate on a single
machine such as a desktop computer, workstation, mainframe, server,
laptop computer or similar computer system or can be distributed
across multiple computer systems. Persistent storage systems 109
can be any type of magnetic, optical, FLASH or similar data storage
system.
[0021] In one embodiment, the geocoding and visualization system
121 has access to a set of digital terrain models (DTM) 101. A
`set` as used herein is any number of items including one item. A
DTM 101 includes a set of universal terrain map coordinates
identifying the absolute location and elevation of a set of points
in a geographical region of the earth. The coordinate system can be
any coordinate system including latitude and longitude or similar
systems. In another embodiment, the system 121 utilizes digital
elevation models (DEMs) or similar models and terrain mapping
systems in place of or in combination with DTMs. For sake of
convenience, DTMs are discussed herein as one example embodiment.
DTMs 101 are available for many areas of the world. However, DTMs
101 do not include information about man-made structures such as
buildings, roads, bridges and similar structures.
[0022] Another type of information that is available to the system
121 is vertical images 103 such as orthogonal images and similar
images. A vertical image 103 is a vertical or orthogonal view of
the terrain and structures in a defined area. As used herein a
`vertical image` is an image captured from an overhead position, a
position above a target, or at a right angle or an angle near to a
right angle to the target. For example, the image can be taken from
an overhead position at an eighty-seven degree angle or similar
angle close to a ninety-degree angle. A vertical image can be
`rectified` to fit an associated DTM. Rectifying a vertical image
entails mapping the pixels of the vertical image to the coordinates
of a DTM. For sake of convenience, as used herein a `vertical
image` can be either a rectified image or standard image. A
vertical image 103 can be used for measurements of distances and
object relationships by providing exact geographical locations for
objects such as buildings and similar structures. However, many
details and characteristics of structures and objects are hidden in
vertical images. For example, in a vertical image, it is difficult
to distinguish between different types of objects such as pipes,
fences, paths, and ditches, because from the vertical viewpoint
they have a similar appearance. A third type of information that is
available to the system 121 is oblique imaging 105. Oblique imagery
105 includes images taken at an angle other than the vertical
perspective or images derived from vertical imagery that provide
perspective after processing. Oblique images provide a perspective
line of sight that reveals information that is not visible in an
orthophoto view. For example, an oblique image has an angle to the
target between zero and eighty-nine degrees.
[0023] FIG. 2A is a diagram of one embodiment of an example
vertical image depicting a first structure 201, a second structure
203, a third structure 205 and a fourth structure 207. Vertical
images can be captured by aerial photography, satellite imagery,
laser radar (lidar), synthetic aperture radar (SAR), standard
radar, infrared systems or similar systems. Structures 201 and 205
in the vertical image view depicted in FIG. 2A appear as two
concentric circles. Structures 203 and 207 appear to be elongated
structures that run alongside the other structures. However, it is
not clear from the vertical image view what each of the structures
in fact is. For example, structures 203 and 207 may be ditches,
walls, pipes, power lines, shadows or similar structures or terrain
features. Structures 201 and 205 may be pits, wells, fences,
multilevel structures or similar structures. Without perspective,
it is not possible to identify these structures or features.
[0024] FIG. 2B is a diagram of one embodiment of an example oblique
perspective of the structures depicted in FIG. 2A. Oblique images
can be captured by aerial photography or imaging systems, satellite
imagery, aerial sensor systems, ground based imaging systems, video
or video capture technology and similar systems. It can be seen
from the illustration of FIG. 2B that structure 201 is a silo type
structure with a domed roof. Structure 205 is a well structure with
only a small portion of the circular wall of the well above ground.
Structure 203 is a wall like structure. Structure 207 is a flat
structure like a path or road. Structures 201, 203, 205 and 207
appear very different viewed at the angle of an oblique image. In
contrast, in the vertical image view of FIG. 2A it is difficult to
identify each of the structures. However, it is difficult to
correlate an oblique viewpoint image with information such as a DTM
to determine real world coordinates for aspects of structures
depicted in the image.
[0025] Returning to the discussion of FIG. 1, in one embodiment,
the geocoding engine 107 receives each of these types of
information (DTM, vertical images and oblique images) and
correlates each type of data. The geocoding engine 107 includes a
triangulation engine 135, camera parameter solver 111, pixel
mapping component 113 and generates a terrain model 137 and
correlation solution data 115. The geocode engine 107 may be a
single application or software module or a set of applications or
software modules. The geocode engine 107 can operate on a single
machine or can be distributed over a set of machines.
[0026] In one embodiment, to geocode the incoming information and
find the correlation between the oblique imagery, the DTM and
vertical imagery, it is necessary to determine the ground elevation
(z coordinate) information as well as each x and y coordinate
associated with these images. The triangulation engine 135 utilizes
the DTM to generate the ground z value for each x and y coordinate
pair relevant to the set of images or area to be analyzed. The
triangulation engine 135 can be a separate application or a module
or component of the geocode engine 107. The triangulation engine
135 uses the Delaunay Triangulation process and an error-based
simplification process to obtain a triangulated model of the
terrain 137, which is made accessible to the visualization and
analysis component 117.
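As a concrete illustration, below is a minimal sketch of this triangulation step in Python, assuming SciPy is available and the DTM arrives as scattered (x, y, z) samples; the error-based simplification step is omitted, and the helper names are illustrative rather than the engine's actual API:

    import numpy as np
    from scipy.spatial import Delaunay

    def build_terrain_model(points_xyz):
        """Delaunay-triangulate DTM samples (an N x 3 array) in the x-y plane."""
        points_xyz = np.asarray(points_xyz, dtype=float)
        return points_xyz, Delaunay(points_xyz[:, :2])

    def ground_z(points_xyz, tri, x, y):
        """Ground elevation for an (x, y) pair via barycentric interpolation
        over the containing triangle of the triangulated terrain."""
        s = tri.find_simplex(np.array([[x, y]]))[0]
        if s == -1:
            raise ValueError("point lies outside the terrain model")
        # Barycentric coordinates of (x, y) within triangle s
        b = tri.transform[s, :2].dot(np.array([x, y]) - tri.transform[s, 2])
        bary = np.append(b, 1.0 - b.sum())
        return float(bary.dot(points_xyz[tri.simplices[s], 2]))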
[0027] Other techniques for determining the set of z values that
may be used in the analyzed area include natural neighbor
interpolation, surface patches, quadratic surfaces, polynomial
interpolation, spline interpolation, Art Gallery Theorem, Chvatal's
Art Gallery Theorem, compact surface, Japanese Theorem, simple
polygon, tessellation, triangulation point, convex hull, halfspace
intersection, Voronoi diagrams or similar methods and algorithms.
In one embodiment, a Delaunay triangulation method is used such as
the `radial sweep,` Watson algorithm, `gift wrap,` `divide and
conquer,` `incremental` or similar Delaunay triangulation
variation. In one embodiment, a simple case may be a regular grid
of elevations that is directly interpolated.
[0028] In one embodiment, the triangulation engine 135 outputs a
resulting set of x, y and z coordinates or a vertex array as an
enhanced terrain model 137. This enhanced terrain model 137 can be
passed on or made available to the visualization and analysis tool
117. The triangulation engine 135 also passes the enhanced terrain
model 137 on to the camera parameter solver component 111 and pixel
mapping component 113.
[0029] In one embodiment, the camera parameter solver 111 utilizes
the terrain model 137, vertical imagery data 103 and oblique
imagery data 105. The camera parameter solver component 111 can be
a separate application or a module or a component of the geocode
engine 107. In one embodiment, a vertical image 103 may be
rectified to the enhanced terrain model 137 using standard rectifying
methods. Rectifying the vertical image can include manipulating the
resolution or orientation of an orthophoto to correspond to the
proportions and characteristics of the DTM or enhanced terrain
map.
[0030] In one embodiment, the camera parameter solver component 111
and pixel mapping component 113 are tasked with correlating the
oblique imagery 105 with the rectified vertical image and enhanced
terrain model. The camera parameter solver 111 utilizes four or
more "tie points" in the process of determining the position and
orientation of each image and the camera that took the image. This
information is then utilized to map each pixel to real-world
coordinates, thereby correlating each of the images with one
another. A tie point is
a point identified by a user or through automatic means that is
located on the ground in each of the rectified vertical images and
oblique images.
[0031] For the sake of convenience, this discussion utilizes an
example where a single vertical or oblique image is correlated to
real-world coordinates. This process can be extrapolated to combine
any number of rectified vertical images and oblique images.
Utilizing the tie points, the camera parameter solver component 111
and pixel mapping component 113 determine an x, y and z coordinate
for each pixel location in each oblique image. This correlation of
pixel locations and coordinates is stored or output as solution
data 115 to be utilized by the visualization and analysis tool 117.
The solution data 115 and enhanced terrain model 137 are stored for
future use in any electronic medium (e.g., persistent storage
system 109) that is in communication with the system 121.
[0032] In one embodiment, to complete the correlation of the
oblique image and the rectified image, the camera parameter solver
component 111 determines the exact location of the camera that
captured each oblique image including the coordinates, focal
length, orientation and similar information related to the camera
that captured the oblique image. If this information is known then
the known information may be utilized. However, the camera
parameter solver component 111 is capable of determining or
approximating this information for each oblique image without any
extrinsic information about the oblique image.
[0033] The pixel mapping component 113 utilizes the camera
parameters generated by the camera parameter solver component 111
as well as the enhanced terrain model 137 and maps each pixel of
the oblique images to real-world x, y and z coordinates. The pixel
mapping component 113 outputs resulting correlation solution data
115 that can then be processed and utilized by the visualization
and analysis tool 117.
[0034] In one embodiment, the visualization and analysis tool 117
allows a user to interact and view the correlated imagery and
information. FIG. 11, discussed in further detail below, is a
diagram of one embodiment of the interface for the visualization
and analysis tool 117. The interface allows a user to view each of
the images alone or in combination with one another and view the
orientation of each image in relation to the other images. In
addition, the visualization and analysis tool 117 provides a set of
additional tools for marking points in the images, such as tie
points. Other tools provided by the visualization and analysis tool
117 include distance measurement tools for checking a distance
within images, path finding tools 131, structure identification
tools 125, three-dimensional lens component 129, visibility
component 133 and similar tools. The visualization and analysis
tool 117 utilizes the solution data 115 and enhanced terrain model
137 to generate the view of and manipulation of images as well as
support other tools.
[0035] In one embodiment, the geocoding engine 107 and
visualization and analysis tool 117 also utilize other data formats
and types as input. In one embodiment, the other types of data
include video data and video capture data, three dimensional model
data, other types of mapping data, extrinsic imagery data such as
range finding and altimeter data, imaging device related data such
as camera type and focal length of a lens, vehicle data related to
the capture of the image such as vehicle speed and similar
data.
[0036] The data generated by the geocoding engine 107 can be
exported to other programs. For example, other applications that
utilize the data generated by the geocode engine may include
computer aided design (CAD) programs, geographic information
systems (GIS) and 3-D model rendering programs. In one embodiment,
the solution data 115 and visual representation of the data is
formatted or converted for use or display through a website or
similarly presented on the Internet. This data is made available
and transmitted to electronic devices including laptops, field
equipment, global positioning (GPS) devices, personal digital
assistants (PDAs), command and control systems and similar
devices.
[0037] The structure identification component 125 can be a
component of the visualization and analysis tool 117 or a separate
component that interfaces with the visualization and analysis tool
117. The structure identification component 125 receives user input
that identifies a facade or rooftop of a structure through a user
interface 123 of the visualization and analysis tool 117. The
structure identification component 125 then identifies the other
features such as the walls of the structure using the solution data
115. The textures of each wall and roof of the structure are
retrieved from corresponding images. A model and texture overlay is
then created from this data. The model and texture overlay can be
rotated in three dimensions through the visualization and analysis
tool 117.
[0038] The distance measurement component 127 can be a component of
the visualization and analysis tool 117 or a separate component
that interfaces with the visualization and analysis tool 117. The
distance measurement component 127 receives input from a user
through a user interface 123 of the visualization and analysis tool
117. The user input identifies a start and end point for a distance
measurement. The distance measurement can be an elevation
measurement, a ground measurement or any combination thereof. The
distance measurement component 127 utilizes the solution data to
calculate the distance between the two identified points.
[0039] The three-dimensional lens component 129 can be a component
of the visualization and analysis tool 117 or a separate component
that interfaces with the visualization and analysis tool 117. A
user can activate the three-dimensional lens component 129 through
the user interface 123 of the visualization and analysis tool 117.
The three-dimensional lens component determines the current viewing
angle of a user through the user interface 123 of the visualization
and analysis tool 117. An oblique or similar image with the closest
corresponding viewing angle is selected using the solution data
115. The pixels of the selected oblique image that correspond to a
lens area in the user interface 123 are projected or drawn into the
lens area to give a three-dimensional perspective to an area of a
two-dimensional vertical image. The lens can be moved by a user
over any two dimensional vertical image and the displayed
three-dimensional perspective is updated as the lens moves and as
the point of view in the user interface changes. This update can
include selecting a different oblique image to map into the lens
based on proximity to the change in the point of view.
[0040] The path finding component 131 can be a component of the
visualization and analysis tool 117 or a separate component that
interfaces with the visualization and analysis tool 117. The path
finding component 131 receives input from a user through a user
interface 123 of the visualization and analysis tool 117. The user
input identifies a start and end point for a path. The user can
also identify any number of intermediate points for the path. The
path finding component 131 draws the path in each displayed
correlated image of the user interface 123 by plotting the path in
each image using the solution data 115.
[0041] The visibility component 133 can be a component of the
visualization and analysis tool 117 or a separate component that
interfaces with the visualization and analysis tool 117. The
visibility component 133 can receive a user input through the user
interface 123 and/or data from the path finding component 131. The
visibility component 133 can identify lines of sight to an
identified point or path using the solution data 115. The
visibility is then displayed through the user interface 123.
[0042] FIG. 3 is a flowchart of one embodiment of a process for
geo-locating images. The process can be initiated by input of tie
points (block 301). Tie points are a set of pixels or locations
within a set of images that match one another. The tie points are
locations of aspects of structures or features in each image. For
example, a tie point can be a corner of a building. In one
embodiment, tie points must be natural or man-made features on the
ground such as building floor corners, roads or similar structures
or features. The same corner is identified in each image. Any
number of images and tie points can be input. In one embodiment, a
minimum of four tie-points must be identified in each image and
include a vertical image or other image that is correlated to a
terrain model.
[0043] In one embodiment, the visualization and analysis tool
adjusts the selected tie points (block 303). The adjustment relies
on edge detection and similar algorithms to find specific features
such as building corners in proximity to the selected location
within an image. The tie points are then moved to correspond to the
detected feature or structure. This allows a user to select tie
points without having to closely zoom into each image, thereby
improving the speed at which tie points can be selected.
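As an illustration of this snapping step, here is a minimal sketch assuming OpenCV is available; it uses cv2.goodFeaturesToTrack as a stand-in corner detector, since the patent does not name a specific edge-detection algorithm:

    import cv2
    import numpy as np

    def snap_tie_point(gray_image, click_xy, search_radius=25):
        """Move a user-clicked tie point to the strongest corner nearby."""
        x, y = click_xy
        corners = cv2.goodFeaturesToTrack(
            gray_image, maxCorners=50, qualityLevel=0.01, minDistance=5)
        if corners is None:
            return click_xy                  # nothing detected; keep the click
        corners = corners.reshape(-1, 2)
        d = np.hypot(corners[:, 0] - x, corners[:, 1] - y)
        i = int(np.argmin(d))
        # Snap only if a corner lies within the search radius
        return tuple(corners[i]) if d[i] <= search_radius else click_xy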
[0044] After a set of tie points is input for each image an
estimate of the camera parameters for each image is calculated
(block 305). The process of determining the camera parameters is
discussed below in further detail in regard to FIG. 4. The camera
parameters can be calculated without any extrinsic data related to
the images. The camera parameters can include the focal length,
film size, camera orientation and similar image data. The camera
parameters allow for the geocoding or correlation of the oblique
images (block 307).
[0045] The geocoding or correlation data is then utilized to
correlate each pixel of the images to real-world coordinates. This
can be achieved by recovering the two-dimensional pixel location on
a geo-correlated oblique image, given a three-dimensional
geo-location chosen in the overlapping area of the vertical image.
In one embodiment, the following formula is utilized for this
mapping, where (X, Y, Z) is the original point in world coordinates,
(X_0, Y_0, Z_0) is the camera location in world coordinates, r is the
3x3 rotation matrix representing the camera orientation, FocalLength
is the camera's focal length, ScaleFactor is the film scaling factor
(in pixels per mm), imgSize.width and imgSize.height are the width
and height of the image in pixels, and (P_x, P_y) is the resulting
point in image coordinates (pixels):

\[ P_x = \mathrm{FocalLength}\,\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}\,\mathrm{ScaleFactor} + \frac{\mathrm{imgSize.width}}{2} \]

\[ P_y = \mathrm{FocalLength}\,\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}\,\mathrm{ScaleFactor} + \frac{\mathrm{imgSize.height}}{2} \]
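A minimal NumPy sketch of this forward mapping, assuming the camera position, rotation matrix, focal length and scale factor have already been recovered by the solver; the names are illustrative:

    import numpy as np

    def world_to_pixel(X, X0, r, focal_length, scale_factor, img_w, img_h):
        """Project a world point X = (x, y, z) into pixel coordinates,
        following the two formulas above."""
        d = r @ (np.asarray(X, float) - np.asarray(X0, float))
        px = focal_length * d[0] / d[2] * scale_factor + img_w / 2.0
        py = focal_length * d[1] / d[2] * scale_factor + img_h / 2.0
        return px, py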
[0046] The three-dimensional geo-location of a pixel in a
correlated oblique image is then calculated. The following formula
can be used, where P = (P_x, P_y) is the original point in image
coordinates, h is the expected terrain height of the corresponding
world coordinate, (X_0, Y_0, Z_0) is the camera location in world
coordinates, and r is the 3x3 rotation matrix representing the camera
orientation. FocalLength is the camera's focal length, invScaleFactor
is the film scaling factor (in mm per pixel), imgSize.width and
imgSize.height are the width and height of the image in pixels, and W
is the resulting point in world coordinates (m):

\[ PF_x = \left(P_x - \frac{\mathrm{imgSize.width}}{2}\right)\mathrm{invScaleFactor} \]

\[ PF_y = \left(P_y - \frac{\mathrm{imgSize.height}}{2}\right)\mathrm{invScaleFactor} \]

\[ W_x = (h - Z_0)\,\frac{r_{11}\,PF_x + r_{21}\,PF_y - r_{31}\,\mathrm{FocalLength}}{r_{13}\,PF_x + r_{23}\,PF_y - r_{33}\,\mathrm{FocalLength}} + X_0 \]

\[ W_y = (h - Z_0)\,\frac{r_{12}\,PF_x + r_{22}\,PF_y - r_{32}\,\mathrm{FocalLength}}{r_{13}\,PF_x + r_{23}\,PF_y - r_{33}\,\mathrm{FocalLength}} + Y_0 \]

\[ W_z = h \]
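The corresponding inverse mapping under the same assumptions, written with the (h - Z_0) height term so it stays consistent with the forward projection above:

    import numpy as np

    def pixel_to_world(px, py, h, X0, r, focal_length, inv_scale_factor,
                       img_w, img_h):
        """Recover the world point at terrain height h seen at pixel (px, py)."""
        PFx = (px - img_w / 2.0) * inv_scale_factor
        PFy = (py - img_h / 2.0) * inv_scale_factor
        denom = r[0, 2] * PFx + r[1, 2] * PFy - r[2, 2] * focal_length
        t = (h - X0[2]) / denom      # X0[2] is the camera elevation Z_0
        wx = t * (r[0, 0] * PFx + r[1, 0] * PFy - r[2, 0] * focal_length) + X0[0]
        wy = t * (r[0, 1] * PFx + r[1, 1] * PFy - r[2, 1] * focal_length) + X0[1]
        return wx, wy, h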
[0047] In one embodiment, these formulas are executed through a
graphics processor unit (GPU) to improve performance. The above
three-dimensional recovery formula requires a valid z-value for the
terrain, in order to provide an accurate estimation. The best
method of doing this would be to cast a ray, beginning at an x, y
image location and then finding the exact intersection of this ray
with the enhanced terrain model. However, this process is very
expensive computationally and significantly slows down the user
interaction with the system. The equivalent operation of this
ray-casting operation can be performed on the GPU as a
"reverse-projection" of the enhanced terrain model onto the image.
Using the above two-dimensional recovery, the enhanced terrain
model is projected onto an off-screen frame-buffer equivalent to an
oblique image size, where every pixel in this buffer contains the
z-value of the "reverse-projected" terrain model. To recover the
correct z-value that would result by casting a ray beginning at the
pixel location and ending at the first hit into the terrain model,
a simple look-up of the corresponding pixel of the off-screen
frame-buffer can be performed.
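A CPU-side sketch of this reverse projection, reusing the world_to_pixel helper above; a real implementation rasterizes whole terrain triangles on the GPU, whereas this simplified version only splats terrain vertices into the off-screen z-buffer:

    import numpy as np

    def reverse_project_zbuffer(terrain_xyz, X0, r, focal_length,
                                scale_factor, img_w, img_h):
        """Fill a per-pixel buffer with 'reverse-projected' terrain z values."""
        zbuf = np.full((img_h, img_w), np.nan)
        for X in terrain_xyz:
            px, py = world_to_pixel(X, X0, r, focal_length,
                                    scale_factor, img_w, img_h)
            ix, iy = int(round(px)), int(round(py))
            if 0 <= ix < img_w and 0 <= iy < img_h:
                # Keep the surface nearest the camera; for an aerial view the
                # higher elevation is assumed here to be the first hit.
                if np.isnan(zbuf[iy, ix]) or X[2] > zbuf[iy, ix]:
                    zbuf[iy, ix] = X[2]
        return zbuf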
[0048] The results of the mapping of the pixels are output as a
correlation solution data set (block 309). This data set is
produced for each image or set of images. This solution set can be
used by all components and tools in the system including path
finding, distance measurement, visibility, three-dimensional lens,
structure identification and similar components and tools. The
solution data can be stored in any persistent storage system in
communication with the system (block 311).
[0049] FIG. 4 is a flowchart of one embodiment of a process for
determining camera parameters. The process of determining camera
parameters is dependent on the identification of a set of tie
points as described above (block 401). The process then selects a
set of initial camera parameters upon which other camera parameters
will be estimated (block 403). The selected initial camera
parameters can be initial values within a range for each parameter.
The process iterates through each combination of selected parameter
values. In one embodiment, internal camera parameters, such as
focal length and film size, are excluded from the least-squares
estimation; instead, they are treated as selectable parameters whose
candidate values are iterated through.
The entire estimation process can be completed in less than fifteen
seconds as measured on an Intel Pentium-3 based machine. If any
selectable parameters are known, then the iteration is simplified
as the number of permutations of the selectable parameters is
reduced.
[0050] The camera model can be described by a set of collinearity
equations:
\[ K_x = F\,\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)} \]

\[ K_y = F\,\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)} \]
[0051] Here X, Y, Z are the coordinates of a point in world/ground
space, K_x, K_y are the coordinates of the projected point on the
image plane, F is the focal length, X_0, Y_0, Z_0 are the coordinates
of the camera position (projection center), and r_ij are the elements
of the 3x3 rotation matrix defining the camera orientation. In the
framework as described above, the focal length F is set at the
beginning of the iterative process. The projection coordinates K_x,
K_y are expressed in the ground coordinate system in millimeters.
Thus, given a film size D (which is also set at the beginning of the
iterative process) and the pixel coordinates P_x, P_y of a projected
point on the image plane:

ImageScale = max(ImagePixelDim_x, ImagePixelDim_y) / D
K_x = (P_x - CenterOfImage_x) * ImageScale
K_y = (P_y - CenterOfImage_y) * ImageScale
[0052] The selected set of parameter values are utilized to
identify a largest area triangle within the set of images using the
tie points (block 405). The largest triangle is used to calculate a
three-point space resection problem using the world-image pairs for
the tie points (block 407). The three tie points that form the
largest area triangle on the ground are identified. To identify the
largest area triangle, all possible 3-point combinations are taken
to compute the area of their formed triangle using Heron's formula,
which states that the area (A) of a triangle whose sides have
lengths a, b, and c is:
\[ A = \frac{\sqrt{(a + b + c)(a + b - c)(b + c - a)(c + a - b)}}{4} \]
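A short sketch of this selection step, assuming tie points given as (x, y, z) ground coordinates; it applies Heron's formula to every 3-point combination exactly as described:

    import numpy as np
    from itertools import combinations

    def largest_area_triangle(ground_points):
        """Return the three tie points forming the largest-area triangle."""
        best, best_area = None, -1.0
        for p1, p2, p3 in combinations(ground_points, 3):
            a = np.linalg.norm(np.subtract(p2, p1))
            b = np.linalg.norm(np.subtract(p3, p2))
            c = np.linalg.norm(np.subtract(p1, p3))
            prod = (a + b + c) * (a + b - c) * (b + c - a) * (c + a - b)
            area = np.sqrt(max(prod, 0.0)) / 4.0
            if area > best_area:
                best, best_area = (p1, p2, p3), area
        return best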
[0053] The three-point space resection problem is solved where O is
the perspective center (or top vertex of a tetrahedron) and P_1, P_2,
P_3 are three world space reference points (forming the base of the
tetrahedron) whose pairwise distances a, b, c (i.e. the distances
between P_1, P_2, P_3) are also known. From the image coordinates of
the given points we form unit vectors along the edges of the
tetrahedron OP_1, OP_2, OP_3 and then use the dot products of these
vectors to get the internal angles α, β, γ. This leaves the distances
from P_1, P_2, P_3 to O, referred to as S_1, S_2 and S_3, as the
unknowns to be computed. Given the points P_1, P_2, P_3 on the ground
and the internal angles α, β, γ (computed by forming OP_1, OP_2,
OP_3 on the image plane), we recover the distances S_1, S_2 and S_3,
which are then used to recover the center of projection O. To
accomplish this,
Grunert's solution (as described in Tan, W., 2004, Surveying and
Land Information Science, 64(3):177-179) is followed, which uses
the law of cosines. This involves the solving of quartic equations
in order to obtain a solution. Example solutions to the quartic
equations include the Ferrari Polynomial (as described in Tan,
Ibid), which returns two roots, and the use of the Newton-Raphson
iteration (as described in Tan, Ibid) using a starting point of
v=1.0. In this implementation, two starting points are used for the
Newton-Raphson iteration (v=0.5, v=1.5), in order to yield two
solutions. The Abramowitz and Stegun algorithms can also be used
for a solution to the quartic equation, which yields an additional
4 solutions (as described in Abramowitz, M and Stegun, I. A., 1972,
Handbook of Mathematical Functions, U.S. Department of Commerce).
Using each of these methods results in a total of eight solutions
to the three-point resection problem (block 409).
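A small sketch of the set-up for this resection, assuming image points already expressed in the millimeter K_x, K_y system; it forms the tetrahedron edge vectors and recovers the internal angles from dot products, while the quartic-equation solvers themselves (Ferrari, Newton-Raphson, Abramowitz-Stegun) are left out:

    import numpy as np

    def resection_angles(image_pts_mm, focal_length):
        """Internal angles at the perspective center O from three image points.

        The angle pairing (alpha between OP2 and OP3, etc.) is one common
        convention, assumed here for illustration.
        """
        rays = [np.array([kx, ky, focal_length], float)
                for kx, ky in image_pts_mm]
        u = [ray / np.linalg.norm(ray) for ray in rays]
        alpha = np.arccos(np.clip(u[1] @ u[2], -1.0, 1.0))  # angle P2-O-P3
        beta = np.arccos(np.clip(u[0] @ u[2], -1.0, 1.0))   # angle P1-O-P3
        gamma = np.arccos(np.clip(u[0] @ u[1], -1.0, 1.0))  # angle P1-O-P2
        return alpha, beta, gamma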
[0054] A conformal transformation for each of the guess values from
the eight solutions is then calculated (block 411). Given the
distances S_1, S_2 and S_3, the locations of the 3 model points in
the camera coordinate system can be calculated as P'_1 = S_1 i_1,
P'_2 = S_2 i_2, P'_3 = S_3 i_3, with the vectors i_1, i_2, i_3 formed
as i_1 = (PC_1.x, PC_1.y, F), i_2 = (PC_2.x, PC_2.y, F), i_3 =
(PC_3.x, PC_3.y, F), where F is the given focal length defined at the
beginning of the iteration process. Given the points P'_1, P'_2,
P'_3 and their counterparts PC_1, PC_2, PC_3, all in the camera
coordinate system, a conformal transformation can be applied as
defined by Dewitt (as described in Dewitt, B. A., 1996,
Photogrammetric Engineering and Remote Sensing, 62(1):79-83).
[0055] The results of the conformal transformation give an initial
approximation X, Y, Z for the camera position, and a set of the
angles for the initial approximation of the camera orientation. The
space resection algorithm can then be used to compute the final
solution. The collinearity equations described above are first
linearized using Taylor's theorem. The resulting system is then
solved using the Gauss-Newton method. One of the challenges of
solving the space resection problem using this approach is that a
good initial approximation is required, otherwise the algorithm
will diverge. The process described above ensures that a good
approximation is provided for general camera orientations, as
opposed to the assumptions of planar imagery that previous methods
have relied upon.
[0056] Finally, a least squares fit is calculated for the solution
(block 413). The least squares fit can be performed using the
Gauss-Newton method. A comparison is made between the calculated
least squares fit and a stored least squares fit that represents a
previous `best guess` in selection of the camera parameters. If the
calculated least squares fit is an improvement over the stored
least squares fit, then the selected camera parameters are stored
along with their least squares fit value (block 415). If the
calculated least squares fit is not an improvement, then the camera
value parameters and their least squares fit are discarded.
[0057] A check is made to determine whether all of the camera
parameters have been exhausted by iteration through each
permutation of the combination of the camera parameter values
(block 417). If all of the permutations of the parameters have not
been exhausted, then the next set of parameters are selected and
the process continues (block 403). If all of the parameter
permutations have been exhausted, then the stored set of parameters
that represent the best fit for the images are assigned to the
corresponding images (block 419).
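Putting the loop of FIG. 4 together, here is a minimal sketch of the parameter search, where solve_resection and reprojection_error are hypothetical stand-ins for the resection/conformal-transformation and least-squares steps described above:

    import itertools

    def solve_camera(world_pts, image_pts, focal_lengths, film_sizes,
                     solve_resection, reprojection_error):
        """Iterate every permutation of the selectable internal parameters
        and keep the camera solution with the best least-squares fit."""
        best = None
        for f, d in itertools.product(focal_lengths, film_sizes):
            for pose in solve_resection(world_pts, image_pts, f, d):
                err = reprojection_error(pose, world_pts, image_pts, f, d)
                if best is None or err < best[0]:
                    best = (err, f, d, pose)
        return best   # (fit error, focal length, film size, camera pose)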
[0058] FIG. 5 is a flowchart of one embodiment of a process for
structure extraction. The process of structure extraction
identifies and generates three-dimensional models of buildings and
similar structures within an area within the images selected by a
user. The process is initiated by a user identifying a rooftop or
similar aspect of a structure (block 503). The user can identify
the input aspect of the structure through the user interface of the
visualization and analysis tool. The interface can provide a line
drawing tool, rectangle drawing tool, circle drawing tool or
similar set of tools to enable a user to match a shape to that of
an aspect of a structure (block 501).
[0059] The input aspect of the structure is identified in a single
image. The image can be either the orthographic or any of the
correlated oblique views. The structure extraction process then
identifies the aspect in each of the correlated images by applying
a stereo algorithm to cross-correlate the rooftop or similar aspect
across each correlated image (block 505). This will reveal the 3D
positions of the roof points.
[0060] In the image with the identified aspect of the structure a
texture is extracted within the boundary of the aspect (block 507).
For example, when the aspect is the rooftop, the pixels of the
rooftop are extracted as a texture. A copy of the extracted texture
is then adjusted for the orientation of each of the correlated
images to generate a texture that should match one present in each
image (block 509).
[0061] A pattern matching algorithm is then utilized to locate the
generated textures in each of the corresponding images (block 511).
The search is centered on the location data generated in the
oblique targeting and cross-correlation calculation. In one
embodiment, a GPU-based pattern matching method referred to as an
"occlusion query" is utilized. The pattern matching method counts
the number of pixels that have been successfully drawn from a
shape. A fragment program cancels or rejects the pixels of a shape
to be drawn using a comparison measure. The occlusion query counts
the number of pixels that were cancelled or rejected. Once the
adjusted shape for each oblique is generated, an iterative process
is applied. The adjusted shape is drawn with all the pixels
activated. This enables a count of the maximum possible number of
pixels that can be drawn in the situation where a full match
occurs. The count is used to normalize all subsequent matches for
comparisons. For each oblique, the set of all possible x, y
locations for the adjusted shape is iterated through. At each
iteration, the fragment shader is activated to perform the texture
comparison. The number of successfully drawn pixels is counted
using the occlusion query. The number of successfully drawn pixels
is normalized by dividing by the maximum number of pixels and a
comparison with previous results is made. The result with the
highest score is chosen as the match.
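A CPU-side sketch of this search loop; the GPU occlusion query is emulated by a match_score callback (such as the LAB comparison sketched below) that returns the normalized fraction of matching pixels:

    def best_match_location(oblique_img, adjusted_tex, match_score):
        """Slide the adjusted rooftop texture over the oblique image and
        keep the position with the highest normalized match score."""
        th, tw = adjusted_tex.shape[:2]
        ih, iw = oblique_img.shape[:2]
        best_xy, best = None, -1.0
        for y in range(ih - th + 1):          # in practice the search is
            for x in range(iw - tw + 1):      # centered on the predicted spot
                s = match_score(oblique_img[y:y + th, x:x + tw], adjusted_tex)
                if s > best:
                    best_xy, best = (x, y), s
        return best_xy, best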
[0062] The texture of the adjusted rooftop is compared to the
corresponding oblique image pixel by pixel at the test location.
The comparison includes a comparison of the color of each
corresponding pixel (color_1, color_2). In one embodiment, a
color-space conversion to the LAB color-space is performed where the
luminance channel is suppressed by a factor of 0.5. The distance of
the resulting color-space vectors is then compared to a predefined
threshold to determine a match as follows:

\[ \mathrm{Match} = \left\lVert \mathrm{RGBtoLAB}(color_1) \odot (0.5, 1.0, 1.0) - \mathrm{RGBtoLAB}(color_2) \odot (0.5, 1.0, 1.0) \right\rVert < \mathrm{Threshold} \]
[0063] The fragment shader discards the pixel if it returns false
to the comparison. This causes only the similar pixels to survive
and be counted during the process of an occlusion query process.
The system automatically matches the selection within the other
obliques using the texture from the user-marked oblique, adjusted
to the image space (i.e., orientation) of the respective oblique.
The system evaluates several positions for the adjusted texture by
superimposing the reference texture onto the other oblique images
and comparing their luminance and colors pixel-by-pixel. These
comparisons are performed within fractions of a second, by taking
advantage of a combination of GPU based occlusion queries and
specialized fragment programs. In cases where the above automated
selection fails, the user may indicate the correct position of the
rooftop by clicking his selection in an additional oblique.
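A minimal NumPy/scikit-image sketch of this pixel test, computing the fraction of surviving pixels rather than a GPU pixel count; the threshold value is an assumed placeholder:

    import numpy as np
    from skimage import color

    def lab_match_fraction(patch_rgb, reference_rgb, threshold=10.0):
        """Fraction of pixels whose luminance-suppressed LAB distance falls
        below the threshold (the CPU analog of the occlusion-query count)."""
        w = np.array([0.5, 1.0, 1.0])            # suppress the L channel
        lab1 = color.rgb2lab(patch_rgb) * w
        lab2 = color.rgb2lab(reference_rgb) * w
        dist = np.linalg.norm(lab1 - lab2, axis=-1)
        return float(np.mean(dist < threshold))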
[0064] Upon matching the pattern, the three-dimensional location
data of the pattern is recovered from each image (block 513). The
results of each recovery are compared (block 515) and a selection
of the best results is made (block 517). This helps to correct for
distortion or inaccurate structure identification in the first
image. The other aspects of the structure are then determined
(block 519). For example, if the rooftop of a building has been
determined then the walls are determined by dropping edges from
each corner of the building to the base elevation of the terrain
(block 519). The structure walls are extracted as one wall per
rooftop line segment. These walls consist of the roof-points with
their counterparts on the ground. The user can then refine this
selection by dragging the points of the outline. This immediately
affects the extracted building.
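A short sketch of this wall-dropping step, assuming the rooftop arrives as an ordered list of 3D corner points and a single base elevation from the terrain model:

    import numpy as np

    def extract_walls(roof_points, base_z):
        """One wall quad per rooftop line segment: each roof edge is paired
        with its vertical projection down to the terrain base elevation."""
        walls = []
        n = len(roof_points)
        for i in range(n):
            p0 = np.asarray(roof_points[i], float)
            p1 = np.asarray(roof_points[(i + 1) % n], float)
            g0 = np.array([p0[0], p0[1], base_z])
            g1 = np.array([p1[0], p1[1], base_z])
            walls.append((p0, p1, g1, g0))       # roof edge plus ground edge
        return walls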
[0065] This results in a three-dimensional model of the structure
or feature. The textures associated with each side, wall or other
aspect of the modeled structure or features are then extracted from
the image with the best point of view for that aspect (block 521).
The extracted textures and model can be stored separately or added
to the correlation solution data or similar data structure. These
models and textures can be utilized by other modules to generate
three-dimensional representations of the structures and features of
an area.
[0066] FIG. 6 is a diagram of one embodiment of an interface for
inputting the coordinates of a tie point on a structure. The input
of tie points is illustrated in the context of a user identifying
an aspect of a structure for use in determining a three dimensional
model of the structure. The user has selected 601 the rooftop of a
building in a first oblique 603A. In the illustrated example, the
images 603A-E have been correlated and are being displayed such
that the same point is being viewed from the respective angles of
each image. The terrain model with the vertical image draped over
it is displayed in window 603E. However, the identification of tie
points prior to correlation is the same. A user selects a set of
points in one of the images. In this example, the selection in
image 603A has been completed. As an alternative to the automated
correlation process, the user can select one additional point
correspondence in any of the images 603B-E.
[0067] FIG. 7 is a diagram of one embodiment of an interface for
displaying automatic structure detection. This figure shows the
next step of the structure extraction process such that the aspect
of the structure identified in the first image has now been matched
in each of the other images 701A-C. Any number of images can be
simultaneously displayed and the identified structure can be shown
in each correlated image. The identified structure can be shown in
vertical, oblique or any correlated images.
[0068] Once a structure is identified it may be saved and added to
an aggregation of stored structures. Any number of structures can
be identified in the set of correlated images. Any number of
structures can be shown at any given time through the user
interface. In one embodiment, the user interface includes a user
interface selection mechanism 703 to assist the user in organizing
and viewing identified structures, images, projects and the
like.
[0069] FIG. 8 is a diagram of one embodiment of an interface for
three dimensional structure display. The completed extraction is
displayed in the model window 803. The three-dimensional structure
801 has been drawn on the terrain map that is draped with the
vertical image. The three-dimensional model 801 of the structure
has been draped with the extracted textures from the other images
to create a complete three-dimensional reproduction of the selected
building. This model can be manipulated and viewed from any angle
by manipulation of the available images presented through the
visualization and analysis tool.
[0070] FIG. 9 is a flowchart of one embodiment of a process for
lens projection. The three-dimensional lens projection tool can be
activated by any type of selection mechanism through the user
interface of the visualization and analysis tool. The user selects
a position to display the lens (block 901). The three-dimensional
lens tool then determines the oblique image that has the closest
camera position to the current view point of the user (block 903).
The portion of the image that maps onto the lens is then projected
onto the three-dimensional terrain model that is draped with the
vertical image (block 905). The projection can be a pixel by pixel
projection onto the lens area. The projection is continuously
updated. The lens area can be moved by user interaction with the
user interface of the visualization and analysis tool, such as
mouse or keyboard directional input. As the lens is moved, the
projection of the pixels and the selection of the best oblique is
updated.
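A minimal sketch of the image-selection step, assuming each correlated oblique carries the camera position recovered by the solver:

    import numpy as np

    def closest_oblique(view_position, cameras):
        """Pick the oblique whose solved camera position is nearest the
        current viewpoint. cameras: iterable of (image_id, camera_xyz)."""
        v = np.asarray(view_position, float)
        return min(cameras,
                   key=lambda c: np.linalg.norm(np.asarray(c[1], float) - v))[0]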
[0071] FIG. 10 is a diagram of one embodiment of an interface for
displaying a three-dimensional lens. The lens area 1001 is movable
by the user. Similarly, the underlying vertical image 1003 can be
repositioned. The interior of the lens area 1001 includes the
mapped pixels of the oblique image that most closely aligns to the
current user viewpoint of the vertical image. Any size or shape of
lens area 1001 can be utilized. The lens area can have a drawn
boundary or the full lens area can be used to project the
correlated image. In one embodiment, multiple lenses can be
positioned on an image.
[0072] FIG. 11 is a diagram of one embodiment of an integrated
visualization and analysis interface. This image illustrates a set
of windows for accessing the tools of the visualization and
analysis tool. The visualization and analysis tool can support any
number of images and related data sources being displayed. The
displayed images can include correlated oblique images 1103A-D, a
three-dimensional terrain model with a vertical image draped over
it 1105, and similar content. A reference marker 1101 indicates the
common reference or view point for each of the currently displayed
images. Other data sources can also be shown, such as video sources
1107 that are related to a reference point or area that is currently
displayed. Information displays 1109A-D provide information about
each of the images, including orientation, scale, coordinates and
similar information. Any number of other additional tools or
components can also be displayed or accessed through the
visualization and analysis tool, including those discussed previously
and those to be discussed subsequently.
[0073] FIG. 12 is a diagram of one embodiment of an interface for
displaying line of sight analysis. The line of sight tool is
displayed through the visualization and analysis tool. The line of
sight tool includes the identification 1201 of the line of sight on
the vertical image. A selected target point 1205 that a user
desires to view and a selected viewpoint 1203 are part of the line
of sight 1201. A user can select any point within any image shown
in the visualization and analysis tool as either a viewpoint or
target point.
[0074] A horizontal line of sight map 1207 shows the elevation
along the line of sight. This enables the user to determine at what
point a viewpoint of the target is obstructed and other information
about the line of sight. The horizontal line map can include
distance information, a determination of visibility of the target
from the viewpoint, degree of visibility and similar
information.
[0075] FIG. 13 is a diagram of one embodiment of an interface for
path finding and visibility analysis. The visibility and path
finding tools can be combined to illustrate the visibility of an
entire path. This can be useful for determining a safest route for
a convoy or similar use. A user defines a path 1305 on the terrain
map. The visibility component then determines all areas 1303 that
can view any portion of the path or the nearest portion of the
path. Areas of visibility 1303 can be colored or similarly
identified. In other embodiments, the areas of visibility may be
outlined, bordered or similarly indicated. Any number of paths and
areas of visibility can be determined and represented on any type
of image that has been correlated. Paths that are identified can
have any length or complexity.
[0076] FIG. 14 is a diagram of one embodiment of an interface for
first-person navigation. The visualization and analysis tool can
also include a first person viewing mode in the user interface. The
first person mode 1401 zooms into the terrain map and gives the
user a perspective of an individual on the ground. The map can then
be navigated through a peripheral device by moving the camera
around as though walking or driving across the map. A targeting
interface 1403 allows the user to select a location on the screen
to determine distance, bearing and similar information. Extracted
structures 1405 are also displayed as part of the three-dimensional
navigable landscape. Any number of extracted structures 1405 can be
displayed. Other data can also be displayed including line of
sight, pathfinding and similar data.
[0077] The first-person navigation interface 1401 can be utilized
for training simulations, walk-throughs, and similar activities.
The correlated image, model and structure data enable accurate
recreation of real world settings in three-dimensional space using
two-dimensional imagery. Additional graphical and three-dimensional
models could be added by a user to enhance the realism of the
training simulation or walk-throughs such as vehicle models,
vegetation simulation and similar elements.
[0078] In one embodiment, the geocoding engine, visualization tool
and overall imaging system may be implemented in software, for
example, in a simulator, emulator or similar software. A software
implementation may include a microcode implementation. A software
implementation may be stored on a machine readable medium. A
"machine readable" medium may include any medium that can store or
transfer information. Examples of a machine readable medium include
a ROM, a floppy diskette, a CD-ROM, an optical disk, a hard disk,
removable data storage such as memory sticks, universal serial bus
memory keys or flash drives, compact flash, jump drives, DiskOnKey,
portable image storage thumb drives and similar media.
In one embodiment, the software implementation may be in an object
oriented paradigm or similar programming paradigm. The parts of the
system may be structured and coded as a set of interrelated
objects.
[0079] In the foregoing specification, the invention has been
described with reference to specific embodiments thereof. It will,
however, be evident that various modifications and changes can be
made thereto without departing from the broader spirit and scope of
the invention as set forth in the appended claims. The
specification and drawings are, accordingly, to be regarded in an
illustrative rather than a restrictive sense.
* * * * *