U.S. patent application number 17/171544 was filed with the patent office on 2021-02-09 and published on 2021-08-19 as publication number 20210256722 for collaborative 3D mapping and surface registration. The applicant listed for this patent is Raytheon Company. The invention is credited to Richard W. Ely, Stephen J. Raif, Steven B. Seida, Torsten A. Staab, and Jody D. Verret.
United States Patent Application: 20210256722
Kind Code: A1
Inventors: Staab; Torsten A.; et al.
Publication Date: August 19, 2021
COLLABORATIVE 3D MAPPING AND SURFACE REGISTRATION
Abstract
Subject matter regards generating a 3D point cloud and
registering the 3D point cloud to the surface of the Earth
(sometimes called "geo-locating"). A method can include capturing,
by unmanned vehicles (UVs), image data representative of respective
overlapping subsections of the object, registering the overlapping
subsections to each other, and geo-locating the registered
overlapping subsections.
Inventors: Staab; Torsten A. (Bristow, VA); Seida; Steven B. (Wylie, TX); Verret; Jody D. (Rockwall, TX); Ely; Richard W. (Lewisville, TX); Raif; Stephen J. (Sachse, TX)
Applicant: Raytheon Company, Waltham, MA, US
Family ID: 1000005402469
Appl. No.: 17/171544
Filed: February 9, 2021
Related U.S. Patent Documents
Application Number: 62975016
Filing Date: Feb 11, 2020
Current U.S. Class: 1/1
Current CPC Class: G01S 17/894 20200101; G06K 9/0063 20130101; G06T 7/593 20170101; B64C 2201/123 20130101; G06T 2207/10028 20130101; B64C 39/024 20130101; B64C 2201/127 20130101
International Class: G06T 7/593 20060101 G06T007/593; G06K 9/00 20060101 G06K009/00; G01S 17/894 20060101 G01S017/894; B64C 39/02 20060101 B64C039/02
Claims
1. A method for generating a three-dimensional (3D) point cloud of
an object, the method comprising: capturing, by unmanned vehicles
(UVs), image data representative of respective overlapping
subsections of the object; registering the overlapping subsections
to each other; and geo-locating the registered overlapping
subsections.
2. The method of claim 1, further comprising: capturing, by a UV of
the UVs, a first overhead image of a starting geo-location at which
the image data is captured; and wherein geo-locating the
overlapping subsections includes correlating the first overhead
image with a second overhead image for which geo-location is
known.
3. The method of claim 2, wherein the second overhead image is a
satellite image generated by a satellite.
4. The method of claim 3, wherein geo-locating the registered
overlapping subsection includes determining a normalized cross
correlation between image chips of the first overhead image and
image chips of the second overhead image.
5. The method of claim 1, further comprising: receiving, from an
operator of a UV of the UVs, a starting geo-location, and a heading
of the UV; and wherein geo-locating the registered overlapping
subsections is performed based on the starting geo-location and the
heading.
6. The method of claim 2, further comprising: performing, by a UV
of the UVs, a light detection and ranging (LIDAR) scan to generate
LIDAR scan data; and wherein geo-locating the registered
overlapping subsections includes correlating the first overhead
image with the LIDAR scan data.
7. The method of claim 1, further comprising: associating, by the
UV, geo-location data of the UV with image data generated by the
UV; and wherein geo-locating the registered overlapping subsections
occurs based on the geo-location data.
8. The method of claim 1, further comprising: generating a first
three-dimensional (3D) point set based on the geo-located
registered overlapping subsections; and registering the first 3D
point set to a second 3D point set to generate a merged 3D point
set.
9. The method of claim 8, wherein registering the first 3D point
set to the second 3D point set includes scaling, rotating, and
translating one or more of the first and second 3D point sets using
a least squares estimate bundle adjustment based on tie points
between the first and second 3D point sets.
10. A system comprising: unmanned vehicles configured to capture
image data representative of respective overlapping subsections of
an object; and processing circuitry configured to: register the
overlapping subsections to each other; and geo-locate the
registered overlapping subsections.
11. The system of claim 10, wherein a UV of the UVs is further
configured to capture a first overhead image of a starting
geo-location at which the image data is captured and wherein
geo-locating the overlapping subsections includes correlating the
first overhead image with a second overhead image for which
geo-location is known.
12. The system of claim 11, wherein the second overhead image is a
satellite image.
13. The system of claim 12, wherein geo-locating the registered
overlapping subsection includes determining a normalized cross
correlation of image chips of the first overhead image and the
second overhead image.
14. The system of claim 10, wherein the processing circuitry is
further configured to: receive, from an operator of a UV of the
UVs, a starting geo-location, and a heading of the UV; and wherein
geo-locating the registered overlapping subsections is performed
based on the starting geo-location and the heading.
15. The system of claim 11, wherein a UV of the UVs is further
configured to perform a light detection and ranging (LIDAR) scan to
generate LIDAR scan data; and wherein geo-locating the registered
overlapping subsections includes correlating the first overhead
image with the LIDAR scan data.
16. A machine-readable medium including instructions that, when
executed by a machine, cause the machine to perform operations
comprising: receiving, by unmanned vehicles (UVs), image data
representative of respective overlapping subsections of an object;
registering the overlapping subsections to each other; and
geo-locating the registered overlapping subsections.
17. The machine-readable medium of claim 16, wherein the operations
further comprise: receiving, by the UV, geo-location data of the UV
associated with the image data generated by the UV; and wherein
geo-locating the registered overlapping subsections occurs based on
the geo-location data.
18. The machine-readable medium of claim 16, wherein the operations
further comprise: generating a first three-dimensional (3D) point
set based on the geo-located registered overlapping subsections;
and registering the first 3D point set to a second 3D point set to
generate a merged 3D point set.
19. The machine-readable medium of claim 18, wherein registering
the first 3D point set to the second 3D point set includes scaling,
rotating, and translating one or more of the first and second 3D
point sets using a least squares estimate bundle adjustment based
on tie points between the first and second 3D point sets.
20. The machine-readable medium of claim 16, wherein the operations
further comprise receiving light detection and ranging (LIDAR) scan
data of the object from a UV of the UVs; and wherein geo-locating
the registered overlapping subsections includes correlating the
first overhead image with the LIDAR scan data.
Description
CLAIM OF PRIORITY
[0001] This patent application claims the benefit of U.S.
Provisional Application Ser. No. 62/975,016, filed Feb. 11, 2020,
which is incorporated by reference herein in its entirety.
TECHNICAL FIELD
[0002] Some embodiments described herein generally relate to
generating a three-dimensional (3D) mapping and registering the
generated 3D mapping to a surface.
BACKGROUND
[0003] Generating a 3D point cloud can be resource intensive. The
generation of the point cloud can include gathering two-dimensional
(2D) images (e.g., satellite imagery, ground imagery (images taken
from a camera on the ground), or an elevation therebetween) and
performing photogrammetry, performing a light detection and ranging
(LIDAR) scan, a human generating a computer-aided design (CAD)
drawing, or the like. Each of these techniques has associated
difficulties, tolerances, and resource requirements, which can be
cost or resource prohibitive.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] In the drawings, which are not necessarily drawn to scale,
like numerals may describe similar components in different views.
Like numerals having different letter suffixes may represent
different instances of similar components. Some embodiments are
illustrated by way of example, and not limitation, in the figures
of the accompanying drawings.
[0005] FIG. 1 illustrates, by way of example, a conceptual block
diagram of an embodiment of a technique for 3D point cloud
registration, such as with error propagation.
[0006] FIG. 2 illustrates, by way of example, a diagram of an
embodiment of a technique of performing the operation of the
processing circuitry.
[0007] FIG. 3 illustrates, by way of example, a conceptual block
diagram of an embodiment of a collaborative system for 3D point
cloud generation.
[0008] FIG. 4 illustrates, by way of example, a diagram of an
embodiment of a system for generating the registered 3D point
cloud.
[0009] FIG. 5 illustrates, by way of example, a diagram of an
embodiment of a volume that is being mapped by UAVs.
[0010] FIG. 6 illustrates, by way of example, a diagram of an
embodiment of a system for 3D point cloud generation and
geo-registration.
[0011] FIG. 7 illustrates, by way of example, a diagram of
embodiments of operations that can aid in geo-locating a 3D point
cloud.
[0012] FIG. 8 illustrates, by way of an example, an embodiment of a
system 800 for 3D point set registration and merging.
[0013] FIG. 9 illustrates an example diagram of an embodiment of
the relationship between ground point coordinate estimates
{circumflex over (V)}.sub.j and corresponding 3D data set
observations {tilde over (V)}.sub.ij.
[0014] FIG. 10 illustrates an example of an embodiment of a bundle
adjustment operation.
[0015] FIG. 11 illustrates, by way of example, a diagram of an
embodiment of a method for 3D point set generation and
registration.
[0016] FIG. 12 illustrates, by way of example, a block diagram of
an embodiment of a machine in the example form of a computer system
within which instructions, for causing the machine to perform any
one or more of the methodologies discussed herein, may be
executed.
DETAILED DESCRIPTION
[0017] Aspects of embodiments regard improving point cloud
generation or registration of the point cloud to a geo-location of
the Earth or other surface.
[0018] FIG. 1 illustrates, by way of example, a conceptual block
diagram of an embodiment of a technique for 3D point cloud
registration, such as with error propagation. A system of FIG. 1
includes input 102 provided to processing circuitry 114. The
processing circuitry 114 generates output in the form of a 3D point
cloud 128. The input 102 can include one or more of an image or
video 104, a light detection and ranging (LIDAR) scan 106, a
three-dimensional (3D) computer-aided drafting (CAD) model 108,
satellite imagery 110, or other data 112. The input 102 can be
processed into two or more 3D point clouds before being provided to
the processing circuitry 114.
[0019] The image, video 104 can include a red, green, blue (RGB),
infrared (IR), black and white, grayscale, or other intensity
image. The image, video 104 can include a video that comprises
frames. Photogrammetry can be performed on the data of image, video
104, such as to generate one of the 3D point clouds.
[0020] Photogrammetry can include performing a geometric bundle
adjustment on the two-dimensional (2D) images, to register the
geometry of the 2D images to each other. The bundle adjustment can
adjust geometry of an image of the images to be consistent with
geometry of other images of the images. The geometry can be defined
in metadata, such as by using rational polynomial coefficients
(RPC). Other image registration techniques are possible. For
example, 2D images not previously associated with the 3D point
cloud can be registered to the 3D point cloud. Tie points can be
identified between each 2D image and the 3D point cloud. The
geometry of each 2D image can be adjusted to match the 3D point
cloud by using an affine transformation.
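The affine adjustment described above can be sketched as a least-squares fit over tie points. The following is a minimal illustration assuming NumPy is available; the function names are hypothetical and not from the application:

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2D affine transform mapping src tie points to dst.

    src, dst: (N, 2) arrays of corresponding tie-point coordinates
    (N >= 3 non-collinear points required). Returns a 2x3 matrix A
    such that dst ~= [src | 1] @ A.T.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    # Solve src_h @ A.T = dst in the least-squares sense.
    A_t, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return A_t.T

def apply_affine_2d(A, pts):
    """Apply a 2x3 affine matrix to (N, 2) points."""
    pts_h = np.hstack([np.asarray(pts, dtype=float), np.ones((len(pts), 1))])
    return pts_h @ A.T
```

With enough well-distributed tie points, the same fit can be run per image to pull each 2D image's geometry onto the 3D point cloud.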
[0021] The LIDAR scan 106 can be generated by illuminating a target
with a laser light and measuring the reflected light with a sensor.
Differences in laser return times and wavelengths can then be used
to make one or more of the 3D point clouds. This is because these
differences can be used to determine distance to the object off of
which the light was reflected and returned to the sensor. LIDAR 3D
point clouds often have no intensity information. For the LIDAR
case, and others, it can be useful to attribute the 3D point cloud
with intensity or color data from an image that covers the same
area of interest. Further discussion on this point is provided
elsewhere herein.
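The time-of-flight relationship described above reduces to a one-line computation: the range is half the distance light covers during the measured round trip. A minimal sketch (the helper name is an assumption, not from the application):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_range_m(round_trip_time_s):
    """Distance to the reflecting surface from a LIDAR round-trip time.

    The pulse travels to the target and back, so the one-way range is
    half of the total distance light covers in the measured interval.
    """
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0
```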
[0022] The CAD model 108 is a human-designed (e.g., with or without
computer aid) 2D or 3D model, such as the blueprint of a building.
The CAD model 108 is defined by geometrical parameters and readily
adjustable by a human using a computer.
[0023] The satellite imagery 110 includes images generated at high
altitudes from one or more cameras of a satellite or satellites.
The satellite image provides a nadir, or near-nadir view of a
geographical location. The nadir view is of a point on a celestial
sphere directly below an observer (e.g., the point on the sphere
closest to the observer).
[0024] Other sources 112 can include other man-made measurements
(e.g., thermal imaging), computer aided measurements, or other data
that can be used to generate a 3D point cloud of the geographical
region.
[0025] The processing circuitry 114 can include hardware, software,
firmware, or a combination thereof, configured to implement the
operations of registering 3D point clouds to each other. Hardware
can include one or more electric or electronic components
configured to provide electrical signals that indicate results of the
operation. Electric or electronic components can include one or
more transistors, resistors, capacitors, diodes, inductors,
switches, logic gates (e.g., AND, OR, XOR, negate, buffer, or the
like), multiplexers, power supplies, regulators, analog to digital
or digital to analog converters, amplifiers, processors (e.g.,
application specific integrated circuits (ASIC), field programmable
gate array (FPGA), graphics processing units (GPUs), central
processing units (CPUs), or the like), or the like electrically
coupled to perform the operations.
[0026] The operations for 3D point cloud registration can include
2D/3D point cloud conversion and normalization at operation 116, 3D
point cloud geo-registration at operation 118, and 3D point cloud
fusion and adaptive filtering at operation 120. These operations
are discussed in more detail below.
[0027] The operation 118, in general, can include determining a
scale 122 factor adjustment, a rotation 124 adjustment, and a
translation 126 adjustment between the 3D point clouds to be
registered. The operation 118 can include using an iterative,
normalized cross-covariance technique that minimizes a least
squares difference between tie points, ground control points (GCPs),
or the like. This is discussed in more detail below. The result of
the registration is a registered 3D point cloud 128 that inherits
the best errors (smallest errors) in the 3D point cloud inputs.
Again, more detail is provided below.
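One common way to realize the scale 122, rotation 124, and translation 126 adjustments from tie points is a closed-form least-squares similarity transform (the Umeyama method). The sketch below assumes NumPy and uses hypothetical names; it stands in for whichever estimator the processing circuitry 114 actually applies:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, and translation t minimizing
    sum ||dst_i - (s * R @ src_i + t)||^2 over (N, 3) tie points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    X, Y = src - mu_s, dst - mu_d            # centered point sets
    cov = Y.T @ X / len(src)                 # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(src.shape[1])
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[-1, -1] = -1.0                     # guard against reflections
    R = U @ S @ Vt
    var_src = (X ** 2).sum() / len(src)      # mean squared deviation of src
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```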
[0028] FIG. 2 illustrates, by way of example, a diagram of an
embodiment of a technique of performing the operation 118 of the
processing circuitry 114. As previously mentioned, more detail can
be found below. In general, tie points 212, 214, 216, 218 between
the various 3D point cloud inputs can be identified. In the example
of FIG. 2, the 3D point clouds include the LIDAR scan 106, the CAD
model 108, the image, video 104, and the satellite imagery 110.
More or fewer 3D point clouds can be used. The tie points 212, 214,
216, 218 are data points that correspond to a same geographic
location. For example, a corner of a structure, a high elevation
point, or the like, can make for a good tie point. The tie points
212, 214, 216, 218 can be used to determine how to adjust the
corresponding 3D point clouds to be registered to generate the 3D
point cloud 128. The tie point 220 corresponds to the registered
location of the tie points 212, 214, 216, 218.
[0029] In performing the operation 118, an accounting of the
registered points 222, and of the corresponding data that was used
to generate the registered points 222, can be performed. The
registered points 222 can include an x value 224, a y value 226, a
z value 228, metadata 230, and source pointers 232. The metadata
230 can include source reference vectors 234 indicating data source
(e.g., the image, video 104, LIDAR scan 106, CAD model 108, or
satellite imagery 110, etc.) from which the registered points 222
were determined and the source data 236 of those sources.
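The accounting in FIG. 2 can be modeled as a small record per registered point. A minimal sketch of one possible representation (the class and field names are assumptions, not from the application):

```python
from dataclasses import dataclass, field

@dataclass
class RegisteredPoint:
    """One registered point 222: coordinates plus provenance."""
    x: float
    y: float
    z: float
    metadata: dict = field(default_factory=dict)       # e.g., source reference vectors 234
    source_pointers: list = field(default_factory=list)  # links back to source data 236

# Attribute a point with its data-source lineage.
pt = RegisteredPoint(x=1.5, y=-2.0, z=10.2)
pt.metadata["sources"] = ["lidar_scan", "satellite_imagery"]
pt.source_pointers.append(("lidar_scan", 4711))  # (source name, record index)
```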
[0030] A system, device, or method to implement the techniques of
FIG. 1 or 2 can provide a hybrid, multi-source, multi-resolution 3D
point cloud creation and enrichment through multi-source,
multi-modal 3D point cloud ingest and fusion. The techniques of
FIG. 1 or 2 can provide an ability to create a more comprehensive
(as compared to prior techniques), multi-resolution, 3D hybrid
point cloud by combining a sparse 3D point cloud with a dense 3D
point cloud, a low-resolution 3D point cloud with a high-resolution
3D point cloud or the like. The techniques of FIG. 1 or 2 can
provide an ability to fill in a missing section of a 3D point
cloud, such as by using another 3D point cloud.
[0031] The techniques of FIGS. 1 and 2 can provide an ability to
replace or augment noisy/low quality 3D point cloud sections with
high fidelity data from other location-relevant 2D and 3D data
sources. The techniques of FIGS. 1 and 2 can provide an ability to
detect error and correct a 3D point cloud through multi-3D point
cloud cross-correlation & validation. A resulting hybrid 3D
point cloud can preserve, through metadata, data source lineage
that allows users to leverage metadata (e.g., pixel color and
intensity, object classifications and dimensions) from fused
sources.
[0032] The techniques of FIGS. 1 and 2 provide a hybrid 3D point
cloud-derived location intelligence (e.g., detection, localization,
and classification of permanent or stationary objects in a scene or
environmental conditions (e.g., power or phone lines, radiation
areas that may not be detectable by the onboard sensors of an
unmanned aerial vehicle (UAV) or an unmanned ground vehicle (UGV)))
that can aid in obstacle avoidance and planning of complex mapping
operations. The techniques of FIGS. 1 and 2 can provide an ability
to incorporate fusion criteria, such as trustworthiness of the data
source or classification levels.
[0033] The techniques of FIGS. 1 and 2 provide user-controllable
filtering and criteria, such as classification level of a data
source, trustworthiness of a data source, quality, or age. The
techniques of FIGS. 1 and 2 provide an ability to control which
areas (e.g., rooms within a building) and/or objects get included
or excluded in a resulting, hybrid 3D point cloud. The techniques
of the FIGS. provide an ability to link other measurements (e.g.,
temperature, radiation, noise level, humidity) to each data point
in the resulting hybrid 3D point cloud (e.g., via a graph,
multi-dimensional array, hash table, dictionary, or linked list
representation).
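The per-point linkage described above can be as simple as a hash table (dictionary) keyed by point. A minimal sketch; the names are illustrative assumptions:

```python
# Link auxiliary measurements to individual hybrid-cloud points
# via a dictionary keyed by point coordinates.
point_measurements = {}

def link_measurement(point_xyz, name, value):
    """Attach a named measurement (e.g., temperature) to a 3D point."""
    point_measurements.setdefault(point_xyz, {})[name] = value

link_measurement((12.5, 3.0, 44.2), "temperature_c", 21.4)
link_measurement((12.5, 3.0, 44.2), "radiation_usv_per_h", 0.12)
```

A graph, multi-dimensional array, or linked-list representation could substitute for the dictionary without changing the idea.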
[0034] The extensible, hybrid cloud source vector 222 allows the
resulting hybrid 3D point cloud to also store non-sensitive (e.g.,
unclassified) and sensitive (e.g., classified) point cloud data in
the same cloud. Hybrid point cloud access controls, for example,
can be enforced by controlling the visibility of the source data
vectors 222, or encrypting the sensitive source data elements 236
of the hybrid point cloud.
[0035] FIG. 3 illustrates, by way of example, a conceptual block
diagram of an embodiment of a system 300 for 3D point cloud
generation. The 3D point cloud generated using the system 300 can
be used as an input 102 to the processing circuitry 114, for
example. The system 300 as illustrated includes unmanned aerial
vehicles (UAVs) 330, 332, 334, 336, with imaging devices (indicated
by diverging lines 338). The imaging devices can include intensity
or non-intensity imaging devices. Examples of intensity imaging
device include an RGB, grayscale, black and white, infrared, or
another camera. Examples of non-intensity imaging devices include
LIDAR, or the like. The UAVs 330, 332, 334, 336 can be programmed
to capture image data of an object 340 or a geographical region of
interest. The UAVs 330, 332, 334, 336 can cooperatively capture
sufficient image data of the object 340 to generate a 3D point
cloud of the object 340.
[0036] The concepts presented herein regarding UAVs are not limited
to drone-/UAV-collected data. Concepts can also apply to UGVs,
unmanned vessels, and manual-/human-driven data and collection
methods.
[0037] FIG. 4 illustrates, by way of example, a diagram of an
embodiment of a system 400 for generating the registered 3D point
cloud 128. The system 400 as illustrated includes 3D mapping
scheduler 440, a collaborative 3D object mapping operation 442, 3D
point clouds 444 from the UAVs 330, 332, 334, 336, a 3D point cloud
database (DB) 446, the processing circuitry 114, and the registered
3D point cloud 128 (both from FIG. 1).
[0038] The 3D mapping scheduler 440 can command the UAVs 330, 332,
334, 336 what tasks to perform and when. The 3D mapping scheduler
440 can change the task or time to perform the task after a mission
has begun. The UAVs 330, 332, 334, 336 can communicate and alter
the task or timing on their own. The UAVs 330, 332, 334, 336 can be
autonomous or semi-autonomous.
[0039] The 3D mapping scheduler 440 can provide the UAVs 330, 332,
334, 336 with a task that includes a geographic region to be
modelled, a resolution of the model to be generated, a technique to
be used in generating the model (e.g., color image, satellite
image, radiation or temperature scan, LIDAR, etc.), or one or more
time constraints in performing the tasks. The UAVs 330, 332, 334,
336 can operate to satisfy the schedule and constraints provided by
the scheduler 440. While illustrated as a centralized unit, the
scheduler 440 does not need to be a centralized unit. The scheduler
can be distributed across multiple resources (e.g., UAVs) and run
locally/on-board (e.g., as an agent) to provide a distributed
dynamic scheduler. Such an implementation can include the UAVs 330,
332, 334, 336 communicating and planning tasks among
themselves.
[0040] These operations of the UAVs 330, 332, 334, 336 are part of
collaborative 3D object mapping operation 442. The result of the 3D
mapping operation 442 can be a 3D point cloud 444 from the UAVs. As
previously discussed, the discussion regards UAVs, but also applies
to other manned or unmanned vehicles, such as ground, water, air
vehicles, or a combination thereof.
[0041] The 3D point cloud 444 can be stored in a point cloud
database 446. The point cloud database 446 can include a memory
device for storing one or more point clouds, such as the 3D point
clouds 444 or the registered point cloud 128. The 3D point clouds
444 from the UAVs 330, 332, 334, 336 can be provided to the
processing circuitry 114. The processing circuitry 114 can register
the point clouds 444 to generate the registered point cloud
128.
[0042] FIG. 5 illustrates, by way of example, a diagram of an
embodiment of a volume that is being mapped by UAVs, such as three
of the UAVs 330, 332, 334, 336. Currently, part of the volume, as
indicated at 502, is partially mapped. The different patterns on
voxels or subsections of the volume indicate which UAV has mapped
the subsection (if a UAV has mapped the subsection). The scheduler
440 or the UAVs 330, 332, 334, 336 can determine which UAV will map
the currently unimaged subsections based on a variety of criteria
(e.g., object priorities, speed, cost, distances, available
sensors, required scan resolution, remaining flight times/battery
life, UAV health status, etc.). An example assignment of the
mapping is provided at 504.
[0043] An advantage of one or more embodiments of FIGS. 1-5 can
include one or more of: enabling collaborative, concurrent,
inside-out and outside-in 3D mapping of objects of interests (e.g.,
buildings, tunnels, or other objects) using a swarm of two or more
autonomous, semi-autonomous, or man-controlled drones; providing an
ability to leverage pre-existing 3D point clouds and 3D point
clouds generated through other means (e.g., CAD models,
photogrammetry) to speed up mapping process, preventing replication
of effort, increasing fault-tolerance, and/or reducing cost;
generating a fused 3D point cloud that can be composed of
multi-resolution 3D point clouds; providing a collaborative mapping
and hybrid 3D point cloud creation process that supports dynamic
filtering based on user-definable criteria (e.g., exclude basement
from mapping, specified resolution, specified time frame, specified
technique, or the like); efficiently delegating mapping
objectives/areas of interests using pre-assigned mission planner or
assigned at run time by a dynamic scheduler; providing a dynamic
scheduler that enables ad-hoc re-tasking of mapping objectives in
response to changing mapping objectives, environmental conditions,
or drone(s) health status (e.g., battery status, sensor or motor
failure, data storage constraints, change in weather conditions, or
the like); providing a swarming solution with autonomous mapping
drones, since drones can negotiate/self-organize and potentially
trade their 3D mapping assignments/objectives based on a variety of
criteria or conditions (e.g., tasking from a failing drone gets
automatically re-assigned to another qualified drone); change in
mapping priorities/order; auto-selection of closest drone(s) to
targeted area(s); auto-selection of drone with fastest or most
qualified sensor payload; selection based on drone's health status
(e.g., remaining battery life or storage, strongest downlink
signal, or the like); providing an ability to incorporate 3D no-fly
zones and exclusion area definitions in mission planning and
scheduling, providing an ability to explicitly task drones or allow
autonomous, onboard decision making (e.g., based on mission
objectives and/or current conditions and location) to adaptively
switch between sensors (e.g., Lidar, IR, or high definition (HD)
cameras), turn sensors on and off, and adjust sampling rates in
flight to speed up mapping, optimize storage utilization, speed up
onboard analytics, extend flight time/battery life, or reduce
electronic signature; providing an ability to link in UTM (Unmanned
Traffic Management) systems (e.g., Federal Aviation Administration
(FAA), Airmap) in 3D mapping planner, dynamic scheduler, and
platform's real-time flight control system to avoid air space
violations and collisions with other aircraft and unmanned systems;
providing an ability to 3D map a large area or complex object in
multiple stages (e.g., first use a survey drone to rapidly generate
a coarse, low-res 3D point cloud of the area of interest, then use
a geo-spatial portioning technique to subdivide the resulting 3D
model (e.g., by floors, volume, openings (windows, doors), voxels,
or sectors), and then task the survey drone(s) to explore certain
areas/subsections (sequentially or in parallel) within the
coarse/quick scan-generated 3D model; or providing a 3D
divide-and-conquer approach, enabling parallel, multi-drone and
priority-driven 3D exploration and mapping.
[0044] FIG. 6 illustrates, by way of example, a diagram of an
embodiment of a system 600 for 3D point cloud generation and
geo-registration. The system 600 as illustrated includes an
operator 602, the UAV 330, and a geographical region 604 to be
mapped. The operator 602 can operate the UAV 330, or the UAV 330
can operate autonomously or semi-autonomously to generate image
data of the geographical region 604. In some instances, the UAV 330
can have a global positioning system (GPS) or the like that informs
the UAV 330 of its location relative to the surface of the Earth.
In these instances, the GPS coordinates can be used to register the
3D data to the surface of the Earth. In other instances, however,
the UAV 330 does not have such a system. Note it is not necessary
to have GPS data for the entire flight. GPS data for a portion of
the flight can be used to register data from the entire flight.
When the UAV does not have the positioning system, some other
techniques can be used to register the 3D point cloud generated
using the image data from the UAV 330 to the surface of the Earth.
Operations of the other techniques are provided regarding a
technique 700 provided in FIG. 7. Note that not all operations of FIG. 7 are
required or even useful in all situations.
[0045] FIG. 7 illustrates, by way of example, a diagram of an
embodiment of a technique 700 for registration of a 3D point cloud.
The technique 700 for registration of the 3D point cloud generated
by the UAV 330 can include the operator 602 providing a starting
location and a heading of the UAV 330, at operation 702. The
operation 702 is helpful, such as when no overhead imagery of the
area or no 3D point cloud of the area is available. The heading and
starting location can be determined using a compass, computing
device, or the like. The heading and starting location can include
an associated, estimated error. This data can be used to register
the image data to the surface of the Earth. Using this technique,
the initial heading and starting location can be used to associate
the remaining 3D points to points on the surface of the Earth.
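As a sketch of how a starting location and heading can anchor the remaining points, a UV-local offset can be rotated by the heading and added to the starting location. This assumes a flat local frame in meters; the function name is hypothetical:

```python
import math

def local_to_map(forward_m, right_m, start_east_north, heading_deg):
    """Map a UV-local offset (forward, right) to (east, north) coordinates.

    heading_deg is a compass heading in degrees clockwise from north,
    as an operator might report it at operation 702.
    """
    h = math.radians(heading_deg)
    east = start_east_north[0] + forward_m * math.sin(h) + right_m * math.cos(h)
    north = start_east_north[1] + forward_m * math.cos(h) - right_m * math.sin(h)
    return (east, north)
```

The estimated error in the reported heading and starting location propagates directly into every point mapped this way, which is why this fallback is the least accurate of the techniques discussed.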
[0046] The technique 700 for registration of the 3D point cloud
generated by the UAV 330 can include flying to a specified height
and taking a nadir, or near nadir image of the starting location,
at operation 704. The operation 704 is helpful, such as when
overhead imagery or the 3D point cloud of the area is available.
Overhead imagery often includes metadata, sometimes called rational
polynomial coefficients (RPC), that detail locations of the pixels
of the overhead imagery on the Earth. The image captured by the UAV
330 can then be registered (using normalized cross correlation of
image chips, for example) to the available overhead imagery. In
some embodiments, the UAV 330 can perform a LIDAR scan and take an
image at the elevation. This data can be correlated with the
overhead imagery to determine the starting location, at operation
706.
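The normalized cross correlation of image chips mentioned above scores how well two equally sized chips match, independent of brightness and contrast differences. A minimal sketch assuming NumPy; the helper name is an assumption:

```python
import numpy as np

def normalized_cross_correlation(chip_a, chip_b):
    """Return the NCC score in [-1, 1] for two equally sized image chips.

    Each chip is mean-subtracted and norm-scaled, so constant brightness
    or contrast differences between the chips do not affect the score.
    """
    a = np.asarray(chip_a, dtype=float).ravel()
    b = np.asarray(chip_b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0  # at least one chip is constant; correlation undefined
    return float(a @ b / denom)
```

In practice, the UAV image chip would be slid across the overhead image and the offset with the highest score taken as the match.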
[0047] The geo-location registration can be performed with error
(e.g., linear error or circular error, or a combination thereof)
propagation. The linear error and circular error are best when
correlating to a 3D point cloud, less accurate when correlating to
overhead imagery, and even less accurate when only a starting
heading and starting location are available. GPS data is about as
accurate as an overhead imagery correlation.
[0048] The operation 704 can be helpful because the overhead
imagery and the 3D point clouds are generally in nadir or
near-nadir views of the tops of objects, while LIDAR from the UAV
330 sees the sides of objects in the imagery. To help correlate the
LIDAR data to the overhead imagery or 3D point cloud, a nadir or
near-nadir image can be generated of the starting location.
[0049] FIGS. 8-10 regard methods, systems, and devices for
registering a first 3D point cloud (or a portion thereof) to a
second 3D point cloud (or a portion thereof) to generate a merged
3D point cloud. One or more of the first and second 3D point clouds
can include an associated error. The associated error can be
propagated to the merged 3D point cloud. The error of the 3D point
cloud can be used in a downstream application. Example applications
include targeting and mensuration. A targeteer (one who performs
targeting) can benefit from the error to better inform their
targeting location choice. A mensuration of an object can benefit
from the error as well.
[0050] The merged 3D point cloud can include error that is lower
than that of any of the input 3D point clouds individually. For example, if the
first 3D point cloud includes a lower error (relative to the second
3D point cloud) in the x and y directions and the second 3D point
cloud includes a lower error (relative to the first 3D point cloud)
in the z direction, the merged 3D point cloud can include error
bounded by the first 3D point cloud in the x and y directions and
by the second 3D point cloud in the z direction. The merged point
cloud can thus inherit the better of the errors between the first
and second point clouds for a specified parameter.
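As a toy numeric sketch of this per-axis bounding (the sigma values below are hypothetical):

```python
import numpy as np

# Hypothetical per-axis 1-sigma errors (meters) for two registered point clouds.
# Cloud A is better in x and y; cloud B is better in z.
sigma_a = np.array([0.5, 0.6, 2.0])   # x, y, z
sigma_b = np.array([1.5, 1.4, 0.4])

# The merged cloud's error can be bounded per axis by the better input.
merged_bound = np.minimum(sigma_a, sigma_b)
```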
[0051] FIG. 8 illustrates, by way of an example, an embodiment of a
system 800 for 3D point set registration and merging. The system
800 can include the processing circuitry 114 that receives tie
points 808, tie point error 810, a first 3D point set 102A, a
second 3D point set 102B, and first or second point set error 812.
The first or second point set error 812 includes error for at least
one of the first 3D point set 102A and the second 3D point set
102B. The first or second point set error 812 can thus include
error for the first 3D point set 102A, the second 3D point set
102B, or the first 3D point set 102A and the second 3D point set
102B.
[0052] The first 3D point set 102A or the second 3D point set 102B
can include a point cloud, a 3D surface, or the like. The first 3D
point set 102A and the second 3D point set 102B can include (x, y,
z) data for respective geographic regions. The geographic regions
of the first 3D point set 102A and the second 3D point set 102B at
least partially overlap. One or more of the first point set 102A
and the second point set 102B can include intensity data. Intensity
data can include one or more intensity values, such as red, green,
blue, yellow, black, white, gray, infrared, thermal, or the like.
One or more of the first point set 102A and the second point set
102B can include error data. The error data is illustrated as a
separate input in FIG. 8, namely the first or second point set error
812. The error data can indicate an accuracy of the
corresponding point of the point set.
[0053] The tie points 808 can associate respective points between
the first 3D point set 102A and the second 3D point set 102B. The
tie points 808 can indicate a first point (x.sub.1, y.sub.1,
z.sub.1) in the first 3D point set 102A, a second point (x.sub.2,
y.sub.2, z.sub.2) in the second 3D point set 102B or an error
associated with the tie point 808 (shown as separate input tie
point error 810). The tie point error 810 can indicate how
confident one is that the first and second points correspond to the
same geographic location. The tie point error 810 can include nine
entries indicating a covariance (variance or cross-covariance)
between three variables. The three variables can be error in the
respective directions (x, y, z). A matrix representation of the tie
point error 810 is provided as

$$\begin{bmatrix} e_x & e_{xy} & e_{xz} \\ e_{yx} & e_y & e_{yz} \\ e_{zx} & e_{zy} & e_z \end{bmatrix}$$

where the diagonal terms are the respective variances in the given
directions, and the off-diagonal terms are covariances between the
directions.
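A tie point covariance of this form can be sketched numerically as follows. The entries are hypothetical, and the inversion to a weight matrix anticipates the a priori weight matrices used in the least squares adjustment.

```python
import numpy as np

# Hypothetical tie point error covariance (units m^2): diagonal entries are
# variances in x, y, z; off-diagonal entries are covariances between directions.
tie_cov = np.array([
    [0.25, 0.02, 0.00],
    [0.02, 0.25, 0.01],
    [0.00, 0.01, 0.90],
])

# A valid covariance matrix is symmetric and positive semi-definite.
symmetric = np.allclose(tie_cov, tie_cov.T)
eigvals = np.linalg.eigvalsh(tie_cov)

# Its inverse is the observation weight matrix used in the least squares solve.
tie_weight = np.linalg.inv(tie_cov)
```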
[0054] The first or second point set error 812 can sometimes be
improved, such as to be more rigorous. Sometimes, the first or
second point set error 812 can be in a form that is not digestible
by the bundle adjustment operation 818. The point set error 812 can
be conditioned by a condition point set error operation 814 to
generate an error matrix 816. The condition point set error
operation 814 can include generating a covariance matrix 816 of
error parameters of the first 3D point set 102A or the second 3D
point set 102B. The error parameters can include seven parameters.
Three of the parameters can include translation in x, y, and z,
respectively. Three of the parameters can be for rotation in x, y,
and z (roll, pitch, and yaw), respectively. One of the parameters
can be for a scale factor between the first 3D point set 102A and
the second 3D point set 102B. An example of the matrix 816 produced
by the condition point set error operation 814 is provided as

$$\begin{bmatrix}
\sigma_{\bar{x}}^2 & \sigma_{\bar{x}\bar{y}} & \sigma_{\bar{x}\bar{z}} & \sigma_{\bar{x}\omega} & \sigma_{\bar{x}\phi} & \sigma_{\bar{x}\kappa} & \sigma_{\bar{x}s} \\
\sigma_{\bar{y}\bar{x}} & \sigma_{\bar{y}}^2 & \sigma_{\bar{y}\bar{z}} & \sigma_{\bar{y}\omega} & \sigma_{\bar{y}\phi} & \sigma_{\bar{y}\kappa} & \sigma_{\bar{y}s} \\
\sigma_{\bar{z}\bar{x}} & \sigma_{\bar{z}\bar{y}} & \sigma_{\bar{z}}^2 & \sigma_{\bar{z}\omega} & \sigma_{\bar{z}\phi} & \sigma_{\bar{z}\kappa} & \sigma_{\bar{z}s} \\
\sigma_{\omega\bar{x}} & \sigma_{\omega\bar{y}} & \sigma_{\omega\bar{z}} & \sigma_{\omega}^2 & \sigma_{\omega\phi} & \sigma_{\omega\kappa} & \sigma_{\omega s} \\
\sigma_{\phi\bar{x}} & \sigma_{\phi\bar{y}} & \sigma_{\phi\bar{z}} & \sigma_{\phi\omega} & \sigma_{\phi}^2 & \sigma_{\phi\kappa} & \sigma_{\phi s} \\
\sigma_{\kappa\bar{x}} & \sigma_{\kappa\bar{y}} & \sigma_{\kappa\bar{z}} & \sigma_{\kappa\omega} & \sigma_{\kappa\phi} & \sigma_{\kappa}^2 & \sigma_{\kappa s} \\
\sigma_{s\bar{x}} & \sigma_{s\bar{y}} & \sigma_{s\bar{z}} & \sigma_{s\omega} & \sigma_{s\phi} & \sigma_{s\kappa} & \sigma_{s}^2
\end{bmatrix}$$

where $\bar{x}$ is translation in x, $\bar{y}$ is translation in y, $\bar{z}$ is
translation in z, $\omega$ is roll, $\phi$ is pitch, $\kappa$ is yaw,
and s is scale; the diagonal terms are variances and the off-diagonal
terms are covariances between the error parameters.
[0055] The bundle adjustment operation 818 can receive the tie
points 808, tie point error 810, first 3D point set 102A, second 3D
point set 102B, and the error matrix 816 as input. The bundle
adjustment operation 818 can produce a merged 3D point set 128 and
a merged 3D point set error 822 as output. The bundle adjustment
operation 818 can use a least squares estimator (LSE) for
registration of the first 3D point set 102A and the second 3D point
set 102B. The operation 818 is easily extendable to merging more
than two 3D data sets even though the description regards only two
3D data sets at times. The bundle adjustment operation 818 can use
one or more photogrammetric techniques. The bundle adjustment
operation 818 can include outlier rejection. The bundle adjustment
operation 818 can determine error model parameters for the 3D data
sets. Application of the error model parameters to the first 3D
point set 102A and the second 3D point set 102B, results in the
relative alignment (registration) of the first 3D point set 102A
and the second 3D point set 102B.
[0056] A reference number with a letter suffix is a specific
instance of the general item without the letter suffix. For
example, the 3D point set 102A is a specific instance of the
general 3D point set 102.
[0057] FIG. 9 illustrates an example diagram of an embodiment of
the relationship between ground point coordinate estimates
{circumflex over (V)}.sub.j and the corresponding 3D data set
observations {tilde over (V)}.sub.ij. In FIG. 9, three
misregistered 3D data sets 902, 904, and 906 and a reference frame
924 are illustrated. First image observations 908, 910, 912 and a
first associated ground point 914 and second image observations
916, 918, 920, and a second associated ground point 922 are
illustrated. The ground point 914 can be determined using a least
squares estimator. The least squares estimator can reduce (e.g.,
minimize) the discrepancy between ground points and their
corresponding observations across all images. The least squares estimator can project
an error in one or more of the 3D data sets to an error in a
registered 3D data set.
[0058] This section establishes some preliminary notational
conventions and symbol definitions used for developing the
formulations for the bundle adjustment operation 818. The bundle
adjustment operation 818 can include identifying a ground point
that reduces a discrepancy between the ground point and
corresponding points in respective images, and then adjusting
points in the 3D data sets in a manner that reduces the
discrepancy. The term "3D data set" is sometimes referred to as an
"image". For convenience, example sizes of vectors and matrices are
indicated below the symbol. Thus, the symbol

$$\underset{N \times M}{A}$$

denotes a matrix A with N rows and M columns. Column vectors from
$\mathbb{R}^3$ thus have the annotation $3 \times 1$. Components of a vector
V are written as

$$\underset{3 \times 1}{V} = [x\ y\ z]^T.$$

If the vector includes diacritical marks or distinguishing
embellishments, these are transferred to the components, as in
$\bar{V} = [\bar{x}\ \bar{y}\ \bar{z}]^T$ and $V' = [x'\ y'\ z']^T$.
[0059] Equation modeling of the relationship between points in one
3D space to corresponding points in another 3D space is now
described. A common reference space is established across all of
the images. The reference space can be constructed to accommodate a
simultaneous adjustment of more than two images. Correspondences
can be formed between points in the reference space and the
measured conjugate point locations in each image. The observation
equation can be represented as Equation 1:

$$\underset{3 \times 1}{\tilde{V}} = (1+s)\,\underset{3 \times 3}{T}\,(\underset{3 \times 1}{\hat{V}} - \underset{3 \times 1}{\bar{V}}) \qquad \text{Equation 1}$$

[0060] where $\hat{V}$ is a reference-space 3D coordinate, $\tilde{V}$ is the
observation of $\hat{V}$ in an image, and the orientation and offset
relationship between reference space and image space is given by the
orientation matrix T and offset vector $\bar{V}$, with T defined in Equation 2:
$$T \equiv T(\omega, \phi, \kappa) = \begin{bmatrix}
c\phi\, c\kappa & c\omega\, s\kappa + s\omega\, s\phi\, c\kappa & s\omega\, s\kappa - c\omega\, s\phi\, c\kappa \\
-c\phi\, s\kappa & c\omega\, c\kappa - s\omega\, s\phi\, s\kappa & s\omega\, c\kappa + c\omega\, s\phi\, s\kappa \\
s\phi & -s\omega\, c\phi & c\omega\, c\phi
\end{bmatrix} \qquad \text{Equation 2}$$
[0061] where the symbols "c" and "s" denote trigonometric cosine
and sine functions, respectively. The quantities

$$\underset{3 \times 1}{\theta} \equiv [\omega\ \phi\ \kappa]^T$$
refer to rotation angles (roll, pitch and yaw) about an image's x,
y, and z axes respectively. The scalar s represents an isometric
scale correction factor (nominally zero). The above form is
conducive to modeling a simultaneous least squares adjustment of
all images' offsets and orientations, provided that estimates of
reference space coordinates for all conjugate image observations
vectors are available. This form is more suitable and flexible than
explicitly holding a single image as a reference for at least one
of several reasons: (1) there are reference space ground
coordinates that permit the potential use of ground control points,
whose a priori covariances are relatively small (e.g., they carry
high weighting in the solution); (2) the above formulation is
suitable for a simultaneous adjustment of data that includes small
or minimal overlap (mosaics), as well as many images collected
over the same area (stares), or any combination in between; and (3)
a single image can effectively (e.g., implicitly) be held as a
reference by appropriate a priori weighting of its error model
parameters.
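A sketch of the observation model of Equations 1 and 2 in code, assuming the standard photogrammetric rotation sequence (roll about x, then pitch about y, then yaw about z); the angle values are arbitrary test inputs, not parameters from the disclosure:

```python
import numpy as np

def orientation_matrix(omega, phi, kappa):
    """Rotation matrix T(omega, phi, kappa): sequential rotations about the
    x, y, and z axes (roll, pitch, yaw), as in Equation 2."""
    cw, sw = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, cw, sw], [0, -sw, cw]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def observe(V_hat, V_bar, angles, s):
    """Observation equation (Equation 1): V_tilde = (1 + s) T (V_hat - V_bar)."""
    return (1.0 + s) * orientation_matrix(*angles) @ (V_hat - V_bar)

T = orientation_matrix(0.01, -0.02, 0.5)
```

A quick sanity check is that T is orthonormal with determinant 1, and that zero angles, zero offset, and zero scale correction leave a point unchanged.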
[0062] The symbol {circumflex over (V)} will be referred to as a
ground point (akin to tie point ground locations and ground control
point locations in a classical photogrammetric image adjustment).
The symbol {tilde over (V)} will be referred to as a ground point
observation (akin to image tie point observation locations in a
classical photogrammetric image adjustment).
[0063] Unlike the classical photogrammetric treatment, {circumflex
over (V)} and {tilde over (V)} are both "on the ground" in the
sense that they both represent ground coordinates in 3D (in the
classical imagery case, the observations are in image space and are
thus 2D coordinates). Further, the point may very well not be "on
the ground" but could be on a building rooftop, treetop canopy,
etc. However, the terminology "ground point" and "ground point
observation" will be used.
[0064] If j is taken to be the index of an arbitrary ground point
and i to be the index of an arbitrary image, the observation
equation (Equation 1) can be written as Equation 3

$$\tilde{V}_{ij} = (1+s_i)\,T_i\,(\hat{V}_j - \bar{V}_i) \qquad \text{Equation 3}$$

[0065] where $\hat{V}_j \equiv [\hat{x}_j\ \hat{y}_j\ \hat{z}_j]^T$ is the
$j$th ground point, $\bar{V}_i \equiv [\bar{x}_i\ \bar{y}_i\ \bar{z}_i]^T$ is
the offset vector between image i and the reference space origin, and
where
$$T_i \equiv T(\omega_i, \phi_i, \kappa_i) = \begin{bmatrix}
c\phi_i\, c\kappa_i & c\omega_i\, s\kappa_i + s\omega_i\, s\phi_i\, c\kappa_i & s\omega_i\, s\kappa_i - c\omega_i\, s\phi_i\, c\kappa_i \\
-c\phi_i\, s\kappa_i & c\omega_i\, c\kappa_i - s\omega_i\, s\phi_i\, s\kappa_i & s\omega_i\, c\kappa_i + c\omega_i\, s\phi_i\, s\kappa_i \\
s\phi_i & -s\omega_i\, c\phi_i & c\omega_i\, c\phi_i
\end{bmatrix} \qquad \text{Equation 4}$$
[0066] is the orientation matrix between image i and the reference
space frame, and where $s_i$ is the scale correction factor for image i.
Thus, $\tilde{V}_{ij}$ is the coordinate of the $i$th image's observation
of ground point j.
[0067] If a particular ground point is found in two or more images,
it can serve as a point which ties the images together (one of the
tie points 808). These are generically referred to as tie points
(or tiepoints). A single tie point is often referred to as a
collection of image observations (with coordinates) of the same
point on the ground along with the corresponding ground point (with
coordinates).
[0068] Since the observations over many images i are the
measurements containing error, the true ground point coordinates
are generally unknown. To proceed, an initial estimate of
the ground point location can be computed. The initial estimate is
provided as Equation 5

$$\hat{V}_j^{(0)} \equiv \frac{1}{|I_j|} \sum_{i \in I_j} \tilde{V}_{ij}^R \qquad \text{Equation 5}$$
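Equation 5 amounts to averaging a ground point's observations over the images in which it appears; for example, with hypothetical reference-frame coordinates observed from three images:

```python
import numpy as np

# Hypothetical reference-frame observations of one ground point, one row per
# image in the index set I_j.
obs = np.array([
    [100.2, 50.1, 10.0],
    [ 99.8, 49.9, 10.4],
    [100.0, 50.0,  9.6],
])

# Equation 5: the initial ground point estimate is the mean of its observations.
V_hat0 = obs.mean(axis=0)
```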
[0069] The ground points themselves are treated as derived (but
unconstrained) observations and allowed to adjust in performance of
the operation 818. There can be an observation of interest whose
true ground coordinates are well known. These are classically
called ground control points (or GCPs). Since this development can
accommodate both GCPs and tie points, the more general terms of
"ground point" and "ground point observation" are sometimes used
(as contrasted with "tie point ground coordinate" and "tie point
observation").
[0070] The bundle adjustment operation 818 can accommodate: two or
more images taken over a same area (with observations for tie
points, sometimes called a stare scenario); two or more images
taken in strips (forming a mosaic of data, with 2-way, 3-way, or
m-way observations in strip overlap regions); tie points whose
corresponding ground points may appear in two or more images;
incorporation of GCPs for features in imagery, providing an
absolute registration; and a full covariance for tie point
observations. The full covariance is conducive to tie point
correlation techniques that are highly asymmetrical (e.g., as long
as the asymmetry can be characterized as a measurement covariance).
[0071] The relationship between ground point coordinate estimates
{circumflex over (V)}.sub.j and the corresponding image
observations {tilde over (V)}.sub.ij can be understood as a stare
scenario between three misregistered images.
[0072] For the development of the LSE formulation (and associated
preprocessing) that can be performed by the bundle adjustment
operation 818, more definitions are provided in Table 1.
TABLE 1. Definitions of Symbols

- $\underset{3 \times 1}{V_R^W}$: Location of the origin of the reference frame with respect to the world frame (i.e., the reference frame coordinatized in the world frame).
- $\underset{3 \times 1}{\bar{V}_i}$: Translation of the $i$th image with respect to the reference frame origin; $\bar{V}_i \equiv [\bar{x}_i\ \bar{y}_i\ \bar{z}_i]^T$.
- $\underset{3 \times 1}{\theta_i}$: Orientation angles of the $i$th image with respect to the image frame origin; $\theta_i \equiv [\omega_i\ \phi_i\ \kappa_i]^T$.
- $\underset{1 \times 1}{s_i}$: Isometric scale factor correction for the $i$th image.
- $\underset{3 \times 1}{\bar{V}_i^{(0)}}$: Initial (zeroth-iteration) value for $\bar{V}_i$, thus assumed to be with respect to the reference frame origin.
- $\underset{3 \times 1}{\theta_i^{(0)}}$: Initial (zeroth-iteration) value for $\theta_i$; each element is taken to be zero.
- $\underset{1 \times 1}{s_i^{(0)}}$: Initial (zeroth-iteration) value for $s_i$; nominally $s_i^{(0)} \equiv 0$.
- $\underset{3 \times 3}{T_i}$: Orientation matrix for the $i$th image built from $\theta_i$.
- $\underset{3 \times 1}{\hat{V}_j}$: Ground point coordinates for ground point j with respect to the reference frame origin; $\hat{V}_j \equiv [\hat{x}_j\ \hat{y}_j\ \hat{z}_j]^T$.
- $\underset{3 \times 1}{\hat{V}_j^{(0)}}$: Initial (zeroth-iteration) estimated value for $\hat{V}_j$.
- $\underset{3 \times 1}{\tilde{V}_{ij}^W}$: Ground point observation coordinate of ground point j on image i, coordinatized in the world frame (e.g., UTM coordinates of the ground point observation location).
- $\underset{3 \times 1}{\tilde{V}_{ij}}$: Ground point observation coordinate of ground point j on image i, implicitly assumed to be coordinatized in the local image frame for image i; $\tilde{V}_{ij} \equiv [\tilde{x}_{ij}\ \tilde{y}_{ij}\ \tilde{z}_{ij}]^T$.
- $\underset{7 \times 7}{\dot{\Sigma}_i}$: A priori covariance of the $i$th image's translation, orientation, and scale correction parameter vector.
- $\underset{7 \times 7}{\dot{W}_i}$: A priori parameter weight matrix for image i; $\dot{W}_i = \dot{\Sigma}_i^{-1}$.
- $\underset{3 \times 3}{\ddot{\Sigma}_j}$: A priori covariance of ground point j.
- $\underset{3 \times 3}{\ddot{W}_j}$: A priori weight matrix for ground point j; $\ddot{W}_j = \ddot{\Sigma}_j^{-1}$.
- $\underset{3 \times 3}{\tilde{\Sigma}_{ij}}$: A priori covariance for the observation of ground point j upon image i.
- $\underset{3 \times 3}{\tilde{W}_{ij}}$: A priori observation weight matrix for the observation of ground point j upon image i; $\tilde{W}_{ij} = \tilde{\Sigma}_{ij}^{-1}$.

General indexing:

- m: Number of images.
- i: Image index, $i \in \{1, 2, \ldots, m\}$.
- n: Total number of ground points.
- j: Ground point index, $j \in \{1, 2, \ldots, n\}$.
- q: Total number of ground point observations.
- b: Ground point observation index, $b \in \{1, 2, \ldots, q\}$.
- (p): Non-linear least squares iteration index.
- $G_i$: The index set of all ground points appearing in image i; thus $G_i \subseteq \{1, 2, \ldots, n\}$.
- $O_j$: The index set of observations of ground point j over all images; thus $O_j \subseteq \{1, 2, \ldots, q\}$.
- $I_j$: The index set of images upon which ground point j is an observation; thus $I_j \subseteq \{1, 2, \ldots, m\}$.
- $M_b^G$: Mapping of observation index to ground point index; $M_b^G$ gives the ground point index ($\in \{1, 2, \ldots, n\}$) for a specified observation index $b \in \{1, 2, \ldots, q\}$.
- $|S|$: Cardinality of set S (e.g., the number of index elements in set S).
[0073] Ground point observations can be indexed by ground point j
and image i (as in {tilde over (V)}.sub.ij) or by linear indexing,
b (as in {tilde over (V)}.sub.b). Use of the subscripting depends
upon the context. In the former, it is of interest to characterize
the fact that a particular ground point j appears on a particular
image i. In the latter, it is of interest to enumerate all
observations independent of to which image or to which ground point
they refer.
[0074] Since some 3D point set data is presented in a "world" space
coordinate system (e.g., Universal Transverse Mercator (UTM) map
projection) and since the observation Equation 3 is image
dependent, some coordinate frame definitions and transformations
can aid understanding.
[0075] If it is assumed that ground point observation locations are
specified in world coordinates, it is of interest to transform the
ground point observation locations to be "image" relative. Further,
it can be of interest to obtain the ground locations and image
offsets themselves to be relative to a "local" reference coordinate
frame.
[0076] A motivation for a local reference coordinate frame can be
to remove large values from the coordinates. For example, UTM
coordinates can typically be in the hundreds of thousands of
meters. This makes interpretation of the coordinates more
difficult, for example, when examining a report of updated
coordinate locations. A motivation for an image-relative coordinate
frame can be so that the interpretation of the orientation angles
comprising the T.sub.i matrices can be relative to the center of
the data set. This is contrasted with the origin of rotation being
far removed from the data set (e.g., coincident with the local
reference frame origin in the mosaic scenario).
[0077] In both cases, the transformations between coordinate frames
simply involve a 3D translation. The mnemonics W, R and I are used
to denote the "world", "reference" and "image" coordinate frames,
respectively. To facilitate the transformations, the following
convention is established. A superscript on a vector denotes the
coordinate frame to which it is referred. Thus {tilde over
(V)}.sub.ij.sup.W corresponds to the world space coordinates of a
particular tie point observation, while {tilde over
(V)}.sub.ij.sup.R and {tilde over (V)}.sub.ij.sup.I represent the
same tie point observation but referred to the reference frame and
image frame, respectively.
[0078] Following the above convention, the symbol $V_A^B$ can
represent "the location of the origin of frame A coordinatized in
frame B". Thus, $V_R^W$ can represent the location of the
reference frame in the world coordinate system (e.g., UTM
coordinates of the origin of the reference frame). The relationship
between an arbitrary vector $V^R$ coordinatized in the reference
frame and the same vector $V^W$ coordinatized in the world frame
can be represented by Equation 6

$$V^W = V^R + V_R^W \qquad \text{Equation 6}$$
[0079] The reference frame can be established as an average of all
of the world-space coordinates of tie points. This offset (denoted
$V_R^W$) can be determined using Equation 7

$$V_R^W = \frac{1}{q} \sum_{b=1}^{q} \tilde{V}_b^W \qquad \text{Equation 7}$$
[0080] For simplicity, it can be assumed that the reference frame
origin, referred to by the world frame, can be computed by a
process external to the bundle adjustment operation 818 (e.g., by
the process that assembles the tie points 808 for use in the bundle
adjustment operation 818).
[0081] The image frame (e.g., a frame defined on a per-image basis)
can be the world coordinates of the center of an image. Under the
assumption that there are bounding coordinates in the image data
(specifying the min and max extents of the data in world-frame X, Y
and Z), the center of the data can thus be taken to be the
respective averages of the min and max extents. Since this image
frame refers to world space, the computed offset is denoted
$V_I^W$. If bounding coordinates are not available, the value for
$V_I^W$ is taken as the average of the tie point locations
over the specific image i, as described in Equation 8

$$V_I^W \equiv \frac{1}{|G_i|} \sum_{j \in G_i} \tilde{V}_{ij}^W \qquad \text{Equation 8}$$
[0082] The image frame offset in reference space coordinates is
taken to be the initial value for $\bar{V}_i^{(0)}$ on a per-image basis.
Thus, for each image i, an external process can compute reference
frame coordinates according to Equation 9

$$\bar{V}_i^{(0)R} = V_I^W - V_R^W \qquad \text{Equation 9}$$
[0083] Since the tie point observation values can be input in world
coordinates and since the observation equation domain assumes
reference frame coordinates, some preprocessing of the input data
can help make it consistent with that assumed by the observation
equation (Equations 1 or 3). The tie point observation coordinates
can be converted from world space to reference space. This can be
performed for each observation per Equation 10.

$$\tilde{V}_{ij}^R = \tilde{V}_{ij}^W - V_R^W \qquad \text{Equation 10}$$
[0084] Next, since the true ground point coordinates used in
Equation 3 can be unknown, they can be estimated. The ground point
coordinates can be assumed to be coordinatized in the reference
frame. The initial estimated values for the ground coordinates of
each tie point can be computed as an average of the ground point
observations over all images in which it appears, as described by
Equation 11

$$\hat{V}_j^{(0)} \equiv \frac{1}{|I_j|} \sum_{i \in I_j} \tilde{V}_{ij}^R \qquad \text{Equation 11}$$
[0085] Since the true locations of the tie point ground coordinates
can be treated as unknown, the a priori covariance can reflect this
by treating the errors in the ground coordinates as numerically
unconstrained (in units of meters squared) as described by Equation
12
$$\ddot{\Sigma}_j \equiv \mathrm{diag}([10^{12}\ 10^{12}\ 10^{12}]) \qquad \text{Equation 12}$$
[0086] The tie point observation coordinates for use in the
observation equation can be converted to image-relative coordinates
using Equation 13.
$$\tilde{V}_{ij} = \tilde{V}_{ij}^R - \bar{V}_i^{(0)} \qquad \text{Equation 13}$$
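The preprocessing of Equations 7 through 13 can be sketched end-to-end for two images with hypothetical world-frame (UTM-like) tie point observations; here the per-image origins come from tie point averages, as in Equation 8:

```python
import numpy as np

# Hypothetical world-frame tie point observations on two images (rows: points).
obs_w = {
    1: np.array([[500100.0, 4100200.0, 305.0],
                 [500140.0, 4100260.0, 310.0]]),
    2: np.array([[500160.0, 4100240.0, 300.0],
                 [500200.0, 4100300.0, 295.0]]),
}

# Equation 7: reference frame origin = average of all world-space observations.
all_obs = np.vstack(list(obs_w.values()))
V_R_W = all_obs.mean(axis=0)

V_i0, obs_i = {}, {}
for i, obs in obs_w.items():
    V_I_W = obs.mean(axis=0)       # Equation 8: per-image frame origin
    V_i0[i] = V_I_W - V_R_W        # Equation 9: initial image offset
    obs_r = obs - V_R_W            # Equation 10: world -> reference space
    obs_i[i] = obs_r - V_i0[i]     # Equation 13: image-relative coordinates
```

The result is small, interpretable coordinates: each image's observations are centered about its own frame origin.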
[0087] Next, a least squares formulation and solution are
discussed. Since the observation equation, Equation 1 or 3, is
non-linear in the orientation angles that form T.sub.i, the least
squares problem becomes a non-linear least squares problem.
Equation 3 can be linearized. Solving the linearized equation can
be a multidimensional root finding problem (in which the root is
the vector of solution parameters).
[0088] For simplification in notation of the linearization,
consider a fixed image and a fixed ground point. Let the unknown
error model parameters (offset, orientation, and scale correction)
be represented by Equation 14:
$$\underset{7 \times 1}{X} \equiv [\bar{x}\ \bar{y}\ \bar{z}\ \omega\ \phi\ \kappa\ s]^T \qquad \text{Equation 14}$$
[0089] The observation equation for ground point observation $\tilde{V}$
can be written as Equation 15

$$\tilde{V} = (1+s)\,T\,(\hat{V} - \bar{V}) \qquad \text{Equation 15}$$

[0090] where T is the true image orientation matrix, $\bar{V}$ is the
true image translation vector, $\hat{V}$ is the true ground
point coordinate, and $\tilde{V}$ is the corresponding ground
point observation coordinate.
[0091] If one wishes to include the ground point coordinates
$\hat{V}$ as additional observations, the solution for
X and $\hat{V}$ can be cast as a root-solving problem
based on Equation 16

$$F(X; \hat{V}) = 0 \qquad \text{Equation 16}$$

where

$$F(X; \hat{V}) = \tilde{V} - (1+s)\,T\,(\hat{V} - \bar{V}) \qquad \text{Equation 17}$$
[0092] In vector form, the function, F, can be represented by
Equation 18
$$\begin{bmatrix} f_1 \\ f_2 \\ f_3 \end{bmatrix} = \begin{bmatrix} \tilde{x} \\ \tilde{y} \\ \tilde{z} \end{bmatrix} - (1+s)\,T \left( \begin{bmatrix} \hat{x} \\ \hat{y} \\ \hat{z} \end{bmatrix} - \begin{bmatrix} \bar{x} \\ \bar{y} \\ \bar{z} \end{bmatrix} \right) \qquad \text{Equation 18}$$
[0093] The function F can be approximated using a first-order
Taylor series expansion of F about initial estimates X.sup.(0) and
{circumflex over (V)}.sup.(0) as in Equation 19
$$0 = F(X; \hat{V}) \approx F(X^{(0)}; \hat{V}^{(0)}) + \frac{\partial F^{(0)}}{\partial X}\,\dot{\Delta} + \frac{\partial F^{(0)}}{\partial \hat{V}}\,\ddot{\Delta} \qquad \text{Equation 19}$$
[0094] where $X^{(0)}$ is an initial approximation of X, $\hat{V}^{(0)}$
is an initial approximation of $\hat{V}$, the Jacobians
$\frac{\partial F^{(0)}}{\partial X}$ and $\frac{\partial F^{(0)}}{\partial \hat{V}}$
are the partial derivatives of F evaluated at $X^{(0)}$ and
$\hat{V}^{(0)}$ respectively,
$\dot{\Delta} \equiv [\Delta\bar{x}\ \Delta\bar{y}\ \Delta\bar{z}\ \Delta\omega\ \Delta\phi\ \Delta\kappa\ \Delta s]^T$
is a vector of corrections to X, and
$\ddot{\Delta} \equiv [\Delta\hat{x}\ \Delta\hat{y}\ \Delta\hat{z}]^T$
is a vector of corrections to $\hat{V}$. The values for $X^{(0)}$ and
$\hat{V}^{(0)}$ are discussed below.
$$\underset{3 \times 7}{\dot{B}} \equiv \frac{\partial F}{\partial X} = \begin{bmatrix} \frac{\partial f_1}{\partial X} \\ \frac{\partial f_2}{\partial X} \\ \frac{\partial f_3}{\partial X} \end{bmatrix} = \begin{bmatrix} \frac{\partial f_1}{\partial \bar{x}} & \frac{\partial f_1}{\partial \bar{y}} & \frac{\partial f_1}{\partial \bar{z}} & \frac{\partial f_1}{\partial \omega} & \frac{\partial f_1}{\partial \phi} & \frac{\partial f_1}{\partial \kappa} & \frac{\partial f_1}{\partial s} \\ \frac{\partial f_2}{\partial \bar{x}} & \frac{\partial f_2}{\partial \bar{y}} & \frac{\partial f_2}{\partial \bar{z}} & \frac{\partial f_2}{\partial \omega} & \frac{\partial f_2}{\partial \phi} & \frac{\partial f_2}{\partial \kappa} & \frac{\partial f_2}{\partial s} \\ \frac{\partial f_3}{\partial \bar{x}} & \frac{\partial f_3}{\partial \bar{y}} & \frac{\partial f_3}{\partial \bar{z}} & \frac{\partial f_3}{\partial \omega} & \frac{\partial f_3}{\partial \phi} & \frac{\partial f_3}{\partial \kappa} & \frac{\partial f_3}{\partial s} \end{bmatrix} \qquad \text{Equation 20}$$

$$\underset{3 \times 3}{\ddot{B}} \equiv \frac{\partial F}{\partial \hat{V}} = \begin{bmatrix} \frac{\partial f_1}{\partial \hat{V}} \\ \frac{\partial f_2}{\partial \hat{V}} \\ \frac{\partial f_3}{\partial \hat{V}} \end{bmatrix} = \begin{bmatrix} \frac{\partial f_1}{\partial \hat{x}} & \frac{\partial f_1}{\partial \hat{y}} & \frac{\partial f_1}{\partial \hat{z}} \\ \frac{\partial f_2}{\partial \hat{x}} & \frac{\partial f_2}{\partial \hat{y}} & \frac{\partial f_2}{\partial \hat{z}} \\ \frac{\partial f_3}{\partial \hat{x}} & \frac{\partial f_3}{\partial \hat{y}} & \frac{\partial f_3}{\partial \hat{z}} \end{bmatrix} \qquad \text{Equation 21}$$
[0095] Note that the dot symbols are merely notations, following
the classical photogrammetric equivalent, and do not intrinsically
indicate "rates," as is sometimes denoted in other classical
physics contexts.
[0096] In matrix notation, Equation 19 can be written as

$$\begin{bmatrix} \frac{\partial F^{(0)}}{\partial X} & \frac{\partial F^{(0)}}{\partial \hat{V}} \end{bmatrix} \begin{bmatrix} \dot{\Delta} \\ \ddot{\Delta} \end{bmatrix} = -F(X^{(0)}; \hat{V}^{(0)}) \qquad \text{Equation 22}$$

or

$$\begin{bmatrix} \underset{3 \times 7}{\dot{B}} & \underset{3 \times 3}{\ddot{B}} \end{bmatrix} \begin{bmatrix} \underset{7 \times 1}{\dot{\Delta}} \\ \underset{3 \times 1}{\ddot{\Delta}} \end{bmatrix} = \underset{3 \times 1}{\epsilon} \qquad \text{Equation 23}$$

where $\epsilon \equiv -F(X^{(0)}; \hat{V}^{(0)})$ is the $3 \times 1$
discrepancy vector.
[0097] Since the problem is nonlinear, the estimation of the parameter vector can be iterated (via a multi-dimensional extension of the Newton-Raphson method for root finding, or another technique). The solution can include re-linearization at each iteration. The re-linearization can be performed about the most recent estimate of the parameter vector. The linearized form of Equation 22 at iteration (p) can be represented as in Equation 24.
$$F(X; \hat{V}) \approx F\left(X^{(p)}; \hat{V}^{(p)}\right) + \frac{\partial F^{(p)}}{\partial X}\,\dot{\Delta} + \frac{\partial F^{(p)}}{\partial \hat{V}}\,\ddot{\Delta} \quad \text{Equation 24}$$
[0098] where $X^{(p)}$ is the $p^{th}$ iteration estimate of the parameter vector $X$, $\hat{V}^{(p)}$ is the $p^{th}$ iteration estimate of $\hat{V}$, $\frac{\partial F^{(p)}}{\partial X}$ is the Jacobian of $F$ with respect to $X$ evaluated at $X^{(p)}$, $\frac{\partial F^{(p)}}{\partial \hat{V}}$ is the Jacobian of $F$ with respect to $\hat{V}$ evaluated at $\hat{V}^{(p)}$, $\dot{\Delta}$ is a vector of corrections to $X$ for the $p^{th}$ iteration, and $\ddot{\Delta}$ is a vector of corrections to $\hat{V}$ for the $p^{th}$ iteration.
[0099] With each iteration, the parameter and ground point vectors
can be updated with the most recent correction as in Equations 25
and 26.
$$X^{(p)} = X^{(p-1)} + \dot{\Delta} \quad \text{Equation 25}$$

$$\hat{V}^{(p)} = \hat{V}^{(p-1)} + \ddot{\Delta} \quad \text{Equation 26}$$
[0100] For the initial iteration, initial values for X.sup.(0) and
{circumflex over (V)}.sup.(0) can be estimated as discussed
previously. The system represented by Equation 24 is now linear in
{dot over (.DELTA.)} and {umlaut over (.DELTA.)}. A linear solver
can be used to solve for the parameters.
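The iterate, re-linearize, and update loop of Equations 24 through 26 follows the standard Gauss-Newton/Newton-Raphson pattern. The sketch below is illustrative only; the two-parameter residual function, its Jacobian, and all names are hypothetical stand-ins, not the patent's implementation:

```python
import numpy as np

def gauss_newton(F, jacobian, x0, tol=1e-10, max_iter=50):
    """Iteratively re-linearize F(x) about the current estimate and
    solve the linear system for the correction vector (cf. Eq. 24-26)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)                    # discrepancy vector, cf. Equation 28
        J = jacobian(x)             # Jacobian at the current estimate
        # linear solve for the corrections (lstsq tolerates rank loss)
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]
        x = x + dx                  # update, cf. Equations 25 and 26
        if np.linalg.norm(dx) < tol:  # corrections have settled
            break
    return x

# Hypothetical example: root of f(x, y) = [x^2 + y - 2, x + y^2 - 2]
F = lambda v: np.array([v[0]**2 + v[1] - 2.0, v[0] + v[1]**2 - 2.0])
J = lambda v: np.array([[2.0 * v[0], 1.0], [1.0, 2.0 * v[1]]])
sol = gauss_newton(F, J, [0.5, 0.5])
```

Starting from (0.5, 0.5), the iteration converges to the root (1, 1).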
[0101] For a particular image i and a particular ground point j,
Equation 23 can be written as Equation 27
$$\left[\underset{3\times 7}{\dot{B}_{ij}} \;\; \underset{3\times 3}{\ddot{B}_{ij}}\right] \begin{bmatrix} \underset{7\times 1}{\dot{\Delta}_i} \\ \underset{3\times 1}{\ddot{\Delta}_j} \end{bmatrix} = \underset{3\times 1}{\epsilon_{ij}} \quad \text{Equation 27}$$
[0102] The discrepancy vector for the $p^{th}$ iteration is thus represented as in Equation 28

$$\epsilon_{ij}^{(p)} = -F\left(X^{(p)}; \hat{V}^{(p)}\right) \quad \text{Equation 28}$$

and thus

$$\epsilon_{ij}^{(p)} = -\left[\tilde{V}_{ij} - (1 + s_i)\,T_i\left(\hat{V}_j - V_i\right)\right] \quad \text{Equation 29}$$
[0103] To accommodate a simultaneous solution of all images and
ground points, Equation 27 can be extended as
$$\begin{bmatrix}
\dot{B}_{11} & \cdots & 0 & \ddot{B}_{11} & \cdots & 0 \\
\dot{B}_{12} & \cdots & 0 & 0 & \ddots & 0 \\
\vdots & & \vdots & \vdots & & \vdots \\
0 & \cdots & \dot{B}_{mn} & 0 & \cdots & \ddot{B}_{mn}
\end{bmatrix}
\begin{bmatrix} \dot{\Delta}_1 \\ \dot{\Delta}_2 \\ \vdots \\ \dot{\Delta}_m \\ \ddot{\Delta}_1 \\ \ddot{\Delta}_2 \\ \vdots \\ \ddot{\Delta}_n \end{bmatrix}
= \begin{bmatrix} \epsilon_{11} \\ \epsilon_{12} \\ \vdots \\ \epsilon_{mn} \end{bmatrix} \quad \text{Equation 30}$$

where the row block for observation (i, j) contains $\dot{B}_{ij}$ in image column i and $\ddot{B}_{ij}$ in ground point column j, and is zero elsewhere.
[0104] Equation 30 can be re-written as Equation 31
$$B\Delta = E \quad \text{Equation 31}$$
[0105] then the normal equation matrix can be represented as
Equation 32 or Equation 33
$$(B^T B)\Delta = B^T E \quad \text{Equation 32}$$

$$Z\Delta = H \quad \text{Equation 33}$$
[0106] It can be inefficient to form B as in Equation 30 for one or more of the following reasons: (1) B is very sparse; and (2) the quantities $\dot{B}_{ij}$ and $\ddot{B}_{ij}$ are nonzero if and only if ground point j is observed on image i. For these reasons, the classical development of the normal matrix $B^T B$ and right-hand side vector $B^T E$ uses summations over the appropriate indexing. These summations are provided in the normal matrix partitioning below.
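The per-observation summations described above can be sketched as follows. The block sizes (7 adjustable parameters per image, 3 coordinates per ground point) follow Equations 42, 45, 47, 49, and 50; the function and argument names are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def accumulate_normals(obs, m, n):
    """Accumulate the normal-equation blocks by summing per-observation
    contributions, rather than forming the sparse B of Equation 30.

    obs: iterable of (i, j, Bdot (3x7), Bddot (3x3), W (3x3), eps (3,));
         an entry exists only if ground point j is observed on image i.
    """
    Ndot = np.zeros((7 * m, 7 * m))   # image blocks, cf. Equation 42
    Nddot = np.zeros((3 * n, 3 * n))  # ground point blocks, cf. Equation 45
    Nbar = np.zeros((7 * m, 3 * n))   # cross blocks, cf. Equation 47
    Kdot = np.zeros(7 * m)            # right-hand side, cf. Equation 49
    Kddot = np.zeros(3 * n)           # right-hand side, cf. Equation 50
    for i, j, Bd, Bdd, W, eps in obs:
        ri, rj = slice(7 * i, 7 * i + 7), slice(3 * j, 3 * j + 3)
        Ndot[ri, ri] += Bd.T @ W @ Bd
        Nddot[rj, rj] += Bdd.T @ W @ Bdd
        Nbar[ri, rj] += Bd.T @ W @ Bdd
        Kdot[ri] += Bd.T @ W @ eps
        Kddot[rj] += Bdd.T @ W @ eps
    return Ndot, Nbar, Nddot, Kdot, Kddot
```

The accumulated blocks agree with the corresponding blocks of $B^T W B$ and $B^T W E$ formed explicitly from the sparse B.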
[0107] The foregoing equations form a foundation for the present
problem that is sufficient for development of the normal equations,
examination of the normal matrix structure and formulation of the
normal equation solution.
[0108] The normal equation can be written as in Equation 34
$$Z\Delta = H \quad \text{Equation 34}$$
[0109] The matrices can be partitioned as in Equations 35-37
$$Z = \begin{bmatrix} \dot{N} + \dot{W} & \bar{N} \\ \bar{N}^T & \ddot{N} + \ddot{W} \end{bmatrix} \quad \text{Equation 35}$$

$$\Delta = \begin{bmatrix} \dot{\Delta} \\ \ddot{\Delta} \end{bmatrix} \quad \text{Equation 36}$$

$$H = \begin{bmatrix} \dot{K} \\ \ddot{K} \end{bmatrix} - \begin{bmatrix} \dot{W}\dot{C} \\ \ddot{W}\ddot{C} \end{bmatrix} \quad \text{Equation 37}$$
[0110] The quantities $\dot{K}$, $\ddot{K}$, and $\ddot{C}$ are described in more detail elsewhere herein.
[0111] Combining Equations 35, 36 and 37 yields Equation 38
$$\begin{bmatrix} \dot{N} + \dot{W} & \bar{N} \\ \bar{N}^T & \ddot{N} + \ddot{W} \end{bmatrix} \begin{bmatrix} \dot{\Delta} \\ \ddot{\Delta} \end{bmatrix} = \begin{bmatrix} \dot{K} \\ \ddot{K} \end{bmatrix} - \begin{bmatrix} \dot{W}\dot{C} \\ \ddot{W}\ddot{C} \end{bmatrix} \quad \text{Equation 38}$$
[0112] The matrix Z can thus be represented as Equation 39
$$Z = N + W = \begin{bmatrix} \underset{7m\times 7m}{\dot{N}} & \underset{7m\times 3n}{\bar{N}} \\ \underset{3n\times 7m}{\bar{N}^T} & \underset{3n\times 3n}{\ddot{N}} \end{bmatrix} + \begin{bmatrix} \underset{7m\times 7m}{\dot{W}} & 0 \\ 0 & \underset{3n\times 3n}{\ddot{W}} \end{bmatrix} \quad \text{Equation 39}$$
[0113] The matrix {dot over (N)} can be written as Equation 40
$$\underset{7m\times 7m}{\dot{N}} = \begin{bmatrix} \dot{N}_1 & 0 & \cdots & 0 \\ 0 & \dot{N}_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \dot{N}_m \end{bmatrix} \quad \text{Equation 40}$$
[0114] and analogously {dot over (W)} can be written as Equation
41
$$\underset{7m\times 7m}{\dot{W}} = \begin{bmatrix} \dot{W}_1 & 0 & \cdots & 0 \\ 0 & \dot{W}_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \dot{W}_m \end{bmatrix} \quad \text{Equation 41}$$
[0115] The block entries of {dot over (N)}.sub.i can be defined as
in Equation 42
$$\underset{7\times 7}{\dot{N}_i} = \sum_{j \in G_i} \dot{B}_{ij}^T \tilde{W}_{ij} \dot{B}_{ij} \quad \text{Equation 42}$$
[0116] The subscripts ij on the {dot over (B)}.sub.ij matrices
indicate that they are a function of image i and ground point
j.
[0117] The matrix N can be expanded as in Equation 43
$$\underset{3n\times 3n}{\ddot{N}} = \begin{bmatrix} \ddot{N}_1 & 0 & \cdots & 0 \\ 0 & \ddot{N}_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \ddot{N}_n \end{bmatrix} \quad \text{Equation 43}$$
[0118] {umlaut over (W)} can be expanded as in Equation 44:
$$\underset{3n\times 3n}{\ddot{W}} = \begin{bmatrix} \ddot{W}_1 & 0 & \cdots & 0 \\ 0 & \ddot{W}_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \ddot{W}_n \end{bmatrix} \quad \text{Equation 44}$$
[0119] The block entries of Equation 43 can be defined as in
Equation 45
$$\underset{3\times 3}{\ddot{N}_j} = \sum_{i \in O_j} \ddot{B}_{ij}^T \tilde{W}_{ij} \ddot{B}_{ij} \quad \text{Equation 45}$$
[0120] The matrix $\bar{N}$ from Equation 39 can be expanded as in Equation 46

$$\underset{7m\times 3n}{\bar{N}} \equiv \begin{bmatrix} \bar{N}_{11} & \bar{N}_{12} & \cdots & \bar{N}_{1n} \\ \bar{N}_{21} & \bar{N}_{22} & \cdots & \bar{N}_{2n} \\ \vdots & \vdots & & \vdots \\ \bar{N}_{m1} & \bar{N}_{m2} & \cdots & \bar{N}_{mn} \end{bmatrix} \quad \text{Equation 46}$$
[0121] The block entries of $\bar{N}$ from Equation 46 can be defined as in Equation 47

$$\underset{7\times 3}{\bar{N}_{ij}} = \dot{B}_{ij}^T \tilde{W}_{ij} \ddot{B}_{ij}, \quad i \in \{1, \ldots, m\},\; j \in \{1, \ldots, n\} \quad \text{Equation 47}$$
[0122] In a similar development the right hand side matrix H from
Equation 34 can be expanded as in Equation 48
$$H = \begin{bmatrix} \dot{K}_1 \\ \vdots \\ \dot{K}_m \\ \ddot{K}_1 \\ \vdots \\ \ddot{K}_n \end{bmatrix} - \begin{bmatrix} \dot{W} & 0 \\ 0 & \ddot{W} \end{bmatrix} \begin{bmatrix} \dot{C} \\ \ddot{C} \end{bmatrix} \quad \text{Equation 48}$$
[0123] The subblocks of H can be defined as in Equations 49 and
50
$$\underset{7\times 1}{\dot{K}_i} = \sum_{j \in G_i} \dot{B}_{ij}^T \tilde{W}_{ij} \epsilon_{ij} \quad \text{Equation 49}$$

$$\underset{3\times 1}{\ddot{K}_j} = \sum_{i \in O_j} \ddot{B}_{ij}^T \tilde{W}_{ij} \epsilon_{ij} \quad \text{Equation 50}$$
[0124] with the discrepancy vector .epsilon..sub.ij defined as in
Equation 29.
$$\dot{C}^{(p)} = \dot{C}^{(p-1)} - \dot{C}^{(0)} \quad \text{Equation 51}$$

$$\ddot{C}^{(p)} = \ddot{C}^{(p-1)} - \ddot{C}^{(0)} \quad \text{Equation 52}$$

[0125] The values for $\dot{C}^{(0)}$ and $\ddot{C}^{(0)}$ are the initial parameter values. The initial values for the translation parameters portion of $\dot{C}^{(0)}$ can be taken to be the $V_i^{(0)R}$ as computed in Equation 9. The initial values for the rotation parameters portion of $\dot{C}^{(0)}$ can be taken to be zero.
[0126] The initial values of {umlaut over (C)}.sup.(0) can be taken
to be the values of the ground point coordinates {circumflex over
(V)}.sub.j.sup.(0) as computed in accord with Equation 11.
[0127] The solution to the normal equation matrix on iteration (p)
can be determined as in Equation 53
$$\Delta^{(p)} = Z^{-1} H \quad \text{Equation 53}$$
[0128] At each iteration, the parameters can be updated via Equations 51 and 52 and the normal matrix can be formed and solved again. The process can continue until the solution converges. Examples of the convergence criterion are discussed in the following section.
[0129] Since the solution is iterated, a convergence criterion can
be established. An example of a convergence criterion is to compute
the root-mean-square (RMS) of the residuals as in Equation 54
$$R^{(p)} = \sqrt{\frac{\epsilon^T \epsilon}{q - 7m}} \quad \text{Equation 54}$$
[0130] The value in the denominator of Equation 54 represents the
number of degrees of freedom (e.g., the number of observation
equations minus the number of estimated parameters).
[0131] Since typically $q \gg 7m$, Equation 54 can be estimated as in Equation 55

$$R^{(p)} = \sqrt{\frac{\epsilon^T \epsilon}{q}} \quad \text{Equation 55}$$
[0132] The condition $q \gg 7m$ can be guaranteed with sufficient redundancy of ground point observations as compared with the number of images (e.g., enough tie points are measured between the images so that the aforementioned condition is satisfied).
[0133] Convergence happens when the residuals settle to the same
values on consecutive iterations. The convergence criterion can
be
$$\left| R^{(p)} - R^{(p-1)} \right| < \delta \quad \text{Equation 56}$$
[0134] where .delta. is a prescribed tolerance.
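The RMS computation and settling test of Equations 54 through 56 amount to only a few lines. The following sketch uses illustrative names; the stacked residual vector and counts are assumptions for the example:

```python
import numpy as np

def rms_residual(eps, q, m):
    """RMS of residuals per Equation 54: eps is the stacked residual
    vector of length q (observation equations), m the number of images
    (7 adjustable parameters each)."""
    return np.sqrt(eps @ eps / (q - 7 * m))

def converged(R_p, R_prev, delta=1e-8):
    """Equation 56: converged when consecutive RMS values settle to
    within a prescribed tolerance delta."""
    return abs(R_p - R_prev) < delta
```

For example, with 18 unit residuals and a single image the RMS is sqrt(18/11), and the iteration stops once successive RMS values differ by less than delta.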
[0135] A rigorous formulation for the standard error of unit weight
(to be used in error propagation discussed elsewhere) is provided
in Equation 57
$$\left[\sigma^{(p)}\right]^2 = \frac{\displaystyle\sum_{j=1}^{n} \sum_{i \in I_j} \epsilon_{ij}^T \tilde{W}_{ij} \epsilon_{ij} \;+\; \sum_{i=1}^{m} \dot{C}_i^T \dot{W}_i \dot{C}_i \;+\; \sum_{j=1}^{n} \ddot{C}_j^T \ddot{W}_j \ddot{C}_j}{ndof} \quad \text{Equation 57}$$
[0136] where ndof is the number of degrees of freedom, that is, the number of observation equations minus the number of error model solution parameters:

$$ndof = q - 7m \quad \text{Equation 58}$$
[0137] Since blundered points can be effectively removed from the solution via deweighting, the number of observations remaining effectively excludes the blunders. To be strictly correct, the value for q in Equation 58 can be the number of non-blundered observations.
[0138] The full form of the matrix Equation 34 can be reduced under the assumption that the errors in the ground point locations are uncorrelated. Under this assumption, the error covariance matrix of the ground point locations $\ddot{\Sigma}$ becomes a block-diagonal matrix of 3.times.3 matrix blocks. Since it is block-diagonal, its inverse is easily computed by inverting the 3.times.3 diagonal blocks. The development in this section reformulates the normal equations taking advantage of this. The result is a reduced normal equation matrix in which the size of the normal matrix is 7m.times.7m instead of (7m+3n).times.(7m+3n). This gives the obvious advantage that the size of the normal matrix is much smaller and remains invariant with the number of ground points.
[0139] The reduced system formation is sometimes referred to as a
"ground point folding," since the ground point portion of the
reduced normal matrix is incorporated into the image portion. The
development of the reduced normal equation begins with the original
normal equation from Equation 34 and repeated as Equation 59
Z.DELTA.=H Equation 59
[0140] To facilitate ground point folding into a reduced normal
equation matrix, Equation 59 can be re-written as Equation 60
$$\begin{bmatrix} \underset{7m\times 7m}{\dot{Z}} & \underset{7m\times 3n}{\bar{Z}} \\ \underset{3n\times 7m}{\bar{Z}^T} & \underset{3n\times 3n}{\ddot{Z}} \end{bmatrix} \begin{bmatrix} \underset{7m\times 1}{\dot{\Delta}} \\ \underset{3n\times 1}{\ddot{\Delta}} \end{bmatrix} = \begin{bmatrix} \underset{7m\times 1}{\dot{H}} \\ \underset{3n\times 1}{\ddot{H}} \end{bmatrix} \quad \text{Equation 60}$$

where

$$\dot{Z} = \dot{N} + \dot{W} \quad \text{Equation 61}$$

$$\bar{Z} = \bar{N} \quad \text{Equation 62}$$

$$\ddot{Z} = \ddot{N} + \ddot{W} \quad \text{Equation 63}$$

$$\dot{H} = \dot{K} - \dot{W}\dot{C} \quad \text{Equation 64}$$

$$\ddot{H} = \ddot{K} - \ddot{W}\ddot{C} \quad \text{Equation 65}$$
[0141] Suppose a matrix system Z.DELTA.=H is partitioned into
blocks of the appropriate sizes as
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} c \\ d \end{bmatrix} \quad \text{Equation 66}$$
[0142] where the matrices A and D are both square.
[0143] Further, assume that matrix D is non-singular and can be
represented as a sparse block diagonal matrix. Then
$$\left[A - BD^{-1}C\right][a] = \left[c - BD^{-1}d\right] \quad \text{Equation 67}$$
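Equation 67 is the standard Schur-complement reduction of a partitioned linear system: eliminate b, solve a smaller system for a, then back-substitute. The numeric check below illustrates it on a small random system; all sizes and names are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

# Random partitioned system [[A, B], [C, D]] [a; b] = [c; d], with the
# diagonal blocks made dominant so that D and the Schur complement are
# both invertible.
A = rng.normal(size=(4, 4)) + 10 * np.eye(4)
B = rng.normal(size=(4, 3))
C = rng.normal(size=(3, 4))
D = rng.normal(size=(3, 3)) + 10 * np.eye(3)
rhs = rng.normal(size=7)
c, d = rhs[:4], rhs[4:]

# Full-system solution, for reference.
Z = np.block([[A, B], [C, D]])
full = np.linalg.solve(Z, rhs)

# Equation 67: reduced solve for a, then back-substitution for b.
Dinv = np.linalg.inv(D)
a = np.linalg.solve(A - B @ Dinv @ C, c - B @ Dinv @ d)
b = Dinv @ (d - C @ a)
```

The reduced-system solution (a, b) matches the full-system solution exactly, up to floating point.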
[0144] Applying Equation 67 to Equation 59 provides the reduced
normal matrix equation
$$\left[\dot{Z} - \bar{Z}\ddot{Z}^{-1}\bar{Z}^T\right]\left[\dot{\Delta}\right] = \left[\dot{H} - \bar{Z}\ddot{Z}^{-1}\ddot{H}\right] \quad \text{Equation 68}$$
[0145] The reduced normal equation matrix can be written as in
Equation 69
$$M\dot{\Delta} = C \quad \text{Equation 69}$$

[0146] where $M \equiv \dot{Z} - \bar{Z}\ddot{Z}^{-1}\bar{Z}^T$ and $C \equiv \dot{H} - \bar{Z}\ddot{Z}^{-1}\ddot{H}$.
[0147] Next it is of interest to examine the form of the components of the reduced system for an efficient implementation. Let $\hat{Z} = \bar{Z}\ddot{Z}^{-1}\bar{Z}^T$. Then

$$\underset{7m\times 7m}{\hat{Z}} = \begin{bmatrix} \bar{Z}_{11} & \cdots & \bar{Z}_{1n} \\ \vdots & & \vdots \\ \bar{Z}_{m1} & \cdots & \bar{Z}_{mn} \end{bmatrix} \begin{bmatrix} \ddot{Z}_1^{-1} & 0 & \cdots & 0 \\ 0 & \ddot{Z}_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \ddot{Z}_n^{-1} \end{bmatrix} \begin{bmatrix} \bar{Z}_{11}^T & \cdots & \bar{Z}_{m1}^T \\ \vdots & & \vdots \\ \bar{Z}_{1n}^T & \cdots & \bar{Z}_{mn}^T \end{bmatrix} \quad \text{Equation 70}$$

where each $\bar{Z}_{ij}$ is 7.times.3 and each $\ddot{Z}_j^{-1}$ is 3.times.3.
[0148] By extension,

$$\underset{7m\times 7m}{\hat{Z}} = \begin{bmatrix}
\sum_{j=1}^{n} \bar{Z}_{1j}\ddot{Z}_j^{-1}\bar{Z}_{1j}^T & \sum_{j=1}^{n} \bar{Z}_{1j}\ddot{Z}_j^{-1}\bar{Z}_{2j}^T & \cdots & \sum_{j=1}^{n} \bar{Z}_{1j}\ddot{Z}_j^{-1}\bar{Z}_{mj}^T \\
\sum_{j=1}^{n} \bar{Z}_{2j}\ddot{Z}_j^{-1}\bar{Z}_{1j}^T & \sum_{j=1}^{n} \bar{Z}_{2j}\ddot{Z}_j^{-1}\bar{Z}_{2j}^T & \cdots & \sum_{j=1}^{n} \bar{Z}_{2j}\ddot{Z}_j^{-1}\bar{Z}_{mj}^T \\
\vdots & \vdots & & \vdots \\
\sum_{j=1}^{n} \bar{Z}_{mj}\ddot{Z}_j^{-1}\bar{Z}_{1j}^T & \sum_{j=1}^{n} \bar{Z}_{mj}\ddot{Z}_j^{-1}\bar{Z}_{2j}^T & \cdots & \sum_{j=1}^{n} \bar{Z}_{mj}\ddot{Z}_j^{-1}\bar{Z}_{mj}^T
\end{bmatrix} \quad \text{Equation 71}$$
[0149] The blocks $\bar{Z}_{ij}$ appearing in Equation 71 are equivalent to the $\bar{N}_{ij}$ defined in Equation 47.
[0150] The assumption that errors in the a priori ground points are
uncorrelated yields Equation 72
$$\underset{3n\times 3n}{\ddot{W}} = \begin{bmatrix} \ddot{\Sigma}_1^{-1} & 0 & \cdots & 0 \\ 0 & \ddot{\Sigma}_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \ddot{\Sigma}_n^{-1} \end{bmatrix} \quad \text{Equation 72}$$
[0151] where {umlaut over (.SIGMA.)}.sub.j.sup.-1 is the inverse of
the a priori covariance matrix for ground point j. Thus
$$\underset{3\times 3}{\ddot{Z}_j^{-1}} = \left[\ddot{N}_j + \ddot{W}_j\right]^{-1} \quad \text{Equation 73}$$
[0152] The general row and column term for {circumflex over (Z)}
can then be given by
$$\underset{7\times 7}{\hat{Z}_{r,c}} = \sum_{j=1}^{n} \bar{Z}_{rj}\,\ddot{Z}_j^{-1}\,\bar{Z}_{cj}^T \quad \text{Equation 74}$$
[0153] and, by the definition of $\bar{Z}_{ij}$, $\hat{Z}_{r,c}$ is zero if and only if images r and c have no ground points in common. Also note that $\hat{Z}$ is block symmetric. Thus, in its formation, only the upper block triangle need be formed, followed by reflection of the upper right triangle to the lower left triangle to complete the matrix formation.
[0154] The matrix M can thus be written as in Equation 75
$$\underset{7m\times 7m}{M} = \begin{bmatrix} \dot{Z}_1 & 0 & \cdots & 0 \\ 0 & \dot{Z}_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \dot{Z}_m \end{bmatrix} - \begin{bmatrix}
\sum_{j=1}^{n} \bar{Z}_{1j}\ddot{Z}_j^{-1}\bar{Z}_{1j}^T & \cdots & \sum_{j=1}^{n} \bar{Z}_{1j}\ddot{Z}_j^{-1}\bar{Z}_{mj}^T \\
\vdots & & \vdots \\
\sum_{j=1}^{n} \bar{Z}_{mj}\ddot{Z}_j^{-1}\bar{Z}_{1j}^T & \cdots & \sum_{j=1}^{n} \bar{Z}_{mj}\ddot{Z}_j^{-1}\bar{Z}_{mj}^T
\end{bmatrix} \quad \text{Equation 75}$$
[0155] The reduced matrix M can be formed by first storing the diagonal entries of $\dot{Z}$ and then subtracting the summed entries of the subtrahend in Equation 75 (namely the $\hat{Z}_{r,c}$ defined in Equation 74).

[0156] Since the subblocks of the subtrahend are merely summations over the ground point indexes j, the matrix M can be built by iterating over the ground points (assuming the on-diagonal blocks of the minuend of Equation 75 were formed in advance) and subtracting out the contributions for a particular ground point in the appropriate place within M.
[0157] The constant column vector C can be formed similarly with
some of the same matrices:
$$C = \begin{bmatrix} \dot{H}_1 \\ \vdots \\ \dot{H}_m \end{bmatrix} - \begin{bmatrix} \sum_{j=1}^{n} \bar{Z}_{1j}\ddot{Z}_j^{-1}\ddot{H}_j \\ \vdots \\ \sum_{j=1}^{n} \bar{Z}_{mj}\ddot{Z}_j^{-1}\ddot{H}_j \end{bmatrix} \quad \text{Equation 76}$$
[0158] After the matrices M and C are built, the solution vector
for the adjustable parameters from the reduced system can be
computed as
$$\dot{\Delta} = M^{-1} C \quad \text{Equation 77}$$
[0159] The solution vector can be decomposed into per-image adjustable vectors $\dot{\Delta}_i$ for each image i as in Equation 78:

$$\dot{\Delta} = \begin{bmatrix} \dot{\Delta}_1 \\ \dot{\Delta}_2 \\ \vdots \\ \dot{\Delta}_m \end{bmatrix} \quad \text{Equation 78}$$
[0160] After the solution vector {dot over (.DELTA.)} for the
image-adjustable parameters is obtained, the solution vector
{umlaut over (.DELTA.)} for corrections to the ground point
positions can be extracted (or "unfolded") from the reduced system.
To formulate the extraction, Equation 67 can be used to obtain
Equation 79
$$\underset{3n\times 1}{\ddot{\Delta}} = \ddot{Z}^{-1}\left[\ddot{H} - \bar{Z}^T\dot{\Delta}\right] \quad \text{Equation 79}$$

If

$$\underset{3n\times 1}{\ddot{\Delta}} = \begin{bmatrix} \ddot{\Delta}_1 \\ \ddot{\Delta}_2 \\ \vdots \\ \ddot{\Delta}_n \end{bmatrix} \quad \text{Equation 80}$$
[0161] represents the correction vector for the ground points
then
$$\begin{bmatrix} \ddot{\Delta}_1 \\ \ddot{\Delta}_2 \\ \vdots \\ \ddot{\Delta}_n \end{bmatrix} = \begin{bmatrix} \ddot{Z}_1^{-1} & 0 & \cdots & 0 \\ 0 & \ddot{Z}_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \ddot{Z}_n^{-1} \end{bmatrix} \left[ \begin{bmatrix} \ddot{H}_1 \\ \ddot{H}_2 \\ \vdots \\ \ddot{H}_n \end{bmatrix} - \begin{bmatrix} \bar{Z}_{11}^T & \cdots & \bar{Z}_{m1}^T \\ \vdots & & \vdots \\ \bar{Z}_{1n}^T & \cdots & \bar{Z}_{mn}^T \end{bmatrix} \begin{bmatrix} \dot{\Delta}_1 \\ \dot{\Delta}_2 \\ \vdots \\ \dot{\Delta}_m \end{bmatrix} \right] \quad \text{Equation 81}$$
[0162] where {dot over (.DELTA.)}.sub.i is the adjustable parameter
correction vector for image i. Thus
$$\underset{3\times 1}{\ddot{\Delta}_j} = \ddot{Z}_j^{-1}\left[\ddot{H}_j - \sum_{i \in I_j} \bar{Z}_{ij}^T \dot{\Delta}_i\right] \quad \text{Equation 82}$$
[0163] where I.sub.j is as defined in Table 5 (the index set of
images upon which ground point j is an observation).
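Putting Equations 74 through 77 and Equation 82 together, the fold/solve/unfold sequence can be sketched as follows. The shapes (7 parameters per image, 3 per ground point) and all names are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def fold_and_solve(Zdot, Zbar, Zddot_blocks, Hdot, Hddot):
    """Ground point folding (Equations 74-77) followed by unfolding
    (Equation 82). Zdot is 7m x 7m; Zbar[i][j] is the 7x3 block for
    image i / ground point j; Zddot_blocks[j] is the 3x3 block for
    ground point j; Hdot (7m,) and Hddot (3n,) are the right-hand sides."""
    m, n = len(Zbar), len(Zddot_blocks)
    Zinv = [np.linalg.inv(Zj) for Zj in Zddot_blocks]

    # Reduced normal matrix M (Equations 74-75) and constant vector C
    # (Equation 76), built by iterating over the ground points.
    M = Zdot.copy()
    C = Hdot.copy()
    for r in range(m):
        for c in range(m):
            for j in range(n):
                M[7*r:7*r+7, 7*c:7*c+7] -= Zbar[r][j] @ Zinv[j] @ Zbar[c][j].T
        for j in range(n):
            C[7*r:7*r+7] -= Zbar[r][j] @ Zinv[j] @ Hddot[3*j:3*j+3]

    ddot = np.linalg.solve(M, C)  # Equation 77: image corrections

    # Unfold the ground point corrections (Equation 82).
    dddot = np.zeros(3 * n)
    for j in range(n):
        acc = Hddot[3*j:3*j+3].copy()
        for i in range(m):
            acc -= Zbar[i][j].T @ ddot[7*i:7*i+7]
        dddot[3*j:3*j+3] = Zinv[j] @ acc
    return ddot, dddot
```

The folded solve plus unfold reproduces the solution of the full (7m+3n) system while only ever factoring a 7m by 7m matrix and 3 by 3 blocks.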
[0164] This section provides formulations for extraction of a
posteriori error covariances for ground points. If a priori sensor
model error estimates are available (and reliable), the errors may
be propagated to the space of the registration error models. In
this case, the error propagation is a rigorous predicted error for
the accuracy of the a posteriori ground point locations.
[0165] The a posteriori error covariances of the image parameters
are the appropriate subblocks of the inverse of the reduced normal
matrix M.sup.-1 from Equation 69 (after application of the variance
of unit weight, as described at the end of this section). For the
full normal matrix solution, the a posteriori error covariance can
be the inverse of the normal matrix, Z.sup.-1, times the variance
of unit weight. For the reduced system, however, the a posteriori
error covariances of the ground points can be extracted from
M.sup.-1 by unfolding. To facilitate this, the full normal matrix
can be written as
$$Z = \begin{bmatrix} \underset{7m\times 7m}{\dot{Z}} & \underset{7m\times 3n}{\bar{Z}} \\ \underset{3n\times 7m}{\bar{Z}^T} & \underset{3n\times 3n}{\ddot{Z}} \end{bmatrix} \quad \text{Equation 83}$$
[0166] Denote the inverse matrix blocks as
$$Z^{-1} \equiv \begin{bmatrix} \dot{\Sigma} & \bar{\Sigma} \\ \bar{\Sigma}^T & \ddot{\Sigma} \end{bmatrix} \quad \text{Equation 84}$$
[0167] Note that $\dot{\Sigma}$ and $\ddot{\Sigma}$ as defined here are distinctly different from those defined in previous sections. (The symbols in the present section are a posteriori covariances; those in previous sections are a priori covariances.) However, this subtle distinction is not problematic if the appropriate context is adhered to.
$$\underset{3n\times 3n}{\ddot{\Sigma}} = \ddot{Z}^{-1} + \ddot{Z}^{-1}\bar{Z}^T\dot{\Sigma}\bar{Z}\ddot{Z}^{-1} \quad \text{Equation 85}$$
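Equation 85 is the standard block-inverse identity for a symmetric partitioned matrix: the ground point covariance block of the full inverse can be recovered from the reduced-system quantities alone. A small numeric check (sizes and names are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

def spd(k):
    """Random symmetric positive definite k x k block."""
    A = rng.normal(size=(k, k))
    return A @ A.T + 10.0 * np.eye(k)

# Partitioned normal matrix Z = [[Zdot, Zbar], [Zbar^T, Zddot]].
Zdot, Zddot = spd(5), spd(4)
Zbar = rng.normal(size=(5, 4))
Z = np.block([[Zdot, Zbar], [Zbar.T, Zddot]])

Zinv = np.linalg.inv(Z)
Sigma_dot = Zinv[:5, :5]    # image-parameter covariance block
Sigma_ddot = Zinv[5:, 5:]   # ground-point covariance block

# Equation 85: rebuild the ground point block without inverting full Z.
Zdd_inv = np.linalg.inv(Zddot)
check = Zdd_inv + Zdd_inv @ Zbar.T @ Sigma_dot @ Zbar @ Zdd_inv
```

The reconstructed block `check` agrees with the corresponding block of the full inverse, which is why the reduced system suffices for error propagation.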
[0168] The a posteriori covariance between ground points r and c
can be represented as block element {umlaut over (.SIGMA.)}.sub.r,c
of {umlaut over (.SIGMA.)}. With n as the number of ground points
and m as the number of images,
$$\ddot{\Sigma} = \begin{bmatrix} \ddot{\Sigma}_{11} & \cdots & \ddot{\Sigma}_{1n} \\ \vdots & \ddots & \vdots \\ \ddot{\Sigma}_{n1} & \cdots & \ddot{\Sigma}_{nn} \end{bmatrix} = \begin{bmatrix} \ddot{Z}_1^{-1} & & 0 \\ & \ddots & \\ 0 & & \ddot{Z}_n^{-1} \end{bmatrix} + \begin{bmatrix} \ddot{Z}_1^{-1} & & 0 \\ & \ddots & \\ 0 & & \ddot{Z}_n^{-1} \end{bmatrix} \bar{Z}^T \dot{\Sigma} \bar{Z} \begin{bmatrix} \ddot{Z}_1^{-1} & & 0 \\ & \ddots & \\ 0 & & \ddot{Z}_n^{-1} \end{bmatrix} \quad \text{Equation 86}$$
[0169] The $r^{th}$ row of $\ddot{\Sigma}$ involves only $\ddot{Z}_r^{-1}$ of the first $\ddot{Z}^{-1}$ matrix in term two of Equation 86. Similarly, the $c^{th}$ column of $\ddot{\Sigma}$ involves only $\ddot{Z}_c^{-1}$ of the second $\ddot{Z}^{-1}$ matrix in term two. Thus

$$\ddot{\Sigma}_{r,c} = \delta(r,c)\,\ddot{Z}_c^{-1} + \ddot{Z}_r^{-1}\left[\bar{Z}^T\dot{\Sigma}\bar{Z}\right]_{r,c}\ddot{Z}_c^{-1} \quad \text{Equation 87}$$
[0170] where the delta function can be

$$\delta(r,c) \equiv \begin{cases} 1 & \text{if } r = c \\ 0 & \text{otherwise} \end{cases} \quad \text{Equation 88}$$
[0171] Now the form of the (r,c) block of $\bar{Z}^T\dot{\Sigma}\bar{Z}$ is derived. Let

$$\underset{3n\times 3n}{G} \equiv \bar{Z}^T\dot{\Sigma}\bar{Z} = \begin{bmatrix} \bar{Z}_{11}^T & \cdots & \bar{Z}_{m1}^T \\ \vdots & & \vdots \\ \bar{Z}_{1n}^T & \cdots & \bar{Z}_{mn}^T \end{bmatrix} \begin{bmatrix} \dot{\Sigma}_{11} & \dot{\Sigma}_{12} & \cdots & \dot{\Sigma}_{1m} \\ \dot{\Sigma}_{21} & \dot{\Sigma}_{22} & \cdots & \dot{\Sigma}_{2m} \\ \vdots & \vdots & & \vdots \\ \dot{\Sigma}_{m1} & \dot{\Sigma}_{m2} & \cdots & \dot{\Sigma}_{mm} \end{bmatrix} \begin{bmatrix} \bar{Z}_{11} & \cdots & \bar{Z}_{1n} \\ \vdots & & \vdots \\ \bar{Z}_{m1} & \cdots & \bar{Z}_{mn} \end{bmatrix} \quad \text{Equation 89}$$
[0172] The $r^{th}$ row of G involves only the $r^{th}$ row of $\bar{Z}^T$, and the $c^{th}$ column of G involves only the $c^{th}$ column of $\bar{Z}$. Thus

$$\underset{3\times 3}{G_{r,c}} = \begin{bmatrix} \sum_{s=1}^{m} \bar{Z}_{sr}^T\dot{\Sigma}_{s1} & \sum_{s=1}^{m} \bar{Z}_{sr}^T\dot{\Sigma}_{s2} & \cdots \end{bmatrix} \begin{bmatrix} \bar{Z}_{1c} \\ \bar{Z}_{2c} \\ \vdots \\ \bar{Z}_{mc} \end{bmatrix} = \sum_{t=1}^{m}\left[\sum_{s=1}^{m} \bar{Z}_{sr}^T\dot{\Sigma}_{st}\right]\bar{Z}_{tc} \quad \text{Equation 90}$$
[0173] Now $\bar{Z}_{ij} = 0$ if ground point j is not an observation on image i.

[0174] Thus

$$G_{r,c} = \sum_{t \in I_c}\left[\sum_{s \in I_r} \bar{Z}_{sr}^T\dot{\Sigma}_{st}\right]\bar{Z}_{tc} \quad \text{Equation 91}$$
[0175] where $I_j$ is the index set of images upon which ground point j is an observation. Substituting Equation 91 into Equation 87 yields

$$\ddot{\Sigma}_{r,c} = \delta(r,c)\,\ddot{Z}_c^{-1} + \ddot{Z}_r^{-1}\left[\sum_{t \in I_c}\left[\sum_{s \in I_r} \bar{Z}_{sr}^T\dot{\Sigma}_{st}\right]\bar{Z}_{tc}\right]\ddot{Z}_c^{-1} \quad \text{Equation 92}$$
[0176] The a posteriori covariance is usually defined by scaling
the inverse of the normal matrix by an estimate of the variance of
unit weight. An estimate of the variance of unit weight is denoted
as [.sigma..sup.(p)].sup.2 and is provided in Equation 57. Thus,
the above formulation can be used, but instead defining
$$\begin{bmatrix} \dot{\Sigma} & \bar{\Sigma} \\ \bar{\Sigma}^T & \ddot{\Sigma} \end{bmatrix} = \left[\sigma^{(p)}\right]^2 \left[Z^{-1}\right] \quad \text{Equation 93}$$
[0177] For a full normal matrix solution, Z.sup.-1 is readily
available, thus the a posteriori covariance of the error model
parameters and ground points can be the right hand side of Equation
93.
[0178] The right hand summand of Equation 92 includes the factor $[\sigma^{(p)}]^2$, since it includes $\dot{\Sigma}_{st}$. However, the left hand summand does not include the factor. This can be compensated for by a modified form of Equation 92:

$$\ddot{\Sigma}_{r,c} = \left[\sigma^{(p)}\right]^2\delta(r,c)\,\ddot{Z}_c^{-1} + \ddot{Z}_r^{-1}\left[\sum_{t \in I_c}\left[\sum_{s \in I_r} \bar{Z}_{sr}^T\dot{\Sigma}_{st}\right]\bar{Z}_{tc}\right]\ddot{Z}_c^{-1} \quad \text{Equation 94}$$
[0179] If the standard error of unit weight $\sigma^{(p)}$ is deemed to be unreliable (e.g., is much greater than unity), this may be an indicator of improper (or incorrect) a priori error covariance in the process. One can still, however, provide a reliable error estimate from the least squares process by simply forcing the standard error to one (e.g., by setting $\sigma^{(p)} \leftarrow 1.0$ in Equations 93 and 94).
[0180] FIG. 10 illustrates an example of an embodiment of the
operation 818. The operation 818 can include 3D data set
registration with error propagation. The operation 818, as
illustrated, includes initializing solution and corrections, at
operation 1002; determining discrepancies, at operation 1004;
determining the normal equation, at operation 1006; updating
parameters based on the determined normal equation, at operation
1008; determining discrepancies, at operation 1010; determining
error, at operation 1012; and compensating misregistration of the
first 3D point set 102A and the second 3D point set 102B, at
operation 1014.
[0181] The operation 1002 can include setting the solution vector X and the correction vector $\Delta X$ to the zero vector:

$$X = 0_{7\times 1} \qquad \Delta X = 0_{7\times 1}$$
[0182] The solution vector X can be set to a fixed-point location
for the linearization. If an a priori estimate is available, it can
be used here in place of the zero vector.
[0183] The operation 1004 can include computing the discrepancy
vector for each observation as provided in Equation 29. The
operation 1006 can include building the normal equations matrices
and solving for the correction vector as provided in Equation 53.
The operation 1008 can include updating the parameter vector for
the current iteration as provided in Equations 51 and 52. Details
of the operation 1008 for unfolding of the ground points for the
folded normal equation solution is provided via pseudocode
below.
[0184] The operation 1010 can include computing the discrepancies as provided in Equation 29. The operation 1012 can include computing a standard error of unit weight via Equation 57. Note that the standard error can be the square root of the left hand side of Equation 57 (e.g., $\sigma^{(p)} = \sqrt{[\sigma^{(p)}]^2}$).
[0185] If the delta between the current and previous standard error of unit weight is less than the convergence criterion in absolute value, the solution has nominally converged. To accommodate blunder rejection, the convergence criterion check can be augmented with a check to see if the blunder weights should be used in continuation of the solution ("useBW", indicating to use "blunder-checking weighting"). If convergence occurs and useBW is true, this is an indicator to perform blunder checking, this time using a normalized residual computation in order to check for blunders on the next iteration.
[0186] If useBW is true, blunders can be computed. If there are
blunders remaining, the blunder "cycle" number is incremented and
the process is repeated with the correction vector reset to a
priori values (e.g., go to operation 1002). If there are no
blunders remaining, a check can be performed to see if the number
of post convergence blunder cycles can be set to zero. This check
can be performed to effectively force one more solution after all
blunders have been eliminated.
[0187] If useBW is false and it is currently a first iteration of a blundering solution, useBW can be set to true. This has the effect of forcing the normalized residual blunder iteration for determining the blunders on subsequent iterations. In this case, a solution has converged but normalized blunder residuals have not been computed. Setting useBW to true forces this to happen on the next solution iteration. The solution can be iterated by going to the operation 1006. If there are no more blunders and the number of blunders is not zero, this indicates the "non-blunder iteration" solution has converged.
[0188] The operation 818 can include providing a report that
includes an iteration number, current correction vector .DELTA.X,
current iteration estimates of parameters and ground points (e.g.,
as computed in equations 51 and 52), standard error of unit weight
(e.g., as provided in Equation 55). The operation 818 can include a
check for non-convergence by examining the current iteration number
with a maximum number of iterations, M. If the number of iterations
exceeds the maximum, stop the iteration process. The solution did
not converge. An exception can be raised and the operation 818 can
be complete.
[0189] The following is a pseudocode outline for the operation 818
for computing the full normal equations solution. This first
pseudocode does not include ground point folding.
[0190] The names in {braces} allude to method (e.g., function)
names in a software implementation. Also, within the development
below, ground point indexes, ground point observation indexes and
image indexes are assumed to be zero-relative. For efficiency in
the implementation, the following elements can be cached in a
per-image workspace object, which is updated with each
iteration:
1. Values for the trigonometric functions sω, cω, sΦ, cΦ, sκ, cκ as given in Equation 4.
2. Three 3×3 partial derivative matrices of T with respect to the three angles, as given in the following Equations 95-97:

T_ω ≡ ∂T/∂ω = [ 0   -sωsκ + cωsΦcκ   cωsκ + sωsΦcκ
                0   -sωcκ - cωsΦsκ   cωcκ - sωsΦsκ
                0   -cωcΦ            -sωcΦ          ]   (Equation 95)

T_Φ ≡ ∂T/∂Φ = [ -sΦcκ   sωcΦcκ    -cωcΦcκ
                 sΦsκ   -sωcΦsκ     cωcΦsκ
                 cΦ      sωsΦ      -cωsΦ    ]   (Equation 96)

T_κ ≡ ∂T/∂κ = [ -cΦsκ    cωcκ - sωsΦsκ    sωcκ + cωsΦsκ
                -cΦcκ   -cωsκ - sωsΦcκ   -sωsκ + cωsΦcκ
                 0        0                0             ]   (Equation 97)

3. Rotation matrix T_i as given in Equation 4.
4. The 3×1 vector V̄_i.
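The partial derivatives in Equations 95-97 can be verified numerically against finite differences. The following Python/NumPy sketch is illustrative only and is not part of the application: the `rotation` function below is a sequential ω-Φ-κ rotation reconstructed to be consistent with Equations 95-97, standing in for the T of Equation 4, which is defined elsewhere in the application.

```python
import numpy as np

def rotation(omega, phi, kappa):
    """Sequential omega-phi-kappa rotation matrix consistent with the
    partial derivatives of Equations 95-97 (stand-in for Equation 4's T)."""
    so, co = np.sin(omega), np.cos(omega)
    sp, cp = np.sin(phi), np.cos(phi)
    sk, ck = np.sin(kappa), np.cos(kappa)
    return np.array([
        [cp * ck,   co * sk + so * sp * ck,  so * sk - co * sp * ck],
        [-cp * sk,  co * ck - so * sp * sk,  so * ck + co * sp * sk],
        [sp,       -so * cp,                 co * cp],
    ])

def rotation_partials(omega, phi, kappa):
    """Analytic partials T_omega, T_phi, T_kappa per Equations 95-97."""
    so, co = np.sin(omega), np.cos(omega)
    sp, cp = np.sin(phi), np.cos(phi)
    sk, ck = np.sin(kappa), np.cos(kappa)
    T_om = np.array([
        [0.0, -so * sk + co * sp * ck,  co * sk + so * sp * ck],
        [0.0, -so * ck - co * sp * sk,  co * ck - so * sp * sk],
        [0.0, -co * cp,                -so * cp],
    ])
    T_ph = np.array([
        [-sp * ck,  so * cp * ck, -co * cp * ck],
        [ sp * sk, -so * cp * sk,  co * cp * sk],
        [ cp,       so * sp,      -co * sp],
    ])
    T_ka = np.array([
        [-cp * sk,  co * ck - so * sp * sk,  so * ck + co * sp * sk],
        [-cp * ck, -co * sk - so * sp * ck, -so * sk + co * sp * ck],
        [0.0, 0.0, 0.0],
    ])
    return T_om, T_ph, T_ka
```

A central-difference check with a small step agrees with the analytic matrices to near machine precision, and the reconstructed T is orthonormal as a rotation must be.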
[0191] The pseudocode begins by setting the non-linear least
squares iteration index (p) to zero.
1. {initializeData} Store nominal initial values for the solved parameters for each image i ∈ {0, ..., m − 1}. The initial values for V̄_i^(0) and θ_i^(0) can be set along with s_i^(0) as in Equation 98:
   Ċ_i^(0) ≡ [ V̄_i^(0) ; θ_i^(0) ; s_i^(0) ]   (7 × 1)   (Equation 98)
2. {initializeData} Compute:
   a. Initial ground point coordinates via Equation 11 for each ground point j ∈ {0, ..., n − 1}. These form C̈_j^(0).
   b. Initial image cache data as described previously.
3. {outputIterationReport} Output the initial iteration report (input parameters, initial ground point and ground point observation coordinates).
4. {initializeMatrices} Block partition the Ṅ portion of Z into m subblocks, each of size 6 × 6.
5. {initializeMatrices} Block partition the N̈ portion of Z into n subblocks, each of size 3 × 3.
6. {initializeMatrices} Block partition H similarly.
7. {computeAndStoreDiscrepancies} For each ground point index j ∈ {0, ..., n − 1} and each observation of that ground point i ∈ O_j, compute the discrepancy vector ε_ij given in Equation 29 as follows:
   for j ∈ {0, ..., n − 1} {
       Fetch the most recent ground point position V̂_j.
       for i ∈ I_j and observation index b ∈ O_j {
           a) Retrieve image-cached values for T_i and V̄_i.
           b) Retrieve ground point observation Ṽ_ij = Ṽ_b.
           c) Apply the observation equation to obtain the projected value for V̂_j:
              V_ij = (1 + s_i)T_i(V̂_j − V̄_i)   (Equation 99)
           d) Compute and store the discrepancy vector for observation b as in Equation 29:
              ε_b ≡ ε_ij = −[Ṽ_ij − V_ij]   (Equation 100)
       } // end for i
   } // end for j
8. Compute the standard error of unit weight as in Equation 57.
9. {buildNormalMatrices + initializeWeights} Zero Z and H and initialize Z (and likewise H) with the block weight matrices on the diagonal. This involves setting the blocks of Z to the subblocks of Ẇ and Ẅ, and setting the subblocks (subrows) of H to −Ẇ Ċ and −Ẅ C̈.
10. {sumInPartials} Loop over ground points and the images containing those ground points, and sum the contributions of the Ḃ and B̈ matrices into Z and H:
    for j ∈ {0, ..., n − 1} {
        for i ∈ I_j {
            Retrieve ε_b ≡ ε_ij as computed in Equation 100.
            Compute Ḃ_ij and B̈_ij as in Equations 101 and 102:
               Ḃ_ij = [ (1 + s_i)T_i   (1 + s_i)A_ij   −T_i Y_ij ]   (3 × 7)   (Equation 101)
               B̈_ij = −(1 + s_i)T_i   (3 × 3)   (Equation 102)
            Retrieve observation weight matrix W̃_ij.
            Ṅ_i : Sum Ḃ_ijᵀ W̃_ij Ḃ_ij into Z.block(i, i)
            N̈_j : Sum B̈_ijᵀ W̃_ij B̈_ij into Z.block(m + j, m + j)
            N̄_ij : Sum Ḃ_ijᵀ W̃_ij B̈_ij into Z.block(i, m + j)
            Ċ_i : Sum Ḃ_ijᵀ W̃_ij ε_ij into H.block(i, 0)
            C̈_j : Sum B̈_ijᵀ W̃_ij ε_ij into H.block(m + j, 0)
        } // end i
    } // end j
11. {solveNormalEquation} Form the lower transpose of the Z matrix and solve the system Δ = Z⁻¹H. Note that the normal equation system is a symmetric system (e.g., the normal matrix Z is symmetric); thus, a symmetric system solver can be used, in which case it may not be necessary to form the lower triangle.
12. {updateParameters} Update all the parameters by extracting the corrections from the Δ matrix as in Equations 51 and 52.
13. If (p) ≠ 0, compare with the previous root mean square (RMS) of residuals and check for convergence. The convergence condition can be
    |R^(p) − R^(p−1)| < δ   (Equation 103)
14. {computePostConvergenceResiduals + checkForBlunders} If convergence has been reached, perform automatic blunder editing. If convergence has not been reached, increment the iteration index,
    (p) ← (p + 1)   (Equation 104)
    and go to step 7.
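The assembly and solve of steps 7-12 can be illustrated with a toy Python/NumPy example. This is not the disclosed implementation: the observation equation is deliberately simplified to V_ij = V̂_j − V̄_i (identity rotation, zero scale, 3-vector image parameters rather than 7), so a single Gauss-Newton step recovers the simulated parameters; the strong prior pinning image 0 is an assumption added here to fix the translation gauge.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5                         # number of images and ground points

# Simulated truth: per-image translations V-bar_i and ground points G_j
Vbar_true = rng.normal(size=(m, 3))
G_true = rng.normal(size=(n, 3))
# Every ground point is observed in every image under the toy model V_ij = G_j - Vbar_i
obs = {(i, j): G_true[j] - Vbar_true[i] for i in range(m) for j in range(n)}

# Perturbed initial estimates
Vbar = Vbar_true + 0.1 * rng.normal(size=(m, 3))
G = G_true + 0.1 * rng.normal(size=(n, 3))

N = 3 * (m + n)
Z = np.zeros((N, N))                # full normal matrix: image blocks, then GP blocks
H = np.zeros(N)                     # right-hand column vector

# Step 9 analogue: weights on the diagonal.  A tiny Tikhonov weight regularizes,
# and a strong prior on image 0 fixes the datum (an assumption of this sketch).
Z += 1e-6 * np.eye(N)
Z[0:3, 0:3] += 1e6 * np.eye(3)
H[0:3] += 1e6 * (Vbar_true[0] - Vbar[0])

# Step 10 analogue: sum per-observation partial-derivative contributions
I3 = np.eye(3)
for (i, j), v in obs.items():
    eps = v - (G[j] - Vbar[i])      # discrepancy vector (role of Equation 100)
    Bdot, Bddot = -I3, I3           # partials w.r.t. image and ground parameters
    r, c = 3 * i, 3 * (m + j)
    Z[r:r+3, r:r+3] += Bdot.T @ Bdot        # N-dot_i block
    Z[c:c+3, c:c+3] += Bddot.T @ Bddot      # N-ddot_j block
    Z[r:r+3, c:c+3] += Bdot.T @ Bddot       # cross block
    Z[c:c+3, r:r+3] += Bddot.T @ Bdot       # lower transpose (step 11 analogue)
    H[r:r+3] += Bdot.T @ eps
    H[c:c+3] += Bddot.T @ eps

# Steps 11-12 analogue: solve and apply corrections (exact here: the model is linear)
delta = np.linalg.solve(Z, H)
Vbar = Vbar + delta[:3 * m].reshape(m, 3)
G = G + delta[3 * m:].reshape(n, 3)
```

Because the toy observation equation is linear, one iteration drives the parameters to the simulated truth; the real observation equation of Equation 99 requires the iteration loop of steps 7-14.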
[0192] What follows is pseudocode for the operation 818 for
building the reduced normal equations system, computing corrections
to ground point positions and performing error propagation via
extraction of data from the reduced normal matrix. This portion of
the pseudocode includes ground point coordinate folding.
[0193] As in the full normal solution provided in the previous
pseudocode, the same per-image elements are cached in a workspace
object and updated with each iteration. The algorithm for the
reduced solution can be broken into two major portions: priming and
folding. Priming involves storing the weights and the contributions
along the diagonal of the full normal equation matrix (and the
corresponding data for the right-hand column vector H); this
corresponds to the diagonal portion of Z. Thus, priming involves
formation of the minuends of Equation 75 and Equation 76. Folding can
include incorporation of the subtrahends of the aforementioned
equations.
[0194] To provide an efficient implementation, a ground point
workspace can be created. The workspace can include the following
elements: Z̈_j, Ḧ_j, and Z̈_j⁻¹, each indexed by ground point in the
ground point workspace. The technique can begin by setting the
non-linear least squares iteration index (p) to zero.
1. {initializeData} Store nominal initial values for the solved parameters for each image i ∈ {0, ..., m − 1}. The initial values for V̄_i^(0) and θ_i^(0) can also be set along with s_i^(0) as in Equation 98.
2. {initializeData} Compute:
   a. Initial ground point coordinates via Equation 11 for each ground point j ∈ {0, ..., n − 1}. These form C̈_j^(0).
   b. Initial image cache data as described above.
3. {outputIterationReport} Output the initial iteration report (input parameters, initial ground point, and ground point observation coordinates).
4. {initializeMatrices} Block partition the reduced normal matrix M into m subblocks, each of size 6 × 6. Block partition the reduced column vector C similarly.
5. {computeAndStoreDiscrepancies} For each ground point index j ∈ {0, ..., n − 1} and each observation of that ground point i ∈ O_j, compute the discrepancy vector ε_ij given in Equation 29 as:
   for j ∈ {0, ..., n − 1} {
       Fetch the most recent ground point position V̂_j.
       for i ∈ I_j and observation index b ∈ O_j {
           a) Retrieve image-cached values for T_i and V̄_i.
           b) Retrieve ground point observation Ṽ_ij = Ṽ_b.
           c) Apply the observation equation to obtain the projected value for V̂_j from Equation 99.
           d) Compute and store the discrepancy vector for observation b as in Equation 100.
       } // end for i
   } // end for j
6. Compute the standard error of unit weight as in Equation 57.
7. {buildNormalMatrices}
   a. Zero the reduced matrices M and C.
   b. {initializeWeights} Initialize M (and likewise C) with the block weight matrices on the diagonal. This involves setting the blocks of M to the subblocks of Ẇ and setting the subblocks (subrows) of C to −Ẇ Ċ.
   c. {sumInPartialsAndFoldGPs} Form the main diagonal and the ground point matrices Z̈_j by iterating over ground points, performing the folding for each ground point j:
   for j ∈ {0, ..., n − 1} {
       PRIMING:
       Store Ẅ_j into Z̈_j of the GPWS.
       Store −Ẅ_j C̈_j into Ḧ_j of the GPWS.
       for i ∈ I_j (where I_j is the set of image indexes upon which GP j is an observation) {
           Form the partial derivatives:
               Build Ḃ_ij as in Equation 101.
               Build B̈_ij as in Equation 102.
           Retrieve the discrepancy vector ε_ij as computed in Equation 100.
           Retrieve the observation weight matrix W̃_ij.
           Sum in the contribution of GP j's observation in image i within M:
               Sum Ḃ_ijᵀ W̃_ij Ḃ_ij into M.block(i, i).
               Sum Ḃ_ijᵀ W̃_ij ε_ij into C.block(i, 0).
           Sum in image i's contribution to Z̈_j and Ḧ_j:
               Sum B̈_ijᵀ W̃_ij B̈_ij into Z̈_j.
               Sum B̈_ijᵀ W̃_ij ε_ij into Ḧ_j.
       } // end i
       Invert Z̈_j and store it into the GPWS as Z̈_j⁻¹.
       FOLDING INTO M (note: the iteration loop over j is still valid):
       for r ∈ I_j {
           Form Z̄_rj = Ḃ_rjᵀ W̃_rj B̈_rj as in Equations 69 and 47.
           for c ∈ I_j | c ≥ r {
               Form Z̄_cjᵀ.
               Sum −Z̄_rj Z̈_j⁻¹ Z̄_cjᵀ into M.block(r, c).
           } // end c
           Sum −Z̄_rj Z̈_j⁻¹ Ḧ_j into C.block(r, 0).
       } // end r
   } // end j
8. Complete the lower diagonal entries of M and solve Δ̇ = M⁻¹C. As in the full normal equation solution, note that M is symmetric and thus a symmetric system solver is in order.
9. First use the pseudocode provided below to compute the corrections to the ground points; then update all the parameters from the Δ̇ vector.
10. If (p) ≠ 0, compare with the previous RMS of residuals and check for convergence. The convergence condition is
    |R^(p) − R^(p−1)| < δ   (Equation 105)
11. {computePostConvergenceResiduals + checkForBlunders} If convergence has been reached, perform automatic blunder editing as detailed elsewhere. After there are no more blunders, proceed with error propagation {propagateErrors}. If convergence has not been reached, increment the iteration index as in Equation 104 and go to step 5.
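The folding and unfolding above amount to solving the full normal system by a Schur complement on the ground point block. The following sketch is an illustration, not the disclosed implementation: it verifies on a random symmetric positive-definite system that the reduced solution M⁻¹C plus the unfolded ground corrections equal the full solution Z⁻¹H. For clarity the ground block is inverted directly here, whereas in the text it is block-diagonal and is inverted as n cheap 3×3 blocks.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 2, 4
p, q = 6 * m, 3 * n                 # image-parameter and ground-point dimensions

# Random symmetric positive-definite full normal system Z, right-hand side H
A = rng.normal(size=(p + q, p + q))
Z = A @ A.T + (p + q) * np.eye(p + q)
H = rng.normal(size=p + q)

Ndot = Z[:p, :p]                    # image block (role of N-dot)
Nbar = Z[:p, p:]                    # cross block (role of N-bar)
Nddot = Z[p:, p:]                   # ground block (block-diagonal in the text)
Nddot_inv = np.linalg.inv(Nddot)    # inverted directly here for clarity
Hdot, Hddot = H[:p], H[p:]

# FOLDING: Schur-complement reduced system M * Delta-dot = C
M = Ndot - Nbar @ Nddot_inv @ Nbar.T
C = Hdot - Nbar @ Nddot_inv @ Hddot
delta_img = np.linalg.solve(M, C)   # step 8 analogue

# UNFOLDING: ground point corrections (Equation 80 analogue)
delta_gp = Nddot_inv @ (Hddot - Nbar.T @ delta_img)

# Reference: solve the full system in one shot
delta_full = np.linalg.solve(Z, H)
```

The reduced solve is worthwhile because the folded system is only as large as the image-parameter count, while the ground-point count n can be very large.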
[0195] After the solution vector {dot over (.DELTA.)} is obtained,
unfolding the ground point corrections is a matter of employing
Equation 80, replicated here for reference:
Δ̈_j (3 × 1) = Z̈_j⁻¹ [Ḧ_j − Σ_{i ∈ I_j} Z̄_ijᵀ Δ̇_i]   (Equation 80)

{unfoldGroundPoints}
for j ∈ {0, ..., n − 1} {
    Retrieve Z̈_j⁻¹ and Ḧ_j from the GPWS.
    Store Ḧ_j into a new subtrahend matrix S (i.e., initialize S to Ḧ_j).
    for r ∈ I_j {
        Form Z̄_rj.
        Sum −Z̄_rjᵀ Δ̇_r into S.
    } // end r
    Compute Δ̈_j = Z̈_j⁻¹ S.
} // end j
[0196] The general cross error covariance between ground point
indexes r and c can be obtained by evaluation of Equation 94.
{relativeGroundPointCov}
Retrieve Z̈_r⁻¹ and Z̈_c⁻¹ from the ground point workspace.
Obtain the indexing sets I_r and I_c (the image indexes of ground points r and c).
Allocate matrix Σ̈_r,c (3 × 3) and initialize it to zero. This is the output of this function.
Allocate matrix P (3 × 3) and initialize it to zero.
for t ∈ I_c {
    Allocate matrix Q (3 × 7) and initialize it to zero.
    for s ∈ I_r {
        Form Z̄_sr and Z̄_tc from Equation 47. (Note that for the reduced system, N = Z.)
        Extract Σ̇_st = Σ̇.getBlock(s, t), where Σ̇ is as defined in Equation 93.
        Set Q ← Q + Z̄_srᵀ Σ̇_st.
    } // end s
    Set P ← P + Q Z̄_tc.
} // end t
Compute Σ̈_r,c = Z̈_r⁻¹ P Z̈_c⁻¹.
if (r = c) {
    Set Σ̈_r,c ← [σ^(p)]² Z̈_c⁻¹ + Σ̈_r,c
}
return Σ̈_r,c
// end {relativeGroundPointCov}
[0197] The full 3n × 3n ground covariance matrix Σ may be obtained by
invoking the method for r ∈ {1, 2, ..., n} and for c ∈ {r, r + 1, ...,
n}. Note that the indexing for c starts with r since the full ground
covariance matrix is symmetric (i.e., build the upper triangle of Σ
and "reflect about the diagonal" to obtain the lower symmetric
portion).
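The upper-triangle construction and reflection can be sketched as follows (illustrative only; indexes are zero-relative here rather than the 1-relative indexes above, and `block` is a hypothetical stand-in for {relativeGroundPointCov} backed by a random symmetric source matrix):

```python
import numpy as np

n = 4                                # number of ground points
rng = np.random.default_rng(2)
A = rng.normal(size=(3 * n, 3 * n))
full = A @ A.T                       # stand-in symmetric covariance source

def block(r, c):
    """Hypothetical stand-in for relativeGroundPointCov(r, c), zero-relative."""
    return full[3 * r:3 * r + 3, 3 * c:3 * c + 3]

Sigma = np.zeros((3 * n, 3 * n))
for r in range(n):
    for c in range(r, n):            # c starts at r: upper triangle only
        Sigma[3 * r:3 * r + 3, 3 * c:3 * c + 3] = block(r, c)
        if c != r:                   # "reflect about the diagonal"
            Sigma[3 * c:3 * c + 3, 3 * r:3 * r + 3] = block(r, c).T
```

Only n(n + 1)/2 block evaluations are needed instead of n², since each off-diagonal block is reused transposed on the other side of the diagonal.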
[0198] What follows regards how to perform operation 1014. The
operation 1014 proceeds given the outputs of the MLSE techniques
discussed. The compensation applies the inverse of the observation
equation, accommodating the various relative frame offsets, to arrive
at compensated world space coordinates from misregistered world space
coordinates:

V_reg^W = Compensate_i(V_misreg^W)   (Equation 106)
[0199] The motivation for providing the inputs and outputs in world
space coordinates can be that world space is the native space of the
inputs and the desired space of the outputs for each element of each
image's point cloud.
[0200] For an arbitrary misregistered vector V_misreg^W on image i,
the compensation formula can be performed as in Equation 107:

V_reg^W = (1 / (1 + s_i^(p))) T_iᵀ (V_misreg^W − V̄_i^(0)R − V_R^W) + V̄_i^(p) + V_R^W   (Equation 107)

[0201] where T_i is constructed from the solution vector θ_i^(p) and
the other symbols in Equation 107 are defined elsewhere. Note that the
values for −V̄_i^(0)R − V_R^W, for +V̄_i^(p) + V_R^W, and for T_iᵀ can
be precomputed on a per-image basis when applying Equation 107 for a
time-efficient implementation.
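The compensation of Equation 107 can be illustrated with a round-trip sketch in Python/NumPy. This is illustrative only: the two precomputable offset vectors below collapse the V̄_i^(0)R + V_R^W and V̄_i^(p) + V_R^W terms into single assumed values, and the rotation is a simple z-axis rotation standing in for the T_i built from θ_i^(p).

```python
import numpy as np

def rot_z(kappa):
    """Simple rotation about z, standing in for the T_i built from theta_i^(p)."""
    c, s = np.cos(kappa), np.sin(kappa)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

# Illustrative per-image solution values (assumed, not from the application)
s_i = 0.02                             # scale correction s_i^(p)
T_i = rot_z(0.1)                       # rotation from the solved angles
off0 = np.array([10.0, -5.0, 2.0])     # role of V-bar_i^(0)R + V_R^W
offp = np.array([10.5, -4.8, 2.1])     # role of V-bar_i^(p) + V_R^W
T_i_T = T_i.T                          # precomputed per image, as the text notes

def compensate(v_misreg):
    """Equation 107: misregistered world coordinates -> registered coordinates."""
    return T_i_T @ (v_misreg - off0) / (1.0 + s_i) + offp

def misregister(v_reg):
    """Forward model implied by inverting Equation 107 (used to fabricate data)."""
    return (1.0 + s_i) * (T_i @ (v_reg - offp)) + off0
```

Because `compensate` is the exact algebraic inverse of `misregister`, applying the two in either order returns the input vector.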
[0202] FIG. 11 illustrates, by way of example, a diagram of an
embodiment of a method 1100 for 3D point set generation and
registration. The method 1100 as illustrated includes capturing, by
unmanned vehicles (UVs), image data representative of respective
overlapping subsections of the object, at operation 1102;
registering the overlapping subsections to each other, at operation
1104; and geo-locating the registered overlapping subsections, at
operation 1106.
[0203] The method 1100 can include capturing, by a UV of the UVs, a
first overhead image of a starting geo-location at which the image
data is captured and wherein geo-locating the overlapping
subsections includes correlating the first overhead image with a
second overhead image for which geo-location is known. The method
1100 can include, wherein the second overhead image is a satellite
image. The method 1100 can include, wherein geo-locating the
registered overlapping subsection includes determining a normalized
cross correlation of image chips of the first overhead image and
the second overhead image.
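The normalized cross correlation of image chips described above can be sketched as follows. This is an illustration, not the application's implementation, and the function names are hypothetical: the chip with the highest correlation score against the geo-located overhead image localizes the first image within it.

```python
import numpy as np

def ncc(chip_a, chip_b):
    """Normalized cross-correlation of two same-size image chips, in [-1, 1]."""
    a = chip_a.astype(float).ravel()
    b = chip_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def best_offset(template, search):
    """Slide a template chip over a larger search window; return the (row, col)
    offset with the highest NCC score (brute force for clarity)."""
    th, tw = template.shape
    sh, sw = search.shape
    scores = np.full((sh - th + 1, sw - tw + 1), -np.inf)
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            scores[r, c] = ncc(template, search[r:r + th, c:c + tw])
    return np.unravel_index(np.argmax(scores), scores.shape)
```

Because NCC subtracts each chip's mean and divides by the norms, the score is invariant to overall brightness and contrast differences between the two overhead images, which is why it suits cross-sensor correlation.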
[0204] The method 1100 can include receiving, from an operator of a
UV of the UVs, a starting geo-location, and a heading of the UV.
The method 1100 can include wherein geo-locating the registered
overlapping subsections is performed based on the starting
geo-location and the heading. The method 1100 can include
performing, by a UV of the UVs, a light detection and ranging
(LIDAR) scan to generate LIDAR scan data. The method 1100 can
include wherein geo-locating the registered overlapping subsections
includes correlating the first overhead image with the LIDAR scan
data.
[0205] The method 1100 can include associating, by the UV,
geo-location data of the UV with image data generated by the UV.
The method 1100 can include, wherein geo-locating the registered
overlapping subsections occurs based on the geo-location data. The
method 1100 can include generating a first three-dimensional (3D)
point set based on the geo-located registered overlapping
subsections. The method 1100 can include registering the first 3D
point set to a second 3D point set to generate a merged 3D point
set. The method 1100 can include, wherein registering the first 3D
point set to the second 3D point set includes scaling, rotating,
and translating one or more of the first and second 3D point sets
using a least squares estimate bundle adjustment based on tie
points between the first and second 3D point sets.
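The scaling, rotation, and translation recovered from tie points can be illustrated with a closed-form least squares similarity fit (Umeyama's method). This is a simplified stand-in for the least squares estimate bundle adjustment described above, not the disclosed technique, and all names below are hypothetical.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least squares scale s, rotation R, translation t such that
    dst_k ~ s * R @ src_k + t for matched tie points (N x 3 arrays).
    Closed-form SVD solution (Umeyama's method)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    X, Y = src - mu_s, dst - mu_d               # centered tie points
    cov = Y.T @ X / len(src)                    # cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                          # enforce a proper rotation
    R = U @ S @ Vt
    var_src = (X ** 2).sum() / len(src)         # source variance
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

With noiseless tie points the fit is exact; with noisy tie points it minimizes the sum of squared residuals, playing the role the bundle adjustment plays over the two 3D point sets.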
[0206] FIG. 12 illustrates, by way of example, a block diagram of
an embodiment of a machine in the example form of a computer system
1200 within which instructions, for causing the machine to perform
any one or more of the methodologies discussed herein, may be
executed. In alternative embodiments, the machine operates as a
standalone device or may be connected (e.g., networked) to other
machines. In a networked deployment, the machine may operate in the
capacity of a server or a client machine in server-client network
environment, or as a peer machine in a peer-to-peer (or
distributed) network environment. The machine may be a personal
computer (PC), a tablet PC, a set-top box (STB), a Personal Digital
Assistant (PDA), a cellular telephone, a web appliance, a network
router, switch or bridge, or any machine capable of executing
instructions (sequential or otherwise) that specify actions to be
taken by that machine. Further, while only a single machine is
illustrated, the term "machine" shall also be taken to include any
collection of machines that individually or jointly execute a set
(or multiple sets) of instructions to perform any one or more of
the methodologies discussed herein.
[0207] The example computer system 1200 includes a processor 1202
(e.g., processing circuitry 118, such as can include a central
processing unit (CPU), a graphics processing unit (GPU), field
programmable gate array (FPGA), other circuitry, such as one or
more transistors, resistors, capacitors, inductors, diodes,
regulators, switches, multiplexers, power devices, logic gates
(e.g., AND, OR, XOR, negate, etc.), buffers, memory devices, or the
like, or a combination thereof), a main memory 1204 and a static
memory 1206, which communicate with each other via a bus 1208. The
computer system 1200 may further include a display device 1210
(e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)).
The computer system 1200 also includes an alphanumeric input device
1212 (e.g., a keyboard), a user interface (UI) navigation device
1214 (e.g., a mouse), a disk drive unit 1216, a signal generation
device 1218 (e.g., a speaker), a network interface device 1220, and
radios 1230 such as Bluetooth, WWAN, WLAN, and NFC, permitting the
application of security controls on such protocols.
[0208] The disk drive unit 1216 includes a machine-readable medium
1222 on which is stored one or more sets of instructions and data
structures (e.g., software) 1224 embodying or utilized by any one
or more of the methodologies or functions described herein. The
instructions 1224 may also reside, completely or at least
partially, within the main memory 1204 and/or within the processor
1202 during execution thereof by the computer system 1200, the main
memory 1204 and the processor 1202 also constituting
machine-readable media.
[0209] While the machine-readable medium 1222 is shown in an
example embodiment to be a single medium, the term
"machine-readable medium" may include a single medium or multiple
media (e.g., a centralized or distributed database, and/or
associated caches and servers) that store the one or more
instructions or data structures. The term "machine-readable medium"
shall also be taken to include any tangible medium that is capable
of storing, encoding or carrying instructions for execution by the
machine and that cause the machine to perform any one or more of
the methodologies of the present invention, or that is capable of
storing, encoding or carrying data structures utilized by or
associated with such instructions. The term "machine-readable
medium" shall accordingly be taken to include, but not be limited
to, solid-state memories, and optical and magnetic media. Specific
examples of machine-readable media include non-volatile memory,
including by way of example semiconductor memory devices, e.g.,
Erasable Programmable Read-Only Memory (EPROM), Electrically
Erasable Programmable Read-Only Memory (EEPROM), and flash memory
devices; magnetic disks such as internal hard disks and removable
disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
[0210] The instructions 1224 may further be transmitted or received
over a communications network 1226 using a transmission medium. The
instructions 1224 may be transmitted using the network interface
device 1220 and any one of a number of well-known transfer
protocols (e.g., HTTP). Examples of communication networks include
a local area network ("LAN"), a wide area network ("WAN"), the
Internet, mobile telephone networks, Plain Old Telephone (POTS)
networks, and wireless data networks (e.g., WiFi and WiMax
networks). The term "transmission medium" shall be taken to include
any intangible medium that is capable of storing, encoding or
carrying instructions for execution by the machine, and includes
digital or analog communications signals or other intangible media
to facilitate communication of such software.
Additional Notes and Examples
[0211] Example 1 includes a method for generating a
three-dimensional (3D) point cloud of an object, the method
comprising capturing, by unmanned vehicles (UVs), image data
representative of respective overlapping subsections of the object,
registering the overlapping subsections to each other, and
geo-locating the registered overlapping subsections.
[0212] In Example 2, Example 1 can further include capturing, by a
UV of the UVs, a first overhead image of a starting geo-location at
which the image data is captured and wherein geo-locating the
overlapping subsections includes correlating the first overhead
image with a second overhead image for which geo-location is
known.
[0213] In Example 3, Example 2 can further include, wherein the
second overhead image is a satellite image.
[0214] In Example 4, Example 3 can further include, wherein
geo-locating the registered overlapping subsection includes
determining a normalized cross correlation of image chips of the
first overhead image and the second overhead image.
[0215] In Example 5 at least one of Examples 1-4 can further
include receiving, from an operator of a UV of the UVs, a starting
geo-location, and a heading of the UV, and wherein geo-locating the
registered overlapping subsections is performed based on the
starting geo-location and the heading.
[0216] In Example 6, at least one of Examples 2-5 can further
include performing, by a UV of the UVs, a light detection and
ranging (LIDAR) scan to generate LIDAR scan data, and wherein
geo-locating the registered overlapping subsections includes
correlating the first overhead image with the LIDAR scan data.
[0217] In Example 7, at least one of Examples 1-6 can further
include associating, by the UV, geo-location data of the UV with
image data generated by the UV, and wherein geo-locating the
registered overlapping subsections occurs based on the geo-location
data.
[0218] In Example 8, at least one of Examples 1-7 can further
include generating a first three-dimensional (3D) point set based
on the geo-located registered overlapping subsections and
registering the first 3D point set to a second 3D point set to
generate a merged 3D point set.
[0219] In Example 9, Example 8 can further include, wherein
registering the first 3D point set to the second 3D point set
includes scaling, rotating, and translating one or more of the
first and second 3D point sets using a least squares estimate
bundle adjustment based on tie points between the first and second
3D point sets.
[0220] Example 10 includes a system comprising unmanned vehicles
configured to capture image data representative of respective
overlapping subsections of an object, and processing circuitry
configured to register the overlapping subsections to each other,
and geo-locate the registered overlapping subsections.
[0221] In Example 11, Example 10 can further include, wherein a UV
of the UVs is further configured to capture a first overhead image
of a starting geo-location at which the image data is captured and
wherein geo-locating the overlapping subsections includes
correlating the first overhead image with a second overhead image
for which geo-location is known.
[0222] In Example 12, Example 11 can further include, wherein the
second overhead image is a satellite image.
[0223] In Example 13, Example 12 can further include, wherein
geo-locating the registered overlapping subsection includes
determining a normalized cross correlation of image chips of the
first overhead image and the second overhead image.
[0224] In Example 14, at least one of Examples 10-13 can further
include, wherein the processing circuitry is further configured to
receive, from an operator of a UV of the UVs, a starting
geo-location and a heading of the UV, and wherein geo-locating the
registered overlapping subsections is performed based on the
starting geo-location and the heading.
[0225] In Example 15, at least one of Examples 11-14 can further
include, wherein a UV of the UVs is further configured to perform a
light detection and ranging (LIDAR) scan to generate LIDAR scan
data; and wherein geo-locating the registered overlapping
subsections includes correlating the first overhead image with the
LIDAR scan data.
[0226] Example 16 includes a (e.g., non-transitory)
machine-readable medium including instructions that, when executed
by a machine, cause the machine to perform operations comprising
receiving, by unmanned vehicles (UVs), image data representative of
respective overlapping subsections of an object, registering the
overlapping subsections to each other, and geo-locating the
registered overlapping subsections.
[0227] In Example 17, Example 16 can further include, wherein the
operations further comprise receiving, by the UV, geo-location data
of the UV associated with the image data generated by the UV, and
wherein geo-locating the registered overlapping subsections occurs
based on the geo-location data.
[0228] In Example 18, at least one of Examples 16-17 can further
include, wherein the operations further comprise generating a first
three-dimensional (3D) point set based on the geo-located
registered overlapping subsections and registering the first 3D
point set to a second 3D point set to generate a merged 3D point
set.
[0229] In Example 19, Example 18 can further include, wherein
registering the first 3D point set to the second 3D point set
includes scaling, rotating, and translating one or more of the
first and second 3D point sets using a least squares estimate
bundle adjustment based on tie points between the first and second
3D point sets.
[0230] In Example 20, at least one of Examples 16-19 can further
include, wherein the operations further comprise receiving light
detection and ranging (LIDAR) scan data of the object from a UV of
the UVs; and wherein geo-locating the registered overlapping
subsections includes correlating the first overhead image with the
LIDAR scan data.
[0231] Although an embodiment has been described with reference to
specific example embodiments, it will be evident that various
modifications and changes may be made to these embodiments without
departing from the broader spirit and scope of the invention.
Accordingly, the specification and drawings are to be regarded in
an illustrative rather than a restrictive sense. The accompanying
drawings that form a part hereof, show by way of illustration, and
not of limitation, specific embodiments in which the subject matter
may be practiced. The embodiments illustrated are described in
sufficient detail to enable those skilled in the art to practice
the teachings disclosed herein. Other embodiments may be utilized
and derived therefrom, such that structural and logical
substitutions and changes may be made without departing from the
scope of this disclosure. This Detailed Description, therefore, is
not to be taken in a limiting sense, and the scope of various
embodiments is defined only by the appended claims, along with the
full range of equivalents to which such claims are entitled.
* * * * *