U.S. patent application number 13/060444 was filed with the patent office on 2011-12-22 for aerial image generating apparatus, aerial image generating method, and storage medium having aerial image generating program stored therein.
This patent application is currently assigned to MITSUBISHI ELECTRIC CORPORATION. Invention is credited to Ryujiro KUROSAKI, Masakazu MIYA, Yoshihiro SHIMA, Junichi TAKIGUCHI, Mitsunobu YOSHIDA.
Application Number | 13/060444
Publication Number | 20110310091
Document ID | /
Family ID | 41721380
Filed Date | 2011-12-22
United States Patent Application | 20110310091
Kind Code | A2
Inventors | YOSHIDA; Mitsunobu; et al.
Publication Date | December 22, 2011
AERIAL IMAGE GENERATING APPARATUS, AERIAL IMAGE GENERATING METHOD,
AND STORAGE MEDIUM HAVING AERIAL IMAGE GENERATING PROGRAM STORED
THEREIN
Abstract
An objective is to provide a road image free of features, such as
trees and tunnels, that hide or cover the road surface. A
mobile measuring apparatus 200 installed in a vehicle may acquire a
distance and orientation point cloud 291, a camera image 292, GPS
observation information 293, a gyro measurement value 294, and an
odometer measurement value 295, while moving in a target area. The
position and attitude localizing apparatus 300 may localize the
position and attitude of the vehicle based on the GPS observation
information 293, the gyro measurement value 294 and the odometer
measurement value 295. The point cloud generating apparatus 400 may
generate a point cloud 491 based on the camera image 292, the
distance and orientation point cloud 291, and a position and
attitude localized value 391. The point cloud orthoimage generating
apparatus 100 may extract points close to a road surface
exclusively from the point cloud 491 by removing points higher than
the road surface, orthographically project each extracted point
onto a horizontal plane, and generate a point cloud orthoimage 191.
The point cloud orthoimage 191 may show the road surface including
no features covering or hiding the road surface.
Inventors: |
YOSHIDA; Mitsunobu;
(Chiyoda-ku, Tokyo, JP) ; MIYA; Masakazu;
(Chiyoda-ku, Tokyo, JP) ; SHIMA; Yoshihiro;
(Chiyoda-ku, Tokyo, JP) ; TAKIGUCHI; Junichi;
(Chiyoda-ku, Tokyo, JP) ; KUROSAKI; Ryujiro;
(Chiyoda-ku, Tokyo, JP) |
Assignee: |
MITSUBISHI ELECTRIC
CORPORATION
7-3, Marunouchi 2-chome
Chiyoda-ku, Tokyo
JP
100-8310
|
Prior Publication: |
Document Identifier | Publication Date
US 20110164037 A1 | July 7, 2011
Family ID: |
41721380 |
Appl. No.: |
13/060444 |
Filed: |
August 24, 2009 |
PCT Filed: |
August 24, 2009 |
PCT NO: |
PCT/JP2009/064700 |
371 Date: |
February 24, 2011 |
Current U.S. Class: | 345/419
Current CPC Class: | G06T 15/30 20130101; G09B 29/12 20130101; G06T 15/08 20130101; G06T 17/05 20130101
Class at Publication: | 345/419
International Class: | G06T 15/00 20110101 G06T015/00
Foreign Application Data
Date |
Code |
Application Number |
Aug 29, 2008 |
JP |
2008-221359 |
Claims
1. An aerial image generating apparatus generating an aerial image
of a ground surface by using a 3D point cloud indicating 3D
coordinates of a spot on the ground, the aerial image generating
apparatus comprising: a 3D point cloud projecting section
configured to generate the aerial image by projecting each point of
the 3D point cloud onto a plane based on the 3D coordinates of each
point indicated by the 3D point cloud by using CPU (Central
Processing Unit).
2. The aerial image generating apparatus according to claim 1,
further comprising: a predetermined height point cloud extracting
section configured to extract from the 3D point cloud as a
predetermined height point cloud a point whose height is within a
predetermined height range based on the 3D coordinates of each
point indicated by the 3D point cloud, by using CPU, wherein the 3D
point cloud projecting section generates the aerial image by
projecting each point of the predetermined height point cloud onto
the plane based on the 3D coordinates indicated by each point of
the predetermined height point cloud extracted from the 3D point
cloud by the predetermined height point cloud extracting section,
by using CPU.
3. The aerial image generating apparatus according to claim 2,
wherein the predetermined height point cloud extracting section
extracts a point whose height is the same as or lower than a
predetermined height as a member of the predetermined height point
cloud.
4. The aerial image generating apparatus according to claim 3,
wherein the predetermined height point cloud extracting section
extracts a point whose height from the ground is the same as or
lower
than the predetermined height as a member of the predetermined
height point cloud.
5. The aerial image generating apparatus according to claim 1,
further comprising: a point density calculating section configured
to calculate a point density of each point of the 3D point cloud
projected onto the plane by the 3D point cloud projecting section
for each zone of the plane divided into zones of a predetermined
size, by using CPU; a standing feature specifying section
configured to specify an image portion of the aerial image showing
a standing feature based on the point density calculated by the
point density calculating section, by using CPU; and a standing
feature discriminating section configured to generate the aerial
image in which the image portion specified by the standing feature
specifying section is discriminated from other image portions, by
using CPU.
6. The aerial image generating apparatus according to claim 5,
further comprising: a predetermined height point cloud extracting
section configured to extract from the 3D point cloud a point whose
height is within the predetermined height range as the
predetermined height point cloud based on the 3D coordinates of
each point indicated by the 3D point cloud, by using CPU, wherein
the 3D point cloud projecting section generates the aerial image by
projecting each point of the predetermined height point cloud onto
the plane based on the 3D coordinates of each point of the
predetermined height point cloud extracted from the 3D point cloud
by the predetermined height point cloud extracting section, by using
CPU.
7. The aerial image generating apparatus according to claim 6,
wherein the predetermined height point cloud extracting section
extracts a point whose height is the same as or higher than the
predetermined height as the predetermined height point cloud.
8. The aerial image generating apparatus according to claim 7,
wherein the predetermined height point cloud extracting section
extracts a point whose height from the ground is the same as or
higher
than the predetermined height as the predetermined height point
cloud.
9. The aerial image generating apparatus according to claim 4 or 8,
further comprising: a ground height specifying section configured
to specify a ground height based on the height of each point
indicated by the 3D point cloud, by using CPU, wherein the
predetermined height point cloud extracting section extracts the
predetermined height point cloud based on the ground height
specified by the ground height specifying section.
10. The aerial image generating apparatus according to claim 9,
wherein the ground height specifying section specifies the ground
height of each zone obtained by dividing by a predetermined size
the plane onto which each point of 3D point cloud is projected by
the 3D point cloud projecting section, based on the height of each
point indicated by 3D point cloud projected onto each zone, and
wherein the predetermined height point cloud extracting section
extracts the predetermined height point cloud for each zone based
on the ground height of each zone specified by the ground height
specifying section.
11. The aerial image generating apparatus according to claim 10,
wherein the ground height specifying section extracts a
predetermined number of points in order from the lowest of all the
points of the 3D point cloud projected onto a first zone, and
specifies the ground height of the first zone based on the height
of the predetermined number of points extracted.
12. The aerial image generating apparatus according to claim 11,
wherein the ground height specifying section extracts a point whose
height is the lowest of all the points of the 3D point cloud
projected onto the first zone, and treats the height of the
extracted point as the ground height of the first zone.
13. The aerial image generating apparatus according to claim 9,
wherein the ground height specifying section extracts a point
indicating a curb of a road from the 3D point cloud based on the 3D
coordinates of each point indicated by the 3D point cloud, and
specifies the ground height based on the 3D coordinates of an
extracted point.
14. The aerial image generating apparatus according to claim 13,
wherein the ground height specifying section calculates a 3D
equation indicating a road surface based on the 3D coordinates of
the extracted point, and calculates the height of the road surface
as the ground height based on the calculated 3D equation.
15. The aerial image generating apparatus according to claim 14,
wherein the ground height specifying section specifies portions of
a couple of curbs on both sides of a road in the aerial image
generated by the 3D point cloud projecting section, extracts at
least two points from points projected onto one of the portions of
curbs, extracts at least one point from points projected onto the
other portion of curbs, calculates a 3D equation indicating a plane
including at least the three extracted points as the 3D equation
indicating the road surface, and calculates the height of the road
surface as the ground height based on the calculated 3D
equation.
16. The aerial image generating apparatus according to any one of
claim 1 to claim 15, further comprising: an aerial image display
section configured to display the generated aerial image on a
display unit; and a camera image display section configured to
specify a point projected onto a designated image portion of the
aerial image displayed by the aerial image display section, and
display a camera image taken at a site of measurement where the
specified point was measured, by using CPU.
17. The aerial image generating apparatus according to any one of
claim 1 to claim 16, wherein the 3D point cloud is generated based
on a distance and orientation point cloud indicating distance and
orientation to a point measured by a laser scanner installed in a
vehicle.
18. The aerial image generating apparatus according to any one of
claim 1 to claim 17, wherein each point of the 3D point cloud
indicates 3D coordinates and color of a feature at a position
specified by the 3D coordinates.
19. An aerial image generating method for generating an aerial
image of the ground surface by using a 3D point cloud indicating
the 3D coordinates of a spot on the ground, the method comprising:
generating the aerial image by a 3D point cloud projecting section
projecting each point of the 3D point cloud onto a plane based on
the 3D coordinates of each point indicated by the 3D point cloud,
by using CPU (Central Processing Unit) in a 3D point cloud
projecting process.
20. The aerial image generating method according to claim 19,
further comprising: extracting from the 3D point cloud by a
predetermined height point cloud extracting section, each point
whose height is within a predetermined height range as a
predetermined height point cloud based on the 3D coordinates of
each point indicated by the 3D point cloud, by using CPU, in a
predetermined height point cloud extracting process, wherein the 3D
point cloud projecting process performed by the 3D point cloud
projecting section includes: generating the aerial image by
projecting each point of the predetermined height point cloud onto
the plane based on the 3D coordinates of each point of the
predetermined height point cloud extracted from the 3D point cloud
by the predetermined height point cloud extracting section, by
using CPU.
21. The aerial image generating method according to claim 19,
further comprising: calculating by a point density calculating
section a point density of each point of the 3D point cloud
projected onto the plane by the 3D point cloud projecting section
for each zone of the plane divided into zones of a predetermined
size, by using CPU, in a point density calculating process,
specifying by a standing feature specifying section an image
portion of the aerial image showing a standing feature based on the
point density calculated by the point density calculating section,
by using CPU, in a standing feature specifying process, and
generating by a standing feature discriminating section the aerial
image in which the image portion specified by the standing feature
specifying section is discriminated from other image portions, by
using CPU, in a standing feature discriminating process.
22. An aerial image generating program causing a computer to
execute the aerial image generating method according to any one of
claim 19 to claim 21.
Description
TECHNICAL FIELD
[0001] The present invention relates to an aerial image generating
apparatus, an aerial image generating method, and an aerial image
generating program for generating a road orthoimage by using a
colored laser point cloud, for example.
BACKGROUND ART
[0002] A laser point cloud indicating distance and orientation
measured by a laser scanner reproduces the 3D shape of a feature on
the ground. A larger number of laser points make a 3D shape more
accurate, and therefore a vast number of laser points are
acquired.
[0003] However, the laser point cloud includes points obtained by
measuring features that are not intended for reproduction. There is
therefore a need to extract, from the massive number of laser
points, only those obtained by measuring the features intended for
reproduction.
[0004] Laser point clouds have been extracted by the following
methods:
(1) A laser point cloud is viewed in a three dimensional manner,
and a point is extracted if necessary with visual confirmation;
and
(2) A laser point cloud is superimposed on a camera image on a
display to help identify a target feature, and a point is extracted
if necessary with visual confirmation.
[0005] The method (1) however poses the following problems, for
example:
(A) Laser points need to be designated one by one for extraction;
and
(B) Extracted laser points cannot be used directly on CAD (Computer
Aided Design).
[0006] The method (2) poses the following problems, for
example:
(A) A target feature can be identified only from laser points that
lie within the field of vision of the camera;
(B) It takes time and labor to select an appropriate camera image;
and
(C) It is hard to identify the place where a target feature is
located.
[0007] The methods introduced above require visual confirmation for
each point to be extracted, which takes time. Automatic recognition
techniques, on the other hand, have been under development, but the
features they can recognize are limited and the recognition rate is
insufficient. Visual confirmation is also still required for
correction.
PRIOR ART REFERENCE
Patent Document
[0008] Patent Document 1: JP 2007-218705 A
SUMMARY OF THE INVENTION
Problems to be Solved by the Invention
[0009] An objective of the present invention is to remove
unnecessary points from a massive number of acquired laser points,
and extract necessary laser points exclusively, for example.
Means to Solve the Problems
[0010] According to one aspect of the present invention, an aerial
image generating apparatus may generate an aerial image of a ground
surface by using a 3D point cloud indicating 3D coordinates of a
spot on the ground. The aerial image generating apparatus may
include a 3D point cloud projecting section that is configured to
generate the aerial image by projecting each point of the 3D point
cloud onto a plane based on the 3D coordinates of each point
indicated by the 3D point cloud by using CPU (Central Processing
Unit).
[0011] The aerial image generating apparatus may further include a
predetermined height point cloud extracting section that is
configured to extract from the 3D point cloud as a predetermined
height point cloud a point whose height is within a predetermined
height range based on the 3D coordinates of each point indicated by
the 3D point cloud, by using CPU. The 3D point cloud projecting
section may generate the aerial image by projecting each point of
the predetermined height point cloud onto the plane based on the 3D
coordinates indicated by each point of the predetermined height
point cloud extracted from the 3D point cloud by the predetermined
height point cloud extracting section, by using CPU.
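For illustration only (the patent does not specify an implementation), the height filtering and orthographic projection described in this paragraph might be sketched as follows in Python; the function name, parameter names, and cell size are hypothetical:

```python
import numpy as np

def ortho_project(points, z_min=None, z_max=None, cell=0.1):
    """Keep points whose height z lies within [z_min, z_max], then
    orthographically project the survivors onto the horizontal plane
    by dropping z and binning x, y into zones of size `cell`."""
    pts = np.asarray(points, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    if z_min is not None:
        keep &= pts[:, 2] >= z_min   # predetermined height range, lower bound
    if z_max is not None:
        keep &= pts[:, 2] <= z_max   # predetermined height range, upper bound
    return np.floor(pts[keep, :2] / cell).astype(int)

# A road-surface point survives a 0.5 m ceiling; a tree point does not.
pts = [[0.0, 0.0, 0.1], [1.0, 1.0, 2.5]]
cells = ortho_project(pts, z_max=0.5)
```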
[0012] The aerial image generating apparatus may further include a
point density calculating section configured to calculate a point
density of each point of the 3D point cloud projected onto the
plane by the 3D point cloud projecting section for each zone of the
plane divided into zones of a predetermined size, by using CPU; a
standing feature specifying section configured to specify an image
portion of the aerial image showing a standing feature based on the
point density calculated by the point density calculating section,
by using CPU; and a standing feature discriminating section
configured to generate the aerial image in which the image portion
specified by the standing feature specifying section is
discriminated from other image portions, by using CPU.
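A minimal sketch of the point-density idea above, assuming the zones are square cells on the projection plane (all names illustrative): a zone crossed by a standing feature such as a power pole accumulates points at many different heights, so its count stands out against road-surface zones.

```python
import numpy as np

def density_per_zone(points, cell=0.1):
    """Count projected points per zone of the plane. Many points in
    one zone means many heights share the same (x, y): a standing
    feature. A flat road surface contributes few points per zone."""
    pts = np.asarray(points, dtype=float)
    zone_idx = np.floor(pts[:, :2] / cell).astype(int)
    zones, counts = np.unique(zone_idx, axis=0, return_counts=True)
    return {tuple(z): int(c) for z, c in zip(zones, counts)}

pole = [[0.0, 0.0, h / 10.0] for h in range(30)]  # 30 points stacked vertically
road = [[1.0, 1.0, 0.0]]                          # a single road-surface point
d = density_per_zone(pole + road)
```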
Advantageous Effects of the Invention
[0013] According to the present invention, it is allowed to extract
a laser point cloud (a predetermined height point cloud) indicating
a road surface without visual confirmation, and generate an aerial
image of a road including no features such as tunnels and trees
hiding or covering the road surface, for example.
[0014] It is also allowed to extract a laser point cloud indicating
a standing feature such as a power pole without visual
confirmation, and generate an aerial image in which a standing
feature is discriminated from a road surface, for example.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] FIG. 1 shows a configuration of a point cloud orthoimage
generating system 800 according to a first embodiment;
[0016] FIG. 2 shows an external view of a mobile measuring
apparatus 200 according to the first embodiment;
[0017] FIG. 3 shows an example of hardware resources of a point
cloud orthoimage generating apparatus 100 according to the first
embodiment;
[0018] FIG. 4 shows a flow chart of a point cloud orthoimage
generating method according to the first embodiment;
[0019] FIG. 5 shows a road map illustrating an area (a target area)
in which the mobile measuring apparatus 200 has moved;
[0020] FIG. 6 shows a point cloud orthoimage 191 of the target area
(FIG. 5);
[0021] FIG. 7 shows an example of an aerial image of a point cloud
491;
[0022] FIG. 8 shows an example of an aerial image of the point
cloud 491;
[0023] FIG. 9 shows a configuration of the point cloud orthoimage
generating apparatus 100 according to a second embodiment;
[0024] FIG. 10 shows a flow chart of a point cloud orthoimage
generating process (S140) according to the second embodiment;
[0025] FIG. 11 shows the point cloud orthoimage 191 of a target
area b (FIG. 6) onto which a predetermined height point cloud 129a
whose height from a ground height 139a is the same or lower than 50
cm is orthographically projected;
[0026] FIG. 12 shows the point cloud orthoimage 191 of the target
area a (FIG. 6) onto which the predetermined height point cloud
129a whose height from the ground height 139a is the same or lower
than 50 cm is orthographically projected;
[0027] FIG. 13 shows a configuration of the point cloud orthoimage
generating apparatus 100 according to a third embodiment;
[0028] FIG. 14 shows a flow chart of the point cloud orthoimage
generating process (S140) according to the third embodiment;
[0029] FIG. 15 shows the point cloud orthoimage 191 of the target
area b (FIG. 6) onto which the predetermined height point cloud
129a whose height from the ground height 139a is the same or higher
than 50 cm is orthographically projected;
[0030] FIG. 16 shows an enlarged view of a part of the target area
b;
[0031] FIG. 17 shows the point cloud orthoimage 191 of a target
area a (FIG. 6) onto which the predetermined height point cloud
129a whose height from the ground height 139a is the same or higher
than 50 cm is orthographically projected;
[0032] FIG. 18 shows an enlarged view of a part of the target area
a;
[0033] FIG. 19 illustrates a method for specifying the ground
height 139a according to a fourth embodiment (Example 1);
[0034] FIG. 20 illustrates a method for specifying the ground
height 139a according to the fourth embodiment (Example 2);
[0035] FIG. 21 illustrates a method for specifying a curb point
cloud according to the fourth embodiment (Example 2);
[0036] FIG. 22 illustrates a screen showing an image of the point
cloud 491 including a road and curbs on each side of the road;
[0037] FIG. 23 shows a flow chart of a curb point cloud specifying
method according to the fourth embodiment (Example 2);
[0038] FIG. 24 shows the curb point cloud specified by the curb
point cloud specifying method according to the fourth embodiment
(Example 2);
[0039] FIG. 25 illustrates a method for specifying the ground
height 139a according to the fourth embodiment (Example 3 (1));
[0040] FIG. 26 illustrates a method for specifying the ground
height 139a according to the fourth embodiment (Example 3 (2));
and
[0041] FIG. 27 shows a configuration of a map data generating
system 801 according to a fifth embodiment.
DESCRIPTION OF EMBODIMENTS
Embodiment 1
[0042] An aerial image generating apparatus that generates an
aerial image of the ground based on a 3D point cloud indicating the
3D coordinates of each point on the ground will be described
according to a first embodiment.
[0043] FIG. 1 shows a configuration of a point cloud orthoimage
generating system 800 according to the first embodiment.
[0044] The configuration of the orthoimage generating system 800 of
the first embodiment will be discussed with reference to FIG.
1.
[0045] The orthoimage generating system 800 includes a mobile
measuring apparatus 200, a position and attitude localizing
apparatus 300, a point cloud generating apparatus 400, and a point
cloud orthoimage generating apparatus 100.
[0046] The mobile measuring apparatus 200 may be a mobile object
(e.g., a vehicle or airplane) equipped with a laser scanner 210, a
camera 220, a GPS receiver 230, a gyro 240, and an odometer
250.
[0047] The mobile measuring apparatus 200 acquires various kinds of
measurement data as the base of a 3D point cloud while moving on
the ground (or in the air).
[0048] The laser scanner 210 irradiates a laser beam towards a
point on the ground and then observes a laser pulse reflected off a
feature at the point. The laser scanner 210 measures the
orientation of the feature based on the direction of laser
irradiation, and also measures the distance to the feature based on
a period of time delay between irradiation of laser and detection
of reflected laser.
[0049] The laser scanner 210 is also called a laser radar or a
laser rangefinder (LRF).
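The time-of-flight ranging described in paragraph [0048] amounts to halving the round-trip distance travelled at the speed of light; a sketch (the function name is hypothetical):

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_delay(round_trip_delay_s: float) -> float:
    """Distance to the reflecting feature from the time delay between
    laser irradiation and detection of the reflected pulse."""
    return C * round_trip_delay_s / 2.0

# A 2 microsecond round trip corresponds to roughly 300 m.
d = range_from_delay(2e-6)
```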
[0050] Hereinafter, point cloud data indicating distance and
orientation to a feature at each point measured by the laser scanner
210, and a direction of laser irradiation will be referred to as a
"distance and orientation point cloud 291".
[0051] The camera 220 takes a picture of a feature at the site of
measurement of the laser scanner 210 (the point where the mobile
measuring apparatus 200 is located at the time of a laser
observation by the laser scanner 210) at the same time as the laser
scanner 210 measures the distance and orientation point cloud
291.
[0052] Hereinafter, image data taken by the camera 220 will be
referred to as a camera image 292.
[0053] The GPS receiver 230 observes positioning signals
transmitted from a plurality of Global Positioning System (GPS)
satellites at the same time as the laser scanner 210 measures the
distance and orientation point cloud 291. The GPS receiver 230 then
acquires information such as a navigation message indicated by a
positioning signal, the phase of a carrier wave to be used for
carrying a positioning signal, a pseudo distance indicating
distance between the GPS receiver 230 and a GPS satellite
calculated based on the transfer time of a positioning signal, and
a positioning result calculated based on the pseudo distance.
[0054] Hereinafter, the information acquired by the GPS receiver
230 will be referred to as "GPS observation information 293".
[0055] The gyro 240 measures an angular velocity in the three axial
directions (Roll, Pitch, and Yaw) of the mobile measuring apparatus
200 at the same time as the laser scanner 210 measures the distance
and orientation point cloud 291.
[0056] Hereinafter, the angular velocity in the three axial
directions measured by the gyro 240 will be referred to as a "gyro
measurement value 294".
[0057] The odometer 250 measures the amount of change in velocity
of the mobile measuring apparatus 200 at the same time as the laser
scanner 210 measures the distance and orientation point cloud
291.
[0058] Hereinafter, the amount of change in velocity measured by
the odometer 250 will be referred to as an "odometer measurement
value 295".
[0059] A measuring apparatus storing section 290 stores the
distance and orientation point cloud 291, the camera image 292, the
GPS observation information 293, the gyro measurement value 294,
and the odometer measurement value 295.
[0060] The distance and orientation point cloud 291, the camera
image 292, the GPS observation information 293, the gyro
measurement value 294, and the odometer measurement value 295 each
indicate a measurement time, and are correlated with one another by
the measurement time.
[0061] FIG. 2 shows an external view of the mobile measuring
apparatus 200 according to the first embodiment.
[0062] For example, the mobile measuring apparatus 200 may be built
as a vehicle 202 as shown in FIG. 2.
[0063] The laser scanner 210, the camera 220, the GPS receiver 230,
and the gyro 240 are installed and secured to a top panel 201
placed on a top portion of the vehicle 202. The odometer 250 is
placed in the vehicle 202. The figure shows an installation example
of the laser scanner 210 and the camera 220, which may
alternatively be installed at a front or rear portion of the
vehicle 202.
[0064] The vehicle 202 moves around on the roads in a target area
of measurement.
[0065] The laser scanner 210 is installed at a rear portion of the
vehicle 202. The laser scanner 210 irradiates laser beams towards
the rear and the sides of the vehicle 202 while oscillating through
substantially 240 degrees in the width direction of the vehicle 202
(the x-axis direction). The laser scanner 210 then observes returned
laser beams reflected from features located behind or to the sides
of the vehicle 202, and acquires the distance and orientation point
cloud 291 of the measured features in the target area of
measurement.
[0066] The camera 220 is installed at a front portion of the
vehicle 202. The camera 220 repeats taking pictures in the moving
direction of the vehicle 202 (in the z-axis direction), and
acquires the camera image 292 of the target area of
measurement.
[0067] The GPS receivers 230 are installed at three locations on
the top panel 201, and each acquire the GPS observation information
293 from a positioning signal received from a GPS satellite.
[0068] The gyro 240 measures the angular velocity of x-axis,
y-axis, and z-axis of the vehicle 202 to acquire the gyro
measurement value 294.
[0069] The odometer 250 measures the amount of change in velocity
of the mobile measuring apparatus 200 by counting the rotations of
the wheels to acquire the odometer measurement value 295.
[0070] Referring to FIG. 2, a point O indicates the coordinate
center of the mobile measuring apparatus 200 (hereinafter, referred
to as a navigation reference point). The coordinates of the mobile
measuring apparatus 200 means the coordinates of the point O. An
amount of displacement (hereinafter, referred to as offset) to a
point O from each of the laser scanner 210, the camera 220, the GPS
receiver 230, and the gyro 240 is measured in advance. The
coordinates of each of the laser scanner 210, the camera 220, the
GPS receiver 230, and the gyro 240 can be obtained by adding the
offset to the coordinates of the point O.
[0071] Hereinafter, a description will be given by assuming that
the coordinates of each of the laser scanner 210, the camera 220,
the GPS receiver 230, and the gyro 240 match the point O, and are
equivalent to the coordinates of the mobile measuring apparatus
200.
[0072] The line of sight of the camera 220 is assumed to be
equivalent to the attitude angle of the mobile measuring apparatus
200.
[0073] Referring to FIG. 1, the position and attitude localizing
apparatus 300 includes a position and attitude localizing section
310 and a localizing apparatus storing section 390, and calculates
the position and attitude of the mobile measuring apparatus 200 at
the time of measurement.
[0074] The position and attitude localizing section 310 calculates
the position (latitude, longitude, and height [altitude]) (East,
North, and Up) and the attitude angle (a roll angle, a pitch angle,
and a yaw angle) of the mobile measuring apparatus 200 at the time
of measurement by using Central Processing Unit (CPU) based on the
GPS observation information 293, the gyro measurement value 294 and
the odometer measurement value 295 acquired from the mobile
measuring apparatus 200.
[0075] For example, the position and attitude localizing section
310 treats the positioning result included in the GPS observation
information 293 as the position of the mobile measuring apparatus
200.
[0076] Alternatively, however, the position and attitude localizing
section 310 may calculate a pseudo-distance based on the phase of a
carrier wave included in the GPS observation information 293, and
then calculate the position of the mobile measuring apparatus 200
based on the calculated pseudo-distance.
[0077] Still alternatively, the position and attitude localizing
section 310 may calculate the position and attitude angle of the
mobile measuring apparatus 200 by dead reckoning based on the gyro
measurement value 294 and the odometer measurement value 295. Dead
reckoning is a method for estimating the current position and
attitude angle of an object by integrating the angular velocity and
the moving speed to obtain the amount of change since a past time,
and then adding that amount of change to the past position and
attitude angle.
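A two-dimensional sketch of one dead-reckoning step as described above, assuming constant yaw rate and speed over each interval (all names hypothetical):

```python
import math

def dead_reckon_step(pose, yaw_rate, speed, dt):
    """Integrate angular velocity and moving speed over dt to get the
    amount of change since the previous time, then add it to the
    previous position and heading."""
    x, y, yaw = pose
    return (x + speed * dt * math.cos(yaw),   # change in position ...
            y + speed * dt * math.sin(yaw),   # ... along the heading
            yaw + yaw_rate * dt)              # change in attitude angle

pose = (0.0, 0.0, 0.0)
for _ in range(10):                           # 1 s at 5 m/s, no turning
    pose = dead_reckon_step(pose, yaw_rate=0.0, speed=5.0, dt=0.1)
# pose is now about 5 m along the initial heading
```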
[0078] Hereinafter, the position and attitude angle of the mobile
measuring apparatus 200 calculated by the position and attitude
localizing section 310 will be referred to as a "position and
attitude localized value 391". The position and attitude localized
value 391 indicates the position and attitude angle of the mobile
measuring apparatus 200 at each time.
[0079] The localizing apparatus storing section 390 stores the
position and attitude localized value 391.
[0080] The point cloud generating apparatus 400 includes a 3D point
cloud generating section 410, a point cloud generating section 420,
and a point cloud generating apparatus storing section 490. The
point cloud generating apparatus 400 generates a 3D point cloud
indicating the 3D coordinates and color of each point on the
ground.
[0081] The 3D point cloud generating section 410 generates a 3D
point cloud 419a by using CPU based on the distance and orientation
point cloud 291 acquired by the mobile measuring apparatus 200 and
the position and attitude localized value 391 calculated by the
position and attitude localizing apparatus 300. More specifically,
the 3D point cloud generating section 410 generates the 3D point
cloud 419a indicating the 3D coordinates of each point of the
distance and orientation point cloud 291 by extracting the position
and attitude of the mobile measuring apparatus 200 of each point of
the distance and orientation point cloud 291 at each time of
measurement from the position and attitude localized value 391, and
then calculating the 3D coordinates of a point away from the
extracted position and attitude by the distance and orientation of
each point.
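The calculation in [0081], namely finding the 3D coordinates of a point located at a measured distance and orientation from the localized position and attitude, might be sketched as follows. The frame conventions, the angle parameterization, and all names are assumptions for illustration only.

```python
import math
import numpy as np

def laser_point_to_3d(vehicle_pos, vehicle_rot, distance, azimuth, elevation):
    """Convert one (distance, orientation) measurement into world coordinates.

    vehicle_pos: (3,) position of the mobile measuring apparatus at the
                 time of measurement (from the localized value)
    vehicle_rot: (3, 3) rotation matrix from the scanner frame to the world
                 frame, derived from the localized attitude angle
    """
    # Unit direction of the laser beam in the scanner frame.
    d = np.array([
        math.cos(elevation) * math.cos(azimuth),
        math.cos(elevation) * math.sin(azimuth),
        math.sin(elevation),
    ])
    # The measured point lies `distance` away along that direction.
    return vehicle_pos + vehicle_rot @ (distance * d)
```

Applying this to every point of the distance and orientation point cloud 291, with the pose interpolated at each point's measurement time, yields the 3D point cloud 419a.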
[0082] The point cloud generating section 420 generates the point
cloud 491 by using CPU based on the 3D point cloud 419a generated by
the 3D point cloud generating section 410 and the camera image 292
acquired by the mobile measuring apparatus 200. The point cloud 491
shows color in addition to 3D coordinates for each point, and is
therefore called a colored laser point cloud.
[0083] More specifically, the point cloud generating section 420
calculates, as the imaging plane of the camera 220, a plane
orthogonal to the imaging direction located at the focal distance,
along the imaging direction (the line of sight of the camera 220),
from the position where the image was taken. The imaging plane is
equal to the plane of the camera image 292. The point cloud
generating section 420 projects each point of the 3D point cloud
419a onto the camera image 292 (an imaging plane) based on the 3D
coordinates of each point of the 3D point cloud 419a, and treats
the color of each point as the color of the pixel of the camera
image 292 onto which each point is projected.
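The coloring step in [0083] can be sketched with a simple pinhole camera model, projecting a point given in camera coordinates onto the image and reading back the pixel color. The intrinsic parameters (fx, fy, cx, cy) and the function name are illustrative assumptions, not values from the patent.

```python
import numpy as np

def colorize_point(point_cam, image, fx, fy, cx, cy):
    """Return the color of the pixel a 3D point projects onto, or None.

    point_cam: (x, y, z) in camera coordinates, z along the line of sight
    image:     camera image 292 as an (H, W, 3) array
    fx, fy:    focal distance in pixels; cx, cy: principal point
    """
    x, y, z = point_cam
    if z <= 0:
        return None  # behind the camera, not visible in this image
    # Pinhole model: the imaging plane lies at the focal distance
    # along the viewing axis.
    u = int(round(fx * x / z + cx))
    v = int(round(fy * y / z + cy))
    h, w = image.shape[:2]
    if 0 <= u < w and 0 <= v < h:
        return tuple(image[v, u])
    return None
```

A real pipeline would first transform each point of the 3D point cloud 419a from world coordinates into camera coordinates using the camera's pose at the imaging time.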
[0084] The point cloud generating apparatus storing section 490
stores the point cloud 491.
[0085] The point cloud orthoimage generating apparatus 100 (an
example of an aerial image generating apparatus) includes a point
cloud projecting section 110 and an image generating apparatus
storing section 190. The orthoimage generating apparatus 100
generates an aerial image of a target area based on the point cloud
491.
[0086] The point cloud projecting section 110 generates an aerial
image of a target area by using CPU based on the point cloud 491
generated by the point cloud generating apparatus 400.
Specifically, the point cloud projecting section 110 calculates a
horizontal plane corresponding to the latitude and longitude of the
target area, and orthographically projects each point of the point
cloud 491 onto a calculated horizontal plane based on the 3D
coordinates of each point. More specifically, the point cloud
projecting section 110 treats the 3D coordinates (x, y, z) of each
point of the point cloud 491 as "z (height)=0", and arranges each
point at a part of the horizontal plane corresponding to the 2D
coordinates (x, y).
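The orthographic projection in [0086], which treats each point's height as zero and arranges the point on a horizontal grid by its 2D coordinates, can be sketched as follows; the grid parameters and names are illustrative assumptions.

```python
def orthoproject(points, origin, resolution, width, height):
    """Drop colored points (x, y, z, color) onto a horizontal image grid.

    The z coordinate is simply discarded, i.e. each point is treated as
    (x, y, 0) and placed at the cell covering its (x, y) coordinates.
    origin: (x, y) world coordinates of the grid corner
    resolution: cell size in world units
    """
    image = [[None] * width for _ in range(height)]
    ox, oy = origin
    for x, y, z, color in points:
        col = int((x - ox) / resolution)
        row = int((y - oy) / resolution)
        if 0 <= col < width and 0 <= row < height:
            image[row][col] = color  # later points overwrite earlier ones
    return image
```

This sketch ignores axis orientation (e.g. flipping rows so north is up) and averaging of multiple points per cell, both of which a production implementation would handle.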
[0087] For example, the point cloud projecting section 110
calculates an imaging plane, assuming that the image has been taken
by a camera directed vertically downward from a predetermined
position in the sky above the target area, and orthographically
projects each point of the point cloud 491 onto the calculated
imaging plane.
The 3D coordinates of the predetermined viewpoint are the latitude
and longitude of the center of the measuring area, and a
predetermined height. Each point of the point cloud 491 is
projected onto a part of an imaging plane having the same latitude
and longitude.
[0088] The horizontal plane onto which each point of the point
cloud 491 is orthographically projected shows an image of the
measured area viewed vertically downward from the sky.
[0089] Hereinafter, a bitmap image on a horizontal plane onto which
each point of the point cloud 491 is orthographically projected will
be referred to as a "point cloud orthoimage 191 (an example of an
aerial image)".
[0090] Alternatively, however, the plane onto which each point of
the point cloud 491 is orthographically projected is not limited to
the horizontal plane, and may be a plane inclined to the horizontal
plane. In this case, the plane onto which each point of the point
cloud 491 is orthographically projected shows an image of the
measuring area viewed diagonally downward from the sky (an example
of an aerial image).
[0091] Still alternatively, the type of projection used for
projecting the point cloud 491 by the point cloud projecting
section 110 is not limited to the orthographical projection, and
may be a central projection, for example.
[0092] FIG. 3 shows example hardware resources of the orthoimage
generating apparatus 100 according to the first embodiment.
[0093] Referring to FIG. 3, the orthoimage generating apparatus 100
includes a CPU 911 (also referred to as a Central Processing Unit, a
central processor, a processing unit, an arithmetic unit, a
microprocessor, a microcomputer, or a processor). The CPU 911 is
coupled to a ROM 913, a RAM 914, a communication board 915, a
display unit 901, a keyboard 902, a mouse 903, a Flexible Disk
Drive (FDD) 904, a compact disk drive (CDD) 905, a printer unit
906, a scanner unit 907, and a magnetic disk drive 920 via a bus
912, and controls those hardware devices. The magnetic disk drive
920 may be replaced by a storage device such as an optical disk
drive, or a memory card read/write drive.
[0094] The RAM 914 is an example of a volatile memory. The storage
media of the ROM 913, the FDD 904, the CDD 905, and the magnetic
disk drive 920 are examples of nonvolatile memories. Those devices
are examples of storage equipment, storage units, or storing
sections.
[0095] The communication board 915, the keyboard 902, the scanner
unit 907, and the FDD 904 are examples of input equipment, input
units, or input sections.
[0096] The communication board 915, the display unit 901, and the
printer unit 906 are examples of output equipment, output units, or
output sections.
[0097] The communication board 915 is connected to a communication
network such as a Local Area Network (LAN), the Internet, a Wide
Area Network (WAN) such as ISDN, or a telephone line, with or
without wires.
[0098] The magnetic disk drive 920 stores an Operating System (OS)
921, a window system 922, a program group 923, and a file group
924. The programs of the program group 923 are executed by the CPU
911, the OS 921, and the window system 922.
[0099] The program group 923 stores a program for executing a
function described as a "section" in the description of this and
the following embodiments. The program is read and executed by the
CPU 911.
[0100] The file group 924 stores resultant data obtained by
executing the function of a "section", such as a "judgment result",
a "calculation result", or a "processing result"; data to be
exchanged between programs for executing the functions of
"sections"; and other information, data, signal values, variable
values, and parameters described in this and the following
embodiments, as individual items such as a "file" or a "database".
[0101] A "file" and a "database" are stored in a storage medium
such as a disk or a memory. Information, data, a signal value, a
variable value, and a parameter stored in a storage medium such as
a disk or a memory are read into a main memory or a cache memory by
the CPU 911 via a read/write circuit, and used in a CPU operation
for extraction, search, reference, comparison, computation,
calculation, processing, output, print, display, or the like.
During a CPU operation for extraction, search, reference,
comparison, computation, calculation, processing, output, print,
display or the like, information, data, a signal value, a variable
value, or a parameter is stored temporarily in a main memory, a
cache memory, or a buffer memory.
[0102] An arrow shown in a flow chart described in this and the
following embodiments primarily indicates an input/output of data
or a signal. Data or a signal value is stored in a storage medium
such as a memory of the RAM 914, a flexible disk of the FDD 904, a
compact disk of the CDD 905, a magnetic disk of the magnetic disk
drive 920, an optical disk, a mini disk, a Digital Versatile disc
(DVD), or the like. Data or a signal value is transmitted online
via the bus 912, a signal line, a cable, or other transmission
media.
[0103] A "section" described in this and the following embodiments
may be a "circuit", a "device", a "piece of equipment", or a
"means". A "section" may otherwise be a "step", a "procedure", or a
"process". More specifically, a "section" described in this and the
following embodiments may be implemented by firmware stored in the
ROM 913. Alternatively, a "section" described in this and the
following embodiments may be implemented solely by software; or
solely by hardware such as an elemental device, a device, a
substrate, wiring or the like; or by a combination of software and
hardware; or by a combination of software, hardware and firmware.
Firmware and software may be stored as a program in a storage
medium, such as a magnetic disk, a flexible disk, an optical disk,
a compact disk, a mini disk, a DVD, or the like. A program is read
and executed by the CPU 911. Specifically, a program causes a
computer to function as a "section", or causes a computer to
execute the procedure or method of a "section".
[0104] Like the orthoimage generating apparatus 100, the mobile
measuring apparatus 200, the position and attitude localizing
apparatus 300, and the point cloud generating apparatus 400 include
a CPU and a memory, and execute a function described as a
"section".
[0105] FIG. 4 shows a flow chart of a point cloud orthoimage
generating method according to the first embodiment.
[0106] A point cloud orthoimage generating method of the orthoimage
generating system 800 of the first embodiment will be described
below with reference to FIG. 4.
[0107] The mobile measuring apparatus 200, the position and
attitude localizing apparatus 300, the point cloud generating
apparatus 400, the orthoimage generating apparatus 100 and the
"sections" of those apparatuses execute the following processes by
using the CPU.
<S110: Distance and Orientation Point Cloud Measuring
Process>
[0108] First, the vehicle 202 carrying the mobile measuring
apparatus 200 moves around in a target area.
[0109] While the vehicle 202 is moving around in the target area, the
laser scanner 210, the camera 220, the GPS receiver 230, the gyro
240 and the odometer 250 installed in the mobile measuring
apparatus 200 perform measurements and acquire the distance and
orientation point cloud 291, the camera image 292, the GPS
observation information 293, the gyro measurement value 294, and
the odometer measurement value 295.
<S120: Position and Attitude Localizing Process>
[0110] Then, the position and attitude localizing section 310 of
the position and attitude localizing apparatus 300 calculates the
position and attitude localized value 391 based on the GPS
observation information 293, the gyro measurement value 294, and
the odometer measurement value 295 acquired in S110.
[0111] The position and attitude localized value 391 indicates the
3D coordinates and 3D attitude angle of the mobile measuring
apparatus 200 at each time when the mobile measuring apparatus 200
moves in the target area.
<S130: Point Cloud Generating Process>
[0112] Then, the 3D point cloud generating section 410 of the point
cloud generating apparatus 400 generates the 3D point cloud 419a
based on the distance and orientation point cloud 291 acquired in
S110 and the position and attitude localized value 391 calculated
in S120. The point cloud generating section 420 of the point cloud
generating apparatus 400 generates the point cloud 491 based on the
3D point cloud 419a and the camera image 292 acquired in S110.
[0113] The 3D point cloud 419a indicates the 3D coordinates of each
point of the distance and orientation point cloud 291. Each point
of the 3D point cloud 419a corresponds to a point of the distance
and orientation point cloud 291.
[0114] The 3D point cloud generating section 410 extracts from the
position and attitude localized value 391 the position and attitude
of the mobile measuring apparatus 200 at the time of measurement of
each point of the distance and orientation point cloud 291. The 3D
point cloud generating section 410 then calculates as the 3D
coordinates of each point the 3D coordinates of a point away from
an extracted position and attitude by the distance and orientation
of each point.
[0115] The point cloud 491 indicates the 3D coordinates and color
of each point of the 3D point cloud 419a. Each point of the point
cloud 491 corresponds to a point of the 3D point cloud 419a and a
point of the distance and orientation point cloud 291.
[0116] The point cloud generating section 420 projects each point
of the 3D point cloud 419a onto the camera image 292 based on the
3D coordinates of each point and treats the color of each point as
the color of a pixel onto which each point of the 3D point cloud
419a is projected.
[0117] Alternatively, however, the point cloud 491 may not be
colored by the point cloud generating section 420. The point cloud
491 may indicate black and white information (grayscale)
corresponding to an observed brightness of reflected laser. For
example, the point cloud generating section 420 may assign a whiter
color to a point of the point cloud 491 having a higher brightness
of reflected laser, and a darker color to a point having a lower
brightness of reflected laser.
<S140: Point Cloud Orthoimage Generating Process>
[0118] The point cloud projecting section 110 of the orthoimage
generating apparatus 100 generates the point cloud orthoimage 191
based on the point cloud 491 generated in S130.
[0119] The point cloud orthoimage 191 shows an image of a target
area viewed vertically downward from the sky.
[0120] The point cloud projecting section 110 treats as the point
cloud orthoimage 191 of the target area an image obtained by
orthographically projecting each point of the point cloud 491 onto
a horizontal plane corresponding to the latitude and longitude of
the target area.
[0121] The plane, onto which each point of the point cloud 491 is
orthographically projected, however, may not be limited to the
horizontal plane. The plane may alternatively be inclined to the
horizontal plane.
[0122] Still alternatively, the type of projection used for
projecting the point cloud 491 by the point cloud projecting
section 110 may not be limited to orthographical projection.
Central projection may be used instead, for example.
[0123] Examples of the point cloud orthoimage 191 generated by the
point cloud orthoimage generating method (S110 to S140) will be
described below.
[0124] FIG. 5 shows a road map of an area (a target area) in which
the mobile measuring apparatus 200 has moved around.
[0125] It is assumed, for example, that the mobile measuring
apparatus 200 performs measurements while moving around in the area
shown in FIG. 5, and acquires the distance and orientation point
cloud 291,
the camera image 292, the GPS observation information 293, the gyro
measurement value 294, and the odometer measurement value 295, in
the distance and orientation point cloud measuring process
(S110).
[0126] FIG. 6 shows the point cloud orthoimage 191 of the target
area (FIG. 5).
[0127] The point cloud projecting section 110 orthographically
projects the point cloud 491, thereby obtaining the point cloud
orthoimage 191 shown in FIG. 6, in the point cloud orthoimage
generating process (S140).
[0128] As shown in FIG. 6, the point cloud orthoimage 191 matches
the road map shown in FIG. 5. The point cloud orthoimage 191 may
show the road of the target area with high accuracy corresponding
to the measuring accuracy of the mobile measuring apparatus 200 and
the localizing accuracy of the position and attitude localizing
apparatus 300.
[0129] FIG. 7 and FIG. 8 show examples of close-up aerial images of
different intersections, generated from the point cloud 491.
[0130] In the point cloud orthoimage generating process (S140),
when the point cloud 491 is projected onto a plane inclined to a
horizontal plane (or when the point cloud orthoimage 191 is rotated
about a horizontal axis by image processing), an aerial image such
as those shown in FIG. 7 and FIG. 8 may be generated.
[0131] As shown in FIG. 7 and FIG. 8, various kinds of features,
such as an intersection, a house, a parked vehicle, and a
pedestrian crossing, may be shown by an aerial image generated by
projecting the point cloud 491. Each of the features shown in the
aerial image is displayed with a high degree of accuracy in
position and size corresponding to the measuring accuracy of the
mobile measuring apparatus 200 and the localizing accuracy of the
position and attitude localizing apparatus 300.
[0132] The orthoimage generating apparatus 100 may thus generate,
with a high degree of accuracy by projecting the point cloud 491
onto a plane, the image (the point cloud orthoimage 191, an aerial
image, etc.) of a target area viewed from an angle (vertically
downwards, obliquely downwards, etc.) not actually used by the
camera when the photograph was taken.
[0133] The orthoimage generating apparatus 100 of the first
embodiment may also be described as follows.
[0134] The orthoimage generating apparatus 100 detects with a high
degree of accuracy features such as a sign, a white line, a road
surface mark, a manhole, a curb, a power pole, a pole, a
streetlight, an electric wire, and a wall by using a laser point
cloud (the distance and orientation point cloud 291) acquired by
the mobile measuring apparatus 200.
[0135] The point cloud 491 includes 3D position information (3D
coordinates) for each point. Accordingly, the orthoimage generating
apparatus 100 is allowed to generate an image viewed from an
arbitrary direction by arranging the point clouds 491 in series.
Therefore, the orthoimage generating apparatus 100 may generate the
point cloud orthoimage 191 equivalent to the orthoimage of an
aerial photo when the point clouds 491 arranged in series are
viewed from directly above. The point cloud orthoimage 191 is less
distorted compared to an orthoimage generated by using a camera
image taken from a vehicle, with a high degree of accuracy and a
wide viewing angle.
[0136] The point cloud orthoimage 191 shows features, such as a
white line, a curb, and a wall surface, clearly. The point cloud
orthoimage 191 may therefore be used for generating a road map. For
example, the point cloud orthoimage 191 may be pasted as a
background on a CAD (Computer Aided Design) image. If each feature
appearing on the point cloud orthoimage 191 is traced with lines, a
current road map may be generated at high speed. It is also
possible to extract each feature from the point cloud orthoimage
191 by image processing and generate a road map automatically.
[0137] Patent Document 1 (JP 2007-218705 A) discloses a method for
calculating the position and attitude of a measurement carriage
(S101 of Patent Document 1) based on various kinds of measurement
data acquired by the measurement carriage to generate a road
surface shape model (a 3D point cloud) (S106 of Patent Document 1).
Patent Document 1 also discloses a method for projecting a road
surface shape model (a 3D point cloud) onto a camera image (S107 of
Patent Document 1).
[0138] The mobile measuring apparatus 200 corresponds to the
measurement carriage of Patent Document 1. The position and
attitude localizing apparatus 300 corresponds to the vehicle
position and attitude (triaxial) calculating section of Patent
Document 1. The point cloud generating apparatus 400 corresponds to
the road surface shape model generating section of Patent Document
1.
Embodiment 2
[0139] A description will now be given of a second embodiment in
which the point cloud orthoimage 191 allows the whole road to be
visible without being covered by trees, tunnels, and the like.
[0140] Hereinafter, a description will be given primarily of
elements that are different from those discussed in the first
embodiment, and therefore elements that will not be elaborated
below are assumed to be the same as those discussed in the first
embodiment.
[0141] FIG. 9 shows a configuration of the orthoimage generating
apparatus 100 according to the second embodiment.
[0142] The configuration of the orthoimage generating apparatus 100
of the second embodiment will be described below with reference to
FIG. 9.
[0143] The orthoimage generating apparatus 100 (an example of an
aerial image generating apparatus) includes the point cloud
projecting section 110, a predetermined height point cloud
extracting section 120, a ground height specifying section 130, a
point cloud orthoimage display section 140, a camera image display
section 150, and the storing section 190.
[0144] The ground height specifying section 130 specifies a ground
height 139a by using CPU based on the height (altitude) indicated by
the 3D coordinates of each point of the point cloud 491 (an example
of a 3D point cloud) generated by the point cloud generating
apparatus 400.
[0145] The predetermined height point cloud extracting section 120
extracts every point whose height is within a predetermined range
from the point cloud 491 based on the 3D coordinates of each point
of the point cloud 491 (an example of a 3D point cloud) generated
by the point cloud generating apparatus 400.
[0146] More specifically, the point cloud extracting section 120
extracts every point whose height from the ground is the same or
lower than a predetermined height based on the ground height 139a
specified by the ground height specifying section 130.
[0147] Hereinafter, each point extracted from the point cloud 491
by the point cloud extracting section 120 will be referred to as a
predetermined height point cloud 129a.
[0148] The point cloud projecting section 110 (an example of a 3D
point cloud projecting section) generates the point cloud
orthoimage 191 (an example of an aerial image) by projecting each
point of the predetermined height point cloud 129a onto a plane by
using CPU based on the 3D coordinates of each point of the
predetermined height point cloud 129a extracted from the point
cloud 491 by the point cloud extracting section 120.
[0149] The point cloud orthoimage display section 140 (an example
of an aerial image display section) displays the point cloud
orthoimage 191 generated by the point cloud projecting section 110
on the display unit 901.
[0150] The camera image display section 150 specifies a point
projected onto a designated portion of the point cloud orthoimage
191 displayed by the orthoimage display section 140, by using CPU,
and displays on the display unit 901 a camera image 292 taken at a
site of measurement where the specified point has been
measured.
[0151] The storing section 190 stores the camera image 292 acquired
by the mobile measuring apparatus 200 and the point cloud
orthoimage 191 generated by the point cloud projecting section
110.
[0152] FIG. 10 shows a flow chart of the point cloud orthoimage
generating process (S140) according to the second embodiment.
[0153] The flow of the point cloud orthoimage generating process
(S140) of the second embodiment will be described below with
reference to FIG. 10.
[0154] The "sections" of the orthoimage generating apparatus 100
execute processes described below by using CPU.
<S141A: Ground Height Specifying Process>
[0155] First, the ground height specifying section 130 specifies
the ground height 139a based on the height of the 3D coordinates of
each point of the point cloud 491.
[0156] When specifying the ground height 139a of a specific zone,
for example, the ground height specifying section 130 extracts the
point having the lowest height of all the points whose latitude and
longitude fall within the specific zone, and treats the height of
the extracted point as the ground height 139a of that specific
zone.
[0157] A method for specifying the ground height 139a will be
elaborated in a fourth embodiment.
<S142A: Predetermined Height Point Cloud Extracting
Process>
[0158] Then, the point cloud extracting section 120 treats as a
reference height the ground height 139a specified in S141A, and
extracts as the predetermined height point cloud 129a every point
whose height from the ground height 139a is the same or lower than
a predetermined height from the point cloud 491.
[0159] Specifically, the predetermined height point cloud 129a is
obtained by removing every point whose height from the ground
height 139a is higher than the predetermined height from the point
cloud 491.
[0160] For example, if the predetermined height is "50 cm", then
the point cloud extracting section 120 extracts as the
predetermined height point cloud 129a every point at which the
height indicated by the 3D coordinates is the same or lower than
"(the ground height 139a)+50 [cm]" from the point cloud 491. If the
ground height 139a is specified for each zone, the point cloud
extracting section 120 extracts the predetermined height point
cloud 129a for each zone.
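The extraction of [0158]-[0160], keeping only points at or below a predetermined height (e.g. 50 cm) above the per-zone ground height 139a, can be sketched as follows; names and the zoning scheme are illustrative assumptions.

```python
def extract_low_points(points, ground, zone_size, max_height=0.5):
    """Extract the predetermined height point cloud 129a.

    Keeps only points whose height above the ground height of their zone
    is at most max_height (0.5 m here), removing points on trees, tunnel
    ceilings, and other features above the road surface.

    ground: mapping from zone index to ground height, e.g. the output of
            a per-zone lowest-point estimate
    """
    kept = []
    for x, y, z in points:
        zone = (int(x // zone_size), int(y // zone_size))
        if z <= ground[zone] + max_height:
            kept.append((x, y, z))
    return kept
```

Projecting only the points this filter keeps is what makes the road surface in FIG. 11 and FIG. 12 visible without occluding features.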
<S143A: 3D Point Cloud Projecting Process>
[0161] Then, the point cloud projecting section 110 generates the
point cloud orthoimage 191 by orthographically projecting each
point of the predetermined height point cloud 129a extracted in
S142A onto a horizontal plane.
[0162] However, the plane onto which each point of the
predetermined height point cloud 129a is orthographically projected
is not limited to the horizontal plane, and the type of projection
by the point cloud projecting section 110 is not limited to the
orthographical projection, either.
<S144A: Aerial Image Display Process>
[0163] Then, the orthoimage display section 140 displays on the
display unit 901 the point cloud orthoimage 191 generated in
S143A.
[0164] Then, it is assumed that a user designates an image portion
of the point cloud orthoimage 191 displayed on the display unit 901
by using the mouse 903, the keyboard 902, or the like, in order to
confirm the camera image 292 of that image portion.
<S145A: Camera Image Display Process>
[0165] The camera image display section 150 displays on the display
unit 901 the camera image 292 corresponding to the image portion
designated by the user.
[0166] Specifically, the camera image display section 150 specifies
a point of the point cloud 491 projected onto the image portion
designated by the user, specifies the camera image 292 taken at the
site of measurement of the specified point (hereinafter referred to
as a specified point), and displays the specified camera image 292
on the display unit 901.
[0167] The specified camera image 292 is the camera image 292 taken
at the time when the specified point was measured. The time when
the specified point was measured is the time when the point of the
distance and orientation point cloud 291 as the original data of
the specified point was measured.
[0168] This allows the user to recognize, through the camera image
292 displayed on the display unit 901, a feature that is difficult
to identify in the point cloud orthoimage 191.
[0169] Examples of the point cloud orthoimage 191 generated by the
point cloud orthoimage generating processes (S141A to S143A) will
be described below.
[0170] FIG. 11 shows the point cloud orthoimage 191 of a target
area b (FIG. 6). FIG. 12 shows the point cloud orthoimage 191 of a
target area a (FIG. 6). Both the point cloud orthoimages 191 of
FIG. 11 and FIG. 12 are obtained by orthographically projecting the
predetermined height point cloud 129a whose height from the ground
height 139a is the same or lower than 50 cm, exclusively.
[0171] In the predetermined height point cloud extracting process
(S142A), every point whose height from the ground height 139a is
higher than 50 cm is removed from the point cloud 491, and every
point whose height from the ground height 139a is the same or lower
than 50 cm is exclusively extracted as the predetermined height
point cloud 129a.
[0172] As a result, the roads shown in FIG. 11 and FIG. 12 are not
hidden or covered by features such as trees and tunnels, and thus
white lines, road boundaries and the like are clearly visible.
[0173] The orthoimage generating apparatus 100 of the second
embodiment may also be described as follows.
[0174] Aerial photos used as road images have failed to show
portions of a road hidden or covered beneath a tree, inside a
tunnel, and the like when viewed from directly above.
[0175] The orthoimage generating apparatus 100, on the other hand,
is configured to generate the point cloud orthoimage 191 with the
limited use of the point cloud 491 so that the point
cloud whose height from the road surface (the ground height 139a)
is the same or lower than the predetermined height (the
predetermined height point cloud 129a) is exclusively used. This
results in removing features such as trees and tunnels covering or
hiding a road, and thereby showing the whole surface of a road.
[0176] Furthermore, the vehicle 202 carrying the mobile measuring
apparatus 200 moves while keeping a predetermined distance (e.g.,
approximately 5 m) from other vehicles. This may contribute to
acquiring the distance and orientation point cloud 291 without
measurement values of other vehicles. As a result, the orthoimage
generating apparatus 100 may generate the point cloud orthoimage
191 showing no moving vehicles by using the point cloud 491
generated based on the distance and orientation point cloud 291
including no measurement values of other vehicles.
[0177] Thus, the orthoimage generating apparatus 100 is capable of
generating clear road images including no obscure portions.
[0178] The orthoimage generating apparatus 100 may also show the
camera image 292 in conjunction with the point cloud orthoimage 191
if a user has difficulty in identifying the type of a feature or
the writing on a feature (e.g., a power pole, a streetlight, or a
sign) in the point cloud orthoimage 191. For example, the
orthoimage generating apparatus 100 may display the point cloud
orthoimage 191 and the camera image 292 on a CAD screen by linking
them together. Specifically, the orthoimage generating apparatus
100 may retrieve the camera image 292 taken around the time when a
point was measured. This may allow a user to identify whether a
feature shown in the point cloud orthoimage 191 is a power pole, a
streetlight, or a sign. Alternatively, the orthoimage generating
apparatus 100 may be configured to extract the content (e.g., the
text or graphics on a sign, etc.) of a sign or the like by
processing the camera image 292, and then display the extracted
information in conjunction with the camera image 292. This may
allow a user not only to discriminate between a power pole and a
streetlight, but also to confirm the content or writing on a sign
or the like.
Embodiment 3
[0179] A description will now be given of a third embodiment in
which the point cloud orthoimage 191 allows a feature in a standing
condition to be specified.
[0180] Hereinafter, a description will be given primarily of
elements that are different from those discussed in the first and
second embodiments, and therefore elements that will not be
elaborated below are assumed to be the same as those discussed in
the first and second embodiments.
[0181] FIG. 13 shows a configuration of the orthoimage generating
apparatus 100 according to the third embodiment.
[0182] The configuration of the orthoimage generating apparatus 100
of the third embodiment will be described below with reference to
FIG. 13.
[0183] The point cloud orthoimage generating apparatus 100 (an
example of the aerial image generating apparatus) includes the
point cloud projecting section 110, the orthoimage display section
140, the camera image display section 150, a point density
calculating section 160, a standing feature specifying section 170,
a standing feature discriminating section 180, and the storing
section 190.
[0184] The point density calculating section 160 calculates, by
using CPU, the point density 169a of the point cloud orthoimage 191
generated by the point cloud projecting section 110, for each of
the zones into which the point cloud orthoimage 191 is divided at a
predetermined size.
[0185] The standing feature specifying section 170 specifies a
portion of the point cloud orthoimage 191 showing a feature in a
standing condition or a standing feature, by using CPU, based on
the point density 169a calculated by the point density calculating
section 160.
[0186] Hereinafter, an image portion specified by the standing
feature specifying section 170 will be referred to as a standing
feature image portion 179a.
[0187] The standing feature discriminating section 180 generates,
by using CPU, the point cloud orthoimage 191 in which the standing
feature image portion 179a specified by the standing feature
specifying section 170 is discriminated from other image
portions.
[0188] FIG. 14 shows a flow chart of the point cloud orthoimage
generating process (S140) according to the third embodiment.
[0189] The flow of the point cloud orthoimage generating process
(S140) of the third embodiment will be described below with
reference to FIG. 14.
[0190] The "sections" of the orthoimage generating apparatus 100
execute processes explained below by using CPU.
<S141B: 3D Point Cloud Projecting Process>
[0191] The point cloud projecting section 110 generates the point
cloud orthoimage 191 by projecting the point cloud 491 onto a
horizontal plane.
[0192] Hereinafter, a horizontal plane onto which the point cloud
491 is orthographically projected will be referred to as a
"projected plane".
<S142B: Point Density Calculating Process>
[0193] The point density calculating section 160 divides the
projected plane into zones of a predetermined size, and calculates
the point density 169a of the point cloud 491 for each zone.
[0194] Each zone is minute in size. The size is approximately "30
cm × 30 cm" in the real world, not a size within an image, for
example. One pixel of the point cloud orthoimage 191 may
correspond to one zone, for example.
[0195] The "point density 169a" is assumed to be the number of
points of the point cloud 491 projected onto the minute zone.
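The zone counting of S142B can be sketched as follows. This is a minimal Python illustration, assuming points are given as (x, y, z) tuples in metres; the function name `point_density` and the data layout are assumptions for illustration, not from the application.

```python
from collections import defaultdict

def point_density(points, zone_size=0.3):
    """Count projected points per minute zone of the projected plane.

    points: iterable of (x, y, z) coordinates in metres.
    zone_size: real-world edge length of one zone (here 30 cm).
    """
    density = defaultdict(int)
    for x, y, _z in points:
        # Only the horizontal position decides the zone; height is ignored,
        # so a vertical feature stacks many points into one zone.
        zone = (int(x // zone_size), int(y // zone_size))
        density[zone] += 1
    return density

cloud = [(0.1, 0.1, 0.0), (0.1, 0.12, 1.0), (0.1, 0.1, 2.0), (1.0, 1.0, 0.0)]
d = point_density(cloud)
# The first three points share one zone (a vertical feature); the last is alone.
```

A standing feature measured at many heights thus yields a high count in its zone, while a flat road surface yields roughly one point per zone.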
<S143B: Standing Feature Specifying Process>
[0196] The standing feature specifying section 170 specifies as the
standing feature image portion 179a each minute zone whose point
density 169a calculated in S142B is equal to or higher than a
predetermined number.
[0197] The laser scanner 210 performs measurements in the height
direction on the sides of the vehicle 202, and therefore a feature
having height (hereinafter, referred to as the standing feature),
such as a wall surface, a power pole, or a streetlight, is measured
at a plurality of points in the height direction. A feature having
no height, on the other hand, such as a road surface is measured at
one point in the height direction. Therefore, the point density
169a of a standing feature is higher than the point density 169a of
a road surface. Given this fact, the standing feature specifying
section 170 specifies as the standing feature image portion 179a a
minute zone whose point density 169a is equal to or higher than a
predetermined number.
[0198] For example, the standing feature specifying section 170 may
specify as the standing feature image portion 179a a minute zone
onto which ten or more points are projected.
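The threshold test of S143B amounts to a simple filter over the per-zone densities. A sketch in Python, using the ten-point example above; the function name is an assumption:

```python
def standing_zones(density, threshold=10):
    """Return the zones (standing feature image portions 179a) whose
    point density 169a is equal to or higher than the threshold."""
    return {zone for zone, count in density.items() if count >= threshold}

# A zone holding 12 projected points is specified; one holding 3 is not.
```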
<S144B: Standing Feature Discriminating Process>
[0199] The standing feature discriminating section 180 generates
the point cloud orthoimage 191 in which the standing feature image
portion 179a specified in S143B is discriminated from other image
portions.
[0200] For example, the standing feature discriminating section 180
may assign a predetermined color to the standing feature image
portion 179a.
[0201] Alternatively, the standing feature discriminating section
180 may assign different colors between the standing feature image
portion 179a and other image portions (e.g., "red" for the standing
feature image portion 179a and "black" for other image portions),
for example.
[0202] Still alternatively, the standing feature discriminating
section 180 may add a specific mark to the standing feature image
portion 179a, for example.
[0203] Still alternatively, the standing feature discriminating
section 180 may divide the standing feature image portion 179a by
point density, and then add different colors or marks to the
standing feature image portion 179a for each point density. The
standing feature discriminating section 180 may assign the colors
"white", "green", and "red" in order from high to low point
density, for example.
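One possible density-to-color mapping is sketched below; the band edges (10 and 30 points) are illustrative assumptions, not values given in the application:

```python
def zone_color(point_density):
    """Map a minute zone's point density 169a to a display color,
    from low density to high: "red" -> "green" -> "white"
    (band edges assumed for illustration)."""
    if point_density >= 30:
        return "white"
    if point_density >= 10:
        return "green"
    return "red"
```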
<S145B: Aerial Image Display Process>
[0204] The orthoimage display section 140 displays the point cloud
orthoimage 191 generated in S144B on the display unit 901.
<S146B: Camera Image Display Process>
[0205] The camera image display section 150 displays the camera
image 292 corresponding to an image portion designated by a user on
the display unit 901 in the same manner as that of S145A (FIG.
10).
[0206] The orthoimage generating apparatus 100 may specify a
standing feature in the point cloud orthoimage 191 by calculating
the point density of a minute zone. This allows a user to know the
position of a standing feature such as a power pole.
[0207] The orthoimage generating apparatus 100 may be provided with
the ground height specifying section 130 and the point cloud
extracting section 120, like the second embodiment.
[0208] The point cloud extracting section 120 uses as the
reference height the ground height 139a specified by the ground
height specifying section 130, and extracts from the point cloud
491, as the predetermined height point cloud 129a, every point
whose height from the ground height 139a is equal to or higher
than a predetermined height. In other words, the predetermined
height point cloud 129a is obtained by removing from the point
cloud 491 every point whose height from the ground height 139a is
lower than the predetermined height.
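The extraction of the predetermined height point cloud 129a can be sketched as a height filter. Here `ground_height` is assumed to be a callable returning the ground height 139a at a horizontal position; the names and the 50 cm default follow the example below but are otherwise hypothetical.

```python
def extract_predetermined_height_points(points, ground_height, min_height=0.5):
    """Keep every point whose height above the ground height 139a is
    equal to or higher than min_height (e.g. 50 cm), i.e. remove the
    road surface and anything close to it."""
    return [(x, y, z) for (x, y, z) in points
            if z - ground_height(x, y) >= min_height]

flat_ground = lambda x, y: 0.0          # assumed flat ground for the example
pts = [(0.0, 0.0, 0.1), (0.0, 0.0, 1.2)]
kept = extract_predetermined_height_points(pts, flat_ground)
# Only the 1.2 m point (a standing feature) survives the filter.
```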
[0209] The point cloud projecting section 110 generates the point
cloud orthoimage 191 based on the predetermined height point cloud
129a extracted by the point cloud extracting section 120.
[0210] Contrary to the second embodiment, the point cloud
projecting section 110 may generate the point cloud orthoimage 191
showing no road surface, because the point cloud extracting
section 120 removes every point whose height from the ground
height 139a is lower than "50 cm", for example.
[0211] This allows the orthoimage generating apparatus 100 to
specify a standing feature accurately and allows a user to identify
the standing feature without difficulty.
[0212] The following are examples of the point cloud orthoimage 191
onto which the predetermined height point cloud 129a whose height
from the ground height 139a is 50 cm or higher is orthographically
projected.
[0213] FIG. 15 shows the point cloud orthoimage 191 of the target
area b (FIG. 6), and FIG. 16 shows a part of the target area b in
close-up.
[0214] FIG. 17 shows the point cloud orthoimage 191 of the target
area a (FIG. 6), and FIG. 18 shows a part of the target area a in
close-up.
[0215] FIG. 15 to FIG. 18 each show the point cloud orthoimage 191
onto which the predetermined height point cloud 129a whose height
from the ground height 139a is 50 cm or higher is orthographically
projected exclusively, and therefore show no road surfaces.
[0216] As illustrated in FIG. 16 (a close-up of the area enclosed
by the dashed line in FIG. 15) and FIG. 18 (a close-up of the area
enclosed by the dashed line in FIG. 17), the point cloud
orthoimage 191 clearly shows features such as a tree, a wall, an
electric wire, and a power pole.
[0217] The point cloud orthoimage 191 may be shown in different
colors according to point density. For example, the color may
change from "red" through "green" to "white" as the density
becomes higher. A feature standing upright, like a power pole, has
a high point density and may therefore be indicated by a green
point or a white point.
[0218] The use of the point cloud orthoimage 191 thus described may
facilitate calculating the position of a standing feature, and
thereby contribute to a greater reduction in time and labor in
calculating the position of a standing feature, compared to the
usual manual work.
[0219] FIG. 18 shows electric wires extending from the walls of
houses and power poles. An electric wire may be used as a tool for
discriminating between a power pole and a streetlight. If it
cannot be determined whether a target feature is a power pole, a
streetlight, or another standing feature, a user may operate the
orthoimage generating apparatus 100 to display the camera image
292 and confirm the target feature on the camera image 292. This
allows the feature to be identified correctly.
[0220] The orthoimage generating apparatus 100 of the third
embodiment may also be described as follows.
[0221] The point cloud orthoimage 191 shows an image viewed from
directly above. Therefore, a three-dimensional feature (a standing
feature) whose color is similar to the color of the ground, such as
a power pole, becomes invisible. A three-dimensional feature such
as a power pole could be a key target in a road map.
[0222] Given this fact, the orthoimage generating apparatus 100 is
configured to calculate the point density of the point cloud
orthoimage 191 for each minute zone, and display minute zones in
different degrees of brightness, colors, shapes, and the like
according to point density.
[0223] Power poles and wall surfaces are usually built upright.
Therefore, the point density of the point cloud orthoimage 191 onto
which the point cloud 491 is vertically projected is high at a
portion indicating a power pole or a wall surface. On the other
hand, the density of a ground surface or the like is low because
there is only one point in a vertical direction.
[0224] The point cloud orthoimage 191 thus shown in different
degrees of brightness, colors, shapes, and the like according to
point density may help facilitate identifying and detecting the
location of a three-dimensional feature such as a power pole.
[0225] Limiting the points of the point cloud 491 used for
generating the point cloud orthoimage 191 to those whose height is
equal to or higher than the predetermined height may facilitate
identifying and detecting a three-dimensional feature.
Embodiment 4
[0226] A method for specifying the ground height 139a by the ground
height specifying section 130 will be described in a fourth
embodiment.
Example 1
[0227] The ground height specifying section 130 extracts the point
whose height is the lowest of all the points of the point cloud
491 whose latitude and longitude fall within a zone, and treats
the height of the extracted point as the ground height 139a of
that zone.
[0228] FIG. 19 illustrates a method for specifying the ground
height 139a according to the fourth embodiment (Example 1). FIG. 19
includes a top view illustrating a plan view of a slope and a
bottom view illustrating a side view of the slope.
[0229] The ground height specifying section 130 divides a target
area including a slope into meshes of a predetermined size (e.g.,
100 m × 100 m) according to latitude and longitude, as shown in
the top view of FIG. 19. The ground height specifying section 130
extracts a point P whose height is the lowest of all the points in
each zone as shown in the bottom view of FIG. 19. The ground height
specifying section 130 treats the height of the 3D coordinates of
the point P as the ground height 139a of that zone.
[0230] The point cloud extracting section 120 extracts from the
point cloud 491 every point whose height from the point P (the
ground height 139a) is equal to or lower than a predetermined
height x (e.g., 50 cm). This allows the point cloud projecting
section 110 to generate the point cloud orthoimage 191 showing the
slope.
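Example 1 reduces to a per-mesh minimum over the point cloud. A minimal Python sketch, assuming (x, y, z) points in metres and the 100 m mesh example; the function name and data layout are assumptions:

```python
def ground_height_per_zone(points, mesh_size=100.0):
    """Example 1: for each mesh zone, the height of the lowest point P
    is treated as that zone's ground height 139a."""
    lowest = {}
    for x, y, z in points:
        zone = (int(x // mesh_size), int(y // mesh_size))
        # Keep the minimum height seen so far in this zone.
        if zone not in lowest or z < lowest[zone]:
            lowest[zone] = z
    return lowest

gh = ground_height_per_zone([(10, 10, 5.0), (20, 20, 3.0), (150, 10, 7.0)])
# Zone (0, 0) takes the lower of 5.0 and 3.0; zone (1, 0) takes 7.0.
```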
Example 2
[0231] The ground height specifying section 130 extracts from the
point cloud 491 every point indicating a curb of the road, and
specifies the ground height 139a based on the 3D coordinates of
each extracted point.
[0232] FIG. 20 shows a method for specifying the ground height 139a
according to the fourth embodiment (Example 2), illustrating a zone
in a target area divided into zones of a predetermined size.
[0233] First, the ground height specifying section 130 specifies
points indicating the curbs on both sides of a slope (portions of
two curbs) based on the 3D coordinates of each point of the point
cloud 491.
[0234] Then, the ground height specifying section 130 extracts a
point A whose altitude is the highest and a point B whose altitude
is the lowest of all the points indicating one of the curbs, and
extracts a point C whose altitude is the highest of all the points
indicating the other curb. Alternatively, the point C may be an
arbitrary point of the other curb, such as the point whose
altitude is the lowest.
[0235] The ground height specifying section 130 calculates a 3D
equation of a plane including the point A, the point B, and the
point C based on the 3D coordinates of the extracted three points
A, B and C, as an equation indicating the inclination of a slope
(hereinafter, referred to as a road surface equation).
[0236] The road surface equation calculated by the ground height
specifying section 130 indicates the ground height 139a of a slope
according to latitude and longitude.
[0237] The point cloud extracting section 120 specifies a zone in
which each point of the point cloud 491 is located based on the
latitude and longitude of each point, calculates the ground height
139a by substituting the latitude and longitude of each point into
the road surface equation of that zone, and extracts the
predetermined height point cloud 129a by comparing the calculated
ground height 139a with the height of each point.
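The road surface equation of Example 2 is the plane through the three curb points A, B, and C. A sketch under the assumption that the plane is written as z = p·x + q·y + r, with (x, y) standing in for latitude and longitude; the function name is hypothetical:

```python
def road_surface_equation(a, b, c):
    """Fit the plane z = p*x + q*y + r through three curb points A, B, C
    (each an (x, y, z) tuple). Returns a function giving the ground
    height 139a at any horizontal position."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = a, b, c
    # Solve the 2x2 linear system for the plane's horizontal gradients p, q.
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    p = ((z2 - z1) * (y3 - y1) - (z3 - z1) * (y2 - y1)) / det
    q = ((x2 - x1) * (z3 - z1) - (x3 - x1) * (z2 - z1)) / det
    r = z1 - p * x1 - q * y1
    return lambda x, y: p * x + q * y + r

# A road rising 1 m over 10 m in the x direction:
surface = road_surface_equation((0, 0, 0.0), (10, 0, 1.0), (0, 10, 0.0))
```

Substituting each point's horizontal position into the returned function yields the ground height 139a against which that point's height is compared.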
[0238] The following are different methods for specifying each
point indicating a curb (hereinafter, referred to as a curb point
cloud):
(1) a method for making a user select the three points, A, B and C
of the curbs; and
(2) a method for specifying the curb point cloud based on
discontinuity of the position of each point.
[0239] First, the method for making a user select the three points,
A, B and C of the curbs (1) will be described.
[0240] The ground height specifying section 130 projects the point
cloud 491 onto the camera image 292, displays on the display unit
901 the camera image 292 onto which the point cloud 491 is
projected (a superimposed image of the point cloud 491 and the
camera image 292), and makes the user select the three points A, B
and C from among the points of the displayed point cloud 491, like
S107 (FIG. 13) disclosed in Patent Document 1 (JP 2007-218705
A).
[0241] Then, the method for specifying the curb point cloud based
on discontinuity of the position of each point (2) will be
described.
[0242] FIG. 21 shows the method for specifying the curb point cloud
according to the fourth embodiment (Example 2).
[0243] FIG. 21 includes a top view illustrating a vertical cross
section of a road and the curbs on both sides, with latitude and
longitude in a horizontal direction and height (altitude) in a
vertical direction. FIG. 21 also includes a bottom view
illustrating an area enclosed by the dashed line in the top view in
close-up. Circles shown in the figure indicate points of the point
cloud 491. Hereinafter, each circle is referred to as a "3D point".
Each 3D point is measured one by one from left to right or right to
left according to the movement of the mobile measuring apparatus
200. Hereinafter, a line connecting a plurality of 3D points in a
horizontal row acquired through one measurement sweep from left to
right or right to left will be referred to as a "scan line".
Circles illustrated in FIG. 21 indicate a plurality of 3D points on
a scan line from left to right.
[0244] FIG. 22 shows a screen displaying an image of the point
cloud 491 showing a road and curbs on both sides.
[0245] FIG. 22 includes a main screen (a full screen) showing an
image of a road surface based on the point cloud 491 (an example of
the point cloud orthoimage 191), and a subscreen (a top left
portion of the screen) showing an image of a vertical cross section
of a side street based on a portion of the point cloud 491
extracted as a side street portion.
[0246] The image on the subscreen in FIG. 22 is an image of real
data corresponding to the drawing of FIG. 21.
[0247] The ground height specifying section 130 calculates a
straight line showing the degree of inclination of that part
based on consecutive 3D points arranged in order of measurement,
and specifies a point where the road and the curb meet as a 3D
point indicating a curb based on the amount of change in
inclination shown by the calculated straight line.
[0248] For example, the ground height specifying section 130
calculates a straight line 1 based on the I-th, (I-1)-th, and
(I-2)-th 3D points, and a straight line 2 based on the I-th,
(I+1)-th, and (I+2)-th 3D points, with the I-th 3D point as the
base point. Hereinafter, the x-th 3D point will be referred to as a
"point x". The straight line 1 passes a point I, and also passes a
point between a point I-1 and a point I-2. The straight line 2
passes the point I, and also passes a point between a point I+1 and
a point I+2. Alternatively, the straight lines 1 and 2 may be
calculated based on consecutive four or more 3D points (e.g., a
point I-3 to the point I, or the point I to a point I+3), or based
on two 3D points (e.g., the point I-1 and the point I, or the point
I and the point I+1).
[0249] The ground height specifying section 130 specifies the 3D
point of a curb based on an inclination difference (amount of
change) between the straight line 1 and straight line 2 and a
height difference among a plurality of 3D points of the straight
line 2 (the straight line 1 in the case of the left side curb). The
ground height specifying section 130 selects the point I-1 or the
point I+1 as the 3D point of the curb if the amount of change is
equal to or more than a predetermined amount, and the height
difference between the point I and the point I+2 is equal to or
less than a
predetermined value (e.g., 20 cm) corresponding to the height of
the curb. The point I+2 is the 3D point having the largest height
difference from the point I among the points I, I+1 and I+2 of the
line 2. A road surface is usually inclined on the curb side, and
therefore it would be desirable to select, as the 3D point of the
curb, one of 3D points preceding or following the point I ( . . . ,
the point I-2, the point I-1, the point I+1, the point I+2, . . .
), and not the point I, to be used for calculating the ground
height 139a. For example, the ground height specifying section 130
may select the point I-1 or I+1 as the 3D point of the curb.
Alternatively, the point I may be selected as the 3D point of the
curb.
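The two-line test around each base point can be sketched as follows, treating one scan line as a sequence of (lateral position, height) pairs and using endpoint slopes as a stand-in for the fitted straight lines 1 and 2. The slope-change threshold is an illustrative assumption (the application gives only the 20 cm curb-height example):

```python
def find_curb_points(scan, slope_change=0.5, curb_height=0.2):
    """scan: list of (lateral_pos, height) 3D points along one scan line,
    in measurement order. A base point I where the inclination changes
    sharply between line 1 (points I-2..I) and line 2 (points I..I+2),
    while the height step stays within the curb height (~20 cm), marks
    a curb; the point I+1 is stored as the 3D point of the curb."""
    curbs = []
    for i in range(2, len(scan) - 2):
        (xa, za), (xb, zb), (xc, zc) = scan[i - 2], scan[i], scan[i + 2]
        slope1 = (zb - za) / (xb - xa)   # line 1 through points I-2 .. I
        slope2 = (zc - zb) / (xc - xb)   # line 2 through points I .. I+2
        if abs(slope2 - slope1) >= slope_change and abs(zc - zb) <= curb_height:
            curbs.append(i + 1)          # store point I+1, not the base point I
    return curbs

# Flat road (height 0) meeting a 15 cm curb, points every 10 cm laterally:
scan = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (0.3, 0.0),
        (0.4, 0.15), (0.5, 0.15), (0.6, 0.15)]
hits = find_curb_points(scan)
```

Neighbouring base points may also satisfy the test, so a real implementation would additionally cluster or deduplicate adjacent detections.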
[0250] The ground height specifying section 130 thus specifies the
curb point cloud by treating each point of the point cloud 491 in
turn as the base point.
[0251] FIG. 23 shows a flow chart of a curb point cloud specifying
method according to the fourth embodiment (Example 2).
[0252] A process flow of the curb point cloud specifying method
(FIG. 21) will be described below with reference to FIG. 23.
[0253] First, the ground height specifying section 130 reads the
point cloud 491 (S210).
[0254] Then, the ground height specifying section 130 selects scan
lines one by one, and extracts a plurality of 3D points on a
selected scan line from the point cloud 491. Hereinafter, the
plurality of extracted 3D points on the scan line will be referred
to as a "scan point cloud" (S220).
[0255] The ground height specifying section 130 then selects a 3D
point as the base point I from the extracted scan point cloud, and
calculates the straight lines 1 and 2 passing the base point based
on a plurality of consecutive points from the selected base point I
(S230).
[0256] The ground height specifying section 130 then determines
whether or not the base point I is a portion of a curb based on the
inclination difference between the straight lines 1 and 2, and the
height difference among the 3D points of the straight line 2 (or
the straight line 1) (S231).
[0257] The ground height specifying section 130 stores the 3D point
of the curb (e.g., the point I-1 or I+1) when the base point I is
determined to be a portion of a curb (S232).
[0258] The ground height specifying section 130 repeats S230
through S232 on each point of the scan point cloud extracted in
S220, and specifies and stores the 3D points of the curbs on both
sides (S233).
[0259] Furthermore, the ground height specifying section 130
specifies and stores the 3D points of the curbs on both sides
based on the angle of laser irradiation and the height of each
point (S250) if the 3D points of the curbs cannot be specified in
the processes of S230 through S233 (S240).
[0260] Specifically, the ground height specifying section 130
specifies the 3D point of the curb as follows.
[0261] First, the ground height specifying section 130 extracts
from among the scan point cloud a plurality of 3D points whose
angle of laser irradiation is close to the angle of laser
irradiation of the 3D point of the curb specified by the previous
scan line. For example, the ground height specifying section 130
may extract a point n-3 to a point n+3 on the scan line from the
scan point cloud if the 3D point of the curb specified by the
previous scan line is the n-th 3D point on the scan line.
[0262] The ground height specifying section 130 then stores as the
3D point of the curb a 3D point (one of the point n-3 to the point
n+3) one or more points preceding or following the 3D point whose
height is the lowest of all the plurality of extracted 3D points.
Alternatively, the 3D point of the curb may be calculated under an
additional condition that difference in angle of laser irradiation
from the 3D point of the curb specified by the previous scan line
is the same or less than a predetermined value (e.g., 1
degree).
[0263] The ground height specifying section 130 repeats S220
through S250 for every scan line (S260), and groups a plurality of
3D points of the left side curb and a plurality of 3D points of the
right side curb, respectively (S270).
[0264] FIG. 24 shows a curb point cloud specified by a curb point
cloud specifying method according to the fourth embodiment (Example
2).
[0265] As shown in FIG. 24, the curb point cloud specified by the
curb point cloud specifying method of the fourth embodiment (Example
2) matches the road map of the target area illustrated in FIG.
5.
Example 3
[0266] The ground height specifying section 130 specifies the
ground height 139a based on the height of a navigation reference
point O of the mobile measuring apparatus 200.
[0267] The navigation reference point is the center of the
coordinates of the mobile measuring apparatus 200 as described with
reference to FIG. 2.
[0268] The following are methods for specifying the ground height
139a based on the height of the navigation reference point O of the
mobile measuring apparatus 200:
(1) a method for specifying the ground height 139a by calculating a
3D equation of a road surface; and
(2) a method for specifying the ground height 139a at each time of
measurement.
[0269] FIG. 25 shows the method of specifying the ground height
139a according to the fourth embodiment (Example 3 (1)).
[0270] (1) The method for specifying the ground height 139a by
calculating a 3D equation of a road surface will be explained below
with reference to FIG. 25.
[0271] It is assumed that the 3D coordinates of the navigation
reference point O and each point of the point cloud 491 have been
acquired at time t0, time t1, and time t2.
[0272] It is also assumed that the height of the navigation
reference point O from the ground surface, measured in advance, is
2000 mm.
[0273] The ground height specifying section 130 calculates as a 3D
equation of a road surface the 3D equation of a plane that is 2000
mm lower than a plane passing each navigation reference point O (or
a plane that is the closest to each navigation reference point O)
based on the height of the navigation reference point O at time t0,
time t1, and time t2. The ground height specifying section 130
calculates the ground height 139a by substituting the latitude and
longitude of each point of the point cloud 491 into the 3D equation
of the road surface.
[0274] FIG. 26 shows the method for specifying the ground height
139a according to the fourth embodiment (Example 3 (2)).
[0275] (2) The method for specifying the ground height 139a for
each time of measurement will be explained below with reference to
FIG. 26.
[0276] It is assumed that the 3D coordinates of the navigation
reference point O and each point of the point cloud 491 have been
acquired at each time.
[0277] It is also assumed that the height of the navigation
reference point O from the ground surface, measured in advance, is
2000 mm.
[0278] The ground height specifying section 130 calculates as the
ground height 139a the height that is 2000 mm lower than the height
of the navigation reference point O at the time of measurement of
each point of the point cloud 491.
[0279] However, the following alternative may also be possible: The
ground height specifying section 130 does not calculate the ground
height 139a. The point cloud extracting section 120 calculates the
height that is 1500 mm lower than the height of the navigation
reference point O (height that is 500 mm above the ground surface
[a predetermined height]) as a corrected separation reference
height. Each point lower (or higher) than the corrected separation
reference height is extracted from the point cloud 491 as the
predetermined height point cloud 129a.
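Example 3 (2) reduces to fixed offsets below the navigation reference point O. A sketch in metres, assuming O's previously measured height above the ground is 2000 mm and the predetermined height is 500 mm; the function names are hypothetical:

```python
def ground_height_at(nav_point_height, offset_above_ground=2.0):
    """Ground height 139a at one measurement time: 2000 mm below the
    navigation reference point O."""
    return nav_point_height - offset_above_ground

def corrected_separation_reference(nav_point_height,
                                   offset_above_ground=2.0,
                                   predetermined_height=0.5):
    """Alternative of [0279]: a reference 1500 mm below O (i.e. 500 mm
    above the ground) used directly to split the point cloud 491."""
    return nav_point_height - (offset_above_ground - predetermined_height)
```

With O at 10.0 m, the ground height is 8.0 m and the corrected separation reference is 8.5 m; points are then compared against the latter without ever computing the ground height explicitly.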
[0280] The respective specifying methods discussed in the fourth
embodiment may allow the ground height specifying section 130 to
calculate the ground height 139a accurately even if the road is
inclined.
Embodiment 5
[0281] The point cloud orthoimage 191 generated by the orthoimage
generating apparatus 100 may be useful for generating a road map,
for example.
[0282] A system generating a road map will be described in a fifth
embodiment.
[0283] FIG. 27 shows a configuration of a map data generating
system 801 according to the fifth embodiment.
[0284] The configuration of the map data generating system 801 of
the fifth embodiment will be described below with reference to FIG.
27.
[0285] The map data generating system 801 includes a CAD apparatus
500 in addition to the configuration of the orthoimage generating
system 800 discussed in the previous embodiments.
[0286] City planning maps showing roads and houses, street books
recording road occupancy objects such as power poles, manholes, and
advertisement towers, and appended maps of road management book
recording road curbs, guardrails, signs, and the like have been
used in road management. There has been a demand for improving the
accuracy of the city planning maps, street books and appended maps
to road management book.
[0287] The point cloud orthoimage 191 shows roads with high
accuracy because features such as trees and tunnels covering or
hiding the roads are removed (Embodiment 2). The point cloud
orthoimage 191 also shows standing features such as power poles
and streetlights in a discriminable manner (Embodiment 3).
Therefore, the point cloud orthoimage 191 is useful for the
generation of a city planning map, a street book, and an appended
map of road management book.
[0288] The CAD apparatus 500 includes a CAD section 510 and a CAD
storing section 590. The CAD apparatus 500 generates map data 591
(e.g., a city planning map, or an appended map of road management
book) by using the point cloud orthoimage 191 generated by the
orthoimage generating apparatus 100.
[0289] The CAD section 510 displays on the display unit 901 the
point cloud orthoimage 191 and the camera image 292 in response to
a user's operation and generates the map data, by using CPU.
[0290] The CAD storing section 590 stores the map data 591.
[0291] A user operates the CAD apparatus 500 by means of the
keyboard 902 or the mouse 903 to display the point cloud orthoimage
191 generated in the second embodiment. The user then generates a
road map by tracing roads displayed on the point cloud orthoimage
191, and stores a generated road map as the map data 591.
[0292] The user then displays the point cloud orthoimage 191
generated in the third embodiment, selects standing features shown
in the displayed point cloud orthoimage 191 one by one, and
displays the camera image 292 of a selected portion. The user then
specifies the type of a standing feature based on the displayed
camera image 292, and stores as the map data 591 the road map in
which the position and type of each standing feature is set.
[0293] Alternatively, the CAD section 510 may extract roads and
standing features not based on the user's selections but by image
processing.
[0294] Thus, the use of the point cloud orthoimage 191 may allow
the user to generate a city planning map or an appended map of road
management book more easily than ever.
[0295] With further reference to this and the previous embodiments,
the orthoimage generating apparatus 100, the mobile measuring
apparatus 200, the position and attitude localizing apparatus 300,
the point cloud generating apparatus 400, and the CAD apparatus 500
may alternatively be separate units or incorporated into a single
unit.
[0296] Still alternatively, those apparatuses may be independent
units not connected to one another via a network, or wired/wireless
communication devices connected to a LAN or the Internet to
exchange data with one another.
EXPLANATION OF REFERENCE SIGNS AND NUMERALS
[0297] 100 point cloud orthoimage generating apparatus
[0298] 110 point cloud projecting section
[0299] 120 predetermined height point cloud extracting section
[0300] 129a predetermined height point cloud
[0301] 130 ground height specifying section
[0302] 139a ground height
[0303] 140 point cloud orthoimage display section
[0304] 150 camera image display section
[0305] 160 point density calculating section
[0306] 169a point density
[0307] 170 standing feature specifying section
[0308] 179a standing feature image portion
[0309] 180 standing feature discriminating section
[0310] 190 image generating apparatus storing section
[0311] 191 point cloud orthoimage
[0312] 200 mobile measuring apparatus
[0313] 201 top panel
[0314] 202 vehicle
[0315] 210 laser scanner
[0316] 220 camera
[0317] 230 GPS receiver
[0318] 240 gyro (gyroscope)
[0319] 250 odometer
[0320] 290 measuring apparatus storing section
[0321] 291 distance and orientation point cloud
[0322] 292 camera image
[0323] 293 GPS observation information
[0324] 294 gyro measurement value
[0325] 295 odometer measurement value
[0326] 300 position and attitude localizing apparatus
[0327] 310 position and attitude localizing section
[0328] 390 localizing apparatus storing section
[0329] 391 position and attitude localized value
[0330] 400 point cloud generating apparatus
[0331] 410 3D point cloud generating section
[0332] 419a 3D point cloud
[0333] 420 point cloud generating section
[0334] 490 point cloud generating apparatus storing section
[0335] 491 point cloud
[0336] 500 CAD apparatus
[0337] 510 CAD section
[0338] 590 CAD storing section
[0339] 591 map data
[0340] 800 point cloud orthoimage generating system
[0341] 801 map data generating system
[0342] 901 display unit
[0343] 902 keyboard
[0344] 903 mouse
[0345] 904 FDD
[0346] 905 CDD
[0347] 906 printer unit
[0348] 907 scanner unit
[0349] 908 microphone
[0350] 909 speaker
[0351] 911 CPU
[0352] 912 bus
[0353] 913 ROM
[0354] 914 RAM
[0355] 915 communication board
[0356] 920 magnetic disk drive
[0357] 921 OS
[0358] 922 window system
[0359] 923 program group
[0360] 924 file group
* * * * *