U.S. patent application number 16/008812 was published by the patent office on 2018-12-20 as publication number 20180365857, for a camera angle estimation method for an around view monitoring system.
The applicant listed for this patent is HYUNDAI MOBIS CO., LTD., KOREA NATIONAL UNIVERSITY OF TRANSPORTATION INDUSTRY-ACADEMIC COOPERATION FOUNDATION. Invention is credited to Kyoungtaek CHOI, Dong Wook JUNG, Ho Gi JUNG, Hoon Min KIM, Sung Joo LEE, Jae Kyu SUHR.
United States Patent Application 20180365857 (Kind Code: A1)
LEE; Sung Joo; et al.
December 20, 2018
Application Number: 16/008812 | Family ID: 64656634
CAMERA ANGLE ESTIMATION METHOD FOR AROUND VIEW MONITORING
SYSTEM
Abstract
A camera angle estimation method for an around view monitoring
system includes uniformizing and extracting, by a control unit of
the around view monitoring system, feature points from an image of
each of at least three cameras, acquiring, by the control unit,
corresponding points by tracking the extracted feature points,
integrating, by the control unit, the corresponding points acquired
from the image of each of the cameras with one another, estimating,
by the control unit, vanishing points and vanishing lines by using
the integrated corresponding points, and estimating an angle of
each of the cameras on the basis of the estimated vanishing points
and vanishing lines.
Inventors: LEE; Sung Joo (Seoul, KR); KIM; Hoon Min (Anyang-si, KR); JUNG; Dong Wook (Seoul, KR); CHOI; Kyoungtaek (Hwaseong-si, KR); SUHR; Jae Kyu (Incheon, KR); JUNG; Ho Gi (Seoul, KR)

Applicants:
HYUNDAI MOBIS CO., LTD. (Seoul, KR)
KOREA NATIONAL UNIVERSITY OF TRANSPORTATION INDUSTRY-ACADEMIC COOPERATION FOUNDATION (Chungju-si, KR)
Family ID: 64656634
Appl. No.: 16/008812
Filed: June 14, 2018
Current U.S. Class: 1/1
Current CPC Class: G08G 1/167 (20130101); B60R 11/04 (20130101); G06T 7/536 (20170101); G06T 2207/30256 (20130101); G06T 2207/20032 (20130101); G06T 7/13 (20170101); G06T 7/77 (20170101); G06T 7/80 (20170101); G06T 2207/20036 (20130101); G06T 2207/30252 (20130101); G06K 9/00798 (20130101); G06T 7/74 (20170101); H04N 7/181 (20130101)
International Class: G06T 7/80 (20060101); H04N 7/18 (20060101); G06T 7/536 (20060101)

Foreign Application Data
Date | Code | Application Number
Jun 14, 2017 | KR | 10-2017-0074556
Oct 12, 2017 | KR | 10-2017-0132250
Claims
1. A camera angle estimation method for an around view monitoring
system, comprising: uniformizing and extracting, by a control unit
of the around view monitoring system, feature points from an image
of each of at least three cameras; acquiring, by the control unit,
corresponding points by tracking the extracted feature points;
integrating, by the control unit, the corresponding points acquired
from the image of each of the cameras with one another; estimating,
by the control unit, vanishing points and vanishing lines by using
the integrated corresponding points; and estimating an angle of
each of the cameras on the basis of the estimated vanishing points
and vanishing lines.
2. The camera angle estimation method for an around view monitoring
system of claim 1, wherein the feature points are points easily
distinguished from a surrounding background in the image of each of
the cameras, and are points easily distinguished even when a shape,
a size, or a position of an object is changed and easily found from
the image even when a point of view of the camera or illumination
is changed.
3. The camera angle estimation method for an around view monitoring
system of claim 1, wherein, in the uniformizing and extracting of
the feature points, the control unit divides the image of each of
the cameras into a plurality of preset areas in order to uniformize
the feature points, and allows a predetermined number of feature
points to be forcibly extracted from each divided area.
4. The camera angle estimation method for an around view monitoring
system of claim 1, wherein the image of each of the at least three
cameras is an image continuously or sequentially captured, and
includes an image captured immediately subsequent to a previously
captured image or a camera image having a temporal difference of a
specific frame or more.
5. The camera angle estimation method for an around view monitoring
system of claim 1, wherein, in order to estimate the vanishing
points and the vanishing lines, the control unit draws virtual
straight lines, along which two corresponding points extend in a
longitudinal direction, to estimate one vanishing point at a spot
at which the virtual straight lines cross each other, and draws
virtual straight lines, which respectively connect both ends of the
two corresponding points to each other, to estimate a remaining
vanishing point at a spot at which the virtual straight lines
extend and cross each other, thereby estimating a vanishing line
that connects the two vanishing points to each other.
6. The camera angle estimation method for an around view monitoring
system of claim 1, wherein, in the estimating of the angle of each
of the cameras, the control unit estimates, as the angle of each of
the cameras, a rotation matrix R_e for converting a coordinate
system of a real world road surface to an image coordinate system
on which a distortion-corrected image is displayed.
Description
CROSS-REFERENCE TO RELATED APPLICATION
[0001] The present application claims priority under 35 U.S.C.
§ 119(a) to Korean Patent Application Nos. 10-2017-0074556 and
10-2017-0132250, filed on Jun. 14, 2017 and Oct. 12, 2017,
respectively, which are incorporated herein by reference in their
entirety.
BACKGROUND
1. Technical Field
[0002] Embodiments of the present disclosure relate to a camera
angle estimation method for an around view monitoring system, and
more particularly, to a camera angle estimation method for an
around view monitoring system, by which in order to correct an
image of a camera for generating an around view image in an around
view monitoring system of a vehicle, feature points are extracted
from a peripheral image captured by the camera even though
correction patterns are not installed in the vicinity of the
vehicle installed with the camera, so that it is possible to
estimate an angle of the camera on the basis of corresponding
points acquired by tracking the feature points.
2. Related Art
[0003] In vehicles recently sold, an advanced driver assistance
system (ADAS) is increasingly mounted to help safe driving of a
driver.
[0004] As part of such advanced driver assistance systems (ADAS), a growing number of vehicles are equipped with an ultrasonic sensor or a rear view camera in order to reduce accidents caused by blind spots. Recently, vehicles equipped with an around view monitoring (AVM) system have also increased.
[0005] The around view monitoring (AVM) system has attracted particular attention because it can monitor all directions through 360° around a vehicle. However, correcting the cameras after the system is installed requires a wide space equipped with a specific facility (for example, a lattice or lane pattern for camera correction) and a person skilled in the correction work, which is disadvantageous in terms of cost and time. This has limited the widespread use of the around view monitoring (AVM) system.
[0006] The background technology of the present invention is disclosed in Korean Unexamined Patent Publication No. 2016-0056658 (published May 20, 2016, entitled "An around view monitoring system and a control method thereof").
SUMMARY
[0007] Various embodiments are directed to a camera angle
estimation method for an around view monitoring system, by which in
order to correct an image of a camera for generating an around view
image in an around view monitoring system of a vehicle, feature
points are extracted from a peripheral image captured by the camera
even though correction patterns are not installed in the vicinity
of the vehicle installed with the camera, so that it is possible to
estimate an angle of the camera on the basis of corresponding
points acquired by tracking the feature points.
[0008] In an embodiment, a camera angle estimation method for an
around view monitoring system includes: uniformizing and
extracting, by a control unit of the around view monitoring system,
feature points from an image of each of at least three cameras;
acquiring, by the control unit, corresponding points by tracking
the extracted feature points; integrating, by the control unit, the
corresponding points acquired from the image of each of the cameras
with one another; estimating, by the control unit, vanishing points
and vanishing lines by using the integrated corresponding points;
and estimating an angle of each of the cameras on the basis of the
estimated vanishing points and vanishing lines.
[0009] In an embodiment, the feature points are points easily
distinguished from a surrounding background in the image of each of
the cameras, and are points easily distinguished even when a shape,
a size, or a position of an object is changed and easily found from
the image even when a point of view of the camera or illumination
is changed.
[0010] In an embodiment, in the uniformizing and extracting of the
feature points, the control unit divides the image of each of the
cameras into a plurality of preset areas in order to uniformize the
feature points, and allows a predetermined number of feature points
to be forcibly extracted from the divided each area.
[0011] In an embodiment, the image of each of the at least three
cameras is an image continuously or sequentially captured, and
includes an image captured immediately subsequent to a previously
captured image or a camera image having a temporal difference of a
specific frame or more.
[0012] In an embodiment, in order to estimate the vanishing points
and the vanishing lines, the control unit draws virtual straight
lines, along which two corresponding points extend in a
longitudinal direction, to estimate one vanishing point at a spot
at which the virtual straight lines cross each other, and draws
virtual straight lines, which respectively connect both ends of the
two corresponding points to each other, to estimate a remaining
vanishing point at a spot at which the virtual straight lines
extend and cross each other, thereby estimating a vanishing line
that connects the two vanishing points to each other.
[0013] In an embodiment, in the estimating of the angle of each of
the cameras, the control unit estimates, as the angle of each of
the cameras, a rotation matrix R_e for converting a coordinate
system of a real world road surface to an image coordinate system
on which a distortion-corrected image is displayed.
[0014] In accordance with an embodiment, in order to correct an
image of a camera for generating an around view image in an around
view monitoring system of a vehicle, feature points are extracted
from a peripheral image captured by the camera even though
correction patterns are not installed in the vicinity of the
vehicle installed with the camera, and an angle of the camera is
estimated on the basis of corresponding points acquired by tracking
the feature points, so that the image of the camera is
automatically corrected on the basis of the estimated angle of the
camera and thus it is possible to more simply generate an exact
around view image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] FIG. 1 is an exemplary diagram illustrating a schematic
configuration of an around view monitoring system in accordance
with an embodiment.
[0016] FIG. 2 is a flowchart for explaining a camera angle
estimation method for an around view monitoring system in
accordance with an embodiment.
[0017] FIG. 3 is an exemplary diagram for explaining one of feature
point detection methods in the related art as compared with a
feature point detection method in accordance with an
embodiment.
[0018] FIG. 4 is an exemplary diagram for explaining a feature
point extraction method in accordance with an embodiment.
[0019] FIG. 5 is an exemplary diagram for explaining a method for
acquiring corresponding points through feature point tracking in a
continuous image in FIG. 2.
[0020] FIG. 6 is an exemplary diagram illustrating a result
obtained by integrating corresponding points acquired by tracking
feature points in a plurality of continuously captured images in
FIG. 2;
[0021] FIG. 7 is an exemplary diagram for additionally explaining a
method explained in FIG. 6, in which corresponding points acquired
from a continuous camera image are integrated with one another.
[0022] FIG. 8 is an exemplary diagram for explaining a method for
estimating vanishing points and vanishing lines by using acquired
corresponding points in FIG. 2.
[0023] FIG. 9 is an exemplary diagram for explaining a relation
between a camera angle and vanishing points related to an
embodiment.
[0024] FIG. 10 is an exemplary diagram illustrating an around view
image generated using an image corrected through a camera angle
estimation method for an around view monitoring system in
accordance with an embodiment.
DETAILED DESCRIPTION
[0025] Hereinafter, a camera angle estimation method for an around
view monitoring system will be described below with reference to
the accompanying drawings through various examples of
embodiments.
[0026] FIG. 1 is an exemplary diagram illustrating a schematic
configuration of an around view monitoring system in accordance
with an embodiment.
[0027] As illustrated in FIG. 1, the around view monitoring system
in accordance with the embodiment includes one or more cameras 120,
121, 122 installed at a vehicle and a control unit 110 that outputs
an around view image obtained by processing images captured by the
cameras 120, 121, 122 to a screen of an audio video navigation
(AVN) device (not illustrated).
[0028] The control unit 110 estimates the installation angles (or camera angles) of the cameras 120, 121, 122 and automatically corrects the camera images on the basis of the estimated camera installation angles (see FIG. 2).
[0029] Then, the control unit 110 outputs the around view image
obtained by processing the corrected camera images to the screen of
the audio video navigation (AVN) device (not illustrated).
[0030] When the camera angles are estimated, the camera images can
be easily corrected on the basis of the estimated camera angles.
Hereinafter, in the present embodiment, a method for estimating the
camera installation angles (or the camera angles) will be described
in detail with reference to FIG. 2 to FIG. 10.
[0031] As illustrated in FIG. 1, in the case of the around view
monitoring system, at least four cameras may be installed at the
front, rear, right, and left sides of a vehicle.
[0032] For example, six cameras may be installed at the front,
rear, right (for example, a right side view mirror), left (for
example, a left side view mirror), an inner front (for example, a
rear view mirror), and an inner rear sides of a vehicle.
[0033] FIG. 2 is a flowchart for explaining the camera angle
estimation method for the around view monitoring system in
accordance with the embodiment.
[0034] As illustrated in FIG. 2, the control unit 110 uniformizes
and extracts feature points (for example, points easily
distinguished from a surrounding background in image matching, and
points easily distinguished even when the shape, the size, or the
position of an object is changed and easily found from an image
even when the point of view of a camera or illumination is changed)
from the camera images (S101).
[0035] The uniformization and extraction of the feature points
represent extraction of feature points uniformly distributed in an
entire area of the camera image.
[0036] Furthermore, the control unit 110 tracks the uniformized
feature points and acquires corresponding points (S102) (in this
case, since the corresponding points are acquired by tracking the
feature points, the corresponding points may be linear in
correspondence to the shape of the tracked feature points), and
integrates the acquired corresponding points with corresponding
points acquired from a plurality of images (for example, camera
images of three frames or more) captured continuously or
sequentially (S103).
[0037] Furthermore, the control unit 110 estimates vanishing points
and vanishing lines by using the acquired corresponding points and
estimates camera angles on the basis of the estimated vanishing
points and vanishing lines (S104).
[0038] Furthermore, the control unit 110 corrects images of the
cameras on the basis of the estimated camera angles (S105).
[0039] After the camera images are corrected as described above,
the control unit 110 generates an around view image by combining
the corrected camera images with one another according to a
predetermined around view algorithm.
[0040] Hereinafter, in the present embodiment, the method according
to steps of FIG. 2 for estimating the camera angles will be
described in more detail with reference to FIG. 3 to FIG. 10.
[0041] FIG. 3 is an exemplary diagram for explaining one of feature
point detection methods in the related art as compared with the
feature point detection method in accordance with the
embodiment.
[0042] In extracting (or detecting) the feature points in FIG. 2,
the control unit 110 detects positions (that is, feature points)
which are characteristic in the camera images.
[0043] However, the feature points may not be uniformly distributed depending on the capturing environment, which may increase the estimation error of the camera angles. In this regard, in the present embodiment, a process for uniformizing the distribution of the feature points is performed.
[0044] Various well-known methods exist for detecting feature points, such as the Harris corner detector, the Shi-Tomasi corner detector, FAST, and DoG (difference of Gaussians).
[0045] For example, FIG. 3 illustrates a feature point extraction result, before uniformization, obtained using the Shi-Tomasi corner detector; about 320 feature points (red points) are extracted. However, when a prescribed number of feature points are extracted as illustrated in FIG. 3, the feature points are highly likely to be detected mainly on a short-distance road surface or around a remote obstacle.
[0046] In this regard, in the present embodiment, a camera image is divided into a plurality of preset areas in order to uniformly distribute the feature points (see (a) of FIG. 4), and a prescribed number of feature points are forcibly extracted (or detected) in each divided area. That is, in the present embodiment, the feature points are extracted from the camera image through uniformization.
[0047] FIG. 4 is an exemplary diagram for explaining the feature point extraction method in accordance with the embodiment, wherein (a) of FIG. 4 illustrates an example in which the camera image is uniformly divided into a 4×8 grid of areas and (b) of FIG. 4 illustrates the result (for example, 320 feature points) obtained by extracting (or detecting) a prescribed number (for example, 10) of feature points in each divided area.
[0048] Accordingly, when the feature point extraction result in (b) of FIG. 4 is compared with that in FIG. 3, it can be confirmed that the distribution of the feature points is much more uniform.
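The grid-based uniformization described above can be sketched as follows. This is a minimal illustration in plain Python, assuming corner candidates with response scores have already been detected (for example, by a Harris or Shi-Tomasi detector); the function and parameter names are illustrative, not from the patent.

```python
def uniformize_features(candidates, img_w, img_h, grid_cols=8, grid_rows=4, per_cell=10):
    """Bucket candidate corners (x, y, score) into a grid_rows x grid_cols
    grid and keep at most per_cell strongest corners per cell, so feature
    points are forced to cover the whole image instead of clustering."""
    cells = {}
    for x, y, score in candidates:
        cx = min(int(x * grid_cols / img_w), grid_cols - 1)
        cy = min(int(y * grid_rows / img_h), grid_rows - 1)
        cells.setdefault((cy, cx), []).append((score, x, y))
    kept = []
    for pts in cells.values():
        pts.sort(reverse=True)  # strongest response first
        kept.extend((x, y) for _, x, y in pts[:per_cell])
    return kept
```

With a 4×8 grid and 10 points per cell, at most 320 feature points survive, matching the example counts in FIG. 4.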
[0049] Referring again to FIG. 2, in the step of tracking the
uniformized feature points and acquiring the corresponding points
(S102), the control unit 110 tracks the extracted feature points
from a continuous image (that is, a continuously captured camera
image) in order to acquire the corresponding points (see FIG.
5).
[0050] FIG. 5 is an exemplary diagram for explaining the method for acquiring the corresponding points through feature point tracking in the continuous image in FIG. 2. Typically, in the related art, feature points are matched between only two images (or two frames) to acquire corresponding points. However, in the present embodiment, in order to stably secure a larger number of corresponding points from a road surface with few characteristic features, a method for tracking the feature points across a continuous image (for example, three frames or more) is used.
[0051] In this case, in tracking the feature points, various well-known optical flow tracking methods may be used, such as the census transform (CT, a method that compares the brightness of the pixels surrounding one pixel with the brightness of that center pixel) and Kanade-Lucas-Tomasi (KLT) tracking. FIG. 5 shows a result in which corresponding points are acquired by tracking the feature points detected in (b) of FIG. 4.
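The census transform mentioned above is simple enough to sketch directly. The snippet below is an illustrative pure-Python version (not the patent's implementation): it builds an 8-bit descriptor per pixel and tracks a point by Hamming-distance search in a small window.

```python
def census_transform(img, x, y, radius=1):
    """8-bit census descriptor: each bit records whether a neighbor of
    (x, y) is brighter than the center pixel (img: 2-D list of ints)."""
    center = img[y][x]
    bits = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            bits = (bits << 1) | (1 if img[y + dy][x + dx] > center else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two census descriptors."""
    return bin(a ^ b).count("1")

def track_point(img_prev, img_next, x, y, search=2, radius=1):
    """Track (x, y) from img_prev into img_next: return the in-window
    position whose census descriptor is closest in Hamming distance."""
    ref = census_transform(img_prev, x, y, radius)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            nx, ny = x + dx, y + dy
            if radius <= nx < len(img_next[0]) - radius and \
               radius <= ny < len(img_next) - radius:
                d = hamming(ref, census_transform(img_next, nx, ny, radius))
                if best is None or d < best[0]:
                    best = (d, nx, ny)
    return best[1], best[2]
```

A production system would more likely use a pyramidal KLT tracker with subpixel refinement; this sketch only shows the comparison principle the paragraph describes.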
[0052] FIG. 6 is an exemplary diagram illustrating a result obtained by integrating the corresponding points acquired by tracking the feature points in the plurality of continuously captured images in FIG. 2. It is advantageous to acquire and use as many corresponding points as possible in order to improve the accuracy of camera angle estimation. However, acquiring many corresponding points requires simultaneously extracting and tracking many feature points, which demands considerable execution time.
[0053] In this regard, the present embodiment extracts and tracks a small number of feature points in each camera image of a continuous image sequence, acquires corresponding points per image, and integrates the corresponding points acquired from the images with one another.
[0054] For example, as illustrated in FIG. 6, feature points are extracted and tracked in camera images of three continuous frames ((a), (b), and (c) of FIG. 6), corresponding points are acquired per image, and the corresponding points are then integrated with one another ((d) of FIG. 6). Compared with a method that acquires a large number of corresponding points (for example, 2,000 to 3,000) from one camera image, the method in which a small number of corresponding points (for example, 400 to 600) are acquired from each of the continuously captured images and then integrated is more time-efficient while obtaining substantially the same result (for example, 2,000 to 3,000 corresponding points in total). Moreover, even when the corresponding points exist only in a part of each camera image (see (a), (b), and (c) of FIG. 6), the integration yields corresponding points uniformly distributed over the entire camera image ((d) of FIG. 6). Consequently, the stability of parameter estimation (that is, of the parameters for estimating the camera angles) is improved.
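The pooling of correspondences described above can be sketched as follows, assuming each frame's tracked features are keyed by a feature id (a hypothetical representation chosen for illustration, not from the patent):

```python
def pool_correspondences(frame_tracks):
    """frame_tracks[k] maps a feature id to its (x, y) position in frame k.
    For every consecutive frame pair, emit one corresponding-point pair per
    feature visible in both frames, then pool all pairs into one set."""
    pooled = []
    for prev, curr in zip(frame_tracks, frame_tracks[1:]):
        for fid, pt in prev.items():
            if fid in curr:
                pooled.append((pt, curr[fid]))
    return pooled
```

Each frame pair contributes only its surviving tracks, yet the pooled set grows with the number of frames, which is the time-versus-coverage trade-off the paragraph describes.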
[0055] FIG. 7 is an exemplary diagram for additionally explaining
the method explained in FIG. 6, which integrates the corresponding
points acquired from the continuous camera image with one another,
wherein (a) of FIG. 7 is an exemplary diagram illustrating a result
obtained by extracting and tracking 960 feature points from one
camera image at one time and (b) of FIG. 7 is an exemplary diagram
illustrating a result obtained by extracting and tracking 320
feature points from each camera image of three continuously
captured frames and integrating the feature points with one
another.
[0056] It can be seen that the computation load required at any one time for the result of (b) of FIG. 7 is much smaller than that of (a) of FIG. 7, while the feature points remain uniformly distributed over the entire camera image.
[0057] In the present embodiment, the continuously captured image (or continuous image) is not limited to an image captured immediately after the previous image. That is, when a plurality of images are used, the images need not be strictly consecutive; camera images separated by a specific number of frames or more may also be used. This means that, for example, currently acquired corresponding points may be integrated with corresponding points acquired 10 minutes earlier.
[0058] FIG. 8 is an exemplary diagram for explaining a method for estimating vanishing points and vanishing lines by using the acquired corresponding points in FIG. 2.
[0059] In the present embodiment, the control unit 110 estimates
vanishing points and vanishing lines by using the corresponding
points acquired from camera images, and estimates a rotation matrix
between the ground and the camera.
[0060] The rotation matrix is a matrix for obtaining a coordinate
of a new point when one point on a two-dimensional or
three-dimensional space is counterclockwise rotated about the
origin by a desired angle.
[0061] For example, when a vehicle moves straight, two acquired
corresponding points should be parallel to each other and should
have substantially the same length on the real world road surface
(a bird's-eye view) as illustrated in (a) of FIG. 8.
[0062] However, when the corresponding points are captured as a
camera image, the directions and lengths of the corresponding
points are not substantially equal to each other due to perspective
distortion as illustrated in (b) of FIG. 8.
[0063] Since the two corresponding points form a parallelogram in the real world, the control unit 110 draws virtual straight lines extending the corresponding points in the longitudinal direction and obtains one vanishing point at the spot where these lines cross each other. It then draws virtual straight lines that respectively connect both ends of the corresponding points to each other and obtains the other vanishing point at the spot where these extended lines cross. The vanishing line connecting the two vanishing points can thus be obtained, as illustrated in (c) of FIG. 8.
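In homogeneous coordinates this construction reduces to cross products: the line through two points is their cross product, and the intersection of two lines is again a cross product. A minimal sketch with illustrative coordinates (not values from the patent):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def intersection(l1, l2):
    """Intersection of two homogeneous lines (assumes they are not parallel)."""
    x, y, w = cross(l1, l2)
    return (x / w, y / w)

# Illustrative corresponding-point tracks (start, end); their extension
# lines meet at one vanishing point.
track_a = ((0.0, 0.0), (1.0, 1.0))
track_b = ((4.0, 0.0), (3.0, 1.0))
v1 = intersection(line_through(*track_a), line_through(*track_b))  # (2.0, 2.0)
```

A third homogeneous coordinate near zero signals that the two lines are (nearly) parallel, i.e. the intersection lies at infinity, as for the points p_1 and p_2 of FIG. 9.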
[0064] FIG. 9 is an exemplary diagram for explaining a relation
between a camera angle and vanishing points related to the
embodiment.
[0065] As illustrated in FIG. 9, the control unit 110 estimates, as a camera angle, a rotation matrix R_e for converting the coordinate system of the real world road surface to the image coordinate system on which the distortion-corrected image is displayed. That is, in the present embodiment, the camera angle to be estimated is this rotation matrix R_e.
[0066] This conversion maps the coordinate system of the real world road surface, as illustrated in (a) of FIG. 9, to the image coordinate system (see (b) of FIG. 9) on which the distortion-corrected image is displayed. Since a straight line may appear slightly curved in an actual camera image, this means an image in which such curves have been corrected back to straight lines.
[0067] Accordingly, for convenience of estimation, the camera angle R_e may be estimated through the reverse conversion (that is, conversion from the distortion-corrected image coordinate system to the coordinate system of the real world road surface). That is, in FIG. 9, when the image is rotated by R_e^T (rotating the image of (b) of FIG. 9 to (a) of FIG. 9), the two vanishing points v_1 and v_2 estimated in (b) of FIG. 9 are converted to points p_1 and p_2 at infinity.
[0068] For example, the conversion operation illustrated in FIG. 9 is expressed by Equation 1 below.

$$K R_e^T K^{-1} v_1 = p_1, \qquad K R_e^T K^{-1} v_2 = p_2$$
$$R_e^T K^{-1} v_1 = K^{-1} p_1, \qquad R_e^T K^{-1} v_2 = K^{-1} p_2$$
$$R_e^T v_1' = p_1', \qquad R_e^T v_2' = p_2' \qquad \text{(Equation 1)}$$
[0069] In Equation 1 above, K denotes the matrix of camera internal parameters, which is obtainable in advance, and v_1' and v_2' denote the results obtained by multiplying the vanishing points v_1 and v_2 by K^{-1}.
[0070] Furthermore, since p_1' denotes the straight-ahead direction of the vehicle in the image, p_1' is [1 0 0]^T in the case of a side camera and [0 1 0]^T in the case of the front/rear cameras.
[0071] Accordingly, in the case of the side camera, the coordinate of the first vanishing point v_1' is r_1, the first column vector of the camera angle R_e to be estimated, as expressed by Equation 2 below.
$$R_e^T v_1' = p_1' = \begin{bmatrix}1\\0\\0\end{bmatrix} \;\Rightarrow\; v_1' = R_e \begin{bmatrix}1\\0\\0\end{bmatrix} = \begin{bmatrix}r_1 & r_2 & r_3\end{bmatrix}\begin{bmatrix}1\\0\\0\end{bmatrix} = r_1 \qquad \text{(Equation 2)}$$
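As a quick numeric sanity check of Equation 2, a rotation about the z-axis can stand in for an assumed R_e: applying R_e^T to v_1' = R_e e_1 recovers p_1' = [1 0 0]^T, and v_1' equals the first column r_1. This is a pure-Python sketch with illustrative names, not the patent's code.

```python
import math

def rot_z(theta):
    """Rotation about the z-axis, standing in for an assumed R_e."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matvec(M, v):
    """3x3 matrix times 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

R_e = rot_z(0.3)
v1p = matvec(R_e, [1.0, 0.0, 0.0])   # v1' = R_e e1, i.e. the first column r1
p1p = matvec(transpose(R_e), v1p)    # R_e^T v1' recovers p1' = [1, 0, 0]^T
```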
[0072] Meanwhile, the exact position of p_2' cannot be fixed, but since this point lies at infinity, it may be expressed as [a b 0]^T.

[0073] When v_2' is transformed by R_e^T, p_2' is obtained, so ① of Equation 3 below follows.
$$R_e^T v_2' = p_2' = \begin{bmatrix}a\\b\\0\end{bmatrix} \;\Rightarrow\; \begin{bmatrix}r_1^T\\r_2^T\\r_3^T\end{bmatrix} v_2' = \begin{bmatrix}a\\b\\0\end{bmatrix} \;\Rightarrow\; r_3^T v_2' = 0 \quad ①$$
$$r_3^T v_1' = 0 \quad ②$$
$$\begin{bmatrix}v_1'^T\\v_2'^T\end{bmatrix} r_3 = 0 \quad ③ \qquad \text{(Equation 3)}$$
[0074] Furthermore, when Equation 2 above is converted in substantially the same manner, ② of Equation 3 above is obtained. Finally, combining ① and ② of Equation 3 above yields ③ of Equation 3 above.
[0075] Furthermore, ③ of Equation 3 above shows that r_3 is the parameter of the vanishing line, that is, the straight line geometrically connecting the two vanishing points v_1' and v_2' to each other.
[0076] Meanwhile, since r_1 equals v_1' by Equation 2 above, r_1 may be calculated by Equation 4 below for calculating the vanishing point.

[0077] In Equation 4 below, (x_i, y_i) and (x_i', y_i') denote the coordinates of the i-th corresponding point.
$$A r_1 = 0: \quad \begin{bmatrix} y_1 - y_1' & x_1' - x_1 & x_1 y_1' - x_1' y_1 \\ \vdots & \vdots & \vdots \\ y_N - y_N' & x_N' - x_N & x_N y_N' - x_N' y_N \end{bmatrix} \begin{bmatrix} r_{11} \\ r_{12} \\ r_{13} \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix} \qquad \text{(Equation 4)}$$
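Each row of A in Equation 4 is the homogeneous line through the two ends of one corresponding-point pair, so with exactly two pairs the null vector r_1 is simply the cross product of the two rows; with N > 2 pairs one would instead take the least-squares null vector (e.g. via SVD). An illustrative pure-Python sketch with made-up coordinates:

```python
def line_row(x, y, xp, yp):
    """One row of A in Equation 4: the homogeneous line through the two
    ends (x, y) and (x', y') of a corresponding-point pair."""
    return (y - yp, xp - x, x * yp - xp * y)

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Two illustrative pairs whose extension lines meet at (4, 2).
r1 = cross(line_row(0.0, 0.0, 2.0, 1.0), line_row(6.0, 0.0, 5.0, 1.0))
vx, vy = r1[0] / r1[2], r1[1] / r1[2]  # vanishing point v1' ~ r1
```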
[0078] Furthermore, r_3 denotes the parameter of the vanishing line.

[0079] Accordingly, a plurality of vanishing points v_2 are calculated from combinations of the corresponding points, and straight line estimation is performed on these vanishing points v_2, so that r_3 can be calculated.
[0080] In this case, when the constraint that this straight line (that is, the vanishing line r_3) should pass through r_1 (= v_1) is imposed, r_3 may be obtained as expressed by Equation 5 below.
$$\begin{bmatrix} v_{2,1}^x - v_1^x & v_{2,1}^y - v_1^y \\ \vdots & \vdots \\ v_{2,N}^x - v_1^x & v_{2,N}^y - v_1^y \end{bmatrix} \begin{bmatrix} r_{31} \\ r_{32} \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}, \qquad r_3 = \begin{bmatrix} r_{31} \\ r_{32} \\ -r_{31} v_1^x - r_{32} v_1^y \end{bmatrix} \qquad \text{(Equation 5)}$$
[0081] In Equation 5 above, (v_1^x, v_1^y) denotes the coordinate of the vanishing point v_1 calculated through Equation 4 above, and (v_{2,i}^x, v_{2,i}^y) denotes the coordinate of the i-th vanishing point obtained through the combinations of the corresponding points. After r_1 and r_3 are calculated as described above, r_2 is calculated through a cross product of r_1 and r_3, so that the desired camera rotation R_e is obtained.
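Once r_1 (Equation 4) and r_3 (Equation 5) are in hand, the missing column r_2 follows from a cross product as noted above. The sketch below assumes r_1 and r_3 are orthogonal (which ② of Equation 3 guarantees up to noise) and normalizes them; the sign and handedness convention (r_2 = r_3 × r_1) is an assumption chosen for illustration, not stated in the patent.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize(v):
    """Scale a vector to unit length."""
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def assemble_rotation(r1, r3):
    """Build R_e = [r1 r2 r3] from the estimated vanishing point direction
    r1 and vanishing line normal r3; r2 = r3 x r1 completes an orthonormal,
    right-handed column triple (sign convention is an assumption)."""
    r1, r3 = normalize(r1), normalize(r3)
    r2 = cross(r3, r1)
    return [list(row) for row in zip(r1, r2, r3)]  # rows of [r1 r2 r3]
```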
[0082] FIG. 10 is an exemplary diagram illustrating an around view
image generated using an image corrected through the camera angle
estimation method for the around view monitoring system in
accordance with the embodiment.
[0083] (a) of FIG. 10 is a camera image captured at the place where an experiment was performed, and (b) of FIG. 10 is an around view image generated by automatically performing calibration on the basis of the camera angle of the around view monitoring system estimated from the captured camera image.
[0084] As illustrated in FIG. 10, in the present embodiment, calibration is performed by automatically estimating the camera angle even in a situation with almost no characteristic features such as lanes or patterns (a situation in which only marks left by a removed lane remain on the ground and the only structure is a distant building), as illustrated in (a) of FIG. 10.
[0085] As described above, in the present embodiment, it is
possible to automatically estimate a camera angle of an around view
monitoring (AVM) system on the basis of an image captured by a
camera. Particularly, it is possible to estimate the camera angle
even in a situation in which there is no special pattern (a special
pattern for camera correction) such as a lattice and a lane on the
ground.
[0086] Consequently, in the present embodiment, since the camera angle is estimated automatically while a general driver, rather than a professional engineer, drives the vehicle on an arbitrary road (anywhere), the convenience of the AVM system is improved and the installation cost is reduced. Furthermore, since the constraints on system operation environments and correction (such as a special tolerance correction facility) are reduced, the system becomes more widely usable, which also removes a barrier to overseas deployment.
[0087] While various embodiments have been described above, it will
be understood to those skilled in the art that the embodiments
described are by way of example only. Accordingly, the camera angle
estimation method for an around view monitoring system described
herein should not be limited based on the described
embodiments.
* * * * *