U.S. patent application number 15/492771 was published by the patent office on 2018-10-25 for a method of providing a dynamic region of interest in a lidar system.
The applicant listed for this patent is Analog Devices, Inc. The invention is credited to Michael H. Anderson, Scott R. Davis, Ronald A. Kapusta, Benjamin Luey, Scott D. Rommel, and Harvy Weinberg.
Application Number: 15/492771
Publication Number: 20180306905
Family ID: 60888666
Publication Date: 2018-10-25
United States Patent Application: 20180306905
Kind Code: A1
Kapusta; Ronald A.; et al.
October 25, 2018

Method of Providing a Dynamic Region of Interest in a LIDAR System
Abstract
A system and method for providing a dynamic region of interest
in a lidar system can include scanning a light beam over a field of
view to capture a first lidar image, identifying a first object
within the captured first lidar image, selecting a first region of
interest within the field of view that contains at least a portion
of the identified first object, and capturing a second lidar image,
where capturing the second lidar image includes scanning the light
beam over the first region of interest at a first spatial sampling
resolution, and scanning the light beam over the field of view
outside of the first region of interest at a second spatial
sampling resolution, wherein the second sampling resolution is less
than the first spatial sampling resolution.
Inventors: Kapusta; Ronald A. (Carlisle, MA); Luey; Benjamin (Denver, CO); Weinberg; Harvy (Sharon, MA); Davis; Scott R. (Denver, CO); Anderson; Michael H. (Lyons, CO); Rommel; Scott D. (Lakewood, CO)
Applicant: Analog Devices, Inc. (Norwood, MA, US)
Family ID: 60888666
Appl. No.: 15/492771
Filed: April 20, 2017
Current U.S. Class: 1/1
Current CPC Class: G01S 17/89 20130101; G01S 7/483 20130101; G01S 17/58 20130101; G01S 7/4808 20130101; G01S 7/4817 20130101; G01S 17/931 20200101
International Class: G01S 7/481 20060101 G01S007/481; G01S 17/89 20060101 G01S017/89; G01S 7/48 20060101 G01S007/48
Claims
1. A method for providing a dynamic region of interest in a lidar
system, the method comprising: scanning a light beam over a field
of view to capture a first lidar image; identifying a first object
within the captured first lidar image; selecting a first region of
interest within the field of view that contains at least a portion
of the identified first object; and capturing a second lidar image,
where capturing the second lidar image includes: scanning the light
beam over the first region of interest at a first spatial sampling
resolution; and scanning the light beam over the field of view
outside of the first region of interest at a second spatial
sampling resolution, wherein the second sampling resolution is
different than the first spatial sampling resolution.
2. The method of claim 1 further comprising: identifying a second
object outside of the first region of interest; selecting a second
region of interest that contains at least a portion of the
identified second object; and capturing a third lidar image, where
capturing the third lidar image includes: scanning the light beam
over the first region of interest and the second region of interest
at the first spatial sampling resolution; and scanning the light
beam over the field of view outside of both the first region of
interest and the second region of interest at a third spatial
sampling resolution, wherein the third sampling resolution is
different than the first spatial sampling resolution.
3. The method of claim 1 further comprising: detecting a movement
of the identified first object; and adjusting a characteristic of
the first region of interest in response to the detected movement
of the identified first object.
4. The method of claim 3 comprising adjusting a size of the first
region of interest in response to the detected movement of the
identified first object.
5. The method of claim 3 comprising adjusting a size and position
of the first region of interest in response to the detected
movement of the identified first object.
6. The method of claim 1 further comprising: detecting a change in
the size of the identified first object; and adjusting a size of
the first region of interest to accommodate the detected change in
size of the identified first object.
7. The method of claim 6 comprising reducing a second spatial
sampling resolution in response to an increase in the size of the
first region of interest.
8. The method of claim 6 comprising increasing a second spatial
sampling resolution in response to a decrease in the size of the
first region of interest.
9. The method of claim 1 further comprising scanning the light beam
over the field of view to capture successive lidar images, wherein
the region of interest is capable of being adjusted after the
capture of each successive lidar image.
10. The method of claim 1 further comprising: identifying a second
object outside of the first region of interest; selecting a second
region of interest that contains at least a portion of the
identified second object; and capturing a third lidar image, where
capturing the third lidar image includes: scanning the light beam
over the first region of interest at the first spatial sampling
resolution; scanning the light beam over the second region of
interest at a third spatial sampling resolution; and scanning the
light beam over the field of view outside of the first region of
interest and the second region of interest at the second spatial
sampling resolution, wherein the third sampling resolution is
different than the second spatial sampling resolution.
11. The method of claim 1 wherein identifying a first object within
the captured first lidar image includes detecting at least one edge
of the first object.
12. The method of claim 1 wherein identifying a first object within
the captured first lidar image includes detecting at least one lane
marker.
13. A system for providing a dynamic region of interest in a lidar
system, the system comprising: a laser configured to emit a light
beam towards a target region; control circuitry configured to
instruct an optical system to scan the light beam over the target
region; an optical system having a field of view and configured to
direct a portion of the light beam received from the target region;
a photodetector configured to receive the portion of the light beam
directed from the optical system to form a first lidar image; and
detection circuitry configured to identify a first object within
the first lidar image; wherein the control circuitry is further
configured to select a first region of interest within the field of
view that contains at least a portion of the identified first
object, instruct the optical system to scan the light beam over the
first region of interest at a first spatial sampling resolution,
and instruct the optical system to scan the light beam over the
field of view outside of the first region of interest at a second
spatial sampling resolution different than the first spatial
sampling resolution, and wherein the photodetector is further
configured to receive a corresponding portion of the light beam to
form a second lidar image.
14. The system of claim 13 wherein the detection circuitry is
further configured to identify a second object outside of the first
region of interest, the control circuitry is further configured to
select a second region of interest that contains a portion of the
identified second object, instruct the optical system to scan the
light beam over the first region of interest and the second region
of interest at the first spatial sampling resolution, and instruct
the optical system to scan the light beam over the field of view
outside of both the first region of interest and the second region
of interest at a third spatial sampling resolution different than
the first spatial sampling resolution.
15. The system of claim 13 wherein the detection circuitry is
further configured to detect a movement of the identified first
object and the control circuitry is configured to adjust a
characteristic of the first region of interest in response to the
detected movement of the identified first object.
16. The system of claim 15 wherein the control circuitry is further
configured to adjust a size of the first region of interest in
response to the detected movement of the identified first
object.
17. The system of claim 15 wherein the control circuitry is further
configured to adjust a size and position of the first region of
interest in response to the detected movement of the identified
first object.
18. The system of claim 13 wherein the detection circuitry is
further configured to detect a change in the size of the identified
first object and the control circuitry is further configured to
adjust a size of the first region of interest to accommodate the
detected change in size of the identified first object.
19. The system of claim 18 wherein the control circuitry is further
configured to reduce the second spatial sampling resolution in
response to an increase in the size of the first region of
interest.
20. A system for providing a dynamic region of interest in a lidar
system, the system comprising: means for scanning a light beam over
a field of view to capture a first lidar image; means for
identifying a first object within the captured first lidar image;
means for selecting a first region of interest within the field of
view that contains at least a portion of the identified first
object; and means for capturing a second lidar image, where
capturing the second lidar image includes: scanning the light beam
over the first region of interest at a first spatial sampling
resolution; and scanning the light beam over the field of view
outside of the first region of interest at a second spatial
sampling resolution, wherein the second sampling resolution is
different than the first spatial sampling resolution.
Description
FIELD OF THE DISCLOSURE
[0001] The present disclosure relates to systems and methods for
providing a dynamic region of interest in a LIDAR system.
BACKGROUND
[0002] Certain lidar systems include a laser that can be discretely
scanned over a series of points in a target region and a detector
that can detect a reflected portion of the discretely scanned
laser, such as to provide an image of the target region. An angular
resolution of the lidar system can depend on the number of points
that can be scanned by the laser within a field of view of the
lidar system.
SUMMARY OF THE DISCLOSURE
[0003] In certain lidar systems with a large field of view and a
very fine angular resolution, thermal management of the lidar
system and receive-side analog-to-digital conversion circuitry can
present design challenges. In an example, a large field of view can
include a ±30° horizontal field of view and a ±6° vertical field of
view, a very fine angular resolution can include a 0.1° horizontal
angular resolution and a 0.2° vertical angular resolution, and the
lidar system can include a 20 Hz frame update rate. In such an
example, the lidar system can scan 720,000 points per second, which
can correspond to an average laser power of 720 mW for a laser
outputting 1 µJ per pulse. An average laser power of 720 mW can be
high enough to cause considerable thermal design challenges. In an
example where multiple laser pulses can be used for each point in the
2D field of view, the average laser power can be much higher. For
example, if five pulses per point in the 2D field of view are used,
the average laser power can be 3.6 W. The inventors have recognized, among
other things, that it is possible to reduce a number of points
scanned per lidar image, such as by providing varying spatial
resolution in the lidar images, such as to overcome difficulties
with thermal management and receive-side electronics. Further
features of the disclosure are provided in the appended claims,
which features may optionally be combined with each other in any
permutation or combination, unless expressly indicated otherwise
elsewhere in this document.
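The raster arithmetic above can be sketched as a back-of-the-envelope calculator. The parameters below (a 60° × 12° field of view scanned at 0.1° × 0.2° and 20 Hz, 1 µJ per pulse) are assumptions chosen for illustration; the point is the formula, not the specific hardware figures.

```python
def points_per_second(h_fov_deg, v_fov_deg, h_res_deg, v_res_deg, frame_rate_hz):
    """Scanned points per second for a uniform angular raster."""
    h_points = round(h_fov_deg / h_res_deg)   # columns in the raster
    v_points = round(v_fov_deg / v_res_deg)   # rows in the raster
    return h_points * v_points * frame_rate_hz

def average_laser_power_w(pts_per_s, pulses_per_point, pulse_energy_j):
    """Average optical power: pulses per second times energy per pulse."""
    return pts_per_s * pulses_per_point * pulse_energy_j

pts = points_per_second(60, 12, 0.1, 0.2, 20)       # 720,000 points/s
single = average_laser_power_w(pts, 1, 1e-6)        # ~0.72 W
five = average_laser_power_w(pts, 5, 1e-6)          # ~3.6 W
```

Scanning fewer points per image, as described below, reduces `pts` and therefore the average power linearly.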
[0004] In an aspect, the disclosure can feature a method for
providing a dynamic region of interest, such as in a lidar system.
The method can include scanning a light beam over a field of view,
such as to capture a first lidar image. The method can also include
identifying a first object, such as within the captured first lidar
image. The method can also include selecting a first region of
interest, such as within a field of view that contains at least a
portion of the identified first object. The method can also include
capturing a second lidar image, where capturing the second lidar
image can include scanning the light beam over the first region of
interest, such as at a first spatial sampling resolution and
scanning the light beam over the field of view outside of the first
region of interest, such as at a second spatial sampling
resolution, wherein the second sampling resolution can be different
than the first spatial sampling resolution. In an example, the
second sampling resolution can be less than the first spatial
sampling resolution. The method can also include identifying a
second object, such as can be outside of the first region of
interest, selecting a second region of interest that can contain at
least a portion of the identified second object, and capturing a
third lidar image, where capturing the third lidar image can
include scanning the light beam over the first region of interest
and the second region of interest at the first spatial sampling
resolution and scanning the light beam over the field of view
outside of both the first region of interest and the second region
of interest at a third spatial sampling resolution, where the third
sampling resolution can be different than the second spatial
sampling resolution. In an example, the third sampling resolution
can be less than the second spatial sampling resolution. In an
example, the second object can be identified outside of the first
region of interest using the captured second lidar image. The
method can also include detecting a movement of the identified
first object, and adjusting a characteristic of the first region of
interest, such as in response to the detected movement of the
identified first object. The method can also include adjusting a
size of the first region of interest, such as in response to the
detected movement of the identified first object. The method can
also include adjusting a size and position of the first region of
interest in response to the detected movement of the identified
first object. The method can also include detecting a change in the
size of the identified first object and adjusting a size of the
first region of interest, such as to accommodate the detected
change in size of the identified first object. The method can also
include reducing a second spatial sampling resolution, such as in
response to an increase in the size of the first region of
interest. The method can also include increasing a second spatial
sampling resolution, such as in response to a decrease in the size
of the first region of interest. The method can also include
scanning the light beam over the field of view to capture
successive lidar images, wherein the region of interest is capable
of being adjusted after the capture of each successive lidar image.
The method can also include identifying a second object outside of
the first region of interest, selecting a second region of interest
that can contain at least a portion of the identified second
object, and capturing a third lidar image, where capturing the
third lidar image can include scanning the light beam over the
first region of interest at the first spatial sampling resolution,
scanning the light beam over the second region of interest at a
third spatial sampling resolution, and scanning the light beam over
the field of view outside of the first region of interest and the
second region of interest, at the second spatial sampling
resolution, wherein the third sampling resolution can be different
than the first spatial sampling resolution. In an example, the
third sampling resolution can be less than the second spatial
sampling resolution. In an example, the second object can be
identified outside of the first region of interest by using the
captured second lidar image. Identifying a first object within the
captured first lidar image can include detecting at least one edge
of the first object. Identifying a first object within the captured
first lidar image can include detecting at least one lane
marker.
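The method of the preceding paragraph can be sketched as a control loop. The helpers `scan` and `identify_objects` stand in for the scanning and detection hardware, and the normalized bounding boxes are illustrative; none of this is the patent's actual implementation.

```python
def capture_with_dynamic_roi(scan, identify_objects, fine_res, coarse_res, n_frames):
    """Sketch of the dynamic region-of-interest loop: capture a frame,
    identify objects, then rescan with fine sampling inside each region
    of interest and coarse sampling over the rest of the field of view.

    `scan(regions)` takes a list of (bounding_box, resolution) pairs and
    returns one lidar image; `identify_objects(image)` returns bounding
    boxes of detected objects."""
    full_fov = (0.0, 1.0, 0.0, 1.0)   # normalized (xmin, xmax, ymin, ymax)
    rois, images = [], []
    for _ in range(n_frames):
        regions = [(roi, fine_res) for roi in rois]
        # The first frame (no ROIs yet) is scanned uniformly; afterwards
        # the remainder of the field of view is sampled coarsely.  A real
        # scanner would skip coarse points that fall inside an ROI.
        regions.append((full_fov, coarse_res if rois else fine_res))
        image = scan(regions)
        images.append(image)
        rois = identify_objects(image)   # ROIs may change every frame
    return images
```

Because the ROI list is recomputed after every frame, the regions can grow, shrink, move, or multiply as objects move, matching the adjustments described above.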
[0005] In an aspect, the disclosure can feature a system for
providing a dynamic region of interest in a lidar system. The
system can include a laser configured to emit a light beam, such as
towards a target region. The system can also include control
circuitry configured to instruct an optical system to scan the
light beam over the target region. The system can also include an
optical system having a field of view and can be configured to direct
a portion of the light beam received from the target region;
system can also include a photodetector configured to receive the
portion of the light beam directed from the optical system, such as
to form a first lidar image. The system can also include detection
circuitry that can be configured to identify a first object within
the first lidar image. The control circuitry can be further
configured to select a first region of interest within the field of
view that can contain at least a portion of the identified first
object, instruct the optical system to scan the light beam over the
first region of interest at a first spatial sampling resolution,
and instruct the optical system to scan the light beam over the
field of view outside of the first region of interest at a second
spatial sampling resolution that can be different than the first
spatial sampling resolution. In an example, the first spatial
sampling resolution can be less than the second spatial sampling
resolution. The photodetector can be further configured to receive
a corresponding portion of the light beam to form a second lidar
image. The detection circuitry can be further configured to
identify a second object outside of the first region of interest in
the second lidar image. The control circuitry can be further
configured to select a second region of interest that can contain a
portion of the identified second object, instruct the optical
system to scan the light beam over the first region of interest and
the second region of interest at the first spatial sampling
resolution, and instruct the optical system to scan the light beam
over the field of view outside of both the first region of interest
and the second region of interest at a third spatial sampling
resolution that can be different than the first spatial sampling
resolution. In an example, the third spatial sampling resolution
can be less than the first spatial sampling resolution. The
detection circuitry can be further configured to detect a movement
of the identified first object and the control circuitry can be
configured to adjust a characteristic of the first region of
interest, such as in response to the detected movement of the
identified first object. The control circuitry can be further
configured to adjust a size of the first region of interest, such
as in response to the detected movement of the identified first
object. The control circuitry can be further configured to adjust a
size and position of the first region of interest, such as in
response to the detected movement of the identified first object.
The detection circuitry can be further configured to detect a
change in the size of the identified first object and the control
circuitry can be further configured to adjust a size of the first
region of interest, such as to accommodate the detected change in
size of the identified first object. The control circuitry can be
further configured to reduce the second spatial sampling
resolution, such as in response to an increase in the size of the
first region of interest. The control circuitry can be further
configured to increase the second spatial sampling resolution, such
as in response to a decrease in the size of the first region of
interest.
[0006] In an aspect, the disclosure can feature a system for
providing a dynamic region of interest in a lidar system. The
system can include a means for scanning a light beam over a field
of view, such as to capture a first lidar image. The means for
scanning can include control circuitry and a scanning laser, such
as control circuitry 104 and scanning laser 108 as shown in FIG.
1A. The system can also include a means for identifying a first
object within the captured first lidar image. The means for
identifying can include detection circuitry, such as detection
circuitry 124 as shown in FIG. 1A. The system can also include a
means for selecting a first region of interest within the field of
view that contains at least a portion of the identified first
object. The means for selecting can include control circuitry, such
as control circuitry 104 as shown in FIG. 1A. The system can also
include a means for capturing a second lidar image, where capturing
the second lidar image can include scanning the light beam over
the first region of interest at a first spatial sampling resolution
and scanning the light beam over the field of view outside of the
first region of interest at a second spatial sampling resolution,
wherein the second sampling resolution can be different than the
first spatial sampling resolution. In an example, the second
sampling resolution can be less than the first spatial sampling
resolution. The means for capturing a second lidar image can
include control circuitry and a scanning laser, such as control
circuitry 104 and scanning laser 108 as shown in FIG. 1A.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The present disclosure will now be described, by way of
example, with reference to the accompanying drawings, in which:
[0008] FIG. 1A illustrates a diagram of a lidar system.
[0009] FIGS. 1B-1D illustrate examples of a frame in a lidar
system.
[0010] FIGS. 2A-2C illustrate an example of a sequence of frames in
a lidar system.
[0011] FIGS. 3A-3B illustrate an example of a sequence of frames in
a lidar system.
[0012] FIG. 4 illustrates a method of operation of a lidar
system.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE DISCLOSURE
[0013] FIG. 1A shows an example of a lidar system 100. The lidar
system 100 can include control circuitry 104, a scanning laser 108,
an optical system 116, a photosensitive detector 120, and detection
circuitry 124. The control circuitry 104 can be connected to the
scanning laser 108 and the detection circuitry 124. The
photosensitive detector 120 can be connected to the detection
circuitry 124. During operation, the control circuitry 104 can
provide instructions to the scanning laser 108, such as to cause
the scanning laser to scan a light beam over a target region 112.
In an example, the scanning laser 108 can include a laser that can
emit a light beam and an optical system, such as an electro-optic
waveguide. The electro-optic waveguide can adjust an angle of the
light beam based on the received instructions from the control
circuitry 104. The target region 112 can correspond to a field of
view of the optical system 116. The scanning laser 108 can scan a
light beam over the target region 112 in a series of scanned points
114. The optical system 116 can receive at least a portion of the
light beam from the target region 112 and can image the scanned
points 114 onto the photosensitive detector 120 (e.g., a CCD). The
detection circuitry 124 can receive and process the image of the
scanned points from the photosensitive detector 120, such as to
form a frame. In an example, the control circuitry 104 can select a
region of interest that is a subset of the field of view of the
optical system and instruct the scanning laser to scan over the
region of interest. In an example, the detection circuitry 124 can
include circuitry for digitizing the received image. In an example,
the lidar system 100 can be installed in an automobile, such as to
facilitate an autonomous self-driving automobile.
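The object-identification step performed by the detection circuitry 124 can be illustrated with a toy range-discontinuity (edge) detector, in the spirit of the edge detection recited in claim 11. The grid size, threshold, and helper name below are all hypothetical, not the patent's implementation.

```python
def find_object_roi(depth, threshold):
    """Flag an edge wherever the measured range jumps by more than
    `threshold` between neighboring scan points, then return a bounding
    box (rmin, rmax, cmin, cmax) containing every flagged point, and so
    at least a portion of the object.  Returns None if no edge is found."""
    rows, cols = len(depth), len(depth[0])
    hits = []
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):        # right and down neighbors
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols and abs(depth[r][c] - depth[rr][cc]) > threshold:
                    hits += [(r, c), (rr, cc)]     # keep both sides of the edge
    if not hits:
        return None
    rs = [p[0] for p in hits]
    cs = [p[1] for p in hits]
    return min(rs), max(rs), min(cs), max(cs)
```

Because both sides of each edge are kept, the returned box extends one scan point beyond the object, giving the region of interest a small margin.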
[0014] FIG. 1B illustrates an example of a frame 130 corresponding
to a 2D image, such as that captured with lidar system 100. The
frame 130 can include a collection of scanned points 114. The
scanned points 114 can be regularly spaced by a distance d, along a
grid. The spacing d of the scanned points 114 can determine the
angular resolution of a lidar system, such as the lidar system 100.
For example, a larger spacing can correspond to a coarser angular
resolution and a smaller spacing can correspond to a finer angular
resolution. In an example, the frame 130 can include a region of
interest 135 that corresponds to a field of view of the optical
system 116 (e.g., all points within the field of view can be
scanned).
[0015] FIG. 1C illustrates an example of a frame 130, such as that
captured with lidar system 100. The frame 130 can include a
collection of scanned points 114. The scanned points 114 can be
regularly spaced along a grid. The spacing of the scanned points
114 can determine the angular resolution of a lidar system, such as
the lidar system 100. For example, a larger spacing can correspond
to a coarser angular resolution and a smaller spacing can
correspond to a finer angular resolution. In an example, the frame
130 can include a region of interest 135 that corresponds to a
subset of a field of view of the optical system 116. In an example,
the scanning laser 108 can scan a light beam over the region of
interest 135, but not other points within the field of view of the
lidar system 100 (e.g., only a fraction of points within the field
of view can be scanned).
[0016] FIG. 1D illustrates an example of a frame 130, such as that
captured with lidar system 100. The frame 130 can include a
collection of scanned points 114. The scanned points 114 can be
regularly spaced along a grid. The spacing of the scanned points
114 can determine the angular resolution of a lidar system, such as
the lidar system 100. For example, a larger spacing can correspond
to a coarser angular resolution and a smaller spacing can
correspond to a finer angular resolution. In an example, the frame
130 can include a region of interest 135 that corresponds to a
subset of a field of view of the optical system 116. In an example,
the scanning laser 108 can scan a light beam over the region of
interest 135, but not other points within the field of view of the
lidar system 100 (e.g., only a fraction of points within the field
of view can be scanned).
[0017] FIGS. 2A-2C illustrate an example of a sequence of frames
230-232 where the scanned points can be irregularly spaced across a
field of view of the optical system 116. The first frame 230 as
illustrated in FIG. 2A can include a first region of interest 235.
The first region of interest 235 can include a collection of
regularly spaced scanned points. The scanned points in the first
region of interest 235 can correspond to a first angular
resolution. Outside of the first region of interest 235, the
scanned points can be regularly spaced with a larger spacing than
the first region of interest 235, corresponding to a coarser
angular resolution than in the first region of interest 235.
Outside of the first region of interest 235, every third column in
every other row can be scanned as illustrated in FIG. 2A. However,
other patterns of scanning can be utilized outside of the first
region of interest 235. For example, a scanning pattern outside of
the first region of interest can include every second column, in
every third row. More generally, the scanning pattern outside of
the first region of interest 235 can include every nth column
in every mth row. The first region of interest 235 can be
dynamically adjusted on a frame-to-frame basis, such as based on an
analysis of the frame by the detection circuitry 124. In the
example shown in FIG. 2A, the first frame can accommodate up to 144
scanned points, the first region of interest 235 can include 36
scanned points, and the portion of the frame outside of the region
of interest can include 17 scanned points, for a total of 53
scanned points out of a total of 144 possible scanned points. The
second frame 231 as illustrated in FIG. 2B can include a second
region of interest 236. The second region of interest 236 can be
determined based on an object detected in the first frame 230. The
second region of interest 236 can be smaller than the first region
of interest 235 and can include a collection of regularly spaced
scanned points. The scanned points in the second region of interest
236 can correspond to a first angular resolution. Outside of the
second region of interest 236, the scanned points can be regularly
spaced with a larger spacing than the second region of interest
236, corresponding to a coarser angular resolution than in the
second region of interest 236. The second region of interest 236
can be dynamically adjusted on a frame-to-frame basis, such as
based on an analysis of the first frame 230 by the detection
circuitry 124. In an example where the second region of interest
236 can be smaller than a first region of interest 235, a total
number of scanned points in the frame 231 can be smaller than the
total number of scanned points in the frame 230. In the example
shown in FIG. 2B, the second frame can accommodate up to 144
scanned points, the second region of interest 236 can include 12
scanned points, and the portion of the frame outside of the region
of interest can include 23 scanned points, for a total of 45
scanned points out of a total of 144 possible scanned points. The
third frame 232 as illustrated in FIG. 2C can include a third
region of interest 237 and a region of disinterest 240. The third
region of interest 237 can be determined based on an object
detected in the second frame 231. The third region of interest 237
can be the same size as the second region of interest 236 and can
include a collection of regularly spaced scanned points. The
scanned points in the third region of interest 237 can correspond
to a first angular resolution. Outside of the third region of
interest 237, the scanned points can be regularly spaced with a
larger spacing than the third region of interest 237, corresponding
to a coarser angular resolution than in the third region of
interest 237. The third region of interest 237 can be dynamically
adjusted on a frame-to-frame basis, such as based on an analysis of
the second frame 231 by the detection circuitry 124. In the region
of disinterest 240, the scanned points can be regularly spaced with
a larger spacing than outside of the third region of interest 237.
In an example, no points are scanned in the region of disinterest
240. In an example, the region of disinterest can correspond to an
area in the frame that includes a quasi-stationary object. The size
and location of the region of disinterest 240 can be determined
based on the identification of one or more objects within the
second frame 231. Similar to the regions of interest 235-237, the
region of disinterest 240 can be dynamically adjusted on a
frame-to-frame basis. In an example where the third region of
interest 237 can be the same size as the second region of interest
236, a total number of scanned points in the third frame 232 can be
smaller than the number of scanned points in the second frame 231.
In the example shown in FIG. 2C, the third frame can accommodate up
to 144 scanned points, the third region of interest 237 can include
12 scanned points, the region of disinterest 240 can exclude up to
20 scanned points, and the portion of the frame outside of the
region of interest can include 18 scanned points, for a total of 30
scanned points out of a total of 144 possible scanned points.
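The point-budget arithmetic described above can be sketched in code. The following Python is a hypothetical illustration (the grid dimensions, rectangle coordinates, and decimation factors below are assumptions for illustration, not values taken from the figures): it counts the scanned points in a frame that has a fully sampled region of interest, a skipped region of disinterest, and a coarsely decimated remainder.

```python
def count_scanned_points(rows, cols, roi, disinterest, m, n):
    """Count points scanned in one frame.

    `roi` and `disinterest` are (row0, row1, col0, col1) rectangles
    (half-open).  Every point inside the region of interest is
    scanned; no point inside the region of disinterest is scanned;
    elsewhere only every m-th column of every n-th row is scanned,
    giving the coarser angular resolution.
    """
    def inside(rect, r, c):
        r0, r1, c0, c1 = rect
        return r0 <= r < r1 and c0 <= c < c1

    total = 0
    for r in range(rows):
        for c in range(cols):
            if inside(disinterest, r, c):
                continue                      # region of disinterest: skip
            if inside(roi, r, c):
                total += 1                    # fine resolution inside ROI
            elif r % n == 0 and c % m == 0:
                total += 1                    # coarse resolution outside
    return total

# Hypothetical 12 x 12 frame (144 possible points), a 3 x 4 ROI,
# a 4 x 5 region of disinterest, and 2:1 decimation outside the ROI.
budget = count_scanned_points(12, 12, (4, 7, 4, 8), (8, 12, 0, 5), 2, 2)
```

The returned count is far below the 144-point maximum, which is the point of the scheme: scan time is spent where the previous frame's analysis says the information is.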
[0018] FIGS. 3A-3B illustrate a sequence of frames 330-331, such as
can be collected by a lidar system in an automobile where the
scanned points can be irregularly spaced across a field of view
that can include a road and associated landscape. The first frame
330 as illustrated in FIG. 3A can include a first region of
interest 335, a second region of interest 345, and a region of
disinterest 340. The first region of interest 335 can include a
collection of regularly spaced scanned points. The scanned points
in the first region of interest 335 can correspond to a first
angular resolution. The first region of interest 335 can correspond
to a portion of a road having at least one lane, where each lane
can be approximately 4 meters wide. A width of the first region of
interest 335 can be selected, such as to accommodate the width of
three lanes (e.g., a lane that an automobile is driving in and
additionally, one lane on either side of the lane that the
automobile is driving in). The width of the first region of
interest 335 can be sized to accommodate a radius of curvature of
the road. For example, at a relatively high speed of 150 km/hr, a
radius of curvature of the road can be approximately 1 km,
corresponding to a road that can be 4° off of a longitudinal
axis at a distance of 150 m. At a medium speed of 80 km/hr, a
radius of curvature of the road can be approximately 200 m,
corresponding to a road that can be 10° off of a
longitudinal axis at a distance of 60 m. To account for the radius
of curvature of the road, the first region of interest 335 can
extend 20° in a horizontal direction, and to account for a
vertical extent of other automobiles (e.g., an automobile can extend
4 m and the region of interest can be sized to accommodate twice
the vehicle height at a distance of 60 m), the first region of
interest can extend 4° in a vertical direction. The second
region of interest 345 can be smaller than the first region of
interest 335 and can include a collection of regularly spaced
scanned points. The scanned points in the second region of interest
345 can correspond to the first angular resolution. The second
region of interest 345 can correspond to a portion of a lane marker
on a road. Outside of the first region of interest 335 and the
second region of interest 345, the scanned points can be regularly
spaced with a larger spacing than the first region of interest 335
and the second region of interest 345, corresponding to a coarser
angular resolution than in the first region of interest 335 or the
second region of interest 345. Outside of the first region of
interest 335 and the second region of interest 345, every m-th
column in every n-th row can be scanned with the exception of
the region of disinterest 340. The region of disinterest 340 can
designate an area within the frame 330 where the scanned points can
be regularly spaced with a larger spacing than in the first region
of interest 335, the second region of interest 345, or the region
outside of the first region of interest 335 and the second region
of interest 345. In an example, no points are scanned within the
region of disinterest 340. The region of disinterest 340 can
include fixed road infrastructure, such as guard rails and the road
shoulder. The region of disinterest can include a road surface near
an automobile. The region of disinterest 340 can correspond to
objects such as trees, rocks, or mountains within a field of view
of a lidar system, such as lidar system 100. The first region of
interest 335, the second region of interest 345, and the region of
disinterest 340 can be adjusted dynamically, such as based on the
motion of objects within the field of view of the lidar system 100.
FIG. 3B illustrates a second frame 331 where the regions of
interest and disinterest have been dynamically updated, such as
based on a change in the relative position of the road and lane
markers within the field of view of the lidar system 100. The
second frame 331 as illustrated in FIG. 3B can include a first
region of interest 336, a second region of interest 346, and a
region of disinterest 341. The first region of interest 336 can
include a collection of regularly spaced scanned points. The
scanned points in the first region of interest 336 can correspond
to a first angular resolution. The first region of interest 336 can
correspond to a portion of a road having at least one lane, where
each lane can be approximately 4 meters wide. A width of the first
region of interest 336 can be selected, such as to accommodate the
width of three lanes (e.g., a lane that an automobile is driving in
and additionally, one lane on either side of the lane that the
automobile is driving in). The width of the first region of
interest 336 can be sized to accommodate a radius of curvature of
the road. For example, at a relatively high speed of 150 km/hr, a
radius of curvature of the road can be approximately 1 km,
corresponding to a road that can be 4° off of a longitudinal axis
at a distance of 150 m. At a medium speed of 80 km/hr, a radius of
curvature of the road can be approximately 200 m, corresponding to
a road that can be 10° off of a longitudinal axis at a
distance of 60 m. To account for the radius of curvature of the
road, the first region of interest 336 can extend 20° in a
horizontal direction, and to account for a vertical extent of other
automobiles (e.g., an automobile can extend 4 m and the region of
interest can be sized to accommodate twice the vehicle height at a
distance of 60 m), the first region of interest can extend
4° in a vertical direction. The second region of interest
346 can be smaller than the first region of interest 336 and can
include a collection of regularly spaced scanned points. The
scanned points in the second region of interest 346 can correspond
to the first angular resolution. The second region of interest 346
can correspond to a portion of a lane marker on a road. Outside of
the first region of interest 336 and the second region of interest
346, the scanned points can be regularly spaced with a larger
spacing than the first region of interest 336 and the second region
of interest 346, corresponding to a coarser angular resolution than
in the first region of interest 336 or the second region of
interest 346. Outside of the first region of interest 336 and the
second region of interest 346, every m.sup.th column in every
n.sup.th row can be scanned with the exception of the region of
disinterest 341. The region of disinterest 341 can designate an
area within the frame 331 where the scanned points can be regularly
spaced with a larger spacing than in the first region of interest
336, the second region of interest 346, or the region outside of
the first region of interest 336 and the second region of interest
346. In an example, no points are scanned within the region of
disinterest 341. The region of disinterest 341 can correspond to
objects such as trees, rocks, or mountains within a field of view
of a lidar system, such as lidar system 100.
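The angular figures quoted above follow from simple chord geometry: a point a distance d ahead on a road curving with radius R lies roughly d/(2R) radians off the vehicle's longitudinal axis. The following Python sketch illustrates this sizing calculation; the function names are illustrative, not from the application, and the small-angle approximations mean the results only roughly match the rounded figures in the text.

```python
import math

def road_bearing_deg(distance_m, radius_m):
    """Approximate bearing, off the longitudinal axis, of a point
    `distance_m` ahead on a road curving with radius `radius_m`.
    The chord to that point deviates from the tangent direction by
    about d / (2R) radians."""
    return math.degrees(distance_m / (2.0 * radius_m))

def vertical_extent_deg(height_m, distance_m):
    """Angle subtended by an object of height `height_m` at range
    `distance_m`, for sizing the vertical extent of a region of
    interest."""
    return math.degrees(math.atan(height_m / distance_m))

# 150 km/hr example: R ~ 1 km at d = 150 m  ->  about 4 degrees
# 80 km/hr example:  R ~ 200 m at d = 60 m  ->  roughly 9 degrees,
#                    consistent with the ~10 degrees quoted above
```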
[0019] FIG. 4 illustrates a method of adjusting a field of view in
a lidar system, such as lidar system 100. A light beam, such as can
be emitted by the scanning laser 108, can be scanned over a target
region within a field of view of an optical system, such as optical
system 116, and a first image can be captured by a photosensitive
detector, such as the photosensitive detector 120 (step 410). A
first object can be identified within the first image by detection
circuitry, such as the detection circuitry 124 (step 420). Control
circuitry, such as control circuitry 104, can select a first region
of interest that includes at least a portion of the identified
first object (step 430). A second lidar image can then be captured
(step 440). The capturing of the second lidar image can include
steps 450 and 460 described below. A light beam, such as can be
emitted by the scanning laser 108, can be scanned over the first
region of interest at a first spatial sampling resolution (step
450). A light beam, such as can be emitted by the scanning laser
108, can be scanned over the field of view outside of the first
region of interest at a second spatial sampling resolution, wherein
the second spatial sampling resolution can be less than the first
spatial sampling resolution (step 460). In an example, detection circuitry,
such as the detection circuitry 124 can identify a second object
outside of the first region of interest in the captured second
lidar image. Control circuitry, such as control circuitry 104, can
select a second region of interest that can contain at least a
portion of the identified second object. A third lidar image can
then be captured, where capturing the third lidar image can include
scanning a light beam, such as that emitted by the scanning laser
108, over both the first and second regions of interest at the
first spatial sampling resolution and over a field of view outside
of both the first and second regions of interest at a third spatial
sampling resolution that can be less than the second spatial
sampling resolution. In an example, the control circuitry 104 can
receive external data, such as from an inertial sensor, GPS, radar,
camera, or wheel speed sensor data, and in response to the received
external data, the control circuitry 104 can adjust a size or
position of the first region of interest.
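The control flow of FIG. 4 can be sketched as a capture/detect/re-aim loop. In the hypothetical Python sketch below, `capture` and `detect` are stand-ins for the optical system 116 with photosensitive detector 120 and for the detection circuitry 124, and the bounding-box padding logic is an assumption for illustration, not the application's method of selecting a region of interest.

```python
def select_roi(box, pad=1):
    """Step 430: pad a detected object's bounding box
    (row0, col0, row1, col1) to form a region of interest."""
    r0, c0, r1, c1 = box
    return (r0 - pad, c0 - pad, r1 + pad, c1 + pad)

def dynamic_roi_loop(capture, detect, n_frames):
    """Steps 410-460 of FIG. 4: each frame is scanned finely inside
    the regions of interest selected from the previous frame's
    analysis and coarsely elsewhere; the first frame is a uniform
    scan because no objects have been identified yet."""
    rois = []                # no regions of interest yet: uniform scan
    images = []
    for _ in range(n_frames):
        image = capture(rois)                       # steps 410/440-460
        objects = detect(image)                     # step 420
        rois = [select_roi(box) for box in objects]  # step 430
        images.append(image)
    return images
```

The loop makes the frame-to-frame adaptivity explicit: the region-of-interest list produced by one frame's detection step configures the scan pattern of the next capture, which is where external data (inertial, GPS, radar, camera, or wheel speed inputs) could also adjust the region size or position.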
* * * * *