U.S. patent application number 17/717773 was published by the patent office on 2022-07-28 as publication number 20220236392, for a detection device and method for adjusting a parameter thereof.
The applicant listed for this patent is Hesai Photonics Technology Co., Ltd. The invention is credited to Yifan LI, Shengping MAO, Rui WANG, Shixiang WU, Shaoqing XIANG, Liangchen YE, and Xuezhou ZHU.
United States Patent Application 20220236392 (Appl. No. 17/717773)
Kind Code: A1
YE; Liangchen; et al.
Published: July 28, 2022
DETECTION DEVICE AND METHOD FOR ADJUSTING PARAMETER THEREOF
Abstract
The present invention provides a detection device and a method
for adjusting a parameter thereof. The device includes a real-time
collection module, configured to collect and obtain environment
information in real time; a real-time location information
obtaining module, configured to obtain location information in real
time; a parameter determining module, configured to determine a
value of a target parameter of the detection device based on at
least either the obtained environment information or the obtained
location information; and a parameter adjustment module, configured
to adjust the parameter of the detection device in real time based
on the determined value of the target parameter. Based on the
present invention, the parameter of the detection device can be
adjusted in real time, and the detection device can adapt to
diversified road conditions and improve detection accuracy.
Inventors: YE; Liangchen (Shanghai, CN); MAO; Shengping (Shanghai, CN); WANG; Rui (Shanghai, CN); ZHU; Xuezhou (Shanghai, CN); XIANG; Shaoqing (Shanghai, CN); LI; Yifan (Shanghai, CN); WU; Shixiang (Shanghai, CN)

Applicant:
Name | City | State | Country | Type
Hesai Photonics Technology Co., Ltd. | Shanghai | | CN |

Appl. No.: 17/717773
Filed: April 11, 2022
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
16930141 (parent of 17717773) | Jul 15, 2020 | 11346926
PCT/CN2018/082396 (parent of 16930141) | Apr 9, 2018 |
International Class: G01S 7/497 20060101 G01S007/497; G01S 17/931 20060101 G01S017/931; G01S 17/42 20060101 G01S017/42; H04N 5/232 20060101 H04N005/232
Foreign Application Data

Date | Code | Application Number
Jan 17, 2018 | CN | 201810046634.9
Jan 17, 2018 | CN | 201810046635.3
Jan 17, 2018 | CN | 201810046646.1
Jan 17, 2018 | CN | 201810046647.6
Claims
1. A lidar device, comprising: a plurality of laser transceivers;
and a scanner having a plurality of vibrating mirrors, wherein each
of the plurality of laser transceivers is configured to emit light
to the scanner at a respective incident angle to produce an
individual field of view to collectively form a first field of view
in a first resolution, and the plurality of laser transceivers are
configured to generate a point cloud of a surrounding environment
of the lidar in real time using the first field of view; a
parameter determining module, configured to determine a value of a
parameter of the lidar based on the generated point cloud, wherein
the parameter determining module is configured to: determine a
value for adjusting the vibrating mirrors to produce a second field
of view such that the second field of view has a second resolution
greater than the first resolution, and wherein the second field of
view is smaller than the first field of view, and a parameter
adjustment module, configured to adjust the parameter of the lidar
in real time based on the second field of view having the second
resolution.
2. The lidar device of claim 1, wherein the parameter determining
module is further configured to dynamically adjust a power of the
laser transceivers in response to detected reflectance of road
obstacles.
3. The lidar device of claim 1, wherein the parameter determining
module is further configured to determine whether the generated
point cloud includes an object.
4. The lidar device of claim 3, wherein in response to determining
that the generated point cloud includes an object, the parameter
determining module is further configured to determine a second
value for adjusting the vibrating mirrors such that the second
field of view is focused on the object.
5. The lidar device of claim 1, wherein the second field of view is
produced by overlapping the individual fields of view of the laser
transceivers.
6. The lidar device of claim 1, wherein the laser transceivers are
configured to detect a road condition ahead of a vehicle; and the
parameter determining module is further configured to determine a
first focus direction of the lidar device in coincidence with a
vehicle centerline extending from back to front of the vehicle when
the road condition indicates a flat road condition ahead of the vehicle.
7. The lidar device of claim 6, wherein the parameter determining
module is further configured to determine a second focus direction
of the lidar device, wherein the second focus direction deviates
from the vehicle centerline downward by a first angle when the road
condition information indicates an uphill road condition ahead of
the vehicle.
8. The lidar device of claim 6, wherein the parameter determining
module is further configured to determine a third focus direction
of the lidar device, wherein the third focus direction deviates
from the vehicle centerline upward by a second angle when the road
condition information indicates a downhill road condition ahead of
the vehicle.
9. The lidar device of claim 6, wherein the parameter determining
module is further configured to determine a fourth focus direction
of the lidar device, wherein the fourth focus direction deviates
from the vehicle centerline leftward by a third angle when the road
condition information indicates a left turn ahead of the
vehicle.
10. The lidar device of claim 6, wherein the parameter determining
module is further configured to determine a fifth focus direction
of the lidar device, wherein the fifth focus direction deviates
from the vehicle centerline rightward by a fourth angle when the
road condition information indicates a right turn ahead of the
vehicle.
11. A vehicle comprising a lidar device, wherein the lidar device
comprises: a plurality of laser transceivers; and a scanner having
a plurality of vibrating mirrors, wherein each of the plurality of
laser transceivers is configured to emit light to the scanner at a
respective incident angle to produce an individual field of view to
collectively form a first field of view in a first resolution, and
the plurality of laser transceivers are configured to generate a
point cloud of a surrounding environment of the lidar in real time
using the first field of view; a parameter determining module,
configured to determine a value of a parameter of the lidar based
on the generated point cloud, wherein the parameter determining
module is configured to: determine a value for adjusting the
vibrating mirrors to produce a second field of view such that the
second field of view has a second resolution greater than the first
resolution, and wherein the second field of view is smaller than
the first field of view, and a parameter adjustment module,
configured to adjust the parameter of the lidar in real time based
on the second field of view having the second resolution.
12. The vehicle of claim 11, wherein the parameter determining
module is further configured to dynamically adjust a power of the
laser transceivers in response to detected reflectance of road
obstacles.
13. The vehicle of claim 11, wherein the parameter determining
module is further configured to determine whether the generated
point cloud includes an object.
14. The vehicle of claim 13, wherein in response to determining
that the generated point cloud includes an object, the parameter
determining module is further configured to determine a second
value for adjusting the vibrating mirrors such that the second
field of view is focused on the object.
15. The vehicle of claim 11, wherein the second field of view is
produced by overlapping the individual fields of view of the laser
transceivers.
16. The vehicle of claim 11, wherein the laser transceivers are
configured to detect a road condition ahead of the vehicle; and the
parameter determining module is further configured to determine a
first focus direction of the lidar device in coincidence with a
vehicle centerline extending from back to front of the vehicle when
the road condition indicates a flat road condition ahead of the vehicle.
17. The vehicle of claim 16, wherein the parameter determining
module is further configured to determine a second focus direction
of the lidar device, wherein the second focus direction deviates
from the vehicle centerline downward by a first angle when the road
condition information indicates an uphill road condition ahead of
the vehicle.
18. The vehicle of claim 16, wherein the parameter determining
module is further configured to determine a third focus direction
of the lidar device, wherein the third focus direction deviates
from the vehicle centerline upward by a second angle when the road
condition information indicates a downhill road condition ahead of
the vehicle.
19. The vehicle of claim 16, wherein the parameter determining
module is further configured to determine a fourth focus direction
of the lidar device, wherein the fourth focus direction deviates
from the vehicle centerline leftward by a third angle when the road
condition information indicates a left turn ahead of the
vehicle.
20. The vehicle of claim 16, wherein the parameter determining
module is further configured to determine a fifth focus direction
of the lidar device, wherein the fifth focus direction deviates
from the vehicle centerline rightward by a fourth angle when the
road condition information indicates a right turn ahead of the
vehicle.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of U.S. patent
application Ser. No. 16/930,141, filed on Jul. 15, 2020, entitled
"DETECTION DEVICE AND METHOD FOR ADJUSTING PARAMETER THEREOF",
which is a continuation of PCT International Patent Application No.
PCT/CN2018/082396, filed on Apr. 9, 2018, which claims priority to
Chinese Patent Application No. 201810046646.1, entitled
"VEHICLE-MOUNTED DETECTION DEVICE AND METHOD FOR ADJUSTING
PARAMETER THEREOF, MEDIUM, AND DETECTION SYSTEM" filed on Jan. 17,
2018, Chinese Patent Application No. 201810046634.9 entitled
"METHOD FOR ADJUSTING ORIENTATION OF FIELD-OF-VIEW CENTER OF LIDAR,
MEDIUM, AND LIDAR SYSTEM" filed on Jan. 17, 2018, Chinese Patent
Application No. 201810046635.3 entitled "METHOD FOR ADJUSTING FIELD
OF VIEW OF LIDAR" filed on Jan. 17, 2018, and Chinese Patent
Application No. 201810046647.6 entitled "LIDAR SYSTEM, METHOD FOR
PROCESSING POINT CLOUD DATA OF LIDAR, AND READABLE MEDIUM", filed
on Jan. 17, 2018, all of which are incorporated herein by reference
in their entirety.
BACKGROUND
Technical Field
[0002] The present application relates to the field of autonomous
driving technologies, and in particular, to a detection device and
a method for adjusting a parameter thereof.
Related Art
[0003] Because lidar can detect a three-dimensional coordinate model of objects around a vehicle body and thereby perceive the environment, it is widely applied in various fields such as autonomous driving. With the development of driverless technologies, a wide field of view, a high resolution, and a long ranging distance have become the main development directions for lidars.
[0004] The high resolution of a lidar conflicts with its long ranging distance: a higher resolution makes the distribution of laser light in space denser, which lowers the laser power threshold for human eye safety and therefore reduces the ranging distance. Meanwhile, for a lidar with the same number of pixel points, a wider field of view means a lower resolution of the image. Therefore, at the same cost, a wider field of view also conflicts with a high resolution: a wider field of view means a lower resolution, and a narrower field of view means a higher resolution. The size of the field of view of a prior-art lidar is fixed. For example, the rotary mechanical lidar HDL-64E from Velodyne has N pairs of transceiver modules arranged longitudinally; each pair of transceiver modules is responsible for the field of view at a specific longitudinal angle and corresponds to a fixed field of view.
[0005] For a lidar used in the field of autonomous driving, because the size of its field of view is fixed, the only way to improve the angular resolution in some scenarios is to increase the number of pairs of transceiver modules, which increases the size, power consumption, and cost of the lidar system. In addition, this lowers the eye-safe laser power threshold and reduces the ranging distance.
SUMMARY
[0006] To resolve the technical problems in the prior art, the
embodiments of the present specification provide a detection device
and a method for adjusting a parameter thereof. The technical
solutions are as follows:
[0007] According to a first aspect, a detection device is provided,
including: a real-time collection module, configured to collect and
obtain environment information in real time; a real-time location
information obtaining module, configured to obtain location
information in real time; a parameter determining module,
configured to determine a value of a target parameter of the
detection device based on at least either the obtained environment
information or the obtained location information; and a parameter
adjustment module, configured to adjust the parameter of the
detection device in real time based on the determined value of the
target parameter.
[0008] According to a second aspect, a method for adjusting a
parameter of a detection device is provided, including: collecting
and obtaining environment information around a vehicle in real
time; obtaining location information of the vehicle in real time;
determining a value of a target parameter of a vehicle-mounted
detection device based on the obtained environment information
around the vehicle and the location information of the vehicle; and
adjusting the parameter of the vehicle-mounted detection device in
real time based on the determined value of the target
parameter.
[0009] This specification can achieve the following beneficial
effects: By collecting and obtaining the environment information
around the detection device and the location information of the
vehicle in real time, the value of the target parameter of the
detection device is determined based on the obtained environment
information around the vehicle and the location information of the
vehicle, and then the parameter of the detection device is adjusted
in real time based on the determined value of the target parameter,
thereby adapting to diversified road conditions and improving
detection accuracy of the detection device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] Specific implementations of this specification are further
described in detail below with reference to the accompanying
drawings.
[0011] FIG. 1 is a flowchart of a method for adjusting a parameter
of a detection device according to an embodiment of this
specification;
[0012] FIG. 2 is a schematic structural diagram of a
vehicle-mounted detection device according to an embodiment of this
specification;
[0013] FIG. 3 is a schematic structural diagram of a lidar system
in the prior art;
[0014] FIG. 4 is a schematic structural diagram of a lidar system
according to an embodiment of this specification;
[0015] FIG. 5 is a schematic diagram of stitching a field of view
in a lidar system according to an embodiment of this
specification;
[0016] FIG. 6 is a schematic diagram of stitching a field of view
in a lidar system according to an embodiment of this
specification;
[0017] FIG. 7 is a schematic diagram of stitching a field of view
in a lidar system according to an embodiment of this
specification;
[0018] FIG. 8 is a detailed flowchart of a method for adjusting an
orientation at a center of a field of view of a lidar according to
an embodiment of this specification;
[0019] FIG. 9 is a schematic diagram of a desired detection angle
according to an embodiment of this specification;
[0020] FIG. 10 is a schematic structural diagram of a scanning
apparatus according to an embodiment of this specification;
[0021] FIG. 11 is a schematic diagram of a field of view of a lidar
according to an embodiment of this specification;
[0022] FIG. 12 is a schematic structural diagram of a lidar system
according to an embodiment of this specification;
[0023] FIG. 13 is a detailed flowchart of a method for adjusting a
field of view of a lidar according to an embodiment of this
specification;
[0024] FIG. 14 is a schematic diagram of a size of a field of view
of a lidar according to an embodiment of this specification;
and
[0025] FIG. 15 is a schematic diagram of point cloud data of a
lidar according to an embodiment of this specification.
DETAILED DESCRIPTION
[0026] To make a person skilled in the art better understand
solutions of this specification, the following clearly and
completely describes the technical solutions in the embodiments of
this specification with reference to the accompanying drawings in
the embodiments of this specification. Apparently, the described
embodiments are some of the embodiments of this specification
rather than all of the embodiments. All other embodiments obtained
by a person of ordinary skill in the art based on the embodiments
of this specification without creative efforts shall fall within
the protection scope of this specification.
[0027] Referring to FIG. 1, an embodiment of the present invention
provides a method for adjusting a parameter of a vehicle-mounted
detection device. The method may include the following steps:
[0028] Step S101: Collect and obtain environment information around
a vehicle in real time.
[0029] In specific implementation, the environment information may include one or more of weather information, road condition information, and traffic indication information. The road condition information may include information about road conditions, for example, rain, snow, or fog on a road, and rugged road conditions; it may also include traffic conditions and information about the vehicle movement state, for example, traffic congestion, high-intensity glare on the road, and a left turn of the vehicle. The traffic indication information may include traffic light indication information, lane line indication information, road sign information, and the like. The environment information around the vehicle may be collected in real time by one or more vehicle-mounted detection devices such as sensor devices. For example, environment information such as traffic lights, lane lines, road signs, and surrounding vehicles may be obtained by a vision sensor. The vehicle-mounted detection device may be an environment sensing device such as a lidar, a vision sensor, a millimeter wave radar, a laser rangefinder, or an infrared night vision device, or may be a body state sensing device. In a possible embodiment, the body state sensing device is an inertial navigation system (INS), or a system that integrates a GPS and an INS.
[0030] In a possible embodiment, the environment information around the vehicle is obtained by a lidar. The lidar transmits a laser pulse signal; the signal is reflected by a target obstacle and received by a detection system. The distance of the target obstacle can be measured from the round-trip time of the laser light, for example, by using the time-of-flight (TOF) method. By scanning and detecting the entire target region, the lidar can finally implement three-dimensional imaging. The three-dimensional image includes the environment information around the vehicle.
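The TOF ranging relation described above can be sketched as follows; the function name is illustrative, not from the specification:

```python
# Time-of-flight (TOF) ranging: the pulse travels to the obstacle and
# back, so distance is half the round-trip time times the speed of light.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_distance_m(round_trip_time_s: float) -> float:
    """Return target distance in meters from a measured round-trip time."""
    return round_trip_time_s * SPEED_OF_LIGHT_M_PER_S / 2.0

# A 1-microsecond round trip corresponds to roughly 150 m.
print(tof_distance_m(1e-6))
```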
[0031] Step S102: Obtain the location information of the vehicle in
real time.
[0032] The location information may include: an absolute location
of the vehicle, and map information near the absolute location. For
example, the location information includes: an absolute location of
the vehicle, and high-precision map information near the absolute
location. Location information around the vehicle may be obtained
in real time through a GPS navigation system and a real-time
Internet map. For example, the absolute position of the vehicle is
obtained in real time through a GPS navigation system, and the
high-precision map near the absolute position is obtained in real
time by downloading from the Internet. The high-precision map may
include basic two-dimensional road data, for example, lane marks
and surrounding infrastructure, and may include data related to
traffic control, road construction, and wide-area weather, and may
further include fast changing dynamic data such as accidents,
congestion, surrounding vehicles, pedestrians, and signal lights.
The dynamic data may be characterized by a relatively high update
speed and a relatively high positioning precision, for example, a
minute-level or second-level update speed, and a centimeter-level
positioning precision.
[0033] Step S103: Determine a value of a target parameter of a
vehicle-mounted detection device based on the obtained environment
information around the vehicle and the location information of the
vehicle.
[0034] The vehicle-mounted detection device may include one or more of a lidar, a vision sensor, a millimeter wave radar, a laser rangefinder, and an infrared night vision device. When the vehicle-mounted detection device is a lidar, the target parameters of the vehicle-mounted detection device include one or more of the field-of-view range of the lidar, the wavelength of the lidar, the horizontal resolution of the lidar, the vertical resolution of the lidar, the scanning frequency of the lidar, the beam tilt angle of the lidar, and the power of the lidar. The field-of-view range of the lidar is the size of the field of view of the lidar. In a possible embodiment, the vehicle-mounted detection device is a lidar. During movement of the vehicle, the environment information and the location information are collected in real time, a target region is then generated based on the environment information and location information obtained in real time, and the orientation of the field of view of the lidar is adjusted to the target region. Because the target region can be generated in real time and the lidar can be adjusted to the target region, the size of the target region can be adjusted in real time based on the current environment information and location information without increasing costs or affecting the ranging distance. In this way, the size of the field of view of the lidar is adjusted in real time to fit scenes that require different resolutions.
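The field-of-view/resolution trade-off underlying this adjustment can be illustrated with a minimal sketch (the numbers are illustrative assumptions, not values from the specification): with a fixed number of measurement points per scan line, narrowing the field of view reduces the angular spacing between points, i.e. raises the resolution.

```python
# With a fixed point budget per scan line, angular resolution (spacing
# between adjacent points) is proportional to the field-of-view width.

def angular_resolution_deg(fov_deg: float, points_per_line: int) -> float:
    """Angular spacing between adjacent points across the field of view."""
    return fov_deg / points_per_line

wide = angular_resolution_deg(120.0, 1200)   # wide first field of view
narrow = angular_resolution_deg(30.0, 1200)  # narrowed second field of view
print(wide, narrow)  # the narrower FOV has the finer (smaller) spacing
```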
[0035] In a possible embodiment, the vehicle-mounted detection
device is a lidar. During the movement process of the vehicle, the
environment information and the location information are collected
and obtained in real time, and then a desired detection angle is
generated based on the environment information and location
information that are obtained in real time, and the beam tilt angle
of the lidar such as the orientation angle at the center of the
field of view is adjusted to the desired detection angle. Because
the desired detection angle can be generated in real time and the
orientation of the lidar can be adjusted to the desired detection
angle, the beam tilt angle of the lidar can be adjusted in real
time based on the real-time environment information and location
information to fit different scenes during the movement
process.
[0036] When the vehicle-mounted detection device is a lidar, the desired
detection angle may be generated in the following way based on the
current road condition information: When the road condition
information is a flat road, the angle that coincides with the
vehicle centerline is used as the desired detection angle to obtain
more surrounding environment information; when the road condition
information is an uphill road, an angle obtained by deviating the
vehicle centerline downward by a preset first angle is used as the
desired detection angle to obtain more ground environment
information and avoid the problem of loss of effective ground
information caused by an overhead orientation of the field of view
of the lidar; when the road condition information is a downhill
road, an angle obtained by deviating the vehicle centerline upward
by a preset second angle is used as the desired detection angle to
obtain more surrounding environment information and avoid the
problem that the field of view of the lidar can obtain only the
environment information at a near distance; when the road condition
information is a left turn, an angle obtained by deviating the
vehicle centerline leftward by a preset third angle is used as the
desired detection angle to obtain more left side environment
information; and, when the road condition information is a right
turn, an angle obtained by deviating the vehicle centerline
rightward by a preset fourth angle is used as the desired detection
angle to obtain more right side environment information.
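The rule set above can be sketched as a simple lookup. The specific offset values and the dictionary layout are illustrative assumptions; the specification only states that preset first through fourth angles are used:

```python
# Map a road condition to a desired detection angle as a (pitch, yaw)
# offset in degrees relative to the vehicle centerline. Negative pitch
# tilts the beam downward; negative yaw turns it leftward.
PRESET_OFFSETS = {
    "flat":       (0.0, 0.0),    # coincide with the vehicle centerline
    "uphill":     (-5.0, 0.0),   # deviate downward by a preset first angle
    "downhill":   (5.0, 0.0),    # deviate upward by a preset second angle
    "left_turn":  (0.0, -10.0),  # deviate leftward by a preset third angle
    "right_turn": (0.0, 10.0),   # deviate rightward by a preset fourth angle
}

def desired_detection_angle(road_condition: str) -> tuple:
    """Return the desired (pitch, yaw) offset for a road condition."""
    return PRESET_OFFSETS.get(road_condition, (0.0, 0.0))

print(desired_detection_angle("uphill"))  # (-5.0, 0.0)
```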
[0037] In a possible embodiment, the vehicle-mounted detection
device is a lidar. During the movement process of the vehicle, the
environment information around the vehicle, such as a reflectance
of road obstacles, is collected and obtained in real time, and then
the value of the power of the lidar is determined based on the
obtained environment information, and the power of the lidar is
adjusted. The power of the lidar is adjusted dynamically according
to the environment information collected in real time, such as the
reflectance of road obstacles, thereby reducing the power
consumption of the lidar without omitting obstacles.
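A hedged sketch of the dynamic power adjustment described above: emit less power when the detected reflectance is high and more when it is low, within a fixed range. The linear rule and the bounds are illustrative assumptions, not taken from the specification:

```python
def adjust_power(reflectance: float, p_min: float = 0.2,
                 p_max: float = 1.0) -> float:
    """Return a normalized laser power for a detected reflectance in [0, 1].

    Highly reflective obstacles return strong echoes and need less power;
    dark, low-reflectance obstacles need more power to avoid being missed.
    """
    reflectance = min(max(reflectance, 0.0), 1.0)  # clamp to valid range
    return p_max - (p_max - p_min) * reflectance

print(adjust_power(0.1), adjust_power(0.9))  # low reflectance -> higher power
```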
[0038] In a possible embodiment, a local environment map
corresponding to the vehicle may be constructed based on the
obtained environment information around the vehicle and the
location information. Then a preset region around the vehicle is
selected in the local environment map. Thereafter a local movement
route of the vehicle and the value of the target parameter of the
vehicle-mounted detection device are calculated in the preset
region. In specific implementation, the local movement route for
the vehicle to meet traffic rules and safety requirements and the
value of the target parameter of the vehicle-mounted detection
device may be calculated in the preset region according to the
traffic rules and the safety requirements. Alternatively, the local
movement route of the vehicle and the value of the target parameter
of the vehicle-mounted detection device may be calculated in the
preset region according to other criteria. This is not limited in
this embodiment of the present invention. In specific
implementation, the absolute location information of the vehicle,
information on other vehicles around the vehicle, and road
environment perception information may be integrated with the
high-precision map information to create a local environment map
corresponding to the vehicle. Alternatively, the local environment
map corresponding to the vehicle may be created based on other
location information and environment information. This is not
limited in this embodiment of the present invention.
[0039] Step S104: Adjust the parameter of the vehicle-mounted
detection device in real time based on the determined value of the
target parameter.
[0040] The parameter of the vehicle-mounted detection device may be
adjusted in real time by sending a control instruction to the
vehicle-mounted detection device, or the parameter of the
vehicle-mounted detection device may be manually adjusted, which is
not limited in this specification.
[0041] In a possible embodiment, when the vehicle is moving uphill
or downhill, the value of the beam tilt angle such as a pitch angle
of the lidar or the vision sensor may be determined according to
real-time road condition information, and then the beam tilt angle
of the lidar or the vision sensor may be adjusted by using a
control instruction. When the vehicle is moving on an expressway,
more attention may be paid to the field of view directly ahead.
When the vehicle is moving on a city street, it is necessary to pay
attention to an entire field-of-view range. By collecting
environmental information and location information in real time,
the vehicle-mounted detection device can adjust the field-of-view
range in real time and reduce the probability of omitting target
obstacles.
[0042] Understandably, steps S101 and S102 are numbered only to
distinguish between two different obtaining actions, and do not
limit the specific order of those actions. In specific
implementation, step S101 may be performed before step S102, or
step S102 may be performed before step S101, or the two steps may
be performed in parallel. The vehicle-mounted detection device
configured to collect and obtain environment information around the
vehicle in real time may be exactly the same as, partly the same
as, or completely different from the vehicle-mounted detection
device that adjusts parameters in real time.
[0043] In a possible embodiment, the vehicle-mounted detection
device configured to collect and obtain environment information
around the vehicle in real time is exactly the same as the
vehicle-mounted detection device that adjusts parameters in real
time. For example, a lidar collects and obtains the environment
information around the vehicle in real time, and then adjusts the
parameter of the lidar in real time based on the determined value
of the target parameter; or a vision sensor collects and obtains
the environment information around the vehicle in real time, and
then adjusts the parameter of the vision sensor in real time based
on the determined value of the target parameter; or a lidar and a
vision sensor collect and obtain the environment information around
the vehicle in real time, and then adjust the parameters of the
lidar and the vision sensor in real time based on the determined
value of the target parameter.
[0044] In a possible embodiment, the vehicle-mounted detection
device configured to collect and obtain environment information
around the vehicle in real time is completely different from the
vehicle-mounted detection device that adjusts parameters in real
time. For example, a lidar collects and obtains the environment
information around the vehicle in real time, and then adjusts the
parameter of the vision sensor in real time based on the determined
value of the target parameter; or a vision sensor and an infrared
night vision device collect and obtain the environment information
around the vehicle in real time, and then adjust the parameter of
the lidar in real time based on the determined value of the target
parameter; or a vision sensor collects and obtains the environment
information around the vehicle in real time, and then the parameters
of the lidar and the infrared night vision device are adjusted in
real time based on the determined value of the target
parameter.
[0045] In a possible embodiment, the vehicle-mounted detection
device configured to collect and obtain environment information
around the vehicle in real time is partly the same as the
vehicle-mounted detection device that adjusts parameters in real
time. For example, a lidar and a vision sensor collect and obtain
the environment information around the vehicle in real time, and
then adjust the parameter of the vision sensor in real time based
on the determined value of the target parameter.
[0046] In conclusion, by collecting and obtaining the environment
information around the vehicle and the location information of the
vehicle in real time, the value of the target parameter of the
vehicle-mounted detection device is determined based on the
obtained environment information around the vehicle and the
location information of the vehicle, and then the parameter of the
vehicle-mounted detection device is adjusted in real time based on
the determined value of the target parameter, thereby adapting to
diversified road conditions, improving detection accuracy of the
vehicle-mounted detection device, and improving accuracy and safety
of a driverless vehicle.
[0047] To enable those skilled in the art to better understand and
implement the present invention, an embodiment of the present
invention further provides a vehicle-mounted detection device
capable of implementing the method for adjusting a parameter of a
vehicle-mounted detection device, as shown in FIG. 2.
[0048] Referring to FIG. 2, the vehicle-mounted detection device
may include a first obtaining module, a second obtaining module, a
determining module, and an adjustment module.
[0049] The first obtaining module is configured to collect and
obtain environment information around a vehicle in real time.
[0050] The second obtaining module is configured to obtain location
information of the vehicle in real time.
[0051] The determining module is configured to determine a value of
a target parameter of a vehicle-mounted detection device based on
the obtained environment information around the vehicle and the
location information of the vehicle.
[0052] The adjustment module is configured to adjust the parameter
of the vehicle-mounted detection device in real time based on the
determined value of the target parameter.
[0053] In specific implementation, for the working process and
principles of the vehicle-mounted detection device, reference may
be made to the description in the method provided in the foregoing
embodiment, and details are omitted here.
[0054] As shown in FIG. 3, an existing lidar system includes a
scanning module 10 and a laser transceiver module 11. The scanning
module 10 is configured to reflect a laser pulse signal, which is
transmitted by the laser transceiver module 11, into a space,
receive a laser pulse echo signal reflected from a space obstacle,
and then reflect the laser pulse echo signal to the laser
transceiver module 11 to implement measurement of space
coordinates. A two-dimensional space corresponding to echo signals
detectable by the laser transceiver module 11 is a field of view 12
of the lidar system. To improve the angular resolution of the
existing lidar, the number of transceiver module pairs has to be
doubled. Increasing the number of transceiver module pairs not only
leads to a sharp increase in cost, but also greatly increases the
size and complexity of the system, thereby reducing its
reliability. In addition, increasing the longitudinal angular
resolution may also produce redundant information in non-critical
regions, and increase the processing complexity of the sensing
system.
[0055] As shown in FIG. 4, an embodiment of the present
specification provides a lidar system. The lidar system includes a
scanning module 21 and multiple laser transceiver modules 22. The
scanning module 21 is configured to reflect a laser pulse signal
into a space, and receive a laser pulse echo signal reflected by a
space obstacle. The beam of each laser transceiver module 22 is
incident on the scanning module 21 at a corresponding preset angle,
and an overlapping region exists between the fields of view
corresponding to at least two laser transceiver modules 22. In specific
implementation, the scanning module 21 may be a two-dimensional
galvanometer. The galvanometer reflects the laser pulse signal,
which is transmitted by multiple laser transceiver modules 22, to
the space, and receives the laser pulse echo signal reflected by
the space obstacle.
[0056] In specific implementation, the field of view corresponding
to a single laser transceiver module is small, and its resolution
is also low, especially for long-distance regions. Therefore,
multiple laser transceiver modules may be used. The beam of each
laser transceiver module is incident on the same scanning module at
a corresponding incident angle to form multiple corresponding
fields of view in the space. By presetting reasonable angles, the
multiple fields of view can overlap in different regions of the
space to form a densified field of view. In the densified region,
the angular resolution can be doubled.
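As a rough illustration of the stitching and densification described above, the following Python sketch computes the stitched vertical extent of several overlapping fields of view and the effective angular resolution inside and outside the overlap; the module count, angle values, and function names are all hypothetical, not taken from the disclosure:

```python
# Hypothetical sketch: stitching the vertical fields of view of several
# transceiver modules, each covering `fov_deg` degrees, placed so that
# consecutive fields overlap.

def stitched_extent(n_modules, fov_deg, overlap_deg):
    """Total vertical extent after stitching n fields of view, each pair
    of neighbors overlapping by overlap_deg degrees."""
    return fov_deg + (n_modules - 1) * (fov_deg - overlap_deg)

def resolution_in(angle_deg, offsets, fov_deg, base_res_deg):
    """Effective angular resolution at angle_deg: each module whose field
    covers the angle contributes its own scan lines, so k overlapping
    modules densify the resolution to base_res_deg / k."""
    k = sum(1 for o in offsets if o <= angle_deg <= o + fov_deg)
    return base_res_deg / k if k else None

# Two modules of 20 degrees each, overlapping by 10 degrees in the middle:
offsets = [0.0, 10.0]
print(stitched_extent(2, 20.0, 10.0))            # total extent: 30.0 degrees
print(resolution_in(15.0, offsets, 20.0, 0.2))   # overlap region: 0.1 (doubled)
print(resolution_in(5.0, offsets, 20.0, 0.2))    # non-overlap region: 0.2
```

The overlap both widens the total field of view (30 degrees instead of 20) and halves the angular step inside the shared region, matching the densified-field behavior described above.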
[0057] The multiple laser transceiver modules can stitch regions in
the space to form multiple fields of view to increase the size of
the field of view of the lidar.
[0058] Because the multiple laser transceiver modules can increase
the size of the field of view of the lidar, the indicator
requirements on the size of the field of view of a scanning module
can be reduced, thereby facilitating the implementation of other
scanning modules of better performance parameters. For example, the
indicator requirements on the size of the field of view of a
two-dimensional galvanometer can be reduced, thereby facilitating
the implementation of other two-dimensional galvanometers of better
performance parameters.
[0059] In a possible embodiment, the laser transceiver module 22
includes a coaxial laser transceiver module, in which an optical
axis of a transmitting optical path coincides with an optical axis
of a receiving optical path. In another embodiment of the present
invention, the laser transceiver module 22 includes a non-coaxial
laser transceiver module, in which an optical axis of a
transmitting optical path does not coincide with an optical axis of
a receiving optical path.
[0060] In specific implementation, the overlapping region of the
field of view may be a region at the center of the field of view, a
region above the center of the field of view, or a region below the
center of the field of view. A proper preset angle may be set
according to actual needs, so that a region expected to be
primarily detected is the overlapping region.
[0061] In specific implementation, the overlapping region may be an
overlapping region of a horizontal (that is, landscape) field of
view, or an overlapping region of a vertical (that is, portrait)
field of view.
[0062] In specific implementation, the overlapping region may be
obtained by stitching two, three, four or more fields of view.
[0063] In a possible embodiment, FIG. 5 is a schematic diagram of
stitching fields of view corresponding to the lidar system shown in
FIG. 4.
[0064] Referring to FIG. 5, four laser transceiver modules 22 form
a basic field of view 31A vertically, in which the amount of point
cloud data is small, the angular resolution is low, and the
field-of-view range is small. After the four fields of view are
stitched, a wider field-of-view range can be obtained. At the same
time, there is a lot of point cloud data in the overlapping region
32A, and a densified field of view can be formed in which the
angular resolution is doubled.
[0065] In another embodiment of the present invention, FIG. 6 is a
schematic diagram of stitching fields of view corresponding to the
lidar system shown in FIG. 4.
[0066] Referring to FIG. 6, two laser transceiver modules 22 form a
basic field of view 41A vertically, in which the amount of point
cloud data is small, the angular resolution is low, and the
field-of-view range is small. After the two fields of view are
stitched, a wider field-of-view range can be obtained. At the same
time, there is a lot of point cloud data in the overlapping region
42A, and a densified field of view can be formed in which the
angular resolution is doubled.
[0067] In specific implementation, for short-distance scenes, the
low angular resolution of the basic field of view can meet the
resolution requirements of a driverless system. For long-distance
scenes, a higher angular resolution is required to recognize
objects of the same size. In an actual driving process, the system
mainly pays attention to the objects right ahead of the vehicle.
Therefore, by designing a reasonable preset angle, the overlapping
region of the field of view can be made a long-distance region
right ahead. This not only satisfies the short-distance and
long-distance high-resolution detection requirements of the
driverless system, but also reduces the design requirements on the
resolution in non-critical regions, and reduces the complexity and
cost of the lidar.
[0068] In specific implementation, the overlapping region may be
obtained by stitching vertical fields of view, or may be obtained
by stitching horizontal fields of view.
[0069] In another embodiment of the present invention, FIG. 7 shows
the field of view corresponding to the lidar system shown in FIG.
4. The angular resolution of the basic field of view 51A formed by
the four laser transceiver modules 22 horizontally is low, and the
amount of point cloud data is small. After the four fields of view
are stitched, a wider field-of-view range can be obtained. At the
same time, there is a lot of point cloud data in the overlapping
region 52A, and a densified field of view can be formed in which
the angular resolution is doubled.
[0070] By applying the above lidar system, and by setting a
reasonable preset angle, a small number of low-resolution
transceiver modules can be used to stitch regions in a space to
form multiple fields of view that have an overlap region. On the
one hand, the overlapping region can meet long-distance
high-resolution detection requirements. On the other hand,
non-overlapping regions meet the low-resolution design requirements
of non-critical regions, so as to reduce the processing complexity
of the sensing system. Therefore, the lidar system provided in the
embodiments of the present invention can improve the angular
resolution of the lidar at a lower cost and a lower system
processing complexity. For the overlapping region, dense point
cloud data can be obtained, and the angular resolution is doubled.
Therefore, the accuracy of object detection of the lidar can be
improved by processing the dense point cloud data.
[0071] In some possible scenarios, the inability of the lidar to
adjust the orientation of the center of the field of view will
cause deviation of the field of view in some scenes, and make it
impossible to collect effective point cloud data. For example, when
a vehicle moves uphill, the inability to adjust the orientation of
the center of the field of view will cause the field of view of the
lidar to deflect to the sky, thereby losing a lot of effective
ground information.
[0072] In a possible embodiment, as shown in FIG. 8, the current
movement information or road condition information is obtained in
real time first. In specific implementation, the center of the
field of view of the existing lidar is fixed relative to the body
coordinate system, and cannot be adapted to different scenes. For
example, when the vehicle moves uphill, the orientation angle at
the center of the field of view will deflect to the sky, thereby
losing a lot of effective ground information. In this embodiment of
the present invention, the current movement information or road
condition information is obtained in real time, and then the
orientation angle at the center of the field of view of the lidar
is adjusted according to the movement information or road condition
information that are obtained in real time. The movement
information may include: uphill movement, downhill movement, flat
road movement, left turn movement, and right turn movement. The road
condition information may include: uphill road condition, downhill
road condition, and flat road condition.
[0073] Current road condition information may be obtained in real
time based on a pre-downloaded map such as simultaneous
localization and mapping (SLAM), or the current road condition
information may be obtained in real time based on a point cloud map
constructed by the lidar, or the current road condition information
may be obtained in real time based on data captured by a
vehicle-mounted camera.
[0074] The current movement information may be obtained in real
time based on sensor parameters inside an autonomous driving system
such as a steering wheel parameter. This is not limited in this
embodiment of the present invention.
[0075] A desired detection angle is generated based on the obtained
current movement information or road condition information. Because
different movement directions or different road condition
information correspond to different desired orientations, the
desired detection angle may be generated based on the obtained
current movement information or road condition information. The
desired detection angle may change along a horizontal field of
view, or may change along a vertical field of view, which is not
limited in this embodiment of the present invention.
[0076] In a possible embodiment, the desired detection angle may be
generated in the following way based on the obtained current
movement information or road condition information with reference
to a vehicle centerline (that is, a centerline located at the
center of the vehicle body and pointing to the front of the
vehicle): When the movement information is flat road movement or
the road condition information is a flat road, the angle coincident
with the vehicle centerline is used as a desired detection angle to
obtain more surrounding environment information. When the movement
information is uphill movement or the road condition information is
an uphill road, an angle obtained by deviating the vehicle
centerline downward by a preset first angle is used as the desired
detection angle to obtain more ground environment information and
avoid the loss of effective ground information caused by overhead
orientation of the field of view of the lidar. When the movement
information is a downhill movement or the road condition
information is a downhill road, an angle obtained by deviating the
vehicle centerline upward by a preset second angle is used as the
desired detection angle to obtain more surrounding environment
information and avoid the problem that the field of view of the
lidar can obtain only the environment information at a near
distance. When the movement information is a left turn movement, an
angle obtained by deviating the vehicle centerline leftward by a
preset third angle is used as the desired detection angle to obtain
more left side environment information. When the movement
information is a right turn movement, an angle obtained by
deviating the vehicle centerline rightward by a preset fourth angle
is used as the desired detection angle to obtain more right side
environment information.
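The angle-selection rules of [0076] can be condensed into a short sketch. The preset first through fourth angles are given hypothetical values here (the disclosure leaves them unspecified), and the sign conventions and function name are assumptions:

```python
# Sketch of the rules in [0076]. Angles are (vertical_deg, horizontal_deg)
# offsets from the vehicle centerline; positive vertical means upward,
# positive horizontal means rightward. All numeric values are hypothetical.

PRESET_FIRST = 5.0    # uphill: deviate downward by the first angle
PRESET_SECOND = 5.0   # downhill: deviate upward by the second angle
PRESET_THIRD = 10.0   # left turn: deviate leftward by the third angle
PRESET_FOURTH = 10.0  # right turn: deviate rightward by the fourth angle

def desired_detection_angle(movement):
    """Return the offset of the desired detection angle relative to the
    vehicle centerline for the given movement/road condition information."""
    return {
        "flat":     (0.0, 0.0),               # coincide with centerline
        "uphill":   (-PRESET_FIRST, 0.0),     # aim at the ground
        "downhill": (PRESET_SECOND, 0.0),     # aim away from the near ground
        "left":     (0.0, -PRESET_THIRD),     # gather left-side information
        "right":    (0.0, PRESET_FOURTH),     # gather right-side information
    }[movement]

print(desired_detection_angle("uphill"))  # (-5.0, 0.0): aim below centerline
```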
[0077] To enable those skilled in the art to better understand and
implement the present invention, an embodiment of the present
invention provides a schematic diagram of a desired detection
angle, as shown in FIG. 9.
[0078] Referring to FIG. 9, a relationship between the desired
detection angle and the vehicle centerline is as follows: When the
vehicle is moving on a flat road, the desired detection angle 212
coincides with the vehicle centerline 211 to obtain more
surrounding environment information. When the vehicle is moving
uphill, the desired detection angle 222 is below the vehicle
centerline 221 to obtain more ground environment information. When
the vehicle is moving downhill, the desired detection angle 232 is
above the vehicle centerline 231 to obtain more surrounding
environment information.
[0079] Step S103: Adjust an orientation angle at the center of the
field of view of the lidar to the desired detection angle based on
the generated desired detection angle.
[0080] In specific implementation, the parameter of a scanning
apparatus of the lidar may be adjusted to control the orientation
angle at the center of the field of view of the lidar system to be
the desired detection angle. Optionally, the scanning apparatus is
a two-dimensional galvanometer. The two-dimensional galvanometer
transmits the laser pulse signal, which is transmitted by the
lidar, to a two-dimensional space, and receives a laser pulse echo
signal reflected from the two-dimensional space. The
two-dimensional galvanometer used to control the orientation angle
at the center of the field of view of the lidar to be the desired
detection angle facilitates engineering implementation of an
integrated and miniaturized lidar. Optionally, the scanning
apparatus is two one-dimensional galvanometers that are
perpendicular to each other and capable of vibrating independently.
The two one-dimensional galvanometers control the scanning of the
vertical field of view and the scanning of the horizontal field of
view respectively. The center positions of the vertical scan and
the horizontal scan are controlled by the two one-dimensional
galvanometers respectively, implementing two-dimensional
orientation of the center of the field of
view of the lidar. When the scanning apparatus is a two-dimensional
galvanometer, the drive voltage or drive current of the
two-dimensional galvanometer may be adjusted to control the
orientation angle at the center of the field of view of the lidar
system to be the desired detection angle. When the scanning
apparatus is two one-dimensional galvanometers that are
perpendicular to each other and capable of vibrating independently,
the drive voltage or drive current of the two one-dimensional
galvanometers may be adjusted to control the orientation angle at
the center of the field of view of the lidar system to be the
desired detection angle. By using two one-dimensional
galvanometers, the control is simplified, but the system size is
larger.
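One way to realize the drive-voltage control described in [0080] can be sketched under the assumption of a linear deflection response; the bias voltage and the degrees-per-volt sensitivity below are hypothetical parameters, not values from the disclosure:

```python
# Hedged sketch: steering a two-axis galvanometer's center orientation by
# its drive voltages, assuming (hypothetically) a linear response of
# `sensitivity` degrees of deflection per volt about a bias point.

def drive_voltages(desired_v_deg, desired_h_deg,
                   bias_v=2.5, sensitivity=4.0):
    """Map a desired (vertical, horizontal) center angle of the field of
    view to the drive voltages of the two galvanometer axes."""
    return (bias_v + desired_v_deg / sensitivity,
            bias_v + desired_h_deg / sensitivity)

# Aim the field-of-view center 5 degrees below the centerline (uphill case):
v_axis, h_axis = drive_voltages(-5.0, 0.0)
print(v_axis, h_axis)  # 1.25 2.5
```

The same mapping applies whether the two voltages drive the two axes of a single two-dimensional galvanometer or two independent one-dimensional galvanometers; only the hardware wiring differs.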
[0081] To enable those skilled in the art to better understand and
implement the present invention, an embodiment of the present
invention provides a schematic structural diagram of a scanning
apparatus, as shown in FIG. 10.
[0082] Referring to FIG. 10, the scanning apparatus includes a
one-dimensional galvanometer 31 and a one-dimensional galvanometer
32. The one-dimensional galvanometer 31 and the one-dimensional
galvanometer 32 are perpendicular to each other and can rotate
independently.
[0083] In specific implementation, a laser pulse signal transmitted
by a lidar is first incident on the one-dimensional galvanometer
31, and then reflected to the one-dimensional galvanometer 32. The
laser pulse signal is reflected to a space by the one-dimensional
galvanometer 32, and the one-dimensional galvanometer 32 receives a
laser pulse echo signal reflected back from the space. The rotation
direction of the one-dimensional galvanometer 31 is perpendicular
to the rotation direction of the one-dimensional galvanometer 32,
and the two galvanometers can rotate independently. Therefore, the
one-dimensional galvanometer 31 and the one-dimensional
galvanometer 32 can be controlled to vibrate independently, thereby
controlling the orientation at the center of the horizontal field
of view and the orientation at the center of the vertical field of
view of the lidar, and implementing two-dimensional change of the
orientation at the center of the field of view of the lidar.
[0084] In specific implementation, the lidar may be placed on a
two-dimensional gimbal. Parameters of the two-dimensional gimbal,
such as rotation direction parameters, are adjusted to control the
orientation angle at the center of the field of view of the lidar
system to be the desired detection angle.
[0085] To enable those skilled in the art to better understand and
implement the present invention, an embodiment of the present
invention provides a schematic diagram of a field-of-view range of
a lidar, as shown in FIG. 11.
[0086] Referring to FIG. 11, the field of view detectable by the
lidar 40 is a two-dimensional plane 41, and an angle between the
lidar 40 and a center point 42 of the two-dimensional plane 41 is
an orientation angle at the center of the field of view of the
lidar 40.
[0087] By applying the above solution, the current movement
information or road condition information is obtained in real time,
then the desired detection angle is generated based on the current
movement information or road condition information, and the
orientation angle at the center of the field of view of the lidar
is adjusted to the desired detection angle. In this way, the
orientation angle at the center of field of view of the lidar is
adjusted in real time to adapt to different scenes during the
movement process.
[0088] To enable those skilled in the art to better understand and
implement the present invention, an embodiment of the present
invention further provides a lidar system capable of implementing
the method for adjusting an orientation at the center of a field of
view of a lidar, as shown in FIG. 12. The lidar system may include
an obtaining unit, a generating unit, and an adjusting unit.
[0089] The obtaining unit is configured to obtain current movement
information or road condition information in real time.
[0090] The generating unit is configured to generate a desired
detection angle based on the obtained current movement information
or road condition information.
[0091] The adjusting unit is configured to adjust an orientation
angle at the center of the field of view of the lidar to the
desired detection angle based on the generated desired detection
angle.
[0092] Optionally, the adjusting unit is configured to adjust a
parameter of a scanning apparatus of the lidar to control the
orientation angle at the center of the field of view of the lidar
system to be the desired detection angle.
[0093] In specific implementation, the scanning apparatus is a
two-dimensional galvanometer or two one-dimensional galvanometers
that are perpendicular to each other and capable of vibrating
independently.
[0094] In specific implementation, when the scanning apparatus is a
two-dimensional galvanometer, the adjustment unit is configured to
adjust a drive voltage or drive current of the two-dimensional
galvanometer to control the orientation angle at the center of the
field of view of the lidar system to be the desired detection
angle. When the scanning apparatus is two one-dimensional
galvanometers that are perpendicular to each other and capable of
vibrating independently, the adjustment unit is configured to
adjust the drive voltage or drive current of the two
one-dimensional galvanometers to control the orientation angle at
the center of the field of view of the lidar system to be the
desired detection angle.
[0095] In an embodiment of the present invention, the lidar system
further includes a two-dimensional gimbal (not shown). The
adjustment unit is configured to adjust a parameter of the
two-dimensional gimbal to control the orientation direction at the
center of the field of view of the lidar system to be the desired
detection angle.
[0096] In specific implementation, the obtaining unit is configured
to obtain current road condition information in real time in at
least one of the following ways: a pre-downloaded map, a point
cloud map constructed by the lidar, or data captured by a
camera.
[0097] In specific implementation, the generating unit includes: a
first generating subunit (not shown), a second generating subunit
(not shown), a third generating subunit (not shown), a fourth
generating subunit (not shown), and a fifth generating subunit (not
shown).
[0098] The first generating subunit is configured to use an angle
in coincidence with a vehicle centerline as a desired detection
angle when movement information is flat road movement or road
condition information is a flat road condition.
[0099] The second generating subunit is configured to use an angle,
which is obtained by deviating the vehicle centerline downward by a
preset first angle, as the desired detection angle when the
movement information is an uphill movement or the road condition
information is an uphill road condition.
[0100] The third generating subunit is configured to use an angle,
which is obtained by deviating the vehicle centerline upward by a
preset second angle, as the desired detection angle when the
movement information is a downhill movement or the road condition
information is a downhill road condition.
[0101] The fourth generating subunit is configured to use an angle,
which is obtained by deviating the vehicle centerline leftward by a
preset third angle, as the desired detection angle when the
movement information is a left turn movement.
[0102] The fifth generating subunit is configured to use an angle,
which is obtained by deviating the vehicle centerline rightward by
a preset fourth angle, as the desired detection angle when the
movement information is a right turn movement.
[0103] In specific implementation, for the working process and
principles of the lidar system, reference may be made to the
description in the method provided in the foregoing embodiment, and
details are omitted here.
[0104] As shown in FIG. 13, in a possible embodiment, a method for
adjusting a field of view of a lidar is provided, including:
[0105] obtaining current road condition information in real time.
[0106] The size of a field of view (FOV) of an existing lidar is
fixed. Therefore, in some scenarios, in order to improve an angular
resolution of the lidar, the number of pairs of transceiver modules
has to be increased, resulting in increase of the size, power
consumption, and costs of a lidar system. In view of this,
according to the present invention, the current road condition
information is obtained in real time, and then the size of the
field of view of the lidar is adjusted in real time according to
the obtained road condition information to meet angular resolution
requirements in different scenes. The road condition information
obtained in real time includes all detected object information. For
example, the road condition information may be a suspected vehicle
object at a long distance ahead. Specifically, current road
condition information may be obtained in real time based on a
pre-downloaded map such as simultaneous localization and mapping
(SLAM), or the current road condition information may be obtained
in real time based on a point cloud map constructed by the lidar,
or the current road condition information may be obtained in real
time based on data captured by a vehicle-mounted camera.
[0107] determining a resolution requirement and a field-of-view
requirement based on the obtained current road condition
information, and determining a target region according to the
resolution requirement and the field-of-view requirement.
[0108] The determining a target region according to the resolution
requirement and the field-of-view requirement includes: determining
that a first range region is the target region when the resolution
requirement is a first resolution and the field-of-view requirement
is a first field of view; determining that a second range region is
the target region when the resolution requirement is a second
resolution and the field-of-view requirement is a second field of
view, where the first resolution is higher than the second
resolution, the first field of view is smaller than the second
field of view, and the first range region is smaller than the
second range region. Specifically, a resolution requirement and a
field-of-view requirement may be determined based on the obtained
current road condition information, and a target region may be
determined based on the resolution requirement and the
field-of-view requirement. For example, when the road condition
information is a suspected vehicle object at a long distance ahead,
because a low resolution makes the object unrecognizable, the
current resolution requirement is the first resolution such as a
high resolution, and the field-of-view requirement is a first field
of view such as a small field of view. A first range region, such
as a small range region, around the suspected vehicle object at a
long distance ahead may be defined as the target region, thereby
facilitating the lidar to subsequently shrink the field of view,
scan the target region centrally, and obtain more accurate
information.
[0109] adjusting the field of view of the lidar to the target
region.
[0110] In specific implementation, the field of view of the lidar
is adjusted to the target region. That is, the orientation of the
lidar is adjusted to the target region, and the target region is
detected. In specific implementation, a first range region may be
selected as the target region, and the field of view of the lidar
may be adjusted to the target region. For example, a suspected
object is found based on current road condition information.
However, due to a limited resolution, complete contour information
of the suspected object is unavailable. Therefore, the current
resolution requirement is a first resolution such as a high
resolution, and the field-of-view requirement is a first field of
view such as a small field of view. A first range region around the
suspected object, such as a small range region, may be used as a
target region, and the orientation of the lidar is adjusted to the
target region. Because the field of view becomes smaller, the lidar
can clearly distinguish the complete contour information of the
suspected object. Based on the complete contour information, a
sensing processing unit of a driverless system can determine the
type of the suspected object and other key information after simple
processing, thereby improving reliability of autonomous driving. In
specific implementation, a second range region such as a wide range
region may be selected as the target region, and the field of view
of the lidar may be adjusted to the target region. For example,
when no suspected object is found based on the current road
condition information, the current resolution requirement is a
second resolution, such as a low resolution, and the field-of-view
requirement is a second field of view, such as a wide field of
view. The field-of-view range of the lidar may be widened, the
target region may be determined, and the orientation of the lidar
may be adjusted to the target region. Due to the widened field of
view, the lidar can detect a wide range in all related surrounding
regions to find suspected objects.
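The wide-region versus small-region selection walked through above can be condensed into a sketch; the function name, data shapes, and numeric half-widths are illustrative assumptions, not part of the disclosed method:

```python
# Sketch of the selection logic of [0107]-[0110]: a suspected object
# raises the resolution requirement (first resolution) and shrinks the
# field of view (first field of view) to a small region around it;
# otherwise a wide, low-resolution scan (second resolution, second field
# of view) is kept to search the surroundings.

def choose_target_region(suspected_object):
    """suspected_object: None, or an (azimuth_deg, elevation_deg) center.
    Returns (region_center, half_width_deg) of the region to scan."""
    if suspected_object is not None:
        # First range region: small, centered on the suspect, so the lidar
        # can scan it centrally and resolve the complete contour.
        return suspected_object, 5.0
    # Second range region: wide, centered ahead, to find suspected objects.
    return (0.0, 0.0), 60.0

print(choose_target_region((12.0, -1.0)))  # ((12.0, -1.0), 5.0)
print(choose_target_region(None))          # ((0.0, 0.0), 60.0)
```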
[0111] In specific implementation, the parameter of a scanning
apparatus of the lidar may be adjusted to control the field of view
of the lidar system to be the target region.
[0112] In an embodiment of the present invention, when the scanning
apparatus is a two-dimensional galvanometer, a drive voltage of the
two-dimensional galvanometer is adjusted to control the field of
view of the lidar to be the target region.
[0113] In specific implementation, an optical parameter of the
lidar may be adjusted to control the field of view of the lidar
system to be the target region.
[0114] In an embodiment of the present invention, the optical
parameter is a focal length parameter of a transmitting optical
system and a focal length parameter of a receiving optical
system.
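The link between the focal length parameters of [0114] and the field-of-view size can be made concrete with the standard thin-lens relation FOV = 2·arctan(d / 2f) for an aperture or detector of size d and focal length f; the numeric values below are illustrative only:

```python
import math

# For a simple optical system, the full field-of-view angle follows from
# the detector (or emitter) size d and the focal length f. Lengthening f
# narrows the field of view onto the target region; shortening f widens it.

def field_of_view_deg(detector_mm, focal_mm):
    """Full field-of-view angle (degrees) for size d and focal length f."""
    return math.degrees(2 * math.atan(detector_mm / (2 * focal_mm)))

wide = field_of_view_deg(10.0, 8.0)     # short focal length: wide FOV
narrow = field_of_view_deg(10.0, 50.0)  # long focal length: narrow FOV
print(round(wide, 1), round(narrow, 1))
```

Adjusting the transmitting and receiving focal lengths in tandem therefore trades field-of-view size against the angular density of the emitted and received beams over the target region.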
[0115] To enable those skilled in the art to better understand and
implement the present invention, an embodiment of the present
invention provides a schematic diagram of a field-of-view size of a
lidar, as shown in FIG. 14.
[0116] Referring to FIG. 14, road condition information in a wide
field of view is obtained based on a real-time point cloud map of
the lidar; then a small target region is determined based on a
suspected vehicle in the road condition information obtained in
real time, and a field of view of the lidar is adjusted to the
small target region, that is, an orientation of the lidar is
adjusted to the small target region. Because the field of view
becomes smaller, the lidar can clearly distinguish complete contour
information of the suspected vehicle.
[0117] To enable those skilled in the art to better understand and
implement the present invention, an embodiment of the present
invention provides a schematic diagram of point cloud data of a
lidar. As shown in FIG. 15, for a target vehicle in a space, point
cloud data detected with a wide field of view is on the left side
of the drawing, and point cloud data detected with a small field of
view is on the right side. It can be seen that the point cloud data
detected in the wide field of view is not enough (only two points
are detected) to distinguish the contour information of the target
vehicle, so a sensing processing unit of the driverless system
cannot determine specific information of the target vehicle. After
the field of view is shrunk, however, more point cloud data can be
detected in the small field of view, the contour information of the
target vehicle can be distinguished clearly, and the sensing
processing unit of the driverless system can determine the type of
the target vehicle and other key information through simple
processing.
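The trade-off described above follows from a simple proportionality: with a fixed number of measurement points per frame, the angular resolution improves as the field of view narrows, so an object of fixed angular size returns more points. The numbers below are illustrative, chosen only to mirror the two-point example in the text.

```python
def points_on_target(target_angular_width_deg: float,
                     fov_deg: float,
                     points_per_frame: int) -> int:
    """Approximate number of returns on a target of the given angular
    width, assuming points_per_frame measurements spread evenly over
    the field of view (an idealized one-dimensional model)."""
    angular_resolution = fov_deg / points_per_frame  # degrees per point
    return round(target_angular_width_deg / angular_resolution)

# A distant vehicle subtending 1 degree (illustrative numbers):
points_on_target(1.0, 100.0, 200)  # 2 points in the wide field of view
points_on_target(1.0, 20.0, 200)   # 10 points in the small field of view
```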
[0118] By applying the above solution, the current road condition
information is obtained in real time, the target region is
determined based on the obtained current road condition
information, and the field of view of the lidar is adjusted to the
target region. In this way, without increasing costs or affecting
ranging, the size of the target region can be adjusted in real time
based on the current road condition information, and the size of
the field of view of the lidar is adjusted in real time to adapt to
scenes with different resolution requirements.
[0119] A person of ordinary skill in the art may understand that
all or some of the steps of the various methods in the foregoing
embodiments may be implemented by a program instructing relevant
hardware. The program may be stored in a computer-readable storage
medium. The storage medium may include a read-only memory (ROM), a
random access memory (RAM), a magnetic disk, an optical disc, or
the like.
[0120] The foregoing descriptions are merely exemplary embodiments
of this specification, but are not intended to limit this
specification. Any modification, equivalent replacement, or
improvement made without departing from the spirit and principle of
this specification should fall within the protection scope of this
specification.
* * * * *