U.S. patent application number 17/505640 was published by the patent office on 2022-02-10 under publication number 20220043155 for PRECISELY CONTROLLED CHIRPED DIODE LASER AND COHERENT LIDAR SYSTEM. The applicant listed for this patent is Intel Corporation. The invention is credited to Naresh SATYAN and George RAKULJIC.

United States Patent Application 20220043155
Kind Code: A1
SATYAN; Naresh; et al.
February 10, 2022

PRECISELY CONTROLLED CHIRPED DIODE LASER AND COHERENT LIDAR SYSTEM
Abstract
A light detection and ranging (LIDAR) system may include a laser
source configured to emit one or more optical beams; a scanning
optical system configured to scan the one or more optical beams
over a scene and capture reflections of the one or more optical
beams from the scene; a measurement system configured to divide the
scene into a plurality of pixels, the measurement system comprising
a detector configured to detect a return signal from multiple
pixels of the plurality of pixels as the one or more optical beams
are scanned across the scene, and a data processor configured to
perform data processing from the return signal from the multiple
pixels to determine a range and/or range rate for each pixel of the
scene.
Inventors: SATYAN; Naresh (Pasadena, CA); RAKULJIC; George (Santa Monica, CA)

Applicant: Intel Corporation, Santa Clara, CA, US

Appl. No.: 17/505640
Filed: October 20, 2021
Related U.S. Patent Documents

Application Number | Filing Date  | Patent Number
16032535           | Jul 11, 2018 | 11187807
17505640           |              |
International Class: G01S 17/89 (20060101); G01S 7/481 (20060101); G01S 7/4915 (20060101); G02F 1/21 (20060101); H01S 5/026 (20060101); G01S 7/4914 (20060101); G01S 7/4911 (20060101); H01S 5/10 (20060101); G01S 17/42 (20060101); G01S 7/497 (20060101); G01S 17/34 (20060101)
Claims
1. A light detection and ranging (LIDAR) system, comprising: a
laser source configured to emit one or more optical beams; a
scanning optical system configured to scan the one or more optical
beams over a scene and capture reflections of the one or more
optical beams from the scene, a measurement system configured to
divide the scene into a plurality of pixels, the measurement system
comprising a detector configured to detect a return signal from
multiple pixels of the plurality of pixels as the one or more
optical beams are scanned across the scene, and a data processor
configured to perform data processing from the return signal from
the multiple pixels to determine a range and/or range rate for each
pixel of the scene.
2. The LIDAR system of claim 1, the laser source configured to vary
the optical frequency of the one or more optical beams in
accordance with a periodic frequency versus time function.
3. The LIDAR system of claim 1, wherein the data processing
comprises a sliding-window data processing from the return signal
from the multiple pixels to determine the range and/or range rate
for each pixel of the scene.
4. The LIDAR system of claim 3, wherein the sliding-window data
processing comprises a sliding-window Fourier transformation.
5. A light detection and ranging (LIDAR) system, comprising: a
laser source configured to emit one or more optical beams; a
scanning optical system configured to scan the one or more optical
beams over a scene and capture reflections of the one or more
optical beams from the scene, a measurement system configured to
divide the scene into a plurality of pixels, the measurement system
comprising a multiplicity of detectors, wherein for each pixel of
the plurality of pixels a first detector of the multiplicity of
detectors is arranged along a scanning axis of the scanning optical
system and a second detector of the multiplicity of detectors is
arranged spatially separated from the first detector, wherein the
first detector determines a first range, and the second detector
determines a second range larger than the first range.
6. The LIDAR system of claim 5, wherein the multiplicity of
detectors are spatially staggered.
7. The LIDAR system of claim 5, wherein each of the detectors of
the multiplicity of detectors only detects a return signal from a
subset of target ranges.
8. A staring coherent light detection and ranging (LIDAR) system,
comprising: a frequency modulated laser configured to emit one or
more optical beams; an optical system configured to emit the one or
more optical beams to a scene and to capture reflections of the one
or more optical beams, a measurement system configured to divide
the scene into a plurality of pixels, the measurement system
comprising a detector array comprising a plurality of detectors,
and configured to simultaneously determine a return signal from
multiple pixels of the plurality of pixels of the scene, and mix
the return signal from multiple pixels with one or more local
oscillator beams to determine a range and/or range rate for each of
the multiple pixels.
9. The staring coherent LIDAR system of claim 8, wherein the
detector array is a first detector array, and the measurement
system further comprises a second detector array configured to
perform optical balancing of the first detector array.
10. The staring coherent LIDAR system of claim 9, wherein the first
detector array and the second array are arranged on separate
chips.
11. The staring coherent LIDAR system of claim 9, wherein the first
detector array and the second array are arranged on a single
wafer.
12. The staring coherent LIDAR system of claim 8, wherein the
measurement system further comprises at least one optical component
configured to perform optical balancing of the detector array on
adjacent pixels by introducing a π-phase shift on a local
oscillator signal or a beam from the scene.
13. The staring coherent LIDAR system of claim 12, wherein the at
least one optical component is one of the group of: a phase
shifter, polarization optics, and a pixelated polarizer.
14. The staring coherent LIDAR system of claim 8, wherein the
optical system is configured to simultaneously emit the one or more
optical beams to an entirety of the scene and simultaneously
capture reflections of the one or more optical beams from the
entirety of the scene.
15. The staring coherent LIDAR system of claim 8, the measurement
system further comprising a data processor configured to perform
sliding-window Fourier transformations from the return signal from
multiple pixels of the plurality of pixels to determine the range
and/or range rate for each pixel of the scene.
16. The staring coherent LIDAR system of claim 15, wherein the data
processor and the detector array are integrated monolithically in
close proximity.
17. The staring coherent LIDAR system of claim 15, wherein the data
processor and the detector array are integrated in hybrid
integration in close proximity.
18. The staring coherent LIDAR system of claim 8, wherein the frequency modulated
laser is configured to vary the optical frequency of the one or
more optical beams in accordance with a periodic frequency versus
time function.
19. The staring coherent LIDAR system of claim 8, wherein the
frequency modulated laser is configured to emit the one or more
optical beams having a frequency of alternate up-and-down-chirps.
Description
NOTICE OF COPYRIGHTS AND TRADE DRESS
[0001] A portion of the disclosure of this patent document contains
material which is subject to copyright protection. This patent
document may show and/or describe matter which is or may become
trade dress of the owner. The copyright and trade dress owner has
no objection to the facsimile reproduction by anyone of the patent
disclosure as it appears in the Patent and Trademark Office patent
files or records, but otherwise reserves all copyright and trade
dress rights whatsoever.
RELATED APPLICATION INFORMATION
[0002] This patent claims priority from provisional patent
application No. 62/675,567, filed May 23, 2018, titled HIGH-SPEED
COHERENT LIDAR, and provisional patent application No. 62/536,425,
filed Jul. 24, 2017, titled COHERENT CHIRPED LIDAR FOR 3D IMAGING,
the contents of both are incorporated by reference herein in their
entirety.
BACKGROUND
Field
[0003] This disclosure relates to three-dimensional (3D) image
systems and, specifically, to coherent Light Detection and Ranging
(LIDAR) imaging systems.
Description of the Related Art
[0004] 3D imaging systems are critical for a number of applications
in the automotive industry, robotics, unmanned vehicles, etc.
Traditionally, pulsed LIDAR systems relying on the time of flight
(TOF) technique have been investigated for these applications,
wherein the range to a target is determined by measuring the time
taken by a single narrow pulse to be reflected from a distant
target. TOF LIDAR suffers from some major challenges and
limitations. Since the amount of light reflected from distant
targets is very small (typically under 100 photons in the
measurement time) and the pulse width needs to be very small (<1
ns) to achieve high range accuracy, these systems require
sophisticated detectors, typically high-speed photon-counters. The
TOF technique places very stringent requirements on the dynamic
range of the detector and associated electronics (typically about
60 dB, or 20 bits). Further, these very sensitive detectors often
have difficulty dealing with crosstalk from other LIDAR systems or
from other sources of light (including direct sunlight) when
operated in real-world situations.
[0005] Coherent LIDAR is a promising 3D imaging technology because
of its potential to achieve excellent performance on a number of
key metrics: high-speed (>1 million points/second), long-range
(>200 m for targets with albedo of 0.1), high lateral resolution
(<0.1 degrees), and fine range precision (<10 cm). FIG. 1 is
a simplified block diagram of an exemplary coherent LIDAR system
100. The LIDAR system 100 is based on the frequency modulated
continuous wave (FMCW) technique, where the frequency of a
continuous wave (CW) laser is "chirped" or changed in accordance
with a predetermined periodic frequency versus time function. The
periodic frequency versus time function may be a positive linear
sawtooth function where the frequency starts at a baseline value,
increases linearly over a time period, resets to the baseline
value, and repeats periodically. The periodic frequency versus time
function may be a negative linear sawtooth function where the
frequency starts at a baseline value, decreases linearly over a
time period, resets to the baseline value, and repeats
periodically. The periodic frequency versus time function may be a
linear triangular function where the frequency starts at a baseline
value, increases linearly over a time period, decreases linearly
back to the baseline value over a second time period, and repeats
periodically. The periodic frequency versus time function may be
some other linear or nonlinear function. The system 100 includes a
semiconductor diode laser 110 to produce the chirped waveform. This
device will be referred to in this patent as a Chirped Diode Laser
(CHDL).
[0006] The output of the CHDL 110 is divided into two components by
a tap coupler 120. A small fraction is separated from the output to
be used as a Local Oscillator (LO) wave 125. The majority
(typically >90%) of the CHDL output power is directed to a
target 160 via a circulator 130. Typically, an optical system (not
shown), such as a telescope, is used to form the CHDL output into a
collimated output beam 135 that illuminates the target 160. Light
140 reflected from the target is collected by the same optical
system (not shown) and returned to the circulator 130. The
reflected light exits the circulator 130 and is combined with the
local oscillator wave 125 in a 2×1 coupler 145. The combined
LO wave and the reflected light from the target are incident on a
photodetector (PD) 150. The photodetector 150 provides an output
current proportional to the incident optical power. The
photodetector 150 effectively multiplies the amplitudes of the
reflected light and the LO wave to create a coherent "beat signal"
whose frequency is directly proportional to the round-trip time
delay to the target, and the range to the target is thus
determined.
[0007] While FIG. 1 is drawn as if the optical paths between the
CHDL, couplers, circulator and photodetector are optical fibers,
this is not necessarily the case. The couplers and circulator can
be implemented using discrete optical elements and the optical
paths between these elements may be in free space.
[0008] FIG. 2A is a graph of optical frequency versus time which
illustrates the operation of a coherent LIDAR system, such as the
coherent LIDAR system 100. In this example, the optical frequency
of the output wave from a chirped diode laser (CHDL) follows a
positive linear sawtooth function where the optical frequency
during each chirp period T is given by
ω = ω₀ + ξt, (1)

[0009] where ω = optical frequency of the laser output wave; [0010] ω₀ = a baseline frequency at the start of each chirp; [0011] ξ = a chirp rate measured in frequency per unit time; and [0012] t = time.
[0013] The optical frequency of the reflected wave from a target
follows a similar function, but is offset in time from the output
wave by a period given by
τ = 2R/c (2)

[0014] where τ = the time interval between the output and reflected chirps = the round-trip time to the target; [0015] R = the range to the target; and [0016] c = the speed of light.

The chirp period T must be longer than the round-trip time to the target, τ, to provide a measurement interval T_M. The required length of the measurement interval T_M is determined by, among other factors, the signal-to-noise ratio of the reflected beam. At any given time during the measurement interval T_M, the frequency difference Δω between the output and reflected waves is given by:

[0016] Δω = ξτ. (3)
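As a worked illustration of the relationship in Eqs. (1) through (3), the beat frequency implied by a given target range and chirp rate can be computed directly. The sketch below uses assumed example values for the chirp rate and range; they are not parameters from this disclosure.

```python
# Illustrative beat-frequency calculation for a linear FMCW chirp
# (Eqs. 1-3).  The chirp rate and target range are assumed examples.

C = 299_792_458.0  # speed of light, m/s

def beat_frequency_hz(range_m: float, chirp_rate_hz_per_s: float) -> float:
    """Beat frequency for a target at range_m.

    tau = 2R/c is the round-trip delay (Eq. 2); the output and
    reflected chirps then differ in frequency by xi * tau (Eq. 3).
    """
    tau = 2.0 * range_m / C
    return chirp_rate_hz_per_s * tau

# Example: a 150 m target with a 100 GHz/ms chirp rate (assumed values)
xi = 1e14  # chirp rate, Hz/s
print(f"beat frequency: {beat_frequency_hz(150.0, xi) / 1e6:.1f} MHz")
```

Note that the beat frequency scales linearly with both range and chirp rate, which is what allows the range to be read directly from a frequency measurement.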
[0017] FIG. 2B is a graphical representation of the mixing of an LO wave and a reflected wave incident on a photodetector. The output current from the photodetector is given by

i ≈ (P_LO P_REF)^(1/2) cos(Δωt + ω₀τ) (4)

where i = current output from the photodetector; and [0018] P_LO, P_REF = power of the LO and reflected waves. Δω can be determined by processing the output current from the photodetector. For example, the current value may be digitized with a sample rate substantially higher than the anticipated value of Δω. A Fourier transform or other process may be performed on the digitized samples to determine Δω. Alternatively, Δω may be determined by a bank of hardware filters or some other technique. The range R to the target may then be determined by

[0018] R = Δωc/2ξ (5)

[0019] where c = the speed of light. The resolution of the range measurement is given by

[0019] δ ≈ c/2B (6)

[0020] where δ = resolution in the range direction, and [0021] B = frequency change during the chirp period in cycles/sec [0022] = ξT/2π.
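The digitize-and-Fourier-transform processing described above can be sketched numerically: simulate the photodetector beat signal for one measurement interval, take an FFT, locate the spectral peak, and convert the peak frequency to range via Eq. (5). All system parameters here (chirp rate, sample rate, target range) are assumed for illustration only.

```python
# Sketch of FFT-based range extraction from the photodetector current.
# All parameters are assumed example values, not disclosed ones.
import numpy as np

C = 299_792_458.0
xi = 1e14                 # chirp rate, Hz/s (assumed)
fs = 500e6                # sample rate, Hz (assumed; well above the beat)
n = 1000                  # samples in the measurement interval (2 us)

true_range = 120.0                       # metres (assumed)
f_beat = xi * 2.0 * true_range / C       # Eqs. (2) and (3), in Hz

t = np.arange(n) / fs
i_pd = np.cos(2.0 * np.pi * f_beat * t)  # beat term of Eq. (4)

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
f_peak = freqs[np.argmax(np.abs(np.fft.rfft(i_pd)))]

est_range = f_peak * C / (2.0 * xi)      # Eq. (5), with frequencies in Hz
print(f"estimated range = {est_range:.1f} m")
```

The FFT bin spacing fs/n sets the frequency resolution of this estimate, which translates into the range resolution of Eq. (6).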
[0023] A coherent LIDAR system can determine both a range to a
target and a rate of change of the range (range rate) to the
target. FIG. 3 is a graph of optical frequency versus time which
illustrates determining a rate of change of the range to a target.
The reflected beam from a moving target will be subject to a
Doppler frequency shift given by
ω_D = (2π/λ)(dR/dt) (7)

[0024] where ω_D = Doppler frequency shift in the reflected beam; [0025] λ = the laser wavelength; and dR/dt = the rate of change of the range to the target.
[0026] When a target is illuminated with a laser beam with an
up-chirp (i.e., a beam that increases in frequency with time), the
Doppler shift and the frequency shift due to the delay of the
reflected beam are additive, such that
Δω⁺ = ω_D + ξτ, (8)

where Δω⁺ = the frequency difference between the output and reflected waves for an up-chirp. When a target is illuminated with a laser beam with a down-chirp (i.e., a beam that decreases in frequency with time), the Doppler shift and the frequency shift due to the delay of the reflected beam are subtractive, such that

Δω⁻ = ω_D − ξτ, (9)

[0027] where Δω⁻ = the frequency difference between the output and reflected waves for a down-chirp. Illuminating a target with both
an up-chirp beam and a down-chirp beam, concurrently or
sequentially, allows determination of both range and range-rate.
For example, the optical frequency of a single CHDL can be
modulated to follow a linear triangular function, as shown in FIG.
3, to provide sequential up-chirp and down-chirp measurements.
Simultaneous determination of range and range rate is a major
advantage of coherent LIDAR systems that enables faster and better
target tracking in a variety of applications.
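The sum-and-difference relationship in Eqs. (7) through (9) can be illustrated with a short sketch: the half-sum of the up- and down-chirp beat frequencies isolates the Doppler term, and the half-difference isolates the delay term. The chirp rate and wavelength below are assumed example values, not parameters from this disclosure.

```python
# Recovering range and range rate from paired up-/down-chirp beats
# (Eqs. 7-9).  Chirp rate and wavelength are assumed example values.
import math

C = 299_792_458.0
XI = 1e14          # chirp rate, Hz/s (assumed)
LAM = 1.55e-6      # laser wavelength, m (assumed)

def range_and_rate(dw_up: float, dw_down: float):
    """Solve Eqs. (8) and (9) for range and range rate.

    dw_up   = omega_D + xi*tau   (up-chirp,   Eq. 8)
    dw_down = omega_D - xi*tau   (down-chirp, Eq. 9)
    """
    omega_d = 0.5 * (dw_up + dw_down)        # Doppler term
    xi_tau = 0.5 * (dw_up - dw_down)         # delay term
    rng = (xi_tau / XI) * C / 2.0            # invert tau = 2R/c
    rate = omega_d * LAM / (2.0 * math.pi)   # invert Eq. (7)
    return rng, rate

# Forward-simulate a target at 100 m closing at 20 m/s, then recover it.
tau = 2.0 * 100.0 / C
omega_d = (2.0 * math.pi / LAM) * (-20.0)    # Eq. (7)
r, v = range_and_rate(omega_d + XI * tau, omega_d - XI * tau)
print(f"range = {r:.2f} m, range rate = {v:.2f} m/s")
```

This separation is exact for a single target, which is why concurrent up- and down-chirps yield both quantities in one measurement interval.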
DESCRIPTION OF THE DRAWINGS
[0028] FIG. 1 is a simplified block diagram of a coherent LIDAR
system.
[0029] FIG. 2A is a graph of optical frequency versus time
illustrating the operation of a coherent LIDAR system.
[0030] FIG. 2B is a graphical representation of the output current
from a photodetector.
[0031] FIG. 3 is a graph of optical frequency versus time
illustrating determination of range and range-rate based on Doppler
shift.
[0032] FIG. 4 is a block diagram of a state machine chirped diode
laser.
[0033] FIG. 5 is a block diagram of a coherent LIDAR system using
two chirped lasers to concurrently measure range and range
rate.
[0034] FIG. 6 is a graph of optical frequency versus time
illustrating multiple measurements during a single laser chirp.
[0035] FIG. 7 is a graphical representation of an amplified chirped
laser for long range LIDAR systems.
[0036] FIG. 8 is a block diagram of a scanning coherent LIDAR
system.
[0037] FIG. 9 is a block diagram of a scanning optical subsystem
for use in a coherent LIDAR system.
[0038] FIG. 10 is a block diagram of another scanning optical
subsystem for use in a coherent LIDAR system.
[0039] FIG. 11 is a block diagram of another scanning optical
subsystem for use in a coherent LIDAR system.
[0040] FIG. 12 is a block diagram of another scanning optical
subsystem for use in a coherent LIDAR system.
[0041] FIG. 13A is a graphical representation of temporal
over-sampling.
[0042] FIG. 13B is a graphical representation of spatial
over-sampling.
[0043] FIG. 14 is a block diagram of a coherent LIDAR system using
spatial oversampling.
[0044] FIG. 15 is a block diagram of a scanning optical subsystem
for use in a coherent LIDAR system with spatial over-sampling.
[0045] FIG. 16A is a block diagram of a single element of a staring
coherent LIDAR system.
[0046] FIG. 16B is a block diagram of a staring LIDAR system.
[0047] FIG. 17 is a depiction of a staring LIDAR sensor array
integrated on a single chip.
[0048] Throughout this description, elements appearing in figures
are assigned three-digit or four-digit reference designators, where
the two least significant digits are specific to the element and
the most significant digit or digits provide the number of the
figure where the element is introduced. An element that is not
described in conjunction with a figure may be presumed to have the
same characteristics and function as a previously-described element
having the same reference designator.
DETAILED DESCRIPTION
[0049] Description of Apparatus
[0050] The key requirement for a coherent chirped LIDAR system is a
laser whose optical frequency varies with time in a precisely
controlled fashion. LIDAR systems commonly incorporate
semiconductor diode lasers and attempt to control the laser to
produce a precisely linear chirped wave. However, the principles
described in this patent can be more generally applied to any type
of laser whose output frequency can be varied by changing one or
more input parameters. These principles can also be applied to
generating nonlinear chirps.
[0051] Feedback-controlled chirped diode lasers measure the
frequency output characteristic of the laser and use the
measurement to provide closed-loop feedback to control the laser
output frequency. However, measuring and controlling the rate of
change of the laser output frequency typically requires a finite
time interval. For example, a fraction of the laser output power
may be transmitted through an unbalanced (or asymmetric)
Mach-Zehnder interferometer (MZI) and onto a photodetector. The
output frequency of the beat signal produced by the
MZI and photodetector is directly proportional to the slope of the
frequency chirp (i.e., the rate of change of frequency with time).
By comparing this beat frequency to a desired beat frequency
(corresponding to a desired frequency chirp rate), an error signal
can be generated and fed back to the laser (typically after
additional filtering). This closed-loop system can generate a
precisely controlled linear chirp, but only works well if the chirp
duration and the LIDAR measurement time are substantially longer
than the time interval needed to measure and control the slope of
the frequency chirp.
[0052] The chirp rate of high-speed coherent LIDAR systems is
dictated by the required resolution and image update rate. Some
systems may use chirp durations or measurement times less than 1
microsecond. Closed-loop laser control, as described in the
preceding paragraph, does not work for these high speed LIDAR
systems, because the propagation delays in the feedback system
(such as optical delays in the unbalanced MZI and the response time
of the laser and any loop filters) are comparable to or larger than
the chirp duration itself.
[0053] The transmitter output in a coherent LIDAR system is a
sequence of identical (at least in theory) frequency chirps that
repeat periodically, as shown in FIG. 2A. Environmental
fluctuations, such as temperature changes, that cause the laser
performance to change with time typically have a time constant that
is much larger than the chirp repetition period. Thus it is
possible to control a future chirp based on measurements taken on
one or more previous chirps, so long as the delay of the control
system is smaller than the time constant of any environmental
fluctuations.
[0054] FIG. 4 is a block diagram of a frequency modulated laser 400
suitable for use in a high speed LIDAR system. The system includes
a laser device 410 that is driven by a laser driver circuit 415
that controls the frequency of the laser output. The laser device
410 may be a diode laser, in which case the laser driver 415
controls the output frequency of the laser 410 by varying an
electrical current provided to an input of the laser 410. The laser
device 410 may be some other type of laser, in which case the laser
driver 415 may control the output frequency of the laser 410 by
varying one or more other parameters.
[0055] A portion of the output beam from the laser 410 is extracted
by an optical tap 420 and applied to an optical frequency
discriminator 425 that provides a measurement of the rate of change
of the output frequency of the laser output. The optical tap 420
may be a tap coupler as shown, an asymmetric beam splitter, or some
other device that extracts a small portion of the laser output. The
optical frequency discriminator may be, for example, an asymmetric
MZI and photodetector as shown in FIG. 4. In this case, the output
of the photodetector is a signal having a frequency proportional to
the rate of change of the laser frequency (the asymmetric MZI and
photodetector operate exactly as described for the coherent LIDAR
system 100, with the "range" to the target determined by the
difference in the length of the two legs of the MZI). When the
output of the laser 410 is a perfectly linear chirp, the signal
output from the photodetector will be a constant predetermined
frequency. When the output of the laser deviates from a linear
chirp, a corresponding deviation will occur in the frequency of the
signal output from the photodetector. A technique other than an
asymmetric MZI and photodetector may be used for the optical
frequency discriminator 425.
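The discriminator behavior described above can be sketched numerically: the two interferometer arms delay the chirp by different amounts, so a linear chirp produces a constant photodetector tone whose frequency is proportional to the chirp slope. The fiber path imbalance, group index, and chirp rates below are assumed illustrative values.

```python
# Sketch of an asymmetric-MZI frequency discriminator for a linear
# chirp.  Path imbalance, group index, and chirp rates are assumed.

C = 299_792_458.0

def discriminator_beat_hz(chirp_rate_hz_per_s: float,
                          path_imbalance_m: float,
                          group_index: float = 1.468) -> float:
    """Beat frequency out of an unbalanced MZI for a linear chirp.

    The differential delay between arms is n * dL / c; mixing the two
    arms on the photodetector yields a tone at chirp_rate * delay.
    """
    delay = group_index * path_imbalance_m / C
    return chirp_rate_hz_per_s * delay

# A perfectly linear chirp through a 20 m fiber imbalance (assumed)
# gives a constant tone; a chirp-rate error shifts the tone.
nominal = discriminator_beat_hz(1e14, 20.0)
actual = discriminator_beat_hz(1.02e14, 20.0)   # 2% chirp-rate error
print(f"nominal tone {nominal/1e6:.2f} MHz, "
      f"shift {(actual - nominal)/1e3:.1f} kHz")
```

Comparing the measured tone to the nominal one is what generates the error signal fed back to the laser.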
[0056] An error determination module 430 receives the output from
the optical frequency discriminator 425, and determines the
deviation of the laser frequency from its intended value as a
function of time during the chirp period. The error determination
may be performed by hardware and/or by a processor executing a
method, such as a Hilbert transform, implemented in software. The
error determination for the present chirp ("chirp k") is provided
to a correction determination module 435 that determines a
correction to be applied to the drive signal for a subsequent chirp
k+1 or, more generally, a future chirp period, where the delay
between the measurements and the future time period is less than
the time constant of any environmental fluctuations. The correction
module 435 may determine the correction to be applied to the future
chirp using hardware and/or a processor executing a method
implemented in software. The correction determination module 435
may determine the correction to be applied to the future chirp
based upon the error determination and/or the correction for one or
more prior chirps. For example, the correction determination module
435 may determine the correction to be applied to the future chirp
based, at least in part, on a weighted sum or weighted average of
the determined errors for two or more prior periods of the periodic
frequency versus time function. The correction determination module
435 may determine the correction to be applied to the future chirp
based, at least in part, on digital or analog filtering of the
determined errors for one or more prior periods of the periodic
frequency versus time function.
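The chirp-to-chirp correction loop of FIG. 4 can be illustrated with a toy model: the frequency error measured on the present chirp updates the drive waveform for a future chirp, and the correction converges over successive periods. The laser model and loop gain below are assumed for illustration and are not the disclosed implementation.

```python
# Toy sketch of chirp-to-chirp (state machine) correction: the error
# measured on chirp k adjusts the drive for chirp k+1.  The laser
# model and loop gain are assumed, not disclosed values.
import numpy as np

N = 256                                   # samples per chirp period
target = np.linspace(0.0, 1.0, N)         # desired linear chirp (normalized)
distortion = 0.1 * np.sin(np.pi * target) # static laser nonlinearity (toy)

def laser_response(drive: np.ndarray) -> np.ndarray:
    """Toy laser: output frequency = drive plus a fixed nonlinearity."""
    return drive + distortion

drive = target.copy()                     # initial open-loop drive
gain = 0.5                                # loop gain (assumed)
for k in range(20):                       # successive chirp periods
    error = laser_response(drive) - target   # per-sample frequency error
    drive -= gain * error                    # correct the *next* chirp

final_error = np.max(np.abs(laser_response(drive) - target))
print(f"peak residual chirp error after 20 periods: {final_error:.2e}")
```

In this toy model the residual error shrinks by the loop-gain factor each period, mirroring the requirement that the environment drift more slowly than the chirp repetition period.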
[0057] The laser 400 will subsequently be referred to as a "state
machine CHDL" because the information about the current "state" of
the chirp is fed back to influence the future state of the
chirp.
[0058] Although not shown in FIG. 4, the laser frequency error can
be determined from multiple and adaptively varying measurements
and/or filters. For example, the free spectral range of the
asymmetric MZI may be varied to measure large discrepancies when
the system is turned on (or when a parameter such as temperature is
changed), but switched to higher sensitivities to measure small
errors when the system is at or near steady state. Further, the
drive signal fed into the laser may also include open-loop
pre-distortion to compensate for known nonlinearities in the laser
characteristics and/or environmental compensation for known changes
to the environment, such as a separate measurement of the
temperature of the LIDAR system (which generates a known shift in
laser operating parameters). The use of open-loop pre-distortion
and/or environmental compensation may allow the laser 400 to reach
a steady state more quickly when the system is turned on or when an
environmental parameter such as temperature is changed.
[0059] As illustrated in FIG. 3, the time required to obtain the range (R) and range rate (dR/dt) information for a single target pixel (picture element) using a conventional FMCW approach is 2T, where the one-sided chirp duration T is the sum of the round-trip time delay τ = 2R/c and the measurement time T_M. The signal to noise ratio (SNR) required for good target detection determines the value of T_M, and T_M can be reduced by adjusting system parameters such as CHDL power or receiver collection aperture. After a time 2T, the beam is scanned to the next pixel on the target. The total number of pixels that can be measured in a given time, termed the "3D imaging rate" (3D-IR), is (1/2T) pixels per second.
[0060] It is desirable that the 3D-IR be as high as possible. For
the standard system of FIG. 1, even if the measurement time T.sub.M
is made very small, the 3D imaging rate is limited by the maximum
round trip transit time in the scene, i.e.,

3D-IR < 1/(2τ_max) = c/(4R_max). (8)
This is a limitation imposed by the finite speed of light c. For
example, for a maximum range of 300 m, the 3D imaging rate is
limited to 0.25 million pixels per second. This limitation may not
be acceptable in some applications.
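The 0.25 million pixels per second figure quoted above follows directly from the bound, as a short check shows:

```python
# Worked check of the speed-of-light limit on 3D imaging rate:
# 3D-IR < c / (4 * R_max), so a 300 m maximum range caps a
# single-beam FMCW system near 0.25 million pixels per second.

C = 299_792_458.0

def imaging_rate_limit(r_max_m: float) -> float:
    """Upper bound on pixels/second when each pixel costs 2*tau_max."""
    return C / (4.0 * r_max_m)

rate = imaging_rate_limit(300.0)
print(f"limit at 300 m: {rate/1e6:.3f} Mpixels/s")
```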
[0061] The "speed of light limit" on the 3D imaging rate can be
overcome with two improvements to the basic coherent LIDAR system
of FIGS. 1, 2, and 3. The first improvement, as incorporated into
the LIDAR system 500 of FIG. 5, is to use two CHDLs 510, 570 to
simultaneously illuminate the same pixel on the target, with the
frequencies of the CHDLs chirping in opposite directions. For
example, the frequency of the first CHDL 510 may follow a positive
sawtooth function and the frequency of the second CHDL 570 may
follow a negative sawtooth function. Alternatively, the frequency
of the first CHDL 510 may follow a triangle function and the
frequency of the second CHDL 570 may follow a triangle function
shifted 180 degrees (shifted in time by one chirp period T)
compared to the first CHDL. This enables the up and down
measurements of FIG. 3 to be performed simultaneously.
[0062] In the LIDAR system 500, elements with reference designators
from 510 to 560 have the same function as the counterpart elements
in the LIDAR system 100 and will not be further described. The
LIDAR system 500 includes second CHDL 570, which is chirped in the
opposite direction as the first CHDL 510. The first and second
CHDLs 510, 570 may be multiplexed by a beam combiner 575. For
example, the first and second CHDLs 510, 570 may have orthogonal
polarization, and the beam combiner 575 may be a polarization beam
splitter. The first and second CHDLs 510, 570 may have different
wavelengths, and the beam combiner 575 may be a dichroic beam
splitter. In any case, the beams from the first and second CHDLs
510, 570 are combined and directed to the target 560. The reflected
beams from the target are separated by a beam divider (using
polarization or wavelength as appropriate) and directed to separate
detectors 550, 585. With this approach, the time to obtain R and
dR/dt for a pixel is T, and the 3D imaging rate is (1/T) pixels per
second. Again, by minimizing the measurement time T_M, the
limitation on 3D-IR is 0.5 million pixels per second for a maximum
range of 300 meters. A 3D-IR of 0.5 million pixels per second,
while twice the rate of conventional coherent LIDAR systems, may
still not be sufficient for some applications.
[0063] The next improvement in the conventional coherent LIDAR
system results from recognition that the speed of light
fundamentally imposes a delay or latency in the measurement, rather
than a restriction on the imaging rate. The improvement to overcome
the speed of light limit is to not reset the chirp after every
pixel, but instead to use a chirp with a longer extent T⁺ that
spans a number of (scanned) pixels as shown in FIG. 6. The example
of FIG. 6 illustrates the chirp extended over three pixels, with
the reflection from each pixel measured during a respective
measurement time. The previously described state machine CHDL can
provide a precisely controlled linear chirp over a frequency range
sufficient to allow a single chirp to span hundreds of scanned
pixels. In this case, the effective 3D-IR ≈ 1/T_M, which
is solely limited by the measurement time and not the round-trip
time to the target. Different pixels are effectively measured using
chirped waves with different optical frequencies, which eliminates
any ambiguity in their range measurements, and allows faster
measurements than are possible with the conventional coherent LIDAR
systems.
[0064] As shown in FIG. 6, the returns from different pixels can
have different time delays and Doppler shifts. This does not pose a
major problem, and can be accounted for by straightforward methods.
In one approach to resolving the measurements of multiple pixels, a
detector that measures the return signal from multiple pixels as
the illumination beam is scanned across them is used in conjunction
with sliding-window Fourier transforms that resolve the ambiguity
in the data processing. In another approach, a multiplicity of
detectors is used along with the beam-scanning element. This
embodiment takes advantage of the fact that the returned beam
"lags" the illumination beam in a scanning system, creating a
spatial separation between the two. The spatial separation is
larger for farther targets than for nearer ones, and this fact
is exploited by using a multiplicity of spatially staggered
detectors where each one only measures the return signal from a
subset of target ranges.
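The sliding-window processing described above can be sketched as follows; the sample rate, window length, and per-pixel beat frequencies are all assumed values for illustration.

```python
import numpy as np

fs = 1.0e6                          # detector sample rate, Hz (assumed)
t_meas = 1.0e-3                     # per-pixel measurement window, s (assumed)
beat_freqs = [50e3, 120e3, 200e3]   # beat frequency per scanned pixel (assumed)

# Simulated detector record: the beat frequency changes as the beam moves
# from pixel to pixel under a single long chirp
n = int(fs * t_meas)
t = np.arange(n) / fs
photocurrent = np.concatenate([np.cos(2.0 * np.pi * f * t) for f in beat_freqs])

# Sliding-window Fourier transform: one FFT per measurement window resolves
# each pixel's beat frequency without ambiguity
estimates = []
for k in range(len(beat_freqs)):
    window = photocurrent[k * n:(k + 1) * n]
    peak_bin = np.argmax(np.abs(np.fft.rfft(window)))
    estimates.append(peak_bin * fs / n)

print(estimates)  # each window recovers its own pixel's beat frequency
```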
[0065] LIDAR systems impose a stringent requirement on the number
of photons that need to be collected by the coherent receiver in
order to achieve accurate measurements of range and range-rate. The
number of photons collected is determined by the transmitted laser
power, the reflectivity of the target, and the size of the receiver
collection optics. Long-range (i.e., longer than 100 meters) LIDAR
systems benefit from the use of high output powers (e.g., 100 mW to
10 W) to minimize the size and complexity of the collection optics
used in the coherent receiver. However, long-range coherent LIDAR
systems also require very narrow laser line width, which is
generally incompatible with high laser output power. Semiconductor
lasers with narrow line widths typically have output powers less
than 100 mW.
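As a rough illustration of this photon budget, the following sketch assumes a Lambertian-like target scattering the intercepted power uniformly into a hemisphere and ignores atmospheric and optical losses; every parameter value is an assumption.

```python
import math

H = 6.626e-34   # Planck constant, J*s
C = 3.0e8       # speed of light, m/s

def collected_photons(p_tx_w, range_m, reflectivity, aperture_d_m,
                      t_meas_s, wavelength_m=1.55e-6):
    # Assumed model: the target scatters the intercepted power uniformly
    # into a hemisphere; the aperture collects its solid-angle share
    aperture_area = math.pi * (aperture_d_m / 2.0) ** 2
    p_rx = p_tx_w * reflectivity * aperture_area / (2.0 * math.pi * range_m ** 2)
    photon_energy = H * C / wavelength_m
    return p_rx * t_meas_s / photon_energy

# 1 W transmitted, 300 m range, 10% reflective target, 3 mm aperture, 1 us dwell
print(collected_photons(1.0, 300.0, 0.1, 3e-3, 1e-6))  # on the order of 10 photons
```

Even at 1 W of transmitted power, only on the order of ten photons reach a 3 mm aperture per microsecond in this model, which is why long-range systems push toward high output power or larger collection optics.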
[0066] A semiconductor laser may be used in conjunction with a
semiconductor optical amplifier (SOA) or flared tapered amplifier
in order to achieve the desired higher output powers while
maintaining the required narrow line width. The output of a narrow
line width master oscillator, which may be a state machine CHDL,
may be fed into an optical amplifier, typically in a semiconductor
medium. However, the optical and spectral properties of the
oscillator can be affected by optical feedback effects and coupling
of amplified spontaneous emission (ASE) from the amplifier section
to the oscillator, which can dramatically increase the line width.
Thus, a feedback barrier may be disposed between the CHDL and the
amplifier to ensure that the line width and other properties of the
CHDL are not affected by feedback from the amplifier section. The
feedback barrier may be, for example, an optical isolator or an
attenuator. A master-oscillator power-amplifier (MOPA) laser with a
broad-area or flared/tapered amplifier can provide single-mode
operation at high (i.e., greater than 10 W) output power on a
single integrated semiconductor chip.
[0067] FIG. 7 is a schematic diagram of a narrow line width,
high-power MOPA laser 700 suitable for use in long range coherent LIDAR
systems. The MOPA laser 700 includes a feedback insensitive
oscillator 710, an optional feedback barrier 715, a preamplifier
720 and a tapered amplifier 730. The feedback insensitive
oscillator, which may be state machine CHDL as previously
described, includes a laser cavity 714 sandwiched between a
reflector 712 and a high reflectivity output mirror 716. The
frequency chirp of such an oscillator can be controlled precisely
using the techniques described above. Insensitivity to optical
feedback is achieved by increasing the reflectivity of the output
mirror 716, which ensures that most of the light fed back towards
the oscillator 710 from the preamplifier and amplifier 720, 730
does not affect the oscillation in the laser cavity 714. Increasing
the reflectivity of the output mirror effectively incorporates the
feedback barrier 715 into the output mirror 716. The increase in
the reflectivity of the output mirror 716 will reduce the power
output of the oscillator 710. The reduction of oscillator output
power is made up by the use of the preamplifier 720 between the
oscillator 710 and the flared/tapered amplifier 730 to boost the
optical power. The same effect of reducing the amount of optical
power fed backwards into the laser can also be achieved using an
optical loss element (such as a coupler/splitter or an absorbing
section) as the feedback barrier 715. This technique also reduces
the optical power output of the oscillator, which can be
compensated by the gain of the preamplifier 720.
[0068] With few exceptions, LIDAR systems can be segregated into
staring systems and scanning systems. In a staring LIDAR, the
transmitted laser beam illuminates the entire scene to be imaged.
Reflections from the entire scene are imaged onto a detector array,
with each detector element in the array corresponding to a respective
pixel in the scene. A staring coherent LIDAR must spread the
reflected and LO beam over the entire detector array, which leads
to insufficient signal-to-noise ratio unless the available laser
power is very high. Thus, coherent LIDAR systems typically use an
optical system to scan the transmitted beam sequentially over the
scene.
[0069] A coherent receiver for a coherent LIDAR system employing a
scanning transmitted beam has a fundamental architectural
challenge. A transmitted beam having a beam diameter D0 is scanned
across the scene within a wide field of view .THETA.. The scanning
is typically performed in two dimensions. However, the subsequent
figures only show scanning in one direction for ease of
representation. The same design can be easily extended to
two-dimensional scanning. The size of the transmitted beam, D0, is
determined by the size of the scanning optic and the required
angular resolution (typical values of D0 are 1-3 mm). The coherent
receiver has to modally overlap the received photons from the
target with the LO wave on a photodetector. One solution for the
coherent receiver is to use an imaging lens that images the entire
field of view .THETA. on a fixed detector or detector array, and
illuminate the entire detector area, whether it is a single large
detector or a detector array, with the LO at all times. However,
since only a fraction of the field of view is imaged at any given
time (this fraction can be 1/10.sup.5 or smaller), this leads to an
inefficient use of LO power and detector area, and can result in
very poor signal to noise ratio due to LO shot noise. Thus, the
receiver in a coherent LIDAR is typically scanned along
with the transmitted beam. In other words, the LO beam needs to be
mode-matched with the return beam from the scene as different parts
of the field of view are illuminated.
[0070] The effective collection aperture of the coherent receiver,
D1, is dictated by the requirement to collect enough photons from
the target to make a high-SNR measurement. With a sufficiently
high-power laser, D1=D0, which is to say the transmitted beam
diameter and the receiver collection aperture may be the same. In
this case, the LIDAR optical system can be a simple "cat's-eye"
configuration (so-called because the transmitted and reflected
light propagate in opposing directions along the same optical path,
as is the case with light reflected from the eyes of a cat) where
the return beam from the target retraces the optical path of the
transmitted beam, as in the coherent LIDAR system 800 shown in FIG.
8.
[0071] In the LIDAR system 800, elements with reference designators
from 810 to 855 have the same function as the counterpart elements
in the LIDAR system 100 and will not be further described. The
laser beam (other than the fraction extracted for the LO) is output
from the circulator 830 and expanded/collimated by lens 870 to form
an output beam having diameter D0. The lens 870, and all lenses in
subsequent drawings, are depicted as single refractive lens
elements but may be any combination of refractive, reflective,
and/or diffractive elements to perform the required function. The
output beam impinges upon scanning mirror 875, which can be rotated
about an axis 880. The scanning mirror 875 may be, for example, a
MEMS (micro-electro-mechanical system) mirror capable of high speed
scanning. Rotation of the scanning mirror causes the output beam to
scan across the scene through a field of view .THETA..
[0072] Light reflected from the target impinges upon the scanning
mirror 875 and is directed to the optic 870. Optic 870 captures
(i.e., focuses) the reflected light, which is directed to the
circulator 830. The captured reflected light exits the circulator
830 and is combined with the LO beam and detected as previously
described. The diameter D0 of the transmitted beam and the
collection aperture of the receiver are defined by the optic 870.
Increasing the diameter of the transmitted beam correspondingly
increases the diameter of the receiver aperture (and thus the
number of received photons) at the expense of increasing the size
of the optic 870 and the scanning mirror 875. The requirement for
high speed scanning over the field of view limits the allowable
size of the mirror. The transmitted beam diameter of scanning LIDAR
systems is typically limited to 1 mm to 3 mm.
[0073] In the absence of a very high-power laser, the necessary
diameter of the receiver aperture D1 is preferably larger than the
transmitted beam diameter D0, and the challenge is to ensure mode
overlap between the received beam and the LO beam at the detector
with sufficient signal-to-noise ratio.
[0074] FIG. 9 is a schematic diagram of an optical system 900 for a
scanning coherent LIDAR system in which a diameter D1 of a receiver
collection aperture may be substantially larger than a diameter D0
of a transmitted beam 930. A first lens 910 receives light from a
CHDL and forms a collimated beam with a diameter D0. The collimated
beam impinges on a scanning mirror 915 which is rotated to cause
the beam to scan over a total scan angle .THETA.. A second lens 940
having a diameter D1, where D1>D0, receives reflected light 935
from a target scene and forms an image of the target scene on a
detector array or a single detector. In either case, an area of the
detector array or single detector is equal to or larger than the
scene image formed by the second lens 940. The use of a detector
array instead of a single detector ensures that the detector has a
low enough capacitance to achieve the required bandwidth. Each
detector in a detector array corresponds to a pixel in the scene
and only one detector in the array is active for any given scan
angle. At any given instant, the reflected light 935 received by
the second lens 940 originates at a point in the target scene that
is illuminated by the transmitted beam 930. Thus, as the angle of
the transmitted beam 930 is changed, the received light is focused
to a spot that moves laterally in the focal plane of the second
lens 940 (i.e., across the plane of the detector 945). In systems
with two-dimensional scanning of the transmitted beam, the spot of
received light will scan in two dimensions across the detector 945.
Since the target is illuminated with a smaller beam than the
collection optic, the spot size at the plane of the detector for a
given angle of illumination is larger than the resolution-limited
spot size of the lens.
[0075] To achieve coherent detection of the reflected light, it is
necessary that the LO beam and the received light be superimposed
at the detector. It is possible to achieve this with an LO beam
that illuminates the entire detector 945. Since the received light
forms a spot that scans across the detector 945 as the transmitted
beam is scanned across the scene, it is advantageous for the LO
beam to instead scan across the detector 945 in a corresponding
manner. To this end, a portion of the transmitted beam is extracted
by a tap beam splitter 920 to form the LO beam 925. The LO beam is
then combined with the received light 935 by a second beam splitter
950. Since the LO beam 925 is extracted from the transmitted beam
930 as it is scanned, the angle of the LO beam changes in
conjunction with the scanning of the transmitted beam. The tap beam
splitter 920 and beam splitter 950 effectively form a corner
reflector such that the LO beam is parallel to the received light
when the LO and received beams are combined. Thus, the second lens
940 focuses the LO beam to a spot that is superimposed on the spot of
received light at the detector 945.
[0076] FIG. 10 is a schematic diagram of another optical system
1000 for a scanning coherent LIDAR system in which a diameter D1 of
a receiver collection aperture may be substantially larger than a
diameter D0 of a transmitted beam 1030. A first lens 1010 receives
light from a CHDL and forms a collimated beam with a diameter D0.
The collimated beam impinges on a scanning mirror 1015 which is
rotated to cause the beam to scan over a total scan angle .THETA..
A second lens 1040 having a diameter D1, where D1>D0, receives
reflected light 1035 from a target scene and forms an image of the
target scene on a detector array or a single detector as previously
described. At any given instant, the reflected light 1035 received
by the second lens 1040 forms a spot that scans across the detector
1045 as previously described in conjunction with FIG. 9.
[0077] A third lens 1060 and a fourth lens 1065 form a 1:1
telescope that relays the transmitted beam 1030 from the scanning
mirror 1015 to the scene. A portion of the transmitted beam is
extracted by a tap beam splitter 1020 between the third and fourth
lenses 1060, 1065 to form the LO beam 1025. The LO beam and the
received light 1035 are combined by a second beam splitter 1050. A
relay lens 1070 in the path of the LO beam focuses the LO beam to a
spot at the plane of the detector 1045. When the focal lengths of
the third lens 1060 and the fourth lens 1065 are equal to the
focal length of the second lens 1040, the focused spot of the LO
beam will have the same size as the focused spot of received
light.
[0078] While FIG. 9 and FIG. 10 illustrate coherent LIDAR systems
using separate optical paths for the transmitter and receiver, a
system with a single set of optics in the cat's-eye configuration
minimizes the complexity of the photodetector and its associated
electronics. However, to allow the use of a large
receiver aperture, the illumination beam needs to be transformed
from a first beam diameter D0 (which can be constrained by the
practical size of the scanning mirror) to a second beam diameter
D1>D0 (and vice versa for the reflected light returning through
the same optical path) without compromising the total field of view
.THETA..
[0079] The number of optical modes, or unique angular positions, a
beam can assume within a field of view is determined by the angular
resolution of the beam. A beam with diameter D0 has an angular
resolution .about..lamda./D0 (ignoring constant scaling factors of
order unity throughout this discussion). Therefore a beam with
diameter D0 can be scanned to fill out a certain number of optical
modes N0.about..THETA.*D0/.lamda. within the field of view .THETA..
When a scanned beam of diameter D0 is optically transformed into a
new beam with a diameter D1, the angular resolution of the new beam
will be .about..lamda./MD0, where M=D1/D0 is the magnification
factor. However, the total number of available optical modes
remains constant when the diameter of the beam is magnified. A
simple telescope that magnifies the beam diameter from D0 to D1
will reduce the total field of view from .THETA. to .THETA./M to
conserve the number of optical modes.
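The mode-counting argument above can be checked numerically (scaling constants of order unity omitted, as in the text); the wavelength, beam diameter, field of view, and magnification below are assumed values.

```python
wavelength = 1.55e-6   # m (assumed)
d0 = 2e-3              # initial beam diameter, m (assumed)
theta = 0.5            # total field of view, rad (assumed)
m = 5                  # magnification factor M = D1/D0 (assumed)

n0 = theta * d0 / wavelength           # modes available from the D0 beam
res_magnified = wavelength / (m * d0)  # finer resolution of the magnified beam
fov_magnified = theta / m              # a simple telescope shrinks the FOV

# The mode count is conserved: (theta/M) / (lambda/(M*D0)) equals N0
print(n0)
print(fov_magnified / res_magnified)
```

Both printed values agree: magnifying the beam sharpens the angular resolution by M and shrinks the field of view by M, leaving N0 unchanged.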
[0080] After the beam diameter is magnified to D1, the scanner will
provide N0 modes, with an angular resolution of .lamda./MD0,
distributed over a field of view of .THETA./M. To recover the
original field of view, the N0 modes must be spaced apart with an
angular distance between adjacent modes of .lamda./D0, such that
the N0 modes are sparsely distributed across the field of view
.THETA.. This means that only a portion of each scene resolution
element is illuminated as the transmitted beam is scanned over the
scene. Practically this means that the LIDAR system images the same
total field of view in the same measurement time by trading off
scene pixel fill factor for an increase in the received optical
power.
[0081] A mode transformer may be used to transform a set of N0
closely spaced modes spanning a field of view .THETA./M into a set of N0
modes that sparsely sample a field of view .THETA.. For example, a
mode transformer can be implemented by coupling each available mode
into a respective single mode optical fiber, and then moving the
fibers apart from each other to "sparsify" the modes. A collimating
lens can then be used to convert the light from each optical fiber
into a D1-sized beam.
[0082] Another practical embodiment of a mode converter uses a
microlens array. FIG. 11 is a schematic diagram of an optical
system 1100 for a scanning coherent LIDAR system in which the
available modes are distributed sparsely over the entire field of
view .THETA.. A first lens 1110 receives light from a circulator,
such as the circulator 130 in FIG. 1, and forms a collimated beam
with a diameter D0. The collimated beam is incident on a scanning
mirror 1115 which rotates to scan the collimated beam through a
scan angle corresponding to the field of view .THETA.. A second
lens 1120 receives the scanning beam from the scan mirror 1115 and
creates a moving array of spots at its focal plane. A microlens
array (MLA) 1130 is placed at this focal plane, with each microlens
matched to the spot size formed by the scanning beam passing
through the second lens 1120. Each element of the microlens array
1130 then converts (further focuses) the spot incident on it,
thereby creating a sparse array of smaller spots at an image plane
1135 of the MLA. A third lens 1140 then collimates light from the
array of smaller spots into a sparse, but now fully angularly-swept
beam of larger size D1. The effective speeds (i.e., focal length
divided by diameter, or f-number) of the microlenses and the third
lens are matched. Reflected light received from the scene follows
the reverse path to return to the circulator. Thus, the optical
system 1100 creates an array of spots that samples the full angular
field of view .THETA. at the angular resolution of the initial beam
of size D0, while collecting more target photons corresponding to
the larger beam size D1.
[0083] When the focal lengths of the second lens 1120 and the third
lens 1140 are equal, the field of view .THETA. will be equal to
the beam scan angle. When the focal lengths of the second and third
lenses 1120, 1140 are unequal the field of view can be expanded or
compressed compared to the beam scan angle.
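The focal-length relationship described above amounts to an angular magnification of f.sub.second/f.sub.third, sketched below with assumed focal lengths.

```python
def output_fov(scan_angle_rad, f_second, f_third):
    # Spot position at the shared focal plane: x = f_second * angle_in.
    # The third lens recollimates, so angle_out = x / f_third (small angles).
    return scan_angle_rad * f_second / f_third

print(output_fov(0.5, 50e-3, 50e-3))  # equal focal lengths preserve the FOV
print(output_fov(0.5, 50e-3, 25e-3))  # shorter third lens expands the FOV
```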
[0084] The optical system 1100 of FIG. 11 includes a wide-field of
view third lens 1140 to form a scanning beam of diameter D1 and to
collect photons from the full field of view. FIG. 12 is a schematic
diagram of an optical system 1200 in which the large lens 1140 of
FIG. 11 is replaced by three smaller lenses 1240A, 1240B, 1240C
that each create D1-sized beams that only scan over one-third of
the total field of view .THETA.. Folding mirrors 1250A, 1250B,
1250C, 1250D are used to "combine" the fields-of-view of the three
lenses 1240A, 1240B, 1240C to achieve the full angular field of
view .THETA.. Other elements of the optical system 1200 have the
same function as the corresponding elements in the optical system
1100. The optical system 1200 has the advantage of using smaller
optics and reducing overall system complexity by taking advantage
of the fact that the size and complexity of optical elements tend
to scale nonlinearly with the field of view. A different number of
smaller lenses (rather than three), or different relay optics
(other than folding mirrors), can be used to achieve the same
desired result. In addition, by placing the folding mirrors 1250B,
1250C before the microlens array 1230, the large microlens array
1230 may be replaced by three smaller microlens arrays to achieve
the same desired result, using appropriate relay optics.
[0085] Coherent LIDAR measures the amplitude and phase of the
electromagnetic field reflected from the target. This reflected
field is strongly influenced by surface irregularities on the
target within the illuminated spot. These irregularities result in
random variations in the amplitude and phase of the reflected
field, commonly known as speckle. In most practical cases, the
intensity of the reflected field has an exponential probability
distribution and the phase has a uniform probability distribution.
This means that even a bright target can occasionally have a low
intensity and may be below the detection threshold of the LIDAR
system. The spatial scale of the speckle variations is given by the
resolution of the receiver optics. For example, in a conventional
coherent LIDAR system, if the angular resolution of the LIDAR
system is 0.1 degrees, the amplitude and phase of the speckle
pattern change every 0.1 degrees.
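The speckle statistics described above can be reproduced with a simple random-phasor simulation; the number of scatterers and the detection threshold are assumed values for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reflected field modeled as the sum of many unit-amplitude phasors with
# random phases from surface irregularities within the illuminated spot
n_samples, n_scatterers = 100_000, 64
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, n_scatterers))
field = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_scatterers)
intensity = np.abs(field) ** 2   # approximately exponential, mean ~1

# Fraction of returns from a nominally bright target that still fall below
# a -10 dB detection threshold (expected ~1 - exp(-0.1), about 9.5%)
print(np.mean(intensity < 0.1))
```

This illustrates the claim above: with exponentially distributed intensity, roughly one in ten returns from even a bright target fades below a threshold set 10 dB under the mean.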
[0086] The probabilistic nature of target reflections also occurs
in RADAR systems, and techniques for RADAR detection of so-called
"fluctuating targets" have been developed. These techniques rely on
multiple RADAR measurements of the target to overcome the target
strength (and phase) fluctuations. These measurements are then
coherently or incoherently "integrated" to overcome the negative
effects of target fluctuations. Integration refers to the process
of combining multiple measurements to extract range and/or Doppler
measurements, and can provide a higher probability of detection of
the target for a given signal to noise ratio of the measurements.
The process of incoherent integration ignores the optical phase of
the reflected field, whereas coherent integration utilizes the
phase information. The process of integration works best if the
multiple measurements being integrated are in some way uncorrelated
from each other, so that the target fluctuations can be "averaged"
out. Speckle in a LIDAR system is comparable to a fluctuating
target in a RADAR system, and similar mathematical techniques can
be applied to mitigate the effects of speckle. However, the goal of
a high-speed coherent LIDAR system is to obtain range and Doppler
information from every scene pixel in a single scan of the field of
view. Thus integration over multiple scans, as used in RADAR
systems, cannot be directly applied to a LIDAR system.
[0087] The key to speckle mitigation in LIDAR systems is to obtain
multiple measurements over each scene pixel during each scan of the
field of view, and then coherently or incoherently integrate
(combine) these measurements to mitigate target fluctuations
(speckle). FIG. 13A and FIG. 13B illustrate two approaches to
obtain the multiple measurements. The common idea behind both
approaches is to divide the pixel into N subpixels, perform
separate LIDAR measurements on the subpixels, and coherently or
incoherently integrate these measurements to provide a composite
measurement for the pixel.
[0088] In the first approach shown in FIG. 13A, each scene pixel
1310 is partitioned into N subpixels 1320, where N is an integer
greater than one, that are measured sequentially. In the example of
FIG. 13A, N=3. Each subpixel is measured using an illumination beam
that is scanned across the pixel during the pixel measurement time
T.sub.M (as defined in FIG. 1). The angular size of the
illumination beam determines the subpixel size. A one-dimensional
scan is shown in FIG. 13A for simplicity, but other scanning
patterns are possible. N separate LIDAR measurements are
sequentially performed over these subpixels (each measurement takes
time T.sub.M/N), and these measurements are coherently or
incoherently integrated to determine the range and/or range-rate of
the pixel. A beam with a narrower angular size than the desired
pixel size can be implemented in multiple ways, e.g., by using a
scanning element with a larger aperture in the LIDAR system, or by
using a magnifying optic such as a telescope or a diffraction
grating to increase the size of the beam while using a scanner with
a small aperture. Note that the (near-field) size of the beam at
the LIDAR transmitter/receiver is inversely proportional to the
angular size of the beam in the far field (i.e., at the
target).
[0089] Alternatively, the LIDAR measurements on N subpixels within
a pixel can be performed simultaneously and in parallel, as
illustrated in FIG. 13B. In this example, N=4. The N subpixels are
simultaneously illuminated and imaged on different photodetectors
to obtain N different LIDAR measurements (range and/or range-rate).
In this case, each measurement is performed over the full pixel
measurement time T.sub.M. The N measurements are coherently or
incoherently integrated to provide a composite range and/or
range-rate of the pixel.
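The benefit of incoherent integration can be illustrated with simulated speckle intensities; the subpixel count, threshold, and speckle model below are assumptions for illustration, not parameters of the described system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Speckle intensities modeled as independent exponential draws (mean 1)
# for each of N subpixels within a pixel
n_pixels, n_sub = 50_000, 4
sub = rng.exponential(scale=1.0, size=(n_pixels, n_sub))

single = sub[:, 0]             # one measurement per pixel, no integration
integrated = sub.mean(axis=1)  # N subpixel measurements, incoherently averaged

threshold = 0.2  # assumed detection threshold
print(np.mean(single < threshold))      # ~18% of bright pixels still missed
print(np.mean(integrated < threshold))  # about 1% missed after integration
```

Averaging the four largely uncorrelated subpixel measurements sharply reduces the probability that a bright pixel fades below the detection threshold.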
[0090] A hybrid of the techniques shown in FIG. 13A and FIG. 13B is
also possible. For example, a pixel may be divided into four
subpixels (as in FIG. 13B) which are scanned in two horizontal (as
shown in FIG. 13A) steps by two vertically-offset beams.
[0091] FIG. 14 is a schematic diagram of a coherent LIDAR system
1400 that performs simultaneous measurements of N subpixels within
each pixel. The system 1400 is similar to the LIDAR system 800 of
FIG. 8, but with N beams propagating in parallel through much of the
system.
[0092] The output of a CHDL 1410 is divided into N parallel beams.
A tap coupler 1420 is used to extract a small fraction of each beam
as a respective LO beam. The majority (typically >90%) of each
CHDL beam is directed toward a target via an N-channel circulator
1430. The N-channel circulator may be, for example, implemented
using discrete optical elements (e.g. known combinations of
polarizing beam splitters and a Faraday rotator) such that N beams
can pass through the circulator in parallel. A lens 1470 converts
the N output beams 1435 from the circulator 1430 into N collimated
output beams at slightly different angles such that the N beams
illuminate N subpixels within a target pixel, as shown in FIG. 13B.
The N output beams are scanned across a field of view by a scanning
mirror 1475.
[0093] The lens 1470 collects light reflected from the target (not
shown) and forms the reflected light into N received light beams
1440. The received light beams are separated from the output beams by
the circulator 1430 and combined with respective LO beams by N
couplers or beamsplitters 1445. The combined LO waves and the
received light beams from the target are respectively incident on N
photodetectors 1450. The N photodetectors 1450 provide respective
measurements indicative of the range and range rate of respective
subpixels, as previously described. A processor (not shown)
coherently or incoherently combines the N subpixel measurements to
provide a composite range and/or range rate measurement for the
pixel.
[0094] FIG. 15 is a block diagram of a scanning optical system 1500
for a LIDAR that performs simultaneous measurements of N
subpixels within each pixel. The optical system 1500 is similar to
the optical system 1100 of FIG. 11, but with N beams propagating in
parallel through much of the system.
[0095] A first lens 1510 receives N beams from a circulator, such
as the circulator 1430 in FIG. 14, and forms N collimated beams
with diameter D0. The collimated beams are incident on a scanning
mirror 1515 at slightly different angles (not shown). The scanning
mirror 1515 rotates to scan the N collimated beams through a scan
angle corresponding to the field of view .THETA.. A second lens
1520 receives the N scanning beams from the scan mirror 1515 and
creates a moving array of spots at its focal plane. A microlens
array (MLA) 1530 is placed at this focal plane. The N beams from
the circulator are configured such that each beam illuminates a
single microlens in the MLA, and N microlenses are together
illuminated. As a result, N smaller spots are formed at the image
plane 1535 of the MLA. These N spots are formed into N slightly
offset beams by the third lens 1540 that simultaneously illuminate
N subpixels of the scene. Alternatively, a single beam of a
different size from the circulator can be used instead of the N
beams to simultaneously illuminate N microlenses after passing
through the first lens 1510, scanning element 1515, and second
lens 1520.
[0096] The reflected light from these N subpixels is received by
the third lens and propagates through the scanning optical system
1500 in the reverse direction, which results in N separate beams
returned to the circulator, as was the case in the LIDAR system
1400 of FIG. 14. As previously described in conjunction with FIG.
14, the N beams of reflected light are then combined with
respective LO beams. The combined beams are respectively incident
on N photodetectors that provide respective measurements indicative
of the range and range rate of respective subpixels. A processor
(not shown) coherently or incoherently combines the N subpixel
measurements to provide a composite range and/or range rate
measurement for the pixel.
[0097] A high-resolution coherent LIDAR or imaging system has two
important components: i) a swept-frequency laser or "chirped laser"
with a large frequency sweep range B to provide high axial
resolution at the range of interest; and ii) a technique to
translate the one-pixel (fixed (x,y)) measurement laterally in two
dimensions to obtain a full 3-D image.
[0098] Coherent LIDAR systems for 3-D imaging typically rely on the
scanning of a one-pixel measurement across the scene to be imaged.
Examples of scanning LIDAR systems were previously shown in FIGS.
8, 9, 10, 11, 12, 14 and 15.
[0099] An alternative to scanning LIDAR systems is a staring system
consisting of a swept frequency laser and a detection technique
that is capable of capturing or measuring the 3-D image of a scene,
or a multi-pixel portion of a scene concurrently. Such a system has
the potential to be inexpensive, robust, and contain no moving
parts.
[0100] FIG. 16A is a block diagram of one pixel of the detection
side of a staring coherent LIDAR. A local oscillator beam having
beam power of P.sub.LO and reflected light from a target pixel
having power Pw are combined by a beam splitter. A balanced
detector pair 1610 may be used to obtain a high dynamic range. The
output of the balanced detector 1610 is amplified using a
transimpedance amplifier (TIA) 1615, digitized using an
analog-to-digital converter (ADC) 1620, and the spectrum of the
photocurrent signal is calculated by a Digital Signal Processor
(DSP) 1625 using a Fourier transform. The DSP 1625 may be on the
chip containing the TIA 1615 and ADC 1620, or external to the chip.
While the basic output from a measurement of a single pixel is two
values corresponding to the strength (reflectivity) and range of
the target at the pixel, further data processing to enhance the
signal (e.g., thresholding, filtering, interpolation, etc.) is also
possible.
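The single-pixel processing chain described above (beat signal, FFT, peak detection, range) can be sketched as follows; the chirp slope, sample rate, record length, and target range are assumed example values.

```python
import numpy as np

C = 3.0e8
chirp_slope = 1.0e12   # Hz/s, e.g. a 1 GHz sweep over 1 ms (assumed)
fs = 100e6             # ADC sample rate, Hz (assumed)
n = 4096               # samples per pixel measurement (assumed)

target_range = 150.0   # m (assumed)
f_beat = chirp_slope * 2.0 * target_range / C    # 1 MHz beat frequency
t = np.arange(n) / fs
photocurrent = np.cos(2.0 * np.pi * f_beat * t)  # idealized detector output

# Fourier transform of the photocurrent; the peak bin gives the beat
# frequency, which maps back to range through the chirp slope
spectrum = np.abs(np.fft.rfft(photocurrent))
f_peak = np.argmax(spectrum) * fs / n
range_est = f_peak * C / (2.0 * chirp_slope)
print(range_est)  # close to 150 m, within one FFT bin of resolution
```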
[0101] This single pixel detector described above is extended to an
N-element array (one-dimensional or two-dimensional) to implement
the full-field 3-D imaging system as shown in FIG. 16B and
[0102] FIG. 16B is a block diagram of a receiver array for
full-field 3-D imaging where K pixels of the scene are
simultaneously imaged. Two aligned detector arrays 1660A, 1660B are
used to perform the optical balancing. Each detector array contains
K detectors to perform simultaneous measurements on K pixels of the
scene. The detector arrays 1660A, 1660B may be located on separate
chips/wafers or on a single wafer (using an external flip mirror or
equivalent to align the two outputs of the beam splitter on the two
detector arrays). Alternatively, other balancing approaches may be
implemented with a single photodetector array, e.g., using phase
shifters, or polarization optics and pixelated polarizers, on
adjacent pixels to introduce a .pi.-phase shift on the LO or the
target beam. The addition of a .pi.-phase shift in one of the LO or target
arms simulates a balanced detection scheme. An unbalanced single
detector array can also be used to perform the measurement, but
this may limit the dynamic range of the system. The dynamic range
can be improved by subtracting the common mode current from the
photodetector using an external current source.
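The benefit of the balanced (or simulated-balanced) schemes above can be illustrated with a toy model: the two beam-splitter outputs carry interference terms that are .pi. out of phase, while the large LO power and its intensity noise appear identically on both detectors, so subtraction cancels the common mode and doubles the signal. All values below are illustrative assumptions:

```python
import numpy as np

# Toy model of balanced detection (all values are assumptions).
rng = np.random.default_rng(0)
t = np.linspace(0, 1e-5, 1000)
beat = 1e-3 * np.cos(2 * np.pi * 1e6 * t)          # small interference term
common = 1.0 + 0.05 * rng.standard_normal(len(t))  # LO DC level + common-mode noise

i_plus = common + beat    # detector 1 output
i_minus = common - beat   # detector 2 output (pi-shifted interference term)

i_balanced = i_plus - i_minus   # = 2*beat; common mode cancels exactly
```

Subtracting the common-mode current with an external source, as described above for the unbalanced single-array case, approximates the same cancellation without a second detector.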
[0103] The output of the detector arrays (whether K single
detectors or 2K detectors to form K balanced pairs) is amplified
with an array of TIAs 1665, digitized with an array of ADCs 1670,
and one or more signal processors 1675 performs the Fourier
transforms (typically using the FFT algorithm), the detection
algorithms, and the input/output communication. The output of the
receiver array is typically two "images" (i.e., two values per
pixel) corresponding to the depth map and the intensity of the
reflections. Measuring the scene with alternate up- and down-chirps
will also allow measuring the range rate of each scene pixel.
Further data processing algorithms may also be implemented by the
smart detector array or external processors.
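The up/down-chirp measurement of range rate mentioned above follows standard FMCW arithmetic: for an approaching target, the Doppler shift subtracts from the beat frequency on one chirp direction and adds on the other, so the sum of the two beat frequencies isolates the range term and the difference isolates the Doppler term. The parameter values below (wavelength, slope, beat frequencies) are illustrative assumptions:

```python
# Up/down-chirp range and range-rate arithmetic (values are assumptions).
c = 3e8                # speed of light, m/s
wavelength = 1550e-9   # assumed laser wavelength, m
slope = 1e13           # chirp slope, Hz/s (e.g., 1 GHz over 100 us)

# Beat frequencies measured on alternate chirps for one scene pixel.
f_up = 1.9e6     # Hz, up-chirp beat frequency
f_down = 2.1e6   # Hz, down-chirp beat frequency

f_range = (f_up + f_down) / 2     # Doppler contribution cancels
f_doppler = (f_down - f_up) / 2   # range contribution cancels
R = c * f_range / (2 * slope)     # per-pixel range, m
v = f_doppler * wavelength / 2    # per-pixel range rate, m/s
```

Applied per pixel across the receiver array, this yields the depth map, the intensity image, and a range-rate map from a pair of alternate chirps.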
[0104] The various electronic components (detectors, TIAs, ADCs and
signal processors) may be fully integrated on the same wafer
substrate, e.g., silicon electronics with silicon or
silicon-germanium detectors, or III-V semiconductor detectors and
electronics. Alternatively, hybrid integration may be used where
dies with different functionalities (detectors, amplifiers,
mixed-mode circuits etc.) are incorporated on the substrate using
pick-and-place techniques. Different functional elements may be
spatially separated as shown schematically in this figure, or may
be implemented as "composite pixels" that incorporate these
different functional elements in close proximity to each other as
shown in FIG. 17.
[0105] FIG. 17 is a block diagram of an alternative implementation
of a receiver array 1700 where the detectors 1710, TIA 1715, ADC
1720, and other electronic components are integrated,
monolithically or using hybrid integration, in close proximity to
form a series of functional unit cells 1705. As shown in FIG. 17,
two adjacent detectors 1710 are used for balancing. This approach
can be readily modified to use a single detector per unit cell, or
the unit cell may only contain some functional blocks, with the
remaining functions being performed separately.
CLOSING COMMENTS
[0106] Throughout this description, the embodiments and examples
shown should be considered as exemplars, rather than limitations on
the apparatus and procedures disclosed or claimed. Although many of
the examples presented herein involve specific combinations of
method acts or system elements, it should be understood that those
acts and those elements may be combined in other ways to accomplish
the same objectives. With regard to flowcharts, additional and
fewer steps may be taken, and the steps as shown may be combined or
further refined to achieve the methods described herein. Acts,
elements and features discussed only in connection with one
embodiment are not intended to be excluded from a similar role in
other embodiments.
[0107] As used herein, "plurality" means two or more. As used
herein, a "set" of items may include one or more of such items. As
used herein, whether in the written description or the claims, the
terms "comprising", "including", "carrying", "having",
"containing", "involving", and the like are to be understood to be
open-ended, i.e., to mean including but not limited to. Only the
transitional phrases "consisting of" and "consisting essentially
of", respectively, are closed or semi-closed transitional phrases
with respect to claims. Use of ordinal terms such as "first",
"second", "third", etc., in the claims to modify a claim element
does not by itself connote any priority, precedence, or order of
one claim element over another or the temporal order in which acts
of a method are performed, but is used merely as a label to
distinguish one claim element having a certain name from another
element having a same name (but for use of the ordinal term) to
distinguish the claim elements. As used herein, "and/or" means that
the listed items are alternatives, but the alternatives also
include any combination of the listed items.
* * * * *