U.S. patent application number 17/162358, for a movable object for performing real-time mapping, was published by the patent office on 2021-11-04. The applicant listed for this application is DJI Technology, Inc. The invention is credited to Joshua ACOSTA, Blake KARWOSKI, Arjun Sukumar MENON, and Fernando PABLO QUEVEDO.
United States Patent Application 20210341614
Kind Code: A1
Application Number: 17/162358
Family ID: 1000005766197
Publication Date: November 4, 2021
Inventors: ACOSTA, Joshua; et al.
MOVABLE OBJECT FOR PERFORMING REAL-TIME MAPPING
Abstract
Techniques are disclosed for real-time mapping in a movable
object environment. A real-time mapping system can include at least
an unmanned aerial vehicle (UAV), comprising a propulsion system, a
main body coupled to the propulsion system and a payload assembly
coupled to the main body via a mounting assembly, wherein the
payload assembly includes a payload comprising a scanning sensor
and a positioning sensor, the payload assembly configured to orient
the scanning sensor at a plurality of angles relative to the main
body.
Inventors: ACOSTA, Joshua (Palo Alto, CA); MENON, Arjun Sukumar (San Jose, CA); PABLO QUEVEDO, Fernando (Palo Alto, CA); KARWOSKI, Blake (Palo Alto, CA)

Applicant: DJI Technology, Inc., Burbank, CA, US

Family ID: 1000005766197

Appl. No.: 17/162358

Filed: January 29, 2021
Related U.S. Patent Documents

Application Number    Filing Date
PCT/US2019/058219     Oct 25, 2019
62752273              Oct 29, 2018

(The present application, 17/162358, is a continuation of PCT/US2019/058219, which claims the benefit of provisional application 62752273.)
Current U.S. Class: 1/1

Current CPC Class: G05D 1/0094 20130101; B64D 47/00 20130101; G01S 17/89 20130101; B64C 2201/123 20130101; G05D 1/101 20130101; B64C 39/024 20130101; G01S 7/4813 20130101

International Class: G01S 17/89 20060101 G01S017/89; G01S 7/481 20060101 G01S007/481; G05D 1/10 20060101 G05D001/10; G05D 1/00 20060101 G05D001/00; B64D 47/00 20060101 B64D047/00; B64C 39/02 20060101 B64C039/02
Claims
1. An unmanned aerial vehicle (UAV), comprising: a propulsion
system; a main body coupled to the propulsion system; and a payload
assembly coupled to the main body via a mounting assembly, wherein
the payload assembly includes a payload comprising a scanning
sensor and a positioning sensor, the payload assembly configured to
orient the scanning sensor at a plurality of angles relative to the
main body.
2. The UAV of claim 1, wherein the payload assembly comprises a
plurality of payload support brackets configured to couple the
payload assembly to the mounting assembly, the plurality of payload
support brackets configured to provide the plurality of angles
relative to the main body at which the payload can be oriented.
3. The UAV of claim 2, wherein the payload assembly comprises a
pivot bracket configured to couple the scanning sensor and the
positioning sensor, the pivot bracket configured to be aligned with
the plurality of angles provided by the plurality of payload
support brackets for changing a scanning angle of the scanning
sensor.
4. The UAV of claim 3, wherein the scanning sensor is mounted to a
side of the pivot bracket and the positioning sensor is mounted to
another side of the pivot bracket.
5. The UAV of claim 1, wherein the plurality of angles includes at least angles of 0 degrees, 35 degrees, or 90 degrees relative to the main body.
6. The UAV of claim 2, wherein the plurality of payload support
brackets are connected to the mounting assembly using a dovetail
quick release connection.
7. The UAV of claim 6, wherein the plurality of payload support
brackets are connected to a base plate of the mounting assembly,
the base plate having a plurality of dovetail grooves.
8. The UAV of claim 7, wherein the base plate is coupled to the
mounting assembly using a plurality of dampers.
9. The UAV of claim 19, wherein the landing gear bracket includes compressible materials configured to absorb or dissipate energy of an impulse force when landing.
10. The UAV of claim 9, wherein each landing gear assembly is
coupled to the main body on an arm of the main body adjacent to a
motor heat sink.
11. The UAV of claim 1, wherein the mounting assembly comprises a
dampened plate assembly configured to couple the payload assembly
to the main body, the dampened plate assembly comprising a first
plate coupled to the main body and a second plate coupled to the
first plate via a plurality of dampers, wherein a fastener is
inserted through the dampened plate assembly and the payload
assembly.
12. The UAV of claim 11, wherein a reflective member is coupled to
the mounting assembly via the fastener, wherein the reflective
member is positioned to overlap with at least a portion of a field
of view (FOV) of the scanning sensor to cause the FOV of the
scanning sensor to be broadened to include a second FOV comprising
a reflection off of the reflective member.
13. The UAV of claim 12, wherein the reflective member includes a
mirror or a plate including a reflective coating.
14. The UAV of claim 1, wherein the mounting assembly includes a
plate assembly, the plate assembly comprising a first plate coupled
to the main body via a plurality of expansion brackets and a second
plate coupled to the first plate via a plurality of dampers,
wherein a portion of the payload assembly is inserted through a
portion of the first plate, and wherein the portion of the payload
assembly is coupled to the second plate using a fastener.
15. The UAV of claim 14, wherein the plurality of expansion
brackets includes at least two pairs of expansion brackets coupled
to the main body, wherein each pair of expansion brackets are
coupled to each other via an alignment bracket to maintain a
parallel back plane of the pair of expansion brackets.
16. The UAV of claim 1, wherein the scanning sensor is a light
detection and ranging (LiDAR) sensor.
17. The UAV of claim 16, wherein the LiDAR sensor implements a
scanning pattern, the scanning pattern including at least one of a
spiral pattern or a flower pattern.
18. The UAV of claim 1, wherein the scanning sensor is fixed at one
of the plurality of angles during a scanning mission.
19. An unmanned aerial vehicle (UAV), comprising: a propulsion
system; a main body coupled to the propulsion system; a payload
assembly coupled to the main body via a mounting assembly, wherein
the payload assembly includes a payload comprising a scanning
sensor and a positioning sensor, the payload assembly configured to
orient the scanning sensor at a plurality of angles relative to the
main body; and at least two landing gear assemblies, each landing
gear assembly including landing gear legs coupled to the main body
using a landing gear bracket, wherein the landing gear bracket
couples the landing gear legs at fixed angles to clear the landing
gear legs from a field of view (FOV) of the scanning sensor.
20. An unmanned aerial vehicle (UAV), comprising: a propulsion
system; a main body coupled to the propulsion system; and a payload
assembly coupled to the main body via a mounting assembly, wherein
the payload assembly includes a payload comprising a scanning
sensor coupled to a positioning sensor via a pivot bracket, the
payload assembly configured to orient the scanning sensor at a
plurality of angles relative to the main body using the pivot
bracket, and wherein the payload assembly comprises a positioning
sensor enclosure covering the positioning sensor, the positioning
sensor enclosure mounted to a side of the pivot bracket mounting
the positioning sensor that is opposite to another side of the
pivot bracket mounting the scanning sensor.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of International Patent
Application No. PCT/US2019/058219, filed Oct. 25, 2019, which
claims the benefit of U.S. Provisional Application No. 62/752,273,
filed Oct. 29, 2018, which is hereby incorporated by reference.
COPYRIGHT NOTICE
[0002] A portion of the disclosure of this patent document contains
material which is subject to copyright protection. The copyright
owner has no objection to the facsimile reproduction by anyone of
the patent document or the patent disclosure, as it appears in the
Patent and Trademark Office patent file or records, but otherwise
reserves all copyright rights whatsoever.
FIELD OF THE INVENTION
[0003] The disclosed embodiments relate generally to techniques for
mapping and more particularly, but not exclusively, to a movable
object for performing real-time mapping.
BACKGROUND
[0004] Movable objects such as unmanned aerial vehicles (UAVs) can
be used for performing surveillance, reconnaissance, and
exploration tasks for various applications. Movable objects may
carry a payload, including various sensors, which enables the
movable objects to capture sensor data during movement of the
movable objects. The captured sensor data may be rendered on a
client device, such as a client device in communication with the
movable objects via a remote control, remote server, or other
computing device.
SUMMARY
[0005] Techniques are disclosed for real-time mapping in a movable
object environment. A real-time mapping system can include at least
an unmanned aerial vehicle (UAV), comprising a propulsion system, a
main body coupled to the propulsion system and a payload assembly
coupled to the main body via a mounting assembly, wherein the
payload assembly includes a payload comprising a scanning sensor
and a positioning sensor, the payload assembly configured to orient
the scanning sensor at a plurality of angles relative to the main
body.
BRIEF DESCRIPTION OF DRAWINGS
[0006] FIG. 1 illustrates an example of a movable object in a
movable object environment, in accordance with various
embodiments.
[0007] FIG. 2 illustrates an example of a movable object
architecture in a movable object environment, in accordance with
various embodiments.
[0008] FIG. 3 illustrates an example of a mapping manager in a
movable object environment, in accordance with various
embodiments.
[0009] FIGS. 4A and 4B illustrate an example of a hierarchical data
structure, in accordance with various embodiments.
[0010] FIGS. 5A and 5B illustrate an example of outlier removal in
mapping data, in accordance with various embodiments.
[0011] FIG. 6 illustrates an example of intensity values in mapping
data, in accordance with various embodiments.
[0012] FIG. 7 illustrates an example of supporting a movable object
interface in a software development environment, in accordance with
various embodiments.
[0013] FIG. 8 illustrates an example of a movable object interface,
in accordance with various embodiments.
[0014] FIG. 9 illustrates an example of components for a movable
object in a software development kit (SDK), in accordance with
various embodiments.
[0015] FIG. 10 shows a flowchart of a method of target mapping in a
movable object environment, in accordance with various
embodiments.
[0016] FIG. 11 shows an isometric view of a movable object for
performing real-time mapping, in accordance with an embodiment.
[0017] FIG. 12 shows a rear view of a movable object for performing
real-time mapping, in accordance with an embodiment.
[0018] FIG. 13 shows an isometric exploded view of the mounting
assembly and payload assembly, in accordance with an
embodiment.
[0019] FIG. 14 shows an isometric view of a base plate, in accordance
with some embodiments.
[0020] FIG. 15 shows an isometric assembled view of the mounting
assembly and payload assembly, in accordance with an
embodiment.
[0021] FIGS. 16A-16D show additional views of the payload assembly
in accordance with various embodiments.
[0022] FIG. 17 shows an alternative enclosure to prevent jamming of
the positioning sensor of a movable object in accordance with
various embodiments.
[0023] FIGS. 18A and 18B show example alignments of lower expansion
brackets in accordance with various embodiments.
[0024] FIG. 19 shows an example of the alignment brackets being
connected to the movable object body and the expansion brackets, in
accordance with an embodiment.
[0025] FIG. 20 shows an alternative mechanical attachment of the
payload assembly to a movable object in accordance with various
embodiments.
[0026] FIGS. 21-23 show an alternative mechanical attachment of the
light detection and ranging (LiDAR) sensor and positioning sensor
to a movable object in accordance with various embodiments.
[0027] FIG. 24 shows an example of a landing gear in accordance
with various embodiments.
[0028] FIG. 25 shows an example of a landing gear bracket in
accordance with various embodiments.
[0029] FIG. 26 shows an example of an alternative landing gear
mounting point in accordance with various embodiments.
[0030] FIGS. 27-29 show LiDAR fields of view in accordance with
various embodiments.
[0031] FIGS. 30-32 show examples of angled positions of a scanning
sensor coupled to a movable object, in accordance with various
embodiments.
[0032] FIG. 33 shows example scanning patterns that may be
implemented by LiDAR sensors that may be used in various
embodiments.
DETAILED DESCRIPTION
[0033] The invention is illustrated, by way of example and not by
way of limitation, in the figures of the accompanying drawings in
which like references indicate similar elements. It should be noted
that references to "an" or "one" or "some" embodiment(s) in this
disclosure are not necessarily to the same embodiment, and such
references mean at least one.
[0034] The following description of the invention describes target
mapping using a movable object. For simplicity of explanation, an
unmanned aerial vehicle (UAV) is generally used as an example of a
movable object. It will be apparent to those skilled in the art
that other types of movable objects can be used without
limitation.
[0035] Embodiments enable a movable object to map a target
environment in real-time using data collected from a positioning
sensor and a scanning sensor. Alternative embodiments may take
advantage of post-processing to generate a map following completion
of one or more data collection missions executed by one or more
movable objects. For example, the various embodiments may utilize
scan matching techniques for mapping a complex target environment.
Embodiments can be used to provide LiDAR-based real-time mapping
for various applications, such as construction, surveying, target
inspection, etc. Rather than collecting data to be post-processed
into a map representation of the target, a map can be constructed
in real-time, enabling a version of the map to be rendered on a
client device as it is collected. Such live rendering may enable
the user to determine if any areas within the target environment
have not been scanned by a scanning sensor electronically coupled
to the movable object. Additionally, a high-density version of the
map can be generated during the mapping mission and downloaded upon
return of the movable object. In various embodiments, a mapping
manager may utilize a parallel computing architecture to perform
the real-time mapping while the movable object is performing its
mapping mission. In some embodiments, the mapping data may be
output as a LiDAR Data Exchange File (LAS) which may be used by
various tools to render the map of the target environment and/or
use the mapping data for further processing, planning, etc.
Metadata embedded in the LAS output file can facilitate integration
of the map with various third-party tools. In various embodiments,
the map may be output in various file formats depending on user
preferences.
[0036] In some embodiments, a mapping manager can receive mapping
data from a scanning sensor (such as a LiDAR sensor or other sensor
that provides high resolution scanning of a target environment),
and positioning data from a positioning sensor (e.g., a global
positioning system (GPS) module, real-time kinematic (RTK) module,
an inertial measurement unit (IMU) module, or other positioning
sensor). The mapping data can be geo-referenced using the
positioning data and used to construct the map of the target
environment. Embodiments objectively geo-reference the mapping
data, enabling various target environments to be mapped regardless
of environment complexity.
[0037] FIG. 1 illustrates an example of a movable object in a
movable object environment 100, in accordance with various
embodiments. As shown in FIG. 1, client device 110 in a movable
object environment 100 can communicate with a movable object 104
via a communication link 106. The movable object 104 can be an
unmanned aircraft, an unmanned vehicle, a handheld device, and/or a
robot. The client device 110 can be a portable personal computing
device, a smart phone, a remote control, a wearable computer, a
virtual reality/augmented reality system, and/or a personal
computer. Additionally, the client device 110 can include a remote
controller 111 and communication system 120A, which is responsible
for handling the communication between the client device 110 and
the movable object 104 via communication system 120B. For example,
the communication between the client device 110 and the movable
object 104 (e.g., an unmanned aircraft) can include uplink and
downlink communication. The uplink communication can be used for transmitting control signals, while the downlink communication can be used for transmitting media or video streams, mapping data collected by scanning sensors, or other sensor data collected by other sensors.
[0038] In accordance with various embodiments, the communication
link 106 can be (part of) a network, which is based on various
wireless technologies, such as Wi-Fi, Bluetooth, 3G/4G, and other radio frequency technologies. Furthermore, the communication link 106 can be based on other computer network technologies, such as Internet technologies, or any other wired or wireless
networking technology. In some embodiments, the communication link
106 may be a non-network technology, including direct
point-to-point connections such as universal serial bus (USB) or
universal asynchronous receiver-transmitter (UART).
[0039] In various embodiments, movable object 104 in a movable
object environment 100 can include a payload assembly 122 and a
payload, such as a scanning sensor 124 (e.g., a LiDAR sensor).
Although the movable object 104 is described generally as an
aircraft, this is not intended to be limiting, and any suitable
type of movable object can be used. One of skill in the art would
appreciate that any of the embodiments described herein in the
context of aircraft systems can be applied to any suitable movable
object (e.g., a UAV). In some instances, the payload may be
provided on the movable object 104 without requiring the payload
assembly.
[0040] In accordance with various embodiments, the movable object
104 may include one or more movement mechanisms 116 (e.g.,
propulsion mechanisms), a sensing system 118, and a communication
system 120B. The movement mechanisms 116 can include one or more of
rotors, propellers, blades, engines, motors, wheels, axles,
magnets, nozzles, animals, or human beings. For example, the
movable object may have one or more propulsion mechanisms. The
movement mechanisms may all be of the same type. Alternatively, the
movement mechanisms can be different types of movement mechanisms.
The movement mechanisms 116 can be mounted on the movable object
104 (or vice-versa), using any suitable means such as a support
element (e.g., a drive shaft). The movement mechanisms 116 can be
mounted on any suitable portion of the movable object 104, such as on
the top, bottom, front, back, sides, or suitable combinations
thereof.
[0041] In some embodiments, the movement mechanisms 116 can enable
the movable object 104 to take off vertically from a surface or
land vertically on a surface without requiring any horizontal
movement of the movable object 104 (e.g., without traveling down a
runway). Optionally, the movement mechanisms 116 can be operable to
permit the movable object 104 to hover in the air at a specified
position and/or orientation. One or more of the movement mechanisms
116 may be controlled independently of the other movement
mechanisms, for example by an application executing on client
device 110, computing device 112, or other computing device in
communication with the movement mechanisms. Alternatively, the
movement mechanisms 116 can be configured to be controlled
simultaneously. For example, the movable object 104 can have
multiple horizontally oriented rotors that can provide lift and/or
thrust to the movable object. The multiple horizontally oriented
rotors can be actuated to provide vertical takeoff, vertical
landing, and hovering capabilities to the movable object 104. In
some embodiments, one or more of the horizontally oriented rotors
may spin in a clockwise direction, while one or more of the
horizontally oriented rotors may spin in a counterclockwise
direction. For example, the number of clockwise rotors may be equal
to the number of counterclockwise rotors. The rotation rate of each
of the horizontally oriented rotors can be varied independently in
order to control the lift and/or thrust produced by each rotor, and
thereby adjust the spatial disposition, velocity, and/or
acceleration of the movable object 104 (e.g., with respect to up to
three degrees of translation and up to three degrees of rotation).
As discussed further herein, a controller, such as flight
controller 114, can send movement commands to the movement
mechanisms 116 to control the movement of movable object 104. These
movement commands may be based on and/or derived from instructions
received from client device 110, computing device 112, or other
entity.
[0042] The sensing system 118 can include one or more sensors that
may sense the spatial disposition, velocity, and/or acceleration of
the movable object 104 (e.g., with respect to various degrees of
translation and various degrees of rotation). The one or more
sensors can include any of a variety of sensors, including GPS sensors,
motion sensors, inertial sensors, proximity sensors, or image
sensors. The sensing data provided by the sensing system 118 can be
used to control the spatial disposition, velocity, and/or
orientation of the movable object 104 (e.g., using a suitable
processing unit and/or control module). Alternatively, the sensing
system 118 can be used to provide data regarding the environment
surrounding the movable object, such as weather conditions,
proximity to potential obstacles, location of geographical
features, location of manmade structures, and the like.
[0043] The communication system 120B enables communication with
client device 110 via communication link 106, which may include
various wired and/or wireless technologies as discussed above, and
communication system 120A. The communication system 120A or 120B
may include any number of transmitters, receivers, and/or
transceivers suitable for wireless communication. The communication
may be one-way communication, such that data can be transmitted in
only one direction. For example, one-way communication may involve
only the movable object 104 transmitting data to the client device
110, or vice-versa. The data may be transmitted from one or more
transmitters of the communication system 120B of the movable object
to one or more receivers of the communication system 120A of the
client device, or vice-versa. Alternatively, the communication may
be two-way communication, such that data can be transmitted in both
directions between the movable object 104 and the client device
110. The two-way communication can involve transmitting data from
one or more transmitters of the communication system 120B of the
movable object 104 to one or more receivers of the communication
system 120A of the client device 110, and transmitting data from
one or more transmitters of the communication system 120A of the
client device 110 to one or more receivers of the communication
system 120B of the movable object 104.
[0044] In some embodiments, a client device 110 may communicate
with a mapping manager 126 installed on computing device 112 over a
transparent transmission channel of a communication link 106. The
transparent transmission channel can be provided through the flight
controller of the movable object which allows the data to pass
through unchanged (e.g., "transparent") to the mapping manager or
other application on computing device 112. In some embodiments,
mapping manager 126 may utilize a software development kit (SDK),
application programming interfaces (APIs), or other interfaces made
available by the movable object, computing device, scanning sensor
124, etc. In various embodiments, the mapping manager may be
implemented by one or more processors on movable object 104 (e.g.,
flight controller 114 or other processors), computing device 112,
remote controller 111, client device 110, or other computing device
in communication with movable object 104. In some embodiments,
mapping manager 126 may be implemented as an application executing
on client device 110, computing device 112, or other computing
device in communication with movable object 104.
[0045] In some embodiments, an application executing on client
device 110 or computing device 112 can provide control data to one
or more of the movable object 104, payload assembly 122, and
payload 124 and receive information from one or more of the movable
object 104, payload assembly 122, and payload 124 (e.g., position
and/or motion information of the movable object, payload assembly
or payload; data sensed by the payload such as image data captured
by a payload camera or mapping data captured by a LiDAR sensor; and
data generated from image data captured by the payload camera or
LiDAR data generated from mapping data captured by the LiDAR
sensor).
[0046] In some embodiments, the control data may result in a
modification of the location and/or orientation of the movable
object (e.g., via control of the movement mechanisms 116), or a
movement of the payload with respect to the movable object (e.g.,
via control of the payload assembly 122). The control data from the
application may result in control of the payload, such as control
of the operation of scanning sensor 124, a camera or other image
capturing device (e.g., taking still or moving pictures, zooming in
or out, turning on or off, switching imaging modes, changing image
resolution, changing focus, changing depth of field, changing
exposure time, changing viewing angle or field of view).
[0047] In some instances, the communications from the movable
object, payload assembly and/or payload may include information
obtained from one or more sensors (e.g., of the sensing system 118
or of the scanning sensor 124 or other payload) and/or data
generated based on the sensing information. The communications may
include sensed information obtained from one or more different
types of sensors (e.g., GPS sensors, motion sensors, inertial
sensors, proximity sensors, or image sensors). Such information may
pertain to the position (e.g., location, orientation), movement, or
acceleration of the movable object, payload assembly, and/or
payload. Such information from a payload may include data captured
by the payload or a sensed state of the payload.
[0048] In some embodiments, computing device 112 can be added to
the movable object. The computing device can be powered by the
movable object and can include one or more processors, such as
CPUs, GPUs, field programmable gate arrays (FPGAs), system on chip
(SoC), application-specific integrated circuit (ASIC), or other
processors. The computing device can include an operating system
(OS), such as Windows 10®, Linux®, or Unix®-based
operating systems, or other OS. Mission processing can be offloaded
from the flight controller 114 to the computing device 112. In
various embodiments, the mapping manager 126 can execute on the
computing device 112, client device 110, payload 124, a remote
server (not shown), or other computing device.
[0049] In some embodiments, mapping manager 126 can be used to
provide LiDAR-based real-time mapping for various applications,
such as construction, surveying, target inspection, etc. Rather
than collecting data to be post-processed into a map representation
of the target, a map can be constructed in real-time, enabling a
version of the map to be rendered on client device 110 as it is
collected. Such live rendering may enable the user to determine if
any areas within the target environment have not been scanned by
scanning sensor 124. Additionally, another version of the map may
be downloaded and used upon return of the movable object. In
various embodiments, the mapping manager 126 may utilize a parallel
computing architecture in computing device 112 to perform the
real-time mapping. In some embodiments, the mapping manager 126 may
perform data compression to transform a dense map into a sparse map
to be rendered on client device 110. By way of compressing the
dense map into the sparse map, the mapping manager 126 may be used
to reduce data size required for transmission from the movable
object 104 to the client device 110, and thus, data transmission
time and bandwidth are saved for efficient real-time map rendering.
In such embodiments, the live rendering of the map may be a lower
resolution or a compressed data version of the map (i.e., a sparse
map) compared to the version obtained from the movable object upon
its return from scanning the target environment (i.e., a dense
map). In some embodiments, the map may be output as a LiDAR Data
Exchange File (LAS) which may be used by various tools to render
the map of the target environment and/or use the mapping data for
further processing, planning, etc. Metadata embedded in the LAS
output file can facilitate integration of the map with various
third-party tools. In various embodiments, the map may be output in
various file formats depending on user preferences.
[0050] Mapping manager 126 can receive mapping data from scanning
sensor 124. As discussed, scanning sensor 124 may be a LiDAR sensor
or other sensor that provides high resolution scanning of a target
environment. The mapping manager 126 may also receive positioning
data from a positioning sensor (e.g., a GPS module, RTK module, or
other positioning sensor). In some embodiments, the positioning
sensor may be part of functional modules 108, sensing system 118,
or a separate module coupled to movable object 104 which provides
positioning data for the movable object. The mapping data can be
geo-referenced using the positioning data and used to construct the
map of the target environment. Prior methods of 3D mapping have
relied on complex environments that are conducive to scan-matching.
Unlike prior mapping systems, which require complex environments in
order to use scan-matching to prepare the map, embodiments
objectively geo-reference the mapping data. This allows for various
target environments to be mapped regardless of environment
complexity.
[0051] Additional details of the movable object architecture are
described below with respect to FIG. 2.
[0052] FIG. 2 illustrates an example 200 of a movable object
architecture in a movable object environment, in accordance with
various embodiments. As shown in FIG. 2, a movable object 104 can
include a computing device 112 and flight controller 114. The
computing device 112 can connect to the scanning sensor 124 via a
high bandwidth connection, such as Ethernet or universal serial bus
(USB). The computing device 112 may also connect to a positioning
sensor 202 over a low bandwidth connection, such as universal
asynchronous receiver-transmitter (UART). As discussed, the
positioning sensor 202 may be included as a separate module (as
shown in FIG. 2) or may be included as part of functional modules
108 or sensing system 118. Positioning sensor 202 may include a
radio 204, such as a 4G, 5G, or other cellular or mobile network
radio. The radio 204 may be used by RTK module 206 to enhance
positioning data collected by GPS module 208. Although a GPS module
is shown in FIG. 2, any global navigation satellite service may be
used, such as GLONASS, Galileo, BeiDou, etc. RTK module 206 can
receive a reference signal from a reference station using radio 204
and provide a correction to the positioning data provided by GPS
module 208. Additionally, GPS module 208 can output a clock signal,
such as a pulse per second (1PPS) signal, to the scanning sensor
124. This allows for the scanning sensor and the GPS sensor to
apply synchronized time stamps to their collected data using the
same clock signal.
[0053] In various embodiments, the computing device 112 can connect
to one or more high bandwidth components, such as one or more
cameras, a stereo vision module, or payload. The computing device
112 can connect to the flight controller 114 via UART and/or USB to
send and receive data to and from the remote control via
communication system 120B. In various embodiments, the computing
device 112 may include one or more CPUs, GPUs, field programmable
gate arrays (FPGA), systems on chip (SoC), or other
processor(s).
[0054] Flight controller 114 can connect to various functional
modules 108, such as magnetometer 210, barometer 212, and inertial
measurement unit (IMU) 214. In some embodiments, communication
system 120B can connect to computing device 112 instead of, or in
addition to, flight controller 114. In some embodiments, sensor
data collected by the one or more functional modules 108 and the
positioning sensor 202 can be passed from the flight controller 114
to the computing device 112.
[0055] In some embodiments, flight controller 114 and computing
device 112 can be implemented as separate devices (e.g., separate
processors on separate circuit boards). Alternatively, one or more
of the flight controller 114 and computing device 112 can be
implemented as a single device, such as an SoC. In various
embodiments, computing device 112 may be removable from the movable
object.
[0056] FIG. 3 illustrates an example 300 of a mapping manager 126
in a movable object environment, in accordance with various
embodiments. As shown in FIG. 3, a mapping manager 126 may execute
on one or more processors 302 of computing device 112. The one or
more processors 302 may include CPUs, GPUs, FPGAs, SoCs, or other
processors, and may be part of a parallel computing architecture
implemented by computing device 112. The mapping manager 126 may
include sensor interfaces 303, data preparation module 308, and map
generator 316.
[0057] Sensor interfaces 303 can include a scanning sensor
interface 304 and a positioning sensor interface 306. The sensor
interfaces 303 may include hardware and/or software interfaces. The
scanning sensor interface 304 can receive data from the scanning
sensor (e.g., a LiDAR or other scanning sensor) and the positioning
sensor interface 306 can receive data from the positioning sensor
(e.g., a GPS sensor, an RTK sensor, an IMU sensor, and/or other
positioning sensors or a combination thereof). In various
embodiments, the scanning sensor may produce mapping data in a
point cloud format. The point cloud of the mapping data may be a
three-dimensional representation of the target environment. In some
embodiments, the point cloud of the mapping data may be converted
to a matrix representation. The positioning data may include GPS
coordinates for the movable object and, in some embodiments, may
include roll, pitch, and yaw values associated with the movable
object corresponding to each GPS coordinate. The roll, pitch, and
yaw values may be obtained from the positioning sensor, such as an
inertial measurement unit (IMU), or other sensor. As discussed, the
positioning data may be obtained from an RTK module, which corrects
the GPS coordinates based on a correction signal received from a
reference station. In some embodiments, the RTK module may produce
a variance value associated with each output coordinate. The
variance value may represent the accuracy of the corresponding
positioning data. For example, if the movable object is performing
sharp movements, the variance value may go up, indicating less
accurate positioning data has been collected. The variance value
may also vary depending on atmospheric conditions, leading to
different accuracies measured by the movable object depending on
the particular conditions present when the data was collected.
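As an illustration of the representations described above, the mapping data can be thought of as an N x 4 matrix of scanner-frame coordinates and intensities, and the positioning data as an M x 7 matrix of pose samples with per-sample variance. The sketch below is a hypothetical layout in NumPy; the shapes, field ordering, and values are assumptions for illustration only, not the actual on-board format.

```python
import numpy as np

# Mapping data: the point cloud converted to an N x 4 matrix of
# [x, y, z, intensity] in the scanner reference frame, plus an N-vector of
# timestamps from the shared clock (values are illustrative).
scan_points = np.array([[1.20, -0.40, 7.80, 120.0],
                        [1.21, -0.40, 7.81, 118.0]])
scan_times = np.array([10.000010, 10.000020])

# Positioning data: M samples of [x, y, z, roll, pitch, yaw, variance],
# with the position already expressed in the chosen world frame, plus an
# M-vector of timestamps from the same shared clock.
pose_samples = np.array([[100.00, 200.00, 50.00, 0.5, -1.0, 90.0, 0.02],
                         [100.10, 200.01, 50.00, 0.6, -1.1, 90.2, 0.02]])
pose_times = np.array([10.00, 10.01])
```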
[0058] The positioning sensor and scanning sensor may share clock
circuitry. For example, the positioning sensor may include clock
circuitry and output a clock signal to the scanning sensor. In some
embodiments, a separate clock circuit may output a clock signal to
both the scanning sensor and the positioning sensor. As such, the
positioning data and the mapping data may be time-stamped using the
shared clock signal.
[0059] In some embodiments, the positioning sensor and scanning
sensor may output data with differing delays. For example, the
positioning sensor and the scanning sensor may not start generating
data at the same time. As such, the positioning data and/or mapping
data may be buffered to account for the delay. In some embodiments,
a buffer size may be chosen based on the delay between the output
of each sensor. In some embodiments, mapping manager can receive
the data from the positioning sensor and scanning sensor and output
synchronized data using the timestamps shared by the sensor data
with respect to the shared clock signal. This enables the
positioning data and mapping data to be synchronized before further
processing. Additionally, the frequency of the data obtained from
each sensor may be different. For example, the scanning sensor may
be producing data in the range of hundreds of kHz, while the
positioning sensor may be producing data in the range of hundreds
of Hz. Accordingly, to ensure each point of the mapping data has
corresponding positioning data, upsampling module 310 can
interpolate the lower frequency data to match the higher frequency
data. For example, assuming the positioning data is produced by the
positioning sensor at 100 Hz and the mapping data is produced by
the scanning sensor (e.g., a LiDAR sensor) at 100 kHz, the
positioning data may be upsampled from 100 Hz to 100 kHz. Various
upsampling techniques may be used to upsample the positioning data.
For example, a linear fit algorithm, such as least squares, may be
used. In some embodiments, non-linear fit algorithms may be used to
upsample the positioning data. Additionally, the roll, pitch, yaw
values of the positioning data may also be interpolated to match
the frequency of the mapping data. In some embodiments, the roll,
pitch, and yaw values may be spherical linear interpolated (SLERP)
to match the number of points in the mapping data. The time stamps
may likewise be interpolated to match the interpolated positioning
data.
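The following is a minimal sketch of the upsampling step described above, assuming NumPy and SciPy are available; the function name upsample_poses and its argument layout are illustrative, not part of the disclosed system. Each position component is interpolated with a linear fit onto the LiDAR timestamps, and the roll, pitch, and yaw values are spherical linear interpolated (SLERP).

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def upsample_poses(pose_times, positions, rpy_deg, scan_times):
    """Interpolate low-rate positioning data onto high-rate LiDAR timestamps.

    pose_times: (M,) timestamps of positioning samples (e.g., ~100 Hz)
    positions:  (M, 3) sensor positions in the output frame
    rpy_deg:    (M, 3) roll, pitch, yaw in degrees
    scan_times: (N,) timestamps of LiDAR points (e.g., ~100 kHz)
    Returns (N, 3) interpolated positions and N interpolated attitudes.
    """
    # Clamp scan timestamps to the span of the positioning data so the
    # interpolators are never queried outside their valid range.
    t = np.clip(scan_times, pose_times[0], pose_times[-1])
    # Linear interpolation of each position component.
    pos_hi = np.column_stack(
        [np.interp(t, pose_times, positions[:, i]) for i in range(3)])
    # SLERP of the attitude represented as rotations built from roll/pitch/yaw.
    rotations = Rotation.from_euler("xyz", rpy_deg, degrees=True)
    rot_hi = Slerp(pose_times, rotations)(t)
    return pos_hi, rot_hi
```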
[0060] Once the positioning data has been upsampled and
synchronized with the mapping data by upsampling module 310,
geo-reference module 312 can convert the matrix representation of
the mapping data from the frame of reference (or the reference
coordinate system) in which it was collected (e.g., scanner
reference frame or scanner reference coordinate system) to a
desired frame of reference (or a desired reference coordinate
system). For example, the positioning data may be converted from
the scanner reference frame to a north-east-down (NED) reference
frame (or a NED coordinate system). The reference frame to which
the positioning data is converted may vary depending on the
application of the map that is being produced. For example, if the
map is being used in surveying, it may be converted to the NED
reference frame. For another example, if the map is being used for
rendering motions such as flight simulation, it may be converted to
the FlightGear coordinate system. Other applications of the map may
effect conversions of the positioning data to different reference
frames or different coordinate systems.
[0061] Each point in the point cloud of the mapping data is
associated with a position in the scanner reference frame that is
determined relative to the scanning sensor. The positioning data of
the movable object, produced by the positioning sensor, may then be
used to convert this position in the scanner reference frame to the
output reference frame in a world coordinate system, such as a GPS
coordinate system. For example, the position of the scanning sensor
in the world coordinate system is known based on the positioning
data. In some embodiments, the positioning sensor and the scanning
module may be offset (e.g., due to being located at different
positions on the movable object). In such embodiments, a further
correction factoring in this offset may be used to convert from the
scanner reference frame to the output reference frame (e.g., each
measured position in the positioning data may be corrected using
the offset between the positioning sensor and the scanning sensor).
For each point in the point cloud of the mapping data, the
corresponding positioning data can be identified using the time
stamp. The point can then be converted to the new reference frame.
In some embodiments, the scanner reference frame can be converted
into a horizontal reference frame using the interpolated roll,
pitch, and yaw values from the positioning data. Once the mapping
data has been converted into the horizontal reference frame, it may
be further converted into a Cartesian frame or other output
reference frame. Once each point has been converted, the result is
a geo-referenced point cloud, with each point in the point cloud
now referenced to the world coordinate system. In some embodiments,
the geo-referenced point cloud can be provided to map generator 316
before performing outlier removal to remove outlier data from the
geo-referenced point cloud.
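A hedged sketch of the per-point conversion described above, reusing the interpolated poses from the previous sketch. The function name georeference and the lever_arm argument (an assumed, fixed offset between the positioning sensor and the scanning sensor, expressed in the body frame) are illustrative; the actual correction and frame conventions may differ.

```python
import numpy as np

def georeference(scan_xyz, pos_hi, rot_hi, lever_arm=np.zeros(3)):
    """Convert scanner-frame points to the output (e.g., NED) reference frame.

    scan_xyz: (N, 3) LiDAR points in the scanner reference frame
    pos_hi:   (N, 3) interpolated sensor positions in the output frame
    rot_hi:   N interpolated attitudes (scipy Rotation) matched by timestamp
    """
    # Apply the sensor-offset correction, rotate each point into the
    # horizontal/output frame using the interpolated roll, pitch, and yaw,
    # then translate by the sensor position for the matching timestamp.
    return rot_hi.apply(scan_xyz + lever_arm) + pos_hi
```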
[0062] After the geo-referenced point cloud has been produced,
outlier removal module 314 can remove outlier data from the
geo-referenced point cloud. In some embodiments, the geo-referenced
point cloud may be downsampled, reducing the number of outliers in
the data. Downsampling of this data may be performed using voxels.
In some embodiments, the points in each voxel may be averaged, and
one or more averaged points may be output per voxel. As such,
outlier points will be removed from the data set in the course of
averaging the points in each voxel. In various embodiments, the
resolution of the voxels (e.g., the size of each voxel), may be
arbitrarily defined. This allows for sparse and dense downsampled
point clouds to be produced. The resolution may be determined by
the user, or by the mapping manager based on, e.g., available
computing resources, user preferences, default values, or other
application-specific information. For example, a lower resolution
(e.g., larger voxel size) may be used to produce a sparse
downsampled point cloud for visualization on a client device or a
mobile device. Additionally, or alternatively, outliers may be
removed statistically. For example, the distance from each point to
its nearest neighbor may be determined and statistically analyzed.
If the distance from a point to its nearest neighbor is greater
than a threshold value (e.g., a standard deviation of the nearest
neighbor distances in the point cloud), then that point may be
removed from the point cloud. In some embodiments, the outlier
removal technique may be selectable by the user or be automatically
selected by the mapping manager. In some embodiments, outlier
removal may be disabled.
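The two outlier-handling strategies described above can be sketched as follows, assuming NumPy and SciPy. The function names and the mean-plus-one-standard-deviation cutoff are illustrative choices, not the exact rules used by the mapping manager.

```python
import numpy as np
from scipy.spatial import cKDTree

def voxel_downsample(points, voxel_size):
    """Average the points in each voxel; returns one point per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse)
    return np.column_stack(
        [np.bincount(inverse, weights=points[:, d]) / counts for d in range(3)])

def remove_statistical_outliers(points, num_std=1.0):
    """Drop points whose nearest-neighbor distance is unusually large."""
    tree = cKDTree(points)
    # k=2 because the closest neighbor of each point is the point itself.
    dists, _ = tree.query(points, k=2)
    nn = dists[:, 1]
    keep = nn <= nn.mean() + num_std * nn.std()
    return points[keep]
```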
[0063] As discussed, the point cloud data may be a
three-dimensional representation of the target environment. This 3D
representation can be divided into voxels (e.g., 3D pixels).
[0064] After statistical outlier removal, the resulting point cloud
data can be provided to map generator 316. In some embodiments, the
map generator 316 may include a dense map generator 318 and/or a
sparse map generator 320. In such embodiments, dense map generator
318 can produce a high-density map from the point cloud data
received before outlier removal, and sparse map generator 320 can
produce a low-density map from the sparse downsampled point cloud
data received after outlier removal. In other embodiments, dense
map generator 318 and sparse map generator 320 may produce a
high-density map and a low-density map separately from the point
cloud received both after outlier removal. In such embodiments,
each map generator may generate the output map using the same
process but may vary the size of the voxels to produce high-density
or low-density maps. In some embodiments, the low-density map can
be used by a client device or a mobile device to provide real-time
visualization of the mapping data. The high-density map can be
output as a LiDAR Data Exchange File (LAS) or other file type to be
used with various mapping, planning, analysis, or other tools.
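For example (the voxel sizes are illustrative), the voxel_downsample sketch shown earlier could be run twice on the geo-referenced point cloud to produce the two map densities:

```python
# Fine grid for the high-density LAS output, coarse grid for the sparse map
# streamed to the client device for real-time visualization.
dense_map = voxel_downsample(georeferenced_points, voxel_size=0.05)
sparse_map = voxel_downsample(georeferenced_points, voxel_size=0.50)
```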
[0065] The map generator may use the point cloud data to perform a
probabilistic estimation of the position of points in the map. For
example, the map generator may use a 3D mapping library, such as
OctoMap to produce the output map. The map generator can divide the
point cloud data into voxels. For each voxel, the map generator can
determine how many points are in the voxel and, based on the number
of points and the variance associated with each point, determine
the probability that a point is in that voxel. The probability may
be compared to an occupancy threshold and, if the probability is
greater than the occupancy threshold, a point may be represented in
that voxel in the output map. In some embodiments, the probability
that a given voxel is occupied can be represented as:
$$P(n \mid z_{1:t}) = \left[\, 1 + \frac{1 - P(n \mid z_t)}{P(n \mid z_t)} \cdot \frac{1 - P(n \mid z_{1:t-1})}{P(n \mid z_{1:t-1})} \cdot \frac{P(n)}{1 - P(n)} \,\right]^{-1}$$
[0066] The probability P(n|z_{1:t}) of a node n being occupied is a function of the current measurement z_t, a prior probability P(n), and the previous estimate P(n|z_{1:t-1}). Additionally, P(n|z_t) represents the probability that voxel n is occupied given the measurement z_t. This probability may be augmented to include the variance of each point, as measured by the positioning sensor, as represented by the following equations:
$$P(n) = \frac{1}{2}\, P_x(x, \mu_x, \sigma_x^2)\, P_y(y, \mu_y, \sigma_y^2)\, P_z(z, \mu_z, \sigma_z^2) + \frac{1}{2}$$

$$P(n, \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}$$
[0067] In the equations above, P(n) represents the total probability that a voxel n is occupied. The use of 1/2 in the above equation is implementation specific, such that the probability is mapped to the range 1/2 to 1. This range may vary, depending on the particular implementation in use. In the above equations, the total probability is the product of probabilities calculated for the x, y, and z dimensions. The probability in each dimension may be determined based on the mean, μ, for each point in that dimension, and the variance, σ², of each measurement in a given dimension, with x, y, and z corresponding to the coordinate values of a given point. A large number of points near the mean point in a given voxel may increase the probability, while a more diffuse collection of points in the voxel may lower the probability. Likewise, a large variance associated with the data (e.g., indicating lower accuracy position data has been collected) may lower the probability, while a lower variance may increase the probability. P(n, μ, σ²) represents the Gaussian distribution for the voxel, given the mean and variance values of the points in that voxel.
[0068] If the total probability of a voxel being occupied is
greater than a threshold occupancy value, then a point can be added
to that voxel. In some embodiments, all of the points in that voxel
can be averaged, and the resulting mean coordinate can be used as
the location of the point in that voxel. This improves the accuracy
of the resulting map over alternative methods, such as using the
center point of an occupied voxel as the point, which may result in
skewed results depending on the resolution of the voxels. In
various embodiments, the occupancy threshold can be set based on
the amount of processing resources available and/or based on the
acceptable amount of noise in the data for a given application. For
example, the occupancy threshold can be set to a default value of
70%. A higher threshold can be set to reduce noise. Additionally,
the occupancy threshold may be set depending on the quality of the
data being collected. For example, data collected under one set of
conditions may be high quality (e.g., low variance) and a lower
occupancy threshold can be set, while lower quality data may
necessitate a higher occupancy threshold.
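The following is a simplified, hedged sketch of this thresholding step. It maps a per-voxel clustering score into the 1/2-to-1 range and keeps one averaged point when the score exceeds the occupancy threshold; the score is a stand-in for, not a faithful implementation of, the probability model given above, and all names and defaults are illustrative.

```python
import numpy as np

def occupancy_points(points, variances, voxel_size, occ_threshold=0.7):
    """Keep one averaged point per voxel whose occupancy score exceeds the threshold.

    points:    (N, 3) geo-referenced points
    variances: (N,) per-point position variance reported by the positioning sensor
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    kept = []
    for v in range(inverse.max() + 1):
        pts = points[inverse == v]
        sigma2 = max(float(variances[inverse == v].mean()), 1e-9)
        mu = pts.mean(axis=0)
        # Gaussian weight of each point relative to the voxel mean: tight,
        # low-variance clusters score near 1; diffuse or noisy clusters near 0.
        w = np.exp(-np.sum((pts - mu) ** 2, axis=1) / (2.0 * sigma2)).mean()
        score = 0.5 * w + 0.5            # map the score into the 1/2-to-1 range
        if score > occ_threshold:
            kept.append(mu)              # averaged point, not the voxel center
    return np.array(kept)
```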
[0069] The resulting map data, with one point in each occupied
voxel, can then be output as an LAS file, or other file format. In
some embodiments, the geo-referenced point cloud data may be output
without additional processing (e.g., outlier removal). In some
embodiments, each point in the point cloud data may also be
associated with an intensity value. The intensity value may
represent various features of the object being scanned, such as
elevation above a reference plane, material composition, etc. The
intensity value for each point in the output map may be an average
of the intensity values measured for each point in the mapping data
collected by the scanning sensor (e.g., a LiDAR sensor).
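Assuming the output is written with the laspy library (one possible choice; the scale values and point format are illustrative assumptions), the averaged points and intensities could be exported roughly as follows:

```python
import laspy
import numpy as np

def write_las(path, xyz, intensity):
    """Write geo-referenced points and per-point intensity to a LAS file."""
    header = laspy.LasHeader(point_format=3, version="1.2")
    header.offsets = xyz.min(axis=0)               # keep stored coordinates small
    header.scales = np.array([0.001, 0.001, 0.001])
    las = laspy.LasData(header)
    las.x, las.y, las.z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    # Intensity per output point, e.g., the average of the measured intensities
    # of the scanning-sensor returns that contributed to each voxel.
    las.intensity = np.asarray(intensity, dtype=np.uint16)
    las.write(path)
```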
[0070] FIGS. 4A and 4B illustrate an example of a hierarchical data
structure, in accordance with various embodiments. As discussed
above, and as shown in FIG. 4A, data representing a 3D environment
400 can be divided into a plurality of voxels. As shown in FIG. 4A,
the target environment can be divided into eight voxels, with each
voxel being further divided into eight sub-voxels, and each
sub-voxel divided into eight further smaller sub-voxels. Each voxel
may represent a different volumetric portion of the 3D environment.
The voxels may be subdivided until a smallest voxel size is
reached. The resulting 3D environment can be represented as a
hierarchical data structure 402, where the root of the data
structure represents the entire 3D environment, and each child node
represents a different voxel at a different level of the hierarchy within the 3D environment.
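As an illustration of the hierarchical subdivision described above, a minimal octree node can recursively split its cubic volume into eight children until the smallest voxel size is reached. The class and method names below are hypothetical and are not part of the disclosed system.

```python
class OctreeNode:
    """Each node covers a cubic voxel and is subdivided into eight sub-voxels."""

    def __init__(self, center, half_size):
        self.center = center        # (x, y, z) center of this voxel
        self.half_size = half_size  # half of this voxel's edge length
        self.children = None        # eight sub-voxels, created on demand
        self.points = []            # points stored at leaf nodes

    def insert(self, point, min_half_size):
        if self.half_size <= min_half_size:
            self.points.append(point)   # smallest voxel size reached
            return
        if self.children is None:
            # Create the eight child voxels, one per octant.
            self.children = [
                OctreeNode(tuple(c + s * self.half_size / 2
                                 for c, s in zip(self.center, signs)),
                           self.half_size / 2)
                for signs in [(sx, sy, sz)
                              for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
            ]
        # Route the point to the child octant that contains it.
        idx = (4 * (point[0] >= self.center[0])
               + 2 * (point[1] >= self.center[1])
               + 1 * (point[2] >= self.center[2]))
        self.children[idx].insert(point, min_half_size)
```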
[0071] FIGS. 5A and 5B illustrate an example of outlier removal in
mapping data, in accordance with various embodiments. As shown in
FIG. 5A, when a target object is scanned, it may be represented as
a plurality of points, with those points clustered on different
parts of the object, including surfaces (such as surface 501),
edges (such as edge 503), and other portions of the target object
in the target environment. For simplicity of depiction, these
surfaces, edges, etc. are shown solid. In various regions 500A-500F
of the data, there are additional outlier points. This may be most
noticeable in regions of empty space, as shown in FIG. 5A. These
points are diffuse, as compared to the more densely packed points
of the surfaces and edges of the target object. Outlier removal may
be used to eliminate or reduce the number of these points in the
data. As discussed above, the geo-referenced point cloud data may
be downsampled, reducing the number of outliers in the data.
Additionally, or alternatively, outliers may be removed
statistically. For example, the distance from each point to its
nearest neighbor may be determined and statistically analyzed. If
the distance from a point to its nearest neighbor is greater than a
threshold value (e.g., a standard deviation of the nearest neighbor
distances in the point cloud), then that point may be removed from
the point cloud. As shown in FIG. 5B, following outlier removal,
the regions of the point cloud data 502A-502F have been reduced,
providing a cleaner 3D map.
[0072] FIG. 6 illustrates an example 600 of intensity values in
mapping data, in accordance with various embodiments. As shown in
FIG. 6, one example of intensity values may be to represent
elevation above a reference plane. In this example, different
elevation ranges may be assigned a different intensity value
602-606, as depicted here using greyscale coloration. In various
embodiments, intensity may be represented using different colors to
represent different values or ranges of values. Additionally,
intensity may be used to represent different materials being
scanned. For example, steel and concrete will absorb and reflect
the incident radiation produced by the scanning sensor differently,
enabling the scanning sensor to identify different materials in
use. Each material may be encoded as a different intensity value
associated with each point and represented by a different color in
the output map. Additionally, although the example shown in FIG. 6
shows three greyscale colors representing different elevation
ranges, in various embodiments, continuous gradients of colors may
be used to represent continuous changes in elevation value above a
reference plane.
[0073] FIG. 7 illustrates an example of supporting a movable object
interface in a software development environment, in accordance with
various embodiments. As shown in FIG. 7, a movable object interface
703 can be used for providing access to a movable object 701 in a
software development environment 700, such as a software
development kit (SDK) environment. In some embodiments, the movable
object interface 703, may render a real-time map generated by the
mapping manager and other interfacing components for receiving user
input. The real-time map may be rendered on a display of a client
device or other computing device in communication with the movable
object. As used herein, the SDK can be an onboard SDK implemented
on an onboard environment that is coupled to the movable object
701. The SDK can also be a mobile SDK implemented on an off-board
environment that is coupled to a client device or a mobile device.
As discussed above, the mapping manager can be implemented using an
onboard SDK coupled to the movable object 701 or a mobile SDK
coupled to a client device or a mobile device to enable
applications to perform real-time mapping, as described herein.
[0074] Furthermore, the movable object 701 can include various
functional modules A-C 711-713, and the movable object interface
703 can include different interfacing components A-C 731-733. Each
said interfacing component A-C 731-733 in the movable object
interface 703 corresponds to a module A-C 711-713 in the movable
object 701. In some embodiments, the interfacing components may be
rendered on a user interface of a display of a client device or
other computing device in communication with the movable object. In
such an example, the interfacing components, as rendered, may
include selectable command buttons for receiving user
input/instructions to control corresponding functional modules of
the movable object.
[0075] In accordance with various embodiments, the movable object
interface 703 can provide one or more callback functions for
supporting a distributed computing model between the application
and movable object 701.
[0076] The callback functions can be used by an application for
confirming whether the movable object 701 has received the
commands. Also, the callback functions can be used by an
application for receiving the execution results. Thus, the
application and the movable object 701 can interact even though
they are separated in space and in logic.
[0077] As shown in FIG. 7, the interfacing components A-C 731-733
can be associated with the listeners A-C 741-743. A listener A-C
741-743 can inform an interfacing component A-C 731-733 to use a
corresponding callback function to receive information from the
related module(s).
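A minimal sketch of this listener/callback pattern is shown below. The class and method names are illustrative only and do not correspond to any actual SDK API.

```python
class InterfacingComponent:
    """Receives module results through callbacks registered by an application."""

    def __init__(self, name):
        self.name = name
        self._callbacks = []

    def register_callback(self, fn):
        """Register a callback invoked when the related module reports a result."""
        self._callbacks.append(fn)

    def on_module_update(self, payload):
        # Called by a listener when the related module produces data or
        # acknowledges a command; every registered callback receives the result.
        for fn in self._callbacks:
            fn(payload)

# Example: an application confirms that a command was received and gets results.
camera = InterfacingComponent("camera")
camera.register_callback(lambda result: print("execution result:", result))
camera.on_module_update({"command": "start_scan", "status": "received"})
```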
[0078] Additionally, a data manager 702, which prepares data 720
for the movable object interface 703, can decouple and package the
related functionalities of the movable object 701. The data manager 702 may be onboard, that is, coupled to or located on the movable object 701, in which case it prepares the data 720 to be communicated to the movable object interface 703 via communication between the movable object 701 and a client device or a mobile device. Alternatively, the data manager 702 may be off-board, that is, coupled to or located on a client device or a mobile device, in which case it prepares the data 720 for the movable object interface 703 via communication within the client device or the mobile device. Also, the data manager 702 can be used for
managing the data exchange between the applications and the movable
object 701. Thus, the application developer does not need to be
involved in the complex data exchanging process. In some
embodiments, mapping manager 126 may be one implementation of data
manager 702. In such an embodiment, the mapping manager is used for
managing mapping data, including generating a map using mapping
data and positioning data and rendering the generated map for
display based on a default setting or a user selection.
[0079] For example, the onboard or mobile SDK can provide a series
of callback functions for communicating instant messages and for
receiving the execution results from a movable object. The onboard
or mobile SDK can configure the life cycle for the callback functions in order to make sure that the information interchange is stable and complete. For example, the onboard or mobile SDK can establish a connection between a movable object and an application on a smart phone (e.g., using an Android system or an iOS system). Following the life cycle of a smart phone system, the callback functions, such as the ones receiving information from the movable object, can take advantage of the patterns in the smart phone system and update their states according to the different stages in the life cycle of the smart phone system.
[0080] FIG. 8 illustrates an example of a movable object interface,
in accordance with various embodiments. As shown in FIG. 8, a
movable object interface 803 can be rendered on a display of a
client device or other computing devices representing statuses of
different components of a movable object 801. Thus, the
applications, e.g., APPs 804-806, in the movable object environment
800 can access and control the movable object 801 via the movable
object interface 803. As discussed, these apps may include an
inspection app 804, a viewing app 805, and a calibration app
806.
[0081] For example, the movable object 801 can include various
modules, such as a camera 811, a battery 812, a gimbal 813, and a
flight controller 814.
[0082] Correspondingly, the movable object interface 803 can
include a camera component 821, a battery component 822, a gimbal
component 823, and a flight controller component 824 to be rendered
on a computing device or other computing devices to receive user
input/instructions by way of using the APPs 804-806.
[0083] Additionally, the movable object interface 803 can include a
ground station component 826, which is associated with the flight
controller component 824. The ground station component operates to
perform one or more flight control operations, which may require a
high-level privilege.
[0084] FIG. 9 illustrates an example of components for a movable
object in a software development kit (SDK), in accordance with
various embodiments. The SDK 900 may be an onboard SDK implemented on an onboard mapping manager or a mobile SDK implemented on a mapping manager located on a client device or a mobile device. The SDK 900 may correspond to all or a portion of the mapping manager described above or may be used to implement the mapping manager as a standalone application. As shown in FIG. 9, the drone class 901 in the SDK 900 is an aggregation of other components 902-907 for a movable object (e.g., a drone). The drone class 901, which has access to the other components 902-907, can exchange information with the other components 902-907 and control them.
[0085] In accordance with various embodiments, an application may have access to only one instance of the drone class 901. Alternatively, multiple instances of the drone class 901 can be present in an application.
[0086] In the SDK, an application can connect to the instance of
the drone class 901 in order to upload the controlling commands to
the movable object. For example, the SDK may include a function for
establishing the connection to the movable object. Also, the SDK
can disconnect the connection to the movable object using an end
connection function. After connecting to the movable object, the
developer can have access to the other classes (e.g. the camera
class 902, the battery class 903, the gimbal class 904, and the
flight controller class 905). Then, the drone class 901 can be used
for invoking the specific functions, e.g. providing access data
which can be used by the flight controller to control the behavior,
and/or limit the movement, of the movable object.
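A minimal, hypothetical sketch of this aggregation pattern is shown below; the class, attribute, and method names are assumptions for illustration and are not the SDK's actual API:

```python
class DroneClient:
    """Hypothetical aggregation mirroring the drone class described above:
    it owns component objects and mediates command upload. The names here
    are illustrative assumptions, not an actual SDK interface."""
    def __init__(self):
        self.connected = False
        self.camera = {"photo_size": "4:3"}
        self.battery = {"level": 100}
        self.gimbal = {"pitch": 0.0}
        self.flight_controller = {"mode": "idle"}

    def start_connection(self):
        self.connected = True          # establish the connection

    def end_connection(self):
        self.connected = False         # end-connection function

    def upload_command(self, component, command, value):
        if not self.connected:
            raise RuntimeError("connect to the drone instance first")
        getattr(self, component)[command] = value


drone = DroneClient()
drone.start_connection()
drone.upload_command("flight_controller", "mode", "waypoint_mission")
drone.end_connection()
```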
[0087] In accordance with various embodiments, an application can
use a battery class 903 for controlling the power source of a
movable object. Also, the application can use the battery class 903
for planning and testing the schedule for various flight tasks. As the battery is one of the most constrained elements of a movable object, the application may need to carefully consider the status of the battery, not only for the safety of the movable object but also to make sure that the movable object can finish the designated tasks. For example, the battery class 903 can be configured such that if the battery level is low, the movable object can terminate its tasks and return home directly. For example, if the movable object is
determined to have a battery level that is below a threshold level,
the battery class may cause the movable object to enter a power
savings mode. In power savings mode, the battery class may shut
off, or reduce, power available to various components that are not
integral to safely returning the movable object to its home. For
example, cameras that are not used for navigation and other
accessories may lose power, to increase the amount of power
available to the flight controller, motors, navigation system, and
any other systems needed to return the movable object home, make a
safe landing, etc.
[0088] Using the SDK, the application can obtain the current status
and information of the battery by invoking a function to request information from the Drone Battery Class. In some embodiments,
the SDK can include a function for controlling the frequency of
such feedback.
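For illustration, the low-battery behavior described above might be sketched as follows; the threshold value, component names, and action strings are assumptions, not values from the disclosure:

```python
LOW_BATTERY_THRESHOLD = 20  # percent; illustrative assumption

NON_ESSENTIAL = {"payload_camera", "auxiliary_lights"}

def check_battery(level_percent, powered_components):
    """Sketch of the battery-class behavior described above: below a
    threshold, shed non-essential loads and command a return home."""
    actions = []
    if level_percent < LOW_BATTERY_THRESHOLD:
        # Enter power-savings mode: cut power to non-essential components.
        for component in sorted(powered_components & NON_ESSENTIAL):
            actions.append(f"power_off:{component}")
        actions.append("terminate_tasks")
        actions.append("return_home")
    return actions

print(check_battery(15, {"payload_camera", "flight_controller", "motors"}))
# -> ['power_off:payload_camera', 'terminate_tasks', 'return_home']
```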
[0089] In accordance with various embodiments, an application can
use a camera class 902 for defining various operations on the
camera in a movable object, such as an unmanned aircraft. For
example, in the SDK, the Camera Class includes functions for receiving media data stored on an SD card, getting and setting photo parameters, taking photos, and recording videos.
[0090] An application can use the camera class 902 for modifying the settings of photos and recordings. For example, the SDK may include a function that enables the developer to adjust the size of photos taken. Also, an application can use a media class for maintaining the photos and recordings.
[0091] In accordance with various embodiments, an application can
use a gimbal class 904 for controlling the view of the movable
object. For example, the Gimbal Class can be used for configuring an actual view, e.g., setting a first person view of the movable object. Also, the Gimbal Class can be used for automatically stabilizing the gimbal, in order to keep it focused in one direction.
Also, the application can use the Gimbal Class to change the angle
of view for detecting different objects.
[0092] In accordance with various embodiments, an application can
use a flight controller class 905 for providing various flight
control information and status about the movable object. As
discussed, the flight controller class can include functions for
receiving and/or requesting access data to be used to control the
movement of the movable object across various regions in a movable
object environment.
[0093] Using the Flight Controller Class, an application can
monitor the flight status, e.g. using instant messages. For
example, the callback function in the Flight Controller Class can
send back the instant message every one thousand milliseconds (1000
ms).
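A hypothetical sketch of such a periodic status callback is shown below; the helper names and the use of Python threading are assumptions made only for illustration:

```python
import threading

def start_status_feed(fetch_status, on_status, period_s=1.0):
    """Call on_status(fetch_status()) every period_s seconds (1000 ms by
    default), mimicking the periodic instant-message callback described
    above. Returns an Event; set it to stop the feed."""
    stop = threading.Event()

    def loop():
        while not stop.wait(period_s):
            on_status(fetch_status())

    threading.Thread(target=loop, daemon=True).start()
    return stop

# Usage sketch: print a simulated flight status once per second.
# stop = start_status_feed(lambda: {"altitude_m": 50.0}, print)
# ...later: stop.set()
```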
[0094] Furthermore, the Flight Controller Class allows a user of
the application to investigate the instant message received from
the movable object. For example, the pilots can analyze the data
for each flight in order to further improve their flying
skills.
[0095] In accordance with various embodiments, an application can
use a ground station class 907 to perform a series of operations
for controlling the movable object.
[0096] For example, the SDK may require applications to have an
SDK-LEVEL-2 key for using the Ground Station Class. The Ground
Station Class can provide one-key-fly, one-key-go-home, manually controlling the drone by app (i.e., joystick mode), setting up a
cruise and/or waypoints, and various other task scheduling
functionalities.
[0097] In accordance with various embodiments, an application can
use a communication component for establishing the network
connection between the application and the movable object.
[0098] FIG. 10 shows a flowchart of a method of target mapping in a
movable object environment, in accordance with various embodiments.
At operation/step 1002, mapping data can be obtained from a
scanning sensor (e.g., a LiDAR sensor) supported by a movable
object (e.g., a UAV). In some embodiments, the scanning sensor can
be a LiDAR sensor. At operation/step 1004, positioning data can be
obtained from a positioning sensor (e.g., a GPS sensor, an RTK
sensor, an IMU sensor, and/or other positioning sensors or a
combination thereof) supported by the movable object (e.g., a UAV).
In some embodiments, the positioning sensor can be an RTK sensor or
an IMU sensor.
[0099] At operation/step 1006, the mapping data can be associated
with the positioning data based at least on time data associated
with the mapping data and the positioning data. In some
embodiments, associating the mapping data with the positioning data
may include upsampling the positioning data to include a number of
positions equal to a number of points in the mapping data, and
associating each point in the mapping data with a corresponding
position in the upsampled positioning data. In some embodiments,
the time data associated with the mapping data and the positioning
data may be obtained using clock circuitry providing a reference
clock signal electronically coupled to the scanning sensor and the
positioning sensor.
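For illustration, upsampling the positioning data so that each mapping point receives a time-matched position could be approximated with linear interpolation over the shared reference-clock timestamps; the function and variable names below are assumptions:

```python
import numpy as np

def associate(points_t, positions_t, positions_xyz):
    """Upsample positioning samples so there is one position per mapping
    point, then pair them by time, as described in paragraph [0099].

    points_t      : (N,) timestamps of LiDAR points (shared reference clock)
    positions_t   : (M,) timestamps of positioning samples, M <= N
    positions_xyz : (M, 3) positions from the positioning sensor
    Returns an (N, 3) array of positions, one per mapping point.
    """
    return np.column_stack([
        np.interp(points_t, positions_t, positions_xyz[:, axis])
        for axis in range(3)
    ])

# Example: 2 positioning samples upsampled to 4 point timestamps.
pts_t = np.array([0.0, 0.1, 0.2, 0.3])
pos_t = np.array([0.0, 0.3])
pos = np.array([[0.0, 0.0, 10.0], [3.0, 0.0, 10.0]])
print(associate(pts_t, pos_t, pos))
```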
[0100] At operation/step 1008, a map in a first coordinate system
may be generated based at least on the associated mapping data and
positioning data. In some embodiments, generating the map may
include, for each voxel of a plurality of voxels of the map,
determining one or more points from the mapping data to be located
in the voxel, and determining an occupancy probability for the
voxel based at least on a number of points in that voxel. In some
embodiments, the occupancy probability is determined based on a
variance of the positioning data associated with the one or more
points located in the voxel. In some embodiments, for each voxel
having an occupancy probability greater than an occupancy
probability threshold value, an average position of the one or more
points in the voxel can be calculated, and a point can be generated
in the map at the average position. In some embodiments, for each
voxel having an occupancy probability greater than the occupancy
probability threshold value, an average intensity value of the one
or more points in the voxel can be calculated, and the average
intensity value can be associated with the generated point in the
map. In an embodiment, the average intensity value is calculated
based on a feature of each point in the voxel, where the feature of
each point is associated with an elevation or a material detected
by the scanning sensor.
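A simplified, hypothetical sketch of the voxel step is shown below. It uses a saturating count-based occupancy model in place of the variance-based probability described above, and the voxel size, threshold, and constant k are assumptions for illustration:

```python
import numpy as np

def voxelize(points, intensities, voxel_size=0.5, occ_threshold=0.5, k=5.0):
    """Bucket points into voxels, estimate an occupancy probability from
    the point count, and emit one averaged point and averaged intensity
    per occupied voxel (simplified sketch of paragraph [0100])."""
    keys = np.floor(points / voxel_size).astype(int)
    out_points, out_intensity = [], []
    for key in np.unique(keys, axis=0):
        mask = np.all(keys == key, axis=1)
        occupancy = 1.0 - np.exp(-mask.sum() / k)   # more points -> closer to 1
        if occupancy > occ_threshold:
            out_points.append(points[mask].mean(axis=0))
            out_intensity.append(intensities[mask].mean())
    return np.array(out_points), np.array(out_intensity)

pts = np.array([[0.1, 0.1, 0.1], [0.2, 0.15, 0.05], [0.12, 0.2, 0.1],
                [0.18, 0.1, 0.12], [5.0, 5.0, 5.0]])
inten = np.array([10.0, 12.0, 11.0, 13.0, 40.0])
print(voxelize(pts, inten))  # the isolated far point is rejected
```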
[0101] In some embodiments, the method may further include
determining a distribution of points in the mapping data, each of
the points in the mapping data associated with a distance from a
nearest neighboring point in the mapping data, and removing any
points associated with a distance greater than a distance threshold
value. In some embodiments, the method may further include
downsampling the mapping data by a scaling factor, dividing the
mapping data into a plurality of voxels, and outputting an average
point from the downsampled mapping data for each of the plurality
of voxels. In some embodiments, the method may further include
transforming the map into a second coordinate system and outputting
the transformed map. For example, the positioning data may be
converted from the scanner reference frame to a north-east-down
(NED) reference frame (or a NED coordinate system). The reference
frame to which the positioning data is converted may vary depending
on the application of the map that is being produced. For example,
if the map is being used in surveying, it may be converted to the
NED reference frame. For another example, if the map is being used
for rendering motions such as flight simulation, it may be
converted to the FlightGear coordinate system. Other applications of the map may call for converting the positioning data to different reference frames or different coordinate systems.
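The filtering, downsampling, and coordinate-conversion steps described in this paragraph could be sketched as follows; the brute-force nearest-neighbor search, the interpretation of the scaling factor, and the ENU-to-NED conversion are simplifying assumptions for illustration:

```python
import numpy as np

def remove_outliers(points, dist_threshold=1.0):
    """Drop points whose nearest-neighbor distance exceeds dist_threshold,
    per the distribution-based filter described in paragraph [0101]."""
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dists, np.inf)
    return points[dists.min(axis=1) <= dist_threshold]

def downsample(points, scale=2.0, voxel_size=1.0):
    """Downsample by a scaling factor, then output one average point per
    voxel. Keeping every scale-th point is an illustrative assumption."""
    kept = points[:: int(scale)]
    keys = np.floor(kept / voxel_size).astype(int)
    return np.array([kept[np.all(keys == key, axis=1)].mean(axis=0)
                     for key in np.unique(keys, axis=0)])

def to_ned(points_enu):
    """Convert east-north-up coordinates to north-east-down, one possible
    instance of the coordinate transformation mentioned above."""
    e, n, u = points_enu[:, 0], points_enu[:, 1], points_enu[:, 2]
    return np.column_stack([n, e, -u])

cloud = np.array([[0.0, 0.0, 0.0], [0.2, 0.1, 0.0],
                  [0.1, 0.3, 0.1], [50.0, 50.0, 50.0]])
print(to_ned(downsample(remove_outliers(cloud))))
```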
[0102] In some embodiments, geo-referencing as described above may
be combined with scan matching, such as Simultaneous Localization
and Mapping (SLAM) or LiDAR Odometry and Mapping (LOAM).
Traditional methods make use of SLAM with or without inertial
navigation input. For example, some methods inject IMU information into SLAM and sometimes inject odometry via GPS, which provides an improved mapping algorithm. Unlike traditional methods, embodiments can perform direct geo-referencing as discussed above, and then a layer of SLAM or LOAM can be added on top of the geo-references. This provides a robust mapping algorithm, as the geo-references serve as a floor on the quality of the resulting map.
[0103] In some embodiments, geo-referencing as described above may
be combined with normal distribution transformation (NDT). NDT is a
LiDAR scan registration method which is in between a feature-based
registration method (such as LOAM) and a point-based registration
method (such as iterative closest point). The "features" of the
world are described by multivariate Gaussian distributions defined
in each voxel. A probability density function (PDF) is generated
for each cell, and points are matched to the map by maximizing the sum of the probabilities generated by the PDFs for the points x under a transformation T:
T = \arg\min_{T} \sum_{i=1}^{k} -p_i(T x_i), \qquad p_i(x_i) \sim N(\mu_i, \Sigma_i), \qquad \mu_i = [\mu_x \ \mu_y \ \mu_z]^T
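For illustration, the objective above can be evaluated for a single candidate transform by fitting a Gaussian to the reference points in each voxel and summing the densities of the transformed points; the voxel size, regularization, and omission of the optimizer are assumptions in this sketch:

```python
import numpy as np

def ndt_score(ref_points, new_points, T, voxel_size=2.0, reg=1e-3):
    """Evaluate the NDT objective for one candidate rigid transform T
    (a 4x4 homogeneous matrix): fit a Gaussian (mu_i, Sigma_i) to the
    reference points in each voxel, then sum the Gaussian densities of the
    transformed new points. Minimizing the negative of this sum over T
    corresponds to the arg-min expression; the optimizer itself is omitted."""
    keys = np.floor(ref_points / voxel_size).astype(int)
    cells = {}
    for key in np.unique(keys, axis=0):
        pts = ref_points[np.all(keys == key, axis=1)]
        if len(pts) >= 3:
            mu = pts.mean(axis=0)
            sigma = np.cov(pts.T) + reg * np.eye(3)   # regularized covariance
            cells[tuple(key)] = (mu, np.linalg.inv(sigma), np.linalg.det(sigma))
    moved = (T[:3, :3] @ new_points.T).T + T[:3, 3]
    score = 0.0
    for x in moved:
        cell = tuple(np.floor(x / voxel_size).astype(int))
        if cell in cells:
            mu, sigma_inv, det = cells[cell]
            d = x - mu
            score += np.exp(-0.5 * d @ sigma_inv @ d) / np.sqrt((2 * np.pi) ** 3 * det)
    return score

ref = np.random.default_rng(0).normal([1.0, 1.0, 0.0], 0.3, size=(200, 3))
new = ref + np.array([0.05, 0.0, 0.0])   # slightly shifted scan
print(ndt_score(ref, new, np.eye(4)))    # score for the identity transform
```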
[0104] As discussed, in various embodiments, a movable object may
be used for performing real-time mapping of various application
environments, such as construction site mapping, surveying, target
object mapping, etc. In some embodiments, the movable object may be
an unmanned aerial vehicle (UAV), such as shown in FIG. 11, which
has been configured to perform real-time mapping. FIG. 11 shows an
isometric view 1100 of a movable object for performing real-time
mapping, in accordance with an embodiment. As discussed above, a
UAV in various embodiments may include a main body 1110. The main
body may include, or be coupled to, a sensing system, a
communications system, movement mechanisms, such as motors 1112
which may independently power rotors (not shown) to cause the UAV
to fly and navigate a predefined route and/or a route based on real
time user commands, and other systems and functional modules as
discussed above.
[0105] As shown in FIG. 11, a UAV configured to perform real-time
mapping may include a payload assembly (or a sensor assembly) 1102,
which may include a scanning sensor and a positioning sensor (as
discussed above), and a mounting assembly 1104 to connect the
payload assembly to the movable object body. In some embodiments,
the UAV may further include a landing gear assembly 1106 designed
to provide a secure platform when not in flight and during
landing/takeoff, while not interfering with the field of view (FOV)
of the scanning sensor of payload assembly 1102. In some
embodiments, the angle 1108 of the landing gear may be based on the
FOV of the scanning sensor in use. This may include fixed landing
gear, such as shown in FIG. 11, and adjustable landing gear, which
may change the angle 1108 of the landing gear dynamically, based on
the sensor in use.
[0106] FIG. 12 shows a rear view 1200 of a movable object (e.g., a
UAV) for performing real-time mapping, in accordance with an
embodiment. As shown in FIG. 12, a main body 1202 can be coupled to
the mounting assembly 1104. In some embodiments, the mounting
assembly can include expansion brackets 1204 which are coupled to
the main body 1202 using one or more fasteners, and which provide
clearance for various components of the movable object, such as
batteries, onboard computing device 1206, and other components. The
mounting assembly may further include a base plate 1208 coupled to
the expansion brackets using a plurality of dampers 1210. In some
embodiments, the payload assembly 1102 including a LiDAR/global navigation satellite system (GNSS) IMU system is secured to the
underside of the movable object via the mounting assembly using a
plurality of mounting points of the base plate 1208. Vibration
isolation is provided to the LiDAR/GNSS IMU system via the base
plate 1208 resting on the plurality of dampers 1210.
[0107] As shown in FIG. 12, base plate 1208 may include dovetail
mounts 1212. In various embodiments, the dovetail mounts 1212
enable the user to slide the payload assembly 1102 (e.g., including
a LiDAR/GNSS IMU system) into position and secure it with two fasteners,
such as M4 wingnuts or other suitable fasteners. This enables the
payload assembly, including the LiDAR/GNSS IMU system to be easily
removed from the movable object via the mounting assembly. In some
embodiments, the angle 1216 of the landing gear legs from the
movable object may be adjustable, depending on the size of the
payload, batteries, or other components and/or based on the FOV of
the scanning sensor or other sensors included in the payload.
[0108] FIG. 13 shows an isometric exploded view 1300 of the
mounting assembly and payload assembly, in accordance with an
embodiment. As shown in FIG. 13, mounting assembly 1104 may include
base plate 1208 which is coupled to the expansion brackets 1204 at
a plurality of locations 1304. For example, as shown in FIG. 13,
the lower end of each expansion bracket (e.g., the end closest to
the payload assembly) may include a hole which aligns with a
corresponding hole on the base plate 1208. Each location may
include a damper that, when assembled, is under compression between
the base plate 1208 and the expansion brackets 1204. Each damper
may be made of a compressible material, such as a natural or
synthetic rubber, thermoplastic elastomer, or other shock absorbing
material. The dampers may be secured between the base plate 1208
and the expansion brackets 1204 at the plurality of locations. For
example, each damper may include flared ends. A lower flared end
may be forced through a hole in an expansion bracket 1204 and an
upper flared end may be forced through a hole in the base plate
1208. Such an arrangement couples the base plate 1208 to the
expansion brackets 1204 using the plurality of dampers. In the
example shown in FIG. 13, there may be six dampers coupling the
base plate 1208 to the expansion brackets 1204, three on either
side of the base plate 1208. However, the base plate 1208 may be
coupled to the expansion brackets 1204 at more or fewer locations,
in accordance with various embodiments.
[0109] The base plate 1208 may further include a plurality of holes
1306 which line up with holes 1308 of the payload assembly when
assembled via the dovetails. A fastener may be inserted through the
aligned holes to secure the payload assembly to the base plate
1208. As shown in FIG. 13, the payload assembly may include two
payload support brackets 1302. Each payload support bracket may
include a dovetail which enables the payload support bracket to be
connected to the dovetail groove in the base plate 1208.
Alternatively, in some embodiments, the base plate 1208 may include
a dovetail that can be inserted into a dovetail groove in each
payload support bracket 1302. The payload assembly may further
include a pivot bracket 1312. The pivot bracket can be connected to
scanning sensor 1310 (such as a LiDAR sensor) using one or more
fasteners. The pivot bracket may include one or more pivot arms
1314 which can be connected to the payload support brackets 1302.
For example, in the embodiment of FIG. 13, the pivot bracket
includes two pivot arms 1314. A hole 1316 in each pivot arm can be
aligned with a hole 1318 in each payload support bracket 1302 to
secure the scanning sensor 1310 at a particular scanning angle. In
various embodiments, the payload support bracket can be secured to
the pivot bracket using any suitable fastener which is removable
such that the pivot bracket can be pivoted to another scanning
angle (e.g., aligned with a different hole in the payload support
bracket) and then recoupled to change the scanning angle of the
scanning sensor 1310.
[0110] As shown in FIG. 13, the dovetail design on the pivot
bracket enables the LiDAR/GNSS IMU system to be quickly removed.
The pivot bracket 1312 may further include at least one mounting
point 1319 to which a positioning sensor 1320 (e.g., a GNSS IMU
system, an RTK system, an RTK/IMU system, etc.) can be attached. By
mounting the pivot bracket 1312 to the scanning sensor 1310 on one
side of the pivot bracket 1312 and the positioning sensor 1320 on
the other side of the pivot bracket 1312, any relative motion
between the two sensors is eliminated or greatly reduced. The
positioning sensor 1320 can be bolted, screwed, epoxied, or otherwise fixedly coupled to the pivot bracket 1312. In some
embodiments, the positioning sensor 1320 may be permanently coupled
to the pivot bracket 1312 or may be removably coupled to the pivot
bracket 1312. The positioning sensor 1320 may be protected from the
elements and/or electromagnetic interference using a positioning
sensor enclosure 1322 which may be coupled to the pivot bracket
1312. The positioning sensor enclosure 1322 provides ingress
protection (IP) for the electronics from foreign debris and
moisture. In some embodiments, the enclosure 1322 is sprayed with a
thin layer of paint or other substance to protect the GNSS IMU from
jamming caused by electromagnetic interference. This may include
copper paint, or other material suitable for preventing jamming due
to electromagnetic interference.
[0111] Additionally, in some embodiments, an onboard computing
device 1206 (such as computing device 112 described above) may be
coupled to the base plate 1208 using a computing device bracket
1324. In some embodiments, expansion brackets 1204 can be
stabilized using alignment brackets 1326. These can ensure that the
expansion brackets 1204 remain parallel while in flight and
eliminate or reduce undesirable twisting of the mounting assembly
which might affect the reliability of the sensor data, such as
mapping data and positioning data, collected by the scanning sensor
and positioning sensor.
[0112] FIG. 14 shows an isometric view 1400 of base plate 1208, in
accordance with some embodiments. Base plate 1208 can include the
plurality of mounting locations 1402 at which the dampers may be
used to couple the base plate to the expansion brackets, as
discussed above. Additionally, mounting holes 1306 may be located
above the dovetail grooves 1404, with the holes penetrating from
the surface of the base plate 1208 to the dovetail grooves 1404,
allowing them to be aligned with holes of the payload support
interface as discussed above.
[0113] FIG. 15 shows an isometric assembled view 1500 of the
mounting assembly and payload assembly, in accordance with an
embodiment. As shown in FIG. 15, the dovetails of payload support
brackets 1302 are inserted into the dovetail grooves of base plate
1208. Positioning sensor enclosure 1322 is in place covering the
positioning sensor and coupled to the pivot bracket. In the
embodiment of FIG. 15, the pivot arms are aligned with an
intermediate hole of the payload support brackets 1302, causing the
scanning sensor to be set at an angle between 90 degrees and 0
degrees, such as 35 degrees relative to the base plate 1208 or a
horizontal plane when a UAV (e.g., as shown above with respect to
FIG. 11) is in its landing pose. The expansion brackets 1204 and
alignment brackets 1326 can be coupled to a UAV (e.g., as shown
above with respect to FIG. 11), to secure the mounting assembly to
the UAV for mapping missions, as discussed above.
[0114] FIGS. 16A-16D show additional views 1600-1606 of the payload
assembly in accordance with various embodiments. As shown in FIG.
16A at 1600, the pivot bracket 1312 may include a plurality of
mounting points 1608 to which the positioning sensor may be
coupled. As shown in FIG. 16B at 1602, the positioning sensor can
be mounted to the plurality of mounting points 1608 by aligning a
plurality of mounting holes 1610 in the positioning sensor 1320 and
securing the positioning sensor 1320 to the mounting points 1608
using fasteners, such as screws or other suitable fasteners. As
shown in FIG. 16C, at 1604, the payload support brackets 1302 can
be coupled to the pivot brackets at pivot arms 1314. As discussed,
the pivot arms can be aligned to corresponding holes on the payload
support brackets to secure the scanning sensor 1310 at a predefined
scanning angle. In some embodiments, the payload support brackets
1302 may include grooves to which the pivot arms 1314 may be
coupled, enabling a selection of a range of angles rather than
particular angles predefined by the hole placements in the payload
support brackets. As shown in FIG. 16D, at 1606, the positioning
sensor enclosure 1322 can be secured to the pivot bracket to
protect the positioning sensor from debris, electromagnetic
interference, and/or other environmental hazards. As discussed, the
assembled payload assembly may be connected to the base plate of
the mounting assembly using dovetails and coupled to the base plate
by aligning holes 1308 with corresponding holes in the base plate
and securing with a suitable fastener.
[0115] FIG. 17 shows an alternative enclosure 1700 to prevent
jamming of the positioning sensor of a movable object in accordance
with various embodiments. In order to prevent the GNSS-IMU from
jamming due to electromagnetic interference, an aluminum enclosure can
be added which serves as a Faraday cage with its various circular
cutouts. The size and locations of the circular cutouts may be
selected for the frequency or frequencies of electromagnetic
interference that are expected and/or those frequencies most likely
to interfere with the functioning of the IMU.
[0116] FIGS. 18A and 18B show example alignments of lower expansion
brackets in accordance with various embodiments. As shown at 1800
and 1802, an alignment bracket 1326 can provide alignment of the
expansion brackets such that the expansion brackets remain parallel to one another. Absent the alignment brackets, the
expansion brackets may freely rotate in place after installation on
the movable object. Thus, there is a need to constrain the
expansion brackets such that the back planes of the brackets remain
parallel. In some embodiments, the alignment bracket may include a
square (or other shaped) extension 1804. This may be inserted into
correspondingly shaped holes 1806 in the expansion brackets
1204.
[0117] FIG. 19 shows an example 1900 of the alignment bracket 1326
being connected to the movable object body 1202 and the expansion
brackets 1204, in accordance with an embodiment. In some
embodiments, two threaded screw holes can be used: a first for
alignment and a second for mounting on the movable object. For
example, movable object body 1202 may include holes 1902 through
which a screw may be inserted and screwed into a first threaded
screw hole of alignment bracket 1326. Alignment bracket 1326 may
further include a second threaded screw hole 1904 which may be
aligned with a hole 1906 in expansion brackets 1204, as shown in
FIG. 19.
[0118] FIG. 20 shows an alternative 2000 mechanical attachment of
the payload assembly (including a scanning sensor and a positioning
sensor, as discussed above) to a movable object via a mounting
assembly, in accordance with various embodiments. Plate 2002 is
fastened to the underside of the movable object directly or
indirectly via the mounting assembly, and plate 2004 can be
fastened to the payload assembly. The two plates can be coupled
using a plurality of dampers 2006 which are under compression when
assembled. A fastener (such as a wingnut or other suitable fastener) can be attached to the bottom of the rod piece 2008, which is inserted through plates 2002 and 2004 and fastened to the payload assembly through the hole 2010 in a crossbar 2012 connecting the payload support brackets 1302, making the payload assembly easily removable. In some embodiments, plate 2002 may be coupled to the
removable. In some embodiments, plate 2002 may be coupled to the
base plate of the mounting assembly described above or may be used
to replace the base plate of the mounting assembly. In other
embodiments, plate 2002 may be coupled directly to the main body of
a movable object (e.g., a UAV).
[0119] In some embodiments, a mirror or a plate with a reflective
coating material may be installed on the scanning sensor (such as
the LiDAR sensor) via the fastener, at a certain angle that
overlaps with at least a portion of the FOV of the scanning sensor
in use. By way of applying the mirror or the plate with the
reflective coating material, the FOV of the scanning sensor may be
broadened to include a second FOV that is created by laser light emitted from the scanning sensor and reflected by the mirror or the plate with the reflective coating material.
[0120] FIGS. 21-23 show an alternative mechanical attachment of the
LiDAR and positioning sensor to a movable object in accordance with
various embodiments. As shown in FIG. 21, the alternative
mechanical attachment shown at 2100 may include a payload assembly
similar to that discussed above. The payload assembly may include
payload support brackets 2102 coupled by a crossbar 2104. A
T-extrusion member 2106 can be fastened to an upper surface of the
crossbar (e.g., the surface closest to the movable object). The
T-extrusion member may be permanently coupled to the crossbar
(e.g., welded, epoxied, or otherwise fixedly coupled) or fastened
using one or more removable fasteners. A long axis 2108 of the
T-extrusion member may be substantially orthogonal to a long axis
2110 of a clearance slot in alternative base plate 2112. FIG. 22
shows an alternative view 2200 of the mechanical attachment. To
attach the payload assembly, the payload assembly can be rotated 90
degrees relative to the alternative base plate 2112 such that the
long axis 2108 of the T-extrusion member is aligned with the long
axis 2110 of the clearance slot and the T-extrusion member can be
raised through the clearance slot. For example, as shown in FIG. 23
the T-extrusion member 2106 has been raised through the clearance
slot of the alternative base plate 2112 and then the payload
assembly has been rotated 90 degrees in the opposite direction.
Once positioned as shown in FIG. 23, the T-extrusion member can be
secured to the top pressure plate 2300 with fasteners (e.g.,
wingnuts or other suitable fasteners). The top pressure plate 2300
can be coupled to the alternative base plate 2112 using a plurality
of dampers 2302 to reduce the movement of the payload assembly
relative to the alternative base plate 2112.
[0121] FIG. 24 shows an example 2400 of a landing gear in
accordance with various embodiments. As shown in FIG. 24, the
landing gear legs 2402 may be angled 1108 based on the field of
view (FOV) of the scanning sensor 1310. For example, in the
embodiment of FIG. 24, the landing gear legs 2402 may be at a fixed
angle of 55 degrees (e.g., measured in the plane formed by the
landing gear legs) to accommodate the FOV of the particular
scanning sensor in use. Additionally, or alternatively, the angle
1108 of the landing gear legs 2402 may be adjusted, manually or
automatically, based on the FOV of the scanning sensor in use. In
some embodiments, the landing gear legs 2402 may be coupled to the
movable object using a landing gear bracket 2404. The landing gear
bracket can provide the fixed angle 1108 of the landing gear legs
2402. The landing gear bracket can be connected to the movable
object using connecting member 2406. In some embodiments, the
connecting member 2406 and the landing gear legs 2402 can be made
of the same material. In some embodiments, the connecting member
can connect to a landing gear mounting bracket of the movable
object body via a quick release connection.
[0122] FIG. 25 shows an example 2500 of a landing gear bracket 2404
in accordance with various embodiments. In some embodiments, the
landing gear bracket 2404 can secure the landing gear legs 2402 as
well as provide shock absorption for the landing gear legs. For
example, to improve upon the landing gear performance from drops
and other shocks, the landing gear bracket 2404 may include a
rubber (such as an SPE I synthetic elastomer) damper to fill the void
at the location where the landing gear legs and connection member
(such as carbon fiber tubes) intersect. This rubber damper helps to
absorb and dissipate energy during an impulse force.
[0123] FIG. 26 shows an example 2600 of an alternative landing gear
mounting point in accordance with various embodiments. A mounting
point 2602 can be used to attach landing gear legs on the movable
object such that the landing gear remains outside the field of view
of the scanning sensor. This design uses the space adjacent to the
motor heat sinks 2604 to attach the landing gear to the movable
object arms. Using this small area helps to improve the vertical
alignment of landing legs. In some embodiments, a landing gear leg
can be attached adjacent to each motor heat sink. For example, if
the movable object is a quadcopter with four motors and
corresponding heat sinks, then four landing gear mounting points
can be provided with four landing gear legs. In some embodiments,
at least three landing gear mounting points and associated landing
gear legs may be provided regardless of the number of motors
equipped on the movable object.
[0124] FIGS. 27-29 show LiDAR fields of view in accordance with
various embodiments. In the example 2700 shown in FIG. 27, the
LiDAR is positioned to scan at 35 degrees. The LiDAR's FOV is in
between the two planes 2702. In this position the FOV of the LiDAR
would intersect the traditional landing gear assembly 2704 (shown
dashed to indicate where it would be located if it were provided).
As shown, the traditional landing gear assembly 2704 would
partially obstruct the FOV of the scanning sensor. As such, to
obtain a complete scan of an area would require additional scanning
missions to be performed that account for the blind spots
introduced by the traditional landing gear assembly. To address
this, the angled landing gear assembly 2706 can include angled
legs, as discussed above. As discussed, the landing gear legs may
be angled depending on the FOV of the scanning sensor in use. As
shown in FIG. 27, when so angled, the landing gear legs remain
outside of the LiDAR's field of view at the 35-degree position. This
eliminates the obstructions caused by the traditional landing gear
assembly, leading to more efficient scanning missions. FIGS. 28 and
29 show the LiDAR FOV at its 90-degree position. In the example
2800 of FIG. 28, the UAV is shown from the side view, with the
LiDAR positioned to scan at 90 degrees. The FOV 2802 of the LiDAR
in this position is obstructed by the traditional landing gear
assembly 2704, similar to that discussed above with respect to FIG.
27. However, the angled landing gear assembly 2706 does not
obstruct the FOV 2802 of the LiDAR in this position. FIG. 29 shows
an isometric view 2900 of the UAV with the LiDAR in the 90-degree
position. As discussed with respect to FIG. 28, the angled landing
gear assembly 2706 does not obstruct the FOV 2802 in this position
as the traditional landing gear assembly 2704 would.
[0125] FIGS. 30-32 show examples of angled positions of a scanning
sensor coupled to a movable object, in accordance with various
embodiments. In the example 3000 shown in FIG. 30, the LiDAR 1310
can be positioned at 0 degrees relative to horizontal. As discussed
above, this position can be achieved using the pivot bracket 1312
and the payload support brackets 1302. In the example 3100 shown in
FIG. 31, the LiDAR 1310 can be positioned at 35 degrees relative to
horizontal. This angle position may also be achieved using the
payload support brackets 1302 and the pivot bracket 1312, by
aligning corresponding holes in the brackets for the 35-degree
position. Similarly, the example 3200 shown in FIG. 32 shows the
LiDAR 1310 positioned at 90 degrees relative to horizontal. As
discussed above, this position may be achieved by changing the
alignment of holes in the pivot bracket 1312 to corresponding holes
in the payload support bracket 1302. In some embodiments, rather
than holes which provide predefined angular positions, the payload
support brackets 1302 may include slots, which enable the user to
select various angular positions.
Payload Support Bracket
[0126] FIG. 33 shows example 3300 scanning patterns that may be
implemented by LiDAR sensors that may be used in various
embodiments. A LiDAR system is an active sensing system that emits
light beams and measures a two-way travel time (i.e.
time-of-flight) for the reflected light detected by the LiDAR
sensor. The collected sensor data may generally be used to measure
a range or a distance to an object which has reflected the light
emitted by the LiDAR. Further, the object's position in a
three-dimensional space (e.g., recorded with x-, y-, and
z-coordinates, or latitude, longitude, and elevation values, or
other coordinate systems, etc.) may be determined using (1) the
detected two-way travel time of the emitted light beam, (2) the
scanning angle of the light beam in reference to the
three-dimensional space, and/or (3) the absolute location of the
LiDAR sensor detected using a GPS, GNSS, INS or IMU sensor,
etc.
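As an illustrative worked example of the relationship just described, a single return can be converted to a 3D point from its two-way travel time, scan angles, and the sensor position; the spherical angle convention below is an assumption:

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def lidar_return_to_xyz(two_way_time_s, azimuth_rad, elevation_rad, sensor_xyz):
    """Convert one LiDAR return into a 3D point from (1) the two-way travel
    time, (2) the beam's scan angles, and (3) the sensor's absolute
    position, per paragraph [0126]."""
    rng = 0.5 * SPEED_OF_LIGHT * two_way_time_s       # one-way range
    direction = np.array([
        np.cos(elevation_rad) * np.cos(azimuth_rad),
        np.cos(elevation_rad) * np.sin(azimuth_rad),
        np.sin(elevation_rad),
    ])
    return np.asarray(sensor_xyz) + rng * direction

# A return after ~667 ns round trip lies roughly 100 m from the sensor.
print(lidar_return_to_xyz(667e-9, np.deg2rad(30), np.deg2rad(-10), [0, 0, 50]))
```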
[0127] Different LiDAR sensors may be associated with different
scan patterns, scan frequencies, and/or scan angles. The scan
patterns of a LiDAR sensor can be virtually any waveform by way of
utilizing different scanning mechanisms (e.g., using a
constant-velocity rotating polygon mirror or an oscillating
mirror). Some examples of scan patterns include parallel scan
lines, which may be generated by a rotating polygon mirror, or
sawtooth scan lines which may be generated by an oscillating
mirror. Other examples may include a sinusoidal scan pattern 3302,
sawtooth scan pattern 3304, elliptical scan pattern 3306, spiral
scan pattern 3308, or flower shape scan pattern 3310, or uniform
scan pattern 3312 (which may be a series of concentric scans, such
as circular, oval, or other scan shapes).
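For illustration only, angular samples for two of the scan patterns named above (sinusoidal and spiral) can be generated as follows; the sweep angles, line counts, and turn counts are arbitrary assumptions:

```python
import numpy as np

def sinusoidal_pattern(n=1000, sweep_deg=30.0, lines=10):
    """Angular samples for a sinusoidal scan pattern: the beam oscillates
    in elevation while advancing in azimuth."""
    t = np.linspace(0.0, 1.0, n)
    azimuth = sweep_deg * (2.0 * t - 1.0)
    elevation = sweep_deg * np.sin(2.0 * np.pi * lines * t)
    return azimuth, elevation

def spiral_pattern(n=1000, max_deg=30.0, turns=8):
    """Angular samples for a spiral scan pattern: the off-axis angle grows
    linearly while the beam rotates about the sensor axis."""
    t = np.linspace(0.0, 1.0, n)
    radius = max_deg * t
    theta = 2.0 * np.pi * turns * t
    return radius * np.cos(theta), radius * np.sin(theta)

az, el = sinusoidal_pattern()
print(az[:3], el[:3])
```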
[0128] LiDAR data may be collected or recorded as discrete points
or as a full waveform. Discrete points identify and record points
at each peak location in the waveform curve. A full waveform
records a distribution of returned energy and thus captures more
information compared to discrete points. Whether collected as discrete points or as a full waveform, LiDAR data are made available as discrete points, known as a LiDAR point cloud. A LiDAR point cloud is usually stored in the .las format (or the .laz format, which is a highly compressed version of .las). Each LiDAR point and its metadata may include various data attributes, such as associated
coordinate values, an intensity value representing the amount of
light energy recorded by the sensor, or classification data
representing the type of object the laser return reflected off of
(such as classified as ground or non-ground, as different
altitudes, or as different material features), etc.
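One possible in-memory representation of these per-point attributes is a structured array; the field names and dtypes below are illustrative assumptions and are not the .las specification:

```python
import numpy as np

# One possible layout for the per-point attributes listed above
# (coordinates, intensity, classification).
point_dtype = np.dtype([
    ("x", np.float64), ("y", np.float64), ("z", np.float64),
    ("intensity", np.uint16),       # amount of light energy recorded
    ("classification", np.uint8),   # e.g., 2 = ground, 1 = unclassified
])

cloud = np.zeros(3, dtype=point_dtype)
cloud["x"] = [10.0, 10.5, 11.0]
cloud["y"] = [20.0, 20.5, 21.0]
cloud["z"] = [1.2, 1.3, 5.7]
cloud["intensity"] = [180, 200, 90]
cloud["classification"] = [2, 2, 1]

ground = cloud[cloud["classification"] == 2]   # select ground returns
print(ground["z"].mean())
```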
[0129] Many features can be performed in, using, or with the
assistance of hardware, software, firmware, or combinations
thereof. Consequently, features may be implemented using a
processing system (e.g., including one or more processors).
Exemplary processors can include, without limitation, one or more
general purpose microprocessors (for example, single or multi-core
processors), application-specific integrated circuits,
application-specific instruction-set processors, graphics
processing units, physics processing units, digital signal
processing units, coprocessors, network processing units, audio
processing units, encryption processing units, and the like.
[0130] Features can be implemented in, using, or with the
assistance of a computer program product which is a storage medium
(media) or computer readable medium (media) having instructions
stored thereon/in which can be used to program a processing system
to perform any of the features presented herein. The storage medium
can include, but is not limited to, any type of disk including
floppy disks, optical discs, DVD, CD-ROMs, microdrive, and
magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs,
flash memory devices, magnetic or optical cards, nanosystems
(including molecular memory ICs), or any type of media or device
suitable for storing instructions and/or data.
[0131] Stored on any one of the machine readable medium (media),
features can be incorporated in software and/or firmware for
controlling the hardware of a processing system, and for enabling a
processing system to interact with other mechanisms utilizing the
results. Such software or firmware may include, but is not limited
to, application code, device drivers, operating systems and
execution environments/containers.
[0132] Features of the invention may also be implemented in
hardware using, for example, hardware components such as
application specific integrated circuits (ASICs) and
field-programmable gate array (FPGA) devices. Implementation of the
hardware state machine so as to perform the functions described
herein will be apparent to persons skilled in the relevant art.
[0133] Additionally, the present invention may be conveniently
implemented using one or more conventional general purpose or
specialized digital computer, computing device, machine, or
microprocessor, including one or more processors, memory and/or
computer readable storage media programmed according to the
teachings of the present disclosure. Appropriate software coding
can readily be prepared by skilled programmers based on the
teachings of the present disclosure, as will be apparent to those
skilled in the software art.
[0134] While various embodiments have been described above, it
should be understood that they have been presented by way of
example, and not limitation. It will be apparent to persons skilled
in the relevant art that various changes in form and detail can be
made therein without departing from the spirit and scope of the
invention.
[0135] The present invention has been described above with the aid
of functional building blocks illustrating the performance of
specified functions and relationships thereof. The boundaries of
these functional building blocks have often been arbitrarily
defined herein for the convenience of the description. Alternate
boundaries can be defined so long as the specified functions and
relationships thereof are appropriately performed. Any such
alternate boundaries are thus within the scope and spirit of the
invention.
[0136] The foregoing description has been provided for the purposes
of illustration and description. It is not intended to be
exhaustive or to limit the invention to the precise forms
disclosed. The breadth and scope should not be limited by any of
the above-described exemplary embodiments. Many modifications and
variations will be apparent to the practitioner skilled in the art.
The modifications and variations include any relevant combination
of the disclosed features. The embodiments were chosen and
described in order to best explain the principles of the invention
and its practical application, thereby enabling others skilled in
the art to understand the invention for various embodiments and
with various modifications that are suited to the particular use
contemplated. It is intended that the scope of the invention be
defined by the following claims and their equivalents.
[0137] In the various embodiments described above, unless
specifically noted otherwise, disjunctive language such as the
phrase "at least one of A, B, or C," is intended to be understood
to mean either A, B, or C, or any combination thereof (e.g., A, B,
and/or C). As such, disjunctive language is not intended to, nor
should it be understood to, imply that a given embodiment requires
at least one of A, at least one of B, or at least one of C to each
be present.
* * * * *