U.S. patent application number 15/698361 was filed with the patent office on 2017-09-07 and published on 2019-03-07 for a method and apparatus for support surface edge detection.
The applicant listed for this application is SYMBOL TECHNOLOGIES, LLC. The invention is credited to Richard Jeffrey Rzeszutek.
| Publication Number | 20190073554 |
| Application Number | 15/698361 |
| Family ID | 65364232 |
| Filed | 2017-09-07 |
| Published | 2019-03-07 |
*(Eleven patent drawing sheets, US20190073554A1-20190307 D00000 through D00010, omitted.)*
| United States Patent Application | 20190073554 |
| Kind Code | A1 |
| Inventor | Rzeszutek; Richard Jeffrey |
| Publication Date | March 7, 2019 |
METHOD AND APPARATUS FOR SUPPORT SURFACE EDGE DETECTION
Abstract
A method of detecting an edge of a support surface by an imaging
controller includes: obtaining a plurality of depth measurements
captured by a depth sensor and corresponding to an area containing
the support surface; selecting, by the imaging controller, a
candidate set of the depth measurements based on at least one of
(i) an expected proximity of the edge of the support surface to the
depth sensor, and (ii) an expected orientation of the edge of the
support surface relative to the depth sensor; fitting, by the
imaging controller, a guide element to the candidate set of depth
measurements; and detecting, by the imaging controller, an output
set of the depth measurements corresponding to the edge from the
candidate set of depth measurements according to a proximity
between each candidate depth measurement and the guide element.
| Inventors | Rzeszutek; Richard Jeffrey (Toronto, CA) |
| Applicant | SYMBOL TECHNOLOGIES, LLC (Lincolnshire, IL, US) |
| Family ID | 65364232 |
| Appl. No. | 15/698361 |
| Filed | September 7, 2017 |
| Current U.S. Class | 1/1 |
| Current CPC Class | G01S 17/89 (20130101); G06K 7/10792 (20130101); G06K 9/00624 (20130101); G01S 17/86 (20200101); G06K 9/3241 (20130101); G01S 17/42 (20130101); G01S 7/4808 (20130101); G06K 9/4604 (20130101); G06K 9/00664 (20130101); G06K 9/3233 (20130101); G01S 17/06 (20130101) |
| International Class | G06K 9/46 (20060101); G01S 17/89 (20060101); G06K 9/32 (20060101); G06K 7/10 (20060101); G01S 17/02 (20060101); G06K 9/00 (20060101); G01S 17/06 (20060101) |
Claims
1. A method of detecting an edge of a support surface by an imaging
controller, comprising: obtaining a plurality of depth measurements
captured by a depth sensor and corresponding to an area containing
the support surface; selecting, by the imaging controller, a
candidate set of the depth measurements based on at least one of
(i) an expected proximity of the edge of the support surface to the
depth sensor, and (ii) an expected orientation of the edge of the
support surface relative to the depth sensor; fitting, by the
imaging controller, a guide element to the candidate set of depth
measurements; and detecting, by the imaging controller, an output
set of the depth measurements corresponding to the edge from the
candidate set of depth measurements according to a proximity
between each candidate depth measurement and the guide element.
2. The method of claim 1, wherein obtaining the depth measurements
comprises obtaining a lidar scan including, for each of a plurality
of sweep angles, a respective group of the depth measurements.
3. The method of claim 2, wherein selecting the candidate set
further comprises, for each sweep angle, selecting a single minimum
depth measurement from the group corresponding to the sweep
angle.
4. The method of claim 3, wherein obtaining the depth measurements
comprises obtaining a plurality of lidar scans each including, for
each of the plurality of sweep angles, a respective group of the
depth measurements; and wherein selecting the candidate set further
comprises, for each sweep angle, selecting a single minimum depth
measurement from the plurality of groups corresponding to the sweep
angle.
5. The method of claim 2, wherein fitting the guide element
comprises fitting a curve to the candidate set of depth
measurements.
6. The method of claim 5, wherein fitting the curve to the
candidate set of depth measurements comprises at least one of:
minimizing a depth of the curve; and maximizing a population of the
candidate depth measurements intersected by the curve.
7. The method of claim 2, wherein detecting the output set
comprises: determining, for each candidate depth measurement, a
distance between the candidate depth measurement and the guide
element; identifying local minima among the distances; and
selecting the candidate depth measurements corresponding to the
local minima.
8. The method of claim 1, wherein obtaining the depth measurements
comprises obtaining a depth image including a plurality of pixels
each containing one of the depth measurements.
9. The method of claim 8, wherein selecting the candidate set of
depth measurements comprises: subdividing the depth image into a
plurality of patches; generating normal vectors for each of the
patches; and selecting the depth measurements contained in patches
having normal vectors with a predetermined orientation.
10. The method of claim 8, wherein fitting the guide element
comprises: at each of a predetermined sequence of depth ranges,
fitting a plane to the candidate depth measurements within the
depth range; and selecting one of the planes intersecting the
greatest number of the candidate depth measurements.
11. The method of claim 10, wherein detecting the output set of
depth measurements comprises selecting the candidate depth
measurements that are intersected by the one of the planes.
12. The method of claim 1, wherein the support surface is a
shelf.
13. A computing device for detecting an edge of a support surface,
comprising: a memory; and an imaging controller including: a
preprocessor configured to obtain a plurality of depth measurements
captured by a depth sensor and corresponding to an area containing
the support surface; a selector configured to select a candidate
set of the depth measurements based on at least one of (i) an
expected proximity of the edge of the support surface to the depth
sensor, and (ii) an expected orientation of the edge of the support
surface relative to the depth sensor; a guide generator configured
to fit a guide element to the candidate set of depth measurements;
and an output detector configured to detect an output set of the
depth measurements corresponding to the edge from the candidate set
of depth measurements according to a proximity between each
candidate depth measurement and the guide element.
14. The computing device of claim 13, wherein the preprocessor is
configured to obtain the depth measurements by obtaining a lidar
scan including, for each of a plurality of sweep angles, a
respective group of the depth measurements.
15. The computing device of claim 14, wherein the selector is
configured to select the candidate set by, for each sweep angle,
selecting a single minimum depth measurement from the group
corresponding to the sweep angle.
16. The computing device of claim 15, wherein the preprocessor is
further configured to obtain the depth measurements by obtaining a
plurality of lidar scans each including, for each of the plurality
of sweep angles, a respective group of the depth measurements; and
wherein the selector is further configured to select the candidate
set by, for each sweep angle, selecting a single minimum depth
measurement from the plurality of groups corresponding to the sweep
angle.
17. The computing device of claim 14, wherein the guide generator
is configured to fit the guide element by fitting a curve to the
candidate set of depth measurements.
18. The computing device of claim 17, wherein the guide generator
is configured to fit the curve to the candidate set of depth
measurements by at least one of: minimizing a depth of the curve;
and maximizing a population of the candidate depth measurements
intersected by the curve.
19. The computing device of claim 14, wherein the output detector
is configured to detect the output set by: determining, for each
candidate depth measurement, a distance between the candidate depth
measurement and the guide element; identifying local minima among
the distances; and selecting the candidate depth measurements
corresponding to the local minima.
20. The computing device of claim 13, wherein the preprocessor is
configured to obtain the depth measurements by obtaining a depth
image including a plurality of pixels each containing one of the
depth measurements.
21. The computing device of claim 20, wherein the selector is
configured to select the candidate set of depth measurements by:
subdividing the depth image into a plurality of patches; generating
normal vectors for each of the patches; and selecting the depth
measurements contained in patches having normal vectors with a
predetermined orientation.
22. The computing device of claim 20, wherein the guide generator
is configured to fit the guide element by: at each of a
predetermined sequence of depth ranges, fitting a plane to the
candidate depth measurements within the depth range; and selecting
one of the planes intersecting the greatest number of the candidate
depth measurements.
23. The computing device of claim 22, wherein the output detector
is configured to detect the output set of depth measurements by
selecting the candidate depth measurements that are intersected by
the one of the planes.
Description
BACKGROUND
[0001] Environments in which inventories of objects are managed,
such as products for purchase in a retail environment, may be
complex and fluid. For example, a given environment may contain a
wide variety of objects with different sizes, shapes, and other
attributes. Such objects may be supported on shelves in a variety
of positions and orientations. The variable position and
orientation of the objects, as well as variations in lighting and
the placement of labels and other indicia on the objects and the
shelves, can render detection of structural features such as the
edges of the shelves difficult.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
[0002] The accompanying figures, where like reference numerals
refer to identical or functionally similar elements throughout the
separate views, together with the detailed description below, are
incorporated in and form part of the specification, and serve to
further illustrate embodiments of concepts that include the claimed
invention, and explain various principles and advantages of those
embodiments.
[0003] FIG. 1 is a schematic of a mobile automation system.
[0004] FIG. 2A depicts a mobile automation apparatus in the system
of FIG. 1.
[0005] FIG. 2B is a block diagram of certain internal hardware
components of the mobile automation apparatus in the system of FIG.
1.
[0006] FIG. 2C is a block diagram of certain internal hardware
components of the server in the system of FIG. 1.
[0007] FIG. 3 is a flowchart of a method of support surface edge
detection.
[0008] FIG. 4 depicts the capture of data employed in the method of
FIG. 3 according to a first sensing technology.
[0009] FIG. 5 depicts the capture of data employed in the method of
FIG. 3 according to a second sensing technology.
[0010] FIGS. 6A-6B depict embodiments of methods for performing
block 310 of the method of FIG. 3.
[0011] FIGS. 7A-7B depict results of the performance of the methods
of FIGS. 6A-6B, respectively.
[0012] FIGS. 8A-8B depict embodiments of methods for performing
block 315 of the method of FIG. 3.
[0013] FIGS. 9A-9B depict results of the performance of the method
of FIG. 8A.
[0014] FIGS. 10A-10C depict results of the performance of the
method of FIG. 8B.
[0015] FIGS. 11A-11B depict embodiments of methods for performing
block 320 of the method of FIG. 3.
[0016] FIGS. 12A-12B depict results of the performance of the
method of FIG. 11B.
[0017] Skilled artisans will appreciate that elements in the
figures are illustrated for simplicity and clarity and have not
necessarily been drawn to scale. For example, the dimensions of
some of the elements in the figures may be exaggerated relative to
other elements to help to improve understanding of embodiments of
the present invention.
[0018] The apparatus and method components have been represented
where appropriate by conventional symbols in the drawings, showing
only those specific details that are pertinent to understanding the
embodiments of the present invention so as not to obscure the
disclosure with details that will be readily apparent to those of
ordinary skill in the art having the benefit of the description
herein.
DETAILED DESCRIPTION
[0019] In retail environments in which a plurality of products is
supported on shelves, systems may be configured to capture images
of the shelves and determine, from the images, various information
concerning the products. For example, price labels may be located
and decoded within the image, for use in ensuring that products are
labelled with the correct prices. Further, gaps between products on
the shelves may be identified as potentially indicating that one or
more products are out of stock and require replenishment. The above
determinations may require the identification of distances between
the capture device and the shelf edges to describe the
three-dimensional structure of the shelf edges, for use as
reference structures for the identification of labels, products,
gaps, and the like.
[0020] The identification of shelf edges from depth measurements is
complicated by a variety of factors, including the proximity to the
shelf edges of products having a wide variety of shapes and
orientations. Such factors also include lighting variations,
reflections, obstructions from products or other objects, and the
like.
[0021] Examples disclosed herein are directed to a method of
detecting an edge of a support surface by an imaging controller.
The method includes: obtaining a plurality of depth measurements
captured by a depth sensor and corresponding to an area containing
the support surface; selecting, by the imaging controller, a
candidate set of the depth measurements based on at least one of
(i) an expected proximity of the edge of the support surface to the
depth sensor, and (ii) an expected orientation of the edge of the
support surface relative to the depth sensor; fitting, by the
imaging controller, a guide element to the candidate set of depth
measurements; and detecting, by the imaging controller, an output
set of the depth measurements corresponding to the edge from the
candidate set of depth measurements according to a proximity
between each candidate depth measurement and the guide element.
[0022] Further examples disclosed herein are directed to a
computing device for detecting an edge of a support surface,
comprising: a memory; and an imaging controller including: a
preprocessor configured to obtain a plurality of depth measurements
captured by a depth sensor and corresponding to an area containing
the support surface; a selector configured to select a candidate
set of the depth measurements based on at least one of (i) an
expected proximity of the edge of the support surface to the depth
sensor, and (ii) an expected orientation of the edge of the support
surface relative to the depth sensor; a guide generator configured
to fit a guide element to the candidate set of depth measurements;
and an output detector configured to detect an output set of the
depth measurements corresponding to the edge from the candidate set
of depth measurements according to a proximity between each
candidate depth measurement and the guide element.
[0023] FIG. 1 depicts a mobile automation system 100 in accordance
with the teachings of this disclosure. The system 100 includes a
server 101 in communication with at least one mobile automation
apparatus 103 (also referred to herein simply as the apparatus 103)
and at least one client computing device 105 via communication
links 107, illustrated in the present example as including wireless
links. In the present example, the links 107 are provided by a
wireless local area network (WLAN) deployed within the retail
environment by one or more access points. In other examples, the
server 101, the client device 105, or both, are located outside the
retail environment, and the links 107 therefore include wide-area
networks such as the Internet, mobile networks, and the like. As
will be described in greater detail below, the system 100 also
includes a dock 108 for the apparatus 103. The dock 108 is in
communication with the server 101 via a link 109 that in the
present example is a wired link. In other examples, however, the
link 109 is a wireless link.
[0024] The client computing device 105 is illustrated in FIG. 1 as
a mobile computing device, such as a tablet, smart phone or the
like. In other examples, the client device 105 includes computing
devices such as a desktop computer, a laptop computer, another
server, a kiosk, a monitor, or other suitable device. The system
100 can include a plurality of client devices 105, each in
communication with the server 101 via respective links 107.
[0025] The system 100 is deployed, in the illustrated example, in a
retail environment including a plurality of shelf modules 110-1,
110-2, 110-3 and so on (collectively referred to as shelves 110,
and generically referred to as a shelf 110--this nomenclature is
also employed for other elements discussed herein). Each shelf
module 110 supports a plurality of products 112. Each shelf module
110 includes a shelf back 116-1, 116-2, 116-3 and a support surface
(e.g. support surface 117-3 as illustrated in FIG. 1) extending
from the shelf back 116 to a shelf edge 118-1, 118-2, 118-3. The
shelf modules 110 are typically arranged in a plurality of aisles,
each of which includes a plurality of modules aligned end-to-end.
In such arrangements, the shelf edges 118 face into the aisles,
through which customers in the retail environment as well as the
apparatus 103 may travel. As will be apparent from FIG. 1, the term
"shelf edge" 118 as employed herein, which may also be referred to
as the edge of a support surface (e.g., the support surfaces 117)
refers to a surface bounded by adjacent surfaces having different
angles of inclination. In the example illustrated in FIG. 1, the
shelf edge 118-3 is at an angle of about ninety degrees relative to
each of the support surface 117-3 and the underside (not shown) of
the support surface 117-3. In other examples, the angles between
the shelf edge 118-3 and the adjacent surfaces, such as the support
surface 117-3, are more or less than ninety degrees.
[0026] More specifically, the apparatus 103 is deployed within the
retail environment, and communicates with the server 101 (via the
link 107) to navigate, autonomously or partially autonomously, the
length 119 of at least a portion of the shelves 110. The apparatus
103 is equipped with a plurality of navigation and data capture
sensors 104, such as image sensors (e.g. one or more digital
cameras) and depth sensors (e.g. one or more Light Detection and
Ranging (LIDAR) sensors, one or more depth cameras employing
structured light patterns, such as infrared light), and is further
configured to employ the sensors to capture shelf data. In the
present example, the apparatus 103 is configured to capture a
plurality of depth measurements corresponding to the shelves 110,
each measurement defining a distance from the depth sensor to a
point on the shelf 110 (e.g., a product 112 disposed on the shelf
110 or a structural component of the shelf 110, such as a shelf
edge 118 or a shelf back 116).
[0027] The server 101 includes a special purpose imaging
controller, such as a processor 120, specifically designed to
control the mobile automation apparatus 103 to capture data (e.g.
the above-mentioned depth measurements), obtain the captured data
via a communications interface 124 and store the captured data in a
repository 132 in a memory 122. The server 101 is further
configured to perform various post-processing operations on the
captured data and to detect certain structural features--such as
the shelf edges 118--within the captured data. The post-processing
of captured data by the server 101 will be discussed below in
greater detail. The server 101 may also be configured to determine
product status data based in part on the above-mentioned shelf edge
detections, and to transmit status notifications (e.g.
notifications indicating that products are out-of-stock, low stock
or misplaced) to the mobile device 105 responsive to the
determination of product status data.
[0028] The processor 120 is interconnected with a non-transitory
computer readable storage medium, such as the above-mentioned
memory 122, having stored thereon computer readable instructions
for executing control of the apparatus 103 to capture data, as well
as the above-mentioned post-processing functionality, as discussed
in further detail below. The memory 122 includes a combination of
volatile (e.g. Random Access Memory or RAM) and non-volatile memory
(e.g. read only memory or ROM, Electrically Erasable Programmable
Read Only Memory or EEPROM, flash memory). The processor 120 and
the memory 122 each comprise one or more integrated circuits. In an
embodiment, the processor 120 further includes one or more central
processing units (CPUs) and/or graphics processing units (GPUs). In
an embodiment, a specially designed integrated circuit, such as a
Field Programmable Gate Array (FPGA), is designed to perform the
shelf edge detection discussed herein, either alternatively or in
addition to the imaging controller/processor 120 and memory 122. As
those of skill in the art will realize, the mobile automation
apparatus 103 also includes one or more controllers or processors
and/or FPGAs, in communication with the controller 120,
specifically configured to control navigational and/or data capture
aspects of the apparatus 103. The client device 105 also includes
one or more controllers or processors and/or FPGAs, in
communication with the controller 120, specifically configured to
process (e.g. to display) notifications received from the server
101.
[0029] The server 101 also includes the above-mentioned
communications interface 124 interconnected with the processor 120.
The communications interface 124 includes suitable hardware (e.g.
transmitters, receivers, network interface controllers and the
like) allowing the server 101 to communicate with other computing
devices--particularly the apparatus 103, the client device 105 and
the dock 108--via the links 107 and 109. The links 107 and 109 may
be direct links, or links that traverse one or more networks,
including both local and wide-area networks. The specific
components of the communications interface 124 are selected based
on the type of network or other links that the server 101 is
required to communicate over. In the present example, as noted
earlier, a wireless local-area network is implemented within the
retail environment via the deployment of one or more wireless
access points. The links 107 therefore include either or both
wireless links between the apparatus 103 and the mobile device 105
and the above-mentioned access points, and a wired link (e.g. an
Ethernet-based link) between the server 101 and the access
point.
[0030] The memory 122 stores a plurality of applications, each
including a plurality of computer readable instructions executable
by the processor 120. The execution of the above-mentioned
instructions by the processor 120 configures the server 101 to
perform various actions discussed herein. The applications stored
in the memory 122 include a control application 128, which may also
be implemented as a suite of logically distinct applications. In
general, via execution of the control application 128 or
subcomponents thereof, the processor 120 is configured to implement
various functionality. The processor 120, as configured via the
execution of the control application 128, is also referred to
herein as the controller 120. As will now be apparent, some or all
of the functionality implemented by the controller 120 described
below may also be performed by preconfigured hardware elements
(e.g. one or more Application-Specific Integrated Circuits (ASICs))
rather than by execution of the control application 128 by the
processor 120.
[0031] Turning now to FIGS. 2A and 2B, the mobile automation
apparatus 103 is shown in greater detail. The apparatus 103
includes a chassis 201 containing a locomotive mechanism 203 (e.g.
one or more electrical motors driving wheels, tracks or the like).
The apparatus 103 further includes a sensor mast 205 supported on
the chassis 201 and, in the present example, extending upwards
(e.g., substantially vertically) from the chassis 201. The mast 205
supports the sensors 104 mentioned earlier. In particular, the
sensors 104 include at least one imaging sensor 207, such as a
digital camera, as well as at least one depth-sensing sensor 209,
such as a 3D digital camera. The apparatus 103 also includes
additional depth sensors, such as LIDAR sensors 211. In other
examples, the apparatus 103 includes additional sensors, such as
one or more RFID readers, temperature sensors, and the like.
[0032] In the present example, the mast 205 supports seven digital
cameras 207-1 through 207-7, and two LIDAR sensors 211-1 and 211-2.
The mast 205 also supports a plurality of illumination assemblies
213, configured to illuminate the fields of view of the respective
cameras 207. That is, the illumination assembly 213-1 illuminates
the field of view of the camera 207-1, and so on. The sensors 207
and 211 are oriented on the mast 205 such that the fields of view
of each sensor face a shelf 110 along the length 119 of which the
apparatus 103 is travelling. The apparatus 103 is configured to
track a location of the apparatus 103 (e.g. a location of the
center of the chassis 201) in a common frame of reference
previously established in the retail facility, permitting data
captured by the mobile automation apparatus to be registered to the
common frame of reference.
[0033] To that end, the mobile automation apparatus 103 includes a
special-purpose controller, such as a processor 220, as shown in
FIG. 2B, interconnected with a non-transitory computer readable
storage medium, such as a memory 222. The memory 222 includes a
combination of volatile (e.g. Random Access Memory or RAM) and
non-volatile memory (e.g. read only memory or ROM, Electrically
Erasable Programmable Read Only Memory or EEPROM, flash memory).
The processor 220 and the memory 222 each comprise one or more
integrated circuits. The memory 222 stores computer readable
instructions for execution by the processor 220. In particular, the
memory 222 stores a control application 228 which, when executed by
the processor 220, configures the processor 220 to perform various
functions related to the navigation of the apparatus 103 (e.g. by
controlling the locomotive mechanism 203) and to the detection of
shelf edges in data captured by the sensors (e.g. the depth cameras
209 or the lidar sensors 211). The application 228 may also be
implemented as a suite of distinct applications in other
examples.
[0034] The processor 220, when so configured by the execution of
the application 228, may also be referred to as a controller 220
or, in the context of shelf edge detection from captured data, as
an imaging controller 220. Those skilled in the art will appreciate
that the functionality implemented by the processor 220 via the
execution of the application 228 may also be implemented by one or
more specially designed hardware and firmware components, such as
FPGAs, ASICs and the like in other embodiments.
[0035] The memory 222 may also store a repository 232 containing,
for example, a map of the environment in which the apparatus 103
operates, for use during the execution of the application 228. The
apparatus 103 may communicate with the server 101, for example to
receive instructions to initiate data capture operations, via a
communications interface 224 over the link 107 shown in FIG. 1. The
communications interface 224 also enables the apparatus 103 to
communicate with the server 101 via the dock 108 and the link
109.
[0036] In the present example, as discussed below, one or both of
the server 101 (as configured via the execution of the control
application 128 by the processor 120) and the mobile automation
apparatus 103 (as configured via the execution of the application
228 by the processor 220), are configured to process depth
measurements captured by the apparatus 103 to identify portions of
the captured data depicting the shelf edges 118. In further
examples, the data processing discussed below may be performed on a
computing device other than the server 101 and the mobile
automation apparatus 103, such as the client device 105. The data
processing mentioned above will be described in greater detail in
connection with its performance at the server 101, via execution of
the application 128.
[0037] Turning now to FIG. 2C, before describing the operation of
the application 128 to identify the shelf edges 118 from captured
image data, certain components of the application 128 will be
described in greater detail. As will be apparent to those skilled
in the art, in other examples the components of the application 128
may be separated into distinct applications, or combined into other
sets of components. Some or all of the components illustrated in
FIG. 2C may also be implemented as dedicated hardware components,
such as one or more ASICs or FPGAs. For example, in one embodiment,
to improve reliability and processing speed, at least some of the
components of FIG. 2C are programmed directly into the imaging
controller 120, which may be an FPGA or an ASIC having circuit and
memory configuration specifically designed to optimize image
processing of a high volume of sensor data received from the mobile
automation apparatus 103. In such an embodiment, some or all of the
control application 128, discussed below, is an FPGA or an ASIC
chip.
[0038] The control application 128 includes a preprocessor 200
configured to obtain depth measurements corresponding to the
shelves 110 and the products 112 supported thereon, and to
preprocess the depth measurements, for example by filtering the
depth measurements prior to downstream processing operations. The
control application 128 also includes a selector 204 configured to
select a candidate set of depth measurements from the preprocessed
depth measurements (i.e., the output of the preprocessor 200). As
will be discussed below, the candidate set of depth measurements
are depth measurements considered likely to correspond to shelf
edges 118. The control application 128 also includes a guide
generator 208 configured to generate a guide element (such as a
curve or a plane) against which the above-mentioned candidate set
of depth measurements is evaluated by an output detector 212 to
detect an output set among the candidate set of depth measurements.
The output set of depth measurements contains the depth
measurements considered to have the greatest likelihood of
corresponding to shelf edges 118.
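The division of labor among these four components can be summarized in a brief sketch. The following Python fragment is an editorial illustration only; the disclosure specifies no programming language, and all class and method names here are assumptions:

```python
# Illustrative sketch of the control application 128 pipeline.
# All names and signatures are hypothetical; the patent does not
# specify an implementation language or API.
import numpy as np

class EdgeDetectionPipeline:
    def __init__(self, preprocessor, selector, guide_generator, output_detector):
        self.preprocessor = preprocessor        # filters raw depth data
        self.selector = selector                # picks candidate measurements
        self.guide_generator = guide_generator  # fits a curve or plane guide element
        self.output_detector = output_detector  # keeps measurements near the guide

    def detect_edges(self, raw_depths: np.ndarray) -> np.ndarray:
        depths = self.preprocessor.filter(raw_depths)           # block 305
        candidates = self.selector.select(depths)               # block 310
        guide = self.guide_generator.fit(candidates)            # block 315
        return self.output_detector.detect(candidates, guide)   # block 320
```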
[0039] The functionality of the control application 128 will now be
described in greater detail. Turning to FIG. 3, a method 300 of
detecting an edge of a support surface is shown. The method 300
will be described in conjunction with its performance on the system
100 and with reference to the components illustrated in FIG. 2C. In
other words, in the discussion below, the support surface is a
support surface 117 as shown in FIG. 1, and the edge to be detected
is therefore a shelf edge 118 as shown in FIG. 1. In other
examples, other support surfaces and their edges may also be
detected via performance of the method 300. As noted earlier,
additionally, in other examples, some or all of the method 300 is
performed by the components illustrated in FIG. 2B.
[0040] At block 305, the controller 120, and in particular the
preprocessor 200, is configured to obtain a plurality of depth
measurements captured by a depth sensor and corresponding to an
area containing the above-mentioned support surface. In other
words, in the present example the depth measurements correspond to
an area containing at least one shelf support surface 117 and shelf
edge 118. The depth measurements obtained at block 305 are, for
example, captured by the apparatus 103 and stored in the repository
132. The preprocessor 200 is therefore configured, in the above
example, to obtain the depth measurements by retrieving the
measurements from the repository 132.
[0041] The depth measurements can take a variety of forms,
according to the depth sensor employed (e.g. by the apparatus 103)
to capture the measurements. For example, the apparatus 103 can
include a lidar sensor, and the depth measurements therefore
include one or more lidar scans captured as the apparatus 103
travels the length of an aisle (i.e., a set of adjacent shelf
modules 110). The lidar sensor of the apparatus 103 captures depth
measurements by sweeping a line of laser light across the shelves
110 through a predetermined set of sweep angles and determining,
for each of the sweep angles, a group of depth measurements along
the line.
[0042] FIG. 4 illustrates a simplified capture of depth
measurements by the apparatus 103 employing a lidar sensor. In
particular, as the apparatus 103 travels along the shelves 110 (the
shelf 110-3 is shown in FIG. 4 for illustrative purposes), in a
direction of travel 400 substantially parallel to the shelf 110, a
lidar sensor 404 of the apparatus 103 sweeps a line (e.g. of laser
light) 408 through a range of angles, for example in the direction
412. In the example of FIG. 4, the line 408 is illustrated as being
substantially vertical. In other examples, the line 408 may be
inclined relative to vertical. For each angle, a group of depth
measurements are determined along the line 408 by the sensor 404.
When the line has swept through the full range of angles, the
apparatus 103 may be configured to initiate a further sweep upon
detection that the apparatus 103 has traveled a predetermined
distance along the shelf 110.
[0043] Thus, in the example illustrated in FIG. 4, four lidar scans
416-1, 416-2, 416-3 and 416-4 are captured by the apparatus 103,
each containing groups of depth measurements for each of the set of
sweep angles. As shown in connection with the scan 416-1, the
plurality of depth measurements include a group (shown as columns)
for each of a set of sweep angles. Each group contains a number of
depth measurements (each in a distinct row of the array shown in
FIG. 4), corresponding to different points along the length of the
line 408. The depth measurements need not be generated in the
tabular format shown in FIG. 4. For example, the sensor 404 can
also generate the measurements in polar coordinates.
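As an illustrative aside, a scan such as the scan 416-1 can be held as a two-dimensional array, one column per sweep angle and one row per sample along the line 408. The sketch below is an assumption about layout; the disclosure leaves the representation open and notes that polar coordinates are equally possible:

```python
# Hypothetical in-memory layout for one lidar scan 416 (not specified
# by the disclosure): rows are samples along the line 408, columns are
# sweep angles, values are depths in meters.
import numpy as np

sweep_angles = np.arange(-60, 65, 5)             # e.g. -60 to +60 degrees
scan = np.full((20, sweep_angles.size), np.nan)  # e.g. 20 samples per angle

# scan[k, j] is the k-th depth measurement captured at sweep_angles[j];
# a polar (angle, range) list of the same measurements is equivalent.
```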
[0044] In other examples, the apparatus 103 captures the depth
measurements with the use of a depth camera, such as a stereoscopic
camera including a structured light projector (e.g. which projects
a pattern of infrared light onto the shelves 110). In such
examples, referring to FIG. 5, the apparatus 103 proceeds along the
shelves 110 in a direction of travel 500 and employs a depth camera
504 with a field of view 508 to capture a sequence of images 516-1,
516-2, 516-3, 516-4 of the shelf 110. The images each include an
array of pixels, with each pixel containing a depth measurement
(e.g., a value in meters or another suitable unit). The pixels can
also contain intensity values (e.g. a value from 0 to 255) which
map to depth values, rather than the depth values themselves. An
example pixel array is shown for the image 516-1 in FIG. 5, in
which darker pixels are further from the sensor 504 and the lighter
pixels are closer to the sensor 504.
[0045] Returning to FIG. 3, at block 305 the preprocessor 200 can
also be configured to perform one or more filtering operations on
the depth measurements. For example, depth measurements greater
than a predefined threshold may be discarded from the data captured
at block 305. Such measurements may be indicative of surfaces
beyond the shelf backs 116 (e.g. a ceiling, or a wall behind a
shelf back 116). The predefined threshold may be selected, for
example, as the sum of the known depth of a shelf 110 and the known
width of an aisle. In other examples, measurements may be discarded
from data captured with a lidar sensor corresponding to sweep
angles beyond a predefined threshold. The predefined angular
threshold is selected based on any one or more of the height at
which the sensor 404 is mounted on the apparatus 103, the height of
the shelf module 110, and the closest distance from the shelf
module 110 at which the apparatus 103 is capable of traversing the
shelf module 110. For example, in an implementation in which the
sensor 404 is mounted on the apparatus 103 at a height of about 1
m, and in which the apparatus 103 can traverse the shelf 110 at a
minimum distance of about 0.55 m, the predefined angular threshold
may be set at about +/-60 degrees.
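A minimal sketch of this preprocessing step, assuming the array layout shown above, might look as follows; the threshold values echo the examples in the paragraph but are otherwise arbitrary:

```python
import numpy as np

def preprocess_scan(scan: np.ndarray, sweep_angles: np.ndarray,
                    max_depth: float, max_angle_deg: float = 60.0):
    """Filter a scan per paragraph [0045] (a sketch, not the normative method).

    max_depth: e.g. the known shelf depth plus the known aisle width, in meters.
    max_angle_deg: e.g. +/-60 degrees for a sensor mounted about 1 m high on an
    apparatus passing no closer than about 0.55 m to the shelf.
    """
    filtered = scan.astype(float).copy()
    filtered[filtered > max_depth] = np.nan        # beyond the shelf back 116
    keep = np.abs(sweep_angles) <= max_angle_deg   # drop extreme sweep angles
    return filtered[:, keep], sweep_angles[keep]
```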
[0046] The control application 128 is then configured to proceed to
block 310. At block 310, the control application 128, and more
specifically the selector 204, is configured to select a candidate
set of the depth measurements obtained at block 305. The candidate
set of depth measurements is selected based on at least one of an
expected proximity of the shelf edge 118 to the depth sensor, and
an expected orientation of the shelf edge 118 relative to the depth
sensor. As will be discussed in greater detail below, the apparatus
103 is assumed by the selector 204 to travel in a direction
substantially parallel to the shelf edge 118. As a result, the
distance between the sensor (e.g. the lidar sensor 404 or the depth
camera 504) and the shelf edge 118 is expected to remain
substantially constant throughout the captured data. Further,
because the support surfaces 117 extend
from the shelf backs 116 toward the aisle in which the apparatus
103 travels, the shelf edges 118 are expected to be closer to the
apparatus 103 than other structures (e.g. products 112) depicted in
the captured data. Still further, each shelf edge 118 is assumed to
have a known orientation. For example, each shelf edge 118 may be
expected to be a substantially vertical surface. As will be seen
below, when the data captured at block 305 is captured with the
lidar sensor 404, the candidate set of measurements is selected
based on an expected proximity to the depth sensor, and when the
data captured at block 305 is captured with the depth camera 504,
the candidate set of measurements is selected based on an expected
orientation relative to the depth sensor.
[0047] Turning to FIG. 6A, a method 600 of selecting a candidate
set of depth measurements (i.e., of performing block 310 of the
method 300) is illustrated. In particular, the selector 204 is
configured to perform the method 600 when the depth measurements
obtained at block 305 are captured using a lidar sensor.
[0048] At block 605, the selector 204 is configured to select a
sweep angle. As seen in FIG. 4 in connection with the scan 416-1,
the depth measurements include respective groups of measurements
for each of a plurality of sweep angles. In the example of FIG. 4,
the sweep angles cover a range of about 120 degrees; in other
examples, other ranges of sweep angles may be implemented. For a
first instance of block 605, the selector 204 is configured to
select a first sweep angle (e.g., the angle -60 degrees shown in
FIG. 4).
[0049] At block 610, for the selected sweep angle, the selector 204
is configured to select the minimum depth measurement corresponding
to that sweep angle. Thus, referring again to the scan 416-1, the
selector 204 is configured to select the minimum depth measurement
among the values d-60-1, d-60-2, d-60-3, ..., d-60-19, and d-60-20.
In the present example, when the
depth measurements obtained at block 305 include a plurality of
lidar scans (e.g., the scans 416-1 through 416-4 shown in FIG. 4),
the selection of a minimum depth measurement at block 610 is made
among all available depth measurements for the current sweep angle.
That is, the scans 416-2, 416-3 and 416-4 also include groups of
depth measurements for the angle -60 degrees, and at block 610 the
selector 204 is configured to select a single minimum depth
measurement from among the groups of measurements corresponding to
-60 degrees from all four scans 416. In other examples, the
selector 204 is configured to repeat the selection of minimum depth
measurements separately for each scan 416, thus generating a
one-dimensional array for each scan. In further examples, the
selector 204 is configured to combine the above approaches,
generating a one-dimensional array for each scan 416, and also
generating a single aggregated array across all scans 416. As will
be seen below, the generation of a guide element at block 315 may
be performed with respect to the single aggregated array, while the
detection of output depth measurements at block 320 may be
performed with respect to the individual scan-specific arrays.
[0050] In some examples, rather than selecting the minimum depth at
block 610, the selector 204 is configured to select a
representative sample for each sweep angle other than the minimum
depth measurement. For example, the selector 204 can be configured
to select the median of the depth measurements for each sweep
angle. Such an approach may be employed by the selector 204 in some
embodiments when the depth measurements captured by the apparatus
103 contain a level of noise above a predefined threshold.
[0051] Having selected the minimum depth measurement for the
current sweep angle at block 610, the selector is configured to add
the selected depth measurement to the candidate set, along with an
indication of the sweep angle corresponding to the minimum depth
measurement (i.e., the angle selected at block 605). At block 615,
the selector 204 is then configured to determine whether any sweep
angles remain to be processed. When the determination is
affirmative, the performance of the method 600 returns to block
605, and block 610 is repeated for the next sweep angle (e.g., -55
degrees as shown in FIG. 4). When all sweep angles have been
processed, the determination at block 615 is negative, and the
selector 204 passes the selected candidate set of depth
measurements to the guide generator 208 for the performance of
block 315 of the method 300. The candidate set of depth
measurements includes, for each sweep angle, a single depth
measurement, and may therefore be represented by a one-dimensional
array similar in structure to a single row of the scan 416-1 shown
in FIG. 4. FIG. 7A illustrates an example candidate set of depth
measurements obtained through the performance of the method 600,
plotted as a single line (i.e. a one-dimensional dataset),
specifying the selected minimum distance for each sweep angle.
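A compact sketch of method 600 under the same assumed layout follows; the median variant of paragraph [0050] is included as a flag:

```python
import numpy as np

def select_candidates(scans, use_median=False):
    """Method 600 (sketch): one candidate depth per sweep angle.

    scans: list of (n_samples, n_angles) arrays over identical sweep angles.
    Pooling all scans per angle reflects the aggregated-array variant;
    passing a single scan yields the per-scan arrays used at block 320.
    """
    pooled = np.concatenate(scans, axis=0)   # all groups for each angle
    if use_median:
        return np.nanmedian(pooled, axis=0)  # noise-robust option, para [0050]
    return np.nanmin(pooled, axis=0)         # blocks 605-615
```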
[0052] FIG. 6B illustrates an implementation of block 310 in
another embodiment, in which the depth measurements obtained at
block 305 are captured using a depth camera. In this embodiment,
the selector 204 is configured to implement block 310 of the method
300 by performing a method 650 to select the candidate set of depth
measurements.
[0053] At block 655, the selector 204 is configured to subdivide
the image containing depth measurements into a plurality of
overlapping patches. For example, each patch may have dimensions of
3×3 pixels, and overlap adjacent patches by 2 pixels in the
vertical and horizontal directions. That is, the patches are
selected such that every pixel in the depth image is the center of
one patch. In other examples, larger patch dimensions may also be
employed (e.g. 5×5 pixels), with a greater degree of overlap
to provide one patch centered on each pixel. In further examples,
the overlap between patches may be reduced to reduce the
computational burden imposed by the performance of the method 650,
at the expense of reduced resolution of the candidate set, as will
be evident below.
[0054] Having selected a patch at block 655 (e.g. the upper-left
3×3-pixel patch of the image 516-1 shown in FIG. 5), the
selector 204 is configured to generate a normal vector for the
selected patch. The normal vector is a vector that extends from the
central pixel of the patch, and is perpendicular to a plane defined
by the depth measurements in the patch. Therefore, the selector 204
is configured to apply a suitable plane-fitting operation to the
depth measurements contained in the selected patch to generate the
normal vector. Examples of suitable plane-fitting operations
include generating the plane from three non-collinear points. Other
examples include orthogonal regression using total least squares,
RANdom SAmple Consensus (RANSAC), and Least Median of Squares
(LMedS).
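For concreteness, one of the listed options, orthogonal regression via total least squares, can be realized with a singular value decomposition. This is a sketch, not the claimed method, and the sign convention for the normal is an assumption:

```python
import numpy as np

def patch_normal(points: np.ndarray) -> np.ndarray:
    """Unit normal of the total-least-squares plane through an N x 3
    array of (X, Y, Z) points taken from a 3x3 (or 5x5) patch."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is
    # perpendicular to the best-fit plane.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Assumed sign convention: keep the Z component non-negative so the
    # comparison at block 660 can inspect magnitudes directly.
    return normal if normal[2] >= 0 else -normal
```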
[0055] At block 660, the selector 204 is configured to determine
whether the normal vector generated at block 655 has a predefined
orientation. As noted earlier, when the data captured at block 305
is captured with the depth camera 504, the candidate set of
measurements is selected based on an expected orientation to the
depth sensor. The expected orientation of the shelf edge 118
relative to the depth sensor (e.g. the camera 504), as shown in
FIG. 5, is facing towards the depth sensor. In other words,
referring to FIG. 7B which illustrates a portion 700 of the image
516-1, the patches of the captured image depicting shelf edges 118
are expected to exhibit normal vectors oriented substantially in
the depth or Z direction, with minimal contribution from the X or Y
directions.
[0056] The selector 204 is therefore configured to perform the
determination at block 660 by comparing the normal vector generated
at block 655 to the predefined expected orientation. Referring
again to FIG. 7B, a number of example patches 704, 708, 712, 716
and 720 are shown, with depictions of their normal vectors. In the
present example, the expected orientation is parallel to the Z axis
in the image frame of reference (i.e. directly perpendicular to the
page of FIG. 7B). The determination at block 660 can include, for
example, determining the components of the normal vector in each of
the X, Y and Z directions and determining whether the magnitude of
the Z direction is greater than the X and Y directions by a
predetermined factor. In another example, the determination at
block 660 includes determining whether the X and Y magnitudes of
the normal vector are below predetermined thresholds. As seen in
FIG. 7B, the normal vectors of the patches 704 and 708 deviate
significantly from the Z direction, while the normal vector of the
patch 712 is substantially parallel to the Z axis and the normal
vectors of the patches 716 and 720 are parallel to the Z axis.
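Both tests described above reduce to simple comparisons on the components of the normal vector. A sketch follows, with illustrative tuning values that are assumptions rather than figures from the disclosure:

```python
import numpy as np

def faces_sensor(normal, dominance=3.0, xy_limit=0.3):
    """Block 660 (sketch): does the normal point substantially along Z?

    dominance and xy_limit are assumed tuning values; the disclosure
    names the two tests but specifies no thresholds.
    """
    nx, ny, nz = np.abs(normal)
    z_dominates = nz > dominance * max(nx, ny)      # Z exceeds X and Y by a factor
    xy_small = (nx < xy_limit) and (ny < xy_limit)  # X and Y below thresholds
    return bool(z_dominates or xy_small)
```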
[0057] Returning to FIG. 6B, when the determination is negative at
block 660 (as in the case of the patch 704, for example) the
selector 204 proceeds directly to block 665. When the determination
at block 660 is affirmative, however, the selector 204 is
configured to select the center pixel of the current patch, and add
the selected pixel to the set of candidate depth measurements. The
selector 204 is then configured to proceed to block 665, to
determine whether further patches remain to be processed. When the
determination at block 665 is affirmative, the selector 204 repeats
blocks 655-670 until the determination at block 665 is negative, at
which point the selector 204 passes the set of candidate depth
measurements (e.g., indicated by pixel positions in the image frame
of reference) to the guide generator 208 for use at block 315 of
the method 300.
[0058] Returning to FIG. 3, at block 315 the guide generator 208 is
configured to fit a guide element to the candidate set of depth
measurements. The nature of the guide element is dependent on the
type of depth measurements obtained at block 305. The guide element
is generated as a set of parameters defining the element, such as
the equation of a curve, plane or the like, expressed in the frame
of reference of the captured data (i.e., the lidar data or depth
image). For example, a planar guide element may be expressed as a
normal vector perpendicular to the plane, and a distance parameter
defining the length of the normal from the plane to the origin of
the above frame of reference. As a further example, a linear guide
element may be expressed as a vector defining the orientation of
the line in the above frame of reference. When the depth
measurements obtained at block 305 are lidar measurements, block
315 is implemented according to block 800 shown in FIG. 8A.
Specifically, the guide generator 208 is configured to fit a curve
to the candidate set of depth measurements (e.g. those selected via
the performance of the method 600 discussed above). The guide
generator 208 is configured to generate the above-mentioned curve
to minimize a depth of the curve (i.e. to place the curve as close
as possible to the origin in the lidar frame of reference), and to
maximize the number of the candidate depth measurements that are
intersected by the curve (i.e., to maximize the population of
inliers of the curve). Turning to FIG. 9A, the candidate set of
depth measurements from FIG. 7A is illustrated, along with a curve
900 fitted to the candidate set according to block 800. FIG. 9B
illustrates the same candidate set and the curve 900 in polar
coordinates, in which the curve 900 appears as a straight line.
Having generated the guide element at block 800, the guide
generator 208 is configured to pass the guide element (e.g. as an
equation defining the curve 900 in the lidar frame of reference),
as well as the candidate set of depth measurements, to the output
detector 212 for use at block 320 of the method 300.
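The disclosure states the two objectives (small depth, many intersected candidates) without prescribing a fitting procedure. One plausible realization, sketched below, is a RANSAC-style search for a straight line in the polar (angle, depth) space of FIG. 9B, scored by inlier count with a penalty on depth; the tolerance and weighting are assumptions:

```python
import numpy as np

def fit_guide_curve(angles_deg, depths, tol=0.01, trials=500, seed=0):
    """Block 800 (sketch): fit depth = a*angle + b to the candidate set,
    maximizing inliers (within tol meters) while preferring small depth."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(trials):
        i, j = rng.choice(len(angles_deg), size=2, replace=False)
        if angles_deg[i] == angles_deg[j] or not (
                np.isfinite(depths[i]) and np.isfinite(depths[j])):
            continue
        a = (depths[j] - depths[i]) / (angles_deg[j] - angles_deg[i])
        b = depths[i] - a * angles_deg[i]
        resid = depths - (a * angles_deg + b)
        inliers = int(np.sum(np.abs(resid) < tol))
        score = inliers - 0.1 * b  # assumed trade-off between the two objectives
        if score > best_score:
            best, best_score = (a, b), score
    return best  # (slope, intercept) describing the curve 900
```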
[0059] Referring to FIG. 8B, when the depth measurements obtained
at block 305 are captured with a depth camera, the guide generator
208 is configured to implement block 315 via the performance of a
method 850. In particular, the guide generator 208 is configured to
assess each of a plurality of depth ranges (which may also be
referred to as search volumes), as described below.
[0060] At block 855, the guide generator 208 is configured to
select a depth range. In the present example, the guide generator
208 is configured to assess depth ranges in a sequence beginning
with a minimum depth (e.g. a depth of zero, indicating a search
volume immediately adjacent to the depth sensor at the time of data
capture), with each successive range increasing by a predefined
distance. Thus,
for example, the depth ranges assessed may include a depth range of
0 to 0.2 m, 0.2 m to 0.4 m, 0.4 m to 0.6 m, and so on, until a
predefined maximum depth (e.g., 2.0 m). Each depth range may
contain a subset of the candidate set of pixels selected through
the performance of the method 650. The guide generator 208 may also
be configured to determine whether the selected depth range
contains any candidate pixels, and when it does not, to immediately
advance to the next depth range.
[0061] At block 860, the guide generator 208 is configured to fit a
plane to the subset of candidate pixels contained within the
current depth range. The subset includes any of the candidate
pixels having depth measurements (e.g., along the Z axis) within
the depth range, regardless of the position of such pixels in the
image (e.g., the position on the X and Y axes). Turning to FIG.
10A, the original depth image 516-1 is illustrated, along with the
candidate set of pixels 1000. As discussed above, the candidate set
of pixels 1000 are those having normal vectors oriented
substantially in the Z direction. FIG. 10B illustrates a subset
1004 of the candidate pixels 1000 that fall within a first depth
range selected at block 855. As seen by comparing FIGS. 10A and
10B, the pixels corresponding to the cylindrical product 112
supported on the upper support surface 117 (e.g., see FIGS. 1 and
5) are not contained in the first depth range, as their depths are
greater than the furthest extent of the first depth range.
[0062] The guide generator 208 is configured to fit a plane 1008 to
the subset 1004 according to a suitable plane fitting operation.
For example, a plane fitting operation may be selected for the
performance of block 860 that maximizes the number of points in the
subset 1004 that are intersected by the plane (i.e., that are
inliers of the plane). As seen in FIG. 10B, the plane 1008 contains
certain pixels in regions 1012-1 and 1012-2 from the subset 1004
while omitting others. Specifically, the pixels contained within
the plane 1008 include those depicting the shelf edges 118, but
omit the remainder of the subset 1004, which are at greater depths
than the pixels in the regions 1012.
[0063] Returning to FIG. 8B, at block 865 the guide generator 208
is configured to determine whether the plane generated at block 860
has a predefined orientation. The predefined orientation reflects
the expected orientation of the shelf edges 118 relative to the
depth camera, as discussed above. Therefore, in the present
example, the predefined orientation is perpendicular to the Z axis
(i.e., parallel to the XY plane in the image frame of reference).
At block 865, therefore, the guide generator 208 is configured to
determine an error metric between the orientation of the plane 1008
and the predefined orientation. The plane orientation and the
predefined orientation may be represented by the guide generator as
normal vectors to each of the planes, which may be compared as
discussed earlier in connection with block 660.
[0064] When the determination at block 865 is negative, the plane
generated at block 860 is discarded, and the guide generator 208
determines at block 870 whether any depth ranges remain to be
assessed. When the determination at block 865 is affirmative,
however, as in the case of the plane 1008 shown in FIG. 10B, the
guide generator 208 is configured to proceed to block 875. At block
875, the guide generator 208 is configured to determine whether the
population of the candidate set of depth measurements that are
inliers of the plane generated at block 860 is greater than the
currently stored best plane. In the present example performance of
the method 850, no best plane has been stored, and the
determination is therefore affirmative at block 875. The guide
generator 208 is therefore configured to proceed to block 880 and
to update a best plane indicator in the memory 122 to correspond to
the plane generated at block 860. The plane indicator stored in the
memory 122 includes any suitable set of parameters defining the
plane (e.g., a normal vector and a depth).
[0065] The guide generator 208 is then configured to determine, at
block 870, whether any depth ranges remain to be assessed. In the
present example performance, a second depth range remains to be
assessed, as shown in FIG. 10C by way of a second subset 1016 of
the candidate set of depth measurements 1000. The guide generator
208 is configured to select the next depth range at block 855, to
fit a further plane (shown as a plane 1018 in FIG. 10C) to the
subset of candidate depth measurements within the depth range at
block 860, and to determine whether the plane 1018 has the
predefined orientation at block 865. In the example shown in FIG.
10C, the plane 1018 does not have the predefined orientation, and
the guide generator 208 is therefore configured to discard the
plane 1018 and proceed to block 870. As will also be apparent from
FIG. 10C, even if the plane 1018 matched the predefined
orientation, the number of candidate depth measurements intersected
by the plane 1018, indicated at 1020, is lower than the number of
candidate depth measurements intersected by the plane 1008. The
determination at block 875 would therefore also have been negative,
resulting in the plane 1018 being discarded.
[0066] When all depth ranges have been assessed, a negative
determination at block 870 leads the guide generator 208 to pass
the current best plane (e.g. as a normal vector and a depth) to the
output detector 212 for further processing at block 320 of the
method 300.
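For illustration, blocks 855 through 880 of the method 850 may be condensed into the following Python sketch; the plane-fitting routine (a centroid-and-SVD least-squares fit) and both numeric thresholds are stand-ins, as the application does not prescribe a particular fitting algorithm or threshold values.

```python
# A condensed sketch of blocks 855-880 of the method 850. The SVD-based
# plane fit and both numeric thresholds are assumptions for illustration.
import numpy as np

ORIENTATION_TOL_DEG = 5.0  # assumed angular tolerance
INLIER_DISTANCE = 0.01     # assumed point-to-plane inlier threshold (meters)

def fit_plane(points: np.ndarray):
    """Least-squares plane through points: returns (unit normal n, depth d)
    such that the plane satisfies dot(n, p) = d."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]  # direction of least variance is the plane normal
    return normal, float(normal @ centroid)

def select_best_plane(candidates: np.ndarray, depth_ranges, expected_normal):
    """candidates: (N, 3) candidate depth measurements; depth_ranges: an
    iterable of (low, high) depth bounds; expected_normal: unit vector.
    Returns (normal, d, inlier count) for the best plane, or None when
    every plane is discarded."""
    best = None
    for low, high in depth_ranges:                                 # block 855
        subset = candidates[(candidates[:, 2] >= low) & (candidates[:, 2] < high)]
        if len(subset) < 3:
            continue  # too few measurements in this depth range to fit a plane
        normal, d = fit_plane(subset)                              # block 860
        angle = np.degrees(np.arccos(min(1.0, abs(float(normal @ expected_normal)))))
        if angle > ORIENTATION_TOL_DEG:                            # block 865
            continue  # discard planes lacking the predefined orientation
        count = int((np.abs(candidates @ normal - d) <= INLIER_DISTANCE).sum())
        if best is None or count > best[2]:                        # blocks 875-880
            best = (normal, d, count)
    return best  # passed to the output detector after block 870
```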
[0067] Returning to FIG. 3, at block 320 the output detector 212 is
configured to detect an output set of the depth measurements that
are likely to correspond to the shelf edge 118 from the candidate
set of depth measurements. In other words, the output set of depth
measurements detected at block 320 identifies the locations of the
shelf edges 118 in the captured data. The output set of depth
measurements are detected according to a proximity between each
candidate depth measurement and the guide element generated at
block 315. Referring to FIG. 11A, when the depth measurements are
captured via the depth camera 504, the detection of the output set
of depth measurements is performed by the output detector 212 at
block 1100 by selecting the inliers of the best plane determined
via the method 850. That is, recalling that the plane 1008 in FIG.
10B was selected as the best plane in the present example, the
detection of the output set of depth measurements (i.e., the
detection of the shelf edges 118) is performed by the output
detector 212 selecting the pixels falling within the regions 1012-1
and 1012-2. The output detector 212 is then configured to proceed
to block 325, as will be discussed further below.
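A minimal sketch of the inlier selection at block 1100 follows, assuming the depth image has already been converted to an (H, W, 3) array of 3D coordinates per pixel; the conversion and the inlier threshold are illustrative assumptions.

```python
# Sketch of block 1100: the output set is the set of pixels that are
# inliers of the best plane. The per-pixel 3D coordinates and the
# inlier threshold are assumptions for illustration.
import numpy as np

INLIER_DISTANCE = 0.01  # meters; assumed threshold

def plane_inlier_pixels(points: np.ndarray, normal: np.ndarray, d: float) -> np.ndarray:
    """points: (H, W, 3) per-pixel 3D coordinates in the image frame.
    Returns an (M, 2) array of (row, col) pixel coordinates on the plane."""
    distances = np.abs(points @ normal - d)  # point-to-plane distance per pixel
    rows, cols = np.nonzero(distances <= INLIER_DISTANCE)
    return np.stack([rows, cols], axis=1)
```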
[0068] Turning to FIG. 11B, when the depth measurements are
captured via the lidar sensor 404, the output detector 212 is
configured to perform block 320, in some examples, via the
performance of a method 1150. At block 1155, the output detector
212 is configured to select one of the scans processed in blocks
305-315, where multiple scans are available. For example, the
output detector 212 may be configured to select the scan 416-1.
Having selected a scan, the output detector 212 is configured to
determine distances between the guide element (e.g., the curve 900) and the previously selected minimum depths for the scan. In
other words, the curve 900 (which is generated based on the set of
scans 416 at block 315) is compared with the minimum depths per
sweep angle for each individual scan 416.
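The per-scan comparison at block 1155 may be sketched as follows, assuming the curve 900 is represented as polynomial coefficients over sweep angle; the polynomial form is an assumption, as the application requires only a distance per sweep angle.

```python
# Sketch of block 1155: compare the guide curve against one scan's
# minimum depth per sweep angle. Representing the curve as polynomial
# coefficients (evaluated with np.polyval) is an assumption.
import numpy as np

def curve_to_scan_distances(curve_coeffs, sweep_angles, min_depths) -> np.ndarray:
    """Absolute depth difference between the guide curve and each
    per-angle minimum depth of a single scan."""
    curve_depths = np.polyval(curve_coeffs, sweep_angles)
    return np.abs(np.asarray(min_depths) - curve_depths)
```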
[0069] At block 1160, the output detector 212 is configured to
select local minima among the distances determined at block 1155.
The local minima may be selected within a preconfigured window of sweep angles (e.g., one minimum distance may be selected among every five consecutive distances). Turning to FIG. 12A, a set of minimum
depths per sweep angle is shown, as selected from a scan 416 (e.g.
the scan 416-1). Example local minimum distances 1204-1 and 1204-2
are also illustrated between the curve 900 and ranges 1208-1 and
1208-2, respectively, of the minimum depth measurements. FIG. 12B
illustrates the distances between each of the minimum depth
measurements and the curve 900 (which itself is not shown in FIG.
12B), with the local minima 1204-1 and 1204-2 mentioned above
shown, along with further local minima 1204-3, 1204-4, 1204-5 and
1204-6. As will be apparent, additional local minima are also
identified, but are not labelled in FIG. 12B for simplicity of
illustration.
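For illustration, the selection at block 1160 may be sketched as one minimum per window of consecutive sweep angles, matching the five-distance example above; the tiled (non-overlapping) window is an assumed reading.

```python
# Sketch of block 1160: select one local minimum distance within each
# window of consecutive sweep angles. Non-overlapping windows of five
# distances are an assumed reading of the example above.
import numpy as np

def local_minima(distances: np.ndarray, window: int = 5) -> list:
    """Indices into `distances` of the per-window minimum distances."""
    indices = []
    for start in range(0, len(distances) - window + 1, window):
        indices.append(start + int(np.argmin(distances[start:start + window])))
    return indices
```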
[0070] Returning to FIG. 11B, at block 1165 the output detector 212
is configured to discard the selected minimum distances that exceed
a preconfigured distance threshold. The threshold is preconfigured
as a distance beyond which the likelihood of detected local minima
depicting a shelf edge 118 is low. For example, the threshold may
be preconfigured between zero and about ten centimeters. In other
examples, the threshold may be preconfigured as between zero and
about five centimeters. In further examples, as shown in FIG. 12B,
the threshold is preconfigured at about two centimeters, as
illustrated by the line 1212. At block 1165, therefore, the output
detector 212 is configured to discard the local minima 1204-1
through 1204-4, and to select the local minima 1204-5 and
1204-6.
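The filtering at block 1165 then reduces to a comparison against the preconfigured threshold, sketched below with the two-centimeter value of the FIG. 12B example.

```python
# Sketch of block 1165: discard local minima whose distance to the
# guide curve exceeds the preconfigured threshold.
DISTANCE_THRESHOLD = 0.02  # meters; the two-centimeter example above

def filter_by_distance(distances, minima_indices, threshold=DISTANCE_THRESHOLD):
    """Retain only the local minima within the distance threshold."""
    return [i for i in minima_indices if distances[i] <= threshold]
```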
[0071] At block 1170, the output detector 212 is configured to
determine whether any scans remain to be processed. When scans
remain to be processed, the performance of blocks 1155-1165 is
repeated for each remaining scan. When the determination at block
1170 is negative, the output detector 212 proceeds to block 1175.
At block 1175, the output detector 212 is configured to discard any
local minimum distances that fail to meet a detection threshold.
The detection threshold is a preconfigured number of scans in which
a local minimum must be detected at the same sweep angle in order
to be retained in the output set of depth measurements. For
example, if the detection threshold is three, and local minima are
selected for the sweep angle of -55 degrees for only two scans
(i.e., the remaining scans do not exhibit local minima at -55
degrees), those local minima are discarded. Discarding local minima
that do not meet the detection threshold may prevent the selection
of depth measurements for the output set that correspond to
measurement artifacts or structural anomalies in the shelf edges
118. In other examples, the performance of block 1175 may be
omitted.
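The detection threshold at block 1175 may be sketched as a count over scans, as below; matching sweep angles exactly is an assumed simplification, and in practice an angular tolerance may be required.

```python
# Sketch of block 1175: a local minimum is retained only when it is
# detected at the same sweep angle in at least DETECTION_THRESHOLD
# scans. Exact angle matching is an assumed simplification.
from collections import Counter

DETECTION_THRESHOLD = 3  # scans; matches the example above

def filter_by_scan_count(minima_per_scan):
    """minima_per_scan: one list of sweep angles per scan, each angle
    marking a retained local minimum. Returns the per-scan lists with
    under-detected angles discarded."""
    counts = Counter(angle for angles in minima_per_scan for angle in angles)
    return [
        [angle for angle in angles if counts[angle] >= DETECTION_THRESHOLD]
        for angles in minima_per_scan
    ]
```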
[0072] Following the performance of block 1175, or following the
negative determination at block 1170 if block 1175 is omitted, the
output detector 212 is configured to proceed to block 325.
[0073] At block 325, the output detector 212 is configured to store
the output set of depth measurements. The output set is stored, for example, in the repository in association with the captured data (e.g., the captured lidar scans 416 or depth images 516), and includes at least identifications of the individual depth measurements in the set. Thus, for lidar data, the
output set is stored as a set of sweep angle and line index
coordinates corresponding to the local minima selected at block
1160 and retained through blocks 1165 and 1175. For depth image
data, the output set is stored as pixel coordinates (e.g. X and Y
coordinates) of the inlier pixels identified at block 1100. The
output set, as stored in the memory 122, can be passed to further
downstream functions of the server or retrieved by such
functions.
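By way of illustration only, the stored output set may take a form such as the following records; the layout is an assumption, as the application requires only that the stored set identify the output depth measurements.

```python
# Illustrative record layouts for the stored output set at block 325.
# These dataclasses are an assumed layout, not prescribed above.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LidarOutputSet:
    """(sweep angle, line index) pairs for the retained local minima."""
    coordinates: List[Tuple[float, int]]

@dataclass
class DepthImageOutputSet:
    """(X, Y) pixel coordinates of the plane-inlier pixels."""
    coordinates: List[Tuple[int, int]]
```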
[0074] In the foregoing specification, specific embodiments have
been described. However, one of ordinary skill in the art
appreciates that various modifications and changes can be made
without departing from the scope of the invention as set forth in
the claims below. Accordingly, the specification and figures are to
be regarded in an illustrative rather than a restrictive sense, and
all such modifications are intended to be included within the scope
of present teachings.
[0075] The benefits, advantages, solutions to problems, and any
element(s) that may cause any benefit, advantage, or solution to
occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all
the claims. The invention is defined solely by the appended claims
including any amendments made during the pendency of this
application and all equivalents of those claims as issued.
[0076] Moreover, in this document, relational terms such as first
and second, top and bottom, and the like may be used solely to
distinguish one entity or action from another entity or action
without necessarily requiring or implying any actual such
relationship or order between such entities or actions. The terms
"comprises," "comprising," "has", "having," "includes",
"including," "contains", "containing" or any other variation
thereof, are intended to cover a non-exclusive inclusion, such that
a process, method, article, or apparatus that comprises, has,
includes, contains a list of elements does not include only those
elements but may include other elements not expressly listed or
inherent to such process, method, article, or apparatus. An element preceded by "comprises . . . a", "has . . . a", "includes . . .
a", "contains . . . a" does not, without more constraints, preclude
the existence of additional identical elements in the process,
method, article, or apparatus that comprises, has, includes,
contains the element. The terms "a" and "an" are defined as one or
more unless explicitly stated otherwise herein. The terms
"substantially", "essentially", "approximately", "about" or any
other version thereof, are defined as being close to as understood
by one of ordinary skill in the art, and in one non-limiting
embodiment the term is defined to be within 10%, in another
embodiment within 5%, in another embodiment within 1% and in
another embodiment within 0.5%. The term "coupled" as used herein
is defined as connected, although not necessarily directly and not
necessarily mechanically. A device or structure that is
"configured" in a certain way is configured in at least that way,
but may also be configured in ways that are not listed.
[0077] It will be appreciated that some embodiments may be
comprised of one or more generic or specialized processors (or
"processing devices") such as microprocessors, digital signal
processors, customized processors and field programmable gate
arrays (FPGAs) and unique stored program instructions (including
both software and firmware) that control the one or more processors
to implement, in conjunction with certain non-processor circuits,
some, most, or all of the functions of the method and/or apparatus
described herein. Alternatively, some or all functions could be
implemented by a state machine that has no stored program
instructions, or in one or more application specific integrated
circuits (ASICs), in which each function or some combinations of
certain of the functions are implemented as custom logic. Of
course, a combination of the two approaches could be used.
[0078] Moreover, an embodiment can be implemented as a
computer-readable storage medium having computer readable code
stored thereon for programming a computer (e.g., comprising a
processor) to perform a method as described and claimed herein.
Examples of such computer-readable storage mediums include, but are
not limited to, a hard disk, a CD-ROM, an optical storage device, a
magnetic storage device, a ROM (Read Only Memory), a PROM
(Programmable Read Only Memory), an EPROM (Erasable Programmable
Read Only Memory), an EEPROM (Electrically Erasable Programmable
Read Only Memory) and a Flash memory. Further, it is expected that
one of ordinary skill, notwithstanding possibly significant effort
and many design choices motivated by, for example, available time,
current technology, and economic considerations, when guided by the
concepts and principles disclosed herein will be readily capable of
generating such software instructions and programs and ICs with
minimal experimentation.
[0079] The Abstract of the Disclosure is provided to allow the
reader to quickly ascertain the nature of the technical disclosure.
It is submitted with the understanding that it will not be used to
interpret or limit the scope or meaning of the claims. In addition,
in the foregoing Detailed Description, it can be seen that various
features are grouped together in various embodiments for the
purpose of streamlining the disclosure. This method of disclosure
is not to be interpreted as reflecting an intention that the
claimed embodiments require more features than are expressly
recited in each claim. Rather, as the following claims reflect,
inventive subject matter lies in less than all features of a single
disclosed embodiment. Thus the following claims are hereby
incorporated into the Detailed Description, with each claim
standing on its own as a separately claimed subject matter.
* * * * *