U.S. patent application number 15/956,159, for self-checkout anti-theft vehicle systems and methods, was filed with the patent office on April 18, 2018, and published on 2019-01-31.
This patent application is currently assigned to Sbot Technologies Inc. The applicant listed for this patent is Sbot Technologies Inc. Invention is credited to Ahmed Beshry, Lin Gao, Yilin Huang, and Shiyuan Yang.
Publication Number: 20190034897
Application Number: 15/956159
Family ID: 65038046
Publication Date: 2019-01-31
United States Patent Application 20190034897
Kind Code: A1
Gao; Lin; et al.
January 31, 2019
Self-Checkout Anti-Theft Vehicle Systems and Methods
Abstract
Disclosed herein is a self-checkout anti-theft vehicle
system, comprising: a self-checkout vehicle having a plurality of
sensors and components implemented thereon, the self-checkout
vehicle being used by shoppers for storing selected merchandises in
a retail environment; and a centralized computing device. The
centralized computing device is configured to: obtain information
related to each merchandise selected and placed into the
self-checkout vehicle by a shopper by exchanging data with the
plurality of sensors and components via a first communication
network, identify each merchandise via a second, different
communication network based at least upon the information obtained
from the plurality of sensors and components, and process payment
information of each merchandise.
Inventors: Gao; Lin (Syosset, NY); Huang; Yilin (Fuzhou, CN); Yang; Shiyuan (Hangzhou, CN); Beshry; Ahmed (Calgary, CA)
Applicant: Sbot Technologies Inc., Brooklyn, NY, US
Assignee: Sbot Technologies Inc., Brooklyn, NY
Family ID: 65038046
Appl. No.: 15/956159
Filed: April 18, 2018
Related U.S. Patent Documents
Application Number: 62/537,140 (provisional); Filing Date: Jul 26, 2017
Current U.S. Class: 1/1
Current CPC Class: G07G 1/0036; G06Q 20/204; G07G 1/0081; G06K 7/1417; G07G 1/009; G07G 3/003; G06Q 20/18; G06Q 20/401; G06N 3/0454; G06Q 20/202; G06K 7/10722; G06N 3/08; G06Q 20/208; G07G 1/14; B62B 3/1412; G06N 3/0427; G06N 3/0445; G06N 5/022; B62B 3/1428; B62B 2203/50; G06N 5/003 (all 20130101)
International Class: G06Q 20/18 (20060101); G06N 3/08 (20060101); G06K 7/14 (20060101); G06K 7/10 (20060101); G06Q 20/40 (20060101)
Claims
1. A self-checkout anti-theft vehicle system, comprising: a
self-checkout vehicle having a plurality of sensors and components
implemented thereon, the self-checkout vehicle being used by
shoppers for storing selected merchandises in a retail environment;
and a centralized computing device configured to: obtain
information related to each merchandise selected and placed into
the self-checkout vehicle by a shopper by exchanging data with the
plurality of sensors and components via a first communication
network, identify each merchandise via a second, different
communication network based at least upon the information obtained
from the plurality of sensors and components, and process payment
information of each merchandise.
2. The self-checkout anti-theft vehicle system of claim 1, wherein
the centralized computing device is configured to identify each
merchandise by at least: training an image recognition neural
network deployed in the second, different communication network
using at least the information obtained from the plurality of
sensors and components; analyzing the information to determine a
plurality of parameters for identifying each merchandise in the
image recognition neural network; and performing at least one
object recognition process of each merchandise in the image
recognition neural network based at least on a portion of the
plurality of parameters.
3. The self-checkout anti-theft vehicle system of claim 2, wherein
the plurality of sensors and components comprise: at least one 2D
barcode scanner configured to: collect information relating to a
merchandise placed inside the self-checkout vehicle by at least
scanning a merchandise ID associated with the merchandise, and
transmit the information to the centralized computing device via
the first communication network for identifying the merchandise;
and at least one image recognition device configured to: collect
one or more images of the merchandise after the merchandise has
been placed inside the self-checkout vehicle, and transmit the one
or more images to the centralized computing device via the first
communication network to directly identify the merchandise or
determine whether the shopper has placed the correct item after
scanning the merchandise ID with the at least one 2D barcode
scanner.
4. The self-checkout anti-theft vehicle system of claim 3, wherein
the at least one 2D barcode scanner and the at least one image
recognition device are configured to automatically collect and
label merchandise image recognition information after the at least
one 2D barcode scanner scans the merchandise ID of the
merchandise.
5. The self-checkout anti-theft vehicle system of claim 3, wherein
the at least one image recognition device is configured to: collect
one or more images of a merchandise after the merchandise has been
placed inside the self-checkout vehicle or upon detecting that the
merchandise is being placed into the self-checkout vehicle, and
transmit the one or more images to the centralized computing device
via the first communication network.
6. The self-checkout anti-theft vehicle system of claim 4, wherein
the plurality of sensors and components comprise at least one light
curtain configured to: trigger the at least one image recognition
device to collect the one or more images.
7. The self-checkout anti-theft vehicle system of claim 2, wherein
the plurality of sensors and components comprise at least one
weight sensor configured to: determine weight information related
to the merchandise, and transmit the determined weight information
to the centralized computing device via the first communication
network.
8. The self-checkout anti-theft vehicle system of claim 2, wherein
the plurality of sensors and components comprise at least one
sensor configured to: determine shape information related to the
merchandise, and transmit the shape information to the centralized
computing device via the first communication network.
9. The self-checkout anti-theft vehicle system of claim 2, wherein
the plurality of sensors and components comprise at least one
sensor configured to: determine location information related to the
merchandise, and transmit the location information to the
centralized computing device via the first communication
network.
10. The self-checkout anti-theft vehicle system of claim 2, wherein
the plurality of sensors and components comprise at least one
motion sensor configured to: triangulate motion and incline
information of the self-checkout vehicle, and transmit the motion
and incline information to the centralized computing device.
11. The self-checkout anti-theft vehicle system of claim 2, wherein
the centralized computing device is further configured to perform
multiple object recognition processes of each merchandise in the
image recognition neural network based on different combinations of
the plurality of parameters.
12. The self-checkout anti-theft vehicle system of claim 1, wherein
the plurality of sensors and components comprise at least one
locking device monitored and controlled by the centralized
computing device for preventing the shopper from taking any
merchandise out of the self-checkout vehicle without payment.
13. The self-checkout anti-theft vehicle system of claim 12,
further comprising one or more shopping bags for storing the
selected merchandises placed into the self-checkout vehicle, each
shopping bag being attached to the at least one locking device.
14. The self-checkout anti-theft vehicle system of claim 1,
wherein, via the first communication network, the centralized
computing device is further configured to: process data received
from each of the plurality of sensors and components, update a
status of each of the plurality of sensors and components, and send
and receive one or more mechanical commands to trigger a specific
action in the self-checkout vehicle.
15. The self-checkout anti-theft vehicle system of claim 1, wherein,
via the second, different communication network, the centralized computing
device is further configured to: send or receive timely information
updates relating to inventory, coupon, promotions, or stock
availability; verify a payment status of each merchandise in the
self-checkout vehicle; and send or receive customer information,
location and receipts.
16. The self-checkout anti-theft vehicle system of claim 1, further
comprising: at least one pathway configured to control and direct
at least the self-checkout vehicle to a check-out location via the
second, different communication network; and another pathway
positioned at the check-out location, and controlled by the
centralized computing device to verify payment information of the
merchandise as the shopper walks through the at least one pathway.
Description
CROSS REFERENCE TO RELATED APPLICATION
[0001] This application claims the benefit of U.S. Provisional
Patent Application No. 62/537,140, filed Jul. 26, 2017, which is
incorporated herein by reference.
TECHNICAL FIELD
[0002] The present disclosure relates generally to self-checkout
anti-theft systems and methods, and more specifically, to network
connections, sensor fusion and the mechanical structure of a
self-checkout vehicle.
BACKGROUND
[0003] Currently, an increasing number of convenience stores,
grocery markets and retail outlets utilize self-checkout kiosks to
allow customers to self-service their checkout. The benefit of
self-checkout is apparent: grocers are able to save cashier labor
while helping to reduce customer wait time by opening additional
cash wraps. Despite these benefits, shoppers often encounter technical
difficulties, require staff assistance and still line up at
self-checkout registers at busy times.
[0004] In order to provide a better shopping environment for
customers in physical stores, a seamless self-checkout format is
needed. Since customers conventionally use a shopping cart or a
shopping basket during their store visit, it is more desirable if
customers can directly purchase and bag their purchased goods in
their shopping vehicles and directly walk out of the store
thereafter. In the meantime, necessary anti-theft measures need to
be implemented in such self-checkout vehicles to ensure the
interests of the grocers are protected.
BRIEF SUMMARY OF THE INVENTION
[0005] The self-checkout anti-theft systems and methods disclosed
herein provide a holistic checkout experience that also prevents
theft. In one aspect, the disclosed system contemplates, among
other features, a centralized computing device that communicates
with all the sensors and mechanical structures in the self-checkout
vehicle and acts as the command center. The centralized computing
device may be connected to an in-store and/or external network
through wireless connection devices, including but not limited to
Wi-Fi, Bluetooth, Zigbee and the like. The external network
connection may allow the centralized computing device to, including
but not limited to: 1) send or receive timely information updates
relating to inventory, coupon, promotions, stock availability and
the like; 2) verify payment status of merchandise in the cart; 3)
process payments; 4) identify item information based on image
processing; and 5) send or receive customer information and
receipts. The centralized computing device may also communicate
with internal sensors or mechanical devices through wired
connections or wireless connection devices via an internal network
such as Wi-Fi, Bluetooth, Zigbee and the like. The internal network
connection may allow the centralized computing device to, including
but not limited to: 1) send or receive data from sensors for
further processing; 2) communicate between the sensors to
triangulate merchandise information; 3) update status of vehicle
components; and 4) send or receive mechanical commands to trigger a
specific action in the self-checkout vehicle.
[0006] In accordance with aspects of the present application, a
self-checkout anti-theft vehicle system is disclosed. The system
comprises: a self-checkout vehicle having a plurality of sensors
and components implemented thereon, the self-checkout vehicle being
used by shoppers for storing selected merchandises in a retail
environment; and a centralized computing device. The centralized
computing device is configured to: obtain information related to
each merchandise selected and placed into the self-checkout vehicle
by a shopper by exchanging data with the plurality of sensors and
components via a first communication network, identify each
merchandise via a second, different communication network based at
least upon the information obtained from the plurality of sensors
and components, and process payment information of each
merchandise.
BRIEF DESCRIPTION OF THE FIGURES
[0007] For a more complete understanding of the example aspects,
references are made to the following descriptions taken in
connection with the accompanying drawings in which:
[0008] FIG. 1 illustrates a self-checkout anti-theft system, in
accordance with aspects of the present disclosure;
[0009] FIG. 2 illustrates a perspective view of a self-checkout
vehicle, in accordance with an example aspect of the present
disclosure;
[0010] FIG. 3 illustrates another perspective view of a
self-checkout vehicle, in accordance with an example aspect of the
present disclosure;
[0011] FIG. 4 illustrates a deep learning neural network of a
self-checkout anti-theft system, in accordance with an example
aspect of the present disclosure; and
[0012] FIG. 5 illustrates an example computer system through which
the disclosed aspects of the systems and methods may be
implemented.
[0013] The drawings referred to in this description are not to be
understood as being drawn to scale except if specifically noted,
and such drawings are only exemplary in nature.
DETAILED DESCRIPTION
[0014] Among other features, disclosed herein is a self-checkout
anti-theft system. Various aspects of the present disclosure will
be described with reference to the drawings, wherein like reference
numerals are used to refer to like elements throughout. In the
following description, for purposes of explanation, numerous
specific details are set forth in order to promote a thorough
understanding of one or more aspects of the disclosure. It may be
evident in some or all instances, however, that any aspects
described below can be practiced without adopting the specific
design details described below. In other instances, well-known
structures and devices are shown in block diagram form in order to
facilitate description of one or more aspects of the
disclosure.
[0015] FIG. 1 shows a self-checkout anti-theft system 100, in
accordance with aspects of the present disclosure. System 100 may
comprise a self-checkout vehicle 102 that may be used by a shopper
in a retail environment (e.g., a department store, a supermarket)
for storing and identifying at least one selected merchandise, and
facilitating a transaction of the selected merchandise without
requiring the shopper to go to a traditional check-out counter,
station, or location for payment. The term "vehicle" may refer to
any portable or movable physical structure supplied by a retailer
for use by its customers inside the retail environment, such as a
wheeled shopping cart in various sizes, a hand-held shopping
basket, or a wheelchair/motorized vehicle integrated with a
shopping receptacle for use by handicapped or disabled
shoppers.
[0016] The self-checkout vehicle 102 may comprise at least one
hardware processor 104 configured to execute and control a
plurality of sensors and components implemented thereon for
collecting and processing information related to each merchandise
selected and placed into the self-checkout vehicle 102 by a
shopper. In an aspect, the plurality of sensors and components may
comprise a barcode scanner 106, an image recognition sensor 108, a
weight sensor 110, a locking device 112, and other sensors and
components 114. Via various I/O components (not shown), the
processor 104 may be coupled to memory 116 which includes computer
storage media in the form of volatile and/or nonvolatile memory for
executing machine executable instructions stored thereon. The
memory 116 may be removable, non-removable, or a combination
thereof.
[0017] As also shown in FIG. 1, the self-checkout vehicle 102 may
communicate with a centralized computing device 124 via a first
communication network 120 that is configured to, e.g., transmit and
receive data to and from the plurality of sensors and components of
the self-checkout vehicle 102 for further processing, communicate
between these sensors and components to triangulate merchandise
information, update a status of each sensor and component, and
transmit and receive mechanical commands to trigger a specific
action in the self-checkout vehicle 102. The aforementioned
plurality of sensors and components can extract necessary
merchandise-based information, such as location, weight and partial
barcode capture in order to reduce the search parameters of, e.g.,
an image recognition neural network 400 which will be described
fully below in relation to FIG. 4.
[0018] It is to be appreciated that the self-checkout anti-theft
system 100 may include any suitable and/or necessary interface
components (not shown), which provide various adapters, connectors,
channels, communication paths, to facilitate exchanging signals and
data between various hardware and software components of the
self-checkout vehicle 102, the centralized computing device 124,
and any applications, peer devices, remote or local server
systems/service providers, additional database system(s), and/or
with one another that are available on or connected via the
underlying network 120 and associated communication channels and
protocols 118a, 118b (e.g., Internet, wireless, LAN, cellular,
Wi-Fi, WAN).
[0019] Moreover, the centralized computing device 124 may be
deployed in a second, different communication network 122 to
communicate with a plurality of computing devices associated with,
for example, a retailer inventory and point of sale (POS) system or
any third party database/system/server 126a-c, such that the
centralized computing device 124 may be configured to transmit or
receive timely information updates relating to a retailer's
inventory, coupon, promotions, stock availability and the like,
verify payment status of merchandise in the self-checkout vehicle
102, process payments, identify merchandise information based at
least on, e.g., image processing, and send or receive customer
information and receipts.
[0020] More specifically, referring to FIG. 2, a barcode scanner
202 (i.e., barcode scanner 106) of the self-checkout vehicle 102 of
FIG. 1 may be implemented to identify any merchandise selected and
placed into the self-checkout vehicle 102 by a shopper, according
to aspects of the present application. Generally, each merchandise
in a retail store may be associated with at least one unique
merchandise ID code. Examples of merchandise ID code may include,
but are not limited to, a bar code, universal product code (UPC),
quick response (QR) code, numeric code, alphanumeric code, or any
other two-dimensional (2D) image code or three-dimensional (3D)
image code. The barcode scanner 202 may accordingly include any
suitable type of circuitry for reading the unique merchandise ID
code. Examples of barcode scanner may comprise a pen-type scanner,
a laser scanner, a charge-coupled device (CCD) scanner, a
camera-based scanner, a video camera reader, a large field-of-view
reader, or an omnidirectional barcode scanner. In one aspect, the
barcode scanner 202 may be disposed or positioned on a selected
area of the self-checkout vehicle 102, as shown in FIG. 2.
Alternatively, the barcode scanner 202 may be implemented as a
stand-alone cordless and/or wireless electronic device that may be
detachably mounted on a specific self-checkout vehicle 102 during
use. Moreover, the barcode scanner 202 may be body mounted on the
shopper (e.g., as a wrist band) to leave her hands free to handle
the objects or goods being scanned or to deal with other
tasks.
[0021] According to an aspect of the present application, the
barcode scanner 202 may be configured to collect information
relating to the selected merchandise based on the merchandise ID
code which may include a machine-readable code in the form of
numbers and a pattern of parallel lines of varying widths, printed
on and identifying specific merchandise. For example, a linear or
1-dimensional (1D) UPC-A barcode may include two parts: the
machine-readable bars and a 12-digit UPC number. The first six
digits may be a
manufacturer's identification number. The next five digits may
represent the merchandise's number. The last number may be a check
digit which may enable the barcode scanner 202 to determine if the
barcode has been scanned correctly. A linear barcode typically
holds up to 85 characters of text. In
contrast, a 2D barcode is more complex (can store over 7,000
characters) and may include more information in the code such as
price, quantity, web address, expiration dates, or image.
Furthermore, engraved or applied to merchandise itself as a part of
the manufacturing process, a 3D barcode may include bars and/or
squares that are protrusive and can be felt when touched. The time
it takes the laser of the barcode scanner 202 to be reflected back
and be recorded may determine the height of each bar/square as a
function of distance and time, such that the character represented
by the 3D code may be interpreted. 3D barcodes may be a solution
for rectifying various problems, such as inaccurate pricing,
inventory errors, and overstocking, as it is difficult, if not
entirely impossible, to alter or obstruct the 3D barcode's
information.
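By way of a non-limiting illustration of the check-digit arithmetic described above (the disclosure itself prescribes no programming language), the following Python sketch validates a 12-digit UPC-A code; the function names and the sample code value are our own, not taken from the patent.

```python
def upc_a_check_digit(digits11: str) -> int:
    """Compute the UPC-A check digit for the first 11 digits: odd
    positions (1st, 3rd, ..., 11th) are weighted by 3, even positions
    by 1, and the check digit brings the total to a multiple of 10."""
    if len(digits11) != 11 or not digits11.isdigit():
        raise ValueError("expected 11 numeric digits")
    odd = sum(int(d) for d in digits11[0::2])   # 1st, 3rd, ..., 11th digits
    even = sum(int(d) for d in digits11[1::2])  # 2nd, 4th, ..., 10th digits
    return (10 - (3 * odd + even) % 10) % 10

def is_valid_upc_a(code: str) -> bool:
    """Return True if the 12th digit matches the computed check digit,
    i.e., the barcode has been scanned correctly."""
    return (len(code) == 12 and code.isdigit()
            and upc_a_check_digit(code[:11]) == int(code[-1]))

print(is_valid_upc_a("036000291452"))  # a well-known valid UPC-A -> True
```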
[0022] When using a 2D barcode, for example, the barcode scanner
202 may read the symbols of the merchandise ID code and convert or
decode them into information such as the merchandise's origin,
price, type, location, expiration date, etc. In one aspect,
processing circuitry in or associated with the barcode scanner 202
may be configured to provide a raw signal proportional to signal
intensities detected while scanning the merchandise ID code with
limited or no decoding performed within the scanner 202. Rather,
the raw signal may be transmitted to the centralized computing
device 124 via the first communication network 120 for identifying
the merchandise, thereby achieving a more compact design and
implementation of the barcode scanner 202. Accordingly, the
centralized computing device 124 may be configured to process the
obtained information regarding the merchandise received from the
barcode scanner 202 based at least on the merchandise ID code,
correlate such information with at least data stored in various
database/system/server 126a-c in order to, e.g., identify the
merchandise, update a retailer's inventory and stock availability
information associated with database/system/server 126a-c,
determine appropriate coupons and promotions for distributing to
the shopper, and facilitate payment processing if the merchandise
is checked out by the shopper.
[0023] In accordance with another aspect of the present
application, the barcode scanner 202, in conjunction with image
recognition sensors 108, may be used to facilitate a deep learning
image labeling data collection process. For example, when a shopper
decides to purchase an item, he/she may scan the merchandise ID
code and then proceed to put the item into the self-checkout
vehicle 102. During the process of putting the merchandise into the
self-checkout vehicle 102, image recognition sensors 108 may be
continuously collecting image data of the merchandise. As the
shopper already used the barcode scanner 202 to scan the
merchandise ID, the centralized computing device 124 may be
configured to identify the product and automatically label the
associated images collected from the image recognition sensors 108
with that specific merchandise ID. The labeled image data may be
then leveraged for training, e.g., the neural network 400 of FIG.
4 for eventual scan-less identification applications.
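To make this data-collection flow concrete, below is a minimal sketch of the automatic labeling step, under assumed interfaces: a scan event supplies the merchandise ID and the image sensor delivers frames. Every name here (AutoLabeler, on_barcode_scan, the 3-second window) is hypothetical rather than from the disclosure.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LabeledImage:
    merchandise_id: str  # ID decoded by the barcode scanner 202
    frame: bytes         # raw image bytes from the recognition sensors 108
    timestamp: float

@dataclass
class AutoLabeler:
    """Pairs barcode scans with camera frames to grow a labeled training set."""
    window_s: float = 3.0  # frames this soon after a scan inherit its label
    dataset: list = field(default_factory=list)
    _last_scan: Optional[tuple] = None  # (merchandise_id, scan_time)

    def on_barcode_scan(self, merchandise_id: str) -> None:
        self._last_scan = (merchandise_id, time.time())

    def on_camera_frame(self, frame: bytes) -> None:
        # Label frames captured while the just-scanned item is being placed.
        if self._last_scan is None:
            return
        merchandise_id, scan_time = self._last_scan
        if time.time() - scan_time <= self.window_s:
            self.dataset.append(LabeledImage(merchandise_id, frame, time.time()))
```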
[0024] According to yet another aspect of the present application,
referring back to FIG. 1, the image recognition sensor 108 may be
configured to: collect one or more images of the merchandise after
the merchandise has been placed inside the self-checkout vehicle
102, and transmit the images to the centralized computing device
124 via the first communication network 120 for determining whether
the shopper has placed the correct item after scanning the
merchandise with the barcode scanner 106. Specifically, the
centralized computing device 124 may utilize the computation
resources of the associated database/system/server 126a-c to
contemplate a deep learning system for training various neural
network models based at least on merchandise images received from
the self-checkout vehicle 102 for object detection and recognition
purposes. For example, as shown in FIGS. 2 and 3, data such as
images and video data obtained from cameras 204 and/or 302 of the
self-checkout vehicle 102 may be used by the centralized computing
device 124 and associated database/system/server 126a-c to form a
distributed neural network 400 of FIG. 4, for real-time data
analysis including segmentation, object detection, tracking,
recognition, or the like. Such distributed neural network may be
scalable to exchange data with additional devices/sensors and any
other suitable neural network such as a convolutional neural
network (CNN), a deep neural network (DNN), recurrent convolutional
neural network (RCNN). As shown in FIG. 4, the network 400 may
include an input layer on an input end, a sequence of interleaved
convolutional layers and subsampling layers, and a fully connected
layer at an output end. When a merchandise is added into the
self-checkout vehicle 102, circuitry of the input layer module of
the network 400 may be triggered to obtain still image data, video
frame data, or any available data of the merchandise captured and
transmitted by the cameras 204 and/or 302. In one aspect,
normalized image data in the red-green-blue color space may serve
as inputs to the network 400. The input data may comprise a variety
of different parameters of each merchandise including but not
limited to the shape, size, colors, and text information printed on
each merchandise. The network 400 may be configured to extract
merchandise features based on the input data, perform object
detection and tracking of each merchandise, and correlate with
various merchandise specific information stored in at least one of
the associated database/system/server 126a-c (e.g., a retailer's
inventory database).
[0025] More specifically, a convolutional layer may receive data
from the input layer in order to generate feature maps. For
example, an input to a convolutional layer may include an
m×m×r image where m is the height and width of the
image (measured in pixels) and r is the number of channels, e.g., an
RGB image has r=3. The convolutional layer may have k filters (or
kernels) of size n×n×q where n is smaller than the
dimension of the image and q may either be the same as the number
of channels r or smaller and may vary for each kernel. The size of
each filter gives rise to locally connected structures which are
each convolved with the image to produce k feature maps of size
(m-n+1)×(m-n+1). Each map is then subsampled by a subsampling
layer, typically with mean or max pooling over p×p contiguous
regions, where p may range between 2 for small images and usually
not more than 5 for larger inputs. For example, max-pooling may
provide for non-linear down-sampling of feature maps to generate
subsampled feature maps. In an aspect, a subsampling layer may
apply max-pooling by partitioning feature maps into a set of
non-overlapping portions and providing a maximum value for each
portion of the set. Either before or
after a subsequent subsampling layer, an additive bias and
sigmoidal nonlinearity may be applied to each feature map. For
example, units of the same color may have been assigned the same
weights. Any number of convolutional layers and subsampling layers
may be added into the network 400 for generating and providing
subsampled feature maps to the fully connected layer. The fully
connected layer may use, e.g., a softmax activation function on
the feature maps output from a preceding convolutional layer or
subsampling layer to classify the original input image into various
classes based on a training dataset stored on one of the associated
database/system/server 126a-c. For example, possible outputs from
the fully connected layer may indicate at least one of: a value
representing the volume of a product; a value about at least one
outer dimension of a product; a value representative of the
geometrical shape of a product; a value representative of
geometrical relations of a product, such as a relation between at
least two of width, height and length; a set of at least two values
related to colors of a product; a set of values related to the area
which at least one specific color takes up in a product including
the percentage that areas with a certain color take up with respect
to at least one side of the outer surface of the product; data
related to the color taking up the biggest fraction, optionally the
second biggest fraction, etc. of at least one side of the outer
surface of the product. Thereafter, the neural network 400 may
perform object detection based at least on the outputs from the
fully connected layer and the merchandise specific information
stored in at least one of the associated database/system/server
126a-c (e.g., a retailer's inventory database) to determine whether
the shopper has placed the correct item after scanning the
merchandise with the barcode scanner 202.
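As a concrete rendering of the architecture just described, the following PyTorch sketch stacks interleaved convolutional and max-pooling layers ahead of a fully connected softmax classifier. The filter counts, kernel sizes, input resolution, and number of merchandise classes are illustrative assumptions only, not values from the disclosure.

```python
import torch
import torch.nn as nn

class MerchandiseNet(nn.Module):
    """Minimal CNN in the spirit of network 400: interleaved convolutional
    and subsampling layers feeding a fully connected softmax classifier."""

    def __init__(self, num_classes: int = 1000):  # assumed catalog size
        super().__init__()
        self.features = nn.Sequential(
            # Input: normalized RGB image, so r = 3 channels.
            nn.Conv2d(3, 16, kernel_size=5), nn.ReLU(),  # k=16 filters, n=5
            nn.MaxPool2d(2),                             # p=2 max pooling
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # 224 -> 220 -> 110 -> 106 -> 53 per side after the layers above.
        self.classifier = nn.Linear(32 * 53 * 53, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x)                    # subsampled feature maps
        logits = self.classifier(feats.flatten(1))  # fully connected layer
        return torch.softmax(logits, dim=1)         # softmax class scores

# Example: classify one 224x224 RGB merchandise image.
probs = MerchandiseNet()(torch.randn(1, 3, 224, 224))
```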
[0026] Alternatively, according to another aspect of the present
application, referring back to FIG. 1, the image recognition sensor
108 of the self-checkout vehicle 102 may be configured to: collect
one or more images of a merchandise after the merchandise has been
placed inside the self-checkout vehicle 102 or upon detecting that
the merchandise is being placed into the self-checkout vehicle 102,
and transmit the images to the centralized computing device 124 via
the communication network 120. That is, without requiring the
shopper to scan each merchandise, other sensors and components 114
of the self-checkout vehicle 102 may comprise one or more motion
sensors configured to monitor and track movements relating to
merchandise placement into or removal from the vehicle (e.g., via
triangulation), and capture and transmit merchandise images to the
centralized computing device 124 for object detection and
recognition. For example, the centralized computing device 124 may
implement the neural network 400 of FIG. 4 to extract various
features of each merchandise image via a plurality of interleaved
convolutional layers and sub-sampling layers and identify each
merchandise based on the extracted features, via, e.g., the fully
connected layer. In one aspect, at least a portion of the neural
network 400 may be configured to form a scalable end-to-end
distributed neural network framework that may be used in various
different contexts such as shopper facial recognition and/or voice
recognition, or other cloud-based deep learning systems for
retailer inventory management or shopping behavior analysis.
[0027] It should be appreciated that, in addition to the deep
learning based object detection and recognition techniques
described above, the self-checkout anti-theft system 100 of FIG. 1
may contemplate, for example, rigid or deformable template matching
based methods, knowledge based methods, object based image analysis
methods, or any other suitable methods. In one aspect, template
matching based methods generally include generating and storing a
template for each to-be-detected object class (e.g., each
merchandise in a store) by hand-crafting or learning from specific
training set, and comparing an object image and the stored
templates at a number of defined positions to measure similarity
and locate the best matches via allowable translation, rotation,
and scale changes. The most popular similarity measures may include
the sum of absolute differences (SAD), the sum of squared
differences (SSD), the normalized cross correlation (NCC), and the
Euclidean distance (ED).
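The similarity measures listed above are simple to state in code. The following NumPy sketch (function names ours; patch and template assumed to be same-sized grayscale arrays) computes SAD, SSD, and NCC, and slides a template over an image to locate the best match.

```python
import numpy as np

def sad(patch: np.ndarray, template: np.ndarray) -> float:
    """Sum of absolute differences: lower is more similar."""
    return float(np.abs(patch.astype(float) - template.astype(float)).sum())

def ssd(patch: np.ndarray, template: np.ndarray) -> float:
    """Sum of squared differences: lower is more similar."""
    return float(((patch.astype(float) - template.astype(float)) ** 2).sum())

def ncc(patch: np.ndarray, template: np.ndarray) -> float:
    """Normalized cross correlation: 1.0 is a perfect match."""
    p = patch.astype(float) - patch.mean()
    t = template.astype(float) - template.mean()
    return float((p * t).sum() / (np.linalg.norm(p) * np.linalg.norm(t)))

def best_match(image: np.ndarray, template: np.ndarray) -> tuple:
    """Slide the template over the image; return (row, col) of best NCC."""
    h, w = template.shape
    scores = {(y, x): ncc(image[y:y + h, x:x + w], template)
              for y in range(image.shape[0] - h + 1)
              for x in range(image.shape[1] - w + 1)}
    return max(scores, key=scores.get)
```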
[0028] Further, knowledge based object detection methods may focus
on encoding specific shape or geometric information of a
merchandise and spatial constraints or relationships between the
merchandise and its background (specific location inside a store)
to establish prior knowledge and detection rules for various
hypotheses. Subsequently, an input image may be compared against
the hypotheses via at least a set of selected search parameters
within the neural network 400 thereby significantly reducing object
recognition time. For example, instead of searching all of the
available merchandise images associated with a store upon receiving
at least one input image of a merchandise from the image
recognition sensor 108 of the self-checkout vehicle 102, the
centralized computing device 124 may also simultaneously receive
the location data of the self-checkout vehicle 102 within the store
(e.g., a specific side of an aisle of the store, or the counter
location of a deli department of the store). Such location data may
be determined by the other sensors and components 114 of the
self-checkout vehicle 102 via a global positioning system (GPS)
transceiver or any suitable locator apparatus. That is, the
self-checkout vehicle 102 may be equipped with a GPS or similar
device to pinpoint the exact location of the self-checkout vehicle
102 within the store, or calculate a triangulated position based on
how quickly the other sensors and components 114 respond to
different signals broadcast by different base stations deployed
within the store. Based at least upon the received location data of
the self-checkout vehicle 102 and store merchandise layout
information, the centralized computing device 124 may be configured
to search a portion of all available merchandise images stored in
the neural network 400, focusing on merchandise satisfying a
limited set of parameters. Thereafter, to further narrow down the
search results and resolve ambiguity, the centralized computing
device 124 may be configured to rely on other available merchandise
information (e.g., the weight of the merchandise measured by the
weight sensor 110) to perform one or more searches within results
returned by a previous search effort to finally identify the
object placed in the self-checkout vehicle 102.
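The narrowing just described amounts to successive filters over the merchandise catalog. A minimal sketch, assuming hypothetical catalog fields (aisle, weight_g) and a weight tolerance chosen purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class CatalogItem:
    merchandise_id: str
    aisle: str       # shelf location from the store layout (assumed field)
    weight_g: float  # nominal item weight in grams (assumed field)

def narrow_candidates(catalog, cart_aisle, measured_weight_g, tol_g=15.0):
    """Knowledge-based narrowing: filter by cart location, then by weight."""
    nearby = [i for i in catalog if i.aisle == cart_aisle]
    return [i for i in nearby if abs(i.weight_g - measured_weight_g) <= tol_g]

catalog = [
    CatalogItem("SKU-0001", "aisle-7", 410.0),
    CatalogItem("SKU-0002", "aisle-7", 950.0),
    CatalogItem("SKU-0003", "aisle-2", 410.0),
]
# The cart reports aisle-7 and the scale reads 408 g: one candidate remains,
# so the image search only needs to confirm a single hypothesis.
print(narrow_candidates(catalog, "aisle-7", 408.0))
```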
[0029] To improve search speed and accuracy, in one aspect, the
centralized computing device 124 may be configured to
simultaneously perform multiple above-noted object recognition
operations with different search parameters within different
datasets of the neural network 400. For example, for misplaced
store items that have been chosen and placed in the self-checkout
vehicle 102 by a customer, a search based on the detected location
and weight of the merchandise may be supplemented by one or more
sequential or concurrent searches based on different search
parameters (e.g., a combination of detected unique merchandise ID
code and weight of the merchandise). Such additional searches may
be triggered in response to detecting a selected threshold value
for an on-going search has been exceeded. For example, in response
to detecting that 60% of an initial search of an input merchandise
image against a portion of merchandise images saved in the neural
network 400 based on location and weight information of the
merchandise yields less than 5 hits, the centralized computing
device 124 may be configured to initiate at least one additional
search based on a different combination of search parameters (e.g.,
a specific customer's shopping history and the unique merchandise
ID code of the merchandise). For another example, concurrent or
sequential additional searches may be performed within labeled
image data of merchandise that are included in in-store promotions
and collected from multiple shoppers during a selected period of
time (e.g., past three days).
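One plausible realization of these concurrent searches uses a thread pool: launch the primary parameter combination and, if it returns too few hits, fall back to a different combination. The helper names, the dict-based dataset, and the hit threshold below are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def search(dataset, **params):
    """Stand-in for a parameterized recognition search over labeled data."""
    return [item for item in dataset
            if all(item.get(k) == v for k, v in params.items())]

def recognize(dataset, location, weight, merchandise_id, min_hits=5):
    """Try location+weight first; add a differently parameterized search
    when the initial search yields fewer than min_hits results."""
    with ThreadPoolExecutor() as pool:
        hits = pool.submit(search, dataset, location=location,
                           weight=weight).result()
        if len(hits) < min_hits:
            hits += pool.submit(search, dataset,
                                merchandise_id=merchandise_id).result()
    return hits
```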
[0030] Moreover, an object based image analysis method may first
segment an image into a number of homogenous regions representing a
relatively homogeneous group of pixels by selecting desired shape,
scale, and compactness criteria. For example, the shape parameter
may define to which percentage the homogeneity of shape is weighted
against the homogeneity of spectral values. The compactness
parameter may include a sub-parameter of shape and is used to
optimize image objects with regard to compactness or smoothness.
The scale parameter may be used for controlling the internal
heterogeneity of the resulting objects and is therefore correlated
with their average size, i.e., a larger value of the scale allows a
higher internal heterogeneity, which increases the number of pixels
per object and vice versa. Once segments are generated, one may
extract object features, such as spectral information as well as
size, shape, texture, geometry, and contextual semantic features.
These features are then selected and fed to a classifier (e.g.,
membership function classifier, nearest neighbor classifier,
decision tree, neural network of FIG. 4) for classification.
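As one possible off-the-shelf realization of this pipeline (parameter values illustrative, not from the disclosure), the sketch below segments an image with scikit-image's felzenszwalb segmenter, extracts simple size, shape, and spectral features per segment, and feeds them to a nearest neighbor classifier.

```python
import numpy as np
from skimage.segmentation import felzenszwalb
from skimage.measure import regionprops
from sklearn.neighbors import KNeighborsClassifier

def segment_features(rgb: np.ndarray) -> np.ndarray:
    """Segment an RGB image into homogeneous regions and return one
    feature row (size, shape, compactness proxy, spectral mean) per region."""
    labels = felzenszwalb(rgb, scale=100, sigma=0.8, min_size=50)
    gray = rgb.mean(axis=2)
    rows = [[r.area, r.eccentricity, r.perimeter, r.mean_intensity]
            for r in regionprops(labels + 1, intensity_image=gray)]
    return np.asarray(rows)

# A nearest neighbor classifier (one of the classifiers named above) is
# trained on previously labeled segments, then applied to a new image:
clf = KNeighborsClassifier(n_neighbors=3)
# clf.fit(train_features, train_labels)
# predictions = clf.predict(segment_features(new_image))
```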
[0031] It should be appreciated that the image recognition neural
network 400 may have two form-factors: computing performed directly
on the self-checkout vehicle 102 via a graphics processing unit (GPU)
together with a central processing unit (collectively represented
by the processor 104 in FIG. 1); and computing performed in a local
server (e.g., the centralized computing device 124 of FIG. 1) which
may be configured to exchange information with the processor unit
104 of the self-checkout vehicle 102 via the first communication
network 120.
[0032] As shown in FIG. 2, the self-checkout vehicle 102 may also
use at least one light curtain or infrared/laser sensor 206 to
distinguish between a shopper's hand and an object, and
contemporaneously trigger at least one camera 204 or 302 to start
collecting images of each merchandise which is moving with respect
to a selected reference position of the self-checkout vehicle 102
(e.g., the upper rim of the vehicle), thereby indicating an
addition of merchandise to the self-checkout vehicle 102. Moreover,
at least one miniature radar (not shown) may be installed on the
self-checkout vehicle 102 to determine shape information related to
each merchandise, detect movements of each merchandise with respect
to the self-checkout vehicle 102, and transmit the captured
information to the centralized computing device 124 via the
communication network 120. In one aspect, a plurality of weight
sensors 208 may be additionally installed on the bottom of the
self-checkout vehicle 102 to continuously monitor the content
change of the self-checkout vehicle 102. Alternatively, a matrix of
pressure sensors mounted to a plate may be used to cover the bottom
of the self-checkout vehicle 102. As such, by checking signals of
pressure sensors of the plate, weight information of each added
merchandise may be derived.
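The weight derivation implied here is a simple matrix difference; a minimal sketch under an assumed per-cell reading interface (units and values illustrative):

```python
import numpy as np

def added_weight_g(before: np.ndarray, after: np.ndarray) -> float:
    """Derive an added item's weight from two pressure-plate readings:
    `before` and `after` are matrices of per-cell readings in grams from
    the plate covering the cart bottom; their difference isolates the
    newly placed merchandise."""
    return float((after - before).sum())

before = np.array([[120.0, 80.0], [95.0, 110.0]])  # cart contents so far
after = np.array([[121.0, 82.0], [300.0, 112.0]])  # after placing an item
print(f"added item weighs about {added_weight_g(before, after):.0f} g")  # 210 g
```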
[0033] As one or more merchandises are being added to the self-checkout
vehicle 102 at respective locations inside a retail store, a touch
screen 210 or 304 on the vehicle 102 may be used to display to the
shopper, e.g., a list showing the name, price and quantity of the
content. In one aspect, if the centralized computing device 124 has
stored thereon information regarding an individual's past shopping
records or habits, information may be transmitted by the
centralized computing device 124 to be displayed on the touch
screen 210 or 304 to show that a previously bought product may
currently be on sale, or that there is a specific offer about the
product. Other information such as store layout map, promotions, or
various marketing materials may be selected and displayed. Further,
if a merchandise is no longer needed and permanently removed from
the self-checkout vehicle 102, the shopper may use the touch screen
210 or 304 to delete the merchandise from the list. As described
previously, the centralized computing device 124 is configured to
continuously monitor the plurality of sensors and components of the
self-checkout vehicle 102. Any change detected by the
sensors/components with respect to the content of the self-checkout
vehicle 102 will be transmitted to the centralized computing device
124, and relevant information stored in the network 122 will be
updated by the centralized computing device 124 accordingly.
[0034] In one aspect, to spare the effort of reloading selected
merchandises into one or more shopping bags at the checkout, the
self-checkout vehicle 102 may have at least one shopping bag
attached to a locking device (not shown). Such locking device may
be controlled by the centralized computing device 124 to not only
keep the attached shopping bag maximally stretched at all times and
ensure that the shopping bag does not crumble or fold thereby
allowing a maximum viewing angle for the cameras 204 or 302, but
also prevent the shopper from taking any merchandise out of the
self-checkout vehicle 102 without payment. The locking device may
be a solenoid, an electronic switch, or any mechanical device
that allows physical locking and unlocking.
[0035] Moreover, the shopper may use the touch screen 210 or 304 to
initiate a final review of all the selected merchandise in the
self-checkout vehicle 102, and indicate her preferred payment
methods (e.g., credit card, internet payment accounts). The
centralized computing device 124 then communicates with appropriate
databases 126a-c to facilitate the transaction based on the
shopper's selected payment method. For example, a credit card
reader 212 or 306 may be installed on the self-checkout vehicle
102, and the touch screen 210 or 304 may be configured to display
shopper authentication information and credit card transaction
information. Specifically, when the shopper slides or inserts a
credit card through a slot, the credit card reader 212 or 306 may
obtain information stored on the card (e.g., account number,
account holder's name, expiration date, etc.) and encrypt this
information for payment processing at the centralized computing
device 124. Upon successful payment, the centralized computing
device 124 may prepare a purchase receipt that may be transmitted
to the shopper's mobile device(s) or printable at the store. In
addition, the shopping bag attached to the self-checkout vehicle
102 may be released from the locking device, such that the shopper
is allowed to carry the shopping bag within or out of the retail
store without triggering other anti-theft sensors. Moreover, the
centralized computing device 124 may reset all the sensors and
components of the self-checkout vehicle 102 after a completed
transaction.
[0036] As also shown in FIG. 2, a battery 214 may be installed on
the self-checkout vehicle 102 for powering various circuitry and
components. The battery may be located at the base as shown, or at
the handle of the vehicle 102, or elsewhere on the vehicle 102.
Alternatively or additionally, power may be generated by a charging
system, for example, a voltage generator which produces power from
motion of the self-checkout vehicle 102. The charging system may
charge the battery 214, which in turn powers other circuitry and
components of the vehicle 102. Further, one or more speed sensors
216 may be installed on the vehicle 102 for detecting any vehicle
movement. For example, when the vehicle 102 is moving, the data
obtained from the weight sensor 208 may not be accurate. As such,
when the speed sensors 216 detect that the vehicle 102 is moving,
the processor 104 of the vehicle 102 may temporarily disable part
of the vehicle functions, such as forbidding the addition of new items in
order to help adjust weight measurement by the weight sensor 208.
Alternatively, as one or more merchandises are being added, the
speed sensors 216 will detect the self-checkout vehicle movement
and inclination to normalize the data collected by the weight
sensor 208. As the self-checkout vehicle 102 is being moved within
its environment, the speed sensors 216 will detect changes in level
and speed and will be used to ensure the proper indication of the
product weight is displayed on the self-checkout vehicle 102.
Moreover, the speed sensors 216 will be used to detect changes in
customer activity and movement to subsequently determine when to
take a weight measurement of merchandise being added.
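The gating described in this paragraph reduces to a small guard: trust a weight reading only when the speed sensors report the cart as effectively stationary, and normalize for incline. The threshold, the cosine incline correction, and the interfaces below are our assumptions, not values from the disclosure.

```python
import math
from typing import Optional

STATIONARY_MM_S = 50.0  # assumed speed below which weight readings are trusted

def stable_weight_g(speed_mm_s: float, raw_weight_g: float,
                    incline_deg: float) -> Optional[float]:
    """Return a usable weight reading, or None while the cart is moving.

    While the vehicle moves, weight data is unreliable, so the measurement
    (and the addition of new items) is deferred; a cosine correction
    normalizes readings taken on an inclined floor."""
    if speed_mm_s > STATIONARY_MM_S:
        return None  # defer: forbid adding new items until the cart settles
    return raw_weight_g / math.cos(math.radians(incline_deg))

print(stable_weight_g(10.0, 408.0, 3.0))   # at rest on a slight slope -> ~408.6
print(stable_weight_g(400.0, 510.0, 0.0))  # moving -> None
```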
[0037] In accordance with yet another aspect of the present
application, at least one pathway may be implemented in the retail
store and configured to control and direct the self-checkout
vehicle 102 to a check-out location via, e.g., the communication
network 120. Further, a turnstile may be positioned at the
check-out location, and controlled by the centralized computing
device 124 to verify payment information of the merchandise as the
shopper walks through the turnstile.
[0038] FIG. 5 illustrates an example computing system 20 (which may
be a computer or a server) on which the disclosed systems and
method can be implemented according to an example aspect. It should
be appreciated that the detailed computing system 20 can correspond
to the self-checkout vehicle 102 or the centralized computing
device 124 provided to implement the algorithms described
above.
[0039] As shown, the computing system 20 includes at least one
processing unit 21 (e.g., a GPU, or a CPU, or a combination of
both), a system memory 22 and a system bus 23 connecting the
various system components, including the memory associated with the
central processing unit 21. The central processing unit 21 can
correspond to the processor 104 or the processor of the centralized
computing device 124 (not shown),
and the system memory 22 can correspond to memory 116 of FIG. 1,
according to an exemplary aspect. Furthermore, the system bus 23
may be realized as any bus structure known in the art, including a
bus memory or bus memory controller, a peripheral bus, and a local
bus able to interact with any other bus
architecture. The system memory includes read only memory (ROM) 24
and random-access memory (RAM) 25. The basic input/output system
(BIOS) 26 includes the basic procedures ensuring the transfer of
information between elements of the computing system 20, such as
those at the time of loading the operating system with the use of
the ROM 24.
[0040] The computing system 20, in turn, includes a hard disk 27
for reading and writing of data, a magnetic disk drive 28 for
reading and writing on removable magnetic disks 29 and an optical
drive 30 for reading and writing on removable optical disks 31,
such as CD-ROM, DVD-ROM and other optical information media. The
hard disk 27, the magnetic disk drive 28, and the optical drive 30
are connected to the system bus 23 across the hard disk interface
32, the magnetic disk interface 33 and the optical drive interface
34, respectively. The drives and the corresponding computer
information media are non-volatile modules for storage of
computer instructions, data structures, program modules and other
data of the computing system 20.
[0041] The present disclosure provides the implementation of a
system that uses a hard disk 27, a removable magnetic disk 29 and a
removable optical disk 31, but it should be understood that it is
possible to employ other types of computer information media 56
which are able to store data in a form readable by a computer
(solid state drives, flash memory cards, digital disks,
random-access memory (RAM) and so on), which are connected to the
system bus 23 via the controller 55.
[0042] The computing system 20 has a file system 36, where the
recorded operating system 35 is kept, and also additional program
applications 37, other program modules 38 and program data 39. The
user is able to enter commands and information into the computing
system 20 by using input devices (keyboard 40, mouse 42). Other
input devices (not shown) can be used: microphone, scanner, and so
on. Such input devices usually plug into the computing system 20
through a serial port 46, which in turn is connected to the system
bus, but they can be connected in other ways, for example, with the
aid of a parallel port, a game port or a universal serial bus
(USB). A monitor 47 or other type of display device is also
connected to the system bus 23 across an interface, such as a video
adapter 48. In addition to the monitor 47, the personal computer
can be equipped with other peripheral output devices (not shown),
such as loudspeakers, a printer, and so on.
[0043] The computing system 20 is able to operate within a network
environment, using a network connection to one or more remote
computers 49. The remote computer (or computers) 49 are also
computers or servers having most or all of the elements described
above with respect to the computing system 20. Other devices can
also be present in the computer network, such as routers, network
stations, peer devices or other network nodes. According to one
aspect, the remote computer(s) 49 can correspond to the
database/system/server 126a-c discussed above.
[0044] Network connections can form a local-area computer network
(LAN) 50, such as a wired and/or wireless network, and a wide-area
computer network (WAN). Such networks are used in corporate
computer networks and internal company networks, and they generally
have access to the Internet. In LAN or WAN networks, the computing
system 20 is connected to the local-area network 50 across a
network adapter or network interface 51. When networks are used,
the computing system 20 can employ a modem 54 or other modules for
providing communications with a wide-area computer network such as
the Internet. The modem 54, which is an internal or external
device, is connected to the system bus 23 by a serial port 46. It
should be noted that the network connections are only examples and
need not depict the exact configuration of the network, i.e., in
reality there are other ways of establishing a connection of one
computer to another by technical communication modules, such as
Bluetooth.
[0045] In various aspects, the systems and methods described herein
may be implemented in hardware, software, firmware, or any
combination thereof. If implemented in software, the methods may be
stored as one or more instructions or code on a non-transitory
computer-readable medium. Computer-readable medium includes data
storage. By way of example, and not limitation, such
computer-readable medium can comprise RAM, ROM, EEPROM, CD-ROM,
Flash memory or other types of electric, magnetic, or optical
storage medium, or any other medium that can be used to carry or
store desired program code in the form of instructions or data
structures and that can be accessed by a processor of a general
purpose computer.
[0046] In the interest of clarity, not all of the routine features
of the aspects are disclosed herein. It will be appreciated that in
the development of any actual implementation of the present
disclosure, numerous implementation-specific decisions must be made
in order to achieve the developer's specific goals, and that these
specific goals will vary for different implementations and
different developers. It will be appreciated that such a
development effort might be complex and time-consuming, but would
nevertheless be a routine undertaking of engineering for those of
ordinary skill in the art having the benefit of this
disclosure.
[0047] Furthermore, it is to be understood that the phraseology or
terminology used herein is for the purpose of description and not
of restriction, such that the terminology or phraseology of the
present specification is to be interpreted by the skilled in the
art in light of the teachings and guidance presented herein, in
combination with the knowledge of the skilled in the relevant
art(s). Moreover, it is not intended for any term in the
specification or claims to be ascribed an uncommon or special
meaning unless explicitly set forth as such.
[0048] The various aspects disclosed herein encompass present and
future known equivalents to the known modules referred to herein by
way of illustration. Moreover, while aspects and applications have
been shown and described, it would be apparent to those skilled in
the art having the benefit of this disclosure that many more
modifications than mentioned above are possible without departing
from the inventive concepts disclosed herein.
* * * * *