U.S. patent application number 11/381473 was filed with the patent office on 2006-05-03 and published on 2007-04-05 as application 20070077987 for gaming object recognition. This patent application is currently assigned to TANGAM GAMING TECHNOLOGY INC. The invention is credited to Maulin Gandhi and Prem Gururajan.
Application Number | 11/381473 |
Publication Number | 20070077987 |
Family ID | 37461021 |
Publication Date | 2007-04-05 |
United States Patent Application | 20070077987 |
Kind Code | A1 |
Gururajan; Prem; et al. | April 5, 2007 |
Gaming object recognition
Abstract
The present invention relates to a system and method for
identifying and tracking gaming objects. The system comprises an
overhead camera for capturing an image of the table, a detection
module for detecting a feature of the object on the image, a search
module for extracting a region of interest of the image that
describes the object from the feature, a feature space module for
transforming a feature space of the region of interest to obtain a
transformed region of interest, and an identity module comprising a
statistical classifier trained to recognize the object from the
transformed region. The search module is able to extract a region
of interest of an image from any detected feature indicative of its
position. The system may be operated in conjunction with a card
reader to provide two different sets of playing card data to a
tracking module, which may reconcile the provided data in order to
detect inconsistencies with respect to playing cards dealt on the
table.
Inventors: | Gururajan; Prem; (Kitchener, CA); Gandhi; Maulin; (Kitchener, CA) |
Correspondence Address: | BERESKIN AND PARR, 40 KING STREET WEST, BOX 401, TORONTO, ON M5H 3Y2, CA |
Assignee: | TANGAM GAMING TECHNOLOGY INC., Kitchener, CA N2M 5E7 |
Family ID: | 37461021 |
Appl. No.: | 11/381473 |
Filed: | May 3, 2006 |
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
60676936 | May 3, 2005 |
60693406 | Jun 24, 2005 |
60723481 | Oct 5, 2005 |
60723452 | Oct 5, 2005 |
60736334 | Nov 15, 2005 |
60760365 | Jan 20, 2006 |
60771058 | Feb 8, 2006 |
Current U.S. Class: | 463/22 |
Current CPC Class: | G07F 17/32 20130101; G07F 17/322 20130101; G07F 17/3232 20130101 |
Class at Publication: | 463/022 |
International Class: | A63F 9/24 20060101 A63F009/24 |
Claims
1. A system for identifying a gaming object on a gaming table
comprising: at least one overhead camera for capturing an image of
said table; a detection module for detecting a feature of said
object on said image; a search module for extracting a region of
interest of said image that describes said object from said
feature; a feature space module for transforming a feature space of
said region of interest to obtain a transformed region of interest;
and an identity module comprising a statistical classifier trained
to recognize said object from said transformed region.
2. The system of claim 1, wherein said feature space module
comprises a Principal Component Analysis module for transforming
said feature space according to principal component analysis
algorithms.
3. The system of claim 1, further comprising a dimensionality
reduction module for reducing said transformed region into a
reduced representation according to dimensionality reduction
algorithms, wherein said statistical classifier is trained to
recognize said object from said reduced representation.
4. The system of claim 1, wherein said identity module comprises a
cascade of classifiers.
5. The system of claim 1, wherein said detection module comprises a
cascade of classifiers.
6. The system of claim 4, further comprising a boosting module for
combining weak ones of said cascade of classifiers.
7. The system of claim 5, further comprising a boosting module for
combining weak ones of said cascade of classifiers.
8. The system of claim 4, wherein said detection module comprises
a cascade of classifiers, further comprising a boosting module for
combining weak classifiers of said cascades of classifiers.
9. The system of claim 1, wherein said object is a card belonging
to a deck of cards, and further comprising a deck verification
module for receiving a suit and a rank of said card from said
statistical classifier, and verifying that said deck of cards
adheres to a provided set of standards.
10. The system of claim 1, wherein said object is a playing card,
and said region of interest is a region of said image occupied by
an index of said card.
11. The system of claim 10, wherein said region of interest is a
region of said image occupied by a suit of said card.
12. A method of identifying a value of a playing card placed on a
game table comprising: capturing an image of said table; detecting
at least one feature of said playing card on said image; delimiting
a target region of said image according to said feature, wherein
said target region overlaps a region of interest, and said region
of interest describes said value; scanning said target region for a
pattern of contrasting points; detecting said pattern; delimiting
said region of interest of said image according to a position of
said pattern; and analyzing said region of interest to identify
said value.
13. The method of claim 12, wherein said feature is a segment of an
edge of said card.
14. The method of claim 13, further comprising determining at least
two scan lines parallel to said edge within said target region,
wherein said scanning is performed along said lines, and whereby
said scanning is more efficient.
15. The method of claim 12, wherein said scanning is performed
along lines perpendicular to said edge, and said detecting
comprises recording a most contrasting point for each of said lines
to obtain a series of points, and applying a pattern recognition
algorithm to said series to identify a pattern characteristic of a
card identifying symbol.
16. The method of claim 15, wherein said applying a pattern
recognition algorithm comprises convolving said pattern with a mask
of properties expected from a card identifying symbol.
17. The method of claim 12, wherein said feature is a corner of
said card.
18. A system for detecting an inconsistency with respect to playing
cards dealt on a game table comprising: a card reader for
determining an identity of each playing card as it is being dealt
on said table; an overhead camera for capturing images of said
table; a recognition module for determining an identity of each
card positioned on said table from said images; and a tracking
module for comparing said identity determined by said card reader
with said identity determined by said recognition module, and
detecting said inconsistency.
19. The system of claim 18, wherein said card reader determines a
dealing order of said each playing card as it is being dealt on
said table, said recognition module determines a position of said
each card positioned on said table, and said tracking module
compares said identity and said order determined by said card
reader with said identity and said position determined by said
recognition module and detects said inconsistency according to
procedures of a game.
20. The system of claim 18, wherein said recognition module
determines an approximate identity of said each card positioned on
said table, and said tracking module compares said approximate
identity with said identity determined by said card reader, and
detects said inconsistency.
21. The system of claim 18, wherein said card reader is comprised
in a card shoe for storing playing cards to be dealt on said table.
Description
RELATED APPLICATION
[0001] The present application claims priority from U.S.
provisional patent applications No. 60/676,936, filed May 3, 2005;
60/693,406, filed Jun. 24, 2005; 60/723,481, filed Oct. 5, 2005;
60/723,452, filed Oct. 5, 2005; 60/736,334, filed Nov. 15, 2005;
60/760,365, filed Jan. 20, 2006; and 60/771,058, filed Feb. 8,
2006.
BACKGROUND OF THE INVENTION
[0002] Casinos propose a wide variety of gambling activities to
accommodate players and their preferences. Some of those activities
reward strategic thinking while others are impartial, but each one
of them obeys a strict set of rules that favours the casino over
its clients.
[0003] The success of a casino relies partially on the efficiency
and consistency with which those rules are applied by the dealer. A
pair of slow dealing hands or an undeserved payout may have
substantial consequences on profitability.
[0004] Another critical factor is the consistency with which those
rules are respected by the player. Large sums of money travel
through the casino, tempting players to bend the rules. Again, an
undetected card switch or complicity between a dealer and a player
may be highly detrimental to profitability.
[0005] For those reasons among others, casinos have traditionally
invested tremendous efforts in monitoring gambling activities.
Initially, the task was performed manually, a solution that was
both expensive and inefficient. However, technological innovations
have been offering advantageous alternatives that reduce costs
while increasing efficiency.
[0006] One of the most important aspects of table game monitoring
consists in recognizing playing cards, or at the very least, their
value with respect to the game being played. Such recognition is
particularly challenging when the card corner or the central region
of a playing card is undetectable within an overhead image of a
card hand, or more generally, within that of an amalgam of
overlapping objects. Current solutions for achieving such
recognition have various weaknesses, especially when confronted
with these particular situations.
[0007] U.S. patent application Ser. No. 11/052,941, titled
"Automated Game Monitoring", by Tran, discloses a method of
recognizing a playing card positioned on a table within an overhead
image. The method consists in detecting the contour of the card,
validating the card from its contour, detecting adjacent corners of
the card, projecting the boundary of the card based on the adjacent
corners, binarizing pixels within the boundary, and counting the
number of pips to identify the value of the card. While such a
method is practical for recognizing a solitary playing card, or at
least one that is not significantly overlapped by other objects, it
may not be applicable in cases where the corner or central region
of the card is undetectable due to the presence of overlapping
objects. It also does not provide a method of distinguishing face
cards. Furthermore, it does not provide a method of extracting a
region of interest encompassing a card identifying symbol when only
a partial card edge is available or when card corners are not
available.
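For illustration, the pip-counting idea described above can be sketched as follows. This is a toy reconstruction under assumed grid values and an assumed binarization threshold, not Tran's actual implementation:

```python
# Hedged sketch of pip counting: binarize the card interior, then count
# connected dark blobs, each taken to be one pip. Values are illustrative.

def binarize(gray, threshold=128):
    """Map a 2-D list of grayscale values to 1 (pip ink) / 0 (card stock)."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

def count_pips(binary):
    """Count 4-connected components of 1-pixels via iterative flood fill."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    pips = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                pips += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and binary[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return pips

# Toy 6x6 "card interior" with two dark pips.
card = [
    [200, 200, 200, 200, 200, 200],
    [200,  30,  30, 200, 200, 200],
    [200,  30,  30, 200, 200, 200],
    [200, 200, 200, 200,  40, 200],
    [200, 200, 200, 200,  40, 200],
    [200, 200, 200, 200, 200, 200],
]
print(count_pips(binarize(card)))  # two connected blobs -> 2
```

As the critique above notes, this style of counting fails once another object overlaps the central pip region.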
[0008] A paper titled "Introducing Computers to Blackjack:
Implementation of a Card Recognition System Using Computer Vision
Techniques", written by G. Hollinger and N. Ward, proposes the use
of neural networks to distinguish face cards. The method proposes
determining a central moment of individual playing cards to
determine a rotation angle. This approach of determining a rotation
angle is not appropriate for overlapping cards forming a card hand.
They propose counting the number of pips in the central region of
the card to identify number cards. This approach of pip counting
will not be feasible when a card is significantly overlapped by
another object. They propose training three neural networks to
recognize face card symbols extracted from an upper left region of
a face card, where each of the networks would be dedicated to a
distinct face card symbol. The neural network is trained using a
scaled image of the card symbol. A possible disadvantage of trying
to directly recognize images of a symbol using a neural network is
that it may have insufficient recognition accuracy especially under
conditions of stress such as image rotation, noise, insufficient
resolution and lighting variations.
[0009] Several references propose to achieve such recognition by
endowing each playing card with detectable and identifiable
sensors. For instance, U.S. patent application Ser. No. 10/823,051,
titled "Wireless monitoring of playing cards and/or wagers in
gaming", by SOLTYS, discloses playing cards bearing a conductive
material that may be wirelessly interrogated to achieve recognition
in any plausible situation, regardless of visual obstructions. One
disadvantage of their implementation is that such cards are more
expensive than normal playing cards. Furthermore, adhering casinos
would be restricted to dealing such special playing cards instead
of those of their liking.
[0010] Card recognition is particularly instrumental in detecting
inconsistencies on a game table, especially those resulting from
illegal procedures. However, such detection has yet to be entirely
automated and seamless, as it still requires some form of human
intervention.
[0011] MP Bacc, a product marketed by Bally Gaming for detecting
inconsistencies within a game of Baccarat, consists of a card shoe
reader for reading bar-coded cards as they are being dealt, a
barcode reader built into a special table for reading cards that
were dealt, as well as a software module for comparing the data
provided by the card reader and the discard rack.
[0012] The software module verifies that the cards that have been
removed from the shoe correspond to those that have been inserted
into the barcode reader on the table. It also verifies that the
order in which the cards have been removed from the shoe
corresponds to the order in which they were placed in the barcode
reader. One disadvantage of this system is that it requires the use
of bar-coded cards and barcode readers to be present in the playing
area. The presence of such devices in the playing area may be
intrusive to players. Furthermore, dealers may need to be trained
to use the special devices and therefore the system does not appear
to be seamless or natural to the existing playing environment.
SUMMARY OF THE INVENTION
[0013] It would be desirable to be provided with a system for
recognizing playing cards positioned on a game table in an accurate
and efficient manner.
[0014] It would be desirable to be provided with a method of
recognizing standard playing cards positioned on a game table
without having to detect their corner.
[0015] It would also be desirable to be provided with a seamless,
automated, and reliable system for detecting inconsistencies on a
game table and providing an accurate description of the context in
which detected inconsistencies occurred.
[0016] An exemplary embodiment is directed to a system for
identifying a gaming object on a gaming table comprising at least
one overhead camera for capturing an image of the table; a
detection module for detecting a feature of the object on the
image; a search module for extracting a region of interest of the
image that describes the object from the feature; a feature space
module for transforming a feature space of the region of interest
to obtain a transformed region of interest; and an identity module
comprising a statistical classifier trained to recognize the object
from the transformed region.
[0017] According to another embodiment, at least one factor
attributable to casino and table game environments and gaming
objects impedes reliable recognition of said object by said
statistical classifier when trained to recognize said object from
said region of interest without transformation by said feature
space module.
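The pipeline summarized above can be pictured with a minimal sketch: flatten each region of interest, transform its feature space with principal component analysis (one option the claims name), and classify the reduced vector. The training patches, labels, and the nearest-centroid classifier standing in for the trained statistical classifier are all illustrative assumptions, not the patented implementation:

```python
# Hedged sketch: PCA feature-space transform followed by a simple
# nearest-centroid classifier. All data are toy values.
import numpy as np

def fit_pca(X, n_components):
    """Return the mean and top principal axes of training matrix X."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def transform(x, mean, axes):
    """Project one flattened region of interest into the reduced space."""
    return axes @ (x - mean)

# Toy "regions of interest": 4-pixel patches for two symbol classes.
X_train = np.array([[0.9, 0.8, 0.1, 0.1],   # class "A"
                    [1.0, 0.9, 0.0, 0.2],   # class "A"
                    [0.1, 0.2, 0.9, 1.0],   # class "B"
                    [0.0, 0.1, 0.8, 0.9]])  # class "B"
labels = ["A", "A", "B", "B"]

mean, axes = fit_pca(X_train, n_components=2)
Z = np.array([transform(x, mean, axes) for x in X_train])

def classify(x):
    """Nearest-centroid decision in the transformed feature space."""
    z = transform(x, mean, axes)
    centroids = {c: Z[[i for i, l in enumerate(labels) if l == c]].mean(axis=0)
                 for c in set(labels)}
    return min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))

print(classify(np.array([0.95, 0.85, 0.05, 0.15])))  # close to class "A"
```

The point of the transform, per the summary, is that classifying in the transformed space can be more robust than classifying raw pixels directly.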
[0018] Another embodiment is directed to a method of identifying a
value of a playing card placed on a game table comprising:
capturing an image of the table; detecting at least one feature of
the playing card on the image; delimiting a target region of the
image according to the feature, wherein the target region overlaps
a region of interest, and the region of interest describes the
value; scanning the target region for a pattern of contrasting
points; detecting the pattern; delimiting the region of interest of
the image according to a position of the pattern; and analyzing the
region of interest to identify the value.
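The scanning step of the method above can be sketched as follows. This is a hedged illustration with assumed scan-line data and an assumed contrast threshold, not the claimed implementation: record the most contrasting point along each scan line, then delimit the region of interest around the run of lines whose contrast exceeds the threshold:

```python
# Hedged sketch: locate a pattern of contrasting points across scan lines
# and delimit a region of interest around it. Values are illustrative.

def most_contrasting_point(line):
    """Index and strength of the strongest adjacent-pixel jump on one line."""
    best_i, best_c = 0, 0
    for i in range(len(line) - 1):
        c = abs(line[i + 1] - line[i])
        if c > best_c:
            best_i, best_c = i, c
    return best_i, best_c

def delimit_roi(scan_lines, threshold=50):
    """Return (first_line, last_line) bounding the contrasting pattern."""
    hits = [k for k, line in enumerate(scan_lines)
            if most_contrasting_point(line)[1] >= threshold]
    return (hits[0], hits[-1]) if hits else None

# Five scan lines: a dark symbol edge spans lines 1 through 3.
lines = [
    [200, 200, 200, 200],
    [200,  40, 200, 200],
    [200,  30, 200, 200],
    [200,  50, 200, 200],
    [200, 200, 200, 200],
]
print(delimit_roi(lines))  # pattern found on lines 1 through 3 -> (1, 3)
```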
[0019] Another embodiment is directed to a system for detecting an
inconsistency with respect to playing cards dealt on a game table
comprising: a card reader for determining an identity of each
playing card as it is being dealt on the table from the shoe; an
overhead camera for capturing images of the table; a recognition
module for determining an identity of each card positioned on the
table from the images; and a tracking module for comparing the
identity determined by the card reader with the identity determined
by the recognition module, and detecting the inconsistency.
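The reconciliation performed by the tracking module can be sketched as a comparison of two card sequences. The data shapes and card codes below are assumptions for illustration, not the product's interface:

```python
# Hedged sketch: compare the dealing-order identities reported by the card
# shoe reader with the identities the vision module recognized on the table,
# and flag any mismatch as an inconsistency.

def reconcile(shoe_sequence, table_cards):
    """Return ([(position, shoe_card, table_card) mismatches], count_gap)."""
    inconsistencies = []
    for i, (dealt, seen) in enumerate(zip(shoe_sequence, table_cards)):
        if dealt != seen:
            inconsistencies.append((i, dealt, seen))
    # Cards dealt but never observed (or vice versa) are also suspicious.
    count_gap = abs(len(shoe_sequence) - len(table_cards))
    return inconsistencies, count_gap

shoe = ["KH", "7S", "2D"]    # order read as cards left the shoe
table = ["KH", "7S", "9C"]   # identities recognized on the table
mismatches, count_gap = reconcile(shoe, table)
print(mismatches)  # the third card differs -> [(2, '2D', '9C')]
```

A real tracking module would also weigh game procedure (as in claim 19) before asserting that a card switch occurred.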
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] For a better understanding of embodiments of the present
invention, and to show more clearly how it may be carried into
effect, reference will now be made, by way of example, to the
accompanying drawings which aid in understanding and in which:
[0021] FIG. 1 is an overhead view of a card game;
[0022] FIG. 2 is a side plan view of an imaging system;
[0023] FIG. 3 is a side plan view of an overhead imaging
system;
[0024] FIG. 4 is a top plan view of a lateral imaging system;
[0025] FIG. 5 is an overhead view of a gaming table containing RFID
detectors;
[0026] FIG. 6 is a block diagram of the components of an exemplary
embodiment of a system for tracking gaming objects;
[0027] FIG. 7 is a plan view of card hand representations;
[0028] FIG. 8 is a flowchart of a first embodiment of an IP
module;
[0029] FIG. 9 is an overhead view of a gaming table with proximity
detection sensors;
[0030] FIG. 10 is a plan view of a card position relative to
proximity detection sensors;
[0031] FIG. 11 illustrates an overhead image of a card hand where
the corners of a card are undetectable;
[0032] FIG. 12 is a flowchart describing the preferred method of
extracting a region of interest from a card edge;
[0033] FIG. 13 illustrates an application of the preferred method
of extracting a region of interest from a card edge;
[0034] FIG. 14 is a flowchart describing another method for
extracting a region of interest from a card edge;
[0035] FIG. 15 illustrates an application of another method of
extracting a region of interest from a card edge;
[0036] FIG. 16 is a block diagram of the preferred system for
identifying a gaming object on a gaming table;
[0037] FIG. 17 illustrates an example of a feature space that may
be used for recognition purposes;
[0038] FIG. 18 is a flowchart describing a method of detecting
inconsistencies with respect to playing cards dealt on a game
table;
[0039] FIG. 19 illustrates a first application of the method of
detecting inconsistencies with respect to playing cards dealt on a
game table;
[0040] FIG. 20 illustrates a second application of the method of
detecting inconsistencies with respect to playing cards dealt on a
game table;
[0041] FIG. 21 illustrates a third application of the method of
detecting inconsistencies with respect to playing cards dealt on a
game table;
[0042] FIG. 22 illustrates a Feed Forward Neural Network;
[0043] FIG. 23 illustrates Haar feature classifiers;
[0044] FIG. 24 is a flowchart describing a method of calibrating an
imaging system within the context of table game tracking; and
[0045] FIG. 25 illustrates a combination of weak classifiers into
one strong classifier as achieved through a boosting module.
DETAILED DESCRIPTION OF THE INVENTION
[0046] In the following description of exemplary embodiments we
will use the card game of blackjack as an example to illustrate how
the embodiments may be utilized.
[0047] Referring now to FIG. 1 an overhead view of a card game is
shown generally as 10. More specifically, FIG. 1 is an example of a
blackjack game in progress. A gaming table is shown as feature 12.
Feature 14 is a single player and feature 16 is the dealer. Player
14 has three cards 18 dealt by dealer 16 within dealing area 20.
The dealer's cards are shown as feature 22. In this example dealer
16 utilizes a card shoe 24 to deal cards 18 and 22 and places them
in dealing area 20. Within gaming table 12 there are a plurality of
betting regions 26 in which a player 14 may place a bet. A bet is
placed through the use of chips 28. Chips 28 are wagering chips
used in a game, examples of which are plaques, jetons, wheelchecks,
Radio Frequency Identification Device (RFID) embedded wagering
chips and optically encoded wagering chips.
[0048] An example of a bet being placed by player 14 is shown as
chips 28a within betting region 26a. Dealer 16 utilizes chip tray
30 to receive and provide chips 28. Feature 32 is an imaging
system, which is utilized by the present invention to provide
overhead imaging and optional lateral imaging of game 10. An
optional feature is a player identity card 34, which may be
utilized by the present invention to identify a player 14.
[0049] At the beginning of every game players 14 that wish to play
place their wager, usually in the form of gaming chips 28, in a
betting region 26 (also known as betting circle or wagering area).
Chips 28 can be added to a betting region 26 during the course of
the game as per the rules of the game being played. The dealer 16
then initiates the game by dealing the playing cards 18, 22.
Playing cards can be dealt either from the dealer's hand, or from a
card dispensing mechanism such as a shoe 24. The shoe 24 can take
different embodiments including non-electromechanical types and
electromechanical types. The shoe 24 can be coupled to an apparatus
(not shown) to read, scan or image cards being dealt from the shoe
24. The dealer 16 can deal the playing cards 18, 22 into dealing
area 20. The dealing area 20 may have a different shape or a
different size than shown in FIG. 1. The dealing area 20, under
normal circumstances, is clear of foreign objects and usually only
contains playing cards 18, 22, the dealer's body parts and
predetermined gaming objects such as chips, currency, player
identity card 34 and dice. A player identity card 34 is an identity
card that a player 14 may possess, which is used by the player to
provide identity data and assist in obtaining complimentary
("comps") points from a casino. A player identity card 34 may be
used to collect comp points, which in turn may be redeemed later on
for comps. Dealers may have dealer identity cards (not shown)
similar to player identity cards that dealers use to register
themselves at the table.
[0050] During the progression of the game, playing cards 18, 22 may
appear, move, or be removed from the dealing area 20 by the dealer
16. The dealing area 20 may have specific regions outlined on the
table 12 where the cards 18, 22 are to be dealt in a certain
physical organization otherwise known as card sets or "card hands",
including overlapping and non-overlapping organizations.
[0051] For the purpose of this disclosure, chips, cards, card
hands, currency bills, player identity cards, dealer identity
cards, lammers and dice are collectively referred to as gaming
objects. In addition the term "gaming region" is meant to refer to
any section of gaming table 12 including the entire gaming table
12.
[0052] Referring now to FIG. 2, a side plan view of an imaging
system is shown. This is imaging system 32 of FIG. 1. Imaging
system 32 comprises overhead imaging system 40 and optional lateral
imaging system 42. Imaging system 32 can be located on or beside
the gaming table 12 to image a gaming region from a top view and/or
from a lateral view. Overhead imaging system 40 can periodically
image a gaming region from a planar overhead perspective. The
overhead imaging system 40 can be coupled to the ceiling or to a
wall or any location that would allow an approximate top view of
the table 12. The optional lateral imaging system 42 can image a
gaming region from a lateral perspective. Imaging systems 40 and 42
are connected to a power supply and a processor (not shown) via
wiring 44 which runs through tower 46.
[0053] The imaging system 32 utilizes periodic imaging to capture a
video stream at a specific number of frames over a specific period
of time, such as for example, thirty frames per second. Periodic
imaging can also be used by an imaging system 32 when triggered via
software or hardware means to capture an image upon the occurrence
of a specific event. An example of a specific event would be if a
stack of chips were placed in a betting region 26. An optical chip
stack or chip detection method utilizing overhead imaging system 40
can detect this event and can send a trigger to lateral imaging
system 42 to capture an image of the betting region 26. In an
alternative embodiment overhead imaging system 40 can trigger an
RFID reader to identify the chips. Should there be a discrepancy
between the two means of identifying chips the discrepancy will be
flagged.
[0054] Referring now to FIG. 3, a side plan view of an overhead
imaging system is shown. Overhead imaging system 40 comprises one
or more imaging devices 50 and optionally one or more lighting
sources (if required) 52 which are each connected to wiring 44.
Each imaging device 50 can periodically produce images of a gaming
region. Charge-Coupled Device (CCD) sensors, Complementary Metal
Oxide Semiconductor (CMOS) sensors, line scan imagers, area-scan
imagers and progressive scan imagers are examples of imaging
devices 50. Imaging devices 50 may be sensitive to any frequency of
light in the electromagnetic spectrum, including ultraviolet and
infrared, and may be wavelength selective. Imaging devices 50 may
be color or grayscale. Lighting sources 52 may be utilized to
improve lighting conditions for imaging. Incandescent, fluorescent,
halogen, infrared and ultraviolet light sources are examples of
lighting sources 52.
[0055] An optional case 54 encloses overhead imaging system 40 and
if so provided, includes a transparent portion 56, as shown by the
dotted line, so that imaging devices 50 may view a gaming
region.
[0056] Referring now to FIG. 4, a top plan view of a lateral
imaging system is shown. Lateral imaging system 42 comprises one or
more imaging devices 50 and optional lighting sources 52 as
described with reference to FIG. 3.
[0057] An optional case 60 encloses lateral imaging system 42 and
if so provided includes a transparent portion 62, as shown by the
dotted line, so that imaging devices 50 may view a gaming
region.
[0058] The examples of overhead imaging system 40 and lateral
imaging system 42 are not meant by the inventors to restrict the
configuration of the devices to the examples shown. Any number of
imaging devices 50 may be utilized and if a case is used to house
the imaging devices 50, the transparent portions 56 and 62 may be
configured to scan the desired gaming regions.
[0059] According to one embodiment of the present invention, a
calibration module assigns parameters for visual properties of the
gaming region. FIG. 24 is a flowchart describing the operation of
the calibration module as applied to the overhead imaging system.
The calibration process can be manual, with human assistance; fully
automatic; or semi-automatic.
[0060] Referring back to FIG. 24, a first step 4800 consists in
waiting for an image of the gaming region from the overhead
imager(s). The next step 4802 consists in displaying the image to
allow the user to select the area of interest where gaming
activities occur. For instance, within the context of blackjack
gaming, the area of interest can be a box encompassing the betting
boxes, the dealing arc, and the dealer's chip tray.
In step 4804, coefficients for perspective correction are
calculated. Such correction is an image processing technique
whereby an image can be warped to any desired viewpoint. It is
particularly useful if the overhead imagers are located in the
signage and the view of the gaming region is slightly warped; a
perfectly overhead viewpoint is best for further image analysis. A
checkerboard or markers on the table may be utilized to assist with
calculating the perspective correction coefficients.
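The perspective-correction coefficients of step 4804 amount to a 3x3 homography. A minimal sketch, with assumed marker coordinates (a real system would warp whole images, not single points):

```python
# Hedged sketch: solve for the homography that maps four observed table
# markers to their desired top-down positions, then warp points with it.
import numpy as np

def fit_homography(src, dst):
    """Solve the 8 homography unknowns from 4 point correspondences."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)  # fix h33 = 1

def warp_point(H, x, y):
    """Apply the homography to one image point."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Four corners of a slightly skewed table region, and where they should
# land in the corrected top-down view (illustrative coordinates).
src = [(10, 12), (205, 8), (220, 160), (5, 150)]
dst = [(0, 0), (200, 0), (200, 150), (0, 150)]
H = fit_homography(src, dst)
corrected = warp_point(H, *src[0])  # maps to (0.0, 0.0) up to round-off
```

This is why a checkerboard or table markers help: they supply the point correspondences the solver needs.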
[0062] Subsequently, in step 4806, the resulting image is displayed
to allow the user to select specific points or regions of interest
within the gaming area. For instance, the user may select the
position of betting spots and the region encompassing the dealer's
chip tray. Other specific regions or points within the gaming area
may be selected.
[0063] In the next step 4808, camera parameters such as shutter and
gain values are calculated, and white balancing operations are
performed. Numerous algorithms for performing camera calibration
are publicly available to one skilled in the art.
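The white balancing of step 4808 could follow any standard algorithm; the gray-world method is one publicly known option, sketched here with toy pixel values (the patent does not specify which algorithm is used):

```python
# Hedged sketch of gray-world white balancing: scale each channel so the
# channel means become equal. Pixel values are illustrative.

def gray_world_balance(pixels):
    """pixels: list of (r, g, b) tuples. Returns white-balanced pixels."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m if m else 1.0 for m in means]
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3))
            for p in pixels]

# A reddish cast: the red mean is double the others before balancing.
image = [(200, 100, 100), (180, 90, 90), (220, 110, 110)]
balanced = gray_world_balance(image)
r_mean = sum(p[0] for p in balanced) / len(balanced)
g_mean = sum(p[1] for p in balanced) / len(balanced)
# After balancing, the red and green channel means coincide.
```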
[0064] In step 4810, additional camera calibration is performed to
adjust the lens focus and aperture.
[0065] Once the camera calibration is complete and according to
step 4812, an image of the table layout, clear of any objects on
its surface, is captured and saved as a background image. Such an
image may be used for detecting objects on the table. The background
image may be recaptured at various points during system operation
in order to keep it current.
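One way the saved background image may serve object detection (the text only says it "may be used"; per-pixel differencing against a threshold is a standard approach, and the values below are toy assumptions):

```python
# Hedged sketch: flag pixels whose value changed beyond a threshold
# relative to the saved background image.

def detect_objects(background, current, threshold=40):
    """Return coordinates of pixels that changed beyond the threshold."""
    changed = []
    for r, (brow, crow) in enumerate(zip(background, current)):
        for c, (b, cur) in enumerate(zip(brow, crow)):
            if abs(cur - b) > threshold:
                changed.append((r, c))
    return changed

background = [[90, 92, 91],
              [89, 90, 90],
              [91, 90, 92]]
current    = [[90, 92, 91],
              [89, 230, 232],   # a bright card edge appears here
              [91, 90, 92]]
print(detect_objects(background, current))  # -> [(1, 1), (1, 2)]
```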
[0066] In step 4814, while the table surface is still clear of
objects additional points of interest such as predetermined markers
are captured.
[0067] In the final step 4816, the calibration parameters are
stored in memory.
[0068] It must be noted that the calibration concepts may be
applied to the lateral imaging system as well as to other imaging
systems.
[0069] In an optional embodiment, continuous calibration checks may
be utilized to ensure that the initially calibrated environment
remains relevant. For instance a continuous brightness check may be
performed periodically, and if it fails, an alert may be asserted
through a feedback device indicating the need for re-calibration.
Similar periodic, automatic checks may be performed for white
balancing, perspective correction, and region of interest
definition.
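The periodic brightness check described above can be sketched as a drift test against the value recorded at calibration. The tolerance and frame values are assumptions for illustration:

```python
# Hedged sketch: flag re-calibration when the mean frame brightness drifts
# beyond a fractional tolerance of the calibrated value.

def needs_recalibration(calibrated_mean, frame, tolerance=0.15):
    """True if mean brightness drifted more than `tolerance` (fraction)."""
    pixels = [px for row in frame for px in row]
    current_mean = sum(pixels) / len(pixels)
    return abs(current_mean - calibrated_mean) / calibrated_mean > tolerance

calibrated = 120.0
dim_frame = [[80, 85, 82], [78, 84, 80]]           # lights were dimmed
print(needs_recalibration(calibrated, dim_frame))  # -> True
```

On failure, the system would assert an alert through a feedback device, as described above.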
[0071] In an optional embodiment, a white sheet similar in shade to
a playing card surface may be placed on the table during
calibration in order to determine the value of the white sheet at
various points on the gaming table and consequently the lighting
conditions at these various points. The recorded values may be
subsequently utilized to determine threshold parameters for
detecting positions of objects on the table.
[0072] It must be noted that not all steps of calibration need
human input. Certain steps such as white balancing may be performed
automatically.
[0073] In addition to the imaging systems described above,
exemplary embodiments may also make use of RFID detectors for
gambling chips containing an RFID. FIG. 5 is an overhead view of a
gaming table containing RFID detectors 70. When one or more chips
28 containing an RFID are placed on an RFID detector 70 situated
below a betting region 26 the values of the chips 28 can be
detected by the RFID detector 70. The same technology may be
utilized to detect the values of RFID chips within the chip tray
30.
[0074] Referring now to FIG. 6 a block diagram of the components of
an exemplary embodiment is shown. Identity and Positioning module
(IP module) 80 identifies the value and position of cards on the
gaming table 12. Intelligent Position Analysis and Tracking module
(IPAT module) 84 performs analysis of the identity and position
data of cards and interprets them intelligently for the purpose of
tracking game events, game states and general game progression. The
Game Tracking module (GT module) 86 processes data from the IPAT
module 84 and keeps track of game events and game status. The GT
module 86 can optionally obtain input from Bet Recognition module
88. Bet Recognition module 88 identifies the value of wagers placed
at the game. Player Tracking module 90 keeps track of patrons and
players that are participating at the games. An optional dealer
tracking module can keep track of the dealer dealing at the table.
Surveillance module 92 records video data from imaging system 32
and links game event data to recorded video. Surveillance module 92
provides efficient search and replay capability by way of linking
game event time stamps to the recorded video. Analysis and
Reporting module 94 analyzes the gathered data in order to generate
reports on players, tables and casino personnel. Example reports
include statistics on game-related activities such as
profitability, employee efficiency and player playing patterns.
Events occurring during the course of a game can be analyzed and
appropriate actions can be taken such as player profiling,
procedure violation alerts or fraud alerts.
[0075] Modules 80 to 94 communicate with one another through a
network 96. A 100 Mbps Ethernet Local Area Network or Wireless
Network can be used as a digital network. The digital network is
not limited to the specified implementations, and can be of any
other type, including local area network (LAN), Wide Area Network
(WAN), wired or wireless Internet, or the World Wide Web, and can
take the form of a proprietary extranet.
[0076] A controller 98, such as a processor or multiple processors,
can be employed to execute modules 80 to 94 and to coordinate their
interaction amongst themselves, with the imaging system 32 and with
input/output devices 100, optional shoe 24 and optional RFID
detectors 70. Further, controller 98 utilizes data stored in
database 102 for providing operating parameters to any of the
modules 80 to 94. Modules 80 to 94 may write data to database 102
or collect stored data from database 102. Input/Output devices 100
such as a laptop computer, may be used to input operational
parameters into database 102. Examples of operational parameters
are the position coordinates of the betting regions 26 on the
gaming table 12, position coordinates of the dealer chip tray 30,
game type and game rules.
[0077] Before describing how the present invention may be
implemented, we first provide some preliminary definitions.
Referring now to FIG. 7 a plan view of card representations is
shown. A card or card hand is first identified by an image from the
imaging system 32 as a blob 110. A blob may be any object in the
image of a gaming area but for the purposes of this introduction we
will refer to blobs 110 that are cards and card hands. The outer
boundary of blob 110 is then traced to determine a contour 112
which is a sequence of boundary points forming the outer boundary
of a card or a card hand. In determining a contour, digital image
thresholding is used to establish grey-level thresholds. In the case
of a card or card hand, the blob 110 would be white and bright on a
table. From the blob 110 a path is traced around its boundary until
the contour 112 is established. A contour 112 is then examined for
regions of interest (ROI) 118, which identify a specific card.
Although in FIG. 7 ROI 118 has been shown to be the rank and suit
of a card, an alternative ROI could be used to identify the pip
pattern in the centre of a card. From the information obtained from
ROIs 118 it is possible to identify cards in a card hand 120.
[0078] IP module 80 may be implemented in a number of different
ways. In a first embodiment, overhead imaging system 32 (see FIG.
2) located above the surface of the gaming table provides overhead
images. An overhead image need not be at precisely ninety degrees
above the gaming table 12. In one embodiment it has been found that
seventy degrees works well to generate an overhead view. An
overhead view enables the use of two dimensional Cartesian
coordinates of a gaming region. One or more image processing
algorithms process these overhead images of a gaming region to
determine the identity and position of playing cards on the gaming
table 12.
[0079] Referring now to FIG. 8 a flowchart of an embodiment of an
IP module 80 is shown. Beginning at step 140 initialization and
calibration of global variables occurs. Examples of calibration are
manual or automated setting of camera properties for an imager 32
such as shutter value, gain levels and threshold levels. In the
case of thresholds, a different threshold may be stored for each
pixel in the image or different thresholds may be stored for
different regions of the image. Alternatively, the threshold values
may be dynamically calculated from each image. Dynamic
determination of a threshold would calculate the threshold level to
be used for filtering out playing cards from a darker table
background.
[0080] Moving to step 142 the process waits to receive an overhead
image of a gaming region from overhead imaging system 40. At step
144 a thresholding algorithm is applied to the overhead image in
order to differentiate playing cards from the background to create
a threshold image. A background subtraction algorithm may be
combined with the thresholding algorithm for improved performance.
Contrast information of the playing card against the background of
the gaming table 12 can be utilized to determine static or adaptive
threshold parameters. Static thresholds are fixed while dynamic
thresholds may vary based upon input such as the lighting on a
table. The threshold operation can be performed on a gray level
image or on a color image. Step 144 requires that the surface of
game table 12 be visually contrasted against the card. For
instance, if the surface of game table 12 is predominantly white,
then a threshold may not be effective for obtaining the outlines of
playing cards. The thresholded image will ideally show the playing
cards as independent blobs 110. This may not
always be the case due to issues of motion or occlusion. Other
bright objects such as a dealer's hand may also be visible as blobs
110 in the thresholded image. Filtering operations such as erosion,
dilation and smoothing may optionally be performed on the
thresholded image in order to eliminate noise or to smooth the
boundaries of a blob 110.
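The thresholding of step 144 can be sketched as follows, using a nested list of grayscale values in place of a camera image; the fixed level of 128 stands in for an assumed static threshold.

```python
# Illustration of step 144: a static threshold separates bright card pixels
# from a darker table background. The image is a nested list of 0-255
# grayscale intensities; the output is a binary image of 0s and 1s.
def threshold_image(gray, level):
    """Return a binary image: 1 where the pixel is brighter than `level`."""
    return [[1 if px > level else 0 for px in row] for row in gray]

# A small grayscale patch: bright card-like pixels on a dark background.
gray = [
    [30,  40, 220, 230],
    [35, 210, 240,  45],
    [25,  30,  50,  40],
]
binary = threshold_image(gray, 128)
```

An adaptive variant would simply look `level` up per pixel or per region instead of using one constant, as the text describes.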
[0081] In the next step 146, the contour 112 corresponding to each
blob 110 is detected. A contour 112 can be a sequence of boundary
points of the blob 110 that more or less define the shape of the
blob 110. The contour 112 of a blob 110 can be extracted by
traversing along the boundary points of the blob 110 using a
boundary following algorithm. Alternatively, a connected components
algorithm may also be utilized to obtain the contour 112.
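A simplified stand-in for the boundary-following of step 146 is sketched below: it merely collects the foreground pixels that touch the background rather than traversing the boundary in order, which suffices to illustrate what a contour 112 contains.

```python
# Simplified contour extraction (step 146): gather the boundary points of a
# blob, i.e. foreground pixels with at least one 4-connected background
# neighbour. A full boundary-following algorithm would visit these points
# in order around the blob; this sketch only identifies them.
def boundary_points(binary):
    h, w = len(binary), len(binary[0])

    def is_background(y, x):
        return y < 0 or y >= h or x < 0 or x >= w or binary[y][x] == 0

    contour = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and any(
                is_background(y + dy, x + dx)
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
            ):
                contour.append((y, x))
    return contour

blob = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
pts = boundary_points(blob)
```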
[0082] Once the contours 112 have been obtained processing moves to
step 148 where shape analysis is performed in order to identify
contours that are likely not cards or card hands and eliminate
these from further analysis. By examining the area of a contour 112
and the external boundaries, a match may be made to the known size
and/or dimensions of cards. If a contour 112 does not match the
expected dimensions of a card or card hand it can be discarded.
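The shape analysis of step 148 might be sketched as a bounding-box test; the card dimensions in pixels and the tolerance below are assumed values for illustration only.

```python
# Illustrative shape filter (step 148): keep a contour only if its bounding
# box roughly matches known card dimensions, in either orientation.
# card_w, card_h, and tol are assumed example values, not specified sizes.
def matches_card(contour, card_w=60, card_h=90, tol=0.25):
    ys = [p[0] for p in contour]
    xs = [p[1] for p in contour]
    w = max(xs) - min(xs) + 1
    h = max(ys) - min(ys) + 1
    # Accept the card whether it lies upright or sideways.
    for cw, ch in ((card_w, card_h), (card_h, card_w)):
        if abs(w - cw) <= cw * tol and abs(h - ch) <= ch * tol:
            return True
    return False

card_like = [(0, 0), (0, 59), (89, 0), (89, 59)]   # ~60x90 bounding box
too_small = [(0, 0), (0, 5), (5, 0), (5, 5)]       # e.g. a chip or noise blob
```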
[0083] Moving next to step 150, line segments 114 forming the card
and card hand boundaries are extracted. One way to extract line
segments is to traverse along the boundary points of the contour
112 and test the traversed points with a line fitting algorithm.
Another potential line detection algorithm that may be utilized is
a Hough Transform. At the end of step 150, line segments 114
forming the card or card hand boundaries are obtained. It is to be
noted that, in alternate embodiments, straight line segments 114 of
the card and card hand boundaries may be obtained in other ways.
For instance, straight line segments 114 can be obtained directly
from an edge detected image. For example, an edge detector such as
the Laplace edge detector can be applied to the source image to
obtain an edge map of the image from which straight line segments
114 can be detected. These algorithms are non-limiting examples of
methods to extract positioning features, and one skilled in the art
might use alternate methods to extract these card and card hand
positioning features.
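One possible form of the line-fitting test mentioned in step 150, sketched under the assumption that a run of contour points is accepted as a straight segment when every point lies near the least-squares line through the run:

```python
import math

# Sketch of a line-fitting test (step 150): fit the principal direction of a
# run of contour points and accept the run as a straight segment when every
# point lies within `tol` pixels of the fitted line. The tolerance is an
# assumed example value.
def is_straight(points, tol=0.5):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Covariance terms of the point cloud.
    sxx = sum((p[0] - mx) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    # Angle of the principal (best-fit) direction.
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    dx, dy = math.cos(theta), math.sin(theta)
    # Perpendicular distance of each point from the line through the mean.
    return all(
        abs(-dy * (p[0] - mx) + dx * (p[1] - my)) <= tol for p in points
    )

straight = [(i, 2 * i) for i in range(10)]
bent = [(0, 0), (1, 0), (2, 0), (3, 5)]
```

A traversal of contour 112 would feed successive windows of boundary points to such a test, starting a new segment whenever the test fails.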
[0084] Moving to step 152, one or more corners 116 of cards can be
obtained from the detected straight line segments 114. Card corners
116 may be detected directly from the original image or thresholded
image by applying a corner detection algorithm such as, for
example, a template matching method using templates of corner points.
Alternatively, the corner 116 may be detected by traversing points
along contour 112 and fitting the points to a corner shape. Corner
points 116, and line segments 114 are then utilized to create a
position profile for cards and card hands, i.e. where they reside
in the gaming region.
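As a sketch of how corner points 116 can follow from line segments 114, two boundary lines (each given here as a point and a direction) can be intersected; this is one of several possibilities mentioned in step 152.

```python
# Sketch for step 152: a card corner 116 located as the intersection of two
# boundary line segments 114, each represented as a point plus a direction.
def intersect(p1, d1, p2, d2):
    """Intersection of lines p1 + t*d1 and p2 + s*d2, or None if parallel."""
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-12:
        return None  # parallel edges never meet in a corner
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two perpendicular card edges meeting at (10, 20).
corner = intersect((0, 20), (1, 0), (10, 0), (0, 1))
```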
[0085] Moving to step 154, card corners 116 are utilized to obtain
a Region of Interest (ROI) 118 encompassing a card identifying
symbol, such as the number of the card, and the suit. A card
identifying symbol can also include features located in the card
such as the arrangement of pips on the card, or can be some other
machine readable code.
[0086] Corners of a card are highly indicative of a position of a
region of interest. For this very reason, they constitute the
preferred reference points for extracting regions of interest.
Occasionally, corners of a card may be undetectable within an
amalgam of overlapping gaming objects, such as a card hand. The
present invention provides a method of identifying such cards by
extracting a region of interest from any detected card feature that
may constitute a valid reference point.
[0087] FIG. 11 illustrates an overhead image of a card hand 3500
comprised of cards 3502, 3504, 3506, and 3508. The card 3504
overlaps the card 3502 and is overlapped by the card 3506 such that
corners of the card 3504 are not detectable.
[0088] According to a preferred embodiment of the invention, the
overhead image is analyzed to obtain the contour of the card hand
3500. Subsequently, line segments 3510, 3512, 3514, 3516, 3518,
3520, 3522, and 3524 forming the contour of the card hand 3500 are
extracted. The detected line segments are thereafter utilized to
detect convex corners 3530, 3532, 3534, 3536, 3538, and 3540.
[0089] As mentioned herein above, corners constitute the preferred
reference points for extracting Regions of Interest. In the
following description, the term "index corner" refers to a corner
of a card in the vicinity of which a region of interest is located.
The term "blank corner" refers to a corner of a card that is not an
index corner.
[0090] The corner 3530 is the first one to be considered. A sample
of pixels drawn within the contour, in the vicinity of the corner
3530, is analyzed in order to determine whether the corner 3530 is
an index corner. A sufficient number of contrasting pixels are
detected and the corner 3530 is identified as an index corner.
Consequently, a region of interest is projected and extracted
according to the position of the corner 3530, as well as the width,
height, and offset of regions of interest from index corners.
[0091] Similarly, the corner 3532 is identified as an index corner
and a corresponding region of interest is projected and
extracted.
[0092] The corner 3534 is the third to be considered. Corner 3534
is identified as a blank corner. Due to their coordinates, the
corners 3532 and 3534 are identified as belonging to a same card,
and consequently, the corner 3534 is dismissed from further
analysis.
[0093] Similarly to corners 3530 and 3532, the corner 3536 is
identified as an index corner and a corresponding region of
interest is projected and extracted.
[0094] The corners 3538 and 3540 are the last ones to be
considered. Due to their coordinates, the corners 3530, 3538 and
3540 are identified as belonging to a same card, and consequently,
the corners 3538 and 3540 are dismissed from further analysis.
[0095] As a result of the corner analysis, the regions of interest
of the cards 3502, 3506 and 3508 of the card hand 3500 have been
extracted. However, none of the corners of the card 3504 has been
detected and consequently, no corresponding region of interest has
been extracted.
[0096] In order to extract any remaining regions of interest, the
extracted line segments 3510, 3512, 3514, 3516, 3518, 3520, 3522,
and 3524 forming the contour of the card hand 3500 are utilized
according to a method provided by the present invention.
[0097] In FIG. 12, a flowchart describing the preferred method for
extracting a region of interest from a card edge segment is
provided. It must be noted that a partial card edge segment may
suffice for employing this method.
[0098] In step 3600, two scan line segments are determined. The
scan line segments are of the same length as the analyzed line
segment. Furthermore, the scan line segments are parallel to the
analyzed line segment. Finally, a first of the scan line segments
is offset according to a predetermined offset of the region of
interest from a corresponding card edge. The second of the scan
line segments is offset from the first scan line segment according
to the predetermined width of the rank and suit symbols.
[0099] In step 3602, pixel rows delimited by the scan line segments
are scanned, and for each of the rows a most contrasting color or
brightness value is recorded.
[0100] Subsequently, in step 3604, the resulting sequence of most
contrasting color or brightness values, referred to as a
contrasting value scan line segment, is analyzed to identify
regions that may correspond to a card rank and suit. The analysis
may be performed according to pattern matching or pattern
recognition algorithms.
[0101] According to the preferred embodiment, the sequence of
contrasting color values is convolved with a mask of properties
expected from rank characters and suit symbols. For instance, in
the context of a white card having darker coloured rank characters
and suit symbols, the mask may consist of a stream of darker pixels
corresponding to the height of rank characters, a stream of
brighter pixels corresponding to the height of spaces separating
rank characters and suit symbols, and a final stream of darker
pixels corresponding to the height of suit symbols. The result of
the convolution will give rise to peaks where a sequence of the set
of contrasting color values corresponds to the expected properties
described by the mask.
[0102] Several methods are available for performing such
convolution, including but not limited to cross-correlation,
squared difference, correlation coefficient, as well as their
normalized versions.
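A minimal version of the mask convolution of paragraphs [0101] and [0102], using plain cross-correlation; the darkness coding (1 = dark row, 0 = bright row) and the mask weights are assumptions made for this example.

```python
# Sketch of steps 3602-3606: cross-correlate the contrasting-value scan line
# with a mask describing the expected dark rank / bright gap / dark suit
# pattern, then take the peak. Rows are darkness-coded: 1 = dark, 0 = bright.
def cross_correlate(scan, mask):
    n, m = len(scan), len(mask)
    return [sum(scan[i + j] * mask[j] for j in range(m))
            for i in range(n - m + 1)]

# Mask weights: +1 rewards a dark row where rank/suit ink is expected,
# -1 rewards a bright row in the gap between them.
mask = [1, 1, -1, 1, 1]
scan = [0, 1, 1, 0, 1, 1, 0, 0]   # dark-dark-bright-dark-dark run at index 1
scores = cross_correlate(scan, mask)
peak = scores.index(max(scores))
```

The position of the peak along the scan line then fixes where the region of interest is extracted.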
[0103] In step 3606, the resulting peaks are detected, and the
corresponding regions of interest are extracted.
[0104] FIG. 13 illustrates an analysis of the line segment 3510
according to the preferred embodiment of the invention.
[0105] First, two scan line segments, 3700 and 3702 are determined.
The scan line segments 3700 and 3702 are of the same length as the
line segment 3510. Furthermore, the scan line segments are parallel
to the line segment 3510. Finally, the scan line segment 3700 is
offset from the line segment 3510 according to a predetermined
offset of the region of interest from a corresponding card edge.
The scan line segment 3702 is offset from the scan line segment
3700 according to the predetermined width of the rank characters
and suit symbols.
[0106] Subsequently, rows delimited by the scan line segments 3700
and 3702 are scanned. For each of the rows, a most contrasting
color or brightness value is recorded to form a sequence of
contrasting color or brightness values 3704, also referred to as a
contrasting value scan line segment.
[0107] Once the sequence 3704 is obtained, it is convolved with a
mask 3706 of properties expected from rank characters and suit
symbols. The mask 3706 consists of a stream of darker pixels
corresponding to the height of rank characters, a stream of
brighter pixels corresponding to the height of spaces separating
rank characters and suit symbols, and a final stream of darker
pixels corresponding to the height of suit symbols.
[0108] A result 3708 of the convolution gives rise to a peak 3710
where a sub-sequence of sequence 3704 corresponds to the expected
properties described by the mask 3706. Finally, a region of
interest 3714 corresponding to the card 3502 is extracted.
[0109] In FIG. 14, a flowchart describing another embodiment of the
method for extracting a region of interest from a line segment is
provided.
[0110] In step 3800, several scan line segments are determined. The
scan line segments are of the same length as the analyzed line
segment. Furthermore, the scan line segments are parallel to the
analyzed line segment. Finally, a first of the scan line segments
is offset from the analyzed line segment according to a
predetermined offset of the region of interest from a corresponding
card edge. The other scan line segments are offset from the first
scan line segment according to the predetermined width of the rank
and suit symbols. The scan line segments are positioned in that
manner to ensure that at least some of them would intersect any
characters and symbols located along the analyzed line segment.
[0111] In step 3802, each scan line segment is scanned and points
of contrasting color or brightness values are recorded to assemble
a set of contrasting points, which we will refer to as seed
points.
[0112] Subsequently, in step 3804, the set of contrasting points is
analyzed to identify clusters that appear to be defining, at least
partially, rank characters and suit symbols. The clusters can be
extracted by grouping the seed points or by further analyzing the
vicinity of one or more of the seed points using a region growing
algorithm.
[0113] Finally, in step 3806, regions of interest are extracted
from the identified clusters of contrasting points.
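The clustering of seed points in step 3804 could be sketched as a greedy grouping by distance; the distance bound is an assumed parameter, and a region-growing algorithm could refine each cluster as the text notes.

```python
# Sketch of step 3804: group seed points into clusters whenever a point lies
# within `max_dist` of an existing cluster member. The distance bound is an
# assumed example value; a single greedy pass suffices for illustration.
def cluster_points(points, max_dist=2.0):
    clusters = []
    for p in points:
        for c in clusters:
            if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= max_dist ** 2
                   for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])   # no nearby cluster: start a new one
    return clusters

# Two well-separated groups of contrasting points along a scan line.
seeds = [(0, 0), (0, 1), (1, 1), (10, 10), (10, 11)]
clusters = cluster_points(seeds)
```

Each resulting cluster would then seed a region of interest in step 3806.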
[0114] FIG. 15 illustrates an analysis of the line segment 3510
according to this embodiment of the invention.
[0115] First, two scan line segments 3900 and 3902 are determined.
The scan line segments 3900 and 3902 are of the same length as the
line segment 3510. Furthermore, the scan line segments 3900 and
3902 are parallel to the line segment 3510. Finally, the scan line
segment 3900 is offset from the line segment 3510 according to a
predetermined offset of the region of interest from a corresponding
card edge segment. The scan line segment 3902 is offset from the
scan line segment 3900 according to the predetermined width of rank
characters and suit symbols. The scan line segments 3900 and 3902
are positioned in that manner to ensure that at least one of them
would intersect any characters and symbols located along the line
segment 3510.
[0116] The scan line segments 3900 and 3902 are scanned and points
of contrasting color and brightness values are recorded to assemble
a sequence of contrasting points. Subsequently, the sequence is
analyzed and clusters of seed points 3910, 3912 and 3914 are
identified as likely to define, at least partially, rank characters
and suit symbols.
[0117] Finally, regions of interest 3920, 3922, and 3924 are
extracted respectively from the clusters of seed points 3910, 3912,
and 3914. Therefore, the method has succeeded in extracting a
region of interest of a card having no detectable corners.
[0118] Referring back to FIG. 11, the same invention is applied to
the line segments 3512, 3514, 3516, 3518, 3520, 3522, and 3524 as
well, in order to identify any desirable region of interest that is
yet to be extracted.
[0119] Although the invention has been described within the context
of a hand of cards, it may be applied within the context of a
single gaming object, or an amalgam of overlapping gaming
objects.
[0120] Although the invention has been described as preceded by a
corner analysis, it may be applied without any previous corner
analysis. However, it is usually preferable to start with a corner
analysis since corners are preferred over line segments as
reference points.
[0121] Although the invention has been described as a method of
extracting a region of interest from a card edge, it may do so from
any detected card feature, provided that the feature constitutes a
valid reference point for locating a region of interest. For
instance, the method may be applied to extract regions of interest
from detected corners, or detected pips, instead of line segments.
Such versatility is a sizeable asset within the context of table
games, where some playing cards may present a very limited number
of detectable features.
[0122] It is important to note that the preceding corner analysis
could have been performed according to the invention.
[0123] Referring back to FIG. 8, at step 156, a recognition method
may be applied to identify the value of the card. In one
embodiment, the ROI 118 is rotated upright and a statistical
classifier, also referred to as machine learning model, can be
applied to recognize the symbol. Prior to recognition, the ROI 118
may be pre-processed by thresholding the image in the ROI 118
and/or narrowing the ROI 118 to encompass the card identifying
symbols. Examples of statistical classifiers that may be utilized
with this invention include Neural Networks, Support Vector
Machines, Hidden Markov Models and Bayesian Networks. A
Feed-forward Neural Network is one example of a statistical
classifier that may be used with this system. Training of the
statistical classifier may happen in a supervised or unsupervised
manner. In an alternate embodiment, a method that does not rely on
a statistical classifier, such as template matching, may be
utilized. In yet another embodiment, the pattern of pips on the
cards may be utilized to recognize the cards, provided a sufficient
portion of the pattern is visible in a card hand. A combination of
recognition algorithms may be used to improve accuracy of
recognition.
[0124] The present invention provides a system for identifying a
gaming object on a gaming table in an efficient and seamless
manner. The system comprises at least one overhead camera for
capturing a plurality of images of the table; a detection module
for detecting a feature of the object on an image of the plurality;
a search module for extracting a region of interest of the image
that describes the object from the feature; a feature space module
for transforming a feature space of the region of interest to
obtain a transformed region of interest; a dimensionality reduction
module for reducing the transformed region into a reduced
representation according to dimensionality reduction algorithms,
and an identity module trained to recognize the object from the
transformed region.
[0125] Within the context of the system illustrated in FIG. 6, the
overhead camera corresponds to the Imager 32. As for the detection
module, the search module, the feature space module, the
dimensionality reduction module, and the identification module,
they are components of the IP module 80.
[0126] FIG. 16 is a block diagram of the preferred system for
identifying a gaming object on a gaming table.
[0127] The Imager 32 provides an overhead image of the game table
to a Detection module 4000. Subsequently, the Detection Module 4000
detects features of potential gaming objects placed on the game
table. Such detection may be performed according to any of the
aforementioned methods; for instance, it may consist of the steps
142, 144, 146, 148, 150, and 152, as illustrated in FIG. 8.
[0128] According to one embodiment of the present invention, the
Detection Module 4000 comprises a cascade of classifiers trained to
recognize specific features of interest such as corners and
edges.
[0129] According to another embodiment of the present invention,
the system further comprises a Booster Module, and the Detection
Module 4000 comprises a cascade of classifiers. The Booster module
serves the purpose of combining weak classifiers of the cascade
into a stronger classifier as illustrated in FIG. 25. It may
operate according to one of several boosting algorithms including
Discrete Adaboost, Real Adaboost, LogitBoost, and Gentle
Adaboost.
[0130] Referring back to FIG. 16, the Detection Module 4000
provides the image along with the detected features to a Search
Module 4002. The latter extracts regions of interest within the
image from the detected features. Such extraction may be performed
according to any of the aforementioned methods; for instance, it
may consist of the steps 3600, 3602, 3604, and 3606, illustrated in
FIG. 12. The extracted regions of interest may be further processed
by applying image thresholding, rotation, or refinement of the
region of interest.
[0131] The Search Module 4002 provides the extracted regions of
interest to the Feature Space (FS) Module 4004. For each region of
interest, the FS Module 4004 transforms a provided representation
into a feature space, or a set of feature spaces that is more
appropriate for recognition purposes.
[0132] According to one embodiment, each region of interest
provided to the FS Module 4004 is represented as a grid of pixels,
wherein each pixel is assigned a color or brightness value.
[0133] Prior to performing a transformation, the FS Module 4004
must select a desirable feature space according to a required type,
speed, and robustness of recognition. The selection may be
performed in a supervised manner, an unsupervised manner, or
both.
[0134] FIG. 17 illustrates an example of a feature space that may
be used for recognition purposes. The feature space consists in a
histogram of the grayscale values stored in each column of a pixel
grid.
[0135] Once a feature space is selected, the FS Module 4004 applies
a corresponding feature space transformation on a corresponding
image.
[0136] It is important to distinguish feature space transformations
from geometrical transformations. The geometrical transformation of
an image consists in reassigning the positions of pixels
within a corresponding grid. While such a transformation does
modify an image, it does not modify underlying semantics; the means
by which the original image and its transformed version are
represented is the same. On the other hand, feature space
transformations modify underlying semantics.
[0137] One example of a feature space transformation consists in
modifying the representation of colours within a pixel grid from
RGB (Red, Green, and Blue) to HSV (Hue, Saturation, and Value or
Brightness). In this particular case, the data is not modified, but
its representation is. Such a transformation is advantageous in
cases where it is desirable for the brightness of a pixel to be
readily available. Furthermore, the HSV space is less sensitive to
a certain type of noise than its RGB counterpart.
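This RGB-to-HSV feature space transformation can be shown with the Python standard library's `colorsys` module; the pixel grid below is a small stand-in for a region of interest.

```python
import colorsys

# Paragraph [0137] in code: the same pixel data re-expressed in the HSV
# feature space, where the brightness of a pixel is directly available as
# the V component. The data is unchanged; only its representation differs.
def rgb_grid_to_hsv(grid):
    return [[colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
             for (r, g, b) in row] for row in grid]

# One pure red pixel and one mid-grey pixel.
pixels = [[(255, 0, 0), (128, 128, 128)]]
hsv = rgb_grid_to_hsv(pixels)
```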
[0138] The Hough Line Transform is another example of a feature
space transformation. It consists in transforming a binary image
from a set of pixels to a set of lines. In the new feature space,
each vector represents a line whereas in the original space, each
vector represents the coordinates of a pixel. Consequently, such a
transformation is particularly advantageous for applications where
lines are to be analyzed.
[0139] Other feature space transformations include various
filtering operations such as Laplace and Sobel. Pixels resulting
from such transformations store image derivative information rather
than image intensity.
[0140] Canny edge detection, Fast Fourier Transform (FFT), and
Discrete Cosine Transform (DCT), and Wavelet transforms are other
examples of feature space transformations. Images resulting from
FFT and DCT are no longer represented spatially (by a pixel grid),
but rather in a frequency domain, wherein each point represents a
particular frequency contained in the real-domain image. Such
transformations are practical because the resulting feature space
is invariant with respect to some transformations, and robust with
respect to others. For instance, discarding the higher frequency
components of an image resulting from a DCT makes it more resilient
to noise, which is generally present in high frequencies. As a
result, recognition is more reliable.
[0141] Within the context of the present invention, the use of
different feature spaces provides for additional robustness with
respect to parameters such as lighting variations, brightness,
image noise, image resolutions, ambient smoke, as well as
geometrical transformations such as rotations and translations. As
a result, the system of the present invention provides for greater
training and recognition accuracy.
[0142] According to a preferred embodiment of the present
invention, Principal Component Analysis (PCA) is the main feature
space transformation in the arsenal of the FS Module 4004. It is a
linear transform that selects a new coordinate system for a given
data set, such that the greatest variance by any projection of the
data set relates to a first axis, known as the principal component,
the second greatest variance, on the second axis, and so on.
[0143] The first step of the PCA consists in constructing a 2D
matrix A of size n.times.wh where each column is an image vector,
given n images of w.times.h pixels. Each image vector is formed by
concatenating all the pixel rows of a corresponding image into a
vector. The second step consists in computing an average image from
the matrix A by summing up all the rows and dividing by n. The
resulting vector of size (wh) is called u. The third step
consists in subtracting u from all the columns of A to get a mean
subtracted matrix B of size (n.times.wh). The fourth step consists
in computing the dot products of all possible image pairs. Let C be
the new (n.times.n) matrix where C[i][j]=dot product of B[i] and
B[j]. C is the covariance matrix. The penultimate step consists in
computing the n eigenvalues and corresponding eigenvectors of C.
Finally, all eigenvalues of C are sorted from the highest
eigenvalue to the lowest eigenvalue.
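The steps above can be sketched in code; for simplicity each image is stored as a row of A rather than a column, and the eigendecomposition is replaced by a power iteration that recovers only the principal component.

```python
import math

# Sketch of the PCA recipe of paragraph [0143]: mean image u, mean-subtracted
# matrix B, covariance C[i][j] = dot(B[i], B[j]), then the top eigenpair of C
# found by power iteration (standing in for a full eigendecomposition).
def pca_top_component(A, iters=200):
    n, d = len(A), len(A[0])
    u = [sum(A[i][k] for i in range(n)) / n for k in range(d)]    # mean image
    B = [[A[i][k] - u[k] for k in range(d)] for i in range(n)]    # mean-subtracted
    C = [[sum(B[i][k] * B[j][k] for k in range(d)) for j in range(n)]
         for i in range(n)]                                       # covariance
    v = [1.0] + [0.0] * (n - 1)
    for _ in range(iters):                                        # power iteration
        w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    eigval = sum(v[i] * sum(C[i][j] * v[j] for j in range(n)) for i in range(n))
    return eigval, v

# Three tiny "images" varying mostly along one direction.
A = [[1.0, 0.0], [2.0, 0.1], [3.0, 0.2]]
eigval, v = pca_top_component(A)
```

A real implementation would diagonalize C fully and sort all eigenpairs, as the paragraph describes; the power iteration is used here only to keep the sketch self-contained.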
[0144] According to another embodiment, the FS Module 4004 applies
predominantly one or more of the DCT, FFT, Log Polar Domains, or
other techniques resulting in edge images.
[0145] Referring back to FIG. 16, and according to the preferred
embodiment of the invention, the FS Module 4004 provides the
transformed representation, or set of representations to a
Dimensionality Reduction (DR) Module 4006. The DR Module 4006
reduces the dimensionality of the provided representations by
applying feature selection techniques, feature extraction
techniques, or a combination of both.
[0146] According to the preferred embodiment of the present
invention, the representations provided by the FS Module 4004
result from the application of a PCA, and the DR Module 4006
reduces their dimensionality by applying a feature selection
technique that consists in selecting a subset of the PCA
coefficients that contain the most information.
[0147] According to one embodiment of the present invention, the
representations provided by the FS Module 4004 result from the
application of a DCT, and the DR Module 4006 reduces their
dimensionality by applying a feature selection technique that
consists in selecting a subset of the DCT coefficients that contain
the most information.
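This DCT-based feature selection can be illustrated on a one-dimensional signal: compute its DCT-II and keep only the lowest-frequency coefficients, which typically carry most of the information.

```python
import math

# Sketch of the feature selection in paragraph [0147]: an (unnormalized)
# 1D DCT-II followed by truncation to the first `keep` coefficients.
def dct(signal):
    n = len(signal)
    return [
        sum(signal[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
            for i in range(n))
        for k in range(n)
    ]

def select_low_frequencies(coeffs, keep):
    """Feature selection: retain only the lowest-frequency coefficients."""
    return coeffs[:keep]

# A constant signal concentrates all its energy in the DC coefficient,
# so discarding the higher frequencies loses nothing.
signal = [4.0, 4.0, 4.0, 4.0]
coeffs = dct(signal)
reduced = select_low_frequencies(coeffs, 2)
```

In two dimensions the same idea applies to a pixel grid, with noise tending to live in the discarded high-frequency coefficients.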
[0148] According to another embodiment of the present invention,
the DR Module 4006 reduces the dimensionality of the provided
representations by applying a feature extraction technique that
consists in projecting them into a feature space of fewer
dimensions.
[0149] According to another embodiment of the present invention,
the representations provided by the FS Module 4004 result from the
application of a DCT, and the DR Module applies a combination of
feature selection and feature extraction techniques that consists
in selecting a subset of the DCT coefficients that contain the most
information, and applying PCA on the selected coefficients.
[0150] Within the context of the present invention, the application
of dimensionality reduction techniques reduces computational
overhead, thereby accelerating the training and recognition
procedures performed by the Identity Module 4008. Furthermore,
dimensionality reduction tends to eliminate, or at the very least
reduce, noise, and therefore increases recognition and training
efficiency.
[0151] According to another embodiment of the invention, the FS
Module 4004 provides the transformed representation or set of
transformed representations directly to an Identity Module 4008
trained to recognize gaming objects from such transformed
representations of regions of interest.
[0152] Referring back to FIG. 16 and according to the preferred
embodiment of the present invention, the DR Module 4006 provides
the dimensionality reduced representations to an Identity Module
4008, which identifies a corresponding gaming object.
[0153] Still according to the preferred embodiment of the present
invention, the Identity Module 4008 comprises a statistical
classifier trained to recognize gaming objects from dimensionality
reduced representations.
[0154] According to one embodiment of the present invention, the
Identity Module 4008 comprises a Feed-forward Neural Network such
as the one illustrated in FIG. 22 that consists of input nodes,
multiple hidden layers, and output nodes. The hidden layers can be
partially connected, as those shown in FIG. 22, or fully connected.
During the initial supervised training mode, a back-propagation
learning method is utilized in conjunction with an error function to
allow the Neural Network to adjust its internal weights according to
the inputs and outputs.
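A minimal sketch of such supervised back-propagation training, using a single hidden layer and toy XOR data for brevity; the layer sizes, learning rate, and training data are hypothetical, not those of the actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy network: 2 inputs -> 4 hidden nodes -> 1 output
W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

lr = 1.0
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the squared-error function
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # adjust internal weights according to the inputs and outputs
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```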
[0155] According to another embodiment of the present invention,
the Identity Module 4008 comprises a cascade of classifiers.
[0156] According to another embodiment of the present invention,
the system further comprises a Booster Module, and the Identity
Module 4008 comprises a cascade of classifiers. The Booster module
serves the purpose of combining weak classifiers of the cascade
into a stronger classifier. It may operate according to one of
several boosting algorithms including Discrete Adaboost, Real
Adaboost, LogitBoost, and Gentle Adaboost.
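As an illustration of how a Booster Module might combine weak classifiers into a stronger one, here is a minimal Discrete Adaboost sketch over one-dimensional threshold "stumps"; the data and the form of the weak learner are hypothetical:

```python
import numpy as np

def adaboost(X, y, n_rounds=10):
    """Discrete Adaboost: combine weak threshold classifiers into a
    stronger weighted vote.  Labels y are in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                 # sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        # weak learner: pick the best threshold/sign on the feature
        for thr in X:
            for sign in (1, -1):
                pred = sign * np.where(X > thr, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)      # re-weight misclassified samples
        w /= w.sum()
        stumps.append((alpha, thr, sign))
    return stumps

def predict(stumps, X):
    score = sum(a * s * np.where(X > t, 1, -1) for a, t, s in stumps)
    return np.sign(score)

X = np.array([0., 1., 2., 3., 4., 5.])
y = np.array([-1, -1, -1, 1, 1, 1])         # separable at x > 2.5
model = adaboost(X, y, n_rounds=3)
```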
[0157] Referring back to FIG. 16, and according to one embodiment
of the present invention, the system is used to perform deck
verification. When such verification is required, the dealer
presents the corresponding cards on the table, in response to which
the Identity Module 4008 is automatically triggered to provide the
rank and suit of each identified card to a Deck Verification Module
4010. The latter module analyzes the provided data to ensure that
the deck of cards adheres to a provided set of standards.
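One simple "set of standards" a Deck Verification Module might check is that the identified cards form a complete standard 52-card deck; a sketch, with hypothetical rank and suit labels:

```python
from collections import Counter

RANKS = "A 2 3 4 5 6 7 8 9 10 J Q K".split()
SUITS = ["spades", "hearts", "diamonds", "clubs"]

def verify_deck(cards):
    """Check that a list of identified (rank, suit) pairs forms a
    complete standard 52-card deck; report missing and duplicate
    cards."""
    expected = Counter((r, s) for r in RANKS for s in SUITS)
    seen = Counter(cards)
    missing = list((expected - seen).elements())
    duplicates = list((seen - expected).elements())
    return missing, duplicates

full = [(r, s) for r in RANKS for s in SUITS]
```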
[0158] According to one embodiment of the present invention, the
Detection Module 4000 recognizes a configuration of playing cards
suitable for a deck verification procedure and triggers the
Identity Module 4008 to provide the rank and suit of each
identified card to a Deck Verification Module 4010.
[0159] According to another embodiment of the present invention,
the Identity Module 4008 is manually triggered to provide the rank
and suit of each identified card to the Deck Verification Module
4010.
[0160] Referring back to FIG. 8, once the identity and position
profile of each visible card in the gaming region has been
obtained, the data can be output to other modules at step 158.
Examples of data output at step 158 may include the number of card
hands, the Cartesian coordinates of each corner of a card in a hand
(or other positional information such as line segments), and the
identity of the card as a rank and/or suit.
[0161] At step 160 the process waits for a new image and when
received processing returns to step 144.
[0162] Referring now to FIG. 9, an overhead view of a gaming table
with proximity detection sensors is shown. In an alternative
embodiment, the IP module 80 may utilize proximity detection sensors
170. Card shoe 24 is a card shoe reader, which dispenses playing
cards and generates signals indicative of card identity. An example
of a card shoe reader 24 may include those disclosed in U.S. Pat.
No. 5,374,061 to Albrecht, U.S. Pat. No. 5,941,769 to Order, U.S.
Pat. No. 6,039,650 to Hill, or U.S. Pat. No. 6,126,166 to Lorson.
Commercial card shoe readers such as for example the MP21 card
reader unit sold by Bally Gaming or the Intelligent Shoe sold by
Shuffle Master Inc. may be utilized. In an alternate embodiment of
the card shoe reader, a card deck reader such as the readers
commercially sold by Bally Gaming and Shuffle Master can be
utilized to determine the identity of cards prior to their
introduction into the game. Such a card deck reader would
pre-determine a sequence of cards to be dealt into the game. An
array of proximity detection sensors 170 can be positioned under
the gaming table 12 parallel to the table surface, such that
periodic sampling of the proximity detection sensors 170 produces a
sequence of frames, where each frame contains the readings from the
proximity detection sensors. Examples of proximity detection
sensors 170 include optical sensors, infra red position detectors,
photodiodes, capacitance position detectors and ultrasound position
detectors. Proximity detection sensors 170 can detect the presence
or absence of playing cards (or other gaming objects) on the
surface of gaming table 12. Output from the array of proximity
detection sensors can be analog or digital and can be further
processed in order to obtain data that represents objects on the
table surface as blobs and thus replace step 142 of FIG. 8. In this
embodiment a shoe 24 would provide information on the card dealt
and sensors 170 would provide positioning data. The density of the
sensor array (resolution) will determine what types of object
positioning features may be obtained. To assist in obtaining
positioning features, further processing may be performed, such as
shown in FIG. 10, which is a plan view of a card position relative
to proximity detection sensors 170. Sensors 170 provide signal
strength information, where the value one represents an object
detected and the value zero represents no object detected. Straight
lines may be fitted to the readings of sensors 170 using a line
fitting method. In this manner proximity detection sensors 170 may
be utilized to determine position features such as line segments
114 or corners 116.
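The line-fitting step can be sketched as follows; the sensor grid, its spacing, and the edge-point extraction rule are hypothetical simplifications:

```python
import numpy as np

def fit_edge_line(readings, spacing=1.0):
    """Fit a straight line to one edge of an object in a binary
    proximity-sensor frame.  'readings' is a 2-D array of 0/1 sensor
    values; in each sensor row, the first column reading 1 is taken as
    an edge point, and a line x = m*y + b is least-squares fitted
    through those points."""
    edge_pts = []
    for r, row in enumerate(readings):
        cols = np.flatnonzero(row)
        if cols.size:
            edge_pts.append((r * spacing, cols[0] * spacing))
    ys, xs = zip(*edge_pts)
    m, b = np.polyfit(ys, xs, 1)        # slope and intercept
    return m, b

# hypothetical 4x6 frame: a vertical card edge along column 2
frame = np.array([[0, 0, 1, 1, 1, 1]] * 4)
m, b = fit_edge_line(frame)
```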
[0163] In this embodiment, identity data generated from the card
shoe reader 24 and positioning data generated from proximity
detection sensors 170 may be grouped and output to other modules.
Associating positional data with cards may be performed by the IPAT
module 84.
[0164] In another alternate embodiment of the IP module 80, card
reading may have an RFID based implementation. For example, RFID
chips embedded inside playing cards may be wirelessly interrogated
by RFID antennae or scanners in order to determine the identity of
the cards. Multiple antennae may be used to wirelessly interrogate
and triangulate the position of the RFID chips embedded inside the
cards. Card positioning data may be obtained either by wireless
interrogation and triangulation, a matrix of RFID sensors, or via
an array of proximity sensors as explained herein.
[0165] We shall now describe the function of the Intelligent
Position Analysis and Tracking module (IPAT module) 84 (see FIG.
6). The IPAT module 84 performs analysis of the identity and
position data of cards/card hands and interprets them
"intelligently" for the purpose of tracking game events, game
states and general game progression. The IPAT module may perform
one or more of the following tasks: [0166] a) Object modeling;
[0167] b) Object motion tracking; [0168] c) Points in contour test;
[0169] d) Detect occlusion of cards; [0170] e) Set status flags for
card positional features; and [0171] f) Separate overlapping card
hands into individual card hands.
[0172] According to the present invention, the IPAT module 84, in
combination with the Imager 32, the IP module 80, and the card shoe
24, may also detect inconsistencies that occur on a game table as a
result of an illegal or erroneous manipulation of playing
cards.
[0173] According to a preferred embodiment of the present
invention, the system for detecting inconsistencies that occur on a
game table as a result of an illegal or erroneous manipulation of
playing cards comprises a card shoe for storing playing cards to be
dealt on the table; a card reader for determining an identity and a
dealing order of each playing card as it is being dealt on the
table from the shoe; an overhead camera for capturing images of the
table; a recognition module for determining an identity and a
position of each card positioned on the table from the images; and
a tracking module for comparing the dealing order and identity
determined by the card reader with the identity and the position
determined by the recognition module, and detecting the
inconsistency.
[0174] Within the context of the system illustrated in FIG. 6, the
card shoe and card reader correspond to the card shoe 24, which
comprises an embedded card reader. The overhead camera corresponds
to the Imager 32. The recognition module corresponds to the IP
module 80. Finally, the tracking module corresponds to the IPAT
module 84.
[0175] In FIG. 18, a flowchart describing the interaction between
the IPAT module 84, IP module 80, and card shoe 24 for detecting
such inconsistencies is provided. In step 4200, the IPAT module 84
is calibrated and its global variables are initialized. In step
4202, the IPAT module 84 receives data from the card shoe 24.
[0176] In the preferred embodiment of the present invention, the
data is received immediately following each removal of a card from
the card shoe 24. In another embodiment, the data is received
following each removal of a predetermined number of cards from the
card shoe 24. In yet another embodiment, the data is received
periodically.
[0177] In the preferred embodiment of the present invention, the
data consist of a rank and suit of a last card to be removed from
the card shoe 24. In another embodiment, the data consist of a rank
of a last card to be removed from the card shoe 24.
[0178] In step 4204, the IPAT module 84 receives data from the IP
module 80.
[0179] In the preferred embodiment of the present invention, the
data is received periodically. In another embodiment, the data is
received in response to the realization of step 4202.
[0180] In the preferred embodiment of the present invention, the
data consist of a rank, suit, and position of each card placed on
the game table.
[0181] In another embodiment, the data consist of a rank and suit
of each card placed on the game table.
[0182] In yet another embodiment, the data consist of a rank of
each card placed on the game table.
[0183] In yet another embodiment, the data consist of a suit of
each card placed on the game table.
[0184] In yet another embodiment of the present invention, the data
consist of a likely rank and suit, as well as a position of each
card placed on the game table.
[0185] In yet another embodiment of the present invention, the data
consist of a likely rank and a position of each card placed on the
game table.
[0186] In yet another embodiment of the present invention, the data
consist of a likely suit and a position of each card placed on the
game table.
[0187] In step 4206, the IPAT module 84 compares the data provided
by the card shoe 24 with those provided by the IP module 80.
[0188] In the preferred embodiment of the present invention, the
IPAT module 84 verifies whether the rank and suit of cards removed
from the card shoe 24 as well as the order in which they were
removed correspond to the rank, suit, and position of cards placed
on the game table according to a set of rules of the game being
played.
[0189] In another embodiment, the IPAT module 84 verifies whether
the rank and suit of cards removed from the card shoe 24 correspond
to the rank and suit of those that are placed on the game
table.
[0190] If an inconsistency is detected, the IPAT module 84 informs
the surveillance module 92 according to step 4208. Otherwise, the
IPAT module 84 returns to step 4202 as soon as subsequent data is
provided by the card shoe 24.
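The identity comparison of step 4206 might be sketched as follows; this is a simplification of the embodiment that checks only rank and suit (the preferred embodiment additionally verifies dealing order and position against the rules of the game being played):

```python
def detect_inconsistency(shoe_cards, table_cards):
    """Compare the (rank, suit) pairs read by the card shoe reader
    with those recognized on the table; return a description of the
    inconsistency, or None if the two data sets agree."""
    if sorted(shoe_cards) != sorted(table_cards):
        on_table_only = sorted(set(table_cards) - set(shoe_cards))
        return f"cards on table not dealt from shoe: {on_table_only}"
    return None

# e.g. a Six of Hearts switched for a hidden Four of Hearts
shoe = [("5", "S"), ("6", "H"), ("Q", "C"), ("A", "D")]
table = [("5", "S"), ("4", "H"), ("Q", "C"), ("A", "D")]
alert = detect_inconsistency(shoe, table)
```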
[0191] The invention will now be described within the context of
monitoring a game of Baccarat. According to the rules of the game,
a dealer withdraws four cards from a card shoe and deals two hands
of two cards, face down; one for the player, and one for the bank.
The player is required to flip the dealt cards and return them back
to the dealer. The latter organizes the returned cards on the table
and determines the outcome of the game. One known form of cheating
consists in switching cards. More specifically, a player may hide
cards of desirable value, switch a dealt card with one of the
hidden cards, flip the illegally introduced card and return it back
to the dealer. The present invention provides an efficient and
seamless means to detect such illegal procedures.
[0192] As mentioned hereinabove, according to the rules of the
Baccarat, the dealer must withdraw four cards from the card shoe.
According to a first exemplary scenario, the dealer withdraws in
order the Five of Spades, Six of Hearts, Queen of Clubs, and the Ace
of Diamonds. The rank and suit of each of the four cards is read by
the card shoe 24, and provided to the IPAT module 84.
[0193] The player flips the dealt cards and returns them to the
dealer. The latter organizes the four cards on the table as
illustrated in FIG. 19. The Five of Spades 4300 and the Six of
Hearts 4302 are placed in a region dedicated to the player's hand,
and the Queen of Clubs 4304 and Ace of Diamonds 4306 are placed in
a region dedicated to the bank's hand.
[0194] The Imager 32 captures overhead images of the table, and
sends the images to the IP module 80 for processing. The IP module
80 determines the position, suit, and rank of cards 4300, 4302,
4304, and 4306, and provides the information to the IPAT module 84.
The latter compares the data received from the card shoe reader and
the IP module, and finds no inconsistency. Consequently, it waits
for a new set of data from the card shoe reader.
[0195] According to a second exemplary scenario, the dealer
withdraws in order the Five of Spades, Six of Hearts, Queen of
Clubs, and the Ace of Diamonds. The rank and suit of each of the
four cards is read by the card shoe 24, and provided to the IPAT
module 84.
[0196] The player switches one of the dealt cards with one of his
hidden cards to form a new hand, flips the cards of the new hand,
and returns them to the dealer. The latter arranges the four cards
returned by the player as illustrated in FIG. 20. The Five of
Spades 4300 and Four of Hearts 4400 are placed in a region
dedicated to the player's hand, and the Queen of Clubs 4304 and Ace
of Diamonds 4306 are placed in a region dedicated to the bank's
hand.
[0197] The Imager 32 captures overhead images of the table, and
sends the images to the IP module 80 for processing. The IP module
80 determines the position, suit, and rank of cards 4300, 4400,
4304 and 4306, and provides the information to the IPAT module 84.
The latter compares the data received from the card shoe 24 and the
IP module, and finds an inconsistency; the rank of the cards 4300,
4302, 4304, and 4306 removed from the card shoe do not correspond
to the rank of the cards 4300, 4400, 4304 and 4306 placed on the
table. More specifically, the card 4302 has been replaced by 4400,
which likely results from a card switching procedure. Consequently,
the IPAT module 84 provides a detailed description of the detected
inconsistency to the surveillance module 92.
[0198] According to a third exemplary scenario, the dealer
withdraws in order the Five of Spades, Six of Hearts, Queen of
Clubs, and the Ace of Diamonds. The rank and suit of each of the
four cards is read by the card shoe 24, and provided to the IPAT
module 84.
[0199] The player flips the dealt cards and returns them to the
dealer. The latter organizes the four cards on the table in an
erroneous manner, as illustrated in FIG. 21. The Five of Spades
4300 and the Queen of Clubs 4304 are placed in a region dedicated
to the player's hand, and the Six of Hearts 4302 and Ace of
Diamonds 4306 are placed in a region dedicated to the bank's
hand.
[0200] The Imager 32 captures overhead images of the table, and
sends the images to the IP module 80 for processing. The IP module
80 determines the position, suit, and rank of cards 4300, 4302,
4304 and 4306, and provides the information to the IPAT module 84.
The latter compares the data received from the card shoe reader 24
and the IP module, and finds an inconsistency; while the rank and
suit of the cards removed from the card shoe correspond to the rank
and suit of the cards positioned on the table, the order in which
the cards were removed from the card shoe does not correspond to
the order in which the cards were organized on the table. More
specifically, the card 4302 has been swapped with the card 4304.
Consequently, the IPAT module 84 provides a detailed description of
the detected inconsistency to the surveillance module 92.
[0201] While the invention has been described within the context of
monitoring a game of Baccarat, it is applicable to any table game
involving playing cards dealt from a card shoe.
[0202] We shall now discuss the functionality of the game tracking
(GT) module 86 (see FIG. 6). The GT module 86 processes input
relating to card identities and positions to determine game events
according to a set of rules of the game being played. It also keeps
track of the state of the game, which it updates according to the
determined game events. It may also store and maintain previous
game states in memory, to which it may refer for determining a next
game state.
[0203] Returning to FIG. 6 we will now discuss bet recognition
module 88. Bet recognition module 88 can determine the value of
wagers placed by players at the gaming table. In one embodiment, an
RFID based bet recognition system can be implemented, as shown in
FIG. 5. Different embodiments of RFID based bet recognition can be
used in conjunction with gaming chips containing RFID transmitters.
As an example, the RFID bet recognition system sold by Progressive
Gaming International or by Chipco International can be
utilized.
[0204] The bet recognition module 88 can interact with the other
modules to provide more comprehensive game tracking. As an example,
the game tracking module 86 can send a capture trigger to the bet
recognition module 88 at the start of a game to automatically
capture bets at a table game.
[0205] Referring to FIG. 6 we will now discuss player tracking
module 90. Player tracking module 90 can obtain input from the IP
module 80 relating to player identity cards. The player tracking
module 90 can also obtain input from the game tracking module 86
relating to game events such as the beginning and end of each game.
By associating each recognized player identity card with the wager
located closest to the card in an overhead image of the gaming
region, the wager can be associated with that player identity card.
In this manner, comp points can be automatically accumulated to
specific player identity cards.
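The closest-wager association described above could be sketched as follows; the identifiers and coordinate data structures are hypothetical:

```python
import math

def associate_wagers(player_cards, wagers):
    """Associate each recognized player identity card with the wager
    located closest to it in the overhead image.  player_cards maps a
    card id to its (x, y) position; wagers is a list of
    (wager_id, (x, y)) pairs."""
    assoc = {}
    for card_id, (cx, cy) in player_cards.items():
        nearest = min(wagers,
                      key=lambda w: math.hypot(w[1][0] - cx,
                                               w[1][1] - cy))
        assoc[card_id] = nearest[0]
    return assoc

cards = {"player_A": (0, 0), "player_B": (100, 100)}
bets = [("bet_1", (5, 5)), ("bet_2", (95, 98))]
```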
[0206] Optionally the system can recognize special player identity
cards with machine readable indicia printed or affixed to them (via
stickers for example). The machine readable indicia can include
matrix codes, barcodes or other identification indicia. Such
specialty identity cards may also be utilized for identifying and
registering a dealer at a table. Furthermore, specialty identity
cards may be utilized to indicate game events such as a deck being
shuffled or a dispute being resolved at the table.
[0207] Optionally, biometrics technologies such as face recognition
can be utilized to assist with identification of players.
[0208] We will now discuss the functionality of surveillance module
92. Surveillance module 92 obtains input relating to automatically
detected game events from one or more of the other modules and
associates the game events to specific points in recorded video.
The surveillance module 92 can include means for recording images
or video of a gaming table. The recording means can include the
imagers 32. The recording means can be computer or software
activated and can be stored in a digital medium such as a computer
hard drive. Less preferred recording means such as analog cameras
or analog media such as video cassettes may also be utilized.
[0209] We shall now discuss the analysis and reporting module 94 of
FIG. 6. Analysis and reporting module 94 can mine data in the
database 102 to provide reports to casino employees. The module can
be configured to perform functions including automated player
tracking, including exact handle, duration of play, decisions per
hour, player skill level, player proficiency and true house
advantage. The module 94 can be configured to automatically track
operational efficiency measures such as hands dealt per hour
reports, procedure violations, employee efficiency ranks, actual
handle for each table and actual house advantage for each table.
The module 94 can be configured to provide card counter alerts by
examining player playing patterns. It can be configured to
automatically detect fraudulent or undesired activities such as
shuffle tracking, inconsistent deck penetration by dealers and
procedure violations. The module 94 can be configured to provide
any combination or type of statistical data by performing data
mining on the recorded data in the database.
[0210] Output, including alerts and player compensation
notifications, can be through output devices such as monitors, LCD
displays, or PDAs. An output device can be of any type and is not
limited to visual displays and can include auditory or other
sensory means. The software can potentially be configured to
generate any type of report with respect to casino operations.
[0211] Module 94 can be configured to accept input from a user
interface running on input devices. These inputs can include,
without limitation, training parameters, configuration commands,
dealer identity, table status, and other inputs required to operate
the system.
[0212] Although not shown in FIG. 6 a chip tray recognition module
may be provided to determine the contents of the dealer's chip
bank. In one embodiment an RFID based chip tray recognition system
can be implemented. In another embodiment, a vision based chip tray
recognition system can be implemented. The chip tray recognition
module can send data relating to the value of chips in the dealer's
chip tray to other modules.
[0213] Although not shown in FIG. 6, a dealer identity module may
be employed to track the identity of a dealer. The dealer can
optionally either key in her unique identity code at the game table
or optionally she can use an identity card and associated reader to
register their identity. A biometrics system may be used to
facilitate dealer or employee identification.
[0214] The terms imagers and imaging devices have been used
interchangeably in this document. The imagers can have any
combination of sensor, lens and/or interface. Possible interfaces
include, without limitation, 10/100 Ethernet, Gigabit Ethernet,
USB, USB 2, FireWire, Optical Fiber, PAL or NTSC interfaces. For
analog interfaces such as NTSC and PAL a processor having a capture
card in combination with a frame grabber can be utilized to get
digital images or digital video.
[0215] The image processing and computer vision algorithms in the
software can utilize any type or combination of color spaces or
digital file formats. Possible color spaces include, without
limitation, RGB, HSL, CMYK, Grayscale and binary color spaces.
[0216] The overhead imaging system may be associated with one or
more display signs. Display sign(s) can be non-electronic,
electronic or digital. A display sign can be an electronic display
displaying game related events happening at the table in real time.
A display and the housing unit for the overhead imaging devices may
be integrated into a large unit. The overhead imaging system may be
located on or near the ceiling above the gaming region.
[0217] Although the subject matter has been described in language
specific to structural features and/or methodological acts, it is
to be understood that the subject matter defined in the appended
claims is not necessarily limited to the specific features or acts
described above. Rather, the specific features and acts described
above are disclosed as example forms of implementing the
claims.
* * * * *