U.S. patent application number 12/489116, for a pulsed laser-based firearm training system and method for facilitating firearm training using detection of laser pulse impingement of projected target images, was filed on 2009-06-22 and published on 2010-09-16. This patent application is currently assigned to The United States of America as represented by the Secretary of the Army. Invention is credited to James A. Skala, Joel Lee Wilder, and Elessar Taggart.
United States Patent Application 20100233660
Kind Code: A1
Skala; James A.; et al.
Published: September 16, 2010
Application Number: 12/489116
Family ID: 42731012
Pulsed Laser-Based Firearm Training System, and Method for
Facilitating Firearm Training Using Detection of Laser Pulse
Impingement of Projected Target Images
Abstract
The invention provides a method for automatic calibration and
subsequent correlation of the position of a pulsed laser on a
projected image in a system having a projector for projecting
images onto a surface and a camera for sensing laser pulses on the
surface. The automatic calibration method sets the camera exposure,
allowing the system to operate in normal room-lighting conditions,
and correlates camera pixel positions to projected image pixel
positions by use of projected calibration images formed by sets of
horizontal and vertical lines, with automatic calibration
completing in less than 5 seconds. After calibration, the system
determines two-dimensional camera pixel centroids of laser beam
pulses on the projection surface to sub-pixel accuracy, and the
calibration data is used to correlate the camera pixel centroids to
the exact positions of the laser pulses on the projected image.
Inventors: Skala; James A. (Hartselle, AL); Wilder; Joel Lee (Huntsville, AL); Taggart; Elessar (Huntsville, AL)
Correspondence Address: DEPARTMENT OF THE ARMY, LEGAL OFFICE AMSAM-L-G-I, U.S. ARMY AVIATION & MISSILE COMMAND, REDSTONE ARSENAL, AL 35898-5000, US
Assignee: The United States of America as represented by the Secretary of the Army (Washington, DC)
Family ID: 42731012
Appl. No.: 12/489116
Filed: June 22, 2009
Related U.S. Patent Documents
Application Number: 61076048
Filing Date: Jun 26, 2008
Current U.S. Class: 434/22
Current CPC Class: F41A 33/02 20130101; F41G 3/2627 20130101; F41J 5/10 20130101; F41G 3/2655 20130101
Class at Publication: 434/22
International Class: F41G 3/26 20060101 F41G003/26
Government Interests
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT AND
DEDICATORY CLAUSE
[0002] This invention was made with Government support under a
contract awarded by the U.S. Army (Agreement # W31P4Q-05-A-0031 and
Subcontract #4600005773). The invention described herein may be
manufactured, used and licensed by or for the U.S. Government for
governmental purposes without payment of any royalties thereon.
Claims
1. A method for automatic calibration of a laser-based firearm
training system having a projector for projecting images onto a
surface and a camera for reading images projected onto the surface,
comprising: projecting a first image comprised of horizontal and
vertical lines upon the surface; reading the first image; adjusting
an exposure setting of the camera based on the first image so as to
ensure pixel values below saturation; projecting a second image
comprising a black color upon the surface; reading the second
image; calculating a background light level value for each pixel in
the second image; projecting a third image comprising a pattern of
horizontal lines upon the surface; reading the third image;
calculating a horizontal calibration value for each pixel in the
third image, wherein the resulting horizontal calibration value for
each pixel is reduced by the corresponding background light level
value; projecting a fourth image comprising a pattern of vertical
lines upon the surface; reading the fourth image; calculating a
vertical calibration value for each pixel in the fourth image,
wherein the resulting vertical calibration value for each pixel is
reduced by the corresponding background light level value;
calculating a centroid line for each vertical and horizontal line
and calculating a second order line least squares fit for each
centroid line, thereby generating three coefficients that describe
each vertical and horizontal line; and generating a data map for
mapping a pixel from an image read from the surface to a pixel in
an image projected onto the surface, wherein the data map is based
on the three coefficients that describe each vertical and
horizontal line.
2. The method of claim 1, wherein the first image is set to the
color white.
3. The method of claim 1, wherein the first image is set to the
color red.
4. The method of claim 1, wherein the first step of calculating
further comprises: storing the background light level value for
each pixel in a two dimensional array.
5. The method of claim 4, wherein the second step of calculating
further comprises: storing the horizontal calibration value for
each pixel in a two dimensional array.
6. The method of claim 5, wherein the third step of calculating
further comprises: storing the vertical calibration value for each
pixel in a two dimensional array.
7. A method for automatic detection of a laser pulse in a
laser-based firearm training system having a projector for
projecting images onto a surface and a camera for reading images
projected onto the surface, comprising: reading a first projected
image into a first data structure; searching for a laser pulse in
the image by searching for pixels in the first data structure with
a pixel value exceeding a predefined threshold; selecting a set of
pixels around a pixel having a pixel value exceeding a predefined
threshold; calculating a centroid of the laser pulse in the first
data structure by performing a two dimensional centroid calculation
upon the set of pixels; and calculating a pixel location of the
laser pulse in the first image based on the centroid.
8. The method of claim 7, wherein the set of pixels comprises a set
of 16×16 pixels centered at a pixel having a pixel value
exceeding a predefined threshold.
9. The method of claim 8, wherein the centroid is a two-dimensional
centroid.
10. The method of claim 9, wherein the second step of calculating
further comprises: calculating an X-direction centroid.
11. The method of claim 10, wherein the step of calculating an
X-direction centroid further comprises: multiplying each pixel in
the set of pixels by an X-direction coordinate of the pixel.
12. The method of claim 11, wherein the step of calculating an
X-direction centroid further comprises: summing the product of
multiplying each pixel in the set of pixels by an X-direction
coordinate of the pixel so as to produce a sum.
13. The method of claim 12, wherein the step of calculating an
X-direction centroid further comprises: dividing the sum by the sum
of all pixel values in the set of pixels.
14. The method of claim 9, wherein the second step of calculating
further comprises: calculating a Y-direction centroid.
15. The method of claim 14, wherein the step of calculating a
Y-direction centroid further comprises: multiplying each pixel in
the set of pixels by a Y-direction coordinate of the pixel.
16. The method of claim 15, wherein the step of calculating a
Y-direction centroid further comprises: summing the product of
multiplying each pixel in the set of pixels by a Y-direction
coordinate of the pixel so as to produce a sum; and dividing the
sum by the sum of all pixel values in the set of pixels.
17. A firearm laser training system operable to detect the location
of a projected laser beam pulse upon a projected target image is
provided, said system comprising: a projection means operable to
project a target image upon a surface; a video camera means
operable to scan said target image to produce scanned images of
said target image, including impact locations of said laser beam
pulse on said target image; and a processing means operable to
receive from said video camera means information associated with
said impact locations detected by said video camera means, said
processing means comprising: an automatic calibration module
operable to correlate camera pixels to target pixels of the
projected target image; a detection module operable to determine a
camera pixel position of the laser beam pulse; and a shot
correlation module operable to correlate the camera pixel position
of the laser beam pulse to the projected target image, so as to
determine the accuracy of the shot.
18. The firearm laser training system of claim 17, wherein the
automatic calibration module comprises: a means operable to project
a first image comprised of horizontal and vertical lines upon the
surface; a means operable to read the first image; a means operable
to adjust an exposure setting of the camera based on the first
image, so as to ensure pixel values below saturation; a means
operable to project a second image comprising a black color upon
the surface; a means operable to read the second image; a means
operable to calculate a background light level value for each pixel
in the second image; a means operable to project a third image
comprising a pattern of horizontal lines upon the surface; a means
operable to read the third image; a means operable to calculate a
horizontal calibration value for each pixel in the third image,
wherein the resulting horizontal calibration value for each pixel
is reduced by the corresponding background light level value; a
means operable to project a fourth image comprising a pattern of
vertical lines upon the surface; a means operable to read the
fourth image; a means operable to calculate a vertical calibration
value for each pixel in the fourth image, wherein the resulting
vertical calibration value for each pixel is reduced by the
corresponding background light level value; a means operable to
calculate a centroid line for each vertical and horizontal line and
calculating a second order line least squares fit for each centroid
line, thereby generating three coefficients that describe each
vertical and horizontal line; and a means operable to generate a
data map for mapping a pixel from an image read from the surface to
a pixel in an image projected onto the surface, wherein the data
map is based on the three coefficients that describe each vertical
and horizontal line.
19. The firearm laser training system of claim 18, wherein the
first image is set to the color white.
20. The firearm laser training system of claim 18, wherein the
first image is set to the color red.
21. The firearm laser training system of claim 18, wherein the
means operable to calculate the background light level for each
pixel in the second image further comprises: a means operable to
store the background light level value for each pixel in a two
dimensional array.
22. The firearm laser training system of claim 18, wherein the
means operable to calculate a horizontal calibration value for each
pixel in the third image further comprises: a means operable to
store the horizontal calibration value for each pixel in a two
dimensional array.
23. The firearm laser training system of claim 18, wherein the
means operable to calculate a vertical calibration value for each
pixel in the fourth image further comprises: a means operable to
store the vertical calibration value for each pixel in a two
dimensional array.
Description
PRIORITY CLAIMED
[0001] Benefit is claimed to provisional application No. 61/076,048
filed on Jun. 26, 2008.
FIELD OF THE INVENTION
[0003] This invention relates to firearm training, and more
specifically to firearm training using a pulsed laser against
projected target images to determine hit points.
BACKGROUND OF THE INVENTION
[0004] Firearms are utilized for a variety of purposes, such as
hunting, sporting competition, law enforcement, and military
operations. The inherent danger associated with firearms
necessitates training and practice in order to minimize the risk of
injury. However, special facilities are required to facilitate
practice of handling and shooting the firearm.
[0005] These special facilities tend to provide a sufficiently
sized area for firearm training, where the area required for
training may become quite large, especially for sniper-type or
other firearm training with extended range targets. The facilities
further confine projectiles propelled from the firearm within a
prescribed space, thereby preventing harm to the surrounding
environment. Accordingly, firearm trainees are required to travel
to the special facilities in order to participate in a training
session, while the training sessions themselves may become quite
expensive since each session requires new ammunition for practicing
handling and shooting of the firearm.
[0006] In addition, firearm training is generally conducted by
several organizations (e.g., military, law enforcement, firing
ranges or clubs, etc.). Each of these organizations may have
specific techniques or manners in which to conduct firearm training
and/or qualify trainees. Accordingly, these organizations tend to
utilize different types of targets, or may utilize a common target,
but with different scoring criteria. Furthermore, different targets
may be employed by users for firearm training or qualification to
simulate particular conditions or provide a specific type of
training (e.g., grouping shots, hunting, clay pigeons, etc.).
[0007] Prior systems have used video cameras to provide target
tracking. For example, U.S. Pat. No. 5,366,229
(Suzuki) discloses a shooting game machine including a projector
for projecting a video image onto a screen, wherein the video image
includes a target. In this game, it is the goal of a player to fire
a laser gun to emit a light beam at the target displayed in the
video image on the screen.
[0008] In the Suzuki '229 patent, a video camera photographs the
screen and provides a picture signal to coordinate computing means
for determining the X and Y coordinates of the beam point on the
screen (so as to determine whether the light beam struck the
target). Such systems, for instance, utilize measurement of the
luminance and/or chrominance of a video image to determine where,
within the target image, a target appears.
[0009] International Publication No. WO 92/08093 (Kunnecke et al.)
discloses a small arms target practice monitoring system including
a weapon, a target, a light-beam projector mounted on the weapon
and sighted to point at the target, and a processor. An evaluating
unit is connected to a camera to determine the coordinates of the
spot of light (projected by the light beam projector) on the
target. A processor is connected to the evaluating unit and
receives the coordinate information. The processor further displays
the spot on the projected target image displayed on a display
screen.
[0010] Such systems may calculate a position for the image, for
instance by reference to a region defined by matching chrominance
and/or luminance to pattern values, which might match patterns for
flesh tones and/or other applicable patterns. Another type of video
tracking system relies on a specified window to isolate regions of
interest in order to determine a target. Analog comparison
techniques may be used to perform tracking.
[0011] Although the above conventional target tracking systems can
track a laser beam impingement on a projected video image,
calibration of such systems is tedious and inefficient, frequently
requiring manual calibration to ensure that the projected
image is properly correlated with the sensing device and computer
system.
[0012] The accuracy of any such system can never be better than the
calibration or alignment between the projected images and the laser
beam sensing system. For example, U.S. Pat. No. 6,616,452 calls
for the computer system to perform a mechanical calibration and a
system calibration before training/simulation may begin. The
mechanical calibration generally facilitates alignment of the
sensing device with the projected image (generally projected onto a
screen) and computer system, while the system calibration enables
determination of parameters for system operation.
[0013] In particular, in the system described in U.S. Pat. No.
6,616,452, the computer system displays a calibration graphical
user screen, and the user must then adjust the displayed
coordinates. The computer system compensates for the device viewing
angle offset, and requests the user to indicate, preferably via a
mouse or other input device, the corners of the projected image
within the captured images within a window of the calibration
screen. The coordinates for a corner designated by a user are
displayed on the screen, where the user may selectively adjust the
coordinates.
[0014] This process is repeated for each corner of the projected
image to define for the computer system the projected image within
the image captured by the camera. The horizontal and vertical lines
are adjusted in accordance with the entered information to indicate
the system perspective of the projected image. This calibration is
then repeated until horizontal lines are substantially coincident
with the corresponding projected image horizontal edges, and the
projected image horizontal center line and vertical line are
substantially coincident with the vertical center line of the
captured image, thereby indicating alignment of the projected image
with image captured (recorded) by the camera of the system.
[0015] Such conventional calibration means and methods are very
time consuming, of limited accuracy, and are dependent upon user
interaction throughout the process. Therefore, there is a need for
a system for firearm training capable of precise target tracking
with concomitant ease of calibration to correlate camera pixels to
target pixels of the projected target image in various light and
environmental conditions.
SUMMARY OF THE INVENTION
[0016] In one embodiment of the present invention, a method for
automatic calibration of a laser-based firearm training system
having a projector for projecting images onto a surface and a
camera for reading images projected onto the surface is provided.
The method includes projecting a first image comprised of an all
white screen upon the target surface, reading the first image and
adjusting an exposure setting of the camera based on the first
image so as to ensure pixel values below saturation.
[0017] The method further includes projecting a second image
comprising a black color upon the surface, reading the second image
and calculating a background light level value for each pixel in
the second image. The method further includes projecting a third
image comprising a pattern of horizontal lines upon the surface,
reading the third image and calculating a horizontal calibration
value for each pixel in the third image, wherein the resulting
horizontal calibration value for each pixel is reduced by the
corresponding background light level value.
[0018] The method further includes projecting a fourth image
comprising a pattern of vertical lines upon the surface, reading
the fourth image and calculating a vertical calibration value for
each pixel in the fourth image, wherein the resulting vertical
calibration value for each pixel is reduced by the corresponding
background light level value. The method further includes
calculating a centroid line for each vertical and horizontal line
and calculating a second order line least squares fit for each
centroid line, thereby generating three coefficients that describe
each vertical and horizontal line and generating a data map for
mapping a pixel from an image read from the surface to a pixel in
an image projected onto the surface, wherein the data map is based
on the three coefficients that describe each vertical and
horizontal line.
[0019] In another embodiment of the present invention, a method for
automatic detection of a laser pulse in a laser-based firearm
training system having a projector for projecting images onto a
surface and a camera for reading images projected onto the surface
is provided. The method includes reading a first projected image
into a first data structure and searching for a laser pulse in the
image by searching for pixels in the first data structure with a
pixel value exceeding a predefined threshold.
[0020] The method further includes selecting a set of pixels around
a pixel having a pixel value exceeding a predefined threshold and
calculating a centroid of the laser pulse in the first data
structure by performing a two dimensional centroid calculation upon
the set of pixels. The method further includes calculating a pixel
location of the laser pulse in the first image based on the
centroid.
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] The subject matter, which is regarded as the invention, is
particularly pointed out and distinctly claimed in the claims at
the conclusion of the specification. The foregoing and other
features and advantages of the invention will be apparent
from the following detailed description taken in conjunction with
the accompanying drawings.
[0022] FIG. 1 is a box diagram of the pulsed laser-based firearm
training system of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0023] A pulsed laser-based firearm training system, and method of
utilizing same is provided, wherein a laser beam pulse is "fired"
at a projected target image, and a video camera, in conjunction
with hardware and software means, determines the location of the
laser beam strike on the projected target image by correlation of
the projected target image pixels to the video camera pixels. The
hardware and software means, in conjunction with a video projector
and video camera, is capable of automatically correlating the video
camera pixels (i.e., the image "captured" by the system) to the
projected target image pixels (i.e., the projected image).
[0024] As illustrated in FIG. 1, the present invention provides a
pulsed laser-based firearm training system 1 comprised of a
projection means 3, a video camera means 5, and a hardware-software
processing means 7. In addition, a laser emitting device 9 is
provided. The system 1 is operable to detect the aim points of
weapons, having the laser emitting device mounted thereon or
integrated therewith, that are fired at a projected target image
11.
[0025] The performance of the system of the present invention is
relatively independent of the accuracy of setup of the video camera
means 5 with respect to the projected target image, and can be used
with no degradation in high and uneven ambient lighting conditions,
as long as the projected image is not completely overcome by the
ambient light. Importantly, the system 1 of the present invention,
via a software program (i.e., the automatic calibration module 13,
as illustrated) executed on the processing means 7,
automatically calibrates itself in less than 3 seconds, i.e.,
correlates camera pixels (i.e., pixels garnered by the video camera
means 5) to target pixels of the projected target image 11 (i.e.,
pixels comprising the image that is projected as image 11) without
the need for human interaction.
[0026] In operation, the system 1 is first calibrated via the
automatic calibration module 13 and method. Then, a simulated or
actual weapon (typically a rifle or pistol), equipped with the
laser emitting device 9 (aligned with the sights or offset a
predetermined amount from the sights thereof), is fired at the
projected target image 11, triggering the laser emitting device 9
to pulse a dot laser-pulse 20 on the image 11.
[0027] The system 1 detects the positions of the laser pulse 20
impinging on the projected target image 11. Then, the system 1
correlates the position of the detected laser pulse 20 in the image
captured by the camera to the corresponding pixel of the projected
image, based on the relation of the projected image to the
captured image calculated in the automatic calibration process
mentioned above. This process is performed in real time (less than
7 milliseconds when a 210 Frame Per Second video camera is used as
the video camera means 5).
[0028] The laser emitting device 9 may be any commercially
available visible or near infrared dot laser. For example, a 3
milliwatt commercial laser emitter, which is readily available,
performs well in the present application. However, the laser can be
any visible laser or a near IR laser that is visible to the camera,
with a minimum power of about 1.5 milliwatts. In the laser emitting
device 9 of the present invention, a laser pulse circuit applies
power to the laser when the weapon is "fired," and is referred to
as a "fire event". The laser pulse length must be at least one
video frame plus 2 milliseconds long (for a 210 frames per second
video camera, the pulse length should be at least 6.8 milliseconds
long).
[0029] The means of detecting the fire events may vary, including,
but not limited to, a trigger contact, a microphone sensing
circuit, and a magnetic proximity circuit. The fire event is
usually detected audibly when a real weapon is used in "dry fire"
mode, i.e., when the impact of the hammer is heard, a fire event is
generated. Various means can be used to detect fire events in both
simulated and actual weapons, such as a switch in the trigger
mechanism, a magnetic reed switch, or a Hall effect sensor. The laser
emitting device 9 is preferably configured such that the laser
circuit causes a small LED to blink when it fires the laser, so as
to provide a visible indication when an infrared (IR) laser is
used.
[0030] The projection means 3 may be any conventional video
projector, as long as the strength of the projected image does not
exceed that of the laser pulse. If the projected target image is too
bright, a "long pass" optical filter attached to the camera lens
solves this problem; for example, a 550 nm long pass filter allows
the auto calibration to work, but causes the projected image seen by
the video camera to be subdued by about 50% without substantially
affecting the brightness of the laser.
[0031] In the preferred embodiment, the 550 nm long pass filter was
found suitable with all projectors tested. The processing means 7
(i.e., a computer system) may be comprised of any conventional
computer system, such as a conventional IBM-compatible laptop or
other type of personal computer (e.g., notebook, desk top,
mini-tower, Apple Macintosh, palm pilot, etc.), preferably equipped
with a keyboard and a mouse.
[0032] The computer system (processing means 7) may utilize any of
the major platforms (e.g., Linux, Macintosh, Unix, OS2, etc.), but
preferably includes a Windows environment (e.g., Windows XP or
Vista). Further, the processing means 7 may include other
conventional components (e.g., processor, disk storage or hard
drive, etc.) having sufficient processing and storage capabilities
to effectively execute the system software.
[0033] The computer system is in communication with the set-up
parameters of the video camera means 5 via, for example, a USB
(Universal Serial Bus) interface to set frame exposure time and
other camera configuration parameters. The video camera means 5 may
be mounted on a tripod, table, component rack, etc., and positioned
at a suitable location from the surface upon which the target image
is to be projected. However, any type of mounting or other
structure may be utilized to support the video camera means 5. The
video camera means 5 is typically implemented by a camera employing
a CMOS (complementary metal-oxide-semiconductor) imager.
For example, any conventional commercially available video camera
may be used.
[0034] However, preferably, a video camera capable of operating at
210 frames per second or greater is used. A higher speed camera
allows the system to detect shots fired in more rapid succession. The high speed video camera
also necessitates a higher speed interface to the FPGA (Field
Programmable Gate Array), such as the CameraLink standard, which
also reduces the required number of interface connections. The
preferred implementation is to use a commercial off-the-shelf
(COTS) video camera and a CameraLink interface to an external (from
the camera), standalone FPGA processing board.
[0035] The processing means 7 (and the FPGA therein), in
conjunction with the video camera means 5, and the calibration
constants, detects the camera pixel location of the laser beam
impact on the target image (e.g., by capturing an image of the
target surface and detecting the location of the laser beam pulse
impact from the captured image), and includes a signal processor
(FPGA) and associated circuitry to provide impact location
information in the form of X and Y camera coordinates to the
processing means 7, or provide other data to the processing means 7
to enable determination of those camera coordinates.
[0036] As called for in the first embodiment herein, the processing
means comprises an automatic calibration module (i.e., a computer
software program) operable to correlate camera pixels to target
pixels of the projected target image. In particular, the automatic
calibration module correlates the target image pixels to the pixels
of the video camera means 5, to enable the processing means 7, in
conjunction with the video camera means 5, to correlate the
location of a laser beam impact on the target image as an X and Y
camera coordinate.
[0037] The resulting camera coordinates are transmitted to the
processing means 7 for translation to coordinates within the
computer system's scaled target space, to determine the laser beam
impact location in target-pixel X and Y coordinates, as described
below.
[0038] In practice, the processing means first generates a
projected target image onto a "target" surface (such as a screen,
but other surfaces, such as walls, may be used) that allows the
video projector to be set up in zoom, focus, and keystone
adjustment. The preferred projected target image, in terms of ease
of camera setup, has short line features around the periphery of
the projected image, and inside that target image is displayed
whatever the video camera sees. The operator can command the
processing means (e.g., a PC, which sends commands to the camera
through the FPGA board) to set the video image brighter or darker,
as needed for best viewing by the operator. This allows the video
camera to be set up in alignment, zoom, and focus.
[0039] Then, the calibration procedure is begun with the projector
and camera in focus, and the projected image framed in the camera's
view acceptably well to the eye. When the video image adjustments
are complete, the operator commands the processing means (i.e., the
computer system 7) to execute the automatic calibration module 13
to correlate the video camera pixels with the projected target
image pixels.
[0040] The steps in the automatic calibration process are as
follows. First, a white screen image is projected upon the surface,
so as to project a raster image. In particular, a white screen is
displayed with all pixels set to color white and at maximum
intensity. Then, the image is read by the camera 5. Subsequently,
the camera exposure is automatically adjusted by commands to the
camera from the PC until no pixel read by camera 5 is at
saturation.
[0041] Subsequently, a black image is projected by projector 3 upon
the surface, for a plurality of successive frames. Then, multiple
successive frames of the image are read by the processing means and
a background light level value is saved for each camera 5 pixel
using an average from the multiple frames that were read. The
background light level values for each pixel are saved in a data
structure such as a matrix or two dimensional array such that the
two-coordinate address for each array element is identical to the
two-coordinate address for a pixel in the image read by camera 5.
Each array element includes a background light level value
calculated for the corresponding pixel read by camera 5.
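As a rough illustration, the exposure-adjustment and background-capture steps described above might look like the following Python sketch. The camera object and its set_exposure/grab_frame methods are hypothetical stand-ins for whatever camera interface is used; the 8-bit saturation limit, step factor, and frame count are assumptions, not values taken from the patent.

    import numpy as np

    def auto_set_exposure(camera, start_us=10000, step=0.8, max_iters=20):
        """Lower the exposure until no pixel of the white-screen image is
        saturated (paragraph [0040]). Assumes an 8-bit camera."""
        exposure = start_us
        for _ in range(max_iters):
            camera.set_exposure(exposure)      # hypothetical camera API
            frame = camera.grab_frame()        # 2-D array of pixel values
            if frame.max() < 255:              # no pixel at saturation
                return exposure
            exposure *= step                   # darken and retry
        raise RuntimeError("could not avoid saturation")

    def capture_background(camera, n_frames=8):
        """Average several frames of the projected black image to obtain a
        per-pixel background light level (paragraph [0041])."""
        frames = [camera.grab_frame().astype(np.float64) for _ in range(n_frames)]
        return np.mean(frames, axis=0)         # 2-D array, one value per pixel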
[0042] Then, projector 3 projects a pattern of white horizontal
calibration lines upon the surface, with the center line missing.
Then, the image is read by the processing means and the previously
saved background image is subtracted to yield a differential image
that contains only the calibration lines. The differential image of
the horizontal calibration raster values for each pixel may be
stored in a two-dimensional array such that the two-coordinate
address for each array element is identical to the two-coordinate
address for a pixel in the image read by camera 5. Each array
element includes a horizontal calibration raster value calculated
for the corresponding pixel read by camera 5.
[0043] Then, projector 3 projects a pattern of equally spaced white
vertical calibration lines upon the surface, with center line
missing. Then, the image is read by the processing means and the
previously saved background image is subtracted to yield a
differential image that contains only the calibration lines.
[0044] The differential image of the vertical calibration raster
values for each pixel may be stored in a two-dimensional array such
that the two-coordinate address for each array element is identical
to the two-coordinate address for a pixel in the image read by
camera 5. Each array element includes a vertical calibration raster
value calculated for the corresponding pixel read by camera 5.
[0045] The computer 7 scans the differential calibration line
rasters and first finds the identity of each calibration line from
the missing line near the center of the calibration rasters. The
calibration lines have both width and noise as received from the
camera, and the camera pixels that form the calibration lines are
composed of analog values that contain random noise components. The
first step in using the calibration lines is to determine the exact
center line of each calibration line. A line centroid calculation
is used to determine the exact center of each calibration line and
to remove noise, thus providing a new calibration line to be used
when performing a least squares fit for each calibration line.
[0046] A line centroid calculation is performed for each camera
pixel along the length of each calibration line by selecting a
reference point a couple pixels away from the line and then, moving
toward and across the line to its far side, multiplying the pixel
value of each pixel encountered by the pixel distance from the
reference point, summing all the products, dividing that sum by the
total of all pixels encountered, and adding the quotient to the
pixel position of the reference pixel, thus arriving at the exact
fractional pixel value of the center of the line width at one pixel
position along the length of the calibration line.
[0047] The centroid process is repeated until centroid fractional
pixel values have been determined for all pixels along the length
of the calibration line. The locus of centroids now forms a new
calibration line that runs down the exact center of the original
calibration line that was composed of many camera pixels. A second
order least-squares fit of the new calibration line yields an
accurate correlation of camera pixels to target pixels along that
calibration line, and the original calibration line composed of
many analog pixel values has been reduced to three coefficients for
a second order line.
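The line-centroid and least-squares steps of paragraphs [0046] and [0047] can be sketched in a few lines of Python. Weighting each pixel by its absolute row index is mathematically equivalent to the reference-point formulation described above; the array orientation, the band width, and the use of numpy.polyfit are assumptions.

    import numpy as np

    def line_centroids(diff_image, row_ref, band=5):
        """For each column of the background-subtracted differential image,
        compute the fractional-row centroid of a roughly horizontal
        calibration line centered near row_ref (paragraph [0046])."""
        rows = np.arange(row_ref - band, row_ref + band + 1)
        strip = diff_image[rows, :]                     # band straddling the line
        weighted = (strip * rows[:, None]).sum(axis=0)  # weight pixels by row
        return weighted / strip.sum(axis=0)             # fractional row per column

    def fit_second_order(centroids):
        """Reduce the locus of centroids to the three coefficients of a
        second-order line, row = a*x**2 + b*x + c (paragraph [0047])."""
        x = np.arange(len(centroids))
        return np.polyfit(x, centroids, 2)              # array [a, b, c]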
[0048] Finally, a data map is generated for mapping a pixel from an
image read from the surface (camera pixel) to a pixel in an image
projected onto the surface (target pixel), wherein the data map is
based on the three coefficients that describe each vertical and
horizontal line.
[0049] The automatic calibration method of the present invention
yields superior accuracy because it involves thousands of pixels
per calibration line to determine the exact positions of the
displayed calibration lines. The line centroid calculations use
pixels adjacent to both sides of the line to obtain the maximum use
of available line position data. One set of three second-order line
coefficients are determined from a least-squares fit for each
calibration line that was projected on the screen. By nature, a
least-squares fit for the second order equations of the calibration
lines virtually ignores single pixel errors along the calibration
lines.
[0050] The automatic calibration procedure takes less than 3
seconds and yields sub-pixel accurate correlations between
camera pixels and projected target pixels.
[0051] For example, a projector projecting 1024 by 768 pixels is
used to project the target image, with some rows of projected
pixels off-screen (i.e., off the target surface), so that some of
the target image is projected both above the top and below the
bottom of the screen. A video camera is utilized, having a recording
capability of 640 by 480 pixels. Camera exposure is controlled by
setting the exposure time per frame. Note that more calibration
lines could be implemented, but testing proved that the accuracy of
the correlation is already at the sub-pixel level, so additional
lines are not needed.
[0052] Upon completion of the automatic calibration process
described above, the shot detection and correlation module (target
mode) may be entered. In the "target mode", the computer 7 no
longer needs to see the camera raster. The computer 7 commands the
FPGA to enter the "target mode" to start looking for laser pulses
in each camera frame. In target mode, the camera exposure is set
much lower than in the calibration mode described above, such that
the video camera sees little to none of the projected images on the
screen. Therefore, the laser pulse energy level must exceed the
maximum energy level of the projected target image, or a long-pass
filter can be placed over the camera lens to block a portion of the
visible projected target image while allowing the laser pulse to
pass through to the camera's imager with little attenuation.
[0053] The FPGA analyzes each camera raster image, looking for any
signal level that exceeds a certain threshold (which happens when a
laser pulse is present in a video frame). The FPGA then determines
the exact centroid of the laser pulse in camera pixels, using all
pixels in a large "Window Of Interest" (WOI) of the camera
imager.
[0054] The centroid calculations are performed as follows. For
example, suppose the chart below contains a set of 8×8 pixel
values of a laser pulse on a projected target image.
      47  28  29  31  34  30  25  52
      31  87  42  42  46  46  98  33
      37  45 214 254 254 254  44  37
      29  45 151 254 254 122  45  34
      40  47 165 254 254 143  54  36
      27  58 167 123 137 230  52  31
      40  55  34  39  42  34  69  39
      32  27  35  31  36  34  28  36
[0055] First, in order to determine the X centroid value, take the
sum of column one, and multiply it by its position in the matrix,
which is one:
1*(47+31+37+29+40+27+40+32)=283
[0056] Next, take the sum of column two, and multiply it by its
position in the matrix, which is two:
2*(28+87+45+45+47+58+55+27)=784
[0057] Next, take the sum of column three, and multiply it by its
position in the matrix, which is three:
3*(29+42+214+151+165+167+34+35)=2511
[0058] Continue this process until all eight columns have been
accounted for, and sum up the resulting values:
283+784+2511+4112+5285+5358+2905+2384=23622
[0059] Now, sum up the entire matrix, which is 5203, and the X
centroid value is calculated as
23622/5203=4.5401, a sub-pixel accurate result.
[0060] This X centroid value is the position within the matrix
itself. In order to determine the overall X centroid value, the
relative position of the matrix, as shown in the chart above,
within the video frame should be accounted for.
[0061] The Y centroid value is computed in the same way, except
summing across rows instead of columns, with the result equal to
4.3686.
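The arithmetic of the worked example can be checked directly; the snippet below simply reproduces it in Python, using the same 1-based positions as the text.

    import numpy as np

    woi = np.array([
        [47, 28,  29,  31,  34,  30, 25, 52],
        [31, 87,  42,  42,  46,  46, 98, 33],
        [37, 45, 214, 254, 254, 254, 44, 37],
        [29, 45, 151, 254, 254, 122, 45, 34],
        [40, 47, 165, 254, 254, 143, 54, 36],
        [27, 58, 167, 123, 137, 230, 52, 31],
        [40, 55,  34,  39,  42,  34, 69, 39],
        [32, 27,  35,  31,  36,  34, 28, 36],
    ])

    pos = np.arange(1, 9)                                   # 1-based positions
    x_centroid = (woi.sum(axis=0) * pos).sum() / woi.sum()  # 23622 / 5203
    y_centroid = (woi.sum(axis=1) * pos).sum() / woi.sum()  # 22730 / 5203
    print(round(x_centroid, 4), round(y_centroid, 4))       # 4.5401 4.3686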
[0062] The preferred embodiment WOI is 16×16 camera pixels,
such that the entire laser pulse is contained inside the WOI. The
FPGA processes the WOI and calculates the camera pixel coordinates
of the laser pulse to sub-camera-pixel accuracy, and forwards the
resulting laser pulse position in camera coordinates to the
computer 7. The computer 7 converts the camera pixel coordinates
into projected target pixel coordinates using the second order line
coefficients that were calculated in the automatic calibration
procedure. The target coordinates are used to determine where the
shot struck the target images.
[0063] In a preferred example, a Xilinx® Virtex-4 FPGA (Field
Programmable Gate Array) receives the camera image through a
standard CameraLink interface. The FPGA analyzes each camera frame
to determine if a laser pulse has occurred by searching for a
camera pixel whose value exceeds an adjustable threshold,
hereinafter referred to as a trigger-pixel. When found, a Window Of
Interest (WOI) is framed around that point in order to capture
information from the entire laser pulse.
[0064] The preferred WOI is 16×16 pixels in size for a
640×480 pixel camera, with the top-left corner designated at
coordinates (1,1), the trigger-pixel designated at coordinates
(8,8), and the bottom-right corner designated at coordinates
(16,16). A circular buffer of video lines is maintained in FPGA
memory so that once the trigger-pixel has been located, the first
6×16 pixels of the WOI are retrieved from memory.
[0065] The remaining ten lines of the WOI are then filled from the
live video stream, relative to the location of the trigger-pixel.
This method accommodates various shapes of focused laser pulses,
while maintaining an ambient-light margin around the WOI edge.
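A software sketch of this circular-buffer scheme is given below, purely for illustration; the patent implements it in FPGA memory, and the buffer depth and slicing shown here are assumptions.

    from collections import deque

    class LineBuffer:
        """Sketch of the circular buffer of video lines used to assemble
        the 16x16 WOI around a trigger-pixel (paragraphs [0064]-[0065])."""

        def __init__(self, depth=8):
            self.lines = deque(maxlen=depth)    # most recent camera lines

        def push(self, line):
            self.lines.append(line)             # called once per video line

        def stored_woi_rows(self, trigger_col, n_rows=6, woi_width=16):
            """Return the n_rows x woi_width block above the trigger line;
            the remaining ten WOI lines come from the live video stream."""
            left = trigger_col - 8              # trigger-pixel is WOI column 8
            recent = list(self.lines)[-n_rows:]
            return [line[left:left + woi_width] for line in recent]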
[0066] Once the WOI has been captured, a two-dimensional centroid
calculation is performed on the result in order to determine the
exact camera-pixel position of the laser pulse to a fraction of a
pixel in both the X and Y directions. The two-dimensional centroid
calculation is achieved by accumulating weighted sums based on
column and row positions, dividing by the sum of the WOI, and
adding offsets based on the WOI location within the camera
frame.
[0067] In particular, the two-dimensional centroid calculation
proceeds as follows: Each pixel value corresponds to the intensity
of light seen by that camera pixel. First, the ambient light-level
is accounted for by taking an average of the pixels around the edge
of the WOI (the first and last columns, and the first and last
rows), and then subtracting this result from each pixel in the WOI.
Experimentally, this has proven to increase accuracy over
processing the WOI unaltered. It is important that the laser pulse
be sized so that it does not extend into the frame edge of the WOI,
so as not to artificially raise the ambient light-level.
[0068] Subsequently, the X-direction centroid is determined by
multiplying each pixel in the WOI by its X-direction pixel number
(1 to 16), summing all 256 products, and dividing the sum by the
sum of all pixel values in the WOI. The Y-direction centroid is
determined by multiplying each pixel in the WOI by its Y-direction
pixel number (1 to 16), summing all 256 products, and dividing the
sum by the sum of all pixel values in the WOI. This process
provides camera coordinates of the laser pulse to sub-pixel
accuracy.
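Putting paragraphs [0066] through [0068] together, the WOI centroid with edge-average ambient subtraction might be sketched as follows; the (row, column) array orientation and the handling of the WOI offset within the camera frame are assumptions.

    import numpy as np

    def woi_centroid(woi, top_left):
        """Return the sub-pixel (x, y) camera position of a laser pulse in a
        16x16 WOI whose top-left camera pixel is top_left = (x0, y0)."""
        woi = woi.astype(np.float64)
        # Ambient light: average of the border pixels (first/last rows/columns).
        border = np.concatenate([woi[0, :], woi[-1, :],
                                 woi[1:-1, 0], woi[1:-1, -1]])
        woi = woi - border.mean()
        total = woi.sum()
        cols = np.arange(1, woi.shape[1] + 1)   # 1-based X pixel numbers
        rows = np.arange(1, woi.shape[0] + 1)   # 1-based Y pixel numbers
        x = (woi.sum(axis=0) * cols).sum() / total
        y = (woi.sum(axis=1) * rows).sum() / total
        x0, y0 = top_left
        return x0 + x - 1, y0 + y - 1           # offset into the camera frame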
[0069] The resulting camera pixel position is forwarded to the
processing means for implementation in the shot correlation.
[0070] In practice, the FPGA provides a laser pulse position in
camera pixels. This is converted to target pixels, using the second
order curve constants that were determined during the automatic
calibration module execution described above. In order to find the
target pixel X coordinate using the preferred method, the
processing means calculates the X positional coordinates of the
vertical lines using the previously determined calculation
constants along the row of the Y position given by the FPGA. With
these X positions, the processing means then determines the two
closest positions to the given X coordinate from the FPGA.
[0071] These two positions of the vertical lines in the camera
frame correlate to the same positions used in the target frame. The
processing means uses these line coordinates to determine the
target pixel X coordinate through linear interpolation--the target
pixel is positioned relative to the two vertical lines in the
target frame the same as the given camera pixel is positioned
relative to the two vertical lines in the camera frame. In order to
find the target pixel Y coordinate, the same steps are taken using
the horizontal lines.
[0072] Lastly, the resulting X and Y pixel coordinates are scaled
by the target resolution over the camera resolution to determine
the actual target pixel position. Note that further processing
could have been used instead of linear interpolation to yield a
slightly more accurate target pixel position of the laser pulse,
but it was found by testing that the linear interpolation readily
yields single pixel accuracy that is adequate for our immediate
needs.
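An illustrative sketch of this camera-to-target interpolation follows. The (a, b, c) coefficient triples match the second-order lines of paragraph [0047], but the function name, the assumption that target-frame line positions are stored at camera resolution (so that the final scaling step applies), and the bracketing logic are interpretations of the text rather than the patent's own code.

    import numpy as np

    def camera_to_target_x(cam_x, cam_y, v_coeffs, line_pos_target, scale_x):
        """Map a camera-pixel X coordinate to a target-pixel X coordinate
        using the fitted vertical calibration lines ([0070]-[0072]).

        v_coeffs:        one (a, b, c) per vertical line, x = a*y**2 + b*y + c
        line_pos_target: X of each line in the target frame (assumed here to
                         be expressed at camera resolution)
        scale_x:         target X resolution / camera X resolution
        """
        # X of every vertical line in the camera frame at the shot's row,
        # in left-to-right (ascending) order.
        line_x = np.array([a * cam_y**2 + b * cam_y + c for a, b, c in v_coeffs])
        i = int(np.clip(np.searchsorted(line_x, cam_x), 1, len(line_x) - 1))
        frac = (cam_x - line_x[i - 1]) / (line_x[i] - line_x[i - 1])
        target_x = (line_pos_target[i - 1]
                    + frac * (line_pos_target[i] - line_pos_target[i - 1]))
        return target_x * scale_x               # scale by target/camera ratio

    # The target Y coordinate is found the same way with the horizontal lines.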
[0073] Although specific embodiments of the present invention have
been disclosed herein, those having ordinary skill in the art will
understand that changes can be made to the specific embodiments
without departing from the spirit and scope of the invention. The
scope of the invention is not to be restricted, therefore, to the
specific embodiments. Furthermore, it is intended that the appended
claims cover any and all such applications, modifications, and
embodiments within the scope of the present invention.
* * * * *