U.S. patent application number 11/052921, for an optical muzzle blast detection and counterfire targeting system and method, was filed with the patent office on 2005-02-09 and published on 2006-02-02.
Invention is credited to Duane SR. Burchick, Mehmet C. Ertern, Eric Heidhausen, Stanley Moroz, Myron Pauli, William Seisler.
Application Number: 20060021498 / 11/052921
Family ID: 35730690
Publication Date: 2006-02-02
United States Patent Application: 20060021498
Kind Code: A1
Moroz; Stanley; et al.
February 2, 2006
Optical muzzle blast detection and counterfire targeting system and
method
Abstract
An automated system for remote detection of muzzle blasts
produced by rifles, artillery and other weapons, and similar
explosive events. The system includes an infrared camera, image
processing circuits, targeting computation circuits, displays, user
interface devices, weapon aim point measurement devices,
confirmation sensors, target designation devices and counterfire
weapons. The camera is coupled to the image processing circuits.
The image processing circuits are coupled to the targeting location
computation circuits. The aim point measurement devices are coupled
to the target computation processor. The system includes visual
target confirmation sensors which are coupled to the targeting
computation circuits.
Inventors: Moroz; Stanley; (Waldorf, MD); Pauli; Myron; (Vienna, VA); Seisler; William; (Alexandria, VA); Burchick; Duane SR.; (Washington, MD); Ertern; Mehmet C.; (Bethesda, MD); Heidhausen; Eric; (Woodbin, MD)
Correspondence Address:
NAVAL RESEARCH LABORATORY; ASSOCIATE COUNSEL (PATENTS)
CODE 1008.2
4555 OVERLOOK AVENUE, S.W.
WASHINGTON, DC 20375-5320, US
Family ID: 35730690
Appl. No.: 11/052921
Filed: February 9, 2005
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
10738074 | Dec 17, 2003 |
11052921 | Feb 9, 2005 |
Current U.S. Class: 89/41.06
Current CPC Class: F41G 3/147 20130101; G01S 3/784 20130101
Class at Publication: 089/041.06
International Class: F41G 1/32 20060101 F41G001/32
Claims
1. (canceled)
2. An apparatus comprising: a spectral filter; a temporal filter; a
spatial filter; an image processor cooperating with at least one of
said spectral filter, said temporal filter, and said spatial filter
to detect a flash event; a targeting processor cooperating with
said image processor to determine a target location based on the
flash event; and a gimbal cooperating with said targeting processor
to slew toward the target location.
3. The apparatus according to claim 2, further comprising at least
one of: a target confirmation sensor cooperating with said
targeting processor and with said gimbal; and a counterfire device
connected to said target confirmation sensor.
4. The apparatus according to claim 3, wherein said target
confirmation sensor comprises one of a pair of binoculars and a
telescope.
5. The apparatus according to claim 3, further comprising: an aim
point measurement device aligned with said target confirmation
sensor.
6. The apparatus according to claim 2, wherein said image
processor, for an image comprising a plurality of pixels, adjusts
at least one of a pedestal value and a gain value of at least a
portion of the plurality of pixels, thereby spreading a histogram
of the image.
7. The apparatus according to claim 6, wherein said image processor
comprises an exposure control.
8. The apparatus according to claim 2, wherein said spectral filter
comprises a cold filter setting corresponding to at least one of a
gunfire characteristic, an ordnance characteristic, a background
clutter characteristic, and an atmospheric characteristic.
9. The apparatus according to claim 2, further comprising at least
one of: a range finder connected to said gimbal; a magnetometer
interfacing with said targeting processor; a compass interfacing
with said targeting processor; and a global positioning satellite
transceiver interfacing with said targeting processor, wherein said
targeting processor geolocates the flash event based at least in
part from data from at least one of said range finder, said
magnetometer, said compass, and said global positioning satellite
transceiver.
10. The apparatus according to claim 2, further comprising: a
detection camera comprising a wide angle anamorphic lens
cooperating with at least one of said spectral filter, said
temporal filter, and said spatial filter.
11. The apparatus according to claim 2, further comprising: an
optical illuminator connected to said gimbal to identify the
target.
Description
BACKGROUND OF THE INVENTION
[0001] The present invention relates to (1) an optical muzzle blast
detection and counterfire targeting system for remotely detecting
the location of muzzle blasts produced by rifles, artillery and
other weapons and similar explosive events, especially sniper fire;
and (2) a system for directing counterfire weapons on to this
location.
[0002] Prior Art
[0003] Hillis U.S. Pat. No. 5,686,889 relates to an infrared sniper
detection enhancement system. According to this Hillis patent,
firing of small arms results in a muzzle flash that produces a
distinctive signature which is used in automated or machine-aided
detection with an IR (infrared) imager. The muzzle flash is intense
and abrupt in the 3 to 5 μm band. A sniper detection system
operating in the 3 to 5 μm region must deal with the potential
problem of false alarms from solar clutter. Hillis reduces the
false alarm rate of an IR based muzzle flash or bullet tracking
system (during day time) by adding a visible light (standard video)
camera. The IR and visible light video are processed using temporal
and/or spatial filtering to detect intense, brief signals like
those from a muzzle flash. The standard video camera helps detect
(and then discount) potential sources of false alarm caused by
solar clutter. If a flash is detected in both the IR and the
visible spectrum at the same time, then the flash is most
probably the result of solar clutter from a moving object.
According to Hillis, if a flash is detected only in the IR, then it
is most probably a true weapon firing event.
[0004] In Hirshberg U.S. Pat. No. 3,936,822 a round detecting
method and apparatus are disclosed for automatically detecting the
firing of weapons, such as small arms, or the like. According to
this Hirshberg patent, radiant and acoustic energy produced upon
occurrence of the firing of a weapon and emanating from the muzzle
thereof are detected at known, substantially fixed, distances
therefrom. Directionally sensitive radiant and acoustic energy
transducer means directed toward the muzzle to receive the
radiation and acoustic pressure waves therefrom may be located
adjacent each other for convenience. In any case, the distances
from the transducers to the muzzle, and the different propagation
velocities of the radiant and acoustic waves are known. The
detected radiant (e.g. infrared) and acoustic signals are used to
generate pulses, with the infrared initiated pulse being delayed
and/or extended so as to at least partially coincide with the
acoustic initiated pulse; the extension or delay time being made
substantially equal to the difference in transit times of the
radiant and acoustic signals in traveling between the weapon muzzle
and the transducers. The simultaneous occurrence of the generated
pulses is detected to provide an indication of the firing of the
weapon. With this arrangement extraneously occurring radiant and
acoustic signals detected by the transducers will not function to
produce an output from the apparatus unless the sequence is
correct and the timing thereof fortuitously matches the
above-mentioned differences in signal transit times. If desired,
the round detection information may be combined with target
miss-distance information for further processing and/or
recording.
SUMMARY OF THE INVENTION
[0005] According to the present invention, an infrared camera
stares at its field of view and generates a video signal
proportional to the intensity of light. The camera is sensitive in
the infrared spectral band where the intensity signature of the
flash to be detected minus atmospheric attenuation is maximized.
The video signal is transmitted to an image processor, where
temporal and spatial filtering are applied via digital signal processing
to detect the signature of a flash and determine the flash location
within the camera's field of view. The image processing circuits
are analog and digital electronic elements. In another aspect and
feature of the invention, the image processing circuits are coupled
to target location computation circuits and flash location
information is transmitted to the targeting location computation
circuits. The targeting computation circuit is digital electronic
circuitry with connections to the other devices in the system. The
field of view of the camera is correlated to the line of sight of
the confirmation sensor by using aim point measurement devices
which are coupled to the target computation processor. The displays
are video displays and show camera derived imagery superimposed
with detection and aiming symbology and system status reports. The
user interface devices are keypads and audible or vibrational
alarms which control the operation of the system and alert the user
to flash detections which are equated to sniper firing, for
example. In still another aspect of the invention, the weapon aim
point measurement devices include inertial measurement units,
gyroscopes, angular rate sensors, magnetometer-inclinometers, or
gimbaled shaft encoders. Visual target confirmation sensors are
binoculars or rifle scopes with associated aim point measurement
devices. Counterfire weapons contemplate rifles, machine guns,
mortars, artillery, missiles, bombs, and rockets.
OBJECTS OF THE INVENTION
[0006] The basic objective of the present invention is to provide
an automated and improved muzzle blast detector system and method
which uses multi-mode filtering to eliminate and/or minimize false
alarms.
[0007] Another object of the invention is to provide a muzzle blast
detector which accurately locates direction and range to muzzle
blast source.
[0008] Another object of the invention is to provide a sniper
detection method and apparatus which uses temporal, spatial and
spectral filtering to discriminate between actual muzzle blasts
and non-muzzle blast infrared generating events.
DESCRIPTION OF THE DRAWINGS
[0010] FIG. 1 is a general block diagram of a muzzle blast
detection system incorporating the invention,
[0011] FIG. 2 is a further block diagram of the detection system of
the invention,
[0012] FIGS. 3A and 3B are graphs of simulated event signatures and
corresponding matched filter for 60 FPS video,
[0013] FIG. 4 is a diagrammatic representation of the event
filter,
[0014] FIG. 5 illustrates a sample detection filter,
[0015] FIG. 6 is a circuit diagram of a detector with an adaptive
threshold level,
[0016] FIG. 7 is a depiction of a low pass spatial filter response
h(k,l),
[0017] FIG. 8 is a circuit diagram showing the adaptive detection
system with low pass filtered σ and high pass filtered e²,
[0018] FIG. 9 illustrates the decision filter,
[0019] FIG. 10 illustrates the overall detection and location
algorithm, and
[0020] FIG. 11 illustrates the video acquisition subsystem.
[0021] FIG. 12 is a schematic diagram of an embodiment of the
instant invention.
[0022] FIG. 13 is a flow chart of an embodiment of the instant
invention.
DETAILED DESCRIPTION OF THE INVENTION
[0023] One aspect of the invention comprises an infrared camera 10
connected to image processing circuits 11 and a video display 14
which may include an annunciator 14A to provide an immediate
audible or tactile indication of the muzzle blast event. The camera
10 stares at a field of view, and the video signal is fed to the
image processor 11. The pedestal and gain controls of the camera
are controlled by the image processor.
Detection
[0024] The image processor outputs the live infrared video to the
display. Concurrently algorithms to detect the presence of a muzzle
flash in the image are executed on the image processor. When a
muzzle flash is detected the image processor 11 overlays a symbol
on the display around the pixel location where the flash was
detected. The algorithms that detect the muzzle flash operate by
processing several frames of video data through a temporal and
spatial digital filter. The activity level at each pixel location
is adaptively tracked and the effect of background clutter is
reduced by varying the detection threshold at each pixel according
to the past activity around that pixel location. The detection
algorithms are described in more detail in the section entitled
Detection of Short Duration Transient Events Against a Cluttered
Background.
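The per-pixel adaptive thresholding described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the frame-difference temporal filter, the gain constant, and all names are assumptions.

```python
# Sketch: flag a pixel when its temporally filtered output exceeds a
# threshold that adapts to that pixel's recent activity level.
MU = 0.95  # learning rate for the per-pixel activity estimate (assumed value)

def detect_flashes(frames, gain=8.0):
    """frames: list of 2-D lists of intensities. Returns (frame, row, col) hits."""
    rows, cols = len(frames[0]), len(frames[0][0])
    activity = [[1e-6] * cols for _ in range(rows)]  # running energy estimate
    prev = frames[0]
    hits = []
    for n, frame in enumerate(frames[1:], start=1):
        for i in range(rows):
            for j in range(cols):
                e = frame[i][j] - prev[i][j]       # crude temporal filter
                if e * e > gain * activity[i][j]:  # per-pixel adaptive threshold
                    hits.append((n, i, j))
                activity[i][j] = MU * activity[i][j] + (1 - MU) * e * e
        prev = frame
    return hits
```

Quiet pixels keep a low threshold, so a brief flash stands out, while pixels sitting in active clutter raise their own thresholds over time.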
Automatic Pedestal and Gain
[0025] An algorithm is used for automatic adjustment of the
pedestal and gain values of the imaging system to achieve high
dynamic range. Additional user control over these settings allows
certain regions of the image to be dark or saturated. This
algorithm is described in the section entitled Automatic Pedestal
and Gain Adjustment Algorithm.
Targeting
[0026] The coordinates of the detected muzzle flash are fed to
targeting circuitry 12 to guide a visual target confirmation sensor
13, such as binoculars or a telescope, and a counterfire weapon,
such as a rifle, onto the target.
Weapon Aim Point to Camera Coordinate Calibration
[0027] Given weapon aim point measurement readings 15, the
corresponding image coordinates in the camera field of view are
derived. The aim point measurement devices generate an azimuth and
elevation reading. The calibration procedure includes aiming the
weapon at three known calibration points. These points are marked
by the user on the display 14 using a cursor. The image coordinates
and the aim point measurements for these points are used to
generate a mathematical transformation so that, given any aim point
measurement, its corresponding image location can be calculated.
Symbology denoting the current weapon aim point is displayed on
screen 14, and the difference in target screen locations is used to
guide the return fire shooter onto the target.
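The three-point calibration can be illustrated with an affine fit: three (azimuth, elevation) readings paired with their marked image coordinates determine a map from aim-point readings to pixels. The affine model and all names are illustrative assumptions, not the patent's stated transformation.

```python
# Sketch: derive an affine (az, el) -> (x, y) map from three point pairs.
def solve3(M, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(M)
    return [det([[b[r] if k == c else M[r][k] for k in range(3)]
                 for r in range(3)]) / D for c in range(3)]

def calibrate(aim_points, image_points):
    """aim_points: three (az, el); image_points: three (x, y). Returns a map."""
    M = [[az, el, 1.0] for az, el in aim_points]
    ax = solve3(M, [p[0] for p in image_points])  # x = ax0*az + ax1*el + ax2
    ay = solve3(M, [p[1] for p in image_points])  # y = ay0*az + ay1*el + ay2
    def to_pixels(az, el):
        return (ax[0] * az + ax[1] * el + ax[2],
                ay[0] * az + ay[1] * el + ay[2])
    return to_pixels
```

Three non-collinear calibration points are exactly enough to fix the six affine coefficients; any later aim-point reading can then be mapped to a screen location.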
Visual Confirmation
[0028] An aim point measuring device 15 is aligned with the
confirmation sensor. This device provides the azimuth and elevation
(line of sight) of the sensor. The aim point measurement device 15
is correlated to the camera optical axis and orientation using a
multipoint calibration procedure, thereby relating azimuth and
elevation readings to camera pixel locations. The targeting
processor calculates the difference between muzzle flash detection
location and the instantaneous pointing location and displays
guidance symbology to direct the confirmation sensor to the
target.
Confirmation Sensor Aim Point to Camera Coordinate Calibration
[0029] The line of sight of the confirmation sensor is calibrated
to camera coordinates using the three-point calibration algorithm
used for calibrating the weapon aim points to camera coordinates.
Either the same or different calibration points can be used for
weapon to camera and confirmation sensor to camera calibration.
Symbology denoting the current confirmation sensor line of sight is
displayed on screen, and the difference in target screen locations
is used to guide the observer onto the target.
Calibration Using Gimbaled Telescope with Encoders
[0030] A telescope, on a gimbal with shaft encoders, mounted on the
camera is used to determine the location of the calibration points.
The user points the telescope at a calibration point. The telescope
gimbal is aligned with the camera, and the image coordinates of the
telescope line of sight are known. By selecting three calibration
points and aiming the weapon or confirmation sensor at these points
the transformation between the aim point measurement devices and
camera coordinates can be calculated.
User Interface
[0031] The user interface includes a keyboard KB and cursor
controlled mechanism M to control the operation of the system, a
video display screen 14, and a detection alarm 14A. The user is
alerted to a detection through an audible alarm, a silent tactile
vibration, or another type of silent alarm device which is
triggered by the targeting processor. The user is then guided
through symbology overlaid on the display to move the confirmation
sensor or weapon until the line of sight is aligned with the detected
flash.
Ring Display
[0032] A peripheral vision aiming device is also used to guide a
confirmation sensor or weapon to the target. The aiming device
consists of a ring of individual lights controlled by the targeting
processor. The ring may be placed on the front of a rifle scope, in
line with the rifle's hard sights, or other locations in the
peripheral view of the operator. When a detection is made, the
targeting processor activates one or more lights to indicate the
direction and distance the confirmation sensor/weapon must be moved
to achieve alignment with the flash. The number of activated lights
increases as the degree of alignment increases. All lights are
activated when alignment is achieved.
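The ring-display guidance logic might be sketched as below: the light nearest the required direction of motion is lit, a wider arc lights as the aim error shrinks, and the full ring lights at alignment. The thresholds, light count, and function names are all illustrative assumptions.

```python
# Sketch: choose which ring lights to activate for a given aim error.
import math

def ring_lights(d_az, d_el, n_lights=8, full_align=0.5, max_err=20.0):
    """Return the set of lit light indices for an aim error (degrees)."""
    err = math.hypot(d_az, d_el)
    if err <= full_align:
        return set(range(n_lights))           # aligned: all lights on
    direction = math.atan2(d_el, d_az)        # which way the sensor must move
    center = round(direction / (2 * math.pi / n_lights)) % n_lights
    frac = max(0.0, 1.0 - err / max_err)      # closer -> wider lit arc
    half = int(frac * (n_lights // 2))
    return {(center + k) % n_lights for k in range(-half, half + 1)}
```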
[0033] The following section describes the adaptive algorithm for
detection of rapid transient events where a noisy background is
present. The theoretical background and a sample implementation are
given.
[0034] It is desired to detect and locate transient events against
a noisy background in real time. The detection and location of such
an event requires a priori knowledge about the spectral, spatial and
temporal signatures of typical events. It is also desirable to have
information about the background conditions in which the detection
system is expected to operate. This information consists of the
spectral, spatial and temporal characteristics of the
background.
[0035] If the statistics of the four-dimensional signal which is
specified as the signature of a typical event (spectral, spatial
and temporal axes) are known, and if the same statistics for
various backgrounds are measured, it becomes a simple matter of
applying standard stochastic analysis methods (matched filtering)
in order to solve the problem. However, this information is not
readily available and there are several other problems which make
this approach unfeasible.
[0036] The first difficulty is that the instrumentation to
simultaneously extract all components of signals that have
spectral, spatial and temporal components is not readily available.
Equipment is available to acquire simultaneously either the
spectral and temporal (spectrometry), or the spatial and temporal
(video) components from a scene. It is also possible, through the
use of several imagers to acquire multispectral image sets,
essentially sampling the scene at several spectral bands.
[0037] Operating at a suitably chosen fixed spectral band, the
intensity variation as a function of time was the easiest component
of the event signature to detect.
Detection Methods Which Deal Only with Spatially and Temporally
Varying Signal Components at a Fixed Spectral Band
[0038] The concept of matched filtering can be used if the
statistics of the events to be detected and backgrounds are
available. However, many factors, such as humidity, ambient
temperature, range, sun angle, etc. influence these statistics. It
is not practicable to gather data for all combinations of rapid
transient events and background scenes. Thus, for the detection
algorithm to reliably work against different background
environments, it has to adapt to these environments.
The Detection System
[0039] The video signal from the camera 10, under control of
controller 18, is digitized 16 and supplied to an image processing
system 17 and continuously stored in memory M at frame rates (FIG.
2). In this invention, the image processor 17 is adapted to operate
on the latest and several of the most recent frames captured.
Although in this case the processor operates on progressively
scanned 256*256 pixel frames at a rate of 60 frames per second, the
algorithm can be used at other resolutions and frame rates.
[0040] The camera 10 being used is a CCD imager, which integrates
the light falling on each pixel until that pixel's charge is read
out. The read out time is typically much less than the typical
transient event duration. This means that the imager effectively
has a 100% duty cycle, with no dead times between frames for each
pixel. The camera pedestal and gain settings are set to fully
utilize the dynamic range of the image processing system. The
algorithms for this are described later herein.
[0041] The first stage of the detection algorithm includes a
temporal Event Filter 20 which is tuned to detect rapid transient
signatures, followed by a spatio-temporal Detection Filter designed
to reject background clutter. The output of this first stage is a
list of candidate event times and locations. These coordinates form
the input to a logical processing stage which then estimates the
probability of the candidate event actually being due to a single
uncorrelated rapid transient.
The Event Filter 20
[0042] The event filter 20 is a finite impulse response matched
filter which operates on each pixel in the sequence. The impulse
response of the filter is derived by estimating the signature of
the typical transition event.
[0043] The events to be detected typically have much shorter
duration than the frame repetition rate. Therefore, most of the
time the rapid transients occur wholly inside one frame. However,
it is possible to have a single event overlapping two adjacent
frames. The time of occurrence of a transient event and the frame
times are uncorrelated processes, and the overlap can be modeled by
considering the event time to be uniformly distributed over the
frame interval.
[0044] A simple model of a rapid transient signature consists of a
pair of exponentials, one on the rising edge and another on the
falling edge of the event. FIG. 3 shows the case where a rising
time constant τ(r) of 0.125 ms and a falling time constant
τ(f) of 0.5 ms are chosen. This waveform is convolved with the
rectangular window of the frame, and integrating the result over
successive frame periods reveals the optimal matched filter
coefficients.
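The derivation above can be sketched numerically: a double-exponential flash model is integrated over successive 60 fps frame windows, and the frame-integrated values serve as filter taps. The exact model form, the zero-mean normalization, and the step count are assumptions for illustration.

```python
# Sketch: derive event-filter taps by integrating a modeled flash
# signature over successive frame periods.
import math

TAU_R, TAU_F = 0.125e-3, 0.5e-3   # rise/fall time constants, seconds
FRAME = 1.0 / 60.0                # frame period, seconds

def signature(t):
    """Assumed double-exponential flash model starting at t = 0."""
    if t < 0:
        return 0.0
    return (1.0 - math.exp(-t / TAU_R)) * math.exp(-t / TAU_F)

def frame_taps(n_taps=5, steps=2000):
    """Integrate the signature over successive frame windows (rectangle rule)."""
    taps = []
    dt = FRAME / steps
    for k in range(n_taps):
        taps.append(sum(signature(k * FRAME + i * dt) * dt for i in range(steps)))
    mean = sum(taps) / len(taps)
    return [t - mean for t in taps]   # zero-mean so flat scenes give no output
```

Because the millisecond-scale flash fits almost entirely inside one 16.7 ms frame, the first tap dominates and the remaining taps mostly subtract the local background level.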
[0045] The event filter then is a tapped delay line finite impulse
response filter and its output, the error signal, can be written as
the simple convolution:

[0046] e(i,j,n) = Σ(k) h(k)·x(i,j,n−k) (1)

where x(i,j,n) is the intensity of pixel (i,j) in frame n.
[0047] Since h(k), the impulse response of the Event Filter, is
indexed only to the frame number, this filter is purely temporal
and has no spatial effects.
The Detection Filter
[0048] The simplest detection scheme for a transient event consists
of an event filter 20 followed by a threshold device (comparator
21, FIG. 5). This system works reasonably well in cases where the
background scenery is not noisy and where false alarm rejection
requirements are not demanding.
[0049] The simple detector approach is also useful in serving as a
baseline to compare the performance of more complicated algorithms.
A figure of merit can be devised for other algorithms by comparing
their detection performance to the simple detector.
[0050] In order to reduce the false alarm rate additional
processing is performed. The approach taken here is to use adaptive
filtering methods to vary the decision threshold spatially, so that
image areas of high activity have higher and areas of less activity
have lower threshold levels. Thus, the threshold level becomes a
varying surface across the image.
[0051] A good estimate of the activity level for each pixel in the
image is the mean square of the signal e(i,j,n), the event filter
output. Since this signal is already generated, its calculation
imposes no additional computational burden. The calculation of the
mean square however still needs to be performed.
[0052] Instead of the actual mean square computation to estimate
the energy of the intensity signal at each pixel, a recursive
estimate is used. Thus we define:
σ(i,j,n) = μ·σ(i,j,n−1) + (1−μ)·e²(i,j,n) (2)

where μ, the learning rate, is a constant between 0 and 1. A typical
value for μ is 0.95. The best choice for the learning rate will be
determined depending on the stationarity of the background scene
(in both the statistical and the physical senses).
[0053] The recursive formulation for σ(i,j,n) makes it easy to
implement. The infinite impulse response filter 32 that implements
this has a low pass transfer function, and thus tends to "average
out" the activity at each pixel location over its recent past.
[0054] To simplify implementation, it is possible to remove the
square-root operation 33 on the threshold surface, and compare the
estimated variance of the signal e to the square of its
instantaneous value. Thus, the output of the comparator essentially
becomes a measure of the difference of the instantaneous energy in
the signal to the estimated average energy at that pixel.
[0055] Some of the physical phenomena that cause false alarms are
edge effects, thermal effects such as convection, camera vibration,
and moving objects. A significant portion of these can be
eliminated by performing a spatial low pass operation on the
variance estimate signal σ. This is to spread the threshold raising
effect of high energy pixels to their neighbors. However, a pure
low pass operation would also lower the σ values at the peaks of
the curves. To offset this a "rubber-sheeting" low pass filter is
used. This is mathematically analogous to laying a sheet of elastic
material over the threshold surface. The threshold surface thus
generated is calculated by:

θ(i,j,n) = max[σ(i,j,n), σ(LP)(i,j,n)] (3)

where σ(LP) is the low pass filtered estimated variance, calculated
by the convolution:

[0056] σ(LP)(i,j,n) = Σ(k) Σ(l) h(k,l)·σ(i−k, j−l, n) (4)
[0057] The low pass spatial filter 45 coefficients h(k,l) are
chosen depending on the sharpness desired. A set of values which
gives good results, generated using a normalized sinc function, is
plotted in FIG. 7.
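The "rubber-sheeting" idea can be sketched directly: the threshold surface is the pointwise maximum of the activity estimate and its spatial low-pass, so peaks keep their full height while spreading influence to neighbors. The 3x3 box kernel below is an illustrative stand-in for the sinc-derived coefficients.

```python
# Sketch: threshold surface theta = max(sigma, lowpass(sigma)).
def rubber_sheet(sigma):
    """sigma: 2-D list of activity estimates. Returns the threshold surface."""
    rows, cols = len(sigma), len(sigma[0])
    theta = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0.0
            for di in (-1, 0, 1):        # 3x3 neighborhood, zero padding
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        acc += sigma[ii][jj]
            lp = acc / 9.0
            theta[i][j] = max(sigma[i][j], lp)   # peaks are never lowered
    return theta
```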
[0058] A possible enhancement to the detection algorithm is the
inclusion of a spatial high pass filter 42 to reject image events
which occupy large areas. Depending on the application (i.e.
whether rapid transient events which occupy relatively large areas
are desired to be detected or not), such a filter may reduce the
system's susceptibility to false alarms due to events which are not
of interest. The block diagram of the detector incorporating these
modifications is shown in FIG. 8.
[0059] It should also be noted that in the system shown the
comparator 43 output is no longer a binary decision but a
difference signal. While it is possible to use the comparator's
binary output as a final decision stage, it is convenient to
further process the output of the detection filter.
The Decision Filter (FIG. 9)
[0060] For each pixel, a value for the detector signal det(i,j,n)
is generated at the frame rate. Thus, the data rate of the detector
output is comparable to the raw image data rate. The detector
signal is a measure of the likelihood that an event has occurred at
the corresponding pixel. This information has to be reduced to a
simple indication of the presence and location of the event. The
decision filter performs the required operation.
[0061] The detector output can be filtered in several ways. The
obvious and simple method is to compare it with a set threshold
value. Another way is to first localize the possible location of
the one most likely event in each frame, and then to decide whether
it actually is present or not. This approach is simple to implement
and results in significant reduction in the amount of data to be
processed. Its limitation is that it does not allow the detection
of multiple transient events occurring within a single frame.
[0062] The location of a single candidate transient event per frame
is done in locator 50 by finding the pixel location with the
maximum detector output. If this signal exceeds a detector
threshold T, then a "Transient Detected In Frame" indication is
output, otherwise the output indicates "No Transient Detected In
Frame".
[0063] The decision filter 51 operations are as follows:

d(n) = max over (i,j) of det(i,j,n) (5)

T(n) = α·T(n−1) + (1−α)·d(n) (6)
[0064] This operation, similar to the calculation of σ, is a
recursive implementation of an adaptive threshold. The learning
rate α (again chosen between 0 and 1 and typically about 0.9)
determines the speed with which the system adapts to changes in the
background levels.
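The decision filter can be sketched as follows: per frame, locate the single most likely event, then compare its detector value against a recursively adapted threshold in the spirit of equation (6). The margin factor and seeding of the threshold from the first frame are assumptions.

```python
# Sketch: one-candidate-per-frame decision with an adaptive threshold.
ALPHA = 0.9   # threshold learning rate (from the text)
MARGIN = 3.0  # how far above the running level d(n) must rise (assumed)

def decide(det_frames):
    """det_frames: list of 2-D detector outputs. Returns per-frame decisions."""
    T = None
    results = []
    for frame in det_frames:
        d, loc = max((v, (i, j))
                     for i, row in enumerate(frame)
                     for j, v in enumerate(row))
        if T is None:
            results.append((False, None))      # first frame seeds the threshold
            T = d
            continue
        detected = d > MARGIN * T
        results.append((detected, loc if detected else None))
        T = ALPHA * T + (1 - ALPHA) * d        # adapt to background level
    return results
```

This reduces the full detector image to a single "transient detected / not detected" indication plus a location, at the cost of missing multiple events in one frame, as noted above.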
[0065] The decision filter block diagram is shown in FIG. 9.
[0066] The overall block diagram of the adaptive detection
algorithm is shown in FIG. 10.
[0067] Using the approach presented here, it is possible to
determine the presence or absence of short duration transient
events. The invention is especially useful when the background
scene is cluttered and contains elements which have statistical
properties similar to those of the events being searched for. This
is done by utilizing as much of the available knowledge about the
spectral, spatial, and temporal characteristics of the events to be
detected.
Automatic Pedestal and Gain Adjustment Algorithm
[0068] The detection of a rapid transient event in a noisy
background is significantly degraded if the full dynamic range of
the imaging system is not used. This section presents a simple
algorithm for automatic adjustment of the pedestal and gain values of the
imaging system to achieve high dynamic range. In some situations it
is desired to have additional control over exposure to allow
certain regions of the image to be dark or saturated. A version of
the algorithm with exposure control is given below.
Automatic Pedestal and Gain Adjustment Algorithms
[0069] The pedestal and gain adjustment algorithm presented here
assumes an 8 bit imaging system is being used. The response is
assumed to be roughly linear. However, the algorithm will work well
with nonlinear imagers as well. The image acquisition subsystem
block diagram is shown in FIG. 11.
[0070] Two versions of the algorithm are presented here. The
simpler first version automatically sets the pedestal and gain
values to equalize the image so that the pixel values span the
full range of the imaging system. The coefficients of the system
have to be adjusted so that the response is not oscillatory (i.e.
their values have to be chosen so that the closed loop transfer
function has magnitude less than unity). In the slightly more
complex second version, the user is given an additional control to
allow under- or over-exposure as desired.
[0071] The following procedure summarizes the detection system
algorithm without exposure control:
[0072] Grab one frame of data. Within a region of interest
(typically the whole picture minus a frame around the edges) count
the number of saturated pixels (n(sat)) and the number at full
darkness (n(zer)). Measure the value of the darkest pixel (botgap)
and the distance between the brightest pixel and 255 (topgap).
Change the pedestal and gain settings to spread the histogram of
the image. Repeat for the next frame.
The dynamic pedestal and gain equations are:

Δp = p(1)·n(zer) − p(2)·botgap
Δg = −g(1)·n(sat) + g(2)·topgap − k·Δp
pedestal = pedestal + Δp
gain = gain + Δg
[0073] Optimal values for the tracking parameters p(1), p(2),
g(1), g(2) and k depend on the camera response. However, since
feedback is used, this effectively "linearizes" the control loop,
and depending on the temporal response desired, suitable values can
be derived empirically.
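The pedestal/gain feedback loop described above can be simulated against a toy linear camera model (out = gain·scene + pedestal, clipped to 0..255). The coefficient values below are illustrative choices found empirically to give a stable loop, not the patent's values.

```python
# Sketch: iterate the pedestal/gain updates until the output histogram
# spreads across the full 0..255 range.
def auto_pedestal_gain(scene, pedestal=0.0, gain=100.0, iters=300,
                       p1=0.5, p2=0.1, g1=0.5, g2=0.3, k=1.0):
    """scene: list of scene intensities. Returns converged (pedestal, gain)."""
    for _ in range(iters):
        raw = [gain * v + pedestal for v in scene]
        out = [min(max(r, 0.0), 255.0) for r in raw]
        n_zer = sum(1 for r in raw if r < 0.0)     # pixels clipped to black
        n_sat = sum(1 for r in raw if r > 255.0)   # pixels clipped to white
        botgap = min(out)                          # darkest pixel above 0
        topgap = 255.0 - max(out)                  # headroom below 255
        dp = p1 * n_zer - p2 * botgap
        dg = -g1 * n_sat + g2 * topgap - k * dp
        pedestal += dp
        gain += dg
    return pedestal, gain
```

As the text notes, the feedback effectively linearizes the loop: suitable coefficients make both botgap and topgap decay toward zero without driving pixels into saturation.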
[0074] The following describes the detection algorithm with
exposure control.
[0075] This version is slightly more complex in that it adds an
exposure control input to the original algorithm. The variable
exposure determines the amount of under- or overexposure desired.
This operates in a manner analogous to the exposure control found
in automatic cameras. When exposure is set at a positive value, the
pedestal and gain dynamics are set to allow a number of pixels to
stay saturated (overexposure). Similarly, a negative exposure
control allows a number of pixels to stay at zero (underexposure).
The dynamic equations are:
n(up) = n(zer) + min(exposure, 0)
n(down) = n(sat) - max(exposure, 0)
Δp = p(1)n(up) - p(2)botgap
Δg = -g(1)n(down) + g(2)topgap - kΔp
pedestal = pedestal + Δp
gain = gain + Δg
[0076] Thus, with a positive exposure setting, the only effect is
at the top end of the digitization range, so that n(up) is not
altered (it stays equal to n(zer)) but n(down) is less than
n(sat). This means that a number of pixels (equal to the magnitude
of exposure) are allowed to stay saturated. Conversely, with a
negative exposure, n(down) is unaltered but n(up) is allowed to go
to a negative number, signifying that a number of pixels are
allowed to stay dark.
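A minimal sketch of the exposure-control variant, operating on per-frame statistics gathered as before (the function name and coefficient values are again illustrative placeholders):

```python
def update_with_exposure(n_sat, n_zer, botgap, topgap,
                         pedestal, gain, exposure,
                         p1=0.01, p2=0.1, g1=0.01, g2=0.1, k=0.5):
    """Pedestal/gain update with the exposure-control input.

    A positive `exposure` reduces n(down) so that pixels may remain
    saturated (overexposure); a negative one lowers n(up) so that
    pixels may remain dark (underexposure). Coefficient defaults are
    illustrative placeholders.
    """
    n_up = n_zer + min(exposure, 0)    # negative exposure: allow dark pixels
    n_down = n_sat - max(exposure, 0)  # positive exposure: allow saturation

    dp = p1 * n_up - p2 * botgap           # Δp
    dg = -g1 * n_down + g2 * topgap - k * dp  # Δg
    return pedestal + dp, gain + dg
```

With exposure set to zero the update reduces to the first version of the algorithm.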
[0077] The above description of the VIPER suite incorporates by
reference herein U.S. Pat. No. 6,496,593 to Krone, Jr. et al.
Decreased Response Time for Confirmation
[0078] In FIG. 12, a standard two axis pan and tilt gimbal 1210 is
operably connected to the Targeting Processor 1215. An alignment,
including registration and calibration, is performed between the
gimbal position and the detection camera pixel locations. The
alignment is accomplished using reference sources located at a
distance from the sensors. The gimbaled camera sensors are
calibrated so that the differing fields of view are matched to each
other. After this calibration, the gimbal can rapidly point at a
given location corresponding to a triggering event. A standard
joystick 1220 is interfaced to the Targeting Processor 1215 to
enable the user to move the gimbal 1210 independently to locate
areas of interest.
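The alignment described above amounts to a mapping from detection-camera pixel coordinates to gimbal pan/tilt commands. A simplified sketch, assuming the registration step has already produced a boresight pixel, an angular pixel pitch, and gimbal offsets (all parameter names here are hypothetical, not from the specification):

```python
def pixel_to_gimbal(px, py, cx, cy, deg_per_px_x, deg_per_px_y,
                    pan0=0.0, tilt0=0.0):
    """Map a detection-camera pixel to gimbal pan/tilt angles (degrees).

    (cx, cy) is the boresight pixel found during registration;
    deg_per_px_* is the angular pixel pitch; pan0/tilt0 are the
    calibrated gimbal offsets. All names are illustrative.
    """
    pan = pan0 + (px - cx) * deg_per_px_x
    tilt = tilt0 - (py - cy) * deg_per_px_y  # image y increases downward
    return pan, tilt
```

With this mapping in hand, a triggering event detected at a given pixel can be converted directly into a slew command for the gimbal.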
Day/Night Functionality
[0079] A Day/Night Color Vision System ("DNCVS") 1225 is placed on
the high speed gimbal 1210. This subsystem 1225 serves as an
adjunct system to the instant VIPER suite. The DNCVS 1225 provides
the user with a day/night "visual" validation of the triggering
event. The DNCVS 1225 comprises standard multi-spectral cameras
that are sensitive to both daytime and nighttime environments. Such
multi-spectral cameras include, for example, standard long wave
infrared, standard short wave infrared ("SWIR"), and standard
visible band, e.g., video, cameras. The use of multiple cameras
permits viewing of camouflage, cold targets, hot targets, and
reflective (white/black) targets in several spectral bands. Use of
multiple bands optimizes target contrast and provides better
penetration through obscurants such as smoke and fog. Selection of
the proper bands for the situation enables the operator to observe
the scene for a wide variety of conditions. For day operations, the
visible video cameras provide the best performance. For twilight
operation, the SWIR cameras provide superior performance. For
starless nights, the long wave IR cameras offer the best
performance. Combining the sensors for transition periods, e.g.,
day to night, can give the best performance as the environmental
conditions change.
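The band-selection rule described above can be sketched as a simple lookup on ambient illuminance. The threshold values here are illustrative assumptions, not from the specification:

```python
def select_camera(lux):
    """Pick a DNCVS band from ambient illuminance (lux).

    Daylight favors the visible cameras, twilight the SWIR cameras,
    and starless night the long wave IR cameras. Thresholds are
    illustrative placeholders.
    """
    if lux > 100.0:
        return "visible"  # day operations
    if lux > 0.01:
        return "swir"     # twilight operations
    return "lwir"         # starless night
```

During transition periods the outputs of two adjacent bands could be fused rather than switching abruptly.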
[0080] Variable camera fields-of-regard are embodied by, for
instance, standard zoom optics or standard controlled flip lenses.
The operator may select either automated or manual zoom controls
allowing optimization of the fields-of-regard.
[0081] The operator's user interface 1230 permits selection of
specific cameras of the DNCVS 1225 that can be displayed. This
display 1230 can be selected to be either monochromatic or color.
Various false color display schemes are available. Color fusion
schemes, such as described in U.S. patent application Ser. No.
09/840,235 to Penny G. Warren, entitled "Apparatus and Method for
Color Image Fusion," filed Apr. 24, 2001, and incorporated herein
by reference, are selectable for combination of multiple cameras
into a single display. Fusion of previously stored images with
real-time sensor imagery is also available. Each camera can be
optimized for maximum scene contrast by user-selected options. Both
analog and digital sensor data is available for processing or
storage. For highlighting features at long ranges super-resolution
enhancements can be employed. Frame summation techniques can be
employed for highlighting dim targets. Laser or other illuminators
are used to highlight dim objects or designate an area of interest
for external observers. Additionally, it is possible to convert
individual wide-band cameras into a multi-color operation by use of
laser (or other narrow-band) illuminators in an on-off contrast
fashion. These capabilities are controlled by the operator through
hardware and/or software interfaces.
[0082] The IR detection camera is mounted, for instance, on a plate
along with the gimbal 1210, while the rest of the sensors are
mounted on the gimbal 1210 itself. The camera 1260 includes a
detection camera, such as a midwave IR detection camera. The other
cameras are attached to the gimbal 1210 because, for example, their
fields of regard are much smaller than that of the detection camera
1260. These other cameras are included in the DNCVS 1225.
Video Storage
[0083] Camera imagery is also passed into a recording device 1235,
e.g., a Video Storage Device. The storage device 1235 enables
archiving and analysis of data and events at a later time.
Enhanced User Interface with External Display
[0084] An external portable display 1230 (e.g., a monocular with a
shuttered eyecup) is linked to the Targeting Processor 1215. This
enables multiple people in nearby locations to view the same real
time data that is presented on the system display.
Geolocation/Mapping
[0085] A standard Laser Range Finder 1240 is fixed to the gimbal
1210 permitting ranging to a designated object of interest. A
standard magnetometer/digital compass 1245 and standard GPS 1250
are interfaced to the Targeting Processor 1215 providing positional
reference of the detection system. The combination of the
information from the magnetometer 1245, gimbal 1210, laser range
finder 1240 and GPS 1250 provide the capability of geolocating the
place where the event occurred. The specific place is then
referenced and displayed on a stored map in the system and provided
to the system operator. Standard commercial software is available
for this function, such as Weapons Systems Mapping software
produced by DCS Corporation. This information can be passed to
external entities in order to enable them to react to the
event.
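As a sketch of the geolocation step, the target position can be estimated from the system's GPS position, the compass/gimbal azimuth, and the laser range. This sketch uses a flat-earth approximation, which is reasonable at laser-ranging distances; a fielded system would use geodesic formulas and correct for magnetic declination (the function and constant names are illustrative):

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius, meters (approximate)

def geolocate(obs_lat, obs_lon, azimuth_deg, range_m):
    """Estimate target latitude/longitude from observer position,
    azimuth (degrees clockwise from true north), and range (meters).

    Flat-earth approximation; adequate only at short ranges.
    """
    az = math.radians(azimuth_deg)
    north_m = range_m * math.cos(az)  # northward displacement
    east_m = range_m * math.sin(az)   # eastward displacement
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m /
                        (EARTH_RADIUS_M * math.cos(math.radians(obs_lat))))
    return obs_lat + dlat, obs_lon + dlon
```

The resulting coordinates would then be plotted on the stored map and passed to external entities.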
Tailored Spectral Bands for Different Missions
[0086] Several narrow-band cold filter settings have been developed
which optimize the performance of the present VIPER detection
system. Cold filters are filters that are cooled to avoid the
noise generated by the filter's own thermal emission, which would
otherwise reduce contrast. These spectral band settings are chosen based upon the
characteristics of the gunfire or ordnance to be observed as well
as the properties of the background clutter and the intervening
atmosphere. For example, for urban operations, a narrowing of the
midwave IR camera passband reduces the false alarms at the cost of
shorter detection ranges. By choosing the spectral band, the
instant VIPER system is optimized for daytime or nighttime;
long-range or short-range detection; or urban vs. rural settings.
Proper choice of narrow spectral bands enhances system operation
when the system is on a moving platform. The optimization can be
fixed for a given situation. A variable filter setting is employed
if a standard tunable filter is available to adjust to the specific
situation. Alternatively, multiple cameras with individually
optimized filters are used instead.
Anamorphic Lens Improvement
[0087] A standard very wide angle anamorphic lens 1255 has been
developed and implemented that provides a wide angle field of view
in one dimension. This lens optimizes the field-of-regard of the
detection camera 1260 eliminating the need for multiple cameras to
provide the wide angular coverage.
Increasing Reactive Coordination through Optical
Illumination/Designation
[0088] Optical illuminators/designators 1265 are attached to the
gimbal 1210. They can be aligned in such a fashion to enable the
user to illuminate/designate the event of interest. This cues
external entities to the existence/relative location of a possible
target.
Perimeter Defense Operations
[0089] The high speed gimbal 1210 containing, or communicating
with, the DNCVS 1225 can also be used as a Perimeter Defense
surveillance subsystem. This allows the operator to do a sweep-scan
or a step-stare over selected angular regions. The timing and
selection of the coverage is operator-controlled. The Perimeter
Defense surveillance subsystem enhances the situational awareness
of the operator by highlighting events such as intrusions. Motion
and scene change detection processing can be added to the Perimeter
Defense surveillance subsystem to highlight features. The operator
can examine the user display and decide to dwell on objects of
interest within the Perimeter Defense coverage. Optionally, a
triggering event, such as a muzzle flash, overrides the Perimeter
Defense surveillance so that the event can be identified and/or
targeted.
[0090] FIG. 13 shows an illustrative method according to the
instant invention.
[0091] In Step S1310, a physical flash has occurred in the IR
Detection Camera field of regard.
[0092] In Step S1320, the detection camera images the flash through
a sequence of frames. The Image Processor then filters the imagery
and determines if a shot has occurred. If so, it then passes a
message to the Targeting Processor. In this instance, this is
accomplished through an Ethernet interface between the Image
Processor and Targeting Computer.
[0093] In Step S1330, after a detected shot, the Image Processor
can alert friendly forces in the area with a vibration, an audible
alarm, and/or a visible cue.
[0094] In Step S1340, the Targeting Processor display then alerts
the user and updates the display to indicate such. For instance,
this may be accomplished through adding the detected event to a
list of already detected events and/or drawing an icon on a display
representing the detection camera field of regard.
[0095] In Step S1350, when multiple events occur in a short period
of time, a Gimbal Slew Override allows the user to deal with
individual events in a serial fashion, preventing operator
confusion.
[0096] In Step S1360, if the Gimbal Slew Override is on, the user
is busy attending to a previous event, so the gimbal is not
deviated from its current position. Meanwhile, the user has
available a selection of Commands (Step S1380) to help react to the
previous event.
[0097] In Step S1370, if the Gimbal Slew Override is off, the
Targeting Processor then drives the gimbal to the position
corresponding to the alerted event. Imagery of the area of interest
is displayed on the Target Processor display.
[0098] In Step S1380, the Available User Commands are a set of
controls that help the user to adapt to various conditions. For
instance, a user may select to view a different color of imagery
based on day or night.
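The event flow of Steps S1340 through S1380 can be sketched as a small event queue in which the Gimbal Slew Override suppresses automatic slewing; class and attribute names here are illustrative, not from the specification:

```python
from collections import deque

class TargetingProcessor:
    """Sketch of the FIG. 13 event flow (Steps S1340-S1380)."""

    def __init__(self, gimbal):
        self.gimbal = gimbal
        self.events = deque()       # detected events, oldest first (Step S1340)
        self.slew_override = False  # user is busy with a prior event (Step S1350)

    def on_detection(self, event):
        """Handle a detected-shot message from the Image Processor."""
        self.events.append(event)   # update the displayed event list
        if not self.slew_override:
            # Step S1370: drive the gimbal to the alerted event.
            self.gimbal.point_at(event.azimuth, event.elevation)
        # Step S1360: with the override on, the gimbal stays put and the
        # operator works the queue serially via the Available User Commands.

    def next_event(self):
        """User command: slew to the oldest unhandled event, if any."""
        if not self.events:
            return None
        event = self.events.popleft()
        self.gimbal.point_at(event.azimuth, event.elevation)
        return event
```

Handling the queue oldest-first matches the serial, one-event-at-a-time workflow the override is intended to support.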
[0099] While the invention has been described and illustrated in
relation to preferred embodiments of the invention, it will be
appreciated that other embodiments, adaptations and modifications
of the invention will be readily apparent to those skilled in the
art.
* * * * *