U.S. patent application number 12/188043 was filed with the patent office on 2010-02-11 for predictive autofocusing system.
This patent application is currently assigned to HONEYWELL INTERNATIONAL INC. The invention is credited to Jan Jelinek.
Application Number | 20100034529 12/188043 |
Document ID | / |
Family ID | 41653055 |
Filed Date | 2010-02-11 |
United States Patent
Application |
20100034529 |
Kind Code |
A1 |
Jelinek; Jan |
February 11, 2010 |
PREDICTIVE AUTOFOCUSING SYSTEM
Abstract
A system for providing a predictive autofocus prior to capturing
an image of an iris of a subject. A sequence of images of the
subject may be taken with a visible light sensitive camera. A speed
and/or location of the subject may be estimated from the images. An
encounter may be when the subject is within focus of the camera or,
in other words, a focus distance and subject distance coincide. The
focus may be determined in accordance with an intensity variance
determination of the subject in the image, and more particularly of
a subject's eye within a window of an image. Upon an encounter, an
image of the iris of the eye may be captured with an infrared
sensitive camera.
Inventors: |
Jelinek; Jan; (Plymouth,
MN) |
Correspondence
Address: |
HONEYWELL INTERNATIONAL INC.;PATENT SERVICES
101 COLUMBIA ROAD, P O BOX 2245
MORRISTOWN
NJ
07962-2245
US
|
Assignee: |
HONEYWELL INTERNATIONAL
INC.
Morristown
NJ
|
Family ID: |
41653055 |
Appl. No.: |
12/188043 |
Filed: |
August 7, 2008 |
Current U.S.
Class: |
396/95 |
Current CPC
Class: |
G02B 7/36 20130101; G06K
9/00604 20130101; G03B 13/36 20130101 |
Class at
Publication: |
396/95 |
International
Class: |
G03B 13/36 20060101
G03B013/36 |
Government Interests
[0001] The U.S. Government may have rights in the present
invention.
Claims
1. A method for autofocusing comprising: taking a sequence of
images of a subject with a camera; estimating a first time of an
encounter of the subject from the images; computing one or more
encounters subsequent to the first time of an encounter; and taking
an image of the subject at one or more times of an encounter; and
wherein an encounter indicates that at least a portion of the
subject is within a focus of the camera.
2. The method of claim 1, wherein: the at least a portion of the
subject is covered by a window in the images; and the window
includes an eye of the subject.
3. The method of claim 2, wherein computing an encounter is based
on intensity variance of an area in the window centered on the
eye.
4. The method of claim 3, wherein an encounter is achieved when
either just the focus or the subject moves, or both the focus and
the subject move.
5. The method of claim 4, wherein upon at least one encounter, the
camera takes an image of an iris of the eye.
6. The method of claim 5, wherein the image of the iris is taken
under infrared light.
7. The method of claim 6, wherein the camera comprises one or more
sensors for taking the sequence of images of a subject and for
taking the image of an iris.
8. A predictive autofocusing system comprising: a camera for taking
images of a subject; and a processor connected to the camera; and
wherein the processor is for processing the images from the camera
to indicate an encounter wherein the subject is within a focus of
the camera.
9. The system of claim 8, wherein the processor is for processing
images to estimate a time of a future encounter.
10. The system of claim 8, wherein the processing of images is for
determining intensity variance of an area of an eye of the subject
to determine whether an amount of focus of the eye in the images is
sufficient for an encounter.
11. The system of claim 8, wherein upon an encounter, an image of
an iris of an eye of the subject can be captured by the camera.
12. The system of claim 11, wherein an encounter is achieved by
movement of either the subject or the focus of the camera, or of
both the subject and a focus of the camera.
13. The system of claim 11, wherein the camera is for taking images
of the subject in visible light for focusing, and for capturing an
image of the iris in infrared light.
14. A method for focusing comprising: taking images with a camera
of a subject; estimating a position and/or speed of the subject
relative to the camera from the images; computing a time of an
encounter from the position and/or speed of the subject relative to
the camera, wherein the encounter is at a time when the subject is
within focus of the camera; and taking an image of the subject upon
the time of an encounter.
15. The method of claim 14, wherein the position and/or speed of
the subject is estimated from image data of a window encompassing
an eye of the subject in the images.
16. The method of claim 15, wherein the image data comprises an
intensity variance proximate to the eye.
17. The method of claim 16, wherein an encounter is indicated when
the intensity variance is at a certain level.
18. The method of claim 17, wherein an image of the eye is acquired
during an encounter.
19. The method of claim 18, wherein the images having the image
data are taken in visible light and the image of the eye is taken
in infrared light.
20. The method of claim 17, further comprising deciding whether to
move the focus to obtain an encounter, or have the subject move
within a depth of the focus, or both.
Description
BACKGROUND
[0002] The present invention pertains to capturing images and
particularly capturing images of subjects. More particularly, the
invention pertains to focusing for such images.
SUMMARY
[0003] The invention is a predictive autofocusing system for still
or moving subjects using image data.
BRIEF DESCRIPTION OF THE DRAWING
[0004] FIG. 1 is a diagram of a layout for the present predictive
autofocusing system;
[0005] FIG. 2 is a graph that illustrates a sequence of actions
that makes up a stop-and-go autofocusing cycle;
[0006] FIG. 3 is a graph that shows a continuous focus sweep
version of the stop-and-go approach in FIG. 2;
[0007] FIG. 4 is a graph that shows the continuous focus sweep
autofocusing cycle when a subject is approaching a camera;
[0008] FIG. 5 is a graph that shows the continuous focus sweep
autofocusing cycle where the subject appears to be moving too fast
and a lens focus distance thus does not get ahead of the subject
distance resulting in an error;
[0009] FIG. 6 is a graph showing a subject that sweeps through a
focus range or distance through its movement;
[0010] FIGS. 7 and 8 are graphs showing forward and backward focus
sweeps, respectively, of the focus and subject distances from a
camera;
[0011] FIG. 9 is a plot of image intensity variance as a function
of a changing focus lens position;
[0012] FIG. 10 is a graph showing image intensity variance samples
for a standing subject relative to focusing error and time;
[0013] FIG. 11 is a graph showing image intensity variance samples
for a moving subject relative to focusing error and time;
[0014] FIG. 12 is a graph combining the graphs of FIGS. 10 and
11;
[0015] FIG. 13 is a graph showing that two subjects moving at
different speeds may have the same rates of variance change, if
their peak variances differ;
[0016] FIG. 14 is a graph which illustrates that greater subject
speed means a lower signal-to-noise ratio of the noisy variance
data, and thus indicates a need for more sample images to maintain
the speed estimation accuracy;
[0017] FIG. 15 is a schematic illustration of an example camera
system of the present autofocusing system;
[0018] FIG. 16 is a schematic illustration of how particular
elements of the camera system of FIG. 15 support an iris
camera;
[0019] FIG. 17 is a schematic illustration showing how subject
movement may be monitored;
[0020] FIG. 18 is a schematic illustration showing how digital tilt
and pan may be used to find and track an individual's iris;
[0021] FIG. 19 is a flow diagram showing an approach that may be
carried out using the camera system of FIG. 15;
[0022] FIG. 20 is a flow diagram showing an approach that may be
carried out using the camera system of FIG. 15; and
[0023] FIG. 21 is a flow diagram showing an approach that may be
carried out using the camera system of FIG. 15.
DESCRIPTION
[0024] An iris recognition system may work with iris images
acquired in the near infrared (NIR) spectrum. NIR illumination may
be provided by a special flash. An operational scenario of the
system appears to raise a question, namely, how to focus an iris
camera. First, given its very small depth of field, the iris camera
should be focused on a particular eye being photographed. Second,
the focusing should be done prior to the flash discharge, using
only ambient light. Third, determining the correct focus and
adjusting the lens to achieve the focus may take a certain amount
of time. If the subject is moving, the time needed to do the
focusing should be properly accounted for relative to the subject
speed, if the system is to produce well focused iris images. An
autofocusing system should predict where the subject's eye is going
to be in the near future and calculate the nearest time when its
lens' focus can "catch up" with the eye given the system's
computational and physical limitations.
[0025] The autofocusing approach of the present system may operate
on optical principles, i.e., such approach does not necessarily
explicitly measure the distance to the subject using a ranging
device like lidar. The approach may be based on trial-and-error
techniques when the system takes a few test images using different
focus settings and uses the information gleaned from them to
determine both the correct focus lens position and when to fire the
iris camera shot.
[0026] Predictive autofocusing may involve several phases. In phase
1, the system may take a sequence of test images and use the
sequence to estimate the subject's position and/or speed relative
to the camera. In phase 2, the position and/or speed may be used to
solve a dynamic pursuer-evader problem, whose solution is the
location and/or time of their earliest encounter. Here, the pursuer
may be the camera focus, which is "chasing" the "evading" subject.
As with any evader-pursuer problem, the solution should be computed
ahead of real time to allow the pursuer enough time to actually
reach the pre-calculated location of the encounter. In phase 3, the
pursuer may set out to move into the pre-calculated location of the
encounter and fire the shot when it gets there.
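The phase 2 pursuer-evader computation described above can be sketched as follows, under a constant-velocity assumption; the function name and signature are illustrative and not taken from the application itself.

```python
def predict_encounter(d_subject, v_subject, d_focus, v_focus):
    """Return the time until the focus distance "catches up" with the
    subject distance, assuming both move at constant radial velocity.

    Distances are measured from the camera; velocities are negative
    when moving toward the camera. Returns None when no future
    encounter exists (the pursuer never catches the evader).
    """
    closing = v_subject - v_focus
    if closing == 0:
        return None  # equal speeds: the focusing error never changes
    t = (d_focus - d_subject) / closing
    return t if t > 0 else None
```

For example, a subject 2 m away walking toward the camera at 0.5 m/s, with the focus preset 3 m away and sweeping in at 1 m/s, would be met by the focus after 2 s, at 1 m from the camera.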
[0027] In order to focus on the eye, the predictive autofocusing
algorithm may rely on an eye finding algorithm to locate the eye in
the image. The eye finding and optical autofocusing algorithms may
require test images taken using only ambient light. Moreover, the
algorithms may work with frame rates higher than the rate a near
infrared, large image size iris camera can support. To overcome
certain constraints, the present system may be implemented using a
custom-built single-lens-splitter (SLS) camera that uses a beam
splitter to separate the NIR and visible light bouncing off the
subject and direct it into two separate cameras. The SLS camera is
described herein. Other kinds of cameras may instead be used.
[0028] In contrast to the autofocusing approaches used in digital
photographic cameras, the present system may use genuine image data
from a small window around the eye taken in the test images of a
sequence, use the image sequence to estimate the position and/or
speed of the subject, and feed the data and estimate into a
pursuer-evader problem solver to accurately predict the encounter
rather than just using simple feedback reacting to a few fast point
sensors to catch up with the moving subject. While such simple
approaches may work when taking conventional pictures, their
performance appears to fall short for iris imaging due to a very
short depth of field of the iris camera optics and a need for
precise focusing on a small, well defined area of an image.
[0029] A combined face and iris recognition system (CFAIRS) may
work with high resolution iris images acquired by its iris camera
in the near infrared (NIR) spectrum. Other kinds of recognition
system may be used. The illumination may be provided by a special
NIR flash, whose duration, on the order of one millisecond, may be
short enough to freeze the subject's motion during image
exposure.
[0030] The operational scenario should have a way to focus the iris
camera. First, the focusing should be done prior to the flash
discharge, using only the ambient light. Second, given the
extremely small depth of field of the iris camera optics, the iris
camera should be focused not on a vaguely defined "scene", but on
the particular eye being photographed.
[0031] Of the several phases, in phase 1, the system may acquire
data from images to determine the subject's position and/or speed.
Speed may include radial speed, which is measured along the optical
axis of the camera. However, there may be situations where the
subject motion has a high lateral speed (i.e., in a plane
perpendicular to the optical axis) as well, in which case the
system should determine the complete speed vector.
[0032] In phase 2, the position and/or speed may be used to solve a
dynamic pursuer-evader problem, a solution of which is the location
and/or time of their encounter. Here, the focus lens in the camera
objective, whose position determines the focus distance, may be
"chasing" the "evading" subject. When they meet, the system needs
to recognize this encounter and make the camera fire its shot of
the eye or iris of the subject. One may note that as with any
evader-pursuer issue, the solution needs to be computed ahead of
real time to allow time for the pursuer, whose velocity is always
limited, to perhaps actually reach the pre-calculated location of
the encounter so as to be prepared for an iris image capture. How
long this prediction needs to be depends on the relative
pursuer-evader speed.
[0033] In phase 3, the pursuer may set out to move into the
pre-calculated location of the encounter. If the pursuer has the
ability to update its estimates of the subject's position and/or
velocity during the pursuit, the pursuer may counter the subject's
"evasive" maneuvers using feedback and improve its odds of
obtaining a well focused image of the subject, particularly the
subject's iris. The feedback may consist of just a periodic re-run
of phases 1 and 2.
[0034] The optical autofocusing approaches for moving subjects may
differ from other approaches just in their implementation of phase
1. A disadvantage of other approaches may be their inability to
precisely locate the focus target. As a tradeoff for precision, a
disadvantage of optical approaches may be the relatively long time
before they determine the subject's position and/or velocity, which
is largely determined by the time needed to collect the test
images. The time requirement appears to be of particular concern
when the subjects are moving. To speed up the collection, one needs
to use a camera with a fast readout. However, fast and large image
size sensors (i.e., ten to sixteen Mpixels or so) of the kind
needed for iris capture may not be currently available at a
reasonable price. To manage the size constraint, the single lens
splitter (SLS) camera that uses a beam splitter may support a
division-of-labor approach to the issue. The single lens splitter
camera is merely an example camera. Two separate cameras may
instead be used for visible and infrared image capture,
respectively. Or one camera may be used for both visible and
infrared image capture.
[0035] The SLS camera may have two cameras that share a common high
power zoom lens followed by a splitter that separates the incoming
light into its NIR and visible components. The NIR component may go
straight into the high resolution, but slow, iris camera. The
visible component may be deflected and it may enter a fast, low
resolution focus camera, whose purpose is to provide data to enable
proper and fast autofocusing.
[0036] Another limitation may stem from a need to repeatedly change
focus in the course of autofocusing. A general and conceptually
simplest approach may be the stop-and-go autofocusing in which the
system moves the focus lens, waits until it stops, then starts the
image exposure and waits until the exposure is over before it
begins to move the lens again to acquire the next image. This is an
approach which may involve moving mechanical parts with inertia.
Thus, getting the parts moving and stopping them may be slowed down
by dynamic transients. A continuous focus sweep autofocusing may
improve the speed and reduce mechanical stresses on the optics by
not requiring the focus lens to stop and start during each test
image acquisition cycle.
[0037] The SLS camera's apparent complexity or other camera's
properties might not be a consequence of using the particular
autofocusing approaches presented herein. An issue may be that,
given the sensor technology limitations, generally an iris camera
cannot focus by itself. Regardless of what autofocusing approach
one uses, the solution may require an auxiliary device, be it another
camera, lidar or some other device, whose presence could complicate
the design, and whose role is to find the focus target in the
scene, i.e., the particular eye that the iris camera will
photograph, and autofocus on it using an approach of choice. Thus,
a target that moves radially (and possibly also laterally) might be
handled only by predictive autofocusing approaches. One reason for
predictive autofocusing is that, due to the limited focus lens
velocity, the lens focus (d.sub.F(t)) can be changed
only so fast.
[0038] FIG. 1 is a diagram of the present predictive autofocusing
system 10 having a camera system 22 and a processor 30. The diagram
shows a basic relationship between a subject 21 and camera 22 in
view of distance 11 of the subject's eye from the camera and focus
distance 12. Focus distance 12 may reveal the momentary distance of
camera 22 to some place along a range 23 where subject 21,
particularly the subject's eye 27, would be in focus. The rear end
is the farthest and the near end of range 23 is the closest to
camera 22. The image collection may nearly always start at the rear
end but may be terminated before reaching the near end of range 23.
The place, point or location along range 23 may move with focus
change. The place, where the focus distance and subject distance
coincide, may be regarded as an encounter 15. Also, there may be a
light source 28 for illuminating subject 21 during an iris image
capture.
[0039] FIG. 2 illustrates a sequence of actions that makes up a
stop-and-go autofocusing cycle. The Figure is a graph of distance
(whether focus or subject) from camera 22 versus time. One may
initially assume that subject 21 is standing still at the distance
11 (d.sub.S) from camera 22. Focus distance 12 (d.sub.F) from the
camera is shown having moments of movement when not parallel to the
time coordinate. The Figure shows the stop-and-go autofocusing
cycle when the subject is standing still. The vertical dashed
arrows 13, 14 show the focus distance 12 errors at different
instants t.sub.1, t.sub.2, t.sub.3, t.sub.F and t.sub.N. Points 15
are where the focusing distance 12 coincides with subject distance
11. These may be regarded as encounters.
[0040] Camera 22 and CFAIRS system 22 may be terms which may be
used at times interchangeably in the present description. At time
t.sub.0, system 22 may lock onto the subject 21 and initiate an iris
image capture approach. The system's ranging subsystem may obtain a
reading on the subject's approximate distance, which can be used to
preset the focus lens at a location, position or distance 12
d.sub.F where the subject 21 would be in focus if the subject were
standing that far or at that distance from the camera. Because the
subject's actual distance 11 is d.sub.S, the focus lens setting may
appear to be off by (d.sub.S-d.sub.F) meters. One may assume that
the initial distance 12 d.sub.F is virtually identical with an
end point of the range 23 and is beyond the subject (i.e.,
d.sub.S<d.sub.F, farther away from camera 22 than the subject 21
is), and that at time t.sub.1 the focus lens will have already
arrived at this point.
[0041] A focus camera of camera 22 may start the autofocusing cycle
at time t.sub.1 by taking its first image. The exposure time 25
for taking the first image may be designated as T.sub.E. The focus
during that time T.sub.E may be designated as a focus period 26,
whether the focus distance 12 is changing or not, and be a dashed
line portion of focus distance 12 graph. As soon as the exposure
ends at t.sub.1+T.sub.E, the system may once again start moving the
focus lens and thus changing the focus distance 12. After it stops
at time t.sub.2, the focus camera may take a second test image
during T.sub.E 25 and during the focus holding distance 26, and so
on. According to FIG. 2, a total of five (N=5) images may be
taken. One may note that at time 16 (t.sub.F), the focus lens may
have passed through a correct focus position 15 (marked by a dot),
but at this time, the camera is not necessarily aware of the
correct focus position. What may be significant is that the focus
of the lens did pass through that position 15. It is the next
encounter 15 which may be useful for an iris image capture. Once
the first image becomes available at t.sub.1+T.sub.E, the
autofocusing algorithm may start to look for the eye at which to
focus. While knowing that the eye location, which may be regarded
as being at position 15, is not necessarily needed for the focus
lens stepping, which proceeds at pre-set time intervals of the
length T.sub.L, the eye location may be needed for determining that
the focus lens distance 12 position or point 23 has passed through
the correct focus point or position 15. For that, the autofocusing
algorithm should compute the image intensity variance around the
eye 27 in a few images before and after it reaches the focus point.
Of the images, only a few images would be acquired after the
correct focus point 15 is reached.
[0042] The number of images taken, N, may depend on the subject
speed, which is not known at this point in time. The faster the
subject 21 moves, then the more test images the focus camera needs
to take since it takes the focus lens longer to catch up at time 16
t.sub.F with the moving subject 21. In order to determine the image
sequence length dynamically, the system 10 would need to process
the test images in real time. This means that the system has at
most T.sub.L seconds, from when the current image exposure
terminates until the next image data becomes available, to locate
the eye 27, to extract data from a window 29 surrounding the eye,
to calculate the image intensity variance over the window data and
to decide whether to terminate the test sequence acquisition. Upon
the capturing of a test image, virtually immediately in real time,
the intensity variance over the target window 29 may be calculated
before capturing the next image. Completing the test image sequence
at time t.sub.P.sub.1 may conclude phase 1.
[0043] The system may next enter phase 2, when it uses the data to
find the time 16 t.sub.F of the system's passing through the
correct focus point 15, to compute the estimate of the subject's
radial speed, .nu..sub.S, (i.e., down the iris camera's optical
axis) to compute the estimate of the subject's location,
d.sub.S(t.sub.F), at which the focus occurred, to compute the
prediction of the time t.sub.P.sub.'at which to take the iris
image, and to decide whether to actively move the focus lens into a
new position to speed up the process or wait until the moving
subject 21 passes through the second point 15 of focus.
[0044] When done computing the encounter specifics, which happens
at time t.sub.P.sub.2, the system may begin phase 3 to implement a
pursuit strategy. The encounter may take place at time
t.sub.P.sub.3, when the iris camera of camera 22, or another
camera, fires its flash and takes an iris picture.
[0045] The next step may be to extend the concept to moving
subjects. However, one may skip this extension and move on to the
continuous focus sweep autofocusing since the basic ideas appear
the same. A forward sweep approach may be considered.
[0046] Dynamic transients associated with the repeated moving and
stopping of the focus lens tend to slow down the image acquisition
process, mechanically stress the lens drive and increase power
consumption. A better solution may be not to stop the lens movement
but to expose pictures while the lens is moving. An added benefit
of this approach is that the exposure and move intervals overlap so
that the lens is already closer to its new position 23 when the
exposure ends and thus the lens gets there sooner. FIG. 3 shows a
continuous focus sweep version of the stop-and-go approach in FIG.
2. Since the focus sweeps (distance d.sub.F) forward here, i.e., to
a point of range 23 nearer the camera 22, this version may be
referred to as the forward sweep design. FIG. 3 shows the
continuous focus sweep autofocusing cycle when the subject 21 is
standing still, that is, d.sub.S is not changing. The focus
distance 12 may continue to change at the focus distance portion 26
during time 25 T.sub.E. This should not necessarily have an adverse
effect on image capture.
FIG. 4 shows the subject 21 approaching camera 22 at the
constant speed .nu..sub.S, whose value is reflected in a slope of
line 11, each point of which indicates the subject's distance 11
from camera 22. The rate of error change may depend not only on the
focus lens, but also on how fast the subject 21 is moving (e.g.,
walking). One may note that because both the subject 21 and focus
point 15 appear to be moving toward camera 22, thus making the distance
shorter, their velocities may be negative. FIG. 4 shows the
continuous focus sweep autofocusing cycle when the subject 21 is
approaching camera 22. The focus error 13, 14 may change its sign
at the time t.sub.F. Focus error may go from being designated as
error 13 to being designated as error 14 as focus distance or line
12 crosses the first point 15 while moving from left to right in
FIGS. 2-4. Focus error 13 or 14 may be a difference between subject
distance 11 and focus distance 12.
[0048] As the Figures show, the lens focus may be in error by being
incorrectly set either before or beyond the subject 21. The focus
or focusing error e(t) at time t may be introduced as the
difference between the subject distance d.sub.S(t) and the focus
distance d.sub.F(t),
e(t)=d.sub.S(t)-d.sub.F(t). (1)
The focusing error at the start of the autofocusing cycle in the
forward sweep design,
e(t.sub.1)=d.sub.S(t.sub.1)-d.sub.F(t.sub.1)<0, (2)
may be negative, but change its sign later at time t.sub.F.
Assuming that both the subject and lens focus are moving at
constant velocities .nu..sub.S and .nu..sub.F, respectively, they
may advance in time .DELTA.t to new positions,
d.sub.S(t+.DELTA.t)=d.sub.S (t)+.nu..sub.S.DELTA.t, and
d.sub.F(t+.DELTA.t)=d.sub.F(t)+.nu..sub.F.DELTA.t, (3)
thus changing the focusing error to a new value,
e(t+.DELTA.t)=e(t)+(.nu..sub.S-.nu..sub.F).DELTA.t. (4)
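Equations (1) through (4) can be checked numerically with a direct transcription; the helper names here are illustrative.

```python
def focusing_error(d_s, d_f):
    # Equation (1): e(t) = d_S(t) - d_F(t).
    return d_s - d_f

def advance(d_s, d_f, v_s, v_f, dt):
    # Equation (3): constant-velocity update of both the subject
    # distance and the focus distance over an interval dt.
    return d_s + v_s * dt, d_f + v_f * dt
```

Advancing both positions and recomputing the error reproduces equation (4), e(t+.DELTA.t)=e(t)+(.nu..sub.S-.nu..sub.F).DELTA.t.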
[0049] A significant requirement for the continuous focus sweep to
work is that the focus lens move slowly enough that the
focusing error change during the exposure is smaller than the depth
of field of the camera objective. If the focus camera exposure
lasts T.sub.E seconds, then the following inequality must hold.
|.nu..sub.S-.nu..sub.F|T.sub.E.ltoreq.depth of field for all n=1,
2, . . . (5)
for the continuous forward focus sweep to work. Also, if the lens
focus point is to ever get ahead of the subject, the velocities
need to satisfy the inequality,
.nu..sub.F<.nu..sub.S.ltoreq.0 (6)
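Conditions (5) and (6) can be expressed as a small feasibility check; the function name and the numeric values used below (a 25 mm depth of field is mentioned later for the SLS camera) are illustrative.

```python
def sweep_is_feasible(v_s, v_f, t_exposure, depth_of_field):
    """Check the two forward-sweep conditions.

    Condition (5): the focusing error change during one exposure must
    stay within the depth of field, so each test image is acceptably
    sharp. Condition (6): the focus must move faster than the
    (approaching) subject, v_F < v_S <= 0, or the lens focus never
    gets ahead of the subject.
    """
    sharp_enough = abs(v_s - v_f) * t_exposure <= depth_of_field
    can_overtake = v_f < v_s <= 0
    return sharp_enough and can_overtake
```

A subject approaching at 0.5 m/s with the focus sweeping in at 1 m/s passes the check; a subject moving faster than the focus, as in FIG. 5, fails it.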
[0050] It may happen that the subject 21 is moving so fast that the
focus lens drive lacks the power to get the lens focus position 23
ahead of the subject and the inequality (6) is not met. In reality,
the autofocus may likely "time out" sooner, even though getting
lens focus ahead of the subject is still theoretically possible,
since getting ahead would likely take too much time to achieve.
FIG. 5 shows the continuous focus sweep autofocusing cycle where
the subject 21 appears to be or actually is moving too fast and the
lens focus distance 12 thus does not get ahead of the subject 21
distance 11, and error 13 increases.
[0051] The roles of the lens of camera 22 and the subject 21 may be
swapped. FIG. 4 may show an example way to take advantage of the
subject's motion. Here, the focus lens distance 12 of camera 22
could sit still at some time after the first encounter 15, sparing
the optics both the dynamic shocks and time consuming transients.
However, even though this approach may work for a moving subject
21, it would not necessarily work for a stationary subject 21 and
thus not be acceptable. FIG. 6 shows letting the subject 21 sweep
the focus range or distance through its movement.
[0052] The backward sweep approach may start from an initial
position where the focus lens distance 12 is preset so as to be
before the subject 21 (i.e., closer to camera 22 than subject
21),
e(t.sub.1)=d.sub.S(t.sub.1)-d.sub.F(t.sub.1)>0. (7)
The focus velocity may now need to head away from the camera,
.nu..sub.F.gtoreq.0, (8)
for the approach to work. While possible, the backward sweep
approach may be slower than the forward sweep approach. A reason
may be that even if the image sequence is shorter, when the time
t.sub.P.sub.2 comes to move the focus lens, the lens finds itself
much farther from the desired location, because the lens traveled
away from the location during the sequence capture.
[0053] The timings in the forward and backward focus sweep
approaches are shown in FIGS. 7 and 8, respectively. The Figures
have diagrams which are intentionally drawn so that both approaches
have a minimum possible number of two images past the first focus
point. Such a comparison of ideal cases, however, might not always
be valid. For instance, if the focus images are badly underexposed,
which could easily happen whenever the CFAIRS system is working in
a poorly lit environment, the forward sweep design may have to take
more than just two images past the first focus point. A more
appropriate comparison then might require obtaining enough data to
achieve a comparable signal-to-noise ratio rather than having the
same number of images. This may roughly imply comparable focusing
errors e(t.sub.N) and, consequently, a need to take more images
past the first focus point. It may be noted that the initial focus
distance error appears about the same in both diagrams of FIGS. 7
and 8.
[0054] Focus quality may be measured by the image intensity
variance computed over a window (patch) 29 of an area centered on
eye 27 (FIG. 1) which is the focus target in FIG. 16. Note that due
to lateral motion, the focus target may shift from one test image
to another, and thus should be determined for each test image. The
larger the variance, the higher the image contrast, and the
closer the image is to focus. FIG. 9 is a plot 31 of image
intensity variance as a function of a changing focus lens
position. The plot may be measured in an iris image over a small
area around one eye and may use the same kind of data that the
autofocusing algorithm extracts from its sequence of focus camera
images as described herein except that the focus sweep shown in the
plot appears much wider. The five images shown in other
illustrations noted herein would appear to cover a rather narrow
region around a peak 32. One may note that the peak 32 is not sharp
but has a flat top, which is highlighted with a rectangle in the
Figure. The flat top, or plateau 32, is a consequence
of the lens' depth of field, and the plateau's width is
proportional to the depth of field. The larger the depth of field,
the wider the plateau 32. Setting the lens' focus somewhere near
the plateau center may be considered good focusing. Since the depth
of field of an SLS camera 22 objective may be about 25 mm, the
plateau 32 seems fairly narrow.
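The window-variance focus measure described above may be sketched in code. The following is a minimal illustration, assuming a grayscale image array and known eye coordinates; the function name, window half-size, and synthetic frames are hypothetical, not values from the application.

```python
import numpy as np

def focus_measure(frame, eye_xy, half=16):
    """Image intensity variance over a window (patch) centered on the eye.

    The larger the variance, the higher the contrast and the closer the
    image is to focus. The window is re-centered for every test image,
    since lateral motion shifts the focus target between images.
    """
    x, y = eye_xy
    patch = frame[max(0, y - half):y + half, max(0, x - half):x + half]
    return float(patch.astype(np.float64).var())

# A high-contrast (near-focus) patch scores higher than a uniform one.
rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
flat = np.full((64, 64), 128, dtype=np.uint8)
```

In this sketch, re-running focus_measure on each test image with freshly detected eye coordinates corresponds to re-determining the focus target for each image, as the paragraph requires.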
[0055] Plot 31 shown in FIG. 9 was made from a sequence of well
exposed images and thus appears to have steep, clean slopes. Also,
the variance was measured on the same set of pixels in each image.
In the real world of moving subjects, however, the images should
have a short exposure, yet cannot be taken with a flash.
Consequently, the images may often appear underexposed and thus
grainy. Further, the resulting low dynamic range may lower the hill
(i.e., plot 31) and make it flatter while, at the same time, the
noise will make its slopes jagged. Moreover, the eyes 27 of a
moving subject 21 may shift from image to image, preventing a use
of the same set of pixels for calculating the variance and, in
effect, introducing another random noise into the data. The smaller
the area, the stronger will be this noise. This combination of
adverse effects may complicate the task of finding the hill's peak
32. Theoretically, getting two points on either slope, i.e., four
images, would suffice to locate the peak 32. In the real world
circumstances, getting a robust solution in the presence of noise
may require a few more images. How many more images would be needed
may be primarily dictated by the noise levels and a desired degree
of certainty.
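The peak-location reasoning above, two samples per slope in theory and more under noise, can be sketched by fitting a line to the samples on each side of the peak and intersecting the fits; the sample values below are invented for illustration.

```python
import numpy as np

def peak_from_slopes(t_left, v_left, t_right, v_right):
    """Locate the variance peak as the intersection of two fitted lines.

    t_left/v_left sample the rising slope, t_right/v_right the falling
    slope; additional samples simply make each least-squares fit more
    robust against noisy, grainy focus images.
    """
    a1, b1 = np.polyfit(t_left, v_left, 1)    # rising slope: v = a1*t + b1
    a2, b2 = np.polyfit(t_right, v_right, 1)  # falling slope: v = a2*t + b2
    return (b2 - b1) / (a1 - a2)              # solve a1*t + b1 = a2*t + b2

# Noise-free example: slopes +2 and -1 whose lines cross at t = 5.
t_up = np.array([1.0, 2.0, 3.0])
t_dn = np.array([7.0, 8.0, 9.0])
t_peak = peak_from_slopes(t_up, 2.0 * t_up, t_dn, 15.0 - t_dn)
```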
[0056] Once the test images are collected at t.sub.P.sub.1, the
subject's speed may be estimated. The variance
change illustrated in FIG. 9 may be related to the timing diagrams
from the other Figures herein. For that purpose, FIG. 9 may be
redrawn so as to make the variance a function of the focusing
error. However, the error really is not an independent variable,
but varies in time during the sequence. To illustrate this fact,
the plot sketched in FIG. 10 has two abscissas 33 and 34, one for
the focusing error and another for time, respectively. Also, unlike
for the measurement plotted in FIG. 9, one knows the variance
values only at the discrete times t.sub.n at which the focus camera
images were taken. They may start at t.sub.1 and be T.sub.L seconds
apart,
t.sub.n=t.sub.1+(n-1)T.sub.L for n=1, 2, . . . N. (9)
In system 10, the sampling period T.sub.L may be fixed.
[0057] FIG. 10 shows image intensity variance 35 samples 36 for a
standing subject 21. The focusing error scale 33 may be measured in
terms of the focus lens position set point values and thus be
regarded as absolute.
[0058] Once subject 21 is allowed to move, in the illustrations it
may be noted that the relationship between the focusing error and
time depends on the combined velocity (.nu..sub.S-.nu..sub.F) as
the equation (4) states. The error measured at the sampling
instants may be
e(t.sub.n)=e(t.sub.1)+(.nu..sub.S-.nu..sub.F)(n-1)T.sub.L for
n=1, 2, . . . N, (10)
with e(t.sub.1)<0 being the forward sweep design assumption (2).
The inequality (6) may be rewritten as
0<.nu..sub.F-.nu..sub.S.ltoreq..nu..sub.F, (11)
from which it follows that the largest focus error increments,
-.nu..sub.F T.sub.L, may occur when the subject 21 is standing,
i.e., the subject's velocity .nu..sub.S=0. Or in other words, the faster that the
subject 21 is moving, the smaller the increments, which may be
manifested on the time axis 34 by shortening its scale as if the
samples were denser in time. A flat top 38 like top 32 of FIG. 9
may be noted.
[0059] FIG. 11 shows image intensity variance samples 37 for a
moving subject 21. A flat top 39 like top 32 may be noted. Holding
onto the absolute focus error scale 33 may cause the time axis 34
to change its scale. Starting from the same initial focus error
e(t.sub.1), the focus camera may take a number of images before the
focus catches up with subject 21 at time t.sub.F and eventually
gets ahead of subject 21 as illustrated in FIG. 11. If subject's
speed increases further, there comes the limit when
.nu..sub.S-.nu..sub.F=0 and the focus will keep "treading water"
at its initial position as the sampling becomes infinitely
dense.
e(t.sub.n)=e(t.sub.1) for n=1, 2, . . . N. (12)
Increasing the subject's velocity even further may produce a
growing error magnitude, |e(t.sub.n)|>|e(t.sub.1)|. This phenomenon may
correspond to the case shown in FIG. 5. Crossing this limit may
cause the samples 37 to actually move away from the correct
focus.
[0060] Using the focus error scale 33 as the independent variable
may make the time axis scale 34 vary as appearing in FIGS. 10 and
11. However, one may choose an opposite approach as well, namely,
to take the time scale 34 as the independent variable in the plots
and accept that it will be the focus error scale 33 now which is
going to be variable as a function of the subject 21 velocity.
FIGS. 10 and 11 may then be combined into one and redrawn as shown
in FIG. 12. This Figure shows image intensity variance samples 36,
37 taken from standing and moving subjects 21. Because the time
scale 34 is now fixed, the focus error scale 33 may vary with the
subject 21 speed.
[0061] FIG. 13 shows that two subjects 21 moving at different
speeds may have virtually identical rates of variance 35 change, if
their peak variances differ. This Figure may give rise to several
items. First, the variance may depend on the image data within the
window over which it is computed. There might be a situation in
which the variance 41 (.sigma..sub.1.sup.2) belonging to a standing
subject 21 happens to be smaller than the variance 42
(.sigma..sub.2.sup.2) measured on another subject 21, who is
moving. As FIG. 13 shows, the rates of variance change in both
cases may be the same because of the different peak variance
values, obscuring the speed differences. Thus, estimating the
subject 21 speed should be carried out using normalized variance data.
[0062] Second, the faster the subject 21 moves, the smaller are the
variance increments per sample. If the variance data is noisy, the
diminishing increments mean a lower signal-to-noise ratio (FIG.
14). Since the slope d.sigma..sup.2/dt may have to be estimated to
determine the subject velocity,
.nu..sub.S=.nu..sub.F+c(d.sigma..sup.2/dt), (13)
the number of images needed to maintain the same level of accuracy
may go up with the growing subject (21) speed, because while the
noise remains the same, the underlying focusing error increments
become smaller. The level of noise present in the images may thus
indirectly determine the maximum speed the system 10 can reliably
handle. FIG. 14 is a diagram which illustrates that greater subject
21 speed means a lower signal-to-noise ratio of the noisy variance
data, and thus a need for more sample images to maintain the speed
estimation accuracy.
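Equation (13) can be illustrated with a short sketch: fit the slope of the normalized variance samples by least squares and convert it to a subject speed. The gain c, the sampling period, and the sample values below are hypothetical.

```python
import numpy as np

def subject_speed(var_samples, T_L, v_F, c):
    """Estimate subject speed per equation (13): v_S = v_F + c*(d sigma^2/dt).

    The samples are normalized by their maximum first, so that two
    subjects with different absolute variances (FIG. 13) remain
    comparable; the slope is a least-squares fit over the instants t_n.
    """
    v = np.asarray(var_samples, dtype=np.float64)
    v = v / v.max()                # normalized variance data, per [0061]
    t = np.arange(v.size) * T_L    # t_n = t_1 + (n-1)*T_L with t_1 = 0
    slope = np.polyfit(t, v, 1)[0] # estimate of d(sigma^2)/dt
    return v_F + c * slope

# Hypothetical numbers: 0.5 m/s focus sweep, 10 ms sampling period.
speed = subject_speed([0.2, 0.4, 0.6, 0.8, 1.0], T_L=0.01, v_F=0.5, c=0.01)
```

Adding noise to var_samples degrades the slope estimate, which is exactly why faster subjects, with their smaller per-sample increments, require more images.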
[0063] The subject's speed may be determined from the samples
(i.e., test images) obtained before the system 10 passes through
the first focus, i.e., for t.sub.n.ltoreq.t.sub.F. The number of
these images, N.sub.before, should be such as to allow a reliable
estimation of the slope. The number of samples, N.sub.after, that
need to be collected past the focus point should be such as to
allow the algorithm to safely decide, first, that the passing has
indeed happened and, second, to estimate or reconstruct the slope
to the right of it well enough to determine the time t.sub.F when
the passing took place. N.sub.after is generally smaller than N.sub.before.
[0064] The prediction may then be made. An optical approach to
autofocusing may determine the subject 21 distance from the
relationship relating the distance at which a lens is focused,
d.sub.F, to the values of the lens' zoom, s.sub.Z, and focus,
s.sub.F, servo set points.
d.sub.F=f(s.sub.Z, s.sub.F) (14)
For a given lens and its instrumentation, this focus calibration
function may be fixed. The focus calibration function may be
determined once the system is built and stored as a regression
function of the calibration data. When using the regression
function, the first requirement is to ensure that the lens is properly
focused on the target whose distance is being estimated. This may
explain why there is an interest in determining virtually exactly
the time t.sub.F when the lens focus happens to be aligned with the
subject's eye. Knowing this time allows a recovery of the zoom,
s.sub.Z(t.sub.F), and focus, s.sub.F(t.sub.F), drive positions at
that instant and, consequently, also the subject's distance,
d.sub.S(t.sub.F)=d.sub.F(t.sub.F)=f(s.sub.Z(t.sub.F),
s.sub.F(t.sub.F)), (15)
which may be used as the initial conditions in the equations for
computing an encounter as noted herein. The encounter may be a
future situation when the lens focus and subject 21 are aligned
again, that is,
d.sub.S(t.sub.P.sub.3)=d.sub.F(t.sub.P.sub.3), (16)
where t.sub.P.sub.3 is a yet unknown time when this alignment
occurs, and d.sub.S(t.sub.P.sub.3)=d.sub.F(t.sub.P.sub.3) is a yet
unknown distance from the camera 22, where it is going to take
place. If the system 10 knows the time t.sub.P.sub.3 beforehand,
then it may fire the iris camera at that moment, in contrast to the
first pass through the correct focus that happened at t.sub.F
without the system being aware of it. Also, the system should know
the distance d.sub.F(t.sub.P.sub.3) so that it can get the focus
lens in the right place, if necessary. Another part of the
prediction is to make sure that the predicted action is feasible,
that is, the system has enough time to get everything in place
before the time of encounter t.sub.P.sub.3 arrives.
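The focus calibration function of equation (14) might, for instance, be stored as a regression fitted once at build time. The sketch below uses a plane (linear) model and synthetic calibration data purely for illustration; a real lens would need whatever model matches its measured calibration points.

```python
import numpy as np

def fit_focus_calibration(s_Z, s_F, d_F):
    """Fit d_F = f(s_Z, s_F) to calibration data by least squares.

    Returns a callable focus calibration function mapping the zoom and
    focus servo set points to the focused distance, as in equation (14).
    """
    A = np.column_stack([s_Z, s_F, np.ones(len(d_F))])
    coef, *_ = np.linalg.lstsq(A, d_F, rcond=None)
    return lambda z, f: coef[0] * z + coef[1] * f + coef[2]

# Synthetic calibration table generated from d_F = 2*s_Z + 3*s_F + 1.
zs = np.array([0.0, 1.0, 0.0, 1.0, 2.0])
fs = np.array([0.0, 0.0, 1.0, 1.0, 2.0])
f_cal = fit_focus_calibration(zs, fs, 2.0 * zs + 3.0 * fs + 1.0)

# Equation (15): recover the subject distance from the drive positions
# recorded at the first-focus instant t_F.
d_subject = f_cal(1.0, 2.0)
```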
[0065] To obtain the prediction, the equations of motion (3) with
the terminal condition (16) should be solved. Since the focus lens
velocity has discontinuities, the entire time interval from t.sub.F
to t.sub.P.sub.3 should be broken up into three subintervals
<t.sub.F, t.sub.N>, <t.sub.N, t.sub.P.sub.2> and
<t.sub.P.sub.2, t.sub.P.sub.3>, within which the focus lens
speed is constant and has values .nu..sub.F, 0 and -.nu..sub.F,
respectively. FIG. 4 illustrates the breakup, with a possible
exception. In the Figure, t.sub.N was chosen as the time when the
focus lens finished its forward sweep. This is arbitrary, because
one may envision an approach in which the focus lens keeps moving
even while the system 10 is already going through phase 2, all the
way to time t.sub.P.sub.2. One may note that in phase 3, the focus
lens movement appears away from the camera 22, toward the incoming
subject 21, and thus the lens' speed has a negative sign.
[0066] Phase 3 may be skipped altogether, if executing it would not
offer any significant time improvement over just waiting for the
subject 21 to move into the encounter distance. If this is the
case, then t.sub.P.sub.3=t.sub.P.sub.2.
[0067] The following solution may be generic, with all three phases
present as shown in the FIG. 4. The times t.sub.F, t.sub.N and
t.sub.P.sub.2 may be known to system 10. As it turns out, knowing
the distances d.sub.S(t.sub.F) and d.sub.F(t.sub.F) themselves is
not necessary. The equations describing the motions from t.sub.F to
t.sub.N may be
d.sub.S(t.sub.F)=d.sub.F(t.sub.F),
d.sub.S(t.sub.N)=d.sub.S(t.sub.F)+.nu..sub.S(t.sub.N-t.sub.F) and
d.sub.F(t.sub.N)=d.sub.F(t.sub.F)+.nu..sub.F(t.sub.N-t.sub.F)
(17)
Phase 2 equations may be
d.sub.S(t.sub.P.sub.2)=d.sub.S(t.sub.N)+.nu..sub.S(t.sub.P.sub.2-t.sub.N)
and d.sub.F(t.sub.P.sub.2)=d.sub.F(t.sub.N). (18)
Phase 3 equations may be
d.sub.S(t.sub.P.sub.3)=d.sub.F(t.sub.P.sub.3),
d.sub.S(t.sub.P.sub.3)=d.sub.S(t.sub.P.sub.2)+.nu..sub.S(t.sub.P.sub.3-t.sub.P.sub.2) and
d.sub.F(t.sub.P.sub.3)=d.sub.F(t.sub.P.sub.2)-.nu..sub.F(t.sub.P.sub.3-t.sub.P.sub.2). (19)
Their solution may be the predicted encounter time,
(t.sub.P.sub.3-t.sub.F)=(.nu..sub.F/(.nu..sub.S+.nu..sub.F))
((t.sub.N-t.sub.F)+(t.sub.P.sub.2-t.sub.F)). (20)
As could be expected, the encounter time may be a function of time
increments and thus independent of the absolute value of the times
involved. Thus, one may be free to choose the instant from which
one starts measuring time.
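Equation (20) can be checked numerically. The sketch below computes the predicted encounter time from the formula and, independently, from the phase equations (17)-(19); the speeds and times are hypothetical.

```python
def encounter_time(t_F, t_N, t_P2, v_S, v_F):
    """Predicted encounter time per equation (20). Only time increments
    enter, so the choice of time origin is irrelevant."""
    return t_F + (v_F / (v_S + v_F)) * ((t_N - t_F) + (t_P2 - t_F))

def encounter_time_from_phases(t_F, t_N, t_P2, v_S, v_F):
    """Cross-check: solve d_S(t_P3) = d_F(t_P3) directly from the phase
    equations (17)-(19), taking d_S(t_F) = d_F(t_F) = 0."""
    d_S_P2 = v_S * (t_P2 - t_F)  # subject moves throughout phases 1 and 2
    d_F_P2 = v_F * (t_N - t_F)   # lens focus moved only during phase 1
    # Phase 3: d_S_P2 + v_S*dt = d_F_P2 - v_F*dt, solved for dt.
    return t_P2 + (d_F_P2 - d_S_P2) / (v_S + v_F)

t3_a = encounter_time(0.0, 0.10, 0.15, v_S=0.2, v_F=0.5)
t3_b = encounter_time_from_phases(0.0, 0.10, 0.15, v_S=0.2, v_F=0.5)
```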
[0068] The formula (20) may be valid as long as
t.sub.P.sub.3.gtoreq.t.sub.P.sub.2. This requirement may impose an
upper bound,
(t.sub.P.sub.2-t.sub.F).ltoreq.(.nu..sub.F/.nu..sub.S)
(t.sub.N-t.sub.F), (21)
on the time t.sub.P.sub.2. When t.sub.P.sub.2 reaches its maximum,
no time remains for any forthcoming lens focus motion anymore. This
would be the last time when the system 10 can still take the iris
shot, that is, t.sub.P.sub.3.sub.max=t.sub.P.sub.2.sub.max.
(t.sub.P.sub.3.sub.max-t.sub.F)=(t.sub.P.sub.2.sub.max-t.sub.F)=(.nu..sub.F/.nu..sub.S)(t.sub.N-t.sub.F)
(22)
Equation (22) may also explain why the autofocusing system cycle
generally needs to have phase 3. If the autofocusing system only
passively waited until the subject 21 moved into the right
position, then for .nu..sub.S.fwdarw.0, the time of encounter may
be t.sub.P.sub.3 .sub.max.fwdarw..infin..
[0069] Since the encounter instant computation cannot be initiated
until the test image sequence is completed, a lower bound on
t.sub.P.sub.2 may be added as well.
1<1+((t.sub.P.sub.2.sub.min-t.sub.N)/(t.sub.N-t.sub.F)).ltoreq.(t.sub.P.sub.2-t.sub.F)/(t.sub.N-t.sub.F)=1+((t.sub.P.sub.2-t.sub.N)/(t.sub.N-t.sub.F)).ltoreq.1+((t.sub.P.sub.2.sub.max-t.sub.N)/(t.sub.N-t.sub.F))=.nu..sub.F/.nu..sub.S.
(23)
[0070] The bounds may confirm what has been already established
herein, namely, that the speed of the lens focus distance change
should not be smaller than the subject speed for the continuous
focus sweep autofocusing to work. Additionally, the bounds may also
define what the "real time processing" means in the continuous
focus sweep autofocusing context. The time available for computing
the encounter instant, T.sub.C=t.sub.P.sub.2-t.sub.N, may become
progressively shorter as the speed ratio
.nu..sub.F/.nu..sub.S.fwdarw.1 (24)
and shrinks to zero, that is, t.sub.P.sub.2=t.sub.N, when
.nu..sub.F=.nu..sub.S. That extreme should not be allowed to happen
since the system 10 may need some minimal time, T.sub.C
min=t.sub.P.sub.2 .sub.min-t.sub.N>0, to do the computation.
Thus, the minimal speed ratio should be greater than a threshold
whose value depends on how much time the system needs for
computation.
(.nu..sub.F/.nu..sub.S).sub.min=1+(T.sub.C.sub.min/(t.sub.N-t.sub.F))
(25)
Since .nu..sub.F is fixed by the optics design, the inequality (25)
may limit the maximum subject speed that the autofocusing can
handle.
.nu..sub.Smax=(1/(1+(T.sub.C min/(t.sub.N-t.sub.F)))).nu..sub.F.
(26)
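Equation (26) translates directly into a small feasibility helper; the sweep speed, computation time, and sweep duration below are hypothetical numbers.

```python
def max_subject_speed(v_F, T_C_min, t_N, t_F):
    """Maximum subject speed the autofocusing can handle, equation (26):
    v_Smax = v_F / (1 + T_C_min / (t_N - t_F))."""
    return v_F / (1.0 + T_C_min / (t_N - t_F))

# 0.5 m/s sweep, 20 ms minimal computation time, 100 ms phase-1 duration.
v_max = max_subject_speed(v_F=0.5, T_C_min=0.02, t_N=0.1, t_F=0.0)
```

Since T_C_min is strictly positive, v_max always falls strictly below the focus sweep speed v_F, matching the bound established above.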
The difference between equations (20) and (22),
(t.sub.P.sub.3-t.sub.F)-(t.sub.P.sub.3.sub.max-t.sub.F)=(.nu..sub.F/(.nu..sub.S+.nu..sub.F))((t.sub.N-t.sub.F)+(t.sub.P.sub.2-t.sub.F))-(.nu..sub.F/.nu..sub.S)(t.sub.N-t.sub.F)=(.nu..sub.F/(.nu..sub.S+.nu..sub.F))((t.sub.P.sub.2-t.sub.F)-(.nu..sub.F/.nu..sub.S)(t.sub.N-t.sub.F)).ltoreq.0,
which, divided by (t.sub.N-t.sub.F) and using T.sub.C=t.sub.P.sub.2-t.sub.N, becomes
(.nu..sub.F/(.nu..sub.S+.nu..sub.F))(1+(T.sub.C/(t.sub.N-t.sub.F))-(.nu..sub.F/.nu..sub.S)).ltoreq.0,
(27)
may show that if one wants to maximize the speedup through the
forthcoming focus lens motion, then one should strive to make the
computation time T.sub.C as short as possible.
[0071] As a timing device, one may use the computer clock, design
the code so that the computer's components execute in known times,
or use timing derived from the stepping of the focus lens drive.
[0072] The solution (20) may become known to the system 10
sometime during phase 2. Before accepting it, the system should
check if it is not too close to the current time (of which the
system is aware) to be realizable, given the system components'
timing constraints. If the time is far out, it may make sense to
actually execute phase 3. If it is close, however, it may
make sense to drop phase 3 and recompute the predicted encounter
time without it.
[0073] Once the autofocusing algorithm decides on t.sub.P.sub.3, it
may determine the set point for the focus lens drive,
s.sub.F(t.sub.P.sub.3). The encounter position may be obtained from
the focus calibration function,
d.sub.S(t.sub.P.sub.3)=d.sub.F(t.sub.P.sub.3)=f(s.sub.Z(t.sub.P.sub.3),
s.sub.F(t.sub.P.sub.3)). (28)
[0074] In FIG. 2, the three phases of the predictive
autofocusing cycle were introduced. With the detailed discussion of
the cycle herein, the autofocusing concept may be revisited.
[0075] The algorithm may start its clock at an arbitrarily chosen
time, which may be marked as t.sub.0, because its choice has no
effect on the end result. A more important finding is, however, that the
encounter time computation appears to make no direct use of
anything that happened before the time, t.sub.F, of the first
passing through the focus. True, the time as well as the subject 21
speed at that time can be established only in retrospect, no sooner
than at t.sub.N, and for that purpose, the whole sequence of N test
images had to be taken, most of them before the passing through the
focus. In this respect, the diagrams sketched in FIGS. 2, 3 and 4
depict the scenarios with standing or slowly moving subjects. If
the subject 21 is moving faster and the subject's speed becomes
comparable to that of the focus lens, it may take the lens focus
much longer to catch up with the subject, and the number of images
taken before the passing will be large. Phase 1 may be much longer,
but a question may be whether this fact will actually put the
optical autofocusing at such a great disadvantage compared to other
options such as the use of lidar.
[0076] First, one may note that phases 2 and 3 may exist in any
predictive autofocusing concept. The formulae used in phase 2 to
compute the encounter specifics may be slightly different from
those derived herein, but this should be an inconsequential
difference. Regardless of the approach used, at the time
t.sub.P.sub.1, the system 10 should know the subject's position and
speed to enable the encounter calculations, and it does not matter
whether position and speed are absolute (as in the case of lidar)
or relative (as in the optical autofocusing). In other words,
different predictive autofocusing approaches may substantially
differ just in their implementation of phase 1.
[0077] It may be the case that a lidar can provide the position and
speed measurements of the subject in a shorter time. It might seem,
then, that as far as the agility is concerned, the optical
autofocusing appears much slower compared to the approaches based
on the ranging. An actual benefit of ranging approaches, however,
may not be as great as a first look may suggest. A reason is that
in actual operation, much of the test image sequence may be taken
during transitioning the SLS camera 22 from one subject 21 to the
next, an operation that is generally present regardless of how the
camera is going to be focused.
[0078] An example single lens splitter camera 22 may provide high
quality iris images that can be used for identification and/or
tracking of subjects or individuals 21. A camera system may include
a focus camera and an iris camera. The latter may be referred to as
sub-cameras. The focus camera may be sensitive to ambient light or
some spectrum thereof, while the iris camera may be sensitive to
infrared or other spectrum of light. The focus camera and the iris
camera may share an optical path that includes one or more lenses
that capture light, as well as a beam splitter or other optical
element that directs light of some wavelengths to the focus camera
and allows other wavelengths to reach the iris camera.
[0079] FIG. 15 is a diagram of an illustrative example camera
system 22, even though other kinds of camera system 22 arrangements
may be used herein with system 10. Camera system 22 may include a
focus camera 52 and an iris camera 54. In some instances, focus
camera 52 may have a considerably lower resolution than iris camera
54, but this is not necessarily required. A lens 56 may be used to
provide focus camera 52 with a field of view that is similar to a
field of view of iris camera 54. Lens 56 may be excluded, depending
on the particular specification and/or configuration of the focus
camera 52 and/or the iris camera 54.
[0080] Focus camera 52 may be sensitive to ambient light or some
spectrum thereof. Focus camera 52 may be any suitable camera that
has a sufficiently high frame rate, allows region of interest
selection and offers sensitivity to perform an auto-focusing
function, such as, for example, a PixeLink.TM. PL-A741 camera.
Having a relatively high frame rate may mean that focus camera 52
may have a relatively lower resolution, but this is not always the
case. In some cases, focus camera 52 may have a frame rate of at
least about 100 frames per second, or a frame every ten
milliseconds.
[0081] It is contemplated that iris camera 54 may be any suitable
camera that is capable of acquiring an iris image in a desired
light spectrum and with a desired quality, such as, for example, a
REDLAKE.TM. ES11000 or an ES16000 digital camera. The light spectra
used may include, but are not limited to, visible and infrared
wavelengths. The desired image quality may depend on an intended
security application. For example, higher security level
applications typically require higher image quality. The image
quality is typically dependent on the entire optical path including
both the camera and its optics. For some applications, the minimum
iris image quality for various security levels is defined in ANSI
standard INCITS M1/03-0590.
[0082] Camera system 22 may include a lens 58. While a single lens
58 is illustrated, it will be recognized that in some applications,
depending for example on a distance between camera system 22 and a
possible subject 21, or perhaps depending at least in part on the
particular optics, two or more lenses 58 may be deployed, as
desired. Lens or lenses 58 may be configured to provide any desired
degree of magnification.
[0083] A beam splitter 62 or other optical element may be deployed
downstream of lens 58. Beam splitter 62 may be configured to permit
some wavelengths of light to pass straight through while other
wavelengths of light are deflected at an angle as shown. In some
instances, beam splitter 62 may be configured to permit infrared
light such as near infrared light (about 700 to about 900
nanometers) to pass through beam splitter 62 towards iris camera 54
while deflecting visible light (about 400 to about 700 nanometers)
or some spectrum thereof towards focus camera 52.
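The wavelength routing described here can be summarized as a tiny lookup; the sharp 700 nm boundary is a simplification, since a real beam splitter transitions gradually, and the band edges follow the approximate values above.

```python
def route_wavelength(nm):
    """Which sub-camera receives light of the given wavelength (nm) in
    this simplified beam splitter model: visible light is deflected to
    the focus camera, near infrared passes through to the iris camera."""
    if 400 <= nm < 700:
        return "focus"
    if 700 <= nm <= 900:
        return "iris"
    return "blocked"  # outside both bands in this simplified model
```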
[0084] As a result, focus camera 52 and iris camera 54 may see the
same image, albeit in different wavelengths, and may be considered
as sharing an optical path, i.e., through lens 58. Focus camera 52
may be considered as having an optical axis 64 while iris camera 54
may be considered as having an optical axis 66. In some cases,
optical axis 64 is perpendicular or at least substantially
perpendicular to optical axis 66, but this is not required. Rather,
this may be a feature of the optical properties of beam splitter
62. In some instances, a zoom lens 58 may be considered as being
disposed along optical axis 66. In some cases, beam splitter 62 may
be disposed at or near an intersection of optical axis 64 and
optical axis 66, but this is not necessarily required.
[0085] Focus camera 52 may be used to move or focus a lens that is
part of lens 58. Since focus camera 52 and iris camera 54 see the
same image, by virtue of their common optical path, it should be
recognized that focusing lens 58 via focus camera 52 may provide an
initial focusing for iris camera 54, under ambient lighting
conditions. In some instances, focus camera 52 may move the focus
lens within lens 58 using one or more servo motors under the
control of any suitable auto-focusing algorithm. In some cases, a
controller (not shown in FIG. 15) may orchestrate the auto-focusing
operation.
[0086] Because light of differing wavelengths is refracted
differently as it passes through particular materials (glass lenses
and the like, for example), focusing lens 58 via one wavelength of
light may not provide a precise focus for iris camera 54 at another
wavelength of light. In some cases, it may be useful to calculate
or otherwise determine a correction factor that may be used to
correct the focus of lens 58 after lens 58 has been auto-focused
using the focus camera 52, but before the iris camera 54 captures
an image. Information regarding such correction may be found in,
for example, U.S. patent application Ser. No. 11/681,251, filed
Mar. 2, 2007. U.S. patent application Ser. No. 11/681,251, filed
Mar. 2, 2007, is hereby incorporated by reference.
[0087] FIG. 16 is another schematic illustration of camera system
22, showing some of the functions and interactions of the
individual components of camera system 22. Focus camera 52 may
perform several tasks, including for example, finding a focus
target point (generally indicated at reference number 68) and auto
focusing (generally indicated at reference number 70).
[0088] Once camera system 22 is pointed at a face, the focus camera
52 (or a separate controller or the like) is tasked with finding a
focus target within an image seen or sensed by focus camera 52. In
some cases, the focus target may be a predefined specific point on
a face, such as an eye pupil or the nose bridge. Once the focus
target is located at
functionality 68 and focus camera 52 is precisely autofocused on it
via functionality 70, it may be necessary to provide a focus
correction pertaining to the difference in focal length between the
ambient light or some spectrum thereof used to auto-focus the lens,
and the wavelength or wavelengths to be captured by the iris camera
54, as indicated at item 70. If or when the subject moves, such as
by walking, bending, turning its head, and the like, focus camera
52 may be tasked to focus lens 58 in an ongoing process. Once focus
has been achieved, camera system 22 may provide an in-focus flag 72
to initiate iris camera shutter control 74, and in some cases, a
flash controller. Iris image data 55 may be provided from camera
54.
[0089] In some situations, camera system 22 may be deployed in a
position that permits detection and identification of people who
are standing or walking in a particular location such as a hallway,
airport concourse, and the like. FIG. 17 is a diagram showing how
camera system 22 may track a moving individual. In this drawing, an
individual is walking or otherwise moving along walking path 76.
Camera system 22 may lock onto the individual at point 78 and be
able to track the individual until the individual reaches point 80, or vice
versa. Camera system 22 may be configured to lock onto and obtain
sufficient iris images in the time between point 78 and point 80,
and to identify the individual.
[0090] The present illustration makes several assumptions. For
example, a steering angle of plus or minus 22.5 degrees (or a total
path width of about 45 degrees) may be assumed. It may also be
assumed, for purposes of this illustration, that the individual is
unaware of being identified and thus is being uncooperative. As a
result, the individual happens to walk in a manner that increases
the relative angle between the camera and the individual. The
person may be detected at a distance of about 2 to 5 meters in this
example.
[0091] FIG. 18 shows digital tilt and pan within a field of view of
iris camera 54. In this example, iris camera 54 may be capable of
providing an image having about 11 megapixels. At a particular
distance, iris camera 54 may have a field of view that is indicated
by box 82. Box 82 is drawn to scale relative to a subject or
individual 21. A smaller box 86 shows the relative field of view necessary to
view the individual's iris. It can be seen that unless the
individual 21 moves excessively, iris camera 54 may digitally tilt
and/or pan the image to track box 86 within larger box 82 without
any need to mechanically adjust its physical pan and tilt. The
specific numbers of FIG. 18 may pertain to a particular system
design parameter set that, according to the ANSI standard
referenced herein, is suitable for a lower security
application.
[0092] It may be recognized that digital tilt and pan permit a
camera to remain pointed at a face without requiring mechanical
re-positioning as long as a desired portion of the image, such as a
face or a portion of a face, remains within the viewable image.
Because focus camera 52 and iris camera 54 have about the same
field of view, they may have about the same digital tilt and pan. A
focus target algorithm may find the focus target (such as an eye
pupil or nose bridge) within the focus camera image and then
precisely focus on it.
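Digital tilt and pan as described above amounts to sliding a crop window inside the full sensor frame. The sketch below clamps the iris window (box 86) around the tracked eye so it stays within the frame (box 82); the pixel dimensions are illustrative assumptions.

```python
def digital_pan_tilt(eye_xy, roi_wh, frame_wh):
    """Top-left corner of the iris region of interest, centered on the
    tracked eye but clamped to remain inside the sensor frame, so no
    mechanical pan or tilt is needed while the clamp does not saturate."""
    ex, ey = eye_xy
    rw, rh = roi_wh
    fw, fh = frame_wh
    x = min(max(ex - rw // 2, 0), fw - rw)
    y = min(max(ey - rh // 2, 0), fh - rh)
    return x, y

# Eye near the frame corner: the window shifts to stay fully inside.
roi = digital_pan_tilt(eye_xy=(3990, 100), roi_wh=(640, 480),
                       frame_wh=(4008, 2672))
```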
[0093] FIG. 19 is a flow diagram showing an illustrative but
non-limiting approach that may be carried out using camera system
22 (FIG. 15). At block 88, the lens may be focused, often under
ambient light or some spectrum thereof. In some instances, lens 58
may be focused via an iterative auto-focus algorithm using focus
camera 52, sometimes under ambient lighting or some selected
spectrum thereof. Control may pass to a block 90, where an iris
image is captured. In some instances, an iris image may be captured
using iris camera 54, which could be timed with a flash that
produces infrared light or any other light having a desired
spectrum.
[0094] FIG. 20 is a flow diagram showing an illustrative but
non-limiting approach that may be carried out using camera system
22 (FIG. 15). At block 87, a focus target may be located within a
focus image. At block 88, the lens may be focused at it. In some
instances, lens 58 may be auto-focused via an iterative auto-focus
algorithm using focus camera 52 under ambient lighting or some
selected spectra thereof. Control may then be passed to block 92,
where the lens is adjusted. In some cases, the focus of lens 58 may
be adjusted to correct for the differences between, for example,
ambient and infrared light. Then, at block 90, an iris image may be
captured. In some instances, an iris image may be captured using
iris camera 54, which can be timed with a flash that produces
infrared or any other desired light.
[0095] FIG. 21 is a flow diagram showing an illustrative but
non-limiting approach that may be carried out using camera system
22 (FIG. 15). At block 94, light that may be entering camera system
22 is split into an ambient light or some spectrum thereof and an
infrared light portion. Control may pass to block 96, where the
ambient light portion is directed into or towards focus camera 52,
and the infrared light portion is directed into or towards iris
camera 54. In some cases, these steps may be achieved by beam
splitter 62 (FIG. 15).
[0096] At block 98, a focus target may be found within the focus
camera image. Image data from a small area surrounding the focus
target can be extracted from the focus camera image at block 100,
and the extracted data may be used to precisely auto focus the
focus camera 52. Control may pass to block 102, where the focus
setting is corrected, if necessary, for any differences between the
light spectrum used for focusing and the light spectrum used for
image acquisition by iris camera 54. Control may pass to block 104,
where an iris image is captured using, for example, infrared light
sometimes aided by a flash discharge.
[0097] In the present specification, some of the matter may be of a
hypothetical or prophetic nature although stated in another manner
or tense.
[0098] Although the invention has been described with respect to at
least one illustrative example, many variations and modifications
will become apparent to those skilled in the art upon reading the
present specification. It is therefore the intention that the
appended claims be interpreted as broadly as possible in view of
the prior art to include all such variations and modifications.
* * * * *