U.S. patent application number 12/207272 was filed with the patent office on 2009-02-26 for system and method for auto-focusing an image.
This patent application is currently assigned to Ikonisys, Inc. Invention is credited to Joel M. Recht.
Application Number: 20090052795 (Appl. No. 12/207272)
Family ID: 32850081
Filed Date: 2009-02-26

United States Patent Application 20090052795
Kind Code: A1
Recht; Joel M.
February 26, 2009
SYSTEM AND METHOD FOR AUTO-FOCUSING AN IMAGE
Abstract
A method and system for auto-focusing is disclosed in which the
number of fractal pixels having a fractal dimension within a
predetermined range is determined. The focused image is determined
by adjusting the image to maximize the fractal pixel count.
Alternatively, the focused image may be selected from a set of
images, the image having the largest fractal pixel count selected
as the focused image.
Inventors: Recht; Joel M. (Monsey, NY)
Correspondence Address:
    KELLEY DRYE & WARREN LLP
    400 ATLANTIC STREET, 13TH FLOOR
    STAMFORD, CT 06901, US
Assignee: Ikonisys, Inc. (New Haven, CT)
Family ID: 32850081
Appl. No.: 12/207272
Filed: September 9, 2008
Related U.S. Patent Documents

Application Number | Filing Date  | Patent Number | Continuation (child)
10958512           | Oct 5, 2004  | 7424165       | 12207272
10368055           | Feb 14, 2003 | 6876776       | 10958512
Current U.S. Class: 382/255
Current CPC Class: G06T 2207/10056 20130101; G06T 5/30 20130101; G06T 5/002 20130101
Class at Publication: 382/255
International Class: G06K 9/40 20060101 G06K009/40
Claims
1. A computer program embodied in a computer readable medium for
automatically selecting an in-focus image of an object, comprising:
obtaining a set of image slices of an object at different focal
distances; determining a fractal pixel count of each image slice;
and selecting the image slice having the highest fractal pixel
count as the in-focus image of the object.
Description
CROSS REFERENCE
[0001] This application is a continuation of U.S. patent
application Ser. No. 10/958,512, filed on Oct. 5, 2004, which is a
continuation of U.S. patent application Ser. No. 10/368,055, filed
on Feb. 14, 2003, the disclosures of which are incorporated herein by
reference in their entirety.
FIELD OF THE INVENTION
[0002] The present invention relates to digital imaging. More
specifically, the invention relates to systems and methods for
automated focusing of an image.
BACKGROUND OF THE INVENTION
[0003] It is well known that manual evaluation of biological
samples is both slow and highly susceptible to error. It is also
well known that automating the sample evaluation both increases the
sample evaluation rate and reduces error.
[0004] FIG. 1 is a schematic of an automated scanning optical
microscopy system. The automated scanning optical microscopy system
100 includes an optical microscope modified to automatically
capture and save images of a sample 105 placed on a sample holder
107 such as, for example, a slide, which in turn is supported by a
stage 110. The optical components include an illumination source
120, objective lens 124, and camera 128. Housing 130 supports the
optical components. The design and selection of the optical
components and housing are known to one of skill in the optical art
and do not require further description.
[0005] The automated system 100 includes a controller that enables
the stage 110 supporting the slide 107 to place a portion of the
sample 105 in the focal plane of the objective lens. The camera 128
captures an image of the sample and sends the image signal to an
image processor for further processing and/or storage. In the
example shown in FIG. 1, the image processor and controller are
both housed in a single PC 104 although other variations may be
used. The mechanical design of the stage 110 is known to one of
skill in the mechanical arts and does not require further
description.
[0006] The controller may also control a sample handling subsystem
160 that automatically transfers a slide 109 between the stage 110
and a storage unit 162.
[0007] The controller must also be capable of positioning the
sample such that the image produced by the camera 128 is in focus.
In addition, the controller must be able to position the sample
very rapidly in order to reduce the time required to capture an
image.
[0008] One method of auto-focusing an image employs a laser range
finder. The controller calculates the distance to a surface based
on the signal from the laser range finder. The advantage of such a
system is that it is very fast, thereby increasing the scan rate of
the automated system. The disadvantage of such a system is that it
requires additional hardware that may interfere with the optical
performance of the automated system. A second disadvantage of such
a system is the inability to focus directly on the feature of
interest in the sample. The signal from the laser range finder is
usually based on the highest reflective surface encountered by the
laser beam. This surface is usually the cover slip or the slide and
not the sample.
[0009] Another method of auto-focusing an image employs image
processing to determine when the image is in focus or,
alternatively, select the most focused image from a set of images
taken at different sample-objective lens distances. The advantage
of using image processing to auto-focus is that it can focus
directly on the sample instead of the slide or cover slip. The
disadvantage of image processing auto-focus is that it usually
requires large computational resources that may limit the scan
rate of the automated system. The large computational requirement
arises because prior art algorithms based on maximizing the high
frequency power spectrum of the image or on detecting and
maximizing edges must perform large numbers of computations.
[0010] Therefore, there remains a need for a rapid auto-focusing
method that does not require large computational resources and can
directly focus the sample.
SUMMARY OF THE INVENTION
[0011] One embodiment of the present invention provides a method
for automatically focusing an object comprising: acquiring at least
one image of the object, the image characterized by a focal
distance and comprising at least one pixel; determining a fractal
pixel count of pixels having a fractal dimension within a
predetermined range; and selecting an optimal focal distance from
the image having the largest fractal pixel count.
[0012] Another embodiment of the present invention provides an
apparatus for automatically focusing an object comprising: an image
capture sensor for capturing an image of the object, the captured
image characterized by a focal distance; a fractal estimator
coupled to the image capture sensor, the fractal estimator adapted
to estimate a fractal dimension of at least one pixel of the
captured image; and a focus controller coupled to the fractal
estimator, the focus controller adapted to adjust the focal
distance of the object based on maximizing the number of pixels
having an estimated fractal dimension within a predetermined
range.
[0013] Another embodiment of the present invention provides an
apparatus for automatically focusing an object comprising: an image
capture sensor for capturing an image of the object, the captured
image characterized by a focal distance; means for determining a
fractal pixel count of the captured image; and means for
controlling the focal distance based upon maximizing the fractal
pixel count of the captured image.
[0014] Another embodiment of the present invention is directed to a
computer program product for use with an automated microscopy
system, the computer program product comprising: a computer usable
medium having computer readable program code means embodied in the
computer usable medium for causing the automated microscopy system
to automatically focus an object, the computer program product
having: computer readable code means for causing a computer to
acquire an image of the object at a focal distance; computer
readable code means for causing the computer to determine a fractal
pixel count of the acquired image; and computer readable code means
for causing the computer to adjust the focal distance of the object
based upon maximizing the fractal pixel count of the acquired
image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The invention will be described by reference to the
preferred and alternative embodiments thereof in conjunction with
the drawings in which:
[0016] FIG. 1 is a schematic diagram of a prior art automated
scanning optical microscopy system;
[0017] FIG. 2 is a flowchart of an embodiment of the present
invention;
[0018] FIG. 3 is a flowchart illustrating the generation of a
boundary image in an embodiment of the present invention;
[0019] FIG. 4a is a diagram illustrating an L=3 structuring element
used in one embodiment of the present invention;
[0020] FIG. 4b is a diagram illustrating an L=3 structuring element
used in another embodiment of the present invention;
[0021] FIG. 5 is a graph of C.sub.d versus slice number produced by
the embodiment shown in FIG. 2.
DETAILED DESCRIPTION OF THE PREFERRED AND ALTERNATIVE
EMBODIMENTS
[0022] An underlying idea supporting current auto-focusing
algorithms is that an image in focus will exhibit the largest
pixel-to-pixel variations (the difference of pixel values between
adjacent pixels) whereas an out-of-focus image will blur, or
reduce, the pixel-to-pixel variations in the image. Autocorrelation
or power spectrum algorithms are designed to measure and/or
maximize the high frequency (variations occurring over a small
number of pixels) component of an image. Autocorrelation, however,
is a computationally intensive process that is prohibitive when
considered for high scanning rate automated optical microscopy.
[0023] Maximum gradient methods avoid performing an autocorrelation
by detecting edges and maximizing the variation across the edge.
The maximum gradient method calculates the differences between
adjacent pixels along a pre-selected direction and several
directions may be used to find and maximize the pixel gradient
across edges. The ability of the maximum gradient method to find
the correct focus decreases as the contrast and/or brightness of
the image decreases because decreasing contrast or brightness also
decreases the differences between pixels.
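The brightness sensitivity described above can be seen in a toy focus metric. The sketch below is a generic illustration, not a formula from the patent: the function name and the squared-difference form are assumptions, chosen only to show how a gradient-style score scales with contrast.

```python
import numpy as np

def gradient_focus_score(img):
    """Prior-art style focus metric: sum of squared differences
    between adjacent pixels. Larger values mean a sharper image.
    (Illustrative only; not a formula given in the patent.)"""
    a = img.astype(float)
    dx = a[:, 1:] - a[:, :-1]   # horizontal pixel-to-pixel differences
    dy = a[1:, :] - a[:-1, :]   # vertical pixel-to-pixel differences
    return float((dx ** 2).sum() + (dy ** 2).sum())
```

Because the score is built directly from pixel differences, halving the image brightness quarters the score, which is exactly the weakness the text attributes to maximum gradient methods in low light.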
[0024] The inventor has surprisingly discovered a method for
auto-focusing that requires neither a direct calculation of the
pixel-to-pixel variations of the maximum gradient method nor the
computationally intensive autocorrelation of the power spectrum
methods. Instead, the present invention analyzes each image at
different scale lengths to calculate a fractal dimension for each
pixel in the image. Furthermore, the fractal dimension is estimated
using the pixel values of the neighborhood pixels instead of a
count of the neighborhood pixels. The fractal dimension is then
used to determine if the image is a focused image.
[0025] FIG. 2 is a flow diagram of one embodiment of the present
invention. An image, I.sub.0, is read in step 205. The image may be
read directly from a camera or may be read from memory. In one
embodiment, the controller captures several images of the sample at
various sample-objective lens distances and stores each captured
image in an appropriate memory device such as a hard drive, an
optical drive, or semiconductor memory. In an alternative
embodiment, after each image is captured, the process shown in FIG.
2 is completed and only images meeting criteria described below are
saved to a memory device.
[0026] In step 210, a first boundary image, I.sub.B1, is generated
from I.sub.0 and stored. A second boundary image, I.sub.B2, is
generated from I.sub.0 and stored in step 215.
[0027] FIG. 3 is a flow diagram illustrating the generation of each
boundary image, I.sub.B. An erosion image, E.sup.L, is generated
from the captured image, I.sub.0, and stored in step 310. A
dilation image, D.sup.L, is generated from I.sub.0 and stored in
step 320. In step 330, the boundary image, I.sub.B, is generated by
subtracting the erosion image from the dilation image, i.e.,
I.sub.B=D.sup.L-E.sup.L. The superscript, L, in E.sup.L and D.sup.L
refers to the size, or scale, of the structuring element used
to perform the erosion or dilation, respectively.
[0028] The structuring element may be represented by an L.times.L
matrix comprised of ones and zeros. The structuring element is
characterized by an origin pixel and a neighborhood. The
neighborhood comprises all the matrix elements that are set to one
and is contained within the L.times.L matrix. An image is generated
by calculating a pixel value for the pixel at the origin of the
structuring element based on the pixel values of the pixels in the
neighborhood of the structuring element. In the case of erosion,
the pixel value of the origin pixel is set to the minimum of the
pixel values in the neighborhood. Dilation, in contrast, sets the
pixel value to the maximum of the pixel values in the neighborhood.
In one embodiment, the neighborhood is coextensive with the
structuring element in that the L.times.L matrix is comprised of all
ones, as shown in FIG. 4a for an L=3 structuring element. In
another embodiment, the neighborhood is less than the structuring
element in that the L.times.L matrix includes at least one zero. In
another embodiment, the neighborhood is a "cross" or "plus"
centered on the origin of the structuring element, as shown in FIG.
4b for an L=3 structuring element.
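The erosion, dilation, and subtraction of steps 310-330 can be sketched as follows, assuming the square all-ones neighborhood of FIG. 4a. The function name and the edge-replication padding at the image borders are assumptions; the patent does not specify border handling.

```python
import numpy as np

def boundary_image(img, L):
    """Boundary image I_B = D^L - E^L at scale L, using a square
    all-ones L x L neighborhood (the FIG. 4a embodiment).
    Borders are handled by edge replication (an assumption)."""
    h = L // 2
    padded = np.pad(img, h, mode="edge")
    eroded = np.empty(img.shape, dtype=int)
    dilated = np.empty(img.shape, dtype=int)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            window = padded[r:r + L, c:c + L]
            eroded[r, c] = window.min()   # erosion: neighborhood minimum
            dilated[r, c] = window.max()  # dilation: neighborhood maximum
    return dilated - eroded
```

On a constant image the dilation and erosion coincide, so the boundary image is zero everywhere; nonzero values appear only where the neighborhood straddles an intensity change.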
[0029] Referring to FIG. 2, step 215 generates a second boundary
image, I.sub.B2, at a different scale, L.sub.2, using a different
size structuring element than the structuring element used to
generate I.sub.B1. The selection of the scale for both boundary
images may depend on the feature size of the object of interest,
the computational limitations of the processor, and other such
factors as is apparent to one of skill in the art. In one
embodiment, L.sub.1 and L.sub.2 are selected to maximize the
difference between L.sub.1 and L.sub.2 under constraints such as
those identified above. In one embodiment, L.sub.1 may be chosen
from the group consisting of greater than 2, 3, 4, 5, and greater
than 5. In a preferred embodiment, the scale of I.sub.B1 is set to
L.sub.1=3 because L=3 represents the minimum, non-trivial balanced
(in the sense that the origin is centered in the structuring
element) structuring element. L.sub.2 is selected such that L.sub.2
is greater than L.sub.1, or, stated differently, the ratio,
R=L.sub.2/L.sub.1>1. In one embodiment, R is in the range
selected from a group consisting of 1-4, 4-8, 8-16, and greater
than 16. In a preferred embodiment, R=7.
[0030] The fractal dimension, d.sub.p, for each pixel in I.sub.0 is
estimated from the boundary images I.sub.B1 and I.sub.B2 in step
220. The fractal dimension for each pixel may be estimated by the
equation (1):
d.sub.p=log(N.sub.2/N.sub.1)/log(L.sub.2/L.sub.1) (1)
[0031] where N.sub.2 represents the sum of the pixel values in the
neighborhood of the structuring element centered on the pixel in
I.sub.B2 and N.sub.1 represents the sum of the pixel values in the
neighborhood of the structuring element centered on the pixel in
I.sub.B1.
[0032] In an alternative embodiment, the fractal map is generated
directly from I.sub.0. The fractal dimension for a pixel is
estimated by centering an L.sub.1.times.L.sub.1 structuring element
on the pixel and summing the pixel values of the pixels within the
structuring element to form a first sum, N.sub.1. A second
structuring element of size L.sub.2.times.L.sub.2, where
L.sub.2>L.sub.1, is centered on the pixel and a second sum,
N.sub.2, of the pixel values of the pixels within the second
structuring element is calculated. The fractal dimension of the
pixel is estimated using the equation
d.sub.p=log(N.sub.2/N.sub.1)/log(L.sub.2/L.sub.1) (2)
[0033] where d.sub.p is the fractal dimension of the pixel in
I.sub.0, N.sub.2 is the sum of the pixel values in the
L.sub.2.times.L.sub.2 structuring element, N.sub.1 is the sum of
the pixel values in the L.sub.1.times.L.sub.1 structuring element,
and L.sub.2 and L.sub.1, are the sizes (in pixels) of the
respective structuring elements.
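The direct estimate of equation (2) can be sketched as below. The defaults L1=3 and L2=21 reflect the preferred L.sub.1=3 and ratio R=7; the function names, the edge padding, and the mapping of zero-sum windows to d=0 are assumptions the patent does not specify.

```python
import numpy as np

def fractal_dimension_map(img, L1=3, L2=21):
    """Per-pixel fractal dimension estimate per equation (2):
    d_p = log(N2/N1) / log(L2/L1), where N_k is the sum of pixel
    values in an L_k x L_k window centered on the pixel.
    L1=3, L2=21 (R=7) follow the preferred embodiment."""
    def window_sum(a, L):
        h = L // 2
        p = np.pad(a, h, mode="edge").astype(float)  # border handling assumed
        out = np.empty(a.shape)
        for r in range(a.shape[0]):
            for c in range(a.shape[1]):
                out[r, c] = p[r:r + L, c:c + L].sum()
        return out
    n1 = window_sum(img, L1)
    n2 = window_sum(img, L2)
    with np.errstate(divide="ignore", invalid="ignore"):
        d = np.log(n2 / n1) / np.log(L2 / L1)
    # Pixels with a zero-sum window get d = 0 (an assumption).
    return np.nan_to_num(d, posinf=0.0, neginf=0.0)
```

As a sanity check, a uniform nonzero image gives N.sub.2/N.sub.1 = (L.sub.2/L.sub.1).sup.2 everywhere, so the estimate is exactly 2, the dimension of a filled plane.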
[0034] The form of equation (1) clearly shows that the fractal
dimension is estimated by taking ratios of pixel values and
therefore should provide a more robust method than maximum gradient
methods of identifying out-of-focus images in low light or low
contrast conditions. Furthermore, it is believed that the use of
sums in N.sub.1 and N.sub.2 reduces the statistical variations that
may be expected in low light conditions.
[0035] The inventor has discovered that images containing many
pixels having a fractal dimension within a range of values tend to
be more in-focus than images containing few pixels having fractal
dimensions within the range.
[0036] The predetermined range of fractal dimensions may be
determined by one of skill in the art by visually examining a group
of images and selecting the range such that at least one image but
not all the images are considered to be nearly in-focus. Setting
the range narrowly may exclude all the images whereas setting the
range broadly reduces the effectiveness of this test. In one
embodiment, the predetermined range is between 1 and 2. In a
preferred embodiment, the predetermined range is chosen from a
group consisting of 1-1.25, 1.25-1.5, 1.5-1.75, 1.75-2, 1.5-2, and
1.6-2.
[0037] In one embodiment, the predetermined range is determined by
an operator before auto-focusing operations. In an alternate
embodiment, the predetermined range may be set to a default range
and dynamically adjusted during auto-focusing operations to produce
at least one in-focus image.
[0038] After d.sub.p is estimated for each pixel in I.sub.0, a
count of the fractal pixels in the image is performed in step 230.
A fractal pixel is a pixel having a fractal dimension within the
predetermined range. The fractal pixel count, C.sub.d, is the
number of fractal pixels in the image.
[0039] After C.sub.d is determined for the captured image, step 235
checks for additional images. If there are additional images,
control jumps back to step 205 to repeat the process for the
additional image. If there are no more additional images, the focus
image is determined in step 240 by selecting the image having the
largest C.sub.d from the set of images.
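Steps 230-240 reduce to a count and an argmax, sketched below assuming per-pixel fractal dimension maps have already been computed for each slice. The function names and the default range 1.5-2 (one of the preferred ranges listed above) are illustrative choices.

```python
import numpy as np

def fractal_pixel_count(d_map, lo=1.5, hi=2.0):
    """C_d: number of pixels whose estimated fractal dimension
    falls within the predetermined range [lo, hi] (step 230)."""
    return int(np.count_nonzero((d_map >= lo) & (d_map <= hi)))

def select_in_focus(d_maps, lo=1.5, hi=2.0):
    """Step 240: index of the slice with the largest C_d."""
    counts = [fractal_pixel_count(d, lo, hi) for d in d_maps]
    return int(np.argmax(counts))
```

Applied to a stack of slices, this picks the slice whose fractal pixel count peaks, as in the FIG. 5 example where slice 5 is selected.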
[0040] FIG. 5 is a graph illustrating the selection of the in-focus
image from a set of images taken at different focal distances. In
the example shown in FIG. 5, 9 slices, or images taken at
progressively larger focal distances are numbered from 1 to 9 on
the horizontal axis. The fractal pixel count for each slice is
shown on the ordinate axis. In this example, slice 5 contains the
highest number of fractal pixels of the 9 slices and is selected
as the in-focus image. The slice 5 image 501 is shown in FIG. 5,
along with the slice 3 image 505, which is an out-of-focus image. A
fractal dimension map 502 shows the estimated fractal dimension for
each pixel in the slice 5 image. Fractal pixel images 503 and 506
show the fractal pixels in the slice 5 and slice 3 images,
the fractal pixels in the slice 5 and slice 3 images,
respectively.
[0041] Having described at least illustrative embodiments of the
invention, various modifications and improvements will readily
occur to those skilled in the art and are intended to be within the
scope of the invention. Accordingly, the foregoing description is
by way of example only and is not intended as limiting. The
invention is limited only as defined in the following claims and
the equivalents thereto.
* * * * *