U.S. patent application number 10/769150, for an image analysis system and method, was published by the patent office on 2004-11-04.
Invention is credited to Affleck, Rhett L., Bodnar, Mike, DeSieno, Duane, Ewing, William R., Hansen, Eric, Levin, Robert K., Lillig, John E., and Neeper, Robert K.
United States Patent Application 20040218804
Kind Code: A1
Application Number: 10/769150
Family ID: 32854497
Publication Date: November 4, 2004
Affleck, Rhett L.; et al.
Image analysis system and method
Abstract
An image analysis system and related methods for automation of
the monitoring of samples to determine crystal growth. Samples are
imaged from time to time using a set of imaging parameters. The
resulting images are evaluated to determine the contents of the
samples. If the evaluation of the image indicates the presence of
crystals, the sample may be re-imaged using a different set of
imaging parameters, and the resulting image analyzed to determine
its contents. The sample may also be evaluated by generating
multiple images of a sample using various sets of imaging
parameters, identifying pixels that depict regions of interest in a
plurality of images, merging the pixels from each region of
interest into a composite image and analyzing the composite image.
An image of a sample can also be evaluated by classifying the
pixels of the image based on the contents of the sample that they
depict, color-coding the pixels using the classifications, and
displaying the colored image for analysis.
Inventors: Affleck, Rhett L. (Poway, CA); Levin, Robert K. (San Diego, CA); Lillig, John E. (Ramona, CA); Neeper, Robert K. (Ramona, CA); Ewing, William R. (Encinitas, CA); DeSieno, Duane (La Jolla, CA); Hansen, Eric (San Luis Obispo, CA); Bodnar, Mike (San Diego, CA)
Correspondence Address: KNOBBE MARTENS OLSON & BEAR LLP, 2040 MAIN STREET, FOURTEENTH FLOOR, IRVINE, CA 92614, US
Family ID: 32854497
Appl. No.: 10/769150
Filed: January 30, 2004
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
60474989 | May 30, 2003 |
60444586 | Jan 31, 2003 |
60444519 | Jan 31, 2003 |
60444585 | Jan 31, 2003 |
Current U.S. Class: 382/141
Current CPC Class: G01N 2035/042 20130101; G02B 21/0016 20130101; C30B 7/00 20130101; G01N 2015/1493 20130101; G01N 2001/4027 20130101; G01N 2035/00455 20130101; G01N 15/1475 20130101; B01L 9/523 20130101; G01N 2035/0425 20130101; C30B 29/58 20130101; G01N 2035/00881 20130101; G01N 21/253 20130101; G01N 2035/0463 20130101; G01N 35/028 20130101; G01N 35/0099 20130101; G01N 2035/00356 20130101
Class at Publication: 382/141
International Class: G06K 009/00
Claims
What is claimed is:
1. A method of evaluating crystal growth in a crystal growth
system, comprising: receiving a first image of a sample, said first
image generated by an imaging system using a first set of imaging
parameters; analyzing information depicted in said first image to
determine the contents of said sample; determining whether to
generate another image of said sample based on the contents of said
sample; and providing information to said imaging system to
generate a second image of the sample using a second set of imaging
parameters, wherein said second set of imaging parameters comprises
at least one imaging parameter that is different from an imaging
parameter in said first set of imaging parameters.
2. The method of claim 1, further comprising receiving said second
image of said sample.
3. The method of claim 2, further comprising analyzing information
depicted in said second image to determine the contents of said
sample.
4. The method of claim 1, wherein said different imaging parameter
is depth-of-field.
5. The method of claim 1, wherein said different imaging parameter
is illumination brightness level.
6. The method of claim 1, wherein said different imaging parameter
is illumination source type.
7. The method of claim 1, wherein said different imaging parameter
is magnification.
8. The method of claim 1, wherein said different imaging parameter
is polarization.
9. The method of claim 1, wherein said different imaging parameter
is illumination source position.
10. The method of claim 1, wherein said different imaging parameter
is the location of the area imaged.
11. The method of claim 1, wherein said different imaging parameter
is the focus.
12. The method of claim 1, wherein said analyzing information
comprises determining whether said first image depicts the presence
of crystals.
13. The method of claim 12, wherein said first image comprises
pixels, and said determining comprises classifying said pixels and
comparing the number of pixels classified as crystals to a
threshold value.
14. The method of claim 13, wherein said classifying comprises
using a neural network.
15. The method of claim 12, wherein said first image comprises
pixels, and wherein said determining comprises counting the number
of said pixels depicting objects in the sample and evaluating said
number using a threshold value.
16. The method of claim 12, wherein said first image comprises
pixels, and wherein said determining comprises classifying said
pixels as either clear or non-clear and evaluating said classified
pixels using a threshold value.
17. The method of claim 1, wherein said analyzing information
depicted in said first image comprises determining a region of
interest in said first image and wherein said information is used
to adjust said second set of imaging parameters so that the imaging
system generates a zoomed-in second image of said region of interest.
18. The method of claim 1, wherein said analyzing information
depicted in said first image comprises displaying said first image
and receiving user input based on said displayed image.
19. A method of analyzing crystal growth, comprising: receiving a
first image having pixels depicting crystal growth information of a
sample; identifying a first set of pixels in said first image
comprising a first region of interest; receiving a second image
having pixels depicting crystal growth information of said sample;
identifying a second set of pixels in said second image comprising
a second region of interest; merging said first set of pixels and
said second set of pixels to form a composite image; and analyzing
said composite image to identify crystal growth information of said
sample.
20. The method of claim 19, wherein said first image is generated
by an imaging system using a first set of imaging parameters, said
second image is generated by said imaging system using a second set
of imaging parameters, and wherein said second set of imaging
parameters comprises at least one imaging parameter that is
different from the imaging parameters in said first set of imaging
parameters.
21. The method of claim 20, wherein said different imaging parameter is depth-of-field.
22. The method of claim 20, wherein said different imaging
parameter is illumination brightness level.
23. The method of claim 20, wherein said different imaging
parameter is illumination source type.
24. The method of claim 20, wherein said different imaging
parameter is magnification.
25. The method of claim 20, wherein said different imaging
parameter is polarization.
26. The method of claim 20, wherein said different imaging parameter is illumination source position.
27. The method of claim 20, wherein said different imaging
parameter is the location of the area imaged.
28. The method of claim 20, wherein said different imaging
parameter is the focus.
29. A method of analyzing crystal growth information, comprising:
receiving a first image comprising a set of pixels that depict the
contents of a sample; determining information for each pixel in
said set of pixels, wherein said information comprises a
classification describing the type of sample content depicted by
said each pixel, and a color code associated with each
classification; generating a second image based on said information
and said set of pixels; and displaying said second image.
30. The method of claim 29, further comprising determining crystal
growth information of the sample.
31. The method of claim 29, wherein said classification comprises
crystals.
32. The method of claim 31, wherein said classification further
comprises precipitate, edges, or clear.
33. A system for detecting crystal growth information, comprising: an imaging subsystem with means for generating an image of a sample, wherein said image comprises pixels that depict the content of said sample; an image analyzer subsystem coupled to said imaging subsystem with means for receiving said image, means for classifying the content of said sample using said pixels, and means for determining whether said sample should be re-imaged based on said classifying; and a scheduler subsystem coupled to said image analyzer subsystem with means for causing said imaging subsystem to re-image said sample.
34. A computer-readable medium containing instructions for
analyzing samples in a crystal growth system, by: receiving a first
image of a sample, said first image generated by an imaging system
using a first set of imaging parameters; analyzing information
depicted in said first image to determine the contents of said
sample; determining whether to generate another image of said
sample based on the contents of said sample; providing information
to said imaging system to generate a second image of the sample
using a second set of imaging parameters, wherein said second set
of imaging parameters comprises at least one imaging parameter that
is different from an imaging parameter in said first set of imaging
parameters; receiving said second image of said sample; and
analyzing information depicted in said second image to determine
the contents of said sample.
35. A computer-readable medium containing instructions for
analyzing crystals, by: receiving a first image having pixels
depicting crystal growth information of a sample; identifying a
first set of pixels in said first image comprising a first region
of interest; receiving a second image having pixels depicting
crystal growth information of said sample; identifying a second set
of pixels in said second image comprising a second region of
interest; merging said first set of pixels and said second set of
pixels to form a composite image; and analyzing said composite
image to identify crystal growth information of said sample.
36. A computer-readable medium containing instructions for
analyzing crystal growth information, by: receiving a first image
comprising a set of pixels that depict the contents of a sample;
determining information for each pixel in said set of pixels,
wherein said information comprises a classification describing the
type of sample content depicted by said each pixel, and a color
code associated with each classification; generating a second image
based on said information and said set of pixels; displaying said
second image; and visually analyzing said second image to determine
crystal growth information of the sample.
37. A method of analyzing crystal growth, comprising: generating a
first image having pixels depicting crystal growth information of a
sample at a first time; generating a second image having pixels
depicting crystal growth information of said sample at a second
time; and analyzing said first image and said second image to identify
crystal growth information of said sample.
38. The method of claim 37, wherein analyzing comprises comparing
the number of pixels depicting crystal growth in said first image
with the number of pixels depicting crystal growth in said second
image.
39. The method of claim 37, wherein analyzing comprises comparing
the number of pixels within grid elements of said first image with
the number of pixels within respective grid elements of said second
image.
40. The method of claim 39, wherein the size of the grid elements, defined by dividing up said first image and said second image, can vary from 1 pixel up to the total number of pixels in each image.
Description
RELATED APPLICATIONS
[0001] This application claims priority under 35 U.S.C.
.sctn.119(e) to U.S. Provisional Application No. 60/474,989 filed
on May 30, 2003, and entitled IMAGE ANALYSIS SYSTEM AND METHOD,
U.S. Provisional Application No. 60/444,586 filed on Jan. 31, 2003,
and entitled AUTOMATED IMAGING SYSTEM AND METHOD, U.S. Provisional
Application No. 60/444,519 filed on Jan. 31, 2003, and entitled
AUTOMATED SAMPLE ANALYSIS SYSTEM AND METHOD, and U.S. Provisional
Patent Application No. 60/444,585 filed on Jan. 31, 2003, and
entitled REMOTE CONTROL OF AUTOMATED LABS, the entireties of which are incorporated herein by reference.
BACKGROUND OF THE INVENTION
[0002] 1. Field of the Invention
[0003] This invention generally relates to systems and methods for
analyzing and exploiting images. More particularly, the invention
relates to systems and methods for identifying and analyzing images
of substances in samples.
[0004] 2. Description of the Related Technology
[0005] X-ray crystallography is used to determine the
three-dimensional structure of macromolecules, e.g., proteins,
nucleic acids, etc. This technique requires the growth of crystals
of the target macromolecule. Typically, crystal growth of
macromolecules is dependent on several environmental conditions,
e.g., temperature, pH, salt, and ionic strength. Hence, growing
crystals of macromolecules requires identifying the specific
environmental conditions that will promote crystallization for any
given macromolecule. Moreover, it is insufficient to find
conditions that result in any type of crystal growth; rather, the
objective is to determine those conditions that yield
well-diffracting crystals, i.e., crystal configurations that
provide the resolution desired to make the data useful.
[0006] Modern chemistry and biology laboratories produce and
analyze multiple samples concurrently in order to accelerate the
crystal growth development cycle. The samples are often produced
and stored in a sample storage container, such as the individual
wells in a well plate. Alternatively, drops of multiple samples are
placed at discrete locations on a plate, without the need for wells
to contain the sample. In either case, hundreds, thousands, or
more, different sample drops may be placed on a single analysis
plate. Similarly, a single laboratory may house thousands,
millions, or more, samples on plates for analysis. Thus, the number
of drops to monitor and analyze may be extremely large.
[0007] In screening experiments, samples under investigation
are periodically evaluated to determine if suitable crystallization
of the sample has taken place. In a conventional laboratory, a
technician manually locates and removes each plate or sample
storage receptacle from a storage location and views each sample
well under a microscope to determine if the desired biological
changes have occurred. In most cases, the plates are stored in
laboratories within a controlled environment. For example, in
protein crystallization analysis, samples are often incubated for
long periods of time at controlled temperatures to induce
production of crystals. Thus, the technician must locate, remove,
and view the samples under a microscope in a refrigerated room.
Further increasing the demand for technician labor, hundreds or
thousands of samples in sample wells may need to be periodically
viewed or otherwise analyzed to determine the existence of crystals
in a sample well.
[0008] As an alternative, an image may be periodically generated
for each sample and provided to a technician, who need not be
geographically co-located with the sample, to analyze the image to
evaluate crystal growth. Automated image evaluation techniques can
also be used to analyze the image and evaluate the presence of
crystal growth and increase system throughput. However, current
image analysis techniques do not always receive sufficient
information from the sample image to accurately evaluate crystal
growth. Important information learned as a result of analyzing the
image is not automatically exploited, or used for further analysis
to facilitate a user's evaluation of the image. Additionally, in
current systems, the results of analyzing the image are not
adequately provided to facilitate easy interpretation and efficient
decision making.
[0009] Accordingly, there is a need in the industry for systems and
methods that overcome the aforementioned problems in the current
art.
SUMMARY OF CERTAIN INVENTIVE ASPECTS
[0010] This invention relates to systems and methods for automation
of the monitoring of samples to determine crystal growth. According
to one embodiment, the invention comprises a method of evaluating
crystal growth in a crystal growth system, comprising receiving a
first image of a sample, said first image generated by an imaging
system using a first set of imaging parameters, analyzing
information depicted in said first image to determine the contents
of said sample, determining whether to generate another image of
said sample based on the contents of said sample, providing
information to said imaging system to generate a second image of
the sample using a second set of imaging parameters, wherein said
second set of imaging parameters comprises at least one imaging
parameter that is different from an imaging parameter in said first
set of imaging parameters, receiving said second image of said
sample, and analyzing information depicted in said second image to
determine the contents of said sample. According to other
embodiments, the different imaging parameter included in the method
can be depth-of-field, illumination brightness level, focus, the
area imaged, the center location of the area imaged, illumination
source type, magnification, polarization, and/or illumination
source position. According to other embodiments of the method of evaluating crystal growth, analyzing said first image comprises determining a region of interest in said first image, and said information is used to adjust said second set of imaging parameters so that the imaging system generates a zoomed-in second image of said region of interest.
[0011] According to another embodiment, analyzing information in the method of evaluating crystal growth in a crystal growth system comprises determining whether said first image depicts the presence of crystals. It can further comprise, where said first image comprises pixels, classifying said pixels and comparing the number of pixels classified as crystals to a threshold value.
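A minimal sketch of this classify-and-threshold test follows. The function name, the class-label strings, and the meets-or-exceeds threshold semantics are hypothetical; the specification does not prescribe an implementation:

```python
def image_depicts_crystals(pixel_classes, threshold):
    """Decide whether an image depicts crystals by counting the pixels
    classified as 'crystal' and comparing that count to a threshold."""
    crystal_count = sum(1 for c in pixel_classes if c == "crystal")
    return crystal_count >= threshold
```

The same counting scheme covers the variant in which all non-clear pixels, rather than only crystal-classified pixels, are tallied against the threshold.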
[0012] According to another embodiment, a method of evaluating
crystal growth in a crystal growth system comprises counting the
number of said pixels depicting objects in the sample and
evaluating said number using a threshold value.
[0013] According to another embodiment of the invention, the method
of analyzing crystal growth comprises receiving a first image
having pixels depicting crystal growth information of a sample,
identifying a first set of pixels in said first image comprising a
first region of interest, receiving a second image having pixels
depicting crystal growth information of said sample, identifying a
second set of pixels in said second image comprising a second
region of interest, merging said first set of pixels and said
second set of pixels to form a composite image, and analyzing said
composite image to identify crystal growth information of said
sample. According to another embodiment said first image is
generated by an imaging system using a first set of imaging
parameters, said second image is generated by said imaging system
using a second set of imaging parameters, and wherein said second
set of imaging parameters comprises at least one imaging parameter
that is different from the imaging parameters in said first set of
imaging parameters.
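The region-merging step described above can be sketched as follows, assuming grayscale images held as NumPy arrays with boolean region-of-interest masks; the names and the overlap rule are illustrative, not taken from the specification:

```python
import numpy as np

def merge_regions(image_a, roi_a, image_b, roi_b):
    """Merge the region-of-interest pixels from two images of the same
    sample into one composite image for analysis.

    image_a, image_b: 2-D arrays of the same shape, e.g. two images
    generated with different imaging parameters.
    roi_a, roi_b: boolean masks marking each image's region of interest.
    Pixels outside both regions are left at zero.
    """
    composite = np.zeros_like(image_a)
    composite[roi_a] = image_a[roi_a]
    composite[roi_b] = image_b[roi_b]  # second region wins on overlap
    return composite
```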
[0014] According to another embodiment of the invention, a method
of analyzing crystal growth information comprises receiving a first
image comprising a set of pixels that depict the contents of a
sample, determining information for each pixel in said set of
pixels, wherein said information comprises a classification
describing the type of sample content depicted by said each pixel,
and a color code associated with each classification, generating a
second image based on said information and said set of pixels,
displaying said second image, and visually analyzing said second
image to determine crystal growth information of the sample.
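The color-coding step can be sketched as a per-pixel lookup from classification to display color. The palette below is purely illustrative; the specification names classifications such as crystals, precipitate, edges, and clear but does not assign colors:

```python
def color_code(pixel_classes):
    """Map per-pixel classifications to RGB colors for the displayed
    second image; unrecognized classes fall back to gray."""
    palette = {
        "crystal": (255, 0, 0),       # red
        "precipitate": (255, 255, 0), # yellow
        "edge": (0, 0, 255),          # blue
        "clear": (0, 0, 0),           # black
    }
    return [palette.get(c, (128, 128, 128)) for c in pixel_classes]
```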
[0015] According to another embodiment, the invention comprises a
system for detecting crystal growth information comprising an
imaging subsystem with means for generating an image of a sample,
wherein said image comprises pixels that depict the content of said
sample, an image analyzer subsystem coupled to said imaging subsystem with means for receiving said image, means for classifying the content of said sample using said pixels, and means for determining whether said sample should be re-imaged based on said classifying; and a scheduler subsystem coupled to said image analyzer subsystem with means for causing said imaging subsystem to re-image said sample.
[0016] According to another embodiment, the invention comprises a
computer-readable medium containing instructions for analyzing
samples in a crystal growth system, by receiving a first image of a
sample, said first image generated by an imaging system using a
first set of imaging parameters, analyzing information depicted in
said first image to determine the contents of said sample,
determining whether to generate another image of said sample based
on the contents of said sample, providing information to said
imaging system to generate a second image of the sample using a
second set of imaging parameters, wherein said second set of
imaging parameters comprises at least one imaging parameter that is
different from an imaging parameter in said first set of imaging
parameters, receiving said second image of said sample, and analyzing information depicted in said second image to determine the contents of said sample.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The above and other aspects, features, and advantages of the
invention will be better understood by referring to the following
detailed description, which should be read in conjunction with the
accompanying drawings, in which:
[0018] FIG. 1A is a high-level block diagram of an imaging system
according to the invention.
[0019] FIG. 1B is a high-level block diagram of another imaging
system according to the invention.
[0020] FIG. 2 is a perspective view of an imaging system according
to the invention.
[0021] FIG. 3 is a perspective view of the imaging system shown in
FIG. 2, viewed from a different angle.
[0022] FIG. 4 is a perspective view of the imaging system shown in
FIG. 2, viewed from yet a different angle.
[0023] FIG. 5 is a plan front view of the imaging system shown in
FIG. 2.
[0024] FIG. 6 is a plan, right side view of the imaging system
shown in FIG. 2.
[0025] FIGS. 7A and 7B are perspective views from different angles
of a lens system as can be used with the imaging system shown in
FIG. 2.
[0026] FIG. 8 is a perspective view from below of a photo-filter
carriage that can be used with the imaging system shown in FIG.
2.
[0027] FIG. 9 is a perspective view of certain components as
assembled in the imaging system shown in FIG. 2.
[0028] FIG. 10 is a plan front view of certain components as
assembled in the imaging system shown in FIG. 2.
[0029] FIG. 11 is a plan, right side view of the components shown
in FIG. 10.
[0030] FIG. 12 is a perspective view of a light source as can be
used with the imaging system shown in FIG. 2.
[0031] FIG. 13 is a perspective view of a sample mount with the
light source shown in FIG. 12, viewed from a different angle.
[0032] FIG. 14A is a plan top view of the light source shown in
FIG. 12.
[0033] FIG. 14B is a cross-sectional view along the plane A-A of
the light source shown in FIG. 14A.
[0034] FIG. 15 is an exploded, perspective view of certain
components of the sample mount and the light source shown in FIG.
13.
[0035] FIG. 16 is a functional block diagram of an illumination
duration control circuit as can be used with the light source shown
in FIG. 12.
[0036] FIG. 17 is a functional block diagram of an automated sample
analysis system in which the imaging system according to the
invention can be used.
[0037] FIG. 18 is a block diagram of an imaging and analysis
system.
[0038] FIG. 19 is a block diagram of a computer that includes a
Crystal Resolve analysis module, according to one aspect of the
invention.
[0039] FIG. 20A is a block diagram of an analysis system process,
according to one embodiment of the invention.
[0040] FIG. 20B is a block diagram of an analysis system process,
according to one embodiment of the invention.
[0041] FIG. 21 is a flow diagram of an imaging analysis process,
according to one embodiment of the invention.
[0042] FIG. 22 is a flow diagram of an imaging analysis and control
process, according to one embodiment of the invention.
[0043] FIG. 23 is a flow diagram of an analysis process, according
to one embodiment of the invention.
DETAILED DESCRIPTION OF CERTAIN INVENTIVE EMBODIMENTS
[0044] Embodiments of the invention will now be described with
reference to the accompanying figures, wherein like numerals refer
to like elements throughout. The terminology used in the
description presented herein is not intended to be interpreted in
any limited or restrictive manner, simply because it is being
utilized in conjunction with a detailed description of certain
specific embodiments of the invention. Furthermore, embodiments of
the invention may include several novel features, no single one of
which is solely responsible for its desirable attributes or which
is essential to practicing the inventions herein described.
[0045] The imaging and analysis system and methods disclosed here
are related to embodiments of an automated sample analysis system
having an imaging system that is described in the related U.S.
provisional patent application No. 60/444,519, entitled "AUTOMATED
SAMPLE ANALYSIS SYSTEM AND METHOD." An imaging system that can
provide images of samples for analysis, in response to control
information, is described hereinbelow, followed by a description of
a system and processes for analyzing the images. It should be noted
here that the terms "image", "subimage" or "pixels" as used herein
at various locations do not necessarily mean an optical image,
subimage or pixels which are either usually displayed or printed,
but rather include digital representations or other representations
of such image, subimage or pixels. It should also be noted that the
term "sample" as used herein refers to any type of suitable sample,
for example, drops, droplets, the contents of a well, the contents
of a capillary, a sample in gel, or any other form of containing a sample or material.
[0046] FIG. 1A is a high-level block diagram of an imaging system
100. In this embodiment, the imaging system 100 has an assembly 105
that is controlled by controllers and logic 110. The assembly 105
includes a stage 115 that holds and transports target samples to be
imaged by an image capture device 120. The imaging system 100
employs an optics assembly 125 to enhance the view of the target
samples before the image capture device 120 obtains the images of
the samples. An illuminator 130 is configured as part of the
assembly 105 to direct light at the samples held in the stage
115.
[0047] The assembly 105 also includes a translator 135 that
provides the structural support members and actuators to move any one or a combination of the stage 115, image capture device 120, optics
125, or illuminator 130. The translator 135 may be configured to
move the combination of components in one, two, or three
dimensions. As will be discussed in detail below, in some
embodiments the stage 115 remains stationary while the translator
135 moves the image capture device 120 and optics 125 to a desired
well position in a sample plate held by the stage 115. In other
embodiments of the imaging system 100, the translator 135 moves the
stage 115 in a first axis and the image capture device 120 and
optics 125 in a second axis which is substantially perpendicular to
the first axis.
[0048] The controllers and logic 110 of the imaging system 100
provide instructions to and coordinate the activities of the
components of the assembly 105. The controllers may include a
microprocessor, controller, microcontroller, or any other computing
device. The logic includes the instructions to cause the controller
to perform the tasks or processing described here.
[0049] FIG. 1B is a high-level block diagram of an imaging system
150. The imaging system 150 includes an assembly 155 in
communication with controllers and logic 160. The assembly 155 may
also be in communication with a data storage device 190, which
itself may be configured for communication with the controllers and
logic 160. The controllers and logic 160 control and coordinate the
activities of the components of the assembly 155.
[0050] In this embodiment, the assembly 155 includes a sample plate
mount 165 suitably configured to receive micro-titer plates of
various configurations and sizes. Alternatively, the sample plate
mount 165 can be configured to receive any sample matrix that
carries samples, regardless of whether the samples are stored in
individual sample wells, rest on the surface of the sample matrix
(e.g., as droplets), or are embedded in the sample matrix. A source
of flash lighting 180 is arranged to direct light bursts to the
samples stored in the micro-titer plate carried by the sample plate
mount 165. An inventive system and method of providing the flash
lighting 180 will be discussed with reference to FIG. 16.
[0051] The assembly 155 includes a compound lens 175 that
cooperates with a digital camera 170 to acquire images of the
samples in the sample plate. The compound lens 175 may consist, for
example, of an objective lens, a zoom lens, and additional optics
chosen to provide the digital camera 170 with the desired image
from the light from the samples. In one embodiment, as will be
discussed further below, the compound lens 175 may be motorized
(i.e., provided with one or more actuators) so that the controllers
and logic 160 can automatically focus the scene, zoom on the scene,
and set the aperture.
[0052] In this embodiment, the assembly 155 includes an x-y
translator that moves either the sample plate mount 165 or the
compound lens 175, or both. Of course, if the digital camera 170 is
coupled to the compound lens 175, the x-y translator moves both the
digital camera 170 and the compound lens 175. In some embodiments,
the x-y translator 185 is configured to move the sample plate mount
165 in two axes, e.g., x and y coordinates. Alternatively, the x-y translator 185 moves the compound lens 175 in two axes, while the
sample plate mount 165 remains stationary. In yet another
embodiment, the x-y translator consists of multiple, separate actuators that move the sample plate mount 165 and the compound lens 175 independently of one another.
[0053] It should be noted that the assembly 155, the controllers
and logic 160, and the data storage 190 are depicted as separate
components for schematic purposes only. That is, in some
embodiments of the imaging system 150 it is advantageous to, for
example, integrate the data storage device 190 into the assembly
155 and to include the controllers and logic 160 as part of one or
more of the components shown as being part of the assembly 155.
Similarly, the sample mount 165, digital camera 170, compound lens
175, flash lighting 180, and x-y translator 185 need not all be
configured as part of a single assembly 155 as shown.
[0054] Exemplary ways of using and constructing embodiments of the
imaging system 100 or 150 will be described in detail below with
reference to FIGS. 2-16, which depict a specific embodiment of the
imaging system. Of course, since there are multiple ways to
implement the imaging system, the following description of the
specific embodiment should not be taken to limit the full scope of
the inventive imaging system.
[0055] Illustrative Embodiment
[0056] With reference to FIGS. 2-6 and 9-11, perspective and plan
views of an imaging system 200 according to the invention are
illustrated. The imaging system 200 includes a sample plate mount
210 that receives a sample plate 212. An x-translator having an
actuator 218 (see FIG. 4) is coupled to the sample plate mount 210
to move the sample plate mount 210 into position above a light
source 216 and below a lens assembly 230. A digital camera 214 is
coupled to the lens assembly 230 to capture images of the wells in
the sample plate 212. A y-translator having an actuator 220 (see
FIG. 3) is coupled to the lens assembly 230 to move the lens
assembly 230 into position over a desired well of the sample plate
212.
[0057] Support Platform
[0058] The digital camera 214, lens assembly 230, sample plate
mount 210, light source 216, x-translator 218, and y-translator 220
are mounted on a platform 240 (see FIG. 2). The platform 240
generally consists of several structural members, brackets, or
walls, e.g., base 242, side wall 244, front wall 250, bracket 252,
bracket 246, post 248, and support member 254. The light source 216
can be fastened to the base 242. Rails 256 and 258, which support
the lens assembly 230, are fastened to the wall 250 of the platform
240 and to the support member 254. The sample plate mount 210 is
supported by a rail 262 and an outport guide 253 of the support
member 254. The rail 262 is supported through attachment to the
side wall 244 and the post 248. Of course, there are multiple,
equivalent ways of supporting and configuring the lens assembly
230, sample plate mount 210, light source 216, and x- and
y-translators 218 and 220 on the platform 240.
[0059] The platform 240 may be constructed of any of several
suitable materials, including but not limited to, aluminum, steel,
or plastics. Because in some applications it is critical to keep
vibration of the platform 240 to a minimum, materials that provide
rigidity to the platform 240 are preferred in such applications.
The rails 256, 258, and 262 are preferably manufactured with very
smooth surfaces so that the lens assembly 230 or the sample plate
mount 210 travels smoothly, thereby avoiding vibrations. As
illustrated in FIGS. 3 and 9, the lens assembly 230 may be
supported by coupling linear plain bearings 264 and 266 to the
rails 256 and 258.
using a "bushing" 267 (see FIG. 10) may be employed to fasten the
sample plate mount 210 to the rail 262. Bearings 264, 266, and 267
are chosen to provide smooth bearing surfaces for smooth
translation of the load, e.g., the lens assembly 230 or the sample
plate mount 210.
[0060] Sample Plate Mount
[0061] The sample plate mount 210 may be constructed from any rigid
material, e.g., steel, aluminum, or plastics. Preferably the sample
plate mount 210 is configured to accommodate, either directly or
through the use of adapters, various standard sizes of micro-titer
plates. Micro-titer plates that may be used with the sample plate
mount 210 include, but are not limited to, crystallography plates
manufactured by Linbro, Douglas, Greiner, and Corning. As will be
described further below, the sample plate mount 210 is coupled to
an actuator 218 for moving the sample plate mount 210 in one
axis.
[0062] Translators
[0063] The imaging system 200 includes two independent translators.
Typically, the sample plate mount 210 and the lens assembly 230
move on a plane that is substantially parallel to a plane defined
by the sample plate 212 carried by the sample plate mount 210. In
one embodiment, the controllers and logic 110 or 160 can control
x-, y-translators to position the sample plate mount 210 and the
lens assembly 230 at the coordinates of a specific well of the
sample plate 212.
[0064] An x-axis translator for moving the sample plate mount 210
consists of an actuator 218 (see FIG. 4) that rotates a threaded
rod 219 (or "lead screw") about its axis in clockwise or
counter-clockwise directions. In the embodiment shown in FIGS. 3,
4, and 10, the actuator 218 is coupled to the rod 219 via a belt
(not shown) and pulleys 221 and 221'. The sample plate mount 210 is
fastened to a "bushing" 267 (see FIG. 10) that rides on the rail
262. The sample plate mount 210 is also supported by the outport
guide 253 (see FIGS. 6 and 11) of the support member 254. The
"bushing" 267 is additionally coupled in a known manner to the rod
219. When the actuator 218 turns in one direction, its power is
transmitted via the belt and pulleys 221 and 221' to the rod 219,
which then moves the "bushing" 267 and, thereby, moves the sample
plate mount 210 in a linear direction.
[0065] A y-axis translator for moving the lens assembly 230
consists of an actuator 220 (see FIG. 3) that rotates a threaded
rod 260 about its axis in clockwise or counter-clockwise
directions. In the embodiment shown in FIGS. 3, 6, and 9, for
example, the actuator 220 is coupled to the rod 260 through a
slotted disc coupling (not shown). The lens assembly 230 is coupled
to bearings 264 and 266 that respectively ride on rails 256 and
258. The bearings 264 and 266 are coupled to the rod 260 through
plate 255 and the bracket 257 (see FIG. 6) in a known manner. When
the actuator 220 turns in one direction, its power is transmitted
via the slotted disc coupling to the rod 260, which then moves the
bearings 264 and 266 and, thereby, moves the lens assembly in a
linear direction.
[0066] The actuators 218 and 220 may be direct current gear motors
or 3-phase servo motors, for example. Of course, the type of motors
employed as the actuators 218 or 220 will depend on, among other
things, the weight of the sample plate mount 210 plus sample plate
212 or the lens assembly 230 and the digital camera 214. Another
factor in determining the type of motor is the desired speed. In
one embodiment, actuators 218 and 220 having a positioning
precision of 10-microns are used. Suitable motors may be obtained
from PITMANN.RTM. of Harleysville, Pa.
[0067] In the embodiment of the x-, y-translators described above,
each translator mechanism independently translates along an axis of
motion each of the sample plate mount 210 and the lens assembly
230. However, it should be noted that in other embodiments of the
imaging system 200, it may be desirable to maintain the lens
assembly stationary and only move the sample plate mount 210, which
would then have one or more translators to position the sample
plate mount 210 anywhere in an x-y coordinate area. Similarly, the
imaging system 200 may be configured so that an x-y translator (or
set of x-, y-translators) moves the lens assembly in the x-y
coordinate area, while the sample plate mount 210 remains
stationary over the light source 216. In one embodiment, the x-,
y-translators employ optical sensors 285 and 287 (see FIG. 5) to
sense the start or end positions ("home positions") of the lens
assembly 230 or the sample plate mount 210.
[0068] In yet another embodiment, the imaging system 200 may also
include a z-axis translator (not shown) to lift or lower the sample
plate mount 210, lens assembly 230, or light source 216. The z-axis
translator may consist of, for example, an actuator, a lead screw,
one or more rails, and appropriate bearings and fasteners.
[0069] The actuators 218 and 220 may be governed by a controller
(not shown). Suitable controllers may be obtained from J R Kerr
Automation Engineering of Flagstaff, Arizona. The controller may be
configured to interpret high level commands from a computing
device. In one embodiment, when a specific axis is addressed, the
controller causes the actuator 220, for example, to move and keeps
count of the travel distance and final location. The controller can
be programmed to move the actuator 220 at varying speed, torque,
and acceleration.
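The positioning behavior described above can be illustrated in software. The sketch below is a hedged illustration only: the class and method names, well pitch, and steps-per-millimeter conversion are assumptions for the example and are not taken from the J R Kerr controller protocol or the actual imaging system firmware.

```python
# Hypothetical sketch of a high-level positioning command of the kind the
# controller of paragraph [0069] might interpret: convert a well index to
# a lead-screw position, "move" there, and keep count of travel distance
# and final location.  All numeric values are illustrative assumptions.

class AxisController:
    """Tracks travel distance and final location for one lead-screw axis."""

    def __init__(self, steps_per_mm=100, speed_mm_s=10.0):
        self.steps_per_mm = steps_per_mm
        self.speed_mm_s = speed_mm_s      # programmable speed
        self.position_steps = 0           # final location (step count)
        self.odometer_steps = 0           # cumulative travel distance

    def move_to_well(self, well_index, well_pitch_mm=9.0, origin_mm=10.0):
        """Convert a well index to a target step count and move there."""
        target_steps = round((origin_mm + well_index * well_pitch_mm)
                             * self.steps_per_mm)
        delta = target_steps - self.position_steps
        self.odometer_steps += abs(delta)
        self.position_steps = target_steps
        return self.position_steps

x_axis = AxisController()
x_axis.move_to_well(5)      # sixth column
x_axis.move_to_well(2)      # back to the third column
print(x_axis.position_steps, x_axis.odometer_steps)   # → 2800 8200
```

In this sketch the controller retains both the final location and the total distance traveled, mirroring the travel-count bookkeeping the paragraph attributes to the motor controller.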
[0070] Image Capture Device
[0071] In some embodiments of the imaging system 200, the image
capture device can be a film camera, a digital camera, a CMOS
camera, a charge coupled device (CCD), and the like, or some other
apparatus for capturing an image of an object. The embodiments of
the imaging system 200 described here employ a digital camera 214.
A suitable digital camera 214 is, for example, a CMOS digital
camera. However, it should be apparent that several digital
photography devices could also be employed. The CMOS camera 214 is
preferred because it provides random access to the image data and
is relatively low cost. In conventional imaging systems for
crystallography, a CMOS camera is typically not used because in
those systems the level of light is insufficient for this type of
camera. In contrast, the imaging system 200 is configured to
provide the level of light necessary to allow use of a CMOS
camera.
[0072] The digital camera 214 can be a CMOS camera having a pixel
resolution of 1280.times.1024 pixels, Bayer color filter, a pixel
size of 7.5.times.7.5 microns, and a data interface governed by the
IEEE 1394 standard (commonly known as "Firewire"). The digital
camera 214 may be fully digital and not require a frame grabber.
The digital camera 214 may also have a centered pixel area, e.g., a
1024.times.1024 or 800.times.600 pixel subset of the array, which
enhances the image quality since the edges of the array where
optical distortions increase are avoided. In one embodiment, the
digital camera 214 is connected separately to a host computer (not
shown) via a Firewire data interface. This allows for rapid
transfer of large amounts of image data, e.g., five images per
second.
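The centered pixel subset of paragraph [0072] amounts to a simple symmetric crop of the sensor array. The following minimal sketch shows the arithmetic; the function name is illustrative and not part of any camera API.

```python
# Sketch of selecting a centered pixel subset of the full sensor array, as
# described in paragraph [0072]; the crop avoids the array edges, where
# optical distortion is greatest.

def centered_window(full_w, full_h, win_w, win_h):
    """Return (x0, y0, x1, y1) of a window centered on the sensor."""
    if win_w > full_w or win_h > full_h:
        raise ValueError("window larger than sensor")
    x0 = (full_w - win_w) // 2
    y0 = (full_h - win_h) // 2
    return (x0, y0, x0 + win_w, y0 + win_h)

# A 1024x1024 subset of the 1280x1024 sensor trims 128 pixels per side in x.
print(centered_window(1280, 1024, 1024, 1024))   # → (128, 0, 1152, 1024)
```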
[0073] Lens Assembly
[0074] One embodiment of the lens assembly 230 includes an
objective lens 231, a zoom lens 233, and an adapter 235. These
optical components are chosen to provide suitable field of view,
magnification, and image quality. The objective lens 231, zoom lens
233, and adapter 235 may be purchased from, for example, Navitar
Inc. of Rochester, N.Y.
[0075] In one embodiment, the zoom lens 233 may be the
"12.times.UltraZoom" zoom lens manufactured by Navitar. The zoom
lens 233 may provide a 12:1 zoom factor, a focus range of about
12-mm, and an aperture of about 0.14. The zoom lens 233 preferably
includes adapters for mounting the objective lens 231. The zoom
lens 233 may have actuators 233A, 233B, and 233C for providing,
respectively, automatic aperture adjustment, autozoom, and
autofocus functionality. In one embodiment, actuators 233B and 233C
have gear reductions of 262:1. Of course, the gear reduction ratio
is chosen to suit the particular application. For example, a 5752:1
gear ratio for the focus actuator 233C may be too slow for some
applications of the imaging system 200. The actuators 233A, 233B,
and 233C may be obtained from Navitar or from MicroMo Electronics,
Inc. of Clearwater, Florida.
[0076] The objective lens 231 may be, for example, a 5.times.
Mitutoyo Infinity Corrected Long Working Distance Microscope
Objective (model M Plan Apo 5) microscope accessory. The objective
lens 231 is coupled to the zoom lens 233. Since the light source
216 delivers sufficient light to the sample plate 212, the lens
assembly 230 is configured to allow for setting a small aperture in
order to increase the depth of field. The objective lens 231
preferably provides a working distance that allows adequate room
beneath the lens assembly 230 to manipulate a sample plate 212 and
provide a photo-filter carriage 237 in the image path. In one
embodiment, the working distance of the objective lens 231 is about
34-mm.
[0077] The adapter 235 serves to allow use of the digital camera
214. The adapter 235 may be, for example, a 1.times. Adapter model
number 1-6015 sold by Navitar. Of course, different combinations of
objective lenses 231 and adapters 235 may be used, e.g., a 2.times.
Adapter and 2.times. Objective combination. The combination of
1.times. Adapter and 5.times. Objective provides a suitable image
for most applications of the imaging system 200. In some
embodiments, it is desirable to use a 0.67.times. Adapter 235 with
a 10.times. Objective 231, for example, to provide a higher image
resolution.
[0078] The optical components of the lens assembly 230 can be
provided with actuators for remote and automatic control. To allow
software control of the optical components, controllers and control
logic (not shown) can control the actuators 233A, 233B, 233C, and
233D. The actuators (e.g., dc motors) may be coupled to the
aperture, magnification, and focus mechanisms of the zoom lens 233, as
well as the photo-filter carriage 237. In some embodiments, the
actuators 233A, 233B, 233C, and 233D are preferably provided with
encoders to provide position information to the controllers. In one
embodiment, the actuators on the lens assembly 230 are 17-mm direct
current motors with 100:1 gear reducers. These motors may be
obtained from PITMANN.RTM. of Harleysville, Pennsylvania.
[0079] The lens assembly 230 may also include a photo-filter
carriage 237 that is configured to hold optical filters (not
shown). For example, the photo-filter carriage 237 can hold
polarization plates or color light filtering plates. FIG. 8
illustrates one embodiment of a photo-filter carriage 237 that may
be used with the imaging system 200. The photo-filter carriage 237
includes a filter wheel 237A for receiving one or more
photo-filters (not shown) in openings 237B. The photo-filters may
be held in place in the filter wheel 237A in a variety of ways. For
example, in the embodiment illustrated in FIG. 8, caps 237C in
cooperation with suitable fasteners hold the photo-filters in
place. The filter wheel 237A may be coupled to an actuator 233D for
remote and automatic control of the filter wheel 237A. The actuator
233D and the filter wheel 237A may be fastened, in a conventional
manner, to a clamp 237D that is coupled to, for example, the
objective lens 231 or the zoom lens 233 (see FIGS. 1 and 9). In one
embodiment, a polarization filter is coupled to a filter wheel so
that the polarization filter covers about 90 degrees of the wheel.
In this embodiment, the polarization filter can be rotated so that
the applied polarization varies between zero and ninety degrees.
Thus, the use of the polarization filter with a polarized light
source can provide analysis of the effect of samples on polarized
light. For example, when a polarized light source and the
polarization filter are cross-polarized then minimal light should
get to the objective lens 231, unless the sample re-orients the
polarized light, such as can happen when the light passes through
crystals.
[0080] The digital camera 214 in combination with the lens assembly
230 provides a broad depth of field to allow imaging of objects
such as protein crystals at varying depths within a sample droplet
stored in a sample well of a sample plate 212. In one embodiment,
the lens assembly 230 has a 12:1 zoom lens and, in cooperation with
the digital camera 214, can provide a 1 micron optical resolution.
In some embodiments, the lens assembly 230 and the digital camera
214 may be integrated as a single assembly.
[0081] Light Source
[0082] The light source 216 will now be described with reference to
FIGS. 12-15. FIG. 12 shows a perspective view of the light source
216. Since the crystallization of substances is often highly
sensitive to temperature changes, the light source 216 is
preferably configured to minimize the amount of heat transferred to
the sample plate 212, e.g., by isolating and removing heat
generated by the electronics 1408 and illuminators 1402 (see FIG.
14B).
[0083] Housing
[0084] With reference to FIGS. 12, 14B and 15, the light source 216
includes a housing 1202 adapted to store one or more illuminators
1402 (see FIGS. 14B and 15), cooling elements 1404, heat reflecting
glass 1406, light diffuser plate 1206, and corresponding
electronics 1405 and 1408. In one embodiment, the housing 1202
consists of a plurality of walls that serve as structural support
for the internal components and that substantially isolate the
internal components from the external environment. The housing 1202
can be constructed of a variety of materials including, but not
limited to, stainless steel, aluminum, and hard plastics. A
material with a low coefficient of heat transfer is preferred so as
to substantially keep heat generated within the housing 1202 from
reaching the outside through the walls of the housing 1202.
However, depending on the application, use of metals is appropriate
when cooling elements 1404 are provided. In some embodiments, one
or more of the internal surfaces of the walls of the housing 1202
may be coated with a suitable material that absorbs or reflects
various types of radiation and prevents them from reaching the
outside of the housing 1202.
[0085] In the embodiment of the light source 216 shown in FIGS.
12-14B, the top wall 1204A of the housing 1202 has an opening to
receive and support a light diffuser plate 1206. The plate 1206
serves to diffuse light from the illuminators 1402 onto the sample
plate 212. The plate 1206 may be, for example, a sheet of
translucent plastic. In one embodiment, inside the housing 1202 and
adjacent and below the plate 1206, a heat reflecting glass ("hot
mirror") 1406 (see FIG. 14B) is provided. The heat reflecting glass
1406 prevents most infra-red energy from exiting the housing
1202.
[0086] The wall 1204B of the housing 1202 may be provided with a
plurality of orifices 1208 that allow a cooling element 1404, such
as a fan, to draw air into the housing 1202 for cooling the internal
components. A wall 1204C (see FIG. 14B) of the housing 1202 can be
fitted with an opening 1410 for receiving a duct that guides forced
air out of the housing 1202. A wall 1204D (see FIG. 13) of the
housing 1202 can be fitted with a power plug 1208 and a
communications port 1302. The housing 1202 is preferably adapted to
isolate an operator of the imaging system 200 from high voltages
that may be used to fire the illuminators 1402.
[0087] Of course, the housing 1202 may be configured in a variety
of ways not limited to that detailed above. For example, the
ventilation openings 1208 on wall 1204B may be replaced by one or
more fans built into the wall 1204B or the wall 1204E. Moreover,
depending on the specific location of the light source 216 in any
given application of the imaging system 200, the ventilation
openings 1208 may be located on the bottom wall (not shown) of the
housing 1202, for example.
[0088] Illuminators
[0089] With reference to FIGS. 14B and 15, the light source 216
includes one or more illuminators 1402 that generate light rays.
The illuminators 1402 may be of various types, for example,
incandescent bulbs, light emitting diodes, or fluorescent tubes of
various types including, but not limited to, mercury- or neon-based
fluorescent tubes. In one embodiment, the illuminators 1402 are two
xenon tubes. Xenon tubes are well known in the relevant technology
and are readily available. The xenon tubes 1402 can include
borosilicate glass that absorbs ultra-violet radiation. Xenon
tubes are preferred because they produce sufficient light to allow
use of a CMOS camera 214 in the imaging system 200. Xenon tubes are
also preferred since they provide a broad spectrum of light rays,
which enables use of color to enhance detection of crystal growth
in the wells of the sample plate 212.
[0090] The actual dimensions of the illuminators 1402 are chosen to
suit the specific application. For example, in the imaging system
200 the xenon tubes 1402 are long enough to cover one dimension of
the sample plate 212 so that it is not necessary to move the light
source 216 when the lens assembly 230 or sample plate mount 210 is
repositioned. As shown in FIG. 14B, the illuminators 1402 may be
supported on a board 1405, which may also support electronics for
control of the illuminators 1402.
[0091] Off-Axis Lighting
[0092] In one embodiment, two illuminators 1402 are positioned to
provide different locations of the illumination source, e.g., both
on-axis and off-axis lighting of the wells in the sample plate 212.
As used here, the imaging axis of the lens assembly means the
principal axis of the lens assembly. For example, first and second
xenon tubes 1402 can be positioned, respectively, a first and
second distance from the imaging axis of the lens assembly 230.
Typically, the first and second distances are substantially equal
in length, and the first xenon tube is positioned opposite the
imaging axis from the second xenon tube.
[0093] In one embodiment, the xenon tubes 1402 are mounted about an
inch on either side of the area directly under the lens assembly
230. This configuration allows the use of an indirect lighting
effect when only one xenon tube is fired. That is, when two xenon
tubes are positioned off the imaging axis, the controllers and
logic 110 or 160 can control the tubes to provide on-axis or
off-axis illumination of the sample plate 212. One xenon tube can
be fired to provide off-axis illumination of the sample plate 212.
When the two xenon tubes are fired simultaneously a more
conventional backlit scene is obtained. In some applications,
off-axis illumination is preferred because it produces shadows on
small objects in a sample droplet stored in a well of the sample
plate 212. The shadows caused by off-axis lighting enhance the
ability of the controllers and logic 110 or 160, or an operator, to
detect objects in the sample.
[0094] In one embodiment, for example the imaging system 150 shown
in FIG. 1B, the controllers and logic 160 control the assembly 155
to capture two images of a droplet in a well of the sample
plate 212. The imaging system 150 captures one image with the light
source 216 lighting the sample with a first xenon tube. The imaging
system 150 captures a second image with the light source 216
lighting the sample with the second xenon tube. The controllers and
logic 160 can then combine the data from both images and perform an
analysis based on the combined data. This results in enhanced
characterization of the sample since the combination of the images
typically provides more information about crystallization of the
sample than a single image acquired with standard back lighting of
the scene.
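One way to combine the two off-axis exposures of paragraph [0094] can be sketched pixel-wise: the per-pixel difference emphasizes shadow edges cast by small objects, while the per-pixel maximum recovers an evenly lit composite. This combination rule is an illustrative assumption for the sketch, not the patented analysis method.

```python
# Minimal sketch, assuming 8-bit grayscale images stored as nested lists,
# of combining the two single-tube exposures described in paragraph [0094].
# abs(a - b) highlights shadows that appear in only one exposure;
# max(a, b) yields a composite without the off-axis shadows.

def combine(img_a, img_b):
    diff = [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]
    lit = [[max(a, b) for a, b in zip(ra, rb)]
           for ra, rb in zip(img_a, img_b)]
    return diff, lit

left  = [[10, 200], [10, 10]]   # first tube fired: shadow on one side
right = [[200, 10], [10, 10]]   # second tube fired: shadow on the other
edges, composite = combine(left, right)
print(edges)       # → [[190, 190], [0, 0]]
print(composite)   # → [[200, 200], [10, 10]]
```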
[0095] Filters
[0096] In one embodiment, a source filter 270 (FIG. 2) may be
inserted in a filter slot 272 so that the filter 270 is interposed
between the light source 216 and the sample plate 212. The various
filters 270 may be inserted and removed from the filter slot 272 by
a plate handler. Thus, the filter 270 may be automatically removed,
or exchanged with another filter, by the imaging system 200. The
source filter 270 may be any type of filter, such as a wavelength
specific filter (e.g. red, blue, yellow, etc.) or a polarization
filter.
[0097] Flash Mode
[0098] In one embodiment of the imaging system 200, the light
source 216 includes one or more illuminators 1402 (e.g.,
fluorescent tubes) adapted to provide flash lighting. That is, the
illuminators 1402 are controlled to illuminate the sample plate 212
only momentarily as the digital camera 214 captures an image of a
well in the sample plate 212. This arrangement provides benefits
over known devices in which illuminators remain in the on-position
throughout the entire time that the sample plate 212 is handled by
an imaging system. In the imaging system 200, since the
illuminators 1402 are turned on for only a fraction of a second per
image, very little heat radiation is transferred to the wells of
the sample plate 212. Hence, one benefit of this configuration is
that the imaging system 200 can provide high illumination levels
for the camera 214 while minimizing energy or radiation transfer to
the samples in the sample plate 212. An exemplary control circuit
1600 that provides controlled flash lighting is described below
with reference to FIG. 16.
[0099] Flash Lighting Circuitry
[0100] FIG. 16 is a functional block diagram of an illumination
duration ("flash") control circuit 1600 for an illuminator 1402.
Although only one illuminator 1402 and control circuit 1600 are
shown, multiple illuminators 1402 can be used and independently
controlled using additional control circuits 1600. The illuminator
1402 can be, for example, a xenon tube having a length greater than
the maximum width of the sample plate 212 to be used in the imaging
system 100, 150, or 200. By having such a dimension, the
illuminator 1402 can be located underneath and along one axis of
the sample plate 212 to illuminate all the wells in one row or
column of the sample plate 212 without repositioning the
illuminator 1402.
[0101] A first end of the illuminator 1402 is connected to a first
capacitor 1602 and a first resistor 1604. The opposite end of the
first resistor 1604 is connected to a power supply 1606. The power
supply 1606 may be controlled by a dedicated RS232 line, for
example. The opposite or second end of the first capacitor 1602
that is not connected to the illuminator 1402 is connected to
ground or a voltage common.
[0102] The second end of the illuminator 1402 is connected to the
anode of a first silicon controlled rectifier ("SCR") 1607 and a
first terminal of a second capacitor 1608. An SCR is
a solid state switching device that can provide fast, variable
proportional control of electric power. A resistor 1620 is
connected between the first terminal of the second capacitor and
the cathode of a second SCR 1610. The second terminal of the second
capacitor 1608 is connected to an anode of the second SCR 1610. The
cathode of the first SCR 1607 is connected to the ground or voltage
common potential. The cathode of the second SCR 1610 is connected
to the cathode of the first SCR 1607 and is similarly connected to
ground or the voltage common potential. The anode of the second SCR
1610 is also connected to a second resistor 1614 that connects the
anode of the second SCR 1610 to the power supply 1606.
[0103] A trigger 1612 of the illuminator 1402 is connected to the
gate of the first SCR 1607 so that both can be triggered
simultaneously. This common connection controls the trigger 1612 of
the illuminator 1402 and the start of illumination. The gate of the
second SCR 1610 controls a stop or end of illumination.
[0104] The duration of illumination provided by the illuminator
1402 can be controlled as follows. Initially, the first and second
SCRs 1607 and 1610, respectively, are not conducting. The first
capacitor 1602 is charged up to the level of the voltage of the
power supply 1606 using the first resistor 1604. The power supply
1606 can, for example, charge the first capacitor to 300 volts or
more.
[0105] The size of the first capacitor 1602 relates to the amount
of energy that can be transferred to the illuminator 1402. The
illuminator 1402 provides an illumination based in part on the
amount of energy provided by the first capacitor 1602. The first
capacitor 1602 can be one capacitor or a bank of capacitors. The
first capacitor 1602 can be, for example, a 600 .mu.F
capacitor.
[0106] The sizes of the resistors 1620 and 1614 are determined in
part by the desired voltage rise time on the second capacitor 1608.
Smaller resistors 1620 and 1614 allow the second capacitor 1608 to
charge quickly. However, the second SCR 1610 can inadvertently
trigger if the voltage impulse at its anode is too great. Thus, the
values of the resistors 1620 and 1614 are typically chosen to allow
the second capacitor 1608 to recharge before the next image flash
trigger, but not to recharge so quickly as to inadvertently trigger
conduction in the second SCR 1610.
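The rise-time trade-off in paragraph [0106] is the standard RC charging relation, v(t) = V·(1 − e^(−t/RC)). The sketch below estimates the recharge time of the second capacitor 1608 for a candidate resistor value; the 10 kΩ resistance is an assumption for illustration, while the 20 µF capacitance is the value the text gives for capacitor 1608.

```python
import math

# Illustrative RC charging estimate for sizing resistors 1614 and 1620
# against the second capacitor 1608 (paragraph [0106]).  From
# v(t) = V * (1 - exp(-t / RC)), the time to reach a given fraction of
# the supply voltage is t = -RC * ln(1 - fraction).

def time_to_charge(r_ohms, c_farads, fraction=0.99):
    """Seconds for the capacitor to reach `fraction` of supply voltage."""
    return -r_ohms * c_farads * math.log(1.0 - fraction)

r = 10_000          # assumed 10 kOhm charging resistance (illustrative)
c = 20e-6           # 20 uF second capacitor, per paragraph [0108]
t = time_to_charge(r, c)
print(f"recharge to 99%: {t * 1000:.0f} ms")   # roughly 921 ms
```

A smaller resistance shortens this time but steepens the voltage impulse at the anode of the second SCR 1610, which is the inadvertent-trigger risk the paragraph describes.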
[0107] The resistor 1620 provides an electrical path from the anode
of the first SCR 1607 to ground or voltage common to allow the
second capacitor 1608 to charge.
[0108] The illuminator 1402 is ready to trigger once the first
capacitor 1602 is charged. The second capacitor 1608 is charged by
the power supply 1606 through the second resistor 1614 concurrent
with the charging of the first capacitor 1602. The second capacitor
1608 is chosen to be large enough to generate a reverse potential
that shuts off the first SCR 1607 and, thus, to terminate
illumination by the illuminator 1402. The second capacitor 1608 can
be a single capacitor or can be a bank of capacitors. The second
capacitor 1608 can be, for example, a 20 .mu.F capacitor.
[0109] After the first and second capacitors 1602 and 1608 have
been charged, the duration of illumination can be controlled. The
illuminator 1402 initially illuminates when the trigger signal is
provided to the control of the illuminator 1402 and the gate of the
first SCR 1607. The illuminator 1402 can include a triggering
circuit that triggers the illuminator 1402 in response to a logic
signal. If the illuminator 1402 does not include this circuit, an
external triggering circuit can be included.
[0110] The first SCR 1607 conducts in response to the trigger
signal. The first SCR 1607 then continues to conduct even in the
absence of a gate signal. The first SCR 1607 can be shut off by
interrupting the current through the SCR or by reducing the voltage
drop across the first SCR 1607 to below the forward voltage of the
device.
[0111] The second SCR 1610 is controlled by a stop signal generator
1616 to connect the second capacitor 1608 in parallel with the
first SCR 1607. However, the second capacitor 1608 is charged in
opposite polarity to the voltage drop across the first SCR 1607.
Thus, when the second SCR 1610 initially conducts, the voltage from
the second capacitor 1608 is placed in opposite polarity across the
first SCR 1607 thereby shutting off the first SCR 1607.
[0112] After the first SCR 1607 is triggered by a gate signal and
begins to conduct, the second end of the illuminator 1402 and the
first terminal of the second capacitor 1608 are pulled to ground
via the first SCR 1607. The illuminator 1402 then illuminates in
response to the current flowing through the illuminator 1402. The
second SCR 1610 controls turn-off of the illuminator 1402. The
second SCR 1610 begins to conduct when a stop signal is applied to
the gate of the second SCR 1610. This pulls the second terminal of
the second capacitor 1608 to ground. Because a capacitor resists
instantaneous voltage changes, the voltage across the second
capacitor 1608 momentarily causes the voltage at the anode of the
first SCR 1607 to be pushed below the ground or voltage common
potential. A negative voltage at the anode of the first SCR 1607
results in a loss of current flowing through the first SCR 1607,
which results in shut down of the first SCR 1607. The second
capacitor 1608 discharges almost immediately. The illuminator 1402
shuts off when the first SCR 1607 turns off because there is no
longer a current path through the illuminator 1402.
[0113] Thus, a microprocessor, controller, or microcontroller can
be programmed to control the trigger 1612 and stop signal generator
1616. The processor controls the trigger signal to initiate
illumination with the illuminator 1402. The processor then controls
the stop signal to control termination of the illuminator 1402. The
processor can thus control the trigger and stop signals to control
the duration of the illumination. The processor can control the
duration of the illumination (a "flash") in predetermined intervals
or can control the duration of the illumination over a range of
time. For example, the processor can control the duration of the
flash in microsecond steps across an interval of approximately 20
μs to 600 μs. Alternatively, the processor can control the lower
range of the duration of the flash to be 0, 20, 40, 50, 75, 100,
150, 200, 250, 300, 350, 400, 450, 500, or 550 μs. In another
alternative, the processor can control the upper range of the
duration of the flash to be 40, 50, 75, 100, 150, 200, 250, 300,
350, 400, 450, 500, 550, or 600 μs. In one embodiment, the
digital camera 214 issues the signal to turn on the illuminator
1402 so that the "flash" will be in synchronization with the
electronic shutter of the digital camera 214.
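The duration control described above can be sketched in software. The following is an illustrative routine, not the patented implementation; the function and constant names are assumptions, and the range constants reflect the approximately 20-600 microsecond interval mentioned above.

```python
# Hypothetical sketch: quantize a requested flash duration to whole-
# microsecond steps and clamp it to the supported range.
FLASH_MIN_US = 20
FLASH_MAX_US = 600

def flash_duration_us(requested_us: float) -> int:
    """Return a flash duration in 1-microsecond steps, clamped to range."""
    stepped = round(requested_us)  # microsecond-step resolution
    return max(FLASH_MIN_US, min(FLASH_MAX_US, stepped))
```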
[0114] The power supply 1606 can be a controllable high voltage
power supply. The microprocessor, controller, or microcontroller
can also control the output voltage of the power supply 1606 to
further control the illumination provided by the illuminator 1402.
For example, the microprocessor can control the output voltage of
the power supply 1606 to vary the illumination provided by the
illuminator 1402 for the same illumination duration. Thus, for a
given illumination duration, the microprocessor can control the
power supply 1606 to a lower output voltage to minimize the
illumination. Similarly, for the same illumination duration, the
microprocessor can control the power supply 1606 to a higher output
voltage, thereby increasing the illumination.
[0115] The microprocessor can control the output voltage of the
power supply 1606 over a range of, for example, 180-300 volts. The
illuminator 1402 may not consistently illuminate for voltages below
180 volts when the illuminator 1402 is a xenon flash tube. The
microprocessor can control the output voltage of the power supply
1606 using a digital control word. Thus, the microcontroller can
control the output voltage of the power supply 1606 in steps
determined in part by the number of bits in the control word and
the tunable range of the power supply 1606. The microcontroller
can, for example, provide a 10-bit control word, an 8-bit control
word, a 6-bit control word, a 4-bit control word, or a 2-bit
control word. Alternatively, the power supply 1606 output voltage
can be continuously variable over a predetermined range.
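The relationship between control-word width and voltage step size can be made concrete. The sketch below is illustrative only; the function names are assumptions, and the 180-300 volt range is the example range given above.

```python
# Hypothetical sketch: an n-bit control word spanning the tunable range
# of the power supply yields (2^n - 1) equal voltage steps.
def voltage_step(v_min: float, v_max: float, bits: int) -> float:
    """Step size of an n-bit control word over the tunable range."""
    return (v_max - v_min) / (2 ** bits - 1)

def word_to_voltage(word: int, v_min: float, v_max: float, bits: int) -> float:
    """Map a digital control word to an output voltage."""
    if not 0 <= word < 2 ** bits:
        raise ValueError("control word out of range")
    return v_min + word * voltage_step(v_min, v_max, bits)
```

For a 4-bit word over 180-300 volts, each step is 8 volts; a wider word gives finer steps over the same range.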
[0116] Thus, the microcontroller can control a level of
illumination by controlling the illumination duration, the power
supply 1606 output voltage, or a combination of the two. The
microprocessor's ability to control the combination of the two
permits a wider range of brightness outputs than if only one
parameter were controllable. The microprocessor's ability to control
both illumination duration and power supply 1606 output voltage is
advantageous for different lens zoom conditions. When magnification
is low, such as when the lens is zoomed out, a relatively small
amount of light is required. When magnification is high, a
relatively large amount of light is required to capture an image.
Filters and varying apertures may also be used to adjust the
amount of light from the light source.
[0117] Operation
[0118] The imaging system 200 includes software modules that
control and direct the lens assembly 230 to perform the following
functions. In one embodiment, the imaging system 200 is configured
to automatically control the brightness of the image. For example,
after the camera 214 captures an image of a well of the sample
plate 212, the software determines whether the brightness is within
predetermined thresholds. If the brightness does not fall within
the thresholds, the controllers and logic of the imaging system 200
iteratively adjust the illumination intensity of the illuminators
1402 to adjust the brightness of the images until the brightness
falls within the thresholds. In some embodiments, the brightness of
the image may be evaluated based on a predetermined region (or set
of pixels) of the image captured.
[0119] The brightness of the illuminators 1402 may be adjusted when
capturing a plurality of images of the same sample droplet. In one
embodiment, for example, in the imaging system 150 shown in FIG. 1B,
the controllers and logic 160 control the assembly 155 to capture
two images of a droplet in a well plate of the sample plate 212.
The imaging system 150 captures one image with the light source 216
lighting the sample with a first brightness level. The imaging
system 150 captures a second image with the light source 216
lighting the sample with a second brightness level. In one
embodiment the controllers and logic 160 can then combine the data
from both images and perform an analysis based on the combined
data, which may result in enhanced characterization of the sample.
In some embodiments, the brightness used for the second image may
be logically controlled based on analyzing the brightness of the
first image, determining if a lighter or darker second image may
result in enhanced characterization of the sample, and adjusting
the light source 216 to light the sample accordingly.
[0120] The imaging system 200 can also be configured with software
to automatically focus the image. An exemplary autofocus routine is
as follows. Once the lens assembly 230 is positioned over a sample
of the sample plate 210, the objective lens 231 is moved along its
imaging axis to a predetermined starting position. The camera 214
then acquires an image of the sample and/or well at that focus
position. In one embodiment, the software obtains a "focus score."
This may be done, for example, by examining the brightness values
of a set of pixels (e.g., a 500×3 pixel area) in the captured
image, applying a low pass filter, and computing the sum of the
squares of the differences in brightness of adjacent pixels for the
set of pixels. The position and focus score data points are stored
in an array. The objective lens 231 is moved to the next
predetermined incremental position on its imaging axis, and the
process of acquiring an image, computing the focus score, and
storing the position and focus score values is repeated. This
process continues until the objective lens 231 has been moved to
all the predetermined or desired positions, e.g., until it reaches
a predetermined end position by incrementally moving in a
predetermined step size from the starting position. In one
embodiment, the step size depends at least in part upon a
predetermined maximum number of images to be acquired during the
autofocus routine.
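The focus score computation described above can be sketched as follows. This is an illustrative sketch under stated assumptions: a simple moving-average stands in for the low-pass filter, and the function operates on a one-dimensional strip of brightness values.

```python
# Sketch of the described focus score: low-pass filter a strip of pixel
# brightness values, then sum the squares of the differences in
# brightness of adjacent pixels. A sharper image yields larger
# adjacent-pixel differences and therefore a higher score.
def focus_score(pixels, kernel=3):
    half = kernel // 2
    # Simple moving-average low-pass filter (an assumption; the patent
    # does not specify the filter type).
    smoothed = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - half):i + half + 1]
        smoothed.append(sum(window) / len(window))
    # Sum of squared differences of adjacent smoothed pixels.
    return sum((b - a) ** 2 for a, b in zip(smoothed, smoothed[1:]))
```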
[0121] Next, the software searches the lens position/focus score
array to identify the lens position with the best focus score. In
one embodiment, the software then proceeds to compute the lens
positions that are midway from the best focus score position to
positions adjacent to it in the array. That is, the software
examines the array of positions already imaged, finds the nearest
position greater than the lens position associated with the best
focus score, and calculates a "midpoint" position between them. A
similar process is performed with regard to the nearest lens
position that is less than the best focus score position. The
software then acquires images at the midpoint positions and obtains
corresponding focus scores. The software once again evaluates the
array to identify the image with the best focus score, now using a
step size that is one-half of the initial step size. These tasks
are repeated until, for example, a maximum number of images
acquired during autofocus, or a minimum step size, has been
reached.
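The coarse-to-fine refinement described above can be sketched as follows. This is an illustrative sketch, not the patented routine; `score_at` is a hypothetical callable that images the sample at a lens position and returns its focus score.

```python
# Sketch of the coarse-to-fine autofocus search: score positions at the
# initial step size, then repeatedly halve the step around the best
# position until a minimum step size is reached.
def frange(start, end, step):
    pos = start
    while pos <= end:
        yield pos
        pos += step

def refine_focus(score_at, start, end, step, min_step):
    # Initial pass: score every position at the coarse step size.
    scores = {pos: score_at(pos) for pos in frange(start, end, step)}
    while step > min_step:
        best = max(scores, key=scores.get)
        step /= 2  # halve the step size on each refinement pass
        # Score the midpoints on either side of the current best position.
        for midpoint in (best - step, best + step):
            if midpoint not in scores:
                scores[midpoint] = score_at(midpoint)
    return max(scores, key=scores.get)
```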
[0122] In some embodiments, the imaging system 200 performs the
processes of autofocusing and automatically adjusting the
brightness, as described above, for each well sample of a sample
plate 212 received by the imaging system 200. After the desired
brightness and focus are set, the imaging system 200 then captures
an image and stores it in, for example, the data storage 190. In
one embodiment, the automatically determined brightness and focus
are also stored for each sample. In another embodiment, the
software of the imaging system 200 calculates and stores a value
associated with the mean of the brightness and focus positions for
the aggregate of well samples of the first plate. This value is
then associated with each of the position/focus score data points
in the array. Subsequent plates are examined using the mean
brightness and focus as initial imaging values.
[0123] The imaging system 200 may also include additional
functionality related to automatically finding the edges of a
droplet in a well of a sample plate 212. In one embodiment, after
the edges of the drop have been found, the imaging system 200 finds
the centroid of the droplet and moves the lens assembly 230 to the
centroid. The imaging system 200 then determines the magnification
required to image substantially only that area corresponding to the
droplet, adjusts the zoom, and acquires the image.
[0124] In another embodiment, the imaging system 200 may be
configured to perform automatic adjustment of aperture. In this
embodiment, the imaging system 200 receives settings for either
maximum image resolution or maximum depth of field. The imaging
system 200 then determines the corresponding aperture by, for
example, looking up one or more tables having values correlating
aperture with maximum resolution and/or maximum depth of field. Of
course, magnification data may be part of these tables.
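A minimal sketch of such a table lookup follows. The table values and key names are illustrative placeholders, not from the specification.

```python
# Hypothetical lookup correlating an imaging goal (maximum resolution
# vs. maximum depth of field) and magnification with an aperture
# setting (f-number). All values are illustrative placeholders.
APERTURE_TABLE = {
    ("max_resolution", "low_zoom"): 4.0,
    ("max_resolution", "high_zoom"): 2.8,
    ("max_depth_of_field", "low_zoom"): 11.0,
    ("max_depth_of_field", "high_zoom"): 8.0,
}

def select_aperture(goal: str, zoom: str) -> float:
    """Return the aperture for the given goal and magnification."""
    return APERTURE_TABLE[(goal, zoom)]
```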
[0125] In yet another embodiment, the imaging system 200 may be
configured to perform automatic zoom of a substance in a sample
stored in a well of the sample plate 212. In one embodiment, for
example, the imaging system identifies a "crystal-like object" in
the sample, calculates its centroid, moves the lens assembly 230
and digital camera 214 to the centroid, adjusts the zoom level, and
captures an image of the "crystal-like object." In another
embodiment, the imaging system 200 can be configured to capture an
image of a sample or a crystal-like object, perform image analysis
of the image, adjust imaging parameters (e.g., focus, depth of
field, aperture, zoom, illumination filtering, image filtering,
brightness, etc.) and retake an image of the sample or crystal-like
object. The imaging system 200 can perform this process iteratively
until predetermined thresholds (e.g., contrast, edge detection,
etc.) are met. In some embodiments, the images captured in an
iterative process can be either analyzed individually, or can be
combined with other images and the resulting image analyzed.
[0126] Thus, in one embodiment of the imaging system 200, the
imaging system receives a sample plate 212 and for each well sample
performs the following functions: automatic adjustment of
brightness and aperture, autofocus, automatic detection of the
sample droplet, and acquisition and storage of images. The imaging
system 200 stores the aperture, brightness, focus position, drop
position and/or size. The imaging system 200 may then use mean
values of these factors as initial imaging settings for subsequent
plates.
[0127] To increase the amount of data available for analysis of the
sample, or crystal detection, in some embodiments an illumination
source filter 270 (FIG. 2) may be inserted in the filter slot 272
(not shown) so that the filter 270 may be interposed between the
light source 216 and the sample plate 212. In one embodiment the
various filters 270 may be inserted and removed from the filter
slot 272 by a plate handler. Thus, the filter 270 may be
automatically removed or exchanged by the imaging system 200.
Alternatively, or additionally, an image filter (such as those that
may be placed in the photo-filter carriage 237) may be interposed
between the sample droplet in the sample plate 212 and the
objective lens 231. In one embodiment, the image filter includes a
polarization filter that provides a variable amount of polarization
on the light incident on the objective lens 231. The use of these
filters can be automatically controlled by imaging software
routines and/or determined by operator defined variables.
[0128] The motorized control of aperture, focus, and zoom of the
lens assembly 230 in conjunction with remote control of the light
source 216 (e.g., brightness and direction of illumination) allows
dynamic optimization of contrast, field of view, depth of field,
and resolution.
[0129] Imaging System Integrated with Automated Sample Analysis
System
[0130] FIG. 17 depicts a functional block diagram of an automated
sample analysis system 1700 having an imaging system 100, 150, or
200. The system 1700 includes controllers and logic 1760 for
controlling various subsystems housed in a cabinet 1702. The system
1700 can further include a shelf access door 1712 for allowing
access to a removable shelf system 1720 and/or a stationary shelf
system 1722. In one embodiment, a removable shelf access door 1710
can be provided. The system 1700 can include a transport assembly
1730 that can consist of a plate handler 1732, an elevator assembly
1734, and a rotatable platform 1736. The system 1700 can further
include an environmental control subsystem 1765 that employs a
refrigeration unit 1762 and/or a heater 1764.
[0131] In one embodiment, the system 1700 also includes an imaging
system 200 as has been described above. The imaging system 200,
having subcomponents 210, 214, 216, 218, 220, and 230, which are
fully detailed above with reference to FIGS. 2-16, can be housed in
the cabinet 1702. This arrangement ensures that the samples in the
sample plates remain at all times within the confines of a
controlled environment. That is, once a sample plate is stored in
the cabinet 1702, it is unnecessary to expose the sample plate to
the environment external to the cabinet since the system 1700 is
capable of automatically (i.e., without operator intervention)
carrying out the imaging of the sample within the cabinet 1702.
[0132] Embodiments of an automated sample analysis system 1700
having an imaging system in accordance with the invention are
described in the related United States Provisional Patent
Application entitled "AUTOMATED SAMPLE ANALYSIS SYSTEM AND METHOD,"
having U.S. Patent Application No. 60/444,519, which is referenced
above.
[0133] Sample Analysis System
[0134] FIG. 18 depicts a block diagram of an imaging and analysis
system 1800, according to one embodiment of the invention. The
imaging system 1805 can be an imaging system 100, 150, or 200 as
described above, or another suitable imaging system that provides
similar functionality to the imaging systems described herein. The
system 1800 includes an imaging system controller 1820 that
provides logical control of the imaging system 1805 to, for
example, direct the imaging system 1805 to image a particular
sample on a particular sample plate 212, all the samples on the
sample plate 212, or image a subset of the samples. The imaging
controller 1820 may also control the imaging parameters used by the
imaging system 1805. Such imaging parameters can include, for
example, focus, depth of field, aperture, zoom, illumination
filtering, image filtering and brightness.
[0135] The system 1800 also includes an image storage device 1810
that stores images of samples captured by the imaging system 1805.
The image storage device 1810 can be any suitable computer
accessible storage medium capable of storing digital images, e.g.,
a random access memory (RAM), hard disk, floppy disk, optical disk,
compact disks, or magnetic tape. The system 1800 shows the image
storage device 1810 separate from the imaging system 1805. In some
embodiments, the image storage device 1810 can be included in the
imaging system 1805, or it may be included in a system that may
also include an image analyzer 1815, the imaging system controller
1820, or a scheduler 1825. In one embodiment, a computer includes
all the control, scheduling, analysis and imaging software for the
system 1800. Alternatively, the software for the system 1800 may
reside and run on a plurality of computers that are in
communication with each other. In some embodiments, the imaging
system 1805 may be configured to provide captured images directly
to the image analyzer 1815, or it may be configured to typically
store images on the image storage device 1810 and provide images to
the image analyzer 1815 as directed by the imaging system
controller 1820.
[0136] The scheduler 1825 communicates with the image analyzer 1815
and the imaging system controller 1820 to control the analysis and
imaging of samples based on user provided input. For example, the
scheduler can schedule the imaging of a particular droplet or a
plurality of droplets on a sample plate, and coordinate the imaging
of said droplet or plurality of droplets with its subsequent
analysis. The scheduler 1825 can use a database 1830 to store
information relating to scheduling the images and image specific
information, for example, the size of pixels in each of the stored
images, in a suitable format for quick retrieval. Knowing the pixel
size can allow the analyzer 1815 to reduce sampling to an
appropriate density and size for particular objects in the image.
The information in the database 1830 can be available with each
request to process an image. The database 1830 can reside on the
same computer as the scheduler 1825 or on a separate computing
device.
[0137] The scheduler 1825 provides an analysis request to the image
analyzer 1815. According to one embodiment, the analysis request
includes an image list, including the resolution of each image and
the absolute X,Y location of its center. The image list typically
contains only one image but may contain a plurality of images. The
analysis request can also contain an analysis method including a
list of parameters that specify options controlling how to analyze
the image(s) and what to report. Additionally, the analysis request
can include the Uniform Resource Locator ("URL") of a definition
file 1835, i.e., an electronic address that may be on the Internet,
such as an ftp site, gopher server, or Web page. The definition
file 1835 defines parameters used by the image analyzer 1815, e.g.,
neural network dimensions, weights and training resolution (e.g.,
pixel granularity, or the spacing between pixels, of images used to
train the neural network). The definition file 1835 may be a single
file or a plurality of files, but will be referred to hereinafter
in the singular.
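The structure of the analysis request described above can be sketched as a pair of record types. The field names are assumptions for illustration; only the contents (image list with per-image resolution and center location, analysis method parameters, and the definition file URL) come from the description above.

```python
# Hedged sketch of the analysis request: field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class ImageEntry:
    path: str
    resolution_px_per_mm: float
    center_xy: tuple  # absolute (X, Y) location of the image center

@dataclass
class AnalysisRequest:
    images: list                    # typically a single ImageEntry
    method: str                     # the analysis method to apply
    parameters: dict = field(default_factory=dict)  # analysis options
    definition_file_url: str = ""   # URL of the definition file
```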
[0138] The image analyzer 1815 also receives an analysis method
file(s) 1840. The analysis method file may be a single file or a
plurality of files, but will be referred to hereinafter in the
singular. The analysis method file 1840 includes parameters that
can be used by the various image analysis modules contained in the
image analyzer 1815, e.g., a content analysis module 1930, a
notable regions module 1935, and a crystal object analysis module
1940 (FIG. 19), described below, according to one embodiment. The
image analyzer 1815 can also include functionality that determines
the content of an image in terms of objects and/or regions of, for
example, crystals or precipitate, or clear regions, that is,
regions that do not show any features. The image analyzer 1815
includes a neural network to identify features, e.g., crystals,
precipitate, and edges, that are depicted in the image, according
to one embodiment. Preferably, the image analyzer 1815 is
configured to identify objects and regions of interest in an image
quickly enough to allow the system 1800 to re-image specific
objects or regions, if desired, while the corresponding sample
plate is still in the imaging system 1805.
[0139] The image analyzer 1815 provides an analysis response to the
scheduler 1825. The analysis response, described in further detail
below, typically includes the parameters used for the analysis
and the results of the particular analysis performed, e.g., the
count of crystal, precipitate, clear and edge samples, regions of
crystals, and/or a list and description of objects found in the
image.
[0140] The analysis results can be reviewed using an output display
1845 that can be co-located with the scheduler or at a remote
location. The output display may be coupled to the system 1800 via
a web server, or via a LAN or other small network topology.
Embodiments of a remote output display in accordance with the
invention are described in related United States Provisional Patent
Application entitled "REMOTE CONTROL OF AUTOMATED LABS," having
Application No. 60/444,585.
[0141] Illustrative Embodiment
[0142] A computer containing analysis and control modules, and
methods related to controlling an imaging and analysis system are
illustrated and described with reference to FIGS. 19-22, according
to one embodiment of the invention. FIG. 19 depicts a computer 1900
that includes a processor 1905 in communication with memory 1910,
e.g., a hard disk and/or random access memory (RAM). The processor
1905 is also in communication with an image analysis module 1960
that can include various modules configured to perform the
functionality of the image analyzer 1815 (FIG. 18) described
herein.
[0143] The computer 1900 may contain conventional computer
electronics that are not shown, including a communications bus, a
power supply, data storage devices, and various interfaces and
drive electronics. Although not shown in FIG. 19, it is
contemplated that in some embodiments, the computer 1900 may
include a video display (e.g., monitor), a keyboard, a mouse,
loudspeakers or a microphone, a printer, devices allowing the use
of removable media including, but not limited to, magnetic tapes
and magnetic and optical disks, and interface devices that allow
the computer 1900 to communicate with another computer, including
but not limited to a computer network, a LAN, an intranet, or a
WAN, e.g., the Internet.
[0144] The computer 1900 is in communication with an imaging
storage device, for example, image storage device 1810 (FIG. 18),
and is configured to receive an image of a sample from the storage
device and determine the contents of the sample, using one or more
analysis processes. The computer 1900 can be co-located with the
image storage device, located near the image storage device, e.g.,
in the same building, or geographically separated from the image
storage device. The computer 1900 can receive the image from the
image storage device via, e.g., a direct electronic connection or
through a network connection, including a local area network, or a
wide area network, including the Internet. It is also contemplated
the computer 1900 can receive the image via a suitable type of
removable media, e.g., a 3.5" floppy disk, compact disc, ZIP drive,
magnetic tape, etc.
[0145] It is contemplated the computer 1900 can be implemented with
a wide range of computer platforms using conventional general
purpose single chip or multichip microprocessors, digital signal
processors, embedded microprocessors, microcontrollers and the
like. The computer 1900 can operate independently, or as part of a
computing system. The computer 1900 may include stand-alone
computers as well as personal computers, workstations, servers,
clients, mini-computers, main-frame computers, laptop computers, or
a network of individual computers. The configuration of the
computer 1900 may be based, for example, on Intel Corporation's
family of microprocessors, such as the PENTIUM family and Microsoft
Corporation's WINDOWS operating systems such as WINDOWS NT, WINDOWS
2000, or WINDOWS XP.
[0146] The computer 1900 includes one or more modules or subsystems
that incorporate the analysis processes described herein. As can be
appreciated by a skilled technologist, each module can be
implemented in hardware or software, or a combination thereof, and
comprise various subroutines, procedures, definitional statements,
and macros that perform certain tasks. For example, in a software
implementation, all the modules are typically separately compiled
and linked into a single executable program. The processes
performed by each module may be arbitrarily redistributed to one of
the other modules, combined together with other processes in a
single module, or made available in, for example, a shareable
dynamic link library. A module may be configured to reside on the
addressable storage medium and configured to execute on one or more
processors. Thus, a module may include, by way of example, other
subsystems, components, such as software components,
object-oriented software components, class components and task
components, processes, functions, attributes, procedures,
subroutines, segments of program code, drivers, firmware,
microcode, circuitry, data, databases, data structures, tables,
arrays, and variables. It is also contemplated that the computer
1900 may be implemented with a wide range of operating systems such
as Unix, Linux, Microsoft DOS, Macintosh OS, OS/2 and the like.
[0147] The analysis module 1960 can include a pre-processing module
1925 that can filter the received image prior to further
processing. The image may be filtered to remove "noise" such as
speckles, high frequency noise or low frequency noise that may have
been introduced by any of the preceding steps including the imaging
step. Filtering methods to remove high frequency or low frequency
noise are well known in image processing, and many different
methods may be used to achieve suitable results. For example, in
one embodiment of a filtering procedure that removes speckle, for
each pixel the mean and standard deviation of every other pixel
along the perimeter of a 5×5 pixel area centered on that pixel are
computed. If the center pixel varies by more than a threshold
multiplied by the standard deviation, then it is replaced by the
mean value. Then the slope of the 5×5 image pixel
intensities is calculated and the center pixel is replaced by the
mean value of pixels interpolated on a line across the calculated
slope.
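The first stage of the despeckle procedure described above can be sketched as follows. This is an illustrative sketch of only that first stage (perimeter mean/standard-deviation test); the slope-interpolation stage is omitted, and border pixels are left unfiltered as a simplifying assumption.

```python
# Sketch of the described despeckle step: for each pixel, sample every
# other pixel along the perimeter of the 5x5 neighborhood, and if the
# center deviates from their mean by more than k standard deviations,
# replace it with the mean.
import statistics

def perimeter_5x5():
    # Walk the 16-pixel border of a 5x5 window centered at (0, 0).
    top = [(-2, dx) for dx in range(-2, 3)]
    right = [(dy, 2) for dy in range(-1, 3)]
    bottom = [(2, dx) for dx in range(1, -3, -1)]
    left = [(dy, -2) for dy in range(1, -2, -1)]
    return top + right + bottom + left

def despeckle(img, k=2.0):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            # Every other pixel of the 16-pixel perimeter -> 8 samples.
            ring = [img[y + dy][x + dx]
                    for i, (dy, dx) in enumerate(perimeter_5x5())
                    if i % 2 == 0]
            mean = statistics.mean(ring)
            stdev = statistics.pstdev(ring)
            if abs(img[y][x] - mean) > k * stdev:
                out[y][x] = mean  # replace the outlier with the mean
    return out
```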
[0148] The analysis module 1960 also includes one or more modules
that perform image analysis to determine information about the
sample contents, including content analysis module 1930, notable
regions analysis module 1935, and crystal object analysis module
1940. The content analysis module 1930 determines the count of
crystal, precipitate, clear and edge pixels in the image, and can
be optionally enabled to operate only inside a specific region of
the sample. The notable regions analysis module 1935 determines a
list of regions of a specified pixel type, e.g., crystal,
precipitate, clear and edge pixels. The crystal object analysis
module 1940 determines objects containing crystal pixels that meet
certain criteria, for example, size, area, or density.
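The content-count step can be sketched as follows, assuming per-pixel class labels ("crystal", "precipitate", "clear", "edge") have already been assigned, e.g., by the neural network classifier; the data layout is an assumption for illustration.

```python
# Minimal sketch of the content analysis count: tally pixel classes,
# optionally restricted to a specific region of the sample.
from collections import Counter

def content_counts(labels, region=None):
    """Count pixel classes; `labels` maps (x, y) points to class names,
    and `region` is an optional set of (x, y) points to restrict to."""
    if region is not None:
        labels = {p: c for p, c in labels.items() if p in region}
    return Counter(labels.values())
```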
[0149] FIG. 19 also shows that the analysis module 1960 includes a
report inner/outer non-clear ratio module 1945 that determines the ratio
of non-clear pixel density inside a sample region over non-clear
pixel density outside a sample region. The analysis module 1960
also includes a graphical output analysis module 1950 that generates
a color-coded image depicting each of the various features found in
a sample image in a specified color. These modules are further
described hereinbelow. Other analysis modules 1955 that incorporate
different image analysis processes may also be included in the
analysis module 1960. In one example, an analysis module 1955 can
analyze the change in two or more images of the same sample taken
at two different times. The analysis module 1955 can receive the
count of pixels that are classified as crystal, precipitate, clear
or edge pixels in an image of a particular region of a sample at a
time T1 and save the count information with a reference to the
region of a sample imaged. When the same region of a sample is
re-imaged at a later time T2, the analysis module 1955 receives the
count of pixels that are classified as crystal, precipitate, clear
and edge pixels in the image of the sample region at time T2. The
analysis module 1955 can compare the count information from time T1
and T2 to determine if the droplet contains a crystal(s). One
analysis method compares the total number of pixels classified as
crystal pixels at time T1 and T2 to determine if the sample
contains crystal. Another comparison method compares the percentage
of crystal pixels at time T1 to the percentage of crystal pixels at
time T2. If the count or the percentage of crystal pixels increases
beyond a threshold value, the sample will be deemed to contain
crystals. The other pixel classifications (e.g., precipitate, clear
and edge) can also be compared and evaluated to facilitate the
crystal analysis. A time-based comparison method, where the count
information is saved for one image and compared to a second
subsequent image, can be used with any sample processing
algorithm.
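The percentage-based comparison described above can be sketched as follows. This is an illustrative sketch; the threshold value and function names are assumptions.

```python
# Sketch of the time-based comparison: flag a sample as containing
# crystals if the crystal-pixel percentage grows beyond a threshold
# between imaging passes at times T1 and T2.
def crystals_grew(counts_t1, counts_t2, threshold_pct=5.0):
    """`counts_t1`/`counts_t2` map pixel class names to pixel counts."""
    def pct(counts):
        total = sum(counts.values())
        return 100.0 * counts.get("crystal", 0) / total if total else 0.0
    return (pct(counts_t2) - pct(counts_t1)) > threshold_pct
```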
[0150] In another example, the analysis module 1955 may analyze a
series of two or more images of crystal growth using a grid
approach. In this analysis method, two images I1 and I2 are divided
up into grids, and the corresponding grids in each image are
compared for change in the amount of crystal, using, for example,
the actual number of pixels or the percentage of crystal pixels.
count information can be kept for each image and used to compare to
other images taken at a different time. In any of the analysis
methods described herein, the method can include analyzing every
pixel, or skipping one or more pixels between the pixels
analyzed.
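The grid approach described above can be sketched as follows. This is an illustrative sketch under stated assumptions: the images are represented as grids of per-pixel class labels, and the cell size and threshold are arbitrary example values.

```python
# Illustrative grid comparison: divide two images I1 and I2 of the same
# sample into equal cells and report the cells whose crystal-pixel
# percentage changed by more than a threshold.
def changed_cells(i1, i2, cell=2, threshold_pct=10.0):
    h, w = len(i1), len(i1[0])
    changed = []
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            def crystal_pct(img):
                # Percentage of crystal-labeled pixels in this cell.
                pixels = [img[yy][xx]
                          for yy in range(y, min(y + cell, h))
                          for xx in range(x, min(x + cell, w))]
                return 100.0 * sum(1 for c in pixels
                                   if c == "crystal") / len(pixels)
            if abs(crystal_pct(i2) - crystal_pct(i1)) > threshold_pct:
                changed.append((y // cell, x // cell))
    return changed
```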
[0151] A scheduler module 1915 and an imaging system controller
module 1920 are also included in computer 1900, according to one
embodiment. These modules are configured to include functionality
that schedules the imaging of sample plates/droplet samples and
subsequent analysis of the images, and controls the imaging system
100, 150, 200, 1805, as described herein, e.g., for scheduler 1825
and imaging system controller 1820, respectively.
[0152] The image analysis software package may include support
software that performs training and configuring of perception and
analysis functionality, e.g., for a neural network. Some of the
algorithms included in the image analysis software modules may use
stochastic processing and may include the use of a pseudo-random
number generation to find answers. All such functions can be
provided a random number generator seed in request parameters
received by the software module. When the analysis modules are
properly configured, the same results should be obtained for a
given image given the same parameters that affect its algorithms
and any pre-processing of the image. The image analysis modules can
be configured so that an analysis method using a pseudo-random
number does not affect the results of a different analysis method
or software module.
[0153] In one embodiment, the image analysis software works with an
image size of, for example, 800 by 600 pixels, a zoomed-in
resolution of 2,046 pixels/mm (0.5 μm/pixel), and a zoomed-out
resolution of 186 pixels/mm (5.4 μm/pixel), or 1,024 by 1,024
pixels, a zoomed-in resolution of 2,460 pixels/mm (0.41
μm/pixel), and a zoomed-out resolution of 220 pixels/mm (4.5
μm/pixel). The image analysis modules may optionally use the
same neural network for both zoomed-in and zoomed-out images;
however, the quality of the results may suffer if only one neural
network is used, and it may be advantageous to train multiple
neural networks, e.g., one for zoomed-in images and one for
zoomed-out images. The image analysis software can also be adaptable to other
image sizes and pixel resolutions; however, the training of new
neural networks may be necessary in order to suitably process these
images. If the resolution of the images varies, each definition file
may include its training resolution, that is, the spacing between
sampled pixels that was used to train the neural network. This
information allows the algorithms to consider how to adapt images
of varying resolution for use with the neural networks.
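One simple way the algorithms could adapt an image to the network's training resolution is by choosing a sampling stride. The sketch below is an assumption for illustration, not the patented method; the example values are the resolutions given above.

```python
# Sketch of resolution adaptation: given an image's resolution and the
# resolution the neural network was trained at, sample every Nth pixel
# so the effective pixel spacing approximates the training granularity.
def sampling_stride(image_px_per_mm: float,
                    training_px_per_mm: float) -> int:
    """Return the pixel spacing to use when sampling the image."""
    return max(1, round(image_px_per_mm / training_px_per_mm))
```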
[0154] The analysis module receives an analysis request (FIG. 18)
containing an image list that includes the images to be analyzed.
The analysis request also includes, for each image, its resolution
in pixels/mm and the absolute X-Y location of the center of the
image. Typically, there is only one image in the image list,
however, multi-image methods may also be used. The analysis request
also includes an analysis method, which is a collection of
parameters that specify options controlling how to analyze the
images and what to report. In specifying the analysis method, a URL
of the definition file is included. The definition file defines the
neural network's dimensions, weights and training resolution, i.e.,
a pixel granularity of the images that were used to train the
neural network. Examples of the parameters are first described
generally below, and then specifically as they relate to the
content analysis module 1930, notable regions analysis module 1935,
and the crystal object analysis module 1940, according to one
embodiment.
[0155] The analysis request may include parameters that specify how
a working copy of the image is prepared for all subsequent
processing. For example, parameters can include options for a color
to grayscale conversion of the image, and resizing of the image
using pixel interpolation methods. Also, the parameters may specify
the output of an image, for example, they may specify whether and
how an image file representing the pixel interpretation should be
generated. This generated image file may be visually displayed and
further evaluated by a user. The parameters are also used by the
analysis modules, e.g., in the content analysis module, the
parameters specify whether an image is scanned and analyzed to
determine statistics of its contents in terms of crystal,
precipitate, clear and edge features. These parameters specify
whether crystal-like objects should be searched for and reported.
Options may include a scan grid, ID criteria and the maximum
number of objects to find.
[0156] The parameters may also be used by the notable region
analysis module 1935 to specify whether notable regions in an image
should be reported and, if so, the scan grid in micrometers, the
size that is the width times the height in micrometers, the ID
criteria, and the quantity of regions to report. The crystal object
analysis module 1940 can use the parameters to specify whether
effective contiguous subregions of crystals are identified and
reported as crystal objects, how this identification should be
performed, and the quantity of crystal objects to identify.
[0157] The parameters can also specify whether to report the
inner/outer non-clear ratio. If this ratio is to be reported, the
output includes a ratio of the non-clear pixel density inside a
sample region over the non-clear pixel density outside of the
sample region. For example, the ratio would be 3.0 if every 100th
pixel inside of a sample region is non-clear and every 300th pixel
outside of a sample region is non-clear. According to one
embodiment, ratios above 1 billion are truncated to that value.
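The density-ratio computation described above can be sketched as follows. This is an illustrative example only; the function name and the pixel-count interface are assumptions, not the patented implementation.

```python
def non_clear_ratio(inside_total, inside_non_clear, outside_total, outside_non_clear):
    """Ratio of the non-clear pixel density inside the sample region to the
    density outside it; ratios above 1 billion are truncated to that value.
    (Hypothetical helper; the module's real interface is not specified.)"""
    TRUNCATE = 1_000_000_000
    if outside_non_clear == 0 or inside_total == 0:
        return TRUNCATE  # no non-clear pixels outside: report the cap
    # (inside_non_clear / inside_total) / (outside_non_clear / outside_total),
    # rearranged so the arithmetic stays exact for integer counts
    ratio = (inside_non_clear * outside_total) / (inside_total * outside_non_clear)
    return min(ratio, TRUNCATE)
```

With every 100th pixel non-clear inside (3 of 300 sampled) and every 300th non-clear outside (1 of 300 sampled), the function returns 3.0, matching the example above.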
[0158] Image sampling parameters may include, for example, a color
processing parameter which specifies how each pixel is converted to
a floating point intensity value, or it may specify the linear
grayscale for image conversion. If the image is already grayscale,
pixels are converted linearly, with black mapped to, e.g., 0.0 and
white to, e.g., 1.0. If color is selected, the pixels are linearly
converted to the range 0.0 to 1.0 with equal weighting for each
color channel. Pixel
interpolation parameters may include, for example, no pixel
interpolation, that is, only a closest pixel method will be used
for pixel interpolation. This is generally the fastest
interpolation method but typically results in reduced image
quality. Interpolation methods that may be selected include
bilinear and cubic spline interpolation, which yield higher quality
images but are more computationally complex and take more time or
resources to generate. The re-size parameter includes options of
1:1, that is, the image is not resized, automatic, where the image
is resized to match the training resolution using the specified
interpolation method, and scale factor, where the image is re-sized
using this factor and specified interpolation method.
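A minimal sketch of the color-processing and re-size parameters described above follows. The equal-weight conversion tracks the text; the direction of the automatic scale factor (training resolution over image resolution) is an assumption.

```python
def to_intensity(pixel):
    """Equal-channel-weighted linear conversion of an 8-bit (R, G, B) pixel
    to a floating point intensity in [0.0, 1.0], per the color option above."""
    r, g, b = pixel
    return (r + g + b) / (3 * 255.0)

def automatic_scale(image_res_px_per_mm, training_res_px_per_mm):
    """Scale factor for the 'automatic' re-size option: resize the image so
    its resolution matches the training resolution (direction assumed)."""
    return training_res_px_per_mm / image_res_px_per_mm
```

For example, a 2,046 pixels/mm image processed against a network trained at 1,023 pixels/mm would be resized by a factor of 0.5 using the selected interpolation method.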
[0159] The analysis modules 1930, 1935, 1940 are configured to
receive an analysis request from a scheduler module 1915 and
generate a response, as described below. The content analysis
module 1930 determines counts of types of pixels in the sample
images, e.g., crystal, precipitate, clear and edge pixels, as
depicted in the image. In the illustrative embodiment described
herein, the content analysis module 1930 is implemented as a neural
network.
[0160] The content analysis module 1930 receives a set of
parameters that include parameters that indicate whether this
module should be enabled, whether the content analysis should take
place inside the sample region only or inside and outside the
sample region, and the number of pixels to be skipped during the
image analysis. If enable is set to NO, no analysis by the content
analysis module 1930 is done and nothing is reported. If enable is
set to YES, then the content analysis module analyzes the sample
image. If the inside-sample-region-only option is set to YES, the
edge of the sample region is found first, and the analysis is done
only within the sample region edge. If inside-sample-region-only is
set to NO, then checking is done inside and outside the sample
region. A process for identifying the edge of a sample region is
described hereinbelow in reference to FIG. 20, according to one
embodiment. If the number of pixels to be skipped is set to 0, all
the pixels in the image will be used. If the number of pixels to be
skipped is set to 1, every other pixel in the image will be used
for the content analysis, if 2, every third pixel will be used,
etc. The default parameter for skipped pixels is typically set to
0.
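The skip-pixels convention above (0 = every pixel, 1 = every other pixel, 2 = every third, and so on) can be sketched as a simple coordinate generator; the function name is hypothetical.

```python
def sampled_pixels(width, height, skip):
    """Yield (x, y) coordinates under the skip-pixels convention: skip=0
    samples every pixel, skip=1 every other pixel, skip=2 every third, etc."""
    step = skip + 1
    for y in range(0, height, step):
        for x in range(0, width, step):
            yield (x, y)
```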
[0161] The response of the content analysis module 1930 includes an
"echo" of the parameters used during the content analysis
processing, and the counts of each pixel type, i.e., crystal,
precipitate, clear and edge pixels found in the image. If the
inside-sample-region-only option is enabled, the edge count can be
used to assess how well the edge of the sample region was found. If
it is not enabled, the edge count may be ignored.
[0162] The notable region analysis module 1935 processes an image
and determines regions of a specified size that include the minimum
levels of crystal, precipitate or non-clear pixels. The request
parameters for the notable region analysis module 1935 can include
an enable parameter which is set to either "YES" or "NO" that
determines if notable region analysis should be performed and
reported. The request parameters can also include a region size or
area that is used to determine the size of the smallest region the
notable region analysis module will identify. A skip-pixel
parameter can be included to control the number of pixels that will
be skipped during processing, where "0" means to check all of the
pixels, "1" means to sample every other pixel, that is, sample the
pixels with one unsampled pixel between them, etc. Typically, the
default value for skip-pixel is "0."
[0163] The request parameters can also include the maximum number
of regions to report and the minimum percentages of crystal pixels,
precipitate pixels and non-clear pixels to report. Typically,
pixels determined to be edge-type pixels are ignored. The notable
region analysis module 1935 can be configured to identify regions
with the highest percentage of each specified pixel type. If a
region contains less than the minimum percentage of pixels, it is
not saved and the search for regions ends. Regions typically do not
go outside of the input image. Newly found regions generally do not
overlap existing regions. The report of results from the notable
regions analysis module includes all the request parameters and a
list of the regions identified. The results for each region can
include its absolute position, size, the number of crystal pixels
and the total pixels sampled, not including edge pixels.
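As an illustration of the notable-region search, the sketch below scans a grid of fixed-size, non-overlapping regions and keeps those meeting a minimum crystal-pixel percentage, with edge pixels excluded from the totals. The actual module's search (e.g., its handling of overlap and of the precipitate and non-clear minimums) is more general, and all names here are hypothetical.

```python
def notable_regions(labels, region_w, region_h, min_pct, max_regions):
    """Scan for non-overlapping region_w x region_h regions whose percentage
    of 'crystal' pixels meets min_pct; return up to max_regions, best first.
    `labels` is a 2-D grid of 'crystal'/'precipitate'/'clear'/'edge' strings."""
    h, w = len(labels), len(labels[0])
    candidates = []
    # stepping by the region size keeps regions inside the image and
    # non-overlapping, per the constraints described above
    for y in range(0, h - region_h + 1, region_h):
        for x in range(0, w - region_w + 1, region_w):
            crystal = total = 0
            for yy in range(y, y + region_h):
                for xx in range(x, x + region_w):
                    lab = labels[yy][xx]
                    if lab == 'edge':
                        continue  # edge-type pixels are ignored
                    total += 1
                    crystal += (lab == 'crystal')
            if total and 100.0 * crystal / total >= min_pct:
                candidates.append((100.0 * crystal / total, x, y, crystal, total))
    candidates.sort(reverse=True)
    return candidates[:max_regions]
```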
[0164] The crystal object analysis module 1940 identifies small
regions in the image that are rich in crystal pixels. The small
regions, or objects, comprise one or more "cells." The request
parameters for the crystal object analysis module can include an
enablement parameter which determines if this analysis should be
performed and reported. The request parameters also include a
skip-pixels parameter that operates as previously described above,
parameters that control the size of the cells identified, for
example, a cell-minimum-size parameter to control the smallest
width or height of a cell, a cell minimum area which indicates the
smallest overall area of a cell, a cell minimum density parameter
which indicates the proportion from 0 to 1 of crystal pixels the
cell must contain in order to be reported, and an
object-minimum-size parameter which indicates one or more
dimensions that the overall object must achieve in order to be
reported. The request parameters can also include a pseudo-random
generator seed which is used for the crystal object analysis
stochastic processing. The crystal object analysis module 1940
typically includes the limitation that the center of a cell cannot
be inside another cell. Identified cells that touch are grouped and
identified as a single crystal object, and the largest overall
dimension of the crystal object is computed. If the largest overall
dimension is less than the minimum size, the object is discarded.
The crystal object analysis processing can also compute an object
area as the sum of the cell density times the cell area, and
further compute the object centroid. The results from the crystal
object analysis module 1940 can include all the request parameters
provided to the module, a list of objects identified and their
description. The list is sorted in descending order by an object's
area. Each object description includes the object area
(.mu.m.sup.2), the centroid (X, Y in .mu.m) and a list of cells
that make up each object. Each cell is described with its absolute
position and size (.mu.m), crystal pixel count and total pixel
count.
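The object area and centroid computation described above (object area as the sum of the cell density times the cell area) might look like the following sketch; the cell tuple layout is an assumption.

```python
def object_summary(cells):
    """Area and centroid of one crystal object from its grouped cells:
    area = sum(cell density * cell area); centroid is the density-weighted
    center. Each cell is (x, y, w, h, crystal_count, total_count), with
    positions and sizes in micrometers (a hypothetical layout)."""
    area = 0.0
    cx = cy = 0.0
    for x, y, w, h, crystal, total in cells:
        density = crystal / total          # proportion of crystal pixels
        weight = density * w * h           # this cell's contribution to area
        area += weight
        cx += weight * (x + w / 2.0)       # weight the cell's center
        cy += weight * (y + h / 2.0)
    return area, (cx / area, cy / area)
```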
[0165] The graphical output module 1950 generates a representation
of the analyzed image which can be displayed and further analyzed.
For example, grayscale and/or color coding pixel characteristics
may be adjusted by the graphical output module 1950. The analysis
request for the graphical output module 1950 includes an image path
parameter that defines where the image to be analyzed is found. If
the image path parameter is empty, no further processing is done. A
base value parameter indicates whether a "base image," i.e., an
image used to generate the representation of the analyzed image, is
either black, gray or white. If the base value is gray, the base
image begins as a grayscale rendition of the resampled image.
Otherwise, the base image begins as a white or black image, as
indicated by the base value. The parameters include a gray "min"
value and a gray "max" value, which are typically from 0 to 1, and
specify the linear grayscale compression. For example, adjusting
the gray min or max values can control the color coding contrast or
flatten the image, and they are typically set to defaults of 0 for
the gray min and 0.75 for the gray max.
[0166] An opaque parameter indicates whether a pixel in the base
image should be replaced with the color coding associated with the
particular type of corresponding pixel in the analyzed image. For
example, if the opaque parameter is set to YES or the base
parameter equals black or white, the appropriate color coding
replaces the pixel. If the opaque parameter is set to NO, the color
for a base image pixel is generated by OR'ing the color with the
corresponding pixel in the analyzed image. A crystal color
parameter provided in the analysis request sets the color coding
value for pixels identified as crystals, a precipitate color
parameter sets the color coding for precipitate pixels, and an edge
color sets the color coding for pixels identified as edges. For
example, the default values for the crystal color parameter may be
blue, the precipitate color parameter may be green and the edge
color parameter may be red. The graphical output module 1950 writes
the color coded image file to the image path specified in the
request parameters, unless the path parameter is empty or invalid.
The generated color-coded image file typically does not contain
region annotations, but annotations can be superimposed on the
image file by another process, if desired. The graphical output
module 1950 provides an analysis report to the scheduler module
1915 that includes the request parameters that were used to produce
the color coded image file.
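The opaque versus OR'ed color-coding behavior can be sketched per pixel as follows. The default colors track the text; the dictionary-based dispatch and function name are illustrative choices.

```python
CRYSTAL = (0, 0, 255)      # default crystal color: blue
PRECIPITATE = (0, 255, 0)  # default precipitate color: green
EDGE = (255, 0, 0)         # default edge color: red

def code_pixel(base_pixel, classification, opaque):
    """Apply color coding to one base-image pixel. With opaque=True the
    coding replaces the pixel; otherwise the coding is OR'ed with the
    corresponding base-image pixel, channel by channel."""
    coding = {'crystal': CRYSTAL, 'precipitate': PRECIPITATE,
              'edge': EDGE}.get(classification)
    if coding is None:  # 'clear' pixels keep the base value
        return base_pixel
    if opaque:
        return coding
    return tuple(b | c for b, c in zip(base_pixel, coding))
```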
[0167] In one embodiment, the analysis modules 1930, 1935, 1940 can
function as service functions that are capable of quickly
identifying objects and/or regions within an image, so that a
scheduler module 1915 can dispatch control information to the
imaging controller module 1920, which in turn directs the imaging
system to re-image specific areas of a droplet using at least one
different imaging parameter, (e.g., the magnification or zoom level
may be different, a different configuration of lighting, such as,
off-axis lighting may be used, etc.), while the sample plate
containing the sample just analyzed is in the imaging device. In
one embodiment, an analysis module 1960 can analyze at least 10,000
images per day under typical conditions, where the images are less
than or equal to 1.0 mega pixels, i.e., the equivalent of
processing each image in 8.64 seconds, and where one instance of
the image analysis software is running on one PC. The analysis
module may be packaged and distributed in a Java 2 file. Java
message service may be used to receive requests and send the
responses from the analysis module(s). Extensible markup language
(XML) may also be used for the analysis requests and responses.
[0168] Test images are used with training software to train the
neural networks to analyze crystal growth in sample droplets. As
the general software implementation of a neural network is well known
in the art, only the training of a neural network is described,
according to one embodiment of the invention. Training software
allows the user to create, open, display, edit and save lists of
images in training/test set files, and is described herein
according to one embodiment of the invention. The test images
include identified subimages containing edge, crystal, precipitate
and clear pixels within a wide variety of images. For each image,
the user can designate "training subimages" as crystal,
precipitate, edge or clear. The resolution of the subimages can be
user-adjustable. To minimize user fatigue during image designation,
the software can include a single-click designation action that
efficiently designates the subimages as crystal, precipitate, edge
or clear. The images containing the designated training subimages
can be saved as a set of training files. The training software can
display training subimages in table form and/or as color-coded
markers on an image. Subimages may be moved by either dragging the
marker or editing the table. Subimages may also be deleted either
from the image or from the table. The training software can be
configured to allow a user to define the neural network
dimensionality, select a training set file and another file for
testing, and perform iterative training and testing using the
selected sets of files. Training data, e.g., neural network
weights, training and test error, and the number of iterations is
saved in a definition file.
[0169] To train the neural network, the intensity levels of pixels
in a selected image area, e.g., a subimage, are provided as an
input to the neural network. The neural network identifies each
pixel as a particular type of pixel, e.g., edge, clear, crystal or
precipitate. The results are compared to what is actually correct, and
corresponding error values are calculated. Small adjustments are
made to the weights within the neural network based on the error
values, and then another test image containing a designated
subimage is provided as an input to the neural network. This
process is performed for other test images and can be repeated for
many thousands of iterations, where each time the weights may be
slightly adjusted to provide a more accurate output.
[0170] When the neural network is used for content analysis, an
image of a sample droplet is provided as an input to the neural
network. The output of the neural network includes a rating for
each pixel that indicates a degree of confidence that the pixel
depicts each of the different pixel classifications, for example,
edge, crystal, precipitate, and clear. The rating is typically
between zero and one, where zero indicates the lowest degree of
confidence and one indicates the highest degree of confidence. The
overall content of an image can be determined by counting the number
of pixels of each classification and computing the percentage of
crystal, precipitate, edge and clear pixels contained in the
image.
[0171] When considering the content analysis strategy, accuracy of
the results is important, but so is the speed of the analysis.
Analysis algorithms can allow the user to balance and prioritize
the characteristics of speed and quality. For example, one analysis
option identifies edges of a drop within the image, and may be used
with quick and coarse resolution search parameters to first
identify the edge of the drop, and then the interior of the drop
may be analyzed with a higher resolution search.
[0172] According to one embodiment of the invention, a supervised
learning type of neural network is used to classify the subimages
as crystal, precipitate, edge of drop or clear, using the pixel
intensity, not the pixel hue. In one embodiment, the entire image
is scanned, sampling subimages on a host-specified grid, where the
spacing of the grid is in millimeters, not pixels. The resolution
of the images is provided as a parameter received from the host.
Pie charts can be generated graphically showing the results of the
neural network analysis. According to one embodiment, the outputs
of a neural network can be summed for each type of object
identified and divided by the sum of all the outputs, for example,
the results can be A% crystal, B% precipitate, C% clear, where
A+B+C=100%.
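The normalization just described (summing the outputs for each type and dividing by the sum of all outputs) can be sketched as follows; representing the per-pixel ratings as dictionaries is an assumption.

```python
def content_percentages(outputs):
    """Sum the network outputs for each class across all sampled pixels and
    normalize so the reported percentages add to 100. `outputs` is a list of
    per-pixel dicts mapping class name to a rating in [0, 1]."""
    totals = {}
    for rating in outputs:
        for cls, value in rating.items():
            totals[cls] = totals.get(cls, 0.0) + value
    grand = sum(totals.values())
    return {cls: 100.0 * v / grand for cls, v in totals.items()}
```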
[0173] Each image analysis method file contains neural network
definitions, e.g., "dimensions" and "weights." The method file also
includes parameters that specify the analysis options including
whether to perform drop edge detection, and if drop edge detection
is selected, the sample grid spacing used to find the edges of the
drop, and the sample grid spacing to find crystals within the drop.
For example, drop edge detection finds the edge of a drop quickly
with a relatively coarse grid spacing scan and then uses a
relatively fine grid spacing scan inside the drop, according to one
embodiment. A database can be used to associate the image analysis
file with the image analysis results, so that if a better image
analysis method is available at a later time, an image may be
re-analyzed using the later analysis method.
[0174] The analysis modules can use a neural network to classify
the contents of an image. To aid the neural network in the
classification process, a fast operator can be used to identify if
a pixel has a particular crystal characteristic. One embodiment of
an edge detection process is described below and illustrated in
FIG. 20A. Color or black and white images of a sample droplet can
be generated and used for identifying crystals. At step 2005, the
edge detection process 2000 receives the image of a sample that may
contain crystals. At step 2010 the process 2000 determines if the
image received is a color image. If the image is a color image, it
is converted to a grayscale image at step 2015. The image may be
filtered at step 2020 to minimize undesirable
characteristics such as speckle or other types of image "noise"
during subsequent processing.
[0175] The edge detection process 2000 uses the gradient of the
intensity of the pixels in the image to identify edges. At step
2025, for a plurality of pixels in the image, gradient information
is calculated from a 3.times.3 set of pixels using a calculation
based on the best fit of a plane through the image points. The
gradient of intensity of the pixel in the center of the 3.times.3
set of pixels is the direction and magnitude of the maximum slope
of the plane. The use of a 3.times.3 set of pixels helps to
eliminate some of the effects of image noise on the process.
Gradient information is calculated for selected pixels in the
image. All the pixels in the image may be selected, or a subset of
the pixels, e.g., an area of interest in the image which may be
smaller than the whole image, may be selected. Gradient information
is calculated for each selected pixel and stored in three arrays of
the same dimensions as the received image. The first array contains
the cosine of the angle of the gradient direction. The second array
contains the sine of the angle of the gradient direction. The third
array contains the magnitude, or steepness, of the gradient. Pixels
with a calculated magnitude less than a given threshold have their
gradient information set to zero so they are eliminated from
further processing.
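For a 3.times.3 window with coordinates in {-1, 0, 1}, the least-squares fit of a plane reduces to simple weighted sums. The sketch below returns the cosine, sine and magnitude that would populate the three arrays described above; the magnitude threshold value is hypothetical.

```python
import math

def plane_gradient(window):
    """Gradient at the center of a 3x3 intensity window (three rows of three
    floats) via a least-squares plane fit: for x, y in {-1, 0, 1}, the plane
    slopes are sum(x*I)/6 and sum(y*I)/6. Returns (cos, sin, magnitude),
    zeroed when the magnitude falls below a threshold."""
    gx = sum(window[y][x] * (x - 1) for y in range(3) for x in range(3)) / 6.0
    gy = sum(window[y][x] * (y - 1) for y in range(3) for x in range(3)) / 6.0
    mag = math.hypot(gx, gy)
    THRESHOLD = 1e-6  # hypothetical noise floor
    if mag < THRESHOLD:
        return 0.0, 0.0, 0.0
    return gx / mag, gy / mag, mag
```

A horizontal intensity ramp, for instance, yields a unit gradient pointing along the x axis, while a flat window is suppressed to zero.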
[0176] At step 2030, edge pixels are identified using the gradient
information. An edge pixel can be defined as a pixel for which the
magnitude of the gradient of the image is a local maximum in the
direction of the gradient. These pixels represent the points at
which the rate of change in intensity is the greatest. A separate
array of pixels is used (of the same dimensions as the original
image) to store this information for further processing.
[0177] At step 2035, edge pixels are formed into groups based on
the direction of their gradient. A threshold on the difference in
direction is used to include or exclude pixels from a group. Each
pixel in a group should be adjacent to another pixel in the group.
The edge pixels are labeled identifying the group to which they
belong. At step 2040, the group(s) with crystal characteristics are
selected and at step 2045 the selected groups are provided to
another analysis process for aid in further analysis of the
image.
[0178] One characteristic that separates a crystal from other
objects in an image is the straightness of the edge of the crystal.
FIG. 20B includes the same steps 2005-2035 as in FIG. 20A, and then
uses the crystal characteristic "straightness" to determine whether
a group of pixels depict a crystal. At step 2035 in FIG. 20B, edge
pixels are formed into a group(s), as described above for FIG. 20A.
At step 2040, the edge detection process 2000 determines the
"straightness" of each labeled group of pixels using linear
regression, according to one embodiment. The correlation from the
linear regression and the number of pixels in the group is used to
determine the "straightness" of the group. The straightness can be
defined as the product of the count of pixels in the group and the
reciprocal of 1.0 minus the fourth power of the correlation
coefficient for the group, according to one embodiment. If the
count of pixels is below a given threshold, the count is set to
zero.
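The straightness measure defined above (the pixel count times the reciprocal of 1.0 minus the fourth power of the correlation coefficient) can be sketched with a plain least-squares correlation. The minimum-count value and the infinite score assigned to a perfectly straight or axis-aligned group are assumptions.

```python
def straightness(xs, ys, min_count=5):
    """Straightness of a pixel group: count * 1 / (1 - r**4), where r is the
    linear-regression correlation coefficient. Groups smaller than min_count
    (a hypothetical threshold) score zero."""
    n = len(xs)
    if n < min_count:
        return 0.0  # count below threshold: treated as zero
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    if sxx == 0 or syy == 0:
        return float('inf')  # perfectly vertical or horizontal line
    r2 = (sxy * sxy) / (sxx * syy)   # correlation coefficient squared
    denom = 1.0 - r2 * r2            # 1 - r**4
    return float('inf') if denom == 0 else n / denom
```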
[0179] At step 2055, the edge detection process 2000 generates an
image, hereinafter referred to as a "lines image," using the
previously calculated straightness information. The lines image is
the same shape and size as the subset of pixels selected for edge
detection. The intensity value for a pixel in the lines image is
set to the straightness value of the group that its corresponding
pixel belongs to. At step 2060, the lines image, containing
information indicating where "straight" pixels may be found, is
provided to an analysis module to aid in crystal
identification.
[0180] Referring now to FIGS. 18 and 21, in a typical imaging and
analysis process, the scheduler 1825 controls the imaging of
samples by communicating to the imaging system controller 1815 the
necessary information for imaging a particular plate and the
droplet samples on that plate. The imaging system controller 1815
directs the imaging system 1805 to generate the images of the
particular plate and droplet sample at a specified time or in a
specified sequence, and the images are stored on the image storage
device 1810. After an image is generated for a particular sample,
the scheduler 1825 sends an analysis request to the image analyzer
1815, and the corresponding image for that sample is provided to
the image analyzer 1815. The image analyzer 1815 determines the
contents of the image using one or more of the various analysis
modules, and provides results to the scheduler 1825 in an analysis
response.
[0181] FIG. 21 shows a process 2100 that uses the results of
analyzing an image for subsequent imaging of the same sample,
according to one embodiment of the invention. At step 2105, a first
image of a sample is generated using a first set of imaging
parameters, which may include for example, focus, depth of field,
aperture, zoom, illumination filtering, image filtering, and/or
brightness. An analysis process receives the first image at step
2110 and analyzes the first image in accordance with the analysis
request at step 2115. At step 2120, the process 2100 determines
whether crystal formation in the first image is suspected, the
presence of which can make an additional image of the sample
desirable. For example, to determine if an additional image is
desired, a score can be computed for the image. The score can be
based upon user-adjustable thresholds and weighting factors,
allowing the user to tailor preferences with experienced personal
judgment. If the overall score exceeds a specific threshold,
reimaging is warranted and an appropriate reimaging request is
dispatched. Scoring and thresholds may be a function of apparent
image content and/or of system bandwidth and
scheduling issues. The more available system resources are, e.g.,
the imaging subsystem, the more likely zoomed-in reimaging
occurs.
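The score-and-threshold decision for reimaging might be sketched as a weighted sum of content measurements; the measurement names, weights and threshold below are hypothetical.

```python
def reimage_score(content, weights, threshold):
    """Weighted score from apparent image content measurements; reimaging is
    warranted when the score exceeds the user-adjustable threshold. Returns
    (score, reimage_warranted)."""
    score = sum(weights.get(name, 0.0) * value for name, value in content.items())
    return score, score > threshold
```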
[0182] The analysis of the first image at step 2120 can be done
using a relatively fast running process, e.g., determining the
inner/outer non-clear ratio for the droplet sample, and a further,
more thorough analysis can be done at step 2140, according to one
embodiment. At step 2125, information is provided to the imaging
system that allows the same sample to be re-imaged to create a
second image of the sample. Subsequent images generated of the same
sample can use imaging parameters that are different than those
used to generate the first image, that is, at least one value of an
imaging parameter used to generate the second image is different
than the values of the imaging parameters used to create the first
image. At step 2135, the process 2100 receives the second image of
the sample and analyzes the second image at step 2140 using, for
example, the analysis methods described herein. Analysis results
are output for evaluation or display at step 2145.
[0183] Using the analysis data as feedback to the imaging process
and adjusting the imaging parameters accordingly, subsequently
generated images can more clearly show the presence of crystal
formation. For example, if the formation of crystal in the sample
droplet is suspected as a result of analyzing the first image,
information can be communicated to the imaging system to zoom-in on
the area where the crystal formation is suspected and re-image the
droplet using a higher magnification. Other imaging parameters,
e.g., focus, depth of field, aperture, zoom, illumination
filtering, image filtering, and brightness, can also be changed to
obtain an image that may better depict the contents of the
sample.
[0184] Timely analysis of the first image can result in a
relatively large time savings if a subsequent image of a particular
sample is desired. The process for handling a sample plate
containing the sample, e.g., fetching the correct plate from a
storage location, placing the plate in the imaging device, and
returning the plate to its storage location, is very time
consuming. When thousands of images are scheduled to be generated
in one day, minimizing the amount of plate handling during image
generation increases image generation and analysis throughput.
According to one embodiment, the images generated from the samples
on a sample plate are completely analyzed before the plate is
removed from the imaging device. If desired, additional subsequent
images of a sample contained on that plate can then be generated
without incurring the time required to re-fetch the plate. In
another embodiment, a certain percentage of the images are analyzed
before the plate is removed. While this may not allow every sample
to be re-imaged without re-fetching the plate, e.g., the analysis
of the last sample imaged may not be completed before the plate is
removed, it may still result in an overall time savings as it may
allow quick re-imaging of most of the samples, if desired, while
not unduly delaying the removal of the plate from the imaging
device.
[0185] FIG. 22 illustrates a process 2200 that includes generating
two images of a sample, where each image is generated using a set
of imaging parameters that has at least one different imaging
parameter than those used for the other image, according to one
embodiment of the invention. At step 2205 a first image is
generated using a first set of imaging parameters. At step 2210,
the first image is received by an analysis process which determines
one or more regions of interest in the first image at step 2215.
The analysis process may be, for example, an edge detection process
or a process implemented in one of the analysis modules, both of
which are described hereinabove.
[0186] At step 2220, a second image is generated using a second set
of imaging parameters where the second set of imaging parameters
includes at least one imaging parameter that is different than the
first set of imaging parameters. One or more imaging parameters may
be changed to generate the second image. For example, the focal
plane may be set to a different height relative to the droplet
sample, the illumination of the sample may be changed, including
using a different direction of illumination (e.g., lighting the
sample from alternate sides and off-axis lighting) or a different
illumination brightness level, the magnification or zoom level used
may be changed, and different filtering may be used for each image
(e.g., polarizing filters). At step 2225 the second image is
received by an analysis process, and analyzed to determine a region
or regions of interest at step 2230.
[0187] At step 2235, the regions of interest from the first and
second images are combined to form a composite image. Typically,
the composite image is the same size as the first and second
images. The first and second images are analyzed to determine
the portion or portions of each image that will be used to form the
composite image. The composite image is generated by copying the
values of the pixels from each region of interest in the first and
second images into one composite image. At step 2240, the composite
image is analyzed for the presence of crystal formation by a user,
or automatically by an automatic or interactive analysis method,
e.g., using the content analysis module, the notable regions
analysis module, the crystal object analysis module, or a report
inner/outer non-clear ratio module, as previously described, and
the results are output at step 2245.
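Copying the pixel values from each region of interest into one composite image, as described at step 2235, can be sketched as follows; starting from a copy of the first image is an assumed convention.

```python
def composite(first, second, regions_from_second):
    """Build a composite the same size as the inputs: copy the first image,
    then overwrite each region of interest (x, y, w, h) with the pixel
    values from the second image. Images are same-size 2-D lists."""
    out = [row[:] for row in first]  # leave the input images unmodified
    for x, y, w, h in regions_from_second:
        for yy in range(y, y + h):
            for xx in range(x, x + w):
                out[yy][xx] = second[yy][xx]
    return out
```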
[0188] Although process 2200 shows a process to form a composite
image using two images generated with different imaging parameters,
more than two images may also be generated and used to form
composite images, where each image is generated using at least one
different imaging parameter, according to another embodiment. For
example, according to one embodiment, a plurality of images are
generated for a sample where the focal plane for each image is set
at a different "height" relative to the sample. The resulting
images may show varying sharpness in corresponding locations. The
sharpness of the corresponding portions of the images is compared
to determine which portion of each image should form the composite
image. The portion of each image that best satisfies specified
sharpness criteria, e.g., where a selected set of pixels exhibits
the greatest contrast, may be selected from the plurality of images
to form the composite image. The size of the image portions being
compared may be as small as a single pixel or a few pixels, or as
large as tens or hundreds of pixels, or even larger.
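The focal-stack selection described above can be sketched as follows. Assuming a list of same-size grayscale images taken at different focal heights, each tile of the composite is taken from the image whose tile shows the greatest contrast (maximum minus minimum pixel value), a simple stand-in for the "sharpness criteria" named in the text; the tile size and contrast measure are assumptions:

```python
def focus_stack(images, tile=2):
    # images: list of same-size 2-D lists of grayscale pixel values.
    # For each tile, pick the image whose tile has the greatest contrast.
    rows, cols = len(images[0]), len(images[0][0])
    out = [[0] * cols for _ in range(rows)]
    for r0 in range(0, rows, tile):
        for c0 in range(0, cols, tile):
            best, best_contrast = None, -1
            for img in images:
                vals = [img[r][c]
                        for r in range(r0, min(r0 + tile, rows))
                        for c in range(c0, min(c0 + tile, cols))]
                contrast = max(vals) - min(vals)  # crude sharpness proxy
                if contrast > best_contrast:
                    best_contrast, best = contrast, img
            for r in range(r0, min(r0 + tile, rows)):
                for c in range(c0, min(c0 + tile, cols)):
                    out[r][c] = best[r][c]
    return out

# Example: image 1 is "sharp" (high contrast) in the top half,
# image 2 is sharp in the bottom half.
img1 = [[0, 255, 0, 255], [255, 0, 255, 0], [100] * 4, [100] * 4]
img2 = [[100] * 4, [100] * 4, [0, 255, 0, 255], [255, 0, 255, 0]]
stacked = focus_stack([img1, img2], tile=2)
```

In practice a gradient- or Laplacian-based sharpness measure would likely be preferred over raw min/max contrast, but the selection structure is the same.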
[0189] FIG. 23 illustrates a process 2300 for visual evaluation of
crystal growth by a user, according to another embodiment of the
invention. At step 2305, process 2300 receives an image of a
sample. At step 2310, the process 2300 classifies the pixels of the
image according to their depiction of the contents of the sample,
e.g., the pixels are classified as depicting crystal, precipitate,
clear or an edge. The pixels of the image may be classified by
processes incorporated into the content analysis module 1930, the
notable regions analysis module 1935, the crystal object analysis
module 1940, as described above, or another suitable analysis
process. At step 2315, process 2300 generates a second image that
is color-coded using the pixel classification information from step
2310. Step 2315 may be performed by the above-described graphical
output analysis module 1950. To generate the second image, pixels
that were classified as edge, precipitate or crystal pixels are
depicted as a particular color, e.g., red for crystal pixels, green
for precipitate pixels, and blue for edge pixels. Any or all of the
classified pixels may be depicted according to a color-coding scheme.
The second image can have opaque color-coded information, or
translucent color-coded information that also shows the original
image through the color. The second image is typically the same
size and shape as the image received at step 2305.
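The color-coding of step 2315, including the opaque and translucent variants, can be sketched as follows. The red/green/blue assignment matches the example in the text; the alpha-blending formula and the treatment of "clear" pixels as unmodified grayscale are assumptions for illustration:

```python
# Label-to-RGB palette from the example in the text: red for crystal,
# green for precipitate, blue for edge.
PALETTE = {"crystal": (255, 0, 0),
           "precipitate": (0, 255, 0),
           "edge": (0, 0, 255)}

def color_code(gray, labels, alpha=1.0):
    # gray: 2-D list of grayscale pixel values; labels: 2-D list of
    # per-pixel classification strings. alpha=1.0 gives opaque color
    # coding; alpha<1.0 blends the color with the original pixel so
    # the original image shows through the color.
    rows, cols = len(gray), len(gray[0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            g = gray[r][c]
            color = PALETTE.get(labels[r][c])
            if color is None:          # "clear" pixels stay grayscale
                out[r][c] = (g, g, g)
            else:                      # blend the code color over the pixel
                out[r][c] = tuple(int(alpha * ch + (1 - alpha) * g)
                                  for ch in color)
    return out

gray = [[100, 100], [100, 100]]
labels = [["crystal", "clear"], ["edge", "precipitate"]]
opaque = color_code(gray, labels, alpha=1.0)
translucent = color_code(gray, labels, alpha=0.5)
```

The output is the same size and shape as the input image, consistent with the description of the second image.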
[0190] At step 2320, the color-coded second image is visually
displayed, for example, on a computer monitor or on a printout. At
step 2325, the second image is visually analyzed to determine
crystal growth information of the droplet sample. Displaying the
color-coded image to a user facilitates efficient interpretation of
the contents of the image and allows the presence of crystals in
the image to be easily visualized.
[0191] The foregoing description details certain embodiments of the
invention. It will be appreciated, however, that no matter how
detailed the foregoing appears in text, the invention can be
practiced in many ways. As is also stated above, it should be noted
that the use of particular terminology when describing certain
features or aspects of the invention should not be taken to imply
that the terminology is being re-defined herein to be restricted to
including any specific characteristics of the features or aspects
of the invention with which that terminology is associated. The
scope of the invention should therefore be construed in accordance
with the appended claims and any equivalents thereof.
* * * * *