U.S. patent application number 11/925887 was filed with the patent office on 2007-10-27 and published on 2008-06-19 for 3d ultrasound-based instrument for non-invasive measurement of amniotic fluid volume.
Invention is credited to Susannah Helen Bloch, Vikram Chalana, Stephen Dudycha, Gerald McMorrow, Yanwei Wang, Fuxing Yang.
Publication Number | 20080146932 |
Application Number | 11/925887 |
Document ID | / |
Family ID | 35320697 |
Publication Date | 2008-06-19 |
United States Patent Application | 20080146932 |
Kind Code | A1 |
Chalana; Vikram; et al. | June 19, 2008 |
3D Ultrasound-Based Instrument for Non-Invasive Measurement of Amniotic Fluid Volume
Abstract
A hand-held 3D ultrasound instrument is disclosed which is used
to non-invasively and automatically measure amniotic fluid volume
in the uterus while requiring a minimum of operator intervention.
Using a 2D image-processing algorithm, the instrument gives
automatic feedback to the user about where to acquire the 3D image
set. The user acquires one or more 3D data sets covering all of the
amniotic fluid in the uterus, and this data is then processed using
an optimized 3D algorithm to output the total amniotic fluid volume
corrected for any fetal head volume contributions.
Inventors: | Chalana; Vikram (Mill Creek, WA); Wang; Yanwei (Woodinville, WA); Yang; Fuxing (Woodinville, WA); Bloch; Susannah Helen (Seattle, WA); Dudycha; Stephen (Bothell, WA); McMorrow; Gerald (Redmond, WA) |
Correspondence Address: |
BLACK LOWE & GRAHAM, PLLC
701 FIFTH AVENUE, SUITE 4800
SEATTLE, WA 98104, US |
Family ID: | 35320697 |
Appl. No.: | 11/925887 |
Filed: | October 27, 2007 |
Related U.S. Patent Documents
Application Number | Filing Date | Patent Number | Related Application |
11119355 | Apr 29, 2005 | | 11925887 |
10701955 | Nov 5, 2003 | 7087022 | 11119355 |
10443126 | May 20, 2003 | 7041059 | 10701955 |
PCT/US03/24368 | Aug 1, 2003 | | 10443126 |
PCT/US03/14785 | May 9, 2003 | | 11119355 |
10165556 | Jun 7, 2002 | 6676605 | PCT/US03/14785 |
10633186 | Jul 31, 2003 | 7004904 | 11119355 |
10165556 | Jun 7, 2002 | 6676605 | 10633186 |
11362368 | Feb 24, 2006 | | 10165556 |
PCT/US05/43836 | Dec 6, 2005 | | 11362368 |
11295043 | Dec 6, 2005 | | PCT/US05/43836 |
PCT/US05/30799 | Aug 29, 2005 | | 11362368 |
PCT/US05/31755 | Sep 9, 2005 | | PCT/US05/30799 |
11119355 | Apr 29, 2005 | | PCT/US05/31755 |
10701955 | Nov 5, 2003 | 7087022 | 11119355 |
10443126 | May 20, 2003 | 7041059 | 10701955 |
PCT/US03/24368 | Aug 1, 2003 | | 11119355 |
PCT/US03/14785 | May 9, 2003 | | 11119355 |
10165556 | Jun 7, 2002 | 6676605 | PCT/US03/14785 |
10633186 | Jul 31, 2003 | 7004904 | 11119355 |
60566823 | Apr 30, 2004 | | |
60423881 | Nov 5, 2002 | | |
60400624 | Aug 2, 2002 | | |
60423881 | Nov 5, 2002 | | |
60423881 | Nov 5, 2002 | | |
60400624 | Aug 2, 2002 | | |
60470525 | May 12, 2003 | | |
60760677 | Jan 20, 2006 | | |
60633485 | Dec 6, 2004 | | |
60566823 | Apr 30, 2004 | | |
60423881 | Nov 5, 2002 | | |
60400624 | Aug 2, 2002 | | |
60423881 | Nov 5, 2002 | | |
60423881 | Nov 5, 2002 | | |
60400624 | Aug 2, 2002 | | |
|
Current U.S. Class: | 600/447 |
Current CPC Class: | A61B 8/0866 20130101; G06T 7/12 20170101; G06T 2207/30044 20130101; A61B 5/4343 20130101; A61B 5/204 20130101; G01S 7/52065 20130101; A61B 8/14 20130101; G06T 2207/20061 20130101; G06T 2207/30004 20130101; G06T 2207/10136 20130101; A61B 8/483 20130101; G06T 7/0012 20130101; G06T 7/155 20170101; G06T 7/62 20170101; G01S 15/8993 20130101; A61B 8/565 20130101; A61B 8/4472 20130101; A61B 8/0858 20130101; G06K 9/32 20130101; G06K 9/34 20130101; G06K 2209/05 20130101; G06T 7/143 20170101 |
Class at Publication: | 600/447 |
International Class: | A61B 8/00 20060101 A61B008/00 |
Claims
1. A method to determine amniotic fluid volume in digital images,
the method comprising: positioning an ultrasound transceiver to
probe a first portion of a uterus of a patient, the transceiver
adapted to obtain a first plurality of scanplanes; re-positioning
the ultrasound transceiver to probe a second portion of the uterus
to obtain a second plurality of scanplanes; enhancing the images of
the amniotic fluid regions in the scanplanes with a plurality of
algorithms; registering the scanplanes of the first plurality with
the second plurality; associating the registered scanplanes into a
composite array, and determining the amniotic fluid volume of the
amniotic fluid regions within the composite array.
2. The method of claim 1, wherein the plurality of scanplanes are
acquired from a rotational array, a translational array, or a wedge
array.
3. The method of claim 1, wherein the plurality of algorithms
includes algorithms for image enhancement, segmentation, and
polishing.
4. The method of claim 3, wherein segmentation further includes an
intensity clustering step, a spatial gradients step, a hysteresis
threshold step, a Region-of-Interest selection step, and a matching
edges filter step.
5. The method of claim 4, wherein the intensity clustering step is
performed in a first parallel operation, and the spatial gradients,
hysteresis threshold, Region-of-Interest selection, and matching
edges filter steps are performed in a second parallel operation,
and further wherein the results from the first parallel operation
are combined with the results from the second parallel
operation.
6. The method of claim 3, wherein image enhancement further
includes applying a heat filter and a shock filter to the digital
images.
7. The method of claim 6 wherein the heat filter is applied to the
digital images followed by application of the shock filter to the
digital images.
8. The method of claim 1, wherein the amniotic fluid volume is
adjusted for underestimation or overestimation.
9. The method of claim 8, wherein the amniotic fluid volume is
adjusted for underestimation by probing with adjustable ultrasound
frequencies to penetrate deep tissues and by repositioning the
transceiver to establish that deep tissues are exposed to probing
ultrasound of sufficient strength to provide a reflecting
ultrasound echo receivable by the transceiver, such that more than
one rotational array is obtained to detect deep tissue and regions
of the fetal head.
10. The method of claim 8, wherein amniotic fluid volume is
adjusted for overestimation by automatically determining fetal head
volume contribution to amniotic fluid volume and deducting it from
the amniotic fluid volume.
11. The method of claim 10, wherein the steps to adjust for
overestimated amniotic fluid volumes include a 2D clustering step,
a matching edges step, an all edges step, a gestational age factor
step, a head diameter step, a head edge detection step, and a
Hough transform step.
12. The method of claim 11, wherein the Hough transform step
includes a polar Hough Transform step, a Find Maximum Hough value
step, and a fill circle region step.
13. The method of claim 12, wherein the polar Hough Transform step
includes a first Hough transform to look for lines of a specified
shape, and a second Hough transform to look for fetal head
structures.
14. The method of claim 1, wherein the positions include lateral
and transverse.
15. A method to determine amniotic fluid volume in digital images,
the method comprising: positioning an ultrasound transceiver to
probe a first portion of a uterus of a patient, the transceiver
adapted to obtain a first plurality of scanplanes; re-positioning
the ultrasound transceiver to probe a second and a third portion of
the uterus to obtain a second and third plurality of scanplanes;
enhancing the images of the amniotic fluid regions in the
scanplanes with a plurality of algorithms; registering the
scanplanes of the first plurality through the third plurality;
associating the registered scanplanes into a composite array, and
determining the amniotic fluid volume of the amniotic fluid regions
within the composite array.
16. A method to determine amniotic fluid volume in digital images,
the method comprising: positioning an ultrasound transceiver to
probe a first portion of a uterus of a patient, the transceiver
adapted to obtain a first plurality of scanplanes; re-positioning
the ultrasound transceiver to probe a second through fourth portion
of the uterus to obtain a second through fourth plurality of
scanplanes; enhancing the images of the amniotic fluid regions in
the scanplanes with a plurality of algorithms; registering the
scanplanes of the first through fourth plurality; associating the
registered scanplanes into a composite array, and determining the
amniotic fluid volume of the amniotic fluid regions within the
composite array.
17. A method to determine amniotic fluid volume in digital images,
the method comprising: positioning an ultrasound transceiver to
probe a first portion of a uterus of a patient, the transceiver
adapted to obtain a first plurality of scanplanes; re-positioning
the ultrasound transceiver to probe a second through fifth portion
of the uterus to obtain a second through fifth plurality of
scanplanes; enhancing the images of the amniotic fluid regions in
the scanplanes with a plurality of algorithms; registering the
scanplanes of the first through the fifth plurality; associating
the registered scanplanes into a composite array, and determining
the amniotic fluid volume of the amniotic fluid regions within the
composite array.
18. A system for determining amniotic fluid volume, the system
comprising: a transceiver positioned from two to six locations of a
patient, the transceiver configured to deliver radio frequency
ultrasound pulses to amniotic fluid regions of a patient, to
receive echoes of the pulses reflected from the amniotic fluid
regions, to convert the echoes to digital form, and to obtain a
plurality of scanplanes in the form of an array for each location;
a computer system in communication with the transceiver, the
computer system having a microprocessor and a memory, the memory
further containing stored programming instructions operable by the
microprocessor to associate the plurality of scanplanes of each
array, and the memory further containing instructions operable by
the microprocessor to determine the presence of an amniotic fluid
region in each array and determine the amniotic fluid volume in
each array.
19. The system of claim 18, wherein the array includes rotational,
wedge, and translational arrays.
20. The system of claim 18, wherein stored programming instructions
further include aligning scanplanes having overlapping regions from
each location into a plurality of registered composite
scanplanes.
21. The system of claim 20, wherein the stored programming
instructions further include fusing the amniotic fluid regions of
the registered composite scanplanes of each array.
22. The system of claim 21 wherein the stored programming
instructions further include arranging the fused composite
scanplanes into a composite array.
23. The system of claim 18, wherein the computer system is
configured for remote operation via a local area network or an
Internet web-based system, the Internet web-based system having a
plurality of programs that collect, analyze, and store amniotic
fluid volume.
24. A method to determine amniotic fluid volume, the method
comprising: positioning an ultrasound transceiver to probe at least
a portion of a uterus of a patient, the transceiver configured to
obtain a plurality of scanlines; enhancing the images of the
amniotic fluid regions in the scanline plurality with a plurality
of algorithms; associating the registered scan lines into a
composite array, and determining the amniotic fluid volume of the
amniotic fluid regions within the composite array.
25. A system to improve image clarity in ultrasound images
comprising: an ultrasound transducer connected with a
microprocessor configured to collect and process echoes returning
from at least two ultrasound-based images from a scanned
region-of-interest, wherein moving sections are compensated
relative to the stationary sections within the scanned region of
interest.
Description
[0001] The following applications are incorporated by reference as
if fully set forth herein: U.S. application Ser. No. 11/119,355
filed Apr. 29, 2005; Ser. No. 11/362,368 filed Feb. 26, 2006; Ser.
No. 11/680,380 filed Feb. 28, 2007 and Ser. No. 11/925,654 filed
Oct. 26, 2007.
FIELD OF THE INVENTION
[0002] This invention pertains to the field of obstetrics,
particularly to ultrasound-based non-invasive obstetric
measurements.
BACKGROUND OF THE INVENTION
[0003] Measurement of amniotic fluid (AF) volume is
critical for assessing the kidney and lung function of a fetus and
also for assessing the placental function of the mother. Amniotic
fluid volume is also a key measure to diagnose conditions such as
polyhydramnios (too much AF) and oligohydramnios (too little AF).
Polyhydramnios and oligohydramnios are diagnosed in about 7-8% of
all pregnancies and these conditions are of concern because they
may lead to birth defects or to delivery complications. The
amniotic fluid volume is also one of the important components of
the fetal biophysical profile, a major indicator of fetal
well-being.
[0004] The currently practiced and accepted method of
quantitatively estimating the AF volume is from two-dimensional
(2D) ultrasound images. The most commonly used measure is the
amniotic fluid index (AFI). AFI is the sum of
vertical lengths of the largest AF pockets in each of the 4
quadrants. The four quadrants are defined by the umbilicus (the
navel) and the linea nigra (the vertical mid-line of the abdomen).
The transducer head is placed on the maternal abdomen along the
longitudinal axis with the patient in the supine position. This
measure was first proposed by Phelan et al (Phelan J P, Smith C V,
Broussard P, Small M., "Amniotic fluid volume assessment with the
four-quadrant technique at 36-42 weeks' gestation," J Reprod Med
July; 32(7): 540-2, 1987) and then recorded for a large normal
population over time by Moore and Cayle (Moore T R, Cayle J E. "The
amniotic fluid index in normal human pregnancy," Am J Obstet
Gynecol May; 162(5): 1168-73, 1990).
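By way of illustration, the AFI arithmetic described above can be sketched in a few lines; the pocket depths below are hypothetical values, and the normal range noted in the comment varies by source:

```python
def amniotic_fluid_index(pocket_depths_cm):
    """Sum the vertical depth (cm) of the single largest amniotic fluid
    pocket measured in each of the four uterine quadrants."""
    if len(pocket_depths_cm) != 4:
        raise ValueError("AFI requires one largest-pocket depth per quadrant")
    return sum(pocket_depths_cm)

# Hypothetical largest-pocket depths (cm) for the four quadrants:
afi = amniotic_fluid_index([3.2, 4.1, 2.8, 3.5])  # 13.6 cm
# Values far below or above the commonly cited normal range suggest
# oligohydramnios or polyhydramnios (thresholds vary by source).
```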
[0005] Even though the AFI measure is routinely used, studies have
shown a very poor correlation of the AFI with the true AF volume
(Sepulveda W, Flack N J, Fisk N M., "Direct volume measurement at
midtrimester amnioinfusion in relation to ultrasonographic indexes
of amniotic fluid volume," Am J Obstet Gynecol April; 170(4):
1160-3, 1994). The correlation coefficient was found to be as low
as 0.55, even for experienced sonographers. The use of vertical
diameter only and the use of only one pocket in each quadrant are
two reasons why the AFI is not a very good measure of AF Volume
(AFV).
[0006] Some of the other methods that have been used to estimate AF
volume include:
[0007] Dye dilution technique. This is an invasive method where a
dye is injected into the AF during amniocentesis and the final
concentration of dye is measured from a sample of AF removed after
several minutes. This technique is the accepted gold standard for
AF volume measurement; however, it is an invasive and cumbersome
method and is not routinely used.
[0008] Subjective interpretation from ultrasound images. This
technique is obviously dependent on observer experience and has not
been found to be very good or consistent at diagnosing oligo- or
poly-hydramnios.
[0009] Vertical length of the largest single cord-free pocket. This
is an earlier variation of the AFI where the diameter of only one
pocket is measured to estimate the AF volume.
[0010] Two-diameter areas of the largest AF pockets in the four
quadrants. This is similar to the AFI; however, in this case, two
diameters are measured instead of only one for the largest pocket.
This two diameter area has been recently shown to be better than
AFI or the single pocket measurement in identifying oligohydramnios
(Magann E F, Perry K G Jr, Chauhan S P, Anfanger P J, Whitworth N
S, Morrison J C., "The accuracy of ultrasound evaluation of
amniotic fluid volume in singleton pregnancies: the effect of
operator experience and ultrasound interpretative technique," J
Clin Ultrasound, June; 25(5):249-53, 1997).
[0011] The measurement of various anatomical structures using
computational constructs is described, for example, in U.S. Pat.
No. 6,346,124 to Geiser, et al. (Autonomous Boundary Detection
System For Echocardiographic Images). Similarly, the measurement of
bladder structures is covered in U.S. Pat. No. 6,213,949 to
Ganguly, et al. (System For Estimating Bladder Volume) and U.S.
Pat. No. 5,235,985 to McMorrow, et al., (Automatic Bladder Scanning
Apparatus). The measurement of fetal head structures is described
in U.S. Pat. No. 5,605,155 to Chalana, et al., (Ultrasound System
For Automatically Measuring Fetal Head Size). The measurement of
fetal weight is described in U.S. Pat. No. 6,375,616 to Soferman,
et al. (Automatic Fetal Weight Determination).
[0012] Pertaining to ultrasound-based determination of amniotic
fluid volumes, Segiv et al. (in Segiv C, Akselrod S, Tepper R.,
"Application of a semiautomatic boundary detection algorithm for
the assessment of amniotic fluid quantity from ultrasound images."
Ultrasound Med Biol, May; 25(4): 515-26, 1999) describe a method
for amniotic fluid segmentation from 2D images. However, the Segiv
et al. method is interactive in nature and the identification of
amniotic fluid volume is very observer dependent. Moreover, the
system described is not a dedicated device for amniotic fluid
volume assessment.
[0013] Grover et al. (Grover J, Mentakis E A, Ross M G,
"Three-dimensional method for determination of amniotic fluid
volume in intrauterine pockets." Obstet Gynecol, December; 90(6):
1007-10, 1997) describe the use of a urinary bladder volume
instrument for amniotic fluid volume measurement. The Grover et al.
method makes use of the bladder volume instrument without any
modifications and uses shape and other anatomical assumptions
specific to the bladder that do not generalize to amniotic fluid
pockets. Amniotic fluid pockets having shapes not consistent with
the Grover et al. bladder model introduce analytical errors.
Moreover, the bladder volume instrument does not allow for the
possibility of more than one amniotic fluid pocket in one image
scan. Therefore, the amniotic fluid volume measurements made by the
Grover et al. system may not be correct or accurate.
[0014] None of the currently used methods for AF volume estimation
are ideal. Therefore, there is a need for better, non-invasive, and
easier ways to accurately measure amniotic fluid volume.
[0015] The clarity of ultrasound-acquired images is affected by
motions of the examined subject, the motions of organs and fluids
within the examined subject, the motion of the probing ultrasound
transceiver, the coupling medium used between the transceiver and
the examined subject, and the algorithms used for image processing.
As regards image processing, frequency-domain approaches have been
utilized in the literature, including Wiener filters, which are
implemented in the frequency domain and assume that the point
spread function (PSF) is fixed and known. This assumption conflicts
with the observation that the received ultrasound signals are
usually non-stationary and depth-dependent. Since the algorithm is
implemented in the frequency domain, any error introduced in the
PSF will leak across the spatial domain. As a result, the
performance of Wiener filtering is not ideal.
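The frequency-domain Wiener approach criticized above can be sketched as follows. The fixed PSF, the constant noise-to-signal ratio `nsr`, and the spike-train test signal are all illustrative assumptions; it is precisely the fixed-PSF assumption that non-stationary, depth-dependent ultrasound signals violate:

```python
import numpy as np

def wiener_deconvolve(signal, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution assuming a fixed, known PSF.
    nsr is an assumed constant noise-to-signal power ratio."""
    n = len(signal)
    H = np.fft.fft(psf, n)                   # transfer function of the PSF
    G = np.fft.fft(signal, n)                # spectrum of the observed signal
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener inverse filter
    return np.real(np.fft.ifft(W * G))

# Illustration: blur two spikes with the PSF, then restore them.
truth = np.zeros(64)
truth[[10, 40]] = 1.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.real(np.fft.ifft(np.fft.fft(psf, 64) * np.fft.fft(truth)))
restored = wiener_deconvolve(observed, psf, nsr=1e-3)
```

Because the PSF here has a spectral zero, frequencies near it are suppressed rather than amplified, which is the trade-off the `nsr` term controls.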
[0016] As regards prior uses of coupling media, the most common
container for dispensing ultrasound coupling gel is an 8 oz.
plastic squeeze bottle with an open, tapered tip. The tapered-tip
bottle is inexpensive, is easy to refill from a larger bag- or
pump-type reservoir, and dispenses gel in a controlled manner.
Other embodiments include the Sontac® ultrasound gel pad available
from Verathon™ Medical, Bothell, Wash., USA: a pre-packaged,
circular pad of moist, flexible coupling gel, 2.5 inches in
diameter and 0.06 inches thick, that is advantageously used with
the BladderScan devices. The Sontac pad is simple to apply and to
remove, and provides adequate coupling for a one-position
ultrasound scan in most cases. Aquaflex® gel pads perform in a
similar manner to Sontac pads, but are larger and thicker (2 cm
thick × 9 cm diameter) and are traditionally used for therapeutic
ultrasound or where some distance between the probe and the skin
surface ("stand-off") must be maintained.
[0017] The main purpose of an ultrasonic coupling medium is to
provide an air-free interface between an ultrasound transducer and
the body surface. Gels are used as coupling media since they are
moist and deformable, but not runny: they wet both the transducer
and the body surface, but stay where they are applied. The most
common delivery method for ultrasonic coupling gel, the plastic
squeeze bottle, has several disadvantages. First, if the bottle has
been stored upright the gel will fall to the bottom of the bottle,
and vigorous shaking is required to get the gel back to the bottle
tip, especially if the gel is cold. This motion can be particularly
irritating to sonographers, who routinely suffer from wrist and arm
pain from ultrasound scanning. Second, the bottle tip is a two-way
valve: squeezing the bottle releases gel at the tip, but releasing
the bottle sucks air back into the bottle and into the gel. The
presence of air bubbles in the gel may detract from its performance
as a coupling medium. Third, there is no standard application
amount: inexperienced users such as Diagnostic Ultrasound customers
have to make an educated guess about how much gel to use. Fourth,
when the squeeze bottle is nearly empty it is next to impossible to
coax the final 5-10% of gel into the bottle's tip for dispensing.
Finally, although refilling the bottle from a central source is not
a particularly difficult task, it is non-sterile and potentially
messy.
[0018] Sontac pads and other solid gel coupling pads are simpler to
use than gel: the user does not have to guess at an appropriate
application amount, the pad is sterile, and it can be simply lifted
off the patient and disposed of after use. However, pads do not
mold to the skin or transducer surface as well as the more
liquefied coupling gels and therefore may not provide ideal
coupling when used alone, especially on dry, hairy, curved, or
wrinkled surfaces. Sontac pads suffer from the additional
disadvantage that they are thin and easily damaged by moderate
pressure from the ultrasound transducer. (See Bishop S, Draper D O,
Knight K L, Feland J B, Eggett D. "Human tissue-temperature rise
during ultrasound treatments with the Aquaflex gel pad." Journal of
Athletic Training 39(2):126-131, 2004).
[0019] Relating to cannula insertion, unsuccessful insertion and/or
removal of a cannula, a needle, or other similar devices into
vascular tissue may cause vascular wall damage that may lead to
serious complications or even death. Image guided placement of a
cannula or needle into the vascular tissue reduces the risk of
injury and increases the confidence of healthcare providers in
using the foregoing devices. Current image guided placement methods
generally use a guidance system for holding specific cannula or
needle sizes. The motion and force required to disengage the
cannula from the guidance system may, however, contribute to a
vessel wall injury, which may result in extravasation.
Complications arising from extravasation resulting in morbidity are
well documented. Therefore, there is a need for image guided
placement of a cannula or needle into vascular tissue while still
allowing a health care practitioner to use standard "free"
insertion procedures that do not require a guidance system to hold
the cannula or needle.
SUMMARY OF THE INVENTION
[0020] The preferred form of the invention is a three dimensional
(3D) ultrasound-based system and method using a hand-held 3D
ultrasound device to acquire at least one 3D data set of a uterus
and having a plurality of automated processes optimized to robustly
locate and measure the volume of amniotic fluid in the uterus
without resorting to pre-conceived models of the shapes of amniotic
fluid pockets in ultrasound images. The automated process uses a
plurality of algorithms in a sequence that includes steps for image
enhancement, segmentation, and polishing.
[0021] A hand-held 3D ultrasound device is used to image the uterus
trans-abdominally. The user moves the device around on the maternal
abdomen and, using 2D image processing to locate the amniotic fluid
areas, the device gives feedback to the user about where to acquire
the 3D image data sets. The user acquires one or more 3D image data
sets covering all of the amniotic fluid in the uterus and the data
sets are then stored in the device or transferred to a host
computer.
[0022] The 3D ultrasound device is configured to acquire the 3D
image data sets in two formats. The first format is a collection of
two-dimensional scanplanes, each scanplane being separated from the
other and representing a portion of the uterus being scanned. Each
scanplane is formed from one-dimensional ultrasound A-lines
confined within the limits of the 2D scanplane. Each 3D data set is
then represented as a 3D array of 2D scanplanes. The 3D array of 2D
scanplanes is an assembly of scanplanes, and may be assembled into
a translational array, a wedge array, or a rotational array.
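As a rough sketch, a rotational 3D array of 2D scanplanes can be modeled as a 3-axis array indexed by plane, A-line, and depth sample; the dimensions, angular spacings, and sample pitch below are assumed for illustration, not the device's actual geometry:

```python
import numpy as np

# Illustrative geometry: 24 scanplanes rotated about a common axis,
# each holding 77 A-lines of 512 depth samples (8-bit echo amplitudes).
N_PLANES, N_LINES, N_SAMPLES = 24, 77, 512
scancone = np.zeros((N_PLANES, N_LINES, N_SAMPLES), dtype=np.uint8)

def sample_position(plane, line, sample,
                    plane_step_deg=7.5, line_step_deg=1.5, mm_per_sample=0.3):
    """Map a (plane, line, sample) index to Cartesian mm coordinates
    for a rotational array (all angles and spacings are assumed values)."""
    phi = np.radians(plane * plane_step_deg)                   # plane rotation
    theta = np.radians((line - N_LINES // 2) * line_step_deg)  # in-plane steer
    r = sample * mm_per_sample                                 # depth along A-line
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z
```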
[0023] Alternatively, the 3D ultrasound device is configured to
acquire the 3D image data sets from one-dimensional ultrasound
A-lines distributed in 3D space of the uterus to form a 3D scancone
of 3D-distributed scanlines. The 3D scancone is not an assembly of
2D scanplanes.
[0024] The 3D image datasets, either as discrete scanplanes or 3D
distributed scanlines, are then subjected to image enhancement and
analysis processes. The processes are implemented either on the
device itself or on the host computer.
Alternatively, the processes can also be implemented on a server or
other computer to which the 3D ultrasound data sets are
transferred.
[0025] In a preferred image enhancement process, each 2D image in
the 3D dataset is first enhanced using non-linear filters by an
image pre-filtering step. The image pre-filtering step includes an
image-smoothing step to reduce image noise followed by an
image-sharpening step to obtain maximum contrast between organ wall
boundaries.
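A minimal 1D sketch of this smooth-then-sharpen sequence (heat and shock filters are named in claims 6 and 7; the iteration counts, time step, periodic boundaries, and test signal here are assumed for brevity, not the patent's tuned values):

```python
import numpy as np

def heat_filter(u, iters=10, dt=0.2):
    """Image-smoothing step: iterate the discrete heat (diffusion)
    equation to suppress speckle-like noise (periodic boundaries)."""
    u = u.astype(float).copy()
    for _ in range(iters):
        u += dt * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    return u

def shock_filter(u, iters=20, dt=0.2):
    """Image-sharpening step (Osher-Rudin shock filter, 1D form):
    move intensities toward local extremes to re-steepen edges."""
    u = u.copy()
    for _ in range(iters):
        grad = 0.5 * np.abs(np.roll(u, -1) - np.roll(u, 1))
        lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)
        u -= dt * np.sign(lap) * grad
    return u

# Smooth first, then sharpen, matching the order in claims 6-7.
noisy_edge = np.concatenate([np.zeros(16), np.ones(16)]) + 0.1 * np.sin(np.arange(32.0))
enhanced = shock_filter(heat_filter(noisy_edge))
```

Running the shock filter after the heat filter matters: shock filtering alone would sharpen the noise along with the edges.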
[0026] A second process includes subjecting the resulting image of
the first process to a location method to identify initial edge
points between amniotic fluid and other fetal or maternal
structures. The location method automatically determines the
leading and trailing regions of wall locations along an A-mode
one-dimensional scan line.
[0027] A third process includes subjecting the image of the first
process to an intensity-based segmentation process where dark
pixels (representing fluid) are automatically separated from bright
pixels (representing tissue and other structures).
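One illustrative way to realize such automatic dark/bright separation is a two-class Otsu threshold; the patent's actual clustering algorithm is not specified in this passage, and the synthetic intensities below are assumed:

```python
import numpy as np

def otsu_threshold(pixels, bins=256):
    """Pick the intensity threshold maximizing between-class variance,
    separating dark (fluid-like) from bright (tissue-like) pixels."""
    hist, edges = np.histogram(pixels, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    cum_p = np.cumsum(p)
    cum_mean = np.cumsum(p * np.arange(bins))
    total_mean = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(1, bins):
        w0, w1 = cum_p[t - 1], 1.0 - cum_p[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0
        m1 = (total_mean - cum_mean[t - 1]) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return edges[best_t]

# Dark fluid-like pixels (~0.1) vs. bright tissue-like pixels (~0.8):
image = np.concatenate([np.full(500, 0.1), np.full(500, 0.8)])
t = otsu_threshold(image)
fluid_mask = image < t
```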
[0028] In a fourth process, the images resulting from the second
and third processes are combined to result in a single image
representing likely amniotic fluid regions.
[0029] In a fifth process, the combined image is cleaned to make
the output image smooth and to remove extraneous structures such as
the fetal head and the fetal bladder.
[0030] In a sixth process, boundary line contours are placed on
each 2D image. Thereafter, the method then calculates the total 3D
volume of amniotic fluid in the uterus.
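Once every 2D image carries a fluid boundary contour, the total volume reduces to integrating segmented areas across slices. A voxel-counting sketch, with a uniform voxel spacing assumed purely for illustration:

```python
import numpy as np

def fluid_volume_ml(masks, dx_mm=0.5, dy_mm=0.5, dz_mm=1.0):
    """Total fluid volume from a stack of binary fluid masks:
    count fluid voxels and scale by the (assumed) voxel size.
    1 mL = 1000 cubic millimeters."""
    voxels = int(np.sum(masks))
    return voxels * dx_mm * dy_mm * dz_mm / 1000.0

# Illustration: 40 slices, each with a 100 x 80 pixel fluid region.
stack = np.zeros((40, 200, 200), dtype=bool)
stack[:, 50:150, 60:140] = True
volume = fluid_volume_ml(stack)  # 40 * 100 * 80 voxels * 0.25 mm^3 = 80 mL
```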
[0031] In cases in which uteruses are too large to fit in a single
3D array of 2D scanplanes or a single 3D scancone of 3D distributed
scanlines, especially as occurs during the second and third
trimester of pregnancy, preferred alternate embodiments of the
invention allow for acquiring at least two 3D data sets, preferably
four, each 3D data set having at least a partial ultrasonic view of
the uterus, each partial view obtained from a different anatomical
site of the patient.
[0032] In one embodiment a 3D array of 2D scanplanes is assembled
such that the 3D array presents a composite image of the uterus
that displays the amniotic fluid regions to provide the basis for
calculation of amniotic fluid volumes. In a preferred alternate
embodiment, the user acquires the 3D data sets in quarter sections
of the uterus when the patient is in a supine position. In this
4-quadrant supine procedure, four image cones of data are acquired
near the midpoint of each uterine quadrant at substantially equally
spaced intervals between quadrant centers. Image processing as
outlined above is conducted for each quadrant image, segmenting on
the darker pixels or voxels associated with amniotic fluid.
Correcting algorithms are applied to compensate for any
quadrant-to-quadrant image cone overlap by registering and fixing
one quadrant's image to another. The result is a fixed 3D mosaic
image of the uterus and the amniotic fluid volumes or regions in
the uterus from the four separate image cones.
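The quadrant-to-quadrant registration step can be illustrated in one dimension: a relative offset between overlapping image cones is estimated by maximizing cross-correlation. The FFT-based search and the synthetic profiles below are an illustrative sketch, not the patent's registration algorithm:

```python
import numpy as np

def find_shift(ref, mov):
    """Integer shift s such that mov is approximately np.roll(ref, s),
    found by maximizing circular cross-correlation via the FFT."""
    corr = np.real(np.fft.ifft(np.fft.fft(mov) * np.conj(np.fft.fft(ref))))
    return int(np.argmax(corr))

# Two overlapping scan profiles of the same fluid pocket, offset by 5 samples.
base = np.zeros(100)
base[40:60] = 1.0
shifted = np.roll(base, 5)
offset = find_shift(base, shifted)  # the offset to undo when fusing
```

Once the offset is known, one cone's data can be shifted into the other's coordinate system and the overlapping regions fused into the composite mosaic.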
[0033] Similarly, in another preferred alternate embodiment, the
user acquires one or more 3D image data sets of quarter sections of
the uterus when the patient is in a lateral position. In this
multi-image cone lateral procedure, each image cone of data is
acquired along a lateral line at substantially equally spaced
intervals. Each image cone is subjected to the image processing as
outlined above, with emphasis given to segmenting on the darker
pixels or voxels associated with amniotic fluid. Scanplanes showing
common pixel or voxel overlaps are registered into a common
coordinate system along the lateral line. Correcting algorithms are
applied to compensate for any image cone overlap along the lateral
line. The result is a fixed 3D mosaic image of the uterus and the
amniotic fluid volumes or regions in the uterus from the four
separate image cones.
[0034] In yet other preferred embodiments, at least two 3D
scancones of 3D distributed scanlines are acquired at different anatomical
sites, image processed, registered and fused into a 3D mosaic image
composite. Amniotic fluid volumes are then calculated.
[0035] The system and method further provides an automatic method
to detect and correct for any contribution the fetal head provides
to the amniotic fluid volume.
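The fetal head detection underlying this correction (claims 11-13 mention a Hough transform step) can be illustrated with a single-radius circular Hough accumulator; the radius, grid size, and synthetic edge points are assumed values, and a real implementation would search over a range of gestational-age-dependent radii:

```python
import numpy as np

def hough_circle(edge_points, radius, shape):
    """Accumulate votes for circle centers at one assumed radius;
    the strongest accumulator cell approximates the head center."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for (y, x) in edge_points:
        cy = (y - radius * np.sin(thetas)).round().astype(int)
        cx = (x - radius * np.cos(thetas)).round().astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        acc[cy[ok], cx[ok]] += 1   # one vote per cell per edge point
    return np.unravel_index(np.argmax(acc), shape)

# Illustration: edge points on a circle of radius 12 centered at (30, 40).
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = [(int(round(30 + 12 * np.sin(a))), int(round(40 + 12 * np.cos(a))))
       for a in angles]
center = hough_circle(pts, radius=12, shape=(64, 64))
```

Filling the detected circle then gives the head region whose volume is deducted from the fluid estimate.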
[0036] Systems, methods, and devices for image clarity of
ultrasound-based images are described. Such systems, methods, and
devices include improved transducer aiming and utilizing
time-domain deconvolution processes upon the non-stationary effects
of ultrasound signals. The deconvolution processes apply
algorithms to improve the clarity or resolution of ultrasonic
images by suppressing reverberation of ultrasound echoes. The
initially acquired and distorted ultrasound image is reconstructed
to a clearer image by countering the effect of distortion
operators. An improved point spread function (PSF) of the imaging
system is applied, utilizing a deconvolution algorithm, to improve
the image resolution, and remove reverberations by modeling them as
noise.
[0037] As regards improved transducer aiming, particular embodiments
employ novel applications of computer vision techniques to perform
real-time analysis. First, a computer vision method is introduced:
optical flow, a powerful motion analysis technique applied in many
research and commercial fields. Optical flow estimates the velocity
field of an image series, and the velocity vectors provide
information about the contents of the image series. In the current
field of view, if the target undergoes large motion in a specific
pattern, such as a consistent moving orientation, the velocity
information inside and around the target differs from that of other
parts of the field. Otherwise, the current field of view contains no
valuable information and the scanning position has to be adjusted.
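The velocity-field estimation described above can be illustrated with a minimal gradient-based optical flow sketch (a Lucas-Kanade least-squares solve over local windows). The application does not specify this particular formulation; the window size and conditioning threshold are illustrative assumptions.

```python
import numpy as np

def lucas_kanade_flow(frame1, frame2, window=5):
    """Estimate a per-pixel velocity field (u, v) between two frames.

    Solves the brightness-constancy equation Ix*u + Iy*v + It = 0 in a
    least-squares sense over each local window (Lucas-Kanade).
    """
    f1 = frame1.astype(float)
    f2 = frame2.astype(float)
    Ix = np.gradient(f1, axis=1)   # spatial derivatives
    Iy = np.gradient(f1, axis=0)
    It = f2 - f1                   # temporal derivative
    half = window // 2
    u = np.zeros_like(f1)
    v = np.zeros_like(f1)
    for y in range(half, f1.shape[0] - half):
        for x in range(half, f1.shape[1] - half):
            ix = Ix[y-half:y+half+1, x-half:x+half+1].ravel()
            iy = Iy[y-half:y+half+1, x-half:x+half+1].ravel()
            it = It[y-half:y+half+1, x-half:x+half+1].ravel()
            A = np.stack([ix, iy], axis=1)
            ATA = A.T @ A
            if np.linalg.det(ATA) > 1e-6:   # skip ill-conditioned windows
                u[y, x], v[y, x] = np.linalg.solve(ATA, -A.T @ it)
    return u, v
```

A target with large, consistently oriented motion produces a coherent velocity pattern in (u, v) that distinguishes it from the rest of the field, which is the aiming cue the text describes.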
[0038] As regards analyzing the motions of organ movement and fluid
flows within an examined subject, embodiments include new
optical-flow-based methods for estimating heart motion from
two-dimensional echocardiographic sequences, an optical-flow-guided
active contour method for myocardial tracking in contrast
echocardiography, and a method for shape-driven segmentation and
tracking of the left ventricle.
[0039] As regards cannula insertion, ultrasound visualization of the
cannula motion is achieved by fitting the cannula with echogenic
ultrasound micro-reflectors.
[0040] As regards sonic coupling gel media to improve ultrasound
communication between a transducer and the examined subject,
embodiments include an apparatus that dispenses a metered quantity
of ultrasound coupling gel and enables one-handed gel application.
The apparatus also preserves the gel in a de-gassed state (no air
bubbles), preserves the gel in a sterile state (no contact between
gel applicator and patient), includes a method for easy container
refill, and preserves the shape and volume of existing gel
application bottles.
BRIEF DESCRIPTION OF THE DRAWINGS
[0041] FIG. 1 is a side view of a microprocessor-controlled,
hand-held ultrasound transceiver;
[0042] FIG. 2A is a depiction of the hand-held transceiver in
use for scanning a patient;
[0043] FIG. 2B is a perspective view of the hand-held transceiver
device sitting in a communication cradle;
[0044] FIG. 2C is a perspective view of an amniotic fluid volume
measuring system;
[0045] FIG. 3 is an alternate embodiment of an amniotic fluid
volume measuring system in schematic view of a plurality of
transceivers in connection with a server;
[0046] FIG. 4 is another alternate embodiment of an amniotic fluid
volume measuring system in a schematic view of a plurality of
transceivers in connection with a server over a network;
[0047] FIG. 5A is a graphical representation of a plurality of scan
lines forming a single scan plane;
[0048] FIG. 5B is a graphical representation of a plurality of
scanplanes forming a three-dimensional array having a substantially
conic shape;
[0049] FIG. 5C is a graphical representation of a plurality of 3D
distributed scanlines emanating from the transceiver forming a
scancone;
[0050] FIG. 6 is a depiction of the hand-held transceiver placed
laterally on a patient trans-abdominally to transmit ultrasound and
receive ultrasound echoes for processing to determine amniotic
fluid volumes;
[0051] FIG. 7 shows a block diagram overview of the two-dimensional
and three-dimensional Input, Image Enhancement, Intensity-Based
Segmentation, Edge-Based Segmentation, Combine, Polish, Output, and
Compute algorithms to visualize and determine the volume or area of
amniotic fluid;
[0052] FIG. 8A depicts the sub-algorithms of Image Enhancement;
[0053] FIG. 8B depicts the sub-algorithms of Intensity-Based
Segmentation;
[0054] FIG. 8C depicts the sub-algorithms of Edge-Based
Segmentation;
[0055] FIG. 8D depicts the sub-algorithms of the Polish algorithm,
including Close, Open, Remove Deep Regions, and Remove Fetal Head
Regions;
[0056] FIG. 8E depicts the sub-algorithms of the Remove Fetal Head
Regions sub-algorithm;
[0057] FIG. 8F depicts the sub-algorithms of the Hough Transform
sub-algorithm;
[0058] FIG. 9 depicts the operation of a circular Hough transform
algorithm;
[0059] FIG. 10 shows results of sequentially applying the algorithm
steps on a sample image;
[0060] FIG. 11 illustrates a set of intermediate images of the
fetal head detection process;
[0061] FIG. 12 presents a 4-panel series of sonographer amniotic
fluid pocket outlines and the algorithm output amniotic fluid
pocket outlines;
[0062] FIG. 13 illustrates a 4-quadrant supine procedure to acquire
multiple image cones;
[0063] FIG. 14 illustrates an in-line lateral line procedure to
acquire multiple image cones;
[0064] FIG. 15 is a block diagram overview of the rigid
registration and correcting algorithms used in processing multiple
image cone data sets;
[0065] FIG. 16 is a block diagram of the steps in the rigid
registration algorithm;
[0066] FIG. 17A is an example image showing a first view of a fixed
scanplane;
[0067] FIG. 17B is an example image showing a second view of a
moving scanplane having some voxels in common with the first
scanplane;
[0068] FIG. 17C is a composite image of the first (fixed) and
second (moving) images;
[0069] FIG. 18A is an example image showing a first view of a fixed
scanplane;
[0070] FIG. 18B is an example image showing a second view of a
moving scanplane having some voxels in common with the first view
and a third view;
[0071] FIG. 18C is a third view of a moving scanplane having some
voxels in common with the second view;
[0072] FIG. 18D is a composite image of the first (fixed), second
(moving), and third (moving) views;
[0073] FIG. 19 illustrates a 6-section supine procedure to acquire
multiple image cones around the center point of uterus of a patient
in a supine procedure;
[0074] FIG. 20 is a block diagram algorithm overview of the
registration and correcting algorithms used in processing the
6-section multiple image cone data sets depicted in FIG. 19;
[0075] FIG. 21 is an expansion of the Image Enhancement and
Segmentation block 1010 of FIG. 20;
[0076] FIG. 22 is an expansion of the RigidRegistration block 1014
of FIG. 20;
[0077] FIG. 23 is a 4-panel image set that shows the effect of
multiple iterations of the heat filter applied to an original
image;
[0078] FIG. 24 shows the effect of shock filtering and a
combination heat-and-shock filtering to the pixel values of the
image;
[0079] FIG. 25 is a 7-panel image set progressively receiving
application of the image enhancement and segmentation algorithms of
FIG. 21;
[0080] FIG. 26 is a pixel difference kernel for obtaining X and Y
derivatives to determine pixel gradient magnitudes for edge-based
segmentation; and
[0081] FIG. 27 is a 3-panel image set showing the progressive
demarcation or edge detection of organ wall interfaces arising from
edge-based segmentation algorithms.
[0082] FIGS. 1A-D depict a partial schematic and a partial
isometric view of a transceiver, a scan cone comprising a
rotational array of scan planes, and a scan plane of the array of
an ultrasound harmonic imaging system;
[0083] FIG. 2 depicts a partial schematic and partial isometric and
side view of a transceiver, and a scan cone array comprised of
3D-distributed scan lines in alternate embodiment of an ultrasound
harmonic imaging system;
[0084] FIG. 3 is a schematic illustration of a server-accessed
local area network in communication with a plurality of ultrasound
harmonic imaging systems;
[0085] FIG. 4 is a schematic illustration of the Internet in
communication with a plurality of ultrasound harmonic imaging
systems;
[0086] FIG. 5 schematically depicts a method flow chart algorithm
120 to acquire a clarity enhanced ultrasound image;
[0087] FIG. 6 is an expansion of sub-algorithm 150 of master
algorithm 120 of FIG. 5;
[0088] FIG. 7 is an expansion of sub-algorithm 200 of FIG. 5;
[0089] FIG. 8 is an expansion of sub-algorithm 300 of master
algorithm illustrated in FIG. 5;
[0090] FIG. 9 is an expansion of sub-algorithms 400A and 400B of
FIG. 5;
[0091] FIG. 10 is an expansion of sub-algorithm 500 of FIG. 5;
[0092] FIG. 11 depicts a logarithm of a Cepstrum;
[0093] FIGS. 12A-C depict histogram waveform plots derived from
water tank pulse-echo experiments undergoing parametric and
non-parametric analysis;
[0094] FIGS. 13-25 are bladder sonograms that depict image clarity
after undergoing image enhancement processing by algorithms
described in FIGS. 5-10;
[0095] FIG. 13 is an unprocessed image that will undergo image
enhancement processing;
[0096] FIG. 14 illustrates an enclosed portion of a magnified
region of FIG. 13;
[0097] FIG. 15 is the resultant image of FIG. 13 that has undergone
image processing via nonparametric estimation under sub-algorithm
400A;
[0098] FIG. 16 is the resultant image of FIG. 13 that has undergone
image processing via parametric estimation under sub-algorithm
400B;
[0099] FIG. 17 is the resultant image of an alternate image
processing embodiment using a Wiener filter;
[0100] FIG. 18 is another unprocessed image that will undergo image
enhancement processing;
[0101] FIG. 19 illustrates an enclosed portion of a magnified
region of FIG. 18;
[0102] FIG. 20 is the resultant image of FIG. 18 that has undergone
image processing via nonparametric estimation under sub-algorithm
400A;
[0103] FIG. 21 is the resultant image of FIG. 18 that has undergone
image processing via parametric estimation under sub-algorithm
400B;
[0104] FIG. 22 is another unprocessed image that will undergo image
enhancement processing;
[0105] FIG. 23 illustrates an enclosed portion of a magnified
region of FIG. 22;
[0106] FIG. 24 is the resultant image of FIG. 22 that has undergone
image processing via nonparametric estimation under sub-algorithm
400A;
[0107] FIG. 25 is the resultant image of FIG. 22 that has undergone
image processing via parametric estimation under sub-algorithm
400B;
[0108] FIG. 26 depicts a schematic example of a time velocity map
derived from sub-algorithm 310;
[0109] FIG. 27 depicts another schematic example of a time velocity
map derived from sub-algorithm 310;
[0110] FIG. 28 illustrates a seven panel image series of a beating
heart ventricle that will undergo the optical flow processes of
sub-algorithm 300 in which at least two images are required;
[0111] FIG. 29 illustrates an optical flow velocity map plot of the
seven panel image series of FIG. 28 presented in a 2D flow pattern
after undergoing sub-algorithm 310;
[0112] FIG. 30 illustrates an optical flow velocity map plot of the
seven panel image series of FIG. 28 along the X-axis direction or
phi direction after undergoing sub-algorithm 310;
[0113] FIG. 31 illustrates an optical flow velocity map plot of the
seven panel image series of FIG. 28 along the Y-axis direction or
radial direction after undergoing sub-algorithm 310;
[0114] FIG. 32 illustrates a 3D optical vector plot after
undergoing sub-algorithm 310 and corresponds to the top row of FIG.
29;
[0115] FIG. 33 illustrates a 3D optical vector plot in the phi
direction after undergoing sub-algorithm 310 and corresponds to
FIG. 30 at T=1;
[0116] FIG. 34 illustrates a 3D optical vector plot in the radial
direction after undergoing sub-algorithm 310 and corresponds to
FIG. 31 at T=1;
[0117] FIG. 35 illustrates a 3D optical vector plot in the radial
direction above a Y-axis threshold setting of 0.6 after undergoing
sub-algorithm 310, and corresponds to FIG. 34 with values below the
threshold of 0.6 set to 0;
[0118] FIGS. 36A-G depict embodiments of the sonic gel
dispenser;
[0119] FIGS. 37 and 38 are diagrams showing one embodiment of the
present invention;
[0120] FIG. 39 is a diagram showing additional detail for a needle
shaft to be used with one embodiment of the invention;
[0121] FIGS. 40A and 40B are diagrams showing close-up views of
surface features of the needle shaft shown in FIG. 38;
[0122] FIG. 41 is a diagram showing imaging components for use with
the needle shaft shown in FIG. 38;
[0123] FIG. 42 is a diagram showing a representation of an image
produced by the imaging components shown in FIG. 41;
[0124] FIG. 43 is a system diagram of an embodiment of the present
invention;
[0125] FIG. 44 is a system diagram of an example embodiment showing
additional detail for one of the components shown in FIG. 38;
and
[0126] FIGS. 45 and 46 are flowcharts of a method of displaying the
trajectory of a cannula in accordance with an embodiment of the
present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0127] The preferred portable embodiment of the ultrasound
transceiver of the amniotic fluid volume measuring system is shown
in FIGS. 1-4. The transceiver 10 includes a handle 12 having a
trigger 14 and a top button 16, a transceiver housing 18 attached
to the handle 12, and a transceiver dome 20. A display 24 for user
interaction is attached to the transceiver housing 18 at an end
opposite the transceiver dome 20. Housed within the transceiver 10
is a single element transducer (not shown) that converts ultrasound
waves to electrical signals. The transceiver 10 is held in position
against the body of a patient by a user for image acquisition and
signal processing. In operation, the transceiver 10 transmits a
radio frequency ultrasound signal at substantially 3.7 MHz to the
body and then receives a returning echo signal. To accommodate
different patients having a variable range of obesity, the
transceiver 10 can be adjusted to transmit a range of probing
ultrasound energy from approximately 2 MHz to approximately 10 MHz
radio frequencies.
[0128] The top button 16 selects for different acquisition volumes.
The transceiver is controlled by a microprocessor and software
associated with the microprocessor and a digital signal processor
of a computer system. As used in this invention, the term "computer
system" broadly comprises any microprocessor-based or other
computer system capable of executing operating instructions and
manipulating data, and is not limited to a traditional desktop or
notebook computer. The display 24 presents alphanumeric or graphic
data indicating the proper or optimal positioning of the
transceiver 10 for initiating a series of scans. The transceiver 10
is configured to initiate the series of scans to obtain and present
3D images as either a 3D array of 2D scanplanes or as a single 3D
scancone of 3D distributed scanlines. A suitable transceiver is the
DCD372 made by Diagnostic Ultrasound. In alternate embodiments, the
two- or three-dimensional image of a scan plane may be presented in
the display 24.
[0129] Although the preferred ultrasound transceiver is described
above, other transceivers may also be used. For example, the
transceiver need not be battery-operated or otherwise portable,
need not have a top-mounted display 24, and may include many other
features or differences. The display 24 may be a liquid crystal
display (LCD), a light emitting diode (LED), a cathode ray tube
(CRT), or any suitable display capable of presenting alphanumeric
data or graphic images.
[0130] FIG. 2A is a photograph of the hand-held transceiver 10 for
scanning a patient. The transceiver 10 is then positioned over the
patient's abdomen by a user holding the handle 12 to place the
transceiver housing 18 against the patient's abdomen. The top
button 16 is centrally located on the handle 12. Once optimally
positioned over the abdomen for scanning, the transceiver 10
transmits an ultrasound signal at substantially 3.7 MHz into the
uterus. The transceiver 10 receives a return ultrasound echo signal
emanating from the uterus and presents it on the display 24.
[0131] FIG. 2B is a perspective view of the hand-held transceiver
device sitting in a communication cradle. The transceiver 10 sits
in a communication cradle 42 via the handle 12. This cradle can be
connected to a standard USB port of any personal computer, enabling
all the data on the device to be transferred to the computer and
enabling new programs to be transferred into the device from the
computer.
[0132] FIG. 2C is a perspective view of an amniotic fluid volume
measuring system 5A. The system 5A includes the transceiver 10
cradled in the cradle 42 that is in signal communication with a
computer 52. The transceiver 10 sits in a communication cradle 42
via the handle 12. This cradle can be connected to a standard USB
port of any personal computer 52, enabling all the data on the
transceiver 10 to be transferred to the computer for analysis and
determination of amniotic fluid volume.
[0133] FIG. 3 depicts an alternate embodiment of an amniotic fluid
volume measuring system 5B in a schematic view. The system 5B
includes a plurality of systems 5A in signal communication with a
server 56. As illustrated, each transceiver 10 is in signal
connection with the server 56 through connections via a plurality
of computers 52. FIG. 3, by example, depicts each transceiver 10
being used to send probing ultrasound radiation to a uterus of a
patient and to subsequently retrieve ultrasound echoes returning
from the uterus, convert the ultrasound echoes into digital echo
signals, store the digital echo signals, and process the digital
echo signals by algorithms of the invention. A user holds the
transceiver 10 by the handle 12 to send probing ultrasound signals
and to receive incoming ultrasound echoes. The transceiver 10 is
placed in the communication cradle 42 that is in signal
communication with a computer 52, and operates as an amniotic fluid
volume measuring system. Two amniotic fluid volume-measuring
systems are depicted as representative though fewer or more systems
may be used. As used in this invention, a "server" can be any
computer software or hardware that responds to requests or issues
commands to or from a client. Likewise, the server may be
accessible by one or more client computers via the Internet, or may
be in communication over a LAN or other network.
[0134] Each amniotic fluid volume measuring system includes the
transceiver 10 for acquiring data from a patient. The transceiver
10 is placed in the cradle 42 to establish signal communication
with the computer 52. Signal communication as illustrated is by a
wired connection from the cradle 42 to the computer 52. Signal
communication between the transceiver 10 and the computer 52 may
also be by wireless means, for example, infrared signals or radio
frequency signals. The wireless means of signal communication may
occur between the cradle 42 and the computer 52, the transceiver 10
and the computer 52, or the transceiver 10 and the cradle 42.
[0135] A preferred first embodiment of the amniotic fluid volume
measuring system includes each transceiver 10 being separately used
on a patient and sending signals proportionate to the received and
acquired ultrasound echoes to the computer 52 for storage. Residing
in each computer 52 are imaging programs having instructions to
prepare and analyze a plurality of one dimensional (1D) images from
the stored signals and transform the plurality of 1D images into
the plurality of 2D scanplanes. The imaging programs also present
3D renderings from the plurality of 2D scanplanes. Also residing in
each computer 52 are instructions to perform the additional
ultrasound image enhancement procedures, including instructions to
implement the image processing algorithms.
[0136] A preferred second embodiment of the amniotic fluid volume
measuring system is similar to the first embodiment, but the
imaging programs and the instructions to perform the additional
ultrasound enhancement procedures are located on the server 56.
Each computer 52 from each amniotic fluid volume measuring system
receives the acquired signals from the transceiver 10 via the
cradle 42 and stores the signals in the memory of the computer 52.
The computer 52 subsequently retrieves the imaging programs and the
instructions to perform the additional ultrasound enhancement
procedures from the server 56. Thereafter, each computer 52
prepares the 1D images, 2D images, 3D renderings, and enhanced
images from the retrieved imaging and ultrasound enhancement
procedures. Results from the data analysis procedures are sent to
the server 56 for storage.
[0137] A preferred third embodiment of the amniotic fluid volume
measuring system is similar to the first and second embodiments,
but the imaging programs and the instructions to perform the
additional ultrasound enhancement procedures are located on the
server 56 and executed on the server 56. Each computer 52 from each
amniotic fluid volume measuring system receives the acquired
signals from the transceiver 10 via the cradle 42 and stores the
acquired signals in the memory of the computer 52. The computer 52
subsequently sends the stored signals to the server 56. In the
server 56, the imaging programs and the instructions to perform the
additional ultrasound enhancement procedures are executed to
prepare the 1D images, 2D images, 3D renderings, and enhanced
images from the server 56 stored signals. Results from the data
analysis procedures are kept on the server 56, or alternatively,
sent to the computer 52.
[0138] FIG. 4 is another embodiment of an amniotic volume fluid
measuring system 5C presented in schematic view. The system 5C
includes a plurality of amniotic fluid measuring systems 5A
connected to a server 56 over the Internet or other network 64.
FIG. 4 represents any of the first, second, or third embodiments of
the invention advantageously deployed to other servers and computer
systems through connections via the network.
[0139] FIG. 5A is a graphical representation of a plurality of scan
lines forming a single scan plane. FIG. 5A illustrates how
ultrasound signals are used to make analyzable images, more
specifically how a series of one-dimensional (1D) scanlines are
used to produce a two-dimensional (2D) image. The 1D and 2D
operational aspects of the single element transducer housed in the
transceiver 10 are seen as it rotates mechanically about an angle
.phi.. A scanline 214 of length r migrates between a first limiting
position 218 and a second limiting position 222 as determined by
the value of the angle .phi., creating a fan-like 2D scanplane 210.
In one preferred form, the transceiver 10 operates substantially at
3.7 MHz frequency and creates an approximately 18 cm deep scan line
214 and migrates within the angle .phi. having an angle of
approximately 0.027 radians. A first motor tilts the transducer
approximately 60.degree. clockwise and then counterclockwise
forming the fan-like 2D scanplane presenting an approximate
120.degree. 2D sector image. A plurality of scanlines, each
scanline substantially equivalent to scanline 214 is recorded,
between the first limiting position 218 and the second limiting
position 222 formed by the unique tilt angle .phi.. The plurality
of scanlines between the two extremes forms a scanplane 210. In the
preferred embodiment, each scanplane contains 77 scan lines,
although the number of lines can vary within the scope of this
invention. The tilt angle .phi. sweeps through angles approximately
between -60.degree. and +60.degree. for a total arc of
approximately 120.degree..
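The scanplane geometry just described (77 scanlines sweeping approximately 120.degree. in steps of approximately 0.027 radians) can be sketched as a simple polar-to-Cartesian mapping; the function and variable names below are illustrative, not from the application.

```python
import math

def scanline_to_xy(r, phi):
    """Map a sample at depth r on a scanline tilted by angle phi (radians)
    from the centerline to 2D scanplane coordinates (lateral x, depth y)."""
    return r * math.sin(phi), r * math.cos(phi)

# Per the text: 77 scanlines sweep ~120 degrees, so adjacent scanlines
# differ by ~0.027 radians in tilt.
num_lines = 77
phi_step = math.radians(120) / (num_lines - 1)
tilts = [math.radians(-60) + i * phi_step for i in range(num_lines)]
```

With the stated 18 cm scan depth, `scanline_to_xy(18.0, phi)` traces the fan-like boundary of the 2D sector image as `phi` sweeps the tilt range.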
[0140] FIG. 5B is a graphical representation of a plurality of
scanplanes forming a three-dimensional array (3D) 240 having a
substantially conic shape. FIG. 5B illustrates how a 3D rendering
is obtained from the plurality of 2D scanplanes. Within each
scanplane 210 are the plurality of scanlines, each scanline
equivalent to the scanline 214 and sharing a common rotational
angle .phi.. In the preferred embodiment, each scanplane contains
77 scan lines, although the number of lines can vary within the
scope of this invention. Each 2D sector image scanplane 210 with
tilt angle .phi. and range r (equivalent to the scanline 214)
collectively forms a 3D conic array 240 with rotation angle
.theta.. After gathering the 2D sector image, a second motor
rotates the transducer by either 3.75.degree. or 7.5.degree. to
gather the next 120.degree. sector image. This process is repeated
until the transducer is rotated through 180.degree., resulting in
the cone-shaped 3D conic array 240 data set with 24 planes
rotationally assembled in the preferred embodiment. The conic array
could have fewer or more planes rotationally assembled. For
example, preferred alternate embodiments of the conic array could
include at least two scanplanes, or a range of scanplanes from 2 to
48 scanplanes. The upper range of the scanplanes can be greater
than 48 scanplanes. The tilt angle .phi. indicates the tilt of the
scanline from the centerline in 2D sector image, and the rotation
angle .theta., identifies the particular rotation plane the sector
image lies in. Therefore, any point in this 3D data set can be
isolated using coordinates expressed as three parameters,
P(r,.phi.,.theta.).
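The parameterization P(r, .phi., .theta.) can be illustrated by mapping a conic-array point to Cartesian coordinates. The axis convention below is an assumption, since the application does not fix one; the 7.5.degree. plane spacing follows the preferred embodiment.

```python
import math

def cone_point_to_xyz(r, phi, theta):
    """Convert a conic-array point P(r, phi, theta) to Cartesian (x, y, z):
    r is depth along the scanline, phi the tilt from the cone axis within
    a scanplane, and theta the rotation of that scanplane about the axis.
    (One plausible axis convention; the text does not specify one.)"""
    x = r * math.sin(phi) * math.cos(theta)
    y = r * math.sin(phi) * math.sin(theta)
    z = r * math.cos(phi)
    return x, y, z

# Preferred embodiment: 24 scanplanes at 7.5-degree steps span 180 degrees.
plane_rotations = [math.radians(7.5 * k) for k in range(24)]
```

Any voxel in the 3D conic array 240 can thus be addressed either by its three scan parameters or by the equivalent Cartesian point.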
[0141] As the scanlines are transmitted and received, the returning
echoes are interpreted as analog electrical signals by a
transducer, converted to digital signals by an analog-to-digital
converter, and conveyed to the digital signal processor of the
computer system for storage and analysis to determine the locations
of the amniotic fluid walls. The computer system is
representationally depicted in FIGS. 3 and 4 and includes a
microprocessor, random access memory (RAM), or other memory for
storing processing instructions and data generated by the
transceiver 10.
[0142] FIG. 5C is a graphical representation of a plurality of
3D-distributed scanlines emanating from the transceiver 10 forming
a scancone 300. The scancone 300 is formed by a plurality of 3D
distributed scanlines that comprises a plurality of internal and
peripheral scanlines. The scanlines are one-dimensional ultrasound
A-lines that emanate from the transceiver 10 at different
coordinate directions that, taken as an aggregate, form a conic
shape. The 3D-distributed A-lines (scanlines) are not necessarily
confined within a scanplane, but instead are directed to sweep
throughout the internal and along the periphery of the scancone
300. The 3D-distributed scanlines occupy not only the positions of
a given scanplane in a 3D array of 2D scanplanes, but also the
inter-scanplane spaces, from the conic axis to and including the
conic periphery. The transceiver 10 shows the same illustrated
features from FIG. 1, but is configured to distribute the
ultrasound A-lines throughout 3D space in different coordinate
directions to form the scancone 300.
[0143] The internal scanlines are represented by scanlines 312A-C.
The number and location of the internal scanlines emanating from
the transceiver 10 are chosen so that enough scanlines are
distributed within the scancone 300, at different positional
coordinates, to sufficiently visualize structures or images within
the scancone 300. The internal scanlines are not peripheral
scanlines. The peripheral scanlines are represented by scanlines
314A-F and occupy the conic periphery, thus representing the
peripheral limits of the scancone 300.
[0144] FIG. 6 is a depiction of the hand-held transceiver placed on
a patient trans-abdominally to transmit probing ultrasound and
receive ultrasound echoes for processing to determine amniotic
fluid volumes. The transceiver 10 is held by the handle 12 to
position over a patient to measure the volume of amniotic fluid in
an amniotic sac over a baby. A plurality of axes for describing the
orientation of the baby, the amniotic sac, and mother is
illustrated. The plurality of axes includes a vertical axis
depicted on the line L(R)-L(L) for left and right orientations, a
horizontal axis LI-LS for inferior and superior orientations, and a
depth axis LA-LP for anterior and posterior orientations.
[0145] FIG. 6 is representative of a preferred data acquisition
protocol used for amniotic fluid volume determination. In this
protocol, the transceiver 10 is the hand-held 3D ultrasound device
(for example, model DCD372 from Diagnostic Ultrasound) and is used
to image the uterus trans-abdominally. Initially during the
targeting phase, the patient is in a supine position and the device
is operated in a 2D continuous acquisition mode. A 2D continuous
mode is where the data is continuously acquired in 2D and presented
as a scanplane similar to the scanplane 210 on the display 24 while
an operator physically moves the transceiver 10. An operator moves
the transceiver 10 around on the maternal abdomen and then presses
the trigger 14 of the transceiver 10 and continuously acquires
real-time feedback presented in 2D on the display 24. Amniotic
fluid, where present, visually appears as dark regions along with
an alphanumeric indication of amniotic fluid area (for example, in
cm.sup.2) on the display 24. Based on this real-time information in
terms of the relative position of the transceiver 10 to the fetus,
the operator decides which side of the uterus has more amniotic
fluid by the presentation on the display 24. The side having more
amniotic fluid presents as regions having larger darker regions on
the display 24. Accordingly, the side displaying a large dark
region registers greater alphanumeric area while the side with less
fluid displays smaller dark regions and proportionately
registers smaller alphanumeric area on the display 24. While
amniotic fluid is present throughout the uterus, its distribution
in the uterus depends upon where and how the fetus is positioned
within the uterus. There is usually less amniotic fluid around the
fetus's spine and back and more amniotic fluid in front of its
abdomen and around the limbs.
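The real-time feedback described above, in which dark low-echo regions are counted as amniotic fluid and reported as an area in cm.sup.2, can be sketched as simple intensity thresholding; the threshold and per-pixel area values are illustrative assumptions, not the application's calibration.

```python
import numpy as np

def fluid_area_cm2(scanplane, dark_threshold=30, pixel_area_cm2=0.01):
    """Estimate amniotic fluid area in a 2D scanplane by counting dark
    (low-echo) pixels and scaling by the physical area of one pixel.

    Both `dark_threshold` and `pixel_area_cm2` are illustrative values."""
    dark = scanplane < dark_threshold
    return int(dark.sum()) * pixel_area_cm2
```

During targeting, the operator would compare this alphanumeric area across probe positions: the position with the larger value corresponds to the larger dark fluid region on the display.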
[0146] Based on fetal position information acquired from data
gathered under continuous acquisition mode, the patient is placed
in a lateral recumbent position such that the fetus is displaced
towards the ground, creating a large pocket of amniotic fluid close
to the abdominal surface where the transceiver 10 can be placed as
shown in FIG. 6. For example, if large fluid pockets are found on
the right side of the patient, the patient is asked to turn with
the left side down and if large fluid pockets are found on the left
side, the patient is asked to turn with the right side down.
[0147] After the patient has been placed in the desired position,
the transceiver 10 is again operated in the 2D continuous
acquisition mode and is moved around on the lateral surface of the
patient's abdomen. The operator finds the location that shows the
largest amniotic fluid area based on acquiring the largest dark
region imaged and the largest alphanumeric value displayed on the
display 24. At the lateral abdominal location providing the largest
dark region, the transceiver 10 is held in a fixed position and the
trigger 14 is released to acquire a 3D image comprising a set of
arrayed scanplanes. The 3D image presents a rotational array of the
scanplanes 210 similar to the 3D array 240.
[0148] In a preferred alternate data acquisition protocol, the
operator can reposition the transceiver 10 to different abdominal
locations to acquire new 3D images comprised of different scanplane
arrays similar to the 3D array 240. Multiple scan cones obtained
from different positions provide the operator the ability to image
the entire amniotic fluid region from different view points. In the
case of a single image cone being too small to accommodate a large
AFV measurement, obtaining multiple 3D array 240 image cones
ensures that the total volume of large AFV regions is determined.
Multiple 3D images may also be acquired by pressing the top button
16 to select multiple conic arrays similar to the 3D array 240.
[0149] Depending on the position of the fetus relative to the
location of the transceiver 10, a single image scan may
underestimate the AFV due to amniotic fluid pockets that
remain hidden behind the limbs of the fetus. The hidden amniotic
fluid pockets present as unquantifiable shadow-regions.
[0150] To guard against underestimating AFV, the transceiver 10 can
be repeatedly repositioned and the patient rescanned to obtain more
than one ultrasound view and thereby maximize detection of amniotic
fluid pockets. Repositioning and rescanning provide multiple views
as a plurality of the 3D array 240 image cones. Acquiring multiple
image cones improves the probability of obtaining initial estimates
of AFV that otherwise could remain undetected and unquantified in a
single scan.
[0151] In an alternative scan protocol, the user determines and
scans at only one location on the entire abdomen that shows the
maximum amniotic fluid area while the patient is in the supine
position. As before, when the user presses the top button 16, 2D
scanplane images equivalent to the scanplane 210 are continuously
acquired and the amniotic fluid area on every image is
automatically computed. The user selects one location that shows
the maximum amniotic fluid area. At this location, as the user
releases the scan button, a full 3D data cone is acquired and
stored in the device's memory.
[0152] FIG. 7 shows a block diagram overview of the image enhancement,
segmentation, and polishing algorithms of the amniotic fluid volume
measuring system. The enhancement, segmentation, and polishing
algorithms are applied to each scanplane 210 or to the entire scan
cone 240 to automatically obtain amniotic fluid regions. For
scanplanes substantially equivalent to scanplane 210, the
algorithms are expressed in two-dimensional terms and use formulas
to convert scanplane pixels (picture elements) into area units. For
the scan cones substantially equivalent to the 3D conic array 240,
the algorithms are expressed in three-dimensional terms and use
formulas to convert voxels (volume elements) into volume units.
[0153] The algorithms expressed in 2D terms are used during the
targeting phase where the operator trans-abdominally positions and
repositions the transceiver 10 to obtain real-time feedback about
the amniotic fluid area in each scanplane. The algorithms expressed
in 3D terms are used to obtain the total amniotic fluid volume
computed from the voxels contained within the calculated amniotic
fluid regions in the 3D conic array 240.
[0154] FIG. 7 represents an overview of a preferred method of the
invention and includes a sequence of algorithms, many of which have
sub-algorithms described in more specific detail in FIGS. 8A-F.
FIG. 7 begins with inputting data of an unprocessed image at step
410. After unprocessed image data 410 is entered (e.g., read from
memory, scanned, or otherwise acquired), it is automatically
subjected to an image enhancement algorithm 418 that reduces the
noise in the data (including speckle noise) using one or more
equations while preserving the salient edges on the image using one
or more additional equations. Next, the enhanced images are
segmented by two different methods whose results are eventually
combined. A first segmentation method applies an intensity-based
segmentation algorithm 422 that determines all pixels that are
potentially fluid pixels based on their intensities. A second
segmentation method applies an edge-based segmentation algorithm
438 that relies on detecting the fluid and tissue interfaces. The
images obtained by the first segmentation algorithm 422 and the
images obtained by the second segmentation algorithm 438 are
brought together via a combination algorithm 442 to provide a
substantially segmented image. The segmented image obtained from
the combination algorithm 442 is then subjected to a polishing
algorithm 464 in which the segmented image is cleaned up by filling
gaps with pixels and removing unlikely regions. The image obtained
from the polishing algorithm 464 is outputted 480 for calculation
of areas and volumes of segmented regions-of-interest. Finally, the
area or the volume of the segmented region-of-interest is computed
484 by multiplying pixels by a first resolution factor to obtain
area, or voxels by a second resolution factor to obtain volume. For
example, for pixels having a size of 0.8 mm by 0.8 mm, the first
resolution or conversion factor for pixel area is equivalent to
0.64 mm.sup.2, and the second resolution or conversion factor for
voxel volume is equivalent to 0.512 mm.sup.3. Different unit
lengths for pixels and voxels may be assigned, with a proportional
change in pixel area and voxel volume conversion factors.
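The conversion in step 484 can be sketched as follows; the 0.8 mm spacing matches the example in the text, while the function names and the sample counts are illustrative assumptions.

```python
# Sketch of the area/volume computation in step 484: multiply the count of
# segmented pixels or voxels by a resolution (conversion) factor.

def region_area_mm2(pixel_count, pixel_size_mm=0.8):
    """Convert a count of segmented pixels to an area in mm^2."""
    return pixel_count * pixel_size_mm ** 2

def region_volume_mm3(voxel_count, voxel_size_mm=0.8):
    """Convert a count of segmented voxels to a volume in mm^3."""
    return voxel_count * voxel_size_mm ** 3

# For 0.8 mm pixels the factors are 0.64 mm^2 per pixel and 0.512 mm^3
# per voxel, as in the example above; other unit lengths scale the
# factors proportionally.
```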
[0155] The enhancement, segmentation and polishing algorithms
depicted in FIG. 7 for measuring amniotic fluid areas or volumes
are not limited to scanplanes assembled into rotational arrays
equivalent to the 3D array 240. As additional examples, the
enhancement, segmentation and polishing algorithms depicted in FIG.
7 apply to translation arrays and wedge arrays. Translation arrays
are substantially rectilinear image plane slices from incrementally
repositioned ultrasound transceivers that are configured to acquire
ultrasound rectilinear scanplanes separated by regular or irregular
rectilinear spaces. The translation arrays can be made from
transceivers configured to advance incrementally, or may be
hand-positioned incrementally by an operator. The operator obtains
a wedge array from ultrasound transceivers configured to acquire
wedge-shaped scanplanes separated by regular or irregular angular
spaces, and either mechanistically advanced or hand-tilted
incrementally. Any number of scanplanes can be either
translationally assembled or wedge-assembled ranges, but preferably
in ranges greater than 2 scanplanes.
[0156] Other preferred embodiments of the enhancement, segmentation
and polishing algorithms depicted in FIG. 7 may be applied to
images formed by line arrays, either spiral distributed or
reconstructed random-lines. The line arrays are defined using
points identified by the coordinates expressed by the three
parameters, P(r,.phi.,.theta.), where the values or r, .phi., and
.theta. can vary.
[0157] The enhancement, segmentation and polishing algorithms
depicted in FIG. 7 are not limited to ultrasound applications but
may be employed in other imaging technologies utilizing scanplane
arrays or individual scanplanes. For example, biological-based and
non-biological-based images acquired using infrared, visible light,
ultraviolet light, microwave, x-ray computed tomography, magnetic
resonance, gamma rays, and positron emission are images suitable
for the algorithms depicted in FIG. 7. Furthermore, the algorithms
depicted in FIG. 7 can be applied to facsimile transmitted images
and documents.
[0158] FIGS. 8A-E depict expanded details of the preferred
embodiments of enhancement, segmentation, and polishing algorithms
described in FIG. 7. Each of the following more detailed
algorithms is either implemented on the transceiver 10 itself or
on the host computer 52 or on the server 56 computer to which the
ultrasound data is transferred.
[0159] FIG. 8A depicts the sub-algorithms of Image Enhancement. The
sub-algorithms include a heat filter 514 to reduce noise and a
shock filter 518 to sharpen edges. A combination of the heat and
shock filters works very well at reducing noise and sharpening the
data while preserving the significant discontinuities. First, the
noisy signal is filtered using a 1D heat filter (Equation E1
below), which results in the reduction of noise and smoothing of
edges. This step is followed by a shock-filtering step 518
(Equation E2 below), which results in the sharpening of the blurred
signal. Noise reduction and edge sharpening is achieved by
application of the following equations E1-E2. The algorithm of the
heat filter 514 uses a heat equation E1. The heat equation E1 in
partial differential equation (PDE) form for image processing is
expressed as:
∂u/∂t = ∂²u/∂x² + ∂²u/∂y², E1
[0160] where u is the image being processed. The image u is 2D, and
is comprised of an array of pixels arranged in rows along the
x-axis, and an array of pixels arranged in columns along the
y-axis. The pixel intensity of each pixel in the image u has an
initial input image pixel intensity (I) defined as u.sub.0=I. The
value of I depends on the application, and commonly occurs within
ranges consistent with the application. For example, I can be as
low as 0 to 1, or occupy middle ranges between 0 to 127 or 0 to
512. Similarly, I may have values occupying higher ranges of 0 to
1024 and 0 to 4096, or greater. The heat equation E1 results in a
smoothing of the image and is equivalent to the Gaussian filtering
of the image. The larger the number of iterations applied, the more
the input image is smoothed or blurred and the more the noise is
reduced.
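The smoothing behavior of equation E1 can be sketched with an explicit finite-difference iteration; the grid, time step, and iteration count below are illustrative assumptions, not parameters from the patent.

```python
# Minimal sketch of the heat filter 514 (equation E1): an explicit Euler
# step of du/dt = u_xx + u_yy on a small 2D intensity grid (list of rows).

def heat_filter(u, iterations=10, dt=0.2):
    """Smooth image u by iterating the discrete heat equation."""
    rows, cols = len(u), len(u[0])
    u = [row[:] for row in u]
    for _ in range(iterations):
        nxt = [row[:] for row in u]
        for y in range(1, rows - 1):
            for x in range(1, cols - 1):
                # 5-point discrete Laplacian of the pixel intensity.
                lap = (u[y][x + 1] + u[y][x - 1] + u[y + 1][x]
                       + u[y - 1][x] - 4.0 * u[y][x])
                nxt[y][x] = u[y][x] + dt * lap
        u = nxt
    return u

# More iterations blur the image further, like repeated Gaussian filtering.
```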
[0161] The shock filter 518 is a PDE used to sharpen images as
detailed below. The two-dimensional shock filter E2 is expressed
as:
∂u/∂t = −F(ℓ(u)) ‖∇u‖, E2
where u is the image being processed, whose initial value is the
input image pixel intensity (I): u₀ = I; the ℓ(u) term is the
Laplacian of the image u; F is a function of the Laplacian; and
‖∇u‖ is the 2D gradient magnitude of image intensity defined by
equation E3.
‖∇u‖ = √(u_x² + u_y²), E3 [0162] where [0163]
u_x² = the square of the partial derivative of the pixel
intensity (u) along the x-axis, [0164] u_y² = the square of
the partial derivative of the pixel intensity (u) along the y-axis,
[0165] the Laplacian ℓ(u) of the image, u, is expressed in equation
E4 as
[0165]
ℓ(u) = u_xx u_x² + 2 u_xy u_x u_y + u_yy u_y², E4 [0166] where equation E4 relates to equation E2 as
follows: [0167] u_x is the first partial derivative ∂u/∂x
of u along the x-axis, [0168] u_y is the first partial
derivative ∂u/∂y of u along the y-axis, [0169] u_x² is the
square of the first partial derivative ∂u/∂x of u along the
x-axis, [0170] u_y² is the square of the first partial
derivative ∂u/∂y of u along the y-axis, [0171] u_xx is the
second partial derivative ∂²u/∂x² of u along the x-axis,
[0172] u_yy is the second partial derivative ∂²u/∂y² of u
along the y-axis, [0173] u_xy is the cross partial derivative
∂²u/∂x∂y of u along the x- and y-axes, and [0174] the sign of
the function F modifies the Laplacian by the image gradient values
selected to avoid placing spurious edges at points with small
gradient values:
[0174] F(ℓ(u)) = 1, if ℓ(u) > 0 and ‖∇u‖ > t
F(ℓ(u)) = −1, if ℓ(u) < 0 and ‖∇u‖ > t
F(ℓ(u)) = 0, otherwise
[0175] where t is a threshold on the pixel gradient
value ‖∇u‖.
[0176] The combination of heat filtering and shock filtering
produces an enhanced image ready to undergo the intensity-based and
edge-based segmentation algorithms as discussed below.
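The sharpening behavior of the shock filter (equation E2) can be sketched in 1D; the minmod upwinding, time step, and sample signal are illustrative assumptions rather than the patent's scheme.

```python
# A 1D sketch of the shock filter 518: the sign of the second derivative
# steers each sample toward the nearer edge plateau, sharpening a blurred
# step while leaving plateaus unchanged.

def _minmod(a, b):
    """Upwind slope limiter: 0 on a sign change, else the smaller magnitude."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def shock_filter_1d(u, iterations=20, dt=0.4):
    """Sharpen a blurred 1D profile using du/dt = -sign(u_xx) * |u_x|."""
    u = list(u)
    n = len(u)
    for _ in range(iterations):
        nxt = u[:]
        for i in range(1, n - 1):
            uxx = u[i + 1] - 2.0 * u[i] + u[i - 1]
            ux = abs(_minmod(u[i + 1] - u[i], u[i] - u[i - 1]))
            if uxx > 0:
                nxt[i] = u[i] - dt * ux  # pull down toward the lower plateau
            elif uxx < 0:
                nxt[i] = u[i] + dt * ux  # push up toward the upper plateau
        u = nxt
    return u
```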
[0177] FIG. 8B depicts the sub-algorithms of Intensity-Based
Segmentation (step 422 in FIG. 7). The intensity-based segmentation
step 422 uses a "k-means" intensity clustering 522 technique where
the enhanced image is subjected to a categorizing "k-means"
clustering algorithm. The "k-means" algorithm categorizes pixel
intensities into white, gray, and black pixel groups. Given the
number of desired clusters or groups of intensities (k), the
k-means algorithm is an iterative algorithm comprising four
steps:
[0178] 1. Initially determine or categorize cluster boundaries by
defining a minimum and a maximum pixel intensity value for each of
the white, gray, and black pixel groups or k-clusters, such that
the clusters are equally spaced across the entire intensity range.
[0179] 2. Assign each pixel to one of the white, gray or black
k-clusters based on the currently set cluster boundaries.
[0180] 3. Calculate a mean intensity for each pixel intensity
k-cluster or group based on the current assignment of pixels into
the different k-clusters. The calculated mean intensity is defined
as a cluster center. Thereafter, new cluster boundaries are
determined as mid points between cluster centers.
[0181] 4. Determine if the cluster boundaries significantly change
locations from their previous values. Should the cluster boundaries
change significantly from their previous values, iterate back to
step 2, until the cluster centers do not change significantly
between iterations. Visually, the clustering process is manifest by
the segmented image and repeated iterations continue until the
segmented image does not change between the iterations.
[0182] The pixels in the cluster having the lowest intensity
value--the darkest cluster--are defined as pixels associated with
amniotic fluid. For the 2D algorithm, each image is clustered
independently of the neighboring images. For the 3D algorithm, the
entire volume is clustered together. To make this step faster,
pixels are down-sampled by a factor of 2 (or another multiple)
before determining the cluster boundaries. The cluster boundaries
determined from the down-sampled data are then applied to the
entire data set.
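The four steps above can be sketched as follows, with k = 3 (black, gray, white); the sample intensities and the convergence tolerance are illustrative assumptions.

```python
# Sketch of the boundary-based k-means intensity clustering in step 522.

def kmeans_intensity(pixels, k=3, lo=0.0, hi=255.0, max_iter=50):
    """Cluster intensities; return cluster centers and per-pixel labels."""
    # Step 1: boundaries equally spaced over the intensity range.
    step = (hi - lo) / k
    bounds = [lo + step * i for i in range(1, k)]
    for _ in range(max_iter):
        # Step 2: assign each pixel to a cluster via the current boundaries.
        clusters = [[] for _ in range(k)]
        for p in pixels:
            clusters[sum(p >= b for b in bounds)].append(p)
        # Step 3: cluster centers are the mean intensities; new boundaries
        # are midpoints between adjacent centers.
        centers = [sum(c) / len(c) if c else 0.0 for c in clusters]
        new_bounds = [(centers[i] + centers[i + 1]) / 2 for i in range(k - 1)]
        # Step 4: stop when the boundaries no longer move significantly.
        if all(abs(a - b) < 1e-6 for a, b in zip(bounds, new_bounds)):
            break
        bounds = new_bounds
    labels = [sum(p >= b for b in bounds) for p in pixels]
    return centers, labels

# Label 0 (the darkest cluster) marks candidate amniotic fluid pixels.
```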
[0183] FIG. 8C depicts the sub-algorithms of Edge-Based
Segmentation (step 438 in FIG. 7) and uses a sequence of four
sub-algorithms. The sequence includes a spatial gradients 526
algorithm, a hysteresis threshold 530 algorithm, a
Region-of-Interest (ROI) 534 algorithm, and a matching edges filter
538 algorithm.
[0184] The spatial gradient 526 computes the x-directional and
y-directional spatial gradients of the enhanced image. The
Hysteresis threshold 530 algorithm detects salient edges. Once the
edges are detected, the regions defined by the edges are selected
by a user employing the ROI 534 algorithm to select
regions-of-interest deemed relevant for analysis.
[0185] Since the enhanced image has very sharp transitions, the
edge points can be easily determined by taking x- and y-derivatives
using backward differences along the x- and y-directions. The pixel
gradient magnitude ‖∇I‖ is then computed
from the x- and y-derivative images in equation E5 as:
‖∇I‖ = √(I_x² + I_y²), E5
[0186] where I_x² = the square of the x-derivative of intensity;
and [0187] I_y² = the square of the y-derivative of intensity
along the y-axis.
[0188] Significant edge points are then determined by thresholding
the gradient magnitudes using a hysteresis thresholding operation.
Other thresholding methods could also be used. In hysteresis
thresholding 530, two threshold values, a lower threshold and a
higher threshold, are used. First, the image is thresholded at the
lower threshold value and a connected component labeling is carried
out on the resulting image. Next, each connected edge component is
preserved which has at least one edge pixel having a gradient
magnitude greater than the upper threshold. This kind of
thresholding scheme is good at retaining long connected edges that
have one or more high gradient points.
[0189] In the preferred embodiment, the two thresholds are
automatically estimated. The upper gradient threshold is estimated
at a value such that at most 97% of the image pixels are marked as
non-edges. The lower threshold is set at 50% of the value of the
upper threshold. These percentages could be different in different
implementations. Next, edge points that lie within a desired
region-of-interest are selected 534. This region of interest
selection 534 excludes points lying at the image boundaries and
points lying too close to or too far from the transceiver 10.
Finally, the matching edge filter 538 is applied to remove outlier
edge points and fill in the area between the matching edge
points.
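The hysteresis thresholding in step 530 can be sketched as follows: low-threshold connected components are retained only if they contain at least one pixel above the high threshold. The grid, thresholds, and 4-connectivity are illustrative assumptions; the patent derives the thresholds from the image (upper at the 97% non-edge level, lower at 50% of the upper).

```python
# Sketch of hysteresis thresholding on a small gradient-magnitude grid.

def hysteresis_threshold(grad, low, high):
    """Keep low-threshold components that contain a high-threshold pixel."""
    rows, cols = len(grad), len(grad[0])
    keep = [[False] * cols for _ in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            if seen[y][x] or grad[y][x] < low:
                continue
            # Flood-fill one connected component above the low threshold.
            comp, stack, strong = [], [(y, x)], False
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                comp.append((cy, cx))
                if grad[cy][cx] >= high:
                    strong = True
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and grad[ny][nx] >= low):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            # Retain the component only if it has a pixel over the upper
            # threshold; this keeps long edges with a few strong points.
            if strong:
                for cy, cx in comp:
                    keep[cy][cx] = True
    return keep
```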
[0190] The edge-matching algorithm 538 is applied to establish
valid boundary edges and remove spurious edges while filling the
regions between boundary edges. Edge points on an image have a
directional component indicating the direction of the gradient.
Pixels in scanlines crossing a boundary edge location will exhibit
two gradient transitions depending on the pixel intensity
directionality. Each gradient transition is given a positive or
negative value depending on the pixel intensity directionality. For
example, if the scanline approaches an echo reflective bright wall
from a darker region, then an ascending transition is established
as the pixel intensity gradient increases to a maximum value, i.e.,
as the transition ascends from a dark region to a bright region.
The ascending transition is given a positive numerical value.
Similarly, as the scanline recedes from the echo reflective wall, a
descending transition is established as the pixel intensity
gradient decreases to or approaches a minimum value. The descending
transition is given a negative numerical value.
[0191] Valid boundary edges are those that exhibit ascending and
descending pixel intensity gradients, or equivalently, exhibit
paired or matched positive and negative numerical values. The valid
boundary edges are retained in the image. Spurious or invalid
boundary edges do not exhibit paired ascending-descending pixel
intensity gradients, i.e., do not exhibit paired or matched
positive and negative numerical values. The spurious boundary edges
are removed from the image.
[0192] For amniotic fluid volume related applications, most edge
points for amniotic fluid surround a dark, closed region, with
directions pointing inwards towards the center of the region. Thus,
for a convex-shaped region, the edge point having a gradient
direction approximately opposite to that of the current edge point
represents the matching
edge point. Those edge points exhibiting an assigned positive and
negative value are kept as valid edge points on the image because
the negative value is paired with its positive value counterpart.
Similarly, those edge point candidates having unmatched values,
i.e., those edge point candidates not having a negative-positive
value pair, are deemed not to be true or valid edge points and are
discarded from the image.
[0193] The matching edge point algorithm 538 delineates edge points
not lying on the boundary for removal from the desired dark
regions. Thereafter, the region between any two matching edge
points is filled in with non-zero pixels to establish edge-based
segmentation. In a preferred embodiment of the invention, only edge
points whose directions are primarily oriented co-linearly with the
scanline are sought to permit the detection of matching front wall
and back wall pairs.
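The wall-pairing and fill-in logic above can be sketched along one scanline; the ±1 transition encoding and the convention that a positive transition opens a region are illustrative assumptions, not the patent's exact representation.

```python
# Sketch of the matching-edge idea (steps [0190]-[0193]) on one scanline:
# a +1 transition (ascending, dark-to-bright) is paired with the next -1
# transition (descending, bright-to-dark), and the span between a matched
# pair is filled; unmatched transitions are discarded as spurious.

def fill_matched_edges(transitions):
    """transitions: list of 0 / +1 / -1 per scanline sample.
    Returns a mask with 1s between each matched +1 ... -1 pair."""
    mask = [0] * len(transitions)
    open_pos = None
    for i, t in enumerate(transitions):
        if t == +1:
            open_pos = i              # candidate front-wall edge
        elif t == -1 and open_pos is not None:
            for j in range(open_pos, i + 1):
                mask[j] = 1           # fill between matched walls
            open_pos = None
        # an unmatched -1 (no preceding +1) is discarded as spurious
    return mask
```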
[0194] Returning to FIG. 7, once Intensity-Based 422 and Edge-Based
Segmentation 438 is completed, both segmentation methods use a
combining step that combines the results of intensity-based
segmentation 422 step and the edge-based segmentation 438 step
using an AND Operator of Images 442. The AND Operator of Images 442
is achieved by a pixel-wise Boolean AND operator 442 step to
produce a segmented image by computing the pixel intersection of
two images. The Boolean AND operation 442 represents the pixels as
binary numbers and assigns an intersection value of 1 or 0 to the
combination of
any two pixels. For example, consider any two pixels, say
pixel.sub.A and pixel.sub.B, which can have a 1 or 0 as assigned
values. If pixel.sub.A's value is 1, and pixel.sub.B's value is 1,
the assigned intersection value of pixel.sub.A and pixel.sub.B is
1. If the binary value of pixel.sub.A and pixel.sub.B are both 0,
or if either pixel.sub.A or pixel.sub.B is 0, then the assigned
intersection value of pixel.sub.A and pixel.sub.B is 0. The Boolean
AND operation 442 takes any two binary digital images as input,
and outputs a third image with the pixel values made equivalent to
the intersection of the two input images.
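The pixel-wise intersection described above can be sketched in a few lines; the tiny masks are illustrative.

```python
# Sketch of the AND Operator of Images 442: the pixel-wise Boolean AND of
# the intensity-based and edge-based segmentation masks.

def and_images(mask_a, mask_b):
    """Pixel-wise intersection of two binary images of equal size."""
    return [[a & b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(mask_a, mask_b)]
```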
[0195] Upon completion of the AND Operator of Images 442 algorithm,
the polish 464 algorithm of FIG. 7 is comprised of multiple
sub-algorithms. FIG. 8D depicts the sub-algorithms of the Polish
464 algorithm, including a Close 546 algorithm, an Open 550
algorithm, a Remove Deep Regions 554 algorithm, and a Remove Fetal
Head Regions 560 algorithm.
[0196] Closing and opening algorithms are operations that process
images based on the knowledge of the shape of objects contained on
a black and white image, where white represents foreground regions
and black represents background regions. Closing serves to remove
background features on the image that are smaller than a specified
size. Opening serves to remove foreground features on the image
that are smaller than a specified size. The size of the features to
be removed is specified as an input to these operations. The
opening algorithm 550 removes unlikely amniotic fluid regions from
the segmented image based on a-priori knowledge of the size and
location of amniotic fluid pockets.
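The closing and opening operations described above can be sketched on a binary mask (1 = foreground); the fixed 3x3 square structuring element is an illustrative assumption, since the patent specifies the feature size as an input to these operations.

```python
# Pure-Python sketch of the morphological Close 546 and Open 550 steps.

def _dilate(m):
    """Set a pixel if any in-bounds 3x3 neighbor is set."""
    rows, cols = len(m), len(m[0])
    return [[1 if any(m[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if 0 <= y + dy < rows and 0 <= x + dx < cols)
             else 0 for x in range(cols)] for y in range(rows)]

def _erode(m):
    """Set a pixel only if all in-bounds 3x3 neighbors are set."""
    rows, cols = len(m), len(m[0])
    return [[1 if all(m[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if 0 <= y + dy < rows and 0 <= x + dx < cols)
             else 0 for x in range(cols)] for y in range(rows)]

def close_mask(m):
    """Dilation then erosion: removes small background features (holes)."""
    return _erode(_dilate(m))

def open_mask(m):
    """Erosion then dilation: removes small foreground features (specks)."""
    return _dilate(_erode(m))
```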
[0197] Referring to FIG. 8D, the closing 546 algorithm obtains the
Apparent Amniotic Fluid Area (AAFA) or Volume (AAFV) values. The
AAFA and AAFV values are "Apparent" and maximal because these
values may contain region areas or region volumes of non-amniotic
origin unknowingly contributing to and obscuring what otherwise
would be the true amniotic fluid volume. For example, the AAFA and
AAFV values contain the true amniotic volumes, and possibly as well
areas or volumes due to deep tissues and undetected fetal head
volumes. Thus the apparent area and volume values require
correction or adjustments due to unknown contributions of deep
tissue and of the fetal head in order to determine an Adjusted
Amniotic Fluid Area (AdAFA) value or Volume (AdAVA) value 568.
[0198] The AAFA and AAFV values obtained by the Close 546
algorithm are reduced by the morphological opening algorithm 550.
Thereafter, the AAFA and AAFV values are further reduced by
removing areas and volumes attributable to deep regions by using
the Remove Deep Regions 554 algorithm. Thereafter, the polishing
algorithm 464 continues by applying a fetal head region detection
algorithm 560.
[0199] FIG. 8E depicts the sub-algorithms of the Remove Fetal Head
Regions sub-algorithm 560. The basic idea of the fetal head
detection algorithm 560 is to detect the edge points that
potentially represent a fetal skull and then to apply a
circle-finding algorithm that determines the best-fitting circle to
these fetal skull edges. The radii of the circles
that are searched are known a priori based on the fetus'
gestational age. The best fitting circle whose fitting metric lies
above a certain pre-specified threshold is marked as the fetal head
and the region inside this circle is the fetal head region. The
algorithms include a gestational Age 726 input, a determine head
diameter factor 730 algorithm, a Head Edge Detection algorithm,
734, and a Hough transform procedure 736.
[0200] Fetal brain tissue has substantially similar ultrasound echo
qualities as presented by amniotic fluid. If not detected and
subtracted from amniotic fluid volumes, fetal brain tissue volumes
will be measured as part of the total amniotic fluid volume and
lead to an overestimation and false diagnosis of oligohydramniotic
or polyhydramniotic conditions. Thus detecting the fetal head
position, measuring the fetal brain matter volume, and deducting
the fetal brain matter volume from the amniotic fluid volume to
obtain a corrected amniotic fluid volume serves to establish
accurately measured amniotic fluid volumes.
[0201] The gestational age input 726 begins the fetal head
detection algorithm 560 and uses a head dimension table to obtain
ranges of head bi-parietal diameters (BPD) to search for (e.g., 30
week gestational age corresponds to a 6 cm head diameter). The head
diameter range is input to both the Head Edge Detection, 734, and
the Hough Transform, 736. The head edge detection 734 algorithm
seeks out the distinctively bright ultrasound echoes from the
anterior and posterior walls of the fetal skull while the Hough
Transform algorithm, 736, finds the fetal head using circular
shapes as models for the fetal head in the Cartesian image
(pre-scan conversion to polar form).
[0202] Scanplanes processed by steps 522, 538, and 530 are input to
the head edge detection step 734. Applied as the first step in the
fetal head detection algorithm 734 is the detection of the
potential head edges from among the edges found by the matching
edge filter. The matching edge 538 filter outputs pairs of edge
points potentially belonging to front walls or back walls. Not all
of these walls correspond to fetal head locations. The edge points
representing the fetal head are determined using the following
heuristics: [0203] (1) Looking along a one dimensional A-mode scan
line, fetal head locations present a corresponding matching
gradient in the opposing direction within a short distance
approximately the same size as the thickness of the fetal skull.
This distance is currently set to a value of 1 cm. [0204] (2) The
front wall and the back wall locations of the fetal head are within
a range of diameters corresponding to the expected diameter 730 for
the gestational age 726 of the fetus. Walls that are too close or
too far are not likely to be head locations. [0205] (3) A majority
of the pixels between the front and back wall locations of the
fetal head lie within the minimum intensity cluster as defined by
the output of the clustering algorithm 422. The percentage of
pixels that need to be dark is currently defined to be 80%.
[0206] The pixels found satisfying these features are then
vertically dilated to produce a set of thick fetal head edges as
the output of Head Edge Detection, 734.
[0207] FIG. 8F depicts the sub-algorithms of the Hough transform
procedure 736. The sub-algorithms include a Polar Hough Transform
738 algorithm, a find maximum Hough value 742 algorithm, and a
fill circle region 746 algorithm. The Polar Hough Transform algorithm looks
for fetal head structures in polar coordinate terms by converting
from Cartesian coordinates using a plurality of equations. The
fetal head, which appears like a circle in a 3D scan-converted
Cartesian coordinate image, has a different shape in the pre-scan
converted polar space. The fetal head shape is expressed in terms
of polar coordinate terms explained as follows:
[0208] The coordinates of a circle in the Cartesian space (x,y),
with center (x₀,y₀) and radius R, are derived and defined for an
angle θ in equation E5 as:
x = R cos θ + x₀
y = R sin θ + y₀
(x − x₀)² + (y − y₀)² = R², E5
[0209] In polar space, the coordinates (r,φ), with respect to
the center (r₀,φ₀), are derived and defined in
equation E6 as:
r sin φ = R cos θ + r₀ sin φ₀
r cos φ = R sin θ + r₀ cos φ₀
(r sin φ − r₀ sin φ₀)² + (r cos φ − r₀ cos φ₀)² = R², E6
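The consistency of the Cartesian (E5) and polar (E6) circle forms can be checked numerically; the polar convention x = r sin φ, y = r cos φ implied by E6 is assumed here, and the sample circle parameters are illustrative.

```python
# Numeric check of equations E5/E6: a point generated on a Cartesian circle
# still satisfies the polar-form identity after conversion.

import math

def circle_polar_residual(r0, phi0, R, theta):
    """Return the E6 left side minus R^2 for the circle point at angle theta."""
    # Cartesian center and point on the circle (equation E5).
    x0, y0 = r0 * math.sin(phi0), r0 * math.cos(phi0)
    x = R * math.cos(theta) + x0
    y = R * math.sin(theta) + y0
    # The same point in polar coordinates (x = r sin(phi), y = r cos(phi)).
    r, phi = math.hypot(x, y), math.atan2(x, y)
    lhs = ((r * math.sin(phi) - r0 * math.sin(phi0)) ** 2
           + (r * math.cos(phi) - r0 * math.cos(phi0)) ** 2)
    return lhs - R ** 2  # ~0 for every theta
```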
[0210] The Hough transform 736 algorithm using equations E5 and E6
attempts to find the best-fit circle to the edges of an image. A
circle in the polar space is defined by a set of three parameters,
(r.sub.0,.phi..sub.0, R) representing the center and the radius of
the circle.
[0211] The basic idea for the Hough transform 736 is as follows.
Suppose a circle is sought having a fixed radius (say, R1) for
which the best center of the circle is similarly sought. Now, every
edge point on the input image lies on a potential circle whose
center lies R1 pixels away from it. The set of potential centers
themselves form a circle of radius R1 around each edge pixel. Now,
drawing potential circles of radius R1 around each edge pixel, the
point at which most circles intersect, a center of the circle that
represents a best-fit circle to the given edge points is obtained.
Therefore, each pixel in the Hough transform output contains a
likelihood value that is simply the count of the number of circles
passing through that point.
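The fixed-radius voting scheme described above can be sketched as follows; the degree-step sampling, integer grid, and vote de-duplication per edge pixel are illustrative assumptions.

```python
# Sketch of a fixed-radius circular Hough transform (steps 738/742): each
# edge pixel votes for all candidate centers at distance R, and the
# accumulator maximum is the best-fit center.

import math

def hough_circle_fixed_radius(edges, rows, cols, R):
    """edges: list of (y, x) edge pixels; returns (best_center, votes)."""
    acc = {}
    for ey, ex in edges:
        # Candidate centers lie on a radius-R circle around the edge pixel.
        centers = set()
        for t in range(360):
            a = math.radians(t)
            cy = ey + int(round(R * math.sin(a)))
            cx = ex + int(round(R * math.cos(a)))
            if 0 <= cy < rows and 0 <= cx < cols:
                centers.add((cy, cx))
        for c in centers:        # one vote per edge pixel per center
            acc[c] = acc.get(c, 0) + 1
    best = max(acc, key=acc.get)  # cell crossed by the most vote circles
    return best, acc[best]
```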
[0212] FIG. 9 illustrates the Hough Transform 736 algorithm for a
plurality of circles with a fixed radius in a Cartesian coordinate
system. A portion of the plurality of circles is represented by a
first circle 804a, a second circle 804b, and a third circle 804c. A
plurality of edge pixels are represented as gray squares and an
edge pixel 808 is shown. A circle is drawn around each edge pixel
to distinguish a center location 812 of a best-fit circle 816
passing through each edge pixel point; the point of the center
location through which most such circles pass (shown by a gray star
812) is the center of the best-fit circle 816 presented as a thick
dark line. The circumference of the best-fit circle 816 passes
substantially through the central portion of each edge pixel,
represented as a series of squares substantially equivalent to the
edge pixel 808.
[0213] This search for best fitting circles can be easily extended
to circles with varying radii by adding one more degree of
freedom--however, using a discrete set of radii around the mean
radius for a given gestational age makes the search significantly
faster, as it is not necessary to search all possible radii.
[0214] The next step in the head detection algorithm is selecting
or rejecting best-fit circles based on their likelihood, in the find
maximum Hough Value 742 algorithm. The greater the number of
circles passing through a given point in the Hough-space, the more
likely it is to be the center of a best-fit circle. A 2D metric,
the maximum Hough value 742 of the Hough transform 736 output, is
defined for every image in a dataset. The 3D metric is defined as
the maximum of the 2D metrics for the entire 3D dataset. A fetal
head is selected on an image depending on whether its 3D metric
value exceeds a preset 3D threshold and also whether the 2D metric
exceeds a preset 2D threshold. The 3D threshold is currently set at
7 and the 2D threshold is currently set at 5. These thresholds have
been determined by extensive training on images where the fetal
head was known to be present or absent.
[0215] Thereafter, the fetal head detection algorithm concludes
with a fill circle region 746 that incorporates pixels to the image
within the detected circle. The fill circle region 746 algorithm
fills the inside of the best fitting polar circle. Accordingly, the
fill circle region 746 algorithm encloses and defines the area of
the fetal brain tissue, permitting the area and volume to be
calculated and deducted via algorithm 554 from the apparent
amniotic fluid area and volume (AAFA or AAFV) to obtain a
computation of the corrected amniotic fluid area or volume via
algorithm 484.
[0216] FIG. 10 shows the results of sequentially applying the
algorithm steps of FIGS. 7 and 8A-D on an unprocessed sample image
820 presented within the confines of a scanplane substantially
equivalent to the scanplane 210. The result of applying the heat
filter 514 and shock filter 518 to enhance the unprocessed sample
is shown in enhanced image 840. The result of the intensity-based
segmentation algorithm 522 is shown in image 850. The result of the
edge-based segmentation 438 algorithm, using sub-algorithms 526,
530, 534 and 538, on the enhanced image 840 is shown in segmented
image 858. The result of the combination 442 utilizing the Boolean
AND images 442 algorithm is shown in image 862, where white
represents the amniotic fluid area. The result of applying the
polishing 464 algorithm employing algorithms 542, 546, 550, 554,
560, and 564 is shown in image 864, which depicts the amniotic
fluid area overlaid on the unprocessed sample image 820.
[0217] FIG. 11 depicts a series of images showing the results of
the above method to automatically detect, locate, and measure the
area and volume of a fetal head using the algorithms outlined in
FIGS. 7 and 8A-F. Beginning with an input image in polar coordinate
form 920, the fetal head image is marked by distinctive bright
echoes from the anterior and posterior walls of the fetal skull and
a circular shape of the fetal head in the Cartesian image. The
fetal head detection algorithm 734 operates on the polar coordinate
data (i.e., pre-scan version, not yet converted to Cartesian
coordinates).
[0218] An example output of applying the head edge detection 734
algorithm to detect potential head edges is shown in image 930.
Occupying the space between the anterior and posterior walls are
dilated black pixels 932 (stacks or short lines of black pixels
representing thick edges). An example of the polar Hough transform
738 for one actual data sample for a specific radius is shown in
polar coordinate image 940.
[0219] An example of the best-fit circle on real polar data is
shown in polar coordinate image 950, which has undergone the find
maximum Hough value step 742. The polar coordinate image 950 is
scan-converted to Cartesian data in image 960, where the effects of
the find maximum Hough value 742 algorithm are seen in Cartesian
format.
[0220] FIG. 12 presents a 4-panel series of sonographer amniotic
fluid pocket outlines compared to the algorithm's output in a
scanplane equivalent to scanplane 210. The top two panels depict
the sonographer's outlines of amniotic fluid pockets obtained by
manual interactions with the display while the bottom two panels
show the resulting amniotic fluid boundaries obtained from the
instant invention's automatic application of 2D algorithms, 3D
algorithms, combination heat and shock filter algorithms, and
segmentation algorithms.
[0221] After the contours on all the images have been delineated,
the volume of the segmented structure is computed. Two specific
techniques for doing so are disclosed in detail in U.S. Pat. No.
5,235,985 to McMorrow et al., herein incorporated by reference. This
patent provides detailed explanations for non-invasively
transmitting, receiving and processing ultrasound for calculating
volumes of anatomical structures.
[0222] Multiple Image Cone Acquisition and Image Processing
Procedures:
[0223] In some embodiments, multiple cones of data acquired at
multiple anatomical sampling sites may be advantageous. For
example, in some instances, the pregnant uterus may be too large to
completely fit in one cone of data sampled from a single
measurement or anatomical site of the patient (patient location).
That is, the transceiver 10 is moved to different anatomical
locations of the patient to obtain different 3D views of the uterus
from each measurement or transceiver location.
[0224] Obtaining multiple 3D views may be especially needed during
the third trimester of pregnancy, or when twins or triplets are
involved. In such cases, multiple data cones can be sampled from
different anatomical sites at known intervals and then combined
into a composite image mosaic to present a large uterus in one,
continuous image. In order to make a composite image mosaic that is
anatomically accurate without duplicating the anatomical regions
mutually viewed by adjacent data cones, ordinarily it is
advantageous to obtain images from adjacent data cones and then
register and subsequently fuse them together. In a preferred
embodiment, to acquire and process multiple 3D data sets or images
cones, at least two 3D image cones are generally preferred, with
one image cone defined as fixed, and the other image cone defined
as moving.
[0225] The 3D image cones obtained from each anatomical site may be
in the form of 3D arrays of 2D scanplanes, similar to the 3D array
240. Furthermore, the 3D image cone may be in the form of a wedge
or a translational array of 2D scanplanes. Alternatively, the 3D
image cone obtained from each anatomical site may be a 3D scancone
of 3D-distributed scanlines, similar to the scancone 300.
[0226] The term "registration" with reference to digital images
means the determination of a geometrical transformation or mapping
that aligns viewpoint pixels or voxels from one data cone sample of
the object (in this embodiment, the uterus) with viewpoint pixels
or voxels from another data cone sampled at a different location
from the object. That is, registration involves mathematically
determining and converting the coordinates of common regions of an
object from one viewpoint to the coordinates of another viewpoint.
After registration of at least two data cones to a common
coordinate system, the registered data cone images are then fused
together by combining the two registered data images by producing a
reoriented version from the view of one of the registered data
cones. That is, for example, a second data cone's view is merged
into a first data cone's view by translating and rotating the
pixels of the second data cone's pixels that are common with the
pixels of the first data cone. Knowing how much to translate and
rotate the second data cone's common pixels or voxels allows the
pixels or voxels in common between both data cones to be
superimposed into approximately the same x, y, z, spatial
coordinates so as to accurately portray the object being imaged.
The more precise and accurate the pixel or voxel rotation and
translation, the more precise and accurate is the common pixel or
voxel superimposition or overlap between adjacent image cones. The
precise and accurate overlap between the images assures the
construction of an anatomically correct composite image mosaic
substantially devoid of duplicated anatomical regions.
[0227] To obtain the precise and accurate overlap of common pixels
or voxels between the adjacent data cones, it is advantageous to
utilize a geometrical transformation that substantially preserves
most or all distances regarding line straightness, surface
planarity, and angles between the lines as defined by the image
pixels or voxels. That is, the preferred geometrical transformation
that fosters obtaining an anatomically accurate mosaic image is a
rigid transformation that doesn't permit the distortion or
deforming of the geometrical parameters or coordinates between the
pixels or voxels common to both image cones.
[0228] The preferred rigid transformation first converts the polar
coordinate scanplanes from adjacent image cones into in x, y, z
Cartesian axes. After converting the scanplanes into the Cartesian
system, a rigid transformation, T, is determined from the
scanplanes of adjacent image cones having pixels in common. The
transformation T is a combination of a three-dimensional
translation vector expressed in Cartesian as t=(T.sub.x, T.sub.y,
T.sub.z), and a three-dimensional rotation R matrix expressed as a
function of Euler angles .theta..sub.x, .theta..sub.y,
.theta..sub.z around the x, y, and z axes. The transformation
represents a shift and rotation conversion factor that aligns and
overlaps common pixels from the scanplanes of the adjacent image
cones.
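The rigid transformation T described above, a rotation R built from Euler angles followed by a translation t, can be sketched as follows. This is a minimal illustration, not the disclosed instrument code; the helper names and the z-y-x composition order of the per-axis rotations are the editor's assumptions, since the text does not specify a composition order.

```python
import numpy as np

def rotation_from_euler(tx, ty, tz):
    """Rotation matrix R from Euler angles (radians) about x, y, z axes.

    Composition order (Rz @ Ry @ Rx) is an assumed convention.
    """
    cx, sx = np.cos(tx), np.sin(tx)
    cy, sy = np.cos(ty), np.sin(ty)
    cz, sz = np.cos(tz), np.sin(tz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def apply_rigid_transform(points, R, t):
    """Apply T(x) = R x + t to an (N, 3) array of Cartesian voxel coordinates."""
    return points @ R.T + t
```

For the 6 cm lateral protocol, the initial transform would use the identity rotation (all angles zero) and t = (6, 0, 0).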
[0229] In the preferred embodiment of the present invention, the
common pixels used for the purposes of establishing registration of
three-dimensional images are the boundaries of the amniotic fluid
regions as determined by the amniotic fluid segmentation algorithm
described above.
[0230] Several different protocols that may be used to collect and
process multiple cones of data from more than one measurement site
are described in FIGS. 13-14.
[0231] FIG. 13 illustrates a 4-quadrant supine procedure to acquire
multiple image cones around the center point of uterine quadrants
of a patient in a supine procedure. Here the patient lies supine
(on her back) displacing most or all of the amniotic fluid towards
the top. The uterus is divided into 4 quadrants defined by the
umbilicus (the navel) and the linea-nigra (the vertical center line
of the abdomen) and a single 3D scan is acquired at each quadrant.
The 4-quadrant supine protocol acquires four different 3D scans in
a two dimensional grid, each corner of the grid being a quadrant
midpoint. Four cones of data are acquired by the transceiver 10
along the midpoints of quadrant 1, quadrant 2, quadrant 3, and
quadrant 4. Thus, one 3D data cone per uterine quadrant midpoint is
acquired such that each quadrant midpoint is mutually substantially
equally spaced from each other in a four-corner grid array.
[0232] FIG. 14 illustrates a multiple lateral line procedure to
acquire multiple image cones in a linear array. Here the patient
lies laterally (on her side), displacing most or all of the
amniotic fluid towards the top. Four 3D image cones of data are
acquired along a line at substantially equally spaced intervals.
illustrated, the transceiver 10 moves along the lateral line at
position 1, position 2, position 3, and position 4. As illustrated
in FIG. 14, the inter-position distance or interval is
approximately 6 cm.
[0233] The preferred embodiment for making a composite image mosaic
involves obtaining four multiple image cones where the transceiver
10 is placed at four measurement sites over the patient in a supine
or lateral position such that at least a portion of the uterus is
ultrasonically viewable at each measurement site. The first
measurement site is originally defined as fixed, and the second
site is defined as moving and placed at a first known inter-site
distance relative to the first site. The second site images are
registered and fused to the first site images. After fusing the
second site images to the first site images, the third measurement
site is defined as moving and placed at a second known inter-site
distance relative to the fused second site, now defined as fixed.
The third site images are registered and fused to the second site
images. Similarly, after fusing the third site images to the second
site images, the fourth measurement site is defined as moving and
placed at a third known inter-site distance relative to the fused
third site, now defined as fixed. The fourth site images are
registered and fused to the third site images.
[0234] The four measurement sites may be along a line or in an
array. The array may include rectangles, squares, diamond patterns,
or other shapes. Preferably, the patient is positioned such that
the baby moves downward with gravity in the uterus and displaces
the amniotic fluid upwards toward the measuring positions of the
transceiver 10.
[0235] The interval or distance between each measurement site may
be approximately equal, or may be unequal. For example, in the
lateral protocol, the second site is spaced approximately 6 cm from
the first site, the third site is spaced approximately 6 cm from
the second site, and the fourth site is spaced approximately 6 cm
from the third site. The spacing for unequal intervals could be,
for example, the second site spaced approximately 4 cm from the
first site, the third site spaced approximately 8 cm from the
second site, and the fourth site spaced approximately 6 cm from the
third site. The interval distance between measurement sites may be
varied as long as there are mutually viewable regions of portions
of the uterus between adjacent measurement sites.
[0236] For uteruses not so large as to require four measurement
sites, two or three measurement sites may be sufficient for making
a composite 3D image mosaic. For three measurement sites, a
triangular array is possible, with equal or unequal intervals.
Furthermore, in the case where the second and third measurement
sites have regions mutually viewable from the first measurement
site, the second interval may be measured from the first
measurement site instead of from the second measurement
site.
[0237] For very large uteruses not fully captured by four
measurement or anatomical sites, more than four measurement sites
may be used to make a composite 3D image mosaic, provided that at
least a portion of the uterus is ultrasonically viewable at each
measurement site. For five measurement sites, a pentagon array
is possible, with equal or unequal intervals. Similarly, for six
measurement sites, a hexagon array is possible, with equal or
unequal intervals between each measurement site. Other polygonal
arrays are possible with increasing numbers of measurement
sites.
[0238] The geometrical relationship between each image cone must be
ascertained so that overlapping regions can be identified between
any two image cones to permit the combining of adjacent neighboring
cones so that a single 3D mosaic composite image is produced from
the 4-quadrant or in-line laterally acquired images.
[0239] The translational and rotational adjustments of each moving
cone to conform with the voxels common to the stationary image cone
are guided by an inputted initial transform that has the expected
translational and rotational values. The distance separating the
transceiver 10 between image cone acquisitions predicts the
expected translational and rotational values. For example, as shown
in FIG. 14, if 6 cm separates the image cones, then the expected
translational and rotational values are proportionally estimated.
For example, the Cartesian translation terms (T.sub.x, T.sub.y,
T.sub.z) and the Euler angle terms (.theta..sub.x, .theta..sub.y,
.theta..sub.z) of the initial transform are defined respectively as
(6 cm, 0 cm, 0 cm) and (0deg, 0deg, 0deg).
[0240] FIG. 15 is a block diagram algorithm overview of the
registration and correcting algorithms used in processing multiple
image cone data sets. The algorithm overview 1000 shows how the
entire amniotic fluid volume measurement process occurs from the
multiply acquired image cones. First, each of the input cones 1004
is segmented 1008 to detect all amniotic fluid regions. The
segmentation 1008 step is substantially similar to steps 418-480 of
FIG. 7. Next, these segmented regions are used to align (register)
the different cones into one common coordinate system using a Rigid
Registration 1012 algorithm. Next, the registered datasets from
each image cone are fused with each other using a Fuse Data 1016
algorithm to produce a composite 3D mosaic image. Thereafter, the
total amniotic fluid volume is computed 1020 from the fused or
composite 3D mosaic image.
[0241] FIG. 16 is a block diagram of the steps of the rigid
registration algorithm 1012. The rigid algorithm 1012 is a 3D image
registration algorithm and is a modification of the Iterated
Closest Point (ICP) algorithm published by P J Besl and N D McKay,
in "A Method for Registration of 3-D Shapes," IEEE Trans. Pattern
Analysis & Machine Intelligence, vol. 14, no. 2, February 1992,
pp. 239-256. The steps of the rigid registration algorithm 1012
serve to correct for overlap between adjacent 3D scan cones
acquired in either the 4-quadrant supine grid procedure or the
lateral line multi data cone acquisition procedure. The rigid
algorithm 1012 first converts the fixed image 1104 from polar
coordinate terms to Cartesian coordinate terms using the 3D Scan
Convert 1108 algorithm. Separately, the moving image 1124 is also
converted to Cartesian coordinates using the 3D Scan Convert 1128
algorithm.
Next, the edges of the amniotic fluid regions on the fixed and
moving images are determined and converted into point sets p and q
respectively by a 3D edge detection process 1112 and 1132. Also,
the fixed image point set, p, undergoes a 3D distance transform
process 1116 which maps every voxel in a 3D image to a number
representing the distance to the closest edge point in p.
Pre-computing this distance transform makes subsequent distance
calculations and closest point determinations very efficient.
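The pre-computed 3D distance transform of step 1116 can be sketched with SciPy's Euclidean distance transform. This is an illustrative stand-in, assuming the fixed-image edge point set p is available as a boolean mask; the function name is the editor's, and `distance_transform_edt` with `return_indices` is a SciPy facility, not the patent's disclosed code.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_distance_map(edge_mask):
    """Map every voxel to its distance from the nearest edge voxel in p.

    edge_mask: boolean array, True at fixed-image edge points (the set p).
    distance_transform_edt measures distance to the nearest zero-valued
    element, so the mask is inverted before the transform. The returned
    indices give, for each voxel, the coordinates of its nearest edge
    voxel, making later closest-point lookups a constant-time read.
    """
    dist, indices = distance_transform_edt(~edge_mask, return_indices=True)
    return dist, indices
```

Pre-computing both arrays once per fixed image is what makes the repeated closest-point queries of the registration loop efficient.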
[0242] Next, the known initial transform 1136, for example, (6, 0,
0) for the Cartesian T.sub.x, T.sub.y, T.sub.z terms and (0, 0, 0)
for the .theta..sub.x, .theta..sub.y, .theta..sub.z Euler angle
terms for an inter-transceiver interval of 6 cm, is subsequently
applied to the moving image by the Apply Transform 1140 step. This
transformed image is then compared to the fixed image to examine
for the quantitative occurrence of overlapping voxels. If the
overlap is less than 20%, there are not enough common voxels
available for registration and the initial transform is considered
sufficient for fusing at step 1016.
[0243] If the voxel sets overlapped by the initial transform
exceed 20% of the fixed image p voxel sets, the q-voxels of the
initial transform are subjected to an iterative sequence of rigid
registration.
[0244] A transformation T serves to register a first voxel point
set p from the first image cone by merging or overlapping a second
voxel point set q from a second image cone that is common to p of
the first image cone. A point in the first voxel point set p may be
defined as p.sub.i=(x.sub.i, y.sub.i, z.sub.i) and a point in the
second voxel point set q may similarly be defined as
q.sub.j=(x.sub.j, y.sub.j, z.sub.j). If the first image cone is
considered to be a fixed landmark, then the T factor is applied to
align (translate and rotate) the moving voxel point set q onto the
fixed voxel point set p.
[0245] The precision of T is often affected by noise in the images
that accordingly affects the precision of t and R, and so the
variability of each voxel point set will in turn affect the overall
variability of each matrix equation set for each point. The
composite variability between the fixed voxel point set p and a
corresponding moving voxel point set q is defined to have a
cross-covariance matrix C.sub.pq, more fully described in equation
E8 as:
C.sub.pq=(1/n).SIGMA..sub.i=1.sup.n(p.sub.i- p)(q.sub.i- q).sup.T E8
[0246] where n is the number of points in each point set and p and
q are the centroids of the two voxel point sets. The strength of
the correlation between the two data sets is determined by
statistically analyzing the cross-covariance C.sub.pq. The
preferred embodiment uses a statistical process known as the
Singular Value Decomposition (SVD), originally developed by Eckart
and Young (G. Eckart and G. Young, 1936, The Approximation of One
Matrix by Another of Lower Rank, Psychometrika 1, 211-218). When
numerical data is organized into matrix form, the SVD is applied to
the matrix, and the resulting SVD values are used to solve for the
best-fitting rotation transform R to be applied to the moving voxel
point set q to align it with the fixed voxel point set p, acquiring
optimum overlapping accuracy of the pixels or voxels common to the
fixed and moving images.
[0247] Equation E9 gives the SVD value of the cross-covariance
C.sub.pq:
C.sub.pq=UDV.sup.t E9
[0248] where D is a 3.times.3 diagonal matrix and U and V are
orthogonal 3.times.3 matrices.
[0249] Equation E10 further defines the rotational R description of
the transformation T in terms of U and V orthogonal 3.times.3
matrices as:
R=UV.sup.T E10
[0250] Equation E11 further defines the translation transform t
description of the transformation T in terms of p, q and R as:
t= p-R q E11
[0251] Equations E8 through E11 present a method to determine the
rigid transformation between two point sets p and q--this process
corresponds to step 1152 in FIG. 17.
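The E8 through E11 computation can be sketched directly in NumPy. The function name and the reflection guard (forcing det R = +1 when the SVD of degenerate data yields an improper rotation) are the editor's additions; the cross-covariance, SVD, rotation, and translation steps themselves follow the equations in the text.

```python
import numpy as np

def rigid_from_correspondences(p, q):
    """Best rotation R and translation t aligning moving points q onto
    fixed points p, following equations E8-E11. p and q are (N, 3)
    arrays with row i of q corresponding to row i of p."""
    p_bar, q_bar = p.mean(axis=0), q.mean(axis=0)
    # E8: cross-covariance of the centered point sets.
    C = (p - p_bar).T @ (q - q_bar) / len(p)
    U, D, Vt = np.linalg.svd(C)           # E9: C = U D V^T
    R = U @ Vt                            # E10: R = U V^T
    # Guard (editor's addition): reject a reflection, det(R) = -1.
    if np.linalg.det(R) < 0:
        U[:, -1] *= -1
        R = U @ Vt
    t = p_bar - R @ q_bar                 # E11: t = p_bar - R q_bar
    return R, t
```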
[0252] The steps of the registration algorithm are applied
iteratively until convergence. The iterative sequence includes a
Find Closest Points on Fixed Image 1148 step, a Determine New
Transform 1152 step, a Calculate Distances 1156 step, and Converged
decision 1160 step.
[0253] In the Find Closest Points on Fixed Image 1148 step,
corresponding q points are found for each point in the fixed set p.
Correspondence is defined by determining the closest edge point on
q to the edge point of p. The distance transform image helps locate
these closest points. Once the p and closest-q pixels are identified,
the Determine New Transform 1152 step calculates the rotation R via
SVD analysis using equations E8-E10 and translation transform t via
equation E11. If, at decision step 1160, the change in the average
closest point distance between two iterations is less than 5%, then
the predicted-q pixel candidates are considered converged and
suitable for receiving the transforms R and t to rigidly register
the moving image Transform 1136 onto the common voxels p of the 3D
Scan Converted 1108 image. At this point, the rigid registration
process is complete as closest proximity between voxel or pixel
sets has occurred between the fixed and moving images, and the
process continues with fusion at step 1016.
[0254] If, however, there is >5% change between the predicted-q
pixels and p pixels, another iteration cycle is applied via the
Apply Transform 1140 step to the Find Closest Points on Fixed Image
1148 step, and the result is cycled through the converged 1160
decision block. Usually 3 iterative cycles, though as many as 20,
are engaged until the transformation T is considered converged.
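The iterative loop of steps 1148-1160 can be sketched as follows. This is a minimal ICP illustration under stated assumptions: a k-d tree stands in for the pre-computed distance transform of step 1116, the transform update follows the SVD step of equations E8-E11, and the 5% stopping rule and 20-iteration cap follow the text; the function and parameter names are the editor's.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(p, q, R0, t0, max_iters=20, tol=0.05):
    """Iterated Closest Point sketch (after Besl & McKay 1992).

    p: fixed edge points (N x 3); q: moving edge points (M x 3).
    R0, t0: initial transform, e.g. the identity rotation and the
    known inter-site translation. Iterates until the average
    closest-point distance changes by less than tol (5%).
    """
    tree = cKDTree(p)            # stand-in for the distance transform
    R, t = R0, t0
    prev = np.inf
    for _ in range(max_iters):
        moved = q @ R.T + t
        dists, idx = tree.query(moved)   # closest p for each moved q
        avg = dists.mean()
        if np.isfinite(prev) and abs(prev - avg) <= tol * max(prev, 1e-12):
            break                        # converged (step 1160)
        prev = avg
        # Re-solve R and t from the current correspondences (E8-E11).
        pm = p[idx]
        p_bar, q_bar = pm.mean(axis=0), q.mean(axis=0)
        C = (pm - p_bar).T @ (q - q_bar) / len(q)
        U, _, Vt = np.linalg.svd(C)
        R = U @ Vt
        if np.linalg.det(R) < 0:         # reject a reflection
            U[:, -1] *= -1
            R = U @ Vt
        t = p_bar - R @ q_bar
    return R, t
```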
[0255] A representative example for the application of the
preferred embodiment for the registration and fusion of a moving
image onto a fixed image is shown in FIGS. 17A-17C.
[0256] FIG. 17A is a first measurement view of a fixed scanplane
1200A from a 3D data set measurement taken at a first site. A first
pixel set p consistent for the dark pixels of AFV is shown in a
region 1204A. The region 1204A has approximate x-y coordinates of
(150, 120) that are closest to the dark edge.
[0257] FIG. 17B is a second measurement view of a moving scanplane
1200B from a 3D data set measurement taken at a second site. A
second pixel set q consistent for the dark pixels of AFV is shown
in a region 1204B. The region 1204B has approximate x-y coordinates
of (50, 125) that are closest to the dark edge.
[0258] FIG. 17C is a composite image 1200C of the first (fixed)
1200A and second (moving) 1200B images in which the common pixels
1204B at approximate coordinates (50, 125) are aligned or
overlapped with the voxels 1204A at approximate coordinates (150,
120). That is, the region 1204B pixel set q is linearly and
rotationally transformed consistent with the closest edge selection
methodology as shown in FIGS. 13A and 13B from employing the 3D
Edge Detection 1112 step. The composite image 1200C is a mosaic
image from scanplanes having approximately the same .phi. and
rotation .theta. angles.
[0259] The registration and fusing of common pixel sets p and q
from scanplanes having approximately the same .phi. and rotation
.theta. angles can be repeated for other scanplanes in each 3D data
set taken at the first (fixed) and second (moving) anatomical
sites. For example, if the composite image 1200C above was for
scanplane #1, then the process may be repeated for the remaining
scanplanes #2-24 or #2-48 or greater as needed to capture a
completed uterine mosaic image. Thus an array similar to the 3D
array 240 from FIG. 5B is assembled, except this time the scanplane
array is made of composite images, each composited image belonging
to a scanplane having approximately the same .phi. and rotation
.theta. angles.
[0260] If third and fourth 3D data sets are taken, the respective
registration, fusing, and assembling into scanplane arrays of
composited images are undertaken with the same procedures. In this
case, the scanplane composite array, similar to the 3D array 240,
is composed of a greater number of registered and fused scanplane
mosaic images.
[0261] A representative example of the fusing of two moving images
onto a fixed image is shown in FIGS. 18A-18D.
[0262] FIG. 18A is a first view of a fixed scanplane 1220A. Region
1224A is identified as p voxels approximately at the coordinates
(150, 70).
[0263] FIG. 18B is a second view of a first moving scanplane 1220B
having some q voxels 1224B at x-y coordinates (300, 100) common
with the first measurement's p voxels at x-y coordinates (150, 70).
Another set of voxels 1234A is shown roughly near the intersection
of x-y coordinates (200, 125). As the transceiver 10 was moved only
translationally, the scanplane 1220B from the second site has
approximately the same tilt .phi. and rotation .theta. angles as
the fixed scanplane 1220A taken from the first lateral in-line
site.
[0264] FIG. 18C is a third view of a moving scanplane 1220C. A
region 1234B is identified as q voxels approximately at the x-y
coordinates (250, 100) that are common with the second view's q
voxels 1234A. The scanplane 1220C from the third lateral in-line
site has approximately the same tilt .phi. and rotation .theta.
angles as the fixed scanplane 1220A taken from the first lateral
in-line site and the first moving scanplane 1220B taken from the
second lateral in-line site.
[0265] FIG. 18D is a composite mosaic image 1220D of the first
(fixed) 1220A image, the second (moving) 1220B image, and the third
(moving) 1220C image representing the sequential alignment and
fusing of q voxel sets 1224B to 1224A, and 1234B with 1234A.
[0266] A fourth image similarly could be made to bring about a
4-image mosaic from scanplanes from a fourth 3D data set acquired
from the transceiver 10 taking measurements at a fourth anatomical
site where the fourth 3D data set is acquired with approximately
the same tilt .phi. and rotation .theta. angles.
[0267] The transceiver 10 is moved to different anatomical sites by
hand placement by an operator to collect 3D data sets. Such hand
placement could result in 3D data sets being acquired under
conditions in which the tilt .phi. and rotation .theta. angles are
not approximately equal, but differ enough to cause measurement
error requiring correction before the rigid registration 1012
algorithm can be used. In the event that the 3D data sets between
anatomical sites, either between a moving supine site and its
beginning fixed site, or between a moving lateral site and its
beginning fixed site, cannot be acquired with the tilt .phi. and
rotation .theta. angles being approximately the same, the built-in
accelerometer measures the changes in tilt .phi. and rotation
.theta. angles and compensates accordingly so that the acquired
moving images are presented as though they were acquired under
approximately equal tilt .phi. and rotation .theta. angle
conditions.
[0268] FIG. 19 illustrates a 6-section supine procedure to acquire
multiple image cones around the center point of a uterus of a
patient in a supine position. Each of the 6 segments is scanned in
the order indicated, starting with segment 1 on the lower right
side of the patient. The display on the scanner 10 is configured to
indicate how many segments have been scanned, so that the display
shows "0 of 6," "1 of 6," . . . "6 of 6." The scans are positioned
such that the lateral distances between each scanning position
(except between positions 3 and 4) are approximately 8
cm.
[0269] To repeat the scan, the top button of the scanner 10 is
repetitively depressed, so that it returns the scan to "0 of 6," to
permit a user to repeat all six scans again. Finally, the scanner
10 is returned to the cradle to upload the raw ultrasound data to a
computer, intranet, or the Internet as depicted in FIGS. 2C, 3, and 4
for algorithmic processing, as will be described in detail below.
Within a predetermined time period, a result is generated that
includes an estimate of the amniotic fluid volume.
[0270] As with the quadrant and the four in-line scancone measuring
methods described earlier, the six-segment procedure ensures that
the measurement process detects all amniotic fluid regions. The
transceiver 10 projects outgoing ultrasound signals, in this case
into the uterine region of a patient, at six anatomical locations,
and receives incoming echoes reflected back from the regions of
interest to the transceiver 10 positioned at a given anatomical
location. An array of scanplane images is obtained for each
anatomical location based upon the incoming echo signals. Image
enhanced and segmented regions for the scanplane images are
determined for each scanplane array, which may be a rotational,
wedge, or translationally configured scanplane array. The segmented
regions are used to align or register the different scancones into
one common coordinate system. Thereafter, the registered datasets
are merged with each other so that the total amniotic fluid volume
is computed from the resulting fused image.
[0271] FIG. 20 is a block diagrammatic overview of an algorithm for
the registration and correction processing of the 6-section
multiple image cone data sets depicted in FIG. 19. A six-section
algorithm overview 1000A includes many of the same blocks of
algorithm overview 1000 depicted in FIG. 15. However, the
segmentation registration procedures are modified for the 6-section
multiple image cones. In the algorithm overview 1000A, the
subprocesses include the InputCones block 1004, an Image
Enhancement and Segmentation block 1010, a RigidRegistration block
1014, the FuseData block 1016, and the CalculateVolume block 1020.
Generally, the Image Enhancement and Segmentation block 1010 reduces
the effects of noise, which may include speckle noise, in the data
while preserving the salient edges on the image. The enhanced
images are then segmented by an edge-based and intensity-based
method, and the results of each segmentation method are then
subsequently combined. The results of the combined segmentation
method are then cleaned up to fill gaps and to remove outliers. The
area and/or the volume of the segmented regions is then
computed.
[0272] FIG. 21 is a more detailed view of the Image Enhancement and
Segmentation block 1010 of FIG. 20. Very similar to the algorithm
processes of Image Enhancement 418, Intensity-based segmentation
422, and Edge-based segmentation 438 explained for FIG. 7, the
enhancement-segmentation block 1010 begins with an input data block
1010A2, wherein the signals of pixel image data are subjected to a
blurring and speckle removal process followed by a sharpening or
deblurring process. The combination of blurring and speckle removal
followed by sharpening or deblurring enhances the appearance of the
pixel-based input image.
[0273] The blurring and deblurring is achieved by a combination of
heat and shock filters. The inputted pixel related data from
process 1010A2 is first subjected to a heat filter process block
1010A4. The heat filter block 1010A4 applies a Laplacian-based
filtering that reduces the speckle noise and smooths or otherwise
blurs the edges in the image. The heat filter block
1010A4 is modified via a user-determined stored data block 1010A6
wherein the number of heat filter iterations and step sizes are
defined by the user and are applied to the inputted data 1010A2 in
the heat filter process block 1010A4. The effect of heat iteration
number in progressively blurring and removing speckle from an
original image as the number of iteration cycles is increased is
shown in FIG. 23. Once the pixel image data has been heat filter
processed, the pixel image data is further processed by a shock
filter block 1010A8. The shock filter block 1010A8 is subjected to
a user-determined stored data block 1010A10 wherein the number of
shock filter iterations, step sizes, and gradient threshold are
specified by the user. The foregoing values are then applied to
heat filtered pixel data in the shock filter block 1010A8. The
effect of shock iteration number, step sizes, and gradient
thresholds in reducing the blurring is seen in signal plots (a) and
(b) of FIG. 24. Thereafter, the heat- and shock-filtered pixel data
is processed in parallel along two algorithm pathways, as defined by
blocks 1010B2-6 (Intensity-Based Segmentation Group) and blocks
1010C2-4 (Edge-Based Segmentation Group).
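The heat and shock filtering described above can be sketched as follows. This is a minimal illustration assuming a 5-point discrete Laplacian with replicated borders and an Osher-Rudin style shock update; the function names, step sizes, and iteration counts are illustrative defaults, not the user-specified values of blocks 1010A6 and 1010A10.

```python
import numpy as np

def laplacian(u):
    """Discrete 5-point Laplacian with replicated (edge) boundaries."""
    up = np.pad(u, 1, mode='edge')
    return (up[:-2, 1:-1] + up[2:, 1:-1]
            + up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u)

def heat_filter(u, iterations=10, dt=0.1):
    """Laplacian-based diffusion: blurs edges and suppresses speckle.
    dt <= 0.25 keeps the explicit 2D update stable."""
    u = u.astype(float)
    for _ in range(iterations):
        u = u + dt * laplacian(u)
    return u

def shock_filter(u, iterations=10, dt=0.1, grad_thresh=0.0):
    """Osher-Rudin style shock filter: re-sharpens blurred edges by
    moving intensity against the sign of the Laplacian. grad_thresh
    suppresses sharpening where the gradient is negligible."""
    u = u.astype(float)
    for _ in range(iterations):
        gy, gx = np.gradient(u)
        mag = np.hypot(gx, gy)
        mag[mag < grad_thresh] = 0.0
        u = u - dt * np.sign(laplacian(u)) * mag
    return u
```

Applying `heat_filter` and then `shock_filter` mirrors the blur-then-deblur enhancement sequence of blocks 1010A4 and 1010A8.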
[0274] The Intensity-based Segmentation relies on the observation
that amniotic fluid is usually darker than the rest of the image.
Pixels associated with fluids are classified based upon a threshold
intensity level. Thus pixels below this intensity threshold level
are interpreted as fluid, and pixels above this intensity threshold
are interpreted as solid or non-fluid tissues. However, pixel
values within a dataset can vary widely, so a means to
automatically determine a threshold level within a given dataset is
required in order to distinguish between fluid and non-fluid
pixels. The intensity-based segmentation is divided into three
steps. A first step includes estimating the fetal body and shadow
regions, a second step includes determining an automatic
thresholding for the fluid region after removing the body region,
and a third step includes removing the shadow and fetal body
regions from the potential fluid regions.
[0275] The Intensity-Based Segmentation Group includes a fetal body
region block 1010B2, wherein an estimate of the fetal shadow and
body regions is obtained. Generally, the fetal body regions in
ultrasound images appear bright and are relatively easily detected.
Anterior bright regions typically correspond to the dome
reverberation of the transceiver 10, and the darker appearing
uterus is easily discerned against the bright pixel regions formed
by the more echogenic fetal body that commonly appears posterior to
the amniotic fluid region. In fetal body region block 1010B2, the
fetal body and shadow is found in scanlines that extend between the
bright dome reverberation region and the posterior bright-appearing
fetal body. The magnitude of the fetal body and shadow region
estimate is then modified by a user-determined input parameter
stored in a body threshold data block 1010B4, wherein a pixel value
is chosen by the user. For example, a pixel value of 40 may be
selected by the user.
An example of the image obtained from blocks 1010B2-4 is panel (c)
of FIG. 25. Once the fetal body regions and the shadow have been
estimated, an automatic region threshold block 1010B6 is applied to
this estimate to determine which pixels are fluid related and which
pixels are non-fluid related. The automatic region threshold block
1010B6 uses a version of the Otsu algorithm (R M Haralick and L G
Shapiro, Computer and Robot Vision, vol. 1, Addison Wesley 1992,
page 11, incorporated by reference). Briefly, and in general terms,
the Otsu algorithm determines a threshold value from an assumed
bimodal pixel value histogram that generally corresponds to fluid
and some soft tissue (non-fluid) such as placental or other fetal
or maternal soft tissue. All pixel values less than the threshold
value as determined by the Otsu algorithm are designated as
potential fluid pixels. Using the Otsu algorithm determined
threshold value, the first pathway is completed by removing body
regions above this threshold value in block 1010B8 so that the
amniotic fluid regions are isolated. An example of the effect of
the Intensity-based segmentation group is shown in panel (d) of
FIG. 25. The isolated amniotic fluid region image thus obtained
from the intensity-based segmentation process is then processed for
subsequent combination with the end result of the second edge-based
segmentation method.
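The automatic region thresholding of block 1010B6 can be illustrated with a minimal sketch of Otsu's method. Python is used here purely for illustration (the patent's reference implementation is the Matlab code of Appendix 2), and the function name and 256-level histogram are assumptions of this sketch:

```python
def otsu_threshold(pixels, levels=256):
    """Return the Otsu threshold for a flat list of integer pixel values.

    Chooses the level that maximizes between-class variance, assuming a
    roughly bimodal histogram (dark fluid vs. brighter non-fluid tissue).
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0   # running intensity sum of the below-threshold class
    w_bg = 0       # running pixel count of the below-threshold class
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

As in block 1010B8, pixels at or below the returned threshold would be designated potential fluid pixels.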
[0276] Referring now to the second pathway or the Edge-Based
Segmentation Group, the procedural blocks find pixel points on an
image having high spatial gradient magnitudes. The edge-based
segmentation process begins processing the shock filtered 1010A8
pixel data via a spatial gradients block 1010C2 in which the
gradient magnitude of a given pixel neighborhood within the image
is determined. The gradient magnitude is computed by taking the X
and Y derivatives using the difference kernels shown in FIG.
26. The gradient magnitude of the image is given by Equation
E7:
$\|\nabla I\| = \sqrt{I_x^2 + I_y^2}$
$I_x = I * K_x$
$I_y = I * K_y$ (E7)
[0277] where * is the convolution operator.
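Equation E7 can be sketched as follows (illustrative Python; the patent's implementation is in Matlab). The specific kernels of FIG. 26 are not reproduced here, so simple forward differences are assumed as stand-ins for K.sub.x and K.sub.y:

```python
import math

def gradient_magnitude(img):
    """Per-pixel gradient magnitude ||grad I|| = sqrt(Ix^2 + Iy^2), with
    Ix = I * Kx and Iy = I * Ky, using forward-difference kernels
    Kx = [-1, 1] and Ky = [-1, 1]^T as minimal stand-ins for FIG. 26."""
    rows, cols = len(img), len(img[0])
    mag = [[0.0] * cols for _ in range(rows)]
    for y in range(rows - 1):
        for x in range(cols - 1):
            ix = img[y][x + 1] - img[y][x]   # convolution with Kx
            iy = img[y + 1][x] - img[y][x]   # convolution with Ky
            mag[y][x] = math.sqrt(ix * ix + iy * iy)
    return mag
```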
[0278] Once the gradient magnitude is determined, pixel edge points
are determined by a hysteresis threshold of gradients process block
1010C4. In block 1010C4, a lower and upper threshold value is
selected. The image is then thresholded using the lower value and a
connected component labeling is carried out on the resulting image.
The pixel value of each connected component is measured to
determine which pixel edge points have gradient magnitude pixel
values equal to or greater than the upper threshold value. Those
pixel edge points having gradient magnitude pixel values equal to
or exceeding the upper threshold are retained. This retention of
pixels having strong gradient values serves to retain selected long
connected edges which have one or more high gradient points.
[0279] Thereafter, the image is thresholded using the upper value,
and a connected component labeling is carried out on the resulting
image. The hysteresis threshold 1010C4 is modified by a
user-determined edge threshold block 1010C6. An example of an
application of the second pathway is shown in panel (b) for the
spatial gradients block 1010C2 and panel (c) for the threshold of
gradients process block 1010C4 of FIG. 27. Another example of
application of the edge detection block group for blocks 1010C2 and
1010C4 can also be seen in panel (e) of FIG. 25.
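The hysteresis thresholding of block 1010C4 can be sketched as follows (illustrative Python, not the patent's Matlab code; 4-connectivity flood fill is an assumption of this sketch). A connected component of weak (lower-threshold) pixels is retained whenever it contains at least one strong (upper-threshold) pixel, so long connected edges with one or more high-gradient points survive:

```python
def hysteresis_threshold(grad, low, high):
    """Keep pixels >= low that belong to a connected component containing
    at least one pixel >= high (4-connectivity flood fill)."""
    rows, cols = len(grad), len(grad[0])
    keep = [[False] * cols for _ in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    for sy in range(rows):
        for sx in range(cols):
            if seen[sy][sx] or grad[sy][sx] < low:
                continue
            # flood-fill one lower-threshold connected component
            stack, comp, strong = [(sy, sx)], [], False
            seen[sy][sx] = True
            while stack:
                y, x = stack.pop()
                comp.append((y, x))
                if grad[y][x] >= high:
                    strong = True
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < rows and 0 <= nx < cols \
                            and not seen[ny][nx] and grad[ny][nx] >= low:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            if strong:                 # retain the whole connected edge
                for y, x in comp:
                    keep[y][x] = True
    return keep
```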
[0280] Referring again to FIG. 21, the first and second pathways
are merged at a combine region and edges process block 1010D2. The
combining process avoids erroneous segmentation arising from either
the intensity-based or edge-based segmentation processes. The goal
of the combining process is to ensure that good edges are reliably
identified so that fluid regions are bounded by strong edges.
Intensity-based segmentation may underestimate fluid volume, so
that the boundaries need to be corrected using the edge-based
segmentation information. In block 1010D2, the beginning and end of
each scanline within the segmented region is determined by
searching for edge pixels on each scanline. If no edge pixels are
found in the search region, the segmentation on that scanline is
removed. If edge pixels are found, then the region boundary
locations are moved to the location of these edge pixels. Panel (f)
of FIG. 25 illustrates the effects of the combining block
1010D2.
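The scanline-wise combination of block 1010D2 can be sketched as follows (illustrative Python; a deliberately simplified row-wise version in which each image row stands in for a scanline, and boundaries snap to the first and last edge pixels rather than to edge pixels searched near the existing boundary):

```python
def combine_region_and_edges(region, edges):
    """For each scanline (row), keep the intensity-based segmentation only
    when edge pixels exist there, and snap the region's start and end to
    the first and last edge pixels (a 1-D sketch of block 1010D2)."""
    out = []
    for seg_row, edge_row in zip(region, edges):
        edge_idx = [i for i, e in enumerate(edge_row) if e]
        if not any(seg_row) or not edge_idx:
            out.append([0] * len(seg_row))   # no edges: remove segmentation
            continue
        start, end = edge_idx[0], edge_idx[-1]
        out.append([1 if start <= i <= end else 0
                    for i in range(len(seg_row))])
    return out
```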
[0281] The segmentation resulting from the combination of region
and edge information occasionally includes extraneous regions or
even holes. A cleanup stage helps ensure consistency of segmented
regions in a single scanplane and between scanplanes. The cleanup
stage uses morphological operators (such as erosion, dilation,
opening, closing) using the Markov Random Fields (MRFs) as
disclosed in Forbes et al. (Florence Forbes and Adrian E. Raftery,
"Bayesian morphology: Fast Unsupervised Bayesian Image Analysis,"
Journal of the American Statistical Association, June 1999, herein
incorporated by reference). The combined segmentation images
receive the MRF-based cleanup by being subjected to an In-plane
Closing and Opening process block 1010D4. The In-plane
opening-closing block 1010D4 is a morphological operator wherein
pixel regions are opened to remove pixel outliers from the
segmented region, and gaps and holes in the segmented region are
filled in or "closed" within a given scanplane. Block 1010D4
uses a one-dimensional structuring
element extending through five scanlines. The closing-opening block
is affected by a user-determined width, height, and depth parameter
block 1010D6. Thereafter, an Out-of-plane Closing and Opening
processing block 1010D8 is applied. The block 1010D8 applies a set
of out-of-plane morphological closings and openings using a
one-dimensional structuring element extending through three
scanlines. Pixel inconsistencies are accordingly removed between
the scanplanes. Panel (g) of FIG. 25 illustrates the effects of the
blocks 1010D4-8.
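The one-dimensional morphological operators of blocks 1010D4-8 can be sketched along a single binary scanline as follows (illustrative Python; boundary windows are simply truncated here, which is an assumption of this sketch rather than the patent's handling):

```python
def dilate(bits, k):
    """1-D binary dilation with a length-k structuring element."""
    r = k // 2
    return [1 if any(bits[max(0, i - r):i + r + 1]) else 0
            for i in range(len(bits))]

def erode(bits, k):
    """1-D binary erosion with a length-k structuring element."""
    r = k // 2
    return [1 if all(bits[max(0, i - r):i + r + 1]) else 0
            for i in range(len(bits))]

def closing(bits, k):
    """Dilation then erosion: fills in gaps and holes."""
    return erode(dilate(bits, k), k)

def opening(bits, k):
    """Erosion then dilation: removes pixel outliers."""
    return dilate(erode(bits, k), k)
```

In the patent, the in-plane structuring element extends through five scanlines (block 1010D4) and the out-of-plane element through three scanlines (block 1010D8).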
[0282] FIG. 22 is an expansion of the RigidRegistration block 1014
of FIG. 20. Similar in purpose and general operation to the
previously described ICP algorithm as used in the RigidRegistration
block 1012 of FIG. 16, the block 1014 begins with parallel inputs
of a Fixed Image 1014A, a Moving Image 1014B, and an Initial
Transform input 1014B10.
[0283] The steps of the rigid registration algorithm 1014 correct
any overlaps between adjacent 3D scan cones acquired in the
6-section supine grid procedure. The rigid algorithm 1014 first
converts the fixed image 1014A2 from polar coordinate terms to
Cartesian coordinate terms using the 3D Scan Convert 1014A4
algorithm. Separately, the moving image 1014B2 is also converted to
Cartesian coordinates using the 3D Scan Convert 1014B4 algorithm.
Next, the edges of the amniotic fluid regions on the fixed and
moving images are determined and converted into point sets p and q,
respectively by a 3D edge detection process 1014A6 and 1014B6.
Also, the fixed image point set, p, undergoes a 3D distance
transform process 1014B8 which maps every voxel in a 3D image to a
number representing the distance to the closest edge point in p.
Pre-computing this distance transform makes subsequent distance
calculations and closest point determinations very efficient.
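The precomputed distance transform can be sketched as a brute-force lookup table (illustrative Python for a small volume; the function name and dictionary representation are assumptions of this sketch, and a production implementation would use a fast chamfer or Euclidean distance transform such as scipy.ndimage.distance_transform_edt):

```python
import math

def distance_transform(shape, edge_points):
    """Map every voxel of an (nx, ny, nz) volume to the distance of the
    closest edge point in the set p (a stand-in for block 1014B8).
    Precomputing this table turns later closest-distance queries into
    O(1) lookups."""
    nx, ny, nz = shape
    dist = {}
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                dist[(x, y, z)] = min(
                    math.dist((x, y, z), q) for q in edge_points)
    return dist
```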
[0284] Next, the known initial transform 1014B10, for example, (6,
0, 0) for the Cartesian T.sub.x, T.sub.y, T.sub.z terms and (0, 0,
0) for the .theta..sub.x, .theta..sub.y, .theta..sub.z Euler angle
terms, for an inter-transceiver interval of 6 cm, is subsequently
applied to the moving image by the transform edges 1014B8 block.
This transformed image is then subjected to the Find Closest Points
on Fixed Image block 1014C2, similar in operation to the block 1148
of FIG. 16. Thereafter, a new transform is determined in block
1014C4, and the new transform is queried for convergence at
decision diamond 1014C8. If convergence is attained, the
RigidRegistration 1014 is done at terminus 1014C10. Alternatively,
if convergence is not attained, then a return to the transform
edges block 1014B8 occurs to start another iterative cycle.
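The iterative loop of blocks 1014B8-1014C8 can be sketched in a deliberately reduced form (illustrative Python; translation-only and two-dimensional, whereas the patent estimates a full rigid transform with Cartesian and Euler angle terms, and uses the precomputed distance transform rather than the brute-force closest-point search shown here):

```python
def rigid_register(fixed_pts, moving_pts, init_shift, max_iter=20, tol=1e-6):
    """Translation-only ICP sketch: transform the moving edges, find the
    closest fixed points (block 1014C2), re-estimate the transform
    (block 1014C4), and stop when the update converges (diamond 1014C8,
    typically in fewer than 20 iterations)."""
    shift = list(init_shift)
    for _ in range(max_iter):
        moved = [(x + shift[0], y + shift[1]) for x, y in moving_pts]
        # closest fixed point for each transformed moving point
        pairs = [min(fixed_pts,
                     key=lambda f, m=m: (f[0] - m[0]) ** 2 + (f[1] - m[1]) ** 2)
                 for m in moved]
        # new transform update = mean displacement to the matched points
        dx = sum(f[0] - m[0] for f, m in zip(pairs, moved)) / len(moved)
        dy = sum(f[1] - m[1] for f, m in zip(pairs, moved)) / len(moved)
        shift[0] += dx
        shift[1] += dy
        if dx * dx + dy * dy < tol:      # convergence test
            break
    return tuple(shift)
```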
[0285] The RigidRegistration block 1014 typically converges in less
than 20 iterations. After applying the initial transformation, the
entire registration process is carried out in case there are any
overlapping segmented regions between any two images. Similar to
the process described in connection with FIG. 16, an overlap
threshold of approximately 20% is currently set as an input
parameter.
[0286] FIG. 23 is a 4-panel image set that shows the effect of
multiple iterations of the heat filter applied to an original
image. The effect of heat filter iteration number in progressively
blurring and removing speckle from an original image as the number
of iterations increases is shown in FIG. 23. In this case the heat
filter is described by process blocks 1010A4 and A6 of FIG. 21. In
this example, an original image of a bladder is shown in panel (a)
having visible speckle spread throughout the image. Some blurring
is seen with the 10 iteration image in panel (b), followed by more
progressive blurring at 50 iterations in panel (c) and 100
iterations in panel (d). As the blurring increases with iteration
number, the speckle progressively decreases.
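The progressive blurring of FIG. 23 can be sketched with a one-dimensional heat (diffusion) filter (illustrative Python; the step size and the fixed-endpoint boundary handling are assumptions of this sketch, not the patent's Matlab implementation):

```python
def heat_filter(signal, iterations, step=0.25):
    """1-D heat (diffusion) filter: each iteration nudges every interior
    sample toward the average of its neighbours, so speckle-like
    oscillations are progressively blurred away as the iteration count
    grows (the effect shown across panels (a)-(d) of FIG. 23)."""
    s = list(signal)
    for _ in range(iterations):
        nxt = s[:]
        for i in range(1, len(s) - 1):
            laplacian = s[i - 1] - 2 * s[i] + s[i + 1]
            nxt[i] = s[i] + step * laplacian
        s = nxt
    return s
```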
[0287] FIG. 24 shows the effect of shock filtering and a
combination heat-and-shock filtering to the pixel values of the
image. The effect of shock iteration number, step sizes, and
gradient thresholds on the blurring of a heat filter is seen in
ultrasound signal plots (a) and (b) of FIG. 24. Signal plot (a)
depicts a smoothed or blurred signal gradient as a sigmoidal long
dashed line that is subsequently shock filtered. As can be seen by
the more abrupt or steep stepped signal plots after shock
filtering, the magnitude of the shock filtered signal (short dashed
line) approaches that of the original signal (solid line) without
the choppy or noisy pattern associated with speckle. For the most
part there is virtually a complete overlap of the shock filtered
signal with the original signal through the pixel plot range.
[0288] Similarly, ultrasound signal plot (b) depicts the effects of
applying a shock filter to a noisy (speckle rich) signal line
(sinuous long dash line) that has been smoothed or blurred by the
heat filter (short dashed line with sigmoidal appearance). In
operation the shock filter results in a general deblurring or
sharpening of the edges of the image that were previously blurred.
Adjacent to, but not entirely overlapping with, the original
signal (solid line) throughout the pixel plot range, the shock
filtered plot substantially overlaps the vertical portion of the
original signal, but stays elevated in the low and high pixel
ranges. As in (a), a more abrupt or steep stepped signal plot
after shock filtering is obtained without significant removal of
speckle. Dependent on the gradient threshold, step size, and
iteration number imposed by block 1010A10 upon shock block 1010A8,
different levels of overlap between the shock filtered line and the
original are obtained.
[0289] FIG. 25 is a 7-panel image set generated by the image
enhancement and segmentation algorithms of FIG. 21. Panel (a) is an
image of the original uterine image. Panel (b) is the image that is
produced from the image enhancement processes primarily described
in blocks 1010A4-6 (heat filters) and blocks 1010A8-10 (shock
filters) of FIG. 21. Panel (c) shows the effects of the processing
obtained from blocks 1010B2-4 (Estimate Shadow and Fetal Body
Regions/Body Threshold). Panel (d) is the image when processed by
the Intensity-Based Segmentation Block Group 1010B2-8. Panel (e)
results from application of the Edge-Based Segmentation Block Group
1010C2-6. Thereafter, the two Intensity-based and Edge-based block
groups are combined (combining block 1010D2) to result in the image
shown in panel (f). Panel (g) illustrates the effects of the blocks
In-plane and Out-of-plane opening and closing processing blocks
1010D4-8.
[0290] FIG. 26 is a pixel difference kernel for obtaining X and Y
derivatives to determine pixel gradient magnitudes for edge-based
segmentation. As illustrated, a simplest-case convolution is
obtained for a first derivative computation, where K.sub.x and
K.sub.y are the convolution kernels.
[0291] FIG. 27 is a 3-panel image set showing the progressive
demarcation or edge detection of organ wall interfaces arising from
edge-based segmentation algorithms. Panel (a) is the enhanced input
image. Panel (b) is the image result when the enhanced input image
is subjected to the spatial gradients block 1010C2. Panel (c) is
the image result when the enhanced and spatial gradients 1010C2
processed image is further processed by the threshold of gradients
process block 1010C4.
[0292] Demonstrations of the algorithmic manipulation of pixels of
the present invention are provided in Appendix 1: Examples of
Algorithmic Steps. Source code of the algorithms of the present
invention is provided in Appendix 2: Matlab Source Code.
[0293] While the preferred embodiment of the invention has been
illustrated and described, as noted above, many changes can be made
without departing from the spirit and scope of the invention. For
example, other uses of the invention include determining the areas
and volumes of the prostate, heart, bladder, and other organs and
body regions of clinical interest. Accordingly, the scope of the
invention is not limited by the disclosure of the preferred
embodiment.
[0294] Systems, methods, and devices for image clarity of
ultrasound-based images are described and illustrated in the
following figures. The clarity of ultrasound imaging requires the
efficient coordination of ultrasound transfer or communication to
and from an examined subject, image acquisition from the
communicated ultrasound, and microprocessor based image processing.
Oftentimes the examined subject moves while image acquisition
occurs, the ultrasound transducer moves, and/or movement occurs
within the scanned region of interest; such motion requires the
refinements described below to secure clear images.
[0295] The ultrasound transceivers or DCD devices developed by
Diagnostic Ultrasound are capable of collecting in vivo
three-dimensional (3-D) cone-shaped ultrasound images of a patient.
Based on these 3-D ultrasound images, various applications have
been developed such as bladder volume and mass estimation.
[0296] During the data collection process initiated by DCD, a
pulsed ultrasound field is transmitted into the body, and the
back-scattered "echoes" are detected as a one-dimensional (1-D)
voltage trace, which is also referred to as a RF line. After
envelope detection, a set of 1-D data samples is interpolated to
form a two-dimensional (2-D) or 3-D ultrasound image.
[0297] FIGS. 1A-D depict a partial schematic and a partial
isometric view of a transceiver, a scan cone comprising a
rotational array of scan planes, and a scan plane of the array of
various ultrasound harmonic imaging systems 60A-D illustrated in
FIGS. 3 and 4 below.
[0298] FIG. 1A is a side elevation view of an ultrasound
transceiver 10A that includes an inertial reference unit, according
to an embodiment of the invention. The transceiver 10A includes a
transceiver housing 18 having an outwardly extending handle 12
suitably configured to allow a user to manipulate the transceiver
10A relative to a patient. The handle 12 includes a trigger 14 that
allows the user to initiate an ultrasound scan of a selected
anatomical portion, and a cavity selector 16. The cavity selector
16 will be described in greater detail below. The transceiver 10A
also includes a transceiver dome 20 that contacts a surface portion
of the patient when the selected anatomical portion is scanned. The
dome 20 generally provides an appropriate acoustical impedance
match to the anatomical portion and/or permits ultrasound energy to
be properly focused as it is projected into the anatomical portion.
The transceiver 10A further includes one, or preferably an array of
separately excitable ultrasound transducer elements (not shown in
FIG. 1A) positioned within or otherwise adjacent to the housing
18. The transducer elements may be suitably positioned within the
housing 18 or otherwise to project ultrasound energy outwardly from
the dome 20, and to permit reception of acoustic reflections
generated by internal structures within the anatomical portion. The
one or more arrays of ultrasound elements may include a
one-dimensional, or a two-dimensional array of piezoelectric
elements that may be moved within the housing 18 by a motor.
Alternately, the array may be stationary with respect to the
housing 18 so that the selected anatomical region may be scanned by
selectively energizing the elements in the array.
[0299] A directional indicator panel 22 includes a plurality of
arrows that may be illuminated for initial targeting and for
guiding a user in targeting an organ or structure within an ROI. In
particular embodiments, if the organ or structure is centered when
the transceiver 10A is acoustically placed against the dermal
surface at a first location of the subject, the directional arrows
may not be illuminated. If the organ is off-center, an arrow or set
of arrows may be illuminated to direct the user to reposition the
transceiver 10A acoustically at a second or subsequent dermal
location of the subject. The acoustic coupling may be achieved by
liquid sonic gel applied to the skin of the patient or by sonic gel
pads against which the transceiver dome 20 is placed. The
directional indicator panel 22 may be presented
on the display 54 of computer 52 in harmonic imaging subsystems
described in FIGS. 3 and 4 below, or alternatively, presented on
the transceiver display 24.
[0300] Transceiver 10A includes an inertial reference unit that
includes an accelerometer 22 and/or gyroscope 23 positioned
preferably within or adjacent to housing 18. The accelerometer 22
may be operable to sense an acceleration of the transceiver 10A,
preferably relative to a coordinate system, while the gyroscope 23
may be operable to sense an angular velocity of the transceiver 10A
relative to the same or another coordinate system. Accordingly, the
gyroscope 23 may be of conventional configuration that employs
dynamic elements, or it may be an optoelectronic device, such as
the known optical ring gyroscope. In one embodiment, the
accelerometer 22 and the gyroscope 23 may include a commonly
packaged and/or solid-state device. One suitable commonly packaged
device may be the MT6 miniature inertial measurement unit,
available from Omni Instruments, Incorporated, although other
suitable alternatives exist. In other embodiments, the
accelerometer 22 and/or the gyroscope 23 may include commonly
packaged micro-electromechanical system (MEMS) devices, which are
commercially available from MEMSense, Incorporated. As described in
greater detail below, the accelerometer 22 and the gyroscope 23
cooperatively permit the determination of positional and/or angular
changes relative to a known position that is proximate to an
anatomical region of interest in the patient. Other configurations
related to the accelerometer 22 and gyroscope 23 concerning
transceivers 10A,B equipped with inertial reference units and the
operations thereto may be obtained from copending U.S. patent
application Ser. No. 11/222,360 filed Sep. 8, 2005, herein
incorporated by reference.
[0301] The transceiver 10A includes (or is capable of being in
signal communication with) a display 24 operable to view processed
results from an ultrasound scan, and/or to allow an operational
interaction between the user and the transceiver 10A. For example,
the display 24 may be configured to display alphanumeric data that
indicates a proper and/or an optimal position of the transceiver
10A relative to the selected anatomical portion. Display 24 may be
used to view two- or three-dimensional images of the selected
anatomical region. Accordingly, the display 24 may be a liquid
crystal display (LCD), a light emitting diode (LED) display, a
cathode ray tube (CRT) display, or other suitable display devices
operable to present alphanumeric data and/or graphical images to a
user.
[0302] Still referring to FIG. 1A, a cavity selector 16 may be
operable to adjustably adapt the transmission and reception of
ultrasound signals to the anatomy of a selected patient. In
particular, the cavity selector 16 adapts the transceiver 10A to
accommodate various anatomical details of male and female patients.
For example, when the cavity selector 16 is adjusted to accommodate
a male patient, the transceiver 10A may be suitably configured to
locate a single cavity, such as a urinary bladder in the male
patient. In contrast, when the cavity selector 16 is adjusted to
accommodate a female patient, the transceiver 10A may be configured
to image an anatomical portion having multiple cavities, such as a
bodily region that includes a bladder and a uterus. Alternate
embodiments of the transceiver 10A may include a cavity selector 16
configured to select a single cavity scanning mode, or a multiple
cavity-scanning mode that may be used with male and/or female
patients. The cavity selector 16 may thus permit a single cavity
region to be imaged, or a multiple cavity region, such as a region
that includes a lung and a heart to be imaged.
[0303] To scan a selected anatomical portion of a patient, the
transceiver dome 20 of the transceiver 10A may be positioned
against a surface portion of a patient that is proximate to the
anatomical portion to be scanned. The user actuates the transceiver
10A by depressing the trigger 14. In response, the transceiver 10
transmits ultrasound signals into the body, and receives
corresponding return echo signals that may be at least partially
processed by the transceiver 10A to generate an ultrasound image of
the selected anatomical portion. In a particular embodiment, the
transceiver 10A transmits ultrasound signals in a range that
extends from approximately two megahertz (MHz) to approximately ten
MHz.
[0304] In one embodiment, the transceiver 10A may be operably
coupled to an ultrasound system that may be configured to generate
ultrasound energy at a predetermined frequency and/or pulse
repetition rate and to transfer the ultrasound energy to the
transceiver 10A. The system also includes a processor that may be
configured to process reflected ultrasound energy that is received
by the transceiver 10A to produce an image of the scanned
anatomical region. Accordingly, the system generally includes a
viewing device, such as a cathode ray tube (CRT), a liquid crystal
display (LCD), a plasma display device, or other similar display
devices, that may be used to view the generated image. The system
may also include one or more peripheral devices that cooperatively
assist the processor to control the operation of the transceiver
10A, such as a keyboard, a pointing device, or other similar devices.
In still another particular embodiment, the transceiver 10A may be
a self-contained device that includes a microprocessor positioned
within the housing 18 and software associated with the
microprocessor to operably control the transceiver 10A, and to
process the reflected ultrasound energy to generate the ultrasound
image. Accordingly, the display 24 may be used to display the
generated image and/or to view other information associated with
the operation of the transceiver 10A. For example, the information
may include alphanumeric data that indicates a preferred position
of the transceiver 10A prior to performing a series of scans. In
yet another particular embodiment, the transceiver 10A may be
operably coupled to a general-purpose computer, such as a laptop or
a desktop computer that includes software that at least partially
controls the operation of the transceiver 10A, and also includes
software to process information transferred from the transceiver
10A, so that an image of the scanned anatomical region may be
generated. The transceiver 10A may also be optionally equipped with
electrical contacts to make communication with receiving cradles 50
as discussed in FIGS. 3 and 4 below. Although transceiver 10A of
FIG. 1A may be used in any of the foregoing embodiments, other
transceivers may also be used. For example, the transceiver may
lack one or more features of the transceiver 10A. For example, a
suitable transceiver need not be a manually portable device, and/or
need not have a top-mounted display, and/or may selectively lack
other features or exhibit further differences.
[0305] Still referring to FIG. 1A, a plurality of scan planes is
depicted that forms a three-dimensional (3D) array having a
substantially conical shape. An ultrasound scan cone 40
formed by a rotational array of two-dimensional scan planes 42
projects outwardly from the dome 20 of the transceivers 10A. The
other transceiver embodiments 10B-10E may also be configured to
develop a scan cone 40 formed by a rotational array of
two-dimensional scan planes 42. The plurality of scan planes 42
may be oriented about an axis 11 extending through the transceivers
10A-10E. One or more, or preferably each of the scan planes 42 may
be positioned about the axis 11, preferably, but not necessarily at
a predetermined angular position .theta.. The scan planes 42 may be
mutually spaced apart by angles .theta..sub.1 and .theta..sub.2.
Correspondingly, the scan lines within each of the scan planes 42
may be spaced apart by angles .phi..sub.1 and .phi..sub.2. Although
the angles .theta..sub.1 and .theta..sub.2 are depicted as
approximately equal, it is understood that the angles .theta..sub.1
and .theta..sub.2 may have different values. Similarly, although
the angles .phi..sub.1 and .phi..sub.2 are shown as approximately
equal, the angles .phi..sub.1 and .phi..sub.2 may also have
different values. Other scan cone configurations are possible. For
example, a wedge-shaped scan cone, or other similar shapes may be
generated by the transceiver 10A, 10B and 10C.
[0306] FIG. 1B is a graphical representation of a scan plane 42.
The scan plane 42 includes the peripheral scan lines 44 and 46, and
an internal scan line 48 having a length r that extends outwardly
from the transceivers 10A-10E. Thus, a selected point along the
peripheral scan lines 44 and 46 and the internal scan line 48 may
be defined with reference to the distance r and angular coordinate
values .phi. and .theta.. The length r preferably extends to
approximately 18 to 20 centimeters (cm), although any length is
possible. Particular embodiments include approximately
seventy-seven scan lines 48 that extend outwardly from the dome 20,
although any number of scan lines is possible.
[0307] FIG. 1C is a graphical representation of a plurality of scan
lines emanating from a hand-held ultrasound transceiver forming a
single scan plane 42 extending through a cross-section of an
internal bodily organ. The number and location of the internal scan
lines emanating from the transceivers 10A-10E within a given scan
plane 42 may thus be distributed at different positional
coordinates about the axis line 11 as required to sufficiently
visualize structures or images within the scan plane 42. As shown,
four portions of an off-centered region-of-interest (ROI) are
exhibited as irregular regions 49. Three portions may be viewable
within the scan plane 42 in totality, and one may be truncated by
the peripheral scan line 44.
[0308] As described above, the angular movement of the transducer
may be mechanically effected and/or it may be electronically or
otherwise generated. In either case, the number of lines 48 and the
length of the lines may vary, so that the tilt angle .phi. sweeps
through angles approximately between -60.degree. and +60.degree.
for a total arc of approximately 120.degree.. In one particular
embodiment, the transceiver 10 may be configured to generate
approximately seventy-seven scan lines between the first
limiting scan line 44 and a second limiting scan line 46. In
another particular embodiment, each of the scan lines has a length
of approximately 18 to 20 centimeters (cm). The angular
separation between adjacent scan lines 48 (FIG. 1B) may be uniform
or non-uniform. For example, and in another particular embodiment,
the angular separation .phi..sub.1 and .phi..sub.2 (as shown in
FIG. 5C) may be about 1.5.degree.. Alternately, and in another
particular embodiment, the angular separation .phi..sub.1 and
.phi..sub.2 may be a sequence wherein adjacent angles may be
ordered to include angles of 1.5.degree., 6.8.degree.,
15.5.degree., 7.2.degree., and so on, where a 1.5.degree.
separation is between a first scan line and a second scan line, a
6.8.degree. separation is between the second scan line and a third
scan line, a 15.5.degree. separation is between the third scan line
and a fourth scan line, a 7.2.degree. separation is between the
fourth scan line and a fifth scan line, and so on. The angular
separation between adjacent scan lines may also be a combination of
uniform and non-uniform angular spacings, for example, a sequence
of angles may be ordered to include 1.5.degree., 1.5.degree.,
1.5.degree., 7.2.degree., 14.3.degree., 20.2.degree., 8.0.degree.,
8.0.degree., 8.0.degree., 4.3.degree., 7.8.degree., and so on.
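The uniform and non-uniform angular spacings described above can be sketched as a cumulative sum over the separation sequence (illustrative Python; the function name and the choice of the first limiting scan line at -60.degree. as the starting angle are assumptions of this sketch):

```python
def scanline_angles(separations, start=-60.0):
    """Absolute tilt angles (degrees) of successive scan lines, given
    the angular separations between adjacent lines and the first
    limiting scan line at -60 degrees (a sweep of roughly 120 degrees
    in total)."""
    angles = [start]
    for sep in separations:
        angles.append(angles[-1] + sep)
    return angles
```

For example, seventy-seven scan lines at a uniform 1.5.degree. separation sweep from -60.degree. to 54.degree.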
[0309] FIG. 1D is an isometric view of an ultrasound scan cone that
projects outwardly from the transceivers of FIGS. 1A-E.
Three-dimensional images of a region of interest may be presented
within a scan cone 40 that comprises a plurality of 2D images
formed in an array of scan planes 42. A dome cutout 41 that is
complementary to the dome 20 of the transceivers 10A-10E is shown
at the top of the scan cone 40.
[0310] FIG. 2 depicts a partial schematic and partial isometric and
side view of a transceiver, and a scan cone array comprised of
3D-distributed scan lines in alternate embodiment of an ultrasound
harmonic imaging system. A plurality of three-dimensional (3D)
distributed scan lines emanates from a transceiver and
cooperatively forms a scan cone 30. Each of the scan lines has a
length r that projects outwardly from the transceivers 10A-10E of
FIGS. 1A-1E. As illustrated, the transceiver 10A emits
3D-distributed scan lines within the scan cone 30 that may be
one-dimensional ultrasound A-lines. The other transceiver
embodiments 10B-10E may also be configured to emit 3D-distributed
scan lines. Taken as an aggregate, these 3D-distributed A-lines
define the conical shape of the scan cone 30. The ultrasound scan
cone 30 extends outwardly from the dome 20 of the transceiver 10A,
10B and 10C centered about an axis line 11. The 3D-distributed scan
lines of the scan cone 30 include a plurality of internal and
peripheral scan lines that may be distributed within a volume
defined by a perimeter of the scan cone 30. Accordingly, the
peripheral scan lines 31A-31F define an outer surface of the scan
cone 30, while the internal scan lines 34A-34C may be distributed
between the respective peripheral scan lines 31A-31F. Scan line 34B
may be generally collinear with the axis 11, and the scan cone 30
may be generally and coaxially centered on the axis line 11.
[0311] The locations of the internal and peripheral scan lines may
be further defined by an angular spacing from the center scan line
34B and between internal and peripheral scan lines. The angular
spacing between scan line 34B and peripheral or internal scan lines
may be designated by angle .PHI. and angular spacings between
internal or peripheral scan lines may be designated by angle
.THETA.. The angles .PHI..sub.1, .PHI..sub.2, and .PHI..sub.3
respectively define the angular spacings from scan line 34B to scan
lines 34A, 34C, and 31D. Similarly, angles .THETA..sub.1,
.THETA..sub.2, and .THETA..sub.3 respectively define the angular
spacings between scan lines 31B and 31C, 31C and 34A, and 31D and
31E.
[0312] With continued reference to FIG. 2, the plurality of
peripheral scan lines 31A-31F and the plurality of internal scan
lines 34A-34C may be three-dimensionally distributed A-lines (scan
lines) that are not necessarily confined within a scan plane, but
instead may sweep throughout the internal regions and along the
periphery of the scan cone 30. Thus, a given point within the scan
cone 30 may be identified by the coordinates r, .PHI., and .THETA.
whose
values generally vary. The number and location of the internal scan
lines emanating from the transceivers 10A-10E may thus be
distributed within the scan cone 30 at different positional
coordinates as required to sufficiently visualize structures or
images within a region of interest (ROI) in a patient. The angular
movement of the ultrasound transducer within the transceiver
10A-10E may be mechanically effected, and/or it may be
electronically generated. In any case, the number of lines and the
length of the lines may be uniform or otherwise vary, so that angle
.PHI. sweeps through angles approximately between -60.degree.
between scan line 34B and 31A, and +60.degree. between scan line
34B and 31B. Thus angle .PHI. in this example presents a total arc
of approximately 120.degree.. In one embodiment, the transceiver
10A, 10B and 10C may be configured to generate a plurality of
3D-distributed scan lines within the scan cone 30 having a length r
of approximately 18 to 20 centimeters (cm).
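The scan-line geometry above can be sketched in code. This sketch assumes a conventional spherical mapping in which phi is the polar angle measured from the central axis line 11 (scan line 34B) and theta is the azimuth around that axis; the patent does not spell out this mapping, so the convention here is illustrative:

```python
import math

def scan_point_to_cartesian(r, phi_deg, theta_deg):
    """Convert a point on a 3D-distributed scan line, given by range r and
    angles (phi, theta), to Cartesian coordinates.

    Assumes phi is the polar angle from the central axis (scan line 34B)
    and theta is the azimuth around it -- an illustrative convention, not
    one stated in the text.
    """
    phi = math.radians(phi_deg)
    theta = math.radians(theta_deg)
    x = r * math.sin(phi) * math.cos(theta)
    y = r * math.sin(phi) * math.sin(theta)
    z = r * math.cos(phi)  # depth along the central axis line 11
    return x, y, z
```

With this convention, the center scan line 34B (phi = 0) maps straight down the axis, and the sweep of phi from -60.degree. to +60.degree. covers the approximately 120.degree. arc described above.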
[0313] FIG. 3 is a schematic illustration of a server-accessed
local area network in communication with a plurality of ultrasound
harmonic imaging systems. An ultrasound harmonic imaging system 100
includes one or more personal computer devices 52 that may be
coupled to a server 56 by a communications system 55. The devices
52 may be, in turn, coupled to one or more ultrasound transceivers
10A and/or 10B, for examples the ultrasound harmonic sub-systems
60A-60D. Ultrasound based images of organs or other regions of
interest derived from either the signals of echoes from fundamental
frequency ultrasound and/or harmonics thereof, may be shown within
scan cone 30 or 40 presented on display 54. The server 56 may be
operable to provide additional processing of ultrasound
information, or it may be coupled to still other servers (not shown
in FIG. 3) and devices. Transceivers 10A or 10B may be in wireless
communication with computer 52 in sub-system 60A, in wired signal
communication in sub-system 60B, in wireless communication with
computer 52 via receiving cradle 50 in sub-system 60C, or in wired
communication with computer 52 via receiving cradle 50 in
sub-system 60D.
[0314] FIG. 4 is a schematic illustration of the Internet in
communication with a plurality of ultrasound harmonic imaging
systems. An Internet system 110 may be coupled or otherwise in
communication with the ultrasound harmonic sub-systems 60A-60D.
[0315] FIG. 5 schematically depicts a master method flow chart
algorithm 120 to acquire a clarity enhanced ultrasound image.
Algorithm 120 begins with process block 150, in which an acoustic
coupling or sonic gel is applied to the dermal surface near the
region-of-interest (ROI) using a degassing gel dispenser.
Embodiments illustrating the degassing gel dispenser and its uses
are depicted in FIGS. 36A-G below. After applying the sonic gel,
decision diamond 170 is reached with the query "Targeting a moving
structure?", and if negative to this query, algorithm 120 continues
to process block 200. At process block 200, the ultrasound
transceiver dome 20 of transceiver 10A,B is placed into the sonic
gel residing on the dermal surface, and pulsed ultrasound energy is
transmitted to the ROI. Thereafter, echoes of the fundamental
ultrasound frequency and/or harmonics thereof are captured by the
transceiver 10A,B and converted to echogenic signals. If the answer
to decision diamond is affirmative for targeting a moving structure
within the ROI, the ROI is re-targeted, at process block 300, using
optical flow real-time analysis.
[0316] Whether receiving echogenic signals from non-moving targets
within the ROI from processing block 200, or moving targets within
the ROI from process block 300, algorithm 120 continues with
processing blocks 400A or 400B. Processing blocks 400A and 400B
process echogenic datasets of the echogenic signals from process
blocks 200 and 300 using point spread function algorithms to
compensate for or otherwise suppress motion-induced reverberations
within the ROI echogenic data sets. Processing block 400A employs
nonparametric analysis, and processing block 400B employs
parametric analysis, as described in FIG. 9 below. Once motion
artifacts are corrected, algorithm 120 continues with processing
block 500 to segment image sections derived from the
distortion-compensated data sets. At process block 600, areas of
the segmented sections within 2D images and/or 3D volumes are
determined. Thereafter, master algorithm 120 completes at process
block 700, in which segmented structures within the static or
moving ROI are displayed along with any segmented section area
and/or volume measurements.
[0317] FIG. 6 is an expansion of sub-algorithm 150 of master
algorithm 120 of FIG. 5. Beginning from the entry point of master
algorithm 120, sub-algorithm 150 starts at process block 152
wherein a metered volume of sonic gel is applied from the
volume-controlled dispenser to the dermal surface believed to
overlap the ROI. Thereafter, at process block 154, any gas pockets
within the applied gel are expelled by a roller pressing action.
Sub-algorithm 150 is then completed and exits to sub-algorithm
200.
[0318] FIG. 7 is an expansion of sub-algorithms 200 of FIG. 5.
Entering from process block 154, sub-algorithm 200 starts at
process block 202 wherein the transceiver dome 20 of transceivers
10A,B are placed into the gas-purged sonic gel to get a firm sonic
coupling, and then at process block 206, pulsed frequency
ultrasound is transmitted to the underlying ROI. Thereafter, at
process block 210, ultrasound echoes from the ROI and any
intervening structure are collected by the transceivers 10A,B and
converted to echogenic data sets for presentation of an image of
the ROI. Once the image of the ROI is displayed, decision diamond
218 is reached with the query "Are structures of interest (SOI)
sufficiently in view within the ROI?", and if negative to this
query, sub-algorithm 200 continues to process block 222, in which
the transceiver is moved to a new anatomical location and re-routed
to process block 202, where the sonic coupling and pulsed
ultrasound transmission are repeated. If the answer to the decision
diamond
218 is affirmative for a sufficiently viewed SOI, sub-algorithm 200
continues to process block 226 in which a 3D echogenic data set
array of the ROI is acquired using at least one of an ultrasound
fundamental and/or harmonic frequency. Sub-algorithm 200 is then
completed and exits to sub-algorithm 300.
[0319] FIG. 8 is an expansion of sub-algorithm 300 of master
algorithm illustrated in FIG. 5. Entering from process block 170,
sub-algorithm 300 begins in processing block 302 by making a
transceiver 10A,B-to-ROI sonic coupling similar to process block
202, transmitting pulsed frequency ultrasound at process block 306,
and thereafter, at processing block 310, acquiring ultrasound
echoes, converting them to echogenic data sets, and presenting a
currently displayed image "i" of the ROI, which is compared with
any predecessor image "i-1" of the ROI, if available. Thereafter,
at process block 314, pixel movement along the Cartesian axes is
ascertained to determine the X- and Y-axis pixel
center-of-optical-flow, and similarly, at process block 318, pixel
movement along the phi angle is ascertained to determine a
rotational center-of-optical-flow. Thereafter, at process block
322, optical flow velocity maps are computed to ascertain whether
axial and rotational vectors exceed a pre-defined threshold OFR
value.
the velocity maps are obtained, decision diamond 326 is reached
with the query "Does optical flow velocity map match the expected
pattern for the structure being imaged?", and if negative,
sub-algorithm 300 re-routes to process block 306 for retransmission
of ultrasound to the ROI via the sonically coupled transceiver
10A,B.
If affirmative for a matched velocity map and expected pattern of
the structure being imaged, sub-algorithm 300 continues with
process block 330 in which a 3D echogenic data set array of the ROI
is acquired using at least one of an ultrasound fundamental and/or
harmonic frequency. Sub-algorithm 300 is then completed and exits
to sub-algorithms 400A and 400B.
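The gradient-based optical-flow computation of sub-algorithm 300 can be sketched as follows. The patent does not name a specific optical-flow method, so the Lucas-Kanade style per-window least-squares formulation, the window size, and the conditioning check below are illustrative assumptions:

```python
import numpy as np

def optical_flow_lk(frame_a, frame_b, win=5):
    """Dense gradient-based (Lucas-Kanade style) optical flow between two
    consecutive images "i-1" and "i".

    For each pixel, gradients over a small window are stacked into a
    least-squares system A [u v]^T = -t derived from brightness
    constancy; ill-conditioned (textureless) windows are skipped.
    """
    a = frame_a.astype(float)
    b = frame_b.astype(float)
    # Spatial gradients (central differences) and temporal difference.
    ix = np.gradient(a, axis=1)
    iy = np.gradient(a, axis=0)
    it = b - a
    h, w = a.shape
    u = np.zeros((h, w))
    v = np.zeros((h, w))
    r = win // 2
    for y in range(r, h - r):
        for x in range(r, w - r):
            gx = ix[y - r:y + r + 1, x - r:x + r + 1].ravel()
            gy = iy[y - r:y + r + 1, x - r:x + r + 1].ravel()
            gt = it[y - r:y + r + 1, x - r:x + r + 1].ravel()
            A = np.stack([gx, gy], axis=1)
            ata = A.T @ A
            if np.linalg.det(ata) > 1e-6:  # skip ill-conditioned windows
                u[y, x], v[y, x] = np.linalg.solve(ata, -A.T @ gt)
    return u, v
```

The resulting (u, v) maps play the role of the X- and Y-axis velocity maps whose vectors are compared against the threshold OFR value at process block 322.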
[0320] FIG. 9 depicts an expansion of sub-algorithms 400A and 400B
of FIG. 5. Sub-algorithm 400A employs nonparametric pulse estimation
and 400B employs parametric pulse estimation. Sub-algorithm 400A
describes an implementation of the CLEAN algorithm for reducing
reverberation and noise in the ultrasound signals and comprises an
RF line processing block 400A-2, a non-parametric pulse estimation
block 400A-4, a CLEAN iteration block 400A-6, a decision diamond
block 400A-8 having the query "STOP?", and a Scan Convert
processing block 400A-10. The same algorithm is applied to each RF
line in a scan plane, but each RF line uses its own unique estimate
of the point spread function of the transducer (or pulse estimate).
The algorithm is iterative by re-routing to Non parametric pulse
estimation block 400A-4 in that the point spread function is
estimated, the CLEAN sub-algorithm applied and then the pulse is
re-estimated from the output of the CLEAN sub-algorithm. The
iterations are stopped after a maximum number of iterations is
reached or the changes in the signal are sufficiently small.
Thereafter, once the iteration has stopped, the signals are
converted for presentation as part of a scan plane image at process
block 400A-10. Sub-algorithm 400A is then completed and exits to
sub-algorithm 500.
[0321] Referring to sub-algorithm 400B, parametric analysis employs
an implementation of the CLEAN algorithm that is not iterative.
Sub-algorithm 400B comprises an RF line processing block
400B-2, a parametric pulse estimation block 400B-4, a CLEAN
algorithm block 400B-6, a CLEAN iteration block 400B-8, and a Scan
Convert processing block 400B-10. The point spread function of the
transducer is estimated once and becomes a priori information used
in the CLEAN algorithm. A single estimate of the pulse is applied
to all RF lines in a scan plane and the CLEAN algorithm is applied
once to each line. The signal output is then converted for
presentation as part of a scan plane image at process block
400B-10. Sub-algorithm 400B is then completed and exits to
sub-algorithm 500.
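The iterative CLEAN loop applied to a single RF line can be sketched as below. The loop gain, stopping rule, and correlation-based peak search are generic CLEAN conventions assumed here for illustration; the patent specifies only that the pulse (point spread function) estimate is subtracted iteratively until a maximum iteration count is reached or the changes become sufficiently small:

```python
import numpy as np

def clean_1d(rf_line, pulse, gain=0.5, n_iter=50, tol=1e-3):
    """One-dimensional CLEAN deconvolution of an RF line.

    Repeatedly locate the strongest match between the residual signal
    and the (normalized) pulse estimate, subtract a scaled, shifted copy
    of the pulse, and accumulate the removed energy as reflector
    strengths. Gain and tolerance are illustrative choices.
    """
    residual = rf_line.astype(float).copy()
    pulse = pulse.astype(float) / np.linalg.norm(pulse)
    reflectors = np.zeros_like(residual)
    half = len(pulse) // 2
    for _ in range(n_iter):
        # Correlate the residual with the pulse to locate the strongest echo.
        corr = np.correlate(residual, pulse, mode="same")
        k = int(np.argmax(np.abs(corr)))
        amp = gain * corr[k]
        if abs(amp) < tol:  # changes sufficiently small -> stop
            break
        reflectors[k] += amp
        # Subtract the scaled pulse centered at sample k.
        lo, hi = max(0, k - half), min(len(residual), k - half + len(pulse))
        residual[lo:hi] -= amp * pulse[lo - (k - half):hi - (k - half)]
    return reflectors, residual
```

In the nonparametric variant (400A) the pulse estimate would be refreshed from the CLEAN output between iterations; in the parametric variant (400B) a single a priori pulse estimate is used for every RF line in the scan plane.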
[0322] FIG. 10 is an expansion of sub-algorithm 500 of FIG. 5. 3D
data sets from processing blocks 400A-10 or 400B-10 of
sub-algorithms 400A or 400B are entered at input data process block
504 that then undergoes a 2-step image enhancement procedure at
process block 506. The 2-step image enhancement includes performing
a heat filter to reduce noise followed by a shock filter to sharpen
edges of structures within the 3D data sets. The heat and shock
filters are partial differential equations (PDE) defined
respectively in Equations E1 and E2 below:
$$\frac{\partial u}{\partial t}=\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}\qquad\text{(Heat Filter)}\quad\text{E1}$$

$$\frac{\partial u}{\partial t}=-F(\ell(u))\,\lVert\nabla u\rVert\qquad\text{(Shock Filter)}\quad\text{E2}$$
[0323] Here u in the heat filter represents the image being
processed. The image u is 2D, and is comprised of an array of
pixels arranged in rows along the x-axis, and an array of pixels
arranged in columns along the y-axis. The pixel intensity of each
pixel in the image u has an initial input image pixel intensity (I)
defined as u.sub.0=I. The value of I depends on the application
and commonly occurs within ranges consistent with the application.
For example, I may range from 0 to 1, occupy intermediate ranges
such as 0 to 127 or 0 to 512, or occupy higher ranges of 0 to 1024,
0 to 4096, or greater. For
the shock filter u represents the image being processed whose
initial value is the input image pixel intensity (I): u.sub.0=I
where the l(u) term is the Laplacian of the image u, F is a
function of the Laplacian, and .parallel..gradient.u.parallel. is
the 2D gradient magnitude of image intensity defined by equation
E3:
$$\lVert\nabla u\rVert=\sqrt{u_{x}^{2}+u_{y}^{2}}\qquad\text{E3}$$
[0324] where u.sub.x.sup.2 is the square of the partial derivative
of the pixel intensity u along the x-axis and u.sub.y.sup.2 is the
square of the partial derivative of u along the y-axis. The
Laplacian l(u) of the image u is expressed in equation E4:

$$\ell(u)=u_{xx}u_{x}^{2}+2u_{xy}u_{x}u_{y}+u_{yy}u_{y}^{2}\qquad\text{E4}$$

[0325] The terms of equation E4 are defined as follows:

[0326] u.sub.x is the first partial derivative $\partial u/\partial x$ of u along the x-axis,

[0327] u.sub.y is the first partial derivative $\partial u/\partial y$ of u along the y-axis,

[0328] u.sub.x.sup.2 is the square of the first partial derivative $\partial u/\partial x$ of u along the x-axis,

[0329] u.sub.y.sup.2 is the square of the first partial derivative $\partial u/\partial y$ of u along the y-axis,

[0330] u.sub.xx is the second partial derivative $\partial^{2}u/\partial x^{2}$ of u along the x-axis,

[0331] u.sub.yy is the second partial derivative $\partial^{2}u/\partial y^{2}$ of u along the y-axis, and

[0332] u.sub.xy is the cross partial derivative $\partial^{2}u/\partial x\,\partial y$ of u along the x and y axes.

[0333] The sign of the function F modifies the Laplacian by the image gradient values, selected to avoid placing spurious edges at points with small gradient values:

$$F(\ell(u))=\begin{cases}1,&\text{if }\ell(u)>0\text{ and }\lVert\nabla u\rVert>t\\-1,&\text{if }\ell(u)<0\text{ and }\lVert\nabla u\rVert>t\\0,&\text{otherwise}\end{cases}$$

[0334] where t is a threshold on the pixel gradient magnitude $\lVert\nabla u\rVert$.
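The heat and shock filters E1 and E2 can be sketched as simple explicit finite-difference iterations. The step sizes and iteration counts below are illustrative, and the shock filter approximates l(u) by the ordinary 5-point Laplacian rather than the full second directional derivative of E4:

```python
import numpy as np

def _laplacian(u):
    """5-point Laplacian with replicated borders."""
    p = np.pad(u, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u

def heat_filter(u, n_iter=10, dt=0.1):
    """Discrete heat filter (E1): iterate u_t = u_xx + u_yy to smooth noise."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        u += dt * _laplacian(u)
    return u

def shock_filter(u, n_iter=10, dt=0.1, t=1e-3):
    """Discrete shock filter (E2): u_t = -F(l(u)) * ||grad u||, sharpening
    edges. F follows the sign rule of the text, with l(u) approximated
    here by the ordinary Laplacian for simplicity."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux ** 2 + uy ** 2)
        lap = _laplacian(u)
        # F = +1 / -1 where the Laplacian is positive / negative and the
        # gradient exceeds the threshold t; 0 otherwise.
        F = np.where(mag > t, np.sign(lap), 0.0)
        u += dt * (-F * mag)
    return u
```

Applied in sequence, the heat filter suppresses speckle-like noise and the shock filter restores the edge sharpness that smoothing blurs, producing the enhanced image used by the segmentation stages.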
[0335] The combination of heat filtering and shock filtering
produces an enhanced image ready to undergo the intensity-based and
edge-based segmentation algorithms as discussed below. The enhanced
3D data sets are then subjected to a parallel process of
intensity-based segmentation at process block 510 and edge-based
segmentation at process block 512. The intensity-based segmentation
step uses a "k-means" intensity clustering technique where the
enhanced image is subjected to a categorizing "k-means" clustering
algorithm. The "k-means" algorithm categorizes pixel intensities
into white, gray, and black pixel groups. Given the number of
desired clusters or groups of intensities (k), the k-means
algorithm is an iterative algorithm comprising four steps:
Initially determine or categorize cluster boundaries by defining a
minimum and a maximum pixel intensity value for every white, gray,
or black pixels into groups or k-clusters that are equally spaced
in the entire intensity range. Assign each pixel to one of the
white, gray or black k-clusters based on the currently set cluster
boundaries. Calculate a mean intensity for each pixel intensity
k-cluster or group based on the current assignment of pixels into
the different k-clusters. The calculated mean intensity is defined
as a cluster center. Thereafter, new cluster boundaries are
determined as mid points between cluster centers. The fourth and
final step of intensity-based segmentation determines if the
cluster boundaries significantly change locations from their
previous values. Should the cluster boundaries change significantly
from their previous values, iterate back to step 2, until the
cluster centers do not change significantly between iterations.
Visually, the clustering process is manifest by the segmented image
and repeated iterations continue until the segmented image does not
change between the iterations.
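The four-step k-means intensity clustering can be sketched as follows. The cluster count, tolerance, and the nearest-center assignment (equivalent to using mid-point boundaries between cluster centers) follow the description above; the specific numeric defaults are illustrative:

```python
import numpy as np

def kmeans_intensity(image, k=3, max_iter=50, tol=0.5):
    """Iterative k-means clustering of pixel intensities into k groups
    (e.g. black, gray, and white), following the four steps in the text."""
    pixels = image.astype(float).ravel()
    # Step 1: cluster centers equally spaced over the intensity range.
    lo, hi = pixels.min(), pixels.max()
    centers = np.linspace(lo, hi, k)
    for _ in range(max_iter):
        # Step 2: assign each pixel to the nearest cluster center
        # (equivalent to mid-point boundaries between centers).
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # Step 3: recompute each center as the mean of its assigned pixels.
        new_centers = np.array([
            pixels[labels == i].mean() if np.any(labels == i) else centers[i]
            for i in range(k)
        ])
        # Step 4: stop when the centers no longer change significantly.
        if np.max(np.abs(new_centers - centers)) < tol:
            centers = new_centers
            break
        centers = new_centers
    return centers, labels.reshape(image.shape)
```

The cluster with the lowest center (label 0 here) corresponds to the darkest pixels, which the text associates with internal cavity regions.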
[0336] The pixels in the cluster having the lowest intensity
value--the darkest cluster--are defined as pixels associated with
internal cavity regions of bladders. For the 2D algorithm, each
image is clustered independently of the neighboring images. For the
3D algorithm, the entire volume is clustered together. To make
this step faster, pixels may be down-sampled by a factor of 2, or
any other multiple, before determining the cluster boundaries. The
cluster boundaries determined from the down-sampled data are then
applied to the entire data set.
[0337] The edge-based segmentation process block 512 uses a
sequence of four sub-algorithms. The sequence includes a spatial
gradients algorithm, a hysteresis threshold algorithm, a
Region-of-Interest (ROI) algorithm, and a matching edges filter
algorithm. The spatial gradient algorithm computes the
x-directional and y-directional spatial gradients of the enhanced
image. The hysteresis threshold algorithm detects salient edges.
Once the edges are detected, the regions defined by the edges are
selected by a user employing the ROI algorithm to select
regions-of-interest deemed relevant for analysis.
[0338] Since the enhanced image has very sharp transitions, the
edge points can be easily determined by taking x- and y-derivatives
using backward differences along x- and y-directions. The pixel
gradient magnitude .parallel..gradient.I.parallel. is then computed
from the x- and y-derivative images in equation E5 as:

$$\lVert\nabla I\rVert=\sqrt{I_{x}^{2}+I_{y}^{2}}\qquad\text{E5}$$

[0339] where I.sub.x.sup.2 is the square of the x-derivative of
intensity and I.sub.y.sup.2 is the square of the y-derivative of
intensity.
[0340] Significant edge points are then determined by thresholding
the gradient magnitudes using a hysteresis thresholding operation.
Other thresholding methods could also be used. In hysteresis
thresholding, two threshold values, a lower threshold and a higher
threshold, are used. First, the image is thresholded at the lower
threshold value and a connected component labeling is carried out
on the resulting image. Next, each connected edge component is
preserved which has at least one edge pixel having a gradient
magnitude greater than the upper threshold. This kind of
thresholding scheme is good at retaining long connected edges that
have one or more high gradient points.
[0341] In the preferred embodiment, the two thresholds are
automatically estimated. The upper gradient threshold is estimated
at a value such that at most 97% of the image pixels are marked as
non-edges. The lower threshold is set at 50% of the value of the
upper threshold. These percentages could be different in different
implementations. Next, edge points that lie within a desired
region-of-interest are selected. This region of interest algorithm
excludes points lying at the image boundaries and points lying too
close to or too far from the transceivers 10A,B. Finally, the
matching edge filter is applied to remove outlier edge points and
fill in the area between the matching edge points.
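A sketch of the hysteresis thresholding step, with the automatic 97%/50% threshold estimates described above. The iterative dilation used here is a simple stand-in for the connected-component labeling named in the text (note that np.roll wraps at image borders, which a padded implementation would avoid):

```python
import numpy as np

def hysteresis_threshold(grad_mag, hi=None, lo=None):
    """Hysteresis thresholding of a gradient-magnitude image.

    The upper threshold may be auto-estimated so that about 97% of
    pixels fall below it, and the lower threshold defaults to 50% of
    the upper one. A connected edge component is kept if any of its
    pixels exceeds the upper threshold.
    """
    if hi is None:
        hi = np.percentile(grad_mag, 97.0)
    if lo is None:
        lo = 0.5 * hi
    strong = grad_mag > hi
    weak = grad_mag > lo
    # Grow strong pixels through 8-connected weak pixels until stable.
    result = strong.copy()
    while True:
        grown = result.copy()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                grown |= np.roll(np.roll(result, dy, axis=0), dx, axis=1)
        grown &= weak
        if np.array_equal(grown, result):
            break
        result = grown
    return result
```

As the text notes, this retains long connected edges that contain at least one high-gradient point while discarding isolated weak responses.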
[0342] The edge-matching algorithm is applied to establish valid
boundary edges and remove spurious edges while filling the regions
between boundary edges. Edge points on an image have a directional
component indicating the direction of the gradient. Pixels in
scanlines crossing a boundary edge location can exhibit two
gradient transitions depending on the pixel intensity
directionality. Each gradient transition is given a positive or
negative value depending on the pixel intensity directionality. For
example, if the scanline approaches an echo reflective bright wall
from a darker region, then an ascending transition is established
as the pixel intensity gradient increases to a maximum value, i.e.,
as the transition ascends from a dark region to a bright region.
The ascending transition is given a positive numerical value.
Similarly, as the scanline recedes from the echo reflective wall, a
descending transition is established as the pixel intensity
gradient decreases to or approaches a minimum value. The descending
transition is given a negative numerical value.
[0343] Valid boundary edges are those that exhibit ascending and
descending pixel intensity gradients, or equivalently, exhibit
paired or matched positive and negative numerical values. The valid
boundary edges are retained in the image. Spurious or invalid
boundary edges do not exhibit paired ascending-descending pixel
intensity gradients, i.e., do not exhibit paired or matched
positive and negative numerical values. The spurious boundary edges
are removed from the image.
[0344] For bladder cavity volumes, most edge points for blood fluid
surround a dark, closed region, with directions pointing inwards
towards the center of the region. Thus, for a convex-shaped region,
the direction of a gradient for any edge point, the edge point
having a gradient direction approximately opposite to the current
point represents the matching edge point. Those edge points
exhibiting an assigned positive and negative value are kept as
valid edge points on the image because the negative value is paired
with its positive value counterpart. Similarly, those edge point
candidates having unmatched values, i.e., those edge point
candidates not having a negative-positive value pair, are deemed
not to be true or valid edge points and are discarded from the
image.
[0345] The matching edge point algorithm delineates edge points not
lying on the boundary for removal from the desired dark regions.
Thereafter, the region between any two matching edge points is
filled in with non-zero pixels to establish edge-based
segmentation. In a preferred embodiment of the invention, only edge
points whose directions are primarily oriented co-linearly with the
scanline are sought to permit the detection of matching front-wall
and back-wall pairs of a fluid-filled cavity, for example the
bladder or a left or right heart ventricle.
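The matching-edge rule can be sketched along a single scanline: ascending (positive) transitions are paired with the next descending (negative) transition, the span between each matched pair is filled with non-zero pixels, and unmatched candidates are discarded as spurious. The threshold and fill value below are illustrative:

```python
import numpy as np

def fill_matched_edges(scanline_grad, thresh=1.0):
    """Pair ascending (+) and descending (-) gradient transitions along a
    scanline and fill the region between each matched pair, discarding
    unmatched edge candidates as spurious."""
    n = len(scanline_grad)
    filled = np.zeros(n, dtype=np.uint8)
    i = 0
    while i < n:
        if scanline_grad[i] > thresh:  # ascending front-wall candidate
            match = -1
            for j in range(i + 1, n):
                if scanline_grad[j] < -thresh:  # descending back-wall edge
                    match = j
                    break
            if match >= 0:
                filled[i:match + 1] = 1  # fill between the matched pair
                i = match
            # an ascending edge with no descending partner is discarded
        i += 1
    return filled
```

Run over every scanline, this produces the filled dark regions that constitute the edge-based segmentation result.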
[0346] Referring again to FIG. 10, results from the respective
segmentation procedures are then combined at process block 514 and
subsequently undergo a cleanup algorithm process at process block
516. The combining process of block 514 uses a pixel-wise Boolean
AND operator step to produce a segmented image by computing the
pixel intersection of two images. The Boolean AND operation
represents the pixels of each scan plane of the 3D data sets as
binary numbers and the corresponding assignment of an assigned
intersection value as a binary number 1 or 0 by the combination of
any two pixels. For example, consider any two pixels, say
pixel.sub.A and pixel.sub.B, which can have a 1 or 0 as assigned
values. If pixel.sub.A's value is 1, and pixel.sub.B's value is 1,
the assigned intersection value of pixel.sub.A and pixel.sub.B is
1. If the binary value of pixel.sub.A and pixel.sub.B are both 0,
or if either pixel.sub.A or pixel.sub.B is 0, then the assigned
intersection value of pixel.sub.A and pixel.sub.B is 0. The
Boolean AND operation takes any two binary digital images as input
and outputs a third image with the pixel values made equivalent to
the intersection of the two input images.
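The pixel-wise Boolean AND combination of block 514 is, in code, a one-liner over the two binary masks:

```python
import numpy as np

def combine_segmentations(intensity_mask, edge_mask):
    """Pixel-wise Boolean AND of the intensity-based and edge-based
    segmentation results: a pixel survives only if both methods
    marked it (binary 1 AND 1 -> 1, otherwise 0)."""
    return np.logical_and(intensity_mask, edge_mask).astype(np.uint8)
```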
[0347] After combining the segmentation results, the combined
pixel information in the 3D data sets is cleaned, in a fifth
process at process block 516, to make the output image smooth and
to remove extraneous structures not relevant to bladder cavities.
Cleanup 516
includes filling gaps with pixels and removing pixel groups
unlikely to be related to the ROI undergoing study, for example
pixel groups unrelated to bladder cavity structures. Sub-algorithm
500 is then completed and exits to sub-algorithm 600.
[0348] FIG. 11 depicts a logarithm of a Cepstrum. The Cepstrum is
used in sub-algorithm 400A for the pulse estimation via application
of point spread functions to the echogenic data sets generated by
the transceivers 10A,B.
[0349] FIGS. 12A-C depict histogram waveform plots derived from
water tank pulse-echo experiments undergoing parametric and
non-parametric analysis. FIG. 12A is a measured pulse plot. FIG.
12B is a nonparametric pulse estimated pattern derived from
sub-algorithm 400A. FIG. 12C is a parametric pulse estimated
pattern derived from sub-algorithm 400B.
[0350] FIGS. 13-25 are bladder sonograms that depict image clarity
after undergoing image enhancement processing by algorithms
described in FIGS. 5-10.
[0351] FIG. 13 is an unprocessed image that will undergo image
enhancement processing.
[0352] FIG. 14 illustrates an enclosed portion of a magnified
region of FIG. 13.
[0353] FIG. 15 is the resultant image of FIG. 13 that has undergone
image processing via nonparametric estimation under sub-algorithm
400A. The low echogenic region within the circle inset has more
contrast than the unprocessed image of FIGS. 13 and 14.
[0354] FIG. 16 is the resultant image of FIG. 13 that has undergone
image processing via parametric estimation under sub-algorithm
400B. Here the circle inset is in the echogenic musculature region
encircling the bladder and is shown with greater contrast and
clarity than the magnified, unprocessed image of FIG. 14.
[0355] FIG. 17 is the resultant image of an alternate
image-processing embodiment using a Wiener filter. The
Wiener-filtered image has neither the clarity nor the contrast in
the low echogenic bladder region of FIG. 15 (compare circle
insets).
[0356] FIG. 18 is another unprocessed image that will undergo image
enhancement processing.
[0357] FIG. 19 illustrates an enclosed portion of a magnified
region of FIG. 18.
[0358] FIG. 20 is the resultant image of FIG. 18 that has undergone
image processing via nonparametric estimation under sub-algorithm
400A. The low echogenic region is darker and the echogenic regions
are brighter with more contrast than the magnified, unprocessed
image of FIG. 19.
[0359] FIG. 21 is the resultant image of FIG. 18 that has undergone
image processing via parametric estimation under sub-algorithm
400B. The low echogenic region is darker and the echogenic regions
are brighter, with greater contrast and clarity than the magnified,
unprocessed image of FIG. 19.
[0360] FIG. 22 is another unprocessed image that will undergo image
enhancement processing.
[0361] FIG. 23 illustrates an enclosed portion of a magnified
region of FIG. 22.
[0362] FIG. 24 is the resultant image of FIG. 22 that has undergone
image processing via nonparametric estimation under sub-algorithm
400A. The low echogenic region is darker and the echogenic regions
are brighter with more contrast than the magnified, unprocessed
image of FIG. 23.
[0363] FIG. 25 is the resultant image of FIG. 22 that has undergone
image processing via parametric estimation under sub-algorithm
400B. The low echogenic region is darker and the echogenic regions
are brighter, with greater contrast and clarity than the magnified,
unprocessed image of FIG. 23.
[0364] FIG. 26 depicts a schematic example of a time velocity map
derived from sub-algorithm 310.
[0365] FIG. 27 depicts another schematic example of a time velocity
map derived from sub-algorithm 310.
[0366] FIG. 28 illustrates a seven panel image series of a beating
heart ventricle that will undergo the optical flow processes of
sub-algorithm 300 in which at least two images are required.
[0367] FIG. 29 illustrates an optical flow velocity map plot of the
seven panel image series of FIG. 28 presented in a 2D flow pattern
after undergoing sub-algorithm 310.
[0368] FIG. 30 illustrates an optical flow velocity map plot of the
seven panel image series of FIG. 28 along the X-axis, or phi,
direction after undergoing sub-algorithm 310.
[0369] FIG. 31 illustrates an optical flow velocity map plot of the
seven panel image series of FIG. 28 along the Y-axis, or radial,
direction after undergoing sub-algorithm 310.
[0370] FIG. 32 illustrates a 3D optical vector plot after
undergoing sub-algorithm 310 and corresponds to the top row of FIG.
29.
[0371] FIG. 33 illustrates a 3D optical vector plot in the phi
direction after undergoing sub-algorithm 310 and corresponds to
FIG. 30 at threshold value T=1.
[0372] FIG. 34 illustrates a 3D optical vector plot in the radial
direction after undergoing sub-algorithm 310 and corresponds to
FIG. 31 at T=1.
[0373] FIG. 35 illustrates a 3D optical vector plot in the radial
direction above a Y-axis threshold setting of 0.6 after undergoing
sub-algorithm 310 and corresponds to FIG. 34, with vectors below
the threshold T = 0.6 set to 0.
[0374] FIGS. 36A-G depict embodiments of the sonic gel
dispenser.
[0375] FIG. 36A illustrates the metered dispensing of sonic gel by
calibrated rotation of a compressing wheel. The peristaltic
mechanism using the compressing wheel is shown in a partial side
view.
[0376] FIG. 36B illustrates in cross-section the inside of the
dispenser, showing a collapsible bag that is engaged by the
compressing wheel. As more rotational action is conveyed to the
compressing wheel, the bag progressively collapses.
[0377] FIG. 36C illustrates an alternative embodiment employing
compression by hand gripping.
[0378] FIG. 36D illustrates an alternative embodiment employing
push button or lever compression to dispense metered quantities of
sonic gel.
[0379] FIG. 36E illustrates an alternative embodiment employing
air valves to limit re-gassing of internal sonic gel volume stores
within the sonic gel dispenser. The valve is pinched closed when
the gripping or compressing-wheel pressure is lessened and springs
open when the gripping or compressing-wheel pressure is increased,
allowing sonic gel to be dispensed.
[0380] FIG. 36F illustrates a side, cross-sectional view of the gel
dispensing system that includes a pre-packaged collapsible bottle
with a refill bag, a bottle holder that positions the pre-packaged
bottle for use, and a sealed tip that may be clipped open.
[0381] FIG. 36G illustrates a side view of the pre-packaged
collapsible bottle of FIG. 36F. A particular embodiment includes an
eight-ounce squeeze bottle.
[0382] FIGS. 37-46 concern cannula insertion as viewed by
ultrasonic systems, in which cannula motion detection during
insertion is enhanced with method algorithms directed to detecting
a moving cannula fitted with echogenic ultrasound
micro-reflectors.
[0383] An embodiment related to cannula insertion generally
includes an ultrasound probe attached to a first camera and a
second camera and a processing and display generating system that
is in signal communication with the ultrasound probe, the first
camera, and/or the second camera. A user of the system scans tissue
containing a target vein using the ultrasound probe and a
cross-sectional image of the target vein is displayed. The first
camera records a first image of a cannula in a first direction and
the second camera records a second image of the cannula in a second
direction orthogonal to the first direction. The first and/or the
second images are processed by the processing and display
generating system along with the relative positions of the
ultrasound probe, the first camera, and/or the second camera to
determine the trajectory of the cannula. A representation of the
determined trajectory of the cannula is then displayed on the
ultrasound image.
[0384] FIG. 37 is a diagram illustrating a side view of one
embodiment of the present invention. A two-dimensional (2D)
ultrasound probe 1010 is attached to a first camera 1014 that takes
images in a first direction. The ultrasound probe 1010 is also
attached to a second camera 1018 via a member 1016. In other
embodiments, the member 1016 may link the first camera 1014 to the
second camera 1018 or the member 1016 may be absent, with the
second camera 1018 being directly attached to a specially
configured ultrasound probe. The second camera 1018 is oriented
such that the second camera 1018 takes images in a second direction
that is orthogonal to the first direction of the images taken by
the first camera 1014. The placement of the cameras 1014, 1018 may
be such that they can both take images of a cannula 1020 when the
cannula 1020 is placed before the cameras 1014, 1018. A needle may
also be used in place of a cannula. The cameras 1014, 1018 and the
ultrasound probe 1010 are geometrically interlocked such that the
cannula 1020 trajectory can be related to an ultrasound image. In
FIG. 37, the second camera 1018 is behind the cannula 1020 when
looking into the plane of the page. The cameras 1014, 1018 take
images at a rapid frame rate of approximately 30 frames per
second. The ultrasound probe 1010 and/or the cameras 1014, 1018 are
in signal communication with a processing and display generating
system 1061.
[0385] First, a user employs the ultrasound probe 1010 and the
processing and display generating system 1061 to generate a
cross-sectional image of a patient's arm tissue containing a vein
to be cannulated ("target vein") 1019. This could be done by one of
the methods disclosed in the related patents and/or patent
applications which are herein incorporated by reference, for
example. The user then identifies the target vein 1019 in the image
using methods such as simple compression which differentiates
between arteries and/or veins by using the fact that veins collapse
easily while arteries do not. After the user has identified the
target vein 1019, the ultrasound probe 1010 is affixed to the
patient's arm over the previously identified target vein 1019 using a
magnetic tape material 1012. The ultrasound probe 1010 and the
processing and display generating system 1061 continue to generate
a 2D cross-sectional image of the tissue containing the target vein
1019. Images from the cameras 1014, 1018 are provided to the
processing and display generating system 1061 as the cannula 1020
is approaching and/or entering the arm of the patient.
[0386] The processing and display generating system 1061 locates
the cannula 1020 in the images provided by the cameras 1014, 1018
and determines the projected location at which the cannula 1020
will penetrate the cross-sectional ultrasound image being
displayed. The trajectory of the cannula 1020 is determined in some
embodiments by using image processing to identify bright spots
corresponding to micro reflectors previously machined into the
shaft of the cannula 1020 or a needle used alone or in combination
with the cannula 1020. Image processing uses the bright spots to
determine the angles of the cannula 1020 relative to the cameras
1014, 1018 and then generates a projected trajectory by using the
determined angles and/or the known positions of the cameras 1014,
1018 in relation to the ultrasound probe 1010. In other embodiments,
determination of the cannula 1020 trajectory is performed using
edge-detection algorithms in combination with the known positions
of the cameras 1014, 1018 in relation to the ultrasound probe 1010,
for example.
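The bright-spot identification described above can be sketched as a simple threshold-and-centroid pass over a camera frame. This is an illustrative sketch only, not the patented implementation: the function name, the threshold value, and the 4-connected flood fill are assumptions.

```python
import numpy as np

def detect_reflector_spots(frame, threshold):
    """Return centroids (row, col) of bright blobs at or above threshold.

    Sketch of the bright-spot step: threshold the frame, group bright
    pixels into 4-connected blobs, and return each blob's centroid.
    """
    mask = frame >= threshold
    visited = np.zeros_like(mask, dtype=bool)
    spots = []
    rows, cols = frame.shape
    for r0 in range(rows):
        for c0 in range(cols):
            if mask[r0, c0] and not visited[r0, c0]:
                stack, pixels = [(r0, c0)], []
                visited[r0, c0] = True
                while stack:
                    r, c = stack.pop()
                    pixels.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < rows and 0 <= cc < cols
                                and mask[rr, cc] and not visited[rr, cc]):
                            visited[rr, cc] = True
                            stack.append((rr, cc))
                pts = np.array(pixels, dtype=float)
                spots.append(tuple(pts.mean(axis=0)))
    return spots
```

The resulting centroid list is the input a later stage would fit a line through to recover the cannula angle.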
[0387] The projected location may be indicated on the displayed
image as a computer-generated cross-hair 1066, the intersection of
which is where the cannula 1020 is projected to penetrate the
image. When the cannula 1020 does penetrate the cross-sectional
plane of the scan produced by the ultrasound probe 1010, the
ultrasound image confirms that the cannula 1020 penetrated at the
location of the cross-hair 1066. This gives the user a real-time
ultrasound image of the target vein 1019 with an overlaid real-time
computer-generated image of the position in the ultrasound image
that the cannula 1020 will penetrate. This allows the user to
adjust the location and/or angle of the cannula 1020 before and/or
during insertion to increase the likelihood they will penetrate the
target vein 1019. Risks of pneumothorax and other adverse outcomes
should be substantially reduced since a user will be able to use
normal "free" insertion procedures with the added knowledge of
where the cannula 1020 trajectory will lead.
[0388] FIG. 38 is a diagram illustrating a top view of the
embodiment shown in FIG. 37. It is more easily seen from this view
that the second camera 1018 is positioned behind the cannula 1020.
The positioning of the cameras 1014, 1018 relative to the cannula
1020 allows the cameras 1014, 1018 to capture images of the cannula
1020 from two different directions, thus making it easier to
determine the trajectory of the cannula 1020.
[0389] FIG. 39 is a diagram showing additional detail for a needle
shaft 1022 to be used with one embodiment of the invention. The
needle shaft 1022 includes a plurality of micro corner reflectors
1024. The micro corner reflectors 1024 are cut into the needle
shaft 1022 at defined intervals .DELTA.l in symmetrical patterns
about the circumference of the needle shaft 1022. The micro corner
reflectors 1024 could be cut with a laser, for example.
[0390] FIGS. 40A and 40B are diagrams showing close-up views of
surface features of the needle shaft 1022 shown in FIG. 39. FIG.
40A shows a first input ray with a first incident angle of
approximately 90.degree. striking one of the micro corner
reflectors 1024 on the needle shaft 1022. A first output ray is
shown exiting the micro corner reflector 1024 in a direction toward
the source of the first input ray. FIG. 40B shows a second input
ray with a second incident angle other than 90.degree. striking a
micro corner reflector 1025 on the needle shaft 1022. A second
output ray is shown exiting the micro corner reflector 1025 in a
direction toward the source of the second input ray. FIGS. 40A and
40B illustrate that the micro corner reflectors 1024, 1025 are
useful because they tend to reflect an output ray in the direction
from which an input ray originated.
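The retroreflecting behavior shown in FIGS. 40A and 40B follows from corner-reflector geometry: reflecting a direction d off a plane with unit normal n gives d - 2(d.n)n, and successive reflections off three mutually perpendicular faces negate every component of d. A minimal numerical check of that identity, purely illustrative:

```python
import numpy as np

def reflect(d, n):
    """Reflect direction vector d off a mirror with unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

# An arbitrary incoming ray direction.
d_in = np.array([0.3, -0.5, 0.8])

# Bounce it off the three mutually perpendicular faces of a corner
# reflector (normals along the x, y, and z axes).
d_out = d_in.copy()
for n in np.eye(3):
    d_out = reflect(d_out, n)

# d_out equals -d_in: the exit ray is antiparallel to the input, so the
# reflector returns light toward its source and images as a bright dot.
```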
[0391] FIG. 41 is a diagram showing imaging components for use with
the needle shaft 1022 shown in FIG. 39 in accordance with one
embodiment of the invention. The imaging components are shown to
include a first light source 1026, a second light source 1028, a
lens 1030, and a sensor chip 1032. The first and/or second light
sources 1026, 1028 may be light emitting diodes (LEDs), for
example. In an example embodiment, the light sources 1026, 1028 are
infra-red LEDs. Use of an infra-red source is advantageous because
it is not visible to the human eye, but when an image of the needle
shaft 1022 is recorded, the image will show strong bright dots
where the micro corner reflectors 1024 are located because silicon
sensor chips are sensitive to infra-red light and the micro corner
reflectors 1024 tend to reflect output rays in the direction from
which input rays originate, as discussed with reference to FIGS.
40A and 40B. In alternative embodiments, a single light source may
be used. Although not shown, the sensor chip 1032 is encased in a
housing behind the lens 1030 and the sensor chip 1032 and light
sources 1026, 1028 are in electrical communication with the
processing and display generating system 1061. The sensor chip 1032
and/or the lens 1030 form a part of the first and second cameras
1014, 1018 in some embodiments. In an example embodiment, the light
sources 1026, 1028 are pulsed on at the time the sensor chip 1032
captures an image. In other embodiments, the light sources 1026,
1028 are left on during video image capture.
[0392] FIG. 42 is a diagram showing a representation of an image
1034 produced by the imaging components shown in FIG. 41. The image
1034 may include a needle shaft image 1036 that corresponds to a
portion of the needle shaft 1022 shown in FIG. 41. The image 1034
also may include a series of bright dots 1038 running along the
center of the needle shaft image 1036 that correspond to the micro
corner reflectors 1024 shown in FIG. 41. A center line 1040 is
shown in FIG. 42 to illustrate how an angle theta (.theta.) could
be obtained by image processing to recognize the bright dots 1038
and determine a line through them. The angle theta represents the
degree to which the needle shaft 1022 is inclined with respect to a
reference line 1042 that is related to the fixed position of the
sensor chip 1032.
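The angle theta of FIG. 42 can be recovered by fitting a line through the detected dot centroids, for instance with an ordinary least-squares fit as sketched below. The patent does not specify the fitting method; a production system might prefer a total-least-squares or RANSAC fit to tolerate missed or spurious dots.

```python
import math

def needle_angle(dots):
    """Inclination theta (radians) of the best-fit line through the
    bright-dot centroids (x, y), measured against the sensor's
    horizontal reference line. Illustrative least-squares fit.
    """
    n = len(dots)
    mx = sum(x for x, _ in dots) / n
    my = sum(y for _, y in dots) / n
    sxx = sum((x - mx) ** 2 for x, _ in dots)
    sxy = sum((x - mx) * (y - my) for x, y in dots)
    # slope = sxy / sxx; atan2 keeps the sign and stays finite even
    # when the dots lie on a vertical line (sxx == 0)
    return math.atan2(sxy, sxx)
```

For dots along a 45-degree line this returns pi/4, and for a level needle it returns 0.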
[0393] FIG. 43 is a system diagram of an embodiment of the present
invention and shows additional detail for the processing and
display generating system 1061 in accordance with an example
embodiment of the invention. The ultrasound probe 1010 is shown
connected to the processing and display generating system via M
control lines and N data lines. The M and N variables are for
convenience and appear simply to indicate that the connections may
be composed of one or more transmission paths. The control lines
allow the processing and display generating system 1061 to direct the
ultrasound probe 1010 to properly perform an ultrasound scan and
the data lines allow responses from the ultrasound scan to be
transmitted to the processing and display generating system 1061.
The first and second cameras 1014, 1018 are also each shown to be
connected to the processing and display generating system 1061 via
N lines. Although the same variable N is used, it is simply
indicating that one or more lines may be present, not that each
device with a label of N lines has the same number of lines.
[0394] The processing and display generating system 1061 is
composed of a display 1064 and a block 1062 containing a computer,
a digital signal processor (DSP), and analog to digital (A/D)
converters. As discussed for FIG. 37, the display 1064 will display
a cross-sectional ultrasound image. The computer-generated cross
hair 1066 is shown over a representation of a cross-sectional view of
the target vein 1019 in FIG. 43. The cross hair 1066 consists of an
x-crosshair 1068 and a z-crosshair 1070. The DSP and the computer
in the block 1062 use images from the first camera 1014 to
determine the plane in which the cannula 1020 will penetrate the
ultrasound image and then write the z-crosshair 1070 on the
ultrasound image provided to the display 1064. Similarly, the DSP
and the computer in the block 1062 use images from the second
camera 1018, which are orthogonal to the images provided by the
first camera 1014 as discussed for FIG. 37, to write the
x-crosshair 1068 on the ultrasound image.
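The two crosshair coordinates can be sketched as simple ray extrapolation: each camera contributes one inclination, and extending the cannula tip along those inclinations to the known depth of the scan plane yields the x- and z-crosshair positions. The coordinate convention and all names below are assumptions for illustration; the real computation depends on the calibrated geometry relating the cameras to the ultrasound probe.

```python
import math

def project_crosshair(tip_y, tip_x, tip_z, theta_xy, theta_zy, plane_y):
    """Project where the cannula will pierce the ultrasound scan plane.

    Illustrative geometry: the cannula tip at (tip_x, tip_y, tip_z)
    advances along +y toward the scan plane at y = plane_y; theta_zy is
    its inclination seen by the first (side-view) camera and theta_xy
    the inclination seen by the second (top-view) camera.
    """
    dy = plane_y - tip_y
    x_hit = tip_x + dy * math.tan(theta_xy)  # x-crosshair, second camera
    z_hit = tip_z + dy * math.tan(theta_zy)  # z-crosshair, first camera
    return x_hit, z_hit
```

For example, a tip at (x, y, z) = (1, 0, 2) with no top-view tilt and a side-view slope of 0.5 toward a plane at y = 10 projects to the crosshair position (1, 7).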
[0395] FIG. 44 is a system diagram of an example embodiment showing
additional detail for the block 1062 shown in FIG. 43. The block
1062 includes a first A/D converter 1080, a second A/D converter
1082, and a third A/D converter 1084. The first A/D converter 1080
receives signals from the ultrasound probe 1010 and converts them
to digital information that is provided to a DSP 1086. The second
and third A/D converters 1082, 1084 receive signals from the first
and second cameras 1014, 1018 respectively and convert the signals
to digital information that is provided to the DSP 1086. In
alternative embodiments, some or all of the A/D converters are not
present. For example, video from the cameras 1014, 1018 may be
provided to the DSP 1086 directly in digital form rather than being
created in analog form before passing through A/D converters 1082,
1084. The DSP 1086 is in data communication with a computer 1088
that includes a central processing unit (CPU) 1090 in data
communication with a memory component 1092. The computer 1088 is in
signal communication with the ultrasound probe 1010 and is able to
control the ultrasound probe 1010 using this connection. The
computer 1088 is also connected to the display 1064 and produces a
video signal used to drive the display 1064.
[0396] FIG. 45 is a flowchart of a method of displaying the
trajectory of a cannula in accordance with an embodiment of the
present invention. First, at a block 1200, an ultrasound image of a
vein cross-section is produced and/or displayed. Next, at a block
1210, the trajectory of a cannula is determined. Then, at a block
1220, the determined trajectory of the cannula is displayed on the
ultrasound image.
[0397] FIG. 46 is a flowchart showing additional detail for the
block 1210 depicted in FIG. 45. The block 1210 includes a block
1212 where a first image of a cannula is recorded using a first
camera. Next, at a block 1214, a second image of the cannula
orthogonal to the first image of the cannula is recorded using a
second camera. Then, at a block 1216, the first and second images
are processed to determine the trajectory of the cannula.
[0398] While the preferred embodiment of the invention has been
illustrated and described, as noted above, many changes can be made
without departing from the spirit and scope of the invention. For
example, a three dimensional ultrasound system could be used rather
than a 2D system. In addition, different numbers of cameras could
be used along with image processing that determines the cannula
1020 trajectory based on the number of cameras used. The two
cameras 1014, 1018 could also be placed in a non-orthogonal
relationship so long as the image processing was adjusted to
properly determine the orientation and/or projected trajectory of
the cannula 1020. Also, an embodiment of the invention could be
used for needles and/or other devices which are to be inserted in
the body of a patient. Additionally, an embodiment of the invention
could be used in places other than arm veins. Regions of the
patient's body other than an arm could be used and/or biological
structures other than veins may be the focus of interest. As
regards ultrasound-based algorithms, alternate embodiments may be
configured to image acquisitions other than ultrasound, for example
X-ray, visible and infrared light acquired images. Accordingly, the
scope of the invention is not limited by the disclosure of the
preferred embodiment.
* * * * *