U.S. patent application number 10/193718 was filed with the patent office on 2002-07-12 and published on 2003-04-03 as publication number 20030065260, for identification and quantification of needle and seed displacement departures from treatment plan.
This patent application is currently assigned to ALPHA INTERVENTION TECHNOLOGY, INC. Invention is credited to Cheng, Gang; Liu, Haisong; Yu, Yan.
Application Number | 10/193718 |
Publication Number | 20030065260 |
Family ID | 26895809 |
Publication Date | 2003-04-03 |
United States Patent Application | 20030065260 |
Kind Code | A1 |
Cheng, Gang; et al. |
April 3, 2003 |
Identification and quantification of needle and seed displacement
departures from treatment plan
Abstract
A placement plan is developed for the placement of radioactive
seeds or other therapeutic agents in a prostate for brachytherapy.
The placement plan is made available to an intraoperative tracking
interface which also shows a live ultrasound image of the needle or
catheter placement in the prostate. Position errors can be detected
and corrected. Techniques for merging of orthogonal ultrasound
image sets, classification of malignancy through neural networks,
and statistical detection of seeds in an ultrasound image can be
used.
Inventors: | Cheng, Gang (New Berlin, WI); Yu, Yan (Rochester, NY); Liu, Haisong (Rochester, NY) |
Correspondence Address: | BLANK ROME COMISKY & MCCAULEY, LLP, 900 17TH STREET, N.W., SUITE 1000, WASHINGTON, DC 20006, US |
Assignee: | ALPHA INTERVENTION TECHNOLOGY, INC. |
Family ID: | 26895809 |
Appl. No.: | 10/193718 |
Filed: | July 12, 2002 |
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
10193718 | Jul 12, 2002 |
09636551 | Aug 11, 2000 | 6438401
60200493 | Apr 28, 2000 |
Current U.S. Class: | 600/427 |
Current CPC Class: | A61B 8/0833 20130101; A61N 5/1027 20130101; A61B 8/0841 20130101; A61B 2034/2059 20160201; A61N 2005/1011 20130101; A61N 5/103 20130101; A61B 2034/2055 20160201; A61B 2034/2051 20160201 |
Class at Publication: | 600/427 |
International Class: | A61B 005/05 |
Claims
We claim:
1. A method for identifying and quantifying departures in placement
of needles or catheters from intended placements in a treatment
plan for treating a bodily organ, the needles or catheters carrying
therapeutic agents for insertion into the bodily organ for use in
the treatment plan, the method comprising: (a) inputting the
intended placements into an intraoperative tracking interface; (b)
for at least one needle or catheter, calculating a difference in an
x-y plane between the intended placement of that needle or catheter
and an actual placement of that needle or catheter; (c)
calculating, from the difference calculated in step (b), a position
error for each of the therapeutic agents; (d) adjusting positions
of therapeutic agents along each needle or catheter by an amount
determined in accordance with the position error calculated in step
(c); and (e) determining positions of a deposited number of
therapeutic agents through imaging.
2. The method of claim 1, further comprising: (f) assigning a
confidence level to each of the positions determined in step (e);
and (g) for each therapeutic agent whose position is assigned a
confidence level exceeding a predetermined threshold, recalculating
a dosimetry associated with that therapeutic agent.
3. The method of claim 2, further comprising: (h) adjusting
positions of the therapeutic agents having low confidence values
assigned in step (f) in accordance with x-ray imaging; (i)
recalculating a dosimetry associated with all of the therapeutic
agents; and (j) assigning a confidence level to the dosimetry
recalculated in step (i).
4. The method of claim 1, wherein the intraoperative tracking
interface permits an operator to select a needle or catheter for
step (b).
5. The method of claim 1, wherein the intraoperative tracking
interface displays at least one isodose plot of the bodily
organ.
6. The method of claim 1, wherein step (e) is performed through
real-time ultrasound imaging of the bodily organ.
7. The method of claim 6, wherein: the intraoperative tracking
interface permits an operator to select a needle or catheter; and
the real-time ultrasound imaging is performed in a direction of the
needle or catheter selected by the operator.
8. The method of claim 7, wherein the real-time ultrasound imaging
results in a column of ultrasound images along the needle or
catheter selected by the operator.
9. The method of claim 8, wherein the column of ultrasound images
undergoes gray-scale preprocessing.
10. The method of claim 9, wherein the gray-scale preprocessing
corrects a gray-scale histogram of the column of ultrasound images
to produce gray-scale corrected ultrasound images.
11. The method of claim 10, wherein the gray-scale corrected
ultrasound images are used to find locations of the therapeutic
agents along the needle or catheter selected by the operator.
12. The method of claim 11, wherein the locations of the
therapeutic agents found in the gray-scale corrected ultrasound
images are used for further correction of the positions of the
therapeutic agents.
13. The method of claim 11, wherein a number of therapeutic agents
whose locations are found in the gray-scale corrected ultrasound
images is compared to a number of therapeutic agents which are
actually along the needle or catheter selected by the operator.
14. The method of claim 13, wherein, when the numbers of
therapeutic agents are not equal, a column width of the column of
ultrasound images is changed, and the column of ultrasound images
is taken again.
15. The method of claim 1, wherein the bodily organ is a
prostate.
16. A system for identifying and quantifying departures in the
placement of needles or catheters from intended placements in a
treatment plan for treating a bodily organ, the needles or
catheters carrying therapeutic agents for insertion into the bodily
organ for use in the treatment plan, the system comprising: an
imaging device for imaging the bodily organ; an intraoperative
tracking interface comprising a display; and a computing device, in
electronic communication with the intraoperative tracking
interface, for: (a) inputting the intended placements into the
intraoperative tracking interface; (b) for at least one needle or
catheter, calculating a difference in an x-y plane between the
intended placement of that needle or catheter and an actual
placement of that needle or catheter; (c) calculating, from the
difference calculated in step (b), a position error for each of the
therapeutic agents; (d) adjusting positions of therapeutic agents
along each needle or catheter by an amount determined in accordance
with the position error calculated in step (c); and (e) determining
positions of a deposited number of therapeutic agents through
imaging carried out through the imaging device.
17. The system of claim 16, wherein the computing device also
performs the following: (f) assigning a confidence level to each of
the positions determined in step (e); and (g) for each therapeutic
agent whose position is assigned a confidence level exceeding a
predetermined threshold, recalculating a dosimetry associated with
that therapeutic agent.
18. The system of claim 16, wherein the imaging device comprises an
x-ray imaging device, and wherein the computing device further
performs the following: (h) adjusting positions of the therapeutic
agents having low confidence values assigned in step (f) in
accordance with x-ray imaging carried out through the x-ray imaging
device; (i) recalculating a dosimetry associated with all of the
therapeutic agents; and (j) assigning a confidence level to the
dosimetry recalculated in step (i).
19. The system of claim 16, wherein the intraoperative tracking
interface comprises an input device which permits the operator to
select a needle or catheter for step (b).
20. The system of claim 16, wherein the intraoperative tracking
interface displays at least one isodose plot of the bodily
organ.
21. The system of claim 16, wherein: the imaging device comprises a
real-time ultrasound imaging device; and the computing device
performs step (e) through real-time ultrasound imaging of the
bodily organ by the real-time ultrasound imaging device.
22. The system of claim 21, wherein: the intraoperative tracking
interface permits an operator to select a needle or catheter; and
the real-time ultrasound imaging is performed in a direction of the
needle or catheter selected by the operator.
23. The system of claim 22, wherein the real-time ultrasound
imaging results in a column of ultrasound images along the needle
or catheter selected by the operator.
24. The system of claim 23, wherein the computing device performs
gray-scale preprocessing on the column of ultrasound images.
25. The system of claim 24, wherein the gray-scale preprocessing
corrects a gray-scale histogram of the column of ultrasound images
to produce gray-scale corrected ultrasound images.
26. The system of claim 25, wherein the gray-scale corrected
ultrasound images are used to find locations of the therapeutic
agents along the needle or catheter selected by the operator.
27. The system of claim 26, wherein the locations of the
therapeutic agents found in the gray-scale corrected ultrasound
images are used for further correction of the positions of the
therapeutic agents.
28. The system of claim 26, wherein a number of therapeutic agents
whose locations are found in the gray-scale corrected ultrasound
images is compared to a number of therapeutic agents which are
actually along the needle or catheter selected by the operator.
29. The system of claim 28, wherein, when the numbers of
therapeutic agents are not equal, a column width of the column of
ultrasound images is changed, and the column of ultrasound images
is taken again.
30. A method for generating a 3D orthogonal compound ultrasound
image volume of a patient prostate, with high contrast and detailed
information of prostate anatomy, tumor lesions, and implanted
radioactive sources if any, comprising the steps of: (a) providing
a bi-plane ultrasound transducer having a transverse imaging
function and a longitudinal imaging function; (b) using the
transverse imaging function of the bi-plane ultrasound transducer,
moving the transducer linearly within the patient rectum in the
cranio-caudal direction to acquire a first image set of images of
transverse view of the prostate; (c) using the longitudinal imaging
function of the bi-plane transducer, rotating the transducer within
the rectum to acquire a second image set of images of
sagittal-coronal views of the prostate, which are orthogonal to the
transverse view images obtained in step (b); (d) measuring the
positions and orientations of the ultrasound transducer in
real-time while the images are taken in steps (b) and (c); (e)
reconstructing the first and second image sets to 3D image volumes
with a common voxel size; (f) registering the two 3D image volumes
reconstructed in step (e); and (g) combining the two 3D image
volumes which have been registered in step (f) by using an
intelligent compounding method.
31. The method of claim 30, wherein step (d) is performed by using
an electromagnetic sensing system.
32. The method of claim 30, wherein step (d) is performed by using
a mechanical encoder.
33. The method of claim 30, wherein step (d) is performed by using
an optical positioning system.
34. The method of claim 30, in which the transducer comprises a
switch for switching the transducer between the transverse imaging
function and the longitudinal imaging function, and wherein at
least one of steps (b) and (c) comprises pressing the switch.
35. The method of claim 30, wherein step (d) comprises registering actual
coordinates and orientations of the transducer with each image at
the instant of acquisition of said each image, whereby
mis-registration is eliminated even when the transducer movement is
very rapid.
36. The method of claim 30, wherein the longitudinal images are
reconstructed into a 3D image volume by using the
destination-oriented method for every x-y plane, and then built up
by using a look-up table repeatedly for each successive plane.
37. The method of claim 30, wherein the two 3D image volumes are
registered by using non-rigid image registration methodology of
grayscale data sets.
38. The method of claim 30, wherein the two 3D image volumes are
registered by using a maximization of mutual information (MMI) of a
joint histogram formed between the two 3D image volumes.
39. The method of claim 38, wherein a feature of high image
intensity is used to reduce the number of grayscale bins when
computing the joint histogram.
40. The method of claim 30, wherein step (g) is performed by use of
a compounding weight, the compounding weight being based on
regional image information content.
41. A method for generating a 3D orthogonal compound ultrasound
image volume of a human organ, the method comprising: (a) providing
an ultrasound transducer capable of taking a first set of images in
a first direction and a second set of images in a second direction
which is orthogonal or at an angle to the first direction; (b)
taking the first set of images of the organ; (c) taking the second
set of images of the organ; (d) registering the first set of images
with the second set of images; and (e) combining the first and
second sets of images which have been registered in step (d) to
produce the ultrasound image volume.
42. An ultrasound imaging system for generating 3D orthogonal
compound ultrasound images for the diagnosis and treatment of
prostate cancer, comprising: (a) a bi-plane ultrasound transducer
capable of transverse imaging and longitudinal imaging of the
prostate to produce a transverse image set and a longitudinal image
set; (b) a measuring system for measuring the position and
orientation of the transducer; (c) a rigid bio-compatible sheath
which covers the ultrasound transducer to isolate motion of the
transducer from the prostate; and (d) an image processing work
station for acquiring the transverse image set and the longitudinal
image set and for producing an image of the prostate from the
transverse image set and the longitudinal image set.
43. A normalized statistical tumor probability model (NSTPM) for
aiding effective biopsy for tumor detection in an organ of a
patient, the model comprising: an input layer for receiving raw
data concerning the patient; at least one hidden layer for
extracting features from the raw data; and an output layer for
weighting and combining the features extracted by the at least one
hidden layer to produce a classification of the organ for the
biopsy.
44. The normalized statistical tumor probability model (NSTPM) of
claim 43, wherein the organ is a prostate.
45. A method to re-optimize a dosimetry plan for implantation of
therapeutic agents into an organ of a patient after partial or
complete implantation of an original implant plan, the method
comprising: (a) determining locations of therapeutic agents which
have been implanted in the partial or complete implantation; (b)
evaluating a dosimetry of the locations determined in step (a); and
(c) re-optimizing the dosimetry plan in accordance with the
dosimetry evaluated in step (b).
46. The method of claim 45, wherein the locations are determined in
step (a) such that each of the locations has an uncertainty
level.
47. The method of claim 46, wherein step (a) is performed
automatically using a computer.
48. The method of claim 46, wherein step (a) is performed by taking
an ultrasound image and performing a human inspection of the
ultrasound image.
49. The method of claim 45, wherein step (c) is performed by a
computer algorithm.
50. The method of claim 45, wherein step (c) is performed by human
inspection through trial-and-error.
51. A method of programming an artificial neural network to
recognize objects in images, the method comprising: (a) providing
first and second sets of image data, the objects being present in
at least one of the first and second sets of image data; (b)
calculating numerical values of features in the first and second
sets of image data; (c) supplying the numerical values of the
features to the artificial neural network and controlling the
artificial neural network to recognize the objects; (d) identifying
correct and incorrect identifications of the objects made by the
artificial neural network in step (c); and (e) using the correct
and incorrect identifications to train the artificial neural
network through feedback.
52. The method of claim 51, wherein the correct and incorrect
identifications comprise true positives, false negatives, true
negatives and false positives.
53. The method of claim 51, wherein the objects are therapeutic
agents in an organ of a patient.
54. The method of claim 53, wherein the numerical values of the
features comprise a correlation coefficient of one of the objects
with a template.
55. The method of claim 54, wherein a plurality of templates are
provided for different orientations of the objects.
56. The method of claim 53, wherein the numerical values of the
features comprise statistical parameters of selected sub-images in
the first and second sets of image data.
57. The method of claim 56, wherein the statistical parameters
comprise frequency domain parameters.
58. The method of claim 56, wherein the statistical parameters
comprise geometry parameters.
59. The method of claim 1, wherein step (a) comprises receiving a
manual adjustment of a position of at least one of the needles or
catheters.
60. The method of claim 59, wherein the manual adjustment comprises
an expansion of a pattern of the intended placements.
61. The method of claim 59, wherein the manual adjustment comprises
a shift of a pattern of the intended placements.
62. The method of claim 59, wherein the manual adjustment comprises
an adjustment of a single one of the needles or catheters.
63. The method of claim 1, wherein step (e) is performed using a
global search method.
64. The method of claim 63, wherein the global search method is
used to provide candidate locations of the therapeutic agents to
correct a bias in a tracking or segmentation algorithm.
65. The method of claim 63, wherein the global search method is
used to provide candidate locations of the therapeutic agents, and
wherein the candidate locations of the therapeutic agents are used
with an original plan of the needles or catheters to locate actual
positions of the needles or catheters.
66. The method of claim 63, wherein the global search method is
performed on a grayscale transform of an ultrasound image.
67. A method for adjusting a treatment plan for treating a bodily
organ, the treatment plan comprising insertion of needles or
catheters carrying therapeutic agents into the bodily organ, the
method comprising: (a) providing a computing device with a user
interface; (b) entering positions of the needles or catheters into
the computing device; (c) taking at least one image of the bodily
organ and inputting the at least one image into the computing
device; and (d) using the user interface to adjust at least one of
the positions entered in step (b) in accordance with the at least
one image input in step (c).
68. The method of claim 67, wherein the at least one image
comprises an ultrasound image.
69. The method of claim 67, wherein step (d) comprises expanding a
pattern of the positions.
70. The method of claim 67, wherein step (d) comprises shifting a
pattern of the positions.
71. The method of claim 67, wherein step (d) comprises adjusting a
single one of the positions.
72. The method of claim 67, further comprising: (e) after step (d),
performing an automatic tracking of the needles or catheters in
accordance with the at least one position adjusted in step (d).
73. A method for locating a therapeutic agent which has been
inserted into a bodily organ, the method comprising: (a) providing
a computing device with a user interface; (b) taking at least one
image of the bodily organ and inputting the at least one image into
the computing device; (c) drawing a boundary of the bodily organ;
(d) performing a global search in a portion of the at least one
image contained within the boundary for at least one local
extremum; and (e) determining a location of the therapeutic agent
in accordance with the at least one local extremum.
74. The method of claim 73, wherein step (c) is performed
manually.
75. The method of claim 73, wherein step (c) is performed
automatically.
76. The method of claim 73, wherein the at least one image
comprises at least one ultrasound image.
77. The method of claim 73, wherein the at least one local extremum
is a local maximum.
78. The method of claim 73, wherein step (e) comprises: (i)
determining a candidate location of the therapeutic agent; and (ii)
using the candidate location to correct a bias in a location of the
therapeutic agent determined by another technique.
79. The method of claim 78, wherein the other technique comprises a
needle tracking and segmentation algorithm.
80. The method of claim 73, wherein step (e) comprises: (i)
determining a candidate location of the therapeutic agent; and (ii)
using the candidate location and a planned needle pattern to
determine a final needle pattern.
Description
REFERENCE TO RELATED APPLICATION
[0001] The present application is a continuation-in-part of U.S.
patent application Ser. No. 09/636,551, filed Aug. 11, 2000, which
claims the benefit of U.S. Provisional Application No. 60/200,493,
filed Apr. 28, 2000. The disclosures of both of the
above-referenced applications are hereby incorporated by reference
in their entireties into the present disclosure.
BACKGROUND OF THE INVENTION
[0002] 1. Field of the Invention
[0003] The present invention is directed to an improvement to
treatment plans using brachytherapy or the like and more
specifically to a technique for rapid and accurate identification
and quantification of needle placement departures from such a
treatment plan. The present invention is also directed to improving
brachytherapy treatment quality by realizing intraoperative
seed (or other radioactive source or therapeutic agent)
localization, dosimetry evaluation and feedback optimization. The
present invention also relates to medical ultrasound imaging of the
prostate, and more particularly to a method and apparatus for
generating 3D orthogonal compound ultrasound (OCU) images for the
diagnosis and treatment of prostate cancer.
[0004] 2. Description of Related Art
[0005] (1) Background of Intraoperative Optimization and Quality
Evaluation
[0006] In the treatment of prostate cancer, a method is often
employed to implant numerous radioactive seeds in a carefully
preplanned pattern in three dimensions within the prostate. That
procedure serves to deliver an amount of radiation dosage
concentrated around the prostate, while at the same time sparing
radiation-sensitive tissues, such as the urethra, the bladder and
the rectal wall. Customarily, 60 to 140 seeds are placed through 15
to 30 needles in the inferior (feet) to superior (head) direction.
Those needle positions are selected from a 13×13 grid of 0.5
cm evenly spaced template holes [U.S. Pat. No. 5,957,935, Sep. 28,
1999, Guide and holding bracket for a prostate implant
stabilization device], which are used to achieve precise needle
insertion. The number of those holes, which intersect with the
prostate cross section, and therefore are potentially usable, is
typically about 60. The number of mathematical combinations is
therefore greatly in excess of 10^16, each of which is a
potential treatment plan but is associated with different degrees
of cancer control and a different likelihood of treatment
complications [Y. Yu and M. C. Schell, "A genetic algorithm for the
optimization of prostate implants," Med. Phys. 23, 2085-2091
(1996)].
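The combinatorial scale stated above can be checked with a short calculation; this is a minimal order-of-magnitude sketch, assuming roughly 60 usable template holes and 15 to 30 needles as described in the text, not the patent's own computation.

```python
from math import comb

# Number of ways to choose k needle positions from the ~60 template
# holes that intersect the prostate cross section, summed over the
# customary range of 15 to 30 needles. The hole and needle counts are
# taken from the paragraph above; everything else is illustrative.
usable_holes = 60
total = sum(comb(usable_holes, k) for k in range(15, 31))

print(f"{total:.3e}")  # greatly in excess of 10^16, as stated
```

Even the single term for 30 needles, C(60, 30), already exceeds 10^17, which is why exhaustive enumeration of candidate plans is infeasible and optimization methods such as genetic algorithms are used instead.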
[0007] In current clinical practice, the design of a suitable seed
configuration that is customized to the anatomy of a patient is
achieved by a highly trained medical physicist or dosimetrist by
using trial-and-error manual iterations. The practitioner usually
starts with an initial needle configuration based on personal
experience or rules of thumb, and then adjusts the radioactive
strength per seed or the locations of certain needles or both,
until the calculated dose intensity distribution satisfies a set of
clinical considerations. That process requires between 15 minutes
and 2 hours, depending on the experience of the treatment planner
and the geometric complexity of the relationship between the
prostate and the surrounding anatomical structures.
[0008] Currently, those known treatment planning processes are
typically aided by one of several available commercial computerized
treatment planning systems. Such treatment planning systems enable
the user to manually outline the prostate in relation to a template
grid, to turn on or off any available needle positions and seed
positions within each needle, and to examine the resultant dose
distribution in two or three dimensions. Examples of such planning
systems include those offered by Multimedia Medical Systems (MMS)
of Charlottesville, Va., SSGI Prowess, of Chico, Calif., Nucletron
Plato, from Columbia, Md., Computerized Medical Systems (CMS)
Focus, of St Louis, Mo., Radiation Oncology Computer Systems
(ROCS), of Carlsbad, Calif., ADAC Laboratory's Pinnacle, of
Milpitas, Calif. and Theraplan, available from Theratronics
International Ltd. of Kanata, Ontario, Canada.
[0009] In a number of such known commercial treatment planning
systems, for example, those available from MMS and SSGI, the
initial needle configuration that otherwise would have to be turned
on by the human treatment planner is automatically set up by the
computer system. That initial setup of seed configuration is based
on simple rules of thumb, such as uniform loading, peripheral
loading or modified peripheral loading. In a number of instances,
the manufacturer claims that its planning system offers "automatic
planning," "geometric optimization," or "real-time dosimetry."
However, none of those commercial planning systems offer true
optimization in that the automatically loaded seeds are not
designed based on customized dosimetric calculations. Rather, they
are designed to fill the space of the prostate in some
predetermined manner. Therefore, such known automatic seed loading
techniques are designed to save between 15 and 30 mouse clicks by
the operator (or about 1 minute of operation). However, the user is
still required to apply his or her expert knowledge to iteratively
improve upon that initial design in order to achieve customized
optimization planning for any individual patient. Thus, there are
two significant drawbacks of the above-mentioned current
techniques: first, the complete treatment planning process is under
the manual guidance of a radiation planning expert using trial and
error techniques; and second, the adjustment of the delivered dose
is achieved by varying the radioactive strength per seed until an
isodose surface with the desired shape and size is scaled up or
down to the prescription dose, i.e., those techniques will suffer
when the activity per seed is fixed, as at the time of surgical
implantation in the operating suite.
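The second drawback above turns on a simple proportionality: with a uniform activity per seed, the dose at any point scales linearly with that activity, so rescaling the isodose surface to the prescription dose amounts to one multiplication. The sketch below illustrates this; the function name and all numbers are hypothetical illustrations, not values from the patent.

```python
# Minimal sketch of the isodose-rescaling step described above, under
# the assumption that dose scales linearly with a uniform per-seed
# activity. All names and numbers here are illustrative.

def rescale_activity(activity_per_seed: float,
                     dose_at_surface: float,
                     prescription_dose: float) -> float:
    """Return the per-seed activity that scales the isodose surface
    currently at dose_at_surface up or down to the prescription dose."""
    return activity_per_seed * prescription_dose / dose_at_surface

# Example: a 0.4 mCi loading yielding 120 Gy at the desired surface,
# rescaled to a 144 Gy prescription (a typical I-125 monotherapy level).
print(rescale_activity(0.4, 120.0, 144.0))  # 0.48
```

This is exactly the step that becomes unavailable intraoperatively: once the seeds on hand have a fixed activity, the scale factor is locked at 1, and only seed positions remain adjustable.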
[0010] Because of those two severe drawbacks, the currently
available commercial treatment planning systems are not suitable
for intraoperative treatment planning in the surgical suite, where
the patient is placed under anesthesia in volatile conditions and
where the cost per minute is very high. The variability of human
performance, experience and stress, and the general inability of
humans to manage large amounts of numerical data in 1 to 2 minutes
are also factors that deter current practitioners from performing
intraoperative treatment planning.
[0011] Because of those drawbacks and recent postimplant dosimetry
reports, there is now a resurgence of research interest as well as
commercial interest in developing methods for immediate dose
verification based on intraoperative imaging of the prostate and
implanted seeds on ultrasound during the procedure. Technology
advances now permit intraoperative optimized planning of dosimetry,
auto-segmentation of prostate anatomy, and software tracking of
seed deposition in real time, given adequate image quality. Regions
of sub-optimal dosimetry can be identified before the end of the
procedure, when supplemental seed placement is still possible.
[0012] An optimization technique for treatment planning is taught
by U.S. Pat. No. 5,391,139 to Edmundson. More specifically,
Edmundson is intended for use with a high dose rate (HDR) source
which is moved within a hollow needle implanted in a prostate or
other anatomical portion. The medical personnel using the system of
Edmundson select a needle location using empirically predetermined
placement rules. An image is taken of the prostate with the hollow
needles implanted in it, and the dwell time of the source at each
dwell position in the needle is optimized. However, placement
itself is not optimized, but must instead be determined by a human
operator.
[0013] Another optimization technique is taught by WO 00/25865 to
one of the inventors of the present invention. An implant planning
engine plans implants for radiotherapy, e.g., prostate
brachytherapy. The system optimizes intraoperative treatment
planning on a real-time basis using a synergistic formulation of a
genetic algorithm, multi-objective decision theory and a
statistical sensitivity analysis.
[0014] While the above techniques allow calculation of optimized
dwell time, placement or the like, they do not provide for
detection and correction of errors in needle or seed placement. The
optimized pre-planning does not mean that the best plan can be
realized in clinical practice. In fact, it is very difficult to
insert the implantation needles manually into their precise
three-dimensional positions. That is, the actual implantation
results do not match the plans. Furthermore, seeds are displaced from their
planned locations by the elasticity of surrounding tissue as soon
as they are dropped from the needle, thus introducing further
errors. In those situations, the realized treatment dosimetry will
be lower than the planned dosimetry so that the prostate will be
underdosed and the patient will not receive the most effective
therapy. To solve that problem, intraoperative detection and
compensation of errors in needle and seed placements must be
performed automatically, without halting the operation, both during
the procedure and immediately after the completion of the
implantation plan.
[0015] (2) Background of 3D Orthogonal Compound Ultrasound
[0016] Although transrectal ultrasound (TRUS) imaging is now widely
used in the diagnosis and treatment of prostate cancer as an
image guidance tool, the current imaging modality is a single scan
using either a transverse-imaging or a longitudinal-imaging
ultrasound transducer, and thus the image quality is inadequate for
clearly visualizing tumors, prostate anatomy, and implanted
radioactive sources due to the noise, speckles, and shadows
existing in the TRUS images.
[0017] The effective accuracy of intraoperative optimization rests
on the inherent information content of the TRUS image set. Research
to date has carefully quantified the percentile detection rate of
seeds in the prostate as well as prostate segmentation during
brachytherapy, showing that the image content from standard axial
or sagittal scans alone cannot consistently achieve 100% accuracy
for dosimetry purposes.
[0018] The most common technique used in the diagnosis of prostate
cancer is core biopsy under transrectal ultrasound (TRUS)
image-guidance and histopathological analysis of the sampled
tissues. Although TRUS guided core biopsy is routinely used
clinically, the existing popular systematic sextant biopsy protocol
is qualitative in nature and results in a positive predictive value
of only 32% so that a significant number of patients with prostate
cancer are not being diagnosed and treated. Currently, standard
transrectal ultrasound, computed tomography, and magnetic resonance
imaging with endorectal coils have failed to accurately determine
the volume and stage of prostate tumors pre-operatively, and
prostate cancer grade determined on biopsy commonly under-grades
the actual tumor discovered in the final specimen. So there is an
increasing need to improve the accuracy of assessing the volume,
location, and stage of prostate tumors prior to treatment,
especially since brachytherapy has gained popularity. Accurate
knowledge of such tumor characteristics through an enhanced
non-invasive imaging modality would result in more precise
candidate selection and execution of that treatment. Other
innovative treatment options would also benefit. However, most
innovative imaging systems use methodologies that are not widely
available to clinicians in day-to-day clinical practice. Those
include magnetic resonance spectroscopy/angiography or PET scans.
Such imaging studies, while initially promising, are not a
significant improvement over existing methods and would not be
expected to be widely available to practicing physicians in the
near future. By contrast, virtually all urologists and radiologists
are familiar with transrectal prostate ultrasonography and biopsy
or implantation of radioactive implants. However, conventional TRUS
imaging has two drawbacks: (1) an inability to ensure
biopsy/treatment in a similar location to the original site and (2)
images are two-dimensional and thus cannot be correlated reliably
to histologic sections.
[0019] Siemens Medical Systems, Inc. owns a patent entitled
"3-Dimensional compound ultrasound field of view" (U.S. Pat. No.
5,655,535). However, their compound field-of-view ultrasound image
is derived from correlated frames of ultrasound image data, while
ours is uncorrelated. They used frames of sensed echo signals to
detect probe motion without the use of a dedicated position sensor
or motion sensor, which also differs from our sensor-based position
and orientation detection method. Their purpose in compounding is
to extend the limited field of view of the ultrasound transducer,
not to improve ultrasound image quality.
SUMMARY AND OBJECTIVES OF THE PRESENT INVENTION
[0020] It will be apparent from the above that a need exists in the
art to detect and correct errors in implementation of a
brachytherapy treatment plan.
[0021] It is therefore a primary object of the present invention to
permit rapid and accurate identification and quantification of
needle and seed placement departures from a treatment plan
generated prior to a brachytherapy implant based on real-time
ultrasound.
[0022] It is another primary object of the invention to allow a 3D
orthogonal compound ultrasound (OCU) imaging system to obtain the
real-time images of prostate anatomy and radioactive sources with
higher signal-to-noise ratio than current ultrasound
technologies.
[0023] It is another object of the invention to allow the 3D OCU
imaging system to be used to improve the precision and accuracy of
radioactive source localization for intraoperative dosimetry
evaluation and treatment feedback.
[0024] It is another object of the invention to allow the 3D OCU
imaging system to be used for image guidance in physical biopsy so
that tumor lesions and prostate anatomy can be more easily
visualized by physicians for better diagnosis.
[0025] It is another object of the invention to allow real-time
correction to the brachytherapy dosimetry and iterative
compensation of loss of dose coverage due to misplacement of the
needles/catheters and seeds.
[0026] It is still another object of the invention to permit such
identification, quantification, and correction without the need
for Computed Tomography (CT) or Magnetic Resonance Imaging (MRI)
during the interval between needle/catheter placement in the target
organ and final deposition of radioactive sources for irradiation
of the target organ.
[0027] (1) Dynamic Dosimetry Evaluation and Treatment Feedback
[0028] To achieve the above and other objects, a first embodiment
of the present invention is directed to a technique for identifying
and quantifying needle and seed displacement departures from a
placement plan for the placement of radioactive sources in a
prostate or other internal organ for brachytherapy or the like. The
placement plan is made available to an intraoperative tracking
interface, which also shows a live ultrasound image of the needle
or catheter placement in the prostate. The needle or catheter
placement in live ultrasound images can be located manually by
clicking a mouse or semi-automatically by image segmentation
algorithms. The manual or semi-automatic segmentation methods
depend on the contrast between the image background and the
brightness of the needle tips. The difference in the x-y plane
between the planned
and actual locations of the needle or catheter is calculated, and
from that difference, the error in position of each seed is
calculated from the implantation pattern of that needle. The seeds
are moved, or the operator changes the number of seeds, and the
dose is recalculated. After the seeds are implanted in the tissue,
a small three-dimensional column of ultrasound images is taken
along the track of needle, and each seed located in the column of
images is automatically recognized and given a confidence level.
The confidence level shows the accuracy of the seed recognition in
view of the large noise interference in the transrectal ultrasound
images. The dosimetry is recalculated based on the recognized
seeds, whose confidence level exceeds a threshold preset by the
operator. Periodically throughout the seed placement and at the end
of seed placement, fluoroscopic x-rays are taken, and the seed
coordinates found in transrectal ultrasound images are matched to
the seed coordinates found in fluoroscopic x-ray image. Seed
locations with low confidence levels are adjusted based on the
x-ray locations because the x-ray images can provide accurate
planar projection of each seed, and the dosimetry is recalculated
based on those revised seed positions.
[0029] In a preferred embodiment, the technique is carried out
through the following steps.
[0030] 1. The needle/catheter placement plan is made available to
an intraoperative tracking interface. That interface contains an
electronic worksheet of needle and seed coordinates, a live
ultrasound image window into which real-time video image of
needle/catheter placement is fed, and a series of isodose dosimetry
panels reflecting the current state of dose coverage. Each of the
needles/catheters can be activated by highlighting the
corresponding row in the coordinates worksheet, or by highlighting
the corresponding grid location graphically.
[0031] 2. Following insertion of each needle/catheter, a
hyperechoic (i.e., bright) spot appears on the live ultrasound
image. The location of the hyperechoic spot is identified manually
by the operator or by a semi-automatic thresholding method. The difference
in the x-y plane between the planned location and the actual
location of the needle/catheter is calculated to give errors
.DELTA.x and .DELTA.y. The errors .DELTA.x and .DELTA.y are then
reflected on the grid location and worksheet. The errors of each
seed, .DELTA.x' and .DELTA.y', are calculated based on straight
line interpolation at the planned z location of the seed; the
straight line is constructed by joining two known points: (a) the
actual needle location shown on ultrasound at the known z plane,
(b) the template coordinate outside the patient body, through which
the needle is inserted under precision template guidance (therefore
at that location .DELTA.x and .DELTA.y shall be assumed to equal
zero). The dose is then recalculated by moving the seeds along the
activated needle/catheter in x and y by amounts .DELTA.x' and
.DELTA.y', which may be the same or different for each and every
seed. The dosimetry updated by such feedback of seed placement
errors is redisplayed on the series of isodose panels.
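The straight-line interpolation above can be sketched as follows. This is a minimal illustration rather than the invention's implementation; the function name, argument order, and the coordinate convention (z measured along the needle, with the template hole at z_template assumed to have zero offset) are assumptions for the sketch.

```python
def seed_offsets(dx, dy, z_needle, z_template, seed_zs):
    """Interpolate per-seed offsets (dx', dy') along the straight line
    joining the template hole (offset assumed zero) to the needle
    position observed on ultrasound at plane z_needle with offsets
    (dx, dy)."""
    offsets = []
    for z in seed_zs:
        # Fraction of the way from the template plane to the observed plane.
        t = (z - z_template) / (z_needle - z_template)
        offsets.append((dx * t, dy * t))
    return offsets
```

With a needle offset of (2.0, -1.0) observed at z = 10 and the template at z = 0, a seed planned at z = 5 would be halfway along the line and receive half the offset.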
[0032] In addition, the operator is permitted to change the number
of seeds deposited by the needle/catheter in question. In that
case, the operator is required to enter the new seed locations
along the needle/catheter, which overrides the original treatment
plan. Seed placement errors in such a case are tracked identically
to the procedure described above.
[0033] 3. A small 3D column of ultrasound images is acquired along
the needle track as constructed above. That column can be
perpendicular to the x-y plane, or may be inclined at an angle
.alpha. from the x plane and an angle .beta. from the y plane. The
exact number of seeds as deposited is identified
using image processing and recognition algorithms in that column of
3D ultrasound region of interest. Each seed identified in that
manner is assigned a confidence level, which depicts the
likelihood/uncertainty of seed localization. The size of that
column is initially set small; if the total number of seeds found
in that manner is not equal to the number of seeds deposited by the
given needle/catheter due to the misplacement of seeds, the width
of the column is adaptively adjusted to find the lost seeds (e.g.,
the width is increased to find additional seeds).
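The adaptive column-width search can be sketched as below. Here `locate_seeds` stands in for the image-processing detector described in step 3; the function signature and the initial, maximum, and step widths are illustrative assumptions.

```python
def find_seeds_adaptive(locate_seeds, expected, width=4.0,
                        max_width=12.0, step=2.0):
    """Search a 3D column of increasing width around the needle track
    until the expected number of deposited seeds is found or the
    maximum width is reached.

    locate_seeds(width) -> list of seed detections within the column.
    """
    seeds = locate_seeds(width)
    while len(seeds) < expected and width < max_width:
        width += step                # widen the column to capture misplaced seeds
        seeds = locate_seeds(width)
    return seeds
```

A detector stub that finds more seeds as the column widens shows the loop terminating once the expected count is reached.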
[0034] Whereas the previous step only quantifies the errors
.DELTA.x' and .DELTA.y' for each seed, the ultrasound step
quantifies .DELTA.z' for each seed and at the same time further
corrects .DELTA.x' and .DELTA.y'. If the confidence level of a
given seed's localization exceeds a threshold value (to be set by
the operator), the dosimetry is re-calculated yet again using the
updated seed location and displayed in the same isodose panels. The
isodose calculated is assigned a confidence level, which is a
numerical composite of the individual confidence levels of the
seeds and the dosimetric impact of positional uncertainties at each
seed location (e.g., in high dose region, positional uncertainty
has low impact).
[0035] 4. Periodically throughout the seed placement procedure and
at the end of seed placement, fluoroscopic x-ray images may be
taken in the anterior-posterior direction and at up to .+-.45
degrees on either side of the anterior-posterior direction. The
seed coordinates as determined above with ultrasound images are
two-dimensionally projected in the same orientations. A best match
to the x-ray seed projections is made based on multiple point
matching using those seed identifications with the highest
confidence levels. Subsequent to such matching, the seed locations
with low confidence levels are adjusted based on the x-ray
locations. As a result, the confidence levels of those latter seeds
are increased by an amount reflective of the best match quality.
The dosimetry is recalculated. The confidence level of the
dosimetry is updated using updated confidence levels of the
seeds.
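The projection and low-confidence adjustment steps might look like the following sketch. The rotation-about-z projection model and the nearest-neighbor snapping rule are simplifying assumptions; the invention's multiple-point matching using the highest-confidence seeds is more elaborate.

```python
import math

def project(seed, angle_deg=0.0):
    """Project a 3D seed position (x, y, z) onto a fluoroscopic plane
    obtained by rotating the anterior-posterior view about the
    cranio-caudal (z) axis by angle_deg."""
    x, y, z = seed
    a = math.radians(angle_deg)
    return (x * math.cos(a) + y * math.sin(a), z)

def adjust_low_confidence(ultrasound_pts, confidences, xray_pts, threshold):
    """Keep high-confidence seed projections; snap low-confidence ones
    to the nearest x-ray seed projection (a simplified matching rule)."""
    adjusted = []
    for pt, conf in zip(ultrasound_pts, confidences):
        if conf >= threshold:
            adjusted.append(pt)
        else:
            nearest = min(xray_pts,
                          key=lambda q: (q[0] - pt[0]) ** 2 + (q[1] - pt[1]) ** 2)
            adjusted.append(nearest)
    return adjusted
```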
[0036] (2) Structure of 3D Orthogonal Compound Ultrasound
[0037] Currently, 3D TRUS images are usually created by collecting
a series of 2D image slices at evenly spaced intervals in real time
during a transducer scan, in combination with detection of the
position and/or orientation of the transducer. One approach is
mounting the transducer in a computer-controlled motor-driven
assembly. Another approach is attaching a sensor to the transducer
and measuring the relative position and orientation from the sensor
to a transmitter by using non-contact sensing techniques.
Currently, the technique of using an electromagnetic (EM) sensing
system for that purpose has already been successfully implemented
by a number of groups including the present inventors.
[0038] Two scanning methods for 3D TRUS imaging are linear scanning
and rotational scanning. Linear scanning uses a transducer with a
curvilinear transverse array that is pulled back manually or
mechanically in the cranio-caudal direction to acquire images of
transverse (axial) view of the prostate, whereas the rotational
scanning technique rotates a transducer with a side-firing array
manually or mechanically to acquire the longitudinal images.
[0039] The linear scanning technique is now widely used because
therapy guidance requires that the transverse positions of the
needles are clearly visible. However, that scanning method has two
disadvantages. One is that the edges of the base and apex of the
prostate are nearly invisible on the transverse images because of
the complexity of the human anatomy in those two areas; that leads
to inaccuracies in volume estimates and dosimetric planning.
The other is that the implanted seeds cannot be fully localized
because some seeds are invisible on the transverse image series due
to the impact of the speckle noise, shadowing by other seeds, and
calcification, bleeding, and tissue heterogeneity of the gland. On
the other hand, the apex and base of the prostate can be more
easily determined on longitudinal images from the rotational scan.
Though seed visualization still suffers from the various artifacts
listed above in rotational scans, the direction of the beam
relative to the seeds as well as shadowing artifacts are
uncorrelated with that of the transverse scan, i.e., a given seed
may be invisible in one look direction but clearly visible in the
other look direction. That is the basis of the rationale for
orthogonal compounding.
[0040] Ultrasound transducers that can electronically switch from
longitudinal to transverse imaging (by pressing a button) are now
commercially available and widely used in prostate imaging. The
basic equipment investment for OCU is already in place for most
clinicians; in addition, both scan modes are routinely used by
practitioners to visually assess needle placement. Adding
orthogonal compounding therefore does not introduce complexity to
the intervention, but may result in greatly improved imaging.
[0041] The present invention, the dual-scan orthogonal compound
ultrasound, is a novel imaging modality aimed to overcome those
drawbacks of the conventional TRUS imaging, and become the next
generation ultrasound imaging method for the treatment
(brachytherapy/thermal/cryo-t- herapies), diagnosis (biopsy), and
screening of prostate cancer. The present invention upgrades the
conventional TRUS imaging from 2D to 3D, and compounds two
orthogonal views of the prostate by dual-scan imaging so that a
clearer visualization of the prostate anatomy and tumors can be
obtained. It has been found that 3D adaptive compounding of
transverse and longitudinal scan sets can improve both
signal-to-noise and contrast-to-noise ratios and generate phantom
images with clearly visible seeds. Spatial compounding, the
combination of individual ultrasound images or scans obtained from
different viewpoints, has been studied since the 1950s (Holmes et
al. 1954). In gray-scale imaging, compounding provides additional
benefits such as reduction of speckle noise and shadows (Kossoff et
al. 1976). Since array probes became available, 2D compounding by
electronic steering of the beam has been studied (Berson et al.
1981; Jespersen et al. 1998; Shattuck and von Ramm 1982). The
present invention also includes a neural network
solution for detecting tumors directly from the 3D orthogonal
compound ultrasound images.
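The benefit of compounding uncorrelated looks can be illustrated with a toy NumPy experiment. This is not the invention's adaptive compounding algorithm: the volumes are synthetic, assumed already co-registered, and combined with a single global weight.

```python
import numpy as np

rng = np.random.default_rng(0)
anatomy = np.ones((32, 32, 32))      # idealized noise-free volume

# Two uncorrelated speckle realizations of the same anatomy, standing in
# for the transverse (linear-scan) and longitudinal (rotational-scan) looks.
transverse = anatomy + rng.normal(0.0, 0.5, anatomy.shape)
longitudinal = anatomy + rng.normal(0.0, 0.5, anatomy.shape)

# Global equal weighting; the invention's intelligent compounding would
# use regional weights instead.
compounded = 0.5 * (transverse + longitudinal)

# Averaging two uncorrelated looks reduces the noise standard deviation
# by roughly sqrt(2), improving the signal-to-noise ratio.
print(round(transverse.std(), 3), round(compounded.std(), 3))
```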
[0042] ATL Ultrasound Company now offers a commercial ultrasound
scanner with real-time 2D compound imaging using electronic beam
steering ("SonoCT.TM."). Although ATL has been issued a series of
patents related to ultrasonic diagnostic imaging systems with
spatial compounding (U.S. Pat. Nos. 6,117,081; 6,126,598;
6,135,956; 6,210,328), there are two main differences from the
present invention. The first is that their compounding target
is a two-dimensional image, while the compounding target of the
present invention is a three dimensional image volume. The second
one is that they use a series of partially overlapping component
image frames to compound, while the present invention uses
orthogonal image volumes as the compounding components that are
fully independent. Since 3D compounding can combine more
independent views than 2D compounding and offers the possibility of
out-of-plane refraction correction, the orthogonal 3D compound can
utilize more benefits of spatial compounding.
[0043] (3) Statistical Analysis of Texture Features
[0044] Many kinds of texture features can be used to identify
radioactive seeds in TRUS images, such as first order statistical
parameters (mean, maximum, minimum, summation, etc.), second order
statistical parameters (standard deviation, covariance, correlation
coefficient, etc.), higher order statistical parameters (skewness,
kurtosis, etc.), frequency domain parameters (high pass filtering,
low pass filtering, band pass filtering, etc.), and geometry
parameters (area, Euler number, etc.).
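A few of the listed first- and higher-order statistical features can be computed as in the following NumPy sketch; the exact feature vector and definitions selected by the Logit/Probit, discriminant, or neural network analysis are not specified here and are assumptions.

```python
import numpy as np

def texture_features(roi):
    """First- and higher-order statistical features of a region of
    interest (2D array of grayscale values)."""
    roi = np.asarray(roi, dtype=float)
    mean, std = roi.mean(), roi.std()
    return {
        "mean": mean,
        "max": roi.max(),
        "min": roi.min(),
        "sum": roi.sum(),
        "std": std,
        # Standardized third and fourth moments (small epsilon guards
        # against division by zero in flat regions).
        "skewness": ((roi - mean) ** 3).mean() / (std ** 3 + 1e-12),
        "kurtosis": ((roi - mean) ** 4).mean() / (std ** 4 + 1e-12),
    }
```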
[0045] In order to find effective features and their optimized
combination for the purpose of identifying seeds, Logit and Probit
models, discriminant analysis, and neural network classification
methods are applied to two test groups. One group represents the
seeds which are identified manually from the TRUS images; the other
group represents non-seeds, including typical background noise,
calcifications, bleeding, air gaps, and shadows. The
selection of the two groups is confirmed by the day 0 CT (i.e., CT
images taken within the same day of the actual implant): the
positions of the radioactive seeds in the CT images are identified
automatically or manually, and affine-transformed to match the TRUS
image series. Some clearly visible seeds in the TRUS image series
can be visually identified and used as markers to correct the
registration. The seed positions identified from CT are
affine-transformed until they achieve a best match with those
identified from TRUS.
BRIEF DESCRIPTION OF THE DRAWINGS
[0046] Preferred embodiments of the present invention will be set
forth in detail with reference to the drawings, in which:
[0047] FIG. 1 shows a schematic diagram of a system for carrying
out any of the preferred embodiments of the present invention;
[0048] FIGS. 2A-2C show a flowchart of a process according to a
first preferred embodiment;
[0049] FIG. 3 shows a user interface used in the first preferred
embodiment;
[0050] FIG. 4 shows the user interface of FIG. 3 after the
calculation of a needle offset and also identifies certain process
steps of FIG. 2A with certain components of the user interface;
[0051] FIG. 5 shows a flow chart of an image processing technique
used to identify seeds in the ultrasound images;
[0052] FIGS. 6A and 6B show an image with a desired grayscale
distribution and a histogram of the desired grayscale distribution,
respectively;
[0053] FIGS. 7A and 7B show an image with a typical grayscale
distribution and a histogram of the typical grayscale distribution,
respectively;
[0054] FIGS. 8A and 8B show the image of FIG. 7A after
preprocessing and a histogram of the resulting grayscale
distribution, respectively;
[0055] FIGS. 9A and 9B show a sequence of images taken in a column
and an identification of those images having hyperechoic spots,
respectively;
[0056] FIG. 10 shows a plot of a threshold used to locate the
hyperechoic spots;
[0057] FIGS. 11A and 11B show ideal and typical plots,
respectively, of brightness along a needle path;
[0058] FIGS. 12A-12C show three types of peaks which may occur in
image data;
[0059] FIGS. 13A-13D show the locations of seeds in image data;
[0060] FIG. 14A shows a known technique for 2D spatial
compounding;
[0061] FIG. 14B shows a schematic of 3D orthogonal compounding
according to another preferred embodiment of the present
invention;
[0062] FIG. 15 is a diagram illustrating an imaging system with a
rectal shell used in the preferred embodiment of FIG. 14B;
[0063] FIGS. 16A-16D show four steps in the process of 3D image
acquisition and reconstruction according to the preferred
embodiment of FIG. 14B;
[0064] FIG. 17 is a schematic diagram illustrating the geometry
used to describe the reconstruction algorithm used in FIGS.
16A-16D;
[0065] FIG. 18 is a flow chart of the algorithm of 3D
reconstruction of longitudinal images;
[0066] FIG. 19 is a schematic diagram illustrating the construction
of the 2D joint histogram;
[0067] FIG. 20 is a flow chart of the registration algorithm;
[0068] FIGS. 21A-21C are three images showing one of the sagittal
images from the linear scan, from the rotational scan, and the
compound image, respectively;
[0069] FIGS. 22A and 22B are illustrations of the intelligent
compounding algorithm using global weighting and regional
weighting, respectively;
[0070] FIG. 23 is a schematic diagram of the neural network
architecture used in the normalized statistical tumor probability
model (NSTPM);
[0071] FIG. 24 is a flow chart of the use of NSTPM during biopsy
under the guidance of OCU imaging;
[0072] FIGS. 25A-25C show seed templates used to identify seeds in
TRUS images according to another preferred embodiment of the
present invention;
[0073] FIG. 26 shows a flow chart of a process for finding
optimized weight coefficients for identification of the seeds in
the TRUS images;
[0074] FIGS. 27A-27D show scans of the prostate in the manual
adjustment of a needle pattern;
[0075] FIG. 28A shows an original ultrasound image used for a
global search in another preferred embodiment of the present
invention;
[0076] FIG. 28B shows a grayscale transformed image based on the
image of FIG. 28A; and
[0077] FIG. 28C shows a graph of the measurement of the gray-scale
summation of each point in the image of FIG. 28B.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0078] Various preferred embodiments of the present invention will
be set forth in detail with reference to the drawings, in which
like reference numerals refer to like elements throughout.
[0079] FIG. 1 shows a system 100 on which any of the preferred
embodiments can be implemented. The system 100 includes a computer
102, which can be the same as the computer used in either of the
above-cited Edmundson and Yu references or any other suitable
device. The computer uses a display 104 and a user input device or
devices such as a keyboard 106 and a mouse 108. Other input devices
can be used; for example, the mouse 108 can be replaced by a light
pen for use with the display 104. The computer also receives input
from an ultrasound device 110 and a fluoroscopic x-ray device 112
by image digitization.
[0080] The system also includes components for administering the
brachytherapy to the patient. Those components include needles 114
having radioactive seeds 116 (or other radioactive objects, or,
even more generally, other therapeutic agents) spaced therealong in
accordance with a treatment plan. A template 118 having a grid of
holes 120 is used to position the needles 114 for insertion into
the patient's prostate. The specifics of the needles 114, the seeds
116 and the template 118 are known from the prior art cited above.
The needles 114 can be replaced by hollow needles or catheters in
accordance with the treatment plan to be used.
[0081] (1) Dynamic Dosimetry Feedback
[0082] The use of the system 100 in implementing the first
preferred embodiment will now be explained with reference to the
flow chart of FIGS. 2A-2C. In step 202, a treatment plan is
developed. Such a treatment plan can be the one developed in the
above-cited Yu reference and can be developed either on the
computer 102 or on a different device. In step 204, the treatment
plan is made available to an intraoperative tracking interface
implemented on the computer 102. If the treatment plan is not
developed on the computer 102, an appropriate communication medium
can be provided to supply the treatment plan to the computer
102.
[0083] The intraoperative tracking interface is displayed to the
user on the display 104. As shown in FIG. 3, the intraoperative
tracking interface 300 includes the following components. An
electronic worksheet 302 shows needle and seed coordinates, based
on the grid of holes 120 in the template 118, and identifies needle
locations with dots 304. A live ultrasound image window 306 shows a
real-time image of a section of the prostate obtained from the
ultrasound device 110 and allows a real-time view of needle
placement in the prostate. From the placement of the seeds, the
dosimetry is calculated, and a series of dosimetry panels 308 are
shown, each showing the dosimetry in a respective slice of the
prostate from the base to the apex. The dosimetry in the panels
308 is shown by isodose lines 310. The electronic worksheet 302
further includes a spreadsheet 312 in which each row indicates the
information of one of the needles. The spreadsheet 312 includes a
column 314 indicating a needle by a number, a column 316
identifying the hole 120 in the template 118 into which that needle
is inserted by its coordinates (letter and number), a column 318
indicating an offset, a column 320 indicating the number of seeds
on the needle, a column 322 indicating a .DELTA.x offset of the
needle from its planned position, a column 324 indicating a
.DELTA.y offset of the needle from its planned position, a column
326 indicating the number of currently selected seeds whose offsets
have been calculated and a column 328 indicating a total number of
seeds whose offsets have been calculated. A needle position 304
which the operator has selected is shown on the interface 300 as
flashing, as is the corresponding row 330 in the spreadsheet
312.
[0084] Following the insertion of each needle or catheter in step
206, the live ultrasound image 306 of the interface 300 displays a
bright (hyperechoic) spot 332 in step 208. In step 210, the
operator manually identifies the spot 332, e.g., by clicking on its
center with the mouse 108. In step 212, the difference in the x-y
plane between the planned location and the actual location of the
needle or catheter is calculated to give errors .DELTA.x and
.DELTA.y, which are shown both on the grid 302 and on the
highlighted row 330 of the spreadsheet. The positional errors in
the x-y plane of each seed, .DELTA.x' and .DELTA.y', are calculated
in step 214 based on straight line interpolation at the planned z
location of the seed. The straight line used in the interpolation
is constructed by joining two known points: (a) the actual needle
location shown on ultrasound at the known z plane and (b) the
template coordinate outside the patient body through which the
needle is inserted under precision template guidance. At the
template 118, .DELTA.x and .DELTA.y are assumed to equal zero. The
dose is then recalculated in step 216 by moving the seeds along the
activated needle or catheter in the x and y directions by those
amounts .DELTA.x' and .DELTA.y', which may be the same or different
for every seed. The dosimetry updated by such feedback of seed
placement errors is redisplayed in step 218 on the series of
isodose panels 308. FIG. 4 shows the updated interface 300 and also
identifies some of the above-mentioned method steps in association
with the corresponding elements of the interface 300.
[0085] In addition, the operator is permitted to change the number
of seeds deposited by the needle or catheter in question in step
220. In that case, the operator is required to enter the new seed
locations along the needle or catheter, which overrides the
original treatment plan in step 222. Seed placement errors in such
a case are tracked identically to the procedure described
above.
[0086] In step 224, a small 3D column of ultrasound images is
acquired along the straight line constructed in step 214. That
column can be perpendicular to the x-y plane or may be at an
arbitrary angle from the x and/or the y planes. The exact number of
seeds as deposited is identified in step 226, using image
processing and recognition algorithms to be described below, in the
3D column of ultrasound images. Each seed identified in the
ultrasound images is assigned a confidence level in step 228, which
indicates the likelihood or uncertainty of seed localization.
[0087] The size of the 3D column is initially set small. If it is
determined in step 230 that the total number of seeds found in step
226 is not equal to the number of seeds deposited by the given
needle or catheter, the width of the column is adaptively adjusted
in step 232; for instance, the width is increased to find
additional seeds.
[0088] Thus, .DELTA.z' is quantified for each seed, and at the same
time, .DELTA.x' and .DELTA.y' are further corrected according to
step 214. If it is determined in step 234 that the confidence level
of a given seed's localization exceeds a threshold value (set by
the operator), the dosimetry is re-calculated yet again in step 236
using the updated seed location and displayed in the same isodose
panels. The isodose calculated is assigned a confidence level in
step 238, which is a numerical composite of the individual
confidence levels of the seeds and the dosimetric impact of
positional uncertainties at each seed location. For example, in a
high dose region, positional uncertainty has low impact.
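The text leaves the composite formula unspecified; one plausible sketch is an impact-weighted mean of the seed confidence levels, where the impact weights encode the dosimetric sensitivity at each seed location. This particular weighting is an illustrative assumption, not the invention's formula.

```python
def isodose_confidence(seed_confidences, impacts):
    """Composite confidence of the calculated isodose: an impact-weighted
    mean of per-seed confidence levels. Seeds in high-dose regions get
    low impact weights because positional uncertainty matters less there."""
    total = sum(impacts)
    if total == 0.0:
        return 1.0  # no seed's positional uncertainty affects the dose
    return sum(c * w for c, w in zip(seed_confidences, impacts)) / total
```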
[0089] Periodically throughout the seed placement procedure and at
the end of seed placement, fluoroscopic x-ray images may be taken in
step 240 in the anterior-posterior direction and at up to .+-.45
degrees on either side of the anterior-posterior direction. The
seed coordinates as determined above are projected in the same
orientations in step 242. A best match to the x-ray seed
projections is made in step 244 based on multiple point matching
using those seed identifications with the highest confidence
levels. Subsequent to such matching, the seed locations with low
confidence levels are adjusted in step 246 based on the x-ray
locations. As a result, the confidence levels of those latter seeds
are increased by an amount reflective of the best match quality. In
step 248, the dosimetry is recalculated, and the confidence level
of the dosimetry is updated using the updated confidence levels of
the seeds.
[0090] The image processing algorithms used in carrying out step
226 will now be explained. As shown in the flow chart of FIG. 5,
there are three basic steps. In step 502, which is a preprocessing
step, the image brightness and contrast are adjusted to make the
hyperechoic spots more distinct. In step 504, the seed pathway is
tracked for further correcting the offsets .DELTA.x' and .DELTA.y'
of the implanted seeds. In step 506, the seeds are identified for
correcting .DELTA.z' for each seed along the tracking pathway and
fine-tuning of .DELTA.x and .DELTA.y for each seed.
[0091] Step 502 involves executing a grayscale histogram
transformation to each image in the ultrasound series from apex to
base and is thus a pre-processing step. The purpose of step 502 is
to adjust the brightness and contrast of the images so that the
hyper-echoic spots will be more distinct in the transformed images.
According to experience acquired from many actual OR cases, an
image suitable for seed recognition processing has a grayscale
histogram similar to that shown in FIGS. 6A and 6B, whereas in most
cases, the images as taken have grayscale histograms similar to
that shown in FIGS. 7A and 7B.
[0092] As shown in FIG. 6B, it is preferred that the background be
very dark while the hyperechoic spots be very distinct. For that
preferred case, 50% of the pixels have grayscale levels below 30,
representing the background and dark tissues; 90% of the pixels
have grayscale levels below 60, with grayscale levels between 30
and 60 most likely representing the brighter tissues of the gland;
and 95% of the pixels have grayscale levels below 80, with levels
between 60 and 80 most likely representing some much brighter
tissues and some weaker air gaps. The pixels with the highest
grayscale levels (from 80 to 255) are the hyper-echoic spots of
seeds and some stronger air gaps.
[0093] Here, the images are assumed to have an eight-bit grayscale
depth, namely, with grayscale values from zero to 255 inclusive. Of
course, other grayscale depths can be used instead.
[0094] In the images as taken, the 50%, 90% and 95% grayscale
levels are higher than the preferred ones set forth above. In the
example of FIGS. 7A and 7B, they are 60, 110 and 135,
respectively.
[0095] To transform an image as taken into an image as preferred,
the following grayscale transformation scheme can be used:
Original image          Transformed image
Below median (0%-50%)   1
50%-90%                 1-50
90%-95%                 51-75
95%-100%                76-255
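A minimal sketch of such a percentile-based transformation (a hypothetical helper, not the patented implementation; the breakpoints are the preferred levels given above, applied as a piecewise-linear map):

```python
import numpy as np

def percentile_transform(image):
    """Map grayscale values so that the 50th/90th/95th percentiles of
    the input land at output levels 1/50/75, stretching the brightest
    5% of pixels (seeds and strong air gaps) over levels 76-255."""
    p50, p90, p95 = np.percentile(image, [50, 90, 95])
    # Breakpoints of the piecewise-linear map (input level -> output level).
    xp = [0, p50, p90, p95, 255]
    fp = [1, 1, 50, 75, 255]
    return np.interp(image.astype(float), xp, fp).astype(np.uint8)
```

Everything below the median collapses to level 1 (background), while the top tail, where the hyper-echoic spots live, is stretched over most of the output range.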
[0096] When the image of FIGS. 7A and 7B is subjected to such a
transformation, the result is as shown in FIGS. 8A and 8B. A
comparison of FIGS. 7A and 7B with FIGS. 8A and 8B shows that the
hyper-echoic spots in the transformed image of FIGS. 8A and 8B are more
distinct than they are in the original image of FIGS. 7A and 7B.
Thus, it is easier for the subsequent algorithms to track and
identify the seeds. More importantly, it is possible for the
algorithms to use unified parameters to process cases with
different brightness and contrast settings. At the same time, that
gray-scale transformation does not lose the important information
in the ultrasound images.
[0097] Step 504, automatic tracking of the seeds along a same
needle, is used to correct .DELTA.z' (displacement from the planned
location) of the implanted seeds. Step 504 involves tracking the
pathway of the seeds, not just the seeds themselves. In other
words, the air gaps are also included, and step 504 does not
discriminate the seeds from the air gaps. Step 504 uses the
grayscale information of a region of interest (ROI), such as the maximum
value of a hyper-echoic spot, the mean and the standard deviation
of the ROI, the contrast defined by the maximum value divided by
the mean, and first/second order texture parameters, etc.
[0098] In step 504, a center and the size of an ROI are preset.
That operation can be manually done by the operator by clicking the
mouse on the hyper-echoic spots at any z-position or automatically
done by using the information from the treatment plan and above
needle tracking method. Thresholding and analysis are then used to
determine whether there is a hyper-echoic spot in the ROI. If there
is, the ROI center of the next image in the series is switched to
the current center position. If not, the previous center is
kept.
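The center-following logic of step 504 might be sketched as follows (hypothetical helper names; the hyper-echoic spot test is reduced to a bare intensity threshold for brevity):

```python
import numpy as np

def track_pathway(slices, center, roi_half=10, threshold=80):
    """Follow a seed pathway through an apex-to-base image series.

    For each slice, test an ROI around the current center for a
    hyper-echoic spot; if one is found, recenter the ROI on the spot's
    brightest pixel for the next slice, otherwise keep the old center.
    Returns per-slice centers and a per-slice hit flag.
    """
    cy, cx = center
    centers, hits = [], []
    for img in slices:
        roi = img[cy - roi_half:cy + roi_half, cx - roi_half:cx + roi_half]
        hit = roi.max() >= threshold  # simplified spot test
        if hit:
            dy, dx = np.unravel_index(np.argmax(roi), roi.shape)
            cy, cx = cy - roi_half + dy, cx - roi_half + dx
        centers.append((cy, cx))
        hits.append(bool(hit))
    return centers, hits
```

The initial center would come either from an operator mouse click or from the treatment plan plus the needle tracking result, as described above.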
[0099] FIG. 9A shows a column of images taken along the pathway
corresponding to grid coordinates I2 in the grid 302 of FIG. 3.
FIG. 9B shows the same column of images, with boxes identifying the
images in which hyperechoic spots have been identified. As shown in
FIG. 9B, each hyperechoic spot occupies five consecutive images
because of the dimensions of the seed relative to the interval at
which the images are taken; an illustrative example of the relevant
dimensions will be given below.
[0100] The threshold measurement based on the grayscale analysis of
the ROI can be illustrated by FIG. 10. For the sake of clarity of
illustration, FIG. 10 shows only the maximum, mean, and contrast
measurements because they can be shown in a 2-D plot. FIG. 10 is
not drawn to scale, and the parameters are examples only, used to
make the illustration more intuitive.
[0101] The ROI whose grayscale features fall in the shadow area of
FIG. 10 is identified as an ROI containing a hyper-echoic spot. In
the figure, the four borders of the shadow area are represented
with four lines a, b, c, and d, respectively. The lines a and b
indicate that the maximum value of the ROI should be between
grayscale levels 75 and 255. The line c indicates that the mean
value of the ROI should be greater than 5. The line d indicates
that the contrast (the slope of the line in the 2-D coordinate
system constructed by the mean and maximum) should be greater than
2.
[0102] In practice, the line d may be replaced by a curve e (the
dotted curve in FIG. 10), which delimits the border more
accurately. That is because variations of the mean and the contrast
may result in different thresholds. Generally speaking, the greater
the mean, the smaller the threshold. As a result, curve e is in the
form as shown in the figure. The curve e can be implemented as a
curve equation or as a look-up table for correlating the threshold
to the mean.
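The acceptance region bounded by lines a-d can be written directly as predicates on the ROI statistics (a sketch using the illustrative limits from FIG. 10; those numeric thresholds are examples only):

```python
import numpy as np

def is_hyperechoic(roi, max_range=(75, 255), mean_min=5, contrast_min=2):
    """Decide whether an ROI contains a hyper-echoic spot.

    Lines a and b bound the maximum grayscale value of the ROI, line c
    bounds its mean, and line d requires the contrast (maximum divided
    by mean) to exceed a threshold.  Line d could be replaced by a
    look-up table mapping the mean to a mean-dependent contrast
    threshold (curve e).
    """
    roi = np.asarray(roi, dtype=float)
    mx, mean = roi.max(), roi.mean()
    contrast = mx / mean if mean > 0 else 0.0
    return (max_range[0] <= mx <= max_range[1]
            and mean > mean_min
            and contrast > contrast_min)
```

Extending to more measurement parameters, as noted above, simply adds further predicates delimiting a sub-space of the multi-dimensional feature space.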
[0103] Extending the illustrative example of FIG. 10 to more
measurement parameters results in a multi-dimensional space and a
shadowed sub-space similar to the shadow area in the 2-D space in
FIG. 10.
[0104] Step 506, detecting the real z-position of each seed placed
along the needle track, is in fact a task of cutting the seed
pathway into several segments by discriminating the spots
representing seeds from any spots representing air gaps. The
grayscale information cannot be used to achieve that goal because
some stronger air gaps have greater measurement values than weak
seeds, as will be explained below with reference to FIG. 11B.
Therefore, a wave form analysis method is used instead.
[0105] To simplify the illustration, it is assumed that the
distance between two contiguous images is 0.5 mm, so that one seed
can occupy at most 10 images in the series, and it usually occupies
fewer than 10 due to its slant. Thus, in a case in which the gland
has a length of 4.5 cm, the offset is 5 mm, and there are 5 seeds
with special spacing, i.e., no spacer, at the apex, an ideal
waveform of a needle track should have the appearance shown in FIG.
11A, having rectangular peaks 1102, 1104, 1106, 1108 and 1110
indicating the seeds. However, a real waveform is more likely to
have the appearance shown in FIG. 11B, having irregular peaks 1112,
1114, 1116, 1118 and 1120 indicating the seeds.
[0106] It can be seen in FIG. 11B that although the measured value
(MV) of the second peak 1114 is less than that of the air gap 1122
between the peaks 1116 and 1118 or that of the air gap 1124 between
the peaks 1118 and 1120, the second peak 1114 has the waveform of a
peak, while each of the air gaps 1122 and 1124 has the waveform of a
valley. That distinction between peaks and valleys can be used to
discriminate the seeds from the air gaps.
[0107] Since it is already known how many seeds are placed in the
needle track, the positions of the top several peaks are identified
as the centers of seeds. In the case of FIG. 11B, if the plan has
four seeds, their positions are taken as the peaks 1112, 1116, 1118
and 1120, but not 1114.
[0108] The principle is simple; the difficulty lies in the
representation of the MV. Since no single grayscale measurement can
reflect the whole feature set of the ROI, it is natural to use a
linear combination of those measurements as the final MV, i.e.,
MV = Σ a_i v_i,
[0109] in which v_i represents each feature, such as the maximum,
contrast, and standard deviation, and a_i represents the
coefficient of each feature. Of course, the combination of those
features is not constrained to the linear composition, which is the
simplest one. Simple least square statistics will determine the
value and the confidence interval for each coefficient.
[0110] Of course, the MV waveform should be smoothed before it can
be processed because the raw signal may contain many noise peaks,
as shown in FIG. 12A. Next, the false peaks are removed. For
example, if two peaks have a distance less than 6 units along the
z-axis, they most likely represent the same seed, so one of them
will be absorbed by the other, stronger one, as shown in FIG. 12B.
If a peak lies between two other higher peaks and has no distinct
drop off before and after it, it is most likely noise, as shown in
FIG. 12C.
[0111] After those adjustments to the waveform, the peaks are
detected to determine how many peaks there are. If the number is
greater than the implanted number N of seeds, only the highest N
peaks are taken as the seeds, as explained above with reference to
FIG. 11B. If the number is less than N, either seed identification
is forced using second-tier peaks (with reduced confidence), or the
preset transverse size of the ultrasound column is changed to
process a larger needle track that includes the exact number of the
implanted seeds.
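The smoothing, false-peak removal, and top-N selection described above might be sketched as follows (a hypothetical helper; a simple moving average stands in for whatever smoothing filter is actually used):

```python
import numpy as np

def select_seed_peaks(mv, n_seeds, min_sep=6, smooth=3):
    """Smooth an MV waveform, find its local maxima, absorb any peak
    lying within min_sep z-units of a stronger one, and return the
    z-indices of the N strongest surviving peaks as seed centers."""
    mv = np.convolve(mv, np.ones(smooth) / smooth, mode="same")  # moving average
    # local maxima (above the left neighbour, not below the right one)
    peaks = [i for i in range(1, len(mv) - 1)
             if mv[i] > mv[i - 1] and mv[i] >= mv[i + 1]]
    # strongest first; drop any peak too close to an already kept one
    kept = []
    for i in sorted(peaks, key=lambda i: mv[i], reverse=True):
        if all(abs(i - j) >= min_sep for j in kept):
            kept.append(i)
    return sorted(kept[:n_seeds])
```

With the five-peak waveform of FIG. 11B and a four-seed plan, the weak second peak is the one discarded, exactly as described in [0107].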
[0112] FIGS. 13A-13D show sample seed identifications along grid
location I2. In FIGS. 13A and 13C, the seeds are identified by
black marks M, while in FIGS. 13B and 13D, they are left
unmarked.
[0113] Each seed identified in that manner is assigned a confidence
level according to the MV level and the fall-off characteristics of
the peak. The greater the MV level and the fall off of the peak,
the more likely it is a seed. The locations of the seeds and their
confidence values are convoluted into subsequent dosimetry
calculations, which result in a confidence level for each of the
dosimetry parameters arising from the dose-volume histogram,
including V100, D100, D95, D90, D80 and D50.
[0114] If the confidence on the chosen dosimetry parameter
(currently D90) is acceptably high, seed localization is said to be
reliable enough for re-planning and re-optimization of dosimetry,
in order to compensate for the dosimetric impact of the aggregate
seed misplacements. If the confidence on the chosen dosimetry
parameter is not sufficiently high, simple Bayesian statistics are
used to determine which seed localizations require increased
confidence to achieve acceptable confidence in dosimetry. Repeat
ultrasound scans are acquired; imaging data for the given needle
column(s) are fused using redundant information but with increased
signal-to-noise ratio. The above-described process is repeated
starting from active seed pathway tracking and ending with
dosimetry confidence analysis.
[0115] If repeated application of the above process still cannot
achieve acceptable dosimetry confidence, fluoroscopy x-ray imaging
of the seeds after the implantation will be used to increase the
localization confidence of the given seeds found in TRUS images.
Ultrasound-based seed identification of high confidence values will
be used as "anchors" (fiducial marks) to register the ultrasound
and x-ray spaces. The stabilizing needles in the prostate can also
be used as positioning marks. The z-coordinates of the low
confidence seed localizations will then be corrected using the
x-ray projection(s). The confidence values are increased by a
variable based on the degree of seed overlap on the x-ray image,
the quality of the overall registration, and the quality of the
x-ray itself for seed localization. The 3-dimensional coordinates
of the TRUS-detected seeds will be projected onto the x-z plane to
form a 2-dimensional pattern. That pattern will then be matched with
the 2-dimensional x-ray pattern by using the affine transform and
optimized to a global minimum of distance differences. The seed
pairs with distance differences less than a threshold, for example,
0.2 cm, will then be considered matched pairs with high confidence
and used as fiducial marks for registration. Note that the
2-dimensional x-ray pattern is the distortion-corrected pattern. The
geometry parameters used for correction are acquired when the x-ray
is taken.
[0116] (2) 3D Orthogonal Compound Ultrasound
[0117] Another preferred embodiment of the present invention is
directed to 3D spatial compounding, which will be compared with a
known technique for 2D spatial compounding. FIG. 14A shows a simple
illustration of the known 2D spatial compounding. Two (or more)
scans of the same plane are accurately registered and then combined
together to produce a compounded image. FIG. 14B shows a schematic
of 3D orthogonal compounding according to the present embodiment.
Stacks of images captured from both the transverse view and the
longitudinal view are converted to image volume format in Cartesian
coordinates and combined. A scan is a set of images captured in one
continuous sequence. The image stack captured by a scan is
reconstructed into a volume. A compound volume is made up of dual
orthogonal scans (or, more generally, non-parallel scans) with
accurate registration.
[0118] A personal computer installed with a video frame grabber and
an electromagnetic, optical or mechanical position and orientation
measurement system (e.g., miniBIRD.TM., Ascension Technology, Inc.,
Burlington, Vt.) is connected to the ultrasound probe and used to
acquire a series of 2D images using both linear and rotational
scanning methods. The present inventors have verified that the EM
sensor can provide a resolution of 0.5 mm in static position and
0.1.degree. in orientation at 30 cm distance from the
electromagnetic field transmitter. The frame grabber can capture
real-time images from the composite video output of the ultrasound
machine at the rate of 30 frames per second.
[0119] Any movement of the prostate due to probe pullback during
image acquisition can cause instantaneous deformation of the
prostate and therefore mis-registration of different image series.
A system to overcome that problem is shown in FIG. 15. That system
1500 uses a rigid sheath ("rectal shell") 1502 fitted around the
TRUS transducer 1504 to avoid its direct contact with the rectal
wall of the patient P so that prostate movement is eliminated
during the pull back or rotation of the transducer. The sheath 1502
is made of bio-compatible materials and safe for human tissues. The
material should also have low attenuation of ultrasonic energy so
that the ultrasound image quality will not decrease significantly.
Acoustic coupling is ensured by applying a small amount of gel 1506
between the surface of the transducer 1504 and rectal shell 1502.
The posterior side of the rectal shell 1502 is vented toward the tissue
so that pressure does not build up during forward movement. Fiducial
markers 1508 visible in both the transverse and longitudinal scans
can be embedded in the rectal shell 1502 and used to help register
the image volumes. A position sensor 1510, such as an
electromagnetic sensing system or an optical or mechanical encoder,
is also provided.
[0120] A bi-plane ultrasound transducer, which can easily switch
between transverse imaging and longitudinal imaging, is used for
imaging. Transverse view images are obtained while the transducer
is pulled back from base to apex at even spacing (e.g. 0.5 mm).
Longitudinal images are obtained while the transducer is rotated
slowly by free hand at even angle (e.g. 0.5.degree.). Image
acquisition is triggered automatically when the EM sensor detects
probe movement to each image interval. The actual (rather than
evenly-spaced) coordinates and angles of the probe at the instant
of image acquisition are registered with the image; therefore, even
when the probe movement is very rapid there is no mis-registration
(but there may be potential under-sampling). The measurement rate
of the EM sensor is set to 120 Hz. The stream data reading mode and
real-time filtering can be used to avoid the delay of the sensor.
That technique has been extensively tested by the present inventors
for both linear and rotational scans under realistic operative
conditions. Careful error analysis indicates that sufficient
accuracy and sampling may be achieved if some attention is paid to
the EM transmitter positioning and speed of the scans (no more than
1.5 cm/s for linear scan and 15.degree./s for rotation). The
acquired 2D images are reconstructed into 3D volume in Cartesian
coordinates by using reformatting, so that the reformatted image
volume can be registered and compounded with the image volume
acquired by the linear scan.
[0121] The 3D reconstruction of the image volume acquired by
rotational scan is in fact a problem of coordinate transformation
from the polar coordinates (r, .theta.) into Cartesian coordinates
(x, y). FIG. 16A shows a schematic diagram of the acquired images
with respect to the transducer. The images are taken while the
transducer is rotated and are thus arranged as a fan of a series of
2D images, radiating outward from the transducer, and evenly spaced
over an arc of a certain degree (typically 100.degree.). FIG. 16B
shows a resulting stack of (r, z).sub..theta. images. The data from
the stack of images shown in FIG. 16B are rearranged by
interpolation to form a stack of (r, .theta.).sub.z images. Such a
stack is shown in FIG. 16C. The data from
that stack are reconstructed into a 3D (x, y, z) image such as that
shown in FIG. 16D. The reconstruction produces a 3D image data cube
in which the relative orientation of the acquired images is
preserved.
[0122] FIG. 17 is a schematic diagram illustrating the geometry
used to describe the reconstruction algorithm. The source image
grid is in polar coordinates (r, .theta.), while the destination
image grid is in Cartesian coordinates (x, y). The gray scale value
at any point p in the destination image is calculated from those of
its neighbors a, b, c, and d in the source image in a manner which
will now be described.
[0123] If the rotation axis of the transducer is designated as the
z-axis, all the 2D image slices are perpendicular to the x-y plane,
so that the reconstruction for every x-y plane is the same. The 3D
reconstruction problem is therefore reduced to a 2D problem of
mapping the source data points, collected in cylindrical
coordinates (r, .theta., z), onto a regular grid of destination
data points, in Cartesian coordinates (x, y, z), i.e., for each
value of z, P (r, .theta., z).fwdarw.P (x, y, z), where P is the
grayscale value of a data point.
[0124] We use the destination-oriented method for the conversion
because the source-oriented method [i.e., for each grid point (r,
.theta.) in the source image, the Cartesian coordinate (x, y)=(rsin
.theta., rcos .theta.) is computed, and the corresponding grayscale
value is distributed to its nearest neighbors in the destination
grid] suffers from under-sampling far from the axis, i.e., some
destination grid points may not receive contributions from any
source point.
[0125] In the destination-oriented method, for each grid point (x,
y) in the destination image, the polar coordinate
(r, θ) = (√(x² + y²), arctan(x/y)) is
computed, and the corresponding grayscale value is interpolated
from its nearest neighbors in the source image. We use the bilinear
interpolation method, i.e., linear interpolation in both r and
.theta., for the reconstruction, in which the grayscale value P of
a grid point p in the destination image can be calculated as:

P = (1 - δθ/Δθ)(δr/Δr)P_a + (δθ/Δθ)(δr/Δr)P_b + (δθ/Δθ)(1 - δr/Δr)P_c + (1 - δθ/Δθ)(1 - δr/Δr)P_d, (Eq. 1)

[0126] where P_a, P_b, P_c and P_d are the grayscale values of the
neighbor points a, b, c and d of p in the source image,
respectively; Δθ is the angular separation between successive 2D
longitudinal images at θ_i and θ_i+1; δθ is the angle between θ_i
and p; Δr is the radial separation between r_i and r_i+1; and δr is
the distance between r_i and p, as shown in FIG. 17. The dashed
lines represent the polar coordinates in which the points a, b, c
and d of the source image are located, and the solid lines represent
the Cartesian coordinates in which the point p of the destination
image grid is located.
[0127] The above process will be summarized with reference to the
flow chart shown in FIG. 18. In step 1802, a two-dimensional series
of longitudinal images in the form of a stack of (r, z).sub..theta.
images is collected. In step 1804, the stack of step 1802 is
rearranged into a stack of (r, .theta.).sub.z images by
interpolation. In step 1806, using Equation 1 above, for each z
plane, the grayscale value at each grid point (x, y) in the
destination image is calculated. In step 1808, the 2D series of
transformed images is collected in the form of an (x, y).sub.z
stack to form an image volume (3D image) in Cartesian coordinates
(x, y, z).
[0128] Since the reconstruction algorithm for every x-y plane is
the same, the total reconstruction time can be reduced by producing
a look-up table that designates which source pixels correspond to
a,b,c,d in equation, and the associated values of
.delta..theta./.DELTA..theta. and .delta. r/.DELTA.r, for every
destination grid point p in the volume. The reconstructed 3D image
is then built up by using that table repeatedly for each successive
value of z.
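The destination-oriented conversion, with bilinear interpolation in r and θ, might be sketched as follows for a single z-plane (the fan geometry parameters and the corner labeling of a, b, c, d are assumptions). The fractional indices and weights computed here are independent of z, so they are exactly what the look-up table described above would store and reuse for every plane:

```python
import numpy as np

def polar_to_cartesian(src, r_max, theta_start, theta_step, out_size):
    """Destination-oriented polar->Cartesian reformat of one z-plane.

    src is an (n_theta, n_r) array of samples on a fan of rays.  For
    each destination pixel, (r, theta) is computed and the grayscale
    value is bilinearly interpolated from the four neighbouring source
    samples, in the spirit of Eq. 1.
    """
    n_theta, n_r = src.shape
    dr = r_max / (n_r - 1)
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    x = (xs - out_size / 2) * dr
    y = (ys - out_size / 2) * dr
    r = np.hypot(x, y)
    theta = np.arctan2(x, y)          # angle measured from the probe axis
    fi = (theta - theta_start) / theta_step   # fractional ray index
    fj = r / dr                                # fractional radial index
    i0 = np.clip(np.floor(fi).astype(int), 0, n_theta - 2)
    j0 = np.clip(np.floor(fj).astype(int), 0, n_r - 2)
    wt = np.clip(fi - i0, 0, 1)       # delta-theta / Delta-theta
    wr = np.clip(fj - j0, 0, 1)       # delta-r / Delta-r
    out = ((1 - wt) * (1 - wr) * src[i0, j0]
           + (1 - wt) * wr * src[i0, j0 + 1]
           + wt * (1 - wr) * src[i0 + 1, j0]
           + wt * wr * src[i0 + 1, j0 + 1])
    # zero out pixels that lie outside the scanned fan
    out[(fi < 0) | (fi > n_theta - 1) | (fj > n_r - 1)] = 0
    return out
```

Caching (i0, j0, wt, wr) once and applying them to each successive z-slice implements the look-up table speed-up described in [0128].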
[0129] Precise registration between separate scans obtained in the
same volume of interest is the key for 3D spatial compounding to
improve data quality. The present invention uses image-based
non-rigid registration of the grayscale data sets because soft
tissue movement and sound-speed inhomogeneities make compounding
based only on the position sensor readings inappropriate and
inaccurate.
[0130] The two image volumes to be registered are obtained from
different scanning methods. Since position readings of each slice
are available from the position sensor 1510 of FIG. 15, a
preliminary registration based on geometry is used and
significantly reduces the time required by the subsequent
image-based registration.
[0131] Maximization of mutual information (MMI) of the histogram is
a new approach of image-based registration that has been used for
multimodality medical image registration by many groups (Collignon
et al. 1995; Studholme et al. 1995; Viola and Wells 1997; Maes et
al. 1997; Meyer et al. 1997). Recently, that method has also been
used for registration of 3D ultrasound images from scanning at
several different transducer tilt angles (Krucker et al. 2000). But
it has never been used for registration of intraoperative 3D TRUS
image volumes from linear scanning and rotational scanning, as
proposed in the present invention.
[0132] The MMI method uses the concept of mutual information (MI),
which is a basic concept from information theory for measuring the
statistical dependence between two random variables, here to
measure the statistical dependence between the image intensities of
corresponding voxels in two images, which is assumed to be maximal
if the images are geometrically aligned. Because no assumptions are
made regarding the nature of that dependence and no limiting
constraints are imposed on the image content, MMI is a very general
and powerful criterion, allowing robust, fully automated affine
registration of different images with different contrast and
resolution without the need for segmentation or other
preprocessing.
[0133] The mutual information is defined as
I(x, y)=H(x)+H(y)-H(x, y)
[0134] where H(x) and H(y) are the entropy of data sets x and y,
respectively, and H (x, y) is the joint entropy of the two data
sets. The entropy of a data set is defined as its average
information content, while the joint entropy is the average
information of two data sets. Entropy H is a measure of the amount
of uncertainty about the data set and can be computed using the
probability densities p(x), p(y), and the joint probability density
p(x, y) by

H(x) = -Σ_x p(x) log p(x)

H(y) = -Σ_y p(y) log p(y)

H(x, y) = -Σ_x,y p(x, y) log p(x, y)

[0135] So the mutual information I(x, y) can be computed by

I(x, y) = Σ_x,y p(x, y) log [p(x, y) / (p(x) p(y))]
[0136] The gray-scale values of a pair of corresponding voxels in
two images that are to be registered are considered as two random
data sets x and y. The probability densities can be simply
approximated by the normalized gray scale histograms, and the joint
probability density can be approximated by the normalized 2D joint
histogram, with the two base axes corresponding to gray scale
levels in the two data sets. The 2D joint histogram is constructed
by raster scanning through all voxels in one image volume and
incrementing bin counts corresponding to the gray scale values from
the geometrically mapped voxel pair in the other image volume. The
joint histogram is normalized by dividing by the total number of
voxels in the image volume. So the histogram value p (g.sub.1,
g.sub.2) is equal to the normalized count of pixels with value
g.sub.1 in the first image volume that match a value g.sub.2 at the
same location in the second image volume under the current mapping
transformation. The procedure of constructing the 2D joint
histogram is illustrated in FIG. 19. For ultrasound images that are
8-bit grayscale (256 grayscale levels), the number of elements in
the joint histogram is 256.times.256.
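The joint-histogram estimate of the mutual information might be computed as follows (a minimal sketch using 64 bins, as suggested in the next paragraph, rather than the full 256):

```python
import numpy as np

def mutual_information(vol_a, vol_b, bins=64):
    """Mutual information of two equally-shaped grayscale volumes,
    estimated from the normalized 2D joint histogram:
        I(x, y) = sum over bins of p(x, y) log[p(x, y) / (p(x) p(y))]
    """
    joint, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    p_xy = joint / joint.sum()                 # normalized joint histogram
    p_x = p_xy.sum(axis=1, keepdims=True)      # marginal of first volume
    p_y = p_xy.sum(axis=0, keepdims=True)      # marginal of second volume
    outer = p_x * p_y                          # product of marginals
    nz = p_xy > 0                              # zero bins contribute nothing
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / outer[nz])))
```

A volume compared with itself yields its own entropy, the maximum attainable MI, which is the property the registration optimizer exploits.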
[0137] Although typical ultrasound images have 256 grayscale
levels, fewer bins can be used (such as 64 bins, thus requiring
1/16 of the computation intensity) to form the 2D
joint histogram because most radioactive seeds implanted into the
prostate have large grayscale values while tissues have small
values. That feature can be used to reduce the number of grayscale
bins. That means that at least some of the implanted radioactive
seeds play an important role in the MMI registration.
[0138] The two image volumes x and y are related through a
geometric transformation T.sub..alpha. defined by the registration
parameter .alpha.. The MMI registration criterion states that the
images are geometrically aligned by the transformation
T.sub..alpha. for which the mutual information I (x, y) is maximal.
The registration optimization works by iteratively maximizing the
mutual information I (x, y). The only user interaction in the
process is the definition of an approximate initial alignment of
the two image volumes by placing three control points in each set.
The control points define a geometric transformation that is
applied to match one image volume to the other. The mutual
information of the matched pair is calculated, and the optimization
algorithms are used to modify the position of the control points
until I (x, y) is maximized.
[0139] The Nelder-Mead downhill simplex algorithm can be used.
That method is initialized with N+1 points, defining a
non-degenerate simplex in N-dimensional parameter space. That
simplex is then deformed iteratively by reflection, expansion or
contraction in order to move the vertices of the simplex towards
the minimum of the objective function. Convergence is declared when
the fractional difference between the lowest and the highest
function values evaluated at the vertices of the simplex is smaller
than a pre-defined threshold.
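A minimal hand-rolled version of the downhill simplex, shown on a toy quadratic standing in for the negative mutual information over the registration parameters (the shrink step and tolerance values here are conventional choices, not taken from the text):

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, ftol=1e-8, max_iter=500):
    """Minimize f by deforming an (N+1)-vertex simplex via reflection,
    expansion and contraction; convergence is declared when the
    fractional difference between the lowest and highest vertex values
    falls below ftol."""
    n = len(x0)
    simplex = [np.asarray(x0, dtype=float)]
    for i in range(n):                       # initial non-degenerate simplex
        v = simplex[0].copy(); v[i] += step
        simplex.append(v)
    fv = [f(v) for v in simplex]
    for _ in range(max_iter):
        order = np.argsort(fv)
        simplex = [simplex[i] for i in order]
        fv = [fv[i] for i in order]
        if 2 * abs(fv[-1] - fv[0]) / (abs(fv[-1]) + abs(fv[0]) + 1e-12) < ftol:
            break                            # fractional-spread convergence test
        centroid = np.mean(simplex[:-1], axis=0)
        xr = centroid + (centroid - simplex[-1])      # reflect the worst vertex
        fr = f(xr)
        if fr < fv[0]:
            xe = centroid + 2.0 * (centroid - simplex[-1])  # expansion
            fe = f(xe)
            simplex[-1], fv[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < fv[-2]:
            simplex[-1], fv[-1] = xr, fr
        else:
            xc = centroid + 0.5 * (simplex[-1] - centroid)  # contraction
            fc = f(xc)
            if fc < fv[-1]:
                simplex[-1], fv[-1] = xc, fc
            else:                            # shrink toward the best vertex
                simplex = [simplex[0]] + [simplex[0] + 0.5 * (v - simplex[0])
                                          for v in simplex[1:]]
                fv = [fv[0]] + [f(v) for v in simplex[1:]]
    return simplex[int(np.argmin(fv))]
```

In the registration itself, f would evaluate the negative mutual information after moving the control points to the candidate parameter vector.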
[0140] The optimization works in three stages, using increasing
numbers of control points and thus increasingly complex geometrical
transformations. The first stage uses three control points
(rotation and translation), the second stage four (full affine
transform), and the third a number greater than four (thin plate
spline warping), which is a trade-off between registration accuracy
and computation time. Each stage stops and passes its final
transformation onto the next stage when the mutual information
increment is less than a predefined threshold. For efficient
optimization, each stage uses a compressed version of the original
data set and different parameters to confine variation of the
control point positions and to determine convergence of each
optimization cycle.
[0141] FIG. 20 shows a flow chart of the registration algorithm. As
seen in that flow chart, the algorithm includes two loops, the
outer one of which controls the transition from stage to stage. In
step 2002, the control point placement is initiated; the user
selects the approximate control points in both volumes. In step
2004, the geometric mapping is computed. The first time step 2004
is performed, it is performed in stage 1, with rotation and
translation only; the transition to stage 2 (full affine
transformation, including scale) and stage 3 (thin plate spline
warping) will be explained below. In step 2006, interpolated mapped
volumes are computed. In step 2008, a 2D joint histogram is
constructed. In step 2010, the mutual information is computed. In
step 2012, it is determined whether I is maximized. If not, then in
step 2014, the control points are moved according to the optimizer,
and the algorithm loops back to step 2004. If I is maximized, then
in step 2016, it is determined whether the last stage out of stages
1-3 has been performed. If not, then in step 2018, the control
points are created for the next stage, and the algorithm loops back
to step 2004, which is carried out for the next stage. If the last
stage has been performed, then in step 2020, the algorithm stops.

After registering and transforming the reformatted rotational scan
onto the linear scan, combining the two volumes into one creates a
compounded image volume. The most common techniques for combination
of overlapping data are the maximum and mean. However, those
methods do not take into account the relative confidence in the
data, which may vary with different scans. So in the present
invention, an intelligent compounding method is used. In that
method, the information content of each slice of one image volume
is computed and compared with the information content of the
corresponding slice of the other image volume. The contributions of
the single scan images to the compounded image are determined by the
information content of each. The intensity value of each pixel in
the compounded image is a weighted mean of the intensity of
corresponding pixels in both single scan images. That method is
different from the weighted spatial compounding proposed by Leotta
and Martin (2000) in that their weighting is based on incidence
angle of the ultrasound beam, which is an exterior determining
factor, whereas the weighting of the present embodiment is based on
the information content computed directly from the images, which is
an interior determining factor. In the present embodiment, the
weighting can be global, as shown in FIG. 22A, or regional, as
shown in FIG. 22B. The information content can be estimated by
entropy H, as described above. Weighted compounding at sub-image
(or pixel) level, e.g., moving region-of-interest (ROI) that scans
the image plane, can be used in the entropy formalism, so that the
compounded image plane contains differential weighting of the two
original image sets based on regional information content.
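The global variant of the entropy-based weighting might be sketched as follows (the regional variant of FIG. 22B would compute H over a moving ROI instead of the whole slice):

```python
import numpy as np

def entropy(img, bins=64):
    """Shannon entropy of an image's normalized grayscale histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # empty bins contribute nothing
    return float(-np.sum(p * np.log2(p)))

def compound(img_a, img_b):
    """Entropy-weighted mean of two registered single-scan slices: the
    slice carrying more information contributes more to each pixel."""
    ha, hb = entropy(img_a), entropy(img_b)
    wa = ha / (ha + hb)
    return wa * img_a + (1 - wa) * img_b
```

Unlike a plain maximum or mean, this weighting is driven by a quantity computed from the images themselves, which is the interior determining factor described above.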
[0142] FIGS. 21A and 21B show, respectively, one of the sagittal
images from the linear scan, and one of the sagittal images from
the rotational scan. They are registered in the same sagittal plane
by observing the whole series of images from both volumes and
matching certain distinct marks, e.g. the urethra, the edges of the
phantom, the bright line on the top of the images formed by the
echoes from the margin of the phantom container, and some clearly
visible seeds. A simple compounding by averaging the images of
FIGS. 21A and 21B forms the compounded image shown in FIG. 21C. All
four seeds along that needle track are clearly visible on the
compounded image of FIG. 21C, but not on the image of FIG. 21A or
21B individually. The signal-to-noise ratio (SNR) and
contrast-to-noise ratio (CNR) are used to measure the image
statistics of the single scan images and the compounded image. SNR
is defined as
SNR = μ/σ

[0143] where μ and σ are the mean and standard deviation (STD) of
the intensity, respectively. The CNR of object to background,
CNR_o-b, is defined as

CNR_o-b = (μ_o - μ_b)/σ_b

[0144] where μ_o and μ_b are the means of the object and background
intensity, respectively, and σ_b is the standard deviation (STD) of
the background. The table below lists
the image statistics of seeds, phantom, and background,
respectively, and computes the CNR of seeds to phantom CNR.sub.s-p
and phantom to background CNR.sub.p-b, respectively. The table
shows that the CNR.sub.s-p, CNR.sub.p-b, and SNR of the compounded
image are 1.39, 1.36, and 1.41 times those of the average values of
the two single scan images, respectively. Those numbers are very
close to the theoretical estimation that SNR_N = √N SNR_0, where
SNR_N and SNR_0
are the SNR of the compounded image averaged from N views and of
the single uncorrelated views, respectively. That indicates that
the image volumes from the linear scan and the rotational scan are
nearly uncorrelated, which is just expected for increasing SNR of
the compounded image and thus reducing the speckle noise.
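The SNR and CNR definitions above, and the √N compounding gain for uncorrelated views, can be sketched numerically. The scene and noise values below are illustrative placeholders, not data from the experiment:

```python
import numpy as np

def snr(region):
    """SNR = mu / sigma of the pixel intensities in a region."""
    return region.mean() / region.std()

def cnr(obj, background):
    """CNR of object to background: (mu_o - mu_b) / sigma_b."""
    return (obj.mean() - background.mean()) / background.std()

rng = np.random.default_rng(0)
# Two simulated, uncorrelated speckle-noise views of the same scene
scene = np.full((64, 64), 100.0)
view1 = scene + rng.normal(0, 20, scene.shape)
view2 = scene + rng.normal(0, 20, scene.shape)
compound = (view1 + view2) / 2.0  # simple compounding by averaging

# For N uncorrelated views, SNR_N ~ sqrt(N) * SNR_0; here N = 2
ratio = snr(compound) / (0.5 * (snr(view1) + snr(view2)))
print(ratio)  # close to sqrt(2) ~ 1.41, matching the table's SNR ratio
```

With real image volumes the same functions would be applied to the seed, phantom, and background regions listed in the table.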
[0145] List of the image statistics of seeds, phantom and
background of linear scan, rotational scan, and orthogonally
compounded image.
                            Linear   Rotational   Com-       Ratio of compounded image
                            scan     scan         pounded    to single-scan average
 Mean of seeds μ_s          198      128          164
 Mean of phantom μ_p        103      56           80
 Mean of background μ_b     18       15.7         16
 STD of phantom σ_p         18       15           12
 STD of background σ_b      7        3.5          4
 CNR_s−p                    5.3      4.8          7.0        1.39
 CNR_p−b                    12.1     11.5         16.0       1.36
 SNR of phantom             5.72     3.73         6.67       1.41
[0146] That experiment strongly validates the principle that the
orthogonally compounded image volume is better than any single scan
image volume for both prostate segmentation and the detection of
heterogeneities such as tumor nodules or seeds. It suggests that
tissue typing by OCU in heterogeneous tissues (focal lesion in
prostate) may be more efficient than conventional TRUS. The
dual-scan OCU imaging can be used for improving the image quality
in prostate brachytherapy and biopsy.
[0147] A normalized statistical tumor probability model (NSTPM)
aimed to help effective biopsy based on OCU imaging for tumor
detection can be used in the present embodiment. The model is based
on a large number (e.g., 100) of individual prostate specimens with
localized cancers. It not only provides statistical information on
a 3D spatial distribution of prostate cancers, but also shows the
probability of cancer detection at each location in the prostate.
During the biopsy procedure, the probability model can be
dynamically mapped into OCU images during in vivo TRUS-guided
needle biopsy by dynamically registering the OCU images with the 3D
model and overlaying them together. Since the new biopsy technique
is based on statistical analysis of a quantitative database of
digitized prostate specimens, it is expected to significantly
improve the accuracy of prostate cancer detection.
[0148] To construct the statistical probability model of prostate
tumor distribution, we first collect pathological information about
the tumor distributions of real cases. Sample distributions are
obtained immediately after the surgical removal of the gland
(radical prostatectomy). We perform whole-mounted pathological
analysis of step-sectioned prostatectomy samples. The prostate
specimens are sectioned at even intervals (e.g. 2.25 mm).
Pathologists identify and grade all areas of cancer using Gleason's
method of scoring. A high resolution image of each whole mount
slide is captured using a microscopy system. Each digitized slice
is then segmented to identify key pathological structure including
surgical margin, capsule, urethra, seminal vesicles as well as
tumors. The pathologists provide independently reviewed whole-mount
slices of the prostate, trace the borders of the prostate and
identify the circumference of each prostate cancer, which is
assigned a Gleason's score.
[0149] Based on the current clinical conventions for TRUS-guided
prostate biopsy, the prostate gland is divided into different zones
that are accessible by the urologists under the guidance of OCU
imaging. (Zones are used instead of individual pixel points because
pixel-based statistical analysis is easily disturbed by noise, and
its results cannot be used directly by urologists or by computer
software.) The probability of cancer inside each of those zones is
calculated statistically from the collected cases. The result is a
bounding box of a prostate gland with a certain number of zones,
each carrying information on the spatial tumor probability within
it.
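The per-zone probability accumulation described above can be sketched as follows. The zone count and the synthetic case data are illustrative assumptions, not values from the patent:

```python
import numpy as np

N_ZONES = 12  # hypothetical zone layout, e.g. left/right x base/mid/apex

def build_zone_probabilities(cases):
    """cases: list of boolean arrays, one per specimen;
    cases[k][z] is True if zone z of case k contains tumor.
    Returns the per-zone fraction of cases with tumor in that zone."""
    hits = np.zeros(N_ZONES)
    for case in cases:
        hits += np.asarray(case, dtype=float)
    return hits / len(cases)

# Toy stand-in for ~100 digitized prostatectomy specimens
rng = np.random.default_rng(1)
cases = [rng.random(N_ZONES) < 0.3 for _ in range(100)]
probs = build_zone_probabilities(cases)
```

In the actual model, each case's zone labels would come from the whole-mount pathological segmentation rather than random data.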
[0150] A neural network is used to realize the adaptive
sub-classification of the NSTPM because the model is affected by
other parameters such as the patient's age, PSA level, gland
volume, etc. As a general nonlinear approach to pattern recognition
and classification, neural network computational models contain
layers of simple interconnected computing nodes that operate as
nonlinear summing devices. Those nodes are organized into layers
and interconnected by weighted connection lines. From each layer,
every neuron is connected to all neurons in the next layer. The
weight values on the connection lines are adjusted when data is
presented to the network during a training process. Successful
training can result in an artificial neural network that performs a
classification. The most popular neural network architecture is the
multi-layer perceptron (MLP) with three layers of neurons. An
input layer receives the data to be analyzed. The second layer,
consisting of hidden units, organizes weighted combinations of
inputs, or "features" that aid in discriminating the different
categories in a classification. The output layer then combines
those features appropriately to perform the classification.
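The layered weighted-sum structure just described can be sketched as a minimal forward pass. The weights here are random placeholders standing in for trained values, and the input/output sizes are illustrative assumptions:

```python
import numpy as np

def mlp_forward(x, w_hidden, w_out):
    """Three-layer perceptron: the hidden layer forms weighted,
    nonlinearly squashed combinations of the inputs ("features"),
    and the output layer combines those features to classify."""
    h = np.tanh(w_hidden @ x)   # hidden units: nonlinear summing devices
    y = np.tanh(w_out @ h)      # output layer combines hidden features
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=3)              # e.g. normalized age, PSA, gland volume
w_hidden = rng.normal(size=(5, 3))  # 5 hidden units, fully connected
w_out = rng.normal(size=(4, 5))     # 4 output categories
y = mlp_forward(x, w_hidden, w_out)
```

Training would adjust `w_hidden` and `w_out` by the supervised procedure described below.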
[0151] Many applications of neural networks in medicine and medical
decision have been reported. For example, in a U.S. Pat. No.
(6,025,128), titled "Prediction of prostate cancer progression by
analysis of selected predictive parameters," a neural network was
used to analyze and interpret cell morphology data. Utilizing
Markovian factors and other biomarkers as parameters, the network is
first trained with a set of cell data from known progressors and
known non-progressors, and is then used to predict prostate cancer
progression in patient samples. In another U.S. Pat. No.
(6,004,267), titled "Method for diagnosing and staging prostate
cancer," a neural network was used for providing prostate cancer
stage information for a patient based upon input data which
includes the patient's preoperative serum PSA, biopsy Gleason
score, and systematic biopsy-based information.
[0152] Unlike those patents, the present embodiment uses a neural
network to construct a normalized statistical tumor probability
model (NSTPM), which is aimed to help effective biopsy, not aimed
to detect tumors directly from the prostate by neural networks.
That approach is more realistic and more readily applicable, and it
is additionally aided by the improved imaging of the OCU modality.
[0153] FIG. 23 shows the neural network used for sub-classification
of the NSTPM. As noted above and as known in the art of neural
networks, the neural network 2300 includes an input layer 2302, at
least one hidden layer 2304, and an output layer 2306. The neurons
2308 in the input layer 2302 receive the raw numbers of the
related parameters, such as the patient's age, PSA level, gland
volume, etc. Those numbers are weighted and combined by the hidden
neurons 2310 in the hidden layer or layers 2304 of the network
2300, which extract features useful for classification. Then,
values calculated by the hidden neurons 2310 are weighted and
combined by the output neurons 2312 in the output layer 2306, to
decide on the classification. The present example uses four output
neurons 2312, which represent different properties of the
anatomical zones in pathological analysis. Those neurons 2312
represent benign, malignant, malignant Gleason 1-3 and malignant
Gleason 4-5. Based on that NN model, corresponding network
structures are obtained for various anatomical zones and the final
NN structure precisely reflects the relationship between the
parameters with the tumor distribution probability in different
zones.
[0154] The network is trained by supervised learning, in which the
correct classification is presented to the output layer during
training, and the weights are adjusted so as to make the network's
output more closely match the correct classification at each point.
After training, the network outputs the classification when
presented with an input vector at its input layer. The fundamentals
of neural network training are known in the art and will therefore
not be described in detail here. It is important to avoid "data
over-training" of the dataset. To accomplish that goal, we divide
the original dataset into three parts. The first part is the
training set, which is used to train the NN. The second part is the
testing set, or verification set, which is used to test when to
stop training the network. Typically, the network's error when
applied to the testing set decreases during training until the
point at which over-training begins, and at that point, the error
on the testing set starts to increase. In a well-controlled study,
training is stopped when the error on the testing set is minimized.
The third set is the validation set, which is used to measure the
performance of the trained network. The validation set is
completely independent of both the training and testing sets and
thus gives a suitable benchmark of the NN performance. If the
validation set were used to train the network or to determine the
stopping point for training, then the network's performance on the
validation set would no longer be a reliable predictor of its
performance on new, independent data.
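The early-stopping rule described above can be sketched numerically. The error curves below are synthetic stand-ins for a real training run (training error keeps falling; testing error dips and then rises once over-training begins):

```python
import numpy as np

def early_stopping_epoch(test_err):
    """Stop training at the epoch where error on the held-out
    testing (verification) set is minimized; later epochs over-train."""
    return int(np.argmin(test_err))

epochs = np.arange(50)
train_err = np.exp(-epochs / 10.0)                      # decreases throughout
test_err = np.exp(-epochs / 10.0) + (epochs / 40.0) ** 2  # dips, then rises
stop_epoch = early_stopping_epoch(test_err)
```

The validation set plays no role in choosing `stop_epoch`; it is held back purely to benchmark the final network.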
[0155] Performance of the trained network is then measured by a
root-mean-square-error (RMS error) calculation, accuracy,
sensitivity and selectivity calculations, receiver operating
characteristics (ROC) curve, and prediction/confidence intervals.
The RMS error shows literally the differences between the desired
output for the neural network and its actual output. The prediction
and/or confidence interval reflect the amount of variation inherent
in the training and calculation methodologies of the neural
network. The sensitivity is the fraction of data items that the
network classifies as containing a feature of interest taken out of
all the items that have that feature of interest. The selectivity
is the fraction of data items that the network classifies as not
containing a feature of interest taken from the entire set of data
that does not in fact contain that feature. It is important to
select an optimal tradeoff between sensitivity and selectivity:
usually, a higher sensitivity entails a lower selectivity, and vice
versa. The ROC (receiver operating characteristics) curve can
provide that tradeoff. When a neural network is applied to a
classification problem, an entire ROC curve can be calculated based
on its output when the data set is presented to the network. Each
point along that ROC has a pair of values of sensitivity and
selectivity. The best point along the curve can be selected, taking
into account the clinical consequences of high or low sensitivity
versus selectivity. Usually, a high sensitivity is selected, and a
lower selectivity is accepted.
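Sweeping a threshold over the network's outputs yields the (sensitivity, selectivity) pairs along the ROC curve described above. A minimal sketch, with toy scores and labels in place of real network outputs:

```python
import numpy as np

def roc_points(scores, labels):
    """Threshold the network outputs and collect one
    (sensitivity, selectivity) pair per threshold."""
    points = []
    for t in np.unique(scores):          # ascending thresholds
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        sensitivity = tp / (tp + fn)     # fraction of positives caught
        selectivity = tn / (tn + fp)     # fraction of negatives rejected
        points.append((sensitivity, selectivity))
    return points

scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2])
labels = np.array([0, 0, 1, 1, 1, 0])
curve = roc_points(scores, labels)
```

The clinically best operating point is then chosen from `curve`, typically favoring sensitivity as the text notes.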
[0156] The 3D NSTPM is built and saved in the computer, in which
the successive slices of the prostate are put together into regions
such as prostate, suspicious areas, high Gleason areas, and low
Gleason areas, and landmarks such as urethra.
[0157] The NSTPM is used during biopsy as shown in the flow chart
of FIG. 24. At the beginning of the biopsy, in step 2402, a
dual-scan of the prostate is performed and orthogonal compound
image slices of the prostate obtained. Then, in step 2404,
segmentation of prostate boundary, rectum and urethra is performed
on that OCU image series. Based on the structure information of
prostate boundaries and urethra, we register the NSTPM into that
real prostate and obtain the tumor probability distribution in that
prostate in step 2406. An advanced 3D overlay display ability is
employed in the system. The series of boundaries of the prostate
segmented from the OCU images are reconstructed into a 3D surface
model in step 2408. The 3D tumor probability model is registered
with the 3D surface model in step 2410. For easy visual interface,
here each zone of the 3D tumor probability model is color-coded
according to the value of its probability in step 2412. For
example, the zone with highest probability is coded as red, the
second highest zone as pink and so on. The zone with lowest
probability can be coded as blue. As such, the urologists can
easily navigate the whole prostate in step 2412 and pick up the
best candidate areas for biopsy by dynamically moving and/or
turning the virtual US beam and making it intersect with the
candidates in 3D space. Of course, any other suitable coding could
be used. Then the biopsy is optimized to sample the high Gleason
area as much as possible.
[0158] (3) Statistical Analysis of Texture Features
[0159] In determining needle and seed displacement, it is helpful
to be able to determine the seeds quickly and accurately. To that
end, another preferred embodiment of the present invention provides
a statistical technique for doing so.
[0160] In order to find effective features and their optimized
combination for the purpose of identifying seeds, we applied Logit
and Probit models, discriminant analysis, and neural network
classification methods to two test groups. Logit and Probit models
are known in various fields, such as sociology, but have not yet
been applied in the same manner as in the present embodiment. One
group is the seeds group, identified manually from the TRUS images;
the other is the non-seeds group, which includes typical background
noise, calcifications, bleeding, air gaps, and shadows. The
selection of the two groups is confirmed by
the day 0 CT (i.e., CT images taken within the same day of the
actual implant): the positions of the radioactive seeds in the CT
images are identified automatically or manually, and
affine-transformed to match the TRUS image series. Some clearly
visible seeds in the TRUS image series can be visually identified
and used as markers to correct the registration. The seed positions
identified from CT are affine-transformed until they achieve a best
match with those identified from TRUS.
[0161] For each test element in both groups, we compute some
numerical features from its three-dimensional neighboring area, a
2.5 mm × 2.5 mm × 3.5 mm block, and some features from a
two-dimensional slice of a 2.5 mm × 2.5 mm neighboring area. A seed
template is designed to be correlated with the sub-image area in
the TRUS image series, and the correlation coefficient is used as a
numerical feature. The seed template is a bright central part
surrounded by a dark background. The whole area is also 2.5 mm ×
2.5 mm, while the central part is a seed-echo-shaped area, as shown
in FIGS. 25A-25C. Since the seeds in different parts of the
prostate may appear as bright echoes with different orientations,
different seed templates may be created for different prostate
positions.
[0162] FIGS. 25A-25C show only three different seed templates
corresponding to the orientations of seeds in the left, middle, and
right parts of the prostate, respectively. Those seed templates are
used to compute correlation coefficients with sub images. More
templates can be used for more orientations; therefore, FIGS.
25A-25C are illustrative rather than limiting.
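The template-correlation feature can be sketched as follows. The 5×5 template below is an illustrative stand-in for the 2.5 mm × 2.5 mm seed templates of FIGS. 25A-25C, not the actual templates:

```python
import numpy as np

def template_correlation(sub_image, template):
    """Normalized correlation coefficient between a sub-image and a
    seed template (bright central echo on a dark background)."""
    a = sub_image - sub_image.mean()
    b = template - template.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Toy template: bright horizontal seed-shaped echo, dark surround
template = np.zeros((5, 5))
template[2, 1:4] = 1.0

bright_echo = template * 200 + 20               # sub-image with a seed-like echo
flat_noise = np.full((5, 5), 50.0) + np.eye(5)  # no seed-like structure

r_seed = template_correlation(bright_echo, template)
r_none = template_correlation(flat_noise, template)
```

A sub-image containing a seed-shaped echo correlates strongly with the template, while structureless tissue does not; templates of other orientations would be handled the same way.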
[0163] All numerical features of both groups are computed and used
as regressors in the logit and probit models. A binary dependent
variable Y is created by assigning 1 to the seeds group and 0 to
the non-seeds group respectively. The regression coefficients,
standard errors, t-statistics, and probability are computed.
Normally, a probability lower than 0.05 is taken as strong evidence
of rejection of a hypothesis that the true coefficient is zero. So,
we keep those features with probability lower than 0.05. Those
features are: maximum, maximum minus mean, standard deviation,
correlation coefficient, skewness, kurtosis, and area divided by
Euler number.
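The logit-based screening step (keep features whose coefficient p-value falls below 0.05) can be sketched with a small Newton-Raphson fit and Wald p-values. The toy data, with one informative feature and one noise feature, is an illustrative assumption:

```python
import math
import numpy as np

def logit_fit(X, y, iters=50):
    """Fit a logit model by Newton-Raphson; return coefficients,
    standard errors, and two-sided Wald p-values."""
    X = np.column_stack([np.ones(len(X)), X])   # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None])              # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(H)))
    z = beta / se
    pvals = np.array([2 * (1 - 0.5 * (1 + math.erf(abs(v) / math.sqrt(2))))
                      for v in z])
    return beta, se, pvals

rng = np.random.default_rng(0)
informative = rng.normal(size=200)   # stands in for, e.g., "maximum minus mean"
noise = rng.normal(size=200)         # a feature unrelated to seed presence
y = (informative + rng.normal(size=200) > 0).astype(float)
beta, se, pvals = logit_fit(np.column_stack([informative, noise]), y)
keep = pvals[1:] < 0.05              # retain features with p < 0.05, as in the text
```

A probit variant would replace the logistic link with the normal CDF; the screening rule is the same.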
[0164] FIG. 26 shows a flow chart of the process of finding
optimized weight coefficients a.sub.i (i=1, . . . ,N) of numerical
features v.sub.i (i=1, . . . ,N) to identify a manually selected
seeds group and a manually selected non-seeds group. The process
uses a neural network, whose structure and training method have
already been disclosed with reference to FIG. 23. The data from the
seeds group (Y=1) and the non-seeds group (Y=0) are supplied in
steps 2602 and 2604, respectively. In step 2606, a logit or probit
model is used for selecting the proper features v.sub.i, i=1, . . .
, N, which may be useful for identifying seeds and non-seeds. In
step 2608, those features are supplied to a neural network to find
the optimized weight coefficients of those features. The outputs of
the neural network include true positives, false negatives, true
negatives and false positives in steps 2610, 2612, 2614 and 2616,
respectively. From those outputs, the sensitivity, specificity and
efficiency of the weight coefficients are determined in steps 2618,
2620 and 2622, respectively. Those quantities are used in a
feedback loop to optimize the weight coefficients; the efficiency
is used directly, while the sensitivity and specificity are used
through an ROC curve in step 2624.
[0165] Then we apply discriminant analysis and a neural network as
non-linear classification methods to the hyperspace constructed by
the multiple features. We calculate the receiver operating
characteristics (ROC) curve and then select the best combination
of those features. The network is trained by supervised learning,
in which the correct classification is presented to the output
layer during training, and the weights are adjusted so as to make
the network's output more closely match the correct classification
at each point. After training, the network outputs the
classification when presented with an input vector at its input
layer. Performance of the trained network is then measured by
receiver operating characteristics (ROC) curve. When a neural
network is applied to a classification problem, an entire ROC curve
can be calculated based on its output when the data set is
presented to the network. Each point along that ROC has a pair of
values of sensitivity and 1-specificity. The best point along the
curve can be selected, taking into account the consequences of high
or low sensitivity versus specificity.
[0166] Finally, since the exact weights of those features may
differ with the TRUS image quality obtained from different
ultrasound machines, different machine settings, etc., we optimize
the weights of the features by finding the best match to the day 0
CT seed identification results. The features used thus far are
maximum minus mean, standard deviation, correlation coefficient,
area divided by Euler number, skewness, kurtosis, and band-pass
filtering using the FFT.
[0167] The relevant statistical quantities will now be defined. It
will be seen that each of the quantities builds on previous
quantities.
[0168] Definition of mean: For univariate data Y_1, Y_2, . . . ,
Y_N, the formula for the mean is

Ȳ = ( Σ_{i=1}^{N} Y_i ) / N

[0169] where N is the number of points.

[0170] Definition of standard deviation: For univariate data Y_1,
Y_2, . . . , Y_N, the formula for the standard deviation is

s = √( Σ_{i=1}^{N} (Y_i − Ȳ)² / N )

[0171] Definition of skewness: For univariate data Y_1, Y_2, . . .
, Y_N, the formula for skewness is

skewness = Σ_{i=1}^{N} (Y_i − Ȳ)³ / ( (N − 1) s³ )

[0172] Definition of kurtosis: For univariate data Y_1, Y_2, . . .
, Y_N, the formula for kurtosis is

kurtosis = Σ_{i=1}^{N} (Y_i − Ȳ)⁴ / ( (N − 1) s⁴ )

[0173] Definition of correlation coefficient: For two univariate
data sets X and Y, the formula for the correlation coefficient is

r = ( ΣXY − (ΣX)(ΣY)/N ) / √( ( ΣX² − (ΣX)²/N ) ( ΣY² − (ΣY)²/N ) )

[0174] Definition of area divided by Euler number: the sub-image is
first converted to a binary image by thresholding; the area and
Euler number of that binary image are then computed. The area is
the number of "on" pixels in the image, and the Euler number is the
number of objects in the image minus the total number of holes in
those objects.
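The statistical definitions above translate directly into code; each quantity builds on the previous ones exactly as stated (note the population denominator N in s, but N − 1 in skewness and kurtosis). The sample values are illustrative:

```python
import numpy as np

def moment_features(y):
    """Mean, standard deviation, skewness, and kurtosis,
    per the definitions in paragraphs [0168]-[0172]."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    mean = y.sum() / n
    s = np.sqrt(((y - mean) ** 2).sum() / n)
    skewness = ((y - mean) ** 3).sum() / ((n - 1) * s ** 3)
    kurtosis = ((y - mean) ** 4).sum() / ((n - 1) * s ** 4)
    return mean, s, skewness, kurtosis

def correlation(x, y):
    """Correlation coefficient per the definition in [0173]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    num = (x * y).sum() - x.sum() * y.sum() / n
    den = np.sqrt(((x ** 2).sum() - x.sum() ** 2 / n)
                  * ((y ** 2).sum() - y.sum() ** 2 / n))
    return num / den

mean, s, skew, kurt = moment_features([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
r = correlation([1, 2, 3, 4], [2, 4, 6, 8])
```

The area and Euler number of the thresholded binary image require connected-component labeling and are omitted from this sketch.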
[0175] The two-dimensional needle pattern can be manually adjusted
before the automatic needle-tracking algorithm is applied. After
the final scan image series is captured, the starting positions for
automatic needle tracking are important to the algorithm. Because
of prostate swelling and seed migration, the needle-tracking
starting positions may differ from the positions previously clicked
on the computer screen during the needle insertion procedure. The
needle pattern must therefore be shifted and scaled as a group, and
any single needle can be adjusted in its position as desired
according to the current TRUS image series. The user interface
explained above, or any other suitable user interface, can be used.
[0176] FIGS. 27A-27D illustrate the process. FIG. 27A shows the
original needle pattern. FIG. 27B shows needle pattern expansion;
FIG. 27C, shift; and FIG. 27D, single needle movement. There is no
required order for the scale and shift movements; they may be
performed alternately several times to obtain the best match.
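The group-wise expansion and shift of FIGS. 27B and 27C can be sketched as a simple affine adjustment of the 2D needle positions. The pattern coordinates below are illustrative:

```python
import numpy as np

def adjust_pattern(points, scale=1.0, shift=(0.0, 0.0)):
    """Scale the 2D needle pattern about its centroid (e.g., for
    prostate swelling), then shift it as a group. A single needle
    can still be moved individually afterwards."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    return (pts - c) * scale + c + np.asarray(shift, dtype=float)

pattern = [(0, 0), (2, 0), (0, 2), (2, 2)]
expanded = adjust_pattern(pattern, scale=1.5)         # cf. FIG. 27B: expansion
shifted = adjust_pattern(pattern, shift=(0.5, -0.3))  # cf. FIG. 27C: shift
```

Scale and shift commute here, consistent with the text's statement that no particular order is required.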
[0177] A global search method can be used to identify radioactive
seeds in TRUS image series. The global search method can be used to
identify radioactive seed candidates in TRUS image series and use
their positions to correct minor bias of the seed positions
identified by needle tracking/seed segmentation algorithm.
Alternatively, the global search method can be used to identify
radioactive seed candidates in TRUS image series and use their
positions and planned needle pattern to match out the final needle
pattern.
[0178] After the prostate boundaries are drawn, either manually or
automatically, a 3D global search is executed within the prostate
boundaries. It searches the local maximum in the 3D data cube,
which is composed of the measurements of each 3D coordinate. (For a
different imaging technology, the local minimum could be used
instead.) The measurement is a weighted summation of various image
features over a small rod-shaped region 4.5 mm long, 1 mm wide, and
1 mm high, matching the geometry of a seed. FIGS. 28A-28C show an
example: FIG. 28A is an original ultrasound image; FIG. 28B is the
grayscale-transformed image, in which the radioactive seed echoes
are more distinct; FIG. 28C is the measurement of the grayscale
summation at each point, from which one can see the clear
relationship between the bright radioactive seed echo points and
the local maxima of the measurement matrix. That is only an
example. The measurement, in fact, can be a weighted summation of
various image features.
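The 3D local-maximum search over the measurement cube can be sketched as follows. The cube below is a toy stand-in in which the "measurement" at each voxel would, in practice, be the weighted feature summation over a seed-sized neighborhood:

```python
import numpy as np

def seed_candidates(measure, threshold):
    """Return voxels that are local maxima of the measurement cube
    (3x3x3 neighborhood) and exceed the given threshold."""
    candidates = []
    nz, ny, nx = measure.shape
    for z in range(1, nz - 1):
        for y in range(1, ny - 1):
            for x in range(1, nx - 1):
                block = measure[z - 1:z + 2, y - 1:y + 2, x - 1:x + 2]
                if measure[z, y, x] >= threshold and measure[z, y, x] == block.max():
                    candidates.append((z, y, x))
    return candidates

cube = np.zeros((5, 5, 5))
cube[2, 2, 2] = 10.0  # one bright, seed-like response
cands = seed_candidates(cube, threshold=5.0)
```

For an imaging technology where seeds appear dark, the same search would be run on the negated measurement, finding local minima instead.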
[0179] That method can identify most clearly visible radioactive
seeds in a TRUS image series. However, it can also mis-identify
some other echo-bright points, such as calcifications, bleeding,
and air gaps, as radioactive seeds. So the identified results may
be used only as "seed candidates," not as radioactive seeds
themselves. The seed candidates can be used either to match out the
final needle pattern with the comparison of planned needle pattern
(or manually clicked needle pattern) or to correct minor bias of
the seed positions identified by needle tracking/seed segmentation
algorithm.
[0180] For the former use, the 3D pattern of the seed candidates
set is projected onto a 2D plane and matched by the 2D pattern of
needles, which is drawn on the screen in real time when the needle
is inserted. Since the 2D needle pattern can be adjusted as a whole
for shift and scale changes, and each needle position can be
adjusted separately, the final needle pattern may be obtained in
reference to the 2D projection pattern of the seed candidates.
[0181] For the latter use, since there may be some minor bias when
tracking needles, the final detected seed positions may not lie
exactly at the three-dimensional geometric centers of the
radioactive seeds. So we can compare their positions with the seed
candidates; if any pair is within a distance threshold, for
example, 0.15 cm in three dimensions, we can consider the pair to
be one radioactive seed and merge it to the position of the seed
candidate. By such a method, we can correct some of the minor bias
of the needle-tracking-based identification algorithm.
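The merging rule just described can be sketched directly; positions are in cm, and the coordinates below are illustrative:

```python
import numpy as np

def merge_with_candidates(detected, candidates, threshold=0.15):
    """Snap each needle-tracking seed position to the nearest
    global-search candidate within `threshold` cm (3D distance);
    positions with no nearby candidate are kept unchanged."""
    merged = []
    for d in detected:
        dists = [float(np.linalg.norm(np.subtract(d, c))) for c in candidates]
        i = int(np.argmin(dists))
        merged.append(tuple(candidates[i]) if dists[i] <= threshold else tuple(d))
    return merged

detected = [(1.00, 2.00, 3.05), (4.0, 4.0, 4.0)]    # from needle tracking
candidates = [(1.0, 2.0, 3.0), (9.0, 9.0, 9.0)]     # from the global search
merged = merge_with_candidates(detected, candidates)
```

The first detected seed lies 0.05 cm from a candidate and is snapped to it; the second has no candidate within 0.15 cm and is left where needle tracking placed it.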
[0182] While various preferred embodiments of the present invention
have been set forth above in detail, those skilled in the art who
have reviewed the present disclosure will readily appreciate that
other embodiments can be realized within the scope of the present
invention. For example, the numerical values and statistical
parameters set forth above should be construed as illustrative
rather than limiting. The same is true of the arrangement of the
user interface of FIG. 3. Also, whenever radioactive seeds are
specified, it will be understood that other radioactive sources or
other therapeutic agents can be used instead. Furthermore, two sets
of images taken in orthogonal directions can be replaced, whenever
feasible, by two sets of images taken at a large angle, or any
non-zero angle, to each other. Moreover, even though various
preferred embodiments are taught separately, it will be understood
that they can be used together as needed. Therefore, the present
invention should be construed as limited only by the appended
claims.
* * * * *