U.S. patent application number 10/922700 was filed with the patent office on 2005-01-27 for displaying image data using automatic presets.
Invention is credited to Bissell, Andrew John, Poole, Ian.
Application Number | 20050017972 10/922700 |
Document ID | / |
Family ID | 29549654 |
Filed Date | 2005-01-27 |
United States Patent
Application |
20050017972 |
Kind Code |
A1 |
Poole, Ian ; et al. |
January 27, 2005 |
Displaying image data using automatic presets
Abstract
A computer automated method that applies supervised pattern
recognition to classify whether voxels in a medical image data set
correspond to a tissue type of interest is described. The method
comprises a user identifying examples of voxels which correspond to
the tissue type of interest and examples of voxels which do not.
Characterizing parameters, such as voxel value, local averages and
local standard deviations of voxel value are then computed for the
identified example voxels. From these characterizing parameters,
one or more distinguishing parameters are identified. The
distinguishing parameter are those parameters having values which
depend on whether or not the voxel with which they are associated
corresponds to the tissue type of interest. The distinguishing
parameters are then computed for other voxels in the medical image
data set, and these voxels are classified on the basis of the value
of their distinguishing parameters. The approach allows tissue
types which differ only slightly to be distinguished according to a
user's wishes.
Inventors: |
Poole, Ian; (Edinburgh,
GB) ; Bissell, Andrew John; (Edinburgh, GB) |
Correspondence
Address: |
RENNER OTTO BOISSELLE & SKLAR, LLP
1621 EUCLID AVENUE
NINETEENTH FLOOR
CLEVELAND
OH
44115
US
|
Family ID: |
29549654 |
Appl. No.: |
10/922700 |
Filed: |
August 20, 2004 |
Related U.S. Patent Documents
|
|
|
|
|
|
Application
Number |
Filing Date |
Patent Number |
|
|
10922700 |
Aug 20, 2004 |
|
|
|
10726280 |
Dec 1, 2003 |
|
|
|
10726280 |
Dec 1, 2003 |
|
|
|
10212363 |
Aug 5, 2002 |
|
|
|
6658080 |
|
|
|
|
Current U.S.
Class: |
345/424 |
Current CPC
Class: |
A61B 8/461 20130101;
G06T 7/12 20170101; A61B 5/7445 20130101; G06T 2207/30101 20130101;
G06T 2210/41 20130101; A61B 6/466 20130101; G06T 2207/10081
20130101; G06T 2200/04 20130101; G06T 19/20 20130101; G06T
2219/2012 20130101 |
Class at
Publication: |
345/424 |
International
Class: |
G06T 017/00 |
Claims
What is claimed is:
1. A method of numerically processing a medical image data set
comprising voxels, the method comprising: (a) receiving user input
to positively and negatively select voxels that are and are not of
a tissue type of interest; (b) determining a distinguishing
function that discriminates between the positively and negatively
selected voxels on the basis of one or more characterizing
parameters of the voxels; and (c) classifying further voxels in the
medical image data set on the basis of the distinguishing
function.
2. The method according to claim 1, further comprising presenting
an example image representing the medical image data set to a user,
wherein the user positions a pointer at locations in the example
image to select corresponding voxels.
3. The method according to claim 2, wherein a selected voxel is
taken to be a voxel whose coordinates in the data set map to the
location of the pointer in the example image.
4. The method according to claim 2, wherein selected voxels are
taken to be voxels in a region surrounding a voxel whose
coordinates in the data set map to the location of the pointer in
the example image.
5. The method of claim 1, further comprising rendering an image of
the medical image data set, wherein the rendering takes account of
the classification of voxels, and displaying the image to the
user.
6. The method of claim 1, further comprising rendering an image of
the medical image data set, wherein the rendering is of a volume
data set representing values of the distinguishing function.
7. The method according to claim 1, wherein at least one of the one
or more characterizing parameters is a function of surrounding
voxels.
8. The method of claim 1, further comprising further classifying
voxels on the basis of the morphology of their respective
classifications in the medical image data set.
9. The method of claim 1, wherein the distinguishing function is
determined by computing the characterizing parameters for the
selected voxels and taking as the distinguishing function the value
of at least one characterizing parameter whose value depends on
whether its associated voxel has been positively or negatively
selected.
10. The method of claim 1, wherein voxels are classified as either
corresponding to the tissue type of interest or not corresponding
to the tissue type of interest.
11. The method of claim 10, further comprising rendering an image
of the medical image data set, wherein the rendering takes account
of the classification of voxels, and displaying the image to a
user.
12. The method of claim 11, wherein voxels classified as not
corresponding to the tissue type of interest are rendered as
transparent.
13. The method of claim 11, wherein voxels classified as
corresponding to the tissue type of interest are rendered as
transparent.
14. The method of claim 11, wherein voxels classified as
corresponding to the tissue type of interest are rendered in one
range of displayable colors and voxels classified as not
corresponding to the tissue type of interest are rendered in
another range of displayable colors.
15. The method of claim 1, wherein voxels are classified by
associating with them a probability that they correspond to the
tissue type of interest.
16. The method of claim 15, further comprising rendering an image
of the medical image data set, wherein the rendering takes account
of the classification of voxels, and displaying the image to a
user.
17. The method of claim 16, wherein the rendering takes account of
the classification by rendering a volume data set representing the
probability that the voxels correspond to the tissue type of
interest.
18. The method of claim 17, wherein voxels having a probability of
corresponding to the tissue type of interest of less than a
threshold level are rendered as transparent.
19. The method of claim 18, further comprising adjusting the
threshold level and re-rendering the image.
20. The method of claim 17, wherein a pre-determined fraction of
the voxels having the lowest probabilities of corresponding to the
tissue type of interest are rendered as transparent.
21. The method of claim 1, wherein the user input includes clinical
information regarding at least one of the positively and negatively
selected voxels, and wherein the distinguishing function is
determined from the characterizing parameters having regard to the
clinical information.
22. The method of claim 21, wherein the clinical information
specifies tissue type.
23. The method of claim 1, wherein the medical image data set
comprises a data set representing the probability that the voxels
correspond to a tissue type of interest determined in a previous
iteration of the method.
24. The method of claim 1, wherein the user input is prompted by
displaying an image to a user from a 3D data set comprising a
plurality of voxels, each with an associated signal value.
25. The method of claim 24, wherein the displaying of the image to
the user comprises: selecting a volume of interest (VOI) within the
3D data set; generating a histogram of signal values from voxels
that are within the VOI; applying a numerical analysis method to
the histogram to determine a visualization threshold; and setting
at least one of a plurality of boundaries for a visualization
parameter according to the visualization threshold.
26. A computer program product bearing computer readable
instructions for performing the method of claim 1.
27. A computer apparatus loaded with computer readable instructions
for performing the method of claim 1.
28. Apparatus for numerically processing a medical image data set
comprising voxels, the apparatus comprising: (a) storage from which
a medical image data set may be retrieved; (b) a user input device
configured to receive user input to positively and negatively
select voxels that are and are not of a tissue type of interest;
and (c) a processor configured to determine a distinguishing
function that discriminates between the positively and negatively
selected voxels on the basis of one or more characterizing
parameters of the voxels; and to classify further voxels in the
medical image data set on the basis of the distinguishing function.
Description
BACKGROUND OF THE INVENTION
[0001] The invention relates to the setting of visualization
parameter boundaries, such as color and opacity boundaries, for
displaying images, in particular two-dimensional (2D) projections
from three-dimensional (3D) data sets.
[0002] When displaying an image, such as in medical imaging
applications, it is known to associate particular signal values
with particular colors and opacities (known as visualization
parameters) to assist visualization. This mapping is done when
using data from a 3D data set (voxel data set) to compute a 2D data
set (pixel data set) representing a 2D projection of the voxel data
set for display on a computer screen or other conventional 2D
display apparatus. This process is known as rendering.
[0003] The 2D data set is more amenable to user interpretation if
different colors and opacities are allocated to different signal
values in the 3D data set. The details of the mapping of signal
values to colors and opacities are stored in a look-up table which
is often referred to as the RGBA color table (R, G, B and A
referring to red, green, blue and alpha (for opacity)
respectively). The color table can be defined such that an entire
color and opacity range is uniformly distributed between the
minimum and maximum signal values in the voxel data set, as in a
gray scale. Alternatively, the color table can be defined by
attributing different discrete colors and opacities to different
signal value ranges. In more sophisticated approaches, different
sub-ranges are ascribed different colors (e.g. red) and the shade
of the color is smoothly varied across each sub-range (e.g. crimson
to scarlet).
[0004] When displaying data such as in medical imaging, the signal
values comprising the data set do not usually correspond to what
would normally be regarded as visual properties, such as color or
intensity, but instead correspond to detected signal values from
the measuring system used, such as computer-assisted tomography
(CT) scanners, magnetic resonance (MR) scanners, ultrasound
scanners and positron-emission-tomography (PET) systems. As an
example, signal values from CT scanning will represent tissue
opacity, i.e. X-ray attenuation. In order to improve the ease of
interpretation of such images it is known to map different colors
and opacities to different ranges of display value such that
particular features, e.g. bone (which will generally have a
relatively high opacity) can be more clearly distinguished from
soft tissue (which will generally have a relatively low
opacity).
[0005] When displaying a 2D projection of a 3D data set, in
addition to attributing distinct ranges of color to voxels having
particular signal value ranges, voxels within the 3D data set may
also be selected for removal from the projected 2D image to reveal
other more interesting features. The choice of which voxels are to
be removed, or sculpted, from the projected image can also be based
on the signal value associated with particular voxels. For example,
those voxels having signal values which correspond to soft tissue
can be sculpted, i.e. not rendered and therefore "invisible",
thereby revealing those voxels having signal values corresponding
to bone which would otherwise be visually obscured by the soft
tissue.
[0006] The determination of the most appropriate color table (known
in the art as a preset) to apply to an image derived from a
particular 3D data set is not trivial and is dependent on many
features of the 3D data set. For example, the details of a suitable
color table will depend on the subject, what type of data is being
represented, whether (and if so, how) the data are calibrated and
what particular features of the 3D data set the user might wish to
highlight, which will depend on the clinical application. It can
therefore be a difficult and laborious task to produce a displayed
image that is clinically useful. Furthermore, there is inevitably
an element of user-subjectivity in manually defining a color table
and this can create difficulties in comparing and interpreting
images created by different users, or even supposedly similar
images created by a single user. In addition, the user will
generally base the choice of color table on a specific 2D
projection of the 3D data set rather than on characteristics of the
overall 3D data set. A color table chosen for application to one
particular projected image will not necessarily be appropriate to
another projection of the same 3D data set. A color table which is
objectively based on characteristics of the 3D data set rather than
a single projection would be preferred.
[0007] Accordingly, there is a need in the art for a method of
automatically determining appropriate color table presets when
displaying medical image data.
SUMMARY OF THE INVENTION
[0008] According to the invention there is provided a method of
setting visualization parameter boundaries for displaying an image
from a 3D data set comprising a plurality of voxels, each with an
associated signal value, comprising: selecting a volume of interest
(VOI) within the 3D data set; generating a histogram of signal
values from voxels that are within the VOI; applying a numerical
analysis method to the histogram to determine a visualization
threshold; and setting at least one of a plurality of boundaries
for a visualization parameter according to the visualization
threshold.
[0009] By restricting the histogram to voxels taken from the VOI, a
numerical analysis method can be applied to the histogram which is
sensitive to subtle variations in signal value and can reliably
identify significant boundaries within the 3D data set for
visualization. This allows the visualization parameter boundaries
to be set automatically, which is especially useful for 3D data
sets for which the signal values have no calibration, as is the
case for MR scans.
[0010] In some embodiments, a first visualization parameter
boundary is set at the visualization threshold. In other
embodiments, first and second visualization parameter boundaries
are set either side of the visualization threshold. This latter
approach can be advantageous if an opacity curve interpolation
algorithm is used to calculate an opacity curve between the
visualization parameter boundaries.
[0011] The numerical analysis method may be applied once to
determine only one visualization threshold. Remaining visualization
parameter boundaries can then be set manually. Alternatively, the
numerical analysis method can be applied iteratively to the
histogram to determine a plurality of visualization thresholds and
corresponding visualization parameter boundaries.
[0012] A significance test may be applied to visualization
thresholds and, according to the outcome of the significance test,
a significance marker can be ascribed for those ones of the voxels
having signal values at or adjacent the visualization threshold,
wherein the significance marker indicates significance or
insignificance of the visualization threshold.
[0013] If two visualization parameter boundaries are set, one each
side of the visualization threshold, and the visualization
threshold is determined to be significant, then it is convenient to
mark as significant only the voxels having signal values at one of
the two visualization parameter boundaries. In one example, if a
visualization threshold is calculated by the numerical analysis
method to lie at a signal value of 54, and visualization parameter
boundaries are set at 54.+-.3, i.e. at 51 and 57, then the voxels
with signal values of 57 can be marked as significant, and the
voxels with signal values of 51 as insignificant.
[0014] The significance test can be used to distinguish between
visualization parameter boundaries used as enhancements to
visualizations of a single tissue type (known as cosmetic
boundaries) and those used to identify different tissue-types for
the purpose of segmentation (known as significant boundaries).
Accordingly, the method may further comprise applying a selection
tool to the 3D data set, wherein the selection tool is sensitive to
the significance markers. One or more of the selection tools can be
designed to ignore voxels that have been marked as
insignificant.
[0015] The rate of change of a visualization parameter across a
visualization parameter boundary may also be modified based on the
significance of the visualization parameter boundary. A sharpness
parameter can be calculated for determining what rate of change of
the visualization parameter to apply at a boundary.
[0016] In some embodiments of the invention, the sharpness
parameter is the same as the significance marker. The sharpness
need not simply be a binary operand, but can adopt a range of
integer values, for example from 0 to 100. A sharpness of zero
indicates an insignificant boundary, which is referred to as a
cosmetic boundary in view of its irrelevance to selection tools. A
sharpness of 100 indicates a boundary that has the maximum degree
of significance. Intermediate values are used to indicate
intermediate significance. In addition to affecting the blending of
visualization parameters, the non-zero values may be used for
filtering by the selection tools so that boundaries with a
significance value of, for example, 5 are significant to some but
not all selection tools, a boundary with a significance value of 50
is significant for a greater subset of the selection tools, and a
boundary with the maximum significance value of 100 is significant
to all selection tools. Alternatively, the non-zero significance
values may be used by selection tools to resolve conflicts between
different marked boundaries, with boundaries having higher
significance values taking precedence. Examples of selection tools
are tools for marking objects in a set of connected or unconnected
voxels with a visualization parameter (e.g. color or opacity)
between two significant visualization parameter boundaries,
multiple groups of connected or unconnected voxels above a
significant boundary or multiple bands of connected or unconnected
voxels below a significant boundary. Marked voxels could then, for
example, be sculpted. Sculpting is a well known term of art used to
describe voxels that are marked to be transparent from view
irrespective of their signal values.
[0017] In the best mode of the invention, the numerical analysis
method comprises: forming a convex hull of a plurality of segments
around the histogram; determining which perpendicular from the
segments to the histogram has the greatest length; and taking the
signal value at the intersection between the histogram and the
perpendicular as the visualization threshold. The sharpness value
and the significance test can then be based on the length of the
perpendicular determined to have the greatest length. For example,
the visualization threshold can be determined to be insignificant
if the ratio of the length of the perpendicular to a parameter
derived from the signal value range and/or the frequency range of
the histogram is below a minimum score.
[0018] For some automatic presets, the numerical analysis method is
applied to the histogram within a predetermined restricted range of
signal values to search for a visualization threshold within that
restricted range. This will be particularly useful for 3D data sets
with calibrated signal values, such as X-ray data sets calibrated
in Hounsfield units. Accordingly, the restricted range may be
defined in terms of Hounsfield units.
[0019] To provide the user with information about the nature of the
automatically calculated thresholds, the histogram and its
visualization parameter boundaries can be displayed to the user
together with the image created from the 3D data set, thus making
the user aware of the visualization parameter boundaries determined
by the automatic preset.
[0020] The method of the invention is particularly powerful in that
it can take account of sculpting performed on the 3D data set prior
to automatic preset determination according to the invention. A
common example of sculpting will be when a plane is defined through
a 3D data set and all voxels to one side of the plane are not
rendered, irrespective of their signal values. Another example of
sculpting will be the removal of a given set of connected voxels
with signal values in a specified range, thus restricting the range
of signal values to be visualized prior to determining an automatic
preset. Sculpting can be taken account of by restricting the
histogram to unsculpted voxels in the VOI.
[0021] It has been recognized that voxels with the highest and
lowest signal values often constitute bad data which can skew the
results of the numerical analysis of the histogram. Accordingly, it
is preferred that voxels with the highest and/or the lowest signal
values are excluded from the numerical analysis method. For
example, the voxels with the lowest and highest 0.1% of the signal
values can be excluded. Other proportions could also be
envisaged.
[0022] In some implementations the method may operate
interactively. In such cases, if a user re-defines the VOI, the
method of setting visualization parameter boundaries is
automatically reapplied to continuously provide the most
appropriate visualization parameter boundaries.
[0023] The invention further provides a computer program product
bearing computer readable instructions for performing the method of
the invention.
[0024] The invention also provides a computer apparatus loaded with
computer readable instructions for performing the method of the
invention.
[0025] According to a further aspect of the invention there is
provided a method of numerically processing a medical image data
set comprising voxels, the method comprising: receiving user input
to positively and negatively select voxels that are and are not of
a tissue type of interest; determining a distinguishing function
that discriminates between the positively and negatively selected
voxels on the basis of one or more characterizing parameters of the
voxels; and classifying further voxels in the medical image data
set on the basis of the distinguishing function. This method thus
applies supervised pattern recognition to classify the voxels.
[0026] By receiving input in response to a user specifying both
positive examples of voxels (i.e. those which do correspond to the
tissue type of interest) and negative examples of voxels (i.e.
those which do not correspond to the tissue type of interest), the
method is able to objectively classify further voxels in the data
set. Because of this, the method provides for an easy and intuitive
to use technique for allowing users to select regions of interest
for further examination or removal from the data set.
[0027] The method may include presenting a representative (2D)
image derived from the (3D) medical image data set to a user, such
as a sagittal, coronal or transverse section view, whereby the user
selects voxels by positioning a pointer at appropriate locations in
the example image. An example voxel may then be taken to be a voxel
whose coordinates in the medical image data set map to the location
of the pointer in the example image. Alternatively, for a single
positioning of the pointer, a number of example voxels may be
selected, for example those in a region surrounding a voxel whose
coordinates in the data set map to the location of the pointer in
the example image may be taken as being selected. Selecting
multiple voxels with a single positioning of the cursor allows for
a more statistically significant sample of example voxels to be
provided with little additional user input.
[0028] At least one of the one or more characterizing parameters of
a voxel may be a function of surrounding voxels. For example, a
local average, a local standard deviation, gradient magnitude,
Laplacian, minimum value, maximum value or any other
parameterization may be used. This allows voxels to be classified
on the basis of characteristics of their surroundings, rather than
simply on the basis of their voxel value. This means that similar
tissue types can be properly classified more accurately than with
conventional classification methods based on voxel value alone.
This is because subtle difference in "texture" in the vicinity of a
voxel can help to distinguish it from other voxels having otherwise
similar voxel values. It is also noted that for some modalities
such as MR there may be multiple voxel values, such as T1 and T2 in
multi-spectral MR, which could each be used to define a separate
characterizing parameter. These could be used collectively in
combination to set the distinguishing function.
[0029] Moreover, the user input may additionally include clinical
information, such as specification of tissue type or anatomical
feature, regarding either the positively or negatively selected
voxels, or both. Following this user input, the distinguishing
function can then determined from the characterizing parameters
having regard to the clinical information input by the user.
[0030] Once the voxels have been classified, an image of the data
set may be rendered which takes account of the classification of
voxels. The rendered image may then be displayed to the user. For
example, the positively selected voxels may be tinted with a color
in a monochrome gray scale rendering.
[0031] In some examples, a binary classification may be used
whereby voxels are classified as either corresponding to the tissue
type of interest or not corresponding to the tissue type of
interest. In these cases, voxels classified as not corresponding to
the tissue type of interest may be rendered as transparent or
semi-transparent in a displayed image. The general practice of
rendering features that are not of interest as semi-transparent is
sometimes referred to as "dimming" in the art. Alternatively,
voxels which are classified as corresponding to the tissue type of
interest may be rendered as transparent, or voxels classified as
corresponding to the tissue type of interest may be rendered to be
displayed in one range of displayable colors and voxels classified
as not corresponding to the tissue type of interest being rendered
to be displayed in another range of displayable colors.
[0032] An image based on rendering a volume data set representing
the value of the distinguishing function of the voxels can also be
made.
[0033] In other examples, rather than using a binary
classification, voxels may be classified according to a calculated
probability that they correspond to the tissue type of interest. In
these cases, an image may be generated by rendering of a volume
data set representing the probability that the voxels correspond to
the tissue type of interest, rather than rendering based on voxel
values themselves. For example, the probability can be mapped onto
opacity of the rendered material instead of taking a threshold.
Another approach would be to render as transparent any voxels
having a probability of corresponding to the tissue type of
interest of less than a certain value.
[0034] Where the classification provides an estimated probability
for each voxel, the probabilities per voxel may themselves be
considered as voxel values in a medical image data set which may be
re-classified according in a subsequent iteration of the method.
This implements a form of relaxation labeling.
[0035] Further, it will be appreciated that the user input can be
prompted by displaying an image to a user from a 3D data set
comprising a plurality of voxels, each with an associated signal
value, for example by selecting a volume of interest (VOI) within
the 3D data set; generating a histogram of signal values from
voxels that are within the VOI; applying a numerical analysis
method to the histogram to determine a visualization threshold; and
setting at least one of a plurality of boundaries for a
visualization parameter according to the visualization
threshold.
[0036] According to a further aspect of the invention there is
provided an apparatus for numerically processing a medical image
data set comprising voxels, the apparatus comprising: storage from
which a medical image data set may be retrieved; a user input
device configured to receive user input to positively and
negatively select voxels that are and are not of a tissue type of
interest; and a processor configured to determine a distinguishing
function that discriminates between the positively and negatively
selected voxels on the basis of one or more characterizing
parameters of the voxels; and to classify further voxels in the
medical image data set on the basis of the distinguishing
function.
BRIEF DESCRIPTION OF THE DRAWINGS
[0037] The patent or application file contains at least one drawing
executed in color. Copies of this patent or patent application
publication with color drawing(s) will be provided by the Office
upon request and payment of the necessary fee.
[0038] For a better understanding of the invention and to show how
the same may be carried into effect reference is now made by way of
example to the accompanying drawings in which:
[0039] FIG. 1 shows a generic computer tomography scanner for
generating a 3D data set;
[0040] FIG. 2a shows a 2D projection of a 3D data set with tissue
opacity values being represented by a linear gray-scale;
[0041] FIG. 2b schematically shows a graphical representation of
the color and opacity curve mappings used in generating the 2D
image shown in FIG. 2a;
[0042] FIG. 2c shows a 2D projection of a 3D data set with ranges
of tissue opacity values being represented by ranges of a
gray-scale defined by presets;
[0043] FIG. 2d schematically shows a graphical representation of
the color and opacity curve mappings used in generating the 2D
image shown in FIG. 2c;
[0044] FIG. 2e shows a 2D projection of a 3D data set with ranges
of tissue opacity values being represented by ranges of colors
defined by presets;
[0045] FIG. 3 shows a histogram of data values within a volume of
interest (VOI) within a 3D data set;
[0046] FIG. 4 shows a flow chart of an automatic preset
determination method according to an embodiment of the
invention;
[0047] FIG. 5a shows a histogram of data values within a VOI within
a 3D data set and to which a first convex hull has been applied to
determine a first visualization threshold;
[0048] FIG. 5b shows a histogram of data values within a VOI within
a 3D data set and to which a second convex hull has been applied to
determine a second visualization threshold;
[0049] FIG. 5c shows a histogram of data values within a VOI within
a 3D data set and to which a third convex hull has been applied to
determine a third visualization threshold;
[0050] FIG. 5d shows a histogram of data values within a VOI within
a 3D data set and to which a fourth convex hull has been applied to
determine a fourth visualization threshold;
[0051] FIG. 6 shows a computer system for storing, processing and
displaying medical image data;
[0052] FIG. 7a shows a visualization state tool loaded with a VOI
from a 3D data set for which color boundaries have been determined
according to an automatic preset according to a first example of
the invention, referred to as "Active MR";
[0053] FIG. 7b shows an example image displayed according to the
automatic preset of FIG. 7a;
[0054] FIG. 8a shows a visualization state tool loaded with a VOI
from a 3D data set for which color boundaries have been determined
according to an automatic preset according to a second example of
the invention, referred to as "Active Bone (CT)";
[0055] FIG. 8b shows an example image displayed according to the
automatic preset of FIG. 8a;
[0056] FIG. 9a shows a visualization state tool loaded with a VOI
from a 3D data set for which color boundaries have been determined
according to an automatic preset according to a third example of
the invention, referred to as "Active Angio (CT)";
[0057] FIG. 9b shows an example image displayed according to the
automatic preset of FIG. 9a;
[0058] FIG. 10 schematically shows an example display of an image
and associated section views which a user may employ to identify a
tissue type of interest;
[0059] FIG. 11 is a flow chart schematically showing a method for
classifying whether voxels in a volume data set belong to a tissue
type of interest according to an embodiment of the invention;
and
[0060] FIGS. 12A-12D schematically show the distribution of a
number of different characterizing parameters computed for example
voxels identified by a user as belonging to different tissue
types.
DETAILED DESCRIPTION
[0061] FIG. 1 is a schematic perspective view of a generic CT
scanner 2 for obtaining a 3D scan of a region of a patient 4. An
anatomical feature of interest (in this case a head) is placed
within a circular opening 6 of the CT scanner 2 and a series of
X-ray exposures is taken. Raw image data is derived from the CT
scanner and could comprise a collection of one hundred 2D 512*512
data subsets, for example. These data subsets, each representing an
X-ray image of the region of the patient being studied, are subject
to image processing in accordance with known techniques to produce
a 3D representation of the feature imaged such that various
user-selected 2D projections of the 3D representation can be
displayed (typically on a computer monitor). The techniques for
generating such 3D representations of structures from collections
of 2D data subsets are known and will not be described further
herein.
[0062] FIGS. 2a, 2c and 2e show example 2D images of the same
projection from a 3D CT data set but with different presets. FIGS.
2b and 2d show graphical representations of the color and opacity
curve mappings used in generating the 2D images shown in FIG. 2a
and 2c respectively. FIGS. 2a-e are included to illustrate the
effect of presets on such images before describing how presets are
implemented in specific embodiments of the invention.
[0063] FIG. 2a shows an example 2D image which is a projection of a
3D data set obtained from a CT scanner. A VOI within the 3D data
set has been selected for display. The material surrounding the VOI
is not rendered in the projection. The image is displayed with a
uniform gray-scale ranging from black to white.
[0064] FIG. 2b schematically shows a graphical representation of
the color and opacity curve mappings used in generating the 2D
image shown in FIG. 2a. FIG. 2b includes a plot of a binned
frequency distribution of the signal values in the VOI with a
superposed line plot of opacity as a function of signal value (red
curve). The shading of the area under the binned frequency
distribution plot indicates the mapping of colors to signal value
in the rendering.
[0065] There are three tissue types in the projected image. These
are a region of bone 26, a region of soft tissue 30 and a barely
visible network of blood vessels 28. The high X-ray stopping power
of bone, compared to that of blood and soft tissue, makes the
region of bone 26 easily identifiable in the image due to the high
opacity associated with the associated voxels. However, since the
opacities of blood and soft tissue are more similar, they are not
as clearly distinguished. In particular, it is difficult to see the
blood vessel network.
[0066] FIG. 2c shows a 2D image of the same projection as FIG. 2a,
but rendered with a different color and opacity mapping, i.e. a
different preset.
[0067] FIG. 2d schematically shows a graphical representation of
the color and opacity curve mappings used in generating the 2D
image shown in FIG. 2c. FIG. 2d includes a plot of a binned
frequency distribution of the signal values in the VOI and a
superposed line plot of opacity as a function of signal value (red
curve). The shading of the area under the binned frequency
distribution plot again indicates the mapping of colors to signal
values in the rendering.
[0068] In FIG. 2c, signal values indicative of soft tissue voxels
have been colored significantly differently from signal values
indicative of blood vessel voxels. This makes the interpretation of
the blood vessels much clearer. In practice, however, a wider range
of colors will be available than can be reliably shown in a
black-and-white figure such as FIG. 2c, and the three tissue types
could be shown in distinctly different colors.
[0069] FIG. 2e shows a 2D image of the same projection as FIGS. 2a
and 2c, but rendered with a different color and opacity mapping,
i.e. a different preset. FIG. 2e shows the signal values of the
voxels within the VOI (and hence the corresponding portions in the
projected image) as different colors. Voxels associated with the
blood vessel network are shaded yellow, those of the soft tissue
are allocated shades of transparent red and those of the bone are
allocated shades of cream.
[0070] The range of displayable colors and opacities to which the
voxel signal values are to be mapped (i.e. the entries in the color
table) will in general depend on the specific application. For
example, in one application images might be represented using five
color ranges with the color ranges being black, red, orange, yellow
and white. The opacity level from black to white might range from
0% to 100% opacity with the colors mixing in relation to the
opacity curve. This would allow for a smooth transition between
bands, with the color and opacity values at the upper edge or
boundary of one range matching the color and opacity values at the
lower edge or boundary of the adjacent range. In this way, the five
ranges blend together at their boundaries to form a smoothly
varying and continuous spectrum. The colors at the bottom boundary
of the first range and the top boundary of the fifth range are
black and white respectively.
[0071] Each of these five color ranges can be mapped to the voxel
signal values which represent different tissue types to distinctly
identify the different tissues. For example, bone might be
represented in shades of cream, blood in shades of yellow, a kidney
in shades of orange and so on, all with varying opacities. The task
of attributing color and opacity to different tissue types becomes
one of determining suitable signal values, hereafter referred to as
visualization thresholds, for defining boundaries between the
ranges.
[0072] Sub-ranges of signal values between color boundaries may be
taken to represent different tissue types. The shades of color
available within the color range associated with each tissue type
can then be appropriately mapped to the sub-range of signal values.
For instance, in one example there are 32 shades of cream available
for coloring bone, and bone is associated with signal values
between 154 and 217 (in arbitrary units). A color look-up table
could then associate the first shade of cream with signal values
154 and 155, the second shade of cream with signal values 156 and
157, the third with 158 and 159 and so on through to associating
the thirty-second shade of cream with signal values 216 and
217.
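The shade-to-sub-range mapping described above can be sketched as a simple look-up table. This is an illustrative helper only (the function name and even-split rule are not recited in the application); the bone example uses the 154-217 signal values and 32 shades from the text.

```python
def build_lut(low, high, n_shades):
    """Evenly map each signal value in [low, high] to one of
    n_shades shade indices (hypothetical helper)."""
    span = high - low + 1
    return {v: (v - low) * n_shades // span for v in range(low, high + 1)}

# Bone example from the text: 32 shades of cream for values 154-217.
bone_lut = build_lut(154, 217, 32)
```

With 64 signal values and 32 shades, each shade covers two consecutive signal values, as in the worked example above.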
[0073] FIG. 3 is a histogram which schematically shows the binned
frequency distribution F of an example set of voxels within a
selected VOI as a function of signal value D. The signal values D
may be in arbitrary units (typical for MR) or calibrated units
(such as Hounsfield units (HU) that are used for CT and other types
of X-ray imaging). The histogram shown in FIG. 3 represents the
voxels within a VOI from which a 2D projected image (such as those
shown in FIGS. 2a, 2c and 2e) can be derived and the signal values
represent X-ray attenuation calibrated in HUs.
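A binned frequency distribution like the one in FIG. 3 can be computed directly from the VOI's voxel values. In the sketch below the four synthetic tissue peaks (means, spreads and counts) are invented purely to mimic the qualitative shape described for FIG. 3 and are not taken from the application.

```python
import numpy as np

# Hypothetical VOI: synthetic signal values mimicking four tissue peaks
# (these distributions are invented for illustration only).
rng = np.random.default_rng(0)
voi = np.concatenate([
    rng.normal(1200, 30, 500),    # narrow high-value peak (sub-range I)
    rng.normal(300, 80, 5000),    # broad peak (sub-range II)
    rng.normal(100, 40, 3000),    # shoulder (sub-range III)
    rng.normal(-500, 60, 8000),   # background (sub-range IV)
])
counts, bin_edges = np.histogram(voi, bins=256)
```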
[0074] The signal values D of the voxels within the selected VOI
are distributed between a minimum value S and a maximum value E.
Within this overall range of signal values, four distinct voxel
value sub-ranges are evident. A first voxel value sub-range I has a
narrow peak at relatively high signal values, a second voxel value
sub-range II has a relatively broad peak, a third voxel value
sub-range III has a shoulder on the lower signal value side of the
second voxel value sub-range II and a fourth voxel value sub-range
IV has a shoulder on the lower signal value side of the third voxel
value sub-range III.
[0075] The different voxel value sub-ranges I-IV identified in the
histogram are likely to relate to different tissue types in the 3D
data set and so would benefit from being displayed with different
color ranges. For example, one might reasonably infer that
sub-range I of high X-ray attenuation corresponds to bone,
sub-range II corresponds to blood, sub-range III corresponds to
soft tissue and sub-range IV represents the background tissue type
or air.
[0076] It is not necessary when displaying the images to
pre-associate voxel value sub-ranges in the histogram with
particular tissue types. In fact, if the signal values in the data
set are un-calibrated it may not even be possible to do so from the
signal values alone. Nonetheless, if distinct voxel value
sub-ranges in the histogram can be identified by a numerical
analysis, derived images can be shown with different tissue types
clearly and consistently displayed without the need for user-driven
post-display processing.
[0077] FIG. 4 is a flow chart showing the steps involved in
determining color boundaries for allocating displayable colors
based on a numerical analysis of signal values. In a first step 31,
a 3D data set of signal values is provided. The 3D data set in this
example is provided by an MR scanner and is calibrated in arbitrary
units. However, any 3D data set could equally well be used. In a
next step 32, a VOI within the data set is selected. This step of
selecting a VOI may be performed manually or automatically,
for instance using a connectivity algorithm. In a next step 33, a
histogram of the signal values of the voxels within the VOI is
generated, such as the one shown in FIG. 3, for example. In a next
step 34, a set of extreme signal values are identified. Exclusion
of this set of extreme signal values from subsequent steps of the
determination of color boundaries helps to avoid any undesirable
skewing of the results by signal values which are not considered
statistically significant. Such extreme values might be caused by
highly attenuating medical implants (such as screws) or defects in
the image data set, for example. By ignoring a fraction of the
highest and lowest signal values, such as the extreme 0.1% of
voxels at each end of the range, any of these extreme outlier
voxels within the VOI will not unduly skew the results of the
numerical analysis. However, if there are no extreme outlier
voxels, the numerical analysis will not be unduly affected by
ignoring a relatively small fraction of the histogram. After
excluding the extreme data, the histogram is effectively considered
to run between signal values L and U, where signal values between S
and L, and between U and E are those which are excluded. Whereas in
this example a default fraction of extreme voxels are excluded, the
fraction discarded could also depend on characteristics of the data
set and/or a subject being studied.
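The exclusion of extreme signal values in step 34 amounts to trimming a fixed fraction from each tail of the distribution. A minimal sketch, assuming the default 0.1% fraction mentioned above (the function name is illustrative):

```python
import numpy as np

def trimmed_range(values, fraction=0.001):
    """Signal range [L, U] after excluding the extreme `fraction`
    (0.1% by default) of voxels at each end of the distribution."""
    return (np.quantile(values, fraction),
            np.quantile(values, 1.0 - fraction))
```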
[0078] In a next step 35, an iteration parameter n is given an
initial value of 1. The iteration parameter is indicative of how
many visualization thresholds for determining visualization
parameters have already been determined; at this stage of the flow
chart, a value of n means that n-1 visualization thresholds have
previously been found. In a next step 36, an n.sup.th convex hull
of the histogram is determined. The n.sup.th convex hull is defined
to be a curve spanning the signal value range L to U and which
comprises that series of line segments which combine to form the
shortest possible single curve drawn between the histogram values
at L and U subject to the condition that at any given signal value
within the range, the curve must be greater than or equal to the
value of the histogram, and further subject to the condition that
the n.sup.th convex hull must also meet the histogram profile at
all previously identified color boundaries.
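The constrained hull defined above can be realized with a monotone-chain upper-hull scan applied piecewise between the fixed points (the endpoints plus any previously found thresholds). This is an illustrative sketch, not the implementation recited in the application; `xs` and `ys` are the histogram bin positions and values.

```python
def upper_hull(xs, ys):
    """Indices of the upper convex hull vertices of (xs[i], ys[i]),
    scanning left to right (monotone-chain method)."""
    hull = []
    for i in range(len(xs)):
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            # cross >= 0 means hull[-1] lies on or below the chord j--i,
            # so it cannot be an upper-hull vertex.
            cross = ((xs[k] - xs[j]) * (ys[i] - ys[j])
                     - (ys[k] - ys[j]) * (xs[i] - xs[j]))
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

def constrained_hull(xs, ys, fixed=()):
    """n-th convex hull: forced to meet the histogram at the index
    positions in `fixed` (previously found thresholds) and at both ends."""
    stops = sorted(set(fixed) | {0, len(xs) - 1})
    hull = []
    for a, b in zip(stops, stops[1:]):
        part = [a + i for i in upper_hull(xs[a:b + 1], ys[a:b + 1])]
        hull.extend(part if not hull else part[1:])
    return hull
```

Computing each hull piecewise between the fixed points automatically satisfies both conditions: the curve stays on or above the histogram, and it meets the histogram profile at every previously identified boundary.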
[0079] FIG. 5a shows the histogram previously shown in FIG. 3 but
on which the first (i.e. n.sup.th where n=1) convex hull spanning
the histogram has been drawn (since n=1, there are no previously
identified color boundaries at which the first convex hull must
meet the histogram). It can be seen that the first convex hull
contains two extended straight line portions marked b and a. These
are the distinct and continuous sections of the first convex hull
which deviate from following the histogram profile by "cutting
corners" and thereby minimizing the integrated length of the convex
hull.
[0080] In a next step 37, an n.sup.th visualization threshold
T.sub.n is found by determining the point on the histogram profile
for which the n.sup.th convex hull has the maximum nearest
distance. This is the point on the histogram from which the longest
possible line can be drawn to meet the n.sup.th convex hull
perpendicularly. This point, which must intersect the n.sup.th
convex hull on one of its extended straight line portions, can be
determined, to a finite accuracy, by a number of known techniques.
The longest perpendicular which can be drawn between the first
convex hull and the histogram profile shown in FIG. 5a connects to
the straight line portion marked b and is indicated in the figure
by the dotted-line marked c. This line intersects the histogram
profile at signal value T.sub.1 and so defines a first
visualization threshold of T.sub.1. The first visualization
threshold T.sub.1 divides the histogram into two ranges, one
running between signal values L and T.sub.1 and one running between
signal values T.sub.1 and U. It is apparent from FIG. 5a that the
range between signal values T.sub.1 and U corresponds closely with
the first voxel value sub-range I identified in the histogram
indicated in FIG. 3.
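Assuming a hull represented as a list of vertex indices (as a convex-hull routine might return), step 37 reduces to a point-to-chord distance scan. This sketch treats each extended straight line portion as the chord between consecutive hull vertices and uses the standard point-to-line distance formula; it is an illustration, not the application's stated method.

```python
def deepest_point(xs, ys, hull):
    """Histogram index with the greatest perpendicular distance below
    the hull (the candidate visualization threshold), plus that distance."""
    best_i, best_d = None, 0.0
    for a, b in zip(hull, hull[1:]):
        dx, dy = xs[b] - xs[a], ys[b] - ys[a]
        chord = (dx * dx + dy * dy) ** 0.5
        for i in range(a + 1, b):
            # Standard point-to-line distance from point i to chord a--b.
            d = abs(dx * (ys[i] - ys[a]) - dy * (xs[i] - xs[a])) / chord
            if d > best_d:
                best_i, best_d = i, d
    return best_i, best_d
```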
[0081] In a next step 38 shown in FIG. 4, a significance parameter
for the n.sup.th visualization threshold T.sub.n is determined. The
determination of the significance parameter is discussed further
below.
[0082] In a next step 39, one or more color boundaries are set
based on the n.sup.th visualization threshold T.sub.n. In this
example, a single color boundary threshold is set at the data value
matching the visualization threshold T.sub.n. In other examples,
depending on clinical application, and/or the requirements of
subsequent visualization tools, it may be preferable to associate
two color boundaries with a single visualization threshold. For
instance, by setting first and second color boundaries at signal
values slightly displaced to the lower and higher signal value side
of the signal value of an n.sup.th visualization threshold, for
example at data values T.sub.n+/-3 in arbitrary units, a rapid
change in colors and opacities allotted to signal values in the
vicinity of the visualization threshold occurs which can help to
highlight features of the boundary in a subsequently displayed 2D
image.
[0083] In a next step 40, a sharpness value is set for each of the
one or more color boundaries associated with the n.sup.th
visualization threshold T.sub.n. The sharpness value may be based
on the significance parameter of the n.sup.th visualization threshold
T.sub.n and can be used to assist in displaying images. The
sharpness value may, for example, range from 0 to 100 and be used
to determine a level of color blending between the colors near to a
color boundary. Increased color blending can be set to occur at
boundaries with relatively low sharpness values. This ensures that
low significance boundaries appear less harsh in a displayed image.
Conversely, little or no color blending is applied to boundaries
with relatively high sharpness values. This ensures high
significance boundaries appear well defined in a displayed image.
If multiple color boundaries are set according to a single
visualization threshold, it may be convenient to associate a
sharpness value based on the significance parameter of the
visualization threshold with only a single one of the multiple
color boundaries, and to set a fixed sharpness value for the other
of the multiple color boundaries. Since the sharpness values are
based on the significance parameter, sharpness values can also be
taken as a measure of the significance of a visualization threshold
and associated boundaries. The significance of a boundary may play
a role in further processing as discussed further below.
[0084] In a next step 41, a test is performed to determine whether
additional color boundaries are required. The iteration parameter
n, which at this stage of the flow chart indicates how many
visualization thresholds have been determined, is compared with the
total number (N.sub.total) of visualization thresholds required.
N.sub.total will depend on the number of displayable color ranges
and how many color boundaries have been set for each of the
visualization thresholds. For example, in this case, where all
visualization thresholds provide a single color boundary, and if
there are five displayable color ranges (hence four color
boundaries), four visualization thresholds are required and
N.sub.total=4. However, in another example, a particular
application might require a color boundary to be set either side of
the first visualization threshold and single color boundaries to be
set at each subsequent visualization threshold. In such a case the
four color boundaries associated with five displayable color ranges
would be set after determining only three visualization thresholds
since the first visualization threshold sets two color boundaries,
accordingly N.sub.total=3.
[0085] If it is determined that no further color boundaries are
required (i.e. n=N.sub.total) the flow chart follows the N-branch
from step 41 to step 42 where the preset determination is complete.
If further color boundaries are required (i.e. n<N.sub.total),
the flow chart follows the Y-branch from step 41 to a step 43 where
the iteration parameter n is incremented, and then returns to step
36 to continue as described above.
[0086] FIG. 5b shows the histogram previously shown in FIG. 3, but
on which the data value T.sub.n of the first visualization
threshold (and hence in this example also a first color boundary)
and the second (i.e. n.sup.th where n=2) convex hull are marked
(i.e. showing the results of step 36 in the flow chart shown in
FIG. 4 during the n=2 iteration). The second convex hull contains
four extended straight line portions which are marked f, e, d and a
in the figure. In the n=2 iteration of step 37, the point on the
histogram from which the longest possible line can be drawn to meet
the second convex hull perpendicularly is determined. It is noted
that the geometry of the extended line portion marked a in both
FIGS. 5a and 5b is not affected by the additional condition applied
to the second convex hull (i.e. that it pass through the histogram
value at T.sub.1). Accordingly it is not necessary to re-calculate
the perpendicular distances between this section of the convex hull
and the histogram profile since those previously determined may be
relied upon.
[0087] The longest perpendicular which can be drawn between the
second convex hull and the histogram profile shown in FIG. 5b
connects to the straight line portion marked f and is indicated in
the figure by the dotted-line marked g. This line intersects the
histogram profile at signal value T.sub.2 which combines with
T.sub.1 to divide the histogram into three ranges, one running
between signal values L and T.sub.2, one running between signal
values T.sub.2 and T.sub.1 and, as before, one running between
signal values T.sub.1 and U. It is apparent from FIG. 5b that the
range between signal values T.sub.2 and T.sub.1 corresponds closely
with the second voxel value sub-range II identified in the
histogram indicated in FIG. 3. The setting of color boundaries (and
their associated sharpness) which are linked to the second
visualization threshold T.sub.2 will be understood from the
above.
[0088] FIG. 5c again shows the histogram previously shown in FIG. 3
but with the third (i.e. n.sup.th where n=3) convex hull associated
with determining a third visualization threshold T.sub.3 also
shown. The convex hull in FIG. 5c passes through the histogram
values at both T.sub.1 and T.sub.2 as well as the lower and upper
end-points L and U and contains five extended straight line
portions marked j, h, e, d, and a. The longest perpendicular which
can be drawn between the third convex hull and the histogram
profile connects to the straight line portion marked j and is
indicated in the figure by the dotted-line marked k. This line
intersects the histogram profile at signal value T.sub.3 which
combines with T.sub.1 and T.sub.2 to divide the histogram into four
ranges, one running between signal values L and T.sub.3, one
running between signal values T.sub.3 and T.sub.2, one running
between signal values T.sub.2 and T.sub.1 and one running between
signal values T.sub.1 and U. It is apparent from FIG. 5c that the
ranges between signal values L and T.sub.3, and T.sub.3 and T.sub.2
corresponds closely with the fourth and third voxel value
sub-ranges IV, III respectively identified in the histogram
indicated in FIG. 3.
[0089] As noted above, in the example shown in FIG. 4 the iterative
determination of visualization thresholds continues until a pre-set
maximum number of associated boundaries have been determined, e.g.
if there are five color ranges available for display then four
color boundaries are required to define the associated five voxel
value sub-ranges within the histogram. When the requisite number of
boundaries are determined, the color mappings for displaying images
derived from the VOI on which the histogram analysis has been
performed can be generated by associating the shades available
within each color range with the signal values defined by the color
boundaries.
[0090] If there are P available color ranges for display and the
VOI contains P or more than P distinct tissue types, the method
outlined above automatically identifies P voxel value sub-ranges
from the histogram of the voxels contained in the VOI. For a given
type of data, an appropriate value of P (and hence number of
visualization thresholds to be identified) can be selected based on
the expected characteristics of the data set.
[0091] If, on the other hand, the VOI contains fewer than P
distinct tissue types, the allotment of P color ranges will cause
more than one color range to be allotted to at least one of the
tissue types. A color boundary which is placed within a signal
value range representing a single tissue type may appear confusing
in the display, especially if the user is unaware of it. As noted
above, color blending based on a sharpness value derived from the
significance parameter for each visualization threshold can be used
to de-emphasize such boundaries.
[0092] In addition to setting the sharpness value, the significance
parameter may also form the basis of a significance test for
determining the significance of a visualization threshold. The
significance parameter may, for example, derive from the length of
the determined longest perpendicular between the n.sup.th convex
hull and the histogram. The significance test may require that this
longest perpendicular is at least a pre-defined fraction of the
histogram's characteristic dimensions. For instance, the
significance test may require that the longest perpendicular be at
least 5% of the geometric height or width of the histogram. In
another example, the significance test may require that the longest
perpendicular be at least 10% of the value of the appropriately
normalized height and width of the histogram added in quadrature.
The height and width may be differently normalized to provide
different weighting. Whilst a default fraction, such as 5% or 10%,
may be used, the fraction may also be changed to better suit a
particular application and expected histogram characteristics. A
determined visualization threshold which lies within a signal value
range representing a single tissue type will fail an appropriately
configured significance test. Boundaries associated with these
visualization thresholds will be noted as cosmetic boundaries and
will be ignored by selection tools.
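The quadrature variant of the significance test could be sketched as follows. Here the longest perpendicular is expressed by its extent along each axis, and normalizing per axis by the histogram's width and height is one plausible reading of "appropriately normalized"; the function and its parameters are illustrative.

```python
def is_significant(perp_dx, perp_dy, hist_width, hist_height, frac=0.10):
    """Significance test: the longest perpendicular, normalized per axis
    by the histogram's width and height and combined in quadrature,
    must reach at least `frac` (10% by default)."""
    nx = perp_dx / hist_width
    ny = perp_dy / hist_height
    return (nx * nx + ny * ny) ** 0.5 >= frac
```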
[0093] FIG. 5d shows the histogram previously shown in FIG. 3, but
on which the fourth (i.e. n.sup.th where n=4) convex hull
associated with attempting to find a fourth visualization threshold
T.sub.4 is shown. The fourth convex hull meets the histogram at
signal values T.sub.1, T.sub.2 and T.sub.3. As noted above, the
three previously identified visualization thresholds, and hence
associated color boundaries, define four voxel value sub-ranges in
the histogram. As further noted above, the histogram example shown
in FIG. 3 corresponds to a VOI containing four distinct tissue
types and accordingly there are no more significant visualization
thresholds to be identified. This is reflected by the fact that the
longest perpendicular which can be drawn between the convex hull
and the histogram shown in FIG. 5d (marked by the line l) is
relatively small compared with the lines c, g and k used to define
the visualization thresholds T.sub.1, T.sub.2 and T.sub.3 and shown
in FIGS. 5a, 5b and 5c respectively. The visualization threshold
T.sub.4 defined by the line l shown in FIG. 5d is thus not
significant and would fail an appropriately configured significance
test.
[0094] Where a visualization threshold is deemed not to be
significant, for example with reference to a model of a histogram's
expected characteristics or by comparison with a pre-determined
minimum length of the longest perpendicular, it can either be noted
as such to provide a cosmetic color boundary in the color table
(and given an appropriate sharpness value of, for example, 0), or
it can be discarded. A cosmetic boundary is a boundary which is
defined in the color table, and thus relevant for display purposes,
but which is ignored by boundary-sensitive tools elsewhere in the
graphics system, such as object-selection tools whose algorithms
automatically search for and mark boundaries. One or more cosmetic
boundaries may
be determined iteratively in the manner described above until
a sufficient number are determined to satisfy the requirements of the
number of available displayable color ranges (i.e. j-1 color
boundaries for j displayable color ranges).
[0095] If color boundaries associated with visualization thresholds
which fail the significance test are to be discarded, rather than
kept but marked as cosmetic, the iterative search for further
visualization thresholds may cease after the first
significance-test-failing visualization threshold is identified. In
these circumstances there will be fewer identified voxel value
sub-ranges in the histogram (which nominally correspond to fewer
distinct tissue types within the VOI) than there are displayable
color ranges. Individual color ranges may be allotted to the
individual signal value ranges defined by the identified color
boundaries and the remaining color ranges unused, or cosmetic
boundaries can be artificially defined. For example, a cosmetic
boundary could be generated by defining a color boundary mid-way
between the two signal values of the most widely separated
significant color boundaries. If multiple cosmetic boundaries are
to be defined, the signal values with which to associate them can
be determined serially, i.e. one after another using the above
criterion, or in parallel such that they collectively divide the
widest signal value range into equal sections.
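Both placement strategies can be sketched in a few lines; the function below is a hypothetical helper, with `significant` holding the sorted significant boundary values (endpoints included by the caller).

```python
def cosmetic_boundaries(significant, n_extra, serial=True):
    """Generate `n_extra` cosmetic boundary values between the sorted
    significant boundaries (hypothetical helper; signal-value units)."""
    bounds = sorted(significant)
    if serial:
        added = []
        for _ in range(n_extra):
            # Repeatedly split the currently widest gap at its midpoint.
            width, a, b = max((b - a, a, b)
                              for a, b in zip(bounds, bounds[1:]))
            mid = (a + b) / 2
            added.append(mid)
            bounds.append(mid)
            bounds.sort()
    else:
        # Divide the single widest gap into n_extra + 1 equal sections.
        width, a, b = max((b - a, a, b)
                          for a, b in zip(bounds, bounds[1:]))
        step = width / (n_extra + 1)
        added = [a + step * k for k in range(1, n_extra + 1)]
    return added
```

For example, with significant boundaries at 0, 100 and 140, the serial strategy first splits 0-100 at 50, then splits 50-100 at 75, whereas the parallel strategy divides 0-100 directly into equal sections.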
[0096] The significance of a given color boundary may be a
continuous parameter and need not be purely binary, e.g. a color
boundary need not simply be significant or non-significant (i.e.
non-cosmetic or cosmetic). As noted above, sharpness values derived
from a significance parameter may be used as a direct measure of a
boundary's significance such that in many cases the sharpness value
itself will be used to directly indicate a boundary's significance.
Accordingly, significance may be given an insignificant level (e.g.
0) and one or more levels of significance (e.g. integers between 1
and 100). This facility may be used by other graphics tools within
the image processing system, for example to determine the
probability of a boundary being significant when attempting to
resolve conflicts when determining the bounds of a topological
entity. A zero level of sharpness (i.e. significance) is set for
boundaries which fail the significance test.
[0097] As indicated above, the visualization threshold's
significance could be based on the appropriately
normalized length of the perpendicular between the n.sup.th convex
hull and the histogram. By indicating to a user the significance of
the identified color boundaries used in the display, the
interpretation of the displayed image can be aided. The
significance of each of the color boundaries may also play a role
in the appropriate use of connectivity algorithms used to define
surfaces or volumes within the 3D data set which are associated
with features identified by a user in the projected image.
[0098] FIG. 6 schematically illustrates a general purpose computer
132 of the type that may be used to perform processing in
accordance with the above described techniques. The computer 132
includes a central processing unit 134, a read only memory 136, a
random access memory 138, a hard disk drive 140, a display driver
142 and display 144 and a user input/output circuit 146 with a
keyboard 148 and mouse 150 all connected via a common bus 152. The
central processing unit 134 may execute program instructions stored
within the ROM 136, the RAM 138 or the hard disk drive 140 to carry
out processing of signal values that may be stored within the RAM
138 or the hard disk drive 140. Signal values may represent the
image data described above and the processing may carry out the
steps described above and illustrated in FIG. 4. The program may be
written in a wide variety of different programming languages. The
computer program itself may be stored and distributed on a
recording medium, such as a compact disc, or may be downloaded over
a network link (not illustrated). The general purpose computer 132
when operating under control of an appropriate computer program
effectively forms an apparatus for processing image data in
accordance with the above described technique. The general purpose
computer 132 also performs the method as described above and
operates using a computer program product having appropriate code
portions (logic) for controlling the processing as described
above.
[0099] The image data could take a variety of forms, but the
technique is particularly well suited to embodiments in which the
image data comprises a collection of 2D images resulting from CT
scanning, MRI scanning, ultrasound scanning or PET that are
combined to synthesize a 3D object using known techniques. The
aided visualization of distinct features within such images can be
of significant benefit in the interpretation of those images when
they are subsequently projected into 2D representations along
arbitrarily selected directions that allow a user to view the
synthesized 3D object from any particular angle they choose.
[0100] Having described the principles of a method for
automatically determining presets, specific examples of its
application will now be given. In some applications, presets may be
required to satisfy further conditions before they are accepted for
defining color range boundaries (or other visualization
parameters). For example, in applications where bone visualization
within a CT data set is of prime interest, preset thresholds should
not be defined between those signal values representing soft tissue
and blood vessels since both of these should be made transparent in
the displayed image. Instead, less significant but more appropriate
thresholds within the range of signal values representing bone may
be preferred.
FIRST EXAMPLE
Active MR Preset
[0101] Magnetic resonance imaging (MR) data sets are generally
un-calibrated and display a wide range of data values, dependent
on, for example, acquisition parameter values or the position of
the VOI with respect to a scanner's detector coils during scanning.
Accordingly, it is not usually possible to pre-estimate suitable
signal values with which to attribute color range presets and this
makes the present invention especially useful for application to MR
data sets.
[0102] FIG. 7a schematically shows the appearance of a
visualization state tool displayed on the display 144 shown in FIG.
6. The display tool shows a user the outcome of a preset
determination method for a selected VOI. The visualization state
tool comprises a data display window 80, a color display bar 72, a
display of opacity values 74, a display of boundary positions 78, a
display of sharpness values 76 and a number of display modification
buttons 82. The color display bar identifies the five color ranges
available for display in this application, although no details of
the available shades within each range are shown. The display of
boundary positions 78 shows the signal values of the four
determined color boundaries. The individual display windows for
each of these four boundary positions are centered beneath the two
color ranges indicated on the color display bar 72 with which each
is associated. The display of sharpness values 76 can be used to
determine the significance of the four determined boundaries. The
display of opacity values 74 at each of the boundary positions (and
at the maximum signal value) is also shown (with example values 0,
0, 73, 90, 100).
[0103] The data display window shows a logarithmically scaled
histogram of the signal values for the entire 3D data set overlaid
with dashed and solid vertical lines marking the cosmetic and
significant boundary positions respectively. The boundary lines may
also be marked differently, for example based upon their relative
significance. The histogram is colored to represent the color table
and opacity mapping at each signal value indicated by the scale
along the top of the data display window. The frequency
distribution shown here is that of the entire image data set and is
not restricted to the VOI on which the preset determination is
based. In some cases it may aid a user's interpretation if the
histogram of the selected VOI is shown. This might be in place of
the histogram of the entire data set or indicated separately, such
as in a separate window or drawn as a curve overlaying the entire
image data set histogram. A curve is plotted overlaying the
frequency distribution which shows the opacity curve. The opacity
curve is formed by interpolation between the opacity values set for
each of the color boundary positions taking into account the
sharpness at each boundary, in this example 0, 0, 73, 90 and 100
for voxel values 11, 19, 102, 158 and the maximum voxel value
respectively. The display modification buttons 82 allow a user to
pan along the histogram shown in the display window and also allow
for repositioning color boundaries if required.
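The opacity interpolation described above can be sketched as follows. This is a minimal illustration assuming simple piecewise-linear interpolation between the boundary opacities; the sharpness weighting at each boundary is omitted, and the function name and the maximum signal value of 255 are illustrative assumptions, not part of the described embodiment:

```python
import numpy as np

def opacity_curve(boundaries, opacities, max_value):
    """Piecewise-linear opacity curve through the boundary opacities.

    boundaries : signal values of the color boundaries (ascending)
    opacities  : opacity (percent) set at each boundary, plus one final
                 value that applies at the maximum signal value
    """
    xs = list(boundaries) + [max_value]
    signal = np.arange(max_value + 1)
    # np.interp clamps below the first boundary and above the last,
    # so voxels below the lowest boundary keep its opacity (here 0).
    return np.interp(signal, xs, opacities)

# Values from the example in the text: opacities 0, 0, 73, 90 and 100
# at voxel values 11, 19, 102, 158 and the maximum voxel value.
curve = opacity_curve([11, 19, 102, 158], [0, 0, 73, 90, 100], 255)
```

Between boundaries the opacity rises linearly, which reproduces the rapid rise around the background threshold when two boundaries are placed close together.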
[0104] The color boundaries shown in the display of boundary
positions 78 in FIG. 7a have been determined to attempt to best
show what the user wants to see within the currently selected VOI.
In some circumstances, for example when supplied with a VOI which
includes the whole volume, the preset determination method might be
configured to automatically assume that it is the tissue/air
interface which is required to be visualized. Accordingly, after
determining the presets, the voxels containing signal values below
the lowest significant threshold will be assumed to represent air
and be made transparent in the projection. If the VOI is selected
such that only a small proportion of air is included, the preset
determination method will look for a higher threshold of apparent
significance to determine which voxels should be made transparent.
As noted above, the intensities in MR vary substantially across the
imaged volume due to coil positioning and other factors, so the
smaller the VOI, the more accurate/useful the resulting
visualization is likely to be.
[0105] When applied to an MR data set, the active preset
determination method of this example first tries to find a
candidate threshold, using the technique described above, which
further satisfies the condition that 60% (+/-30%) of the volume is
transparent.
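The transparency condition can be sketched as follows. This is an illustrative fragment only: the candidate thresholds would in practice come from the histogram analysis described earlier, and the helper name is hypothetical:

```python
import numpy as np

def background_threshold(voxels, candidates, target=0.60, tol=0.30):
    """Return the first candidate threshold for which the transparent
    fraction (voxels below the threshold) lies within target +/- tol."""
    voxels = np.asarray(voxels).ravel()
    for t in candidates:
        frac = np.mean(voxels < t)  # fraction rendered transparent
        if abs(frac - target) <= tol:
            return t
    return None  # no candidate satisfies the condition

# Toy data: 70% of values below 15, matching the example threshold.
rng = np.random.default_rng(0)
data = np.concatenate([rng.uniform(0, 15, 7000),
                       rng.uniform(15, 100, 3000)])
chosen = background_threshold(data, [5, 15, 40])
```

With these data the candidate at signal value 5 leaves too little of the volume transparent, so the search settles on 15, close to the example value.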
[0106] In this example, a suitable threshold is found to be at
signal value 15. However, the design of the "Color/Opacity
Settings" interpolation used by the particular display software
used in this example operates best if two boundaries are placed a
small distance on either side of the computed background threshold
in order to provide a rapid rise in the opacity curve as usually
desired. In this example, these two boundaries are placed at
positions ±4 signal value units either side of the visualization
threshold, at signal values of 11 and 19 respectively. The
boundaries may also be placed at other positions, for example, at
positions ±3 signal value units either side of the visualization
threshold.
[0107] Since in this example there are five color ranges to allot
(requiring four color boundaries), the method looks for up to two
more candidate visualization thresholds above the background level,
at which to place the two remaining color boundaries. If two
significant visualization thresholds cannot be found, then the
missing color boundaries are placed in the center of the largest
gap, but, as noted above, the associated sharpness (i.e.
significance) is set to 0, indicating a cosmetic boundary with no
significance to selection.
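The largest-gap placement for a cosmetic boundary can be sketched as follows. The function name is illustrative, and the maximum signal value of 214 is an assumption chosen purely so that the numbers match the example; the analysis range would in practice come from the trimmed histogram:

```python
def cosmetic_boundary(existing, lo, hi):
    """Place a cosmetic boundary (sharpness 0) at the midpoint of the
    widest gap between the existing boundaries and the range ends."""
    points = sorted([lo] + list(existing) + [hi])
    # (gap width, lower edge, upper edge) for each adjacent pair
    gaps = [(b - a, a, b) for a, b in zip(points, points[1:])]
    width, a, b = max(gaps)  # widest gap wins
    return (a + b) // 2

# With significant boundaries at 11, 19 and 102 and an assumed
# analysis range of 0..214, the widest gap is 102..214.
pos = cosmetic_boundary([11, 19, 102], 0, 214)
```

The midpoint of the 102..214 gap is 158, matching the yellow/white boundary position in the first example.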
[0108] In this example, a second visualization threshold at signal
value position 102 is determined to be significant and a single
color boundary with a sharpness set to 5 is defined at signal value
position 102. No further significant visualization thresholds are
found and the remaining color boundary, between yellow and white,
is placed at signal value position 158. It should be noted that,
whilst not immediately apparent from the histogram shown in the
data display window 80, this cosmetic boundary is in the middle of
the widest gap between significant color boundaries. There are two
reasons why it is perhaps not immediately apparent. Firstly, the
histogram upon which the analysis is made differs from the
histogram shown in the data display window since the former is
restricted to the VOI and un-sculpted domain whereas the latter
represents the entire 3D data set. Secondly, as noted above, in
order to prevent the analysis from being thrown off by spurious
values in the tails of the distribution, the lowest and highest
0.1% of voxels are excluded from the numerical analysis. Because
the histogram shown in the dialog has a logarithmic vertical scale
the voxels at the extremes of the voxel value range can appear more
significant.
[0109] FIG. 7b shows an example image displayed according to the
automatic preset of FIG. 7a. The air/tissue interface is shown,
with regions of skin, bone and soft tissue apparent in the
image.
SECOND EXAMPLE
Active CT Preset
[0110] In this example, the same preset type is used as in the
first example. This may be useful for CT data sets in which a user
wants to visualize soft tissue.
THIRD EXAMPLE
Active Bone (CT) Preset
[0111] This example is for use on CT data sets for the purpose of
visualizing and selecting bone.
[0112] FIG. 8a schematically shows the appearance of a
visualization state tool presenting an example of use of the
"Active Bone (CT)" Preset. The different fields within the
visualization state tool shown in FIG. 8a will be understood from
the description of FIG. 7a.
[0113] The "Active Bone (CT) Preset" operates by determining a
first significant visualization threshold within the signal value
range 70 HU to 270 HU. In the example, a visualization threshold
value of 182 HU is determined. If no such visualization threshold
is found then 170 HU is used. This first visualization threshold is
used to set the background level in the display software by setting
two boundary positions at ±45 HU from the first visualization
threshold. The boundary positions in this example are accordingly
at 137 HU and 227 HU. With the five available ranges of color
indicated in FIG. 8a there are two remaining presets to determine.
One of these is placed at -500 HU (which, in the "Active Bone"
scheme, is denoted as a significant boundary with a sharpness of 5)
to show some information about soft tissue in the side multi-planar
reconstruction (MPR) views. A fourth color boundary is placed at
600 HU to give some intensity information. The fourth color
boundary is ascribed a sharpness value of 0 to denote a cosmetic
boundary.
[0114] FIG. 8b shows an example image displayed according to the
automatic preset of FIG. 8a. Regions of bone are most apparent in
the image.
FOURTH EXAMPLE
Active Angio (CT) Preset
[0115] This preset assumes the data are in correctly calibrated
Hounsfield units. The purpose of this preset is to visualize
angio-tissue.
[0116] FIG. 9a schematically shows the appearance of a
visualization state tool presenting an example of use of the
"Active Angio (CT)" Preset. The different fields within the
visualization state tool shown in FIG. 9a will be understood from
the description of FIG. 7a.
[0117] The yellow/white boundary is placed at a position determined
by the histogram analysis within the range 550 HU ± 200 HU, and
this boundary is given a sharpness of 5 so that it is significant to
selection. In the example a boundary position of 550 HU is
determined. Thus, in good data sets, the selection tools can be
used in conjunction with this preset to discriminate bone from
contrast enhanced vasculature.
[0118] The other boundary positions are fixed at values -500 HU,
105 HU and 195 HU.
[0119] FIG. 9b shows an example image displayed according to the
automatic preset of FIG. 9a. Regions of bone and angio-tissue are
most apparent in the image.
[0120] Summary
[0121] It should be clear from the above examples that the preset
determination method of the present invention can be specifically
tailored in any number of ways to apply to data sets with known
specific characteristics. The method can also be used as an
entirely general tool with no prior knowledge of the data set, such
as in the MR example described above. In addition, users may
themselves customize the preset determination to suit the
requirements of a particular study. This might be done, for
example, where CT calibrated data are used and a user requires
features with a particular X-ray attenuation to be identified. This
might also be done to distinguish between two tissue types of
similar X-ray attenuations. Similarly, with un-calibrated data, a
user might modify the preset determination method based on the
appearance of a single 2D projection so that the preset is applied
consistently to all 2D projections generated from that voxel data
set. By storing parameters associated with a user's personal
customizations to the method (such as the significance test
stringency, the typical fraction of the data set the user wants to
appear as transparent, or specific signal value ranges in which
thresholds should occur), the modified method could be consistently
applied to further data sets.
[0122] While by way of example the foregoing description is
directed mainly towards determining visualization thresholds which
are used to define color boundaries, it will be appreciated that
determined visualization thresholds are equally suitable for
defining boundaries for visualization parameters other than color,
such as opacity, that are relevant for rendering. Furthermore, it
is often clinically useful for the placement of boundaries in an
opacity mapping to be positioned at the same signal values as the
boundaries in a color mapping. Other visualization parameters for
which the invention could be used include rate of change of color
with signal value, rate of change of opacity with signal value, and
segmentation information.
[0123] The preset determination is also not limited to finding any
particular number of boundaries, and the associated number of
visualization thresholds, but is extendable to determining any
number of boundaries or visualization thresholds. In some
applications it may be appropriate to determine more visualization
thresholds than there are distinct boundaries required to allow
less significant thresholds to play a role in defining the specific
allocation of available color shades or transitions between colors
within one or more of the determined ranges, for example.
[0124] Although in the above examples, the visualization parameter
boundaries are determined automatically, in other cases, some level
of user input can assist in determining the most appropriate
conditions for displaying an image. This is because once an
automatic preset has been determined it may be desirable to make an
assumption regarding what aspects of the data a user is interested
in seeing in a displayed image. For example, in the histogram of CT
data shown in FIG. 3, four tissue types are identified. As
previously noted, it might reasonably be inferred that sub-range I
of high X-ray attenuation corresponds to bone, sub-range II
corresponds to blood, sub-range III corresponds to soft tissue and
sub-range IV represents the background tissue type or air. Once an
automatic preset has been determined which identifies the four
sub-ranges, an assumption might be made that the user does not wish
to view the data corresponding to sub-range IV (background tissue
type and air) and so voxels corresponding to this region will be
rendered transparent. The displayed image will then show the bone,
blood and soft tissue. However, in some situations a user may be
interested in viewing bone and blood only with soft tissue rendered
transparent. In other situations, the user might wish to view the
background tissue and so it should not be rendered transparent. To
address this, in some embodiments of the invention the user may be
invited to identify in a displayed 2D image one or more examples of
areas which are of interest and should be rendered visible, and one
or more examples of areas which are not of interest and which
should be rendered transparent. The user might identify such
example areas by moving a cursor to appropriate parts of a
displayed image and selecting the examples by "clicking" with a
mouse-like pointer, for example. Once the example areas have been
identified, it is possible to determine which sub-ranges they fall
within and so set appropriate display conditions for these
sub-ranges (e.g. transparent or not-transparent).
[0125] In addition to employing user supplied examples of tissue
types which are and which are not of interest to assist in
displaying images in conjunction with the above described automatic
preset determination, such techniques can also be applied more
generally to classify different tissue types in medical image
volume data.
[0126] The technique can be particularly useful where different
tissue types appear very similar in the data, for example because
they have similar X-ray stopping powers for CT data. In the
histogram shown in FIG. 3, some of the sub-ranges may contain two
subtly different tissue types, for example, sub-range I may include
distinct regions of bone having subtly different densities from
each other. Another example is identification of tumors in organs
such as the liver or brain. It can be difficult to properly
classify voxels in the volume data which correspond with these
different tissue types due to the similarity in the signal values
associated with them.
[0127] FIG. 10 shows an example screen shot of a display 101 of a
2-D image generated from a volume (i.e. 3-D) data set. A main image
100 displays a 2-D image rendered from the volume data. The main
image 100 shown in the figure includes a partial wire-frame cuboid
to assist a user in interpreting the orientation of the image with
respect to the original volume data, and some basic textual
information, such as the date and time. The display 101 also
contains a sagittal section view 102, a coronal section view 104,
and a transverse section view 106 of the volume data to assist in
diagnostic interpretation. A number of different tissue types, for
example corresponding to bone and brain, are seen in the image. The
top portion of the skull has been sculpted away (i.e. rendered
transparent) so that the underlying brain can be seen.
[0128] A user viewing the display shown in FIG. 10 may wish to
sculpt away further material so that a particular tissue type of
interest within the brain can be viewed. For example, the tissue
type of interest might correspond to a feature the user has
observed in one of the section views 102, 104, 106 displayed on the
left of the display and wishes to examine further. In some cases,
it can be difficult for a segmentation algorithm to properly
separate voxels in the volume data which correspond to a region of
interest (and so should be displayed) from other voxels which do
not (and so should not be displayed, i.e. rendered transparent). If
there are significant differences in the voxel values associated
with voxels corresponding to different types of tissue, for example
as seen for bone and soft tissue in a CT scan, it can be relatively
easy to classify the voxels. However, in cases where there are more
subtle differences between a tissue type of interest and
surrounding tissue, segmentation algorithms can often fail to
properly classify voxels corresponding to the different tissue
types. If segmentation is performed on the basis of voxel values
expected for voxels corresponding to the tissue type of interest, a
carefully selected window of values needs to be defined. Voxels
having values falling within the window are considered to
correspond to the tissue type of interest, voxels having voxel
values falling outside of the window are considered not to
correspond to the tissue type of interest. However, it is not an
easy task for a segmentation algorithm to select an appropriate
window width, and this is generally done by a user interactively
adjusting window parameters until satisfied with the desired
appearance of a displayed image. The inherent subjectivity of this
approach means the displayed image is inevitably based on a user's
preconceptions of how the image should appear because there is a
lack of objective selection as to which voxels correspond to the
tissue type of interest and which do not. Furthermore, in some
situations, for example in CT data where a tissue type of interest
has an X-ray stopping power which is similar to that of surrounding
tissue, the voxel values themselves may not discriminate strongly
between different tissue types.
[0129] FIG. 11 is a flow chart schematically showing a method of
identifying voxels in a medical image data set which correspond to
a tissue type of interest according to an embodiment of the
invention. It will be assumed by way of example that the method is
executed in response to a user having been presented with the
image shown in FIG. 10 and identifying in the sagittal section view
102 an anomalous region of brain which appears slightly different
from surrounding tissue and which he wants to examine further.
[0130] In this example the method is performed by a suitably
programmed general purpose computer, such as that shown in FIG. 6.
The computer may be a stand-alone machine or may form part of a
network, for example, a Picture Archiving and Communication System
(PACS) network.
[0131] In Step 111 of FIG. 11, input is received from the user
which identifies (selects) voxels corresponding to the tissue type
of interest. With reference to FIG. 6, this is conveniently
performed by the user positioning a cursor ("pointer") displayed on
the screen 144 displaying the image 101 over a pixel corresponding
to the tissue type of interest in one of the section views 102,
104, 106, the cursor being positioned by manipulation of the mouse
150. However, other input means, such as a light-pen, graphics
tablet or track ball, for example, may equally be used to point to
the tissue type of interest. Since in this example the user
initially noticed the region he wishes to examine further in the
sagittal section view 102, it is assumed he positions the cursor
over a pixel within the anomalous region in this view. If the
region is also apparent in either of the other section views 104,
106, he may equally position the cursor over an appropriate pixel
in those views. Once the cursor is positioned over a desired pixel,
the user indicates his selection by pressing ("clicking") a button
on the mouse 150. Any other input means could equally be used. A
voxel in the volume data corresponding to the selected pixel is
then determined based on the plane of the section view within the
volume data and the selected position within the section view.
Depending on the displayed resolution of the section view, the
selected pixel might span a number of voxels in the volume data. In
this example, the voxel in which the selected pixel is situated is
taken as the identified voxel. In other cases all of the voxels
within a region of a predetermined size and shape surrounding a
central selected voxel might be considered as being identified as
corresponding to the tissue type of interest. The user may identify
any number of further voxels by clicking elsewhere in the sagittal
or other section views. The user may change the particular
displayed sagittal, coronal and/or transverse section views to
allow for voxels identifying the tissue type of interest to be
selected from anywhere within the volume data. Typically five or so
voxels corresponding to the tissue type of interest might be
identified, though fewer or more may be preferred. These voxels
will be referred to as positively selected voxels and the process
of identifying them will be referred to as making a positive
selection.
[0132] It will be appreciated that other schemes for allowing a
user to identify voxels can also be used. For example, rather than
"click" on an individual pixel in one of the section views, a range
of pixels could be identified by a user "clicking" twice to
identify opposite corners of a rectangle, or a center and a
circumference point of a circle, or by defining a shape in some other
way. Voxels corresponding to pixels within the perimeter of the
shape may then all be deemed to have been identified.
[0133] In Step 112, input is received from the user which
identifies (selects) voxels not corresponding to the tissue type of
interest. Step 112 may be performed in a manner which is similar to
Step 111 described above, but in which the user positions the
cursor over pixels in the sagittal, coronal and/or transverse
sections which do not correspond to the tissue type of interest.
The user may indicate his selection by "clicking" a different mouse
button to that used to identify the positively selected voxels.
Alternatively, the same mouse button might be used in parallel with
the pressing of a key on the keyboard 148.
[0134] To allow subtly different tissue types to be distinguished,
the user should identify voxels which are most similar to the
tissue type of interest, but which he wants to exclude nonetheless.
This is because voxels which differ more significantly from voxels
corresponding to the tissue type of interest are easier to classify
as not being of interest. In this case, where the tissue type of
interest is an anomalous region of brain which appears slightly
different from its surroundings in the sagittal section view 102,
the user should identify voxels by selecting pixels in the area
surrounding the anomalous region. However, if there are other
regions which also appear similar to the tissue type of interest,
but which are not necessarily in close proximity to it, the user
may also identify some voxels corresponding to these regions. For
example, five or so voxels not corresponding to the tissue type of
interest might be identified. However, as few as one or many more
than five may also be chosen. For example, if there are a number of
regions in the data appearing only slightly different from the
tissue type of interest, the user may choose to identify a number
of voxels in each of these regions. The voxels identified in Step
112 will be referred to as negatively selected voxels, and the
process of identifying them will be referred to as making a
negative selection.
[0135] In Step 113 one or more characterizing parameters are
computed for each of the voxels selected in Steps 111 and 112. In
this example implementation four characterizing parameters, namely
voxel value V, a local average A, a local standard deviation
σ and maximum Sobel edge filter response S over all
orientations, are determined for each voxel. In another embodiment,
instead of maximum Sobel edge filter response, gradient magnitude
is used. In this case the local average and standard deviation are
computed for a 5×5×5 cube of voxels centered on the
particular voxel at hand. However, other regions may also be used.
For example, a smaller region may be considered for faster
performance. Furthermore, the regions need not be
three-dimensional; a 5×5 square of voxels, or other region,
in an arbitrarily chosen or pre-determined plane may equally be
used.
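The characterizing parameters of Step 113 can be sketched as follows. This is a minimal sketch: the maximum Sobel edge filter response S is omitted for brevity, the function name is illustrative, and boundary voxels (where the cube would extend outside the volume) are not handled:

```python
import numpy as np

def characterizing_parameters(volume, z, y, x, r=2):
    """Voxel value V, local average A and local standard deviation
    sigma over a (2r+1)^3 cube centered on voxel (z, y, x); r=2
    gives the 5x5x5 cube used in the example implementation."""
    cube = volume[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
    return volume[z, y, x], cube.mean(), cube.std()

# A single bright voxel in an otherwise empty 5x5x5 neighborhood:
vol = np.zeros((7, 7, 7))
vol[3, 3, 3] = 125.0
V, A, sigma = characterizing_parameters(vol, 3, 3, 3)
```

Here the local average is low but the local standard deviation is large, illustrating how σ captures granularity that the voxel value alone does not.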
[0136] In Step 114 the distribution of computed characterizing
parameters are analyzed to determine which of them may be used to
distinguish between the positively selected and the negatively
selected voxels.
[0137] FIGS. 12A-12D show example distributions of voxel value V,
local average A, local standard deviation σ and maximum Sobel
edge filter response S respectively for five positively selected
and five negatively selected voxels. In each case, the values for
the positively selected voxels are marked by "plus" symbols above
the horizontal line representing the range of values of the
particular characterizing parameter at appropriate positions along
the line. The values for the negatively selected voxels are
similarly represented by "minus" symbols below the line.
[0138] It can be seen from FIG. 12A that the voxel values V are
similar and fall within roughly the same range for both the
positively and negatively selected voxels. This indicates that
voxel value itself is not a good discriminator between the
positively and negatively selected voxels in this case.
[0139] It can be seen from FIG. 12B that the local averages A are
also broadly similar for both the positively and negatively
selected voxels. There appears to be a slight bias towards higher
values of local average for positively selected voxels, but there
is still a large degree of overlap.
[0140] However, it can be seen from FIG. 12C that the computed
local standard deviations σ are significantly different for
the positively and negatively selected voxels. In particular, the
regions surrounding the positively selected voxels tend to have
significantly larger standard deviations than those surrounding the
negatively selected voxels. This indicates that the positively
selected voxels from the region of tissue type which the user
wishes to examine further correspond to regions of greater
granularity in the data. It is likely to be this greater degree of
granularity which causes the region to appear to human visual
perception to be slightly different to the surrounding regions in
the section views.
[0141] It can be seen from FIG. 12D that the computed maximum Sobel
edge filter responses S are also different for the positively and
negatively selected voxels, although to a lesser extent than the
local standard deviations.
[0142] From these distributions of the computed characterizing
parameters for the positively and negatively selected voxels, it is
apparent that local standard deviation σ is a characterizing
parameter which distinguishes well between positively and
negatively selected voxels, and as such is considered to be a
distinguishing parameter. In this example implementation only one
distinguishing parameter is sought and is chosen on the basis of it
being the most able of the computed characterizing parameters to
discriminate between the positively and negatively selected voxels.
The ability of a given characterizing parameter to discriminate is
referred to as its discrimination power and may be parameterized
using conventional statistical analysis. In this example, this is
done by separately calculating the average and the standard
deviation of each characterizing parameter for the positively and
the negatively selected voxels. The discriminating power of a given
characterizing parameter is then taken to be the difference in the
average for the positively and negatively selected voxels divided
by the quadrature sum of their standard deviations. The
characterizing parameter having the greatest discriminating power is
then taken to be the distinguishing parameter. As will be seen
further below, in other examples multiple distinguishing parameters
may be used, for example all characterizing parameters having a
discriminating power greater than a certain level or a fixed number
of characterizing parameters having the highest discriminating
powers may be used.
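The discriminating-power calculation just described can be sketched as follows; the function name is illustrative and the example values are invented to resemble the distributions sketched in FIGS. 12A and 12C:

```python
import numpy as np

def discriminating_power(pos, neg):
    """Difference of the group means divided by the quadrature sum
    (root-sum-square) of the group standard deviations."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return abs(pos.mean() - neg.mean()) / np.hypot(pos.std(), neg.std())

# Local standard deviations, well separated as in FIG. 12C:
pos_sigma = [8.1, 7.6, 9.0, 8.4, 7.9]   # positively selected voxels
neg_sigma = [2.2, 3.0, 2.5, 2.8, 2.1]   # negatively selected voxels
strong = discriminating_power(pos_sigma, neg_sigma)

# Voxel values, heavily overlapping as in FIG. 12A:
weak = discriminating_power([101, 105, 99, 103, 98],
                            [100, 104, 97, 102, 99])
```

The well-separated parameter yields a much larger discriminating power than the overlapping one, so it would be selected as the distinguishing parameter.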
[0143] In Step 115, the distinguishing parameter (i.e. local
standard deviation σ in this case) is calculated for other
voxels in the data. Although this may be done for all of the
voxels, it may be more efficient to restrict the calculation to
only a subset of voxels. For example, a conventional segmentation
algorithm may first be applied to the data to identify which voxels
belong to significantly different tissue types (e.g. bone or
brain). Once this is done, the local standard deviation σ may
then be calculated only for those voxels which have been classified
by the conventional segmentation algorithm as corresponding to
brain. This is because there would be no need to perform the
computation for voxels which have already been distinguished from
the tissue type of interest by the conventional segmentation
algorithm. Alternatively, the calculation may only be made for
voxels in a VOI identified by the user.
[0144] In step 116, the distinguishing parameter, i.e. the local
standard deviation for the example characterizing parameter
distributions seen in FIGS. 12A-D, is used to classify each of the
other voxels. This is performed in this example by defining a
critical local standard deviation σ_c (marked in FIG.
12C) between the average local standard deviation for the
positively selected voxels and the average local standard deviation
of the negatively selected voxels. If the local standard deviation
computed in Step 115 for a particular voxel is greater than
σ_c, the voxel is classified as belonging to the tissue
type of interest. If the local standard deviation is less than
σ_c, the voxel is classified as not belonging to the
tissue type of interest.
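The classification of Step 116 can be sketched as follows. As an assumption for this sketch, the critical value σ_c is placed at the midpoint of the two example averages (the text requires only that it lie between them), and the function name is illustrative:

```python
import numpy as np

def classify(values, pos_examples, neg_examples):
    """Classify voxels by a distinguishing parameter: the critical
    value sigma_c is placed midway between the averages of the
    positive and negative examples, and voxels whose parameter
    exceeds it are deemed to belong to the tissue type of interest."""
    sigma_c = (np.mean(pos_examples) + np.mean(neg_examples)) / 2.0
    return np.asarray(values) > sigma_c, sigma_c

# Local standard deviations for five positive and five negative
# example voxels, followed by four voxels to classify:
of_interest, sigma_c = classify([9.0, 1.5, 6.0, 2.0],
                                [8.1, 7.6, 9.0, 8.4, 7.9],
                                [2.2, 3.0, 2.5, 2.8, 2.1])
```

Voxels whose local standard deviation exceeds σ_c are flagged as the tissue type of interest; the rest would be rendered transparent.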
[0145] It will be appreciated that although in this particular
example the computed value of one of the characterizing parameters
(local standard deviation) is itself identified as being able to
distinguish between the tissue type of interest and surrounding
tissue, this is a special example of the more general case in which
a distinguishing functional relationship between characterizing
parameters is identified. For example, for a particular tissue type
of interest it might be found that the ratio of two different
characterizing parameters has a greater discriminating power
between positively and negatively selected voxels than either of
the characterizing parameters themselves. A numerical example of
how this can arise is if values generally between 2.5 and 3.5
(arbitrary units) are found for one characterizing parameter for
both positively and negatively selected voxels and values generally
between 5 and 7 (arbitrary units) are found for another
characterizing parameter, again for both positively and negatively
selected voxels. Because of this, neither characterizing parameter
alone is able to discriminate properly between positive and
negatively selected voxels. However, if for the tissue type of
interest the second characterizing parameter is always close to
twice the value of the first, whereas for the negatively selected
voxels, the two parameters are unrelated, a distinguishing function
based on the ratio of the two parameters can be identified.
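The numerical example above can be reproduced with a short simulation. This sketch is not from the application; the sample counts, the coupling noise and the band used to test the ratio are assumptions chosen to approximate the ranges quoted in the text:

```python
import random
random.seed(1)

def sample(positive):
    p1 = random.uniform(2.5, 3.5)
    if positive:
        p2 = 2.0 * p1 + random.uniform(-0.1, 0.1)  # tightly coupled
    else:
        p2 = random.uniform(5.0, 7.0)              # unrelated to p1
    return p1, p2

pos = [sample(True) for _ in range(200)]
neg = [sample(False) for _ in range(200)]

# Neither parameter alone separates the groups (both span the same
# ranges), but the ratio p2/p1 clusters near 2.0 only for the
# positively selected voxels.
def in_band(r):
    return 1.9 < r < 2.1

frac_pos = sum(in_band(p2 / p1) for p1, p2 in pos) / len(pos)
frac_neg = sum(in_band(p2 / p1) for p1, p2 in neg) / len(neg)
```

All positive samples fall in the band while only a minority of negative samples do, so the ratio serves as a distinguishing function where the raw parameters do not.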
[0146] Depending on clinical application, additional requirements
may be imposed on which voxels are to be considered to correspond
to the tissue type of interest. For example, a requirement that the
tissue type of interest forms a single volume may be made by
applying a connectivity requirement. This would mean voxels which
are not linked to the positively selected voxels by a chain of
voxels classified as corresponding to the tissue type of interest
will be classified as not corresponding to this tissue type, even
if their distinguishing parameters are such that they would
otherwise be considered to do so.
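A connectivity requirement of this kind can be imposed with a standard flood fill from the positively selected seed voxels. The sketch below uses 4-connectivity on 2D coordinates for brevity (a 3D implementation would use 6-connectivity); the coordinates are illustrative:

```python
from collections import deque

def connected_component(classified, seeds):
    """Keep only voxels connected to at least one positively selected
    seed voxel by a chain of classified voxels (breadth-first search)."""
    kept = set()
    queue = deque(s for s in seeds if s in classified)
    kept.update(queue)
    while queue:
        x, y = queue.popleft()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in classified and nb not in kept:
                kept.add(nb)
                queue.append(nb)
    return kept

# Two separate blobs pass the distinguishing-parameter test, but only
# the blob containing the user's seed survives the connectivity rule.
classified = {(0, 0), (0, 1), (1, 1),          # blob A
              (5, 5), (5, 6)}                  # blob B (disconnected)
kept = connected_component(classified, seeds=[(0, 0)])
```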
[0147] Once the voxels have been classified, the user may proceed
to examine those corresponding to the tissue type of interest as
desired. For example, the user may render an image showing only the
tissue type of interest. In another example, the tissue type of
interest may be shown in one color and other tissue types in other
colors, that is to say the method shown in FIG. 11 may be used as
the basis of calculating presets. This could be realized when a
monochrome image of the brain is displayed. Here the classification
could be used to distinguish between white and gray matter in the
brain. Based on the classification, the gray matter is displayed
shaded in a semi-transparent blue color wash. In a further example,
the selected object can be measured in some way, for example the
volume is calculated. Another example is that the unclassified
parts ("don't want" regions) are "dimmed", i.e. rendered
semi-transparent.
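Using the classification as the basis for presets can be as simple as a lookup from class label to an RGBA tuple. The colors and opacities below are illustrative only:

```python
# Map each classification to an RGBA preset (values illustrative):
# gray matter gets a semi-transparent blue wash, and "don't want"
# regions are dimmed by lowering their opacity.
PRESETS = {
    "gray_matter":  (0.2, 0.4, 1.0, 0.5),   # semi-transparent blue wash
    "white_matter": (1.0, 1.0, 1.0, 1.0),   # opaque monochrome
    "dont_want":    (0.5, 0.5, 0.5, 0.1),   # dimmed, near-transparent
}

def voxel_color(label):
    """Unclassified labels fall back to the dimmed preset."""
    return PRESETS.get(label, PRESETS["dont_want"])

color = voxel_color("gray_matter")
```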
[0148] In some examples, an image based on the distinguishing
parameter itself (or a function thereof) may be rendered (e.g.
using the distinguishing parameter as the imaged parameter in the
rendering rather than voxel value). In the above described
situation, rather than rendering an image based on voxel values in
the image data set (i.e. X-ray stopping power for CT data), an
image based on the local standard deviation for each of the voxels
may be rendered instead. Ranges of color and/or opacity may be
associated with different values of local standard deviation and an
image rendered accordingly. Visualization presets for the rendered
image may be calculated as previously described, for example. This
approach can provide for a displayed image in which a user can
easily distinguish the tissue type of interest from surrounding
tissue because characteristics of the tissue type of interest which
differentiates it from its surroundings are used as the basis for
rendering the image.
[0149] Rather than display an image, the classification may be used
in conjunction with conventional analysis techniques, for example
to calculate the volume of the anomalous region corresponding to
the tissue type of interest. It will of course be appreciated that
in some cases a region of interest might be of interest merely
because the user wishes to identify it so it can be discarded from
subsequent display or analysis.
[0150] It is not necessary for the steps shown in FIG. 11 to be
performed in the order shown. For example, Step 111 and Step 112
could be reversed, or even intertwined. That is to say, a user
could identify some voxels which correspond to the tissue type of
interest, then some voxels which do not correspond to the tissue
type of interest, and then some more voxels corresponding to the
tissue type of interest and so on (i.e. in effect cycle between
Step 111 and Step 112).
[0151] Furthermore, the process may return to earlier steps during
execution. For example, a user may be alerted at Step 114 if there
are no characterizing parameters having a discriminating power
above a predetermined level. In response to this, the user may
choose to return to Step 111 and/or Step 112 to provide more
examples. Alternatively, in such a circumstance the user may
instead indicate that additional characterizing parameters should
be determined and their discriminating powers examined, or may
simply choose to proceed with the classification nonetheless.
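The discriminating-power check and the alert at Step 114 might look as follows. The Fisher-style score, the threshold value and the example numbers are assumptions made for illustration:

```python
import statistics

def discriminating_power(pos, neg):
    """A simple separability score: difference of the group means
    divided by the pooled standard deviation (a Fisher-style criterion)."""
    spread = statistics.pstdev(pos + neg)
    return abs(statistics.mean(pos) - statistics.mean(neg)) / spread

# Characterizing parameters measured on the example voxels
# (positive examples, negative examples); values are illustrative.
params = {
    "voxel_value": ([101, 99, 100], [100, 102, 98]),   # overlapping
    "local_std":   ([12.0, 14.5, 13.2], [3.1, 2.8, 4.0]),
}

THRESHOLD = 1.0
powers = {name: discriminating_power(p, n)
          for name, (p, n) in params.items()}
good = [name for name, power in powers.items() if power > THRESHOLD]
if not good:
    # No parameter discriminates well: alert the user to provide more
    # examples or request additional characterizing parameters.
    print("no characterizing parameter exceeds the threshold")
```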
[0152] The method shown in FIG. 11 may be modified in a number of
ways. For example, rather than simply having a binary
classification (i.e. classifying voxels as either corresponding to
the tissue type of interest or not corresponding to the tissue type
of interest) a probability classification may be used. Each voxel
may be attributed a likelihood of corresponding to the same tissue
type as the positively selected voxels on the basis of how much its
distinguishing parameter differs from those of the negatively
selected voxels. In this scheme, a voxel having a local standard
deviation of .sigma..sub.1 shown in FIG. 12C would be classified as
having a greater probability of belonging to the population of
voxels corresponding to the tissue type of interest than one having
a local standard deviation of .sigma..sub.2.
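One way to turn the distinguishing parameter into such a probability-style classification is a soft threshold rather than a hard one. The logistic form, the critical value and the scale below are assumptions for illustration:

```python
import math

sigma_c = 8.27   # critical local standard deviation (assumed)
scale = 2.0      # softness of the transition (assumed)

def p_tissue(sigma):
    """Probability-style score instead of a binary label: voxels whose
    local standard deviation sits well above sigma_c score near 1,
    those well below score near 0."""
    return 1.0 / (1.0 + math.exp(-(sigma - sigma_c) / scale))

# sigma_1 lies nearer the positive population than sigma_2 does,
# so it receives the higher probability.
sigma_1, sigma_2 = 11.0, 6.0
p1, p2 = p_tissue(sigma_1), p_tissue(sigma_2)
```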
[0153] Furthermore, more than one distinguishing parameter may be
used for the classification. For example, if multiple parameters
are identified in Step 114 as being capable of distinguishing
between the positively and negatively selected voxels, these
multiple distinguishing parameters may each then be computed for
the other voxels in Step 115. The classification in Step 116 could
then be based on a conventional multi-dimensional expectation
maximization (EM) algorithm or other cluster recognition process
which takes the distinguishing parameters computed for the
positively and negatively selected voxels as seeds for defining the
populations of voxels (i.e. the population of voxels
corresponding to the tissue type of interest and the population of
voxels not corresponding to the tissue type of interest). Example
classification schemes when the distinguishing function has two or
more characterizing parameters are multivariate Gaussian maximum
likelihood and k-NN (k-nearest neighbors).
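A k-nearest-neighbor classification over two distinguishing parameters might be sketched as below. The parameter pairs and labels are illustrative, not data from the application:

```python
import math

def knn_classify(point, examples, k=3):
    """Classify a voxel by majority vote among the k nearest labelled
    example voxels in the space of distinguishing parameters."""
    dists = sorted((math.dist(point, p), label) for p, label in examples)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Each example: ((local_std, voxel_value), label). Values illustrative.
examples = [((12.0, 100), "tissue"), ((14.5, 101), "tissue"),
            ((13.2, 99), "tissue"),
            ((3.1, 100), "other"), ((2.8, 102), "other"),
            ((4.0, 98), "other")]

label = knn_classify((11.0, 100), examples)
```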
[0154] The EM algorithm provides the distributions for the positive
and negative cases which then allows, for each voxel, a probability
to be determined that the voxel is a member of the population
exemplified by the positively selected voxels, that is to say a
probability that the voxel corresponds to the tissue type of
interest. The EM algorithm may also provide an estimate of the
overall fraction of voxels which are members of the population
exemplified by the positively selected voxels. This information
allows an image of the tissue type of interest to be rendered from
the volume data in a number of ways.
[0155] One way is to render all voxels having a probability of
corresponding to the tissue type of interest lower than a threshold
level as transparent, and render the remaining voxels using
conventional techniques based on their voxel values (e.g. opacity
to X-rays for CT data). The threshold level may be selected
arbitrarily, for example at 50%, or may be selected such that the
total number of voxels falling above the threshold level
corresponds to the overall fraction of voxels which are members of
the population exemplified by the positively selected voxels
predicted by the EM algorithm.
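The second threshold-selection rule, matching the kept fraction to the mixing fraction predicted by the EM algorithm, amounts to taking a quantile of the per-voxel probabilities. A minimal sketch with illustrative numbers:

```python
def fraction_matched_threshold(probs, predicted_fraction):
    """Choose the probability threshold so that the fraction of voxels
    kept matches the EM algorithm's predicted mixing fraction."""
    n_keep = round(predicted_fraction * len(probs))
    ranked = sorted(probs, reverse=True)
    # Everything at or above the returned value is rendered; the rest
    # is made transparent.
    return ranked[n_keep - 1] if n_keep > 0 else 1.0

probs = [0.95, 0.9, 0.8, 0.4, 0.3, 0.2, 0.1, 0.05]
thr = fraction_matched_threshold(probs, predicted_fraction=0.25)
kept = [p for p in probs if p >= thr]
```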
[0156] Another way of generating an image showing the tissue type
of interest would be to again render all voxels having a
probability of corresponding to the tissue type of interest lower
than a threshold level as transparent, but to then render the
remaining voxels based on their probability of corresponding to the
tissue type of interest, rather than their voxel values. This
provides a form of probability image from which a user can
immediately identify the likelihood of individual areas being
correctly classified as corresponding to the tissue type of
interest.
[0157] In either case, where an image based on rendering of
probabilities is displayed, the user may be presented with the
opportunity of manually altering the threshold level. This allows
the user to determine an appropriate compromise between including
too many false positives (i.e. voxels which do not correspond to
the tissue type of interest) and excluding too many true positives
(i.e. voxels which do correspond to the tissue type of
interest).
[0158] It will be appreciated that in addition to the example
characterizing parameters shown in FIGS. 12A-12D, there is a wide
range of other parameters which may be used. For example,
parameters based on local averages calculated over differently
sized regions, parameters based on local gradients in voxel value,
local spatial frequency components, and so on may all be used. It
will also be appreciated that the choice of characterizing
parameters to compute may depend on the type of data under study.
For example, because MR data often show significant variations in
sensitivity throughout a volume data set, absolute voxel value can
be a poor indicator of tissue type in MR data. Because of this,
characterizing parameters such as voxel value or local averages of
voxel value might be excluded from use with MR data.
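Computing a broader set of candidate characterizing parameters for one voxel might look as follows. This is a sketch only; the particular feature names, window radii and the central-difference gradient are assumptions:

```python
import numpy as np

def characterizing_parameters(volume, idx, radius=1):
    """Candidate characterizing parameters for one voxel: raw value,
    local mean and standard deviation over two window sizes, and a
    central-difference gradient magnitude."""
    feats = {"value": float(volume[idx])}
    for r in (radius, 2 * radius):
        lo = [max(i - r, 0) for i in idx]
        hi = [min(i + r + 1, n) for i, n in zip(idx, volume.shape)]
        w = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        feats[f"mean_r{r}"] = float(w.mean())
        feats[f"std_r{r}"] = float(w.std())
    # Gradient magnitude at the voxel (one component per axis).
    grad = np.array(np.gradient(volume))[(slice(None),) + idx]
    feats["grad_mag"] = float(np.linalg.norm(grad))
    return feats

vol = np.arange(27, dtype=float).reshape(3, 3, 3)  # toy ramp volume
f = characterizing_parameters(vol, (1, 1, 1))
```

For MR data, the "value" and "mean" entries might simply be dropped from this dictionary, as suggested above.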
[0159] While the above description relates to a situation where a
user is interested in further examining only a single tissue type,
it will be understood that the method may equally be employed where
a user wishes to identify multiple tissue types. This can be
achieved by a user making positive selections for each of the
different tissue types of interest in Step 111 shown in FIG. 11.
Depending on the characteristics of the different tissue types of
interest, there may be a unique distinguishing feature identified
in step 114 that can be used to classify the voxels. However, in
some cases it may be necessary to employ multiple distinguishing
parameters with voxels classified on the basis of one or other of
these. For example, if in addition to the positive selection of
voxels corresponding to the anomalous region of brain discussed
above, the user is also interested in further examination of a
second anomalous region situated elsewhere in the brain, the user
simply makes some positive selections of that region. If the second
anomalous region is represented by voxels having voxel values
which are generally higher than the negatively selected voxels, but
having a similar local standard deviation, then, unlike the voxels
in the first anomalous region, they cannot be classified on the
basis of local standard deviation. This means in Step 114 both
local standard deviation .sigma. and voxel value V will be
determined to be distinguishing parameters and both will be
calculated in Step 115 for other voxels in the data. In Step 116,
voxels may then be classified as corresponding to one of the tissue
types of interest if either their local standard deviation is
different to that of the negatively selected voxels (in which case
they relate to the first anomalous region) or if their voxel value
is different to that of the negatively selected voxels (in which
case they relate to the second anomalous region).
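The either/or classification over two distinguishing parameters can be written as a simple cascade. The critical values and the example inputs below are assumptions for illustration:

```python
sigma_c = 8.27   # critical local standard deviation (assumed)
v_c = 110.0      # critical voxel value (assumed)

def classify(sigma, value):
    """A voxel belongs to a tissue type of interest if EITHER
    distinguishing parameter separates it from the negative examples."""
    if sigma > sigma_c:
        return "first anomalous region"    # distinguished by texture
    if value > v_c:
        return "second anomalous region"   # distinguished by brightness
    return "not of interest"

results = [classify(12.0, 100.0),   # high local std, normal value
           classify(3.0, 120.0),    # normal local std, high value
           classify(3.0, 100.0)]    # neither criterion met
```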
[0160] The method may also be applied in an iterative manner. For
example, following execution of the method shown in FIG. 11 a
probability image showing the classification of the voxels may be
displayed to the user. The user may then decide to refine the
classification by re-executing the method on the basis of the
probability image. This is a form of relaxation labeling and allows
for additional spatial information to be exploited in each
subsequent iteration.
[0161] In some implementations of the method, the computation of
the distinguishing features may include additional analysis
techniques to assist in the proper classification of voxels. For
example, partial volume effects might cause a boundary between two
types of tissue which are not of interest to be wrongly classified.
If this is a concern in a particular situation, techniques such as
partial volume filtering as described in WO 02/084594 [1] may be
employed when computing the distinguishing parameters.
[0162] In cases where a user considers that the classification has
not performed adequately, for example should one of the negatively
selected voxels be attributed a high probability of being a member
of the population exemplified by the positively selected voxels,
further segmentation analysis techniques may be applied. For
example conventional morphological segmentation algorithms may be
applied to volume data representing the probability of each voxel
comprising the volume data corresponding to the tissue type of
interest.
[0163] Additional user input may also be used to assist the
classification process. In particular, the user input may
additionally include clinical information, such as specification of
tissue type or anatomical feature of interest. For example, the
user input may adopt the paradigm "want that gray matter--don't
want that white matter", or "want that liver--don't want that other
(unspecified) tissue", or "want that (liver) tumor--don't want that
healthy (liver) tissue", or "want that (unspecified) tissue--don't
want that fat tissue". This user input can be done by appropriate
pointer selection in combination with filling out a text label or
selection from a drop down menu of options. Following this user
input, the distinguishing function can then be determined from the
characterizing parameters having regard to the clinical information
input by the user. For example, if the positively selected voxels
are indicated as belonging to a tumor, local standard deviation may
be preferentially selected as the distinguishing function, since
this will be sensitive to the enhanced granularity that is an
attribute of tumors.
[0164] In some clinical studies multiple volume data sets of a
single patient may be available, for example from different imaging
modalities or from the same imaging modality but taken at different
times. If the images can be appropriately registered with one
another, it is possible to classify voxels in one of these volume
data sets on the basis of positively and negatively selected voxels
in another. Distinguishing parameters may even be based on an
analysis of voxels in one data set yet be used to classify voxels
in another data set. This can help because, with more information
available, it is more likely that a good distinguishing parameter
can be found.
[0165] It will be appreciated that although particular embodiments
of the invention have been described, many modifications/additions
and/or substitutions may be made within the spirit and scope of the
present invention.
[0166] Thus, for example, although the described embodiments employ
a computer program operating on a general purpose computer, for
example a conventional computer workstation, in other embodiments
special purpose hardware could be used. For example, at least some
of the functionality could be effected using special purpose
circuits, for example a field programmable gate array (FPGA) or an
application specific integrated circuit (ASIC) or in the form of a
graphics processing unit (GPU). Also, multi-thread processing or
parallel computing hardware could be used for at least some of the
processing. For example, different threads or processing stages
could be used to calculate respective characterizing
parameters.
References
[0167] [1] WO 02/084594 (Voxar Limited)
* * * * *