U.S. patent application number 13/377401, published on 2012-04-05 as publication 20120082354, concerns establishing a contour of a structure based on image information.
This patent application is currently assigned to KONINKLIJKE PHILIPS ELECTRONICS N.V. Invention is credited to Olivier Ecabert, Reinhard Kneser, Carsten Meyer, Jochen Peters, and Juergen Weese.
United States Patent Application 20120082354
Kind Code: A1
Peters; Jochen; et al.
April 5, 2012

ESTABLISHING A CONTOUR OF A STRUCTURE BASED ON IMAGE INFORMATION
Abstract
A system for establishing a contour of a structure is disclosed.
An initialization subsystem (1) is used for initializing an
adaptive mesh representing an approximate contour of the structure,
the structure being represented at least partly by a first image,
and the structure being represented at least partly by a second
image. A deforming subsystem (2) is used for deforming the adaptive
mesh, based on feature information of the first image and feature
information of the second image. The deforming subsystem comprises
a force-establishing subsystem (3) for establishing a force acting
on at least part of the adaptive mesh, in dependence on the feature
information of the first image and the feature information of the
second image. A transform-establishing subsystem (4) is used for
establishing a coordinate transform reflecting a registration
mismatch between the first image, the second image, and the
adaptive mesh.
Inventors: Peters; Jochen (Aachen, DE); Ecabert; Olivier (Aachen, DE); Meyer; Carsten (Hamburg, DE); Kneser; Reinhard (Aachen, DE); Weese; Juergen (Aachen, DE)
Assignee: KONINKLIJKE PHILIPS ELECTRONICS N.V. (Eindhoven, NL)
Family ID: 42556644
Appl. No.: 13/377401
Filed: June 18, 2010
PCT Filed: June 18, 2010
PCT No.: PCT/IB2010/052756
371 Date: December 9, 2011
Current U.S. Class: 382/128; 382/199
Current CPC Class: G06T 2207/20116 20130101; G06T 2207/30004 20130101; G06T 7/60 20130101; G06T 7/97 20170101; G06T 7/0012 20130101; G06T 2207/10072 20130101; G06T 7/12 20170101; G06T 7/149 20170101
Class at Publication: 382/128; 382/199
International Class: G06K 9/48 20060101 G06K009/48; G06K 9/00 20060101 G06K009/00

Foreign Application Data
Date: Jun 24, 2009; Code: EP; Application Number: 09163572.2
Claims
1. A system for establishing a contour of a structure, comprising
an initialization subsystem (1) for initializing an adaptive mesh
representing an approximate contour of the structure, the structure
being represented at least partly by a first image, and the
structure being represented at least partly by a second image; and
a deforming subsystem (2) for deforming the adaptive mesh, based on
feature information of the first image and feature information of
the second image.
2. The system according to claim 1, the deforming subsystem
comprising a force-establishing subsystem (3) for establishing a
force acting on at least part of the adaptive mesh in dependence on
the feature information of the first image and the feature
information of the second image.
3. The system according to claim 2, the force-establishing
subsystem (3) being arranged for establishing the force acting on
the at least part of the adaptive mesh also in dependence on a type
of the first image and/or a type of the second image.
4. The system according to claim 1, the deforming subsystem (2)
comprising a feature information-extracting subsystem (11) for
extracting feature information from the respective images, using
respective models trained for the particular imaging modalities or
protocols used to acquire the respective images.
5. The system according to claim 1, further comprising a
transform-establishing subsystem (4) for establishing a first
coordinate transform defining a registration between the adaptive
mesh and at least one of the first image and the second image,
based on feature information in the respective image and the
adaptive mesh.
6. The system according to claim 5, the transform-establishing
subsystem (4) being arranged for establishing a second coordinate
transform defining a relation between a coordinate system of the
first image and a coordinate system of the second image, based on
the first transform.
7. The system according to claim 5, the coordinate transform being
an affine transform.
8. The system according to claim 1, further comprising a general
outline-providing subsystem (5) for providing a shape model
representing a general outline of the structure; the deforming
subsystem (2) being arranged for deforming the adaptive mesh based
also on the shape model.
9. The system according to claim 1, the first image and the second
image having been acquired using at least one imaging modality from
the group of X-ray, CT, MR, Ultrasound, PET, SPECT, and magnetic
particle imaging.
10. The system according to claim 1, the first and second image
having been acquired using two different imaging modalities.
11. The system according to claim 1, the first image having been
acquired while a particular object was visible in the imaged
subject, the second image having been acquired while that
particular object was not visible in the imaged subject.
12. A medical imaging workstation comprising the system according
to claim 1.
13. A medical imaging acquisition apparatus comprising a scanner
for acquiring the first image and the system according to claim
1.
14. A method of establishing a contour of a structure, comprising
initializing (201) an adaptive mesh representing an approximate
contour of the structure, the structure being represented at least
partly by a first image, and the structure being represented at
least partly by a second image; and deforming (202) the adaptive
mesh, based on feature information of the first image and feature
information of the second image.
15. A computer program product comprising instructions for causing
a processor system to perform the steps of the method according to
claim 14.
Description
FIELD OF THE INVENTION
[0001] The invention relates to image segmentation and image
registration. The invention further relates to establishing a
contour of a structure.
BACKGROUND OF THE INVENTION
[0002] Image segmentation generally concerns selection and/or
separation of a selected part of a dataset. Such a dataset notably
represents image information of an imaged object and the selected
part relates to a specific part of the image. The dataset is in
general a multi-dimensional dataset that assigns data values to
positions in a multi-dimensional geometrical space. For example,
such datasets can be two-dimensional or three-dimensional images
where the data values are pixel values, such as brightness values,
grey values or color values, assigned to positions in a
two-dimensional plane or a three-dimensional volume.
[0003] U.S. Pat. No. 7,010,164 discloses a method of segmenting a
selected region from a multi-dimensional dataset. The method
comprises the steps of setting up a shape model representing the
general outline of a selected region and setting up an adaptive
mesh. The adaptive mesh represents an approximate contour of the
selected region. The adaptive mesh is initialized on the basis of
the shape model. Furthermore, the adaptive mesh is deformed in
dependence on the shape model and on feature information of the
selected region. In this way, a more precise contour of the
selected region is obtained.
SUMMARY OF THE INVENTION
[0004] It would be advantageous to have an improved system for
establishing a contour of a structure. To better address this
concern, in a first aspect of the invention a system is presented
that comprises
[0005] an initialization subsystem for initializing an adaptive
mesh representing an approximate contour of the structure, the
structure being represented at least partly by a first image, and
the structure being represented at least partly by a second
image;
[0006] a deforming subsystem for deforming the adaptive mesh, based
on feature information of the first image and feature information
of the second image.
[0007] Since the adaptive mesh is deformed based on both feature
information of the first image and feature information of the
second image, any information about the structure which is missing
or unreliable in the first image may be obtained from the feature
information of the second image. In this way, the feature information
of the two images complements each other. The result may be a deformed
adaptive mesh reflecting the shape information represented by both
images. The contour may describe an outline of the structure. The
structure may be, for example, a body, an organ, or a mass. The
contour may describe a surface in a three-dimensional space.
Alternatively, the contour may describe a line or curve in a
two-dimensional space.
[0008] The deforming subsystem may be arranged for deforming the
adaptive mesh, based on feature information of a plurality of
images. In particular, an adaptive mesh may be deformed based on
feature information of more than two images.
[0009] The deforming subsystem may comprise a force-establishing
subsystem for establishing a force acting on at least part of the
adaptive mesh in dependence on the feature information of the first
image and the feature information of the second image. By
considering the feature information of the first image and the
feature information of the second image in establishing the force
acting on a part of the adaptive mesh, the force may be more
reliable. Moreover, the resulting deformed mesh is based on
information from the two images. This may make the resulting
deformed mesh more accurate. The feature information of the images
may be weighted based on reliability criteria, for example, such
that feature information which is regarded to be reliable
determines the force to a greater extent than feature information
which is regarded to be less reliable.
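The reliability weighting described above can be sketched as follows; the function name, the normalized-weight scheme, and the use of Python with NumPy are illustrative assumptions, not taken from the application.

```python
import numpy as np

def combined_force(force_img1, force_img2, w1, w2):
    """Combine per-vertex forces from two images, weighting each
    image's contribution by a reliability score in [0, 1]."""
    total = w1 + w2
    if total == 0.0:
        # No reliable feature near this vertex: exert no force.
        return np.zeros_like(np.asarray(force_img1, dtype=float))
    return (w1 * np.asarray(force_img1) + w2 * np.asarray(force_img2)) / total

# A reliable feature in image 1 dominates an unreliable one in image 2:
f = combined_force([1.0, 0.0], [0.0, 1.0], w1=0.9, w2=0.1)
```

With these weights the resulting force points mostly along the reliable image's feature direction.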
[0010] The force-establishing subsystem may be arranged for
establishing the force acting on at least part of the adaptive mesh
also in dependence on a type of the first image and/or a type of
the second image. The type of an image may comprise an imaging
modality or a clinical application of the image or, for example, a
body part which is imaged. Different ways of establishing the force
may be used for these different types of images, which allows the
adaptive mesh to be adapted based on a plurality of images of
different types.
[0011] The deforming subsystem may comprise a feature
information-extracting subsystem for extracting feature information
from the respective images, using respective models trained for the
particular imaging modalities or protocols used to acquire the
respective images. This helps to deform the mesh in an appropriate
manner. Moreover, the feature information may be used by the
force-establishing subsystem to establish the forces.
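A minimal sketch of modality-specific feature extraction: a registry maps each imaging modality or protocol tag to a model trained for it. The tags, the placeholder "models", and the list-based images are hypothetical stand-ins for the trained boundary detectors the application refers to.

```python
def make_extractor_registry():
    # Each "model" here is a stand-in callable; a real system would
    # load feature models trained per modality or acquisition protocol.
    return {
        "CT": lambda img: max(img) - min(img),                      # contrast cue
        "MR": lambda img: sum(abs(a - b) for a, b in zip(img, img[1:])),  # gradient cue
    }

def extract_features(images_with_modality, registry):
    """Extract feature information from each image using the model
    trained for the modality that acquired it."""
    return [registry[modality](img) for img, modality in images_with_modality]

features = extract_features([([0, 2, 1], "CT"), ([0, 2, 1], "MR")],
                            make_extractor_registry())
```

The same image data yields different feature values depending on which trained model is applied.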
[0012] The system may comprise a transform-establishing subsystem
for establishing a coordinate transform defining a registration
between the first image, the second image, and/or the adaptive
mesh, based on feature information in the respective image and the
adaptive mesh. By involving the adaptive mesh in the generation of
the coordinate transform, a more accurate registration may be
obtained. The more accurate registration may in turn improve the
adaptive mesh resulting from the deforming subsystem.
[0013] The coordinate transform may define a relation between a
coordinate system of the first image and a coordinate system of the
second image. Such a coordinate transform may be used to define a
registration between the two images.
[0014] The coordinate transform may be an affine transform. Such an
affine transform may represent global differences, such as
movement, rotation, and/or enlargement. Such global changes then do
not need to hinder the model adaptation.
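The global differences an affine transform can represent are captured by a matrix (rotation, scaling, shear) plus a translation vector; the sketch below, assuming 2-D points and NumPy, is illustrative only.

```python
import numpy as np

def affine_transform(points, A, t):
    """Apply x -> A @ x + t to an (N, 2) array of mesh points.
    A captures rotation/scaling/shear; t captures translation."""
    return points @ A.T + t

# Global enlargement by a factor of 2 plus a shift, as one affine transform:
pts = np.array([[0.0, 0.0], [1.0, 0.0]])
out = affine_transform(pts, A=2.0 * np.eye(2), t=np.array([1.0, 0.0]))
```

Because such global motion is absorbed by the transform, it need not be compensated by mesh deformation.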
[0015] The system may comprise a general outline-providing
subsystem for providing a shape model representing a general
outline of the structure. Moreover, the deforming subsystem may be
arranged for deforming the adaptive mesh, based also on the shape
model. For example, the adaptive mesh is deformed on the basis of
an internal energy and the internal energy is defined in dependence
on the shape model. This may help, for example, to prevent image
features in the multi-dimensional dataset from driving the adaptive
mesh to false boundaries, so-called false attractors.
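One way to realize the shape-model dependence is an internal energy that penalizes deviation of the mesh from the model, added to an image-derived external energy. The quadratic penalty and the weighting parameter `alpha` below are illustrative assumptions, not the application's specific formulation.

```python
import numpy as np

def internal_energy(mesh, shape_model):
    """Penalize deviation of the adaptive mesh from the shape model,
    discouraging attraction to false boundaries ("false attractors")."""
    return float(np.sum((mesh - shape_model) ** 2))

def total_energy(mesh, shape_model, external, alpha=1.0):
    # external: image-derived energy term; alpha weights the shape prior.
    return external + alpha * internal_energy(mesh, shape_model)
```

Minimizing the total energy trades off image evidence against fidelity to the general outline.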
[0016] The first image and the second image may have been acquired
using, for example, the same or two different imaging modalities
from the group of X-ray, 3D rotational X-ray, CT, MR, Ultrasound,
PET, SPECT, and magnetic particle imaging. For example, the system
may comprise an imaging modality of that group to perform the image
acquisitions. Alternatively or additionally, an input may be
provided for retrieving the image data from elsewhere.
[0017] The first and second image may be acquired using two
different imaging modalities. The different features which appear
in the images of different imaging modalities may be combined or
may complement each other in the model adaptation.
[0018] The first image may have been acquired while an imaged
subject contained a structure it did not contain during the
acquisition of the second image. For example, the subject contained
injected contrast agent during the acquisition of the first image,
but not during the acquisition of the second image. The order in
which the images are acquired is not important here.
[0019] The first image and the second image may relate to a same
patient. The different features visible in the first image and
second image then may be assumed to relate to the same body
structure.
[0020] A medical imaging workstation may comprise the system set
forth.
[0021] A medical imaging acquisition apparatus may comprise a
scanner for acquiring the first image and the system set forth.
[0022] A method of establishing a contour of a structure may
comprise
[0023] initializing an adaptive mesh representing an approximate
contour of the structure, the structure being represented at least
partly by a first image, and the structure being represented at
least partly by a second image; and
[0024] deforming the adaptive mesh, based on feature information of
the first image and feature information of the second image.
[0025] A computer program product may comprise instructions for
causing a processor system to perform the steps of the method set
forth.
[0026] It will be appreciated by those skilled in the art that two
or more of the above-mentioned embodiments, implementations, and/or
aspects of the invention may be combined in any way deemed
useful.
[0027] Modifications and variations of the image acquisition
apparatus, of the workstation, of the system, and/or of the
computer program product, which correspond to the described
modifications and variations of the system, can be carried out by a
person skilled in the art on the basis of the present
description.
[0028] A person skilled in the art will appreciate that the method
may be applied to multidimensional image data, e.g., to
2-dimensional (2-D), 3-dimensional (3-D) or 4-dimensional (4-D, for
example 3-D+time) images, acquired by various acquisition
modalities such as, but not limited to, standard X-ray Imaging,
Computed Tomography (CT), Magnetic Resonance Imaging (MRI),
Ultrasound (US), Positron Emission Tomography (PET), Single Photon
Emission Computed Tomography (SPECT), and Nuclear Medicine
(NM).
BRIEF DESCRIPTION OF THE DRAWINGS
[0029] These and other aspects of the invention will be further
elucidated and described with reference to the drawing, in
which
[0030] FIG. 1 shows a block diagram of a system for establishing a
contour of a structure; and
[0031] FIG. 2 shows a block diagram of a method of establishing a
contour of a structure.
DETAILED DESCRIPTION OF EMBODIMENTS
[0032] Segmentation of medical images (2D, 3D, 4D) from one imaging
modality can be performed automatically using shape-constrained
deformable models. Depending on the task and the imaging protocol,
some structures of interest may be invisible or hard to define. If
no protocol is available that provides all the needed information,
then a multi-modal approach may be helpful in solving the
segmentation task. Here, complementary information about all
structures of interest may provide a comprehensive
segmentation.
[0033] Inter-image registration parameters may be available to the
system a priori. In such a case, the relation
between the coordinate systems of the different images is known
beforehand. For each available image, image-based forces may be
established that jointly "pull" at least part of a common mesh
model towards the structure of interest. Simultaneous image
segmentation and registration is also possible. For example, the
relations between the image coordinate systems and the mesh
coordinate system are refined iteratively in alternation with the
segmentation.
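The alternating refinement just described can be sketched as a loop; the callables `deform_step` and `register_step`, their signatures, and the scalar toy example are hypothetical stand-ins for the deforming and transform-establishing subsystems.

```python
def alternate_segment_and_register(mesh, images, transforms,
                                   deform_step, register_step, n_iter=5):
    """Refine segmentation and registration in alternation: each pass
    deforms the common mesh under image-based forces, then re-estimates
    the image-to-mesh coordinate transforms against the updated mesh."""
    for _ in range(n_iter):
        mesh = deform_step(mesh, images, transforms)
        transforms = [register_step(mesh, img) for img in images]
    return mesh, transforms

# Toy 1-D example: the "mesh" is a scalar pulled halfway toward the
# mean image evidence each pass; each "transform" is a scalar offset.
mesh, transforms = alternate_segment_and_register(
    mesh=0.0, images=[2.0, 2.0], transforms=[0.0, 0.0],
    deform_step=lambda m, imgs, ts: (m + sum(imgs) / len(imgs)) / 2.0,
    register_step=lambda m, img: m - img)
```

Each iteration lets the updated mesh improve the registration and the updated registration improve the mesh.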
[0034] For improved viewing, regions with visible anatomical
structures can be identified in images obtained from the different
imaging modalities and combined in a single composite image. For
example, the "locally most informative" image patches (i.e., image
parts from the modality that provides the most convincing boundary
information at an image location) may be fused to obtain one
overall display showing image information from various sources. A
user interface may be arranged for enabling the user to switch
(locally, e.g., per displayed patch) between the complementary
images to check or compare the information provided by each imaging
modality. For example, information from both images may be encoded
into a model comprising the contour or adaptive mesh, and the
combined model may be displayed. For example, calcifications may be
extracted from an image without contrast agent, whereas wall
thickness may be extracted from another image with contrast
agent.
[0035] If the inter-image registration parameters are not available
a priori, or if it is desired to further improve these parameters,
a registration process may be performed. Such a registration process
may comprise matching the images onto each other by means of rigid,
affine, or even non-linear coordinate transforms. This may be done
by optimizing a global match function between the images being
registered. Match functions may establish some sort of correlation
(or an objective function) between the grey values (image
intensities) of the images. An optimal match may be achieved by
maximizing this correlation or objective function. Alternatively
(or thereafter), "forces" may be defined which may direct the first
and/or the second image to a common mesh based on image features.
Such a method arranged for aligning the images with the common mesh
will be described in more detail further below.
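A common grey-value match function of the kind mentioned above is normalized cross-correlation; this particular choice is an illustrative assumption (the application names correlation only generically).

```python
import numpy as np

def normalized_cross_correlation(img_a, img_b):
    """Global grey-value match function between two images of equal
    shape; registration would search over transforms of img_b to
    maximize this score (1.0 indicates a perfect linear match)."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

# Linearly related intensities score 1.0 regardless of gain:
score = normalized_cross_correlation(np.array([1.0, 2.0, 3.0]),
                                     np.array([2.0, 4.0, 6.0]))
```

An optimizer would vary the coordinate transform applied to one image so as to maximize this score.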
[0036] FIG. 1 illustrates a system for establishing a contour of a
structure. Such a system may be implemented in hardware or software
or a combination thereof. The system may comprise an input 6 for
retrieving image data from an imaging modality 7, for example, or
from a database system 8. Such a database system 8 may comprise a
PACS server, for example. An image may comprise a multi-dimensional
array of grey-values, for example. The grey-values represent
objects scanned using an imaging modality, such as computed
tomography (CT), or magnetic resonance (MR) imaging.
[0037] An initialization subsystem 1 is arranged for initializing
an adaptive mesh. The adaptive mesh represents an approximate
contour of a structure. Such a structure could be an organ of a
human body, for example. Other possible examples include an organ
of an animal, or a non-organic object in non-destructive testing.
The structure may be represented at least partly by a first image
obtained via the input 6. In addition, the structure may be
represented at least partly by a second image obtained via the
input 6. The initialization subsystem may be arranged for using a
default shape with which the adaptive mesh may be initialized.
Alternatively, information in the first and second images may be
used, for example by identifying a region having a different grey
scale than the remainder of the image. The adaptive mesh may then
be initialized as a contour of said region. The adaptive mesh may
be initialized based on a shape model provided by the general
outline-providing subsystem 5. For example, the adaptive mesh is
initialized to be equal to a shape model provided by the general
outline-providing subsystem 5. The initialization subsystem may
further be arranged for initializing the adaptive mesh with a
previous deformed model. For example, deformation may occur in
iterations.
[0038] The deformation of the adaptive mesh may be performed by a
deforming subsystem 2. The deformation may be based on feature
information of the first image and feature information of the
second image. This way, two information sources are used to obtain
feature information. Feature information may include, for example,
gradients. Other kinds of feature information are known in the art
per se.
[0039] The deforming subsystem may comprise a force-establishing
subsystem 3 for establishing a force acting on at least part of the
adaptive mesh in dependence on the feature information of the first
image and the feature information of the second image. For example,
the first image may comprise a first feature close to a part of the
adaptive mesh. However, the second image may also comprise a second
feature close to that part of the adaptive mesh. The
force-establishing subsystem 3 may take both features into account
when establishing the force value. When only one of the images
comprises a relevant feature close to a part of the adaptive mesh,
the force acting on that part of the adaptive mesh may be
established based on only this feature. The deforming subsystem 2
may comprise a feature information-extracting subsystem 11 for
extracting feature information from the respective images, using
respective models trained for the particular imaging modalities or
protocols used to acquire the respective images. This way, the
particular feature information provided by each modality can be
exploited.
[0040] The system may further comprise a transform-establishing
subsystem 4 for establishing a coordinate transform defining a
registration between the first image, the second image, and the
adaptive mesh. This registration may be based on the adaptive mesh
and the feature information in one or more of the images. Points of
the adaptive mesh may be mapped onto points in the first or second
image. A coordinate transform may be established which defines a
relation between a coordinate system of the first image and a
coordinate system of the second image. For example, the adaptive
mesh may be defined with respect to the coordinate system of one of
the images, and a coordinate transform may be established mapping
the points of the mesh and of the image onto points of another
image. To this end, feature information of that other image may be
used and compared with the shape of the adaptive mesh. It is also
possible to establish coordinate transforms from the adaptive mesh
to different images; the transform from one image to the other can
be derived from the transforms mapping the adaptive mesh to each of
the images.
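Deriving the image-to-image transform from the two mesh-to-image transforms is a composition with one inverse; the sketch below assumes 2-D homogeneous (3x3) matrices, an illustrative representation.

```python
import numpy as np

def image_to_image_transform(T_mesh_to_img1, T_mesh_to_img2):
    """Given homogeneous (3x3) transforms mapping mesh coordinates into
    each image, derive the image-1-to-image-2 transform by composing
    T_mesh_to_img2 with the inverse of T_mesh_to_img1."""
    return T_mesh_to_img2 @ np.linalg.inv(T_mesh_to_img1)

T1 = np.array([[1.0, 0.0, 5.0],   # mesh -> image 1: translate by (5, 0)
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
T2 = np.array([[2.0, 0.0, 0.0],   # mesh -> image 2: scale by 2
               [0.0, 2.0, 0.0],
               [0.0, 0.0, 1.0]])
T12 = image_to_image_transform(T1, T2)
```

A point at (5, 0) in image 1 corresponds to the mesh origin and therefore maps to (0, 0) in image 2.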
[0041] The registration may be based on the adaptive mesh and the
feature information in one or more of the images. For example, a
registration of a mesh to an image is computed. After that, the
transform representing the registration may be inverted and used to
register the two images. To start, for a given image, a mesh of
fixed shape may be registered to the image via a parametric
transformation (for example, but not limited to, an affine
transformation). This registration may be controlled by "forces"
established between mesh points (or triangles) and corresponding
image points (also called target points or target surfaces) which
may lead to a so-called external energy. For example, such an
external energy may be computed as a sum of energies of imaginary
springs attached between the mesh points and the target structures.
By minimizing the external energy with respect to the parametric
transform, the mesh may be aligned, or registered, with the image.
To register two images, the transform may be inverted, if
necessary. Instead of transforming the mesh to either image, an
inverse transform may be used to align each image with the mesh.
This is a way of establishing registration parameters between the
two images.
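The spring-based external energy can be sketched as a sum of squared mesh-to-target distances; for brevity the parametric transform is restricted here to a pure translation (the application mentions affine transforms more generally), for which the energy minimizer has a closed form.

```python
import numpy as np

def external_energy(mesh_pts, target_pts, t):
    """Sum of imaginary-spring energies between translated mesh points
    and their corresponding target image points."""
    d = mesh_pts + t - target_pts
    return float(np.sum(d ** 2))

def best_translation(mesh_pts, target_pts):
    """For a pure translation, the energy-minimizing parameter is the
    mean displacement from mesh points to their targets."""
    return (target_pts - mesh_pts).mean(axis=0)

mesh = np.array([[0.0, 0.0], [1.0, 0.0]])
targets = np.array([[2.0, 1.0], [3.0, 1.0]])
t = best_translation(mesh, targets)
```

Applying the inverse of the fitted transform to an image, rather than the transform to the mesh, aligns that image with the common mesh.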
[0042] The transform-establishing subsystem 4 may be
arranged for establishing an affine transform. This way, only
global changes are represented in the coordinate transform. More
detailed changes may be accounted for by the deforming subsystem 2
and/or by the force-establishing subsystem 3.
[0043] The system may comprise a general outline-providing
subsystem 5 for providing a shape model representing a general
outline of the structure. For example, the subsystem 5 may comprise
or have access to a database of shape models, e.g. via the input 6.
These shape models may, for example, provide an average shape for a
particular type of object or structure. For example, different
models may be provided for the lungs and for the kidney. A user
input or image metadata may be used to select one of the shape
models from the database. The deforming subsystem 2 may be arranged
for deforming the adaptive mesh, based also on the shape model.
This way, the general shape of the structure which is segmented may
be used as a reference during the deformation process. The shape
model may also be used by the transform-establishing subsystem 4.
The general outline-providing subsystem 5 may further provide
access to multi-modal image feature information. For example, a
database may be provided specifying which image features should be
considered in relation to images from a particular imaging modality
and/or images of a particular body part or application.
[0044] The first image and the second image may be acquired using
one or more imaging modalities, for example from the group of CT,
MR, Ultrasound, PET and SPECT. However, these modalities are not to
be construed in a limiting sense. Different imaging modalities may
result in different feature information. The feature information
obtainable by different imaging modalities may be complementary.
For example, CT and MR show different kinds of tissues. Multiple
images using the same modality may also provide additional,
complementary feature information compared to only one image. For
example, the images may relate to different (preferably
overlapping) portions of the subject. Moreover, different images
may be acquired after applying or not applying different sorts of
contrast agents or after waiting for different times to observe the
dynamics of contrast distribution in the tissue over time.
[0045] The first image and the second image may relate to a same
subject, for example a patient. The first image may be acquired
while an imaged subject contains a structure it does not contain
during the acquisition of the second image. For example, contrast
agent inflow can be visualized in the images. Also, a sequence of
images showing a moving object may be captured; these images may be
used in the system described herein.
[0046] The adaptive mesh, after successful deformation, and the
coordinate transformation(s), may be forwarded to a post-processing
subsystem 9 for further processing. For example, the data may be
stored in a database such as a patient database. It is also
possible to use the results to identify the grey values of the
images relating to the structure. This way, computations and
comparisons become possible. The post-processing subsystem 9 may
also be arranged for generating a visualization of the contour, for
example as an overlay over one or more of the images. This
visualization may be shown to a user on a display 10.
[0047] A medical imaging workstation may comprise the system set
forth. A medical imaging workstation may further comprise user
interface means, including for example the display 10, a keyboard
and/or pointing device such as a mouse, and a network connection or
other communications means. The communication means may be used to
retrieve image information via input 6 and/or to store the results
such as the deformed mesh or the coordinate transform, or data
derived therefrom.
[0048] A medical imaging acquisition apparatus may comprise a
scanner for acquiring the first image and/or the second image, and
the system set forth. For example, the medical imaging apparatus
may comprise the features of the medical imaging workstation.
[0049] FIG. 2 illustrates aspects of a method of establishing a
contour of a structure. In step 201, the adaptive mesh is
initialized. The adaptive mesh represents an approximate contour of
the structure, the structure being represented at least partly by a
first image, and the structure being represented at least partly by
a second image. In step 202, the adaptive mesh is deformed based on
feature information of the first image and feature information of
the second image. The deforming step 202 may be iterated until the
adaptive mesh has converged. In step 203, a coordinate transform is
established. The coordinate transform may define a registration
between the first image, the second image, and the adaptive mesh.
The coordinate transform may be based on feature information in the
respective image or images and the adaptive mesh. The coordinate
transform may be used to register the images or to register the
mesh to one or more of the images. The order of the steps may be
interchanged, for example, step 203 may precede step 202. Also,
either of steps 202 or 203 may be omitted. Moreover, steps 202 and
203 may be performed iteratively, by performing steps 202 and 203
in alternating fashion. This way, the registration may benefit from
the updated mesh and the mesh deformation may benefit from the
updated coordinate transform. The method may be implemented as a
computer program product.
[0050] Multi-modal segmentation may be used for segmenting images
with missing information by inclusion of information provided by
other images. A model-based approach for image segmentation may be
used for identifying most or all visible structures of an image.
However, most model-based approaches are optimized for a particular
imaging modality. By combining model-based approaches that have
been optimized for different imaging modalities, the system may be
enabled to identify most or all visible structures in each single
image. By combining the available information about the visible
structures from the images (preferably obtained from different
imaging modalities), the different images complement each other.
This will improve the segmentation compared to a segmentation of an
individual image. Furthermore, any found correspondences between
identified anatomical landmarks in different images can be pointed
out in a suitable user interface.
[0051] It is possible to combine registration and segmentation.
This may be done by refining an initial rough guess of the
registration parameters based on a comparison of the detected
visible structures (anatomical landmarks) in the images.
Registration may thus be preferably driven by geometric information
(describing the location of visible structures). However, it is
also possible to drive the registration by a global matching of
grey values. Using a model of the desired anatomy in combination
with geometric information may improve the registration accuracy,
because it takes into account the anatomical structures of interest
instead of the whole image. Also, an initial registration may be
obtained from prior knowledge about the relation of the scanner
geometries with respect to the patient.
[0052] A multi-modal segmentation framework may comprise
establishing image-based forces that "pull" on an adaptive mesh
describing the structures of interest. The mesh may be defined with
respect to a coordinate system. This coordinate system may be
related to image coordinate systems via an affine transformation.
Such affine transformations may establish a registration between
the images and the common segmentation result.
[0053] The multi-modal segmentation may combine the image-based
forces derived from all available images of a particular structure
of interest. These image-based forces may be based on visible
structures in the images. The image-based forces may "pull" the
common mesh to a compromise position. If a particular region in an
image does not contain structures of interest, that region
preferably does not contribute to the image-based forces. However, if
a structure of interest is available in a corresponding region of
another image, this structure preferably does contribute to the
image-based forces. This provides a way of "bridging the gap"
caused by image regions which do not reveal the structure of
interest.
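The "bridging" behavior described above might be sketched as a visibility-weighted combination of per-image forces. The mask representation, the per-point normalization, and the function name `combined_force` are illustrative assumptions, not the formulation used in the application:

```python
import numpy as np

def combined_force(forces, visible):
    """Combine per-image forces so that only images in which the
    structure is visible contribute at each mesh point."""
    forces = np.asarray(forces, dtype=float)    # shape: (n_images, n_points)
    visible = np.asarray(visible, dtype=float)  # 1 where structure is visible
    # Normalize the visibility weights per mesh point (avoid divide-by-zero).
    w = visible / np.maximum(visible.sum(axis=0), 1.0)
    return (w * forces).sum(axis=0)

# Two images, three mesh points; image 1 misses the structure at point 1,
# so image 2 alone "bridges the gap" there.
f = combined_force(forces=[[1.0, 0.0, 3.0],
                           [2.0, 4.0, 5.0]],
                   visible=[[1, 0, 1],
                            [1, 1, 1]])
# point 0: average 1.5; point 1: only image 2 -> 4.0; point 2: average 4.0
```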
[0054] For any given portion of the adaptive mesh model, it is
possible to assess which image (or images) represent(s) the
structure on which the portion of the surface model is based, for
example by considering the forces which attracted the portion of
the adaptive surface model. Based on this assessment, a combined
image may be composed of portions of the different images as a
"patchwork" of most relevant image regions. A user interface may
show an indication of which image contributed to a given portion of
the adaptive mesh model. Moreover, the user interface may enable a
user to visualize a desired image in order to assess the
complementary image information. Such visualization may be local;
for example, if the composed image contains a first portion of a
first image, the user may choose to replace said first portion by a
corresponding portion of a second image, to obtain a new composite
image which may be visualized.
[0055] Combining information from multiple images, which may have
been acquired from different modalities, allows segmenting
structures that are only partly visible in each of the images but
well-defined if all images are taken into account. Furthermore, by
comparing the visible structures per image with the common
segmentation result, an (affine) registration between the image
coordinate systems may be estimated or refined.
[0056] In a multi-modal segmentation framework, different images
I_n may be acquired within separate coordinate systems. For example,
the patient may be placed in different scanners, or scan
orientations may change. Offsets as well as re-orientations and
perhaps even scaling effects due to geometric distortions may need
to be accounted for. However, this is not always the case: it is
also possible to acquire multiple images in the same coordinate
system.
[0057] To arrive at a common segmentation, a reference coordinate
system may first be established, in which the segmentation result
may be represented. This will be called the common coordinate
system (CCS) hereinafter. An image I_n may have its own image
coordinate system, denoted "ICS_n". It is also possible that all
images have the same coordinate system; in such a case, ICS_n is
the same for each image n.
[0058] In some cases, the transform between the coordinate systems
may be known. Furthermore, a global linear transform may sometimes
be sufficient to map the CCS to any ICS_n. Let x be some coordinate
vector in the CCS, and let x' be the corresponding vector in ICS_n.
Then, some transform T_n = {M_n, t_n} with a 3×3 matrix M_n and a
translation vector t_n maps x to x':

x' = M_n x + t_n   (1)
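A minimal numerical sketch of equation (1), assuming NumPy arrays for points; the particular rotation-plus-offset transform is a made-up example:

```python
import numpy as np

def ccs_to_ics(x, M_n, t_n):
    """Map a point x from the common coordinate system (CCS)
    to image coordinate system n via x' = M_n x + t_n (eq. 1)."""
    return M_n @ x + t_n

# Hypothetical transform: 90-degree rotation about z plus an offset.
M = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([10.0, 0.0, 5.0])

x = np.array([1.0, 2.0, 3.0])   # point in the CCS
x_prime = ccs_to_ics(x, M, t)   # corresponding point in ICS_n -> [8., 1., 8.]
```

Since M_n is invertible, the back-transform into the CCS is simply `np.linalg.inv(M) @ (x_prime - t)`.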
[0059] Gradients ∇I, as examples of feature information, may also
change under the transform. Gradients may be transformed as follows
(∇I' = gradient in ICS_n, ∇I = gradient if I_n were transformed
into the CCS):

∇I'(x') = (M_n^{-1})^T ∇I(x),   i.e.   ∇I(x) = M_n^T ∇I'(x')   (2)

where the notation ^T denotes matrix transposition. (Here, the
point x' where the transformed gradient ∇I' is evaluated includes
the transform in (1).) The multi-modal segmentation may be realized
as follows: The internal energy of the deforming shape model may be
established in the CCS. External energies may be set up for
individual ones of the images I_n and accumulated with some
weighting factors β_n, with Σ_n β_n = 1:

E_tot = Σ_n β_n E_ext,n + α E_int   (3)
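Equation (3) can be sketched as a weighted sum of per-image external energies plus the internal energy; the helper name and the concrete weights below are assumptions for illustration:

```python
import numpy as np

def total_energy(E_ext, betas, E_int, alpha):
    """Eq. (3): E_tot = sum_n beta_n * E_ext_n + alpha * E_int,
    with the weights beta_n required to sum to 1."""
    betas = np.asarray(betas, dtype=float)
    assert np.isclose(betas.sum(), 1.0), "weights beta_n must sum to 1"
    return float(np.dot(betas, E_ext) + alpha * E_int)

# Two images with illustrative external energies and weights.
E = total_energy(E_ext=[2.0, 4.0], betas=[0.75, 0.25],
                 E_int=1.0, alpha=0.5)
# 0.75*2 + 0.25*4 + 0.5*1 = 3.0
```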
[0060] In combination, the common segmentation may produce a "best
compromise" between the external forces from the images I_n and the
internal forces from the shape model.
[0061] Example formulations of the external energies if coordinate
systems differ by T_n are described hereinafter. By applying the
external energies and internal energies to the adaptive mesh, the
adaptive mesh may be deformed to obtain a contour of a structure in
the image or images. An example external energy E_ext in some
standard coordinate system may be written as (here c_i represents
the mesh point that is subject to optimization):

E_ext = Σ_i f_i ( (∇I(x_i^target) / ‖∇I(x_i^target)‖) · (c_i − x_i^target) )²   (4)
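A sketch of the external energy in equation (4), assuming the normalized gradient direction serves as the normal of a plane through each target point; the function and variable names are illustrative:

```python
import numpy as np

def external_energy(centers, targets, gradients, weights):
    """Eq. (4): for each mesh point c_i, penalize its squared distance
    from the plane through the target point x_i^target whose normal is
    the normalized image gradient at that target point."""
    E = 0.0
    for c, x_t, g, f in zip(centers, targets, gradients, weights):
        n = g / np.linalg.norm(g)           # unit gradient direction
        E += f * float(n @ (c - x_t)) ** 2  # squared plane distance
    return E

# One mesh point, one unit away from the target plane along the gradient.
E = external_energy(centers=[np.array([0.0, 0.0, 1.0])],
                    targets=[np.array([0.0, 0.0, 0.0])],
                    gradients=[np.array([0.0, 0.0, 2.0])],
                    weights=[1.0])
# n = (0, 0, 1); n · (c - x_t) = 1; E = 1.0
```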
[0062] To set up E_ext,n in case of a transformed coordinate
system, target points may be detected in the image I_n. This may be
done by transforming the triangle centers c_i and normals from the
CCS to ICS_n and by searching a target point around a transformed
triangle center. Within ICS_n, a gradient direction ∇I'/‖∇I'‖ may
be determined at a detected target point.
[0063] A first option for E_ext,n is to back-transform x_i^target
into the CCS and to establish the external energy there. Thinking
in "spring forces", the springs may be attached to the triangles in
the CCS, pulling the mesh towards back-transformed target planes.
This involves the back-transformation of the gradient direction
using (2) and re-normalization, for example:

E_ext,n = Σ_i f_i' ( (M_n^T ∇I'(x_i'^target) / ‖M_n^T ∇I'(x_i'^target)‖) · (c_i − x_i^target) )²   (5)

where x_i^target is a back-transformed target point and c_i is a
triangle center in the CCS. Another option is to formulate E_ext,n
within the image I_n, i.e., within ICS_n.
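The back-transformation and re-normalization of the gradient direction used in equation (5) (via equation (2)) might be sketched as follows; the anisotropic scaling matrix is a made-up example chosen to show why re-normalization is needed, since gradients do not transform like points:

```python
import numpy as np

def back_transformed_normal(M_n, grad_ics):
    """Back-transform a gradient from ICS_n to the CCS via eq. (2),
    grad_ccs = M_n^T grad_ics, then re-normalize as used in eq. (5)."""
    g = M_n.T @ grad_ics
    return g / np.linalg.norm(g)

# Under anisotropic scaling the back-transformed gradient changes
# direction and length, so it must be re-normalized.
M = np.diag([2.0, 1.0, 1.0])
n = back_transformed_normal(M, np.array([1.0, 1.0, 0.0]))
```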
[0064] The transformations between the image coordinate systems may
be unknown. Finding these transformations may involve a
registration task. This registration task may be solved in
combination with the segmentation task.
[0065] Different aspects of the registration may be identified;
these aspects may help to arrive at an initial coarse estimation of
M_n and t_n:
[0066] Gross orientation of the images, e.g., axial versus sagittal
or coronal slices in the x-y plane (for MR images, oblique planes
may also be acquired): This information is normally available from
the scanner.
[0067] Scaling of the images: Normally, images should "live" in
metric space with little or no distortion. We may use the gross
orientation and a scaling factor of 1 as an initial guess for the
transformation matrices M_n.
[0068] Translation: Images may have an unknown coordinate offset.
However, object localization techniques may allow coarsely
localizing the organ of interest.
[0069] The transforms may be refined per image I_n. For example, it
is possible to take the current instance of the commonly adapted
mesh and optimize the transformation T_n so as to minimize the
external energy for I_n alone. T_n may then describe an improved
mapping of the common mesh to the respective image. After
performing this step for all images, it is possible to continue
with the multi-modal segmentation using the updated transforms T_n.
This procedure of iterative multi-modal mesh adaptation and
re-estimation of all T_n may result in a converged segmentation
including the registration parameters as encoded by T_n.
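The alternating scheme of this paragraph might be sketched with hypothetical `adapt_mesh` and `refine_transform` callables standing in for the multi-modal mesh adaptation and the per-image transform optimization; the one-dimensional toy below is purely illustrative:

```python
def alternate(mesh, transforms, images, adapt_mesh, refine_transform,
              n_iters=10):
    """Interleave multi-modal mesh adaptation with per-image
    re-estimation of the transforms, as in paragraph [0069]."""
    for _ in range(n_iters):
        mesh = adapt_mesh(mesh, images, transforms)       # multi-modal step
        transforms = [refine_transform(mesh, img, T)      # per-image step
                      for img, T in zip(images, transforms)]
    return mesh, transforms

# Toy 1-D stand-ins: the "mesh" is a number, each "image" wants it at
# a target position, and each "transform" is a scalar offset refined
# to the residual between its image and the common mesh.
images = [3.0, 5.0]
adapt = lambda m, imgs, Ts: sum(i - T for i, T in zip(imgs, Ts)) / len(imgs)
refine = lambda m, img, T: img - m
mesh, Ts = alternate(0.0, [0.0, 0.0], images, adapt, refine, n_iters=20)
# Converges to a compromise mesh at 4.0 with offsets [-1.0, 1.0].
```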
[0070] The segmentation and/or registration techniques described
herein may be applied for example for segmentation of structures or
complex organs that cannot be fully described by a single imaging
modality alone. The techniques may further be applied for
segmentation of structures represented by images that do not
contain the full anatomical information. Complementary scans from
different imaging modalities may provide any such missing
information. For example, functional images, such as PET or SPECT
images, may not provide full anatomical information. By performing
the techniques presented herein on a functional image and an
anatomical image, such as a CT image or MR image, an improved
segmentation may be obtained. Moreover, the techniques may be used
to register the functional image with the anatomical image.
Moreover, the techniques may be used to generate a composite image
showing both the functional and the anatomical information.
[0071] A method of establishing a contour of a structure may start
with initializing a shape model. Next, the structure or parts of it
may be identified in a first image at a first image location and in
a second image at a second image location which may be different
from the first image location. A first force value may be
associated with a first portion of the initialized shape model, the
first force value being based on the structure in the first image,
the first portion of the initialized shape model corresponding to
the first image location. A second force value may be associated
with a second portion of the initialized shape model, the second
force value being based on the structure in the second image, the
second portion of the initialized shape model corresponding to the
second image location. The shape of the first portion of the
initialized shape model may be adapted based on the first force
value. The shape of the second portion of the initialized shape
model may be adapted based on the second force value. If the two
portions of the initialized shape model overlap, they may be
adapted based on both force values.
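The overlap handling described above might be sketched as follows; averaging the force values where portions overlap is an assumed combination rule, not one prescribed by the application:

```python
import numpy as np

def adapt_point(point, forces):
    """Displace a shape-model point by the mean of all force values
    acting on it; a point in an overlapping portion receives forces
    from both images."""
    return point + np.mean(forces, axis=0)

# A point in the overlap of the first and second portions is adapted
# based on both force values.
p_overlap = adapt_point(np.array([0.0, 0.0]),
                        [np.array([2.0, 0.0]),    # force from first image
                         np.array([0.0, 2.0])])   # force from second image
# -> [1.0, 1.0]
```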
[0072] It will be appreciated that the invention also extends to
computer programs, particularly computer programs on or in a
carrier, adapted for putting the invention into practice. The
program may be in the form of source code, object code, or a code
intermediate between source and object code, such as a partially
compiled form, or in any other form suitable for use in the implementation
of the method according to the invention. It will also be
appreciated that such a program may have many different
architectural designs. For example, a program code implementing the
functionality of the method or system according to the invention
may be subdivided into one or more subroutines. Many different ways
to distribute the functionality among these subroutines will be
apparent to the skilled person. The subroutines may be stored
together in one executable file to form a self-contained program.
Such an executable file may comprise computer executable
instructions, for example processor instructions and/or interpreter
instructions (e.g. Java interpreter instructions). Alternatively,
one or more or all of the subroutines may be stored in at least one
external library file and linked with a main program either
statically or dynamically, e.g. at run-time. The main program
contains at least one call to at least one of the subroutines.
Also, the subroutines may comprise function calls to each other. An
embodiment relating to a computer program product comprises
computer executable instructions corresponding to each of the
processing steps of at least one of the methods set forth. These
instructions may be subdivided into subroutines and/or stored in
one or more files that may be linked statically or dynamically.
Another embodiment relating to a computer program product comprises
computer executable instructions corresponding to each of the means
of at least one of the systems and/or products set forth. These
instructions may be subdivided into subroutines and/or stored in
one or more files that may be linked statically or dynamically.
[0073] The carrier of a computer program may be any entity or
device capable of carrying the program. For example, the carrier
may include a storage medium, such as a ROM, for example a CD ROM
or a semiconductor ROM, or a magnetic recording medium, for example
a floppy disc or hard disk. Further, the carrier may be a
transmissible carrier such as an electrical or optical signal,
which may be conveyed via electrical or optical cable or by radio
or other means. When the program is embodied in such a signal, the
carrier may be constituted by such a cable or other device or
means. Alternatively, the carrier may be an integrated circuit in
which the program is embedded, the integrated circuit being adapted
for performing, or for use in the performance of, the relevant
method.
[0074] It should be noted that the above-mentioned embodiments
illustrate rather than limit the invention, and that those skilled
in the art will be able to design many alternative embodiments
without departing from the scope of the appended claims. In the
claims, any reference signs placed between parentheses shall not be
construed as limiting the claim. Use of the verb "comprise" and its
conjugations does not exclude the presence of elements or steps
other than those stated in a claim. The article "a" or "an"
preceding an element does not exclude the presence of a plurality
of such elements. The invention may be implemented by means of
hardware comprising several distinct elements, and by means of a
suitably programmed computer. In the device claim enumerating
several means, several of these means may be embodied by one and
the same item of hardware. The mere fact that certain measures are
recited in mutually different dependent claims does not indicate
that a combination of these measures cannot be used to
advantage.
* * * * *