U.S. patent application number 14/660,880 was filed with the patent office on 2015-03-17 for a system and method for improving workflow efficiencies in reading tomosynthesis medical image data, and was published on 2015-09-17 as publication number 20150262354. The applicant listed for this patent is iCAD, Inc. Invention is credited to Sergey Fotin and Senthil Periaswamy.

Publication Number: 20150262354
Application Number: 14/660,880
Family ID: 49780344
Filed: 2015-03-17
United States Patent Application 20150262354
Kind Code: A1
Periaswamy, Senthil; et al.
September 17, 2015
SYSTEM AND METHOD FOR IMPROVING WORKFLOW EFFICIENCIES IN READING
TOMOSYNTHESIS MEDICAL IMAGE DATA
Abstract
A system and a method are disclosed that form a novel,
synthetic, two-dimensional image of an anatomical region such as a
breast. Two-dimensional regions of interest (ROIs) such as masses
are extracted from three-dimensional medical image data, such as
digital tomosynthesis reconstructed volumes. Using image processing
technologies, the ROIs are then blended with two-dimensional image
information of the anatomical region to form the synthetic,
two-dimensional image. This arrangement and resulting image
desirably improves the workflow of a physician reading medical
image data, as the synthetic, two-dimensional image provides detail
previously only seen by interrogating the three-dimensional medical
image data.
Inventors: Periaswamy, Senthil (Hollis, NH); Fotin, Sergey (Nashua, NH)

Applicant:
Name: iCAD, Inc.
City: Nashua
State: NH
Country: US

Family ID: 49780344
Appl. No.: 14/660,880
Filed: March 17, 2015
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
13/684,475 | Nov 23, 2012 | 8,983,156
14/660,880 | March 17, 2015 |
Current U.S. Class: 382/131
Current CPC Class: A61B 6/5223 (20130101); F04C 2270/041 (20130101); G06T 7/11 (20170101); A61B 6/469 (20130101); G06T 2207/10112 (20130101); A61B 6/466 (20130101); A61B 6/502 (20130101); G06T 7/194 (20170101); G06T 7/0012 (20130101); G06T 2207/30096 (20130101); A61B 6/025 (20130101); G06T 5/50 (20130101); G06T 2207/20221 (20130101)
International Class: G06T 7/00 (20060101) G06T007/00; G06T 5/50 (20060101) G06T005/50
Claims
1. A system for processing image data relative to an imaged
anatomical region comprising: a. an acquisition process that
acquires one or more two-dimensional (2D) regions of interest
(ROIs) from a three-dimensional (3D) medical image of the
anatomical region obtained from a medical imaging device; b. a
first projection process that defines a first 2D projection image
of the anatomical region; c. a second projection process that
generates a second 2D projection image of the anatomical region
using image information from the first 2D projection image and the
one or more 2D ROIs; and d. at least one of a display and a data
storage arrangement receiving an output of the second 2D projection
image.
Description
RELATED APPLICATIONS
[0001] This Application is a continuation of co-pending U.S. patent
application Ser. No. 13/684,475, filed Nov. 23, 2012, entitled
SYSTEM AND METHOD FOR IMPROVING WORKFLOW EFFICIENCIES IN READING
TOMOSYNTHESIS MEDICAL IMAGE DATA, the entire disclosure of which is
herein incorporated by reference.
BACKGROUND
[0002] 1. Field of the Invention
[0003] This application relates generally to image processing for
biomedical applications. More particularly, this application
relates to improving workflow efficiencies in reading medical image
data.
[0004] 2. Description of the Related Art
[0005] In the fields of medical imaging and radiology, various
techniques may be employed for creating images of an anatomical
region of the human body. For example, in mammography, the breast
is often imaged at two fixed angles using x-rays. Physicians may
review two-dimensional (2D) or planar x-ray images of the
anatomical region to uncover and diagnose disease-like conditions,
such as breast cancer.
[0006] Numerous medical imaging procedures now employ systems and
techniques that create three-dimensional (3D) or volumetric imagery
of the human body. For example, significant attention has been
given to tomographic imaging techniques. One such example is
digital breast tomosynthesis (DBT), a relatively new imaging
procedure in which systems image a breast by moving a source and
exposing the breast to radiation from a plurality of angles, thus
acquiring high resolution, planar images (i.e., "direct
projections") at different angles. For example, a DBT system may
acquire 10 direct projection images in which the source moves in
such a way as to change the imaging angle by a total angle of 40
degrees.
[0007] 3D medical images enable physicians to visualize important
structures in greater detail than available with 2D medical images.
However, the substantial amount of image data produced by 3D
medical imaging procedures presents a challenge. In mammography,
for example, a physician may review two images of a breast: a
cranial-caudal (CC) image and a medial-lateral oblique (MLO) image.
In DBT, the physician may review approximately 50-70 images, which
could include the original projection images and reconstructed
images.
[0008] Several techniques for improving the speed of diagnostic
assessment are disclosed in U.S. Pat. No. 7,630,533, entitled
BREAST TOMOSYNTHESIS WITH DISPLAY OF HIGHLIGHTED SUSPECTED
CALCIFICATIONS; U.S. Pat. No. 8,044,972, entitled SYNCHRONIZED
VIEWING OF TOMOSYNTHESIS AND/OR MAMMOGRAMS; U.S. Pat. No.
8,051,386, entitled CAD-BASED NAVIGATION OF VIEWS OF MEDICAL IMAGE
DATA STACKS OR VOLUMES; and U.S. Pat. No. 8,155,421, entitled
MATCHING GEOMETRY GENERATION AND DISPLAY OF MAMMOGRAMS AND
TOMOSYNTHESIS IMAGES, the teachings of which patents are
incorporated herein by reference as useful background information.
However, solutions are desired that would further improve the speed
of diagnosis without sacrificing the detail provided by 3D medical
imaging technology.
SUMMARY OF THE INVENTION
[0009] This invention overcomes disadvantages of the prior art by
providing a system and method for improving workflow efficiencies
in reading tomosynthesis medical image data that avoids sacrificing
desired detail in images. The system and method generally enhance
the identification of regions and/or objects of interest (ROIs),
such as masses, within an acquired image by performing, based on
three-dimensional (3D) data, an enhancement process on the image
before it is projected into a two-dimensional (2D) format. This
renders the region(s)/object(s) of interest more identifiable to a
viewer (e.g., a diagnostician, such as a physician and/or
radiologist) in the 2D-projected image, as their boundaries are
better defined within the overall field.
[0010] In an illustrative embodiment, the system and method
acquires, using an acquisition process, one or more two-dimensional
(2D) regions of interest (ROIs) from a three-dimensional (3D)
medical image of an anatomical region. The medical image is
obtained from a scanning process carried out on a patient by an
appropriate medical imaging device and associated data handling and
storage devices. A first projection process defines a first 2D
projection image of the anatomical region. Then, a second
projection process generates a second 2D projection image of the
anatomical region using image information from the first 2D
projection image and the one or more 2D ROIs. The second 2D
projection image is then output to be stored and/or displayed using
an appropriate storage system and/or display device. The second
projection process can be constructed and arranged, in a blending
process, to blend the one or more 2D ROIs with image information
from the first 2D projection image, and can include an ROI detector
that forms at least one ROI response image. The blending process
can be further constructed and arranged to extract 2D binary masks
of the one or more ROIs from at least one ROI response image and/or
to blend the 2D binary masks with the first 2D projection image to
generate the second 2D projection image. Additionally, a
three-dimensional response image based upon a selected portion of
the second 2D projection image can be provided to assist the
diagnostician in identifying a region or object of interest, such
as a mass. This 3D response image characterizes the degree to which
various points or regions in an image exhibit characteristics of
interest.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The various inventive embodiments disclosed herein, both as
to their organization and manner of operation, together with further
objectives and advantages, may be best understood by reference to
the following description, taken in connection with the
accompanying drawings as set forth below in which:
[0012] FIG. 1 is a block diagram of a medical imaging system
according to an illustrative embodiment;
[0013] FIG. 2 is a flow diagram of an illustrative image processing
process that can be performed by the medical imaging system of FIG.
1;
[0014] FIG. 3 is a flow diagram of an illustrative process for
using a region of interest (ROI) enhanced two-dimensional image to
improve the efficiency with which a viewer/diagnostician
(physician, radiologist, etc.) reads medical image datasets;
[0015] FIG. 4 is a display image of an exemplary 2D projection
containing an object of interest without processing according to
the illustrative embodiment; and
[0016] FIG. 5 is a display image of an exemplary 2D projection
containing the object of interest of FIG. 4 after enhancement
processing according to the illustrative embodiment.
DETAILED DESCRIPTION
[0017] FIG. 1 is a block diagram of a medical imaging system 100 in
accordance with an illustrative embodiment. The system includes a
three-dimensional medical image source 110, a two-dimensional
medical image source 116, and an image processing unit 120 that
produces a novel, region of interest (ROI)-enhanced two-dimensional
image 140 that can be the primary image read for detection and
diagnosis of disease by a diagnostician. The system 100 further
includes a graphical user interface (GUI) and/or display 142 for
outputting the various medical image data. It should be noted that
a wide range of functional components can be provided to the
system 100 in various embodiments, including various networked
data-handling and storage devices, additional displays, printing
devices, interfaces for portable computing devices, etc.
[0018] According to an embodiment, the three-dimensional medical
image source 110 is a digital tomosynthesis imaging system such as
offered by the General Electric Company of Fairfield, Conn. (GE);
Hologic, Inc., of Bedford, Mass. (Hologic); or Siemens AG of Munich,
Germany (Siemens). Digital tomosynthesis imaging systems image an
anatomical region by moving a source, and acquiring a plurality of
projection images (e.g., 10-25 direct projections) at different
angles (e.g., at 4-degree increments).
[0019] As illustrated in FIG. 1, the three-dimensional medical
image source 110 provides a three-dimensional image 112 of an
anatomical region 114. According to an embodiment, after the source
110 acquires projection images, the projection images are input to
a reconstruction processing unit, which employs conventional
techniques and processes to construct an image volume of the
anatomical region. By way of one example, the image volume can be
constructed in 40-60 image thin slices, each thin slice having a
spatial resolution of 100 microns per pixel, a thickness of 1
millimeter (mm), and dimensions of 2500 rows of pixels by 1500
columns of pixels.
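The numbers above give a sense of the data volume involved. A back-of-the-envelope sketch, assuming 16-bit grayscale storage (a common but here assumed bit depth):

```python
# Rough data-volume estimate for a DBT reconstruction with the
# illustrative dimensions given above; actual values vary by system.
n_slices = 50            # within the 40-60 thin-slice range, 1 mm thick
rows, cols = 2500, 1500  # pixels per slice at ~100 microns per pixel
bytes_per_pixel = 2      # assumed 16-bit grayscale storage

volume_bytes = n_slices * rows * cols * bytes_per_pixel
print(f"{volume_bytes / 1e6:.0f} MB")  # ~375 MB per reconstructed view
```

This is roughly two orders of magnitude more data than a pair of 2D mammograms, which motivates the workflow problem described in the Background.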
[0020] According to an embodiment, the two-dimensional medical
image source 116 provides a two-dimensional image 118 of the
anatomical region 114. By way of one example, source 116 can
include a computer memory of conventional design that reads the
image 118 from a disk or other data storage device. The depicted
source can be defined to include associated storage hardware in
such embodiments. By way of another example, source 116 can be
defined to include a tomosynthesis image acquisition unit capable
of operating in a full-field digital mammography imaging mode and
acquiring medio-lateral oblique (MLO) or cranio-caudal (CC)
two-dimensional images. By way of yet a further example, source 116
can be defined to include image processing computer software
capable of synthetically producing two-dimensional images from
existing image data of the anatomical region 114.
[0021] Note, as used herein the terms "process" and/or "processor"
should be taken broadly to include a variety of electronic hardware
and/or software based functions and components. Moreover, a
depicted process or processor can be combined with other processes
and/or processors or divided into various sub-processes or
processors. Such sub-processes and/or sub-processors can be
variously combined according to embodiments herein. Likewise, it is
expressly contemplated that any function, process and/or processor
herein can be implemented using electronic hardware, software
consisting of a non-transitory computer-readable medium of program
instructions, or a combination of hardware and software.
[0022] The image processing unit 120 further includes a
three-dimensional ROI detector 124, a two-dimensional ROI extractor
128, and an image blending unit 132.
[0023] The three-dimensional ROI detector 124 characterizes the
degree to which various points or regions in an image exhibit
characteristics of particular interest. For example,
characteristics that may be of interest in a breast include
blob-like regions or spiculated regions, both of which could
indicate malignancy. Thus, according to an embodiment, the detector
124 can include a calcification detector, blob detector, a
spiculation detector, or combinations thereof. As illustrated in
FIG. 1, the three-dimensional ROI detector 124 produces an ROI
response image 126 that contains this characterization information
for every image slice in the three-dimensional image 112.
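One generic way to build such a blob-response image, shown here purely for illustration (the patent does not specify this operator), is a negated, scale-normalized Laplacian of Gaussian, which responds strongly at bright blob-like structures:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def blob_response(volume, sigma=3.0):
    """Per-voxel blob response for a 3D image volume.

    A negated, scale-normalized Laplacian of Gaussian gives high values
    at bright blob-like structures whose radius matches sigma. This is a
    generic sketch, not the patent's specific detector.
    """
    return -(sigma ** 2) * gaussian_laplace(volume.astype(float), sigma=sigma)

# A bright Gaussian blob in a small synthetic volume yields a peaked response.
z, y, x = np.mgrid[0:21, 0:21, 0:21]
blob = np.exp(-((z - 10)**2 + (y - 10)**2 + (x - 10)**2) / (2 * 3.0**2))
resp = blob_response(blob, sigma=3.0)
print(np.unravel_index(np.argmax(resp), resp.shape))  # peak at the blob center
```

A spiculation detector would replace the operator with one sensitive to radiating line structures, but the response-image idea is the same.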
[0024] The two-dimensional ROI extractor 128 extracts
two-dimensional information from portions of the three-dimensional
image 112 that include the points or regions of interest exhibiting
the characteristics of interest. According to an embodiment, the
extractor 128 extracts a 2D binary mask 130, also referred to
herein as a chip 130, for each ROI.
[0025] According to an embodiment, the image blending unit 132
includes a blending function or process that combines the
two-dimensional information extracted by the extractor 128 with the
two-dimensional image 118 provided by source 116. The blending
function/process forms the ROI-enhanced two-dimensional image
140.
[0026] FIG. 2 is a flow diagram of the operational image processing
that can be performed by the medical imaging system 100 to produce
an ROI-enhanced two-dimensional image.
[0027] At a step 210, a three-dimensional, reconstructed image
volume of an anatomical region is acquired from the
three-dimensional image source 110.
[0028] At a step 220, the three-dimensional ROI detector 124
processes the 3D reconstructed image volume of the anatomical
region to form the ROI response image 126.
[0029] At a step 230, the ROI extractor 128 extracts 2D binary
masks of ROIs from the ROI response image 126. According to an
embodiment, the ROI extractor 128 first finds the local maxima of
ROIs in the response image. A local maximum specifies the 2D slice
of the three-dimensional image from which the binary mask should be
optimally extracted. Then, the ROI extractor 128 extracts the 2D
binary mask of the ROI by thresholding the response image. In one
embodiment, the threshold value to be applied is a fixed variable
whose value can be set using empirical data. Finally, the ROI
extractor 128 performs a mathematical morphological dilation
operation to ensure that the extracted 2D binary mask will
encompass the entire structure of interest.
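The three extraction sub-steps above (peak finding, thresholding, dilation) can be sketched with SciPy primitives; the threshold value and dilation amount below are illustrative assumptions, not the patent's empirically set values:

```python
import numpy as np
from scipy import ndimage

def extract_chip(response, threshold=0.5, dilation_iters=2):
    """Sketch of step 230: locate an ROI's peak slice, threshold that
    slice, and dilate the binary mask to cover the whole structure."""
    # The local maximum of the 3D response picks the slice to extract from.
    z, y, x = np.unravel_index(np.argmax(response), response.shape)
    peak_slice = response[z]
    # Threshold the peak slice to get the initial 2D binary mask.
    mask = peak_slice > threshold
    # Morphological dilation grows the mask so it encompasses the
    # entire structure of interest, not just the strongest pixels.
    mask = ndimage.binary_dilation(mask, iterations=dilation_iters)
    return z, mask

# Synthetic response: one bright square in slice 3 of a 10-slice volume.
resp = np.zeros((10, 64, 64))
resp[3, 20:30, 20:30] = 1.0
z, mask = extract_chip(resp)
print(z, mask.sum() > 100)  # slice 3; dilated mask exceeds the 10x10 seed
```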
[0030] At a step 240, the image blending unit 132 blends each 2D
binary mask into the two-dimensional image 118. According to an
embodiment, the blending unit 132 first computes a soft blending
mask from the 2D binary mask, which will ensure that the ROIs are
smoothly blended into the final image. An illustrative technique
for computing the soft blending mask involves applying a known
Gaussian smoothing filter on the 2D binary mask. Then, the blending
unit 132 performs the following blending function:
[0031] For each pixel i in the mixed image:
mixed_image[i] = original_image[i] * (1 - soft_mask_value[i]) + chip_image[i] * soft_mask_value[i]
[0032] In this function, original_image[i] refers to the pixel
intensity of the two-dimensional image 118, the soft_mask_value[i]
refers to the pixel intensity in the soft blending mask, and the
chip_image[i] refers to the pixel intensity in the 2D binary
mask.
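The soft-mask blending of paragraphs [0030]-[0032] can be sketched as follows; the Gaussian smoothing width is an illustrative choice:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_chip(original, chip, binary_mask, sigma=2.0):
    """Blend a 2D chip into the 2D projection image using a soft mask.

    The soft mask is the binary mask smoothed with a Gaussian filter,
    so the chip fades into the surrounding image instead of showing a
    hard edge. sigma is an assumed smoothing width.
    """
    soft_mask = gaussian_filter(binary_mask.astype(float), sigma=sigma)
    soft_mask = np.clip(soft_mask, 0.0, 1.0)
    # Per-pixel convex combination, matching the blending function above.
    return original * (1.0 - soft_mask) + chip * soft_mask

# Inside the mask the output approaches the chip intensity; far from
# the mask it remains the original projection intensity.
orig = np.full((64, 64), 0.2)
chip = np.full((64, 64), 0.9)
mask = np.zeros((64, 64))
mask[24:40, 24:40] = 1.0
mixed = blend_chip(orig, chip, mask)
```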
[0033] FIG. 3 is a flow diagram of an illustrative process in which
system 100 uses a region of interest (ROI)-enhanced two-dimensional
image to improve the efficiency with which a physician reads
medical image datasets.
[0034] At a step 310, the system 100 outputs an ROI-enhanced 2D
image to a display, such as the graphic user interface 142
described with reference to FIG. 1.
[0035] At a step 320, the system 100 receives input specifying a
spatial x, y coordinate location in the 2D image. For example, the
input can specify a point or region in the 2D image that is of
further interest to the physician/diagnostician.
[0036] At a step 330, the system 100 programmatically determines
three-dimensional image information that would optimally aid the
physician's task of interpreting the specific point or region of
interest. According to an embodiment, the system 100 utilizes a
three-dimensional response image to make this determination. As
previously described, a three-dimensional response image
characterizes the degree to which various points or regions in an
image exhibit characteristics of particular interest. The system
100 identifies the slice of the three-dimensional response image
where the specified spatial point exhibits its local maximum (i.e.,
where the point or region of interest is most blob-like, most
spiculated, etc.).
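Given the clicked location, finding the most responsive slice reduces to an argmax along the depth axis of the response volume; a minimal sketch:

```python
import numpy as np

def best_slice(response, y, x):
    """Return the index of the slice in which point (y, x) has the
    strongest ROI response, i.e. where it looks most blob-like or
    most spiculated."""
    return int(np.argmax(response[:, y, x]))

# Synthetic 30-slice response volume: the point of interest responds
# most strongly in slice 17.
resp = np.zeros((30, 64, 64))
resp[17, 40, 25] = 5.0
print(best_slice(resp, 40, 25))  # → 17
```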
[0037] At a step 340, the system 100 outputs the three-dimensional
image information that includes the spatial point exhibiting the
local maximum to a display. By way of one example, the system 100
outputs the specific slice identified in the previous step. By way
of another example, the system 100 computes a slab image that
includes the spatial point and outputs the slab image to the
display.
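The slab image of step 340 can be sketched as a thin projection over slices surrounding the identified one; the slab half-width and the choice between mean and maximum-intensity projection are illustrative assumptions:

```python
import numpy as np

def slab_image(volume, center_slice, half_width=2, mode="mean"):
    """Thin projection over slices [center - half_width, center + half_width],
    clipped to the volume bounds. half_width and mode are assumed values."""
    lo = max(0, center_slice - half_width)
    hi = min(volume.shape[0], center_slice + half_width + 1)
    block = volume[lo:hi]
    return block.max(axis=0) if mode == "max" else block.mean(axis=0)

vol = np.zeros((10, 8, 8))
vol[5] = 1.0                 # one bright slice
slab = slab_image(vol, 5)    # mean over slices 3..7
print(float(slab[0, 0]))     # → 0.2 (one bright slice out of five)
```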
[0038] To again summarize, the illustrative system and method
effectively increases the efficiency of a physician/diagnostician
(e.g. radiologist) in reading tomography images. Typically,
reviewing the 3D data is time-consuming and labor-intensive for
such personnel. Specifically, in this modality, masses are visible
and sharpest in only one or two slices of the 3D reconstructed
data, which can be part of a large volume of slices. Thus, the
viewer often must review all slices or slabs in the data set. When
the data is projected onto a 2D projection using traditional
methods, structures that exist above or below the object (mass)
tend to obstruct the view, possibly occluding the mass, posing a
significant challenge in identifying such an object in the 2D
projection image. However, if the system can effectively identify
the region of the mass before generating the 2D projection image,
then the projection process can be modified to ignore confusing
structures above and below the mass to produce a much clearer view
in the 2D projection. The end result is a 2D projection in which
the masses are also clearly visible, and generally free of any
obstructions that could occlude a clear view of the object (mass)
of interest. Advantageously, it is contemplated that this
illustrative process can also be adapted and applied to spiculated
masses and calcifications in a manner clear to those of skill.
[0039] Illustratively, the process can operate to first identify
the object of interest in the 3D data, determine the best slice(s)
that reveal this object, segment and extract the region, and then
smoothly merge the result with the traditional 2D projection.
[0040] The difference between a 2D-projected image before and after
processing according to the illustrative system and method is shown
in the respective exemplary display images 400 and 500 of FIGS. 4
and 5. These images are close-up views of a region of interest
containing an object of interest (a suspected tumor and/or mass) in
the center of the image. As shown in the display image 400 of FIG.
4, the object of interest 410 is fuzzy and has poorly defined
(not sharp) boundaries, rendering it sometimes challenging to
identify without close study of the images. Conversely, the
exemplary display image 500 of FIG. 5, which is a projected 2D
image that has undergone the process of the illustrative system and
method, displays the object of interest 510 with more-defined,
sharp boundaries. This renders the object 510 more readily
identifiable by a viewer, thereby increasing diagnostic accuracy,
efficiency and throughput.
[0041] The foregoing has been a detailed description of
illustrative embodiments of the invention. Various modifications
and additions can be made without departing from the spirit and
scope of this invention. Features of each of the various
embodiments described above may be combined with features of other
described embodiments as appropriate in order to provide a
multiplicity of feature combinations in associated new embodiments.
Furthermore, while the foregoing describes a number of separate
embodiments of the apparatus and method of the present invention,
what has been described herein is merely illustrative of the
application of the principles of the present invention. For
example, additional image handling algorithms/processes can be
included in the overall system process to enhance or filter image
information accordingly. Accordingly, this description is meant to
be taken only by way of example, and not to otherwise limit the
scope of this invention.
* * * * *