U.S. patent application number 13/817,730, for an apparatus and method for four dimensional soft tissue navigation, was published on 2013-11-14.
This patent application is currently assigned to VERAN MEDICAL TECHNOLOGIES, INC. The applicants listed for this patent are Troy L. Holsing, Mark Hunter, Christopher Lee, and Marc Wennogle. The invention is credited to Troy L. Holsing, Mark Hunter, Christopher Lee, and Marc Wennogle.
United States Patent Application 20130303887
Kind Code: A1
Holsing; Troy L.; et al.
November 14, 2013

Publication Number: 20130303887
Application Number: 13/817730
Document ID: /
Family ID: 45594602
Publication Date: 2013-11-14

APPARATUS AND METHOD FOR FOUR DIMENSIONAL SOFT TISSUE NAVIGATION
Abstract
A surgical instrument navigation system is provided that
visually simulates a virtual volumetric scene of a body cavity of a
patient from a point of view of a surgical instrument residing in
the cavity of the patient. The surgical instrument navigation
system includes: a surgical instrument; an imaging device which is
operable to capture scan data representative of an internal region
of interest within a given patient; a tracking subsystem that
employs electro-magnetic sensing to capture in real-time position
data indicative of the position of the surgical instrument; a data
processor which is operable to render a volumetric, perspective
image of the internal region of interest from a point of view of
the surgical instrument; and a display which is operable to display
the volumetric perspective image of the patient.
Inventors: Holsing; Troy L. (Golden, CO); Hunter; Mark (St. Louis, MO);
Lee; Christopher (St. Louis, MO); Wennogle; Marc (Centennial, CO)

Applicant:
  Name              City        State  Country
  Holsing; Troy L.  Golden      CO     US
  Hunter; Mark      St. Louis   MO     US
  Lee; Christopher  St. Louis   MO     US
  Wennogle; Marc    Centennial  CO     US

Assignee: VERAN MEDICAL TECHNOLOGIES, INC. (St. Louis, MO)
Family ID: 45594602
Appl. No.: 13/817730
Filed: August 22, 2011
PCT Filed: August 22, 2011
PCT No.: PCT/US11/48669
371 Date: July 25, 2013
Related U.S. Patent Documents

  Application Number  Filing Date
  61375439            Aug 20, 2010
  61375533            Aug 20, 2010
  61375523            Aug 20, 2010
  61375484            Aug 20, 2010
Current U.S. Class: 600/424; 600/109; 600/585

Current CPC Class: A61B 5/065 20130101; A61B 5/066 20130101; A61B 90/39
20160201; A61B 5/418 20130101; A61B 8/12 20130101; A61B 2034/2065
20160201; A61B 1/00009 20130101; A61B 8/0841 20130101; A61B 5/061
20130101; A61B 5/113 20130101; A61B 1/04 20130101; A61B 5/064 20130101;
A61B 34/20 20160201; A61B 2034/2063 20160201; A61B 5/415 20130101; A61B
2034/2068 20160201; A61B 2034/2051 20160201; A61B 1/0005 20130101; A61M
25/09 20130101; G06T 2207/30061 20130101; A61B 1/00094 20130101; A61B
5/062 20130101; A61B 1/2676 20130101; A61B 5/0456 20130101; A61B 5/0066
20130101; A61M 2025/09183 20130101; A61B 2034/2061 20160201

Class at Publication: 600/424; 600/109; 600/585

International Class: A61B 1/267 20060101 A61B001/267; A61B 1/04 20060101
A61B001/04; A61B 8/08 20060101 A61B008/08; A61M 25/09 20060101
A61M025/09; A61B 8/12 20060101 A61B008/12; A61B 5/00 20060101 A61B005/00;
A61B 5/06 20060101 A61B005/06; A61B 1/00 20060101 A61B001/00
Claims
1. An image-guided method comprising directing an endoscope
including a tip sensor to an anatomical position in a patient;
orienting an endoscopic view to a visual orientation of a user;
activating a steering mechanism of the endoscope to agitate the tip
sensor in a plane; and determining the orientation and location of
the tip sensor in the patient and correlating the tip sensor
location and orientation with one or more patient images.
2. The image-guided method of claim 1 wherein the anatomical
position is a branching point of a bronchial tree.
3. The image-guided method of claim 1, wherein the tip sensor is an
angled coil sensor, the coil being disposed at a 45° angle
relative to a core portion of the tip sensor.
4. The image-guided method of claim 1, wherein orientation and
location determination comprise applying a deformation vector
field.
5. The image-guided method of claim 1, wherein the one or more
patient images are derived from an imaging device in conjunction
with a patient tracking device disposed at an external region of
the patient.
6. A method of using a guidewire or other navigated instrument with
one to one rotation to continuously align a virtual display view to
be consistent with an actual bronchoscopic video view.
7. A method of using a video input of the bronchoscope to adjust
the virtual fly-through view to be consistent with a user's normal
perspective.
8. The method of claim 7 wherein video processing and matching
techniques can be used to align the real-time video and the virtual
image.
9. A method of using bronchoscopic video to provide angular
information at a current location to provide targeting or
directional cues to the user.
10. The method of claim 9 wherein the angular information is
derived from the location of patient anatomy in the image and the
relative size of each within the image.
11. A system or apparatus comprising one or more components for
carrying out one or more elements of the method of claim 7.
12. A non-transitory processor-readable medium storing code
representing instructions to cause a processor to perform a
process, the code comprising code to carry out one or more elements
of the method of claim 7.
13. An imaging method comprising tracking the traveled path of an
instrument in a patient and correlating the tracked path with one
or more sets of image data derived from an imaging device in
conjunction with a patient tracking device.
14. The method of claim 13 wherein a respiratory signal or a
heartbeat signal derived from the patient tracking device is used
to gate localization data of the instrument in an anatomical
position of the patient to determine one or more patient airway
models and correlate the instrument position to the image data to
provide a registration of the one or more patient airway models
during a respiratory cycle of the patient.
15. The method of claim 13 wherein the patient airway model
determination comprises the addition of patient airway data to the
image data.
16. The method of claim 15 wherein the patient airway data
comprises branching points or segments of branching points in a
bronchial tree.
17. The method of claim 13, wherein the correlation of the tracked
path comprises applying a deformation vector field.
18. A system or apparatus comprising one or more components for
carrying out one or more elements of the method of claim 13.
19. A non-transitory processor-readable medium storing code
representing instructions to cause a processor to perform a
process, the code comprising code to carry out one or more elements
of the method of claim 13.
20. An endoscope or attachment thereof including one or more offset
devices proximate a port of the endoscope or attachment, the offset
devices capable of securing a navigated guidewire in a
predetermined location.
21. The endoscope or attachment of claim 20 wherein a guidewire of
the endoscope comprises one or more detachable sensors on a
fiducial structure.
22. The endoscope or attachment of claim 20 wherein the endoscope
comprises a sensor affixed proximate to an aspiration needle
disposed at an end of the endoscope or attachment, wherein the
needle tip and the sensor move in conjunction with one another upon
actuation of the endoscope by a user.
23. The endoscope or attachment of claim 20 wherein the endoscope
comprises a sensor affixed proximate to a brush disposed at an end
of the endoscope or attachment, wherein the brush and the sensor
move in conjunction with one another upon actuation of the
endoscope by a user.
24. The endoscope or attachment of claim 20 wherein the endoscope
comprises a sensor affixed proximate to a forceps disposed at an
end of the endoscope or attachment, wherein the forceps and the
sensor move in conjunction with one another upon actuation of the
endoscope by a user.
25. The endoscope or attachment of claim 20, wherein the endoscope
or attachment comprises three or more offset devices.
26. The endoscope or attachment of claim 20 wherein the endoscope
or attachment includes two or more offsets in a series, each offset
being positioned 1 cm from the next.
27. A system or method comprising the use of an endoscope or
attachment of claim 20 in a medical procedure.
28. A method of constructing a three-dimensional model of an airway
or vessel of a patient comprising correlating a three-dimensional
location of an instrument at an internal position of the patient with
image data collected in conjunction with a patient tracking device
disposed on an external position of the patient.
29. The method of claim 28 wherein the construction comprises
recording three-dimensional location of an instrument and
corresponding EBUS video or images to construct a three-dimensional
model of the patient's airway or a lesion therein.
30. The method of claim 28 wherein the construction comprises
recording three-dimensional location and corresponding IVUS video
or images to construct a three-dimensional model of the patient's
vessel or a plaque therein.
31. The method of claim 28 wherein the construction comprises
recording three-dimensional location and corresponding OCT images
to construct a three-dimensional model of the patient's vessel or a
plaque therein.
32. The method of claim 28 wherein the construction comprises the
use of a fiber optic localization (FDL) device disposed on an
instrument positioned within an airway or vessel of the patient and
further comprises generating localization information and shape
sensing information and applying an algorithm to said localization
and shape sensing information to determine the location and
orientation of the instrument within the patient.
33. A system or apparatus comprising one or more components for
carrying out one or more elements of the method of claim 28.
34. A non-transitory processor-readable medium storing code
representing instructions to cause a processor to perform a
process, the code comprising code to carry out one or more elements
of the method of claim 28.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional
Application Ser. Nos. 61/375,439, filed Aug. 20, 2010, 61/375,484,
filed Aug. 20, 2010, 61/375,523, filed Aug. 20, 2010, and
61/375,533, filed Aug. 20, 2010, each of which is hereby
incorporated by reference in their entirety, including any figures,
tables, and drawings.
BACKGROUND
[0002] The invention relates generally to a medical device and
particularly to an apparatus and methods associated with a range of
image guided medical procedures.
[0003] Image guided surgery (IGS), also known as image guided
intervention (IGI), enhances a physician's ability to locate
instruments within anatomy during a medical procedure. IGS can
include 2-dimensional (2-D), 3-dimensional (3-D), and 4-dimensional
(4-D) applications. The fourth dimension of IGS can include
multiple parameters either individually or together such as time,
motion, electrical signals, pressure, airflow, blood flow,
respiration, heartbeat, and other patient measured parameters.
[0004] Existing imaging modalities can capture the movement of
dynamic anatomy. Such modalities include electrocardiogram
(ECG)-gated or respiratory-gated magnetic resonance imaging (MRI)
devices, ECG-gated or respiratory-gated computer tomography (CT)
devices, standard computed tomography (CT), 3D Fluoroscopic images
(Angio-suites), and cinematography (CINE) fluoroscopy and
ultrasound. Multiple image datasets can be acquired at different
times, cycles of patient signals, or physical states of the
patient. The dynamic imaging modalities can capture the movement of
anatomy over a periodic cycle of that movement by sampling the
anatomy at several instants during its characteristic movement and
then creating a set of image frames or volumes.
[0005] A need exists for an apparatus that can be used with such
imaging devices to capture pre-procedural or intra-procedural
images of a targeted anatomical body and use those images
intra-procedurally to help guide a physician to the correct
location of the anatomical body during a medical procedure.
SUMMARY OF THE INVENTION
[0006] A method includes receiving during a first time interval
image data associated with an image of a dynamic body. The image
data includes an indication of a position of a first marker on a
patient tracking device (PTD) coupled to the dynamic body and a
position of a second marker on the PTD. Some registration methods
such as 2D to 3D registration techniques allow for the image data
containing the target or patient anatomy of interest to not contain
the PTD. A registration step is performed to calculate the
transformation from image space to patient space using an
additional dataset to register (i.e., a 2D fluoroscopic set of
images is used to register a 3D fluoroscopic dataset). This
technique is not limited to fluoroscopic procedures, as it can be
implemented in any procedure acquiring 2D images, such as
ultrasound, OCT (optical coherence tomography), EBUS (endobronchial
ultrasound), or IVUS (intravascular ultrasound). This technique
uses the markers that are within multiple 2D images to register the
3D volume that is reconstructed from these 2D images. The
reconstructed 3D volume is smaller than the field of view of the 2D
images, so this technique allows for the PTD markers to be visible
in a subset of the 2D images, but not within the 3D volume. In
certain embodiments, the first marker is coupled to the PTD at a
first location and the second marker is coupled to the PTD at a
second location. A distance between the position of the first
marker and the position of the second marker is determined. During
a second time interval after the first time interval, data
associated with a position of a first localization element coupled
to the PTD at the first location and data associated with a
position of a second localization element coupled to the PTD at the
second location are received. A distance between the first
localization element and the second localization element based on
the data associated with the position of the first localization
element and the position of the second localization element is
determined. A difference is calculated between the distance between
the first marker and the second marker during the first time
interval and the distance between the first localization element
and the second localization element during the second time
interval. In addition, the PTD can be tracked continuously during
the procedure, and a sequence of motion of the PTD that represents
the motion of an organ or the patient's respiratory cycle can be
collected. The sequence of motion can then be analyzed to find
unique, similar points within the dataset, which can then be
grouped.
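By way of illustration only, the frame-selection step described above can be reduced, for a single marker pair, to comparing one scalar distance per pre-acquired frame. The Python sketch below assumes hypothetical array layouts and function names; it is a sketch of the idea, not the patented implementation.

```python
import numpy as np

def select_matching_frame(image_marker_positions, live_element_positions):
    """Pick the pre-acquired frame whose inter-marker distance best
    matches the current inter-element distance (illustrative sketch).

    image_marker_positions: (n_frames, 2, 3) array of the 3D positions
        of the two PTD markers in each pre-acquired image frame.
    live_element_positions: (2, 3) array of the current 3D positions of
        the two localization elements coupled at the marker locations.
    """
    # Distance between the two markers in every pre-acquired frame.
    image_dists = np.linalg.norm(
        image_marker_positions[:, 0] - image_marker_positions[:, 1], axis=1)
    # Distance between the corresponding localization elements right now.
    live_dist = np.linalg.norm(
        live_element_positions[0] - live_element_positions[1])
    # The calculated difference drives frame selection: smallest wins.
    return int(np.argmin(np.abs(image_dists - live_dist)))
```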
[0007] Other objects and features will be in part apparent and in
part pointed out hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The details of the present invention, both as to its
construction and operation can best be understood with reference to
the accompanying drawings, in which like numerals refer to like
parts, and in which:
[0009] FIG. 1 is a schematic illustration of various devices used
with a method according to an embodiment of the invention.
[0010] FIG. 2 is a schematic illustration of various devices used
with a method according to an embodiment of the invention.
[0011] FIG. 3 is a schematic illustrating vector distances on an
apparatus according to an embodiment of the invention.
[0012] FIG. 4A is a schematic illustrating vector distances from a
localization device according to an embodiment of the
invention.
[0013] FIG. 4B is a schematic illustrating vector distances from
image data according to an embodiment of the invention.
[0014] FIG. 5 is a front perspective view of an apparatus according
to an embodiment of the invention.
[0015] FIG. 6 is a graphical representation illustrating the
function of an apparatus according to an embodiment of the
invention.
[0016] FIG. 7 is a flowchart illustrating a method according to an
embodiment of the invention.
[0017] FIG. 8 shows the layout of a system that may be used to
carry out image guided interventions using certain of the present
methods that involve gated datasets.
[0018] FIG. 9 illustrates one example of samples of a periodic
human characteristic signal (specifically, an ECG waveform)
associated, or gated, with images of dynamic anatomy.
[0019] FIG. 10 is a diagram of an exemplary surgical instrument
navigation system in accordance with the present invention;
[0020] FIG. 11 is a flowchart that depicts a technique for
simulating a virtual volumetric scene of a body cavity from a point
of view of a surgical instrument positioned within the patient in
accordance with the present invention;
[0021] FIG. 12 is an exemplary display from the surgical instrument
navigation system of the present invention;
[0022] FIG. 13 is a flowchart that depicts a technique for
synchronizing the display of an indicia or graphical representation
of the surgical instrument with cardiac or respiratory cycle of the
patient in accordance with the present invention; and
[0023] FIG. 14 is a flowchart that depicts a technique for
generating four-dimensional image data that is synchronized with
the patient in accordance with the present invention.
[0024] FIG. 15 is a graph depicting an axis or point about which the
instrument (e.g., a bronchoscope) deflects in a single planar
direction. The graph shows the instrument (e.g., a bronchoscope)
being maneuvered in six different orientations in a 3D localizer
volume, with all orientations converging about a common axis or
point of deflection.
[0025] FIG. 16 is a graph depicting the eigenvalues (e0,e1,e2) for
a moving 3.0 sec PCA (principal component analysis) window over a
data file including 1800 samples. The square wave represents an
on/off "wiggle detector" state based on the algorithm described
herein, and this square wave demonstrates that the algorithm
exhibits no false negatives for the validation test data and that
the seven exemplary "wiggle" periods are clearly matched to the
"on" state of the wiggle detector. The implementation of the
algorithm uses low pass filtering and an appropriate comparator
function to eliminate any false positive traces or spots ("blips")
indicated in FIG. 16.
[0026] FIG. 17 is an image of an exemplary synthetic radiograph in
accordance with the present invention, depicting the historical
instrument position trace.
[0027] FIG. 18 depicts an exemplary curvature warning system in
accordance with the invention described herein.
[0028] FIG. 19 depicts an exemplary real-time respiration
compensation algorithm.
[0029] FIG. 20 depicts additional and alternative exemplary
embodiments of the invention described herein.
[0030] FIG. 21 depicts additional and alternative exemplary
embodiments of the invention described herein.
[0031] FIGS. 22A and 22B depict additional and alternative
exemplary embodiments of the invention described herein.
[0032] FIG. 23 depicts additional and alternative exemplary
embodiments of the invention described herein.
[0033] FIGS. 24A and 24B depict exemplary embodiments of a port
offset device in accordance with the invention described
herein.
[0034] FIG. 25 depicts exemplary embodiments of an actuatable
sensor-equipped forceps device in accordance with the invention
described herein.
DETAILED DESCRIPTION
[0035] The accompanying Figures and this description depict and
describe embodiments of a navigation system (and related methods
and devices) in accordance with the present invention, and features
and components thereof. It should also be noted that any references
herein to front and back, right and left, top and bottom and upper
and lower are intended for convenience of description, not to limit
the present invention or its components to any one positional or
spatial orientation.
[0036] It is noted that the terms "comprise" (and any form of
comprise, such as "comprises" and "comprising"), "have" (and any
form of have, such as "has" and "having"), "contain" (and any form
of contain, such as "contains" and "containing"), and "include"
(and any form of include, such as "includes" and "including") are
open-ended linking verbs. Thus, a method, an apparatus, or a system
that "comprises," "has," "contains," or "includes" one or more
items possesses at least those one or more items, but is not
limited to possessing only those one or more items. For example, a
method that comprises receiving a position of an instrument
reference marker coupled to an instrument; transforming the
position into image space using a position of a non-tissue internal
reference marker implanted in a patient; and superimposing a
representation of the instrument on an image in which the
non-tissue internal reference marker appears possesses at least the
receiving, transforming, and superimposing steps, but is not
limited to possessing only those steps. Accordingly, the method
also covers instances where the transforming includes transforming
the position into image space using a transformation that is based,
in part, on the position of the non-tissue internal reference
marker implanted in the patient, and calculating the transformation
using image space coordinates of the internal reference marker in
the image. The term "use" should be interpreted the same way. Thus,
a calculation that uses certain items uses at least those items,
but also covers the use of additional items.
[0037] Individual elements or steps of the present methods,
apparatuses, and systems are to be treated in the same manner.
Thus, a step that calls for creating a dataset that includes
images, one of the images (a) depicting a non-tissue internal
reference marker, (b) being linked to non-tissue internal reference
marker positional information, and (c) being at least 2-dimensional
covers the creation of at least such a dataset, but also covers the
creation of a dataset that includes images, where each image (a)
depicts the non-tissue internal reference marker, and (b) is linked
to non-tissue internal reference marker positional information.
[0038] The terms "a" and "an" are defined as one or more than one.
The term "another" is defined as at least a second or more. The
term "coupled" encompasses both direct and indirect connections,
and is not limited to mechanical connections.
[0039] Those of skill in the art will appreciate that in the
detailed description below, certain well known components and
assembly techniques have been omitted so that the present methods,
apparatuses, and systems are not obscured in unnecessary
detail.
[0040] An apparatus according to an embodiment of the invention
includes a PTD and two or more markers coupled to the PTD. The
apparatus can also include two or more localization elements
coupled to the PTD proximate the markers. The apparatus is
configured to be coupled to a dynamic body, such as selected
dynamic anatomy of a patient. Dynamic anatomy can be, for example,
any anatomy that moves during its normal function (e.g., the heart,
lungs, kidneys, liver and blood vessels). A processor, such as a
computer, is configured to receive image data associated with the
dynamic body taken during a pre-surgical or pre-procedural first
time interval. The image data can include an indication of a
position of each of the markers for multiple instants in time
during the first time interval. The processor can also receive
position data associated with the localization elements during a
second time interval in which a surgical procedure or other medical
procedure is being performed. The processor can use the position
data received from the localization elements to determine a
distance between the elements for a given instant in time during
the second time interval. The processor can also use the image data
to determine the distance between the markers for a given instant
in time during the first time interval. The processor can then find
a match between an image where the distance between the markers at
a given instant in time during the first time interval is the same
as the distance between the elements associated with those markers
at a given instant in time during the medical procedure, or second
time interval. Additionally, the processor can determine a sequence
of motion of the markers and match this sequence of motion to the
recorded motion of the markers over the complete procedure or
significant period of time. Distance alone between the markers may
not be sufficient to match the patient space to image space in many
instances; it is important for the system to know the direction in
which the markers are moving, and the range and speed of this motion, to find
the appropriate sequence of motion for a complex signal or sequence
of motion by the patient.
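Because distance alone can be ambiguous, one simple way to incorporate direction, range, and speed is to match the recent history of the inter-element signal against the recorded marker-motion signal. The sketch below uses normalized cross-correlation; the function name and signal layout are illustrative assumptions, not the system's actual algorithm.

```python
import numpy as np

def match_motion_sequence(recorded_signal, live_window):
    """Locate a live motion window within the recorded marker-motion
    signal using normalized cross-correlation (illustrative sketch).

    recorded_signal: 1D array, e.g., inter-marker distance sampled over
        the full recorded period (first time interval).
    live_window: 1D array, the recent history of the same quantity
        measured from the localization elements (second time interval).
    """
    r = (recorded_signal - recorded_signal.mean()) / recorded_signal.std()
    v = (live_window - live_window.mean()) / live_window.std()
    n = len(v)
    # Sliding the live window over the recording captures direction,
    # range, and speed of motion, not just instantaneous distance.
    scores = np.array([np.dot(r[i:i + n], v) / n
                       for i in range(len(r) - n + 1)])
    best = int(np.argmax(scores))
    return best, float(scores[best])
```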
[0041] A physician or other healthcare professional can use the
images selected by the processor during a medical procedure
performed during the second time interval. For example, when a
medical procedure is performed on a targeted anatomy of a patient,
such as a heart or lung, the physician may not be able to utilize
an imaging device during the medical procedure to guide him to the
targeted area within the patient. A PTD according to an embodiment
of the invention can be positioned or coupled to the patient
proximate the targeted anatomy prior to the medical procedure, and
pre-procedural images can be taken of the targeted area during a
first time interval. Markers or fiducials coupled to the PTD can be
viewed with the image data, which can include an indication of the
position of the markers during a given path of motion of the
targeted anatomy (e.g., the heart) during the first time interval.
Such motion can be due, for example, to inspiration (i.e.,
inhaling) and expiration (i.e., exhaling) of the patient, or due to
the heart beating. During a medical procedure, performed during a
second time interval, such as a procedure on a heart or lung, the
processor receives data from the localization elements associated
with a position of the elements at a given instant in time during
the medical procedure (or second time interval). The distance
between selected pairs of markers can be determined from the image
data and the distance, range, acceleration, and speed between
corresponding selected pairs of localization elements can be
determined based on the element data for given instants in time.
From multiple image datasets, the range and speed of the markers'
motion can be calculated.
[0042] Because the localization elements are coupled to the PTD
proximate the location of the markers, the distance between a
selected pair of elements can be used to determine an
intra-procedural distance between the pair of corresponding markers
to which the localization elements are coupled. An image from the
pre-procedural image data taken during the first time interval can
then be selected where the distance between the pair of selected
markers in that image corresponds with or closely approximates the
same distance determined using the localization elements at a given
instant in time during the second time interval. This process can
be done continuously during the medical procedure, producing
simulated real-time, intra-procedural images illustrating the
orientation and shape of the targeted anatomy as a catheter,
sheath, needle, forceps, guidewire, fiducial delivery devices,
therapy device (ablation modeling, drug diffusion modeling, etc.),
or similar structure(s) is/are navigated to the targeted anatomy.
Thus, during the medical procedure, the physician can view selected
image(s) of the targeted anatomy that correspond to and simulate
real-time movement of the anatomy. In addition, during a medical
procedure being performed during the second time interval, such as
navigating a catheter or other instrument or component thereof to a
targeted anatomy, the location(s) of a sensor (e.g., an
electromagnetic coil sensor) coupled to the catheter during the
second time interval can be superimposed on an image of a catheter.
The superimposed image(s) of the catheter can then be superimposed
on the selected image(s) from the first time interval, providing
simulated real-time images of the catheter location relative to the
targeted anatomy. This process and other related methods are
described in pending U.S. patent application Ser. No. 10/273,598,
entitled Methods, Apparatuses, and Systems Useful in Conducting
Image Guided Interventions, filed Nov. 8, 2003, the entire
disclosure of which is incorporated herein by reference.
[0043] In one embodiment, a real-time pathway registration is
applied to a pre-acquired dataset that does not contain the PTD. It
will be understood that the pre-acquired dataset can be at only one
cycle of a patient's respiratory, heartbeat, or other path of
motion. In order to optimize the registration of a pre-acquired
dataset that does not contain the PTD, a PTD can be subsequently
applied to the patient, and the PTD signal can be used to collect
registration information throughout full range or path of motion
but only that information that is captured at a similar PTD
orientation, shape, or point along the PTD cycle of motion is used.
This method enhances the registration accuracy by ensuring that the
registration points being used to register are at the same point
during the initial dataset acquisition. In preferred embodiments,
the method uses multiple subsets of the acquired registration data
that are collected based on the PTD signal. These multiple subsets
are then applied against the pre-acquired dataset to find the
optimal registration fit.
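A minimal sketch of this gated point selection follows, assuming the PTD signal has been reduced to a cyclic phase in [0, 1); the names and tolerance are hypothetical.

```python
import numpy as np

def gated_registration_points(points, ptd_phase, reference_phase, tol=0.05):
    """Keep only registration points captured near the PTD phase at
    which the pre-acquired dataset was taken (illustrative sketch).

    points: (n, 3) candidate registration points in patient space.
    ptd_phase: (n,) PTD cycle phase in [0, 1) when each point was sampled.
    reference_phase: phase of the pre-acquired dataset in the same cycle.
    tol: assumed phase tolerance for accepting a point.
    """
    # Circular phase distance, so phase 0.98 is close to phase 0.02.
    d = np.abs(ptd_phase - reference_phase)
    d = np.minimum(d, 1.0 - d)
    return points[d < tol]
```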
[0044] In another embodiment, the device can be integrated with one
or more fiber optic localization (FDL) devices and/or techniques.
In this way, the sensor (such as an EM sensor) provides the 3D
spatial orientation of the device, while the FDL provides shape
sensing of the airway, vessel, pathway, organ, environment and
surroundings. Conventional FDL techniques can be employed. In
various embodiments, for example, the FDL device can be used to
create localization information for the complete pathway or to
refine the localization accuracy in a particular segment of the
pathway. By either using 3D localization information, shape, or
both detected by the FDL device, the system can use a weighted
algorithm between multiple localization devices to determine the
location and orientation of the instrument in the patient. The FDL
device can also be used as or in conjunction with the PTD to track
the patient's motion such as respiration or heartbeat.
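As one hedged illustration of such a weighted algorithm, the sketch below blends an EM position estimate with an FDL-derived estimate using a fixed weight; in practice the weighting could vary along the pathway, and all names here are assumptions.

```python
import numpy as np

def fuse_localization(em_pos, fdl_pos, em_weight=0.7):
    """Blend EM and fiber-optic (FDL) position estimates with a fixed
    weight (a stand-in for the weighted algorithm described above).

    em_pos, fdl_pos: length-3 position estimates from the two sources.
    em_weight: assumed trust in the EM reading; the rest goes to FDL.
    """
    em_pos = np.asarray(em_pos, dtype=float)
    fdl_pos = np.asarray(fdl_pos, dtype=float)
    # A segment where FDL shape sensing is more reliable could lower
    # em_weight; here it is simply a constant.
    return em_weight * em_pos + (1.0 - em_weight) * fdl_pos
```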
[0045] Other aspects involve using a guidewire or other navigated
instrument with one to one rotation to continuously align a virtual
display view to be consistent with the actual bronchoscopic video
view. A similar technique can be used with OCT, IVUS, or EBUS
devices to orient the virtual view to the image captured by the
OCT, IVUS, or EBUS devices.
[0046] Other aspects involve using video input of the bronchoscope
to adjust the virtual "fly-through" view to be consistent with the
user's normal perspective. For example, conventional video
processing and matching techniques can be used to align the
real-time video and the virtual image.
[0047] Other aspects involve using bronchoscopic video to provide
angular information at a current location to provide targeting or
directional cues to the user. Angular information can be derived
from the location of patient anatomy in the image and the relative
size of each within the image. Using information extracted from the
video captured by the bronchoscope, the system can determine the
direction of the display. This can be done using, for example,
translation, rotation, or a combination of both. By comparing the
real-time image captured to the virtual image constructed from the
3D dataset (e.g., CT), the system can use this information to align
the virtual image and/or enhance the system accuracy.
[0048] In another aspect, a high-speed three-dimensional imaging
device, such as an optical coherence tomography (OCT) device, can
be tracked. In accordance with conventional methods, such a device
can only view 1-2 mm below the surface. With an EM sensor attached
in accordance with the systems and methods described herein,
multiple 3D volumes of data can be collected and a larger 3D volume
of collected data can be constructed. Knowing the 3D location and
orientation of the multiple 3D volumes will allow the user to view
a more robust image of, for example, pre-cancerous changes in the
esophagus or colon. This data can also be correlated to
pre-acquired or intra-procedurally acquired CT, fluoroscopic,
ultrasound, or 3D fluoroscopic images to provide additional
information.
[0049] Among several potential enhancements that could be provided
by an endolumenal system as described herein is that a user could
overlay the planned pathway information on to the actual/real-time
video image of the scope or imaging device (such as ultrasound
based device). Additionally, the system and apparatus could provide
a visual cue on the real-time video image showing the correct
direction or pathway to take.
[0050] Additionally, or alternatively, one could use a 5DOF sensor
and a limited or known range of motion of a localization device to
determine its orientation in the field. This is particularly
relevant, for example, in determining which way is up or down or
the overall rotation of an image in 3D space. In bronchoscopy, for
instance, this can be used to orient the bronchoscopic view to the
user's normal expected visual orientation. Because a bronchoscope
is typically only able to move in one plane up and down, the system
can use the 3D location of a tip sensor moving up and down to
determine the sixth degree of freedom. In this implementation, a
user could steer the bronchoscope to a bifurcation or close to a
bifurcation and then perform a motion with the scope (e.g., up and
down) using the thumb control to wiggle or flutter the tip and
sensor. With this motion (i.e., described herein generally as the
"wiggle maneuver"), the system can determine the orientation and
display the correct image. Typically, 5DOF sensing of instrument
tip POSE (position and orientation) determines 5 of the 6 POSE
parameters (x, y, z, pitch, and yaw). In certain applications,
however, such sensing may be unable to determine the instantaneous roll
of the device. This roll determination can be critical in matching
the video coming from the device to the images. The techniques
described herein advantageously allow for users to relate
orientation of a virtual endoscopic "fly-thru" display to actual
video orientation presented by the endoscopic instrument.
[0051] In general, the methods described herein provide for the
user to select a location, most typically at a branching point in
the bronchial tree, and perform a "wiggle" maneuver with the tip of
the device. In preferred embodiments, the wiggle maneuver generally
consists of three steps:
[0052] (i). At the desired branch or other point, the physical or
translational location of the device is substantially secured or held
in place. For example, with a bronchoscope, the user should ensure
that the scope is held securely so it cannot translate relative to
the airway.
[0053] (ii). Perform a tip wiggle in plane by a rhythmic actuation
of the scope steering mechanism. The magnitude of the actuation
should be sufficient for the tip to cover an approximately 1-2 cm
range of motion, but less motion may be sufficient in some
applications or embodiments.
[0054] (iii). Continue the wiggle maneuver, keeping the scope
itself substantially stationary until the systems described herein
and related software define the sixth degree of freedom, and the
orientation of the video display of the scope matches the virtual
"fly-thru".
[0055] Algorithmically, detecting this operation consists of
recognizing a unique signature of motion over time. One such
technique, for example, consists of two parts:
[0056] (a) performance of a continuous PCA over a specified
time window of 5DOF sensor locations. A repeated motion in a plane
by the instrument tip will produce a covariance matrix such that
the foremost eigenvalue (e0) will reflect the variance of a 1 cm-2
cm motion over a given time window. In addition, the secondary and
tertiary eigenvalues (e1 and e2) will reflect a very small
variance, as the tip should preferably be moving in an arc
constrained to the plane defined by the spline mechanism of the
scope.
[0057] (b) once an appropriate PCA signature is detected, the
orientation of the tip is compared to the eigenvector E0, which
represents the historical vector of motion for the instrument tip. An
exemplary way to do this is the dot product of the measured tip
vector V and E0, which encodes the acute angle between them:
r = V · E0
[0058] By way of example and not by way of limitation, in an ideal
wiggle maneuver, the orientation of the tip should show a rhythmic
oscillation of about 90° (the approximate range could be, for
instance, ±45°). This comparison of tip orientation to E0
provides the basis for a determination of the wiggle plane normal
using, for example, a cross-product or Gram-Schmidt
orthogonalization. The physical wiggle plane normal is in a
constant relationship to the coordinate space of the video signal,
and thus can be related (calibrated) thereto.
[0059] Interestingly, this algorithm can be used in a variety of
different modes to detect certain forms of scope motion:
[0060] stationary scope (wherein the e0, e1, and e2 values will be
very small and roughly equal, representing the small-magnitude,
unbiased Gaussian noise of the direct localization measurement);
[0061] scope in motion along a straight line in space, such as when
traversing individual segments (wherein e0 will be very large, with
very small e1 and e2 values, representing a large co-linear
translation in space). In addition, the absolute value of r will be
close to 1, indicating that the orientation of the tip is nearly
collinear with the translation of the tip over time.
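The eigenvalue signatures described in paragraphs [0056] through [0061] can be sketched as follows. This Python fragment assumes tip positions in centimeters, a pre-filtered sample window, and illustrative thresholds; it is a sketch of the detection idea, not the validated wiggle detector.

```python
import numpy as np

def classify_window(tip_positions, tip_vectors, wiggle_var=(0.25, 4.0)):
    """Classify a window of 5DOF tip samples as 'stationary',
    'translation', or 'wiggle' from the PCA eigenvalue signature.

    tip_positions: (n, 3) tip locations over, e.g., a 3.0 s window (cm).
    tip_vectors:   (n, 3) unit tip-direction vectors for the samples.
    wiggle_var: assumed (min, max) bounds on e0 for a 1-2 cm wiggle.
    """
    centered = tip_positions - tip_positions.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(centered.T))
    e0, e1, e2 = evals[::-1]          # sorted descending: e0 >= e1 >= e2
    E0 = evecs[:, -1]                 # eigenvector of dominant motion
    r = tip_vectors @ E0              # r = V . E0 for each sample

    if e0 < 1e-3:
        # e0, e1, e2 all near zero: localization noise only.
        return "stationary"
    planar = e1 < 0.1 * e0 and e2 < 0.1 * e0
    if planar and np.mean(np.abs(r)) > 0.9:
        # Tip orientation nearly collinear with a large translation.
        return "translation"
    if planar and wiggle_var[0] < e0 < wiggle_var[1] and np.ptp(r) > 1.0:
        # Rhythmic in-plane swing: r oscillates roughly between +/-0.7.
        return "wiggle"
    return "unknown"
```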
[0062] In general, the wiggle techniques described herein for
determining direction and orientation, including "up" and "down"
orientation relative to (or independent of) the instrument steering
mechanism, may be used in a range of different endoscopic
applications. Although the above examples and embodiments generally
refer to the use of the wiggle maneuver in connection with
bronchoscopic applications, it will be understood that the
techniques described herein may also be used in other applications
including, but not limited to, enteroscopy, colonoscopy,
sigmoidoscopy, rhinoscopy, proctoscopy, otoscopy, cystoscopy,
gynoscopy, colposcopy, hysteroscopy, falloposcopy, laparoscopy,
arthroscopy, thoracoscopy, amnioscopy, fetoscopy, panendoscopy,
epiduroscopy, and the like. The wiggle techniques described herein
may also be applicable in non-medical endoscopic uses, such as the
internal inspection of complex technical systems, surveillance, and
the like.
[0063] In general, the methods described herein employ successive
approximation techniques of pathway planning, and enhanced
localization accuracy to use traveled pathway information to
continuously, substantially continuously, or serially update
pathway planning and localization accuracy. The limits of automatic
pathway or vessel segments are generally defined, for example, by
the quality of the image data provided or the state the patient may
be in when the image data is acquired. Therefore, in some cases the
pathway segmentation may not extend completely to the target (that
is, the pathway segmentation stops short of the target). One method
to enhance the segmentation involves using the traveled path of a
tracked instrument to add branches and segment(s) to the pathway.
Methods may include multiple approaches described herein such as
recording traveled paths or using the additional path traveled
information to iteratively segment vessels or airways from the
image dataset. Having the additional knowledge that an instrument
actually traveled the path can enable an automatic segmentation
algorithm to extend the vessels and airways that it can calculate
and find. This can be particularly valuable in situations where
airways or vessels may be obstructed or pinched off. This technique
can also be a valuable tool in recording the paths traveled by the
instrument to give an indication to the user of "good" and "bad"
(i.e., correct or incorrect, direct or indirect, etc.) directions
based, for instance, on trial and error.
[0064] According to the various methods and systems described
herein, therefore, in some embodiments real-time pathway
registration uses full 5D and/or 6D information and/or details of
instrument location to enhance registration. These and other
methods can employ distance, proximity, and/or angle information,
for example, to determine the pathway being traversed in order to
optimize localization accuracy. This can be particularly beneficial
at the bifurcation(s) of vessels and airways, where an initial (or
subsequent) selection of the correct direction is important to
accurately navigate the desired pathway. Using both 3D localization
and the orientational and/or directional angle of the device can,
in various embodiments, be helpful in maximizing accuracy in
applications such as an endolumenal navigation system. Advance
knowledge of the various segmented vessels and/or airways in a
procedural or pre-procedural setting, and the orientation and/or
direction of the navigated instrument, can further enhance
accuracy. In certain embodiments, the various parameters (e.g.,
distance, angle, etc.) can be weighted in real-time (or
near-real-time) for improved application or use of the relevant
information. For example, distance information can be used in the
relevant algorithms without taking angle information into account,
or vice versa. Or, for example, the various parameters can be
equally applied (e.g., 50% angle, 50% proximity). Additionally or
alternatively, different percentages can be applied to emphasize or
weigh one parameter over another (e.g., 15% angle, 85% proximity;
20% angle, 80% proximity; 25% angle, 75% proximity; 30% angle, 70%
proximity; 35% angle, 65% proximity; 40% angle, 60% proximity; or
45% angle, 55% proximity; or, alternatively, 15% proximity, 85%
angle; 20% proximity, 80% angle; 25% proximity, 75% angle; 30%
proximity, 70% angle; 35% proximity, 65% angle; 40% proximity, 60%
angle; 45% proximity, 55% angle). In operation, it may be
preferable to more heavily weigh angle information when the
instrument arrives at a branch (e.g., an arterial branch) in order
to facilitate or improve directional decision-making in real-time;
weighting the parameters in this manner, therefore, can further
improve accuracy.
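As a hedged sketch of how such weights might be applied at a bifurcation, the fragment below scores each candidate segmented branch from proximity and angular agreement; the scoring form and weights are illustrative assumptions.

```python
import numpy as np

def pick_branch(tip_pos, tip_dir, branches, w_angle=0.5, w_prox=0.5):
    """Rank candidate segmented branches by a weighted mix of angular
    agreement and proximity (e.g., 50% angle, 50% proximity as above).

    tip_pos: length-3 instrument tip position.
    tip_dir: length-3 unit vector of instrument orientation.
    branches: list of (entry_point, unit_direction) per candidate branch.
    """
    scores = []
    for entry, direction in branches:
        dist = np.linalg.norm(np.asarray(entry) - np.asarray(tip_pos))
        prox_score = 1.0 / (1.0 + dist)                # closer -> higher
        ang_score = float(np.dot(tip_dir, direction))  # aligned -> higher
        scores.append(w_angle * ang_score + w_prox * prox_score)
    # Raising w_angle at a branch point emphasizes directional agreement.
    return int(np.argmax(scores))
```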
[0065] According to one embodiment, the real-time pathway
registration uses path traveled of the instrument location to
enhance registration. Using path traveled to exclude other
potential navigational solutions enhances navigational accuracy.
Many times in patient anatomy, vessel or airway branches may twist
around and come close to one another, but it is obvious from
earlier location(s) recorded along the path traveled that only one
branch can be the correct navigated location.
[0066] According to one particular embodiment, the respiratory
signal derived from the PTD is used to gate the localization
information of the instrument in the airway. This can assist in
determining multiple airway models, for example, by performing a
best fit of the real-time patient airway model to the CT data to
derive the optimal registration and gated period in the patient's
respiratory cycle. Additionally or alternatively, the respiratory
signal can be derived from devices other than the PTD, such as a
device that records the resistance between two locations on the
patient. For example, this method is similar to a variable
potentiometer in that the resistance of the patient changes between
two fixed points as the patient inhales and exhales. Thus, the
resistance can be measured to create a respiratory signal.
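A minimal sketch of turning such a resistance trace into a usable respiratory signal follows; the sampling rate, drift-removal window, and normalization are assumptions made for illustration.

```python
import numpy as np

def respiration_signal(resistance, fs=50.0):
    """Convert a raw resistance trace between two fixed points on the
    patient into a normalized respiratory signal (illustrative sketch).

    resistance: 1D array of resistance samples.
    fs: assumed sampling rate in Hz.
    """
    # Remove slow drift with a ~10 s moving-average baseline.
    w = max(1, int(10 * fs))
    baseline = np.convolve(resistance, np.ones(w) / w, mode="same")
    detrended = resistance - baseline
    # Scale to roughly [-1, 1]; extremes track inhalation/exhalation.
    return detrended / (np.max(np.abs(detrended)) + 1e-12)
```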
[0067] According to another particular embodiment, 3D location
information is used to extend the segmented airway model. The 3D
airway can be extended as the instrument is passed along the airway
by using this location information as an additional parameter to
segment the airway from the CT data. Using an iterative
segmentation process, for instance, the 3D location information of
the instrument can be used to provide seed points, manual
extension, or an additional variable of likelihood of a segmented
vessel or airway existing in the 3D image volume. These added
airways can be displayed in a different format or color (for
example) to indicate to the user that they are extending the
segmented airway using instrument location information.
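One way to feed instrument locations back into an iterative segmentation is to convert the traveled path into seed voxels, as in the sketch below; the mask and voxel conventions are assumptions made for illustration.

```python
import numpy as np

def path_seed_voxels(segmented_mask, path_points_mm, voxel_size_mm):
    """Mark voxels along the instrument's traveled path as seeds for
    re-running the airway segmentation (illustrative sketch).

    segmented_mask: 3D boolean array, the current airway segmentation.
    path_points_mm: (n, 3) tracked tip positions in image space (mm).
    voxel_size_mm: length-3 voxel spacing of the CT volume (mm).
    """
    idx = np.round(np.asarray(path_points_mm) / voxel_size_mm).astype(int)
    idx = np.clip(idx, 0, np.array(segmented_mask.shape) - 1)
    seeds = np.zeros_like(segmented_mask, dtype=bool)
    seeds[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    # Voxels the instrument actually visited but the segmentation missed
    # become high-confidence seeds for the next iteration.
    return seeds & ~segmented_mask
```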
[0068] In general, the embodiments described herein have
applicability in "Inspiration to Expiration"-type CT scan fusion.
According to various methods, the user navigates on the expiration
CT scan for optimal accuracy, while using the inspiration scan to
obtain maximum airway segmentation. In one embodiment, for example,
a user could complete planning and pathway segmentation on an
inspiration scan of the patient. Preferably, a deformation or
vector field is created between at least two datasets. The
deformation or vector field may then be applied to the segmented
vessels and/or airways and the user's planned path and target. In
these and other embodiments, the deformation or vector field can
also be applied to multiple datasets or in a progressive way to
create a moving underlying dataset that matches the patient's
respiratory or cardiac motion.
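For illustration, applying such a deformation or vector field to a planned path or target can be as simple as interpolating the per-axis displacement at each point, as sketched below under an assumed field layout.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_points(points_vox, vector_field):
    """Apply an inspiration-to-expiration deformation field to planned
    path/target points (sketch; the field layout is an assumption).

    points_vox: (n, 3) point coordinates in voxels of the source scan.
    vector_field: (3, X, Y, Z) per-axis displacement, in voxels.
    """
    coords = np.asarray(points_vox, dtype=float).T   # shape (3, n)
    # Trilinearly interpolate the displacement at each point, per axis.
    disp = np.stack([map_coordinates(vector_field[a], coords, order=1)
                     for a in range(3)], axis=1)
    return points_vox + disp
```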
[0069] By way of example, "Inspiration to Expiration" CT fusion
using the lung lobe centroid and vector change to modify an airway
model may also be applicable. In accordance with various
embodiments, this technique may be used to translate and scale each
airway based on the lung lobe change between scans. The lung is
constructed of multiple lobes and these lobes are commonly analyzed
for volume, shape, and translation change. Each lobe changes in a
very different way during the patient's breathing cycle. Using this
information to scale and translate the airways that are located in
each lobe, it is possible to adapt for airway movement. This scaled
airway model can then be linked to the 4D tracking of the patient
as described herein.
[0070] In one preferred embodiment, for example, a cine loop of
ultrasound data is collected in conjunction with the patient's
respiratory cycle information. This can serve to limit registration
point selection, in order to be consistent with the patient's
respiratory cycle that a 3D dataset such as CT, MR, or PET has
acquired. This technique advantageously maximizes registration
accuracy, addressing a major flaw of conventional systems in the
prior art.
[0071] In various aspects, the systems and methods described herein
involve modifying inspiration CT scans to the expiration cycle for
navigation. It is well understood that the patient's airways are
contained within multiple lobes of the lung. It is also understood
that airways significantly change between inspiration and
expiration. In order to have the most accurate map for navigation
it would be beneficial to include the detail of the inspiration
scan, coupled with the ability to navigate it accurately during
expiration, which is the most repeatable point in a patient's
breath cycle. In preferred embodiments, this modification can be
carried out in accordance with the following steps:
[0072] 1) Scan patient at both inspiration and expiration;
[0073] 2) Segment the airways in both the inspiration and
expiration;
[0074] 3) Segment the lung lobes in both the inspiration and
expiration (as the lung lobes are identifiable in both the
inspiration and expiration scans with a high degree of
accuracy);
[0075] 4) Determine a volume difference for each lung lobe between
inspiration and expiration, and use this change to shrink the airway
size from the inspiration to the expiration cycle. Preferably, this
is done for each individual lobe, as the percentage change will
typically be different for each lobe.
[0076] 5) Determine the centroid for each lung lobe and the vector
change in motion from the main carina in both scans. This vector
can then be used to shift the airways that are associated with each
lung lobe. A centroid for the airway can be calculated based on the
segmented branches. Each airway branch in the segmentation includes
a tag that associates it with the respective lung lobe. The central
airway, including the main carina and the initial airway branches
for each lobe, is linked according to the expiration scan location
of these points. Next, a plane can be defined using the main carina
and initial airway branch exits to determine the vector change for
each lobe.
[0077] Among the lobes to modify, for example:
[0078] left inferior lobe--the bottom lobe of the lung on the left
side of the body.
[0079] left superior lobe--the top lobe of the lung on the left
side of the body.
[0080] right inferior lobe--the bottom lobe of the lung on the
right side of the body.
[0081] right middle lobe--the middle lobe of the lung on the right
side of the body.
[0082] right superior lobe--the top lobe of the lung on the right
side of the body.
[0083] Exemplary calculations are as follows:
ExAirwayLIL = Inspiration Airway for the Left Inferior Lobe (LIL) × 70%
(the calculated reduction in volume from inspiration to expiration)
[0084] Determine the Expiration Central Airway points (Main Carina and
Initial Airway branches) based upon the segmentation.
[0085] Shift ExAirwayLIL by the vector distance (3 cm, 45 degrees up
and back from the main carina) that the LIL centroid moved from
inspiration to expiration.
[0086] Preferably, this process is repeated for each lobe. In
preferred embodiments, the completion of 5 lobes will result in a
Navigation Airway Map for the patient.
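The per-lobe scale-and-shift of paragraphs [0075] through [0085] can be sketched as follows. Consistent with the exemplary calculation, the lobe volume change is applied directly as the airway scale factor; the names and conventions are illustrative assumptions.

```python
import numpy as np

def adapt_lobe_airway(insp_points, centroid_insp, centroid_exp,
                      volume_ratio):
    """Scale and translate one lobe's airway points from the inspiration
    scan toward the expiration scan (illustrative sketch).

    insp_points: (n, 3) airway points tagged to this lobe (inspiration).
    centroid_insp / centroid_exp: the lobe centroid in each scan.
    volume_ratio: expiration/inspiration lobe volume (e.g., 0.70).
    """
    pts = np.asarray(insp_points, dtype=float)
    c_in = np.asarray(centroid_insp, dtype=float)
    shift = np.asarray(centroid_exp, dtype=float) - c_in
    # Shrink about the inspiration centroid by the volume change, then
    # apply the centroid's vector shift (e.g., 3 cm, 45 degrees up and
    # back from the main carina).
    return (pts - c_in) * volume_ratio + c_in + shift
```

Repeating this for all five lobes yields the per-patient Navigation Airway Map described above.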
[0087] In various embodiments, the target location for the patient
can be selected in the expiration scan and applied to the
Navigation Airway Map. Alternatively, it may be selected in the
Inspiration scan and adjusted based on the same or similar criteria
as the inspiration airways. In either case, it can be adjusted
individually or linked to the airway via a 3D network and moved in
the same transformation.
[0088] One aspect of the present invention is directed to an
endoscopic port offset device. The device includes, for example, a
snap-on or otherwise affixable feature (that is, offsets) to the
endoscopic (e.g., bronchoscopic) port that is capable of holding a
navigated guidewire or instrument in a known or preset (i.e.,
predetermined) location to maintain device location and free the
physician/user's hand. In one embodiment, for example, the device
includes one or more offset portions that can be adjusted by
combining and/or removing multiple offset segments. In another
embodiment, for example, the device includes one or more offset
portions that can be adjusted by the removal of one or more
removable offset segments separated by perforations (i.e., in a
disposable fashion). In yet another embodiment, the device includes
an offset that is capable of adjustment using a screw mechanism
(i.e., the length of the offset can be adjusted by screwing the
offset in and out). In various embodiments, each offset can be
represented on the navigation screen showing offset distance from
the tip of the endoscope or working channel sheath. The endoscope
or attachment thereof, in various embodiments, may include one or
more offsets, two or more offsets, three or more offsets, four or
more offsets, or five or more offsets. In other embodiments, more
than five offsets may be included (e.g., 6-12, 6-18, or 6-24
offsets, or more).
[0089] Another aspect of the invention is a closed loop system that
allows the navigation system to steer the working channel using
shape memory alloy/metal type materials. An instrument would have
tracking sensors located at the tip for directional guidance as
described herein that drive the micro-actuators along the shaft to
turn corners. Using feedback from multiple sensors along the shaft
it is also possible to determine the maximum points of friction and
adjust the shape of the device to allow for easier insertion. One
simple metric that can be used, for example, is the difference in
the shape or bend of the device to the segmented pathway of the
vessel or airway, described in further detail below.
[0090] Other embodiments include, for example, detachable sensors
to a fiducial structure (e.g., a reduced cost patient pad).
[0091] In accordance with other embodiments, for example, a sensor
as described herein (e.g., an electromagnetic (EM) sensor) is
affixed (preferably permanently affixed, but may also be removable)
to a device or instrument so that both the device or instrument (or
component thereof) and the sensor move together, such that they can
be imaged and viewed. In one embodiment, for example, the device is
an aspiration needle and the needle tip and the sensor move
together. In another embodiment, for example, the device is a
brush, forceps, or forceps tissue capture mechanism and these
components and the sensor move together. In these and other
embodiments, the device may additionally include an actuating
handle (e.g., finger holds) that is coupled with the sensor, thus
allowing movement tracking. These various embodiments
advantageously allow the device (and components thereof) to be
tracked using the sensor, improving overall accuracy and
reliability.
[0092] In one particular embodiment, a sensor (e.g., an EM sensor)
is positioned at or near the tip of a steerable catheter which
further includes a side exiting working channel for forceps,
aspiration needle, a brush, combinations thereof, and the like.
Using the sensor (which in preferred embodiments is a 6DOF sensor
as described herein), the user can have the direction of the side
exiting working channel defined on the navigation screen. That is,
the image plane that is generated is one at a side exiting working
channel or port, as opposed to a point or position distal to the
device. This will advantageously allow easier targeting of lesions
that may not be directly in the airway, but rather partially or
even completely outside of the airway. In accordance with an
exemplary method of using the device, the catheter is steered
slightly past the target to align the side exiting port/working
channel; the sampling component (e.g., forceps, needle, brush) is
then extended out the catheter. The directional aspect of the
instrument can be viewed on the navigation screen and a simulated
device can be shown to demonstrate to the user the tissue that will
be sampled. These applications may be particularly useful in the
sampling of lymph nodes that are outside the patient airways. In
some embodiments, for example, the device may be capable of
creating an endobronchial ultrasound (EBUS)-like view. For example,
an image plane oriented with the working channel plane can be
created and the instrument can be shown sampling the target on this
plane. In various alternative embodiments, the image(s) may be
oriented in a plane or orthogonally.
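By way of illustration, the image plane aligned with a side-exiting working channel can be derived from the 6DOF tip pose as sketched below; the vector conventions are assumptions.

```python
import numpy as np

def side_port_plane(tip_pos, shaft_dir, port_dir):
    """Define the image plane containing both the catheter shaft and the
    side-exiting port direction (illustrative sketch; needs 6DOF roll).

    tip_pos: length-3 sensor position at the catheter tip.
    shaft_dir: unit vector along the catheter shaft.
    port_dir: unit vector pointing out of the side port.
    Returns a point on the plane and its unit normal.
    """
    normal = np.cross(shaft_dir, port_dir)
    normal = normal / np.linalg.norm(normal)
    # A simulated instrument extended from the port stays in this plane,
    # giving an EBUS-like view oriented with the working channel.
    return np.asarray(tip_pos, dtype=float), normal
```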
[0093] Other embodiments include, for example, using an EM sensor as
an LC or energy transmission device. This stored energy could be used
to actuate a sampling device such as forceps or power a diagnostic
sensor.
[0094] Other embodiments include, for example, using an elastic
tube length of scope or steerable catheter to add a sensor (e.g.,
an EM sensor) to a device. This may take the form of a flexible
sheath or tube the length of a bronchoscope or steerable catheter
that can be added to existing conventional instrumentation. In this
way, the conventional device does not have to be modified. For
instance, a very thin-walled device can be slid over the length of
the scope or catheter, and it can be made navigational.
[0095] In accordance with another general aspect, the PTD comprises
a pad that can be placed on the patient and left for a few hours or
a few days. For example, such a device could be placed on a patient
prior to a 3D image acquisition such as CT or MR. The device would
contain EM (or other) sensors and an EM/LC tank circuit that would
allow it to be charged up or turned on once localization was ready to
commence. The device could be wireless and transmit induced voltage
levels back to the system to navigate in a 3D coordinate space as
described herein.
[0096] In various embodiments, the device could either have a
separate fiducial structure in a known orientation to the EM
sensors, have the configuration learned by the system, or have EM
sensors that act as fiducials in the 3D image scan.
[0097] As discussed herein, auto registration of a device is
conducted, at least initially, by finding the PTD. In general, the
PTD needs to be within the 3D volume, but for some devices the 3D
volume may be too small to include both the PTD and the target. To
overcome this, a 2D/3D algorithm can be employed in accordance with
various embodiments, whereby multiple 2D images of a patient are
acquired for a 3D volume reconstruction. The complete PTD (which
may include one, two or multiple (e.g., at least 3) sensors or
fiducials) does not need to be seen in each image; e.g., some
images may only show one sensor or a part thereof, but the entire
collection of 2D images can be compiled and used to find the whole
or entire PTD relative to the target. For example, 180 2D images (or more or fewer, depending on patient/device position, etc.) can be used to construct a complete 3D volume, where only a fraction (e.g., 10-30) of the 2D images includes all or a portion of the PTD.
This facilitates easier device placement and allows the user to
focus on the target, as the PTD and target do not necessarily need
to be in the same initial 3D volume.
[0098] One aspect is directed to recording 3D location and
bronchoscopic video to construct a 3D model of the patient's
airway. This 3D video can be recorded over multiple sessions (e.g.,
weeks between recording) and color, size, and shape analysis/change
can be determined and/or compared for diagnostic purposes. Not only
can airway lumen size and/or shape be compared, but a deformation
or vector field can also be compared for the multiple sessions.
This can be particularly valuable in determining overall lung
function change as well as local changes to muscle and tissue
elasticity.
[0099] Another similar aspect is directed to recording 3D location
and EBUS video or images to construct a 3D model of the patient's
airway or a lesion. Typically, the EBUS image is very small and 2D.
Thus, recording multiple planes of EBUS can be used to create a 3D
image of the lesion, lymph node, or blood vessels. Providing
correlated CT information to the EBUS image can be valuable in
determining the location of structures in the patient.
[0100] Yet another similar aspect is directed to recording 3D
location and IVUS video or images to construct a 3D model of the
patient's blood vessels or plaques. Like EBUS, the IVUS image is
typically very small and 2D. Recording multiple planes of IVUS can
be used to create a 3D image of the blood vessels and
malformations. Providing correlated CT information to the IVUS
image can be valuable in determining the location of structures in
the patient. In accordance with this and other aspects, OCT may additionally or alternatively be employed.
[0101] In general, the methods described herein involve increasing
registration accuracy for Ultrasound (US) to CT or any other 3D
image dataset such as MR, CT-PET, and 3D Ultrasound. Using 4D
tracking of the patient respiratory signal and collecting a cine
loop of ultrasound images, one can maximize the US to CT fusion
accuracy by limiting the point or plane selection for registration
to the correct respiratory cycle that matches the 3D dataset. The
process involves the use of a patient tracker on the patient and a
tracker on the US transducer; using a localizer (such as an EM
localization system) to record a cine loop of US images and match
the images in the cine loop to the respiratory signal.
[0102] Collecting a cine loop of US data for registration is
valuable in that the user does not necessarily have to select a
moving point while the patient is breathing. The user can simply
scroll through the cine loop and, with an indicator of the point in
the respiratory cycle, select the points that are best used for
registration from a static image.
[0103] The user then selects, for example, at least 3 points in the
US & CT datasets, or a plane and at least one point to register
the US space to the 3D CT dataset space. Preferably, the points and
plane are selected at the same respiratory point as the 3D CT
dataset. Normally, for example, the 3D CT dataset would be acquired
at exhalation. Therefore, the user preferably selects registration points at exhalation; otherwise, significant errors can be introduced into the US to CT registration. If a user were to select a plane that was
at a different point in the respiratory cycle, for instance, there
would be significant translation error in the registration. There
would likely also be significant rotational error in the
registration if the points were acquired at different points in the
respiratory cycle.
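One way to realize the registration described above (a sketch, not necessarily the method of this disclosure) is to gate the user-selected points to the respiratory phase of the 3D CT acquisition and then solve the point-based rigid registration with the standard Kabsch method. The phase normalization and tolerance below are assumptions:

    import numpy as np

    def gate_points(points, phases, target_phase, tol=0.05):
        """Keep only points picked at a respiratory phase (normalized 0..1)
        matching the phase of the 3D CT acquisition, e.g., exhalation."""
        points, phases = np.asarray(points, float), np.asarray(phases, float)
        return points[np.abs(phases - target_phase) < tol]

    def rigid_register(us_pts, ct_pts):
        """Least-squares rigid transform (R, t) mapping US-space points onto
        corresponding CT-space points (Kabsch method; N >= 3 non-collinear
        point pairs)."""
        P, Q = np.asarray(us_pts, float), np.asarray(ct_pts, float)
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))   # cross-covariance
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                                # proper rotation
        return R, cq - R @ cp

Selecting all registration points at (or near) the same phase as the CT acquisition avoids the translation and rotation errors noted above.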
[0104] The methods and systems described herein can be expanded to use other patient sensing information, such as cardiac information (heartbeat), either as a single-source 4D signal or in a multiple-sensed-signal approach in which respiration and cardiac data are used together. This is particularly relevant in connection with locations close to the heart or within the heart.
[0105] Preferably, the user selects points that are sufficiently
far apart for the best accuracy. This can be done, for example, by
requiring the user to record a cine loop of data that extends over
the whole patient organ. Preferably, the user is not allowed to
pick multiple points in the same image.
[0106] Another technique for maximizing registration accuracy is a
centroid finding algorithm that can be used for refining point
locations in a local area. Often, a user will want to select a
vessel bifurcation. The vessel bifurcation will be seen as a bright
white location on the CT and US images. An algorithm can be used to
help the user select the optimal center location for these
locations. Once a user selects a point on the image, the local
algorithm can be employed to find similar white voxels that are
connected and, for that shape in the 3D space, refine the point to
the centroid or any other optimal point (such as, for example, the
most anterior or most posterior point).
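A minimal sketch of one plausible centroid-refinement implementation follows (Python; the intensity threshold, 6-connectivity, and voxel budget are illustrative assumptions, not disclosed values):

    import numpy as np
    from collections import deque

    def refine_to_centroid(volume, seed, threshold=300, max_voxels=10000):
        """Grow a 6-connected region of bright voxels (e.g., a vessel
        bifurcation) from the user-picked voxel 'seed' and return the
        centroid of the region in voxel coordinates.  Per the text above,
        the most anterior or posterior voxel could be returned instead."""
        seed = tuple(seed)
        visited = {seed}
        queue = deque([seed])
        region = []
        while queue and len(region) < max_voxels:
            v = queue.popleft()
            region.append(v)
            for axis in range(3):
                for step in (-1, 1):
                    n = list(v)
                    n[axis] += step
                    n = tuple(n)
                    if (n not in visited
                            and all(0 <= n[i] < volume.shape[i] for i in range(3))
                            and volume[n] >= threshold):
                        visited.add(n)
                        queue.append(n)
        return np.array(region, float).mean(axis=0)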
[0107] In general, the systems and methods described herein can be
implemented regardless of the number of sensors that are used. In
some embodiments, serial orientation or positioning of multiple
sensors allows the determination of one or more parameters such as
shape, position, orientation, and mechanical status of a complete
or partial section of guidewire or other device or instrument. For
example, the placement of multiple sensors can assist in
visualizing the shape of the device and any bends in the path by
providing a number of data points on the path (e.g., 8 sensors,
spaced 1 mm apart) to create a 3D shape model of the device.
Various parameters can be used to track past or present movement
and changes in device shape including, for example, elasticity,
bend radius, limiting, and durometer rating of the device material.
These parameters and accompanying data can provide visual cues to
the user during the procedure, for example, when the device has a
certain bend or curvature (based on path or surroundings), e.g., to
provide a notice or warning that the device is on the correct or
incorrect path, or to provide notice regarding, or track, a
particular parameter(s) that the user is interested in. Such a
sensor pathway is generally depicted in FIG. 18, which shows
exemplary curvature warning scenarios in the differently marked
sections or segments.
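The bend or curvature cue can be quantified directly from the serial sensor positions. A hedged Python sketch follows; the circumradius-of-triples estimate and the 5 mm warning threshold are illustrative choices, not disclosed parameters:

    import numpy as np

    def bend_radii(points):
        """Local bend radius at each interior sensor, estimated as the
        circumradius of consecutive position triples (e.g., 8 sensors
        spaced 1 mm apart).  Straight (collinear) runs map to infinity."""
        P = np.asarray(points, float)
        radii = []
        for a, b, c in zip(P[:-2], P[1:-1], P[2:]):
            ab, bc, ac = b - a, c - b, c - a
            area2 = np.linalg.norm(np.cross(ab, ac))   # 2 * triangle area
            if area2 < 1e-12:
                radii.append(np.inf)
            else:
                radii.append(np.linalg.norm(ab) * np.linalg.norm(bc)
                             * np.linalg.norm(ac) / (2.0 * area2))
        return np.array(radii)

    def curvature_warning(points, min_radius_mm=5.0):
        """Flag segments bent tighter than a minimum allowed bend radius."""
        return bend_radii(points) < min_radius_mm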
[0108] In various aspects and embodiments described herein, one can
use the knowledge of the path traveled by the instrument and
segmented airway or vessel from the acquired image (e.g., CT) to
limit the possibilities of where the instrument is located in the
patient. The techniques described herein, therefore, can be
valuable to improve virtual displays for users. Fly-through, fly-above, or other image displays related to segmented paths are commonly dependent upon relative closeness to the segmented path.
For a breathing patient, for example, or a patient with a moving
vessel related to heartbeat, it is valuable to use the path
traveled information to determine where in the 4D patient motion
cycle the system is located within the patient. By comparing the 3D location, the patient's tracked physiological signal (used to determine the point in the 4D patient motion cycle), and the instrument's traveled path, one can determine the optimal location relative to a segmented airway or vessel and use this information to provide the optimal virtual display.
[0109] FIGS. 1 and 2 are schematic illustrations of devices that
can be used in conjunction with, or to perform, various procedures
described herein. As shown in FIG. 1, an apparatus 10 includes a
PTD 20. The PTD 20 can be coupled to a dynamic body B. The dynamic
body B can be, for example, a selected dynamic portion of the
anatomy of a patient. The PTD 20 can be a variety of different
shapes and sizes. For example, in one embodiment the PTD 20 is
substantially planar, such as in the form of a patch that can be
disposed at a variety of locations on a patient's body. Such a PTD
20 can be coupled to the dynamic body with adhesive, straps, hook
and pile, snaps, or any other suitable coupling method. In another
embodiment the PTD can be a catheter type device with a pigtail or
anchoring mechanism that allows it to be attached to an internal
organ or along a vessel.
[0110] Two or more markers or fiducials 22 are coupled to the PTD
20 at selected locations as shown in FIG. 1. The markers 22 are
constructed of a material that can be viewed on an image, such as
an X-ray or CT. The markers 22 can be, for example, radiopaque, and
can be coupled to the PTD 20 using any known methods of coupling
such devices. FIGS. 1 and 2 illustrate the apparatus 10 having four
markers 22, but any number of two or more markers can be used. In
one embodiment the marker or fiducials and the localization element
can be the same device.
[0111] An imaging device 40 can be used to take images of the
dynamic body B while the PTD 20 is coupled to the dynamic body B,
pre-procedurally during a first time interval. As stated above, the
markers 22 are visible on the images and can provide an indication
of a position of each of the markers 22 during the first time
interval. The position of the markers 22 at given instants in time
through a path of motion of the dynamic body B can be illustrated
with the images. The imaging device 40 can be, for example, a
computed tomography (CT) device (e.g., respiratory-gated CT device,
ECG-gated CT device), a magnetic resonance imaging (MRI) device
(e.g., respiratory-gated MRI device, ECG-gated MRI device), an
X-ray device, or any other suitable medical imaging device. In one
embodiment, the imaging device 40 is a computed tomography/positron emission tomography (CT/PET) device that produces a fused CT/PET image
dataset. The imaging device 40 can be in communication with a
processor 30 and send, transfer, copy and/or provide image data
taken during the first time interval associated with the dynamic
body B to the processor 30.
[0112] The processor 30 includes a processor-readable medium
storing code representing instructions to cause the processor 30 to
perform a process. The processor 30 can be, for example, a
commercially available personal computer, or a less complex
computing or processing device that is dedicated to performing one
or more specific tasks. For example, the processor 30 can be a
terminal dedicated to providing an interactive graphical user
interface (GUI). The processor 30, according to one or more
embodiments of the invention, can be a commercially available
microprocessor. Alternatively, the processor 30 can be an
application-specific integrated circuit (ASIC) or a combination of
ASICs, which are designed to achieve one or more specific
functions, or enable one or more specific devices or applications.
In yet another embodiment, the processor 30 can be an analog or
digital circuit, or a combination of multiple circuits.
[0113] The processor 30 can include a memory component 32. The
memory component 32 can include one or more types of memory. For
example, the memory component 32 can include a read only memory
(ROM) component and a random access memory (RAM) component. The
memory component can also include other types of memory that are
suitable for storing data in a form retrievable by the processor
30. For example, electronically programmable read only memory
(EPROM), erasable electronically programmable read only memory
(EEPROM), flash memory, as well as other suitable forms of memory
can be included within the memory component. The processor 30 can
also include a variety of other components, such as for example,
coprocessors, graphic processors, etc., depending upon the desired
functionality of the code.
[0114] The processor 30 can store data in the memory component 32
or retrieve data previously stored in the memory component 32. The
components of the processor 30 can communicate with devices
external to the processor 30 by way of an input/output (I/O)
component (not shown). According to one or more embodiments of the
invention, the I/O component can include a variety of suitable
communication interfaces. For example, the I/O component can
include, for example, wired connections, such as standard serial
ports, parallel ports, universal serial bus (USB) ports, S-video
ports, local area network (LAN) ports, small computer system
interface (SCSI) ports, and so forth. Additionally, the I/O
component can include, for example, wireless connections, such as
infrared ports, optical ports, Bluetooth® wireless ports,
wireless LAN ports, or the like.
[0115] The processor 30 can be connected to a network, which may be
any form of interconnecting network including an intranet, such as
a local or wide area network, or an extranet, such as the World
Wide Web or the Internet. The network can be physically implemented
on a wireless or wired network, on leased or dedicated lines,
including a virtual private network (VPN).
[0116] As stated above, the processor 30 can receive image data
from the imaging device 40. The processor 30 can identify the
position of selected markers 22 within the image data or voxel
space using various segmentation techniques, such as Hounsfield
unit thresholding, convolution, connected component, or other
combinatory image processing and segmentation techniques. The
processor 30 can determine a distance and direction between the
position of any two markers 22 during multiple instants in time
during the first time interval, and store the image data, as well
as the position and distance data, within the memory component 32.
Multiple images can be produced providing a visual image at
multiple instants in time through the path of motion of the dynamic
body. The processor 30 can also include a receiving device or
localization device 34, which is described in more detail
below.
[0117] A deformation field may also be included in the analysis in
various embodiments described herein. For example, the deformation
field can be applied to fuse 3D fluoroscopic images to CT images in
order to compensate for different patient orientations, patient
position, respiration, deformation induced by the catheter or other
instrument, and/or other changes or perturbations that occur due to
therapy delivery or resection or ablation of tissue.
[0118] In some embodiments, for example, real-time respiration
compensation can be determined by applying an
inspiration-to-expiration deformation vector field. In combination
with the PTD respiratory signal, for example, the instrument
location can be calculated using the deformation vector field. A
real-time instrument tip correction vector can be applied to a 3D
localized instrument tip. The real-time correction vector is
computed by scaling an inspiration-to-expiration deformation vector
(found from the inspiration-to-expiration deformation vector field)
based on the PTD respiratory signal. This correction vector can
then be applied to the 3D localized instrument tip. This can
further optimize accuracy during navigation.
[0119] An example of an algorithm for real-time respiration
compensation can be found in FIG. 19. In accordance with this
algorithm, for each l:
[0120] (a) find v_i such that the scalar distance d is minimized;
[0121] (b) compute the correction vector c, wherein c = -v_i*t (t being the scaling factor derived from the PTD respiratory signal); and
[0122] (c) compute l', wherein l' = l + c.
Thus, l' is a respiration-compensated version of l.
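Expressed as a hedged Python sketch (the sampled-field representation and the normalization of the respiratory signal t are assumptions):

    import numpy as np

    def respiration_compensate(l, field_points, field_vectors, t):
        """FIG. 19-style correction of a 3D localized instrument tip l.
        field_points (N, 3) and field_vectors (N, 3) sample the
        inspiration-to-expiration deformation vector field; t is the PTD
        respiratory signal, assumed normalized to 0..1."""
        l = np.asarray(l, float)
        d = np.linalg.norm(field_points - l, axis=1)  # (a) distance to each sample
        v_i = field_vectors[np.argmin(d)]             #     v_i minimizing scalar d
        c = -v_i * t                                  # (b) scaled correction vector
        return l + c                                  # (c) l' = l + c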
[0123] Although FIG. 19 and the above discussion generally relate
to real-time respiration motion, it will be understood that these
calculations and determinations may also be applied to real-time
heartbeat and/or vessel motion compensation, or any other motion of
a dynamic body as described herein. In one embodiment, for example,
the deformation matrix is calculated based upon inspiration and
expiration. In another embodiment, for example, the deformation
matrix is calculated based upon heartbeat. In yet another
embodiment, for example, the deformation matrix is based upon
vessel motion. In these and other embodiments, it is also possible
to extend these calculations and determinations to develop multiple
deformation matrices across multiple patient datasets, by acquiring
the multiple datasets over the course of, for example, a single
heartbeat cycle or a single respiratory cycle.
[0124] Deformation on 2D images can also be calculated based upon
therapeutic change of tissue, changes in Hounsfield units for
images, patient motion compensation during the imaging sequence,
therapy monitoring, and temperature monitoring with fluoroscopic
imaging, among other things. One potential issue with conventional
therapy delivery, for instance, is monitoring the therapy for
temperature or tissue changes. In accordance with the methods
described herein, this monitoring can be carried out using
intermittent fluoroscopic imaging, where the images are compensated
between acquisition times to show very small changes in image
density, which can represent temperature changes or tissue changes
as a result of the therapy and/or navigation.
[0125] In general, it may also be preferable to reduce the level of
radiation that patients are exposed to before or during a procedure
(or pre-procedural analysis) as described herein. One method of
reducing radiation during the acquisition of a 3D fluoroscopic
dataset (or other dataset described herein), for example, is to use
a deformation field between acquired 2D images to reduce the actual
number of 2D images that need to be acquired to create the 3D
dataset. In one particular embodiment, the deformation field is
used to calculate the deformation between images in the acquisition
sequence to produce 2D images between the acquired slices, and
these new slices can be used to calculate the 3D fluoroscopic
dataset. For example, if 180 2D image slices were previously
required, e.g., an image(s) taken every 2 degrees of a 360 degree
acquisition sequence, in accordance with some embodiments 90 2D
images can be acquired over a 360 degree acquisition sequence and
the data from the images that would have ordinarily been acquired
between each slice can be calculated and imported into the 3D
reconstruction algorithm. Thus, the radiation is effectively
reduced by 50%.
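A toy Python sketch of the interpolation step follows; estimating the inter-frame deformation (e.g., by optical flow between the two acquired frames) is assumed to happen elsewhere, and the half-field warp below is one simple choice rather than the disclosed method:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def synthesize_midframe(img, flow):
        """Synthesize the 2D projection between two gantry angles by warping
        the acquired frame 'img' (H x W) along half of the estimated
        frame-to-frame deformation field 'flow' (2 x H x W, in pixels)."""
        h, w = img.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        coords = np.array([yy - 0.5 * flow[0], xx - 0.5 * flow[1]])
        return map_coordinates(img, coords, order=1, mode='nearest')

Every other projection of the sweep can then be synthesized rather than acquired, halving the number of exposures as described above.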
[0126] As shown in FIG. 2, two or more localization elements 24 are
coupled to the PTD 20 proximate the locations of the markers 22 for
use during a medical procedure to be performed during a second time
interval. The localization elements 24 can be, for example,
electromagnetic coils, infrared light emitting diodes, and/or
optical passive reflective markers. The localization elements 24
can also be, or be integrated with, one or more fiber optic
localization (FDL) devices. The markers 22 can include plastic or
non-ferrous fixtures or dovetails or other suitable connectors used
to couple the localization elements 24 to the markers 22. A medical
procedure can then be performed with the PTD 20 coupled to the
dynamic body B at the same location as during the first time
interval when the pre-procedural images were taken. During the
medical procedure, the localization elements 24 are in
communication or coupled to the localization device 34 included
within processor 30. The localization device 34 can be, for
example, an analog to digital converter that measures voltages
induced onto localization coils in the field; creates a digital
voltage reading; and maps that voltage reading to a metric
positional measurement based on a characterized volume of voltages
to millimeters from a fixed field emitter. Position data associated
with the elements 24 can be transmitted or sent to the localization
device 34 continuously during the medical procedure during the
second time interval. Thus, the position of the localization
elements 24 can be captured at given instants in time during the
second time interval. Because the localization elements 24 are
coupled to the PTD 20 proximate the markers 22, the localization
device 34 can use the position data of the elements 24 to deduce
coordinates or positions associated with the markers 22
intra-procedurally during the second time interval. The distance,
range, acceleration, and speed between one or more selected pairs
of localization elements 24 (and corresponding markers 22) can then
be determined and various algorithms can be used to analyze and
compare the distance between selected elements 24 at given instants
in time, to the distances between and orientation among
corresponding markers 22 observed in the pre-operative images.
[0127] An image can then be selected from the pre-operative images
taken during the first time interval that indicates a distance or
is grouped in a similar sequence of motion between corresponding
markers 22 at a given instant in time, that most closely
approximates or matches the distance or similar sequence of motion
between the selected elements 24. The process of comparing the
distances is described in more detail below. Thus, the apparatus 10
and processor 30 can be used to provide images corresponding to the
actual movement of the targeted anatomy during the medical
procedure being performed during the second time interval. The
images illustrate the orientation and shape of the targeted anatomy
during a path of motion of the anatomy, for example, during
inhaling and exhaling.
[0128] FIG. 3 illustrates an example set of distances or vectors d1
through d6 between a set of markers 122, labeled m1 through m9 that
are disposed at spaced locations on a PTD 120. As described above,
pre-procedure images can be taken of a dynamic body for which the
PTD 120 is to be coupled during a first time interval. The
distances between the markers can be determined for multiple
instants in time through the path of motion of the dynamic body.
Then, during a medical procedure, performed during a second time
interval, localization elements (not shown in FIG. 3) coupled
proximate to the location of markers 122 can provide position data
for the elements to a localization device (not shown in FIG. 3).
The localization device can use the position data to determine
distances or vectors between the elements for multiple instants in
time during the medical procedure or second time interval.
[0129] FIG. 4A shows an example of distance or vector data from the
localization device. Vectors a1 through a6 represent distance data
for one instant in time and vectors n1 through n6 for another
instant in time, during a time interval from a to n. As previously
described, the vector data can be used to select an image from the
pre-procedural images that includes distances between the markers
m1 through m9 that correspond to or closely approximate the
distances a1 through a6 for time a, for example, between the
localization elements. The same process can be performed for the
vectors n1 through n6 captured during time n.
[0130] One method of selecting the appropriate image from the
pre-procedural images is to execute an algorithm that can sum all
of the distances a1 through a6 and then search for and match this
sum to an image containing a sum of all of the distances d1 through
d6 obtained pre-procedurally from the image data that is equal to
the sum of the distances a1 through a6. When the difference between
these sums is equal to zero, the relative position and orientation
of the anatomy or dynamic body D during the medical procedure will
substantially match the position and orientation of the anatomy in
the particular image. The image associated with distances d1
through d6 that match or closely approximate the distances a1
through a6 can then be selected and displayed. For example, FIG. 4B
illustrates examples of pre-procedural images, Image a and Image n,
of a dynamic body D that correspond to the distances a1 through a6
and n1 through n6, respectively. An example of an algorithm for
determining a match is as follows:
[0131] Does Σa_i = Σd_i (i = 1 to 6 in this example)? OR
[0132] Does Σ(a_i - d_i) = 0 (i = 1 to 6 in this example)?
If yes to either of these, then the image is a match to the vector
or distance data obtained during the medical procedure.
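As a minimal Python sketch of this selection rule (in practice the minimum absolute difference would be taken, since a difference of exactly zero is rare):

    import numpy as np

    def select_matching_image(a, d_per_image):
        """Pick the pre-procedural image whose marker distances d1..d6 best
        match the measured element distances a1..a6, i.e., minimize
        |sum(a_i) - sum(d_i)| (zero difference being an exact match)."""
        sums = np.array([np.sum(d) for d in d_per_image])
        return int(np.argmin(np.abs(sums - float(np.sum(a)))))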
[0133] FIG. 5 illustrates an apparatus 210 according to an
embodiment of the invention. The apparatus 210 includes a tubular
shaped PTD 220 that can be constructed with a rigid material or,
alternatively, a flexible and/or stretchable material. In one
embodiment, for example, the PTD 220 is substantially rigid in
structure. In another embodiment, for example, the PTD 220 has a
flexible or stretchable structure. The PTD 220 can be positioned
over a portion of a patient's body, such as around the upper or
lower torso of the patient. In the embodiments in which the PTD 220
is constructed with a stretchable and/or flexible material, for
instance, the stretchability of the PTD 220 allows the PTD 220 to
at least partially constrict some of the movement of the portion of
the body for which it is coupled. The apparatus 210 further
includes multiple markers or fiducials 222 coupled to the PTD 220
at spaced locations. A plurality of localization elements 224 are
removably coupled proximate to the locations of markers 222, such
that during a first time interval as described above, images can be
taken without the elements 224 being coupled to the PTD 220. The
localization elements need not be removably coupled. For example,
the elements can be fixedly coupled to the PTD. In addition, the
elements can be coupled to the PTD during the pre-procedure
imaging.
[0134] FIG. 6 is a graphical illustration indicating how the
apparatus 210 (shown without localization elements 224) can move
and change orientation and shape during movement of a dynamic body,
such as a mammalian body M. The graph is one example of how the
lung volume can change during inhalation (inspiration) and
exhalation (expiration) of the mammalian body M. The corresponding
changes in shape and orientation of the apparatus 210 during
inhalation and exhalation are also illustrated. The six markers 222
shown in FIG. 5 are labeled a, b, c, d, e, and f. As described
above, images of the apparatus 210 can be taken during a first time interval. The images can include an indication of the relative position of each of the markers 222; that is, the markers 222 are visible in the images, and the position of each marker 222 can then be
observed over a period of time. A distance between any two markers
222 can then be determined for any given instant of time during the
first time interval. For example, a distance X between markers a
and b is illustrated, and a distance Y between markers b and f is
illustrated. These distances can be determined for any given
instant in time during the first time interval from an associated
image that illustrates the position and orientation of the markers
222. As illustrated, during expiration of the mammalian body M at
times indicated as A and C, the distance X is smaller than during
inspiration of the mammalian body M, at the time indicated as B.
Likewise, the distance Y is greater during inspiration than during
expiration. The distance between any pair of markers 222 can be
determined and used in the processes described herein. Thus, the
above embodiments are merely examples of possible pair selections.
For example, a distance between a position of marker e and a
position of marker b may be determined. In addition, multiple pairs
or only one pair may be selected for a given procedure.
[0135] FIG. 7 is a flowchart illustrating a method according to an
embodiment of the invention. A method 50 includes at step 52
receiving image data during a pre-procedural or first time
interval. As discussed above, images are taken of a dynamic body
using an appropriate imaging modality (e.g., CT Scan, MRI, etc.).
The image data is associated with one or more images taken of a PTD
(as described herein) coupled to a dynamic body, where the PTD
includes two or more markers coupled thereto. In other words, the
image data of the dynamic body is correlated with image data
related to the PTD. The one or more images can be taken using a
variety of different imaging modalities as described previously.
The image data can include an indication of a position of a first
marker and an indication of a position of a second marker, as
illustrated at step 54. The image data can include position data
for multiple positions of the markers during a range or path of
motion of the dynamic body over a selected time interval. As
described above, the image data can include position data
associated with multiple markers, however, only two are described
here for simplicity. A distance between the position of the first
marker and the position of the second marker can be determined for
multiple instants in time during the first time interval, at step
56. As also described above, the determination can include
determining the distance based on the observable distance between
the markers on a given image. The image data, including all of the
images received during the first time interval, the position, and
the distance data can be stored in a memory and/or recorded at step
58.
[0136] Then at step 60, during a second time interval, while
performing a medical procedure on the patient with the PTD
positioned on the patient at substantially the same location,
position data can be received for a first localization element and
a second localization element. The localization elements can be
coupled to the PTD proximate the locations of the markers, such
that the position data associated with the elements can be used to
determine the relative position of the markers in real-time during
the medical procedure. The position data of the elements can be
stored and/or recorded at step 62.
[0137] A distance between the first and second localization
elements can be determined at step 64. Although only two
localization elements are described, as with the markers, position
data associated with more than two localization elements can be
received and the distances between the additional elements can be
determined.
[0138] The next step is to determine which image from the one or
more images taken during the first time interval represents the
relative position and/or orientation of the dynamic body at a given
instant in time during the second time interval or during the
medical procedure. To determine this, at step 66, the distance
between the positions of the first and second localization elements
at a given instant in time during the second time interval are
compared to the distance(s) determined in step 56 between the
positions of the first and second markers obtained with the image
data during the first time interval.
[0139] An image can be selected from the first time interval that
best represents the same position and orientation of the dynamic
body at a given instant in time during the medical procedure. To do
this, the distance between a given pair of localization elements during the second time interval is used to
select the image that contains the same distance between the same
given pair of markers from the image data received during the first
time interval. This can be accomplished, for example, by executing
an algorithm to perform the calculations. When there are multiple
pairs of markers and localization elements, the algorithm can sum
the distances between all of the selected pairs of elements for a
given instant in time during the second time interval and sum the
distances between all of the associated selected pairs of markers
for each instant in time during the first time interval when the
pre-procedural image data was received.
[0140] When an image is found that provides the sum of distances
for the selected pairs of markers that is substantially the same as
the sum of the distances between the localization elements during
the second time interval, then that image is selected at step 68.
The selected image can then be displayed at step 70. The physician
can then observe the image during the medical procedure on a
targeted portion of the dynamic body. Thus, during the medical
procedure, the above process can be continuously executed such that
multiple images are displayed and images corresponding to real-time
positions of the dynamic body can be viewed.
[0141] FIG. 8 shows one embodiment of a system (system 100) that
includes components that can be used to perform image guided
interventions using a gated imaging modality, such as ECG-gated
MRI, or ECG-gated CT. The figure depicts a patient 10 positioned on
an operating table 12 with a physician 14 performing a medical
procedure on him.
[0142] Specifically, FIG. 8 depicts physician 14 steering a medical
instrument 16 through the patient's internal anatomy in order to
deliver therapy. In this particular instance, instrument 16 is
depicted as a catheter entering the right atrium by way of the
inferior vena cava preceded by a femoral artery access point;
however, the present systems are not limited to catheter use
indications. The position of virtually any instrument may be
tracked as discussed below and a representation of it superimposed
on the proper image, consistent with the present methods,
apparatuses, and systems. An "instrument" is any device controlled
by physician 14 for the purpose of delivering therapy, and includes
needles, guidewires, stents, filters, occluders, retrieval devices,
imaging devices (such as OCT, EBUS, IVUS, and the like), and leads.
Instrument 16 is fitted with one or more instrument reference
markers 18. A tracker 20 (which is sometimes referred to in the art
as a "tracking system") is configured to track the type of
reference marker or markers coupled to instrument 16. Tracker 20
can be any type of tracking system, including but not limited to an
electromagnetic tracking system. An example of a suitable
electromagnetic tracking system is the AURORA electromagnetic
tracking system, commercially available from Northern Digital Inc.
in Waterloo, Ontario Canada. If tracker 20 is an electromagnetic
tracking system, element 20 would represent an electromagnetic
field generator that emits a series of electromagnetic fields
designed to engulf patient 10, and reference marker or markers 18
coupled to medical instrument 16 could be coils that would receive
an induced voltage that could be monitored and translated into a
coordinate position of the marker(s).
[0143] As noted herein, a variety of instruments and devices can be
used in conjunction with the systems and methods described herein.
In one embodiment, for example, an angled coil sensor is employed
during the targeted navigation. In accordance with this embodiment,
for example, instead of using a conventional wire sensor wrapped at about a 90° angle (i.e., roughly perpendicular) to the axial length (or core) of the sensor, the coil is wrapped at an acute angle (i.e., the angle is less than about 90°) relative to the axial length of the sensor. In one embodiment, the coil is positioned (e.g., wrapped) at an angle of from about 30° to about 60° relative to the axial length. In one preferred embodiment, the coil is positioned at about a 45° angle relative to the axial length. The positioning of the coil in
accordance with the exemplary embodiments described herein
advantageously provides a directional vector that is not parallel
with the sensor core. Thus, the physical axis is different and, as
the sensor moves, this additional directional vector can be
quantified and used to detect up and down (and other directional)
movement. This motion can be captured over time as described herein
to determine orientation and prepare and display the more accurate
images.
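To see why the tilted winding helps, consider a hedged Python sketch of the measured axis; the frame conventions and the meaning of the tilt angle here are illustrative assumptions:

    import numpy as np

    def sensing_axis(core_dir, roll_deg, tilt_deg=45.0):
        """Direction measured by a coil whose magnetic axis sits tilt_deg
        away from the catheter core.  A perpendicular winding measures the
        core direction itself and is blind to roll; the tilted axis sweeps
        a cone as the device rolls, making roll (and hence up/down motion
        of a curved tip) observable over time."""
        core = np.asarray(core_dir, float)
        core = core / np.linalg.norm(core)
        helper = np.array([1.0, 0.0, 0.0])
        if abs(core @ helper) > 0.9:               # avoid near-parallel helper
            helper = np.array([0.0, 1.0, 0.0])
        perp = np.cross(core, helper)
        perp = perp / np.linalg.norm(perp)
        perp2 = np.cross(core, perp)
        roll, tilt = np.deg2rad(roll_deg), np.deg2rad(tilt_deg)
        radial = np.cos(roll) * perp + np.sin(roll) * perp2
        return np.cos(tilt) * core + np.sin(tilt) * radial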
[0144] An external reference marker 22 can be placed in a location
close to the region of the patient where the procedure is to be
performed, yet in a stable location that will not move (or that
will move a negligible amount) with the patient's heart beat and
respiration. If patient 10 is securely fixed to table 12 for the
procedure, external reference marker 22 (which may be described as
"static") can be affixed to table 12. If patient 10 is not
completely secured to table 12, external reference marker 22 can be
placed on a region of the back of patient 10 exhibiting the least
amount of movement. Tracker 20 can be configured to track external
reference marker 22.
[0145] One or more non-tissue internal reference markers 24 can be
placed in the gross region where the image guided navigation will
be carried out. Non-tissue internal reference marker(s) 24 should
be placed in an anatomic location that exhibits movement that is
correlated with the movement of the anatomy intended for image
guided navigation. This location will be internal to the patient,
in the gross location of the anatomy of interest.
[0146] Medical instrument 16, instrument reference marker(s) 18,
external reference marker 22, and non-tissue internal reference
marker(s) 24 can be coupled to converter 26 of system 100.
Converter 26, one example of which may be referred to in the art as
a break-out box, can be configured to convert analog measurements
received from the reference markers and tracker 20 into digital
data understandable by image guidance computing platform 30, and
relay that data to image guidance computing platform 30 to which
converter 26 can be coupled. Image guidance computing platform 30
can take the form of a computer, and may include a monitor on which
a representation of one or more instruments used during the IGI can
be displayed over an image of the anatomy of interest.
[0147] System 100 also includes a periodic human characteristic
signal monitor, such as ECG monitor 32, which can be configured to
receive a periodic human characteristic signal. For example, ECG
monitor 32 can be configured to receive an ECG signal in the form
of the ECG data transmitted to it by ECG leads 34 coupled to
patient 10. The periodic human characteristic signal monitor (e.g.,
ECG monitor 32) can also be configured to relay a periodic human
characteristic signal (e.g., ECG data) to image guidance computing
platform 30, to which it can be coupled.
[0148] Prior to the start of the image guided intervention,
non-tissue internal reference marker(s) 24 (but not necessarily static external reference marker 22) should be placed in the gross
region of interest for the procedure. After placement of non-tissue
internal reference marker(s) 24, patient 10 is to be scanned with
an imaging device, such as gated scanner 40, and the resulting
gated image dataset transferred to image guidance computing
platform 30, to which the imaging device is coupled and which can
reside in the operating or procedure theatre. Examples of suitable
imaging devices, and more specifically suitable gated scanners,
include ECG-gated MRI scanners and ECG-gated CT scanners. A
hospital network 50 may be used to couple gated scanner 40 to image
guidance computing platform 30.
[0149] The imaging device (e.g., gated scanner 40) can be
configured to create a gated dataset that includes pre-operative
images, one or more of which (up to all) are taken using the
imaging device and are linked to a sample of a periodic human
characteristic signal (e.g., a sample, or a phase, of an ECG
signal). Once patient 10 is scanned using the imaging device and
the gated dataset is transferred to and received by image guidance
computing platform 30, patient 10 can be secured to operating table
12 and the equipment making up system 100 (e.g., tracker 20,
converter 26, image guidance computing platform 30, ECG monitor 32,
and gated scanner 40) set up as shown in FIG. 9. Information can
then flow among the system 100 components.
[0150] At this point, a gated dataset created by gated scanner 40
resides on image guidance computing platform 30. FIG. 9 highlights
the relationship between the samples (S1 . . . Sn) and the images
(I1 . . . In) that were captured by gated scanner 40. Designations
P, Q, R, S, and T are designations well known in the art; they
designate depolarizations and re-polarizations of the heart. Gated
scanner 40 essentially creates an image of the anatomy of interest
at a particular instant in time during the anatomy's periodic
movement. Image I1 corresponds to the image that was captured at
the S1 moment of patient 10's ECG cycle. Similarly, I2 is
correlated with S2, and In with Sn.
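In code form, the display-time lookup for such a gated dataset might resemble the following Python sketch; the 0..1 phase normalization within the R-R interval is an assumption:

    import numpy as np

    def select_gated_image(images, sample_phases, live_phase):
        """Images I1..In were each captured at ECG samples S1..Sn; return
        the image whose acquisition phase is closest to the patient's
        current ECG phase."""
        phases = np.asarray(sample_phases, float)
        return images[int(np.argmin(np.abs(phases - live_phase)))]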
[0151] FIG. 10 is a diagram of another exemplary surgical
instrument navigation system 10. In accordance with one aspect of
the present invention, the surgical instrument navigation system 10
is operable to visually simulate a virtual volumetric scene within
the body of a patient, such as an internal body cavity, from a
point of view of a surgical instrument 12 residing in the cavity of
a patient 13. To do so, the surgical instrument navigation system
10 is primarily comprised of a surgical instrument 12, a data
processor 16 having a display 18, and a tracking subsystem 20. The
surgical instrument navigation system 10 may further include (or be accompanied by) an imaging device 14 that is operable to provide
image data to the system.
[0152] The surgical instrument 12 is preferably a relatively
inexpensive, flexible and/or steerable catheter that may be of a
disposable type. The surgical instrument 12 is modified to include
one or more tracking sensors that are detectable by the tracking
subsystem 20. It is readily understood that other types of surgical
instruments (e.g., a guide wire, a needle, forceps, a pointer
probe, a stent, a seed, an implant, an endoscope, an energy
delivery device, a therapy delivery device, etc.) are also within
the scope of the present invention. It is also envisioned that at
least some of these surgical instruments may be wireless or have
wireless communications links. It is also envisioned that the
surgical instruments may encompass medical devices which are used
for exploratory purposes, testing purposes or other types of
medical procedures.
[0154] Referring to FIG. 11, the imaging device 14 is used to
capture volumetric scan data 32 representative of an internal
region of interest within the patient 13. The three-dimensional
scan data is preferably obtained prior to surgery on the patient
13. In this case, the captured volumetric scan data may be stored
in a data store associated with the data processor 16 for
subsequent processing. However, one skilled in the art will readily
recognize that the principles of the present invention may also
extend to scan data acquired during surgery. It is readily
understood that volumetric scan data may be acquired using various
known medical imaging devices 14, including but not limited to a
magnetic resonance imaging (MRI) device, a computed tomography (CT)
imaging device, a positron emission tomography (PET) imaging
device, a 2D or 3D fluoroscopic imaging device, and 2D, 3D or 4D
ultrasound imaging devices. In the case of a two-dimensional
ultrasound imaging device or other two-dimensional image
acquisition device, a series of two-dimensional data sets may be
acquired and then assembled into volumetric data as is well known
in the art using a two-dimensional to three-dimensional
conversion.
[0155] The multi-dimensional imaging modalities described herein
may also be coupled with digitally reconstructed radiography (DRR)
techniques. In accordance with a fluoroscopic image acquisition,
for example, radiation passes through a physical media to create a
projection image on a radiation-sensitive film or an electronic
image intensifier. Given a 3D or 4D dataset as described herein,
for example, a simulated image can be generated in conjunction with
DRR methodologies. DRR is generally known in the art, and is
described, for example, by Lemieux et al. (Med. Phys. 21(11),
November 1994, pp. 1749-60).
[0156] When a DRR image is created, a fluoroscopic image is formed
by computationally projecting volume elements, or voxels, of the 3D
or 4D dataset onto one or more selected image planes. Using a 3D or
4D dataset of a given patient as described herein, for example, it
is possible to generate a DRR image that is similar in appearance
to a corresponding patient image. This similarity can be due, at
least in part, to similar intrinsic imaging parameters (e.g.,
projective transformations, distortion corrections, etc.) and
extrinsic imaging parameters (e.g., orientation, view direction,
etc.). The intrinsic imaging parameters can be derived, for
instance, from the calibration of the equipment. Advantageously,
this provides another method to see the up-and-down (and other
directional) movement of the instrument. This arrangement further
provides the ability to see how the device moves in an image(s),
which translates to improved movement of the device in a patient.
An exemplary pathway in accordance with the disclosure herein can
be seen in FIG. 17.
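A deliberately simplified DRR sketch in Python (a real DRR uses the fluoroscope's calibrated projective geometry and distortion model; this assumes an axis-aligned parallel projection):

    import numpy as np

    def simple_drr(volume, axis=0):
        """Sum voxel attenuation of a 3D dataset (or one phase of a 4D
        dataset) along one axis and map the line integrals to a film-like
        intensity via a Beer-Lambert-style response."""
        line_integrals = np.asarray(volume, float).sum(axis=axis)
        return np.exp(-line_integrals / (line_integrals.max() + 1e-12))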
[0157] A dynamic reference frame 19 is attached to the patient
proximate to the region of interest within the patient 13. To the
extent that the region of interest is a vessel or a cavity within
the patient, it is readily understood that the dynamic reference
frame 19 may be placed within the patient 13. To determine its
location, the dynamic reference frame 19 is also modified to
include tracking sensors detectable by the tracking subsystem 20.
The tracking subsystem 20 is operable to determine position data
for the dynamic reference frame 19 as further described below.
[0158] The volumetric scan data is then registered as shown at 34.
Registration of the dynamic reference frame 19 generally relates
information in the volumetric scan data to the region of interest
associated with the patient. This process is referred to as
registering image space to patient space. Often, the image space
must also be registered to another image space. Registration is
accomplished through knowledge of the coordinate vectors of at
least three non-collinear points in the image space and the patient
space.
[0159] Registration for image guided surgery can be completed by
different known techniques. First, point-to-point registration is
accomplished by identifying points in an image space and then
touching the same points in patient space. These points are
generally anatomical landmarks that are easily identifiable on the
patient. Second, surface registration involves the user's
generation of a surface in patient space by either selecting
multiple points or scanning, and then accepting the best fit to
that surface in image space by iteratively calculating with the
data processor until a surface match is identified. Third, repeat
fixation devices entail the user repeatedly removing and replacing
a device (i.e., dynamic reference frame, etc.) in known relation to
the patient or image fiducials of the patient. Fourth, automatic registration is accomplished by attaching the dynamic reference frame to the patient prior to acquiring image data. It is envisioned that other
known registration procedures are also within the scope of the
present invention, such as that disclosed in U.S. Ser. No.
09/274,972, filed on Mar. 23, 1999, entitled "NAVIGATIONAL GUIDANCE
VIA COMPUTER-ASSISTED FLUOROSCOPIC IMAGING", which is hereby
incorporated by reference.
[0160] FIG. 12 illustrates another type of secondary image 28 which
may be displayed in conjunction with the primary perspective image
38. In this instance, the primary perspective image is an interior
view of an air passage within the patient 13. The secondary image
28 is an exterior view of the air passage which includes an indicia
or graphical representation 29 that corresponds to the location of
the surgical instrument 12 within the air passage. In FIG. 12, the
indicia 29 is shown as crosshairs. It is envisioned that other
indicia may be used to signify the location of the surgical
instrument in the secondary image. As further described below, the
secondary image 28 is constructed by superimposing the indicia 29
of the surgical instrument 12 onto the manipulated image data
38.
[0161] Referring to FIG. 13, the display of an indicia of the
surgical instrument 12 on the secondary image may be synchronized
with an anatomical function, such as the cardiac or respiratory
cycle, of the patient. In certain instances, the cardiac or
respiratory cycle of the patient may cause the surgical instrument
12 to flutter or jitter within the patient. For instance, a
surgical instrument 12 positioned in or near a chamber of the heart
will move in relation to the patient's heart beat. In these
instances, the indicia of the surgical instrument 12 will likewise
flutter or jitter on the displayed image 40. It is envisioned that
other anatomical functions which may affect the position of the
surgical instrument 12 within the patient are also within the scope
of the present invention.
[0162] To eliminate the flutter of the indicia on the displayed
image 40, position data for the surgical instrument 12 is acquired
at a repetitive point within each cycle of either the cardiac cycle
or the respiratory cycle of the patient. As described above, the
imaging device 14 is used to capture volumetric scan data 42
representative of an internal region of interest within a given
patient. A secondary image may then be rendered 44 from the
volumetric scan data by the data processor 16.
[0163] In order to synchronize the acquisition of position data for
the surgical instrument 12, the surgical instrument navigation
system 10 may further include a timing signal generator 26. The
timing signal generator 26 is operable to generate and transmit a
timing signal 46 that correlates to at least one of (or both) the
cardiac cycle or the respiratory cycle of the patient 13. For a
patient having a consistent rhythmic cycle, the timing signal might
be in the form of a periodic clock signal. Alternatively, the
timing signal may be derived from an electrocardiogram signal from
the patient 13. One skilled in the art will readily recognize other
techniques for deriving a timing signal that correlate to at least
one of the cardiac or respiratory cycle or other anatomical cycle
of the patient.
[0164] As described above, the indicia of the surgical instrument
12 tracks the movement of the surgical instrument 12 as it is moved
by the surgeon within the patient 13. Rather than display the
indicia of the surgical instrument 12 on a real-time basis, the
display of the indicia of the surgical instrument 12 is
periodically updated 48 based on the timing signal from the timing
signal generator 26. In one exemplary embodiment, the timing
generator 26 is electrically connected to the tracking subsystem
20. The tracking subsystem 20 is in turn operable to report
position data for the surgical instrument 12 in response to a
timing signal received from the timing signal generator 26. The
position of the indicia of the surgical instrument 12 is then
updated 50 on the display of the image data. It is readily
understood that other techniques for synchronizing the display of
an indicia of the surgical instrument 12 based on the timing signal
are also within the scope of the present invention; such synchronization eliminates any flutter or jitter which may otherwise appear on the displayed image 52.
It is also envisioned that a path (or projected path) of the
surgical instrument 12 may also be illustrated on the displayed
image data 52.
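A schematic Python sketch of this gated update loop follows; timing_events, tracker, and display are hypothetical stand-ins for the timing signal generator 26, tracking subsystem 20, and display 18, not actual interfaces of the system:

    def update_indicia_on_timing(timing_events, tracker, display):
        """Update the instrument indicia only when the timing signal fires
        (one event per cardiac or respiratory cycle), so the position is
        sampled at the same phase of each cycle and the displayed indicia
        does not flutter with intra-cycle motion."""
        for _ in timing_events:                    # one event per cycle
            position = tracker.report_position()   # hypothetical interface
            display.draw_indicia(position)         # hypothetical interface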
[0165] In another aspect of the present invention, the surgical
instrument navigation system 10 may be further adapted to display
four-dimensional image data for a region of interest as shown in
FIG. 14. In this case, the imaging device 14 is operable to capture
volumetric scan data 62 for an internal region of interest over a
period of time, such that the region of interest includes motion
that is caused by either the cardiac cycle or the respiratory cycle
of the patient 13. A volumetric perspective view of the region may
be rendered 64 from the volumetric scan data 62 by the data
processor 16 as described above. The four-dimensional image data
may be further supplemented with other patient data, such as
temperature or blood pressure, using color coding
techniques.
[0166] In order to synchronize the display of the volumetric
perspective view in real-time with the cardiac or respiratory cycle
of the patient, the data processor 16 is adapted to receive a
timing signal from the timing signal generator 26. As described
above, the timing signal generator 26 is operable to generate and
transmit a timing signal that correlates to either the cardiac
cycle or the respiratory cycle of the patient 13. In this way, the
volumetric perspective image may be synchronized 66 with the
cardiac or respiratory cycle of the patient 13. The synchronized
image 66 is then displayed 68 on the display 18 of the system. The
four-dimensional synchronized image may be either (or both of) the
primary image rendered from the point of view of the surgical
instrument or the secondary image depicting the indicia of the
position of the surgical instrument 12 within the patient 13. It is
readily understood that the synchronization process is also
applicable to two-dimensional image data acquired over time.
[0167] To enhance visualization and refine accuracy of the
displayed image data, the surgical navigation system can use prior
knowledge such as the segmented vessel or airway structure to
compensate for error in the tracking subsystem or for inaccuracies
caused by an anatomical shift occurring since acquisition of scan
data. For instance, it is known that the surgical instrument 12
being localized is located within a given vessel or airway and,
therefore should be displayed within the vessel or airway.
Statistical methods can be used to determine the most likely location within the vessel or airway with respect to the reported location, and then compensate so the display accurately represents the instrument 12 within the center of the vessel or airway.
center of the vessel or airway can be found by segmenting the
vessels or airways from the three-dimensional datasets and using
commonly known imaging techniques to define the centerline of the
vessel or airway tree. Statistical methods may also be used to
determine if the surgical instrument 12 has potentially punctured
the vessel or airway. This can be done by determining that the reported location is too far from the centerline or that the trajectory of the path traveled makes greater than a certain angle (worst case, 90 degrees) with respect to the vessel or airway. Reporting this type of trajectory error is very important to clinicians.
tracking along the center of the vessel may also be further refined
by correcting for motion of the respiratory or cardiac cycle, as
described above. While navigating along the vessel or airway tree, prior knowledge about the last known location can be used to aid in determining the new location. The instrument or navigated device must follow a pre-defined vessel or airway tree and therefore cannot jump from one branch to another without traveling along an allowed path. The orientation of the instrument or
navigated device can also be used to select the most likely pathway
that is being traversed. The orientation information can be used to
increase the probability or weight for a selected location, or to
exclude potential pathways, and thereby enhance system
accuracy.
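By way of illustration only, a simple form of the centerline compensation and puncture check described above may be sketched as follows in Python; the nearest-point snap and the fixed thresholds stand in for the statistical methods and are illustrative assumptions rather than elements of the disclosure.

    import numpy as np

    def snap_to_centerline(reported_pos, centerline_pts, travel_dir=None,
                           max_dist_mm=10.0, max_angle_deg=60.0):
        # Illustrative sketch: snap a reported instrument position to the
        # nearest point of the segmented centerline and flag a possible
        # puncture of the vessel or airway wall.
        pts = np.asarray(centerline_pts, dtype=float)
        d = np.linalg.norm(pts - np.asarray(reported_pos, dtype=float), axis=1)
        i = int(np.argmin(d))
        punctured = d[i] > max_dist_mm
        if travel_dir is not None and 0 < i < len(pts) - 1:
            # Angle between the recent travel direction and the local
            # centerline tangent (worst case 90 degrees, per the text).
            tangent = pts[i + 1] - pts[i - 1]
            tangent = tangent / np.linalg.norm(tangent)
            cosang = abs(np.dot(np.asarray(travel_dir, dtype=float), tangent))
            angle = np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))
            punctured = punctured or angle > max_angle_deg
        return pts[i], punctured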
[0168] The surgical instrument navigation system of the present
invention may also incorporate atlas maps. It is envisioned that
three-dimensional or four-dimensional atlas maps may be registered
with patient specific scan data or generic anatomical models. Atlas
maps may contain kinematic information (e.g., heart and lung
models) that can be synchronized with four-dimensional image data,
thereby supplementing the real-time information. In addition, the
kinematic information may be combined with localization information
from several instruments to provide a complete four-dimensional
model of organ motion. The atlas maps may also be used to localize
bones or soft tissue which can assist in determining placement and
location of implants.
[0169] FIG. 20, for example, illustrates in one exemplary
embodiment the use of Inspiration/arms-up CT and
Expiration/arms-down CT for image guided navigation purposes. Stage
(A) shows an Inspiration/arms-up pathway registration; this is,
generally speaking, the preferred CT scan acquisition state for
automatic segmentation of the tracheo-bronchial tree. Stage (B), on
the other hand, shows FRC/arms-down segmentation, wherein FRC
refers to the Functional Residual Capacity (i.e., the lung volume
at the end of a normal expiration, when the muscles of respiration
are completely relaxed; at FRC, and typically at FRC only, the
tendency of the lungs to collapse is exactly balanced by the
tendency of the chest wall to expand). FRC/arms-down is, generally,
the preferred navigational state for image guided pulmonary
navigation. Unfortunately, using the inspiration (arms-up) state of
the lungs for navigation can contribute significant error to image
guided navigation. Thus, in accordance with various
embodiments described herein, the airways are segmented using full
inspiration/arms-up CT acquisition, and mapped to a less-fully
segmented, FRC/arms-down CT acquisition. As shown, the results can
then be conformed to produce a fully segmented, tracheo-bronchial
tree, at the proper spatial representation for patient FRC (Stage
(C)).
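By way of illustration only, the conforming of the inspiration/arms-up segmentation to FRC/arms-down space may be sketched as follows in Python, under the assumption that a displacement field from a deformable registration of the two CT acquisitions is already available; the array layouts are illustrative assumptions rather than elements of the disclosure.

    import numpy as np

    def map_segmentation_to_frc(insp_points_vox, displacement_field_mm, spacing_mm):
        # Illustrative sketch: carry airway points segmented on the
        # inspiration/arms-up CT into FRC/arms-down space. Assumes a
        # displacement field of shape (X, Y, Z, 3), in millimeters, and
        # points that lie inside the field.
        pts = np.asarray(insp_points_vox, dtype=float)
        idx = np.round(pts).astype(int)
        disp = displacement_field_mm[idx[:, 0], idx[:, 1], idx[:, 2]]
        # Millimeter coordinates of the points at the patient's FRC state.
        return pts * np.asarray(spacing_mm, dtype=float) + disp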
[0170] FIG. 21 depicts a CT minP/maxP volume rendering including a
scenario whereby precise navigation (e.g., using an EM sensor) near
a target lesion is carried out with incomplete segmentation
results. This scenario provides the user with a view or image 1600
using minP (minimum intensity projection) or maxP (maximum
intensity projection) volume renderings to simultaneously integrate one or
more of: (i) the last known segmented airway 1601; (ii) the
target(s) 1602; (iii) a visually distinct representation of
previously traversed paths that are "bad" (i.e., incorrect) 1603;
(iv) the distance and angle 1604 to the target (e.g., a target
lesion) using a vector fit to the last 1 cm (or so) of travel (in
addition to, or in place of, instantaneous orientation provided by
a 5DOF tip sensor as described herein); and (v) user-provided "way
points" incorporated to create a final-approach tube 1605 to the
target. As described herein, the image 1600 may also provide a
real-time or simulated real-time rendering of the device 1606
(e.g., the device tip as shown with a virtual extension 1607 to the
target).
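By way of illustration only, the vector fit to the last approximately 1 cm of travel and the resulting distance and angle 1604 to the target may be sketched as follows in Python; the fit length and the use of a principal-direction fit are illustrative assumptions rather than elements of the disclosure.

    import numpy as np

    def approach_vector(track_pts_mm, target_mm, fit_length_mm=10.0):
        # Illustrative sketch: fit a direction to the most recent ~1 cm of
        # tip travel and report the distance and angle to the target.
        pts = np.asarray(track_pts_mm, dtype=float)  # oldest to newest
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        # Walk backward along the track until ~fit_length_mm is covered.
        n = int(np.searchsorted(np.cumsum(seg[::-1]), fit_length_mm)) + 2
        recent = pts[-n:]
        centered = recent - recent.mean(axis=0)
        direction = np.linalg.svd(centered)[2][0]  # principal direction
        if np.dot(direction, recent[-1] - recent[0]) < 0:
            direction = -direction  # orient along the direction of travel
        to_target = np.asarray(target_mm, dtype=float) - pts[-1]
        distance = np.linalg.norm(to_target)
        cosang = np.dot(direction, to_target / distance)
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        return distance, angle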
[0171] Referring now to FIGS. 22A and 22B, one exemplary embodiment
of a 4D thoracic registration is depicted. In general, the 4D
dataset may be acquired through respiratory gating or tracheal
temporal registration. In accordance with the methods described
herein, for example, the accelerations of N data collectors (e.g.,
magnetic or MEMS accelerometers) are measured to
register the thorax in time and space, using the general formula:
dataT.sub.thorax=F(t). As shown in FIGS. 22A and 22B, the various
sensors 1701 and tracheal sensor 1702 provide data as described
herein, as does sternum sensor 1703 (e.g., x, y, and z dynamic
tracking). Device 1704 (e.g., biopsy needle or other device or
instrument described herein) is further capable of tracking
position and trajectory (as described herein).
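By way of illustration only, a discrete form of the general formula dataT.sub.thorax=F(t) may be sketched as follows in Python, under the assumption that synchronized, timestamped sensor positions and a respiratory phase signal are available; the phase-binning scheme is an illustrative assumption rather than an element of the disclosure.

    import numpy as np

    def build_thorax_model(sensor_tracks, phases, n_bins=10):
        # Illustrative sketch: bin synchronized positions of the chest,
        # tracheal, and sternum sensors by respiratory phase, yielding a
        # discrete thorax model indexed by phase. Assumes sensor_tracks of
        # shape (T, N, 3), phases in [0, 1), and samples in every bin.
        tracks = np.asarray(sensor_tracks, dtype=float)
        bins = np.minimum((np.asarray(phases) * n_bins).astype(int), n_bins - 1)
        # Mean sensor configuration per phase bin: shape (n_bins, N, 3).
        return np.stack([tracks[bins == b].mean(axis=0) for b in range(n_bins)])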
[0172] An exemplary apparatus and method for respiratory 4D data
acquisition and navigation is depicted in FIG. 23. As shown in the
upper box, a respiratory problem or issue is scanned (e.g., by a CT
and/or MR scanner) and signal S from the PTD is provided to the
CT/IR unit (lower box). The 4D registration based on the motion of
the fiducial and tracker units (which could be, e.g., EM sensors,
MEMS devices, combinations thereof, and the like) is provided to
the user (shown in FIG. 18 as an interventional radiologist (IR))
on computer C. The system is capable of displaying the current
position of the device tip in the image data shown to the IR, using
the fiducial or tracker locations in the scan coupled with the
real-time motion information provided by the device tip (e.g.,
which can include a sensor as described herein), thus providing
registration.
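By way of illustration only, the registration of fiducial or tracker locations in the scan to their real-time tracked locations may be sketched as a least-squares rigid (Kabsch) fit in Python; the choice of a rigid fit is an illustrative assumption, since the disclosure also contemplates motion compensation.

    import numpy as np

    def register_scan_to_tracker(scan_fids_mm, tracked_fids_mm):
        # Illustrative sketch: least-squares rigid fit of fiducial
        # locations in the scan to their tracked locations, returning a
        # rotation R and translation t with tracked ~= R @ scan + t.
        # Inverting (R, t) maps the tracked device tip into the image data.
        A = np.asarray(scan_fids_mm, dtype=float)
        B = np.asarray(tracked_fids_mm, dtype=float)
        ca, cb = A.mean(axis=0), B.mean(axis=0)
        H = (A - ca).T @ (B - cb)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cb - R @ ca
        return R, t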
[0173] A representative offset device in accordance with certain
embodiments of the disclosure herein is depicted in FIGS. 24A and
24B, which show a scope or other instrument 1500 including one or
more offset devices 1501 at port 1502. The offset device(s) is/are
capable of holding the tracked guidewire 1503 (including brush 1504
at the scope tip) in place and providing a substantially fixed
distance or length of extension out of the scope (or virtual scope)
to take a sample. It will be understood that brush 1504 at the
guidewire tip may be any of a variety of devices, including
forceps, needles (e.g., a biopsy needle), and the like. As shown,
multiple offsets 1501 can be provided in stages that allow
extension of the guidewire, e.g., at 1 cm, 2 cm, 3 cm, and so on,
whereby the user can adjust the offsets by removal or repositioning
(removed offsets 1501R, for example, are depicted in dashed lines
within the port in FIG. 24B). Thus, FIG. 24A shows the brush tip
1504 at the tip of the scope (i.e., prior to extension), whereas
FIG. 24B shows the guidewire and tip extended (e.g., 2 cm of
extension) by the removal or repositioning of two offsets
1501R.
[0174] In FIG. 25, a representative actuatable sensor/forceps device
in accordance with the embodiments described herein is depicted. As
shown, the coil of an EM (or other) sensor 1601 disposed in a
catheter (or similar device) body 1600 can act as a solenoid to
actuate a forceps 1602. In accordance with the illustrated
embodiment, for example, the solenoid coil is used as an EM sensor
in a "passive" mode, but can be activated by energy stored in an
ultra-capacitor 1603 which actuates the forceps via armature 1604
in an "active" mode.
CONCLUSION
[0175] While various embodiments of the invention have been
described above, it should be understood that they have been
presented by way of example only, and not limitation. Thus, the
breadth and scope of the invention should not be limited by any of
the above-described embodiments, but should be defined only in
accordance with the following claims and their equivalents.
[0176] The previous description of the embodiments is provided to
enable any person skilled in the art to make or use the invention.
While the invention has been particularly shown and described with
reference to embodiments thereof, it will be understood by those
skilled in the art that various changes in form and details may be made
therein without departing from the spirit and scope of the
invention. For example, the PTD, markers and localization elements
can be constructed from any suitable material, and can be a variety
of different shapes and sizes, not necessarily specifically
illustrated, while still remaining within the scope of the
invention.
* * * * *