U.S. patent application number 17/391088 was filed with the patent office on 2021-08-02 and published on 2021-11-18 for augmented surgical reality environment. The applicant listed for this patent is Covidien LP. The invention is credited to Tony C. Carnes, Edward M. McKenna, and Stephen Pack.
Publication Number | 20210358122
Application Number | 17/391088
Document ID | /
Family ID | 1000005752902
Filed Date | 2021-08-02
Publication Date | 2021-11-18

United States Patent Application 20210358122
Kind Code: A1
Carnes; Tony C.; et al.
November 18, 2021
AUGMENTED SURGICAL REALITY ENVIRONMENT
Abstract
The present disclosure is directed to an augmented reality
surgical system for viewing an augmented image of a region of
interest during a surgical procedure. The system includes an image
capture device that captures an image of the region of interest. A
controller receives the image and applies at least one image
processing filter to the image. The image processing filter
includes a spatial decomposition filter that decomposes the image
into spatial frequency bands. A temporal filter is applied to the
spatial frequency bands to generate temporally filtered bands. An
adder adds each spatial frequency band to a corresponding
temporally filtered band to generate augmented bands. A
reconstruction filter generates an augmented image by collapsing
the augmented bands. A display displays the augmented image to a
user.
Inventors: Carnes; Tony C. (Gainesville, FL); McKenna; Edward M. (Boulder, CO); Pack; Stephen (Boulder, CO)

Applicant:
Name | City | State | Country
Covidien LP | Mansfield | MA | US
Family ID: 1000005752902
Appl. No.: 17/391088
Filed: August 2, 2021
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number | Child Application
16834125 | Mar 30, 2020 | 11080854 | 17391088
16210686 | Dec 5, 2018 | 10607345 | 16834125
15327058 | Jan 18, 2017 | 10152789 | 16210686
PCT/US2015/041083 | Jul 20, 2015 | | 15327058
62028974 | Jul 25, 2014 | | PCT/US2015/041083
Current U.S. Class: 1/1

Current CPC Class: G06T 11/008 (20130101); G06T 19/006 (20130101); G06T 2207/20221 (20130101); A61B 2576/00 (20130101); G06T 19/20 (20130101); G06T 5/50 (20130101); A61B 2090/365 (20160201); H04N 5/23293 (20130101); G06T 5/10 (20130101); G06T 7/13 (20170101); G06T 2207/10048 (20130101); G06T 7/0012 (20130101); A61B 90/361 (20160201); H04N 5/33 (20130101); H04N 5/232 (20130101); G06T 2207/30004 (20130101); G06T 2207/20024 (20130101); A61B 90/37 (20160201)

International Class: G06T 7/00 (20060101); G06T 19/00 (20060101); G06T 5/50 (20060101); A61B 90/00 (20060101); H04N 5/232 (20060101); G06T 7/13 (20060101); G06T 5/10 (20060101); G06T 11/00 (20060101); G06T 19/20 (20060101); H04N 5/33 (20060101)
Claims
1-20. (canceled)
21. An augmented reality surgical system for viewing an augmented
image of a region of interest, the system comprising: an image
capture device configured to capture an image of the region of
interest inside a patient during a surgical procedure; a controller
configured to receive the captured image, receive one or more
desired frequency bands, and apply at least one image processing
filter to the image to generate the augmented image, the image
processing filter including: a spatial decomposition filter
configured to decompose the image into a plurality of spatial
frequency bands; a temporal filter configured to extract the one or
more desired spatial frequency bands from the plurality of spatial
frequency bands; an amplifier configured to amplify the one or more
desired spatial frequency bands; an adder configured to add each
band in the amplified one or more desired spatial frequency bands
to a corresponding band in the plurality of spatial frequency bands
to generate a plurality of augmented bands; and a reconstruction
filter configured to generate an augmented image by collapsing the
plurality of augmented bands; and a display configured to display
the augmented image thereon.
22. The augmented reality surgical system of claim 21, wherein the
one or more desired frequency bands are received via a user
interface.
23. The augmented reality surgical system of claim 21, wherein the
one or more desired frequency bands are related to the region of
interest.
24. The augmented reality surgical system of claim 21, wherein the
image capture device captures a video having a plurality of image
frames and the controller applies the at least one image processing
filter to each image frame of the plurality of image frames.
25. The augmented reality surgical system of claim 21, wherein the
temporal filter includes a bandpass filter.
26. The augmented reality surgical system of claim 21, wherein the
temporal filter isolates the one or more desired spatial frequency
bands from the plurality of spatial frequency bands.
27. The augmented reality surgical system of claim 21, wherein the
image processing filter uses an edge detection algorithm configured
to highlight one or more edges in the image, wherein the one or
more highlighted edges is added to the augmented image.
28. The augmented reality surgical system of claim 21, further
comprising at least one hyper-spectral sensor configured to obtain
a plurality of hyper-spectral images, wherein the image processing
filter uses a hyper-spectral algorithm to combine the plurality of
hyper-spectral images to generate a three dimensional
image cube that is added to the augmented image.
29. The augmented reality surgical system of claim 21, further
comprising an infrared camera configured to capture an infrared
image, wherein the infrared image is added to the augmented
image.
30. A method for generating an augmented image of a region of
interest, the method comprising: capturing an image of the region
of interest inside a patient during a surgical procedure; receiving
one or more desired frequency bands; decomposing the image into a
plurality of spatial frequency bands; extracting the one or more
desired spatial frequency bands from the plurality of spatial
frequency bands; amplifying the one or more desired spatial
frequency bands; adding each band in the amplified one or more
desired spatial frequency bands to a corresponding band in the
plurality of spatial frequency bands to generate a plurality of
augmented bands; generating the augmented image by collapsing the
plurality of augmented bands; and displaying the augmented image on
a display.
31. The method of claim 30, wherein the one or more desired
frequency bands are received via a user interface.
32. The method of claim 30, wherein the one or more desired
frequency bands are related to the region of interest.
33. The method of claim 30, wherein the captured image is a video
having a plurality of image frames.
34. The method of claim 33, further comprising: producing an
augmented video from augmented image frames.
35. The method of claim 30, wherein decomposing the image is
performed by a bandpass filter.
36. The method of claim 30, wherein decomposing the image includes
isolating the one or more desired spatial frequency bands from the
plurality of spatial frequency bands.
37. The method of claim 30, further comprising: applying an edge
detection algorithm to highlight one or more edges in the captured
image; and adding the one or more highlighted edges to the
augmented image.
38. The method of claim 30, further comprising: obtaining a
plurality of hyper-spectral images; combining the plurality of
hyper-spectral images to generate a three dimensional
hyper-spectral image cube; and adding the three dimensional
hyper-spectral image cube to the augmented image.
39. The method of claim 30, further comprising: obtaining an
infrared image; and adding the infrared image to the augmented
image.
40. A non-transitory computer readable medium storing instructions
that, when executed by a control device, cause the control device
to perform a method comprising: capturing an image of a region of
interest inside a patient during a surgical procedure; receiving
one or more desired frequency bands; decomposing the image into a
plurality of spatial frequency bands; extracting the one or more
desired spatial frequency bands from the plurality of spatial
frequency bands; amplifying the one or more desired spatial
frequency bands; adding each band in the amplified one or more
desired spatial frequency bands to a corresponding band in the
plurality of spatial frequency bands to generate a plurality of
augmented bands; generating the augmented image by collapsing the
plurality of augmented bands; and displaying the augmented image on
a display.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application is a Continuation Application of U.S.
patent application Ser. No. 16/834,125, filed Mar. 30, 2020 (now
U.S. Pat. No. 11,080,854), which is a Continuation Application of
U.S. patent application Ser. No. 16/210,686, filed Dec. 5, 2018
(now U.S. Pat. No. 10,607,345), which is a Continuation Application
of U.S. patent application Ser. No. 15/327,058, filed Jan. 18, 2017
(now U.S. Pat. No. 10,152,789), which is a U.S. National Stage
Application filed under 35 U.S.C. § 371(a) of International Patent
Application Serial No. PCT/US2015/041083, filed Jul. 20, 2015, which
claims the benefit of and priority to U.S. Provisional Patent
Application No. 62/028,974, filed Jul. 25, 2014, the entire
disclosure of each of which is incorporated by reference herein.
BACKGROUND
1. Technical Field
[0002] The present disclosure relates to surgical techniques to
improve surgical outcomes for a patient. More specifically, the
present disclosure is directed to systems and methods for
augmenting and enhancing a clinician's field of vision while
performing a surgical technique.
2. Background of the Related Art
[0003] Minimally invasive surgeries have involved the use of
multiple small incisions to perform a surgical procedure instead of
one larger opening. The small incisions have reduced patient
discomfort and improved recovery times; however, they have also
limited the visibility of internal organs, tissue, and other
matter.
[0004] Endoscopes have been inserted in one or more of the
incisions to make it easier for clinicians to see internal organs,
tissue, and other matter inside the body during surgery. These
endoscopes have included a camera with an optical and/or digital
zoom capability that is coupled to a display showing the magnified
view of organs, tissue, and matter inside the body as captured by
the camera. Existing endoscopes and displays, especially those used
in surgical robotic systems, have had a limited ability to identify
conditions or objects that are within the field of view of the
camera but are not fully visible within the spectrum shown on the
display.
[0005] For example, existing minimally invasive and robotic
surgical tools, including but not limited to endoscopes and
displays, have had a limited, if any, ability to identify tissue
perfusion after resection, locate different sized arteries within
tissue, measure the effectiveness of vessel sealing, identify
diseased or dead tissue from a heat signature, verify appropriate
functioning after a resection, distinguish between sensitive areas
(such as the ureter) and surrounding matter (such as surrounding
blood), and detect very small leaks that are not visible with
current tests. In some surgeries these checks were either not
performed, or more invasive and/or time-consuming tests were
performed to check for these and other conditions and objects.
[0006] There is a need for identifying a greater range of possible
conditions or objects that are within the field of view of a
surgical camera but are not fully visible within the spectrum shown
on the display during surgery.
SUMMARY
[0007] In an aspect of the present disclosure, an augmented reality
surgical system for viewing an augmented image of a region of
interest during a surgical procedure is provided. The system
includes an image capture device configured to capture an image of
the region of interest. The system also includes a controller
configured to receive the image and apply at least one image
processing filter to the image to generate an augmented image. The
image processing filter includes a spatial decomposition filter
configured to decompose the image into a plurality of spatial
frequency bands, a temporal filter that is configured to be applied
to the plurality of spatial frequency bands to generate a plurality
of temporally filtered bands, an adder configured to add each band
in the plurality of spatial frequency bands to a corresponding band
in the plurality of temporally filtered bands to generate a
plurality of augmented bands, and a reconstruction filter
configured to generate an augmented image by collapsing the
plurality of augmented bands. The augmented image is then displayed
to a user.
[0008] The image capture device may capture a video having a
plurality of image frames and the controller applies the at least
one image processing filter to each image frame of the plurality of
image frames.
[0009] The temporal filter isolates at least one spatial frequency
band from the plurality of spatial frequency bands to generate the
plurality of temporally filtered bands. The plurality of temporally
filtered bands are amplified by an amplifier before each band in
the plurality of spatial frequency bands is added to a
corresponding band in the plurality of temporally filtered bands to
generate a plurality of augmented bands.
[0010] The image processing filter may use an edge detection
algorithm configured to highlight one or more edges in the image,
wherein the one or more highlighted edges is added to the augmented
image.
[0011] The system may include at least one hyper-spectral sensor
configured to obtain a plurality of hyper-spectral images. The
image processing filter uses a hyper-spectral algorithm to combine
the plurality of spectral images to generate a three dimensional
hyper-spectral image cube that is added to the augmented image.
[0012] The system may also include an infrared camera configured to
capture at least one image, wherein the image processing filter
generates an infrared image that is added to the augmented
image.
[0013] In another aspect of the present disclosure, methods for
generating an augmented image are provided. A non-transitory
computer readable medium, including but not limited to flash
memory, compact discs, and solid state drives, may store
instructions that, when executed by a processing device, including
but not limited to a controller or central processing unit, cause
the processing device to perform one or more functions, including
the methods described herein. A method may include capturing at
least one image using an image capture device. The at least one
image is decomposed to generate a plurality of spatial frequency
bands. A temporal filter is applied to the plurality of spatial
frequency bands to generate a plurality of temporally filtered
bands. Each band in the plurality of spatial frequency bands is
added to a corresponding band in the plurality of temporally
filtered bands to generate a plurality of augmented bands. The
plurality of augmented bands is collapsed to generate an augmented
image which is displayed on a display.
[0014] At least one spatial frequency band is isolated from the
plurality of spatial frequency bands. The temporally filtered bands
may be amplified before adding each band in the plurality of
spatial frequency bands to a corresponding band in the plurality of
temporally filtered bands to generate a plurality of augmented
bands.
[0015] An edge detection algorithm may be applied to highlight one
or more edges in the image and the one or more highlighted edges is
added to the augmented image.
[0016] A plurality of hyper-spectral images may be obtained. The
plurality of hyper-spectral images are combined to generate a three
dimensional hyper-spectral image cube. The three dimensional
hyper-spectral image cube is added to the augmented image.
[0017] An infrared image may be obtained and added to the augmented
image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] The above and other aspects, features, and advantages of the
present disclosure will become more apparent in light of the
following detailed description when taken in conjunction with the
accompanying drawings in which:
[0019] FIG. 1 is a block diagram of a system for augmenting a
surgical environment;
[0020] FIGS. 2A-2D are examples of how the system of FIG. 1 may be
implemented;
[0021] FIG. 3 is a system block diagram of the controller of FIG.
1;
[0022] FIG. 4 is a first block diagram of a system for augmenting
an image or video;
[0023] FIG. 5 is a second block diagram of a system for augmenting
an image or video;
[0024] FIG. 6 is a third block diagram of a system for augmenting
an image or video;
[0025] FIG. 7 is a fourth block diagram of a system for augmenting
an image or video; and
[0027] FIG. 8 is a system block diagram of a robotic surgical
system.
DETAILED DESCRIPTION
[0028] Image data captured from a surgical camera during a surgical
procedure may be analyzed to identify additional imperceptible
properties of objects within the camera field of view that may be
invisible or visible but difficult to clearly see for people
viewing the camera image displayed on a screen. Various image
processing technologies may be applied to this image data to
identify different conditions in the patient. For example, Eulerian
image amplification techniques may be used to identify wavelength
or "color" changes of light in different parts of a capture image.
These changes may be further analyzed to identify re-perfusion,
arterial flow, and/or vessel types.
[0029] Eulerian image amplification may also be used to make motion
or movement between image frames more visible to a clinician. In
some instances changes in a measured intensity of predetermined
wavelengths of light between different image frames may be
presented to a clinician to make the clinician more aware of the
motion of particular objects of interest (such as blood).
[0030] Image algebra may be used to identify an optimal location
for cutting tissue or other matter during the surgical procedure.
In some instances, image algebra may include edge detection and/or
Eulerian image amplification to identify optimal cutting
locations.
[0031] Hyper-spectral image analysis may be used to identify subtle
changes in small areas within the range of view that may be
invisible or otherwise difficult for the human eye to discern.
These hyper-spectral image analysis techniques may be combined with
Eulerian image amplification to identify a specific set of changes
in these areas.
[0032] Image algebra may be combined with hyper-spectral image
analysis to identify an edge of an object or other mass. Image
algebra may include edge detection and/or may be combined with both
hyper-spectral image analysis and Eulerian image amplification to
identify an edge of a mass.
[0033] Infrared light may be used to identify a boundary of
diseased, dead, and/or abnormal tissue. A filter may be used to
isolate one or more desired wavelengths in an infrared, near
infrared, or other range from captured image data. Eulerian image
amplification and/or image algebra may be used to analyze the
filtered image data and identify a particular tissue boundary.
[0034] One or more of Eulerian image amplification, image algebra,
hyper-spectral image analysis, and filtering technologies may be
included as part of an imaging system. These technologies may
enable the imaging system to provide additional information about
unapparent conditions and objects within a camera's field of view
and enhance surgical outcomes. This additional information may
include, but is not limited to, identifying tissue perfusion,
locating arteries of specific sizes (such as larger arteries),
verifying an effectiveness of vessel sealing, identifying a heat
signature of abnormal tissue, verifying desired object motion (such
as a lack of movement in edges of dead tissue or verifying proper
flow after resection), distinguishing between similar looking
objects (such as between the ureter, inferior mesenteric artery,
and/or surrounding blood), and detecting small leaks (such as leaks
that may occur after an anastomosis).
[0035] One or more of these technologies may be included as part of
an imaging system in a surgical robotic system to provide a
clinician with additional information in real time about unapparent
conditions and objects within an endoscope's field of view. This
may enable the clinician to quickly identify, avoid, and/or correct
undesirable situations and conditions during surgery. For example,
a clinician may be able to verify during surgery that vessels have
been properly sealed, that blood is properly flowing, that there
are no air leaks after an anastomosis, and/or that diseased tissue
has been removed. The clinician may then be able to correct these
issues if needed during the surgery. A clinician may also be able
to identify delicate or critical objects in the body that the
surgical instruments should avoid contacting or handle extra
carefully, such as larger arteries or the ureter.
[0036] The present disclosure is directed to systems and methods
for providing an augmented image in real time to a clinician during
a surgical procedure. The systems and methods described herein
apply image processing filters to a captured image to provide an
augmented or enhanced image to a clinician via a display. In some
embodiments, the systems and methods permit video capture during a
surgical procedure. The captured video is processed in real time or
near real time and then displayed to the clinician as an augmented
image. The image processing filters are applied to each frame of
the captured video. Providing the augmented image or video to the
clinician permits the clinician to identify and address potential
adverse physiologic conditions, thereby reducing the need for
additional surgical procedures as well as ensuring the
effectiveness of the original surgical procedure.
[0037] The embodiments described herein enable a clinician to
identify areas receiving excessive or ineffective blood,
effectiveness of stapling or sealing, temperature variations in
organs to identify diseased tissue, subtle tissue movement to
determine if tissue is alive, and tissue thickness. Additionally,
the embodiments described herein may be used to identify tissue
perfusion after resection, locate arteries, distinguish between
different tissues, and detect air leaks.
[0038] Turning to FIG. 1, a system for augmenting a surgical
environment, according to embodiments of the present disclosure, is
shown generally as 100. System 100 includes a controller 102 that
has a processor 104 and a memory 106. The system 100 also includes
an image capture device 108, e.g., a camera, that records still
frame images or moving images. A sensor array 110 provides
information concerning the surgical environment to the controller
102. For instance, sensor array 110 includes biometric sensors
capable of obtaining biometric data of a patient, such as pulse,
temperature, blood pressure, blood oxygen levels, heart rhythm,
etc. Sensor array 110 may also include hyper-spectral sensors to
perform hyper-spectral imaging. A display 112 displays augmented
images to a clinician during a surgical procedure. In some
embodiments, the controller 102 may communicate with a central
server (not shown) via a wireless or wired connection. The central
server may store images of a patient or multiple patients that may
be obtained using x-ray, a computed tomography scan, or magnetic
resonance imaging.
[0039] FIGS. 2A-2D depict examples of how the system of FIG. 1 is
implemented in a surgical environment. As shown in FIGS. 2A-2D, an
image capture device 108 captures images of a surgical environment
during a surgical procedure. Images recorded by the image capture
device 108, data from the sensor array 110, and images from the
central server (not shown) are combined by the controller 102 to
generate an augmented image that is provided to a clinician via
display 112. As shown in FIGS. 2A-2D, display 112 may be a
projector (FIG. 2A), a laser projection system (FIG. 2B), a pair of
glasses, such as GOOGLE GLASS® (provided by Google®), that projects
an image onto one or both lenses or onto a facial shield (FIG. 2C),
or a monitor (FIG. 2D). When a monitor is used, as shown in FIG.
2D, the augmented image is overlaid on an image of the patient
obtained by the image capture device 108.
[0040] FIG. 3 depicts a system block diagram of the controller 102.
As shown in FIG. 3, the controller 102 includes a transceiver 114
configured to receive still frame images or video from the image
capture device 108 or data from sensor array 110. In some
embodiments, the transceiver 114 may include an antenna to receive
the still frame images, video, or data via a wireless communication
protocol. The still frame images, video, or data are provided to
the processor 104. The processor 104 includes an image processing
filter 116 that processes the received still frame images, video,
or data to generate an augmented image or video. The image
processing filter 116 may be implemented using discrete components,
software, or a combination thereof. The augmented image or video is
provided to the display 112.
[0041] Turning to FIG. 4, a system block diagram of an image
processing filter that may be applied to video received by
transceiver 114 is shown as 116A. In the image processing filter
116A, each frame of a received video is decomposed into different
spatial frequency bands S.sub.1 to S.sub.N using a spatial
decomposition filter 118. The spatial decomposition filter 118 uses
an image processing technique known as a pyramid, in which an image
is subjected to repeated smoothing and subsampling.
[0042] After the frame is subjected to the spatial decomposition
filter 118, a temporal filter 120 is applied to all the spatial
frequency bands S.sub.1 to S.sub.N to generate temporally filtered
bands ST.sub.1 to ST.sub.N. The temporal filter 120 is a bandpass
filter that is used to extract one or more desired frequency bands.
For example, if the clinician knows the patient's pulse, the
clinician can set the bandpass frequency of the temporal filter
120, using a user interface (not shown), to magnify the spatial
frequency band that corresponds to the patient's pulse. In other
words, the bandpass filter is set to a narrow range that includes
the patient's pulse and applied to all the spatial frequency bands
S.sub.1 to S.sub.N. Only the spatial frequency band that
corresponds to the set range of the bandpass filter will be
isolated or passed through. All of the temporally filtered bands
ST.sub.1 to ST.sub.N are individually amplified by an amplifier
having a gain .alpha.. Because the temporal filter isolates or
passes through a desired spatial frequency band, only the desired
spatial frequency band gets amplified. The amplified temporally
filtered bands ST.sub.1 to ST.sub.N are then added to the original
spatial frequency bands S.sub.1 to S.sub.N to generate augmented
bands S'.sub.1 to S'.sub.N. Each frame of the video is then
reconstructed using a reconstruction filter 122 by collapsing the
augmented bands S'.sub.1 to S'.sub.N to generate an augmented
frame. All the augmented frames are combined to produce the
augmented video. The augmented video that is shown to the clinician
includes a magnified portion, i.e., the portion that corresponds to
the desired spatial frequency band, to enable the clinician to
easily identify that portion.
[0043] In some embodiments, instead of using an amplifier to
amplify the isolated temporally filtered band, the image processing
filter 116A may highlight the temporally filtered band using one
or more colors before reconstructing the video. Using a different
color for a desired portion of the patient, e.g., a vessel or
nerve, may make it easier for the clinician to identify the
location of such portion.
[0044] Turning to FIG. 5, a system block diagram of an image
processing filter that may be applied to a still frame image or
video received by transceiver 114 is shown as 116B. As shown in
FIG. 5, an input image 124 (i.e., a captured image or a frame from
a video) is inputted into image processing filter 116B. The image
processing filter 116B then employs an edge detection algorithm on
the inputted image 124 and outputs a filtered image 126 that
highlights the edges found in the input image 124.
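By way of illustration only, a sketch of such an edge-detection pass in Python, assuming OpenCV and 8-bit BGR input; the disclosure does not name a particular algorithm, so the Canny operator and its thresholds here are assumptions.

# A minimal sketch of image processing filter 116B, assuming OpenCV; the
# Canny operator and its thresholds are illustrative assumptions.
import cv2

def highlight_edges(input_image, lo=50, hi=150):
    """Produce filtered image 126: input image 124 with edges highlighted."""
    gray = cv2.cvtColor(input_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, lo, hi)       # binary edge map
    out = input_image.copy()
    out[edges > 0] = (0, 255, 0)          # paint detected edges green
    return out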
[0045] FIG. 6 depicts a block diagram of a system for generating a
hyper-spectral image. As shown in FIG. 6, sensor array 110 includes
hyper-spectral sensors 128. The hyper-spectral sensors 128 collect
a set of images where each image represents a different range of
the electromagnetic spectrum. The set of images is sent to image
processing filter 116C, which employs a hyper-spectral algorithm to
combine the set of images to form a three-dimensional (3D)
hyper-spectral image cube. The 3D hyper-spectral image cube is
outputted to the display 112.
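A minimal sketch of the cube assembly follows, assuming each hyper-spectral sensor reading arrives as a 2D array for one spectral band; the per-band normalization is an illustrative assumption, not part of the disclosure.

# A minimal sketch of image processing filter 116C, assuming NumPy and one
# 2D array per spectral band; normalization is an illustrative assumption.
import numpy as np

def build_hyperspectral_cube(band_images):
    """Stack per-band images (rows x cols) into a rows x cols x bands cube."""
    cube = np.stack(band_images, axis=-1).astype(np.float32)
    # Rescale each band to [0, 1] so differently exposed bands are comparable.
    cube -= cube.min(axis=(0, 1), keepdims=True)
    cube /= np.maximum(cube.max(axis=(0, 1), keepdims=True), 1e-6)
    return cube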
[0046] FIG. 7 depicts a block diagram of a system for generating an
infrared image. As shown in FIG. 7, an infrared camera 130 captures
images or video and transmits the captured images or video to image
processing filter 116D. Image processing filter 116D processes the
received captured images or video to generate an infrared image
that is displayed on display 112.
[0047] The image processing filters described above, i.e.,
116A-116D, may be used individually to identify physical conditions
during a surgical procedure. In some embodiments, image processing
filter 116A may be used to identify changes in color in order to
identify tissue perfusion or re-perfusion after a resection,
arterial flow, and vessel types. Image processing filter 116A may
also be used to enhance the visibility of motion to identify edges
of necrotic tissue or verify appropriate functioning of tissue
after resection.
[0048] In some embodiments, the above-described filters may be
combined to assist the clinician in identifying adverse physical
conditions. For instance, image processing filters 116A and 116B
may be combined to identify edges of different tissues to determine
the most effective placement for performing a task, e.g., cutting.
Image processing filters 116A and 116C may be combined to identify
subtle changes in small areas, e.g., air leaks that cannot be
determined by conventional methods. Image processing filters 116A,
116B, and 116C may be combined to identify edges of a mass, e.g., a
tumor. Image processing filters 116A, 116B, and 116D may be
combined to identify the boundary of diseased tissue.
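As one hedged illustration of such a combination, the helpers sketched above for filters 116A and 116B can be chained, running edge detection on each motion-magnified frame; the function names are the illustrative ones introduced earlier, not an API from the disclosure.

# A minimal sketch of combining filters 116A and 116B: edge-detect each
# motion-magnified frame. `magnify` and `highlight_edges` are the
# illustrative helpers sketched above.
import numpy as np

def combined_116a_116b(frames, fps):
    magnified = magnify(frames, fps)                      # filter 116A
    return [highlight_edges(np.clip(f, 0, 255).astype(np.uint8))
            for f in magnified]                           # filter 116B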
[0049] Image processing filters 116A, 116B, 116C, and 116D may be
implemented using different circuits, or they may be implemented
using a single processor that executes different subroutines based
on the filter that is applied to the image.
[0050] The above-described embodiments may also be integrated into
a robotic surgical system. FIG. 8 shows various components that may
be included in a robotic surgical system 1, such as two or more
robot arms 2, 3; a control device 4; and an operating console 5
coupled with control device 4. Operating console 5 may include a
display device 6, which may be set up in particular to display
three-dimensional images; and one or more manual input devices 7,
8, by means of which a person (not shown), for example a surgeon,
is able to telemanipulate robot arms 2, 3 in a first operating
mode.
[0051] The movement of input devices 7, 8 may be scaled so that a
surgical instrument attached to a robot arm 2, 3 has a
corresponding movement that is different (e.g. smaller or larger)
than the movement of the input devices 7, 8. The scale factor or
gearing ratio may be adjustable so that the clinician can control
the resolution of the working ends of the surgical instrument.
[0052] Each of the robot arms 2, 3 may include a plurality of
members, which are connected through joints, and a surgical
assembly 20 to which may be attached, for example, a surgical
instrument, such as, for example, an image capture device 108, such
as an endoscope, or other surgical instrument having an end
effector 200, in accordance with any of the embodiments disclosed
herein. A distal end of surgical assembly 20 may be configured to
support an image capture device 108 and/or other surgical
instruments having end effectors 200, including, but not limited
to, a grasper, a surgical stapler, a surgical cutter, a surgical
stapler-cutter, a linear surgical stapler, a linear surgical
stapler-cutter, a circular surgical stapler, a circular surgical
stapler-cutter, a surgical clip applier, a surgical clip ligator, a
surgical clamping device, a vessel expanding device, a lumen
expanding device, a scalpel, a fluid delivery device or any other
type of surgical instrument. Each of these surgical instruments may
be configured for actuation and manipulation by the robot arms 2, 3
via force transmitting members. Force transmitting members may be
variously configured, such as, for example, hypotubes, push rods,
shafts, or tethers, and can transmit various forces, such as, for
example, axial (i.e., pushing and pulling), rotary, and/or torque.
An image capture device 108, such as an endoscope having a camera
as an end effector 200 that articulates may include such force
transmitting members. One or more of these force transmitting
members may be configured to control the articulation of the
camera.
[0053] Robot arms 2, 3 may be driven by electric drives that are
connected to control device 4. Control device 4 (e.g., a computer)
is set up to activate the drives, in particular by means of a
computer program, in such a way that robot arms 2, 3, their
surgical assemblies 20 and thus the end effector 200 execute a
desired movement according to a movement defined by means of manual
input devices 7, 8. Control device 4 may also be set up in such a
way that it regulates the movement of robot arms 2, 3 and/or of the
drives.
[0054] Control device 4 may also be communicatively coupled to
other components of the surgical system 1, including, but not
limited to, the surgical assemblies 20; display 6; input devices 7,
8; and surgical instruments coupled to robot arms 2, 3, such as
image capture device 108 and an instrument having an end effector 200.
Control device 4 may also include or be coupled to controller 102.
Controller 102 and/or control device 4 may include transceiver 114,
which may be configured to receive still frame images or video from
the image capture device 108 or data from sensor array 110. In some
embodiments, the transceiver 114 may include an antenna to receive
the still frame images, video, or data via a wireless communication
protocol. The transceiver 114 may also receive the still frame
images, video, or data via a wired connection. The still frame
images, video, or data may be sent to processor 104. The processor
104 includes an image processing filter 116 that processes the
received still frame images, video, or data to generate an
augmented image or video. Processor 104 may include a buffer or
memory 106 to store the images, video, or data being processed. The
image processing filter 116 may be implemented using discrete
components, software, or a combination thereof. The augmented image
or video may be stored and/or sent to the display 6 or another
output device.
[0055] In the surgical system 1, at least one image capture device
108 may be coupled to at least a first of the two or more robot
arms 2, 3. The image capture device 108 may be configured to be
inserted into the patient and capture an image of a region of
interest inside the patient during a surgical procedure. The
captured image may be displayed on the display 6.
[0056] Another surgical instrument having an end effector 200
configured to manipulate tissue in the region of interest during
the surgical procedure may be coupled to at least a second of the
two or more robot arms 2, 3.
[0057] The controller 102 may be configured to process an image
captured from image capture device 108 and apply at least one image
processing filter 116 (e.g., filters 116A-116D in one or more of the
different ways mentioned throughout the application) to the
captured image to identify an imperceptible property of an object
in the region of interest during the surgical procedure. The
controller 102 may output the identified imperceptible property
during the surgical procedure to the clinician. The imperceptible
property may be outputted in different ways including to the
display 6 where the imperceptible property may be shown to a
clinician and/or by way of haptics so the clinician may feel the
imperceptible property. When the imperceptible property is
outputted to the display, the imperceptible property may be altered
or transformed into a more clearly visible signal that may be
overlaid onto a corresponding section of the captured image and
shown to the clinician on the display.
[0058] Each of the instruments that may be attached to a robot arm
2, 3 may be equipped with a tool-type identifier, such as a quick
response code, an identifier stored in a memory of the instrument,
a particular circuit configuration associated with the tool-type,
and so on. The surgical system 1 may include components or
circuitry configured to receive and/or read the tool-type
identifier from each instrument attached to a robot arm 2, 3. This
information may then be used to select the specific image
processing filters 116 that may be applied to the captured
image.
[0059] The tool-type identifier information may be used to identify
a surgical instrument attached to a robot arm 2, 3 that is located
within the field of view of the image capture device 108. For
example, if a quick response code or other tool-type identifier is
located on the end effector 200, shaft, or other component of the
surgical instrument that appears within the field of view of the
image capture device 108, then the captured image data may be
analyzed to identify the surgical instrument from the quick
response code or other identifier identified from the image capture
data.
[0060] In other instances, a surgical instrument that enters the
field of view of the image capture device 108 may be identified
based on a comparison of positional information about the image
capture device 108 (and/or the robot arm 2 to which the image
device 108 is attached) and an instrument attached to one of the
other robot arms 3 (and/or the robot arm 3 to which the instrument
is attached). Positional information may be obtained from one or
more position sensors in each instrument or in the robot arms 2, 3.
A transformation may be used to convert absolute positional
information in different coordinate systems from different robot
arms 2, 3 so that a relative position of the image capture device
108 on one robot arm 2 relative to the surgical instrument attached
to another robot arm 3 may be obtained. The relative position
information may be used to identify whether the surgical instrument
attached to the other robot arm 3 is within the field of view of
the image capture device 108.
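One way to realize such a transformation is sketched below, under the assumption that each arm reports its pose as a 4x4 homogeneous transform in a shared base frame; the cone-shaped field-of-view test and its limits are illustrative assumptions, not parameters from the disclosure.

# A minimal sketch of the positional comparison, assuming 4x4 homogeneous
# transforms in a common base frame; the field-of-view cone is illustrative.
import numpy as np

def relative_pose(T_base_cam, T_base_tool):
    """Pose of the instrument expressed in the image capture device's frame."""
    return np.linalg.inv(T_base_cam) @ T_base_tool

def instrument_in_view(T_base_cam, T_base_tool,
                       max_range_m=0.3, half_angle_rad=0.6):
    """Crude test: instrument origin inside a cone along the camera +z axis."""
    p = relative_pose(T_base_cam, T_base_tool)[:3, 3]
    dist = np.linalg.norm(p)
    return dist < max_range_m and p[2] > dist * np.cos(half_angle_rad)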
[0061] In other instances, one or more cameras may be used to
capture an image of one or more of the robot arms 2, 3. The image
data from these cameras may be analyzed to identify the position of
each of the robot arms 2, 3. This positional information may be
analyzed to determine if the robot arm 3 to which the surgical
instrument is attached is located within the field of view of the
image capture device 108. Other systems and methods for determining
whether a robot arm 3 to which a surgical instrument is attached is
located within the field of view of the image capture device 108
may also be used.
[0062] If the positional information and/or the tool-type
identifier indicate that a particular surgical instrument is within
the field of view of the image capture device 108, then one or more
of the image processing filters 116 may be automatically selected
that correspond to the particular surgical instrument. For example,
if an electro-cauterization surgical instrument is identified as
being within the field of view of image capture device 108, then an
image processing filter 116 showing an effectiveness of a vessel
seal may be automatically selected. If the electro-cauterization
instrument is then moved out of the field of view and a cutting
instrument, such as a scalpel, is moved into the field of view,
then a different image processing filter 116 showing the location
of large arteries may be automatically selected instead.
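A minimal sketch of this selection logic follows; the tool names and filter labels are illustrative stand-ins, since the disclosure does not fix a registry format.

# A minimal sketch of automatic filter selection keyed on the tool-type
# identifier; tool names and filter labels are illustrative stand-ins.
FILTER_FOR_TOOL = {
    "electrocautery": "vessel_seal_effectiveness",  # e.g. filter 116A
    "scalpel": "large_artery_location",             # e.g. filters 116A + 116B
}

def select_filter(tool_in_view):
    # Fall back to an unfiltered view when no tool is recognized in view.
    return FILTER_FOR_TOOL.get(tool_in_view, "none")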
[0063] Different image processing filters 116 may be automatically
selected depending on the task that is to be performed. For
example, if a cutting tool is in the field of view and is being
moved at a rate exceeding a predetermined threshold, and/or a
scaling factor of the input device 7, 8 is changed so that the
surgical instrument moves faster, an image processing filter 116
showing the location of large arteries may be automatically
selected. However, if the cutting tool is being moved at a slower
rate and/or activated to methodically cut and/or remove tissue,
then an image processing filter 116 showing abnormal tissue may be
used instead. The same analysis may be applied to
electro-cauterization tools--if the tool has not been activated
within a predetermined period and/or is being moved at a rate
exceeding a predetermined threshold, then an image processing
filter 116 showing the location of large arteries may be
automatically selected. However, if the electro-cauterization tool
is being moved at a slower rate and/or activated within the
predetermined period to methodically cut and/or remove tissue, then
an image processing filter 116 showing an effectiveness of a vessel
seal or other desired property may be used instead.
[0064] The input device 7, 8 may include haptics 216 to provide
feedback to the clinician relating to the imperceptible property.
For example, an output signal representative of a tissue parameter
or condition, e.g., tissue resistance due to manipulation, cutting
or otherwise treating, pressure by the instrument onto the tissue,
tissue temperature, tissue impedance, and so on, may be generated
and transmitted to the input device 7, 8 to provide haptic feedback
to the clinician that varies based on the imperceptible property.
Haptics 216 may provide the clinician with enhanced tactile
feedback about imperceptible properties of objects that may improve
patient safety. For example, haptics 216 may be implemented to
provide feedback to the clinician when a surgical instrument moved
by the input device 7, 8 comes within a predetermined distance of a
large artery or other delicate tissue to prevent possible injury to
the artery and/or delicate tissue. Haptics 216 may include
vibratory motors, electroactive polymers, piezoelectric devices,
electrostatic devices, subsonic audio wave surface actuation
devices, reverse-electrovibration, or any other device capable of
providing a tactile feedback to a user. The input device 7, 8 may
also include a variety of different actuators for delicate tissue
manipulation or treatment further enhancing the clinician's ability
to mimic actual operating conditions.
[0065] The embodiments disclosed herein are examples of the
disclosure and may be embodied in various forms. Specific
structural and functional details disclosed herein are not to be
interpreted as limiting, but as a basis for the claims and as a
representative basis for teaching one skilled in the art to
variously employ the present disclosure in virtually any
appropriately detailed structure. Like reference numerals may refer
to similar or identical elements throughout the description of the
figures.
[0066] The phrases "in an embodiment," "in embodiments," "in some
embodiments," or "in other embodiments," which may each refer to
one or more of the same or different embodiments in accordance with
the present disclosure. A phrase in the form "A or B" means "(A),
(B), or (A and B)". A phrase in the form "at least one of A, B, or
C" means "(A), (B), (C), (A and B), (A and C), (B and C), or (A, B
and C)". A clinician may refers to a clinician or any medical
professional, such as a doctor, nurse, technician, medical
assistant, or the like) performing a medical procedure.
[0067] The systems described herein may also utilize one or more
controllers to receive various information and transform the
received information to generate an output. The controller may
include any type of computing device, computational circuit, or any
type of processor or processing circuit capable of executing a
series of instructions that are stored in a memory. The controller
may include multiple processors and/or multicore central processing
units (CPUs) and may include any type of processor, such as a
microprocessor, digital signal processor, microcontroller, or the
like. The controller may also include a memory to store data and/or
algorithms to perform a series of instructions.
[0068] Any of the herein described methods, programs, algorithms or
codes may be converted to, or expressed in, a programming language
or computer program. A "Programming Language" and "Computer
Program" includes any language used to specify instructions to a
computer, and includes (but is not limited to) these languages and
their derivatives: Assembler, Basic, Batch files, BCPL, C, C+, C++,
Delphi, Fortran, Java, JavaScript, Machine code, operating system
command languages, Pascal, Perl, PL1, scripting languages, Visual
Basic, metalanguages which themselves specify programs, and all
first, second, third, fourth, and fifth generation computer
languages. Also included are database and other data schemas, and
any other meta-languages. No distinction is made between languages
which are interpreted, compiled, or use both compiled and
interpreted approaches. No distinction is also made between
compiled and source versions of a program. Thus, reference to a
program, where the programming language could exist in more than
one state (such as source, compiled, object, or linked) is a
reference to any and all such states. Reference to a program may
encompass the actual instructions and/or the intent of those
instructions.
[0069] Any of the herein described methods, programs, algorithms or
codes may be contained on one or more machine-readable media or
memory. The term "memory" may include a mechanism that provides
(e.g., stores and/or transmits) information in a form readable by a
machine such as a processor, computer, or a digital processing device.
For example, a memory may include a read only memory (ROM), random
access memory (RAM), magnetic disk storage media, optical storage
media, flash memory devices, or any other volatile or non-volatile
memory storage device. Code or instructions contained thereon can
be represented by carrier wave signals, infrared signals, digital
signals, and by other like signals.
[0070] It should be understood that the foregoing description is
only illustrative of the present disclosure. Various alternatives
and modifications can be devised by those skilled in the art
without departing from the disclosure. For instance, any of the
augmented images described herein can be combined into a single
augmented image to be displayed to a clinician. Accordingly, the
present disclosure is intended to embrace all such alternatives,
modifications and variances. The embodiments described with
reference to the attached drawing FIGS. are presented only to
demonstrate certain examples of the disclosure. Other elements,
steps, methods and techniques that are insubstantially different
from those described above and/or in the appended claims are also
intended to be within the scope of the disclosure.
* * * * *