U.S. patent application number 14/926963 was filed with the patent office on 2015-10-29 and published on 2016-05-12 as publication number 20160128783 for a surgical navigation system with one or more body borne components and method therefor. The applicant listed for this patent is INTELLIJOINT SURGICAL INC. Invention is credited to ARMEN GARO BAKIRTZIAN, RICHARD TYLER FANSON, ANDRE NOVOMIR HLADIO.
Publication Number: 20160128783
Application Number: 14/926963
Family ID: 55856304
Publication Date: 2016-05-12
Filed Date: 2015-10-29
United States Patent Application 20160128783
Kind Code: A1
HLADIO; ANDRE NOVOMIR; et al.
May 12, 2016
SURGICAL NAVIGATION SYSTEM WITH ONE OR MORE BODY BORNE COMPONENTS AND METHOD THEREFOR
Abstract
A system for performing a navigated surgery comprises a first
target attached to a patient at a surgical site and a second target
at the surgical site. An optical sensor is coupled to the user and
detects the first target and second target simultaneously in a
working volume of the sensor. An intra-operative computing unit
(ICU) receives sensor data concerning the first target and second
target, calculates a relative pose and provides display
information. The sensor can be handheld, body-mounted or head-mounted, and communicate wirelessly with the ICU. The sensor may also be mountable on a fixed structure (e.g. proximate thereto) with the working volume in alignment with the surgical site. The ICU may receive user input via the sensor, where the user input is at least one of sensor motions, voice commands, and gestures presented to the optical sensor by the user. The display information may be presented via a heads-up or other display unit.
Inventors: HLADIO; ANDRE NOVOMIR; (ON, CA); BAKIRTZIAN; ARMEN GARO; (ON, CA); FANSON; RICHARD TYLER; (ON, CA)

Applicant:
Name | City | State | Country | Type
INTELLIJOINT SURGICAL INC. | WATERLOO | | CA |

Family ID: 55856304
Appl. No.: 14/926963
Filed: October 29, 2015
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
62072041 | Oct 29, 2014 |
62072030 | Oct 29, 2014 |
62084891 | Nov 26, 2014 |
62072032 | Oct 29, 2014 |
Current U.S. Class: 600/424

Current CPC Class: A61B 2034/2048 20160201; A61B 5/6847 20130101; A61B 2034/2068 20160201; A61B 2090/364 20160201; A61B 2090/061 20160201; A61B 2034/105 20160201; G06T 2207/30008 20130101; A61B 34/10 20160201; A61B 2034/2057 20160201; A61B 2034/2072 20160201; A61B 34/20 20160201; A61B 2090/067 20160201; G06T 7/74 20170101; A61B 46/10 20160201; A61B 2090/3983 20160201; A61B 2090/3937 20160201; A61B 2090/373 20160201; G06T 7/337 20170101; A61B 17/1703 20130101; A61B 90/30 20160201; A61B 90/361 20160201; A61B 2090/064 20160201; A61B 90/06 20160201; A61B 2034/2055 20160201; A61B 2034/2065 20160201

International Class: G06F 19/00 20110101 G06F019/00
Claims
1. A system for performing a navigated surgery at a surgical site
of a patient, the system comprising: a first target configured to
be attached to the patient at the surgical site; a second target at
the surgical site; a sensor configured to be coupled to a user, the
sensor comprising an optical sensor configured to detect the first
target and second target simultaneously; and an intra-operative
computing unit (ICU) configured to: receive, from the sensor,
sensor data concerning the first target and second target;
calculate the relative pose between the first target and second
target; and based on the relative pose, provide display information
to a display unit.
2. The system of claim 1 wherein the second target is a static
reference target.
3. The system of claim 1 wherein the second target is attached to
one of: a surgical instrument; a bone cutting guide; and a
bone.
4. The system of claim 1, wherein the sensor is configured to be at
least one of: handheld; body-mounted; and head-mounted.
5. The system of claim 1, wherein a sensor working volume for the
sensor is in alignment with a field of view of the user.
6. The system of claim 1, wherein the sensor communicates
wirelessly with the ICU.
7. The system of claim 1, wherein the sensor is communicatively
connected by wire to a sensor control unit and the sensor control
unit is configured to wirelessly communicate with the ICU.
8. The system of claim 1, wherein the sensor is further configured
to be mountable on a fixed structure.
9. The system of claim 1, wherein the ICU is further configured to
present, via the display unit, where the targets are with respect
to the sensor field of view.
10. The system of claim 9, wherein the ICU is further configured to
present, via the display unit, an optical sensor video feed from
the optical sensor.
11. The system of claim 9, wherein the ICU is further configured to
receive user input via the sensor by at least one of: receiving
motions of the sensor, where the sensor has additional sensing
capabilities to sense motions; receiving voice commands, where the
sensor further comprises a microphone; and receiving gestures
presented to the optical sensor by the user, the gestures being
associated with specific commands.
12. The system of claim 1 further comprising a display unit wherein
the display unit is further configured to be positionable within a
field of view of the user while the optical sensor is detecting the
first target and second target.
13. The system of claim 12 wherein the display unit is a
surgeon-worn heads up display.
14. A computer-implemented method for performing a navigated
surgery at a surgical site of a patient, the method comprising:
receiving, by at least one processor of an intra-operative
computing unit (ICU), sensor data from a sensor where the sensor
data comprises information for calculating the relative pose of a
first target and a second target, wherein the sensor is coupled to
a user and comprises an optical sensor configured to detect the
first target and second target simultaneously, and wherein the
first target is attached to the patient at the surgical site and
the second target is located at the surgical site; calculating, by
the at least one processor, the relative pose between the first
target and second target; and based on the relative pose,
providing, by at least one processor, display information to a
display unit.
15. The method of claim 14, wherein a sensor working volume of the
sensor is in alignment with a field of view of the user and the
first target and second target are in the sensor working
volume.
16. The method of claim 14, further comprising receiving, by the at
least one processor, further sensor data from the sensor, wherein
the sensor is attached to a fixed structure such that the sensor
volume is aligned with the surgical site when the sensor is
attached.
17. The method of claim 14 further comprising, receiving, by the at
least one processor, user input from the sensor for invoking the at
least one processor to perform an activity of the navigated
surgery.
18. The method of claim 17 wherein the user input comprises a
gesture sensed by the sensor.
19. A method comprising: aiming an optical sensor, held by the
hand, at a surgical site having two targets at the site and within
a working volume of the sensor, one of the two targets attached to
patient anatomy, wherein the optical sensor is in communication
with a processing unit configured to determine a relative position
of the two targets and provide display information, via a display
unit, pertaining to the relative position; and receiving the
display information via the display unit.
20. The method of claim 19, further comprising providing to the
processing unit user input via the sensor for invoking the
processing unit to perform an activity of the navigated surgery.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. provisional
application No. 62/072,041 titled "Systems, Methods and Devices for
Anatomical Registration and Surgical Localization" and filed on
Oct. 29, 2014, the entire contents of which are incorporated herein
by reference.
[0002] This application claims priority to U.S. provisional
application No. 62/072,030 titled "Devices including a surgical
navigation camera and systems and methods for surgical navigation"
and filed on Oct. 29, 2014, the entire contents of which are
incorporated herein by reference.
[0003] This application claims priority to U.S. provisional
application No. 62/084,891 titled "Devices, systems and methods for
natural feature tracking of surgical tools and other objects" and
filed on Nov. 26, 2014, the entire contents of which are
incorporated herein by reference.
[0004] This application claims priority to U.S. provisional
application No. 62/072,032 titled "Devices, systems and methods for
reamer guidance and cup sealing" and filed on Oct. 29, 2014, the
entire contents of which are incorporated herein by reference.
FIELD
[0005] The present application relates to computer-assisted surgery and surgical navigation systems where one or more targets, and the one or more objects to which the targets are attached, are tracked by an optical sensor, such as one borne by the body of the user (e.g. hand, head, etc.). The present application further relates to gestural control for surgical navigation systems.
BACKGROUND
[0006] The field of computer-assisted surgery (or "computer
navigation") creates systems and devices to provide a surgeon with
positional measurements of objects in space to allow the surgeon to
operate more precisely and accurately. Existing surgical navigation
systems utilize binocular cameras as optical sensors to detect
targets attached to objects within a working volume. The binocular
cameras are part of large and expensive medical equipment systems.
The cameras are affixed to medical carts with various computer
systems, monitors, etc. The binocular-based navigation systems are
located outside a surgical sterile field, and can localize (i.e.
measure the pose of) targets within the sterile field. There are
several limitations to existing binocular-based navigation systems,
including line-of-sight disruptions between the cameras and the
objects, ability to control computer navigation software, cost, and
complexity.
BRIEF SUMMARY
[0007] In one aspect, a system is disclosed for performing a
navigated surgery. The system comprises a first target attached to
a patient at a surgical site and a second target at the surgical
site. An optical sensor is coupled to the user and detects the
first target and second target simultaneously in a working volume
of the sensor. An intra-operative computing unit (ICU) receives
sensor data concerning the first target and second target,
calculates a relative pose and provides display information to a
display unit. The sensor can be handheld, body-mounted or
head-mounted, and communicate wirelessly with the ICU. The sensor
may also be mountable on a fixed structure (e.g. proximate thereto)
with the working volume in alignment with the surgical site. The
ICU may receive user input via the sensor, where the user input is
at least one of sensor motions, voice commands, and gestures
presented to the optical sensor by the user. The display
information may be presented via a heads-up or other display
unit.
[0008] There is provided a system for performing a navigated
surgery at a surgical site of a patient where the system comprises:
a first target configured to be attached to the patient at the
surgical site; a second target at the surgical site; a sensor
configured to be coupled to a user, the sensor comprising an
optical sensor configured to detect the first target and second
target simultaneously; and an intra-operative computing unit (ICU)
configured to: receive, from the sensor, sensor data concerning the
first target and second target; calculate the relative pose between
the first target and second target; and based on the relative pose,
provide display information to a display unit.
[0009] The second target may be a static reference target. The
second target may be attached to one of: a surgical instrument; a
bone cutting guide; and a bone.
[0010] The sensor may be configured to be at least one of:
handheld; body-mounted; and head-mounted.
[0011] A sensor working volume for the sensor may be in alignment
with a field of view of the user.
[0012] The sensor may communicate wirelessly with the ICU.
[0013] The sensor may be communicatively connected by wire to a
sensor control unit and the sensor control unit is configured to
wirelessly communicate with the ICU.
[0014] The sensor may be further configured to be mountable on a
fixed structure.
[0015] The ICU may be further configured to present, via the
display unit, where the targets are with respect to the sensor
field of view. The ICU may be further configured to present, via
the display unit, an optical sensor video feed from the optical
sensor. The ICU may be further configured to receive user input via
the sensor by at least one of receiving motions of the sensor,
where the sensor has additional sensing capabilities to sense
motions; receiving voice commands, where the sensor further
comprises a microphone; and receiving gestures presented to the
optical sensor by the user, the gestures being associated with
specific commands.
[0016] The system may further comprise a display unit wherein the
display unit is further configured to be positionable within a
field of view of the user while the optical sensor is detecting the
first target and second target. The display unit may be a
surgeon-worn heads up display.
[0017] There is provided a computer-implemented method for
performing a navigated surgery at a surgical site of a patient. The
method comprises receiving, by at least one processor of an
intra-operative computing unit (ICU), sensor data from a sensor
where the sensor data comprises information for calculating the
relative pose of a first target and a second target, wherein the
sensor is coupled to a user and comprises an optical sensor
configured to detect the first target and second target
simultaneously, and wherein the first target is attached to the
patient at the surgical site and the second target is located at
the surgical site; calculating, by the at least one processor, the
relative pose between the first target and second target; and based
on the relative pose, providing, by at least one processor, display
information to a display unit.
[0018] In this method a sensor working volume of the sensor may be
in alignment with a field of view of the user and the first target
and second target are in the sensor working volume.
[0019] The method may further comprise receiving, by the at least
one processor, further sensor data from the sensor, wherein the
sensor is attached to a fixed structure such that the sensor volume
is aligned with the surgical site when the sensor is attached.
[0020] The method may further comprise, receiving, by the at least
one processor, user input from the sensor for invoking the at least
one processor to perform an activity of the navigated surgery. The
user input may comprise a gesture sensed by the sensor.
[0021] There is provided a method comprising: aiming an optical
sensor, held by the hand, at a surgical site having two targets at
the site and within a working volume of the sensor, one of the two
targets attached to patient anatomy, wherein the optical sensor is
in communication with a processing unit configured to determine a
relative position of the two targets and provide display
information, via a display unit, pertaining to the relative
position; and receiving the display information via the display
unit. This method may further comprise providing to the processing
unit user input via the sensor for invoking the processing unit to
perform an activity of the navigated surgery.
[0022] Additional objects and advantages of the disclosed
embodiments will be set forth in part in the description that
follows, and in part will be obvious from the description, or may
be learned by practice of the disclosed embodiments. The objects
and advantages of the disclosed embodiments will be realized and
attained by means of the elements and combinations particularly
pointed out in the appended claims.
[0023] It is to be understood that both the foregoing general
description and the following detailed description are exemplary
and explanatory only and are not restrictive of the disclosed
embodiments as claimed.
[0024] The accompanying drawings constitute a part of this
specification. The drawings illustrate several embodiments of the
present disclosure and, together with the description, serve to
explain the principles of the disclosed embodiments as set forth in
the accompanying claims.
BRIEF DESCRIPTION OF DRAWINGS
[0025] Embodiments disclosed herein will be more fully understood
from the detailed description and the corresponding drawings, which
form a part of this application, and in which:
[0026] FIG. 1 illustrates use of a surgical navigation system in
Total Knee Arthroplasty (TKA);
[0027] FIG. 2 shows in a block diagram an interaction between
various components of a surgical navigation system;
[0028] FIG. 3 shows use of a sensor with multiple targets in
accordance with an embodiment;
[0029] FIG. 4a shows use of a sensor in a handheld mode as an
example for clarity;
[0030] FIG. 4b shows use of a sensor in a non-handheld mode as an
example for clarity;
[0031] FIG. 5 illustrates, as an example for clarity, use of the
system where a center of the working volume of a sensor attached to
a surgeon is substantially centered with respect to a site of
surgery;
[0032] FIG. 6 shows a gesture of a hand sensed by a sensor;
[0033] FIG. 7 illustrates an image, as seen by a sensor when
configured to sense a gesture of a hand;
[0034] FIG. 8 illustrates motions of a body detected by a sensor
attached to the body in accordance with an embodiment;
[0035] FIG. 9 shows a sensor communicating wirelessly with an
intra-operative computing unit (ICU);
[0036] FIG. 10 shows a sensor communicating with an ICU through a
sensor control unit (SCU);
[0037] FIG. 11 shows a sensor attached to a surgeon's head such
that the working volume of the sensor is within the surgeon's field
of view;
[0038] FIG. 12 shows a sensor integrated with heads-up display
glasses;
[0039] FIG. 13 shows heads-up display glasses (with integrated
sensor) connected by a cable to a SCU mounted on a surgeon;
[0040] FIG. 14 shows a use of a static reference target as an
example for clarity; and
[0041] FIG. 15 shows a display of an ICU depicting a video feed as
seen by an optical sensor, when viewing the targets.
[0042] It will be appreciated that for simplicity and clarity of
illustration, elements shown in the figures have not necessarily
been drawn to scale. For example, the dimensions of some of the
elements may be exaggerated relative to other elements for
clarity.
DEFINITIONS
[0043] Field of View (FOV): The angular span (horizontally and
vertically) that an optical sensor (e.g. camera) is able to
view.
[0044] Degrees of Freedom (DOF): independent parameters used to
describe the pose (position and orientation) of a rigid body. There
are up to 6 DOF for a rigid body: 3 DOF for position (i.e. x, y, z
position), and 3 DOF for orientation (e.g. roll, pitch, yaw).
[0045] Pose: The position and orientation of an object in up to 6
DOF in space.
[0046] Working Volume: A 3D volume relative to an optical sensor
within which valid poses may be generated. The Working Volume is a
subset of the optical sensor's field of view and is configurable
depending on the type of system that uses the optical sensor.
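For illustration only, and not as part of the disclosed embodiments, a pose in up to 6 DOF is commonly represented in software as a 4x4 homogeneous transform. The following minimal Python/NumPy sketch (all names hypothetical) builds such a transform from the 3 position and 3 orientation DOF defined above:

    import numpy as np

    def pose_matrix(x, y, z, roll, pitch, yaw):
        """Build a 4x4 homogeneous transform from 3 position and 3 orientation DOF."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx   # orientation (3 DOF)
        T[:3, 3] = [x, y, z]       # position (3 DOF)
        return T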
DETAILED DESCRIPTION
[0047] In navigated surgery, targets are affixed to a patient's
anatomy, as well as to surgical instruments, such that a relative
position and orientation between the sensor and the target may be
measured. Total Knee Arthroplasty (TKA) will be used as an example
in this disclosure to describe a navigated surgical procedure. As
illustrated in FIG. 1, a system 100 is used for navigation in TKA.
Targets 102 are affixed to a patient's anatomy (i.e. a femur bone
104 and a tibia bone 106 that are part of a knee joint) and located
within a field of view 108 of a sensor 110 that is held by a hand
112. The sensor 110 is connected to an intra-operative computing
unit (ICU) by a cable 113. Additionally, targets 102 are affixed to
surgical instruments (in this case, a distal femoral cutting guide
114). In navigated TKA, a relative pose between the femur 104 and
tibia 106 may allow a surgeon to track the kinematics of a knee
joint during a range-of-motion test. Furthermore, a relative pose
between the distal femur cutting guide 114 and the femur 104 may
allow a surgeon to precisely align the cutting guide 114 with
respect to the femur 104 to achieve desired cut angles. The patient
rests on an operating table 116. Applicant's U.S. Pat. No.
9,138,319 B2 issued Sep. 22, 2015 and entitled "Method and System
for Aligning a Prosthesis During Surgery" describes operations and
methods to register components (such as an optical sensor) to a patient's bone (for example, a pelvis), measure relative poses and
track objects in a Total Hip Arthroplasty (THA) scenario, the
contents of which are incorporated herein by reference. Similar
methods and systems as described therein are applicable to TKA.
Sensor System Architecture
[0048] The architecture of a surgical navigation system is shown in
FIG. 2. Multiple targets are affixed to objects (anatomy,
instruments, etc.) within a sterile field. The targets provide
optical positional signals, either through emitting or reflecting
optical energy. The positional signals are received by a sensor,
comprising an optical sensor. The optical sensor preferably
includes a monocular camera (comprising a lens, an imager etc.),
illuminating components, and various optics as applicable (e.g. IR
filter, if the optical sensor operates in the IR spectrum). The
optical sensor is configured to detect a target. The optical sensor
is configured and focused in order to function according to the
relatively large working volume. This is unlike endoscopic
applications, where an endoscope is configured to view a scene
inside a body cavity. Where the scene is very close (e.g. <5 cm)
to the endoscope optics. Furthermore, endoscopes are used to
visualize tissue, whereas the present optical sensor is used to
measure relative pose between targets and objects.
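The application does not prescribe how the monocular camera recovers a 6-DOF target pose. Purely as an illustrative sketch, one standard approach is to detect the target's optical markers in the image and solve a Perspective-n-Point problem against the target's known marker geometry, here via OpenCV; the marker layout and all names are assumptions:

    import cv2
    import numpy as np

    # Known 3D marker positions in the target frame (mm); geometry is hypothetical.
    MARKER_POINTS = np.array([[0, 0, 0], [30, 0, 0], [0, 40, 0], [30, 40, 10]],
                             dtype=np.float64)

    def localize_target(image_points, camera_matrix, dist_coeffs):
        """Estimate a target's pose in the camera frame.

        image_points: (4, 2) float64 array of detected marker centroids (pixels),
        ordered to correspond with MARKER_POINTS.
        """
        ok, rvec, tvec = cv2.solvePnP(MARKER_POINTS, image_points,
                                      camera_matrix, dist_coeffs)
        if not ok:
            return None                     # target not localizable this frame
        R, _ = cv2.Rodrigues(rvec)          # rotation vector -> 3x3 rotation matrix
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = tvec.ravel()
        return T                            # 4x4 pose of target relative to sensor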
[0049] The sensor may also include other sensing components. For
example, positional sensing components, such as accelerometers,
gyroscopes, magnetometers, IR detectors, etc. may be used to
supplement, augment or enhance the positional measurements obtained
from the optical sensor. Additionally, the sensor may be capable of
receiving user input, for example, through buttons, visible
gestures, motions (e.g. determined via an accelerometer), and
microphones to receive voice commands etc. The sensor may include
indicators that signal the state of the sensor, for example, a green LED to signify the sensor is on, or a red LED to signify an error.
[0050] The sensor may be in communication with a sensor control
unit (SCU). The SCU is an intermediate device to facilitate
communication between the sensor and an intra-operative computing
unit (ICU).
[0051] The ICU may comprise a laptop, workstation, or other
computing device having at least one processing unit and at least
one storage device, such as memory storing software (instructions
and/or data) as further described herein to configure the execution
of the intra-operative computing unit. The ICU receives positional
data from the sensor via the SCU, processes the data and computes
the pose of targets that are within the working volume of the
sensor. The ICU also performs any further processing associated
with the other sensing and/or user input components. The ICU may
further process the measurements to express them in
clinically-relevant terms (e.g. according to anatomical
registration). The ICU may further implement a user interface to
guide a user through a surgical workflow. The ICU may further
provide the user interface, including measurements, to a display
unit for displaying to a surgeon or other user.
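As a minimal sketch of the ICU's central computation, under the assumption that poses are represented as 4x4 transforms as sketched earlier (nothing here is prescribed by the application), the relative pose between two targets follows from their individual poses in the sensor frame, and the sensor's own pose cancels out of the product:

    import numpy as np

    def invert_pose(T):
        """Invert a 4x4 homogeneous transform without a general matrix inverse."""
        Tinv = np.eye(4)
        R = T[:3, :3]
        Tinv[:3, :3] = R.T
        Tinv[:3, 3] = -R.T @ T[:3, 3]
        return Tinv

    def relative_pose(T_sensor_a, T_sensor_b):
        """Pose of target B expressed in the frame of target A.

        Because both targets are measured from the same (possibly moving)
        sensor, the sensor's own pose cancels out of this product.
        """
        return invert_pose(T_sensor_a) @ T_sensor_b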
Handheld Camera Embodiment
[0052] In one embodiment, a sensor is configured for handheld use.
As depicted in FIG. 3, the sensor 110 may localize two or more
targets 102, in order to compute a relative pose between the
targets 102. The targets 102 are attached to objects 302 and the
relative pose between the objects 302 can also be calculated. In
relation to navigated TKA, the objects could be the femur 104 and
the tibia 106, or the femur 104 and the cutting guide 114.
[0053] Furthermore, both handheld and non-handheld modes may be
supported for use of the sensor within the same surgery. This is
illustrated in FIG. 4a and FIG. 4b. For example, during a TKA, it
may be preferable to leave the sensor 110 mounted to a fixed
structure 402 for the majority of the procedure with the use of a
releasable mechanical connection 404, such as the one described in
U.S. 20140275940 titled "System and method for intra-operative leg
position measurement", the entire contents of which are
incorporated herein by reference. However, during a step to localize a malleolus
406 of the patient's anatomy, the sensor 110 can be removed from
its fixed structure 402 if the malleolus 406 is outside the working
volume 408 of the optical sensor in the sensor 110. The sensor 110
can be held in a user's hand 112 to bring the malleolus 406 within
its working volume 408, as shown in FIG. 4b.
[0054] This system configuration has several advantages. This
system does not have a wired connection to objects being tracked.
This system further allows the sensor to be used for pose
measurements of the targets at specific steps in the workflow, and
set aside when not in use. When a target is outside of or
obstructed from the working volume of the sensor, while the sensor
is attached to a static/fixed position, the position of the sensor
can be moved, using the sensor in handheld mode, to include the
target in its working volume. The sensor may comprise user input
components (e.g. buttons). This user input may be a part of the
surgical workflow, and may be communicated to the ICU. Furthermore,
indicators on the sensor could provide feedback to the surgeon
(e.g. status LED's indicating that a target is being detected by
the sensor). Also, in handheld operation, it may be preferable for
a surgical assistant to hold the sensor to align the sensor's
working volume with the targets in the surgical site, while the
surgeon is performing another task.
Body Mounted Embodiment
[0055] As illustrated in FIG. 5, in addition to handheld
configurations, the sensor 110 may be mounted onto a surgeon 502
(or other user) using a sensor mounting structure 504. The sensor
mounting structure 504 may allow the sensor to be attached to a
surgeon's forehead (by way of example, shown as a headband). The
sensor 110 is preferably placed on the sensor mounting structure
504 such that the working volume 408 of the sensor 110 and the
surgeon's visual field of view 506 are substantially aligned, i.e.
the majority of the working volume 408 overlaps with the field of view 506 of the surgeon 502. In such a configuration, if the surgeon
502 can see the targets 102, the sensor 110 (while mounted on the
sensor mounting structure 504) will likely be able to do so as
well. This configuration allows the surgeon to rapidly and
intuitively overcome line-of-sight disruptions between the sensor
110 and the targets 102. In this configuration, it is desirable
that the optical sensor in the sensor 110 has a working volume 408 that is approximately centered about the nominal distance between a surgeon's forehead (or other mounting location) and the surgical site. The working volume 408 of the sensor 110 is preferably large
enough to accommodate a wide range of feasible relative positions
between the mounting position on the surgeon and the surgical
site.
Description of Various User Input Options
[0056] In the previously described embodiments, the sensor in the
surgical navigation system is mounted on the surgeon's body. For
reasons of sterility and ergonomics, it may not be feasible to use
buttons on the sensor in order to interact with the intra-operative
computing unit. Hand gestures may be sensed by the optical system
and used as user input. For example, if the sensor can detect a
user's hand in its field of view, the system comprising the sensor,
SCU, ICU, and a display unit, may be able to identify predefined
gestures that correspond to certain commands within the surgical
workflow. For example, waving a hand from left to right may
correspond to advancing the surgical workflow on the display unit
of the ICU; snapping fingers may correspond to saving a measurement
that may also be displayed on the display unit of the ICU; waving a
hand back and forth may correspond to cancelling an action within
the workflow; etc. According to FIG. 6, a user interacts with the
system using a hand wave gesture within the field of view 108 of
the sensor 110. These gestures are recognized by the sensor and by
image processing software executed by the ICU. When the sensor is mounted on the
surgeon's forehead, the hand gestures may be performed by another
user within the surgery and need not necessarily be performed by
the surgeon. Furthermore, using hand gestures as user input is not
limited to a system where the sensor is placed on the surgeon's
body. It is also possible to use gesture control in any system
where the sensor is mounted on the operating table, on the patient,
hand-held, etc.
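A minimal sketch of such gesture-to-command mapping, assuming a hand detector that reports the hand centroid per frame (the table entries, names and classification rule are illustrative assumptions, not the application's method):

    # Hypothetical gesture-to-command table; the application names the
    # gestures but does not prescribe an implementation.
    GESTURE_COMMANDS = {
        "wave_left_to_right": "advance_workflow",
        "wave_back_and_forth": "cancel_action",
        "snap_fingers": "save_measurement",
    }

    def classify_wave(centroid_xs):
        """Classify a wave from the hand centroid's x-positions over recent frames."""
        dx = [b - a for a, b in zip(centroid_xs, centroid_xs[1:])]
        if dx and all(d > 0 for d in dx):
            return "wave_left_to_right"      # monotonic left-to-right sweep
        if any(d > 0 for d in dx) and any(d < 0 for d in dx):
            return "wave_back_and_forth"     # direction reversals
        return None                          # no recognized gesture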
[0057] In FIG. 7, an image 702 as seen by the optical sensor configured to sense a gesture of a hand is illustrated. In addition to gesture control, the ICU may receive commands via voice control (e.g. via a microphone in the
sensor or in the ICU itself). In another embodiment, as shown in
FIG. 8, body motions (e.g. head nods 802 or shakes 804) can be
detected by the sensor 110 with additional sensing components, e.g.
an embedded accelerometer. The body motions are sensed and
interpreted to send signals to the ICU which may communicate
wirelessly with the sensor 110. For example, nodding 802 may signal
the surgical workflow to advance, shaking 804 may signal the surgical
workflow to go back one step or cancel an action, etc.
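A sketch of how such motion input might be classified from the embedded accelerometer; the axis conventions and threshold are assumptions for illustration only:

    import numpy as np

    def classify_head_motion(accel_samples, threshold=2.0):
        """Map an accelerometer window to a workflow signal (illustrative only).

        accel_samples: (N, 3) array of x/y/z accelerations; under the assumed
        mounting, a nod oscillates mainly on the vertical axis and a shake
        mainly on the lateral axis.
        """
        a = np.asarray(accel_samples, dtype=float)
        energy = a.std(axis=0)               # per-axis oscillation strength
        if energy.max() < threshold:
            return None                      # no deliberate motion
        axis = int(energy.argmax())
        return {1: "advance_workflow",       # vertical axis dominant: nod
                0: "cancel_or_back"}.get(axis)  # lateral axis dominant: shake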
Wireless Sensor Architecture
[0058] Reference is now made to FIG. 9. When the sensor is mounted
on a surgeon 502, the sensor may be wireless to allow the surgeon
502 to move freely around the operating room. In one embodiment,
the sensor 110 communicates wirelessly 902 with the ICU 904, the
ICU configured to receive wireless signals 902 from the sensor.
[0059] Reference is now made to FIG. 10. In another embodiment, the
sensor 110 is connected by wire to a Sensor Control Unit (SCU)
1002. The SCU 1002 further communicates wirelessly 902 with the ICU
904. The SCU 1002 contains a mounting structure 504 such that it
may be mounted on the surgeon 502. For example, the sensor 110 may
be mounted to the forehead of the surgeon 502, and the SCU 1002 may
be attached to the belt of the surgeon 502 under a surgical gown.
The SCU 1002 may have a battery to power the sensor, as well as its
own electronics. The SCU 1002 may have an integrated battery, and
the SCU 1002 may be charged between surgeries, such as on a
charging dock. Multiple SCUs may be available such that if an SCU
loses power during surgery, it can be swapped out for a fully
charged SCU. Where a wireless sensor is implemented, any wireless
protocol/technology may be used (e.g. WiFi, Bluetooth, ZigBee,
etc).
[0060] It will be appreciated that a sensor mounted on a surgeon's
body need not necessarily be sterile. For example, the surgeon's
forehead is not sterile. This is advantageous since a sensor may
not be made from materials that can be sterilized, and by removing
the sensor from the sterile field, there is no requirement to
ensure that the sensor is sterile (e.g. through draping,
autoclaving, etc.).
Display Positioning
[0061] Reference is now made to FIG. 11. In one embodiment, the sensor 110 is attached by a mounting structure 504 to the forehead
of the surgeon 502, and the working volume 408 of the sensor 110 is
substantially contained within the surgeon's field of view 506. The
sensor 110 is configured to detect targets 102 within its working
volume 408 such that the relative pose between the targets 102 may
be calculated by the ICU 904. The ICU 904 may comprise a display
unit. In this embodiment, the display unit 1102 is also included in
the surgeon's field of view 506. This is advantageous where
real-time measurements based on the pose of the targets 102 are
provided to the surgeon 502 via the display unit 1102. The targets
102 and the display unit 1102 are simultaneously visible to the
surgeon 502. The display unit 1102 may be positioned within the
surgeon's field of view 506, for example, by being mounted to a
mobile cart.
[0062] In another embodiment, the display unit 1102 is a heads-up
display configured to be worn by the surgeon 502, and be integrated
with the sensor 110. The heads-up display is any transparent
display that does not require the surgeon to look away from the
usual field of view. For example, the heads-up display may be
projected onto a visor that surgeons typically wear during
surgery. A head-mounted sensor allows the surgeon to ensure that
the targets are within the working volume of the sensor without
additionally trying to look at a static display unit within the
operating room, and with a heads-up display, the display unit is
visible to the surgeon regardless of where they are looking.
[0063] In another embodiment, as illustrated in FIG. 12, the sensor
110 is integrated with heads-up display glasses 1202 (such as the
Google Glass™ product from Google Inc.). Since the sensor 110 and
the display unit 1102 are integrated into one device, there is no
longer a need for a dedicated display unit, thus reducing cost and
complexity. This embodiment is also advantageous since by default,
it places the sensor's working volume within the surgeon's field of
view.
[0064] In another embodiment, as illustrated in FIG. 13, the
heads-up display glasses 1202 (with integrated sensor) are
connected by a cable to a SCU 1002, the SCU 1002 attached to a
surgeon 502. In this embodiment, the SCU 1002 may also comprise the
ICU (that is, it may perform the intra-operative computations to
generate display information to be displayed on the heads-up
display glasses). The SCU may include a battery, e.g. a
rechargeable battery. In this embodiment, the SCU need not
communicate wirelessly with any other device, since the entire
computational/electronic system is provided by the SCU, heads-up
display glasses and sensor. That is, the entire surgical
localization system is body-worn (with the exception of the targets
and their coupling means).
Static Reference Target to Overcome Limitations of Non-Fixed
Sensor
[0065] In a body-mounted or handheld camera configuration, the
sensor is not required to be stationary when capturing relative
pose between multiple targets. However, some measurements may be
required to be absolute. For example, when calculating the center
of rotation (COR) of a hip joint, a fixed sensor may localize a
target affixed to the femur as it is rotated about the hip COR. The
target is constrained to move along a sphere, the sphere's center
being the hip COR. Since the sensor is in a fixed location, and the
hip COR is in a fixed location, the pose data from rotating the
femur may be used to calculate the pose of the hip COR relative to
the target on the femur. In TKA, the pose of the hip COR is useful
for determining the mechanical axis of a patient's leg.
[0066] In an embodiment where the sensor may not be fixed, a second
stationary target is used as a static reference to compensate for
the possible motion of the sensor. In FIG. 14, a target 102 is
affixed to a femur 104 that has a static hip COR 1401. The sensor
is not necessarily stationary, as it may be body-mounted or
hand-held. A static reference target 1402 is introduced within the
working volume 408 of the sensor 110. The static reference target
1402 must fulfill two conditions: 1) it must remain stationary
during steps executed to measure the hip COR 1401, and 2) it must
be within the working volume 408 of the sensor 110 simultaneously
with the target 102 attached to the femur 104 while the hip COR
1401 is measured. In this example, the static reference target 1402
may be simply placed on the OR bed 116 or mounted to a fixed docking structure 402. To measure the hip COR 1401, the femur is rotated
about a pivot point of the hip joint while the sensor 110 captures
poses of the target 102 and poses of the static reference target
1402. The poses of the static reference target 1402 are used to
compensate for any movement of the sensor 110 as it is used in
hand-held or body-mounted mode. This embodiment is described relative to measuring hip COR in the context of TKA; however, this embodiment is meant to be an example for clarity. There are many other embodiments that will be evident to those skilled in the art (e.g. tool calibration, verification of the location of a tip of a tool, etc.).
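As a sketch of this compensation and the subsequent computation (reusing the invert_pose helper sketched earlier; the linear least-squares sphere fit is a standard technique and not one prescribed by the application), each femur-target position is first re-expressed in the static reference frame, cancelling any sensor motion, and the hip COR is then the center of the best-fit sphere through those points:

    import numpy as np

    def fit_sphere_center(points):
        """Linear least-squares sphere fit; returns the center (here, the COR)."""
        P = np.asarray(points, dtype=float)
        A = np.hstack([2.0 * P, np.ones((len(P), 1))])  # unknowns: cx, cy, cz, k
        b = (P ** 2).sum(axis=1)                        # k absorbs r^2 - |c|^2
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        return sol[:3]

    def hip_cor(T_sensor_ref_frames, T_sensor_femur_frames):
        """Hip COR in the static reference frame from paired per-frame poses."""
        points = []
        for T_sr, T_sf in zip(T_sensor_ref_frames, T_sensor_femur_frames):
            T_ref_femur = invert_pose(T_sr) @ T_sf      # sensor motion cancels
            points.append(T_ref_femur[:3, 3])           # femur-target origin
        return fit_sphere_center(points)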
Working Volume Alignment Feedback
[0067] Where the surgeon has direct control over the working volume
of the optical sensor (i.e. in the body-mounted or handheld
configurations), it may be advantageous to display the video feed
as seen by the optical sensor to the surgeon via the display unit.
This visual feedback may allow the surgeon to see what the optical
sensor "sees", and may be useful in a) ensuring that the target(s)
are within the working volume of the sensor, and b) diagnosing any
occlusions, disturbances, disruptions, etc. that prevent the poses
of the targets from being captured by the ICU. For example, a
persistent optical sensor image feed may be displayed. This is
illustrated in FIG. 15, where a laptop (integrated ICU 904 and
display 1102) is shown to display a video feed 1502 (showing two
targets within the image) of the optical sensor. Alternative graphics may be shown in addition to or instead of the raw video
feed 1502 of the optical sensor. For example, simplified renderings
of the targets (e.g. represented as dots) may be displayed within a
window on the display instead of the video feed 1502. In another
example, a virtual overlay may be displayed on the raw video feed
1502 of the optical sensor, in which the overlay includes colours
and shapes overlaid on the target image.
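A sketch of such an overlay, assuming target poses already expressed in the camera frame and an OpenCV video pipeline (the drawing choices and names are hypothetical, not the application's design):

    import cv2
    import numpy as np

    def draw_target_overlay(frame, target_poses, camera_matrix, dist_coeffs):
        """Overlay simplified target renderings (dots and labels) on the video feed."""
        zero = np.zeros(3)                   # poses are already in the camera frame
        for name, T in target_poses.items():
            origin = T[:3, 3].reshape(1, 3)
            pts, _ = cv2.projectPoints(origin, zero, zero,
                                       camera_matrix, dist_coeffs)
            u, v = pts.ravel().astype(int)
            cv2.circle(frame, (u, v), 8, (0, 255, 0), -1)    # dot at target origin
            cv2.putText(frame, name, (u + 12, v), cv2.FONT_HERSHEY_SIMPLEX,
                        0.5, (0, 255, 0), 1)
        return frame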
[0068] Various embodiments have been described herein with
reference to the accompanying drawings. It will however, be evident
that various modifications and changes may be made thereto, and
additional embodiments may be implemented, without departing from
the broader scope of the disclosed embodiments as set forth in the
claims that follow.
* * * * *