U.S. patent number 10,506,952 [Application Number 15/682,093] was granted by the patent office on 2019-12-17 for a motion monitor.
This patent grant is currently assigned to BreatheVision Ltd. The grantee listed for this patent is BreatheVision Ltd. Invention is credited to Ditza Auerbach.
United States Patent 10,506,952
Auerbach
December 17, 2019

Motion monitor
Abstract
A system for monitoring the respiratory activity of a subject,
which comprises two or more signal generating elements being
inertial sensors, or light emitting elements, applied to the thorax
of a subject, for generating signals that are indicative of
displacement of the thorax of the subject throughout a
predetermined time period; a receiver for receiving the generated
signals during breathing motions of the subject; and one or more
computing devices in data communication with the receiver, for
analyzing the breathing motions. The one or more computing devices
is operable to generate, by the two or more signal generating
elements, signals that are indicative of displacement of the thorax
of the subject, throughout the predetermined time period; calculate
the current displacements and relative phases of the signal
generating elements throughout the predetermined time period; and
calculate, throughout the predetermined time period, the breathing
volume from the displacements and the relative phases.
Inventors: Auerbach; Ditza (Aseret, IL)
Applicant: BreatheVision Ltd. (Hof Ashkelon, IL)
Assignee: BreatheVision Ltd. (Hof Ashkelon, IL)
Family ID: 52992363
Appl. No.: 15/682,093
Filed: August 21, 2017
Prior Publication Data

Document Identifier: US 20170367625 A1
Publication Date: Dec 28, 2017
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number | Issue Date
15/135,797 | Apr 22, 2016 | 9,788,762 | Oct 17, 2017
PCT/IL2014/050916 | Oct 22, 2014 | |
61/895,000 | Oct 24, 2013 | |
62/020,649 | Jul 3, 2014 | |
Current U.S. Class: 1/1
Current CPC Class: A61B 5/0077 (20130101); A61B 5/113 (20130101); A61B 5/1101 (20130101); A61B 5/087 (20130101); A61B 5/091 (20130101); A61B 5/1128 (20130101); A61B 5/7257 (20130101); A61B 5/7282 (20130101); A61B 3/113 (20130101); A61B 5/1114 (20130101); A61B 5/0816 (20130101); A61B 5/4818 (20130101); A61B 5/1135 (20130101); A61B 5/1117 (20130101); A61B 5/4812 (20130101); A61B 90/39 (20160201); A61B 5/02444 (20130101); A61B 2560/0238 (20130101); A61B 2562/0219 (20130101); A61B 2090/3937 (20160201); A61B 2503/04 (20130101); A61B 2560/0214 (20130101); A61B 2090/3945 (20160201); A61B 5/6898 (20130101)
Current International Class: A61B 5/11 (20060101); A61B 5/024 (20060101); A61B 5/087 (20060101); A61B 3/113 (20060101); A61B 90/00 (20160101); A61B 5/08 (20060101); A61B 5/091 (20060101); A61B 5/00 (20060101); A61B 5/113 (20060101)
References Cited

U.S. Patent Documents
Foreign Patent Documents

0176467 | Oct 2001 | WO
2004049109 | Jun 2004 | WO
2006005021 | Jan 2006 | WO
2006047400 | May 2006 | WO
2009011643 | Jan 2009 | WO
2015059700 | Apr 2015 | WO
2017183039 | Oct 2017 | WO
Other References

International Search Report and Written Opinion dated Jul. 27, 2017 from the International Searching Authority re Application No. PCT/IL2017/050466, 15 pages.
International Preliminary Report on Patentability dated Nov. 1, 2018 from the International Bureau of WIPO re Application No. PCT/IL2017/050466, 10 pages.
International Search Report, Application No. PCT/IL2014/050916, completed Jan. 27, 2015; dated Jan. 29, 2015, 19 pages.
International Preliminary Report on Patentability, Application No. PCT/IL2014/050916, submitted Aug. 24, 2015; completed Mar. 13, 2016, 18 pages.
Ernst et al., "Correlation between external and internal respiratory motion: a validation study", Int J Comput Assist Radiol Surg. May 2012;7(3):483-92. doi: 10.1007/s11548-011-0653-6. Epub Aug. 19, 2011.
Dementhon et al., "Model-based object pose in 25 lines of code", International Journal of Computer Vision, Jun. 1995, vol. 15, Issue 1-2, pp. 123-141.
McClelland et al., "Respiratory motion models: a review", Med Image Anal. Jan. 2013;17(1):19-42. doi: 10.1016/j.media.2012.09.005. Epub Oct. 8, 2012.
European Search Report for Application 148546467.7-1657 / 3060118, PCT/IL2014/050916, completed Jun. 8, 2017; dated Jun. 16, 2017, 9 pages.
Lucas et al., "An iterative image registration technique with an application to stereo vision", Proceedings of Imaging Understanding Workshop, pp. 121-130, 1981.
Rubinstein et al., "Dictionaries for Sparse Representation Modeling", Proceedings of the IEEE 98(6), 2010.
Scharstein et al., "A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms", International Journal of Computer Vision 47(1/2/3), 2002.
http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example5.html.
Horn, "Obtaining shape from shading information", in P. Winston, editor, The Psychology of Computer Vision, McGraw-Hill, New York, 1975.
Allen, "Learning body shape models from real-world data", PhD Thesis, University of Washington, 2005.
Scholkopf et al., Learning with Kernels, MIT Press, 2002.
Enochson et al., "Programming and Analysis for Digital Time Series Data", U.S. Dept. of Defense, Shock and Vibration Info. Center, p. 142, 1968.
Scholkopf et al., "Estimating the support of a high-dimensional distribution", Neural Computation, 13(7), 2001.
International Search Report, PCT/IL2017/050466, completed Jul. 27, 2017; dated Jul. 27, 2017, 7 pages.
Primary Examiner: Tejani; Ankit D
Parent Case Text
This application is a continuation of U.S. application Ser. No.
15/135,797 filed on Apr. 22, 2016, which is a continuation-in-part
of PCT/IL2014/050916 filed on Oct. 22, 2014, which claims priority
from U.S. 62/020,649 filed on Jul. 3, 2014 and U.S. 61/895,000
filed on Oct. 24, 2013.
Claims
The invention claimed is:
1. A system for monitoring the respiratory activity of a subject,
comprising: a) two or more signal generating elements, wherein each
of said signal generating elements being a light emitting element
and/or an inertial sensor which generates a signal receivable by a
receiver, wherein each of said signal generating elements is
applied to the body of a subject, for generating signals that are
indicative of movement of the chest and abdomen of said subject
throughout a predetermined time period; b) a receiver for receiving
said generated signals during breathing motions of said subject;
and c) one or more computing devices in data communication with
said receiver, for analyzing said breathing motions, wherein said
one or more computing devices is operable to: i. generate, by said
two or more signal generating elements, signals that are indicative
of movements of the chest and abdomen of said subject, throughout
said predetermined time period; ii. calculate the current
amplitudes and relative phases of said signal generating elements
throughout said predetermined time period; and iii. calculate,
throughout said predetermined time period, the breathing volume
from said amplitudes and said relative phases.
2. A system according to claim 1, wherein the breathing volume is a
tidal volume.
3. A system according to claim 1, wherein the receiver comprises a
camera.
4. A system according to claim 1, wherein the inertial sensor
comprises an accelerometer.
5. A system according to claim 1, wherein each of said signal
generating elements comprising one or more light emitting sensors
which generate a signal receivable by said receiver, and one or
more inertial sensors, wherein said one or more computing devices
is operable to assign weights to said signals received from said
one or more inertial sensors, and to signals received from said
receiver, according to the noise in said received signals.
6. A system according to claim 5, wherein said analyzing of said
breathing motions by said one or more computing devices comprises
monitoring said breathing motion using the signals received from
said receiver when the signals from said inertial sensors are
noisy.
7. A system according to claim 5, wherein said analyzing of said
breathing motions by said one or more computing devices comprises
monitoring said breathing motion using the signals received from
said one or more inertial sensors when the signals from said one or
more light emitting sensors are obstructed.
8. A system according to claim 1, wherein said signal generating
elements are comprised in a single marker which is applied to the
body of a subject.
9. A system according to claim 1, wherein said signal generating
elements are comprised in two or more markers applied to one or
more regions of the body of a subject, each of said markers
comprises one or more of said signal generating elements.
10. A system according to claim 1, wherein said one or more of said
computing devices analyzes signals which are transmitted from said
receiver, wherein said receiver is positioned on a stationary frame
of reference and signals which are transmitted from said signal
generating elements are positioned on a movable frame of
reference.
11. A system according to claim 1, wherein said one or more
computing devices is operable to: define a breathing movement axis
in space for each of said signal generating elements; and
calculate, throughout said predetermined time period, including
only from movements during sub-periods of said predetermined time
period, the breathing volume from said movements, by detecting
signals which are received from said signal generating elements
along said movement axis.
12. A system according to claim 11, wherein the breathing volume is
a tidal breathing volume.
13. A system according to claim 11, wherein the receiver comprises
a camera.
14. A system according to claim 11, wherein the inertial element
comprises an accelerometer.
15. A system according to claim 11, wherein each of said signal
generating elements comprising one or more light emitting sensors
which generates a signal receivable by said receiver, and one or
more inertial sensors, wherein said one or more computing devices
is operable to assign weights to said signals received from said
one or more inertial sensor, and to signals received from said
receiver, according to the noise in said received signals.
16. A system according to claim 15, wherein said analyzing of said
breathing motions by said one or more computing devices comprises
monitoring said breathing motion using the signals received from
said receiver when the signals from said inertial sensors are
noisy.
17. A system according to claim 15, wherein said analyzing of said
breathing motions by said one or more computing devices comprises
monitoring said breathing motion using the signals received from
said one or more inertial sensors when the signals from said one or
more light emitting sensors are obstructed.
Description
FIELD OF THE INVENTION
The present invention relates to monitoring apparatus. More
particularly, the invention relates to image-based monitoring
apparatus for monitoring a variety of activities, such as human
breathing.
BACKGROUND OF THE INVENTION
Respiration rate is a vital sign that is monitored either
intermittently or continuously in a variety of situations,
including but not limited to during and after surgical
intervention. Continuous monitoring apparatus can be contact-based (abdominal belts, or air-flow sensors at the trachea or on a mask) or non-contact (e.g., pressure sensors under the mattress, or a Doppler sensor). Continuous monitoring of respiration beyond the respiratory rate (such as breathing patterns, breathing depth, minute ventilation and apnea) is nowadays achieved using contact and invasive sensors.
SUMMARY OF THE INVENTION
The present invention relates to a video system for non-contact
monitoring of the respiration properties through various measures
such as respiratory rate, respiratory effort, tidal volumes and
respiratory patterns. These properties are obtained under a wide
range of ambient conditions such as in the presence of subject
motion, subject out of bed, various ambient lighting conditions,
obstructions, etc. In addition reliable early warnings can be
issued based on such respiratory properties and based on analysis
of the video and possibly additional signals over time. The term
"subject", as used herein, is meant to indicate an individual the
respiratory activity of whom is being tracked, whether a patient in
a health facility, or a person being tested for any reason or
purpose.
The system is applicable to various settings such as monitoring
subjects who are undergoing sedative or pain killing treatment that
can depress respiration, monitoring deterioration in the critically
ill, monitoring infants to protect against SIDS and diagnostic
tools for sleep testing such as for obstructive sleep apnea. In
such cases, the monitor of the invention can be used to track other
movements in addition to respiratory movements, e.g., leg movements
and eye movements, as well as for quantifying awakenings and
sedation level. These can be important for sleep staging
analysis.
Another use of the invention is for the diagnosis of conditions
using intermittent monitoring or comparison of measurements taken
at different times. One example of this is comparison of breathing
movement patterns of the chest and abdomen before and after surgery
in order to diagnose diaphragm paralysis. Also, change in
respiration patterns at rest over time can be tracked with the
system of the invention.
The system may be used in versatile settings such as hospitals, clinics, the home, the battlefield and even outdoor environments. For some of these conditions, the respiratory rate is often a late indicator of respiratory distress, and other measures, such as the depth of breath and pulmonary motion patterns, are earlier indicators of deterioration. There are also situations in which the respiratory rate is not the differentiator, but rather the overall pattern of body movement.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings:
FIG. 1 is a flow chart of a monitoring stage, according to one
embodiment of the invention;
FIG. 2 is a flow chart of a detection stage, according to one
embodiment of the invention;
FIG. 3 is a flow chart of a tracking stage, according to one
embodiment of the invention;
FIG. 4 is a flow chart of an analysis stage, according to one
embodiment of the invention;
FIG. 5 is a flow chart of a volume calibration stage, according to
one embodiment of the invention;
FIG. 6 shows a marker according to one embodiment of the invention in a separated position relative to a cloth, which may be part of the subject's robe;
FIG. 7 shows the marker of FIG. 6 in operating position;
FIG. 8 is a top view of the marker of FIG. 7;
FIG. 9 is a cross-section taken along the AA line of FIG. 8;
FIG. 10 is an exploded view of the disposable part of a marker
according to one embodiment of the invention;
FIG. 11 is a breathing waveform showing a single marker's
displacement over time;
FIG. 12 is a schematic illustration of a system for monitoring the
breathing patterns of a subject using inertial sensors; and
FIG. 13 illustrates a method for determining the respiratory activity of a subject according to one embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
The invention will be described below with reference to
illustrative embodiments, which are not meant to limit its scope
but are only meant to assist the skilled person to understand the
invention and its uses. While in some cases reference may be made
to a single subject or to a single marker, or to a specific number
of markers, it should be understood that this is done for the sake
of simplicity but the invention is not limited to the monitoring of
a single subject and more than one subject can be monitored
simultaneously by a single system of the invention. Similarly, different numbers of markers can be used, as will be further discussed below, and they can be positioned at different locations, depending on the desired result.
The following description relates to a system for monitoring the
respiratory activity of a subject in a static position such as in
bed, but it will be appreciated that it may also be used to monitor
the respiratory activity of a moving subject, particularly one that
is undergoing physical activity, such as riding a bicycle.
Exemplary System Overview
An illustrative system according to one embodiment of the invention
consists of the following components:
1. One or more markers on the subject comprising signal generating
elements, power supply elements and circuitry for operating the
signal generating elements, either directly contacting the subject
or on the subject's clothing or covers (i.e., on the sheets and/or
blankets).
2. A receiver for receiving the generated signals, for example an
imaging device, which can be for example a CMOS video camera, a 3D
camera, a thermal imager, a light field camera or a depth camera.
The imaging device may have an associated illumination source such
as IR LEDs in some embodiments. The receiver may also be a wireless
receiver that receives transmitted signals generated by elements,
such as an accelerometer or a gyroscope.
3. A computing device that is connected either wirelessly or wired
to the receiver. The computing device may be remote and the
connection to it may include an internet connection. As an example,
some image analysis may take place on a wired connected device such
as a computer board wired to the receiver. From there, a Bluetooth
connection (or a connection of functionally similar technology) may
connect to a gateway device that uploads the data to a cloud on the
internet, or delivers it to a remote location, for further
processing and storage.
4. An optional alarming device that issues either an audible or
other kind of warning near the subject or to a remote location
through a wired or wireless connection from the computing
device.
5. An optional remote application, e.g., at a central nurse station
or on a mobile device that can be used to control the signal
generating element in the subject's vicinity and receive data from
the signal generating element. For example, the remote application
can be used to stream raw video or analyzed video from the signal
generating element located near the subject, once an alert is
issued.
6. An optional display device that may be connected with a wired or
wireless connection to the computing device.
The system may also comprise:
a. Additional signal generating elements, such as acoustic or other
sensors that are connected to the computing device, e.g., a pulse
oximeter.
b. Data storage devices, which can be located on the receiver, on a
computing device or at a remote location, for storing data such as
an electronic medical file. In addition, a database which includes
training data for learning method parameters can be located on a
computing device.
c. A spirometer or respirometer which measures a subject's air flow
for calibration purposes in some embodiments.
Physiological Measurements
Extracting Breathing Measures
Accepted clinical measurements such as respiratory rate, tidal
volume, minute ventilation and apnea indicators can be derived from
the sensing system. In order to obtain absolute estimates of some
of these properties (typically for diagnostic purposes),
calibration is needed between the measured signals and the
respiration property. These calibration functions can be learned a
priori based on subject attributes such as shape, silhouette,
weight, age, sex, BMI, chest circumference, and body position and
data from a validated device such as a spirometer. The calibration
can also be updated using data from a phantom or a simulator. Note that the calibration functions used should take the subject's pose into account.
Often for monitoring, the absolute value is not necessary, but changes from baseline properties need to be quantified. In such cases, the calibration need not be carried out. In the following, methods for extracting a number of respiratory properties from the video-sensed markers are outlined:
1. Respiratory Rate: The respiratory rate is extracted by first extracting the dominant frequency from each marker separately and then fusing these estimates together. One method to extract the dominant frequency from the marker's cleaned signal (filtered, cleaned of noise and after non-breathing motion removal) is through analysis of time series windows which include several breaths. For instance, this can be achieved by calculating the Fourier transform over a moving window of, e.g., ~20 sec, every 1 sec. A Hamming window (Enochson, Loren D.; Otnes, Robert K. (1968). Programming and Analysis for Digital Time Series Data. U.S. Dept. of Defense, Shock and Vibration Info. Center. p. 142.) is used to reduce artifacts. The instantaneous frequency of each marker is taken to be the frequency with the largest Fourier component amplitude. Another way of calculating the individual respiration rates is in the time domain, as the distance between consecutive significant peaks or troughs.
The estimate of the actual respiratory rate is taken as a weighted
average of the dominant frequencies found from each valid marker.
The weighting can be taken proportional to the amplitude of the
marker motion and outliers can be iteratively removed if
sufficiently far from the average rate computed using the current
subset of markers.
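By way of illustration only (this sketch is not part of the patent text), the moving-window estimate and the amplitude-weighted fusion might be implemented as follows in Python; the sampling rate fs, the list of marker traces, the sub-0.05 Hz drift cutoff and the peak-to-peak weighting are assumptions:

    import numpy as np

    def dominant_frequency(trace, fs):
        # Dominant frequency (Hz) of one cleaned marker trace via a
        # Hamming-windowed FFT, as described above.
        windowed = trace * np.hamming(len(trace))
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
        spectrum[freqs < 0.05] = 0.0  # suppress DC/drift (assumed cutoff)
        return freqs[np.argmax(spectrum)]

    def respiratory_rate(marker_traces, fs):
        # Amplitude-weighted average of per-marker dominant frequencies,
        # returned in breaths per minute.
        rates = [dominant_frequency(t, fs) * 60.0 for t in marker_traces]
        weights = [np.ptp(t) for t in marker_traces]  # peak-to-peak amplitude
        return float(np.average(rates, weights=weights))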
2. Apnea and Hypopnea Events: These are events in which the subject has stopped breathing, or has a very low breathing rate, for over 10 seconds.
Apnea can appear with chest wall movements present, as in an obstruction, in which case a more careful analysis is needed. First, the amplitude of the chest movements is examined; if it is significantly lower than recent breaths, it is suspected as an event. Furthermore, the phase difference between abdominal and thoracic markers is followed to determine whether a significant change has occurred. The phase difference at a given time may be obtained by computing the phase delay between two short time segments (10 sec, for example) of the marker movements. For example, this can be achieved by taking the Fourier Transform of each segment separately and obtaining the phase difference between the relevant complex Fourier components of the two markers. The relevant component is the one that corresponds to the extracted respiratory rate. Another method to find the phase difference between two markers is to average the time lag between their peaks and express it as a fraction of the cycle time; 50%, for example, corresponds to a 180° phase lag.
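As a hedged sketch (again not taken from the patent), the Fourier-based phase comparison of two equal-length segments could look like this; fs and resp_rate_hz are assumed to be known from the rate-extraction step:

    import numpy as np

    def phase_difference(seg_abdomen, seg_thorax, fs, resp_rate_hz):
        # Phase lag (radians) between two equal-length marker segments at
        # the Fourier bin closest to the extracted respiratory rate.
        freqs = np.fft.rfftfreq(len(seg_abdomen), d=1.0 / fs)
        k = int(np.argmin(np.abs(freqs - resp_rate_hz)))
        fa = np.fft.rfft(seg_abdomen)[k]
        ft = np.fft.rfft(seg_thorax)[k]
        return float(np.angle(fa * np.conj(ft)))  # pi radians = 180 degree lag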
3. Inhale Time, Exhale Time: For each marker displacement trace, the time elapsed between a peak and the following trough can be used as a measure of the inhale or exhale time, depending on the marker location.
4. Breathing Variability: Breathing variability can be calculated
as the entropy of the lengths of the breathing cycle. It is often
an indicator of respiratory distress.
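The patent does not fix a particular entropy estimator; one plausible reading, binning the cycle lengths and taking the Shannon entropy of the resulting histogram, is sketched below (the bin count is an assumption):

    import numpy as np

    def breathing_variability(cycle_lengths_sec, bins=10):
        # Shannon entropy (bits) of the breathing-cycle-length distribution.
        hist, _ = np.histogram(cycle_lengths_sec, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))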
5. Tidal Volume: A method for calculating tidal volume (the amount
of air displaced or inhaled per breath) via calibration on a
training set and learning a function on displacements was described
above.
Another method for calculating the tidal volume is through modeling
the abdominal and thoracic cavity shape. For example the shape can
be assumed to be a double cylinder. Both cylinder walls expand
during inhalation and contract during exhalation. The heights of
the two cylinders change with breathing due to the movement of the
diaphragm. The thoracic cylinder and abdominal one have different
mean radii which can be approximated from the image by using edge
detection on the image to find the body edges, at one or more body
positions. The known physical marker size can be used to convert
these measurements to physical units. During runtime, the marker
displacements and the angle of the camera are used to calculate
tidal volumes through the model.
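A minimal sketch of the double-cylinder reading, assuming fixed cylinder heights and a thin-shell approximation dV = 2*pi*r*h*dr per cylinder (the diaphragm-driven height change described above would add further terms):

    import numpy as np

    def cylinder_model_volume_change(r_thx, h_thx, dr_thx, r_abd, h_abd, dr_abd):
        # Approximate breath volume change when the thoracic and abdominal
        # cylinder walls move outward by dr_thx and dr_abd (same units).
        dv_thorax = 2.0 * np.pi * r_thx * h_thx * dr_thx
        dv_abdomen = 2.0 * np.pi * r_abd * h_abd * dr_abd
        return dv_thorax + dv_abdomen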
Yet another modeling technique can utilize the 3D modeling of the
thorax as described hereinabove. The extracted marker motion can be
superimposed on the 3D model to extract the volume change during
exhalation.
6. Minute Ventilation: Integrate tidal volume over one minute.
Extracting Non-Breathing Measures
There are other measures of motion, not associated with breathing, that can be extracted from the marker movements via the monitoring system and that give information on sleep quality and behavior. Some of these are:
1. Leg Movements: By tracking markers on legs or other extremities
the extent and number of these movements can be calculated. Such
movements are associated with the diagnosis of restless leg
syndrome.
2. Eye Movements: Eyes, when open, can be detected from the video imaging without the need for markers. In order to classify eye movements during sleep, small markers placed on or above the eyelids can be used. These may be used for sleep staging and also for quantifying the continuity of sleep (eye openings).
3. Posture Stability: The number of position changes during sleep can be tracked easily from the markers that move in and out of the field of view. The actual body position at each time of sleep can be found, and alarms can be sent to a caregiver if the person's position needs to be changed (subjects with bedsores, infants who should not be positioned prone, etc.).
4. Bed Exits: The number of bed exits and bed entries can be detected, and alarms set off in real time if, for example, the person does not return to bed or falls near the bed.
5. Heart Rate: The heart rate can be obtained from markers located near the chest. This is best done by finding peaks in the Fourier transform of the displacement signal of each marker in the range of, e.g., 40-140 bpm for adults.
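A small illustrative sketch of that band-limited spectral peak search (the windowing choice is an assumption):

    import numpy as np

    def heart_rate_bpm(displacement, fs, lo=40.0, hi=140.0):
        # Largest spectral peak of a chest marker's displacement, searched
        # only inside the adult 40-140 bpm band mentioned above.
        spectrum = np.abs(np.fft.rfft(displacement * np.hamming(len(displacement))))
        freqs_bpm = np.fft.rfftfreq(len(displacement), d=1.0 / fs) * 60.0
        band = (freqs_bpm >= lo) & (freqs_bpm <= hi)
        return float(freqs_bpm[band][np.argmax(spectrum[band])])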
6. Tremors/Seizures: Continuous trackable motion in all the markers, as well as motion in a large fraction of pixels found within the body silhouette, indicates a tremor.
7. Eye Micro-Tremor: The characteristics of eye micro-movements are related to the level of sedation. The described sensor can be used in conjunction with a marker to track these movements as follows: a marker can be adhered to the eyelid, such as a thin retro-reflective adhesive sticker, or else IR-reflective polish can be applied to the eyelids or eyelashes. An alternative is to attach the marker to the side of the face near the eye and have a thin flexible object connecting it to the eyelid. It is also possible to track movements by detecting and tracking the boundaries between eyelashes and eyelids in the image. Since the movements are of high frequency (~80 Hz) and of small amplitude (microns), the choice of camera sensor should be made accordingly, and the camera should also be placed closer, to narrow the field of view. One possibility is to attach the camera to an eyeglass frame rather than to the bed.
In order to cancel camera jitter and head movements, other reference points away from the eye are also tracked, such as markers on the cheek, forehead or earlobe. These signals are analyzed to detect the main frequencies of motion, which can subsequently be removed from the eyelid signal. The resulting signal can be further analyzed to determine the frequency of the micro eye tremors.
The eye-tremor and breathing monitors can be configured to use the
same sensor or distinct sensors.
8. Head Movements: Often there are head movements arising from an obstruction, manifesting themselves as snoring and increased respiratory effort.
Respiration Quality Measures and Warnings
Respiration is monitored in order for an early warning alert to be
issued if a problem is about to occur. The system produces an
online early warning alert based on personalized per subject data
and based on recent continuous measurements.
The ubiquitous problem with alert systems is that they are often
unreliable and therefore lead to alarm fatigue, in which alarms are
often ignored. We describe a system which provides a more reliable
early warning, is more immune to artifacts and classifies the
overall respiratory information sensed. In addition, a quality
measure is provided, which quantifies the reliability of the
current measurement. The early warning system is based on a
multi-dimensional analysis of the respiratory waveforms and
personalized to the current attributes of the subject being
monitored. This should be contrasted to typical monitoring systems
that track a single feature, for example the respiratory rate. In
such systems, an alarm will typically be set off whenever the
derived respiration rate declines or exceeds fixed preset minimal
and maximal thresholds. The approach introduced here is different in that it is adaptive to the subject, yet reliable in that it is based on a large amount of data. First, features of the subject are used (age, sex, thoracic size, etc.) and then various features are extracted from the recent measurements. These
can be the outputs from the video imaging system described above or
from other devices or from a combination of several devices. For
example, the raw signals may include those from an end-tidal CO2 capnograph, a pulse-oximeter and the video system described above with a single marker. The measurements can be
impedance measurement made through leads attached to a subject. The
identity of the features, mathematical quantities derived from the
measurements time series, are determined in the training stage
described below. A classifier is trained in this feature space
during the training stage which is performed either in advance or
online. The classifier can be a two-class one that differentiates
between normal subject behavior and abnormal behavior. Using the
classifier, a score is assigned continuously based upon the
measurement data, with 1 indicating typical baseline behavior while
a lower score, e.g., 0.5, represents the edge of normal behavior
below which outlier behavior is observed. In addition to this
"functional score" that is calculated based on training data, a
quality score is calculated denoting how reliable the basic
measurements are in cases where this is possible.
Signal Features
The features are calculated from the various signals on various
timescales and typically depend on the physiological source of the
signal. For example, the respiratory rate calculated from a 20
second time series signal of several of the video markers described
above can be one of the features. Other features can quantify the
trend of physiological quantities, such as the derivative (trend)
of the respiratory rate over consecutive overlapping 20 second
intervals. Similarly, average amplitudes of the respiratory signal
peaks and troughs over 20 second intervals and their derivative
over time can be used. The initial set of features can be reduced
further in the training stage to reduce over-fitting of the
classifier using standard methods.
Training
A classifier is trained using training data, which consists of
training vectors that consist of the set of features and the label
of one of the classes to be classified. The label may be "normal",
"coughing", "sensor disconnected", "risky", and so forth. If
sufficient data exists for more than one class of behavior, a multi-class classifier may be trained. Minority classes with few
data points can be grouped to a larger class labeled "artifacts"
for example. In the case where labeled data does not exist or
predominantly belongs to a single class, a single class classifier
can be trained. Frameworks for learning such classifiers exist, for
example the SVM one-class classifier (B. Scholkopf et. al.,
Estimating the support of a high-dimensional distribution. Neural
Computation, 13(7), 2001).
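A hedged sketch of such a one-class baseline model using scikit-learn's OneClassSVM follows; the placeholder feature matrix, the nu/gamma settings and any mapping of the decision value to the 0..1 "functional score" described above are all assumptions, not the patent's procedure:

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 5))  # placeholder "normal" feature vectors

    clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)

    x_now = X_train[:1]
    raw = clf.decision_function(x_now)  # > 0: inside the learned support
    label = clf.predict(x_now)          # +1 normal, -1 outlier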
Standard feature selection methods can be used to reduce the
dimensionality of the classifier problem, such as PCA, and methods
that rank the features according to their discrimination ability.
The foregoing applies in general to training a classifier based on a fixed training set that is collected prior to monitoring.
However, if the data is collected a priori, it may not be relevant
to the baseline characteristics of the subject in question, thus
leading to an inaccurate classifier. An alternative would be to
train the classifier based on the initial period of monitoring the
subject, e.g., the first few minutes when the medical staff is in
the vicinity. However, this greatly limits the amount of training
data that can be used and thus can severely limit the
generalizability of the classifier due to over-fitting. Again the
classifier results may not be reliable. Our proposed methods
involve forming training sets which are relevant to the subject in
question and enable the incorporation of a large amount of data.
Several variants of forming a training set, which can be used alone or integrated together, are outlined below:
1. Augmenting training sets online: The data from a monitored
subject can be added to the database by adding it as time segments
of e.g., 10 minutes with all its features. In addition, the periods
of respiratory depression can be labeled according to preset
conditions such as respiratory rate <8 or minute volume <3
liter. Using such a criterion, all the data can be labeled
according to how long they precede a respiratory event. This data
can be used in future training sets of the classifier since it can
be labeled as "normal" and "risky" depending on how long before a
respiratory event they were extracted.
2. Subject training set selection "on the fly": One method is to train a classifier based on data from a subset of the total available subjects. This subset is chosen so that it is "closest" to the subject in question. One way this can be done is by carrying out a clustering procedure (e.g., mean-shift clustering) in the feature space composed of the data from all the subjects, including the initial data of the subject in question. The subjects whose data falls mainly in the same cluster as that of the subject in question are used as the training set. In order to improve stability yet further, the clustering can be carried out many times with different parameter values, and the number of times each subject falls in the same cluster as the current subject is recorded. Subjects that fall in the same cluster as the subject in question in a majority of the experiments are used for the classifier's training set.
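An illustrative sketch of this repeated-clustering vote (not the patent's exact procedure): here each subject is summarized by a single mean feature vector, and mean-shift is rerun over a grid of bandwidths, both simplifying assumptions:

    import numpy as np
    from sklearn.cluster import MeanShift

    def select_similar_subjects(features_by_subject, current_id, bandwidths):
        # Return subjects that land in the same mean-shift cluster as the
        # current subject in a majority of the clustering runs.
        ids = list(features_by_subject)
        X = np.vstack([features_by_subject[s] for s in ids])
        cur = ids.index(current_id)
        votes = dict.fromkeys(ids, 0)
        for bw in bandwidths:
            labels = MeanShift(bandwidth=bw).fit_predict(X)
            for s, lab in zip(ids, labels):
                votes[s] += int(lab == labels[cur])
        need = len(bandwidths) / 2.0
        return [s for s in ids if s != current_id and votes[s] > need]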
3. Training set selection "a priori": The data from all subjects can be clustered a priori, as in the previous method, to obtain subsets of subjects with similar physiological signal features. More than one clustering experiment can be used to produce additional subsets that may intersect previous clusters. Each such cluster is a candidate training set, and the one chosen is the one closest to the current subject. This subset can be chosen according to the distance of the cluster's center of mass from the mean of the current subject's feature values. In addition, differences in the principal component directions in feature space can be incorporated into the distance measure between the two distributions.
Another method of finding the distance between the current subject
and the subsets is to determine the overlap volume between each
cluster distribution and the current subject distribution. Each
such distribution can be approximated by an ellipsoid using the
covariance matrix and the center of mass in feature space.
Additional criteria that can be taken into account in calculating the closeness are subject attributes such as age, medical status, sex and baseline vital signs.
4. Transforming training set: Another option of utilizing the
original training set for the new subject is to scale the
individual subjects to normalized coordinates. For example, a
feature value $f_i$ can be transformed as $f_i' = (f_i - \mu_i^{(p)}) / \sigma_i^{(p)}$, where $\mu_i^{(p)}$ and $\sigma_i^{(p)}$ are the mean and standard deviation of feature $i$ for subject $p$. After the transformation, the classifier can be learned on the transformed data set.
5. Selecting time segments for training set: Using the labeling of time segments described in #1 above, the relevant time segments from the subjects who were chosen to be included in the training set are used.
The output of the training stage is a set of mathematical formulae whose inputs are feature vectors and whose output is a class probability--the probability that the feature vector belongs to a specific
class of the classifier. In addition, regions of the feature space
that are associated with non-normal respiration patterns are
assigned informative clinical descriptions: such as "Apnea",
"Shallow Breathing", etc. These regions and abnormalities are
assigned a priori based on pre-assigned rules (based on expert
rules).
Runtime
In runtime, an initial class is assigned continuously based on the
class with the highest score of the learned functions. In cases
where a global normalization is applied to the raw features, the
normalization factor is updated during runtime on the previous data
of the subject in question. There are many adjustments that can be
made in order to improve reliability of the monitoring such as:
1. In cases where the class association is not clear-cut, an "undecided" tag can be associated with that time.
2. A quality measure is associated continuously in time. For the
video sensor, whenever non-respiratory body motion is identified,
the quality score is reduced from 1. The amount that it is reduced
depends on the prediction error of the motion, the larger the error
the more it is reduced (the minimal quality measure is 0). Also if
periods of coughing or talking are identified using other sensors
(for example acoustic sensors), the quality measure is also
reduced. Any indication of artifacts in one of the sensor signals
should be used to reduce the quality measure of the other sensor
signals.
3. Due to missing measurements (subject out of bed), it may not be
possible to calculate the classification at specific times. The
class assigned at these times can be tagged as "unknown".
When the classifier output is not "normal", the suspected anomaly can be output based on the feature values and predetermined rules. Examples of such rules are:
1) High respiration rate (28 breaths/min) with low variability (0.2);
2) Hypoventilation: breathing effort is low (amplitude 50% of normal) and respiration rate is low (5 breaths/min);
3) Obstruction: paradoxical breathing.
Output: Alarms and Communication
Typically, a class output that is neither "normal" nor "unknown"
can trigger an alarm or warning to be issued. In order to increase
reliability, a delay time (e.g., 10 seconds) may be applied before
an alarm is issued. If during this delay time, the measurements
return to the "normal" class, the triggered alarm will not be
issued. Alarms can be audible with sound associated with severity
and type of breathing pattern. They can be communicated to a
variety of output devices either by a wired or wireless connection,
such as to a bedside monitor, a handheld monitor, a mobile unit or
a central nursing station.
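A toy sketch of the alarm-delay rule (the names and time bookkeeping are assumptions): an alarm fires only if every classification within the delay window remained abnormal:

    def should_alarm(class_history, now_sec, delay_sec=10.0):
        # class_history: list of (timestamp_sec, label) pairs, newest last.
        recent = [lab for t, lab in class_history if now_sec - t <= delay_sec]
        abnormal = [lab for lab in recent if lab not in ("normal", "unknown")]
        return bool(recent) and len(abnormal) == len(recent)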
Subject Monitoring Process
The subject monitoring process is carried out according to one
embodiment of the invention when a subject 5 in supine position is
lying on a bed or on any other substantially horizontal support
surface 7, as illustrated in FIG. 12.
A rigid marker 51 provided with two inertial sensors 56 and 57, a
single inertial sensor or any other suitable number thereof, such
as a three-axis accelerometer and a gyroscope, or a combination
thereof, is used to calculate the respiratory activity of subject
5. Marker 51 is applied to the thorax 13 of subject 5 by engaging
means 16, such as an adhesive connection applied directly to a
bodily portion or an interfacing element such as a fabric in
engagement with the thorax, to define a desired positional
relationship to a chosen location on the subject's thorax. Each
inertial sensor may be equipped with its own power supply element
22 and operating circuitry 24, or alternatively marker 51 has
common power supply elements 22 and operating circuitry 24 for all
inertial sensors provided therewith.
During breathing motions, thorax 13 is displaced cyclically in 3D
space. Marker 51 is also displaced by a distance of dl in 3D space
in response to the thorax displacement. Inertial sensors 56 and 57,
as they are displaced, sense the force applied by thorax 13 during
a breathing motion along a local inertial frame defined by their
rectilinear structure, and in turn output a signal S that is
indicative of the detected force or of the acceleration related to
the detected force. Output signal S is generally a wireless signal,
but may also be transmitted by a wired connection. Receiver 33,
which is located in the proximity of marker 51 when the generated
signal is a wireless signal but which also may be remotely
separated from the marker, receives the generated signals S and
transmits them to a computing device 37, which processes the
generated signals S to calculate a sensor displacement.
Alternatively, the receiver 33 may be wired and located in close
proximity to the computing device. Both of them may be situated on
or near the patient's bed.
The motion of the rigid marker 51 in a single exhalation or
inhalation movement may be approximated as a 3D translation, and
therefore it may be assumed that the local accelerometer axes are
also fixed during an exhalation or inhalation movement. However,
the gravitational field adds a fixed acceleration value to the
output signals S, which may lead to incorrect conclusions regarding
the respiratory activity. A high pass filter provided with
computing device 37 may be used to filter output signal S and to
thereby remove the fixed gravitation-based acceleration value.
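A minimal sketch of that gravity removal using a high-pass Butterworth filter; the cutoff frequency and filter order are assumptions, as the text only requires that the quasi-static component be removed:

    from scipy.signal import butter, filtfilt

    def remove_gravity(acc_axis, fs, cutoff_hz=0.1, order=2):
        # High-pass filter one accelerometer axis (1-D array sampled at fs Hz)
        # to strip the fixed gravitation-based acceleration component.
        b, a = butter(order, cutoff_hz, btype="highpass", fs=fs)
        return filtfilt(b, a, acc_axis)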
FIG. 13 illustrates a method for calculating a subject's current or
average breathing movement and/or current tidal volume using the
signals generated by the inertial sensors.
After the computing device receives the signals generated by the inertial sensors in step 62, each signal is filtered in step 64 twice, to remove two sources of error: the fixed gravitation-based acceleration value, and a determined rotation component associated with the subject's motion, most prominently non-breathing motion.
according to the presence of significant rotations in the signal
related data; a possible criterion is that the cumulative angle of
rotation over a predetermined period of time, e.g., the previous 10
seconds, is greater than a predetermined threshold. A Kalman
filter, or any other filter providing high sampling rate
measurements of the output acceleration values, is suitable. After
filtering, the thorax-caused motion can be approximated as being
translational.
Each filtered output signal generated between the two extrema that
represent acceleration over time related to exhalation and
inhalation movements, respectively, is integrated twice in step 66
to first calculate a change in velocity and then a change in
breathing displacement as a function of the integral. In order to
reduce integration error, the integration is carried out between
peaks and troughs of single breaths by enforcing regularization
constraints such as zero velocity at peaks and troughs of the
respiratory signal. Then, in step 68, the magnitude of a current
breathing displacement or a current average tidal volume during a
cycle of breathing motion is determined, and in step 70, is
compared with the stored calibration function, in order to analyze
the respiratory activity.
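A sketch of the peak-to-trough double integration with the zero-velocity constraint, assuming breath segmentation is already available; subtracting a linear trend from the velocity is one simple way to enforce zero velocity at both endpoints, not necessarily the patent's regularization:

    import numpy as np

    def breath_displacement(acc_segment, fs):
        # acc_segment: filtered acceleration between one peak and the next
        # trough (or vice versa). Returns the net displacement magnitude.
        dt = 1.0 / fs
        v = np.cumsum(acc_segment) * dt        # first integration: velocity
        v -= np.linspace(0.0, v[-1], len(v))   # force v = 0 at both endpoints
        d = np.cumsum(v) * dt                  # second integration: displacement
        return abs(d[-1])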
Another embodiment of the monitoring process is carried out by imaging, and includes four major steps and a calibration step, which will be described below with reference to FIGS. 1-5. (The accelerometer will need a calibration step as well in order to output tidal volume; this can be done using the described calibration with the camera. If no camera is available, one can consider a calibration involving only the non-camera measurements (age, weight, etc.) and the accelerometers. More accelerometers can be used for calibration to obtain body shape.)
Sensing and Data Acquisition System
Video Sensing:
The imaging system of this particular embodiment of the invention
consists of the following components:
1. Sensors and Illumination: One or more video cameras are used to record a scene of the subject. One example is a CMOS camera with night-vision capabilities, i.e., one in which a filter exists to filter out the visible light and sense the near-IR optical wavelengths (NIR: 0.75-1.4 μm). There may also be ambient infrared LED illumination to increase ambient light if it is low. The ambient light source may be on the camera or in a chosen spatial arrangement in the subject's vicinity. Another example for the camera sensor is a thermal camera which is sensitive to mid-wavelength (3-8 μm) or long-wavelength (8-14 μm) infrared radiation. It too can have a light source, such as a halogen lamp source, which is used in Thermal Quasi-Reflectography. The sensor may also be a 3D depth camera such as the Carmine 1.09 by Primesense Corporation.
2. Markers: Markers are applied either directly to the subject's
body or integrated to a covering of the subject such as his
clothing, his blanket, an elastic strap, or a bandage for example.
The markers can be for example patches made from retro-reflective
material, geometric patterns on the blanket or nightgown, or low
voltage LED lights embedded or attached to clothing or patches.
Each LED light can be either fixed or can flash each with its own
known temporal pattern. Also, their light emitting wavelength and
illumination may be specific for each LED. In order to enhance the
outward facing light of these active markers, they can be embedded
in retro reflective cones or other shaped contraptions made of
reflective materials. Also, the illumination angle may be narrowed by using a lens. One specific marker, which is an object of the present invention and which can be used in a system such as this illustrative system, will be described in greater detail later in this description.
Several markers may be incorporated together such as into an
adhesive linear strip, or a 3D shape such as a dome shape
consisting of scattered markers on its curved surface. Another
possibility is to design the blanket that covers the subject to
include "windows" that are transparent to IR light at the locations
of the markers. The blanket can be fastened to the gown or marker
patches to keep the markers visible through the windows.
In one embodiment of the invention the marker unit consists of two
distinct physical parts that can be firmly connected to each other
but can also be easily released from each other. One part may be
positioned close to the body and may therefore be for single use
(cannot be disinfected) while the other part (can be disinfected
and therefore reusable) attaches to the disposable part but has no
direct attachment to the subject. The attachment between the two
parts can be direct through fasteners, e.g., similar to ECG
fasteners. Another alternative is that the two parts be connected
through clothing or a blanket. For example, in one embodiment of
the invention the disposable part is a patch that adheres to the
body, while the reusable part consists of an active marker that is
positioned on clothing and attaches to the disposable part through
one or more fasteners. In the case of an active marker, the power
unit may either be incorporated within the disposable patch or
within the reusable marker.
The active markers can emit steady signals or coded signals. For
example if the markers emit light, the intensity can be modulated
according to a frequency that is predetermined and distinct for
each marker location. In this way, each marker can be easily
identified during operation. Furthermore, light patterns which can
save on battery life can be employed for long-term monitoring.
3. Controller and Mount: The camera can be hung overlooking the
subject on the wall, ceiling, a stand or part of the bed or any
other fixture in the vicinity. A controller may be provided, which
can be used to pan, rotate and zoom the camera's field of view
freely in 3D either manually or automatically based on the video
content. In addition to the camera positioning, it can be possible
to adjust other camera features automatically based on the video
content. Examples of such features are: focal distance,
illumination level, white balance and optical filters. Providing
such controlling elements is well known in the art and is not
therefore described herein, for the sake of brevity.
Many different embodiments of the invention can be provided, comprising different elements, such as sensors, software for calibration or other purposes, etc. Some such elements are described below, it being understood that the invention is not limited to embodiments using such elements, and that many other additional functional elements can be added, which are not described in detail hereinafter since they will be understood by the skilled person.
Additional Sensors
The camera visual sensor may also be equipped with a depth sensor
providing the distance of each monitored pixel from the camera.
Other cameras may provide the lightfield at each pixel, which in
addition to the scalar RGB values provides the direction of the
light incident on the sensor's pixels (see for example the lytro at
http://www.lytro.com).
Video Detection and Tracking
The following relates to the detection and tracking of the markers using 2D video images (FIGS. 2 and 3). Later on, a description is provided of how the depth information can be obtained from standard 2D images or from a 3D camera or stereo camera. For the first video frame, or any frame in which the markers from the relatively recent frames were not all tracked, an attempt to detect the markers is carried out. Detection on the intensity image (or sequence of images) can proceed in the following steps:
1. Pixels with high intensity are marked. This is done using a fixed threshold, or one that adapts to the histogram of the entire image. For example, the gray level of the 90th percentile of the image histogram is found; only pixels which surpass this value by a set number of standard deviations are marked. The standard deviation is calculated from the image histogram.
2. The marked pixels are clustered to connected components using
chain clustering for example. Two pixels belong to the same cluster
if there is a path of marked pixels that connect them, each within
distance d of each other.
3. The various clusters are the candidate markers. They are now
filtered according to a priori known features of the markers such
as their size, eccentricity, bounding box size, temporal frequency
and gray levels. For circular markers, principal components values
of the candidate clusters are useful features. Position of the
candidate clusters relative to the bed or body silhouette can also
be used to filter out spurious clusters.
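A hedged sketch of steps 1 and 2 above (with a simple size filter from step 3): the percentile-plus-standard-deviations rule follows the text, while the parameter values and the use of scipy's connected-component labeling in place of chain clustering are assumptions:

    import numpy as np
    from scipy import ndimage

    def detect_marker_candidates(gray_image, n_std=2.0, min_size=4, max_size=400):
        # Threshold at the 90th-percentile gray level plus a set number of
        # standard deviations, then cluster marked pixels and filter by size.
        threshold = np.percentile(gray_image, 90) + n_std * gray_image.std()
        labels, n = ndimage.label(gray_image > threshold)
        centroids = []
        for i in range(1, n + 1):
            blob = labels == i
            if min_size <= blob.sum() <= max_size:
                centroids.append(ndimage.center_of_mass(blob))
        return centroids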
In the case where the markers are "flashing" and do not appear in
each frame, the above analysis is carried out for a series of
frames where the extent and identity of the frames in the series
depend on the flashing frequencies. The final candidate markers for
step 3 above consist of the unified set of markers obtained from
the frame series. In the cases where the marker produces a variable
shape and dispersed light pattern the clustering is carried out
differently. This can occur for example in the presence of a
blanket which may scatter the light reflected or emanating from the
markers, dependent on the folds in the blanket and the relative
configuration of the blanket relative to the markers. In these
cases, chain clustering with a single threshold on the intensity
image as described above may not be sufficient to identify a single
marker consistently. A marker may be composed of several disjoint
clusters which are the "seeds" of its more extended light pattern.
In order to capture the marker signature, the seeds may be expanded
by region growing techniques to unite the seeds to a single
signature. This signature is saved as a set of pixels with their
gray levels. In addition the presence of a specific temporal
behavior for a particular marker is used to associate disjoint
pixels to a common signature.
Once the clusters in a frame are detected, they are tracked over
the following frames (as described below). If the tracking was not
successful for all the detected markers, the detection is
reinitialized on the new current frame. Successful tracking means that the positions of all the detected markers were determined in each frame, that the frame-to-frame displacement of each marker does not surpass a predefined threshold, and that this displacement is consistent across the marker pixels. The threshold is set using the maximal expected speed of the breathing motion. It can later be adjusted per patient during the baseline setting described below.
In case the tracking was successful for a short time, say 3 seconds, the tracking is continued in time on the "valid" detected clusters. A valid cluster is one whose motion is consistent, in terms of frequency of motion, with the major detected markers (those on the trunk of the body). This can be analyzed by calculating the largest component of the Fourier Transform of the cluster displacements over, say, 10 seconds, and removing clusters whose frequency is far from the median frequency of the chest and abdominal markers.
Once it is verified that markers in the field of view have been
detected as described, one proceeds with tracking these markers in
the coming frames. From time to time detection is carried out again
to detect markers that were momentarily absent or had a low signal
in the initial detection.
The tracking of the markers can proceed by calculating the optical
flow patterns in order to identify the local motion within the
image. The Lucas Kanade method can be utilized (B. D. Lucas and T.
Kanade, An iterative image registration technique with an
application to stereo vision. Proceedings of Imaging Understanding
Workshop, 121-130, 1981).
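A minimal OpenCV sketch of pyramidal Lucas-Kanade tracking against a reference frame; the window size and pyramid depth are illustrative choices, and both frames are assumed to be 8-bit grayscale images:

    import cv2

    def track_markers(ref_frame, cur_frame, ref_points):
        # ref_points: float32 array of shape (N, 1, 2) holding (x, y) marker
        # centroids detected in the reference frame.
        new_points, status, _err = cv2.calcOpticalFlowPyrLK(
            ref_frame, cur_frame, ref_points, None,
            winSize=(21, 21), maxLevel=3)
        return new_points, status.ravel() == 1  # per-marker success flags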
The tracking is carried out relative to a reference frame, in order to avoid accumulating frame-to-frame errors. The reference frame is updated from time to time, due to the fact that the scene is changing over time. One possibility is to update the reference frame when one or more of a specific set of criteria are met. Examples of these criteria are: non-breathing movements are identified; illumination conditions of the scene have changed significantly (total intensity change of a marker cluster relative to the same marker in the reference frame); a preset time period (e.g., 30 seconds) has elapsed.
In order to improve the sensitivity of marker detection, the
tracking can be used as a filter as follows: the single frame
detection method may be set to produce an abundance of objects some
of which are not actual markers on the subject's body. Once they
are tracked, criteria for filtering them can be used based on
characteristics of their movement. For example, markers that remain
static for a long time even when subject makes some position change
can be filtered. Furthermore, if one is concerned with tracking
only breathing motion, non-static markers with a different
frequency of motion (or uncorrelated motion) than the identified
breathing markers are filtered.
Movement of non-marker features can also be tracked. For example, using IR illumination, the subject's eyes are visible and can be tracked in a similar manner to an artificial marker. In this way, the subject's awakenings can be quantified through measurement and recording of the times the subject's eyes are open.
From Video to Respiration
For each marker that is detected or tracked in a frame, its center
of mass image coordinates are calculated by averaging the locations
of the pixels in its cluster. The raw data of positions is
processed further in order to reduce noise due to the measurement
systems and subject motion. Furthermore the resulting motion can be
classified as breathing or non-breathing motion.
Noise Reduction
Each individual marker time series (2-dimensional x and y) is
processed on-line using noise reduction filters, such as those
using a measurement window around each point to process the data
using a mathematical operation such as averaging, weighted
averaging, Kalman filtering or the median operation. Bandpass
filters damping high frequency noise can also be applied.
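As an illustrative sketch only (assuming SciPy; the window length
and the 0.1-1.0 Hz passband are assumptions chosen for the example,
not values prescribed by this description), such filtering could
look as follows:

    import numpy as np
    from scipy.signal import medfilt, butter, filtfilt

    def denoise_marker_track(x, fps):
        # x: 1D time series of one marker coordinate (pixels).
        # 1) Median filter over a short window suppresses impulsive noise.
        x = medfilt(x, kernel_size=5)
        # 2) Band-pass filter keeps plausible breathing frequencies and
        #    damps high-frequency noise.
        b, a = butter(2, [0.1, 1.0], btype="bandpass", fs=fps)
        return filtfilt(b, a, x)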
Dictionary learning methods can also be used to reduce noise in the
time series either prior to the elimination of non-respiratory
motion or succeeding it. These methods are useful to reconstruct
segments of the tracking signals using a small number of short time
segments chosen from a dictionary which represents an
overcomplete basis set for breathing (see for example: Ron
Rubinstein, Alfred M. Bruckstein, and Michael Elad, Proceedings of
the IEEE 98(6) 2010). The dictionary is learned from time signals
taken from a training set. This training set may be collected
either a priori on a set of healthy subjects or from the baseline
measurements on the subject in question. Additional methods of
sparse reconstruction algorithms can be used to reduce noise in the
original signal.
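Purely as an illustration of the dictionary-based approach (a
sketch assuming scikit-learn; the segment length, dictionary size
and sparsity level are assumptions):

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    def learn_dictionary(training_segments, n_atoms=32):
        # training_segments: (n_segments, segment_len) snippets of breathing
        # signals from a training set or from the subject's baseline.
        dl = MiniBatchDictionaryLearning(
            n_components=n_atoms,
            transform_algorithm="omp",    # sparse coding by orthogonal matching pursuit
            transform_n_nonzero_coefs=3)  # few atoms per segment
        dl.fit(training_segments)
        return dl

    def denoise_segments(dl, segments):
        # Reconstruct each segment from its sparse code over the dictionary.
        codes = dl.transform(segments)
        return codes @ dl.components_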
Another possibility for isolating breathing motion from other
motions, such as body movement, heartbeat or body tremors, is to
first learn the principal image direction of the breathing vector
for each marker when the subject is at rest. All motion in the
perpendicular direction is regarded as "noise" and removed. Other
physiological information can be extracted from this removed "noisy
component" such as the frequency of cardiac oscillations (heart
rate) and tremor frequencies. Heartbeats can be identified using
band-pass filters associated with frequencies learned from the
subject's data or from an additional heart-rate sensor (for
example, the pulse oximeter).
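A sketch of this separation (illustrative only; the breathing axis
is learned from a rest period, after which motion is split into a
breathing component and a perpendicular residual carrying cardiac
information):

    import numpy as np

    def learn_breathing_axis(track):
        # track: (T, 2) image positions of one marker while the subject is at rest.
        centered = track - track.mean(axis=0)
        # First principal component = direction of largest motion variance.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return vt[0]                     # unit vector in the image plane

    def split_motion(track, axis):
        centered = track - track.mean(axis=0)
        breathing = centered @ axis      # projection on the breathing axis
        perp = np.array([-axis[1], axis[0]])
        residual = centered @ perp       # "noise": cardiac oscillations, tremor
        return breathing, residual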
In the case that the camera is mounted on a non-stationary device,
such as a pole, a lamp arm or another moveable accessory,
additional noise may enter the video. Such noise can be tracked by
following the movement of a stationary object such as a wall, the
floor or a marker on a wall. Alternatively, the movement can be
tracked by a 3D motion sensor attached to the camera that records
its vibrations and movements. These recorded movements can be used to align
camera frames. Camera vibrations can be compensated by either
translating subsequent frames by the measured camera movement and
known geometry, or by aligning the image frames based on fixed
features in the image but outside the subject's body.
Extracting Subject Position
In order to quantify measurements it is advantageous to know the
subject's position at any point in time. The available assigned
positions may be, for example, the following: Supine (belly-up),
Prone (belly-down), On right side, On left side, Out of bed.
The set of subject positions may be enlarged to incorporate
intermediate positions and sitting positions. Each position can be
defined according to the visibility of each marker in that
position. So for a particular subject position, there will be a
subset of markers that must be seen (vis=1), a subset that cannot
be seen (vis=0) and another subset that may be seen (vis=0.5). At
any given time the subject position can be deduced by assigning the
nearest subject position, where the distance to subject position i
is given by:
$d_i = \sum_{j=1}^{N} |vis_j^{(i)} - vis_j|$
where vis_j is the visibility of marker j in the current subject
posture and vis_j^{(i)} is the visibility of that marker in subject
position i. The reference values of the visibility in each subject
position can be set once a priori, provided the markers and camera
are placed similarly from subject to subject. Note that the
anatomical position of each marker is pre-set and determined
according to some unique attributes of the marker, such as size,
shape, wavelength emitted, temporal pattern, etc.
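An illustrative sketch of this nearest-position assignment (the
marker count and the visibility table are hypothetical examples):

    import numpy as np

    # Reference visibility of each of N markers in each named position:
    # 1 = must be seen, 0 = cannot be seen, 0.5 = may be seen.
    REFERENCE = {
        "supine": np.array([1, 1, 1, 0.5, 0.5]),
        "prone":  np.array([0, 0, 0, 0.5, 0.5]),
        "left":   np.array([0.5, 1, 0, 1, 0]),
        "right":  np.array([0.5, 1, 0, 0, 1]),
    }

    def classify_position(vis):
        # vis: observed visibility (0 or 1) of each marker at the current time.
        # Assign the position i that minimizes d_i = sum_j |vis_j^(i) - vis_j|.
        dists = {p: np.abs(ref - vis).sum() for p, ref in REFERENCE.items()}
        return min(dists, key=dists.get)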
Extracting Non-Breathing Motion
The marker movements consist of respiratory movements, cardiac
movements and other gross movements unrelated to the
cardio-respiratory system. It is important for some applications
such as sleep studies to identify periods of such movements,
extract them, and continue measuring breathing parameters in their
presence when possible. Such movements may include leg movements,
twisting and turning in bed, bed-exit and others. The respiratory
analysis may be carried out differently during periods of body
movement and therefore these periods need to be identified.
As the markers are tracked one can identify frames in which there
was significant non-breathing motion as follows: For each marker at
each frame, the image positions of the marker in all the recent
frames (say, 10 seconds) are analyzed through principal component
analysis. When the body motion is pure breathing, the magnitude of
the first principal component of each marker is significantly
larger than the magnitude of the second principal component. The
direction and absolute size of the first principal component vector
is dependent on the marker location, relative camera position and
subject's physiology. However, what is found for normal breathing
motion is that, locally, it is along a fixed 3D axis throughout the
entire cycle. By choosing a camera position and marker positions
whose lines of sight are not parallel to these 3D physiological
breathing directions, the local directionality translates to the 2D
image. In the image coordinates, breathing motion is characterized
by locally possessing a principal component that is significantly
larger than the second one in the 2D image. Once the magnitude of
the second principal component becomes significant relative to the
first principal component for a few frames (for 0.5 seconds for
example) of any chest/abdominal marker, it is an indication that
some kind of significant non-breathing motion has occurred. At this
stage tracking is reset and detection is carried out again. In
cases where the camera line of sight is close to parallel with a
marker breathing axis, the above Principal Component Analysis
should be carried out in 3D (see below how the 3D vector is
reconstructed).
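A sketch of this criterion on a sliding window of marker positions
(the 10-second window and the ratio threshold are assumptions for
the example):

    import numpy as np

    def is_non_breathing(track, ratio_threshold=0.4):
        # track: (T, 2) image positions of one marker over the recent window.
        # During pure breathing the second principal component is much
        # smaller than the first.
        centered = track - track.mean(axis=0)
        # Singular values are proportional to the principal component magnitudes.
        s = np.linalg.svd(centered, compute_uv=False)
        return s[1] / s[0] > ratio_threshold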
An additional indicator of non-breathing motion is obtained by
counting the number of pixels that have changed their gray level
significantly (greater than a fixed threshold) from frame to frame
of the video sequence. The number of pixels which obey this
criterion during breathing can be determined from a baseline
period, say, 30 seconds when the subject being monitored is not
moving significantly (aside from breathing motion). Using the
statistics during the baseline period, the threshold on the number
of pixels that change significantly for identifying non-breathing
motion is set.
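An illustrative sketch of this indicator (the gray-level threshold
and the number of baseline standard deviations are assumed
parameters to be tuned from the baseline statistics):

    import numpy as np

    def changed_pixel_count(frame_a, frame_b, gray_threshold=15):
        # Count pixels whose gray level changed by more than a fixed threshold.
        diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
        return int((diff > gray_threshold).sum())

    def motion_threshold_from_baseline(baseline_counts, n_sigma=4):
        # baseline_counts: per-frame changed-pixel counts over a quiet baseline
        # period (say 30 s of breathing-only motion).
        counts = np.asarray(baseline_counts)
        return counts.mean() + n_sigma * counts.std()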
Once the single-marker criterion described above is met or the
overall frame-to-frame difference is detected, an epoch of non-breathing
motion is found. The epoch ends once the markers are redetected and
can be tracked for a short period without non-breathing being
detected.
2D to 3D Motion Tracking
The 3D displacements of the markers in the body frame of reference
need to be determined in order to calculate various parameters
such as breathing volumes. With a 3D camera, the additional depth
coordinate of each pixel is provided. However, these coordinates
are in the camera frame and not relative to a body frame of
reference. The following description illustrates how the 3D motion
can be obtained using only the 2D image coordinates and a marker
that consists of 2 or more LEDs.
One possible marker configuration includes 2 LEDs fixed to a rigid
plate and separated from each other by a known distance, say 4 cm.
The structure of the marker can be that of FIG. 6 with electronic
circuitry and a battery source that powers 2 LEDs. Each of the 2 LEDs
is detected and tracked as described above. In order to calculate
their 3D displacement and vector direction, consider two video
frames in time: the end of an exhalation (beginning of inhalation)
and the following end of inhalation. These frames can be found by
calculating 1D curves of the marker displacement along the
instantaneous principal component direction and subsequently
analyzing these marker displacement curves for minima and maxima
using standard 1D methods. The shape of the quadrilateral formed by
the four 3D positions of the 2 LEDs (two at the beginning of
inhalation and two at the end), is well-approximated by a
parallelogram. This is due to the physiological characteristic that
the breathing vectors are more or less fixed over small body
surface regions (say 5×5 cm) of the abdomen or chest. One
condition is not to locate the markers over points of discontinuity
such as the ribcage-abdomen boundary. The base of the parallelogram
has fixed length, since it lies on a rigid marker, but the length
of its sides is unknown. First we describe how this length is
determined assuming the parallelogram angle is known. For this task we use
well-known camera pose estimation techniques, which are used to
determine the 3D camera position of a rigid body given the image
coordinates of at least 3 known points in the object coordinate
system (see for example POSIT algorithm in Dementhon and Davis,
International Journal of Computer Vision, Vol. 15, June 1995). The
above-described parallelogram can be regarded as a virtual "rigid
body": two of its vertices possess image coordinates obtained from
the frame at the beginning of the inhale while the two other
vertices are determined from the frame at the end of the inhale. In
the object frame, the coordinates of the vertices relative to a
vertex at the beginning of inhale are
1. (4, 0, 0)
2. (h cos θ, h sin θ, 0)
3. (4 + h cos θ, h sin θ, 0)
where θ is known but h is yet to be determined. Had h been
known, the camera pose could be determined using standard methods.
So a set of values for h is assumed, say in the range of 1-20 mm,
and thus a set of corresponding camera poses are determined. The
correct h is determined as the one which leads to minimal least
square error between the measured image coordinates from the
tracking and the coordinates determined from the corresponding
camera pose. In a similar manner the angle .theta. can be varied in
addition to h in order to determine the camera pose with least
square error. Thus the angle can be determined without prior
knowledge.
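For illustration, the search over h (and optionally θ) can be
sketched with OpenCV's pose solver standing in for the cited POSIT
algorithm (the camera matrix K, the absence of lens distortion, and
the grid ranges are assumptions):

    import numpy as np
    import cv2

    def fit_breathing_parallelogram(img_pts, K, d_cm=4.0,
                                    h_grid=np.arange(0.1, 2.0, 0.05),
                                    theta_grid=np.radians(np.arange(30, 151, 5))):
        # img_pts: (4, 2) image coordinates of the parallelogram vertices:
        # the two LEDs at the beginning of inhale and the two at the end.
        img = np.asarray(img_pts, np.float32).reshape(-1, 1, 2)
        dist = np.zeros(5)                 # assume no lens distortion
        best, best_err = None, np.inf
        for h in h_grid:                   # candidate side lengths (cm, i.e. 1-20 mm)
            for th in theta_grid:          # candidate parallelogram angles
                obj = np.array([[0, 0, 0],
                                [d_cm, 0, 0],
                                [h * np.cos(th), h * np.sin(th), 0],
                                [d_cm + h * np.cos(th), h * np.sin(th), 0]],
                               np.float32)
                ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist)
                if not ok:
                    continue
                proj, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)
                err = np.sum((proj - img) ** 2)   # reprojection error
                if err < best_err:
                    best, best_err = (h, th), err
        return best                        # (h, theta) with least-squares error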
Once the breathing vector from the beginning of inhale to the end
of inhale is found, the displacements for intermediate frames can
be found from the principal component displacement graphs by simple
linear scaling. Specifically, for each monotonic segment of the 1D
graph of FIG. 11, the following transformation is carried out from
left to right: y'(t) = y'(t1) + factor·(y(t) − y(t1)), where
factor = Y3D(t1, t2)/(y(t2) − y(t1)). The function y(t) is the
original 1D principal component projection at frame t, and y'(t) is
the 3D displacement graph at frame t. A monotonic segment starts at
frame t1 and finishes at frame t2. y'(t=1) is set to y(t=1), and
Y3D(t1, t2) is the length of the 3D vector corresponding to the h
found for the breathing maneuver from frame t1 to frame t2.
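A direct transcription of this scaling for one monotonic segment
(an illustrative helper; t1, t2 come from the peak analysis
described above):

    import numpy as np

    def scale_segment(y, t1, t2, y3d_length, yprime_t1):
        # y: 1D principal-component projection per frame.
        # Rescale the monotonic segment [t1, t2] so that its total excursion
        # equals the 3D breathing-vector length Y3D(t1, t2) for that maneuver.
        factor = y3d_length / (y[t2] - y[t1])
        t = np.arange(t1, t2 + 1)
        return yprime_t1 + factor * (y[t] - y[t1])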
In order to increase the accuracy of determination of the 3D
displacements, markers with more than 2 LEDs can be used as
follows:
1. In a first frame, say the beginning of inhale, the 3D camera
position is determined using standard methods based on the
knowledge of 3 or more points on the rigid body (the marker).
2. The positions of the LEDs in a subsequent video frame (end of
inhale) are assumed to all be translated from their original
positions at the beginning of inhale by a fixed 3D vector [a, b, c]
that represents the breathing vector. Using the camera pose
determined from the first frame (rotation and translation) along
with the image positions of the LEDs in the second frame, the
values of a, b and c can be determined by least squares.
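Step 2 can be sketched as a small nonlinear least-squares problem
(assuming SciPy; R, t and the intrinsic matrix K come from the
first-frame pose estimation, and the pinhole projection below is
the usual model, not a routine specific to this description):

    import numpy as np
    from scipy.optimize import least_squares

    def project(K, R, t, X):
        # Pinhole projection of 3D points X (N, 3) into the image.
        cam = (R @ X.T).T + t
        uv = (K @ cam.T).T
        return uv[:, :2] / uv[:, 2:3]

    def breathing_vector(K, R, t, led_obj, img_pts_t2):
        # led_obj: (N, 3) LED positions in the marker frame at the start of inhale.
        # img_pts_t2: (N, 2) observed image positions at the end of inhale.
        # Solve for the common translation v = [a, b, c] by least squares.
        def residuals(v):
            return (project(K, R, t, led_obj + v) - img_pts_t2).ravel()
        return least_squares(residuals, x0=np.zeros(3)).x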
In order to limit the power consumption or the use of multi-LED
markers throughout the monitoring, the following methods can be
used:
1. Prior knowledge learned by using 3 or more LEDs on a marker at
specific body locations on a set of training subjects. For example,
it was found that the breathing vector for an abdominal marker
along the central superior-inferior body axis is close to being
perpendicular to the marker's rigid plane.
2. Learning breathing angles on a training set of subjects using
multi-LED markers and applying the learned results to real-time
monitoring of subjects using 2-LED markers. These learned angles
are determined for a number of subject breathing patterns and a
number of marker placements on the training subjects. The relevant
vector directions according to body type (to be described
hereinafter), anatomical location, body positioning and breathing
type (deep or shallow, for example) can be used once a test subject
is monitored. Alternatively, a representative vector direction for
a wide range of conditions (approximate surface region, all breath
types, etc.) is often sufficient.
3. Use of 3 or more LEDs on each marker on the subject in question
during baseline setup or intermittently. These can be the same
markers that are used during monitoring; in order to limit power
consumption, the extra LEDs are turned off during regular
monitoring or flash in some time-dependent way to reduce power
consumption. During the baseline setup all the marker LEDs are used
to determine the vector directions; subsequently only 2 or more of
them are used to determine the vector displacement.
Another configuration in which
3D motions can be reconstructed is by using several distinct
markers. For example, several single-LED markers can be distributed
at 2 or more distinct locations on the chest and the distances
between them measured using measuring tape or some other
measurement means at a particular time in the respiratory cycle
(say end of exhale). From these measurements, the camera position
can be determined. Using the directions of breathing motion at each
of these locations (determined using methods such as those
described herein), the breathing displacements can be estimated
analogously to the way described for the multi-LED markers, using
image frames at two different times (t1 & t2) in a single
respiratory cycle, say beginning of inhale and end of inhale. Here
the "virtual rigid body" that is being tracked is the 3D slab
formed by the vertices of the marker locations at t1 (define one
face of slab) and the marker locations at t2 (define second face of
slab). In the specific case where there are only 2 single markers,
the virtual slab reduces to a quadrilateral with two long almost
parallel sides of known length.
Sensor Data Fusion:
The video analysis and inertial sensor measurements of breathing
motion can be combined to provide an improved estimate of the
respiratory displacements. This can be achieved by utilizing the
video measurements of displacements in the above described Kalman
filter. In order to realize this combination, the relative rotation
between the camera frame of reference (fixed) and that of the
accelerometer should be considered. Furthermore, the markers and
the accelerometer must be on the same rigid base or very close to
each other, so that they actually measure the same body
movement.
Missing Data:
Aside from achieving more accurate results by fusing measurements
from both the imaging and the inertial sensor, the two sensors can
be used to fill in measurements when one of the sensors cannot make
a measurement, for example when there is no line of sight between
the imaging sensor and the LEDs, or when there is a partial
obstruction of some of the LEDs. This can be achieved by using the previously described
Kalman filter with measurement covariances, which are dynamically
changed according to the quality of the data measurements. For
example, when an LED becomes completely obstructed, the covariance
of its measurements is set to the maximal value for the filter
iterations.
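A minimal one-dimensional sketch of this idea (a scalar Kalman
update whose measurement variance R is inflated while the LED is
obstructed; all numbers are illustrative and the filter below is a
generic random-walk filter, not the specific filter described
earlier):

    import numpy as np

    def kalman_update(x, P, z, R, Q=1e-4):
        # x, P: state estimate and its variance; z: new measurement;
        # R: measurement variance, adjusted dynamically per data quality.
        P = P + Q                  # predict (random-walk state model)
        K = P / (P + R)            # Kalman gain
        x = x + K * (z - x)        # correct with the measurement
        P = (1 - K) * P
        return x, P

    R_NOMINAL, R_OCCLUDED = 0.01, 1e6  # variance jumps when the LED is obstructed
    measurements = [(0.10, True), (0.12, True), (0.00, False), (0.15, True)]
    x, P = 0.0, 1.0
    for z, visible in measurements:    # stream of (value, visibility) pairs
        x, P = kalman_update(x, P, z, R_NOMINAL if visible else R_OCCLUDED)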
3D Surface Visualization
The extracted respiratory motion can be visualized in 3D as
follows. First a 3D surface model representing the subject's body
is extracted. If the camera is a 3D depth camera, the surface
geometry is available from the depth measurements. In the case of a
2D camera, there are several options:
One possibility is to use two or more image sensors or to move the
camera along a rail in order to obtain two or more views of the
subject. If only a single camera is used the views will not be a
true stereo pair. However, by considering breaths of the same 3D
magnitude, similar duration and similar neighboring breaths but
taken from two different camera positions, corresponding frames can
be considered as stereo pairs, provided there was no significant
non-breathing motion during the acquisition. Thus, for example, the
beginning-of-inhale frames are considered a stereo pair, and from
there one can progress through the entire breath. The artificial stereo
pairs can be viewed remotely using a 3D viewing device.
Another possibility is to register the stereo images to each other
in order to get a 3D model of the person's trunk. For the
registration one needs to solve the correspondence problem (see for
example D. Scharstein and R. Szeliski. A Taxonomy and Evaluation of
Dense Two-Frame Stereo Correspondence Algorithms, International
Journal of Computer Vision 47(1/2/3), 2002), which can typically be
either correlation-based or feature-based. The LED markers could be used
as features. In addition, further features can be obtained by
using additional markers of other types, such as strips printed
with a retro-reflective pattern, or clothing or a sheet with
patterns visible to the image sensor. The sheet or clothing can be
adjusted to fit snugly around the patient's trunk for the duration
of the stereo acquisition. The requirement on these features is
that they remain at relatively constant body locations only for a
short period (say 1 minute), as opposed to long-term monitoring,
where adhesion of the marker to the subject's body is needed.
Once the 3D surface model is extracted from a single stereo pair,
the 3D respiratory motion extracted from each of the markers can be
visualized by deforming the surface model by an amount which is
scaled (by a factor greater than 1) compared to the extracted
respiratory marker measurements. In order to show the evolution of
the entire surface model, interpolation and extrapolation are
performed between the markers and extended to the trunk
boundary.
Once the breathing displacements at the marker positions are
determined, an entire field of displacements can be obtained by
interpolation and extrapolation to the subject's trunk. The
subject's trunk position can be determined through segmenting a
video frame. This can be carried out through standard region
growing methods using the trunk markers as anchors and pixels with
a significant optical flow vector as candidate region pixels. The
optical flow is calculated between two or more frames representing
large deviations of the respiration cycle (peak exhale to peak
inhale for example). Also markers known to belong to other body
parts such as those on a leg or arm should be excluded from the
grown region.
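An illustrative sketch of this interpolation step, assuming SciPy
(nearest-neighbor filling stands in for extrapolation outside the
markers' convex hull):

    import numpy as np
    from scipy.interpolate import griddata

    def displacement_field(marker_xy, marker_disp, trunk_mask):
        # marker_xy: (N, 2) marker pixel coordinates; marker_disp: (N,)
        # breathing displacement per marker; trunk_mask: (H, W) boolean
        # segmentation of the subject's trunk.
        ys, xs = np.nonzero(trunk_mask)
        pts = np.column_stack([xs, ys])
        # Linear interpolation between the markers...
        field = griddata(marker_xy, marker_disp, pts, method="linear")
        # ...with nearest-neighbor fill as a simple extrapolation to the boundary.
        nearest = griddata(marker_xy, marker_disp, pts, method="nearest")
        field = np.where(np.isnan(field), nearest, field)
        out = np.full(trunk_mask.shape, np.nan)
        out[ys, xs] = field
        return out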
The 3D surface of the extracted subject's trunk can be subsequently
extracted using the methods described hereinabove. The dynamics of
the trunk surface can be shown in real time providing a 3D
visualization of the breathing movement. The color of each pixel is
set in proportion to its breathing displacement at that time.
Such visualization can be streamed automatically to a remote device
(nurse station or caregiver's mobile device), once an alert is
triggered to the remote device. Conversely, a caregiver can use
this view remotely any time they want to check on how the subject
is doing. Several visualization schemes are possible in these
situations, for example:
a. The normal camera view;
b. The magnification of movement view described above;
c. A different color display, dynamic range, different illumination
or different camera filter which are geared towards enhancing the
view of the subject rather than the markers attached to him;
d. Control over viewing angle and position of the camera to observe
scenes surrounding the subject that were out of view.
Calibrations & Baselines:
Camera Calibration:
For accurate reconstruction of 3D marker and subject coordinates,
it may be necessary to calibrate the camera once it is first
connected, using known grid-pattern targets (without a subject). In some
embodiments the camera is used at two positions to mimic stereo
acquisition. Here too the camera should be calibrated with respect
to the fixed known physical baseline (for instance, a rod placed on
the bed). These calibrations can be achieved by many available
tools (see for example:
http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example5.html).
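For illustration, an equivalent calibration with OpenCV and a
checkerboard target could be sketched as follows (the board
geometry, square size and file names are assumptions):

    import glob
    import numpy as np
    import cv2

    BOARD = (9, 6)          # inner-corner grid of the checkerboard (assumed)
    SQUARE_CM = 2.5         # square size in cm (assumed)

    # 3D coordinates of the board corners in the target's own frame (z = 0).
    obj = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_CM

    obj_pts, img_pts = [], []
    for path in glob.glob("calib_*.png"):          # several views of the target
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, BOARD)
        if found:
            obj_pts.append(obj)
            img_pts.append(corners)

    # Intrinsic matrix K and lens distortion coefficients.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)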
Camera Positioning:
The video sensor continuously tracks the markers that are
positioned on the subject. The field of view should be set to
include the markers from a perspective that most pronouncedly
reveals the breathing movement. The determination of breathing
displacements that are perpendicular to the image plane will
usually be less accurate than those that lie in the image plane due
to geometrical considerations. According to one embodiment of the
invention, to optimize the camera position (or positions in the
presence of more than a single camera), calibration is carried out.
The camera positions and fields of view can be set-up manually or
automatically. For automatic set-up, a calibration procedure is
used. One such exemplary process for the case of a single camera is
as follows:
1. The camera is positioned to include at least one marker in its
field of view and put into focus.
2. Once the focus is set, a predetermined set of viewing angles is
used to collect short video-clips of the subject. In each of these
video-clips the markers are detected and tracked.
3. The optimal camera viewing angle is now chosen. The main
criterion for this choice is related to which markers are visible
in the field of view. For each video clip i, the following score is
calculated: $Score_i = \sum_m W_m$, where the sum is over
all the visible markers m and W_m is a predefined weight per
marker. The identity of each visible marker can be deduced from the
video clip either according to properties such as its shape, its
relative position to other markers or its color (using the camera's
visible light sensor). The identity of a marker can also be deduced
from its time-dependent signal. For example, each marker can have a
distinct predetermined frequency with which it flashes between a
minimal and maximal intensity level. The above calibration
procedure is carried out initially at set-up or whenever tracking
of the current markers has been aborted, such as when the subject
walks out of the room or when the bed and camera move relative to
each other. The subject need not sit still during the set-up, but
obstructing the line of sight between the camera and subject is to
be avoided, to shorten collection time.
Personalization and Volume Calibration:
With the above system, various movements can be tracked and
analyzed. In order to convert these movements to physiological
quantities, some calibration may be necessary. In one embodiment of
the invention the model for breathing volume measurements is
calibrated as described hereinafter, to take into account the
characteristics of the subject and can be automatically
recalibrated with changes in both subject position and with
breathing patterns. The methods described up to now allow for the
determination of 3D displacements of the chest wall but not of
breathing volumes. The possible methods for automatic calibration
of breathing volumes are described in the following.
Spirometer-Based Calibration Per Subject:
The normal baseline tidal volume for an individual depends on many
factors such as: age, sex, obesity, pose, disease conditions, time
elapsed since anesthesia, etc. One possibility for calibration of
baseline tidal volume of a subject is to carry out a calibration
procedure of the above video device versus a spirometer which
measures flow rate of air inhaled and exhaled. In order to verify
how the markers move at different tidal volumes and in different
poses, additional calibration measurements need to be carried
out on the subject. These tests can be done prior to surgery or to
an invasive or imaging procedure, for patients who are to be
monitored during or post procedure/surgery.
Limited Spirometer-Based Training Set Followed by
Personalization:
An alternative to the calibration per subject is to learn/build a
model on several subjects or subject simulators a priori in a
training session in which several breathing patterns and poses are
recorded both with the video device and a spirometer. In addition,
other physiological data is collected on these training subjects
using the camera, specific covering (for example a snug sheet or
undergarments only) and possibly additional markers. In particular,
the following information can be collected:
1. Silhouette of the trunk area which moves during breathing: This
is achieved in a number of ways, but most simply by looking at
differences of video frames taken at opposite phases during a
respiratory cycle when the scene is illuminated. Pixels which
change significantly more than they do for consecutive frames
participate in respiration. These pixels can be post-processed to
form a smooth region by region-growing techniques.
2. 3D shape of the subject's trunk, using either a large number of
markers as described above or using shape-from-shading techniques
with specific ambient illumination (B. Horn, Obtaining shape from
shading information, in P. Winston, editor, The Psychology of
Computer Vision, McGraw-Hill, New York, 1975). Other methods of
determining 3D shapes of human bodies can be used (see for example:
B. Allen, Learning body shape models from real-world data, PhD
Thesis, University of Washington, 2005). These are based on using a
3D camera in the training set, from which the 3D body shape can be
extracted. Furthermore, the dominant features that differentiate
between body shapes can be identified and mapped onto features that
can be deduced from a regular 2D camera. For a test subject, these
2D features can be collected in a calibration session and mapped to
find the relevant 3D body shape.
3. Various distances between body landmark features and/or markers
and/or features of the covering.
For each subject the spirometer volumes are fit to a function of
the marker displacements and phases. The number of data points
entering the fit is determined by the video and spirometer sampling
rates, which are of the order of tens of samples per second, so
that, say, a 5-minute trial involves several thousand data points. This is
repeated for additional subject poses (lying on the side, for
example). The volume function can be the result of a non-linear
regression scheme, e.g. SVM regression (B. Scholkopf and A. J.
Smola. Learning with Kernels. MIT Press, 2002).
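An illustrative sketch of such a regression with scikit-learn's SVM
regressor (the feature layout and kernel parameters are
assumptions):

    import numpy as np
    from sklearn.svm import SVR

    def fit_volume_model(X, y):
        # X: (n_samples, n_markers) signed marker displacements per frame;
        # y: (n_samples,) simultaneous spirometer volume readings.
        model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
        model.fit(X, y)
        return model

    # During monitoring, the fitted model estimates volume from displacements:
    # volume = model.predict(current_displacements.reshape(1, -1))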
This model is kept in a database that can be accessed by the
computer device of the breathing monitor. The model learned can be
a linear function of the displacements; the phases can be taken
into account by using signed displacements which are determined by
setting a reference zero displacement for each marker. One way the
zero is assigned is by computing a moving average of the raw
displacements over the last few respiratory cycles (say 5 breaths
for example). The signed displacement is obtained by subtraction of
the reference zero displacement from the measured one at each time
point. The relative weights between markers can be based on their
relative "reliability" which is provided by the variance in
equi-volume measurements or the variance of displacements during
stable tidal breathing.
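A sketch of the signed-displacement computation (the averaging
window spanning a few respiratory cycles is an assumed parameter):

    import numpy as np

    def signed_displacement(raw, window):
        # raw: 1D raw displacement trace of one marker; window: number of
        # samples covering the last few respiratory cycles (say 5 breaths).
        kernel = np.ones(window) / window
        # The moving average of the raw displacements defines the zero reference.
        zero_ref = np.convolve(raw, kernel, mode="same")
        return raw - zero_ref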
Alternatively, the phases can be taken into account by learning
different models for different scenarios of phase relations between
the markers. For example one function for all markers in-phase,
another one for chest and abdominal markers out of phase, and so
forth. Once a new subject is to be monitored, a short automatic
calibration session (e.g., 30 seconds) is performed according to a
predefined protocol, such as "subject lies on back", during which
various relevant measurements are made for an estimate
of the subject's silhouette and 3D shape as in the training set. In
addition, other features related to trunk size are measured: these
could be, for example, the distances between markers, which can be
extracted using the device's image frames, or distances between
body landmarks identified through image analysis or manually. In order
to convert these distances to actual length units, e.g., cm, one or
more objects of known dimensions are included in the field of view
or on the body for calibration purposes.
Using the data gathered on the current test subject, the
"nearest-neighbor" database subjects are extracted from the
database. The distance between two subjects is measured, for
instance, by measuring the distance between their trunk images
(within the silhouette) as follows:
1. The two images are registered using body landmarks and marker
locations placed in corresponding anatomical positions. The
registration is non-rigid but can be confined, for instance, to
affine transformations.
2. The distance between the 2 images is determined by their scale
factors (for example, by the difference of the geometric mean of
the scale factors from 1).
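For illustration, this distance can be sketched with OpenCV's
affine estimator applied to corresponding landmarks (a hypothetical
helper following the geometric-mean criterion above):

    import numpy as np
    import cv2

    def subject_distance(landmarks_a, landmarks_b):
        # landmarks_a, landmarks_b: (N, 2) corresponding landmark/marker
        # coordinates in the two subjects' registered trunk images.
        M, _ = cv2.estimateAffine2D(np.float32(landmarks_a),
                                    np.float32(landmarks_b))
        # Scale factors of the affine map = singular values of its 2x2 part.
        s = np.linalg.svd(M[:, :2], compute_uv=False)
        # Distance: deviation of the geometric mean of the scales from 1.
        return abs(np.sqrt(s[0] * s[1]) - 1.0)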
Once the database subjects that most closely match the current
subject are found, the dependence of spirometer volume on 3D
marker displacements found for the database subject is utilized to
estimate volume. The 3D vector displacements are scaled by the
scale factors before being entered into the function associated
with the database subject.
Change of position: The marker movements and breathing volumes
change as a function of subject pose. The pose of a subject is
deduced in real time as described herein. For each subject
position, the relevant database estimation for the volume is used.
However, the relevant "nearest neighbors" from the database are
selected during the initial auto-calibration session of the test
subject.
FIG. 1 is a flow chart of the monitoring step that starts with the
inputs of the first few frames 101 after which, in step 102, the
markers are detected according to the procedure of FIG. 2. The
detection step is followed by the input of the next frame 103,
after which, in step 104, the markers are tracked over time as
detailed in FIG. 3. From the frames the subject's pose is extracted
in step 105, and the respiration properties are calculated in step
106, as further detailed in FIG. 4. At the end of the calculation process the
system checks, 107, if the tracking has been lost and in the
affirmative case the process is started anew in step 101, while in
the negative case the next frame is input in step 103.
FIG. 2 details the detection steps. At the beginning of this step
the system already has at least one frame and the detection starts
by getting the new frame, 201. A comparison step 202 is performed
to determine whether the frame is stable compared to the previous
one. In the negative case another frame is obtained and the process
starts again at step 201, while in the affirmative case the frame
is set as a reference frame and in step 203 all pixels greater than
a threshold value are located. The pixels are then clustered into
"clouds" in step 204 and the clouds are filtered according to their
features in step 205. In step 206 the markers (clouds) are tracked
from the reference frame to the current frame and the success of
this tracking is verified in step 207. If the tracking was
unsuccessful, the process starts again at step 201; if it was
successful and t seconds have elapsed since the reference frame
(verified in step 208), tracking of markers whose motion is
consistent (similar frequency) is continued in step 209, and the
analysis of marker movement begins in step 210. If the time that
passed is less than t seconds, the next frame is obtained in step
211 and the markers are checked again in step 206.
FIG. 3 is a flowchart of the tracking procedure, which starts by
setting the first frame as a reference frame at 301. Then, the next
frame is obtained, 302 after which the system checks, in step 303,
whether t seconds have passed since the reference frame, in which
case the last frame is set to be the reference frame in step 304.
In the negative case, step 305 checks whether the brightness has
changed relative to the reference frame, in which case, again, the
frame is set to be the reference frame in step 304. In the negative
case, the 2-D motion vectors are calculated between the current and
the reference frame in step 306. This also happens after the last
frame has been set to be the reference frame in step 304. The
system then checks, in step 307, whether the tracking is
successful, using criteria such as making sure that the extent of
the marker movement is not too large, that the pixel movements
within the marker are consistent, and that the mean square error of
the numerical fit is not too large; in the affirmative case it goes back
to step 302 and gets the next frame. If tracking was unsuccessful
the detection process of FIG. 2 is performed again in step 308.
FIG. 4 is a flowchart of the analysis process. The process begins
by getting the 2D tracking locations of the markers, in step
401. Step 402 calculates the projection of the marker location onto the
current principal component direction from the last t seconds. If a
new peak is identified in the principal component function over
time, in step 403, the 3D position of the marker at the time of
"New Peak" frame is estimated in step 404; else, step 401 is
repeated. The 3D position of the marker at all frames between the
two most recent peaks is estimated by scaling the principal
component graph in step 405, and the RR, phase delay and volumes
for all frames between the two most recent peaks are calculated in
step 406, after which the process restarts at step 401.
FIG. 5 is a flowchart of a volume calibration procedure that may be
carried out according to some embodiments of the invention. The
calibration process begins in step 501 where mechanical calibration
is performed. Then, in step 502, the subject is recorded from each
camera in supine position in various illumination conditions. In
step 503 the markers' positions on the subject are tracked to obtain
3-D displacement vectors, and in step 504 the silhouette and the 3D
shape of the subject's trunk are extracted. In step 505 the "nearest
neighbor" database subject is found by registering the silhouette
and the 3D shape via a non-rigid transformation. In step 506 the
scaling factor between the subject and his nearest neighbor is
determined. The calibration function is found in step 507 by using
the nearest neighbor estimator function with the current subject's
scaled displacement.
Illustrative Markers
FIG. 6 illustrates the main parts of a marker according to one
embodiment of the invention, consisting of LED assembly 601, which
contains an LED, together with the circuitry needed to operate it.
Operating LEDs is conventional in the art and is well known to the
skilled person and therefore said circuitry is not shown in detail,
for the sake of brevity. Assembly 601, in this embodiment of the
invention, is meant for multiple uses and operates together with
disposable part 602, which in some embodiments of the invention is
applied directly to the subject's skin, e.g. using an adhesive
surface provided at its bottom. In the embodiment of FIG. 6,
however, disposable part 602 is located above a fabric 603, which
may be, for instance, a subject's pajama or a blanket, and is kept
in position relative to said fabric by a lower base 604, which is
maintained in position on the subject's body by using an adhesive
material located on its bottom 605. Lower base 604 can be coupled
to the bottom 606 of disposable part 602 in a variety of ways, e.g.
by magnetic coupling between the two, or by providing mechanical
coupling, e.g., through pins that perforate and pass through fabric
603 (not shown).
As will be described in greater detail with reference to FIG. 10,
disposable part 602 may contain batteries needed to operate the
LED, as well as, if desired, additional circuitry. Reusable LED
assembly 601 is connected to disposable part 602 through seats 607
and 607', which engage buttons 608 and 608' located in disposable
part 602. An additional positioning pin 609 can be provided in LED
assembly 601, to engage hole 610 in disposable part 602. Electrical
contact can be provided from the power supply located in disposable
part 602, through buttons 608 and 608', although of course many
alternative ways exist of conveying power from the batteries to the
LED assembly 601.
FIG. 7 is an operationally connected view of the various elements
of FIG. 6, the same numerals being used to indicate the same parts.
FIG. 8 is a top view of the assembled marker device of FIG. 7, and
FIG. 9 is a cross-section of the same device assembly, taken along
the AA line of FIG. 8. In this cross-section, in addition to the
elements already described above a compartment 611 is seen, which
houses the LED, as well as electronics used to operate it. Of
course, the top portion 612 of LED assembly 601 is made of a
material sufficiently transparent to allow the required amount of
light generated by the LED to be viewed from the outside.
FIG. 10 shows the disposable part 602 of FIGS. 6-9 in exploded
view. This disposable assembly is made of a plurality of elements
and layers, kept together by gluing or by mechanical connection,
for instance when upper buttons 608 and 608' are connected with
lower buttons 613 and 613'. A battery 614, which can be, for
instance, a standard 2030 disc battery, is provided as the power
supply to the LED assembly.
As will be apparent to the skilled person the arrangement of the
disposable assembly 602, as well as that of the LED assembly 601
provided in the figures, represents only one of countless possible
arrangements and is provided only for the purpose of illustration.
Additionally, multi-LED assemblies can be devised by the skilled
person, to provide two or more LEDs in the same LED assembly, while
performing the required dimensional changes. Such alternatives and
modifications are clear to the skilled person from the above
description and, therefore, are not further described in detail,
for the sake of brevity.
All the above descriptions of markers, methods, uses, etc. have been
provided for the purpose of illustration and are not intended to
limit the invention in any way. Many modifications, alternative
configurations, process steps and methods can be provided by the
skilled person, without exceeding the scope of the invention.
* * * * *