U.S. patent application number 10/790735 was filed with the patent office on 2004-09-02 for method of providing images to a person with movement disorder.
Invention is credited to Baram, Yoram.
Application Number: 20040169620 (10/790735)
Family ID: 32233001
Filed Date: 2004-09-02
United States Patent Application: 20040169620
Kind Code: A1
Baram, Yoram
September 2, 2004
Method of providing images to a person with movement disorder
Abstract
A method for interaction of an image with full body movement,
including the steps of providing an image to a person with a
movement disorder from a display device mounted on such person's
body, receiving signals related to voluntary and involuntary
movements of such person's body, such signals received by a
receiving device mounted on such person's body, adapting such image
according to such received signals, and providing such adapted
image to such person where such adapted image enables such person
to adjust such body movement.
Inventors: Baram, Yoram (Haifa, IL)
Correspondence Address: EITAN, PEARL, LATZER & COHEN ZEDEK LLP, 10 ROCKEFELLER PLAZA, SUITE 1001, NEW YORK, NY 10020, US
Family ID: 32233001
Appl. No.: 10/790735
Filed: March 3, 2004
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
10/790735 | Mar 3, 2004 |
09/631,292 | Aug 2, 2000 | 6,734,834
60/182,026 | Feb 11, 2000 |
Current U.S. Class: 345/8
Current CPC Class: A61B 5/7445 20130101; A61B 5/1101 20130101; A61B 5/4082 20130101; A61B 5/486 20130101; A61B 2503/10 20130101; G16H 20/30 20180101; G16H 40/63 20180101; A61B 5/6803 20130101; G06F 3/011 20130101; A61B 5/6814 20130101; G06F 3/012 20130101
Class at Publication: 345/008
International Class: G09G 005/00
Claims
I claim:
1. A method for interaction of an image with full body movement,
the method comprising the steps of: providing an image to a person
with a movement disorder from a display device mounted on said
person's body; receiving signals related to voluntary and
involuntary movements of said person, said signals received by a
receiving device mounted on said person's body; adapting said image
with a processor mounted on said person's body according to said
received signals; and providing said adapted image to said person
on said display device, wherein said adapted image enables said
person to adjust said body movement.
2. A method as in claim 1, wherein said steps are performed
repeatedly so as to provide continuous interaction of said image
with said body movement.
3. A method as in claim 1, wherein said interaction includes
therapy.
4. A method as in claim 1, wherein said interaction includes
assistance.
Description
CROSS-REFERENCES TO OTHER APPLICATIONS
[0001] This application is a continuation of prior U.S. application
Ser. No. 09/631,292, filed on Aug. 2, 2000 entitled "Closed Loop
Augmented Reality Apparatus", and claims the benefit of U.S.
Provisional Patent Application 60/182,026, filed on Feb. 11, 2000,
each of which is incorporated herein by reference in its entirety.
FIELD OF THE INVENTION
[0002] The present invention relates to a closed-loop augmented
reality system for assisting people with motion disorders.
BACKGROUND OF THE INVENTION
[0003] Certain neurological disorders, such as those associated
with Parkinson's Disease (PD), are known to cause both motor and
visual impairments. These impairments may include tremor, motor
fluctuations, and involuntary arm, leg and head movements. In
addition, patients with these disorders may have trouble initiating
and sustaining movement. Although people with such disorders
experience distorted visual feedback, they are even more dependent
on such feedback than healthy people.
[0004] U.S. Pat. No. 5,597,309 describes a method for stimulating
and sustaining ambulation in Parkinson's patients by creating
virtual visual cues. The method, however, is based only on
open-loop visual cue presentation, wherein initiating and
sustaining cues are given at predetermined speeds using an
image-generating device.
SUMMARY OF THE INVENTION
[0005] There is provided, in accordance with an embodiment of the
present invention, an apparatus for adaptive image generation. The
apparatus includes at least one non-radiating sensor, mountable on
a body, for detecting body movements and producing signals related
to the body movements, and a processor configured to receive the
signals and generate an image, wherein the generated image is
adapted according to the detected body movements.
[0006] The processor may include a filtering unit for filtering
noise from the received signals, the unit having an adaptive
filtering element, and an image generator for providing the
generated and adapted images from the filtered and received
signals. The filtering unit may include linear elements and
non-linear elements, and may be a neural network.
[0007] In one embodiment, the non-radiating sensor is an
accelerometer. There may be two sensors for producing signals
related to movement of a head and body. The generated image may
include a geometric pattern, such as a tiled floor or parallel
stripes, or it may include a view from real life.
[0008] There is also provided, in accordance with an alternative
embodiment of the present invention, an apparatus for augmented
reality, which includes at least one sensor mountable on at least
one part of a body for producing signals from movements of a body
part, and a processor for adapting an augmented image based only on
the produced signals.
[0009] There is also provided, in accordance with an alternative
embodiment of the present invention, a system for adaptive
augmented or virtual reality which includes at least one
non-radiating sensor, mountable on at least one part of a body, for
detecting body movements and producing signals related to the body
movements, a processor configured to receive the signals and
generate an image which is adapted according to the detected body
movements, and a display for displaying the generated and adapted
images. The system provides closed-loop biofeedback for adaptation
of body movements.
[0010] There is also provided, in accordance with an alternative
embodiment of the present invention, an apparatus for treating a
movement disorder. The apparatus includes at least one sensor,
mountable on a body, for detecting body movements and producing
signals related to the body movements, and a processor configured
to receive the signals and generate an image, wherein the generated
image is adapted according to the detected body movements.
[0011] There is also provided, in accordance with an alternative
embodiment of the present invention, a system and method for
reducing involuntary movement artifacts from a signal, including a
voluntary movement processor for filtering a voluntary movement
signal representative of a voluntary movement having involuntary
movements therein, an adaptive involuntary movement processor for
adaptively filtering a vertical motion signal, and a subtractor for
subtracting the involuntary movements from the voluntary movement
signal to produce a reduced artifact signal. The adaptive
involuntary movement processor adapts its processing using the
reduced artifact signal.
[0012] Involuntary movement may include tremor or other unwanted
movements. Voluntary movement may include walking or other full
body movements such as turning, running, etc.
[0013] There is also provided, in accordance with an alternative
embodiment of the present invention, a method for interaction of an
image with body movement, including the steps of providing an image
to a person, receiving signals related to movements of the person,
adapting the image according to the received signals, and providing
the adapted image to the person, wherein the adapted image enables
the person to adjust body movements.
[0014] The steps may be performed repeatedly so as to provide
continuous assistance of body movement. The image may be virtual or
augmented. Interaction may include therapy, recreational activities
(sports, sex, etc.) or physical assistance.
[0015] There is also provided, in accordance with an alternative
embodiment of the present invention, a method for treating a
movement disorder, including the steps of providing an image to a
person, receiving at least one signal from the person, filtering
unwanted noise from the signal, adapting the image based on the
received and filtered signal, and providing the adapted image to
the person, wherein the adapted image enables the person to adjust
body movements.
[0016] There may be, for example, two signals received from the
person--one from the head and one from the body. The step of
filtering may be accomplished using a filtering unit having an
adaptive filtering element. The method may also include the step of
measuring a walking parameter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The present invention will be understood and appreciated
more fully from the following detailed description taken in
conjunction with the appended drawings in which:
[0018] FIGS. 1A and 1B are schematic illustrations of a user
wearing one embodiment of the present invention;
[0019] FIGS. 2A and 2B are illustrations of images viewed by the
user of FIGS. 1A and 1B;
[0020] FIG. 3 is a block diagram illustration of a processor;
[0021] FIG. 4 is a block diagram illustration of one component of
the processor of FIG. 3 in greater detail;
[0022] FIG. 5 is a block diagram illustration of another component
of the processor of FIG. 3 in greater detail;
[0023] FIG. 6 is a block diagram illustration of open-loop and
closed-loop control; and
[0024] FIG. 7 is a table showing results from tests performed using
one embodiment of the present invention.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
[0025] The proposed invention creates an adaptive augmented reality
of motion over a virtual image, such as a tiled floor. The system
is portable, and can be used for a variety of therapeutic, healing,
assistive, or recreational activities. It uses non-radiating
sensors, such as accelerometers, which directly measure movements
of the body. It is particularly useful for treating diseases with
motion impairment, such as Parkinson's Disease (PD), by providing
closed-loop biofeedback for gait initiation, sustainment and
stabilization.
[0026] Reference is now made to FIGS. 1A and 1B, which illustrate
one embodiment of the system on a user's body. FIG. 1A shows an
overview of the entire system, and FIG. 1B shows a detailed view of
a portion of the system. The adaptive augmented reality apparatus,
generally referenced 50, is portable and generally self-contained,
and comprises a head-mounted assembly 52 and a body-mounted
assembly 54. Head-mounted assembly 52, comprising a sensor 60A and
a display 64, is attached to a pair of glasses 40. Glasses 40 may
be standard eyeglasses, with display 64 and sensor 60A attached by,
for example, clips. Sensors 60A and 60B are non-radiating sensors,
such as accelerometers. Other types of non-radiating sensors may be
used as well.
[0027] Display 64 overlays a portion of one lens of glasses 40,
protruding out approximately 1 cm past the lens plane, as shown in
FIG. 1B. Display 64 is a small (for example, 1 cm × 1 cm)
piece, situated directly in front of one eye 41. In this way,
display 64 is close enough to eye 41 to allow the user to see a
full view image on display 64 without obscuring any view of the
surroundings. Display 64 may be, for example, a liquid crystal
display (LCD).
[0028] Alternatively, integrated eyeglasses may be used, where
display 64 is already incorporated within glasses 40. Such
integrated glasses are available from, for example, i-glasses LC
Model #500881, i-O Display Systems, LLC, Menlo Park, Calif. USA; or
The MicroOptical Corporation, Westwood, Mass. USA. Display 64,
whether located internally or externally to glasses 40, is equipped
with VGA or video connectors (not shown).
[0029] Sensor 60A is, for example, a tilt sensor such as Model
#CXTILT02E or Model #CXTA02, available from Crossbow Technology,
Inc., San Jose, Calif. USA, used to measure tilt of the head with
respect to the vertical. Alternatively, sensor 60A may be a sensor
that can detect other movements as well as head tilt, such as a
3-axis accelerometer.
[0030] Head-mounted assembly 52 is connected to body-mounted
assembly 54 by wires 42, as shown in FIG. 1A. Alternatively,
wireless connection is possible as well. Body-mounted assembly 54
comprises a processor 62 and a 3-axis accelerometer 60B, for
example, translational accelerometer Model #CXL04M3 (Crossbow
Technology, Inc., San Jose, Calif. USA).
[0031] Body-mounted assembly 54 may be configured in a box of a
reasonable size for a person to wear, for example, but not limited
to, one having dimensions 7 × 12 × 3 cm. Body-mounted
assembly 54 is preferably attached to a belt, but may be connected
to the body in any number of ways such as by a chest strap,
adhesive, or other connecting device.
[0032] Reference is now made to FIGS. 2A and 2B, which show
examples of images viewed by the user while wearing system 50. The
image displayed in FIG. 2A is adapted during movement and shown in
FIG. 2B. FIG. 2A shows a virtual tiled floor image as displayed to
the user during normal walking. The floor moves as the user walks,
in the opposite direction, as depicted by arrow 43, to simulate a
real floor as it appears to someone walking. If the user stumbles
or falls forward, an image such as the one shown in FIG. 2B is
displayed to the user, to simulate the actual view of a real tiled
floor during a stumble or fall. The image is continuously adapted
to the motions of the user to create more realistic dynamics of the
virtual world viewed by the patient. Consequently, the virtual floor
viewed by the user moves only during actual motion, at a rate equal
to this motion, as in real life. This enables the user to walk at
his or her own pace, which may be variable and governed by
individual motion characteristics.
[0033] Thus, the tiles as shown in FIG. 2B expand as the user looks
down or stumbles, and contract as he picks up his head and looks
farther away, as in real life. Inner arrows 45 indicate directions
of movement of the edges of the tiles in response to the falling
motion. Thus, the floor expands while still in motion. Furthermore,
as the user turns around, the image turns in the other direction,
as in real life. This feature is of particular importance for PD
patients since a high number of these patients experience
considerable difficulties turning around. These real life effects
give the patient needed biofeedback signals for stabilization and
safer motion.
[0034] It will be appreciated that the image is not restricted to
tiled floors, and may include other geometric patterns, such as
parallel stripes. In addition, other images may be generated, such
as views from real life (e.g. outdoors in a park or the like). The
image may be a virtual image, in which the outside world is blocked
out, or it may be an augmented image, in which the image is
superimposed onto the person's view of the real world.
[0035] Reference is now made to FIG. 3, which shows details of
processor 62 located within body-mounted assembly 54. Processor 62
may be a wearable computer or a microprocessor. Input data to
processor 62 is obtained from sensors 60A and 60B at input ports
74A and 74B, respectively, and output data from processor 62 is
sent to display 64 through output port 72.
[0036] Signals may be, but are not limited to, proportional direct
current (DC), and indicate some motion parameter. For example,
signals may contain acceleration data that is later converted to
velocity data. Alternatively, signals may relate to an angle of
head tilt, or other body movements. Signals from processor 62 to
display 64 may be analog video signals, for example, PAL or NTSC,
or they may be digital (e.g. VGA) signals. Conversion from analog
to digital (A/D) or from digital to analog (D/A) may either be
performed within processor 62, or external to processor 62 using a
converter.
[0037] Processor 62 includes at least two components: a filtering
unit 48, and an image generator 40. Filtering unit 48 filters
signals received at input port 74B from sensor 60B. Signals from
sensor 60A relating to movements other than head tilt may be
filtered as well, as shown by dashed lines. Filtering eliminates
unwanted components from the sensor signals, such as tremor, motor
fluctuations and involuntary arm, leg and head movements, as
described in further detail below. Image generator 40 then
incorporates filtered data, as well as signals received directly
from sensor 60A at input port 74A, and translates the received and
filtered proportional signals into rates and degrees of motion of
the displayed virtual floor. Image generator 40 then adapts the
base image (such as the one shown in FIG. 2A) according to the
generated rate and degree of motion information. Adapted images are
sent through output port 72 to display 64 to be viewed by the
user.
[0038] Reference is now made to FIG. 4, which is a block diagram
illustration of a filtering component 45 of filtering unit 48, used
for filtering tremor, and other unwanted motions. Each filtering
component 45 in filtering unit 48 is used for filtering signals
related to motion in one particular axis or direction. Thus,
filtering unit 48 may have one or several filtering components 45,
depending on the number of axes of movement being measured.
[0039] First, noisy sensor data are generally cleaned by filtering.
Signals relating to vertical movement (up/down), representing
tremor and other involuntary movements, are then subtracted from
signals relating to translational movement (forward/back or
side/side) or other voluntary movements. In this way, both noise
from signals and unwanted motions and tremor are filtered out.
[0040] Filtering unit 48 has an upper path 47 and a lower path 49.
Upper path 47 is used for cleaning signals from voluntary movement.
This may include translational, rotational, or other movements,
which may be measured, for example, using a 3-axis accelerometer.
Lower path 49 is used for eliminating tremor and involuntary
movement, based on receipt of vertical (up/down) movements.
Vertical movements may also be obtained from a 3-axis
accelerometer, or by other measuring means.
[0041] In upper path 47, a linear filtering element 76 is used to
clean signals of voluntary movement in one axis, for example,
forward acceleration. Output is related to input by the following
equation: x_1(i) = Σ a_k v_1(i−k) for k=1 . . . K, where v_1(i) and
x_1(i) are the input and output of linear filtering element 76 at
time i, respectively, and a_k are fixed weights.
[0042] In lower path 49, an adaptive linear filtering element 77 is
used. Adaptive linear filtering element 77 is, for example,
5-dimensional, and is similar to the linear adaptive noise
canceller proposed by Widrow B. and Winter R. in "Neural nets for
adaptive filtering and adaptive pattern recognition", Computer
21(3): p. 25, 1988, incorporated herein by reference in its
entirety. As with linear filtering element 76, output is related to
input by the following equation: x_2(i) = Σ b_k v_2(i−k) for
k=1 . . . K, where v_2(i) and x_2(i) are the input and output of
adaptive linear filtering element 77 at time i, respectively.
However, as opposed to linear filtering element 76, the b_k are
variable weights. K was taken to be 5, but can be any number.
[0043] Linear filtering element 76 and adaptive linear filtering
element 77 both feed into sigmoidal elements 78. For sigmoidal
elements 78, new outputs y_1(i) and y_2(i) are related to inputs
x_1(i) and x_2(i) from linear filtering element 76 and adaptive
linear filtering element 77, respectively, by the following
equation: y_n(i) = tanh(x_n(i)) at time i. Since the sigmoidal
function is bounded between two predetermined values, the sigmoidal
elements attenuate high-amplitude accelerations, which was found to
improve performance over the use of linear elements alone. Any
combination of linear and sigmoidal elements may be used. For
example, the sigmoidal elements may be included in either upper
path 47 or lower path 49, or in both or neither.
[0044] In summer 80, output y_2(i) from adaptive linear filtering
element 77 is subtracted from output y_1(i) from linear filtering
element 76 to obtain a final output r(i) = y_1(i) − y_2(i). Weights
b_k in adaptive linear filter 77 are then adjusted so as to
minimize the squared final output r^2(i).
[0045] It should be noted that by adapting the filtering process in
this way, filtering unit 48 "learns" the user's motions. Filtering
unit 48 may be considered a neural network.
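The filtering component of paragraphs [0041]-[0044] can be sketched in a few lines. The sketch below is illustrative only: the fixed weights a_k (here a simple moving average), the step size mu, and the exact gradient update through the sigmoidal element are assumptions, since the patent does not specify them.

```python
import numpy as np

def filtering_component(v1, v2, K=5, mu=0.01):
    """Sketch of one filtering component 45 (FIG. 4).

    v1 -- voluntary-movement signal (e.g. forward acceleration), tremor mixed in
    v2 -- vertical-movement signal, dominated by tremor and involuntary movement
    K  -- filter length (the patent takes K = 5)
    mu -- adaptation step size (assumed; not specified in the patent)
    """
    a = np.ones(K) / K          # fixed weights a_k (assumed: moving average)
    b = np.zeros(K)             # variable weights b_k, adapted on-line
    r = np.zeros(len(v1))       # reduced-artifact output
    for i in range(K, len(v1)):
        w1 = v1[i - K:i][::-1]  # v1(i-1) ... v1(i-K)
        w2 = v2[i - K:i][::-1]  # v2(i-1) ... v2(i-K)
        y1 = np.tanh(np.dot(a, w1))   # upper path: linear filter + sigmoid
        y2 = np.tanh(np.dot(b, w2))   # lower path: adaptive filter + sigmoid
        r[i] = y1 - y2                # summer 80
        # LMS-style update minimizing r^2(i), gradient taken through tanh
        b += mu * r[i] * (1.0 - y2 ** 2) * w2
    return r, b
```

Because both sigmoidal outputs lie in (−1, 1), the residual r(i) is always bounded, reflecting the attenuation of high-amplitude accelerations noted above.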
[0046] Each axis of movement (forward/back or side/side, for
example) uses its own filtering component 45. For each filtering
component 45, the cleaned signal is sent from filtering unit 48 to
image generator 40. Thus, image generator 40 may simultaneously
obtain multiple filtered signals from filtering unit 48, as well as
signals directly from sensor 60A, such as a tilt sensor.
[0047] Reference is now made to FIG. 5, which is a block diagram
illustration of image generator 40, used for creating images and
adapting the images based on received filtered data. Specifically,
an initial image 80 of a tiled floor, or other image, is created
using an imaging software package (OpenGL.TM., Silicon Graphics,
Inc., Mountain View, Calif. USA). Data from sensors, which may be
filtered or unfiltered, are fed into image generator 40, and are
used to make corresponding proportional changes in floor angle and
speed of movement of image 80, resulting in an updated image 80',
also provided by the imaging software. In the case of acceleration
data, the filtered acceleration signals are converted into rate of
motion data within image generator 40, typically using an
integrator.
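As a rough illustration of the image-generator update just described, the following sketch integrates a filtered acceleration sample into a rate of motion and scrolls the virtual floor opposite to it; the frame rate, scale factors, and function name are our assumptions, not details from the patent.

```python
def update_floor_state(accel, tilt_deg, dt=1.0 / 30.0, state=None):
    """Sketch of the image-generator update (FIG. 5), assuming a 30 Hz
    frame rate and unit scale factors.

    accel    -- filtered forward-acceleration sample
    tilt_deg -- head tilt from sensor 60A (degrees from vertical)
    state    -- dict carrying velocity and floor scroll offset between frames
    """
    if state is None:
        state = {"velocity": 0.0, "offset": 0.0}
    # integrate acceleration into rate of motion
    state["velocity"] += accel * dt
    # the virtual floor scrolls opposite to body motion, at the same rate,
    # to create the sensation of a floor fixed in space
    state["offset"] -= state["velocity"] * dt
    # head tilt maps directly to the inclination of the virtual floor
    state["inclination"] = tilt_deg
    return state
```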
[0048] Thus, the tilt angle received from sensor 60A, is translated
into an inclination angle of the virtual tiled floor so as to
create a realistic view of the floor. Tripping or falling motions
result in larger angles, and are translated into a proportional
outward expansion of image 80, as in real-life vision.
[0049] Sensors 60A and 60B may also detect turning motions, which
are translated into counter-turning motions of the virtual
floor.
[0050] The rates of motion of the virtual tiled floor are the same
as the rates of body motion of the user, occurring in opposite
directions so as to create the sensation of a floor fixed in space.
The tilt of the virtual floor is the same as that of the user's
head, as measured by head-mounted sensor 60A. Parameters such as
tile size, color and intensity of the virtual floor are
adjustable.
[0051] Because of filtering unit 48, a forward motion of the tiled
floor will not be triggered by leg tremor, and expansion of tile
images, indicating a stumble or a fall, will not be caused by head
tremor. Learning and filtering are performed on-line, as the
patient's dynamic characteristics keep changing in time.
[0052] The present invention may potentially be used for anything
that other virtual reality devices are used for, such as
entertainment, industry, science and medicine. The use of
accelerometers allows for free movement and is not restricted by
location or space. In addition, it allows for adaptation of the
image to full body motions. Thus, for example, one embodiment of
the invention may include a device which would enable a sport or
any other recreational activity (e.g. sexual activity) to be
performed with a virtual background scene, outside of an
entertainment room, allowing for more body movements. In another
embodiment, the device could be connected to the Internet, allowing
for direct interaction between patients and doctors or between
users. Movement disorders may include stroke, trauma, PD, or other
central nervous system disorders and degenerative diseases. They
may also include birth defects and the effects of aging.
EXPERIMENTAL DETAILS SECTION
[0053] A prototype of the proposed invention has been developed and
systematically tested on PD patients supervised by a team of
medical doctors in the Movement Disorders Clinic at the Cognitive
Neurology Department of the RAMBAM Medical Center in Haifa,
Israel.
[0054] Reference is now made to FIG. 6, which illustrates the
concept of open-loop versus closed-loop control. In an open-loop
system, an image generator 40 produces a display 64 for a user 44
to see. User 44 may then react to display 64, and voluntarily begin
to move. This, however, has no effect on image generator 40. In a
closed-loop system, the motion of user 44 is sensed by motion
sensors 60, which send signals related to this motion through a
filtering unit 48 and back to image generator 40. In contrast to
the open-loop system, which does not measure or respond to the body
motions of user 44, the closed-loop system incorporates signals
from motion sensors 60 into display 64.
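The closed-loop arrangement of FIG. 6 amounts to one per-frame cycle from sensors to display. In this sketch all four callables are hypothetical placeholders standing in for motion sensors 60, filtering unit 48, image generator 40, and display 64.

```python
def closed_loop_step(read_sensors, filter_signals, generate_image, show):
    """One iteration of the closed loop of FIG. 6: the user's motion
    feeds back into image generation, unlike the open-loop case. All
    four arguments are placeholder callables, not APIs from the patent."""
    signals = read_sensors()            # motion sensors 60
    cleaned = filter_signals(signals)   # filtering unit 48
    image = generate_image(cleaned)     # image generator 40
    show(image)                         # display 64
    return image
```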
[0055] Reference is now made to FIG. 7, which is a table showing
details about the subjects who participated in the study, and the
results obtained with the display off, with open-loop display, and
with closed-loop display. For open-loop display, no sensors were
activated on the subject for measuring movements, resulting in an
image displayed at a predetermined speed towards the observer.
Speed and stride length are listed for each test per subject, and
the final two columns list a percentage change for the tested
parameters.
[0056] Fourteen subjects, all clinically diagnosed with idiopathic
PD and treated with Dopaminergic medication, participated in the
study. The subjects' initials, ages, number of years having the
disease (yd) and disease severity on the Hoehn and Yahr (HY) scale
(See Hoehn M M and Yahr M D: "Parkinsonism: onset, progression and
mortality." Neurology 17(5):427-42, 1967) are listed in FIG. 7. All
subjects had 20/20 visual acuity, with correction when necessary.
The tests were always performed at approximately the same time of
day, and either following a 12-hour period without medication, or
during the "off" state of the disease, which is characterized by
severe immobility.
[0057] Each test consisted of a subject walking a stretch of 10
meters 4 times. Only results from the last two out of four tests in
each category were used, to eliminate the effect of training. At
the start of each test, the subject was verbally instructed to
start walking. The length of time and the number of steps to
completion of the 10-meter stretch were recorded for each test.
Speed in meters/second (m/s) and stride length in meters (m) were
calculated. In the first test (the reference test) the display was
turned off. In the second, the open-loop system was turned on,
displaying a virtual tiled floor in perpetual motion towards the
observer at the maximal speed level comfortable for the subject.
The third test employed the adaptive closed-loop system. The order
of the second and the third tests was then reversed and results
were averaged, in order to eliminate the effect of training from
the comparison.
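The performance parameters described above reduce to simple arithmetic over each 10-meter test; the function names below are ours, not the study's.

```python
def gait_parameters(distance_m, time_s, steps):
    """Speed (m/s) and stride length (m) for one walking test,
    as computed for the 10-meter stretches described above."""
    speed = distance_m / time_s
    stride = distance_m / steps
    return speed, stride

def percent_change(test_value, reference_value):
    """Percentage change of a parameter relative to the reference test."""
    return 100.0 * (test_value - reference_value) / reference_value
```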
[0058] The last two columns of FIG. 7 list the percentage changes
in the performance parameters obtained for the closed-loop system
with respect to the reference test. It can be seen that, in all
cases but one, performance was improved significantly with respect
to the reference test when the closed-loop system was turned on
(higher speed, longer strides).
[0059] Qualitative results were noted by the testers as well.
Improvement in the quality of the steps was observed. Subjects who
dragged their feet on the ground in the reference test raised them
noticeably higher when the closed-loop system was turned on.
Improvement was particularly dramatic in subjects tested during
their "off" phase (JS and NM), characterized by severe immobility.
These subjects were severely bradykinetic, unable to stand or
start walking on their own. When the closed-loop display was turned
on, and the subjects were instructed to watch the display, both
subjects were able to start walking unaided. The one subject who
did not benefit from the closed-loop system, MR, had no walking
impairment; as can be seen from his test parameters, he had the
best performance during the reference test.
[0060] Comparison of results for the open-loop system and the
closed-loop system shows that the average values are similar.
However, the standard deviations for the open-loop system are much
higher than for the closed-loop system as well as the reference
test. This means that the open-loop system affects different
individuals in very different ways. The behaviors of JS and NM are
particularly noteworthy in this respect. Both subjects improved
their performance parameters with respect to the reference test
when the closed-loop system was turned on, and both experienced
freezing episodes when the perpetual motion display (open-loop
system) was turned on. For both subjects, the performance
parameters for the open-loop system are even lower than for the
reference test. Some subjects reported discomfort, dizziness and
nausea caused by the perpetual floor motion of the open-loop
system. Most subjects reported relative comfort with the
self-activated, closed-loop adaptive system and indicated a clear
preference for it over the open-loop system.
[0061] The last two rows in the table show the average performance
of the subject group (excluding MR, who, as noted before, had
non-gait related impairment). It can be seen that, on average, the
proposed closed-loop system improves performance by about 25%
(speed) or 30% (stride length) with respect to the reference test.
It should also be noted, however, that the standard deviations of
these results are rather high, which implies that the results
should be evaluated mainly on an individual basis. Certain PD
patients would be helped by the proposed approach to a very
significant degree (50%-100%), while others would be helped to a
lesser degree. Few, in particular those without walking
impairments, would not be helped at all.
[0062] Similar tests done on non-PD patients, such as stroke
victims, have shown similar improvements in the walking abilities
of these patients using the apparatus as described hereinabove.
[0063] Our study is the first to show the benefit of augmented
reality, adapted to a person's own motion, for gait control in PD
patients. In particular, we have shown the advantage of a
closed-loop adaptive display of a virtual tiled floor as compared
to a previously proposed open-loop, non-adaptive, perpetual virtual
motion display. Our experiments have shown that adaptive augmented
reality can significantly improve the walking abilities of most PD
patients without causing the discomfort and the freezing phenomena
associated with the open-loop system.
[0064] Finally, it is important to note that the gait parameters
most affected by the proposed approach, namely, speed and stride
length, also respond, to a similar extent, to antiparkinson
medication (See Pedersen S W, Eriksson T and Oberg B: "Effects of
withdrawal of antiparkinson medication on gait and clinical score
in the Parkinson patient", Acta Neurol. Scand. 84, 7, 1991) as well
as to pallidotomy (brain surgery), as reported by Siegel K L and
Metman L V: "Effects of bilateral posteroventral pallidotomy on
gait in subjects with Parkinson's disease", Arch. Neurol., 57, 198,
2000. However, medication causes involuntary movement which
disturbs gait further.
[0065] The proposed approach may make it possible to reduce
medication and postpone surgical intervention. The proposed
invention may be useful as treatment, as therapy, or as an
assistive device.
[0066] It will be appreciated by persons skilled in the art that
the present invention is not limited by what has been particularly
shown and described hereinabove. Rather the scope of the invention
is defined by the claims which follow:
* * * * *