U.S. patent application number 13/145,436 was published by the patent office on 2012-06-21 for multi-user smartglove for virtual environment-based rehabilitation.
Invention is credited to Avi Bajpai, Caitlyn Bintz, Jason Chrisos, Andrew Clark, Maureen K. Holden, Drew Lentz, Constantinos Mavroidis, Mark Sivak.
United States Patent Application 20120157263
Kind Code: A1
Sivak; Mark; et al.
June 21, 2012

MULTI-USER SMARTGLOVE FOR VIRTUAL ENVIRONMENT-BASED REHABILITATION
Abstract
A low-cost, virtual environment rehabilitation system and a glove
input device for patients suffering from stroke or other
neurological impairments, for independent, in-home use to improve
upper extremity motor function, including hand and finger control.
The system includes a low-cost input device for tracking arm, hand,
and finger movement; an open-source gaming engine; and a processing
device. The system is controllable to provide four types of
multiple patient/user interaction: competitive, cooperative,
counter-operative, and mixed.
Inventors: Sivak; Mark (Boston, MA); Holden; Maureen K. (Waltham, MA);
Mavroidis; Constantinos (Arlington, MA); Bajpai; Avi (New York, NY);
Bintz; Caitlyn (Brighton, MA); Chrisos; Jason (Allston, MA);
Clark; Andrew (Boston, MA); Lentz; Drew (Potomac, MD)
Family ID: 42356175
Appl. No.: 13/145,436
Filed: January 20, 2010
PCT Filed: January 20, 2010
PCT No.: PCT/US10/21483
371 Date: March 8, 2012
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
61/145,825 | Jan 20, 2009 |
61/266,543 | Dec 4, 2009 |
Current U.S. Class: 482/4
Current CPC Class: A63B 2225/20 20130101;
A63B 2022/0094 20130101; G06F 3/014 20130101; A61B 2505/09
20130101; A63B 2220/10 20130101; A63B 2024/0096 20130101; G16H
50/50 20180101; A61B 5/6806 20130101; A63B 2225/54 20130101; A63F
2300/1012 20130101; A61B 5/0022 20130101; A63B 2220/16 20130101;
A63B 2220/805 20130101; G06F 19/00 20130101; A63B 2071/0638
20130101; A61B 5/7475 20130101; G16H 40/63 20180101; A63B 23/16
20130101; A63B 23/08 20130101; A63B 2220/51 20130101; A63B
2071/0655 20130101; A63B 2220/40 20130101; A63F 2300/105 20130101;
A61B 5/1114 20130101; A63B 2225/50 20130101; G16H 20/30 20180101;
G06F 3/011 20130101; G09B 19/003 20130101; A63B 2220/89 20130101;
A63F 13/212 20140902
Class at Publication: 482/4
International Class: A63B 24/00 20060101 A63B024/00
Claims
1. A multiple-user virtual environment system for rehabilitation
exercise of mammalian trunk, extremities, and digits, the system
comprising: a communication network; and a plurality of individual
virtual environments, each of the individual virtual environments
including: at least one input device that is structured and
arranged to generate at least one of movement, orientation,
velocity, and position data corresponding to discrete movement of a
portion of the mammalian trunk or of one or more mammalian
extremities or digits, a processing device that is adapted to
receive the at least one movement, orientation, velocity and
position data from the input device; to store said movement,
orientation, velocity, and position data; and to generate image
data therefrom for display on a display device, and a virtual
environment interface that is adapted to enable virtual environment
communication and virtual environment data transfer between the
processing device and the network.
2. The system as recited in claim 1 further comprising an interface
and processing device that enable a third party to observe and to
record data from the processing device.
3. The system as recited in claim 2, wherein the third-party
processing device is structured and arranged to pre-establish or
adapt at least one rehabilitation exercise for each of the
plurality of virtual environments or to modify said exercise or
virtual environment.
4. The system as recited in claim 3, wherein the third-party
processing device is structured and arranged to pre-establish a
type of multi-user interaction for each of the at least one
rehabilitation exercise and for each of the plurality of virtual
environments.
5. The system as recited in claim 4, wherein the type of multi-user
interaction is selected from the group consisting of a competitive
interaction, a counter-operative interaction, a cooperative
interaction, and a mixed interaction.
6. The system as recited in claim 4, wherein the third-party
processing device is structured and arranged to adjust a degree of
difficulty for each of the at least one rehabilitation
exercises.
7. The system as recited in claim 1, wherein the virtual
environment interface includes at least one of: game engine
hardware having a programming code and a plurality of rules, game
engine software having a programming code and a plurality of rules,
a scripting programming code that is capable of overwriting
low-level game engine programming code, three-dimensional model
software that is adapted to populate the engine programming code
and to adhere to the plurality of rules, and three-dimensional
graphic software that is adapted to populate the engine programming
code and to adhere to the plurality of rules.
8. The system as recited in claim 1, wherein the network is
selected from the group consisting of a local area network, a wide
area network, the World Wide Web, and the Internet.
9. The system as recited in claim 1, wherein the network is adapted
to generate a virtual reality environment using the virtual
environment data.
10. The system as recited in claim 1, wherein the input device is
adapted to monitor a position or an attitude of an extremity or of
a digit in space.
11. The system as recited in claim 1, wherein the discrete movement
is selected from the group consisting of movement in an
x-direction, movement in a y-direction, movement in a z-direction,
pitch, roll, and yaw, wherein each of the x-direction, the
y-direction, and the z-direction is mutually perpendicular.
12. The system as recited in claim 1 further comprising a base unit
that, in combination with the at least one input device, is
structured and arranged to provide a dead reckoning starting and a
dead reckoning ending point-of-reference, to enable the processing
device to determine at least one of attitude and velocity of said
at least one input device.
13. The system as recited in claim 12, wherein the base unit is
selected from the group comprising a banana grip base, a globe
base, a pyramidal base or a tear-drop base.
14. The system as recited in claim 13, wherein the globe base
includes imprinted grooves that define the dead reckoning starting
and the dead reckoning ending point-of-reference.
15. The system as recited in claim 14, wherein the base unit
includes adjustable arm splints for neutral wrist position and
forearm support.
16. The system as recited in claim 1, wherein the at least one
input device comprises a pair of gloves.
17. The system as recited in claim 1, wherein the processing device
is adapted to map real world movement directly into similar or
abstractly into different virtual world movement.
18. The system as recited in claim 1 further comprising a teacher
model capability to highlight shortcomings and errors of the user
and to demonstrate how to correct said shortcomings and errors.
19. An input device for use with a multiple-user virtual
environment system for rehabilitation exercise of a human hand and
digits, the input device being structured and arranged to generate
signals corresponding to at least one of a discrete movement and an
attitude of said hand and said digits, the device comprising: a
glove that can be readily donned and doffed on either hand by a
user, the glove having finger portions for at least a thumb, an
index finger, a middle finger, and a ring finger; a first plurality
of sensors, each sensor being structured and arranged to provide
data on movement and range of movement of at least one of the index
finger, the middle finger, and the ring finger, each of the first
plurality of sensors being disposed within the finger portions of
said index finger, said middle finger, and said ring finger; a
second plurality of sensors that is structured and arranged to
provide data on movement of the thumb; and a positioning and
tracking system that is structured and arranged to generate
position coordinates in three rotational axes and three
translational axes to determine at least one of the attitude and a
velocity of said hand.
20. The input device as recited in claim 19, wherein the first and
the second pluralities of sensors are selected from the group
comprising electronic bend sensors, resistive bend sensors,
capacitive bend sensors, optical fiber sensors, mechanical
measurement bend sensors, angle measurement sensors, Hall effect
sensors or electromechanical sensors.
21. The input device as recited in claim 19, wherein the
positioning and tracking system is selected from the group
comprising an inertial measurement unit (IMU), a radio frequency
(RF) positioning and tracking system, an infrared positioning and
tracking system, three-dimensional cameras or a magnetic tracking
system.
22. The input device as recited in claim 21, wherein the
positioning and tracking system is an inertial measurement unit
(IMU) that employs dead reckoning to determine the attitude of the
hand and the velocity of the hand.
23. The input device as recited in claim 22, the input device
further including a Hall effect sensor for zeroing the IMU.
24. The input device as recited in claim 19, wherein the device has
a total weight that does not exceed sixteen ounces.
25. The input device as recited in claim 24, wherein a dorsal
weight on the input device does not exceed eight ounces.
26. The input device as recited in claim 19, wherein the input
device is structured and arranged to measure at least one of the
following accurately: finger flexion/extension measured to at least
90°; wrist flexion/extension or dorsal action at ±90°;
wrist-radial deviation up to 40°; wrist-ulnar deviation up to 50°;
and forearm supination/pronation up to 180°.
27. The input device as recited in claim 19 further comprising a
feedback system that is selected from the group comprising an
audible speaker to provide auditory clues, at least one
light-emitting device to provide a visual signal, and a haptic
device to provide a vibratory signal.
28. The input device as recited in claim 19 further comprising a
touch sensor that is adapted to provide and record pinch data, the
touch sensor being disposed in a tip portion of the glove thumb and
being activated by contact with any of the index finger, the middle
finger, the ring finger or a pinky finger.
29. The input device as recited in claim 19 further comprising a
communication means for providing hand and finger movement data and
attitude data to a processing unit.
30. The input device as recited in claim 19 further comprising a
pulley portion to measure radial and ulnar deviations, the pulley
portion being disposed above a user's elbow and being releasably
attached to a medial side of the glove.
31. A method of providing a virtual environment system for
rehabilitation exercise to a plurality of users over a
communication network, the method comprising: providing an
individual virtual environment to each of the plurality of users,
each of the individual virtual environments including an input
device, a processing device, and a virtual environment interface;
generating at least one of movement, orientation, velocity, and
position data signals corresponding to discrete movement of one or
more mammalian trunk, extremities or digits disposed in the input
device; receiving the data signals from the input device;
generating image data for display on a display device and other
data from said input data; and enabling virtual environment
communication and virtual environment data transfer between the
processing device and the network.
32. The method as recited in claim 31 further comprising enabling a
third party to observe and to record image and other data from the
processing device.
33. The method as recited in claim 31 further comprising
pre-establishing at least one rehabilitation exercise for each of
the plurality of virtual environments.
34. The method as recited in claim 31 further comprising
pre-establishing a type of multi-user interaction for each of the
at least one rehabilitation exercise and for each of the plurality
of virtual environments.
35. The method as recited in claim 34, wherein the type of
multi-user interaction is selected from the group consisting of a
competitive interaction, a counter-operative interaction, a
cooperative interaction, and a mixed interaction.
36. The method as recited in claim 31, wherein the discrete
movement is selected from the group consisting of movement in an
x-direction, movement in a y-direction, movement in a z-direction,
pitch, roll, and yaw, wherein each of the x-direction, the
y-direction, and the z-direction is mutually perpendicular.
37. The method as recited in claim 31 further comprising generating
a virtual reality environment using the virtual environment
data.
38. The method as recited in claim 31 further comprising monitoring
a position of an extremity or of a digit in space.
39. The method as recited in claim 31 further comprising: scoring
user performance using said input data; and displaying scoring
results after each exercise, daily, weekly, after each session,
after each phase of an exercise and/or immediately.
40. The method as recited in claim 31 further comprising amplifying
the data signals corresponding to the at least one of movement,
orientation, velocity, and position before displaying image data
generated therefrom.
41. The method as recited in claim 34 further comprising adjusting
a degree of difficulty of said at least one rehabilitation
exercise.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] Priority is claimed to U.S. provisional patent application
No. 61/145,825, entitled "Multiple User Virtual Environment for
Rehabilitation (MUVER)," which was filed on Jan. 20, 2009, and to
U.S. provisional patent application No. 61/266,543, entitled "Low
Cost Smart Glove for Virtual Reality Based Rehabilitation," which
was filed on Dec. 4, 2009.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
[0002] N/A
BACKGROUND OF THE INVENTION
[0003] 1. Field of the Invention
[0004] A device and system for rehabilitating hand and finger
movements of stroke patients with neurological or orthopedic
problems is disclosed, and, more specifically, a device and system
that are structured and arranged to capture hand and wrist motion
for the purpose of guiding a patient/user through rehabilitation
exercises.
[0005] 2. Summary of the Related Art
[0006] Every year, between 700,000 and 800,000 Americans suffer a
new or a recurring stroke, in which a sudden disruption in blood
supply to the brain causes loss of brain function. About 150,000
Americans die from the event, leaving approximately 650,000 who
must overcome or learn to live with the effects of any permanent or
long-term disability. These effects may include an inability to
return to work, which can lead to a loss of income and benefits
such as health care, and/or the need for daily living assistance.
Thus, stroke remains the leading cause of disability among adults
in the United States.
[0007] Eighty-five percent (85%) of stroke patients, at least
initially, suffer from physical disabilities that prevent them from
performing normal functions and/or from returning to work.
Moreover, six months after a stroke, 55-75% of survivors still
experience limited upper extremity (UE) function. Indeed, in
instances with initial UE paralysis, complete motor recovery has
been reported in less than 15% of the cases.
[0008] Despite these disappointing statistics, studies have shown
that patients with chronic stroke have the potential to improve
their physical ability and independence through rehabilitation.
Indeed, stroke patients generally appear to be highly motivated to
make further gains once conventional, therapist-assisted or
therapist-supervised rehabilitation ("rehab") has been completed.
Rising health care costs, however, are causing stroke patients to
be discharged from the hospital sooner and causing physical therapy
sessions to be shorter.
[0009] As a result, a rehabilitation program that can be performed
in a stroke patient's home and performed without a visiting
therapist being physically present in the patient's home could
further increase rehab participation. Such a program saves time in
transportation and cost for clinical fees.
[0010] A key component of poor or incomplete functional recovery
remains the impaired use of the stroke patient's hand and fingers.
Critical motions of the hand and wrist have been determined to be
gross finger flexion and extension, opposition of the thumb, radial
and ulnar deviation, supination and pronation of the wrist, wrist
flexion and extension, and the hand's position and orientation in
space. Accordingly, there is a compelling need to improve available
methods for UE rehabilitation in stroke patients, and in
particular, methods that improve hand and finger function. Hand
rehabilitation devices that are designed to be low-cost for in-home
use remain a viable option for continuing the rehabilitation of the
increasing number of stroke patients in the United States. However,
hand rehabilitation devices by others tend to be complex,
expensive, and/or not readily available to clinicians.
[0011] There are many products currently available on the market
that can achieve these or similar goals and that form the prior
art. These include the CyberGlove, P5 Glove™, 5DT Dataglove,
Acceleglove, Hand Mentor, and HandTutor. Other products of interest
that are not necessarily expressly for rehabilitation but that
contain several of the critical factors that can be incorporated
into this design include the Nintendo® Wii™. Some of these
products, such as the CyberGlove, are very precise, but too costly
for in-home purposes ($10,000 per CyberGlove). Others are more
affordable, e.g., the P5 Glove™ (approximately $100 per glove),
but, without modification, do not deliver data accurately enough
for the intended application. Hence there is tension and a
necessary trade-off between cost and performance.
[0012] FIG. 1 shows the P5 Glove™. The P5 Glove™ 10 was
released for the home personal computer (PC) video game market in
2002 by Essential Reality, LLC of New York, N.Y. Although it is not
wireless, the P5 Glove™ 10 is lightweight and allows for six
degrees of tracking freedom including the three translational axes
(the x-, y-, z-direction) and the three rotational axes (yaw,
pitch, and roll).
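These six tracked quantities amount to a simple pose record. As a minimal illustrative sketch (the type and field names are ours, not from any P5 Glove™ SDK), one sample of tracking data can be represented as:

```python
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    """One sample of six-degree-of-freedom tracking data."""
    # Three translational axes.
    x: float
    y: float
    z: float
    # Three rotational axes.
    yaw: float
    pitch: float
    roll: float

# A hand at rest at the origin, palm level.
rest = Pose6DOF(x=0.0, y=0.0, z=0.0, yaw=0.0, pitch=0.0, roll=0.0)
```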
[0013] The P5 Glove™ device 10 uses infrared (IR) technology,
e.g., light-emitting diodes (LEDs), and bend sensors, to track
movement of the patient/user's hand and/or fingers. Bend sensors 9,
which are discussed in greater detail below, are adapted to measure
the bend or flexion of a digit, e.g., a finger or toe. The bend
sensors 9 are disposed along the back of each finger and thumb, to
provide independent finger and thumb measurements. The bend sensors
9 are mechanically coupled to the patient/user's hand by a ring 8
that fits around each fingertip.
[0014] With the P5 Glove™ 10, IR sensors are structured and
arranged to determine three-dimensional (3D) positioning. However,
conventionally, the P5 Glove™ device 10 is electrically coupled
to an infrared tower (or receptor) (not shown), e.g., via a PS/2
cable (not shown). The infrared tower, in turn, is electrically
coupled to a PC (not shown), e.g., via a USB cable (not shown).
Although this arrangement makes it easy to use at home, because of
the infrared control receptor, the work space is limited, which is
to say that the range within which motion can be detected is
limited to only about three or four feet between the glove and the
receptor.
[0015] Typically, there are three ways in which IR sensors can be
used to determine position. The first way involves a single IR LED
that is disposed at a pre-set, known position. A sensor is disposed
on the tracking object. Based on the angle and intensity of the
sensed light, the position of the sensors relative to the LED can
be determined. A second way in which IR light can be used to
determine position is by disposing a single, IR light detecting
sensor at a known position and by moving objects that emit IR light
relative to the sensors. Finally, in a third technique, the LED and
the IR detecting sensors are disposed proximate each other. IR
light from the LED reflects off of objects within the illumination
area. The sensor picks up the reflected light, from which the
position of the reflecting object can be determined. Many
multi-touch tables such as the Microsoft® Surface utilize
reflective infrared technologies.
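The first of these schemes can be sketched numerically. Assuming an idealized inverse-square intensity falloff and a sensor that reports both received intensity and a bearing angle toward the LED (both simplifying assumptions, not details of any product described here), the sensor's position relative to the LED follows directly:

```python
import math

def position_from_ir(i_source, i_received, bearing_rad):
    """Estimate a 2-D position relative to a fixed IR LED.

    Under an inverse-square model, the received intensity fixes the
    distance r = sqrt(I0 / I); the bearing angle then fixes the
    direction from the LED to the sensor.
    """
    r = math.sqrt(i_source / i_received)
    return (r * math.cos(bearing_rad), r * math.sin(bearing_rad))

# A sensor reading one quarter of the source intensity lies two
# distance units from the LED, here directly along the x-axis.
x, y = position_from_ir(1.0, 0.25, 0.0)  # → (2.0, 0.0)
```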
[0016] Although infrared positioning is accurate and relatively
inexpensive, it is not the most useful or most accurate method of
determining the position of a P5 Glove™. Indeed, because IR
light detection is predicated on beams of light traveling between
an emitter and a sensor, obstructions to the beam path limit this
capability. Consequently, because a patient/user's hands move in
many directions and at many angles, there is no guarantee that
emitted IR beams will reach the sensor without being obstructed or
reflected.
[0017] FIG. 2 shows a Hand Mentor Rehabilitation Device 11 ("Hand
Mentor"). The Hand Mentor 11 is manufactured by Columbia
Scientific, LLC of Tucson, Ariz. and has been cleared by the Food
and Drug Administration (FDA) for use in rehabilitation
clinics.
[0018] The Hand Mentor 11 encourages patients to restore the range
of motion of their wrist and hand using the principles of
Repetitive Task Practice (RTP) and Constraint Induced Therapy (CI).
As shown in FIG. 2, the hand of the patient fits into a sleeve 12
that is adapted to sense and to generate a signal commensurate with
the level of resistance caused by flexor spasticity.
[0019] The device 11 offers three different program types that are
adapted to reduce spasticity, to recruit specific muscle groups,
and/or to improve motor control. The resistance signals are
transmitted to a processing device 13 that includes software (or,
alternatively, is hard wired) that is designed for unsupervised
patient use of the device 11. Advantageously, the device 11 can
also offer a therapist option, to establish rehabilitation regimens
and generate data for documenting and reporting the patient/user's
progress.
[0020] Referring to FIG. 3, SensAble Technologies (ST) of Woburn,
Mass. manufactures a line of haptic input devices 14, which are
designed to gather motion input and to provide feedback to the
patient/user's fingers, hand, and arm. The ST device 14 most suited
for use in a rehabilitation virtual environment is a six
degree-of-freedom PHANTOM® SensAble model, in which
patients/users grasp a pen- or pencil-like portion 15 of the device
14 in either hand and control the x-direction, y-direction,
z-direction, roll, pitch, and yaw.
[0021] The PHANTOM® six degree-of-freedom device 14 interfaces
with a PC (not shown) via a parallel port (not shown). The device
14 typically comes bundled with several software demos and with a
software development kit specific to the inputs and limitations of
the device.
[0022] The Falcon from Novint Technologies, Inc. of Albuquerque, N.
Mex. is shown in FIG. 4. The Falcon device 16 was originally
designed as an input device for playing games on a PC. The device
16 has three degrees-of-freedom; however, different grips with a
plurality of buttons or dials can be added to provide more degrees
of freedom.
[0023] The Falcon device 16 includes a 4" × 4" × 4" workspace and
has a two-pound (force) capability. The Falcon device 16 interfaces
with a PC, e.g., using a universal serial bus (USB), e.g., USB 2.0.
The Falcon device 16 is sold with several games
already available for it as well as driver software to play PC
games.
[0024] The Wii™, manufactured by Nintendo Company, Limited of
Kyoto, Japan, was released in the fall of 2006 as a personal video
gaming console. Referring to FIG. 5, the Wii™ controller 17 has
two input device components: a Wii Remote™ controller 18 and a
Nunchuk™ controller 19. The Wii Remote™ controller 18,
when used alone, is normally held in a player's dominant hand.
However, players choosing to use both the Nunchuk™ controller 19
and the Wii Remote™ controller 18 usually hold the Wii Remote™
controller 18 in the right hand and the Nunchuk™ controller 19,
which is electrically coupled to the Wii Remote™ controller 18,
in the player's left hand.
[0025] The Wii™ gaming console (not shown) connects directly to
a power source and to a television or other display device. Each
Wii Remote™ controller 18 and each Nunchuk™ controller 19
communicates with the gaming console wirelessly via a sensor bar
(not shown), e.g., using Bluetooth wireless technology. Although a
large library of publicly available Wii™ games exists, presently
there is no licensed software development kit available to the
public.
[0026] Referring to FIG. 6, a Rutgers Master II-ND Force Feedback
Glove 20 is shown. The glove device 20 was developed in 2002 at
Rutgers University and is structured and arranged to use a
plurality of, e.g., four, pneumatic actuators 21 and a plurality of
sensors. The direct-drive configuration of the actuators 21
provides force to the tips of the fingers 23 via finger rings 24
that are mechanically coupled to the actuators 21. Sensors are
disposed on the patient/user's palm 22, to avoid the presence of
wires at the fingertips 23. The Rutgers Master II-ND device 20 is a
research-only device, and there is no indication of software or a
software development kit.
[0027] Referring to FIG. 7, a CyberGlove II device 25 manufactured
by Immersion Technologies of San Jose, Calif. is shown. The
CyberGlove device 25 is one of the leading products for sensing and
capturing motion in the current market. The device 25 is made from
lightweight elastic and each sensor is extremely thin and flexible,
making the sensors virtually undetectable. The fabric on top of the
device 25 is a stretch material that is provided for comfort. The
fabric on the bottom of the device 25 is made of an open or mesh
material for better ventilation.
[0028] The device 25 is wireless and provides eighteen or
twenty-two high-accuracy joint-angle measurements. The
glove 25 uses a proprietary resistive bend-sensing technology to
capture real-time digital joint-angle data. The 18-sensor model
includes two bend sensors that are disposed on each finger, four
abduction sensors, and sensors for monitoring thumb crossover, palm
arch, wrist flexion, and wrist abduction. The 22-sensor model
includes a third bend sensor for each finger.
[0029] The CyberGlove II device 25 is electrically coupled to a PC,
e.g., using a wireless USB receiver. The software that comes
bundled with the glove 25 is for evaluation purposes only and is
not for virtual reality. There is no publicly available software
development kit for the CyberGlove device 25.
[0030] The 5DT Data Glove device 26 shown in FIG. 8, is designed
for use in motion capture and animation. The device 26 material is
stretch Lycra®, and the fingertips are exposed to facilitate the
grasping function.
[0031] The device 26 is adapted to sense multiple bends, e.g.,
finger flexion, but is unable to measure the attitude or
orientation of the hand in space or with respect to the
patient/user's body. The device 26 features automatic calibration
and has an on-board processor (not shown).
[0032] Two 5DT versions are commercially available: the 5DT Data
Glove 5 Ultra device, which includes five bend sensors to measure
discrete finger and thumb flexure, and the 5DT Data Glove 14 Ultra
device (depicted in FIG. 8), which uses two bend sensors on each
finger and thumb and one bend sensor per abduction between adjacent
digits.
[0033] The 5DT Data Glove 5 Ultra device 26 is adapted to include
Bluetooth technology to make it wireless and, also, can include a
cross-platform SDK. The bundled software that comes with the device
26 has no rehabilitation applications.
[0034] Combinations of the commercially-available prior art
technologies shown in FIGS. 1-8 have also been investigated by
others for hand rehabilitation purposes. For example, the Rutgers
Hand Master I and Hand Master II (FIG. 6) have been used in
combination with a CyberGlove™ (FIG. 7) to improve hand function
in stroke patients. The system uses the palm-mounted pneumatic
pistons 21 and virtual reality to improve resisted finger flexion
and non-resisted finger extension.
[0035] Robotic devices that train the entire arm, such as the
MIT-Manus, have also been shown to benefit stroke patients.
More recently, the Bi-Manu-Track robotic arm trainer has been found
to be equally as effective as electrical stimulation training. A
study utilizing the Howard Hand Robot found greater mobility gains
for stroke subjects who exercised with robotic assistance in
virtual reality during a relatively longer, e.g., three-week,
intervention in comparison with subjects who had robotic assistance
only during the last week-and-a-half of training.
[0036] A reported study by Fischer et al. concluded that there was
no difference between three groups of stroke subjects who trained
on a reach-to-grasp task in virtual reality with and without two
different types of robotic assistance to finger extension during
the training. Finally, a pilot study performed with a new Finger
Trainer robotic device found some improvements in active movement
and less development of spasticity in comparison with a control
group that received bimanual therapy. However, this Finger Trainer
was designed to perform passive finger movement only.
[0037] None of these devices, however, meets the need for a
low-cost, simple, UE motor training device that patients could use
easily in their homes, and, potentially, use with other patients
over a network, the Internet, and the like. Presently, few virtual
environment-based (VE-based) or robotic systems for hand
rehabilitation exist. Those that do exist are prohibitively
expensive, and most are not commercially available. Moreover, none
is suitable for independent home use by patients and, furthermore,
none provides for multiple patient/user interaction over the
Internet.
[0038] Hence, it would be desirable to provide a low-cost device
that stroke or other patients could independently use in the home,
to improve UE function, especially UE function of the
hand. Such a device would also be useful as an adjunct to ongoing
rehabilitation therapy, providing patients with an interesting and
motivating way to perform a home exercise program. If designed
appropriately, such a system could be used by a therapist to
establish exercise programs that were adjustable in level of
difficulty, and tailored to the patient's specific interests. These
features would likely increase patient motivation and
compliance.
[0039] It would also be desirable to facilitate interactions with
other patient/users over a local or a wide area network, the World
Wide Web, the Internet, and so forth to make practice more fun and
to enhance motivation. Such virtual interactions may also alleviate
feelings of social isolation in patients who remain housebound due
to mobility problems.
SUMMARY OF THE INVENTION
[0040] To meet the apparent need, a Multiple-User Virtual
Environment for Rehabilitation (MUVER) is disclosed. The MUVER is
structured and arranged to enable multiple patients and system
users at remote locations to interact with each other in virtual
space with activities designed to enhance UE and skilled-hand
function. The intended application is for use as a supplemental,
in-home rehabilitation tool for people with hand function and
coordination disabilities, specifically the type of disability that
would result from a stroke. Advantageously, MUVER will be the first
inexpensive, VE-based system that patients could purchase, e.g.,
for home use, that is specifically designed to enhance finger and
thumb movement in addition to arm movement.
[0041] The MUVER system is flexible enough to include a variety of
different rehabilitation devices to control the MUVER software. For
example, the MUVER system is structured and arranged to monitor
force and torque produced by the hand and fingers during grasping
and manipulation tasks and can be extended to control ankle
movements.
[0042] The system includes a virtual reality game-type interface
that will have "scenes" developed specifically for patients with
stroke who need to practice finger, hand, and arm movements. The
activities will be functional movements that involve the whole arm
as well as hand, but with specific emphasis on hand and finger
motions. Feedback features and training routines, based on
principles of motor learning, facilitate motor recovery in patients
at different levels of motor ability.
[0043] The device and system uniquely combine an ability to track
hand position and orientation in space with tracking of finger and
thumb configuration using an input device. This feature combination
is critical to using the device to display a wide variety of hand
and upper extremity exercises in virtual reality displays.
[0044] One such input device is fashioned like a glove for use with
a multiple-user virtual environment system for rehabilitation
exercise of a human hand and digits. The input device is structured
and arranged to generate signals corresponding to at least one of a
discrete movement and an attitude of said hand and said digits. The
device includes a glove that can be readily donned and doffed on
either hand by a user; a first plurality of sensors, each sensor
being structured and arranged to provide data on movement and range
of movement of at least one of the index finger, the middle finger,
and the ring finger; a second plurality of sensors that is
structured and arranged to provide data on movement of the thumb;
and a positioning and tracking system that is structured and
arranged to generate position coordinates in three rotational axes
and three translational axes to determine at least one of the
attitude and a velocity of said hand.
[0045] Another unique feature will be feedback lights placed on the
back of the hand which will allow the patient to know if they are
performing the correct motion while looking at their hand, as
opposed to the screen. This will allow patients with impaired
perceptual abilities to concentrate on the task while not having to
interact as much with a computer interface.
[0046] The device may also have leisure applications, in particular
gaming. The rehabilitation exercises will essentially be
mini-games, so the device could easily be adapted for
non-rehabilitation related virtual gaming.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
[0047] The foregoing and other objects, features, and advantages of
the invention will be apparent from the following more particular
description of preferred embodiments of the invention, as
illustrated in the accompanying drawings in which like reference
characters refer to the same parts throughout the different views.
The drawings are not necessarily to scale, emphasis instead being
placed upon illustrating the principles of the invention.
[0048] FIG. 1 shows a view of a P5 Glove;
[0049] FIG. 2 shows a view of a Hand Mentor Rehabilitation
Device;
[0050] FIG. 3 shows a view of SensAble Phantom.RTM. devices;
[0051] FIG. 4 shows a view of a Novint Falcon device;
[0052] FIG. 5 shows front and side views of a Wii.TM. remote
control and a view of a Wii.TM. Nunchuck.TM. device;
[0053] FIG. 6 shows a view of a Rutgers Master II-ND force
feedback glove device;
[0054] FIG. 7 shows a view of a Cyberglove.TM. II device;
[0055] FIG. 8 shows a view of a 5DT Data Glove 5 Ultra device;
[0056] FIG. 9 shows a schematic of the multi-user virtual reality
rehabilitation (MUVER) system in accordance with the present
invention;
[0057] FIG. 10 shows schematics of four common multi-user virtual
interactions;
[0058] FIG. 11 shows a back side of a glove input device having
bend sensors disposed on finger portions of the glove in
registration with the index, middle, and ring fingers;
[0059] FIG. 12 shows bend sensors disposed on the glove of FIG. 11
in registration with the thumb and the base of the wrist;
[0060] FIG. 13 shows bend sensors for capturing wrist
flexion/extension and for tracking radial and ulnar deviations and
an IMU;
[0061] FIG. 14 shows an arrangement of electroluminescent wire
light-emitting devices for providing visual signals to a
patient/user;
[0062] FIG. 15 shows two views of a second glove input device
embodiment;
[0063] FIG. 16 shows an IMU and a processing unit for the glove in
FIG. 15;
[0064] FIG. 17 shows an embodiment of an input glove device using
Hall effect sensing;
[0065] FIG. 18 shows the palm portion of the input glove device
shown in FIG. 17;
[0066] FIG. 19 shows the back of the hand portion of the input glove
device shown in FIG. 17;
[0067] FIG. 20 shows an arm sleeve embodiment of a glove input
device;
[0068] FIG. 21 shows a schematic of a hardware interface in
accordance with the present invention;
[0069] FIGS. 22A and 22B show embodiments of a banana grip base
structure;
[0070] FIG. 23 shows an embodiment of a globe base structure;
[0071] FIGS. 24A and 24B show embodiments of teardrop-shaped base
structures;
[0072] FIGS. 25A and 25B show embodiments of pyramid base
structures;
[0073] FIG. 26 shows a programming schematic for scripting a
virtual reality scene;
[0074] FIGS. 27A-27D show graphics of four stages of an exemplary
virtual environment;
[0075] FIG. 28 summarizes the mean and standard deviations of
various testing sub-phases shown in FIGS. 27A-27D;
[0076] FIG. 29 shows another schematic of the multi-user virtual
reality rehabilitation (MUVER) system in accordance with the
present invention;
[0077] FIG. 30 shows top and bottom portions to a ring
prototype;
[0078] FIG. 31 shows the top and bottom portions of FIG. 30
assembled;
[0079] FIG. 32 shows an IMU disposed in the assembled ring
prototype;
[0080] FIG. 33 shows the ring prototype with a plurality of bend
sensors;
[0081] FIG. 34 shows a knuckle plate for the prototype of FIG.
33;
[0082] FIG. 35 shows a ring prototype mechanically coupled to a
knuckle plate;
[0083] FIG. 36 shows an exemplary virtual environment scene for a
single degree of freedom knob;
[0084] FIG. 37 shows an exemplary virtual environment scene for a
single degree of freedom hand device;
[0085] FIG. 38 shows an exemplary virtual environment scene for an
active hand device;
[0086] FIG. 39 shows an embodiment of a SmartGlove input
device;
[0087] FIGS. 40A and 40B show MCP flexion/extension set-ups for 45
degrees and 90 degrees, respectively;
[0088] FIGS. 40C and 40D show PIP flexion/extension set-ups for 45
degrees and 90 degrees, respectively;
[0089] FIG. 41A shows an illustrative bar graph of MCP bend data
for the input device shown in FIGS. 40A and 40B;
[0090] FIG. 41B shows an illustrative bar graph of MCP bend data
for the input device shown in FIGS. 40C and 40D; and
[0091] FIG. 42 shows a schematic of a bi-manual SmartGlove system
with arm splints for neutral wrist position and support.
DETAILED DESCRIPTION OF THE INVENTION
[0092] U.S. provisional patent application No. 61/145,825 entitled
"Multiple User Virtual Environment for Rehabilitation (MUVER)",
which was filed on Jan. 20, 2009, and U.S. provisional patent
application No. 61/266,543 entitled "Low Cost Smart Glove for
Virtual Reality Based Rehabilitation", which was filed on Dec. 4,
2009 are incorporated herein in their entirety.
[0093] As mentioned above, the increasing number of stroke patients
who must complete their recovery at home is driving the need for
inexpensive, in-home hand rehabilitation devices. The devices
described above in the background section provide a basis for a
variety of options for rehabilitation with the correct software and
therapist-approved regimen. The NU-MUVER (Northeastern University
Multiple-User Virtual Environment for Rehabilitation) system has
been developed by Northeastern University of Boston, Mass. to meet
the need for an inexpensive device that can be used to rehabilitate
hand and finger movements of stroke survivors and other patients
experiencing neurological or orthopedic problems. The NU-MUVER
system is designed to be used at home and/or over a network, e.g.,
a LAN, a WAN, the World Wide Web, the Internet, and the like, alone
or with others, e.g., a therapist, other patients, and so
forth.
[0094] The NU-MUVER consists of three basic components: an input
device that generates data on position, attitude, and orientation
of the patient/user's hand in space as well as of individual finger
and thumb movements; commercially-available graphics software that
provides object and animation routines that can be used to
construct various movement re-training scenes; and a control unit
that includes control software that enables a networking
capability, movement parsing, performance scoring, recording,
storage, manipulation, and display of data; and multiple training
"scenes" that are designed to facilitate the practice of a
particular movement(s) that is/are therapeutic for discrete patient
populations.
[0095] Referring to FIG. 9, a MUVER system 90 in accordance with
the present invention is shown. The system 90 is structured and
arranged to provide a plurality of virtual environments 100
designed for specific rehabilitation exercises and for multiple
patients/users 91 to interact with others, and with third parties
96, e.g., medical personnel, physical therapists, and the like.
Virtual environments 100, or worlds, that are designed for more
than one patient/user 91 are called Multi-User Virtual Environments
(MUVE). Because the instant MUVE is for rehabilitation, the system
90 is referred to as a "Multi-User Virtual Environment for
Rehabilitation" or MUVER 90. The elements of the MUVER 90 are shown
in the figure and are discussed in greater detail below.
[0096] The MUVER system 90 is designed to be modular, which is to
say that the number of patients/users 91 and the size of the
virtual environment as a whole, or of each discrete, individual or
personal virtual environment 100, can vary and, moreover, can be
easily changed. To facilitate modularity further, each patient/user
91 is equipped with his/her own personal computer (PC) 93 on which
MUVER software 94 is installed. As shown in FIG. 9, an input device
92, the PC 93, and the software 94 define each personal virtual
environment 100. The input device 92 in each individual virtual
environment 100 is adapted to enable each patient/user 91 to
interact with other patients/users 91, a third party 96, and the
like.
[0097] Communication from and between personal virtual environments
100 takes place over and through a virtual environment network 95,
e.g., a LAN, a WAN, the World Wide Web, the Internet, and the like.
This approach differs appreciably from other virtual environments
in which a dedicated server operates the virtual environment for
each of the patients/users. This feature facilitates recording and
logging communications between the virtual environment network 95
and a third party's computer 96 for later evaluation.
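For illustration only, the per-environment recording of network traffic for later third-party review might be modeled as below. This sketch, including the `SessionLog` name and the message fields, is an assumption and is not part of the disclosure:

```python
import json
import time


class SessionLog:
    """Illustrative per-environment log of virtual-environment traffic,
    kept locally so a third party (e.g., a therapist) can review it later."""

    def __init__(self):
        self.entries = []

    def record(self, user_id, payload, timestamp=None):
        """Append a timestamped message and return its serialized JSON form."""
        entry = {
            "t": timestamp if timestamp is not None else time.time(),
            "user": user_id,
            "data": payload,
        }
        self.entries.append(entry)
        return json.dumps(entry)
```

Because each personal environment keeps its own log, no dedicated central server is needed to reconstruct a session afterward.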
The Virtual Environments
[0098] The design of a unique virtual environment has several
stages. The first stage is to use the nature of the patient/user's
disability to determine what rehabilitation exercises or movements
would be appropriate and feasible to emulate in a virtual
environment. Preferably, the rehabilitation exercises or movements
are selected by a physical therapist, a physician, a medical
specialist, and the like.
[0099] Once appropriate rehabilitation exercises have been chosen,
the next step is to choose the character of multi-user interaction.
Once again, preferably, the character of interaction is determined
by a physical therapist, a physician, a medical specialist, and the
like. Common types of multiplayer or multi-user virtual
interaction, e.g., competitive interaction, counter-operative
(versus) interaction, cooperative interaction, mixed interaction,
and any other combination of the first three interactions, are
illustrated in FIG. 10.
[0100] "Competitive interaction" occurs where each patient/user 91
of a plurality of patients/users 91, who have no direct interaction
between them, completes the same task having the same goals for
which a comparative score can be assigned. "Counter-operative" or
"versus interactions" occur where a first patient/user 91 works
against a second patient(s)/user(s) 91 to achieve competing goals,
which only one of the patients/users 91 can obtain. "Cooperative
interaction" occurs where two or more patients/users 91 work
jointly to complete a common goal or task. "Mixed interactions"
occur where patients/users 91 work together to complete a common
goal but the performance of each patient/user 91 is scored
comparatively.
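As an illustrative sketch only (the `Interaction` and `score` names are assumptions, not part of the disclosure), the four interaction types and the scoring distinction they imply might be modeled as:

```python
from enum import Enum, auto


class Interaction(Enum):
    """The four multi-user interaction types described for the MUVER system."""
    COMPETITIVE = auto()        # same task, same goals, comparative scores
    COUNTER_OPERATIVE = auto()  # opposing goals; only one user can prevail
    COOPERATIVE = auto()        # shared goal, shared score
    MIXED = auto()              # shared goal, individually compared scores


def score(interaction, raw_scores):
    """Return per-user results for a finished session.

    raw_scores maps user id -> individual task score. In cooperative play
    every user receives the team total; in the other modes each user keeps
    an individual score that can be compared against the others.
    """
    if interaction is Interaction.COOPERATIVE:
        team = sum(raw_scores.values())
        return {user: team for user in raw_scores}
    return dict(raw_scores)
```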
[0101] Once a desired virtual interaction is selected, the setting
of the individual virtual environment 100 can be established.
Preferably, the setting is simple and appropriate for the discrete
patient/user 91 and, moreover, is designed to make the goal(s)
clear for each patient/user. After the virtual environment 100 is
completely designed, rigorous testing for both interaction and
substance is necessary.
[0102] The MUVER system 90 includes three components: an input
device 92, a graphic display device, and a controller. For the
purpose of illustration and not limitation, the MUVER system 90
will be described in terms of a SmartGlove.TM. as the input device,
a Panda3D graphics engine for the graphics display device, and
specialty software and driver programs for controlling the system
90. These components are discussed in the subsequent sections.
However, brief descriptions of sensors and of positioning and
tracking systems are provided.
Sensors
[0103] Sensing devices ("sensors") are provided to sense movement,
e.g., bending, flexion, and so forth. The bend or flexion of a
human finger can be measured using various methods, which are
collectively referred to as bend sensors.
[0104] Electronic bend sensors use physical geometries and material
properties to alter an electrical signal in proportion with angle
or pressure. Bend radius and bend angle affect sensor output
voltage. Bend sensors have been used for finger position
measurement for quite some time, with the first large-scale
commercial application appearing in 1989 with the Nintendo.RTM.
Power Glove. There remains a wide range of currently-available
products that use bend sensors, from very simple to very
expensive.
[0105] Other types of bend sensors include optical fiber sensors
and mechanical measurement devices. Electromechanical sensors
provided in, for example, the Nintendo.RTM. Power Glove use the
patented technology of Abrams Gentile Entertainment Inc.
("Abrams"). Abrams defines five different electromechanical methods
for changing the resistance of an electrically-conductive
construction based on a bend angle. A first economical application of
these technologies involves a resistive sensor having a
carbon-based, electrically-conductive ink as a stretched part,
which changes electrical resistance in response to applied
pressure. Using simple baseline calibration routines, reliable
measurements of bend angle are attainable.
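A minimal sketch of such a baseline calibration routine, assuming a linear mapping between a flat-hand reading and a recorded full-bend reading (the function names and the linearity assumption are illustrative only, not part of the disclosure):

```python
def calibrate_baseline(samples):
    """Average a few flat-hand readings to establish the sensor's rest value."""
    return sum(samples) / len(samples)


def bend_angle(reading, baseline, full_bend, max_angle=90.0):
    """Linearly map a resistive reading between the flat-hand baseline and a
    recorded full-bend value onto 0..max_angle degrees, clipping outliers."""
    span = full_bend - baseline
    if span == 0:
        return 0.0
    fraction = (reading - baseline) / span
    return max(0.0, min(max_angle, fraction * max_angle))
```

A real sensor's resistance-versus-angle curve is not perfectly linear, so a production routine would likely use a per-sensor lookup table instead.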
[0106] Other similar electromechanical inventions have been
patented, including some that modulate capacitance of the sensor in
a similar manner. This class of sensors offers better reliability
and accuracy than resistive sensors, and is also able to determine
the direction of bending rather than just the magnitude of the
bend.
[0107] Optical bend sensors typically include a light source that
is coupled to a light detector using, for example, an optical
fiber. As the fiber bends, less light traverses the length of the
fiber because bending defeats total internal reflection (TIR). For
example, at higher bend angles, relatively few rays strike the
detector and more rays exit the fiber at large angles.
[0108] This particular optical technology was used to develop the
Data Glove, one of the earliest hand data recording systems, made
by VPL Research, Inc. The optical method provides a repeatable
measurement of a bend angle; however, it is less cost effective
than either of the previously described technologies. Other optical
technologies improve on the concept by using multiple fibers in a
bundle or by pre-bending the fiber in a certain direction, and are
therefore able to measure direction of bend as well as
magnitude.
[0109] Finally, a class of angle measurement sensors exists that
relies on mechanical means such as the tension of a cable disposed
inside a rigid tube, or the relative position of members in an
armature. Many such systems exist but, in most cases, each is
tailored to a particular application; hence, these systems are not
inherently flexible. Conventionally, with these systems, changes in
the geometry of a mechanical arm assembly are measured. Although
such a system could potentially cost less than, for example, a
resistive bend sensor, the time spent on design and troubleshooting
would likely offset any cost savings. Accuracy also can be quite
good; however, tighter mechanical tolerances must be held for
repeatable measurements.
[0110] Hall effect sensors are switches that are activated in the
presence of a magnetic field such as generated by a magnetic
field-producing device, e.g., a magnet. The sensor contains a
conductive element through which an electrical current flows. The
moving charges follow a straight line except when in proximity of a
magnetic field perpendicular to the current, at which time the
path of the charges becomes non-linear, i.e., curves, and charge
accumulates on one face of the sensor. The distance at which the
magnetic field causes the sensor to act like a switch is a function
of the strength of the magnetic field and, therefore, the magnet,
and the current density specified by the sensor.
Positioning and Tracking Systems
[0111] The ability to generate position coordinates in six axes
(three translational and three rotational) and the ability to
continuously track the position coordinates are critical to the
operability of the device and system. Several
commercially-available positioning systems can produce position
coordinates accurately and track multiple points at once. For
example, magnetic tracking systems combine very high tracking
resolution with high-speed sampling, which contribute to
utilization in virtual reality simulations. Disadvantageously,
magnetic tracking systems are very expensive and, furthermore, the
likelihood of successfully integrating magnetic tracking in an
inexpensive, home system is not very high. The magnetic fields
associated with these systems also may experience high interference
in home operation, affecting proper and satisfactory system
operation.
[0112] Radio frequency positioning and tracking, e.g., using a few
radio frequency identification (RFID) tags in combination with a plurality of
receiving units, is a possible alternative. Typically, RFID systems
determine the positioning of an object, e.g., a hand, by
triangulating, e.g., measuring the time it takes for the RFID
signal to travel to/from the object for each of the plurality of
receiving units. However, conventional RFID systems operate at or
near the wireless spectrum of most household devices, making interference
an issue. RF systems also would not provide the accuracy necessary
for the present invention.
[0113] Infrared positioning, which is discussed above, can be both
accurate and inexpensive. However, IR relies on line-of-sight
signals, making obstructions a significant problem.
[0114] Inertial measurement units (IMUs) are adapted to determine
the orientation of an object (in space), the velocity of the
object, and 3D positions using dead reckoning. IMUs can be
structured and arranged to gather data from all six degrees of
freedom while avoiding the shortcomings of the IR and
electro-magnetic options. One reason why IMUs have not been used
heretofore has to do with dead reckoning.
[0115] "Dead reckoning" refers to all object positions being
measured relative to a pre-established and known initial starting
("home") point. As soon as the IMU moves from the initial starting
point, sensors, e.g., multi-axis accelerometers, gyroscopes, and
the like, provide data to a processing unit that is adapted to
calculate the speed of the IMU and the distance traveled from home.
Each successive move relates back to all previous movements, hence,
positioning is a compilation of a plurality of discrete, smaller
movements.
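The dead-reckoning accumulation described above can be sketched in simplified Euler-integration form. This is an illustrative model only; a real IMU pipeline would also remove gravity, fuse gyroscope data, and correct for drift:

```python
def dead_reckon(accel_samples, dt, v0=(0.0, 0.0, 0.0), p0=(0.0, 0.0, 0.0)):
    """Integrate acceleration samples twice to track velocity and position
    relative to a known initial starting ("home") point.

    accel_samples: iterable of (ax, ay, az) in m/s^2, sampled every dt seconds.
    Returns ((px, py, pz), (vx, vy, vz)) after the last sample.
    """
    vx, vy, vz = v0
    px, py, pz = p0
    for ax, ay, az in accel_samples:
        # Each step builds on the previous one: positioning is a compilation
        # of many small movements, all measured relative to "home".
        vx += ax * dt
        vy += ay * dt
        vz += az * dt
        px += vx * dt
        py += vy * dt
        pz += vz * dt
    return (px, py, pz), (vx, vy, vz)
```

The accumulation is also why IMU error grows over time: any bias in a sample is carried into every subsequent position estimate, which is why the base station described later provides a re-calibration ("home") point.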
[0116] Finally, three-dimensional cameras provide real-time
translational axis positional data but do not provide rotational
axis positional data. Typically, an illumination source emits an IR
light having a discrete, pre-established frequency, e.g., 44 MHz. A
plurality of sensors--half of which operate at the pre-established
frequency and half of which are out-of-phase with that
frequency--measures the time it takes for the IR light to be
reflected by an object and to return to the sensor, which is to
say, the time-of-flight (TOF). TOF data provide accurate depth data
of the object, which can be gathered at an acceptable rate of 60
frames per second.
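A minimal sketch of the TOF depth computation (illustrative only; practical TOF cameras infer the round-trip time from the phase offset between the in-phase and out-of-phase sensor halves rather than timing each pulse directly):

```python
C = 299_792_458.0  # speed of light in m/s


def tof_depth(round_trip_seconds):
    """Depth of the reflecting object from the measured time-of-flight.

    The IR light covers the camera-to-object distance twice (out and back),
    so the one-way depth is half the round-trip distance.
    """
    return C * round_trip_seconds / 2.0
```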
Input Device (SmartGlove.TM. )
[0117] An ideal input device 92 is a wearable glove that is sized
to be universal, i.e., useable on either hand, or adjustable, and,
optionally, has its fingertip portions removed, to accommodate
different hand sizes. Preferably, the input device 92 is structured
and arranged to enable a patient/user 91 to don it and doff it
using only one hand. A total weight not to exceed one pound and a
dorsal weight not to exceed eight ounces is recommended to
facilitate use by patients/users 91.
[0118] Ideally, six axes of movement of the hand are feasible and,
more importantly, are recognizable for the purpose of generating
and recording movement and orientation data. Additionally, motion
of each finger and thumb is not hindered and individually
isolatable.
[0119] Preferably, the input device 92 is structured and arranged
to measure at least one of the following accurately: finger
flexion/extension measured to at least 90.degree.; wrist
flexion/extension, i.e., dorsal action, at .+-.90.degree.;
wrist-radial deviation up to 40.degree.; and wrist-ulnar deviation
up to 50.degree.; and supination/pronation of the forearm up to
180.degree..
[0120] Referring to FIG. 11, a first input device embodiment 70
includes bend sensors 71, 72, and 73, which are disposed on the
back of the input device 92 on finger portions that are in
registration with the patient/user's index, middle, and ring
fingers. To reduce weight and cost, a sensor on the pinky finger,
whose movement generally follows that of the adjacent ring finger
very closely, is optional. As shown in FIG. 12, a bend sensor 74
can also be disposed on the back of the input device 92 in
registration with the thumb and a bend sensor 75 can be disposed on
the input device 92 at the base of the palm of the hand. The latter
bend sensor 75 is adapted to bend as the heel of the thumb crosses
the palm to oppose one or more of the fingers, e.g., during a pinch
motion.
[0121] Referring to FIG. 13, a two-dimensional bend sensor 77,
which is disposed on the back of the input device 92 in
registration with the wrist and oriented along the axis of the
ulna, is provided to capture wrist flexion/extension. To track
radial and ulnar deviations, a bend sensor 78 is disposed on the
ulnar side inside of the hand, generally oriented along the axis of
the thumb in a neutral position. At a neutral hand position, the
bend sensor 78 for radial and ulnar deviations is slightly bent.
When the hand is rotated or turned in an outward direction from the
neutral position, the same sensor 78 will appear to remain
straight. However, when the hand is rotated or turned inwardly from
the neutral position, the sensor 78 will detect and measure the
greatest movement.
[0122] Preferably, the glove 70 can be hardwired to a base station
that includes the electronics required to communicate to the PC 93,
e.g., wirelessly or via a USB 99. This eliminates the need to
attach an electronics board to the patient/user's forearm. The base
station can be ergonomically shaped and can include a mechanical
button for dead reckoning purposes. A plurality of, e.g. two, input
buttons can also be provided to facilitate digital YES (or 1) and
NO (or 0) input for navigating the software. To prevent false
readings, software will only recognize button input at discrete,
pre-established times, e.g., between exercise sets.
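The button-gating behavior described above might be sketched as follows (the `ButtonGate` name and interface are assumptions, not part of the disclosure):

```python
class ButtonGate:
    """Accept YES/NO button input only at pre-established times, i.e.,
    between exercise sets, to prevent false readings during an exercise."""

    def __init__(self):
        self.between_sets = True  # navigation allowed before the first set

    def start_set(self):
        """An exercise set begins; ignore button presses from now on."""
        self.between_sets = False

    def end_set(self):
        """The set is over; button navigation is accepted again."""
        self.between_sets = True

    def press(self, value):
        """value: 1 for YES, 0 for NO. Returns the value if accepted,
        or None when the press arrives mid-set and must be discarded."""
        return value if self.between_sets else None
```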
[0123] A feedback system, e.g., a haptic system, an audible
speaker, and/or light emitting devices, can also be disposed on or
within the finger portions of the glove and/or on the back of the
glove 70, e.g., with or in the IMU housing 76, to provide vibratory
or auditory clues and/or visual signals during exercises. For
example, referring to FIG. 14, electro-luminescent (el) wire 79 can
be exposed around each of the index, middle, and ring fingers so
that when any of the fingers is moved into a correct position, the
el wire 79 disposed about the correctly-positioned finger can be
illuminated, e.g., by a sequencer (not shown) electrically coupled
to a power source and a power control device (not shown), e.g., an
inverter.
[0124] Referring to FIG. 15, a second glove embodiment 60 is shown.
The glove 60 includes bend sensors 61 and 62 that are disposed,
respectively, on the metacarpophalangeal (MCP) joint and the
proximal interphalangeal (PIP) joint of the thumb and of each
finger, including the pinky finger. The MCP and PIP bend sensors 61
and 62 are adapted to record arcuate bend data associated with the
motion or movement of each finger. A third bend sensor 63 is
disposed on the back of the hand at the base of the thumb. For
measuring wrist flexion/extension, a bi-directional bend sensor
(not shown) is disposed to extend across the wrist on the palm side
of the glove 60.
[0125] Optionally, a switch pad 64, e.g., a capacitive touch
sensor, can be disposed on or within the tip of the thumb portion
of the glove 60 for providing and recording pinch data. In
operation, when the tip of one or more of the glove fingers
contacts, i.e., activates, the switch pad 64, the controller is
adapted to use the touch data and bend sensor data generated to
differentiate and identify the pinching finger from the
non-pinching fingers.
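One illustrative way the controller might combine touch-pad and bend-sensor data to identify the pinching finger (the function name, the threshold value, and the most-flexed-finger heuristic are assumptions, not part of the disclosure):

```python
def pinching_finger(pad_active, bend_angles, threshold=40.0):
    """Differentiate the pinching finger from the non-pinching fingers.

    pad_active: True when the thumb-tip touch pad reports contact.
    bend_angles: dict mapping finger name -> current bend angle in degrees.
    The most-flexed finger at or above the threshold is taken to be the
    one pinching against the thumb; None means no pinch was detected.
    """
    if not pad_active:
        return None
    candidates = {f: a for f, a in bend_angles.items() if a >= threshold}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)
```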
[0126] An IMU 65 is provided on the glove 60, e.g., between the
knuckles and the wrist on the portion of the glove 60 corresponding
to the back of the patient/user's hand. During prototype testing,
the inventors used an IMU 65 that includes three integrated
circuits: two vibrating beam gyroscope assemblies and one
micro-machined accelerometer. Together, the three chips generate
data in all three acceleration axes and in all three rotational
axes. Hence, the IMU provides six degrees of freedom.
[0127] The IMU 65 is electrically coupled to a processing unit 66
that is removably attached to the patient/user's forearm as shown
in FIG. 16. By attaching the processing unit 66 on the
patent/user's forearm, the weight of the unit 66 is removed from
the hand so as not to hinder or interfere with hand movement while
keeping the sensors 61-64 proximate to the unit 66.
[0128] A Hall effect glove embodiment 50 is shown in FIGS. 17-19.
Single bend sensors covering both the MCP and PIP joints 51 are
disposed on each of the finger portions of the glove 50. A
plurality, e.g., three, bend sensors 52-54 are disposed on the
thumb portion of the glove 50. A bend sensor 52 is disposed between
the thumb portion and the index finger portion of the glove 50 to
track relative movement between the fingers and the thumb, another
bend sensor 53 is disposed along the axis of the thumb portion to
track movement of the thumb, and a third bend sensor 54 is disposed
at the base of the thumb portion of the glove 50 to measure roll of
the wrist joint as the patient/user's thumb reaches across the
palm.
[0129] To measure finger pinch, on the palm side of the glove 50,
Hall effect sensors 56 are disposed on the tips of each glove
finger and a magnetic field generating device 57, e.g., a magnet,
is disposed at or near the tip of the glove thumb. When the
magnetic field from the magnetic field generating device 57
approaches and/or contacts any of the sensors 56, the sensors 56
switch, providing data to the controller. The controller is adapted
to use the touch data and bend sensor data generated to
differentiate between and to identify the pinching finger from the
non-pinching fingers.
[0130] To properly measure hand position and orientation, an IMU 52
is disposed on the back of the hand portion on the glove 50.
Another Hall effect sensor 59 is also disposed in the palm of the
glove 50 to enable dead reckoning. More specifically, whenever the
patient/user 91 places his/her hand correctly on a base station
(described in greater detail below) that is equipped with a
magnetic-field generating device (not shown), the Hall effect
sensor 59 will generate a signal from which the controller will
call and execute an algorithm, software, driver programs, and the
like to calibrate the IMU 52.
[0131] Advantageously, with Hall effect sensors in the glove 50 to
activate a signal, the base can be wireless. Although not required,
a wireless base minimizes the proliferation of wires and cables,
which can get tangled and/or hinder movement.
[0132] As shown in FIG. 19, a strap 40 can be removably attached to
the patient/user's forearm. The strap 40 can include a power source
(not shown), e.g., one or more batteries, a controller (not shown),
e.g., an Arduino USB board manufactured by Arduino Software of
Italy, and an accelerometer 41. Comparison of accelerometer
readings from the accelerometer 41 on the forearm and from an
accelerometer disposed in the IMU 52 can be used to determine the
angle of wrist bending.
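The comparison of the two accelerometer readings might be sketched as the angle between their measured gravity vectors. This is an illustrative model only; it assumes the hand is quasi-static so each accelerometer reads mostly gravity:

```python
import math


def wrist_angle(forearm_g, hand_g):
    """Angle in degrees between the gravity vector measured by the forearm
    accelerometer and the one measured by the hand-mounted IMU's
    accelerometer, taken as the wrist bend angle."""
    dot = sum(a * b for a, b in zip(forearm_g, hand_g))
    norm_a = math.sqrt(sum(a * a for a in forearm_g))
    norm_b = math.sqrt(sum(b * b for b in hand_g))
    # Clamp against floating-point drift before taking the arc cosine.
    cos_t = max(-1.0, min(1.0, dot / (norm_a * norm_b)))
    return math.degrees(math.acos(cos_t))
```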
[0133] Referring to FIG. 20, an arm sleeve embodiment 45 is shown.
The embodiment includes a glove portion 42 and a pulley portion 43.
To measure the flexion/extension of the patient/user's
digits, bend sensors 44 and 46 are disposed on the back of the hand
portion of the glove portion 42, to be in registration with the MCP
and the PIP of each finger, while at least one bend sensor 47 is
disposed along the PIP of the thumb. To measure the roll of the
thumb, at least one bend sensor (not shown) is disposed on the
glove portion 42 along the crease of the thumb along the palm of
the hand. Flexion/extension of the wrist can be measured by a bend
sensor (not shown) that is disposed on the glove portion 42 in
registration with the posterior side of the patient/user's
wrist.
[0134] Optionally, the glove portion 42 can be fingerless,
permitting better fit across a variety of hand sizes. When
fingerless, a plurality of rubber caps are attached to the tips of
each finger and thumb. Inside each rubber cap is a small push
button that is proximate the finger or thumb tips. The operation of
the buttons in the fingerless version is the same as that
previously described.
[0135] The pulley portion 43 is provided to measure radial and
ulnar deviations. To that end, the pulley portion 43 includes a
strap 48 that is securely but releasably attachable to the, e.g.,
medial side of the, glove portion 42, e.g., using a hook and pile
material, at a first end; that runs along the lateral side of the
patient/user's wrist; and that passes through a small pulley 49
that is disposed, e.g., in an arm sleeve, above the patient/user's
elbow.
[0136] Referring to FIG. 21, an exemplary hardware interface, i.e.,
input device 92, that each patient/user 91 will use is the
SmartGlove.TM. developed at Northeastern University of Boston,
Mass. The SmartGlove.TM. 92 was chosen as the input device because
it is a low-cost, off-the-shelf device that would be suitable
for home use. Newer technologies could provide the same kind of
interface for patients/users 91 while maintaining the high
usability and low cost for the practitioner.
[0137] An input device mounting box prototype will be described
referring to FIGS. 30-33. FIG. 30 shows top and bottom portions
101, 102 of a mounting box 110 that, when assembled as shown in
FIG. 31, form a box-like structure that is structured and arranged
to rest on the back of a patient/user's hand and to accommodate all
of the necessary sensing devices. A depressed area 106 for
receiving an IMU 108 is provided in the top portion 101. A pair of
vertical standoffs 109 are provided to orient the IMU 108. A slot
104 for accommodating a Hall effect sensor 105 is also provided in
the top portion 101.
[0138] The prototype 110 is releasably attachable to the back of
the patient/user's hand using, for example, a hook and pile
combination that can be routed through a pair of openings 103
provided for that purpose.
[0139] Referring to FIG. 33, bend sensors 107, which will be
disposed within the material of the SmartGlove.TM. 92, are shown
optionally coupled to exiting sleeve rings 109 at a first end and,
to mimic the structure of the hand, are mechanically attached to a
single point at the rear of the mounting box 110 at a second end.
As shown in FIG. 35, each bend sensor 107 enters the mounting box
110 via a respective slot 112. Elastic string can be used for
attaching the bend sensors 107 to the single point. The string
prevents the sensors 107 from rotating about the single point while
also allowing the sensors 107 freedom to translate along the axis
of each finger as flexion/extension occurs. This enables the
sensors 107 to retain the geometry of the patient/user's hand as
knuckles are flexed and relaxed. It also enables the bend sensors
107 to remain in a constant position relative to the fingers.
[0140] Optionally, to fix and hold constant the radius over which
an MCP bend sensor will flex and to prevent the bend sensors 107
from sliding to the sides of the patient/user's knuckles, a knuckle
plate 111 (FIGS. 34 and 35) can be provided. Because signals from
the sensors 107 vary with the bend radius independent of the actual
angle of the bend, if the radius can be held constant, the angle of
the MCP joint's bend can be accurately modeled.
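For purposes of illustration and not limitation, a fixed-radius calibration of this kind can be sketched as follows; the resistance values and the linear model are hypothetical, intended only to show how a bend-sensor reading maps to a joint angle once the knuckle plate 111 holds the radius constant:

```python
def make_angle_model(r_flat, r_bent, angle_bent=90.0):
    """Linear calibration for a bend sensor held at a fixed radius:
    resistance is mapped to joint angle using two reference readings
    (hand flat = 0 degrees, closed fist = angle_bent degrees)."""
    span = r_bent - r_flat
    def angle(resistance):
        return (resistance - r_flat) / span * angle_bent
    return angle

# Hypothetical readings (ohms): 10k when flat, 22k at a 90-degree fist.
mcp_angle = make_angle_model(10_000.0, 22_000.0)
```

The two reference readings would be captured during the calibration step each patient/user performs before an exercise session.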
[0141] The SmartGlove.TM. 92 was also chosen because it is easily
connected to a PC 93, e.g., wirelessly or via a universal serial
bus (USB) port 99, and offers satisfactory control to the
patient/user 91. For use in connection with the MUVER system 90, a
USB 2.0 connection is preferred for its greater data transfer
rate.
[0142] Technical specifications for the SmartGlove.TM. input device
92 include:
Finger Sensor Specifications
[0143] Five independent finger measurements [0144] 60 Hz refresh
rate [0145] 0.5 degree resolution
Tracking System Specifications
[0146] Optical tracking system [0147] 3-foot range from
"receptor" [0148] 45 Hz refresh rate [0149] Six DOF
Yaw\Pitch\Roll Specifications
[0150] 3 degree resolution [0151] 3 degree accuracy
X\Y\Z Specifications
[0152] 0.125 inch resolution at 3 foot range from "receptor"
[0153] 0.5 inch accuracy at 3 foot range from "receptor"
USB System Specs
[0154] USB 1.1 compliant [0155] HID specification compliant
[0156] At least two USB interfaces provided, i.e., a native P5
mode and a standard mouse mode
[0157] Referring again to FIG. 21, during operation, which is to
say, during manipulation of the hand rehabilitation (input) device
92, the patient/user's hand and/or finger movements are transmitted
to the PC 93, e.g., wirelessly and/or via the USB connection 99. A
device driver 97 for the SmartGlove.TM. 92, which can be installed
in the operating system of the SmartGlove.TM. 92 or, alternatively,
as shown in FIG. 21, in operating system 98 of the PC 93,
interprets the input data and provides the data to the virtual
reality software 94.
[0158] As shown in FIG. 39, although the IMU 115 is incorporated
into the input device 92 and disposed on the back of the
patient/user's hand, when a USB connection 99 is used, a cable
mount 112 can be disposed on the patient/user's forearm to reduce
the total weight of the system on the patient/user's hand.
[0159] The virtual reality software 94, in turn, uses the
SmartGlove.TM. 92 input data to generate output signals designed to
display appropriate images in a virtual reality on a display device
(not shown), e.g., the display device of a PC. Preferably, the
MUVER programming display software includes three different pieces
that are illustrated in FIG. 26: a game engine 82, 3D models and
graphics 83, and a scripting code 84.
Bi-Manual System
[0160] Although the above description describes a single input
device, the system can also include multiple input devices, e.g., a
pair of SmartGloves.TM. for the left and right hands of the
patient/user, that can be used simultaneously. Multiple input
devices in general and, more specifically, a pair of
SmartGloves.TM., permit more complex and realistic rehabilitation
tasks such as, in virtual reality, simultaneously grasping a jar
with a first hand and removing a lid from the jar with a second
hand.
[0161] An illustrative bi-manual system 120 is shown in FIG. 42.
Each of the patient/user's hands and forearms are supported in
adjustable arm splints 111 for neutral wrist position and support.
The end portions 119 of each of the splints are ergonomically
curled to make the natural position as comfortable for the
patient/user as possible. Each hand is disposed within an input
device 113 that can be a Spandex.RTM./cotton blend glove.
[0162] Bend sensors 114 are positioned (within a pocket in or
within the material of the glove 113) across the MCP and PIP of the
patient/user's index, middle, and ring fingers, to measure finger
flexion and extension. A fourth bend sensor 116 is disposed along
the back of the hand proximate the patient/user's wrist to measure
wrist flexion and extension. A fifth bend sensor 117 is disposed
along the back of the thumb to measure the rotation of the thumb
with respect to the patient/user's palm and fingers.
[0163] Hall effect sensors 118 can be wired in the finger tip
portions of the glove 113 and are adapted to interact when they
come into proximity with a small magnet (not shown) that is disposed in
the palm of the glove. The small magnets in each of the gloves are
used in combination with the globe base (see below) to calibrate
the position of the gloves with respect to the base.
[0164] An IMU 115 is disposed on the back portion of the glove 113
for monitoring the three-dimensional hand position, i.e., attitude,
of the patient/user's hand. To reduce the weight on the
patient/user's hand and to facilitate connection of wiring to the
IMU 115, a cable housing 112 is wrist-mounted. More preferably,
wireless communication of data can be effected.
Base Unit
[0165] With the present invention, the patient/user must also be
able to achieve the same dead reckoning position at the start
and/or at the completion of each exercise due to, inter alia, the
nature of IMU positioning. To best achieve this, the system
includes a base structure that accounts for ergonomics and an
activation sequence that enables the controller to receive data
once the patient/user has placed his/her hands in the appropriate
position.
[0166] FIG. 22A shows a banana grip base 30 having a plurality of
RFID tag receivers 31, and FIG. 22B shows the same grip base 30 with
a patient/user's hands placed in the appropriate starting ("home")
position. The ergonomics of the banana grip base 30 allows
patients/users to place their hands on the base pad 32 without
having to strain to cause them to lie flat. The RFID receivers 31
do not require power, allowing the grip base 30 to remain
completely passive, i.e., wireless.
[0167] According to this approach, RFID transceivers 59 (see, FIG.
18) are disposed in the input device 50 so that when the
patient/user's hands are disposed at the "home" position, the
receivers 31 and transceivers 59 are proximate, causing the
transceiver 59 to emit a signal to that effect. A drawback of the
RFID approach is its limited accuracy. Slight deviations from a true "home"
position may skew the results of the exercise.
[0168] FIG. 23 shows a globe base 35 that includes imprinted hand
grooves 33 that define the "home" position. Capacitive touch
sensors 34 can be disposed at each of the finger and thumb tips of
the hand grooves 33. The globe base 35 is adapted so that the
patient/user's digits must touch each of the touch sensors 34 in
both hand grooves 33 for location data to zero itself. This design
is particularly attractive due to its simplicity and, further, it
causes little strain on the patient/user's hands. The hand grooves
33 and touch sensors 34 at the fingertips of the hand grooves 33
make it intuitive to use. Alternatively, Hall effect sensors and
magnets can be integrated into the input device 92 and the globe
base 35.
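For purposes of illustration and not limitation, the zeroing behavior described above can be sketched as follows; the names and the five-element touch lists (one Boolean per digit) are illustrative assumptions:

```python
def all_home(left_touches, right_touches):
    """True only when every fingertip sensor in both hand grooves 33
    reports contact."""
    return all(left_touches) and all(right_touches)

class HomeCalibrator:
    """Zeroes the IMU-derived position once both hands rest fully in
    the grooves, establishing the dead-reckoning origin."""
    def __init__(self):
        self.offset = None

    def update(self, left, right, raw_position):
        if all_home(left, right):
            self.offset = raw_position      # record the "home" origin
        if self.offset is None:
            return None                     # not yet calibrated
        return tuple(p - o for p, o in zip(raw_position, self.offset))
```

Until every sensor has been touched simultaneously, no position data is reported, which mirrors the requirement that the location data zero itself before an exercise begins.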
[0169] The globe base 35 is a single piece, ergonomic design that
requires a power source. The relatively large size makes it possible
to house electronics, e.g., a processing board, a Bluetooth.RTM.
receiver, and other accessories (including the input devices when
not in use), within the base 35 itself. Various feedback systems,
e.g., speakers for audio feedback, vibration devices for haptic
feedback, and LEDs for visual feedback, can also be disposed within
or on the outer surface of the globe base 35.
[0170] Prototype testing by the inventors highlighted the need for
providing wrist and/or forearm support to accommodate and support
the patient/user's hands when properly positioned in the "home"
position. The incorporation of medical arm splints
with the globe base 35 (FIG. 42) allows the patient/user to
maintain his/her hands in a neutral, "home" position, which is to
say: full pronation, no extension or flexion, and no radial or
ulnar deviation. By employing a splint that is not in wrist
extension--as most are--patients having, for example, a contracture
or spasticity in wrist or finger flexor muscles can more easily
achieve the correct, neutral position.
[0171] Disadvantageously, touch sensors 34 in the fingertips alone
do not ensure that the patient/user's palms are also touching the
base 35. As a result, location data may be incorrect. The touch
sensors 34 also require the glove device to be fingerless as only
contact with exposed skin at the fingertips will activate the
sensors 34. Finally, the size of the globe base 35 is relatively
large, requiring additional exercise area.
[0172] FIGS. 24A and 24B show a teardrop base 39, one of which is
provided for each hand. The teardrop base 39 design creates an
ergonomic platform on which patients/users may rest their hands.
A tactile switch button 38 is disposed on the base 39 so that when
the patient/user's hand is properly positioned over the base 39,
the button 38 will activate the device. The active switch button 38
can be electrically coupled to the controller, e.g., via a USB
connection. The main problems with the teardrop design are that the
button 38 will not necessarily be repeatedly activated by the same
part of the patient/user's hand and that two bases 39 are
needed--one for each hand--requiring two dead reckoning
signals.
[0173] FIGS. 25A and 25B show a pyramid base 37, one of which will
also be provided for each hand. The design creates an ergonomic
platform for hand placement, which is particularly effective for
stroke patients who frequently have difficulty spreading their
hands. The pyramid base 37 includes a magnetic field-generating
source 36, e.g., a magnet, that is adapted to activate a Hall
effect sensor that is disposed in the palms of the input device
gloves. Hall effect activation allows hand placement on the base 37
to be repeatable within an acceptable tolerance proportional to the
resolution of the sensor 36, while allowing the base 37 to be
passive, i.e., wireless and not requiring power.
[0174] The simplicity of design, ergonomic shape and Hall effect
activation are advantages of the pyramid base 37.
Disadvantageously, two bases 37 are needed--one for each
hand--requiring two dead reckoning signals and, furthermore, the
sensor disposed in the glove input device may cause discomfort.
Three-dimensional Graphics Engine
[0175] A 3D graphics engine is a library of subroutines for 3D
rendering and game development. The 3D models and graphics 83
portion populates the engine code and follows the rules of the game
engine 82. The scripting code 84 in the game engine 82 controls
many of the low level features, e.g., physics and display, and,
preferably, is written in Python.TM. Script. More preferably, the
scripting code 84 can overwrite some or all low-level game engine
code while also providing unique features to the 3D models 83.
[0176] Software demands for the MUVER 90 are both specific and
advanced. Indeed, any development platform selected requires
network capabilities and 3D graphics. Preferably, the MUVER 90
includes a software package that is simple both to implement and to
change and that, also, is capable of providing the features that
the MUVER 90 requires, e.g., network coding, 2D/3D rendering, and
so forth. One possible software option for the MUVER 90 is to use a
3D graphics engine for most of the code and a scripting
language to program the various scenes and the use of the
SmartGlove.TM. 92.
[0177] There are many commercially-available game engines that have
varying levels of programming sophistication. Five such engine
solutions that could be integrated into the MUVER 90 include:
Panda3D, Blender, Source and Hammer, XNA.RTM., and Flash.RTM.. Each
solution has advantages and disadvantages with respect to the other
solutions.
[0178] The Panda3D graphics engine was originally created by Disney
but is currently owned by Carnegie Mellon University of Pittsburgh,
Pa. The software integrated into the graphics engine is open
source, which is to say that it is free to the public for download
for commercial and non-commercial use and can be freely modified.
The Panda3D graphics engine uses the Python.TM. scripting
(programming) language and is written in object-oriented, C++
libraries and modules. Panda3D has comprehensive support for
networking that allows for rich virtual interactions between
patient/users. Data input methods for Panda3D advantageously
include direct input from Head Mounted Displays (HMDs) and VR
trackers.
[0179] Panda3D remains a preferred platform because of its license
agreement as well as its capabilities. However, Panda3D requires
additional middleware which increases cost and complexity.
Middleware is a term of art used to define a software "bridge"
between hardware such as between an input device 92 and the
software of the MUVER. As a result, the middleware must be
compatible with the input device, i.e., the SmartGlove.TM. 92, but
must also be able to generate a compatible output signal to the
chosen development software.
[0180] Potential middleware examples include software development
tools, e.g. SWIG, that are adapted to connect programs written in a
first programming language, e.g., C and C++, with a variety of
high-level programming languages. Typically, SWIG is used to create
high-level, interpreted or compiled programming environments, and
user interfaces. SWIG is used with different types of
languages--including common scripting languages such as Python.TM.,
which is the scripting language of Panda3D. Advantageously, SWIG is
open source and, hence, may be freely used, distributed, and
modified for both commercial and non-commercial use.
[0181] SWIG, however, is a difficult program to work with and very
unstable. Notwithstanding, SWIG enables programmers to write
programming methods for the SmartGlove.TM. 92 in C/C++ and,
subsequently, to "wrap" them so that they can be read in Python.TM.
programming code. This feature allows the SmartGlove.TM. 92 to be
usable in the BlenderGE as a set of Python.TM. scripts.
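For purposes of illustration and not limitation, a SWIG interface file for such a C/C++ glove driver might look like the following; the header name and function signatures are assumptions, not the actual SmartGlove.TM. API:

```
/* smartglove.i -- hypothetical SWIG interface for a C glove driver */
%module smartglove
%{
#include "smartglove.h"   /* assumed C header; names are illustrative */
%}

/* Wrapped C functions become callable from Python as
   smartglove.open_glove(), smartglove.read_bend(), etc. */
int  open_glove(void);
int  read_bend(int finger);    /* raw bend-sensor value for one finger */
void close_glove(void);
```

Running `swig -python smartglove.i` generates wrapper code that, once compiled against the C sources, yields a module importable from Python.TM. scripts.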
[0182] A second commercially-available middleware is GlovePIE
(Glove Programmable Input Emulator), which was originally developed
to emulate joystick and keyboard input. Succinctly, GlovePIE
emulates movements made with the SmartGlove.TM. 92 using software
macros and, further, binds the movements to an input device such as
a keyboard or a joystick. As a result, use of the SmartGlove.TM. 92
as an input device is extended to any program that is traditionally
controlled using a keyboard or a joystick. Disadvantageously, only
certain joystick/keyboard movements are emulated by the
SmartGlove.TM. 92, which limits the number of movements that would
be available to a practitioner and/or a patient/user.
[0183] Advantageously, however, use of GlovePIE is no longer
confined to VR gloves; but, rather, now supports emulating a myriad
of inputs, using a myriad of devices, e.g., Polhemus, Intersense,
Ascension, WorldViz, 5DT, and eMagin products. GlovePIE may also
control MIDI or OSC output.
[0184] Hardware supported by the GlovePIE includes: [0185] Nintendo
Wii Remote [0186] NaturalPoint (or eDimensional) TracklR,
OptiTrack, SmartNav [0187] FakeSpace Pinch Gloves (9600 baud by
default, but can be changed) [0188] Concept 2 PM3 rowing machines
(ergo or erg) [0189] All joysticks or gamepads recognized by
Windows [0190] Parallel port gamepads (with PPJoy) [0191] All
keyboards [0192] Mice with up to 5 buttons and 2 scroll wheels
[0193] Most microphones (don't have to be high quality) [0194] Most
MIDI input or output devices [0195] Essential Reality P5 Glove.TM.
[0196] 5DT Data Glove (all versions) [0197] eMagin 2800 3D Visor
HMD [0198] Polhemus trackers (must be set to 115200 baud): IsoTrak
II, FasTrak, Liberty, Patriot, Liberty Latus [0199] Ascension
trackers: Flock of Birds, MotionStar, etc. [0200] Intersense
trackers: InterTrax, InertiaCube, IS-300, IS-600, IS-900, IS-1200,
etc. [0201] WorldViz PPT trackers (all versions) [0202] GameTrak
(only as a joystick, no direct support)
[0203] Yet another commercially-available middleware product is
OpenTracker, which is manufactured by Argent Data Systems of Santa
Maria, Calif. OpenTracker is another open source product that is
adapted to create a full-featured tracking software package that
can be integrated into any software as a device library. The major
advantages of OpenTracker are that it is the most full-featured and
robust middleware package and that it natively integrates into C/C++.
Hardware supported by the OpenTracker includes: [0204] A.R.T.
optical tracker (Windows, Linux) [0205] ARToolkit(Windows, Linux)
[0206] ARToolkitPlus(Windows, Linux) [0207] Parallel Port (Windows,
Linux) [0208] CyberMouse(Windows) [0209] Origin Instruments
DynaSight (Windows, Linux) [0210] Ascension Flock of Birds
(Windows, Linux) [0211] Polhemus FastTrak/IsoTrak (Windows, Linux)
[0212] Garmin GPS devices like eTrex and similar (Windows) [0213]
ICube X (Windows) [0214] Intersense InterTrax2 USB (Windows) [0215]
Joysticks (Windows) [0216] evdev interface (mouse, keyboard,
joysticks) (Linux) [0217] Barco MagicY (Windows) [0218] Midi
(Windows) [0219] gTec gMobilab/gMobilab+ (Windows, Linux) [0220]
P5Glove (Windows) [0221] Phantom Omni (Linux requires 3rd party
driver, Windows?) [0222] 3DConnexion SpaceDevice (Windows; requires
3Dxware SDK) [0223] 3DConnexion SpaceMouse (Plus USB) (Windows;
requires 3Dxware SDK) [0224] Ubisense Tracker (Windows, Linux)
[0225] Polhemus UltraTrak (Windows, Linux) [0226] Wacom Graphire
(Windows) [0227] Wii (Windows) [0228] XSense (Windows, requires MT9
SDK from XSens)
[0229] Finally, the Virtual-Reality Peripheral Network (VRPN) is a
set of classes within a library and a set of servers that are
designed to implement a network-transparent interface between
application programs and the set of physical devices, e.g.,
trackers and so forth, used in a virtual-reality system.
Conceptually, a PC or other host controller is disposed at each VR
station to control the peripherals, e.g., tracker, button device,
haptic device, analog inputs, sound, and the like.
[0230] VRPN provides middleware connections between the
application(s) and the hardware devices using an appropriate
class-of-service for each type of device sharing the link. The
application remains unaware of the network topology.
Advantageously, VRPN can be used with devices that are directly
connected to the system that is executing (running) the
application, using separate control programs or running the
applications as a single program.
[0231] VRPN also provides an abstraction layer that makes all
devices of the same base class look the same. For example, all
tracking devices are made to look like they are of the type
vrpn_Tracker. As a result, all trackers will produce the same types
of reports. At the same time, it is possible for an application
that requires access to specialized features of a certain tracking
device, e.g., telling a certain type of tracker how often to
generate reports, to derive a class that communicates with this
type of tracker. If this specialized class were used with a tracker
that did not understand how to set its update rate, the specialized
commands would be ignored by that tracker.
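The abstraction-layer idea described above can be sketched in plain Python for purposes of illustration; this mimics the role of vrpn_Tracker only and is not the actual VRPN C++ API:

```python
class Tracker:
    """Base class: every tracker, whatever its hardware, reports the
    same way (analogous to vrpn_Tracker in VRPN)."""
    def report(self):
        raise NotImplementedError

    def set_update_rate(self, hz):
        """Specialized command; silently ignored by trackers that do
        not understand it, as described for VRPN above."""
        pass

class BasicTracker(Tracker):
    """A tracker with no rate control; rate commands are ignored."""
    def __init__(self, pose):
        self.pose = pose
    def report(self):
        return {"position": self.pose[:3], "orientation": self.pose[3:]}

class RateTracker(BasicTracker):
    """A tracker that does understand update-rate commands."""
    def __init__(self, pose):
        super().__init__(pose)
        self.hz = 60
    def set_update_rate(self, hz):
        self.hz = hz
```

An application written against the base class works with either device; only code that needs the specialized feature touches the derived class.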
[0232] Current VRPN system types include: Analog, Button, Dial,
ForceDevice, Sound, Text, and Tracker. Each type abstracts a set of
semantics for a specific device type. There are one or more servers
for each type of device, and a client-side class to read values
from the device and control its operation. VRPN is the preferred
middleware solution because of its capabilities and robustness,
which the other solutions lack. Historically, VRPN has also been
used successfully with Panda3D.
[0233] Returning to the game engines, Blender is another open
source software package manufactured by the Blender Foundation that
focuses on digital modeling and animation. An integrated game
engine called BlenderGE uses Python.TM. as a scripting
(programming) language. The main advantages of Blender are that the
BlenderGE is included in the software and that, because Blender is
used as a digital animation package, it has many features for using
armatures, e.g., a human hand.
[0234] The Source Game Engine and the Hammer Level Editor ("Source
and Hammer") are manufactured by the Valve Corporation, a video
game company. Source and Hammer is a leading character animation
graphics engine whose main advantages are an advanced physics engine
and multiple patient/user capabilities. Source and Hammer can be
used for non-commercial purposes.
[0235] XNA.TM. is a product of Microsoft.RTM. Corporation of
Redmond, Wash. XNA.TM. uses C# code and a shared library to
facilitate creation of games and simulations. The biggest
advantages of XNA.TM. are that the community is very knowledgeable
and it is designed for making multi-user games. One caveat,
however, is that XNA.TM. does not have a bundled game engine.
Accordingly, using XNA.TM. software to create the MUVER would
require much more programming than using one of the software
packages associated with a game engine.
[0236] Finally, Flash.RTM. manufactured by Adobe Systems
Incorporated of San Jose, Calif. is primarily used for Web sites
and for Internet applications. The biggest advantage of using
Flash.RTM. is that most applications can be accessed from any Web
browser, which means that installation is not mandatory. Like
XNA.TM., however, Flash.RTM. does not have a bundled game engine
and requires more programming than other software solutions. The
multi-user options available for Flash.RTM., however, are
comparable to XNA.TM. and Panda3D but better than what is available
for Blender.
Testing Results of Concept
[0237] FIG. 27 shows an exemplary virtual environment (VE) training
"scene" (or exercise) for competitive virtual interaction between
two patient/users. The exemplary scene is designed to allow
patients/users to practice both an active grasp and a maintained
grasp. An active grasp involves a pinching action using two to four
fingers and the thumb and a release action. A maintained grasp
involves active supination, which is to say, a hand further
rotating toward a palm-up position. These movements are deemed by
experts essential to improved hand function in stroke patients.
[0238] In this VE training "scene", a patient/user must move
his/her hand and fingers, which are operationally coupled to the
input device, first, to grasp or grip a virtual lid 67 (FIG. 27A)
and then to lift the virtual lid 67 from a virtual pot 68 (FIG.
27B). Once the patient/user has successfully grasped and lifted the
virtual lid 67, he/she supinates the virtual lid 67 to a palm-up
position, all the while maintaining his/her grasp on the virtual
lid 67. In the final stage of the scene, the patient/user pronates
the virtual lid 67 to a palm-down position; returns the virtual lid
67 back to the virtual pot 68; and releases his/her grasp on the
virtual lid 67 (FIG. 27D).
[0239] Preferably, the scene application is adapted so that a
visual signal is generated upon successful completion of any or all
of the movements or motor activity associated with the stages. For
example, once the patient/user is satisfactorily able to grip the
virtual lid 67 (FIG. 27A), the virtual lid 67 can change color,
e.g., from grey to blue. Further, once the virtual lid 67 is lifted
from the virtual pot 68 (FIG. 27B) the virtual lid 67 can change
color again, e.g., from blue to green. Similarly, upon successful
grasp plus supination and transport (FIG. 27C), the virtual lid 67
can change color again, e.g., from green to red. Advantageously,
the supination threshold for success can be pre-set, e.g., 45
degrees, and can be further adjusted to require a greater or lesser
result. Finally, following pronation and transport (FIG. 27D), the
virtual lid 67 can return to its original color, e.g., grey.
Upon completion of this cycle of stages, the trial is counted as a
"success".
[0240] If, however, the patient/user loses his/her grasp while
raising, supinating, and/or pronating the virtual lid 67, the
application can be adapted to cause the virtual lid 67 to return
automatically to the original position on the virtual pot 68 and to
return to its original color, e.g., grey.
[0241] The movement during each of the phases can be timed and
counted separately. This provides more feedback to patients/users
about their performance if the entire trial was not successful. The
ability to count successful phases and trials and record time for
each can be incorporated in the design for the scoring function
described in greater detail below. Thus, each discrete data element
can be displayed separately to provide feedback about performance
either during the session or after. These discrete elements count
the number of successes for each phase; count the number of
successful trials, which is to say, that all phases are completed
successfully; record the time for each phase; and record the time
for each trial. Time can also be displayed as a mean for a block of
trials, with the number of trials in the block adjustable.
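The per-phase and per-block scoring described above can be sketched as follows; the trial data structure and sample durations are illustrative assumptions:

```python
from statistics import mean, stdev

def summarize(trials):
    """Per-phase and total statistics for a block of trials.
    Each trial is a dict mapping phase name -> duration in seconds;
    returns (mean, standard deviation) pairs."""
    phases = trials[0].keys()
    out = {}
    for p in phases:
        times = [t[p] for t in trials]
        out[p] = (mean(times), stdev(times))
    totals = [sum(t.values()) for t in trials]
    out["total"] = (mean(totals), stdev(totals))
    return out

# Hypothetical two-trial block for the grasp/turn/return task:
trials = [{"grasp": 1.5, "turn": 1.8, "return": 2.0},
          {"grasp": 1.7, "turn": 1.8, "return": 2.2}]
stats = summarize(trials)
```

Each discrete element (per-phase mean, per-trial total) can then be displayed during or after the session, as described above.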
[0242] FIG. 28 summarizes the results of testing of six (6) healthy
subjects (four males, two females; five right-handed, one
ambidextrous) who performed competitive interaction using the MUVER
scene shown in FIG. 27. Each patient/user donned and calibrated the
SmartGlove.TM. and was instructed in the hand rehabilitation
movement depicted in the scene. Each subject was provided with up
to five minutes to practice and become accustomed to working in the
virtual environment.
[0243] Subsequently, subjects were asked to complete ten movements
as rapidly as possible. A short rest was given between each of the
ten trial movements. The total time and the time for each phase
were recorded for each trial and for each subject. The mean
durations 4 and associated standard deviations 5 across blocks and
subjects are summarized in FIG. 28 for each sub-phase of the
movement task (grasp, turn, and return) and for the total task.
[0244] Start to completion of grasping the lid 67 averaged
1.6.+-.0.7 sec. Grasp with turn and transport to right averaged
1.8.+-.0.6 sec. Pronation, lid return, finger extension to release
grasp averaged 2.1.+-.0.9 sec. In summary, the mean time for the
total task was 5.5.+-.1.6 sec.
[0245] Experimenters and subjects noted that the mapping of real
world to virtual environment movements was not completely accurate,
which probably accounts for times that are somewhat slower than
expected in healthy subjects. This inaccuracy, however, could be
due to a hardware problem in the glove position detection and/or
due to the software that maps position and orientation data to the
virtual scene elements.
[0246] Referring to FIG. 29, a more complete MUVER System 90 is
shown. As previously mentioned, the practitioner's interface 96
expands data collection of the basic system in two major ways:
providing real-time feedback and modifying the virtual environment.
Indeed, a practitioner can be a spectator and watch the actions and
interactions of the patients/users 91. The practitioner also has
the ability to privately or openly provide feedback to a
patient/user 91 or to multiple patients/users 91 at once. As
another form of feedback, the
practitioner can change or control the MUVER system 90 based on the
actions and interactions of the patients/users 91. Modifying the
MUVER 90 has several advantages including changing the level of
difficulty to better suit rehabilitation and also directing the
MUVER 90 to facilitate certain interactions between patients/users
91.
[0247] The more complete system 90 includes a Rapid Prototyping
feature 85. Rapid Prototyping (RP) is a manufacturing method that
allows custom objects to be created from a .STL file quickly and
for low cost. With this feature, the MUVER 90 can be populated by
models that can be exported as .STL files. Consequently, a
practitioner can borrow an object from the real world; recreate it
in the MUVER for virtual rehabilitation; and then create it using
RP for use in actual rehabilitation exercises 86.
Object Examples
[0248] A first exemplary MUVER scene that can be presented
virtually in accordance with the present invention is a simple,
Single Degree of Freedom Knob (SK). In the real world, an SK is an
Electro-Rheological Fluid (ERF) based device that can be
manipulated by the patient/user in a single degree of freedom,
e.g., clockwise or counterclockwise rotation about an axis, by
rotating a variably-resistive knob. The MUVER system 90 allows a
practitioner to place a patient/user in a virtual environment
containing an SK device in the comfort and convenience of the
patient/user's home. From a remote site, the practitioner is also
able to customize the resistance level of the real world knobs for
a particular patient/user.
[0249] A virtual knob 87 that might be shown in a patient/user's
virtual environment is shown in FIG. 36. The MUVER design for the
virtual knob 87 is meant to be for one or two patient/users. A
first patient/user manipulates his/her hand and fingers in the real
world sufficiently to cause the circle 88 to rotate about an axis
in either direction in the virtual environment. If there is a
second patient/user, he/she likewise manipulates his/her hand in
the real world sufficiently to cause the rounded square 89 to
rotate about the axis in the same direction in the virtual
environment. If there is no second patient/user, the practitioner
or, alternatively, a software program can control and manipulate
the rate of rotation of the rounded square 89. Control of the
square 89 by a non-patient/user can be performed using another
knob, a rehabilitation device, a keyboard, a mouse, and the
like.
[0250] The simplistic MUVER SK scene can be controlled to provide
competitive, counter-operative, or mixed virtual interactions. For the
competitive interaction, the time it takes each patient/user to
catch, i.e. to "tag", the square 89 during a pre-established amount
of time can be recorded and compared. Another possible competitive
interaction is, instead, to record the number of "tags" during a
pre-established period of time for each patient/user.
[0251] An exemplary counter-operative interaction can include awarding points
to the first, circle patient/user for every "tag" of the square 89
and awarding points to the rounded square patient/user for avoiding
being tagged over a set increment of time. An exemplary mixed
interaction can include two patients/users comparing their scores
and times with another pair of patients/users or with historical
scores of the two.
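The tag-based scoring for the SK scene can be illustrated with a short sketch. The following Python function is an illustrative example only, not the claimed implementation; the sampled angular representation and the five degree tag tolerance are assumptions made for the example.

```python
def count_tags(circle_deg, square_deg, tol_deg=5.0):
    """Count discrete "tags": samples where the first patient/user's
    circle comes within tol_deg of the square's angular position, counting
    a new tag only after the circle has first moved away again."""
    tags = 0
    in_contact = False
    for c, s in zip(circle_deg, square_deg):
        # shortest angular distance between the two positions, in degrees
        diff = abs((c - s + 180.0) % 360.0 - 180.0)
        if diff <= tol_deg and not in_contact:
            tags += 1
            in_contact = True
        elif diff > tol_deg:
            in_contact = False
    return tags
```

For the competitive interaction, each patient/user's tag count over a fixed period could be compared; for the counter-operative interaction, points could instead be awarded to the square patient/user for each interval survived untagged.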
[0252] Scene themes, such as the SK example above, can play a very
important role in the MUVER design. The customizable knob lends the
device to many possible real scenes that can be made into virtual
ones. The availability of real life scenes in a virtual environment
can be beneficial because a particular patient/user may have
performed the task regularly in the real world and, hence, already
understands the movements needed to successfully complete it.
[0253] Possible real life scenes to use in a MUVER for the SK can,
for purposes of illustration and not limitation, include:
[0254] Opening a door knob
[0255] Unlocking a door
[0256] Turning a car's ignition
[0257] Opening a jar
[0258] Using a rotary phone
[0259] Manipulating a thermostat
[0260] Reeling in a fish
[0261] Screwing in a light bulb
[0262] Opening a bottle of wine
[0263] Turning a stove on and off
[0264] Juicing a lemon or orange
[0265] Stirring coffee or soup
[0266] Scene themes that parallel events with which the
patient/user is not familiar have a steeper learning curve because
the patient/user may not know how to complete the movements
successfully. For example, a non-fisherman may not know how to reel
in a fish, or a teetotaler may not know how to turn a corkscrew to
open a bottle of wine. Notwithstanding, fictional scene themes have
the advantage of being made expressly for exercising a certain
movement or knob design. The prototype MUVER proposed here is an
example of a fictional theme, but the idea of tag is a common play
mechanic with which patients/users may be familiar.
[0267] Referring to FIG. 37, a diagram of a virtual environment for
a Single Degree of Freedom Hand Device (SHD) is shown. An SHD is a
passive ERF device and, commonly, a research device meant for use
in MRI machines.
[0268] The single degree of freedom is linear and has a
pre-established, e.g., three inch, stroke. As a result, the
exercise routine the patient/user performs is simply grasp and
release. Challenges include noise from the MRI machine, simplicity
of the device, and using mirrors for the patient/user to see the
computer monitor displaying the MUVER graphics. The real world SHD
has many unique design aspects, which to the greatest extent
possible are transferred over to the MUVER virtual environment
design.
[0269] The scene theme in FIG. 37 replicates the real world act of
inflating a balloon 85 using a hand pump 86. This particular
exemplary scene theme advantageously provides an objective that is
simple and intuitive. Additionally, because the progressive
inflation of the balloon presents the patient/user with feedback,
no other feedback needs to be provided.
[0270] An advantage of this MUVER scene theme is that it can
facilitate all four kinds of multiple patient/user interactions:
competitive, cooperative, counter-operative, and mixed. The competitive
interaction would be conducted by timing each patient/user for a
pre-established number of pumps that will completely inflate the
balloon 85. When multiple pumps 86 control the rate of inflation of
the same balloon 85, a cooperative interaction can be accomplished.
This type of interaction can be further explored by controlling the
rhythm of pumps from each patient/user or having a set order that
patients/users must pump in order to inflate the balloon 85.
[0271] By having the actions of one patient/user inflate the
balloon 85 while the actions of a second patient/user concurrently
deflate it, one can provide counter-operative interaction. Finally,
a mixed interaction using this MUVER design has the most depth and
the greatest possibility, e.g., by having multiple patients/users
inflate the same balloon while evaluating the relative percentages
that each inflated. The patients/users could also be required to
follow an inflation order or rhythm and could be judged on their
respective accuracy at following it.
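The balloon-pump interactions described in paragraphs [0270] and [0271] can be sketched as a small shared-state model. This is an illustrative sketch only, not the claimed implementation; the class name, the volume units, and the 100-unit capacity are assumptions for the example.

```python
class Balloon:
    """Shared balloon for multiple patient/user pump exercises.
    Positive pump amounts inflate the balloon; negative amounts deflate it."""

    def __init__(self, capacity=100.0):
        self.capacity = capacity
        self.volume = 0.0
        self.inflated_by = {}  # per patient/user total inflation

    def pump(self, user, amount):
        # clamp the volume to the physically sensible range
        self.volume = max(0.0, min(self.capacity, self.volume + amount))
        if amount > 0:
            self.inflated_by[user] = self.inflated_by.get(user, 0.0) + amount

    def inflation_shares(self):
        """Relative percentage each patient/user inflated (mixed interaction)."""
        total = sum(self.inflated_by.values())
        return {u: v / total for u, v in self.inflated_by.items()} if total else {}
```

In this sketch, the cooperative interaction has all patients/users pumping positive amounts into the same balloon, while the counter-operative interaction has one patient/user pumping negative amounts against another.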
[0272] As another example, an Active Hand Device (AHD) is a
two-degree-of-freedom, active ERF device. An exercise the patient/user
does with the AHD includes the linear degree of freedom that the
patient/user performs with the SHD as described above as well as a
supination/pronation rotation similar to that of the SK.
Advantageously, the AHD is also an active device, which is to say
that the AHD can provide variable resistance and push back against
the patient/user to enhance the experience. The
major design considerations for such a device include having a
customizable real world design so that the practitioner can choose
to use both degrees or a single degree of freedom.
[0273] Referring to FIG. 38, a diagram of the prototype MUVER for
an AHD is shown. The patient/user is represented as the circle 69.
The patient/user manipulates the AHD to move the circle 69 around a
track 29 that can be designed by the practitioner. The patient/user
receives feedback in the form of accuracy in movement and also in
the speed with which the circle 69 circumnavigates the track
29.
[0274] Adjustable red lines 28 represent the start and the end of
the track 29. The locations of the red lines 28 can be modified
depending on what the practitioner wants the patient/user to do for
an exercise. The ellipse 27 surrounding the circle 69 is a force
gradient, inside of which the patient/user tries to keep the circle
69 as it moves around the track 29.
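The accuracy feedback for the force-gradient ellipse 27 could be computed as the fraction of sampled positions at which the circle 69 stayed inside the ellipse. A minimal sketch, assuming 2-D sampled positions and ellipse semi-axes a and b (all names hypothetical):

```python
def inside_ellipse(px, py, cx, cy, a, b):
    """True when point (px, py) lies inside the ellipse centered at
    (cx, cy) with semi-axes a (horizontal) and b (vertical)."""
    return ((px - cx) / a) ** 2 + ((py - cy) / b) ** 2 <= 1.0

def track_accuracy(positions, centers, a, b):
    """Fraction of samples where the circle stayed inside the moving
    force-gradient ellipse as it traveled around the track."""
    hits = sum(
        inside_ellipse(px, py, cx, cy, a, b)
        for (px, py), (cx, cy) in zip(positions, centers)
    )
    return hits / len(positions)
```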
[0275] For example, exercises can be made more or less difficult
based on the hand's position relative to the shoulder, and movement
tests can be designed to assess extremity coordination. Thus, the
position/orientation feature is a valuable component to the
rehabilitation package. The system is designed to work with two
gloves simultaneously, to allow patients/users 91 to use their
capable hand to interact in exercises along with their disabled
one.
[0276] This is not to say that the mapping of real world movement
must necessarily correspond to a virtual world movement, i.e.,
"direct mapping". If the goal is rehabilitation, then whichever
virtual "scene" best motivates a patient/user to expedite the
rehabilitation process or reach rehabilitation milestones is the
better choice.
Accordingly, "abstract mapping" by which real world movement
produced by the patient/user differs from the movement of the
virtual world object(s) is also possible with the invention as
claimed. For example, if the patient/user movement desired is wrist
flexion and extension, a "direct mapping" scene may depict a hand
waving while an "abstract mapping" scene may equate the
flexion/extension to the movement of a cartoon animal.
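The distinction between "direct mapping" and "abstract mapping" can be illustrated with a short sketch. The function names, the 60 degree full-flexion range, and the cartoon-animal jump parameter are hypothetical illustrations for this example, not details from the claims.

```python
def direct_mapping(wrist_flexion_deg):
    # direct mapping: the virtual hand waves by exactly the measured angle
    return {"hand_wave_deg": wrist_flexion_deg}

def abstract_mapping(wrist_flexion_deg, full_range_deg=60.0):
    # abstract mapping: the same wrist flexion instead drives a cartoon
    # animal's jump, normalized to a 0..1 animation phase
    phase = max(0.0, min(1.0, wrist_flexion_deg / full_range_deg))
    return {"animal_jump_phase": phase}
```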
Patient/User Scoring and Teacher Models
[0277] A patient/user's performance and other data are stored locally
and, furthermore, can be transmitted to a third party, e.g., a
clinical provider, via the network. Performance data, e.g., hand
and finger kinematics, can be used to assess progress over time,
the level of impairment, and so forth. "Scoring" connotes reduction
of performance data into a format that is easily usable to
determine performance and progress. In particular, the scoring data
is easily formatted into graphics, spreadsheets, summaries, and so
forth.
[0278] For example, FIGS. 40A-D show a patient/user's hand being
constrained in 45 degree and 90 degree orientations, for measuring
voltage as a function of joint bend of the MCP (FIGS. 40A and 40B)
and the PIP (FIGS. 40C and 40D). Patient/user movement data for
FIGS. 40A and 40B are displayed in a bar graph in FIG. 41A, which
separates the data by finger, i.e., index finger, middle finger,
and ring finger, and by the angle of constraint, i.e., 0, 45
degrees, and 90 degrees. Patient/user movement data for FIGS. 40C
and 40D are displayed in a bar graph in FIG. 41B, which also
separates the data by finger, i.e., index finger, middle finger,
and ring finger, and by the angle of constraint, i.e., 0, 45
degrees, and 90 degrees. In both instances, at a glance, one can
determine from the bar graphs that each of the joints of the
patient/user can be moved.
[0279] "Scoring" can be performed per "scene" or exercise, per
phase within a "scene" or can be compiled over multiple scenes.
"Scoring" can measure, for example, a number of repetitions, a
magnitude of motion, speed of performance, speed of performance of
a phase of a scene, accuracy of movement, and so forth.
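The scoring measures listed above (repetitions, magnitude of motion, speed) could be reduced from a sampled joint-angle trace roughly as follows. This is an illustrative sketch; the repetition threshold and the trace representation are assumptions for the example.

```python
def score_trace(angles_deg, times_s, rep_threshold_deg=30.0):
    """Reduce a sampled joint-angle trace to simple scoring measures:
    repetition count, range of motion, and mean angular speed."""
    reps = sum(
        1
        for prev, cur in zip(angles_deg, angles_deg[1:])
        if prev < rep_threshold_deg <= cur  # upward threshold crossing
    )
    rom = max(angles_deg) - min(angles_deg)  # magnitude of motion
    duration = times_s[-1] - times_s[0]
    total_travel = sum(
        abs(cur - prev) for prev, cur in zip(angles_deg, angles_deg[1:])
    )
    mean_speed = total_travel / duration if duration else 0.0
    return {"repetitions": reps, "range_deg": rom, "mean_speed_deg_s": mean_speed}
```

Scores reduced in this way are easily formatted into the graphics and spreadsheets described above.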
[0280] Optionally, the system can also include a "teacher model"
capability, which provides the ability to record patient/user
performance for analysis and later playback. During playback,
patient/user errors can be highlighted and made known to the
patient/user and correct performance can be demonstrated. U.S. Pat.
No. 5,554,033 discloses a virtual teacher and is incorporated herein
by reference in its entirety.
Amplified Feedback and Adaptive Design
[0281] Another advantage of the present system is a library of
"scenes" or exercises, each scene being used for a discrete purpose
or functional goal and being adjustable. More specifically, each of
the "scenes" can be programmed to adjust the degree of difficulty
of the scene. Indeed, defining the values of a set of parameters,
e.g., speed of motion, magnitude of motion, smoothness of motion,
hand orientation during motion, and the like, enables medical
personnel, physical therapists, and the like to tailor scenes for a
discrete patient/user. Hence, collectively, the "scene" database
provides a variety of purposes and functional goals.
[0282] Another feature of the present system is amplified feedback
by which real world movement can be discretionally amplified prior
to virtual mapping. For example, if a patient/user only bends
his/her fingers ten degrees, the signal can be amplified by a
factor of five so that the virtual movement shows a 50 degree
flexure. In this manner, amplified feedback can be used as a carrot
to encourage patients/users.
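The ten-degree to 50 degree example above amounts to applying a gain to the measured signal before virtual mapping. A minimal sketch; the saturation cap is an added assumption so the amplified angle stays anatomically plausible, and is not part of the example in the text.

```python
def amplify_flexion(real_deg, gain=5.0, cap_deg=90.0):
    """Amplify a measured flexion angle before virtual mapping, so small
    real movements produce visibly larger virtual movements."""
    return min(real_deg * gain, cap_deg)
```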
[0283] Those skilled in the art will appreciate that the simplistic
and rudimentary scene themes described above are for illustrative
purposes only. By mixing a real world scene theme with fictional
scoring to facilitate multiple patient/user interactions, the MUVER
design proves to be very robust.
* * * * *