U.S. patent application number 12/311025 was filed with the patent office on 2010-01-21 for method and apparatus for providing personalised audio-visual instruction.
Invention is credited to Richard John Baker.
Application Number | 20100015585 12/311025 |
Document ID | / |
Family ID | 39324013 |
Filed Date | 2010-01-21 |
United States Patent
Application |
20100015585 |
Kind Code |
A1 |
Baker; Richard John |
January 21, 2010 |
Method and apparatus for providing personalised audio-visual
instruction
Abstract
A method of and apparatus (10) for providing a personalized
audio-visual instructional aid for assisting a person (11) to
emulate preferred positions and/or movements in undertaking an
activity by capturing position and/or movement data of the person
(11) or an object associated with and controlled by the person
undertaking the activity using position and/or movement sensing
devices such as MEMS sensors (13) on the person (11) or object and
using a computer (16) to analyze and compare the captured data with
pre-stored data relating to preferred positions and/or movement in
undertaking the activity and generating a visual presentation (19)
based on the differences between the captured position and/or
movement of the person or object and the preferred positions and/or
movement and adding to the generated visual presentation, audio
instructional comments relating to the differences.
Inventors: |
Baker; Richard John;
(Queensland, AU) |
Correspondence
Address: |
LITMAN LAW OFFICES, LTD.
POST OFFICE BOX 15035, CRYSTAL CITY STATION
ARLINGTON
VA
22215-0035
US
|
Family ID: |
39324013 |
Appl. No.: |
12/311025 |
Filed: |
October 19, 2007 |
PCT Filed: |
October 19, 2007 |
PCT NO: |
PCT/AU2007/001586 |
371 Date: |
March 17, 2009 |
Current U.S.
Class: |
434/247 ;
434/308 |
Current CPC
Class: |
A63B 2225/50 20130101;
A61B 5/11 20130101; A61B 2503/10 20130101; A63B 24/0006 20130101;
A63B 2220/40 20130101; G09B 5/06 20130101; A61B 5/1114 20130101;
A63B 2220/803 20130101; A63B 2220/53 20130101; A63B 2220/51
20130101; A63B 2024/0012 20130101; A63B 2220/836 20130101; G09B
19/0038 20130101; G06K 9/00342 20130101; A63B 24/0003 20130101;
A61B 5/486 20130101; A63B 24/0075 20130101; G09B 19/0015 20130101;
A63B 71/0622 20130101 |
Class at
Publication: |
434/247 ;
434/308 |
International
Class: |
G09B 5/00 20060101
G09B005/00; G09B 19/00 20060101 G09B019/00 |
Foreign Application Data
Date |
Code |
Application Number |
Oct 26, 2006 |
AU |
2006905938 |
Claims
1. A method of providing a personalized audio-visual instructional
aid for assisting a person to emulate preferred positions and/or
movements in undertaking an activity, said method including the
steps of capturing position and/or movement data of said person or
part of said person or an object associated with and controlled by
said person whilst said person undertakes said activity using
position and/or movement sensing means attached to or worn by said
person or attached to said object, analyzing and comparing said
captured data with pre-stored data relating to preferred
positions and/or movement in undertaking said activity, generating
a visual presentation based on the differences between the captured
position and/or movement of said person or object and said
preferred positions and/or movement and adding to said generated
visual presentation, audio instructional comments relating to said
differences to assist a person to emulate said preferred positions
and/or movement.
2. A method as claimed in claim 1 wherein said captured data
comprises coordinate data in at least two axes.
3. A method as claimed in claim 2 wherein said captured data
additionally comprises orientation data.
4. A method as claimed in claim 1, further including the step of
locally analyzing said data against said prestored data and locally
presenting said audio-visual aid for viewing by said person.
5. A method as claimed in claim 1, further including the step of
wirelessly transmitting the captured data to a remote location at
which the audio-visual instructional aid is prepared.
6. A method as claimed in claim 1, further including the step of
transmitting said audio visual presentation over a communications
network for display on a display screen for direct viewing by the
person undergoing instruction.
7. A method as claimed in claim 1, further including the step of
storing said captured data in a computer database for comparison
with or analysis against prestored data of the preferred positions
and/or movements.
8. A method as claimed in claim 1, further including the step of
prestoring or generating said audio instructional comments in a
database.
9. A method as claimed in claim 1, further including the step of
adding selected comments from the prestored or generated comments
to said visual display in accordance with the comparison and
analysis between the captured data and prestored data.
10. A method as claimed in claim 1 and including the step of
downloading the audio-visual presentation to a storage device for
subsequent viewing.
11. A method as claimed in claim 1, wherein said data is captured
using sensors which incorporate one or more of miniature
gyroscopes, accelerometers, magnetometers, GPS or DGPS sensors.
12. A method as claimed in claim 11 wherein said sensors comprise
Micro-Electro-mechanical System (MEMS) sensors.
13. A method as claimed in claim 11, further including the step of
transmitting said captured data wirelessly from said sensors for
analysis and comparison.
14. Apparatus for providing a personalized audio-visual
instructional aid for assisting a person to emulate a preferred
movement during an activity, said apparatus including position
and/or movement sensing means adapted to be attached to or worn by
said person or part of said person or attached to an object
associated with and controlled by said person for capturing
position and/or movement data of said person or object whilst said
person undertakes said activity, means for analyzing and comparing
said captured data with pre-stored data relating to preferred
positions and/or movement in undertaking said activity, means for
generating a visual presentation based on the differences between
the captured positions and/or movement of said person or object and
said preferred positions and/or movements, and means for adding to
said generated visual presentation, audio instructional comments
relating to said differences to assist a person to emulate said
preferred positions and/or movements.
15. Apparatus as claimed in claim 14 wherein said sensing means
comprise sensors which include miniature gyroscope and/or
accelerometers.
16. Apparatus as claimed in claim 15 wherein said sensors comprise
one or more of MEMS sensors, sensors attached to or incorporated in
clothing to be worn by said person, magnetometers, GPS and DGPS
sensors.
17. Apparatus as claimed in claim 15, further including means for
wirelessly transmitting data directly or indirectly from said
sensors to a remote location for said analyzing and comparison.
18. Apparatus as claimed in claim 14, wherein said analyzing and
comparison means comprises a computer having a database, said
database having pre-stored visual image data representative of said
preferred positions and/or movement and wherein said database is
adapted to store said captured data for comparison with said
pre-stored data.
19. Apparatus as claimed in claim 18 wherein said database further
includes prestored audio instructional comments or is adapted to
generate instructional comments for addition to said visual
presentation.
20. Apparatus as claimed in claim 18, wherein said computer
includes processing means comprising said means for generating said
visual presentation based on the analysis and comparison between
said stored data of the preferred positions or movement and
captured data and wherein said processing means is adapted to add
to said visual presentation, audio instructional comments based on
said analysis and comparison of said data.
21. Apparatus as claimed in claim 18 and including means for
transmitting said audio-visual presentation for display on a
display screen or downloading to a storage device.
22. Apparatus as claimed in claim 18, further including means for
sensing bio-mechanical data of said person undertaking said
activity and means for incorporating said data in said audio-visual
aid.
23. Apparatus as claimed in claim 18, further including
intermediate transceiver means adapted to communicate via a
communication link with said computer and receive said captured
data from said sensing means to convey said data to said computer
for said analysis and comparison with said prestored data and
receive from said computer said audio visual presentation for
display to said person.
24. Apparatus as claimed in claim 23 wherein said transceiver
includes a processor and modem or other signal conversion means to
permit communication with said computer over said communication
link.
25. A method for providing a personalized audio-visual
instructional aid for assisting a person to emulate preferred
actions in undertaking an activity in relation to an object
controlled by said person, said method including the steps of
capturing position and/or movement data of said object whilst said
person undertakes said activity using position and/or movement
sensing means on said object, analyzing and comparing said captured
data with data of preferred positions and/or movements in relation
to said activity, generating a visual presentation based on the
differences between the captured positions and/or movements of said
object and said preferred positions and/or movements and adding to
said generated visual presentation, audio instructional comments
relating to said differences to assist a person to emulate said
preferred actions in said activity.
Description
TECHNICAL FIELD
[0001] This invention relates to a method and apparatus for
providing personalized audio-visual instruction and in particular
to a method and apparatus which uses sensed data coordinates of a
human movement or movement of an object in order to provide the
basis for producing a personalized audio-visual teaching
presentation or aid.
BACKGROUND ART
[0002] Accurate three-dimensional tracking of human movement has
the potential to enable many kinds of human-computer interaction
and vision-based methods promise tracking without encumbrance by
body-mounted apparatus. The latter form of tracking can be based on
the use of optical sensors affixed to a person's body however
sensors of this type can become obscured and therefore accurate
movement sensing is not always possible. Furthermore, setting up of
a movement tracking studio using mocap cameras requires a large
area, is very expensive, and for the organizations that do set up
these facilities, the data collection procedures are long and
complicated. Each person's test can take up a number of hours and
normally requires an engineer (or technician) to be on hand. The
data collected can be processed within minutes to yield numerous
desired parameters for each application, yet it can take days or
weeks for the person being monitored to receive a report on their
performance.
[0003] The problem with 3D human movement capture is that while it
can be very effective for some applications within the confines of
a very expensive studio, remote sensing movement systems have not
been designed which can be used in general public areas where they
are likely to be required such as tennis courts, golf driving
ranges, and gyms or in the home environment.
SUMMARY OF THE INVENTION
[0004] The present invention aims to provide a method of and
apparatus for providing personal audio-visual advice relating to a
movement of a person or object. The audio visual advice may be
based on prerecorded audio advice or may be personalized and based
on input provided by an expert in the field. Other objects and
advantages will become apparent from the following description.
[0005] The present invention thus provides in one aspect, a method
of providing a personalized audio-visual instructional aid for
assisting a person to emulate preferred positions and/or movements
in undertaking an activity, said method including the steps of
capturing position and/or movement data of said person or part of
said person or an object associated with and controlled by said
person whilst said person undertakes said activity using position
and/or movement sensing means attached to or worn by said person or
attached to said object, analyzing and comparing said captured
data with pre-stored data of preferred positions and/or movement in
undertaking said activity, generating a visual presentation based
on the differences between the captured positions and/or movements
of said person or object and said preferred positions and/or
movements and adding to said generated visual presentation, audio
instructional comments relating to said differences to assist a
person to emulate said preferred positions and/or movements.
[0006] The captured data may comprise coordinate data in at least
two axes and preferably three-dimensional data, that is, coordinate
data in three axes (x, y and z). The captured data additionally may
include orientation data (yaw, pitch and roll).
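By way of illustration only (this sketch is not part of the specification, and the field names are assumptions), the captured coordinate and orientation data described above might be represented as a simple timestamped record:

```python
from dataclasses import dataclass

@dataclass
class MotionSample:
    """One captured reading from a body- or object-mounted sensor:
    position in three axes plus orientation (yaw, pitch, roll)."""
    t: float      # capture time in seconds (illustrative)
    x: float      # position coordinates in three axes
    y: float
    z: float
    yaw: float    # orientation angles in degrees
    pitch: float
    roll: float

# Example reading for a single sensor at one instant.
sample = MotionSample(t=0.02, x=0.10, y=1.35, z=0.00,
                      yaw=5.0, pitch=-2.0, roll=0.5)
```

A stream of such records per sensor would supply both the two- or three-axis coordinate data and the optional orientation data the method contemplates.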
[0007] The method may comprise the step of locally analyzing the
data and locally providing the audio-visual instructional aid for
viewing by the person. For example the audio-visual aid may be
presented on a display screen or screens separate from the person
or alternatively incorporated in headwear glasses or goggles worn
by the person and the instructional comments provided via
headphones or earphones either part of the headwear or separate
from the headwear. The data may be analyzed and the audio-visual
presentation prepared in apparatus adjacent to or worn by the
person.
[0008] Alternatively, the method may include the step of wirelessly
transmitting the captured data to a remote location at which the
audio-visual instructional aid based on the prestored data is
prepared and returned to the person for viewing.
[0009] Preferably the captured data is stored in a computer
database for comparison with or analysis against the prestored data
of the preferred positions and/or movements.
[0010] Preferably the audio instructional comments are prestored or
generated in a database and selected comments from the prestored or
generated comments are added to the visual display in accordance
with the comparison and analysis between the captured data and
prestored data.
[0011] The audio instructional comments of the preferred positions
and/or movements may be stored in the form of a plurality of
comments or phrases relating to the activity being undertaken and
to common errors occurring in the activity being undertaken which
can be linked automatically to the results of the analysis and
comparison between the captured data and prestored data. Thus
analysis of the captured data against the prestored data may
indicate different positions of the body by the person in
undertaking the activity as compared to the prestored preferred
positions, and an appropriate comment or phrase may be automatically
retrieved from the database in accordance with the analysis of the
different body positions. The comment or phrase data alternatively
may be generated in the database in response to the analysis and
comparison of the captured data and prestored data.
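The automatic linking of analysis results to prestored comments or phrases described above can be sketched as follows; the phrase library, segment names and tolerance are purely illustrative assumptions, not content of the specification:

```python
# Hypothetical phrase library keyed by body segment (illustrative only).
PHRASES = {
    "left_arm":  "Raise your left arm higher.",
    "right_arm": "Raise your right arm higher.",
    "left_leg":  "Step your left foot out wider.",
    "right_leg": "Step your right foot out wider.",
}

def select_comment(captured, preferred, tolerance=0.05):
    """Compare captured positions against prestored preferred positions
    and return the phrase linked to the segment that deviates most,
    or None when every segment is within tolerance.
    Both arguments map segment name -> (x, y, z) coordinates."""
    worst, worst_err = None, tolerance
    for segment, target in preferred.items():
        got = captured[segment]
        err = max(abs(a - b) for a, b in zip(got, target))
        if err > worst_err:
            worst, worst_err = segment, err
    return PHRASES.get(worst)

# The pupil's left arm is lower than the preferred position.
comment = select_comment(
    captured={"left_arm": (0.2, 1.1, 0.0), "right_arm": (0.2, 1.6, 0.0)},
    preferred={"left_arm": (0.2, 1.6, 0.0), "right_arm": (0.2, 1.6, 0.0)},
)
```

The selected comment would then be added to the generated visual presentation as the audio instructional comment for that difference.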
[0012] The method may further include the step of transmitting the
prepared audio visual presentation from the remote location over a
communications network for display on a display screen for direct
viewing by the person undergoing instruction. Alternatively the
method may include the step of downloading the audio-visual
presentation to a storage device such as a memory chip for supply
to the person for viewing at a later date.
[0013] Preferably the data is captured using sensors which
incorporate miniature gyroscopes and/or accelerometers which
function by measuring position (x, y, and z coordinates), and
orientation (yaw, pitch, and roll) with respect to a reference
point or state. Each sensor provides its own global orientation (3
degrees of freedom) and is physically and computationally
independent, requiring only external communication. The data
however may be captured using other or additional sensors such as
magnetometers, GPS sensors or DGPS sensors.
[0014] The data from the sensors may be communicated directly or
indirectly, via a wireless transceiver, to a host or server computer
for processing, which allows the database of the computer that holds
the prestored data to receive relatively accurate human or object
position or movement data coordinates.
[0015] In another aspect, the present invention provides apparatus
for providing a personalized audio-visual instructional aid for
assisting a person to emulate a preferred movement during an
activity, said apparatus including position and/or movement sensing
means adapted to be attached to or worn by said person or part of
said person or attached to an object associated with and controlled
by said person for capturing position and/or movement data of said
person or object whilst said person undertakes said activity, means
for analyzing and comparing said captured data with pre-stored
position and/or movement data relating to preferred positions
and/or movements in undertaking said activity, means for generating
a visual presentation based on the differences between the captured
positions and/or movements of said person or object and said
preferred positions and/or movements, and means for adding to said
generated visual presentation, audio instructional comments
relating to said differences to assist a person to emulate said
preferred positions and/or movements.
[0016] The method and apparatus used for the personalized
audio-visual teaching presentation enable a person or pupil to more
clearly emulate another person's movement and receive expert advice to
improve their technique without an expert person needing to see the
pupil's movement.
[0017] As referred to above the sensing means most suitably
comprise sensors which incorporate miniature gyroscopes and/or
accelerometers. The sensors may comprise one or more of miniature
motion sensors such as MEMS (Micro-Electro-Mechanical Systems)
sensors, sensors attached to or incorporated in conductive fibres
of clothing or textile fabric which can be worn by a person,
magnetometers, GPS and DGPS sensors. Sensors of the above type
allow the body movement of a person to be captured and stored for
computer analysis.
[0018] A common data logging unit may be associated with the
sensors to record movement data of the sensors over time. The data
logging unit may be a separate unit with which the sensors
communicate by hard wiring or wirelessly or the data logging unit
may be carried by or worn by the person for example incorporated in
a conductive material or fabric which can be embedded or attached
to clothing.
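A minimal sketch of such a data logging unit, assuming only that sensor readings are timestamped, buffered and later handed off in a batch for transmission (class and method names are illustrative, not from the specification):

```python
class DataLogger:
    """Illustrative data logging unit: buffers readings from several
    sensors over time until they are flushed for transmission."""

    def __init__(self):
        self.records = []

    def log(self, sensor_id, t, reading):
        """Record one reading (position tuple) from one sensor at time t."""
        self.records.append((sensor_id, t, reading))

    def flush(self):
        """Return the buffered records, oldest first, and clear the
        buffer, as would occur on wireless transmission to the computer."""
        out, self.records = self.records, []
        return out
```

Whether the unit is a separate device or worn in conductive fabric, the behaviour sketched here is the same: accumulate movement data over time, then release it for analysis.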
[0019] Means may be provided for wirelessly transmitting data
directly or indirectly from the sensors to a remote location for
said analyzing and comparison. Thus wireless transmitters may be
associated with each sensor or with the data logging unit for
transmission of the data.
[0020] The analyzing and comparison means may comprise a computer
having a database/s which has/have stored therein pre-stored visual
image data representative of the preferred positions and/or
movement and wherein the database/s is/are adapted to store the
captured data for comparison with the pre-stored data.
[0021] The database/s may further include prestored comments or may
generate audio instructional comments for addition to the visual
presentation.
[0022] The computer may include processing means comprising the
means for generating the visual presentation based on the analysis
and comparison between the stored data of the preferred positions
and/or movement and captured data and the processing means is
adapted to add to the visual presentation, prestored or generated
audio instructional comments based on the analysis and comparison
of the data.
[0023] Means may be provided for transmitting the audio-visual
presentation for display on a display screen or downloading to a
storage device.
[0024] The apparatus may also include means for sensing
bio-mechanical data of the person undertaking the activity and
means for incorporating the data in said audio-visual aid.
[0025] The apparatus may also include intermediate transceiver
means which is adapted to communicate via a communication link with
the computer and receive the captured data from the sensing means
to convey the data to the computer for analysis and comparison with
the prestored data and receive from the computer the audio visual
presentation for display to said person.
[0026] The transceiver may include a processor and modem or other
signal conversion means to permit communication with the computer
over the communication link.
[0027] The transceiver alternatively may include a database in
which the audio-visual aid may be prepared based on information
stored in its database and then communicate the prepared aid back
to the pupil or person undertaking the activity for display on a
screen. The transceiver in this embodiment may be worn or carried
by the person undertaking the activity so that immediate
instruction may be given.
[0028] The present invention has particular applications to
teachings in sport, so as to help and assist a person emulate a
particular movement or technique. The present invention however may
also be applied to many other applications where an emulation or
overview of a precise movement is required. Thus, the invention may
be applied to various applications in the arts fields, for example
the teaching of dance steps or within the medical rehabilitation
field where precise movements are required to be performed and
expert advice given. Additionally, the present invention may be
used in games to detect the movement of a person or their equipment
whilst playing a game, analyze that movement against a preferred
movement or successful strategies to play the game and show back on
screen how they have performed with audio instructional comments to
help the player understand where their movement differs from an
expert's.
[0029] The present invention may be further applied to assist
drivers to drive a vehicle in a correct and safe manner or to
educate persons to follow a correct route. In this latter aspect,
sensors may be provided on a vehicle to sense position and movement
of the vehicle and the display may be provided in the vehicle and
associated with a computer which contains the pre-stored phrases or
comments relating to operation of the vehicle. Further sensors such
as GPS sensors may be provided on the vehicle to sense position of
the vehicle along a road. Alternatively or additionally, sensors on
the vehicle may receive data signals from transmitters along the
road relating to road restrictions or conditions such as speed
limits. That data may be uploaded into the computer for use in
providing the presentation. The generated presentation may comprise
an audio-visual presentation relating to road conditions or
restrictions at that location presented on the screen with
instructional comments through any suitable audio output
device.
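As an illustrative sketch only of the comment-selection step in this driving application, assuming speeds in km/h and a posted limit received from a roadside transmitter (the function and message wording are assumptions, not the specification's method):

```python
def speed_advice(vehicle_speed_kmh, posted_limit_kmh):
    """Compare the vehicle's sensed speed against a transmitted road
    restriction and return an instructional comment, or None when the
    vehicle is within the limit."""
    if vehicle_speed_kmh > posted_limit_kmh:
        return f"Slow down: the limit here is {posted_limit_kmh} km/h."
    return None
```

In the arrangement described, such a comment would accompany the on-screen presentation of the road conditions or restrictions at that location.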
[0030] In this latter aspect, the present invention provides a
method for providing a personalized audio-visual instructional aid
for assisting a person to emulate preferred actions in undertaking
an activity in relation to an object controlled by said person,
said method including the steps of capturing position and/or
movement data of said object whilst said person undertakes said
activity using position and/or movement sensing means on said
object, analyzing and comparing said captured data with data of
preferred positions and/or movements in relation to said activity,
generating a visual presentation based on the differences between
the captured positions and/or movements of said object and said
preferred positions and/or movements and adding to said generated
visual presentation, audio instructional comments relating to said
differences to assist a person to emulate said preferred actions in
said activity.
[0031] The aforesaid apparatus may be applied for use in the above
method.
BRIEF DESCRIPTION OF THE DRAWING
[0032] Reference will now be made to FIG. 1 of the accompanying
drawing, which illustrates schematically and in block diagram form a
preferred embodiment of the invention comprising apparatus for
producing audio-visual advice to provide an aid to a person
undertaking an exercising movement, in this case a simple jumping
jack exercise motion. It will be appreciated however that whilst the
description of the preferred embodiment is made in relation to this
specific activity, the present invention may be used in any
situation in which coaching for any movement is required.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0033] The apparatus 10 as illustrated is described with reference
to teaching a pupil 11 an exercise movement, in this case a jumping
jack. The apparatus 10 includes a data logging unit 12 associated
with a plurality of Micro-Electro-Mechanical Systems (MEMS)
sensors 13 which can be worn or held by a pupil 11 undertaking an
exercise movement in the home or at a sporting facility or the
like. The sensors 13 which are affixed to different positions on
the pupil's body allow the data coordinates of the pupil's movement
and position to be captured. In the embodiment illustrated, sensors
13 are shown on both arms and on both legs; however, more or fewer
sensors 13 can be used as required depending upon the movement
being sensed. Data from sensors 13 as the pupil is undergoing the
exercise movement is recorded in the data logging unit 12 and a
wireless transmission device 14 is associated with the unit 12 to
send the captured data via a communications link or links 15 to a
host or server computer 16 having a database 17 in which the
transmitted data is stored.
[0034] The database 17 additionally has stored therein data
relating to an expert's stored movement or a model action or
movement. The computer 16 includes a processor 18 which can analyze
the data in the database 17 and produce a visual presentation 19 on
a display screen or monitor 20 to show the pupil 11 differences in
their movement compared with an expert's stored movement or model
action. The database 17 also stores or can generate a series of
prerecorded messages relating to the exercise movement being
undertaken. The processor 18 upon analyzing and comparing the
recorded data of the pupil 11 relating to their movement with the
expert's stored movement can include with the generated visual
presentation, audio instructional comments through an audio output
device such as a speaker 20' on the monitor or display screen 20 so
that the pupil 11 can receive personalized audio-visual coaching or
advice as further described below.
[0035] To capture the data coordinates of the pupil's exercise, and
as referred to above, MEMS sensors 13 with wireless transmission
capabilities are used. For this purpose the pupil 11 could wear,
hold or have attached to their body or equipment they are using, an
array of MEMS sensors 13 which use gyroscopes and accelerometers and
function by measuring the position (x, y, and z coordinates), and
the orientation (yaw, pitch, and roll) with respect to a reference
point or state. Each sensor 13 provides its own global orientation
(3 degrees of freedom) and is physically and computationally
independent, requiring only external communication. More than one
MEMS sensor 13 is preferably used, attached to different parts of
the body, to capture the required movement and position of the
body. Alternatively, the pupil 11 may wear electronic textile
clothing that has sensors and interconnections woven into it
which can capture a person's movement and position at any point of
time. The sensors 13 or clothing however may only be applied to or
worn on part of the body (such as to an arm or leg), the movement
of which is being analyzed.
[0036] The wireless MEMS sensors 13 can directly communicate with
the computer 16 or alternatively the information from the sensors
13 can be recorded in the data logging unit 12 which then transmits
the data to the computer 16 for storage in the database 17 and
subsequent analysis. In the latter case, the data logging unit 12
may be worn on or carried by the person and the MEMS sensors may
wirelessly communicate with the data logging unit 12 or be hard
wired to the unit 12.
[0037] Alternatively, the information from the sensors 13 can as
shown in dotted outline be communicated either directly or through
the data logging unit 12 wirelessly via wireless link 21 to an
intermediate transceiver unit 22 which has its own further two-way
communications link 23 with the host computer 16.
[0038] The transceiver unit 22 may be an independent unit or take
the form of a personal computer, a games console (for example a
PlayStation, Xbox or Wii), a PDA, iPod, mobile phone or the like
having its own central processing unit/s and database, to which
could be added a library of preferred movement images and audio
advice so that the pupil could receive personalized audio-visual
instruction directly in a
communications network 15 or 23 or to the computer 16. The
transceiver unit 22 is preferably of portable construction and may
be worn by the pupil 11.
[0039] Preferably however the transceiver unit 22 is connectable
via modem or other signal conversion device over the communications
link 23 to the computer 16 to send information thereto or receive
information therefrom to allow the pupil 11 to connect to a much
greater library of personalized audio-visual advices which can be
stored in the database 17. The computer 16 in this case will
analyze the data and prepare the final audio-visual presentation
and may be situated anywhere in the world, whereas in the first
instance above the computer 16 may be in the same room as the pupil
or carried with them, as with a PDA or iPod.
[0040] The transceiver unit 22 may also include or accept a storage
medium such as a memory chip, hard drive or the like for directly
making the personalized video or downloading the personalized
audio-video instruction made by the remote computer 16 for viewing
at a later time. Alternatively, the remote computer 16 may use the
transceiver 22 to present the personalized video presentation
directly to a display screen 24 connected to the transceiver 22 for
viewing by the pupil 11 at the location of the transceiver 22.
[0041] The transceiver unit 22 may be software and/or hardware
programmed to enable inclusion of one or more of the following
features:
[0042] 1) The sender's details for later personalizing effects to
the final video presentation, i.e. name and spoken language, and
means for selecting the person with whom the movement of the person
is to be compared.
[0043] 2) The transmitting party's/agent's details for account
keeping purposes and security for the system i.e. digitized
security code.
[0044] 3) Circuitry for increased transmission speed.
[0045] 4) Circuitry and displays for showing details regarding
signaling times to reach the database and regeneration times once
retransmitted back to the pupil/agent remote location, for account
keeping purposes.
[0046] 5) Circuitry and displays for showing the format of
transmitted or requested signals, i.e. PAL, NTSC or SECAM, etc.
[0047] 6) Circuitry for Time, Date, Auto Dial-Redial, Stop, Start,
Receive, Send and possibly an Advertising channel etc.
[0048] 7) Circuitry and displays for showing to whom the
regenerated signals belong and to enable the retransmitted
regenerated signal to be added to the already prerecorded signal,
described below, if desired.
[0049] Where recording of movements takes place, bio-mechanical
data such as weight transference, alignment, grip pressure exerted
during an exercise movement or the like may also be captured from
sensors such as a sensor mat 25 upon which the pupil 11 is
exercising, or other load or impact sensor which for example may be
incorporated in shoes worn by the person or in the handle of an
article being gripped by a person such as in the handle of a golf
club. This data may also be transmitted to the transceiver unit 22
or directly to computer 16 for example via the data logging unit 12
so a pupil can then receive, with the final audio-visual
presentation, graphs showing, for example, weight transference
compared to the expert's movement.
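One way such a weight-transference graph could be derived is sketched below; the sample values, alignment scheme and function name are assumptions for illustration, not the patented method.

```python
# Hypothetical sketch: comparing a pupil's sampled weight-transference
# readings (e.g. from a sensor mat 25) against an expert's stored
# readings, yielding per-sample differences that could be graphed
# alongside the final presentation.
def weight_transfer_differences(pupil, expert):
    """Return signed differences (pupil - expert) per time sample."""
    if len(pupil) != len(expert):
        raise ValueError("samples must share the same time base")
    return [p - e for p, e in zip(pupil, expert)]

pupil_samples = [0.40, 0.55, 0.80, 0.60]   # fraction of weight on front foot
expert_samples = [0.45, 0.60, 0.90, 0.55]
diffs = weight_transfer_differences(pupil_samples, expert_samples)
```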
[0050] The communications networks 15, 21 and 23 may comprise
high-speed cable, wireless, Ethernet, general switched telephone
network, power line, satellite or the like, over which the captured
data coordinate signals representative of the pupil's movement,
together with their bio-mechanical data, can be transmitted to the
computer 16 for storage in the database 17 for later analysis
against the selected preferred movement signal.
[0051] The computer 16 can be in close proximity to the pupil 11
within the same location and be contactable by local wireless or
other communications network or be at a remote location which can
be contacted by telecommunications means such as via the
transceiver 22. In another form, the computer 16 may be portable
and worn by the person undertaking the activity. In this form, the
computer 16 may include or incorporate the data logging unit 12. In
this form, the sensors 13 may communicate wirelessly with the
computer 16 or be hard-wired to it. The audio-visual aid in this
embodiment may be presented for viewing on a monitor or other
display adjacent to the pupil, or on apparatus worn by the pupil
such as goggles 26 incorporating a viewable screen.
[0052] The computer database 17 holds the following prerecorded
information in digital form, which provides a basis for forming a
personalized audio-visual teaching presentation.
[0053] 1) A digitized audio and visual library of selected coaches',
advisers', professionals' or mechanisms' movements, techniques,
steps or procedures that have been previously analyzed, recorded and
reformed into suitable digitized signals, for later analysis and
regeneration in the database against the received pupil's coordinate
signal.
[0054] 2) A digitized bio-mechanical library of selected coaches' or
mechanisms' movements, measured individually or as a group, for
later display with the regenerated audio-visual signal.
[0055] 3) A digitized visual library of objects against which a
comparison can be made with a person's submitted object design
sensed via remote sensing means.
[0056] The computer 16 is able to receive and edit the transmitted
data coordinates or bio-mechanical signals relating to the exercise
movement of the pupil 11 and arranges that information in the
database 17 in such a way as to be able to match the received data
coordinate information with the previously stored and selected
database coordinate information of the preferred expert's movement
to enable the processor 18 to generate the visual presentation
which enables the pupil to see those differences that exist between
the two movements, that of the pupil and that of the selected coach.
[0057] The computer 16 is software and/or hardware programmed to
compare by use of its processor 18 the respective coordinate
movement of the person or object being analyzed against the
selected preferred movement coordinate with adjustments made such
as for size of person, speed and other parameters to allow a
correct comparison. The differences between the movement
coordinates of the pupil and coach are measured by sampling or
other techniques in the database 17 and the processor 18 can then
produce automatically a visual representation showing the pupil's
movement against the expert's preferred movement.
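The comparison with adjustments for size and speed described above could be sketched as follows; the particular normalization scheme (a height ratio plus resampling to a common number of samples) is an assumption for illustration, since the disclosure leaves the adjustment technique open.

```python
# Minimal sketch, not the patented implementation: normalize a pupil's
# 2-D movement coordinates for body size and speed, then compare them
# point-by-point with an expert's stored coordinates.
def normalize(path, height_ratio, n_samples):
    """Scale coordinates by a height ratio and resample to n_samples points."""
    scaled = [(x * height_ratio, y * height_ratio) for x, y in path]
    step = (len(scaled) - 1) / (n_samples - 1)
    return [scaled[round(i * step)] for i in range(n_samples)]

def differences(pupil, expert):
    """Per-sample coordinate offsets between pupil and expert paths."""
    return [(px - ex, py - ey) for (px, py), (ex, ey) in zip(pupil, expert)]

pupil = [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8)]   # pupil sampled faster
expert = [(0, 0), (2, 4), (4, 8)]
diffs = differences(normalize(pupil, 1.0, 3), expert)
```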
[0058] During the above visual production, the processor 18 can add
via software and/or hardware to the visual display previously
stored, audio comments from an expert as to how they are
performing the movement or how they can improve it, so
that the pupil can receive in the final audio-visual presentation,
personalized audio-visual instruction from an expert, similar to
private instruction.
[0059] The computer 16 may further include a means for converting
received data coordinate signals, if needed, in one broadcasting
mode, into those of other countries i.e. PAL-NTSC-SECAM etc. To
accomplish this within the computer 16, a Standards Converter or
the like may be used, using Optical Scan or Digital Standards
features or the like.
[0060] The computer 16 may additionally include a means for
determining costing on each and every received and regenerated data
signal that passes through the system, based upon the following
information:
[0061] (a) Signal transmission time to and from the computer
database.
[0062] (b) The amount of analysis and regeneration time required,
for each and every presentation, such as with the showing of audio
visual faults, bio-mechanical/characterized data graphs, dialogue
and broadcasting changes etc.
[0063] The computer 16 is software and/or hardware programmed to
accept digitized data signals transmitted to it, associated with
the transceiver unit 22, so that the final regeneration and editing
phases relate personally to the pupil 11, this being performed
using various digital editing techniques and procedures as
described below.
[0064] The visual aspect of the audio visual presentation generated
by the processor 18 can take the form of split-screening as shown
in the drawings generating new visual images of the preferred
technique displayed alongside the present technique. Alternatively,
the preferred technique can be superimposed over the recorded
technique. The visual aspect may, however, be displayed in any other
form. Graphics, charts, numbers, etc. for bio-mechanical information
displays are also displayed, so that the viewer can clearly see
those changes that are required, or need to be performed, to develop
or pursue the preferred movement, procedure or proposal. These
superimposing effects or the like may take the form of a
computer-generated human body shape, a normal human or visual
appearance, stick figures, etc., so as to show the viewer any
variances between their movement and the preferred movement. For
ease of editing the subject material within the computer database
17, a time code or the like may also be burnt into the original
recorded signal, so as to make final editing much more efficient
within the database 17. This procedure may also be linked with an
edit controller or the like within the computer 16 and processor
18 which locks the two signals electronically together during
edits.
[0065] To personalize the final presentation, dialogue is also
added when required so that the selected instructor or adviser may
relate to the viewer/pupil more personally than with other
audio-visual presentations. Part of this personalizing effect is
initially accomplished in the original transmission phase, by the
name, spoken language, broadcasting mode or location statistics,
etc. of the final viewer being captured within the database, to
which dialogue from a selected adviser can also be added by the
processor 18 when required in the visual signal generation phase,
showing those changes they need to perform or consider to obtain
the preferred movement, thus totally personalizing the return
audio-visual signal to the pupil.
[0066] The dialogue may take the form of a brief statement, for
example "keep your arms up like mine when doing this jumping jack",
so as to make the viewer fully aware of those changes that are
required, or need their consideration, to perfect the movement or
procedure. The means for adding such dialogue may take the form of
a voice energy monitor, text-to-speech software or the like
contained within the computer database 17, which on analysis can
scan a memory bank of stored expert's comments or learned words,
phrases, etc., which may be in text format within the computer
database 17, to find suitable terms to bring attention to these
problem areas or matters needing attention. Dialogue may also be
added to the final audio-visual presentation regarding the
suitability of the equipment being used to perform the desired
movements and techniques.
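Selecting stored comments for detected problem areas, as described above, could be sketched as a simple lookup; the fault keys and phrases below are invented for illustration, and the disclosure leaves the actual matching technique open (e.g. text-to-speech software scanning a memory bank of stored phrases).

```python
# Hypothetical sketch: mapping detected fault areas to an expert's
# stored comment phrases. Keys and phrases are assumptions.
COMMENT_BANK = {
    "arms_low": "Keep your arms up like mine when doing this jumping jack.",
    "feet_narrow": "Spread your feet wider at the top of the jump.",
}

def select_dialogue(detected_faults):
    """Return the stored phrases matching each detected fault, in order."""
    return [COMMENT_BANK[f] for f in detected_faults if f in COMMENT_BANK]

lines = select_dialogue(["arms_low"])
```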
[0067] To increase the system's acceptance as a true teaching aid,
scientific data may be presented to a viewer, when needed, in the
form of graphs, charts or the like, set above, below, alongside or
within the total regenerated video signal recording. This
information is gathered from the received data coordinate signals
being matched to the preferred coach's data coordinate signals
contained within the database 17, so that the pupil can clearly
see those bio-mechanical differences that exist between their
current procedures and techniques and those of their selected
coach's or professional's procedures and techniques.
[0068] Once the audio-visual teaching presentation is generated, it
is then transmitted back from the computer 16 via communications
links 23 to the remote sender's or agent's transceiver unit 22 (the
agent may be at a training facility providing the service for the
pupil). This procedure may be performed from within the computer 16
itself, using various digital compression and sampling techniques,
or the presentation can be sent back by a further transceiver
associated with the computer 16.
[0069] On receipt at the remote pupil's or agent's receiving
location, the audio-visual presentation may pass through the
transceiver unit 22 and be viewed directly on a screen 24, or be
saved on a hard drive, video tape, disk, chip or the like within
the transceiver unit 22, or be downloaded from the transceiver
unit 22 for later viewing or viewing at a web site.
[0070] Where the computer 16 is at a remote location and where the
transceiver unit 22 is used, the transceiver unit 22 includes the
database to store a person's or object's movement data coordinates
sent to it via wireless means associated with the sensors 13, such
as Micro-Electro-Mechanical Systems (MEMS) sensors or other sensors
incorporating gyroscopes and accelerometers, which can be attached
to or held by the pupil 11 or be part of their equipment.
Conductive clothing incorporating similar sensors may also be worn
by the pupil 11 to capture the movement data coordinates. The
transceiver unit 22 is able to transmit the data coordinates via
the communications links 23 to the computer 16, the database 17 of
which has previously stored coordinates of a correct movement and
audio advice on how to achieve a preferred movement. This advice
may also be generated by a computer processing facility, not
needing an expert person to record audio comments. The computer 16
uses various software and hardware techniques to analyze the
pupil's received motion data coordinates against the preferred
stored motion data coordinates of an expert movement, and produces
within the database 17 a visual presentation showing differences in
the pupil's movement against the preferred movement. The system,
whilst making the visual presentation, also selects and adds stored
audio comments to parts of the visual presentation, so a pupil can
not only see differences in their movement as opposed to an expert
or preferred movement but can also hear an expert's audio comment
via speakers, headphones or another audio output device.
[0071] The audio-visual presentation transmitted from the computer
database 17 back to the pupil via communications link 23 may show
the pupil's or object's movement in the form of a stick figure or a
computer-generated likeness of normal appearance, either
superimposed, split-screened or the like, alongside the preferred
movement, with pointers, lines or the like drawn on the visual
images to bring attention to areas of fault. To this presentation
could also be added graphs or charts to show differences in, say,
weight transference, pressure in grip, the equipment being used or
the like.
[0072] Teaching an exercise movement via the system may proceed as
follows:
[0073] The pupil 11 puts on a garment which has conductive material
embedded into it, or attaches Micro-Electro-Mechanical Systems
(MEMS) sensors 13 to their body or equipment, or holds devices
which include these MEMS sensors 13. As the pupil 11 makes an
exercise movement, these sensors 13 communicate the pupil's
movement data coordinates wirelessly, directly or indirectly via
the data logger 12, to the transceiver unit 22, which could also
connect to a larger communications network to alert a computer 16
that this pupil 11 would like to receive personalized audio-visual
coaching for this particular movement, which may be a jumping jack
exercise, and the selected coach could be Jane Fonda.
[0074] As the pupil 11 is performing the movement, their movement
data coordinates are captured via the sensors 13 and sent in real
time to the computer 16 for storage in the computer database 17,
with the processor 18 then analyzing the pupil's jumping jack
exercise movement against Jane Fonda's preferred way of doing the
movement. The computer 16 then produces in near real time, using
its processor 18, a visual presentation 19 which shows the pupil 11
their movement against the preferred way of doing the movement.
[0075] When the computer 16 detects faults from analyzing the data
coordinates of the pupil's movement against Jane's movement, it
selects Jane's stored audio comment for this fault from the
database 17 and adds it to the new visual presentation, so the
pupil receives personalized audio-visual instruction from Jane as
if she had viewed the movement.
[0076] For example the audio comments may be "you are not getting
your hands over your head! Keep your hands over your head like mine
when doing this jumping jack movement, please try again!" The pupil
11 hears this comment whilst watching the video and sees how their
movement compares to Jane's. They would then try the movement
again.
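One possible fault check in the jumping-jack example above is sketched below; the coordinate layout, the peak-frame measurement and the specific values are assumptions for illustration, not taken from the disclosure.

```python
# Illustrative sketch: detecting the "hands over head" fault from the
# jumping-jack example by comparing hand heights against head height
# at the peak of the movement. Values and threshold are hypothetical.
def hands_over_head(hand_y, head_y):
    """True if both hands are at or above head height."""
    return all(y >= head_y for y in hand_y)

# y coordinates in metres at the peak frame of the movement
faults = []
if not hands_over_head(hand_y=[1.55, 1.52], head_y=1.70):
    faults.append("arms_low")  # would trigger Jane's stored audio comment
```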
[0077] Bio-mechanical data in the form of graphs or charts, etc.
may also be added to the presentation, so the pupil can clearly see
scientific data pertaining to the performed movement. Dialogue may
also be added at this stage regarding the suitability of the
equipment being used.
[0078] Towards the conclusion of this audio-visual presentation,
the expert could conclude by saying: "Thank you for allowing me to
help you improve your technique, but please continue to work on
these points with the help of your local adviser, who is also there
to assist you in overcoming these problems. Jane."
[0079] The above embodiment has been described with reference to an
exercise movement; however, as previously stated, the invention may
readily be applied where other movements are required to be
emulated. Such movements may comprise movements of persons or of
objects, parts or mechanisms. In this embodiment, and as before, an
object may have MEMS motion sensors attached, or wear conductive
movement-sensing material or the like, which gives its position
(x, y and z coordinates) and its orientation (yaw, pitch and roll)
with respect to a reference point or state and, as before, the data
obtained is stored and then transmitted for storage in a database
for analysis in a computer against the corresponding data of a
preferred object whose data information is previously stored in
the database. A typical object with which the apparatus may be used
may comprise a golf club 26 (shown in dotted outline) which carries
the sensors 13.
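The per-sensor sample described above, position (x, y, z) plus orientation (yaw, pitch, roll) relative to a reference point or state, could be represented as in this minimal sketch; field names and units are assumptions.

```python
# Minimal sketch of a per-sensor pose sample: position plus orientation
# relative to a reference, as described in [0079]. Names and units are
# hypothetical, not taken from the disclosure.
from dataclasses import dataclass

@dataclass
class PoseSample:
    x: float      # position, metres
    y: float
    z: float
    yaw: float    # orientation, degrees
    pitch: float
    roll: float

def relative_to(sample, reference):
    """Express a pose sample relative to a reference pose or state."""
    return PoseSample(sample.x - reference.x, sample.y - reference.y,
                      sample.z - reference.z, sample.yaw - reference.yaw,
                      sample.pitch - reference.pitch,
                      sample.roll - reference.roll)

ref = PoseSample(0, 0, 0, 0, 0, 0)          # e.g. the address position
club_head = PoseSample(0.2, 1.1, 0.0, 45.0, -10.0, 5.0)
rel = relative_to(club_head, ref)
```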
[0080] After comparison and adding, where appropriate, suitable
dialogue from say an expert in that field, a personalized
audio-visual presentation is made and transmitted back to the pupil
11 so that the pupil 11 has an expert assessment of the differences
between the respective objects. A particular application in this
embodiment could be advising a cyclist whether they are pedalling
correctly to obtain maximum power; the system could sense, via MEMS
sensors or conductive material placed on or inside a shoe, the shoe
angle during the pedal stroke, compare this to an expert's
movement, and give personalized audio-visual advice in this
regard.
[0081] It will be apparent that the latter embodiment may be
applied to any suitable objects of which analysis is required. The
present invention thus provides a method and means for expertly
teaching or instructing procedures, strategies, assessments or
perceptions in audio-visual form, to enable a viewer to more
closely emulate, calculate or develop techniques suitable to their
various applications.
[0082] Whilst the above has been given by way of illustrative
embodiment of the invention, all such modifications and variations
thereto as would be apparent to persons skilled in the art are
deemed to fall within the broad scope and ambit of the invention as
herein defined by the appended claims.
* * * * *