U.S. patent application number 15/265639 was filed with the patent office on 2017-04-06 for method and device for inputting a user's instructions based on movement sensing.
The applicant listed for this patent is MICROINFINITY, INC.. Invention is credited to Sang-Bum Kim, Jung-Hwan Lee, Kyu-Cheol Park, Won-Jang Park, Byung-Chun Sakong, Woo-Hee Yang.
Application Number: 20170097690 / 15/265639
Document ID: /
Family ID: 42279387
Filed Date: 2017-04-06
United States Patent Application: 20170097690
Kind Code: A1
Park; Kyu-Cheol; et al.
April 6, 2017
METHOD AND DEVICE FOR INPUTTING A USER'S INSTRUCTIONS BASED ON
MOVEMENT SENSING
Abstract
Provided is a user instruction input device operating in a
three-dimensional space. The user instruction input device includes
a first sensor that senses the angular rate of the device centering
on at least one axis, a second sensor that senses the acceleration
of the device at least for one direction, and a processing unit
that calculates a first rotation angle in a coordinate system
independent of the attitude of the device from the output
value of the first sensor, calculates a second rotation angle in
the coordinate system from the output value of the second sensor,
and calculates the final attitude angle by combining the first
rotation angle and the second rotation angle.
Inventors: Park; Kyu-Cheol (Seoul, KR); Lee; Jung-Hwan (Gyeonggi-do, KR); Park; Won-Jang (Seoul, KR); Sakong; Byung-Chun (Gyeonggi-do, KR); Kim; Sang-Bum (Seoul, KR); Yang; Woo-Hee (Gyeonggi-do, KR)
Applicant: MICROINFINITY, INC. (Gyeonggi-do, KR)
Family ID: 42279387
Appl. No.: 15/265639
Filed: September 14, 2016
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
12604780 | Oct 23, 2009 |
15265639 | |
Current U.S. Class: 1/1
Current CPC Class: G06F 3/0346 20130101
International Class: G06F 3/0346 20060101 G06F003/0346
Foreign Application Data
Date | Code | Application Number
Nov 14, 2008 | KR | 10-2008-0113610
Mar 30, 2009 | KR | 10-2009-0027100
Claims
1-12. (canceled)
13. A user instruction input device comprising: an angular rate
sensor adapted to sense an angular rate at which the device rotates
to provide a first output; an acceleration sensor adapted to sense
an acceleration of the device to provide a second output; and a
processing unit adapted to convert the first output of the angular
rate sensor into a first rotation angle, convert the second output
of the acceleration sensor into a second rotation angle, and
compute a weighted average of the first rotation angle and the
second rotation angle based on the first output and/or the second
output to produce an attitude angle estimate.
14. The device of claim 13, wherein the processing unit is further
adapted to determine a first weight for the first rotation angle
and a second weight for the second rotation angle based on one of
the first output, the second output and a combination thereof, such
that the first and second weights satisfy the conditions that (i)
the first and second weights are greater than or equal to 0 and
less than or equal to 1, (ii) a sum of the first and second weights
is equal to 1, and (iii) as said one of the first output, the
second output and the combination thereof becomes larger, the
second weight becomes smaller, and wherein the processing unit is
further adapted to compute the weighted average with the determined
first and second weights.
15. The device of claim 13, wherein the first rotation angle
includes three rotation angles (.PHI..sub.G, .theta..sub.G,
.PSI..sub.G) in the navigation frame.
16. The device of claim 13, wherein the second rotation angle
includes three rotation angles (.PHI..sub.XL, .theta..sub.XL,
.PSI..sub.XL) in the navigation frame.
17. The device of claim 15, wherein the first weight includes three
weights each for a respective one of the three rotation angles
(.PHI..sub.G, .theta..sub.G, .PSI..sub.G).
18. The device of claim 16, wherein the second weight includes
three weights each for a respective one of the three rotation
angles (.PHI..sub.XL, .theta..sub.XL, .PSI..sub.XL).
19. The device of claim 13, wherein the device is operable in
conjunction with a display device so that a movement of the device
triggers a movement of an object on the display device.
20. The device of claim 19, wherein the attitude angle estimate
includes at least yaw (.PSI.) and pitch (.theta.) estimates of the
device, and the processing unit is further adapted to detect a
variation in yaw and pitch of the device based on the yaw (.PSI.)
and pitch (.theta.) estimates of the device, calculate a position
variation (.DELTA.x, .DELTA.y) for the object from the variation in
yaw and pitch of the device and output the position variation
(.DELTA.x, .DELTA.y) to the display device so that the object on
the display device is moved correspondingly.
21. The device of claim 20, wherein the object is a pointer.
22. The device of claim 20, wherein the processing unit is further
adapted to perform mapping upon the variation in yaw and pitch of
the device according to a predetermined mapping function to provide
the position variation (.DELTA.x, .DELTA.y) for the object.
23. The device of claim 22, wherein the predetermined mapping
function has at least one of a depression area and a limitation
area and wherein (i) if the variation in yaw and pitch of the
device falls within the depression area, the variation in yaw and
pitch of the device is mapped to a depressed position variation
according to the predetermined mapping function and (ii) if the
variation in yaw and pitch of the device falls within the
limitation area, the variation in yaw and pitch of the device is
mapped to a limited position variation according to the
predetermined mapping function.
Description
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application is based on and claims priority from Korean
Patent Application Nos. 10-2008-0113610, filed on Nov. 14, 2008,
and 10-2009-0027100, filed on Mar. 30, 2009, the disclosures of
which are incorporated herein in their entireties by reference.
BACKGROUND OF THE INVENTION
[0002] 1. Field of the Invention
[0003] Methods and apparatuses consistent with the present
invention relate to a user instruction input device including a
movement sensor. More particularly, the present invention relates
to a user instruction input device capable of inputting user
instructions naturally and accurately based on device movements in
a three-dimensional space.
[0004] 2. Description of the Related Art
[0005] As computer science has developed, various devices that allow users to input information into a computer have been developed. One such device is called a user command input device. As a user manipulates such a device, position data corresponding to the motion of the user command input device are generated, and these position data are converted into motions of a pointer image shown on the display. Hence, by moving the user command input device, the user may link the pointer image with objects displayed on the display. Here, an object refers to a user interface element, such as a menu, a button or an image, that performs a certain action when selected. The user can then issue a command related to the corresponding object through a selection action such as pushing a certain button on the user command input device.
[0006] Most personal computer users operate their computers through operating systems with graphical user interfaces, such as Microsoft Windows and MAC OS X. This is due to the convenient mouse functions and various graphic functions that are not supported in console-based operating systems such as DOS (Disk Operating System) and some UNIX versions; users can simply input commands by dragging, scrolling or clicking with a mouse, without keyboard input.
[0007] On the other hand, in image display devices that cannot use the keyboard or mouse of a personal computer, such as a digital TV, a set-top box or a game machine, various commands are inputted using a key pad provided on a remote control device. Such a key pad input method has mainly been used because, unlike a personal computer, these devices are not operated from a fixed position and are used in an open space such as a living room, so it is difficult to use an input means fixed on a plane, such as a keyboard or mouse.
[0008] Considering such problems, three-dimensional user command input devices with motion sensors such as a gyroscope and an accelerometer have recently been developed. By moving a three-dimensional user command input device, a user can move a pointer image on the corresponding display in a desired direction and at a desired speed, and by pushing a certain button on the user command input device, the user can select and execute a desired action.
[0009] However, unlike a technology that inputs user commands through actions on a fixed two-dimensional plane, as with a mouse, it is not easy to transmit natural and accurate motions with a user command input device that moves a pointer or a certain object (e.g., a game unit) through an arbitrary action in three-dimensional space. This is because motions that were not intended by the user may be transmitted, depending on the pose, orientation or distance of the device.
[0010] In fact, inventions for measuring three-dimensional motions using accelerometers and angular rate sensors have been made since the 1980s. The present invention does not simply intend to implement an input device using an accelerometer and an angular rate sensor, but intends to implement an input device that naturally fits the user's intention with a compact system (i.e., a system that uses a small number of operations).
SUMMARY OF THE INVENTION
[0011] An objective of the present invention is to input user
instructions more naturally and accurately in a device that inputs
user instructions through arbitrary movements in a
three-dimensional space.
[0012] The present invention will not be limited to the technical
objectives described above. Other objectives not described herein
will be more definitely understood by those in the art from the
following detailed description.
[0013] According to an exemplary embodiment of the present
invention, there is provided a user instruction input device
operating in a three-dimensional space including a first sensor
that senses the angular rate of the device centering on at least
one axis, a second sensor that senses the acceleration of the
device at least for one direction, and a processing unit that
calculates a first rotation angle in a coordinate system
independent of the attitude of the device from the output
value of the first sensor, calculates a second rotation angle in
the coordinate system from the output value of the second sensor,
and calculates the final attitude angle by combining the first
rotation angle and the second rotation angle.
[0014] According to an exemplary embodiment of the present
invention, there is provided a method of inputting a user
instruction using a user instruction input device operating in a
three-dimensional space including sensing the angular rate of the
device on at least one axis, sensing the acceleration of the device
for at least one direction, calculating a first rotation angle in a
coordinate system independent of the attitude of the device using
the output value of the first sensor, calculating a second rotation
angle in the coordinate system using the output value of the second
sensor, calculating the final attitude angle by combining the first
rotation angle and the second rotation angle, and outputting a
position variation corresponding to the variation of the calculated
final attitude angle.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The above and other features and advantages of the present
invention will become apparent by describing in detail preferred
embodiments thereof with reference to the attached drawings in
which:
[0016] FIG. 1 illustrates a three-axis rotation angle defined in a
certain frame.
[0017] FIG. 2 is a block diagram illustrating a user instruction
input device according to an exemplary embodiment of the present
invention.
[0018] FIG. 3 illustrates a method of measuring a roll value using
an accelerometer according to an exemplary embodiment of the
present invention.
[0019] FIG. 4 illustrates a method of calculating a yaw value using
the pitch and roll values in the method of measuring a yaw angle
according to an exemplary embodiment of the present invention.
[0020] FIGS. 5 to 7 show the case when .omega..sub.x is used and
the case when .omega..sub.x is not used in equation 5.
[0021] FIG. 8 is a flowchart for estimating movements of the device
using a movement estimation function.
[0022] FIG. 9 shows a bell-shaped curve as a movement estimation
function.
[0023] FIG. 10 shows an exponential function used as a movement
estimation function.
[0024] FIGS. 11A, 11B and 12 show a mapping function according to
an exemplary embodiment of the present invention.
[0025] FIGS. 13 and 14 show examples of applying a mapping scale
regarding pitch-direction rotations.
[0026] FIG. 15 shows two rotation examples of the device 100 in a
positive pitch angle direction (left) and in a negative pitch angle
direction (right).
[0027] FIGS. 16 and 17 show examples of applying a mapping scale
regarding yaw-direction rotations.
[0028] FIG. 18 shows the agreement between the starting point of a
body frame and the starting point of a navigation frame.
[0029] FIG. 19 shows the disagreement between the starting point of
a body frame and the starting point of a navigation frame.
DETAILED DESCRIPTION OF THE INVENTION
[0030] Exemplary embodiments of the present invention will be
described in detail with reference to the accompanying
drawings.
[0031] Advantages and features of the present invention and methods
of accomplishing the same may be understood more readily by
reference to the following detailed description of the exemplary
embodiments and the accompanying drawings. The present invention
may, however, be embodied in many different forms and should not be
construed as being limited to the embodiments set forth herein.
Rather, these embodiments are provided so that this disclosure will
be thorough and complete and will fully convey the concept of the
invention to those skilled in the art, and the present invention
will only be defined by the appended claims.
[0032] In the present invention, a user instruction input device
refers to an interface device that makes various contents
intuitively available by receiving the input of the user's
movements. The device makes the information obtained through the
user's movements correspond to information that is necessary for
various information devices or various services. Some examples of
such devices are a three-dimensional space mouse, an IPTV (Internet
protocol TV) remote control and a game input device.
[0033] FIG. 1 illustrates a three-axis rotation direction defined in a certain frame (coordinate system). In a certain frame consisting of x, y and z axes, a pitch (.theta.) refers to a rotation in the y-axis direction, a roll (.phi.) refers to a rotation in the x-axis direction, and a yaw (.PSI.) refers to a rotation in the z-axis direction. Whether a rotation is positive (+) or negative (-) is determined by a right-handed coordinate system. The present invention mentions two frames: a navigation frame and a body frame. The navigation frame is fixed in space and refers to a standard coordinate system consisting of the three axes X.sub.N, Y.sub.N and Z.sub.N. That is, the navigation frame is independent of the attitude of a device. Further, the body frame refers to a relative coordinate system that exists on an object placed in three-dimensional space and consists of the three axes X.sub.B, Y.sub.B and Z.sub.B. In FIG. 1, the x-direction refers to the standard direction in which the user instruction input device points. That is, a rotation made about the axis toward the standard direction is a roll-direction rotation.
[0034] FIG. 2 is a block diagram illustrating a user instruction
input device 100 according to an exemplary embodiment of the
present invention. The user instruction input device 100 can
control an object on at least one display device (not shown). The
object may be a pointer on a display or may be an object while
playing a game. Also the display device may be installed on a
separately fixed position, but it may also be integrated with the
user instruction input device 100 (e.g., a portable game
machine).
[0035] As a more specific example, the user instruction input
device 100 may include an angular rate sensor 110, an acceleration
sensor 120, a filtering unit 130, a processing unit 190 and an
output unit 180. Further, the processing unit 190 may include a
first operation unit 140, a second operation unit 150, an attitude
angle measuring unit 160 and a variation mapping unit 170.
[0036] The angular rate sensor 110 senses the angular rate at which the device 100 rotates on the body frame, and provides a sampled output value (digital value). A gyroscope can be used as the angular rate sensor 110, and various types of gyroscopes, such as a mechanical type, a fluid type, an optical type and a piezoelectric type, may be used. Specifically, the angular rate sensor 110 can obtain rotational angular rates about axes (axes of the body frame) that cross at right angles, e.g., the rotational angular rates (.omega..sub.x, .omega..sub.y, .omega..sub.z) about the x-axis, y-axis and z-axis of the body frame.
[0037] The acceleration sensor 120 senses the acceleration of the device 100 on the body frame and provides a sampled output value (digital value). The acceleration sensor 120 can be a piezoelectric type or a moving coil type. Specifically, the acceleration sensor 120 measures the linear acceleration (f.sub.x, f.sub.y, f.sub.z) for three axes (axes of the body frame) that cross at right angles.
[0038] The filtering unit 130 may consist of a low pass filter, a high pass filter, an offset filter or a scaling filter depending on the usage of the device 100, and compensates for errors after receiving the output of the angular rate sensor 110 and the output of the acceleration sensor 120. The filtering unit 130 provides the error-compensated rotational angular rate (.omega..sub.x, .omega..sub.y, .omega..sub.z) to the first operation unit 140 and provides the error-compensated acceleration (f.sub.x, f.sub.y, f.sub.z) to the second operation unit 150.
[0039] The second operation unit 150 calculates the roll, pitch and yaw of the navigation frame (.phi..sub.XL, .theta..sub.XL, .PSI..sub.XL) (the second rotation angle) using the linear acceleration (f.sub.x, f.sub.y, f.sub.z) provided from the filtering unit 130. A specific example of the calculation is shown in the following.

.phi..sub.XL = arctan2(f.sub.y, f.sub.z)
.theta..sub.XL = arctan2(-f.sub.x, (f.sub.y.sup.2+f.sub.z.sup.2).sup.1/2)
.PSI..sub.XL = .phi..sub.XL/sin(.theta..sub.XL) (Equation 1)
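As an illustrative sketch only (the function name and the small-angle guard are assumptions, not part of the specification), Equation 1 can be computed as follows; the guard reflects the restriction, discussed with Equation 4, that the pseudo-yaw is undefined when the pitch is near zero:

```python
import math

def accel_rotation_angles(fx, fy, fz):
    """Second rotation angle (Equation 1): roll, pitch and pseudo-yaw
    in the navigation frame from the accelerometer output (fx, fy, fz)."""
    phi_xl = math.atan2(fy, fz)                     # roll
    theta_xl = math.atan2(-fx, math.hypot(fy, fz))  # pitch
    s = math.sin(theta_xl)
    # the pseudo-yaw is undefined near sin(theta) = 0; fall back to 0 there
    psi_xl = phi_xl / s if abs(s) > 1e-6 else 0.0
    return phi_xl, theta_xl, psi_xl
```

For a device at rest with gravity along the body z-axis, all three angles come out zero, as expected from Equation 1.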
[0040] Generally, the roll and the pitch (.phi..sub.XL,
.theta..sub.XL) can be simply calculated using the acceleration,
but it is not easy to get the yaw. The yaw (.PSI..sub.XL)
calculated from the acceleration in the present invention is a
pseudo yaw and can be explained with reference to FIGS. 3 and
4.
[0041] FIG. 3 illustrates a method of measuring the roll value using the accelerometer according to an exemplary embodiment of the present invention. Assuming that the y-direction acceleration is f.sub.y, and the pitch has already been generated, the vertical element of the acceleration of gravity becomes g cos .theta. in (b) of FIG. 3. Hence, the equation for the roll is as follows.

.phi. = arcsin(f.sub.y/(g cos .theta.)) (Equation 2)
[0042] The roll value can also be calculated from the accelerometer output in various other ways.
[0043] FIG. 4 illustrates a method of calculating a yaw value using
the pitch and the roll values in the method of measuring the yaw
angle according to an exemplary embodiment of the present
invention. (a) of FIG. 4 shows a vector direction of the roll and
yaw angular rate based on the generation of the pitch. In (a) of
FIG. 4, .omega..sub.y represents the roll angular rate vector, and
.omega..sub.z represents the yaw angular rate vector. The yaw
angular rate vector is not the actual yaw vector, but the projected
vector of the actual yaw vector.
[0044] (b) of FIG. 4 illustrates (a) of FIG. 4 from the side. In
(b) of FIG. 4, assuming that the time is "t," the following
equation 3 is established among the roll angular rate vector
.omega..sub.y, the yaw angular rate vector .omega..sub.z and the
pitch (.theta.).
sin .theta.(t) = .omega..sub.y(t)/.omega..sub.z(t) (Equation 3)
[0045] From the above equation 3, the yaw (.PSI.) can be approximated as shown in equation 4.

.PSI. = .intg..omega..sub.z dt = .intg.(.omega..sub.y(t)/sin .theta.(t)) dt .apprxeq. (1/sin .theta.).intg..omega..sub.y(t) dt = .phi./sin .theta. (Equation 4)

[0046] Equation 4 cannot be applied if the pitch .theta. is close to 0.degree. or 90.degree., so a certain restriction should be imposed at such angles. The yaw values actually measured in each condition using equation 4 are shown in the following.
Pitch (.theta.) | Yaw (.PSI.) at (-) pitch, roll .PHI. = -80 | Yaw (.PSI.) at (-) pitch, roll .PHI. = +80 | Yaw (.PSI.) at (+) pitch, roll .PHI. = -80 | Yaw (.PSI.) at (+) pitch, roll .PHI. = +80
10 | Strange movement | Strange movement | -414 | 383
20 | -222 | 243.5 | -221 | 241.5
40 | -120 | 125 | -122 | 127
60 | -89 | 91.5 | -92 | 94.5
80 | -72.5 | 77 | -83 | 84.5
[0047] As shown in the above table, the yaw value changes as the roll changes, so scale elements can be used to reduce such differences. Consequently, in the situation where there is a pitch value and both the roll value and the yaw value change, it is possible to calculate an approximate value for the yaw.
[0048] The first operation unit 140 calculates the rotation angle (.phi..sub.G, .theta..sub.G, .PSI..sub.G) (the first rotation angle) of three axes in the navigation frame using the angular rate (.omega..sub.x, .omega..sub.y, .omega..sub.z) values provided from the filtering unit 130. The specific equation using the Euler angle is shown in equation 5. Equation 5 has the form of a differential equation for the rotation angle (.phi..sub.G, .theta..sub.G, .PSI..sub.G) in the navigation frame.

d.phi..sub.G/dt = (.omega..sub.y sin .phi..sub.G + .omega..sub.z cos .phi..sub.G) tan .theta..sub.G + .omega..sub.x
d.theta..sub.G/dt = .omega..sub.y cos .phi..sub.G - .omega..sub.z sin .phi..sub.G
d.PSI..sub.G/dt = (.omega..sub.y sin .phi..sub.G + .omega..sub.z cos .phi..sub.G)/cos .theta..sub.G (Equation 5)
[0049] Generally, the attitude angle in the navigation frame can be obtained using three angular rates. A desirable embodiment of the present invention calculates the rotation angles of three axes in the navigation frame using only two angular rates (.omega..sub.y, .omega..sub.z), and here the .omega..sub.x term of equation 5 poses a problem.
[0050] General hand and arm movements of a person are often based on a single axis of a three-axis coordinate system. Some such examples are a rotation about the Z-axis and a rotation in the Y-axis direction. Also, even when two or more composite movements occur at the same time, there is a tendency that, when X-axis and Y-axis rotations of the body frame, or X-axis and Z-axis rotations of the body frame, occur at the same time, the movement in the X-axis direction becomes relatively smaller than that in the Y-axis and Z-axis directions.
[0051] Hence, in equation 5, .omega..sub.x in the X-axis direction becomes relatively smaller than the angular rates in the Y-axis or Z-axis direction, so .omega..sub.x may be disregarded, although the calculation is then not done perfectly accurately. However, if the roll (.phi..sub.XL) information calculated by the accelerometer is appropriately used, a performance similar to that obtained when the angular rate .omega..sub.x is used can be secured.
[0052] FIGS. 5-7 are the result of comparing the case of using .omega..sub.x (dotted line) and the case of not using .omega..sub.x (solid line). The X-axis refers to the number of unit samples, and the Y-axis refers to the angle. Also, the right side graph of each drawing is an enlarged view of a certain section indicated by a circle in the left side graph. When .omega..sub.x is removed, there is a 2.degree. to 4.degree. difference and a delay of 2 to 10 samples compared with when .omega..sub.x is used. However, if the calculation rate is more than 100 Hz, the angular difference and the delay are not easily distinguishable by the user, so there is no significant difference in the result between using the angular rates for two axes (the Y-axis and Z-axis angular rates of the body frame) and using the angular rates for three axes.
[0053] Based on such an experiment result, equation 5 can be transformed into the following equation 6.

d.phi..sub.G/dt = (.omega..sub.y sin .phi..sub.G + .omega..sub.z cos .phi..sub.G) tan .theta..sub.G
d.theta..sub.G/dt = .omega..sub.y cos .phi..sub.G - .omega..sub.z sin .phi..sub.G
d.PSI..sub.G/dt = (.omega..sub.y sin .phi..sub.G + .omega..sub.z cos .phi..sub.G)/cos .theta..sub.G (Equation 6)
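For illustration only (the function name, the simple forward-Euler step and the per-sample interface are assumptions, not from the specification), Equation 6 can be integrated one sampling interval at a time:

```python
import math

def integrate_gyro(phi, theta, psi, wy, wz, dt):
    """First rotation angle (Equation 6): propagate the navigation-frame
    attitude (phi_G, theta_G, psi_G) from only the two body-frame angular
    rates wy, wz (rad/s), omitting wx, over one sampling interval dt."""
    s, c = math.sin(phi), math.cos(phi)
    phi_dot = (wy * s + wz * c) * math.tan(theta)
    theta_dot = wy * c - wz * s
    psi_dot = (wy * s + wz * c) / math.cos(theta)
    # simple forward-Euler integration step
    return phi + phi_dot * dt, theta + theta_dot * dt, psi + psi_dot * dt
```

Starting from a level attitude, a pure Y-axis rate changes only the pitch and a pure Z-axis rate changes only the yaw, matching the structure of Equation 6.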
[0054] Further, if the pattern in which a person grabs and moves the input device is recognized in advance and utilized, even .omega..sub.y can be removed. Here, as in equation 1, errors generated by not using .omega..sub.x and .omega..sub.y can be overcome by using (.phi..sub.XL, .theta..sub.XL, .PSI..sub.XL) calculated from the output of the accelerometer.
[0055] Equations 5 and 6 illustrate the calculation of the rotation angle (.phi..sub.G, .theta..sub.G, .PSI..sub.G) of three axes in the navigation frame from the angular rates (.omega..sub.y, .omega..sub.z) based on the Euler angle representation, but the calculation may also be performed based on the more involved quaternion representation instead of the Euler angle representation.
[0056] Referring to FIG. 2, the attitude angle measuring unit 160
calculates the weighted average of three rotation angles
(.phi..sub.XL, .theta..sub.XL, .PSI..sub.XL) on the navigation
frame obtained from the second operation unit 150 and the three
rotation angles (.phi..sub.G, .theta..sub.G, .PSI..sub.G) on the
navigation frame obtained from the first operation unit 140, and
calculates the attitude angle (.phi., .theta., .PSI.) in the
navigation frame. The specific weighted average can be calculated
according to the following equation 7.
.phi. = .alpha..sub.1.phi..sub.XL + (1-.alpha..sub.1).phi..sub.G
.theta. = .alpha..sub.2.theta..sub.XL + (1-.alpha..sub.2).theta..sub.G
.PSI. = .alpha..sub.3.PSI..sub.XL + (1-.alpha..sub.3).PSI..sub.G (Equation 7)
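A minimal sketch of the weighted average of Equation 7 (the function and argument names are assumed for illustration):

```python
def fuse_attitude(angles_xl, angles_g, alphas):
    """Equation 7: combine the accelerometer-derived angles
    (phi_XL, theta_XL, psi_XL) with the gyro-derived angles
    (phi_G, theta_G, psi_G) using the weights alpha_1..alpha_3."""
    return tuple(a * xl + (1.0 - a) * g
                 for a, xl, g in zip(alphas, angles_xl, angles_g))
```

Note that a weight of 0 for the third component makes the yaw depend on .PSI..sub.G alone, and a weight of 1 makes it depend on .PSI..sub.XL alone.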
[0057] Here, .alpha..sub.1, .alpha..sub.2 and .alpha..sub.3 are weights for .phi., .theta. and .PSI., respectively. The above process for calculating the attitude angle in the navigation frame is just an example; various other ways can be used to calculate the attitude angle. For example, .PSI..sub.XL used in equation 1 can be calculated using a magnetic sensor or an image sensor. With a magnetic sensor or an image sensor, a rotation angle in the navigation frame can be directly calculated, so a transformation process as in equation 1 used for the acceleration sensor is not necessary.
[0058] If .PSI..sub.XL is not calculated from the third formula of equation 1, i.e., .PSI..sub.XL=0, then by setting .alpha..sub.3 to 0, .PSI. can be calculated by .PSI..sub.G alone without using .PSI..sub.XL.
[0059] However, in order to calculate the final attitude angle (.phi., .theta., .PSI.) more accurately, .alpha..sub.1, .alpha..sub.2 and .alpha..sub.3 need to be determined adaptively rather than arbitrarily. For this, the attitude angle measuring unit 160 can use a movement estimation function.
[0060] The movement estimation function refers to a function that detects detailed movements by normalizing movements to the range between 0 and 1 based on data using the outputs of the angular rate sensor and the acceleration sensor. As an example, in the case where the detected movements of the device 100 are hardly noticeable, the value of the acceleration sensor 120 is more reliable, so the mapping is done so that .alpha..sub.n=1; in the case where the movements are at the maximum state, the value of the angular rate sensor 110 is more reliable, so the mapping is done so that .alpha..sub.n=0. In the case where the movements are between the hardly noticeable state and the maximum state, the mapping should be done with an appropriate intermediate value.
[0061] FIG. 8 is a flowchart for estimating movements of the device 100 using a movement estimation function. A bell-shaped curve or an exponential function may be used as the movement estimation function. The bell-shaped curve, a curve of the form shown in FIG. 9, may be expressed by a Gaussian function, a raised cosine function, etc. The exponential function has the form shown in FIG. 10 and is, e.g., y=e.sup.-|x|. In FIGS. 9 and 10, the x-axis represents the size of the movements of the device 100, and the y-axis represents the weight in equation 7, i.e., .alpha..sub.n. The functions of FIGS. 9 and 10 commonly have their peak value in the center and approach 0 toward the right and left.
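The exponential form of FIG. 10 can be sketched as follows (a minimal illustration; the function name is an assumption):

```python
import math

def movement_weight(movement):
    """Movement estimation function in the FIG. 10 form, y = e^-|x|:
    maps the size of the detected movement to a weight alpha_n in
    (0, 1]. A nearly still device yields alpha_n near 1 (trust the
    accelerometer); a fast-moving device yields alpha_n near 0
    (trust the angular rate sensor)."""
    return math.exp(-abs(movement))
```

The weight returned here can be used directly as .alpha..sub.n in the weighted average of Equation 7.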
[0062] By using such a movement estimation function, detailed
movements such as stoppage, minute movements, slow movements and
fast movements can be detected in addition to detecting whether the
device 100 has stopped. Further, by measuring such movements, the
basis for removing unintended movements (e.g., a cursor movement by
a trembling of a hand) can be provided. Further, it is possible to
adjust the scale of a mapping function according to the size of
movements, which will be explained later.
[0063] Referring to FIG. 2 once again, in the variation mapping unit 170, the attitude angle (.phi., .theta., .PSI.) itself obtained by the attitude angle measuring unit 160 can be used in controlling the object of a display device. For example, for a flying object in a flight simulation, each axis-direction rotation can be controlled by the attitude angle. Further, a pointer existing on the two-dimensional screen of the display device needs to be mapped with the variation (displacement) of the attitude angle (.phi., .theta., .PSI.) of such a navigation coordinate system. As an example, in order to express natural movements of a pointer control device, horizontal movements (.DELTA.x) on the display screen need to correspond to the variation (.DELTA..PSI.) of the yaw, and vertical movements (.DELTA.y) need to correspond to the variation (.DELTA..theta.) of the pitch.
[0064] Such a relation can be expressed as the following equation
8.
.DELTA..psi.=.psi..sub.k-.psi..sub.k-1
.DELTA..theta.=.theta..sub.k-.theta..sub.k-1 Equation 8
[0065] Here, "k" is an index that indicates a certain sampling time
point.
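As a sketch of this correspondence (the function name and the proportional gain are illustrative assumptions standing in for a full mapping function, not values from the specification):

```python
def position_variation(att_prev, att_curr, gain=400.0):
    """Equation 8: the yaw variation between sampling points k-1 and k
    drives the horizontal pointer movement dx, and the pitch variation
    drives the vertical movement dy."""
    phi_p, theta_p, psi_p = att_prev
    phi_c, theta_c, psi_c = att_curr
    d_psi = psi_c - psi_p        # delta-psi -> horizontal movement
    d_theta = theta_c - theta_p  # delta-theta -> vertical movement
    return gain * d_psi, gain * d_theta
```

In practice the raw variations would pass through a mapping function rather than a fixed gain, so that small and excessive movements are handled differently.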
[0066] In order to define such a correspondence relation, the
present invention introduces a mapping function. FIGS. 11 and 12
illustrate examples of various mapping functions and show
correspondence between horizontal movements (.DELTA.x) and
variation of the yaw (.DELTA..PSI.). The same is applied to the
correspondence between vertical movements (.DELTA.y) and the
variation of the pitch (.DELTA..theta.).
[0067] FIGS. 11A and 11B show examples of simple mapping functions. FIG. 11A shows a floor function of the form f(t) = ⌊kt⌋, and FIG. 11B shows a ceiling function of the form f(t) = ⌈kt⌉. Such mapping functions can make the variation of the yaw or the pitch correspond to horizontal or vertical movements on the display by a simple relation. Here, the input value of the function can be reduced or removed depending on the choice of the gain "k", and an input value above a certain value is converted into a proportional output value. However, mapping functions such as those of FIGS. 11A and 11B have the disadvantage that the output cannot be appropriately limited for excessive input values above a certain level. That is, of the depression area, the scaling area and the limitation area included in the mapping function, the functions of FIGS. 11A and 11B lack the limitation area.
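As a sketch, the simple mappings of FIGS. 11A and 11B might look like the following; the gain value k is an assumption, and the code only illustrates why the limitation area is missing.

```python
import math

# Hypothetical sketch of the simple mapping functions of FIGS. 11A and 11B:
# a floor mapping f(t) = floor(k*t) and a ceiling mapping f(t) = ceil(k*t),
# where t is the attitude-angle variation and k is an assumed gain.

def floor_map(t, k=2.0):
    return math.floor(k * t)

def ceil_map(t, k=2.0):
    return math.ceil(k * t)

# Small inputs can be suppressed by the floor (a depression-like effect) ...
print(floor_map(0.4))    # 0: |k*t| < 1 maps to no movement
# ... but large inputs grow without bound: there is no limitation area.
print(floor_map(100.0))  # 200
```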
[0068] The mapping function of FIG. 12 is divided into three areas, and is more elaborate than the functions of FIGS. 11A and 11B. The first area (part 1) is a depression area that performs the mapping after reducing the input value of the function. The second area (part 2) is a scaling area that maps the input value to a roughly proportional output value. Finally, the third area is a limitation area that limits the output value for inputs above a certain value.
[0069] An example of the mapping function of FIG. 12 is a sigmoid function such as

f(t) = 1 / (1 + e^(−t))

Here, there are both positive and negative movement directions, so the mapping function is made symmetric about the origin of the coordinates. That is, the mapping function of FIG. 12 is composed of two copies of the same sigmoid function. It is desirable that a value determined by the HID (human interface device) mouse standard be used as the position variation obtained by the mapping.
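One plausible reading of FIG. 12 can be sketched as below: two identical sigmoids mirrored about the origin, normalized so that f(0) = 0 and clipped to the signed 8-bit range of HID mouse reports. The gain k, the center c and the output limit are illustrative assumptions, not values from the patent.

```python
import math

# Sketch of a FIG. 12-style mapping: a shifted sigmoid applied to |t| and
# mirrored about the origin. Near t = 0 the slope is small (depression area),
# in the middle it is roughly proportional (scaling area), and for large |t|
# it saturates (limitation area).

LIMIT = 127  # HID mouse deltas are signed 8-bit values (assumed limit)

def sigmoid_map(t, k=0.01, c=300.0):
    s0 = 1.0 / (1.0 + math.exp(k * c))            # sigmoid value at t = 0
    s = 1.0 / (1.0 + math.exp(-k * (abs(t) - c)))
    out = (s - s0) / (1.0 - s0)                   # 0 at t = 0, -> 1 as |t| grows
    return int(math.copysign(round(LIMIT * out), t))

print(sigmoid_map(0))       # 0: minute movements are depressed
print(sigmoid_map(10000))   # 127: the limitation area saturates
print(sigmoid_map(-10000))  # -127: symmetric about the origin
```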
[0070] The meaning of the three areas of FIG. 12 is explained in more detail in the following. The first, depression area is an area where the user's pointing movements are minute. In this area, the variation of the attitude angle is not mapped to the position variation 1:1; instead, the mapping is done after reducing the position variation, which removes unintended movements such as the trembling of a user's hand. However, in applications that need minute movements to be rendered, such movements can be expressed by raising the depression area.
[0071] The second, scaling area proportionally maps actual user movements to position information on the display device, and the mapping can be done with ±128 integer values according to the HID mouse standard.
[0072] The third, limitation area limits the position variations (Δx, Δy) when the user's movements are relatively large. In FIG. 12, the mapping function is symmetric about the origin, so symmetric outputs can be drawn in the positive and negative directions of the user's movements.
[0073] Further, such a mapping function can be scaled in various
ways in line with the user's movement pattern, which is explained
in the following.
[0074] The user grabs the device 100 most comfortably when the pitch angle is 5°-10°, as shown in FIG. 13. However, if the pitch angle becomes 30°-50°, controlling the pointer 10 in the Y-axis positive direction becomes relatively more difficult than controlling it in the Y-axis negative direction. This difficulty in rendering position information, caused by the limitation of the wrist movement, is called wrist jamming. The same wrist jamming can exist for each direction of the yaw (ψ).
[0075] FIG. 14 shows an example of such wrist jamming. The device 100 is raised up by more than about 40°, but the pointer 10 is positioned near the lower side of the display screen. In this case, it becomes difficult for the user to move the pointer toward the upper part of the display device due to the limitation of the wrist movement.
[0076] Hence, in the device 100 according to an exemplary embodiment of the present invention, as shown in FIG. 15, if the pitch angle exceeds a certain angle, the scale is increased for movements that raise the device in the positive pitch direction and decreased for movements that lower it in the negative pitch direction, by which the user can continue pointing in the most comfortable posture, at a pitch angle of 5° to 10°.
[0077] The variation mapping unit 170 of the device 100 can perform such operations simply by adjusting the mapping scale applied to the entire mapping function. The adjustment of the mapping scale can be applied in various manners according to each set of pitch information, and the same can be applied even when the pitch angle is negative.
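The wrist-jamming compensation of FIG. 15 can be sketched as follows; the threshold and the scale factors are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of pitch-dependent scale adjustment: once the pitch
# passes an assumed threshold, movements that raise the device further are
# amplified and movements that lower it are attenuated, nudging the user
# back toward the comfortable 5-10 degree posture.

COMFORT_MAX_PITCH = 30.0  # degrees; assumed threshold
BOOST, DAMP = 1.5, 0.5    # assumed scale factors

def scaled_pitch_delta(pitch_deg, d_theta):
    if pitch_deg > COMFORT_MAX_PITCH:
        return d_theta * (BOOST if d_theta > 0 else DAMP)
    return d_theta

print(scaled_pitch_delta(40.0, 2.0))   # 3.0: raising further is amplified
print(scaled_pitch_delta(40.0, -2.0))  # -1.0: lowering is attenuated
print(scaled_pitch_delta(10.0, 2.0))   # 2.0: comfortable range, unchanged
```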
[0078] The technique used for the pitch (θ) angle can also be applied to the yaw (ψ) angle. When the user operates the device 100, the movement usually stays within ±45° in the yaw rotation direction, as shown in FIG. 16. However, if the yaw angle of the device 100 exceeds that range, the user has difficulty pointing due to the limitation of the wrist movement.
[0079] Hence, in the device 100 according to an exemplary embodiment of the present invention, when the yaw angle exceeds a certain angle in the positive direction, the device increases the scale for movements that rotate in the positive yaw direction and decreases the scale for movements in the negative yaw direction. The same can be applied when the yaw angle of the device 100 exceeds a certain angle in the negative direction. Hence, the user's pointing can be kept between ±45° of yaw, where the user feels most comfortable. The variation mapping unit 170 of the device 100 can perform such an operation simply by adjusting the mapping scale applied to the entire mapping function.
[0080] Further, in the case where the device 100 is raised up by 90° in the pitch direction, the position variations (Δx, Δy) on the display screen can be limited. For example, the variation mapping unit 170 can prevent mapped position variations (Δx, Δy) from being generated by setting Δψ and Δθ to "0" when θ is close to 90°. This is because a singular point arises in the third formula (the formula for ψ) of Equation 6 when the pitch angle becomes 90°, which can cause pointer movements unintended by the user.
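A minimal sketch of this singularity guard is shown below; the guard band around 90° is an assumption.

```python
# Sketch of the singularity guard: when the pitch approaches 90 degrees,
# the Euler yaw equation has a singular point, so the mapped variations
# are forced to zero to suppress unintended pointer movements.

GUARD_DEG = 5.0  # assumed guard band around the singularity

def guarded_deltas(theta_deg, d_psi, d_theta):
    if abs(theta_deg) > 90.0 - GUARD_DEG:
        return 0.0, 0.0  # near-vertical: emit no position variation
    return d_psi, d_theta

print(guarded_deltas(88.0, 1.2, -0.4))  # (0.0, 0.0)
print(guarded_deltas(30.0, 1.2, -0.4))  # (1.2, -0.4)
```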
[0081] Such a problem can be solved if the calculation is done using a quaternion representation instead of Euler angles, but the amount of calculation increases, so each method has its advantages and disadvantages. However, the user rarely operates the pointer while holding the device vertically, and even if that happens, it can be resolved by limiting the position variation as described above.
[0082] Referring to FIG. 2, the output unit 180 wirelessly transmits the attitude angles (φ, θ, ψ) provided by the attitude angle measuring unit 160 and the position variation (Δx, Δy) provided by the variation mapping unit 170 to the display device, depending on the type of the application. If the display device is implemented integrally with the device 100, the data can instead be transmitted to the main processor. The wireless transmission can be done through Bluetooth communication, infrared communication, the IEEE 802.11 wireless LAN standard, the IEEE 802.15.3 wireless PAN standard, etc.
[0083] Each block of FIG. 2 can be implemented by a task, a class, a sub-routine, a process, an object, an execution thread, software such as a program, hardware such as an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit), or a combination of the software and the hardware, performed in a predetermined area of a memory. Also, each block may represent a portion of code, a segment or a module including one or more executable instructions for executing specified logical functions. Also, in some alternative examples, the above-mentioned functions can occur out of order. For example, two consecutive blocks may in practice be performed at the same time, or may even be performed in reverse order depending on their functions.
[0084] A user instruction input device according to an exemplary
embodiment of the present invention has practical advantages as
follows.
[0085] 1. Simplified System
[0086] The user instruction input device 100 only uses the conversion of rotation information between two coordinate systems (frames) in the process of converting the body-frame information into the navigation frame. That is, the origin of the body frame and the origin of the navigation frame are kept coincident, as in FIG. 18. Accordingly, when the coordinate system is converted, the translation between the origins of the two coordinate systems (O2−O1 of FIG. 19) is omitted, so the amount of calculation can be reduced.
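The rotation-only frame conversion described above can be sketched as follows; a yaw-only rotation is shown for brevity, and the function names are illustrative.

```python
import math

# Because the body-frame and navigation-frame origins are kept coincident,
# converting a vector needs only a rotation matrix, with no translation term.

def yaw_rotation(psi_rad):
    c, s = math.cos(psi_rad), math.sin(psi_rad)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def body_to_nav(vec, psi_rad):
    R = yaw_rotation(psi_rad)
    # v_nav = R * v_body; no origin offset (O2 - O1) is added.
    return [sum(R[i][j] * vec[j] for j in range(3)) for i in range(3)]

v = body_to_nav([1.0, 0.0, 0.0], math.pi / 2)
print([round(x, 6) for x in v])  # [0.0, 1.0, 0.0]
```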
[0087] Also, as in Equation 7, by using the concepts of the weighted average, the movement estimation function and the mapping function, the device avoids the complicated operations on sampled data that model-based filtering requires, such as linear filtering, Kalman filtering, Kalman smoothing, extended Kalman filtering, state-space estimation and expectation-maximization, as well as the accompanying initialization time.
[0088] Further, the movement estimation function and the mapping function use a simplified 1:1 correspondence, unlike matrix operations that occupy many resources. Hence, as such simplified functions are used, the operation time and resource usage are significantly reduced.
[0089] 2. Attitude Angle in Navigation Frame
[0090] The user instruction input device 100 measures the rotation and acceleration of the body frame to calculate the attitude information of the navigation frame. The position information (e.g., the movement of a pointer) is controlled through the obtained attitude information of the navigation frame, so the position information can be produced regardless of the slant of the body frame. As stated above, using the roll, pitch and yaw information (attitude angle information) in the navigation frame, various applications are possible, such as rendering movements according to the user's pattern, rendering intuitive movements and controlling the state of the display device. Also, since there is no limitation on the rotation direction of the body frame in the product design, various designs for the outer appearance of the user instruction input device 100 are possible.
[0091] 3. Movement Detection
[0092] Existing technology determines only a simple stoppage, but the user instruction input device 100 can detect various movements such as stoppage, minute movements, slow movements and fast movements, and these various forms of movement detection provide the basis for more refined movements. Also, the movement estimation function used in detecting movements maps the input to the output 1:1, thereby not occupying many resources.
[0093] 4. Mapping
[0094] The mapping function, consisting of the depression area, the scaling area and the limitation area, provides more detailed and intuitive movements. Movements such as the trembling of a hand are removed through the depression area, minute movements can be rendered through the scaling area, and excessive movements can be limited through the limitation area. In such a mapping function, it is also possible to take only the values of desired areas.
[0095] Also, the mapping function can convert a floating-point value into a normalized digital value, and through this digitalization, advantages of digital signals such as the reduction of noise and of the amount of transmitted data are provided. Also, the mapping function used here does not require separate initialization time or data sampling as in model-based filtering, and is a simplified function where the input corresponds to the output 1:1.
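The digitalization mentioned above can be sketched as a simple clamp-and-round to a signed 8-bit HID mouse delta; the exact range is an assumption based on the ±128 values cited earlier.

```python
# Hedged sketch: the floating-point output of the mapping is rounded and
# clamped to a signed 8-bit HID mouse delta, reducing both noise and the
# amount of transmitted data.

def to_hid_delta(x):
    return max(-127, min(127, round(x)))

print(to_hid_delta(3.7))    # 4
print(to_hid_delta(500.2))  # 127
print(to_hid_delta(-0.2))   # 0
```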
[0096] 5. Stable and Accurate Movement Implementation Using
Weighted Averages
[0097] In the user instruction input device 100, the weighted average is calculated based on the information produced by the output of the angular rate sensor and the output of the accelerometer, so more stable and accurate movements can be realized. If only the angular rate is used, errors accumulated by bias change are generated in the process of obtaining angles by integrating angular rates, causing the computed angle to diverge, and technologies to resolve such a problem are known. Some examples are a method of estimating the bias through a Kalman filter, a method of using a digital filter through frequency analysis, and a method of estimating the bias by analyzing given times and critical values. However, all such existing technologies excessively consume system resources and require a large amount of computation.
[0098] In contrast, in the case of the user instruction input device 100, the concept of the weighted average is applied using the angular rate sensor and the accelerometer, so the divergence of the angle can simply be limited by the accelerometer, by which the model or the filter used in estimating the bias of the angular rate can be simplified, and the accuracy of the attitude angle measurement can be improved. Through the calculated attitude angle, more stable and accurate movements can be realized.
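A minimal complementary-filter-style sketch of this weighted-average idea follows; the actual Equation 7 and its weights are not reproduced here, and the weight, rate and time step are illustrative assumptions.

```python
# Sketch of the weighted-average fusion: the gyro-integrated angle tracks
# fast motion, while the accelerometer-derived angle bounds the drift.

def fuse_pitch(prev_angle, gyro_rate, accel_angle, dt, w=0.98):
    gyro_angle = prev_angle + gyro_rate * dt  # integrate the angular rate
    return w * gyro_angle + (1.0 - w) * accel_angle

# A biased gyro alone would drift without limit; the accelerometer term,
# even with a small weight, keeps the estimate bounded.
angle = 0.0
for _ in range(1000):
    angle = fuse_pitch(angle, gyro_rate=0.5, accel_angle=0.0, dt=0.01)
print(round(angle, 3))  # 0.245: bounded, instead of drifting to 5.0
```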
[0099] Also, in the user instruction input device 100, once a movement is detected by the movement estimation function, various user-intended movements can be expressed by changing each weight (α1, α2, α3 of Equation 7) according to the detected movement. That is, movements appropriate to the situation can be realized.
[0100] 6. Reducing System Initialization Time
[0101] Owing to the simplified system of the user instruction input device 100, the initialization time of the device can be reduced. A representative example is the concept of the weighted average. If an angle is obtained only from an angular rate sensor, a separate filtering technique such as Kalman filtering is required to minimize accumulated errors, and in the case of Kalman filtering, an initialization step for the initial bias estimation or bias setting is essential.
[0102] In contrast, the user instruction input device 100 can calculate an accurate angle without such an initialization step by combining the information from the angular rate sensor with the output of the accelerometer using the weighted average. That is, the accuracy of the attitude angle measurement can be improved while the system is simplified.
[0103] Also, unlike model-based filtering, the mapping function centers on a 1:1 correspondence, so it also contributes to reducing the initialization time to some extent.
[0104] 7. Reducing Power Consumption
[0105] Owing to the simplified system of the user instruction input device 100, the power consumption can be reduced through the reduction of the initialization time and the amount of computation. As the initialization time of the system is reduced, simplified operation modes such as an operation mode, a power-down mode and a power-off mode become possible.
[0106] Existing technologies needed transitional steps such as a stand-by mode that stabilizes the system before entering the operation mode. However, the user instruction input device 100 does not need such a transitional step, so the power supplied to a given element can be selectively turned off. Hence, as switching the device power on and off becomes easy, the power consumption can be further reduced.
[0107] It should be understood by those of ordinary skill in the
art that various replacements, modifications and changes may be
made in the form and details without departing from the spirit and
scope of the present invention as defined by the following claims.
Therefore, it is to be appreciated that the above described
embodiments are for purposes of illustration only and are not to be
construed as limitations of the invention.
* * * * *