U.S. patent application number 16/526297, for a posture estimation method, posture estimation device, and vehicle, was published by the patent office on 2020-02-06 as publication number 20200039522. The application is currently assigned to SEIKO EPSON CORPORATION, which is also the listed applicant. The invention is credited to Yasushi NAKAOKA, Fumikazu OTANI, and Kentaro YODA.

Application Number: 16/526297 (publication 20200039522)
Family ID: 69228213
Publication Date: 2020-02-06
United States Patent Application: 20200039522
Kind Code: A1
NAKAOKA, Yasushi; et al.
February 6, 2020

POSTURE ESTIMATION METHOD, POSTURE ESTIMATION DEVICE, AND VEHICLE
Abstract
A posture estimation method includes calculating a posture
change amount of an object based on an output of an angular
velocity sensor, predicting posture information of the object by
using the posture change amount, adjusting error information in a
manner of determining whether or not the output of the angular
velocity sensor is within an effective range and, when it is
determined that the output of the angular velocity sensor is not
within the effective range, increasing a posture error component in
error information and reducing a correlation component between the
posture error component and an error component other than the
posture error component in the error information, and correcting
the predicted posture information of the object based on the error
information.
Inventors: NAKAOKA, Yasushi (Shiojiri-shi, JP); OTANI, Fumikazu (Matsumoto-shi, JP); YODA, Kentaro (Chino-shi, JP)
Applicant: SEIKO EPSON CORPORATION, Tokyo, JP
Assignee: SEIKO EPSON CORPORATION, Tokyo, JP
Family ID: 69228213
Appl. No.: 16/526297
Filed: July 30, 2019
Current U.S. Class: 1/1
Current CPC Class: G01P 3/44 (20130101); G01C 9/02 (20130101); B60W 30/02 (20130101); B60Y 2200/50 (20130101); B60W 40/10 (20130101); G05D 1/027 (20130101); G01C 21/16 (20130101)
International Class: B60W 40/10 (20060101); G01C 9/02 (20060101); G01P 3/44 (20060101); G05D 1/02 (20060101); B60W 30/02 (20060101)

Foreign Application Data:
Jul 31, 2018 (JP) 2018-143691
Claims
1. A posture estimation method comprising: calculating a posture
change amount of an object based on an output of an angular
velocity sensor; predicting posture information of the object by
using the posture change amount; adjusting error information in a
manner of determining whether or not the output of the angular
velocity sensor is within an effective range, and when it is
determined that the output of the angular velocity sensor is not
within the effective range, increasing a posture error component in
error information and reducing a correlation component between the
posture error component and an error component other than the
posture error component in the error information; and correcting
the predicted posture information of the object based on the error
information.
2. The posture estimation method according to claim 1, wherein the
adjusting of the error information includes determining whether or
not a current time is in a first period after it is determined that
the output of the angular velocity sensor is not within the
effective range, increasing the posture error component in the
first period, and reducing the correlation component between the
posture error component and the error component other than the
posture error component, in the first period.
3. The posture estimation method according to claim 1, wherein the
error component other than the posture error component includes a
bias error component of an angular velocity.
4. The posture estimation method according to claim 2, wherein the
error component other than the posture error component includes a
bias error component of an angular velocity.
5. The posture estimation method according to claim 1, wherein the
correlation component between the posture error component and the
error component other than the posture error component is zero.
6. The posture estimation method according to claim 1, further
comprising: calculating a velocity change amount of the object
based on an output of an acceleration sensor and the output of the
angular velocity sensor, wherein in the adjusting of the error
information, whether or not the output of the acceleration sensor
is within an effective range is determined, and, when it is
determined that the output of the angular velocity sensor or the
output of the acceleration sensor is not within the corresponding
effective range, a motion velocity error component in the error
information is increased, and a correlation component between the
motion velocity error component and an error component other than
the motion velocity error component in the error information is
reduced, and in the predicting of the posture information, velocity
information of the object is predicted by using the velocity change
amount.
7. The posture estimation method according to claim 6, wherein the
adjusting of the error information includes determining whether or
not a current time is in a second period after it is determined
that the output of the acceleration sensor is not within the
effective range, increasing the motion velocity error component in
the second period, and reducing the correlation component between
the motion velocity error component and the error component other
than the motion velocity error component, in the second period.
8. The posture estimation method according to claim 6, wherein the
error component other than the motion velocity error component
includes a bias error component of an acceleration.
9. The posture estimation method according to claim 7, wherein the
error component other than the motion velocity error component
includes a bias error component of an acceleration.
10. The posture estimation method according to claim 6, wherein the
correlation component between the motion velocity error component
and the error component other than the motion velocity error
component is zero.
11. A posture estimation device comprising: a posture-change-amount
calculation unit that calculates a posture change amount of an
object based on an output of an angular velocity sensor; a posture
information prediction unit that predicts posture information of
the object by using the posture change amount; an error information
adjustment unit that determines whether or not the output of the
angular velocity sensor is within an effective range, and when it
is determined that the output of the angular velocity sensor is not
within the effective range, increases a posture error component in
error information, and reduces a correlation component between the
posture error component and an error component other than the
posture error component in the error information; and a posture
information correction unit that corrects the predicted posture
information of the object based on the error information.
12. A vehicle comprising: the posture estimation device according
to claim 11; and a control device that controls a posture of the
vehicle based on posture information of the vehicle, which is
estimated by the posture estimation device.
Description
[0001] The present application is based on, and claims priority
from JP Application Serial Number 2018-143691, filed Jul. 31, 2018,
the disclosure of which is hereby incorporated by reference herein
in its entirety.
BACKGROUND
1. Technical Field
[0002] The present disclosure relates to a posture estimation
method, a posture estimation device, and a vehicle.
2. Related Art
[0003] A device or system is known that mounts an inertial measurement unit (IMU) on an object and calculates the position or posture of the object from the output signal of the inertial measurement unit (IMU). The output signal of the inertial measurement unit (IMU) contains a bias error, so an error also arises in the posture calculation. Thus, a technique has been proposed that corrects such an error with a Kalman filter and estimates the accurate posture of the object. For example, JP-A-2015-179002 discloses a posture estimation method that calculates a posture change amount of an object by using an output of an angular velocity sensor and estimates the posture of the object by using the posture change amount.
[0004] JP-A-2015-179002 is an example of the related art.
[0005] An effective measurement range is defined for an angular velocity sensor. When an angular velocity outside this range is input, even temporarily, the output of the angular velocity sensor may include an unexpectedly large error. The posture estimation method disclosed in JP-A-2015-179002 does not account for the case where the output of the angular velocity sensor falls outside the effective range even temporarily, so there is a concern that the accuracy of estimating the posture of the object decreases.
SUMMARY
[0006] An aspect of a posture estimation method according to the
present disclosure includes calculating a posture change amount of
an object based on an output of an angular velocity sensor,
predicting posture information of the object by using the posture
change amount, adjusting error information in a manner of
determining whether or not the output of the angular velocity
sensor is within an effective range, and when it is determined that
the output of the angular velocity sensor is not within the
effective range, increasing a posture error component in error
information and reducing a correlation component between the
posture error component and an error component other than the
posture error component in the error information, and correcting
the predicted posture information of the object based on the error
information.
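As an illustrative sketch of the adjustment described above (this is not code from the patent; the function name `adjust_error_info`, the index set `posture_idx`, the constant `GYRO_RANGE`, and the inflation factor are all assumptions of this example), increasing the posture error component and zeroing its cross-correlations in an error covariance matrix P might look like:

```python
import numpy as np

GYRO_RANGE = 450.0  # assumed effective range of the gyro, deg/s

def adjust_error_info(P, gyro, posture_idx, inflate=1e2):
    """Sketch of the adjustment step: if any gyro axis leaves the
    effective range, inflate the posture-error variances and zero the
    cross-correlations between the posture error components and all
    other error components."""
    P = P.copy()
    if np.any(np.abs(gyro) > GYRO_RANGE):
        for i in posture_idx:
            for j in range(P.shape[0]):
                if j not in posture_idx:
                    # reduce (here: zero) the correlation component
                    P[i, j] = 0.0
                    P[j, i] = 0.0
            # increase the posture error component itself
            P[i, i] *= inflate
    return P
```

Correlations among the posture components themselves are intentionally kept; only the correlations with the other error components (e.g., bias errors) are reduced, as the method specifies.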
[0007] In the aspect of the posture estimation method, the
adjusting of the error information may include determining whether
or not the current time is in a first period after it is determined
that the output of the angular velocity sensor is not within the
effective range, increasing the posture error component in the
first period, and reducing the correlation component between the
posture error component and the error component other than the
posture error component, in the first period.
[0008] In the aspect of the posture estimation method, the error
component other than the posture error component may include a bias
error component of an angular velocity.
[0009] In the aspect of the posture estimation method, the
correlation component between the posture error component and the
error component other than the posture error component may be
zero.
[0010] The aspect of the posture estimation method may further
include calculating a velocity change amount of the object based on
an output of an acceleration sensor and the output of the angular
velocity sensor. In the adjusting of the error information, whether
or not the output of the acceleration sensor is within an effective
range may be determined, and, when it is determined that the output
of the angular velocity sensor or the output of the acceleration
sensor is not within the corresponding effective range, a motion
velocity error component in the error information may be increased,
and a correlation component between the motion velocity error
component and an error component other than the motion velocity
error component in the error information may be reduced. In the
predicting of the posture information, velocity information of the
object may be predicted by using the velocity change amount.
[0011] In the aspect of the posture estimation method, the
adjusting of the error information may include determining whether
or not the current time is in a second period after it is
determined that the output of the acceleration sensor is not within
the effective range, increasing the motion velocity error component
in the second period, and reducing the correlation component
between the motion velocity error component and the error component
other than the motion velocity error component, in the second
period.
[0012] In the aspect of the posture estimation method, the error
component other than the motion velocity error component may
include a bias error component of an acceleration.
[0013] In the aspect of the posture estimation method, the
correlation component between the motion velocity error component
and the error component other than the motion velocity error
component may be zero.
[0014] An aspect of a posture estimation device according to the
present disclosure includes a posture-change-amount calculation
unit that calculates a posture change amount of an object based on
an output of an angular velocity sensor, a posture information
prediction unit that predicts posture information of the object by
using the posture change amount, an error information adjustment
unit that determines whether or not the output of the angular
velocity sensor is within an effective range, and when it is
determined that the output of the angular velocity sensor is not
within the effective range, increases a posture error component in
error information, and reduces a correlation component between the
posture error component and an error component other than the
posture error component in the error information, and a posture
information correction unit that corrects the predicted posture
information of the object based on the error information.
[0015] An aspect of a vehicle according to the present disclosure
includes the aspect of the posture estimation device and a control
device that controls a posture of the vehicle based on posture
information of the vehicle, which is estimated by the posture
estimation device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] FIG. 1 is a flowchart illustrating an example of procedures
of a posture estimation method.
[0017] FIG. 2 is a flowchart illustrating an example of procedures
of a process S3 in FIG. 1.
[0018] FIG. 3 is a flowchart illustrating an example of procedures
of a process S5 in FIG. 1.
[0019] FIG. 4 is a flowchart illustrating an example of procedures
of a process S6 in FIG. 1.
[0020] FIG. 5 is a flowchart illustrating an example of procedures
of a process S7 in FIG. 1.
[0021] FIG. 6 is a diagram illustrating an example of a
configuration of a posture estimation device according to an
embodiment.
[0022] FIG. 7 is a diagram illustrating a sensor coordinate system
and a local coordinate system.
[0023] FIG. 8 is a diagram illustrating an example of a
configuration of a processing unit in the embodiment.
[0024] FIG. 9 is a flowchart illustrating another example of the
procedures of the process S5 in FIG. 1.
[0025] FIG. 10 is a block diagram illustrating an example of a
configuration of an electronic device in the embodiment.
[0026] FIG. 11 is a plan view illustrating a wrist watch-type
activity meter as a portable type electronic device.
[0027] FIG. 12 is a block diagram illustrating an example of a
configuration of the wrist watch-type activity meter as the
portable type electronic device.
[0028] FIG. 13 illustrates an example of a vehicle in the
embodiment.
[0029] FIG. 14 is a block diagram illustrating an example of a
configuration of the vehicle.
[0030] FIG. 15 illustrates another example of the vehicle in the
embodiment.
[0031] FIG. 16 is a block diagram illustrating an example of a
configuration of the vehicle.
DESCRIPTION OF EXEMPLARY EMBODIMENTS
[0032] Hereinafter, a preferred embodiment according to the present disclosure will be described in detail with reference to the drawings. The embodiments described below do not unduly limit the contents of the present disclosure described in the appended claims. Not all of the components described below are essential components of the present disclosure.
1. Posture Estimation Method
1-1. Posture Estimation Theory
1-1-1. IMU Output Model
[0033] The output of the inertial measurement unit (IMU) at each sampling time point t_k consists of angular velocity data d_{ω,k}, the output of a three-axis angular velocity sensor, and acceleration data d_{α,k}, the output of a three-axis acceleration sensor. As shown in Expression (1), the angular velocity data d_{ω,k} is represented as the sum of the average value ω̄_k of the angular velocity vector ω over the sampling interval (Δt = t_k − t_{k−1}) and the residual bias b_ω.
$$d_{\omega,k} = \begin{bmatrix} d_{\omega x,k} \\ d_{\omega y,k} \\ d_{\omega z,k} \end{bmatrix} = \bar{\omega}_k + b_\omega, \qquad \bar{\omega}_k = \frac{1}{\Delta t}\int_{t_{k-1}}^{t_k} \omega \, dt \tag{1}$$
[0034] Similarly, as shown in Expression (2), the acceleration data d_{α,k} is represented as the sum of the average value ᾱ_k of the acceleration vector α and the residual bias b_α.

$$d_{\alpha,k} = \begin{bmatrix} d_{\alpha x,k} \\ d_{\alpha y,k} \\ d_{\alpha z,k} \end{bmatrix} = \bar{\alpha}_k + b_\alpha, \qquad \bar{\alpha}_k = \frac{1}{\Delta t}\int_{t_{k-1}}^{t_k} \alpha \, dt \tag{2}$$
1-1-2. Calculation of Three-Dimensional Posture by Angular Velocity
Integration
[0035] When a three-dimensional posture is represented by quaternions, the relation between the posture quaternion q and the angular velocity vector ω [rad/s] is given by the differential equation in Expression (3).

$$\frac{d}{dt}q = \frac{1}{2}\, q \otimes \omega \tag{3}$$
[0036] Here, the symbol ⊗ (a circle superimposed with a cross) denotes quaternion multiplication. For example, the elements of the quaternion product of q and p are calculated as shown in Expression (4).

$$q \otimes p = \begin{bmatrix} +q_0 & -q_1 & -q_2 & -q_3 \\ +q_1 & +q_0 & -q_3 & +q_2 \\ +q_2 & +q_3 & +q_0 & -q_1 \\ +q_3 & -q_2 & +q_1 & +q_0 \end{bmatrix}\begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{bmatrix} = \begin{bmatrix} q_0p_0 - q_1p_1 - q_2p_2 - q_3p_3 \\ q_1p_0 + q_0p_1 - q_3p_2 + q_2p_3 \\ q_2p_0 + q_3p_1 + q_0p_2 - q_1p_3 \\ q_3p_0 - q_2p_1 + q_1p_2 + q_0p_3 \end{bmatrix} \tag{4}$$
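Expression (4) can be transcribed directly into code. A minimal sketch (the function name and the NumPy representation of the quaternion as `[q0, q1, q2, q3]` with the real part first are choices of this example, not the patent's):

```python
import numpy as np

def quat_mul(q, p):
    """Quaternion product q ⊗ p per Expression (4); the quaternion is
    stored as [q0, q1, q2, q3] with q0 the real (scalar) part."""
    q0, q1, q2, q3 = q
    p0, p1, p2, p3 = p
    return np.array([
        q0*p0 - q1*p1 - q2*p2 - q3*p3,
        q1*p0 + q0*p1 - q3*p2 + q2*p3,
        q2*p0 + q3*p1 + q0*p2 - q1*p3,
        q3*p0 - q2*p1 + q1*p2 + q0*p3,
    ])
```

For example, composing two 90-degree rotations about the z axis yields the quaternion of a 180-degree rotation, [0, 0, 0, 1].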
[0037] As shown in Expression (5), the angular velocity vector ω is treated as equivalent to a quaternion whose real (scalar) component is zero and whose imaginary (vector) component coincides with the components of ω.

$$\omega = \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix} \equiv \begin{bmatrix} 0 \\ \omega_x \\ \omega_y \\ \omega_z \end{bmatrix} = \begin{bmatrix} 0 \\ \boldsymbol{\omega} \end{bmatrix} \tag{5}$$
[0038] If the differential equation (3) could be solved, the three-dimensional posture could be calculated directly; unfortunately, no general solution has been found. Moreover, the value of the angular velocity vector ω is only available as discrete average values. Thus, the approximation of Expression (6) is calculated for each short sampling interval.
$$q_k \approx q_{k-1} \otimes \Delta q_k \tag{6}$$
$$\Delta q_k = 1 + \frac{\bar{\omega}_k}{2}\Delta t + \left(\frac{\bar{\omega}_{k-1}\times\bar{\omega}_k}{24} - \frac{\bar{\omega}_k^2}{8}\right)\Delta t^2 - \frac{\bar{\omega}_k^2\,\bar{\omega}_k}{48}\Delta t^3 = \begin{bmatrix} 1 - \tfrac{1}{8}\lvert\bar{\omega}_k\Delta t\rvert^2 \\ \tfrac{1}{2}\bar{\omega}_k\Delta t + \tfrac{1}{24}\left(\bar{\omega}_{k-1}\Delta t \times \bar{\omega}_k\Delta t\right) - \tfrac{1}{48}\lvert\bar{\omega}_k\Delta t\rvert^2\,\bar{\omega}_k\Delta t \end{bmatrix}$$
[0039] Expression (6) is derived from a Taylor expansion around each t = t_{k−1} up to the third-order term in Δt, taking into account the posture quaternion q and the integration relation of the angular velocity vector ω for each axis. The term including ω̄_{k−1} corresponds to the coning correction term. The symbol × denotes the cross product (vector product) of three-dimensional vectors. For example, the elements of v × w are calculated as in Expression (7).

$$v \times w = \begin{bmatrix} 0 & -v_z & +v_y \\ +v_z & 0 & -v_x \\ -v_y & +v_x & 0 \end{bmatrix}\begin{bmatrix} w_x \\ w_y \\ w_z \end{bmatrix} = \begin{bmatrix} v_yw_z - v_zw_y \\ v_zw_x - v_xw_z \\ v_xw_y - v_yw_x \end{bmatrix} \tag{7}$$
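Expressions (6) and (7) together give a concrete integration step. The sketch below (illustrative; names are assumptions of this example) implements the third-order update quaternion Δq_k, including the coning correction term that uses the previous sample ω̄_{k−1}; it assumes the angular velocities are supplied as NumPy arrays in rad/s:

```python
import numpy as np

def cross(v, w):
    # Expression (7): cross product of three-dimensional vectors
    return np.array([v[1]*w[2] - v[2]*w[1],
                     v[2]*w[0] - v[0]*w[2],
                     v[0]*w[1] - v[1]*w[0]])

def delta_q(w_prev, w_curr, dt):
    """Update quaternion Δq_k of Expression (6): Taylor approximation
    to third order in Δt, with the coning term built from ω̄_{k-1}."""
    th_prev = w_prev * dt          # ω̄_{k-1} Δt
    th = w_curr * dt               # ω̄_k Δt
    th2 = th @ th                  # |ω̄_k Δt|²
    real = 1.0 - th2 / 8.0
    imag = 0.5 * th + cross(th_prev, th) / 24.0 - th2 * th / 48.0
    return np.concatenate(([real], imag))
```

For a constant rotation rate the result matches the exact rotation quaternion [cos(|θ|/2), sin(|θ|/2)·axis] to well beyond fourth order, and the coning term vanishes because ω̄_{k−1} × ω̄_k = 0.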
1-1-3. Tilt Error Observation by Gravitational Acceleration
[0040] The acceleration sensor detects the acceleration generated by its own movement. On the earth, however, the acceleration is normally detected with a gravitational acceleration of about 1 G (= 9.80665 m/s²) added. Since the gravitational acceleration is normally a vector in the vertical direction, the error of the tilt (roll and pitch) component of the posture can be determined by comparison with the output of the three-axis acceleration sensor. To do so, the acceleration vector α in the sensor coordinate system (xyz coordinate system), which is observed by the three-axis acceleration sensor, must first be transformed into the acceleration vector α′ in the coordinate system of the local space (XYZ coordinate system), defined by horizontal orthogonal axes and a vertical axis. This coordinate (rotation) transformation can be calculated with the posture quaternion q and its conjugate quaternion q*, as shown in Expression (8).

$$\alpha' = q \otimes \alpha \otimes q^{*} \tag{8}$$
[0041] Expression (8) can also be expressed with a three-dimensional coordinate transformation matrix C, as in Expression (9).

$$\alpha' = C\alpha, \qquad C = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2-q_0q_3) & 2(q_1q_3+q_0q_2) \\ 2(q_1q_2+q_0q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2q_3-q_0q_1) \\ 2(q_1q_3-q_0q_2) & 2(q_2q_3+q_0q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix} \tag{9}$$
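Expression (9) transcribed as an illustrative helper (the function name and array layout are assumptions of this example):

```python
import numpy as np

def quat_to_matrix(q):
    """Coordinate transformation matrix C of Expression (9), built from
    the posture quaternion [q0, q1, q2, q3]."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0+q1*q1-q2*q2-q3*q3, 2*(q1*q2-q0*q3),         2*(q1*q3+q0*q2)],
        [2*(q1*q2+q0*q3),         q0*q0-q1*q1+q2*q2-q3*q3, 2*(q2*q3-q0*q1)],
        [2*(q1*q3-q0*q2),         2*(q2*q3+q0*q1),         q0*q0-q1*q1-q2*q2+q3*q3],
    ])
```

As a sanity check, the identity quaternion gives the identity matrix, and a 90-degree rotation about z maps the x axis onto the y axis.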
[0042] A tilt error is obtained by comparing the acceleration vector α′ with the gravitational acceleration vector g in the local-space coordinate system (XYZ coordinate system). The gravitational acceleration vector g is represented by Expression (10), where Δg is a gravitational-acceleration correction value indicating the difference [G] from the standard value of the gravitational acceleration.

$$g = \begin{bmatrix} 0 \\ 0 \\ -(1+\Delta g) \end{bmatrix} \tag{10}$$
1-1-4. Observation of Zero Motion Velocity
[0043] In user interface applications in particular, the motion velocity of the IMU can be considered substantially zero over the long term. The relation between the motion velocity vector v in the local-space coordinate system and the acceleration vector α and angular velocity vector ω in the sensor coordinate system is expressed with the coordinate transformation matrix C by the differential equation in Expression (11).

$$\frac{d}{dt}v = C\alpha - g, \qquad \frac{d}{dt}C = C\begin{bmatrix} 0 & -\omega_z & +\omega_y \\ +\omega_z & 0 & -\omega_x \\ -\omega_y & +\omega_x & 0 \end{bmatrix} \tag{11}$$
[0044] Here, the values of the acceleration vector α and the angular velocity vector ω are only available as discrete average values. Thus, the motion velocity vector is calculated by the approximation of Expression (12) for each short sampling interval.

$$v_k \approx v_{k-1} + (C_k\lambda_k - g)\Delta t \tag{12}$$
$$\begin{aligned}
\lambda_k &= \bar{\alpha}_k + \left(\frac{\bar{\omega}_k\times\bar{\alpha}_k}{2} + \frac{\bar{\omega}_{k-1}\times\bar{\alpha}_k - \bar{\omega}_k\times\bar{\alpha}_{k-1}}{12}\right)\Delta t + \frac{(\bar{\omega}_k\cdot\bar{\alpha}_k)\,\bar{\omega}_k - \bar{\omega}_k^2\,\bar{\alpha}_k}{6}\Delta t^2 \\
&\quad + \frac{(\bar{\omega}_{k-1}\cdot\bar{\alpha}_k - \bar{\omega}_k\cdot\bar{\alpha}_{k-1})\,\bar{\omega}_k - (\bar{\omega}_{k-1}\cdot\bar{\omega}_k)\,\bar{\alpha}_k + \bar{\omega}_k^2\,\bar{\alpha}_{k-1}}{24}\Delta t^2 - \frac{\bar{\omega}_k^2}{24}\,\bar{\omega}_k\times\bar{\alpha}_k\,\Delta t^3 \\
&= \bar{\alpha}_k + \tfrac{1}{2}(\bar{\omega}_k\Delta t\times\bar{\alpha}_k) + \tfrac{1}{12}(\bar{\omega}_{k-1}\Delta t\times\bar{\alpha}_k) - \tfrac{1}{12}(\bar{\omega}_k\Delta t\times\bar{\alpha}_{k-1}) + \tfrac{1}{6}\big(\bar{\omega}_k\Delta t\times(\bar{\omega}_k\Delta t\times\bar{\alpha}_k)\big) \\
&\quad + \tfrac{1}{24}\big(\bar{\omega}_{k-1}\Delta t\times(\bar{\omega}_k\Delta t\times\bar{\alpha}_k)\big) - \tfrac{1}{24}\big(\bar{\omega}_k\Delta t\times(\bar{\omega}_k\Delta t\times\bar{\alpha}_{k-1})\big) - \tfrac{1}{24}\lvert\bar{\omega}_k\Delta t\rvert^2(\bar{\omega}_k\Delta t\times\bar{\alpha}_k)
\end{aligned}$$
[0045] Expression (12) is derived from a Taylor expansion around each t = t_{k−1} up to the third-order term in Δt, taking into account the motion velocity vector v and the integration relations of the acceleration vector α and angular velocity vector ω for each axis. Although the residual error ε_λ appears in the third-order term, it is ignored in Expression (12) because it is sufficiently small. The symbol · denotes the dot product (scalar product) of three-dimensional vectors. For example, v·w is calculated as in Expression (13).

$$v \cdot w = v_xw_x + v_yw_y + v_zw_z \tag{13}$$
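A sketch of one velocity prediction step, v_k ≈ v_{k−1} + (C_k λ_k − g)Δt. To keep the example short it retains only the terms of Expression (12) up to first order in Δt inside λ_k (the higher-order terms are negligible at typical IMU sampling rates); all names are illustrative assumptions, not the patent's:

```python
import numpy as np

def predict_velocity(v_prev, C_k, a_prev, a_curr, w_prev, w_curr, g, dt):
    """Velocity prediction per Expression (12), truncated after the
    first-order (coning/sculling-like) cross-product terms of λ_k."""
    th_prev, th = w_prev * dt, w_curr * dt   # ω̄Δt for both samples
    lam = (a_curr
           + 0.5 * np.cross(th, a_curr)
           + np.cross(th_prev, a_curr) / 12.0
           - np.cross(th, a_prev) / 12.0)
    return v_prev + (C_k @ lam - g) * dt
```

With the sensor at rest (α = g in the model's sign convention, ω = 0), the predicted velocity stays zero, as the zero-motion-velocity observation expects.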
1-1-5. Posture Quaternion and Error Thereof
[0046] The calculated posture quaternion q is considered to deviate from its true value q̂ by an error ε_q, as in Expression (14).

$$q = \hat{q} + \epsilon_q = \begin{bmatrix} \hat{q}_0+\epsilon_{q0} \\ \hat{q}_1+\epsilon_{q1} \\ \hat{q}_2+\epsilon_{q2} \\ \hat{q}_3+\epsilon_{q3} \end{bmatrix}, \qquad \Sigma_q^2 = E[\epsilon_q\epsilon_q^T] \tag{14}$$
[0047] Here, Σ_q² is the error covariance matrix indicating the magnitude of the error ε_q, E[ ] denotes an expected value, and a superscript T denotes the transpose of a vector or matrix.
[0048] The quaternion and its error each have four components, but a three-dimensional posture (rotational transformation) has only three degrees of freedom. The fourth degree of freedom of the posture quaternion corresponds to an enlargement/reduction transformation, whose ratio must normally be fixed to 1 in posture detection processing. In practice, the enlargement/reduction ratio component drifts due to various calculation errors, so processing that suppresses this change is required.
[0049] For the posture quaternion q, the square of its absolute value corresponds to the enlargement/reduction ratio. The change is therefore suppressed by the normalization processing of Expression (15), which sets the absolute value to 1.

$$q \leftarrow \frac{q}{\lvert q\rvert} = \frac{1}{\sqrt{q_0^2+q_1^2+q_2^2+q_3^2}}\begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix} \tag{15}$$
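The normalization of Expression (15) as a one-line illustrative helper:

```python
import numpy as np

def normalize(q):
    """Expression (15): rescale the posture quaternion so that its
    absolute value (and hence the enlargement/reduction ratio) is 1."""
    q = np.asarray(q, dtype=float)
    return q / np.sqrt(np.sum(np.square(q)))
```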
[0050] For the posture error ε_q, the rank of the error covariance matrix Σ_q² must be held at three. The rank is therefore limited as in Expression (16), in terms of a (three-dimensional) error rotation vector ε_θ in the local-space coordinate system (assuming the posture error angle is sufficiently small).

$$\epsilon_\theta = \begin{bmatrix} \epsilon_{\theta x} \\ \epsilon_{\theta y} \\ \epsilon_{\theta z} \end{bmatrix} \approx 2D\epsilon_q, \qquad \epsilon_q \leftarrow D^TD\,\epsilon_q, \qquad \Sigma_q^2 \leftarrow D^TD\,\Sigma_q^2\,D^TD \tag{16}$$
$$D = \begin{bmatrix} -q_1 & +q_0 & -q_3 & +q_2 \\ -q_2 & +q_3 & +q_0 & -q_1 \\ -q_3 & -q_2 & +q_1 & +q_0 \end{bmatrix}, \qquad D^TD = \begin{bmatrix} q_1^2+q_2^2+q_3^2 & -q_0q_1 & -q_0q_2 & -q_0q_3 \\ -q_0q_1 & q_0^2+q_2^2+q_3^2 & -q_1q_2 & -q_1q_3 \\ -q_0q_2 & -q_1q_2 & q_0^2+q_1^2+q_3^2 & -q_2q_3 \\ -q_0q_3 & -q_1q_3 & -q_2q_3 & q_0^2+q_1^2+q_2^2 \end{bmatrix}$$
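A sketch of the rank limitation of Expression (16): build D from the current quaternion and project the 4×4 posture error covariance onto the three physical degrees of freedom. The quaternion direction q itself lies in the null space of D, which is exactly the scale degree of freedom being removed (function names are assumptions of this example):

```python
import numpy as np

def d_matrix(q):
    """Matrix D of Expression (16), mapping the quaternion error ε_q to
    the small-angle error rotation vector ε_θ ≈ 2 D ε_q."""
    q0, q1, q2, q3 = q
    return np.array([
        [-q1, +q0, -q3, +q2],
        [-q2, +q3, +q0, -q1],
        [-q3, -q2, +q1, +q0],
    ])

def limit_rank(Sigma_q2, q):
    """Rank limitation Σ_q² ← DᵀD Σ_q² DᵀD of Expression (16)."""
    DtD = d_matrix(q).T @ d_matrix(q)
    return DtD @ Sigma_q2 @ DtD
```

After projection, the covariance has no component along q, so its rank is at most three, as required.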
1-1-6. Removal (Ignoring) of Azimuth Error
[0051] When posture detection has no azimuth observation means such as a magnetic sensor, the azimuthal component of the posture error simply increases monotonically and serves no purpose. Moreover, the inflated error estimate unnecessarily increases the feedback gain, causing the azimuth to change or fluctuate unexpectedly. Thus, the azimuth error component ε_θz is removed from Expression (16), yielding Expression (17).

$$\epsilon_{\theta}' = \begin{bmatrix} \epsilon_{\theta x} \\ \epsilon_{\theta y} \end{bmatrix} \approx 2D'\epsilon_q, \qquad \epsilon_q \leftarrow D'^TD'\,\epsilon_q, \qquad \Sigma_q^2 \leftarrow D'^TD'\,\Sigma_q^2\,D'^TD' \tag{17}$$
$$D' = \begin{bmatrix} -q_1 & +q_0 & -q_3 & +q_2 \\ -q_2 & +q_3 & +q_0 & -q_1 \end{bmatrix}, \qquad D'^TD' = \begin{bmatrix} q_1^2+q_2^2 & -q_0q_1-q_2q_3 & q_1q_3-q_0q_2 & 0 \\ -q_0q_1-q_2q_3 & q_0^2+q_3^2 & 0 & q_0q_2-q_1q_3 \\ q_1q_3-q_0q_2 & 0 & q_0^2+q_3^2 & -q_0q_1-q_2q_3 \\ 0 & q_0q_2-q_1q_3 & -q_0q_1-q_2q_3 & q_1^2+q_2^2 \end{bmatrix}$$
1-1-7. Extended Kalman Filter
[0052] An extended Kalman filter that calculates a
three-dimensional posture based on the above model expressions can
be designed.
State Vector and Error Covariance Matrix
[0053] As in Expression (18), the unknown state values to be estimated, namely the posture quaternion q, the motion velocity vector v, the residual bias b_ω of the angular velocity sensor (offset to the angular velocity vector ω), the residual bias b_α of the acceleration sensor (offset to the acceleration vector α), and the gravitational-acceleration correction value Δg, constitute the state vector x (a 14-dimensional vector) of the extended Kalman filter. In addition, the error covariance matrix Σ_x² is defined.

$$x = \begin{bmatrix} q \\ v \\ b_\omega \\ b_\alpha \\ \Delta g \end{bmatrix}, \qquad \Sigma_x^2 = E[\epsilon_x\epsilon_x^T], \qquad \epsilon_x = \begin{bmatrix} \epsilon_q \\ \epsilon_v \\ \epsilon_{b\omega} \\ \epsilon_{b\alpha} \\ \epsilon_{\Delta g} \end{bmatrix}, \qquad x = \hat{x} + \epsilon_x \tag{18}$$
Process Model
[0054] In the process model, the latest value of the state vector is predicted from the sampling interval Δt and the values of the angular velocity data d_ω and the acceleration data d_α, as in Expression (19).
$$x_k = \begin{bmatrix} q_k \\ v_k \\ b_{\omega,k} \\ b_{\alpha,k} \\ \Delta g_k \end{bmatrix} = \begin{bmatrix} q_{k-1}\otimes\Delta q_k \\ v_{k-1} + (C_k\lambda_k - g_{k-1})\Delta t \\ b_{\omega,k-1} \\ b_{\alpha,k-1} \\ \Delta g_{k-1} \end{bmatrix} \tag{19}$$
$$\Delta q_k = 1 + \frac{\bar{\omega}_k}{2}\Delta t + \left(\frac{\bar{\omega}_{k-1}\times\bar{\omega}_k}{24} - \frac{\bar{\omega}_k^2}{8}\right)\Delta t^2 - \frac{\bar{\omega}_k^2\,\bar{\omega}_k}{48}\Delta t^3$$
$$C_{k+1} = \begin{bmatrix} q_{0,k}^2+q_{1,k}^2-q_{2,k}^2-q_{3,k}^2 & 2(q_{1,k}q_{2,k}-q_{0,k}q_{3,k}) & 2(q_{1,k}q_{3,k}+q_{0,k}q_{2,k}) \\ 2(q_{1,k}q_{2,k}+q_{0,k}q_{3,k}) & q_{0,k}^2-q_{1,k}^2+q_{2,k}^2-q_{3,k}^2 & 2(q_{2,k}q_{3,k}-q_{0,k}q_{1,k}) \\ 2(q_{1,k}q_{3,k}-q_{0,k}q_{2,k}) & 2(q_{2,k}q_{3,k}+q_{0,k}q_{1,k}) & q_{0,k}^2-q_{1,k}^2-q_{2,k}^2+q_{3,k}^2 \end{bmatrix}$$
$$\lambda_k = \bar{\alpha}_k + \left(\frac{\bar{\omega}_k\times\bar{\alpha}_k}{2} + \frac{\bar{\omega}_{k-1}\times\bar{\alpha}_k - \bar{\omega}_k\times\bar{\alpha}_{k-1}}{12}\right)\Delta t + \frac{(\bar{\omega}_k\cdot\bar{\alpha}_k)\,\bar{\omega}_k - \bar{\omega}_k^2\,\bar{\alpha}_k}{6}\Delta t^2 + \frac{(\bar{\omega}_{k-1}\cdot\bar{\alpha}_k - \bar{\omega}_k\cdot\bar{\alpha}_{k-1})\,\bar{\omega}_k - (\bar{\omega}_{k-1}\cdot\bar{\omega}_k)\,\bar{\alpha}_k + \bar{\omega}_k^2\,\bar{\alpha}_{k-1}}{24}\Delta t^2 - \frac{\bar{\omega}_k^2}{24}\,\bar{\omega}_k\times\bar{\alpha}_k\,\Delta t^3$$
$$g_k = \begin{bmatrix} 0 & 0 & -(1+\Delta g_k) \end{bmatrix}^T, \qquad \bar{\omega}_k = d_{\omega,k} - b_{\omega,k-1}, \qquad \bar{\alpha}_k = d_{\alpha,k} - b_{\alpha,k-1}$$
[0055] The covariance matrix of the state error is updated as in Expression (20). It receives the influences of the noise components η_ω and η_α of the angular velocity data d_ω and the acceleration data d_α, and of the process noises ρ_ω, ρ_α, and ρ_g, which represent the instability of the residual bias b_ω of the angular velocity sensor, the residual bias b_α of the acceleration sensor, and the gravitational acceleration value (the value of the gravitational acceleration vector g).
$$\Sigma_{x,k}^2 = A_k\,\Sigma_{x,k-1}^2\,A_k^T + W_k\,\Sigma_{\rho,k}^2\,W_k^T \tag{20}$$
$$A_k \approx \begin{bmatrix} J_{q/q,k} & 0_{4\times3} & -J_{q/\omega,k} & 0_{4\times3} & 0_{4\times1} \\ J_{v/q,k} & I_{3\times3} & 0_{3\times3} & -C_k & 0_{3\times1} \\ 0_{3\times4} & 0_{3\times3} & I_{3\times3} & 0_{3\times3} & 0_{3\times1} \\ 0_{3\times4} & 0_{3\times3} & 0_{3\times3} & I_{3\times3} & 0_{3\times1} \\ 0_{1\times4} & 0_{1\times3} & 0_{1\times3} & 0_{1\times3} & I_{1\times1} \end{bmatrix}$$
$$W_k \approx \begin{bmatrix} J_{q/\omega,k} & 0_{4\times3} & 0_{4\times3} & 0_{4\times3} & 0_{4\times1} \\ 0_{3\times3} & C_k & 0_{3\times3} & 0_{3\times3} & 0_{3\times1} \\ 0_{3\times3} & 0_{3\times3} & \Delta t\,I_{3\times3} & 0_{3\times3} & 0_{3\times1} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & \Delta t\,I_{3\times3} & 0_{3\times1} \\ 0_{1\times3} & 0_{1\times3} & 0_{1\times3} & 0_{1\times3} & \Delta t\,I_{1\times1} \end{bmatrix}$$
$$\Sigma_{\rho,k}^2 = \begin{bmatrix} \Sigma_{\eta\omega,k}^2 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times1} \\ 0_{3\times3} & \Sigma_{\eta\alpha,k}^2 & 0_{3\times3} & 0_{3\times3} & 0_{3\times1} \\ 0_{3\times3} & 0_{3\times3} & \Sigma_{\rho\omega}^2 & 0_{3\times3} & 0_{3\times1} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & \Sigma_{\rho\alpha}^2 & 0_{3\times1} \\ 0_{1\times3} & 0_{1\times3} & 0_{1\times3} & 0_{1\times3} & \sigma_{\rho g}^2\,I_{1\times1} \end{bmatrix}$$
[0056] Here, 0.sub.n.times.m indicates a zero matrix having n rows
and m columns, and I.sub.n.times.m indicates an identity matrix having
n rows and m columns. J . . . indicates a matrix of propagation
coefficients of each error, obtained by partial differentiation of
the process model, as in Expression (21). .SIGMA. . . . indicates
a covariance matrix of each type of noise, as in Expression (22).
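As a non-limiting illustration, the covariance time update of Expression (20) is one line of matrix algebra. The sketch below assumes NumPy; the function name is illustrative and not part of the disclosure:

```python
import numpy as np

def propagate_covariance(P, A, W, Q):
    """Covariance time update of Expression (20):
    P <- A P A^T + W Q W^T, where Q is the process-noise
    covariance (Sigma^2_rho in the text)."""
    return A @ P @ A.T + W @ Q @ W.T
```

In the filter, A and W would be the 14x14 and 14x13 block matrices of Expression (20), assembled from the Jacobians of Expression (21).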
\[
J_{q/q,k} = \begin{bmatrix}
+\Delta q_{0,k} & -\Delta q_{1,k} & -\Delta q_{2,k} & -\Delta q_{3,k}\\
+\Delta q_{1,k} & +\Delta q_{0,k} & +\Delta q_{3,k} & -\Delta q_{2,k}\\
+\Delta q_{2,k} & -\Delta q_{3,k} & +\Delta q_{0,k} & +\Delta q_{1,k}\\
+\Delta q_{3,k} & +\Delta q_{2,k} & -\Delta q_{1,k} & +\Delta q_{0,k}
\end{bmatrix}
\qquad
J_{q/\omega,k} = \frac{1}{2}\begin{bmatrix}
-q_{1,k-1} & -q_{2,k-1} & -q_{3,k-1}\\
+q_{0,k-1} & -q_{3,k-1} & +q_{2,k-1}\\
+q_{3,k-1} & +q_{0,k-1} & -q_{1,k-1}\\
-q_{2,k-1} & +q_{1,k-1} & +q_{0,k-1}
\end{bmatrix}\Delta t
\]
\[
J_{v/q,k} = 2\begin{bmatrix}
+r_{1,k} & +r_{0,k} & +r_{3,k} & -r_{2,k}\\
+r_{2,k} & -r_{3,k} & +r_{0,k} & +r_{1,k}\\
0 & 0 & 0 & 0
\end{bmatrix}\Delta t,
\qquad
\begin{bmatrix} r_{0,k} \\ r_{1,k} \\ r_{2,k} \\ r_{3,k} \end{bmatrix}
= \begin{bmatrix}
+q_{1,k-1} & +q_{2,k-1} & +q_{3,k-1}\\
+q_{0,k-1} & -q_{3,k-1} & +q_{2,k-1}\\
+q_{3,k-1} & +q_{0,k-1} & -q_{1,k-1}\\
-q_{2,k-1} & +q_{1,k-1} & +q_{0,k-1}
\end{bmatrix}\lambda_k
\tag{21}
\]
\[
\Sigma_{\eta\omega,k}^2 = E[\eta_\omega\eta_\omega^T] = \mathrm{diag}\bigl(\sigma_{\eta\omega x,k}^2,\ \sigma_{\eta\omega y,k}^2,\ \sigma_{\eta\omega z,k}^2\bigr),
\qquad
\Sigma_{\eta\alpha,k}^2 = E[\eta_\alpha\eta_\alpha^T] = \mathrm{diag}\bigl(\sigma_{\eta\alpha x,k}^2,\ \sigma_{\eta\alpha y,k}^2,\ \sigma_{\eta\alpha z,k}^2\bigr)
\]
\[
\Sigma_{\rho\omega}^2 = E[\rho_\omega\rho_\omega^T] = \mathrm{diag}\bigl(\sigma_{\rho\omega x}^2,\ \sigma_{\rho\omega y}^2,\ \sigma_{\rho\omega z}^2\bigr),
\qquad
\Sigma_{\rho\alpha}^2 = E[\rho_\alpha\rho_\alpha^T] = \mathrm{diag}\bigl(\sigma_{\rho\alpha x}^2,\ \sigma_{\rho\alpha y}^2,\ \sigma_{\rho\alpha z}^2\bigr)
\tag{22}
\]
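The matrix J.sub.q/q,k of Expression (21) is the right-multiplication matrix of the posture increment .DELTA.q.sub.k, so that J.sub.q/q,k q equals the quaternion product q .circle-w/dot. .DELTA.q.sub.k. A minimal sketch (NumPy assumed; the function name is illustrative) that can be checked against the quaternion product:

```python
import numpy as np

def Jq_q(dq):
    """Right-multiplication matrix of the posture increment dq
    (Expression (21)): Jq_q(dq) @ q gives the quaternion product
    q (x) dq, i.e. the derivative of q_k w.r.t. q_{k-1}."""
    d0, d1, d2, d3 = dq
    return np.array([[ d0, -d1, -d2, -d3],
                     [ d1,  d0,  d3, -d2],
                     [ d2, -d3,  d0,  d1],
                     [ d3,  d2, -d1,  d0]])
```

For example, multiplying the identity quaternion [1, 0, 0, 0] by this matrix simply returns dq, and i (x) j = k holds in quaternion algebra.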
Observation Model
[0057] In the observation model, an observation residual .DELTA.z
whose elements are the observation residual .DELTA.z.sub.a of the
gravitational acceleration, based on the acceleration data
d.sub..alpha., and the observation residual .DELTA.z.sub.v of the
zero motion velocity is calculated as in Expression (23).
\[
\Delta z_k = \begin{bmatrix} \Delta z_{a,k} \\ \Delta z_{v,k} \end{bmatrix}
= \begin{bmatrix} C_k\,\bar{\alpha}_k - g_k \\ v_k \end{bmatrix}
\tag{23}
\]
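Expression (23) stacks the gravity residual and the zero-velocity residual into one vector. A minimal sketch (NumPy; names are illustrative):

```python
import numpy as np

def observation_residual(C, a_bar, g, v):
    """Observation residual of Expression (23): the gravity residual
    C a_bar - g stacked with the zero-motion-velocity residual v."""
    return np.concatenate([C @ a_bar - g, v])
```

For a stationary, level sensor, the corrected acceleration equals the gravitational acceleration vector and the velocity estimate is zero, so the residual vanishes.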
[0058] Here, a Kalman coefficient K is calculated as in Expression
(24), treating the noise component .eta..sub..alpha. of the
acceleration data d.sub..alpha., the motion acceleration component
.zeta..sub.a, and the motion velocity component .zeta..sub.v as
observation errors.
\[
K_k = \Sigma_{x,k}^2 H_k^T \left( H_k\,\Sigma_{x,k}^2 H_k^T + V_k\,\Sigma_{\zeta,k}^2 V_k^T \right)^{-1}
\]
\[
H_k = \begin{bmatrix}
J_{za/q,k} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & J_{za/g,k}\\
0_{3\times4} & I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times1}
\end{bmatrix},
\qquad
\Sigma_{\zeta,k}^2 = \begin{bmatrix}
\Sigma_{\eta\alpha,k}^2 & 0_{3\times3} & 0_{3\times3}\\
0_{3\times3} & \Sigma_{\zeta a,k}^2 & 0_{3\times3}\\
0_{3\times3} & 0_{3\times3} & \Sigma_{\zeta v,k}^2
\end{bmatrix},
\qquad
V_k = \begin{bmatrix}
C_k & -I_{3\times3} & 0_{3\times3}\\
0_{3\times3} & 0_{3\times3} & -I_{3\times3}
\end{bmatrix}
\tag{24}
\]
[0059] Here, J . . . is the matrix of propagation coefficients of
each error, obtained by partial differentiation of the observation
model, as in Expression (25). .SIGMA. . . . indicates a
covariance matrix of each type of noise, as in Expression (26).
\[
J_{za/q,k} = 2\begin{bmatrix}
+s_{1,k} & +s_{0,k} & +s_{3,k} & -s_{2,k}\\
+s_{2,k} & -s_{3,k} & +s_{0,k} & +s_{1,k}\\
0 & 0 & 0 & 0
\end{bmatrix},
\qquad
\begin{bmatrix} s_{0,k} \\ s_{1,k} \\ s_{2,k} \\ s_{3,k} \end{bmatrix}
= \begin{bmatrix}
+q_{1,k} & +q_{2,k} & +q_{3,k}\\
+q_{0,k} & -q_{3,k} & +q_{2,k}\\
+q_{3,k} & +q_{0,k} & -q_{1,k}\\
-q_{2,k} & +q_{1,k} & +q_{0,k}
\end{bmatrix}\bar{\alpha}_k,
\qquad
J_{za/g,k} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
\tag{25}
\]
\[
\Sigma_{\zeta a,k}^2 = E[\zeta_a\zeta_a^T]
= \mathrm{diag}\bigl(\sigma_{\zeta ax,k}^2,\ \sigma_{\zeta ay,k}^2,\ \sigma_{\zeta az,k}^2\bigr)
+ \frac{\mu^2}{3}\,\|\Delta z_{a,k}\|^2\, I_{3\times3},
\qquad
\Sigma_{\zeta v,k}^2 = E[\zeta_v\zeta_v^T]
= \mathrm{diag}\bigl(\sigma_{\zeta vx,k}^2,\ \sigma_{\zeta vy,k}^2,\ \sigma_{\zeta vz,k}^2\bigr)
\tag{26}
\]
[0060] Here, .mu. is a coefficient for calculating the RMS of the
error of each axis from the predicted motion acceleration. By using
the Kalman coefficient K, the state vector x is corrected and its
error covariance matrix .SIGMA..sub.x.sup.2 is updated, as in
Expression (27).
\[
x_k \leftarrow x_k - K_k\,\Delta z_k,
\qquad
\Sigma_{x,k}^2 \leftarrow \Sigma_{x,k}^2 - K_k H_k\,\Sigma_{x,k}^2
\tag{27}
\]
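The correction of Expressions (24) and (27) is standard Kalman algebra. A minimal sketch (Python/NumPy; names are illustrative, with H, V, R, and dz standing for H.sub.k, V.sub.k, .SIGMA..sub..zeta.,k.sup.2, and .DELTA.z.sub.k):

```python
import numpy as np

def kalman_correct(x, P, H, V, R, dz):
    """One correction step per Expressions (24) and (27):
    K = P H^T (H P H^T + V R V^T)^-1, then
    x <- x - K dz and P <- P - K H P."""
    S = H @ P @ H.T + V @ R @ V.T      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman coefficient, Expression (24)
    return x - K @ dz, P - K @ H @ P   # Expression (27)
```

With scalar-like inputs (identity H and V), the posterior covariance shrinks by the usual factor R/(P+R), which is a quick sanity check on the algebra.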
Posture Normalization Model
[0061] In the posture normalization model, the update of Expression
(28) is applied in order to keep the posture quaternion and its
error covariance at proper values.
\[
x_k = \begin{bmatrix} q_k \\ v_k \\ b_{\omega,k} \\ b_{\alpha,k} \\ \Delta g_k \end{bmatrix}
\leftarrow
\begin{bmatrix} q_k / \|q_k\| \\ v_k \\ b_{\omega,k} \\ b_{\alpha,k} \\ \Delta g_k \end{bmatrix},
\qquad
\Sigma_{x,k}^2 \leftarrow
\begin{bmatrix} D_k'^T D_k' & 0_{4\times10} \\ 0_{10\times4} & I_{10\times10} \end{bmatrix}
\Sigma_{x,k}^2
\begin{bmatrix} D_k'^T D_k' & 0_{4\times10} \\ 0_{10\times4} & I_{10\times10} \end{bmatrix}
\tag{28}
\]
[0062] Here, D'.sup.TD' indicates a matrix of Expression (29) for
limiting the rank of the posture error and removing the azimuthal
component.
\[
D_k'^T D_k' = \begin{bmatrix}
q_{1,k}^2+q_{2,k}^2 & -q_{0,k}q_{1,k}-q_{2,k}q_{3,k} & q_{1,k}q_{3,k}-q_{0,k}q_{2,k} & 0\\
-q_{0,k}q_{1,k}-q_{2,k}q_{3,k} & q_{0,k}^2+q_{3,k}^2 & 0 & q_{0,k}q_{2,k}-q_{1,k}q_{3,k}\\
q_{1,k}q_{3,k}-q_{0,k}q_{2,k} & 0 & q_{0,k}^2+q_{3,k}^2 & -q_{0,k}q_{1,k}-q_{2,k}q_{3,k}\\
0 & q_{0,k}q_{2,k}-q_{1,k}q_{3,k} & -q_{0,k}q_{1,k}-q_{2,k}q_{3,k} & q_{1,k}^2+q_{2,k}^2
\end{bmatrix}
\tag{29}
\]
1-1-8. Initial Value
State Vector and Error Covariance Matrix
[0063] Initial values of the state vector x and the error
covariance matrix .SIGMA..sub.x.sup.2 are given as in Expression
(30).
\[
x_0 = \begin{bmatrix} q_0 \\ v_0 \\ b_{\omega,0} \\ b_{\alpha,0} \\ \Delta g_0 \end{bmatrix},
\qquad
\Sigma_{x,0}^2 = \begin{bmatrix}
\Sigma_{q,0}^2 & 0_{4\times3} & 0_{4\times3} & 0_{4\times3} & 0_{4\times1}\\
0_{3\times4} & \Sigma_{v,0}^2 & 0_{3\times3} & 0_{3\times3} & 0_{3\times1}\\
0_{3\times4} & 0_{3\times3} & \Sigma_{b\omega,0}^2 & 0_{3\times3} & 0_{3\times1}\\
0_{3\times4} & 0_{3\times3} & 0_{3\times3} & \Sigma_{b\alpha,0}^2 & 0_{3\times1}\\
0_{1\times4} & 0_{1\times3} & 0_{1\times3} & 0_{1\times3} & \sigma_{\Delta g,0}^2\, I_{1\times1}
\end{bmatrix}
\tag{30}
\]
Posture Quaternion
[0064] The posture of the IMU needs to be given as a quaternion.
The posture quaternion q can be calculated from a roll (bank) angle
.phi. [rad], a pitch (elevation) angle .theta. [rad], and a yaw
(azimuth) angle .psi. [rad], as used for the posture of an aircraft
and the like, as in Expression (31).
\[
q = \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix}
= \begin{bmatrix}
\sin\frac{\phi}{2}\sin\frac{\theta}{2}\sin\frac{\psi}{2} + \cos\frac{\phi}{2}\cos\frac{\theta}{2}\cos\frac{\psi}{2}\\[2pt]
\sin\frac{\phi}{2}\cos\frac{\theta}{2}\cos\frac{\psi}{2} - \cos\frac{\phi}{2}\sin\frac{\theta}{2}\sin\frac{\psi}{2}\\[2pt]
\cos\frac{\phi}{2}\sin\frac{\theta}{2}\cos\frac{\psi}{2} + \sin\frac{\phi}{2}\cos\frac{\theta}{2}\sin\frac{\psi}{2}\\[2pt]
\cos\frac{\phi}{2}\cos\frac{\theta}{2}\sin\frac{\psi}{2} - \sin\frac{\phi}{2}\sin\frac{\theta}{2}\cos\frac{\psi}{2}
\end{bmatrix}
\tag{31}
\]
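Expression (31) can be transcribed directly. A minimal sketch (NumPy; the function name is illustrative), which always yields a unit quaternion:

```python
import numpy as np

def euler_to_quaternion(phi, theta, psi):
    """Roll (phi), pitch (theta), and yaw (psi) angles [rad] to a
    posture quaternion [q0, q1, q2, q3], per Expression (31)."""
    cp, sp = np.cos(phi / 2), np.sin(phi / 2)
    ct, st = np.cos(theta / 2), np.sin(theta / 2)
    cy, sy = np.cos(psi / 2), np.sin(psi / 2)
    return np.array([
        sp * st * sy + cp * ct * cy,   # q0 (scalar part)
        sp * ct * cy - cp * st * sy,   # q1
        cp * st * cy + sp * ct * sy,   # q2
        cp * ct * sy - sp * st * cy,   # q3
    ])
```

Zero roll, pitch, and yaw give the identity quaternion [1, 0, 0, 0].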
[0065] The error covariance matrix .SIGMA..sub.q.sup.2 is
calculated from RMS.sigma..sub..PHI. [rad RMS] of the roll angle
error and RMS.sigma..sub..theta. [rad RMS] of the pitch angle error,
as in Expression (32) (the yaw angle error is ignored).
\[
\Sigma_q^2 = \frac{1}{4}\, D'^T \begin{bmatrix} \sigma_\phi^2 & 0 \\ 0 & \sigma_\theta^2 \end{bmatrix} D',
\qquad
D' = \begin{bmatrix}
-q_1 & +q_0 & -q_3 & +q_2\\
-q_2 & +q_3 & +q_0 & -q_1
\end{bmatrix}
\tag{32}
\]
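Expression (32) maps the two angle-error variances into the 4x4 quaternion error covariance through D'. A minimal sketch (NumPy; the function name is illustrative):

```python
import numpy as np

def initial_posture_cov(q, sigma_phi, sigma_theta):
    """Expression (32): quaternion error covariance from the roll-angle
    error RMS sigma_phi and pitch-angle error RMS sigma_theta [rad];
    the yaw angle error is ignored (rank <= 2 result)."""
    q0, q1, q2, q3 = q
    D = np.array([[-q1,  q0, -q3,  q2],
                  [-q2,  q3,  q0, -q1]])
    return 0.25 * D.T @ np.diag([sigma_phi**2, sigma_theta**2]) @ D
```

For the identity quaternion, the covariance reduces to quarter-variances on the q1 and q2 components, the quaternion components associated with roll and pitch.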
Motion Velocity Vector
[0066] If the initial state is stationary, the motion velocity
vector v may be set to 0. The error covariance matrix
.SIGMA..sub.v.sup.2 is given based on RMS.sigma..sub.vx,
RMS.sigma..sub.vy, and RMS.sigma..sub.vz [Gs RMS] of the errors of
the motion velocity vector v in the axes, regardless of whether the
state is stationary, as in Expression (33).
\[
\Sigma_v^2 = \mathrm{diag}\bigl(\sigma_{vx}^2,\ \sigma_{vy}^2,\ \sigma_{vz}^2\bigr)
\tag{33}
\]
Residual Biases of Angular Velocity/Acceleration Sensor
[0067] If the residual bias b.sub..omega. of the angular velocity
sensor and the residual bias b.sub..alpha. of the acceleration
sensor are known, the residual biases are required to be set
appropriately. When the residual biases are unknown, zero is given
as their expected value. The error covariance matrices
.SIGMA..sub.b.omega..sup.2 and .SIGMA..sub.b.alpha..sup.2
are given based on the errors RMS.sigma..sub.b.omega.x,
RMS.sigma..sub.b.omega.y, and RMS.sigma..sub.b.omega.z [rad/s RMS]
of the residual biases of the angular velocity sensor in the axes
and the errors RMS.sigma..sub.b.alpha.x, RMS.sigma..sub.b.alpha.y,
and RMS.sigma..sub.b.alpha.z [G RMS] of the residual biases of the
acceleration sensor in the axes, as in Expression (34).
\[
\Sigma_{b\omega}^2 = \mathrm{diag}\bigl(\sigma_{b\omega x}^2,\ \sigma_{b\omega y}^2,\ \sigma_{b\omega z}^2\bigr),
\qquad
\Sigma_{b\alpha}^2 = \mathrm{diag}\bigl(\sigma_{b\alpha x}^2,\ \sigma_{b\alpha y}^2,\ \sigma_{b\alpha z}^2\bigr)
\tag{34}
\]
Gravitational-Acceleration Correction Value
[0068] If the gravitational acceleration value is known, the
difference from the standard value 1 [G] (=9.80665 [m/s.sup.2]) is
required to be set appropriately. When the gravitational
acceleration value is unknown, zero is given as its expected value.
RMS.sigma..sub..DELTA.g [G RMS] of the error of the gravitational
acceleration value is applied to the error covariance matrix.
1-1-9. Setting Value
Sampling Interval
[0069] Since the posture detection processing from the output of
the IMU corresponds to a time integration operation in principle,
the sampling interval .DELTA.t [s] is an important value, and thus
is required to be set appropriately.
Output Noise of Angular Velocity/Acceleration Sensor
[0070] The noise components .eta. included in the outputs of the
angular velocity sensor and the acceleration sensor are modeled as
zero-mean white Gaussian noise of variance .sigma..sub..eta..sup.2,
independent for each axis. The magnitude of the noise components is
designated by the corresponding RMS values
(.sigma..sub..eta..omega.x, .sigma..sub..eta..omega.y,
.sigma..sub..eta..omega.z) [rad/s RMS] and (.sigma..sub..eta..alpha.x,
.sigma..sub..eta..alpha.y, .sigma..sub..eta..alpha.z) [G RMS], as
in Expression (35).
\[
\eta_\omega = \begin{bmatrix} \eta_{\omega x} \\ \eta_{\omega y} \\ \eta_{\omega z} \end{bmatrix}
\approx \begin{bmatrix} N(0,\sigma_{\eta\omega x}^2) \\ N(0,\sigma_{\eta\omega y}^2) \\ N(0,\sigma_{\eta\omega z}^2) \end{bmatrix}
= N(0,\Sigma_{\eta\omega}^2),
\qquad
\Sigma_{\eta\omega}^2 = \mathrm{diag}\bigl(\sigma_{\eta\omega x}^2,\ \sigma_{\eta\omega y}^2,\ \sigma_{\eta\omega z}^2\bigr)
\]
\[
\eta_\alpha = \begin{bmatrix} \eta_{\alpha x} \\ \eta_{\alpha y} \\ \eta_{\alpha z} \end{bmatrix}
\approx \begin{bmatrix} N(0,\sigma_{\eta\alpha x}^2) \\ N(0,\sigma_{\eta\alpha y}^2) \\ N(0,\sigma_{\eta\alpha z}^2) \end{bmatrix}
= N(0,\Sigma_{\eta\alpha}^2),
\qquad
\Sigma_{\eta\alpha}^2 = \mathrm{diag}\bigl(\sigma_{\eta\alpha x}^2,\ \sigma_{\eta\alpha y}^2,\ \sigma_{\eta\alpha z}^2\bigr)
\tag{35}
\]
Biases of Angular Velocity/Acceleration Sensor and Instability of
Gravitational Acceleration Value
[0071] It is considered that the biases of the angular velocity
sensor and the acceleration sensor are not constant and change with
time. It is also considered that the gravitational acceleration
value slightly changes depending on the surrounding environment.
Modeling each change as an independent random walk, the
instability is designated by (.sigma..sub..rho..omega.x,
.sigma..sub..rho..omega.y, .sigma..sub..rho..omega.z) [rad/s/√s],
(.sigma..sub..rho..alpha.x, .sigma..sub..rho..alpha.y,
.sigma..sub..rho..alpha.z) [G/√s], and .sigma..sub..rho.g [G/√s],
as in Expression (36).
\[
b_{\omega,k} = b_{\omega,k-1} + N\bigl(0,\ \Sigma_{\rho\omega}^2\,\Delta t\bigr),
\qquad
\Sigma_{\rho\omega}^2 = \mathrm{diag}\bigl(\sigma_{\rho\omega x}^2,\ \sigma_{\rho\omega y}^2,\ \sigma_{\rho\omega z}^2\bigr)
\]
\[
b_{\alpha,k} = b_{\alpha,k-1} + N\bigl(0,\ \Sigma_{\rho\alpha}^2\,\Delta t\bigr),
\qquad
\Sigma_{\rho\alpha}^2 = \mathrm{diag}\bigl(\sigma_{\rho\alpha x}^2,\ \sigma_{\rho\alpha y}^2,\ \sigma_{\rho\alpha z}^2\bigr)
\]
\[
\Delta g_k = \Delta g_{k-1} + N\bigl(0,\ \sigma_{\rho g}^2\,\Delta t\bigr)
\tag{36}
\]
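The random-walk model of Expression (36) can be simulated in one line, which is useful when tuning the instability settings. A minimal sketch (NumPy; the function name is illustrative):

```python
import numpy as np

def propagate_bias(b_prev, sigma_rho, dt, rng):
    """Random-walk bias model of Expression (36):
    b_k = b_{k-1} + N(0, sigma_rho^2 * dt), applied per axis.
    sigma_rho: per-axis instability (e.g. [rad/s/sqrt(s)])."""
    return b_prev + rng.normal(0.0, sigma_rho * np.sqrt(dt), size=b_prev.shape)
```

Because the variance grows linearly with elapsed time, the per-step standard deviation is sigma_rho * sqrt(dt); setting the instability to zero reproduces a constant bias.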
Motion Acceleration
[0072] When the gravitational acceleration is observed in order to
correct the posture, the motion acceleration component .zeta..sub.a
acts as the observation error. If this observation error is modeled
as simple white Gaussian noise, the posture responds sensitively to
a large motion acceleration caused by a rapid motion. Thus, a noise
model as in Expression (37), which changes the noise level depending
on the magnitude of the estimated motion acceleration (the
difference between the observed acceleration and the estimated
gravitational acceleration), is used, with a linear coefficient
.mu. [N/A] and a constant term (.sigma..sub..zeta..alpha.x,
.sigma..sub..zeta..alpha.y, .sigma..sub..zeta..alpha.z) [G RMS] as
setting items.
\[
\zeta_{a,k} \approx N\bigl(0,\ \Sigma_{\zeta a,k}^2\bigr),
\qquad
\Sigma_{\zeta a,k}^2 = \mathrm{diag}\bigl(\sigma_{\zeta ax}^2,\ \sigma_{\zeta ay}^2,\ \sigma_{\zeta az}^2\bigr)
+ \frac{\mu^2}{3}\,\|\Delta z_{a,k}\|^2\, I_{3\times3}
\tag{37}
\]
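The adaptive noise level of Expression (37) is a constant floor plus a term proportional to the squared norm of the estimated motion acceleration. A minimal sketch (NumPy; the function name is illustrative):

```python
import numpy as np

def motion_accel_cov(sigma_const, mu, dz_a):
    """Expression (37): observation-noise covariance for the motion
    acceleration. sigma_const: per-axis constant RMS floor [G RMS];
    mu: linear coefficient; dz_a: estimated motion acceleration
    (observed acceleration minus estimated gravity)."""
    return np.diag(sigma_const**2) + (mu**2 / 3.0) * (dz_a @ dz_a) * np.eye(3)
```

With mu = 0 the model degenerates to plain white noise; a large |dz_a| inflates the covariance, so a rapid motion de-weights the gravity observation instead of disturbing the posture.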
Motion Velocity
[0073] The zero motion velocity observation assumes that the motion
velocity of the IMU is substantially zero over the long term. The
motion velocity component .zeta..sub.v appearing over short terms
acts as the observation error. This observation error is modeled as
white Gaussian noise independent for each axis, and its magnitude
is designated by the RMS values (.sigma..sub..zeta.vx,
.sigma..sub..zeta.vy, .sigma..sub..zeta.vz) [Gs RMS], as in
Expression (38).
\[
\zeta_v \approx N\bigl(0,\ \Sigma_{\zeta v}^2\bigr),
\qquad
\Sigma_{\zeta v}^2 = \mathrm{diag}\bigl(\sigma_{\zeta vx}^2,\ \sigma_{\zeta vy}^2,\ \sigma_{\zeta vz}^2\bigr)
\tag{38}
\]
1-1-10. Limitation of Yaw-Axis Component of Bias Error in Angular
Velocity Sensor
[0074] When an azimuth observation section such as a magnetic
sensor is not provided, the yaw-axis component (vertical component)
of the bias error of the angular velocity sensor also increases
monotonically. Normally, the yaw-axis direction in sensor
coordinates changes with the posture. Thus, even if the z-axis of
the angular velocity sensor coincides with the yaw axis at a
certain time point, so that the error estimate increases, the bias
is corrected by observing the gravitational acceleration once the
posture changes and the z-axis takes an inclination other than the
yaw axis (for example, a horizontal-axis direction). The error
estimate is thereby reduced. However, when the posture changes
little and substantially the same posture continues for a long
time, the error estimate of the yaw-axis component increases, and
the feedback gain increases. The increased feedback gain causes the
bias estimation value of the angular velocity sensor to change
unintentionally, and thus causes the azimuth of the posture to
drift. In practice, it is unlikely that the bias estimation error
of the angular velocity sensor, even in the yaw-axis component,
increases without limit beyond, for example, the initial bias
error. Accordingly, an upper limit value is provided only for the
yaw-axis component of the bias estimation error of the angular
velocity sensor in the state error covariance matrix, and the
yaw-axis component is limited so as not to exceed the upper limit
value.
[0075] Firstly, as in Expression (39), the variance
.sigma..sub.b.omega.v.sup.2 of the yaw-axis component is calculated
from the error covariance matrix .SIGMA..sub.b.omega..sup.2 of the
residual bias in the angular velocity sensor, based on the current
posture quaternion.
\[
\sigma_{b\omega v}^2 = c_v^T\,\Sigma_{b\omega}^2\,c_v
= \begin{bmatrix} 0_7^T & c_v^T & 0_4^T \end{bmatrix}
\Sigma_x^2
\begin{bmatrix} 0_7 \\ c_v \\ 0_4 \end{bmatrix},
\qquad
c_v = \begin{bmatrix}
2\,(q_1 q_3 - q_0 q_2)\\
2\,(q_2 q_3 + q_0 q_1)\\
q_0^2 - q_1^2 - q_2^2 + q_3^2
\end{bmatrix}
\tag{39}
\]
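Expression (39) projects the 3x3 gyro-bias covariance block onto the current vertical axis c.sub.v. A minimal sketch (NumPy; the function name is illustrative, and the state ordering — quaternion at indices 0-3, velocity 4-6, gyro bias 7-9 — follows Expression (30)):

```python
import numpy as np

def yaw_bias_variance(P, q):
    """Expression (39): variance of the gyro-bias error along the
    yaw (vertical) axis. P: 14x14 state error covariance with the
    gyro-bias block at indices 7..9; q: posture quaternion."""
    q0, q1, q2, q3 = q
    c_v = np.array([2 * (q1 * q3 - q0 * q2),
                    2 * (q2 * q3 + q0 * q1),
                    q0**2 - q1**2 - q2**2 + q3**2])  # third row of C
    Pbw = P[7:10, 7:10]          # residual-bias block of the gyro
    return c_v @ Pbw @ c_v
```

For the identity quaternion, c_v = [0, 0, 1] and the function simply reads out the z-axis bias variance.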
[0076] When the variance .sigma..sub.b.omega.v.sup.2 of the
yaw-axis component exceeds the upper limit value
.sigma..sub.b.omega.max.sup.2, the yaw-axis component is limited as
in Expression (40).
\[
\Sigma_x^2 \leftarrow L_x\,\Sigma_x^2\,L_x^T,
\qquad
L_x = \begin{bmatrix}
I_{7\times7} & 0_{7\times3} & 0_{7\times4}\\
0_{3\times7} & L_{b\omega} & 0_{3\times4}\\
0_{4\times7} & 0_{4\times3} & I_{4\times4}
\end{bmatrix},
\qquad
L_{b\omega} = C^T \begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & \sigma_{b\omega\max}^2 / \sigma_{b\omega v}^2
\end{bmatrix} C
\]
\[
C = \begin{bmatrix}
q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2\,(q_1 q_2 - q_0 q_3) & 2\,(q_1 q_3 + q_0 q_2)\\
2\,(q_1 q_2 + q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2\,(q_2 q_3 - q_0 q_1)\\
2\,(q_1 q_3 - q_0 q_2) & 2\,(q_2 q_3 + q_0 q_1) & q_0^2 - q_1^2 - q_2^2 + q_3^2
\end{bmatrix}
\tag{40}
\]
1-1-11. Countermeasure for Off-Scale of Sensor
[0077] An effective measurement range is defined for an actual
inertial sensor (angular velocity sensor and acceleration sensor).
When a physical quantity (angular velocity or acceleration)
exceeding the range is input, an accurate output cannot be obtained
because the signal saturates in the inertial sensor. Exceeding the
effective measurement range is referred to as "off-scale" below.
Normally, a sensor is selected to cover the range to be measured,
such that off-scale does not occur. However, depending on the
application, a state (off-scale state) in which the range is
exceeded for a very short time by an impact or the like must be
assumed.
Influence of Sensor Off-Scale
[0078] If the angular velocity sensor is in the off-scale state, a
large error is included in the angular velocity, and the error in
the posture angle obtained by integrating the angular velocity
increases. The error in the motion velocity also increases, because
the acceleration is integrated based on the posture angle having an
increased error. If the acceleration sensor is in the off-scale
state, a large error is included in the acceleration, and the error
in the motion velocity obtained by integrating the acceleration
increases. Since the error caused by off-scale is not a stochastic
process, its average or variance cannot be obtained. Modeling it in
a Kalman filter is difficult because the amount of error is huge
and unpredictable. Thus, the current Kalman filter does not
recognize the above error as an error to be corrected. Therefore,
when the output value of the sensor is out of the effective range
of the sensor, this situation is recognized as off-scale, and a
proper period after the time point at which the output value leaves
the effective range is defined as a recovery period from the
off-scale. Then, processing different from that in the normal time
is performed.
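The detection logic of paragraph [0078] can be sketched as a small state machine. The patent only says "a proper period", so the `recovery_samples` and `full_scale` parameters below are hypothetical, application-dependent settings:

```python
def off_scale_state(samples, full_scale, recovery_samples):
    """Flag off-scale and its recovery period, per [0078]: once any
    axis of a sample leaves the effective range +/-full_scale, the
    current sample and the following samples within recovery_samples
    are treated as the off-scale/recovery period."""
    flags, countdown = [], 0
    for s in samples:
        if any(abs(v) > full_scale for v in s):
            countdown = recovery_samples   # (re)start the recovery period
        flags.append(countdown > 0)
        countdown = max(0, countdown - 1)
    return flags
```

During flagged samples the filter would apply the recovery processing of Expressions (41) and (42) instead of the normal update.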
Off-Scale Recovery Processing
[0079] The following processing is applied to the state error
covariance matrix to handle the state error that increases under
the influence of off-scale. That is, the error variance component
of the covariance matrix is appropriately inflated in response to
the increase of the error, and the covariance component is set to
zero in order to separate the increased error from the other state
variable errors. By performing this processing before the
correction processing, the state variable whose error has increased
can be corrected strongly, while the negative influence on other
state variables, such as the bias estimation value, is suppressed
to the minimum.
Angular Velocity Off-Scale Recovery Processing
[0080] Processing of Expression (41) is applied in order to handle
the posture angle and the error in motion velocity, which increase
by the off-scale state of the angular velocity sensor. That is, the
error component (posture error component) of the posture quaternion
q in the error covariance matrix .SIGMA..sub.x.sup.2 is
appropriately increased in response to an increase of the posture
angle error. A correlation component (covariance component) between
the posture error component and an error component other than the
posture error component is set to zero in order to separate the
posture error component from other error components, in the error
covariance matrix .SIGMA..sub.x.sup.2. The error component other
than the posture error component includes a motion velocity error
component, an error component of the residual bias b.sub..omega. of
the angular velocity sensor (bias error component of angular
velocity), an error component of the residual bias b.sub..alpha. of
the acceleration sensor (bias error component of acceleration), and
an error component of the gravitational-acceleration correction
value .DELTA.g. Setting the correlation component (covariance
component) to zero can include a value approximate to zero and a
value allowing a predetermined error component to be separated from
other error components.
[0081] The error component of the motion velocity vector v (motion
velocity error component) in the error covariance matrix
.SIGMA..sub.x.sup.2 is appropriately increased in response to an
increase of the motion velocity error. A correlation component
(covariance component) between the motion velocity error component
and an error component other than the motion velocity error
component is set to zero in order to separate the motion velocity
error component from other error components, in the error
covariance matrix .SIGMA..sub.x.sup.2. The error component other
than the motion velocity error component includes the posture error
component, the bias error component of the angular velocity, the
bias error component of the acceleration, and the error component
of the gravitational-acceleration correction value. In Expression
(41), .sigma..sub.oq.sup.2 indicates the variance of the posture
error at time of off-scale recovery, and .sigma..sub.ov.sup.2
indicates the variance of the motion velocity error at the time of
off-scale recovery.
\[
\Sigma_x^2 \leftarrow R_w\,\Sigma_x^2\,R_w^T + \Sigma_{Rw}^2
= \begin{bmatrix}
\frac{\sigma_{oq}^2}{4} D'^T D' & 0_{4\times3} & 0_{4\times3} & 0_{4\times3} & 0_4\\
0_{3\times4} & \sigma_{ov}^2 I_3 & 0_{3\times3} & 0_{3\times3} & 0_3\\
0_{3\times4} & 0_{3\times3} & \Sigma_\omega^2 & \Sigma_{\omega a} & \sigma_{\omega g}\\
0_{3\times4} & 0_{3\times3} & \Sigma_{\omega a}^T & \Sigma_a^2 & \sigma_{ag}\\
0_4^T & 0_3^T & \sigma_{\omega g}^T & \sigma_{ag}^T & \sigma_g^2
\end{bmatrix}
\]
\[
R_w = \begin{bmatrix} 0_{7\times7} & 0_{7\times7} \\ 0_{7\times7} & I_7 \end{bmatrix},
\qquad
\Sigma_{Rw}^2 = \begin{bmatrix}
\frac{\sigma_{oq}^2}{4} D'^T D' & 0_{4\times3} & 0_{4\times7}\\
0_{3\times4} & \sigma_{ov}^2 I_3 & 0_{3\times7}\\
0_{7\times4} & 0_{7\times3} & 0_{7\times7}
\end{bmatrix}
\]
\[
\Sigma_x^2 = \begin{bmatrix}
\Sigma_q^2 & \Sigma_{qv} & \Sigma_{q\omega} & \Sigma_{qa} & \sigma_{qg}\\
\Sigma_{qv}^T & \Sigma_v^2 & \Sigma_{v\omega} & \Sigma_{va} & \sigma_{vg}\\
\Sigma_{q\omega}^T & \Sigma_{v\omega}^T & \Sigma_\omega^2 & \Sigma_{\omega a} & \sigma_{\omega g}\\
\Sigma_{qa}^T & \Sigma_{va}^T & \Sigma_{\omega a}^T & \Sigma_a^2 & \sigma_{ag}\\
\sigma_{qg}^T & \sigma_{vg}^T & \sigma_{\omega g}^T & \sigma_{ag}^T & \sigma_g^2
\end{bmatrix},
\qquad
D'^T D' = \begin{bmatrix}
q_1^2+q_2^2 & -q_0 q_1 - q_2 q_3 & q_1 q_3 - q_0 q_2 & 0\\
-q_0 q_1 - q_2 q_3 & q_0^2+q_3^2 & 0 & q_0 q_2 - q_1 q_3\\
q_1 q_3 - q_0 q_2 & 0 & q_0^2+q_3^2 & -q_0 q_1 - q_2 q_3\\
0 & q_0 q_2 - q_1 q_3 & -q_0 q_1 - q_2 q_3 & q_1^2+q_2^2
\end{bmatrix}
\tag{41}
\]
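The structure of Expression (41) — keep the bias/gravity block, zero everything touching the posture and velocity errors, and re-seed those two blocks — can be sketched directly. The sketch assumes NumPy, the 14-state ordering of Expression (30), and takes the D'^T D' matrix of Expression (29) and the re-seed variances .sigma..sub.oq.sup.2 and .sigma..sub.ov.sup.2 of [0081] as inputs (names illustrative):

```python
import numpy as np

def gyro_offscale_recovery(P, dtd, sigma_oq, sigma_ov):
    """Expression (41) sketch: after gyro off-scale, inflate the
    posture (q, indices 0..3) and velocity (v, 4..6) error blocks and
    zero their cross-covariances, keeping the bias/gravity block
    (indices 7..13) of the 14x14 covariance P untouched."""
    R = np.zeros((14, 14))
    R[7:, 7:] = np.eye(7)                     # keep only bias/gravity errors
    S = np.zeros((14, 14))
    S[:4, :4] = (sigma_oq**2 / 4.0) * dtd     # re-seeded posture error
    S[4:7, 4:7] = sigma_ov**2 * np.eye(3)     # re-seeded velocity error
    return R @ P @ R.T + S
```

The congruence R P R^T zeroes every row and column of the posture and velocity errors, so the subsequent correction step acts strongly on those states without dragging the bias estimates along.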
Acceleration Off-Scale Recovery Processing
[0082] Processing of Expression (42) is applied in order to handle
the error in motion velocity, which increases by the off-scale
state of the acceleration sensor. That is, the motion velocity
error component in the error covariance matrix .SIGMA..sub.x.sup.2
is appropriately increased in response to an increase of the motion
velocity error. A correlation component (covariance component)
between the motion velocity error component and an error component
other than the motion velocity error component is set to zero in
order to separate the motion velocity error component from other
state variable errors, in the error covariance matrix
.SIGMA..sub.x.sup.2.
\[
\Sigma_x^2 \leftarrow R_a\,\Sigma_x^2\,R_a^T + \Sigma_{Ra}^2
= \begin{bmatrix}
\Sigma_q^2 & 0_{4\times3} & \Sigma_{q\omega} & \Sigma_{qa} & \sigma_{qg}\\
0_{3\times4} & \sigma_{ov}^2 I_3 & 0_{3\times3} & 0_{3\times3} & 0_3\\
\Sigma_{q\omega}^T & 0_{3\times3} & \Sigma_\omega^2 & \Sigma_{\omega a} & \sigma_{\omega g}\\
\Sigma_{qa}^T & 0_{3\times3} & \Sigma_{\omega a}^T & \Sigma_a^2 & \sigma_{ag}\\
\sigma_{qg}^T & 0_3^T & \sigma_{\omega g}^T & \sigma_{ag}^T & \sigma_g^2
\end{bmatrix}
\]
\[
R_a = \begin{bmatrix}
I_4 & 0_{4\times3} & 0_{4\times7}\\
0_{3\times4} & 0_{3\times3} & 0_{3\times7}\\
0_{7\times4} & 0_{7\times3} & I_7
\end{bmatrix},
\qquad
\Sigma_{Ra}^2 = \begin{bmatrix}
0_{4\times4} & 0_{4\times3} & 0_{4\times7}\\
0_{3\times4} & \sigma_{ov}^2 I_3 & 0_{3\times7}\\
0_{7\times4} & 0_{7\times3} & 0_{7\times7}
\end{bmatrix}
\tag{42}
\]
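Expression (42) is the same pattern restricted to the velocity error alone. A minimal sketch (NumPy; 14-state ordering per Expression (30); names illustrative):

```python
import numpy as np

def accel_offscale_recovery(P, sigma_ov):
    """Expression (42) sketch: after accelerometer off-scale, re-seed
    only the velocity error block (indices 4..6) of the 14x14
    covariance P and zero its correlations, keeping the posture and
    bias/gravity errors."""
    R = np.eye(14)
    R[4:7, 4:7] = 0.0                      # drop the velocity state's errors
    S = np.zeros((14, 14))
    S[4:7, 4:7] = sigma_ov**2 * np.eye(3)  # re-seeded velocity error
    return R @ P @ R.T + S
```

Only the velocity rows and columns change; every other block of the covariance is preserved, matching the block layout shown in Expression (42).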
1-2. Flowchart of Posture Estimation Method
[0083] FIG. 1 is a flowchart illustrating an example of procedures
of a posture estimation method according to the embodiment. The
procedures in FIG. 1 are performed by a posture estimation device
that estimates the posture of an object to which an IMU is
attached, for example. The object is not particularly limited, and
for example, a vehicle, an electronic device, exercise equipment, a
person, and an animal may be provided as the object. The IMU may be
detachable from the object. The IMU may be provided in a state
where the IMU is fixed to the object, and thus detaching the IMU is
not possible, for example, the IMU is mounted in the object. For
example, the posture estimation device may be a personal computer
(PC) or may be various portable devices such as a smart phone.
[0084] As illustrated in FIG. 1, in the posture estimation method
in the embodiment, firstly, the posture estimation device
initializes the state vector x.sub.k and the error covariance
matrix .SIGMA..sub.x, k.sup.2 as error information (initialization
step S1). That is, the posture estimation device sets k=0 and sets
(initializes) the state vector x.sub.0 and the error covariance
matrix .SIGMA..sub.x, 0.sup.2 at a time point t.sub.0. Specifically, the
posture estimation device sets the elements q.sub.0, v.sub.0,
b.sub..omega., 0, b.sub..alpha., 0, and .DELTA.g.sub.0 of the state
vector x.sub.0 and .SIGMA..sub.q, 0.sup.2, .SIGMA..sub.v, 0.sup.2,
.SIGMA..sub.b.omega., 0.sup.2, .SIGMA..sub.b.alpha., 0.sup.2, and
.sigma..sub..DELTA.g, 0.sup.2 included in the error covariance
matrix .SIGMA..sub.x, 0.sup.2, which are expressed as in Expression
(30).
[0085] For example, the posture estimation device may set the
initial posture of the inertial measurement unit (IMU) to have a
roll angle, a pitch angle, and a yaw angle which have been
determined in advance, and may set q.sub.0 by substituting the roll
angle, the pitch angle, and the yaw angle into Expression (31).
Alternatively, the posture estimation device may acquire
acceleration data from the acceleration sensor in a state where the
inertial measurement unit (IMU) is stopped. The posture estimation
device may specify a direction of the gravitational acceleration
from the acceleration data, calculate a roll angle and a pitch
angle, and set the yaw angle to a predetermined value (for example,
0). Then, the posture estimation device may set q.sub.0 by
substituting the roll angle, the pitch angle, and the yaw angle
into Expression (31). The posture estimation device sets
.SIGMA..sub.q, 0.sup.2 by substituting RMS.sigma..sub..PHI. of a
roll angle error and RMS.sigma..sub..theta. of a pitch angle error
into Expression (32).
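The static initialization of [0085] — roll and pitch from the gravity direction, yaw fixed by convention — can be sketched as follows. The sign conventions assume a sensor whose z-axis points up when level and that measures specific force (so a level, stationary accelerometer reads +1 G on z); the patent does not fix the axis convention, so this is an illustrative assumption:

```python
import numpy as np

def roll_pitch_from_gravity(acc):
    """Roll and pitch [rad] from a static accelerometer reading, per
    [0085]; the yaw angle is set separately to a predetermined value
    (for example, 0). Assumes z-up sensor axes (illustrative)."""
    ax, ay, az = acc
    phi = np.arctan2(ay, az)                     # roll
    theta = np.arctan2(-ax, np.hypot(ay, az))    # pitch
    return phi, theta
```

The resulting angles, together with the chosen yaw, are substituted into Expression (31) to obtain q.sub.0.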
[0086] For example, the posture estimation device sets a state
where the inertial measurement unit (IMU) is stopped to be an
initial state and sets v.sub.0 to 0. Then, the posture estimation
device sets .SIGMA..sub.v, 0.sup.2 by substituting
RMS.sigma..sub.vx, RMS.sigma..sub.vy, and RMS.sigma..sub.vz of the
error of the motion velocity vector v in the axes into Expression
(33).
[0087] If the residual bias b.sub..omega. of the angular velocity
sensor and the residual bias b.sub..alpha. of the acceleration
sensor are known, the posture estimation device sets the values of
the residual biases to b.sub..omega., 0 and b.sub..alpha., 0. If
b.sub..omega. and b.sub..alpha. are not known, the posture
estimation device sets b.sub..omega., 0 and b.sub..alpha., 0 to
zero. The posture estimation device sets .SIGMA..sub.b.omega.,
0.sup.2 and .SIGMA..sub.b.alpha., 0.sup.2 by substituting the
errors RMS.sigma..sub.b.omega.x, RMS.sigma..sub.b.omega.y, and
RMS.sigma..sub.b.omega.z of the residual bias of the angular
velocity sensor in the axes and the errors
RMS.sigma..sub.b.alpha.x, RMS.sigma..sub.b.alpha.y, and
RMS.sigma..sub.b.alpha.z of the residual bias of the acceleration
sensor in the axes into Expression (34).
[0088] If the gravitational acceleration value is known, the
posture estimation device sets the difference from 1 G to
.DELTA.g.sub.0. If the gravitational acceleration value is not
known, the posture estimation device sets .DELTA.g.sub.0 to zero.
The posture estimation device sets RMS.sigma..sub..DELTA.g of the
error of the gravitational acceleration value to
.sigma..sub..DELTA.g, 0.sup.2.
[0089] Then, the posture estimation device acquires a measured
value of the inertial measurement unit (IMU) (measured-value
acquisition step S2). Specifically, the posture estimation device
waits until the sampling interval .DELTA.t elapses. If the sampling
interval .DELTA.t elapses, the posture estimation device sets k=k+1
and t.sub.k=t.sub.k-1+.DELTA.t and acquires angular velocity data
d.sub..omega., k and acceleration data d.sub..alpha., k from the
inertial measurement unit (IMU).
[0090] Then, the posture estimation device performs prediction
processing (also referred to as time update processing) of the
state vector x.sub.k (including the posture quaternion q.sub.k
being the posture information at a time point t.sub.k, as an
element) and the error covariance matrix .SIGMA..sub.x, k.sup.2
being error information at the time point t.sub.k (prediction step
S3).
[0091] FIG. 2 is a flowchart illustrating an example of procedures
of the step S3 in FIG. 1. As illustrated in FIG. 2, firstly, the
posture estimation device performs processing of removing the
residual biases b.sub..omega., k-1 and b.sub..alpha., k-1 which
have been estimated at a time point t.sub.k-1, from angular
velocity data d.sub..omega., k and acceleration data d.sub..alpha.,
k at the time point t.sub.k. The posture estimation device performs
the above processing with Expression (19) (bias removal step
S31).
[0092] Then, the posture estimation device performs processing
(posture-change-amount calculation processing) of calculating the
posture change amount .DELTA.q.sub.k at the time point t.sub.k
based on the output of the angular velocity sensor
(posture-change-amount calculation step S32). Specifically, the
posture estimation device calculates the posture change amount
.DELTA.q.sub.k with Expression (6), based on the angular velocity
data d.sub..omega., k.
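Expression (6) itself lies outside this excerpt, so as a generic stand-in, the posture change amount over one sampling interval can be formed from the rotation vector .omega..DELTA.t in the standard axis-angle way (NumPy; the function name is illustrative):

```python
import numpy as np

def posture_change_quaternion(omega, dt):
    """Unit quaternion for the rotation omega*dt (axis-angle form);
    a generic stand-in for the patent's Expression (6), which is not
    reproduced in this excerpt."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])   # no measurable rotation
    axis = omega / np.linalg.norm(omega)
    return np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])
```

The predicted posture is then the quaternion product of the previous posture with this increment, as in the posture information prediction step S34.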
[0093] The posture estimation device performs processing
(velocity-change-amount calculation processing) of calculating the
velocity change amount (C.sub.k.lamda..sub.k-g.sub.k-1).DELTA.t of
the object based on the output of the acceleration sensor and the
output of the angular velocity sensor (velocity-change-amount
calculation step S33). Specifically, the posture estimation device
calculates the velocity change amount
(C.sub.k.lamda..sub.k-g.sub.k-1).DELTA.t with Expressions (9),
(10), and (12), based on the acceleration data d.sub..alpha., k and
the angular velocity data d.sub..omega., k.
[0094] The posture estimation device performs processing (posture
information prediction processing) of predicting the posture
quaternion q.sub.k being posture information of the object at the
time point t.sub.k, by using the posture change amount
.DELTA.q.sub.k (posture information prediction step S34). In the
posture information prediction step S34, the posture estimation
device further performs processing (velocity information prediction
processing) of predicting the motion velocity vector v.sub.k being
velocity information of the object at the time point t.sub.k by
using the velocity change amount
(C.sub.k.lamda..sub.k-g.sub.k-1).DELTA.t. Specifically, in the
posture information prediction step S34, the posture estimation
device performs processing of predicting the posture quaternion
q.sub.k and the state vector x.sub.k including the motion velocity
vector v.sub.k as the element, by Expressions (6), (12), and
(19).
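The prediction in step S34 composes the previous posture estimate with the posture change amount by quaternion multiplication and accumulates the velocity change; a hedged sketch (Hamilton product, w-first ordering assumed; the patent's Expressions (6), (12), and (19) are not reproduced):

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton quaternion product p (x) q, w-first component ordering."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def predict(q_prev, v_prev, dq, dv):
    """Predict the posture quaternion q_k and motion velocity vector v_k
    from the previous estimates and the change amounts."""
    return quat_mul(q_prev, dq), np.asarray(v_prev, float) + np.asarray(dv, float)
```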
[0095] Lastly, the posture estimation device performs processing of
updating the error covariance matrix .SIGMA..sub.x, k.sup.2 at the
time point t.sub.k by Expressions (20) and (21) (error information
update step S35).
[0096] Returning to FIG. 1, the posture estimation device performs
processing (rotational error component removal processing) of
removing a rotational error component around a reference vector in
the error covariance matrix .SIGMA..sub.x, k.sup.2 being the error
information (rotational error-component removal step S4). The
reference vector is a vector observed by the observation section.
In the embodiment, the reference vector is a gravitational
acceleration vector observed by the acceleration sensor being the
observation section. In the embodiment, a rotation error around the
reference vector is an azimuth error. Thus, in the step S4, the
posture estimation device performs processing of removing the
azimuth error in the error covariance matrix .SIGMA..sub.x,
k.sup.2. Specifically, the posture estimation device generates the
error covariance matrix .SIGMA..sub.x, k.sup.2 in which the rank
limitation and removal of the azimuth error component
.epsilon..sub..theta.z in the error covariance matrix
.SIGMA..sub.q, k.sup.2 of the posture quaternion q.sub.k are
performed, by Expressions (28) and (29).
[0097] Then, the posture estimation device performs off-scale
recovery processing (error information adjustment step S5).
Specifically, in the error information adjustment step S5, the
posture estimation device determines whether or not the output of
the angular velocity sensor is within the effective range. When it
is determined that the output of the angular velocity sensor is not
within the effective range, the posture estimation device performs
processing of increasing the posture error component in the error
covariance matrix .SIGMA..sub.x, k.sup.2 being the error
information and reducing the correlation component between the
posture error component and the error component other than the
posture error component, in the error covariance matrix
.SIGMA..sub.x, k.sup.2. In
the error information adjustment step S5, the posture estimation
device determines whether or not the output of the acceleration
sensor is within the effective range. When it is determined that
the output of the angular velocity sensor or the output of the
acceleration sensor is not within the corresponding effective
range, the posture estimation device performs processing of
increasing the motion velocity error component in the error
covariance matrix .SIGMA..sub.x, k.sup.2 being the error
information and reducing the correlation component between the
motion velocity error component and the error component other than
the motion velocity error component in the error covariance matrix
.SIGMA..sub.x, k.sup.2.
[0098] FIG. 3 is a flowchart illustrating an example of procedures
of the step S5 in FIG. 1. As illustrated in FIG. 3, firstly, the
posture estimation device determines whether or not the output
(angular velocity data d.sub..omega.) of the angular velocity
sensor is within the effective range (Step S51). When it is
determined that the output of the angular velocity sensor is within
the effective range (Y in Step S51), the posture estimation device
checks the remaining time of the angular-velocity off-scale recovery
period (Step S52a). When it is determined that the output of the
angular velocity sensor is not within the effective range (N in
Step S51), the posture estimation device starts or prolongs the
angular-velocity off-scale recovery period (Step S52b).
[0099] Then, the posture estimation device determines whether or
not the output (acceleration data d.sub..alpha.) of the
acceleration sensor is within the effective range (Step S53). When
the posture estimation device determines that the output of the
acceleration sensor is within the effective range (Y in Step S53),
the posture estimation device checks the remaining time of the
acceleration off-scale recovery period (Step S54a). When the
posture estimation device determines that the output of the
acceleration sensor is not within the effective range (N in Step
S53), the posture estimation device starts or prolongs the
acceleration off-scale recovery period (Step S54b).
[0100] Then, the posture estimation device determines whether or
not the current time is in the angular-velocity off-scale recovery
period as a first period after it is determined that the output of
the angular velocity sensor is not within the effective range (Step
S55). When the posture estimation device determines that the
current time is in the angular-velocity off-scale recovery period
(Y in Step S55), the posture estimation device increases the
posture error component in the error covariance matrix
.SIGMA..sub.x, k.sup.2 being the error information (Step S56a). The
posture estimation device reduces the correlation component between
the posture error component and the error component other than the
posture error component in the error covariance matrix
.SIGMA..sub.x, k.sup.2, for example, sets the correlation component
to zero (Step S56b). The posture estimation device increases the
motion velocity error component in the error covariance matrix
.SIGMA..sub.x, k.sup.2 (Step S58a). The posture estimation device
reduces the correlation component between the motion velocity error
component and the error component other than the motion velocity
error component in the error covariance matrix .SIGMA..sub.x,
k.sup.2, for example, sets the correlation component to zero (Step
S58b). In practice, the posture estimation device performs the
processes of Step S56a, Step S56b, Step S58a, and Step S58b by
updating the error covariance matrix .SIGMA..sub.x, k.sup.2 with
Expression (41).
[0101] When the posture estimation device determines that the
current time is not in the angular-velocity off-scale recovery
period (N in Step S55), the posture estimation device determines
whether or not the current time is in the acceleration off-scale
recovery period as a second period after it is determined that the
output of the acceleration sensor is not within the effective range
(Step S57). When the posture estimation device determines that the
current time is in the acceleration off-scale recovery period (Y in
Step S57), the posture estimation device increases the motion
velocity error component in the error covariance matrix
.SIGMA..sub.x, k.sup.2 (Step S58a). The posture estimation device
reduces the correlation component between the motion velocity error
component and the error component other than the motion velocity
error component in the error covariance matrix .SIGMA..sub.x,
k.sup.2, for example, sets the correlation component to zero (Step
S58b). In practice, the posture estimation device performs the
processes of Step S58a and Step S58b by updating the error
covariance matrix .SIGMA..sub.x, k.sup.2 with Expression (42).
[0102] When the posture estimation device determines that the
current time is in neither the angular-velocity off-scale recovery
period nor the acceleration off-scale recovery period (N in Step
S55 and N in Step S57), the posture estimation device does not
perform the processes of Step S56a, Step S56b, Step S58a, and Step
S58b.
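The branch structure of FIG. 3 (Steps S55 to S58b) can be sketched as follows; a hypothetical illustration only: the index sets and the inflation factor are assumptions, and the actual updates are Expressions (41) and (42), which are not reproduced here:

```python
import numpy as np

def off_scale_adjust(sigma_x2, in_gyro_recovery, in_accel_recovery,
                     posture_idx, velocity_idx, inflate=100.0):
    """During the angular-velocity off-scale recovery period, increase the
    posture and motion-velocity error blocks of the error covariance
    matrix and zero their correlations with the other error components;
    during the acceleration off-scale recovery period only, do the same
    for the motion-velocity block alone."""
    S = np.array(sigma_x2, float)

    def adjust_block(S, idx):
        others = [i for i in range(S.shape[0]) if i not in idx]
        block = S[np.ix_(idx, idx)] * inflate   # increase error component
        S[np.ix_(idx, others)] = 0.0            # zero the correlation
        S[np.ix_(others, idx)] = 0.0            # components (S56b / S58b)
        S[np.ix_(idx, idx)] = block
        return S

    if in_gyro_recovery:                        # Y in Step S55
        S = adjust_block(S, posture_idx)        # Steps S56a / S56b
        S = adjust_block(S, velocity_idx)       # Steps S58a / S58b
    elif in_accel_recovery:                     # Y in Step S57
        S = adjust_block(S, velocity_idx)       # Steps S58a / S58b
    return S
```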
[0103] Returning to FIG. 1, the posture estimation device performs
processing (bias error limitation processing) of limiting the bias
error component of the angular velocity around the reference
vector, in the error covariance matrix .SIGMA..sub.x, k.sup.2 being
the error information (bias error limitation step S6). As described
above, in the embodiment, the reference vector is the gravitational
acceleration vector. Thus, in Step S6, the posture estimation
device limits the bias error component of the angular velocity
around the gravitational acceleration vector, that is, limits the
vertical component (yaw-axis component) of the bias error of the
angular velocity.
[0104] FIG. 4 is a flowchart illustrating an example of procedures
of Step S6 in FIG. 1. As illustrated in FIG. 4, firstly, the
posture estimation device performs processing of calculating the
variance .sigma..sub.b.omega.v.sup.2 of the vertical component of
the bias error in the angular velocity with Expression (39) and the
state vector x.sub.k and the error covariance matrix .SIGMA..sub.x,
k.sup.2 at the time point t.sub.k (bias-error vertical component
calculation step S61).
[0105] Then, the posture estimation device performs processing of
determining whether or not the variance .sigma..sub.b.omega.v.sup.2
exceeds the upper limit value (bias error determination step
S62).
[0106] When the variance .sigma..sub.b.omega.v.sup.2 exceeds the
upper limit value (Y in Step S62), the posture estimation device
performs limitation operation processing of limiting the vertical
component of the bias error in the angular velocity (limitation
operation step S63). Specifically, when the variance
.sigma..sub.b.omega.v.sup.2 exceeds the upper limit value
.sigma..sub.b.omega.max.sup.2, the posture estimation device
updates the error covariance matrix .SIGMA..sub.x, k.sup.2 by
Expression (40). When the variance .sigma..sub.b.omega.v.sup.2 is
equal to or less than the upper limit value (N in Step S62), the
posture estimation device does not perform the limitation operation
processing in Step S63.
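The decision logic of Steps S62 and S63 reduces to a clamp on the variance; a hedged sketch (the actual limitation operates on the full error covariance matrix via Expression (40), which is not reproduced here):

```python
def limit_vertical_bias_variance(var_bv, var_max):
    """Steps S62/S63 (sketch): when the variance of the vertical
    component of the angular-velocity bias error exceeds the upper
    limit, limit it to that upper limit; otherwise leave it unchanged."""
    if var_bv > var_max:          # Y in Step S62
        return var_max            # limitation operation, Step S63
    return var_bv                 # N in Step S62: no limitation
```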
[0107] Returning to FIG. 1, the posture estimation device performs
correction processing (also referred to as observation update
processing) of the state vector x.sub.k and the error covariance
matrix .SIGMA..sub.x, k.sup.2 at the time point t.sub.k (correction
step S7). In the embodiment, the posture estimation device performs
processing (posture information correction step) of correcting the
posture quaternion q.sub.k being the posture information of the
object, which has been predicted in Step S3, based on the error
covariance matrix .SIGMA..sub.x, k.sup.2 being the error
information. Specifically, in the posture information correction
step, the posture estimation device corrects the state vector
x.sub.k including the posture quaternion q.sub.k being the posture
information as the element, based on the observation residual
.DELTA.z.sub.a, k being a difference between the acceleration
vector (obtained based on the output of the acceleration sensor)
and the gravitational acceleration vector being the reference
vector. FIG. 5 is a flowchart illustrating an example of procedures
of Step S7 in FIG. 1.
[0108] As illustrated in FIG. 5, firstly, the posture estimation
device performs processing of calculating the observation residual
.DELTA.z.sub.k, a Kalman coefficient K.sub.k, and a transformation
matrix H.sub.k at the time point t.sub.k by Expressions (23) and
(24) (correction coefficient calculation step S71).
[0109] Then, the posture estimation device performs processing
(posture information correction processing) of correcting the
posture quaternion q.sub.k being the predicted posture information
of the object at the time point t.sub.k (posture information
correction step S72). Specifically, the posture estimation device
performs processing of correcting the state vector x.sub.k at the
time point t.sub.k with Expression (27), the observation residual
.DELTA.z.sub.k, and the Kalman coefficient K.sub.k.
[0110] The posture estimation device performs processing of
normalizing the state vector x.sub.k at the time point t.sub.k by
Expression (28) (normalization step S73).
[0111] Then, the posture estimation device performs processing of
correcting the error covariance matrix .SIGMA..sub.x, k.sup.2 at
the time point t.sub.k with Expression (27), the Kalman coefficient
K.sub.k, and the transformation matrix H.sub.k (error information
correction step S74).
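Steps S71 to S74 follow the shape of a standard extended Kalman filter observation update; a generic sketch (the patent's specific Expressions (23), (24), (27), and (28) are not reproduced, and H and R here are the usual observation matrix and measurement-noise covariance):

```python
import numpy as np

def ekf_observation_update(x, Sigma, z_res, H, R):
    """Generic EKF observation update: Kalman gain K, state correction
    x + K @ z_res, and covariance correction (I - K @ H) @ Sigma."""
    S = H @ Sigma @ H.T + R                        # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)             # Kalman coefficient (S71)
    x_new = x + K @ z_res                          # state correction (S72)
    Sigma_new = (np.eye(len(x)) - K @ H) @ Sigma   # covariance correction (S74)
    return x_new, Sigma_new

def normalize_quaternion(q):
    """Normalization step S73, applied to the posture quaternion part."""
    return np.asarray(q, float) / np.linalg.norm(q)
```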
[0112] Returning to FIG. 1, the posture estimation device repeats
the processes of Step S2 to Step S7 until an end instruction is
received (N in Step S8). If the end instruction is received (Y in
Step S8), the posture estimation device ends the processing.
[0113] The order of the steps in FIG. 1 can be appropriately
changed. For example, the order of Step S4, Step S5, and Step S6
may be changed, and at least one of Step S4, Step S5, and Step S6
may be performed after Step S7.
[0114] As described above, according to the posture estimation
method in the embodiment, the posture change amount and the
velocity change amount of an object are calculated with Expressions
(6) and (12) derived from Expressions (1) and (2) of the output
model of the IMU. Then, the posture of the object is estimated with
the posture change amount and the velocity change amount. In
Expressions (6) and (12), a calculation error in the posture change
amount or the velocity change amount is reduced in comparison to
that in the related art, by calculating the posture change amount
and the velocity change amount with not only the first-order term
of .DELTA.t but also the second-order term and the third-order term
thereof.
[0115] If the object rotates, the coordinate transformation matrix
C.sub.k changes. However, the coordinate transformation matrix
C.sub.k is calculated from the elements of the posture quaternion
q.sub.k estimated by the Kalman filter. Thus, when the object
rotates rapidly, the coordinate transformation matrix C.sub.k may
not immediately follow the rotation of the object. Regarding this,
according to the posture estimation method in the embodiment, since
the acceleration .lamda..sub.k is calculated with not only the
acceleration but also the angular velocity in Expression (12), the
rotation of the object is immediately reflected in the acceleration
.lamda..sub.k. Thus, even when the object rotates rapidly, it is
possible to reduce deterioration of calculation accuracy of the
velocity change amount.
[0116] Further, according to the posture estimation method in the
embodiment, with Expression (23), the observation residual of the
gravitational acceleration by the output of the acceleration sensor
and the motion velocity of the object become zero over the long term,
and the Kalman coefficient K.sub.k in Expression (24) is calculated
with the observation residual of the zero motion velocity. Thus,
even though observation information of the azimuth is not provided,
it is possible to estimate the posture of the object with high
accuracy.
[0117] In the posture estimation method in the embodiment, when the
output of the angular velocity sensor is not within the effective
range, the posture error component in the error covariance matrix
.SIGMA..sub.x, k.sup.2 is increased, and the correlation component
between the posture error component and the error component other
than the posture error component is set to zero, by Expression
(41). Accordingly, even when an angular velocity exceeding the
effective measurement range of the angular velocity sensor is
temporarily input and the output of the angular velocity sensor
temporarily falls outside the effective range, it is possible to
reduce the concern of decreasing the estimation accuracy of the
posture of the object.
[0118] In the posture estimation method in the embodiment, when the
output of the angular velocity sensor is not within the effective
range, the motion velocity error component in the error covariance
matrix .SIGMA..sub.x, k.sup.2 is increased, and the correlation
component between the motion velocity error component and the error
component other than the motion velocity error component is set to
zero, by Expression (41). In addition, when the output of the
acceleration sensor is not within the effective range, the motion
velocity error component in the error covariance matrix
.SIGMA..sub.x, k.sup.2 is increased, and the correlation component
between the motion velocity error component and the error component
other than the motion velocity error component is set to zero, by
Expression (42). Accordingly, even when an angular velocity
exceeding the effective measurement range of the angular velocity
sensor or an acceleration exceeding the effective measurement range
of the acceleration sensor is temporarily input, and the output of
the angular velocity sensor or the output of the acceleration
sensor temporarily falls outside the corresponding effective range,
it is possible to reduce the concern of decreasing the estimation
accuracy of the motion velocity of the object.
[0119] Since information regarding the azimuth of the object is not
included in the output of the angular velocity sensor and the
output of the acceleration sensor, the azimuth error of the object
is not corrected. However, if the azimuth error included in the
updated posture error remains, reliability of the azimuth error is
monotonically reduced, and the posture error monotonically increases.
Thus, the estimation accuracy of the posture may be deteriorated.
Regarding this, according to the posture estimation method in the
embodiment, the azimuth error included in the posture error of the
object, which has been updated by using the output of an angular
velocity sensor 12 and the output of an acceleration sensor 14 is
removed by Expression (17). Thus, it is possible to reduce a
concern of monotonically increasing the posture error and to reduce
a concern of decreasing the estimation accuracy of the posture.
[0120] The output of the angular velocity sensor and the output of
the acceleration sensor do not include the information regarding
the azimuth of the object. Thus, for example, when the object
maintains substantially the same posture for a long time, the
vertical component of the updated bias error of the angular
velocity sensor monotonically increases, and the Kalman coefficient
K.sub.k becomes too large. Consequently, there is a concern of
deteriorating the estimation accuracy of the posture. Regarding
this, according to the posture estimation method in the embodiment,
when the vertical component of the bias error in the angular
velocity sensor exceeds the upper limit value, the vertical
component of the bias error is limited to the upper limit value by
Expression (40). Thus, it is possible to reduce the concern that
the Kalman coefficient K.sub.k becomes too large and to reduce the
concern of decreasing the estimation accuracy of the posture.
[0121] With the above descriptions, according to the posture
estimation method in the embodiment, it is possible to reduce a
concern of decreasing the estimation accuracy of the posture of the
object even when the posture change of the object is small and to
estimate the posture of the object with sufficient accuracy.
2. Posture Estimation Device
2-1. Configuration of Posture Estimation Device
[0122] FIG. 6 is a diagram illustrating an example of a
configuration of the posture estimation device in the embodiment.
As illustrated in FIG. 6, in the embodiment, a posture estimation
device 1 includes a processing unit 20, a ROM 30, a RAM 40, a
recording medium 50, and a communication unit 60. The posture
estimation device 1 estimates the posture of an object based on an
output of an inertial measurement unit (IMU) 10. In the embodiment,
the posture estimation device 1 may have a configuration obtained
by changing or removing some components or by adding another
component.
[0123] As illustrated in FIG. 6, in the embodiment, the posture
estimation device 1 is separated from the IMU 10. However, the
posture estimation device 1 may include the IMU 10. The IMU 10 and
the posture estimation device 1 may be accommodated in one casing.
The IMU 10 may be separated or separable from the main body in
which the posture estimation device 1 is accommodated. In the
former case, the posture estimation device 1 is mounted on an
object. In the latter case, the IMU 10 is mounted on the
object.
[0124] In the embodiment, the IMU 10 includes the angular velocity
sensor 12, the acceleration sensor 14, and a signal processing unit
16. In the embodiment, the IMU 10 may have a configuration obtained
by changing or removing some components or by adding another
component.
[0125] The angular velocity sensor 12 measures an angular velocity
in each of the directions of three axes that intersect each other
and, ideally, are perpendicular to each other. The angular
velocity sensor 12 outputs an analog signal depending on the
magnitude and the orientation of the measured three-axis angular
velocity.
[0126] The acceleration sensor 14 measures an acceleration in each
of the directions of three axes that intersect each other and,
ideally, are perpendicular to each other. The acceleration sensor 14
outputs an analog signal depending on the magnitude and the
orientation of the measured three-axis acceleration.
[0127] The signal processing unit 16 samples the output signal of
the angular velocity sensor 12 at a predetermined sampling interval
.DELTA.t and converts the sampled signal into angular velocity data
having a digital value. Similarly, the signal processing unit 16
samples the output signal of the acceleration sensor 14 at the
predetermined sampling interval .DELTA.t and converts the sampled
signal into acceleration data having a digital value.
[0128] Ideally, the angular velocity sensor 12 and the acceleration
sensor 14 are attached to the IMU 10 such that the three axes
coincide with three axes (x-axis, y-axis, and z-axis) of the sensor
coordinate system which is an orthogonal coordinate system defined
for the IMU 10. However, in practice, an error occurs in a mounting
angle. Thus, the signal processing unit 16 performs processing of
converting the angular velocity data and the acceleration data into
data in an xyz coordinate system, by using a correction parameter
which has been calculated in advance in accordance with the error
in the mounting angle. The signal processing unit 16 also performs
processing of temperature-correcting the angular velocity data and
the acceleration data in accordance with the temperature
characteristics of the angular velocity sensor 12 and the
acceleration sensor 14.
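The mounting-angle correction described above reduces to a single matrix multiply; a hypothetical illustration (the correction matrix M below stands in for the pre-calculated correction parameter, whose form the patent does not specify):

```python
import numpy as np

def align_axes(raw_xyz, M):
    """Map raw three-axis data onto the xyz sensor coordinate system
    using a pre-calibrated mounting-misalignment matrix M (assumed
    close to the identity for a small mounting-angle error)."""
    return np.asarray(M, float) @ np.asarray(raw_xyz, float)
```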
[0129] The functions of A/D conversion and temperature correction
may instead be embedded in the angular velocity sensor 12 and the
acceleration sensor 14.
[0130] The IMU 10 outputs angular velocity data d.sub..omega. and
acceleration data d.sub..alpha. after the processing by the signal
processing unit 16 to the processing unit 20 of the posture
estimation device 1.
[0131] The ROM 30 stores programs used when the processing unit 20
performs various types of processing, and various programs or
various types of data for realizing application functions, for
example.
[0132] The RAM 40 is a storage unit that is used as a work area of
the processing unit 20, and temporarily stores a program or data
read out from the ROM 30 or operation results obtained by the
processing unit 20 performing processing in accordance with various
programs, for example.
[0133] The recording medium 50 is a non-volatile storage unit that
stores data required to be preserved for a long term among pieces
of data generated by processing of the processing unit 20. The
recording medium 50 may store programs used when the processing
unit 20 performs various types of processing, and various programs
or various types of data for realizing application functions, for
example.
[0134] The processing unit 20 performs various types of processing
in accordance with the program stored in the ROM 30 or the
recording medium 50 or in accordance with the program which is
received from a server via a network and then is stored in the RAM
40 or the recording medium 50. In particular, in the embodiment,
the processing unit 20 executes the program to function as a bias
removal unit 22, a posture-change-amount calculation unit 24, a
velocity-change-amount calculation unit 26, and a posture
estimation unit 28. Thus, the processing unit 20 performs a
predetermined operation on the angular velocity data d.sub..omega.
and the acceleration data d.sub..alpha. output at an interval of
.DELTA.t by the IMU 10 so as to perform processing of estimating
the posture of the object.
[0135] In the embodiment, as illustrated in FIG. 7, the sensor
coordinate system (xyz coordinate system constituted by the x-axis,
the y-axis, and the z-axis which are perpendicular to each other)
as the coordinate system of the IMU 10 and a local-space coordinate
system (XYZ coordinate system constituted by an X-axis, a Y-axis,
and a Z-axis which are perpendicular to each other) as a coordinate
system of a space in which the object exists are considered. The
processing unit 20 estimates the posture of the object (may also be
referred to as the posture of the IMU 10) in the local-space
coordinate system from the three-axis angular velocity and the
three-axis acceleration in the sensor coordinate system, which are
output from the IMU 10 mounted on the object.
[0136] The bias removal unit 22 performs processing of calculating
the three-axis angular velocity obtained by removing a bias error
from the output of the angular velocity sensor 12 and performs
processing of calculating the three-axis acceleration obtained by
removing a bias error from the output of the acceleration sensor
14.
[0137] The posture-change-amount calculation unit 24 calculates the
posture change amount of the object based on the output of the
angular velocity sensor 12. Specifically, the posture-change-amount
calculation unit 24 performs processing of calculating the posture
change amount of the object by approximation with a polynomial
expression in which the sampling interval .DELTA.t is used as a
variable. The posture-change-amount calculation unit 24 performs
the processing with the three-axis angular velocity in which the
bias error has been removed by the bias removal unit 22.
[0138] The velocity-change-amount calculation unit 26 calculates
the velocity change amount of the object based on the output of the
acceleration sensor 14 and the output of the angular velocity
sensor 12. Specifically, the velocity-change-amount calculation
unit 26 performs processing of calculating the velocity change
amount of the object with the three-axis angular velocity and the
three-axis acceleration in which the bias error has been removed by
the bias removal unit 22.
[0139] The posture estimation unit 28 functions as an integration
calculation unit 101, a posture information prediction unit 102, an
error information update unit 103, a correction coefficient
calculation unit 104, a posture information correction unit 105, a
normalization unit 106, an error information correction unit 107, a
rotational error-component removal unit 108, a bias error
limitation unit 109, and an error information adjustment unit 110.
The posture estimation unit 28 performs processing of estimating
the posture of the object with the posture change amount calculated
by the posture-change-amount calculation unit 24 and the velocity
change amount calculated by the velocity-change-amount calculation
unit 26. In practice, the posture estimation unit 28 performs
processing of estimating a state vector x defined in Expression
(18) and an error covariance matrix .SIGMA..sub.x.sup.2 thereof
with an extended Kalman filter.
[0140] The integration calculation unit 101 performs integration
processing of integrating the posture change amount calculated by
the posture-change-amount calculation unit 24 with the previous
estimated value of the posture, which has been corrected by the
posture information correction unit 105 and normalized by the
normalization unit 106. The integration calculation unit 101
performs integration processing of integrating the velocity change
amount calculated by the velocity-change-amount calculation unit 26
with the previous estimated value of the velocity, which has been
corrected by the posture information correction unit 105 and
normalized by the normalization unit 106.
[0141] The posture information prediction unit 102 performs
processing of predicting posture quaternion q as posture
information of the object, with the posture change amount
calculated by the posture-change-amount calculation unit 24. The
posture information prediction unit 102 also performs processing of
predicting a motion velocity vector v as velocity information of
the object, based on the velocity change amount calculated by the
velocity-change-amount calculation unit 26. In practice, the
posture information prediction unit 102 performs processing of
predicting the state vector x including the posture quaternion q
and the motion velocity vector v as elements.
[0142] The error information update unit 103 performs processing of
updating the error covariance matrix .SIGMA..sub.x.sup.2 as error
information, based on the output of the angular velocity sensor 12.
Specifically, the error information update unit 103 performs
processing of updating a posture error of the object with the
three-axis angular velocity in which the bias error has been
removed by the bias removal unit 22. In practice, the error
information update unit 103 performs processing of updating the
error covariance matrix .SIGMA..sub.x.sup.2 with the extended
Kalman filter.
[0143] The rotational error-component removal unit 108 performs
processing of removing a rotational error component around a
reference vector, in the error covariance matrix
.SIGMA..sub.x.sup.2 being the error information. Specifically, the
rotational error-component removal unit 108 performs processing of
removing an azimuth error component included in the posture error
in the error covariance matrix .SIGMA..sub.x.sup.2 updated by the
error information update unit 103. In practice, the rotational
error-component removal unit 108 performs processing of generating
the error covariance matrix .SIGMA..sub.x.sup.2 in which the rank
limitation and removal of the azimuth error component are performed
in the error covariance matrix .SIGMA.q.sup.2 of the posture, on
the error covariance matrix .SIGMA..sub.x.sup.2.
[0144] The error information adjustment unit 110 performs
processing as follows. That is, the error information adjustment
unit 110 determines whether or not the output of the angular
velocity sensor 12 is within an effective range. When the error
information adjustment unit 110 determines that the output of the
angular velocity sensor 12 is not within the effective range, the
error information adjustment unit 110 increases a posture error
component in the error covariance matrix .SIGMA..sub.x.sup.2 being
the error information and reduces a correlation component between
the posture error component and an error component other than the
posture error component in the error covariance matrix
.SIGMA..sub.x.sup.2 (for example, sets the correlation component to
zero). In addition, the error information adjustment unit 110
performs processing as follows. That is, the error information
adjustment unit 110 determines whether or not the output of the
acceleration sensor 14 is within an effective range. When the error
information adjustment unit 110 determines that the output of the
angular velocity sensor 12 or the output of the acceleration sensor
14 is not within the corresponding effective range, the error
information adjustment unit 110 increases a motion velocity error
component in the error covariance matrix .SIGMA..sub.x, k.sup.2 and
reduces a correlation component between the motion velocity error
component and an error component other than the motion velocity
error component in the error covariance matrix .SIGMA..sub.x,
k.sup.2 (for example, sets the correlation component to zero).
Specifically, in an angular-velocity off-scale recovery period
after the error information adjustment unit 110 determines that the
output of the angular velocity sensor 12 is not within the
effective range, the error information adjustment unit 110 performs
processing of increasing the posture error component and the motion
velocity error component in the error covariance matrix
.SIGMA..sub.x.sup.2 generated by the rotational error-component
removal unit 108 and reducing the correlation component between the
posture error component and the error component other than the
posture error component and the correlation component between the
motion velocity error component and the error component other than
the motion velocity error component (for example, setting the
correlation components to zero). In an acceleration off-scale
recovery period after the error information adjustment unit 110
determines that the output of the angular velocity sensor 12 is
within the effective range, and the output of the acceleration
sensor 14 is not within the effective range, the error information
adjustment unit 110 performs processing of increasing a motion
velocity error component in the error covariance matrix
.SIGMA..sub.x.sup.2 generated by the rotational error-component
removal unit 108 and reducing a correlation component between the
motion velocity error component and an error component other than
the motion velocity error component (for example, setting the
correlation component to zero).
[0145] The bias error limitation unit 109 performs processing of
limiting a bias error component of an angular velocity around the
reference vector, in the error covariance matrix
.SIGMA..sub.x.sup.2 being the error information. Specifically, the
bias error limitation unit 109 performs processing of limiting a
vertical component of the bias error of the angular velocity, in
the error covariance matrix .SIGMA..sub.x.sup.2 generated by the
error information adjustment unit 110. In practice, the bias error
limitation unit 109 performs processing as follows. That is, the
bias error limitation unit 109 determines whether or not the
vertical component of the bias error of the angular velocity
exceeds an upper limit value. When the vertical component exceeds
the upper limit value, the bias error limitation unit 109 generates
the error covariance matrix .SIGMA..sub.x.sup.2 in which limitation
is applied such that the vertical component has the upper limit
value.
[0146] The correction coefficient calculation unit 104 performs
processing of calculating correction coefficients based on the
error covariance matrix .SIGMA..sub.x.sup.2 which has been
generated by the bias error limitation unit 109 and is the error
information. The correction coefficients are used for determining
the correction amount of the posture information (posture
quaternion q) or the velocity information (motion velocity vector
v) of the object by the posture information correction unit 105 and
the correction amount of the error information (error covariance
matrix .SIGMA.x) by the error information correction unit 107. In
practice, the correction coefficient calculation unit 104 performs
processing of calculating an observation residual .DELTA.z, a
Kalman coefficient K, and a transformation matrix H.
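For orientation, the Kalman coefficient K of paragraph [0146] can be sketched in the standard extended-Kalman-filter form K = P H^T (H P H^T + R)^(-1). The measurement noise R and the dimensions below are assumptions; the patent's Expressions (23) to (26) are not reproduced.

```python
import numpy as np

def kalman_gain(P, H, R):
    """Standard EKF gain: P is the state error covariance, H the
    transformation (observation Jacobian) matrix, R the measurement
    noise covariance (assumed here, not given in this chunk)."""
    S = H @ P @ H.T + R          # innovation covariance
    return P @ H.T @ np.linalg.inv(S)

P = np.diag([1.0, 1.0])
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
K = kalman_gain(P, H, R)
```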
[0147] The posture information correction unit 105 performs
processing of correcting the posture information (posture
quaternion q) of the object, which has been predicted by the
posture information prediction unit 102, based on the error
covariance matrix .SIGMA.x being the error information.
Specifically, the posture information correction unit 105 performs
processing of correcting the posture quaternion q by using the error
covariance matrix .SIGMA.x generated by the bias error limitation
unit 109. The posture information correction unit 105 performs
the processing with the Kalman coefficient K and the observation
residual .DELTA.z.sub.a of the gravitational acceleration
calculated by the correction coefficient calculation unit 104 based
on the gravitational acceleration vector g being the reference
vector and the acceleration vector .alpha. obtained from the output
of the acceleration sensor 14. In practice, the posture information
correction unit 105 performs processing of correcting the state
vector x predicted by the posture information prediction unit 102,
with the extended Kalman filter.
[0148] The normalization unit 106 performs processing of
normalizing the posture information (posture quaternion q) of the
object, which has been corrected by the posture information
correction unit 105, such that the magnitude thereof does not
change. In practice, the normalization unit 106 performs processing
of normalizing the state vector x corrected by the posture
information correction unit 105.
[0149] The error information correction unit 107 performs
processing of correcting the error covariance matrix .SIGMA.x being
the error information. Specifically, the error information
correction unit 107 performs processing of correcting the error
covariance matrix .SIGMA.x generated by the bias error limitation
unit 109 with the extended Kalman filter, and the transformation
matrix H and the Kalman coefficient K calculated by the correction
coefficient calculation unit 104.
[0150] The posture information (posture quaternion q) of the
object, which has been estimated by the processing unit 20, can be
transmitted to another device via the communication unit 60.
2-2. Configuration of Processing Unit
[0151] FIG. 8 is a diagram illustrating a specific example of a
configuration of the processing unit 20. In FIG. 8, the same
components as those in FIG. 6 are denoted by the same reference
signs. As illustrated in FIG. 8, angular velocity data
d.sub..omega., k and acceleration data d.sub..alpha., k at a time
point t.sub.k, which are output by the IMU 10, are input to the bias
removal unit 22. As
shown in Expression (1), the angular velocity data d.sub..omega., k
is represented by the sum of the average value of the angular
velocity vector .omega. in a period from a time point t.sub.k-1 to
the time point t.sub.k and the residual bias b.sub..omega. of the
angular velocity sensor 12. Similarly, as shown in Expression (2),
the acceleration data d.sub..alpha., k is represented by the sum of
the average value of the acceleration vector .alpha. in the period
from the time point t.sub.k-1 to the time point t.sub.k and the
residual bias b.sub..alpha. of the acceleration sensor 14.
[0152] The bias removal unit 22 calculates the average value of the
angular velocity vector .omega. in the period from the time point
t.sub.k-1 to the time point t.sub.k by Expression (19), by
subtracting the residual bias b.sub..omega., k-1 at the time point
t.sub.k-1 from the angular velocity data d.sub..omega., k at the
time point t.sub.k. Similarly, the bias removal unit 22 calculates
the average value of the acceleration vector .alpha. in the period
from the time point t.sub.k-1 to the time point t.sub.k by
Expression (19), by subtracting the residual bias b.sub..alpha., k-1
at the time point t.sub.k-1 from the acceleration data
d.sub..alpha., k at the time point t.sub.k.
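The bias removal of paragraph [0152] amounts to a vector subtraction; the sketch below uses illustrative variable names and values, not the patent's Expression (19) itself.

```python
import numpy as np

def remove_bias(d_k, b_km1):
    """Average sensor vector over (t_{k-1}, t_k]: the raw sensor data
    at t_k minus the residual bias estimated at t_{k-1}."""
    return np.asarray(d_k) - np.asarray(b_km1)

# Illustrative angular velocity and acceleration samples with residual biases.
omega_avg = remove_bias([0.10, -0.02, 0.30], [0.01, 0.00, -0.01])
alpha_avg = remove_bias([0.0, 0.0, 9.90], [0.0, 0.0, 0.09])
```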
[0153] The posture-change-amount calculation unit 24 calculates the
approximation of the posture change amount .DELTA.q.sub.k at the
time point t.sub.k by substituting the average value of the angular
velocity vector .omega. in the period from the time point t.sub.k-1
to the time point t.sub.k and the average value of the angular
velocity vector .omega. in a period from a time point t.sub.k-2 to
the time point t.sub.k-1 into the polynomial expression of
Expression (6). The above-described average values of the angular
velocity vector .omega. have been calculated by the bias removal
unit 22.
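A posture change quaternion can be sketched with the standard small-rotation approximation below. Note this is a stand-in for the patent's polynomial Expression (6), which additionally uses the previous interval's average angular velocity; only the single-interval first-order form is shown here.

```python
import numpy as np

def delta_quaternion(omega_avg, dt):
    """Posture change quaternion (w, x, y, z) from an average angular
    velocity over one sampling interval dt (standard approximation,
    not the patent's two-interval polynomial)."""
    theta = np.asarray(omega_avg, dtype=float) * dt   # rotation vector
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])        # identity quaternion
    axis = theta / angle
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

dq = delta_quaternion([0.0, 0.0, np.pi], dt=1.0)  # 180 deg about z in 1 s
```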
[0154] The velocity-change-amount calculation unit 26 calculates a
coordinate transformation matrix C.sub.k at the time point t.sub.k
from the posture quaternion q.sub.k-1 at the time point t.sub.k-1 with
Expression (9). The velocity-change-amount calculation unit 26
calculates the approximation of the acceleration .lamda..sub.k at
the time point t.sub.k by substituting the average value of the
acceleration vector .alpha. and the average value of the angular
velocity vector .omega. in the period from the time point t.sub.k-1
to the time point t.sub.k, and the average value of the
acceleration vector .alpha. and the average value of the angular
velocity vector .omega. in the period from the time point t.sub.k-2
to the time point t.sub.k-1 into the polynomial expression of
Expression (12). The above-described average values of the angular
velocity vector .omega. have been calculated by the bias removal
unit 22. The velocity-change-amount calculation unit 26 calculates
a gravitational acceleration vector g.sub.k-1 at the time point
t.sub.k-1 by substituting a gravitational-acceleration correction
value .DELTA.g.sub.k-1 at the time point t.sub.k-1 into Expression
(10). The velocity-change-amount calculation unit 26 calculates the
velocity change amount (C.sub.k.lamda..sub.k-g.sub.k-1).DELTA.t at
the time point t.sub.k from the coordinate transformation matrix
C.sub.k, the acceleration .lamda..sub.k, and the gravitational
acceleration vector g.sub.k-1, which have been calculated.
[0155] As illustrated in FIG. 8, the posture estimation unit 28
includes the integration calculation unit 101, the posture
information prediction unit 102, the error information update unit
103, the correction coefficient calculation unit 104, the posture
information correction unit 105, the normalization unit 106, the
error information correction unit 107, the rotational
error-component removal unit 108, and the bias error limitation
unit 109. The posture estimation unit 28 estimates the state vector
x and the error covariance matrix .SIGMA..sub.x, k.sup.2 at the
time point t.sub.k with the extended Kalman filter.
[0156] The integration calculation unit 101 performs quaternion
multiplication of the posture quaternion q.sub.k-1 at the time
point t.sub.k-1 and the posture change amount .DELTA.q.sub.k at the
time point t.sub.k, which has been calculated by the
posture-change-amount calculation unit 24, with Expression (6). The
integration calculation unit 101 adds a motion velocity vector
v.sub.k-1 at the time point t.sub.k-1 and the velocity change
amount (C.sub.k.lamda..sub.k-g.sub.k-1).DELTA.t at the time point
t.sub.k, which has been calculated by the velocity-change-amount
calculation unit 26, with Expression (12).
[0157] The posture information prediction unit 102 predicts the
posture quaternion q.sub.k, the motion velocity vector v.sub.k, the
residual bias b.sub..omega., k of the angular velocity sensor 12,
the residual bias b.sub..alpha., k of the acceleration sensor 14,
and the gravitational-acceleration correction value .DELTA.g.sub.k
which are elements of the state vector x.sub.k, with Expression
(19). Specifically, the posture information prediction unit 102
predicts the posture quaternion q.sub.k as a result of the
quaternion multiplication of the posture quaternion q.sub.k-1 and
the posture change amount .DELTA.q.sub.k by the integration
calculation unit 101. The posture information prediction unit 102
predicts the motion velocity vector v.sub.k as a result of adding
the motion velocity vector v.sub.k-1 and the velocity change amount
(C.sub.k.lamda..sub.k-g.sub.k-1).DELTA.t by the integration
calculation unit 101. The posture information prediction unit 102
predicts the residual bias b.sub..omega., k of the angular velocity
sensor 12 to be the residual bias b.sub..omega., k-1 of the angular
velocity sensor 12 at the time point t.sub.k-1. The posture
information prediction unit 102 predicts the residual bias
b.sub..alpha., k of the acceleration sensor 14 to be the residual
bias b.sub..alpha., k-1 of the acceleration sensor 14 at the time
point t.sub.k-1. The posture information prediction unit 102
predicts the gravitational-acceleration correction value
.DELTA.g.sub.k to be the gravitational-acceleration correction
value .DELTA.g.sub.k-1 at the time point t.sub.k-1.
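The time update of paragraph [0157] can be summarized as: integrate the posture and velocity, carry the biases and gravity correction forward. The dict-based state below is an illustrative stand-in for the state vector x_k, not the patent's Expression (19).

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of two quaternions (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def predict_state(state, dq_k, dv_k):
    """Prediction step: posture quaternion and motion velocity are
    integrated; the residual biases and the gravitational-acceleration
    correction value are predicted as their previous values."""
    return {
        "q": quat_mul(state["q"], dq_k),   # q_k = q_{k-1} (x) dq_k
        "v": state["v"] + dv_k,            # v_k = v_{k-1} + dv_k
        "b_omega": state["b_omega"],       # carried forward unchanged
        "b_alpha": state["b_alpha"],
        "dg": state["dg"],
    }

x0 = {"q": np.array([1.0, 0.0, 0.0, 0.0]), "v": np.zeros(3),
      "b_omega": np.zeros(3), "b_alpha": np.zeros(3), "dg": np.zeros(3)}
x1 = predict_state(x0, np.array([1.0, 0.0, 0.0, 0.0]),
                   np.array([0.0, 0.0, 0.1]))
```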
[0158] The error information update unit 103 updates the error
covariance matrix .SIGMA..sub.x, k.sup.2 at the time point t.sub.k
with Expressions (20) and (21). The error information update unit
103 performs the update with the posture change amount
.DELTA.q.sub.k calculated by the posture-change-amount calculation
unit 24, the acceleration .lamda..sub.k and the coordinate
transformation matrix C.sub.k calculated by the
velocity-change-amount calculation unit 26, the posture quaternion
q.sub.k-1 at the time point t.sub.k-1, and the error covariance
matrix .SIGMA..sub.x, k.sup.2 at the time point t.sub.k-1.
[0159] The rotational error-component removal unit 108 calculates a
matrix D'.sup.TD' with the posture quaternion q.sub.k predicted by
the posture information prediction unit 102, by Expression (29).
The rotational error-component removal unit 108 updates the error
covariance matrix .SIGMA..sub.x, k.sup.2 updated by the error
information update unit 103, with Expression (28) and the matrix
D'.sup.TD'. Thus, the error covariance matrix .SIGMA..sub.x,
k.sup.2 in which the rank of the error covariance matrix
.SIGMA..sub.q, k.sup.2 of the posture quaternion q.sub.k is limited
to 3, and an azimuth error component .epsilon..sub..theta.z has
been removed from the error covariance matrix .SIGMA..sub.q,
k.sup.2 is generated.
[0160] In the angular-velocity off-scale recovery period, the error
information adjustment unit 110 updates the error covariance matrix
.SIGMA..sub.x, k.sup.2 generated by the rotational error-component
removal unit 108, with Expression (41) and the posture quaternion
q.sub.k predicted by the posture information prediction unit 102.
In the acceleration off-scale recovery period, the error
information adjustment unit 110 updates the error covariance matrix
.SIGMA..sub.x, k.sup.2 generated by the rotational error-component
removal unit 108 with Expression (42) and the posture quaternion
q.sub.k predicted by the posture information prediction unit
102.
[0161] The bias error limitation unit 109 calculates the variance
.sigma..sub.b.omega.v.sup.2 of the vertical component of the bias
error in the angular velocity sensor 12 with the posture quaternion
q.sub.k predicted by the posture information prediction unit 102
and the error covariance matrix .SIGMA..sub.x, k.sup.2 generated by
the error information adjustment unit 110, by Expression (39). When
the variance .sigma..sub.b.omega.v.sup.2 exceeds the upper limit
value .sigma..sub.b.omega.max.sup.2, the bias error limitation unit
109 updates the error covariance matrix .SIGMA..sub.x, k.sup.2
generated by the error information adjustment unit 110, with
Expression (40). Thus, in the error covariance matrix
.SIGMA..sub.x, k.sup.2, the variance .sigma..sub.b.omega.v.sup.2 of
the vertical component of the bias error in the angular velocity
sensor 12 is limited to the upper limit value
.sigma..sub.b.omega.max.sup.2.
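The limitation in paragraph [0161] is, at its core, a clamp of one variance to an upper bound. The sketch below shows only that clamp; how the full covariance matrix is rescaled (Expression (40)) and how the vertical component is extracted via the posture quaternion (Expression (39)) are not reproduced.

```python
def limit_vertical_bias_variance(var_v, var_max):
    """Clamp the variance of the vertical bias-error component of the
    angular velocity to its upper limit when it is exceeded
    (simplified sketch of the bias error limitation unit 109)."""
    return min(var_v, var_max)

clamped = limit_vertical_bias_variance(2.5e-4, 1.0e-4)  # exceeds limit
kept = limit_vertical_bias_variance(0.5e-4, 1.0e-4)     # within limit
```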
[0162] The correction coefficient calculation unit 104 calculates
the observation residual .DELTA.z.sub.k, the transformation matrix
H.sub.k, and the Kalman coefficient K.sub.k at the time point
t.sub.k with Expressions (23) to (26). The correction coefficient
calculation unit 104 performs the calculation with the error
covariance matrix .SIGMA..sub.x, k.sup.2 generated by the bias
error limitation unit 109, the coordinate transformation matrix
C.sub.k calculated by the velocity-change-amount calculation unit
26, the average value of the acceleration vector .alpha. calculated
by the bias removal unit 22 in the period from the time point
t.sub.k-1 to the time point t.sub.k, and the posture quaternion
q.sub.k and the gravitational-acceleration correction value
.DELTA.g.sub.k predicted by the posture information prediction
unit 102.
[0163] The posture information correction unit 105 corrects the
elements (posture quaternion q.sub.k, motion velocity vector
v.sub.k, residual bias b.sub..omega., k of the angular velocity
sensor 12, residual bias b.sub..alpha., k of the acceleration
sensor 14, and gravitational-acceleration correction value
.DELTA.g.sub.k) of the state vector x.sub.k predicted by the
posture information prediction unit 102. The posture information
correction unit 105 performs the correction by Expression (27) with
the observation residual .DELTA.z.sub.k and the Kalman coefficient
K.sub.k calculated by the correction coefficient calculation unit
104.
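The correction of paragraph [0163] follows the standard EKF state update, x_k ← x_k + K_k Δz_k. The dimensions and numbers below are illustrative; the patent's Expression (27) is not reproduced.

```python
import numpy as np

def correct_state(x_pred, K, dz):
    """Observation update of the state vector: predicted state plus the
    Kalman coefficient applied to the observation residual."""
    return x_pred + K @ dz

x_pred = np.array([1.0, 0.0, 0.0])      # illustrative 3-element state
K = np.array([[0.5, 0.0],
              [0.0, 0.5],
              [0.1, 0.1]])              # illustrative Kalman coefficient
dz = np.array([0.2, -0.2])              # illustrative observation residual
x_corr = correct_state(x_pred, K, dz)
```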
[0164] The normalization unit 106 normalizes the elements (posture
quaternion q.sub.k, motion velocity vector v.sub.k, residual bias
b.sub..omega., k of the angular velocity sensor 12, residual bias
b.sub..alpha., k of the acceleration sensor 14, and
gravitational-acceleration correction value .DELTA.g.sub.k) of the
state vector x.sub.k corrected by the posture information
correction unit 105, with Expression (28).
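For the posture part of the state, the normalization of paragraph [0164] restores the corrected quaternion to unit magnitude, so the correction does not change its length:

```python
import numpy as np

def normalize_quaternion(q):
    """Renormalize a corrected posture quaternion to unit magnitude."""
    q = np.asarray(q, dtype=float)
    return q / np.linalg.norm(q)

q_n = normalize_quaternion([0.96, 0.0, 0.0, 0.28])  # already unit length
```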
[0165] The error information correction unit 107 corrects the error
covariance matrix .SIGMA..sub.x, k.sup.2 generated by the bias
error limitation unit 109 by Expression (27) with the
transformation matrix H.sub.k and the Kalman coefficient K.sub.k
calculated by the correction coefficient calculation unit 104.
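The covariance correction of paragraph [0165] can be sketched in the standard EKF form P ← (I − K H) P, using the transformation matrix H and the Kalman coefficient K; the patent's Expression (27) is not reproduced, and the matrices below are illustrative.

```python
import numpy as np

def correct_covariance(P, K, H):
    """Observation update of the error covariance matrix."""
    I = np.eye(P.shape[0])
    return (I - K @ H) @ P

P = np.diag([1.0, 1.0, 1.0])     # illustrative prior covariance
H = np.array([[1.0, 0.0, 0.0]])  # observe only the first component
K = np.array([[0.5], [0.0], [0.0]])
P_corr = correct_covariance(P, K, H)
```

Only the observed component's variance shrinks; the unobserved components are untouched.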
[0166] When the next sampling interval .DELTA.t has elapsed, the
state vector x.sub.k calculated by the normalization unit 106 and
the error covariance matrix .SIGMA..sub.x, k.sup.2 corrected by the
error information correction unit 107 are fed back, as the state
vector x.sub.k-1 and the error covariance matrix .SIGMA..sub.x,
k-1.sup.2 at the time point t.sub.k-1, to the bias removal unit 22,
the velocity-change-amount calculation unit 26, the integration
calculation unit 101, the posture information prediction unit 102,
and the error information update unit 103.
[0167] The processing unit 20 described above performs posture
estimation processing of estimating the posture of the object, for
example, in accordance with the procedures illustrated in FIGS. 1
to 5.
[0168] According to the above-described posture estimation device 1
in the embodiment, the processing unit 20 estimates the posture of
an object in accordance with the procedures illustrated in FIGS. 1
to 5. Thus, even when the output of the angular velocity sensor
temporarily falls outside the effective range, it is possible to
reduce the risk of degrading the estimation accuracy of the posture
of the object and to estimate the posture of the object with
sufficient accuracy. According to the posture estimation device 1 in
the embodiment, even when the output of the angular velocity sensor
or the output of the acceleration sensor temporarily falls outside
the corresponding effective range, it is possible to reduce the risk
of degrading the estimation accuracy of the motion velocity of the
object and to estimate the motion velocity of the object with
sufficient accuracy. In addition, according to the
posture estimation device 1 in the embodiment, it is possible to
exhibit effects similar to those in the above-described posture
estimation method in the embodiment.
3. Modification Examples
[0169] In the above-described embodiment, the angular velocity
sensor and the acceleration sensor are integrated by being
accommodated in one inertial measurement unit (IMU). However, the
angular velocity sensor and the acceleration sensor may be provided
as components separate from each other.
[0170] In the above-described embodiment, the posture estimation
device outputs only posture information of the object. However, the
posture estimation device may output another kind of information.
For example, the posture estimation device may output position
information of the object, which is obtained by integrating the
velocity information (motion velocity vector v.sub.k) of the object
at the time point t.sub.k.
[0171] In the above-described embodiment, the acceleration sensor
and the angular velocity sensor update the outputs at the same
sampling interval .DELTA.t. However, the sampling interval
.DELTA.t.sub.a of the acceleration sensor may be different from the
sampling interval .DELTA.t.sub..omega. of the angular velocity
sensor. In this case, the prediction processing (time update
processing) of the state vector and the error covariance matrix in
Step S3 in FIG. 1 may be performed for each .DELTA.t.sub..omega.,
and correction processing (observation update processing) of the
state vector and the error covariance matrix in Step S7 in FIG. 1
may be performed for each .DELTA.t.sub.a. The rotational
error-component removal processing in Step S4 in FIG. 1 and the
bias error limitation processing in Step S6 in FIG. 1 may be
performed for each .DELTA.t.sub..omega., for each .DELTA.t.sub.a,
or in a period different from .DELTA.t.sub..omega. or
.DELTA.t.sub.a.
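The dual-rate scheme in this modification can be sketched as a simple scheduler: the time update (prediction) fires every gyro interval Δt_ω, while the observation update (correction) fires only on accelerometer sample boundaries Δt_a. The event representation is an illustrative assumption.

```python
def schedule(total_time, dt_w, dt_a):
    """List (time, event) pairs: 'predict' every dt_w, plus 'correct'
    whenever the time also falls on a dt_a boundary."""
    events = []
    n = round(total_time / dt_w)
    for i in range(1, n + 1):
        t = i * dt_w
        events.append((round(t, 9), "predict"))
        # Fire a correction on accelerometer sample boundaries
        # (tolerance absorbs floating-point error).
        if abs((t / dt_a) - round(t / dt_a)) < 1e-9:
            events.append((round(t, 9), "correct"))
    return events

ev = schedule(0.04, dt_w=0.01, dt_a=0.02)
```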
[0172] In the above-described embodiment, the posture estimation
device estimates the posture of an object by using the
gravitational acceleration vector observed by the acceleration
sensor as the reference vector, and by using the output of the
angular velocity sensor and the reference vector. However, the
reference vector may not be the gravitational acceleration vector.
For example, when the posture estimation device estimates the
posture of an object by using the angular velocity sensor and a
geomagnetic sensor, the reference vector may be a geomagnetic
vector (vector directed toward the north) observed by the
geomagnetic sensor. As another example, when the posture estimation
device estimates the posture of a satellite as an object by using
the angular velocity sensor and a star tracker, the reference vector
may be a vector which is directed from the object toward a fixed
star and is observed by the star tracker. The acceleration sensor, the
geomagnetic sensor, and the star tracker are examples of a
reference observation sensor that observes the reference
vector.
[0173] In the above-described embodiment, the posture information
of an object is expressed in quaternion. However, the posture
information may be information expressed by a roll angle, a pitch
angle, and a yaw angle or may be a posture transformation
matrix.
[0174] In the above-described embodiment, the state vector x.sub.k
includes the posture quaternion q.sub.k, the motion velocity vector
v.sub.k, the residual bias b.sub..omega., k of the angular velocity
sensor, the residual bias b.sub..alpha., k of the acceleration
sensor, and the gravitational-acceleration correction value
.DELTA.g.sub.k, as the elements. However, the state vector x.sub.k
is not limited thereto. For example, the state vector x.sub.k may
include the posture quaternion q.sub.k, the residual bias
b.sub..omega., k of the angular velocity sensor, the residual bias
b.sub..alpha., k of the acceleration sensor, and the
gravitational-acceleration correction value .DELTA.g.sub.k, as the
elements, and may not include the motion velocity vector v.sub.k.
In this case, the flowchart of the bias error limitation step
illustrated in FIG. 3 is changed to the flowchart illustrated in
FIG. 9, for example. In FIG. 9, the same steps as those in FIG. 3
are denoted by the same reference signs. The flowchart in FIG. 9 is
obtained by removing some steps from the flowchart in FIG. 3, not
by adding a new step. Thus, descriptions thereof will not be
repeated.
4. Electronic Device
[0175] FIG. 10 is a block diagram illustrating an example of a
configuration of an electronic device 300 in the embodiment. The
electronic device 300 includes the posture estimation device 1 and
the inertial measurement unit 10 in the above embodiment. The
electronic device 300 can include a communication unit 310, an
operation unit 330, a display unit 340, a storage unit 350, and an
antenna 312.
[0176] The communication unit 310 is, for example, a wireless
circuit. The communication unit 310 performs processing of
receiving data from the outside of the device or transmitting data
to the outside thereof, via the antenna 312.
[0177] The posture estimation device 1 performs processing based on
the output signal of the inertial measurement unit 10.
Specifically, the posture estimation device 1 performs processing
of estimating the posture of the electronic device 300 based on an
output signal (output data) such as detection data of the inertial
measurement unit 10. The posture estimation device 1 may perform
signal processing such as correction processing or filtering on the
output signal (output data) such as detection data of the inertial
measurement unit 10. In addition, based on the output signals, the
posture estimation device 1 may perform various types of control
processing for the electronic device 300, such as control
processing of the electronic device 300 or various types of digital
processing of data transmitted or received via the communication
unit 310. The function of the posture estimation device 1 can be
realized by a processor such as an MPU or a CPU, for example.
[0178] The operation unit 330 is used when a user performs an input
operation. The operation unit 330 can be realized by an operation
button, a touch panel display, or the like.
[0179] The display unit 340 displays various types of information
and can be realized by a display of liquid crystal, organic EL, or
the like. The storage unit 350 stores data. The function of the
storage unit 350 can be realized by, for example, a semiconductor
memory such as a RAM or a ROM.
[0180] The electronic device 300 in the embodiment can be applied
to, for example, an image-related device (such as a digital still
camera or a video camera), an in-vehicle device, a wearable device
(such as a head mounted display device or a watch-related device),
an ink jet discharge device, a robot, a personal computer, a
portable information terminal, a printing device, and a projection
device. The in-vehicle device includes a car navigation device or a
device for automatic driving, for example. The watch-related device
includes a watch or a smart watch, for example. For example, an ink
jet printer is provided as the ink jet discharge device. The
portable information terminal includes a smart phone, a portable
phone, a portable game device, a notebook PC, or a tablet terminal,
for example. The electronic device 300 in the embodiment can also
be applied to an electronic notebook, an electronic dictionary, a
calculator, a word processor, a workstation, a videophone, a
television monitor for crime prevention, electronic binoculars, a
POS terminal, a medical device, a fish finder, a measuring device,
a device for a base station of a mobile terminal, instruments,
a flight simulator, and a network server, for example. The medical
device includes an electronic thermometer, a sphygmomanometer, a
blood glucose meter, an electrocardiogram measuring device, an
ultrasonic diagnostic device, an electronic endoscope, and the
like. The instruments are instruments of vehicles, aircraft, ships,
and the like.
[0181] FIG. 11 is a plan view illustrating a wrist-watch type
activity meter 400 as a portable electronic device. FIG. 12 is a
block diagram illustrating an example of a configuration of the
activity meter 400. The activity meter 400 is worn on a region
(such as the wrist) of the user's body by a band 401. The activity
meter 400, which is an activity tracker, includes a display unit 402
for digital display. The activity meter 400 can perform wireless communication
by Bluetooth (registered trademark), Wi-Fi (registered trademark),
or the like.
[0182] As illustrated in FIGS. 11 and 12, the activity meter 400
includes a case 403, the posture estimation device 1, the display
unit 402, and a translucent cover 404. In the case 403, the
inertial measurement unit 10 is accommodated. The posture
estimation device 1 is accommodated in the case 403 and performs
processing based on an output signal from the inertial measurement
unit 10. The display unit 402 is accommodated in the case 403. The
translucent cover 404 closes an opening portion of the case 403. A
bezel 405 is provided on the outside of the translucent cover 404.
A plurality of operation buttons 406 and 407 are provided on a side
surface of the case 403. The acceleration sensor 14 that detects a
three-axis acceleration and the angular velocity sensor 12 that
detects a three-axis angular velocity are provided in the inertial
measurement unit 10.
[0183] In the display unit 402, position information or the
movement amount obtained by a GPS sensor 411 or a geomagnetic
sensor 412, motion information (such as a momentum) obtained by the
acceleration sensor 14 or the angular velocity sensor 12, biometric
information (such as a pulse rate) obtained by a pulse rate sensor
416, and time point information such as the current time point are
displayed in accordance with various detection modes. An
environmental temperature obtained by a temperature sensor 417 can
also be displayed. A communication unit 422 communicates with an
information terminal such as a user terminal. The posture
estimation device 1 is realized by an MPU, a DSP, and an ASIC, for
example. The posture estimation device 1 performs various types of
processing based on programs stored in a storage unit 420 and
information input by an operation unit 418 such as the operation
buttons 406 and 407. The posture estimation device 1 performs
processing of estimating posture information of the activity meter
400 based on the output signal of the inertial measurement unit 10.
The posture estimation device 1 may perform processing based on
output signals of the GPS sensor 411, the geomagnetic sensor 412, a
pressure sensor 413, the acceleration sensor 14, the angular
velocity sensor 12, the pulse rate sensor 416, the temperature
sensor 417, and a timekeeping unit 419. The posture estimation
device 1 can also perform display processing of displaying an image
in the display unit 402, sound output processing of outputting
sound to a sound output unit 421, communication processing of
communicating with an information terminal via the communication
unit 422, power control processing of supplying power from a
battery 423 to the components, and the like.
[0184] According to the activity meter 400 having the
above-described configuration in the embodiment, it is possible to
exhibit the above-described effects of the posture estimation
device 1 and to exhibit high reliability. The activity meter 400
includes the GPS sensor 411 and can measure the movement distance
and a movement trajectory of the user. Thus, the activity meter 400
having high usability is obtained. The activity meter 400 can be
widely applied to a running watch, a runner's watch, an outdoor
watch, a GPS watch, and the like.
5. Vehicle
[0185] In the embodiment, a vehicle includes the posture estimation
device 1 in the above embodiment, and a control device that
controls the posture of the vehicle based on the posture
information of the vehicle, which has been estimated by the posture
estimation device 1.
[0186] FIG. 13 illustrates an example of a vehicle 500. FIG. 14 is
a block diagram illustrating an example of a configuration of the
vehicle 500. As illustrated in FIG. 13, the vehicle 500 includes a
vehicle body 502 and wheels 504. A positioning device 510 is
mounted on the vehicle 500. A control device 570 that performs
vehicle control and the like is provided in the vehicle 500. As
illustrated in FIG. 14, the vehicle 500 includes a driving
mechanism 580 such as an engine and a motor, a braking mechanism
582 such as a disk brake and a drum brake, and a steering mechanism
584 realized by a steering wheel, a steering gear box and the like.
As described above, the vehicle 500 is equipment or a device that
includes the driving mechanism 580, the braking mechanism 582, and
the steering mechanism 584 and moves on the ground, in the sky, or
in the sea. For example, the vehicle 500 is a four-wheel vehicle
such as an agricultural machine.
[0187] The positioning device 510 is a device that is mounted on
the vehicle 500 and performs positioning of the vehicle 500. The
positioning device 510 includes the inertial measurement unit 10, a
GPS receiving unit 520, an antenna 522 for GPS reception, and the
posture estimation device 1. The posture estimation device 1
includes a position information acquisition unit 532, a position
composition unit 534, an operational processing unit 536, and a
processing unit 538. The inertial measurement unit 10 includes a
three-axis acceleration sensor and a three-axis angular velocity
sensor. The operational processing unit 536 receives acceleration
data and angular velocity data from the acceleration sensor and the
angular velocity sensor, performs inertial navigation operational
processing on the received data, and outputs inertial navigation
positioning data. The inertial navigation positioning data
indicates the acceleration or the posture of the vehicle 500.
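The application does not detail the inertial navigation operational processing of the operational processing unit 536, but the core posture step can be sketched as a first-order quaternion update from angular velocity data. The function name and the simple Euler integration scheme below are illustrative assumptions, not the application's actual processing:

```python
import numpy as np

def integrate_angular_velocity(q, omega, dt):
    """One first-order update of an orientation quaternion [w, x, y, z]
    from body-frame angular velocity omega (rad/s) over time step dt."""
    w, x, y, z = q
    ox, oy, oz = omega
    # Quaternion derivative: q_dot = 0.5 * q (x) (0, omega)
    q_dot = 0.5 * np.array([
        -x * ox - y * oy - z * oz,
         w * ox + y * oz - z * oy,
         w * oy - x * oz + z * ox,
         w * oz + x * oy - y * ox,
    ])
    q_new = np.array(q) + q_dot * dt
    return q_new / np.linalg.norm(q_new)  # renormalize to unit length

# Example: rotate about the z axis at 90 deg/s for 1 s in small steps
q = np.array([1.0, 0.0, 0.0, 0.0])
omega = np.array([0.0, 0.0, np.radians(90.0)])
for _ in range(1000):
    q = integrate_angular_velocity(q, omega, 0.001)
# q is now approximately a 90-degree rotation about z
```

In practice, an inertial navigation filter would combine this propagation step with corrections from the acceleration data; the sketch shows only the attitude propagation.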
[0188] The GPS receiving unit 520 receives a signal from a GPS
satellite via the antenna 522. The position information acquisition
unit 532 outputs GPS positioning data based on a signal received by
the GPS receiving unit 520. The GPS positioning data indicates the
position, the speed, and the direction of the vehicle 500 on which
the positioning device 510 is mounted. The position composition
unit 534 calculates a position at which the vehicle 500 runs on the
ground at the current time, based on the inertial navigation
positioning data output from the operational processing unit 536
and the GPS positioning data output from the position information
acquisition unit 532. For example, as illustrated in FIG. 13, even
when the position of the vehicle 500 included in the GPS
positioning data is the same, the vehicle 500 actually runs at a
different position on the ground if its posture differs due to an
inclination (θ) of the ground or the like. Therefore, the accurate
position of the vehicle 500 cannot be calculated from the GPS
positioning data alone. Thus, the position composition unit 534
calculates the position at which the vehicle 500 runs on the
ground at the current time by using the data regarding the posture
of the vehicle 500 included in the inertial navigation positioning
data.
Position data output from the position composition unit 534 is
subjected to predetermined processing by the processing unit 538,
and is displayed in a display unit 550, as a positioning result.
The position data may be transmitted to an external device by a
communication unit 560.
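Why the same GPS fix can correspond to different points on the ground can be illustrated with a minimal one-axis sketch: if the antenna sits some height above the vehicle's ground contact point, tilting the vehicle by θ shifts the antenna horizontally by height × sin(θ). The antenna height and the pure-tilt geometry below are illustrative assumptions, not from the application:

```python
import math

def ground_point(gps_x, antenna_height, theta_rad):
    """Project a GPS antenna fix down to the ground contact point of a
    vehicle tilted by theta_rad (radians). The antenna sits
    antenna_height above the contact point along the vehicle's tilted
    vertical axis, so its horizontal offset is antenna_height * sin(theta)."""
    return gps_x - antenna_height * math.sin(theta_rad)

# Same GPS fix, two inclinations -> two different ground positions
flat   = ground_point(100.0, 2.0, 0.0)                 # 100.0 m
tilted = ground_point(100.0, 2.0, math.radians(10.0))  # about 0.35 m less
```

Even a modest 10-degree tilt with a 2 m antenna height shifts the ground point by tens of centimeters, which is why posture data is combined with the GPS positioning data.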
[0189] The control device 570 controls the driving mechanism 580,
the braking mechanism 582, and the steering mechanism 584 of the
vehicle 500. The control device 570 is a controller that controls
the vehicle. For example, the control device 570 can be realized by
a plurality of control units. The control device 570 includes a
vehicle control unit 572 being a control unit that controls the
vehicle, an automatic driving control unit 574 being a control unit
that controls automatic driving, and a storage unit 576 realized by
a semiconductor memory and the like. A monitoring device 578
monitors an object such as an obstacle around the vehicle 500. The
monitoring device 578 is realized by a surrounding monitoring
camera, a millimeter wave radar, a sonar or the like.
[0190] As illustrated in FIG. 14, the vehicle 500 in the embodiment
includes the posture estimation device 1 and the control device
570. The control device 570 controls the posture of the vehicle 500
based on posture information of the vehicle 500, which has been
estimated by the posture estimation device 1. For example, the
posture estimation device 1 performs various types of processing as
described above, based on an output signal including detection data
from the inertial measurement unit 10, so as to obtain information
of the position or the posture of the vehicle 500. For example, the
posture estimation device 1 can obtain information of the position
of the vehicle 500 based on the GPS positioning data and the
inertial navigation positioning data as described above. The
posture estimation device 1 can estimate information of the posture
of the vehicle 500 based on angular velocity data and the like
included in the inertial navigation positioning data, for example.
The information of the posture of the vehicle 500 can be
represented by a quaternion or by a roll angle, a pitch angle, and a
yaw angle, for example. The control device 570 controls the posture
of the vehicle 500 based on the posture information of the vehicle
500, which has been estimated by the processing of the posture
estimation device 1. The control is performed by the vehicle
control unit 572, for example. The control of the posture can be
implemented by the control device 570 controlling the steering
mechanism 584, for example. Alternatively, in control (such as slip
control) for stabilizing the posture of the vehicle 500, the
control device 570 may control the driving mechanism 580 or control
the braking mechanism 582. According to the embodiment, with the
posture estimation device 1, it is possible to estimate information
of a posture with high accuracy. Accordingly, it is possible to
realize appropriate posture control of the vehicle 500.
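The two representations mentioned above are interchangeable; a conventional Z-Y-X (yaw-pitch-roll) conversion from a unit quaternion can be sketched as follows. This is a standard formula, not code from the application:

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Convert a unit quaternion to roll, pitch, yaw (radians),
    using the common aerospace Z-Y-X convention."""
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    # Clamp to guard against numerical noise outside [-1, 1]
    s = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    pitch = math.asin(s)
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw

# A 90-degree rotation about the yaw (z) axis
r, p, y = quaternion_to_euler(math.cos(math.pi / 4), 0.0, 0.0,
                              math.sin(math.pi / 4))
# r = 0, p = 0, y = pi/2
```

The quaternion form avoids the gimbal-lock singularity at a pitch of ±90 degrees, which is one reason an estimator may work internally with quaternions and convert to Euler angles only for output.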
[0191] In the embodiment, the control device 570 controls at least
one of accelerating, braking, and steering of the vehicle 500 based
on information of the position and the posture of the vehicle 500,
which has been obtained by the posture estimation device 1. For
example, the control device 570 controls at least one of the
driving mechanism 580, the braking mechanism 582, and the steering
mechanism 584 based on the information of the position and the
posture of the vehicle 500. Thus, for example, it is possible to
realize automatic driving control of the vehicle 500 by the
automatic driving control unit 574. In the automatic driving
control, a monitoring result of a surrounding object by the
monitoring device 578, map information or running route information
stored in the storage unit 576, and the like are used in addition
to the information of the position and the posture of the vehicle
500. The control device 570 switches between execution and
non-execution of the automatic driving of the vehicle 500 based on
a monitoring result of the output signal of the inertial
measurement unit 10.
For example, the posture estimation device 1 monitors the output
signal such as detection data from the inertial measurement unit
10. When a decrease of detection accuracy of the inertial
measurement unit 10 or a sensing problem is detected based on the
monitoring result, for example, the control device 570 performs
switching from the execution of the automatic driving to the
non-execution of the automatic driving. For example, in the
automatic driving, at least one of accelerating, braking, and
steering of the vehicle 500 is automatically controlled. When the
automatic driving is not executed, such automatic control of
accelerating, braking, and steering is not performed. In this
manner, the running of the vehicle 500 that performs automatic
driving can be assisted with higher reliability. An
automation level of the automatic driving may be switched based on
the monitoring result of the output signal of the inertial
measurement unit 10.
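The switching between execution and non-execution of automatic driving based on sensor monitoring could be sketched as a simple health check over a window of IMU samples. The fault conditions and thresholds below are illustrative assumptions, not the application's criteria:

```python
def automation_allowed(samples, stuck_limit=50, max_abs=200.0):
    """Very simplified health check on a window of IMU samples:
    report a fault (False) if the signal saturates out of range or
    freezes at an identical value for many consecutive samples.
    Thresholds are illustrative, not from the application."""
    stuck = 0
    prev = None
    for s in samples:
        if abs(s) > max_abs:          # saturation / out-of-range reading
            return False
        if prev is not None and s == prev:
            stuck += 1
            if stuck >= stuck_limit:  # sensor output appears frozen
                return False
        else:
            stuck = 0
        prev = s
    return True

ok  = automation_allowed([0.1, 0.2, 0.15, 0.12])   # healthy window
bad = automation_allowed([0.1, 500.0, 0.2, 0.1])   # saturated sample
```

A real system would also consider bias drift, noise statistics, and cross-checks against other sensors before downgrading the automation level.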
[0192] FIG. 15 illustrates an example of another vehicle 600. FIG.
16 is a block diagram illustrating an example of a configuration of
the vehicle 600. The posture estimation device 1 in the embodiment
can be effectively used in posture control and the like of a
construction machine. FIGS. 15 and 16 illustrate a hydraulic shovel
being an example of the construction machine as the vehicle
600.
[0193] As illustrated in FIG. 15, in the vehicle 600, a vehicle
body is constituted by a lower running body 612 and an upper
revolving body 611, and a work mechanism 620 is provided in the
front of the upper revolving body 611. The upper revolving body 611
is mounted to be capable of revolving on the lower running body
612. The work mechanism 620 is constituted by a plurality of
members capable of pivoting in an up-and-down direction. A driver
seat (not illustrated) is provided in the upper revolving body 611.
An operation device (not illustrated) that operates the members
constituting the work mechanism 620 is provided on the driver seat.
An inertial measurement unit 10d functioning as an inclination
sensor that detects an inclination angle of the upper revolving
body 611 is disposed in the upper revolving body 611.
[0194] The work mechanism 620 includes a boom 613, an arm 614, a
bucket link 616, a bucket 615, a boom cylinder 617, an arm cylinder
618, and a bucket cylinder 619, as the plurality of members. The
boom 613 is attached to the front portion of the upper revolving
body 611 to be capable of elevating. The arm 614 is attached to the
tip of the boom 613 to be capable of elevating. The bucket link 616
is attached to the tip of the arm 614 to be rotatable. The bucket
615 is attached to the tip of the arm 614 and the bucket link 616
to be rotatable. The boom cylinder 617 drives the boom 613. The arm
cylinder 618 drives the arm 614. The bucket cylinder 619 drives the
bucket 615 through the bucket link 616.
[0195] The base end of the boom 613 is supported by the upper
revolving body 611 to be rotatable in the up-and-down direction.
The boom 613 is rotationally driven relative to the upper revolving
body 611 by expansion and contraction of the boom cylinder 617. An
inertial measurement unit 10c functioning as an inertial sensor
that detects the motion state of the boom 613 is disposed in the
boom 613.
[0196] One end of the arm 614 is supported by the tip of the boom
613 to be rotatable. The arm 614 is rotationally driven relative to
the boom 613 by expansion and contraction of the arm cylinder 618.
An inertial measurement unit 10b functioning as an inertial sensor
that detects the motion state of the arm 614 is disposed in the arm
614.
[0197] The bucket link 616 and the bucket 615 are supported by the
tip of the arm 614 to be rotatable. The bucket link 616 is
rotationally driven relative to the arm 614 by expansion and
contraction of the bucket cylinder 619. The bucket 615 is
rotationally driven relative to the arm 614 as the bucket link 616
is driven. An inertial measurement unit 10a functioning as an
inertial sensor that detects the motion state of the bucket link
616 is disposed in the bucket link 616.
[0198] Here, the inertial measurement unit 10 described in the
above embodiment can be used as the inertial measurement units 10a,
10b, 10c, and 10d. The inertial measurement units 10a, 10b, 10c,
and 10d can detect at least any of an angular velocity and an
acceleration acting on the members of the work mechanism 620 or the
upper revolving body 611. As illustrated in FIG. 16, the inertial
measurement units 10a, 10b, and 10c are coupled in series and can
transmit a detection signal to a calculation device 630. As
described above, since the inertial measurement units 10a, 10b, and
10c are coupled in series, it is possible to reduce the number of
wires for transmitting a detection signal in a movable region and
to obtain a compact wiring structure. With the compact wiring
structure, it is easier to select how to lay the wires, and the
occurrence of wire deterioration or damage can be reduced, for
example.
[0199] Further, as illustrated in FIG. 15, the calculation device
630 is provided in the vehicle 600. The calculation device 630
calculates an inclination angle of the upper revolving body 611 or
the positions or postures of the boom 613, the arm 614, and the
bucket 615 constituting the work mechanism 620. As illustrated in
FIG. 16, the calculation device 630 includes the posture estimation
device 1 in the above embodiment and a control device 632. The
posture estimation device 1 estimates posture information of the
vehicle 600 based on an output signal of the inertial measurement
units 10a, 10b, 10c, and 10d. The control device 632 controls the
posture of the vehicle 600 based on the posture information of the
vehicle 600, which has been estimated by the posture estimation
device 1. Specifically, the calculation device 630 receives various
detection signals input from the inertial measurement units 10a,
10b, 10c, and 10d and calculates the positions and postures
(posture angles) of the boom 613, the arm 614, and the bucket 615
or an inclination state of the upper revolving body 611 based on
the various detection signals. The calculated signals, such as a
position-and-posture signal including a posture angle of the boom
613, the arm 614, or the bucket 615 (for example, the
position-and-posture signal of the bucket 615) or an inclination
signal including a posture angle of the upper revolving body 611,
are used as feedback information for a display of a monitoring
device (not illustrated) at the driver seat or for controlling an
operation of the work mechanism 620 or the upper revolving body
611.
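The calculation of the positions and postures of the boom, arm, and bucket from posture angles can be sketched as planar forward kinematics: with the absolute posture angle of each member known from its inertial measurement unit, the bucket tip position is the sum of the link vectors. The link lengths and the planar model below are illustrative assumptions, not dimensions from the application:

```python
import math

def bucket_tip_position(boom_len, arm_len, bucket_len,
                        boom_angle, arm_angle, bucket_angle):
    """Planar forward kinematics: each angle is the absolute posture
    angle of the member in the vertical plane (radians), as could be
    obtained from an inertial measurement unit mounted on that member.
    Returns (horizontal reach, height) of the bucket tip relative to
    the boom pivot on the upper revolving body."""
    x = (boom_len * math.cos(boom_angle)
         + arm_len * math.cos(arm_angle)
         + bucket_len * math.cos(bucket_angle))
    y = (boom_len * math.sin(boom_angle)
         + arm_len * math.sin(arm_angle)
         + bucket_len * math.sin(bucket_angle))
    return x, y

# Boom raised 45 deg, arm horizontal, bucket pointing 45 deg down
x, y = bucket_tip_position(5.0, 3.0, 1.0,
                           math.radians(45), 0.0, math.radians(-45))
```

Because the IMUs report absolute posture angles directly, no joint-encoder chain is needed; an inclination of the upper revolving body would simply be added to each member angle before this calculation.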
[0200] In addition to the hydraulic shovel (jumbo, backhoe, and
power shovel) exemplified above, examples of the construction
machine in which the posture estimation device 1 in the above
embodiment is used include a rough terrain crane (crane car), a
bulldozer, an excavator/loader, a wheel loader, and an aerial work
vehicle (lift car).
[0201] According to the embodiment, with the posture estimation
device 1, it is possible to obtain information of a posture with
high accuracy. Thus, it is possible to realize appropriate posture
control of the vehicle 600. In addition, since the vehicle 600 is
equipped with the compact inertial measurement units 10, a
plurality of inertial measurement units can be disposed compactly
at their installation sites by serial coupling (multi-coupling),
and the cable routing that couples the installed inertial
measurement units 10 to each other in series can also be performed
compactly, even in a very narrow region such as the bucket link
616.
[0202] In the embodiment, descriptions are made by using the
four-wheel vehicle such as the agricultural machine and the
construction machine as an example of the vehicle in which the
posture estimation device 1 is used. However, the posture
estimation device 1 can also be used in motorcycles, bicycles,
trains, airplanes, biped robots, remote-controlled or autonomous
aircraft (such as radio-controlled airplanes, radio-controlled
helicopters, and drones), rockets, satellites, ships, automated
guided vehicles (AGVs), and the like.
[0203] The present disclosure is not limited to the embodiment, and
various modifications can be made in a range of the gist of the
present disclosure.
[0204] The above-described embodiment and modification examples are
examples, and the present disclosure is not limited thereto. For
example, the embodiment and the modification examples can be
appropriately combined.
[0205] The present disclosure includes a configuration which is
substantially the same as the configuration described in the
embodiment (for example, configuration having the same function,
method, and result or configuration having the same purpose and
effect). The present disclosure includes a configuration in which
a non-essential component in the configuration described in the
embodiment has been replaced. The present disclosure includes a
configuration of exhibiting the same effects as those in the
configuration described in the embodiment or a configuration
capable of achieving the same purpose. The present disclosure
includes a configuration in which a known technique is added to the
configuration described in the embodiment.
* * * * *