U.S. patent application number 15/222,947 was filed with the patent office on 2016-07-29 and published on 2017-02-02 as publication number 20170028553 for machine learning device, robot controller, robot system, and machine learning method for learning action pattern of human.
The applicants listed for this patent are FANUC CORPORATION and Preferred Networks, Inc. The invention is credited to Keigo KAWAAI, Eiichi MATSUMOTO, Daisuke OKANOHARA, Ryosuke OKUTA, and Taketsugu TSUDA.
United States Patent Application 20170028553
Kind Code: A1
TSUDA, Taketsugu; et al.
February 2, 2017

MACHINE LEARNING DEVICE, ROBOT CONTROLLER, ROBOT SYSTEM, AND MACHINE LEARNING METHOD FOR LEARNING ACTION PATTERN OF HUMAN
Abstract
A machine learning device for a robot that allows a human and the robot to work cooperatively, the machine learning device including a state observation unit that observes a state variable representing a state of the robot during a period in which the human and the robot work cooperatively; a determination data obtaining unit that obtains determination data for at least one of a level of burden on the human and a working efficiency; and a learning unit that learns a training data set for setting an action of the robot, based on the state variable and the determination data.
Inventors: TSUDA, Taketsugu (Yamanashi, JP); OKANOHARA, Daisuke (Tokyo, JP); OKUTA, Ryosuke (Tokyo, JP); MATSUMOTO, Eiichi (Tokyo, JP); KAWAAI, Keigo (Tokyo, JP)

Applicants: FANUC CORPORATION (Yamanashi, JP); Preferred Networks, Inc. (Tokyo, JP)

Family ID: 57795710

Appl. No.: 15/222,947

Filed: July 29, 2016

Current U.S. Class: 1/1

Current CPC Class: B25J 9/0084 (20130101); B25J 9/1653 (20130101); B25J 9/163 (20130101); B25J 19/06 (20130101); B25J 9/1694 (20130101); G05B 2219/40202 (20130101); B25J 13/084 (20130101); B25J 13/088 (20130101); B25J 13/085 (20130101); B25J 9/1676 (20130101); G06N 3/08 (20130101)

International Class: B25J 9/16 (20060101); B25J 9/00 (20060101); B25J 13/08 (20060101)

Foreign Application Priority Data:

Jul 31, 2015 (JP) 2015-151561
Jan 29, 2016 (JP) 2016-015538
Claims
1. A machine learning device for a robot that allows a human and
the robot to work cooperatively, the machine learning device
comprising: a state observation unit that observes a state variable
representing a state of the robot when the human and the robot work
cooperatively; a determination data obtaining unit that obtains
determination data for at least one of a level of burden on the
human and a working efficiency; and a learning unit that learns a
training data set for setting an action of the robot, based on the
state variable and the determination data.
2. The machine learning device according to claim 1, wherein the
state variable comprises at least one of a position, an
orientation, a velocity, and an acceleration of the robot.
3. The machine learning device according to claim 1, wherein the
determination data comprises at least one of a magnitude and a
direction of a load sensed by the robot, a magnitude and a
direction of a load sensed by an environment surrounding the robot,
a level of burden on the environment surrounding the robot, and a
time taken to move the robot.
4. The machine learning device according to claim 1, wherein the
training data set comprises an action value variable representing a
value of an action of the robot set for each state of the robot and
each action of the robot, and the learning unit comprises: a reward
computation unit that sets a reward, based on the determination
data and the state variable; and a function update unit that
updates the action value variable, based on the reward and the
state variable.
5. The machine learning device according to claim 4, wherein the
reward computation unit sets a greater reward for a smaller
absolute value of an acceleration of the robot, and a greater
reward for a shorter time taken to move the robot.
6. The machine learning device according to claim 1, wherein the
training data set comprises a learning model for the robot set for
each state of the robot and each action of the robot, and the
learning unit comprises: an error computation unit that computes an
error of the learning model, based on the determination data, the
state variable, and input teacher data; and a learning model update
unit that updates the learning model, based on the error and the
state variable.
7. The machine learning device according to claim 1, further
comprising: a human identification unit that identifies a human who
works cooperatively with the robot, wherein the training data set
is created for each human, and the learning unit learns the
training data set for the identified human.
8. The machine learning device according to claim 1, wherein the
machine learning device comprises a neural network.
9. The machine learning device according to claim 1, wherein the
robot comprises one of an industrial robot, a field robot, and a
service robot.
10. A robot controller comprising: the machine learning device
according to claim 1; and an action control unit that controls an
action of the robot, the machine learning device comprising a
decision unit that sets an action of the robot, based on the
training data set, wherein the action control unit controls the
action of the robot, based on a command from the decision unit.
11. A robot system comprising: the robot controller according to
claim 10; a robot that assists a human in work; and an end effector
attached to the robot.
12. The robot system according to claim 11, wherein the robot
comprises: a force detector that outputs a signal corresponding to
a force from the human; and a state detector that detects a
position and an orientation of the robot, the determination data
obtaining unit obtains the determination data, based on output of
the force detector, and the state observation unit obtains the
state variable, based on output of the state detector.
13. The robot system according to claim 12, wherein the state
detector comprises at least one of a motion sensor, a pressure
sensor, a torque sensor for a motor, and a contact sensor.
14. The robot system according to claim 12, further comprising: a
plurality of robots; a plurality of robot controllers; and a
communication line that connects the plurality of robot controllers
to each other, wherein each of the plurality of robot controllers
independently learns the training data set for a robot controlled
by the robot controller and sends and shares learned information
via the communication line.
15. A machine learning method for a robot that allows a human and
the robot to work cooperatively, the machine learning method
comprising the steps of: observing a state variable representing a
state of the robot during a period in which the human and the robot
work cooperatively; obtaining determination data for at least one
of a level of burden on the human and a working efficiency; and
learning a training data set for setting an action of the robot,
based on the state variable and the determination data.
Description
BACKGROUND OF THE INVENTION
[0001] 1. Field of the Invention
[0002] The present invention relates to a machine learning device,
a robot controller, a robot system, and a machine learning method
for learning the action pattern of a human.
[0003] 2. Description of the Related Art
[0004] In conventional techniques, to ensure human safety, safety measures are taken to prevent humans from entering the working area of a robot while the robot is driven. For example, a robot is enclosed with a safety fence to keep humans from stepping past it during the period in which the robot is driven. In recent years, robot systems which allow a human to work cooperatively with a robot have become known. Such a robot system allows the robot and the human to simultaneously perform one task without enclosing the robot with a safety fence.
[0005] Japanese Laid-open Patent Publication No. 2015-123505
discloses an industrial robot which works cooperatively with a
human. The robot includes a movable unit, a protective member, and
a detector. The movable unit is supported by the basal portion. The
protective member is made of a material having rigidity lower than
that of the movable unit and covers the periphery of the movable
unit. The detector is located in the movable unit and detects an
external force input via the protective member.
[0006] A general industrial robot is driven in accordance with an
operation program created in advance. Alternatively, the robot is
driven to pass through a teaching point taught by, e.g., a teaching
pendant in advance. In other words, the robot is driven along a
predetermined trajectory.
[0007] Even in a robot system which allows a human to work cooperatively with a robot, the trajectory of the robot can be set in advance so that the robot is driven along that trajectory. However, when a human works cooperatively with the robot, more than one working method may be selectable. In plants for manufacturing products, for example, workpieces may be transported from initial positions to target positions. A workpiece may be lifted by cooperation between the human and the robot and transported to its target position, with the robot lifting and carrying the object to be transported. In this case, numerous options are available for, e.g., the orientation and velocity at which the workpiece is lifted.
[0008] The level of burden on a human varies for each method for
controlling a robot to manipulate a workpiece. Even when, for
example, the same task is performed, the level of fatigue of a
human varies and the level of burden also varies depending on the
velocity or the distance from the human. It is, therefore,
preferable to set an appropriate method for controlling a robot.
However, numerous methods are available to control a robot. In
addition, a human may exhibit different action patterns, depending
on the work that is being carried out. This makes it difficult to
set an optimal method for controlling a robot in accordance with
the working details.
SUMMARY OF THE INVENTION
[0009] According to a first aspect of the invention, there is
provided a machine learning device for a robot that allows a human
and the robot to work cooperatively, the machine learning device
including a state observation unit that observes a state variable
representing a state of the robot during a period in which the human
and the robot work cooperatively; a determination data obtaining
unit that obtains determination data for at least one of a level of
burden on the human and a working efficiency; and a learning unit
that learns a training data set for setting an action of the robot,
based on the state variable and the determination data.
[0010] The state variable may include at least one of a position,
an orientation, a velocity, and an acceleration of the robot. The
determination data preferably includes at least one of a magnitude
and a direction of a load sensed by the robot, a magnitude and a
direction of a load sensed by an environment surrounding the robot,
a level of burden on the environment surrounding the robot, and a
time taken to move the robot.
[0011] The training data set may include an action value variable
representing a value of an action of the robot set for each state
of the robot and each action of the robot, and the learning unit
may include a reward computation unit that sets a reward, based on
the determination data and the state variable; and a function
update unit that updates the action value variable, based on the
reward and the state variable. The reward computation unit may set
a greater reward for a smaller absolute value of an acceleration of
the robot, and a greater reward for a shorter time taken to move
the robot.
[0012] The training data set may include a learning model for the
robot set for each state of the robot and each action of the robot,
and the learning unit may include an error computation unit that
computes an error of the learning model, based on the determination
data, the state variable, and input teacher data; and a learning
model update unit that updates the learning model, based on the
error and the state variable. The machine learning device may
further include a human identification unit that identifies a human
who works cooperatively with the robot, wherein the training data
set may be created for each human, and the learning unit may learn
the training data set for the identified human. The machine
learning device may include a neural network. The robot may include
one of an industrial robot, a field robot, and a service robot.
[0013] According to a second aspect of the invention, there is
provided a robot controller including the machine learning device
as described above; and an action control unit that controls an
action of the robot, the machine learning device including a
decision unit that sets an action of the robot, based on the
training data set, wherein the action control unit controls the
action of the robot, based on a command from the decision unit.
[0014] According to a third aspect of the invention, there is
provided a robot system including the robot controller as described
above; a robot that assists a human in work; and an end effector
attached to the robot. The robot may include a force detector that
outputs a signal corresponding to a force from the human; and a
state detector that detects a position and an orientation of the
robot, the determination data obtaining unit may obtain the
determination data, based on output of the force detector, and the
state observation unit may obtain the state variable, based on
output of the state detector. The state detector may include at
least one of a motion sensor, a pressure sensor, a torque sensor
for a motor, and a contact sensor. The robot system may further
include a plurality of robots; a plurality of robot controllers;
and a communication line that connects the plurality of robot
controllers to each other, wherein each of the plurality of robot controllers may independently learn the training data set for a robot controlled by the robot controller and may send and share learned information via the communication line.
[0015] According to a fourth aspect of the invention, there is
provided a machine learning method for a robot that allows a human
and the robot to work cooperatively, the machine learning method
including the steps of observing a state variable representing a
state of the robot during a period in which the human and the robot
work cooperatively; obtaining determination data for at least one
of a level of burden on the human and a working efficiency; and
learning a training data set for setting an action of the robot,
based on the state variable and the determination data.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The present invention will be more clearly understood by
reference to the accompanying drawings, in which:
[0017] FIG. 1 is a view schematically illustrating how a robot and
a human work cooperatively in an embodiment;
[0018] FIG. 2 is a block diagram illustrating an exemplary robot
system in the embodiment;
[0019] FIG. 3 is a diagram schematically representing a model for a
neuron;
[0020] FIG. 4 is a diagram schematically representing a three-layer
neural network formed by combining the neurons illustrated in FIG. 3;
[0021] FIG. 5 is a schematic view for explaining a path for
transporting a workpiece;
[0022] FIG. 6 is a view for explaining moving points for the robot
tip point;
[0023] FIG. 7 is an enlarged view for explaining moving points for
the robot tip point;
[0024] FIG. 8 is a block diagram illustrating another exemplary
robot system in the embodiment; and
[0025] FIG. 9 is a block diagram illustrating a modification of the
robot system illustrated in FIG. 2.
DETAILED DESCRIPTION
[0026] A machine learning device, a robot controller, a robot
system, and a machine learning method according to an embodiment
will be described below with reference to the accompanying
drawings. However, it is to be understood that the present invention is limited neither to the drawings nor to the following embodiment.
[0027] The robot system allows a human and a robot to cooperate in performing a predetermined task. This embodiment illustrates how a human and a robot cooperate to transport a workpiece.
[0028] FIG. 1 is a schematic view illustrating the robot system in
this embodiment. FIG. 2 is a block diagram illustrating an
exemplary robot system in this embodiment. Referring to FIG. 1 and
FIG. 2, a robot system 3 includes a robot 1 which assists in
transporting a workpiece W, and a robot controller 2 which controls
the robot 1. The robot 1 according to this embodiment is
implemented as a multi-articulated robot including an arm 12 and a
plurality of joint portions 13. The robot system 3 includes a hand
6 serving as an end effector attached to the robot 1. The hand 6
has the function of gripping and releasing the workpiece W. The end
effector is not limited to a hand, and an end effector appropriate
to the working details may be used.
[0029] The robot 1 includes an arm driving device 44 which drives
the joint portions 13. The arm driving device 44 includes arm
driving motors 14 built into the joint portions 13. The arm driving
motors 14 can drive and bend the arm 12 at desired angles in the
joint portions 13. The robot 1 in this embodiment is assembled such
that the entire arm 12 can rotate about an axis which extends
vertically. The arm driving motors 14 include motors which rotate
the arm 12.
[0030] The robot 1 includes a hand driving device 45 which closes
and opens the hand 6. The hand driving device 45 in this embodiment
includes a hand driving cylinder 18 which drives the hand 6, and an
air pump and a solenoid valve for supplying compressed air to the
hand driving cylinder 18.
[0031] The robot 1 includes a base portion 11 which supports the
arm 12. The robot 1 in this embodiment includes a force detector 19
which detects a force acting on the base portion 11. The force
acting on the base portion 11 corresponds to the force acting on
the robot 1. The force detector 19 outputs a signal corresponding
to a force applied by a human. The force detector 19 in this
embodiment is fixed to a floor surface 20.
[0032] The force detector 19 may be any detector capable of
detecting the magnitude and direction of the force acting on the
robot 1. The force detector 19 in this embodiment includes a metal
substrate connected to the base portion 11, and a strain sensor
attached to the substrate surface. The force detector 19 can
calculate a force acting on the robot 1, based on the amount of
deformation detected by the strain sensor.
[0033] The robot 1 in this embodiment includes a state detector
which detects the robot position and orientation. The state
detector detects the position of the robot tip point and the
orientation of the robot 1. The state detector in this embodiment
includes a rotation angle detector 15 attached to each arm driving
motor 14. The rotation angle detector 15 detects the angle of
rotation at which each arm driving motor 14 is driven. The
position, orientation, velocity, and acceleration of the robot 1
can be detected based on the angles of rotation of the arm driving
motors 14.
[0034] The state detector may be not only the rotation angle
detector 15 but also, e.g., a camera, a motion sensor, a pressure
sensor, a torque sensor for a motor, and a contact sensor. In other
words, the state variable observed by the state observation unit 51
may be not only the output of the rotation angle detector 15 but
also data (state quantity) obtained from, e.g., a camera, a motion
sensor, a pressure sensor, a torque sensor for a motor, and a
contact sensor. These detectors such as a camera, a motion sensor,
a pressure sensor, a torque sensor for a motor, and a contact
sensor may be, for example, directly placed in predetermined
portions of the robot 1 (hand 6) or mounted in appropriate portions
around the robot 1, as a matter of course.
[0035] The robot 1 is driven based on an operation command from the
robot controller 2. The robot controller 2 includes an arithmetic
processor including, e.g., a CPU (Central Processing Unit), a RAM
(Random Access Memory), and a ROM (Read Only Memory) connected to
each other via buses. The robot controller 2 includes a storage
unit 59 which stores various types of information. The robot
controller 2 includes an action control unit 43 which controls the
arm driving device 44 and the hand driving device 45. The arm
driving device 44 and the hand driving device 45 are driven in
accordance with an operation command from the action control unit
43.
[0036] The robot controller 2 in this embodiment includes an
external force calculation unit 46 which estimates an external
force applied to the robot 1 from the outside of the robot 1. The
force detected by the force detector 19 includes an internal force
generated by the mass of the robot 1 and the operation of the robot
1, and an external force applied to the robot 1 from the outside of
the robot 1.
[0037] The external force calculation unit 46 calculates an
internal force acting on the robot 1 by the self weight of the
robot 1 as the robot 1 operates while no force is applied from the
outside of the robot 1. An internal force can be calculated from the position and orientation of the robot 1, detected based on the output of the rotation angle detector 15, in combination with the mass of the robot 1 and other factors. The mass of the robot 1 and
other factors can be stored in the storage unit 59 in advance. The
external force calculation unit 46 subtracts the internal force
from the force detected by the force detector 19 to calculate an
external force. The external force corresponds to the force applied
to a workpiece by a human 85.
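As an illustration of this subtraction, a minimal Python sketch follows; the function name, the three-axis force representation, and the numerical values are hypothetical and are not taken from the application:

```python
import numpy as np

def estimate_external_force(measured_force: np.ndarray,
                            internal_force: np.ndarray) -> np.ndarray:
    """Estimate the force applied by the human.

    measured_force: the force reported by the force detector 19.
    internal_force: the force produced by the robot's own mass and
        motion, computed from the joint angles and a stored mass model.
    """
    return measured_force - internal_force

# Illustrative 3-axis force vectors in newtons.
measured = np.array([12.0, -3.0, 55.0])  # output of the force detector
internal = np.array([2.0, -1.0, 49.0])   # self-weight and motion term
print(estimate_external_force(measured, internal))  # -> [10. -2.  6.]
```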
[0038] The robot controller 2 includes an input unit 41 and a
display unit 42. The display unit 42 can display information
concerning the operation of the robot 1. Examples of the display
unit 42 may include a liquid crystal display. The input unit 41
allows a human to input a desired command to the robot controller
2. Examples of the input unit 41 may include a keyboard.
[0039] FIG. 5 is a schematic view for explaining a path for
transporting a workpiece in the robot system according to this
embodiment. Referring to FIG. 1 and FIG. 5, in this embodiment, the
workpiece W placed on the floor surface 20 is transported onto the
upper surface of a worktable 81. The workpiece W is, for example, a
heavy workpiece. It may be very exhausting or difficult for the human 85 to transport such a workpiece. The workpiece W in this embodiment is an automobile tire.
[0040] In transporting the workpiece W, multiple paths for
transporting the workpiece W are available, as indicated by arrows
91 to 93. Depending on the path, the human 85 experiences intervals in which the human must exert a great force and intervals in which the human need exert only a small force. Even when the workpiece W is
set at the same position, the robot 1 may have various
orientations.
[0041] Referring to FIG. 2, the robot controller 2 in this
embodiment includes a machine learning device 5 which learns the
action pattern of a human and, in turn, learns a method for
controlling a robot to appropriately assist the human. The machine
learning device 5 in this embodiment selects an action of the robot
1 determined to be optimal at a predetermined moving point during
the period in which the robot 1 is driven. In other words, the
machine learning device 5 sends a command to drive the robot 1 in a
driving pattern determined to be optimal.
[0042] The machine learning device 5 includes a state observation
unit 51 which obtains a state variable representing the state of
the robot 1 during the period in which the human 85 and the robot 1
work cooperatively. The state variable in this embodiment includes
the position, orientation, velocity, and acceleration of the robot
1. For example, the position, the velocity, and the acceleration at
the robot tip point may be used as state variables. The position,
orientation, velocity, and acceleration of the robot 1 can be
detected based on the output of the rotation angle detector 15. A
signal output from the rotation angle detector 15 is input to the
state observation unit 51.
[0043] The state variable is not limited to this form but may be
any variables representing the robot state. For example, the state
observation unit 51 may obtain at least one variable of the
position, orientation, velocity, and acceleration of the robot
1.
[0044] The machine learning device 5 includes a determination data
obtaining unit 52 which obtains determination data associated with
the burden on the human 85. The determination data in this
embodiment includes the magnitude and direction of the force
applied by the human 85 to transport the workpiece W. The
determination data in this embodiment further includes the time
taken to move the workpiece W.
[0045] The determination data obtaining unit 52 in this embodiment
obtains determination data, based on the output of the force
detector 19. The magnitude of the external force calculated by the
external force calculation unit 46 corresponds to the magnitude of
the force applied by the human 85. The direction of the external
force calculated by the external force calculation unit 46
corresponds to the direction of the force applied to the workpiece
W by the human 85. The determination data obtaining unit 52
receives the magnitude and direction of the force applied by the
human 85 from the external force calculation unit 46.
[0046] The robot controller 2 includes a movement time measuring
unit 47 which measures the working time. The movement time
measuring unit 47 in this embodiment calculates the time taken to
move between moving points (to be described later). The movement
time measuring unit 47 in this embodiment calculates the movement
time, based on a command from the action control unit 43. The
movement time measured by the movement time measuring unit 47 is
sent to the determination data obtaining unit 52.
[0047] The determination data is not limited to the above-mentioned
form but may be any data associated with at least one of the level
of burden on the human and the working efficiency. For example, the
determination data may be not only, e.g., the magnitude and
direction of a load sensed by the robot, the magnitudes and
directions of loads sensed by surrounding humans or objects, the
levels of burden on the surrounding humans or objects, and the
movement time, but also pieces of information from, e.g., a camera,
a motion sensor, and a pressure sensor. In this specification, the
human includes not only an operator who actually performs
processing (task) cooperatively with the robot, but also various
humans such as a human who observes the processing around the robot
or a human who happens to pass near the robot, although they do not
directly operate the robot.
[0048] The machine learning device 5 in this embodiment includes a
learning unit 54 which learns a training data set for setting the
action of the robot, based on the state variable and the
determination data. The learning unit 54 obtains the state variable
from the state observation unit 51. The learning unit 54 further
obtains the determination data from the determination data
obtaining unit 52. The training data set means a set of pieces of
value information associated with actions defined based on the
state variable and the determination data. The machine learning
device 5 can set a method for driving the robot by comparing values
associated with the state and the action in the training data set.
This embodiment is not limited to an industrial robot and is also
applicable to, e.g., a field robot or a service robot, as a matter
of course.
[0049] The machine learning device will be described hereinafter.
The machine learning device has the function of extracting, e.g., a
useful rule, a knowledge representation, and a determination
criterion by analysis from a set of data input to the device,
outputting the determination results, and learning knowledge
(machine learning). A variety of machine learning techniques are
available, which are roughly classified into, e.g., "supervised
learning," "unsupervised learning," and "reinforcement learning."
To implement these techniques, another technique called "deep
learning" in which extraction of feature amounts themselves is
learned is available.
[0050] The machine learning device 5 illustrated in FIG. 2 uses
"reinforcement learning," and a machine learning device 7 (to be
described later with reference to FIG. 9) uses "supervised
learning." Although these types of machine learning (machine
learning devices 5 and 7) may use a general-purpose computer or
processor, the use of, e.g., GPGPU (General-Purpose computing on
Graphics Processing Units) or large-scale PC clusters allows
higher-speed processing.
[0051] First, in supervised learning, teacher data, i.e., a large
number of sets of data of certain inputs and results (labels) are
fed into a machine learning device, which learns features observed
in these data sets and inductively acquires a model (learning
model) for estimating the result from the input, i.e., their
relationship. Supervised learning can be implemented using an
algorithm such as a neural network (to be described later).
[0052] In unsupervised learning, only input data are fed into the learning device in large amounts. The device learns the distribution of the input data and, in turn, learns to, e.g., compress, classify, and shape the input data, without corresponding teacher output data being fed into it. This allows, e.g., clustering of the features seen in these data sets into similar features. The obtained result can be used to define certain criteria and to allocate outputs in an optimizing manner according to those criteria, thus predicting an output.
[0053] A problem setting intermediate between unsupervised learning and supervised learning, called semi-supervised learning, is also available. This applies when, for example, only some data serve as
data sets of inputs and outputs and the remaining data include only
inputs. In this embodiment, learning can be efficiently performed
by applying data (e.g., image data or simulation data) which can be
obtained even without actual movement of the robot to unsupervised
learning.
[0054] Reinforcement learning will be described below. The problem setting of reinforcement learning is considered as follows:
[0055] The robot observes the environmental state to decide its
action;
[0056] The environment may change according to a certain rule and a
human may change the environment by his or her own action;
[0057] A reward signal is returned every time an action is taken;
[0058] The sum of (discounted) rewards to be obtained in the future is to be maximized;
[0059] Learning starts in a state in which the result to be brought about by an action is totally unknown or known only incompletely. In other words, the robot can obtain the result of an action as data only after it actually takes the action. This means that an optimal action may preferably be searched for by trial and error; and
[0060] As if imitating human behavior, learning can be started from a good starting point, i.e., from an initial state in which learning has been performed in advance (by a technique such as the above-mentioned supervised learning or inverse reinforcement learning).
[0061] In reinforcement learning, in addition to determination and classification, an action is learned so as to acquire a method for selecting an appropriate action in consideration of the interactions that the action exerts on the environment, i.e., learning is performed to maximize the reward to be obtained in the future. Although this description continues with an example of Q-learning, the present invention is not limited to it.
[0062] In Q-learning, a value Q(s, a) of selection of an action a
is learned in a particular environmental state s. In other words,
an action a having the highest value Q(s, a) in the particular
state s may be preferably selected as an optimal action. However,
at first, a correct value Q(s, a) is totally unknown for a pair of
a state s and an action a. The agent (the subject of an action)
selects various actions a in the particular state s and rewards are
offered for the actions a. With this operation, the agent learns to
select a better action, i.e., a correct value Q(s, a).
[0063] To maximize the sum of rewards to be obtained in the future as a result of the actions, Q(s, a) = E[Σ_t γ^t r_t] is to be finally satisfied, where E[·] is the expected value, t is time, γ is a parameter called the discount rate (to be described later), r_t is the reward at time t, and Σ is the sum over time t. The expected value in this expression is taken with respect to the change in state that follows an optimal action; since it is not known, it is learned through search. An update expression for such a value Q(s, a) is given by, e.g.:

    Q(s_t, a_t) ← Q(s_t, a_t) + α ( r_{t+1} + γ max_a Q(s_{t+1}, a) - Q(s_t, a_t) )    (1)

where s_t is the environmental state at time t and a_t is the action at time t. Upon the action a_t, the state changes to s_{t+1}, and r_{t+1} is the reward received upon that change in state. The term with max is the Q-value, multiplied by γ, for the case in which the action a having the highest Q-value known in the state s_{t+1} is selected. γ is a parameter called the discount rate, satisfying 0 < γ ≤ 1, and α is a learning factor satisfying 0 < α ≤ 1.
[0064] Expression (1) represents a method for updating the evaluation value Q(s_t, a_t) of the action a_t in the state s_t, based on the reward r_{t+1} returned as a result of the trial a_t. More specifically, when the sum of the reward r_{t+1} and the evaluation value max_a Q(s_{t+1}, a) of the best action in the state subsequent to the state s_t upon the action a_t is greater than the evaluation value Q(s_t, a_t) of the action a_t in the state s_t, Q(s_t, a_t) is increased; otherwise, Q(s_t, a_t) is reduced. In other words, the value of a particular action in a particular state is brought close to the sum of the reward immediately returned as a result and the value of the best action in the subsequent state.
[0065] Methods for representing Q(s, a) on a computer include a method that holds the numerical values of all state-action pairs (s, a) in the form of a table and a method that provides a function approximating Q(s, a). With the latter method, the above-mentioned expression (1) can be implemented by adjusting the parameters of the approximation function using a technique such as stochastic gradient descent. A neural network (to be described later) can be used as the approximation function.
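Expression (1) maps directly onto the tabular representation. A minimal Python sketch of the tabular update follows; the hyperparameter values and the generic state/action encoding are illustrative assumptions, not taken from the application:

```python
from collections import defaultdict

GAMMA = 0.9  # discount rate, 0 < gamma <= 1
ALPHA = 0.1  # learning factor, 0 < alpha <= 1

# Q-table holding the action value variables Q(s, a), defaulting to 0.
Q = defaultdict(float)

def q_update(s_t, a_t, r_t1, s_t1, actions):
    """Apply expression (1) to one observed transition."""
    best_next = max(Q[(s_t1, a)] for a in actions)
    Q[(s_t, a_t)] += ALPHA * (r_t1 + GAMMA * best_next - Q[(s_t, a_t)])
```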
[0066] Neural networks can also be used as learning models for
supervised learning and unsupervised learning or approximation
algorithms for value functions in reinforcement learning. FIG. 3 is
a diagram schematically representing a model for a neuron, and FIG.
4 is a diagram schematically representing a three-layer neural
network formed by combining the neurons illustrated in FIG. 3. More specifically, the neural network is implemented by, e.g., an arithmetic device and a memory imitating the model for a neuron illustrated in, e.g., FIG. 3.
[0067] As illustrated in FIG. 3, a neuron outputs a result y for a plurality of inputs x (FIG. 3 illustrates inputs x1 to x3 as an example). Each input x (x1, x2, x3) is multiplied by a weight w (w1, w2, w3) corresponding to that input. With this operation, the neuron outputs a result y given by:

    y = f_k( Σ_{i=1}^n x_i w_i - θ )    (2)

where θ is the bias and f_k is the activation function. Note that the input x, the result y, and the weight w are all vectors.
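Expression (2) can be transcribed directly into code. In the sketch below, a sigmoid is assumed for the activation function f_k, which the application leaves unspecified:

```python
import numpy as np

def neuron_output(x: np.ndarray, w: np.ndarray, theta: float) -> float:
    """Expression (2): y = f_k(sum_i x_i * w_i - theta)."""
    def f_k(u):  # activation function; a sigmoid is assumed here
        return 1.0 / (1.0 + np.exp(-u))
    return f_k(np.dot(x, w) - theta)

# Inputs x1 to x3 and weights w1 to w3 (illustrative values).
y = neuron_output(np.array([0.5, 1.0, -0.2]),
                  np.array([0.8, -0.4, 0.3]),
                  theta=0.1)
```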
[0068] A three-layer neural network formed by combining the neurons illustrated in FIG. 3 will be described below with reference to FIG. 4. A plurality of inputs x (inputs x1 to x3 are taken as an example herein) are input from the left of the neural network, and results y (results y1 to y3 are taken as an example herein) are output from the right of the network, as illustrated in FIG. 4. More specifically, the inputs x1, x2, and x3 are
multiplied by a weight corresponding to each of three neurons N11
to N13 and then are input to the neurons. The weights used to
multiply these inputs are collectively referred to as W1
herein.
[0069] The neurons N11 to N13 output z11 to z13, respectively. Referring to FIG. 4, z11 to z13 are collectively referred to as the feature vector Z1 and may be regarded as a vector obtained by extracting the feature amounts of the input vector. The feature vector Z1 is defined between the weights W1 and W2. z11 to z13 are multiplied by a weight corresponding to each of the two neurons N21 and N22 and are then input to those neurons. The weights used to multiply these feature vectors are collectively referred to as W2 herein.
[0070] The neurons N21 and N22 output z21 and z22, respectively. Referring to FIG. 4, z21 and z22 are collectively referred to as the feature vector Z2. The feature vector Z2 is defined between the weights W2 and W3. z21 and z22 are multiplied by a weight corresponding to each of the three neurons N31 to N33 and are then input to those neurons. The weights used to multiply these feature vectors are collectively referred to as W3 herein.
[0071] Lastly, the neurons N31 to N33 output results y1 to y3,
respectively. The operation of the neural network includes a
learning mode and a value prediction mode. For example, the weights W are learned using a learning data set in the learning mode, and a robot action is determined in the prediction mode using the learned parameters. Although "prediction" has been referred to above for the
sake of convenience, a variety of tasks such as detection,
classification, and inference are possible, as a matter of
course.
[0072] Data obtained by actually operating the robot in the
prediction mode can be immediately learned and reflected on the
subsequent action (online learning), or a group of data collected
in advance can be used to perform collective learning and to
subsequently execute the detection mode using the same parameters
(batch learning). As another, intermediate approach, the learning
mode can be interposed every time a certain amount of data is
accumulated.
[0073] The weights W1 to W3 can be learned by the error
backpropagation method. The information of errors enters from the
right and flows to the left. The error backpropagation method is
used to adjust (learn) each weight to reduce the difference between
the true output y (teacher) and the output y when the input x is
input.
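A compact sketch of the three-layer network of FIG. 4 together with one error backpropagation step follows; the sigmoid hidden activations, the squared-error criterion, and the learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes follow FIG. 4: 3 inputs, neurons N11-N13, N21-N22, N31-N33.
W1 = rng.normal(size=(3, 3))
W2 = rng.normal(size=(3, 2))
W3 = rng.normal(size=(2, 3))

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward(x):
    z1 = sigmoid(x @ W1)   # feature vector Z1
    z2 = sigmoid(z1 @ W2)  # feature vector Z2
    y = z2 @ W3            # results y1 to y3 (linear output layer)
    return z1, z2, y

def train_step(x, y_true, lr=0.01):
    """One backpropagation step: error information flows right to left."""
    global W1, W2, W3
    z1, z2, y = forward(x)
    dy = y - y_true                     # output error (squared-error loss)
    dz2 = (dy @ W3.T) * z2 * (1 - z2)   # error propagated through W3
    dz1 = (dz2 @ W2.T) * z1 * (1 - z1)  # error propagated through W2
    W3 -= lr * np.outer(z2, dy)
    W2 -= lr * np.outer(z1, dz2)
    W1 -= lr * np.outer(x, dz1)
```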
[0074] Such a neural network can have more than three layers
(called deep learning). It is possible to automatically acquire
from only teacher data an arithmetic device which extracts features
of the input stepwise and returns a result. The machine learning
device 5 according to an embodiment includes, e.g., a state
observation unit 51, a learning unit 54, and a decision unit 58, as
illustrated in FIG. 2, to execute the above-described Q-learning.
However, the machine learning method applied to the present
invention is not limited to Q-learning, as mentioned earlier. In
other words, various techniques such as "supervised learning,"
"unsupervised learning," "semi-supervised learning," and
"reinforcement learning (reinforcement learning other than
Q-learning)" that can be used by the machine learning device are
applicable.
[0075] FIG. 6 depicts a schematic view for explaining moving points
in this embodiment. FIG. 6 illustrates an example in which the
robot tip point moves on a two-dimensional plane for the sake of
descriptive simplicity. The workpiece W is transported from an
initial position 88 to a target position 89. Moving points P are
set in a checkerboard pattern in a region to which the robot tip
point is likely to move. The moving point P is defined as a point
passed by the robot tip point. The robot tip point moves from a
moving point P_00 corresponding to the initial position 88 to a moving point P_nm corresponding to the target position 89.
[0076] FIG. 7 depicts a schematic view for explaining how the robot
tip point moves. In this embodiment, the moving direction of the
robot tip point is preset for each moving point P. The robot tip
point moves in directions indicated by arrows 94 to 97. When the
robot tip point is located at one moving point P, it moves to
another adjacent moving point P upon the subsequent action. In the
example illustrated in FIG. 7, when the robot tip point is located at the moving point P_11, it moves to the moving point P_12, P_21, P_10, or P_01.
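The adjacency rule of FIG. 7 can be written down in a few lines; the grid indexing used here is a hypothetical encoding of the moving points P:

```python
def adjacent_points(i, j, n, m):
    """Moving points reachable from P_ij, one step along arrows 94 to 97."""
    candidates = [(i, j + 1), (i + 1, j), (i, j - 1), (i - 1, j)]
    return [(a, b) for a, b in candidates if 0 <= a <= n and 0 <= b <= m]

# From P_11 on a 4 x 4 grid: P_12, P_21, P_10, and P_01, as in FIG. 7.
print(adjacent_points(1, 1, 4, 4))
```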
[0077] In this embodiment, reinforcement learning which uses the
above-mentioned Q-learning is performed among a variety of machine
learning methods. The training data set in this embodiment includes
a plurality of action value variables Q. In this embodiment,
various techniques such as "supervised learning," "unsupervised
learning," "semi-supervised learning," and "reinforcement learning
(including Q-learning)" are applicable, as mentioned earlier.
[0078] In this embodiment, the state s_t corresponds to the state variable of the robot 1. In other words, the state s_t includes, e.g., the position, orientation, velocity, and acceleration of the robot 1. The action a_t corresponds to, e.g., movement of the robot tip point in the directions indicated by the arrows 94 to 97. The action a_t includes, e.g., the position, orientation, velocity, and acceleration of the robot 1 for movement in the directions indicated by the arrows 94 to 97.
[0079] The learning unit 54 in this embodiment updates the action value variables Q every time a workpiece W is transported. The learning unit 54 includes a reward computation unit 55 which sets a reward, and a function update unit 56 which updates the action value variables Q. The reward computation unit 55 sets the reward r_t, based on the determination data obtained by the determination data obtaining unit 52. The reward computation unit 55 may also set the reward r_t, based on the state variable obtained by the state observation unit 51.
[0080] The reward computation unit 55 can set a greater reward
r_t for a lower burden (level of burden) on the human and a
higher working efficiency. When, for example, the workpiece W
considerably decelerates or accelerates, the burden on the human
increases, thus degrading the working efficiency. More
specifically, the smaller the absolute value of the acceleration
(acceleration magnitude) of the robot 1, the lower the burden on
the human, and the higher the working efficiency. Alternatively,
when the absolute value of the acceleration is large, it can be
determined that the robot 1 is in an undesirable state because of
the rapid operation of the robot 1. Thus, the reward computation
unit 55 can set a greater reward for a smaller absolute value of
the acceleration of the robot 1. The force applied by the human is
preferably as small as possible. The reward computation unit 55 may
set a greater reward for a smaller force applied by the human
(external force applied to the robot).
[0081] The shorter the working time to transport the workpiece W
from the initial position 88 to the target position 89, the lower
the burden on the human, and the higher the working efficiency.
Therefore, the reward computation unit 55 sets a greater reward for
a shorter time taken to move between moving points P. In the form illustrated in FIG. 5, the transport path gets longer when the workpiece W is moved downward. Thus, the reward computation unit 55 sets a low reward when the robot tip point moves downward or the human applies a force downward. The level of burden on the human not only means the burden on the human itself but also includes loads caused by various factors, such as unexpected contact with or pressing of the human, and may even cover the burden placed not only on the human but also on, e.g., surrounding objects.
[0082] In addition, the reward computation unit 55 can set a
reward, based on any action of the human. When, for example, the
workpiece W collides against the worktable 81, the reward
computation unit 55 can set a small positive reward or a negative reward.
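One possible reward computation consistent with paragraphs [0080] to [0082] is sketched below. The application states only monotonic relations (a greater reward for a smaller absolute acceleration, a shorter movement time, and a smaller human force, and a poor reward upon a collision), so the linear weighting and all numeric values are assumptions:

```python
def compute_reward(abs_accel, move_time, human_force, collided,
                   w_accel=1.0, w_time=1.0, w_force=1.0,
                   collision_penalty=10.0):
    """Greater reward for a smaller |acceleration|, a shorter movement
    time, and a smaller force applied by the human; a collision with,
    e.g., the worktable 81 is penalized."""
    reward = -(w_accel * abs_accel + w_time * move_time
               + w_force * human_force)
    if collided:
        reward -= collision_penalty
    return reward
```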
[0083] In reward setting, for example, the human can preset the
value of a reward for a variable such as the magnitude of the force
and store it in the storage unit 59. The reward computation unit 55
can read and set the reward stored in the storage unit 59.
Alternatively, the human may store an arithmetic expression for
calculating a reward in the storage unit 59 in advance, and the
reward computation unit 55 may calculate a reward, based on the
arithmetic expression.
[0084] The function update unit 56 updates the action value
variables Q using the above-mentioned expression (1). In other
words, the value of the action of the robot at a predetermined
moving point is updated based on the actual action of the robot and
the force applied by the human.
[0085] The human can preset the initial values of the action value
variables Q at each moving point P. Alternatively, the human may
preset random initial values.
[0086] The machine learning device 5 includes a decision unit 58 which sets an action of the robot 1, based on the training data set. The decision unit 58 in this embodiment sets an action of the robot 1, based on the action value variables Q updated by the learning unit 54. For example, the decision unit 58 can select the action having the largest action value variable Q. Reinforcement learning in this embodiment uses the ε-greedy method. In the ε-greedy method, the decision unit 58 sets a random action with a predetermined probability ε and sets the action having the maximum action value variable Q with probability (1 - ε). In other words, the decision unit 58 selects an action different from that considered to be optimal with probability ε. This method may allow discovery of a robot action more excellent than the one determined to be optimal.
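A minimal sketch of the ε-greedy selection follows; the probability value and the Q-table lookup are illustrative:

```python
import random

def epsilon_greedy(state, actions, Q, epsilon=0.1):
    """With probability epsilon pick a random action; otherwise pick the
    action with the largest action value Q (probability 1 - epsilon)."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))
```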
[0087] A command for the action of the robot 1 set by the decision
unit 58 is sent to the action control unit 43. The action control
unit 43 controls the robot 1 and the hand 6, based on the command
from the decision unit 58.
[0088] In this manner, a machine learning method according to this
embodiment includes the steps of obtaining a state variable
representing the state of a robot during the period in which a
human and the robot work cooperatively, and obtaining determination
data for at least one of the level of burden on the human and the
working efficiency. The machine learning method further includes
the step of learning a training data set for determining an action
of the robot, based on the state variable and the determination
data.
[0089] The machine learning device and the machine learning method
according to this embodiment allow learning of a method for
controlling a robot, based on the action pattern of a human, by
repeated transportation of the workpiece W. This makes it possible
to set an optimal method for controlling a robot, which involves
less burden on a human, high working efficiency, and a short
working time.
[0090] Machine learning by the machine learning device and the
machine learning method is not limited to the above-mentioned form
but may be any form of machine learning. For example, the machine
learning device may set an optimal action using a multi-layer
neural network formed by deep learning. Instead of using a table
representing action value variables as functions of a plurality of
actions and a plurality of states, a neural network can be used
which outputs action value variables for each action taken when a
predetermined state is given.
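Such a network maps one state vector to an action value per action in a single forward pass. The sketch below assumes a single hidden layer with a ReLU activation and illustrative dimensions; the application fixes none of these choices:

```python
import numpy as np

N_STATE_DIMS = 12  # e.g., position, orientation, velocity, acceleration
N_ACTIONS = 4      # e.g., moves along the arrows 94 to 97

rng = np.random.default_rng(0)
W_hidden = rng.normal(scale=0.1, size=(N_STATE_DIMS, 32))
W_out = rng.normal(scale=0.1, size=(32, N_ACTIONS))

def action_values(state: np.ndarray) -> np.ndarray:
    """Return one action value variable per action for the given state."""
    hidden = np.maximum(0.0, state @ W_hidden)  # ReLU hidden layer
    return hidden @ W_out

q = action_values(rng.normal(size=N_STATE_DIMS))
best_action = int(np.argmax(q))  # action with the largest value
```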
[0091] In the above-mentioned embodiment, moving points are
arranged in a checkerboard pattern. However, moving points are not
limited to this form and may be set in any form. Further, reducing
the interval between moving points allows a smooth robot operation.
In the above-mentioned embodiment, the robot moves
two-dimensionally because moving points are set on a plane.
However, arranging moving points in a three-dimensional space
allows the robot to move three-dimensionally.
[0092] Referring to FIG. 2, the machine learning device 5 in this
embodiment includes a human identification unit 57 which identifies
a human. A human-specific training data set is created in advance.
The storage unit 59 stores the human-specific training data set. In
this embodiment, the human 85 inputs a human-specific number to the
input unit 41. The human identification unit 57 reads a training
data set corresponding to the human from the storage unit 59, based
on the input number. The learning unit 54 learns the training data
set corresponding to the human. This control allows human-specific
setting of a method for controlling a robot based on the action
pattern of a human. In other words, optimal robot control can be
performed for each human. Various people exist such as tall humans,
short humans, humans having strong lower-body muscles, and humans
having strong arm muscles. Optimal robot control can be achieved in
accordance with each such human.
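A human-specific store of training data sets can be as simple as a lookup keyed by the entered number; the following sketch is a hypothetical design, with an empty Q-table created on first use:

```python
training_sets = {}

def training_set_for(human_id: str) -> dict:
    """Return the training data set for the identified human, creating
    an empty one (here, a fresh Q-table) on first use."""
    if human_id not in training_sets:
        training_sets[human_id] = {}
    return training_sets[human_id]

Q = training_set_for("8501")  # number entered at the input unit 41
```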
[0093] Control for identifying a human is not limited to the
above-mentioned form but may use any method. For example, the
machine learning device may be equipped with a number reader. The
human carries a card inscribed with an individual number. The number
reader reads the number and sends a result to the human
identification unit. The human identification unit can identify the
human, based on the read number.
[0094] FIG. 8 is a block diagram illustrating another robot system
in this embodiment. In another robot system (manufacturing system)
4, distributed learning is performed. The other robot system 4
includes a plurality of robots and a plurality of robot
controllers, as illustrated in FIG. 8. The robot system 4 includes
a first robot 1a and a second robot 1b. The robot system 4 includes
a first hand 6a attached to the first robot 1a, and a second hand
6b attached to the second robot 1b. In this manner, the other robot
system 4 includes two robots 1a and 1b and two hands 6a and 6b.
[0095] The robot system 4 includes a first robot controller 2a
which controls the first robot 1a, and a second robot controller 2b
which controls the second robot 1b. The first robot controller 2a
and the second robot controller 2b have the same configuration as
the above-mentioned robot controller 2. The plurality of robot
controllers 2a and 2b are connected to each other via a
communication device including a communication line 21. The
communication device can perform communication via, e.g.,
Ethernet®. The robot controllers 2a and 2b are capable of
mutual information exchange through communication.
[0096] In the robot system 4, the plurality of robots 1a and 1b and
the human work cooperatively. In the example illustrated in FIG. 8,
two robots assist the human in work. The first robot controller 2a
independently learns to control the first robot 1a. The second
robot controller 2b independently learns to control the second
robot 1b. Pieces of information learned by the respective robot
controllers are transmitted to each other via the communication
line 21.
[0097] In this manner, pieces of information learned by the
respective robot controllers 2a and 2b can be shared by the first
robot controller 2a and the second robot controller 2b. This
control allows sharing of, e.g., action patterns for learning to
increase the number of learning operations. This can improve the
learning accuracy.
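The application does not specify how the shared learned information is combined. Averaging the action values that both controllers have learned, and copying the rest, is one simple possibility, sketched here:

```python
def merge_q_tables(q_a: dict, q_b: dict) -> dict:
    """Pool the action values learned by two robot controllers."""
    merged = dict(q_a)
    for key, value in q_b.items():
        merged[key] = (merged[key] + value) / 2.0 if key in merged else value
    return merged

# Each controller learns independently, then periodically exchanges its
# table over the communication line 21 and merges the received one.
```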
[0098] The above-described embodiment illustrates a cooperative
task for transporting the workpiece W. However, the cooperative
task is not limited to this form but may be any cooperative task.
Examples may include a task for assembling one part into a
predetermined device by cooperation between a robot and a
human.
[0099] FIG. 9 is a block diagram illustrating a modification of the robot system illustrated in FIG. 2 and depicts a robot system 3' which exploits supervised learning. The robot system 3' includes, e.g., a robot 1, a hand 6, and a robot controller 2', as illustrated in FIG. 9. The robot controller 2' includes a machine learning device 7, an input unit 41, a display unit 42, an action control unit 43, an external force calculation unit 46, and a movement time measuring unit 47. The machine learning device 7 includes a state observation unit 71, a determination data obtaining unit 72, a learning unit 74, a human identification unit 77, a decision unit 78, and a storage unit 79. The learning unit 74 includes an error computation unit 75 and a learning model update unit 76.
[0100] More specifically, as is obvious from a comparison of FIG. 9 with FIG. 2 mentioned earlier, in the learning unit 74 of the robot system 3' in the modification illustrated in FIG. 9, the reward computation unit 55 and the function update unit 56 in the learning unit 54 illustrated in FIG. 2 are replaced with an error computation unit 75 and a learning model update unit 76. Other configurations are substantially the same as those of the machine learning device 5 illustrated in FIG. 2, and a description thereof will not be given. The error computation unit 75 receives teacher data from the outside and, for example, performs computation to reduce the error with respect to the data obtained by past learning, and the learning model update unit 76 updates the learning model (error model). In other words, the error computation unit 75 receives the output of the state observation unit 71, teacher data, and the like, and computes, for example, the error between the result (label)-bearing data and the output of the learning model mounted in the learning unit 74. When, for example, the program input to the robot controller 2' (the operation processed by the robot system 3') is the same, result (label)-bearing data obtained up to the day before a predetermined day on which the robot system 3' performs processing can be held and provided to the error computation unit 75 as teacher data on the predetermined day.
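The division of labor between the error computation unit 75 and the learning model update unit 76 can be sketched as follows; the linear learning model, the squared-error criterion, and the gradient step are assumptions made for illustration:

```python
import numpy as np

def error_and_gradient(W, x, label):
    """Compute the error between the learning model's output (a linear
    model y = W x is assumed) and the result (label)-bearing teacher
    data, together with its gradient with respect to W."""
    err = W @ x - label
    return 0.5 * float(err @ err), np.outer(err, x)

def update_learning_model(W, grad, lr=0.01):
    """One gradient step reducing the computed error."""
    return W - lr * grad
```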
[0101] Alternatively, data obtained through, e.g., simulation
performed outside the robot system 3' or result (label)-bearing
data obtained by another robot system may be provided to the error
computation unit 75 of the robot system 3' as teacher data via a
memory card or a communication line. The result (label)-bearing
data may also be held in a non-volatile memory such as a flash
memory built into the learning unit 74 and directly used in the
learning unit 74.
[0102] In the foregoing description, assuming a manufacturing
system (manufacturing facility) including a plurality of robot
systems 3' (3), for example, a machine learning device 7 (5) may be
provided for each robot system 3' (3), and the plurality of machine
learning devices 7 (5) located in the plurality of robot systems 3'
(3) may share or exchange data with each other via a communication
medium. A machine learning device 7 (5) may also be located on,
e.g., a cloud server.
[0103] The present invention can provide a machine learning device,
a robot controller, a robot system, and a machine learning method
which may learn the action pattern of a human and set a method for
controlling a robot which appropriately assists the human.
[0104] All examples and conditional language provided herein are
intended for the pedagogical purposes of aiding the reader in
understanding the invention and the concepts contributed by the
inventor to further the art, and are not to be construed as
limitations to such specifically recited examples and conditions,
nor does the organization of such examples in the specification
relate to a showing of the superiority and inferiority of the
invention. Although one or more embodiments of the present
invention have been described in detail, it should be understood
that various changes, substitutions, and alterations could be made
hereto without departing from the spirit and scope of the
invention.
* * * * *