WO2018042730A1 - Robot control device and robot control method - Google Patents
Robot control device and robot control method
- Publication number
- WO2018042730A1 (PCT/JP2017/010887)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- robot
- assembly
- workpiece
- state
- hand
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23P—METAL-WORKING NOT OTHERWISE PROVIDED FOR; COMBINED OPERATIONS; UNIVERSAL MACHINE TOOLS
- B23P19/00—Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes
- B23P19/04—Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes for assembling or disassembling parts
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- B25J13/085—Force or torque sensors
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1612—Programme controls characterised by the hand, wrist, grip control
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
- B25J9/163—Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
- B25J9/1633—Programme controls characterised by the control loop compliant, force, torque control, e.g. combined with position control
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1687—Assembly, peg and hole, palletising, straight line, weaving pattern movement
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/418—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
- G05B19/41805—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM] characterised by assembly
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23P—METAL-WORKING NOT OTHERWISE PROVIDED FOR; COMBINED OPERATIONS; UNIVERSAL MACHINE TOOLS
- B23P19/00—Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes
- B23P19/02—Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes for connecting objects by press fit or for detaching same
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/39—Robotics, robotics to robotics hand
- G05B2219/39469—Grip flexible, deformable plate, object and manipulate it
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40499—Reinforcement learning algorithm
Definitions
- the present invention relates to a robot control apparatus and a robot control method for performing press-fitting work and the like.
- Patent Document 1 describes a press-fitting device that press-fits a shaft-like component into a press-fitting hole formed in a press-fitted work.
- This press-fitting device has press-fitting means swingably supported on a mounting member via a pair of springs; when the shaft-like component receives an eccentric load from the edge of the press-fitting hole, the press-fitting means swings, reducing the press-fitting reaction force.
- However, the device described in Patent Document 1 merely reduces the press-fitting reaction force. When there is a misalignment between the shaft-like component and the press-fit hole, for example due to individual differences among the shaft-like components, press-fitting is difficult even with the apparatus described in Patent Document 1.
- One aspect of the present invention is a robot control apparatus that controls a robot so that a first part supported by a robot hand driven by an actuator is assembled to a second part. The apparatus includes: a storage unit that stores a relationship, obtained in advance by reinforcement learning, between a plurality of mid-assembly states of the first part and the optimal action of the robot that gives the highest reward for each mid-assembly state; a state detection unit that detects the mid-assembly state of the first part; and an actuator control unit that, based on the relationship stored in the storage unit, specifies the optimal action of the robot corresponding to the mid-assembly state detected by the state detection unit and controls the actuator according to that optimal action.
- Another aspect of the present invention is a robot control method for controlling a robot so that a first part supported by a robot hand driven by an actuator is assembled to a second part. The method includes: a reinforcement learning process of acquiring, by performing the operation of assembling the first part to the second part a plurality of times, the relationship between a plurality of mid-assembly states of the first part and the optimal action of the robot that gives the highest reward for each mid-assembly state; and an assembly work process of, when the first part is assembled to the second part, detecting the mid-assembly state of the first part, specifying the optimal action corresponding to the detected mid-assembly state based on the relationship acquired in the reinforcement learning process, and controlling the actuator according to the specified optimal action.
- According to the present invention, the first part can be easily assembled to the second part by driving the robot hand.
- FIG. 1 is a diagram schematically showing a robot system including a robot control apparatus according to an embodiment of the present invention.
- FIG. 2 is an enlarged view of the arm tip of the robot of FIG. 1. FIGS. 3A and 3B are diagrams showing the bending and buckling states of the workpiece.
- FIG. 7 is a diagram in which a part of FIG. 4 is taken out, for explaining the movement path of the workpiece.
- FIG. 12 is a flowchart showing an example of the processing executed by the normal control unit of FIG. 1.
- FIG. 1 is a diagram schematically showing a robot system including a robot control apparatus according to an embodiment of the present invention.
- This robot system includes a robot 1 and a controller 2 that controls the robot 1.
- the controller 2 includes a PLC (Programmable Logic Controller), a servo amplifier, and the like.
- the robot 1 is, for example, a vertical articulated robot having a plurality of pivotable arms 11, and a work hand 12 is provided at the end of the arm.
- the robot 1 has a plurality of servo motors 13 (only one is shown for convenience) for driving the robot.
- Each servo motor 13 is provided with an encoder 14, and the encoder 14 detects the rotation angle of the servo motor 13. The detected rotation angle is fed back to the controller 2, and the position and posture of the hand 12 in the three-dimensional space are controlled by feedback control in the controller 2.
- the controller 2 includes an arithmetic processing unit having a CPU, ROM, RAM, and other peripheral circuits.
- the controller 2 outputs a control signal to the servo motor 13 according to a program stored in advance in the memory, and controls the operation of the robot 1.
- although the robot 1 can perform various operations, the robot 1 according to the present embodiment is configured in particular to perform an assembling operation of assembling a workpiece to a part.
- FIG. 2 is an enlarged view of the arm tip of the robot 1.
- the hand 12 has claw portions 12a that can open and close about the axis CL1, and can grip the workpiece 100 about the axis CL1 via the claw portions 12a.
- the workpiece 100 is a tube made of, for example, a flexible material (rubber or the like).
- the workpiece 100 is press-fitted onto the outside of a part 101 (for example, a pipe) that protrudes from an engine and is made of a material (metal or the like) harder than the workpiece 100, whereby the workpiece 100 is assembled to the part 101.
- the workpiece 100 and the part 101 form a flow path through which fluid flows into or out of the engine.
- a reference workpiece shape is defined in advance.
- a cylindrical reference workpiece shape (dotted line) centered on the axis CL1 is defined.
- work is performed with a reference point P0 set at the tip of this reference workpiece shape.
- the reference point P0 is set at the point at the tip of the reference workpiece shape on the axis CL1, as shown in the figure.
- the reference point P0 can also be set at a point (for example, the tip of the claw portion 12a) that is a predetermined distance away from the attachment portion of the hand 12.
- the tube-shaped workpiece 100 has a bending tendency peculiar to each individual piece, so that individual differences arise in workpiece shapes. These individual differences are also caused by differences in the molding conditions of the workpiece 100. Furthermore, the physical characteristics (such as the elastic modulus) of the workpiece 100 may change with differences in temperature and humidity during use. As a result, as shown in FIG. 2, a deviation occurs between the axis CL1 and the center axis CL2 of the workpiece tip. For this reason, when the assembly work is performed by moving the hand 12 along a predetermined trajectory (position control), the workpiece 100 may bend as shown in FIG. 3A or buckle as shown in FIG. 3B.
- the robot control device is configured as follows so that the work 100 can be quickly pressed in without complicating the configuration of the hand 12.
- the controller 2 receives signals from the force detector 15 and the input unit 16 in addition to the encoder 14.
- the force detector 15 is constituted by a 6-axis force sensor provided at the tip of the hand 12.
- the direction of the axis CL1 is defined as the Z direction
- the two orthogonal directions constituting the plane perpendicular to the axis CL1 are defined as the X direction and the Y direction
- the force detector 15 detects the translational forces Fx, Fy, and Fz applied to the hand 12 in the X, Y, and Z directions, and the moments Mx, My, and Mz around the X, Y, and Z axes.
- the Z direction is the traveling direction of the hand 12 (the direction along the axis line CL1)
- the Y direction is the direction in which the misalignment between the axis line CL3 of the component 101 and the center axis CL2 of the workpiece tip occurs. That is, the robot 1 operates so as to cause misalignment between components in the YZ plane, and the hand 12 moves in the YZ plane so as to correct the misalignment.
- the input unit 16 is constituted by a keyboard, a touch panel, or the like, and various commands and setting values related to the assembly work, the reference workpiece shape, and the like are input via the input unit 16.
- in addition to performing normal assembly work according to commands from the controller 2, the robot 1 according to the present embodiment can perform assembly work as reinforcement learning; switching between these operations is also performed via the input unit 16.
- Various setting values required for reinforcement learning, for example a movement path serving as a reference for the hand tip (reference point P0) (the reference movement path PA in FIG. 4) and a movement amount (pitch) per unit time, are also set via the input unit 16.
- the controller 2 includes a storage unit 21 and a motor control unit 22 as functional configurations.
- the motor control unit 22 includes a learning control unit 23 that controls the servo motor 13 during reinforcement learning, and a normal control unit 24 that controls the servo motor 13 during normal work assembly work.
- the storage unit 21 stores a relationship (a Q table, which will be described later) between a state during the assembly of the workpiece 100 and an action of the robot 1 corresponding to the state during the assembly.
- the servo motor 13 is driven by the processing in the learning control unit 23, and the work of assembling the workpiece 100 to the component 101 is performed a plurality of times.
- reinforcement learning will be described.
- Reinforcement learning is a type of machine learning that deals with the problem of observing the current state of an agent in a certain environment and determining the action to be taken. Agents get rewards from the environment by selecting actions. There are various methods for reinforcement learning.
- Q-learning is used. Q-learning is a technique for performing learning so as to take an action having the highest action evaluation function value (Q value) (an action that receives the most reward) under a certain environmental state.
- the Q value is updated by the following equation (I) based on the state st and the action at at time t.
- Q(st, at) ← Q(st, at) + α[rt+1 + γ·max Q(st+1, at+1) − Q(st, at)] … (I)
- ⁇ is a coefficient (learning rate) indicating the degree of updating the Q value
- ⁇ is a coefficient (discount rate) indicating how much the result of a future event is reflected.
- r in the above formula (I) is an index (reward) for evaluating the action at with respect to the change in the state st, and is set so that the Q value increases as the state st improves.
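- As a concrete illustration, the update of formula (I) can be sketched as follows. This is a minimal sketch only: the mode and action labels follow the embodiment, but the values of the learning rate α, the discount rate γ, and the reward are assumptions chosen for illustration, not values given in this document.

```python
# Minimal sketch of the Q-value update of formula (I).
# ALPHA (learning rate) and GAMMA (discount rate) values are assumptions.
ALPHA = 0.1
GAMMA = 0.9

def update_q(q_table, state, action, reward, next_state):
    """Q(st,at) <- Q(st,at) + alpha * [r + gamma * max_a Q(st+1,a) - Q(st,at)]."""
    best_next = max(q_table[next_state].values())
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])

# Q table: state (mode) -> {action: Q value}; all zero before learning.
q_table = {s: {a: 0.0 for a in ("a1", "a2", "a3")} for s in ("MD1", "MD2", "MD5")}
# One hypothetical transition: action a2 in state MD1 led to the normal state MD5.
update_q(q_table, "MD1", "a2", 3.0, "MD5")
```

Repeating such updates over many trials drives each Q(state, action) entry toward the converged values from which the Q tables described later are built.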
- FIG. 4 is a diagram illustrating an example of the reference movement path PA.
- the reference movement path PA is determined in consideration of how an operator who is familiar with the work of assembling the workpiece 100 actually press-fits the workpiece 100 by hand.
- when the flexible workpiece 100 is press-fitted onto the outer peripheral surface of the part 101, the operator first holds the tip of the workpiece 100 and inserts the workpiece tip obliquely onto the outside of the part 101 at a predetermined angle θ (for example, 45°) with respect to the axis CL3. Next, the operator rotates the workpiece 100 so that the center axis CL2 of the workpiece 100 coincides with the axis CL3, and then, while maintaining that posture, pushes the workpiece 100 along the axis CL3 to a predetermined position.
- a reference movement path PA when the workpiece 100 is press-fitted by the robot 1 is defined on the YZ plane.
- the movement direction (Z direction) of the hand 12 changes along the reference movement path PA, and accordingly, the Y direction perpendicular to the Z direction also changes.
- the reference movement path PA thus set is divided into a plurality of (for example, 20) steps (ST1 to ST20).
- the time t in the above formula (I) is replaced with a step, and the Q value is calculated for each step.
- in steps ST1 to ST9, the workpiece 100 is inserted obliquely with respect to the axis CL3.
- in steps ST10 to ST12, the workpiece 100 is rotated.
- in steps ST13 to ST20, the workpiece 100 is pushed in along the axis CL3.
- in the following, the current step, the immediately preceding step, and the immediately following step during the assembly work are denoted STt, STt−1, and STt+1, respectively.
- FIG. 5 is a diagram for explaining a state where the work 100 moving in the YZ plane is being assembled.
- the mid-assembly state of the workpiece 100 can be classified into six states, mode MD1 to mode MD6, based on the change amount ΔFz of the force Fz acting on the hand tip in the direction of the axis CL2 (Z direction) and on the moment Mx about the X axis orthogonal to the YZ plane.
- the force change amount ⁇ Fz is the difference between the force Fz acting on the workpiece in the current step STt and the force Fz acting on the workpiece in the immediately preceding step STt-1.
- for example, if the current step is ST3, ΔFz is the difference between the force Fz applied in step ST3 and the force Fz applied in the immediately preceding step ST2.
- the moment Mx takes a positive value when a rotational force in the + Y direction acts on the hand 12, and takes a negative value when a rotational force in the -Y direction acts.
- from the sign of the moment Mx, the misalignment direction of the workpiece 100 with respect to the axis CL3 can be identified.
- the mode MD2 is a state in which both the force change amount ΔFz and the moment Mx are zero or almost zero; more specifically, the force change amount ΔFz is equal to or less than a positive predetermined value ΔF1, and the moment Mx is equal to or greater than a negative predetermined value M2 and equal to or less than a positive predetermined value M1. This corresponds to a non-contact state in which the workpiece 100 is not in contact with the part 101.
- the mode MD1 is a state in which the force change amount ⁇ Fz is equal to or less than ⁇ F1 and the moment Mx is larger than M1, and corresponds to a state in which the workpiece 100 is buckled in the + Y direction as illustrated.
- Mode MD3 corresponds to a state in which the force change amount ⁇ Fz is equal to or less than ⁇ F1 and the moment Mx is less than M2, and the workpiece is buckled in the ⁇ Y direction as illustrated.
- Mode MD1 to mode MD3 include a case where the force change amount ⁇ Fz is negative.
- the mode MD5 is a state in which the force change amount ⁇ Fz is larger than ⁇ F1 and the moment Mx is not less than M2 and not more than M1. This state corresponds to a normal state when the workpiece 100 is normally press-fitted as illustrated.
- the mode MD4 is a state where the force change amount ⁇ Fz is larger than ⁇ F1 and the moment Mx is larger than M1, and corresponds to a bending state in which the workpiece is bent in the + Y direction as shown in the figure.
- the mode MD6 is a state in which the force change amount ⁇ Fz is larger than ⁇ F1 and the moment Mx is less than M2, and corresponds to a bending state in which the workpiece is bent in the ⁇ Y direction as shown in the figure.
- the current mid-assembly state of the workpiece 100, that is, which of modes MD1 to MD6 the workpiece 100 is in, is specified by the learning control unit 23 based on the force Fz and moment Mx detected by the force detector 15, or more precisely, based on the force change amount ΔFz and the moment Mx.
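- The six-way classification described above reduces to simple threshold checks, which can be sketched as follows. The numeric threshold values below are placeholders chosen for illustration; the document fixes only their signs (ΔF1 > 0, M1 > 0, M2 < 0).

```python
# Sketch of the six-mode classification of FIG. 5 from (dFz, Mx).
# Threshold values are illustrative placeholders, not from the document.
DF1 = 0.5   # positive predetermined value for the force change dFz
M1 = 0.2    # positive predetermined value for the moment Mx
M2 = -0.2   # negative predetermined value for the moment Mx

def classify_mode(dFz, Mx):
    """Return the mid-assembly mode for a force change dFz and moment Mx."""
    if dFz <= DF1:                 # little or no press-in resistance
        if Mx > M1:
            return "MD1"           # buckled in the +Y direction
        if Mx < M2:
            return "MD3"           # buckled in the -Y direction
        return "MD2"               # non-contact
    if Mx > M1:
        return "MD4"               # bending in the +Y direction
    if Mx < M2:
        return "MD6"               # bending in the -Y direction
    return "MD5"                   # normal press-fitting
```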
- the reward r of the above formula (I) is set using a reward table stored in advance, that is, a reward table defined by the relationship between the state at the current step STt and the state at the immediately preceding step STt-1.
- FIG. 6 is a diagram illustrating an example of a reward table.
- when the state at the current step STt is the normal state MD5, the reward r (specifically, the rewards r15, r25, r35, r45, r55, r65) is set to a positive value regardless of the state at the immediately preceding step STt−1.
- when the state at the immediately preceding step STt−1 and the state at the current step STt are the same (other than the normal state MD5), the reward r (specifically, the rewards r11, r22, r33, r44, r66) is set to a negative predetermined value (for example, −3); that is, a negative reward r is given on the assumption that the state is no longer improving. In the other cases (state changes other than into the normal state MD5), the reward r is set to 0. The values of the reward r described above can be changed as appropriate based on the results of actual press-fitting work.
- the learning control unit 23 sets the reward r of the above formula (I) at each step according to the reward table of FIG. 6, and calculates the Q value.
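- The reward table of FIG. 6 thus reduces to three cases, which can be sketched as follows. The positive value for reaching the normal state is an assumption chosen for illustration; the document states only the −3 value explicitly.

```python
# Sketch of the reward rule behind the reward table of FIG. 6.
# r_good is an assumed positive value; r_stuck = -3 follows the text.
def reward(prev_mode, cur_mode, r_good=3, r_stuck=-3):
    if cur_mode == "MD5":        # current state is the normal state: positive reward
        return r_good
    if cur_mode == prev_mode:    # state did not improve: negative reward
        return r_stuck
    return 0                     # any other state change: zero reward
```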
- FIG. 7 is a view showing a part of the lattice of FIG. 4. As shown in FIG. 7, the intersections (dots) of the lattice correspond to the moving points of the hand tip. That is, the hand tip (reference point P0) moves from dot to dot in steps ST1 to ST20, and the dot interval corresponds to the pitch at which the hand 12 is moved.
- for example, when the hand tip is at the point P1 at the current step STt, at the next step STt+1 it moves to one of the point P2 along the reference movement path PA, the point P3 shifted by one pitch in the +Y direction from the reference movement path PA, and the point P4 shifted by one pitch in the −Y direction. If the hand tip is at the point P4 at the current step STt, at the next step STt+1 it moves to one of the points P5, P6, and P7.
- the directions in which the hand 12 can move (the angles indicating the movement direction) and the movement amount are stored in advance in the memory. For example, 0° and ±45° with respect to the axis CL1 are set as the angles indicating the movement direction, and a length corresponding to the interval between adjacent dots is set as the movement amount.
- the learning control unit 23 operates the robot 1 so that a high reward r is obtained according to the determined condition.
- the robot 1 can not only move the hand 12 but also rotate the hand 12 around the X axis. Therefore, the controller 2 is also set with the amount of rotation about the X axis with respect to the moving direction of the hand 12.
- FIG. 8 is a diagram showing actions that the robot 1 can take during the work assembly work. As shown in FIG. 8, the robot 1 can take nine actions a1 to a9 in steps ST1 to ST20, respectively.
- the action a1 corresponds to the movement from the point P1 to the point P2 and the movement from the point P4 to the point P5 in FIG.
- the action a2 corresponds to the movement from the point P1 to the point P4 and the movement from the point P4 to the point P7 in FIG.
- the action a3 corresponds to the movement from the point P1 to the point P3 and the movement from the point P4 to the point P6 in FIG.
- Actions a4 to a6 are actions that rotate clockwise around the X axis in addition to the movements of actions a1 to a3.
- Actions a7 to a9 are actions that rotate counterclockwise around the X axis in addition to the movements of actions a1 to a3.
- the work as reinforcement learning can be performed by applying nine actions a1 to a9 to each of the six assembly states (modes MD1 to MD6) of the workpiece 100.
- the reinforcement learning process takes a lot of time. Therefore, in order to shorten the time taken for the reinforcement learning process, it is preferable to narrow down actions in reinforcement learning.
- the actions are narrowed down by, for example, having a worker who is familiar with the assembly work assemble the workpiece manually in advance and grasping the action pattern at that time. That is, in steps ST1 to ST20 from the start to the completion of the assembly of the workpiece 100, any action that is clearly never selected by the worker is excluded.
- for example, in steps ST1 to ST9 and steps ST13 to ST20 in FIG. 4, the worker selects only the actions a1 to a3 and does not select the actions a4 to a9.
- in steps ST10 to ST12, the worker selects only the actions a4 to a6 and does not select the actions a1 to a3 or a7 to a9.
- in this case, the assembly work as reinforcement learning is limited so that only the actions a1 to a3 are applied in steps ST1 to ST9 and steps ST13 to ST20, and only the actions a4 to a6 are applied in steps ST10 to ST12.
- Applicable actions in each of steps ST1 to ST20 are set in advance via the input unit 16.
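- Such per-step narrowing of the applicable actions can be sketched as a lookup table. The step grouping follows FIG. 4; the data structure itself is an illustrative assumption, not part of the document.

```python
# Sketch of the per-step applicable actions set via the input unit 16.
ALLOWED = {}
for st in list(range(1, 10)) + list(range(13, 21)):
    ALLOWED[st] = ("a1", "a2", "a3")   # oblique insertion and push-in steps
for st in range(10, 13):
    ALLOWED[st] = ("a4", "a5", "a6")   # rotation steps ST10 to ST12
```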
- the learning control unit 23 operates the robot 1 by selecting an arbitrary action that can obtain a positive reward from these applicable actions, and uses the above formula (I) each time an action is selected. Q value is calculated. The work assembling work as reinforcement learning is repeated until the Q value converges in each of steps ST1 to ST20.
- FIG. 9 is a diagram showing the relationship between the number of operations of the hand 12 (number of trials N) and the Q value at a certain step STt.
- initially the Q value is 0, and the Q value converges to a constant value as the number of trials N increases.
- a Q table is constructed using such a converged Q value.
- FIG. 10A and FIG. 10B are diagrams showing an example of the Q table obtained in the reinforcement learning process.
- the Q value is set for each of steps ST1 to ST20 according to the state and the action.
- in steps ST1 to ST9 and ST13 to ST20, as shown in FIG. 10A, Q tables QT1 to QT9 and QT13 to QT20 corresponding to the states (modes) MD1 to MD6 and the actions a1 to a3 are constructed.
- in steps ST10 to ST12, as shown in FIG. 10B, Q tables QT10 to QT12 corresponding to the states MD1 to MD6 and the actions a4 to a6 are constructed.
- the constructed Q tables QT1 to QT20 are stored in the storage unit 21 of FIG.
- FIG. 11 is a diagram showing a specific example of the Q table.
- This Q table is, for example, the Q table QT1 in step ST1.
- before learning, the Q values are all zero.
- in normal assembly work, the normal control unit 24 in FIG. 1 selects, from the Q table stored in the storage unit 21, the action having the highest Q value for the detected state; for example, the action a2 is selected in the state MD1, and the action a1 in the state MD2. The servo motor 13 is then controlled so that the robot 1 executes the selected action.
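- This selection is a simple argmax over the Q table entry for the detected mode, which can be sketched as follows; the Q values shown are made up for illustration only.

```python
# Sketch of greedy action selection from a per-step Q table.
def select_action(q_table_step, mode):
    """Return the action with the highest Q value for the detected mode."""
    actions = q_table_step[mode]
    return max(actions, key=actions.get)

# Hypothetical Q table for one step (values are illustrative only).
qt_step = {
    "MD1": {"a1": 0.1, "a2": 0.8, "a3": 0.0},
    "MD2": {"a1": 0.9, "a2": 0.2, "a3": 0.1},
}
select_action(qt_step, "MD1")  # selects "a2"
```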
- FIG. 12 is a flowchart showing an example of processing executed by the normal control unit 24.
- the process shown in this flowchart is started when the start of normal work assembling work is instructed by the operation of the input unit 16 after the Q table is stored in the reinforcement learning process.
- the process of FIG. 12 is executed in each of steps ST1 to ST20.
- first, the current mid-assembly state of the workpiece 100 is detected based on the signal from the force detector 15; that is, which of modes MD1 to MD6 the workpiece 100 is in is determined.
- the Q table QT corresponding to the current step STt is read from the storage unit 21, and the action with the highest Q value is selected with respect to the detected assembling state.
- a control signal is output to the servo motor 13 so that the robot 1 takes the selected action.
- actions can be narrowed down so that actions a1 to a3 are taken in steps ST1 to ST9 and ST13 to ST20, and actions a4 to a6 are taken in steps ST10 to ST12.
- the reference movement path PA determined in the preliminary work process and the actions that the robot 1 can take are set in the controller 2 via the input unit 16.
- the reinforcement learning process is executed.
- the learning control unit 23 outputs a control signal to the servo motor 13 to actually operate the robot 1 and repeatedly perform the work of assembling the workpiece 100.
- the learning control unit 23 selects one action from a plurality of actions set in advance for each of steps ST1 to ST20, and controls the servo motor 13 so that the robot 1 executes the action.
- the state change is grasped by a signal from the force detector 15, and a reward r based on the state change is determined with reference to a predetermined reward table (FIG. 6). Then, using this reward r, the Q value corresponding to the state and action in each step ST1 to ST20 is calculated by the above equation (I).
- at the beginning of learning, the learning control unit 23 randomly selects an action in each of steps ST1 to ST20.
- as learning progresses, the learning control unit 23 preferentially selects actions that provide a high reward r, and the Q value of a specific action gradually increases for each state in each of steps ST1 to ST20.
- the Q value of the action that corrects bending and buckling increases.
- a Q table QT is constructed with the Q value at that time and stored in the storage unit 21.
- the assembly work of the workpiece 100 is performed by the process in the normal control unit 24 as the assembly work process.
- the normal control unit 24 detects a state during the assembly of the workpiece 100 at the current step STt based on a signal from the force detector 15 (S11).
- the current step of ST1 to ST20 can be specified by a signal from the encoder 14, for example.
- the normal control unit 24 selects, from the plurality of actions associated with the detected assembly state in the Q table, the action with the highest Q value as the optimum action (S12), and controls the servo motor 13 so that the robot 1 takes that action (S13).
- the robot 1 can thus take the optimum action according to changes in the state, and the workpiece 100 can be press-fitted into the component 101 regardless of individual differences among workpieces 100. Even when the workpiece 100 is formed of a flexible tube, it can be press-fitted while its bending and buckling are easily and appropriately corrected.
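The detect-select-command cycle (S11 to S13) can be sketched as follows. `detect_state`, `command_motor`, and the table layout are hypothetical stand-ins for illustration, not the patent's API.

```python
# Hypothetical sketch of the assembly work process: detect the current
# assembly state, look up the highest-Q action, and command the actuator.

def assembly_step(q_table, detect_state, command_motor, step):
    state = detect_state(step)            # S11: state from force signals
    actions = q_table[step][state]
    best = max(actions, key=actions.get)  # S12: action with highest Q value
    command_motor(best)                   # S13: drive the servo motor
    return best

# Minimal stand-ins to exercise one cycle of the loop.
q_table = {"ST1": {"MD1": {"a1": 0.2, "a2": 0.9, "a3": 0.1}}}
issued = []
chosen = assembly_step(q_table,
                       detect_state=lambda step: "MD1",
                       command_motor=issued.append,
                       step="ST1")
print(chosen, issued)  # a2 ['a2']
```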
- the robot control apparatus controls the robot 1 so that the workpiece 100 supported by the hand 12 of the robot 1 driven by the servo motor 13 is assembled to the component 101.
- the control device includes: a storage unit 21 that stores the relationship (Q table), obtained in advance by reinforcement learning, between a plurality of intermediate assembly states (MD1 to MD6) of the workpiece and the optimum action (a1 to a6) of the robot 1 that gives the highest reward for each intermediate assembly state; a force detector 15 that detects the intermediate assembly state of the workpiece 100; and a normal control unit 24 that, based on the Q table stored in the storage unit 21, specifies the optimum action of the robot 1 corresponding to the intermediate assembly state detected by the force detector 15 and controls the servo motor 13 in accordance with that optimum action (FIG. 1).
- by controlling the servo motor 13 with reference to the Q table acquired by reinforcement learning in this way, even if the workpiece 100 has individual differences such as a bending tendency, or the center axis CL2 of the workpiece 100 is misaligned with the axis CL3 of the component 101, the workpiece 100 can be press-fitted into the component 101 easily and quickly while the misalignment is corrected, without the workpiece 100 bending or buckling. Furthermore, no separate reaction-force receiving portion or the like needs to be provided on the hand 12, so the configuration of the hand 12 can be simplified and enlargement of the hand 12 can be avoided.
- the optimal action of the robot 1 is defined by a combination of an angle indicating the moving direction of the hand 12, a moving amount of the hand 12 along the moving direction, and a rotating amount of the hand 12 with respect to the moving direction (FIG. 8).
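An action defined by such a triple of movement direction, movement amount, and rotation amount could be represented as follows. The discretized values are illustrative assumptions only; the patent does not fix them.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class HandAction:
    angle_deg: float    # angle indicating the moving direction of the hand
    move_mm: float      # movement amount along that direction
    rotate_deg: float   # rotation amount about the moving direction

# Illustrative discretization of each component of the action.
angles = (0.0, 15.0, -15.0)
moves = (1.0, 2.0)
rotations = (0.0, 5.0)
action_set = [HandAction(a, m, r) for a, m, r in product(angles, moves, rotations)]
print(len(action_set))  # 3 * 2 * 2 = 12 candidate actions
```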
- the state detection part includes a force detector 15 that detects the translational forces Fx, Fy, Fz and the moments Mx, My, Mz acting on the hand 12, and the assembly state of the workpiece 100 is specified based on the translational force Fy and the moment Mx detected by the force detector 15 (FIG. 5). The bending or buckling state of the workpiece 100 caused by misalignment can thereby be detected with a simple configuration, and the apparatus can be built at lower cost than when a camera or the like is used.
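A minimal sketch of classifying the assembly state from Fy and Mx by thresholding follows. The thresholds and the mapping to modes MD1 to MD6 are assumptions for illustration, not taken from the patent.

```python
# Hedged sketch: classify the workpiece state from the translational force Fy
# and moment Mx. Thresholds and the mode assignment are illustrative only.

def classify_state(fy, mx, f_th=1.0, m_th=0.5):
    if abs(fy) < f_th and abs(mx) < m_th:
        return "MD1"          # e.g. straight insertion, no misalignment
    if fy >= f_th:
        return "MD2" if mx >= m_th else "MD3"
    if fy <= -f_th:
        return "MD4" if mx <= -m_th else "MD5"
    return "MD6"              # moment-dominated case

print(classify_state(0.2, 0.1), classify_state(2.0, 1.0))  # MD1 MD2
```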
- the storage unit 21 stores the relationship, i.e. the Q table, between a plurality of intermediate assembly states from the start of assembly of the workpiece 100 to the completion of assembly and the optimum action corresponding to each of those intermediate assembly states (FIG. 10A, FIG. 10B).
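One possible in-memory layout for such a step-by-state-by-action Q table, assuming the dimensions described in the embodiment (20 steps, 6 modes, 6 actions); the data structure itself is not specified in the patent.

```python
# Assumed layout (not from the patent): Q table keyed by step, then assembly
# state, then action, with every Q value initialized to zero before learning.
def make_q_table(steps, states, actions):
    return {st: {md: {a: 0.0 for a in actions} for md in states} for st in steps}

steps = [f"ST{i}" for i in range(1, 21)]    # ST1..ST20
states = [f"MD{i}" for i in range(1, 7)]    # MD1..MD6
actions = [f"a{i}" for i in range(1, 7)]    # a1..a6
qt = make_q_table(steps, states, actions)
print(len(qt), len(qt["ST1"]), len(qt["ST1"]["MD1"]))  # 20 6 6
```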
- the robot control method controls the robot 1 so that the workpiece 100 supported by the hand 12 of the robot 1 driven by the servo motor 13 is assembled to the component 101 (FIG. 1).
- the control method includes: a reinforcement learning process in which the operation of assembling the workpiece 100 to the component 101 by driving the hand 12 is performed a plurality of times to acquire the relationship (Q table) between a plurality of intermediate assembly states of the workpiece 100 and the optimum action of the robot 1 that gives the highest reward for each intermediate assembly state; and an assembly work process in which, when the workpiece 100 is assembled to the component 101, the intermediate assembly state of the workpiece 100 is detected, the optimum action corresponding to the detected state is specified based on the Q table acquired in the reinforcement learning process, and the servo motor 13 is controlled in accordance with the specified optimum action.
- the robot control method further includes a preliminary work process in which a worker assembles the workpiece 100 to the component 101 before the reinforcement learning process is performed. The behavior of the robot 1 in the reinforcement learning process is determined based on the worker's behavior pattern grasped in the preliminary work process, so the robot 1 can reproduce the behavior of a skilled worker. For example, the actions of the robot 1 can be narrowed down so that actions a1 to a3 are taken in steps ST1 to ST9 and ST13 to ST20, and actions a4 to a6 in steps ST10 to ST12. The time required for the reinforcement learning process can therefore be shortened, and efficient control of the robot 1 can be realized.
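This narrowing of candidate actions per step could be expressed as an allowed-action map, sketched below under the step/action split given in the example above; the function name and structure are illustrative, not the patent's.

```python
# Sketch: restrict the candidate actions per step based on the observed
# worker behavior pattern (ST1-ST9 and ST13-ST20 -> a1-a3, ST10-ST12 -> a4-a6).

def allowed_actions(step_index):
    """Return candidate actions for step ST<step_index> (1-based)."""
    if 10 <= step_index <= 12:
        return ["a4", "a5", "a6"]
    return ["a1", "a2", "a3"]

mask = {f"ST{i}": allowed_actions(i) for i in range(1, 21)}
print(mask["ST5"], mask["ST11"])  # ['a1', 'a2', 'a3'] ['a4', 'a5', 'a6']
```

Restricting each step's action set this way shrinks the state-action space the learning process must explore, which is what shortens learning time.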
- the controller 2 constituting the robot control apparatus has the learning control unit 23 and the normal control unit 24, and performs the workpiece assembly work as reinforcement learning through processing in the learning control unit 23.
- the relationship between the assembly state of the workpiece 100 and the optimal behavior of the robot 1 is acquired using Q learning.
- the reinforcement learning is not limited to Q learning, and other methods may be used.
- the state detection part is not limited to this configuration.
- for example, a pair of vibration sensors may be mounted on the peripheral surface of the base end of the workpiece 100 or on the tip of the hand, and the moment may be detected based on the time difference between the vibrations detected by the pair of vibration sensors.
- the optimal behavior of the robot 1 corresponding to the assembly state of the workpiece 100 detected by the force detector 15 is specified, and the servo motor 13 is controlled according to the optimal behavior.
- the configuration of the normal control unit 24 as the actuator control unit is not limited to this.
- the robot 1 may be provided with actuators other than the servo motor 13 (for example, cylinders), and the actuator control unit may control those actuators so that the robot 1 takes the optimum action.
- the assembly state of the workpiece 100 is classified into six modes MD1 to MD6; however, this classification depends on the material and shape of the workpiece 100, and other classifications may be used.
- the vertical articulated robot 1 is used, but the configuration of the robot is not limited to this.
- a flexible tube is used as the workpiece 100, but the shape and material of the workpiece are not limited to this; for example, the workpiece 100 may be made of metal.
- the tube-shaped workpiece 100 (first component) is press-fitted into the pipe-shaped component 101 (second component) as the workpiece assembly operation, but the configurations of the first component and the second component are not limited to these. The assembly work performed by the robot is likewise not limited to press-fitting, and the robot control device and control method of the present invention can be applied to various other kinds of work in the same manner.
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Robotics (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Orthopedic Medicine & Surgery (AREA)
- General Engineering & Computer Science (AREA)
- Manufacturing & Machinery (AREA)
- Quality & Reliability (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Manipulator (AREA)
- Automatic Assembly (AREA)
Description
Q(st, at) ← Q(st, at) + α[rt+1 + γ max Q(st+1, at+1) − Q(st, at)]  … (I)
(1) Preliminary work process
First, before the reinforcement learning process, a skilled worker is made to manually assemble the workpiece 100 to the component 101 as a preliminary work process. At this time, the worker's behavior pattern is analyzed while the state of the workpiece 100 is changed through modes MD1 to MD6. This makes it possible to determine the reference movement path PA (FIG. 4) used when the robot 1 performs the assembly of the workpiece 100, and the actions the robot 1 can take in each of steps ST1 to ST20. That is, the actions can be narrowed down so that actions a1 to a3 are taken in steps ST1 to ST9 and ST13 to ST20, and actions a4 to a6 in steps ST10 to ST12. The reference movement path PA and the actions the robot 1 can take, determined in the preliminary work process, are set in the controller 2 via the input unit 16.
When the preliminary work process is completed, the reinforcement learning process is executed. In the reinforcement learning process, the learning control unit 23 outputs a control signal to the servo motor 13 to actually operate the robot 1 and repeatedly perform the assembly of the workpiece 100. At this time, the learning control unit 23 selects one action from the plurality of actions set in advance for each of steps ST1 to ST20, and controls the servo motor 13 so that the robot 1 executes that action. Furthermore, the change in state is identified from the signal of the force detector 15, and a reward r based on the state change is determined by referring to a predetermined reward table (FIG. 6). Then, using this reward r, the Q value corresponding to the state and action in each of steps ST1 to ST20 is calculated by equation (I) above.
When the reinforcement learning process is completed, the workpiece 100 is assembled through processing in the normal control unit 24 as the assembly work process. In this case, the normal control unit 24 detects the intermediate assembly state of the workpiece 100 at the current step STt from the signal of the force detector 15 (S11). The current step among ST1 to ST20 can be specified, for example, from the signal of the encoder 14. The normal control unit 24 then selects, from the plurality of actions associated with the detected intermediate assembly state in the Q table, the action with the highest Q value as the optimum action (S12), and controls the servo motor 13 so that the robot 1 takes the optimum action (S13).
(1) The robot control device according to the embodiment of the present invention controls the robot 1 so that the workpiece 100 supported by the hand 12 of the robot 1 driven by the servo motor 13 is assembled to the component 101. This control device includes: a storage unit 21 that stores the relationship (Q table), obtained in advance by reinforcement learning, between a plurality of intermediate assembly states (MD1 to MD6) of the workpiece and the optimum action (a1 to a6) of the robot 1 that gives the highest reward for each intermediate assembly state; a force detector 15 that detects the intermediate assembly state of the workpiece 100; and a normal control unit 24 that, based on the Q table stored in the storage unit 21, specifies the optimum action of the robot 1 corresponding to the intermediate assembly state detected by the force detector 15 and controls the servo motor 13 in accordance with that optimum action (FIG. 1).
The above embodiment can be modified in various ways; modifications are described below. In the above embodiment, the controller 2 constituting the robot control device has the learning control unit 23 and the normal control unit 24, and the workpiece assembly work as reinforcement learning is performed through processing in the learning control unit 23; however, the processing of the learning control unit 23 may be performed by a separate control device. That is, a Q table indicating the relationship between the intermediate assembly states of the workpiece 100 and the optimum actions of the robot 1 may be acquired from a separate control device and stored in the storage unit 21 of the robot control device. For example, at factory shipment, the same Q table may be stored in the storage unit 21 of each mass-produced robot control device. The learning control unit 23 can therefore be omitted from the controller 2 (FIG. 1).
Claims (6)
- A robot control device that controls a robot so that a first component supported by a hand of the robot driven by an actuator is assembled to a second component, the robot control device comprising:
a storage unit that stores a relationship, obtained in advance by reinforcement learning, between a plurality of intermediate assembly states of the first component and an optimum action of the robot that gives a highest reward for each of the intermediate assembly states;
a state detection unit that detects the intermediate assembly state of the first component; and
an actuator control unit that, based on the relationship stored in the storage unit, specifies the optimum action of the robot corresponding to the intermediate assembly state detected by the state detection unit, and controls the actuator in accordance with the optimum action. - The robot control device according to claim 1, wherein
the optimum action is defined by a combination of an angle indicating a moving direction of the hand, a moving amount of the hand along the moving direction, and a rotating amount of the hand with respect to the moving direction. - The robot control device according to claim 1 or 2, wherein
the state detection unit includes a detector that detects a translational force and a moment acting on the hand, and specifies the intermediate assembly state of the first component based on the translational force and the moment detected by the detector. - The robot control device according to any one of claims 1 to 3, wherein
the storage unit stores a relationship between a plurality of intermediate assembly states from a start to a completion of assembly of the first component and the optimum action corresponding to each of the intermediate assembly states. - A robot control method for controlling a robot so that a first component supported by a hand of the robot driven by an actuator is assembled to a second component, the method comprising:
a reinforcement learning process of performing, a plurality of times, an operation of assembling the first component to the second component by driving the hand, and acquiring a relationship between a plurality of intermediate assembly states of the first component and an optimum action of the robot that gives a highest reward for each of the intermediate assembly states; and
an assembly work process of, when assembling the first component to the second component, detecting the intermediate assembly state of the first component, specifying the optimum action corresponding to the detected intermediate assembly state based on the relationship acquired in the reinforcement learning process, and controlling the actuator in accordance with the specified optimum action. - The robot control method according to claim 5, further comprising
a preliminary work process in which a worker assembles the first component to the second component before the reinforcement learning process is performed, wherein
in the reinforcement learning process, the behavior of the robot in the reinforcement learning process is determined based on a behavior pattern of the worker grasped in the preliminary work process.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018536921A JP6651636B2 (ja) | 2016-08-30 | 2017-03-17 | ロボットの制御装置およびロボットの制御方法 |
US16/328,063 US20190184564A1 (en) | 2016-08-30 | 2017-03-17 | Robot control apparatus and robot control method |
CN201780052332.6A CN109641354B (zh) | 2016-08-30 | 2017-03-17 | 机器人的控制装置和机器人的控制方法 |
CA3035492A CA3035492C (en) | 2016-08-30 | 2017-03-17 | Robot control apparatus and robot control method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016-168350 | 2016-08-30 | ||
JP2016168350 | 2016-08-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018042730A1 true WO2018042730A1 (ja) | 2018-03-08 |
Family
ID=61301492
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/010887 WO2018042730A1 (ja) | 2016-08-30 | 2017-03-17 | ロボットの制御装置およびロボットの制御方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20190184564A1 (ja) |
JP (1) | JP6651636B2 (ja) |
CN (1) | CN109641354B (ja) |
CA (1) | CA3035492C (ja) |
WO (1) | WO2018042730A1 (ja) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019240047A1 (ja) * | 2018-06-11 | 2019-12-19 | Necソリューションイノベータ株式会社 | 行動学習装置、行動学習方法、行動学習システム、プログラム、及び記録媒体 |
WO2019238311A1 (de) * | 2018-06-16 | 2019-12-19 | Psa Automobiles Sa | Roboteranordnung und verfahren zur durchführung einer montageoperation an einem werkstück |
JP2020034994A (ja) * | 2018-08-27 | 2020-03-05 | 株式会社デンソー | 強化学習装置 |
KR20200072592A (ko) * | 2018-12-03 | 2020-06-23 | 한국생산기술연구원 | 로봇용 학습 프레임워크 설정방법 및 이를 수행하는 디지털 제어 장치 |
CN111438687A (zh) * | 2019-01-16 | 2020-07-24 | 发那科株式会社 | 判定装置 |
WO2021070404A1 (ja) * | 2019-10-09 | 2021-04-15 | 三菱電機株式会社 | 組み立て装置 |
JPWO2021111701A1 (ja) * | 2019-12-05 | 2021-06-10 | ||
JPWO2021144886A1 (ja) * | 2020-01-15 | 2021-07-22 |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6879009B2 (ja) * | 2017-03-30 | 2021-06-02 | 株式会社安川電機 | ロボット動作指令生成方法、ロボット動作指令生成装置及びコンピュータプログラム |
JP6603257B2 (ja) * | 2017-03-31 | 2019-11-06 | ファナック株式会社 | 行動情報学習装置、管理装置、ロボット制御システム及び行動情報学習方法 |
US10967510B2 (en) * | 2017-11-16 | 2021-04-06 | Industrial Technology Research Institute | Robot arm processing system and method thereof |
FR3075409B1 (fr) * | 2017-12-15 | 2020-01-03 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Dispositif electronique de traitement de signaux a optimisation integree de consommation d'energie electrique et procede correspondant |
EP3743250A1 (en) * | 2018-02-27 | 2020-12-02 | Siemens Aktiengesellschaft | Reinforcement learning for contact-rich tasks in automation systems |
US20200320035A1 (en) * | 2019-04-02 | 2020-10-08 | Micro Focus Software Inc. | Temporal difference learning, reinforcement learning approach to determine optimal number of threads to use for file copying |
US11426874B2 (en) * | 2019-04-30 | 2022-08-30 | Flexiv Ltd. | Robot-based insertion mounting of workpieces |
JP2020192614A (ja) * | 2019-05-24 | 2020-12-03 | 京セラドキュメントソリューションズ株式会社 | ロボット装置及び把持方法 |
JP7186349B2 (ja) * | 2019-06-27 | 2022-12-09 | パナソニックIpマネジメント株式会社 | エンドエフェクタの制御システムおよびエンドエフェクタの制御方法 |
US20220066697A1 (en) * | 2020-09-01 | 2022-03-03 | Western Digital Technologies, Inc. | Memory Device With Reinforcement Learning With Q-Learning Acceleration |
US11833666B2 (en) * | 2020-10-28 | 2023-12-05 | Shanghai Flexiv Robotics Technology Co., Ltd. | Method for assembling an operating member and an adapting member by a robot, robot, and controller |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000176868A (ja) * | 1998-12-16 | 2000-06-27 | Toyoda Mach Works Ltd | ロボット制御装置 |
JP2011248728A (ja) * | 2010-05-28 | 2011-12-08 | Honda Motor Co Ltd | 学習制御システム及び学習制御方法 |
JP2013158850A (ja) * | 2012-02-01 | 2013-08-19 | Seiko Epson Corp | ロボット装置、組立て方法、及び組立てプログラム |
JP2015033747A (ja) * | 2013-08-09 | 2015-02-19 | 株式会社安川電機 | ロボットシステム、ロボット制御装置及びロボット制御方法 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1048403A3 (en) * | 1996-06-15 | 2001-12-12 | Unova U.K. Limited | Improvements in and relating to grinding machines |
JP5330138B2 (ja) * | 2008-11-04 | 2013-10-30 | 本田技研工業株式会社 | 強化学習システム |
US8428780B2 (en) * | 2010-03-01 | 2013-04-23 | Honda Motor Co., Ltd. | External force target generating device of legged mobile robot |
JP4980453B2 (ja) * | 2010-09-06 | 2012-07-18 | ファナック株式会社 | 加工を高精度化するサーボ制御システム |
-
2017
- 2017-03-17 WO PCT/JP2017/010887 patent/WO2018042730A1/ja active Application Filing
- 2017-03-17 CN CN201780052332.6A patent/CN109641354B/zh active Active
- 2017-03-17 CA CA3035492A patent/CA3035492C/en active Active
- 2017-03-17 JP JP2018536921A patent/JP6651636B2/ja active Active
- 2017-03-17 US US16/328,063 patent/US20190184564A1/en not_active Abandoned
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000176868A (ja) * | 1998-12-16 | 2000-06-27 | Toyoda Mach Works Ltd | ロボット制御装置 |
JP2011248728A (ja) * | 2010-05-28 | 2011-12-08 | Honda Motor Co Ltd | 学習制御システム及び学習制御方法 |
JP2013158850A (ja) * | 2012-02-01 | 2013-08-19 | Seiko Epson Corp | ロボット装置、組立て方法、及び組立てプログラム |
JP2015033747A (ja) * | 2013-08-09 | 2015-02-19 | 株式会社安川電機 | ロボットシステム、ロボット制御装置及びロボット制御方法 |
Non-Patent Citations (2)
Title |
---|
KIYOHARU TAGAWA ET AL.: "Approach to Artificial Skill from Affordance Theory : Memory and Embodiment", JOURNAL OF THE ROBOTICS SOCIETY OF JAPAN, vol. 22, no. 7, 22 July 2004 (2004-07-22), pages 892 - 900, XP055471453, DOI: doi:10.7210/jrsj.22.892 * |
RYO MOTOYAMA ET AL.: "Rikikaku Shingo no Jikokureki o Mochiita Robot ni yoru Pin Sonyu Sagyo -Recurrent Neural Network o Mochiita Jotai Sen'i no Yosoku", THE 29TH ANNUAL CONFERENCE OF THE ROBOTICS SOCIETY OF JAPAN YOKOSHU DVD -ROM , THE ROBOTICS SOCIETY OF JAPAN, 7 September 2011 (2011-09-07) * |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112262399A (zh) * | 2018-06-11 | 2021-01-22 | 日本电气方案创新株式会社 | 行动学习设备、行动学习方法、行动学习系统、程序以及记录介质 |
JPWO2019240047A1 (ja) * | 2018-06-11 | 2021-03-11 | Necソリューションイノベータ株式会社 | 行動学習装置 |
WO2019240047A1 (ja) * | 2018-06-11 | 2019-12-19 | Necソリューションイノベータ株式会社 | 行動学習装置、行動学習方法、行動学習システム、プログラム、及び記録媒体 |
CN112262399B (zh) * | 2018-06-11 | 2024-08-06 | 日本电气方案创新株式会社 | 行动学习设备、行动学习方法、行动学习系统、程序以及记录介质 |
WO2019238311A1 (de) * | 2018-06-16 | 2019-12-19 | Psa Automobiles Sa | Roboteranordnung und verfahren zur durchführung einer montageoperation an einem werkstück |
JP2020034994A (ja) * | 2018-08-27 | 2020-03-05 | 株式会社デンソー | 強化学習装置 |
KR20200072592A (ko) * | 2018-12-03 | 2020-06-23 | 한국생산기술연구원 | 로봇용 학습 프레임워크 설정방법 및 이를 수행하는 디지털 제어 장치 |
KR102213061B1 (ko) * | 2018-12-03 | 2021-02-09 | 한국생산기술연구원 | 로봇용 학습 프레임워크 설정방법 및 이를 수행하는 디지털 제어 장치 |
CN111438687A (zh) * | 2019-01-16 | 2020-07-24 | 发那科株式会社 | 判定装置 |
JP7209859B2 (ja) | 2019-10-09 | 2023-01-20 | 三菱電機株式会社 | 組み立て装置 |
WO2021070404A1 (ja) * | 2019-10-09 | 2021-04-15 | 三菱電機株式会社 | 組み立て装置 |
JPWO2021070404A1 (ja) * | 2019-10-09 | 2021-04-15 | ||
JPWO2021111701A1 (ja) * | 2019-12-05 | 2021-06-10 | ||
CN114746226A (zh) * | 2019-12-05 | 2022-07-12 | 三菱电机株式会社 | 连接器嵌合装置及连接器嵌合方法 |
JP7186900B2 (ja) | 2019-12-05 | 2022-12-09 | 三菱電機株式会社 | コネクタ嵌合装置およびコネクタ嵌合方法 |
CN114746226B (zh) * | 2019-12-05 | 2024-03-08 | 三菱电机株式会社 | 连接器嵌合装置及连接器嵌合方法 |
WO2021111701A1 (ja) * | 2019-12-05 | 2021-06-10 | 三菱電機株式会社 | コネクタ嵌合装置およびコネクタ嵌合方法 |
WO2021144886A1 (ja) * | 2020-01-15 | 2021-07-22 | オムロン株式会社 | 制御装置、学習装置、制御方法、及び制御プログラム |
JPWO2021144886A1 (ja) * | 2020-01-15 | 2021-07-22 | ||
JP7342974B2 (ja) | 2020-01-15 | 2023-09-12 | オムロン株式会社 | 制御装置、学習装置、制御方法、及び制御プログラム |
Also Published As
Publication number | Publication date |
---|---|
JP6651636B2 (ja) | 2020-02-19 |
CA3035492A1 (en) | 2018-03-08 |
CN109641354A (zh) | 2019-04-16 |
US20190184564A1 (en) | 2019-06-20 |
CN109641354B (zh) | 2022-08-05 |
CA3035492C (en) | 2021-03-23 |
JPWO2018042730A1 (ja) | 2019-07-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018042730A1 (ja) | ロボットの制御装置およびロボットの制御方法 | |
EP1845427B1 (en) | Control | |
US9718187B2 (en) | Robot controlling method, robot apparatus, program, recording medium, and method for manufacturing assembly component | |
JP6046218B1 (ja) | 物体と物体とを合わせ状態にするロボットのロボット制御装置 | |
CN108422420B (zh) | 具有学习控制功能的机器人系统以及学习控制方法 | |
US10350749B2 (en) | Robot control device having learning control function | |
US9815202B2 (en) | Control method for robot apparatus, computer readable recording medium, and robot apparatus | |
US9043023B2 (en) | Robot system, and control apparatus and method thereof | |
JP4513663B2 (ja) | 自動組立システムにおける組立機構の動作教示方法 | |
JP2015155126A (ja) | ロボットシステムのツール座標系補正方法、およびロボットシステム | |
EP3299129A1 (en) | Robot control device, robot, and robot system | |
US20190022859A1 (en) | Robot apparatus, control method for the robot apparatus, assembly method using the robot apparatus, and recording medium | |
US11951625B2 (en) | Control method for robot and robot system | |
US11141855B2 (en) | Robot system, method of controlling robot arm, recording medium, and method of manufacturing an article | |
JP7392161B2 (ja) | ロボットシステム及びロボット制御装置 | |
JP5218540B2 (ja) | 組立ロボットとその制御方法 | |
JP6862604B2 (ja) | 垂直多関節ロボットの慣性パラメータ同定システム及び慣性パラメータ同定方法並びに垂直多関節ロボットの制御装置及び制御方法 | |
US20210039256A1 (en) | Robot control method | |
JP2020044590A (ja) | ロボット装置 | |
US20170043481A1 (en) | Robot controller inhibiting shaking of tool tip in robot equipped with travel axis | |
JP7227018B2 (ja) | 学習制御装置、ロボット制御装置およびロボット | |
JP7423943B2 (ja) | 制御方法およびロボットシステム | |
JP2006315128A (ja) | ロボットハンドの持ち替え把持制御方法。 | |
JP2023003592A (ja) | 力制御パラメーターの調整方法および力制御パラメーター調整装置 | |
Turygin et al. | Investigation of kinematic error in transfer mechanisms of mechatronic system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17845741 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2018536921 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 3035492 Country of ref document: CA |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17845741 Country of ref document: EP Kind code of ref document: A1 |