WO2019225746A1 - Robot System and Additional Learning Method - Google Patents
Robot System and Additional Learning Method
- Publication number
- WO2019225746A1 (PCT/JP2019/020697)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- state
- robot
- work
- unit
- operation force
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
- B25J9/163—Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- B25J13/085—Force or torque sensors
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- B25J13/088—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices with position, velocity or acceleration sensors
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/021—Optical sensing devices
- B25J19/023—Optical sensing devices including video camera means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
- B25J9/1633—Programme controls characterised by the control loop compliant, force, torque control, e.g. combined with position control
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1692—Calibration of manipulator
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1687—Assembly, peg and hole, palletising, straight line, weaving pattern movement
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/42—Recording and playback systems, i.e. in which the programme is recorded from a cycle of operations, e.g. the cycle of operations being manually controlled, after which this record is played back on the same machine
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/36—Nc in input of data, input key till input tape
- G05B2219/36039—Learning task dynamics, process
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/36—Nc in input of data, input key till input tape
- G05B2219/36489—Position and force
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40032—Peg and hole insertion, mating and joining, remote center compliance
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40102—Tasks are classified in types of unit motions
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40116—Learn by operator observation, symbiosis, show, watch
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40153—Teleassistance, operator assists, controls autonomous robot
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40223—If insertion force to high, alarm, stop for operator assistance
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40391—Human to robot skill transfer
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40496—Hierarchical, learning, recognition level controls adaptation, servo level
Definitions
- Patent Document 1 discloses an assembly method for assembling a plurality of parts by controlling a robot arm. In this assembly method, the coordinates of two parts held by the robot arm are acquired, and when the coordinates of the two parts are determined to be appropriate, the parts are joined.
- the present invention has been made in view of the above circumstances. Its main purpose is to provide a robot system that learns so that, when the robot becomes unable to continue its work, it can continue the work the next time it enters the same kind of state.
- a robot system having the following configuration. That is, the robot system includes a robot, a state detection sensor, a time measuring unit, a learning control unit, a determination unit, an operation device, an input unit, a switching device, and an additional learning unit.
- the robot performs work based on an operation command.
- the state detection sensor detects and outputs a state value indicating the progress of work of the robot.
- the time measuring unit outputs a timer signal at a predetermined time interval.
- the learning control unit outputs a calculated operation force based on the state value detected by the state detection sensor and the timer signal, using a model constructed by machine learning of a work state and the next work state associated with that work state, together with at least one set of the state value and an operation force associated with the state value.
- the robot 10 includes an arm portion attached to a pedestal.
- the arm portion has a plurality of joints, and each joint is provided with an actuator.
- the robot 10 moves the arm portion by driving the actuators in accordance with an operation command input from the outside.
- this operation command includes a linear velocity command and an angular velocity command.
- an end effector corresponding to the work content is attached to the tip of the arm portion.
- the robot 10 performs work by operating the end effector in accordance with an operation command input from the outside.
- the robot 10 is provided with sensors for detecting the operation of the robot 10 and the surrounding environment.
- the motion sensor 11, the force sensor 12, and the camera 13 are attached to the robot 10.
- the motion sensor 11 is provided for each joint of the arm portion of the robot 10 and detects the rotation angle or angular velocity of each joint.
- the force sensor 12 detects the force received by the robot 10 during the operation of the robot 10.
- the force sensor 12 may be configured to detect a force applied to the end effector, or may be configured to detect a force applied to each joint of the arm unit.
- the force sensor 12 may be configured to detect a moment instead of or in addition to the force.
- the camera 13 captures an image of the workpiece that is the work target (i.e., the progress of the work on the workpiece).
- instead of or in addition to the camera 13, a sound sensor for detecting sound and/or a vibration sensor for detecting vibration can be provided, and the progress of the work on the workpiece can be detected based on the detection results of these sensors.
- the data detected by the motion sensor 11 is motion data indicating the motion of the robot 10
- the data detected by the force sensor 12 and the camera 13 is ambient environment data indicating the state of the environment around the robot 10.
- Data detected by the motion sensor 11, the force sensor 12, and the camera 13 is a state value indicating the progress of the work of the robot 10 (the work on the workpiece).
- the motion sensor 11, the force sensor 12, and the camera 13 provided in the robot 10 may be collectively referred to as “state detection sensors 11 to 13”.
- the data detected by the state detection sensors 11 to 13 may be particularly referred to as “sensor information”.
- the state detection sensors 11 to 13 may be provided around the robot 10 instead of being attached to the robot 10.
- the learning control unit 42 constructs a model by machine learning of the current work state, the next work state associated with it (that is, the work state to transition to next), and at least one set of state values and the operation force associated with each state value.
- the state value is a value indicating the progress of the work of the robot 10, and changes as the work progresses.
- the state value includes sensor information detected by the state detection sensors 11 to 13 (for example, work status such as position, speed, force, moment, and video).
- the state value may include information calculated based on the sensor information (for example, a value indicating a change with time in sensor information from the past to the present).
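The state value described above combines raw sensor information with features derived from it, such as its change over time. A minimal sketch of how such a vector might be assembled (all names and the choice of derived feature are illustrative assumptions, not from the patent):

```python
import numpy as np

def build_state_value(position, velocity, force, prev_positions):
    """Assemble a state-value vector from raw sensor information plus a
    derived term (change over time), as the description suggests.
    Argument names and the feature choice are illustrative only."""
    # derived feature: displacement from the oldest buffered position
    temporal_change = np.asarray(position, dtype=float) - np.asarray(prev_positions[0], dtype=float)
    return np.concatenate([position, velocity, force, temporal_change])

# example: 2-D position/velocity/force plus the derived change term
s = build_state_value([0.5, 0.2], [0.0, 0.1], [1.5, 0.0], [[0.4, 0.2]])
# s has 8 elements: position (2), velocity (2), force (2), temporal change (2)
```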
- FIG. 3 is a diagram illustrating an example of the data on which the learning control unit 42 performs machine learning.
- FIG. 4 is a diagram conceptually illustrating an example of correspondence between state values and work states in the model.
- in the example of FIG. 4, the current work state is work state 2 (contact) and the current state value is S210; from there, a transition is made to work state 3 (insertion), whose state value is referred to as S310.
- the learning control unit 42 machine-learns n seconds of operation data of the robot 10 (n is an integer of 1 or more).
- the learning control unit 42 performs machine learning on the data shown in FIG. 3 and constructs a model.
- this arrow corresponds to the n seconds of operation of the robot 10 that the learning control unit 42 has machine-learned, which changes the work state from work state 2 (contact) to work state 3 (insertion) shown in FIG. 4.
- the learning control unit 42 outputs a trigger signal to the time measuring unit 46 when it outputs the first operation force I210 shown in FIG. 3 to the switching device 30 as the calculated operation force.
- based on the trigger signal, the time measuring unit 46 outputs a timer signal every second, starting from the time the trigger signal is input.
- based on the timer signal from the time measuring unit 46, the learning control unit 42 outputs the operation forces I210 to I21(n-1) shown in FIG. 3 to the switching device 30 as the calculated operation force, switching to the next force every second.
- when the learning control unit 42 detects that the operation force shown in FIG. 3 is Inull, which indicates the dummy operation force, it stops outputting the calculated operation force.
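The timer-driven replay described above can be sketched as a loop that advances one learned force per tick and stops at the dummy terminator. The function and variable names below are illustrative stand-ins for the patent's learning control unit, switching device, and I210 … I21(n-1), Inull sequence:

```python
import itertools

I_NULL = None  # stand-in for the dummy operation force that ends a learned sequence

def play_operation_forces(forces, output, timer_ticks):
    """Replay a machine-learned sequence of operation forces, advancing one
    step per timer tick and stopping at the dummy terminator.
    `output` stands in for the path to the switching device."""
    for force, _tick in zip(forces, timer_ticks):
        if force is I_NULL:   # detected the dummy operation force
            break             # stop outputting the calculated operation force
        output.append(force)  # one calculated operation force per timer tick

out = []
play_operation_forces(["I210", "I211", "I212", I_NULL], out, itertools.count())
# out == ["I210", "I211", "I212"]
```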
- FIG. 5 is a flowchart showing processing performed by the robot system regarding additional learning.
- 6 and 7 are diagrams conceptually showing the contents of additional learning according to the determination result of the work state in the model.
- while controlling the robot 10 (that is, while only the calculated operation force is output to the switching device 30), the learning control unit 42 determines, based on the current state value, whether or not the current work state corresponds to work state 4 (completion) (S102, work state estimation process). When the current work state corresponds to work state 4 (completion), the learning control unit 42 determines that the work is completed and outputs to the switching device 30 a calculated operation force that moves the arm portion of the robot 10 to the next work start position (for example, the place where the next workpiece 100 is placed); the switching device 30 converts this calculated operation force into an operation command and outputs it to the robot 10 (S112).
- the similarity is calculated by comparing the current state value with the distribution of state values belonging to each work state in the model (that is, the machine-learned state values belonging to each work state).
- the learning control unit 42 calculates the similarity based on the distance between the coordinates indicating the current state values S5 and S6 and the center point of each of the work state 1 to work state 4 regions (or the shortest distance to each region), such that the smaller the distance, the higher the similarity.
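One plausible reading of this distance-based similarity is a score that shrinks with distance to each work state's region center. The 1/(1 + d) form below is an assumption for illustration; the patent only specifies that similarity increases as distance decreases:

```python
import math

def similarity_to_states(current, centers):
    """Distance-based similarity: for each work state's region center, return
    a score that grows as the distance shrinks. The 1/(1+d) form is an
    assumed concrete choice, not taken from the patent."""
    return {name: 1.0 / (1.0 + math.dist(current, c))
            for name, c in centers.items()}

centers = {"state1": (0.0, 0.0), "state2": (3.0, 4.0)}
sims = similarity_to_states((0.0, 0.0), centers)
# state1 is an exact match (similarity 1.0); state2 lies 5 away (similarity 1/6)
```

The operator could then be shown these scores, sorted, to help identify the current work state.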
- based on the timer signal received every second from the time measuring unit 46, the additional learning unit 43 acquires the state value and the operator operation force every second, increments an index by 1, and stores the index, the state value, and the operation force (that is, the operator operation force) in association with each other, repeating this until the operation of the robot 10 by the operator is completed.
- the additional learning unit 43 uses the acquired state values to determine that the operation of the robot 10 by the operator has been completed, and to identify the work state at the time of completion (that is, the work state after the state transition) (S110, state transition completion determination step).
- the additional learning unit 43 determines that the operation of the robot 10 has been completed when, based on the stored index, state value, and operation force, a certain time or more has passed without the state value changing (that is, the same state value has been stored a certain number of times in a row), or a certain time has passed since the operator operation force ceased to be output (that is, no operation force has occurred for a certain number of consecutive samples).
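The two completion conditions just described (state value unchanged for a run of samples, or operator force absent for a run of samples) can be sketched as follows; the log layout, window length, and tolerance are illustrative assumptions:

```python
def operation_completed(log, n_stable=5, tol=1e-6):
    """Decide that the operator's operation has ended when either the state
    value has stopped changing for n_stable consecutive samples, or no
    operator force has occurred for n_stable consecutive samples.
    `log` is a list of (index, state_value, operator_force) tuples;
    the layout and thresholds are illustrative, not from the patent."""
    if len(log) < n_stable:
        return False
    tail = log[-n_stable:]
    states = [s for _, s, _ in tail]
    forces = [f for _, _, f in tail]
    state_unchanged = all(abs(s - states[0]) <= tol for s in states)
    force_absent = all(abs(f) <= tol for f in forces)
    return state_unchanged or force_absent

frozen = [(i, 2.0, 0.0) for i in range(5)]   # state frozen, no force applied
moving = [(i, float(i), 3.0) for i in range(5)]  # state still changing under force
```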
- before and after the process of step S105 (for example, when the determination unit 44 outputs a determination result indicating that the work of the robot 10 cannot be continued), the control unit 40 enables the operator to operate the robot via the switching device 30.
- in this case, the control unit 40 outputs the operation command obtained by the switching device 30 converting the operator operation force.
- the progress degree acquisition unit 51 acquires the degree of progress.
- the degree of progress is a parameter used to evaluate where, in a series of operations, the operation performed by the robot 10 based on the output of the model constructed by the above-described machine learning (including additional learning) currently stands. In the present embodiment, the degree of progress takes a value from 0 to 100, and the closer it is to 100, the further the series of work has progressed.
- the learning control unit 42 calculates the degree of progress corresponding to the current situation of the robot 10 using the clustering result. As shown in FIG. 12, the value of the degree of progress is determined in advance so as to increase stepwise and cumulatively according to the order of the operations represented by each cluster. Since a series of operations of the robot 10 can be expressed as feature vectors arranged in time-series order, the time-series order of each cluster can be obtained from this information.
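The stepwise, cumulative assignment of progress values to clusters in time-series order can be sketched as a simple lookup table. Equal step sizes are an assumption for illustration; FIG. 12 specifies only that values increase stepwise and cumulatively:

```python
def progress_table(clusters_in_time_order):
    """Assign each cluster a cumulative, stepwise-increasing progress value
    in [0, 100] based on its position in the time-series order of operations.
    Equal steps are an illustrative assumption, not from the patent."""
    n = len(clusters_in_time_order)
    return {cluster: round(100 * (i + 1) / n)
            for i, cluster in enumerate(clusters_in_time_order)}

# five operation clusters in time-series order (labels are hypothetical)
table = progress_table(["A", "B", "C", "D", "E"])
# {"A": 20, "B": 40, "C": 60, "D": 80, "E": 100}
```

The current degree of progress would then be the table value of whichever cluster the current feature vector is classified into.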
- the probabilistic classifier for the cluster of operation A is machine-learned so that a value close to 100 is output when a feature vector classified into the cluster of operation A by the clustering is input, and a value close to 0 is output when a feature vector classified into another operation's cluster is input. Therefore, when a feature vector indicating the current situation of the robot 10 is input to the trained probabilistic classifier, the classifier outputs a value indicating how much the situation resembles operation A. This value can be said to substantially indicate the probability (estimated probability) that the current situation of the robot 10 is operation A.
- for each of the other operations' clusters, a probabilistic classifier is learned in the same manner as described above.
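The arrangement above amounts to one classifier per operation cluster, each scoring the current feature vector between 0 and 100. The patent does not specify the classifier's form, so the distance-based scoring below is only a toy stand-in for whatever learned model is actually used:

```python
import math

class OperationClassifier:
    """Toy stand-in for one per-cluster probabilistic classifier: outputs a
    value near 100 for feature vectors close to its cluster's centroid and
    near 0 for distant ones. The real classifier is machine-learned; this
    exponential distance rule is only an illustration."""
    def __init__(self, centroid):
        self.centroid = centroid

    def certainty(self, feature):
        # score in (0, 100]: 100 at the centroid, decaying with distance
        return 100.0 * math.exp(-math.dist(feature, self.centroid))

centroids = {"A": (0.0, 0.0), "B": (5.0, 5.0)}  # hypothetical cluster centers
classifiers = {name: OperationClassifier(c) for name, c in centroids.items()}

feature = (0.1, 0.0)  # current situation of the robot, near operation A
scores = {name: clf.certainty(feature) for name, clf in classifiers.items()}
best = max(scores, key=scores.get)  # the operation the situation most resembles
```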
- the user can evaluate whether the operation of the robot 10 is plausible by, for example, watching the certainty value during a series of operations. When the model performs a motion it has not learned, the certainty value decreases, so the user can tell that the series of operations includes an insufficiently learned motion.
- the control unit 40 may automatically detect an operation with a low certainty factor.
- conversely, when the model performs a motion it has learned, the certainty value increases. Therefore, the user can also confirm that the operation of the robot 10 in a given situation matches a known operation.
- the user can use the certainty value to confirm that the operation of the robot 10 has reached a known state (for example, any of the operations A to E).
- the degree of progress is also used in determining whether the work is completed in the work state estimation process (S102) of the first embodiment. Specifically, the learning control unit 42 determines whether the current work state corresponds to operation E and the degree of progress is at or above a threshold (for example, 100); if so, it determines that the work is complete.
- like the similarity of the first embodiment, the certainty factor can be used as information that helps the operator identify the correct current work state.
- the notification unit 45 outputs to the display device 22 a first notification signal for displaying that the work cannot be continued, and a second notification signal for displaying the certainty factor.
- based on the determination result indicating that the work of the robot 10 cannot be continued, the work state output by the input unit 23, the operator operation force output by the operation device 21, the state value detected by the state detection sensors 11 to 13, and the timer signal, the additional learning unit 43 additionally learns the work state and the next work state associated with it, together with at least one set of the state value and the operator operation force associated with the state value, and updates the model (additional learning step).
- since the display device 22 displays the notified similarity, the worker can accurately identify the current work state.
- the robot system 1 includes a progress degree acquisition unit 51 that acquires a degree of progress indicating how far, within the overall work of the robot 10, the work state realized based on the calculated operation force output by the model has progressed.
- the determination unit 44 outputs a determination result based on the degree of progress.
- when the additional learning unit 43 determines that the work state input to the input unit 23 is included in the model, it corrects the estimation criterion for that work state in the model based on the state values detected by the state detection sensors 11 to 13 (work state estimation criterion correction step).
- based on a trigger signal, the time measuring unit 46 outputs the timer signal at the predetermined time interval starting from the reception of the trigger signal; the learning control unit 42 outputs the trigger signal when it starts outputting the calculated operation force, and the additional learning unit 43 outputs the trigger signal when it detects input of the operator operation force.
- the data listed as the state value are examples, and different data may be used as the state value.
- the processing can be simplified by using data expressed in a coordinate system common to the robot 10 and the operator side (the operation device 21 and the display device 22).
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Human Computer Interaction (AREA)
- Automation & Control Theory (AREA)
- Multimedia (AREA)
- Manipulator (AREA)
Abstract
Description
10 Robot
11 Motion sensor
12 Force sensor
13 Camera
21 Operation device
22 Display device
23 Input unit
30 Switching device
40 Control unit
41 Communication unit
42 Learning control unit
43 Additional learning unit
44 Determination unit
45 Notification unit
46 Time measuring unit
Claims (17)
- 1. A robot system comprising: a robot that performs work based on an operation command; a state detection sensor that detects and outputs a state value indicating the progress of the work of the robot; a time measuring unit that outputs a timer signal at a predetermined time interval; a learning control unit that outputs a calculated operation force based on the state value detected by the state detection sensor and the timer signal, using a model constructed by machine learning of a work state and the next work state associated with the work state, together with at least one set of the state value and an operation force associated with the state value; a determination unit that outputs, based on the state value detected by the state detection sensor, a determination result indicating whether or not the work of the robot can be continued under the control of the learning control unit; an operation device that is operated by an operator and that detects and outputs an operator operation force, which is the operation force applied by the operator; an input unit that receives and outputs the operator's input of the work state; a switching device that, based on the operator operation force and the calculated operation force, converts either the operator operation force or the calculated operation force into the operation command and outputs it; and an additional learning unit that, based on the determination result indicating that the work of the robot cannot be continued, the work state output by the input unit, the operator operation force output by the operation device, the state value detected by the state detection sensor, and the timer signal, additionally learns the work state and the next work state associated with the work state, together with at least one set of the state value and the operator operation force associated with the state value, and updates the model.
- 2. The robot system according to claim 1, wherein the additional learning unit obtains, based on the state value, the next work state associated with the work state, and additionally learns the work state and the next work state, together with the state value and the operator operation force, to update the model.
- 3. The robot system according to claim 1, wherein the input unit receives the operator's input of the next work state associated with the input work state and outputs it to the additional learning unit, and the additional learning unit additionally learns the work state and the next work state, together with the state value and the operator operation force, to update the model.
- 4. The robot system according to any one of claims 1 to 3, wherein, when the work state differs from the next work state, the additional learning unit additionally learns the work state and the next work state, together with a plurality of sets of the state value and the operator operation force associated with the state value, to update the model.
- 5. The robot system according to any one of claims 1 to 4, wherein the switching device converts either the operator operation force or the calculated operation force into the operation command and outputs it, based on a setting signal indicating which of the two is to be converted.
- 6. The robot system according to any one of claims 1 to 4, wherein the switching device includes a sensor that detects the magnitude of the operator operation force output by the operation device, and converts either the operator operation force or the calculated operation force into the operation command and outputs it, based on the detected magnitude of the operator operation force.
- 7. The robot system according to any one of claims 1 to 6, wherein the learning control unit suspends the output of the calculated operation force based on the determination result indicating that the work of the robot cannot be continued, and resumes the output of the calculated operation force when it determines that the additional learning is completed.
- 8. The robot system according to any one of claims 1 to 7, further comprising: a notification unit that outputs a notification signal based on the determination result indicating that the work of the robot cannot be continued; and a display device that performs display based on the notification signal.
- 9. The robot system according to claim 8, wherein the learning control unit calculates and outputs, based on the state value detected by the state detection sensor, a similarity indicating the degree to which the current state value is similar to the work states in the model, and the notification unit outputs the notification signal based on the similarity and the determination result indicating that the work of the robot cannot be continued.
- 10. The robot system according to any one of claims 1 to 8, wherein the learning control unit calculates and outputs, based on the state value detected by the state detection sensor, a similarity indicating the degree to which the current state value is similar to the work states in the model, and the determination unit outputs the determination result based on the state value and the similarity.
- 11. The robot system according to claim 8, further comprising a certainty factor acquisition unit that acquires a certainty factor indicating the likelihood of the estimation when the model estimates and outputs the calculated operation force according to input data input to the model, wherein the notification unit outputs the notification signal based on the certainty factor and the determination result indicating that the work of the robot cannot be continued.
- 12. The robot system according to any one of claims 1 to 8, further comprising a certainty factor acquisition unit that acquires a certainty factor indicating the likelihood of the estimation when the model estimates and outputs the calculated operation force according to input data input to the model, wherein the determination unit outputs the determination result based on the certainty factor.
- 13. The robot system according to any one of claims 1 to 12, further comprising a progress degree acquisition unit that acquires a degree of progress indicating which stage of the work of the robot the work state of the robot realized based on the calculated operation force output by the model corresponds to, wherein the determination unit outputs the determination result based on the degree of progress.
- 14. The robot system according to any one of claims 1 to 13, wherein, when the additional learning unit determines that the work state input to the input unit is included in the model, the additional learning unit corrects the estimation criterion for that work state in the model based on the state value detected by the state detection sensor.
- 15. The robot system according to any one of claims 1 to 14, wherein, when the additional learning unit determines that the work state input to the input unit is not included in the model, the additional learning unit registers the input work state in the model based on the state value detected by the state detection sensor.
- 16. The robot system according to any one of claims 1 to 15, wherein the time measuring unit outputs, based on a trigger signal, the timer signal at the predetermined time interval starting from the reception of the trigger signal, the learning control unit outputs the trigger signal when starting the output of the calculated operation force, and the additional learning unit outputs the trigger signal when detecting the input of the operator operation force.
- 17. An additional learning method performed on a robot system including: a robot that performs work based on an operation command; a state detection sensor that detects and outputs a state value indicating the progress of the work of the robot; a time measuring unit that outputs a timer signal at a predetermined time interval; a learning control unit that outputs a calculated operation force based on the state value detected by the state detection sensor and the timer signal, using a model constructed by machine learning of a work state and the next work state associated with the work state, together with at least one set of the state value and an operation force associated with the state value; an operation device that is operated by an operator and that detects and outputs an operator operation force, which is the operation force applied by the operator; and a switching device that, based on the operator operation force and the calculated operation force, converts either the operator operation force or the calculated operation force into the operation command and outputs it, the method comprising: a determination step of outputting, based on the state value detected by the state detection sensor, a determination result indicating whether or not the work of the robot can be continued under the control of the learning control unit; an input reception step of receiving input of the work state and of the operator operation force from the operation device; and an additional learning step of additionally learning, based on the determination result indicating that the work of the robot cannot be continued, the input work state, the input operator operation force, the state value detected by the state detection sensor, and the timer signal, the work state and the next work state associated with the work state, together with at least one set of the state value and the operator operation force associated with the state value, and updating the model.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020520399A JP7167141B2 (ja) | 2018-05-25 | 2019-05-24 | ロボットシステム及び追加学習方法 |
US17/058,770 US11858140B2 (en) | 2018-05-25 | 2019-05-24 | Robot system and supplemental learning method |
KR1020207035037A KR102403073B1 (ko) | 2018-05-25 | 2019-05-24 | 로봇 시스템 및 추가 학습 방법 |
CN201980035276.4A CN112203812B (zh) | 2018-05-25 | 2019-05-24 | 机器人系统及追加学习方法 |
EP19807213.4A EP3804918A4 (en) | 2018-05-25 | 2019-05-24 | ROBOT SYSTEM AND ADDITIONAL LEARNING METHOD |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018100520 | 2018-05-25 | ||
JP2018-100520 | 2018-05-25 | ||
JP2018-245459 | 2018-12-27 | ||
JP2018245459 | 2018-12-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019225746A1 true WO2019225746A1 (ja) | 2019-11-28 |
Family
ID=68616335
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/020697 WO2019225746A1 (ja) | 2018-05-25 | 2019-05-24 | ロボットシステム及び追加学習方法 |
Country Status (6)
Country | Link |
---|---|
US (1) | US11858140B2 (ja) |
EP (1) | EP3804918A4 (ja) |
JP (1) | JP7167141B2 (ja) |
KR (1) | KR102403073B1 (ja) |
CN (1) | CN112203812B (ja) |
WO (1) | WO2019225746A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020138436A1 (ja) * | 2018-12-27 | 2020-07-02 | 川崎重工業株式会社 | ロボット制御装置、ロボットシステム及びロボット制御方法 |
WO2020138446A1 (ja) * | 2018-12-27 | 2020-07-02 | 川崎重工業株式会社 | ロボット制御装置、ロボットシステム及びロボット制御方法 |
JP2022023737A (ja) * | 2020-07-27 | 2022-02-08 | トヨタ自動車株式会社 | ロータの組立方法及びロータ組立装置の制御装置 |
WO2023085100A1 (ja) * | 2021-11-12 | 2023-05-19 | 川崎重工業株式会社 | ロボット制御装置、ロボットシステム及びロボット制御方法 |
JP7463777B2 (ja) | 2020-03-13 | 2024-04-09 | オムロン株式会社 | 制御装置、学習装置、ロボットシステム、および方法 |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018146770A1 (ja) * | 2017-02-09 | 2018-08-16 | 三菱電機株式会社 | 位置制御装置及び位置制御方法 |
US11472025B2 (en) * | 2020-05-21 | 2022-10-18 | Intrinsic Innovation Llc | Robotic demonstration learning device |
JP2022073192A (ja) * | 2020-10-30 | 2022-05-17 | セイコーエプソン株式会社 | ロボットの制御方法 |
DE102021109334B4 (de) * | 2021-04-14 | 2023-05-25 | Robert Bosch Gesellschaft mit beschränkter Haftung | Vorrichtung und Verfahren zum Trainieren eines Neuronalen Netzes zum Steuern eines Roboters für eine Einsetzaufgabe |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009125920A (ja) * | 2007-11-28 | 2009-06-11 | Mitsubishi Electric Corp | ロボットの作業動作最適化装置 |
JP2013543764A (ja) * | 2010-11-11 | 2013-12-09 | ザ・ジョンズ・ホプキンス・ユニバーシティ | ヒューマン・マシン連携ロボットシステム |
JP2016159407A (ja) * | 2015-03-03 | 2016-09-05 | キヤノン株式会社 | ロボット制御装置およびロボット制御方法 |
JP2017007064A (ja) | 2015-06-25 | 2017-01-12 | ダイハツ工業株式会社 | 組立方法 |
JP2017030135A (ja) | 2015-07-31 | 2017-02-09 | ファナック株式会社 | ワークの取り出し動作を学習する機械学習装置、ロボットシステムおよび機械学習方法 |
JP2017170553A (ja) * | 2016-03-23 | 2017-09-28 | 国立大学法人 東京大学 | 制御方法 |
JP2017185577A (ja) * | 2016-04-04 | 2017-10-12 | ファナック株式会社 | シミュレーション結果を利用して学習を行う機械学習装置,機械システム,製造システムおよび機械学習方法 |
JP2017200718A (ja) * | 2016-05-05 | 2017-11-09 | トヨタ自動車株式会社 | 認識的アフォーダンスに基づくロボットから人間への物体ハンドオーバの適合 |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3832517B2 (ja) * | 1996-07-05 | 2006-10-11 | Seiko Epson Corporation | Robot controller and control method therefor |
JP4169063B2 (ja) * | 2006-04-06 | 2008-10-22 | Sony Corporation | Data processing device, data processing method, and program |
CN101433491B (zh) * | 2008-12-05 | 2010-12-22 | Huazhong University of Science and Technology | Multi-degree-of-freedom wearable hand function rehabilitation training robot and control system therefor |
JP5522403B2 (ja) * | 2010-12-28 | 2014-06-18 | Yaskawa Electric Corporation | Robot system and robot state determination method |
JP5803155B2 (ja) * | 2011-03-04 | 2015-11-04 | Seiko Epson Corporation | Robot position detection device and robot system |
US11074495B2 (en) * | 2013-02-28 | 2021-07-27 | Z Advanced Computing, Inc. (Zac) | System and method for extremely efficient image and pattern recognition and artificial intelligence platform |
US9085080B2 (en) * | 2012-12-06 | 2015-07-21 | International Business Machines Corp. | Human augmentation of robotic work |
JP2014128857A (ja) * | 2012-12-28 | 2014-07-10 | Yaskawa Electric Corp | Robot teaching system and robot teaching method |
DE102016009030B4 (de) * | 2015-07-31 | 2019-05-09 | Fanuc Corporation | Machine learning device, robot system, and machine learning system for learning a workpiece picking operation |
DE102017000063B4 (de) * | 2016-01-14 | 2019-10-31 | Fanuc Corporation | Robot device with learning function |
EP3409428B1 (en) * | 2016-01-26 | 2024-02-14 | Fuji Corporation | Work system comprising a job creation device and a work robot control device |
CN107536698B (zh) * | 2016-06-29 | 2022-06-03 | Panasonic Intellectual Property Management Co., Ltd. | Walking assistance robot and walking assistance method |
US10186130B2 (en) * | 2016-07-28 | 2019-01-22 | The Boeing Company | Using human motion sensors to detect movement when in the vicinity of hydraulic robots |
JP6431017B2 (ja) * | 2016-10-19 | 2018-11-28 | Fanuc Corporation | Human-collaborative robot system with external force detection accuracy improved by machine learning |
JP6392825B2 (ja) * | 2016-11-01 | 2018-09-19 | Fanuc Corporation | Robot control device with learning control function |
JP2018126798A (ja) * | 2017-02-06 | 2018-08-16 | Seiko Epson Corporation | Control device, robot, and robot system |
US10807242B2 (en) * | 2017-12-13 | 2020-10-20 | Verb Surgical Inc. | Control modes and processes for positioning of a robotic manipulator |
2019
- 2019-05-24 JP JP2020520399A patent/JP7167141B2/ja active Active
- 2019-05-24 EP EP19807213.4A patent/EP3804918A4/en active Pending
- 2019-05-24 KR KR1020207035037A patent/KR102403073B1/ko active IP Right Grant
- 2019-05-24 US US17/058,770 patent/US11858140B2/en active Active
- 2019-05-24 WO PCT/JP2019/020697 patent/WO2019225746A1/ja unknown
- 2019-05-24 CN CN201980035276.4A patent/CN112203812B/zh active Active
Non-Patent Citations (1)
Title |
---|
See also references of EP3804918A4 |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020138436A1 (ja) * | 2018-12-27 | 2020-07-02 | Kawasaki Heavy Industries, Ltd. | Robot control device, robot system, and robot control method |
WO2020138446A1 (ja) * | 2018-12-27 | 2020-07-02 | Kawasaki Heavy Industries, Ltd. | Robot control device, robot system, and robot control method |
JP2020104216A (ja) * | 2018-12-27 | 2020-07-09 | Kawasaki Heavy Industries, Ltd. | Robot control device, robot system, and robot control method |
JP7117237B2 (ja) | 2018-12-27 | 2022-08-12 | Kawasaki Heavy Industries, Ltd. | Robot control device, robot system, and robot control method |
JP7463777B2 (ja) | 2020-03-13 | 2024-04-09 | Omron Corporation | Control device, learning device, robot system, and method |
JP2022023737A (ja) * | 2020-07-27 | 2022-02-08 | Toyota Motor Corporation | Rotor assembly method and control device for rotor assembly device |
US11942838B2 (en) | 2020-07-27 | 2024-03-26 | Toyota Jidosha Kabushiki Kaisha | Rotor assembling method |
WO2023085100A1 (ja) * | 2021-11-12 | 2023-05-19 | Kawasaki Heavy Industries, Ltd. | Robot control device, robot system, and robot control method |
Also Published As
Publication number | Publication date |
---|---|
KR102403073B1 (ko) | 2022-05-30 |
JPWO2019225746A1 (ja) | 2021-06-10 |
US11858140B2 (en) | 2024-01-02 |
CN112203812B (zh) | 2023-05-16 |
CN112203812A (zh) | 2021-01-08 |
US20210197369A1 (en) | 2021-07-01 |
EP3804918A4 (en) | 2022-03-30 |
KR20210006431A (ko) | 2021-01-18 |
JP7167141B2 (ja) | 2022-11-08 |
EP3804918A1 (en) | 2021-04-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019225746A1 (ja) | Robot system and additional learning method | |
CN106780608B (zh) | Pose information estimation method, apparatus, and movable device | |
JP5550671B2 (ja) | Autonomous mobile robot and travel control method for autonomous mobile robot | |
Chalon et al. | Online in-hand object localization | |
JP4875228B2 (ja) | Object position correction device, object position correction method, and object position correction program | |
KR102303126B1 (ko) | Reinforcement-learning-based autonomous driving optimization method and system according to user preference | |
JP2006320997A (ja) | Robot action selection device and robot action selection method | |
JP2022543926A (ja) | System and design for derivative-free model learning for robotic systems | |
Khokar et al. | A novel telerobotic method for human-in-the-loop assisted grasping based on intention recognition | |
CN114800535B (zh) | Robot control method, robot arm control method, robot, and control terminal | |
JP2020025992A (ja) | Control device, control method, and program | |
JP2007052652A (ja) | State vector estimation method and autonomous mobile body | |
Aoki et al. | Segmentation of human body movement using inertial measurement unit | |
Maderna et al. | Real-time monitoring of human task advancement | |
CN113195177B (zh) | Robot control device, robot system, and robot control method | |
Du et al. | Human‐Manipulator Interface Using Particle Filter | |
JP5120024B2 (ja) | Autonomous mobile robot and obstacle identification method therefor | |
JP2012236254A (ja) | Mobile object gripping device and method | |
CN111971149A (zh) | Recording medium, information processing device, and information processing method | |
WO2022264333A1 (ja) | Control device, control method, and control program for remote operation device | |
JP2020082313A (ja) | Robot control device, learning device, and robot control system | |
CN117207190B (zh) | Precision grasping robot system based on vision-tactile fusion | |
WO2023085100A1 (ja) | Robot control device, robot system, and robot control method | |
US20240139950A1 (en) | Constraint condition learning device, constraint condition learning method, and storage medium | |
CN118265596A (en) | Robot control device, robot system, and robot control method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19807213; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2020520399; Country of ref document: JP; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 20207035037; Country of ref document: KR; Kind code of ref document: A |
| ENP | Entry into the national phase | Ref document number: 2019807213; Country of ref document: EP; Effective date: 20210111 |