WO2019064751A1 - System for teaching robot, method for teaching robot, control device, and computer program - Google Patents

System for teaching robot, method for teaching robot, control device, and computer program

Info

Publication number
WO2019064751A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
robot
work
motion
recognized
Prior art date
Application number
PCT/JP2018/023728
Other languages
French (fr)
Japanese (ja)
Inventor
一宏 佐齋
西岡 澄人
Original Assignee
日本電産株式会社 (Nidec Corporation)
Priority date
Filing date
Publication date
Application filed by 日本電産株式会社 (Nidec Corporation)
Publication of WO2019064751A1 publication Critical patent/WO2019064751A1/en

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00: Programme-control systems
    • G05B19/02: Programme-control systems, electric
    • G05B19/42: Recording and playback systems, i.e. in which the programme is recorded from a cycle of operations, e.g. the cycle of operations being manually controlled, after which this record is played back on the same machine

Definitions

  • The present invention relates to a robot teaching system, a robot teaching method, a control device, and a computer program.
  • Conventionally, three methods are known for teaching work to industrial robots: the direct teaching method (direct teaching), the remote teaching method (remote teaching), and the indirect teaching method (off-line programming).
  • The direct teaching method is a method in which an operator holds an arm of the industrial robot and teaches the robot while moving the arm by hand.
  • The remote teaching method is a method of operating the industrial robot with a teaching pendant and recording teaching points by pressing a teaching button or the like.
  • The indirect teaching method is a method of teaching the industrial robot using support software such as CAM (computer-aided manufacturing) (see, for example, Patent Document 1: JP 2017-27501 A).
  • The conventional direct, remote, and indirect teaching methods can all control the robot only with the same motions as the taught motions. For this reason, the work performed by the robot lacks versatility.
  • An object of the present invention is to improve the versatility of the work performed by a robot.
  • An exemplary first invention of the present invention is a robot teaching system for teaching work to a robot, comprising: an acquisition device that acquires work information including image information to be followed by the robot; and a control device that generates, based on the acquired work information, a work command to be executed by the robot and transmits the generated work command to the robot. The control device has: a recognition unit that recognizes object information representing an object included in the acquired work information and motion information representing an operation performed on the object; a storage unit that stores the recognized object information and motion information; a control unit that generates the work command based on the stored object information and motion information; and a communication unit that transmits the generated work command to the robot.
  • An exemplary second invention of the present invention is a robot teaching method for teaching work to a robot, comprising: a first step of acquiring work information including image information to be followed by the robot; and a second step of generating, based on the acquired work information, a work command to be executed by the robot and transmitting the generated work command to the robot. The second step includes: a recognition step of recognizing object information representing an object included in the acquired work information and motion information representing an operation performed on the object; a storage step of storing the recognized object information and motion information; a generation step of generating the work command from the stored object information and motion information; and a transmission step of transmitting the generated work command to the robot.
  • An exemplary third invention of the present invention is a control device that generates, based on work information including image information to be followed by a robot, a work command to be executed by the robot and transmits the generated work command to the robot, the control device comprising: a recognition unit that recognizes object information representing an object included in the work information and motion information representing an operation performed on the object; a storage unit that stores the recognized object information and motion information; a control unit that generates the work command based on the stored object information and motion information; and a communication unit that transmits the generated work command to the robot.
  • An exemplary fourth invention of the present invention is a computer program that causes a computer to function as a control device that generates, based on work information including image information to be followed by a robot, a work command to be executed by the robot and transmits the generated work command to the robot, the computer program causing the computer to execute: a recognition step of recognizing object information representing an object included in the work information and motion information representing an operation performed on the object; a storage step of storing the recognized object information and motion information; a generation step of generating the work command based on the stored object information and motion information; and a transmission step of transmitting the generated work command to the robot.
  • According to the exemplary first to fourth inventions of the present invention, the versatility of the work performed by the robot can be improved.
  • FIG. 1 is a schematic view showing an overall configuration of a robot teaching system exemplified as an embodiment of the present invention.
  • The robot teaching system 1 is a system capable of teaching work to various robots.
  • In the present embodiment, as an example, a case of teaching work to a robot (hereinafter referred to as a first robot) 2 installed on a production line of a factory will be described.
  • The first robot 2 is, for example, a dual-arm robot provided with two arms 2a.
  • FIG. 1 shows, as an example of the work taught to the first robot 2, an assembly operation in which a plurality of parts 3 are sequentially assembled to complete an assembly 4, which is the goal object.
  • The robot teaching system 1 of the present embodiment is a system in which a human 5 performs in advance the work that the first robot 2 is to perform, and the first robot 2 is made to imitate the work performed by the human 5.
  • The robot teaching system 1 includes an acquisition device 10 and a control device 20.
  • The subject who performs the work in advance (the imitation target) is not limited to the human 5, and may be a robot other than the first robot 2 (hereinafter referred to as a second robot).
  • For example, when replacing an old robot with a new robot, the old robot may serve as the second robot (not shown) to be imitated, and the new robot may be taught the work performed by the old robot.
  • Moreover, the work taught to the first robot 2 is not limited to the assembly operation, and may be other work such as a simple task.
  • The acquisition device 10 acquires work information S1 related to a series of work performed in advance by the human 5.
  • The acquisition device 10 of the present embodiment includes a first camera 11, a second camera 12, and an information acquisition unit 13.
  • The first camera 11 is, for example, a stereo camera that captures the work space B in which the human 5 works.
  • The stereo camera includes two or more cameras (not shown) that capture the work space B from different directions.
  • The first camera 11 is connected to the control device 20 by wire or wirelessly.
  • The captured images acquired by the first camera 11 photographing the work space B are transmitted to the control device 20 as image information S11 of the series of work by the human 5, that is, of the work that the first robot 2 should follow.
  • The second camera 12 is, for example, a wearable camera mounted on the head of the human 5.
  • The second camera 12 is connected to the control device 20 by wire or wirelessly.
  • The second camera 12 is attached near the viewpoint position of the human 5, and captures objects present in the line-of-sight direction E of the human 5 during work.
  • Although not shown, a gaze sensor that detects the eye movement of the human 5 is attached to the head of the human 5 together with the second camera 12.
  • The gaze sensor is connected to the control device 20 by wire or wirelessly.
  • A captured image of an object present in the line-of-sight direction E, acquired by the second camera 12, and the detection data of the gaze sensor are transmitted to the control device 20 as line-of-sight information S12.
  • Note that the line-of-sight information S12 need only include at least the captured image of the second camera 12. Therefore, the line-of-sight information S12 may be acquired with the second camera 12 alone, without using the gaze sensor.
  • The information acquisition unit 13 consists of, for example, pressure sensors attached to both hands of the human 5.
  • The pressure sensors detect the pressure value when the human 5 grips a target object (such as a part 3 or a tool). Therefore, the information acquisition unit 13 of the present embodiment acquires the pressure value when the human 5 grips a target object.
  • The detection values (pressure values) of the information acquisition unit 13 are transmitted to the control device 20 as related information S13 on how the imitation target (the human 5 in this embodiment) grips an object.
  • The information acquisition unit 13 is not limited to a pressure sensor that detects a pressure value, and various sensors such as a contact sensor may be used to detect a pressure distribution, the direction of a moment, a slip angle, and the like. When such sensors are used, their detection values, that is, the pressure distribution, the direction of the moment, the slip angle, and so on, are transmitted to the control device 20 as the related information S13.
  • When the imitation target is the second robot, the related information S13 may be transmitted to the control device 20 directly from the second robot, without using the information acquisition unit 13.
  • The related information S13 transmitted from the second robot is information including the pressure value, pressure distribution, direction of moment, slip angle, and the like that the second robot uses when gripping an object.
  • This information may be information that the second robot has acquired from an external source, or information stored in advance in the second robot itself.
  • As described above, the acquisition device 10 acquires the image information S11, the line-of-sight information S12, and the related information S13 as the work information S1, and transmits the acquired work information S1 to the control device 20. Note that the acquisition device 10 need only include at least the first camera 11; that is, it may acquire only the image information S11 of the first camera 11 as the work information S1.
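  • For illustration, the work information S1 described above could be represented by a simple container such as the following Python sketch. All names here (WorkInfo and its field names) are hypothetical; the publication does not specify any data format.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class WorkInfo:
    """Work information S1 gathered by the acquisition device 10 (sketch)."""
    image_info: List[bytes]                     # S11: stereo frames of work space B (first camera 11)
    gaze_info: Optional[List[bytes]] = None     # S12: wearable-camera frames and gaze data (second camera 12)
    related_info: Optional[List[float]] = None  # S13: grip pressure values (information acquisition unit 13)


# Only the image information S11 is mandatory, mirroring the note that the
# acquisition device need only include the first camera 11:
s1 = WorkInfo(image_info=[b"frame0", b"frame1"])
```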
  • The control device 20 generates a work command S5 to be executed by the first robot 2 based on the work information S1 received from the acquisition device 10, and transmits the generated work command S5 to the first robot 2.
  • FIG. 2 is a functional block diagram showing the internal configuration of the control device 20.
  • The control device 20 includes a computer, and is connected to the first robot 2 and the acquisition device 10 by wire or wirelessly.
  • The control device 20 includes a communication unit 21, a recognition unit 22, a storage unit 23, and a control unit 24.
  • The control unit 24 includes, for example, a CPU (Central Processing Unit).
  • The control unit 24 is connected to the other hardware units 21, 22, and 23 via an internal bus or the like, and controls their operation. The control unit 24 also reads the computer programs stored in the storage unit 23 and executes various processes.
  • The control unit 24 of the present embodiment executes the "object registration process", the "motion registration process", and the "work command generation process" described later.
  • The communication unit 21 functions as a communication interface that performs wired or wireless communication with the first camera 11, the second camera 12, and the information acquisition unit 13. Specifically, when the communication unit 21 receives the work information S1 from the acquisition device 10, it passes the received work information S1 to the recognition unit 22. When the imitation target is the second robot and the communication unit 21 receives the related information S13 of the work information S1 from the external second robot, it passes the received related information S13 to the recognition unit 22.
  • The communication unit 21 also functions as a communication interface that performs wired or wireless communication with the first robot 2. Specifically, the communication unit 21 transmits the work command S5 generated by the control unit 24 to the first robot 2.
  • The storage unit 23 is formed of a recording medium such as a hard disk or a semiconductor memory.
  • The storage unit 23 stores the computer programs with which the recognition unit 22 and the control unit 24 respectively perform their processing.
  • The storage unit 23 also contains a first database DB1 and a second database DB2.
  • The recognition unit 22 includes, for example, a GPU (Graphics Processing Unit).
  • The recognition unit 22 reads the computer program stored in the storage unit 23 and executes various processes.
  • When the communication unit 21 receives the work information S1, the recognition unit 22 of the present embodiment executes the "object recognition process" and the "motion recognition process" described later based on that work information S1.
  • As described above, the control device 20 is configured to include a computer, and each function of the control device 20 is realized when the computer program stored in the storage unit 23 of the computer is executed by the CPU and the GPU of the computer.
  • Such a computer program can be stored on a transitory or non-transitory recording medium such as a CD-ROM or a USB memory.
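  • The division of labor among the four units could be wired up as in the following sketch. It is an illustrative skeleton only: the recognizer and the robot link are passed in as plain callables, whereas the publication describes a GPU-based recognition unit 22 and a wired or wireless link to the first robot 2.

```python
from typing import Callable, Dict, List, Tuple

Record = Dict[str, object]  # one piece of object information S2 or motion information S3


class ControlDevice:
    """Sketch of control device 20: communication, recognition, storage, control."""

    def __init__(self,
                 recognize: Callable[[object], Tuple[List[Record], List[Record]]],
                 send_to_robot: Callable[[list], None]):
        self.recognize = recognize          # recognition unit 22
        self.send_to_robot = send_to_robot  # communication unit 21 -> first robot 2
        self.db1: Dict[str, Record] = {}    # storage unit 23: first database DB1 (S2)
        self.db2: Dict[str, Record] = {}    # storage unit 23: second database DB2 (S3)

    def on_work_info(self, work_info: object) -> None:
        """Communication unit 21 received work information S1."""
        objects, motions = self.recognize(work_info)    # object/motion recognition
        for obj in objects:                             # object registration process
            self.db1.setdefault(str(obj["name"]), obj)
        for mot in motions:                             # motion registration process
            self.db2.setdefault(str(mot["name"]), mot)

    def issue_command(self, plan: List[Tuple[str, str]]) -> None:
        """Control unit 24: build work command S5 from module information S4
        (object + motion pairs) and have the communication unit 21 send it."""
        s5 = [{"object": self.db1[o], "motion": self.db2[m]} for o, m in plan]
        self.send_to_robot(s5)
```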
  • The object recognition process performed by the recognition unit 22 is a process of recognizing, from the image information S11 included in the work information S1, object information S2 representing a target object of the work.
  • For example, the recognition unit 22 recognizes a target object as object information S2 using a known image recognition technique.
  • At this time, the recognition unit 22 also recognizes the type, shape, angle (posture), and the like of the target object as object information S2. Therefore, the recognized object information S2 includes the type, shape, angle, and the like of the target object as attribute information.
  • Further, in the object recognition process, the recognition unit 22 individually recognizes the target objects included in the image information S11.
  • The image information S11 of the present embodiment includes, as target objects, a plurality of parts 3, tools, jigs, trays, and the like used for the assembly operation, and the assembly 4 assembled by the assembly operation (see FIG. 3). Therefore, the recognition unit 22 individually recognizes the plurality of "parts", "tools", "jigs", "trays", "assemblies", and the like as pieces of object information S2, each representing a target object.
  • The attribute information of the object information S2 representing an "assembly" includes assembly information in addition to the type, shape, and angle of the assembly 4.
  • The assembly information is information necessary to assemble the assembly 4. Specifically, the assembly information includes the plurality of parts 3 necessary for the assembly, the assembly order of those parts 3, and the like.
  • Note that although the recognition unit 22 recognizes the assembly 4, which is the goal object, as object information S2, it may instead recognize the assembly 4 as information representing the goal object, separately from the object information S2. Furthermore, a dedicated database storing information representing the recognized goal object may be provided in the storage unit 23.
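  • As a concrete, purely illustrative reading of the object recognition process, each detection produced by whatever image recognition technique is used could be wrapped into an S2 record carrying the attribute information named above. The Detection fields and the dictionary keys are assumptions, not part of the publication.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    label: str    # object type, e.g. "part", "tool", "jig", "tray", "assembly"
    shape: str    # recovered shape descriptor
    angle: float  # posture (orientation) of the object


def to_object_info(detections: List[Detection]) -> List[dict]:
    """Object recognition process (sketch): one S2 record per detected object,
    with type, shape and angle carried as attribute information."""
    s2 = []
    for d in detections:
        rec = {"name": d.label, "type": d.label, "shape": d.shape, "angle": d.angle}
        if d.label == "assembly":
            # An "assembly" record additionally carries assembly information:
            # the required parts 3 and their assembly order (contents made up here).
            rec["assembly_info"] = {"parts": ["part"], "order": [("part", "gripping")]}
        s2.append(rec)
    return s2
```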
  • The motion recognition process performed by the recognition unit 22 is a process of recognizing, from the image information S11 included in the work information S1, motion information S3 representing an operation performed on a target object.
  • For example, the recognition unit 22 recognizes an operation performed on a target object as motion information S3 using a known motion capture technique.
  • Specifically, the recognition unit 22 calculates the three-dimensional coordinates of each of a plurality of markers attached to the joints and other points of the human 5 from the captured images, taken from a plurality of different directions, included in the image information S11.
  • The recognition unit 22 then recognizes the operation performed on the target object as motion information S3 based on the temporal change of the three-dimensional coordinates of each marker. At this time, the recognition unit 22 also recognizes the operation speed (rotational speed), operation angle, and the like of the operation performed on the target object. Therefore, the recognized motion information S3 includes, as attribute information, the operation speed (rotational speed), the operation angle, and the like of the operation performed on the target object.
  • In the motion recognition process, the recognition unit 22 segments the image information S11 and recognizes each segment as motion information S3. That is, in the motion recognition process, the recognition unit 22 individually recognizes all the operations included in the image information S11.
  • The image information S11 of the present embodiment includes a plurality of operations such as "gripping", "releasing", "turning", and "placing" performed in the assembly operation (see FIG. 3). Therefore, the recognition unit 22 individually recognizes "gripping", "releasing", "turning", "placing", and the like performed in the assembly operation as pieces of motion information S3, each representing an operation performed on a target object.
  • In the motion recognition process, the recognition unit 22 of the present embodiment includes the related information S13 in the attribute information of the motion information S3 representing the "gripping" operation (hereinafter referred to as grip information S31).
  • As described above, the related information S13 includes the pressure value detected by the information acquisition unit 13 when the human 5 grips a target object. Therefore, the recognition unit 22 includes the pressure value of the information acquisition unit 13, contained in the related information S13, in the attribute information of the grip information S31. Thereby, the recognized grip information S31 is stored in the storage unit 23 in association with the related information S13.
  • Note that the attribute information of the grip information S31 does not have to include the related information S13.
  • Further, the recognition unit 22 recognizes motion information S3 representing an operation performed on a target object from the line-of-sight information S12 in addition to the image information S11.
  • As described above, the line-of-sight information S12 includes the captured image of the second camera 12, which photographed an object present in the line-of-sight direction E of the human 5, and the detection data of the gaze sensor, which detected the eye movement of the human 5.
  • Using a known gaze measurement technique, the recognition unit 22 recognizes, from the captured image of the second camera 12 and the detection data of the gaze sensor included in the line-of-sight information S12, which position on the target object the human 5 is looking at during the operation. Thereby, when recognizing a "gripping" operation on a target object, for example, the recognition unit 22 can identify the grip position on the target object from the position at which the human 5 is looking.
  • The recognition unit 22 can therefore recognize the grip information S31 more accurately from the "gripping" operation recognized from the image information S11 and the grip position identified from the line-of-sight information S12. Note that the recognition unit 22 of the present embodiment may recognize the grip information S31 based on the image information S11 alone.
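  • A correspondingly simplified sketch of the motion recognition process is shown below: each segmented operation becomes an S3 record, and a "gripping" segment is enriched into grip information S31 with the pressure value from the related information S13 and, if available, the grip position identified from the line-of-sight information S12. The segment format and the per-segment indexing of the extra data are assumptions.

```python
from typing import List, Optional


def to_motion_info(segments: List[dict],
                   pressures: Optional[List[float]] = None,
                   grip_positions: Optional[List[tuple]] = None) -> List[dict]:
    """Motion recognition process (sketch): one S3 record per segmented
    operation, with operation speed and angle as attribute information."""
    s3 = []
    for i, seg in enumerate(segments):  # e.g. "gripping", "releasing", "turning", "placing"
        rec = {"name": seg["label"], "speed": seg["speed"], "angle": seg["angle"]}
        if seg["label"] == "gripping":  # grip information S31
            if pressures is not None:
                rec["pressure"] = pressures[i]            # related information S13
            if grip_positions is not None:
                rec["grip_position"] = grip_positions[i]  # from line-of-sight info S12
        s3.append(rec)
    return s3
```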
  • The first database DB1 of the storage unit 23 is a database in which object information S2 is stored. In the first database DB1, object information S2 representing known objects is stored in advance.
  • The control unit 24 executes an "object registration process" of storing the object information S2 recognized by the recognition unit 22 in the first database DB1 according to a predetermined condition.
  • Specifically, the control unit 24 checks whether the object information S2 recognized by the recognition unit 22 is already stored in the first database DB1. When, as a result of this collation, that object information S2 is not stored in the first database DB1, the control unit 24 causes the storage unit 23 to store it in the first database DB1. Thereby, a plurality of different pieces of object information S2 are accumulated in the first database DB1.
  • The second database DB2 of the storage unit 23 is a database in which motion information S3 is stored.
  • The control unit 24 executes a "motion registration process" of storing the motion information S3 recognized by the recognition unit 22 in the second database DB2 according to a predetermined condition.
  • Specifically, the control unit 24 checks whether the motion information S3 recognized by the recognition unit 22 is already stored in the second database DB2. When, as a result of this collation, that motion information S3 is not stored in the second database DB2, the control unit 24 causes the storage unit 23 to store it in the second database DB2. Thereby, a plurality of different pieces of motion information S3 are accumulated in the second database DB2.
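  • Both registration processes reduce to the same check-then-store pattern, sketched below with a plain dictionary standing in for each database. The key used for collation is an assumption, since the publication does not state how two pieces of information are judged identical.

```python
def register(db: dict, records: list, key: str = "name") -> None:
    """Object/motion registration process (sketch): store a recognized record
    only when it is not already in the database, so that DB1 and DB2
    accumulate distinct pieces of object/motion information."""
    for rec in records:
        if rec[key] not in db:   # collation against the stored entries
            db[rec[key]] = rec   # store only on the "not yet stored" branch
```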
  • The control unit 24 also executes a "work command generation process" of generating the work command S5 that the first robot 2 is caused to execute in accordance with the work information S1.
  • The work command generation process is a process of generating the work command S5 based on one or more pieces of object information S2 stored in the first database DB1 and one or more pieces of motion information S3 stored in the second database DB2.
  • Specifically, the control unit 24 generates at least one piece of module information S4.
  • The module information S4 is information in which one of the plurality of pieces of object information S2 stored in the first database DB1 and one of the plurality of pieces of motion information S3 stored in the second database DB2 are combined. Then, when a plurality of pieces of module information S4 are generated, the control unit 24 generates a work command S5 composed of an operation program in which the plurality of pieces of module information S4 are sequentially connected.
  • FIG. 3 is a diagram illustrating an example of the work command generation process performed by the control unit 24.
  • The work command generation process shown in FIG. 3 is a process of generating a work command S5 that causes the first robot 2 to execute the assembly work of the assembly 4.
  • First, the control unit 24 of the present embodiment acquires the assembly information from the object information S2 stored in the first database DB1 that represents the "assembly" corresponding to the assembly 4. Then, based on the acquired assembly information, the control unit 24 generates a plurality of pieces of module information S4, in each of which one piece of object information S2 and one piece of motion information S3 are combined.
  • For example, the control unit 24 combines the object information S2 representing a "part" and the motion information S3 representing "gripping" to generate first module information S4.
  • The first module information S4 represents the operation of "gripping the part 3".
  • Similarly, the control unit 24 combines the object information S2 representing a "tool" and the motion information S3 representing "turning" to generate second module information S4.
  • The second module information S4 represents the operation of "turning the tool".
  • Further, the control unit 24 combines the object information S2 representing a "tray" and the motion information S3 representing "placing" to generate third module information S4.
  • The third module information S4 represents the operation of "placing on the tray". In this manner, the control unit 24 generates the first to N-th pieces of module information S4 necessary for the completion of the assembly 4, where N is an integer of 2 or more.
  • Next, based on the acquired assembly information, the control unit 24 generates a work command S5 composed of an operation program in which the plurality of pieces of module information S4 are sequentially connected. Specifically, the control unit 24 generates a work command S5 composed of an operation program in which the first to N-th pieces of module information S4 are connected in order. The generated work command S5 is therefore an operation program that causes the first robot 2 to sequentially execute the operations included in the first to N-th pieces of module information S4, and is transmitted to the first robot 2 by the communication unit 21.
  • Note that the work command S5 need not be composed of all of the first to N-th pieces of module information S4, and may be composed of only a part (or one) of them.
  • For example, the control unit 24 can generate, as a work command for assembling a part of the assembly 4, a work command S5 composed of the first to K-th (K < N) pieces of module information S4. The control unit 24 can also generate a work command S5 composed of a single piece of module information S4 (for example, "turning the tool").
  • In this case, when the work command S5 is repeatedly transmitted by the communication unit 21, the first robot 2 repeats the work specified by the module information S4 included in the work command S5 (the work of turning the tool).
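  • The work command generation process can then be pictured as below: an ordered plan, standing in for the assembly information, selects one S2 and one S3 entry per module, and the chained modules form the operation program S5. A one-module plan such as [("tool", "turning")] yields the repeatable single-operation command described above.

```python
from typing import Dict, List, Tuple


def generate_work_command(db1: Dict[str, dict],
                          db2: Dict[str, dict],
                          plan: List[Tuple[str, str]]) -> List[dict]:
    """Work command generation process (sketch): combine one piece of object
    information S2 and one piece of motion information S3 into module
    information S4, then connect the modules 1..N in order to form the
    operation program that constitutes the work command S5."""
    modules = []
    for obj_name, motion_name in plan:                # plan: stand-in for assembly information
        modules.append({"object": db1[obj_name],      # S4: object half
                        "motion": db2[motion_name]})  # S4: motion half
    return modules  # S5: executed by the first robot 2 module by module
```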
  • FIGS. 4 and 5 are flowcharts showing an example of a robot teaching method performed by the acquisition device 10 and the control device 20 in cooperation with each other.
  • The robot teaching method of FIGS. 4 and 5 shows the flow from the acquisition device 10 acquiring the work information S1 to the output of the work command S5 to the first robot 2.
  • The encircled letter A in FIG. 4 connects to the encircled letter A in FIG. 5.
  • First, the acquisition device 10 acquires the work information S1 related to a series of work performed in advance by the human 5 (step ST1). Specifically, the first camera 11 captures images of the work space B in which the human 5 works, and acquires the image information S11. The second camera 12 captures objects present in the line-of-sight direction E of the human 5 during the work, and acquires the line-of-sight information S12. The information acquisition unit 13 detects the pressure value when the human 5 grips a target object, and acquires the related information S13.
  • Next, the acquisition device 10 transmits the acquired work information S1 to the control device 20 (step ST2). Specifically, the first camera 11, the second camera 12, and the information acquisition unit 13 transmit the acquired image information S11, line-of-sight information S12, and related information S13 to the communication unit 21 of the control device 20. When the transmission to the communication unit 21 ends, the processing of the acquisition device 10 ends.
  • Meanwhile, the communication unit 21 of the control device 20 receives the image information S11, the line-of-sight information S12, and the related information S13 from the acquisition device 10 as the work information S1 (step ST3). The communication unit 21 then passes the work information S1 to the recognition unit 22 of the control device 20.
  • Next, the recognition unit 22 executes the object recognition process and the motion recognition process described above based on the work information S1 received from the communication unit 21. That is, the recognition unit 22 recognizes, from the work information S1, the object information S2 representing the target objects and the motion information S3 representing the operations performed on the target objects (step ST4). The recognition unit 22 passes the recognized object information S2 and motion information S3 to the control unit 24 of the control device 20.
  • Next, the control unit 24 executes the object registration process described above. Specifically, the control unit 24 checks whether the object information S2 recognized by the recognition unit 22 is already stored in the first database DB1 (step ST5). When, as a result of this collation, the object information S2 recognized by the recognition unit 22 is not stored in the first database DB1 ("No" in step ST5), the control unit 24 causes the storage unit 23 to store that object information S2 in the first database DB1 (step ST6).
  • When, in step ST5, the object information S2 recognized by the recognition unit 22 is already stored in the first database DB1 ("Yes" in step ST5), the control unit 24 proceeds to step ST7 described later.
  • Next, the control unit 24 executes the motion registration process described above. Specifically, the control unit 24 checks whether the motion information S3 recognized by the recognition unit 22 is already stored in the second database DB2 (step ST7). When, as a result of this collation, the motion information S3 recognized by the recognition unit 22 is not stored in the second database DB2 ("No" in step ST7), the control unit 24 causes the storage unit 23 to store that motion information S3 in the second database DB2 (step ST8).
  • When, in step ST7, the motion information S3 recognized by the recognition unit 22 is already stored in the second database DB2 ("Yes" in step ST7), the control unit 24 proceeds to step ST9 described later.
  • Note that the motion registration process (steps ST7 to ST8) may be performed before the object registration process (steps ST5 to ST6).
  • Next, the control unit 24 executes the work command generation process described above. Specifically, the control unit 24 generates a plurality of pieces of module information S4, each combining one of the plurality of pieces of object information S2 stored in the first database DB1 with one of the plurality of pieces of motion information S3 stored in the second database DB2. Then, the control unit 24 generates a work command S5 composed of an operation program in which the plurality of pieces of module information S4 are sequentially connected (step ST9).
  • Next, the control unit 24 causes the communication unit 21 to transmit the generated work command S5 to the first robot 2. That is, the control unit 24 outputs to the communication unit 21 an instruction to transmit the generated work command S5 to the first robot 2 (step ST10).
  • The communication unit 21 transmits the work command S5 to the first robot 2 in accordance with the transmission instruction of the control unit 24 (step ST11).
  • With this, the processing of the control device 20 ends.
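  • Chaining the hypothetical helpers from the earlier sketches gives a rough end-to-end picture of steps ST1 to ST11; the acquisition and transmission of S1 are abbreviated here to pre-built detections and segments, and all data is made up.

```python
# Reuses Detection, to_object_info, to_motion_info, register and
# generate_work_command from the sketches above.
detections = [Detection("part", "cylinder", 0.0), Detection("tool", "driver", 90.0)]
segments = [{"label": "gripping", "speed": 0.2, "angle": 15.0},
            {"label": "turning", "speed": 1.5, "angle": 360.0}]

db1: dict = {}
db2: dict = {}
register(db1, to_object_info(detections))                      # ST4 to ST6
register(db2, to_motion_info(segments, pressures=[3.2, 0.0]))  # ST4, ST7 to ST8
s5 = generate_work_command(db1, db2,
                           [("part", "gripping"), ("tool", "turning")])  # ST9
print(s5)  # ST10 to ST11 would hand this work command to the communication unit
```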
  • Step ST1 corresponds to the first step, in the robot teaching method of the present invention, of acquiring the work information S1 including the image information S11 to be followed by the robot 2.
  • Steps ST4 to ST11 correspond to the second step, in the robot teaching method of the present invention, of generating the work command S5 to be executed by the robot (the first robot 2) based on the acquired work information S1 and transmitting the generated work command S5 to the robot.
  • Step ST4 corresponds to the recognition step, in the robot teaching method of the present invention, of recognizing the object information S2 representing an object included in the acquired work information S1 and the motion information S3 representing an operation performed on the object.
  • Steps ST5 to ST8 correspond to the storage step, in the robot teaching method of the present invention, of storing the recognized object information S2 and motion information S3.
  • Step ST9 corresponds to the generation step, in the robot teaching method of the present invention, of generating the work command S5 from the stored object information S2 and motion information S3.
  • Step ST11 corresponds to the transmission step, in the robot teaching method of the present invention, of transmitting the generated work command S5 to the robot 2.
  • Likewise, step ST4 corresponds to the recognition step, in the computer program of the present invention, of recognizing the object information S2 representing an object included in the work information S1 and the motion information S3 representing an operation performed on the object.
  • Steps ST5 to ST8 correspond to the storage step, in the computer program of the present invention, of storing the recognized object information S2 and motion information S3.
  • Step ST9 corresponds to the generation step, in the computer program of the present invention, of generating the work command S5 based on the stored object information S2 and motion information S3.
  • Step ST11 corresponds to the transmission step, in the computer program of the present invention, of transmitting the generated work command S5 to the robot 2.
  • As described above, in the robot teaching system 1 of the present embodiment, the recognition unit 22 recognizes the object information S2 representing an object included in the acquired work information S1 and the motion information S3 representing an operation performed on the object.
  • The recognized object information S2 and motion information S3 are stored in the storage unit 23.
  • The control unit 24 generates a work command S5 based on the stored object information S2 and motion information S3, and the communication unit 21 transmits the generated work command S5 to the robot 2. Therefore, by arbitrarily changing the combination of object information S2 and motion information S3, a work command S5 different from the work information S1 acquired by the acquisition device 10 can be transmitted to the robot 2. For this reason, even when the acquisition device 10 acquires only a small amount of work information S1, the versatility of the work performed by the robot 2 can be improved.
  • The control unit 24 generates at least one piece of module information S4, in which one of the plurality of pieces of object information S2 and one of the plurality of pieces of motion information S3 are combined. The communication unit 21 then transmits the work command S5 including the generated module information S4 to the robot 2. For this reason, the operation of the robot 2 can be controlled for each piece of module information S4, which is the minimum unit of operation.
  • Further, the work command S5 is composed of an operation program in which the plurality of pieces of module information S4 necessary for completion of the assembly 4, that is, the goal object, are sequentially connected. Therefore, the robot 2 can be controlled so as to complete the goal object.
  • Furthermore, the control unit 24 executes the object registration process and the motion registration process described above. Therefore, when the recognition unit 22 recognizes new object information S2 or motion information S3, that new object information S2 or motion information S3 can be newly stored in the storage unit 23.
  • The robot teaching method of the present embodiment, the control device 20, and the computer program for causing a computer to function as the control device 20 have substantially the same configuration as the robot teaching system 1, and therefore provide the same operational effects as the robot teaching system 1.
  • In the embodiment described above, the work information S1 includes the image information S11 of the imitation target such as the human 5, but the work information S1 does not have to include the image information S11. That is, as long as the object information S2 and the motion information S3 can be recognized, the work information S1 may include information other than the image information S11.
  • Reference signs: 1 robot teaching system; 2 first robot (robot); 2a arm; 3 part (target object); 4 assembly (goal object); 5 human; 10 acquisition device; 11 first camera; 12 second camera; 13 information acquisition unit; 20 control device; 21 communication unit; 22 recognition unit; 23 storage unit; 24 control unit; B work space; E line-of-sight direction; S1 work information; S2 object information; S3 motion information; S4 module information; S5 work command; S11 image information; S12 line-of-sight information; S13 related information; S31 grip information

Abstract

[Problem] To improve the versatility of tasks performed by a robot. [Solution] A system 1 for teaching a robot in which a control device 20 comprises: a recognition unit 22 that recognizes object information S2 representing an object included in task information S1 acquired by an acquisition device 10 and motion information S3 representing an action performed on the object; a storage unit 23 for storing the recognized object information S2 and motion information S3; a control unit 24 that generates a task command S5 on the basis of the stored object information S2 and motion information S3; and a communication unit 21 that transmits the generated task command S5 to a first robot 2.

Description

ロボット教示システム、ロボット教示方法、制御装置、及びコンピュータプログラムRobot teaching system, robot teaching method, control device, and computer program
本発明は、ロボット教示システム、ロボット教示方法、制御装置、及びコンピュータプログラムに関する。 The present invention relates to a robot teaching system, a robot teaching method, a control device, and a computer program.
従来、産業用ロボットに作業を教示する方式として、直接教示方式(ダイレクトティーチング)、遠隔教示方式(リモートティーチング)、及び間接教示方式(オフラインプログラミング)が知られている。直接教示方式は、作業者が産業用ロボットのアームを手で持ち、当該アームを手で動かしながら教示する方式である。遠隔教示方式は、ティーチングペンダントにより産業用ロボットを操作し、教示ボタン押下等により教示点を記録していく方式である。間接教示方式は、CAM(computer aided manufacturing;コンピュータ支援製造)等の支援ソフトウェアを用いて産業用ロボットに教示する方式である(例えば、特許文献1参照)。 Conventionally, direct teaching method (direct teaching), remote teaching method (remote teaching), and indirect teaching method (off-line programming) are known as methods for teaching work to industrial robots. The direct teaching method is a method in which the operator holds the arm of the industrial robot by hand and teaches while moving the arm by hand. The remote teaching method is a method of operating an industrial robot with a teaching pendant and recording a teaching point by pressing a teaching button or the like. The indirect teaching method is a method of teaching an industrial robot using support software such as CAM (computer aided manufacturing; computer aid manufacturing) (see, for example, Patent Document 1).
特開2017-27501号公報Unexamined-Japanese-Patent No. 2017-27501
従来の直接教示方式、遠隔教示方式、及び間接教示方式は、いずれも教示した動作と同じ動作でしかロボットを制御することができない。このため、ロボットが実行する作業の汎用性に欠けるという問題があった。  The conventional direct teaching method, remote teaching method, and indirect teaching method can control the robot only in the same operation as the taught operation. For this reason, there is a problem that the versatility of the work performed by the robot is lacking.
本発明は、ロボットが実行する作業の汎用性を向上させることを目的とする。 An object of the present invention is to improve the versatility of work performed by a robot.
本発明の例示的な第1発明は、ロボットに作業を教示するロボット教示システムであって、前記ロボットが追従すべき画像情報を含む作業情報を取得する取得装置と、取得された前記作業情報に基づいて前記ロボットに実行させる作業指令を生成し、生成された前記作業指令を前記ロボットに送信する制御装置と、を備え、前記制御装置は、取得された前記作業情報に含まれる対象物を表すオブジェクト情報と、前記対象物に行われた動作を表すモーション情報とを認識する認識部と、認識された前記オブジェクト情報及び前記モーション情報を記憶する記憶部と、記憶された前記オブジェクト情報及び前記モーション情報に基づいて、前記作業指令を生成する制御部と、生成された前記作業指令を前記ロボットに送信する通信部と、を有する、ロボット教示システムである。  An exemplary first invention of the present invention is a robot teaching system for teaching a robot an operation, which is an acquisition device for acquiring operation information including image information to be followed by the robot, and the acquired operation information And a control device that generates a work command to be executed by the robot based on the generated information and transmits the generated work command to the robot, and the control device represents an object included in the acquired work information. A recognition unit that recognizes object information and motion information representing an operation performed on the object, a storage unit that stores the recognized object information and the motion information, and the stored object information and the motion The control unit generates the work instruction based on the information, and the communication unit transmits the generated work instruction to the robot. It is a robot teaching system.
本発明の例示的な第2発明は、ロボットに作業を教示するロボット教示方法であって、前記ロボットが追従すべき画像情報を含む作業情報を取得する第1ステップと、取得された前記作業情報に基づいて前記ロボットに実行させる作業指令を生成し、生成された前記作業指令を前記ロボットに送信する第2ステップと、を含み、前記第2ステップは、取得された前記作業情報に含まれる対象物を表すオブジェクト情報と、前記対象物に行われた動作を表すモーション情報とを認識する認識ステップと、認識された前記オブジェクト情報及び前記モーション情報を記憶する記憶ステップと、記憶された前記オブジェクト情報及び前記モーション情報から、前記作業指令を生成する生成ステップと、生成された前記作業指令を前記ロボットに送信する送信ステップと、を含む、ロボット教示方法である。  A second exemplary invention of the present invention is a robot teaching method for teaching a robot an operation, the first step of acquiring operation information including image information to be followed by the robot, and the acquired operation information And a second step of generating a work command to be executed by the robot based on the command and transmitting the generated work command to the robot, the second step including an object included in the obtained work information A recognition step for recognizing object information representing an object and motion information representing an operation performed on the object, a storage step for storing the recognized object information and the motion information, and the stored object information And generating the work command from the motion information, and transmitting the generated work command to the robot Comprising a transmission step that the a robot teaching method.
本発明の例示的な第3発明は、ロボットが追従すべき画像情報を含む作業情報に基づいて前記ロボットに実行させる作業指令を生成し、生成された前記作業指令を前記ロボットに送信する制御装置であって、前記作業情報に含まれる対象物を表すオブジェクト情報と、前記対象物に行われた動作を表すモーション情報とを認識する認識部と、認識された前記オブジェクト情報及び前記モーション情報を記憶する記憶部と、記憶された前記オブジェクト情報及び前記モーション情報に基づいて、前記作業指令を生成する制御部と、生成された前記作業指令を前記ロボットに送信する通信部と、を備える制御装置である。  A control system according to an exemplary third aspect of the present invention generates a work command to be executed by the robot based on work information including image information to be followed by the robot, and transmits the generated work command to the robot A recognition unit that recognizes object information representing an object included in the work information and motion information representing an operation performed on the object; and stores the recognized object information and the motion information A control device comprising: a storage unit, a control unit that generates the work instruction based on the stored object information and the motion information, and a communication unit that transmits the generated work instruction to the robot is there.
本発明の例示的な第4発明は、ロボットが追従すべき画像情報を含む作業情報に基づいて前記ロボットに実行させる作業指令を生成し、生成された前記作業指令を前記ロボットに送信する制御装置としてコンピュータを機能させるコンピュータプログラムであって、前記作業情報に含まれる対象物を表すオブジェクト情報と、前記対象物に行われた動作を表すモーション情報とを認識する認識ステップと、認識された前記オブジェクト情報及び前記モーション情報を記憶する記憶ステップと、記憶された前記オブジェクト情報及び前記モーション情報に基づいて、前記作業指令を生成する生成ステップと、生成された前記作業指令を前記ロボットに送信する送信ステップと、を実行させるコンピュータプログラムである。 A control system according to an exemplary fourth aspect of the present invention generates a work command to be executed by the robot based on work information including image information to be followed by the robot, and transmits the generated work command to the robot A computer program which causes a computer to function, and recognizing object information representing an object contained in the work information and motion information representing an operation performed on the object; A step of storing information and the motion information, a step of generating the work instruction based on the stored object information and the motion information, and a transmitting step of transmitting the generated work instruction to the robot And a computer program that executes.
本発明の例示的な第1発明~第4発明によれば、ロボットが実行する作業の汎用性を向上させることができる。 According to the first to fourth inventions of the present invention, the versatility of the work performed by the robot can be improved.
本発明の実施形態として例示するロボット教示システムの全体構成を示す模式図である。It is a schematic diagram which shows the whole structure of the robot teaching system illustrated as embodiment of this invention. 制御装置の内部構成を示す機能ブロック図である。It is a functional block diagram showing an internal configuration of a control device. 制御部が実行する作業指令生成処理の一例を示す図である。It is a figure which shows an example of the work instruction | command production | generation process which a control part performs. 取得装置及び制御装置が協働して行うロボット教示方法の一例を示すフローチャートである。It is a flowchart which shows an example of the robot teaching method which an acquisition device and a control device perform in cooperation. 取得装置及び制御装置が協働して行うロボット教示方法の一例を示すフローチャートである。It is a flowchart which shows an example of the robot teaching method which an acquisition device and a control device perform in cooperation.
以下、本発明の実施形態について添付図面に基づき詳細に説明する。 [ロボット教示システムの全体構成] 図1は、本発明の実施形態として例示するロボット教示システムの全体構成を示す模式図である。ロボット教示システム1は、種々のロボットに作業を教示することができるシステムである。本実施形態では、一例として、工場の生産ラインに設置されるロボット(以下、第1ロボットという)2に作業を教示する場合について説明する。第1ロボット2は、例えば、2本のアーム2aを備えた双腕ロボットである。また、図1には、第1ロボット2に教示する作業の一例として、複数の部品3を順次組み立てて目的物である組立品4を完成させる組み立て作業を示している。  Hereinafter, an embodiment of the present invention will be described in detail based on the attached drawings. [Overall Configuration of Robot Teaching System] FIG. 1 is a schematic view showing an overall configuration of a robot teaching system exemplified as an embodiment of the present invention. The robot teaching system 1 is a system capable of teaching work to various robots. In the present embodiment, as an example, a case where a robot (hereinafter, referred to as a first robot) 2 installed on a production line of a factory is taught to work will be described. The first robot 2 is, for example, a double-arm robot provided with two arms 2a. Further, FIG. 1 shows an assembly operation for sequentially assembling a plurality of parts 3 to complete an assembly 4 as an object, as an example of an operation to teach the first robot 2.
本実施形態のロボット教示システム1は、第1ロボット2に行わせる作業を、予め人間5が行い、当該人間5が行った作業を第1ロボット2に模倣させるシステムである。ロボット教示システム1は、取得装置10と、制御装置20とを備える。  The robot teaching system 1 of the present embodiment is a system in which a human 5 performs in advance a task to be performed by the first robot 2 and causes the first robot 2 to mimic the task performed by the human 5. The robot teaching system 1 includes an acquisition device 10 and a control device 20.
なお、予め作業を行う主体(模倣対象)は、人間5に限定されず、第1ロボット2以外のロボット(以下、第2ロボットという)であってもよい。例えば、古いロボットを新しいロボットに切り替える場合、古いロボットを、模倣対象である第2ロボット(付図示)とし、古いロボットが行う作業を新しいロボットに教示させてもよい。また、第1ロボット2に教示する作業は、組み立て作業に限定されず、単純作業等の他の作業であってもよい。  The subject (imitation target) who performs the work in advance is not limited to the human 5, and may be a robot other than the first robot 2 (hereinafter referred to as a second robot). For example, when switching an old robot to a new robot, the old robot may be set as a second robot (shown in the drawing) to be imitated, and the new robot may be taught the work performed by the old robot. Moreover, the operation taught to the first robot 2 is not limited to the assembly operation, and may be another operation such as a simple operation.
[取得装置の構成] 取得装置10は、人間5が予め行った一連の作業に関する作業情報S1を取得する。本実施形態の取得装置10は、第1カメラ11と、第2カメラ12と、情報取得部13とを備える。  [Configuration of Acquisition Device] The acquisition device 10 acquires task information S1 related to a series of tasks performed by the human 5 in advance. The acquisition device 10 of the present embodiment includes a first camera 11, a second camera 12, and an information acquisition unit 13.
第1カメラ11は、例えば、人間5が作業する作業空間Bを撮影するステレオカメラである。ステレオカメラは、互いに異なる方向から作業空間Bを撮影する2台以上のカメラ(付図示)を備えている。第1カメラ11は、有線又は無線により制御装置20に接続されている。第1カメラ11が作業空間Bを撮影して取得した撮影画像は、人間5による一連の作業、つまり第1ロボット2が追従すべき作業の画像情報S11として制御装置20に送信される。  The first camera 11 is, for example, a stereo camera that captures a work space B in which a human 5 works. The stereo camera includes two or more cameras (shown in the drawing) that capture the work space B from different directions. The first camera 11 is connected to the control device 20 by wire or wirelessly. A photographed image acquired by photographing the work space B by the first camera 11 is transmitted to the control device 20 as image information S11 of a series of operations by the human 5, that is, operations to be followed by the first robot 2.
第2カメラ12は、例えば、人間5の頭部に装着されるウェアラブルカメラである。第2カメラ12は、有線又は無線により制御装置20に接続される。第2カメラ12は、人間5の視点位置付近に取り付けられ、作業時の人間5の視線方向Eに存在する物体を撮影する。図示を省略するが、人間5の頭部には、人間5の目の動きを検出する視線センサが、第2カメラ12と共に装着される。視線センサは、有線又は無線により制御装置20に接続される。  The second camera 12 is, for example, a wearable camera mounted on the head of the human 5. The second camera 12 is connected to the control device 20 by wire or wirelessly. The second camera 12 is attached near the viewpoint position of the human 5, and captures an object present in the viewing direction E of the human 5 at work. Although not shown, a gaze sensor for detecting the movement of eyes of the human 5 is attached to the head of the human 5 together with the second camera 12. The sight line sensor is connected to the control device 20 by wire or wirelessly.
第2カメラ12が視線方向Eに存在する物体を撮影して取得した撮影画像、及び視線センサが検出した検出データは、視線情報S12として制御装置20に送信される。なお、視線情報S12には、少なくとも第2カメラ12の撮影画像が含まれていればよい。したがって、視線センサを用いずに、第2カメラ12のみで視線情報S12を取得してもよい。  A photographed image acquired by photographing an object present in the line-of-sight direction E by the second camera 12 and detection data detected by the line-of-sight sensor are transmitted to the control device 20 as line-of-sight information S12. The line-of-sight information S12 may include at least a photographed image of the second camera 12. Therefore, the line-of-sight information S12 may be acquired only with the second camera 12 without using the line-of-sight sensor.
情報取得部13は、例えば、人間5の両手にそれぞれ装着される圧力センサからなる。圧力センサは、人間5が対象物(部品3及び工具等)を把持する際の圧力値を検出する。したがって、本実施形態の情報取得部13は、人間5が対象物を把持する際の圧力値を取得する。情報取得部13の検出値(圧力値)は、模倣対象(本実施形態では人間5)が対象物を把持する際の関連情報S13として制御装置20に送信される。  The information acquisition unit 13 is, for example, a pressure sensor attached to both hands of the human 5. The pressure sensor detects a pressure value when the human 5 grips the object (the part 3 and the tool, etc.). Therefore, the information acquisition unit 13 of the present embodiment acquires the pressure value when the human 5 grips the target. The detection value (pressure value) of the information acquisition unit 13 is transmitted to the control device 20 as the related information S13 when the imitation target (in this embodiment, the human 5) grips the object.
なお、情報取得部13は、圧力値を検出する圧力センサに限定されず、接触センサ等の各種センサにより、圧力分布、モーメントの方向、及びすべり角等を検出してもよい。各種センサを用いる場合、各種センサの検出値である、圧力分布、モーメントの方向、及びすべり角等は、関連情報S13として制御装置20に送信される。  The information acquisition unit 13 is not limited to a pressure sensor that detects a pressure value, and may use various sensors such as a contact sensor to detect a pressure distribution, the direction of a moment, a slip angle, and the like. When various sensors are used, pressure distribution, direction of moment, slip angle, etc., which are detection values of various sensors, are transmitted to the control device 20 as related information S13.
また、模倣対象が第2ロボットの場合には、情報取得部13を用いずに、第2ロボットから直接、関連情報S13を制御装置20に送信してもよい。第2ロボットから送信される関連情報S13は、第2ロボットが対象物を把持する際に用いる、圧力値、圧力分布、モーメントの方向、及びすべり角等を含む情報である。当該情報は、第2ロボットが外部から取得した情報であってもよいし、第2ロボット自体に予め記憶されている情報であってもよい。  Further, when the imitation target is the second robot, the related information S13 may be transmitted to the control device 20 directly from the second robot without using the information acquisition unit 13. The related information S13 transmitted from the second robot is information including a pressure value, a pressure distribution, a direction of moment, a slip angle, and the like, which is used when the second robot grips an object. The said information may be the information which the 2nd robot acquired from the exterior, and may be the information beforehand memorized by the 2nd robot itself.
以上の通り、取得装置10は、画像情報S11、視線情報S12、及び関連情報S13を、作業情報S1として取得する。そして、取得装置10は、取得した作業情報S1を制御装置20に送信する。なお、取得装置10は、少なくとも第1カメラ11を備えていればよい。すなわち、取得装置10は、第1カメラ11の画像情報S11のみを作業情報S1として取得してもよい。  As described above, the acquisition device 10 acquires the image information S11, the line-of-sight information S12, and the related information S13 as the work information S1. Then, the acquisition device 10 transmits the acquired work information S1 to the control device 20. Note that the acquisition device 10 only needs to include at least the first camera 11. That is, the acquisition device 10 may acquire only the image information S11 of the first camera 11 as the work information S1.
[制御装置の構成] 制御装置20は、取得装置10から受信した作業情報S1に基づいて第1ロボット2に実行させる作業指令S5を生成し、生成された作業指令S5を第1ロボット2に送信する。図2は、制御装置20の内部構成を示す機能ブロック図である。制御装置20は、コンピュータを備えて構成されており、有線又は無線により第1ロボット2及び取得装置10に接続されている。制御装置20は、通信部21と、認識部22と、記憶部23と、制御部24とを有する。  [Configuration of Control Device] The control device 20 generates a work command S5 to be executed by the first robot 2 based on the work information S1 received from the acquisition device 10, and transmits the generated work command S5 to the first robot 2 Do. FIG. 2 is a functional block diagram showing an internal configuration of the control device 20. As shown in FIG. The control device 20 includes a computer, and is connected to the first robot 2 and the acquisition device 10 by wire or wirelessly. The control device 20 includes a communication unit 21, a recognition unit 22, a storage unit 23, and a control unit 24.
制御部24は、例えばCPU(Central Processing Unit)からなる。制御部24は、内部バス等を介して、他のハードウェア各部21,22,23と接続されている。そして、制御部24は、他のハードウェア各部21,22,23の動作を制御する。また、制御部24は、記憶部23に記憶されたコンピュータプログラムを読み出して、様々な処理を実行する。本実施形態の制御部24は、後述する「オブジェクト登録処理」、「モーション登録処理」、及び「作業指令生成処理」を実行する。  The control unit 24 includes, for example, a central processing unit (CPU). The control unit 24 is connected to the other hardware units 21, 22, 23 via an internal bus or the like. Then, the control unit 24 controls the operation of the other hardware units 21, 22, 23. Further, the control unit 24 reads a computer program stored in the storage unit 23 and executes various processes. The control unit 24 of the present embodiment executes “object registration processing”, “motion registration processing”, and “work instruction generation processing” described later.
通信部21は、第1カメラ11、第2カメラ12、及び情報取得部13と、有線通信又は無線通信を行う通信インタフェースとして機能する。具体的には、通信部21は、取得装置10から作業情報S1を受信すると、受信した作業情報S1を認識部22に渡す。なお、模倣対象が第2ロボットの場合、通信部21は、外部である第2ロボットから、作業情報S1の関連情報S13を受信すると、受信した関連情報S13を認識部22に渡す。  The communication unit 21 functions as a communication interface that performs wired communication or wireless communication with the first camera 11, the second camera 12, and the information acquisition unit 13. Specifically, when the communication unit 21 receives the work information S1 from the acquisition device 10, the communication unit 21 passes the received work information S1 to the recognition unit 22. When the imitation target is the second robot, when the communication unit 21 receives the related information S13 of the work information S1 from the external second robot, the communication unit 21 passes the received related information S13 to the recognition unit 22.
また、通信部21は、第1ロボット2と、有線通信又は無線通信を行う通信インタフェースとしても機能する。具体的には、通信部21は、制御部21で生成され
た作業指令S5を第1ロボット2に送信する。 
The communication unit 21 also functions as a communication interface that performs wired communication or wireless communication with the first robot 2. Specifically, the communication unit 21 transmits the work instruction S5 generated by the control unit 21 to the first robot 2.
記憶部23は、ハードディスク又は半導体メモリ等の記録媒体からなる。記憶部23は、認識部22及び制御部24がそれぞれ処理を行うコンピュータプログラムを記憶している。また、記憶部23には、第1データベースDB1、及び第2データベースDB2が含まれる。  The storage unit 23 is formed of a recording medium such as a hard disk or a semiconductor memory. The storage unit 23 stores computer programs that the recognition unit 22 and the control unit 24 perform processing respectively. The storage unit 23 also includes a first database DB1 and a second database DB2.
認識部22は、例えばGPU(Graphics Processing Unit)からなる。認識部22は、記憶部23に記憶されたコンピュータプログラムを読み出して、様々な処理を実行する。本実施形態の認識部22は、通信部21が作業情報S1を受信すると、当該作業情報S1に基づいて、後述する「オブジェクト認識処理」及び「モーション認識処理」を実行する。  The recognition unit 22 includes, for example, a GPU (Graphics Processing Unit). The recognition unit 22 reads the computer program stored in the storage unit 23 and executes various processes. When the communication unit 21 receives the work information S1, the recognition unit 22 of the present embodiment executes “object recognition process” and “motion recognition process” described later based on the work information S1.
As described above, the control device 20 includes a computer, and each function of the control device 20 is realized when the CPU and GPU of the computer execute the computer programs stored in the storage unit 23. Such computer programs can be stored on a transitory or non-transitory recording medium such as a CD-ROM or a USB memory.
[Object Recognition Process] The object recognition process executed by the recognition unit 22 recognizes, from the image information S11 included in the work information S1, object information S2 representing an object of the work. For example, the recognition unit 22 recognizes an object as object information S2 using a known image recognition technique. In doing so, the recognition unit 22 also recognizes the type, shape, angle (posture), and the like of the object as part of the object information S2. The recognized object information S2 therefore contains the type, shape, angle, and the like of the object as attribute information.
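The patent does not prescribe any concrete data layout for the object information S2, but as a rough illustration, the following Python sketch (all names hypothetical) shows one way the recognized attributes named above could be recorded.

```python
# Hypothetical layout for object information S2; the patent names the
# attributes (type, shape, angle/posture) but prescribes no format.
from dataclasses import dataclass, field

@dataclass
class ObjectInfo:                        # object information S2
    kind: str                            # type, e.g. "part", "tool", "tray"
    shape: str                           # shape attribute
    angle_deg: float                     # angle (posture) attribute
    attributes: dict = field(default_factory=dict)  # e.g. assembly information

# one record per object recognized individually in image information S11
part = ObjectInfo(kind="part", shape="cylinder", angle_deg=90.0)
assembly = ObjectInfo(kind="assembly", shape="finished unit", angle_deg=0.0,
                      attributes={"parts": ["part"], "order": ["part"]})
```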
In the object recognition process, the recognition unit 22 recognizes the objects contained in the image information S11 individually. In this embodiment, the image information S11 contains, as objects, the plurality of parts 3 used in the assembly work, along with tools, jigs, trays, and the like, as well as the assembly 4 produced by the assembly work (see FIG. 3). The recognition unit 22 therefore individually recognizes the plurality of "parts," "tools," "jigs," "trays," "assemblies," and so on, each as object information S2 representing an object.
The attribute information of the object information S2 representing an "assembly" contains assembly information in addition to the type, shape, angle, and the like of the assembly 4. The assembly information is the information necessary to assemble the assembly 4; specifically, it includes the plurality of parts 3 required for assembly and the order in which those parts 3 are assembled.
Although the recognition unit 22 recognizes the assembly 4, which is the target product, as object information S2, it may instead recognize the target product as separate information distinct from the object information S2. Furthermore, a dedicated database for storing information representing recognized target products may be provided in the storage unit 23.
[Motion Recognition Process] The motion recognition process executed by the recognition unit 22 recognizes, from the image information S11 included in the work information S1, motion information S3 representing an operation performed on an object. For example, the recognition unit 22 recognizes an operation performed on an object as motion information S3 using a known motion capture technique. Specifically, the recognition unit 22 calculates the three-dimensional coordinates of each of a plurality of markers attached to the joints and other points of the human 5 from the images, captured from a plurality of different directions, contained in the image information S11.
The recognition unit 22 then recognizes the operation performed on the object as motion information S3 based on the change over time of the three-dimensional coordinates of each marker. In doing so, the recognition unit 22 also recognizes the operating speed (rotation count), operating angle, and the like of the operation as part of the motion information S3. The recognized motion information S3 therefore contains the operating speed (rotation count), operating angle, and the like of the operation performed on the object as attribute information.
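As a minimal sketch of the attribute computation described here, the following assumes a marker trajectory has already been extracted; the capture and the classification of the motion itself lie outside the sketch, and all names are illustrative.

```python
# Minimal sketch: deriving motion attributes (speed, angle) from the temporal
# change of one marker's 3-D coordinates. Assumes a uniformly sampled
# trajectory with at least two samples.
import math

def motion_attributes(trajectory: list[tuple[float, float, float]], dt: float) -> dict:
    """trajectory: (x, y, z) marker positions sampled every dt seconds."""
    speeds = [math.dist(p0, p1) / dt
              for p0, p1 in zip(trajectory, trajectory[1:])]
    # operating angle: angle swept in the x-y plane from first to last sample
    (xs, ys, _), (xe, ye, _) = trajectory[0], trajectory[-1]
    swept = math.degrees(math.atan2(ye, xe) - math.atan2(ys, xs))
    return {"mean_speed": sum(speeds) / len(speeds), "angle_deg": swept}

print(motion_attributes([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)], dt=0.1))
```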
The recognition unit 22 subdivides the image information S11 and recognizes the segments as motion information S3. That is, in the motion recognition process, the recognition unit 22 individually recognizes every operation contained in the image information S11. In this embodiment, the image information S11 contains a plurality of operations performed during the assembly work, such as "grip," "release," "turn," and "place" (see FIG. 3). The recognition unit 22 therefore individually recognizes "grip," "release," "turn," "place," and so on, each as motion information S3 representing an operation performed on an object.
In the motion recognition process, the recognition unit 22 of this embodiment includes the related information S13 as attribute information of the motion information S3 representing the "grip" operation (hereinafter, grip information S31). As described above, the related information S13 contains the pressure value detected by the information acquisition unit 13 when the human 5 grips an object. The recognition unit 22 therefore includes the pressure value from the related information S13 in the attribute information of the grip information S31, and the recognized grip information S31 is stored in the storage unit 23 in association with the related information S13. The attribute information of the grip information S31 need not include the related information S13.
In the motion recognition process, the recognition unit 22 of this embodiment recognizes motion information S3 representing an operation performed on an object not only from the image information S11 but also from the line-of-sight information S12. As described above, the line-of-sight information S12 contains the image captured by the second camera 12 of the objects present in the line-of-sight direction E of the human 5, and the detection data of the gaze sensor that detects the eye movements of the human 5.
Using a known gaze measurement technique, the recognition unit 22 recognizes, from the image of the second camera 12 and the detection data of the gaze sensor contained in the line-of-sight information S12, which position on the object the working human 5 is looking at. In this way, when recognizing a "grip" operation, for example, the recognition unit 22 can identify the grip position on the object from the position at which the human 5 is looking.
The recognition unit 22 can therefore recognize the grip information S31 more accurately by combining the "grip" operation recognized from the image information S11 with the grip position identified from the line-of-sight information S12. Note that the recognition unit 22 of this embodiment may also recognize the grip information S31 based on the image information S11 alone.
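As a sketch of how the grip information S31 could combine the pieces above (the pressure value from the related information S13 and the grip position from the line-of-sight information S12), consider the following; the gaze-measurement step itself is assumed, and all names are hypothetical.

```python
# Sketch: assembling grip information S31 from the recognized "grip" motion,
# the pressure value in related information S13, and the grip position
# identified from line-of-sight information S12. Illustrative only.
def grip_info(pressure_pa: float, gaze_point_xyz: tuple[float, float, float]) -> dict:
    return {
        "motion": "grip",                 # recognized from image information S11
        "grip_position": gaze_point_xyz,  # where the human 5 was looking (S12)
        "pressure": pressure_pa,          # from information acquisition unit 13 (S13)
    }

s31 = grip_info(pressure_pa=3200.0, gaze_point_xyz=(0.12, 0.40, 0.05))
```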
[Object Registration Process] The first database DB1 of the storage unit 23 is a database in which object information S2 is stored. Object information S2 representing known objects is stored in the first database DB1 in advance. The control unit 24 executes an "object registration process" that stores the object information S2 recognized by the recognition unit 22 in the first database DB1 according to a predetermined condition.
Specifically, the control unit 24 checks whether the object information S2 recognized by the recognition unit 22 is already stored in the first database DB1. If, as a result of this check, the recognized object information S2 is not stored in the first database DB1, the control unit 24 causes the storage unit 23 to store it in the first database DB1. In this way, a plurality of different pieces of object information S2 accumulate in the first database DB1.
[Motion Registration Process] The second database DB2 of the storage unit 23 is a database in which motion information S3 is stored. The control unit 24 executes a "motion registration process" that stores the motion information S3 recognized by the recognition unit 22 in the second database DB2 according to a predetermined condition.
Specifically, the control unit 24 checks whether the motion information S3 recognized by the recognition unit 22 is already stored in the second database DB2. If, as a result of this check, the recognized motion information S3 is not stored in the second database DB2, the control unit 24 causes the storage unit 23 to store it in the second database DB2. In this way, a plurality of different pieces of motion information S3 accumulate in the second database DB2.
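The object and motion registration processes follow the same store-only-if-absent rule, so a single helper can illustrate both; the in-memory lists below stand in for whatever recording medium realizes DB1 and DB2, and comparison by equality is an assumption, since the patent only says the control unit "checks whether" an entry is stored.

```python
# Minimal sketch of the registration processes (steps ST5/ST6 and ST7/ST8).
db1_objects: list = []   # first database DB1 (object information S2)
db2_motions: list = []   # second database DB2 (motion information S3)

def register(db: list, info) -> bool:
    """Store `info` unless an equal entry exists; True if newly accumulated."""
    if any(existing == info for existing in db):
        return False         # already registered ("Yes" branch): do nothing
    db.append(info)          # not registered ("No" branch): accumulate
    return True

register(db2_motions, "grip")
register(db2_motions, "grip")   # second call is a no-op: already stored
```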
[Work Command Generation Process] The control unit 24 executes a "work command generation process" that generates, following the work information S1, the work command S5 to be executed by the first robot 2. The work command generation process generates the work command S5 based on one or more pieces of object information S2 stored in the first database DB1 and one or more pieces of motion information S3 stored in the second database DB2.
Specifically, the control unit 24 generates at least one piece of module information S4. A piece of module information S4 combines one of the plurality of pieces of object information S2 stored in the first database DB1 with one of the plurality of pieces of motion information S3 stored in the second database DB2. When a plurality of pieces of module information S4 have been generated, the control unit 24 generates a work command S5 consisting of an operation program in which those pieces of module information S4 are linked sequentially.
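A minimal sketch of these two structures, reusing the simple string-valued records from the earlier sketches: a module pairs one stored object with one stored motion, and the work command is just their ordered chain. Names and types are illustrative, not prescribed by the patent.

```python
# Sketch of module information S4 (one object + one motion) and work command
# S5 (modules linked sequentially).
from dataclasses import dataclass

@dataclass
class ModuleInfo:            # module information S4
    obj: str                 # one piece of object information S2
    motion: str              # one piece of motion information S3

def build_work_command(modules: list[ModuleInfo]) -> list[ModuleInfo]:
    """Work command S5: the given modules linked in order (1st to N-th)."""
    return list(modules)

command = build_work_command([
    ModuleInfo("part", "grip"),    # 1st module: "grip the part"
    ModuleInfo("tool", "turn"),    # 2nd module: "turn the tool"
    ModuleInfo("tray", "place"),   # 3rd module: "place on the tray"
])
```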
FIG. 3 shows an example of the work command generation process executed by the control unit 24. The process shown in FIG. 3 generates a work command S5 that causes the first robot 2 to carry out the assembly work for the assembly 4. As shown in FIG. 3, the control unit 24 of this embodiment first acquires the assembly information from the object information S2, stored in the first database DB1, that represents the "assembly" corresponding to the assembly 4. Based on the acquired assembly information, the control unit 24 then generates a plurality of pieces of module information S4, each combining one piece of object information S2 with one piece of motion information S3.
For example, the control unit 24 combines the object information S2 representing a "part" with the motion information S3 representing "grip" to generate first module information S4, which represents the task "grip the part 3." The control unit 24 also combines the object information S2 representing a "tool" with the motion information S3 representing "turn" to generate second module information S4, which represents the task "turn the tool."
Furthermore, the control unit 24 combines the object information S2 representing a "tray" with the motion information S3 representing "place" to generate third module information S4, which represents the task "place on the tray." In this manner, the control unit 24 generates the first through N-th pieces of module information S4 necessary to complete the assembly 4, where N is an integer of 2 or more.
Next, based on the acquired assembly information, the control unit 24 generates a work command S5 consisting of an operation program in which the plurality of pieces of module information S4 are linked sequentially. Specifically, the control unit 24 generates a work command S5 consisting of an operation program that links the first through N-th pieces of module information S4 in order. The generated work command S5 is thus an operation program that causes the first robot 2 to execute, in order, each task contained in the first through N-th pieces of module information S4. The generated work command S5 is transmitted to the first robot 2 by the communication unit 21.
The work command S5 need not consist of all of the first through N-th pieces of module information S4; it may consist of only a subset (even a single piece) of them. For example, the control unit 24 can generate, as a work command for assembling part of the assembly 4, a work command S5 composed of the first through K-th (K < N) pieces of module information S4. The control unit 24 can also generate a work command S5 containing a single piece of module information S4 (for example, "turn the tool"). If the communication unit 21 transmits that work command S5 repeatedly, the first robot 2 repeats the task (turning the tool) specified by the module information S4 contained in it.
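Continuing the sketch above, a partial command is just a slice of the module list, and repetition amounts to resending a one-module command; the transmission step is stubbed, since it belongs to the communication unit 21.

```python
# Partial and repeated commands, reusing `command` from the sketch above.
partial_command = build_work_command(command[:2])   # 1st to K-th (K < N)
turn_only = build_work_command([command[1]])        # single module: "turn the tool"
for _ in range(3):                                  # repeated transmission
    pass  # a real system would send turn_only to the first robot 2 here
```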
[Robot Teaching Method] FIGS. 4 and 5 are flowcharts showing an example of the robot teaching method performed by the acquisition device 10 and the control device 20 in cooperation. The method of FIGS. 4 and 5 runs from the acquisition of the work information S1 by the acquisition device 10 to the output of the work command S5 to the first robot 2. The encircled letter A in FIG. 4 connects to the corresponding letter A in FIG. 5.
Referring to FIG. 4, the acquisition device 10 acquires the work information S1 relating to a series of tasks performed in advance by the human 5 (step ST1). Specifically, the first camera 11 photographs the work space B in which the human 5 works to acquire the image information S11; the second camera 12 photographs the objects present in the line-of-sight direction E of the human 5 during the work to acquire the line-of-sight information S12; and the information acquisition unit 13 detects the pressure value when the human 5 grips an object to acquire the related information S13.
Next, the acquisition device 10 transmits the acquired work information S1 to the control device 20 (step ST2). Specifically, the first camera 11, the second camera 12, and the information acquisition unit 13 transmit the acquired image information S11, line-of-sight information S12, and related information S13, respectively, to the communication unit 21 of the control device 20. When the transmission to the communication unit 21 is complete, the processing of the acquisition device 10 ends.
The communication unit 21 of the control device 20 receives the image information S11, the line-of-sight information S12, and the related information S13 from the acquisition device 10 as the work information S1 (step ST3), and passes the work information S1 to the recognition unit 22 of the control device 20.
Based on the work information S1 obtained from the communication unit 21, the recognition unit 22 executes the object recognition process and the motion recognition process described above. That is, the recognition unit 22 recognizes, from the work information S1, the object information S2 representing the objects and the motion information S3 representing the operations performed on the objects (step ST4). The recognition unit 22 passes the recognized object information S2 and motion information S3 to the control unit 24 of the control device 20.
Referring to FIG. 5, the control unit 24 next executes the object registration process described above. Specifically, the control unit 24 checks whether the object information S2 recognized by the recognition unit 22 is already stored in the first database DB1 (step ST5). If the recognized object information S2 is not stored in the first database DB1 ("No" in step ST5), the control unit 24 causes the storage unit 23 to store it in the first database DB1 (step ST6).
If, on the other hand, the object information S2 recognized by the recognition unit 22 is already stored in the first database DB1 ("Yes" in step ST5), the control unit 24 proceeds to step ST7, described below.
Next, the control unit 24 executes the motion registration process described above. Specifically, the control unit 24 checks whether the motion information S3 recognized by the recognition unit 22 is already stored in the second database DB2 (step ST7). If the recognized motion information S3 is not stored in the second database DB2 ("No" in step ST7), the control unit 24 causes the storage unit 23 to store it in the second database DB2 (step ST8).
If, on the other hand, the motion information S3 recognized by the recognition unit 22 is already stored in the second database DB2 ("Yes" in step ST7), the control unit 24 proceeds to step ST9, described below. The motion registration process (steps ST7 and ST8) may be executed before the object registration process (steps ST5 and ST6).
Next, the control unit 24 executes the work command generation process described above. Specifically, the control unit 24 generates a plurality of pieces of module information S4, each combining one of the plurality of pieces of object information S2 stored in the first database DB1 with one of the plurality of pieces of motion information S3 stored in the second database DB2. The control unit 24 then generates a work command S5 consisting of an operation program in which the pieces of module information S4 are linked sequentially (step ST9).
Next, the control unit 24 controls the communication unit 21 to transmit the generated work command S5 to the first robot 2. That is, the control unit 24 outputs to the communication unit 21 an instruction to transmit the generated work command S5 to the first robot 2 (step ST10). In accordance with that instruction, the communication unit 21 transmits the work command S5 to the first robot 2 (step ST11). When the transmission to the first robot 2 is complete, the processing of the control device 20 ends.
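Pulling the flowchart together, the following compressed sketch reuses the helpers from the earlier sketches; recognition (step ST4) is reduced to reading pre-recognized lists out of the work information, since the actual recognition techniques are assumed rather than implemented.

```python
# Hypothetical end-to-end sketch of steps ST4-ST11. `work_information` is a
# plain dict standing in for work information S1 after recognition.
def teach_and_command(work_information: dict) -> list[ModuleInfo]:
    objects = work_information["objects"]   # ST4: object recognition (assumed done)
    motions = work_information["motions"]   # ST4: motion recognition (assumed done)
    for obj in objects:
        register(db1_objects, obj)          # ST5/ST6: object registration
    for motion in motions:
        register(db2_motions, motion)       # ST7/ST8: motion registration
    modules = [ModuleInfo(o, m) for o, m in zip(objects, motions)]  # ST9
    return build_work_command(modules)      # handed to communication: ST10/ST11

cmd = teach_and_command({"objects": ["part", "tool"], "motions": ["grip", "turn"]})
```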
Step ST1 is a first step, in the robot teaching method of the present invention, of acquiring the work information S1 including the image information S11 that the robot 2 should follow. Steps ST4 through ST11 are a second step, in the robot teaching method of the present invention, of generating the work command S5 to be executed by the robot (the first robot 2) based on the acquired work information S1 and transmitting the generated work command S5 to the robot.
Step ST4 is a recognition step, in the robot teaching method of the present invention, of recognizing the object information S2 representing the objects contained in the acquired work information S1 and the motion information S3 representing the operations performed on the objects. Steps ST5 through ST8 are a storage step, in the robot teaching method of the present invention, of storing the recognized object information S2 and motion information S3.
Step ST9 is a generation step, in the robot teaching method of the present invention, of generating the work command S5 from the stored object information S2 and motion information S3. Step ST11 is a transmission step, in the robot teaching method of the present invention, of transmitting the generated work command S5 to the robot 2.
Step ST4 is a recognition step, in the computer program of the present invention, of recognizing the object information S2 representing the objects contained in the work information S1 and the motion information S3 representing the operations performed on the objects. Steps ST5 through ST8 are a storage step, in the computer program of the present invention, of storing the recognized object information S2 and motion information S3.
Step ST9 is a generation step, in the computer program of the present invention, of generating the work command S5 based on the stored object information S2 and motion information S3. Step ST11 is a transmission step, in the computer program of the present invention, of transmitting the generated work command S5 to the robot 2.
[Operation and Effects] In the robot teaching system 1 of this embodiment described above, the recognition unit 22 recognizes the object information S2 representing the objects contained in the acquired work information S1 and the motion information S3 representing the operations performed on the objects. The recognized object information S2 and motion information S3 are stored in the storage unit 23. The control unit 24 then generates the work command S5 based on the stored object information S2 and motion information S3, and the communication unit 21 transmits the generated work command S5 to the robot 2. Accordingly, by arbitrarily changing the combinations of object information S2 and motion information S3, a work command S5 different from the work information S1 acquired by the acquisition device 10 can be transmitted to the robot 2. Even when the acquisition device 10 acquires only a small amount of work information S1, the versatility of the work performed by the robot 2 can therefore be improved.
In the robot teaching system 1 of this embodiment, the control unit 24 also generates at least one piece of module information S4 combining one of the plurality of pieces of object information S2 with one of the plurality of pieces of motion information S3, and the communication unit 21 transmits a work command S5 containing the generated module information S4 to the robot 2. The operation of the robot 2 can therefore be controlled per piece of module information S4, the minimum unit of operation.
In the robot teaching system 1 of this embodiment, the work command S5 also consists of an operation program in which a plurality of pieces of module information S4 necessary to complete the assembly 4, the target product, are linked sequentially. The robot 2 can therefore be controlled so as to complete the target product.
In the robot teaching system 1 of this embodiment, the control unit 24 also executes the object registration process and motion registration process described above. When the recognition unit 22 recognizes new object information S2 or motion information S3, that new information can therefore be newly accumulated in the storage unit 23.
Furthermore, the robot teaching method of this embodiment, the control device 20, and the computer program that causes a computer to function as the control device 20 have substantially the same configuration as the robot teaching system 1, and therefore provide the same operation and effects as the robot teaching system 1.
[Others] In the robot teaching system 1 of this embodiment, the work information S1 includes the image information S11 of the imitation target such as the human 5, but the image information S11 need not be included. That is, any information from which the object information S2 and the motion information S3 can be recognized may be included in the work information S1 in place of the image information S11.
The embodiments disclosed herein should be considered in all respects as illustrative and not restrictive. The scope of the present invention is indicated not by the foregoing description but by the claims, and is intended to include all modifications within the meaning and scope equivalent to the claims.
DESCRIPTION OF REFERENCE NUMERALS: 1 robot teaching system, 2 first robot (robot), 2a arm, 3 part (object), 4 assembly (target product), 5 human, 10 acquisition device, 11 first camera, 12 second camera, 13 information acquisition unit, 20 control device, 21 communication unit, 22 recognition unit, 23 storage unit, 24 control unit, B work space, E line-of-sight direction, S1 work information, S2 object information, S3 motion information, S4 module information, S5 work command, S11 image information, S12 line-of-sight information, S13 related information, S31 grip information

Claims (16)

1. A robot teaching system for teaching work to a robot, comprising: an acquisition device that acquires work information including image information that the robot should follow; and a control device that generates a work command to be executed by the robot based on the acquired work information and transmits the generated work command to the robot, wherein the control device comprises: a recognition unit that recognizes object information representing an object contained in the acquired work information and motion information representing an operation performed on the object; a storage unit that stores the recognized object information and the motion information; a control unit that generates the work command based on the stored object information and motion information; and a communication unit that transmits the generated work command to the robot.
2. The robot teaching system according to claim 1, wherein the control unit generates at least one piece of module information combining one of a plurality of pieces of the stored object information with one of a plurality of pieces of the stored motion information, and the communication unit transmits the work command containing the generated module information to the robot.
3. The robot teaching system according to claim 2, wherein the work command consists of an operation program in which a plurality of pieces of the module information necessary to complete a target product are linked sequentially.
4. The robot teaching system according to any one of claims 1 to 3, wherein the control unit executes an object registration process of causing the storage unit to store the recognized object information when the storage unit does not store that object information, and a motion registration process of causing the storage unit to store the recognized motion information when the storage unit does not store that motion information.
5. A robot teaching method for teaching work to a robot, comprising: a first step of acquiring work information including image information that the robot should follow; and a second step of generating a work command to be executed by the robot based on the acquired work information and transmitting the generated work command to the robot, wherein the second step includes: a recognition step of recognizing object information representing an object contained in the acquired work information and motion information representing an operation performed on the object; a storage step of storing the recognized object information and the motion information; a generation step of generating the work command from the stored object information and motion information; and a transmission step of transmitting the generated work command to the robot.
6. The robot teaching method according to claim 5, wherein the generation step generates at least one piece of module information combining one of a plurality of pieces of the stored object information with one of a plurality of pieces of the stored motion information, and the transmission step transmits the work command containing the generated module information to the robot.
7. The robot teaching method according to claim 6, wherein the work command consists of an operation program in which a plurality of pieces of the module information necessary to complete a target product are linked sequentially.
8. The robot teaching method according to any one of claims 5 to 7, wherein the storage step executes an object registration process of storing the recognized object information when that object information is not yet stored, and a motion registration process of storing the recognized motion information when that motion information is not yet stored.
9. A control device that generates a work command to be executed by a robot based on work information including image information that the robot should follow, and transmits the generated work command to the robot, the control device comprising: a recognition unit that recognizes object information representing an object contained in the work information and motion information representing an operation performed on the object; a storage unit that stores the recognized object information and the motion information; a control unit that generates the work command based on the stored object information and motion information; and a communication unit that transmits the generated work command to the robot.
10. The control device according to claim 9, wherein the control unit generates at least one piece of module information combining one of a plurality of pieces of the stored object information with one of a plurality of pieces of the stored motion information, and the communication unit transmits the work command containing the generated module information to the robot.
11. The control device according to claim 10, wherein the work command consists of an operation program in which a plurality of pieces of the module information necessary to complete a target product are linked sequentially.
12. The control device according to any one of claims 9 to 11, wherein the control unit executes an object registration process of causing the storage unit to store the recognized object information when the storage unit does not store that object information, and a motion registration process of causing the storage unit to store the recognized motion information when the storage unit does not store that motion information.
13. A computer program that causes a computer to function as a control device that generates a work command to be executed by a robot based on work information including image information that the robot should follow, and transmits the generated work command to the robot, the computer program causing the computer to execute: a recognition step of recognizing object information representing an object contained in the work information and motion information representing an operation performed on the object; a storage step of storing the recognized object information and the motion information; a generation step of generating the work command based on the stored object information and motion information; and a transmission step of transmitting the generated work command to the robot.
14. The computer program according to claim 13, wherein the generation step generates at least one piece of module information combining one of a plurality of pieces of the stored object information with one of a plurality of pieces of the stored motion information, and the transmission step transmits the work command containing the generated module information to the robot.
15. The computer program according to claim 14, wherein the work command consists of an operation program in which a plurality of pieces of the module information necessary to complete a target product are linked sequentially.
16. The computer program according to any one of claims 13 to 15, wherein the storage step executes an object registration process of storing the recognized object information when that object information is not yet stored, and a motion registration process of storing the recognized motion information when that motion information is not yet stored.
PCT/JP2018/023728 2017-09-28 2018-06-22 System for teaching robot, method for teaching robot, control device, and computer program WO2019064751A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017187396 2017-09-28
JP2017-187396 2017-09-28

Publications (1)

Publication Number Publication Date
WO2019064751A1 true WO2019064751A1 (en) 2019-04-04

Family

ID=65901274

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/023728 WO2019064751A1 (en) 2017-09-28 2018-06-22 System for teaching robot, method for teaching robot, control device, and computer program

Country Status (1)

Country Link
WO (1) WO2019064751A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112091962A (en) * 2019-06-18 2020-12-18 株式会社大亨 Robot control device and robot control system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002301676A (en) * 2001-04-02 2002-10-15 Sony Corp Robot device, information providing method, program and storage medium
JP2008068348A (en) * 2006-09-13 2008-03-27 National Institute Of Advanced Industrial & Technology Robot work teaching system and work teaching method for robot
JP2011131376A (en) * 2003-11-13 2011-07-07 Japan Science & Technology Agency Robot drive system and robot drive program
JP2011200997A (en) * 2010-03-26 2011-10-13 Kanto Auto Works Ltd Teaching device and method for robot
JP2013158887A (en) * 2012-02-07 2013-08-19 Seiko Epson Corp Teaching device, robot, robot device, and teaching method
WO2016181572A1 (en) * 2015-05-11 2016-11-17 株式会社安川電機 Dispensing system, controller, and control method
JP6038417B1 (en) * 2016-01-29 2016-12-07 三菱電機株式会社 Robot teaching apparatus and robot control program creating method


Similar Documents

Publication Publication Date Title
US11195041B2 (en) Generating a model for an object encountered by a robot
US11541545B2 (en) Information processing apparatus, information processing method, and system
US10919152B1 (en) Teleoperating of robots with tasks by mapping to human operator pose
CN105666505B (en) Robot system having display for augmented reality
US8244402B2 (en) Visual perception system and method for a humanoid robot
CN110662631B (en) Control device, robot control method, and robot control system
EP3486041A3 (en) Gripping system, learning device, and gripping method
KR20120027253A (en) Object-learning robot and method
JP6777670B2 (en) A robot system that uses image processing to correct robot teaching
JP7000253B2 (en) Force visualization device, robot and force visualization program
CN111319026A (en) Immersive human-simulated remote control method for double-arm robot
Çoban et al. Wireless teleoperation of an industrial robot by using myo arm band
JP2008168372A (en) Robot device and shape recognition method
WO2019064751A1 (en) System for teaching robot, method for teaching robot, control device, and computer program
WO2019064752A1 (en) System for teaching robot, method for teaching robot, control device, and computer program
JP6067547B2 (en) Object recognition device, robot, and object recognition method
JP6455869B2 (en) Robot, robot system, control device, and control method
US11461867B2 (en) Visual interface and communications techniques for use with robots
KR20190091870A (en) Robot control system using motion sensor and VR
WO2023037966A1 (en) System and method for control of robot avatar by plurality of persons
JP2013173209A (en) Robot apparatus and control method of the same, and computer program
JP2023175331A (en) Robot teaching method and robot teaching device
Park et al. Robot-based Object Pose Auto-annotation System for Dexterous Manipulation
JP2022157119A (en) Robot remote operation control device, robot remote operation control system, robot remote operation control method and program
JP2022155623A (en) Robot remote operation control device, robot remote operation control system, robot remote operation control method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18863403

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18863403

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP