WO2019064752A1 - System for teaching robot, method for teaching robot, control device, and computer program - Google Patents

System for teaching robot, method for teaching robot, control device, and computer program

Info

Publication number: WO2019064752A1
Authority: WIPO (PCT)
Prior art keywords: information, work, robot, motion, unit
Application number: PCT/JP2018/023729
Other languages: French (fr), Japanese (ja)
Inventors: 一宏 佐齋, 西岡 澄人
Original assignee: 日本電産株式会社 (Nidec Corporation)
Application filed by 日本電産株式会社
Publication of WO2019064752A1

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 - Programme-control systems
    • G05B19/02 - Programme-control systems electric
    • G05B19/42 - Recording and playback systems, i.e. in which the programme is recorded from a cycle of operations, e.g. the cycle of operations being manually controlled, after which this record is played back on the same machine

Definitions

  • The present invention relates to a robot teaching system, a robot teaching method, a control device, and a computer program.
  • Conventionally, three teaching methods are known for teaching work to industrial robots: the direct teaching method (direct teaching), the remote teaching method (remote teaching), and the indirect teaching method (off-line programming).
  • In the direct teaching method, the operator holds an arm of the industrial robot and teaches the robot while moving the arm by hand.
  • In the remote teaching method, the operator drives the industrial robot with a teaching pendant and records teaching points by pressing a teaching button or the like.
  • In the indirect teaching method, the industrial robot is taught using support software such as CAM (computer-aided manufacturing) (see, for example, Patent Document 1).
  • The conventional direct, remote, and indirect teaching methods all share the problem that teaching a task to an industrial robot takes considerable time and effort.
  • In view of the above situation, an object of the present invention is to make it possible to easily teach a robot a desired task.
  • A first exemplary invention is a robot teaching system for teaching a task to a first robot, comprising: an acquisition device that acquires work information including image information of a series of tasks performed by an imitation target, the imitation target being a working human or a working second robot other than the first robot; and a control device that generates a work command following the acquired work information and transmits the generated work command to the first robot.
  • A second exemplary invention is a robot teaching method for teaching a task to a first robot, comprising: a first step of acquiring work information including image information of a series of tasks performed by an imitation target, the imitation target being a working human or a working second robot other than the first robot; a second step of generating a work command following the acquired work information; and a third step of transmitting the generated work command to the first robot.
  • A third exemplary invention is a control device for teaching a task to a first robot, comprising: a communication unit that receives work information including image information of a series of tasks performed by an imitation target, the imitation target being a working human or a working second robot other than the first robot; and a control unit that generates a work command following the received work information and controls the communication unit so as to transmit the generated work command to the first robot.
  • A fourth exemplary invention is a computer program that causes a computer to function as a control device for teaching a task to a first robot, the program causing the computer to execute: a first step of receiving work information including image information of a series of tasks performed by an imitation target, the imitation target being a working human or a working second robot other than the first robot; a second step of generating a work command following the received work information; and a third step of controlling communication so as to transmit the generated work command to the first robot.
  • According to the first to fourth exemplary inventions, a desired task can be easily taught to a robot.
  • FIG. 1 is a schematic view showing the overall configuration of a robot teaching system exemplified as an embodiment of the present invention.
  • The robot teaching system 1 is a system capable of teaching work to various robots. In the present embodiment, as an example, work is taught to a robot installed on a factory production line (hereinafter referred to as the first robot) 2.
  • The first robot 2 is, for example, a dual-arm robot provided with two arms 2a.
  • FIG. 1 shows, as an example of the work taught to the first robot 2, an assembly task in which a plurality of parts 3 are assembled in sequence to complete an assembly 4, the target product.
  • The robot teaching system 1 of the present embodiment is a system in which a human 5 performs in advance the task that the first robot 2 is to carry out, and the first robot 2 is then made to imitate the task performed by the human 5.
  • The robot teaching system 1 includes an acquisition device 10 and a control device 20.
  • The subject that performs the task in advance (the imitation target) is not limited to the human 5 and may be a robot other than the first robot 2 (hereinafter referred to as the second robot).
  • For example, when replacing an old robot with a new one, the old robot may serve as the second robot (not shown) to be imitated, and the new robot may be taught the task the old robot performs.
  • The work taught to the first robot 2 is not limited to assembly work and may be other work, such as simple tasks.
  • The acquisition device 10 acquires work information S1 relating to the series of tasks performed in advance by the human 5.
  • The acquisition device 10 of the present embodiment includes a first camera 11, a second camera 12, and an information acquisition unit 13.
  • The first camera 11 is, for example, a stereo camera that captures the work space B in which the human 5 works.
  • The stereo camera comprises two or more cameras (not shown) that capture the work space B from mutually different directions.
  • The first camera 11 is connected to the control device 20 by wire or wirelessly.
  • The images captured of the work space B by the first camera 11 are transmitted to the control device 20 as image information S11 of the series of tasks performed by the human 5, that is, of the tasks the first robot 2 is to imitate.
  • The second camera 12 is, for example, a wearable camera mounted on the head of the human 5.
  • The second camera 12 is connected to the control device 20 by wire or wirelessly.
  • The second camera 12 is attached near the viewpoint of the human 5 and captures objects present in the line-of-sight direction E of the human 5 during work.
  • A line-of-sight sensor that detects the movement of the eyes of the human 5 is attached to the head of the human 5 together with the second camera 12 (not shown).
  • The line-of-sight sensor is connected to the control device 20 by wire or wirelessly.
  • The images captured by the second camera 12 of objects present in the line-of-sight direction E, together with the detection data of the line-of-sight sensor, are transmitted to the control device 20 as line-of-sight information S12.
  • The line-of-sight information S12 need only include at least the captured images of the second camera 12; the line-of-sight information S12 may therefore be acquired with the second camera 12 alone, without using the line-of-sight sensor.
  • The information acquisition unit 13 comprises, for example, pressure sensors attached to both hands of the human 5.
  • The pressure sensors detect the pressure value when the human 5 grips an object (a part 3, a tool, or the like); the information acquisition unit 13 of the present embodiment thus acquires the pressure value applied when the human 5 grips the object.
  • The detection value (pressure value) of the information acquisition unit 13 is transmitted to the control device 20 as related information S13 about the moment the imitation target (the human 5 in this embodiment) grips the object.
  • The information acquisition unit 13 is not limited to pressure sensors that detect a pressure value; various sensors such as contact sensors may be used to detect a pressure distribution, the direction of a moment, a slip angle, and the like. When such sensors are used, their detection values (pressure distribution, moment direction, slip angle, and so on) are transmitted to the control device 20 as the related information S13.
  • When the imitation target is the second robot, the related information S13 may be transmitted to the control device 20 directly from the second robot, without using the information acquisition unit 13.
  • The related information S13 transmitted from the second robot includes the pressure value, pressure distribution, moment direction, slip angle, and so on that the second robot uses when gripping an object.
  • This information may have been acquired by the second robot from an external source, or may be stored in advance in the second robot itself.
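  • The patent lists these grip-related quantities without prescribing a data format. As a rough illustration only, they could be bundled into a record such as the following minimal Python sketch (all field names are hypothetical):

      from dataclasses import dataclass

      @dataclass
      class RelatedInfo:
          """Related information S13 sampled while the imitation target grips an object."""
          pressure: float                                       # pressure value
          pressure_map: list[float] | None = None               # optional pressure distribution
          moment_dir: tuple[float, float, float] | None = None  # direction of moment
          slip_angle_deg: float | None = None                   # slip angle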
  • As described above, the acquisition device 10 acquires the image information S11, the line-of-sight information S12, and the related information S13 as the work information S1, and transmits the acquired work information S1 to the control device 20. Note that the acquisition device 10 only needs to include at least the first camera 11; that is, the acquisition device 10 may acquire only the image information S11 of the first camera 11 as the work information S1.
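  • Under the same caveat, the complete work information S1 could be represented as follows (an illustrative sketch reusing the RelatedInfo record above; all names are hypothetical):

      from dataclasses import dataclass, field

      import numpy as np

      @dataclass
      class WorkInfo:
          """Work information S1 gathered by the acquisition device 10."""
          # S11: frames of the work space B from the stereo camera (first
          # camera 11), one image array per viewpoint and time step.
          image_frames: list[np.ndarray]
          # S12: frames from the wearable camera (second camera 12), plus
          # optional eye-movement samples from the line-of-sight sensor.
          gaze_frames: list[np.ndarray] = field(default_factory=list)
          gaze_samples: np.ndarray | None = None
          # S13: grip-related sensor readings, one record per grip event.
          related: list[RelatedInfo] = field(default_factory=list)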
  • The control device 20 generates, based on the work information S1 received from the acquisition device 10, a work command S5 to be executed by the first robot 2, and transmits the generated work command S5 to the first robot 2.
  • FIG. 2 is a functional block diagram showing the internal configuration of the control device 20.
  • The control device 20 is built around a computer and is connected to the first robot 2 and the acquisition device 10 by wire or wirelessly.
  • The control device 20 includes a communication unit 21, a recognition unit 22, a storage unit 23, and a control unit 24.
  • The control unit 24 comprises, for example, a CPU (central processing unit).
  • The control unit 24 is connected to the other hardware units 21, 22, and 23 via an internal bus or the like, controls their operation, and reads the computer programs stored in the storage unit 23 to execute various processes.
  • The control unit 24 of the present embodiment executes the "object registration process", "motion registration process", and "work command generation process" described later.
  • The communication unit 21 functions as a communication interface that performs wired or wireless communication with the first camera 11, the second camera 12, and the information acquisition unit 13. Specifically, when the communication unit 21 receives the work information S1 from the acquisition device 10, it passes the received work information S1 to the recognition unit 22. When the imitation target is the second robot, the communication unit 21 receives the related information S13 of the work information S1 from the external second robot and passes the received related information S13 to the recognition unit 22.
  • The communication unit 21 also functions as a communication interface that performs wired or wireless communication with the first robot 2. Specifically, the communication unit 21 transmits the work command S5 generated by the control unit 24 to the first robot 2.
  • The storage unit 23 is formed of a recording medium such as a hard disk or a semiconductor memory.
  • The storage unit 23 stores the computer programs with which the recognition unit 22 and the control unit 24 each perform their processing.
  • The storage unit 23 also contains a first database DB1 and a second database DB2.
  • The recognition unit 22 comprises, for example, a GPU (graphics processing unit).
  • The recognition unit 22 reads the computer programs stored in the storage unit 23 and executes various processes.
  • When the communication unit 21 receives the work information S1, the recognition unit 22 of the present embodiment executes the "object recognition process" and the "motion recognition process" described later based on that work information S1.
  • As described above, the control device 20 is built around a computer, and each function of the control device 20 is realized when the computer programs stored in the storage unit 23 are executed by the computer's CPU and GPU.
  • Such computer programs can be stored on a transitory or non-transitory recording medium such as a CD-ROM or a USB memory.
  • The object recognition process executed by the recognition unit 22 is a process of recognizing, from the image information S11 included in the work information S1, object information S2 representing the objects involved in the work.
  • For example, the recognition unit 22 recognizes an object as object information S2 using known image recognition technology.
  • In doing so, the recognition unit 22 also recognizes the type, shape, angle (posture), and the like of the object, so the recognized object information S2 contains the object's type, shape, angle, and so on as attribute information.
  • In the object recognition process, the recognition unit 22 recognizes each object contained in the image information S11 individually.
  • The image information S11 of the present embodiment contains, as objects, the plurality of parts 3, tools, jigs, trays, and the like used in the assembly task, as well as the assembly 4 produced by that task (see FIG. 3). The recognition unit 22 therefore individually recognizes the "parts", "tools", "jigs", "trays", "assembly", and so on, each as object information S2 representing an object.
  • The attribute information of the object information S2 representing the "assembly" contains assembly information in addition to the type, shape, and angle of the assembly 4.
  • The assembly information is the information needed to assemble the assembly 4; specifically, it includes the plurality of parts 3 required for assembly, the order in which those parts 3 are assembled, and so on.
  • The recognition unit 22 recognizes the assembly 4, which is the target product, as object information S2, but it may instead recognize the assembly 4 as information representing the target product, separate from the object information S2. Furthermore, a dedicated database storing the information representing the recognized target product may be provided in the storage unit 23.
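  • As a rough sketch of how this attribute information could be organized (hypothetical names; the patent specifies only which attributes exist, not their layout):

      from dataclasses import dataclass

      @dataclass
      class AssemblyInfo:
          """Assembly information carried only by "assembly" objects."""
          required_parts: list[str]   # the parts 3 needed to build the assembly 4
          assembly_order: list[str]   # the order in which those parts are assembled

      @dataclass
      class ObjectInfo:
          """Object information S2 recognized from the image information S11."""
          kind: str                             # e.g. "part", "tool", "jig", "tray", "assembly"
          shape: str                            # recognized shape descriptor
          angle_deg: float                      # recognized angle (posture)
          assembly: AssemblyInfo | None = None  # present only when kind == "assembly"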
  • The motion recognition process executed by the recognition unit 22 is a process of recognizing, from the image information S11 included in the work information S1, motion information S3 representing the motions performed on the objects.
  • For example, the recognition unit 22 recognizes a motion performed on an object as motion information S3 using known motion-capture technology.
  • Specifically, the recognition unit 22 calculates the three-dimensional coordinates of each of a plurality of markers attached to the joints and other points of the human 5, from the images captured from a plurality of different directions that are contained in the image information S11.
  • The recognition unit 22 then recognizes the motion performed on the object as motion information S3 based on the change over time of the three-dimensional coordinates of each marker. In doing so, the recognition unit 22 also recognizes the motion speed (rotational speed), motion angle, and the like of the motion performed on the object, so the recognized motion information S3 contains these as attribute information.
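  • As an illustrative sketch of this step (the patent only states that known motion-capture techniques are used; the computation below is a hypothetical stand-in):

      import numpy as np

      def recognize_motion(marker_xyz: np.ndarray, fps: float) -> dict:
          """Derive motion attributes from marker trajectories.

          marker_xyz: array of shape (frames, markers, 3) holding the 3-D marker
          coordinates triangulated from the multi-view images in S11.
          """
          velocity = np.diff(marker_xyz, axis=0) * fps             # per-frame velocities
          speed = float(np.linalg.norm(velocity, axis=-1).mean())  # mean motion speed
          # "Motion angle" here: net displacement direction of the first marker
          # in the horizontal plane, an illustrative stand-in for the patent's
          # operation-angle attribute.
          disp = marker_xyz[-1, 0] - marker_xyz[0, 0]
          angle_deg = float(np.degrees(np.arctan2(disp[1], disp[0])))
          # The motion label ("grip", "release", "turn", "place") would come
          # from a classifier over the trajectories; omitted here.
          return {"label": "unknown", "speed": speed, "angle_deg": angle_deg}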
  • The recognition unit 22 subdivides the image information S11 and recognizes the pieces as motion information S3; that is, in the motion recognition process, the recognition unit 22 individually recognizes every motion contained in the image information S11.
  • The image information S11 of the present embodiment contains a plurality of motions performed during the assembly task, such as "grip", "release", "turn", and "place" (see FIG. 3). The recognition unit 22 therefore individually recognizes the "grip", "release", "turn", "place", and other motions performed in the assembly task, each as motion information S3 representing a motion performed on an object.
  • In the motion recognition process, the recognition unit 22 of the present embodiment includes the related information S13 in the attribute information of the motion information S3 representing the "grip" motion (hereinafter referred to as grip information S31).
  • The related information S13 contains the pressure value detected by the information acquisition unit 13 when the human 5 grips the object; the recognition unit 22 therefore includes that pressure value in the attribute information of the grip information S31. The recognized grip information S31 is thereby stored in the storage unit 23 in association with the related information S13.
  • Note that the attribute information of the grip information S31 need not include the related information S13.
  • The recognition unit 22 also recognizes motion information S3 representing motions performed on objects from the line-of-sight information S12, in addition to the image information S11.
  • The line-of-sight information S12 contains the images captured by the second camera 12 of objects present in the line-of-sight direction E of the human 5, and the detection data of the line-of-sight sensor that detected the eye movement of the human 5.
  • Using known gaze-measurement technology, the recognition unit 22 determines from these captured images and detection data which position on the object the working human 5 is looking at. For example, when recognizing a "grip" motion on an object, the recognition unit 22 can thereby identify the grip position on the object from where the human 5 is looking.
  • By combining the "grip" motion recognized from the image information S11 with the grip position identified from the line-of-sight information S12, the recognition unit 22 can recognize the grip information S31 more accurately. Note that the recognition unit 22 of the present embodiment may also recognize the grip information S31 based on the image information S11 alone.
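  • A crude stand-in for this gaze-to-grip-position step might look as follows (hypothetical; the patent defers to known gaze-measurement techniques):

      import numpy as np

      def grip_position_from_gaze(gaze_dir: np.ndarray,
                                  object_points: np.ndarray) -> np.ndarray:
          """Pick the object point the worker is looking at as the likely grip position.

          gaze_dir: gaze ray in camera coordinates.
          object_points: (N, 3) object surface points in the same camera frame.
          """
          dirs = object_points / np.linalg.norm(object_points, axis=1, keepdims=True)
          gaze = gaze_dir / np.linalg.norm(gaze_dir)
          # Choose the point whose viewing direction best matches the gaze ray.
          return object_points[int(np.argmax(dirs @ gaze))]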
  • The first database DB1 of the storage unit 23 is a database in which object information S2 is stored. Object information S2 representing known objects may be stored in it in advance.
  • The control unit 24 executes an "object registration process" that stores the object information S2 recognized by the recognition unit 22 in the first database DB1 according to a predetermined condition.
  • Specifically, the control unit 24 checks whether the object information S2 recognized by the recognition unit 22 is already stored in the first database DB1. If, as a result of this check, the recognized object information S2 is not yet stored in the first database DB1, the control unit 24 stores it in the first database DB1 of the storage unit 23. A plurality of distinct pieces of object information S2 are thereby accumulated in the first database DB1.
  • The second database DB2 of the storage unit 23 is a database in which motion information S3 is stored.
  • The control unit 24 executes a "motion registration process" that stores the motion information S3 recognized by the recognition unit 22 in the second database DB2 according to a predetermined condition.
  • Specifically, the control unit 24 checks whether the motion information S3 recognized by the recognition unit 22 is already stored in the second database DB2. If, as a result of this check, the recognized motion information S3 is not yet stored in the second database DB2, the control unit 24 stores it in the second database DB2 of the storage unit 23. A plurality of distinct pieces of motion information S3 are thereby accumulated in the second database DB2.
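  • Both registration processes follow the same check-then-store pattern; a minimal sketch (hypothetical names, since the patent does not prescribe a storage API):

      def register_if_new(database: dict, key: str, info: object) -> bool:
          """Store info under key unless an equivalent entry already exists.

          Models the object and motion registration processes: the control
          unit 24 collates the recognized information against the database
          and stores it only when it is not yet present, so only distinct
          entries accumulate.
          """
          if key in database:   # already registered: nothing to store
              return False
          database[key] = info  # accumulate a new distinct entry
          return True

      # DB1 accumulates distinct object information S2, DB2 distinct motion
      # information S3 (both keyed here by a simple label for illustration).
      db1: dict[str, object] = {}
      db2: dict[str, object] = {}
      register_if_new(db1, "part", {"kind": "part"})
      register_if_new(db2, "grip", {"label": "grip"})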
  • The control unit 24 also executes a "work command generation process" that generates the work command S5 the first robot 2 is to execute, following the work information S1.
  • The work command generation process generates a work command S5 based on one or more pieces of object information S2 stored in the first database DB1 and one or more pieces of motion information S3 stored in the second database DB2.
  • Specifically, the control unit 24 first generates at least one piece of module information S4.
  • The module information S4 is information combining one of the plurality of pieces of object information S2 stored in the first database DB1 with one of the plurality of pieces of motion information S3 stored in the second database DB2. When a plurality of pieces of module information S4 are generated, the control unit 24 generates a work command S5 consisting of an operation program in which those pieces of module information S4 are connected in sequence.
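  • A minimal sketch of this pairing and chaining, under the same assumptions as the sketches above (hypothetical names):

      from dataclasses import dataclass

      @dataclass
      class ModuleInfo:
          """Module information S4: one piece of object information S2 paired
          with one piece of motion information S3."""
          object_key: str   # key into the first database DB1
          motion_key: str   # key into the second database DB2

      def generate_work_command(modules: list[ModuleInfo]) -> list[tuple[str, str]]:
          """Work command S5: an operation program chaining the modules in order."""
          return [(m.object_key, m.motion_key) for m in modules]

      # e.g. "grip the part", then "turn the tool", then "place on the tray"
      modules = [
          ModuleInfo("part", "grip"),
          ModuleInfo("tool", "turn"),
          ModuleInfo("tray", "place"),
      ]
      program = generate_work_command(modules)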
  • FIG. 3 is a diagram illustrating an example of the work command generation process executed by the control unit 24.
  • The work command generation process shown in FIG. 3 generates a work command S5 that causes the first robot 2 to execute the assembly task for the assembly 4.
  • The control unit 24 of the present embodiment acquires the assembly information from the object information S2 stored in the first database DB1 that represents the "assembly" corresponding to the assembly 4. Based on the acquired assembly information, the control unit 24 then generates a plurality of pieces of module information S4, each combining one piece of object information S2 with one piece of motion information S3.
  • For example, the control unit 24 combines the object information S2 representing a "part" with the motion information S3 representing "grip" to generate the first module information S4, which represents the operation "grip the part 3".
  • Likewise, the control unit 24 combines the object information S2 representing a "tool" with the motion information S3 representing "turn" to generate the second module information S4, which represents the operation "turn the tool".
  • The control unit 24 also combines the object information S2 representing a "tray" with the motion information S3 representing "place" to generate the third module information S4, which represents the operation "place on the tray".
  • In this way, the control unit 24 generates the first to N-th pieces of module information S4 needed to complete the assembly 4, where N is an integer of 2 or more.
  • Based on the acquired assembly information, the control unit 24 generates a work command S5 consisting of an operation program in which the plurality of pieces of module information S4 are connected in sequence. Specifically, the control unit 24 generates a work command S5 consisting of an operation program in which the first to N-th pieces of module information S4 are connected in order. The generated work command S5 is therefore an operation program that causes the first robot 2 to execute, in sequence, each operation contained in the first to N-th pieces of module information S4. The generated work command S5 is transmitted to the first robot 2 by the communication unit 21.
  • Note that the work command S5 need not consist of all of the first to N-th pieces of module information S4; it may consist of only a part (or one) of them.
  • For example, the control unit 24 can generate, as a work command for assembling part of the assembly 4, a work command S5 consisting of the first to K-th (K < N) pieces of module information S4. The control unit 24 can also generate a work command S5 consisting of a single piece of module information S4 (for example, "turn the tool").
  • When such a work command S5 is transmitted repeatedly by the communication unit 21, the first robot 2 repeats the task specified by the module information S4 contained in the work command S5 (the task of turning the tool).
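  • Continuing the sketch above, partial and single-module work commands would simply be slices of the module list (illustrative only):

      # Partial work command: only the first K (< N) modules of the full program.
      K = 2
      partial_program = generate_work_command(modules[:K])

      # Single-module command (e.g. "turn the tool"). Transmitting it
      # repeatedly makes the first robot 2 repeat that one task.
      turn_only = generate_work_command([ModuleInfo("tool", "turn")])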
  • FIGS. 4 and 5 are flowcharts showing an example of the robot teaching method performed by the acquisition device 10 and the control device 20 in cooperation.
  • The robot teaching method of FIGS. 4 and 5 covers the sequence from the acquisition device 10 acquiring the work information S1 to the work command S5 being output to the first robot 2.
  • The circled letter A in FIG. 4 connects to the circled letter A in FIG. 5.
  • First, the acquisition device 10 acquires the work information S1 relating to the series of tasks performed in advance by the human 5 (step ST1). Specifically, the first camera 11 captures the work space B in which the human 5 works and acquires the image information S11; the second camera 12 captures the objects present in the line-of-sight direction E of the human 5 during work and acquires the line-of-sight information S12; and the information acquisition unit 13 detects the pressure value when the human 5 grips an object and acquires the related information S13.
  • Next, the acquisition device 10 transmits the acquired work information S1 to the control device 20 (step ST2). Specifically, the first camera 11, the second camera 12, and the information acquisition unit 13 transmit the acquired image information S11, line-of-sight information S12, and related information S13 to the communication unit 21 of the control device 20. When the transmission to the communication unit 21 is complete, the processing of the acquisition device 10 ends.
  • The communication unit 21 of the control device 20 receives the image information S11, the line-of-sight information S12, and the related information S13 from the acquisition device 10 as the work information S1 (step ST3), and passes the work information S1 to the recognition unit 22 of the control device 20.
  • The recognition unit 22 executes the object recognition process and the motion recognition process described above based on the work information S1 received from the communication unit 21. That is, the recognition unit 22 recognizes, from the work information S1, the object information S2 representing the objects and the motion information S3 representing the motions performed on the objects (step ST4), and passes the recognized object information S2 and motion information S3 to the control unit 24 of the control device 20.
  • Next, the control unit 24 executes the object registration process described above. Specifically, the control unit 24 checks whether the object information S2 recognized by the recognition unit 22 is already stored in the first database DB1 (step ST5). If the recognized object information S2 is not yet stored in the first database DB1 ("No" in step ST5), the control unit 24 stores it in the first database DB1 of the storage unit 23 (step ST6).
  • If, in step ST5, the object information S2 recognized by the recognition unit 22 is already stored in the first database DB1 ("Yes" in step ST5), the control unit 24 proceeds to step ST7 described below.
  • Next, the control unit 24 executes the motion registration process described above. Specifically, the control unit 24 checks whether the motion information S3 recognized by the recognition unit 22 is already stored in the second database DB2 (step ST7). If the recognized motion information S3 is not yet stored in the second database DB2 ("No" in step ST7), the control unit 24 stores it in the second database DB2 of the storage unit 23 (step ST8).
  • If, in step ST7, the motion information S3 recognized by the recognition unit 22 is already stored in the second database DB2 ("Yes" in step ST7), the control unit 24 proceeds to step ST9 described below.
  • Note that the motion registration process (steps ST7 to ST8) may be performed before the object registration process (steps ST5 to ST6).
  • Next, the control unit 24 executes the work command generation process described above.
  • Specifically, the control unit 24 generates a plurality of pieces of module information S4, each combining one of the pieces of object information S2 stored in the first database DB1 with one of the pieces of motion information S3 stored in the second database DB2. The control unit 24 then generates a work command S5 consisting of an operation program in which the pieces of module information S4 are connected in sequence (step ST9).
  • Next, the control unit 24 controls the communication unit 21 so as to transmit the generated work command S5 to the first robot 2. That is, the control unit 24 outputs to the communication unit 21 an instruction to transmit the generated work command S5 to the first robot 2 (step ST10).
  • The communication unit 21 transmits the work command S5 to the first robot 2 in accordance with the transmission instruction from the control unit 24 (step ST11).
  • With this, the processing of the control device 20 ends.
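  • Taken together, steps ST3 to ST11 amount to the following control-device pass (an illustrative sketch reusing register_if_new and generate_work_command from the sketches above; it assumes the recognition step ST4 and the module pairing have already produced their outputs):

      def teach_robot(objects: dict, motions: dict, modules: list,
                      db1: dict, db2: dict, send) -> None:
          """One pass of steps ST3 to ST11 inside the control device 20."""
          # ST5-ST6: object registration process into the first database DB1.
          for key, s2 in objects.items():
              register_if_new(db1, key, s2)
          # ST7-ST8: motion registration process into the second database DB2.
          for key, s3 in motions.items():
              register_if_new(db2, key, s3)
          # ST9: work command generation - chain the module information in order.
          s5 = generate_work_command(modules)
          # ST10-ST11: control communication so that the work command S5 is
          # transmitted to the first robot 2.
          send(s5)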
  • In the robot teaching method of the present invention, step ST1 is the first step of acquiring the work information S1 including the image information S11 of the series of tasks performed by the imitation target.
  • In the robot teaching method of the present invention, steps ST3 to ST9 constitute the second step of generating the work command S5 following the acquired work information S1.
  • In the robot teaching method of the present invention, step ST11 is the third step of transmitting the generated work command S5 to the first robot 2.
  • In the robot teaching method of the present invention, step ST4 is a recognition step of subdividing the acquired work information S1 and recognizing the pieces as motion information S3.
  • In the robot teaching method of the present invention, steps ST7 to ST8 constitute a storage step of storing at least one piece of the recognized motion information S3.
  • In the robot teaching method of the present invention, step ST9 is a generation step of generating the work command S5 from the stored one or more pieces of motion information S3.
  • In the robot teaching method of the present invention, step ST3 is a receiving step of receiving from the outside, as part of the work information S1, the related information S13 produced when the imitation target grips an object.
  • In the computer program of the present invention, step ST3 is the first step of receiving the work information S1 including the image information S11 of the series of tasks performed by the imitation target.
  • In the computer program of the present invention, steps ST4 to ST9 constitute the second step of generating the work command S5 following the received work information S1.
  • In the computer program of the present invention, step ST10 is the third step of controlling communication so as to transmit the generated work command S5 to the first robot 2.
  • In the computer program of the present invention, step ST4 is a recognition step of subdividing the received work information S1 and recognizing the pieces as motion information S3.
  • In the computer program of the present invention, steps ST7 to ST8 constitute a storage step of storing at least one piece of the recognized motion information S3.
  • In the computer program of the present invention, step ST9 is a generation step of generating the work command S5 from the stored one or more pieces of motion information S3.
  • As described above, in the robot teaching system 1 of the present embodiment, the acquisition device 10 acquires the work information S1 including the image information S11 of the series of tasks performed by the working human 5, and the control device 20 generates a work command S5 following the acquired work information S1 and transmits the generated work command S5 to the first robot 2. In other words, the source data from which the work command S5 transmitted to the first robot 2 is generated consists of the work information S1, including the image information S11 of the series of tasks performed by the imitation target, here the human 5. The first robot 2 can therefore be given a work command S5 that imitates the intended task without carrying out any of the conventional teaching methods described above, so a desired task can be taught to the first robot 2 more easily than with those methods.
  • Moreover, the recognition unit 22 recognizes motion information S3 obtained by subdividing the acquired work information S1, the storage unit 23 stores the recognized motion information S3, and the control unit 24 generates the work command S5 from the stored one or more pieces of motion information S3. It is therefore possible to teach the robot not only the complete task represented by the work information S1 to be imitated, but also a partial motion contained in the work information S1.
  • Furthermore, the communication unit 21 receives from the outside (the information acquisition unit 13) the related information S13 produced when the human 5 grips an object, and the control unit 24 includes the related information S13 corresponding to the grip information S31 in any work command S5 that contains the grip information S31.
  • The "corresponding related information" is, for example, the pressure detected by the pressure sensor when the object is gripped. This makes it possible to teach the first robot 2 the appropriate pressure for gripping an object, so the first robot 2 can grip the object with a pressure suited to the hardness of the object and the like.
  • Since the related information S13 is acquired by the information acquisition unit 13 provided on the human 5, the appropriate gripping pressure can be taught to the first robot 2 all the more accurately.
  • The acquisition device 10 also includes the second camera 12, which acquires the line-of-sight information S12 relating to objects present in the line-of-sight direction E of the human 5 during work, and the recognition unit 22 recognizes the motion information S3 based on both the image information S11 and the line-of-sight information S12. The recognition unit 22 can therefore recognize the motion information S3 more accurately and can teach the first robot 2 an appropriate grip position.
  • The robot teaching method, the control device 20, and the computer program that causes a computer to function as the control device 20 of the present embodiment have substantially the same configuration as the robot teaching system 1 and therefore provide the same advantageous effects as the robot teaching system 1.
  • The robot teaching system 1 of the present embodiment recognizes the object information S2 and the motion information S3 from the work information S1 to be imitated, but the information recognized from the work information S1 is not particularly limited; for example, only the motion information S3 may be recognized from the work information S1, or other information may be recognized.
  • Reference signs: 1 robot teaching system; 2 first robot; 2a arm; 3 part (object); 4 assembly; 5 human (imitation target); 10 acquisition device; 11 first camera; 12 second camera (camera); 13 information acquisition unit; 20 control device; 21 communication unit; 22 recognition unit; 23 storage unit; 24 control unit; B work space; E line-of-sight direction; S1 work information; S2 object information; S3 motion information; S4 module information; S5 work command; S11 image information; S12 line-of-sight information; S13 related information; S31 grip information.

Abstract

[Problem] To make it possible to easily teach a robot a desired task. [Solution] A robot teaching system 1 includes: an acquisition device 10 that acquires work information S1 including image information S11 of a series of tasks performed by an imitation target, which is a human 5 carrying out the tasks; and a control device 20 that generates a work command S5 in accordance with the acquired work information S1 and transmits the generated work command S5 to a first robot 2.

Description

Robot teaching system, robot teaching method, control device, and computer program
Patent Document 1: JP 2017-27501 A
Brief description of the drawings: FIG. 1 is a schematic view showing the overall configuration of a robot teaching system exemplified as an embodiment of the present invention. FIG. 2 is a functional block diagram showing the internal configuration of the control device. FIG. 3 is a diagram showing an example of the work command generation process executed by the control unit. FIGS. 4 and 5 are flowcharts showing an example of the robot teaching method performed by the acquisition device and the control device in cooperation.
以下、本発明の実施形態について添付図面に基づき詳細に説明する。 [ロボット教示システムの全体構成]図1は、本発明の実施形態として例示するロボット教示システムの全体構成を示す模式図である。ロボット教示システム1は、種々のロボットに作業を教示することができるシステムである。本実施形態では、一例として、工場の生産ラインに設置されるロボット(以下、第1ロボットという)2に作業を教示する場合について説明する。第1ロボット2は、例えば、2本のアーム2aを備えた双腕ロボットである。また、図1には、第1ロボット2に教示する作業の一例として、複数の部品3を順次組み立てて目的物である組立品4を完成させる組み立て作業を示している。  Hereinafter, an embodiment of the present invention will be described in detail based on the attached drawings. [Overall Configuration of Robot Teaching System] FIG. 1 is a schematic view showing an overall configuration of a robot teaching system exemplified as an embodiment of the present invention. The robot teaching system 1 is a system capable of teaching work to various robots. In the present embodiment, as an example, a case where a robot (hereinafter, referred to as a first robot) 2 installed on a production line of a factory is taught to work will be described. The first robot 2 is, for example, a double-arm robot provided with two arms 2a. Further, FIG. 1 shows an assembly operation for sequentially assembling a plurality of parts 3 to complete an assembly 4 as an object, as an example of an operation to teach the first robot 2.
本実施形態のロボット教示システム1は、第1ロボット2に行わせる作業を、予め人間5が行い、当該人間5が行った作業を第1ロボット2に模倣させるシステムである。ロボット教示システム1は、取得装置10と、制御装置20とを備える。  The robot teaching system 1 of the present embodiment is a system in which a human 5 performs in advance a task to be performed by the first robot 2 and causes the first robot 2 to mimic the task performed by the human 5. The robot teaching system 1 includes an acquisition device 10 and a control device 20.
なお、予め作業を行う主体(模倣対象)は、人間5に限定されず、第1ロボット2以外のロボット(以下、第2ロボットという)であってもよい。例えば、古いロボットを新しいロボットに切り替える場合、古いロボットを、模倣対象である第2ロボット(付図示)とし、古いロボットが行う作業を新しいロボットに教示させてもよい。また、第1ロボット2に教示する作業は、組み立て作業に限定されず、単純作業等の他の作業であってもよい。  The subject (imitation target) who performs the work in advance is not limited to the human 5, and may be a robot other than the first robot 2 (hereinafter referred to as a second robot). For example, when switching an old robot to a new robot, the old robot may be set as a second robot (shown in the drawing) to be imitated, and the new robot may be taught the work performed by the old robot. Moreover, the operation taught to the first robot 2 is not limited to the assembly operation, and may be another operation such as a simple operation.
[取得装置の構成] 取得装置10は、人間5が予め行った一連の作業に関する作業情報S1を取得する。本実施形態の取得装置10は、第1カメラ11と、第2カメラ12と、情報取得部13とを備える。  [Configuration of Acquisition Device] The acquisition device 10 acquires task information S1 related to a series of tasks performed by the human 5 in advance. The acquisition device 10 of the present embodiment includes a first camera 11, a second camera 12, and an information acquisition unit 13.
第1カメラ11は、例えば、人間5が作業する作業空間Bを撮影するステレオカメラである。ステレオカメラは、互いに異なる方向から作業空間Bを撮影する2台以上のカメラ(付図示)を備えている。第1カメラ11は、有線又は無線により制御装置20に接続されている。第1カメラ11が作業空間Bを撮影して取得した撮影画像は、人間5による一連の作業、つまり第1ロボット2が追従すべき作業の画像情報S11として制御装置20に送信される。  The first camera 11 is, for example, a stereo camera that captures a work space B in which a human 5 works. The stereo camera includes two or more cameras (shown in the drawing) that capture the work space B from different directions. The first camera 11 is connected to the control device 20 by wire or wirelessly. A photographed image acquired by photographing the work space B by the first camera 11 is transmitted to the control device 20 as image information S11 of a series of operations by the human 5, that is, operations to be followed by the first robot 2.
第2カメラ12は、例えば、人間5の頭部に装着されるウェアラブルカメラである。第2カメラ12は、有線又は無線により制御装置20に接続される。第2カメラ12は、人間5の視点位置付近に取り付けられ、作業時の人間5の視線方向Eに存在する物体を撮影する。人間5の頭部には、人間5の目の動きを検出する視線センサが、第2カメラ12と共に装着される(図示省略)。視線センサは、有線又は無線により制御装置20に接続される。  The second camera 12 is, for example, a wearable camera mounted on the head of the human 5. The second camera 12 is connected to the control device 20 by wire or wirelessly. The second camera 12 is attached near the viewpoint position of the human 5, and captures an object present in the viewing direction E of the human 5 at work. A gaze sensor for detecting the movement of the eyes of the human 5 is attached to the head of the human 5 together with the second camera 12 (not shown). The sight line sensor is connected to the control device 20 by wire or wirelessly.
第2カメラ12が視線方向Eに存在する物体を撮影して取得した撮影画像、及び視線センサが検出した検出データは、視線情報S12として制御装置20に送信される。なお、視線情報S12には、少なくとも第2カメラ12の撮影画像が含まれていればよい。したがって、視線センサを用いずに、第2カメラ12のみで視線情報S12を取得してもよい。  A photographed image acquired by photographing an object present in the line-of-sight direction E by the second camera 12 and detection data detected by the line-of-sight sensor are transmitted to the control device 20 as line-of-sight information S12. The line-of-sight information S12 may include at least a photographed image of the second camera 12. Therefore, the line-of-sight information S12 may be acquired only with the second camera 12 without using the line-of-sight sensor.
情報取得部13は、例えば、人間5の両手にそれぞれ装着される圧力センサからなる。圧力センサは、人間5が対象物(部品3及び工具等)を把持する際の圧力値を検出する。したがって、本実施形態の情報取得部13は、人間5が対象物を把持する際の圧力値を取得する。情報取得部13の検出値(圧力値)は、模倣対象(本実施形態では人間5)が対象物を把持する際の関連情報S13として制御装置20に送信される。  The information acquisition unit 13 is, for example, a pressure sensor attached to both hands of the human 5. The pressure sensor detects a pressure value when the human 5 grips the object (the part 3 and the tool, etc.). Therefore, the information acquisition unit 13 of the present embodiment acquires the pressure value when the human 5 grips the target. The detection value (pressure value) of the information acquisition unit 13 is transmitted to the control device 20 as the related information S13 when the imitation target (in this embodiment, the human 5) grips the object.
なお、情報取得部13は、圧力値を検出する圧力センサに限定されず、接触センサ等の各種センサにより、圧力分布、モーメントの方向、及びすべり角等を検出してもよい。各種センサを用いる場合、各種センサの検出値である、圧力分布、モーメントの方向、及びすべり角等は、関連情報S13として制御装置20に送信される。  The information acquisition unit 13 is not limited to a pressure sensor that detects a pressure value, and may use various sensors such as a contact sensor to detect a pressure distribution, the direction of a moment, a slip angle, and the like. When various sensors are used, pressure distribution, direction of moment, slip angle, etc., which are detection values of various sensors, are transmitted to the control device 20 as related information S13.
模倣対象が第2ロボットの場合には、情報取得部13を用いずに、第2ロボットから直接、関連情報S13を制御装置20に送信してもよい。第2ロボットから送信される関連情報S13は、第2ロボットが対象物を把持する際に用いる、圧力値、圧力分布、モーメントの方向、及びすべり角等を含む情報である。当該情報は、第2ロボットが外部から取得した情報であってもよいし、第2ロボット自体に予め記憶されている情報であってもよい。  When the imitation target is the second robot, the related information S13 may be transmitted to the control device 20 directly from the second robot without using the information acquisition unit 13. The related information S13 transmitted from the second robot is information including a pressure value, a pressure distribution, a direction of moment, a slip angle, and the like, which is used when the second robot grips an object. The said information may be the information which the 2nd robot acquired from the exterior, and may be the information beforehand memorized by the 2nd robot itself.
以上の通り、取得装置10は、画像情報S11、視線情報S12、及び関連情報S13を、作業情報S1として取得する。そして、取得装置10は、取得した作業情報S1を制御装置20に送信する。なお、取得装置10は、少なくとも第1カメラ11を備えていればよい。すなわち、取得装置10は、第1カメラ11の画像情報S11のみを作業情報S1として取得してもよい。  As described above, the acquisition device 10 acquires the image information S11, the line-of-sight information S12, and the related information S13 as the work information S1. Then, the acquisition device 10 transmits the acquired work information S1 to the control device 20. Note that the acquisition device 10 only needs to include at least the first camera 11. That is, the acquisition device 10 may acquire only the image information S11 of the first camera 11 as the work information S1.
[制御装置の構成] 制御装置20は、取得装置10から受信した作業情報S1に基づいて第1ロボット2に実行させる作業指令S5を生成し、生成された作業指令S5を第1ロボット2に送信する。図2は、制御装置20の内部構成を示す機能ブロック図である。制御装置20は、コンピュータを備えて構成されており、有線又は無線により第1ロボット2及び取得装置10に接続されている。制御装置20は、通信部21と、認識部22と、記憶部23と、制御部24とを有する。  [Configuration of Control Device] The control device 20 generates a work command S5 to be executed by the first robot 2 based on the work information S1 received from the acquisition device 10, and transmits the generated work command S5 to the first robot 2 Do. FIG. 2 is a functional block diagram showing an internal configuration of the control device 20. As shown in FIG. The control device 20 includes a computer, and is connected to the first robot 2 and the acquisition device 10 by wire or wirelessly. The control device 20 includes a communication unit 21, a recognition unit 22, a storage unit 23, and a control unit 24.
制御部24は、例えばCPU(Central Processing Unit)からなる。制御部24は、内部バス等を介して、他のハードウェア各部21,22,23と接続されている。そして、制御部24は、他のハードウェア各部21,22,23の動作を制御する。また、制御部24は、記憶部23に記憶されたコンピュータプログラムを読み出して、様々な処理を実行する。本実施形態の制御部24は、後述する「オブジェクト登録処理」、「モーション登録処理」、及び「作業指令生成処理」を実行する。  The control unit 24 includes, for example, a central processing unit (CPU). The control unit 24 is connected to the other hardware units 21, 22, 23 via an internal bus or the like. Then, the control unit 24 controls the operation of the other hardware units 21, 22, 23. Further, the control unit 24 reads a computer program stored in the storage unit 23 and executes various processes. The control unit 24 of the present embodiment executes “object registration processing”, “motion registration processing”, and “work instruction generation processing” described later.
通信部21は、第1カメラ11、第2カメラ12、及び情報取得部13と、有線通信又は無線通信を行う通信インタフェースとして機能する。具体的には、通信部21は、取得装置10から作業情報S1を受信すると、受信した作業情報S1を認識部22に渡す。なお、模倣対象が第2ロボットの場合、通信部21は、外部である第2ロボットから、作業情報S1の関連情報S13を受信すると、受信した関連情報S13を認識部22に渡す。  The communication unit 21 functions as a communication interface that performs wired communication or wireless communication with the first camera 11, the second camera 12, and the information acquisition unit 13. Specifically, when the communication unit 21 receives the work information S1 from the acquisition device 10, the communication unit 21 passes the received work information S1 to the recognition unit 22. When the imitation target is the second robot, when the communication unit 21 receives the related information S13 of the work information S1 from the external second robot, the communication unit 21 passes the received related information S13 to the recognition unit 22.
また、通信部21は、第1ロボット2と、有線通信又は無線通信を行う通信インタフェースとしても機能する。具体的には、通信部21は、制御部21で生成された作業指令S5を第1ロボット2に送信する。  The communication unit 21 also functions as a communication interface that performs wired communication or wireless communication with the first robot 2. Specifically, the communication unit 21 transmits the work instruction S5 generated by the control unit 21 to the first robot 2.
記憶部23は、ハードディスク又は半導体メモリ等の記録媒体からなる。記憶部23は、認識部22及び制御部24がそれぞれ処理を行うコンピュータプログラムを記憶している。また、記憶部23には、第1データベースDB1、及び第2データベースDB2が含まれる。  The storage unit 23 is formed of a recording medium such as a hard disk or a semiconductor memory. The storage unit 23 stores computer programs that the recognition unit 22 and the control unit 24 perform processing respectively. The storage unit 23 also includes a first database DB1 and a second database DB2.
認識部22は、例えばGPU(Graphics Processing Unit)からなる。認識部22は、記憶部23に記憶されたコンピュータプログラムを読み出して、様々な処理を実行する。本実施形態の認識部22は、通信部21が作業情報S1を受信すると、当該作業情報S1に基づいて、後述する「オブジェクト認識処理」及び「モーション認識処理」を実行する。  The recognition unit 22 includes, for example, a GPU (Graphics Processing Unit). The recognition unit 22 reads the computer program stored in the storage unit 23 and executes various processes. When the communication unit 21 receives the work information S1, the recognition unit 22 of the present embodiment executes “object recognition process” and “motion recognition process” described later based on the work information S1.
以上のように、制御装置20は、コンピュータを備えて構成され、制御装置20の各機能は、コンピュータの記憶部23に記憶されたコンピュータプログラムがコンピュータのCPU及びGPUによって実行されることで発揮される。かかるコンピュータプログラムは、CD-ROMやUSBメモリなどの一時的又は非一時的な記録媒体に記憶させることができる。  As described above, the control device 20 is configured to include the computer, and each function of the control device 20 is exhibited by the computer program stored in the storage unit 23 of the computer being executed by the CPU and the GPU of the computer. Ru. Such computer program can be stored on a temporary or non-temporary recording medium such as a CD-ROM or a USB memory.
[オブジェクト認識処理] 認識部22が実行するオブジェクト認識処理は、作業情報S1に含まれる画像
情報S11から、作業の対象物を表すオブジェクト情報S2を認識する処理である。例えば、認識部22は、公知の画像認識技術を用いて、対象物をオブジェクト情報S2として認識する。その際、認識部22は、当該対象物の種類、形状、及び角度(姿勢)等もオブジェクト情報S2として認識する。したがって、認識されたオブジェクト情報S2には、対象物の種類、形状、及び角度等が属性情報として含まれる。 
[Object Recognition Process] The object recognition process performed by the recognition unit 22 is a process of recognizing object information S2 representing an object of work from image information S11 included in the work information S1. For example, the recognition unit 22 recognizes an object as the object information S2 using a known image recognition technology. At this time, the recognition unit 22 recognizes the type, shape, angle (posture), and the like of the target as the object information S2. Therefore, the type, shape, angle, etc. of the object are included as attribute information in the recognized object information S2.
また、オブジェクト認識処理において、認識部22は、画像情報S11に含まれる対象物を個別に認識する。本実施形態の画像情報S11には、組み立て作業に用いる複数の部品3、工具、治具、及びトレー等と、組み立て作業によって組み立てられた組立品4とが、対象物として含まれる(図3参照)。したがって、認識部22は、複数の「部品」、「工具」、「治具」、「トレー」及び「組立品」等を、それぞれ対象物を表すオブジェクト情報S2として個別に認識する。  Further, in the object recognition process, the recognition unit 22 individually recognizes objects included in the image information S11. The image information S11 of the present embodiment includes, as objects, a plurality of parts 3 used for the assembling operation, tools, jigs, trays and the like, and an assembly 4 assembled by the assembling operation (see FIG. 3). ). Therefore, the recognition unit 22 individually recognizes a plurality of "parts", "tools", "jigs", "trays", "assemblies" and the like as object information S2 representing an object.
The attribute information of the object information S2 representing an "assembly" includes assembly information in addition to the type, shape, angle, and the like of the assembly 4. The assembly information is the information required to assemble the assembly 4. Specifically, it includes the plurality of parts 3 needed for assembly, the order in which those parts 3 are assembled, and the like.
Note that although the recognition unit 22 recognizes the assembly 4, which is the goal of the work, as object information S2, it may instead recognize the goal as information separate from the object information S2. Furthermore, a dedicated database for storing information representing recognized goals may be provided in the storage unit 23.
[Motion Recognition Process]
The motion recognition process executed by the recognition unit 22 is a process of recognizing, from the image information S11 included in the work information S1, motion information S3 representing an operation performed on an object. For example, the recognition unit 22 recognizes the operation performed on the object as motion information S3 using a known motion capture technique. Specifically, the recognition unit 22 calculates the three-dimensional coordinates of each of a plurality of markers attached to the joints and other points of the human 5 from images, included in the image information S11, that were captured from a plurality of different directions.
The recognition unit 22 then recognizes the operation performed on the object as motion information S3 based on the change over time of the three-dimensional coordinates of each marker. In doing so, the recognition unit 22 also recognizes the operation speed (rotation speed), operation angle, and the like of the operation as part of the motion information S3. The recognized motion information S3 therefore includes the operation speed (rotation speed), operation angle, and the like of the operation performed on the object as attribute information.
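To make the marker-based derivation concrete, here is a minimal sketch of extracting motion attributes from the change over time of one marker's coordinates. The sampling rate, the heading-based angle proxy, and the function name are hypothetical assumptions, not part of the disclosure.

```python
import numpy as np

def motion_attributes(marker_xyz: np.ndarray, fps: float) -> dict:
    """Derive simple motion attributes from one marker's trajectory.

    marker_xyz: array of shape (T, 3) -- the 3D coordinates of one marker
    over T frames, computed from images captured from several directions.
    """
    velocity = np.diff(marker_xyz, axis=0) * fps   # frame-to-frame displacement -> speed
    speed = np.linalg.norm(velocity, axis=1)       # scalar speed per frame
    # Operation angle: angle swept in the horizontal plane, as one crude proxy.
    headings = np.arctan2(velocity[:, 1], velocity[:, 0])
    swept_angle = float(np.degrees(np.abs(headings[-1] - headings[0])))
    return {
        "mean_speed": float(speed.mean()),
        "peak_speed": float(speed.max()),
        "swept_angle_deg": swept_angle,
    }
```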
The recognition unit 22 subdivides the image information S11 and recognizes the subdivided pieces as motion information S3. That is, in the motion recognition process, the recognition unit 22 recognizes every operation included in the image information S11 individually. In the present embodiment, the image information S11 includes a plurality of operations performed during the assembly work, such as "grip", "release", "turn", and "place" (see FIG. 3). The recognition unit 22 therefore recognizes "grip", "release", "turn", "place", and so on, each performed in the assembly work, individually as motion information S3 representing an operation performed on an object.
In the motion recognition process, the recognition unit 22 of the present embodiment includes the related information S13 as attribute information of the motion information S3 representing the "grip" operation (hereinafter referred to as grip information S31). As described above, the related information S13 includes the pressure value detected by the information acquisition unit 13 when the human 5 grips an object. The recognition unit 22 therefore includes the pressure value of the information acquisition unit 13, contained in the related information S13, in the attribute information of the grip information S31. The recognized grip information S31 is thereby stored in the storage unit 23 in association with the related information S13. Note that the attribute information of the grip information S31 need not include the related information S13.
In the motion recognition process, the recognition unit 22 of the present embodiment recognizes the motion information S3 representing the operation performed on the object not only from the image information S11 but also from the line-of-sight information S12. As described above, the line-of-sight information S12 includes images, captured by the second camera 12, of objects present in the line-of-sight direction E of the human 5, and the detection data of the line-of-sight sensor that detects the eye movement of the human 5.
From the images of the second camera 12 and the detection data of the line-of-sight sensor included in the line-of-sight information S12, the recognition unit 22 recognizes, using a known gaze measurement technique, which position on the object the working human 5 is looking at. This allows the recognition unit 22, for example, when recognizing a "grip" operation, to identify the gripping position on the object from the position at which the human 5 is looking.
The recognition unit 22 can therefore recognize the grip information S31 more accurately by combining the "grip" operation recognized from the image information S11 with the gripping position identified from the line-of-sight information S12. Note that the recognition unit 22 of the present embodiment may also recognize the grip information S31 based on the image information S11 alone.
[Object Registration Process]
The first database DB1 in the storage unit 23 is a database in which object information S2 is stored. Object information S2 representing known objects is stored in the first database DB1 in advance. The control unit 24 executes an "object registration process" that stores the object information S2 recognized by the recognition unit 22 in the first database DB1 according to a predetermined condition.
Specifically, the control unit 24 checks whether the object information S2 recognized by the recognition unit 22 is already stored in the first database DB1. If, as a result of the check, the recognized object information S2 is not stored in the first database DB1, the control unit 24 causes the storage unit 23 to store it in the first database DB1. A plurality of different pieces of object information S2 are thereby accumulated in the first database DB1.
[Motion Registration Process]
The second database DB2 in the storage unit 23 is a database in which motion information S3 is stored. The control unit 24 executes a "motion registration process" that stores the motion information S3 recognized by the recognition unit 22 in the second database DB2 according to a predetermined condition.
Specifically, the control unit 24 checks whether the motion information S3 recognized by the recognition unit 22 is already stored in the second database DB2. If, as a result of the check, the recognized motion information S3 is not stored in the second database DB2, the control unit 24 causes the storage unit 23 to store it in the second database DB2. A plurality of different pieces of motion information S3 are thereby accumulated in the second database DB2.
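Both registration processes follow the same check-then-insert pattern. The following is a minimal sketch of that pattern, assuming a hypothetical in-memory store; the actual databases DB1 and DB2 may of course be implemented differently.

```python
def register_if_absent(database: dict, key: str, info: dict) -> bool:
    """Store `info` under `key` only if no matching entry already exists.

    Returns True when a new entry is added, False when the check finds an
    existing entry and nothing is stored.
    """
    if key in database:       # check against entries already registered
        return False
    database[key] = info      # accumulate a new, distinct entry
    return True

db1: dict = {}  # stands in for the first database DB1 (object information S2)
db2: dict = {}  # stands in for the second database DB2 (motion information S3)
register_if_absent(db1, "part", {"shape": "cylinder", "angle_deg": 15.0})
register_if_absent(db2, "grip", {"pressure": 2.4})
register_if_absent(db2, "grip", {"pressure": 2.4})  # already present: stores nothing
```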
[Work Instruction Generation Process]
The control unit 24 executes a "work instruction generation process" that generates a work instruction S5 for the first robot 2 to execute, modeled on the work information S1. The work instruction generation process generates the work instruction S5 based on one or more pieces of object information S2 stored in the first database DB1 and one or more pieces of motion information S3 stored in the second database DB2.
Specifically, the control unit 24 generates at least one piece of module information S4. Module information S4 is information combining one of the pieces of object information S2 stored in the first database DB1 with one of the pieces of motion information S3 stored in the second database DB2. When a plurality of pieces of module information S4 are generated, the control unit 24 generates a work instruction S5 consisting of an operation program in which those pieces of module information S4 are linked sequentially.
FIG. 3 shows an example of the work instruction generation process executed by the control unit 24. The process shown in FIG. 3 generates a work instruction S5 that causes the first robot 2 to execute the assembly work of the assembly 4. As shown in FIG. 3, the control unit 24 of the present embodiment first acquires the assembly information from the object information S2, stored in the first database DB1, that represents the "assembly" corresponding to the assembly 4. Based on the acquired assembly information, the control unit 24 then generates a plurality of pieces of module information S4, each combining one piece of object information S2 with one piece of motion information S3.
For example, the control unit 24 combines the object information S2 representing "part" with the motion information S3 representing "grip" to generate the first module information S4, which represents the work of "gripping the part 3". The control unit 24 also combines the object information S2 representing "tool" with the motion information S3 representing "turn" to generate the second module information S4, which represents the work of "turning the tool".
Furthermore, the control unit 24 combines the object information S2 representing "tray" with the motion information S3 representing "place" to generate the third module information S4, which represents the work of "placing the tray". In this manner, the control unit 24 generates the first through N-th pieces of module information S4 required to complete the assembly 4, where N is an integer of 2 or more.
Next, based on the acquired assembly information, the control unit 24 generates a work instruction S5 consisting of an operation program in which the plurality of pieces of module information S4 are linked sequentially. Specifically, the control unit 24 generates a work instruction S5 consisting of an operation program in which the first through N-th pieces of module information S4 are linked in order. The generated work instruction S5 is thus an operation program that causes the first robot 2 to execute, in order, each piece of work contained in the first through N-th pieces of module information S4. The generated work instruction S5 is transmitted to the first robot 2 by the communication unit 21.
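As a minimal sketch of the generation step just described — pairing one piece of object information with one piece of motion information and linking the pairs in order — consider the following; all names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ModuleInfo:
    """One piece of module information S4: one object paired with one motion."""
    obj: str     # one piece of object information S2, e.g. "part", "tool", "tray"
    motion: str  # one piece of motion information S3, e.g. "grip", "turn", "place"

def generate_work_instruction(assembly_order: list[tuple[str, str]]) -> list[ModuleInfo]:
    """Link module information S4 sequentially into a work instruction S5.

    `assembly_order` stands in for the assembly information taken from the
    "assembly" object: which objects are handled, with which motions, in order.
    """
    return [ModuleInfo(obj, motion) for obj, motion in assembly_order]

# The first through third modules from the example of FIG. 3:
instruction_s5 = generate_work_instruction(
    [("part", "grip"), ("tool", "turn"), ("tray", "place")]
)
```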
The work instruction S5 need not consist of all of the first through N-th pieces of module information S4; it may consist of a subset (even a single piece) of them. For example, the control unit 24 can generate, as an instruction for assembling part of the assembly 4, a work instruction S5 consisting of the first through K-th (K < N) pieces of module information S4. The control unit 24 can also generate a work instruction S5 containing a single piece of module information S4 (for example, "turn the tool"). If that work instruction S5 is transmitted repeatedly by the communication unit 21, the first robot 2 repeats the work specified by the module information S4 contained in it (turning the tool).
[Robot Teaching Method]
FIGS. 4 and 5 are flowcharts showing an example of a robot teaching method performed by the acquisition device 10 and the control device 20 in cooperation. The robot teaching method of FIGS. 4 and 5 covers the steps from the acquisition device 10 acquiring the work information S1 to the output of the work instruction S5 to the first robot 2. The circled letter A in FIG. 4 connects to the same letter A in FIG. 5.
Referring to FIG. 4, the acquisition device 10 acquires the work information S1 relating to a series of work performed in advance by the human 5 (step ST1). Specifically, the first camera 11 captures the work space B in which the human 5 works to acquire the image information S11. The second camera 12 captures objects present in the line-of-sight direction E of the human 5 during the work to acquire the line-of-sight information S12. The information acquisition unit 13 detects the pressure value when the human 5 grips an object to acquire the related information S13.
Next, the acquisition device 10 transmits the acquired work information S1 to the control device 20 (step ST2). Specifically, the first camera 11, the second camera 12, and the information acquisition unit 13 transmit the acquired image information S11, line-of-sight information S12, and related information S13, respectively, to the communication unit 21 of the control device 20. When the transmission to the communication unit 21 is complete, the processing of the acquisition device 10 ends.
The communication unit 21 of the control device 20 receives the image information S11, the line-of-sight information S12, and the related information S13 from the acquisition device 10 as the work information S1 (step ST3). The communication unit 21 then passes the work information S1 to the recognition unit 22 of the control device 20.
Based on the work information S1 obtained from the communication unit 21, the recognition unit 22 executes the object recognition process and the motion recognition process described above. That is, the recognition unit 22 recognizes, from the work information S1, the object information S2 representing the objects and the motion information S3 representing the operations performed on them (step ST4). The recognition unit 22 passes the recognized object information S2 and motion information S3 to the control unit 24 of the control device 20.
Referring to FIG. 5, the control unit 24 next executes the object registration process described above. Specifically, the control unit 24 checks whether the object information S2 recognized by the recognition unit 22 is already stored in the first database DB1 (step ST5). If the recognized object information S2 is not stored in the first database DB1 ("No" in step ST5), the control unit 24 causes the storage unit 23 to store it in the first database DB1 (step ST6).
If, on the other hand, the recognized object information S2 is already stored in the first database DB1 ("Yes" in step ST5), the control unit 24 proceeds to step ST7, described below.
Next, the control unit 24 executes the motion registration process described above. Specifically, the control unit 24 checks whether the motion information S3 recognized by the recognition unit 22 is already stored in the second database DB2 (step ST7). If the recognized motion information S3 is not stored in the second database DB2 ("No" in step ST7), the control unit 24 causes the storage unit 23 to store it in the second database DB2 (step ST8).
If, on the other hand, the recognized motion information S3 is already stored in the second database DB2 ("Yes" in step ST7), the control unit 24 proceeds to step ST9, described below. The motion registration process (steps ST7 to ST8) may be executed before the object registration process (steps ST5 to ST6).
Next, the control unit 24 executes the work instruction generation process described above. Specifically, the control unit 24 generates a plurality of pieces of module information S4, each combining one of the pieces of object information S2 stored in the first database DB1 with one of the pieces of motion information S3 stored in the second database DB2. The control unit 24 then generates a work instruction S5 consisting of an operation program in which the pieces of module information S4 are linked sequentially (step ST9).
Next, the control unit 24 controls the communication unit 21 to transmit the generated work instruction S5 to the first robot 2. That is, the control unit 24 outputs to the communication unit 21 an instruction to transmit the generated work instruction S5 to the first robot 2 (step ST10). Following the transmission instruction from the control unit 24, the communication unit 21 transmits the work instruction S5 to the first robot 2 (step ST11). When the transmission to the first robot 2 is complete, the processing of the control device 20 ends.
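Taken together, the flowchart steps on the control device side reduce to the short pipeline sketched below. This is only a hedged outline: `recognize` and `send` are hypothetical stand-ins for the recognition unit 22 and the communication unit 21, and the object-to-motion pairing is simplified to positional order.

```python
def control_device_pipeline(work_info, recognize, db1: dict, db2: dict, send) -> None:
    """Sketch of the control device 20 side, from reception to transmission."""
    objects, motions = recognize(work_info)        # object and motion recognition
    for obj in objects:                            # object registration
        db1.setdefault(obj["kind"], obj)
    for mot in motions:                            # motion registration
        db2.setdefault(mot["name"], mot)
    modules = [(o["kind"], m["name"]) for o, m in zip(objects, motions)]
    send(modules)                                  # work instruction S5 to the robot
```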
Step ST1 corresponds, in the robot teaching method of the present invention, to the first step of acquiring work information S1 including image information S11 of a series of work by the imitation target. Steps ST3 to ST9 correspond to the second step of generating a work instruction S5 modeled on the acquired work information S1. Step ST11 corresponds to the third step of transmitting the generated work instruction S5 to the first robot 2.
Step ST4 corresponds, in the robot teaching method of the present invention, to the recognition step of subdividing the acquired work information S1 and recognizing it as motion information S3. Steps ST7 to ST8 correspond to the storage step of storing at least one piece of the recognized motion information S3.
Step ST9 corresponds, in the robot teaching method of the present invention, to the generation step of generating the work instruction S5 from the one or more stored pieces of motion information S3. Step ST3 corresponds to the reception step of receiving, from the outside as work information S1, the related information S13 obtained when the imitation target grips an object.
Step ST3 corresponds, in the computer program of the present invention, to the first step of receiving work information S1 including image information S11 of a series of work by the imitation target. Steps ST4 to ST9 correspond to the second step of generating a work instruction S5 modeled on the received work information S1. Step ST10 corresponds to the third step of controlling communication so that the generated work instruction S5 is transmitted to the first robot 2.
Step ST4 corresponds, in the computer program of the present invention, to the recognition step of subdividing the received work information S1 and recognizing it as motion information S3. Steps ST7 to ST8 correspond to the storage step of storing at least one piece of the recognized motion information S3. Step ST9 corresponds to the generation step of generating the work instruction S5 from the one or more stored pieces of motion information S3.
[Operation and Effects]
In the robot teaching system 1 of the present embodiment described above, the acquisition device 10 acquires work information S1 including image information S11 of a series of work by the working human 5. The control device 20 then generates a work instruction S5 modeled on the acquired work information S1 and transmits the generated work instruction S5 to the first robot 2. That is, the source data for generating the work instruction S5 transmitted to the first robot 2 is the work information S1, which includes the image information S11 of a series of work by the imitation target, here the human 5. A predetermined work instruction S5 modeled on the imitation target can therefore be given to the first robot 2 without performing the conventional teaching methods described above. Compared with those conventional methods, a desired piece of work can thus be taught to the first robot 2 easily.
In the robot teaching system 1 of the present embodiment, the recognition unit 22 also recognizes motion information S3 obtained by subdividing the acquired work information S1, and the storage unit 23 stores the recognized motion information S3. The control unit 24 then generates the work instruction S5 from the one or more stored pieces of motion information S3. The robot can therefore be taught not only the complete work represented by the work information S1 of the imitation target, but also partial operations (motions) contained in that work information S1.
In the robot teaching system 1 of the present embodiment, the communication unit 21 also receives, from the outside (the information acquisition unit 13), the related information S13 obtained when the human 5 grips an object. The control unit 24 then includes, in a work instruction S5 containing grip information S31, the related information S13 corresponding to that grip information S31. The "corresponding related information" is, for example, information on the appropriate pressure for gripping the object, as detected by a pressure sensor. The appropriate pressure for gripping an object can thereby be taught to the first robot 2, which can then grip the object with a pressure appropriate to its hardness and other properties.
In the robot teaching system 1 of the present embodiment, the information acquisition unit 13 provided on the human 5 acquires the related information S13. The appropriate pressure for gripping an object can therefore be taught to the first robot 2 more accurately.
In the robot teaching system 1 of the present embodiment, the acquisition device 10 also has the second camera 12, which acquires line-of-sight information S12 on objects present in the line-of-sight direction E of the human 5 during the work. The recognition unit 22 recognizes the motion information S3 based on the image information S11 and the line-of-sight information S12. Compared with using only the image information S11 as the input to the recognition process, this allows the gripping position on an object and the like to be recognized more accurately. The recognition unit 22 can therefore recognize the motion information S3 more accurately, and an appropriate gripping position can be taught to the first robot 2.
The robot teaching method of the present embodiment, the control device 20, and the computer program that causes a computer to function as the control device 20 have substantially the same configuration as the robot teaching system 1, and therefore provide the same operation and effects as the robot teaching system 1.
[Others]
The robot teaching system 1 of the present embodiment recognizes the object information S2 and the motion information S3 from the work information S1 of the imitation target, but the information recognized from the work information S1 is not particularly limited. For example, only the motion information S3 may be recognized from the work information S1, or other information may be recognized.
The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. The scope of the present invention is indicated not by the foregoing description but by the claims, and is intended to include all modifications within the meaning and scope equivalent to the claims.
DESCRIPTION OF REFERENCE SIGNS: 1 robot teaching system, 2 first robot, 2a arm, 3 part (object), 4 assembly, 5 human (imitation target), 10 acquisition device, 11 first camera, 12 second camera (camera), 13 information acquisition unit, 20 control device, 21 communication unit, 22 recognition unit, 23 storage unit, 24 control unit, B work space, E line-of-sight direction, S1 work information, S2 object information, S3 motion information, S4 module information, S5 work instruction, S11 image information, S12 line-of-sight information, S13 related information, S31 grip information

Claims (20)

  1. A robot teaching system for teaching work to a first robot, comprising: an acquisition device that acquires work information including image information of a series of work by an imitation target, the imitation target being a working human or a working second robot other than the first robot; and a control device that generates a work instruction modeled on the acquired work information and transmits the generated work instruction to the first robot.
  2. The robot teaching system according to claim 1, wherein the control device has: a recognition unit that subdivides the acquired work information and recognizes it as motion information; a storage unit that stores at least one piece of the recognized motion information; a control unit that generates the work instruction from the one or more stored pieces of motion information; and a communication unit that transmits the generated work instruction to the first robot.
  3. The robot teaching system according to claim 2, wherein the communication unit receives, from the outside as the work information, related information obtained when the imitation target grips an object; the storage unit stores the received related information in association with grip information, which is the motion information of gripping the object; and the control unit includes, in the work instruction containing the grip information, the related information corresponding to the grip information.
  4. The robot teaching system according to claim 3, wherein the acquisition device has an information acquisition unit provided on the imitation target, and the information acquisition unit acquires the related information.
  5. The robot teaching system according to any one of claims 2 to 4, wherein the acquisition device has a camera that further acquires, as the work information, line-of-sight information on the line-of-sight direction of the imitation target during the work, and the recognition unit recognizes the motion information based on the acquired image information and line-of-sight information.
  6. A robot teaching method for teaching work to a first robot, comprising: a first step of acquiring work information including image information of a series of work by an imitation target, the imitation target being a working human or a working second robot other than the first robot; a second step of generating a work instruction modeled on the acquired work information; and a third step of transmitting the generated work instruction to the first robot.
  7. The robot teaching method according to claim 6, wherein the second step includes: a recognition step of subdividing the acquired work information and recognizing it as motion information; a storage step of storing at least one piece of the recognized motion information; and a generation step of generating the work instruction from the one or more stored pieces of motion information.
  8. The robot teaching method according to claim 7, wherein the second step further includes a reception step of receiving, from the outside as the work information, related information obtained when the imitation target grips an object; in the storage step, the received related information is stored in association with grip information, which is the motion information of gripping the object; and in the generation step, the related information corresponding to the grip information is included in the work instruction containing the grip information.
  9. The robot teaching method according to claim 8, wherein, in the first step, an information acquisition unit provided on the imitation target acquires the related information.
  10. The robot teaching method according to any one of claims 7 to 9, wherein, in the first step, line-of-sight information on the line-of-sight direction of the imitation target during the work is further acquired as the work information, and in the recognition step, the motion information is recognized based on the acquired image information and line-of-sight information.
  11. A control device for teaching work to a first robot, comprising: a communication unit that receives work information including image information of a series of work by an imitation target, the imitation target being a working human or a working second robot other than the first robot; and a control unit that generates a work instruction modeled on the received work information and controls the communication unit to transmit the generated work instruction to the first robot.
  12. The control device according to claim 11, further comprising: a recognition unit that subdivides the acquired work information and recognizes it as motion information; and a storage unit that stores at least one piece of the recognized motion information; wherein the control unit generates the work instruction from the one or more stored pieces of motion information, and the communication unit transmits the generated work instruction to the first robot.
  13. The control device according to claim 12, wherein the communication unit receives, from the outside as the work information, related information obtained when the imitation target grips an object; the storage unit stores the received related information in association with grip information, which is the motion information of gripping the object; and the control unit includes, in the work instruction containing the grip information, the related information corresponding to the grip information.
  14. The control device according to claim 13, wherein the communication unit receives the related information from an information acquisition unit provided on the imitation target.
  15. The control device according to any one of claims 12 to 14, wherein the communication unit further receives, as the work information, line-of-sight information on the line-of-sight direction of the imitation target during the work, and the recognition unit recognizes the motion information based on the received image information and line-of-sight information.
  16. A computer program that causes a computer to function as a control device for teaching work to a first robot, the computer program causing the computer to execute: a first step of receiving work information including image information of a series of work by an imitation target, the imitation target being a working human or a working second robot other than the first robot; a second step of generating a work instruction modeled on the received work information; and a third step of controlling communication so that the generated work instruction is transmitted to the first robot.
  17. The computer program according to claim 16, wherein the second step includes: a recognition step of subdividing the received work information and recognizing it as motion information; a storage step of storing at least one piece of the recognized motion information; and a generation step of generating the work instruction from the one or more stored pieces of motion information.
  18. The computer program according to claim 17, wherein, in the first step, related information obtained when the imitation target grips an object is received from the outside as the work information; in the storage step, the received related information is stored in association with grip information, which is the motion information of gripping the object; and in the generation step, the related information corresponding to the grip information is included in the work instruction containing the grip information.
  19. The computer program according to claim 18, wherein, in the first step, the related information is received from an information acquisition unit provided on the imitation target.
  20. The computer program according to any one of claims 17 to 19, wherein, in the first step, line-of-sight information on the line-of-sight direction of the imitation target during the work is further received as the work information, and in the recognition step, the motion information is recognized based on the received image information and line-of-sight information.
PCT/JP2018/023729 2017-09-28 2018-06-22 System for teaching robot, method for teaching robot, control device, and computer program WO2019064752A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-187384 2017-09-28
JP2017187384 2017-09-28

Publications (1)

Publication Number Publication Date
WO2019064752A1 true WO2019064752A1 (en) 2019-04-04

Family

ID=65901264

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/023729 WO2019064752A1 (en) 2017-09-28 2018-06-22 System for teaching robot, method for teaching robot, control device, and computer program

Country Status (1)

Country Link
WO (1) WO2019064752A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE112021006786T5 (en) 2021-03-15 2023-11-09 Hitachi High-Tech Corporation ACTIVITY TRAINING DEVICE AND ACTIVITY TRAINING METHOD FOR ROBOTS

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011131376A (en) * 2003-11-13 2011-07-07 Japan Science & Technology Agency Robot drive system and robot drive program
JP2011200997A (en) * 2010-03-26 2011-10-13 Kanto Auto Works Ltd Teaching device and method for robot
JP2013158887A (en) * 2012-02-07 2013-08-19 Seiko Epson Corp Teaching device, robot, robot device, and teaching method
WO2016181572A1 (en) * 2015-05-11 2016-11-17 株式会社安川電機 Dispensing system, controller, and control method
JP6038417B1 (en) * 2016-01-29 2016-12-07 三菱電機株式会社 Robot teaching apparatus and robot control program creating method

Similar Documents

Publication Publication Date Title
US11541545B2 (en) Information processing apparatus, information processing method, and system
US11195041B2 (en) Generating a model for an object encountered by a robot
US20210205986A1 (en) Teleoperating Of Robots With Tasks By Mapping To Human Operator Pose
CN105666505B (en) Robot system having display for augmented reality
EP3486041A3 (en) Gripping system, learning device, and gripping method
CN110662631B (en) Control device, robot control method, and robot control system
KR20120027253A (en) Object-learning robot and method
WO2015106278A2 (en) Wearable robot assisting manual tasks
JP2021167060A (en) Robot teaching by human demonstration
JP2011067941A (en) Visual perception system and method for humanoid robot
US20220111533A1 (en) End effector control system and end effector control method
JP6777670B2 (en) A robot system that uses image processing to correct robot teaching
CN113412178A (en) Robot control device, robot system, and robot control method
Çoban et al. Wireless teleoperation of an industrial robot by using myo arm band
WO2019064752A1 (en) System for teaching robot, method for teaching robot, control device, and computer program
WO2019064751A1 (en) System for teaching robot, method for teaching robot, control device, and computer program
JP2015114933A (en) Object recognition device, robot, and object recognition method
JP6455869B2 (en) Robot, robot system, control device, and control method
JP4715296B2 (en) Robot hand holding and gripping control method.
JP2024034668A (en) Wire insertion system, wire insertion method, and wire insertion program
Park et al. Robot-based Object Pose Auto-annotation System for Dexterous Manipulation
JP2013173209A (en) Robot apparatus and control method of the same, and computer program
JP2022157119A (en) Robot remote operation control device, robot remote operation control system, robot remote operation control method and program
JP2022155623A (en) Robot remote operation control device, robot remote operation control system, robot remote operation control method and program
JP2022157123A (en) Robot remote operation control device, robot remote operation control system, robot remote operation control method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18860192

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18860192

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP