EP4074474A1 - Robot system and method for forming three-dimensional model of workpiece
- Publication number: EP4074474A1 (application EP20898730.5A)
- Authority: European Patent Office (EP)
- Prior art keywords: workpiece, control device, robot, display, information
- Legal status: Pending
Classifications
- B25J9/16—Programme controls
- B25J13/065—Control stands, e.g. consoles, switchboards comprising joy-sticks
- B25J9/1671—Programme controls characterised by programming, planning systems for manipulators characterised by simulation, either to verify existing program or to create and verify new program, CAD/CAM oriented, graphic oriented programming systems
- B25J11/00—Manipulators not otherwise provided for
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- B25J13/089—Determining the position of the robot with reference to its environment
- B25J9/1605—Simulation of manipulator lay-out, design, modelling of manipulator
- B25J9/1689—Teleoperation
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- G05B2219/40131—Virtual reality control, programming of manipulator
- G05B2219/40195—Tele-operation, computer assisted manual operation
Definitions
- The present disclosure relates to a robot system and to a method of forming a three-dimensional model of a workpiece.
- Display systems in which a three-dimensional image indicating a work cell of a robot is displayed on various personal digital assistants are known (for example, see Patent Document 1).
- The display system disclosed in Patent Document 1 has a display which generates a three-dimensional robot model and robot work-cell data containing models of other structures of the work cell, and which displays a three-dimensional rendered image, such as the generated three-dimensional robot model.
- In the display system disclosed in Patent Document 1, when a user (operator) operates the robot displayed on the display by using a user interface, a collision object (for example, a safety wall) is indicated on the display before the robot operates. Then, when the physical robot collides with the collision object as a result of the user's operation, the collision is displayed on the display.
- The present disclosure is made to solve the above problem, and one purpose thereof is to provide a robot system and a method of forming a three-dimensional model of a workpiece that can improve production efficiency as compared with the conventional display system.
- A robot system according to the present disclosure includes a robot installed in a workarea and controlled by a second control device, a 3D camera operated by an operator, a sensor that is disposed in a manipulation area, which is a space different from the workarea, and that wirelessly detects position information and posture information on the 3D camera, a display, and a first control device.
- The first control device acquires image information on a workpiece imaged by the 3D camera, acquires, from the sensor, the position information and the posture information at the time the workpiece is imaged by the 3D camera, displays the acquired image information on the display, forms a three-dimensional model of the workpiece based on the image information and the acquired position information and posture information, displays the formed three-dimensional model on the display, and outputs first data, which is data of the formed three-dimensional model, to the second control device.
- A method of forming a three-dimensional model of a workpiece according to the present disclosure includes the steps of detecting position information and posture information on a 3D camera when the 3D camera images the workpiece disposed in a manipulation area, which is a space different from a workarea where a robot is installed, acquiring image information on the imaged workpiece, acquiring the detected position information and posture information, displaying the acquired image information on a display, and forming the three-dimensional model of the workpiece based on the acquired image information and the acquired position information and posture information.
- FIG. 1 is a schematic diagram illustrating an outline configuration of a robot system according to Embodiment 1.
- The robot system 100 includes a robot 101, a first interface 121, a 3D camera 103, a sensor 104, a display 105, a first control device 111, and a second control device 112.
- The robot 101 and the second control device 112 are installed inside a workarea 201, and the sensor 104, the display 105, and the first control device 111 are disposed inside a manipulation area 202.
- The first interface 121 is gripped (held) and operated by the operator inside the manipulation area 202.
- The 3D camera 103 is disposed at a tip-end part of the first interface 121.
- Although the 3D camera 103 is here provided to the tip-end part of the first interface 121, it does not have to be provided to the first interface 121 and may instead be provided separately from the first interface 121.
- The workarea 201 is a space where the robot 101 is installed, and includes at least the space inside the operating range of the robot 101. The manipulation area 202 is a space separated from the workarea 201 (a space different from the workarea 201). The workarea 201 and the manipulation area 202 may be divided by a wall member 203.
- The wall member 203 is provided with a window 204, so that the operator can see the robot 101 disposed inside the workarea 201.
- The workarea 201 may be an explosion-proof area to which an explosion-proof specification is applied, and the manipulation area 202 may be a non-explosion-proof area to which the explosion-proof specification is not applied.
- The sensor 104 wirelessly detects position information and posture information on the 3D camera 103 (for example, on its lens), and outputs them to the first control device 111. Further, the sensor 104 wirelessly detects position information and posture information on the tip-end part of the first interface 121, and outputs them to the second control device 112. Note that the sensor 104 may output to the first control device 111 and/or the second control device 112 either wirelessly or by wire.
- The sensor 104 may be an infrared sensor or a camera, for example. Note that, if the sensor 104 is a camera, the sensor 104 does not have to be disposed inside the manipulation area 202.
- For example, the camera may be one installed in a personal digital assistant or a head-mounted display that the operator carries. Further, the sensor 104 that detects the position information on the 3D camera 103 and the sensor 104 that detects the position information on the tip-end part of the first interface 121 may be the same sensor or different sensors.
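- The position information and posture information reported by the sensor 104 can be viewed as a time-stamped camera pose. The following is a minimal Python sketch of one way such a reading might be represented and polled; the names (CameraPose, PoseSensor.read_pose) and the roll-pitch-yaw posture representation are illustrative assumptions, not part of the disclosed system.

```python
from dataclasses import dataclass
from typing import Protocol, Tuple


@dataclass
class CameraPose:
    """Position and posture of the 3D camera at one instant (illustrative)."""
    position_m: Tuple[float, float, float]       # x, y, z in the manipulation-area frame
    posture_rpy_rad: Tuple[float, float, float]  # roll, pitch, yaw of the camera (lens)
    timestamp_s: float                           # when the reading was taken


class PoseSensor(Protocol):
    """Abstract view of the sensor 104: it wirelessly reports camera poses."""

    def read_pose(self) -> CameraPose:
        ...
```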
- The operator grips a gripper 121A of the first interface 121 and operates the robot 101.
- Since the robot 101 operates so as to follow the locus of a tip-end part of an interface body 121E of the gripped first interface 121, the operator can intuitively manipulate the robot 101 with the first interface 121 from inside the manipulation area 202.
- An apparatus may be provided which transmits to the operator force-sense information detected by a force sensor provided to an end effector 20 of the robot 101 (described later), or voice information.
- This apparatus includes, for example, a vibration motor, a speaker, and a mechanism which expands and contracts the casing which constitutes the gripper.
- The first interface 121 may be provided with a switch 121B which starts/stops spraying or injecting grains, fluid, or gas onto a workpiece 300, or cutting or polishing of the workpiece 300.
- The first interface 121 may be configured to be portable by the operator. Further, the interface body 121E of the first interface 121 may be formed in the same shape as the end effector 20 of the robot 101. Moreover, a known interface, such as a joystick, a keyboard, a numeric keypad, or a teach pendant, may be used as the first interface 121.
- The 3D camera 103 outputs image information captured inside the manipulation area 202 to the first control device 111.
- Here, the image information includes at least one of still-image information, moving-image information, and video information. The term "image information" is used in the same sense in the following description.
- The sensor 104 wirelessly detects the position information and the posture information on the 3D camera 103, and outputs them to the first control device 111.
- The display 105 displays the three-dimensional models of the workpiece 300 and the robot 101 outputted from the first control device 111, and the image information on the workpiece 300 etc. captured by the 3D camera 103.
- The display 105 may be a stationary display which is installed and used on a desk, a floor, etc., for example. Alternatively, the display 105 may be a head-mounted display or glasses which the operator wears and uses.
- The end effector 20 of the robot 101 may have a structure capable of spraying or injecting grains, fluid, or gas onto the workpiece 300, a structure capable of cutting or polishing the workpiece 300, a structure capable of welding the workpiece 300, or a structure capable of washing the workpiece 300.
- Here, a configuration of the robot 101 is described in detail with reference to FIG. 2.
- FIG. 2 is a schematic diagram illustrating an outline configuration of the robot in the robot system illustrated in FIG. 1 .
- The robot 101 is a vertical articulated robotic arm provided with a serially-coupled body comprised of a plurality of links (here, a first link 11a, a second link 11b, a third link 11c, a fourth link 11d, a fifth link 11e, and a sixth link 11f) and a plurality of joints (here, a first joint JT1, a second joint JT2, a third joint JT3, a fourth joint JT4, a fifth joint JT5, and a sixth joint JT6), and with a pedestal 15 which supports the serially-coupled body and the joints.
- Although a vertical articulated robot is adopted as the robot 101, the robot 101 is not limited to this configuration, and a horizontal articulated robot may be adopted instead.
- The pedestal 15 and a base-end part of the first link 11a are coupled to each other swivelably on an axis extending in the vertical direction.
- A tip-end part of the first link 11a and a base-end part of the second link 11b are coupled to each other pivotably on an axis extending in the horizontal direction.
- A tip-end part of the second link 11b and a base-end part of the third link 11c are coupled to each other pivotably on an axis extending in the horizontal direction.
- A tip-end part of the third link 11c and a base-end part of the fourth link 11d are coupled to each other rotatably on an axis extending in the longitudinal direction of the fourth link 11d.
- A tip-end part of the fourth link 11d and a base-end part of the fifth link 11e are coupled to each other pivotably on an axis perpendicular to the longitudinal direction of the fourth link 11d.
- A tip-end part of the fifth link 11e and a base-end part of the sixth link 11f are twistably coupled to each other.
- A mechanical interface is provided to a tip-end part of the sixth link 11f.
- The end effector 20 is detachably attached to the mechanical interface according to the contents of the work.
- The end effector 20 sprays or injects fluid (for example, paint) onto the workpiece 300. Further, the end effector 20 is connected to piping 21 for feeding the fluid to the end effector 20.
- The first joint JT1, the second joint JT2, the third joint JT3, the fourth joint JT4, the fifth joint JT5, and the sixth joint JT6 are each provided with a drive motor (not illustrated) as one example of an actuator which relatively rotates the two members coupled to each other via the joint.
- The drive motor may be a servomotor which is servo-controlled by the second control device 112, for example.
- The first joint JT1, the second joint JT2, the third joint JT3, the fourth joint JT4, the fifth joint JT5, and the sixth joint JT6 are each provided with a rotation sensor (not illustrated) which detects the rotational position of the drive motor, and a current sensor (not illustrated) which detects the current for controlling the rotation of the drive motor.
- The rotation sensor may be an encoder, for example.
- The first control device 111 includes a processor 111a, such as a microprocessor or a CPU, and a memory 111b, such as a ROM and a RAM.
- The memory 111b stores information such as a basic program and various fixed data.
- The processor 111a reads and executes software, such as the basic program stored in the memory 111b.
- The processor 111a forms a three-dimensional model (3D computer graphics or 3DCAD data) of the workpiece 300 based on the image information inputted from the 3D camera 103, and on the position information and posture information inputted from the sensor 104 at the time the 3D camera 103 images the workpiece 300.
- The memory 111b stores the three-dimensional model of the workpiece 300 formed by the processor 111a.
- The processor 111a outputs the formed three-dimensional model of the workpiece 300 to the display 105.
- The display 105 displays the three-dimensional model of the workpiece 300 inputted from the processor 111a as a 3D workpiece 301 (see FIG. 6 etc.).
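- As a concrete illustration of how the processor 111a could combine the image information with the camera's position and posture, the sketch below back-projects a depth image into 3D points in the camera frame and then transforms them into the manipulation-area (world) frame using the pose reported by the sensor 104. It assumes that the 3D camera provides a depth image, that pinhole intrinsics are known, and that the posture is given as roll-pitch-yaw angles; the function names and the NumPy-based representation are illustrative, not the implementation prescribed by the disclosure.

```python
import numpy as np


def rotation_from_rpy(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Rotation matrix for Z-Y-X (yaw-pitch-roll) Euler angles (assumed convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return rz @ ry @ rx


def depth_image_to_world_points(depth, fx, fy, cx, cy, camera_position, camera_rpy):
    """Back-project a depth image (in meters) into world-frame 3D points.

    depth: HxW array from the 3D camera; fx, fy, cx, cy: pinhole intrinsics.
    camera_position, camera_rpy: pose of the camera reported by the sensor 104.
    Returns an Nx3 array of points in the manipulation-area frame.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0.0
    z = depth[valid]
    x = (us[valid] - cx) * z / fx
    y = (vs[valid] - cy) * z / fy
    points_cam = np.stack([x, y, z], axis=1)   # points in the camera frame
    r = rotation_from_rpy(*camera_rpy)         # camera-to-world rotation
    return points_cam @ r.T + np.asarray(camera_position)
```
- Repeating this back-projection for every capture, each with its own pose, yields point sets that all live in one common frame, which is what allows partial views taken from different directions to be merged into one model.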
- The second control device 112 includes, similarly to the first control device 111, a processor 112a, such as a microprocessor or a CPU, and a memory 112b, such as a ROM and a RAM.
- The memory 112b stores information such as a basic program and various fixed data.
- The processor 112a performs various kinds of operations of the robot 101 by reading and executing software, such as the basic program, stored in the memory 112b.
- The first control device 111 and/or the second control device 112 may each be comprised of a single control device which carries out centralized control, or of a plurality of control devices which collaboratively carry out distributed control. Further, the first control device 111 and/or the second control device 112 may be comprised of a microcomputer, or of an MPU, a PLC (Programmable Logic Controller), a logic circuit, etc.
- Although the robot system 100 according to Embodiment 1 is provided with both the first control device 111 and the second control device 112, it is not limited to this configuration.
- The first control device 111 may have the function of the second control device 112 (that is, the robot system 100 may be provided with only the first control device 111).
- Alternatively, the second control device 112 may have the function of the first control device 111 (that is, the robot system 100 may be provided with only the second control device 112).
- FIG. 3 is a flowchart illustrating one example of the operation of the robot system according to Embodiment 1 (the method of forming the three-dimensional model of the workpiece).
- FIGs. 4 to 9 are schematic diagrams each illustrating a state inside the manipulation area when the robot system operates in accordance with the flowchart illustrated in FIG. 3. Note that, in FIGs. 4 to 9, the front-and-rear, left-and-right, and up-and-down directions of the workpiece are expressed as the front-and-rear, left-and-right, and up-and-down directions in the drawings.
- In the manipulation area 202, a box-shaped workpiece 300 which is open at an upper part thereof is disposed.
- The operator grips the first interface 121, in which the 3D camera 103 is installed, and images the workpiece 300 from the front of the workpiece 300.
- The operator may use the 3D camera 103 to capture a single image of the workpiece 300, to discontinuously capture a plurality of images of the workpiece 300, or to capture video of the workpiece 300. Further, the operator may use the 3D camera 103 to image the workpiece 300 from directions other than the front of the workpiece 300.
- The first control device 111 acquires, from the 3D camera 103, the image information on the workpiece 300 imaged by the 3D camera 103 (Step S101). Next, the first control device 111 acquires, from the sensor 104, the position information and the posture information on the 3D camera 103 at the time the image information acquired at Step S101 was captured (Step S102).
- Next, the first control device 111 displays the image information on the workpiece 300 acquired at Step S101 on the display 105 as a workpiece image 302A (Step S103; see FIG. 5).
- Next, the first control device 111 determines whether an acquisition end command for the image information on the workpiece 300 (imaging end information on the workpiece 300) is inputted by the operator from an input device etc. (not illustrated) (Step S104).
- If the imaging end information is inputted, the first control device 111 forms the three-dimensional model (3D computer graphics or 3DCAD data) of the workpiece 300 based on the image information on the workpiece 300 acquired at Step S101, and on the position information and the posture information on the 3D camera 103 acquired at Step S102 (Step S105).
- Next, the first control device 111 displays the three-dimensional model of the workpiece 300 formed at Step S105 on the display 105 as a 3D workpiece 301A (Step S106; see FIG. 6).
- Next, the first control device 111 determines whether a formation end command for the three-dimensional model of the workpiece 300 (formation end information on the three-dimensional model) is inputted by the operator from the input device etc. (not illustrated) (Step S107).
- If it is determined that the three-dimensional model formation end information is not inputted from the input device etc. (No at Step S107), the first control device 111 returns to the processing of Step S101. On the other hand, if it is determined that the three-dimensional model formation end information is inputted from the input device etc. (Yes at Step S107), the first control device 111 transits to the processing of Step S108.
- At Step S108, the first control device 111 outputs the three-dimensional model of the workpiece 300 formed at Step S105 to the second control device 112, and ends this program. The second control device 112 can therefore control the operation of the robot 101 based on the three-dimensional model data of the workpiece 300.
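- One possible way to organize the processing of Steps S101 to S108 inside the first control device 111 is the simple acquisition loop sketched below. The camera, sensor, display, model-builder, UI, and second-controller objects are hypothetical stand-ins for the hardware and input devices described above, and the step numbers in the comments follow FIG. 3; this is a sketch of the control flow, not the patented implementation itself.

```python
def form_workpiece_model(camera, sensor, display, builder, ui, second_controller):
    """Minimal sketch of the FIG. 3 flow (Steps S101 to S108); all collaborators are stubs."""
    captures = []
    while True:
        image = camera.capture()                    # Step S101: acquire image information
        pose = sensor.read_pose()                   # Step S102: camera pose at capture time
        display.show_image(image)                   # Step S103: display the workpiece image
        captures.append((image, pose))

        if not ui.imaging_end_requested():          # Step S104: imaging end information inputted?
            continue                                # No: image the workpiece again (back to S101)

        model = builder.build(captures)             # Step S105: form the 3D model from all captures
        display.show_model(model)                   # Step S106: display the formed 3D model

        if ui.model_formation_end_requested():      # Step S107: formation end information inputted?
            second_controller.receive_model(model)  # Step S108: output the first data
            return model
        # No at Step S107: return to Step S101 and keep imaging.
```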
- At Step S104, for example, after the workpiece image 302A is displayed on the display 105, if the operator further images the workpiece 300 and the image information on the newly imaged workpiece 300 is inputted from the 3D camera 103, the first control device 111 may determine that the imaging end information on the workpiece 300 is not inputted from the input device etc.
- If it is determined that the imaging end information on the workpiece 300 is not inputted (No at Step S104), the first control device 111 returns to the processing of Step S101.
- When the operator images the workpiece 300 again, the image information on the imaged workpiece 300 is inputted into the first control device 111 (the second Step S101).
- Next, the first control device 111 acquires, from the sensor 104, the position information and the posture information on the 3D camera 103 at the time the image information acquired at the second Step S101 was captured (Step S102).
- Next, the first control device 111 displays the image information on the workpiece 300 acquired at the second Step S101 on the display 105 as a workpiece image 302B (Step S103; see FIG. 7). From this image, it can be seen that a rectangular through-hole is formed in a lower part of the rear surface of the workpiece 300.
- Next, the first control device 111 again determines whether the acquisition end command for the image information on the workpiece 300 is inputted from the input device (not illustrated) (Step S104).
- If the operator then further images the workpiece 300 from above, the image information on the imaged workpiece 300 is inputted into the first control device 111 (the third Step S101).
- Next, the first control device 111 acquires, from the sensor 104, the position information and the posture information on the 3D camera 103 at the time the image information acquired at the third Step S101 was captured (Step S102).
- Next, the first control device 111 displays the image information on the workpiece 300 acquired at the third Step S101 on the display 105 as a workpiece image 302C (Step S103; see FIG. 8). From this image, it can be seen that a rectangular through-hole is formed in the bottom surface of the workpiece 300. Further, the operator can see that imaging of all the parts of the workpiece 300 is finished. The operator therefore causes the input device etc. to output the imaging end information on the workpiece 300 to the first control device 111.
- The first control device 111 then determines that the imaging end information on the workpiece 300 is inputted from the input device etc. (Yes at Step S104), and forms the three-dimensional model of the workpiece 300 based on the image information on the workpiece 300 acquired at Step S101, and on the position information and the posture information on the 3D camera 103 acquired at Step S102 (Step S105).
- When the image information and the position and posture information are acquired a plurality of times in this way, the first control device 111 may form the three-dimensional model of the workpiece 300 based on the image information on the workpiece 300 acquired each time and the position information and the posture information on the 3D camera 103 acquired each time.
- Alternatively, the first control device 111 may form a three-dimensional model of the workpiece 300 each time based on the image information on the workpiece 300 and the position information and the posture information on the 3D camera 103, and may then form the three-dimensional model of the workpiece 300 again based on the group of formed three-dimensional models of the workpiece 300.
- Specifically, the first control device 111 may perform processing which forms a three-dimensional model of the workpiece 300 based on the image information on the workpiece 300 acquired at the Ath time (for example, the first time) and the position information and the posture information on the 3D camera 103 (Step S105), and may then perform processing which forms a three-dimensional model of the workpiece 300 again based on the three-dimensional model formed by that processing (Step S105), the image information on the workpiece 300 acquired at the Bth time (B ≠ A; for example, the second time and the third time), and the position information and the posture information on the 3D camera 103.
- Similarly, the first control device 111 may perform processing which forms a three-dimensional model of the workpiece 300 based on the image information on the workpiece 300 acquired at the Ath time (for example, the first time and the second time) and the position information and the posture information on the 3D camera 103 (Step S105), and may then perform processing which forms a three-dimensional model of the workpiece 300 again based on the three-dimensional model formed by that processing (Step S105) and the image information on the workpiece 300 acquired at the Bth time (B ≠ A; for example, the third time).
- Alternatively, the first control device 111 may perform processing which forms a three-dimensional model of the workpiece 300 based on the image information on the workpiece 300 acquired at the Cth time (for example, the first time) and the position information and the posture information on the 3D camera 103 (Step S105), may perform processing which forms a three-dimensional model of the workpiece 300 based on the image information on the workpiece 300 acquired at the Dth time (D ≠ C; for example, the second time and the third time) and the position information and the posture information on the 3D camera 103 (Step S105), and may then perform processing which forms a three-dimensional model of the workpiece 300 again based on the three-dimensional model formed at Step S105 from the Cth-time image information and the corresponding position and posture information, and on the three-dimensional model formed at Step S105 from the Dth-time image information and the corresponding position and posture information.
- Similarly, the Cth time may be, for example, the first time and the second time, and the Dth time (D ≠ C) may be, for example, the third time.
- Further, the first control device 111 may perform the above-described processing which forms the three-dimensional model of the workpiece 300 based on the image information acquired at the Ath time and the position information and the posture information on the 3D camera 103 (Step S105), and may then form the three-dimensional model of the workpiece 300 again based on the three-dimensional model re-formed from the model formed by that processing (Step S105), the Bth-time image information, and the position information and the posture information on the 3D camera 103, and on the three-dimensional model re-formed from the models formed at Step S105 from the Cth-time and Dth-time image information and the corresponding position and posture information on the 3D camera 103.
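- The alternatives above amount to incremental or pairwise fusion of partial models: a model formed from one capture (or set of captures) is re-formed together with later captures, or partial models are formed separately and then merged. The sketch below illustrates both patterns on point-cloud partial models; the merge step is shown as simple concatenation, an assumption made only to keep the example short, since the disclosure does not specify a particular registration or fusion algorithm.

```python
import numpy as np

# Stand-ins for back-projected captures (each an Nx3 world-frame point cloud).
capture_points = [np.random.rand(100, 3), np.random.rand(80, 3), np.random.rand(120, 3)]


def partial_model(points_world):
    """A partial 3D model here is simply an Nx3 point cloud in the world frame."""
    return np.asarray(points_world, dtype=float)


def refine_with(model, new_points):
    """Re-form a model from an existing model plus newly acquired points.

    A real system would register and de-duplicate the clouds (e.g. with ICP);
    plain concatenation is used here only to keep the sketch short.
    """
    return np.vstack([model, np.asarray(new_points, dtype=float)])


# Pattern 1: a model from the Ath capture, re-formed with the Bth capture(s).
model_a = partial_model(capture_points[0])                      # Ath time (e.g. the first time)
model_ab = refine_with(model_a, np.vstack(capture_points[1:]))  # Bth time (B != A)

# Pattern 2: separate models from the Cth and Dth captures, then merged.
model_c = partial_model(capture_points[0])                      # Cth time
model_d = partial_model(np.vstack(capture_points[1:]))          # Dth time (D != C)
model_cd = refine_with(model_c, model_d)
```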
- Next, the first control device 111 displays the three-dimensional model of the workpiece 300 formed by the processing of Step S105 on the display 105 as a 3D workpiece 301C (Step S106; see FIG. 9). The 3D workpiece 301C, in which the rectangular through-holes are formed in the lower parts of the left and right side surfaces, the lower part of the rear surface, and the bottom surface of the workpiece 300, is thus displayed on the display 105. The operator can therefore recognize that the formation of the three-dimensional model of the workpiece 300 is finished, and causes the input device etc. to output the three-dimensional model formation end information to the first control device 111.
- The first control device 111 then determines that the model formation end information is inputted from the input device etc. (Yes at Step S107), outputs the three-dimensional model of the workpiece 300 formed at Step S105 to the second control device 112 (Step S108), and ends this program.
- The processor 111a of the first control device 111 may store, in the memory 111b, the data of the three-dimensional model of the workpiece 300 (the 3D workpiece 301C) formed at Step S105 as the first data.
- As described above, since the first control device 111 forms the three-dimensional model of the workpiece based on the image information on the workpiece 300 imaged by the 3D camera 103 and on the position information and the posture information on the 3D camera 103 at the time the 3D camera 103 images the workpiece 300, a programmer does not need to create the three-dimensional model data of the workpiece, which reduces the cost of creating the data.
- Therefore, the production efficiency can be improved as compared with the conventional display system.
- Since the formed three-dimensional model of the workpiece 300 is displayed on the display 105, the operator can judge whether a non-imaged part still remains in the workpiece 300, and can understand from which direction the workpiece 300 can be imaged more efficiently.
- Similarly, in the robot system 100, since the first control device 111 displays the imaged workpiece 300 (the workpiece image 302A etc.) on the display 105, the operator can judge whether a non-imaged part still remains in the workpiece 300, and can understand, based on the workpiece 300 displayed on the display 105 (the workpiece image 302A etc.), from which direction the workpiece 300 can be imaged efficiently.
- When the acquisition of the image information on the workpiece 300 and the acquisition of the position information and the posture information on the 3D camera 103 are performed only once, and the image of the workpiece 300 is displayed or the three-dimensional model of the workpiece 300 is formed using only that acquired information, the workpiece 300 may not be displayed in its complete form, and the three-dimensional model may not be formed in its complete form.
- Therefore, the first control device 111 may perform the processing which forms the three-dimensional model of the workpiece 300 based on the acquired image information and the acquired position information and posture information (Step S105), the processing which displays the formed three-dimensional model on the display 105 (Step S106), and the processing which outputs the first data, which is data of the formed three-dimensional model, to the second control device 112 (Step S108).
- That is, after performing the processing which acquires the image information (Step S101), the processing which acquires, from the sensor 104, the position information and the posture information (Step S102), the processing which displays the acquired image information on the display 105 (Step S103), the processing which forms the three-dimensional model (Step S105), and the processing which displays the three-dimensional model on the display 105 (Step S106), the processing which acquires the image information (Step S101), the processing which acquires, from the sensor 104, the position information and the posture information (Step S102), and the processing which displays the acquired image information on the display 105 (Step S103) are repeated once or more.
- In this case, the first control device 111 may perform, in the processing which forms the three-dimensional model (Step S105), the processing which forms the three-dimensional model based on the image information acquired at the Ath time and the acquired position information and posture information, and the processing which forms the three-dimensional model again based on the three-dimensional model formed by that processing and the image information acquired at the Bth time (B ≠ A), and may then perform the processing which displays the three-dimensional model on the display 105 (Step S106) and the processing which outputs the first data to the second control device 112 (Step S108).
- Likewise, after performing the processing which acquires the image information (Step S101), the processing which acquires, from the sensor 104, the position information and the posture information (Step S102), the processing which displays the acquired image information on the display 105 (Step S103), the processing which forms the three-dimensional model (Step S105), and the processing which displays the three-dimensional model on the display 105 (Step S106), the processing which acquires the image information (Step S101), the processing which acquires, from the sensor 104, the position information and the posture information (Step S102), and the processing which displays the acquired image information on the display 105 (Step S103) are repeated once or more.
- In this case, the first control device 111 may perform, in the processing which forms the three-dimensional model (Step S105), the processing which forms the three-dimensional model based on the image information acquired at the Cth time and the acquired position information and posture information, the processing which forms the three-dimensional model based on the image information acquired at the Dth time (D ≠ C), and the processing which forms the three-dimensional model again based on the three-dimensional model formed from the Cth-time image information and the acquired position information and posture information and on the three-dimensional model formed from the Dth-time image information, and may then perform the processing which displays the three-dimensional model on the display 105 (Step S106) and the processing which outputs the first data to the second control device 112 (Step S108).
- Further, the workarea 201 may be an explosion-proof area, and the manipulation area 202 may be a non-explosion-proof area. In that case, since the first interface 121 and the 3D camera 103 are used inside the manipulation area 202, these devices do not need to be explosion-proof.
- Although the first control device 111 determines at Step S107 whether the three-dimensional model formation end information is inputted, it is not limited to this configuration; the first control device 111 may omit the processing of Step S107.
- FIG. 10 is a schematic diagram illustrating an outline configuration of a robot system of Modification 1 in Embodiment 1.
- The robot system 100 of Modification 1 differs from the robot system 100 according to Embodiment 1 in that a detector 12 and a transmitter 13 are provided to the first interface 121.
- The detector 12 wirelessly detects the position information and the posture information on the first interface 121 and the 3D camera 103.
- The detector 12 is a gyro sensor or a camera, for example.
- The transmitter 13 transmits, to the first control device 111, the position information and the posture information detected by the detector 12.
- In Modification 1, the detector 12 and the transmitter 13 correspond to the sensor 104. Note that the detector 12 does not have to detect both the position information and the posture information; it may detect only the position information or only the posture information.
- Even the robot system 100 of Modification 1 has operation and effects similar to those of the robot system 100 according to Embodiment 1.
- FIG. 11 is a schematic diagram illustrating an outline configuration of a robot system of Modification 2 in Embodiment 1.
- The robot system 100 of Modification 2 differs from the robot system 100 according to Embodiment 1 in that the 3D camera 103 is provided to a tip-end part of a robotic arm 102, and a second interface 122 is additionally provided.
- The robotic arm 102 may be a vertical articulated robotic arm or a horizontal articulated robotic arm.
- The robotic arm 102 is operated by the second interface 122.
- The second interface 122 may be a known interface, such as a joystick, a keyboard, a numeric keypad, or a teach pendant, for example.
- A switch 122B for instructing start/stop of imaging by the 3D camera 103 is provided to the second interface 122.
- The first interface 121 may also serve as the second interface 122.
- In that case, a switch which switches between operation of the robot 101 and operation of the robotic arm 102 may be provided to the first interface 121.
- Even the robot system 100 of Modification 2 has operation and effects similar to those of the robot system 100 according to Embodiment 1.
- The first control device 111 may also serve as the second control device 112; that is, the first control device 111 may also realize the function of the second control device 112. In that case, since the functions of the two kinds of control devices can be implemented by a single control device, the configuration, such as wiring, can be simplified.
- FIG. 12 is a schematic diagram illustrating an outline configuration of a robot system according to Embodiment 2. Note that, in FIG. 12, the directions of the robot are expressed by the directions of the X-axis, the Y-axis, and the Z-axis of the three-dimensional rectangular coordinate system illustrated in the drawing, for convenience.
- The robot system 100 according to Embodiment 2 differs from the robot system 100 according to Embodiment 1 (including its modifications) in that a conveying device 106 is additionally provided.
- The conveying device 106 conveys the workpiece 300 from the manipulation area 202 to a first position, set beforehand, in the workarea 201.
- The conveying device 106 is a known conveying device, such as a belt conveyor.
- The robot system 100 according to Embodiment 2 may be provided with a shutter 107 etc. which permits/inhibits movement of the workpiece 300 from the manipulation area 202 into the workarea 201.
- In Embodiment 2, the first interface 121 is a joystick, and the first interface 121 is configured separately from the 3D camera 103.
- The memory 112b of the second control device 112 may store three-dimensional model information on a scale indicative of a given first range set beforehand.
- The three-dimensional model information on the scale may be, for example, information on a ruler for measuring a distance from the tip end of the robot 101, or information on a cone (truncated-cone) shape indicative of the range into which grains, fluid, or gas is injected.
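- As an illustration of such cone-shaped scale information, the sketch below generates a truncated-cone point set whose narrow end sits at the tool tip and which widens along the injection axis; the display side could render such a point set as the 3D scale 20A. The radii, length, and sampling density are hypothetical parameters chosen only for the example, not values given in the disclosure.

```python
import numpy as np


def truncated_cone_scale(tip, axis, length=0.3, r_near=0.01, r_far=0.08,
                         rings=8, segments=24):
    """Sample points on a truncated cone starting at `tip` and opening along `axis`.

    tip: 3D position of the end-effector tip; axis: spray/injection direction.
    Returns an Nx3 array of points that can be drawn as a 3D scale.
    """
    tip = np.asarray(tip, dtype=float)
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    # Two vectors orthogonal to the axis, used to sweep the circular cross-sections.
    helper = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    points = []
    for t in np.linspace(0.0, 1.0, rings):
        radius = (1.0 - t) * r_near + t * r_far       # linear taper from near to far radius
        center = tip + t * length * axis
        for ang in np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False):
            points.append(center + radius * (np.cos(ang) * u + np.sin(ang) * v))
    return np.array(points)
```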
- FIG. 13 is a schematic diagram illustrating a state of the workarea seen from a window, in the robot system illustrated in FIG. 12 .
- FIG. 14 is a flowchart illustrating one example of operation of the robot system according to Embodiment 2.
- When seen from the manipulation area 202, the workpiece 300 and the robot 101 may overlap with each other, and the spatial relationship between the tip-end part of the robot 101 (the end effector 20) and the workpiece 300 may be difficult to grasp. In such a case, it would be conceivable to install a camera inside the workarea 201, image the tip-end part of the robot 101 (the end effector 20) and the workpiece 300, and display the captured image to the operator.
- However, when the robot 101 paints or welds the workpiece 300, such a camera needs to be explosion-proof.
- An explosion-proof camera is high in cost, and therefore the facility cost increases.
- Further, depending on the work, the imaging location of the camera may need to be changed, which increases the operator's workload.
- Therefore, in the robot system 100 according to Embodiment 2, the second control device 112 performs the following operation (processing) using the three-dimensional model of the workpiece 300 created by the first control device 111.
- First, the processor 112a of the second control device 112 acquires the first data, which is the three-dimensional model data of the workpiece 300, from the memory 111b of the first control device 111 (Step S201). Note that, if the first data is stored in the memory 112b, the processor 112a may acquire the first data from the memory 112b.
- Next, the processor 112a of the second control device 112 acquires the position information on the first position, which is the conveyance position of the workpiece 300 inside the workarea 201, from the memory 112b (Step S202).
- Next, the processor 112a of the second control device 112 acquires second data, which is the three-dimensional model data of the robot 101, from the memory 112b (Step S203).
- Here, three-dimensional model data of the robot 101 created beforehand by a programmer may be stored in the memory 112b.
- Alternatively, the first control device 111 may form the three-dimensional model of the robot 101 by using the 3D camera 103 installed in the first interface 121, and this three-dimensional model data may be stored in the memory 112b.
- Next, the processor 112a of the second control device 112 causes the conveying device 106 to convey the workpiece 300 disposed in the manipulation area 202 to the first position in the workarea 201 (Step S204).
- Note that the second control device 112 may perform the processing of Steps S201 to S203 beforehand, before instruction information is inputted from the input device etc., or may perform it after the processing of Step S204.
- Next, the processor 112a of the second control device 112 acquires manipulation information (operational information) on the robot 101 outputted from the first interface 121 when the operator operates the first interface 121 (Step S205).
- Next, the processor 112a of the second control device 112 operates the robot 101 based on the manipulation information acquired at Step S205 (Step S206).
- Next, the processor 112a of the second control device 112 displays, on the display 105, the spatial relationship between the tip-end part of the robot 101 (the end effector 20) and the workpiece 300 as a three-dimensional model, based on the first data, the first position, and the second data acquired by the processing of Steps S201 to S203, and on the manipulation information acquired at Step S205 (Step S207).
- At this time, the processor 112a of the second control device 112 displays on the display 105 the 3D workpiece 301 and a 3D robot 101A as seen from a direction different from the direction in which the operator looks at the robot 101 from the manipulation area 202.
- The direction in which the operator looks at the robot 101 from the manipulation area 202 may be, for example, the direction in which the operator looks at the robot 101 through the window 204 of the manipulation area 202 (here, the X-direction).
- Alternatively, a motion sensor may be disposed inside the manipulation area 202, and this direction may be the direction of a straight line which connects the coordinates of the operator's position detected by the motion sensor and the coordinates of the position of the robot 101.
- The direction different from the direction in which the operator looks at the robot 101 from the manipulation area 202 may be any direction in Embodiment 2 as long as it is a direction other than the X-direction; for example, it may be a direction perpendicular to the X-direction (here, the Y-direction or the Z-direction).
- For example, the processor 112a of the second control device 112 may display on the display 105 the 3D workpiece 301 and the 3D robot 101A as they are seen from a direction different from the direction (here, the X-direction) in which the operator looks at the robot 101 from the window 204 of the manipulation area 202.
- In other words, the processor 112a of the second control device 112 may display on the display 105 the spatial relationship between the tip-end part of the robot 101 (the end effector 20) and the workpiece 300 as seen from the Y-direction.
- Further, the processor 112a of the second control device 112 may display on the display 105 a 3D scale 20A, which is a three-dimensional model of the scale, at the tip end of the end effector 20 of the robot 101 (the 3D robot 101A) (see FIG. 12).
- Next, the processor 112a of the second control device 112 determines whether instruction information indicative of the end of the work on the workpiece 300 is inputted by the operator via the input device etc. (not illustrated) (Step S208).
- The processor 112a of the second control device 112 repeats the processing of Steps S205 to S208 until it determines that the instruction information indicative of the end of the work on the workpiece 300 is inputted.
- When it determines that the instruction information indicative of the end of the work is inputted, the processor 112a of the second control device 112 ends this program.
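- Putting Steps S201 to S208 together, the second control device 112 could be structured roughly as the loop sketched below. The memory, conveyor, interface, robot, display, and UI objects are hypothetical stand-ins for the components described above, and the comments map each call to a step in FIG. 14; the view-direction choice and rendering call are likewise only illustrative.

```python
def run_work_session(first_memory, second_memory, conveyor, interface, robot, display, ui):
    """Minimal sketch of the FIG. 14 flow (Steps S201 to S208); all collaborators are stubs."""
    workpiece_model = first_memory.load_workpiece_model()   # Step S201: first data
    first_position = second_memory.load_first_position()    # Step S202: conveyance position
    robot_model = second_memory.load_robot_model()          # Step S203: second data

    conveyor.convey_to(first_position)                      # Step S204: move workpiece into the workarea

    while not ui.work_end_requested():                      # Step S208: end-of-work instruction?
        command = interface.read_manipulation()             # Step S205: operator's manipulation information
        robot.apply(command)                                 # Step S206: operate the robot accordingly
        # Step S207: render the workpiece and robot models from a viewpoint other than the
        # operator's own viewing direction (e.g. the Y-direction instead of the X-direction).
        display.show_scene(workpiece_model, robot_model,
                           workpiece_pose=first_position,
                           robot_state=robot.current_state(),
                           view_direction="Y")
```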
- As described above, in the robot system 100 according to Embodiment 2, the second control device 112 displays the spatial relationship between the tip-end part of the robot 101 (the end effector 20) and the workpiece 300 on the display 105 as a three-dimensional model, using the three-dimensional model of the workpiece 300 created by the first control device 111.
- Further, since the second control device 112 displays on the display 105 the 3D scale 20A at the tip end of the end effector 20 of the robot 101 (the 3D robot 101A), the operator's burden can be reduced and the work efficiency can be improved.
- The robot system 100 according to Embodiment 2 is also provided with the conveying device 106, which conveys the workpiece 300 from the manipulation area 202 to the first position, set beforehand, in the workarea 201. Therefore, the workpiece 300 imaged by the 3D camera 103 in the manipulation area 202 can be moved to the exact position suitable for the work.
- In the robot system 100 according to Embodiment 2, the first interface 121, which operates the robot 101 and is disposed inside the manipulation area 202, is further provided.
- The second control device 112 has the memory 112b, which stores the second data, which is data of the three-dimensional model of the robot 101. Further, when the operator operates the first interface 121 to make the robot 101 perform the work on the workpiece 300, the second control device 112 displays on the display 105 the spatial relationship between the workpiece 300 and the tip-end part of the robot 101 as seen from the direction different from the direction in which the operator looks at the robot 101 from the manipulation area 202, based on the first data inputted by the output of the first data to the second control device 112, the position information on the first position of the conveyed workpiece, the second data, and the manipulation information on the robot 101 inputted from the first interface 121. According to this configuration, the operator can easily understand the spatial relationship between the workpiece 300 and the tip-end part of the robot 101, which is hard to grasp by directly looking at the robot 101 and the workpiece 300.
- Moreover, in the robot system 100, the 3D camera 103 is attached to the first interface 121. According to this configuration, since the device which the operator should grip is a single object, it becomes easier to operate the first interface 121 and the 3D camera 103.
- The sensor 104 may wirelessly detect the position information and the posture information on the first interface 121, and the second control device 112 may calculate the locus of the first interface 121 based on the position information and the posture information on the first interface 121 detected by the sensor 104, and may perform the processing which operates the robot 101 in real time based on the calculated locus (Step S206).
- Since this makes it easier to move the first interface 121, the robot 101 can be operated accurately.
- The second control device 112 may store, in the memory 112b, the work performed by the robot 101 (the operational information on the first interface 121) based on the manipulation information produced by the operator operating the first interface 121. Further, the second control device 112 may automatically operate the robot 101 according to the operational information on the first interface 121 stored in the memory 112b.
- FIG. 15 is a schematic diagram illustrating an outline configuration of a robot system of Modification 1 in Embodiment 2.
- The robot system 100 of Modification 1 differs from the robot system 100 according to Embodiment 2 in that the second control device 112 operates the robot 101 (the end effector 20), based on the position information and the posture information on the first interface 121 inputted from the sensor 104, so as to follow the movement of the tip-end part of the first interface 121.
- That is, the second control device 112 calculates the locus of the first interface 121 based on the position information and the posture information on the first interface 121 detected by the sensor 104, and operates the robot 101 in real time.
- Specifically, the second control device 112 may calculate the locus of the first interface 121 based on the position information and the posture information, in a three-dimensional space, on the first interface 121 detected by the sensor 104, and, based on the calculated locus, may cause the robot 101 to perform in real time any of an injecting work which injects fluid or gas onto the workpiece 300, a cutting work which cuts the workpiece 300, a polishing work which polishes the workpiece 300, a welding work which welds the workpiece 300, and a washing work which washes the workpiece 300.
- Here, each of the injecting work, the cutting work, the polishing work, the welding work, and the washing work is a series of operations performed on the workpiece 300 by the robot 101, and is a concept which includes a plurality of operations.
- The work includes, for example, an operation in which the robot 101 approaches the workpiece 300, an operation in which the robot 101 starts injecting fluid etc. onto the workpiece 300, an operation in which the robot 101 stops injecting the fluid etc., and an operation in which the robot 101 separates from the workpiece 300.
- FIG. 16 is a flowchart illustrating one example of operation of the robot system of Modification 1 in Embodiment 2. As illustrated in FIG. 16 , the operation of the robot system 100 of Modification 1 differs from the operation of the robot system 100 according to Embodiment 2 in that processing (operation) of Steps S205A and S205B is performed, instead of the processing of Step S205.
- The processor 112a of the second control device 112 acquires, from the sensor 104, the position information and the posture information on the first interface 121 detected by the sensor 104 (Step S205A). Next, the processor 112a of the second control device 112 calculates the locus of the first interface 121 based on the position information and the posture information on the first interface 121 which are acquired at Step S205A (Step S205B).
- Next, the processor 112a of the second control device 112 operates the robot 101 in real time based on the locus of the first interface 121 calculated at Step S205B (Step S206).
- Next, the processor 112a of the second control device 112 displays the spatial relationship between the tip-end part of the robot 101 (end effector 20) and the workpiece 300 on the display 105 as the three-dimensional model, based on the first data, the first position, and the second data which are acquired by the processing of Steps S201 to S203, and the locus of the first interface 121 calculated at Step S205B (Step S207).
- That is, the second control device 112 displays the 3D workpiece 301 and the 3D robot 101A on the display 105.
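- One way to picture how the 3D workpiece 301 and the 3D robot 101A could be drawn from a viewing direction other than the operator's is the simple pinhole projection below. The 4x4 pose convention and the intrinsic parameters are assumptions made only for this sketch.

```python
import numpy as np

def project_to_display_view(points_world: np.ndarray, world_from_cam: np.ndarray,
                            fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Project world-frame model points into a virtual camera chosen freely for
    the display, e.g. a side view the operator cannot obtain through the window."""
    cam_from_world = np.linalg.inv(world_from_cam)
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    pts_cam = (cam_from_world @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0                      # keep points in front of the camera
    pts_cam = pts_cam[in_front]
    u = fx * pts_cam[:, 0] / pts_cam[:, 2] + cx
    v = fy * pts_cam[:, 1] / pts_cam[:, 2] + cy
    return np.stack([u, v], axis=1)                   # pixel coordinates to draw on the display
```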
- Next, the processor 112a of the second control device 112 determines whether the instruction information indicative of the end of the work for the workpiece 300 is inputted via the input device etc. (not illustrated) by the operator (Step S208).
- The processor 112a of the second control device 112 repeats the processing of Steps S205A to S208 until it determines that the instruction information indicative of the end of the work for the workpiece 300 is inputted.
- If it determines that the instruction information indicative of the end of the work for the workpiece 300 is inputted, the processor 112a of the second control device 112 ends this program.
- As described above, in the robot system 100 of Modification 1, the second control device 112 calculates the locus of the first interface 121 based on the position information and the posture information on the first interface 121 which are detected by the sensor 104, and operates the robot 101 in real time.
- Further, the second control device 112 displays the spatial relationship between the tip-end part of the robot 101 (end effector 20) and the workpiece 300 on the display 105 as the three-dimensional model, using the three-dimensional model of the workpiece 300 created by the first control device 111.
- The second control device 112 may have the memory 112b which stores the second data which is data of the three-dimensional model of the robot 101.
- Further, the second control device 112 may display on the display 105 the spatial relationship between the workpiece 300 and the tip-end part of the robot 101 as they are seen from a direction different from the direction in which the operator looks at the robot 101 from the manipulation area 202, based on the first data inputted by the processing which outputs the first data to the second control device 112 (Step S108), the position information on the first position of the conveyed workpiece 300, the second data, and the locus calculated by the processing which operates the robot 101 in real time (Step S206).
- According to this configuration, the operator can now understand the spatial relationship between the workpiece 300 and the tip-end part of the robot 101, which is hard to understand by directly looking at the robot 101 and the workpiece 300.
- The second control device 112 may calculate the locus of the first interface 121 which is produced by the operator moving (operating) the first interface 121, and store in the memory 112b the work which is performed by the robot 101 (locus information on the first interface 121) based on the calculated locus. Further, the second control device 112 may operate the robot 101 according to the locus information on the first interface 121 stored in the memory 112b.
- FIGs. 17 and 18 are schematic diagrams illustrating an outline configuration of a robot system of Modification 2 in Embodiment 2.
- Note that, in FIGs. 17 and 18, the directions of the robot are expressed as the directions of the X-axis, the Y-axis, and the Z-axis in the three-dimensional rectangular coordinate system illustrated in the drawings, for convenience.
- The robot system 100 of Modification 2 differs from the robot system 100 according to Embodiment 2 in that the second control device 112 displays on the display 105 a line 30 indicative of a normal direction of a given first part of the workpiece 300, which is set beforehand, based on the three-dimensional model information on the workpiece 300.
- The first part may be a part which opposes the tip end of the end effector 20 of the robot 101.
- The robot system 100 of Modification 2 may further be provided with an alarm 150.
- The alarm 150 may display character data or image data on the display 105, may inform by sound from a speaker etc., or may inform by light or color. Alternatively, it may inform a smartphone, a cellular phone, or a tablet computer by an e-mail or an application via a communication network.
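- As a small illustrative sketch, the alarm 150 can be thought of as fanning one message out to whichever of these channels are present; the class and method names are hypothetical, not taken from this disclosure.

```python
class Alarm:
    """Hypothetical model of the alarm 150: display text, sound, light, or e-mail."""
    def __init__(self, *channels):
        # Each channel object only needs an emit(message) method.
        self.channels = [c for c in channels if c is not None]

    def notify(self, message: str) -> None:
        for channel in self.channels:
            channel.emit(message)
```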
- When the direction of the tip-end part of the robot 101 (end effector 20) agrees with the line 30, the second control device 112 may change the color and/or the thickness of the line 30, and display it on the display 105 (see FIG. 18).
- In this case, the second control device 112 may also activate the alarm 150 to inform the operator of the agreement.
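- A minimal sketch of how the agreement between the line 30 (the normal direction of the first part) and the direction of the end effector 20 could be checked is given below; the 5-degree tolerance and the colour values are assumptions for illustration only.

```python
import numpy as np

def check_normal_alignment(part_normal, tool_axis, tol_deg: float = 5.0):
    """Return (angle_deg, aligned): the angle between the normal of the given
    first part (from the workpiece's three-dimensional model) and the axis of
    the end effector, and whether they agree within the tolerance."""
    n = np.asarray(part_normal, dtype=float)
    a = np.asarray(tool_axis, dtype=float)
    n /= np.linalg.norm(n)
    a /= np.linalg.norm(a)
    # abs() treats approach along the normal or against it as the same alignment.
    angle = np.degrees(np.arccos(np.clip(abs(np.dot(n, a)), -1.0, 1.0)))
    return angle, angle <= tol_deg

# Example: emphasize the line 30 and raise the alarm when they agree.
angle, aligned = check_normal_alignment([0.0, 0.0, 1.0], [0.02, 0.01, 0.999])
line_color = "red" if aligned else "gray"
```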
- The robot system 100 of Modification 2 has similar operation and effects to the robot system 100 according to Embodiment 2.
- FIG. 19 is a schematic diagram illustrating an outline configuration of a robot system according to Embodiment 3. Note that, in FIG. 19 , the directions of the robot are expressed as the directions of the X-axis, the Y-axis, and the Z-axis in the three-dimensional rectangular coordinate system illustrated in the drawing, for convenience.
- The robot system 100 according to Embodiment 3 differs from the robot system 100 according to Embodiment 2 in that the alarm 150 is disposed inside the manipulation area 202.
- The alarm 150 may display character data or image data on the display 105, may inform by sound from a speaker etc., or may inform by light or color. Alternatively, it may inform a smartphone, a cellular phone, or a tablet computer by an e-mail or an application via a communication network.
- FIGs. 20A and 20B are flowcharts illustrating one example of operation of the robot system according to Embodiment 3. As illustrated in FIGs. 20A and 20B , the operation of the robot system 100 according to Embodiment 3 differs from the operation of the robot system 100 according to Embodiment 2 in that processing of Steps S207A to S207C is performed between Step S207 and Step S208.
- The processor 112a of the second control device 112 displays the spatial relationship between the tip-end part of the robot 101 (end effector 20) and the workpiece 300 on the display 105 as the three-dimensional model, based on the first data, the first position, and the second data which are acquired by the processing of Steps S201 to S203, and the manipulational command information acquired at Step S205 (Step S207).
- Next, the processor 112a of the second control device 112 calculates a distance A between the robot 101 (3D robot 101A) and the workpiece 300 (3D workpiece 301) based on the first data, the second data, and the manipulational command information acquired at Step S205 (Step S207A).
- Here, the processor 112a of the second control device 112 may calculate a distance between a part of the robot 101 (3D robot 101A) nearest to the workpiece 300 (3D workpiece 301) and the workpiece 300 (3D workpiece 301).
- For example, the processor 112a of the second control device 112 may calculate a distance between the tip end of the end effector 20 and the workpiece 300. Moreover, when a certain part of the robot 101 is located at the position nearest to the workpiece 300, the processor 112a of the second control device 112 may calculate a distance between this part of the robot 101 and the workpiece 300.
- Next, the processor 112a of the second control device 112 determines whether the distance A calculated at Step S207A is less than a given first distance set beforehand (Step S207B).
- The first distance may be set based on the operating speed of the robot 101, the contents of the work for the workpiece 300, etc.
- For example, when the operating speed of the robot 101 is slower, the first distance may be set smaller. Further, also when the work for the workpiece 300 is welding, cutting, washing, or polishing work, the first distance may be set smaller.
- Conversely, when the operating speed of the robot 101 is faster, the first distance may be set larger. Further, also when the work for the workpiece 300 is injecting/spraying work of fluid, the first distance may be set larger.
- The first distance may be 0.5 cm or more from the viewpoint of suppressing a collision with the workpiece 300, and may be 30 cm or less from the viewpoint of performing the work to the workpiece 300.
- If it determines that the distance A is less than the first distance (Yes at Step S207B), the processor 112a of the second control device 112 activates the alarm 150 to inform a warning about a possibility of collision with the workpiece 300 (Step S207C).
- At this time, the processor 112a of the second control device 112 may reduce the operating speed of the robot 101, or may stop the robot 101.
- Therefore, the operator can recognize the possibility of the robot 101 colliding with the workpiece 300, and can operate the robot 101 by using the interface 102 so that the robot 101 does not collide with the workpiece 300.
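- The distance check of Steps S207A to S207C can be pictured with the sketch below. The brute-force point-to-point distance, the speed/work-type mapping, and the controller hooks are assumptions made only for illustration; the 0.5 cm to 30 cm range is the one mentioned above.

```python
import numpy as np

def first_distance_for(work: str, speed_mm_s: float) -> float:
    """Pick the first distance (in cm) from the work type and robot speed,
    clamped to the 0.5 cm - 30 cm range. The exact mapping is an assumption."""
    base = 2.0 if work in {"welding", "cutting", "washing", "polishing"} else 10.0
    return float(np.clip(base * (1.0 + speed_mm_s / 500.0), 0.5, 30.0))

def proximity_alarm(robot_pts, workpiece_pts, first_distance_cm, alarm, robot):
    """Steps S207A-S207C in miniature: compute the distance A between the 3D
    robot and the 3D workpiece (brute force over sampled points) and compare it
    against the first distance; alarm.notify and robot.scale_speed are assumed hooks."""
    d = np.linalg.norm(robot_pts[:, None, :] - workpiece_pts[None, :, :], axis=-1)
    distance_a = float(d.min())
    if distance_a < first_distance_cm:
        alarm.notify("possible collision with the workpiece")
        robot.scale_speed(0.5)       # or robot.stop(); both are assumed controller hooks
    return distance_a
```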
- Next, the processor 112a of the second control device 112 acquires the manipulational command information inputted from the interface 102. That is, the processor 112a of the second control device 112 returns to the processing of Step S205.
- On the other hand, if it determines that the distance A is not less than the first distance (No at Step S207B), the processor 112a of the second control device 112 determines whether the instruction information indicative of the end of the work for the workpiece 300 is inputted via the input device etc. (not illustrated) by the operator (Step S208).
- The processor 112a of the second control device 112 repeats the processing of Steps S205 to S208 until it determines that the instruction information indicative of the end of the work for the workpiece 300 is inputted.
- If it determines that the instruction information indicative of the end of the work for the workpiece 300 is inputted, the processor 112a of the second control device 112 ends this program.
- The robot system 100 according to Embodiment 3 has similar operation and effects to the robot system 100 according to Embodiment 2.
- Further, the robot system 100 is provided with the conveying device 106 which conveys the workpiece 300 from the manipulation area 202 to the first position in the workarea 201 which is set beforehand.
- According to this configuration, the workpiece 300 imaged by the 3D camera 103 in the manipulation area 202 can be moved to the exact position suitable for the work.
- Further, the first interface 121, which operates the robot 101 and is disposed inside the manipulation area 202, is provided.
- The second control device 112 has the memory 112b which stores the second data which is data of the three-dimensional model of the robot 101. Further, when the operator operates the first interface 121 to manipulate the robot 101 to perform the work for the workpiece 300, the second control device 112 displays on the display 105 the spatial relationship between the workpiece 300 and the tip-end part of the robot 101 as they are seen from a direction different from the direction in which the operator looks at the robot 101 from the manipulation area 202, based on the first data inputted by the output of the first data to the second control device 112, the position information on the first position of the conveyed workpiece, the second data, and the manipulation information on the robot 101 inputted from the first interface 121. According to the above configuration, the operator can now understand the spatial relationship between the workpiece 300 and the tip-end part of the robot 101, which is hard to understand by directly looking at the robot 101 and the workpiece 300.
- Further, the 3D camera 103 is attached to the first interface 121. According to this configuration, since the device which the operator should grip is a single object, it becomes easier to operate the first interface 121 and the 3D camera 103.
- The sensor 104 may detect the position information and the posture information from the first interface 121 wirelessly, and the second control device 112 may calculate the locus of the first interface 121 based on the position information and the posture information from the first interface 121 which are detected by the sensor 104, and may perform the processing which operates the robot 101 in real time based on the calculated locus (Step S206).
- According to this configuration, since it becomes easier to move the first interface 121, the robot 101 can be operated correctly.
Abstract
A robot system (100) of the present disclosure includes a robot (101) installed in a workarea (201) and controlled by a second control device (112), a 3D camera (103) operated by an operator, a sensor (104) that is disposed in a manipulation area (202) that is a space different from the workarea (201), and wirelessly detects position information and posture information on the 3D camera (103), a display (105), and a first control device (111). The first control device (111) acquires image information on a workpiece (300) imaged by the 3D camera (103), acquires, from the sensor (104), the position information and the posture information when the workpiece (300) is imaged by the 3D camera (103), displays the acquired image information on the display (105), forms a three-dimensional model of the workpiece (300) based on the image information, and the acquired position information and posture information, displays the formed three-dimensional model on the display (105), and outputs first data that is data of the formed three-dimensional model to the second control device (112).
Description
- This application claims the benefit of priority to Japanese Patent Application No. 2019-225562 filed on December 13, 2019.
- The present disclosure relates to a robot system, and a method of forming a three-dimensional model of a workpiece.
- Display systems in which a three-dimensional image indicating a work cell of a robot is displayed on various personal digital assistants are known (for example, see Patent Document 1). The display system disclosed in Patent Document 1 has a display which generates a three-dimensional robot model and robot work cell data containing models of other structures of the work cell, and displays a three-dimensionally rendered image, such as the generated three-dimensional robot model.
- Further, in the display system disclosed in Patent Document 1, when a user (operator) operates the robot displayed on the display by using a user interface, a collision object (for example, a safety wall) is indicated on the display before the robot operates. Then, when the physical robot collides with the collision object due to the operation of the user, the collision is displayed on the display.
- [Patent Document 1] JP2019-177477A
- However, in the display system disclosed in Patent Document 1, a programmer must create the three-dimensional data, such as a three-dimensional robot model and an animated three-dimensional robot model. Therefore, when the kinds of workpieces to be worked by the robot increase, the data that needs to be created also increases significantly, thereby increasing the expense of preparing the data. Particularly, when the workpieces are small in number, the ratio of the data preparation expense to the production cost becomes larger.
- Therefore, there is still room for improvement in the display system disclosed in Patent Document 1 in terms of production efficiency.
- The present disclosure is to solve the above problem, and one purpose thereof is to provide a robot system and a method of forming a three-dimensional model of a workpiece, capable of improving the production efficiency, as compared with the conventional display system.
- A robot system according to the present disclosure includes a robot installed in a workarea and controlled by a second control device, a 3D camera operated by an operator, a sensor that is disposed in a manipulation area that is a space different from the workarea, and wirelessly detects position information and posture information on the 3D camera, a display, and a first control device. The first control device acquires image information on a workpiece imaged by the 3D camera, acquires, from the sensor, the position information and the posture information when the workpiece is imaged by the 3D camera, displays the acquired image information on the display, forms a three-dimensional model of the workpiece based on the image information, and the acquired position information and posture information, displays the formed three-dimensional model on the display, and outputs first data that is data of the formed three-dimensional model to the second control device.
- According to this robot system, since it is not necessary for a programmer to create the three-dimensional model data of the workpiece, the cost for creating the data can be reduced. Thus, as compared with the conventional display system, it can improve the production efficiency.
- A method of forming a three-dimensional model of a workpiece according to the present disclosure includes the steps of detecting position information and posture information on a 3D camera, when the 3D camera images the workpiece disposed in a manipulation area that is a space different from a workarea where a robot is installed, acquiring image information on the imaged workpiece, acquiring the detected position information and posture information, displaying the acquired image information on a display, and forming the three-dimensional model of the workpiece based on the acquired image information, and the acquired position information and posture information.
- According to this method of forming the three-dimensional model of the workpiece, since it is not necessary for a programmer to create the three-dimensional model data of the workpiece, the cost for creating the data can be reduced. Thus, as compared with the conventional display system, it can improve the production efficiency.
- FIG. 1 is a schematic diagram illustrating an outline configuration of a robot system according to Embodiment 1.
- FIG. 2 is a schematic diagram illustrating an outline configuration of the robot in the robot system illustrated in FIG. 1.
- FIG. 3 is a flowchart illustrating one example of operation of the robot system according to Embodiment 1.
- FIG. 4 is a schematic diagram illustrating a state inside a manipulation area when the robot system operates in accordance with the flowchart illustrated in FIG. 3.
- FIG. 5 is a schematic diagram illustrating the state inside the manipulation area when the robot system operates in accordance with the flowchart illustrated in FIG. 3.
- FIG. 6 is a schematic diagram illustrating the state inside the manipulation area when the robot system operates in accordance with the flowchart illustrated in FIG. 3.
- FIG. 7 is a schematic diagram illustrating the state inside the manipulation area when the robot system operates in accordance with the flowchart illustrated in FIG. 3.
- FIG. 8 is a schematic diagram illustrating the state inside the manipulation area when the robot system operates in accordance with the flowchart illustrated in FIG. 3.
- FIG. 9 is a schematic diagram illustrating the state inside the manipulation area when the robot system operates in accordance with the flowchart illustrated in FIG. 3.
- FIG. 10 is a schematic diagram illustrating an outline configuration of a robot system of Modification 1 in Embodiment 1.
- FIG. 11 is a schematic diagram illustrating an outline configuration of a robot system of Modification 2 in Embodiment 1.
- FIG. 12 is a schematic diagram illustrating an outline configuration of a robot system according to Embodiment 2.
- FIG. 13 is a schematic diagram illustrating a state of a workarea seen from a window, in the robot system illustrated in FIG. 12.
- FIG. 14 is a flowchart illustrating one example of operation of the robot system according to Embodiment 2.
- FIG. 15 is a schematic diagram illustrating an outline configuration of a robot system of Modification 1 in Embodiment 2.
- FIG. 16 is a flowchart illustrating one example of operation of the robot system of Modification 1 in Embodiment 2.
- FIG. 17 is a schematic diagram illustrating an outline configuration of a robot system of Modification 2 in Embodiment 2.
- FIG. 18 is a schematic diagram illustrating the outline configuration of the robot system of Modification 2 in Embodiment 2.
- FIG. 19 is a schematic diagram illustrating an outline configuration of a robot system according to Embodiment 3.
- FIG. 20A is a flowchart illustrating one example of operation of the robot system according to Embodiment 3.
- FIG. 20B is a flowchart illustrating one example of the operation of the robot system according to Embodiment 3.
- Hereinafter, embodiments of the present disclosure will be described with reference to the drawings. Note that, throughout the drawings, the same reference characters are given to the same or corresponding parts to omit redundant explanations. Further, throughout the drawings, components for describing the present disclosure are selectively illustrated, and illustration about other components may be omitted. Moreover, the present disclosure is not limited to the following embodiments.
- FIG. 1 is a schematic diagram illustrating an outline configuration of a robot system according to Embodiment 1. The robot system 100 includes a robot 101, a first interface 121, a 3D camera 103, a sensor 104, a display 105, a first control device 111, and a second control device 112. The robot 101 and the second control device 112 are installed inside a workarea 201, and the sensor 104, the display 105, and the first control device 111 are disposed inside a manipulation area 202. Further, the first interface 121 is gripped (held) and operated by an operator inside the manipulation area 202. The 3D camera 103 is disposed at a tip-end part of the first interface 121.
- Note that, in Embodiment 1, although the 3D camera 103 is provided to the tip-end part of the first interface 121, the 3D camera 103 may not be provided to the first interface 121, and the 3D camera 103 may be provided separately from the first interface 121.
- The workarea 201 is a space where the robot 101 is installed, and includes at least a space inside an operating range of the robot 101. Further, the manipulation area 202 is a space separated from the workarea 201 (a space different from the workarea 201). The workarea 201 and the manipulation area 202 may be divided by a wall member 203.
- The wall member 203 is provided with a window 204, and therefore, the robot 101 disposed inside the workarea 201 is visible to the operator. Note that the workarea 201 may be an explosion-proof area where an explosion-proof specification is applied, and the manipulation area 202 may be a non-explosion-proof area where the explosion-proof specification is not applied.
- The sensor 104 wirelessly detects position information and posture information on the 3D camera 103 (for example, its lens), and outputs them to the first control device 111. Further, the sensor 104 wirelessly detects position information and posture information on the tip-end part of the first interface 121, and outputs them to the second control device 112. Note that the sensor 104 may perform the output to the first control device 111 and/or the second control device 112 wirelessly or by wire.
- The sensor 104 may be an infrared sensor or a camera, for example. Note that, if the sensor 104 is a camera, the sensor 104 may not be disposed inside the manipulation area 202. For example, the camera may be a camera installed in a personal digital assistant or a head mounted display which the operator carries. Further, the sensor 104 which detects the position information on the 3D camera 103 and the sensor 104 which detects the position information on the tip-end part of the first interface 121 may be comprised of the same sensor, or may be comprised of different sensors.
- As for the first interface 121, the operator grips a gripper 121A and operates the robot 101. In detail, since the robot 101 operates so as to follow the locus of a tip-end part of an interface body 121E of the gripped first interface 121, the operator can manipulate the robot 101 intuitively with the first interface 121 inside the manipulation area 202.
- In the gripper 121A, an apparatus may be disposed which transmits to the operator inner force sense information detected by a force sensor provided to an end effector 20 of the robot 101, which is described later, or voice information. This apparatus includes, for example, a vibration motor, a speaker, and a mechanism which expands and contracts a casing which constitutes the gripper.
- Further, the first interface 121 may be provided with a switch 121B which starts/stops spraying or injecting grains, fluid, or gas to a workpiece 300, or cutting or polishing of the workpiece 300.
- Note that the first interface 121 may be configured to be portable by the operator. Further, the interface body 121E of the first interface 121 may be formed in the same shape as the end effector 20 of the robot 101. Moreover, the first interface 121 may use known interfaces, such as a joystick, a keyboard, ten keys, and a teach pendant, for example.
- The 3D camera 103 outputs image information imaged inside the manipulation area 202 to the first control device 111. Note that the term "image information" as used herein includes at least one of still image information, moving image information, and video information. Further, the term "image information" in the following explanation is similar. When the 3D camera 103 images the workpiece 300 etc. inside the manipulation area 202, the sensor 104 detects the position information and the posture information on the 3D camera 103 wirelessly, and outputs them to the first control device 111.
- The display 105 displays the three-dimensional model of the workpiece 300 and the robot 101 outputted from the first control device 111, and the image information on the workpiece 300 etc. imaged by the 3D camera 103. The display 105 may be comprised of a non-portable display, which is installed and used on a desk, a floor, etc., for example. Further, the display 105 may be comprised of a head mounted display or glasses which the operator wears and uses.
- The end effector 20 of the robot 101 may have a structure capable of spraying or injecting grains, fluid, or gas to the workpiece 300, or may have a structure capable of cutting or polishing the workpiece 300, or may have a structure capable of welding the workpiece 300, or may have a structure capable of washing the workpiece 300. Here, a configuration of the robot 101 is described in detail with reference to FIG. 2.
- FIG. 2 is a schematic diagram illustrating an outline configuration of the robot in the robot system illustrated in FIG. 1.
- As illustrated in FIG. 2, the robot 101 is a vertical articulated robotic arm provided with a serially-coupled body comprised of a plurality of links (here, a first link 11a, a second link 11b, a third link 11c, a fourth link 11d, a fifth link 11e, and a sixth link 11f), a plurality of joints (here, a first joint JT1, a second joint JT2, a third joint JT3, a fourth joint JT4, a fifth joint JT5, and a sixth joint JT6), and a pedestal 15 which supports the serially-coupled body and the joints. Note that, in Embodiment 1, although the vertical articulated robot is adopted as the robot 101, it is not limited to this configuration, but it may adopt a horizontal articulated robot.
- In the first joint JT1, the pedestal 15 and a base-end part of the first link 11a are coupled to each other swivelably on an axis extending in the vertical direction. In the second joint JT2, a tip-end part of the first link 11a and a base-end part of the second link 11b are coupled to each other pivotably on an axis extending in the horizontal direction. In the third joint JT3, a tip-end part of the second link 11b and a base-end part of the third link 11c are coupled to each other pivotably on an axis extending in the horizontal direction.
- In the fourth joint JT4, a tip-end part of the third link 11c and a base-end part of the fourth link 11d are coupled to each other rotatably on an axis extending in the longitudinal direction of the fourth link 11d. In the fifth joint JT5, a tip-end part of the fourth link 11d and a base-end part of the fifth link 11e are coupled to each other pivotably on an axis perpendicular to the longitudinal direction of the fourth link 11d. In the sixth joint JT6, a tip-end part of the fifth link 11e and a base-end part of the sixth link 11f are twistably coupled to each other.
- A mechanical interface is provided to a tip-end part of the sixth link 11f. The end effector 20 is detachably attached to the mechanical interface, corresponding to the contents of work.
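- The joint arrangement described above can be tabulated as in the short sketch below; this is only a plain restatement of the text for reference, with arbitrary field names.

```python
from dataclasses import dataclass

@dataclass
class Joint:
    name: str
    parent_link: str
    child_link: str
    axis: str           # rotation axis of the joint, as described above

JOINTS = [
    Joint("JT1", "pedestal 15",    "first link 11a",  "vertical (swivel)"),
    Joint("JT2", "first link 11a", "second link 11b", "horizontal (pivot)"),
    Joint("JT3", "second link 11b", "third link 11c", "horizontal (pivot)"),
    Joint("JT4", "third link 11c", "fourth link 11d", "along the fourth link (rotate)"),
    Joint("JT5", "fourth link 11d", "fifth link 11e", "perpendicular to the fourth link (pivot)"),
    Joint("JT6", "fifth link 11e", "sixth link 11f",  "twist"),
]
```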
- Here, the end effector 20 sprays or injects fluid (for example, paint) to the workpiece 300. Further, the end effector 20 is connected to piping 21 for feeding the fluid to the end effector 20.
- Further, the first joint JT1, the second joint JT2, the third joint JT3, the fourth joint JT4, the fifth joint JT5, and the sixth joint JT6 are each provided with a drive motor (not illustrated) as one example of an actuator which relatively rotates two members coupled to each other via the joint. The drive motor may be a servomotor which is servo-controlled by the second control device 112, for example. Further, the first joint JT1, the second joint JT2, the third joint JT3, the fourth joint JT4, the fifth joint JT5, and the sixth joint JT6 are each provided with a rotation sensor (not illustrated) which detects a rotational position of the drive motor, and a current sensor (not illustrated) which detects current for controlling the rotation of the drive motor. The rotation sensor may be an encoder, for example.
- The first control device 111 includes a processor 111a such as a microprocessor and a CPU, and a memory 111b such as a ROM and a RAM. The memory 111b stores information on a basic program, various fixed data, etc. The processor 111a reads and executes software, such as the basic program stored in the memory 111b.
- Further, the processor 111a forms a three-dimensional model (3D computer graphics or 3DCAD data) of the workpiece 300 based on image information inputted from the 3D camera 103, and position information and posture information inputted from the sensor 104 when the 3D camera 103 images the workpiece 300. The memory 111b stores the three-dimensional model of the workpiece 300 formed by the processor 111a.
- Further, the processor 111a outputs the formed three-dimensional model of the workpiece 300 to the display 105. The display 105 displays the three-dimensional model of the workpiece 300 inputted from the processor 111a as a 3D workpiece 301 (see FIG. 6 etc.).
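- As one possible concrete picture of this model formation (the disclosure does not prescribe an algorithm), a depth image from the 3D camera 103 can be back-projected with the camera position/posture reported by the sensor 104 and accumulated into a point set; the intrinsics, the 4x4 pose convention, and the depth-image form of the camera output are assumptions of this sketch.

```python
import numpy as np

def depth_to_world_points(depth_m, fx, fy, cx, cy, camera_pose):
    """Back-project one depth image into world-frame points using the camera
    pose (4x4 world-from-camera transform) reported when the shot was taken."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    pts_world = (camera_pose @ pts_cam.T).T[:, :3]
    valid = np.isfinite(pts_world).all(axis=1) & (pts_cam[:, 2] > 0)
    return pts_world[valid]

class WorkpieceModel:
    """Accumulates points from successive shots into one three-dimensional model."""
    def __init__(self):
        self.points = np.empty((0, 3))

    def add_view(self, depth_m, intrinsics, camera_pose):
        fx, fy, cx, cy = intrinsics
        new_pts = depth_to_world_points(depth_m, fx, fy, cx, cy, camera_pose)
        self.points = np.vstack([self.points, new_pts])
```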
- The second control device 112 includes, similar to the first control device 111, a processor 112a such as a microprocessor and a CPU, and a memory 112b such as a ROM and a RAM. The memory 112b stores information on a basic program, various fixed data, etc. The processor 112a performs various kinds of operations of the robot 101 by reading and executing software, such as the basic program, stored in the memory 112b.
- Note that the first control device 111 and/or the second control device 112 may be comprised of a sole control device which carries out a centralized control, or may be comprised of a plurality of control devices which collaboratively carry out a distributed control. Further, the first control device 111 and/or the second control device 112 may be comprised of a microcomputer, or may be comprised of an MPU, a PLC (Programmable Logic Controller), a logical circuit, etc.
- Moreover, although the robot system 100 according to Embodiment 1 is provided with both the first control device 111 and the second control device 112, it is not limited to this configuration. The first control device 111 may have the function of the second control device 112 (that is, the robot system 100 may only be provided with the first control device 111). Similarly, the second control device 112 may have the function of the first control device 111 (that is, the robot system 100 may only be provided with the second control device 112).
- Next, operation and effects of the robot system 100 according to Embodiment 1 are described in detail with reference to FIGs. 1 to 9. Note that the following operation is performed by the processor 111a of the first control device 111 reading the program stored in the memory 111b.
- FIG. 3 is a flowchart illustrating one example of the operation of the robot system according to Embodiment 1 (the method of forming the three-dimensional model of the workpiece). FIGs. 4 to 9 are schematic diagrams each illustrating a state inside the manipulation area when the robot system operates in accordance with the flowchart illustrated in FIG. 3. Note that, in FIGs. 4 to 9, a front-and-rear direction, a left-and-right direction, and an up-and-down direction of the workpiece are expressed as a front-and-rear direction, a left-and-right direction, and an up-and-down direction in the drawings.
- First, as illustrated in FIG. 4, inside the manipulation area 202, a box-shaped workpiece 300 which opens at an upper part thereof is disposed. As illustrated in FIG. 5, in order for the operator (worker) to create the three-dimensional model of the workpiece 300, he/she grips the first interface 121 where the 3D camera 103 is installed, and images the workpiece 300 from the front of the workpiece 300.
- Here, the operator may use the 3D camera 103 to acquire the image information on (image) one workpiece 300, or may discontinuously acquire image information on (image) a plurality of workpieces 300, or may acquire video information on (image) the workpiece 300. Further, the operator may use the 3D camera 103 to image the workpiece 300 from directions other than the front of the workpiece 300.
- Then, the first control device 111 acquires, from the 3D camera 103, the image information on the workpiece 300 imaged by the 3D camera 103 (Step S101). Next, the first control device 111 acquires, from the sensor 104, the position information and the posture information on the 3D camera 103 when imaging the image information acquired at Step S101 (Step S102).
- Next, the first control device 111 displays the image information on the workpiece 300 acquired at Step S101 on the display 105, as a workpiece image 302A (Step S103; see FIG. 5).
- Next, the first control device 111 determines whether an acquisition end command of the image information on the workpiece 300 (imaging end information on the workpiece 300) is inputted by the operator from an input device etc. (not illustrated) (Step S104).
- If determined that the imaging end information on the workpiece 300 is inputted from the input device etc. (Yes at Step S104), the first control device 111 forms the three-dimensional model (3D computer graphics or 3DCAD) of the workpiece 300 based on the image information on the workpiece 300 acquired at Step S101, and the position information and the posture information on the 3D camera 103 which are acquired at Step S102 (Step S105).
- Next, the first control device 111 displays the three-dimensional model of the workpiece 300 formed at Step S105 on the display 105 as a 3D workpiece 301A (Step S106; see FIG. 6).
- Next, the first control device 111 determines whether a formation end command of the three-dimensional model of the workpiece 300 (formation end information on the three-dimensional model) is inputted by the operator from the input device etc. (not illustrated) (Step S107).
- If determined that the three-dimensional model formation end information is not inputted from the input device etc. (No at Step S107), the first control device 111 returns to the processing of Step S101. On the other hand, if determined that the three-dimensional model formation end information is inputted from the input device etc. (Yes at Step S107), the first control device 111 transits to processing of Step S108.
- At Step S108, the first control device 111 outputs the three-dimensional model of the workpiece 300 formed at Step S105 to the second control device 112, and ends this program. Therefore, the second control device 112 can perform the control of the operation of the robot 101 based on the three-dimensional model data of the workpiece 300.
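- The flow of Steps S101 to S108 described above can be summarized by the following control-flow sketch, reusing the WorkpieceModel class from the earlier sketch. Every device object and method name is an assumed stand-in rather than an API of the actual system, and the fusion of one view per pass is a simplification.

```python
def form_workpiece_model(camera, sensor, display, second_control_device,
                         imaging_done, formation_done):
    """Sketch of the S101-S108 loop with hypothetical device hooks."""
    model = WorkpieceModel()
    while True:
        image = camera.capture()                    # S101: acquire image information
        pose = sensor.camera_pose()                 # S102: camera position/posture
        display.show_image(image)                   # S103: display the workpiece image
        if not imaging_done():                      # S104: keep imaging -> back to S101
            continue
        model.add_view(image.depth, image.intrinsics, pose)   # S105: form the 3D model
        display.show_model(model.points)            # S106: display the 3D workpiece
        if formation_done():                        # S107: formation finished?
            second_control_device.receive_first_data(model.points)   # S108: output first data
            return model
        # S107 "No": return to S101 and image further parts of the workpiece.
```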
- On the other hand, at Step S104, for example, after displaying the workpiece image 302A on the display 105, if the operator further images the workpiece 300 and the image information on the imaged workpiece 300 is inputted from the 3D camera 103, the first control device 111 may determine that the imaging end information on the workpiece 300 is not inputted from the input device etc.
- If determined that the imaging end information on the workpiece 300 is not inputted (No at Step S104), the first control device 111 returns to the processing of Step S101.
- For example, as illustrated in FIG. 7, if the operator further images the workpiece 300 from rearward and rightward of the workpiece 300, the image information on the imaged workpiece 300 is inputted into the first control device 111 (Step S101).
- Next, the first control device 111 acquires, from the sensor 104, the position information and the posture information on the 3D camera 103 when imaging the image information acquired at the second Step S101 (Step S102).
- Next, the first control device 111 displays the image information on the workpiece 300 acquired at the second Step S101 on the display 105, as a workpiece image 302B (Step S103; see FIG. 7). Therefore, it can be seen that a rectangular through-hole is formed in a lower part of a rear surface of the workpiece 300.
- Then, the first control device 111 again determines whether the acquisition end command of the image information on the workpiece 300 is inputted from the input device which is not illustrated (Step S104).
- Next, for example, as illustrated in FIG. 8, if the operator further images the workpiece 300 from above the workpiece 300, the image information on the imaged workpiece 300 is inputted into the first control device 111 (Step S101).
- Next, the first control device 111 acquires, from the sensor 104, the position information and the posture information on the 3D camera 103 when imaging the image information acquired at the third Step S101 (Step S102).
- Next, the first control device 111 displays the image information on the workpiece 300 acquired at the third Step S101 on the display 105 as a workpiece image 302C (Step S103; see FIG. 8). Therefore, it can be seen that a rectangular through-hole is formed in a bottom surface of the workpiece 300. Further, the operator can see that imaging of all the parts of the workpiece 300 is finished. Thus, the operator makes the input device etc. output the imaging end information on the workpiece 300 to the first control device 111.
- Therefore, the first control device 111 determines that the imaging end information on the workpiece 300 is inputted from the input device etc. (Yes at Step S104), and forms the three-dimensional model of the workpiece 300 based on the image information on the workpiece 300 acquired at Step S101, and the position information and the posture information on the 3D camera 103 which are acquired at Step S102 (Step S105).
- Here, for example, the first control device 111 may form the three-dimensional model of the workpiece 300 based on the image information on the workpiece 300 acquired each time, and the position information and the posture information on the 3D camera 103 which are acquired each time.
- Further, for example, the first control device 111 may form a three-dimensional model of the workpiece 300 each time based on the image information on the workpiece 300, and the position information and the posture information on the 3D camera 103, and may again form a three-dimensional model of the workpiece 300 based on a group of the formed three-dimensional models of the workpiece 300.
- Moreover, for example, the first control device 111 may perform processing which forms a three-dimensional model of the workpiece 300 based on image information on the workpiece 300 acquired at the Ath time (for example, the first time), and the position information and the posture information on the 3D camera 103 (Step S105), and may perform processing which again forms a three-dimensional model of the workpiece 300 based on the three-dimensional model of the workpiece 300 formed by the processing which forms the three-dimensional model (Step S105), image information on the workpiece 300 acquired at the Bth time (B≠A; for example, the second time and the third time), and the position information and the posture information on the 3D camera 103.
- Similarly, for example, the first control device 111 may perform processing which forms a three-dimensional model of the workpiece 300 based on image information on the workpiece 300 acquired at the Ath time (for example, the first time and the second time), and the position information and the posture information on the 3D camera 103 (Step S105), and may perform processing which again forms a three-dimensional model of the workpiece 300 based on the three-dimensional model of the workpiece 300 formed by the processing which forms the three-dimensional model (Step S105), and image information on the workpiece 300 acquired at the Bth time (B≠A; for example, the third time).
- Further, for example, the first control device 111 may perform processing which forms a three-dimensional model of the workpiece 300 based on image information on the workpiece 300 acquired at the Cth time (for example, the first time), and the position information and the posture information on the 3D camera 103 (Step S105), may perform processing which forms a three-dimensional model of the workpiece 300 based on image information on the workpiece 300 acquired at the Dth time (D≠C; for example, the second time and the third time), and the position information and the posture information on the 3D camera 103 (Step S105), and may perform processing which again forms a three-dimensional model of the workpiece 300 based on the three-dimensional model of the workpiece 300 formed at Step S105 based on the image information on the workpiece 300 acquired at the Cth time, and the position information and the posture information on the 3D camera 103, and the three-dimensional model of the workpiece 300 formed at Step S105 based on the image information on the workpiece 300 acquired at the Dth time, and the position information and the posture information on the 3D camera 103.
- Similarly, for example, the first control device 111 may perform processing which forms a three-dimensional model of the workpiece 300 based on image information on the workpiece 300 acquired at the Cth time (for example, the first time and the second time), and the position information and the posture information on the 3D camera 103 (Step S105), may perform processing which forms a three-dimensional model of the workpiece 300 based on image information on the workpiece 300 acquired at the Dth time (D≠C; for example, the third time), and the position information and the posture information on the 3D camera 103 (Step S105), and may perform processing which again forms a three-dimensional model of the workpiece 300 based on the three-dimensional model of the workpiece 300 formed at Step S105 based on the image information on the workpiece 300 acquired at the Cth time, and the position information and the posture information on the 3D camera 103, and the three-dimensional model of the workpiece 300 formed at Step S105 based on the image information on the workpiece 300 acquired at the Dth time, and the position information and the posture information on the 3D camera 103.
- Next, the
- Next, the first control device 111 displays the three-dimensional model of the workpiece 300 formed by the processing of Step S105 on the display 105 as a 3D workpiece 301C (Step S106; see FIG. 9). Therefore, the 3D workpiece 301C in which the rectangular through-holes are formed in lower parts of left and right side surfaces, the lower part of the rear surface, and the bottom surface of the workpiece 300 is displayed on the display 105. Thus, the operator can recognize that the formation of the three-dimensional model of the workpiece 300 is finished, and he/she makes the input device etc. output the three-dimensional model formation end information to the first control device 111.
- Therefore, the first control device 111 determines that the model formation end information is inputted from the input device etc. (Yes at Step S107), outputs the three-dimensional model of the workpiece 300 formed at Step S105 to the second control device 112, and ends this program.
- Note that the processor 111a of the first control device 111 may store in the memory 111b the data of the three-dimensional model of the workpiece 300 (3D workpiece 301C) formed at Step S105, as first data.
- In the robot system 100 according to Embodiment 1, since the first control device 111 forms the three-dimensional model of the workpiece based on the image information on the workpiece 300 imaged by the 3D camera 103, and the position information and the posture information on the 3D camera 103 when the 3D camera 103 images the workpiece 300, it is not necessary for a programmer to create the three-dimensional model data of the workpiece, thereby reducing the cost for creating the data. Thus, as compared with the conventional display system, it can improve the production efficiency.
- Further, in the robot system 100 according to Embodiment 1, since the first control device 111 displays the formed three-dimensional model of the workpiece 300 on the display 105, the operator can judge whether there still is a non-imaged part in the workpiece 300. Further, the operator can understand which direction is better for an efficient imaging of the workpiece 300 based on the three-dimensional model of the workpiece 300 displayed on the display 105.
- Thus, as compared with the conventional display system, it can improve the efficiency of the formation of the three-dimensional model of the workpiece 300.
- Further, in the robot system 100 according to Embodiment 1, since the first control device 111 displays the imaged workpiece 300 (the workpiece image 302A etc.) on the display 105, the operator can judge whether there still is a non-imaged part in the workpiece 300. Moreover, the operator can understand which direction is better for an efficient imaging of the workpiece 300 based on the workpiece 300 displayed on the display 105 (the workpiece image 302A etc.).
- Thus, as compared with the conventional display system, it can improve the efficiency of the formation of the three-dimensional model of the workpiece 300.
- When performing the acquisition of the image information on the workpiece 300 and the acquisition of the position information and the posture information on the 3D camera 103 only once, and displaying the image of the workpiece 300 or forming the three-dimensional model of the workpiece 300 only using the acquired information, the workpiece 300 cannot be displayed in the perfect form, and the three-dimensional model may not be formed in the perfect form. In order to display the image of the workpiece 300 or form the three-dimensional model of the workpiece 300 in a more perfect form, it is preferable to repeatedly perform the acquisition of the image information on the workpiece 300, the acquisition of the position information and the posture information on the 3D camera 103, and the display of the image information on the workpiece 300, and to again form the three-dimensional model using the image information on the workpiece 300 and the position information and the posture information on the 3D camera 103 which are acquired by repeatedly performing these processings. Various modes for this configuration will be described below.
- In this embodiment, after repeatedly performing the processing which acquires the image information on the workpiece 300 imaged by the 3D camera 103 (Step S101), the processing which acquires, from the sensor 104, the position information and the posture information when the workpiece 300 is imaged by the 3D camera 103 (Step S102), and the processing which displays the acquired image information on the display 105 (Step S103), the first control device 111 may perform the processing which forms the three-dimensional model of the workpiece 300 based on the acquired image information, and the acquired position information and posture information (Step S105), the processing which displays the formed three-dimensional model on the display 105 (Step S106), and the processing which outputs the first data which is data of the formed three-dimensional model to the second control device 112 (Step S108).
- Further, in this embodiment, after performing the processing which acquires the image information (Step S101), the processing which acquires, from the sensor 104, the position information and the posture information (Step S102), the processing which displays the acquired image information on the display 105 (Step S103), the processing which forms the three-dimensional model (Step S105), and the processing which displays the three-dimensional model on the display 105 (Step S106), the processing which acquires the image information (Step S101), the processing which acquires, from the sensor 104, the position information and the posture information (Step S102), and the processing which displays the acquired image information on the display 105 (Step S103) are repeated once or more. Then, the first control device 111 may perform, in the processing which forms the three-dimensional model (Step S105), the processing which forms the three-dimensional model based on the image information acquired at the Ath time, and the acquired position information and posture information, and the processing which again forms the three-dimensional model based on the three-dimensional model formed by the processing described above which forms the three-dimensional model, and the image information acquired at the Bth time (B≠A), and then perform the processing which displays the three-dimensional model on the display 105 (Step S106), and the processing which outputs the first data to the second control device 112 (Step S108).
- Further, in this embodiment, after performing the processing which acquires the image information (Step S101), the processing which acquires, from the sensor 104, the position information and the posture information (Step S102), the processing which displays the acquired image information on the display 105 (Step S103), the processing which forms the three-dimensional model (Step S105), and the processing which displays the three-dimensional model on the display 105 (Step S106), the processing which acquires the image information (Step S101), the processing which acquires, from the sensor 104, the position information and the posture information (Step S102), and the processing which displays the acquired image information on the display 105 (Step S103) are repeated once or more. Then, the first control device 111 may perform, in the processing which forms the three-dimensional model (Step S105), the processing which forms the three-dimensional model based on the image information acquired at the Cth time, and the acquired position information and posture information, the processing which forms the three-dimensional model based on the image information acquired at the Dth time (D≠C), and the processing which again forms the three-dimensional model based on the three-dimensional model formed based on the image information acquired at the Cth time, and the acquired position information and posture information, and the three-dimensional model formed based on the image information acquired at the Dth time (D≠C), and then perform the processing which displays the three-dimensional model on the display 105 (Step S106), and the processing which outputs the first data to the second control device 112 (Step S108).
- Note that, in this embodiment, the workarea 201 may be an explosion-proof area, and the manipulation area 202 may be a non-explosion-proof area. Therefore, when the first interface 121 and the 3D camera 103 are used inside the manipulation area 202, these devices become unnecessary to be explosion proof.
- Note that, in Embodiment 1, although the first control device 111 determines whether the three-dimensional model formation end information is inputted in the processing of Step S107, it is not limited to this configuration. The first control device 111 may not perform the processing of Step S107.
- FIG. 10 is a schematic diagram illustrating an outline configuration of a robot system of Modification 1 in Embodiment 1. The robot system 100 of Modification 1 differs from the robot system 100 according to Embodiment 1 in that a detector 12 and a transmitter 13 are provided to the first interface 121. The detector 12 detects the position information and the posture information on the first interface 121 and the 3D camera 103 wirelessly. The detector 12 is a gyro sensor or a camera, for example. The transmitter 13 transmits to the first control device 111 the position information and the posture information which are detected by the detector 12. In Modification 1, the detector 12 and the transmitter 13 correspond to the sensor 104. Note that the detector 12 may not detect the position information and the posture information, or the detector 12 may detect only the position information or may detect only the posture information.
- The robot system 100 of Modification 1 has similar operation and effects to the robot system 100 according to Embodiment 1.
- FIG. 11 is a schematic diagram illustrating an outline configuration of a robot system of Modification 2 in Embodiment 1. The robot system 100 of Modification 2 differs from the robot system 100 according to Embodiment 1 in that the 3D camera 103 is provided to the tip-end part of a robotic arm 102, and a second interface 122 is additionally provided.
- The robotic arm 102 may be a vertical articulated robotic arm, or may be a horizontal articulated robot. The robotic arm 102 is operated by the second interface 122. The second interface 122 may be a known interface, such as a joystick, a keyboard, ten keys, and a teach pendant, for example. A switch 122B for instructing start/stop of imaging by the 3D camera 103 is provided to the second interface 122.
- Note that the first interface 121 may also serve as the second interface 122. In this case, a switch (changeover switch) which switches between operation of the robot 101 and operation of the robotic arm 102 may be provided to the first interface 121.
- The robot system 100 of Modification 2 has similar operation and effects to the robot system 100 according to Embodiment 1.
- In Modification 2, since the 3D camera 103 is provided to the tip-end part of the robotic arm 102, and the second interface 122 for manipulating the robotic arm 102 is further provided, it becomes easy to acquire the image information on the workpiece 300 near the robot 101.
- Further, in Modification 2, the first control device 111 also serves as the second control device 112. That is, the first control device 111 also realizes the function of the second control device 112. Therefore, since the functions of the two kinds of control devices can be implemented by the sole control device, the configuration, such as wiring, can be simplified.
- FIG. 12 is a schematic diagram illustrating an outline configuration of a robot system according to Embodiment 2. Note that, in FIG. 12, the directions of the robot are expressed by the directions of the X-axis, the Y-axis, and the Z-axis of the three-dimensional rectangular coordinate system illustrated in the drawing, for convenience. The robot system 100 according to Embodiment 2 differs from the robot system according to Embodiment 1 (including its modifications) in that a conveying device 106 is additionally provided. The conveying device 106 conveys the workpiece from the manipulation area to a first position in the workarea which is set beforehand. The conveying device 106 is a known conveying device, such as a belt conveyor.
- Note that the robot system 100 according to Embodiment 2 may be provided with a shutter 107 or the like which permits/inhibits a movement of the workpiece 300 from the manipulation area 202 into the workarea 201.
- Further, in Embodiment 2, the first interface 121 is a joystick, and the first interface 121 is configured separately from the 3D camera 103.
- Further, the memory 112b of the second control device 112 may store three-dimensional model information on a scale indicative of a given first range set beforehand. The three-dimensional model information on the scale may be, for example, information on a ruler for measuring a distance from the tip end of the robot 101, or information on a cone (truncated cone) shape indicative of a range where grains, fluid, or gas is injected.
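- To make the scale idea concrete, the short sketch below parameterizes a truncated-cone scale whose apex sits near the end-effector tip and whose radius grows toward the first-range limit, which is what a renderer would need to draw the 3D scale 20A. The class name and all dimensions are illustrative assumptions, not values taken from the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class ConeScale:
    """Truncated-cone scale indicating the range where grains, fluid, or gas is injected.

    All dimensions here are illustrative assumptions.
    """
    tip_radius: float = 0.005                 # radius at the end-effector tip [m]
    half_angle: float = math.radians(15.0)    # assumed spread half-angle of the injected cone
    length: float = 0.30                      # assumed first range from the tip [m]

    def radius_at(self, distance: float) -> float:
        # Radius of the cone cross-section at `distance` from the tip, clamped to the scale length.
        d = max(0.0, min(distance, self.length))
        return self.tip_radius + d * math.tan(self.half_angle)

if __name__ == "__main__":
    scale = ConeScale()
    print(f"radius at 0.10 m: {scale.radius_at(0.10):.4f} m")
    print(f"radius at 0.30 m: {scale.radius_at(0.30):.4f} m")
```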
- Next, operation and effects of the robot system 100 according to Embodiment 2 are described in detail with reference to FIGs. 12 to 14. Note that the following operation is performed by the processor 112a of the second control device 112 reading the program stored in the memory 112b.
- FIG. 13 is a schematic diagram illustrating a state of the workarea seen from a window in the robot system illustrated in FIG. 12. FIG. 14 is a flowchart illustrating one example of operation of the robot system according to Embodiment 2. As illustrated in FIG. 13, when the operator looks at the workarea 201 from the window 204, the workpiece 300 and the robot 101 may overlap with each other. Further, the spatial relationship between the tip-end part of the robot 101 (end effector 20) and the workpiece 300 may be difficult to grasp. In this case, it may be possible to install a camera inside the workarea 201, image the tip-end part of the robot 101 (end effector 20) and the workpiece 300, and display the captured image for the operator.
- However, when the robot 101 paints or welds the workpiece 300, a camera installed in the workarea 201, which is an explosion-proof area, must itself be explosion-proof. An explosion-proof camera is expensive, and therefore the facility cost increases. Moreover, when the size of the workpiece to be painted changes, the imaging location of the camera may need to be changed, which increases the operator's workload.
- Therefore, in the robot system 100 according to Embodiment 2, the second control device 112 performs the following operation (processing) by using the three-dimensional model of the workpiece 300 created by the first control device 111.
- First, suppose that instruction information indicating that work (for example, painting work) is to be performed on the workpiece 300 is inputted into the second control device 112 by the operator via an input device or the like (not illustrated). As illustrated in FIG. 14, the processor 112a of the second control device 112 acquires the first data, which is the three-dimensional model data of the workpiece 300, from the memory 111b of the first control device 111 (Step S201). Note that, if the first data is stored in the memory 112b, the processor 112a may acquire the first data from the memory 112b.
- Next, the processor 112a of the second control device 112 acquires from the memory 112b the position information on the first position, which is a conveyance position of the workpiece 300 inside the workarea 201 (Step S202).
- Next, the processor 112a of the second control device 112 acquires the second data, which is the three-dimensional model data of the robot 101, from the memory 112b (Step S203). Note that, as the second data, three-dimensional model data of the robot 101 created by a programmer may be stored beforehand in the memory 112b. Alternatively, as described for the operation of the robot system 100 according to Embodiment 1, the first control device 111 may form the three-dimensional model of the robot 101 by using the 3D camera 103 installed in the first interface 121 and store this three-dimensional model data in the memory 112b.
- Next, the processor 112a of the second control device 112 makes the conveying device 106 convey the workpiece 300 disposed in the manipulation area 202 to the first position in the workarea 201 (Step S204). Note that the second control device 112 may perform the processing of Steps S201 to S203 beforehand, before the instruction information is inputted from the input device or the like, or may perform it after the processing of Step S204.
- Next, the processor 112a of the second control device 112 acquires the manipulation information (operational information) on the robot 101 outputted from the first interface 121 when the operator operates the first interface 121 (Step S205). Next, the processor 112a of the second control device 112 operates the robot 101 based on the manipulation information acquired at Step S205 (Step S206).
- Next, the processor 112a of the second control device 112 displays on the display 105, as the three-dimensional model, the spatial relationship between the tip-end part of the robot 101 (end effector 20) and the workpiece 300, based on the first data, the first position, and the second data acquired by the processing of Steps S201 to S203, and the manipulation information acquired at Step S205 (Step S207).
- In detail, the processor 112a of the second control device 112 displays on the display 105 the 3D workpiece 301 and a 3D robot 101A in a state where the workpiece and the robot are seen from a direction different from the direction in which the operator looks at the robot 101 from the manipulation area 202.
- Here, the direction in which the operator looks at the robot 101 from the manipulation area 202 may be, for example, the direction in which the operator looks at the robot 101 from the window 204 of the manipulation area 202 (here, the X-direction). Alternatively, for example, a motion sensor may be disposed inside the manipulation area 202, and the direction may be the direction of a straight line connecting the coordinates of the operator's position detected by the motion sensor and the coordinates of the position of the robot 101.
- Further, the direction different from the direction in which the operator looks at the robot 101 from the manipulation area 202 may be any direction in Embodiment 2 as long as it is a direction other than the X-direction; for example, it may be a direction perpendicular to the X-direction (here, the Y-direction or the Z-direction).
- Therefore, the processor 112a of the second control device 112 may display on the display 105 the 3D workpiece 301 and the 3D robot 101A as they are seen from a direction different from the direction (here, the X-direction) in which the operator looks at the robot 101 from the window 204 of the manipulation area 202, for example.
- In more detail, as illustrated in FIG. 12, the processor 112a of the second control device 112 may display on the display 105 the spatial relationship between the tip-end part of the robot 101 (end effector 20) and the workpiece 300 as seen from the Y-direction.
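- A simple way to realize the viewpoint rule described above is to place the virtual camera of the three-dimensional display along an axis that is least aligned with the operator's own line of sight. The sketch below is one illustrative heuristic, not the patent's method; it assumes the operator's viewing direction is known, for example the X-direction through the window 204.

```python
import numpy as np

def pick_display_direction(operator_view_dir: np.ndarray) -> np.ndarray:
    """Return a unit vector perpendicular to the operator's viewing direction.

    Chooses the world axis (X, Y, or Z) least aligned with the viewing direction and
    projects out the viewing component, so the 3D workpiece 301 and 3D robot 101A are
    shown from the side rather than along the operator's own line of sight.
    """
    v = operator_view_dir / np.linalg.norm(operator_view_dir)
    axes = np.eye(3)
    candidate = axes[np.argmin(np.abs(axes @ v))]   # least-aligned world axis
    side = candidate - (candidate @ v) * v          # remove any component along the view
    return side / np.linalg.norm(side)

if __name__ == "__main__":
    # Operator looks at the robot along the X-direction through the window 204.
    print(pick_display_direction(np.array([1.0, 0.0, 0.0])))   # e.g. the Y-direction
```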
- Note that, in the processing of Step S207, the processor 112a of the second control device 112 may display on the display 105 a 3D scale 20A, which is a three-dimensional model of the scale, at the tip end of the end effector 20 of the robot 101 (3D robot 101A) (see FIG. 12).
- Next, the processor 112a of the second control device 112 determines whether instruction information indicating the end of the work for the workpiece 300 is inputted by the operator via the input device or the like (not illustrated) (Step S208).
- If it determines that the instruction information indicating the end of the work for the workpiece 300 is not inputted (No at Step S208), the processor 112a of the second control device 112 repeats the processing of Steps S205 to S208 until it determines that the instruction information indicating the end of the work for the workpiece 300 is inputted.
- On the other hand, if it determines that the instruction information indicating the end of the work for the workpiece 300 is inputted (Yes at Step S208), the processor 112a of the second control device 112 ends this program.
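- Steps S201 through S208 amount to loading the two models and the conveyance position once, then looping over read-command, move-robot, refresh-display until the end instruction arrives. The Python sketch below mirrors that flow only as a reading aid: every method it calls is a stand-in for the processing described above (mock objects are used so the sketch runs), not an interface defined by the patent.

```python
from unittest.mock import MagicMock

def run_embodiment2_work_cycle(memory, conveyor, interface, robot, display, input_device):
    """Sketch of Steps S201-S208, assuming each argument wraps the corresponding device."""
    first_data = memory.load_first_data()          # S201: 3D model data of the workpiece 300
    first_position = memory.load_first_position()  # S202: conveyance position in the workarea 201
    second_data = memory.load_second_data()        # S203: 3D model data of the robot 101

    conveyor.convey_to(first_position)             # S204: convey the workpiece into the workarea

    while not input_device.end_of_work_requested():        # S208: loop until the end instruction
        command = interface.read_manipulation()            # S205: manipulation info from interface 121
        robot.apply(command)                                # S206: operate the robot 101
        display.show_relationship(first_data, first_position, second_data, command)  # S207

if __name__ == "__main__":
    devices = {name: MagicMock() for name in
               ("memory", "conveyor", "interface", "robot", "display", "input_device")}
    devices["input_device"].end_of_work_requested.side_effect = [False, False, True]  # two loop passes
    run_embodiment2_work_cycle(**devices)
```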
- Thus, in the robot system 100 according to Embodiment 2, the second control device 112 displays on the display 105, as the three-dimensional model, the spatial relationship between the tip-end part of the robot 101 (end effector 20) and the workpiece 300, using the three-dimensional model of the workpiece 300 created by the first control device 111.
- Therefore, since the programmer does not need to create the three-dimensional model data of the workpiece, the cost for creating the data can be reduced. Thus, as compared with the conventional display system, the production efficiency can be improved.
- Further, in the robot system 100 according to Embodiment 2, since the second control device 112 displays on the display 105 the 3D scale 20A at the tip end of the end effector 20 of the robot 101 (3D robot 101A), the operator's burden can be reduced and the work efficiency can be improved.
- In this embodiment, the conveying device 106, which conveys the workpiece 300 from the manipulation area 202 to the first position in the workarea 201 set beforehand, is further provided. Therefore, the workpiece 300 imaged in the manipulation area 202 by the 3D camera 103 can be moved to the exact position suitable for the work.
- Further, in this embodiment, the first interface 121, which operates the robot 101 and is disposed inside the manipulation area 202, is further provided. The second control device 112 has the memory 112b which stores the second data, which is data of the three-dimensional model of the robot 101. Further, when the operator operates the first interface 121 to manipulate the robot 101 to perform the work for the workpiece 300, the second control device 112 displays on the display 105 the spatial relationship between the workpiece 300 and the tip-end part of the robot 101 in a state where the workpiece and the robot are seen from a direction different from the direction in which the operator looks at the robot 101 from the manipulation area 202, based on the first data inputted by the output of the first data to the second control device 112, the position information on the first position of the conveyed workpiece, the second data, and the manipulation information on the robot 101 inputted from the first interface 121. According to the above configuration, the operator can easily grasp the spatial relationship between the workpiece 300 and the tip-end part of the robot 101, which is hard to grasp by directly looking at the robot 101 and the workpiece 300.
- Further, in this embodiment, the 3D camera 103 is attached to the first interface 121. According to this configuration, since the operator only has to grip a single device, it becomes easier to operate the first interface 121 and the 3D camera 103.
- Further, in this embodiment, the sensor 104 may wirelessly detect the position information and the posture information from the first interface 121, and the second control device 112 may calculate the locus of the first interface 121 based on the position information and the posture information from the first interface 121 detected by the sensor 104, and may perform the processing which operates the robot 101 in real time based on the calculated locus (Step S206). Thus, since it becomes easier to move the first interface 121, the robot 101 can be operated correctly.
- Note that the second control device 112 may store in the memory 112b the work performed by the robot 101 (the operational information on the first interface 121) based on the manipulation information which is produced by the operator operating the first interface 121. Further, the second control device 112 may automatically operate the robot 101 according to the operational information on the first interface 121 stored in the memory 112b.
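- The store-and-replay possibility mentioned in the preceding paragraph can be pictured as logging the timestamped manipulation commands while the operator works and later feeding the same sequence back to the robot. The sketch below is a minimal, hypothetical illustration of that idea; the command dictionary and the send_to_robot callback are assumptions, not interfaces from the patent.

```python
import time
from dataclasses import dataclass, field

@dataclass
class ManipulationLog:
    """Stores timestamped manipulation information so the work can be replayed later."""
    records: list[tuple[float, dict]] = field(default_factory=list)
    _t0: float = field(default_factory=time.monotonic)

    def record(self, command: dict) -> None:
        # Called at every Step S205 while the operator works the first interface 121.
        self.records.append((time.monotonic() - self._t0, command))

    def replay(self, send_to_robot) -> None:
        # Later, the second control device can drive the robot 101 from the stored log.
        start = time.monotonic()
        for t, command in self.records:
            time.sleep(max(0.0, t - (time.monotonic() - start)))
            send_to_robot(command)

if __name__ == "__main__":
    log = ManipulationLog()
    log.record({"joint_velocities": [0.1, 0.0, 0.0]})
    log.record({"joint_velocities": [0.0, 0.0, 0.0]})
    log.replay(print)
```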
- FIG. 15 is a schematic diagram illustrating an outline configuration of a robot system of Modification 1 in Embodiment 2. The robot system 100 of Modification 1 differs from the robot system 100 according to Embodiment 2 in that the second control device 112 operates the robot 101 (end effector 20), based on the position information and the posture information on the first interface 121 inputted from the sensor 104, so as to follow the movement of the tip-end part of the first interface 121.
- That is, in Modification 1, the second control device 112 calculates the locus of the first interface 121 based on the position information and the posture information on the first interface 121 detected by the sensor 104, and operates the robot 101 in real time. The second control device 112 may calculate the locus of the first interface 121 based on the position information and the posture information of the first interface 121 in a three-dimensional space detected by the sensor 104 and, based on the calculated locus, may cause the robot 101 to perform in real time any of an injecting work which injects fluid or gas onto the workpiece 300, a cutting work which cuts the workpiece 300, a polishing work which polishes the workpiece 300, a welding work which welds the workpiece 300, and a washing work which washes the workpiece 300. Each of these works (the injecting work, the cutting work, the polishing work, the welding work, and the washing work) is a series of operations performed on the workpiece 300 by the robot 101, and is a concept which includes a plurality of operations. The work includes, for example, an operation in which the robot 101 approaches the workpiece 300, an operation in which the robot 101 starts injecting the fluid or the like onto the workpiece 300, an operation in which the robot 101 stops injecting the fluid or the like, and an operation in which the robot 101 separates from the workpiece 300.
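- The point that a "work" is a series of operations rather than a single motion can be expressed with a plain ordered structure, as in the illustrative sketch below; the enum members simply name the operations listed above for an injecting work and are not identifiers from the patent.

```python
from enum import Enum, auto

class Operation(Enum):
    APPROACH_WORKPIECE = auto()        # robot 101 approaches the workpiece 300
    START_INJECTION = auto()           # begin injecting fluid, gas, or grains
    STOP_INJECTION = auto()            # stop the injection
    SEPARATE_FROM_WORKPIECE = auto()   # retreat from the workpiece

# One "work" (here, an injecting work) is a series of operations, not a single motion.
injecting_work: list[Operation] = [
    Operation.APPROACH_WORKPIECE,
    Operation.START_INJECTION,
    Operation.STOP_INJECTION,
    Operation.SEPARATE_FROM_WORKPIECE,
]

for op in injecting_work:
    print(op.name)
```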
- Operation and effects of the robot system 100 of Modification 1 in Embodiment 2 are described in detail with reference to FIGs. 15 and 16. Note that the following operation is performed by the processor 112a of the second control device 112 reading the program stored in the memory 112b.
- FIG. 16 is a flowchart illustrating one example of operation of the robot system of Modification 1 in Embodiment 2. As illustrated in FIG. 16, the operation of the robot system 100 of Modification 1 differs from the operation of the robot system 100 according to Embodiment 2 in that the processing (operation) of Steps S205A and S205B is performed instead of the processing of Step S205.
- In detail, the processor 112a of the second control device 112 acquires, from the sensor 104, the position information and the posture information on the first interface 121 detected by the sensor 104 (Step S205A). Next, the processor 112a of the second control device 112 calculates the locus of the first interface 121 based on the position information and the posture information on the first interface 121 acquired at Step S205A (Step S205B).
- Next, the processor 112a of the second control device 112 operates the robot 101 in real time based on the locus of the first interface 121 calculated at Step S205B (Step S206).
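- Steps S205A, S205B, and S206 can be read as: sample the pose of the first interface, append it to the locus, and command the robot toward the newest pose. The sketch below shows that pipeline in a simplified form; the pose format, the minimum-step filter, and the send_pose_command callback are assumptions made for illustration, not the patent's implementation.

```python
import numpy as np

def update_locus(locus: list[np.ndarray], position: np.ndarray, posture: np.ndarray,
                 min_step: float = 1e-3) -> list[np.ndarray]:
    """S205B: append the newly detected pose (x, y, z, roll, pitch, yaw) to the locus,
    skipping samples that barely move so the robot is not flooded with duplicates."""
    pose = np.concatenate([position, posture])
    if not locus or np.linalg.norm(pose[:3] - locus[-1][:3]) >= min_step:
        locus.append(pose)
    return locus

def follow_locus(locus: list[np.ndarray], send_pose_command) -> None:
    """S206: drive the robot tip toward the most recent pose of the first interface 121."""
    if locus:
        send_pose_command(locus[-1])

if __name__ == "__main__":
    locus: list[np.ndarray] = []
    for x in (0.00, 0.0005, 0.01, 0.02):           # fake samples standing in for the sensor 104
        update_locus(locus, np.array([x, 0.0, 0.5]), np.zeros(3))
        follow_locus(locus, lambda p: print("command pose:", np.round(p, 4)))
    print("locus length:", len(locus))
```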
- Next, the processor 112a of the second control device 112 displays on the display 105, as the three-dimensional model, the spatial relationship between the tip-end part of the robot 101 (end effector 20) and the workpiece 300, based on the first data, the first position, and the second data acquired by the processing of Steps S201 to S203, and the locus of the first interface 121 calculated at Step S205B (Step S207). In detail, as illustrated in FIG. 15, the second control device 112 displays the 3D workpiece 301 and the 3D robot 101A on the display 105.
- Next, the processor 112a of the second control device 112 determines whether the instruction information indicating the end of the work for the workpiece 300 is inputted by the operator via the input device or the like (not illustrated) (Step S208).
- If it determines that the instruction information indicating the end of the work for the workpiece 300 is not inputted (No at Step S208), the processor 112a of the second control device 112 repeats the processing of Steps S205A to S208 until it determines that the instruction information indicating the end of the work for the workpiece 300 is inputted.
- On the other hand, if it determines that the instruction information indicating the end of the work for the workpiece 300 is inputted (Yes at Step S208), the processor 112a of the second control device 112 ends this program.
- In the robot system 100 of Modification 1, the second control device 112 calculates the locus of the first interface 121 based on the position information and the posture information on the first interface 121 detected by the sensor 104, and operates the robot 101 in real time.
- Therefore, since the operator can operate the robot 101 in real time, the operator can operate the robot 101 intuitively. Further, the operator can instantly judge whether the work performed by the robot 101 on the workpiece 300 is being carried out correctly.
- Further, in the robot system 100 of Modification 1, the second control device 112 displays on the display 105, as the three-dimensional model, the spatial relationship between the tip-end part of the robot 101 (end effector 20) and the workpiece 300, using the three-dimensional model of the workpiece 300 created by the first control device 111.
- Therefore, since the programmer does not need to create the three-dimensional model data of the workpiece, the cost for creating the data can be reduced. Thus, as compared with the conventional display system, the production efficiency can be improved.
- In Modification 1, the second control device 112 may have the memory 112b which stores the second data, which is data of the three-dimensional model of the robot 101. The second control device 112 may display on the display 105 the spatial relationship between the workpiece 300 and the tip-end part of the robot 101 as seen from a direction different from the direction in which the operator looks at the robot 101 from the manipulation area 202, based on the first data inputted by the processing which outputs the first data to the second control device 112 (Step S108), the position information on the first position of the conveyed workpiece 300, the second data, and the locus calculated by the processing which operates the robot 101 in real time (Step S206). Thus, the operator can easily grasp the spatial relationship between the workpiece 300 and the tip-end part of the robot 101, which is hard to grasp by directly looking at the robot 101 and the workpiece 300.
- Note that the second control device 112 may calculate the locus of the first interface 121 produced by the operator moving (operating) the first interface 121, and store in the memory 112b the work performed by the robot 101 (the locus information on the first interface 121) based on the calculated locus. Further, the second control device 112 may operate the robot 101 according to the locus information on the first interface 121 stored in the memory 112b.
- FIGs. 17 and 18 are schematic diagrams illustrating an outline configuration of a robot system of Modification 2 in Embodiment 2. Note that, in FIGs. 17 and 18, the directions of the robot are expressed as the directions of the X-axis, the Y-axis, and the Z-axis of the three-dimensional rectangular coordinate system illustrated in the drawings, for convenience. As illustrated in FIGs. 17 and 18, the robot system 100 of Modification 2 differs from the robot system 100 according to Embodiment 2 in that the second control device 112 displays on the display 105 a line 30 indicative of a normal direction of a given first part of the workpiece 300, which is set beforehand, based on the three-dimensional model information on the workpiece 300. The first part may be a part which opposes the tip end of the end effector 20 of the robot 101.
- The robot system 100 of Modification 2 may further be provided with an alarm 150. The alarm 150 may display character data or image data on the display 105, may inform by sound from a speaker or the like, or may inform by light or color. Alternatively, it may notify a smartphone, a cellular phone, or a tablet computer by e-mail or an application via a communication network.
- Further, in the robot system 100 of Modification 2, when the line 30 comes into agreement with the axial center direction of the end effector 20 of the robot 101 (the end effector 20 of the 3D robot 101A), the second control device 112 may change the color and/or the thickness of the line 30 and display it on the display 105 (see FIG. 18).
- Further, when the line 30 comes into agreement with the axial center direction of the end effector 20 of the robot 101 (the end effector 20 of the 3D robot 101A), the second control device 112 may activate the alarm 150 to inform the operator of the agreement.
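- The agreement test behind the line-highlighting and alarm behaviour of Modification 2 boils down to checking whether the end-effector axis is parallel (or anti-parallel) to the displayed normal line 30. The sketch below does this with a dot product and an angular tolerance; the 2-degree tolerance and the function name are assumptions made for illustration.

```python
import numpy as np

def axis_aligned_with_normal(effector_axis: np.ndarray, surface_normal: np.ndarray,
                             tol_deg: float = 2.0) -> bool:
    """True when the axial center direction of the end effector 20 agrees with the
    normal line 30 of the first part of the workpiece, within an angular tolerance."""
    a = effector_axis / np.linalg.norm(effector_axis)
    n = surface_normal / np.linalg.norm(surface_normal)
    angle = np.degrees(np.arccos(np.clip(abs(a @ n), -1.0, 1.0)))
    return angle <= tol_deg

if __name__ == "__main__":
    aligned = axis_aligned_with_normal(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]))
    if aligned:
        # Here the second control device 112 could recolor/thicken the line 30 and trigger the alarm 150.
        print("line 30 agrees with the end-effector axis")
```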
- The robot system 100 of Modification 2 has operation and effects similar to those of the robot system 100 according to Embodiment 2.
- FIG. 19 is a schematic diagram illustrating an outline configuration of a robot system according to Embodiment 3. Note that, in FIG. 19, the directions of the robot are expressed as the directions of the X-axis, the Y-axis, and the Z-axis of the three-dimensional rectangular coordinate system illustrated in the drawing, for convenience.
- As illustrated in FIG. 19, the robot system 100 according to Embodiment 3 differs from the robot system 100 according to Embodiment 2 in that the alarm 150 is disposed inside the manipulation area 202. The alarm 150 may display character data or image data on the display 105, may inform by sound from a speaker or the like, or may inform by light or color. Alternatively, it may notify a smartphone, a cellular phone, or a tablet computer by e-mail or an application via a communication network.
- Next, operation and effects of the robot system 100 according to Embodiment 3 are described in detail with reference to FIGs. 19, 20A, and 20B. Note that the following operation is performed by the processor 112a of the second control device 112 reading the program stored in the memory 112b.
- FIGs. 20A and 20B are flowcharts illustrating one example of operation of the robot system according to Embodiment 3. As illustrated in FIGs. 20A and 20B, the operation of the robot system 100 according to Embodiment 3 differs from the operation of the robot system 100 according to Embodiment 2 in that the processing of Steps S207A to S207C is performed between Step S207 and Step S208.
- In detail, the processor 112a of the second control device 112 displays on the display 105, as the three-dimensional model, the spatial relationship between the tip-end part of the robot 101 (end effector 20) and the workpiece 300, based on the first data, the first position, and the second data acquired by the processing of Steps S201 to S203, and the manipulational command information acquired at Step S205 (Step S207).
- Next, the processor 112a of the second control device 112 calculates a distance A between the robot 101 (3D robot 101A) and the workpiece 300 (3D workpiece 301) based on the first data, the second data, and the manipulational command information acquired at Step S205 (Step S207A).
- Here, the processor 112a of the second control device 112 may calculate the distance between the part of the robot 101 (3D robot 101A) nearest to the workpiece 300 (3D workpiece 301) and the workpiece 300 (3D workpiece 301).
- That is, when the tip end of the end effector 20 of the robot 101 is located at the position nearest to the workpiece 300, the processor 112a of the second control device 112 may calculate the distance between the tip end of the end effector 20 and the workpiece 300. Moreover, when some other part of the robot 101 is located at the position nearest to the workpiece 300, the processor 112a of the second control device 112 may calculate the distance between this part of the robot 101 and the workpiece 300.
- Next, the processor 112a of the second control device 112 determines whether the distance A calculated at Step S207A is less than a given first distance set beforehand (Step S207B). Here, the first distance may be set based on the operating speed of the robot 101, the contents of the work for the workpiece 300, and the like.
- When the operating speed of the robot 101 is slower, the first distance may be set smaller. The first distance may also be set smaller when the work for the workpiece 300 is welding, cutting, washing, or polishing work.
- On the other hand, when the operating speed of the robot 101 is faster, the first distance may be set larger. The first distance may also be set larger when the work for the workpiece 300 is injecting/spraying work of fluid.
- For example, the first distance may be 0.5 cm or more from the viewpoint of suppressing a collision with the workpiece 300, and may be 30 cm or less from the viewpoint of performing the work on the workpiece 300.
- If it determines that the distance A is less than the first distance (Yes at Step S207B), the processor 112a of the second control device 112 activates the alarm 150 to issue a warning about a possibility of collision with the workpiece 300 (Step S207C). Here, the processor 112a of the second control device 112 may reduce the operating speed of the robot 101 or may stop the robot 101.
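- Steps S207A to S207C reduce to a nearest-distance query between the robot model and the workpiece model followed by a threshold test that triggers the alarm 150. The simplified sketch below approximates the two 3D models with sampled surface points; a real implementation would query the mesh geometry, and the 0.05 m threshold is only an assumed first distance.

```python
import numpy as np

def min_distance(robot_points: np.ndarray, workpiece_points: np.ndarray) -> float:
    """S207A: distance A between the part of the robot nearest to the workpiece and the workpiece,
    approximated here with sampled surface points of both 3D models."""
    diffs = robot_points[:, None, :] - workpiece_points[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())

def proximity_check(robot_points, workpiece_points, first_distance: float = 0.05) -> bool:
    """S207B/S207C: warn (and possibly slow or stop the robot) when distance A < first distance."""
    a = min_distance(robot_points, workpiece_points)
    if a < first_distance:
        print(f"alarm 150: robot within {a:.3f} m of the workpiece")  # stand-in for the alarm
        return True
    return False

if __name__ == "__main__":
    robot_pts = np.array([[0.0, 0.0, 0.3], [0.0, 0.0, 0.1]])
    work_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
    proximity_check(robot_pts, work_pts)   # distance 0.1 m -> no warning with a 0.05 m threshold
```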
- Therefore, the operator can recognize the possibility of the robot 101 colliding with the workpiece 300, and can operate the robot 101 by using the interface 102 so that the robot 101 does not collide with the workpiece 300.
- Then, the processor 112a of the second control device 112 acquires the manipulational command information inputted from the interface 102. That is, the processor 112a of the second control device 112 returns to the processing of Step S205.
- On the other hand, if it determines that the distance A is not less than the first distance (No at Step S207B), the processor 112a of the second control device 112 determines whether the instruction information indicating the end of the work for the workpiece 300 is inputted by the operator via the input device or the like (not illustrated) (Step S208).
- If it determines that the instruction information indicating the end of the work for the workpiece 300 is not inputted (No at Step S208), the processor 112a of the second control device 112 repeats the processing of Steps S205 to S208 until it determines that the instruction information indicating the end of the work for the workpiece 300 is inputted.
- On the other hand, if it determines that the instruction information indicating the end of the work for the workpiece 300 is inputted (Yes at Step S208), the processor 112a of the second control device 112 ends this program.
- The robot system 100 according to Embodiment 3 has operation and effects similar to those of the robot system 100 according to Embodiment 2.
- In this embodiment, the conveying device 106, which conveys the workpiece 300 from the manipulation area 202 to the first position in the workarea 201 set beforehand, is further provided. Thus, the workpiece 300 imaged by the 3D camera 103 in the manipulation area 202 can be moved to the exact position suitable for the work.
- Further, in this embodiment, the first interface 121, which operates the robot 101 and is disposed inside the manipulation area 202, is further provided. The second control device 112 has the memory 112b which stores the second data, which is data of the three-dimensional model of the robot 101. Further, when the operator operates the first interface 121 to manipulate the robot 101 to perform the work for the workpiece 300, the second control device 112 displays on the display 105 the spatial relationship between the workpiece 300 and the tip-end part of the robot 101 as seen from a direction different from the direction in which the operator looks at the robot 101 from the manipulation area 202, based on the first data inputted by the output of the first data to the second control device 112, the position information on the first position of the conveyed workpiece, the second data, and the manipulation information on the robot 101 inputted from the first interface 121. According to the above configuration, the operator can easily grasp the spatial relationship between the workpiece 300 and the tip-end part of the robot 101, which is hard to grasp by directly looking at the robot 101 and the workpiece 300.
- Further, in this embodiment, the 3D camera 103 is attached to the first interface 121. According to this configuration, since the operator only has to grip a single device, it becomes easier to operate the first interface 121 and the 3D camera 103.
- Further, in this embodiment, the sensor 104 may wirelessly detect the position information and the posture information from the first interface 121, and the second control device 112 may calculate the locus of the first interface 121 based on the position information and the posture information from the first interface 121 detected by the sensor 104, and may perform the processing which operates the robot 101 in real time based on the calculated locus (Step S206). Thus, since it becomes easier to move the first interface 121, the robot 101 can be operated correctly.
- It is apparent to the person skilled in the art that many improvements or other embodiments of the present disclosure are possible from the above description. Therefore, the above description is to be interpreted only as illustration, and it is provided in order to teach the person skilled in the art the best mode that implements the present disclosure. The details of the structures and/or the functions may be changed substantially without departing from the present disclosure.
- 11a: First Link
- 11b: Second Link
- 11c: Third Link
- 11d: Fourth Link
- 11e: Fifth Link
- 11f: Sixth Link
- 12: Detector
- 13: Transmitter
- 15: Pedestal
- 20: End Effector
- 21: Piping
- 100: Robot System
- 101: Robot
- 101A: 3D Robot
- 102: Robotic Arm
- 103: 3D Camera
- 104: Sensor
- 105: Display
- 106: Conveying Device
- 107: Shutter
- 111: First Control Device
- 111a: Processor
- 111b: Memory
- 112: Second Control Device
- 112a: Processor
- 112b: Memory
- 121: First Interface
- 121A: Gripper
- 121B: Switch
- 121E: Interface Body
- 122: Second Interface
- 122B: Switch
- 150: Alarm
- 201: Workarea
- 202: Manipulation Area
- 203: Wall Member
- 204: Window
- 300: Workpiece
- 301: 3D Workpiece
- 301A: 3D Workpiece
- 301B: 3D Workpiece
- 301C: 3D Workpiece
- 302A: Workpiece Image
- 302B: Workpiece Image
- 302C: Workpiece Image
- JT1: First Joint
- JT2: Second Joint
- JT3: Third Joint
- JT4: Fourth Joint
- JT5: Fifth Joint
- JT6: Sixth Joint
Claims (13)
1. A robot system, comprising:
   a robot installed in a workarea and controlled by a second control device;
   a 3D camera operated by an operator;
   a sensor that is disposed in a manipulation area that is a space different from the workarea, and wirelessly detects position information and posture information on the 3D camera;
   a display; and
   a first control device, the first control device being adapted to:
   acquire image information on a workpiece imaged by the 3D camera;
   acquire, from the sensor, the position information and the posture information when the workpiece is imaged by the 3D camera;
   display the acquired image information on the display;
   form a three-dimensional model of the workpiece based on the image information, and the acquired position information and posture information;
   display the formed three-dimensional model on the display; and
   output first data that is data of the formed three-dimensional model to the second control device.
2. The robot system of claim 1, wherein, after repeatedly performing the acquisition of the image information, the acquisition of the position information and the posture information from the sensor, and the display of the image information on the display, the first control device performs the formation of the 3D model, the display of the formed 3D model on the display, and the output of the first data to the second control device.
3. The robot system of claim 1 or 2, wherein, after performing the acquisition of the image information, the acquisition of the position information and the posture information from the sensor, the display of the image information on the display, the formation of the 3D model, and the display of the formed 3D model on the display,
   the first control device repeatedly performs the acquisition of the image information, the acquisition of the position information and the posture information from the sensor, and the display of the image information on the display, once or more,
   in the formation of the 3D model, the first control device then forms the three-dimensional model based on the image information acquired at the Ath time, and the acquired position information and posture information,
   the first control device again forms the three-dimensional model based on the formed three-dimensional model and the image information acquired at the Bth time (B≠A), and
   the first control device then performs the display of the formed 3D model on the display, and the output of the first data to the second control device.
4. The robot system of any one of claims 1 to 3, wherein, after performing the acquisition of the image information, the acquisition of the position information and the posture information from the sensor, the display of the image information on the display, the formation of the 3D model, and the display of the formed 3D model on the display,
   the first control device repeatedly performs the acquisition of the image information, the acquisition of the position information and the posture information from the sensor, and the display of the image information on the display, once or more,
   in the formation of the 3D model, the first control device then forms the three-dimensional model based on the image information acquired at the Cth time, and the acquired position information and posture information,
   the first control device forms the three-dimensional model based on the image information acquired at the Dth time (D≠C),
   the first control device again forms the three-dimensional model based on the three-dimensional model formed based on the image information acquired at the Cth time, and the acquired position information and posture information, and the three-dimensional model formed based on the image information acquired at the Dth time (D≠C), and
   the first control device then performs the display of the formed 3D model on the display, and the output of the first data to the second control device.
5. The robot system of any one of claims 1 to 4, further comprising a conveying device that conveys the workpiece from the manipulation area to a first position in the workarea that is set beforehand.
6. The robot system of any one of claims 1 to 5, further comprising a first interface that manipulates the robot and is disposed in the manipulation area,
   wherein the second control device has a memory that stores second data that is data of a three-dimensional model of the robot, and
   wherein, when the operator operates the first interface to manipulate the robot to perform a work for the workpiece, the second control device displays on the display a spatial relationship between the workpiece and a tip-end part of the robot in a state where the workpiece and the robot are seen from a direction different from a direction in which the operator looks at the robot from the manipulation area, based on the first data inputted by the output of the first data to the second control device, the position information on the first position of the conveyed workpiece, the second data, and manipulation information on the robot inputted from the first interface.
7. The robot system of claim 6, wherein the 3D camera is attached to the first interface.
8. The robot system of claim 7, wherein the sensor wirelessly detects the position information and the posture information from the first interface, and
   wherein the second control device calculates a locus of the first interface based on the position information and the posture information from the first interface, which are detected by the sensor, and operates the robot in real time based on the calculated locus.
9. The robot system of claim 8, wherein the second control device has the memory that stores second data that is data of the three-dimensional model of the robot, and
   wherein the second control device displays on the display the spatial relationship between the workpiece and the tip-end part of the robot in a state where the workpiece and the robot are seen from a direction different from a direction in which the operator looks at the robot from the manipulation area, based on the first data inputted by the output of the first data to the second control device, the position information on the first position of the conveyed workpiece, the second data, and the locus calculated in the operation of the robot in real time.
10. The robot system of any one of claims 1 to 9, wherein the workarea is an explosion-proof area, and the manipulation area is a non-explosion-proof area.
11. The robot system of any one of claims 1 to 10, wherein the 3D camera is provided to a tip-end part of a robotic arm, and
   wherein the robot system further comprises a second interface that operates the robotic arm.
12. The robot system of any one of claims 1 to 11, wherein the first control device also serves as the second control device.
13. A method of forming a three-dimensional model of a workpiece, comprising the steps of:
   detecting position information and posture information on a 3D camera, when the 3D camera images the workpiece disposed in a manipulation area that is a space different from a workarea where a robot is installed;
   acquiring image information on the imaged workpiece;
   acquiring the detected position information and posture information;
   displaying the acquired image information on a display; and
   forming the three-dimensional model of the workpiece based on the acquired image information, and the acquired position information and posture information.