WO2019180916A1 - Robot control device - Google Patents

Robot control device

Info

Publication number
WO2019180916A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
gesture
robot control
control device
motion
Prior art date
Application number
PCT/JP2018/011704
Other languages
French (fr)
Japanese (ja)
Inventor
堅太 藤本
奥田 晴久
文俊 松野
孝浩 遠藤
Original Assignee
三菱電機株式会社
国立大学法人京都大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三菱電機株式会社, 国立大学法人京都大学
Priority to PCT/JP2018/011704
Priority to JP2019510378A
Publication of WO2019180916A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J 19/02: Sensing devices

Definitions

  • the present invention relates to a robot control apparatus that controls a robot.
  • In general robot teaching work, (i) a device expert uses a teaching box dedicated to teaching and connected to the control device to move the robot hand to a desired position and orientation, and position and orientation data indicating that position and orientation is stored in the robot program in advance, and (ii) the control device calculates, based on the stored data, operation instructions (operation commands) to the servo motors that satisfy the path connecting those positions and orientations. Since such teaching is required for each shape of the object to be worked on, a robot must be taught many times in order to work on objects of various shapes, as in high-mix, low-volume production.
  • The robot control unit detects the operator's gesture from imaging information, identifies the robot control command associated with the gesture, and performs robot control corresponding to the control command, while the display control unit displays the robot control command to the operator. With such a configuration, the operator can confirm from the displayed robot control command whether the instruction was correctly transmitted to the robot.
  • In the teaching step, the robot is made to perform a motion based on a three-dimensional virtual world image generated using sensory information acquired from the robot to be taught. Motion information and sensory information such as vision, touch, and hearing are acquired from the robot's tactile sensors and cameras. The robot then imitates the desired motion based on the motion information and the sensory information. The operator can thus make the robot imitate motions corresponding to various objects by performing motions with reference to sensory information such as vision, touch, and hearing.
  • JP 2014-104527 A; Japanese Patent No. 4463120
  • The above techniques have the problem that it is difficult for a beginner who does not know the robot control commands, or a beginner who does not know the physicality of the robot such as the movable range of the actual robot, to operate the robot.
  • the present invention has been made in view of the above-described problems, and an object thereof is to provide a technique that allows an operator to easily operate a robot.
  • The robot control device is a robot control device that controls a robot and includes: a work support device that recognizes the worker's gesture based on the result of detecting the worker's motion with a sensor and determines, based on the gesture, an operation instruction for the robot to perform a motion; a robot controller that determines information for controlling the motion of the robot based on the operation instruction determined by the work support device and outputs the information to servo motors; and a motion display device that displays, as the motion of the robot, a human animation that moves based on the operation instruction determined by the work support device.
  • the operation instruction is determined based on the gesture of the worker, the operation of the robot is controlled based on the determined operation instruction, and the moving human animation is displayed based on the determined operation instruction. According to such a configuration, even if the operator does not have knowledge of the robot, the robot can be easily operated by viewing the animation display.
  • FIG. 1 is a block diagram illustrating the configuration of the robot control device according to Embodiment 1.
  • FIG. 2 is a perspective view showing the robot according to Embodiment 1.
  • FIG. 3 is a block diagram illustrating the configuration of the robot control device according to Embodiment 1.
  • FIG. 4 is a diagram showing the gesture/motion visualization table according to Embodiment 1.
  • FIG. 5 is a diagram showing the motion visualization/animation table according to Embodiment 1.
  • FIG. 6 is a flowchart illustrating the procedure of the initial processing according to Embodiment 1.
  • FIG. 7 is a diagram showing an example of a state transition diagram.
  • FIG. 8 is a diagram showing an example of the association between the movement locations of the robot and their position coordinates.
  • FIG. 9 is a block diagram illustrating the configuration of the robot control device according to Embodiment 1.
  • FIG. 10 is a block diagram illustrating the configuration of the robot control device according to Embodiment 2.
  • FIG. 11 is a diagram showing the motion visualization/animation table according to Embodiment 2.
  • FIG. 12 is a block diagram illustrating the configuration of the robot control device according to Embodiment 3.
  • FIG. 13 is a diagram showing the gain table according to Embodiment 3.
  • FIG. 14 is a diagram showing the motion visualization/animation table according to Embodiment 3.
  • FIG. 1 is a block diagram showing a configuration of a robot control apparatus according to an embodiment of the present invention.
  • the robot control device of FIG. 1 includes a gesture detection sensor 1, a work support device 2, a programmable logic controller 3, a robot controller 4, and an operation display device 5.
  • the gesture detection sensor 1 and the work support device 2 are connected to the programmable logic controller 3, the robot controller 4, and the operation display device 5 through a network.
  • The robot control device is communicably connected to the robot 9 and controls the robot 9.
  • the robot 9 may be an arm type robot that can perform a plurality of operations by a plurality of servo motors, or may be another robot.
  • The gesture detection sensor 1 includes, for example, a non-contact distance sensor, and detects a plurality of coordinates from the worker's motion.
  • the work support device 2 recognizes the gesture of the worker based on the result (a plurality of coordinates) detected by the gesture detection sensor 1. Then, the work support device 2 determines an operation instruction for the robot 9 to perform an operation based on the recognized gesture.
  • the programmable logic controller 3 acquires the device state, which is the device state of the robot 9, from an area sensor or the like that detects the device state, and manages the execution of a program for controlling the operation of the robot 9.
  • the robot controller 4 determines information for controlling the operation of the robot 9 based on the operation instruction determined by the work support device 2 and outputs the information to a servo motor (not shown) of the robot 9.
  • In Embodiment 1, the robot controller 4 generates an operation command for operating the robot 9 (information for controlling the motion of the robot 9) based on the operation instruction determined by the work support device 2, the device state acquired by the programmable logic controller 3, and the program managed by the programmable logic controller 3.
  • the robot controller 4 operates the robot 9 by passing the generated operation command to the robot 9.
  • the motion display device 5 displays a human animation that moves based on the motion instruction determined by the work support device 2 as the motion of the robot 9.
  • the human being displayed by animation includes a figure imitating a human like an avatar, for example.
  • the animation displayed by the motion display device 5 is assumed to be an animation of an avatar moving, and this animation will be described as “avatar animation”.
  • the work support device 2 includes a human motion detection unit 2a, a gesture recognition unit 2b, a motion visualization mechanism unit 2c, a work instruction conversion unit 2d, and a motion display control unit 2e. Although not shown, the work support apparatus 2 also includes a storage unit that can store various tables.
  • the initial process is a process performed before the work starts, and is a process corresponding to, for example, a teaching process and a registration process.
  • the work process is a process in which work is performed by operating the robot 9 in accordance with the operation of the worker.
  • FIG. 3 is a block diagram showing components that perform initial processing among the components of the robot control apparatus. As shown in FIG. 3, the initial process is performed by the human motion detection unit 2a, the gesture recognition unit 2b, the motion visualization mechanism unit 2c, and the like shown in FIG. Hereinafter, components related to the initial process will be described with reference to FIG. 3 and the like.
  • The gesture detection sensor 1 detects coordinates on the worker from the worker's motion at any time (for example, regularly or periodically).
  • The human motion detection unit 2a detects the coordinates of a specific part (for example, a hand) of the worker's body at any time, based on the coordinates detected by the gesture detection sensor 1.
  • the gesture recognition unit 2b obtains the movement amount of the specific part of the worker based on the coordinates of the specific part detected at any time by the human motion detection unit 2a. During the initial processing, the gesture recognition unit 2b registers a combination of the specific part and the movement amount as a gesture. By using this registration, the gesture recognition unit 2b can recognize (specify) the gesture based on the coordinates detected by the gesture detection sensor 1 at any time. Different gesture numbers are assigned to different gestures.
  • FIG. 4 is a table showing an example of the result of association of the motion visualization mechanism unit 2c.
  • the motion visualization number is a number for specifying the motion of the robot 9 and corresponds to the motion instruction described above.
  • The motion visualization mechanism unit 2c registers the association result as shown in FIG. 4 in the gesture/motion visualization table 2f of FIG. 3.
  • Also during the initial processing, the motion visualization mechanism unit 2c associates each motion visualization number with one of a plurality of avatar animations stored in the motion visualization library 2g and with a movement location.
  • For example, an avatar animation showing how an avatar with ordinary human physical characteristics would behave when performing the motion of a motion visualization number in a normal manner is associated with that motion visualization number.
  • A movement location is a position related to the robot 9, such as the position of the tip of the hand connected to the robot 9, the position of the workpiece gripped by the hand connected to the robot 9, or the position of a joint of the robot 9.
  • FIG. 5 is a table showing an example of the result of the association by the motion visualization mechanism unit 2c.
  • The motion visualization mechanism unit 2c registers the association result as shown in FIG. 5 in the motion visualization/animation table 2h of FIG. 3.
  • FIG. 6 is a flowchart showing a procedure of initial processing.
  • In step S1, a device expert such as a skilled technician creates a state transition diagram that defines the order of the motions of the robot 9, and registers it in the robot control device.
  • FIG. 7 is a diagram illustrating an example of a state transition diagram. In FIG. 7, a1 to a19 indicate motion visualization numbers. The transition state at the start of the robot 9 is an “initial state”, and the transition state of the robot 9 changes according to the gesture.
  • In step S2 of FIG. 6, the motion visualization mechanism unit 2c associates the gesture number of each gesture with a motion visualization number.
  • For example, a device expert performs an input operation for filling in the table of FIG. 4 based on the state transition diagram as shown in FIG. 7, and the motion visualization mechanism unit 2c performs the association based on that input operation.
  • For example, if an input operation is performed so that the robot 9 in the initial state performs the motion of motion visualization number a1 when gesture g1 is recognized, gesture g1 is associated with motion visualization number a1.
  • Similarly, if an input operation is performed so that the robot 9 in the initial state performs the motion of motion visualization number a11 when gesture g2 is recognized, gesture g2 is associated with motion visualization number a11.
  • Through step S2, the association result as shown in FIG. 4 is registered in the gesture/motion visualization table 2f of FIG. 3.
  • In step S3 of FIG. 6, the motion visualization mechanism unit 2c associates each motion visualization number with an avatar animation and a movement location indicating a position of the robot 9.
  • For example, a device expert may perform an input operation for filling in the table of FIG. 5 based on the state transition diagram as shown in FIG. 7, and the motion visualization mechanism unit 2c may perform the association based on that input operation.
  • Alternatively, the motion visualization mechanism unit 2c may perform the association by analyzing the state transition diagram as shown in FIG. 7. Through step S3, the association result as shown in FIG. 5 is registered in the motion visualization/animation table 2h of FIG. 3.
  • In step S4 of FIG. 6, the device expert creates a position information management table in which each movement location in FIG. 5 is associated with the position coordinates of that movement location, and registers it in the robot control device.
  • FIG. 8 is a diagram showing an example of the location information management table.
  • In this step, for example, the locations to which the robot 9 can move are scanned.
  • Specifically, as many markers as there are movement-location records in FIG. 5 are set at the site, and when a camera (not shown) attached to the robot 9 detects the image of a marker, the position coordinates of the robot 9 are acquired.
  • The acquired on-site position coordinates and the movement locations in FIG. 5 are then associated with each other and registered in the position information management table of FIG. 8.
  • The position coordinates of the robot 9 here include, for example, the position coordinates of the tip of the hand connected to the robot 9, the position coordinates of the workpiece gripped by the hand connected to the robot 9, and the position coordinates of the joints of the robot 9.
  • FIG. 9 is a block diagram illustrating components that perform work processing among the components of the robot control device. As shown in FIG. 9, the work process is performed by all the components shown in FIG. Hereinafter, the components related to the work process will be described with reference to FIG.
  • The human motion detection unit 2a detects the coordinates of a specific part of the worker's body at any time (for example, regularly or periodically), based on the human coordinates detected by the gesture detection sensor 1.
  • the gesture recognition unit 2b obtains the movement amount of the specific part of the worker based on the coordinates of the specific part detected at any time by the human motion detection unit 2a, and recognizes the gesture based on the obtained movement amount of the specific part.
  • Based on the gesture recognized by the gesture recognition unit 2b and the transition state of the robot 9, the motion visualization mechanism unit 2c identifies one motion visualization number from the state transition diagram as shown in FIG. 7 and the table as shown in FIG. 4. The motion visualization mechanism unit 2c then identifies one avatar animation and one movement location from the table as shown in FIG. 5 based on the identified motion visualization number.
  • Specifically, when the transition state of the robot 9 is the "initial state" of FIG. 7 and the gesture recognition unit 2b recognizes gesture g1, the motion visualization mechanism unit 2c identifies the motion visualization number a1 associated with gesture g1 from the table of FIG. 4.
  • The motion visualization mechanism unit 2c then identifies the avatar animation and the movement location "1" associated with the motion visualization number a1 from the table of FIG. 5. Thereby, for example, an animation in which the avatar's hand moves from movement location "0" to movement location "1" is identified as the avatar animation associated with the motion visualization number a1.
  • On the other hand, when the gesture recognition unit 2b recognizes gesture g5 in the "initial state", the motion visualization mechanism unit 2c identifies the motion visualization number a11 associated with gesture g5 from the table of FIG. 4.
  • The motion visualization mechanism unit 2c then identifies the avatar animation and the movement location "2" associated with the motion visualization number a11 from the table of FIG. 5. Thereby, for example, an animation in which the avatar's hand moves from movement location "0" to movement location "2" is identified as the avatar animation associated with the motion visualization number a11.
  • The work instruction conversion unit 2d identifies the position coordinates of the movement location from the table as shown in FIG. 8 registered in the position information management table 2i, based on the one movement location identified by the motion visualization mechanism unit 2c. The work instruction conversion unit 2d then passes the identified position coordinates to the programmable logic controller 3.
  • the programmable logic controller 3 includes a sensor input unit 3a and a program execution management unit 3b.
  • the sensor input unit 3a acquires a device state.
  • the device state is, for example, position information detected by an encoder of the robot 9, information on the arm position of the robot 9, information on the presence / absence of a workpiece, and the like.
  • The program execution management unit 3b manages the program used by the robot controller 4 to control the motion of the robot 9.
  • the programmable logic controller 3 configured as described above has a sensor input function for acquiring a device state and a program execution management function of the robot controller 4. Further, when the programmable logic controller 3 receives the position coordinates from the work instruction conversion unit 2d, the programmable logic controller 3 notifies the robot controller 4 of the position coordinates and the execution start of the program.
  • the robot controller 4 includes a robot command generation unit 4a and a state acquisition unit 4b.
  • The robot command generation unit 4a generates an operation command to the robot 9 from the position coordinates from the programmable logic controller 3 (in effect, the position coordinates from the work support device 2), without requiring awareness of the physicality of the robot 9.
  • The robot command generation unit 4a calculates the movement amount of the specific part of the robot 9 based on the position coordinates from the programmable logic controller 3 and the current position coordinates of the robot 9 managed by the robot command generation unit 4a. The robot command generation unit 4a then calculates the movement amount of each axis of the robot 9 based on the calculated movement amount, and passes an operation command including those movement amounts to the robot 9. The specific part of the robot 9 thereby moves to the target position.
  • The state acquisition unit 4b acquires, as the robot state, a camera image obtained by a sensor such as a vision sensor or an image sensor attached to the robot, and notifies the robot command generation unit 4a of the robot state.
  • When an object is to be gripped, the robot command generation unit 4a controls the robot 9 so that the robot 9 approaches the object and grips it, based on the robot state such as the camera image notified from the state acquisition unit 4b. When the object is to be released, the robot command generation unit 4a controls the robot 9 so that the robot 9 moves to the target position and releases the hand, based on the robot state such as the camera image notified from the state acquisition unit 4b.
  • the programmable logic controller 3 passes the device state acquired by the sensor input unit 3a to the robot command generation unit 4a.
  • Based on the device state, the robot command generation unit 4a feeds back corrections to the movement amount of each axis in the robot controller 4 and the robot 9.
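  • For illustration, the per-axis command generation described above might look like the following Python sketch; the inverse-kinematics call is a placeholder for whatever solver the actual robot uses, and all names are assumptions rather than part of the original description.

```python
# Minimal sketch (illustrative only) of turning target position coordinates
# into per-axis movement amounts, as described above.

def generate_operation_command(target_xyz, current_xyz, current_joint_angles,
                               inverse_kinematics):
    """Return the Cartesian movement of the specific part and the per-axis
    movement amounts for the target position.

    `inverse_kinematics` stands in for whatever solver maps a Cartesian
    position to joint angles for the actual robot.
    """
    # Movement amount of the specific part (e.g. the tip of the hand).
    part_delta = [t - c for t, c in zip(target_xyz, current_xyz)]
    # Joint angles that realize the target position.
    target_joint_angles = inverse_kinematics(target_xyz)
    # Per-axis movement amounts handed to the servo motors.
    axis_moves = [tj - cj for tj, cj in zip(target_joint_angles,
                                            current_joint_angles)]
    return part_delta, axis_moves

# Device-state feedback (e.g. encoder readings) could then be used to correct
# `axis_moves` on the next control cycle, as the text describes.
```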
  • When the programmable logic controller 3 detects, as device states, the completion of movement of the robot arm and the completion of movement of the workpiece, it notifies the work support device 2 of a program state indicating that execution of the program has been completed.
  • When the motion visualization mechanism unit 2c receives the program execution completion from the programmable logic controller 3 as the program state, it passes the identified avatar animation to the motion display control unit 2e.
  • the motion display control unit 2e causes the motion display device 5 to display the avatar animation.
  • As described above, in Embodiment 1, the operation instruction is determined based on the worker's gesture, and the motion of the robot 9 is controlled based on the determined operation instruction.
  • A human animation that moves based on the determined operation instruction is also displayed.
  • In other words, the motion of the robot 9 is displayed as a human animation. Therefore, even an operator who has no knowledge of the robot 9, such as its control commands, movable range, or physicality, can easily operate the robot 9 by viewing the human animation.
  • the work support apparatus 2 is connected to the programmable logic controller 3 via a network. Thereby, the operator can perform remote operation of the robot 9.
  • FIG. 10 is a block diagram showing the configuration of the robot control apparatus according to the second embodiment of the present invention.
  • constituent elements that are the same as or similar to the constituent elements described above are assigned the same reference numerals, and different constituent elements are mainly described.
  • the work state conversion unit 2j is provided in the work support device 2.
  • the state acquisition unit 4b of the robot controller 4 notifies the robot command generation unit 4a and the work support device 2 of the robot state.
  • the work instruction conversion unit 2d generates a work state including the robot state from the state acquisition unit 4b and the program state from the programmable logic controller 3.
  • the work instruction conversion unit 2d passes the generated work state to the motion visualization mechanism unit 2c, and the motion visualization mechanism unit 2c passes the work state to the work state conversion unit 2j.
  • FIG. 11 is a diagram showing an operation visualization / animation table 2h according to the second embodiment.
  • Compared with the table of FIG. 5, columns for a sound number, a vibration number, a start sound, an end sound, a start vibration, and an end vibration are added to the motion visualization/animation table 2h of FIG. 11.
  • In the sound number column, a file name identifying a sound file (denoted 1 and 2 for convenience in FIG. 11) is set.
  • In the vibration number column, a vibration frequency (denoted 1 and 2 for convenience in FIG. 11) is set. The sound numbers and vibration numbers are set by the operator, for example, during the gesture association in the initial processing.
  • When a sound is to be output at the start of a motion, the start sound flag of that motion visualization number in FIG. 11 is set to ON (in FIG. 11, a circle indicates ON).
  • When a sound is to be output at the end of a motion, the end sound flag of that motion visualization number in FIG. 11 is set to ON.
  • When no sound is to be output, neither the start sound flag nor the end sound flag of that motion visualization number in FIG. 11 is set to ON.
  • When a vibration is to be output at the start of a motion, the start vibration flag of that motion visualization number in FIG. 11 is set to ON.
  • When a vibration is to be output at the end of a motion, the end vibration flag of that motion visualization number in FIG. 11 is set to ON.
  • When no vibration is to be output, neither the start vibration flag nor the end vibration flag of that motion visualization number in FIG. 11 is set to ON.
  • When the motion visualization mechanism unit 2c identifies one motion visualization number and a file name is set in the sound number corresponding to that motion visualization number, it passes the sound file identified by that file name to the work state conversion unit 2j. Similarly, when a vibration frequency is set in the vibration number corresponding to that motion visualization number, the motion visualization mechanism unit 2c passes the vibration frequency to the work state conversion unit 2j.
  • the output device 6 includes a speaker 6a that notifies (outputs) sound to the worker and a vibration device 6b that notifies (outputs) vibration to the worker.
  • the work state conversion unit 2j controls the output device 6 based on the work state, sound file, and vibration frequency from the motion visualization mechanism unit 2c.
  • When the work state including the program state from the programmable logic controller 3 indicates that execution of the program has been completed and the end sound flag of the motion visualization number executed so far is ON, the work state conversion unit 2j passes the sound file identified by the motion visualization mechanism unit 2c to the speaker 6a. Likewise, when the work state indicates that execution of the program has been completed and the end vibration flag of that motion visualization number is ON, the work state conversion unit 2j passes the vibration frequency identified by the motion visualization mechanism unit 2c to the vibration device 6b.
  • In other words, when the robot 9 completes a motion based on a specific motion visualization number (operation instruction), the work state conversion unit 2j configured in this way outputs to the speaker 6a the sound associated in advance with the completion of that motion in the table, and outputs to the vibration device 6b the vibration associated in advance with the completion of that motion in the table.
  • As in Embodiment 1, the motion visualization mechanism unit 2c identifies one motion visualization number from the state transition diagram as shown in FIG. 7 and the association as shown in FIG. 4.
  • When the motion visualization mechanism unit 2c identifies one avatar animation and one movement location from the table of FIG. 11, it also identifies the sound file of the corresponding file name and passes it to the work state conversion unit 2j.
  • the work state conversion unit 2j passes the sound file specified by the motion visualization mechanism unit 2c to the speaker 6a.
  • Similarly, when the motion visualization mechanism unit 2c identifies one avatar animation and one movement location from the table of FIG. 11, it also identifies the corresponding vibration frequency and passes it to the work state conversion unit 2j.
  • the work state conversion unit 2j passes the vibration frequency specified by the motion visualization mechanism unit 2c to the vibration device 6b.
  • In other words, when the robot 9 starts a motion based on a specific motion visualization number (operation instruction), the work state conversion unit 2j configured in this way outputs to the speaker 6a the sound associated in advance with the start of that motion in the table, and outputs to the vibration device 6b the vibration associated in advance with the start of that motion in the table.
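  • For illustration, the start/end notification behavior of the work state conversion unit 2j could be sketched as follows; the table contents, file names, and the speaker/vibration device interfaces are assumptions, not part of the original description.

```python
# Illustrative sketch of FIG. 11-style feedback flags: for each motion
# visualization number, decide whether to send a sound file to the speaker
# and a vibration frequency to the vibration device at the start or at the
# completion of the motion. All values below are hypothetical.

MOTION_FEEDBACK = {
    # motion number: (sound file, vibration Hz,
    #                 start_sound, end_sound, start_vibration, end_vibration)
    "a1":  ("pick_started.wav", 200, True, False, True, False),
    "a11": ("place_done.wav",   150, False, True, False, True),
}

def notify(motion, event, speaker, vibration_device):
    """Notify the worker; `event` is 'start' or 'end'.

    `speaker.play(...)` and `vibration_device.vibrate(...)` are assumed
    interfaces standing in for the speaker 6a and vibration device 6b.
    """
    sound, freq, s_snd, e_snd, s_vib, e_vib = MOTION_FEEDBACK[motion]
    if (event == "start" and s_snd) or (event == "end" and e_snd):
        speaker.play(sound)
    if (event == "start" and s_vib) or (event == "end" and e_vib):
        vibration_device.vibrate(freq)
```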
  • FIG. 12 is a block diagram showing the configuration of the robot control apparatus according to Embodiment 3 of the present invention.
  • constituent elements that are the same as or similar to the constituent elements described above are assigned the same reference numerals, and different constituent elements are mainly described.
  • In Embodiment 3, the gesture recognition unit 2b recognizes not only a normal gesture, which is a specific first gesture for determining a motion visualization number (operation instruction), but also a switching gesture, which is a specific second gesture different from the first gesture, and a manual gesture, which is a third gesture indicating movement of a specific part of the worker in a space of predetermined coordinates.
  • the predetermined coordinates are orthogonal coordinates defined by the x-axis, y-axis, and z-axis, and the specific part of the operator is a human hand.
  • the work support device 2 defines a normal mode that is the first mode and a manual mode that is the second mode. In the normal mode, the work support apparatus 2 determines an operation instruction based on the normal gesture as in the first and second embodiments. In the manual mode, the work support device 2 determines the movement amount for the robot 9 to perform movement based on the movement of the hand of the manual gesture as the operation instruction.
  • FIG. 13 is a diagram showing a gain table 2k according to the third embodiment.
  • gxa, gya, and gza are proportional gains of the x-axis, y-axis, and z-axis, respectively.
  • gxb, gyb, and gzb are integral gains of the x-axis, y-axis, and z-axis, respectively.
  • The gesture recognition unit 2b obtains the movement amount of the robot 9 using equation (1), based on the movement indicated by the manual gesture, the preset proportional gains, and the preset integral gains.
  • In equation (1), the previous x coordinate and the current x coordinate of the body part are coordinates indicating the x-axis movement represented by the manual gesture, and the hand coordinate x corresponds to the movement amount of the robot 9.
  • The y coordinate and the z coordinate are handled in the same way as the x coordinate.
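  • Equation (1) itself is not reproduced in this text; purely as an assumed form consistent with the description (per-axis proportional gains gxa, gya, gza and integral gains gxb, gyb, gzb applied to the change in body part coordinates), a manual-mode mapping might look like the following Python sketch.

```python
# Hedged sketch: an assumed proportional-plus-integral mapping from the
# change in body-part coordinates to the robot movement amount. The actual
# equation (1) of the patent is not reproduced here.

class ManualModeMapper:
    def __init__(self, p_gains, i_gains):
        self.p = p_gains                       # e.g. {"x": gxa, "y": gya, "z": gza}
        self.i = i_gains                       # e.g. {"x": gxb, "y": gyb, "z": gzb}
        self.accum = {"x": 0.0, "y": 0.0, "z": 0.0}  # accumulated movement per axis

    def hand_coordinate(self, prev, curr):
        """Map the previous/current body-part coordinates to a movement amount."""
        move = {}
        for axis in ("x", "y", "z"):
            delta = curr[axis] - prev[axis]
            self.accum[axis] += delta
            move[axis] = self.p[axis] * delta + self.i[axis] * self.accum[axis]
        return move

mapper = ManualModeMapper({"x": 1.0, "y": 1.0, "z": 0.5},
                          {"x": 0.1, "y": 0.1, "z": 0.05})
print(mapper.hand_coordinate({"x": 0.0, "y": 0.0, "z": 0.0},
                             {"x": 0.02, "y": 0.0, "z": -0.01}))
```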
  • the gesture recognition unit 2b notifies the obtained movement amount to the work instruction conversion unit 2d.
  • the work instruction conversion unit 2d notifies the programmable logic controller 3 of the notified movement amount.
  • the programmable logic controller 3 notifies the robot controller 4 of the notified movement amount.
  • the robot controller 4 performs control for moving the robot 9 based on the notified movement amount when the work support apparatus 2 performs the manual mode.
  • FIG. 14 is a diagram showing an operation visualization / animation table 2h according to the third embodiment.
  • In FIG. 14, "manual mode" is set in the movement location column of motion visualization number "14", and "normal mode" is set in the movement location column of motion visualization number "16".
  • The gestures corresponding to motion visualization numbers "1" to "13", "15", and "17" to "19" correspond to normal gestures, and the gestures corresponding to motion visualization numbers "14" and "16" correspond to switching gestures.
  • When the motion visualization number "14" is identified based on the gesture recognized by the gesture recognition unit 2b and the transition state of the robot 9, the motion visualization mechanism unit 2c notifies the gesture recognition unit 2b of "manual mode". Conversely, when the motion visualization number "16" is identified based on the recognized gesture and the transition state of the robot 9, the motion visualization mechanism unit 2c notifies the gesture recognition unit 2b of "normal mode". In this way, when the work support device 2 recognizes a switching gesture, it switches from the normal mode to the manual mode or from the manual mode to the normal mode.
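  • For illustration, the mode switching described above could be sketched as follows; the class and method names are assumptions, not part of the original description.

```python
# Illustrative sketch: motion visualization numbers "14" and "16" toggle the
# work support device between manual mode and normal mode; all other numbers
# are handled according to the current mode.

MODE_SWITCH = {"14": "manual", "16": "normal"}

class WorkSupport:
    def __init__(self):
        self.mode = "normal"

    def on_motion_identified(self, motion_number):
        new_mode = MODE_SWITCH.get(motion_number)
        if new_mode is not None:
            self.mode = new_mode   # switching gesture: change mode only
            return
        if self.mode == "normal":
            pass  # determine an operation instruction from the normal gesture
        else:
            pass  # manual mode: map hand movement to a movement amount
```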
  • the work support apparatus 2 performs the manual mode instead of the normal mode when the switching gesture is recognized. According to such a configuration, the operator can switch to the manual mode that can deal with fine work by performing a switching gesture.
  • Further, the movement amount of the robot 9 during the work can be optimized.

Abstract

The purpose of the present invention is to provide technology with which an operator can easily operate a robot. This robot control device for controlling a robot 9 comprises: an operation support device 2 which recognizes a gesture from an operator and, on the basis of said gesture, determines an action instruction for the robot 9 to perform an action; a robot controller 4 which controls the action of the robot 9 on the basis of the action instruction determined by the operation support device 2; and an action display device 5, which displays an animation of a human that moves on the basis of the action instruction determined by the operation support device 2.

Description

Robot control device
The present invention relates to a robot control device that controls a robot.
In recent years in Japan, the working population has been shrinking and skilled technicians have been aging, so the number of people capable of manufacturing work has been decreasing. For this reason, in order to broaden the base of people who can participate in manufacturing, mechanisms are needed for participating in manufacturing from home or other remote locations, and for beginners to acquire skills quickly.
At manufacturing sites, automation in which robots take over work such as assembly and transport has been advancing. In order for a robot to perform the desired work through such automation, teaching is required that defines the robot's position and orientation (position, height, and tilt) at each point where the robot's behavior changes, such as the start, middle, and end of the work path.
General robot teaching work includes (i) having a device expert use a teaching box dedicated to teaching and connected to the control device to move the robot hand to a desired position and orientation, and store position and orientation data indicating that position and orientation in the robot program in advance, and (ii) having the control device calculate, based on the stored position and orientation data, operation instructions (operation commands) to the plurality of servo motors provided in the robot that satisfy the path (trajectory) connecting those positions and orientations. Since such teaching is required for each shape of the object to be worked on, a robot must be taught many times in order to work on objects of various shapes, as in high-mix, low-volume production.
Several methods have been proposed for giving operation instructions (work instructions) to a robot taught in this way with a dedicated teaching box. For example, there is a method of giving operation instructions to the robot using gestures, and a method of changing position information based on information obtained by teaching and information obtained by sensing the state of the robot, by having a person perform a gesture when the operation target changes. According to these methods, when the operator gives an operation instruction to the robot, the robot can be operated directly without knowing the interface specific to the robot.
For example, in the method disclosed in Patent Document 1, a robot control unit detects the operator's gesture from imaging information, identifies the robot control command associated with the gesture, and performs robot control corresponding to that control command, while a display control unit displays the robot control command to the operator. With such a configuration, the operator can confirm from the displayed robot control command whether the instruction was correctly transmitted to the robot.
For example, in the method disclosed in Patent Document 2, in a teaching step, the robot is made to perform a motion based on a three-dimensional virtual world image generated using sensory information acquired from the robot to be taught, and motion information and sensory information such as vision, touch, and hearing are acquired from the robot's tactile sensors and cameras. In an imitation step, the robot is made to imitate the desired motion based on the motion information and the sensory information. As a result, the operator can make the robot imitate motions corresponding to various objects by performing motions with reference to sensory information such as vision, touch, and hearing.
JP 2014-104527 A; Japanese Patent No. 4463120
However, with the above techniques, it is difficult for a beginner who does not know the robot control commands, or a beginner who does not know the physicality of the robot such as the movable range of the actual robot, to operate the robot.
The present invention has therefore been made in view of the above problems, and an object thereof is to provide a technique that allows an operator to operate a robot easily.
A robot control device according to the present invention is a robot control device that controls a robot, and includes: a work support device that recognizes a worker's gesture based on the result of detecting the worker's motion with a sensor and determines, based on the gesture, an operation instruction for the robot to perform a motion; a robot controller that determines information for controlling the motion of the robot based on the operation instruction determined by the work support device and outputs the information to servo motors; and a motion display device that displays, as the motion of the robot, a human animation that moves based on the operation instruction determined by the work support device.
According to the present invention, the operation instruction is determined based on the worker's gesture, the motion of the robot is controlled based on the determined operation instruction, and a human animation that moves based on the determined operation instruction is displayed. With such a configuration, even an operator who has no knowledge of the robot can easily operate the robot by viewing the animation display.
The objects, features, aspects, and advantages of the present invention will become more apparent from the following detailed description and the accompanying drawings.
FIG. 1 is a block diagram showing the configuration of the robot control device according to Embodiment 1.
FIG. 2 is a perspective view showing the robot according to Embodiment 1.
FIG. 3 is a block diagram showing the configuration of the robot control device according to Embodiment 1.
FIG. 4 is a diagram showing the gesture/motion visualization table according to Embodiment 1.
FIG. 5 is a diagram showing the motion visualization/animation table according to Embodiment 1.
FIG. 6 is a flowchart showing the procedure of the initial processing according to Embodiment 1.
FIG. 7 is a diagram showing an example of a state transition diagram.
FIG. 8 is a diagram showing an example of the association between the movement locations of the robot and their position coordinates.
FIG. 9 is a block diagram showing the configuration of the robot control device according to Embodiment 1.
FIG. 10 is a block diagram showing the configuration of the robot control device according to Embodiment 2.
FIG. 11 is a diagram showing the motion visualization/animation table according to Embodiment 2.
FIG. 12 is a block diagram showing the configuration of the robot control device according to Embodiment 3.
FIG. 13 is a diagram showing the gain table according to Embodiment 3.
FIG. 14 is a diagram showing the motion visualization/animation table according to Embodiment 3.
<Embodiment 1>
FIG. 1 is a block diagram showing the configuration of a robot control device according to an embodiment of the present invention. The robot control device of FIG. 1 includes a gesture detection sensor 1, a work support device 2, a programmable logic controller 3, a robot controller 4, and a motion display device 5. The gesture detection sensor 1 and the work support device 2 are connected to the programmable logic controller 3, the robot controller 4, and the motion display device 5 via a network.
The robot control device of FIG. 1 is communicably connected to the robot 9 and controls the robot 9. The robot 9 may be, for example, an arm type robot capable of performing a plurality of operations with a plurality of servo motors as shown in FIG. 2, or may be another type of robot.
The components of the robot control device will now be described briefly.
The gesture detection sensor 1 of FIG. 1 includes, for example, a non-contact distance sensor, and detects a plurality of coordinates from the worker's motion.
The work support device 2 recognizes the worker's gesture based on the result (a plurality of coordinates) detected by the gesture detection sensor 1. The work support device 2 then determines, based on the recognized gesture, an operation instruction for the robot 9 to perform a motion.
The programmable logic controller 3 acquires the device state, which is the state of the devices of the robot 9, from an area sensor or the like that detects the device state, and manages the execution of a program for controlling the motion of the robot 9.
The robot controller 4 determines information for controlling the motion of the robot 9 based on the operation instruction determined by the work support device 2, and outputs the information to servo motors (not shown) of the robot 9. In Embodiment 1, the robot controller 4 generates an operation command for operating the robot 9 (information for controlling the motion of the robot 9) based on the operation instruction determined by the work support device 2, the device state acquired by the programmable logic controller 3, and the program managed by the programmable logic controller 3. The robot controller 4 then operates the robot 9 by passing the generated operation command to the robot 9.
The motion display device 5 displays a human animation that moves based on the operation instruction determined by the work support device 2 as the motion of the robot 9. The human displayed by the animation includes, for example, a figure imitating a human, such as an avatar. Hereinafter, the animation displayed by the motion display device 5 is assumed to be an animation of a moving avatar, and this animation is referred to as an "avatar animation".
Next, some components of the robot control device will be described in detail.
The work support device 2 includes a human motion detection unit 2a, a gesture recognition unit 2b, a motion visualization mechanism unit 2c, a work instruction conversion unit 2d, and a motion display control unit 2e. Although not shown, the work support device 2 also includes a storage unit that can store various tables.
Two phases are defined for the robot control device according to Embodiment 1: initial processing and work processing. The initial processing is processing performed before work starts, and corresponds to, for example, teaching processing and registration processing. The work processing is processing in which work is performed by operating the robot 9 in accordance with the worker's motion. The components related to the initial processing are described below, followed by the components related to the work processing.
<Components related to initial processing>
FIG. 3 is a block diagram showing, among the components of the robot control device, the components that perform the initial processing. As shown in FIG. 3, the initial processing is performed by the human motion detection unit 2a, the gesture recognition unit 2b, the motion visualization mechanism unit 2c, and so on of FIG. 1. The components related to the initial processing are described below with reference to FIG. 3 and other figures.
The gesture detection sensor 1 detects coordinates on the worker from the worker's motion at any time (for example, regularly or periodically).
The human motion detection unit 2a detects the coordinates of a specific part (for example, a hand) of the worker's body at any time, based on the coordinates detected by the gesture detection sensor 1.
The gesture recognition unit 2b obtains the movement amount of the worker's specific part based on the coordinates of the specific part detected by the human motion detection unit 2a. During the initial processing, the gesture recognition unit 2b registers the combination of the specific part and the movement amount as a gesture. By using this registration, the gesture recognition unit 2b can recognize (identify) a gesture based on the coordinates detected by the gesture detection sensor 1. Different gesture numbers are assigned to different gestures.
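As an illustration of this step, the following is a minimal Python sketch (names and tolerances are assumptions, not part of the original description) of how a combination of a specific body part and its movement amount could be registered as a numbered gesture during the initial processing and matched against newly detected movements afterwards.

```python
import math

class GestureRecognizer:
    """Minimal sketch: register (body part, movement vector) pairs as numbered
    gestures, then match newly observed movements against them."""

    def __init__(self, tolerance=0.05):
        self.gestures = {}          # gesture number -> (part, movement vector)
        self.tolerance = tolerance  # allowed deviation per axis (assumed, in metres)
        self._next_number = 1

    def movement(self, prev_xyz, curr_xyz):
        """Movement amount of a specific part between two detections."""
        return tuple(c - p for p, c in zip(prev_xyz, curr_xyz))

    def register(self, part, movement):
        """Initial processing: store the part/movement combination as a gesture."""
        number = self._next_number
        self.gestures[number] = (part, movement)
        self._next_number += 1
        return number

    def recognize(self, part, movement):
        """Return the gesture number whose registered movement is closest to
        the observed one (within tolerance on every axis), or None."""
        best, best_dist = None, float("inf")
        for number, (reg_part, reg_move) in self.gestures.items():
            if reg_part != part:
                continue
            dist = math.dist(reg_move, movement)
            if dist < best_dist and all(
                abs(r - m) <= self.tolerance for r, m in zip(reg_move, movement)
            ):
                best, best_dist = number, dist
        return best

# Example: register a "move the right hand 20 cm in +x" gesture, then recognize it.
rec = GestureRecognizer()
g1 = rec.register("right_hand", (0.2, 0.0, 0.0))
observed = rec.movement((0.1, 0.5, 0.9), (0.31, 0.51, 0.9))
assert rec.recognize("right_hand", observed) == g1
```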
During the initial processing, the motion visualization mechanism unit 2c associates gesture numbers with motion visualization numbers. FIG. 4 is a table showing an example of the result of this association. A motion visualization number is a number for identifying a motion of the robot 9 and corresponds to the operation instruction described above. The motion visualization mechanism unit 2c registers the association result as shown in FIG. 4 in the gesture/motion visualization table 2f of FIG. 3.
Also during the initial processing, the motion visualization mechanism unit 2c associates each motion visualization number with one of a plurality of avatar animations stored in the motion visualization library 2g and with a movement location. For example, an avatar animation showing how an avatar with ordinary human physical characteristics would behave when performing the motion of that motion visualization number in a normal manner is associated with the motion visualization number. A movement location is a position related to the robot 9, such as the position of the tip of the hand connected to the robot 9, the position of the workpiece gripped by the hand connected to the robot 9, or the position of a joint of the robot 9. FIG. 5 is a table showing an example of the result of this association. The motion visualization mechanism unit 2c registers the association result as shown in FIG. 5 in the motion visualization/animation table 2h of FIG. 3.
FIG. 6 is a flowchart showing the procedure of the initial processing.
First, in step S1, a device expert such as a skilled technician creates a state transition diagram that defines the order of the motions of the robot 9, and registers it in the robot control device. FIG. 7 is a diagram showing an example of a state transition diagram. In FIG. 7, a1 to a19 indicate motion visualization numbers. The transition state of the robot 9 at the start is the "initial state", and the transition state of the robot 9 changes according to gestures.
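For illustration, a state transition diagram like that of FIG. 7 could be represented as a simple lookup from the current transition state and the recognized gesture to a motion visualization number and the next state; the table contents below are hypothetical.

```python
# A minimal sketch (illustrative, not the patent's data format) of a state
# transition table in the spirit of FIG. 7: for each transition state of the
# robot, each recognized gesture maps to a motion visualization number and a
# next state.
STATE_TRANSITIONS = {
    "initial": {"g1": ("a1", "state_1"),    # gesture g1 -> motion a1
                "g5": ("a11", "state_2")},  # gesture g5 -> motion a11
    "state_1": {"g2": ("a2", "state_1b")},
    # ... remaining states, up to motion visualization number a19
}

def next_motion(state, gesture):
    """Return (motion visualization number, next state), or None if the
    gesture is not defined for the current state."""
    return STATE_TRANSITIONS.get(state, {}).get(gesture)

print(next_motion("initial", "g1"))  # ('a1', 'state_1')
```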
In step S2 of FIG. 6, the motion visualization mechanism unit 2c associates the gesture number of each gesture with a motion visualization number.
For example, a device expert performs an input operation for filling in the table of FIG. 4 based on the state transition diagram as shown in FIG. 7, and the motion visualization mechanism unit 2c performs the association based on that input operation. Thereby, for example, if an input operation is performed so that the robot 9 in the initial state performs the motion of motion visualization number a1 when gesture g1 is recognized, gesture g1 is associated with motion visualization number a1. Similarly, if an input operation is performed so that the robot 9 in the initial state performs the motion of motion visualization number a11 when gesture g2 is recognized, gesture g2 is associated with motion visualization number a11. Through step S2, the association result as shown in FIG. 4 is registered in the gesture/motion visualization table 2f of FIG. 3.
In step S3 of FIG. 6, the motion visualization mechanism unit 2c associates each motion visualization number with an avatar animation and a movement location indicating a position of the robot 9. For example, a device expert may perform an input operation for filling in the table of FIG. 5 based on the state transition diagram as shown in FIG. 7, and the motion visualization mechanism unit 2c may perform the association based on that input operation. Alternatively, the motion visualization mechanism unit 2c may perform the association by analyzing the state transition diagram as shown in FIG. 7. Through step S3, the association result as shown in FIG. 5 is registered in the motion visualization/animation table 2h of FIG. 3.
In step S4 of FIG. 6, the device expert creates a position information management table in which each movement location in FIG. 5 is associated with the position coordinates of that movement location, and registers it in the robot control device.
FIG. 8 is a diagram showing an example of the position information management table. In this step, for example, the locations to which the robot 9 can move are scanned. Specifically, as many markers as there are movement-location records in FIG. 5 are set at the site, and when a camera (not shown) attached to the robot 9 detects the image of a marker, the position coordinates of the robot 9 are acquired. The acquired on-site position coordinates and the movement locations in FIG. 5 are then associated with each other and registered in the position information management table of FIG. 8. The position coordinates of the robot 9 here include, for example, the position coordinates of the tip of the hand connected to the robot 9, the position coordinates of the workpiece gripped by the hand connected to the robot 9, and the position coordinates of the joints of the robot 9.
 <Components related to work processing>
 FIG. 9 is a block diagram showing, among the components of the robot control device, the components that perform work processing. As shown in FIG. 9, work processing is performed by all of the components of FIG. 1. The components related to work processing are described below with reference to FIG. 9 and the other drawings.
 The human motion detection unit 2a detects, as needed (for example, periodically), the coordinates of a specific part of the human body on the basis of the human coordinates detected as needed (for example, periodically) by the gesture detection sensor 1.
 The gesture recognition unit 2b obtains the amount of movement of the worker's specific part on the basis of the coordinates of the specific part detected as needed by the human motion detection unit 2a, and recognizes a gesture on the basis of the obtained amount of movement of the specific part.
 On the basis of the gesture recognized by the gesture recognition unit 2b and the transition state of the robot 9, the motion visualization mechanism unit 2c identifies one motion visualization number from a state transition diagram such as that of FIG. 7 and a table such as that of FIG. 4. The motion visualization mechanism unit 2c then identifies one avatar animation and one movement location from a table such as that of FIG. 5 on the basis of the identified motion visualization number.
 Specifically, when the transition state of the robot 9 is the "initial state" of FIG. 7 and the gesture recognition unit 2b recognizes gesture g1, the motion visualization mechanism unit 2c identifies motion visualization number a1, which is associated with gesture g1, from the table of FIG. 4. The motion visualization mechanism unit 2c then identifies the avatar animation associated with motion visualization number a1 and movement location "1" from the table of FIG. 5. Thus, for example, an animation in which the avatar's hand moves from movement location "0" to movement location "1" is identified as the avatar animation associated with motion visualization number a1.
 On the other hand, when the transition state of the robot 9 is the "initial state" of FIG. 7 and the gesture recognition unit 2b recognizes gesture g5, the motion visualization mechanism unit 2c identifies motion visualization number a11, which is associated with gesture g5, from the table of FIG. 4. The motion visualization mechanism unit 2c then identifies the avatar animation associated with motion visualization number a11 and movement location "2" from the table of FIG. 5. Thus, for example, an animation in which the avatar's hand moves from movement location "0" to movement location "2" is identified as the avatar animation associated with motion visualization number a11.
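 Put together, the lookup performed by the motion visualization mechanism unit 2c during work processing amounts to two table accesses. The sketch below uses illustrative table contents; the dictionary layout and the animation file names are assumptions, not data from the publication.

```python
# Sketch of the two-stage lookup: (state, gesture) -> motion visualization number,
# then number -> (avatar animation, movement location). Contents are illustrative.

gesture_motion_table_2f = {("initial", "g1"): "a1", ("initial", "g5"): "a11"}   # FIG. 4
motion_animation_table_2h = {                                                   # FIG. 5
    "a1":  ("avatar_hand_0_to_1.anim", "1"),
    "a11": ("avatar_hand_0_to_2.anim", "2"),
}

def resolve(transition_state, gesture):
    number = gesture_motion_table_2f.get((transition_state, gesture))
    if number is None:
        return None                     # no motion associated with this state/gesture
    animation, movement_location = motion_animation_table_2h[number]
    return number, animation, movement_location

print(resolve("initial", "g1"))   # ('a1', 'avatar_hand_0_to_1.anim', '1')
print(resolve("initial", "g5"))   # ('a11', 'avatar_hand_0_to_2.anim', '2')
```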
 The work instruction conversion unit 2d identifies the position coordinates of the movement location from a table such as that of FIG. 8, which is registered in the position information management table 2i, on the basis of the one movement location identified by the motion visualization mechanism unit 2c. The work instruction conversion unit 2d then passes the identified position coordinates to the programmable logic controller 3.
 The programmable logic controller 3 includes a sensor input unit 3a and a program execution management unit 3b. The sensor input unit 3a acquires a device state. The device state is, for example, position information detected by an encoder or the like of the robot 9, information on the arm position of the robot 9, and information on the presence or absence of a workpiece. The program execution management unit 3b manages a program by which the robot controller 4 controls the motion of the robot 9.
 The programmable logic controller 3 configured in this way has a sensor input function for acquiring the device state and a program execution management function for the robot controller 4. When the programmable logic controller 3 receives position coordinates from the work instruction conversion unit 2d, it notifies the robot controller 4 of the position coordinates and of the start of program execution.
 The robot controller 4 includes a robot command generation unit 4a and a state acquisition unit 4b. The robot command generation unit 4a generates a motion command for the robot 9 from the position coordinates received from the programmable logic controller 3 (in effect, the position coordinates from the work support device 2), without requiring awareness of the physicality of the robot 9.
 For example, the robot command generation unit 4a calculates the amount of movement of a specific part of the robot 9 on the basis of the position coordinates from the programmable logic controller 3 and the current position coordinates of the robot 9 managed by the robot command generation unit 4a. The robot command generation unit 4a then calculates the amount of movement of each axis of the robot 9 on the basis of the calculated amount of movement, and passes a motion command including these axis movement amounts to the robot 9. As a result, the specific part of the robot 9 moves to the target position.
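 This computation could be organized as in the sketch below. The inverse-kinematics step is abstracted into a caller-supplied function, since the publication does not specify how the Cartesian displacement is converted into per-axis movement amounts; the function name is a hypothetical placeholder.

```python
# Simplified sketch of the robot command generation unit 4a. The inverse_kinematics
# callable is a hypothetical stand-in for the robot-specific conversion.

def generate_motion_command(target_xyz, current_xyz, inverse_kinematics):
    # Displacement of the specific part (e.g. the hand tip) in Cartesian space.
    displacement = [t - c for t, c in zip(target_xyz, current_xyz)]
    # Per-axis (joint) movement amounts derived from the Cartesian displacement.
    axis_movements = inverse_kinematics(displacement)
    return {"axis_movements": axis_movements}   # motion command passed to the robot 9
```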
 The state acquisition unit 4b acquires, as the robot state, camera images and other data obtained by sensors attached to the robot, such as a vision sensor and an image sensor, and notifies the robot command generation unit 4a of the robot state.
 When an object is to be gripped, the robot command generation unit 4a controls the robot 9 so that the robot 9 approaches the object and grips it, on the basis of the robot state such as the camera image notified from the state acquisition unit 4b. When an object is to be released, the robot command generation unit 4a controls the robot 9 so that the robot 9 moves to the target position and releases the hand, on the basis of the robot state such as the camera image notified from the state acquisition unit 4b.
 The programmable logic controller 3 passes the device state acquired by the sensor input unit 3a to the robot command generation unit 4a, and the robot command generation unit 4a performs feedback that corrects the movement amount of each axis in the robot controller 4 and the robot 9 on the basis of that device state. When the programmable logic controller 3 detects, as device states, the completion of movement of the robot arm and the completion of movement of the workpiece, it notifies the work support device 2 of a program state indicating, for example, that program execution has completed.
 When the motion visualization mechanism unit 2c receives, from the programmable logic controller 3, a program state indicating that program execution has completed, it passes the identified avatar animation to the motion display control unit 2e. The motion display control unit 2e causes the motion display device 5 to display the avatar animation.
 <Summary of Embodiment 1>
 According to the robot control device of the first embodiment described above, a motion instruction is determined on the basis of the worker's gesture, the motion of the robot 9 is controlled on the basis of the determined motion instruction, and an animation of a human moving on the basis of the determined motion instruction is displayed. With this configuration, the motion of the robot 9 is presented as a human animation. Therefore, even a worker who has no knowledge of the robot 9, such as its control commands, its movable range, and its physicality, can easily operate the robot 9 by watching the displayed human animation.
 In the first embodiment, the work support device 2 is connected to the programmable logic controller 3 via a network. This allows the worker to operate the robot 9 remotely.
 <Embodiment 2>
 FIG. 10 is a block diagram showing the configuration of a robot control device according to the second embodiment of the present invention. In the following, components of the second embodiment that are the same as or similar to the components described above are given the same reference numerals, and the description focuses mainly on the components that differ.
 The configuration of the robot control device of FIG. 10 is the same as the configuration of FIG. 9 with an output device 6 and a work state conversion unit 2j added. The work state conversion unit 2j is provided in the work support device 2.
 In the second embodiment, the state acquisition unit 4b of the robot controller 4 notifies the robot command generation unit 4a and the work support device 2 of the robot state. The work instruction conversion unit 2d generates a work state including the robot state from the state acquisition unit 4b and the program state from the programmable logic controller 3. The work instruction conversion unit 2d passes the generated work state to the motion visualization mechanism unit 2c, and the motion visualization mechanism unit 2c passes the work state to the work state conversion unit 2j.
 FIG. 11 is a diagram showing the motion visualization/animation table 2h according to the second embodiment. In the motion visualization/animation table 2h of FIG. 11, a sound number, a vibration number, a start sound, an end sound, a start vibration, and an end vibration are added to the motion visualization/animation table 2h of FIG. 5.
 The sound number is set to a file name identifying a sound file (labeled 1 and 2 in FIG. 11 for convenience). The vibration number is set to a vibration frequency (labeled 1 and 2 in FIG. 11 for convenience). The sound numbers and vibration numbers are set, for example, by the worker during the gesture association processing of the initial processing.
 When the worker is to be notified with a sound at the time the robot 9 starts the motion of a motion visualization number, the start sound flag of that motion visualization number in FIG. 11 is set to ON (in FIG. 11, a circle indicates ON). When the worker is to be notified with a sound at the time the robot 9 finishes the motion of a motion visualization number, the end sound flag of that motion visualization number in FIG. 11 is set to ON. When the worker is to be notified with no sound either at the start or at the end of the motion of a motion visualization number, neither the start sound nor the end sound of that motion visualization number in FIG. 11 is set to ON.
 Similarly, when the worker is to be notified with a vibration at the time the robot 9 starts the motion of a motion visualization number, the start vibration flag of that motion visualization number in FIG. 11 is set to ON. When the worker is to be notified with a vibration at the time the robot 9 finishes the motion of a motion visualization number, the end vibration flag of that motion visualization number in FIG. 11 is set to ON. When the worker is to be notified with no vibration either at the start or at the end of the motion of a motion visualization number, neither the start vibration nor the end vibration of that motion visualization number in FIG. 11 is set to ON.
 When the motion visualization mechanism unit 2c has identified one motion visualization number and a file name is set in the sound number corresponding to that motion visualization number, the motion visualization mechanism unit 2c passes the sound file identified by that file name to the work state conversion unit 2j. Similarly, when the motion visualization mechanism unit 2c has identified one motion visualization number and a vibration frequency is set in the vibration number corresponding to that motion visualization number, the motion visualization mechanism unit 2c passes that vibration frequency to the work state conversion unit 2j.
 The output device 6 includes a speaker 6a that notifies (outputs) sound to the worker and a vibration device 6b that notifies (outputs) vibration to the worker. The work state conversion unit 2j controls the output device 6 on the basis of the work state, the sound file, and the vibration frequency from the motion visualization mechanism unit 2c. One example is described below.
 When the work state including the program state from the programmable logic controller 3 indicates that program execution has completed and the end sound flag of the motion visualization number that was being executed is ON, the work state conversion unit 2j passes the sound file identified by the motion visualization mechanism unit 2c to the speaker 6a. Likewise, when the work state including the program state from the programmable logic controller 3 indicates that program execution has completed and the end vibration flag of the motion visualization number that was being executed is ON, the work state conversion unit 2j passes the vibration frequency identified by the motion visualization mechanism unit 2c to the vibration device 6b.
 The work state conversion unit 2j configured as described above causes the speaker 6a to output, when the robot 9 has completed a motion based on a particular motion visualization number (motion instruction), a sound associated in advance in the table with the completion of that motion. Likewise, the work state conversion unit 2j causes the vibration device 6b to output, when the robot 9 has completed a motion based on a particular motion visualization number (motion instruction), a vibration associated in advance in the table with the completion of that motion.
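 A compact way to express this completion-notification behaviour is sketched below. The field names (program_state, end_sound, end_vibration) and the speaker/vibration interfaces are assumptions used only for illustration.

```python
# Sketch of the work state conversion unit 2j reacting to completion of a motion.
# Field names and the speaker/vibrator interfaces are illustrative assumptions.

def on_work_state(work_state, table_entry, sound_file, vibration_frequency, speaker, vibrator):
    # table_entry is the FIG. 11 row for the motion visualization number just executed.
    if work_state.get("program_state") == "execution_completed":
        if table_entry.get("end_sound"):        # end sound flag is ON
            speaker.play(sound_file)
        if table_entry.get("end_vibration"):    # end vibration flag is ON
            vibrator.vibrate(vibration_frequency)
```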
 In the second embodiment, when the gesture recognition unit 2b recognizes a gesture, the motion visualization mechanism unit 2c likewise identifies one motion visualization number from a state transition diagram such as that of FIG. 7 and an association such as that of FIG. 4.
 When a file name is set in the sound number corresponding to that motion visualization number, the motion visualization mechanism unit 2c identifies one avatar animation and one movement location from the table of FIG. 11, and also identifies the sound file with that file name and passes it to the work state conversion unit 2j. When the start sound flag of that motion visualization number is ON, the work state conversion unit 2j passes the sound file identified by the motion visualization mechanism unit 2c to the speaker 6a.
 When a vibration frequency is set in the vibration number corresponding to that motion visualization number, the motion visualization mechanism unit 2c identifies one avatar animation and one movement location from the table of FIG. 11, and also identifies that vibration frequency and passes it to the work state conversion unit 2j. When the start vibration flag of that motion visualization number is ON, the work state conversion unit 2j passes the vibration frequency identified by the motion visualization mechanism unit 2c to the vibration device 6b.
 The work state conversion unit 2j configured as described above causes the speaker 6a to output, when the robot 9 starts a motion based on a particular motion visualization number (motion instruction), a sound associated in advance in the table with the start of that motion. Likewise, the work state conversion unit 2j causes the vibration device 6b to output, when the robot 9 starts a motion based on a particular motion visualization number (motion instruction), a vibration associated in advance in the table with the start of that motion.
 <Summary of Embodiment 2>
 According to the robot control device of the second embodiment described above, at least one of a sound and a vibration is output when the robot 9 has completed a particular motion, and at least one of a sound and a vibration is output when the robot 9 starts a particular motion. With this configuration, the worker can recognize the start and the completion of a particular motion by hearing, by touch, or both.
 <Embodiment 3>
 FIG. 12 is a block diagram showing the configuration of a robot control device according to the third embodiment of the present invention. In the following, components described in the third embodiment that are the same as or similar to the components described above are given the same reference numerals, and the description focuses mainly on the components that differ.
 In the third embodiment, the gesture recognition unit 2b recognizes not only the normal gesture, which is a specific first gesture used to determine a motion visualization number (motion instruction), but also the switching gesture, which is a specific second gesture different from the first gesture, and the manual gesture, which is a third gesture indicating movement of a specific part of the worker in a space of predetermined coordinates. In the following description, the predetermined coordinates are orthogonal coordinates defined by the x-, y-, and z-axes, and the specific part of the worker is a human hand.
 The work support device 2 defines a normal mode, which is a first mode, and a manual mode, which is a second mode. In the normal mode, the work support device 2 determines the motion instruction on the basis of the normal gesture, as in the first and second embodiments. In the manual mode, the work support device 2 determines, as the motion instruction, a movement amount by which the robot 9 is to move on the basis of the hand movement of the manual gesture.
 FIG. 13 is a diagram showing the gain table 2k according to the third embodiment. gxa, gya, and gza are proportional gains for the x-, y-, and z-axes, respectively. gxb, gyb, and gzb are integral gains for the x-, y-, and z-axes, respectively. In the manual mode, the gesture recognition unit 2b obtains the movement amount of the robot 9 using the following equation (1), on the basis of the movement indicated by the manual gesture, the preset proportional gains, and the preset integral gains. In equation (1), the previous x coordinate and the current x coordinate of the body part are coordinates representing the x-axis movement indicated by the manual gesture, and the hand-tip coordinate x is the position coordinate corresponding to the movement amount of the robot 9. The y and z coordinates are handled in the same way as the x coordinate.
 (Equation (1): the hand-tip coordinate of each axis is computed from the previous and current body-part coordinates of the manual gesture using the corresponding proportional and integral gains of FIG. 13; the equation is published as an image in the original document.)
 The gesture recognition unit 2b notifies the work instruction conversion unit 2d of the obtained movement amount. The work instruction conversion unit 2d notifies the programmable logic controller 3 of the notified movement amount. The programmable logic controller 3 notifies the robot controller 4 of the notified movement amount. When the work support device 2 is in the manual mode, the robot controller 4 performs control that moves the robot 9 on the basis of the notified movement amount.
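 Equation (1) is published only as an image, so its exact form cannot be reproduced here. The sketch below shows one plausible reading for the x axis, in which the hand-tip coordinate is updated from the latest hand displacement (proportional term) and the accumulated displacement (integral term); treat it as an assumption, not the published formula.

```python
# One plausible reading of equation (1), x axis only; the published equation is an
# image and may differ. gxa and gxb are the x-axis proportional and integral gains.

def manual_move_x(hand_x, body_x_prev, body_x_now, accumulated_dx, gxa, gxb):
    dx = body_x_now - body_x_prev              # hand movement of the manual gesture (x axis)
    accumulated_dx += dx                       # running sum acting as the integral term
    hand_x += gxa * dx + gxb * accumulated_dx  # updated hand-tip coordinate x
    return hand_x, accumulated_dx
```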
 Next, switching between the normal mode and the manual mode is described. FIG. 14 is a diagram showing the motion visualization/animation table 2h according to the third embodiment. In the motion visualization/animation table 2h of FIG. 14, "manual mode" is set as the movement location of motion visualization number "14", and "normal mode" is set as the movement location of motion visualization number "16". In the example of FIG. 14, the gestures corresponding to motion visualization numbers "1" to "13", "15", and "17" to "19" correspond to normal gestures, and the gestures corresponding to motion visualization numbers "14" and "16" correspond to switching gestures.
 When the motion visualization mechanism unit 2c identifies motion visualization number "14" on the basis of the gesture recognized by the gesture recognition unit 2b and the transition state of the robot 9, it notifies the gesture recognition unit 2b of "manual mode". On the other hand, when the motion visualization mechanism unit 2c identifies motion visualization number "16" on the basis of the gesture recognized by the gesture recognition unit 2b and the transition state of the robot 9, it notifies the gesture recognition unit 2b of "normal mode". In this way, when the work support device 2 recognizes a switching gesture, it switches from the normal mode to the manual mode, or from the manual mode to the normal mode.
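 The mode switch itself reduces to a small mapping from the identified motion visualization number to the notified mode, with any other number leaving the mode unchanged. The dictionary below is an illustrative rendering of FIG. 14, not a fragment of the actual implementation.

```python
# Sketch of the normal/manual mode switch driven by motion visualization numbers.
MODE_BY_NUMBER = {"14": "manual_mode", "16": "normal_mode"}   # from FIG. 14

def update_mode(current_mode, motion_visualization_number):
    # Switching gestures resolve to "14" or "16"; other numbers keep the current mode.
    return MODE_BY_NUMBER.get(motion_visualization_number, current_mode)
```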
 <Summary of Embodiment 3>
 According to the robot control device of the third embodiment described above, the work support device 2 performs the manual mode instead of the normal mode when it recognizes the switching gesture. With this configuration, the worker can switch, by performing the switching gesture, to the manual mode, which is suited to fine work.
 In the third embodiment, the movement amount of the robot 9 is obtained on the basis of the movement indicated by the manual gesture and the various gains, so the working movement amount of the robot 9 can be made appropriate.
 In the present invention, the embodiments may be freely combined, and each embodiment may be modified or omitted as appropriate, within the scope of the invention.
 Although the present invention has been described in detail, the above description is illustrative in all aspects, and the present invention is not limited thereto. It is understood that countless variations not illustrated here can be envisaged without departing from the scope of the present invention.
 2 work support device, 4 robot controller, 5 motion display device, 6a speaker, 6b vibration device, 9 robot.

Claims (9)

  1.  A robot control device for controlling a robot, comprising:
     a work support device that recognizes a gesture of a worker on the basis of a result of detecting a motion of the worker with a sensor, and determines, on the basis of the gesture, a motion instruction for the robot to perform a motion;
     a robot controller that determines, on the basis of the motion instruction determined by the work support device, information for controlling the motion of the robot, and outputs the information to a servo motor; and
     a motion display device that displays, as the motion of the robot, an animation of a human moving on the basis of the motion instruction determined by the work support device.
  2.  The robot control device according to claim 1, wherein
     the work support device is connected to the robot controller via a network.
  3.  The robot control device according to claim 1 or 2, wherein,
     when the robot has completed a motion based on a particular one of the motion instructions, the robot control device causes a speaker to output a sound associated in advance in a table with the completion of the motion.
  4.  The robot control device according to claim 1 or 2, wherein,
     when the robot starts a motion based on a particular one of the motion instructions, the robot control device causes a speaker to output a sound associated in advance in a table with the start of the motion.
  5.  The robot control device according to claim 1 or 2, wherein,
     when the robot has completed a motion based on a particular one of the motion instructions, the robot control device causes a vibration device to output a vibration associated in advance in a table with the completion of the motion.
  6.  The robot control device according to claim 1 or 2, wherein,
     when the robot starts a motion based on a particular one of the motion instructions, the robot control device causes a vibration device to output a vibration associated in advance in a table with the start of the motion.
  7.  The robot control device according to any one of claims 1 to 6, wherein
     the work support device recognizes not only a specific first gesture, which is the gesture for determining the motion instruction, but also a specific second gesture different from the first gesture and a third gesture indicating, from the motion of the worker, movement of a specific part of the worker, and
     when the work support device recognizes the second gesture, the work support device performs, instead of a first mode in which the motion instruction is determined on the basis of the first gesture, a second mode in which a movement amount for the robot to move on the basis of the movement of the third gesture is determined as the motion instruction.
  8.  The robot control device according to claim 7, wherein
     the work support device obtains the movement amount of the robot on the basis of the movement indicated by the third gesture and a preset proportional gain.
  9.  The robot control device according to claim 7, wherein
     the movement amount of the robot is obtained on the basis of the movement indicated by the third gesture and a preset integral gain.
PCT/JP2018/011704 2018-03-23 2018-03-23 Robot control device WO2019180916A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2018/011704 WO2019180916A1 (en) 2018-03-23 2018-03-23 Robot control device
JP2019510378A JP6625266B1 (en) 2018-03-23 2018-03-23 Robot controller

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/011704 WO2019180916A1 (en) 2018-03-23 2018-03-23 Robot control device

Publications (1)

Publication Number Publication Date
WO2019180916A1 true WO2019180916A1 (en) 2019-09-26

Family

ID=67987033

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/011704 WO2019180916A1 (en) 2018-03-23 2018-03-23 Robot control device

Country Status (2)

Country Link
JP (1) JP6625266B1 (en)
WO (1) WO2019180916A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE112021004343T5 (en) 2020-08-20 2023-05-25 Fanuc Corporation ROBOT CONTROL SYSTEM
JP7365991B2 (en) 2020-10-26 2023-10-20 三菱電機株式会社 remote control system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116847957A (en) 2021-06-28 2023-10-03 Samsung Electronics Co., Ltd. Robot and method for operating a robot

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0639754A (en) * 1992-07-27 1994-02-15 Nippon Telegr & Teleph Corp <Ntt> Robot hand control device
JP4463120B2 (en) * 2005-01-17 2010-05-12 独立行政法人理化学研究所 Imitation robot system and its imitation control method
JP2011110620A (en) * 2009-11-24 2011-06-09 Toyota Industries Corp Method of controlling action of robot, and robot system
JP2014104527A (en) * 2012-11-27 2014-06-09 Seiko Epson Corp Robot system, program, production system, and robot

Also Published As

Publication number Publication date
JPWO2019180916A1 (en) 2020-04-30
JP6625266B1 (en) 2019-12-25

Similar Documents

Publication Publication Date Title
JP5512048B2 (en) ROBOT ARM CONTROL DEVICE AND CONTROL METHOD, ROBOT, CONTROL PROGRAM, AND INTEGRATED ELECTRONIC CIRCUIT
JP3529373B2 (en) Work machine simulation equipment
US9387589B2 (en) Visual debugging of robotic tasks
KR100762380B1 (en) Motion control apparatus for teaching robot position, robot-position teaching apparatus, motion control method for teaching robot position, robot-position teaching method, and motion control program for teaching robot-position
KR102042115B1 (en) Method for generating robot operation program, and device for generating robot operation program
WO2019180916A1 (en) Robot control device
JP6445092B2 (en) Robot system displaying information for teaching robots
JP6863927B2 (en) Robot simulation device
JP6598191B2 (en) Image display system and image display method
JP2018167334A (en) Teaching device and teaching method
JP7049069B2 (en) Robot system and control method of robot system
KR20170016436A (en) Teaching data-generating device and teaching data-generating method for work robot
JP7117237B2 (en) ROBOT CONTROL DEVICE, ROBOT SYSTEM AND ROBOT CONTROL METHOD
JP2014065100A (en) Robot system and method for teaching robot
Miądlicki et al. Real-time gesture control of a CNC machine tool with the use Microsoft Kinect sensor
JP2018015863A (en) Robot system, teaching data generation system, and teaching data generation method
JPH06250730A (en) Teaching device for industrial robot
JP2023024890A (en) Control device capable of receiving direct teaching operation, teaching device and computer program of control device
TW202218836A (en) Robot control device, and robot system
JP2022163836A (en) Method for displaying robot image, computer program, and method for displaying robot image
JP2009196040A (en) Robot system
US20220226982A1 (en) Method Of Creating Control Program For Robot, System Executing Processing Of Creating Control Program For Robot, And Non-Transitory Computer-Readable Storage Medium
WO2023073959A1 (en) Work assistance device and work assistance method
JPH1177568A (en) Teaching assisting method and device
US20240066694A1 (en) Robot control system, robot control method, and robot control program

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019510378

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18911170

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18911170

Country of ref document: EP

Kind code of ref document: A1