WO2019092792A1 - Display control device, display control method, and display control program - Google Patents

Display control device, display control method, and display control program Download PDF

Info

Publication number
WO2019092792A1
Authority
WO
WIPO (PCT)
Prior art keywords
straight line
dimensional data
robot
information
display control
Prior art date
Application number
PCT/JP2017/040132
Other languages
French (fr)
Japanese (ja)
Inventor
Takanori Miyake
Hideto Iwamoto
Shinichi Kato
Original Assignee
Mitsubishi Electric Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to PCT/JP2017/040132 priority Critical patent/WO2019092792A1/en
Priority to JP2018526967A priority patent/JP6385627B1/en
Priority to TW107116580A priority patent/TW201918807A/en
Publication of WO2019092792A1 publication Critical patent/WO2019092792A1/en

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05B — CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 — Programme-control systems
    • G05B19/02 — Programme-control systems electric
    • G05B19/42 — Recording and playback systems, i.e. in which the programme is recorded from a cycle of operations, e.g. the cycle of operations being manually controlled, after which this record is played back on the same machine

Definitions

  • The present invention relates to a technique for displaying information for supporting the operation of a device.
  • Robot simulators have been proposed that simulate and display the operation of a robot based on teaching data for causing a device (hereinafter described taking a robot as an example) to perform a prescribed operation.
  • With such a simulator, the operator can create teaching data while verifying that the robot does not interfere with surrounding structures, without actually operating the robot.
  • The operator creates teaching data by designating points or a trajectory in three-dimensional space through which the tip of the robot arm passes.
  • Teaching data can be created by moving the robot arm through numerical input, by moving the robot arm indirectly using arrow buttons, a joystick, or the like, or by the operator directly grabbing and moving the robot arm. However, when the robot arm is moved by numerical input, the input must be repeated until the teaching data produces the result the operator intends, which takes time.
  • The device disclosed in Patent Document 1 visualizes computer-generated information superimposed on an image of the real environment captured by an image receiving device and shown on a vision device.
  • In that device, the position and orientation (pose) of the image receiving device is determined, and robot-specific information corresponding to that determination, for example a display of a coordinate system unique to at least one robot, is displayed superimposed on the image of the real environment on the vision device.
  • The present invention has been made to solve the above problems, and its object is to present information that allows the operator to visually grasp the relationship between the current position of the device to be operated and the target position to which the device is to be moved.
  • A display control device according to the present invention includes an information processing unit that, based on three-dimensional data of a drive target device and three-dimensional data of a structure, generates three-dimensional data of an aiming coordinate axis including a first straight line passing through a control point set on the drive target device in a virtual space defined by the three-dimensional data of the structure, and an output control unit that generates control information for displaying the aiming coordinate axis on an output device based on the three-dimensional data of the drive target device, the three-dimensional data of the structure, and the three-dimensional data of the aiming coordinate axis generated by the information processing unit.
  • According to the present invention, it is possible to present information that allows the operator to visually grasp the relationship between the current position of the device to be operated and the target position to which the device to be operated is moved.
  • FIG. 1 is a diagram showing the configuration of a teaching system provided with the display control device according to Embodiment 1.
  • FIG. 2 is a diagram showing a configuration example of a robot applied to the teaching system.
  • FIG. 3 is a diagram showing examples of attachment of a tool to the flange portion of the robot.
  • FIG. 4 is a block diagram showing the configuration of the display control device according to Embodiment 1.
  • FIGS. 5A and 5B are diagrams showing examples of the hardware configuration of the display control device according to Embodiment 1.
  • FIGS. 6A, 6B, and 6C are diagrams showing setting examples of the aiming coordinate axes by the display control device according to Embodiment 1.
  • FIGS. 7A, 7B, and 7C are diagrams showing display examples of the aiming coordinate axes by the display control device according to Embodiment 1.
  • FIG. 8 is a flowchart showing the operation of the display control device according to Embodiment 1.
  • FIG. 9 is a flowchart showing the operation of the information processing unit of the display control device according to Embodiment 1.
  • FIG. 10 is a flowchart showing the operation of the information processing unit of the display control device according to Embodiment 1 when operation information is input.
  • FIGS. 11A and 11B are diagrams showing other display examples of the aiming coordinate axes by the display control device according to Embodiment 1.
  • FIG. 12 is a block diagram showing the configuration of the display control device according to Embodiment 2.
  • FIGS. 13A and 13B are diagrams showing the drivable area of the robot and the aiming coordinate axes in the display control device according to Embodiment 2.
  • FIG. 14 is a diagram showing the drivable area of the robot and the aiming coordinate axes in the display control device according to Embodiment 2.
  • FIG. 15 is a flowchart showing the operation of the information processing unit of the display control device according to Embodiment 2.
  • FIG. 16 is a block diagram showing the configuration of the display control device according to Embodiment 3.
  • FIGS. 17A, 17B, and 17C are diagrams showing the types of drive trajectories generated by the trajectory generation unit of the display control device according to Embodiment 3.
  • FIG. 18 is a flowchart showing the operation of the trajectory generation unit of the display control device according to Embodiment 3.
  • FIG. 19 is a flowchart showing the operation of the reproduction processing unit of the display control device according to Embodiment 3.
  • FIG. 20 is a diagram showing a display example of an interference point by the display control device according to Embodiment 3.
  • FIG. 21 is a diagram showing a display example when control information of the display control devices shown in Embodiments 1 to 3 is output to an output device in an augmented reality space.
  • FIG. 1 is a diagram showing the configuration of a teaching system provided with a display control device 1 according to the first embodiment.
  • FIG. 1 shows an example in which the display control device 1 is applied to a teaching system for creating teaching data of a robot (device to be driven).
  • the display control device 1 is not limited to the application to a teaching system of a robot, and can be applied to various systems for operating equipment.
  • the display control device 1 is used for the purpose of superposing and displaying the generated teaching data on three-dimensional data or augmented reality (AR) content displayed on the display means.
  • The display control device 1 performs control to display the teaching data superimposed on three-dimensional data displayed on a personal computer, a tablet, a smartphone, a dedicated teaching device, or the like, or on augmented reality content displayed on augmented reality glasses.
  • The augmented reality glasses are a head mounted display (HMD) worn on the user's head that displays an image in front of the user's eyes.
  • the head mounted display may be non-transmissive or transmissive.
  • The display control device 1 is implemented as, for example, a processing circuit or a chip in the personal computer, tablet, smartphone, dedicated teaching device, or augmented reality glasses described above.
  • the teaching system includes a display control device 1, a robot 2, a peripheral information acquisition device 3, a teaching device 4 and a robot control device 7.
  • the teaching device 4 includes an output device 5 and an input device 6.
  • the robot 2 is a device that operates in accordance with a control program and performs a target operation.
  • the control program is information for instructing the operation of the robot 2.
  • the information instructing the operation of the robot 2 includes information such as a passing point, a trajectory, a velocity, and the number of times of operation when the robot 2 is driven.
  • An operation in which the operator inputs or corrects the control program is called robot teaching.
  • data obtained by robot teaching is called teaching data.
  • the peripheral information acquisition device 3 acquires three-dimensional data of the robot 2 and three-dimensional data of the surrounding environment of the robot 2 and outputs the three-dimensional data to the display control device 1.
  • the three-dimensional data is data in which the outline, position and movement of the robot 2 and the surrounding environment are numerically indicated.
  • The peripheral information acquisition device 3 comprises a camera, a three-dimensional scanner, a server, or the like. For example, when the peripheral information acquisition device 3 comprises a camera or a three-dimensional scanner, it photographs or scans the robot 2 and the surrounding environment in the real environment to acquire their three-dimensional data.
  • When the peripheral information acquisition device 3 comprises, for example, a server, it acquires three-dimensional data of the robot 2 and the surrounding environment stored in advance in a storage device or the like and outputs the data to the display control device 1.
  • The surrounding environment comprises the floor, walls, pillars, stands, related devices, wiring, workpieces, and the like (hereinafter referred to as structures) in the space where the robot 2 is installed.
  • A workpiece refers to a work target of the robot 2.
  • When the robot 2 is, for example, an apparatus for assembly or transport, the workpiece is an object to be gripped by a gripping tool attached to the robot 2.
  • When the robot 2 is, for example, a deburring device, the workpiece is an object pressed against the deburring tool attached to the robot 2. The details of the tool will be described later.
  • the display control device 1 acquires three-dimensional data of the robot 2 and three-dimensional data of a structure from the peripheral information acquisition device 3.
  • a space virtually constructed in the information processing in the display control device 1 and defined by three-dimensional data of a structure is referred to as a virtual space.
  • the display control device 1 performs control for causing the output device 5 to display a structure that defines a virtual space based on three-dimensional data of the structure.
  • the display control device 1 performs control for causing the output device 5 to display an image indicating the robot 2 disposed in the virtual space and operation support information.
  • the display control device 1 outputs control information to the teaching device 4.
  • The output device 5 of the teaching device 4 comprises, for example, the display of a personal computer or tablet terminal, or a head mounted display.
  • the output device 5 displays the virtual space, the robot 2, the operation support information, and the like based on the control information input from the display control device 1. Further, the output device 5 may display a control program of the robot 2 based on a command from the display control device 1.
  • the input device 6 of the teaching device 4 includes, for example, a mouse, a touch panel, a dedicated teaching terminal, or an input unit of a head mounted display, and receives an operation input to information displayed by the output device 5.
  • the input device 6 outputs the operation information corresponding to the received operation input to the display control device 1.
  • the display control device 1 generates control information according to the state of the robot 2 after the operation based on the operation information input from the input device 6. Further, when the control program is displayed on the output device 5, the input device 6 receives an operation input such as correction of the displayed control program.
  • the display control device 1 corrects the control program of the robot 2 based on the operation information input from the input device 6.
  • When editing is complete, the display control device 1 finalizes the control program of the robot 2.
  • The display control device 1 outputs the finalized control program to the robot control device 7.
  • the robot control device 7 saves the control program input from the display control device 1, converts the saved control program into a drive signal of the robot 2, and transmits it to the robot 2.
  • the robot 2 is driven based on the drive signal received from the robot control device 7 and performs a predetermined operation or the like.
  • FIG. 2 is a view showing a configuration example of the robot 2 applied to the teaching system.
  • The robot 2 illustrated in FIG. 2 is a so-called vertical articulated robot having six degrees of freedom, and includes a mounting portion 2a, a first arm 2b, a second arm 2c, a third arm 2d, a fourth arm 2e, a fifth arm 2f, and a flange portion 2g.
  • the mounting portion 2a is fixed to the floor.
  • the first arm 2b pivots with respect to the mounting portion 2a with the axis J1 as a rotation axis.
  • the second arm 2c rotates with respect to the first arm 2b with the axis line J2 as a rotation axis.
  • the third arm 2d rotates with respect to the second arm 2c with the axis line J3 as a rotation axis.
  • the fourth arm 2e rotates with respect to the third arm 2d with the axis line J4 as a rotation axis.
  • the fifth arm 2f rotates with respect to the fourth arm 2e with the axis line J5 as a rotation axis.
  • the flange portion 2g rotates with respect to the fifth arm 2f with the axis line J6 as a rotation axis.
  • the flange portion 2g has a mechanism for attaching and fixing various tools at the end opposite to the connection side with the fifth arm 2f.
  • the tool is selected according to the work content of the robot 2.
  • Examples of the tool include an instrument for gripping a workpiece and a tool such as a grinder for polishing a workpiece.
  • FIG. 3 is a view showing examples of attachment of the tool 8 to the flange portion 2g of the robot 2.
  • FIG. 3A is a view showing a case where an instrument 8a for gripping a workpiece is attached to the flange portion 2g as the tool 8.
  • FIG. 3B is a view showing a case where a grinder 8b is attached to the flange portion 2g as the tool 8.
  • Point A shown in FIG. 3A and FIG. 3B indicates a control point set according to the tool 8 (hereinafter referred to as a control point A).
  • the control point A indicates which position on the displayed robot 2 or in the vicinity of the robot 2 should be operated when the operator moves the robot 2 in the virtual space displayed on the output device 5.
  • the control point A is displayed in, for example, a circular or rectangular shape in the virtual space displayed on the output device 5.
  • When the tool 8 is the instrument 8a, the control point A is set to the point at which the instrument 8a grips the workpiece.
  • When the tool 8 is the grinder 8b, the control point A is set to the grinding point of the grinder 8b.
  • In general, the control point A is set to the machining point or machining area of the tool.
  • FIG. 4 is a block diagram showing the configuration of the display control device 1 according to the first embodiment.
  • the display control device 1 includes a three-dimensional data acquisition unit 101, an operation information acquisition unit 102, an information processing unit 103, and an output control unit 104. Further, the display control device 1 shown in FIG. 4 is connected to the peripheral information acquisition device 3, the input device 6 and the output device 5.
  • the three-dimensional data acquisition unit 101 acquires three-dimensional data of the robot 2 and three-dimensional data of a structure from the peripheral information acquisition device 3.
  • the three-dimensional data acquisition unit 101 outputs the acquired three-dimensional data to the information processing unit 103.
  • the operation information acquisition unit 102 acquires operation information of the operator via the input device 6.
  • the operation information acquisition unit 102 outputs the acquired operation information to the information processing unit 103.
  • the information processing unit 103 acquires the position coordinates of the control point A of the robot 2 in the virtual space from the three-dimensional data input from the three-dimensional data acquisition unit 101.
  • the information processing unit 103 calculates the position and direction of the aiming coordinate axis to be set in the virtual space from the acquired position coordinates of the control point A and the three-dimensional data of the structure, and generates three-dimensional data of the aiming coordinate axis.
  • the aiming coordinate axes are coordinate axes constituted by three axes of a straight line X (second straight line), a straight line Y (third straight line) and a straight line Z (first straight line).
  • the aiming coordinate axis is operation support information that the operator refers to when creating teaching data of the robot 2. The details of the aiming coordinate axis will be described later.
  • When operation information is input from the operation information acquisition unit 102, the information processing unit 103 drives the robot 2 in the virtual space so as to follow the operator's operation input, and generates three-dimensional data of the driven robot 2 and of the aiming coordinate axes. It is assumed that the information processing unit 103 acquires in advance, by an appropriate method, the specification data of the robot 2 necessary for calculating the movement of the robot 2 in the virtual space. The information processing unit 103 outputs the three-dimensional data of the structure, of the robot 2, and of the aiming coordinate axes to the output control unit 104.
  • The output control unit 104 generates control information for displaying the structure, the robot 2, and the aiming coordinate axes based on the three-dimensional data of the structure, of the robot 2, and of the aiming coordinate axes acquired from the information processing unit 103.
  • The control information is information for causing the output device 5 to display an image of the structure, the robot 2, and the aiming coordinate axes in the virtual space as viewed from a specific position in the virtual space.
  • the specific position in the virtual space is, for example, a position in the virtual space corresponding to the position of the viewpoint of the user in the real environment.
  • the output control unit 104 outputs the generated control information to the output device 5.
  • FIGS. 5A and 5B are diagrams showing examples of the hardware configuration of the display control device 1 according to the first embodiment.
  • Each function of the three-dimensional data acquisition unit 101, the operation information acquisition unit 102, the information processing unit 103, and the output control unit 104 in the display control device 1 is realized by a processing circuit. That is, the three-dimensional data acquisition unit 101, the operation information acquisition unit 102, the information processing unit 103, and the output control unit 104 include processing circuits for realizing the respective functions.
  • The processing circuit may be the processing circuit 1a, which is dedicated hardware as shown in FIG. 5A, or the processor 1b, which executes a program stored in the memory 1c as shown in FIG. 5B.
  • When the processing circuit is dedicated hardware, the processing circuit 1a corresponds to, for example, a single circuit, a composite circuit, an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination thereof.
  • The functions of the three-dimensional data acquisition unit 101, the operation information acquisition unit 102, the information processing unit 103, and the output control unit 104 may each be realized by a separate processing circuit, or may be realized collectively by a single processing circuit.
  • When the processing circuit is the processor 1b, the functions of each unit are realized by software, firmware, or a combination of software and firmware. The software or firmware is written as a program and stored in the memory 1c.
  • The processor 1b implements the functions of the three-dimensional data acquisition unit 101, the operation information acquisition unit 102, the information processing unit 103, and the output control unit 104 by reading and executing the programs stored in the memory 1c. That is, the display control device 1 includes the memory 1c for storing programs which, when executed by the processor 1b, result in the processes shown in FIGS. 8 to 10 being performed.
  • These programs cause a computer to execute the procedures or methods of the three-dimensional data acquisition unit 101, the operation information acquisition unit 102, the information processing unit 103, and the output control unit 104.
  • The processor 1b is, for example, a central processing unit (CPU), a processing device, an arithmetic device, a processor, a microprocessor, a microcomputer, or a digital signal processor (DSP).
  • The memory 1c may be, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable ROM), or an EEPROM (Electrically EPROM); a magnetic disk such as a hard disk or a flexible disk; or an optical disk such as a mini disk, a CD (Compact Disc), or a DVD (Digital Versatile Disc).
  • Note that some of the functions of the three-dimensional data acquisition unit 101, the operation information acquisition unit 102, the information processing unit 103, and the output control unit 104 may be realized by dedicated hardware, and others by software or firmware. Thus, the processing circuit in the display control device 1 can realize each of the functions described above by hardware, software, firmware, or a combination thereof.
  • FIGS. 6A, 6B, and 6C are diagrams showing setting examples of the aiming coordinate axes by the display control device 1 according to the first embodiment. Specifically, FIGS. 6A to 6C show examples in which the display control device 1 performs control to display the robot 2, the tool 8, and the installation surface B of the robot 2, and further performs control to display the control point A and the aiming coordinate axes.
  • the aiming coordinate axes are coordinate axes constituted by a straight line Z passing the control point A and a straight line X and a straight line Y orthogonal to each other at the aiming point C on the straight line Z.
  • The aiming point C is the point at which the straight line Z intersects the surface of a structure in the virtual space. Therefore, when there is no point at which the straight line Z intersects a structure, the information processing unit 103 determines that the aiming point C does not exist and generates three-dimensional data of the straight line Z only.
  • The straight line X and the straight line Y in the present invention include straight lines in the general sense (hereinafter simply referred to as straight lines) and lines that run along the surface shape of the structure containing the aiming point C and that appear straight and mutually orthogonal when viewed from a specific direction. That is, when the straight line X and the straight line Y are lines along the surface shape of the structure, their three-dimensional shapes are not necessarily straight, because they depend on the surface shape of the structure (see the sketch below).
  • Likewise, when the straight line X and the straight line Y are lines along the surface shape of the structure, the angle at which they intersect in the three-dimensional virtual space is not necessarily a right angle, because it also depends on the surface shape. Note that the specific direction is, for example, the z-coordinate direction of the global coordinates or the local coordinates described later.
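  • As an illustration of how a line "along the surface shape" can be represented, the following is a minimal Python sketch assuming the structure surface is given as a heightfield z = f(x, y); the function names and the heightfield model are assumptions of this sketch, not part of the disclosure:

```python
import numpy as np

def surface_following_axis(aim_point, direction_xy, heightfield,
                           half_length=0.2, n=41):
    """Sample the straight line X (or Y) through the aiming point C so that it
    follows the structure surface: straight when viewed from the straight
    line Z direction, but bending with the surface in 3D."""
    cx, cy = aim_point[0], aim_point[1]
    d = np.asarray(direction_xy, dtype=float)
    d /= np.linalg.norm(d)
    ts = np.linspace(-half_length, half_length, n)
    return np.array([[cx + t * d[0], cy + t * d[1],
                      heightfield(cx + t * d[0], cy + t * d[1])] for t in ts])

# Example: a workpiece modeled as a bump on the installation surface B (z = 0).
bump = lambda x, y: max(0.0, 0.05 - (x**2 + y**2))
line_x = surface_following_axis((0.1, 0.0, bump(0.1, 0.0)), (1.0, 0.0), bump)
```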
  • the direction of the straight line Z is set according to the tool 8 attached to the flange portion 2g.
  • If the tool 8 is, for example, the instrument 8a for gripping a workpiece shown in FIG. 3A, then as shown in FIGS. 6A and 6B, the straight line Z is the straight line that is orthogonal to the opening and closing direction of the gripping portion of the instrument 8a indicated by the arrow 8a1, parallel to the direction in which the gripping portion extends (for example, the direction of the axis J6 in FIG. 2), and that passes through the control point A.
  • If the tool 8 is, for example, the grinder 8b shown in FIG. 3B, then as shown in FIG. 6C, the straight line Z is the straight line in the same direction as the direction in which the grinder 8b is pressed against the workpiece or the like, indicated by the arrow 8b1, and that passes through the control point A.
  • In this manner, the direction of the straight line Z is set appropriately based on the kind of work that the tool 8 attached to the flange portion 2g performs on the workpiece.
  • the directions of the straight line X and the straight line Y when viewed from the direction of the straight line Z are set based on global coordinates or local coordinates.
  • The global coordinates are coordinates set with respect to the three-dimensional space itself, for example coordinates fixed to the main body of the robot 2 or to any structure fixed in the three-dimensional space.
  • The local coordinates are coordinates based on an object movable in the three-dimensional space, for example coordinates set based on the tool 8 attached to the flange portion 2g of the robot 2.
  • FIG. 6A shows aiming coordinate axes whose straight line X and straight line Y have directions set based on the global coordinates, while FIGS. 6B and 6C show aiming coordinate axes whose straight line X and straight line Y have directions set based on the local coordinates.
  • FIG. 6A shows an example in which the directions of the straight line X and the straight line Y of the aiming coordinate axes are set based on the global coordinates with the x axis, y axis and z axis of the robot 2 as global coordinates.
  • the z axis of the global coordinates is a coordinate axis at a position and direction overlapping with the axis line J1 of the robot 2, and the intersection point of the z axis and the installation surface B of the robot 2 is an origin O.
  • the y-axis of the global coordinates is a coordinate axis passing through the origin O, parallel to the installation surface B of the robot 2 and parallel to the front direction of the robot 2 determined in advance.
  • the x-axis of the global coordinates is a coordinate axis orthogonal to the z-axis and the y-axis at the origin O.
  • the straight line X and the straight line Y of the aiming coordinate axes are set to coincide with the axial directions of the x axis and the y axis of the global coordinate, respectively.
  • FIG. 6B shows an example in which the x-axis, y-axis, and z-axis set on the instrument 8a attached to the flange portion 2g are taken as local coordinates, and the directions of the straight line X and the straight line Y of the aiming coordinate axes are set based on those local coordinates.
  • The z-axis of the local coordinates is the coordinate axis that is perpendicular to the opening and closing direction of the gripping portion of the instrument 8a (see arrow 8a1), parallel to the direction in which the gripping portion extends (the direction of the axis J6 in FIG. 2), and that passes through the control point A.
  • The origin in this case is the control point A.
  • The x-axis of the local coordinates is the coordinate axis in the same direction as the opening and closing direction of the gripping portion of the instrument 8a (see arrow 8a1) and passing through the control point A.
  • The y-axis of the local coordinates is the coordinate axis orthogonal to the x-axis and the z-axis at the control point A.
  • The straight line X and the straight line Y of the aiming coordinate axes are set to coincide with the axial directions of the x-axis and the y-axis of the local coordinates, respectively.
  • FIG. 6C shows an example in which the x axis, the y axis and the z axis set in the grinder 8b attached to the flange portion 2g are local coordinates, and the straight line X and the straight line Y of the aiming coordinate axis are set based on the local coordinates.
  • The z-axis of the local coordinates is the coordinate axis which extends in the direction in which the grinder 8b is pressed against the workpiece or the like (see arrow 8b1) and passes through the control point A.
  • the local coordinate x-axis is a coordinate axis passing through the control point A and orthogonal to the z-axis.
  • the y-axis of the local coordinates is a coordinate axis orthogonal to the x-axis and the z-axis at the control point A.
  • the straight line X and the straight line Y of the aiming coordinate axes are set to coincide with the axial directions of the x and y axes of the local coordinates, respectively.
  • FIGS. 6A to 6C show cases where the directions of the straight line X and the straight line Y of the aiming coordinate axes coincide with the axial directions of the x-axis and the y-axis of the global coordinates or the local coordinates.
  • However, the directions of the straight line X and the straight line Y of the aiming coordinate axes do not have to coincide with the x-axis and y-axis directions of the global coordinates or the local coordinates.
  • For example, the straight line X and the straight line Y may be set in the directions obtained by rotating them by an arbitrary angle about the straight line Z as the rotation axis, starting from the state in which their directions coincide with the x-axis and the y-axis (see the sketch below).
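  • A minimal sketch of this rotation, assuming the straight line Z is the rotation axis and using Rodrigues' rotation formula (illustrative only):

```python
import numpy as np

def rotate_about_axis(v, axis, angle_rad):
    """Rotate vector v about a unit axis by angle_rad (Rodrigues' formula)."""
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    v = np.asarray(v, dtype=float)
    return (v * np.cos(angle_rad)
            + np.cross(k, v) * np.sin(angle_rad)
            + k * np.dot(k, v) * (1.0 - np.cos(angle_rad)))

# Start from X and Y aligned with the global x- and y-axes, Z vertical,
# then rotate both by an arbitrary angle about the straight line Z.
z_dir = np.array([0.0, 0.0, 1.0])
x_dir = rotate_about_axis([1.0, 0.0, 0.0], z_dir, np.deg2rad(30.0))
y_dir = rotate_about_axis([0.0, 1.0, 0.0], z_dir, np.deg2rad(30.0))
```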
  • FIGS. 7A to 7C are diagrams showing display examples of the aiming coordinate axes by the display control device 1 according to the first embodiment.
  • FIGS. 7A and 7B show display examples of the aiming coordinate axes on the output device 5.
  • The information processing unit 103 calculates the direction of the straight line Z in the virtual space and the aiming point C on the straight line Z using the position information of the control point A of the robot 2 in the virtual space and the three-dimensional data of the structure.
  • The information processing unit 103 also calculates the directions, shapes, and the like of the straight line X and the straight line Y in the virtual space.
  • The information processing unit 103 generates three-dimensional data of the calculated aiming coordinate axes with reference to the three-dimensional data of the structure. Thereby, as shown in FIGS. 7A and 7B, the straight line X and the straight line Y of the aiming coordinate axes are displayed on the output device 5 along the surface shapes of the workpiece Da or the structure Db existing in the virtual space and of the installation surface B of the robot 2.
  • Note that the straight line X and the straight line Y need not have shapes along the surface shape of the structure.
  • FIG. 7C shows a display example in the case where the straight line X and the straight line Y of the aiming coordinate axes are straight lines.
  • In this case, the straight line X and the straight line Y of the aiming coordinate axes are displayed on the output device 5 as straight lines passing through the aiming point C, as shown in FIG. 7C.
  • the display control device 1 may be configured to allow the operator to select whether to display the straight line X and the straight line Y of the aiming coordinate axes as a shape along the structure or a straight line.
  • the display control device 1 can perform display control of the aiming coordinate axes according to the display method selected by the operator.
  • The above description covers the case where the direction of the straight line Z is set appropriately based on the kind of work that the tool 8 attached to the flange portion 2g of the robot 2 performs on the workpiece.
  • the method of setting the straight line Z is not limited to the method described above, and the straight line Z may be set to a straight line extending vertically downward or vertically upward from the control point A.
  • the straight line Z may be set as a straight line extending in a direction arbitrarily set from the control point A.
  • FIG. 8 is a flowchart showing the operation of the display control device 1 according to the first embodiment.
  • the three-dimensional data acquisition unit 101 acquires three-dimensional data of the robot 2 and the structure from the peripheral information acquisition device 3 (step ST1).
  • the three-dimensional data acquisition unit 101 outputs the acquired three-dimensional data to the information processing unit 103.
  • the information processing unit 103 calculates the position, the direction, and the like of the aiming coordinate axis based on the three-dimensional data acquired in step ST1 (step ST2).
  • the information processing unit 103 generates three-dimensional data of the aiming coordinate axis based on the calculated position, direction, and the like of the aiming coordinate axis (step ST3).
  • the information processing unit 103 outputs the three-dimensional data generated in step ST3 and the three-dimensional data of the robot 2 and the structure to the output control unit 104.
  • the output control unit 104 generates control information for displaying the structure in the virtual space, the robot 2 and the aiming coordinate axes based on the three-dimensional data input from the information processing unit 103 (step ST4).
  • the output control unit 104 outputs the control information generated in step ST4 to the output device 5 (step ST5).
  • the information processing unit 103 determines whether or not operation information is input from the operation information acquisition unit 102 (step ST6).
  • When operation information is input (step ST6; YES), the information processing unit 103 corrects the previously generated three-dimensional data of the robot 2 and of the aiming coordinate axes based on the position information indicated by the input operation information (step ST7).
  • the information processing unit 103 outputs the three-dimensional data corrected in step ST7 to the output control unit 104.
  • the output control unit 104 generates control information for displaying the robot 2 and the aiming coordinate axes in the virtual space based on the three-dimensional data corrected in step ST7 (step ST8).
  • the output control unit 104 outputs the control information generated in step ST8 to the output device 5 (step ST9). Thereafter, the flowchart returns to the process of step ST6.
  • When no operation information is input (step ST6; NO), the information processing unit 103 determines whether a predetermined time has elapsed since operation information was last input (step ST10). If the predetermined time has elapsed (step ST10; YES), the process ends. On the other hand, when the predetermined time has not elapsed (step ST10; NO), the process returns to the determination of step ST6. The sketch below condenses this loop.
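  • The following Python sketch condenses the flow of FIG. 8 (steps ST1 to ST10). The four callables standing in for the units 101 to 104 and the input device 6 are assumptions of this sketch:

```python
import time

def display_control_loop(acquire_3d, info_proc, output_ctrl, input_dev,
                         timeout_s=30.0):
    robot_3d, structure_3d = acquire_3d()                      # ST1
    axes_3d = info_proc.generate_axes(robot_3d, structure_3d)  # ST2, ST3
    output_ctrl.display(structure_3d, robot_3d, axes_3d)       # ST4, ST5
    last_input = time.monotonic()
    while True:
        op = input_dev.poll()                                  # ST6
        if op is not None:
            robot_3d, axes_3d = info_proc.apply_operation(op)  # ST7
            output_ctrl.display(structure_3d, robot_3d, axes_3d)  # ST8, ST9
            last_input = time.monotonic()
        elif time.monotonic() - last_input > timeout_s:        # ST10
            break
```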
  • FIG. 9 is a flowchart showing an operation of the information processing unit 103 of the display control device 1 according to the first embodiment.
  • It is assumed that the information processing unit 103 recognizes in advance the type of the tool 8 attached to the flange portion 2g.
  • It is also assumed that whether the straight line X and the straight line Y of the aiming coordinate axes are based on the global coordinates or the local coordinates is set in advance.
  • Likewise, whether the aiming coordinate axes are displayed as lines along the surface shape of the structure or as straight lines is set in advance.
  • When generating the three-dimensional data of the robot 2 and the structure in step ST2, the information processing unit 103 determines, according to the type of the tool 8 attached to the flange portion 2g, the control point A of the tool 8 and the straight line Z passing through the control point A, and generates their three-dimensional data (step ST11).
  • The information processing unit 103 determines the control point A and the straight line Z by referring, for example, to a database (not shown) that stores each type of tool 8 in association with information indicating the position of the control point A and the direction of the straight line Z, as sketched below.
  • The information indicating the direction of the straight line Z is, for example, when the tool 8 is the instrument 8a, information indicating that the straight line that is orthogonal to the opening and closing direction of the gripping portion of the instrument 8a and passes through the control point A is the straight line Z.
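  • A minimal sketch of such a lookup for step ST11; the table contents, names, and values are hypothetical, standing in for the database described above:

```python
# Tool type -> (offset of control point A from the flange portion 2g, and the
# direction of the straight line Z in the tool's local frame).
TOOL_TABLE = {
    "instrument_8a": {"control_point": (0.0, 0.0, 0.12), "z_dir": (0.0, 0.0, 1.0)},
    "grinder_8b": {"control_point": (0.0, 0.05, 0.10), "z_dir": (0.0, 1.0, 0.0)},
}

def control_point_and_line_z(tool_type):
    """Determine the control point A and the direction of the straight line Z
    according to the attached tool (step ST11)."""
    entry = TOOL_TABLE[tool_type]
    return entry["control_point"], entry["z_dir"]
```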
  • Next, the information processing unit 103 refers to the three-dimensional data of the straight line Z determined in step ST11 and the three-dimensional data of the structure, and determines whether there is a point at which the straight line Z intersects the surface of a structure in the virtual space (step ST12). If there is no point at which the straight line Z intersects the surface of a structure (step ST12; NO), the information processing unit 103 generates three-dimensional data of the control point A (step ST13) and proceeds to the process of step ST21.
  • If such a point exists (step ST12; YES), the information processing unit 103 calculates all points at which the straight line Z intersects the surfaces of structures (step ST14). For example, when the workpiece Da is placed on the installation surface B of the robot 2 as shown in FIG. 7A, in the process of step ST14 the information processing unit 103 calculates the point at which the straight line Z intersects the surface of the workpiece Da and the point at which the straight line Z intersects the installation surface B of the robot 2.
  • The information processing unit 103 calculates the distance from each point calculated in step ST14 to the control point A, and sets the point with the shortest calculated distance as the aiming point C (step ST15).
  • the information processing unit 103 determines the directions of the straight line X and the straight line Y orthogonal to each other at the aiming point C set in step ST15, using the global coordinates or the local coordinates (step ST16).
  • The information processing unit 103 determines, with reference to preset conditions, whether the straight line X and the straight line Y are to be lines along the surface shape of the structure (step ST17).
  • When the straight line X and the straight line Y are lines along the surface shape of the structure (step ST17; YES), the information processing unit 103 refers to the three-dimensional data of the structure and generates three-dimensional data of the aiming coordinate axes in which the straight line Z determined in step ST11 has a length reaching the aiming point C and in which the straight line X and the straight line Y, whose directions were determined in step ST16, are lines along the surface shape of the structure (step ST18).
  • When the straight line X and the straight line Y are not lines along the surface shape of the structure (step ST17; NO), that is, when they are straight lines, the information processing unit 103 generates three-dimensional data of the aiming coordinate axes in which the straight line Z determined in step ST11 has a length reaching the aiming point C and in which the straight line X and the straight line Y, whose directions were determined in step ST16, are straight lines (step ST19). The information processing unit 103 then generates three-dimensional data of the control point A (step ST20).
  • The information processing unit 103 outputs to the output control unit 104 the three-dimensional data of the aiming coordinate axes and the control point A generated in steps ST18 to ST20, or the three-dimensional data of the straight line Z generated in step ST11 and of the control point A generated in step ST13 (step ST21). Thereafter, the flowchart proceeds to the process of step ST4 of the flowchart of FIG. 8. A sketch of steps ST12 to ST15 follows.
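  • A minimal sketch of steps ST12 to ST15, assuming each structure surface is given as a callable that returns the intersection point with a ray or None (a simplification of the three-dimensional structure data):

```python
import numpy as np

def aiming_point(control_point, z_dir, surfaces):
    """Intersect the straight line Z with every structure surface and take the
    hit nearest to the control point A as the aiming point C."""
    hits = [s(control_point, z_dir) for s in surfaces]          # ST14
    hits = [p for p in hits if p is not None]
    if not hits:                                                # ST12; NO
        return None
    return min(hits, key=lambda p: np.linalg.norm(p - control_point))  # ST15

def horizontal_plane(z0):
    """E.g. the installation surface B, or the top of the workpiece Da."""
    def hit(origin, direction):
        if abs(direction[2]) < 1e-9:
            return None
        t = (z0 - origin[2]) / direction[2]
        return origin + t * direction if t > 0.0 else None
    return hit

c = aiming_point(np.array([0.3, 0.0, 0.5]), np.array([0.0, 0.0, -1.0]),
                 [horizontal_plane(0.0), horizontal_plane(0.1)])
# c lies on the plane z = 0.1 (the nearer surface), as in step ST15.
```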
  • FIG. 10 is a flowchart showing the operation of the information processing unit 103 of the display control device 1 according to the first embodiment when operation information is input.
  • Here, as an example of an operation for moving the robot 2 displayed in the virtual space, a case where an operation for moving a pointer displayed superimposed on the control point A is input will be described.
  • When operation information is input from the operation information acquisition unit 102, the information processing unit 103 refers to the input operation information and the three-dimensional data input from the three-dimensional data acquisition unit 101, and acquires the position information of the pointer in the virtual space (step ST31).
  • The information processing unit 103 calculates the movement direction and movement amount of the pointer along the straight line X, the straight line Y, and the straight line Z from the previous position information of the control point A stored in a buffer or the like and the position information of the pointer acquired in step ST31 (step ST32).
  • The information processing unit 103 moves the previously calculated aiming coordinate axes using the movement direction and movement amount calculated in step ST32, calculates the position, direction, and the like of the new aiming coordinate axes, and generates their three-dimensional data (step ST33).
  • When display along the surface shape is selected, the information processing unit 103 also refers to the three-dimensional data of the structure and generates three-dimensional data in which the straight line X and the straight line Y of the aiming coordinate axes are lines along the surface shape of the structure.
  • the information processing unit 103 generates three-dimensional data of the new control point A (step ST34).
  • the information processing unit 103 performs the following processes of step ST35 and step ST36 in parallel with the processes of steps ST32 to ST34.
  • the information processing unit 103 calculates the movement direction and movement amount of the robot 2 from the position information of the pointer acquired in step ST31 and the three-dimensional data of the robot 2 (step ST35).
  • the information processing unit 103 generates three-dimensional data of the robot 2 after movement using the movement direction and movement amount of the robot 2 calculated in step ST35 (step ST36).
  • The information processing unit 103 outputs to the output control unit 104 the three-dimensional data of the new aiming coordinate axes generated in step ST33, the three-dimensional data of the new control point A generated in step ST34, and the three-dimensional data of the moved robot 2 generated in step ST36 (step ST37). Thereafter, the flowchart proceeds to the process of step ST8 in the flowchart of FIG. 8. The sketch below condenses this update.
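  • A compact sketch of steps ST31 to ST37; `state` and its attribute names are assumptions of this sketch, standing in for data the information processing unit 103 holds:

```python
def on_pointer_moved(pointer_pos, state):
    delta = pointer_pos - state.control_point      # ST31, ST32: movement
                                                   # direction and amount
    axes_3d = state.move_axes(delta)               # ST33: shift the aiming
                                                   # coordinate axes (re-fit X
                                                   # and Y to the surface)
    control_point = state.control_point + delta    # ST34: new control point A
    robot_3d = state.solve_robot_pose(control_point)  # ST35, ST36: e.g. inverse
                                                   # kinematics toward A
    return axes_3d, control_point, robot_3d        # ST37: hand to unit 104
```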
  • For example, the operator places the cursor on the control point A to select it, and moves it to a target position with a jog lever, operation keys, a mouse, or the like.
  • When the teaching device 4 is a tablet terminal, the operator places a finger on the control point A displayed on the touch panel to select it, and moves the finger to the target position.
  • While the control point A is selected, the output control unit 104 performs display control that notifies the operator of the selection by highlighting, such as enlarging the shape of the control point A or changing its color. Further, the output control unit 104 may perform control to output a sound effect when the control point A is selected or when the control point A is moved.
  • the information processing unit 103 performs the process shown in the flowchart of FIG. 10 according to the movement of the control point A, thereby performing a process of interactively changing the robot shape.
  • the operator moves the control point A to drive the robot 2 in the virtual space while referring to the aiming coordinate axis, and performs registration etc. of the passing point of the robot 2.
  • The output control unit 104 may apply the display methods shown in FIG. 11 to the aiming coordinate axes regardless of whether the straight line X and the straight line Y follow the surface shape of the structure or are straight lines.
  • FIGS. 11A and 11B are views showing other display examples of the aiming coordinate axes by the display control device 1 according to the first embodiment.
  • FIG. 11A shows a case where the output control unit 104 performs display control in which the aiming coordinate axis in the direction in which the drive of the robot 2 is restricted is indicated by a broken line.
  • For example, to make linear driving of the robot 2 in a single direction easy to recognize, the display control device 1 may restrict the drive direction of the robot 2 to one direction and then accept operations on the jog lever.
  • In that case, the output control unit 104 generates control information that displays, among the straight line X, the straight line Y, and the straight line Z, the straight lines in the directions in which driving of the robot is restricted as broken lines, hidden, or semi-transparent.
  • the display control device 1 can perform display control to clearly indicate to the operator the directions in which the robot 2 can be driven. The operator can operate while intuitively recognizing the direction in which the robot 2 can be driven, by operating the jog lever while referring to the display shown in FIG. 11A.
  • FIG. 11B illustrates a display example in the case where the output control unit 104 performs control of displaying the coordinate value of the control point A and the coordinate value of the aiming point C in addition to the display of the aiming coordinate axis.
  • the output control unit 104 acquires coordinate values of the control point A and the aiming point C from the three-dimensional data input from the information processing unit 103.
  • The output control unit 104 generates control information for displaying a display area Ea and a display area Eb, which show the acquired coordinate values, in the vicinity of the control point A and the aiming point C, respectively.
  • the output control unit 104 may generate control information for displaying the numerical values of the coordinate values of the control point A and the aiming point C so as to be correctable.
  • the operator selects a display area for which correction of coordinate values is desired, and inputs a new coordinate value.
  • When the operation information acquisition unit 102 acquires the operation information for correcting a coordinate value, the information processing unit 103 generates, based on that operation information, three-dimensional data of the new aiming coordinate axes, the new control point A, and the robot 2 after driving. Thereby, the operator can drive the robot 2 by inputting coordinate values.
  • As described above, according to Embodiment 1, the information processing unit 103 generates, based on the three-dimensional data of the robot 2 and the three-dimensional data of the structure, three-dimensional data of the aiming coordinate axes including the straight line Z passing through the control point A set on the robot 2 in the virtual space defined by the three-dimensional data of the structure, and the output control unit 104 generates control information for displaying the aiming coordinate axes on the output device 5 based on that three-dimensional data. It is therefore possible to present information that allows the operator to visually grasp the relationship between the current position of the device to be operated and the target position to which the device is to be moved.
  • Further, according to Embodiment 1, the aiming coordinate axes comprise, in addition to the straight line Z, the straight line X and the straight line Y each passing through the aiming point C located on the straight line Z, so that information whose spatial relationships are easily grasped can be presented.
  • Further, according to Embodiment 1, the straight line Z is a straight line extending in a direction set according to the tool 8 attached to the robot 2, so that the straight lines of the aiming coordinate axes can be determined according to the kind of operation the tool attached to the robot performs on the workpiece, or the kind of processing it applies to the workpiece. This makes it possible to display aiming coordinate axes suited to the work performed by the robot.
  • Further, according to Embodiment 1, when the straight line Z is a straight line extending in the vertical direction, the axial direction of the straight line Z of the aiming coordinate axes is always constant, and the operator can easily operate while matching the vertical direction relative to himself or herself with the vertical direction relative to the robot.
  • Further, according to Embodiment 1, the aiming point C is the point at which the first straight line intersects the surface of a structure in the virtual space, so that the aiming point is a point on the structure and the target point of the operation can be presented to the operator.
  • Further, according to Embodiment 1, the straight line X and the straight line Y are lines along the surface shape of the structure, shaped so as to appear straight and mutually orthogonal when viewed from a specific direction, so that coordinate axes that move in accordance with the shape of the structure can be presented.
  • Alternatively, the straight line X and the straight line Y may be configured as straight lines extending through the virtual space, so that coordinate axes can be presented for moving the robot so as not to collide with the structure.
  • Further, according to Embodiment 1, the output control unit 104 generates control information for displaying, with a changed display form, the straight lines extending in the same directions as the directions in which driving of the robot 2 is restricted, among the straight line Z, the straight line X, and the straight line Y, so that the operator can operate while intuitively recognizing the directions in which the robot can be driven.
  • Further, according to Embodiment 1, the output control unit 104 generates control information for displaying the coordinate value of the control point A and the coordinate value of the aiming point C, so that the operator can operate while confirming those coordinate values.
  • FIG. 12 is a block diagram showing a configuration of a display control device 1A according to the second embodiment.
  • The display control device 1A is configured by adding a drive area setting unit 105 to the display control device 1 of Embodiment 1 shown in FIG. 4 and replacing the information processing unit 103 with an information processing unit 103a.
  • Parts identical or corresponding to the components of the display control device 1 according to Embodiment 1 are assigned the same reference numerals as those used in Embodiment 1, and their description is omitted or simplified.
  • The drive area setting unit 105 acquires information indicating the limit positions to which the robot 2 can be driven (hereinafter referred to as drive limit information).
  • the drive area setting unit 105 may acquire the drive limit information from a storage area (not shown) in the display control device 1A, or may acquire it from the outside of the display control device 1A.
  • the drive area setting unit 105 sets an area in which the robot 2 can be driven (hereinafter referred to as a drivable area) in the virtual space based on the acquired drive limit information.
  • The information processing unit 103a generates three-dimensional data of the aiming coordinate axes based on the drivable area set by the drive area setting unit 105.
  • The information processing unit 103a outputs the corrected three-dimensional data of the aiming coordinate axes, the control point A, the robot 2, and the structure to the output control unit 104.
  • The information processing unit 103a and the drive area setting unit 105 in the display control device 1A are realized by the processing circuit 1a shown in FIG. 5A, or by the processor 1b executing a program stored in the memory 1c shown in FIG. 5B.
  • The drivable area F is the region enclosed by the first curved surface G, the second curved surface H, and part of the installation surface B of the robot 2.
  • the first curved surface G is a surface indicated by the drive limit information, and is a curved surface indicating the outermost area in which the robot 2 can drive.
  • the second curved surface H is a surface indicated by the drive limit information, and is a curved surface indicating the innermost region in which the robot 2 can be driven.
  • The control point A of the robot 2 cannot be located outside the first curved surface G, that is, in the region that the robot 2 cannot reach, or inside the second curved surface H, that is, the region where the mounting portion 2a of the robot 2 is located.
  • The portion Ba of the installation surface B is the region of the installation surface B inside the first circle Ga, formed where the first curved surface G intersects the installation surface B of the robot 2, and outside the second circle Ha, formed where the second curved surface H intersects the installation surface B.
  • the drive area setting unit 105 generates three-dimensional data indicating the drivable area F in the virtual space from the drive limit information described above and the three-dimensional data of the structure.
  • Based on the three-dimensional data of the drivable area F generated by the drive area setting unit 105, the information processing unit 103a corrects the generated three-dimensional data of the aiming coordinate axes comprising the straight line X, the straight line Y, and the straight line Z (see FIG. 6) into three-dimensional data of aiming coordinate axes comprising a line segment Xa, a line segment Ya, and a line segment Za lying within the drivable area F (see FIG. 13A).
  • As shown in FIG. 13A, by displaying only the portions of the aiming coordinate axes within the drivable area F, the display control device 1A can indicate the drivable range of the robot 2 through the lengths of the line segment Xa, the line segment Ya, and the line segment Za.
  • the line segment Za indicates how much the robot 2 can be driven in the direction of the line segment Za from the current state of the robot 2.
  • the line segment Xa and the line segment Ya indicate how much the robot 2 can be driven in the line segment Xa direction and the line segment Ya direction after the robot 2 is driven in the direction of the line segment Za from its current state to the position where the control point A and the point C are closest to each other.
  • by visually recognizing the display example of FIG. 13A, the operator can easily recognize how much the robot 2 can be driven in the direction of the line segment Za from the current state, and how much the robot 2 can be driven in the line segment Xa direction and the line segment Ya direction.
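  • under the same illustrative spherical model as above, correcting a straight line of the aiming coordinate axes into a line segment inside the drivable area F reduces to intersecting the line with the two bounding spheres. A minimal sketch with hypothetical names, not the patent's method:

```python
import numpy as np

def clip_line_to_shell(origin, direction, base, r_inner, r_outer):
    """Clip the line origin + t * direction against the spherical shell between
    radii r_inner and r_outer around `base`; return the list of parameter
    intervals (t_start, t_end) for which the line lies inside the shell."""
    o = np.asarray(origin, float) - np.asarray(base, float)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)

    def sphere_hits(r):
        # Solve |o + t d|^2 = r^2, a quadratic in t (d is a unit vector).
        b = 2.0 * np.dot(o, d)
        c = np.dot(o, o) - r * r
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None
        s = np.sqrt(disc)
        return ((-b - s) / 2.0, (-b + s) / 2.0)

    outer = sphere_hits(r_outer)
    if outer is None:
        return []                    # the line misses the drivable area entirely
    inner = sphere_hits(r_inner)
    if inner is None:
        return [outer]               # a single segment through the shell
    # the inner sphere splits the outer chord into two segments
    return [(outer[0], inner[0]), (inner[1], outer[1])]
```

  • in terms of the figure, `origin` would be the control point A and `direction` the straight line Z; the parameter interval containing t = 0 then corresponds to the displayed line segment Za.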
  • the information processing unit 103a may replace the line segment Za of the corrected aiming coordinate axes with a line segment Zb extended to the point that passes through the control point A and intersects the first curved surface G.
  • that is, the three-dimensional data of the aiming coordinate axes consisting of the generated straight line X, straight line Y, and straight line Z may be corrected to aiming coordinate axes consisting of the line segment Xa, the line segment Ya, and the line segment Zb.
  • as shown in FIG. 13B, by displaying only the aiming coordinate axes in the drivable area F, including the line segment Zb, the operator can recognize how much the robot 2 can be driven in the direction of the line segment Zb.
  • alternatively, a correction may be performed to convert the three-dimensional data of the aiming coordinate axes made up of the generated straight line X, straight line Y, and straight line Z into three-dimensional data of only the line segment Zc present in the drivable area F.
  • in this case, the operator can recognize that no structure such as a workpiece exists in the drivable area F in the direction of the current line segment Zc, and how much the robot 2 can be driven in the direction of the line segment Zc.
  • the information processing unit 103a may also set an arbitrary virtual plane in contact with the point at which the line segment Zc intersects the first curved surface G, generate three-dimensional data of a line segment X and a line segment Y on that virtual plane, and thereby generate three-dimensional data of aiming coordinate axes including the line segment X, the line segment Y, and the line segment Zc.
  • the output control unit 104 performs control to display the structure, the robot 2, the corrected aiming coordinate axes, and the control point A on the output device 5 based on the three-dimensional data input from the information processing unit 103a.
  • the output control unit 104 may perform control to display the drivable area F on the output device 5.
  • FIG. 15 is a flowchart showing an operation of the information processing unit 103a of the display control device 1A according to the second embodiment.
  • the same steps as those of the display control device 1 according to the first embodiment are denoted by the same reference numerals as the reference numerals shown in FIG. 9, and the description will be omitted or simplified.
  • based on the three-dimensional data of the drivable area F generated at step ST41, the information processing unit 103a corrects the three-dimensional data of the straight line X, the straight line Y, and the straight line Z constituting the aiming coordinate axes generated at step ST18 or step ST19, or the three-dimensional data of the straight line Z generated at step ST11, into three-dimensional data of line segments located in the drivable area F (step ST42).
  • the information processing unit 103a outputs the three-dimensional data of the aiming coordinate axes corrected at step ST42 and the three-dimensional data of the control point A generated at step ST20 or step ST13 to the output control unit 104 (step ST43). Thereafter, the process proceeds to step ST4 of the flowchart of the first embodiment.
  • as described above, according to the second embodiment, the drive area setting unit 105 that sets the drivable area F of the robot 2 based on the drive limit of the robot 2 is provided, and the straight line Z is displayed as the line segment of its portion located in the drivable area F. The drivable range of the robot 2 can therefore be indicated by the length of the line segment Z constituting the aiming coordinate axes, and the operator can easily recognize how much the robot 2 can be driven in the direction of the line segment Z.
  • likewise, since the straight line Z, the straight line X, and the straight line Y are displayed as the line segments of their portions located in the drivable area F, the range in which the robot can be driven can be indicated by the length of each line segment. As a result, the operator can easily recognize how much the robot 2 can be driven in the direction of each line segment.
  • FIG. 16 is a block diagram showing a configuration of a display control device 1B according to the third embodiment.
  • the display control device 1B is configured by adding a position information storage unit 106, a trajectory generation unit 107, and a reproduction processing unit 108 to the display control device 1 of the first embodiment shown in FIG. Further, in place of the information processing unit 103 and the output control unit 104 of the display control device 1 of the first embodiment shown in FIG. 4, an information processing unit 103 b and an output control unit 104 a are provided.
  • parts identical to or corresponding to the constituent elements of display control apparatus 1 according to Embodiment 1 will be assigned the same codes as those used in Embodiment 1 to omit or simplify the description.
  • the operation information acquisition unit 102 acquires the following operation information in addition to the operation information for moving the pointer displayed superimposed on the control point A as the operator's operation information described in the first embodiment.
  • the operation information acquisition unit 102 acquires operation information specifying a passing point through which the control point A set for the robot 2 should pass. Further, the operation information acquisition unit 102 acquires, as operation information of the operator, operation information instructing generation of a drive trajectory of the robot 2 and operation information instructing reproduction of the generated simulation of the robot 2.
  • when the operation information specifying a passing point is input, the information processing unit 103b stores the position information of the control point A at that time in the position information storage unit 106 as the position information of the passing point. The information processing unit 103b also generates three-dimensional data of the passing point based on the position information of the passing point. The information processing unit 103b outputs the three-dimensional data of the passing point to the output control unit 104a in addition to the three-dimensional data of the robot 2, the structure, and the aiming coordinate axes. The information processing unit 103b also outputs the three-dimensional data of the robot 2 to the reproduction processing unit 108.
  • the trajectory generation unit 107 acquires position information of the passing point stored in the position information storage unit 106.
  • the trajectory generation unit 107 generates a drive trajectory which is a group of position coordinates indicating a trajectory passing through the passing point, using the acquired position information of the passing point. Details of the method of generating the drive trajectory will be described later.
  • the trajectory generation unit 107 outputs the generated drive trajectory to the output control unit 104 a and the reproduction processing unit 108.
  • based on the drive trajectory input from the trajectory generation unit 107 and the three-dimensional data of the robot 2 input from the information processing unit 103b, the reproduction processing unit 108 calculates an operation for driving the robot 2 along the drive trajectory. The reproduction processing unit 108 generates simulation information indicating the movement of the robot 2 from the calculated operation, and outputs the generated simulation information to the output control unit 104a. Further, the reproduction processing unit 108 collates the generated simulation information with the three-dimensional data of the structure input from the information processing unit 103b, and determines whether the robot 2 interferes with the structure in the virtual space. If the reproduction processing unit 108 determines that the robot 2 and the structure interfere with each other in the virtual space, it also outputs the position information of the portion where the interference occurs to the output control unit 104a.
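  • the reproduction processing described here can be pictured as sampling the drive trajectory over time and deriving a pose of the robot 2 for each sample. The sketch below is purely illustrative and sidesteps inverse kinematics by animating only the control point A; the function name, the constant speed, and the fixed time step are assumptions, not the patent's method.

```python
import numpy as np

def generate_simulation_info(waypoints, speed=0.1, dt=0.02):
    """Return a list of (time, control-point position) samples obtained by
    driving the control point A along the piecewise-linear drive trajectory
    `waypoints` at constant `speed` (m/s), sampled every `dt` seconds."""
    samples, t = [], 0.0
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        a, b = np.asarray(a, float), np.asarray(b, float)
        length = np.linalg.norm(b - a)
        n = max(1, int(length / (speed * dt)))   # samples on this span
        for i in range(n):
            samples.append((t, a + (b - a) * (i / n)))
            t += dt
    samples.append((t, np.asarray(waypoints[-1], float)))
    return samples
```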
  • the output control unit 104a generates control information for displaying the robot 2, the structure, the aiming coordinate axes, and the passing point in the virtual space based on the three-dimensional data input from the information processing unit 103b. Further, the output control unit 104a generates control information for displaying the drive trajectory generated by the trajectory generation unit 107 on the output device 5, and control information for reproducing and displaying the simulation information input from the reproduction processing unit 108 on the output device 5. In addition, the output control unit 104a generates control information that highlights a location where interference occurs with a point or a line. The output control unit 104a may also generate control information for notifying the operator by sound that interference occurs.
  • the teaching device 4, to which the output control unit 104a outputs the control information, is configured of the output device 5 and the input device 6 as shown in FIG. 1.
  • the teaching device 4 is a device capable of displaying the robot 2 and the structure based on the control information input from the output control unit 104 a and performing operation input to the displayed robot 2.
  • the teaching device 4 is applicable to a general purpose personal computer, a tablet terminal operated by a touch panel, a head mounted display, and the like.
  • the robot 2 and a structure are displayed on the display 51 of the output device 5, and a graphic interface such as a teaching button for receiving an operation input is displayed.
  • the teaching button may be a button provided on the teaching device 4 or a button of hardware such as a keyboard of a personal computer.
  • there are a plurality of teaching buttons: a button for inputting the start or end of the teaching process, a button for inputting the determination or deletion of a passing point, a button for inputting the reproduction or stop of a simulation for confirming the generated drive trajectory, and buttons for advancing or returning in the operation procedure.
  • the information processing unit 103b, the trajectory generation unit 107, the reproduction processing unit 108, and the output control unit 104a in the display control device 1B are implemented by the processing circuit 1a shown in FIG. 5A, or by the processor 1b that executes a program stored in the memory 1c shown in FIG. 5B.
  • the trajectory generation unit 107 generates a drive trajectory of the robot 2 connecting the drive start point of the robot 2, the passing point, and the drive end point of the robot 2 which is the destination point.
  • the drive start point and the drive end point may be points set in advance, or may be points arbitrarily set by the operator.
  • the trajectory generation unit 107 can generate a drive trajectory of the robot 2 if at least one passing point is stored in the position information storage unit 106.
  • the drive start point of the robot 2 may be a start point of the control program.
  • the trajectory generation unit 107 generates the drive trajectory as, for example, a linear motion trajectory, a tangential trajectory, or a manual trajectory.
  • FIG. 17 is a diagram showing the type of drive trajectory generated by the trajectory generation unit 107 of the display control device 1B according to the third embodiment.
  • FIG. 17A shows an example of a linear motion track
  • FIG. 17B shows an example of a tangential track
  • FIG. 17C shows an example of a manual track.
  • the linear motion trajectory is a drive trajectory obtained by connecting two passing points in a straight line. Specifically, as shown in FIG. 17A, the first passing point P1 and the second passing point P2 are connected by a straight line, and the second passing point P2 and the third passing point P3 are connected by a straight line.
  • the tangential trajectory is a drive trajectory obtained by connecting the start point and the first passing point by a straight line, and connecting the second and subsequent passing points smoothly while maintaining the continuity of the tangent. Specifically, as shown in FIG. 17B, it is a trajectory Qb obtained by connecting the first passing point P1 and the second passing point P2 by a straight line, and connecting the second passing point P2 and the third passing point P3 smoothly while maintaining tangent continuity.
  • the manual trajectory is a trajectory generated by an operation of a jog lever, operation keys, a mouse, or the like, which is used as the drive trajectory as it is. Specifically, as shown in FIG. 17C, the first passing point P1 and the second passing point P2, and the second passing point P2 and the third passing point P3, are connected by line segments drawn by the operator on the display 51 of the output device 5.
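  • for illustration, the linear motion trajectory and the tangential trajectory can be sketched as follows. Tangent continuity at interior passing points is realized here with a Catmull-Rom spline, which is one common choice; the patent does not specify the interpolation used, and all names are hypothetical.

```python
import numpy as np

def linear_trajectory(points, samples_per_span=20):
    """Linear motion trajectory: consecutive passing points joined by straight lines."""
    pts = [np.asarray(p, float) for p in points]
    out = []
    for a, b in zip(pts[:-1], pts[1:]):
        for i in range(samples_per_span):
            out.append(a + (b - a) * i / samples_per_span)
    out.append(pts[-1])
    return np.array(out)

def tangential_trajectory(points, samples_per_span=20):
    """Tangent-continuous trajectory through the passing points (Catmull-Rom spline)."""
    pts = [np.asarray(p, float) for p in points]
    # duplicate the end points so the spline starts and ends exactly at them
    p = [pts[0]] + pts + [pts[-1]]
    out = []
    for k in range(len(pts) - 1):
        p0, p1, p2, p3 = p[k], p[k + 1], p[k + 2], p[k + 3]
        for i in range(samples_per_span):
            t = i / samples_per_span
            out.append(0.5 * ((2 * p1) + (-p0 + p2) * t
                              + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                              + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3))
    out.append(pts[-1])
    return np.array(out)
```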
  • the trajectory generation unit 107 outputs the generated drive trajectory and position information of the passing point on the drive trajectory to the output control unit 104 a and the reproduction processing unit 108.
  • the passing points are highlighted in a shape such as the circles or rectangles shown in FIGS. 17A to 17C. Thereby, the operator can easily recognize the passing points.
  • the output control unit 104a may generate control information for displaying a label identifying the passing point, for example "passing point 1", "Point 1", or "P1", in the vicinity of the displayed passing point.
  • the output control unit 104a may also perform control to output a sound effect that signals the determination of a passing point.
  • each passing point is given an identifying label in the order in which it was determined, for example a serial number such as "passing point 1", "passing point 2", or "passing point 3", which is displayed.
  • when operation information for correcting a passing point is input, the trajectory generation unit 107 corrects the position information of the passing point according to the operation information, and corrects the drive trajectory to a trajectory connecting the corrected passing points. When operation information for selecting and deleting a passing point on the drive trajectory displayed on the display 51 is input, the trajectory generation unit 107 deletes the position information of that passing point. Further, the trajectory generation unit 107 corrects the drive trajectory to a trajectory that does not pass through the deleted passing point, and corrects the identifying labels given to the passing points.
  • the trajectory generation unit 107 outputs the corrected drive trajectory and the position information of the passing point on the corrected drive trajectory to the output control unit 104 a and the reproduction processing unit 108.
  • the track generation unit 107 may receive an input of operation information for selecting and moving the drive track displayed on the display 51, and may perform correction to move the drive track.
  • the teaching device 4 receives, for example, a pressing operation of the completion button, which indicates that the correction of the drive trajectory is completed.
  • the teaching device 4 may determine that the correction of the drive trajectory is completed, and the pressing operation of the completion button may be unnecessary.
  • the teaching device 4 notifies the display control device 1B that the correction of the drive trajectory is completed.
  • the display control device 1B outputs a control program for driving the robot 2 along the generated drive trajectory to the robot control device 7.
  • the reproduction processing unit 108 calculates an operation for driving the robot 2 along the drive trajectory from the information indicating the drive trajectory input from the trajectory generation unit 107 and the three-dimensional data of the robot 2 acquired from the information processing unit 103b, and generates simulation information.
  • the reproduction processing unit 108 outputs the generated simulation information to the output control unit 104a.
  • the output control unit 104a displays a moving image driven by the robot 2 along a drive trajectory in the virtual space based on the simulation information input from the reproduction processing unit 108 and the three-dimensional data input from the information processing unit 103b. Control information is generated and output to the output device 5.
  • the reproduction processing unit 108 collates the simulation information with the three-dimensional data of the structure input from the information processing unit 103 b.
  • the reproduction processing unit 108 determines whether or not the robot 2 and the structure interfere with each other in the virtual space with reference to the comparison result.
  • the interference between the robot 2 and the structure also includes the interference between the tool 8 attached to the flange portion 2g of the robot 2 shown in FIG. 3 and the structure.
  • when the reproduction processing unit 108 determines that interference occurs, it acquires the position information of the portion where the interference occurs and outputs the acquired position information to the output control unit 104a.
  • when the position information of the location where the interference occurs is input from the reproduction processing unit 108, the output control unit 104a generates control information for clearly indicating that location.
  • the output control unit 104a may generate control information that clearly indicates the location where interference occurs when reproducing the simulation information, or may generate such control information when displaying the drive trajectory of the robot 2 without reproducing the simulation information. For example, when the trajectory generation unit 107 generates a linear motion trajectory or a tangential trajectory, the reproduction processing unit 108 determines whether the robot 2 interferes with the structure in the virtual space without generating simulation information. When it is determined that interference occurs, the reproduction processing unit 108 outputs the position information of the portion where the interference occurs to the output control unit 104a. As a result, the output control unit 104a can perform control to clearly indicate the location where interference occurs on the drive trajectory.
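  • the interference determination itself can be illustrated with a deliberately simple collision model. The sketch below checks sampled control-point positions (for example, those produced by the simulation sketch above) against axis-aligned bounding boxes of structures; a real implementation would have to test the full robot geometry, so treat every name here as a hypothetical assumption.

```python
import numpy as np

def find_interference(samples, boxes, radius=0.05):
    """Return the sampled positions whose distance to any structure's
    axis-aligned bounding box (given as a (min_corner, max_corner) pair) is
    less than `radius`, i.e. the locations where interference occurs.
    `samples` is a list of (time, position) pairs."""
    hits = []
    for _, pos in samples:
        for lo, hi in boxes:
            # closest point on the box to `pos`
            closest = np.clip(pos, np.asarray(lo, float), np.asarray(hi, float))
            if np.linalg.norm(pos - closest) < radius:
                hits.append(pos)
                break
    return hits
```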
  • the output control unit 104a generates control information that first displays the robot 2 in a posture in which the control point A is located at the start point of the drive trajectory, and then displays a moving image of the robot 2 being driven along the drive trajectory.
  • when the driving of the robot 2 is completed, the output control unit 104a generates control information for displaying the robot 2 at rest with the control point A at the end position of the drive trajectory.
  • when a stop button is pressed during reproduction, the output control unit 104a generates, via the information processing unit 103b and the reproduction processing unit 108, control information for displaying the robot 2 stopped in the posture it had when the stop button was pressed.
  • when reproduction is resumed, the output control unit 104a performs display control, via the information processing unit 103b and the reproduction processing unit 108, to resume the driving of the robot 2 from the stopped posture.
  • when the position information of the location where the interference occurs is input from the reproduction processing unit 108, the output control unit 104a performs display control to highlight the location of interference with the robot 2 with a point or a line on the drive trajectory. Further, the output control unit 104a performs display control to highlight the interference location by changing the thickness or the color of the line of the drive trajectory. Furthermore, the output control unit 104a may perform control to output a sound effect from the speaker 52 when the robot 2 and a structure interfere with each other in the virtual space.
  • FIG. 18 is a flowchart showing the operation of the trajectory generation unit 107 of the display control device 1B according to the third embodiment.
  • the following description assumes that the trajectory generation unit 107 generates the drive trajectory when operation information instructing generation of the drive trajectory is input.
  • the trajectory generation unit 107 acquires the position information of the passing point stored in the position information storage unit 106 (step ST52).
  • the trajectory generation unit 107 generates a drive trajectory of the robot 2 using the position information of the passing point acquired in step ST52 (step ST53).
  • the trajectory generation unit 107 outputs information indicating the generated drive trajectory to the output control unit 104a.
  • the output control unit 104a generates control information for displaying the drive track and the passing point on the drive track in the virtual space based on the input information indicating the drive track (step ST54).
  • the output control unit 104 a outputs the generated control information to the output device 5.
  • the trajectory generation unit 107 determines whether an instruction to correct the drive trajectory or the passing points has been input (step ST55).
  • the correction instruction includes correction of a passing point and deletion of a passing point.
  • when a correction instruction has been input (step ST55; YES), the trajectory generation unit 107 corrects the previously generated drive trajectory based on the correction instruction (step ST56).
  • the trajectory generation unit 107 then outputs information indicating the corrected drive trajectory to the output control unit 104a, and the process returns to step ST54.
  • when no correction instruction has been input (step ST55; NO), the trajectory generation unit 107 outputs information indicating the generated or corrected drive trajectory to the reproduction processing unit 108.
  • the reproduction processing unit 108 stores the information indicating the drive trajectory input from the trajectory generation unit 107 in a temporary storage area such as a buffer (step ST57), and ends the processing.
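  • steps ST52 to ST57 amount to a generate-display-correct loop. The following schematic rendering uses hypothetical helper objects standing in for the units described above; it is a sketch of the control flow, not the patented implementation.

```python
def teaching_loop(storage, generator, display, buffer):
    """Schematic of steps ST52-ST57: generate a drive trajectory from the
    stored passing points, display it, apply corrections until none remain,
    then hand the result to the reproduction processing unit's buffer."""
    passing_points = storage.load_passing_points()               # ST52
    trajectory = generator.generate(passing_points)              # ST53
    display.show(trajectory, passing_points)                     # ST54
    while (instruction := display.poll_correction()):            # ST55
        trajectory = generator.correct(trajectory, instruction)  # ST56
        display.show(trajectory, passing_points)                 # back to ST54
    buffer.store(trajectory)                                     # ST57
```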
  • FIG. 19 is a flowchart showing generation of simulation information by the reproduction processing unit 108 of the display control device 1B according to the third embodiment.
  • the reproduction processing unit 108 generates simulation information when an instruction to generate simulation information is input.
  • the reproduction processing unit 108 generates simulation information for driving the robot 2 along the drive trajectory (step ST62) from the information indicating the drive trajectory stored in the buffer or the like at step ST57 of the flowchart of FIG. 18, and from the three-dimensional data of the robot 2 and the structure input from the information processing unit 103b.
  • the reproduction processing unit 108 determines whether or not the robot 2 and the structure interfere with each other when the robot 2 is driven along the drive path in the virtual space (step ST63). When it is determined that interference does not occur (step ST63; NO), the reproduction processing unit 108 outputs the simulation information generated in step ST62 to the output control unit 104a. The output control unit 104a generates control information for reproducing the simulation information input from the reproduction processing unit 108 (step ST64).
  • when it is determined that interference occurs (step ST63; YES), the reproduction processing unit 108 acquires the position information of the portion where the interference occurs (step ST65).
  • the reproduction processing unit 108 outputs, to the output control unit 104a, the simulation information generated at step ST62 and the position information of the interference portion acquired at step ST65.
  • the output control unit 104a reproduces the simulation information and generates control information for displaying the location where the interference occurs on the drive track (step ST66).
  • the output control unit 104a outputs the control information generated in step ST64 or step ST66 to the output device 5, and ends the process.
  • FIG. 20 is a diagram showing a display example of an interference location by the display control device 1B according to the third embodiment.
  • the drive track Qd is a track through which the control point A of the robot 2 passes the first passing point P1, the second passing point P2, and the third passing point P3.
  • the area Ra is an area indicating a portion where the robot 2 and the structure Dc interfere with each other.
  • the region Rb is a region indicating where on the drive trajectory Qd the control point A is driven when the robot 2 interferes with the structure Dc in the region Ra.
  • Region Ra has a circular shape and clearly indicates the interference point.
  • a part of the drive trajectory Qd is drawn with a thicker line to clearly indicate the location on the drive trajectory where the interference occurs.
  • while confirming this display, the operator can input an instruction to correct the drive trajectory Qd via the input device 6.
  • the trajectory generation unit 107 corrects the previously generated drive trajectory based on the input correction instruction.
  • the reproduction processing unit 108 determines whether interference occurs between the robot 2 and the structure.
  • the output control unit 104 a generates control information according to the determination result, and outputs the control information to the output device 5. Thereby, the operator can correct the drive trajectory while visually recognizing the interference location.
  • the operator can set the driving speed of the robot 2, the stationary time at each passing point, the number of repetitions of driving, the control condition of the tool 8, and the like.
  • the reproduction processing unit 108 generates simulation information based on the set conditions.
  • the reproduction processing unit 108 outputs a control program for driving the robot 2 along the currently set drive trajectory of the robot 2 to the robot control device 7.
  • the end of the simulation reproduction or of the teaching data generation may be determined by, for example, the operator pressing an end button, or it may be determined that generation has ended when a drive trajectory in which the robot 2 does not interfere with any structure is generated.
  • as described above, according to the third embodiment, the trajectory generation unit 107 that generates a drive trajectory of the robot 2 passing through designated passing points, based on operation information designating the passing points of the robot 2, is provided, and the output control unit 104a is configured to generate control information for displaying the drive trajectory generated by the trajectory generation unit 107 and the passing points on the drive trajectory. Information for confirming the drive trajectory passing through the set passing points can therefore be presented.
  • further, the reproduction processing unit 108, which calculates the operation of the robot 2 driven along the drive trajectory generated by the trajectory generation unit 107 and generates simulation information indicating the motion of the robot 2, is provided, and the output control unit 104a is configured to generate control information for reproducing the simulation information generated by the reproduction processing unit 108. Information for confirming the movement of the robot along the drive trajectory can therefore be presented.
  • further, the reproduction processing unit 108 determines whether the robot 2 interferes with the structure when the robot 2 is driven along the drive trajectory generated by the trajectory generation unit 107, and the output control unit 104a is configured to generate, when it is determined that the robot 2 and the structure interfere with each other, control information for displaying the location where the interference occurs. Information indicating that interference will occur when the robot is driven along the drive trajectory can therefore be presented.
  • the position information storage unit 106, the trajectory generation unit 107, the reproduction processing unit 108, and the interference determination unit 109 are added to the display control device 1 described in the first embodiment.
  • the position information storage unit 106, the trajectory generation unit 107, the reproduction processing unit 108, and the interference determination unit 109 may be added to the display control device 1A described in the second embodiment.
  • in the following, a case will be described in which the control information output from the display control devices 1, 1A, and 1B of the first to third embodiments described above is displayed in an augmented reality space via the output device 5.
  • information such as a control point A, an aiming coordinate axis, a passing point, a drive trajectory, and an interference area is superimposed and displayed on the robot 2 in the real environment.
  • the information of the robot 2 generated by the display control devices 1, 1A and 1B is also superimposed and displayed on the information of the real environment.
  • a tablet terminal or a head mounted display provided with a device for three-dimensional scanning and an acceleration sensor is used as the output device 5.
  • the augmented reality space output device 5 scans the shapes of the robot 2 and structures in the real environment using a stereo camera or depth sensor or the like to generate three-dimensional data.
  • the output device 5 of the augmented reality space detects feature points common to the generated three-dimensional data and the three-dimensional data of the robot 2 and the structure input in advance for teaching data input, and aligns the positions of the two sets of three-dimensional data.
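  • aligning the scanned three-dimensional data with the pre-input model data from common feature points is a rigid registration problem. A minimal sketch using the Kabsch algorithm, which is one standard solution (the patent does not specify the method, and the function name is hypothetical):

```python
import numpy as np

def align_point_sets(scanned, model):
    """Find the rotation R and translation t mapping the `scanned` feature
    points onto the corresponding `model` feature points (Kabsch algorithm).
    Both inputs are (N, 3) arrays with matched rows."""
    P, Q = np.asarray(scanned, float), np.asarray(model, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t                             # model ≈ R @ scanned + t
```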
  • when a tablet terminal or a closed-type (non-transmissive) head mounted display is applied as the output device 5 of the augmented reality space, the three-dimensional data of the robot 2 and the structure input in advance for teaching data input are superimposed and displayed on the image of the real-environment robot 2 captured by a camera for image input provided in the vicinity of the three-dimensional scanning device.
  • when a transmissive head mounted display is applied as the output device 5 of the augmented reality space, the information obtained by scanning the shapes of the real robot 2 and the structure is not displayed.
  • the output device 5 of the augmented reality space periodically performs three-dimensional scanning, performs calculations in accordance with the movement of the operator detected by the acceleration sensor, and updates the above-described superimposed display of the robot 2 and the structure in real time.
  • on this superimposed display of the three-dimensional data, passing points, drive trajectories, operation buttons, and the like can be further superimposed, in the same manner as in the generation of teaching data and simulation information in the virtual reality space.
  • the augmented reality space output device 5 may be set so as not to display the robot 2 and the structure for inputting the teaching data input in advance. As a result, when the interference between the robot 2 and the structure is confirmed, the operator can be made to feel as if the actual structure and the robot 2 of the three-dimensional data actually interfere with each other.
  • FIG. 21 is a diagram showing a display example when the control information of the display control devices 1, 1A, and 1B shown in the first to third embodiments is output to the output device 5 of the augmented reality space.
  • in the embodiments described above, the control information generated by the output control unit 104 has been described as information for causing the output device 5 to display an image in which the robot 2 and the aiming coordinate axes are viewed from a specific position in the virtual space.
  • however, the control information may be information for causing the output device 5 to display an image in which at least the aiming coordinate axes are viewed from a specific position in the virtual space.
  • the display control devices 1, 1A, and 1B described in the first to third embodiments described above can also be applied as a device for generating a control program for remotely operating a device.
  • within the scope of the present invention, the embodiments may be freely combined, and any component of each embodiment may be modified or omitted.
  • the display control apparatus can be applied to an apparatus for generating a control program of a robot for performing work or the like, or an apparatus for generating a control program for remotely operating the apparatus.
  • 1, 1A, 1B display control device, 101 three-dimensional data acquisition unit, 102 operation information acquisition unit, 103, 103a, 103b information processing unit, 104, 104a output control unit, 105 drive area setting unit, 106 position information storage unit, 107 trajectory generation unit, 108 reproduction processing unit.

Abstract

A display control device provided with: an information processing unit (103) that generates, in a virtual space defined by three-dimensional data of a structure, three-dimensional data of aiming coordinate axes including a straight line Z passing through a control point (A) set for a robot (2), on the basis of three-dimensional data of the robot (2) and the three-dimensional data of the structure; and an output control unit (104) that generates control information for displaying the aiming coordinate axes on an output device (5) on the basis of the three-dimensional data of the robot (2), the three-dimensional data of the structure, and the three-dimensional data of the aiming coordinate axes generated by the information processing unit (103).

Description

Display control device, display control method, and display control program

The present invention relates to a technique for displaying information for supporting the operation of a device.

Conventionally, various robot simulators have been proposed that simulate and display the operation of a robot based on teaching data for causing a device (hereinafter, a robot is described as an example of the device) to perform a prescribed operation or the like. By using a robot simulator, the operator can create teaching data while verifying, without actually operating the robot, whether the robot interferes with surrounding structures. The operator creates teaching data by designating points or a trajectory in a three-dimensional space through which the tip of the robot arm passes.
Methods of creating teaching data include moving the robot arm by numerical input, moving the robot arm indirectly using arrow buttons, a joystick, or the like, and having the operator grasp and move the robot arm directly.

However, when teaching data is created by moving the robot arm by numerical input, the numerical input must be repeated until the resulting teaching data matches the operator's intention, which takes time.
Further, when teaching data is created by moving the robot arm through indirect operation, the operator must operate while matching the up, down, left, and right directions relative to himself or herself with those relative to the robot arm. The operator therefore cannot operate the robot arm intuitively, and erroneous operation may be induced.

Also, when the operator creates teaching data by grasping and directly moving the robot arm, the work depends on the skill of each operator, so the created teaching data varies and it is difficult to obtain highly accurate teaching data. Furthermore, when the robot is in operation, or when entry into the area surrounding the robot is prohibited, the operator cannot perform the work of creating teaching data.

As a technique for solving these problems, there is a technique in which a robot simulator displays a virtual space on a screen and creates teaching data by operating and driving the arm of a robot located in that virtual space. For example, the device disclosed in Patent Document 1 is a device for visualizing computer-aided information in an image of a real environment detected by an image receiving device on a vision device. In the device disclosed in Patent Document 1, a determination about the position and orientation, or pose, of the image receiving device is performed, and robot-specific information corresponding to the determination, for example information including a display of a coordinate system specific to at least one robot, is displayed superimposed on the image of the real environment on the vision device.
Patent Document 1: JP 2004-243516 A
In the device described in Patent Document 1, the information is displayed superimposed on the image of the real environment, so the robot arm can be operated intuitively. However, there is a problem in that it is difficult to grasp the relationship between the current position of the robot and the target position to which the robot is to be moved.

The present invention has been made to solve the above problems, and an object of the present invention is to present information with which the relationship between the current position of the device to be operated and the target position to which the device is to be moved can be grasped visually.

A display control device according to the present invention includes: an information processing unit that generates, in a virtual space defined by three-dimensional data of a structure, three-dimensional data of aiming coordinate axes including a first straight line passing through a control point set for a device to be driven, based on three-dimensional data of the device to be driven and the three-dimensional data of the structure; and an output control unit that generates control information for displaying the aiming coordinate axes on an output device, based on the three-dimensional data of the device to be driven, the three-dimensional data of the structure, and the three-dimensional data of the aiming coordinate axes generated by the information processing unit.

According to the present invention, it is possible to present information with which the relationship between the current position of the device to be operated and the target position to which the device is to be moved can be grasped visually.
The drawings show: the configuration of a teaching system provided with the display control device according to Embodiment 1 (FIG. 1); a configuration example of a robot applied to the teaching system (FIG. 2); an example of attachment of a tool to the flange portion of the robot (FIG. 3); a block diagram of the display control device according to Embodiment 1 (FIG. 4); hardware configuration examples of the display control device according to Embodiment 1 (FIGS. 5A and 5B); setting examples and display examples of the aiming coordinate axes by the display control device according to Embodiment 1 (FIG. 6 and subsequent figures); flowcharts showing the operation of the display control device and of its information processing unit according to Embodiment 1; other display examples of the aiming coordinate axes (FIGS. 11A and 11B); a block diagram of the display control device according to Embodiment 2 (FIG. 12); the drivable area of the robot and the aiming coordinate axes in the display control device according to Embodiment 2 (FIGS. 13A and 13B); a flowchart of the information processing unit of the display control device according to Embodiment 2 (FIG. 15); a block diagram of the display control device according to Embodiment 3 (FIG. 16); the types of drive trajectory generated by the trajectory generation unit (FIGS. 17A, 17B, and 17C); flowcharts of the trajectory generation unit (FIG. 18) and the reproduction processing unit (FIG. 19) of the display control device according to Embodiment 3; a display example of an interference location (FIG. 20); and a display example when the control information of the display control devices shown in Embodiments 1 to 3 is output to an output device of the augmented reality space (FIG. 21).
Hereinafter, in order to explain the present invention in more detail, embodiments for carrying out the present invention will be described with reference to the attached drawings.

Embodiment 1.

FIG. 1 is a diagram showing the configuration of a teaching system provided with the display control device 1 according to Embodiment 1. FIG. 1 shows an example in which the display control device 1 is applied to a teaching system that creates teaching data for a robot (device to be driven). The display control device 1 is not limited to application to a robot teaching system, and can be applied to various systems for operating equipment.

The display control device 1 is also used to superimpose the generated teaching data on three-dimensional data or augmented reality (AR) content displayed on a display means. Specifically, the display control device 1 performs control to superimpose teaching data on three-dimensional data displayed on a personal computer, a tablet, a smartphone, a dedicated teaching device, or the like, or on augmented reality content displayed on augmented reality glasses. The augmented reality glasses are a head mounted display (HMD) that is worn on the user's head and displays an image in front of the user's eyes. The head mounted display may be non-transmissive or transmissive. The display control device 1 is implemented, for example, as a processing circuit or chip of the personal computer, tablet, smartphone, dedicated teaching device, augmented reality glasses, or the like described above.
The teaching system includes the display control device 1, a robot 2, a peripheral information acquisition device 3, a teaching device 4, and a robot control device 7. The teaching device 4 includes an output device 5 and an input device 6.

The robot 2 is a device that operates in accordance with a control program and performs a target operation. The control program is information for instructing the operation of the robot 2. The information instructing the operation of the robot 2 includes information such as the passing points, trajectory, speed, and number of operations when the robot 2 is driven. An operation in which the operator inputs or corrects the control program is called robot teaching, and data obtained by robot teaching is called teaching data.
The peripheral information acquisition device 3 acquires three-dimensional data of the robot 2 and three-dimensional data of the surrounding environment of the robot 2, and outputs them to the display control device 1. The three-dimensional data is data numerically representing the outline, position, and movement of the robot 2 and the surrounding environment. The peripheral information acquisition device 3 is configured of a camera, a three-dimensional scanner, a server, or the like. When the peripheral information acquisition device 3 is configured of, for example, a camera or a three-dimensional scanner, it photographs or scans the robot 2 and the surrounding environment in the real environment, acquires three-dimensional data of the robot 2 and the surrounding environment, and outputs the data to the display control device 1. When the peripheral information acquisition device 3 is configured of, for example, a server, it acquires three-dimensional data of the robot 2 and the surrounding environment stored in advance in a storage device or the like, and outputs the data to the display control device 1.

The surrounding environment refers to the floor, walls, pillars, stands, related devices, wiring, workpieces, and the like (hereinafter referred to as structures) in the space where the robot 2 is installed. Here, a workpiece refers to a work target of the robot 2. When the robot 2 is, for example, a device that performs assembly or transport, the workpiece is the object gripped by a gripping tool attached to the robot 2. When the robot 2 is, for example, a deburring device, the workpiece is the object against which the deburring tool attached to the robot 2 is pressed. Details of the tools will be described later.

The display control device 1 acquires the three-dimensional data of the robot 2 and the three-dimensional data of the structures from the peripheral information acquisition device 3. A space virtually constructed in the information processing within the display control device 1 and defined by the three-dimensional data of the structures is referred to as a virtual space. The display control device 1 performs control for causing the output device 5 to display the structures that define the virtual space, based on the three-dimensional data of the structures. The display control device 1 also performs control for causing the output device 5 to display an image showing the robot 2 arranged in the virtual space and operation support information. The display control device 1 outputs the control information to the teaching device 4.

The output device 5 of the teaching device 4 is configured of, for example, a display of a personal computer or tablet terminal, or a head mounted display. The output device 5 displays the virtual space, the robot 2, the operation support information, and the like based on the control information input from the display control device 1. The output device 5 may also display the control program of the robot 2 based on a command from the display control device 1.
The input device 6 of the teaching device 4 is configured of, for example, a mouse, a touch panel, a dedicated teaching terminal, or the input means of a head mounted display, and receives operation inputs for the information displayed by the output device 5. The input device 6 outputs operation information corresponding to the received operation input to the display control device 1. The display control device 1 generates control information according to the state of the robot 2 after the operation, based on the operation information input from the input device 6.

When the control program is displayed on the output device 5, the input device 6 also receives operation inputs such as corrections to the displayed control program. The display control device 1 corrects the control program of the robot 2 based on the operation information input from the input device 6.

When the operation input by the operator is completed, the display control device 1 finalizes the control program of the robot 2 and outputs the finalized control program to the robot control device 7. The robot control device 7 stores the control program input from the display control device 1, converts the stored control program into drive signals for the robot 2, and transmits them to the robot 2. The robot 2 is driven based on the drive signals received from the robot control device 7 and performs the prescribed work or the like.
Next, the configuration of the robot 2 will be described.

FIG. 2 is a diagram showing a configuration example of the robot 2 applied to the teaching system. The robot 2 illustrated in FIG. 2 is a so-called vertical articulated robot having six degrees of freedom, and includes a mounting portion 2a, a first arm 2b, a second arm 2c, a third arm 2d, a fourth arm 2e, a fifth arm 2f, and a flange portion 2g.

The mounting portion 2a is fixed to the floor. The first arm 2b rotates with respect to the mounting portion 2a about the axis J1. The second arm 2c rotates with respect to the first arm 2b about the axis J2. The third arm 2d rotates with respect to the second arm 2c about the axis J3. The fourth arm 2e rotates with respect to the third arm 2d about the axis J4. The fifth arm 2f rotates with respect to the fourth arm 2e about the axis J5. The flange portion 2g rotates with respect to the fifth arm 2f about the axis J6. The flange portion 2g has a mechanism for attaching and fixing various tools at the end opposite to the side connected to the fifth arm 2f. The tool is selected according to the work content of the robot 2, and is, for example, an instrument for gripping a workpiece or a tool such as a grinder for polishing a workpiece.
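For given joint angles, the position of the flange portion 2g follows from chaining the rotations about the axes J1 to J6. The following is a toy forward-kinematics sketch with illustrative axis directions and link offsets, not the geometry of any particular robot:

```python
import numpy as np

def rot(axis, theta):
    """Rotation matrix for angle `theta` about unit vector `axis` (Rodrigues' formula)."""
    a = np.asarray(axis, float)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def flange_position(joint_angles, axes, links):
    """Chain rotations about axes J1..J6: each link offset is rotated by all
    joints preceding it. `axes` are unit vectors and `links` are the link
    offsets in the zero pose; both are illustrative placeholders."""
    R = np.eye(3)
    pos = np.zeros(3)
    for theta, axis, link in zip(joint_angles, axes, links):
        R = R @ rot(axis, theta)
        pos = pos + R @ np.asarray(link, float)
    return pos
```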
 図3は、ロボット2のフランジ部2gへのツール8の取り付け例を示す図である。
 図3Aは、フランジ部2gにツール8として、ワークを把持する器具8aを取り付けた場合を示す図である。図3Bは、フランジ部2gにツール8として、グラインダ8bを取り付けた場合を示す図である。図3Aおよび図3Bで示した点Aは、ツール8に応じて設定された制御点を示している(以下、制御点Aという)。制御点Aは、オペレータが、出力装置5に表示された仮想空間内でロボット2を動かす場合に、表示されたロボット2上またはロボット2近傍のどの位置を操作すべきかを示すものである。制御点Aは、出力装置5に表示された仮想空間内に例えば円形または矩形の形状で表示される。ツール8が、図3Aで示したワークを把持する器具8aの場合、制御点Aは器具8aがワークを把持する点に設定される。ツール8が、図3Bで示したグラインダ8bの場合、制御点Aはグラインダ8bの研磨点に設定される。図3Bの例のように、ツール8がワークに対して加工を行う工具である場合、制御点Aは当該工具の加工点または加工領域に設定される。
FIG. 3 is a view showing an example of attachment of the tool 8 to the flange portion 2g of the robot 2.
FIG. 3A is a view showing a case where an instrument 8a for gripping a workpiece is attached to the flange portion 2g as the tool 8. FIG. 3B is a view showing a case where a grinder 8b is attached to the flange portion 2g as the tool 8. The point A shown in FIGS. 3A and 3B indicates a control point set according to the tool 8 (hereinafter referred to as the control point A). The control point A indicates which position on or near the displayed robot 2 the operator should manipulate when moving the robot 2 in the virtual space displayed on the output device 5. The control point A is displayed in the virtual space in, for example, a circular or rectangular shape. When the tool 8 is the instrument 8a for gripping a workpiece shown in FIG. 3A, the control point A is set to the point at which the instrument 8a grips the workpiece. When the tool 8 is the grinder 8b shown in FIG. 3B, the control point A is set to the grinding point of the grinder 8b. As in the example of FIG. 3B, when the tool 8 is a tool that machines a workpiece, the control point A is set to the machining point or machining area of that tool.
Next, the internal configuration of the display control device 1 will be described.
FIG. 4 is a block diagram showing the configuration of the display control device 1 according to the first embodiment.
The display control device 1 includes a three-dimensional data acquisition unit 101, an operation information acquisition unit 102, an information processing unit 103, and an output control unit 104. Further, the display control device 1 shown in FIG. 4 is connected to the peripheral information acquisition device 3, the input device 6 and the output device 5.
The three-dimensional data acquisition unit 101 acquires three-dimensional data of the robot 2 and of the structures from the peripheral information acquisition device 3, and outputs the acquired three-dimensional data to the information processing unit 103. The operation information acquisition unit 102 acquires the operator's operation information via the input device 6, and outputs the acquired operation information to the information processing unit 103.
The information processing unit 103 acquires the position coordinates of the control point A of the robot 2 in the virtual space from the three-dimensional data input from the three-dimensional data acquisition unit 101. From the acquired position coordinates of the control point A and the three-dimensional data of the structures, the information processing unit 103 calculates the position, direction, and other properties of the aiming coordinate axes to be set in the virtual space, and generates three-dimensional data of the aiming coordinate axes. The aiming coordinate axes consist of three axes: a straight line X (second straight line), a straight line Y (third straight line), and a straight line Z (first straight line). The aiming coordinate axes are operation support information that the operator refers to when creating teaching data for the robot 2. Details of the aiming coordinate axes are described later.
Further, according to the operation information acquired by the operation information acquisition unit 102, the information processing unit 103 generates three-dimensional data of the robot 2 and of the aiming coordinate axes for the case where the robot 2 in the virtual space is driven so as to follow the operator's operation input. It is assumed that the information processing unit 103 has acquired in advance, by an appropriate method, the specification data of the robot 2 needed to calculate the movement of the robot 2 in the virtual space. The information processing unit 103 outputs the three-dimensional data of the structures, of the robot 2, and of the aiming coordinate axes to the output control unit 104.
The output control unit 104 generates control information for displaying the structures, the robot 2, and the aiming coordinate axes, based on the three-dimensional data of the structures, of the robot 2, and of the aiming coordinate axes acquired from the information processing unit 103. The control information is information for causing the output device 5 to display an image of the structures, the robot 2, and the aiming coordinate axes in the virtual space as viewed from a specific position in the virtual space. The specific position in the virtual space is, for example, the position in the virtual space corresponding to the position of the user's viewpoint in the real environment. The output control unit 104 outputs the generated control information to the output device 5.
Next, a hardware configuration example of the display control device 1 will be described.
FIGS. 5A and 5B are diagrams showing examples of the hardware configuration of the display control device 1 according to the first embodiment.
The functions of the three-dimensional data acquisition unit 101, the operation information acquisition unit 102, the information processing unit 103, and the output control unit 104 in the display control device 1 are realized by a processing circuit. That is, these units include a processing circuit for realizing the respective functions. The processing circuit may be the processing circuit 1a, which is dedicated hardware, as shown in FIG. 5A, or the processor 1b, which executes a program stored in the memory 1c, as shown in FIG. 5B.
As shown in FIG. 5A, when the three-dimensional data acquisition unit 101, the operation information acquisition unit 102, the information processing unit 103, and the output control unit 104 are dedicated hardware, the processing circuit 1a corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these. The function of each of these units may be realized by its own processing circuit, or the functions of the units may be realized collectively by a single processing circuit.
As shown in FIG. 5B, when the three-dimensional data acquisition unit 101, the operation information acquisition unit 102, the information processing unit 103, and the output control unit 104 are realized by the processor 1b, the function of each unit is realized by software, firmware, or a combination of software and firmware. The software or firmware is described as a program and stored in the memory 1c. The processor 1b realizes the functions of the three-dimensional data acquisition unit 101, the operation information acquisition unit 102, the information processing unit 103, and the output control unit 104 by reading and executing the programs stored in the memory 1c. That is, the display control device 1 includes the memory 1c for storing programs whose execution by the processor 1b results in the steps shown in FIGS. 8 to 10, described later, being performed. It can also be said that these programs cause a computer to execute the procedures or methods of the three-dimensional data acquisition unit 101, the operation information acquisition unit 102, the information processing unit 103, and the output control unit 104.
Here, the processor 1 b is, for example, a central processing unit (CPU), a processing device, an arithmetic device, a processor, a microprocessor, a microcomputer, or a digital signal processor (DSP).
The memory 1c may be, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable ROM), or an EEPROM (Electrically EPROM); a magnetic disk such as a hard disk or a flexible disk; or an optical disc such as a mini disc, a CD (Compact Disc), or a DVD (Digital Versatile Disc).
The functions of the three-dimensional data acquisition unit 101, the operation information acquisition unit 102, the information processing unit 103, and the output control unit 104 may be realized partly by dedicated hardware and partly by software or firmware. In this way, the processing circuit in the display control device 1 can realize each of the functions described above by hardware, software, firmware, or a combination thereof.
Next, the aiming coordinate axes for which the information processing unit 103 generates three-dimensional data will be described with reference to FIGS. 6A, 6B, 6C, 7A, 7B, and 7C.
FIGS. 6A to 6C are diagrams showing setting examples of the aiming coordinate axes by the display control device 1 according to the first embodiment.
Specifically, FIGS. 6A to 6C show examples in which the display control device 1 performs control to display the robot 2, the tool 8, and the installation surface B of the robot 2, and further performs control to display the control point A and the aiming coordinate axes.
In FIGS. 6A to 6C, the aiming coordinate axes are coordinate axes constituted by a straight line Z passing through the control point A and a straight line X and a straight line Y orthogonal to each other at an aiming point C on the straight line Z. The aiming point C is the point at which the straight line Z intersects the surface of a structure in the virtual space. Therefore, when no point exists at which the straight line Z intersects a structure, the information processing unit 103 determines that the aiming point C does not exist and generates three-dimensional data of the straight line Z only.
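As a concrete illustration of how such an aiming point can be obtained, the sketch below intersects the straight line Z, cast from the control point A, with a single planar structure surface. The helper name `aim_point_on_plane` is hypothetical, and a real implementation would test every structure surface in the three-dimensional data rather than one plane.

```python
import numpy as np

def aim_point_on_plane(control_point, z_dir, plane_point, plane_normal, eps=1e-9):
    """Intersect the straight line Z (through control point A with direction
    z_dir) with a planar structure surface. Returns None when the line is
    parallel to the plane or the surface lies behind the tool direction,
    i.e. no aiming point C exists on this surface."""
    a = np.asarray(control_point, dtype=float)
    d = np.asarray(z_dir, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    denom = np.dot(n, d)
    if abs(denom) < eps:
        return None
    t = np.dot(n, np.asarray(plane_point, dtype=float) - a) / denom
    if t < 0.0:
        return None
    return a + t * d

# Line Z pointing straight down from A onto the installation surface B (z = 0):
C = aim_point_on_plane((0.4, 0.2, 0.5), (0.0, 0.0, -1.0), (0, 0, 0), (0, 0, 1))
# C == array([0.4, 0.2, 0. ])
```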
The straight line X and the straight line Y in the present invention include not only straight lines in the general sense (hereinafter referred to as true straight lines) but also lines that follow the surface shape of the structure containing the aiming point C and that, when viewed from one specific direction, appear straight and orthogonal to each other. That is, when the straight line X and the straight line Y are lines that follow the surface shape of a structure in this way, their three-dimensional shapes depend on the surface shape of that structure and are therefore not necessarily true straight lines. Similarly, in that case the angle at which the straight line X and the straight line Y intersect in the three-dimensional virtual space depends on the surface shape of the structure and is not necessarily a right angle. The specific direction is, for example, the direction of the z-axis of the global coordinates or the local coordinates described later.
The direction of the straight line Z is set according to the tool 8 attached to the flange portion 2g. When the tool 8 is, for example, the instrument 8a for gripping a workpiece shown in FIG. 3A, the straight line Z is, as shown in FIGS. 6A and 6B, the straight line that passes through the control point A, is orthogonal to the opening and closing direction of the gripping portion of the instrument 8a indicated by the arrow 8a1, and is parallel to the direction in which the gripping portion extends (for example, the direction of the axis J6 in FIG. 2). When the tool 8 is, for example, the grinder 8b shown in FIG. 3B, the straight line Z is, as shown in FIG. 6C, the straight line that passes through the control point A in the same direction as the direction in which the grinder 8b is pressed against a workpiece or the like, indicated by the arrow 8b1. In this way, the direction of the straight line Z is set as appropriate based on, for example, what kind of work the tool 8 attached to the flange portion 2g performs on the workpiece.
The directions of the straight line X and the straight line Y as viewed from the direction of the straight line Z are set based on global coordinates or local coordinates. Global coordinates are coordinates set with respect to the three-dimensional space itself; they belong to the body of the robot 2 fixed in the three-dimensional space or to some structure fixed in the three-dimensional space. Local coordinates are coordinates referenced to an object that can move in the three-dimensional space, for example coordinates set with reference to the tool 8 attached to the flange portion 2g of the robot 2. FIG. 6A shows aiming coordinate axes whose straight lines X and Y have directions set based on global coordinates, and FIGS. 6B and 6C show aiming coordinate axes whose straight lines X and Y have directions set based on local coordinates.
FIG. 6A shows an example in which the x-, y-, and z-axes belonging to the robot 2 are used as the global coordinates, and the directions of the straight lines X and Y of the aiming coordinate axes are set based on these global coordinates. The z-axis of the global coordinates is the coordinate axis whose position and direction coincide with the axis J1 of the robot 2, and the intersection of this z-axis with the installation surface B of the robot 2 is the origin O. The y-axis of the global coordinates is the coordinate axis that passes through the origin O, is parallel to the installation surface B of the robot 2, and is parallel to the predetermined front direction of the robot 2. The x-axis of the global coordinates is the coordinate axis orthogonal to the z-axis and the y-axis at the origin O. The straight lines X and Y of the aiming coordinate axes are set to coincide with the axial directions of the x-axis and the y-axis of the global coordinates, respectively.
FIG. 6B shows an example in which the x-, y-, and z-axes set on the instrument 8a attached to the flange portion 2g are used as the local coordinates, and the directions of the straight lines X and Y of the aiming coordinate axes are set based on these local coordinates. The z-axis of the local coordinates is the coordinate axis that is orthogonal to the opening and closing direction of the gripping portion of the instrument 8a (see the arrow 8a1), is parallel to the direction in which the gripping portion extends (the direction of the axis J6 in FIG. 2), and passes through the control point A. The origin in this case is the control point A. The x-axis of the local coordinates is the coordinate axis in the same direction as the opening and closing direction of the gripping portion of the instrument 8a (see the arrow 8a1) and passing through the control point A. The y-axis of the local coordinates is the coordinate axis orthogonal to the x-axis and the z-axis at the control point A. The straight lines X and Y of the aiming coordinate axes are set to coincide with the axial directions of the x-axis and the y-axis of the local coordinates, respectively.
FIG. 6C shows an example in which the x-, y-, and z-axes set on the grinder 8b attached to the flange portion 2g are used as the local coordinates, and the straight lines X and Y of the aiming coordinate axes are set based on these local coordinates. The z-axis of the local coordinates is the coordinate axis that extends in the direction in which the grinder 8b is pressed against a workpiece or the like (see the arrow 8b1) and passes through the control point A. The x-axis of the local coordinates is the coordinate axis passing through the control point A and orthogonal to the z-axis. The y-axis of the local coordinates is the coordinate axis orthogonal to the x-axis and the z-axis at the control point A. The straight lines X and Y of the aiming coordinate axes are set to coincide with the axial directions of the x-axis and the y-axis of the local coordinates, respectively.
FIGS. 6A to 6C show cases where the directions of the straight lines X and Y of the aiming coordinate axes coincide with the axial directions of the x-axis and the y-axis of the global or local coordinates. However, the axial directions of the straight lines X and Y need not coincide with those of the x-axis and the y-axis of the global or local coordinates. That is, the directions of the straight lines X and Y may be set to directions obtained by rotating them, from the state in which they coincide with the axial directions of the x-axis and the y-axis of the global or local coordinates, by an arbitrary angle about the straight line Z as the rotation axis.
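This optional rotation of the straight lines X and Y about the straight line Z can be expressed as a single axis-angle rotation applied to both direction vectors. A minimal sketch, assuming the three directions are unit vectors:

```python
import numpy as np

def rotate_xy_about_z(x_dir, y_dir, z_dir, angle):
    """Rotate the directions of straight lines X and Y about straight line Z
    by an arbitrary angle; they remain orthogonal to each other and to Z."""
    z = np.asarray(z_dir, dtype=float)
    z = z / np.linalg.norm(z)
    K = np.array([[0, -z[2], z[1]],
                  [z[2], 0, -z[0]],
                  [-z[1], z[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    return R @ np.asarray(x_dir, dtype=float), R @ np.asarray(y_dir, dtype=float)
```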
FIGS. 7A to 7C are diagrams showing display examples of the aiming coordinate axes by the display control device 1 according to the first embodiment.
FIGS. 7A and 7B show display examples of the aiming coordinate axes on the output device 5. The information processing unit 103 calculates the direction of the straight line Z in the virtual space and the aiming point C on the straight line Z, using the position information of the control point A of the robot 2 in the virtual space and the three-dimensional data of the structures. The information processing unit 103 also calculates, for the straight lines X and Y, the directions in which they extend in the virtual space, their shapes, and so on. Furthermore, the information processing unit 103 generates three-dimensional data of the calculated aiming coordinate axes with reference to the three-dimensional data of the structures. As a result, as shown in FIGS. 7A and 7B, the straight lines X and Y of the aiming coordinate axes are displayed on the output device 5 so as to follow the surface shape of the workpiece Da or the structure Db present in the virtual space and the surface shape of the installation surface B of the robot 2.
The straight lines X and Y need not follow the surface shape of the structure. FIG. 7C shows a display example in which the straight lines X and Y of the aiming coordinate axes are true straight lines. In that case, as shown in FIG. 7C, both lines of the aiming coordinate axes are displayed on the output device 5 as true straight lines passing through the aiming point C.
The display control device 1 may be configured so that the operator can select whether the straight lines X and Y of the aiming coordinate axes are displayed as shapes following the structure or as true straight lines. The display control device 1 can thereby perform display control of the aiming coordinate axes according to the display method selected by the operator.
In the description above, the direction of the straight line Z is set as appropriate based on, for example, what kind of work the tool 8 attached to the flange portion 2g of the robot 2 performs on the workpiece. However, the method of setting the straight line Z is not limited to this; the straight line Z may be set as a straight line extending vertically downward or vertically upward from the control point A, or as a straight line extending from the control point A in an arbitrarily set direction.
Next, the operation of the display control device 1 will be described with reference to the flowchart of FIG. 8.
FIG. 8 is a flowchart showing the operation of the display control device 1 according to the first embodiment.
The three-dimensional data acquisition unit 101 acquires three-dimensional data of the robot 2 and the structure from the peripheral information acquisition device 3 (step ST1). The three-dimensional data acquisition unit 101 outputs the acquired three-dimensional data to the information processing unit 103. The information processing unit 103 calculates the position, the direction, and the like of the aiming coordinate axis based on the three-dimensional data acquired in step ST1 (step ST2). The information processing unit 103 generates three-dimensional data of the aiming coordinate axis based on the calculated position, direction, and the like of the aiming coordinate axis (step ST3). The information processing unit 103 outputs the three-dimensional data generated in step ST3 and the three-dimensional data of the robot 2 and the structure to the output control unit 104.
The output control unit 104 generates control information for displaying the structures, the robot 2, and the aiming coordinate axes in the virtual space, based on the three-dimensional data input from the information processing unit 103 (step ST4). The output control unit 104 outputs the control information generated in step ST4 to the output device 5 (step ST5). Subsequently, the information processing unit 103 determines whether operation information has been input from the operation information acquisition unit 102 (step ST6). When operation information has been input (step ST6; YES), the information processing unit 103 corrects the previously generated three-dimensional data of the robot 2 and of the aiming coordinate axes based on the position information indicated by the input operation information (step ST7). The information processing unit 103 outputs the three-dimensional data corrected in step ST7 to the output control unit 104.
The output control unit 104 generates control information for displaying the robot 2 and the aiming coordinate axes in the virtual space, based on the three-dimensional data corrected in step ST7 (step ST8). The output control unit 104 outputs the control information generated in step ST8 to the output device 5 (step ST9). The flowchart then returns to the processing of step ST6. On the other hand, when no operation information has been input (step ST6; NO), it is determined whether a predetermined time has elapsed since the previous operation information was input (step ST10). When the predetermined time has elapsed (step ST10; YES), the processing ends. When the predetermined time has not elapsed (step ST10; NO), the flowchart returns to the determination processing of step ST6.
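The control flow of FIG. 8 can be summarised as an event loop that redraws the scene whenever operation information arrives and terminates after a period of inactivity. The sketch below is schematic only: the object names, their methods, and the timeout value are assumptions, not part of this embodiment.

```python
import time

OPERATION_TIMEOUT_S = 60.0  # the "predetermined time" of step ST10; value assumed

def run_display_loop(acquirer, processor, output_ctrl, op_source):
    scene = acquirer.get_3d_data()               # ST1: robot 2 and structures
    axes = processor.compute_aiming_axes(scene)  # ST2-ST3: aiming coordinate axes
    output_ctrl.show(scene, axes)                # ST4-ST5: generate and output
    last_op = time.monotonic()
    while True:
        op = op_source.poll()                    # ST6: operation info input?
        if op is None:
            if time.monotonic() - last_op > OPERATION_TIMEOUT_S:
                break                            # ST10: timeout elapsed, end
            continue
        last_op = time.monotonic()
        scene, axes = processor.apply_operation(scene, axes, op)  # ST7
        output_ctrl.show(scene, axes)            # ST8-ST9: redisplay
```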
Next, the details of the process of step ST3 of the flowchart of FIG. 8 will be described with reference to the flowchart of FIG.
FIG. 9 is a flowchart showing an operation of the information processing unit 103 of the display control device 1 according to the first embodiment.
In the following description, it is assumed that the information processing unit 103 recognizes in advance the type of the tool 8 attached to the flange portion 2g. It is also assumed that whether the straight lines X and Y of the aiming coordinate axes are set based on global coordinates or local coordinates is configured in the information processing unit 103 in advance. Furthermore, it is assumed that whether the aiming coordinate axes are displayed as lines following the surface shape of the structure or as true straight lines is configured in the information processing unit 103 in advance.
When the three-dimensional data of the robot 2 and of the structures have been generated in step ST2, the information processing unit 103 determines, according to the type of the tool 8 attached to the flange portion 2g, the control point A of the tool 8 and the straight line Z passing through the control point A, and generates their three-dimensional data (step ST11). The information processing unit 103 determines the control point A and the straight line Z by referring to, for example, a database (not shown) that stores each type of tool 8 in association with information indicating the position of the control point A and the direction of the straight line Z. Here, the information indicating the direction of the straight line Z is, for example, when the tool 8 is the instrument 8a, information indicating that the straight line Z is the straight line that is orthogonal to the opening and closing direction of the gripping portion of the instrument 8a and passes through the control point A.
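Such a database can be as simple as a lookup table keyed by tool type, with each entry holding the control point offset and the direction of the straight line Z in the flange frame. The entries and names below are hypothetical placeholders, not data from this embodiment:

```python
import numpy as np

# Hypothetical entries; the offsets and directions are illustrative only.
TOOL_DATABASE = {
    "gripper_8a": {"control_offset": (0.0, 0.0, 0.12), "z_dir": (0.0, 0.0, 1.0)},
    "grinder_8b": {"control_offset": (0.0, 0.05, 0.08), "z_dir": (0.0, 1.0, 0.0)},
}

def control_point_and_line_z(tool_type, flange_R, flange_p):
    """Step ST11 sketch: look up control point A and the direction of straight
    line Z for the tool attached to the flange portion 2g, expressed in the
    base frame (flange_R, flange_p are the flange orientation and position)."""
    entry = TOOL_DATABASE[tool_type]
    control_point = flange_p + flange_R @ np.asarray(entry["control_offset"])
    z_dir = flange_R @ np.asarray(entry["z_dir"])
    return control_point, z_dir
```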
The information processing unit 103 refers to the three-dimensional data of the straight line Z determined in step ST11 and the three-dimensional data of the structures, and determines whether there is a point in the virtual space at which the straight line Z intersects the surface of a structure (step ST12). When no such point exists (step ST12; NO), the information processing unit 103 generates the three-dimensional data of the control point A (step ST13) and proceeds to the processing of step ST21.
On the other hand, when there is a point at which the straight line Z intersects the surface of the structure (step ST12; YES), the information processing unit 103 calculates all points at which the straight line Z intersects the surface of the structure (step ST14).
For example, when the workpiece Da is placed on the installation surface B of the robot 2 as shown in FIG. 7A, in the processing of step ST14 the information processing unit 103 calculates the point at which the straight line Z intersects the surface of the workpiece Da, the point at which the straight line Z intersects the installation surface B of the robot 2, and so on.
The information processing unit 103 calculates the distance from each point calculated in step ST14 to the control point A, and sets the point with the shortest calculated distance as the aiming point C (step ST15). Using the global coordinates or the local coordinates, the information processing unit 103 determines the directions of the straight lines X and Y that are orthogonal to each other at the aiming point C set in step ST15 (step ST16). The information processing unit 103 then refers to the preset condition and determines whether the straight lines X and Y are to be lines following the surface shape of the structure (step ST17). When the straight lines X and Y are to follow the surface shape of the structure (step ST17; YES), the information processing unit 103 refers to the three-dimensional data of the structure and generates three-dimensional data of aiming coordinate axes in which the straight line Z determined in step ST11 is given the length up to the aiming point C and the straight lines X and Y whose directions were determined in step ST16 are lines following the surface shape of the structure (step ST18).
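Steps ST14 and ST15 amount to collecting every surface intersection along the straight line Z and keeping the one nearest the control point A. A minimal sketch, assuming the intersection points have already been computed:

```python
import numpy as np

def select_aiming_point(control_point, intersections):
    """ST15: from all points where straight line Z crosses structure surfaces
    (ST14), choose the one closest to control point A as aiming point C.
    Returns None when there is no intersection (no aiming point C)."""
    if len(intersections) == 0:
        return None
    pts = np.asarray(intersections, dtype=float)
    dists = np.linalg.norm(pts - np.asarray(control_point, dtype=float), axis=1)
    return pts[np.argmin(dists)]
```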
On the other hand, when the straight lines X and Y are not to follow the surface shape of the structure (step ST17; NO), that is, when they are to be true straight lines, the information processing unit 103 generates three-dimensional data of aiming coordinate axes in which the straight line Z determined in step ST11 is given the length up to the aiming point C and the straight lines X and Y whose directions were determined in step ST16 are true straight lines (step ST19). The information processing unit 103 generates the three-dimensional data of the control point A (step ST20). The information processing unit 103 outputs to the output control unit 104 either the three-dimensional data of the aiming coordinate axes and the control point A generated in steps ST18 to ST20, or the three-dimensional data of the straight line Z generated in step ST11 and the three-dimensional data of the control point A generated in step ST13 (step ST21). The flowchart then proceeds to the processing of step ST4 of the flowchart of FIG. 8.
Next, the details of the process of step ST7 of the flowchart of FIG. 8 will be described with reference to the flowchart of FIG.
FIG. 10 is a flowchart showing an operation of the information processing unit 103 of the display control device 1 according to the first embodiment.
In the following, as an operation for moving the robot 2 displayed in the virtual space, a case where an operation for moving the pointer displayed superimposed on the control point A is input will be described as an example.
When operation information for moving the pointer is input (step ST6; YES), the information processing unit 103 refers to the input operation information and the three-dimensional data input from the three-dimensional data acquisition unit 101, and acquires the position information of the pointer in the virtual space (step ST31). From the position information of the previous control point A stored in a buffer or the like and the position information of the pointer acquired in step ST31, the information processing unit 103 calculates the movement direction and movement amount of the pointer along the straight lines X, Y, and Z (step ST32).
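Step ST32 is essentially a change of basis: the pointer's displacement from the previous control point A is projected onto the directions of the straight lines X, Y, and Z. A sketch, assuming the three directions form an orthonormal basis:

```python
import numpy as np

def pointer_delta_on_axes(prev_control_point, pointer_pos, x_dir, y_dir, z_dir):
    """ST32: decompose the pointer displacement into movement amounts along
    the straight lines X, Y, and Z (signed distances along each direction)."""
    delta = np.asarray(pointer_pos, dtype=float) - np.asarray(prev_control_point, dtype=float)
    return {"X": float(np.dot(delta, x_dir)),
            "Y": float(np.dot(delta, y_dir)),
            "Z": float(np.dot(delta, z_dir))}
```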
The information processing unit 103 moves the previously calculated aiming coordinate axes using the movement direction and movement amount calculated in step ST32, calculates the position, direction, and other properties of the new aiming coordinate axes, and generates three-dimensional data of the new aiming coordinate axes (step ST33). When the straight lines X and Y are set to be lines following the surface shape of the structure, the information processing unit 103 also refers to the three-dimensional data of the structure and generates three-dimensional data in which the straight lines X and Y of the aiming coordinate axes follow the surface shape of the structure. The information processing unit 103 then generates the three-dimensional data of the new control point A (step ST34).
In parallel with the processing of steps ST32 to ST34, the information processing unit 103 performs the following processing of steps ST35 and ST36. The information processing unit 103 calculates the movement direction and movement amount of the robot 2 from the position information of the pointer acquired in step ST31 and the three-dimensional data of the robot 2 (step ST35). Using the movement direction and movement amount of the robot 2 calculated in step ST35, the information processing unit 103 generates three-dimensional data of the robot 2 after the movement (step ST36). The information processing unit 103 then outputs to the output control unit 104 the three-dimensional data of the new aiming coordinate axes generated in step ST33, the three-dimensional data of the new control point A generated in step ST34, and the three-dimensional data of the moved robot 2 generated in step ST36 (step ST37). The flowchart then proceeds to the processing of step ST8 of the flowchart of FIG. 8.
The operator places the cursor on the control point A to put it in a selected state, and moves it to a target position with a jog lever, operation keys, a mouse, or the like. When the teaching device 4 is a tablet terminal, the operator places a finger on the control point A displayed on the touch panel to put it in a selected state, and moves the finger to the target position. When the operator selects the control point A, the output control unit 104 performs display control that indicates the control point A is being selected, using highlighting such as enlarging the shape of the control point A or changing its color. The output control unit 104 may also perform control to output a sound effect when the control point A is selected or while it is being moved. As the control point A moves, the information processing unit 103 performs the processing shown in the flowchart of FIG. 10, thereby changing the robot shape interactively. While referring to the aiming coordinate axes, the operator moves the control point A to drive the robot 2 in the virtual space and registers the passing points of the robot 2 and the like.
The output control unit 104 may apply a display method such as that shown in FIG. 11 to aiming coordinate axes whose straight lines X and Y follow the surface shape of the structure, or to aiming coordinate axes whose straight lines X and Y are true straight lines.
FIG. 11 is a view showing another display example of the aiming coordinate axes of the display control device 1 according to the first embodiment.
FIG. 11A shows a case where the output control unit 104 performs display control in which the lines of the aiming coordinate axes in directions in which the drive of the robot 2 is restricted are indicated by broken lines. When the operator operates a jog lever to drive the robot 2 displayed on the output device 5, the display control device 1 may restrict the drive direction of the robot 2 to one direction before accepting operations on the jog lever, so that linear driving of the robot 2 in only one direction is easy to recognize. In that case, the output control unit 104 generates control information that displays, among the straight lines X, Y, and Z, the lines in directions in which the drive of the robot is restricted as broken lines, hidden, or semi-transparent. The display control device 1 can thereby perform display control that clearly indicates to the operator the directions in which the robot 2 can be driven. By operating the jog lever while referring to the display shown in FIG. 11A, the operator can operate the robot 2 while intuitively recognizing the directions in which it can be driven.
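Choosing a display form per line then reduces to mapping each axis either to a solid style or to the restricted style. A trivial sketch with assumed style names:

```python
def axis_display_styles(restricted_axes, restricted_style="dashed"):
    """Return a display form per aiming-coordinate-axis line: lines whose
    drive direction is restricted are drawn with restricted_style
    ("dashed", "hidden", or "translucent"); the rest stay solid."""
    return {axis: (restricted_style if axis in restricted_axes else "solid")
            for axis in ("X", "Y", "Z")}

# Drive limited to the Z direction only:
# axis_display_styles({"X", "Y"}) -> {"X": "dashed", "Y": "dashed", "Z": "solid"}
```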
FIG. 11B illustrates a display example in the case where the output control unit 104 performs control of displaying the coordinate value of the control point A and the coordinate value of the aiming point C in addition to the display of the aiming coordinate axis.
The output control unit 104 acquires the coordinate values of the control point A and the aiming point C from the three-dimensional data input from the information processing unit 103. The output control unit 104 generates control information for displaying a display area Ea and a display area Eb showing the acquired coordinate values in the vicinity of the control point A and the aiming point C. The output control unit 104 may also generate control information that displays the numerical coordinate values of the control point A and the aiming point C so that they can be edited. In this case, the operator selects the display area whose coordinate values are to be corrected and inputs new coordinate values. When the operation information acquisition unit 102 acquires operation information correcting a coordinate value, the information processing unit 103 generates, based on that operation information, three-dimensional data of the new aiming coordinate axes, the new control point A, and the robot 2 after being driven. The operator can thereby drive the robot 2 by inputting coordinate values.
As described above, according to the first embodiment, the display control device includes the information processing unit 103, which generates, based on the three-dimensional data of the robot 2 and the three-dimensional data of the structures, three-dimensional data of aiming coordinate axes including the straight line Z passing through the control point A set on the robot 2 within the virtual space defined by the three-dimensional data of the structures, and the output control unit 104, which generates control information for displaying the aiming coordinate axes on the output device 5 based on the three-dimensional data of the robot 2, of the structures, and of the aiming coordinate axes generated by the information processing unit 103. It is therefore possible to present information with which the relationship between the current position of the device to be operated and the target position to which the device is to be moved can be grasped visually.
Further, according to the first embodiment, the aiming coordinate axes are constituted by the straight line Z together with the straight lines X and Y, each passing through the aiming point C located on the straight line Z, so it is possible to present information from which the space in the depth direction of the displayed image can also be grasped easily.
Further, according to the first embodiment, the straight line Z is a straight line extending in a direction set according to the tool 8 attached to the robot 2, so the lines of the aiming coordinate axes can be determined according to what operation the tool attached to the robot performs on the workpiece or what machining it performs on the workpiece. Aiming coordinate axes suited to the work performed by the robot can thereby be displayed.
Further, according to the first embodiment, when the straight line Z is a straight line extending in the vertical direction, the axial direction of the straight line Z of the aiming coordinate axes is always constant, and the operator can operate the robot while easily aligning the up-down direction relative to himself or herself with the up-down direction relative to the robot.
Further, according to the first embodiment, the aiming point C is the point at which the first straight line intersects the surface of a structure in the virtual space, so the aiming point is a point on the structure, and the point that the operator should aim for in the operation can be presented.
Further, according to the first embodiment, the straight lines X and Y are lines that follow the surface shape of the structure and that appear straight and mutually orthogonal when viewed from one specific direction, so coordinate axes that move in conformity with the shape of the structure can be presented.
Further, according to the first embodiment, the straight lines X and Y may be configured as lines extending straight in the virtual space, so coordinate axes for moving the robot without causing it to collide with the structure can be presented.
Further, according to the first embodiment, the output control unit 104 is configured to generate control information for displaying, with a changed display form, the lines among the straight lines Z, X, and Y that extend in the same directions as the directions in which the drive of the robot 2 is restricted, so the operator can operate the robot while intuitively recognizing the directions in which it can be driven.
Further, according to the first embodiment, the output control unit 104 is configured to generate control information for displaying the coordinate values of the control point A and the aiming point C, so the operator can perform operations while checking the coordinate values.
Second Embodiment
The second embodiment shows a configuration in which the aiming coordinate axes are displayed in consideration of the drive limit of the robot 2.
FIG. 12 is a block diagram showing a configuration of a display control device 1A according to the second embodiment.
The display control device 1A is configured by adding a drive area setting unit 105 to the information processing unit 103 of the display control device 1 of the first embodiment shown in FIG. 4. In the following, parts identical or corresponding to the components of the display control device 1 according to the first embodiment are given the same reference signs as in the first embodiment, and their description is omitted or simplified.
The drive area setting unit 105 acquires information indicating the limit positions to which the robot 2 can be driven (hereinafter referred to as drive limit information). The drive area setting unit 105 may acquire the drive limit information from a storage area (not shown) in the display control device 1A or from outside the display control device 1A. Based on the acquired drive limit information, the drive area setting unit 105 sets, in the virtual space, an area in which the robot 2 can be driven (hereinafter referred to as the drivable area). The information processing unit 103a generates three-dimensional data of the aiming coordinate axes based on the drivable area set by the drive area setting unit 105, and outputs the corrected three-dimensional data of the aiming coordinate axes, the control point A, the robot 2, and the structures to the output control unit 104.
Next, a hardware configuration example of the display control device 1A will be described. Description of configurations identical to those of the first embodiment is omitted.
The information processing unit 103a and the drive area setting unit 105 in the display control device 1A are realized by the processing circuit 1a shown in FIG. 5A, or by the processor 1b executing a program stored in the memory 1c shown in FIG. 5B.
FIGS. 13 and 14 are diagrams showing the drivable area of the robot 2 and the aiming coordinate axes in the display control device 1A according to the second embodiment.
The drivable area F is the region enclosed by the first curved surface G, the second curved surface H, and a partial surface Ba of the installation surface B of the robot 2. The first curved surface G is a surface indicated by the drive limit information and is the curved surface indicating the outermost range in which the robot 2 can be driven. The second curved surface H is a surface indicated by the drive limit information and is the curved surface indicating the innermost range in which the robot 2 can be driven. The control point A of the robot 2 cannot be located outside the first curved surface G, that is, in the region away from the robot 2, or inside the second curved surface H, that is, in the region on the side where the mounting portion 2a of the robot 2 is located. The surface Ba is the region of the installation surface B inside the first circle Ga, formed where the first curved surface G intersects the installation surface B of the robot 2, and outside the second circle Ha, formed where the second curved surface H intersects the installation surface B of the robot 2.
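A membership test for the drivable area F can be sketched by approximating the first curved surface G and the second curved surface H as spheres about the robot base. This is a simplifying assumption made only for illustration, and the radii below are placeholders rather than the limits of any particular robot.

```python
import numpy as np

def in_drivable_area(point, base=(0.0, 0.0, 0.0), r_outer=0.9, r_inner=0.2):
    """Approximate test for the drivable area F: above the installation
    surface B (z >= 0), inside the outer limit surface G (radius r_outer)
    and outside the inner limit surface H (radius r_inner)."""
    p = np.asarray(point, dtype=float)
    if p[2] < 0.0:
        return False                      # below the installation surface B
    d = np.linalg.norm(p - np.asarray(base, dtype=float))
    return r_inner <= d <= r_outer        # between surfaces H and G
```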
The drive area setting unit 105 generates three-dimensional data indicating the drivable area F in the virtual space from the drive limit information described above and the three-dimensional data of the structure. The information processing unit 103 a is configured to generate a three-dimensional aiming coordinate axis (see FIG. 6) including the straight line X, the straight line Y, and the straight line Z which has been generated based on the three-dimensional data of the drivable area F generated by the drive area setting unit 105 The data is corrected to three-dimensional data of a aiming coordinate axis composed of a line segment Xa, a line segment Ya and a line segment Za in the drivable area F (see FIG. 13A).
As shown in FIG. 13A, the display control device 1A displays only the aiming coordinate axes in the drivable area F, whereby the robot 2 is driven by the straight line lengths of the line segment Xa, the line segment Ya, and the line segment Za. The possible range can be displayed.
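As an illustration only (the embodiment prescribes no algorithm), the following sketch shows one way the straight lines X, Y, and Z could be clipped to the drivable area F. It assumes the limit surfaces G and H are spheres of radii r_outer and r_inner centered on the robot base, that the installation surface B is the plane z = 0, and that the axis direction d is a unit vector; all names are illustrative.

```python
import numpy as np

def clip_axis_to_drivable_area(p, d, base, r_outer, r_inner, n=2001):
    """Clip the aiming-axis line p + t*d (d: unit vector) to the drivable
    area F, sketched here as: inside the outer sphere G (radius r_outer),
    outside the inner sphere H (radius r_inner), above the floor z = 0.
    Returns the endpoints of the contiguous in-area segment containing the
    control point p, or None if p itself lies outside the area."""
    ts = np.linspace(-2.0 * r_outer, 2.0 * r_outer, n)   # t = 0 is p itself
    pts = p[None, :] + ts[:, None] * d[None, :]
    dist = np.linalg.norm(pts - base, axis=1)
    ok = (dist <= r_outer) & (dist >= r_inner) & (pts[:, 2] >= 0.0)
    i0 = n // 2                                          # sample at t = 0
    if not ok[i0]:
        return None
    lo, hi = i0, i0
    while lo > 0 and ok[lo - 1]:
        lo -= 1
    while hi < n - 1 and ok[hi + 1]:
        hi += 1
    return pts[lo], pts[hi]            # endpoints of segment Xa, Ya or Za
```

Sampling is used here instead of analytic surface intersection so that the same sketch also works when the limit surfaces are not exactly spherical.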
In FIG. 13A, the line segment Za indicates how far the robot 2 can be driven in the direction of the line segment Za from its current state. The line segments Xa and Ya in FIG. 13A indicate how far the robot 2 can be driven in the directions of the line segments Xa and Ya, respectively, when the robot 2 is driven in the direction of the line segment Za from its current state until the control point A comes closest to the aiming point C. By viewing the display example of FIG. 13A, for example, the operator can easily recognize how far the robot 2 can be driven in the direction of the line segment Za from its current state. Likewise, when the robot 2 is driven in the direction of the line segment Za from its current state so that the control point A comes closest to the aiming point C, the operator can recognize how far the robot 2 can then be driven in the directions of the line segments Xa and Ya.
As shown in FIG. 13B, the information processing unit 103a may also extend the corrected aiming coordinate axis Za into a line segment Zb that passes through the control point A and reaches the point of intersection with the first curved surface G, and correct the already generated three-dimensional data of the aiming coordinate axes consisting of the straight line X, the straight line Y, and the straight line Z into aiming coordinate axes consisting of the line segment Xa, the line segment Ya, and the line segment Zb.
As shown in FIG. 13B, by displaying only the aiming coordinate axes within the drivable area F and additionally displaying the line segment Zb, the operator can recognize how far the robot 2 can be driven in the direction of the line segment Zb.
Furthermore, as shown in FIG. 14, when the aiming point C of the aiming coordinate axes is not located within the drivable area F, the information processing unit 103a corrects the already generated three-dimensional data of the aiming coordinate axes consisting of the straight line X, the straight line Y, and the straight line Z into three-dimensional data of only the line segment Zc present within the drivable area F.
As shown in FIG. 14, by displaying only the line segment Zc within the drivable area F, the operator can recognize that no structure such as a workpiece exists within the drivable area F in the current direction of the line segment Zc, and how far the robot 2 can be driven in the direction of the line segment Zc.
The information processing unit 103a may also set an arbitrary virtual plane tangent to the point where the line segment Zc intersects the first curved surface G, and generate three-dimensional data of a line segment X and a line segment Y on that virtual plane, thereby generating three-dimensional data of aiming coordinate axes consisting of the line segment X, the line segment Y, and the line segment Zc.
Based on the three-dimensional data input from the information processing unit 103a, the output control unit 104 performs control to display the structure, the robot 2, the corrected aiming coordinate axes, and the control point A on the output device 5. The output control unit 104 may also perform control to display the drivable area F on the output device 5.
Next, the operation of the display control device 1A will be described.
FIG. 15 is a flowchart showing the operation of the information processing unit 103a of the display control device 1A according to the second embodiment.
In the following, steps identical to those of the display control device 1 according to the first embodiment are given the same reference signs as in FIG. 9, and their description is omitted or simplified.
When the three-dimensional data of the control point A is generated in step ST13 or step ST20, the drive area setting unit 105 generates the three-dimensional data of the drivable area F from the drive limit information (step ST41).
Based on the three-dimensional data of the drivable area F generated in step ST41, the information processing unit 103a corrects the three-dimensional data of the straight line X, the straight line Y, and the straight line Z constituting the aiming coordinate axes generated in step ST18 or step ST19, or the three-dimensional data of the straight line Z generated in step ST11, into three-dimensional data of line segments within the drivable area F (step ST42). The information processing unit 103a outputs the three-dimensional data of the aiming coordinate axes corrected in step ST42 and the three-dimensional data of the control point A generated in step ST20 or step ST13 to the output control unit 104 (step ST43). Thereafter, the flow proceeds to the processing of step ST4 of the flowchart of FIG. 8.
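A hedged sketch tying steps ST41 to ST43 together is shown below; it reuses clip_axis_to_drivable_area() from the earlier sketch, models the drive limit information as a pair of radii, and is not taken from the patent itself.

```python
def update_aiming_axes(control_point, axis_dirs, drive_limits, base):
    """Steps ST41 to ST43 in one pass: build the drivable area F from the
    drive limit information (modeled here as a pair of radii), clip each
    axis line to it, and return the segments for the output control."""
    r_outer, r_inner = drive_limits                       # ST41
    segments = {}
    for name, d in axis_dirs.items():                     # ST42: X, Y, Z
        seg = clip_axis_to_drivable_area(control_point, d, base,
                                         r_outer, r_inner)
        if seg is not None:
            segments[name] = seg
    return segments                                       # ST43: to output
```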
As described above, according to the second embodiment, the drive area setting unit 105, which sets the drivable area F of the robot 2 based on the drive limit of the robot 2, is provided, and the straight line Z is reduced to the line segment of the portion located within the drivable area F. The range within which the robot 2 can be driven can therefore be displayed through the length of the line segment Z constituting the aiming coordinate axes, allowing the operator to easily recognize how far the robot 2 can be driven in the direction of the line segment Z.
Also according to the second embodiment, the drive area setting unit 105, which sets the drivable area F of the robot 2 based on the drive limit of the robot 2, is provided, and the straight line Z, the straight line X, and the straight line Y are reduced to the line segments of the portions located within the drivable area F. The range within which the robot can be driven can therefore be displayed through the lengths of the line segments, allowing the operator to easily recognize how far the robot 2 can be driven in the direction of each line segment.
Third Embodiment.
This third embodiment shows a configuration for creating teaching data of the robot 2.
FIG. 16 is a block diagram showing the configuration of a display control device 1B according to the third embodiment.
The display control device 1B is configured by adding a position information storage unit 106, a trajectory generation unit 107, and a reproduction processing unit 108 to the display control device 1 of the first embodiment shown in FIG. 4, and by providing an information processing unit 103b and an output control unit 104a in place of the information processing unit 103 and the output control unit 104 of the display control device 1 of the first embodiment shown in FIG. 4.
In the following, parts identical or corresponding to the components of the display control device 1 according to the first embodiment are given the same reference signs as those used in the first embodiment, and their description is omitted or simplified.
The operation information acquisition unit 102 acquires the following operation information in addition to the operation information, described in the first embodiment, for moving the pointer displayed superimposed on the control point A. As the operator's operation information, the operation information acquisition unit 102 acquires operation information designating passing points through which the control point A set on the robot 2 should pass when the teaching data of the robot 2 is created. The operation information acquisition unit 102 also acquires, as the operator's operation information, operation information instructing generation of a drive trajectory of the robot 2 and operation information instructing simulation reproduction of the generated motion of the robot 2.
When operation information designating a passing point is input from the operation information acquisition unit 102, the information processing unit 103b stores the position information of the control point A at the time the operation information was input in the position information storage unit 106 as the position information of the passing point. The information processing unit 103b also generates three-dimensional data of the passing point based on its position information, and outputs the three-dimensional data of the passing point, in addition to the three-dimensional data of the robot 2, the structure, and the aiming coordinate axes, to the output control unit 104a. The information processing unit 103b also outputs the three-dimensional data of the robot 2 to the reproduction processing unit 108.
When an instruction to generate a drive trajectory of the robot 2 is input from the operation information acquisition unit 102, the trajectory generation unit 107 acquires the position information of the passing points stored in the position information storage unit 106. Using the acquired position information of the passing points, the trajectory generation unit 107 generates a drive trajectory, which is a group of position coordinates indicating a path passing through those passing points. Details of the method of generating the drive trajectory will be described later. The trajectory generation unit 107 outputs the generated drive trajectory to the output control unit 104a and the reproduction processing unit 108.
When operation information instructing simulation reproduction of the robot 2 is input from the operation information acquisition unit 102, the reproduction processing unit 108 calculates the motion of the robot 2 driven along the drive trajectory, based on the drive trajectory input from the trajectory generation unit 107 and the three-dimensional data of the robot 2 input from the information processing unit 103b. From the calculated motion, the reproduction processing unit 108 generates simulation information indicating the movement of the robot 2 and outputs it to the output control unit 104a. The reproduction processing unit 108 also collates the generated simulation information with the three-dimensional data of the structure input from the information processing unit 103b, and determines whether the robot 2 and the structure interfere with each other in the virtual space. When it determines that the robot 2 and the structure interfere in the virtual space, the reproduction processing unit 108 also outputs the position information of the location where the interference occurs to the output control unit 104a.
Based on the three-dimensional data input from the information processing unit 103b, the output control unit 104a generates control information for displaying the robot 2, the structure, the aiming coordinate axes, and the passing points in the virtual space. The output control unit 104a also generates control information for displaying the drive trajectory generated by the trajectory generation unit 107 on the output device 5, and control information for reproducing and displaying the simulation information input from the reproduction processing unit 108 on the output device 5. In addition, the output control unit 104a generates control information that highlights a location where interference occurs with a point, a line, or the like, and may generate control information for notifying the operator by sound that interference will occur.
The teaching device 4, to which the output control unit 104a outputs the control information, is composed of the output device 5 and the input device 6, as shown in FIG. 1. The teaching device 4 is a device capable of displaying the robot 2 and the structure based on the control information input from the output control unit 104a, and of receiving operation inputs for the displayed robot 2. A general-purpose personal computer, a tablet terminal operated through a touch panel, a head-mounted display, or the like can be applied as the teaching device 4.
The display 51 of the output device 5 displays the robot 2 and the structure, and further displays a graphic interface such as teaching buttons for receiving operation inputs. The teaching buttons may instead be hardware buttons, such as buttons provided on the teaching device 4 or the keyboard of a personal computer. A plurality of teaching buttons are provided, including buttons for starting or ending the teaching process, buttons for confirming or deleting a passing point, buttons for playing or stopping a simulation used to check the generated drive trajectory, and buttons for advancing or returning through the operating procedure.
Next, a hardware configuration example of the display control device 1B will be described. Description of configurations identical to those of the first embodiment is omitted.
The information processing unit 103b, the trajectory generation unit 107, the reproduction processing unit 108, and the output control unit 104a in the display control device 1B are implemented either by the processing circuit 1a shown in FIG. 5A, or by the processor 1b executing a program stored in the memory 1c shown in FIG. 5B.
Next, details of the trajectory generation unit 107 will be described.
The trajectory generation unit 107 generates a drive trajectory of the robot 2 connecting the drive start point of the robot 2, the passing points, and the drive end point of the robot 2, which is the destination. The drive start point and the drive end point may be points set in advance or points set arbitrarily by the operator. The trajectory generation unit 107 can generate a drive trajectory of the robot 2 as long as at least one passing point is stored in the position information storage unit 106. The drive start point of the robot 2 may be the start point of the control program.
The trajectory generation unit 107 generates the drive trajectory as, for example, a linear trajectory, a tangential trajectory, or a manual trajectory.
FIG. 17 is a diagram showing the types of drive trajectory generated by the trajectory generation unit 107 of the display control device 1B according to the third embodiment. FIG. 17A shows an example of a linear trajectory, FIG. 17B an example of a tangential trajectory, and FIG. 17C an example of a manual trajectory.
A linear trajectory is a drive trajectory obtained by connecting each pair of passing points with a straight line. Specifically, as shown in FIG. 17A, it is a trajectory Qa obtained by connecting the first passing point P1 and the second passing point P2 with a straight line, and the second passing point P2 and the third passing point P3 with a straight line.
A tangential trajectory is a drive trajectory obtained by connecting the start point and the first passing point with a straight line, and connecting the second and subsequent passing points smoothly while maintaining tangential continuity. Specifically, as shown in FIG. 17B, it is a trajectory Qb obtained by connecting the first passing point P1 and the second passing point P2 with a straight line, and connecting the curve smoothly through the second passing point P2 and the third passing point P3 while maintaining tangential continuity.
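The embodiment does not name an interpolation scheme for the tangential trajectory; a uniform Catmull-Rom spline is one standard way to pass through every point while keeping tangents continuous, and is used in the sketch below alongside a straight-segment version of the linear trajectory. The sketch does not reproduce the detail that the first span of the tangential trajectory is a straight line, and all names are illustrative.

```python
import numpy as np

def linear_trajectory(points, samples_per_span=20):
    """Trajectory Qa: straight segments between consecutive passing points."""
    pts = np.asarray(points, dtype=float)
    out = [pts[0]]
    for a, b in zip(pts[:-1], pts[1:]):
        for t in np.linspace(0.0, 1.0, samples_per_span + 1)[1:]:
            out.append((1.0 - t) * a + t * b)
    return np.asarray(out)

def tangential_trajectory(points, samples_per_span=20):
    """Trajectory Qb: C1-continuous curve through every passing point,
    built as a uniform Catmull-Rom spline with duplicated endpoints."""
    pts = np.asarray(points, dtype=float)
    p = np.vstack([pts[0], pts, pts[-1]])
    out = []
    for i in range(1, len(p) - 2):
        p0, p1, p2, p3 = p[i - 1], p[i], p[i + 1], p[i + 2]
        for t in np.linspace(0.0, 1.0, samples_per_span, endpoint=False):
            out.append(0.5 * ((2.0 * p1)
                              + (-p0 + p2) * t
                              + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t**2
                              + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t**3))
    out.append(pts[-1])
    return np.asarray(out)   # the drive trajectory: a group of coordinates
```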
A manual trajectory is a trajectory generated by the operator using a jog lever, operation keys, a mouse, or the like, adopted as the drive trajectory as it is. Specifically, as shown in FIG. 17C, it is a trajectory Qc obtained by connecting the first passing point P1 and the second passing point P2 with a line segment drawn by the operator on the display 51 of the output device 5, and the second passing point P2 and the third passing point P3 with a line segment drawn by the operator.
The trajectory generation unit 107 outputs the generated drive trajectory and the position information of the passing points on the drive trajectory to the output control unit 104a and the reproduction processing unit 108.
The passing points are highlighted with a shape such as the circles shown in FIGS. 17A to 17C, or with rectangles or the like, allowing the operator to recognize them easily. The output control unit 104a may generate control information for displaying, near each displayed passing point, a character string identifying it, for example "passing point 1", "Point1", or "P1". When the operator confirms a passing point, the output control unit 104a may perform control to output a sound effect marking the confirmation of the passing point. When the operator confirms a plurality of passing points, each passing point is given an identifying character string in the order of confirmation and is displayed with a serial number, for example "passing point 1", "passing point 2", and "passing point 3".
When operation information for moving a passing point on the drive trajectory displayed on the display 51 is input, the trajectory generation unit 107 modifies the position information of that passing point in accordance with the operation information, and corrects the drive trajectory to a trajectory connecting the modified passing points. When operation information for selecting and deleting a passing point on the drive trajectory displayed on the display 51 is input via the output control unit 104a, the trajectory generation unit 107 deletes the position information of that passing point, corrects the drive trajectory to a trajectory that no longer passes through the deleted passing point, and revises the identifying character strings given to the remaining passing points; a sketch of this bookkeeping follows below. The trajectory generation unit 107 outputs the corrected drive trajectory and the position information of the passing points on the corrected drive trajectory to the output control unit 104a and the reproduction processing unit 108. The trajectory generation unit 107 may also receive operation information for selecting and moving the drive trajectory displayed on the display 51, and perform a correction that moves the drive trajectory.
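One possible, hypothetical shape for this passing-point bookkeeping is sketched below; nothing in it comes from the patent beyond the behavior that every move or delete triggers regeneration of the drive trajectory.

```python
class TeachingWaypoints:
    """Hypothetical bookkeeping for the passing-point list: every move or
    delete regenerates the drive trajectory, mirroring the behavior
    described for the trajectory generation unit 107. The regenerate
    callback can be linear_trajectory or tangential_trajectory above."""
    def __init__(self, regenerate):
        self.points = []                  # position information of points
        self.regenerate = regenerate

    def add(self, p):
        self.points.append(p)             # numbered in order of confirmation
        return self.regenerate(self.points)

    def move(self, index, new_p):
        self.points[index] = new_p        # correction of a passing point
        return self.regenerate(self.points)

    def delete(self, index):
        del self.points[index]            # later points renumber implicitly
        return self.regenerate(self.points)
```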
The teaching device 4 receives an operation indicating that correction of the drive trajectory is complete, for example pressing of a completion button. Alternatively, the teaching device 4 may determine that correction of the drive trajectory is complete when the input of operation information for moving or deleting passing points is finished, making the pressing of a completion button unnecessary. The teaching device 4 notifies the display control device 1B that correction of the drive trajectory is complete. When notified that correction of the drive trajectory is complete, the display control device 1B outputs a control program for driving the robot 2 along the generated drive trajectory to the robot control device 7.
Next, details of the reproduction processing unit 108 will be described.
The reproduction processing unit 108 calculates the motion of the robot 2 driven along the drive trajectory from the information indicating the drive trajectory input from the trajectory generation unit 107 and the three-dimensional data of the robot 2 acquired from the information processing unit 103b, and generates simulation information, which it outputs to the output control unit 104a. Based on the simulation information input from the reproduction processing unit 108 and the three-dimensional data input from the information processing unit 103b, the output control unit 104a generates control information for displaying a moving image of the robot 2 driving along the drive trajectory in the virtual space, and outputs it to the output device 5.
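The patent states only that the motion along the trajectory is calculated; as one hedged sketch, the trajectory can be resampled at constant speed and each control-point position mapped to a joint pose with an assumed inverse-kinematics routine (solve_ik is not part of the patent).

```python
import numpy as np

def make_simulation_info(trajectory, solve_ik, speed=0.1, fps=30.0):
    """Resample the drive trajectory at constant speed and map each
    control-point position to a joint pose with an assumed IK solver.
    The result is a time-stamped frame list for playback."""
    traj = np.asarray(trajectory, dtype=float)
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])       # arc length
    frames = []
    for t in np.arange(0.0, s[-1] / speed, 1.0 / fps):
        pos = np.array([np.interp(t * speed, s, traj[:, k])
                        for k in range(3)])
        frames.append((t, solve_ik(pos)))             # (time, joint angles)
    return frames
```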
In parallel with the process of generating the simulation information, the reproduction processing unit 108 collates the simulation information with the three-dimensional data of the structure input from the information processing unit 103b. Referring to the collation result, the reproduction processing unit 108 determines whether the robot 2 and the structure interfere with each other in the virtual space. Here, interference between the robot 2 and the structure also includes interference between the structure and the tool 8 attached to the flange portion 2g of the robot 2 shown in FIG. 3. In detail, the reproduction processing unit 108 determines whether any of the arm of the robot 2, the shaft members of the robot 2, the flange portion 2g of the robot 2, the tool 8, and so on interferes with the structure when the robot 2 is driven along the drive trajectory. When it determines that interference occurs, the reproduction processing unit 108 acquires the position information of the location where the interference occurs and outputs it to the output control unit 104a. When the position information of the location where the interference occurs is input from the reproduction processing unit 108, the output control unit 104a generates control information for clearly indicating that location.
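The patent leaves the collision test itself unspecified; one common sample-based approach, sketched below under stated assumptions, approximates each robot part by bounding spheres and the structure by a point cloud. Both representations and all names are assumptions.

```python
import numpy as np

def interference_locations(trajectory, link_spheres_at, structure_points,
                           clearance=0.0):
    """Sample-based interference test. link_spheres_at(p) is assumed to
    return [(center, radius), ...] bounding spheres for the robot's parts
    (arm, shaft members, flange 2g, tool 8) at control-point position p;
    structure_points is an (M, 3) point cloud of the structure."""
    hits = []
    for p in trajectory:
        for center, radius in link_spheres_at(p):
            d = np.linalg.norm(structure_points - center, axis=1)
            if (d < radius + clearance).any():
                hits.append(p)          # trajectory position of the hit
                break
    return hits                         # empty list: no interference found
```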
The output control unit 104a may generate control information that indicates the location where interference occurs while the simulation information is being reproduced, or control information that indicates the location where interference occurs when the drive trajectory of the robot 2 is displayed, without reproducing the simulation information. For example, when the trajectory generation unit 107 generates a linear trajectory or a tangential trajectory, the reproduction processing unit 108 can determine whether the robot 2 and the structure interfere in the virtual space without generating simulation information, and, when it determines that interference occurs, output the position information of the location where the interference occurs to the output control unit 104a. This allows the output control unit 104a to perform control that instantly indicates the location on the drive trajectory where interference occurs.
The output control unit 104a generates control information for first displaying the robot 2 in a posture in which its control point A is located at the start point of the drive trajectory, and then displaying a moving image of the robot 2 driving along the drive trajectory. When the driving of the robot 2 ends, the output control unit 104a generates control information for displaying the robot 2 at rest in a posture in which its control point A is located at the end point of the drive trajectory. When the operation information acquisition unit 102 acquires operation information such as pressing of a stop button while the robot 2 is being displayed driving along the drive trajectory, the output control unit 104a generates, via the information processing unit 103b and the reproduction processing unit 108, control information for displaying the robot 2 stopped in the posture it had when the stop button was pressed. When the operation information acquisition unit 102 acquires operation information such as pressing of a play button while the robot 2 is being displayed stopped in a certain posture, the output control unit 104a performs, via the information processing unit 103b and the reproduction processing unit 108, display control that resumes the driving of the robot 2 from the stopped posture.
When the position information of the location where interference occurs is input from the reproduction processing unit 108, the output control unit 104a performs display control that highlights the location of interference with the robot 2 on the drive trajectory with a point, a line, or the like. The output control unit 104a also performs display control that highlights the interference location by, for example, changing the thickness or the color of the line of the drive trajectory. Furthermore, the output control unit 104a may perform control to output a sound effect from the speaker 52 when the robot 2 and the structure interfere in the virtual space.
Next, the operations of the trajectory generation unit 107, the reproduction processing unit 108, and the output control unit 104a of the display control device 1B will be described with reference to the flowcharts of FIG. 18 and FIG. 19.
FIG. 18 is a flowchart showing the operation of the trajectory generation unit 107 of the display control device 1B according to the third embodiment. In the following description, the trajectory generation unit 107 is assumed to generate a drive trajectory whenever the position information of a new passing point is stored in the position information storage unit 106.
When the position information of a new passing point is stored in the position information storage unit 106 (step ST51), the trajectory generation unit 107 acquires the position information of the passing points stored in the position information storage unit 106 (step ST52). Using the position information of the passing points acquired in step ST52, the trajectory generation unit 107 generates a drive trajectory of the robot 2 (step ST53), and outputs information indicating the generated drive trajectory to the output control unit 104a.
Based on the input information indicating the drive trajectory, the output control unit 104a generates control information for displaying the drive trajectory and the passing points on it in the virtual space (step ST54), and outputs the generated control information to the output device 5. Thereafter, the trajectory generation unit 107 determines whether an instruction to correct the drive trajectory or a passing point has been input (step ST55). Here, the correction instruction includes both modification of a passing point and deletion of a passing point. When a correction instruction is input (step ST55; YES), the trajectory generation unit 107 corrects the previously generated drive trajectory based on the correction instruction (step ST56), outputs information indicating the corrected drive trajectory to the output control unit 104a, and returns to the processing of step ST54.
On the other hand, when no correction instruction is input (step ST55; NO), the trajectory generation unit 107 outputs information indicating the generated or corrected drive trajectory to the reproduction processing unit 108. The reproduction processing unit 108 stores the information indicating the drive trajectory input from the trajectory generation unit 107 in a temporary storage area such as a buffer (step ST57), and the processing ends.
FIG. 19 is a flowchart showing the generation of simulation information by the reproduction processing unit 108 of the display control device 1B according to the third embodiment. In the following description, the reproduction processing unit 108 is assumed to generate simulation information when an instruction to generate it is input.
When a simulation reproduction instruction is input from the operation information acquisition unit 102 (step ST61), the reproduction processing unit 108 generates simulation information in which the robot 2 drives along the drive trajectory, from the information indicating the drive trajectory stored in the buffer or the like in step ST57 of the flowchart of FIG. 18 and the three-dimensional data of the robot 2 and the structure input from the information processing unit 103b (step ST62).
The reproduction processing unit 108 also determines whether the robot 2 and the structure interfere with each other when the robot 2 drives along the drive trajectory in the virtual space (step ST63). When it determines that no interference occurs (step ST63; NO), the reproduction processing unit 108 outputs the simulation information generated in step ST62 to the output control unit 104a, and the output control unit 104a generates control information for reproducing the simulation information input from the reproduction processing unit 108 (step ST64).
On the other hand, when it determines that interference occurs (step ST63; YES), the reproduction processing unit 108 acquires the position information of the location where the interference occurs (step ST65). The reproduction processing unit 108 outputs the simulation information generated in step ST62 and the position information of the interference location acquired in step ST65 to the output control unit 104a. The output control unit 104a generates control information for reproducing the simulation information and displaying the location where the interference occurs on the drive trajectory (step ST66). The output control unit 104a outputs the control information generated in step ST64 or step ST66 to the output device 5, and the processing ends.
FIG. 20 is a diagram showing a display example of an interference location produced by the display control device 1B according to the third embodiment.
The drive trajectory Qd is a trajectory along which the control point A of the robot 2 passes through the first passing point P1, the second passing point P2, and the third passing point P3. The region Ra is a region indicating the location where the robot 2 and the structure Dc interfere. The region Rb is a region indicating where on the drive trajectory Qd the control point A is when the robot 2 interferes with the structure Dc in the region Ra. The region Ra indicates the interference location with a circular shape, while the region Rb indicates the portion of the drive trajectory where the interference occurs by drawing part of the trajectory Qd with a thicker line.
When the display shown in FIG. 20 is presented on the output device 5, the system may be configured so that the operator inputs an instruction to correct the drive trajectory Qd via the input device 6 while checking that display. In that case, the trajectory generation unit 107 corrects the previously generated drive trajectory based on the input correction instruction, and the reproduction processing unit 108 determines whether interference occurs between the robot 2 and the structure when the robot 2 drives along the corrected drive trajectory. The output control unit 104a generates control information according to the determination result and outputs it to the output device 5. This allows the operator to correct the drive trajectory while visually checking the interference location.
In the simulation reproduction described above, the operator can also set the driving speed of the robot 2, the dwell time at each passing point, the number of times the drive is repeated, the control conditions of the tool 8, and the like. The reproduction processing unit 108 generates simulation information based on the set conditions.
When the simulation reproduction described above ends, or when the generation of teaching data ends, the reproduction processing unit 108 outputs a control program for driving the robot 2 along the currently set drive trajectory of the robot 2 to the robot control device 7.
The end of the simulation reproduction or of the teaching data generation may be determined, for example, by the operator pressing an end button, or the teaching data generation may be determined to have ended when a drive trajectory free of interference between the robot 2 and the structure has been generated.
As described above, according to the third embodiment, the trajectory generation unit 107, which generates a drive trajectory of the robot 2 passing through designated passing points based on operation information designating the passing points of the robot 2, is provided, and the output control unit 104a is configured to generate control information for displaying the drive trajectory generated by the trajectory generation unit 107 and the passing points on it. Information for checking the drive trajectory passing through the set passing points can therefore be presented.
Also according to the third embodiment, the reproduction processing unit 108, which calculates the motion of the robot 2 driven along the drive trajectory generated by the trajectory generation unit 107 and generates simulation information indicating the movement of the robot 2, is provided, and the output control unit 104a is configured to generate control information for reproducing the simulation information generated by the reproduction processing unit 108. Information for checking the movement of the robot as it drives along the drive trajectory can therefore be presented.
Also according to the third embodiment, the reproduction processing unit 108 determines whether the robot 2 and the structure interfere with each other when the robot 2 drives along the drive trajectory generated by the trajectory generation unit 107, and the output control unit 104a is configured to generate control information for displaying the location where the interference occurs when the reproduction processing unit 108 determines that the robot 2 and the structure interfere. Information indicating that interference will occur when the robot drives along the drive trajectory can therefore be presented.
Although the above description shows an example in which the position information storage unit 106, the trajectory generation unit 107, and the reproduction processing unit 108 are added to the display control device 1 described in the first embodiment, these units may instead be added to the display control device 1A described in the second embodiment.
A case is also assumed in which the control information output from the display control devices 1, 1A, and 1B of the first to third embodiments described above is displayed in an augmented reality space via the output device 5. In this case, information such as the control point A, the aiming coordinate axes, the passing points, the drive trajectory, and the interference region is superimposed on the robot 2 in the real environment. When the robot 2 does not exist in the real environment, the information of the robot 2 generated by the display control devices 1, 1A, and 1B is also superimposed on the information of the real environment.
When displaying in the augmented reality space in this way, a tablet terminal or head-mounted display provided with a three-dimensional scanning device and an acceleration sensor, or the like, is used as the output device 5. The output device 5 for the augmented reality space scans the shapes of the robot 2 and the structures in the real environment using a stereo camera, a depth sensor, or the like to generate three-dimensional data. The output device 5 for the augmented reality space then detects feature points common to the generated three-dimensional data and the previously input three-dimensional data of the robot 2 and the structures used for inputting the teaching data, and aligns the positions of the two sets of three-dimensional data.
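The patent only states that common feature points are detected and the two data sets are brought into registration; assuming the point correspondences are already known, the rigid transform can be solved with the Kabsch algorithm, as in the following sketch.

```python
import numpy as np

def align_by_feature_points(scan_pts, model_pts):
    """Rigid registration of matched feature points (Kabsch algorithm):
    solves for the rotation R and translation t such that
    model_pt ~ R @ scan_pt + t. Correspondences are assumed given."""
    scan = np.asarray(scan_pts, dtype=float)
    model = np.asarray(model_pts, dtype=float)
    cs, cm = scan.mean(axis=0), model.mean(axis=0)
    H = (scan - cs).T @ (model - cm)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # proper rotation
    t = cm - R @ cs
    return R, t
```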
When a tablet terminal or a closed (video see-through) head-mounted display is applied as the output device 5 for the augmented reality space, the previously input three-dimensional data of the robot 2 and the structures used for inputting the teaching data are superimposed on the image of the robot 2 and the surrounding environment in the real environment captured by an image-input camera provided near the three-dimensional scanning device.
On the other hand, when a transmissive head-mounted display is applied as the output device 5 for the augmented reality space, the previously input robot 2 and structures used for inputting the teaching data are superimposed on the actual robot 2 and surrounding environment seen through the display lenses.
The output device 5 for the augmented reality space performs three-dimensional scanning periodically, performs calculations combined with the result of detecting the operator's movement by the acceleration sensor, and updates the superimposed display of the robot 2 and the structures described above in real time. The information obtained by scanning the shapes of the actual robot 2 and structures is not itself displayed.
On the display in which the three-dimensional data is superimposed, passing points, drive trajectories, operation buttons, and the like can be superimposed in the same way as when generating teaching data and creating simulation information in the virtual reality space.
The output device 5 for the augmented reality space may also be set not to display the previously input robot 2 and structures used for inputting the teaching data. In this way, when checking interference between the robot 2 and a structure, the operator can be given the impression that the actual structure and the three-dimensional-data robot 2 are really interfering.
FIG. 21 is a diagram showing a display example in the case where the control information of the display control devices 1, 1A, and 1B described in the first to third embodiments is output to the output device 5 for the augmented reality space.
FIG. 21 shows a display example in which the three-dimensional data 2B of the robot 2, the control point A, the aiming coordinate axes, the first passing point P1, and the drive trajectory Qe are superimposed on the robot 2A in the real environment and its surroundings.
In the description above, the control information generated by the output control unit 104 was described as information for causing the output device 5 to display an image of the robot 2 and the aiming coordinate axes as viewed from a specific position in the virtual space. However, as shown in FIG. 21, the control information may be information for causing the output device 5 to display an image of at least the aiming coordinate axes as viewed from a specific position in the virtual space.
The display control devices 1, 1A, and 1B described in the first to third embodiments above can also be applied as devices for generating a control program for remotely operating equipment.
In addition to the above, within the scope of the invention, the present invention allows free combination of the embodiments, modification of any component of the embodiments, or omission of any component of the embodiments.
The display control device according to the present invention is applicable to a device for generating a control program of a robot that performs work or the like, or to a device for generating a control program for remotely operating equipment.
1, 1A, 1B: display control device; 101: three-dimensional data acquisition unit; 102: operation information acquisition unit; 103, 103a, 103b: information processing unit; 104, 104a: output control unit; 105: drive area setting unit; 106: position information storage unit; 107: trajectory generation unit; 108: reproduction processing unit.

Claims (19)

1.  A display control device comprising: an information processing unit that generates, based on three-dimensional data of a device to be driven and three-dimensional data of a structure, three-dimensional data of aiming coordinate axes including a first straight line passing through a control point set on the device to be driven, in a virtual space defined by the three-dimensional data of the structure; and an output control unit that generates control information for displaying the aiming coordinate axes on an output device, based on the three-dimensional data of the device to be driven, the three-dimensional data of the structure, and the three-dimensional data of the aiming coordinate axes generated by the information processing unit.
2.  The display control device according to claim 1, wherein the aiming coordinate axes are composed of, in addition to the first straight line, a second straight line and a third straight line each passing through an aiming point located on the first straight line.
3.  The display control device according to claim 1, wherein the first straight line is a straight line extending in a direction set according to a tool attached to the device to be driven.
4.  The display control device according to claim 1, wherein the first straight line is a straight line extending in the vertical direction.
5.  The display control device according to claim 2, wherein the second straight line and the third straight line are straight lines extending in the same directions as the x-axis and the y-axis in global coordinates including an x-axis and a y-axis orthogonal to each other and a z-axis in the vertical direction.
6.  The display control device according to claim 2, wherein the second straight line and the third straight line are straight lines extending in the same directions as the x-axis and the y-axis in local coordinates including an x-axis, a y-axis, and a z-axis orthogonal to one another.
7.  The display control device according to claim 2, wherein the aiming point is a point at which the first straight line intersects a surface of the structure in the virtual space.
8.  The display control device according to claim 2, wherein the second straight line and the third straight line are lines that follow the surface shape of the structure and that, when viewed from one specific direction, are straight in shape and mutually orthogonal.
9.  The display control device according to claim 2, wherein the second straight line and the third straight line are lines extending straight in the virtual space.
10.  The display control device according to claim 2, wherein the output control unit generates control information for displaying, with a changed display form, whichever of the first straight line, the second straight line, and the third straight line extends in the same direction as a direction in which the driving of the device to be driven is restricted.
11.  The display control device according to claim 1, wherein the output control unit generates control information for displaying coordinate values of the control point.
12.  The display control device according to claim 2, wherein the output control unit generates control information for displaying coordinate values of the aiming point.
  13.  The display control device according to claim 1, further comprising a drive region setting unit that sets a drivable region of the drive target device based on a drive limit of the drive target device, wherein the first straight line is the line segment of the portion located within the drivable region.
  14.  The display control device according to claim 2, further comprising a drive region setting unit that sets a drivable region of the drive target device based on a drive limit of the drive target device, wherein the first straight line, the second straight line, and the third straight line are the line segments of the portions located within the drivable region.
  15.  The display control device according to claim 1, further comprising a trajectory generation unit that generates, based on operation information specifying a passing point of the drive target device, a drive trajectory of the drive target device that passes through the specified passing point, wherein the output control unit generates control information for displaying the drive trajectory generated by the trajectory generation unit and the passing points on the drive trajectory.
  16.  The display control device according to claim 15, further comprising a reproduction processing unit that calculates the motion of the drive target device driven along the drive trajectory generated by the trajectory generation unit and generates simulation information indicating the movement of the drive target device, wherein the output control unit generates control information for reproducing the simulation information generated by the reproduction processing unit.
  17.  The display control device according to claim 16, wherein the reproduction processing unit determines whether the drive target device and the structure interfere with each other when the drive target device is driven along the drive trajectory generated by the trajectory generation unit, and the output control unit generates control information for displaying the location where the interference occurs when the reproduction processing unit determines that the drive target device and the structure interfere.
  18.  A display control method comprising: a step in which an information processing unit generates, based on three-dimensional data of a drive target device and three-dimensional data of a structure, three-dimensional data of an aiming coordinate axis including a first straight line passing through a control point set on the drive target device, within a virtual space defined by the three-dimensional data of the structure; and a step in which an output control unit generates control information for displaying the aiming coordinate axis on an output device, based on the three-dimensional data of the drive target device, the three-dimensional data of the structure, and the generated three-dimensional data of the aiming coordinate axis.
  19.  A display control program for causing a computer to execute: a procedure of generating, based on three-dimensional data of a drive target device and three-dimensional data of a structure, three-dimensional data of an aiming coordinate axis including a first straight line passing through a control point set on the drive target device, within a virtual space defined by the three-dimensional data of the structure; and a procedure of generating control information for displaying the aiming coordinate axis on an output device, based on the three-dimensional data of the drive target device, the three-dimensional data of the structure, and the generated three-dimensional data of the aiming coordinate axis.
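
By way of illustration only (this is not part of the claims, and every function and variable name below is hypothetical), the following Python sketches show one way the claimed geometry could be computed. First, for the aiming coordinate axis of claim 1: the first straight line can be represented by two endpoints bracketing the control point set on the drive target device, extended along a chosen direction (vertical per claim 4, or tool-dependent per claim 3).

```python
import numpy as np

def first_straight_line(control_point, direction, half_length=2.0):
    """Endpoints of the first straight line of the aiming coordinate
    axis: a segment through the control point along a set direction
    (vertical per claim 4, or tool-dependent per claim 3)."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)              # unit direction vector
    p = np.asarray(control_point, dtype=float)
    # A renderer would draw the segment between these two endpoints
    # in the virtual space defined by the structure's 3D data.
    return p - half_length * d, p + half_length * d

# Example: vertical aiming line through a tool center point
start, end = first_straight_line((0.4, 0.1, 0.6), (0.0, 0.0, -1.0))
```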
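For claims 5 and 6, a sketch of the distinction between the global and local variants: in the global case the second and third straight lines follow the world x and y axes, while in the local case they follow the x and y axes of a local frame (for example a tool frame) given by a rotation matrix. The 3x3-matrix representation is an assumption, not something the patent specifies.

```python
import numpy as np

def axis_directions(mode, local_rotation=None):
    """Directions of the second and third straight lines.
    mode='global': world x/y axes (claim 5).
    mode='local' : x/y columns of a local frame's rotation matrix (claim 6)."""
    if mode == "global":
        return np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
    R = np.asarray(local_rotation, dtype=float)  # 3x3; columns are local x, y, z
    return R[:, 0], R[:, 1]

# Example: a tool frame rotated 45 degrees about the world z axis
c = s = np.sqrt(0.5)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
x_dir, y_dir = axis_directions("local", Rz)
```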
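Claim 7 defines the aiming point as the intersection of the first straight line with the structure's surface in the virtual space. One standard way to compute such a point against a triangle mesh is Moller-Trumbore ray-triangle intersection; the patent does not prescribe a method, so this is only a plausible sketch.

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray-triangle intersection; returns the distance t
    along the ray, or None if the triangle is missed."""
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                      # ray parallel to triangle plane
        return None
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    return t if t > eps else None

def aiming_point(origin, direction, triangles):
    """Nearest hit of the first straight line against the structure's
    surface -- the aiming point of claim 7 -- or None if it misses."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    hits = [t for tri in triangles if (t := ray_triangle(o, d, *tri)) is not None]
    return o + min(hits) * d if hits else None
```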
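The surface-following lines of claim 8 appear straight and mutually orthogonal from one specific viewing direction while hugging the structure's surface. Assuming, purely for illustration, that the surface is a heightfield z = h(x, y) and the viewing direction is straight down, such a "draped" line can be built by sampling the straight overhead segment and dropping each sample onto the surface.

```python
import numpy as np

def drape_line(p0, p1, height_fn, samples=50):
    """Polyline along the surface that projects to the straight
    segment p0 -> p1 when viewed from directly above (claim 8)."""
    p0, p1 = np.asarray(p0, dtype=float), np.asarray(p1, dtype=float)
    pts = []
    for t in np.linspace(0.0, 1.0, samples):
        x, y = (1.0 - t) * p0 + t * p1       # straight in the x-y plane
        pts.append((x, y, height_fn(x, y)))  # z follows the surface shape
    return np.array(pts)

# Example: drape across a gently curved surface
line = drape_line((0.0, 0.0), (1.0, 1.0), lambda x, y: 0.1 * np.sin(3 * x))
```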
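Claims 13 and 14 restrict the displayed lines to the portions inside the drivable region. As a simplified sketch, if that region is approximated by a sphere of reach radius around the robot base (real drive limits depend on the joint configuration, so this is a deliberate simplification), the visible segment is the line clipped against the sphere.

```python
import numpy as np

def clip_line_to_sphere(p, d, center, radius):
    """Clip the line p + t*d against a spherical drivable region.
    Returns (t_entry, t_exit) for the in-region segment, or None
    if the line misses the region entirely."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    m = np.asarray(p, dtype=float) - np.asarray(center, dtype=float)
    # Solve |m + t d|^2 = r^2, a quadratic in t (with |d| = 1)
    b = np.dot(m, d)
    c = np.dot(m, m) - radius ** 2
    disc = b * b - c
    if disc < 0.0:
        return None                       # line never enters the region
    root = np.sqrt(disc)
    return -b - root, -b + root           # entry and exit parameters
```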
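For the trajectory generation unit of claim 15, the simplest assumed form connects the designated passing points with straight segments sampled at a fixed step; production systems would typically plan in joint space or use spline interpolation instead.

```python
import numpy as np

def linear_trajectory(waypoints, step=0.01):
    """Piecewise-linear drive trajectory through the specified
    passing points, sampled roughly every `step` distance units."""
    pts = [np.asarray(w, dtype=float) for w in waypoints]
    path = [pts[0]]
    for a, b in zip(pts, pts[1:]):
        n = max(1, int(np.linalg.norm(b - a) / step))
        for i in range(1, n + 1):
            path.append(a + (b - a) * (i / n))  # interpolate toward b
    return np.array(path)

traj = linear_trajectory([(0.0, 0.0, 0.5), (0.2, 0.1, 0.4), (0.3, 0.3, 0.4)])
```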
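Finally, for the reproduction processing unit of claims 16 and 17, a minimal stand-in for interference checking, assuming the drive target device and obstacles are approximated by bounding spheres (real simulators check the full meshes): walk the trajectory and report the first sample at which the clearance goes negative, which is the location the output control unit would highlight.

```python
import numpy as np

def first_interference(trajectory, obstacle_centers, robot_radius, obstacle_radius):
    """Replay the trajectory; return (index, position) of the first
    sample where the robot sphere overlaps any obstacle sphere,
    or None if the whole motion is collision-free."""
    limit = robot_radius + obstacle_radius
    for i, p in enumerate(trajectory):
        for c in obstacle_centers:
            if np.linalg.norm(p - np.asarray(c, dtype=float)) < limit:
                return i, p               # interference location to display
    return None
```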
PCT/JP2017/040132 2017-11-07 2017-11-07 Display control device, display control method, and display control program WO2019092792A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/JP2017/040132 WO2019092792A1 (en) 2017-11-07 2017-11-07 Display control device, display control method, and display control program
JP2018526967A JP6385627B1 (en) 2017-11-07 2017-11-07 Display control apparatus, display control method, and display control program
TW107116580A TW201918807A (en) 2017-11-07 2018-05-16 Display control device, display control method, and display control program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/040132 WO2019092792A1 (en) 2017-11-07 2017-11-07 Display control device, display control method, and display control program

Publications (1)

Publication Number Publication Date
WO2019092792A1 2019-05-16 WO2019092792A1 (en)

Family

ID=63444307

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/040132 WO2019092792A1 (en) 2017-11-07 2017-11-07 Display control device, display control method, and display control program

Country Status (3)

Country Link
JP (1) JP6385627B1 (en)
TW (1) TW201918807A (en)
WO (1) WO2019092792A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102184935B1 (en) * 2019-02-20 2020-12-01 한국기술교육대학교 산학협력단 Method of Confirming Motion of Robot Arm Using Augmented Reality
JP7396872B2 (en) 2019-11-22 2023-12-12 ファナック株式会社 Simulation device and robot system using augmented reality
JP7409848B2 (en) 2019-12-04 2024-01-09 ファナック株式会社 Display device and display program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09300256A (en) * 1996-05-14 1997-11-25 Nippon Telegr & Teleph Corp <Ntt> Robot teaching method and device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004243516A (en) * 2003-02-11 2004-09-02 Kuka Roboter Gmbh Method for fading-in information created by computer into image of real environment, and device for visualizing information created by computer to image of real environment
JP2006190228A (en) * 2005-01-07 2006-07-20 Kobe Steel Ltd Operation program creating method
JP2009226561A (en) * 2008-03-25 2009-10-08 Kobe Steel Ltd Method and device for calculating and displaying operational tolerance of robot
JP2013136123A (en) * 2011-12-28 2013-07-11 Kawasaki Heavy Ind Ltd Assisting device and assisting method for teaching operation for robot
JP2014161921A (en) * 2013-02-21 2014-09-08 Yaskawa Electric Corp Robot simulator, robot teaching device and robot teaching method
JP2015147260A (en) * 2014-02-05 2015-08-20 株式会社デンソーウェーブ Teaching device for robot
JP2017019068A (en) * 2015-07-14 2017-01-26 セイコーエプソン株式会社 Teaching device, robot, and robot system

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11534912B2 (en) 2019-04-26 2022-12-27 Fanuc Corporation Vibration display device, operation program creating device, and system
JP2021018544A (en) * 2019-07-18 2021-02-15 ファナック株式会社 Augmented reality glasses device and display program
US11722651B2 (en) 2019-07-18 2023-08-08 Fanuc Corporation Augmented reality glasses device and display program
JP7260428B2 (en) 2019-07-18 2023-04-18 ファナック株式会社 Augmented reality glasses device and display program
JP2021043135A (en) * 2019-09-13 2021-03-18 株式会社東芝 Radiation dose distribution display system and radiation dose distribution display method
JP7293057B2 (en) 2019-09-13 2023-06-19 株式会社東芝 Radiation dose distribution display system and radiation dose distribution display method
WO2021260898A1 (en) * 2020-06-25 2021-12-30 株式会社日立ハイテク Robot teaching device and method for teaching work
JP7454046B2 (en) 2020-06-25 2024-03-21 株式会社日立ハイテク Robot teaching device and work teaching method
WO2022030047A1 (en) * 2020-08-03 2022-02-10 三菱電機株式会社 Remote control device
JP2022172112A (en) * 2020-08-03 2022-11-15 三菱電機株式会社 Remote-control device
JPWO2022030047A1 (en) * 2020-08-03 2022-02-10
WO2022131068A1 (en) * 2020-12-14 2022-06-23 ファナック株式会社 Augmented reality display device and augmented reality display system
WO2022255206A1 (en) * 2021-06-04 2022-12-08 パナソニックIpマネジメント株式会社 Information processing apparatus, information processing method, and computer program

Also Published As

Publication number Publication date
JP6385627B1 (en) 2018-09-05
JPWO2019092792A1 (en) 2019-11-14
TW201918807A (en) 2019-05-16

Similar Documents

Publication Publication Date Title
WO2019092792A1 (en) Display control device, display control method, and display control program
JP6787966B2 (en) Robot control device and display device using augmented reality and mixed reality
Park et al. Hands-free human–robot interaction using multimodal gestures and deep learning in wearable mixed reality
US11173601B2 (en) Teaching device for performing robot teaching operations and teaching method
JP6810093B2 (en) Robot simulation device
JP6355978B2 (en) Program and image generation apparatus
US9919421B2 (en) Method and apparatus for robot path teaching
EP2954987A1 (en) Teaching system, robot system, and teaching method
CA2684472C (en) Methods, devices, and systems for automated movements involving medical robots
US20150151431A1 (en) Robot simulator, robot teaching device, and robot teaching method
US20150097777A1 (en) 3D Motion Interface Systems and Methods
US20150273689A1 (en) Robot control device, robot, robotic system, teaching method, and program
WO2003099526A1 (en) A method and a system for programming an industrial robot
JP2015231445A (en) Program and image generating device
JP5108032B2 (en) Multi-joint structure teaching device
JPH06131442A (en) Three-dimensional virtual image modeling device
JPWO2019180916A1 (en) Robot controller
JP7035555B2 (en) Teaching device and system
WO2022176928A1 (en) Teaching device
CN109661621B (en) Machining simulation display device and machining simulation display method
JPS60195613A (en) Robot teaching device with verifying function
Otaduy et al. User-centric viewpoint computation for haptic exploration and manipulation
JP7068416B2 (en) Robot control device using augmented reality and mixed reality, computer program for defining the position and orientation of the robot, method for defining the position and orientation of the robot, computer program for acquiring the relative position and orientation, and method for acquiring the relative position and orientation.
JP2792842B2 (en) Robot work teaching method and device
JP2023017441A (en) Image processing device

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018526967

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17931378

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17931378

Country of ref document: EP

Kind code of ref document: A1