WO2024154218A1 - Programming device - Google Patents

Programming device

Info

Publication number
WO2024154218A1
WO2024154218A1 (PCT/JP2023/001146)
Authority
WO
WIPO (PCT)
Prior art keywords
model
line
workpiece
visual sensor
programming device
Prior art date
Application number
PCT/JP2023/001146
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroyuki Yoneyama (米山寛之)
Original Assignee
FANUC Corporation (ファナック株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FANUC Corporation (ファナック株式会社)
Priority to PCT/JP2023/001146
Publication of WO2024154218A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00: Programme-control systems
    • G05B19/02: Programme-control systems electric
    • G05B19/18: Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
    • G05B19/408: Numerical control [NC] characterised by data handling or data format, e.g. reading, buffering or conversion of data
    • G05B19/4093: Numerical control [NC] characterised by part programming, e.g. entry of geometrical information as taken from a technical drawing, combining this with machining and material information to obtain control information, named part programme, for the NC machine

Definitions

  • This disclosure relates to a programming device that generates programs for industrial machinery.
  • As a method for specifying a processing line on a three-dimensional model of a workpiece, a method is known in which the processing line is specified by detecting a shape feature consisting of a contour line or a surface of a basic shape, including circles and polygons, on the three-dimensional model of the workpiece, or of a shape formed by combining a plurality of basic shapes, or a portion where three-dimensional models of the workpiece contact each other during welding or the like.
  • See, for example, Patent Documents 1 and 2.
  • Also known is a method in which a three-dimensional shape including a curved surface or a plurality of continuous planes is filled with a motion pattern consisting of a continuous trajectory representing a periodic motion of a tool, the three-dimensional shape is arranged in a virtual space so that the motion pattern is projected onto at least one surface of a workpiece model, and the motion pattern is projected onto that surface to create a machining path for the tool. See, for example, Patent Document 3.
  • It is therefore desirable to generate a machining line that traces an arbitrary and complex trajectory on the surface of a workpiece, and to generate a program for an industrial machine to perform machining work along that trajectory based on the generated machining line.
  • One aspect of the programming device disclosed herein is a programming device that places an industrial machine, a tool attached to the industrial machine, a visual sensor, and a model of a workpiece in a virtual space, and generates a program in which the industrial machine model performs an operation on the workpiece model using the tool model, places a label with a contour line on the surface of the workpiece model, generates a captured image of the label at the position where the visual sensor model is placed based on imaging parameters set for the visual sensor model, detects the contour line by performing image processing on the captured image, generates a processing line based on the detected contour line, calculates a normal vector of the generated processing line with respect to the surface of the workpiece model, and generates the program based on the processing line and the normal vector.
  • One aspect of the programming device disclosed herein is a programming device that generates a program for an industrial machine, a tool attached to the industrial machine, a visual sensor, and a workpiece arranged in real space, in which the industrial machine performs an operation on the workpiece using the tool, and the programming device places a label with a contour line on the surface of the workpiece, acquires an image of the label captured at the position where the visual sensor is arranged based on imaging parameters set for the visual sensor, detects the contour line by performing image processing on the captured image, generates a processing line based on the detected contour line, calculates a normal vector of the generated processing line with respect to the surface of the workpiece, and generates the program based on the processing line and the normal vector.
  • FIG. 1 is a functional block diagram showing an example of the functional configuration of a programming device according to a first embodiment.
  • FIG. 2 is a diagram showing an example of an industrial machinery system model in a virtual space.
  • FIG. 3 is a diagram showing an example in which a label is placed on a surface of a workpiece model.
  • FIG. 4 is a diagram showing an example of the positional relationship between a visual sensor model and a label.
  • FIG. 5 is a diagram showing an example of three-dimensional points detected on a contour line.
  • FIG. 6 is a diagram showing an example of a generated processing line.
  • FIG. 7 is a diagram showing an example of normal vectors to the surface of a workpiece model calculated at each point on a generated processing line.
  • FIG. 8 is a diagram showing an example of set teaching points.
  • FIG. 9 is a diagram showing an example of a simulation of a generated robot program.
  • FIG. 10 is a flowchart illustrating the program generation process of the programming device.
  • FIG. 11 is a functional block diagram showing an example of the functional configuration of a programming device according to a second embodiment.
  • FIG. 12 is a diagram showing an example of a label whose contour lines intersect with each other at at least one point.
  • FIG. 13 is a diagram showing an example of the positional relationship between a visual sensor model and a label.
  • FIG. 14 is a diagram showing an example of a plurality of line segments obtained by dividing a contour line.
  • FIG. 15 is a diagram showing an example of a result of combining a plurality of divided line segments.
  • FIG. 16 is a diagram showing an example of three-dimensional points detected on a contour line.
  • FIG. 17 is a diagram showing an example of a generated processing line.
  • FIG. 18 is a diagram showing an example of normal vectors to the surface of a workpiece model calculated at each point on a generated processing line.
  • FIG. 19 is a diagram showing an example of set teaching points.
  • FIG. 20 is a flowchart illustrating the program generation process of the programming device.
  • FIG. 21 is a functional block diagram showing an example of the functional configuration of a programming device according to a third embodiment.
  • FIG. 22 is a flowchart illustrating the program generation process of the programming device.
  • FIG. 23 is a diagram showing an example of a visual sensor fixedly arranged separately from a robot.
  • In the present embodiment, in a virtual space in which models of an industrial machine, a tool attached to the industrial machine, a visual sensor, and a workpiece are arranged, the programming device places a label with a contour line drawn on it on the surface of the model of the workpiece, and generates a captured image of the label at the position where the model of the visual sensor is placed, based on imaging parameters set for the model of the visual sensor.
  • The programming device detects the contour line by performing image processing on the captured image, generates a processing line based on the detected contour line, calculates a normal vector of the generated processing line with respect to the surface of the model of the workpiece, and generates a program in which the industrial machine performs work on the workpiece with a tool, based on the processing line and the normal vector.
  • This makes it possible to generate a processing line that draws an arbitrary and complex trajectory on the surface of the workpiece, and to generate a program for an industrial machine to perform the machining work of that trajectory based on the generated processing line.
  • FIG. 1 is a functional block diagram showing an example of the functional configuration of a programming device according to the first embodiment.
  • As shown in Fig. 1, the programming device 1 is a known computer and includes a control unit 10, an input unit 11, a display unit 12, and a storage unit 13.
  • the control unit 10 includes a virtual space creation unit 101, a three-dimensional model placement unit 102, an image placement unit 103, an image capture unit 104, a contour line detection unit 105, a three-dimensional data conversion unit 106, a processing line generation unit 107, and a robot program generation unit 108.
  • the programming device 1 may be connected to a robot control device (not shown) that controls the operation of a robot (not shown) via a network (not shown) such as a LAN (Local Area Network) or the Internet. Alternatively, the programming device 1 may be directly connected to the robot control device (not shown) via a connection interface (not shown).
  • the input unit 11 is, for example, a keyboard or a touch panel arranged on the display unit 12 (described later), and receives input from a user.
  • the display unit 12 is, for example, a liquid crystal display, etc.
  • The display unit 12 displays, for example, 3D CAD data or the like representing in three dimensions a robot (not shown) input (selected) by a user via the input unit 11 (hereinafter also referred to as a "robot model"), 3D CAD data or the like representing in three dimensions a tool (not shown) attached to the robot (hereinafter also referred to as a "tool model"), 3D CAD data or the like representing in three dimensions a visual sensor (not shown) (hereinafter also referred to as a "visual sensor model"), and 3D CAD data or the like representing in three dimensions a workpiece (not shown) (hereinafter also referred to as a "workpiece model").
  • the storage unit 13 is a solid state drive (SSD), a hard disk drive (HDD), etc.
  • the storage unit 13 may store a generation program for generating a robot program in which a robot (not shown) performs an operation on a workpiece using a tool, a generated robot program, etc.
  • the storage unit 13 also has a model storage unit 131.
  • the model storage unit 131 stores, for example, 3D CAD data (robot model) of a robot (not shown), 3D CAD data (tool model) of a tool (not shown), 3D CAD data (visual sensor model) of a visual sensor (not shown), and 3D CAD data (workpiece model) of a workpiece (not shown) that are input (selected) by a user via the input unit 11 and displayed on the display unit 12.
  • the control unit 10 has a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), a CMOS (Complementary Metal-Oxide-Semiconductor) memory, etc., which are configured to be able to communicate with each other via a bus, and are well known to those skilled in the art.
  • the CPU is a processor that controls the entire programming device 1.
  • The CPU reads out the system program and application program stored in the ROM via the bus, and controls the entire programming device 1 according to the system program and application program. As a result, as shown in Fig. 1, the control unit 10 is configured to realize the functions of the virtual space creation unit 101, the three-dimensional model placement unit 102, the image placement unit 103, the image capture unit 104, the contour line detection unit 105, the three-dimensional data conversion unit 106, the processing line generation unit 107, and the robot program generation unit 108.
  • the RAM stores various data such as temporary calculation data and display data.
  • the CMOS memory is backed up by a battery (not shown) and is configured as a non-volatile memory that retains its stored state even when the power supply of the programming device 1 is turned off.
  • the virtual space creation unit 101 creates a virtual space that is a three-dimensional representation of a work space in which a robot (not shown), a tool (not shown), a visual sensor (not shown), and a workpiece (not shown) are arranged.
  • the three-dimensional model placement unit 102 places a robot model of a robot (not shown), a tool model of a tool (not shown), a visual sensor model of a visual sensor (not shown), and a work model of a work (not shown) within the three-dimensional virtual space created by the virtual space creation unit 101 in response to, for example, a user's input operation on the input unit 11.
  • FIG. 2 is a diagram showing an example of an industrial machinery system model in a virtual space. As shown in Fig. 2, in order to place a robot (not shown) in the virtual space, the three-dimensional model placement unit 102 reads out the robot model of the robot from the model storage unit 131, and places the read robot model in the virtual space.
  • Similarly, the three-dimensional model placement unit 102 reads out the tool model of a tool (not shown) from the model storage unit 131 and places the read tool model at the tool tip of the robot model in the virtual space. Furthermore, the three-dimensional model placement unit 102 reads out the visual sensor model of a visual sensor (not shown) from the model storage unit 131 and places the read visual sensor model at the tip of the arm of the robot model in the virtual space. Furthermore, in order to place the workpiece model of a workpiece (not shown) in the virtual space, the three-dimensional model placement unit 102 reads out the workpiece model from the model storage unit 131 and places the read workpiece model in the virtual space. In FIG. 2, the workpiece model is placed on a rectangular parallelepiped model of a jig or the like (not shown).
  • The image placement unit 103 places a label, such as a sticker, on which a contour line is drawn, on the surface of the workpiece model, as shown in FIG. 3, for example. Note that the label shown in FIG. 3 has the contour line of the letter "F" drawn on it.
  • The image capture unit 104 operates the robot model in the virtual space, positions the visual sensor model so that the perpendicular through the center of the label (indicated by the dotted line in Fig. 4) coincides with the optical axis of the visual sensor model, and adjusts the height of the visual sensor model so that the entire label falls within the field of view of the visual sensor model. It is assumed that the visual sensor model is set with the imaging parameters necessary to capture images comparable to those of a visual sensor in real space, such as the position of the sensor model coordinate system as viewed from the robot model coordinate system, the focal length, and the image size.
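The field-of-view adjustment described here can be illustrated with a simple pinhole-camera calculation. The sketch below is not taken from the patent: it assumes a pinhole model and hypothetical parameters (label size, focal length, image size, pixel pitch) to estimate the sensor-to-label distance at which the entire label fits in the image.

```python
def camera_distance_for_label(label_w, label_h, focal_len, img_w, img_h, pixel_size):
    """Distance along the optical axis (in the same unit as label_w/label_h,
    mm here) at which a label_w x label_h label fully fits in the image."""
    sensor_w = img_w * pixel_size  # physical sensor width in mm
    sensor_h = img_h * pixel_size  # physical sensor height in mm
    # Pinhole projection: label_size / distance = sensor_size / focal_len,
    # so the label exactly fills the image at distance = size * f / sensor.
    d_w = label_w * focal_len / sensor_w
    d_h = label_h * focal_len / sensor_h
    return max(d_w, d_h)  # take the larger distance so both dimensions fit

# Example: 100 mm x 80 mm label, 8 mm lens, 640x480 image, 5 um pixels.
print(camera_distance_for_label(100.0, 80.0, 8.0, 640, 480, 0.005))  # ~266.7 mm
```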
  • Based on the imaging parameters set for the visual sensor model, the image capturing unit 104 generates a pseudo captured image that represents what the visual sensor model would capture of the label at the position where the visual sensor model is placed. As shown in Fig. 4, the captured image shows the contour line of the letter "F".
  • The contour line detection unit 105 detects the contour line by performing image processing on the pseudo captured image generated by the image capturing unit 104. Specifically, the contour line detection unit 105 detects the black portion of the label in the pseudo captured image as the contour line by using a known method (for example, line drawing extraction).
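As a concrete illustration of this detection step, the sketch below uses OpenCV as a stand-in for the unnamed "known method"; the library choice, the threshold value, and the input path are assumptions for illustration, not details from the patent.

```python
import cv2

def detect_contour_pixels(image_path):
    """Detect the dark (drawn) portion of the label in a captured image and
    return each detected contour as an array of (u, v) pixel coordinates."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # The contour line is dark on a light label, so use an inverted
    # threshold to make the drawn line the foreground.
    _, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    return [c.reshape(-1, 2) for c in contours]
```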
  • The three-dimensional data conversion unit 106 converts the contour line detected by the contour line detection unit 105 into a set of three-dimensional data. Specifically, as shown in Fig. 5, the three-dimensional data conversion unit 106 converts the detected contour line in the image captured by the visual sensor model into the three-dimensional position coordinates (X, Y, Z) of three-dimensional points (the black points in Fig. 5) on the contour line, with reference to the robot model coordinate system or the sensor model coordinate system. The three-dimensional data conversion unit 106 stores the converted three-dimensional data in the storage unit 13.
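One way such a conversion can be realized, sketched here under explicit assumptions, is to back-project each contour pixel through a pinhole model onto the plane of the workpiece surface. The intrinsics K, the camera pose (R, t) in the robot model frame, and the plane parameters are hypothetical inputs standing in for the patent's "imaging parameters".

```python
import numpy as np

def pixels_to_3d_points(pixels, K, R, t, plane_n, plane_d):
    """Back-project pixels onto the workpiece plane n.X = d.

    pixels: (N, 2) array of (u, v) pixel coordinates
    K: 3x3 camera intrinsics; R (3x3), t (3,): camera pose in the robot frame
    Returns an (N, 3) array of points in the robot model coordinate system.
    """
    K_inv = np.linalg.inv(K)
    points = []
    for u, v in pixels:
        ray = R @ (K_inv @ np.array([u, v, 1.0]))  # viewing ray, robot frame
        # Intersect the ray t + s * ray with the plane n.X = d.
        s = (plane_d - plane_n @ t) / (plane_n @ ray)
        points.append(t + s * ray)
    return np.asarray(points)
```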
  • The processing line generating unit 107 generates a processing line from the set of three-dimensional data. Specifically, the processing line generating unit 107 generates the processing line indicated by the dashed line in FIG. 6, for example, by connecting each of the three-dimensional data of the contour line stored in the storage unit 13. In addition, as shown in FIG. 7, the processing line generating unit 107 calculates the surface of the workpiece model on which the contour-line label is placed from the three-dimensional data of the contour line stored as a collection of three-dimensional points, and calculates the normal vector to the surface of the workpiece model at each point on the processing line.
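The patent does not fix how the surface is computed from the stored three-dimensional points. A common choice, sketched below under that assumption, is a least-squares plane fit whose unit normal then serves as the normal vector at every point of a planar face.

```python
import numpy as np

def surface_normal(points):
    """Fit a plane to (N, 3) points on the workpiece surface by SVD and
    return its unit normal."""
    centered = points - points.mean(axis=0)
    # The right singular vector of the smallest singular value is the
    # direction of least variance, i.e. the normal of the best-fit plane.
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    # Orient the normal consistently, e.g. toward the sensor side (+Z here).
    return normal if normal[2] >= 0 else -normal
```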
  • The robot program generating unit 108 generates a program based on the processing line and normal vectors generated by the processing line generating unit 107. Specifically, the robot program generating unit 108 calculates the posture of the robot model at each three-dimensional point on the processing line from the normal vector to the surface of the workpiece model at that point, and sets teaching points of the robot program along the processing line, as shown in Fig. 8.
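A hedged sketch of how a tool posture could be derived at each teaching point: the approach axis is aligned with the negated surface normal, and the remaining axes follow the travel direction along the processing line. This frame convention is an illustrative assumption; the patent states only that the posture is calculated from the normal vector.

```python
import numpy as np

def teaching_point_frame(point, next_point, normal):
    """Return a 4x4 pose whose Z axis points into the surface and whose
    X axis follows the processing line from point toward next_point.
    Assumes the travel direction is not parallel to the normal."""
    z = -normal / np.linalg.norm(normal)  # tool approach direction
    travel = next_point - point
    x = travel - (travel @ z) * z         # project travel into the surface
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)                    # complete the right-handed frame
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2] = x, y, z
    pose[:3, 3] = point
    return pose
```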
  • the robot program generating unit 108 generates a robot program in which the robot model performs work on the workpiece model using the tool model based on the processing line and the normal vector of the processing line to the surface of the workpiece model.
  • the programming device 1 then executes a simulation of the generated robot program, allowing the user to confirm that the robot model moves the tool model along the machining line of the letter "F", as shown in FIG. 9.
  • FIG. 10 is a flowchart illustrating the program generation process of the programming device 1. The flow shown here is executed every time an instruction to generate a robot program is received.
  • In step S1, the three-dimensional model placement unit 102 places the robot model of a robot (not shown), the tool model of a tool (not shown), the visual sensor model of a visual sensor (not shown), and the workpiece model of a workpiece (not shown) that constitute the industrial machinery system model in the three-dimensional virtual space created by the virtual space creation unit 101, in response to an input operation by the user on the input unit 11.
  • In step S2, the image placement unit 103 places a label with a contour line drawn on it on the surface of the workpiece model.
  • In step S3, the image capturing unit 104 operates the robot model in the virtual space, positions the visual sensor model so that the perpendicular through the center of the label on the workpiece model coincides with the optical axis of the visual sensor model, and adjusts the height of the visual sensor model so that the entire label falls within the field of view of the visual sensor model. Based on the imaging parameters set for the visual sensor model, the image capturing unit 104 generates a pseudo captured image in which the label is captured by the visual sensor model at the position where the visual sensor model is placed.
  • In step S4, the contour line detection unit 105 detects the contour line by performing image processing on the pseudo captured image generated in step S3.
  • In step S5, the three-dimensional data conversion unit 106 converts the contour line detected in step S4 into a collection of three-dimensional data.
  • In step S6, the processing line generation unit 107 generates a processing line from the collection of three-dimensional data.
  • In step S7, the processing line generation unit 107 calculates the surface of the workpiece model on which the contour-line label is placed from the three-dimensional data of the contour line stored as a collection of three-dimensional points, and calculates the normal vector to the surface of the workpiece model at each point on the processing line.
  • In step S8, the robot program generation unit 108 generates a program based on the processing line and normal vectors generated in step S7.
  • As described above, the programming device 1 places a label with a contour line drawn on it on the surface of a workpiece model placed in a virtual space, and generates a captured image of the label at the position where the visual sensor model is placed, based on the imaging parameters set for the visual sensor model.
  • the programming device 1 detects the contour line by performing image processing on the captured image, generates a processing line based on the detected contour line, calculates a normal vector of the generated processing line with respect to the surface of the workpiece model, and generates a program for an industrial machine to perform work on a workpiece with a tool based on the processing line and the normal vector.
  • the programming device 1 can generate a processing line that draws an arbitrary and complex trajectory on the surface of the workpiece, and generate a program for an industrial machine to perform a processing operation of the trajectory based on the generated processing line. Furthermore, the programming device 1 does not require the operator to be highly skilled, and can reduce the effort and time required for teaching work.
  • The first embodiment has been described above.
  • In the first embodiment, the programming device 1 places, on the surface of the model of the workpiece, a label on which a contour line is drawn that does not intersect with another contour line, and generates a captured image of the label at the position where the model of the visual sensor is placed, based on the imaging parameters set for the model of the visual sensor.
  • The programming device 1 then detects the contour line by performing image processing on the captured image, generates a processing line based on the detected contour line, calculates a normal vector of the generated processing line with respect to the surface of the workpiece, and generates a program in which an industrial machine performs work on the workpiece with a tool, based on the processing line and the normal vector.
  • In contrast, in the second embodiment, the programming device 1A places, on the surface of the model of the workpiece, a label on which a contour line is drawn that intersects with another contour line at at least one point, detects the contour line by performing image processing on the captured image in which the contour line is captured and divides it into a plurality of line segments, combines at least two line segments drawn consecutively among the divided plurality of line segments into one line segment, and specifies the order of work to be performed by the program for each line segment.
  • In this way, the programming device 1A of the second embodiment can also generate a processing line that draws an arbitrary and complex trajectory on the surface of the workpiece, and generate a program for an industrial machine to perform the machining work of that trajectory based on the generated processing line.
  • Fig. 11 is a functional block diagram showing an example of the functional configuration of a programming device according to the second embodiment. Elements having the same functions as those of the programming device 1 in Fig. 1 are given the same reference numerals and detailed description thereof will be omitted.
  • the programming device 1A according to the second embodiment has a control unit 10a, an input unit 11, a display unit 12, and a storage unit 13.
  • the control unit 10a includes a virtual space creation unit 101, a three-dimensional model placement unit 102, an image placement unit 103, an image capturing unit 104, a contour line detection unit 105a, a three-dimensional data conversion unit 106a, a processing line generation unit 107a, and a robot program generation unit 108a.
  • the input unit 11, the display unit 12, and the storage unit 13 have the same functions as the input unit 11, the display unit 12, and the storage unit 13 in the first embodiment.
  • the model storage unit 131 has the same function as the model storage unit 131 in the first embodiment.
  • the control unit 10a includes a CPU, a ROM, a RAM, a CMOS memory, etc., which are configured to be able to communicate with each other via a bus, and are well known to those skilled in the art.
  • the CPU is a processor that controls the entire programming device 1A.
  • The CPU reads out the system program and application program stored in the ROM via the bus, and controls the entire programming device 1A in accordance with the system program and application program. As a result, as shown in Fig. 11, the control unit 10a is configured to realize the functions of the virtual space creation unit 101, the three-dimensional model placement unit 102, the image placement unit 103, the image capture unit 104, the contour line detection unit 105a, the three-dimensional data conversion unit 106a, the processing line generation unit 107a, and the robot program generation unit 108a.
  • the virtual space creation unit 101, the three-dimensional model placement unit 102, the image placement unit 103, and the image capture unit 104 have functions equivalent to those of the virtual space creation unit 101, the three-dimensional model placement unit 102, the image placement unit 103, and the image capture unit 104 in the first embodiment.
  • The contour line detection unit 105a detects contour lines by performing image processing on the pseudo captured image generated by the image capturing unit 104. Here, as shown in FIG. 12, a label with the letter "a", whose contour lines intersect with each other at at least one point, is placed on the workpiece model.
  • As shown in FIG. 13, the contour line detection unit 105a acquires a pseudo captured image from the image capturing unit 104. The pseudo captured image is generated, based on the set imaging parameters, by capturing an image of the label at the position where the visual sensor model is placed, with the visual sensor model positioned so that the perpendicular through the center of the label (indicated by the dashed dotted line) coincides with the optical axis of the visual sensor model.
  • The contour line detection unit 105a detects the black part of the label in the pseudo captured image as contour lines by using a known method (e.g., line drawing extraction), in the same way as the contour line detection unit 105 in the first embodiment. As shown in FIG. 14, the contour line detection unit 105a divides the detected contour of the character "a" into eleven line segments L1 to L11 based on the positions of the intersections. The contour line detection unit 105a then combines at least two selected line segments from among the line segments L1 to L11 into one line segment in response to, for example, a user's input operation on the input unit 11. Specifically, as shown in Fig. 15, the contour line detection unit 105a combines the line segments L1 and L2 into one line segment C1.
  • the contour detection unit 105a also combines the line segments L3 to L6 into one line segment C2.
  • the contour detection unit 105a also combines the line segments L7 to L11 into one line segment C3.
  • The contour line detection unit 105a then specifies, for the robot program, that work is to be performed in the order of line segments C1, C2, and C3, for example.
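A minimal sketch of the combining step under simple assumptions: each detected segment is an ordered polyline of three-dimensional points, and segments "drawn consecutively" are those whose endpoints coincide within a tolerance. The function name and tolerance are illustrative.

```python
import numpy as np

def combine_segments(segments, tol=1e-6):
    """segments: list of (N_i, 3) arrays of consecutive 3D points.
    Chains consecutively drawn segments end-to-start into combined polylines."""
    combined = [segments[0]]
    for seg in segments[1:]:
        last = combined[-1]
        if np.linalg.norm(last[-1] - seg[0]) < tol:
            # Drawn consecutively: append, dropping the shared endpoint.
            combined[-1] = np.vstack([last, seg[1:]])
        else:
            combined.append(seg)  # otherwise start a new combined segment
    return combined
```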
  • the three-dimensional data conversion unit 106a converts the contours detected by the contour detection unit 105a into a set of three-dimensional data, similar to the three-dimensional data conversion unit 106 of the first embodiment. Specifically, for example, as shown in Fig. 16, the three-dimensional data conversion unit 106a converts the three-dimensional position coordinates (X, Y, Z) of three-dimensional points (black points in Fig. 16) on the detected contour line based on the robot model coordinate system or the sensor model coordinate system in the captured image captured by the visual sensor model into a set of three-dimensional data. The three-dimensional data conversion unit 106a stores the converted three-dimensional data in the storage unit 13.
  • the processing line generating unit 107a generates a processing line from a set of three-dimensional data, similarly to the processing line generating unit 107 of the first embodiment. Specifically, the processing line generating unit 107a generates a processing line indicated by a dashed line as shown in FIG. 17, for example, by connecting each of the three-dimensional data of the contour lines stored in the storage unit 13. In addition, the processing line generating unit 107a calculates the surface of the work model on which an image displaying the contour line is placed from the three-dimensional data of the contour line stored as a collection of three-dimensional points, as shown in Figure 18, and calculates the normal vector to the surface of the work model at each point on the processing line.
  • the robot program generating unit 108a generates a program based on the order of line segments specified by the contour line detecting unit 105a and the processing lines and normal vectors generated by the processing line generating unit 107a. Specifically, the robot program generating unit 108a calculates the posture of the robot model at each three-dimensional point on the processing line from the normal vector to the surface of the workpiece model at each three-dimensional point on the processing line, and sets teaching points of the robot program along the processing line, as shown in Fig. 19.
  • The robot program generating unit 108a generates a robot program in which the robot model performs work on the workpiece model using the tool model in the specified order of line segments C1 to C3, based on the processing line and the normal vector of the processing line to the surface of the workpiece model.
  • FIG. 20 is a flowchart illustrating the program generation process of the programming device 1A.
  • The flow shown here is executed every time an instruction to generate a robot program is received.
  • The processes in steps S1 to S3 and steps S5 to S7 are similar to those in steps S1 to S3 and steps S5 to S7 in FIG. 10, and therefore will not be described.
  • In step S4a, the contour line detection unit 105a detects contour lines by performing image processing on the pseudo captured image generated in step S3. If a detected contour line has an intersection, the contour line detection unit 105a divides it into a plurality of line segments based on the positions of the intersections. In response to the user's input operation on the input unit 11, the contour line detection unit 105a combines at least two selected line segments from among the plurality of line segments into one line segment, and specifies the order in which work is to be performed for each line segment by the robot program.
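As an illustration of how the intersections used in step S4a might be located, the sketch below marks junction pixels of a one-pixel-wide (skeletonized) line drawing, i.e. pixels with three or more neighbors. This is one common heuristic, assumed here for illustration; the patent does not prescribe a method.

```python
import numpy as np

def junction_mask(skeleton):
    """skeleton: 2D boolean array of a one-pixel-wide line drawing.
    Returns a boolean mask of pixels with three or more 8-connected
    neighbors, i.e. candidate intersection points for dividing the contour."""
    s = skeleton.astype(np.uint8)
    padded = np.pad(s, 1)
    # Count 8-connected neighbors by summing the eight shifted copies.
    neighbors = sum(
        padded[1 + dy:1 + dy + s.shape[0], 1 + dx:1 + dx + s.shape[1]]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    return skeleton & (neighbors >= 3)
```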
  • In step S8a, the robot program generation unit 108a generates a program based on the order of the line segments specified in step S4a and the processing lines and normal vectors generated in step S7.
  • As described above, when a label with a contour line that intersects another contour line at at least one point is placed on the surface of a workpiece, the programming device 1A according to the second embodiment detects the contour line by performing image processing on a captured image of the contour line, divides the contour line into a plurality of line segments, combines at least two of the divided line segments that are drawn consecutively into one line segment, and specifies the order of work to be performed for each line segment by the program. In this way, the programming device 1A can generate a processing line that draws an arbitrary and complex trajectory on the surface of the workpiece, and generate a program for an industrial machine to perform the machining work of that trajectory based on the generated processing line. Furthermore, the programming device 1A does not require the operator to be highly skilled, and can reduce the effort and time required for teaching work.
  • The second embodiment has been described above.
  • In the first embodiment, the programming device 1 places, on the surface of the model of the workpiece, a label on which a contour line is drawn that does not intersect with another contour line, and generates a captured image of the label at the position where the model of the visual sensor is placed, based on the imaging parameters set for the model of the visual sensor.
  • The programming device 1 detects the contour line by performing image processing on the captured image, generates a processing line based on the detected contour line, calculates a normal vector of the generated processing line with respect to the surface of the model of the workpiece, and generates a program for an industrial machine to perform work on the workpiece with a tool, based on the processing line and the normal vector.
  • The programming device 1A of the second embodiment differs from the first embodiment in that it places, on the surface of the model of the workpiece, a label on which a contour line is drawn that intersects with another contour line at at least one point, detects the contour line by performing image processing on the captured image in which the contour line is captured, divides the contour line into a plurality of line segments, combines at least two line segments drawn consecutively among the divided plurality of line segments into one line segment, and specifies the order of work to be performed by the program for each line segment.
  • In contrast, the programming device 1B of the third embodiment differs from the first and second embodiments in that, for an industrial machine, a tool attached to the industrial machine, a visual sensor, and a workpiece arranged in real space, a label having a contour line drawn on it is placed on the surface of the workpiece, and an image of the label captured at the position where the visual sensor is arranged is acquired from the visual sensor based on imaging parameters set in the visual sensor.
  • In this way, the programming device 1B of the third embodiment can also generate a processing line that draws an arbitrary and complex trajectory on the surface of the workpiece, and generate a program for an industrial machine to perform the machining work of that trajectory based on the generated processing line.
  • Fig. 21 is a functional block diagram showing an example of the functional configuration of a programming device according to the third embodiment. Elements having the same functions as those of the programming device 1 in Fig. 1 are given the same reference numerals and detailed description thereof will be omitted.
  • a programming device 1B according to the third embodiment has a control unit 10b, an input unit 11, a display unit 12, and a storage unit 13b.
  • The control unit 10b includes an image placement unit 103b, an image capturing unit 104b, a contour line detection unit 105, a three-dimensional data conversion unit 106, a processing line generation unit 107, and a robot program generation unit 108.
  • the input unit 11 and the display unit 12 have the same functions as the input unit 11 and the display unit 12 in the first embodiment.
  • The storage unit 13b is an SSD, an HDD, or the like, similar to the storage unit 13 in the first embodiment, and may store a generation program that generates a robot program in which a robot (not shown) performs work on a workpiece using a tool, the generated robot program, and the like.
  • the control unit 10b includes a CPU, a ROM, a RAM, a CMOS memory, etc., which are configured to be able to communicate with each other via a bus, and are well known to those skilled in the art.
  • the CPU is a processor that controls the entire programming device 1B.
  • The CPU reads out the system program and application program stored in the ROM via the bus, and controls the entire programming device 1B in accordance with the system program and application program. As a result, as shown in Fig. 21, the control unit 10b is configured to realize the functions of the image placement unit 103b, the image capturing unit 104b, the contour line detection unit 105, the three-dimensional data conversion unit 106, the processing line generation unit 107, and the robot program generation unit 108.
  • the contour line detection unit 105, the three-dimensional data conversion unit 106, the processing line generation unit 107, and the robot program generation unit 108 have functions equivalent to the contour line detection unit 105, the three-dimensional data conversion unit 106, the processing line generation unit 107, and the robot program generation unit 108 in the first embodiment.
  • The image placement unit 103b may, for example, use a projector (not shown) to project, and thereby place, a label with a contour line drawn on it onto the surface of the workpiece, in the same manner as in FIG. 3.
  • The image capturing unit 104b operates the robot in real space via a robot control device (not shown) and, as in the case of Fig. 4, positions the visual sensor so that the perpendicular through the center of the label coincides with the optical axis of the visual sensor, and adjusts the height of the visual sensor so that the entire label falls within the field of view of the visual sensor.
  • The image capturing unit 104b captures an image of the label at the position where the visual sensor is placed, based on the imaging parameters set for the visual sensor, and outputs the captured image to the contour line detection unit 105.
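The patent does not specify the sensor's software interface for this real-space acquisition. The sketch below uses OpenCV's VideoCapture purely as a hypothetical stand-in for whatever API the visual sensor exposes.

```python
import cv2

def acquire_label_image(device_index=0):
    """Grab a single frame from the visual sensor at its current position
    and return it as a grayscale image ready for contour detection."""
    cap = cv2.VideoCapture(device_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("failed to capture an image from the sensor")
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
```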
  • FIG. 22 is a flowchart illustrating a program generation process of the programming device 1B.
  • the flow shown here is executed every time an instruction to generate a robot program is received.
  • the processes in steps S33 to S37 are similar to those in steps S4 to S8 in FIG. 10, and therefore will not be described.
  • In step S31, the image placement unit 103b uses a projector (not shown) or the like to project, and thereby place, a label with a contour line drawn on it onto the surface of the workpiece.
  • In step S32, the image capturing unit 104b operates the robot in real space to position the visual sensor so that the perpendicular through the center of the label on the workpiece coincides with the optical axis of the visual sensor, and adjusts the height of the visual sensor so that the entire label falls within the field of view of the visual sensor.
  • The image capturing unit 104b then acquires an image of the label captured at the position where the visual sensor is placed, based on the imaging parameters set for the visual sensor.
  • As described above, the programming device 1B places a label with a contour line drawn on it on the surface of a workpiece, and acquires from the visual sensor an image of the label captured at the position where the visual sensor is placed, based on the imaging parameters set in the visual sensor.
  • the programming device 1B detects a contour by performing image processing on the captured image, generates a processing line based on the detected contour, calculates a normal vector of the generated processing line with respect to the surface of the workpiece, and generates a program for an industrial machine to perform work on the workpiece with a tool based on the processing line and the normal vector.
  • the programming device 1B can generate a processing line that draws an arbitrary and complex trajectory on the surface of the workpiece, and generate a program for an industrial machine to perform a processing work of the trajectory based on the generated processing line. Furthermore, the programming device 1B does not require the operator to be highly skilled, and can reduce the effort and time required for teaching work.
  • The third embodiment has been described above.
  • In the third embodiment, the label placed on the workpiece is one in which contour lines, such as those of the letter "F", do not intersect with each other, but this is not limiting.
  • the label may be one in which the contour lines of the letter "a” and the like intersect with each other at at least one point.
  • the programming device 1B may detect the contour lines by performing image processing on the captured image in which the contour lines are captured, divide the contour lines into a plurality of line segments, combine at least two line segments drawn consecutively among the divided plurality of line segments into one line segment, and specify the order of operations to be performed by the program for each line segment.
  • the programming devices 1, 1A, and 1B disclosed herein can generate machining lines that trace arbitrary and complex trajectories on the surface of a workpiece, and generate a program for an industrial machine to perform machining work of the trajectory based on the generated machining lines.
  • In the above embodiments, the surface of the workpiece (workpiece model) is a flat surface, but this is not limiting.
  • The surface of the workpiece (workpiece model) may have any shape, including unevenness.
  • In that case, a sticker-type label may be attached to the surface of the workpiece (workpiece model) of any shape, or the label may be projected by a projector or the like.
  • In the above embodiments, the programming devices 1, 1A, and 1B generate robot programs for a robot as the industrial machine, but this is not limiting.
  • the programming devices 1, 1A, and 1B may generate programs for a machine tool, a forging machine, a laser processing machine, an injection molding machine, or the like as an industrial machine.
  • In the above embodiments, the visual sensor (visual sensor model) is attached to the tip of the arm of the robot (robot model), but this is not limiting.
  • The visual sensor (visual sensor model) may be fixed at a position separate from the robot (robot model), such as on a wall or ceiling, so that the entire label placed on the workpiece fits within its field of view, as shown in Fig. 23.
  • each function included in the programming devices 1, 1A, and 1B in the first to third embodiments can be realized by hardware, software, or a combination of these.
  • being realized by software means being realized by a computer reading and executing a program.
  • Non-transitory computer readable media include various types of tangible storage media. Examples of non-transitory computer readable media include magnetic recording media (e.g., flexible disks, magnetic tapes, hard disk drives), magneto-optical recording media (e.g., magneto-optical disks), CD-ROMs (Read Only Memory), CD-Rs, CD-R/Ws, and semiconductor memories (e.g., mask ROMs, PROMs (Programmable ROMs), EPROMs (Erasable PROMs), flash ROMs, RAMs).
  • the program may also be provided to the computer by various types of temporary computer readable media. Examples of temporary computer readable media include electrical signals, optical signals, and electromagnetic waves.
  • the temporary computer readable medium can provide the program to the computer via a wired communication path such as an electric wire or optical fiber, or via a wireless communication path.
  • The steps describing the program recorded on the recording medium include not only processes performed chronologically in the stated order, but also processes that are not necessarily performed chronologically and are executed in parallel or individually.
  • The steps describing the program may also be executed by cloud computing.
  • A programming device (1) according to the present disclosure places models of an industrial machine, a tool attached to the industrial machine, a visual sensor, and a workpiece in a virtual space, and generates a program in which the model of the industrial machine performs work on the model of the workpiece using the model of the tool.
  • The programming device places a label with a contour line drawn on it on the surface of the workpiece model, generates a captured image of the label at the position where the visual sensor model is placed based on imaging parameters set for the visual sensor model, detects the contour line by applying image processing to the captured image, generates a processing line based on the detected contour line, calculates a normal vector of the generated processing line with respect to the surface of the workpiece model, and generates the program based on the processing line and the normal vector.
  • the programming device (1B) is a programming device that generates a program in which an industrial machine performs an operation on a workpiece using a tool, for an industrial machine, a tool attached to the industrial machine, a visual sensor, and a workpiece that are arranged in real space, and the programming device places a label with a contour line drawn on the surface of the workpiece, obtains an image of the label captured at the position where the visual sensor is arranged based on imaging parameters set in the visual sensor, detects the contour line by performing image processing on the captured image, generates a processing line based on the detected contour line, calculates a normal vector of the generated processing line with respect to the surface of the workpiece, and generates a program based on the processing line and the normal vector.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Manufacturing & Machinery (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Geometry (AREA)
  • Numerical Control (AREA)

Abstract

The present invention generates a machining line which depicts an arbitrary and complex trajectory on a surface of a workpiece, and generates an industrial machine program for performing machining work along the trajectory on the basis of the generated machining line. This programming device generates a program by which respective models of an industrial machine, a tool, a visual sensor, and a workpiece are disposed in a virtual space, and the industrial machine model performs work on the workpiece model by means of the tool model, wherein: a label on which an outline is depicted is disposed on the surface of the workpiece model; a captured image of the label at a position at which the visual sensor model is disposed is generated on the basis of image-capturing parameters set for the visual sensor model; an outline is detected by executing image processing on the captured image; a machining line is generated on the basis of the detected outline; a normal vector to the surface of the workpiece model on the generated machining line is calculated; and the program is generated on the basis of the machining line and the normal vector.

Description

Programming Device
This disclosure relates to a programming device that generates programs for industrial machinery.
There exists a technology in which three-dimensional models of a robot equipped with a tool, a workpiece, and at least one peripheral device are arranged on a screen and displayed simultaneously, a processing line is specified on the three-dimensional model of the workpiece, a motion type, speed, position, and posture of a teaching point generated based on the specified processing line is specified, and an operation program for the robot to perform processing of the workpiece based on the specified processing line and the specified motion type, speed, position, and posture.
In addition, in a robot system having a robot, visual sensor, and workpiece within a working space, a technology exists in which a robot model of the robot, a visual sensor model of the visual sensor, and a workpiece model of the workpiece are placed in a virtual space that represents the working space in three dimensions, the workpiece model is measured using the visual sensor model, and a simulation is performed in which the robot model performs work on the workpiece model.
For example, as a method for specifying a processing line on a three-dimensional model of a workpiece, a method is known in which a processing line is specified by detecting a shape feature consisting of a contour line or a surface of a basic shape including a circle and a polygon on the three-dimensional model of the workpiece, or a shape formed by combining a plurality of basic shapes, or a portion where three-dimensional models of the workpiece contact each other during welding, etc. For example, see Patent Documents 1 and 2.
Alternatively, a method is known in which a three-dimensional shape including a curved surface or a three-dimensional shape including a plurality of continuous planes is filled with a motion pattern consisting of a continuous trajectory indicating a periodic motion of a tool, the three-dimensional shape is arranged in a virtual space so that the motion pattern is projected onto at least one surface of a work model, and the motion pattern is projected onto at least one surface of the work model to create a machining path for the tool. See, for example, Patent Document 3.
Patent Document 1: JP 2017-140684 A
Patent Document 2: JP 2019-48358 A
Patent Document 3: JP 2013-248677 A
However, teaching a robot's operation program for machining work along a complex trajectory must all be done by hand. The teaching work therefore takes considerable effort and time, and requires a skilled operator.
Therefore, it is desirable, for example, to generate a machining line that traces an arbitrary and complex trajectory on the surface of a workpiece, and to generate a program for an industrial machine to perform machining work along that trajectory based on the generated machining line.
One aspect of the programming device disclosed herein is a programming device that places an industrial machine, a tool attached to the industrial machine, a visual sensor, and a model of a workpiece in a virtual space, and generates a program in which the industrial machine model performs an operation on the workpiece model using the tool model, places a label with a contour line on the surface of the workpiece model, generates a captured image of the label at the position where the visual sensor model is placed based on imaging parameters set for the visual sensor model, detects the contour line by performing image processing on the captured image, generates a processing line based on the detected contour line, calculates a normal vector of the generated processing line with respect to the surface of the workpiece model, and generates the program based on the processing line and the normal vector.
One aspect of the programming device disclosed herein is a programming device that generates a program for an industrial machine, a tool attached to the industrial machine, a visual sensor, and a workpiece arranged in real space, in which the industrial machine performs an operation on the workpiece using the tool, and the programming device places a label with a contour line on the surface of the workpiece, acquires an image of the label captured at the position where the visual sensor is arranged based on imaging parameters set for the visual sensor, detects the contour line by performing image processing on the captured image, generates a processing line based on the detected contour line, calculates a normal vector of the generated processing line with respect to the surface of the workpiece, and generates the program based on the processing line and the normal vector.
<First Embodiment>
First, an outline of this embodiment will be described. In this embodiment, in a virtual space in which models of an industrial machine, a tool attached to the industrial machine, a visual sensor, and a workpiece are each arranged, a programming device places a label on which a contour line is drawn on a surface of the workpiece model, and generates a captured image of the label as seen from the position where the visual sensor model is placed, based on imaging parameters set for the visual sensor model. The programming device detects the contour line by performing image processing on the captured image, generates a processing line based on the detected contour line, calculates a normal vector of the generated processing line with respect to the surface of the workpiece model, and generates a program by which the industrial machine works on the workpiece with the tool, based on the processing line and the normal vector.
As a result, according to this embodiment, it is possible to generate a processing line that traces an arbitrary, complex trajectory on the surface of the workpiece, and to generate, from the generated processing line, a program for an industrial machine to machine along that trajectory.
The above is an outline of this embodiment.
Next, the configuration of this embodiment will be described in detail with reference to the drawings. Here, a case is illustrated in which models of a robot as the industrial machine, a tool, a visual sensor, and a workpiece are each placed in a virtual space, and a robot program by which the robot works on the workpiece with the tool is generated. Note that the present invention is also applicable to various other industrial machines, such as machine tools, industrial robots, service robots, forging machines, laser processing machines, and injection molding machines.
FIG. 1 is a functional block diagram showing an example of the functional configuration of the programming device according to the first embodiment.
As shown in FIG. 1, the programming device 1 is a known computer and includes a control unit 10, an input unit 11, a display unit 12, and a storage unit 13. The control unit 10 includes a virtual space creation unit 101, a three-dimensional model placement unit 102, an image placement unit 103, an image capture unit 104, a contour line detection unit 105, a three-dimensional data conversion unit 106, a processing line generation unit 107, and a robot program generation unit 108.
The programming device 1 may be connected to a robot control device (not shown) that controls the operation of a robot (not shown) via a network (not shown) such as a LAN (Local Area Network) or the Internet. Alternatively, the programming device 1 may be directly connected to the robot control device (not shown) via a connection interface (not shown).
<Input unit 11>
The input unit 11 is, for example, a keyboard or a touch panel arranged on the display unit 12 (described later), and receives input from the user.
<Display unit 12>
The display unit 12 is, for example, a liquid crystal display. As described later, the display unit 12 displays, for example, 3D CAD data or the like representing in three dimensions a robot (not shown) input (selected) by the user via the input unit 11 (hereinafter also referred to as a "robot model"), together with 3D CAD data or the like representing in three dimensions a tool (not shown) attached to the robot (hereinafter also referred to as a "tool model"), 3D CAD data or the like representing in three dimensions a visual sensor (not shown) (hereinafter also referred to as a "visual sensor model"), and 3D CAD data or the like representing in three dimensions a workpiece (not shown) (hereinafter also referred to as a "workpiece model").
<Storage unit 13>
The storage unit 13 is, for example, an SSD (Solid State Drive) or an HDD (Hard Disk Drive). The storage unit 13 may store a generation program for generating a robot program by which a robot (not shown) works on a workpiece with a tool, the generated robot program, and the like. The storage unit 13 also has a model storage unit 131.
As described above, the model storage unit 131 stores, for example, the 3D CAD data of the robot (robot model), the 3D CAD data of the tool (tool model), the 3D CAD data of the visual sensor (visual sensor model), and the 3D CAD data of the workpiece (workpiece model) that are input (selected) by the user via the input unit 11 and displayed on the display unit 12.
<Control unit 10>
The control unit 10 has a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), a CMOS (Complementary Metal-Oxide-Semiconductor) memory, and the like, which are configured to communicate with one another via a bus and are well known to those skilled in the art.
The CPU is a processor that controls the programming device 1 as a whole. The CPU reads the system program and application programs stored in the ROM via the bus and controls the entire programming device 1 in accordance with them. In this way, as shown in FIG. 1, the control unit 10 is configured to realize the functions of the virtual space creation unit 101, the three-dimensional model placement unit 102, the image placement unit 103, the image capture unit 104, the contour line detection unit 105, the three-dimensional data conversion unit 106, the processing line generation unit 107, and the robot program generation unit 108. The RAM stores various data, such as temporary calculation data and display data. The CMOS memory is backed up by a battery (not shown) and is configured as a non-volatile memory that retains its stored contents even when the programming device 1 is powered off.
The virtual space creation unit 101 creates a virtual space that three-dimensionally represents the work space in which the robot (not shown), the tool (not shown), the visual sensor (not shown), and the workpiece (not shown) are arranged.
The three-dimensional model placement unit 102 places a robot model of the robot (not shown), a tool model of the tool (not shown), a visual sensor model of the visual sensor (not shown), and a workpiece model of the workpiece (not shown) in the three-dimensional virtual space created by the virtual space creation unit 101, for example in response to the user's input operation on the input unit 11.
FIG. 2 is a diagram showing an example of an industrial machinery system model in the virtual space.
Specifically, as shown in FIG. 2, to place the robot (not shown) in the virtual space, the three-dimensional model placement unit 102 reads the robot model of the robot from the model storage unit 131 and places the read robot model in the virtual space.
The three-dimensional model placement unit 102 also reads the tool model of the tool (not shown) from the model storage unit 131 and places the read tool model at the tool tip of the robot model in the virtual space.
The three-dimensional model placement unit 102 also reads the visual sensor model of the visual sensor (not shown) from the model storage unit 131 and places the read visual sensor model at the arm tip of the robot model in the virtual space.
The three-dimensional model placement unit 102 also reads the workpiece model of the workpiece (not shown) from the model storage unit 131 and places the read workpiece model in the virtual space.
In FIG. 2, the workpiece model is placed on a rectangular parallelepiped model representing a jig or the like (not shown).
The image placement unit 103 places a label, such as a sticker on which a contour line is drawn, on a surface of the workpiece model, as shown in FIG. 3, for example. The label shown in FIG. 3 bears the contour line of the letter "F".
As shown in FIG. 4, the image capture unit 104 operates the robot model in the virtual space, places the visual sensor model so that the optical axis of the visual sensor model coincides with the straight line perpendicular to the label at its center (shown as a dash-dot line), and adjusts the height of the visual sensor model so that the entire label fits within the visual sensor model's field of view.
The visual sensor model is assumed to be set with the imaging parameters needed to image in the same way as the visual sensor in real space, such as the position of the sensor model coordinate system as seen from the robot model coordinate system, the focal length, and the image size.
Based on the imaging parameters set for the visual sensor model, the image capture unit 104 generates a pseudo captured image as if the visual sensor model had imaged the label from the position where it is placed. As shown in FIG. 4, the captured image contains the contour line of the letter "F".
The contour line detection unit 105 detects the contour line by performing image processing on the pseudo captured image generated by the image capture unit 104.
Specifically, the contour line detection unit 105 detects the black portion of the label in the pseudo captured image as the contour line using, for example, a known method such as line drawing extraction.
The three-dimensional data conversion unit 106 converts the contour line detected by the contour line detection unit 105 into a set of three-dimensional data.
Specifically, as shown in FIG. 5, for the image captured by the visual sensor model, the three-dimensional data conversion unit 106 converts the three-dimensional position coordinates (X, Y, Z) of the three-dimensional points on the detected contour line (the black dots in FIG. 5), referenced to the robot model coordinate system or the sensor model coordinate system, into a set of three-dimensional data. The three-dimensional data conversion unit 106 stores the converted three-dimensional data in the storage unit 13.
The processing line generation unit 107 generates a processing line from the set of three-dimensional data.
Specifically, the processing line generation unit 107 generates the processing line shown as a dashed line in FIG. 6, for example by connecting the pieces of three-dimensional contour data stored in the storage unit 13.
As shown in FIG. 7, the processing line generation unit 107 also calculates, from the three-dimensional contour data stored as a set of three-dimensional points, the surface of the workpiece model on which the label bearing the contour line is placed, and calculates the normal vector to that surface at each point on the processing line.
The robot program generation unit 108 generates the program based on the processing line and the normal vectors generated by the processing line generation unit 107.
Specifically, as shown in FIG. 8, the robot program generation unit 108 calculates the posture of the robot model at each three-dimensional point on the processing line from the normal vector to the workpiece model's surface at that point, and sets teaching points of the robot program along the processing line. From the processing line and its normal vectors with respect to the workpiece model's surface, the robot program generation unit 108 generates a robot program by which the robot model works on the workpiece model with the tool model.
The programming device 1 then runs a simulation of the generated robot program, allowing the user to confirm, as shown in FIG. 9, that the robot model moves the tool model along the processing line of the letter "F".
<Program generation process of the programming device 1>
Next, the flow of the program generation process of the programming device 1 will be described with reference to FIG. 10.
FIG. 10 is a flowchart illustrating the program generation process of the programming device 1. The flow shown here is executed each time an instruction to generate a robot program is received.
In step S1, in response to the user's input operation on the input unit 11, the three-dimensional model placement unit 102 places, in the three-dimensional virtual space created by the virtual space creation unit 101, the robot model of the robot (not shown), the tool model of the tool (not shown), the visual sensor model of the visual sensor (not shown), and the workpiece model of the workpiece (not shown) that make up the industrial machinery system model.
In step S2, the image placement unit 103 places a label on which a contour line is drawn on a surface of the workpiece model.
In step S3, the image capture unit 104 operates the robot model in the virtual space, places the visual sensor model so that its optical axis coincides with the straight line perpendicular to the label on the workpiece model at the label's center, and adjusts the height of the visual sensor model so that the entire label fits within its field of view. Based on the imaging parameters set for the visual sensor model, the image capture unit 104 generates a pseudo captured image of the label as seen from the position where the visual sensor model is placed.
In step S4, the contour line detection unit 105 detects the contour line by performing image processing on the pseudo captured image generated in step S3.
In step S5, the three-dimensional data conversion unit 106 converts the contour line detected in step S4 into a set of three-dimensional data.
In step S6, the processing line generation unit 107 generates a processing line from the set of three-dimensional data.
In step S7, the processing line generation unit 107 calculates, from the three-dimensional contour data stored as a set of three-dimensional points, the surface of the workpiece model on which the label bearing the contour line is placed, and calculates the normal vector to that surface at each point on the processing line.
In step S8, the robot program generation unit 108 generates the program based on the processing line and the normal vectors generated in steps S6 and S7.
As described above, the programming device 1 according to the first embodiment places a label on which a contour line is drawn on a surface of the workpiece model placed in the virtual space, and generates a captured image of the label as seen from the position where the visual sensor model is placed, based on the imaging parameters set for the visual sensor model. The programming device 1 detects the contour line by performing image processing on the captured image, generates a processing line based on the detected contour line, calculates the normal vectors of the generated processing line with respect to the surface of the workpiece model, and generates a program by which the industrial machine works on the workpiece with the tool, based on the processing line and the normal vectors. In this way, the programming device 1 can generate a processing line that traces an arbitrary, complex trajectory on the surface of the workpiece, and can generate, from the generated processing line, a program for an industrial machine to machine along that trajectory.
Furthermore, the programming device 1 does not require a skilled operator and can reduce the labor and time of teaching work.
The first embodiment has been described above.
<Second Embodiment>
Next, a second embodiment will be described. In the first embodiment, the programming device 1 places, on a surface of the workpiece model, a label on which a contour line is drawn whose strokes do not intersect one another, and generates a captured image of the label as seen from the position where the visual sensor model is placed, based on the imaging parameters set for the visual sensor model. The programming device 1 detects the contour line by performing image processing on the captured image, generates a processing line based on the detected contour line, calculates the normal vectors of the generated processing line with respect to the surface of the workpiece, and generates a program by which the industrial machine works on the workpiece with the tool, based on the processing line and the normal vectors. In contrast, the second embodiment differs from the first in that the programming device 1A places, on a surface of the workpiece model, a label on which a contour line is drawn whose strokes intersect at least at one point, detects the contour line by performing image processing on the captured image of that contour line and divides it into a plurality of line segments, joins at least two consecutively drawn segments among the divided segments into one segment, and specifies, per segment, the order in which the program performs the work.
As a result, the programming device 1A according to the second embodiment can generate a processing line that traces an arbitrary, complex trajectory on the surface of the workpiece, and can generate, from the generated processing line, a program for an industrial machine to machine along that trajectory.
The second embodiment will now be described.
FIG. 11 is a functional block diagram showing an example of the functional configuration of the programming device according to the second embodiment. Elements having the same functions as those of the programming device 1 of FIG. 1 are given the same reference numerals, and detailed description thereof is omitted.
As shown in FIG. 11, the programming device 1A according to the second embodiment has a control unit 10a, an input unit 11, a display unit 12, and a storage unit 13. The control unit 10a includes a virtual space creation unit 101, a three-dimensional model placement unit 102, an image placement unit 103, an image capture unit 104, a contour line detection unit 105a, a three-dimensional data conversion unit 106a, a processing line generation unit 107a, and a robot program generation unit 108a.
The input unit 11, the display unit 12, and the storage unit 13 have the same functions as in the first embodiment.
The model storage unit 131 has the same function as in the first embodiment.
<Control unit 10a>
The control unit 10a has a CPU, a ROM, a RAM, a CMOS memory, and the like, which are configured to communicate with one another via a bus and are well known to those skilled in the art.
The CPU is a processor that controls the programming device 1A as a whole. The CPU reads the system program and application programs stored in the ROM via the bus and controls the entire programming device 1A in accordance with them. In this way, as shown in FIG. 11, the control unit 10a is configured to realize the functions of the virtual space creation unit 101, the three-dimensional model placement unit 102, the image placement unit 103, the image capture unit 104, the contour line detection unit 105a, the three-dimensional data conversion unit 106a, the processing line generation unit 107a, and the robot program generation unit 108a.
The virtual space creation unit 101, the three-dimensional model placement unit 102, the image placement unit 103, and the image capture unit 104 have the same functions as in the first embodiment.
The contour line detection unit 105a detects the contour line by performing image processing on the pseudo captured image generated by the image capture unit 104.
In the following, as an example of a label whose contour strokes intersect at least at one point, a label on which the character "あ" is drawn is placed on the workpiece model, as shown in FIG. 12.
As shown in FIG. 13, the contour line detection unit 105a acquires from the image capture unit 104 a pseudo captured image generated by the visual sensor model, which is placed so that its optical axis coincides with the straight line perpendicular to the label at its center (shown as a dash-dot line) and which images the label from that position based on the set imaging parameters. Like the contour line detection unit 105 of the first embodiment, the contour line detection unit 105a detects the black portion of the label in the pseudo captured image as the contour line using a known method such as line drawing extraction.
As shown in FIG. 14, the contour line detection unit 105a divides the detected contour line of the character "あ" into eleven line segments L1 to L11 based on the positions of the intersections.
In response to the user's input operation on the input unit 11, for example, the contour line detection unit 105a joins at least two selected segments among the line segments L1 to L11 into one segment. Specifically, as shown in FIG. 15, the contour line detection unit 105a joins the segments L1 and L2 into one segment C1, joins the segments L3 to L6 into one segment C2, and joins the segments L7 to L11 into one segment C3.
In response to the user's input operation on the input unit 11, the contour line detection unit 105a specifies that the robot program is to perform the work in the order of, for example, the segments C1, C2, and C3.
Like the three-dimensional data conversion unit 106 of the first embodiment, the three-dimensional data conversion unit 106a converts the contour line detected by the contour line detection unit 105a into a set of three-dimensional data.
Specifically, as shown in FIG. 16, for the image captured by the visual sensor model, the three-dimensional data conversion unit 106a converts the three-dimensional position coordinates (X, Y, Z) of the three-dimensional points on the detected contour line (the black dots in FIG. 16), referenced to the robot model coordinate system or the sensor model coordinate system, into a set of three-dimensional data. The three-dimensional data conversion unit 106a stores the converted three-dimensional data in the storage unit 13.
Like the processing line generation unit 107 of the first embodiment, the processing line generation unit 107a generates a processing line from the set of three-dimensional data.
Specifically, the processing line generation unit 107a generates the processing line shown as a dashed line in FIG. 17, for example by connecting the pieces of three-dimensional contour data stored in the storage unit 13.
As shown in FIG. 18, the processing line generation unit 107a also calculates, from the three-dimensional contour data stored as a set of three-dimensional points, the surface of the workpiece model on which the image showing the contour line is placed, and calculates the normal vector to that surface at each point on the processing line.
The robot program generation unit 108a generates the program based on the segment order specified by the contour line detection unit 105a and on the processing line and normal vectors generated by the processing line generation unit 107a.
Specifically, as shown in FIG. 19, the robot program generation unit 108a calculates the posture of the robot model at each three-dimensional point on the processing line from the normal vector to the workpiece model's surface at that point, and sets teaching points of the robot program along the processing line. From the processing line and its normal vectors with respect to the workpiece model's surface, the robot program generation unit 108a generates a robot program by which the robot model works on the workpiece model with the tool model in the specified order of the segments C1 to C3.
<Program generation process of the programming device 1A>
Next, the flow of the program generation process of the programming device 1A will be described with reference to FIG. 20.
FIG. 20 is a flowchart illustrating the program generation process of the programming device 1A. The flow shown here is executed each time an instruction to generate a robot program is received.
The processes in steps S1 to S4 and steps S6 to S8 are the same as those in steps S1 to S3 and steps S5 to S7 of FIG. 10, and description thereof is omitted.
In step S4a, the contour line detection unit 105a detects the contour line by performing image processing on the pseudo captured image generated in step S3. If the detected contour line has intersections, the contour line detection unit 105a divides it into a plurality of line segments based on the positions of the intersections. In response to the user's input operation on the input unit 11, the contour line detection unit 105a joins at least two selected segments among the plurality of segments into one segment and specifies the per-segment order in which the robot program performs the work.
In step S8a, the robot program generation unit 108a generates the program based on the per-segment order specified in step S4a and on the processing line and normal vectors generated in step S7.
As described above, when a label on which a contour line is drawn whose strokes intersect at least at one point is placed on the surface of the workpiece, the programming device 1A according to the second embodiment detects the contour line by performing image processing on the captured image of that contour line, divides it into a plurality of line segments, joins at least two consecutively drawn segments among the divided segments into one segment, and specifies, per segment, the order in which the program performs the work. In this way, the programming device 1A can generate a processing line that traces an arbitrary, complex trajectory on the surface of the workpiece, and can generate, from the generated processing line, a program for an industrial machine to machine along that trajectory.
Furthermore, the programming device 1A does not require a skilled operator and can reduce the labor and time of teaching work.
The second embodiment has been described above.
<Third Embodiment>
Next, a third embodiment will be described. In the first embodiment, the programming device 1 places, on a surface of the workpiece model, a label on which a contour line is drawn whose strokes do not intersect one another, and generates a captured image of the label as seen from the position where the visual sensor model is placed, based on the imaging parameters set for the visual sensor model. The programming device 1 detects the contour line by performing image processing on the captured image, generates a processing line based on the detected contour line, calculates the normal vectors of the generated processing line with respect to the surface of the workpiece model, and generates a program by which the industrial machine works on the workpiece with the tool, based on the processing line and the normal vectors. The second embodiment differed from the first in that the programming device 1A places, on a surface of the workpiece model, a label on which a contour line is drawn whose strokes intersect at least at one point, detects the contour line by performing image processing on the captured image of that contour line and divides it into a plurality of line segments, joins at least two consecutively drawn segments among the divided segments into one segment, and specifies, per segment, the order in which the program performs the work. In contrast, the third embodiment differs from the first and second embodiments in that, for an industrial machine arranged in a real space, a tool attached to the industrial machine, a visual sensor, and a workpiece, the programming device 1B places a label on which a contour line is drawn on a surface of the workpiece and acquires from the visual sensor a captured image of the label taken at the position where the visual sensor is arranged, based on imaging parameters set for the visual sensor.
As a result, the programming device 1B according to the third embodiment can generate a processing line that traces an arbitrary, complex trajectory on the surface of the workpiece, and can generate, from the generated processing line, a program for an industrial machine to machine along that trajectory.
The third embodiment will now be described.
FIG. 21 is a functional block diagram showing an example of the functional configuration of the programming device according to the third embodiment. Elements having the same functions as those of the programming device 1 of FIG. 1 are given the same reference numerals, and detailed description thereof is omitted.
As shown in FIG. 21, the programming device 1B according to the third embodiment has a control unit 10b, an input unit 11, a display unit 12, and a storage unit 13b. The control unit 10b includes an image placement unit 103b, an image capture unit 104b, a contour line detection unit 105, a three-dimensional data conversion unit 106, a processing line generation unit 107, and a robot program generation unit 108.
The input unit 11 and the display unit 12 have the same functions as in the first embodiment.
<Storage unit 13b>
Like the storage unit 13 of the first embodiment, the storage unit 13b is an SSD, an HDD, or the like, and may store a generation program for generating a robot program by which a robot (not shown) works on a workpiece with a tool, the generated robot program, and the like.
<Control unit 10b>
The control unit 10b has a CPU, a ROM, a RAM, a CMOS memory, and the like, which are configured to communicate with one another via a bus and are well known to those skilled in the art.
The CPU is a processor that controls the programming device 1B as a whole. The CPU reads the system program and application programs stored in the ROM via the bus and controls the entire programming device 1B in accordance with them. In this way, as shown in FIG. 21, the control unit 10b is configured to realize the functions of the image placement unit 103b, the image capture unit 104b, the contour line detection unit 105, the three-dimensional data conversion unit 106, the processing line generation unit 107, and the robot program generation unit 108.
The contour line detection unit 105, the three-dimensional data conversion unit 106, the processing line generation unit 107, and the robot program generation unit 108 have the same functions as in the first embodiment.
The image placement unit 103b may, for example, use a projector (not shown) or the like to project a label on which a contour line is drawn onto the surface of the workpiece, placing it as in FIG. 3.
Via a robot control device (not shown), the image capture unit 104b operates the robot in the real space and, as in FIG. 4, places the visual sensor so that its optical axis coincides with the straight line perpendicular to the label at its center (shown as a dash-dot line), and adjusts the height of the visual sensor so that the entire label fits within its field of view.
Based on the imaging parameters set for the visual sensor, the image capture unit 104b acquires a captured image of the label taken at the position where the visual sensor is placed, and outputs the captured image to the contour line detection unit 105.
<Program generation process of the programming device 1B>
Next, the flow of the program generation process of the programming device 1B will be described with reference to FIG. 22.
FIG. 22 is a flowchart illustrating the program generation process of the programming device 1B. The flow shown here is executed each time an instruction to generate a robot program is received.
The processes in steps S33 to S37 are the same as those in steps S4 to S8 of FIG. 10, and description thereof is omitted.
In step S31, the image placement unit 103b uses a projector (not shown) or the like to project and place a label on which a contour line is drawn onto the surface of the workpiece.
In step S32, the image capture unit 104b operates the robot in the real space, places the visual sensor so that its optical axis coincides with the straight line perpendicular to the label on the workpiece at the label's center, and adjusts the height of the visual sensor so that the entire label fits within its field of view. Based on the imaging parameters set for the visual sensor, the image capture unit 104b acquires a captured image of the label taken at the position where the visual sensor is placed.
As described above, the programming device 1B according to the third embodiment places a label on which a contour line is drawn on the surface of the workpiece and acquires from the visual sensor a captured image of the label taken at the position where the visual sensor is placed, based on the imaging parameters set for the visual sensor. The programming device 1B detects the contour line by performing image processing on the captured image, generates a processing line based on the detected contour line, calculates the normal vectors of the generated processing line with respect to the surface of the workpiece, and generates a program by which the industrial machine works on the workpiece with the tool, based on the processing line and the normal vectors. In this way, the programming device 1B can generate a processing line that traces an arbitrary, complex trajectory on the surface of the workpiece, and can generate, from the generated processing line, a program for an industrial machine to machine along that trajectory.
Furthermore, the programming device 1B does not require a skilled operator and can reduce the labor and time of teaching work.
The third embodiment has been described above.
<Modification of the third embodiment>
In the third embodiment, the label placed on the workpiece is one whose contour strokes, like those of the letter "F", do not intersect one another, but the label is not limited to this. For example, the label may be one whose contour strokes, like those of the character "あ", intersect at least at one point. In this case, as in the second embodiment, the programming device 1B may detect the contour line by performing image processing on the captured image of that contour line, divide it into a plurality of line segments, join at least two consecutively drawn segments among the divided segments into one segment, and specify, per segment, the order in which the program performs the work.
As described in the first to third embodiments above, the programming devices 1, 1A, and 1B of the present disclosure can generate a processing line that traces an arbitrary, complex trajectory on the surface of a workpiece, and can generate, from the generated processing line, a program for an industrial machine to machine along that trajectory.
<Modification 1>
In the first to third embodiments described above, the surface of the workpiece (workpiece model) is flat, but the surface is not limited to this. The surface of the workpiece (workpiece model) may have any shape, including unevenness. In this case, a sticker on which the label is drawn may be affixed to the arbitrarily shaped surface of the workpiece (workpiece model), or the label may be projected onto it by a projector or the like.
<Modification 2>
In the embodiments described above, the programming devices 1, 1A, and 1B generate a robot program for a robot as the industrial machine, but the industrial machine is not limited to this. For example, the programming devices 1, 1A, and 1B may generate programs for machine tools, forging machines, laser processing machines, injection molding machines, and the like as the industrial machine.
<Modification 3>
In the embodiments described above, the visual sensor (visual sensor model) is attached to the arm tip of the robot (robot model), but the arrangement is not limited to this. For example, as shown in FIG. 23, the visual sensor (visual sensor model) may be fixed separately from the robot (robot model) at a position, such as a wall or ceiling, from which the entire label placed on the workpiece fits within its field of view.
The functions included in the programming devices 1, 1A, and 1B in the first to third embodiments can each be realized by hardware, software, or a combination of these. Here, being realized by software means being realized by a computer reading and executing a program.
The program can be stored using various types of non-transitory computer-readable media and supplied to a computer. Non-transitory computer-readable media include various types of tangible storage media. Examples of non-transitory computer-readable media include magnetic recording media (for example, flexible disks, magnetic tapes, and hard disk drives), magneto-optical recording media (for example, magneto-optical disks), CD-ROMs (Read Only Memory), CD-Rs, CD-R/Ws, and semiconductor memories (for example, mask ROMs, PROMs (Programmable ROMs), EPROMs (Erasable PROMs), flash ROMs, and RAMs). The program may also be supplied to the computer by various types of transitory computer-readable media. Examples of transitory computer-readable media include electrical signals, optical signals, and electromagnetic waves. A transitory computer-readable medium can supply the program to the computer via a wired communication path, such as an electric wire or an optical fiber, or via a wireless communication path.
The steps describing the program recorded on the recording medium include not only processes performed in time series in the stated order, but also processes that are not necessarily performed in time series and are executed in parallel or individually. The steps describing the program may also be carried out by cloud computing.
Although the present disclosure has been described in detail, the present disclosure is not limited to the individual embodiments described above. Various additions, substitutions, modifications, partial deletions, and the like are possible without departing from the gist of the present disclosure or from the spirit of the present disclosure derived from the claims and their equivalents. These embodiments can also be implemented in combination. For example, in the embodiments described above, the order of the operations and of the processes is shown as an example and is not limited thereto. The same applies where numerical values or formulas are used in the description of the embodiments.
 上記実施形態及び変形例に関し、さらに以下の付記を開示する。
(付記1)
 プログラミング装置(1)は、仮想空間内に、産業機械、産業機械に取り付けられるツール、視覚センサ、及びワークのモデルをそれぞれ配置し、産業機械のモデルがツールのモデルによりワークのモデルに対し作業を行うプログラムを生成するプログラミング装置であって、ワークのモデルの面上に輪郭線が描かれたラベルを配置し、視覚センサのモデルに設定された撮像パラメータに基づいて、視覚センサのモデルが配置された位置においてラベルを撮像した撮像画像を生成し、撮像画像に画像処理を施すことにより輪郭線を検出し、検出した輪郭線に基づいて、加工線を生成して、生成した加工線のワークのモデルの面に対する法線ベクトルを算出し、加工線及び法線ベクトルに基づいてプログラムを生成する。
(付記2)
 プログラミング装置(1B)は、実空間内に配置された産業機械、産業機械に取り付けられるツール、視覚センサ、及びワークにおいて、産業機械がツールによりワークに対し作業を行うプログラムを生成するプログラミング装置であって、ワークの面上に輪郭線が描かれたラベルを配置し、視覚センサに設定された撮像パラメータに基づいて、視覚センサが配置された位置においてラベルを撮像した撮像画像を視覚センサから取得し、撮像画像に画像処理を施すことにより輪郭線を検出し、検出した輪郭線に基づいて、加工線を生成して、生成した加工線のワークの面に対する法線ベクトルを算出し、加工線及び法線ベクトルに基づいてプログラムを生成する。
(付記3)
 付記1又は付記2に記載のプログラミング装置(1A)において、輪郭線を複数の線分に分割し、複数の線分それぞれを三次元データの集合に変換する。
(付記4)
 付記3に記載のプログラミング装置(1A)において、複数の線分より連続する少なくとも2つの線分を選択し1つの線分に結合する。
(付記5)
 付記3に記載のプログラミング装置(1A)において、複数の線分に対してプログラムにより作業を行う順序を指定する。
(付記6)
 付記1に記載のプログラミング装置(1)において、ラベル全体が視覚センサのモデルの視野に収まる位置に視覚センサを配置する。
(付記7)
 付記2に記載のプログラミング装置(1B)において、ラベル全体が視覚センサの視野に収まる位置に視覚センサを配置する。
The following supplementary notes are further disclosed regarding the above-described embodiment and modified examples.
(Appendix 1)
A programming device (1) is a programming device that places an industrial machine, a tool attached to the industrial machine, a visual sensor, and a work model within a virtual space, and generates a program in which the industrial machine model performs an operation on a work model using a tool model. The programming device places a label with a contour line on the surface of the work model, generates an image of the label at the position where the visual sensor model is placed based on imaging parameters set for the visual sensor model, detects the contour line by applying image processing to the captured image, generates a processing line based on the detected contour line, calculates a normal vector of the generated processing line with respect to the surface of the work model, and generates a program based on the processing line and the normal vector.
(Appendix 2)
The programming device (1B) is a programming device that generates a program in which an industrial machine performs an operation on a workpiece using a tool, for an industrial machine, a tool attached to the industrial machine, a visual sensor, and a workpiece that are arranged in real space, and the programming device places a label with a contour line drawn on the surface of the workpiece, obtains an image of the label captured at the position where the visual sensor is arranged based on imaging parameters set in the visual sensor, detects the contour line by performing image processing on the captured image, generates a processing line based on the detected contour line, calculates a normal vector of the generated processing line with respect to the surface of the workpiece, and generates a program based on the processing line and the normal vector.
(Appendix 3)
In the programming device (1A) according to Appendix 1 or Appendix 2, the contour line is divided into a plurality of line segments, and each of the plurality of line segments is converted into a set of three-dimensional data.
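One way to realize the division of Appendix 3 is polygonal approximation of the detected contour, after which each segment is lifted to three-dimensional data. A sketch reusing the hypothetical project_pixel_to_surface helper from above (epsilon_ratio is an illustrative tolerance, not a disclosed value):

    import cv2
    import numpy as np

    def contour_to_3d_segments(contour, project_pixel_to_surface,
                               epsilon_ratio=0.01):
        # Divide the contour line into line segments by polygonal
        # approximation of the closed contour.
        epsilon = epsilon_ratio * cv2.arcLength(contour, True)
        vertices = cv2.approxPolyDP(contour, epsilon, True).reshape(-1, 2)

        # Convert each 2D segment into three-dimensional data by
        # back-projecting its endpoints onto the workpiece surface.
        segments_3d = []
        for (u0, v0), (u1, v1) in zip(vertices, np.roll(vertices, -1, axis=0)):
            p0, _ = project_pixel_to_surface(int(u0), int(v0))
            p1, _ = project_pixel_to_surface(int(u1), int(v1))
            segments_3d.append((np.asarray(p0, float), np.asarray(p1, float)))
        return segments_3d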
(Appendix 4)
In the programming device (1A) according to Appendix 3, at least two consecutive line segments are selected from the plurality of line segments and joined into one line segment.
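The joining of Appendix 4 can be sketched as merging consecutive segments whose directions are nearly parallel and whose endpoints coincide; the angular tolerance below is an illustrative assumption:

    import numpy as np

    def join_consecutive_segments(segments, angle_tol_deg=2.0):
        # Join consecutive, nearly collinear segments into one segment.
        cos_tol = np.cos(np.radians(angle_tol_deg))
        joined = [list(segments[0])]
        for p0, p1 in segments[1:]:
            q0, q1 = joined[-1]
            d_prev = (q1 - q0) / np.linalg.norm(q1 - q0)
            d_next = (p1 - p0) / np.linalg.norm(p1 - p0)
            if np.allclose(q1, p0) and np.dot(d_prev, d_next) >= cos_tol:
                joined[-1][1] = p1  # extend the previous segment
            else:
                joined.append([p0, p1])
        return [tuple(seg) for seg in joined]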
(Appendix 5)
In the programming device (1A) according to Appendix 3, an order in which the operation is performed by the program on the plurality of line segments is designated.
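Designating the order of Appendix 5 amounts to reordering the segment list before the program is generated. The nearest-neighbor policy below is only one illustrative choice; in practice the order might be specified interactively by the operator:

    import numpy as np

    def order_segments(segments, start_point):
        # Order segments greedily so that each operation begins near
        # where the previous operation ended.
        remaining, ordered, cursor = list(segments), [], np.asarray(start_point)
        while remaining:
            i = min(range(len(remaining)),
                    key=lambda k: np.linalg.norm(remaining[k][0] - cursor))
            segment = remaining.pop(i)
            ordered.append(segment)
            cursor = segment[1]
        return ordered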
(Appendix 6)
In the programming device (1) according to Appendix 1, the model of the visual sensor is placed at a position where the entire label falls within its field of view.
(Appendix 7)
In the programming device (1B) according to Appendix 2, the visual sensor is placed at a position where the entire label falls within its field of view.
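For Appendices 6 and 7, a placement that keeps the whole label in view follows from the pinhole camera model: the standoff distance must be large enough that the label's extent fits within both field-of-view angles. A small worked sketch with illustrative values:

    import math

    def min_standoff(label_w, label_h, fov_h_deg, fov_v_deg, margin=1.1):
        # Minimum sensor-to-label distance such that the entire label
        # falls within the field of view, with a safety margin.
        d_w = (label_w / 2) / math.tan(math.radians(fov_h_deg) / 2)
        d_h = (label_h / 2) / math.tan(math.radians(fov_v_deg) / 2)
        return margin * max(d_w, d_h)

    # Example: a 100 mm x 60 mm label seen by a sensor with a
    # 50 x 40 degree field of view needs roughly
    # min_standoff(100, 60, 50, 40) ~= 118 mm of standoff.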
Reference Signs List
1, 1A, 1B  Programming device
10, 10a, 10b  Control unit
101  Virtual space generation unit
102  Three-dimensional model arrangement unit
103, 103b  Image arrangement unit
104, 104b  Image capture unit
105, 105a  Contour line detection unit
106, 106a  Three-dimensional data conversion unit
107, 107a  Processing line generation unit
108, 108a  Robot program generation unit
11  Input unit
12  Display unit
13, 13b  Storage unit
131  Model storage unit

Claims (7)

  1.  A programming device that arranges models of an industrial machine, a tool attached to the industrial machine, a visual sensor, and a workpiece in a virtual space, and generates a program in which the model of the industrial machine performs an operation on the model of the workpiece using the model of the tool, wherein the programming device:
      places a label on which a contour line is drawn on a surface of the model of the workpiece;
      generates, based on imaging parameters set for the model of the visual sensor, a captured image of the label taken at the position where the model of the visual sensor is placed;
      detects the contour line by performing image processing on the captured image;
      generates a processing line based on the detected contour line, and calculates a normal vector of the generated processing line with respect to the surface of the model of the workpiece; and
      generates the program based on the processing line and the normal vector.
  2.  A programming device that, for an industrial machine, a tool attached to the industrial machine, a visual sensor, and a workpiece arranged in a real space, generates a program in which the industrial machine performs an operation on the workpiece using the tool, wherein the programming device:
      places a label on which a contour line is drawn on a surface of the workpiece;
      acquires from the visual sensor, based on imaging parameters set for the visual sensor, a captured image of the label taken at the position where the visual sensor is placed;
      detects the contour line by performing image processing on the captured image;
      generates a processing line based on the detected contour line, and calculates a normal vector of the generated processing line with respect to the surface of the workpiece; and
      generates the program based on the processing line and the normal vector.
  3.  The programming device according to claim 1 or claim 2, wherein the contour line is divided into a plurality of line segments, and each of the plurality of line segments is converted into a set of three-dimensional data.
  4.  The programming device according to claim 3, wherein at least two consecutive line segments are selected from the plurality of line segments and joined into one line segment.
  5.  The programming device according to claim 3, wherein an order in which the operation is performed by the program on the plurality of line segments is designated.
  6.  The programming device according to claim 1, wherein the model of the visual sensor is placed at a position where the entire label falls within the field of view of the visual sensor.
  7.  The programming device according to claim 2, wherein the visual sensor is placed at a position where the entire label falls within the field of view of the visual sensor.
PCT/JP2023/001146 2023-01-17 2023-01-17 Programming device WO2024154218A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2023/001146 WO2024154218A1 (en) 2023-01-17 2023-01-17 Programming device

Publications (1)

Publication Number Publication Date
WO2024154218A1

Family

ID=91955497

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/001146 WO2024154218A1 (en) 2023-01-17 2023-01-17 Programming device

Country Status (1)

Country Link
WO (1) WO2024154218A1 (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62115505A (en) * 1985-11-15 1987-05-27 Hitachi Ltd Automatic setting device for teaching point of industrial robot
JPH05108131A (en) * 1991-10-16 1993-04-30 Toshiba Corp Teaching device of robot
JPH09501358A (en) * 1993-08-11 1997-02-10 ベネッケ−カリコ アクチエンゲゼルシャフト Method for generating engraving of pattern on surface of work piece
JP2000155609A (en) * 1998-11-19 2000-06-06 Dainippon Printing Co Ltd Graphic working method/device
JP2007108916A (en) * 2005-10-12 2007-04-26 Fanuc Ltd Off-line teaching device for robot
JP2010076258A (en) * 2008-09-26 2010-04-08 Casio Computer Co Ltd Tape printing device
JP2013146814A (en) * 2012-01-18 2013-08-01 Honda Motor Co Ltd Robot teaching method
JP2014184530A (en) * 2013-03-25 2014-10-02 Toyota Motor Corp Teaching system and teaching correction method
US20210012678A1 (en) * 2018-03-07 2021-01-14 Seabery North America, S.L. Systems and methods to simulate robotic joining operations
JP2021016922A (en) * 2019-07-22 2021-02-15 ファナック株式会社 Three-dimensional data generator and robot control system
JP2021029690A (en) * 2019-08-26 2021-03-01 株式会社サンセイアールアンドディ Game machine

Similar Documents

Publication Publication Date Title
US20200338730A1 (en) Trajectory planning device, trajectory planning method and program
CN112839764B (en) System and method for weld path generation
US9311608B2 (en) Teaching system and teaching method
Pan et al. Recent progress on programming methods for industrial robots
JP4153528B2 (en) Apparatus, program, recording medium and method for robot simulation
JP5981143B2 (en) Robot tool control method
JP2017094406A (en) Simulation device, simulation method, and simulation program
JP2009175954A (en) Generating device of processing robot program
JP6445092B2 (en) Robot system displaying information for teaching robots
US20180036883A1 (en) Simulation apparatus, robot control apparatus and robot
JP2015136770A (en) Data creation system of visual sensor, and detection simulation system
US10406688B2 (en) Offline programming apparatus and method having workpiece position detection program generation function using contact sensor
CN104002297A (en) Teaching system, teaching method and robot system
CN111225143B (en) Image processing apparatus, control method thereof, and program storage medium
JP2015035211A (en) Pattern matching method and pattern matching device
JP2015174184A (en) Controller
JP7190552B1 (en) Robot teaching system
JP4932202B2 (en) Part program generating apparatus for image measuring apparatus, part program generating method for image measuring apparatus, and part program generating program for image measuring apparatus
JP2015005093A (en) Pattern matching device and pattern matching method
WO2024154218A1 (en) Programming device
JP5291482B2 (en) Robot teaching program correction device
JP2014186588A (en) Simulation apparatus, program, and image generating method
JP7509535B2 (en) IMAGE PROCESSING APPARATUS, ROBOT SYSTEM, AND IMAGE PROCESSING METHOD
KR20090049670A (en) Method for detecting 3-dimensional coordinates of subject for photography and memory media recording program to operate the method
TW202430336A (en) Programming device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23917432

Country of ref document: EP

Kind code of ref document: A1