WO2022208963A1 - Calibration device for controlling robot - Google Patents

Calibration device for controlling robot

Info

Publication number
WO2022208963A1
Authority
WO
WIPO (PCT)
Prior art keywords
posture
unit
robot
marker
calibration
Prior art date
Application number
PCT/JP2021/040394
Other languages
French (fr)
Japanese (ja)
Inventor
毅 北村
聡 笹谷
大輔 浅井
信博 知原
Original Assignee
株式会社日立製作所
Priority date
Filing date
Publication date
Application filed by 株式会社日立製作所
Publication of WO2022208963A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 Controls for manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/04 Viewing devices

Definitions

  • the present invention relates to a calibration device for robot control.
  • In calibration, a target marker with a known shape (hereinafter also referred to as a "marker") is attached to the robot, the robot is photographed in various postures by an imaging unit, and coordinate transformation parameters are estimated from the photographed images and the corresponding postures of the robot.
  • For this reason, the calibration man-hours may become enormous.
  • Mechanisms for reducing the number of calibration man-hours have therefore attracted increasing attention.
  • Japanese Patent Laid-Open No. 2002-200000 (Patent Document 1) describes a technique for "easily performing calibration", namely "a robot control system comprising: a measurement unit that measures, based on an image captured by an imaging unit, the three-dimensional coordinates of an arbitrary object existing within the field of view; a command generation unit that generates a command for positioning the action part of the robot according to a pre-calculated correspondence relationship between the measured three-dimensional coordinates and the position and orientation of the action part of the robot; a calibration execution unit that executes calibration for calculating the correspondence relationship; and a setting reception unit that receives, for a reference object associated with the action part of the robot in the calibration, the setting of a calibration area, which is the area in which the calibration is performed."
  • a space (calibration area) in which the marker is moved during calibration is defined based on information about the robot captured by the imaging unit, and the marker is automatically arranged in that space.
  • the number of man-hours required for calibration is reduced.
  • In the technique described in Patent Document 1, since the marker is moved within the range of the calibration area whose setting is received by the setting reception unit, the number of man-hours required for teaching the posture set of the robot can be reduced and the calibration can be executed.
  • However, Patent Document 1 does not take into account the shielding of the marker (hereinafter also referred to as "occlusion") that may occur when the robot is moved to various postures. Therefore, the following problems arise.
  • The present application includes a plurality of means for solving the above problems. One of them is a calibration device for robot control comprising: an imaging unit that captures images from a predetermined position; a marker that is attached to the robot and is displaced according to the motion of the robot; a posture set generation unit that generates a posture set including a plurality of postures of the robot that enable the imaging unit to capture the marker; a non-shielding posture extraction unit that extracts, from the posture set, a plurality of postures in which the marker is not shielded; and a calibration execution unit that estimates coordinate transformation parameters for transforming between the coordinate system of the robot and the coordinate system of the imaging unit based on the plurality of postures extracted by the non-shielding posture extraction unit.
  • FIG. 2 is a diagram showing a coordinate system and coordinate transformation of a physical device in this embodiment
  • FIG. 11 is a functional block diagram of a posture set generator
  • FIG. 7 is a flow chart showing a procedure for obtaining parameters by a parameter obtaining unit
  • 4 is a flow chart showing a procedure for generating and saving a posture set
  • FIG. 5 is a diagram for explaining a method of generating a teaching space by a teaching space generation unit;
  • FIG. 5 is a diagram illustrating a method of dividing a teaching space by a teaching space generation unit;
  • FIG. 5 is a diagram for explaining a teaching position generation method by a teaching position generation unit;
  • FIG. 10 is a diagram showing how a marker looks when the marker is placed at the teaching position in each small space in the initial posture;
  • FIG. 10 is a diagram showing teaching postures of the marker generated by a teaching posture generation unit
  • FIG. 13 is a diagram showing an example of the simulator constructed by the simulator construction unit
  • 9 is a flow chart showing a processing procedure of a non-physical interference attitude extraction unit;
  • FIG. 4 is a top view of the posture of the robot when it actually performs picking work.
  • FIG. 3 is a side view of the posture of the robot when it actually performs picking work.
  • FIG. 9 is a flow chart showing a processing procedure of a non-shielding posture extraction unit
  • FIG. 10 is a schematic diagram showing an operation example of a non-shielding posture extraction unit
  • 4 is a functional block diagram of a posture set evaluation unit
  • FIG. 5 is a flow chart showing a processing procedure of a calibration execution unit
  • FIG. 10 is a diagram showing a configuration example of a calibration device for robot control according to a second embodiment
  • FIG. 1 is a diagram showing a configuration example of a calibration device for robot control according to the first embodiment.
  • picking work for moving a work to a predetermined position is assumed.
  • As an example of the robot, a robot arm that is an articulated robot (hereinafter also simply referred to as "robot") is assumed.
  • As an example of the marker, a calibration board is assumed in which a pattern with known dimensions is printed on a planar jig.
  • a monocular camera is assumed as an example of the imaging unit.
  • a wall and a conveyor are assumed as examples of environmental installation objects installed around the robot and around the imaging unit.
  • the content assumed in this embodiment is merely an example, and the configuration of the calibration device for robot control is not limited to the example assumed in this embodiment.
  • the calibration device 2 includes a control device 1, a real device 2A, and a calibration control device 2B.
  • the control device 1 centrally controls the entire calibration device 2 . Further, the control device 1 controls the motion of the robot 30 and the motion of the imaging unit 32 in the physical device 2A.
  • Although the control device 1 and the calibration control device 2B are shown separately in FIG. 1, the functions of the calibration control device 2B can also be realized by the computer hardware resources of the control device 1.
  • the real device 2A has a robot 30 configured by a robot arm, a marker 31 attached to the robot 30, and an imaging unit 32 that measures the work space of the robot 30 and its surroundings. Also, the physical device 2A has a wall 33A and a conveyor 33B as an example of the environmental installation objects 33 installed around the robot 30 and around the imaging unit 32 .
  • The calibration control device 2B includes: prior knowledge 21; a generation parameter 22; a posture set generation unit 23 that generates a posture set of the robot 30 using the prior knowledge 21 and the generation parameter 22; a simulator construction unit 24 that constructs a three-dimensional simulator (hereinafter also simply referred to as "simulator") using the prior knowledge 21; a non-physical interference posture extraction unit 25 that extracts, from the posture set generated by the posture set generation unit 23, a plurality of postures (posture set) in which the robot 30 and the marker 31 do not physically interfere with any part of the physical device 2A; a non-shielding posture extraction unit 26 that extracts a plurality of postures in which the marker 31 is not shielded; a posture set evaluation unit 27 that evaluates the posture set extracted by the non-shielding posture extraction unit 26 from the viewpoint of calibration accuracy; a parameter update unit 28; and a calibration execution unit 29 that executes calibration when the posture set evaluation unit 27 gives a high evaluation.
  • the posture set of the robot 30 is a set of multiple postures (posture data) of the robot 30 .
  • The prior knowledge 21 is a database having design information including the shape of the physical device 2A, information capable of specifying the space in which the robot 30 works (hereinafter also referred to as "work space"), an evaluation value of the imaging unit 32, and parameter information such as threshold values used in each functional unit.
  • the design information of the physical device 2A is information including the shape, specifications and layout of the three-dimensional model of the physical device 2A.
  • the evaluation value of the image pickup unit 32 is an evaluation value obtained by an inspection performed when the image pickup unit 32 is received or shipped, and represents the actual performance of the image pickup unit 32 with respect to product specifications and the degree of image distortion.
  • the generation parameter 22 is a database having parameters that determine the number and density of poses included in the pose set of the robot 30 .
  • the posture set generation unit 23 receives the prior knowledge 21 and the generation parameters 22 as input, generates a posture set of the robot 30 , and stores the generated posture set in the control device 1 .
  • the posture of the robot 30 included in the posture set generated by the posture set generation unit 23 is a posture that enables the imaging unit 32 to image (capture) the marker 31 .
  • the simulator construction unit 24 uses the prior knowledge 21 to construct a three-dimensional simulator that enables virtual motion of the robot 30 and imaging using the imaging unit 32 .
  • the non-physical interference posture extraction unit 25 reads the posture set generated by the posture set generation unit 23 from the control device 1, and moves the robot to each posture on the three-dimensional simulator constructed by the simulator construction unit 24, thereby A plurality of postures are extracted in which the robot 30 and the marker 31 do not physically interfere with either the robot 30 or the environmental installation object 33 .
  • the non-physical interference posture extraction unit 25 stores the plurality of extracted postures in the control device 1 as a set of postures without physical interference.
  • The non-shielding posture extraction unit 26 reads the posture set extracted by the non-physical interference posture extraction unit 25 from the control device 1, and moves the robot to each posture on the three-dimensional simulator constructed by the simulator construction unit 24, thereby extracting a plurality of postures in which the marker 31 is not shielded from the imaging unit 32.
  • the non-shielding posture extraction unit 26 stores the plurality of extracted postures in the control device 1 as a posture set.
  • the storage location of the posture set generated by the posture set generation unit 23 and the storage destination of the posture sets extracted by the non-physical interference posture extraction unit 25 and the non-shielding posture extraction unit 26 are set to the control device 1.
  • the storage destination of the posture set is not limited to the control device 1, and may be, for example, a storage unit provided in the calibration control device 2B.
  • The posture set evaluation unit 27 reads the posture set extracted by the non-shielding posture extraction unit 26 from the control device 1, determines whether or not the estimation accuracy of the calibration using the posture set is equal to or greater than a predetermined value set in advance, and evaluates the posture set based on this determination result.
  • the parameter update unit 28 updates the generation parameter 22 when the posture set evaluation unit 27 cannot expect calibration with a higher accuracy than a predetermined value.
  • The calibration execution unit 29 controls the robot 30 and the imaging unit 32 based on the posture set (a plurality of postures) that the posture set evaluation unit 27 has determined can be expected to yield calibration accuracy equal to or higher than the predetermined value, and estimates coordinate transformation parameters for transforming between the coordinate system of the robot 30 and the coordinate system of the imaging unit 32.
  • The control device 1, the real device 2A, the prior knowledge 21, the posture set generation unit 23, the simulator construction unit 24, the non-physical interference posture extraction unit 25, the non-shielding posture extraction unit 26, the posture set evaluation unit 27, the parameter update unit 28, and the calibration execution unit 29 will now be described in detail.
  • FIG. 2 is a block diagram showing a configuration example of the control device 1 in this embodiment.
  • The control device 1 includes, as computer hardware resources for overall control of the calibration device 2, a CPU 11, a bus 12 for transmitting commands from the CPU 11, a ROM 13, a RAM 14, a storage device 15, a network I/F (I/F is an abbreviation for interface; the same applies hereinafter) 16, an imaging I/F 17 for connecting the imaging unit 32, a screen display I/F 18 for screen output, and an input I/F 19 for external input. That is, the control device 1 can be configured by a general computer device.
  • the storage device 15 stores a program 15A for executing each function, an OS (operating system) 15B, a three-dimensional model 15C, and parameters 15D such as a database.
  • a robot controller 16A is connected to the network I/F 16 .
  • the robot control device 16A is a device that controls and operates the robot 30 .
  • the robot 30 is a picking robot that performs picking work.
  • a picking robot is composed of an articulated robot.
  • the robot 30 may be of any type as long as it has a plurality of joints.
  • the robot 30 is installed on a single-axis slider, and by increasing the degree of freedom of movement of the robot 30 with this single-axis slider, it is possible to perform work in a wide space.
  • As the marker 31, a calibration board on which a pattern with known dimensions is printed on a planar jig is used. When such a marker 31 is used, the position of the marker 31 in three-dimensional space can be identified by analyzing the data measured by the imaging unit 32.
  • As long as the three-dimensional position of the marker 31 can be identified from the data measured by the imaging unit 32, for example, a spherical jig or a jig with a special shape may also be used as the marker 31.
  • the marker 31 is attached to the arm tip of the robot 30 which is a robot arm.
  • the attachment position of the marker 31 is not limited to the tip of the arm, and may be on the robot arm as long as the position satisfies the condition that the marker 31 is displaced according to the motion of the robot 30 .
  • a monocular camera is used as the imaging unit 32 .
  • the imaging unit 32 is not limited to a monocular camera, and may be configured by, for example, a ToF (Time of Flight) camera, a stereo camera, or the like.
  • the data obtained by the imaging unit 32 is data such as images and point clouds that can specify the three-dimensional position of the marker 31 .
  • the imaging unit 32 photographs the work space of the robot 30 and the like from a predetermined position.
  • the mounting position of the imaging unit 32 can be set at any location, such as the ceiling or wall of the building where the robot 30 works.
  • FIG. 3 is a diagram showing the coordinate system and coordinate conversion of the physical device 2A in this embodiment.
  • The coordinate systems of the physical device 2A include a base coordinate system C1 whose origin is the base of the arm of the robot 30, an arm coordinate system C2 whose origin is the tip of the arm of the robot 30, a marker coordinate system C3 whose origin is the center of the marker 31, and a camera coordinate system C4 whose origin is the optical center of the imaging unit 32.
  • Each coordinate system is a three-dimensional coordinate system.
  • The coordinate transformations of the physical device 2A include a base/camera transformation matrix M1 representing the coordinate transformation between the base coordinate system C1 and the camera coordinate system C4, a marker/camera transformation matrix M2 representing the coordinate transformation between the marker coordinate system C3 and the camera coordinate system C4, an arm/base transformation matrix M3 representing the coordinate transformation between the arm coordinate system C2 and the base coordinate system C1, and an arm/marker transformation matrix M4 representing the coordinate transformation between the arm coordinate system C2 and the marker coordinate system C3.
  • The base coordinate system C1 corresponds to the coordinate system of the robot 30, and the camera coordinate system C4 corresponds to the coordinate system of the imaging unit 32.
  • the base/camera transformation matrix M1 corresponds to a coordinate transformation parameter for transforming the coordinate system of the robot 30 and the coordinate system of the imaging unit 32 .
  • In this coordinate transformation, when the rotation matrix of the base/camera transformation matrix M1 is Rca and the translation matrix is tca, a point C(Xc, Yc, Zc) in the camera coordinate system C4 is transformed into a point P(Xr, Yr, Zr) in the base coordinate system C1 by P = Rca * C + tca.
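  • As a concrete illustration of this transformation (a minimal numpy sketch, not taken from the patent; the function name and the example values of Rca and tca are hypothetical), a point measured in the camera coordinate system C4 is mapped into the base coordinate system C1 as follows:

      import numpy as np

      def camera_to_base(point_cam: np.ndarray, Rca: np.ndarray, tca: np.ndarray) -> np.ndarray:
          """Transform a 3D point from the camera coordinate system C4
          into the base coordinate system C1 using P = Rca @ C + tca."""
          return Rca @ point_cam + tca

      # Example with hypothetical design values: camera rotated 180 deg about X
      # (optical axis pointing down) and mounted 1.5 m above the robot base.
      Rca = np.array([[1.0, 0.0, 0.0],
                      [0.0, -1.0, 0.0],
                      [0.0, 0.0, -1.0]])
      tca = np.array([0.0, 0.0, 1.5])

      C = np.array([0.1, 0.2, 1.2])    # marker position seen by the camera [m]
      P = camera_to_base(C, Rca, tca)  # same point in the base coordinate system
      print(P)                         # -> [0.1, -0.2, 0.3]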
  • the base-camera conversion matrix M1 can be obtained as one of the design values in the physical device 2A.
  • an error may occur in the position of the robot 30 .
  • One possible cause of such an error is, for example, misalignment of the mounting position of the imaging unit 32 in the physical device 2A.
  • the mounting position of the imaging unit 32 is designed so that the optical axis (z-axis) of the imaging unit 32 is parallel to the vertical direction, a slight positional deviation may occur during actual mounting. .
  • Calibration is performed to accurately obtain the base/camera transformation matrix M1 so that such a positional deviation does not cause an error in the position of the robot 30.
  • the design value of the base/camera transformation matrix M1 is included in the prior knowledge 21. Therefore, the design value of the base-camera conversion matrix M1 can be obtained from the prior knowledge 21 . Also, by executing the calibration, it is possible to obtain the actual values of the base/camera transformation matrix M1 that reflects the positional deviation and the like.
  • Calibration in this embodiment means estimating the base/camera transformation matrix M1 from a plurality of data sets of the marker/camera transformation matrix M2 and the arm/base transformation matrix M3 obtained when the robot 30 is moved to the respective postures. The estimation of the base/camera transformation matrix M1 is performed computationally.
  • the marker-camera conversion matrix M2 can be obtained by analyzing an image obtained by capturing the marker 31 by the imaging unit 32 .
  • the arm-base conversion matrix M3 can be obtained by calculation from the encoder values of the robot 30 .
  • the joint portion of the robot 30 is provided with a motor (not shown) for driving the joint and an encoder for detecting the rotation angle of the motor, and the encoder value means the output value of the encoder.
  • The arm/marker transformation matrix M4 is either unknown or obtainable as a design value from the prior knowledge 21. In this embodiment, since the design value of the arm/marker transformation matrix M4 is included in the prior knowledge 21, it can be obtained from the prior knowledge 21.
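  • As an illustration of how M1 can be estimated computationally from such data sets (a hedged numpy sketch rather than the patent's actual procedure; it assumes one particular direction convention, namely M2 as the pose of the marker in the camera frame, M3 as the pose of the arm tip in the base frame, and M4 as the pose of the marker in the arm-tip frame, all as 4x4 homogeneous matrices), a candidate M1 can be formed for each posture by chaining the transformations and the candidates then averaged:

      import numpy as np

      def average_transforms(mats):
          """Average a list of 4x4 homogeneous transforms: translations are averaged
          directly, the mean rotation is projected back onto SO(3) via SVD."""
          R_mean = np.mean([M[:3, :3] for M in mats], axis=0)
          U, _, Vt = np.linalg.svd(R_mean)
          R = U @ Vt
          if np.linalg.det(R) < 0:          # keep a proper rotation (det = +1)
              U[:, -1] *= -1
              R = U @ Vt
          avg = np.eye(4)
          avg[:3, :3], avg[:3, 3] = R, np.mean([M[:3, 3] for M in mats], axis=0)
          return avg

      def estimate_M1(M2_list, M3_list, M4):
          """For each posture i, chain base<-arm (M3_i), arm<-marker (M4) and
          marker<-camera (inverse of M2_i) to obtain a candidate base<-camera
          transform, then average the candidates over all postures."""
          candidates = [M3 @ M4 @ np.linalg.inv(M2)
                        for M2, M3 in zip(M2_list, M3_list)]
          return average_transforms(candidates)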
  • In this embodiment, the origin of each coordinate system, the orientation of each coordinate system, and the coordinate transformations are set as described above, but the present invention is not limited to this example.
  • the prior knowledge 21 is a database having parameters (1) to (11) below.
  • the function using the above prior knowledge 21 will be described, but the function may be implemented using only part of it.
  • the posture set generator 23 generates a posture set of the robot 30 based on the parameters obtained from the prior knowledge 21 and the parameters obtained from the generated parameters 22 . Also, the posture set generator 23 stores the generated posture set in the control device 1 .
  • FIG. 4 is a functional block diagram of the posture set generator 23.
  • The posture set generation unit 23 includes a parameter acquisition unit 231, a teaching space generation unit 232, a teaching position generation unit 233, a teaching posture generation unit 234, a coordinate system conversion unit 235, and a posture set storage unit 236.
  • the parameter acquisition unit 231 acquires parameters necessary for generating a set of postures from the prior knowledge 21 and the generation parameters 22 .
  • the teaching space generation unit 232 determines a space in which the marker 31 is placed during calibration, and divides this space into a plurality of small spaces. Also, the teaching space generator 232 assigns an index to each small space.
  • the teaching position generation unit 233 generates positions at which the markers 31 are arranged in each small space, that is, teaching positions, based on the camera coordinate system C4.
  • the teaching posture generation unit 234 generates the posture of the marker 31 at each teaching position generated by the teaching position generation unit 233, that is, the teaching posture.
  • The coordinate system conversion unit 235 converts the set of postures of the marker 31 with respect to the camera coordinate system C4 into a set of postures of the robot 30 with respect to the base coordinate system C1, based on the design values of the base/camera transformation matrix M1 included in the prior knowledge 21.
  • the posture set storage unit 236 stores the posture set generated by the teaching posture generation unit 234 and the posture set transformed by the coordinate system transformation unit 235 in the control device 1 . The configuration of each part of the posture set generator 23 will be described in more detail below.
  • FIG. 5 is a diagram showing a list of parameters acquired by the parameter acquisition unit 231.
  • the parameters acquired by the parameter acquisition unit 231 include the design values (rotation matrix Rca, translation matrix tca) of the base-camera conversion matrix M1, the measurement range of the imaging unit 32 (angle of view, resolution, optical blur), distortion information of the imaging unit 32 (distortion evaluation), workspace of the robot 30 (three-dimensional space information), shape of the marker 31 (board, sphere, size), resolution in each axis direction of the posture set (X1, Y1 , Z1).
  • FIG. 6 is a flowchart showing a parameter acquisition procedure by the parameter acquisition unit 231.
  • the parameter acquisition unit 231 acquires the design values of the base-camera conversion matrix M1 from the prior knowledge 21 (step S1).
  • the parameter acquisition unit 231 acquires the rotation matrix Rca and the translation matrix tca as the design values of the base-camera conversion matrix M1.
  • the parameter acquisition unit 231 acquires the measurement range of the imaging unit 32 and the distortion information of the imaging unit 32 from the prior knowledge 21 (step S2). At this time, the parameter acquisition unit 231 acquires the angle of view, resolution, and optical blur as the measurement range of the imaging unit 32 and acquires the distortion evaluation value as the distortion information of the imaging unit 32 .
  • the parameter acquisition unit 231 acquires information defining the workspace of the robot 30 from the prior knowledge 21 (step S3). At this time, the parameter acquisition unit 231 acquires three-dimensional space information indicating the work space as information defining the work space of the robot 30 . Next, the parameter acquisition unit 231 acquires shape data of the marker 31 (step S4). At this time, the parameter acquisition unit 231 acquires the type (board, sphere) and size data of the marker 31 as the shape data of the marker 31 . Next, the parameter acquisition unit 231 acquires the resolution (X1, Y1, Z1) in each axial direction of the posture set from the generation parameter 22 (step S5).
  • the number of poses (X1, Y1, Z1) on each axis is acquired as a parameter that determines the resolution in each axis direction.
  • The greater the values of X1, Y1, and Z1, that is, the greater the number of postures to be created along each axis, the higher the resolution in each axis direction.
  • the parameters X1, Y1, and Z1 correspond to parameters that determine the number and density of poses included in the pose set.
  • FIG. 7 is a flow chart showing a procedure in which the teaching space generation unit 232, the teaching position generation unit 233, the teaching posture generation unit 234, and the coordinate system conversion unit 235 generate a posture set and the posture set storage unit 236 stores it.
  • the teaching space generator 232 generates a teaching space in which the markers 31 are arranged (step S6).
  • the teaching space generated by the teaching space generation unit 232 is a three-dimensional space based on the camera coordinate system C4.
  • Next, the teaching space generation unit 232 divides the generated teaching space into a plurality of small spaces and assigns an index to each small space (step S7).
  • Next, the teaching position generation unit 233 sets the position where the marker 31 is arranged in each small space as a teaching position (step S8). As a result, teaching positions of the marker 31 are generated for the number of small spaces.
  • the teaching posture generation unit 234 initially sets the teaching posture of the marker 31 at each teaching position so that the marker 31 faces the imaging unit 32 (step S9). In the following description, the initially set teaching orientation of the marker 31 will be referred to as the initial orientation.
  • Next, the teaching posture generation unit 234 generates the teaching posture of the marker 31 by rotating the posture of the marker 31 from the initial posture about each axis direction by an angle between 0 and θ (threshold) degrees, determined based on the index assigned to each small space or randomly (step S10). As a result, teaching postures of the marker 31 are generated for the number of small spaces.
  • the coordinate system conversion unit 235 converts the coordinates of the taught posture of the marker 31 based on the camera coordinate system C4 generated by the taught posture generation unit 234 in step S10 using the design values of the base/camera transformation matrix M1. are converted into the coordinates of the base coordinate system C1 (step S11).
  • the teaching posture of the marker 31 based on the camera coordinate system C4 is converted into the teaching posture of the robot 30 based on the base coordinate system C1.
  • the teaching postures of the robot 30 are generated for the number of small spaces.
  • Finally, the posture set storage unit 236 saves, in the control device 1, the posture set (a plurality of teaching postures) of the marker 31 generated by the teaching posture generation unit 234 in step S10 and the posture set (a plurality of teaching postures) of the robot 30 converted by the coordinate system conversion unit 235 in step S11 (step S12).
  • the controller 1 stores a set of orientations of the marker 31 based on the camera coordinate system C4 and a set of orientations of the robot 30 based on the base coordinate system C1.
  • FIG. 8 is a diagram for explaining the teaching space generation method of the teaching space generation unit 232. The illustrated generation method is applied to step S6 in FIG. 7 above.
  • The teaching space is defined as the common area where the space in which the robot 30 works and the space that the imaging unit 32 can accurately capture (measure) overlap.
  • The work space of the robot 30 is determined based on the camera coordinate system C4. As a method of determining the work space, for example, when the work performed by the robot 30 is picking work, the space in which the marker 31 is positioned while the robot 30 grips a workpiece (not shown) can be determined in advance.
  • the reason why the work space of the robot 30 is used as the teaching space T1 is as follows.
  • movement errors systematically occur due to manufacturing errors of the robot 30 or the like.
  • the appearance of the movement error differs depending on the position in space. Therefore, by setting the posture taken by the robot 30 during calibration to be the same as the posture during work, it is possible to use data with the same movement error as during work for calibration.
  • the space that the imaging unit 32 can shoot is determined by the angle of view of the imaging unit 32, but the space that the imaging unit 32 can shoot with accuracy is limited to a narrower range than the shootable space.
  • the size of the space in the Z-axis direction in the camera coordinate system C4 is specified so that the value of the optical blur of the imaging unit 32 is equal to or less than the threshold.
  • the spatial dimension in the Z-axis direction is specified within a range in which the optical blur value is 1 px or less.
  • The size of the space in the X-axis and Y-axis directions in the camera coordinate system C4 is specified as the range in which, when distortion correction is applied to the image captured by the imaging unit 32 (hereinafter also referred to as the "captured image"), the luminance difference between the image before and after the distortion correction is equal to or less than a threshold value. As a result, it is possible to ensure that the marker 31 is placed in a space where the captured image is less likely to be distorted, which reduces errors in the calibration described later.
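  • One possible way to implement this luminance-difference check (a sketch assuming OpenCV, a known camera matrix and distortion coefficients, and an illustrative threshold of 5 gray levels; the patent does not specify the exact procedure) is to undistort the captured image and compare it pixel by pixel with the original:

      import cv2
      import numpy as np

      def low_distortion_mask(image, camera_matrix, dist_coeffs, threshold=5):
          """Return a boolean mask of pixels whose luminance changes by no more than
          `threshold` gray levels when distortion correction is applied."""
          gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
          undistorted = cv2.undistort(gray, camera_matrix, dist_coeffs)
          diff = cv2.absdiff(gray, undistorted)
          return diff <= threshold  # True where the image is barely affected by distortion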
  • In the example of FIG. 8, the size of the space in the Z-axis direction in the camera coordinate system C4 is defined by the work space of the robot 30. This is because, when the sizes of the spaces in the Z-axis direction are compared, the work space of the robot 30 is narrower than the space in which the optical blur value is 1 px or less.
  • Also, the size of the space in the X-axis and Y-axis directions in the camera coordinate system C4 is defined by the space that the imaging unit 32 can accurately capture. This is because the space that the imaging unit 32 can accurately capture, that is, the space in which the luminance difference based on the distortion evaluation is equal to or less than the threshold, is narrower than the work space of the robot 30.
  • the teaching space T1 is generated by the above method.
  • FIG. 9 is a diagram for explaining how the teaching space generation unit 232 divides the teaching space.
  • the illustrated division method is applied to step S7 in FIG. 7 above.
  • For the division, the parameters X1, Y1, and Z1 given as the resolution in each axis direction of the posture set are used; the teaching space is divided along each axis according to X1, Y1, and Z1, and an index (X, Y, Z) is assigned to each resulting small space.
  • FIG. 10 is a diagram for explaining a teaching position generation method by the teaching position generation unit 233. As shown in FIG. The illustrated generation method is applied to step S8 in FIG. 7 above.
  • For example, the teaching position in the small space with index (X, Y, Z) = (1, 1, 3) is a position moved by half the size of the small space, that is, the central position of the small space given the index (1, 1, 3) can be generated as the teaching position. The same applies to the small spaces to which the other indexes are assigned.
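  • As a concrete illustration of steps S7 and S8 (a hypothetical sketch; the teaching space bounds and the resolution values used in the example are assumptions, not values from the patent), the teaching space can be divided into X1 x Y1 x Z1 small spaces and the center of each small space taken as the teaching position:

      import numpy as np

      def generate_teaching_positions(space_min, space_max, X1, Y1, Z1):
          """Divide the teaching space (an axis-aligned box in the camera coordinate
          system C4) into X1*Y1*Z1 small spaces and return, for each index (X, Y, Z),
          the center of the corresponding small space as the teaching position."""
          space_min = np.asarray(space_min, dtype=float)
          space_max = np.asarray(space_max, dtype=float)
          cell = (space_max - space_min) / np.array([X1, Y1, Z1])
          positions = {}
          for x in range(X1):
              for y in range(Y1):
                  for z in range(Z1):
                      corner = space_min + cell * np.array([x, y, z])
                      positions[(x + 1, y + 1, z + 1)] = corner + cell / 2.0
          return positions

      # Example: a 0.6 m x 0.6 m x 0.3 m teaching space divided at resolution 3 x 3 x 3.
      teach_pos = generate_teaching_positions([-0.3, -0.3, 0.8], [0.3, 0.3, 1.1], 3, 3, 3)
      print(teach_pos[(1, 1, 3)])  # center of the small space with index (1, 1, 3)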
  • FIG. 11 is a diagram showing how the marker looks when the marker is placed at the teaching position in each small space in the initial posture.
  • In the initial posture, the marker 31 is provisionally placed at the teaching position generated by the teaching position generation unit 233 in each small space, for example the small spaces with indexes (X, Y, Z) = (1, 1, 1), (1, 2, 1), (1, 3, 1), and so on, and oriented so as to face the imaging unit 32.
  • the marker 31 is arranged in any small space such that the Z axis of the camera coordinate system C4 is oriented exactly opposite to the Z axis of the marker coordinate system C3. Therefore, the appearance of the marker 31 is the same in all small spaces.
  • the teaching posture generation unit 234 generates a teaching posture by rotating the posture of the marker 31 in each axis direction of the marker coordinate system C3 from the initial posture set in step S9.
  • FIG. 12 is a diagram showing the teaching posture of the marker 31 generated by the teaching posture generation unit 234.
  • the indexes (X, Y, Z) of each small space in FIG. 12 correspond to the indexes (X, Y, Z) of each small space in FIG.
  • the taught orientation of the marker 31 in the small space with index (3,3,1) is generated by rotating the marker 31 about the x-axis of the marker coordinate system C3 with respect to the initial orientation.
  • the magnitude of the rotation angle of each axis is from 0 degrees to ⁇ (threshold) degrees, and is determined based on an index, rule-based or randomly.
  • The posture set, which is the plurality of postures generated by the teaching posture generation unit 234, is thus made up of a combination of postures in which the marker 31 is rotated in various directions and postures in which the orientation of the marker 31 is the same and only the translation differs.
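  • One possible realization of step S10 (an illustrative sketch using scipy; random sampling of the angles and the 30 degree threshold are assumptions, since the angles may also be chosen index-based or rule-based) rotates the camera-facing initial orientation about each axis by an angle between 0 and theta degrees:

      import numpy as np
      from scipy.spatial.transform import Rotation as R

      def generate_teaching_orientation(initial_rotation: R, theta_deg: float, rng=np.random) -> R:
          """Generate a teaching orientation by rotating the initial (camera-facing)
          orientation about the x, y and z axes by angles drawn from [0, theta_deg]."""
          angles = rng.uniform(0.0, theta_deg, size=3)      # one angle per axis
          delta = R.from_euler("xyz", angles, degrees=True)  # combined rotation
          return delta * initial_rotation                    # apply to the initial posture

      # Example: initial posture faces the camera; allow up to 30 degrees per axis.
      initial = R.identity()
      teach = generate_teaching_orientation(initial, theta_deg=30.0)
      print(teach.as_euler("xyz", degrees=True))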
  • In this way, the posture set can be generated automatically in the teaching space that satisfies both the work space of the robot 30 and the space that the imaging unit 32 can accurately capture.
  • the rotation of the base/camera conversion matrix M1 is obtained in the form of the rotation matrix Rca, but a quaternion or the like may be used.
  • The shape of the teaching space T1 generated by the teaching space generation unit 232 is not limited to the shape shown in FIG. 8; for example, the teaching space T1 may have a shape such as a rectangular parallelepiped.
  • If the distortion evaluation value of the imaging unit 32 used in this embodiment cannot be obtained as a parameter for generating the teaching space T1, the user may specify the shape and size of the teaching space in advance.
  • the teaching space T1 is divided based on the camera coordinate system C4 as a method of dividing the teaching space, but the teaching space T1 may be divided based on an arbitrary coordinate system.
  • the teaching position generated by the teaching position generation unit 233 may be set to an arbitrary position within the small space, for example, the vertex of the small space.
  • In the above description, the teaching posture generation unit 234 generates the teaching posture by keeping the rotation angle of the marker 31 about each axis within a threshold value, but the threshold does not necessarily need to be set.
  • The posture set storage unit 236 stores the posture set in the control device 1, but the present invention is not limited to this; the posture set may also be stored in a memory (not shown) from which it can be loaded.
  • the simulator construction unit 24 has a function of virtually operating the robot 30 by constructing a simulator based on the prior knowledge 21 and generating an image (captured image) captured by the imaging unit 32 .
  • FIG. 13 is a diagram showing an example of a simulator constructed by the simulator construction unit 24. In FIG. 13, information such as a three-dimensional model including the design information, specifications, and layout of the real device 2A is acquired from the prior knowledge 21, and based on the acquired information, the robot 30, the marker 31, the imaging unit 32, and the environmental installation objects 33 are virtually placed on the simulator.
  • By using the shape information of the robot 30 obtained from the prior knowledge 21, such as the length of each link, together with the joint information obtained from the specifications, the robot 30 can be operated virtually. In addition, by operating the robot 30 virtually, it is possible to apply the trajectory planning algorithm used in the actual machine.
  • It is also possible to generate the pixel values of each image captured by the imaging unit 32 on the simulator.
  • the color information of the robot 30 and the environment installation object 33 can be easily changed on the simulator. For example, it is possible to change the RGB values representing the colors of the robot 30 and the environment installation object 33 to (255, 0, 0), etc., and generate an image taken on a simulator.
  • In this embodiment, the layout information including the design values of the base/camera transformation matrix M1 is acquired from the prior knowledge 21 and the objects are placed on the simulator based on the acquired layout information, but the placement method is not limited to this.
  • rough values indicating the size and arrangement of objects in the physical device 2A may be estimated from images captured by the imaging unit 32, and the objects may be arranged on the simulator based on the estimation results.
  • In this embodiment, the shape is reproduced using the three-dimensional model of the physical device 2A, but the simulator may instead hold only the design value information of each coordinate system.
  • When the imaging unit 32 is a stereo camera or a ToF camera capable of three-dimensional measurement, or when a three-dimensional shape can be obtained by applying machine learning to an image from a monocular camera, the three-dimensional shape of the real device 2A acquired by the imaging unit 32 may be utilized to construct the simulator.
  • FIG. 14 is a flow chart showing the processing procedure of the non-physical interference attitude extraction unit 25.
  • the non-physical interference posture extraction unit 25 extracts postures (non-physical interference postures) in which the units of the physical device 2A do not physically interfere with each other from the posture set generated by the posture set generation unit 23 .
  • the non-physical interference posture extraction unit 25 extracts the posture that does not cause physical interference by utilizing the simulator constructed by the simulator construction unit 24 .
  • Physical interference means that the robot 30 and the marker 31 come into contact with the robot 30, the imaging unit 32, the environmental installation object 33, and the like in the physical device 2A.
  • Specifically, the non-physical interference posture extraction unit 25 determines that the trajectory plan has succeeded if the robot 30 and the marker 31 do not physically interfere with any part of the real device 2A, and determines that the trajectory plan has failed if there is physical interference.
  • When the non-physical interference posture extraction unit 25 determines in step S17 that the trajectory planning is successful, it holds the i-th posture as a success pattern in the memory (step S18), and then holds success/failure information for the index of the small space in which the i-th posture was generated (step S19). When holding success information, the success flag is held in association with the index of the small space in which the i-th posture was generated; when holding failure information, the index of that small space is held in association with the failure flag.
  • When the non-physical interference posture extraction unit 25 determines in step S17 that the trajectory planning has failed, the processes of steps S18 and S19 are skipped.
  • When the trajectory planning fails, it corresponds to the case where it is determined that there is physical interference, and when the trajectory planning succeeds, it corresponds to the case where it is determined that there is no physical interference.
  • FIG. 15 is a bird's-eye view of the posture when the robot 30 actually performs the picking work.
  • When moving the marker 31 to the destination 1, the robot 30 does not physically interfere with any part of the physical device 2A. Therefore, when the trajectory planning is instructed to move the marker 31 to the destination 1, the trajectory planning succeeds and a success-pattern posture is obtained.
  • On the other hand, when the robot 30 physically interferes with the environmental installation object 33 (the wall 33A), the trajectory planning fails.
  • As a method of determining the presence or absence of physical interference, for example, for a trajectory candidate generated by the trajectory plan, the determination can be made based on whether or not the three-dimensional positions of the robot 30 and of the environmental installation object 33 and the like overlap when the robot 30 moves along the trajectory.
  • FIG. 16 is a side view of the posture when the robot 30 actually performs the picking work.
  • When moving the marker 31 to the destination 3, the robot 30 does not physically interfere with any part of the physical device 2A. Therefore, when the trajectory planning is instructed to move the marker 31 to the destination 3, the trajectory planning succeeds and a success-pattern posture is obtained.
  • On the other hand, when moving the marker 31 to the destination 4, the robot 30 physically interferes with the environmental installation object 33 (the conveyor 33B), and the trajectory planning fails.
  • In the above description, whether the trajectory planning is successful or not is determined by the presence or absence of physical interference between the robot 30 and the environmental installation object 33, but the present invention is not limited to this. For example, the user may add restrictions on the joint angles of the robot 30 to make the determination. Specifically, when moving the robot 30 to the destination, the trajectory planning may be determined to be successful if the joint angles of the robot 30 are equal to or less than an upper limit, and to have failed if a joint angle exceeds the upper limit.
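  • A simplified illustration of such a success determination (a hypothetical sketch: a real system would use a full collision checker, whereas here physical interference is approximated by axis-aligned bounding-box overlap, and the joint-limit rule follows the description above):

      import numpy as np

      def boxes_overlap(box_a, box_b):
          """Axis-aligned bounding boxes given as (min_xyz, max_xyz); True if they overlap."""
          a_min, a_max = map(np.asarray, box_a)
          b_min, b_max = map(np.asarray, box_b)
          return bool(np.all(a_max >= b_min) and np.all(b_max >= a_min))

      def trajectory_plan_succeeds(robot_link_boxes, environment_boxes,
                                   joint_angles_deg, joint_upper_limits_deg):
          """Trajectory plan 'succeeds' if no robot link box overlaps an environment
          object box (no physical interference) and every joint angle is within its
          upper limit (the optional user-specified restriction)."""
          for link in robot_link_boxes:
              for obstacle in environment_boxes:
                  if boxes_overlap(link, obstacle):
                      return False          # physical interference -> failure
          if np.any(np.abs(np.asarray(joint_angles_deg)) > np.asarray(joint_upper_limits_deg)):
              return False                  # joint limit exceeded -> failure
          return True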
  • FIG. 17 is a flow chart showing the processing procedure of the non-shielding posture extraction unit 26.
  • The non-shielding posture extraction unit 26 extracts, from the posture set generated by the posture set generation unit 23 and extracted by the non-physical interference posture extraction unit 25, postures in which shielding of the marker 31 does not occur between the marker 31 and the imaging unit 32 (non-shielding postures).
  • the non-shielding posture extraction unit 26 extracts a posture in which the shielding of the marker 31 does not occur by utilizing the simulator constructed by the simulator construction unit 24 .
  • the non-shielding posture extraction unit 26 loads the posture set generated by the posture set generation unit 23 and extracted by the non-physical interference posture extraction unit 25 onto the memory of the control device 1 (step S23).
  • the non-shielding posture extraction unit 26 acquires the number N2 of postures included in the posture set (step S24).
  • the non-shielding posture extraction unit 26 instructs the trajectory plan to move the robot 30 to the i-th posture on the simulator (step S26).
  • the non-shielding posture extraction unit 26 generates a virtual captured image by the imaging unit 32 on the simulator (step S27). That is, the non-shielding posture extraction unit 26 virtually generates a photographed image obtained by the imaging unit 32 when the robot 30 is moved to the i-th posture.
  • the non-shielding posture extraction unit 26 estimates the marker-camera conversion matrix M2 by analyzing the generated captured image (step S28).
  • As for the analysis of the captured image, for example, when a pattern of a plurality of black circles with known sizes and positions is printed on the marker 31, the marker/camera transformation matrix M2 can be estimated by associating the known sizes and positions of the black circles with the sizes and positions of the black circles in the above-described virtual captured image. Estimating the marker/camera transformation matrix M2 substantially means estimating the position and orientation of the marker 31, that is, the three-dimensional position of the marker 31 in the three-dimensional camera coordinate system C4.
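  • One common way to perform this kind of pattern-based pose estimation (a sketch using OpenCV, given as an illustration rather than as the patent's specific algorithm; the detection of the circle centers in the image is assumed to have been done already) is a PnP solve between the known circle centers on the marker and their detected image positions:

      import cv2
      import numpy as np

      def estimate_marker_camera_matrix(object_points, image_points, camera_matrix, dist_coeffs):
          """Estimate the marker/camera transformation M2 from known 3D circle centers
          on the marker (marker coordinate system C3) and their detected 2D positions
          in the captured image. Returns a 4x4 homogeneous matrix, or None on failure."""
          ok, rvec, tvec = cv2.solvePnP(
              np.asarray(object_points, dtype=np.float64),
              np.asarray(image_points, dtype=np.float64),
              camera_matrix, dist_coeffs)
          if not ok:
              return None                    # estimation failed (e.g. marker occluded)
          R, _ = cv2.Rodrigues(rvec)         # rotation vector -> rotation matrix
          M2 = np.eye(4)
          M2[:3, :3], M2[:3, 3] = R, tvec.ravel()
          return M2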
  • the non-shielding posture extraction unit 26 determines whether or not the estimation of the marker-camera conversion matrix M2 has succeeded (step S29). Then, when the estimation of the marker-camera conversion matrix M2 is successful, the non-shielding posture extraction unit 26 calculates the reliability of the estimation (step S30), and then determines whether or not the reliability is equal to or greater than the threshold. (step S31). Further, when the calculated reliability of the estimation is equal to or higher than the threshold, the non-shielding posture extraction unit 26 stores the i-th posture as a successful pattern in the memory (step S32), and then generates the i-th posture. Success/failure information for the index of the created small space is retained (step S33).
  • When holding the success information for the index of the small space, the success flag is held in association with the index of the small space in which the i-th posture was generated. Also, when holding failure information for the index of the small space, the index of the small space in which the i-th posture was generated is held in association with the failure flag.
  • When the non-shielding posture extraction unit 26 determines in step S29 that the estimation of the marker/camera transformation matrix M2 has failed, it skips the processing of steps S30 to S34; when it determines in step S31 that the reliability is not equal to or greater than the threshold, it skips the processing of steps S32 and S33.
  • The case where the estimation of the marker/camera transformation matrix M2 fails, or where the reliability of the estimation is not equal to or greater than the threshold, corresponds to the case where it is determined that the marker 31 is shielded. Conversely, the case where the marker/camera transformation matrix M2 is successfully estimated, or where the reliability of the estimation is equal to or greater than the threshold, corresponds to the case where it is determined that the marker 31 is not shielded.
  • In step S28, there are cases where the marker/camera transformation matrix M2 can be estimated even when part of the marker 31 is hidden by shielding. In such cases, however, the accuracy of estimating the marker/camera transformation matrix M2 may be lower than when the entire marker 31 appears in the captured image, and it is not desirable to use a posture with low estimation accuracy of the marker/camera transformation matrix M2 for calibration.
  • Therefore, the non-shielding posture extraction unit 26 uses the fact that the relative posture between the marker 31 and the imaging unit 32 in each teaching posture is known on the simulator, and calculates, as the reliability of the estimation, the ratio between the area the marker 31 would occupy in the captured image if it appeared without shielding and the area of the marker 31 actually appearing in the captured image of the imaging unit 32 on the simulator. Alternatively, for example, the ratio between the number of feature points on the marker 31 obtained by analyzing the captured image and the total number of feature points is calculated as the reliability. The non-shielding posture extraction unit 26 then determines whether or not the reliability calculated in this way is higher than a threshold determined in advance according to the type of the marker 31.
  • FIG. 18 is a schematic diagram showing an operation example of the non-shielding posture extraction unit 26.
  • A portion of the posture set loaded onto the memory of the control device 1 in step S23 is shown within the frame indicated by the dashed line on the left side of FIG. 18, and a portion of the posture set after the occlusion determination, extracted by the non-shielding posture extraction unit 26, is shown within the frame indicated by the dashed-dotted line on the right side of FIG. 18.
  • the environmental installation object 33 is arranged so as to cover the upper, lower, and left sides of the image captured by the imaging unit 32 on the simulator.
  • In the posture shown in the captured image I1, the robot 30 partially shields the marker 31, and the estimation of the position and posture of the marker 31 may fail. Even if it were possible to estimate the position and orientation of the marker 31 from the part of the marker 31 appearing in the captured image I1, the non-shielding posture extraction unit 26 calculates, as described above, the ratio between the area of the marker 31 when it appears in the image without shielding and the area of the marker 31 actually appearing in the captured image of the imaging unit 32 on the simulator as the reliability of the estimation. In calculating this reliability, the positions and orientations of the camera coordinate system C4 and of the marker 31 in a given posture of the robot 30 are known from the posture set generation unit 23.
  • Therefore, the set of pixel locations of the marker 31 on the image when the marker 31 appears in the image without obstruction (hereinafter referred to as the "pixel set") can be specified.
  • the non-shielding attitude extraction unit 26 changes the color information of the robot 30 and the environmental installation object 33 in the simulator to a color not included in the marker 31, for example, a color with an RGB value of (255, 0, 0).
  • the posture of the robot 30 is changed to virtually generate an image captured by the imaging unit 32 .
  • the position of the marker 31 in the captured image matches the set of pixels, and the color of the blocked pixels is (255, 0, 0). Therefore, the non-shielding posture extraction unit 26 calculates the ratio of pixels that are not of the color (255, 0, 0) in the pixel set as the reliability of the estimation.
  • the estimation reliability is calculated to be approximately 50%. Moreover, when the threshold value to be compared with the reliability is set to 90% in advance, the reliability in the photographed image I1 is less than the threshold.
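  • A minimal sketch of this pixel-ratio reliability calculation (illustrative only; it assumes the simulator renders the scene with the robot and the environmental installation object recolored to RGB (255, 0, 0) and that the pixel set of the unshielded marker is given as an array of (row, col) indices):

      import numpy as np

      OCCLUDER_COLOR = np.array([255, 0, 0])  # color assigned to robot / environment on the simulator

      def estimation_reliability(rendered_image, marker_pixel_set):
          """Ratio of marker pixels that are NOT covered by an occluder.

          rendered_image:   HxWx3 RGB image rendered on the simulator with the robot and
                            environmental objects recolored to OCCLUDER_COLOR.
          marker_pixel_set: (N, 2) array of (row, col) pixel locations where the marker
                            would appear if it were not shielded.
          """
          rows, cols = marker_pixel_set[:, 0], marker_pixel_set[:, 1]
          pixels = rendered_image[rows, cols]                  # colors at the marker locations
          occluded = np.all(pixels == OCCLUDER_COLOR, axis=1)  # True where an occluder covers the marker
          return 1.0 - occluded.mean()                         # e.g. 0.5 -> roughly 50% visible

      # The posture is kept only if the reliability clears the preset threshold (e.g. 90%):
      # keep = estimation_reliability(img, pixel_set) >= 0.9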
  • In the posture shown in the captured image I2, the marker 31 is not blocked and the entire marker 31 appears in the image. Therefore, in the case of the captured image I2, the position and orientation of the marker 31 can be estimated and the reliability of the estimation is higher than the threshold, so this is a success pattern. In the posture shown in the captured image I3 of FIG. 18, the marker 31 is placed within the angle of view of the imaging unit 32, but the marker 31 is partly blocked by the environmental installation object 33. Therefore, the estimation of the position and orientation of the marker 31 may fail. Furthermore, since 9 out of the 12 black dots on the marker 31 are captured in the captured image I3, the reliability is calculated if the position and orientation of the marker 31 can be estimated.
  • In this case, as described above, the non-shielding posture extraction unit 26 calculates, as the reliability of the estimation, the ratio of the number of feature points on the marker 31 obtained by analyzing the captured image to the total number of feature points. In the captured image I3, the reliability of the estimation is thus calculated as 75%. Since the threshold value to be compared with the reliability is set to 90% in advance, the reliability in the captured image I3 is less than the threshold.
  • In this way, the non-shielding posture extraction unit 26 determines whether or not the marker 31 is shielded, and generates the posture set determined to be unshielded (the postures shown in the captured images I2, I4, I5, and so on). As a result, only postures in which the marker 31 appears in the right and center portions of the captured image are generated.
  • As described above, the non-shielding posture extraction unit 26 obtains the reliability of the estimation and determines whether or not the marker 31 is shielded depending on whether the reliability is equal to or greater than the threshold. Specifically, if the reliability of the estimation is equal to or greater than the threshold, it is determined that the marker 31 is not shielded, and if the reliability of the estimation is less than the threshold, it is determined that the marker 31 is shielded. As a result, among the postures for which the three-dimensional position of the marker 31 has been successfully estimated, only postures whose estimation reliability is higher than the threshold can be extracted as postures in which the marker 31 is not shielded.
  • the non-shielding posture extraction unit 26 calculates the reliability of the estimation, but depending on the shape or pattern of the marker 31, the reliability of the estimation may not be calculated. That is, the reliability of estimation may be calculated as necessary. Further, calculation of the reliability is not limited to the area ratio and the feature point ratio described above, and when the position and orientation of the marker 31 are estimated using machine learning or the like, the reliability of the estimation itself may be estimated.
  • The posture set evaluation unit 27 determines whether or not calibration with an accuracy equal to or higher than a predetermined value can be expected using the posture set generated by the posture set generation unit 23 and extracted by the non-physical interference posture extraction unit 25 and the non-shielding posture extraction unit 26. When such high-accuracy calibration can be expected, the posture set evaluation unit 27 instructs the calibration execution unit 29 to execute the calibration. When calibration with an accuracy equal to or higher than the predetermined value cannot be expected, the posture set evaluation unit 27 instructs the posture set generation unit 23 to add to the posture set and/or to change the value of the generation parameter and regenerate the posture set.
  • FIG. 19 is a functional block diagram of the posture set evaluation unit 27. As shown in FIG. 19, the posture set evaluation unit 27 includes a posture number evaluation unit 271, a posture set extraction unit 272, a calibration evaluation unit 273, and an index evaluation unit 274.
  • The posture number evaluation unit 271 evaluates the number of postures included in the posture set generated by the posture set generation unit 23 and extracted by the non-physical interference posture extraction unit 25 and the non-shielding posture extraction unit 26. Specifically, the posture number evaluation unit 271 determines whether or not the number of postures included in the posture set is within a predetermined number. When the posture number evaluation unit 271 determines that the number of postures is not within the predetermined number, the posture set extraction unit 272 extracts a subset consisting of some of the postures included in the posture set. For example, if a total of 100 postures are included in the posture set, 20 of the 100 postures are extracted as a subset.
  • The calibration evaluation unit 273 uses the above-described posture set to generate, on the simulator, data such as the marker/camera transformation matrix M2 and the arm/base transformation matrix M3 that are virtually used for calibration, and evaluates the estimation accuracy of the transformation matrix estimated by the calibration. That is, the calibration evaluation unit 273 evaluates the accuracy of the calibration.
  • When the calibration evaluation unit 273 determines that calibration accuracy equal to or higher than the predetermined value cannot be expected due to, for example, an insufficient number of postures, the index evaluation unit 274 instructs the posture set generation unit 23 to add postures or to generate the posture set again.
  • Each unit of the posture set evaluation unit 27 will be described in detail below.
  • When the number of postures is not within the predetermined number, the posture number evaluation unit 271 instructs the posture set extraction unit 272 to extract a portion of the posture set, that is, a subset. The posture number evaluation unit 271 then causes the calibration evaluation unit 273 to determine whether the subset extracted by the posture set extraction unit 272 can be expected to yield calibration accuracy equal to or higher than the predetermined value. In this way, by extracting some of the postures from the posture set as a subset, the time required for calibration can be kept down even when the posture set contains many postures, and the calibration accuracy can be evaluated efficiently.
  • The posture set extraction unit 272 extracts some postures from the posture set, which is a set of multiple postures, based on a predetermined rule. As the predetermined rule, for example, postures may be extracted so that the indices assigned to the small spaces in which the postures were generated appear evenly. In the present embodiment, the posture set extraction unit 272 extracts some postures based on this rule.
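  • A minimal sketch of one way to implement this extraction rule, assuming each posture carries the index of the small space in which it was generated (the names and the round-robin strategy are illustrative assumptions, not taken from the patent):

    from collections import defaultdict
    from itertools import cycle

    def extract_balanced_subset(postures, indices, subset_size):
        # Group postures by the index of the small space they were generated in.
        by_index = defaultdict(list)
        for posture, idx in zip(postures, indices):
            by_index[idx].append(posture)
        subset = []
        # Visit the small-space indices in round-robin order so that every index
        # appears in the subset as evenly as possible.
        for idx in cycle(sorted(by_index)):
            if len(subset) >= subset_size or not any(by_index.values()):
                break
            if by_index[idx]:
                subset.append(by_index[idx].pop())
        return subset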
  • The calibration evaluation unit 273 generates the marker-camera conversion matrix M2 and the arm-base conversion matrix M3 used when performing a virtual calibration on the simulator, and obtains an estimated value of the base-camera conversion matrix M1 through the optimization processing described later. The calibration evaluation unit 273 then compares the estimated value of the base-camera conversion matrix M1 with the correct value from the simulator design, and determines whether the calibration accuracy required for executing the robot work, that is, an accuracy equal to or higher than the predetermined value, is obtained. The method for obtaining the estimated value of the base-camera conversion matrix M1 will be described later together with the processing of the calibration execution unit 29.
  • The calibration evaluation unit 273 may also obtain the estimated value of the base-camera conversion matrix M1 after adding errors to the generated marker-camera conversion matrix M2 and arm-base conversion matrix M3. The reason is as follows: when calibration is actually performed on the physical device 2A, errors due to noise and the like are added to the observed data, so by adding errors to the marker-camera conversion matrix M2 and the arm-base conversion matrix M3 as described above, the calibration can be evaluated under conditions close to those of the actual machine. The errors added to the marker-camera conversion matrix M2 and the arm-base conversion matrix M3 may, for example, follow a normal distribution. The standard deviation of the normal distribution may be specified by the prior knowledge 21 or the like, or may be specified as a value having a constant ratio to the values of the marker-camera conversion matrix M2 and the arm-base conversion matrix M3.
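  • As a sketch of how such errors might be injected on the simulator, the following assumes additive Gaussian noise on the translation and a small random rotation perturbation applied to a 4x4 homogeneous transform such as M2 or M3 (the noise model and the magnitudes are assumptions for illustration, not values from the patent):

    import numpy as np

    def perturb_transform(T, trans_sigma=0.001, rot_sigma_rad=0.002, rng=None):
        # Add normally distributed noise to a 4x4 homogeneous transform.
        rng = np.random.default_rng() if rng is None else rng
        noisy = T.copy()
        noisy[:3, 3] += rng.normal(0.0, trans_sigma, size=3)      # translation noise
        ax, ay, az = rng.normal(0.0, rot_sigma_rad, size=3)       # small rotation angles [rad]
        # First-order small-angle rotation perturbation (approximate; re-orthonormalize
        # the upper-left 3x3 block if an exact rotation matrix is required).
        dR = np.array([[1.0, -az,  ay],
                       [ az, 1.0, -ax],
                       [-ay,  ax, 1.0]])
        noisy[:3, :3] = dR @ noisy[:3, :3]
        return noisy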
  • Here, the observed data are, for example, the encoder values of the robot 30 and the position and orientation of the marker 31 obtained by analyzing the image captured by the imaging unit 32. The noise on the observed data includes noise caused by errors in the robot mechanism and noise caused by image distortion.
  • When the calibration evaluation unit 273 determines that calibration with an accuracy equal to or higher than the predetermined value can be expected, it instructs the calibration execution unit 29 to perform the calibration. The calibration execution unit 29 receives this instruction from the calibration evaluation unit 273 and executes the calibration to estimate the base-camera conversion matrix M1. Conversely, when the calibration evaluation unit 273 determines that calibration with an accuracy equal to or higher than the predetermined value cannot be expected, it instructs the index evaluation unit 274 to perform index evaluation.
  • When the index evaluation unit 274 receives an index evaluation execution instruction from the calibration evaluation unit 273 (that is, when calibration with an accuracy equal to or higher than the predetermined value cannot be expected), it instructs the posture set generation unit 23 to generate additional postures and/or instructs the parameter update unit 28 to change the value of the generation parameters 22. Through such feedback, the posture set generation unit 23 can generate a posture set again, including additional postures or postures based on the updated generation parameters 22. As a result, when the posture set generation unit 23 regenerates the posture set, the likelihood that calibration with an accuracy equal to or higher than the predetermined value can be expected increases.
  • The index evaluation unit 274 acquires the posture set evaluated by the calibration evaluation unit 273. If the number of postures included in the posture set is less than a predetermined threshold value, the index evaluation unit 274 searches the indices (X, Y, Z) of all the postures and instructs the parameter update unit 28 to increment the index with the lowest frequency of appearance. If the number of postures included in the acquired posture set is equal to or greater than the predetermined threshold value, the index evaluation unit 274 instructs the posture set generation unit 23 to generate additional postures.
  • As the method of generating the additional postures, for example, a plurality of postures may be randomly extracted from the acquired posture set, and a teaching position and a teaching posture may be generated in the small space corresponding to the index of each extracted posture; however, the generation method is not limited to this.
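  • One possible reading of the index evaluation described above is sketched below; it assumes each evaluated posture carries its (X, Y, Z) small-space index and that the axis whose index values appear least frequently is the one to be refined (this interpretation and the interface are assumptions, not confirmed by the patent):

    from collections import Counter

    def evaluate_indices(posture_indices, min_postures):
        # posture_indices: list of (x, y, z) small-space indices, one per posture.
        # Returns ('increment_resolution', axis) or ('add_postures', None).
        if len(posture_indices) < min_postures:
            if not posture_indices:
                return 'increment_resolution', None  # no data: the caller chooses an axis
            # For each axis, count how often each index value appears and find the
            # axis containing the overall rarest value.
            axis_counts = [Counter(idx[a] for idx in posture_indices) for a in range(3)]
            rarest_axis = min(range(3), key=lambda a: min(axis_counts[a].values()))
            return 'increment_resolution', 'XYZ'[rarest_axis]
        return 'add_postures', None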
  • Through the processing of the posture set evaluation unit 27 described above, the evaluation result of the posture set can be fed back to the posture set generation unit 23 and the parameter update unit 28. Further, the calibration execution unit 29 can be made to execute the calibration only when calibration with an accuracy equal to or higher than the predetermined value can be expected, which prevents the calibration accuracy from deteriorating due to an insufficient number of postures or the like. When calibration with an accuracy equal to or higher than the predetermined value cannot be expected, the posture set generation unit 23 can be made to generate a posture set again.
  • In the present embodiment, the posture set evaluation unit 27 has the posture number evaluation unit 271 and the posture set extraction unit 272, but the configuration is not limited to this; it may be configured with only the calibration evaluation unit 273 and the index evaluation unit 274. The function of the calibration evaluation unit 273 and the function of the index evaluation unit 274 may also be integrated into one functional unit. Further, in the present embodiment, the rule that the indices assigned to the small spaces appear evenly was exemplified as the rule applied by the posture set extraction unit 272; instead, some of the postures may be extracted at random.
  • The parameter update unit 28 increments the values of the parameters X1, Y1, and Z1, which determine the resolution of the posture set, based on the instruction from the posture set evaluation unit 27, thereby updating the generation parameters 22. The posture set generation unit 23 then uses the updated generation parameters 22 to generate a posture set.
  • FIG. 20 is a flow chart showing the processing procedure of the calibration executing section 29.
  • The calibration execution unit 29 controls the robot 30 and the imaging unit 32 of the physical device 2A and acquires the marker-camera conversion matrix M2 and the arm-base conversion matrix M3 when the robot 30 is moved to each posture of the posture set. Further, the calibration execution unit 29 estimates the base-camera conversion matrix M1 based on the acquired coordinate conversion parameters, that is, the marker-camera conversion matrix M2 and the arm-base conversion matrix M3.
  • First, the calibration execution unit 29 loads, into the memory of the control device 1, the posture set for which the posture set evaluation unit 27 has determined (evaluated) that calibration with an accuracy equal to or higher than the predetermined value can be expected (step S37).
  • the calibration execution unit 29 acquires the number N3 of postures included in the posture set (step S38).
  • the calibration executing unit 29 actually moves the robot 30 to the i-th posture in the physical device 2A (step S40).
  • the calibration execution unit 29 captures an image of the work space of the robot 30 using the imaging unit 32 (step S41).
  • the calibration executing unit 29 estimates the marker-camera conversion matrix M2 by analyzing the captured image obtained by the imaging unit 32 (step S42).
  • the calibration execution unit 29 determines whether or not the estimation of the marker-camera conversion matrix M2 has succeeded (step S43).
  • In step S43, the estimation of the marker-camera conversion matrix M2 may fail due to, for example, the actual lighting environment in the physical device 2A or deviations of the specifications and arrangement of the imaging unit 32 of the physical device 2A from their design values.
  • When the calibration execution unit 29 succeeds in estimating the marker-camera conversion matrix M2, it stores, in the memory of the control device 1, the arm-base conversion matrix M3 indicating the i-th posture of the robot 30 and the marker-camera conversion matrix M2 indicating the position and orientation of the marker 31 corresponding to that posture (step S44).
  • On the other hand, when the calibration execution unit 29 determines in step S43 that the estimation of the marker-camera conversion matrix M2 has failed, it skips the process of step S44.
  • When the above processing has been performed for the postures included in the posture set, the calibration execution unit 29 solves an optimization problem using the data stored in the memory to estimate the base-camera conversion matrix M1 (step S47), and then ends the series of processing. Solving the optimization problem in step S47 corresponds to the optimization processing.
  • A known technique described in the literature can be used as the method for estimating the base-camera conversion matrix M1 by this optimization processing. In that formulation, the coordinate transformation parameter AA is a known parameter based on the robot encoder values, while the coordinate transformation parameters BB and CC are unknown parameters; by estimating these unknowns, the base-camera conversion matrix M1 in the physical device 2A can be estimated.
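  • The patent does not spell out the solver, but as a hedged sketch, a common off-the-shelf way to perform this kind of estimation is OpenCV's hand-eye calibration routine; for the fixed-camera (eye-to-hand) arrangement used here, the inverted arm-base transforms are passed in, and the result corresponds to the camera pose expressed in the base frame (assumes OpenCV 4.1 or later and 4x4 numpy matrices for M2 and M3):

    import numpy as np
    import cv2

    def estimate_base_camera(M3_list, M2_list):
        # M3_list: arm->base transforms from encoder values (one per posture).
        # M2_list: marker->camera transforms from image analysis (one per posture).
        R_base2arm, t_base2arm, R_marker2cam, t_marker2cam = [], [], [], []
        for M3, M2 in zip(M3_list, M2_list):
            M3_inv = np.linalg.inv(M3)            # invert for the fixed-camera case
            R_base2arm.append(M3_inv[:3, :3])
            t_base2arm.append(M3_inv[:3, 3])
            R_marker2cam.append(M2[:3, :3])
            t_marker2cam.append(M2[:3, 3])
        R, t = cv2.calibrateHandEye(R_base2arm, t_base2arm,
                                    R_marker2cam, t_marker2cam,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
        M1 = np.eye(4)
        M1[:3, :3], M1[:3, 3] = R, t.ravel()      # camera-to-base transform (an estimate of M1)
        return M1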
  • As described above, in the present embodiment, the posture set generation unit 23 generates a posture set of the robot 30 for enabling the imaging unit 32 to image (photograph) the marker 31, and the non-shielding posture extraction unit 26 extracts, from that posture set, postures in which the marker 31 is not shielded. Therefore, a set of postures in which the marker 31 is not blocked can be generated automatically, the posture set of the robot 30 can be taught in consideration of the shielding of the marker 31, and the man-hours required for teaching the posture set of the robot 30 can be reduced.
  • Further, in the present embodiment, the non-physical interference posture extraction unit 25 extracts, from the posture set generated by the posture set generation unit 23, postures in which the robot 30 and the marker 31 do not physically interfere. Therefore, a set of postures free of physical interference can be generated automatically, the posture set of the robot 30 can be taught in consideration of the physical interference in the physical device 2A, and the man-hours required for teaching the posture set of the robot 30 can be reduced.
  • In the present embodiment, the non-physical interference posture extraction unit 25 extracts the non-physical interference postures from the posture set generated by the posture set generation unit 23, and the non-shielding posture extraction unit 26 extracts the non-shielding postures from the extracted set of non-physical interference postures; however, the present invention is not limited to this. For example, the non-shielding posture extraction unit 26 may extract the non-shielding postures from the posture set generated by the posture set generation unit 23, and the non-physical interference posture extraction unit 25 may then extract the non-physical interference postures from the extracted set of non-shielding postures. Alternatively, the non-physical interference posture extraction unit 25 and the non-shielding posture extraction unit 26 may each extract postures directly from the posture set generated by the posture set generation unit 23, and the posture set evaluation unit 27 may evaluate the postures common to the two extracted posture sets. Alternatively, of the non-physical interference posture extraction unit 25 and the non-shielding posture extraction unit 26, only the non-shielding posture extraction unit 26 may be provided. The above points also apply to the second embodiment described later.
  • FIG. 21 is a diagram showing a configuration example of a robot control calibration device according to the second embodiment.
  • the calibration device 2-1 includes a control device 1, a real device 2C, and a calibration control device 2D.
  • the real device 2C has a robot 30 consisting of a robot arm, a marker 31 attached to the robot 30, and an imaging unit 32-1 that measures the work space of the robot 30 and its surroundings.
  • the configuration of the physical device 2C is the same as that of the first embodiment except for the imaging unit 32-1.
  • The imaging unit 32-1 is composed of a plurality of (two in the figure) imaging devices 32A and 32B.
  • Each imaging device 32A, 32B is configured by, for example, a monocular camera, a ToF (Time of Flight) camera, a stereo camera, or the like.
  • The calibration control device 2D includes prior knowledge 21-1, generation parameters 22-1, a posture set generation unit 23-1, a simulator construction unit 24-1, a non-physical interference posture extraction unit 25-1, a non-shielding posture extraction unit 26-1, a posture set evaluation unit 27-1, a parameter update unit 28-1, and a calibration execution unit 29-1, and further includes a common posture extraction unit 201. Configurations other than the common posture extraction unit 201 are basically the same as those of the first embodiment.
  • In the second embodiment, the imaging unit 32-1 is composed of a plurality of imaging devices 32A and 32B. Accordingly, the prior knowledge 21-1, the generation parameters 22-1, the posture set generation unit 23-1, the simulator construction unit 24-1, the non-physical interference posture extraction unit 25-1, the non-shielding posture extraction unit 26-1, the posture set evaluation unit 27-1, the parameter update unit 28-1, and the calibration execution unit 29-1 perform, for each of the plurality of imaging devices 32A and 32B, processing operations similar to those in the first embodiment.
  • For example, the posture set generation unit 23-1 separately generates a posture set of the robot 30 that enables the imaging device 32A to photograph the marker 31 and a posture set of the robot 30 that enables the imaging device 32B to photograph the marker 31.
  • Such individual processing for each of the imaging devices 32A and 32B is performed not only by the posture set generation unit 23-1 but also by the simulator construction unit 24-1, the non-physical interference posture extraction unit 25-1, the non-shielding posture extraction unit 26-1, the posture set evaluation unit 27-1, the parameter update unit 28-1, and the calibration execution unit 29-1.
  • The common posture extraction unit 201 extracts, from the posture sets generated by the posture set generation unit 23-1 for each of the imaging devices 32A and 32B and then narrowed down for each of the imaging devices 32A and 32B by the non-physical interference posture extraction unit 25-1 and the non-shielding posture extraction unit 26-1, a plurality of postures that can be used in common by the plurality of imaging devices 32A and 32B. The common posture extraction unit 201 then changes the posture set of each imaging device 32A, 32B so that the extracted common postures remain. A detailed description is given below.
  • The common posture extraction unit 201 determines whether each posture included in the posture set generated for a certain imaging device 32A is also applicable to another imaging device 32B, and performs this determination for all combinations of the imaging devices 32A and 32B. As an example, whether a posture generated for a certain imaging device 32A is applicable to another imaging device 32B can be determined by controlling the robot 30 on the simulator constructed by the simulator construction unit 24-1, moving the marker 31 to that posture, and checking whether the following conditions (1) to (3) are satisfied. (1) The marker 31 is included within the teaching space of the imaging device 32B. (2) The rotation matrices of the marker 31 and the imaging device 32B are within a preset threshold range. (3) The function of the non-shielding posture extraction unit 26-1 determines that occlusion does not occur.
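  • A rough sketch of how such per-camera applicability checks might be combined across cameras is shown below; the three condition checks are passed in as a single callable because their exact implementations live in the simulator (the interface is an assumption for illustration):

    def extract_common_postures(posture_sets, is_applicable):
        # posture_sets: dict mapping imaging-device id -> list of postures generated for it.
        # is_applicable(device_id, posture): callable implementing checks (1)-(3) on the simulator.
        devices = list(posture_sets)
        common = []
        for device, postures in posture_sets.items():
            for posture in postures:
                others = (d for d in devices if d != device)
                if all(is_applicable(d, posture) for d in others):
                    common.append(posture)
        return common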
  • A posture that can be used in common by the plurality of imaging devices 32A and 32B is added as one of the postures constituting the posture set of each imaging device 32A, 32B to which it is applicable.
  • On the other hand, postures that can be used only by one of the imaging devices (32A or 32B) are removed from the posture sets of the imaging devices 32A and 32B, thereby adjusting (reducing) the number of postures applied to the calibration by the calibration execution unit 29-1. As for the priority when removing postures, for example, a posture located in the same small space as an added common posture may be removed preferentially.
  • As described above, in the second embodiment, the common posture extraction unit 201 extracts a plurality of postures that can be used in common by the plurality of imaging devices 32A and 32B, and the calibration execution unit 29-1 executes the calibration using the extracted postures. Therefore, even when the imaging unit is configured with a plurality of imaging devices 32A and 32B, the calibration can be performed while reducing the man-hours required for teaching the posture set of the robot 30 and for the calibration itself.
  • In the present embodiment, the common posture extraction unit 201 extracts the common postures from the posture sets that were generated by the posture set generation unit 23-1 for each of the imaging devices 32A and 32B and then extracted by the non-physical interference posture extraction unit 25-1 and the non-shielding posture extraction unit 26-1; however, the present invention is not limited to this. For example, the common posture extraction unit 201 may extract common postures from the posture sets generated for each of the imaging devices 32A and 32B by the posture set generation unit 23-1, from the posture sets extracted for each of the imaging devices 32A and 32B by the non-physical interference posture extraction unit 25-1, or from the posture sets extracted for each of the imaging devices 32A and 32B by the non-shielding posture extraction unit 26-1.
  • Further, in the present embodiment, the common posture extraction unit 201 controls the robot 30 on the simulator to determine which postures can be used in common; however, the determination may instead be made only from the teaching position and teaching posture of the marker 31. In some cases, an improvement in accuracy can be expected by increasing the number of postures used during calibration without setting an upper limit on the number of postures included in the posture set of each imaging device 32A, 32B; in that case, it is not necessary to predetermine the upper limit of the number of postures.
  • The present invention is not limited to the above-described embodiments and includes various modifications. For example, the above embodiments have been described in detail for easy understanding of the present invention, but the present invention is not necessarily limited to having all the configurations described. Part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.

Abstract

This calibration device is provided with: an imaging unit that captures an image from a predetermined position; a marker attached to a robot; a posture collection generating unit that generates a posture collection including a plurality of postures of the robot; a non-blocking-posture extracting unit that extracts from the posture collection a plurality of postures that do not block the marker; and a calibration executing unit that estimates coordinate transformation parameters for transforming the coordinate system of the robot and the coordinate system of the imaging unit on the basis of the extracted plurality of postures.

Description

Calibration device for robot control
 The present invention relates to a calibration device for robot control.
 To address the labor shortage of recent years, there is a growing need to automate, with autonomous robots, various kinds of work that handle some workpiece and have conventionally been performed by people, such as picking work in the logistics field and assembly work in the industrial field. To realize this automation and have the robot perform appropriate autonomous work, an imaging unit is required for recognizing the surrounding environment, such as the workpiece, with high accuracy.
 For the robot to recognize the surrounding environment based on the images captured by the imaging unit and work autonomously according to the recognition result, calibration must be performed to determine the relative placement (position and posture) of the imaging unit with respect to the robot. One calibration method is, for example, to attach a target marker of known shape (hereinafter also referred to as a "marker") to the robot, photograph the robot in various postures with the imaging unit, and estimate the relative position and posture between the robot and the imaging unit using a plurality of sets of data that associate the captured images with the corresponding robot postures.
 However, this calibration method raises a concern about ballooning man-hours. For example, teaching the plurality of robot postures required for calibration (hereinafter also referred to as a "posture set") requires trial and error by an expert according to the surrounding environment at the site where the robot system is introduced, so calibration takes many man-hours. In addition, when the robot performs irregular work such as picking, the arrangement of the imaging unit may be changed according to the articles present in the surrounding environment, and every time the arrangement is actually changed, the posture set of the robot must be taught again. The man-hours required for calibration (hereinafter also referred to as "calibration man-hours") may therefore become enormous. Against this background, mechanisms for reducing calibration man-hours have attracted increasing attention in recent years.
 Regarding calibration for robot control, Patent Document 1, for example, describes, as a means for "easily performing calibration", a robot control system including: a measurement unit that measures, based on an image captured by an imaging unit, the three-dimensional coordinates of an arbitrary object existing within the field of view of the imaging unit; a command generation unit that generates a command for positioning the action part of the robot according to a pre-calculated correspondence relationship between the measured three-dimensional coordinates and the position and posture of the action part of the robot; a calibration execution unit that executes calibration for calculating the correspondence relationship; and a setting reception unit that receives the setting of a calibration area, which is the area in which a reference object associated with the action part of the robot is to be placed during the calibration.
 That is, in the technique described in Patent Document 1, a space (calibration area) in which the marker is moved during calibration is defined from information such as the robot information captured by the imaging unit, marker placements are automatically generated within that space, and the posture set is taught, thereby reducing calibration man-hours. According to the technique described in Patent Document 1, the marker is moved within the calibration area whose setting is received by the setting reception unit, so the calibration can be executed while reducing the man-hours required for teaching the posture set of the robot.
Patent Document 1: JP 2019-217571 A
 However, the technique described in Patent Document 1 does not take into account the shielding of the marker (hereinafter also referred to as "occlusion") that may occur when the robot is moved to various postures. This leads to the following problems.
 In general, when something other than the robot is present around the robot or around the imaging unit, or when a part attached to the robot other than the marker is interposed between the marker and the imaging unit, the marker may be occluded as seen from the imaging unit. Moreover, defining a movement space for the marker in which no occlusion occurs and teaching the posture set of the robot accordingly requires an enormous number of man-hours.
 An object of the present invention is to provide a calibration device for robot control that teaches the posture set of a robot in consideration of the shielding of the marker and that can reduce the man-hours required for teaching the posture set of the robot.
 In order to solve the above problems, for example, the configurations described in the claims are adopted.
 The present application includes a plurality of means for solving the above problems. One of them is a calibration device for robot control comprising: an imaging unit that captures images from a predetermined position; a marker that is attached to a robot and is displaced according to the motion of the robot; a posture set generation unit that generates a posture set including a plurality of postures of the robot for enabling the imaging unit to image the marker; a non-shielding posture extraction unit that extracts, from the posture set generated by the posture set generation unit, a plurality of postures in which the marker is not shielded; and a calibration execution unit that estimates, based on the plurality of postures extracted by the non-shielding posture extraction unit, coordinate transformation parameters for transforming between the coordinate system of the robot and the coordinate system of the imaging unit.
 According to the present invention, the posture set of the robot can be taught in consideration of the shielding of the marker, and the man-hours required for teaching the posture set of the robot can be reduced.
 Problems, configurations, and effects other than those described above will be clarified by the following description of the embodiments.
FIG. 1 is a diagram showing a configuration example of a calibration device for robot control according to the first embodiment.
FIG. 2 is a block diagram showing a configuration example of the control device in the present embodiment.
FIG. 3 is a diagram showing the coordinate systems and coordinate transformations of the real device in the present embodiment.
FIG. 4 is a functional block diagram of the posture set generation unit.
FIG. 5 is a diagram showing a list of parameters acquired by the parameter acquisition unit.
FIG. 6 is a flowchart showing a parameter acquisition procedure performed by the parameter acquisition unit.
FIG. 7 is a flowchart showing a procedure for generating and saving a posture set.
FIG. 8 is a diagram explaining a teaching space generation method performed by the teaching space generation unit.
FIG. 9 is a diagram explaining a teaching space division method performed by the teaching space generation unit.
FIG. 10 is a diagram explaining a teaching position generation method performed by the teaching position generation unit.
FIG. 11 is a diagram showing how the marker looks when placed in its initial posture at the teaching position of each small space.
FIG. 12 is a diagram showing the teaching postures of the marker generated by the teaching posture generation unit.
FIG. 13 is a diagram showing an example of the simulator constructed by the simulator construction unit.
FIG. 14 is a flowchart showing the processing procedure of the non-physical interference posture extraction unit.
FIG. 15 is a top view of the posture of the robot when it actually performs picking work.
FIG. 16 is a side view of the posture of the robot when it actually performs picking work.
FIG. 17 is a flowchart showing the processing procedure of the non-shielding posture extraction unit.
FIG. 18 is a schematic diagram showing an operation example of the non-shielding posture extraction unit.
FIG. 19 is a functional block diagram of the posture set evaluation unit.
FIG. 20 is a flowchart showing the processing procedure of the calibration execution unit.
FIG. 21 is a diagram showing a configuration example of a calibration device for robot control according to the second embodiment.
 Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In this specification and the drawings, elements having substantially the same function or configuration are denoted by the same reference numerals, and overlapping descriptions are omitted.
 <First Embodiment>
 FIG. 1 is a diagram showing a configuration example of a calibration device for robot control according to the first embodiment. In this embodiment, picking work of moving a workpiece to a predetermined position is assumed as an example of the work performed by the robot. As an example of the robot to be controlled, a robot arm that is an articulated robot (hereinafter also simply referred to as a "robot") is assumed. As an example of the marker attached to the robot, a calibration board in which a pattern of known dimensions is printed on a flat jig is assumed. As an example of the imaging unit, a monocular camera is assumed. As examples of the environmental objects installed around the robot and around the imaging unit, a wall and a conveyor are assumed. However, the content assumed in this embodiment is merely an example, and the configuration of the calibration device for robot control is not limited to the example assumed here.
 As shown in FIG. 1, the calibration device 2 includes a control device 1, a real device 2A, and a calibration control device 2B.
 The control device 1 centrally controls the entire calibration device 2. The control device 1 also controls the motion of the robot 30 and the operation of the imaging unit 32 in the real device 2A. Although the control device 1 and the calibration control device 2B are shown separately in FIG. 1, the functions of the calibration control device 2B can be realized by the computer hardware resources of the control device 1.
 The real device 2A has a robot 30 constituted by a robot arm, a marker 31 attached to the robot 30, and an imaging unit 32 that measures the work space of the robot 30 and its surroundings. The real device 2A also has a wall 33A and a conveyor 33B as examples of environmental objects 33 installed around the robot 30 and around the imaging unit 32.
 The calibration control device 2B includes prior knowledge 21, generation parameters 22, a posture set generation unit 23 that generates a posture set of the robot 30 using the prior knowledge 21 and the generation parameters 22, a simulator construction unit 24 that constructs a three-dimensional simulator (hereinafter also simply referred to as a "simulator") using the prior knowledge 21, a non-physical interference posture extraction unit 25 that extracts, from the posture set generated by the posture set generation unit 23, a plurality of postures (a posture set) in which neither the robot 30 nor the marker 31 physically interferes with any part of the real device 2A, a non-shielding posture extraction unit 26 that extracts, from the posture set extracted by the non-physical interference posture extraction unit 25, a plurality of postures (a posture set) in which the marker 31 is not shielded, a posture set evaluation unit 27 that evaluates the posture set extracted by the non-shielding posture extraction unit 26 from the viewpoint of calibration accuracy, a parameter update unit 28 that updates the generation parameters 22 when the evaluation by the posture set evaluation unit 27 is low, and a calibration execution unit 29 that executes calibration when the evaluation by the posture set evaluation unit 27 is high. The posture set of the robot 30 is a set of a plurality of postures (posture data) of the robot 30.
 The prior knowledge 21 is a database containing design information including the shape and other properties of the real device 2A, information that can specify the space in which the robot 30 works (hereinafter also referred to as the "work space"), evaluation values of the imaging unit 32, and parameter information such as the thresholds used by each functional unit. The design information of the real device 2A is information including the shape, specifications, and arrangement of the real device 2A, such as its three-dimensional model. The evaluation values of the imaging unit 32 are evaluation values obtained in inspections performed at acceptance or shipment of the imaging unit 32, and represent the actual performance of the imaging unit 32 relative to its product specifications and the degree of image distortion. The generation parameters 22 are a database containing parameters that determine the number and density of the postures included in the posture set of the robot 30.
 The posture set generation unit 23 receives the prior knowledge 21 and the generation parameters 22 as input, generates a posture set of the robot 30, and stores the generated posture set in the control device 1. The postures of the robot 30 included in the posture set generated by the posture set generation unit 23 are postures for enabling the imaging unit 32 to image (photograph) the marker 31. The simulator construction unit 24 uses the prior knowledge 21 to construct a three-dimensional simulator that makes it possible to virtually operate the robot 30 and perform imaging with the imaging unit 32. The non-physical interference posture extraction unit 25 reads the posture set generated by the posture set generation unit 23 from the control device 1 and, by moving the robot to each posture on the three-dimensional simulator constructed by the simulator construction unit 24, extracts a plurality of postures in which neither the robot 30 nor the marker 31 physically interferes with the robot 30 or the environmental objects 33. The non-physical interference posture extraction unit 25 stores the extracted postures in the control device 1 as a posture set free of physical interference. The non-shielding posture extraction unit 26 reads the posture set extracted by the non-physical interference posture extraction unit 25 from the control device 1 and, by moving the robot to each posture on the three-dimensional simulator constructed by the simulator construction unit 24, extracts a plurality of postures in which no shielding of the marker 31 occurs between the imaging unit 32 and the marker 31. The non-shielding posture extraction unit 26 stores the extracted postures in the control device 1 as a posture set. In this embodiment, the storage destination of the posture set generated by the posture set generation unit 23 and of the posture sets extracted by the non-physical interference posture extraction unit 25 and the non-shielding posture extraction unit 26 is the control device 1; however, the storage destination is not limited to the control device 1 and may be, for example, a storage unit provided in the calibration control device 2B.
 The posture set evaluation unit 27 reads the posture set extracted by the non-shielding posture extraction unit 26 from the control device 1, determines whether the estimation accuracy of a calibration using that posture set is equal to or higher than a preset predetermined value, and evaluates the posture set based on the determination result. The parameter update unit 28 updates the generation parameters 22 when the posture set evaluation unit 27 determines that calibration with an accuracy equal to or higher than the predetermined value cannot be expected. The calibration execution unit 29 controls the robot 30 and the imaging unit 32 based on the posture set (the plurality of postures) for which the posture set evaluation unit 27 has determined that calibration with an accuracy equal to or higher than the predetermined value can be expected, and estimates the coordinate transformation parameters for transforming between the coordinate system of the robot 30 and the coordinate system of the imaging unit 32.
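 To summarize the flow between these functional units, the generate, filter, evaluate, and feedback loop might be organized as in the following Python sketch; each callable stands in for the corresponding functional unit, and the interfaces are assumptions for illustration rather than the patent's API.

    def run_calibration_pipeline(generate, filter_interference, filter_occlusion,
                                 evaluate, execute, update_params, gen_params,
                                 max_iterations=10):
        # Illustrative control flow of the calibration control device 2B.
        for _ in range(max_iterations):
            poses = generate(gen_params)            # posture set generation unit 23
            poses = filter_interference(poses)      # non-physical interference posture extraction unit 25
            poses = filter_occlusion(poses)         # non-shielding posture extraction unit 26
            if evaluate(poses):                     # posture set evaluation unit 27
                return execute(poses)               # calibration execution unit 29
            gen_params = update_params(gen_params)  # parameter update unit 28
        raise RuntimeError("expected calibration accuracy not reached")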
 Hereinafter, the control device 1, the real device 2A, the prior knowledge 21, the posture set generation unit 23, the simulator construction unit 24, the non-physical interference posture extraction unit 25, the non-shielding posture extraction unit 26, the posture set evaluation unit 27, the parameter update unit 28, and the calibration execution unit 29 will be described in detail.
 (Control device 1)
 FIG. 2 is a block diagram showing a configuration example of the control device 1 in the present embodiment.
 As shown in FIG. 2, the control device 1 has, as computer hardware resources for centrally controlling the entire calibration device 2, a CPU 11, a bus 12 for transmitting commands of the CPU 11, a ROM 13, a RAM 14, a storage device 15, a network I/F (I/F is an abbreviation for interface; the same applies hereinafter) 16, an imaging I/F 17 for connecting the imaging unit 32, a screen display I/F 18 for screen output, and an input I/F 19 for external input. In other words, the control device 1 can be configured by a general computer device.
 The storage device 15 stores a program 15A for executing each function, an OS (operating system) 15B, a three-dimensional model 15C, and parameters 15D such as databases. A robot control device 16A is connected to the network I/F 16. The robot control device 16A is a device that controls and operates the robot 30.
 (Real device 2A)
 In the present embodiment, the robot 30 is a picking robot that performs picking work. In general, a picking robot is configured as an articulated robot. However, the robot 30 may be of any type as long as it has a plurality of joints. The robot 30 may also be installed on a single-axis slider; by increasing the degrees of freedom of movement of the robot 30 with this single-axis slider, work can be performed over a wide space.
 As the marker 31, a calibration board in which a pattern of known dimensions is printed on a flat jig is used. When such a marker 31 is used, the position of the marker 31 in three-dimensional space can be specified by analyzing the data measured by the imaging unit 32. However, as long as the three-dimensional position of the marker 31 can be specified from the data measured by the imaging unit 32, a spherical jig, a jig with a special shape, or the like can also be used as the marker 31.
 The marker 31 is attached to the arm tip of the robot 30, which is a robot arm. However, the attachment position of the marker 31 is not limited to the arm tip and may be anywhere on the robot arm as long as it satisfies the condition that the marker 31 is displaced according to the motion of the robot 30.
 In the present embodiment, a monocular camera is used as the imaging unit 32. However, the imaging unit 32 is not limited to a monocular camera and may be configured by, for example, a ToF (Time of Flight) camera, a stereo camera, or the like. In other words, the data obtained by the imaging unit 32 are data, such as images or point clouds, from which the three-dimensional position of the marker 31 can be specified. The imaging unit 32 photographs the work space of the robot 30 and the like from a predetermined position. The mounting position of the imaging unit 32 can be set at an arbitrary location, such as the ceiling or a wall of the building in which the robot 30 works.
 Here, the coordinate systems and coordinate transformations of the real device 2A will be described.
 FIG. 3 is a diagram showing the coordinate systems and coordinate transformations of the real device 2A in the present embodiment.
 The coordinate systems of the real device 2A include a base coordinate system C1 whose origin is the arm base of the robot 30, an arm coordinate system C2 whose origin is the arm tip of the robot 30, a marker coordinate system C3 whose origin is the center of the marker 31, and a camera coordinate system C4 whose origin is the optical center of the imaging unit 32. Each of these coordinate systems is a three-dimensional coordinate system.
 The coordinate transformations of the real device 2A include a base-camera conversion matrix M1 representing the coordinate transformation between the base coordinate system C1 and the camera coordinate system C4, a marker-camera conversion matrix M2 representing the coordinate transformation between the marker coordinate system C3 and the camera coordinate system C4, an arm-base conversion matrix M3 representing the coordinate transformation between the arm coordinate system C2 and the base coordinate system C1, and an arm-marker conversion matrix M4 representing the coordinate transformation between the arm coordinate system C2 and the marker coordinate system C3. Of these, the base coordinate system C1 corresponds to the coordinate system of the robot 30, and the camera coordinate system C4 corresponds to the coordinate system of the imaging unit 32. The base-camera conversion matrix M1 corresponds to the coordinate transformation parameters for transforming between the coordinate system of the robot 30 and the coordinate system of the imaging unit 32. As an example of a coordinate transformation, when the rotation matrix of the base-camera conversion matrix M1 is Rca and its translation vector is tca, the coordinates C(Xc, Yc, Zc) in the camera coordinate system C4 can be transformed into the point P(Xr, Yr, Zr) in the base coordinate system C1 as shown in the following equation.
\[
\begin{pmatrix} X_r \\ Y_r \\ Z_r \end{pmatrix}
=
R_{ca}
\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix}
+ t_{ca}
\]
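 In code, the same conversion of a camera-frame point into the base frame can be written as a one-line numpy operation; the values below are a purely illustrative example, not design values from the embodiment.

    import numpy as np

    def camera_to_base(p_cam, R_ca, t_ca):
        # Convert a point from the camera coordinate system C4 to the base coordinate system C1.
        return R_ca @ np.asarray(p_cam) + np.asarray(t_ca)

    # Example: identity rotation and a camera origin 1 m above the robot base along z.
    R_ca = np.eye(3)
    t_ca = np.array([0.0, 0.0, 1.0])
    print(camera_to_base([0.1, 0.2, 0.5], R_ca, t_ca))  # -> [0.1 0.2 1.5]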
 Next, a method of acquiring the coordinate transformation parameters (M1, M2, M3, M4) of the real device 2A will be described.
 The base-camera conversion matrix M1 can be obtained as one of the design values of the real device 2A. However, when the robot 30 is operated using the base-camera conversion matrix M1 obtained as a design value, an error may occur in the position of the robot 30. One possible cause of such an error is, for example, a deviation in the mounting position of the imaging unit 32 in the real device 2A. Specifically, even if the mounting position of the imaging unit 32 is designed so that the optical axis (z-axis) of the imaging unit 32 is parallel to the vertical direction, a slight positional deviation may occur during actual mounting. Calibration is performed to obtain the base-camera conversion matrix M1 accurately so that such positional deviations do not cause errors in the position of the robot 30.
 The design value of the base-camera conversion matrix M1 is included in the prior knowledge 21, so it can be obtained from the prior knowledge 21. By executing calibration, the actual value of the base-camera conversion matrix M1, which reflects the positional deviation and the like described above, can be obtained. Calibration in this embodiment means estimating the base-camera conversion matrix M1 from a plurality of sets of data consisting of the marker-camera conversion matrix M2 and the arm-base conversion matrix M3 obtained when the robot 30 is moved to certain postures. The base-camera conversion matrix M1 is estimated by computation.
 The marker-camera conversion matrix M2 can be obtained by analyzing an image obtained by photographing the marker 31 with the imaging unit 32. The arm-base conversion matrix M3 can be obtained by calculation from the encoder values of the robot 30. The joints of the robot 30 are provided with joint-driving motors (not shown) and encoders that detect the rotation angles of these motors, and the encoder values are the output values of the encoders. The arm-marker conversion matrix M4 is either unknown or obtainable as a design value from the prior knowledge 21. In this embodiment, the design value of the arm-marker conversion matrix M4 is included in the prior knowledge 21, so it can be obtained from the prior knowledge 21. Note that, in this embodiment, the origin of each coordinate system, the orientation of each coordinate system, and the coordinate transformations are set as described above, but the present invention is not limited to this example.
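 As an example of the kind of image analysis that yields M2 for a board-type marker, the following sketch uses OpenCV's chessboard detector and solvePnP; the board geometry, camera intrinsics K, and distortion coefficients are assumed inputs, and this is one common approach rather than necessarily the method used in the embodiment.

    import numpy as np
    import cv2

    def estimate_marker_camera_matrix(image, board_size, square_size, K, dist):
        # Estimate M2 (marker -> camera transform) from one image of a chessboard marker.
        found, corners = cv2.findChessboardCorners(image, board_size)
        if not found:
            return None  # corresponds to the estimation-failure branch (step S43)
        # 3D corner positions in the marker coordinate system C3 (z = 0 plane).
        objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size
        ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
        if not ok:
            return None
        M2 = np.eye(4)
        M2[:3, :3], M2[:3, 3] = cv2.Rodrigues(rvec)[0], tvec.ravel()
        return M2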
(Prior knowledge 21)
The prior knowledge 21 is a database having the following parameters (1) to (11).
(1) The design value of the base-camera transformation matrix M1.
(2) The design value of the arm-marker transformation matrix M4.
(3) A three-dimensional model including shape information of the robot 30.
(4) Specification data defining the upper limits of the joint angles of the robot 30 and the like.
(5) Shape data including the type and size of the marker 31.
(6) Specification data including information such as the camera parameters, optical blur, resolution, and angle of view of the imaging unit 32.
(7) A distortion evaluation value of the imaging unit 32.
(8) Information defining the work space of the robot 30.
(9) Design information of the real device 2A.
(10) Information defining the shape and arrangement of the environmental installation object 33.
(11) Thresholds used in the respective functional units.
In this embodiment, functions using the above prior knowledge 21 will be described, but the functions may be implemented using only part of it.
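One possible way to hold these eleven parameters in software, given purely as an illustrative sketch with hypothetical field names that the embodiment does not prescribe, is a small typed record:

```python
# Minimal sketch of a container for the prior knowledge 21.
# All field names are hypothetical and chosen only for illustration.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PriorKnowledge:
    M1_design: np.ndarray            # (1) 4x4 design value of the base-camera transform
    M4_design: np.ndarray            # (2) 4x4 design value of the arm-marker transform
    robot_model_path: str            # (3) path to the robot's 3D model
    joint_limits_deg: list[float]    # (4) upper limits of the joint angles
    marker_shape: dict               # (5) e.g. {"type": "board", "size_mm": 100}
    camera_spec: dict                # (6) camera parameters, optical blur, resolution, FOV
    distortion_eval: float           # (7) distortion evaluation value
    workspace: dict                  # (8) 3D extent of the robot's work space
    device_design: dict              # (9) design information of the real device 2A
    environment_objects: list = field(default_factory=list)  # (10) shapes and placements
    thresholds: dict = field(default_factory=dict)           # (11) per-unit thresholds
```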
(Posture set generation unit 23)
The posture set generation unit 23 generates a posture set of the robot 30 based on the parameters obtained from the prior knowledge 21 and the parameters obtained from the generation parameters 22. The posture set generation unit 23 also stores the generated posture set in the control device 1.
FIG. 4 is a functional block diagram of the posture set generation unit 23.
As shown in FIG. 4, the posture set generation unit 23 includes a parameter acquisition unit 231, a teaching space generation unit 232, a teaching position generation unit 233, a teaching posture generation unit 234, a coordinate system conversion unit 235, and a posture set storage unit 236.
The parameter acquisition unit 231 acquires the parameters necessary for generating the posture set from the prior knowledge 21 and the generation parameters 22. The teaching space generation unit 232 determines the space in which the marker 31 is placed during calibration and divides this space into a plurality of small spaces; it also assigns an index to each small space. The teaching position generation unit 233 generates, with reference to the camera coordinate system C4, the position at which the marker 31 is placed in each small space, that is, the teaching position. The teaching posture generation unit 234 generates the posture of the marker 31 at each teaching position generated by the teaching position generation unit 233, that is, the teaching posture. The coordinate system conversion unit 235 converts the set of postures of the marker 31 referenced to the camera coordinate system C4 into a set of postures of the robot 30 referenced to the base coordinate system C1, based on the design value of the base-camera transformation matrix M1 included in the prior knowledge 21. The posture set storage unit 236 stores, in the control device 1, the posture set generated by the teaching posture generation unit 234 and the posture set converted by the coordinate system conversion unit 235. The configuration of each part of the posture set generation unit 23 is described in more detail below.
FIG. 5 is a diagram showing a list of the parameters acquired by the parameter acquisition unit 231.
As shown in FIG. 5, the parameters acquired by the parameter acquisition unit 231 include the design value of the base-camera transformation matrix M1 (rotation matrix Rca, translation matrix tca), the measurement range of the imaging unit 32 (angle of view, resolution, optical blur), distortion information of the imaging unit 32 (distortion evaluation), the work space of the robot 30 (three-dimensional space information), the shape of the marker 31 (board or sphere, and size), and the resolution of the posture set in each axis direction (X1, Y1, Z1).
FIG. 6 is a flowchart showing the parameter acquisition procedure of the parameter acquisition unit 231.
First, the parameter acquisition unit 231 acquires the design value of the base-camera transformation matrix M1 from the prior knowledge 21 (step S1). At this time, the parameter acquisition unit 231 acquires the rotation matrix Rca and the translation matrix tca as the design value of the base-camera transformation matrix M1.
Next, the parameter acquisition unit 231 acquires the measurement range of the imaging unit 32 and the distortion information of the imaging unit 32 from the prior knowledge 21 (step S2). At this time, the parameter acquisition unit 231 acquires the angle of view, the resolution, and the optical blur as the measurement range of the imaging unit 32, and acquires the distortion evaluation value as the distortion information of the imaging unit 32.
Next, the parameter acquisition unit 231 acquires the information defining the work space of the robot 30 from the prior knowledge 21 (step S3). At this time, the parameter acquisition unit 231 acquires, as the information defining the work space of the robot 30, three-dimensional space information indicating that work space.
Next, the parameter acquisition unit 231 acquires the shape data of the marker 31 (step S4). At this time, the parameter acquisition unit 231 acquires, as the shape data of the marker 31, data on the type of the marker 31 (board or sphere) and its size.
Next, the parameter acquisition unit 231 acquires the resolution (X1, Y1, Z1) of the posture set in each axis direction from the generation parameters 22 (step S5). Specifically, with reference to the camera coordinate system C4, the number of postures to be created along each axis (X1, Y1, Z1) is acquired as a parameter that determines the resolution in each axis direction. In this case, the greater the number of postures created along each axis (the larger the values of X1, Y1, and Z1), the higher the resolution in that axis direction. In other words, the parameters X1, Y1, and Z1 correspond to parameters that determine the number and density of the postures included in the posture set.
FIG. 7 is a flowchart showing the procedure in which the teaching space generation unit 232, the teaching position generation unit 233, the teaching posture generation unit 234, and the coordinate system conversion unit 235 generate a posture set and the posture set storage unit 236 stores this posture set.
As shown in FIG. 7, first, the teaching space generation unit 232 generates the teaching space in which the marker 31 is to be placed (step S6). The teaching space generated by the teaching space generation unit 232 is a three-dimensional space referenced to the camera coordinate system C4.
Next, the teaching space generation unit 232 divides the generated teaching space into a plurality of small spaces and assigns an index to each small space (step S7). At this time, the teaching space generation unit 232 divides the teaching space into X1×Y1×Z1 small spaces. For example, if X1=3, Y1=3, and Z1=3, the teaching space generation unit 232 splits the teaching space into three parts along each axis, dividing it into a total of 27 small spaces. The teaching space generation unit 232 also assigns an index (X, Y, Z) to each small space.
Next, the teaching position generation unit 233 sets the position at which the marker 31 is placed within each small space as a teaching position (step S8). As a result, teaching positions of the marker 31 are generated for as many small spaces as there are.
Next, the teaching posture generation unit 234 initializes the teaching posture of the marker 31 at each teaching position so that the marker 31 directly faces the imaging unit 32 (step S9). In the following description, this initially set teaching posture of the marker 31 is referred to as the initial posture.
Next, the teaching posture generation unit 234 generates the teaching posture of the marker 31 by rotating the posture of the marker 31 from the initial posture about each axis direction of the camera coordinate system C4 by an angle between 0 degrees and ±θ (threshold) degrees, determined either rule-based on the index assigned to each small space or at random (step S10). As a result, teaching postures of the marker 31 are generated for as many small spaces as there are.
Next, the coordinate system conversion unit 235 converts the coordinates of the teaching postures of the marker 31 referenced to the camera coordinate system C4, which were generated by the teaching posture generation unit 234 in step S10, into coordinates in the base coordinate system C1 using the design value of the base-camera transformation matrix M1 (step S11). As a result, the teaching postures of the marker 31 referenced to the camera coordinate system C4 are converted into teaching postures of the robot 30 referenced to the base coordinate system C1. Teaching postures of the robot 30 are likewise generated for as many small spaces as there are.
Next, the posture set storage unit 236 stores, in the control device 1, the posture set of the marker 31 (the plurality of teaching postures) generated by the teaching posture generation unit 234 in step S10 and the posture set of the robot 30 (the plurality of teaching postures) generated by the coordinate system conversion unit 235 in step S11 (step S12). As a result, the control device 1 holds both a posture set of the marker 31 referenced to the camera coordinate system C4 and a posture set of the robot 30 referenced to the base coordinate system C1.
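The frame change in step S11 can be sketched as follows, assuming 4x4 homogeneous transforms and assuming, as one possible convention only, that M1 maps base-frame coordinates into the camera frame, so that its inverse carries a camera-frame marker pose into the base frame:

```python
import numpy as np

def camera_pose_to_base(pose_cam: np.ndarray, M1_design: np.ndarray) -> np.ndarray:
    """Convert a 4x4 marker pose expressed in the camera coordinate system C4
    into the base coordinate system C1 using the design value of M1.
    The direction convention of M1 is an assumption of this sketch."""
    return np.linalg.inv(M1_design) @ pose_cam

# Applied to the whole posture set generated in step S10:
# poses_base = [camera_pose_to_base(p, M1_design) for p in poses_cam]
```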
FIG. 8 is a diagram explaining the teaching space generation method of the teaching space generation unit 232. The illustrated generation method is applied to step S6 of FIG. 7 described above.
First, the teaching space is set to the common region where the space in which the robot 30 works and the space that the imaging unit 32 can capture (measure) with high accuracy overlap.
In FIG. 8, the work space of the robot 30 is determined with reference to the camera coordinate system C4. As a method of determining the work space, for example, when the work performed by the robot 30 is a picking task, it can be defined in advance as the space in which the marker 31 is located while the robot 30 grips a workpiece (not shown). The reason why the work space of the robot 30 is used for the teaching space T1 is as follows. When the robot 30 is moved, movement errors caused by manufacturing errors of the robot 30 and the like occur systematically; however, how these movement errors appear differs depending on the position in space. Therefore, by making the postures taken by the robot 30 during calibration similar to the postures taken during actual work, data whose movement errors appear in the same way as during actual work can be used for calibration.
The space that the imaging unit 32 can capture is determined by the angle of view of the imaging unit 32, but the space that the imaging unit 32 can capture with high accuracy is limited to a narrower range than the capturable space. In FIG. 8, as the space that the imaging unit 32 can capture with high accuracy, the extent of the space in the Z-axis direction of the camera coordinate system C4 is specified so that the optical blur value of the imaging unit 32 is at or below a threshold. In the example of FIG. 8, the extent of the space in the Z-axis direction is specified as the range in which the optical blur value is 1 px or less. In addition, the extent of the space in the X-axis and Y-axis directions of the camera coordinate system C4 is specified so that, when distortion correction is applied to an image captured by the imaging unit 32 (hereinafter also referred to as a "captured image"), the luminance difference between the image before and after distortion correction falls within a threshold. This makes it possible to control the placement of the marker 31 so that it lies in a space where distortion of the captured image is unlikely to occur, which reduces the error of the calibration described later.
In the example of FIG. 8, the extent of the space in the Z-axis direction of the camera coordinate system C4 is defined by the work space of the robot 30. This is because, when the extents in the Z-axis direction are compared, the work space of the robot 30 is narrower than the space in which the optical blur value is 1 px or less. On the other hand, the extent of the space in the X-axis and Y-axis directions of the camera coordinate system C4 is defined by the space that the imaging unit 32 can capture with high accuracy. This is because, when the extents in the X-axis and Y-axis directions are compared, the space in which the luminance difference from the distortion evaluation is at or below the threshold, that is, the space that the imaging unit 32 can capture with high accuracy, is narrower than the work space of the robot 30.
The teaching space T1 is generated by the above method.
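One way this intersection could be computed, assuming for illustration that both the work space and the accurately-measurable space are approximated by axis-aligned boxes in the camera coordinate system C4 (the box representation, the numbers, and the function names are assumptions of this sketch), is:

```python
import numpy as np

def intersect_boxes(box_a, box_b):
    """Each box is (min_xyz, max_xyz) in the camera frame C4; returns their overlap."""
    lo = np.maximum(box_a[0], box_b[0])
    hi = np.minimum(box_a[1], box_b[1])
    if np.any(lo >= hi):
        raise ValueError("work space and accurate-imaging space do not overlap")
    return lo, hi

# Hypothetical example values in metres:
workspace      = (np.array([-0.4, -0.30, 0.5]), np.array([0.4, 0.30, 1.2]))  # robot work space
accurate_space = (np.array([-0.3, -0.25, 0.3]), np.array([0.3, 0.25, 1.5]))  # blur <= 1 px, low distortion
teaching_space_T1 = intersect_boxes(workspace, accurate_space)
```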
FIG. 9 is a diagram explaining how the teaching space generation unit 232 divides the teaching space. The illustrated division method is applied to step S7 of FIG. 7 described above.
When dividing the teaching space, the parameters X1, Y1, and Z1 given as the resolution of the posture set in each axis direction are used. Specifically, with reference to the camera coordinate system C4, the teaching space T1 generated in step S6 is divided into X1 parts in the x-axis direction, Y1 parts in the y-axis direction, and Z1 parts in the z-axis direction. In the example of FIG. 9, the teaching space T1 is divided with X1=3, Y1=3, and Z1=3, so the teaching space T1 is divided into a total of 27 small spaces. After the teaching space T1 is divided into a plurality of small spaces in this way, an index (X, Y, Z) is assigned to each small space. The indices (X, Y, Z) are assigned based on the coordinate axes. For example, for the x-axis, indices X=1, X=2, and X=3 are assigned from the right side toward the left side of FIG. 9; for the y-axis, indices Y=1, Y=2, and Y=3 are assigned from the front toward the back of FIG. 9; and for the z-axis, indices Z=1, Z=2, and Z=3 are assigned from the top toward the bottom of FIG. 9. In that case, the small space located at the lower right of FIG. 9 is given the index (X, Y, Z) = (1, 1, 3).
FIG. 10 is a diagram explaining how the teaching position generation unit 233 generates teaching positions. The illustrated generation method is applied to step S8 of FIG. 7 described above.
As one example, for the small space given the index (X, Y, Z) = (1, 1, 3), the teaching position can be generated as the position obtained by moving from the center of the top face of the small space along its normal direction by half the height of the small space, that is, the center of the small space given the index (1, 1, 3). The same applies to the small spaces given the other indices.
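A minimal sketch covering the division of step S7 and the center positions of step S8 is shown below. It assumes the teaching space T1 is an axis-aligned box and uses a simple 1-based counting order for the indices; the particular index directions of FIG. 9 (right to left, front to back, top to bottom) are not reproduced here.

```python
import numpy as np

def generate_teaching_positions(t1_min, t1_max, X1, Y1, Z1):
    """Divide the teaching space T1 into X1 x Y1 x Z1 small spaces and return
    a dict mapping each 1-based index (X, Y, Z) to the center of its small space."""
    size = (np.asarray(t1_max) - np.asarray(t1_min)) / np.array([X1, Y1, Z1])
    positions = {}
    for ix in range(X1):
        for iy in range(Y1):
            for iz in range(Z1):
                center = np.asarray(t1_min) + size * (np.array([ix, iy, iz]) + 0.5)
                positions[(ix + 1, iy + 1, iz + 1)] = center
    return positions

# With X1 = Y1 = Z1 = 3 this yields 27 teaching positions, one per small space.
```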
Next, the method by which the teaching posture generation unit 234 generates teaching postures will be described. The teaching postures are generated by the processing of steps S9 and S10 of FIG. 7 described above.
First, regarding the processing of step S9, the teaching posture generation unit 234 sets the initial value of the teaching posture so that the marker 31 directly faces the imaging unit 32.
FIG. 11 is a diagram showing how the marker appears when it is placed at the teaching position of each small space in the initial posture.
FIG. 11 shows how the marker 31 appears when, for each of the small spaces with indices (X, Y, Z) = (1,1,1), (1,2,1), (1,3,1), (2,1,1), (2,2,1), (2,3,1), (3,1,1), (3,2,1), and (3,3,1) among the plurality of small spaces generated by the teaching space generation unit 232, the marker 31 is provisionally placed at the teaching position generated by the teaching position generation unit 233 so as to directly face the imaging unit 32. Under the initial posture, the marker 31 is placed in every small space such that the Z axis of the camera coordinate system C4 points exactly opposite to the Z axis of the marker coordinate system C3. Therefore, the marker 31 appears the same in all small spaces.
Next, the processing of step S10 will be described.
The teaching posture generation unit 234 generates the teaching postures by rotating the posture of the marker 31 about each axis direction of the marker coordinate system C3 from the initial posture set in step S9.
FIG. 12 is a diagram showing the teaching postures of the marker 31 generated by the teaching posture generation unit 234. The indices (X, Y, Z) of the small spaces in FIG. 12 correspond to the indices (X, Y, Z) of the small spaces in FIG. 11 described above. As an example, the teaching posture of the marker 31 in the small space with index (3,3,1) is generated by rotating the marker 31 about the x-axis of the marker coordinate system C3 from the initial posture. The magnitude of the rotation angle about each axis is between 0 degrees and ±θ (threshold) degrees and is determined either rule-based on the index or at random. In this embodiment, in the small spaces with indices (1,1,1), (1,3,1), and (3,1,1), the same posture as the initial posture is used as the teaching posture, and in the small spaces with the other indices, postures rotated about an axis are used as the teaching postures. In this way, the posture set, that is, the plurality of postures generated by the teaching posture generation unit 234, is composed of a combination of postures in which the marker 31 is rotated to face various directions and postures in which the orientation of the marker 31 is the same and only the translation differs. This improves the accuracy of the calibration estimation compared with the case where the postures of the marker 31 are all changed at random (that is, where the orientation of the marker 31 differs for every index).
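A minimal sketch of step S10 is shown below. The initial orientation (marker facing the camera) is taken as the identity rotation, the set of indices that keep the initial posture mirrors only the example of FIG. 12, and the random choice of angles within ±θ is just one of the two options the embodiment allows; all names are illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def generate_teaching_pose(index, position, theta_deg, rng=None):
    """Return a 4x4 teaching pose of the marker for one small space.
    `index` is the small-space index (X, Y, Z), `position` the teaching position."""
    keep_initial = index in {(1, 1, 1), (1, 3, 1), (3, 1, 1)}   # example rule from FIG. 12
    if keep_initial:
        rot = np.eye(3)                                          # same as the initial posture
    else:
        rng = rng or np.random.default_rng(0)
        angles = rng.uniform(-theta_deg, theta_deg, size=3)      # within +/- theta per axis
        rot = R.from_euler("xyz", angles, degrees=True).as_matrix()
    pose = np.eye(4)
    pose[:3, :3] = rot
    pose[:3, 3] = position
    return pose
```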
Through the processing of the respective units (231 to 236) of the posture set generation unit 23 described above, a posture set can be generated automatically within a teaching space that satisfies both the work space of the robot 30 and the space that the imaging unit 32 can capture with high accuracy.
Note that, in this embodiment, the rotation of the base-camera transformation matrix M1 is acquired in the form of the rotation matrix Rca, but a quaternion or the like may be used instead. The shape of the teaching space T1 generated by the teaching space generation unit 232 is not limited to the shape shown in FIG. 8; for example, when the work space of the robot 30 is narrower than the space that the imaging unit 32 can capture with high accuracy, the teaching space T1 takes a shape such as a rectangular parallelepiped. If the distortion evaluation value of the imaging unit 32 used in this embodiment cannot be obtained as a parameter for generating the teaching space T1, it may be decided to use only the angle of view, or only a predetermined part of the angle of view, without using the distortion evaluation value. As another method of determining the teaching space, the user may specify the shape and size of the teaching space in advance. In this embodiment, the teaching space T1 is divided with reference to the camera coordinate system C4, but it may be divided with reference to an arbitrary coordinate system. The teaching position generated by the teaching position generation unit 233 may be set to an arbitrary position within the small space, for example, a vertex of the small space. In the teaching posture generation unit 234, in order to detect the marker 31 with high accuracy, the teaching postures are generated with the rotation angle of the marker 31 about each axis kept within a threshold; depending on the type of the marker 31, however, the threshold need not be set. Furthermore, in this embodiment the posture set storage unit 236 stores the posture set in the control device 1, but the posture set may instead be held in a memory (not shown), and subsequent processing may read the posture set from that memory.
Next, the processing of the simulator construction unit 24 will be described in detail.
The simulator construction unit 24 has a function of constructing a simulator based on the prior knowledge 21, thereby virtually operating the robot 30 and generating images captured by the imaging unit 32.
FIG. 13 is a diagram showing an example of a simulator constructed by the simulator construction unit 24.
In FIG. 13, information such as a three-dimensional model including the design information of the real device 2A, specifications, and arrangement is acquired from the prior knowledge 21, and the robot 30, the marker 31, the imaging unit 32, and the environmental installation object 33 are virtually placed on the simulator based on the acquired information.
As for the robot 30, the robot 30 can be operated virtually by using information acquired from the prior knowledge 21, namely the length of each link obtained as the shape information of the robot 30 and the joint information obtained from its specifications. By operating the robot 30 virtually, the trajectory planning algorithm used on the actual machine can also be applied.
As for the imaging unit 32, by using specification data acquired from the prior knowledge 21, such as the angle of view, resolution, and camera parameters of the imaging unit 32, the value of each pixel obtained when the imaging unit 32 captures an image on the simulator can be generated. In addition, the color information of the robot 30 and the environmental installation object 33 can easily be changed on the simulator. For example, the RGB values representing the colors of the robot 30 and the environmental installation object 33 can be changed to (255, 0, 0) or the like, and an image captured on the simulator can be generated accordingly.
Through the processing of the simulator construction unit 24 described above, it is possible to virtually generate the image captured by the imaging unit 32 when the marker 31 is moved to a teaching position based on the posture set. Furthermore, when a three-dimensional model of the robot 30 is available, trajectory planning and control of the robot 30 can be performed virtually, and the image captured by the imaging unit 32 when the robot 30 operates can be generated virtually from the design arrangement information in the prior knowledge 21.
Note that, in this embodiment, the arrangement information including the design value of the base-camera transformation matrix M1 is acquired from the prior knowledge 21 and objects are placed on the simulator based on the acquired arrangement information, but the present invention is not limited to this. For example, rough values indicating the size and arrangement of the objects of the real device 2A may be estimated from images captured by the imaging unit 32, and the objects may be placed on the simulator based on the estimation results.
In this embodiment, when the real device 2A is virtually placed on the simulator, its shape is reproduced using the three-dimensional model of the real device 2A; when no three-dimensional model is available, the simulator may hold only the design value information of each coordinate system.
In addition, when the imaging unit 32 is a stereo camera or a ToF camera capable of three-dimensional measurement, or when a three-dimensional shape can be obtained by applying machine learning to images from a monocular camera, the three-dimensional shape of the real device 2A acquired by the imaging unit 32 may be used for constructing the simulator.
Next, the processing of the non-physical interference posture extraction unit 25 will be described in detail.
FIG. 14 is a flowchart showing the processing procedure of the non-physical interference posture extraction unit 25.
The non-physical interference posture extraction unit 25 extracts, from the posture set generated by the posture set generation unit 23, the postures in which the parts of the real device 2A do not physically interfere with each other (non-physical-interference postures). The non-physical interference posture extraction unit 25 extracts these non-interfering postures by utilizing the simulator constructed by the simulator construction unit 24. Physical interference means that, in the real device 2A, the robot 30 or the marker 31 comes into contact with the robot 30, the imaging unit 32, the environmental installation object 33, or the like.
First, the non-physical interference posture extraction unit 25 loads the posture set generated by the posture set generation unit 23 onto the memory of the control device 1 (step S13).
Next, the non-physical interference posture extraction unit 25 acquires the number N1 of postures included in the posture set (step S14).
Next, the non-physical interference posture extraction unit 25 sets the variable i to its initial value (i=1) (step S15).
Next, the non-physical interference posture extraction unit 25 instructs trajectory planning so that the robot 30 moves to the i-th posture on the simulator (step S16).
Next, the non-physical interference posture extraction unit 25 determines whether the trajectory planning instructed in step S16 has succeeded (step S17). Specifically, when the robot 30 is operated according to the instructed trajectory plan, the non-physical interference posture extraction unit 25 determines that the trajectory planning has succeeded if the robot 30 and the marker 31 do not physically interfere with any part of the real device 2A, and determines that the trajectory planning has failed if physical interference occurs.
Next, when the non-physical interference posture extraction unit 25 determines in step S17 that the trajectory planning has succeeded, it holds the i-th posture in the memory as a success pattern (step S18), and then holds success/failure information for the index of the small space in which the i-th posture was generated (step S19). When holding success information for the index of a small space, a success flag is held linked to the index of the small space in which the i-th posture was generated. Likewise, when holding failure information for the index of a small space, a failure flag is held linked to the index of the small space in which the i-th posture was generated.
On the other hand, when the non-physical interference posture extraction unit 25 determines in step S17 that the trajectory planning has failed, it skips the processing of steps S18 and S19. Here, a failed trajectory plan corresponds to the case where physical interference is determined to be present, and a successful trajectory plan corresponds to the case where physical interference is determined to be absent.
Next, the non-physical interference posture extraction unit 25 determines whether the value of the variable i satisfies i=N1 (step S20). If the value of the variable i is less than N1, the non-physical interference posture extraction unit 25 increments the value of the variable i by 1 (step S21) and returns to the processing of step S16. When the value of the variable i reaches N1, the non-physical interference posture extraction unit 25 stores the posture set of success patterns and the success/failure information for each index in the control device 1 (step S22) and ends the series of processing.
Through the processing of the non-physical interference posture extraction unit 25 described above, only the postures in which the parts of the real device 2A do not physically interfere with each other can be extracted from the posture set.
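The overall loop of steps S13 to S22 can be sketched as follows. The `simulator.plan_to()` call is hypothetical and stands in for the trajectory-planning and interference check performed on the simulator; the actual planner interface is not specified by the embodiment.

```python
def extract_non_interference_poses(poses_with_index, simulator):
    """Keep only the poses for which simulated trajectory planning succeeds
    (steps S13 to S22). `simulator.plan_to(pose)` is a hypothetical call that
    returns True when the planned motion reaches the pose without any contact
    between the robot/marker and the rest of the real device 2A."""
    success_poses = []
    index_result = {}                               # success/failure flag per small-space index
    for index, pose in poses_with_index:            # i = 1 .. N1
        ok = simulator.plan_to(pose)                # steps S16/S17
        if ok:
            success_poses.append((index, pose))     # step S18
        index_result[index] = ok                    # step S19
    return success_poses, index_result              # saved in step S22
```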
Next, an example in which the non-physical interference posture extraction unit 25 determines the success or failure of trajectory planning will be described.
FIG. 15 is a top view of the postures taken when the robot 30 actually performs a picking task.
As shown in FIG. 15, when the marker 31 is moved to destination 1, the robot 30 does not physically interfere with any part of the real device 2A. Therefore, when trajectory planning is instructed to move the marker 31 to destination 1, the trajectory planning succeeds and the posture becomes a success pattern. In contrast, when the marker 31 is moved to destination 2, the robot 30 physically interferes with the environmental installation object 33 (wall 33A), and the trajectory planning fails.
The presence or absence of physical interference can be determined, for example, by checking whether, for a trajectory candidate generated by the trajectory planning, the three-dimensional position of the robot 30 overlaps that of the environmental installation object 33 or the like when the robot 30 moves along that trajectory.
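One simple way to realize such an overlap test, shown only as a sketch, is to approximate the robot links and the environmental installation objects with axis-aligned bounding boxes sampled along the planned trajectory; the embodiment does not fix any particular collision-checking method.

```python
import numpy as np

def boxes_overlap(box_a, box_b):
    """Axis-aligned bounding-box test; each box is (min_xyz, max_xyz)."""
    return bool(np.all(np.maximum(box_a[0], box_b[0]) < np.minimum(box_a[1], box_b[1])))

def trajectory_interferes(robot_boxes_per_step, obstacle_boxes):
    """True if, at any step of the planned trajectory, any robot link box
    overlaps any environment box (e.g. wall 33A or conveyor 33B)."""
    return any(boxes_overlap(rb, ob)
               for step in robot_boxes_per_step
               for rb in step
               for ob in obstacle_boxes)
```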
FIG. 16 is a side view of the postures taken when the robot 30 actually performs a picking task.
As shown in FIG. 16, when the marker 31 is moved to destination 3, the robot 30 does not physically interfere with any part of the real device 2A. Therefore, when trajectory planning is instructed to move the marker 31 to destination 3, the trajectory planning succeeds and the posture becomes a success pattern. In contrast, when the marker 31 is moved to destination 4, the robot 30 physically interferes with the environmental installation object 33 (conveyor 33B), and the trajectory planning fails.
In this embodiment, whether the trajectory planning succeeds or fails is determined by the presence or absence of physical interference between the robot 30 and the environmental installation object 33, but the determination is not limited to this; for example, the user may add a constraint on the joint angles of the robot 30 as one of the determination conditions. Specifically, when the robot 30 is moved to a destination, the trajectory planning may be determined to have succeeded if the joint angles of the robot 30 are at or below their upper limits, and to have failed if they exceed the upper limits.
Next, the processing of the non-shielding posture extraction unit 26 will be described in detail.
FIG. 17 is a flowchart showing the processing procedure of the non-shielding posture extraction unit 26.
The non-shielding posture extraction unit 26 extracts, from the posture set generated by the posture set generation unit 23 and extracted by the non-physical interference posture extraction unit 25, the postures in which no shielding of the marker 31 occurs between the marker 31 and the imaging unit 32 (non-shielding postures). The non-shielding posture extraction unit 26 extracts the postures in which the marker 31 is not shielded by utilizing the simulator constructed by the simulator construction unit 24.
First, the non-shielding posture extraction unit 26 loads the posture set generated by the posture set generation unit 23 and extracted by the non-physical interference posture extraction unit 25 onto the memory of the control device 1 (step S23).
Next, the non-shielding posture extraction unit 26 acquires the number N2 of postures included in the posture set (step S24).
Next, the non-shielding posture extraction unit 26 sets the variable i to its initial value (i=1) (step S25).
Next, the non-shielding posture extraction unit 26 instructs trajectory planning so that the robot 30 moves to the i-th posture on the simulator (step S26).
Next, the non-shielding posture extraction unit 26 generates a virtual captured image from the imaging unit 32 on the simulator (step S27). That is, the non-shielding posture extraction unit 26 virtually generates the captured image that would be obtained by the imaging unit 32 when the robot 30 is moved to the i-th posture.
Next, the non-shielding posture extraction unit 26 estimates the marker-camera transformation matrix M2 by analyzing the generated captured image (step S28). As a method of analyzing the captured image, for example, when a pattern of a plurality of black circles with known sizes and positions is printed on the marker 31, the marker-camera transformation matrix M2 can be estimated by associating the known sizes and positions of these black circles with their sizes and positions in the virtual captured image. Estimating the marker-camera transformation matrix M2 essentially means estimating the position and orientation of the marker 31 in the three-dimensional camera coordinate system C4, that is, the three-dimensional position of the marker 31.
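This kind of known-pattern-to-image correspondence is a standard perspective-n-point problem, so one possible sketch of step S28 uses OpenCV's solvePnP; the detection of the black circles themselves, and whether the embodiment actually uses this routine, are outside the scope of this sketch.

```python
import numpy as np
import cv2

def estimate_marker_camera_transform(object_points, image_points, camera_matrix, dist_coeffs):
    """Estimate M2 (marker pose in the camera frame C4) from the known 3D dot
    positions on the marker and their detected 2D positions in the image."""
    ok, rvec, tvec = cv2.solvePnP(object_points.astype(np.float64),
                                  image_points.astype(np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None                      # estimation failed (treated as shielding in step S29)
    M2 = np.eye(4)
    M2[:3, :3], _ = cv2.Rodrigues(rvec)  # rotation part
    M2[:3, 3] = tvec.ravel()             # translation part
    return M2
```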
Next, the non-shielding posture extraction unit 26 determines whether the estimation of the marker-camera transformation matrix M2 has succeeded (step S29). When the estimation of the marker-camera transformation matrix M2 has succeeded, the non-shielding posture extraction unit 26 calculates the reliability of the estimation (step S30) and then determines whether the reliability is at or above a threshold (step S31). When the calculated reliability of the estimation is at or above the threshold, the non-shielding posture extraction unit 26 holds the i-th posture in the memory as a success pattern (step S32) and then holds success/failure information for the index of the small space in which the i-th posture was generated (step S33). When holding success information for the index of a small space, a success flag is held linked to the index of the small space in which the i-th posture was generated. Likewise, when holding failure information for the index of a small space, a failure flag is held linked to the index of the small space in which the i-th posture was generated.
On the other hand, when the non-shielding posture extraction unit 26 determines in step S29 that the estimation of the marker-camera transformation matrix M2 has failed, it skips the processing of steps S30 to S34, and when it determines in step S31 that the reliability is not at or above the threshold, it skips the processing of steps S32 and S33. Here, the case where the estimation of the marker-camera transformation matrix M2 fails, or where the reliability of the estimation is below the threshold, corresponds to the case where shielding of the marker 31 is determined to be present. Conversely, the case where the estimation of the marker-camera transformation matrix M2 succeeds, or where the reliability of the estimation is at or above the threshold, corresponds to the case where shielding of the marker 31 is determined to be absent.
Next, the non-shielding posture extraction unit 26 determines whether the value of the variable i satisfies i=N2 (step S34). If the value of the variable i is less than N2, the non-shielding posture extraction unit 26 increments the value of the variable i by 1 (step S35) and returns to the processing of step S26. When the value of the variable i reaches N2, the non-shielding posture extraction unit 26 stores the posture set of success patterns and the success/failure information for each index in the memory of the control device 1 (step S36) and ends the series of processing.
Through the processing of the non-shielding posture extraction unit 26 described above, only the postures in which no shielding of the marker 31 occurs between the marker 31 and the imaging unit 32 can be extracted from the posture set.
Next, the reliability calculation method (the processing of step S30) and the evaluation method (the processing of step S31) used by the non-shielding posture extraction unit 26 will be described.
First, in step S28, there are cases where the marker-camera transformation matrix M2 can still be estimated even when part of the marker 31 is hidden by shielding. In such cases, however, the estimation accuracy of the marker-camera transformation matrix M2 may be lower than when the entire marker 31 appears in the captured image. It is undesirable to use postures with low estimation accuracy of the marker-camera transformation matrix M2 for calibration.
Therefore, the non-shielding posture extraction unit 26 exploits the fact that the relative posture between the marker 31 and the imaging unit 32 at each teaching posture is known on the simulator and, for example, calculates as the reliability of the estimation the ratio between the area that the marker 31 would occupy in the captured image if it were not shielded and the area of the marker 31 that actually appears in the captured image of the imaging unit 32 on the simulator. Alternatively, for example, the ratio between the number of feature points on the marker 31 obtained by analyzing the captured image and the total number of feature points may be calculated as the reliability. The non-shielding posture extraction unit 26 then determines whether the reliability calculated in this way is higher than a threshold determined in advance according to the type of the marker 31.
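Expressed as a minimal sketch (function names are illustrative), the two reliability measures and the threshold check of step S31 reduce to simple ratios:

```python
def area_ratio_reliability(visible_marker_pixels, expected_marker_pixels):
    """Reliability as the ratio of the marker area actually visible in the simulated
    image to the area it would occupy without any shielding."""
    return visible_marker_pixels / expected_marker_pixels

def feature_ratio_reliability(detected_features, total_features):
    """Reliability as the ratio of detected marker feature points to their total number."""
    return detected_features / total_features

def is_unshielded(reliability, threshold):
    """Step S31: the posture is kept only when the reliability reaches the
    threshold chosen in advance for the marker type (e.g. 0.9)."""
    return reliability >= threshold
```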
FIG. 18 is a schematic diagram showing an operation example of the non-shielding posture extraction unit 26.
The frame drawn with a dash-dot line on the left side of FIG. 18 shows part of the posture set loaded onto the memory of the control device 1 in step S23. The frame drawn with a dash-dot line on the right side of FIG. 18 shows part of the posture set after the occlusion determination, as extracted by the non-shielding posture extraction unit 26. In this example, the environmental installation object 33 is arranged on the simulator so as to cover the top, bottom, and left sides of the image captured by the imaging unit 32.
First, in the posture shown in the captured image I1 of FIG. 18, part of the marker 31 is shielded by the robot 30, and the estimation of the position and orientation of the marker 31 may fail. If the position and orientation of the marker 31 could still be estimated from the part of the marker 31 visible in the captured image I1, the non-shielding posture extraction unit 26 calculates, for example as described above, the ratio between the area that the marker 31 would occupy in the image if it were not shielded and the area of the marker 31 actually appearing in the captured image of the imaging unit 32 on the simulator, as the reliability of the estimation. When calculating this reliability, the positions and orientations of the camera coordinate system C4 and the marker 31 for a given posture of the robot 30 are known from the posture set generation unit 23. Therefore, using the position and orientation information of the marker 31 together with information such as the angle of view, resolution, and camera parameters of the imaging unit 32, the set of pixel positions that the marker 31 would occupy in the image if it appeared without shielding (hereinafter referred to as the "pixel set") can be specified.
Subsequently, the non-shielding posture extraction unit 26 changes the color information of the robot 30 and the environmental installation object 33 on the simulator to a color not contained in the marker 31, for example a color with RGB values (255, 0, 0), changes the posture of the robot 30, and virtually generates the image captured by the imaging unit 32. The position of the marker 31 in the captured image coincides with the pixel set, and the color of the pixels where shielding occurs becomes (255, 0, 0). The non-shielding posture extraction unit 26 therefore calculates, as the reliability of the estimation, the proportion of pixels in the pixel set whose color is not (255, 0, 0). In the case of the captured image I1, about half of the marker 31 is shielded by the robot 30, so the reliability of the estimation is calculated to be about 50%. If the threshold compared with the reliability is set to 90% in advance, the reliability for the captured image I1 is below the threshold.
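A minimal sketch of this pixel-set test is shown below; the image layout and function names are assumptions, and the occluder color (255, 0, 0) follows the example above.

```python
import numpy as np

def pixel_set_reliability(image_rgb, pixel_set, occluder_color=(255, 0, 0)):
    """Reliability from the pixel-set test: the robot and environment are re-colored
    to `occluder_color` on the simulator, so any pixel of the expected marker region
    showing that color is shielded. `pixel_set` is an iterable of (row, col)
    positions the marker would occupy if unshielded; `image_rgb` has shape (H, W, 3)."""
    rows, cols = zip(*pixel_set)
    region = image_rgb[np.array(rows), np.array(cols)]            # colors at expected marker pixels
    unshielded = np.any(region != np.array(occluder_color), axis=1)
    return float(np.count_nonzero(unshielded)) / len(rows)

# For captured image I1, roughly half of the expected marker pixels are re-colored,
# so this returns about 0.5, which falls below a 0.9 threshold.
```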
On the other hand, in the posture shown in the captured image I2 of FIG. 18, no shielding of the marker 31 occurs and the entire marker 31 appears in the image. Therefore, in the case of the captured image I2, the position and orientation of the marker 31 can be estimated and the reliability of the estimation is higher than the threshold, so the posture becomes a success pattern.
In the posture shown in the captured image I3 of FIG. 18, the marker 31 is placed within the angle of view of the imaging unit 32, but part of the marker 31 is shielded by the environmental installation object 33. For this reason, the estimation of the position and orientation of the marker 31 may fail. Since 9 of the 12 black circles on the marker 31 appear in the captured image I3, the reliability is calculated if the position and orientation of the marker 31 can be estimated. In that case, the non-shielding posture extraction unit 26 calculates, for example as described above, the ratio between the number of feature points on the marker 31 obtained by analyzing the captured image and the total number of feature points, as the reliability of the estimation. When the centers of the black circles are used as feature points, 9 of the total of 12 feature points are obtained by image analysis, so the reliability of the estimation is calculated to be 75%. If the threshold compared with the reliability is set to 90% in advance, the reliability for the captured image I3 is below the threshold.
Through the processing of the non-shielding posture extraction unit 26 described above, the presence or absence of shielding of the marker 31 is determined, and a posture set for which the marker 31 is determined not to be shielded (the postures shown in the captured images I2, I4, I5, and so on) is generated. As a result, only postures in which the marker 31 appears in the right and central parts of the captured image are generated.
In addition, the non-shielding posture extraction unit 26 virtually generates the captured image obtained by the imaging unit 32 when the robot 30 is moved to each posture (the postures corresponding to the variable i = 1 to N2), estimates the three-dimensional position of the marker 31 by analyzing the generated captured image, and determines the presence or absence of shielding of the marker 31 according to whether the estimation succeeds. Specifically, when the estimation succeeds, the marker 31 is determined not to be shielded, and when the estimation fails, the marker 31 is determined to be shielded. This makes it possible to extract postures in which no shielding of the marker 31 occurs.
Furthermore, when the estimation of the three-dimensional position of the marker 31 succeeds, the non-shielding posture extraction unit 26 obtains the reliability of the estimation and determines the presence or absence of shielding of the marker 31 according to whether the reliability is at or above a threshold. Specifically, if the reliability of the estimation is at or above the threshold, the marker 31 is determined not to be shielded, and if the reliability of the estimation is below the threshold, the marker 31 is determined to be shielded. As a result, among the postures for which the three-dimensional position of the marker 31 is successfully estimated, only those whose estimation reliability is at or above the threshold can be extracted as postures in which no shielding of the marker 31 occurs.
In the present embodiment, the non-shielding posture extraction unit 26 calculates the reliability of the estimation, but depending on the shape or pattern of the marker 31, the reliability need not be calculated; that is, the reliability of the estimation may be calculated only as necessary. Moreover, the calculation of the reliability is not limited to the area ratio or the feature-point ratio described above; for example, when the position and orientation of the marker 31 are estimated using machine learning or the like, the reliability of the estimation may itself be estimated by the model.
Next, the processing of the posture set evaluation unit 27 will be described in detail.
The posture set evaluation unit 27 uses the posture set generated by the posture set generation unit 23 and narrowed down by the non-physical interference posture extraction unit 25 and the non-shielding posture extraction unit 26 to determine whether calibration with an accuracy equal to or higher than a predetermined value can be expected. When such high-accuracy calibration can be expected, the posture set evaluation unit 27 instructs the calibration execution unit 29 to execute the calibration. When it cannot be expected, the posture set evaluation unit 27 instructs the posture set generation unit 23 to add postures to the posture set and/or to change the values of the generation parameters and regenerate the posture set.
FIG. 19 is a functional block diagram of the posture set evaluation unit 27.
As shown in FIG. 19, the posture set evaluation unit 27 includes a posture number evaluation unit 271, a posture set extraction unit 272, a calibration evaluation unit 273, and an index evaluation unit 274.
The posture number evaluation unit 271 evaluates the number of postures included in the posture set generated by the posture set generation unit 23 and extracted by the non-physical interference posture extraction unit 25 and the non-shielding posture extraction unit 26. Specifically, the posture number evaluation unit 271 determines whether the number of postures included in the posture set falls within a predetermined number set in advance.
When the posture number evaluation unit 271 determines that the number of postures in the posture set does not fall within the predetermined number, the posture set extraction unit 272 extracts a subset consisting of some of the postures included in the posture set. For example, if the posture set contains a total of 100 postures, 20 of the 100 postures are extracted as a subset.
The calibration evaluation unit 273 uses the above posture set to generate, virtually on the simulator, data such as the marker-camera transformation matrix M2 and the arm-base transformation matrix M3 used for the calibration, and evaluates the estimation accuracy of the marker-camera transformation matrix M2 estimated in the calibration. That is, the calibration evaluation unit 273 evaluates the accuracy of the calibration.
When the calibration evaluation unit 273 determines that calibration accuracy equal to or higher than the predetermined value cannot be expected, for example because the number of postures is insufficient, the index evaluation unit 274 instructs the posture set generation unit 23 to add postures to the posture set or to regenerate it.
The processing of each unit of the posture set evaluation unit 27 will now be described in detail.
First, when the number of postures included in the posture set exceeds a threshold determined by the prior knowledge 21 or the like, the posture number evaluation unit 271 instructs the posture set extraction unit 272 to extract a subset, that is, some of the postures, from the posture set. The posture number evaluation unit 271 then has the calibration evaluation unit 273 determine whether calibration accuracy equal to or higher than the predetermined value can be expected with the subset extracted by the posture set extraction unit 272. By taking out part of the posture set as a subset in this way, an excessive calibration time can be avoided when the posture set contains many postures, and the calibration accuracy can be evaluated efficiently.
The posture set extraction unit 272 extracts some of the postures from the posture set, which is a set of a plurality of postures, based on a predetermined rule. One example of such a rule is to extract postures so that the indices assigned to the small spaces in which the postures were generated appear evenly. As a specific example, when the upper limit on the number of postures to be extracted is set to 21, the posture set extraction unit 272 extracts some of the postures based on the following rule.
Suppose that, among the postures (teaching postures) corresponding to the total of 27 small spaces obtained by dividing the teaching space T1 as shown in FIG. 9 above, the preceding non-physical interference posture extraction unit 25 and non-shielding posture extraction unit 26 have extracted 9 postures in the small spaces assigned the index Z = 1, 7 postures in the small spaces assigned Z = 2, and 7 postures in the small spaces assigned Z = 3, that is, 23 postures in total. In this case, two postures are removed from the small spaces assigned Z = 1, which have the largest number of extracted postures. In this way, some of the postures can be extracted so that the indices appear evenly; a sketch of this balancing rule is given below.
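The following is a minimal sketch of the index-balancing rule described above, assuming that each posture carries the index of the small space it was generated in (only the Z index is used here, as in the example); the data structures are illustrative only.

from collections import Counter

def extract_balanced_subset(postures, max_count):
    """Trim a posture set down to max_count by repeatedly dropping one
    posture from the index (here the Z index of its small space) that
    currently appears most often, so indices end up appearing evenly."""
    subset = list(postures)                      # each item: {"z": int, ...}
    while len(subset) > max_count:
        counts = Counter(p["z"] for p in subset)
        most_common_z, _ = counts.most_common(1)[0]
        # drop the last posture generated for the over-represented index
        for i in range(len(subset) - 1, -1, -1):
            if subset[i]["z"] == most_common_z:
                del subset[i]
                break
    return subset

# Example from the text: 9 postures with Z=1, 7 with Z=2, 7 with Z=3 (23 total)
poses = [{"z": 1}] * 9 + [{"z": 2}] * 7 + [{"z": 3}] * 7
subset = extract_balanced_subset(poses, max_count=21)
print(Counter(p["z"] for p in subset))   # Counter({1: 7, 2: 7, 3: 7})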
The calibration evaluation unit 273 generates the marker-camera transformation matrix M2 and the arm-base transformation matrix M3 used for the virtual calibration on the simulator, and obtains an estimate of the base-camera transformation matrix M1 through the optimization process (described later). Further, the calibration evaluation unit 273 compares the estimate of the base-camera transformation matrix M1 with the ground-truth value used when designing the simulator, and determines whether the calibration accuracy required for executing the robot work, that is, a calibration accuracy equal to or higher than the predetermined value, is obtained. The method of obtaining the estimate of the base-camera transformation matrix M1 is described later together with the processing of the calibration execution unit 29.
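The following is a minimal sketch of comparing an estimated base-camera transform against the simulator ground truth; splitting the check into a translation error and a rotation-angle error, and the tolerance values, are assumptions of this sketch rather than values taken from the description.

import numpy as np

def calibration_error(M1_est, M1_true):
    """Translation error (same unit as the transforms) and rotation error (rad)
    between an estimated and a ground-truth 4x4 base-camera transform."""
    t_err = np.linalg.norm(M1_est[:3, 3] - M1_true[:3, 3])
    R_delta = M1_est[:3, :3].T @ M1_true[:3, :3]
    # angle of the residual rotation, clipped for numerical safety
    cos_angle = np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)
    return t_err, float(np.arccos(cos_angle))

def accuracy_sufficient(M1_est, M1_true, t_tol=0.005, r_tol=0.01):
    """True if both errors are within the tolerances required for the task."""
    t_err, r_err = calibration_error(M1_est, M1_true)
    return t_err <= t_tol and r_err <= r_tol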
The calibration evaluation unit 273 may obtain the estimate of the base-camera transformation matrix M1 after adding errors to the generated marker-camera transformation matrix M2 and arm-base transformation matrix M3, for the following reason.
When the calibration is actually executed on the real device 2A, errors due to noise and the like are superimposed on the observation data. Therefore, by adding errors to the marker-camera transformation matrix M2 and the arm-base transformation matrix M3 as described above, the calibration can be evaluated under conditions close to those of the actual machine.
In that case, the errors added to the marker-camera transformation matrix M2 and the arm-base transformation matrix M3 may, for example, follow a normal distribution. The standard deviation of the normal distribution may be specified by the prior knowledge 21 or the like, or may be specified as a fixed proportion of the values of the marker-camera transformation matrix M2 and the arm-base transformation matrix M3. A sketch of this error injection is shown below.
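The following is a minimal sketch of adding normally distributed errors to a transformation matrix before the virtual calibration is evaluated. Representing each transform as a 4x4 homogeneous matrix, perturbing the translation and small per-axis rotation angles, and the default standard deviations are all assumptions made here for brevity.

import numpy as np

def perturb_transform(T, trans_sigma=0.001, rot_sigma=0.002, rng=None):
    """Return a copy of the 4x4 homogeneous transform T with zero-mean
    Gaussian noise added: trans_sigma on the translation components and
    rot_sigma (rad) on small rotations about each axis."""
    rng = np.random.default_rng() if rng is None else rng
    rx, ry, rz = rng.normal(0.0, rot_sigma, size=3)
    # small rotation built from per-axis perturbations
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    noisy = T.copy()
    noisy[:3, :3] = Rz @ Ry @ Rx @ T[:3, :3]
    noisy[:3, 3] += rng.normal(0.0, trans_sigma, size=3)
    return noisy

# Example: perturb an identity marker-camera transform
T_marker_cam = np.eye(4)
print(perturb_transform(T_marker_cam, rng=np.random.default_rng(0)))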
The noise superimposed on the observation data is, for example, noise caused by errors in the robot mechanism when the observation data are the encoder values of the robot 30, and noise caused by signal noise or image distortion when the observation data are images captured by the imaging unit 32. The observation data may also be the position and orientation of the marker 31 obtained by analyzing an image captured by the imaging unit 32.
When the calibration evaluation unit 273 determines that calibration with an accuracy equal to or higher than the predetermined value can be expected, it instructs the calibration execution unit 29 to execute the calibration. Upon receiving this instruction, the calibration execution unit 29 executes the calibration and thereby estimates the base-camera transformation matrix M1. When the calibration evaluation unit 273 determines that such high-accuracy calibration cannot be expected, it instructs the index evaluation unit 274 to perform the index evaluation.
When the index evaluation unit 274 receives an instruction to perform the index evaluation from the calibration evaluation unit 273 (that is, when calibration with an accuracy equal to or higher than the predetermined value cannot be expected), it instructs the posture set generation unit 23 to generate additional postures and/or instructs the parameter update unit 28 to change the values of the generation parameters 22. Through this feedback, the posture set generation unit 23 can generate a posture set again, either a posture set including the additional postures or a posture set based on the updated generation parameters 22. This increases the likelihood that calibration with an accuracy equal to or higher than the predetermined value can be expected when the posture set generation unit 23 regenerates the posture set.
The index evaluation unit 274 acquires the posture set evaluated by the calibration evaluation unit 273. If the number of postures included in the posture set is smaller than a predetermined threshold, the index evaluation unit 274 instructs the parameter update unit 28 to increment, among the indices (X, Y, Z) assigned to the postures, the index that appears least frequently when the entire posture set is searched. If the number of postures included in the acquired posture set is equal to or larger than the predetermined threshold, the index evaluation unit 274 instructs the posture set generation unit 23 to generate additional postures. One example of the generation method is to randomly select several postures from the acquired posture set and generate teaching positions and teaching postures within the small spaces corresponding to the indices of the selected postures, although the method is not limited to this.
Through the processing of the posture set evaluation unit 27 described above, the evaluation result of the posture set can be fed back to the posture set generation unit 23 and the parameter update unit 28. In addition, the calibration execution unit 29 can be made to execute the calibration only when calibration with an accuracy equal to or higher than the predetermined value can be expected. This prevents the calibration accuracy from deteriorating due to, for example, an insufficient number of postures. When such high-accuracy calibration cannot be expected, the posture set generation unit 23 can be made to generate a posture set again.
In the present embodiment, the posture set evaluation unit 27 includes the posture number evaluation unit 271 and the posture set extraction unit 272, but the configuration is not limited to this; it may include only the calibration evaluation unit 273 and the index evaluation unit 274. The functions of the calibration evaluation unit 273 and the index evaluation unit 274 may also be integrated into a single functional unit.
Furthermore, although the rule in which the indices assigned to the small spaces appear evenly was given as an example of the rule applied by the posture set extraction unit 272, the rule is not limited to this; for example, some postures may be extracted at random from the posture set.
The parameter update unit 28 increments the values of the parameters X1, Y1, and Z1, which determine the resolution of the posture set, based on the instruction from the posture set evaluation unit 27. The generation parameters 22 are thereby updated, and the posture set generation unit 23 subsequently generates a posture set using the updated generation parameters 22.
FIG. 20 is a flowchart showing the processing procedure of the calibration execution unit 29.
The calibration execution unit 29 controls the robot 30 and the imaging unit 32 of the real device 2A and acquires the marker-camera transformation matrix M2 and the arm-base transformation matrix M3 obtained when the robot 30 is moved to each posture of the posture set. The calibration execution unit 29 then estimates the base-camera transformation matrix M1 from the acquired coordinate transformation parameters, that is, the marker-camera transformation matrix M2 and the arm-base transformation matrix M3.
First, the calibration execution unit 29 loads onto the memory of the control device 1 the posture set for which the posture set evaluation unit 27 has determined (evaluated) that calibration with an accuracy equal to or higher than the predetermined value can be expected (step S37).
Next, the calibration execution unit 29 acquires the number N3 of postures included in the posture set (step S38).
Next, the calibration execution unit 29 sets the variable i to its initial value (i = 1) (step S39).
Next, the calibration execution unit 29 actually moves the robot 30 to the i-th posture in the real device 2A (step S40).
Next, the calibration execution unit 29 captures an image of the work space of the robot 30 with the imaging unit 32 (step S41).
Next, the calibration execution unit 29 estimates the marker-camera transformation matrix M2 by analyzing the captured image obtained by the imaging unit 32 (step S42).
Next, the calibration execution unit 29 determines whether the estimation of the marker-camera transformation matrix M2 succeeded (step S43). In step S43, the estimation of the marker-camera transformation matrix M2 may fail because of, for example, the actual lighting environment of the real device 2A or deviations of the specifications and placement of the imaging unit 32 of the real device 2A from their design values.
If the estimation of the marker-camera transformation matrix M2 succeeded, the calibration execution unit 29 stores in the memory of the control device 1 the arm-base transformation matrix M3 indicating the i-th posture values of the robot 30 and the marker-camera transformation matrix M2 indicating the position and orientation of the marker 31 corresponding to the i-th posture of the robot 30 (step S44).
Next, the calibration execution unit 29 determines whether the value of the variable i satisfies i = N3 (step S45). If the value of the variable i is less than N3, the calibration execution unit 29 increments the value of the variable i by 1 (step S46) and then returns to the processing of step S40.
On the other hand, if the calibration execution unit 29 determines in step S43 that the estimation of the marker-camera transformation matrix M2 failed, it skips the processing of step S44. A sketch of this measurement loop is given below.
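The following is a minimal sketch of the measurement loop of steps S37 to S46, assuming hypothetical helper callables (move_robot, capture_image, estimate_marker_camera_transform, get_arm_base_transform) that stand in for the real device interfaces; it only shows how the (M2, M3) pairs are accumulated.

def collect_calibration_pairs(posture_set, move_robot, capture_image,
                              estimate_marker_camera_transform,
                              get_arm_base_transform):
    """Move the robot through every posture of the evaluated posture set,
    estimate the marker-camera transform M2 from each captured image, and
    keep only the (M2, M3) pairs for which the estimation succeeded."""
    pairs = []                                         # steps S37-S39
    for posture in posture_set:                        # i = 1 .. N3 (S45/S46)
        move_robot(posture)                            # step S40
        image = capture_image()                        # step S41
        M2 = estimate_marker_camera_transform(image)   # step S42
        if M2 is None:                                 # step S43: estimation failed
            continue                                   # skip step S44
        M3 = get_arm_base_transform()                  # current arm-base transform
        pairs.append((M2, M3))                         # step S44
    return pairs                                       # input to the optimization (S47)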
After that, when the value of the variable i reaches N3, the calibration execution unit 29 solves an optimization problem using the data in the memory, estimates the base-camera transformation matrix M1 (step S47), and then ends the series of processing.
Solving the optimization problem in step S47 corresponds to the optimization process. As a method of estimating the base-camera transformation matrix M1 through this optimization process, the known technique shown in the following reference can be used.
(Reference)
F. Dornaika and R. Horaud, "Simultaneous Robot-World and Hand-Eye Calibration," Aug. 1998.
The known technique disclosed in this reference can be outlined as follows.
Let AA be the coordinate transformation parameter (transformation matrix) between the robot base and the arm tip, and BB be the coordinate transformation parameter between the arm tip and the marker; then the coordinate transformation from the robot base to the marker is AA × BB. On the other hand, let CC be the coordinate transformation parameter between the robot base and the camera, and DD be the coordinate transformation parameter between the camera and the marker; then the coordinate transformation from the robot base to the marker is CC × DD. The condition AA × BB = CC × DD therefore holds. Here, the coordinate transformation parameter AA is known from the robot encoder values, the coordinate transformation parameters BB and CC are unknown, and the coordinate transformation parameter DD is known from image analysis. In the optimization process, the coordinate transformation parameters BB and CC that satisfy AA × BB = CC × DD are therefore estimated by numerical computation from the many known coordinate transformation parameters AA and DD.
Through the processing of the calibration execution unit 29 described above, the base-camera transformation matrix M1 in the real device 2A can be estimated.
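Purely as an illustration, the following is a naive nonlinear least-squares sketch of estimating the two unknown transforms from the AA × BB = CC × DD constraint. It is not the closed-form method of the cited reference, and the direction conventions (AA maps arm-tip coordinates to base coordinates, DD maps marker coordinates to camera coordinates) as well as the 4x4 homogeneous representation are assumptions of this sketch.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def _to_matrix(p):
    # 6-vector (rotation vector, translation) -> 4x4 homogeneous transform
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(p[:3]).as_matrix()
    T[:3, 3] = p[3:]
    return T

def solve_base_camera(AA_list, DD_list):
    """Estimate the unknown constant transforms BB (arm tip to marker) and
    CC (robot base to camera) from per-posture known transforms AA and DD
    by minimizing the residual of AA_i @ BB - CC @ DD_i over all postures."""
    def residual(x):
        BB, CC = _to_matrix(x[:6]), _to_matrix(x[6:])
        diffs = [AA @ BB - CC @ DD for AA, DD in zip(AA_list, DD_list)]
        return np.concatenate([d[:3, :].ravel() for d in diffs])
    x0 = np.zeros(12)   # identity rotations, zero translations as initial guess
    sol = least_squares(residual, x0)
    BB, CC = _to_matrix(sol.x[:6]), _to_matrix(sol.x[6:])
    return BB, CC       # CC corresponds to the base-camera transform M1

In practice, a dedicated robot-world/hand-eye calibration routine (for example, OpenCV's calibrateRobotWorldHandEye) or at least a better initial guess and many diverse postures would likely be preferable to this hand-rolled solver.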
According to the calibration device 2 of the present embodiment, the posture set generation unit 23 generates a posture set of the robot 30 that allows the imaging unit 32 to image (photograph) the marker 31, and the non-shielding posture extraction unit 26 extracts, from that posture set, postures in which shielding of the marker 31 does not occur. A posture set free of shielding of the marker 31 can therefore be generated automatically. Consequently, the posture set of the robot 30 can be taught while taking the shielding of the marker 31 into account, and the number of man-hours required for teaching the posture set of the robot 30 can be reduced.
In addition, according to the calibration device 2 of the present embodiment, the non-physical interference posture extraction unit 25 extracts, from the posture set generated by the posture set generation unit 23, postures in which the robot 30 and the marker 31 do not physically interfere. A posture set free of physical interference by the robot 30 and the marker 31 can therefore be generated automatically. Consequently, the posture set of the robot 30 can be taught while taking the physical interference in the real device 2A into account, and the number of man-hours required for teaching the posture set of the robot 30 can be reduced.
In the above embodiment, the non-physical interference posture extraction unit 25 extracts non-physical interference postures from the posture set generated by the posture set generation unit 23, and the non-shielding posture extraction unit 26 extracts non-shielding postures from the extracted set of non-physical interference postures; however, the configuration is not limited to this. For example, the non-shielding posture extraction unit 26 may extract non-shielding postures from the posture set generated by the posture set generation unit 23, and the non-physical interference posture extraction unit 25 may then extract non-physical interference postures from the extracted set of non-shielding postures. Alternatively, the non-physical interference posture extraction unit 25 and the non-shielding posture extraction unit 26 may each extract postures directly from the posture set generated by the posture set generation unit 23, and the posture set evaluation unit 27 may evaluate the postures common to the two extracted sets. A configuration including only the non-shielding posture extraction unit 26, without the non-physical interference posture extraction unit 25, is also possible. The same applies to the second embodiment described below.
<Second embodiment>
FIG. 21 is a diagram showing a configuration example of a robot control calibration device according to the second embodiment.
As shown in FIG. 21, the calibration device 2-1 includes a control device 1, a real device 2C, and a calibration control device 2D.
The real device 2C includes a robot 30 consisting of a robot arm, a marker 31 attached to the robot 30, and an imaging unit 32-1 that measures the work space of the robot 30 and its surroundings. Except for the imaging unit 32-1, the configuration of the real device 2C is the same as in the first embodiment. The imaging unit 32-1 is composed of a plurality of (two in the illustrated example) imaging devices 32A and 32B. Each of the imaging devices 32A and 32B is configured by, for example, a monocular camera, a ToF (Time of Flight) camera, or a stereo camera.
The calibration control device 2D includes prior knowledge 21-1, generation parameters 22-1, a posture set generation unit 23-1, a simulator construction unit 24-1, a non-physical interference posture extraction unit 25-1, a non-shielding posture extraction unit 26-1, a posture set evaluation unit 27-1, a parameter update unit 28-1, and a calibration execution unit 29-1, and additionally includes a common posture extraction unit 201. The configuration other than the common posture extraction unit 201 is basically the same as in the first embodiment.
In the present embodiment, however, the imaging unit 32-1 is composed of the plurality of imaging devices 32A and 32B. Accordingly, the prior knowledge 21-1, the generation parameters 22-1, the posture set generation unit 23-1, the simulator construction unit 24-1, the non-physical interference posture extraction unit 25-1, the non-shielding posture extraction unit 26-1, the posture set evaluation unit 27-1, the parameter update unit 28-1, and the calibration execution unit 29-1 each perform the same processing as in the first embodiment for each of the imaging devices 32A and 32B. For example, the posture set generation unit 23-1 individually generates a posture set of the robot 30 that allows the imaging device 32A to photograph the marker 31 and a posture set of the robot 30 that allows the imaging device 32B to photograph the marker 31. This per-device processing is performed not only by the posture set generation unit 23-1 but also by the simulator construction unit 24-1, the non-physical interference posture extraction unit 25-1, the non-shielding posture extraction unit 26-1, the posture set evaluation unit 27-1, the parameter update unit 28-1, and the calibration execution unit 29-1.
The common posture extraction unit 201 extracts, from the posture sets that the posture set generation unit 23-1 generated for each of the imaging devices 32A and 32B and that the non-physical interference posture extraction unit 25-1 and the non-shielding posture extraction unit 26-1 narrowed down for each of the imaging devices 32A and 32B, a plurality of postures that can be used in common by the imaging devices 32A and 32B. The common posture extraction unit 201 also modifies the posture set of each of the imaging devices 32A and 32B so that the extracted postures remain. This is described in detail below.
The common posture extraction unit 201 determines whether each posture included in the posture set generated for one imaging device 32A is also applicable to the other imaging device 32B, and performs the same determination for every combination of the imaging devices 32A and 32B. For example, whether a posture generated for the imaging device 32A is applicable to the imaging device 32B can be determined by controlling the robot 30 on the simulator constructed by the simulator construction unit 24-1, moving the marker 31 to each posture, and checking whether the following conditions (1) to (3) are satisfied (a sketch of this check follows the list).
(1) The marker 31 is contained in the teaching space of the imaging device 32B.
(2) The rotation matrix between the marker 31 and the imaging device 32B falls within a preset threshold range.
(3) The function of the non-shielding posture extraction unit 26-1 determines that no occlusion occurs.
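The following is a minimal sketch of that three-condition check; the helper predicates (in_teaching_space, rotation_within_threshold, no_occlusion) are hypothetical stand-ins for the simulator-based tests described above.

def is_pose_applicable_to(pose, camera, in_teaching_space,
                          rotation_within_threshold, no_occlusion):
    """True if a posture generated for one imaging device is also usable for
    `camera`: the marker lies in that camera's teaching space (1), the
    marker-camera rotation stays within the preset threshold (2), and no
    occlusion is detected for that camera (3)."""
    return (in_teaching_space(pose, camera)              # condition (1)
            and rotation_within_threshold(pose, camera)  # condition (2)
            and no_occlusion(pose, camera))              # condition (3)

def extract_common_poses(candidate_poses, cameras, checks):
    """Collect the postures that every imaging device can use in common;
    `checks` bundles the three predicates above, in the order shown."""
    return [pose for pose in candidate_poses
            if all(is_pose_applicable_to(pose, cam, *checks) for cam in cameras)]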
Based on the above determination results, a posture that can be used in common by the plurality of imaging devices 32A and 32B (hereinafter also referred to as a "common posture") is added as one of the postures constituting the posture set of each imaging device 32A, 32B to which it is applicable. If this processing causes the predetermined upper limit on the number of postures to be exceeded, postures usable only by one of the imaging devices (32A or 32B) are removed from the posture sets of the imaging devices 32A and 32B, thereby adjusting (reducing) the number of postures applied to the calibration performed by the calibration execution unit 29. As a rule of priority when removing postures, for example, postures located in the same small space as an added common posture may be removed preferentially.
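The following is a minimal sketch of trimming a per-device posture set back under its cap after common postures have been added. The small-space index field and the preference for removing device-only postures that share a small space with an added common posture follow the description above, while the data layout is an assumption of this sketch.

def trim_to_limit(device_poses, added_common, limit):
    """Remove device-only postures until the posture set of one imaging device
    is back under `limit`. Postures lying in the same small space (same index)
    as a newly added common posture are removed first; the common postures
    themselves are always kept."""
    common_ids = {id(p) for p in added_common}
    preferred_spaces = {p["index"] for p in added_common}
    poses = list(device_poses)
    while len(poses) > limit:
        removable = [p for p in poses if id(p) not in common_ids]
        if not removable:
            break                                   # only common postures remain
        # prefer device-only postures sharing a small space with a common one
        removable.sort(key=lambda p: p["index"] not in preferred_spaces)
        victim = removable[0]
        poses = [p for p in poses if p is not victim]
    return poses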
The calibration device 2-1 according to the present embodiment provides the following effect in addition to the same effects as the first embodiment.
In the present embodiment, the common posture extraction unit 201 extracts a plurality of postures that can be used in common by the plurality of imaging devices 32A and 32B, and the calibration execution unit 29-1 executes the calibration by applying the extracted postures. Therefore, even when the imaging unit 32 is configured with a plurality of imaging devices 32A and 32B, the calibration can be executed while reducing the man-hours required for teaching the posture set of the robot 30 and for the calibration.
In the present embodiment, the common posture extraction unit 201 extracts common postures from the posture sets that the posture set generation unit 23-1 generated for each of the imaging devices 32A and 32B and that the non-physical interference posture extraction unit 25-1 and the non-shielding posture extraction unit 26-1 narrowed down for each of the imaging devices 32A and 32B, but the configuration is not limited to this. For example, the common posture extraction unit 201 may extract common postures from the posture sets generated by the posture set generation unit 23-1 for each of the imaging devices 32A and 32B, from the posture sets extracted by the non-physical interference posture extraction unit 25-1 for each of the imaging devices 32A and 32B, or from the posture sets extracted by the non-shielding posture extraction unit 26-1 for each of the imaging devices 32A and 32B.
In the present embodiment, the common posture extraction unit 201 controls the robot 30 on the simulator to determine the commonly usable postures, but this is not limiting; for example, the determination may be made only from the design placement information of the imaging unit 32 and the teaching position and posture of the marker 31. In addition, an improvement in accuracy may be expected by increasing the number of postures used for the calibration without setting an upper limit on the number of postures included in the posture set of each of the imaging devices 32A and 32B; in that case, the upper limit on the number of postures need not be determined in advance.
<Modifications, etc.>
The present invention is not limited to the embodiments described above and includes various modifications. For example, the above embodiments have been described in detail to make the present invention easy to understand, and the present invention is not necessarily limited to configurations having all of the described components. Part of the configuration of one embodiment may be replaced with the configuration of another embodiment, and the configuration of another embodiment may be added to the configuration of one embodiment. It is also possible to delete part of the configuration of each embodiment, to add another configuration to it, or to replace it with another configuration.
2, 2-1 ... calibration device; 2A, 2C ... real device; 23, 23-1 ... posture set generation unit; 25, 25-1 ... non-physical interference posture extraction unit; 26, 26-1 ... non-shielding posture extraction unit; 27, 27-1 ... posture set evaluation unit; 28, 28-1 ... parameter update unit; 29, 29-1 ... calibration execution unit; 30 ... robot (robot arm); 31 ... marker; 32 ... imaging unit; 32A, 32B ... imaging device; 33 ... environment installation object; 201 ... common posture extraction unit; M1 ... base-camera transformation matrix (coordinate transformation parameter)

Claims (8)

1. A calibration device for robot control, comprising:
an imaging unit that captures images from a predetermined position;
a marker that is attached to a robot and is displaced according to the motion of the robot;
a posture set generation unit that generates a posture set including a plurality of postures of the robot for enabling the imaging unit to image the marker;
a non-shielding posture extraction unit that extracts, from the posture set generated by the posture set generation unit, a plurality of postures in which shielding of the marker does not occur; and
a calibration execution unit that estimates, based on the plurality of postures extracted by the non-shielding posture extraction unit, a coordinate transformation parameter for transforming between a coordinate system of the robot and a coordinate system of the imaging unit.
2. The calibration device for robot control according to claim 1, wherein
the imaging unit is composed of a plurality of imaging devices,
the posture set generation unit individually generates, for each of the plurality of imaging devices, a posture set of the robot for enabling that imaging device to image the marker,
the calibration device further comprises a common posture extraction unit that extracts, from the posture sets of the robot individually generated for each of the imaging devices by the posture set generation unit, a plurality of postures that can be used in common by the plurality of imaging devices, and
the calibration execution unit estimates the coordinate transformation parameter by applying the plurality of postures extracted by the common posture extraction unit.
3. The calibration device for robot control according to claim 1, further comprising a non-physical interference posture extraction unit that extracts, from the posture set generated by the posture set generation unit, a plurality of postures in which the robot and the marker do not physically interfere with any part of a real device including the robot and an environment installation object, wherein
the non-shielding posture extraction unit extracts, from a posture set including the plurality of postures extracted by the non-physical interference posture extraction unit, postures in which shielding of the marker does not occur.
4. The calibration device for robot control according to claim 1, further comprising a posture set evaluation unit that determines, using a posture set including the plurality of postures extracted by the non-shielding posture extraction unit, whether calibration with an accuracy equal to or higher than a predetermined value can be expected, wherein
the calibration execution unit estimates the coordinate transformation parameter when the posture set evaluation unit determines that calibration with an accuracy equal to or higher than the predetermined value can be expected.
5. The calibration device for robot control according to claim 4, wherein the posture set evaluation unit instructs the posture set generation unit to generate additional postures when calibration with an accuracy equal to or higher than the predetermined value cannot be expected.
6. The calibration device for robot control according to claim 4, further comprising a parameter update unit that changes a value of a parameter used by the posture set generation unit to generate the posture set when the posture set evaluation unit determines that calibration with an accuracy equal to or higher than the predetermined value cannot be expected.
7. The calibration device for robot control according to claim 1, wherein the non-shielding posture extraction unit virtually generates captured images obtained by the imaging unit when the robot is moved to each posture, estimates a three-dimensional position of the marker by analyzing the generated captured images, and determines whether the marker is shielded depending on whether the estimation succeeds.
8. The calibration device for robot control according to claim 7, wherein, when the estimation succeeds, the non-shielding posture extraction unit obtains a reliability of the estimation and determines whether the marker is shielded depending on whether the reliability is equal to or greater than a threshold.
PCT/JP2021/040394 2021-03-29 2021-11-02 Calibration device for controlling robot WO2022208963A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-055774 2021-03-29
JP2021055774A JP7437343B2 (en) 2021-03-29 2021-03-29 Calibration device for robot control

Publications (1)

Publication Number Publication Date
WO2022208963A1 true WO2022208963A1 (en) 2022-10-06

Family

ID=83458339

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/040394 WO2022208963A1 (en) 2021-03-29 2021-11-02 Calibration device for controlling robot

Country Status (2)

Country Link
JP (1) JP7437343B2 (en)
WO (1) WO2022208963A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014161603A1 (en) * 2013-04-05 2014-10-09 Abb Technology Ltd A robot system and method for calibration
JP2015182144A (en) * 2014-03-20 2015-10-22 キヤノン株式会社 Robot system and calibration method of robot system
JP2019217571A (en) * 2018-06-15 2019-12-26 オムロン株式会社 Robot control system
JP2020172017A (en) * 2019-03-05 2020-10-22 ザ・ボーイング・カンパニーThe Boeing Company Automatic calibration for a robot optical sensor
JP2021000678A (en) * 2019-06-20 2021-01-07 オムロン株式会社 Control system and control method

Also Published As

Publication number Publication date
JP2022152845A (en) 2022-10-12
JP7437343B2 (en) 2024-02-22

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21935119

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21935119

Country of ref document: EP

Kind code of ref document: A1