WO2021103824A1 - Key point position determining method and device in robot hand-eye calibration based on calibration block


Info

Publication number
WO2021103824A1
WO2021103824A1 (PCT/CN2020/120103)
Authority
WO
WIPO (PCT)
Prior art keywords
calibration block
point cloud
point
dimensional
calibration
Prior art date
Application number
PCT/CN2020/120103
Other languages
French (fr)
Chinese (zh)
Inventor
郑振兴
刁世普
Original Assignee
广东技术师范大学
Priority date
Filing date
Publication date
Application filed by 广东技术师范大学
Publication of WO2021103824A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • The invention relates to the hand-eye calibration of detection systems in automation, including the calibration of the vision guidance system in robotic machining systems, the calibration of the vision system used to detect the position and related parameters of parts to be assembled in robotic assembly systems, the calibration of the visual inspection system used in machining centers to convert target position information obtained by analyzing sensor data for defects, and visual guidance in other automated processing (operation) tasks; it specifically relates to a method and device for determining key point positions in robot hand-eye calibration based on a calibration block.
  • Automated equipment is a key instrument for building a strong manufacturing nation, so it must move toward higher speed and greater intelligence.
  • An important way to do this is to equip the machine with "eyes" and a "brain" that can work together with those eyes.
  • The "eye" can be a monocular camera, a binocular camera, a multi-view camera, a 3D scanner, or an RGB-D sensor.
  • In view of the deficiencies of the prior art, the present invention proposes a method and device for determining the positions of key points in calibration-block-based robot hand-eye calibration, which can extract the key points at low cost, conveniently, and with high precision, so that hand-eye calibration of the robot vision system can likewise be performed at low cost, conveniently, and with high precision.
  • the calibration block is a three-dimensional calibration block
  • the three-dimensional calibration block has a polyhedral structure and an irregular shape
  • the key points are no fewer than three preset points on the three-dimensional calibration block
  • the preset points do not overlap in the height direction
  • the key point extraction method includes the following steps:
  • Step 1: Adjust the placement posture of the 3D calibration block so that the projection onto the XY plane of the line connecting any two of the key points on the 3D calibration block is not parallel to any coordinate axis of the robot base coordinate system;
  • Step 2 Adjust the posture of the robot so that the three-dimensional vision system at the end of the robot can obtain the three-dimensional calibration block point cloud containing the peripheral surface of the key point on the three-dimensional calibration block;
  • Step 3 Convert the CAD model of the 3D calibration block into a point cloud to obtain the point cloud of the 3D calibration block model
  • Step 4 Register the 3D calibration block model point cloud with the obtained 3D calibration block point cloud
  • Step 5: Taking the key point positions on the 3D calibration block model point cloud as the reference, set a corresponding threshold to obtain the point cloud near each key point from the 3D calibration block point cloud, thereby determining the coordinate values of the key points on the 3D calibration block in the 3D vision system coordinate system.
  • step 3 includes the following sub-steps:
  • Step 301 Obtain the CAD model of the 3D calibration block and convert it into a PLY format file
  • Step 302 According to the PLY format file, use the data format conversion function in the PCL library to convert it into a point cloud data format to obtain a three-dimensional calibration block model point cloud.
  • step 4 includes the following sub-steps:
  • Step 401 Sampling the 3D calibration block point cloud and the 3D calibration block model point cloud respectively;
  • Step 402 Calculate the feature point descriptors of the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud respectively to obtain respective fast point feature histograms;
  • Step 403: According to the fast point feature histograms of the 3D calibration block point cloud and the 3D calibration block model point cloud, perform coarse registration of the point clouds using the sample consensus initial alignment (SAC-IA) algorithm;
  • step 404 the point cloud is accurately registered through the iterative closest point algorithm.
  • In step 5, a corresponding threshold is set so that, using a nearest-neighbor search, the point in the 3D calibration block point cloud closest to each key point on the 3D calibration block model point cloud is found, and its coordinate value is taken as the coordinate value of the corresponding key point on the 3D calibration block in the 3D vision system coordinate system.
  • A device for determining the positions of key points in calibration-block-based robot hand-eye calibration, wherein the calibration block is a three-dimensional calibration block with a polyhedral structure and an irregular shape, and the key points are no fewer than three preset points on the three-dimensional calibration block that do not overlap in the height direction; the key point extraction device includes
  • the 3D calibration block posture adjustment module, which is used to adjust the placement posture of the 3D calibration block so that the projection onto the XY plane of the line connecting any two of the key points on the 3D calibration block is not parallel to any coordinate axis of the robot base coordinate system;
  • the robot posture adjustment module is used to adjust the posture of the robot so that the three-dimensional vision system at the end of the robot can obtain the three-dimensional calibration block point cloud containing the peripheral surface of the key point on the three-dimensional calibration block;
  • the model point cloud conversion module is used to convert the CAD model of the 3D calibration block into a point cloud to obtain the 3D calibration block model point cloud;
  • the registration module is used to register the 3D calibration block model point cloud with the obtained 3D calibration block point cloud;
  • the key point coordinate determination module is used to take the key point positions on the 3D calibration block model point cloud as the reference and set a corresponding threshold to obtain the point cloud near each key point from the 3D calibration block point cloud, thereby determining the coordinate values of the key points on the 3D calibration block in the 3D vision system coordinate system.
  • the model point cloud conversion module includes
  • the PLY format file conversion unit is used to obtain the CAD model of the 3D calibration block and convert it into a PLY format file;
  • the model point cloud acquisition unit is used to convert the PLY format file into a point cloud data format by using the data format conversion function in the PCL library to acquire a three-dimensional calibration block model point cloud.
  • the registration module includes a sampling unit, a fast point feature histogram unit, a coarse registration unit, and a precise registration unit, wherein
  • the sampling unit is used to sample the 3D calibration block point cloud and the 3D calibration block model point cloud respectively;
  • the fast point feature histogram unit is used to calculate the feature point descriptors of the 3D calibration block point cloud and the 3D calibration block model point cloud respectively to obtain their respective fast point feature histograms;
  • the coarse registration unit is used to coarsely register the point clouds using the sample consensus initial alignment (SAC-IA) algorithm, according to the fast point feature histograms of the 3D calibration block point cloud and the 3D calibration block model point cloud;
  • the precise registration unit is used to accurately register the point cloud through the iterative closest point algorithm.
  • the key point coordinate determination module is used to set a corresponding threshold, search the 3D calibration block point cloud, using the nearest neighbor search method, for the point closest to each key point on the 3D calibration block model point cloud, and take the coordinate value of that point as the coordinate value of the corresponding key point on the 3D calibration block in the coordinate system of the 3D vision system.
  • Compared with the prior art, the present invention uses a three-dimensional calibration block with a polyhedral structure and an irregular shape whose multiple key points do not overlap in the height direction, so that the coordinate values of the key points in the robot vision system can be determined at low cost, conveniently, and with high precision. Specifically, the placement posture of the three-dimensional calibration block is adjusted so that the projection onto the XY plane of the line connecting any two of the key points is not parallel to any coordinate axis of the robot base coordinate system; the posture of the robot is then adjusted so that the three-dimensional vision system can acquire a point cloud of the surfaces surrounding the key points; finally, the three-dimensional calibration block model point cloud is registered with the acquired three-dimensional calibration block point cloud, and a corresponding threshold is set to determine the point cloud near each key point, thereby obtaining the coordinate values of the key points in the three-dimensional vision system coordinate system.
  • From the coordinate values of the key points in the robot base coordinate system and in the three-dimensional vision system coordinate system, the transformation matrix of the hand-eye relationship of the robot's dynamic three-dimensional vision system can be solved quickly, so that hand-eye calibration of the robot's dynamic three-dimensional vision system is achieved at low cost, conveniently, and with high precision.
  • Figure 1 is a schematic diagram of the structure of a calibration block used in the present invention.
  • Figure 2 is a schematic diagram of the robot detection attitude adjustment of the present invention
  • FIG. 3 is a flow chart of an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention
  • FIG. 4 is a structural block diagram of an embodiment of the device for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention
  • FIG. 5 is a flowchart of step 3 in an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention;
  • FIG. 6 is a structural block diagram of a model point cloud conversion module in an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention
  • FIG. 7 is a flowchart of step 4 in an embodiment of the method for extracting key points in the calibration of the robot dynamic 3D vision system according to the present invention;
  • FIG. 8 is a structural block diagram of a model point cloud registration module in an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system of the present invention.
  • Figure 1 is a calibration block used in the present invention, wherein the calibration block is a three-dimensional calibration block, and the key points are points P1, P2, P3 in Figure 1;
  • Figure 2 is a schematic diagram of the robot detection posture adjustment of the present invention: after the placement posture of the three-dimensional calibration block is adjusted, the detection posture of the robot is adjusted so that the coordinate values of points P1, P2, and P3 in the three-dimensional vision system coordinate system can be determined.
  • Once the coordinate values of points P1, P2, and P3 in the robot base coordinate system have also been determined, the transformation matrix of the hand-eye relationship of the robot's dynamic three-dimensional vision system can be solved from the coordinate values of P1, P2, and P3 in the three-dimensional vision system coordinate system and in the robot base coordinate system, realizing hand-eye calibration of the robot's dynamic three-dimensional vision system. A detailed description is given below with reference to FIGS. 1 to 4.
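As a brief sketch of the relationship being solved (the notation is assumed here, not given in the patent): if a key point $P_i$ is expressed as $^{B}P_i$ in the robot base frame and $^{C}P_i$ in the vision-system frame at the capture pose, the sought homogeneous transform $T$ relating the two frames at that pose satisfies

$$^{B}\tilde{P}_i = T\,^{C}\tilde{P}_i, \qquad i = 1, 2, 3,$$

where the tilde denotes homogeneous coordinates. With three non-collinear point correspondences, $T$ is determined (up to measurement noise) and is typically obtained by a least-squares rigid fit.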
  • The calibration block used in the embodiment of the present invention is a three-dimensional calibration block with a special shape: as shown in FIG. 1, the three-dimensional calibration block has a polyhedral structure and an irregular shape, and the key points are points P1, P2, and P3 on the three-dimensional calibration block, which do not overlap in the height direction and are roughly evenly distributed along it.
  • the coordinate value of the key point in the coordinate system of the three-dimensional vision system is determined by the three-dimensional calibration block of this special structure.
  • the embodiment of the present invention discloses a method for determining the position of key points in the hand-eye calibration of a robot based on a calibration block, which includes the following steps:
  • Step 1: As shown in Figure 2, adjust the placement posture of the 3D calibration block so that the projection onto the XY plane of the line connecting any two of points P1, P2, and P3 on the 3D calibration block is not parallel to any coordinate axis of the robot base coordinate system;
  • Step 2 as shown in Figure 2, adjust the posture of the robot so that the 3D vision system at the end of the robot can obtain the 3D calibration block point cloud containing the peripheral surface of the P1, P2, and P3 points on the 3D calibration block;
  • Step 3 Convert the CAD model of the 3D calibration block into a point cloud to obtain the point cloud of the 3D calibration block model
  • Step 4 Register the 3D calibration block model point cloud with the obtained 3D calibration block point cloud
  • Step 5: Taking the key point positions on the three-dimensional calibration block model point cloud (i.e., points P1', P2', and P3', where P1' corresponds to P1, P2' to P2, and P3' to P3) as the reference, set a corresponding threshold to obtain the point cloud near each key point from the three-dimensional calibration block point cloud, thereby determining the coordinate values of the key points on the three-dimensional calibration block in the coordinate system of the three-dimensional vision system.
  • the embodiment of the present invention also discloses a device for determining the position of key points in the hand-eye calibration of a robot based on a calibration block, including:
  • the three-dimensional calibration block posture adjustment module 10 is used to adjust the placement posture of the three-dimensional calibration block so that the projection onto the XY plane of the line connecting any two of points P1, P2, and P3 on the three-dimensional calibration block is not parallel to any coordinate axis of the robot base coordinate system;
  • the robot posture adjustment module 20 is used to adjust the posture of the robot so that the three-dimensional vision system at the end of the robot can obtain the three-dimensional calibration block point cloud including the peripheral surface of the P1, P2, and P3 points on the three-dimensional calibration block;
  • the model point cloud conversion module 30 is used to convert the CAD model of the three-dimensional calibration block into a point cloud to obtain the point cloud of the three-dimensional calibration block model;
  • the registration module 40 is used to register the three-dimensional calibration block model point cloud with the obtained three-dimensional calibration block point cloud;
  • the key point coordinate determination module 50 is used to take the key point positions on the three-dimensional calibration block model point cloud (i.e., points P1', P2', and P3', where P1' corresponds to P1, P2' to P2, and P3' to P3) as the reference and set a corresponding threshold to obtain the point cloud near each key point from the three-dimensional calibration block point cloud, thereby determining the coordinate values of the key points on the three-dimensional calibration block in the coordinate system of the three-dimensional vision system.
  • In the embodiment of the present invention, the steps of the calibration-block-based method for determining key point positions in robot hand-eye calibration are executed by the calibration-block-based key point position determination device: step 1 is executed by the three-dimensional calibration block posture adjustment module 10, step 2 by the robot posture adjustment module 20, step 3 by the model point cloud conversion module 30, step 4 by the registration module 40, and step 5 by the key point coordinate determination module 50.
  • In the present invention, determining the coordinate values of the key points P1, P2, and P3 in the three-dimensional vision system coordinate system and in the robot base coordinate system is the key to solving the transformation matrix. The coordinate values of P1, P2, and P3 in the robot base coordinate system are determined quickly with a probe mounted at the end of the robot: when the probe touches P1, P2, and P3 in turn, the coordinate values reported by the robot controller after probe length compensation are the coordinate values of the key points in the robot base coordinate system. Determining the coordinate values of P1, P2, and P3 in the three-dimensional vision system coordinate system is therefore the key to solving the transformation matrix.
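The patent does not give an explicit solver for this step; as an illustration only, the rigid transform between the two frames at the capture pose can be estimated from the three point correspondences with Eigen's umeyama routine (scaling disabled). The function name and matrix shapes below are assumptions made for this sketch, not part of the patent.

```cpp
// Illustrative sketch: estimate the rigid transform T that maps the key points
// expressed in the vision-system frame onto the same points expressed in the
// robot base frame (columns of each matrix are the corresponding points P1..P3).
// Eigen::umeyama is a standard least-squares estimator; it is not named in the patent.
#include <Eigen/Geometry>

Eigen::Matrix4d solveFrameTransform(const Eigen::Matrix3Xd& pts_in_camera,
                                    const Eigen::Matrix3Xd& pts_in_base) {
  // with_scaling = false keeps the estimate a pure rotation plus translation.
  return Eigen::umeyama(pts_in_camera, pts_in_base, /*with_scaling=*/false);
}
```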
  • the embodiment of the present invention utilizes the key points on the three-dimensional calibration block with the aid of the three-dimensional calibration block with the polyhedral structure and irregular shape, so that the coordinates of the key points in the coordinate system of the three-dimensional vision system can be determined with low cost, convenience and high precision.
  • the hand-eye calibration in the robot's three-dimensional dynamic vision system can be realized at low cost, conveniently and with high precision.
  • In step 1, the placement posture of the three-dimensional calibration block determines whether usable data can be acquired. Therefore, in the embodiment of the present invention, as shown in Figures 1 and 2, the posture of the three-dimensional calibration block is adjusted so that the projection onto the XY plane of the line connecting any two of the key points P1, P2, and P3 is not parallel to any coordinate axis of the robot base coordinate system, so that the robot end can acquire point cloud data of the surfaces around all of the key points simultaneously under a single detection posture.
  • In step 2, the detection posture of the robot also needs to be adjusted so that the three-dimensional vision system, such as a monocular camera, binocular camera, multi-view camera, or three-dimensional scanner, can obtain usable spatial position data.
  • Specifically, the three-dimensional vision system installed at the end of the robot can, under a single end detection posture, simultaneously acquire the point cloud of the surfaces surrounding the target points P1, P2, and P3 on the three-dimensional calibration block shown in Figure 1.
  • step 3 includes the following sub-steps:
  • Step 301 Obtain the CAD model of the 3D calibration block and convert it into a PLY format file
  • Step 302 According to the PLY format file, use the data format conversion function in the PCL library to convert it into a point cloud data format to obtain a three-dimensional calibration block model point cloud.
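As a hedged sketch of steps 301 and 302 (not code from the patent), the PLY file exported from the CAD model can be read into a PCL point cloud; the file name below is a placeholder.

```cpp
// Minimal sketch: load the calibration block's CAD model, exported as PLY,
// into a PCL point cloud (the mesh vertices become XYZ points).
#include <pcl/io/ply_io.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr model_cloud(new pcl::PointCloud<pcl::PointXYZ>);
  // Returns a negative value if the file cannot be read as a PLY point set.
  if (pcl::io::loadPLYFile<pcl::PointXYZ>("calibration_block.ply", *model_cloud) < 0) {
    return -1;
  }
  return 0;
}
```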
  • the model point cloud conversion module 30 in the device for determining the position of the key point in the hand-eye calibration of the robot based on the calibration block includes
  • the PLY format file conversion unit 31 is used to obtain the CAD model of the three-dimensional calibration block and convert it into a PLY format file;
  • the model point cloud acquisition unit 32 is configured to convert the PLY format file into a point cloud data format by using the data format conversion function in the PCL library to acquire a three-dimensional calibration block model point cloud.
  • In the embodiment of the present invention, step 3 is executed by the units of the model point cloud conversion module 30: step 301 by the PLY format file conversion unit 31 and step 302 by the model point cloud acquisition unit 32.
  • step 4 includes the following sub-steps:
  • Step 401 Sampling the 3D calibration block point cloud and the 3D calibration block model point cloud respectively;
  • Step 402 Calculate the feature point descriptors of the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud respectively to obtain respective fast point feature histograms;
  • Step 403: According to the fast point feature histograms of the 3D calibration block point cloud and the 3D calibration block model point cloud, perform coarse registration of the point clouds using the sample consensus initial alignment (SAC-IA) algorithm;
  • step 404 the point cloud is accurately registered by using the iterative closest point algorithm.
  • the registration module 40 in the device for determining the position of the key point in the hand-eye calibration of the robot based on the calibration block includes
  • the sampling unit 41 is used to sample the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud respectively;
  • the fast point feature histogram unit 42 is used to calculate the feature point descriptors of the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud respectively to obtain respective fast point feature histograms;
  • the coarse registration unit 43 is used to perform coarse registration of the point clouds using the sample consensus initial alignment (SAC-IA) algorithm, according to the fast point feature histograms of the 3D calibration block point cloud and the 3D calibration block model point cloud;
  • the precise registration unit 44 is used for precise registration of the point cloud through the iterative closest point algorithm.
  • In the embodiment of the present invention, step 4 is executed by the units of the registration module 40: step 401 by the sampling unit 41, step 402 by the fast point feature histogram unit 42, step 403 by the coarse registration unit 43, and step 404 by the precise registration unit 44.
  • In step 401, the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud can be downsampled with a voxel grid filter to improve the registration speed of the point cloud pair.
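A minimal sketch of this downsampling step using PCL's VoxelGrid filter; the 2 mm leaf size is an assumed value, not taken from the patent.

```cpp
// Downsample a cloud with a voxel grid: each occupied voxel is replaced by the
// centroid of the points inside it, reducing the cloud size before registration.
#include <pcl/filters/voxel_grid.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr downsample(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud, float leaf = 0.002f) {
  pcl::PointCloud<pcl::PointXYZ>::Ptr out(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::VoxelGrid<pcl::PointXYZ> grid;
  grid.setInputCloud(cloud);
  grid.setLeafSize(leaf, leaf, leaf);  // voxel edge length, metres assumed
  grid.filter(*out);
  return out;
}
```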
  • In step 402, the registration of the point cloud pair depends on feature descriptions. Therefore, in the present invention, the feature point descriptors of the 3D calibration block point cloud and the 3D calibration block model point cloud are computed separately to obtain their respective fast point feature histograms (FPFH, Fast Point Feature Histograms).
  • In step 403, it is usually necessary to coarsely register the point cloud pair before it can be registered precisely. Therefore, in the present invention, the sample consensus initial alignment algorithm (SAC-IA, Sample Consensus Initial Alignment) is used to achieve coarse registration of the point cloud pair.
  • step 404 after the rough registration of the point cloud pair, the iterative closest point algorithm (ICP, Iterative Closest Point) is used to realize the precise registration of the point cloud pair.
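A hedged PCL sketch of steps 402 to 404 (normal estimation, FPFH descriptors, SAC-IA coarse registration, ICP refinement). The search radii and iteration count are assumed values, and the helper names are illustrative rather than taken from the patent.

```cpp
// Sketch of the registration pipeline: compute FPFH features for both clouds,
// coarsely align the model cloud to the measured scene cloud with SAC-IA,
// then refine the alignment with ICP.
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>
#include <pcl/registration/ia_ransac.h>
#include <pcl/registration/icp.h>
#include <pcl/point_types.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;
using Normals = pcl::PointCloud<pcl::Normal>;
using Features = pcl::PointCloud<pcl::FPFHSignature33>;

Features::Ptr computeFPFH(const Cloud::Ptr& cloud) {
  Normals::Ptr normals(new Normals);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setRadiusSearch(0.01);   // assumed 10 mm normal estimation radius
  ne.compute(*normals);

  Features::Ptr fpfh(new Features);
  pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fe;
  fe.setInputCloud(cloud);
  fe.setInputNormals(normals);
  fe.setRadiusSearch(0.025);  // assumed 25 mm descriptor radius (> normal radius)
  fe.compute(*fpfh);
  return fpfh;
}

Eigen::Matrix4f registerClouds(const Cloud::Ptr& scene, const Cloud::Ptr& model) {
  // Coarse alignment of the model cloud onto the measured scene cloud (SAC-IA).
  pcl::SampleConsensusInitialAlignment<pcl::PointXYZ, pcl::PointXYZ,
                                       pcl::FPFHSignature33> sac;
  sac.setInputSource(model);
  sac.setSourceFeatures(computeFPFH(model));
  sac.setInputTarget(scene);
  sac.setTargetFeatures(computeFPFH(scene));
  Cloud::Ptr coarse(new Cloud);
  sac.align(*coarse);

  // Fine registration with ICP, starting from the coarsely aligned model cloud.
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(coarse);
  icp.setInputTarget(scene);
  icp.setMaximumIterations(100);  // assumed iteration budget
  Cloud::Ptr refined(new Cloud);
  icp.align(*refined);

  // Combined transform that maps the original model cloud into the scene frame.
  return icp.getFinalTransformation() * sac.getFinalTransformation();
}
```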
  • In step 5, a corresponding threshold is set so that, using a nearest-neighbor search, the point in the 3D calibration block point cloud closest to each key point on the 3D calibration block model point cloud is found, and its coordinate value is taken as the coordinate value of the corresponding key point on the 3D calibration block in the coordinate system of the 3D vision system.
  • Correspondingly, the key point coordinate determination module 50 in the calibration-block-based key point position determination device is used to set a corresponding threshold and, using the nearest neighbor search method, search the 3D calibration block point cloud for the point closest to each key point on the 3D calibration block model point cloud, and the coordinate value of that point is determined to be the coordinate value of the corresponding key point on the 3D calibration block in the coordinate system of the 3D vision system.
  • Specifically, taking the key point positions on the three-dimensional calibration block model point cloud (i.e., points P1', P2', and P3', where P1' corresponds to P1, P2' to P2, and P3' to P3) as the reference, the nearest neighbor search method is used to find, in the three-dimensional calibration block point cloud, the point closest to each of the key points P1', P2', and P3' on the three-dimensional calibration block model point cloud.
  • The coordinate value of each such point is the required key point coordinate value, that is, the coordinate value of the corresponding key point P1, P2, or P3 in the coordinate system of the three-dimensional vision system.
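An illustrative kd-tree lookup for this step, assuming the model point cloud has already been transformed into the vision-system (scene) frame by the registration above; the 1 mm distance threshold is an assumption for the sketch, not a value from the patent.

```cpp
// For a transformed model key point (P1', P2' or P3'), find the nearest measured
// point in the scene cloud and accept it only if it lies within a distance threshold.
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_types.h>
#include <vector>

bool findKeyPoint(const pcl::PointCloud<pcl::PointXYZ>::Ptr& scene,
                  const pcl::PointXYZ& model_key_point,  // P1', P2' or P3' after registration
                  pcl::PointXYZ& key_point_in_camera,    // output: P1, P2 or P3 in the camera frame
                  float max_dist = 0.001f) {
  pcl::KdTreeFLANN<pcl::PointXYZ> tree;
  tree.setInputCloud(scene);
  std::vector<int> idx(1);
  std::vector<float> sq_dist(1);
  if (tree.nearestKSearch(model_key_point, 1, idx, sq_dist) > 0 &&
      sq_dist[0] <= max_dist * max_dist) {
    key_point_in_camera = (*scene)[idx[0]];
    return true;
  }
  return false;  // no measured point within the threshold
}
```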
  • In summary, the present invention uses a three-dimensional calibration block with a polyhedral structure and an irregular shape whose multiple key points do not overlap in the height direction, so that the coordinate values of the key points in the robot vision system can be determined at low cost, conveniently, and with high precision.
  • Specifically, the placement posture of the three-dimensional calibration block is adjusted so that the projection onto the XY plane of the line connecting any two of the key points is not parallel to any coordinate axis of the robot base coordinate system; the robot's posture is then adjusted so that the three-dimensional vision system can acquire a point cloud of the surfaces surrounding the key points; finally, the three-dimensional calibration block model point cloud is registered with the acquired three-dimensional calibration block point cloud, and a corresponding threshold is set to determine the point cloud near each key point, thereby obtaining the coordinate values of the key points in the coordinate system of the three-dimensional vision system.
  • From the coordinate values of the key points in the robot base coordinate system and in the three-dimensional vision system coordinate system, the transformation matrix of the hand-eye relationship of the robot's dynamic three-dimensional vision system can be solved quickly, so that hand-eye calibration of the robot's dynamic three-dimensional vision system is achieved at low cost, conveniently, and with high precision.
  • the following disclosure provides many different embodiments or examples for realizing different structures of the embodiments of the present invention.
  • the components and settings of specific examples are described below. Of course, they are only examples, and the purpose is not to limit the present invention.
  • The embodiments of the present invention may repeat reference numbers and/or reference letters in different examples. This repetition is for the purpose of simplification and clarity, and does not in itself indicate a relationship between the various embodiments and/or settings discussed.
  • the embodiments of the present invention provide examples of various specific processes and materials, but those of ordinary skill in the art may be aware of the application of other processes and/or the use of other materials.
  • a "computer-readable medium” can be any device that can contain, store, communicate, propagate, or transmit a program for use by an instruction execution system, device, or device or in combination with these instruction execution systems, devices, or devices.
  • Computer-readable media include the following: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CD-ROM).
  • The computer-readable medium may even be paper or another suitable medium on which the program is printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in the computer memory.
  • each part of the embodiments of the present invention can be implemented by hardware, software, firmware, or a combination thereof.
  • multiple steps or methods can be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if it is implemented in hardware, as in another embodiment, it can be implemented by any one or a combination of the following technologies known in the art: discrete logic circuits, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and so on.
  • a person of ordinary skill in the art can understand that all or part of the steps carried in the method of the foregoing embodiments can be implemented by a program instructing relevant hardware to complete.
  • The program may be stored in a computer-readable storage medium and, when executed, includes one of or a combination of the steps of the method embodiment.
  • the functional units in the various embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it can also be stored in a computer readable storage medium.
  • the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.

Abstract

A key point position determining method and device in robot hand-eye calibration based on a calibration block. The method comprises: enabling the projection of a connection line of any two points in key points on a three-dimensional calibration block on an XY plane to be not parallel to any coordinate axis of a robot base coordinate system by adjusting the placement posture of the three-dimensional calibration block; adjusting the posture of the robot, such that a three-dimensional vision system located at the tail end of the robot can obtain a three-dimensional calibration block point cloud containing the peripheral surface of the key points; then converting a CAD model of the three-dimensional calibration block into a three-dimensional calibration block model point cloud; then registering the three-dimensional calibration block model point cloud and the three-dimensional calibration block point cloud; finally, setting a threshold value to obtain point clouds near the key points from the three-dimensional calibration block point cloud so as to determine coordinate values of the key points in a three-dimensional vision system coordinate system. According to the method and device, the key points can be extracted conveniently with low cost and high precision, so that hand-eye calibration can be carried out in a robot vision system conveniently with low cost and high precision.

Description

Method and device for determining key point positions in robot hand-eye calibration based on a calibration block
Technical field
The invention relates to the hand-eye calibration of detection systems in automation, including the calibration of the vision guidance system in robotic machining systems, the calibration of the vision system used to detect the position and related parameters of parts to be assembled in robotic assembly systems, the calibration of the visual inspection system used in machining centers to convert target position information obtained by analyzing sensor data for defects, and visual guidance in other automated processing (operation) tasks; it specifically relates to a method and device for determining key point positions in robot hand-eye calibration based on a calibration block.
Background art
Automated equipment is a key instrument for building a strong manufacturing nation, so it must move toward higher speed and greater intelligence. An important way to do this is to equip the machine with "eyes" and a "brain" that can work together with those eyes. The "eye" can be a monocular camera, a binocular camera, a multi-view camera, a 3D scanner, or an RGB-D sensor. The relevant data acquired by the vision sensor can be analyzed to obtain processing information; this processing information is defined in the coordinate system of the vision sensor and must be transformed into the robot base coordinate system before it can be executed by the robot. Therefore, the calibration of the hand-eye relationship of the robot vision guidance system is very important.
At present, there are many hand-eye calibration methods for eye-in-hand vision systems, but for a robot dynamic three-dimensional vision system the existing calibration methods either have low calibration accuracy or high calibration cost (requiring expensive instruments such as laser trackers), and they are not conducive to rapid calibration. A low-cost, convenient, and high-precision hand-eye calibration method is therefore urgently needed, and before hand-eye calibration can be performed, a method for extracting key points in the hand-eye calibration of the robot vision system must be provided so that hand-eye calibration can be carried out quickly.
Summary of the invention
In view of the deficiencies of the prior art, the present invention proposes a method and device for determining the positions of key points in calibration-block-based robot hand-eye calibration, which can extract the key points at low cost, conveniently, and with high precision, so that hand-eye calibration of the robot vision system can likewise be performed at low cost, conveniently, and with high precision.
The technical solution of the present invention is realized as follows:
A method for determining the positions of key points in calibration-block-based robot hand-eye calibration, wherein the calibration block is a three-dimensional calibration block with a polyhedral structure and an irregular shape, and the key points are no fewer than three preset points on the three-dimensional calibration block that do not overlap in the height direction; the key point extraction method includes the following steps:
Step 1: Adjust the placement posture of the three-dimensional calibration block so that the projection onto the XY plane of the line connecting any two of the key points on the three-dimensional calibration block is not parallel to any coordinate axis of the robot base coordinate system;
Step 2: Adjust the posture of the robot so that the three-dimensional vision system at the end of the robot can acquire a three-dimensional calibration block point cloud containing the surfaces surrounding the key points on the three-dimensional calibration block;
Step 3: Convert the CAD model of the three-dimensional calibration block into a point cloud to obtain the three-dimensional calibration block model point cloud;
Step 4: Register the three-dimensional calibration block model point cloud with the acquired three-dimensional calibration block point cloud;
Step 5: Taking the key point positions on the three-dimensional calibration block model point cloud as the reference, set a corresponding threshold to obtain the point cloud near each key point from the three-dimensional calibration block point cloud, thereby determining the coordinate values of the key points on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
Further, step 3 includes the following sub-steps:
Step 301: Obtain the CAD model of the three-dimensional calibration block and convert it into a PLY format file;
Step 302: Using the data format conversion function in the PCL library, convert the PLY format file into the point cloud data format to obtain the three-dimensional calibration block model point cloud.
Further, step 4 includes the following sub-steps:
Step 401: Sample the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud respectively;
Step 402: Compute the feature point descriptors of the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud respectively to obtain their respective fast point feature histograms;
Step 403: According to the fast point feature histograms of the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud, perform coarse registration of the point clouds using the sample consensus initial alignment (SAC-IA) algorithm;
Step 404: Perform precise registration of the point clouds using the iterative closest point (ICP) algorithm.
Further, in step 5, a corresponding threshold is set so that, using a nearest-neighbor search, the point in the three-dimensional calibration block point cloud closest to each key point on the three-dimensional calibration block model point cloud is found, and its coordinate value is taken as the coordinate value of the corresponding key point on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
A device for determining the positions of key points in calibration-block-based robot hand-eye calibration, wherein the calibration block is a three-dimensional calibration block with a polyhedral structure and an irregular shape, and the key points are no fewer than three preset points on the three-dimensional calibration block that do not overlap in the height direction; the key point extraction device includes
a three-dimensional calibration block posture adjustment module, used to adjust the placement posture of the three-dimensional calibration block so that the projection onto the XY plane of the line connecting any two of the key points on the three-dimensional calibration block is not parallel to any coordinate axis of the robot base coordinate system;
a robot posture adjustment module, used to adjust the posture of the robot so that the three-dimensional vision system at the end of the robot can acquire a three-dimensional calibration block point cloud containing the surfaces surrounding the key points on the three-dimensional calibration block;
a model point cloud conversion module, used to convert the CAD model of the three-dimensional calibration block into a point cloud to obtain the three-dimensional calibration block model point cloud;
a registration module, used to register the three-dimensional calibration block model point cloud with the acquired three-dimensional calibration block point cloud;
a key point coordinate determination module, used to take the key point positions on the three-dimensional calibration block model point cloud as the reference and set a corresponding threshold to obtain the point cloud near each key point from the three-dimensional calibration block point cloud, thereby determining the coordinate values of the key points on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
Further, the model point cloud conversion module includes
a PLY format file conversion unit, used to obtain the CAD model of the three-dimensional calibration block and convert it into a PLY format file;
a model point cloud acquisition unit, used to convert the PLY format file into the point cloud data format with the data format conversion function in the PCL library to obtain the three-dimensional calibration block model point cloud.
Further, the registration module includes a sampling unit, a fast point feature histogram unit, a coarse registration unit, and a precise registration unit, wherein
the sampling unit is used to sample the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud respectively;
the fast point feature histogram unit is used to compute the feature point descriptors of the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud respectively to obtain their respective fast point feature histograms;
the coarse registration unit is used to perform coarse registration of the point clouds using the sample consensus initial alignment (SAC-IA) algorithm, according to the fast point feature histograms of the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud;
the precise registration unit is used to perform precise registration of the point clouds using the iterative closest point (ICP) algorithm.
Further, the key point coordinate determination module is used to set a corresponding threshold and, using a nearest-neighbor search, find in the three-dimensional calibration block point cloud the point closest to each key point on the three-dimensional calibration block model point cloud, and take its coordinate value as the coordinate value of the corresponding key point on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
Compared with the prior art, the present invention has the following advantages. The present invention uses a three-dimensional calibration block with a polyhedral structure and an irregular shape whose multiple key points do not overlap in the height direction, so that the coordinate values of the key points in the robot vision system can be determined at low cost, conveniently, and with high precision. Specifically, the placement posture of the three-dimensional calibration block is adjusted so that the projection onto the XY plane of the line connecting any two of the key points is not parallel to any coordinate axis of the robot base coordinate system; the posture of the robot is then adjusted so that the three-dimensional vision system can acquire a point cloud of the surfaces surrounding the key points; finally, the three-dimensional calibration block model point cloud is registered with the acquired three-dimensional calibration block point cloud, and a corresponding threshold is set to determine the point cloud near each key point, thereby obtaining the coordinate values of the key points in the three-dimensional vision system coordinate system. From the coordinate values of the key points in the robot base coordinate system and in the three-dimensional vision system coordinate system, the transformation matrix of the hand-eye relationship of the robot's dynamic three-dimensional vision system can be solved quickly, so that hand-eye calibration of the robot's dynamic three-dimensional vision system is achieved at low cost, conveniently, and with high precision.
Description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative work.
Figure 1 is a schematic diagram of the structure of the calibration block used in the present invention;
Figure 2 is a schematic diagram of the robot detection posture adjustment of the present invention;
Figure 3 is a flowchart of an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention;
Figure 4 is a structural block diagram of an embodiment of the device for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention;
Figure 5 is a flowchart of step 3 in an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention;
Figure 6 is a structural block diagram of the model point cloud conversion module in an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention;
Figure 7 is a flowchart of step 4 in an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention;
Figure 8 is a structural block diagram of the model point cloud registration module in an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings of the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.
Figure 1 shows the calibration block used in the present invention, where the calibration block is a three-dimensional calibration block and the key points are points P1, P2, and P3 in Figure 1. Figure 2 is a schematic diagram of the robot detection posture adjustment of the present invention: after the placement posture of the three-dimensional calibration block is adjusted, the detection posture of the robot is adjusted so that the coordinate values of points P1, P2, and P3 in the three-dimensional vision system coordinate system can be determined. Once the coordinate values of points P1, P2, and P3 in the robot base coordinate system have also been determined, the transformation matrix of the hand-eye relationship of the robot's dynamic three-dimensional vision system can be solved from the coordinate values of P1, P2, and P3 in the three-dimensional vision system coordinate system and in the robot base coordinate system, realizing hand-eye calibration of the robot's dynamic three-dimensional vision system. A detailed description is given below with reference to Figures 1 to 4.
The calibration block used in the embodiment of the present invention is a three-dimensional calibration block with a special shape: as shown in Figure 1, the three-dimensional calibration block has a polyhedral structure and an irregular shape, and the key points are points P1, P2, and P3 on the three-dimensional calibration block, which do not overlap in the height direction and are roughly evenly distributed along it. In the embodiment of the present invention, the coordinate values of the key points in the three-dimensional vision system coordinate system are determined by means of this specially structured three-dimensional calibration block.
Referring to Figure 3, the embodiment of the present invention discloses a method for determining the positions of key points in calibration-block-based robot hand-eye calibration, which includes the following steps:
Step 1: As shown in Figure 2, adjust the placement posture of the three-dimensional calibration block so that the projection onto the XY plane of the line connecting any two of points P1, P2, and P3 on the three-dimensional calibration block is not parallel to any coordinate axis of the robot base coordinate system;
Step 2: As shown in Figure 2, adjust the posture of the robot so that the three-dimensional vision system at the end of the robot can acquire a three-dimensional calibration block point cloud containing the surfaces surrounding points P1, P2, and P3 on the three-dimensional calibration block;
Step 3: Convert the CAD model of the three-dimensional calibration block into a point cloud to obtain the three-dimensional calibration block model point cloud;
Step 4: Register the three-dimensional calibration block model point cloud with the acquired three-dimensional calibration block point cloud;
Step 5: Taking the key point positions on the three-dimensional calibration block model point cloud (i.e., points P1', P2', and P3', where P1' corresponds to P1, P2' to P2, and P3' to P3) as the reference, set a corresponding threshold to obtain the point cloud near each key point from the three-dimensional calibration block point cloud, thereby determining the coordinate values of the key points on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
Referring to Figure 4, the embodiment of the present invention also discloses a device for determining the positions of key points in calibration-block-based robot hand-eye calibration, including:
a three-dimensional calibration block posture adjustment module 10, used to adjust the placement posture of the three-dimensional calibration block so that the projection onto the XY plane of the line connecting any two of points P1, P2, and P3 on the three-dimensional calibration block is not parallel to any coordinate axis of the robot base coordinate system;
a robot posture adjustment module 20, used to adjust the posture of the robot so that the three-dimensional vision system at the end of the robot can acquire a three-dimensional calibration block point cloud containing the surfaces surrounding points P1, P2, and P3 on the three-dimensional calibration block;
a model point cloud conversion module 30, used to convert the CAD model of the three-dimensional calibration block into a point cloud to obtain the three-dimensional calibration block model point cloud;
a registration module 40, used to register the three-dimensional calibration block model point cloud with the acquired three-dimensional calibration block point cloud;
a key point coordinate determination module 50, used to take the key point positions on the three-dimensional calibration block model point cloud (i.e., points P1', P2', and P3', where P1' corresponds to P1, P2' to P2, and P3' to P3) as the reference and set a corresponding threshold to obtain the point cloud near each key point from the three-dimensional calibration block point cloud, thereby determining the coordinate values of the key points on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
本发明实施方式中,基于标定块的机器人手眼标定中关键点位置确定方法是以基于标定块的机器人手眼标定中关键点位置确定装置作为步骤的执行对象。其中,步骤1是以三维标定块姿态调节模块10作为步骤的执行对象,步骤2是以机器人姿态调节模块20作为步骤的执行对象,步骤3是以模型点云转换模块30作为步骤的执行对象,步骤4是以配准模块40作为步骤的执行对象,步骤5是以关键点坐标确定模块50作为步骤的执行对象。In the embodiment of the present invention, the method for determining the position of the key point in the hand-eye calibration of the robot based on the calibration block uses the device for determining the position of the key point in the hand-eye calibration of the robot based on the calibration block as the execution target of the steps. Among them, step 1 uses the 3D calibration block posture adjustment module 10 as the execution object of the steps, step 2 uses the robot posture adjustment module 20 as the execution object of the steps, and step 3 uses the model point cloud conversion module 30 as the execution object of the steps. Step 4 uses the registration module 40 as the execution object of the step, and step 5 uses the key point coordinate determination module 50 as the execution object of the step.
本发明中,关键点P1、P2、P3在三维视觉系统坐标系下的坐标值的确定及关键点P1、P2、P3在机器人基坐标系下的坐标值,是变换矩阵求解的关键,而关键点P1、P2、P3在机器人基坐标系下的坐标值利用设于机器人末端的探针进行快速确定,具体的,通过探针分别触及P1、P2、P3点时,机器人控制器中经探针长度补偿后的坐标值即可为关键点P1、P2、P3在机器人基坐标系下的坐标值。因此,确定关键点P1、P2、P3在三维视觉系统坐标系下 的坐标值,是求解变换矩阵的关键。本发明实施方式通过借助这种多面体结构且形状不规则的三维标定块,利用三维标定块上的关键点,从而可低成本、便捷、高精度地确定关键点在三维视觉系统坐标系下的坐标值,进而低成本、便捷、高精度地实现在机器人三维动态视觉系统中的手眼标定。In the present invention, the determination of the coordinate values of the key points P1, P2, P3 in the coordinate system of the three-dimensional vision system and the coordinate values of the key points P1, P2, P3 in the robot base coordinate system are the key to the solution of the transformation matrix, and the key The coordinate values of points P1, P2, and P3 in the robot base coordinate system are quickly determined by the probe set at the end of the robot. Specifically, when P1, P2, and P3 are touched by the probe, the robot controller The coordinate value after length compensation can be the coordinate value of the key points P1, P2, P3 in the robot base coordinate system. Therefore, determining the coordinate values of the key points P1, P2, and P3 in the coordinate system of the three-dimensional vision system is the key to solving the transformation matrix. The embodiment of the present invention utilizes the key points on the three-dimensional calibration block with the aid of the three-dimensional calibration block with the polyhedral structure and irregular shape, so that the coordinates of the key points in the coordinate system of the three-dimensional vision system can be determined with low cost, convenience and high precision. In addition, the hand-eye calibration in the robot's three-dimensional dynamic vision system can be realized at low cost, conveniently and with high precision.
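As a minimal sketch of how the transformation could be computed once both coordinate sets are available, the closed-form Umeyama fit provided by Eigen may be used. The numeric values below are placeholders, and nothing in this snippet is prescribed by the patent:

```cpp
// Hedged sketch: estimate the vision-frame-to-base-frame rigid transform from
// the three keypoint correspondences P1..P3 using Eigen's Umeyama solver.
#include <Eigen/Dense>
#include <iostream>

int main() {
  // Columns are P1, P2, P3 in the 3D-vision-system frame (placeholder values).
  Eigen::Matrix3d pts_cam;
  pts_cam << 0.10, 0.25, 0.40,
             0.05, 0.20, 0.10,
             0.30, 0.45, 0.60;

  // The same points in the robot base frame, obtained with the probe after
  // probe-length compensation (placeholder values).
  Eigen::Matrix3d pts_base;
  pts_base << 0.80, 0.95, 1.10,
              0.40, 0.55, 0.45,
              0.20, 0.35, 0.50;

  // Rigid (no scaling) least-squares fit; T maps vision-frame points to the
  // base frame in homogeneous coordinates: pts_base = T * pts_cam (approximately).
  Eigen::Matrix4d T = Eigen::umeyama(pts_cam, pts_base, /*with_scaling=*/false);
  std::cout << "vision-to-base transform:\n" << T << std::endl;
  return 0;
}
```

With exactly three well-separated, non-collinear points the solution is unique; additional key points would simply make the least-squares fit more robust to measurement noise.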
In step 1, the placement pose of the three-dimensional calibration block determines whether the acquired data are usable. Therefore, in the embodiments of the present invention, as shown in Figs. 1 and 2, the pose of the block is adjusted so that the projection onto the XY plane of the line connecting any two of key points P1, P2 and P3 is not parallel to any coordinate axis of the robot base coordinate system, so that the robot end, in a single detection pose, can simultaneously acquire point cloud data of the several surfaces surrounding each key point.
In step 2, the detection pose of the robot likewise needs to be adjusted so that the three-dimensional vision system (for example a monocular camera, binocular camera, multi-camera rig or three-dimensional scanner) can obtain usable spatial position data. As shown in Fig. 2, the pose is adjusted so that the three-dimensional vision system mounted at the robot end, in a single end detection pose, can simultaneously acquire the point clouds of the surfaces surrounding target points P1, P2 and P3 on the three-dimensional calibration block shown in Fig. 1.
Specifically, as shown in Fig. 5, step 3 comprises the following sub-steps:
Step 301: obtain the CAD model of the three-dimensional calibration block and convert it into a PLY-format file;
Step 302: convert the PLY-format file into the point cloud data format using the data format conversion functions of the PCL library, obtaining the calibration-block model point cloud; a minimal code sketch of these two sub-steps is given below.
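A minimal sketch of sub-steps 301 and 302, assuming PCL's PLY reader and its PolygonMesh-to-point-cloud conversion; the file name is hypothetical, and for simplicity only the mesh vertices are taken as the model point cloud (a denser model cloud could be obtained by sampling the mesh surface):

```cpp
// Hedged sketch: load the calibration-block CAD model exported as PLY and
// convert it to a PCL point cloud (the "calibration-block model point cloud").
#include <pcl/io/ply_io.h>
#include <pcl/conversions.h>
#include <pcl/PolygonMesh.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <iostream>

int main() {
  pcl::PolygonMesh mesh;
  if (pcl::io::loadPLYFile("calibration_block.ply", mesh) < 0) {  // hypothetical file name
    std::cerr << "could not read PLY file" << std::endl;
    return -1;
  }

  // PCL data-format conversion: the mesh vertex data become the model point cloud.
  pcl::PointCloud<pcl::PointXYZ>::Ptr model_cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::fromPCLPointCloud2(mesh.cloud, *model_cloud);

  std::cout << "model point cloud size: " << model_cloud->size() << std::endl;
  return 0;
}
```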
Correspondingly, as shown in Fig. 6, the model point cloud conversion module 30 of the device for determining key point positions in calibration-block-based robot hand-eye calibration comprises:
a PLY-format file conversion unit 31, configured to obtain the CAD model of the three-dimensional calibration block and convert it into a PLY-format file;
a model point cloud acquisition unit 32, configured to convert the PLY-format file into the point cloud data format using the data format conversion functions of the PCL library, obtaining the calibration-block model point cloud.
In the embodiments of the present invention, step 3 is executed by the units of the model point cloud conversion module 30: step 301 is executed by the PLY-format file conversion unit 31, and step 302 by the model point cloud acquisition unit 32.
Specifically, as shown in Fig. 7, step 4 comprises the following sub-steps:
Step 401: sample the calibration-block point cloud and the calibration-block model point cloud respectively;
Step 402: compute feature point descriptors for the calibration-block point cloud and the calibration-block model point cloud respectively, obtaining a fast point feature histogram for each;
Step 403: coarsely register the two point clouds using the sample consensus initial alignment algorithm, based on the fast point feature histograms of the calibration-block point cloud and the calibration-block model point cloud;
Step 404: precisely register the two point clouds using the iterative closest point algorithm.
Correspondingly, as shown in Fig. 8, the registration module 40 of the device for determining key point positions in calibration-block-based robot hand-eye calibration comprises:
a sampling unit 41, configured to sample the calibration-block point cloud and the calibration-block model point cloud respectively;
a fast point feature histogram unit 42, configured to compute feature point descriptors for the calibration-block point cloud and the calibration-block model point cloud respectively, obtaining a fast point feature histogram for each;
a coarse registration unit 43, configured to coarsely register the two point clouds using the sample consensus initial alignment algorithm, based on the fast point feature histograms of the calibration-block point cloud and the calibration-block model point cloud;
a precise registration unit 44, configured to precisely register the two point clouds using the iterative closest point algorithm.
In the embodiments of the present invention, step 4 is executed by the units of the registration module 40: step 401 is executed by the sampling unit 41, step 402 by the fast point feature histogram unit 42, step 403 by the coarse registration unit 43, and step 404 by the precise registration unit 44.
In step 401, the calibration-block point cloud and the calibration-block model point cloud can be downsampled with a voxel-grid filter so as to speed up registration of the point cloud pair.
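A minimal sketch of this downsampling step, assuming PCL's VoxelGrid filter is the sampling filter intended; the leaf size is an illustrative assumption:

```cpp
// Hedged sketch: voxel-grid downsampling applied to both the measured and the
// model point cloud before registration.
#include <pcl/filters/voxel_grid.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
downsample(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud, float leaf = 0.002f) {
  pcl::VoxelGrid<pcl::PointXYZ> vg;
  vg.setInputCloud(cloud);
  vg.setLeafSize(leaf, leaf, leaf);  // 2 mm voxels, assuming coordinates in metres
  pcl::PointCloud<pcl::PointXYZ>::Ptr out(new pcl::PointCloud<pcl::PointXYZ>);
  vg.filter(*out);
  return out;
}
```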
In step 402, registration of the point cloud pair depends on feature descriptions, so in the present invention feature point descriptors are computed separately for the calibration-block point cloud and the calibration-block model point cloud, giving a Fast Point Feature Histogram (FPFH) for each.
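A minimal sketch of the FPFH computation with PCL; surface normals are estimated first because FPFH requires them, and the search radii are illustrative assumptions:

```cpp
// Hedged sketch: compute FPFH descriptors for one (downsampled) point cloud.
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>
#include <pcl/search/kdtree.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

pcl::PointCloud<pcl::FPFHSignature33>::Ptr
computeFPFH(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

  // Surface normals.
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setSearchMethod(tree);
  ne.setRadiusSearch(0.01);  // 10 mm normal radius (assumed)
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  ne.compute(*normals);

  // Fast Point Feature Histograms, one 33-bin signature per point.
  pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
  fpfh.setInputCloud(cloud);
  fpfh.setInputNormals(normals);
  fpfh.setSearchMethod(tree);
  fpfh.setRadiusSearch(0.025);  // must be larger than the normal radius
  pcl::PointCloud<pcl::FPFHSignature33>::Ptr features(new pcl::PointCloud<pcl::FPFHSignature33>);
  fpfh.compute(*features);
  return features;
}
```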
In step 403, a point cloud pair usually needs to be coarsely registered before it can be precisely registered, so the present invention uses the Sample Consensus Initial Alignment (SAC-IA) algorithm to coarsely register the point cloud pair.
In step 404, after coarse registration of the point cloud pair, the Iterative Closest Point (ICP) algorithm is used to achieve precise registration of the point cloud pair.
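A minimal sketch of the coarse-to-fine registration with PCL's SAC-IA and ICP classes, reusing the downsample()/computeFPFH() helpers sketched above; iteration counts and the choice of aligning the model cloud to the measured cloud are assumptions rather than prescriptions of the patent:

```cpp
// Hedged sketch: SAC-IA coarse registration followed by ICP refinement,
// returning the model-to-scan (vision-frame) transform.
#include <pcl/registration/ia_ransac.h>
#include <pcl/registration/icp.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <Eigen/Core>

Eigen::Matrix4f
registerModelToScan(const pcl::PointCloud<pcl::PointXYZ>::Ptr& model,
                    const pcl::PointCloud<pcl::PointXYZ>::Ptr& scan,
                    const pcl::PointCloud<pcl::FPFHSignature33>::Ptr& model_fpfh,
                    const pcl::PointCloud<pcl::FPFHSignature33>::Ptr& scan_fpfh) {
  // Coarse registration: Sample Consensus Initial Alignment.
  pcl::SampleConsensusInitialAlignment<pcl::PointXYZ, pcl::PointXYZ, pcl::FPFHSignature33> sac;
  sac.setInputSource(model);
  sac.setSourceFeatures(model_fpfh);
  sac.setInputTarget(scan);
  sac.setTargetFeatures(scan_fpfh);
  sac.setMaximumIterations(500);       // assumed value
  pcl::PointCloud<pcl::PointXYZ> coarse;
  sac.align(coarse);

  // Fine registration: Iterative Closest Point, seeded with the SAC-IA result.
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(model);
  icp.setInputTarget(scan);
  icp.setMaximumIterations(100);       // assumed value
  pcl::PointCloud<pcl::PointXYZ> refined;
  icp.align(refined, sac.getFinalTransformation());

  return icp.getFinalTransformation();
}
```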
Further, in step 5, a corresponding threshold is set, and a nearest-neighbour search is used to find, in the calibration-block point cloud, the point closest to each key point of the calibration-block model point cloud; the coordinate values of that point are determined as the coordinate values of the corresponding key point of the three-dimensional calibration block in the coordinate system of the three-dimensional vision system.
Correspondingly, the key point coordinate determination module 50 of the device for determining key point positions in calibration-block-based robot hand-eye calibration is configured to set a corresponding threshold and use a nearest-neighbour search to find, in the calibration-block point cloud, the point closest to each key point of the calibration-block model point cloud, the coordinate values of that point being determined as the coordinate values of the corresponding key point of the three-dimensional calibration block in the coordinate system of the three-dimensional vision system.
In the embodiments of the present invention, taking the key point positions on the calibration-block model point cloud as a reference (i.e. points P1', P2' and P3', where P1' corresponds to P1, P2' to P2 and P3' to P3), a nearest-neighbour search finds, in the calibration-block point cloud, the point closest to each of the key points P1', P2' and P3' of the model point cloud; the coordinate values of that point are the required key point coordinate values, i.e. the coordinate values of key points P1, P2 and P3 in the coordinate system of the three-dimensional vision system.
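A minimal sketch of this nearest-neighbour lookup with PCL's kd-tree, assuming the model key points P1', P2', P3' have already been transformed into the measured cloud's frame with the registration result; the distance threshold is an illustrative assumption:

```cpp
// Hedged sketch: for each transformed model keypoint, take the nearest point
// of the measured calibration-block cloud as the keypoint in the vision frame.
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <cmath>
#include <iostream>
#include <vector>

void locateKeypoints(const pcl::PointCloud<pcl::PointXYZ>::Ptr& scan,
                     const std::vector<pcl::PointXYZ>& model_keypoints,
                     float max_dist = 0.003f /* 3 mm threshold, assumed */) {
  pcl::KdTreeFLANN<pcl::PointXYZ> kdtree;
  kdtree.setInputCloud(scan);

  std::vector<int> idx(1);
  std::vector<float> sq_dist(1);
  for (const auto& kp : model_keypoints) {
    if (kdtree.nearestKSearch(kp, 1, idx, sq_dist) > 0 &&
        std::sqrt(sq_dist[0]) < max_dist) {
      const pcl::PointXYZ& p = scan->points[idx[0]];
      std::cout << "keypoint in vision-system frame: "
                << p.x << " " << p.y << " " << p.z << std::endl;
    }
  }
}
```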
To sum up, the present invention uses a three-dimensional calibration block with a polyhedral, irregular structure whose several key points do not coincide in the height direction, so that the coordinate values of the key points in the robot vision system can be determined at low cost, conveniently and with high precision. Specifically, the placement pose of the three-dimensional calibration block is adjusted so that the projection onto the XY plane of the line connecting any two key points is not parallel to any coordinate axis of the robot base coordinate system; the pose of the robot is then adjusted so that the three-dimensional vision system can acquire the point cloud of the surfaces surrounding the key points; finally, the calibration-block model point cloud is registered with the acquired calibration-block point cloud, and a corresponding threshold is set to determine the point cloud near each key point, thereby obtaining the coordinate values of the key points in the coordinate system of the three-dimensional vision system. From the coordinate values of the key points in the robot base coordinate system and in the coordinate system of the three-dimensional vision system, the transformation matrix of the hand-eye relationship of the robot's dynamic three-dimensional vision system can be solved quickly, so that hand-eye calibration of the robot's three-dimensional dynamic vision system is realized at low cost, conveniently and with high precision.
In the description of the embodiments of the present invention, "plurality" means two or more, unless otherwise expressly and specifically defined.
The disclosure below provides many different embodiments or examples for realizing different structures of the embodiments of the present invention. To simplify the disclosure of the embodiments of the present invention, the components and arrangements of specific examples are described below. They are, of course, merely examples and are not intended to limit the present invention. In addition, the embodiments of the present invention may repeat reference numerals and/or reference letters in different examples; such repetition is for simplicity and clarity and does not in itself indicate a relationship between the various embodiments and/or arrangements discussed. Furthermore, the embodiments of the present invention provide examples of various specific processes and materials, but a person of ordinary skill in the art will recognize the applicability of other processes and/or the use of other materials.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processing module, or another system that can fetch instructions from an instruction execution system, apparatus or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fibre device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that the parts of the embodiments of the present invention may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of the following technologies known in the art, or a combination thereof: a discrete logic circuit having logic gate circuits for implementing logical functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
A person of ordinary skill in the art will understand that all or part of the steps carried by the methods of the above embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the various embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; a person of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present invention.

Claims (8)

  1. A method for determining key point positions in calibration-block-based robot hand-eye calibration, characterized in that the calibration block is a three-dimensional calibration block, the three-dimensional calibration block has a polyhedral, irregular structure, and the key points are no fewer than three preset points on the three-dimensional calibration block which do not coincide in the height direction; the key point extraction method comprises the following steps:
    step 1: adjusting the placement pose of the three-dimensional calibration block so that the projection onto the XY plane of the line connecting any two of the key points on the block is not parallel to any coordinate axis of the robot base coordinate system;
    step 2: adjusting the pose of the robot so that the three-dimensional vision system at the robot end can acquire a calibration-block point cloud covering the surfaces surrounding the key points on the three-dimensional calibration block;
    step 3: converting the CAD model of the three-dimensional calibration block into a point cloud to obtain the calibration-block model point cloud;
    step 4: registering the calibration-block model point cloud with the acquired calibration-block point cloud;
    step 5: taking the key point positions on the calibration-block model point cloud as a reference, setting a corresponding threshold to extract the point cloud near each key point from the calibration-block point cloud, thereby determining the coordinate values of the key points on the three-dimensional calibration block in the coordinate system of the three-dimensional vision system.
  2. The method for determining key point positions in calibration-block-based robot hand-eye calibration according to claim 1, characterized in that step 3 comprises the following sub-steps:
    step 301: obtaining the CAD model of the three-dimensional calibration block and converting it into a PLY-format file;
    step 302: converting the PLY-format file into the point cloud data format using the data format conversion functions of the PCL library, obtaining the calibration-block model point cloud.
  3. The method for determining key point positions in calibration-block-based robot hand-eye calibration according to claim 1, characterized in that step 4 comprises the following sub-steps:
    step 401: sampling the calibration-block point cloud and the calibration-block model point cloud respectively;
    step 402: computing feature point descriptors for the calibration-block point cloud and the calibration-block model point cloud respectively, obtaining a fast point feature histogram for each;
    step 403: coarsely registering the two point clouds using the sample consensus initial alignment algorithm, based on the fast point feature histograms of the calibration-block point cloud and the calibration-block model point cloud;
    step 404: precisely registering the two point clouds using the iterative closest point algorithm.
  4. The method for determining key point positions in calibration-block-based robot hand-eye calibration according to claim 1, characterized in that, in step 5, a corresponding threshold is set and a nearest-neighbour search is used to find, in the calibration-block point cloud, the point closest to each key point of the calibration-block model point cloud, the coordinate values of that point being determined as the coordinate values of the corresponding key point of the three-dimensional calibration block in the coordinate system of the three-dimensional vision system.
  5. A device for determining key point positions in calibration-block-based robot hand-eye calibration, characterized in that the calibration block is a three-dimensional calibration block, the three-dimensional calibration block has a polyhedral, irregular structure, and the key points are no fewer than three preset points on the three-dimensional calibration block which do not coincide in the height direction; the key point extraction device comprises:
    a three-dimensional calibration block pose adjustment module, configured to adjust the placement pose of the three-dimensional calibration block so that the projection onto the XY plane of the line connecting any two of the key points on the block is not parallel to any coordinate axis of the robot base coordinate system;
    a robot pose adjustment module, configured to adjust the pose of the robot so that the three-dimensional vision system at the robot end can acquire a calibration-block point cloud covering the surfaces surrounding the key points on the three-dimensional calibration block;
    a model point cloud conversion module, configured to convert the CAD model of the three-dimensional calibration block into a point cloud to obtain the calibration-block model point cloud;
    a registration module, configured to register the calibration-block model point cloud with the acquired calibration-block point cloud;
    a key point coordinate determination module, configured to take the key point positions on the calibration-block model point cloud as a reference and set a corresponding threshold to extract the point cloud near each key point from the calibration-block point cloud, thereby determining the coordinate values of the key points on the three-dimensional calibration block in the coordinate system of the three-dimensional vision system.
  6. The device for determining key point positions in calibration-block-based robot hand-eye calibration according to claim 5, characterized in that the model point cloud conversion module comprises:
    a PLY-format file conversion unit, configured to obtain the CAD model of the three-dimensional calibration block and convert it into a PLY-format file;
    a model point cloud acquisition unit, configured to convert the PLY-format file into the point cloud data format using the data format conversion functions of the PCL library, obtaining the calibration-block model point cloud.
  7. The device for determining key point positions in calibration-block-based robot hand-eye calibration according to claim 5, characterized in that the registration module comprises a sampling unit, a fast point feature histogram unit, a coarse registration unit and a precise registration unit, wherein:
    the sampling unit is configured to sample the calibration-block point cloud and the calibration-block model point cloud respectively;
    the fast point feature histogram unit is configured to compute feature point descriptors for the calibration-block point cloud and the calibration-block model point cloud respectively, obtaining a fast point feature histogram for each;
    the coarse registration unit is configured to coarsely register the two point clouds using the sample consensus initial alignment algorithm, based on the fast point feature histograms of the calibration-block point cloud and the calibration-block model point cloud;
    the precise registration unit is configured to precisely register the two point clouds using the iterative closest point algorithm.
  8. The device for determining key point positions in calibration-block-based robot hand-eye calibration according to claim 5, characterized in that the key point coordinate determination module is configured to set a corresponding threshold and use a nearest-neighbour search to find, in the calibration-block point cloud, the point closest to each key point of the calibration-block model point cloud, the coordinate values of that point being determined as the coordinate values of the corresponding key point of the three-dimensional calibration block in the coordinate system of the three-dimensional vision system.
PCT/CN2020/120103 2019-11-26 2020-10-10 Key point position determining method and device in robot hand-eye calibration based on calibration block WO2021103824A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911175295.5 2019-11-26
CN201911175295.5A CN110930442B (en) 2019-11-26 2019-11-26 Method and device for determining positions of key points in robot hand-eye calibration based on calibration block

Publications (1)

Publication Number Publication Date
WO2021103824A1 true WO2021103824A1 (en) 2021-06-03

Family

ID=69851142

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/120103 WO2021103824A1 (en) 2019-11-26 2020-10-10 Key point position determining method and device in robot hand-eye calibration based on calibration block

Country Status (2)

Country Link
CN (1) CN110930442B (en)
WO (1) WO2021103824A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114043087A (en) * 2021-12-03 2022-02-15 厦门大学 Three-dimensional trajectory laser welding seam tracking attitude planning method
CN117140535A (en) * 2023-10-27 2023-12-01 南湖实验室 Robot kinematics parameter calibration method and system based on single measurement

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110930442B (en) * 2019-11-26 2020-07-31 广东技术师范大学 Method and device for determining positions of key points in robot hand-eye calibration based on calibration block
CN111797808B (en) * 2020-07-17 2023-07-21 广东技术师范大学 Reverse method and system based on video feature point tracking
CN112790786A (en) * 2020-12-30 2021-05-14 无锡祥生医疗科技股份有限公司 Point cloud data registration method and device, ultrasonic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680124A (en) * 2016-08-01 2018-02-09 康耐视公司 For improving 3 d pose scoring and eliminating the system and method for miscellaneous point in 3 d image data
WO2018145025A1 (en) * 2017-02-03 2018-08-09 Abb Schweiz Ag Calibration article for a 3d vision robotic system
CN109102547A (en) * 2018-07-20 2018-12-28 上海节卡机器人科技有限公司 Robot based on object identification deep learning model grabs position and orientation estimation method
CN109702738A (en) * 2018-11-06 2019-05-03 深圳大学 A kind of mechanical arm hand and eye calibrating method and device based on Three-dimension object recognition
CN110335296A (en) * 2019-06-21 2019-10-15 华中科技大学 A kind of point cloud registration method based on hand and eye calibrating
CN110930442A (en) * 2019-11-26 2020-03-27 广东技术师范大学 Method and device for determining positions of key points in robot hand-eye calibration based on calibration block

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101908230B (en) * 2010-07-23 2011-11-23 东南大学 Regional depth edge detection and binocular stereo matching-based three-dimensional reconstruction method
CN104142157B (en) * 2013-05-06 2017-08-25 北京四维图新科技股份有限公司 A kind of scaling method, device and equipment
US10076842B2 (en) * 2016-09-28 2018-09-18 Cognex Corporation Simultaneous kinematic and hand-eye calibration
CN108828606B (en) * 2018-03-22 2019-04-30 中国科学院西安光学精密机械研究所 One kind being based on laser radar and binocular Visible Light Camera union measuring method
CN108648272A (en) * 2018-04-28 2018-10-12 上海激点信息科技有限公司 Three-dimensional live acquires modeling method, readable storage medium storing program for executing and device
CN108627178B (en) * 2018-05-10 2020-10-13 广东拓斯达科技股份有限公司 Robot eye calibration method and system
CN108994844B (en) * 2018-09-26 2021-09-03 广东工业大学 Calibration method and device for hand-eye relationship of polishing operation arm
CN110355755B (en) * 2018-12-15 2023-05-16 深圳铭杰医疗科技有限公司 Robot hand-eye system calibration method, device, equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680124A (en) * 2016-08-01 2018-02-09 康耐视公司 For improving 3 d pose scoring and eliminating the system and method for miscellaneous point in 3 d image data
WO2018145025A1 (en) * 2017-02-03 2018-08-09 Abb Schweiz Ag Calibration article for a 3d vision robotic system
CN109102547A (en) * 2018-07-20 2018-12-28 上海节卡机器人科技有限公司 Robot based on object identification deep learning model grabs position and orientation estimation method
CN109702738A (en) * 2018-11-06 2019-05-03 深圳大学 A kind of mechanical arm hand and eye calibrating method and device based on Three-dimension object recognition
CN110335296A (en) * 2019-06-21 2019-10-15 华中科技大学 A kind of point cloud registration method based on hand and eye calibrating
CN110930442A (en) * 2019-11-26 2020-03-27 广东技术师范大学 Method and device for determining positions of key points in robot hand-eye calibration based on calibration block

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114043087A (en) * 2021-12-03 2022-02-15 厦门大学 Three-dimensional trajectory laser welding seam tracking attitude planning method
CN117140535A (en) * 2023-10-27 2023-12-01 南湖实验室 Robot kinematics parameter calibration method and system based on single measurement
CN117140535B (en) * 2023-10-27 2024-02-02 南湖实验室 Robot kinematics parameter calibration method and system based on single measurement

Also Published As

Publication number Publication date
CN110930442A (en) 2020-03-27
CN110930442B (en) 2020-07-31

Similar Documents

Publication Publication Date Title
WO2021103824A1 (en) Key point position determining method and device in robot hand-eye calibration based on calibration block
CN110842901B (en) Robot hand-eye calibration method and device based on novel three-dimensional calibration block
JP6842520B2 (en) Object detection methods, devices, equipment, storage media and vehicles
CN110555889B (en) CALTag and point cloud information-based depth camera hand-eye calibration method
CN109242903B (en) Three-dimensional data generation method, device, equipment and storage medium
US10496762B2 (en) Model generating device, position and orientation calculating device, and handling robot device
CN109658504B (en) Map data annotation method, device, equipment and storage medium
Singh et al. Bigbird: A large-scale 3d database of object instances
US20190340783A1 (en) Autonomous Vehicle Based Position Detection Method and Apparatus, Device and Medium
KR20200111617A (en) Gesture recognition method, device, electronic device, and storage medium
CN113146073B (en) Vision-based laser cutting method and device, electronic equipment and storage medium
EP3460715B1 (en) Template creation apparatus, object recognition processing apparatus, template creation method, and program
CN111028205B (en) Eye pupil positioning method and device based on binocular distance measurement
WO2022121283A1 (en) Vehicle key point information detection and vehicle control
US11875535B2 (en) Method, apparatus, electronic device and computer readable medium for calibrating external parameter of camera
US10748027B2 (en) Construction of an efficient representation for a three-dimensional (3D) compound object from raw video data
US11625842B2 (en) Image processing apparatus and image processing method
TW202016531A (en) Method and system for scanning wafer
CN112258567A (en) Visual positioning method and device for object grabbing point, storage medium and electronic equipment
US20200051278A1 (en) Information processing apparatus, information processing method, robot system, and non-transitory computer-readable storage medium
WO2022247137A1 (en) Robot and charging pile recognition method and apparatus therefor
Wang et al. Phocal: A multi-modal dataset for category-level object pose estimation with photometrically challenging objects
CN112836558A (en) Mechanical arm tail end adjusting method, device, system, equipment and medium
KR102618285B1 (en) Method and system for determining camera pose
WO2021103558A1 (en) Rgb-d data fusion-based robot vision guiding method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20893734

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20893734

Country of ref document: EP

Kind code of ref document: A1