CN115049744A - Robot hand-eye coordinate conversion method and device, computer equipment and storage medium - Google Patents

Robot hand-eye coordinate conversion method and device, computer equipment and storage medium

Info

Publication number
CN115049744A
CN115049744A (application CN202210809386.5A)
Authority
CN
China
Prior art keywords
center
area
point
group
marker
Prior art date
Legal status
Pending
Application number
CN202210809386.5A
Other languages
Chinese (zh)
Inventor
王猛
贺擂
刘健华
Current Assignee
Shenzhen Esun Display Co ltd
Original Assignee
Shenzhen Esun Display Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Esun Display Co ltd filed Critical Shenzhen Esun Display Co ltd
Priority to CN202210809386.5A
Publication of CN115049744A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 - Determining position or orientation of objects or cameras using feature-based methods involving models
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 - Proximity, similarity or dissimilarity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Numerical Control (AREA)
  • Manipulator (AREA)

Abstract

The present application relates to a robot hand-eye coordinate conversion method, apparatus, computer device, storage medium and computer program product. The method includes: capturing images of the robot's end effector, on which a calibration board is mounted, with the robot's scanner, to obtain calibration-board images of the end effector in different poses; detecting each marker point in the calibration-board images of the different poses according to at least two kinds of shapes, to obtain at least two different groups of marker-point regions; determining, based on the distances between the marker-point regions in each group, whether the region centers of the marker-point regions in each group are valid; generating marker-point sequences of the different poses from each valid region center and the center marker point of the valid region centers in each pose; and calibrating the coordinate-system transformation relationship between the robot and the scanner from the marker-point sequences of the different poses. The method improves the accuracy of coordinate conversion between the scanner and the robot.

Description

Robot hand-eye coordinate conversion method, device, computer equipment and storage medium

Technical Field

The present application relates to the field of robotics, and in particular to a robot hand-eye coordinate conversion method, apparatus, computer device, storage medium and computer program product.

Background Art

With the development of artificial intelligence technology, robots have been widely used in many industries. In industrial applications, a robot has a visual perception system; using the three-dimensional information obtained by the visual perception system, the robot can control its end effector to perform operations such as machining and assembly. Simply put, the 3D perception system corresponds to the human eye and the end effector to the human hand, and preset tasks are completed through hand-eye cooperation.

To ensure that the robot moves a spatial object accurately to the target position, the transformation relationship between the vision-system coordinate system and the manipulator coordinate system must be determined. Traditional methods for determining this transformation relationship have poor accuracy, and the results they produce can deviate considerably from the true values.

Summary of the Invention

In view of the above technical problems, it is necessary to provide a robot hand-eye coordinate conversion method, apparatus, computer device, computer-readable storage medium and computer program product that can improve accuracy.

In a first aspect, the present application provides a robot hand-eye coordinate conversion method. The method includes:

capturing images of the robot's end effector, on which a calibration board is mounted, with the robot's scanner, to obtain calibration-board images of the end effector in different poses;

detecting each marker point in the calibration-board images of the different poses according to at least two kinds of shapes, to obtain at least two different groups of marker-point regions;

determining, based on the distances between the marker-point regions in each group, whether the region centers of the marker-point regions in each group are valid;

generating marker-point sequences of the different poses from each valid region center and the center marker point of the valid region centers in each pose; and

calibrating the coordinate-system transformation relationship between the end effector and the scanner from the marker-point sequences of the different poses.

In one embodiment, detecting each marker point in the calibration-board images of the different poses according to at least two kinds of shapes to obtain at least two different groups of marker-point regions includes:

performing edge extraction and detection on each marker point in the calibration-board images according to at least two different shapes, to obtain at least two different groups of image contours;

screening the image contours in each group whose lengths fall within a contour-length interval; and

obtaining the at least two different groups of marker-point regions based on the screened image contours in each group.

In one embodiment, the image contours include circular contours and polygonal contours, and obtaining the at least two different groups of marker-point regions based on the screened image contours in each group includes:

calculating a similarity based on the contour area and contour length of the screened circular contours, to obtain a circular-contour similarity;

selecting, based on the circular-contour similarity, the circular marker-point region corresponding to each marker point from the found circular contours;

fitting the found polygonal contours in each group, to obtain a polygon-fitted image; and

taking the quadrilateral contours in the polygon-fitted image as the quadrilateral marker-point regions corresponding to the respective marker points;

and determining whether the region centers of the marker-point regions in each group are valid includes:

determining, based on the distance between the circular marker-point region and the quadrilateral marker-point region of each marker point, whether the region center of the circular marker-point region is valid.

In one embodiment, determining, based on the distances between the marker-point regions in each group, whether the region centers of the marker-point regions in each group are valid includes:

performing overlap detection on the marker-point regions in each group and removing the overlapping marker-point regions in each group, to obtain the remaining marker-point regions of each group;

comparing the distances between the remaining marker-point regions of each group with a marker-point proximity threshold distance, to obtain a plurality of proximity comparison results; and

determining, based on each of the proximity comparison results, whether the region centers in the remaining marker-point regions of each group are valid.

In one embodiment, performing overlap detection on the marker-point regions in each group and removing the overlapping marker-point regions in each group to obtain the remaining marker-point regions of each group includes:

pairing and comparing the marker-point regions within each group;

calculating the overlap-detection distance between the paired marker-point regions;

when the overlap-detection distance satisfies a contour-detection threshold, calculating the region side length and region area of each paired marker-point region; and

removing the overlapping marker-point regions in each group based on the region side lengths and region areas of the paired marker-point regions, to obtain the remaining marker-point regions of each group.

In one embodiment, generating marker-point sequences of the different poses from each valid region center and the center marker point of the valid region centers in each pose includes:

averaging the valid region centers of the same pose, to obtain the centroid position of the region centers of each pose;

finding the center marker point of each pose from the region centers, based on the distance between the centroid position and the valid region centers of the same pose;

taking the center marker point of the region centers in each pose as the polar-coordinate origin of that pose;

obtaining the position of each region center in the polar coordinate system of each pose, based on the region centers and the polar-coordinate origin of that pose; and

sorting the positions of the region centers in the polar coordinate system of each pose by the angles of the region centers in the polar coordinate system, to obtain the marker-point sequences of the different poses.

In one embodiment, the coordinate-system transformation relationship includes a rotation transformation relationship and a translation transformation relationship, and calibrating the coordinate-system transformation relationship between the robot and the scanner from the marker-point sequences of the different poses includes:

calibrating the rotation transformation relationship between the robot and the scanner from the rotation vectors of the marker-point sequences corresponding to the different pose transitions;

obtaining a first translation vector from the origin translation information of the marker-point sequences during the different pose transitions;

computing a second translation vector from the marker-point sphere center obtained by fitting the marker-point sequences of the different poses, and the marker-point sequence of a preset pose; and

combining the first translation vector and the second translation vector, to obtain the translation transformation relationship.

In a second aspect, the present application further provides a robot hand-eye coordinate conversion apparatus. The apparatus includes:

an image acquisition module, configured to capture images of the robot's end effector, on which a calibration board is mounted, with the robot's scanner, to obtain calibration-board images of the end effector in different poses;

an edge detection module, configured to detect each marker point in the calibration-board images of the different poses according to at least two kinds of shapes, to obtain at least two different groups of marker-point regions;

a valid marker-point determination module, configured to determine, based on the distances between the marker-point regions in each group, whether the region centers of the marker-point regions in each group are valid;

a marker-point sequence generation module, configured to generate marker-point sequences of the different poses from each valid region center and the center marker point of the valid region centers in each pose; and

a hand-eye calibration module, configured to calibrate the coordinate-system transformation relationship between the robot and the scanner from the marker-point sequences of the different poses.

In a third aspect, the present application further provides a computer device. The computer device includes a memory and a processor; the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the robot hand-eye coordinate conversion in any of the above embodiments.

In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the robot hand-eye coordinate conversion in any of the above embodiments.

In a fifth aspect, the present application further provides a computer program product. The computer program product includes a computer program which, when executed by a processor, implements the steps of the robot hand-eye coordinate conversion in any of the above embodiments.

In the above robot hand-eye coordinate conversion method, apparatus, computer device, storage medium and computer program product, each marker point in the calibration-board images of the different poses is detected according to at least two kinds of shapes, and the marker-point regions are screened out through this detection of geometric image features. Based on the distances between the marker-point regions in each group, it is determined whether the marker points in the calibration-board image are valid, and marker-point sequences of the different poses are then generated from each valid region center and the center marker point of the valid region centers in each pose, so that the position of each marker point in the calibration-board image is determined from the marker-point sequence. The robot hand-eye calibration computation is thereby converted into computing the translation and rotation between point pairs in three-dimensional space, which can be carried out with a PnP algorithm without computing from the change relationships between poses; the computation error is easy to control, which makes debugging convenient for application developers.

Brief Description of the Drawings

FIG. 1 is a diagram of an application environment of a robot hand-eye coordinate conversion method in one embodiment;

FIG. 2 is a schematic flowchart of a robot hand-eye coordinate conversion method in one embodiment;

FIG. 3 is a schematic structural diagram of a marker board in one embodiment;

FIG. 4 is a schematic structural diagram of a marker point in another embodiment;

FIG. 5 is a structural block diagram of a robot hand-eye coordinate conversion apparatus in one embodiment;

FIG. 6 is a diagram of the internal structure of a computer device in one embodiment.

Detailed Description of the Embodiments

To make the objectives, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present application, not to limit it.

The robot hand-eye coordinate conversion method provided by the embodiments of the present application can be applied in the application environment shown in FIG. 1, in which a terminal 102 communicates with a server 104 through a network. A data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104, or placed on a cloud or another network server. The terminal 102 captures images of the robot's end effector, on which a calibration board is mounted, with the robot's scanner, to obtain calibration-board images of the end effector in different poses; detects each marker point in the calibration-board images of the different poses according to at least two kinds of shapes, to obtain at least two different groups of marker-point regions; determines, based on the distances between the marker-point regions in each group, whether the region centers of the marker-point regions in each group are valid; generates marker-point sequences of the different poses from the region centers and the center marker point of the region centers in each pose; and calibrates the coordinate-system transformation relationship between the end effector and the scanner from the marker-point sequences of the different poses.

The terminal 102 may be, but is not limited to, a robot, a personal computer, a notebook computer, a smartphone, a tablet computer, an Internet-of-Things device or a portable wearable device; the Internet-of-Things device may be a smart speaker, a smart TV, a smart air conditioner, a smart in-vehicle device, or the like; the portable wearable device may be a smart watch, a smart bracelet, a head-mounted device, or the like. The server 104 may be implemented as an independent server or as a server cluster composed of multiple servers.

In one embodiment, as shown in FIG. 2, a robot hand-eye coordinate conversion method is provided. Taking the method applied to the terminal 102 in FIG. 1 as an example, it includes the following steps.

Step 202: capture images of the robot's end effector, on which the calibration board is mounted, with the robot's scanner, to obtain calibration-board images of the end effector in different poses.

The robot includes the robot body, the scanner and the end effector of the robot. The displacement relationship between the coordinate systems of the robot body and the scanner is fixed; the scanner includes a depth map and a grayscale camera and is used to determine the marker points and their pixel coordinates. The end effector is mounted on at least one axis of the robot body, a marker board for hand-eye calibration is mounted on the manipulator controlled by the end effector, and marker points are affixed to the marker board. The marker board is a target made of lightweight aluminum alloy to which coded marker points are affixed; white paper should first be affixed to the target and the marker points then affixed on top of it, so as to improve the recognition accuracy of the marker points at long distances.

In one embodiment, the calibration board is as shown in FIG. 3: ring-band position marker points are arranged annularly around the edge of the marker board, a corresponding nearby marker point is placed near one of the ring-band position marker points as a tag position, and a center-position marker point is placed at the center of the marker board. In another embodiment, there are at least 9 ring-band position marker points, and a corresponding nearby marker point is placed near one of them as a tag position. The angular interval between the lines connecting adjacent ring-band position marker points to the center-position marker point is at least 15 degrees; these points serve as candidate marker points in the calibration-board image and are the marker points used for image detection. The angular interval between the lines connecting a ring-band position marker point and its corresponding tag position to the center-position marker point is less than 5 degrees, and the tag position is not a marker point used for image detection. A marker point is a figure composed of multiple features, as shown in FIG. 4.

In one embodiment, the working range of the robot arm corresponding to the robot's end effector is determined. Within this working range, the end effector carrying the marker board is controlled to move multiple times while covering as much of the working range as possible, and the scanner captures images of the multiple movements of the end effector so as to reduce the error of the subsequent pose estimation. When the rotational coordinate transformation relationship is computed, the angle between the tool coordinate system and the robot body coordinate system is kept unchanged during each movement of the end effector.

In one embodiment, capturing images of the multiple movements of the end effector with the scanner includes: in the process of computing the translational coordinate transformation relationship, keeping the origin position of the tool coordinate system different for each movement while rotating the robot tool coordinate system; after each movement, acquiring a left group and a right group of images from the left and right views of the scanner's binocular camera, thereby obtaining left and right calibration-board image sequences of the end effector in different poses; and, when the left and right calibration-board image sequences and the intrinsic and extrinsic parameters of the binocular camera are processed by triangulation, obtaining the coordinate positions of the marker points in the different poses.
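
As an illustrative sketch of the triangulation step (not taken from the patent text), the following Python/OpenCV snippet reconstructs marker-point coordinates from matched pixel coordinates in the left and right views; the projection matrices and the matched pixel arrays are assumed inputs.

```python
import cv2
import numpy as np

def triangulate_markers(P_left, P_right, pts_left, pts_right):
    """Triangulate matched 2D marker centers (N x 2 arrays) from the left and
    right views into 3D points, given the 3x4 projection matrices of the
    binocular cameras (intrinsics and extrinsics already combined)."""
    pts_l = np.asarray(pts_left, dtype=np.float64).T   # OpenCV expects 2 x N
    pts_r = np.asarray(pts_right, dtype=np.float64).T
    pts_4d = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # 4 x N homogeneous
    return (pts_4d[:3] / pts_4d[3]).T                  # N x 3 Euclidean coordinates
```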

It should be understood that capturing images of the multiple movements of the end effector with the scanner involves two processes: one computes the rotational coordinate transformation relationship and the other computes the translational coordinate transformation relationship. The two processes complement each other and are not limited to the above.

After the scanner performs image acquisition, calibration-board images of different poses are obtained. In the calibration-board images of the different poses, marker-point features such as the shape and coordinate position of the same marker point differ; after detection according to a given shape is performed on the basis of these differences, the image detection results also differ across poses.

Step 204: detect each marker point in the calibration-board images of the different poses according to at least two kinds of shapes, to obtain at least two different groups of marker-point regions.

Each marker point in the calibration-board image has its own pixel-coordinate position region, and the corresponding marker-point features are present in that region. After these marker-point features are detected according to the corresponding shapes, the multiple marker-point regions corresponding to each marker point in the calibration-board image are determined from the detection results. The multiple marker-point regions corresponding to each marker point are divided into different groups according to the shapes used for detection; the marker-point regions in the same group are all contours of the same shape, but image attributes such as the position, side length and area of the regions in the same group differ, and at least one of these image attributes can be used to judge whether a marker-point region in the group should be removed.

After some of the marker-point regions in a group are removed, the correspondence between the marker-point regions of different groups is determined based on the positions of the regions, and the marker-point regions of the different groups that correspond to one another are determined from this correspondence, in order to judge whether the region centers of the marker-point regions in each group are valid.

Step 206: determine, based on the distances between the marker-point regions in each group, whether the region centers of the marker-point regions in each group are valid.

The distances between the marker-point regions in each group can be distances between marker-point regions within the same group and distances between marker-point regions of different groups. When the distance between marker-point regions within the same group is smaller than the non-maximum suppression threshold, the terminal judges that these marker-point regions of the group are non-maximum computation regions, that is, neighboring-point regions or duplicate-point regions, and performs non-maximum suppression on them to remove points that are too close to one another or duplicated.
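
A minimal sketch of this distance-based suppression, assuming the region centers of one group are given as an N x 2 array and that nms_threshold stands for the non-maximum suppression distance (both names are illustrative, not from the patent):

```python
import numpy as np

def suppress_close_centers(centers, nms_threshold):
    """Drop region centers that lie closer than nms_threshold to a center
    already kept, removing near-duplicate or overly close detections."""
    kept = []
    for c in np.asarray(centers, dtype=np.float64):
        if all(np.linalg.norm(c - k) >= nms_threshold for k in kept):
            kept.append(c)
    return np.array(kept)
```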

For the distances between marker-point regions in different groups, the terminal calculates the distances between corresponding marker-point regions of the different groups and compares them with the corresponding threshold, to determine whether the region centers of the group corresponding to a given shape are valid. For example, the marker-point regions in the ellipse group have elliptical contours and the marker-point regions in the quadrilateral group have quadrilateral contours; when the distance between corresponding marker-point regions in the ellipse group and the quadrilateral group is smaller than the corresponding threshold, the region centers of the marker-point regions in the ellipse group are taken as valid, while the region centers of the corresponding marker-point regions in the quadrilateral group are all invalid.

After the region centers are obtained, it is judged whether the number of region centers in the marker-board image of each pose matches the number of marker points affixed to the calibration board. If they match, the center marker point of the corresponding pose is determined from the region centers of that pose; if they do not match, exception handling for the number of points is performed until the number of region centers in the marker-board image of each pose matches the number of marker points affixed to the calibration board.

The exception handling for the number of region centers includes: if the number of region centers of a pose is smaller than the number of marker points affixed to the calibration board, the pose is adjusted and rescanned until the anomaly disappears; if the number of region centers is larger than the number of marker points affixed to the calibration board, the region center farthest from the average pixel coordinate is removed, one at a time, until the anomaly disappears.
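
One way to realize the "remove the region center farthest from the average pixel coordinate" rule is sketched below; the expected count and the center array are assumed inputs.

```python
import numpy as np

def trim_excess_centers(centers, expected_count):
    """While there are more region centers than marker points on the board,
    drop the center farthest from the mean pixel coordinate."""
    centers = list(np.asarray(centers, dtype=np.float64))
    while len(centers) > expected_count:
        mean = np.mean(centers, axis=0)
        dists = [np.linalg.norm(c - mean) for c in centers]
        centers.pop(int(np.argmax(dists)))
    return np.array(centers)
```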

Step 208: generate marker-point sequences of the different poses from each valid region center and the center marker point of the valid region centers in each pose.

The center marker point is one of the region centers of a given pose and represents the center-position marker point placed on the marker board. Since the poses of the calibration-board images differ, the centroid of the region centers in a given calibration-board image is not itself the center marker point; rather, the region center closest to the centroid of the region centers is the center marker point.

The computation of the center marker point for the different poses includes: the terminal averages the position data of the region centers of a given pose to obtain the mean region center of that pose, which is the centroid position of the marker points of that pose; the distances between the centroid position of that pose and the region centers are computed, and one of the region centers of that pose is selected as the center marker point based on the computed results, where the results indicate that the distance between the centroid position and the center marker point of that pose is smaller than the distance between the centroid position and any other region center of that pose.
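
A sketch of this centroid-based selection for one pose; centers is assumed to be the array of valid region centers of that pose.

```python
import numpy as np

def find_center_marker(centers):
    """Return the index and coordinates of the region center closest to the
    centroid of all region centers; this is taken as the center marker point."""
    centers = np.asarray(centers, dtype=np.float64)
    centroid = centers.mean(axis=0)
    idx = int(np.argmin(np.linalg.norm(centers - centroid, axis=1)))
    return idx, centers[idx]
```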

After the center marker point is obtained, the distance of each valid region center from the center marker point is determined, so that a correspondence between each region center and its marker point is established. This correspondence determines the ordering of the marker points in the calibration-board image and constitutes the marker-point sequence. The marker points in the marker-point sequence can also be encoded, to facilitate better identification and retrieval of the marker points.

In one embodiment, generating marker-point sequences of the different poses from each valid region center and the center marker point of the valid region centers in each pose includes: averaging the valid region centers of the same pose, to obtain the centroid position of the region centers of each pose; finding the center marker point of each pose from the region centers, based on the distance between the centroid position and the valid region centers of the same pose; taking the center marker point of the region centers in each pose as the polar-coordinate origin of that pose; obtaining the position of each region center in the polar coordinate system of each pose, based on the region centers and the polar-coordinate origin of that pose; and sorting the positions of the region centers in the polar coordinate system of each pose by their angles in the polar coordinate system, to obtain the marker-point sequences of the different poses.

After the polar-coordinate origin is determined, the region center with the smallest adjacent angle is computed as the location of the starting marker point, the angle corresponding to the starting marker point is set as the starting angle, and a polar coordinate system is constructed from the polar-coordinate origin and the starting marker point. Although the angles of the region centers in the polar coordinate system differ, after rotation and tilting the number of marker points and their order in polar coordinates remain determined; from the starting marker point and the polar-coordinate origin, following the angles of the region centers in the polar coordinate system, the relative position of each marker point in the marker-point sequence can still be recovered. In other words, the marker-point sequence in the polar coordinate system is robust to rotation and tilt.

In one embodiment, taking the center marker point of the region centers in each pose as the polar-coordinate origin of that pose includes: averaging the region centers of the same pose, to obtain the centroid position of the region centers of each pose; screening the region centers based on the distance interval between the centroid position and the region centers of the same pose, to obtain the center marker point of each pose; and taking the center marker point of each pose as the polar-coordinate origin of that pose.

In one embodiment, sorting the positions of the region centers in the polar coordinate system of each pose by their angles in the polar coordinate system to obtain the marker-point sequences of the different poses includes: computing the initial polar angles between the center marker point and the region centers, and sorting by the initial polar angles to obtain an initial marker-point sequence; computing the angle differences between adjacent points in the initial marker-point sequence, and taking the position where the modulus of the angle difference is smallest as the target starting point; and determining, from the position of the target starting point in the initial marker-point sequence, the target angles of the target starting point and the region centers in the polar coordinate system, and rearranging the region centers according to the target angles, to obtain the target polar-coordinate sequence.
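
A sketch of this polar ordering: angles are measured about the center marker point, the centers are sorted by angle, and the sequence is rotated so that it starts at the point with the smallest angular gap to its neighbour. The helper names are illustrative assumptions.

```python
import numpy as np

def order_by_polar_angle(centers, origin):
    """Sort region centers by polar angle about the center marker point and
    rotate the sequence to start at the smallest angular gap, giving a
    rotation/tilt-tolerant marker-point sequence."""
    centers = np.asarray(centers, dtype=np.float64)
    rel = centers - np.asarray(origin, dtype=np.float64)
    angles = np.arctan2(rel[:, 1], rel[:, 0])        # initial polar angles
    order = np.argsort(angles)                       # initial marker-point sequence
    sorted_angles = angles[order]
    gaps = np.diff(np.concatenate([sorted_angles, sorted_angles[:1] + 2 * np.pi]))
    start = int(np.argmin(np.abs(gaps)))             # smallest adjacent angle difference
    return centers[np.roll(order, -start)]           # rearranged target sequence
```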

Step 210: calibrate the coordinate-system transformation relationship between the robot and the scanner from the marker-point sequences of the different poses.

In one embodiment, calibrating the coordinate-system transformation relationship between the robot and the scanner from the marker-point sequences of the different poses includes:

averaging the position coordinates of the marker points in the marker-point sequences of the same pose, to obtain the mean 3D-reconstructed coordinate of each pose, and computing, from the mean 3D-reconstructed coordinates of the poses according to the PnP algorithm, a rotation value and a first translation value;

fitting a sphere to the mean 3D-reconstructed coordinates of the poses, to obtain a marker-point fitted sphere, and computing a second translation value from the center of the marker-point fitted sphere and the mean position corresponding to the marker-point sequence of a preset pose; and

calibrating the rotational coordinate-system transformation relationship between the robot and the scanner with the rotation value, and calibrating the translational coordinate-system transformation relationship between the robot and the scanner with the first translation value and the second translation value.

Thus, once the marker-point sequences of the different poses are obtained, the coordinate-system transformation relationship between the robot and the scanner can be accurately computed with a PnP-related algorithm.

In one embodiment, the coordinate-system transformation relationship includes a rotation transformation relationship and a translation transformation relationship, and calibrating the coordinate-system transformation relationship between the robot and the scanner from the marker-point sequences of the different poses includes: calibrating the rotation transformation relationship between the robot and the scanner from the position information of the marker-point sequences as they move during the different pose transitions, and computing a first translation vector; computing a second translation vector from the sphere center fitted to the marker-point sequences of the different poses and the marker-point sequence of a preset pose; and combining the first translation vector and the second translation vector to calibrate the translation transformation relationship.

In one embodiment, the process of computing the first translation vector specifically includes: keeping the rotation of the tool coordinate system relative to the robot coordinate system unchanged, the position of the end effector is changed multiple times; at each changed position, the coordinate of the tool-coordinate-system origin and the mean 3D-reconstructed coordinate of the target marker points obtained by the scanner are recorded; the PnP algorithm is applied to these two sets of coordinates, to obtain the rotation value and the first translation vector between the robot coordinate system and the scanner coordinate system.
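
The patent describes this as a PnP computation on two sets of recorded coordinates. Since both sets here are three-dimensional, the sketch below shows the closely related rigid 3D-3D alignment (Kabsch/SVD) that recovers a rotation and translation between corresponding point sets; it is an illustrative assumption about the computation, not the patent's exact implementation.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Estimate R, t such that dst ~ R @ src + t for corresponding N x 3 point
    sets, using the SVD-based Kabsch method."""
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # correct an improper rotation (reflection)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t                       # rotation value and first translation vector
```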

The step of computing the second translation vector includes: during the computation of the first translation vector, the mean 3D-reconstructed coordinate of the target marker points in the final state is recorded; then, while the origin position of the tool coordinate system is kept unchanged, the tool coordinate system is rotated multiple times to change its orientation, and in each orientation the scanner obtains the mean 3D-reconstructed coordinate of the marker points; the means obtained under these multiple orientations are distributed on a sphere, the sphere-center coordinate is fitted, and the second translation vector is obtained by combining it with the recorded mean 3D-reconstructed coordinate of the marker points.
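
A sketch of an algebraic least-squares sphere fit for the per-orientation mean coordinates, using the linearized model x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d; the function name is an illustrative assumption.

```python
import numpy as np

def fit_sphere_center(points):
    """Least-squares sphere fit: returns the center of the sphere on which the
    per-orientation mean marker coordinates lie."""
    P = np.asarray(points, dtype=np.float64)          # N x 3 mean coordinates
    A = np.hstack([2.0 * P, np.ones((P.shape[0], 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)       # [cx, cy, cz, d]
    return sol[:3]                                    # fitted sphere center
```

The second translation vector is then formed by combining this fitted center with the recorded mean 3D-reconstructed coordinate, as described above.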

Finally, the computed first translation vector and second translation vector are combined, to obtain the translation transformation relationship. For example: first, after 8 groups of marker-point sequences in different poses are collected, the rotation transformation relationship and the first translation vector are obtained by a PnP computation. Next, at least 7 groups of marker-point sequences are fitted to obtain the fitted sphere center, and the second translation vector is obtained from the coordinates of the sphere center and the mean corresponding to the first group of marker-point sequences, for which the number of rotations is 0. Finally, the negated second translation vector is added to the first translation vector computed by PnP, to obtain the translation transformation relationship between the robot and the 3D scanner.

In the above robot hand-eye coordinate conversion method, each marker point in the calibration-board images of the different poses is detected according to at least two kinds of shapes, and the marker-point regions are screened out through this detection of geometric image features. Based on the distances between the marker-point regions in each group, it is determined whether the marker points in the calibration-board image are valid, and marker-point sequences of the different poses are generated from each valid region center and the center marker point of the valid region centers in each pose, so that the position of each marker point in the calibration-board image is determined from the marker-point sequence. The robot hand-eye calibration computation is thereby converted into computing the translation and rotation between point pairs in three-dimensional space, which can be carried out with the PnP algorithm without computing from the change relationships between poses; the computation error is easy to control, which makes debugging convenient for application developers.

In one embodiment, step 204, detecting each marker point in the calibration-board images of the different poses according to at least two kinds of shapes to obtain at least two different groups of marker-point regions, includes: performing edge extraction and detection on each marker point in the calibration-board image according to at least two different shapes, to obtain at least two different groups of image contours; screening the image contours in each group whose lengths fall within a contour-length interval; and obtaining the at least two different groups of marker-point regions based on the screened image contours in each group.

Specifically, edge extraction and detection with the Canny operator is performed on each marker point in the calibration-board image according to at least two different shapes, to obtain at least two different groups of image contours; useless contours that are too short or too long are removed from the at least two groups of image contours, to obtain the screened image contours of each group; and the screened image contours of each group are processed by the corresponding computations, to obtain the at least two different groups of marker-point regions.
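
The sketch below illustrates the Canny-based edge extraction and contour-length screening with OpenCV; the Canny thresholds and the length bounds are assumed parameters, not values given in the patent.

```python
import cv2

def extract_candidate_contours(gray, min_len=20.0, max_len=600.0):
    """Run Canny edge detection on a grayscale calibration-board image, then
    keep only contours whose perimeter falls inside the length interval."""
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    return [c for c in contours if min_len <= cv2.arcLength(c, True) <= max_len]
```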

In one embodiment, the image contours include circular contours and polygonal contours, and obtaining the at least two different groups of marker-point regions based on the screened image contours in each group includes: calculating a similarity based on the contour area and contour length of the screened circular contours, to obtain a circular-contour similarity; selecting, based on the circular-contour similarity, the circular marker-point region corresponding to each marker point from the found circular contours; fitting the found polygonal contours in each group, to obtain a polygon-fitted image; and taking the quadrilateral contours in the polygon-fitted image as the quadrilateral marker-point regions corresponding to the respective marker points.

After the circular-contour similarity is calculated, the circular marker-point regions whose similarity is less than a similarity threshold are selected, yielding the circular marker-point region corresponding to each marker point; here, circular similarity = (4.0 * PI * contour area) / (contour perimeter * contour perimeter + 1e-7), and the similarity threshold may be 0.8.
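
A sketch of the circular-similarity test using the formula quoted above, together with a polygon approximation that can be used to pick out quadrilateral contours; the epsilon factor passed to cv2.approxPolyDP is an assumed value.

```python
import cv2
import math

def circular_similarity(contour):
    """Circular similarity = 4 * pi * area / (perimeter^2 + 1e-7)."""
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    return 4.0 * math.pi * area / (perimeter * perimeter + 1e-7)

def is_quadrilateral(contour, eps_factor=0.02):
    """Approximate the contour with a polygon and report whether it has 4 vertices."""
    approx = cv2.approxPolyDP(contour, eps_factor * cv2.arcLength(contour, True), True)
    return len(approx) == 4
```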

Correspondingly, step 206, determining whether the region centers of the marker-point regions in each group are valid, includes: determining, based on the distance between the circular marker-point region and the quadrilateral marker-point region of each marker point, whether the region center of the circular marker-point region is valid. Since the marker points are circular, the region centers of the circular marker-point regions may be valid while the region centers of the quadrilateral marker-point regions are all invalid, which improves the corresponding accuracy.

In one embodiment, step 206, determining, based on the distances between the marker-point regions in each group, whether the region centers of the marker-point regions in each group are valid, includes: performing overlap detection on the marker-point regions in each group and removing the overlapping marker-point regions in each group, to obtain the remaining marker-point regions of each group; comparing the distances between the remaining marker-point regions of each group with a marker-point proximity threshold distance, to obtain a plurality of proximity comparison results; and determining, based on each proximity comparison result, whether the region centers in the remaining marker-point regions of each group are valid.

In the process of performing overlap detection on the marker-point regions of each group, the terminal selects marker-point regions of the same group based on the distances between the marker-point regions in the group, computes the overlap-detection attributes of the selected regions of the same group, detects each marker-point region of the group based on the corresponding overlap-detection attributes, and removes the overlapping marker-point regions in each group based on the detection results.

In one embodiment, performing overlap detection on the marker-point regions in each group and removing the overlapping marker-point regions in each group to obtain the remaining marker-point regions of each group includes: pairing and comparing the marker-point regions within each group; calculating the overlap-detection distance between the paired marker-point regions; when the overlap-detection distance satisfies the contour-detection threshold, calculating the region side length and region area of each paired marker-point region; and removing the overlapping marker-point regions in each group based on the region side lengths and region areas of the paired marker-point regions, to obtain the remaining marker-point regions of each group.

The overlap-detection distance is the distance between any two marker-point regions in the same group, that is, the center distance between marker-point regions obtained by pairing the marker-point regions of the same group in arbitrary combinations.

For example: after detection according to quadrilaterals, a plurality of quadrilateral marker-point regions is obtained in the quadrilateral group. Any two of the quadrilateral marker-point regions are paired and compared, and the overlap-detection distance between the two marker-point regions of a pair is calculated; when this overlap-detection distance is smaller than the corresponding contour-detection threshold, contour detection is performed. In the contour detection, the region side length and region area of each of the two paired marker-point regions are calculated, and the ratio between the product of the region side lengths and the region area is computed; when this ratio falls within the valid interval, the areas of the two quadrilaterals in the pair are compared, the one with the larger area is removed and the smaller one is kept, yielding the remaining marker-point regions of the quadrilateral group.
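
A sketch of this pairwise overlap elimination, assuming each region is represented by its center, its side lengths and its area, and that the distance threshold and the valid ratio interval are supplied by the caller (the interval bounds shown are placeholders).

```python
import numpy as np

def side_area_ratio(region):
    """Ratio of the product of the region's side lengths to its area."""
    return float(np.prod(region['sides'])) / (region['area'] + 1e-7)

def remove_overlapping_quads(regions, dist_threshold, ratio_interval=(0.8, 1.25)):
    """For each pair of regions whose center distance is below dist_threshold
    and whose side-product/area ratios fall in the valid interval, drop the
    region with the larger area and keep the smaller one."""
    removed = set()
    for i in range(len(regions)):
        for j in range(i + 1, len(regions)):
            if i in removed or j in removed:
                continue
            d = np.linalg.norm(np.asarray(regions[i]['center'], float) -
                               np.asarray(regions[j]['center'], float))
            if d >= dist_threshold:
                continue
            if all(ratio_interval[0] <= side_area_ratio(regions[k]) <= ratio_interval[1]
                   for k in (i, j)):
                removed.add(i if regions[i]['area'] > regions[j]['area'] else j)
    return [r for k, r in enumerate(regions) if k not in removed]
```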

In this way, the present invention filters marker points through the geometric feature relationships of a two-dimensional image, so the marker points can be identified accurately and encoded as a marker point sequence even under rotation and translation about the X, Y and Z axes, while sub-pixel processing improves the reconstruction accuracy of the binocular three-dimensional scanner. The robot hand-eye calibration uses the PnP algorithm on point pairs; the computation is simple, its errors are easy to control and optimize, and it is convenient for application developers to debug. A traditional hand-eye calibration method, by contrast, solves the equation AX = XB from the change between poses; because the solution uses the difference between two successive poses, its correctness depends heavily on the accuracy of the measurement data, and even a small measurement problem makes the result deviate noticeably from the true value. The calculation principle of the present invention is more intuitive, so problems arising in practice are easier to spot, and high accuracy can be maintained even in the presence of robot positioning errors, three-dimensional scanner accuracy errors, sphere-center fitting errors, and errors in computing rotation and translation.
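As a hedged sketch of how such point pairs could be fed to a PnP solver (OpenCV is used here only as an example; the camera intrinsics `K` and the distortion vector are placeholders supplied by the caller), one pose could be recovered as follows.

```python
import cv2
import numpy as np

def pnp_pose(object_points_3d, image_points_2d, K, dist=None):
    """Recover the rotation matrix and translation vector mapping 3D marker points
    (e.g. expressed in the calibration-plate frame) onto their 2D image observations."""
    obj = np.asarray(object_points_3d, dtype=np.float64).reshape(-1, 3)
    img = np.asarray(image_points_2d, dtype=np.float64).reshape(-1, 2)
    dist = np.zeros(5) if dist is None else np.asarray(dist, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP did not converge for this pose")
    R, _ = cv2.Rodrigues(rvec)   # convert the rotation vector to a 3x3 matrix
    return R, tvec.reshape(3)
```

Because the pose is estimated directly from point correspondences rather than from the difference of two robot poses, a bad frame shows up as an obvious outlier instead of silently corrupting an AX = XB solution.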

It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on these steps, and they may be executed in other orders. Moreover, at least some of the steps in those flowcharts may comprise multiple sub-steps or stages, which need not be completed at the same moment but may be executed at different times, and whose execution order need not be sequential; they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.

Based on the same inventive concept, an embodiment of the present application further provides a robot hand-eye coordinate conversion device for implementing the robot hand-eye coordinate conversion method described above. The solution provided by the device is similar to that described for the method, so for the specific limitations in the one or more device embodiments below, reference may be made to the limitations of the method above, which are not repeated here.

In one embodiment, as shown in FIG. 5, a robot hand-eye coordinate conversion device is provided, comprising an image acquisition module 502, an edge detection module 504, a valid marker point determination module 506, a marker point sequence generation module 508, and a hand-eye calibration module 510, wherein:

The image acquisition module 502 is configured to acquire, through the scanner of the robot, images of the robot's end effector on which the calibration plate is mounted, obtaining calibration plate images of the end effector in different poses;

The edge detection module 504 is configured to detect each marker point in the calibration plate images of the different poses according to at least two kinds of shapes, obtaining at least two different groups of marker point regions;

The valid marker point determination module 506 is configured to determine, based on the distances between the marker point regions in each group, whether the region centers of the marker point regions in each group are valid;

The marker point sequence generation module 508 is configured to generate marker point sequences for the different poses according to each valid region center and the center marker point of the valid region centers in each of the different poses;

The hand-eye calibration module 510 is configured to calibrate the coordinate system conversion relationship between the robot and the scanner through the marker point sequences of the different poses.
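Purely to illustrate how the five modules cooperate (this is not part of the claimed device, and all names below are invented for the example), a minimal Python skeleton might chain them as a single pipeline:

```python
class HandEyeCoordinateConverter:
    """Illustrative skeleton: modules 502-510 chained into one pipeline."""

    def __init__(self, image_acquisition, edge_detection, validity_check,
                 sequence_generation, hand_eye_calibration):
        self.acquire = image_acquisition            # module 502
        self.detect_regions = edge_detection        # module 504
        self.validate_centers = validity_check      # module 506
        self.build_sequences = sequence_generation  # module 508
        self.calibrate = hand_eye_calibration       # module 510

    def run(self, poses):
        images = [self.acquire(pose) for pose in poses]
        region_groups = [self.detect_regions(img) for img in images]
        valid_centers = [self.validate_centers(groups) for groups in region_groups]
        sequences = [self.build_sequences(centers) for centers in valid_centers]
        return self.calibrate(sequences)   # coordinate system conversion relationship
```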

In one of the embodiments, the edge detection module 504 comprises:

a contour detection unit configured to perform edge extraction and detection on each marker point in the calibration plate image according to at least two different shapes, obtaining at least two different groups of image contours;

a length screening unit configured to screen, in each group, the image contours whose lengths fall within the contour length interval;

and a marker point region generation unit configured to obtain the at least two different groups of marker point regions based on the screened image contours of each group.
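As a rough sketch of what the contour detection and length screening units might do (OpenCV 4.x is assumed, and the Otsu binarization and default length interval are illustrative choices, not values taken from this embodiment):

```python
import cv2

def extract_candidate_contours(gray, length_interval=(20.0, 400.0)):
    """Binarize a calibration-plate image, extract contours, and keep only those
    whose perimeter falls inside the expected contour length interval."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    lo, hi = length_interval
    return [c for c in contours if lo <= cv2.arcLength(c, True) <= hi]
```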

In one of the embodiments, the marker point region generation unit comprises:

a similarity calculation unit configured to perform a similarity calculation based on the contour area and contour length of the screened circular contours, obtaining a circular contour similarity;

a circular region screening unit configured to select, based on the circular contour similarity, the circular marker point region corresponding to each marker point from the found circular contours;

a polygon fitting unit configured to fit the polygonal contours found in each group, obtaining a polygon-fitted image;

and a quadrilateral region screening unit configured to take the quadrilateral contours in the polygon-fitted image as the quadrilateral marker point regions corresponding to the respective marker points.

The valid marker point determination module 506 comprises:

a center validity determination unit configured to determine, based on the distance between the circular marker point region and the quadrilateral marker point region of each marker point, whether the region center of the circular marker point region is valid.
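A possible reading of these units, sketched below with standard OpenCV primitives: circularity is measured with the common 4πA/P² ratio (which may differ from the exact similarity formula of this embodiment), quadrilaterals are obtained with polygon approximation, and a circle center is accepted only if a quadrilateral region lies nearby; `circularity_min` and `pair_dist` are illustrative thresholds.

```python
import cv2
import numpy as np

def classify_marker_regions(contours, circularity_min=0.8, pair_dist=15.0):
    """Split length-filtered contours into circular and quadrilateral marker regions,
    then keep only the circle centers that have a quadrilateral region nearby."""
    circles, quads = [], []
    for c in contours:
        area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
        m = cv2.moments(c)
        if perim == 0 or m['m00'] == 0:
            continue
        center = np.array([m['m10'] / m['m00'], m['m01'] / m['m00']])
        circularity = 4.0 * np.pi * area / (perim * perim)   # 1.0 for a perfect circle
        approx = cv2.approxPolyDP(c, 0.02 * perim, True)
        if circularity >= circularity_min:
            circles.append(center)
        elif len(approx) == 4:                                # quadrilateral contour
            quads.append(center)
    return [c for c in circles
            if quads and min(np.linalg.norm(c - q) for q in quads) <= pair_dist]
```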

In one of the embodiments, the valid marker point determination module 506 comprises:

an overlap region detection unit configured to perform overlap detection on the marker point regions in each group and remove the overlapping marker point regions, obtaining the post-removal marker point regions of each group;

a neighboring point comparison unit configured to compare the distances between the post-removal marker point regions of each group with the marker point proximity threshold distance, obtaining a plurality of neighboring-point comparison results;

and a center validity determination unit configured to determine, based on each of the neighboring-point comparison results, whether the region centers in the post-removal marker point regions of each group are valid.
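One plausible reading of the neighboring-point test, sketched under the assumption that a region center counts as valid when at least one other surviving region lies within the marker point proximity threshold:

```python
import numpy as np

def valid_centers_by_proximity(centers, neighbor_thresh):
    """centers: (N, 2) array of region centers left after overlap elimination.
    Returns one boolean per center: True if some other center is within the threshold."""
    centers = np.asarray(centers, dtype=float)
    flags = []
    for i, c in enumerate(centers):
        d = np.linalg.norm(centers - c, axis=1)
        d[i] = np.inf                      # ignore the distance of a center to itself
        flags.append(bool(np.any(d <= neighbor_thresh)))
    return flags
```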

In one of the embodiments, the overlap region detection unit comprises:

a region combination comparison subunit configured to perform combined comparison on the marker point regions in each group;

a first detection subunit configured to calculate the overlap detection distance between the compared marker point regions;

a second detection subunit configured to calculate, when the overlap detection distance satisfies the contour detection threshold, the region side lengths and region area of each compared marker point region;

and an overlap region removal subunit configured to remove, based on the region side lengths and region areas of the compared marker point regions, the overlapping marker point regions of each group, obtaining the post-removal marker point regions of each group.

In one of the embodiments, the marker point sequence generation module 508 comprises:

a center-of-gravity position calculation unit configured to perform an averaging calculation on the valid region centers of the same pose, obtaining the center-of-gravity position of the region centers for each pose;

a center marker point determination unit configured to find, based on the distance between the center-of-gravity position and the valid region centers of the same pose, the center marker point of each pose from among the region centers;

an origin generation unit configured to determine the center marker point of the region centers in each pose as the polar coordinate origin of that pose;

a polar coordinate system construction unit configured to obtain, based on the region centers and the polar coordinate origin of each pose, the position of each region center in the polar coordinate system of that pose;

and a marker point sequence generation unit configured to sort the positions of the region centers in the polar coordinate system of each pose according to the angles of the region centers in that polar coordinate system, obtaining the marker point sequences of the different poses.
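The ordering performed by this module can be written almost directly from the description: average the valid centers of one pose, take the center nearest that centroid as the polar origin, and sort the remaining centers by their polar angle about it. The sketch below assumes the centers are given as 2D coordinates.

```python
import numpy as np

def order_markers_by_polar_angle(centers):
    """Return (origin, ordered_centers) for one pose: the center closest to the
    centroid of all valid centers becomes the polar origin, and the other centers
    are sorted by their polar angle about that origin."""
    pts = np.asarray(centers, dtype=float)
    centroid = pts.mean(axis=0)                              # center of gravity of the centers
    origin_idx = int(np.argmin(np.linalg.norm(pts - centroid, axis=1)))
    origin = pts[origin_idx]
    others = np.delete(pts, origin_idx, axis=0)
    rel = others - origin
    angles = np.arctan2(rel[:, 1], rel[:, 0])                # polar angle of each center
    return origin, others[np.argsort(angles)]
```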

In one of the embodiments, the coordinate system conversion relationship includes a rotation conversion relationship and a translation conversion relationship, and the hand-eye calibration module 510 comprises:

a rotation relationship calibration unit configured to calibrate the rotation conversion relationship between the robot and the scanner through the rotation vectors corresponding to the marker point sequences during the conversion between different poses;

a first vector calculation unit configured to obtain a first translation vector from the origin translation information of the marker point sequences during the conversion between different poses;

a second vector calculation unit configured to obtain a second translation vector by calculating from the marker point sphere center fitted from the marker point sequences of the different poses and from the marker point sequence of a preset pose;

and a translation relationship calibration unit configured to combine the first translation vector and the second translation vector to obtain the translation conversion relationship.
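How exactly the two translation vectors are combined is not spelled out here beyond "combine"; as a loosely hedged sketch, simple vector addition is assumed below, and the calibrated rotation and combined translation are packed into a single 4x4 homogeneous transform for later use.

```python
import numpy as np

def compose_hand_eye_transform(R, t_first, t_second):
    """Pack the calibrated rotation and a combined translation (assumed here to be
    the sum of the first and second translation vectors) into one 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = np.asarray(R, dtype=float)
    T[:3, 3] = np.asarray(t_first, dtype=float) + np.asarray(t_second, dtype=float)
    return T
```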

Each module of the above robot hand-eye coordinate conversion device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in or independent of a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.

In one embodiment, a computer device is provided. The computer device may be a terminal, and its internal structure may be as shown in FIG. 6. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input apparatus. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input apparatus are connected to the system bus through the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The input/output interface is used to exchange information between the processor and external devices. The communication interface is used for wired or wireless communication with external terminals; the wireless mode may be implemented through WIFI, a mobile cellular network, NFC (near-field communication), or other technologies. When the computer program is executed by the processor, a robot hand-eye coordinate conversion method is implemented. The display unit of the computer device is used to form a visually perceptible picture and may be a display screen, a projection apparatus, or a virtual reality imaging apparatus; the display screen may be a liquid crystal display or an electronic ink display. The input apparatus of the computer device may be a touch layer covering the display screen, a key, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.

Those skilled in the art will understand that the structure shown in FIG. 6 is only a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.

In one embodiment, a computer device is also provided, comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the above method embodiments when executing the computer program.

In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the above method embodiments are implemented.

In one embodiment, a computer program product is provided, comprising a computer program that implements the steps of the above method embodiments when executed by a processor.

It should be noted that the user information (including, but not limited to, user device information and personal user information) and data (including, but not limited to, data for analysis, stored data, and displayed data) involved in this application are all information and data authorized by the user or fully authorized by all parties, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.

Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program, which may be stored in a non-volatile computer-readable storage medium; when executed, the computer program may include the processes of the above method embodiments. Any reference to memory, database, or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase-change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases involved in the embodiments provided in this application may include at least one of a relational database and a non-relational database; non-relational databases may include, without limitation, blockchain-based distributed databases. The processors involved in the embodiments provided in this application may be, without limitation, general-purpose processors, central processing units, graphics processing units, digital signal processors, programmable logic devices, or quantum-computing-based data processing logic devices.

The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as such combinations are not contradictory, they should be regarded as falling within the scope of this specification.

The above embodiments represent only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the patent application. It should be noted that those of ordinary skill in the art can make several modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be determined by the appended claims.

Claims (10)

1. A robot hand-eye coordinate transformation method, characterized in that the method comprises:
acquiring images of an end effector of the robot, which is provided with a calibration plate, by a scanner of the robot to obtain calibration plate images of the end effector at different poses;
detecting each marker point in the calibration plate images with different poses according to at least two graphs to obtain at least two groups of different marker point areas;
determining whether the region center of the landmark regions in each group is valid based on the distance between the landmark regions in each group;
generating marker point sequences with different poses according to the effective region centers and the central marker points of the effective region centers in the different poses;
and calibrating the coordinate system conversion relation between the robot and the scanner through the marker point sequences with different poses.
2. The method according to claim 1, wherein the detecting each marker point in the calibration plate images of different poses according to at least two graphs to obtain at least two groups of different marker point regions comprises:
performing edge extraction detection on each mark point in the calibration board image according to at least two different patterns respectively to obtain at least two groups of different image profiles;
screening the image contour of the contour length interval in each group;
and obtaining the at least two groups of different mark point areas based on the screened image outlines in the groups.
3. The method of claim 2, wherein the image contours comprise a circular contour and a polygonal contour, and the deriving the at least two different sets of landmark regions based on the image contours in the screened sets comprises:
performing similarity calculation based on the contour area and the contour length of the screened circular contours to obtain a circular contour similarity;
selecting circular mark point areas corresponding to the mark points from the searched circular contour based on the similarity of the circular contour;
fitting the searched polygon outlines in each group to obtain a polygon fitting image;
taking a quadrilateral contour in the polygon fitting image as the quadrilateral marking point area corresponding to each marking point;
the determining whether the area centers of the landmark areas in the respective groups are valid includes:
and judging whether the area center of the circular mark point area is effective or not based on the distance between the circular mark point area and the quadrilateral mark point area of each mark point.
4. The method of claim 1, wherein determining whether the region centers of the landmark regions in each group are valid based on the distances between the landmark regions in each group comprises:
respectively carrying out overlapping detection on the mark point areas in each group, and removing the overlapping mark point areas in each group to obtain the removed mark point areas of each group;
comparing the distance between each group of removed mark point areas with the adjacent threshold distance of the mark points to obtain a plurality of adjacent point comparison results;
and respectively judging whether the area center in each group of the removed mark point areas is effective or not based on the comparison result of each adjacent point.
5. The method according to claim 4, wherein the performing overlap detection on the marker point regions in each group respectively, and removing the overlapped marker point regions in each group to obtain each group of removed marker point regions comprises:
respectively carrying out combination comparison on the mark point areas in each group;
calculating the overlapping detection distance between the marker point areas of the combined comparison;
when the overlapping detection distance meets a contour detection threshold, respectively calculating the region side length and the region area of each mark point region for the combined comparison;
and based on the side length and the area of each mark point area which is combined and compared, eliminating the overlapped mark point areas in each group to obtain the mark point areas after the elimination of each group.
6. The method according to claim 1, wherein the generating of marker point sequences of different poses according to each valid region center and the center marker point of each valid region center in the different poses comprises:
carrying out averaging calculation based on the effective area centers with the same pose to obtain the gravity center positions of the area centers with different poses;
based on the distance between the gravity center position and the effective area center with the same pose, finding out the center mark point of each pose from the area center;
respectively determining the center mark points of the centers of the areas in different poses as the origin of polar coordinates under the poses;
obtaining the position of each region center in the polar coordinate system of each pose based on each region center and the polar coordinate origin of each pose;
and according to the angle of each region center in the polar coordinate system, sequencing the positions of each region center in the polar coordinate system of each pose respectively to obtain a marker point sequence of different poses.
7. The method according to any one of claims 1 to 6, wherein the coordinate system transformation relationship comprises a rotation transformation relationship and a translation transformation relationship, and the calibrating the coordinate system transformation relationship between the robot and the scanner through the marker point sequences of different poses comprises:
calibrating the rotation conversion relation between the robot and the scanner through the corresponding rotation vectors of the mark point sequence in the conversion process of different poses;
obtaining a first translation vector through the origin translation information of the mark point sequence in the conversion process of different poses;
calculating through the mark point spherical center obtained by fitting the mark point sequences with different poses and the mark point sequence with a preset pose to obtain a second translation vector;
and combining the first translation vector and the second translation vector to obtain the translation conversion relation.
8. A robot hand-eye coordinate conversion apparatus, characterized by comprising:
the image acquisition module is used for acquiring images of the end effector of the robot, which is provided with the calibration plate, through a scanner of the robot to obtain calibration plate images of the end effector at different poses;
the edge detection module is used for detecting each mark point in the calibration plate images with different poses according to at least two graphs to obtain at least two groups of different mark point areas;
the effective mark point judging module is used for judging whether the area center of the mark point areas in each group is effective or not based on the distance between the mark point areas in each group;
the mark point sequence generating module is used for generating mark point sequences with different poses according to the effective region centers and the center mark points of the effective region centers in the different poses;
and the hand-eye calibration module is used for calibrating the coordinate system conversion relation between the robot and the scanner through the mark point sequences with different poses.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202210809386.5A 2022-07-11 2022-07-11 Robot hand-eye coordinate conversion method and device, computer equipment and storage medium Pending CN115049744A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210809386.5A CN115049744A (en) 2022-07-11 2022-07-11 Robot hand-eye coordinate conversion method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210809386.5A CN115049744A (en) 2022-07-11 2022-07-11 Robot hand-eye coordinate conversion method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115049744A true CN115049744A (en) 2022-09-13

Family

ID=83166310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210809386.5A Pending CN115049744A (en) 2022-07-11 2022-07-11 Robot hand-eye coordinate conversion method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115049744A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110046782A1 (en) * 2008-04-30 2011-02-24 Abb Technology Ab A method and system for determining the relation between a robot coordinate system and a local coordinate system located in the working range of the robot
US20140145948A1 (en) * 2012-11-26 2014-05-29 Everest Display Inc. Interactive projection system and method for calibrating position of light point thereof
CN104864809A (en) * 2015-04-24 2015-08-26 南京航空航天大学 Vision-based position detection coding target and system
CN109015630A (en) * 2018-06-21 2018-12-18 深圳辰视智能科技有限公司 Hand and eye calibrating method, system and the computer storage medium extracted based on calibration point
CN111801198A (en) * 2018-08-01 2020-10-20 深圳配天智能技术研究院有限公司 Hand-eye calibration method, system and computer storage medium
CN110919658A (en) * 2019-12-13 2020-03-27 东华大学 Robot calibration method based on vision and multi-coordinate system closed-loop conversion
CN112170825A (en) * 2020-10-09 2021-01-05 中冶赛迪工程技术股份有限公司 A vision servo-based long nozzle replacement method, equipment, terminal and medium
CN114407018A (en) * 2022-02-11 2022-04-29 天津科技大学 Robot hand-eye calibration method, device, electronic device, storage medium and product

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
孙颖超; 李振海: "基于数学形态学的圆形标志定位" (Circular marker localization based on mathematical morphology), 地理空间信息 (Geospatial Information), no. 01, 28 February 2010 (2010-02-28), pages 79-81 *
晏晖; 胡丙华: "基于空间拓扑关系的目标自动跟踪与位姿测量技术" (Automatic target tracking and pose measurement based on spatial topological relations), 中国测试 (China Measurement & Test), no. 04, 30 April 2019 (2019-04-30), pages 13-19 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116687569A (en) * 2023-07-28 2023-09-05 深圳卡尔文科技有限公司 Coded identification operation navigation method, system and storage medium
CN116687569B (en) * 2023-07-28 2023-10-03 深圳卡尔文科技有限公司 Coded identification operation navigation method, system and storage medium
CN118279399A (en) * 2024-06-03 2024-07-02 先临三维科技股份有限公司 Scanning equipment pose tracking method and tracking equipment

Similar Documents

Publication Publication Date Title
CN110135455B (en) Image matching method, device and computer readable storage medium
Tang et al. 3D mapping and 6D pose computation for real time augmented reality on cylindrical objects
JP5746477B2 (en) Model generation device, three-dimensional measurement device, control method thereof, and program
JP3735344B2 (en) Calibration apparatus, calibration method, and calibration program
JP5455873B2 (en) Method for determining the posture of an object in a scene
JP5612916B2 (en) Position / orientation measuring apparatus, processing method thereof, program, robot system
JP6594129B2 (en) Information processing apparatus, information processing method, and program
RU2700246C1 (en) Method and system for capturing an object using a robot device
CN108346165A (en) Robot and three-dimensional sensing components in combination scaling method and device
JP5092711B2 (en) Object recognition apparatus and robot apparatus
CN111079565B (en) Construction method and identification method of view two-dimensional attitude template and positioning grabbing system
CN106940704A (en) A kind of localization method and device based on grating map
GB2512460A (en) Position and orientation measuring apparatus, information processing apparatus and information processing method
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
JP2012128661A (en) Information processor, information processing method and program
CN115049744A (en) Robot hand-eye coordinate conversion method and device, computer equipment and storage medium
JP2012026895A (en) Position attitude measurement device, position attitude measurement method, and program
CN113096191B (en) Intelligent calibration method for monocular camera based on coding plane target
TW202238449A (en) Indoor positioning system and indoor positioning method
CN115042184A (en) Robot hand-eye coordinate conversion method and device, computer equipment and storage medium
JP2009128192A (en) Object recognition device and robot device
CN115972202A (en) Method, robot, device, medium and product for controlling operation of a robot arm
CN112634377B (en) Camera calibration method, terminal and computer readable storage medium of sweeping robot
US20240083038A1 (en) Assistance system, image processing device, assistance method and non-transitory computer-readable storage medium
Liang et al. An integrated camera parameters calibration approach for robotic monocular vision guidance

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination