CN114998422B - High-precision rapid three-dimensional positioning system based on error compensation model - Google Patents
High-precision rapid three-dimensional positioning system based on error compensation model
- Publication number
- CN114998422B CN202210586073.8A CN202210586073A
- Authority
- CN
- China
- Prior art keywords
- depth
- precision
- dimensional positioning
- error compensation
- marker
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 239000003550 marker Substances 0.000 claims abstract description 68
- 238000005259 measurement Methods 0.000 claims abstract description 45
- 238000004422 calculation algorithm Methods 0.000 claims abstract description 21
- 238000003384 imaging method Methods 0.000 claims description 22
- 238000000034 method Methods 0.000 claims description 17
- 238000004364 calculation method Methods 0.000 claims description 15
- 230000003287 optical effect Effects 0.000 claims description 9
- 238000001514 detection method Methods 0.000 claims description 4
- 239000011159 matrix material Substances 0.000 claims description 4
- 238000012821 model calculation Methods 0.000 claims description 4
- 230000008569 process Effects 0.000 claims description 3
- 238000011156 evaluation Methods 0.000 claims 1
- 238000009434 installation Methods 0.000 abstract description 7
- 238000013461 design Methods 0.000 abstract description 5
- 230000006872 improvement Effects 0.000 description 7
- 230000007246 mechanism Effects 0.000 description 3
- 230000000007 visual effect Effects 0.000 description 3
- 238000010586 diagram Methods 0.000 description 2
- 238000011161 development Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 238000000691 measurement method Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
Description
Technical Field
The present invention relates to a high-precision, rapid three-dimensional positioning system based on an error compensation model, and belongs to the technical field of camera-based computer vision.
Background Art
With the development of society and the continuous progress of science and technology, machine vision positioning systems are widely used in many fields, such as robot visual servoing and three-dimensional online dimension measurement. Current industrial-robot vision positioning systems can be divided into binocular vision systems, monocular vision systems, structured-light vision systems and depth-camera systems. Among these, monocular cameras are inexpensive and structurally simple and are therefore widely used for visual positioning. However, it is difficult for a monocular vision system to obtain depth information, so a high-precision, fast depth measurement method is the key to improving the positioning accuracy of a monocular vision system.
Existing monocular-camera positioning techniques fall mainly into two categories: two-dimensional positioning methods in which the depth is already known, and methods that obtain three-dimensional information by designing complex markers. Two-dimensional positioning with known depth is simple and accurate; for example, "Workpiece Pose Recognition and Grasping System Based on Monocular Vision" compensates for the difficulty of obtaining accurate depth with a monocular camera by combining an infrared sensor with an industrial camera to locate and grasp workpieces. Such methods obtain the depth in advance or acquire it through additional sensors, which increases industrial cost and imposes strict sensor installation requirements, and they do not fundamentally solve the difficulty of acquiring depth information with a monocular camera. Methods that design complex markers to obtain three-dimensional information can recover the three-dimensional position of a workpiece in the world coordinate system without any auxiliary equipment. For example, "An Improved ArUco Marker for Monocular Vision Ranging" designs an ArUco marker to measure the three-dimensional distance between the monocular camera lens and the target marker, thereby enabling depth measurement with a monocular camera; Lv Ruogang's monocular ranging and positioning method based on standard spheres extracts the contours of three standard spheres to compute their camera coordinates, and then performs ranging and positioning of the robot relative to the working-machine fixture through the hand-eye calibration transformation matrix. These methods involve complicated marker designs and place high demands on marker accuracy, or obtain depth information by translation, which is inefficient; the resulting three-dimensional positioning accuracy, especially in depth, is not high.
In view of this, the present invention proposes a high-precision, rapid three-dimensional positioning system based on an error compensation model, which addresses the problems of complex marker design, low depth-measurement accuracy and dependence on additional sensors. The radius of the marker's feature circle is obtained in both the pixel coordinate system and the world coordinate system, the pixel distance ratio is computed, and a high-precision, fast depth measurement algorithm calculates the distance between the feature circle and the camera; an error compensation model then corrects the calculation deviation caused by incidental factors such as installation, yielding the final camera depth. Once the depth is known, an accurate two-dimensional position is obtained with a two-dimensional positioning algorithm, and the deviations in the X, Y and Z directions are sent to the robot controller for pose adjustment, thereby achieving three-dimensional positioning.
Summary of the Invention
The purpose of the present invention is to provide a high-precision, rapid three-dimensional positioning system based on an error compensation model that addresses the problems of complex marker design, low depth-measurement accuracy and dependence on additional sensors. By capturing a single-frame image and detecting the centre and radius of a feature circle, accurate depth measurement is achieved with a high-precision, fast depth measurement algorithm; to eliminate the errors caused by tilted camera installation and by the measurement plane not being parallel to the camera imaging plane, an error compensation model is established. After the accurate depth is obtained, an accurate two-dimensional position is obtained with a two-dimensional positioning algorithm, and the deviations in the X, Y and Z directions are sent to the robot controller for pose adjustment, thereby achieving fast, high-precision measurement of the three-dimensional distance.
In order to achieve the above object, the technical solution adopted by the present invention is as follows:
A high-precision, rapid three-dimensional positioning system based on an error compensation model comprises system hardware and high-precision three-dimensional positioning software. The system hardware comprises a robot manipulator, a monocular industrial camera, a processor, a marker and an electric lifting base; the monocular industrial camera is fixed at the end of the robot manipulator, the marker is mounted on the electric lifting base, which can be raised and lowered automatically, and the initial position of the marker is directly below the monocular industrial camera. The monocular industrial camera captures a single picture to obtain the feature-circle centre of the marker and the initial depth value to the camera; to overcome the problem that the imaging plane is not parallel to the plane of the marker, the current pixel distance ratio is substituted into the error compensation model to obtain the required depth error compensation value, and adding this compensation value to the initial depth value yields a high-precision, fast depth measurement. After the depth is obtained, an accurate two-dimensional position is obtained with a two-dimensional positioning algorithm, and finally three-dimensional positioning is achieved by the high-precision three-dimensional positioning software. The processor provides the platform for data acquisition, storage, initial depth measurement and error-compensation-model calculation.
A further improvement of the technical solution of the present invention is that the monocular industrial camera comprises a camera and a lens and is used to acquire images of the marker.
A further improvement of the technical solution of the present invention is that the high-precision three-dimensional positioning software has two functions, depth measurement and two-dimensional positioning: depth measurement measures the depth of the feature circle on the marker, and two-dimensional positioning computes the precise deviation of the feature circle in the x and y directions once the depth information of the feature circle has been obtained.
A further improvement of the technical solution of the present invention is that the high-precision rapid three-dimensional positioning system performs its function through the following steps:
Step 1: fix the monocular industrial camera at the end of the robot manipulator so that the xy directions of the robot manipulator are parallel to the xy directions of the camera's pixel coordinate system.
Step 2: place the marker directly below the monocular industrial camera and automatically control the electric lifting base so that the marker surface is kept at a fixed distance from the lens; define the target positions that the robot manipulator is expected to reach as Di (i = 1, 2, …, n), and acquire an image of the marker at each corresponding position, denoted Ii (i = 1, 2, …, n).
Step 3: during robot operation, every time the robot arrives near a target position Di, turn on the monocular industrial camera, adaptively adjust the exposure, acquire the marker image in real time, and then perform three-dimensional high-precision positioning with the high-precision three-dimensional positioning software.
A further improvement of the technical solution of the present invention is that, in step 3, the specific steps of three-dimensional high-precision positioning with the high-precision three-dimensional positioning software are:
Step 3.1, compute the initial depth value from the marker image: after the marker image is acquired, the feature circles of the marker are extracted with a circle detection algorithm, and the pixel distance ratio is computed from the radii of the feature circles in the world coordinate system and the pixel coordinate system and from the pixel coordinates of the circle centres, giving a preliminary estimate of the depth.
Step 3.2, compensate the error with the error compensation model: the depth estimated in step 3.1 is computed under the condition that the camera imaging plane is parallel to the plane of the feature circles, a condition that is difficult to satisfy strictly in practice; to overcome the loss of accuracy caused by this, the compensation value computed by the proposed error compensation model is added to the initial depth value, giving a high-precision depth result.
Step 3.3, high-precision three-dimensional positioning: after the depth calculation, the image of the marker acquired at the target position is compared with the currently measured depth value to obtain the deviation in the Z direction; at the same time, the precise deviations in the x and y directions are computed with the two-dimensional plane positioning algorithm, and the deviations in the X, Y and Z directions are sent to the robot controller for pose adjustment, achieving three-dimensional high-precision positioning of the robot.
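The flow of steps 3.1–3.3 can be summarized by the following sketch. It is an illustrative outline only, not code from the patent: the function names are hypothetical, and the form of the compensation polynomial and of the xy conversion are assumptions consistent with the formulas reconstructed below.

```python
import math

def pixel_distance_ratio(c_a, c_b, world_distance_mm):
    """Pixel distance ratio k (pixel/mm): distance between two feature-circle
    centres in the image divided by their known real-world distance."""
    (u1, v1), (u2, v2) = c_a, c_b
    return math.hypot(u1 - u2, v1 - v2) / world_distance_mm

def initial_depth(fu, k):
    """Step 3.1: preliminary depth estimate from the pinhole model."""
    return fu / k

def compensated_depth(z_initial, k, coeffs):
    """Step 3.2: add the depth error predicted by the compensation polynomial E(k).
    coeffs = (a0, a1, a2, ...) in ascending powers of k (assumed ordering)."""
    return z_initial + sum(a * k**i for i, a in enumerate(coeffs))

def xyz_deviation(c_current, c_target, z_current, z_target, k):
    """Step 3.3: Z deviation from the two depths; xy deviation from the pixel
    offset of the circle centre converted to mm by the pixel distance ratio."""
    dx = (c_current[0] - c_target[0]) / k
    dy = (c_current[1] - c_target[1]) / k
    dz = z_current - z_target
    return dx, dy, dz   # sent to the robot controller for pose adjustment
```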
A further improvement of the technical solution of the present invention is that the specific steps of step 3.1 are:
Step 3.1.1, compute the pixel distance ratio: according to the pinhole imaging model and the camera imaging principle, the radius of the feature circle in the world coordinate system, its radius in the pixel coordinate system, and the lines joining the camera optical centre to the circle form similar triangles. Let Hc denote the distance from the optical centre of the lens to the imaged object, Hw the distance from the optical centre to the real object, Xc the length of the object in the pixel coordinate system, and Xw the real size of the object; the following relationship is then obtained:
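The displayed equation (1) is not reproduced in this text; from the similar-triangle argument it presumably reads:
Xc / Xw = Hc / Hw  (1)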
Under the condition that the camera imaging plane is parallel to the plane of the marker's feature circles, let the centre of the feature circle at point a be (u1, v1) and the centre of the feature circle at point b be (u2, v2); the pixel distance ratio is then:
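The displayed equation (2) is likewise missing; consistent with the embodiment below, in which the ratio is the pixel distance between the two circle centres (102.3 pixels) divided by their real distance (7.5 mm), it presumably reads:
k = sqrt((u1 - u2)^2 + (v1 - v2)^2) / dw  (2)
where dw is the distance between the two circle centres in the world coordinate system.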
Step 3.1.2, preliminary estimate of the depth of the feature circle: once the pixel distance ratio has been obtained, the depth is computed as:
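A plausible form of the missing equation (3), which reproduces the embodiment's numbers (fu = 3520.7431, k = 13.64 pixel/mm, depth ≈ 258 mm):
zc1 = fu / k  (3)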
where fu is obtained from the camera intrinsic calibration.
A further improvement of the technical solution of the present invention is that the specific steps of step 3.2 are:
Step 3.2.1, construct the error compensation model: the depth computed in step 3.1 assumes that the camera imaging plane is parallel to the plane of the feature circles, an assumption that is difficult to satisfy strictly in practice. To overcome the loss of accuracy caused by this, an error compensation model is constructed for error compensation. Let the error compensation model be:
E = f(Hw)  (4)
The evaluation criterion e of the error model is:
e = E - Er  (5)
where Er denotes the true error value and E the computed error value. The error can therefore be fitted against the pixel distance ratio by the least-squares method, so that the depth error is determined accurately. Let the fitting polynomial be:
y = a0 + a1·x + a2·x^2 + … + a(k-1)·x^(k-1) + ak·x^k  (6)
Step 3.2.2, simplify the error compensation model: the values of ai in the above formula are chosen so that the sum of the distances from all data points to the fitted curve is minimized, i.e. the sum of squared deviations R^2 is minimized; from the above formula the sum-of-squared-deviations function is constructed:
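The displayed expression (7) is not reproduced; assuming ordinary polynomial least squares over the n samples (xi, yi), it presumably reads:
R^2 = Σ ( yi - (a0 + a1·xi + … + ak·xi^k) )^2, summed over i = 1, …, n  (7)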
Taking the partial derivatives of the above expression with respect to the ai gives:
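The omitted normal equations (8) would then take the standard form, for j = 0, 1, …, k:
Σ xi^j · ( yi - (a0 + a1·xi + … + ak·xi^k) ) = 0  (8)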
Writing the above in matrix form gives:
X·A = Y  (9)
To reduce the amount of computation it is sufficient to take k = 3, and the depth measurement result is then corrected as:
Zr = zc1 + Y  (10)
The above formula yields the high-precision, rapid depth measurement result.
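A minimal sketch of this calibration step, assuming the compensation model is a low-order polynomial in the pixel distance ratio fitted by ordinary least squares (NumPy is used purely for illustration; the patent does not name an implementation):

```python
import numpy as np

def fit_compensation_model(k_samples, e_samples, degree=2):
    """Fit the error compensation model E = f(k) by least squares.

    k_samples : pixel distance ratios measured at known reference depths
    e_samples : corresponding depth errors (true depth minus initial estimate)
    degree=2 keeps the model to three coefficients, matching the second-order
    model used in the embodiment below.
    """
    return np.polyfit(k_samples, e_samples, deg=degree)  # highest power first

def corrected_depth(z_initial, k, coeffs):
    """Zr = zc1 + Y: initial depth plus the compensation predicted by the model."""
    return z_initial + np.polyval(coeffs, k)
```

In use, k_samples and e_samples would be collected once, offline, by placing the marker at known distances; at run time only corrected_depth needs to be evaluated.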
A further improvement of the technical solution of the present invention is that the specific steps of step 3.3 are:
Step 3.3.1, when the robot reaches the specified target position, acquire a picture, send it to the high-precision three-dimensional positioning software, and compare it with the standard picture acquired at the target position;
Step 3.3.2, compute the depth of the standard image of the target position and the depth measurement of the current image, and subtract the two to obtain the deviation in the Z direction;
Step 3.3.3, denote the feature-circle centre coordinates of the standard image by C1 = (Uo, Vo) and those of the current image by Cn = (Un, Vn); using the pixel distance ratio, the two-dimensional position error is obtained from the following formula:
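The displayed formula is not reproduced; assuming the pixel offsets are converted to millimetres by the pixel distance ratio k, it presumably reads:
Δx = (Un - Uo) / k,  Δy = (Vn - Vo) / k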
The computed errors are then sent to the robot controller, which adjusts the robot pose, thereby achieving fast, high-precision three-dimensional positioning of the robot.
The high-precision rapid three-dimensional positioning system based on the error compensation model captures a single-frame image and detects the centre and radius of the feature circle, achieves accurate depth measurement with a high-precision, fast depth measurement algorithm, and establishes an error compensation model to eliminate the errors caused by tilted camera installation and by the measurement plane not being parallel to the camera imaging plane; after accurate depth information is obtained, the two-dimensional plane positioning algorithm computes the precise deviations in the x and y directions, and the deviations in the X, Y and Z directions are sent to the robot controller for pose adjustment, achieving three-dimensional high-precision positioning of the robot.
By adopting the above technical solution, the present invention achieves the following technical effects:
The present invention simplifies the monocular-camera positioning process: by capturing a single-frame image and detecting the centre and radius of a feature circle, high-precision, rapid three-dimensional distance measurement is achieved, eliminating the need to design complex markers to obtain the three-dimensional distance of an object.
The present invention can greatly reduce industrial cost: only one industrial camera and a circle calibration plate are needed for high-precision, rapid three-dimensional distance measurement, eliminating the additional expense of installing an infrared rangefinder to obtain object depth information.
The present invention uses an error compensation model, so the camera imaging plane does not have to be strictly parallel to the calibration-plate plane, which simplifies the installation requirements and improves general applicability.
Brief Description of the Drawings
FIG. 1 is a flow chart of the method of the system of the present invention;
FIG. 2 is a schematic structural diagram of the system of the present invention;
FIG. 3 is a structural diagram of the hardware of the system;
In the figures: 1. robot manipulator; 2. monocular industrial camera; 3. processor; 4. marker; 5. electric lifting base.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments:
A high-precision, rapid three-dimensional positioning system based on an error compensation model is shown in FIGS. 1-3. The system comprises system hardware and high-precision three-dimensional positioning software. The system hardware comprises a robot manipulator 1, a monocular industrial camera 2, a processor 3, a marker 4 and an electric lifting base 5; the monocular industrial camera 2 is fixed at a position at the end of the robot manipulator 1, the marker 4 is mounted on the electric lifting base 5, which can be raised and lowered automatically, and the initial position of the marker 4 is directly below the monocular industrial camera 2. The monocular industrial camera 2 captures a single picture to obtain the feature-circle centre of the marker 4 and the initial depth value to the camera; to overcome the problem that the imaging plane is not parallel to the plane of the marker 4, the compensation value obtained from the error compensation model is added to the computed initial depth value, which yields a high-precision, fast depth measurement. After the depth is obtained, an accurate two-dimensional position is obtained with a two-dimensional positioning algorithm, and finally three-dimensional positioning is achieved by the high-precision three-dimensional positioning software. The processor 3 provides the platform for data acquisition, storage, initial depth measurement and error-compensation-model calculation.
Preferably, the processor is mainly used for data acquisition, storage, initial depth measurement and error-compensation-model calculation. In this embodiment the processor is a personal PC.
Preferably, the marker 4 provides the image for the depth measurement of the monocular industrial camera 2: the monocular industrial camera 2 captures one marker image, and the radius of the feature circle on the marker 4 in the image coordinate system is extracted to perform the depth measurement. In this embodiment the marker is a circle calibration plate.
Preferably, the monocular industrial camera 2 consists of a camera and a lens and is mainly used to acquire images of the marker 4. In this embodiment the camera is a Daheng MER-503-20GM-P with a resolution of 2448 (H) × 2048 (V), a frame rate of 20 fps and a GigE data interface.
Preferably, the high-precision three-dimensional positioning software has two main functions, depth measurement and two-dimensional positioning: depth measurement measures the depth of the feature circle on the marker, and two-dimensional positioning computes the precise deviation of the feature circle in the x and y directions once the depth information has been obtained.
Preferably, the electric lifting base 5 mainly supports the marker; it is controlled automatically so that the marker surface is kept at a fixed distance from the lens.
The high-precision rapid three-dimensional positioning system performs its functions through the following steps:
Step 1: fix the monocular industrial camera 2 at the end of the robot manipulator 1 so that the xy directions of the coordinate system of the robot manipulator 1 are parallel to the xy directions of the camera's pixel coordinate system. In this embodiment the manipulator is a four-degree-of-freedom serial-parallel hybrid mechanism, and the angle between the manipulator coordinate system and the camera pixel coordinate system is compensated by the rotary motor of the mechanism; the compensated angle is 81.5°.
Step 2: place the marker 4 directly below the monocular industrial camera 2 and automatically control the electric lifting base 5 so that the surface of the marker 4 is kept at a fixed distance from the lens; define the target positions that the robot manipulator 1 is expected to reach as Di (i = 1, 2, …, n), and acquire an image of the marker 4 at each corresponding position, denoted Ii (i = 1, 2, …, n). In this embodiment there are 5 target positions, 5 marker images are acquired at the corresponding positions, and the marker images are saved in a database.
Step 3: during robot operation, every time the robot arrives near a target position Di, turn on the monocular industrial camera 2, adaptively adjust the exposure, acquire the marker image in real time, and then perform three-dimensional high-precision positioning with the high-precision three-dimensional positioning software.
Step 3.1, compute the initial depth value from the marker image: after the marker image is acquired, the feature circles of the marker are extracted with a circle detection algorithm, and the pixel distance ratio is computed from the radii of the feature circles in the world coordinate system and the pixel coordinate system and from the pixel coordinates of the circle centres, giving a preliminary estimate of the depth. In this embodiment the camera saves an image containing the marker, and the EDcircle circle detection algorithm is used to compute the radius and centre coordinates of the feature circles; the distance between the two feature circles is 102.3 pixels in the pixel coordinate system and 7.5 mm in the world coordinate system, so the pixel distance ratio is 13.64 pixel/mm. Camera calibration gives fu = 3520.7431, so the current depth is obtained as 258.0 mm.
The specific steps of step 3.1 include:
Step 3.1.1, compute the pixel distance ratio: according to the pinhole imaging model and the camera imaging principle, the radius of the feature circle in the world coordinate system, its radius in the pixel coordinate system, and the lines joining the camera optical centre to the circle form similar triangles. Let Hc denote the distance from the optical centre of the lens to the imaged object, Hw the distance from the optical centre to the real object, Xc the length of the object in the pixel coordinate system, and Xw the real size of the object; the relationship given in formula (1) above is then obtained.
Under the condition that the camera imaging plane is parallel to the plane of the marker's feature circles, let the centre of the feature circle at point a be (u1, v1) and the centre of the feature circle at point b be (u2, v2); the pixel distance ratio then follows formula (2) above.
Step 3.1.2, preliminary estimate of the depth of the feature circle: once the pixel distance ratio has been obtained, the depth is computed from formula (3) above.
fu is obtained from the camera intrinsic calibration.
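As a quick check of the formulas above against the embodiment values (a sketch; the 258.0 mm figure is the patent's, the rest follows from the reconstructed formula (3)):

```python
pixel_dist = 102.3   # distance between the two feature circles in the image, pixels
world_dist = 7.5     # the same distance in the world coordinate system, mm
fu = 3520.7431       # camera intrinsic from calibration

k = pixel_dist / world_dist   # pixel distance ratio: 13.64 pixel/mm
z_initial = fu / k            # ≈ 258.1 mm, in line with the quoted 258.0 mm
print(f"k = {k:.2f} pixel/mm, initial depth = {z_initial:.1f} mm")
```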
Step 3.2, compensate the error with the error compensation model: the depth estimated in step 3.1 is measured under the condition that the camera imaging plane is parallel to the plane of the feature circles, a condition that is difficult to satisfy strictly in practice; to overcome the loss of accuracy caused by this, the error computed by the error compensation model is added to the initial depth value, giving a high-precision depth result. In this embodiment the inclination between the camera imaging plane and the plane of the feature circles is 2.43° in the X direction and 1.98° in the Y direction. The error compensation model is fitted by the least-squares method from the pixel distance ratio and the computed error; although a higher model order gives higher accuracy, it also makes the computation more complex, so the order is generally chosen as two, and the model parameters are K1 = 0.2144, K2 = -5.584, K3 = 36.28.
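Reading K1, K2 and K3 as the quadratic, linear and constant coefficients of the second-order model E(k) = K1·k^2 + K2·k + K3 (this ordering is an assumption; the patent does not state it), the compensation at this embodiment's ratio can be evaluated as follows:

```python
K1, K2, K3 = 0.2144, -5.584, 36.28   # fitted model parameters from the embodiment
k = 13.64                             # pixel distance ratio, pixel/mm

E = K1 * k**2 + K2 * k + K3           # depth compensation value in mm (assumed ordering)
print(f"E(k) = {E:.3f} mm")           # essentially zero at this particular ratio
```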
The specific steps of step 3.2 include:
Step 3.2.1, construct the error compensation model: the depth computed in step 3.1 assumes that the camera imaging plane is parallel to the plane of the feature circles, an assumption that is difficult to satisfy strictly in practice. To overcome the loss of accuracy caused by this, an error compensation model is constructed for error compensation. Let the error compensation model be:
E = f(Hw)  (4)
The evaluation criterion e of the error model is:
e = E - Er  (5)
where Er denotes the true error value and E the computed error value. The error can therefore be fitted against the pixel distance ratio by the least-squares method, so that the depth error is determined accurately. Let the fitting polynomial be:
y = a0 + a1·x + a2·x^2 + … + a(k-1)·x^(k-1) + ak·x^k  (6)
Step 3.2.2, simplify the error compensation model: the values of ai in the above formula are chosen so that the sum of the distances from all data points to the fitted curve is minimized, i.e. the sum of squared deviations R^2 is minimized; from the above formula the sum-of-squared-deviations function (7) is constructed as given above.
Taking the partial derivatives of this expression with respect to the ai gives the normal equations (8) above,
which are written in matrix form as:
X·A = Y  (9)
To reduce the amount of computation it is sufficient to take k = 3, and the depth measurement result is then corrected as:
Zr = zc1 + Y  (10)
The above formula yields the high-precision, rapid depth measurement result.
Step 3.3, high-precision three-dimensional positioning: after the depth estimation, the image of the marker acquired at the target position is compared with the currently measured depth value to obtain the deviation in the Z direction; at the same time, the precise deviations in the x and y directions are computed with the two-dimensional plane positioning algorithm, and the deviations in the X, Y and Z directions are sent to the robot controller for pose adjustment, achieving three-dimensional high-precision positioning of the robot. In this embodiment the computed deviation is 0.35 mm in the Z direction, 0.48 mm in the X direction and -1.01 mm in the Y direction; the deviations are sent to the four-degree-of-freedom serial-parallel hybrid mechanism, and three-dimensional error compensation is realized by the movement of three motors.
The specific steps of step 3.3 include:
Step 3.3.1, when the robot reaches the specified target position, acquire a picture, send it to the high-precision three-dimensional positioning software, and compare it with the standard picture acquired at the target position;
Step 3.3.2, compute the depth of the standard image of the target position and the depth measurement of the current image, and subtract the two to obtain the deviation in the Z direction;
Step 3.3.3, denote the feature-circle centre coordinates of the standard image by C1 = (Uo, Vo) and those of the current image by Cn = (Un, Vn); using the pixel distance ratio, the two-dimensional position error is obtained from the following formula:
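As in step 3.3.3 of the Summary above, assuming the omitted formula converts the pixel offsets to millimetres via the pixel distance ratio k:
Δx = (Un - Uo) / k,  Δy = (Vn - Vo) / k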
The computed errors are then sent to the robot controller, which adjusts the robot pose, thereby achieving fast, high-precision three-dimensional positioning of the robot.
The high-precision rapid three-dimensional positioning system based on the error compensation model captures a single-frame image and detects the centre and radius of the feature circle, achieves accurate depth measurement with a high-precision, fast depth measurement algorithm, and establishes an error compensation model to eliminate the errors caused by tilted camera installation and by the measurement plane not being parallel to the camera imaging plane; after accurate depth information is obtained, the two-dimensional plane positioning algorithm computes the precise deviations in the x and y directions, and the deviations in the X, Y and Z directions are sent to the robot controller for pose adjustment, achieving three-dimensional high-precision positioning of the robot.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210586073.8A CN114998422B (en) | 2022-05-26 | 2022-05-26 | High-precision rapid three-dimensional positioning system based on error compensation model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210586073.8A CN114998422B (en) | 2022-05-26 | 2022-05-26 | High-precision rapid three-dimensional positioning system based on error compensation model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114998422A CN114998422A (en) | 2022-09-02 |
CN114998422B true CN114998422B (en) | 2024-05-28 |
Family
ID=83028695
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210586073.8A Active CN114998422B (en) | 2022-05-26 | 2022-05-26 | High-precision rapid three-dimensional positioning system based on error compensation model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114998422B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116402871B (en) * | 2023-03-28 | 2024-05-10 | 苏州大学 | Monocular distance measurement method and system based on scene parallel elements and electronic equipment |
CN119254937B (en) * | 2024-12-06 | 2025-03-18 | 杭州海康机器人股份有限公司 | Image processing method and device and electronic equipment |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106197265A (en) * | 2016-06-30 | 2016-12-07 | 中国科学院长春光学精密机械与物理研究所 | A kind of space free flight simulator precision visual localization method |
CN107088892A (en) * | 2017-04-01 | 2017-08-25 | 西安交通大学 | A kind of industrial robot motion accuracy checking method based on binocular vision |
CN107300382A (en) * | 2017-06-27 | 2017-10-27 | 西北工业大学 | A kind of monocular visual positioning method for underwater robot |
CN108765489A (en) * | 2018-05-29 | 2018-11-06 | 中国人民解放军63920部队 | A kind of pose computational methods, system, medium and equipment based on combination target |
CN109211222A (en) * | 2018-08-22 | 2019-01-15 | 扬州大学 | High-accuracy position system and method based on machine vision |
CN109741393A (en) * | 2018-12-04 | 2019-05-10 | 上海大学 | Diameter measurement and center point positioning method of Agaricus bisporus |
CN109886889A (en) * | 2019-02-12 | 2019-06-14 | 哈尔滨工程大学 | A precise positioning method of the oil-receiving cone sleeve in the air based on the center deviation compensation method |
CN110148174A (en) * | 2019-05-23 | 2019-08-20 | 北京阿丘机器人科技有限公司 | Scaling board, scaling board recognition methods and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7237483B2 (en) * | 2018-07-18 | 2023-03-13 | キヤノン株式会社 | Robot system control method, control program, recording medium, control device, robot system, article manufacturing method |
- 2022
- 2022-05-26 CN CN202210586073.8A patent/CN114998422B/en active Active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106197265A (en) * | 2016-06-30 | 2016-12-07 | 中国科学院长春光学精密机械与物理研究所 | A kind of space free flight simulator precision visual localization method |
CN107088892A (en) * | 2017-04-01 | 2017-08-25 | 西安交通大学 | A kind of industrial robot motion accuracy checking method based on binocular vision |
CN107300382A (en) * | 2017-06-27 | 2017-10-27 | 西北工业大学 | A kind of monocular visual positioning method for underwater robot |
CN108765489A (en) * | 2018-05-29 | 2018-11-06 | 中国人民解放军63920部队 | A kind of pose computational methods, system, medium and equipment based on combination target |
CN109211222A (en) * | 2018-08-22 | 2019-01-15 | 扬州大学 | High-accuracy position system and method based on machine vision |
CN109741393A (en) * | 2018-12-04 | 2019-05-10 | 上海大学 | Diameter measurement and center point positioning method of Agaricus bisporus |
CN109886889A (en) * | 2019-02-12 | 2019-06-14 | 哈尔滨工程大学 | A precise positioning method of the oil-receiving cone sleeve in the air based on the center deviation compensation method |
CN110148174A (en) * | 2019-05-23 | 2019-08-20 | 北京阿丘机器人科技有限公司 | Scaling board, scaling board recognition methods and device |
Non-Patent Citations (1)
Title |
---|
Research on a simple monocular-vision pose measurement method; Gu Fengwei; Gao Hongwei; Jiang Yueqiu; Electro-Optic Technology Application; 2018-08-15 (No. 04); pp. 64-70 *
Also Published As
Publication number | Publication date |
---|---|
CN114998422A (en) | 2022-09-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110116407B (en) | Flexible robot pose measurement method and device | |
CN109443207B (en) | A kind of light pen robot in-situ measurement system and method | |
CN108177143B (en) | Robot positioning and grabbing method and system based on laser vision guidance | |
CN114998422B (en) | High-precision rapid three-dimensional positioning system based on error compensation model | |
CN109323650B (en) | A unified method for measuring coordinate system of visual image sensor and light spot ranging sensor in measuring system | |
CN109035200A (en) | A kind of bolt positioning and position and posture detection method based on the collaboration of single binocular vision | |
CN110497187A (en) | Sunflower module assembly system based on vision guidance | |
CN108151660B (en) | A kind of aircraft components butt-joint clearance and the measurement equipment of scale, method and system | |
CN109794963B (en) | A fast positioning method of robots for curved surface components | |
CN114993608B (en) | Wind tunnel model three-dimensional attitude angle measuring method | |
CN111127562B (en) | Calibration method and automatic calibration system for monocular area-array camera | |
CN115493489B (en) | Detection method of relevant surface of the object to be tested | |
CN114001651A (en) | Large-scale long and thin cylinder type component pose in-situ measurement method based on binocular vision measurement and prior detection data | |
JP2016170050A (en) | Position attitude measurement device, position attitude measurement method and computer program | |
CN111536872A (en) | Two-dimensional plane distance measuring device and method based on vision and mark point identification device | |
CN111207670A (en) | Line structured light calibration device and method | |
CN113724337B (en) | Camera dynamic external parameter calibration method and device without depending on tripod head angle | |
CN110568866A (en) | Three-dimensional curved surface vision guiding alignment system and alignment method | |
CN118744434A (en) | Automatic plugging and unplugging method of charging gun for mobile charging robot based on active visual positioning technology | |
CN112362034B (en) | Solid engine multi-cylinder section butt joint guiding measurement method based on binocular vision | |
CN107328358B (en) | The measuring system and measurement method of aluminium cell pose | |
CN115183677B (en) | Inspection and positioning system for automobile assembly | |
CN109773589B (en) | Method, device and equipment for online measurement and machining guidance of workpiece surface | |
CN115345924A (en) | A Bend Pipe Measurement Method Based on Multi-camera Line Laser Scanning | |
CN111028298B (en) | A converging binocular system for space transformation calibration of rigid body coordinate system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |