CN106595517A - Structured light measuring system calibration method based on projecting fringe geometric distribution characteristic - Google Patents


Publication number
CN106595517A
Authority
CN
China
Prior art keywords: camera, formula, coordinate, coefficient, coordinate system
Legal status: Granted
Application number
CN201611072978.4A
Other languages
Chinese (zh)
Other versions
CN106595517B (en)
Inventor
孙长库
陆鹏
王鹏
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date
Publication date
Application filed by Tianjin University
Priority to CN201611072978.4A
Publication of CN106595517A
Application granted
Publication of CN106595517B
Current legal status: Expired - Fee Related


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/2433 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures for measuring outlines by shadow casting


Abstract

The invention relates to visual inspection technology and proposes a calibration method for a structured light measurement system whose calibration model has a wide range of application, high measurement accuracy, and a simple calibration procedure. To this end, the technical solution adopted by the invention is a calibration method for a structured light measurement system based on the geometric distribution characteristics of the projected fringes: first, the coordinates of known spatial points in the camera coordinate system are obtained with an uncertain-viewing-angle calibration method; then the camera-coordinate-system coordinates of these points, together with their image coordinates and encoded values, are used to calibrate the system, establishing respectively the relationship between the Z-direction coordinate and the image information and the relationship between the XY-direction coordinates and the image information. The invention is mainly applied to visual inspection.

Description

Calibration method for a structured light measurement system based on the geometric distribution characteristics of projected fringes

Technical Field

The invention relates to visual inspection technology, and in particular to a calibration method for a structured light measurement system based on the geometric distribution characteristics of projected fringes.

Background

Structured light three-dimensional vision measurement has the advantages of visual measurement: it is non-contact, fast, highly automated, and flexible. Based on the principle of optical triangulation, it recovers the surface profile of the measured object by computing the offsets of the feature points of various light patterns in the captured images. Because the optical projector projects a known light pattern, the structured light image information is easy to extract; the measurement accuracy is therefore high, and the technique is widely used for on-line inspection of industrial products.

The accuracy of a structured light measurement system depends on the accuracy of its calibration. Existing calibration methods fall into three categories: photogrammetric methods based on matrix transformation, triangulation methods based on geometric relationships, and polynomial fitting. Photogrammetric methods are further divided into the pseudo-camera method, the inverse-camera method, and the light-plane method. Their main drawback is that the projector calibration depends on the camera calibration parameters, so camera calibration errors propagate; their advantage is that they generally impose no special constraints on system installation, and both installation and calibration are simple to carry out. Triangulation based on geometric relationships derives, from the geometry of the system, a mathematical expression relating the 3D coordinates to a small number of system parameters, which serves as the model for calibration and measurement. Its advantage is that projector calibration is avoided; its drawbacks are that it requires high installation accuracy, and an over-simplified model yields low accuracy. The polynomial fitting method assumes that the 3D coordinates of the measured object can be expressed as a polynomial in the encoded values at the corresponding camera pixels; the polynomial parameters are then determined experimentally, directly establishing a mapping from the 2D camera image coordinates to the 3D spatial coordinates of the scene points. This avoids calibrating both camera and projector, but calibration is time-consuming and costly. A spatial mapping model proposed by Lendray et al. is widely used; it is essentially an interpolation method and produces large errors when measuring objects outside the calibrated range. In summary, a calibration method with a simple apparatus, a convenient and fast calibration procedure, and high accuracy is of great significance.

Summary of the Invention

To overcome the deficiencies of the prior art, the invention proposes a calibration method for a structured light measurement system whose calibration model has a wide range of application, high measurement accuracy, and a simple calibration procedure. To this end, the technical solution adopted is a calibration method for a structured light measurement system based on the geometric distribution characteristics of the projected fringes: first, the coordinates of known spatial points in the camera coordinate system are obtained with an uncertain-viewing-angle calibration method; then the camera-coordinate-system coordinates of these points, together with their image coordinates and encoded values, are used to calibrate the system, establishing respectively the relationship between the Z-direction coordinate and the image information and the relationship between the XY-direction coordinates and the image information.

The specific steps for calibrating the Z direction are as follows. The Z-direction value of the measurement system is jointly determined by the image coordinates (u, v) and the corresponding encoded value p. The optical axes of the projector lens and the CCD camera lens form an angle α, and the direction of fringe variation projected by the projector is assumed to be parallel to the u direction of the camera. Three planes h1, h2, h3 are parallel to the camera imaging plane; the distance between h1 and h2 equals the distance between h2 and h3, both being Δh. On the line from the intersection of plane h3 with the camera optical axis to the projector position A, the intersections with planes h2 and h1 are D and B in turn; the intersections of planes h1, h2, h3 with the projector optical axis are C, E, G in turn. The encoded value p is the same everywhere on a single projected ray, and at a given camera depth the rate of change of u with p satisfies:

du/dp = kp + b    (1)

where du/dp is the rate of change of u with p, k is the slope, and b is the intercept. Since, at a given camera depth, u does not change with v, we also have

du/dv = 0    (2)

where du/dv is the rate of change of u with v. Integrating (1) and (2),

u = ∫(kp + b)dp    (3)

gives the relationship between the image coordinates (u, v) and the corresponding encoded value p at a given camera depth, where A0 is the quadratic coefficient in p, B0 the linear coefficient in p, and D0 the constant term:

u = A0p² + B0p + D0    (4)

Since in the camera coordinate system uov and the projector coordinate system u'o'v' the axes o'u' and ou form an angle β, the uov coordinate system must be rotated:

u0 = u cosβ + v sinβ    (5)

Substituting into (4) gives

u cosβ + v sinβ = A0p² + B0p + D0

u = A0p²/cosβ + B0p/cosβ - v tanβ + D0/cosβ    (6)

that is,

u = A1p² + B1p + C1v + D1    (7)

where A1 is the quadratic coefficient in p, B1 the linear coefficient in p, C1 the linear coefficient in v, and D1 the constant term.

The points (u, v, p) at a given camera depth lie on a paraboloid, and A1 increases with the angle α between the projector and camera optical axes and with the camera depth. The unknown parameters of (7) are computed from the acquired image information by least squares.

By the similar-triangle relations ΔABC ~ ΔADE ~ ΔAFG, we obtain

where p0 is the encoded value on the projector optical axis and l denotes the length between two points. From geometric optics, the fringe period width also varies linearly with camera depth, i.e.:

where r1 is the linear coefficient in h of the numerator, r2 the constant term of the numerator, r3 the linear coefficient in h of the denominator, and r4 the constant term of the denominator.

Normalizing the coefficient A1 in the surface equation (7), i.e. eliminating the influence of the change in fringe width on the encoded differences at different camera depths, gives:

where u/A1 and v/A1 are the normalized image coordinates. With the influence of fringe width on the encoded differences removed, B1/A1 varies linearly with the camera depth h, and the constant term D1/A1 varies as a quadratic function of h, i.e.:

B1/A1 = s1h + s2

D1/A1 = t0h² + t1h + t2    (11)

where s1 and s2 are the linear coefficient and constant term of the linear expression, and t0, t1, t2 are the quadratic, linear, and constant coefficients of the quadratic expression.

Substituting (9) and (11) into (7) yields

To eliminate camera lens distortion and the effect of the optical axes of the projector and camera lenses not being coplanar in the horizontal plane, a quadratic error compensation model is adopted, where k0~k5 and l0~l5 are error correction coefficients.

Formula (12) then becomes

where a0 and a1 are the constant term and linear coefficient within the quadratic coefficient in h, b0~b7 are the constant, linear, and quadratic coefficients within the linear coefficient in h, and c0~c7 are the constant, linear, and quadratic coefficients within the constant term in h.

The N groups of known spatial points, with Z-direction coordinates ZCi in the camera coordinate system and corresponding image information (ui, vi, pi), satisfy condition (14), i.e.

PX = Q    (15)

where X is the 18×1 coefficient column vector, P is an N×18 matrix, and Q is an N×1 column vector, with

X = [g0 g1 … gi … g17]^T

Construct the error function

The unknown parameters in (17) are found with the Levenberg-Marquardt algorithm; the Z-direction coordinate ZC in the camera coordinate system is then obtained from the image information (u, v, p) by solving the equation with the cubic formula.

The specific steps for calibrating the XY direction are as follows:

The XY-direction values of the system are jointly determined by the distortion-corrected image coordinates (u, v) and the Z-direction value ZC in the camera coordinate system, using the formula

where s is a scale factor and m11~m34 are polynomial parameters. During calibration, the polynomial parameters m11~m34 are computed by least squares from the coordinates (XC, YC, ZC) of known spatial points in the camera coordinate system and the distortion-corrected image coordinates (u, v). During measurement, substituting the image coordinates (u, v) and the Z value ZC into the polynomial yields the corresponding coordinates (XC, YC) in the X and Y directions; the image information (u, v, p) thus yields the three-dimensional information (XC, YC, ZC) of the measured object in the camera coordinate system.

To obtain the coordinates (XW, YW, ZW) in the world coordinate system, the camera-coordinate-system coordinates are converted to world coordinates with the coordinate transformation formula (20), where R is the rotation matrix and T the translation matrix.

During calibration, the target is placed in the camera's field of view and moved; at each position, structured light fringes are projected and images are captured. The coordinates of the feature points on the target in the camera coordinate system are obtained with the uncertain-viewing-angle calibration method, and the image coordinates and encoded values of the feature points are obtained by image processing. Substituting the known quantities into (17) and (19) yields the system parameters by computation. In actual measurement, the coordinates of the measured points in the camera coordinate system are obtained through (18) and (19), and their coordinates in the world coordinate system through (20).

The features and beneficial effects of the invention are as follows:

The invention is applicable to grating and binary-fringe projection in the form of vertical light stripes. The relationship between the image feature points with their encoded values and the spatial coordinates is established through the derivation of a geometric model. The method overcomes the strict system constraints of traditional triangulation based on geometric relationships, and also solves the problem that the polynomial fitting method cannot measure objects outside the calibrated range. The calibration procedure is simple and convenient, the measurement accuracy is high, and the applicable measurement range is large.

Brief Description of the Drawings:

Figure 1: schematic diagram of the geometric model of the system.

Figure 2: comparison of calibration feature points with actual measured points.

Figure 3: measurement result.

Detailed Description

The calibration method proposed here does not require the assistance of a precision guide rail; the calibration procedure is simple and the accuracy high, so it is well suited to on-site calibration.

The calibration method consists of two parts: first, the coordinates of known spatial points in the camera coordinate system are obtained with the uncertain-viewing-angle calibration method; then the camera-coordinate-system coordinates of these points, together with their image coordinates and encoded values, are used to calibrate the system, establishing respectively the relationship between the Z-direction coordinate and the image information and the relationship between the XY-direction coordinates and the image information.

1. Calibration in the Z direction

The Z-direction value of the measurement system is jointly determined by the image coordinates (u, v) and the corresponding encoded value p. The optical axes of the projector lens and the CCD camera lens form an angle α, and the direction of fringe variation projected by the projector is assumed to be parallel to the u direction of the camera; geometric optics shows that the width of the projected fringes increases linearly with the projection distance, so at a given camera depth the rate of change of u with p satisfies:

du/dp = kp + b    (1)

where du/dp is the rate of change of u with p, k is the slope, and b is the intercept. Since, at a given camera depth, u does not change with v, we also have

du/dv = 0    (2)

where du/dv is the rate of change of u with v.

Integrating (1) and (2),

u = ∫(kp + b)dp    (3)

gives the relationship between the image coordinates (u, v) and the corresponding encoded value p at a given camera depth, where A0 is the quadratic coefficient in p, B0 the linear coefficient in p, and D0 the constant term.

u = A0p² + B0p + D0    (4)
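At a fixed camera depth, (4) is an ordinary quadratic in p, so its coefficients can be recovered from sampled (p, u) pairs with a one-line polynomial fit. A minimal sketch with NumPy; the coefficient values are illustrative, not taken from the patent:

```python
import numpy as np

# Illustrative coefficients for u = A0*p**2 + B0*p + D0 at one camera depth
A0, B0, D0 = 0.002, 1.5, 120.0

p = np.linspace(0.0, 100.0, 50)      # encoded (fringe) values
u = A0 * p**2 + B0 * p + D0          # ideal image u-coordinates

# Least-squares fit of the quadratic model (4); returns [A0, B0, D0]
A0_fit, B0_fit, D0_fit = np.polyfit(p, u, 2)
print(A0_fit, B0_fit, D0_fit)
```

With real data the samples would come from the decoded fringe image rather than a synthetic model, but the fit itself is unchanged.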

Since in the camera coordinate system uov and the projector coordinate system u'o'v' the axes o'u' and ou form a small angle β, the uov coordinate system must be rotated:

u0 = u cosβ + v sinβ    (5)

Substituting into (4) gives

u cosβ + v sinβ = A0p² + B0p + D0

u = A0p²/cosβ + B0p/cosβ - v tanβ + D0/cosβ    (6)

that is,

u = A1p² + B1p + C1v + D1    (7)

where A1 is the quadratic coefficient in p, B1 the linear coefficient in p, C1 the linear coefficient in v, and D1 the constant term.

It follows that the points (u, v, p) at a given camera depth lie on a paraboloid, and that A1 increases with the angle α between the projector and camera optical axes and with the camera depth. During calibration, the unknown parameters of (7) can be computed from the acquired image information by least squares.
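Because (7) is linear in the unknowns A1, B1, C1, D1, the least-squares fit mentioned above reduces to a single linear solve. A sketch on synthetic data; all numeric values are illustrative:

```python
import numpy as np

# Illustrative paraboloid coefficients for u = A1*p**2 + B1*p + C1*v + D1
A1, B1, C1, D1 = 0.0021, 1.48, -0.02, 118.0

rng = np.random.default_rng(0)
p = rng.uniform(0.0, 100.0, 200)     # encoded values
v = rng.uniform(0.0, 480.0, 200)     # image v-coordinates
u = A1 * p**2 + B1 * p + C1 * v + D1

# Design matrix: one column per unknown coefficient of model (7)
M = np.column_stack([p**2, p, v, np.ones_like(p)])
coeffs, *_ = np.linalg.lstsq(M, u, rcond=None)
print(coeffs)                        # fitted [A1, B1, C1, D1]
```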

We now examine how the paraboloid parameters vary with camera depth.

Figure 1 shows the geometric model of the system, in which the optical axes of the projector and the camera form an angle α. The encoded value p is the same everywhere on a single projected ray; for example, C, E, and G carry the same encoded value. The three planes h1, h2, h3 are parallel to the camera imaging plane; the distance between h1 and h2 equals the distance between h2 and h3, both being Δh. By the similar-triangle relations ΔABC ~ ΔADE ~ ΔAFG, we obtain

where p0 is the encoded value on the projector optical axis and l denotes the length between two points. Figure 1 shows that the length corresponding to a fixed encoded difference varies linearly with camera depth, and geometric optics shows that the fringe period width also varies linearly with camera depth, i.e.

where r1 is the linear coefficient in h of the numerator, r2 the constant term of the numerator, r3 the linear coefficient in h of the denominator, and r4 the constant term of the denominator.

Normalizing the coefficient A1 in the surface equation (7), i.e. eliminating the influence of the change in fringe width on the encoded differences at different camera depths, gives

where u/A1 and v/A1 are the normalized image coordinates. We next consider how B1/A1 and D1/A1 vary with the camera depth h. With the influence of fringe width on the encoded differences removed, B1/A1 varies linearly with h, and the constant term D1/A1 varies as a quadratic function of h, i.e.

B1/A1 = s1h + s2

D1/A1 = t0h² + t1h + t2    (11)

where s1 and s2 are the linear coefficient and constant term of the linear expression, and t0, t1, t2 are the quadratic, linear, and constant coefficients of the quadratic expression.
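Once the paraboloid of (7) has been fitted at several known depths, the trend parameters of (11) are themselves recovered by two small polynomial fits over depth. A sketch with hypothetical depth samples and illustrative trend values:

```python
import numpy as np

# Illustrative trends: B1/A1 linear in depth h, D1/A1 quadratic in h (eq. 11)
s1, s2 = 0.8, 5.0
t0, t1, t2 = 0.01, -0.3, 40.0

h = np.array([400.0, 450.0, 500.0, 550.0, 600.0])  # hypothetical calibration depths
B1_over_A1 = s1 * h + s2                            # would come from per-depth fits of (7)
D1_over_A1 = t0 * h**2 + t1 * h + t2

s_fit = np.polyfit(h, B1_over_A1, 1)   # fitted [s1, s2]
t_fit = np.polyfit(h, D1_over_A1, 2)   # fitted [t0, t1, t2]
print(s_fit, t_fit)
```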

Substituting (9) and (11) into (7) yields

To eliminate camera lens distortion and the effect of the optical axes of the projector and camera lenses not being coplanar in the horizontal plane, a quadratic error compensation model is adopted, where k0~k5 and l0~l5 are error correction coefficients.

Formula (12) then becomes

where a0 and a1 are the constant term and linear coefficient within the quadratic coefficient in h, b0~b7 are the constant, linear, and quadratic coefficients within the linear coefficient in h, and c0~c7 are the constant, linear, and quadratic coefficients within the constant term in h.

The N groups of known spatial points, with Z-direction coordinates ZCi in the camera coordinate system and corresponding image information (ui, vi, pi), satisfy condition (14), i.e.

PX = Q    (15)

where X is the 18×1 coefficient column vector, P is an N×18 matrix, and Q is an N×1 column vector, with

X = [g0 g1 … gi … g17]^T

Construct the error function

The unknown parameters in (17) are found with the Levenberg-Marquardt algorithm; the Z-direction coordinate ZC in the camera coordinate system can then be obtained from the image information (u, v, p) by solving the equation with the cubic formula.
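The two numerical steps named here can be sketched generically: a Levenberg-Marquardt fit of the model parameters, followed by solving a cubic for the depth. The residual function and every number below are placeholders, since equations (12) through (17) are not reproduced in this text; the sketch shows only the mechanics, not the patent's actual 18-parameter model:

```python
import numpy as np
from scipy.optimize import least_squares

# Placeholder model: a generic cubic in the depth h, fitted to synthetic data.
def residuals(x, h, y):
    g0, g1, g2, g3 = x
    return g0 * h**3 + g1 * h**2 + g2 * h + g3 - y

h_obs = np.linspace(4.0, 6.0, 30)                              # sample depths
y_obs = 0.002 * h_obs**3 - 0.1 * h_obs**2 + 0.5 * h_obs + 10.0

sol = least_squares(residuals, np.ones(4), args=(h_obs, y_obs), method="lm")
g0, g1, g2, g3 = sol.x

# Measurement step: given an observed value y, solve the cubic for the depth h
y = 10.25                                                      # value at true depth h = 5.0
roots = np.roots([g0, g1, g2, g3 - y])
h_sol = [r.real for r in roots if abs(r.imag) < 1e-6 and 4.0 <= r.real <= 6.0]
print(h_sol)
```

Restricting the roots to the working depth range, as above, is one way to pick the physically meaningful solution of the cubic.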

2. Calibration in the XY direction

The XY-direction values of the system are jointly determined by the distortion-corrected image coordinates (u, v) and the Z-direction value ZC in the camera coordinate system, using the formula

where s is a scale factor and m11~m34 are polynomial parameters. During calibration, the polynomial parameters m11~m34 are computed by least squares from the coordinates (XC, YC, ZC) of known spatial points in the camera coordinate system and the distortion-corrected image coordinates (u, v). During measurement, substituting the image coordinates (u, v) and the Z value ZC into the polynomial yields the corresponding coordinates (XC, YC) in the X and Y directions; the image information (u, v, p) thus yields the three-dimensional information (XC, YC, ZC) of the measured object in the camera coordinate system.
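Equations (18) and (19) are not reproduced in this text; the parameters s and m11~m34 suggest the standard 3×4 projection form s·[u, v, 1]^T = M·[X, Y, Z, 1]^T. Under that assumption, the least-squares estimation of m11~m34 can be sketched as a direct linear transform (DLT), here on synthetic noise-free data with an illustrative ground-truth matrix:

```python
import numpy as np

# Illustrative ground-truth 3x4 projection matrix (assumed model form)
M_true = np.array([[800.0,   0.0, 320.0,  10.0],
                   [  0.0, 800.0, 240.0,  20.0],
                   [  0.0,   0.0,   1.0, 500.0]])

rng = np.random.default_rng(1)
XYZ = rng.uniform(-50.0, 50.0, (40, 3))            # synthetic camera-frame points
proj = np.hstack([XYZ, np.ones((40, 1))]) @ M_true.T
uv = proj[:, :2] / proj[:, 2:3]                    # projected image coordinates

# DLT: each point contributes two equations, linear in the 12 entries of M
rows = []
for (X, Y, Z), (u, v) in zip(XYZ, uv):
    rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
    rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])

# Homogeneous least squares: the right singular vector of the smallest
# singular value gives M up to scale; normalize by the m34 entry.
_, _, Vt = np.linalg.svd(np.asarray(rows))
M_fit = Vt[-1].reshape(3, 4)
M_fit *= M_true[2, 3] / M_fit[2, 3]
err = np.abs(M_fit - M_true).max()
print(err)
```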

To obtain the coordinates in the world coordinate system, the camera-coordinate-system coordinates are converted to world coordinates with the coordinate transformation formula (20), where R is the rotation matrix and T the translation matrix.
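Formula (20) itself is not reproduced in this text; the standard rigid-body form is [XW, YW, ZW]^T = R·[XC, YC, ZC]^T + T. A minimal sketch under that assumption, with an illustrative rotation and translation:

```python
import numpy as np

# Illustrative pose: rotation by 30 degrees about the camera Z axis, plus a translation
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([100.0, -50.0, 500.0])

def camera_to_world(P_c):
    """Map a camera-frame point to the world frame (assumed form of formula (20))."""
    return R @ P_c + T

P_c = np.array([10.0, 0.0, 0.0])
print(camera_to_world(P_c))   # approx [108.66, -45.0, 500.0]
```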

During calibration, the target is placed in the camera's field of view and moved; at each position, structured light fringes are projected and images are captured. The coordinates of the feature points on the target in the camera coordinate system are obtained with the uncertain-viewing-angle calibration method, and the image coordinates and encoded values of the feature points are obtained by image processing. Substituting the known quantities into (17) and (19) yields the system parameters by computation. In actual measurement, the coordinates of the measured points in the camera coordinate system are obtained through (18) and (19), and their coordinates in the world coordinate system through (20).

Figure 2 compares the measured values of the feature points on the target with their set values; the red stars denote measured values and the blue triangles denote set values. Analysis of the experimental data shows that the system measurement error is within 0.1 mm, indicating that the invention can be successfully applied to the calibration of structured light measurement systems. Figure 3 shows the system measuring a human face.

Claims (3)

1. A calibration method for a structured-light measurement system based on the geometric distribution characteristics of the projected fringes, characterized in that: the coordinates of known space points in the camera coordinate system are first obtained with the uncertain-view calibration method; the camera-frame coordinates of these points, together with their image coordinates and code values, are then used to calibrate the system, establishing separately the relation between the Z-direction coordinate and the image information and the relation between the XY-direction coordinates and the image information.
2. The calibration method of claim 1, characterized in that the concrete steps of calibrating the Z direction are as follows: the Z-direction value of the measuring system is determined jointly by the image coordinates (u, v) and the corresponding code value p. The projector lens optical axis makes an angle α with the CCD camera lens optical axis, and the fringe direction projected by the projector is assumed parallel to the u direction of the camera. Three planes h1, h2, h3 are parallel to the camera imaging plane; the spacing between h1 and h2 equals the spacing between h2 and h3, namely Δh. The line from the intersection of plane h3 with the camera optical axis to the projector position A intersects planes h2 and h1 at D and B in turn, and the projector optical axis intersects planes h1, h2, h3 at C, E, G in turn. The code value p is the same along any single ray projected by the projector, and at the same camera depth the rate of change of u with p satisfies the linear relation:
du/dp = kp + b    (1)
where du/dp is the rate of change of u with p, k is the slope and b the intercept. Since u at the same camera depth does not change with v, we also have
du/dv = 0    (2)
where du/dv is the rate of change of u with v. Integrating formula (1) subject to formula (2) gives
u = ∫(kp + b)dp    (3)
which yields the relation between the image coordinates (u, v) and the corresponding code value p at the same camera depth, where A0 is the quadratic coefficient in the code value p, B0 the linear coefficient in p, and D0 the constant term:
u = A0p² + B0p + D0    (4)
Since the camera image coordinate system uov and the projector coordinate system u'o'v' have an angle β between the axes o'u' and ou, a rotation transformation must be applied to the uov coordinate system:
u0 = u·cosβ + v·sinβ    (5)
Substituting into formula (4) gives
u·cosβ + v·sinβ = A0p² + B0p + D0
u = A0p²/cosβ + B0p/cosβ − v·tanβ + D0/cosβ    (6)
which can be written as
u = A1p² + B1p + C1v + D1    (7)
where A1 is the quadratic coefficient in the code value p, B1 the linear coefficient in p, C1 the linear coefficient in the image coordinate v, and D1 the constant term;
The points (u, v, p) at the same camera depth lie on a parabola, and A1 increases with the angle α between the projector and camera lens optical axes and with the camera depth; the unknown parameters of formula (7) are computed from the acquired image information by the method of least squares;
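The least-squares fit of formula (7) at one camera depth is an ordinary linear regression in the basis (p², p, v, 1); a minimal numpy sketch (the function name is illustrative, not from the patent):

```python
import numpy as np

def fit_stripe_parabola(u, v, p):
    """Least-squares fit of formula (7), u = A1*p^2 + B1*p + C1*v + D1,
    from pixel samples (u, v) with code value p at one camera depth."""
    p = np.asarray(p, dtype=float)
    v = np.asarray(v, dtype=float)
    # design matrix in the basis (p^2, p, v, 1)
    G = np.column_stack([p**2, p, v, np.ones_like(p)])
    coef, *_ = np.linalg.lstsq(G, np.asarray(u, dtype=float), rcond=None)
    return coef  # [A1, B1, C1, D1]
```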
By the similar-triangle theorem, ΔABC ∼ ΔADE ∼ ΔAFG, which gives
l(BC)/L = l(DE)/(L + Δh) = l(FG)/(L + 2Δh)    (8)
where p0 is the code value on the projector optical axis and l denotes the length between two points. From geometric optics, the fringe period width is also linear in the camera depth, so that:
A1 = (r1h + r2)/(r3h + r4)    (9)
where r1 is the linear coefficient in h of the numerator, r2 the constant term of the numerator, r3 the linear coefficient in h of the denominator, and r4 the constant term of the denominator;
Normalizing the coefficient A1 in formula (7), which eliminates the effect of the change in fringe width on the code difference at different camera depths, gives:
u/A1 = p² + (B1/A1)p + (C1/A1)v + D1/A1    (10)
where u/A1 and v/A1 are the normalized image coordinates. With the fringe-width effect on the code difference removed, B1/A1 varies linearly with the camera depth h, and the constant term D1/A1 varies as a quadratic function of h, i.e.:
B1/A1 = s1h + s2
D1/A1 = t0h² + t1h + t2    (11)
where s1 and s2 are the linear coefficient and constant term of the linear expression, and t0, t1 and t2 are the quadratic, linear and constant coefficients of the quadratic expression;
Substituting formulas (9) and (11) into formula (7) gives
u = (r1h + r2)/(r3h + r4)·(p² + (s1h + s2)p + (t0h² + t1h + t2)) + C1v    (12)
To eliminate the influence of camera lens distortion and of the projector and camera lens optical axes not being coplanar in the horizontal plane, a second-order error compensation model is used, where k0~k5 and l0~l5 are the error-correction coefficients:
u = k0ud² + k1vd² + k2udvd + k3ud + k4vd + k5
v = l0ud² + l1vd² + l2udvd + l3ud + l4vd + l5    (13)
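Applying the second-order compensation of formula (13) to a distorted pixel reduces to a dot product with the basis (ud², vd², udvd, ud, vd, 1); a minimal sketch with illustrative names:

```python
import numpy as np

def correct_point(ud, vd, k, l):
    """Second-order compensation of formula (13): map a distorted pixel
    (ud, vd) to the corrected (u, v) with coefficients k0..k5 and l0..l5."""
    basis = np.array([ud*ud, vd*vd, ud*vd, ud, vd, 1.0])
    return float(np.dot(k, basis)), float(np.dot(l, basis))
```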
Formula (12) therefore becomes
h³ + (a0 + a1p)h² + (b0 + b1p + b2p² + b3ud² + b4vd² + b5udvd + b6ud + b7vd)h + (c0 + c1p + c2p² + c3ud² + c4vd² + c5udvd + c6ud + c7vd) = 0    (14)
where a0 and a1 are the constant term and the coefficient of p in the coefficient of h², b0~b7 are the corresponding terms of the coefficient of h, and c0~c7 the corresponding terms of the constant;
The camera-frame Z coordinates ZCi of N known space points and the corresponding image information (ui, vi, pi) satisfy formula (14), that is
PX = Q    (15)
where X is the 18×1 coefficient column vector, P an N×18 matrix and Q an N×1 column vector, with
X = [g0 g1 … gi … g17]^T
row i of P (i = 1, …, N):
[ZCi²  piZCi²  ZCi  piZCi  pi²ZCi  ui²ZCi  vi²ZCi  uiviZCi  uiZCi  viZCi  1  pi  pi²  ui²  vi²  uivi  ui  vi]
Q = −[ZC1³  ZC2³  …  ZCi³  …  ZCN³]^T    (16)
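Building P and Q and solving formula (15) by linear least squares can be sketched as follows (function names are illustrative; such a linear solution could also serve as an initial estimate for a subsequent nonlinear refinement):

```python
import numpy as np

def p_row(Zc, u, v, p):
    """One row of the N x 18 matrix P of formula (16)."""
    return [Zc*Zc, p*Zc*Zc,
            Zc, p*Zc, p*p*Zc, u*u*Zc, v*v*Zc, u*v*Zc, u*Zc, v*Zc,
            1.0, p, p*p, u*u, v*v, u*v, u, v]

def solve_g(points):
    """Linear least-squares solution of P X = Q (formula (15)) for
    X = [g0 .. g17]; points is a list of (Zc, u, v, p) samples."""
    P = np.array([p_row(*pt) for pt in points], dtype=float)
    Q = -np.array([Zc**3 for Zc, _, _, _ in points], dtype=float)
    X, *_ = np.linalg.lstsq(P, Q, rcond=None)
    return X
```

With 18 or more well-spread calibration samples that satisfy formula (14) exactly, the least-squares solution recovers the coefficient vector uniquely.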
Construct the error function
F = Σ[(g0 + g1pi)ZCi² + (g2 + g3pi + g4pi² + g5ui² + g6vi² + g7uivi + g8ui + g9vi)ZCi + (g10 + g11pi + g12pi² + g13ui² + g14vi² + g15uivi + g16ui + g17vi) + ZCi³]²    (17)
The unknown parameters of formula (17) are obtained with the Levenberg-Marquardt algorithm, after which the camera-frame Z coordinate ZC is recovered from the image information (u, v, p) by solving the following equation with the cubic root formula:
ZCi³ + (g0 + g1pi)ZCi² + (g2 + g3pi + g4pi² + g5ui² + g6vi² + g7uivi + g8ui + g9vi)ZCi + (g10 + g11pi + g12pi² + g13ui² + g14vi² + g15uivi + g16ui + g17vi) = 0    (18)
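Instead of the closed-form cubic root formula, the roots of formula (18) can also be obtained numerically via the companion matrix; a sketch (the root-selection policy is an editor's assumption — the patent does not specify how the physical root is chosen):

```python
import numpy as np

def depth_candidates(u, v, p, g):
    """Real roots of the cubic (18) in Zc for one image measurement
    (u, v, p), given the 18 calibrated coefficients g[0..17]."""
    a2 = g[0] + g[1]*p
    a1 = (g[2] + g[3]*p + g[4]*p*p + g[5]*u*u + g[6]*v*v
          + g[7]*u*v + g[8]*u + g[9]*v)
    a0 = (g[10] + g[11]*p + g[12]*p*p + g[13]*u*u + g[14]*v*v
          + g[15]*u*v + g[16]*u + g[17]*v)
    roots = np.roots([1.0, a2, a1, a0])  # companion-matrix eigenvalues
    # keep the real roots; the physically valid depth would be the root
    # that falls inside the working range of the system
    return np.sort(roots[np.abs(roots.imag) < 1e-8].real)
```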
The concrete steps of calibrating the XY directions are as follows:
the XY values of the system are determined jointly by the distortion-corrected image coordinates (u, v) and the camera-frame Z value ZC, using the formula
s·[u v 1]^T = [m11 m12 m13 m14; m21 m22 m23 m24; m31 m32 m33 m34]·[XC YC ZC 1]^T    (19)
where s is a scale factor and m11~m34 are the polynomial parameters. During calibration, the parameters m11~m34 are computed by least squares from the camera-frame coordinates (XC, YC, ZC) of known space points and the distortion-corrected image points (u, v). During measurement, substituting the image coordinates (u, v) and the Z value ZC into the polynomial yields the corresponding coordinates (XC, YC), so that the image information (u, v, p) gives the full three-dimensional position (XC, YC, ZC) of the measured object in the camera coordinate system.
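With ZC known, formula (19) reduces to two linear equations in (XC, YC): multiplying the third row by u (respectively v) and subtracting the first (respectively second) row eliminates the scale s. A minimal sketch with illustrative names:

```python
import numpy as np

def xy_from_depth(u, v, Zc, M):
    """Invert formula (19) in X and Y: with Zc known, eliminating the
    scale s leaves a 2x2 linear system in (Xc, Yc)."""
    A = np.array([[M[0, 0] - u*M[2, 0], M[0, 1] - u*M[2, 1]],
                  [M[1, 0] - v*M[2, 0], M[1, 1] - v*M[2, 1]]])
    b = np.array([u*(M[2, 2]*Zc + M[2, 3]) - (M[0, 2]*Zc + M[0, 3]),
                  v*(M[2, 2]*Zc + M[2, 3]) - (M[1, 2]*Zc + M[1, 3])])
    return np.linalg.solve(A, b)  # (Xc, Yc)
```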
3. The calibration method of claim 1, characterized in that, to obtain the coordinates (XW, YW, ZW) in the world coordinate system, the camera-frame coordinates are converted to world-frame coordinates with the coordinate transformation of formula (20), where R is the rotation matrix and T the translation vector;
[XW YW ZW]^T = R·[XC YC ZC]^T + T    (20)
During calibration, the target is placed in the camera's field of view and moved; at each position the structured-light fringes are projected and an image is captured. The camera-frame coordinates of the target feature points are obtained with the uncertain-view calibration method, and their image coordinates and feature values by image processing. Substituting the known quantities into formulas (17) and (19) yields the system parameters by computation. In actual measurement, the camera-frame coordinates of the measured points are obtained through formulas (18) and (19), and their world-frame coordinates are then obtained through formula (20).
CN201611072978.4A 2016-11-29 2016-11-29 Project striped geometry distribution characteristics structured light measurement system scaling method Expired - Fee Related CN106595517B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611072978.4A CN106595517B (en) 2016-11-29 2016-11-29 Project striped geometry distribution characteristics structured light measurement system scaling method


Publications (2)

Publication Number Publication Date
CN106595517A true CN106595517A (en) 2017-04-26
CN106595517B CN106595517B (en) 2019-01-29

Family

ID=58593675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611072978.4A Expired - Fee Related CN106595517B (en) 2016-11-29 2016-11-29 Project striped geometry distribution characteristics structured light measurement system scaling method

Country Status (1)

Country Link
CN (1) CN106595517B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0538688A (en) * 1991-07-30 1993-02-19 Nok Corp Coordinate system calibrating method for industrial robot system
US5227985A (en) * 1991-08-19 1993-07-13 University Of Maryland Computer vision system for position monitoring in three dimensions using non-coplanar light sources attached to a monitored object
CN101216296A (en) * 2008-01-11 2008-07-09 天津大学 Binocular Vision Shaft Calibration Method
CN101245994A (en) * 2008-03-17 2008-08-20 南京航空航天大学 Calibration method of structured light measurement system for three-dimensional contour of object surface
CN103411553A (en) * 2013-08-13 2013-11-27 天津大学 Fast calibration method of multiple line structured light visual sensor


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018228013A1 (en) * 2017-06-12 2018-12-20 北京航空航天大学 Front coated plane mirror-based structured light parameter calibration device and method
US10690492B2 (en) 2017-06-12 2020-06-23 Beihang University Structural light parameter calibration device and method based on front-coating plane mirror
CN110542540A (en) * 2019-07-18 2019-12-06 北京的卢深视科技有限公司 Optical axis alignment correction method of structured light module
CN111161358A (en) * 2019-12-31 2020-05-15 华中科技大学鄂州工业技术研究院 Camera calibration method and device for structured light depth measurement
CN111161358B (en) * 2019-12-31 2022-10-21 华中科技大学鄂州工业技术研究院 A camera calibration method and device for structured light depth measurement
CN113188478A (en) * 2021-04-28 2021-07-30 伏燕军 Mixed calibration method for telecentric microscopic three-dimensional measurement system
CN113418472A (en) * 2021-08-24 2021-09-21 深圳市华汉伟业科技有限公司 Three-dimensional measurement method and system

Also Published As

Publication number Publication date
CN106595517B (en) 2019-01-29

Similar Documents

Publication Publication Date Title
CN106595517A (en) Structured light measuring system calibration method based on projecting fringe geometric distribution characteristic
CN105698699B (en) A kind of Binocular vision photogrammetry method based on time rotating shaft constraint
CN104835158B (en) 3D Point Cloud Acquisition Method Based on Gray Code Structured Light and Epipolar Constraint
CN105043251B (en) A kind of scaling method and device of the line structure optical sensor based on mechanical movement
CN102472609B (en) Position and orientation calibration method and apparatus
CN105300316B (en) Optical losses rapid extracting method based on grey scale centre of gravity method
CN102252653B (en) Position and attitude measurement method based on time of flight (TOF) scanning-free three-dimensional imaging
Chatterjee et al. Algorithms for coplanar camera calibration
CN105043259A (en) Numerical control machine tool rotating shaft error detection method based on binocular vision
CN103528543A (en) System calibration method for grating projection three-dimensional measurement
CN101354796B (en) Omnidirectional stereo vision three-dimensional rebuilding method based on Taylor series model
CN101149836A (en) A two-camera calibration method for 3D reconstruction
CN113658266B (en) Visual measurement method for rotation angle of moving shaft based on fixed camera and single target
CN110940295A (en) High-reflection object measurement method and system based on laser speckle limit constraint projection
CN104316083A (en) Three-dimensional coordinate calibration device and method of TOF (Time-of-Flight) depth camera based on sphere center positioning of virtual multiple spheres
Liu et al. Camera orientation optimization in stereo vision systems for low measurement error
CN104794718A (en) Single-image CT (computed tomography) machine room camera calibration method
Zhang et al. Iterative projector calibration using multi-frequency phase-shifting method
CN110248179A (en) Camera pupil aberration correcting method based on light field coding
CN103258327A (en) Single-pint calibration method based on two-degree-freedom video camera
Cauchois et al. Calibration of the omnidirectional vision sensor: SYCLOP
CN105528788B (en) Scaling method, device and the device for determining 3D shape of relative pose parameter
Im et al. A solution for camera occlusion using a repaired pattern from a projector
Zhang et al. A survey of catadioptric omnidirectional camera calibration
CN118334239A (en) Pipeline three-dimensional reconstruction method and equipment based on stripe projection measurement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190129

Termination date: 20201129