CN107240133A - Stereoscopic vision mapping model building method - Google Patents

Stereoscopic vision mapping model building method

Info

Publication number
CN107240133A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710269150.6A
Other languages
Chinese (zh)
Inventor
李军民
宋昌林
龙宇
曾印权
任二伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xihua University
Original Assignee
Xihua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xihua University filed Critical Xihua University
Priority to CN201710269150.6A
Publication of CN107240133A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker
    • G06T2207/30208 Marker matrix

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

The invention discloses a method for building a stereoscopic vision mapping model. Compared with the prior art, the method calibrates the key distortion factors with an improved Tsai algorithm and corrects image distortion in separate steps, which avoids the extra computation caused by the nonlinear mapping of the lens and reduces the number of algorithm iterations, yielding an efficient and simple algorithm. To further improve calibration accuracy, a nonlinear optimization algorithm combining particle swarm optimization (PSO) with particle filtering is adopted: PSO moves the particle set toward regions where the posterior probability density takes larger values, which overcomes particle impoverishment, greatly reduces the number of particles required for accurate estimation, and improves estimation accuracy. The projected pixel error serves as the constraint during PSO, improving the accuracy and robustness of the calibration. The method has good application prospects in stereoscopic vision navigation.

Description

A method for building a stereoscopic vision mapping model

Technical field

The invention relates to the field of machine vision, and in particular to a method for building a stereoscopic vision mapping model.

Background

Current vision systems mainly comprise monocular vision systems and stereoscopic vision systems. A simple monocular vision system images through a linear lens model; once the structure of the vision system is taken into account, its imaging is extended to spatial coordinate systems, and the global coordinate system, the camera coordinate system, the image-plane coordinate system, and the pixel coordinate system are constructed. The purpose of converting object space points is to extract the three-dimensional coordinates of the target from the captured image. A stereoscopic vision system adds depth information of the target point to the monocular imaging principle, which allows a mobile robot to recognize the attitude and spatial orientation of the target. However, the distortion introduced when the camera lens is manufactured inevitably affects the vision system's judgment of where a spatial point is imaged, so solving for the distortion factors with a sound mathematical algorithm is also key to improving the accuracy of the information perceived by a mobile robot's vision system.

Summary of the invention

The object of the present invention is to provide a method for building a stereoscopic vision mapping model that solves the above problems.

The present invention achieves this object through the following technical solution:

The invention comprises the following steps:

Step 1: particle sampling. The 28 intrinsic and extrinsic stereoscopic vision parameters calibrated by the improved Tsai algorithm are sampled, so each sampled particle has 28 dimensions. Random sampling is used; the importance density function is a Gaussian whose variance can be set from experience. A particle swarm H = {h_1, h_2, ..., h_N} of size N is generated in the space, where each particle h is a 28-dimensional vector:

h = (α_xl, α_yl, u_0l, v_0l, k_1l, k_2l, p_1l, p_2l, α_l, β_l, γ_l, t_xl, t_yl, t_zl, α_xr, α_yr, u_0r, v_0r, k_1r, k_2r, p_1r, p_2r, α_r, β_r, γ_r, t_xr, t_yr, t_zr)
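As an illustration of this sampling step, a minimal Python sketch is given below (not from the patent; the function name, the per-parameter standard deviations `sigma`, and the default swarm size are assumptions, with the 28-parameter layout following the vector h above):

```python
import numpy as np

def sample_particles(h0, sigma, n_particles=500, rng=None):
    """Draw a swarm of N particles around the initial Tsai calibration.

    h0, sigma: arrays of shape (28,), the initial parameter vector and the
    empirically chosen Gaussian standard deviations; returns (N, 28).
    """
    rng = np.random.default_rng() if rng is None else rng
    return h0[None, :] + sigma[None, :] * rng.standard_normal((n_particles, 28))
```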

Step 2: correct the images using the distortion parameters of each sampled particle, and generate the projection matrices from the particle's remaining parameters; each particle (a 28-dimensional vector) thus yields two corresponding distortion-corrected images and two projection matrices.
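A sketch of this step under stated assumptions (OpenCV's `cv2.undistort` stands in for the patent's own correction routine, a fifth radial coefficient k3 = 0 is assumed, and the projection matrix is assembled as M = K[R | t]; all names are illustrative):

```python
import cv2
import numpy as np

def particle_to_model(img, K, dist, R, t):
    """K: 3x3 intrinsics; dist: (k1, k2, p1, p2); R: 3x3 rotation; t: (3,)."""
    # Undistort with the particle's radial (k1, k2) and tangential (p1, p2)
    # coefficients; OpenCV expects a fifth coefficient, set here to zero.
    corrected = cv2.undistort(img, K, np.array([*dist, 0.0]))
    M = K @ np.hstack([R, np.asarray(t, dtype=float).reshape(3, 1)])
    return corrected, M
```

Applied to both cameras of one particle, this yields the two corrected images and two projection matrices described above.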

Step 3: determine the fitness function

$$E(h)=\frac{1}{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\lVert p_i-\hat{p}_{ih}\rVert^{2}}}\qquad(1)$$

where p_i is the pixel coordinate of the actual projection of the i-th space point and p̂_ih is the pixel coordinate estimated with the current particle h as the parameters of the nonlinear stereoscopic vision model, solved with the projection matrices generated for each particle;
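A minimal sketch of Eq. (1), assuming homogeneous world points and one projection matrix per camera (function and argument names are illustrative):

```python
import numpy as np

def fitness(M, pts_world_h, pts_pixel):
    """Inverse RMS reprojection error of Eq. (1).

    M:           3x4 projection matrix built from the particle's parameters
    pts_world_h: (n, 4) homogeneous world coordinates of the n model points
    pts_pixel:   (n, 2) actually observed pixel coordinates p_i
    """
    proj = pts_world_h @ M.T              # (n, 3) homogeneous pixel coords
    proj = proj[:, :2] / proj[:, 2:3]     # perspective divide -> (n, 2)
    rms = np.sqrt(np.mean(np.sum((pts_pixel - proj) ** 2, axis=1)))
    return 1.0 / rms                      # larger fitness = smaller error
```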

Step 4: particle swarm update;

To control the particle migration speed effectively and give the algorithm a fine search capability, an inertia factor is introduced into the particle swarm update strategy, borrowing the idea of simulated annealing. In each migration, every particle updates its position and velocity according to

$$v_{id}^{k+1}=\omega v_{id}^{k}+c_{1}r_{1}\left(P_{id}^{k}-h_{id}^{k}\right)+c_{2}r_{2}\left(P_{hd}^{k}-h_{id}^{k}\right),\qquad h_{id}^{k+1}=h_{id}^{k}+v_{id}^{k+1}\qquad(2)$$

In Eq. (2), v_{id}^k is the velocity of the i-th particle in dimension d at the k-th migration; c_1 is the cognitive factor and c_2 the social factor, which set the step sizes of migration toward the individual and global extrema respectively; r_1 and r_2 are random numbers distributed in (0, 1); P_{id}^k is the individual best position of the i-th particle in dimension d, and P_{hd}^k is the global best position of the swarm in dimension d; both are updated in real time from the particle fitness during the iterations. The inertia factor is ω = b - k(b - a)/K, with b = 0.9, a = 0.4, and K the maximum number of migrations;
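The update rule of Eq. (2) with the annealed inertia factor can be sketched as follows (the cognitive and social factors c1 = c2 = 2.0 are conventional defaults, not values stated in the patent; names are illustrative):

```python
import numpy as np

def pso_step(h, v, p_best, g_best, k, K, c1=2.0, c2=2.0, b=0.9, a=0.4, rng=None):
    """One migration of the swarm; h, v, p_best, g_best: (N, 28) arrays."""
    rng = np.random.default_rng() if rng is None else rng
    omega = b - k * (b - a) / K                  # inertia decays with k
    r1, r2 = rng.random(h.shape), rng.random(h.shape)
    v = omega * v + c1 * r1 * (p_best - h) + c2 * r2 * (g_best - h)
    return h + v, v                              # new positions and velocities
```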

Step 5: end of the particle swarm update;

When the swarm reaches the preset cutoff number of migrations K, or the particle fitness reaches the expected value ζ, the optimization stops. The updated particles are then passed to a particle filter, which uses the 3D coordinates of the checkerboard feature points as its optimization condition; this also matches the practical application of stereoscopic vision;

Step 6: obtain the feature-point observations P̄_i;

With the first checkerboard position as the world coordinate system, the coordinate values of the checkerboard feature points are taken as the observations. Since the checkerboard consists of 30 cm × 30 cm black and white squares, the observation of each feature point is easy to obtain;

Step 7: solve for the feature-point predictions m̄_i. Extract the pixel coordinates of the feature points in the left and right camera images, and compute the prediction of the corresponding feature point using the current particle h as the nonlinear parameters of the stereoscopic vision model. The computation is as follows:

$$Z_{C1}\begin{bmatrix}u_{1}\\ v_{1}\\ 1\end{bmatrix}=\begin{bmatrix}m_{11}^{1}&m_{12}^{1}&m_{13}^{1}&m_{14}^{1}\\ m_{21}^{1}&m_{22}^{1}&m_{23}^{1}&m_{24}^{1}\\ m_{31}^{1}&m_{32}^{1}&m_{33}^{1}&m_{34}^{1}\end{bmatrix}\begin{bmatrix}X_{W}\\ Y_{W}\\ Z_{W}\\ 1\end{bmatrix}\qquad(3)$$

$$Z_{C2}\begin{bmatrix}u_{2}\\ v_{2}\\ 1\end{bmatrix}=\begin{bmatrix}m_{11}^{2}&m_{12}^{2}&m_{13}^{2}&m_{14}^{2}\\ m_{21}^{2}&m_{22}^{2}&m_{23}^{2}&m_{24}^{2}\\ m_{31}^{2}&m_{32}^{2}&m_{33}^{2}&m_{34}^{2}\end{bmatrix}\begin{bmatrix}X_{W}\\ Y_{W}\\ Z_{W}\\ 1\end{bmatrix}\qquad(4)$$

where (X_W, Y_W, Z_W, 1) are the homogeneous coordinates of point P in the world coordinate system, and (u_1, v_1, 1) and (u_2, v_2, 1) are the homogeneous coordinates of the projections of P in the image coordinate systems of the left and right cameras. From Eqs. (3) and (4), four linear equations in X_W, Y_W, Z_W are obtained:

$$\begin{cases}(u_{1}m_{31}^{1}-m_{11}^{1})X_{W}+(u_{1}m_{32}^{1}-m_{12}^{1})Y_{W}+(u_{1}m_{33}^{1}-m_{13}^{1})Z_{W}=m_{14}^{1}-u_{1}m_{34}^{1}\\ (v_{1}m_{31}^{1}-m_{21}^{1})X_{W}+(v_{1}m_{32}^{1}-m_{22}^{1})Y_{W}+(v_{1}m_{33}^{1}-m_{23}^{1})Z_{W}=m_{24}^{1}-v_{1}m_{34}^{1}\\ (u_{2}m_{31}^{2}-m_{11}^{2})X_{W}+(u_{2}m_{32}^{2}-m_{12}^{2})Y_{W}+(u_{2}m_{33}^{2}-m_{13}^{2})Z_{W}=m_{14}^{2}-u_{2}m_{34}^{2}\\ (v_{2}m_{31}^{2}-m_{21}^{2})X_{W}+(v_{2}m_{32}^{2}-m_{22}^{2})Y_{W}+(v_{2}m_{33}^{2}-m_{23}^{2})Z_{W}=m_{24}^{2}-v_{2}m_{34}^{2}\end{cases}\qquad(5)$$

Equation (5) has three unknowns and four equations; to improve computational accuracy and reduce the influence of noise, it is solved by least squares;
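The least-squares solution of Eq. (5) can be sketched as below, with m1 and m2 the projection matrices of Eqs. (3) and (4) (function and argument names are illustrative):

```python
import numpy as np

def triangulate(u1, v1, u2, v2, m1, m2):
    """Solve Eq. (5) for (X_W, Y_W, Z_W) by least squares."""
    rows = []
    for (u, v), m in (((u1, v1), m1), ((u2, v2), m2)):
        rows.append(u * m[2, :] - m[0, :])   # (u*m_3 - m_1) . [X Y Z 1]^T = 0
        rows.append(v * m[2, :] - m[1, :])   # (v*m_3 - m_2) . [X Y Z 1]^T = 0
    A = np.array(rows)                        # 4x4; rewrite as A[:, :3] x = b
    b = -A[:, 3]
    x, *_ = np.linalg.lstsq(A[:, :3], b, rcond=None)
    return x
```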

Step 8: compute the particle weights and normalize them;

Computing the particle weights means computing the probability of each particle h_i;

Since the error distributions of the calibration feature points P_i are mutually independent, the probability of a calibration-parameter sampling particle is computed as

$$p(P\mid h_{i})=\prod_{i}p(P_{i}\mid h_{i})\qquad(6)$$

Equation (6) shows that if the influence of each feature point P_i on the calibration-parameter sampling particle can be found, that is, p(P_i | h_i), then the probability of the particle under the influence of all feature points is easy to obtain. Let the observation of each feature point be P̄_i and its prediction m̄_i. For computational simplicity, p(P_i | h_i) is assumed Gaussian, with mean u equal to the difference of P̄_i and m̄_i, and variance equal to the sum of the observation variance Σ_{P̄_i} and the prediction variance Σ_{m̄_i}:

$$p(P_{i}\mid h_{i})=N\!\left(\bar{P}_{i}-\bar{m}_{i},\,\Sigma_{\bar{P}_{i}}+\Sigma_{\bar{m}_{i}}\right)=\left(2\pi\left|\Sigma_{\bar{P}_{i}}+\Sigma_{\bar{m}_{i}}\right|\right)^{-\frac{1}{2}}\exp\!\left\{-\frac{1}{2}\left(\bar{P}_{i}-\bar{m}_{i}\right)^{T}\left(\Sigma_{\bar{P}_{i}}+\Sigma_{\bar{m}_{i}}\right)^{-1}\left(\bar{P}_{i}-\bar{m}_{i}\right)\right\}\qquad(7)$$
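A sketch of the weight of Eqs. (6) and (7), computed in the log domain for numerical stability (the standard multivariate normal density is used; exponentiating and normalizing the returned values yields the weights of Eq. (11); names are illustrative):

```python
import numpy as np

def particle_log_weight(obs, pred, cov):
    """obs, pred: (n, 3) observed and predicted 3D points; cov: 3x3 sum of
    the observation and prediction covariances."""
    d = obs - pred                                   # (n, 3) residuals
    cov_inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(2.0 * np.pi * cov)
    quad = np.einsum('ni,ij,nj->n', d, cov_inv, d)   # Mahalanobis terms
    return np.sum(-0.5 * (logdet + quad))            # log of the Eq. (6) product
```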

Here the observations are obtained from the checkerboard, so their error is negligible, while the predictions are obtained from stereoscopic vision;

The landmark error is computed as

$$X=(c-c_{0})\frac{b}{d};\qquad Y=(r-r_{0})\frac{b}{d};\qquad Z=f\,\frac{b}{d}\qquad(8)$$

In Eq. (8), r and c are the projected coordinates of the feature point in the left image, r_0 and c_0 are the center coordinates of the left image, b is the stereo baseline, d the disparity, and f the focal length. The variables c, r, d are treated as zero-mean Gaussian random variables; by the forward propagation theorem for covariance,

$$\Sigma_{X}\approx J\,\mathrm{diag}\!\left(\sigma_{c}^{2},\sigma_{r}^{2},\sigma_{d}^{2}\right)J^{T}\qquad(9)$$

J is the Jacobian matrix of Eq. (9), and σ_c², σ_r², σ_d² are the covariances of the corresponding variables, generally taken as empirical constants. Evaluating Eq. (9) gives:

$$\Sigma_{X}=\left(\frac{b}{d}\right)^{2}\times\begin{pmatrix}\sigma_{c}^{2}+\dfrac{\sigma_{d}^{2}(c-c_{0})^{2}}{d^{2}}&\dfrac{\sigma_{d}^{2}(c-c_{0})(r-r_{0})}{d^{2}}&\dfrac{\sigma_{d}^{2}(c-c_{0})f}{d^{2}}\\ \dfrac{\sigma_{d}^{2}(c-c_{0})(r-r_{0})}{d^{2}}&\sigma_{r}^{2}+\dfrac{\sigma_{r}^{2}(r-r_{0})^{2}}{d^{2}}&\dfrac{\sigma_{d}^{2}(r-r_{0})f}{d^{2}}\\ \dfrac{\sigma_{d}^{2}(c-c_{0})f}{d^{2}}&\dfrac{\sigma_{d}^{2}(r-r_{0})^{2}}{d^{2}}&\dfrac{\sigma_{d}^{2}f^{2}}{d^{2}}\end{pmatrix}\qquad(10)$$
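The propagation of Eq. (9) can be sketched directly from the Jacobian of Eq. (8); multiplying out J diag(σ_c², σ_r², σ_d²) J^T yields the closed form printed as Eq. (10) (function and argument names are illustrative):

```python
import numpy as np

def landmark_with_covariance(c, r, d, c0, r0, f, b, sigma_c, sigma_r, sigma_d):
    """Triangulate a landmark from left-image coords (c, r) and disparity d,
    then propagate the pixel noise through the Jacobian of Eq. (8)."""
    X, Y, Z = (c - c0) * b / d, (r - r0) * b / d, f * b / d   # Eq. (8)
    J = np.array([                        # d(X, Y, Z) / d(c, r, d)
        [b / d, 0.0,   -(c - c0) * b / d**2],
        [0.0,   b / d, -(r - r0) * b / d**2],
        [0.0,   0.0,   -f * b / d**2],
    ])
    cov = J @ np.diag([sigma_c**2, sigma_r**2, sigma_d**2]) @ J.T  # Eq. (9)
    return np.array([X, Y, Z]), cov
```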

Normalized weights:

$$\bar{\omega}^{i}=\frac{\omega^{i}}{\sum_{i=1}^{N}\omega^{i}}\qquad(11)$$

Step 9: compute the precise estimate of the calibration parameters and its covariance. The precise estimate is obtained by multiplying each group of sampled particles by its normalized weight and summing, as in Eq. (12); the covariance of the estimate is given by Eq. (13):

$$\hat{h}=\sum_{i=1}^{N}\bar{\omega}^{i}h_{i}\qquad(12)$$

$$P=\frac{1}{N}\sum_{i=1}^{N}\left(\hat{h}-h_{i}\right)\left(\hat{h}-h_{i}\right)^{T}\qquad(13)$$
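Eqs. (12) and (13) amount to a weighted mean over the swarm and its sample covariance, sketched below (names are illustrative):

```python
import numpy as np

def estimate(particles, weights):
    """particles: (N, 28); weights: (N,), normalized to sum to one."""
    h_hat = weights @ particles              # Eq. (12): weighted-mean estimate
    diff = h_hat[None, :] - particles
    P = diff.T @ diff / len(particles)       # Eq. (13): covariance of estimate
    return h_hat, P
```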

The particle-filter-based optimization of the stereoscopic vision calibration is well matched to the concrete application of stereoscopic vision navigation: all parameters are optimized jointly so that the 3D coordinate reprojection error is minimized.

The beneficial effects of the invention are as follows:

The invention is a method for building a stereoscopic vision mapping model. Compared with the prior art, it calibrates the key distortion factors with an improved Tsai algorithm. An OpenCV-based implementation is described in detail; the algorithm fully exploits the least-squares routines of the OpenCV library and corrects image distortion in separate steps, which avoids the extra computation caused by the nonlinear mapping of the lens and reduces the number of iterations, yielding an efficient and simple algorithm. To further improve calibration accuracy, a nonlinear optimization algorithm combining particle swarm optimization with particle filtering is adopted: PSO moves the particle set toward regions where the posterior probability density takes larger values, which overcomes particle impoverishment, greatly reduces the number of particles required for accurate estimation, and improves estimation accuracy. The projected pixel error serves as the constraint during PSO, while the particle filter computes the particle weights from the 3D coordinate errors of the checkerboard feature points, so both 2D and 3D errors are taken into account, greatly improving the accuracy and robustness of the calibration. Finally, an experimental analysis using the 3D projection error of the calibration-board feature points as the accuracy metric shows that the algorithm is accurate and robust and has good application prospects in stereoscopic vision navigation.

Brief description of the drawings

Fig. 1 is the 3D landmark uncertainty diagram of the invention;

Fig. 2 is the checkerboard plane calibration template of the invention;

Fig. 3 is a comparison of successfully detected feature points of the invention;

Fig. 4 is the distortion-correction target of the invention.

Detailed description

The invention is further described below with reference to the drawings:

The invention comprises the following steps:

Step 1: particle sampling. The 28 intrinsic and extrinsic stereoscopic vision parameters calibrated by the improved Tsai algorithm are sampled, so each sampled particle has 28 dimensions. Random sampling is used; the importance density function is a Gaussian whose variance can be set from experience. A particle swarm H = {h_1, h_2, ..., h_N} of size N is generated in the space, where each particle h is a 28-dimensional vector:

h = (α_xl, α_yl, u_0l, v_0l, k_1l, k_2l, p_1l, p_2l, α_l, β_l, γ_l, t_xl, t_yl, t_zl, α_xr, α_yr, u_0r, v_0r, k_1r, k_2r, p_1r, p_2r, α_r, β_r, γ_r, t_xr, t_yr, t_zr)

Step 2: correct the images using the distortion parameters of each sampled particle, and generate the projection matrices from the particle's remaining parameters; each particle (a 28-dimensional vector) thus yields two corresponding distortion-corrected images and two projection matrices.

Step 3: determine the fitness function

$$E(h)=\frac{1}{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\lVert p_i-\hat{p}_{ih}\rVert^{2}}}\qquad(1)$$

where p_i is the pixel coordinate of the actual projection of the i-th space point and p̂_ih is the pixel coordinate estimated with the current particle h as the parameters of the nonlinear stereoscopic vision model, solved with the projection matrices generated for each particle;

Step 4: particle swarm update.

To control the particle migration speed effectively and give the algorithm a fine search capability, an inertia factor is introduced into the particle swarm update strategy, borrowing the idea of simulated annealing. In each migration, every particle updates its position and velocity according to

$$v_{id}^{k+1}=\omega v_{id}^{k}+c_{1}r_{1}\left(P_{id}^{k}-h_{id}^{k}\right)+c_{2}r_{2}\left(P_{hd}^{k}-h_{id}^{k}\right),\qquad h_{id}^{k+1}=h_{id}^{k}+v_{id}^{k+1}\qquad(2)$$

In Eq. (2), v_{id}^k is the velocity of the i-th particle in dimension d at the k-th migration; c_1 is the cognitive factor and c_2 the social factor, which set the step sizes of migration toward the individual and global extrema respectively; r_1 and r_2 are random numbers distributed in (0, 1); P_{id}^k is the individual best position of the i-th particle in dimension d, and P_{hd}^k is the global best position of the swarm in dimension d; both are updated in real time from the particle fitness during the iterations. The inertia factor is ω = b - k(b - a)/K, with b = 0.9, a = 0.4, and K the maximum number of migrations;

Step 5: end of the particle swarm update.

When the swarm reaches the preset cutoff number of migrations K, or the particle fitness reaches the expected value ζ, the optimization stops. The updated particles are then passed to a particle filter, which uses the 3D coordinates of the checkerboard feature points as its optimization condition; this also matches the practical application of stereoscopic vision;

Step 6: obtain the feature-point observations P̄_i.

With the first checkerboard position as the world coordinate system, the coordinate values of the checkerboard feature points are taken as the observations. Since the checkerboard consists of 30 cm × 30 cm black and white squares, the observation of each feature point is easy to obtain;

Step 7: solve for the feature-point predictions m̄_i. Extract the pixel coordinates of the feature points in the left and right camera images, and compute the prediction of the corresponding feature point using the current particle h as the nonlinear parameters of the stereoscopic vision model. The computation is as follows:

$$Z_{C1}\begin{bmatrix}u_{1}\\ v_{1}\\ 1\end{bmatrix}=\begin{bmatrix}m_{11}^{1}&m_{12}^{1}&m_{13}^{1}&m_{14}^{1}\\ m_{21}^{1}&m_{22}^{1}&m_{23}^{1}&m_{24}^{1}\\ m_{31}^{1}&m_{32}^{1}&m_{33}^{1}&m_{34}^{1}\end{bmatrix}\begin{bmatrix}X_{W}\\ Y_{W}\\ Z_{W}\\ 1\end{bmatrix}\qquad(3)$$

$$Z_{C2}\begin{bmatrix}u_{2}\\ v_{2}\\ 1\end{bmatrix}=\begin{bmatrix}m_{11}^{2}&m_{12}^{2}&m_{13}^{2}&m_{14}^{2}\\ m_{21}^{2}&m_{22}^{2}&m_{23}^{2}&m_{24}^{2}\\ m_{31}^{2}&m_{32}^{2}&m_{33}^{2}&m_{34}^{2}\end{bmatrix}\begin{bmatrix}X_{W}\\ Y_{W}\\ Z_{W}\\ 1\end{bmatrix}\qquad(4)$$

where (X_W, Y_W, Z_W, 1) are the homogeneous coordinates of point P in the world coordinate system, and (u_1, v_1, 1) and (u_2, v_2, 1) are the homogeneous coordinates of the projections of P in the image coordinate systems of the left and right cameras. From Eqs. (3) and (4), four linear equations in X_W, Y_W, Z_W are obtained:

$$\begin{cases}(u_{1}m_{31}^{1}-m_{11}^{1})X_{W}+(u_{1}m_{32}^{1}-m_{12}^{1})Y_{W}+(u_{1}m_{33}^{1}-m_{13}^{1})Z_{W}=m_{14}^{1}-u_{1}m_{34}^{1}\\ (v_{1}m_{31}^{1}-m_{21}^{1})X_{W}+(v_{1}m_{32}^{1}-m_{22}^{1})Y_{W}+(v_{1}m_{33}^{1}-m_{23}^{1})Z_{W}=m_{24}^{1}-v_{1}m_{34}^{1}\\ (u_{2}m_{31}^{2}-m_{11}^{2})X_{W}+(u_{2}m_{32}^{2}-m_{12}^{2})Y_{W}+(u_{2}m_{33}^{2}-m_{13}^{2})Z_{W}=m_{14}^{2}-u_{2}m_{34}^{2}\\ (v_{2}m_{31}^{2}-m_{21}^{2})X_{W}+(v_{2}m_{32}^{2}-m_{22}^{2})Y_{W}+(v_{2}m_{33}^{2}-m_{23}^{2})Z_{W}=m_{24}^{2}-v_{2}m_{34}^{2}\end{cases}\qquad(5)$$

Equation (5) has three unknowns and four equations; to improve computational accuracy and reduce the influence of noise, it is solved by least squares;

Step 8: compute the particle weights and normalize them.

Computing the particle weights means computing the probability of each particle h_i;

Since the error distributions of the calibration feature points P_i are mutually independent, the probability of a calibration-parameter sampling particle is computed as

$$p(P\mid h_{i})=\prod_{i}p(P_{i}\mid h_{i})\qquad(6)$$

Equation (6) shows that if the influence of each feature point P_i on the calibration-parameter sampling particle can be found, that is, p(P_i | h_i), then the probability of the particle under the influence of all feature points is easy to obtain. Let the observation of each feature point be P̄_i and its prediction m̄_i. For computational simplicity, p(P_i | h_i) is assumed Gaussian, with mean u equal to the difference of P̄_i and m̄_i, and variance equal to the sum of the observation variance Σ_{P̄_i} and the prediction variance Σ_{m̄_i}:

$$p(P_{i}\mid h_{i})=N\!\left(\bar{P}_{i}-\bar{m}_{i},\,\Sigma_{\bar{P}_{i}}+\Sigma_{\bar{m}_{i}}\right)=\left(2\pi\left|\Sigma_{\bar{P}_{i}}+\Sigma_{\bar{m}_{i}}\right|\right)^{-\frac{1}{2}}\exp\!\left\{-\frac{1}{2}\left(\bar{P}_{i}-\bar{m}_{i}\right)^{T}\left(\Sigma_{\bar{P}_{i}}+\Sigma_{\bar{m}_{i}}\right)^{-1}\left(\bar{P}_{i}-\bar{m}_{i}\right)\right\}\qquad(7)$$

Here the observations are obtained from the checkerboard, so their error is negligible, while the predictions are obtained from stereoscopic vision;

The landmark error, illustrated in Fig. 1, is computed as

$$X=(c-c_{0})\frac{b}{d};\qquad Y=(r-r_{0})\frac{b}{d};\qquad Z=f\,\frac{b}{d}\qquad(8)$$

In Eq. (8), r and c are the projected coordinates of the feature point in the left image, r_0 and c_0 are the center coordinates of the left image, b is the stereo baseline, d the disparity, and f the focal length. The variables c, r, d are treated as zero-mean Gaussian random variables; by the forward propagation theorem for covariance,

$$\Sigma_{X}\approx J\,\mathrm{diag}\!\left(\sigma_{c}^{2},\sigma_{r}^{2},\sigma_{d}^{2}\right)J^{T}\qquad(9)$$

J is the Jacobian matrix of Eq. (9), and σ_c², σ_r², σ_d² are the covariances of the corresponding variables, generally taken as empirical constants. Evaluating Eq. (9) gives:

$$\Sigma_{X}=\left(\frac{b}{d}\right)^{2}\times\begin{pmatrix}\sigma_{c}^{2}+\dfrac{\sigma_{d}^{2}(c-c_{0})^{2}}{d^{2}}&\dfrac{\sigma_{d}^{2}(c-c_{0})(r-r_{0})}{d^{2}}&\dfrac{\sigma_{d}^{2}(c-c_{0})f}{d^{2}}\\ \dfrac{\sigma_{d}^{2}(c-c_{0})(r-r_{0})}{d^{2}}&\sigma_{r}^{2}+\dfrac{\sigma_{r}^{2}(r-r_{0})^{2}}{d^{2}}&\dfrac{\sigma_{d}^{2}(r-r_{0})f}{d^{2}}\\ \dfrac{\sigma_{d}^{2}(c-c_{0})f}{d^{2}}&\dfrac{\sigma_{d}^{2}(r-r_{0})^{2}}{d^{2}}&\dfrac{\sigma_{d}^{2}f^{2}}{d^{2}}\end{pmatrix}\qquad(10)$$

Normalized weights:

$$\bar{\omega}^{i}=\frac{\omega^{i}}{\sum_{i=1}^{N}\omega^{i}}\qquad(11)$$

Step 9: compute the precise estimate of the calibration parameters and its covariance. The precise estimate is obtained by multiplying each group of sampled particles by its normalized weight and summing, as in Eq. (12); the covariance of the estimate is given by Eq. (13):

$$\hat{h}=\sum_{i=1}^{N}\bar{\omega}^{i}h_{i}\qquad(12)$$

$$P=\frac{1}{N}\sum_{i=1}^{N}\left(\hat{h}-h_{i}\right)\left(\hat{h}-h_{i}\right)^{T}\qquad(13)$$

The particle-filter-based optimization of the stereoscopic vision calibration is well matched to the concrete application of stereoscopic vision navigation: all parameters are optimized jointly so that the 3D coordinate reprojection error is minimized.

Experiments and analysis of results

As shown in Fig. 2: since the accuracy of a 3D calibration object is hard to guarantee, checkerboard plane calibration templates with alternating black and white squares are used at several different spatial orientations. A Bumblebee2 camera made by Point Grey Research is used as the image acquisition device. The binocular vision system has a baseline of 120 mm, a maximum resolution of 1024 × 768, a pixel area of 4.65 µm × 4.65 µm, and a frame rate of 30 FPS; the focal length is 3.8 mm at a 70° horizontal field of view and 6 mm at a 50° horizontal field of view. The digital image bit depth is 24 bit as determined by the supported data format, and the image signal-to-noise ratio is greater than 60 dB at 0 dB gain.

The target checkerboard calibration board measures 270 mm × 210 mm; each square is 30 mm × 30 mm with a 30 mm center distance in a 9 × 7 array. Every image is 640 × 480 pixels with 8-bit depth, the flatness error of the calibration surface is within ±0.05 mm, and the angle between the lens imaging plane and the calibration target surface is kept within 50°.

In the experiments, the binocular camera captured 3 groups, 10 images in all, at different spatial orientations; the captured checkerboard should occupy most of the imaging plane. When extracting feature points, the algorithm first draws the feature-point image within the region around the image center to avoid the calibration error caused by excessive distortion at the image edges; the position of a feature point is marked by the intersection of the two diameters inside a circle in the figure, and the rows and columns of feature points on the calibration plate are then marked. In a successfully processed image, the last feature point of one row is connected to the first feature point of the next row by a colored straight line to check whether all feature points have been found, and each row of feature points is distinguished by circles of a different color. The three checkerboard images in the first row of the figure are the feature-point extraction results of the left camera; the three images in the second row are the results of the right camera under the same spatial imaging mapping.

As shown in Fig. 3: to verify the feasibility of the proposed calibration method, the particle swarm optimized particle filter algorithm proposed here (method 1) is compared with a two-step stereoscopic vision calibration method using genetic-algorithm nonlinear optimization [81] (method 2) and a two-step method using only particle swarm nonlinear optimization [77] (method 3). Method 1 uses 500 particles with a cutoff of 500 migrations, method 2 uses a genetic cutoff of 10000 generations, and method 3 uses a particle swarm cutoff of 3000 migrations. The optimized parameters are listed in Table 3.2.

Table 3.2 Comparison of parameters

After the particle swarm optimized particle filter, the parameters are closer to the true values. To evaluate the algorithm further, and in keeping with the stereoscopic vision navigation application, the root mean square of the true spatial coordinate points and the corresponding reconstructed coordinates is used as the evaluation index, as in Eq. (3-30); the parameters are listed in Table 3.3.

Table 3.3 Performance evaluation results

The relative position parameters of the stereoscopic vision system calibrated by the three algorithms are listed in Table 3.4; the parameters from the particle swarm optimized particle filter are closest to the manufacturer's factory parameters.

Table 3.4 Comparison of the relative position parameters of the left and right cameras

Finally, the calibrated distortion parameters are used to correct the images. Fig. 4 shows the checkerboard images of Fig. 3 after distortion correction; the algorithm visibly corrects the radial and tangential distortion that lens imaging produces at the image edges.

Summary

This disclosure has introduced stereoscopic vision calibration. After analyzing several calibration methods in common use, a stereoscopic camera calibration method with a multi-term distortion model based on a 2D planar target is proposed; the algorithm linearly solves ten camera parameters, including the scale focal-length factors, the pixel-plane center coordinates, the rotation matrix parameters, and the translation matrix parameters. For the key distortion factors, an improved Tsai-based calibration is proposed. An OpenCV-based implementation is described in detail; the algorithm fully exploits the least-squares routines of the OpenCV library and corrects image distortion in separate steps, avoiding the extra computation caused by the nonlinear mapping of the lens and reducing the number of iterations, yielding an efficient and simple algorithm. To further improve calibration accuracy, a nonlinear optimization algorithm combining particle swarm optimization with particle filtering is adopted: PSO moves the particle set toward regions where the posterior probability density takes larger values, overcoming particle impoverishment, greatly reducing the number of particles required for accurate estimation, and improving estimation accuracy. The projected pixel error serves as the constraint during PSO, while the particle filter computes the particle weights from the 3D coordinate errors of the checkerboard feature points, so both 2D and 3D errors are taken into account, greatly improving the accuracy and robustness of the calibration. Finally, experimental analysis using the 3D projection error of the calibration-board feature points as the accuracy metric shows that the algorithm is accurate and robust and has good application prospects in stereoscopic vision navigation.

The basic principles, main features, and advantages of the invention have been shown and described above. Those skilled in the art should understand that the invention is not limited by the above embodiments, which, together with the specification, merely illustrate its principles; various changes and improvements may be made without departing from the spirit and scope of the invention, and all such changes and improvements fall within the scope of the claimed invention. The scope of protection is defined by the appended claims and their equivalents.

Claims (1)

1. A method for building a stereoscopic vision mapping model, characterized in that it comprises the following steps:

Step 1: particle sampling. The 28 intrinsic and extrinsic stereoscopic vision parameters calibrated by the improved Tsai algorithm are sampled, so each sampled particle has 28 dimensions; random sampling is used, and the importance density function is a Gaussian whose variance can be obtained from experience. A particle swarm H = {h_1, h_2, ..., h_N} of size N is generated in the space, where each particle h is a 28-dimensional vector:

h = (α_xl, α_yl, u_0l, v_0l, k_1l, k_2l, p_1l, p_2l, α_l, β_l, γ_l, t_xl, t_yl, t_zl, α_xr, α_yr, u_0r, v_0r, k_1r, k_2r, p_1r, p_2r, α_r, β_r, γ_r, t_xr, t_yr, t_zr)

Step 2: correct the images using the distortion parameters of each sampled particle and generate the projection matrices from the particle's remaining parameters, so that each particle (a 28-dimensional vector) yields two corresponding distortion-corrected images and two projection matrices;

Step 3: determine the fitness function

$$E(h)=\frac{1}{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\lVert p_i-\hat{p}_{ih}\rVert^{2}}}\qquad(1)$$

where p_i is the pixel coordinate of the actual projection of the i-th space point and p̂_ih is the pixel coordinate estimated with the current particle h as the parameters of the nonlinear stereoscopic vision model, solved with the projection matrices generated for each particle;

Step 4: particle swarm update. To control the particle migration speed effectively and give the algorithm a fine search capability, an inertia factor is introduced into the particle swarm update strategy, borrowing the idea of simulated annealing; in each migration, every particle updates its position and velocity according to

$$v_{id}^{k+1}=\omega v_{id}^{k}+c_{1}r_{1}\left(P_{id}^{k}-h_{id}^{k}\right)+c_{2}r_{2}\left(P_{hd}^{k}-h_{id}^{k}\right),\qquad h_{id}^{k+1}=h_{id}^{k}+v_{id}^{k+1}\qquad(2)$$

In Eq. (2), v_{id}^k is the velocity of the i-th particle in dimension d at the k-th migration; c_1 is the cognitive factor and c_2 the social factor, which set the step sizes of migration toward the individual and global extrema respectively; r_1 and r_2 are random numbers distributed in (0, 1); P_{id}^k is the individual best position of the i-th particle in dimension d and P_{hd}^k is the global best position of the swarm in dimension d, both updated in real time from the particle fitness during the iterations; the inertia factor is ω = b - k(b - a)/K, with b = 0.9, a = 0.4, and K the maximum number of migrations;

Step 5: end of the particle swarm update. When the swarm reaches the preset cutoff number of migrations K or the particle fitness reaches the expected value ζ, the optimization stops; the updated particles are passed to a particle filter, which uses the 3D coordinates of the checkerboard feature points as its optimization condition, matching the practical application of stereoscopic vision;

Step 6: obtain the feature-point observations P̄_i. With the first checkerboard position as the world coordinate system, the coordinate values of the checkerboard feature points are taken as the observations; since the checkerboard consists of 30 cm × 30 cm black and white squares, the observation of each feature point is easy to obtain;

Step 7: solve for the feature-point predictions m̄_i. Extract the pixel coordinates of the feature points in the left and right camera images, and compute the prediction of the corresponding feature point using the current particle h as the nonlinear parameters of the stereoscopic vision model, as follows:

$$Z_{C1}\begin{bmatrix}u_{1}\\ v_{1}\\ 1\end{bmatrix}=\begin{bmatrix}m_{11}^{1}&m_{12}^{1}&m_{13}^{1}&m_{14}^{1}\\ m_{21}^{1}&m_{22}^{1}&m_{23}^{1}&m_{24}^{1}\\ m_{31}^{1}&m_{32}^{1}&m_{33}^{1}&m_{34}^{1}\end{bmatrix}\begin{bmatrix}X_{W}\\ Y_{W}\\ Z_{W}\\ 1\end{bmatrix}\qquad(3)$$

$$Z_{C2}\begin{bmatrix}u_{2}\\ v_{2}\\ 1\end{bmatrix}=\begin{bmatrix}m_{11}^{2}&m_{12}^{2}&m_{13}^{2}&m_{14}^{2}\\ m_{21}^{2}&m_{22}^{2}&m_{23}^{2}&m_{24}^{2}\\ m_{31}^{2}&m_{32}^{2}&m_{33}^{2}&m_{34}^{2}\end{bmatrix}\begin{bmatrix}X_{W}\\ Y_{W}\\ Z_{W}\\ 1\end{bmatrix}\qquad(4)$$

where (X_W, Y_W, Z_W, 1) are the homogeneous coordinates of point P in the world coordinate system, and (u_1, v_1, 1) and (u_2, v_2, 1) are the homogeneous coordinates of the projections of P in the image coordinate systems of the left and right cameras; from Eqs. (3) and (4), four linear equations in X_W, Y_W, Z_W are obtained:

$$\begin{cases}(u_{1}m_{31}^{1}-m_{11}^{1})X_{W}+(u_{1}m_{32}^{1}-m_{12}^{1})Y_{W}+(u_{1}m_{33}^{1}-m_{13}^{1})Z_{W}=m_{14}^{1}-u_{1}m_{34}^{1}\\ (v_{1}m_{31}^{1}-m_{21}^{1})X_{W}+(v_{1}m_{32}^{1}-m_{22}^{1})Y_{W}+(v_{1}m_{33}^{1}-m_{23}^{1})Z_{W}=m_{24}^{1}-v_{1}m_{34}^{1}\\ (u_{2}m_{31}^{2}-m_{11}^{2})X_{W}+(u_{2}m_{32}^{2}-m_{12}^{2})Y_{W}+(u_{2}m_{33}^{2}-m_{13}^{2})Z_{W}=m_{14}^{2}-u_{2}m_{34}^{2}\\ (v_{2}m_{31}^{2}-m_{21}^{2})X_{W}+(v_{2}m_{32}^{2}-m_{22}^{2})Y_{W}+(v_{2}m_{33}^{2}-m_{23}^{2})Z_{W}=m_{24}^{2}-v_{2}m_{34}^{2}\end{cases}\qquad(5)$$

Equation (5) has three unknowns and four equations; to improve computational accuracy and reduce the influence of noise, it is solved by least squares;

Step 8: compute the particle weights and normalize them. Computing the particle weights means computing the probability of each particle h_i. Since the error distributions of the calibration feature points P_i are mutually independent, the probability of a calibration-parameter sampling particle is computed as

$$p(P\mid h_{i})=\prod_{i}p(P_{i}\mid h_{i})\qquad(6)$$

Equation (6) shows that if the influence of each feature point P_i on the calibration-parameter sampling particle can be found, that is, p(P_i | h_i), then the probability of the particle under the influence of all feature points is easy to obtain; let the observation of each feature point be P̄_i and its prediction m̄_i; for computational simplicity, p(P_i | h_i) is assumed Gaussian, with mean u equal to the difference of P̄_i and m̄_i and variance equal to the sum of the observation variance Σ_{P̄_i} and the prediction variance Σ_{m̄_i}:

$$p(P_{i}\mid h_{i})=N\!\left(\bar{P}_{i}-\bar{m}_{i},\,\Sigma_{\bar{P}_{i}}+\Sigma_{\bar{m}_{i}}\right)=\left(2\pi\left|\Sigma_{\bar{P}_{i}}+\Sigma_{\bar{m}_{i}}\right|\right)^{-\frac{1}{2}}\exp\!\left\{-\frac{1}{2}\left(\bar{P}_{i}-\bar{m}_{i}\right)^{T}\left(\Sigma_{\bar{P}_{i}}+\Sigma_{\bar{m}_{i}}\right)^{-1}\left(\bar{P}_{i}-\bar{m}_{i}\right)\right\}\qquad(7)$$

where the observations are obtained from the checkerboard, so their error is negligible, and the predictions are obtained from stereoscopic vision;

The landmark error is computed as

$$X=(c-c_{0})\frac{b}{d};\qquad Y=(r-r_{0})\frac{b}{d};\qquad Z=f\,\frac{b}{d}\qquad(8)$$

In Eq. (8), r and c are the projected coordinates of the feature point in the left image, r_0 and c_0 are the center coordinates of the left image, b is the stereo baseline, d the disparity, and f the focal length; the variables c, r, d are treated as zero-mean Gaussian random variables, and by the forward propagation theorem for covariance,

$$\Sigma_{X}\approx J\,\mathrm{diag}\!\left(\sigma_{c}^{2},\sigma_{r}^{2},\sigma_{d}^{2}\right)J^{T}\qquad(9)$$

where J is the Jacobian matrix of Eq. (9) and σ_c², σ_r², σ_d² are the covariances of the corresponding variables, generally taken as empirical constants; evaluating Eq. (9) gives:

$$\Sigma_{X}=\left(\frac{b}{d}\right)^{2}\times\begin{pmatrix}\sigma_{c}^{2}+\dfrac{\sigma_{d}^{2}(c-c_{0})^{2}}{d^{2}}&\dfrac{\sigma_{d}^{2}(c-c_{0})(r-r_{0})}{d^{2}}&\dfrac{\sigma_{d}^{2}(c-c_{0})f}{d^{2}}\\ \dfrac{\sigma_{d}^{2}(c-c_{0})(r-r_{0})}{d^{2}}&\sigma_{r}^{2}+\dfrac{\sigma_{r}^{2}(r-r_{0})^{2}}{d^{2}}&\dfrac{\sigma_{d}^{2}(r-r_{0})f}{d^{2}}\\ \dfrac{\sigma_{d}^{2}(c-c_{0})f}{d^{2}}&\dfrac{\sigma_{d}^{2}(r-r_{0})^{2}}{d^{2}}&\dfrac{\sigma_{d}^{2}f^{2}}{d^{2}}\end{pmatrix}\qquad(10)$$

Normalized weights:

$$\bar{\omega}^{i}=\frac{\omega^{i}}{\sum_{i=1}^{N}\omega^{i}}\qquad(11)$$

Step 9: compute the precise estimate of the calibration parameters and its covariance; the precise estimate is obtained by multiplying each group of sampled particles by its normalized weight and summing, as in Eq. (12), and the covariance of the estimate is given by Eq. (13):

$$\hat{h}=\sum_{i=1}^{N}\bar{\omega}^{i}h_{i}\qquad(12)$$

$$P=\frac{1}{N}\sum_{i=1}^{N}\left(\hat{h}-h_{i}\right)\left(\hat{h}-h_{i}\right)^{T}\qquad(13)$$

The particle-filter-based stereoscopic vision calibration optimization is well matched to the concrete application of stereoscopic vision navigation: all parameters are optimized jointly so that the 3D coordinate reprojection error is minimized.
CN201710269150.6A 2017-04-24 2017-04-24 Stereoscopic vision mapping model building method Pending CN107240133A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710269150.6A CN107240133A (en) 2017-04-24 2017-04-24 Stereoscopic vision mapping model building method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710269150.6A CN107240133A (en) 2017-04-24 2017-04-24 Stereoscopic vision mapping model building method

Publications (1)

Publication Number Publication Date
CN107240133A true CN107240133A (en) 2017-10-10

Family

ID=59984066

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710269150.6A Pending CN107240133A (en) 2017-04-24 2017-04-24 Stereoscopic vision mapping model building method

Country Status (1)

Country Link
CN (1) CN107240133A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109726853A (en) * 2018-12-04 2019-05-07 东莞理工学院 Path planning algorithm for industrial collaborative robot based on machine vision
CN110333513A (en) * 2019-07-10 2019-10-15 国网四川省电力公司电力科学研究院 A particle filter SLAM method fused with least squares
CN112465918A (en) * 2020-12-06 2021-03-09 西安交通大学 Microscopic vision calibration method based on Tsai calibration

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654476A (en) * 2015-12-25 2016-06-08 江南大学 Binocular calibration method based on chaotic particle swarm optimization algorithm
CN106447763A (en) * 2016-07-27 2017-02-22 扬州大学 Face image three-dimensional reconstruction method for fusion of sparse deformation model and principal component regression algorithm

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654476A (en) * 2015-12-25 2016-06-08 江南大学 Binocular calibration method based on chaotic particle swarm optimization algorithm
CN106447763A (en) * 2016-07-27 2017-02-22 扬州大学 Face image three-dimensional reconstruction method for fusion of sparse deformation model and principal component regression algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JUNMIN LI et al.: "SLAM Based on Information Fusion of Stereo Vision and Electronic Compass", International Journal of Robotics and Automation *
LI JUNMIN et al.: "Robot Pose Estimation and Accuracy Analysis Based on Stereo Vision", 2013 IEEE 9th International Conference on Mobile Ad-hoc and Sensor Networks *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109726853A (en) * 2018-12-04 2019-05-07 东莞理工学院 Path planning algorithm for industrial collaborative robot based on machine vision
CN110333513A (en) * 2019-07-10 2019-10-15 国网四川省电力公司电力科学研究院 A particle filter SLAM method fused with least squares
CN110333513B (en) * 2019-07-10 2023-01-10 国网四川省电力公司电力科学研究院 Particle filter SLAM method fusing least square method
CN112465918A (en) * 2020-12-06 2021-03-09 西安交通大学 Microscopic vision calibration method based on Tsai calibration
CN112465918B (en) * 2020-12-06 2024-04-02 西安交通大学 Microscopic vision calibration method based on Tsai calibration

Similar Documents

Publication Publication Date Title
CN112102458B (en) Single-lens three-dimensional image reconstruction method based on laser radar point cloud data assistance
CN107358631B (en) A Binocular Vision Reconstruction Method Considering 3D Distortion
CN104182982B (en) Overall optimizing method of calibration parameter of binocular stereo vision camera
CN109272537B (en) A panoramic point cloud registration method based on structured light
CN114399554B (en) Calibration method and system of multi-camera system
CN106023303B (en) A method of Three-dimensional Gravity is improved based on profile validity and is laid foundations the dense degree of cloud
WO2018127007A1 (en) Depth image acquisition method and system
CN108335350A (en) The three-dimensional rebuilding method of binocular stereo vision
CN103607584B (en) Real-time registration method for depth maps shot by kinect and video shot by color camera
CN107945220A (en) A kind of method for reconstructing based on binocular vision
CN111126148A (en) DSM (digital communication system) generation method based on video satellite images
CN105825543B (en) Point off density cloud generation method and system are regarded based on low altitude remote sensing image more
CN109215118B (en) Incremental motion structure recovery optimization method based on image sequence
CN110660099B (en) Rational function model fitting method for remote sensing image processing based on neural network
CN112270698B (en) Non-rigid geometric registration method based on nearest curved surface
CN102496160A (en) Calibrating method for centralized vision system of soccer robot
CN105654476A (en) Binocular calibration method based on chaotic particle swarm optimization algorithm
CN106056625B (en) A kind of Airborne IR moving target detecting method based on geographical same place registration
CN109658497B (en) A three-dimensional model reconstruction method and device
CN112258583A (en) Distortion calibration method for close-range images based on equal-distortion variable partitioning
CN106408596A (en) Edge-based local stereo matching method
CN111457930A (en) A high-precision mapping and positioning method using the combination of vehicle-mounted lidar and UAV
CN107240133A (en) Stereoscopic vision mapping model building method
CN113160335A (en) Model point cloud and three-dimensional surface reconstruction method based on binocular vision
CN115578469A (en) Multi-view combined calibration method for large complex workpiece

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171010