CN107240133A - Stereoscopic vision mapping model building method - Google Patents

Stereoscopic vision mapping model building method

Info

Publication number
CN107240133A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710269150.6A
Other languages
Chinese (zh)
Inventor
李军民
宋昌林
龙宇
曾印权
任二伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xihua University
Original Assignee
Xihua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xihua University filed Critical Xihua University
Priority to CN201710269150.6A
Publication of CN107240133A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • G06T2207/30208Marker matrix

Abstract

Compared with the prior art, the disclosed method for establishing a stereoscopic vision mapping model offers the following advantages. An improved Tsai-based algorithm is proposed for calibrating the key distortion factors, and the image is distortion-corrected step by step, which avoids the increase in computation caused by the nonlinear mapping of the lens and reduces the number of algorithm iterations, making the method efficient and simple. To further improve calibration accuracy, a nonlinear optimization algorithm combining particle swarm optimization and particle filtering is adopted: particle swarm optimization moves the particle set toward regions where the posterior probability density is larger, which alleviates particle impoverishment, greatly reduces the number of particles required for accurate estimation, and improves estimation accuracy. In addition, the projected pixel error is used as a constraint during particle swarm optimization, which improves calibration accuracy and robustness. The method therefore has good application prospects in stereoscopic vision navigation.

Description

A stereoscopic vision mapping model building method
Technical field
The present invention relates to the technical field of machine vision, and more particularly to a stereoscopic vision mapping model building method.
Background technology
Current vision systems mainly fall into single-camera (monocular) vision systems and stereo vision systems. A simple monocular vision system images through a linear lens model; once the structure of the vision system is taken into account, its imaging is extended into a spatial imaging coordinate frame, and the world coordinate system, camera coordinate system, imaging-plane coordinate system and pixel coordinate system are constructed respectively. The purpose of converting a target's spatial points is to extract the object's three-dimensional space coordinates from the captured image. A stereo vision system adds the depth information of target points on top of the monocular imaging principle, which allows a mobile robot to recognize the target's posture and spatial orientation well. However, distortion factors introduced during lens manufacturing inevitably affect the vision system's judgment of where a spatial point is imaged, so reasonably solving for the distortion factors with a mathematical algorithm is the key to improving the accuracy of the information perceived by the mobile robot vision system.
Content of the invention
The purpose of the present invention is to provide a stereoscopic vision mapping model building method that solves the above problems.
The present invention achieves the above purpose through the following technical solution:
The present invention comprises the following steps:
Step 1: Particle sampling. The stereo vision intrinsic and extrinsic parameters calibrated by the improved Tsai algorithm total 28, so each sampled particle has 28 dimensions. Particles are drawn by random sampling with a Gaussian importance density whose variance can be set from experience. A swarm H = {h1, h2, ..., hN} of size N is generated in the parameter space, where each particle h is a 28-dimensional vector: h = (αxl, αyl, u0l, v0l, k1l, k2l, p1l, p2l, αl, βl, γl, txl, tyl, tzl, αxr, αyr, u0r, v0r, k1r, k2r, p1r, p2r, αr, βr, γr, txr, tyr, tzr).
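As an illustrative sketch only (Python/NumPy; the function name sample_particles, its argument names and the axis-aligned form of the covariance are assumptions made here, not part of the invention), Step 1 could be implemented roughly as follows:

```python
import numpy as np

def sample_particles(theta0, sigma, N=500):
    """Draw N particles around an initial 28-D calibration vector.

    theta0 : (28,) initial estimate from the improved Tsai calibration
             (intrinsics, distortion and extrinsics of both cameras).
    sigma  : (28,) empirical standard deviations of the Gaussian
             importance density, set from experience as in Step 1.
    """
    theta0 = np.asarray(theta0, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    # H = {h_1, ..., h_N}, each row is one 28-dimensional particle
    return theta0 + sigma * np.random.randn(N, theta0.size)
```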
Step 2: The image is corrected using the distortion parameters of the sampled particle, and the projection matrices are generated using the particle's remaining parameters, so each particle (a 28-dimensional vector) generates two corresponding distortion-corrected images and two projection matrices.
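A minimal sketch of Step 2, assuming OpenCV is used as in the rest of the disclosure; the per-camera parameter ordering, the Rodrigues encoding of the three rotation parameters, and the function names are assumptions made for illustration:

```python
import cv2
import numpy as np

def particle_to_camera(h_cam):
    """Split one camera's 14 parameters taken from a sampled particle:
    alpha_x, alpha_y, u0, v0, k1, k2, p1, p2, three rotation parameters
    (treated here as a Rodrigues vector) and tx, ty, tz."""
    ax, ay, u0, v0, k1, k2, p1, p2, rx, ry, rz, tx, ty, tz = h_cam
    K = np.array([[ax, 0, u0], [0, ay, v0], [0, 0, 1]], dtype=float)
    dist = np.array([k1, k2, p1, p2], dtype=float)
    R, _ = cv2.Rodrigues(np.array([[rx], [ry], [rz]], dtype=float))
    t = np.array([[tx], [ty], [tz]], dtype=float)
    P = K @ np.hstack([R, t])          # 3x4 projection matrix
    return K, dist, P

def correct_and_project(h, img_left, img_right):
    """For one 28-D particle h: undistort both views and build the two
    projection matrices used by the fitness function of Step 3."""
    K_l, d_l, P_l = particle_to_camera(h[:14])
    K_r, d_r, P_r = particle_to_camera(h[14:])
    und_l = cv2.undistort(img_left, K_l, d_l)
    und_r = cv2.undistort(img_right, K_r, d_r)
    return (und_l, und_r), (P_l, P_r)
```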
Step 3: Determine the fitness function.
Here pi is the actual projected pixel coordinate of the i-th spatial point, and p̂ih is the pixel coordinate estimated with the current particle h as the nonlinear stereo vision model parameters, computed from the projection matrices generated by that particle.
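The fitness of formula (1) is the reciprocal of the root-mean-square reprojection error. A sketch of one possible implementation follows (combining the corners of both views into a single error vector is an assumption; the disclosure does not specify how the two images are aggregated):

```python
import numpy as np

def project(P, X_world):
    """Project homogeneous world points (n, 4) with a 3x4 matrix P."""
    x = (P @ X_world.T).T
    return x[:, :2] / x[:, 2:3]

def fitness(h, X_world, p_left, p_right, build_projections):
    """Formula (1): reciprocal of the RMS distance between the measured
    corner pixels and the corners reprojected with the projection
    matrices generated from particle h (see the Step 2 sketch)."""
    P_l, P_r = build_projections(h)
    err = np.concatenate([project(P_l, X_world) - p_left,
                          project(P_r, X_world) - p_right])
    rms = np.sqrt(np.mean(np.sum(err ** 2, axis=1)))
    return 1.0 / rms
```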
Step 4: Particle swarm update.
In order to control the particle migration speed efficiently and give the algorithm a fine search ability, the idea of simulated annealing is borrowed and an inertia factor is introduced into the particle swarm update strategy. In each migration, every particle updates its position and velocity according to the following criterion.
In formula (2), v_id^k is the velocity of the i-th particle in dimension d at the k-th migration; c1 is the cognitive factor and c2 is the social factor, which adjust the step sizes with which a particle migrates toward the individual extremum and the global extremum respectively; r1 and r2 are random numbers distributed in (0, 1); P_id^k is the individual extremum position of the i-th particle in dimension d, and P_hd^k is the global extremum position of the swarm in dimension d, both updated in real time according to the particle fitness during the iteration. The inertia factor is ω = b − k(b − a)/K, with b = 0.9, a = 0.4 and K the maximum number of migrations.
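The update rule of formula (2), with the linearly decreasing inertia factor, can be sketched as follows (the values of c1 and c2 are not given in the disclosure; 2.0 is only a common default used here for illustration):

```python
import numpy as np

def pso_step(H, V, p_best, g_best, k, K_max, c1=2.0, c2=2.0, b=0.9, a=0.4):
    """One migration of formula (2).

    H, V     : (N, 28) particle positions and velocities
    p_best   : (N, 28) individual extremum positions P_i
    g_best   : (28,)   global extremum position P_h
    k, K_max : current and maximum migration count
    """
    omega = b - k * (b - a) / K_max                 # inertia factor
    r1 = np.random.rand(*H.shape)
    r2 = np.random.rand(*H.shape)
    V = omega * V + c1 * r1 * (p_best - H) + c2 * r2 * (g_best - H)
    return H + V, V                                 # new positions, velocities
```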
Step 5: Termination of the particle swarm update.
When the swarm reaches the preset cutoff number of migrations K or the particle fitness reaches the desired value ζ, the swarm stops optimizing. The particles obtained after the update are then passed to the particle filter, which takes the 3D coordinates of the checkerboard feature points as its optimization condition; this also matches the practical application of stereo vision.
Step 6: Obtain the feature point observations.
Taking the first position of the checkerboard as the world coordinate system, the coordinate value of each checkerboard feature point is taken as its observation; since the checkerboard uses 30 cm * 30 cm black and white squares, the observation of each feature point is easy to obtain.
Step 7: Solve for the feature point predicted values. The pixel coordinates of each feature point in the left and right camera images are extracted, and the predicted value of the corresponding point is computed with the current particle h as the nonlinear parameters of the stereo vision system; the computation is as follows.
Here (XW, YW, ZW, 1) are the homogeneous coordinates of a point P in the world coordinate system, and (u1, v1, 1) and (u2, v2, 1) are the homogeneous coordinates of the projections of P in the image coordinate systems of the left and right cameras respectively. From formulas (3) and (4), four linear equations in XW, YW and ZW are obtained.
Formula (5) is a system of four equations in three unknowns; to reduce the influence of noise and improve computational accuracy it is solved by the least squares method.
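A sketch of the least-squares triangulation of formula (5); the helper name triangulate and the use of numpy.linalg.lstsq are implementation choices, not part of the claims:

```python
import numpy as np

def triangulate(P_l, P_r, uv_l, uv_r):
    """Solve the four linear equations of formula (5) for (X_W, Y_W, Z_W)
    by least squares, given the pixel coordinates of one corner in the
    left and right images and the two 3x4 projection matrices."""
    def rows(P, u, v):
        A = np.array([
            [u * P[2, 0] - P[0, 0], u * P[2, 1] - P[0, 1], u * P[2, 2] - P[0, 2]],
            [v * P[2, 0] - P[1, 0], v * P[2, 1] - P[1, 1], v * P[2, 2] - P[1, 2]]])
        b = np.array([P[0, 3] - u * P[2, 3], P[1, 3] - v * P[2, 3]])
        return A, b

    A1, b1 = rows(P_l, *uv_l)
    A2, b2 = rows(P_r, *uv_r)
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    X, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solution
    return X                                    # predicted (X_W, Y_W, Z_W)
```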
Step 8: Compute the particle weights and normalize them.
Computing the particle weight means computing the probability of each particle hi.
Since the error distributions of the calibration feature points Pi are mutually independent, the probability of a sampled calibration-parameter particle is computed as in formula (6).
From formula (6), if the influence of each feature point Pi on the sampled calibration-parameter particle, namely p(Pi | hi), can be obtained, then the probability of the particle under the influence of all feature points is easily found from formula (6). For computational simplicity, p(Pi | hi) is assumed to follow a Gaussian distribution whose mean u is the difference between the observed value and the predicted value of the feature point, and whose variance is the sum of the observation variance and the prediction variance.
The observation is obtained from the checkerboard, so its error can be ignored, while the predicted value is obtained from stereo vision.
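A sketch of the unnormalised weight of formulas (6) and (7); the normalising constant written below is the standard three-dimensional multivariate Gaussian form, which differs slightly in appearance from the compact notation used in formula (7):

```python
import numpy as np

def particle_weight(obs_3d, pred_3d, cov_obs, cov_pred):
    """Formulas (6)-(7): product over all checkerboard corners of the
    Gaussian density of the difference between the observed 3D corner
    (from the checkerboard) and the predicted 3D corner (from stereo
    triangulation). obs_3d, pred_3d: (n, 3); cov_obs, cov_pred: (n, 3, 3)."""
    w = 1.0
    for d, S in zip(obs_3d - pred_3d, cov_obs + cov_pred):
        norm = (2.0 * np.pi) ** 1.5 * np.sqrt(np.linalg.det(S))
        w *= np.exp(-0.5 * d @ np.linalg.solve(S, d)) / norm
    return w
```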
The landmark error is computed as follows:
In formula (8), r and c are the projection coordinates of the feature point in the left image and r0, c0 are the centre coordinates of the left image; the variables c, r and d are regarded as zero-mean Gaussian random variables, and the covariance is propagated forward according to the covariance propagation theorem, as in formula (9).
J is the Jacobian matrix in formula (9), and σc², σr², σd² are the variances of the corresponding variables, for which typical values are taken; evaluating formula (9) gives formula (10).
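The propagation of formula (9) applied to the stereo model of formula (8) can be sketched as follows; the pixel and disparity standard deviations are left as arguments because their typical values are not reproduced in this text:

```python
import numpy as np

def landmark_covariance(c, r, d, c0, r0, b, f, sigma_c, sigma_r, sigma_d):
    """Formulas (8)-(9): 3D landmark from the left-image projection (c, r)
    and disparity d, and its covariance propagated from the pixel and
    disparity variances through the Jacobian of (X, Y, Z) w.r.t. (c, r, d)."""
    X = (c - c0) * b / d
    Y = (r - r0) * b / d
    Z = f * b / d
    J = np.array([[b / d, 0.0,   -(c - c0) * b / d ** 2],
                  [0.0,   b / d, -(r - r0) * b / d ** 2],
                  [0.0,   0.0,   -f * b / d ** 2]])
    cov = J @ np.diag([sigma_c ** 2, sigma_r ** 2, sigma_d ** 2]) @ J.T
    return np.array([X, Y, Z]), cov
```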
The normalized weight is then computed as in formula (11).
Step 9: Compute the refined estimate of the calibration parameters and its covariance. The refined estimate is obtained by multiplying each sampled particle by its normalized weight and summing, as in formula (12); the covariance of the refined calibration estimate is given by formula (13).
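Formulas (11)-(13) together give the refined estimate and its covariance; a compact sketch (the function name and the NumPy interface are ours, not from the disclosure):

```python
import numpy as np

def calibration_estimate(H, weights):
    """Normalise the weights (formula (11)), form the weighted estimate of
    the 28-D calibration vector (formula (12)) and its covariance
    (formula (13))."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                         # formula (11)
    h_hat = (w[:, None] * H).sum(axis=0)    # formula (12)
    diff = h_hat - H                        # (N, 28) residuals
    P = diff.T @ diff / H.shape[0]          # formula (13)
    return h_hat, P
```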
The particle filter used here performs the stereo vision calibration optimization and better suits the specific application of stereo vision navigation: all parameters are optimized globally by minimizing the 3D coordinate reprojection error.
The beneficial effects of the present invention are:
The present invention is a stereoscopic vision mapping model building method. Compared with the prior art, the invention proposes an improved Tsai-based algorithm for calibrating the key distortion factors. The implementation based on OpenCV is described in detail; the algorithm makes full use of the least squares routines of the OpenCV library and performs distortion correction on the image step by step, which avoids the increase in computation caused by the nonlinear mapping of the lens and reduces the number of algorithm iterations, making it an efficient and simple algorithm. To further improve calibration accuracy, a nonlinear optimization algorithm combining particle swarm optimization and particle filtering is adopted: particle swarm optimization moves the particle set toward regions where the posterior probability density is larger, which overcomes particle impoverishment, greatly reduces the number of particles required for accurate estimation, and improves estimation accuracy. At the same time, the projected pixel error is used as a constraint during particle swarm optimization, and the 3D coordinate errors of the checkerboard feature points are used when the particle filter computes the particle weights, so both 2D and 3D errors are fully taken into account, which greatly improves calibration accuracy and robustness. Finally, an experimental analysis was carried out with the 3D projection error of the calibration board feature points as the accuracy evaluation index; the results show that the algorithm is accurate and robust, and it has good application prospects in stereoscopic vision navigation.
Brief description of the drawings
Fig. 1 is the 3D landmark uncertainty diagram of the present invention;
Fig. 2 is the checkerboard planar calibration template of the present invention;
Fig. 3 is a comparison diagram of successfully detected feature points according to the present invention;
Fig. 4 is the distortion correction target of the present invention.
Embodiment
The invention will be further described below in conjunction with the accompanying drawings:
The present invention comprises the following steps:
Step 1: Particle sampling. The stereo vision intrinsic and extrinsic parameters calibrated by the improved Tsai algorithm total 28, so each sampled particle has 28 dimensions. Particles are drawn by random sampling with a Gaussian importance density whose variance can be set from experience. A swarm H = {h1, h2, ..., hN} of size N is generated in the parameter space, where each particle h is a 28-dimensional vector: h = (αxl, αyl, u0l, v0l, k1l, k2l, p1l, p2l, αl, βl, γl, txl, tyl, tzl, αxr, αyr, u0r, v0r, k1r, k2r, p1r, p2r, αr, βr, γr, txr, tyr, tzr).
Step 2: The image is corrected using the distortion parameters of the sampled particle, and the projection matrices are generated using the particle's remaining parameters, so each particle (a 28-dimensional vector) generates two corresponding distortion-corrected images and two projection matrices.
Step 3: Determine the fitness function.
Here pi is the actual projected pixel coordinate of the i-th spatial point, and p̂ih is the pixel coordinate estimated with the current particle h as the nonlinear stereo vision model parameters, computed from the projection matrices generated by that particle.
Step 4: Particle swarm update.
In order to control the particle migration speed efficiently and give the algorithm a fine search ability, the idea of simulated annealing is borrowed and an inertia factor is introduced into the particle swarm update strategy. In each migration, every particle updates its position and velocity according to the criterion of formula (2).
In formula (2), v_id^k is the velocity of the i-th particle in dimension d at the k-th migration; c1 is the cognitive factor and c2 is the social factor, which adjust the step sizes with which a particle migrates toward the individual extremum and the global extremum respectively; r1 and r2 are random numbers distributed in (0, 1); P_id^k is the individual extremum position of the i-th particle in dimension d, and P_hd^k is the global extremum position of the swarm in dimension d, both updated in real time according to the particle fitness during the iteration. The inertia factor is ω = b − k(b − a)/K, with b = 0.9, a = 0.4 and K the maximum number of migrations.
Step 5: Termination of the particle swarm update.
When the swarm reaches the preset cutoff number of migrations K or the particle fitness reaches the desired value ζ, the swarm stops optimizing. The particles obtained after the update are then passed to the particle filter, which takes the 3D coordinates of the checkerboard feature points as its optimization condition; this also matches the practical application of stereo vision.
Step 6: Obtain the feature point observations.
Taking the first position of the checkerboard as the world coordinate system, the coordinate value of each checkerboard feature point is taken as its observation; since the checkerboard uses 30 cm * 30 cm black and white squares, the observation of each feature point is easy to obtain.
Step 7: Solve for the feature point predicted values. The pixel coordinates of each feature point in the left and right camera images are extracted, and the predicted value of the corresponding point is computed with the current particle h as the nonlinear parameters of the stereo vision system; the computation follows formulas (3) and (4).
Here (XW, YW, ZW, 1) are the homogeneous coordinates of a point P in the world coordinate system, and (u1, v1, 1) and (u2, v2, 1) are the homogeneous coordinates of the projections of P in the image coordinate systems of the left and right cameras respectively. From formulas (3) and (4), four linear equations in XW, YW and ZW are obtained.
Formula (5) is a system of four equations in three unknowns; to reduce the influence of noise and improve computational accuracy it is solved by the least squares method.
Step 8: Compute the particle weights and normalize them.
Computing the particle weight means computing the probability of each particle hi.
Since the error distributions of the calibration feature points Pi are mutually independent, the probability of a sampled calibration-parameter particle is computed as in formula (6).
From formula (6), if the influence of each feature point Pi on the sampled calibration-parameter particle, namely p(Pi | hi), can be obtained, then the probability of the particle under the influence of all feature points is easily found from formula (6). For computational simplicity, p(Pi | hi) is assumed to follow a Gaussian distribution whose mean u is the difference between the observed value and the predicted value of the feature point, and whose variance is the sum of the observation variance and the prediction variance.
The observation is obtained from the checkerboard, so its error can be ignored, while the predicted value is obtained from stereo vision.
The formation of the landmark error is shown in Fig. 1; it is computed as in formula (8).
In formula (8), r and c are the projection coordinates of the feature point in the left image and r0, c0 are the centre coordinates of the left image; the variables c, r and d are regarded as zero-mean Gaussian random variables, and the covariance is propagated forward according to formula (9).
J is the Jacobian matrix in formula (9), and σc², σr², σd² are the variances of the corresponding variables, for which typical values are taken; evaluating formula (9) gives formula (10).
The normalized weight is then computed as in formula (11).
Step 9: Compute the refined estimate of the calibration parameters and its covariance. The refined estimate is obtained by multiplying each sampled particle by its normalized weight and summing, as in formula (12); the covariance of the refined calibration estimate is given by formula (13).
The particle filter used here performs the stereo vision calibration optimization and better suits the specific application of stereo vision navigation: all parameters are optimized globally by minimizing the 3D coordinate reprojection error.
Experiments and analysis of results
As shown in Fig. 2: because the 3D accuracy of a calibration object is difficult to guarantee, a planar calibration template of alternating black and white checkerboard squares captured at different spatial orientation angles is used here. A Bumblebee2 camera produced by Point Grey Research is used as the image acquisition device. The binocular vision system has a baseline length of 120 mm, a maximum resolution of 1024 × 768, a pixel area of 4.65 µm × 4.65 µm and a frame rate of 30 FPS; the focal length is 3.8 mm at a 70° horizontal field of view and 6 mm at a 50° horizontal field of view; the supported data format sets the digital image bit depth to 24 bit, and the image signal-to-noise ratio exceeds 60 dB at 0 dB gain.
The target checkerboard calibration board measures 270 mm × 210 mm; each checkerboard square is 30 mm × 30 mm with a 30 mm centre spacing, in a 9 × 7 array. Each image is 640 × 480 pixels with 8-bit depth, the flatness error of the calibration surface is within 0.05 mm, and the angle between the lens imaging plane and the calibration target surface is kept within 50°.
During the experiment, the binocular camera captured 3 groups, 10 images in total, at different spatial orientations, and it must be ensured that the captured checkerboard occupies most of the imaging plane. When the algorithm extracts feature points, it first works within the region around the image centre point, to avoid the calibration error caused by excessive distortion at the image edges. In the feature point position figure, each feature point is marked by the intersection of two diameters of a circle, and the numbers of rows and columns of feature points on the calibration board are then specified. In a successfully extracted image, the last feature point of the previous row is connected to the first feature point of the next row by a coloured straight line, so that it can be judged whether all feature points have been found, and the feature points of each row are distinguished by circles of different colours. The three checkerboard images in the first row of the figure are the left camera's feature point extraction results, and the three images in the second row are the right camera's feature point extraction results under the same spatial imaging mapping.
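The corner extraction and the coloured per-row display described above correspond to the standard OpenCV routines; a rough sketch follows (the inner-corner count passed as pattern_size is an assumption derived from the 9 × 7 array of squares, and the file name is a placeholder):

```python
import cv2

img = cv2.imread("left_01.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
pattern_size = (8, 6)   # inner corners of a 9 x 7 array of squares (assumed)
found, corners = cv2.findChessboardCorners(img, pattern_size)
if found:
    # refine the detected corners to sub-pixel accuracy before calibration
    corners = cv2.cornerSubPix(
        img, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
    vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    # rows are drawn in different colours, as in the extraction figure
    cv2.drawChessboardCorners(vis, pattern_size, corners, found)
```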
As shown in Fig. 3: to verify the feasibility of the calibration method presented here, the particle swarm optimization particle filter optimization algorithm proposed in this work (method 1) is compared with the stereo vision two-stage calibration method using genetic algorithm nonlinear optimization[81] (method 2) and the stereo vision two-stage calibration method using only particle swarm nonlinear optimization[77] (method 3). In method 1 the number of particles is 500 and the cutoff number of migrations is 500; in method 2 the genetic cutoff evolution generation is 10000; in method 3 the swarm cutoff number of migrations is 3000. The specific parameters after optimization are shown in Table 3.2.
Table 3.2 Parameter comparison
The parameters obtained by the particle swarm optimization particle filter algorithm are closer to the true values. To further evaluate the merits of the algorithms, and with reference to the application environment of stereo vision navigation, the root mean square error between the true spatial coordinate points and the corresponding reconstructed coordinate values is adopted as the evaluation index, as in formula (3-30); the resulting values are shown in Table 3.3.
Table 3.3 Evaluation results
Table 3.4 lists the relative position parameters of the stereo vision system calibrated by the three algorithms; the parameters obtained by the particle swarm optimization particle filter are closer to the manufacturer's factory parameters.
Table 3.4 Comparison of the relative position parameters of the left and right cameras
Finally, the image is distortion-corrected with the calibrated distortion parameters. The result of correcting the checkerboard calibration board images of Fig. 3 is shown in Fig. 4; it can be seen that the algorithm can significantly correct the radial and tangential distortion produced by lens imaging at the image edges.
Brief summary
The present invention describes stereo vision calibration. Through an analysis of the calibration methods of several currently common vision systems, a polynomial distortion model stereo camera calibration method based on a 2D planar target is proposed. The algorithm linearly solves for ten camera parameters, including the scale focal-length factors, the pixel-plane centre point coordinates, the rotation matrix parameters and the translation matrix parameters. An improved Tsai-based algorithm is proposed for calibrating the key distortion factors. The implementation based on OpenCV is described in detail; the algorithm makes full use of the least squares routines of the OpenCV library and performs distortion correction on the image step by step, which avoids the increase in computation caused by the nonlinear mapping of the lens and reduces the number of algorithm iterations, making it an efficient and simple algorithm. To further improve calibration accuracy, a nonlinear optimization algorithm combining particle swarm optimization and particle filtering is adopted: particle swarm optimization moves the particle set toward regions where the posterior probability density is larger, which overcomes particle impoverishment, greatly reduces the number of particles required for accurate estimation, and improves estimation accuracy. At the same time, the projected pixel error is used as a constraint during particle swarm optimization, and the 3D coordinate errors of the checkerboard feature points are used when the particle filter computes the particle weights, so both 2D and 3D errors are fully taken into account, which greatly improves calibration accuracy and robustness. Finally, an experimental analysis was carried out with the 3D projection error of the calibration board feature points as the accuracy evaluation index; the results show that the algorithm is accurate and robust, and it has good application prospects in stereoscopic vision navigation.
The general principle, principal features and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited to the above embodiments; what is described in the above embodiments and the specification merely illustrates the principle of the present invention. Various changes and improvements can be made to the present invention without departing from the spirit and scope of the present invention, and all such changes and improvements fall within the scope of the claimed invention. The scope of protection claimed by the present invention is defined by the appended claims and their equivalents.

Claims (1)

1. A stereoscopic vision mapping model building method, characterized in that it comprises the following steps:
Step 1: Particle sampling; the stereo vision intrinsic and extrinsic parameters calibrated by the improved Tsai algorithm total 28, so each sampled particle has 28 dimensions; particles are drawn by random sampling with a Gaussian importance density whose variance can be set from experience; a swarm H = {h1, h2, ..., hN} of size N is generated in the parameter space, where each particle h is a 28-dimensional vector,
h = (αxl, αyl, u0l, v0l, k1l, k2l, p1l, p2l, αl, βl, γl, txl, tyl, tzl, αxr, αyr, u0r, v0r, k1r, k2r, p1r, p2r, αr, βr, γr, txr, tyr, tzr);
Step 2: The image is corrected using the distortion parameters of the sampled particle, and the projection matrices are generated using the particle's remaining parameters, so each particle (a 28-dimensional vector) generates two corresponding distortion-corrected images and two projection matrices;
Step 3: Determine the fitness function;
$$E(h) = \frac{1}{\sqrt{\dfrac{1}{n}\sum_{i=1}^{n}\left\| p_i - \hat{p}_{ih} \right\|^{2}}} \qquad (1)$$
Here pi is the actual projected pixel coordinate of the i-th spatial point, and p̂ih is the pixel coordinate estimated with the current particle h as the nonlinear stereo vision model parameters, computed from the projection matrices generated by that particle;
Step 4: Particle swarm update;
In order to control the particle migration speed efficiently and give the algorithm a fine search ability, the idea of simulated annealing is borrowed and an inertia factor is introduced into the particle swarm update strategy; in each migration, every particle updates its position and velocity according to the following criterion
$$\begin{aligned} v_{id}^{k+1} &= \omega v_{id}^{k} + c_1 r_1\!\left(P_{id}^{k} - h_{id}^{k}\right) + c_2 r_2\!\left(P_{hd}^{k} - h_{id}^{k}\right) \\ h_{id}^{k+1} &= h_{id}^{k} + v_{id}^{k+1} \end{aligned} \qquad (2)$$
In formula (2), $v_{id}^{k}$ is the velocity of the i-th particle in dimension d at the k-th migration; c1 is the cognitive factor and c2 is the social factor, which adjust the step sizes with which a particle migrates toward the individual extremum and the global extremum respectively; r1 and r2 are random numbers distributed in (0, 1); $P_{id}^{k}$ is the individual extremum position of the i-th particle in dimension d and $P_{hd}^{k}$ is the global extremum position of the swarm in dimension d, both updated in real time according to the particle fitness during the iteration; the inertia factor is ω = b − k(b − a)/K, with b = 0.9, a = 0.4 and K the maximum number of migrations;
Step 5: Termination of the particle swarm update;
When the swarm reaches the preset cutoff number of migrations K or the particle fitness reaches the desired value ζ, the swarm stops optimizing; the particles obtained after the update are passed to the particle filter, which takes the 3D coordinates of the checkerboard feature points as its optimization condition, which also matches the practical application of stereo vision;
Step 6: Obtain the feature point observations;
Taking the first position of the checkerboard as the world coordinate system, the coordinate value of each checkerboard feature point is taken as its observation; since the checkerboard uses 30 cm * 30 cm black and white squares, the observation of each feature point is easy to obtain;
Step 7: Solve for the feature point predicted values; the pixel coordinates of each feature point in the left and right camera images are extracted, and the predicted value of the corresponding point is computed with the current particle h as the nonlinear parameters of the stereo vision system; the computation is as follows;
$$Z_{C1}\begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11}^{1} & m_{12}^{1} & m_{13}^{1} & m_{14}^{1} \\ m_{21}^{1} & m_{22}^{1} & m_{23}^{1} & m_{24}^{1} \\ m_{31}^{1} & m_{32}^{1} & m_{33}^{1} & m_{34}^{1} \end{bmatrix}\begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \qquad (3)$$

$$Z_{C2}\begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11}^{2} & m_{12}^{2} & m_{13}^{2} & m_{14}^{2} \\ m_{21}^{2} & m_{22}^{2} & m_{23}^{2} & m_{24}^{2} \\ m_{31}^{2} & m_{32}^{2} & m_{33}^{2} & m_{34}^{2} \end{bmatrix}\begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \qquad (4)$$
Here (XW, YW, ZW, 1) are the homogeneous coordinates of a point P in the world coordinate system, and (u1, v1, 1) and (u2, v2, 1) are the homogeneous coordinates of the projections of P in the image coordinate systems of the left and right cameras respectively; from formulas (3) and (4), four linear equations in XW, YW and ZW are obtained:
$$\begin{cases} (u_1 m_{31}^{1} - m_{11}^{1})X_W + (u_1 m_{32}^{1} - m_{12}^{1})Y_W + (u_1 m_{33}^{1} - m_{13}^{1})Z_W = m_{14}^{1} - u_1 m_{34}^{1} \\ (v_1 m_{31}^{1} - m_{21}^{1})X_W + (v_1 m_{32}^{1} - m_{22}^{1})Y_W + (v_1 m_{33}^{1} - m_{23}^{1})Z_W = m_{24}^{1} - v_1 m_{34}^{1} \\ (u_2 m_{31}^{2} - m_{11}^{2})X_W + (u_2 m_{32}^{2} - m_{12}^{2})Y_W + (u_2 m_{33}^{2} - m_{13}^{2})Z_W = m_{14}^{2} - u_2 m_{34}^{2} \\ (v_2 m_{31}^{2} - m_{21}^{2})X_W + (v_2 m_{32}^{2} - m_{22}^{2})Y_W + (v_2 m_{33}^{2} - m_{23}^{2})Z_W = m_{24}^{2} - v_2 m_{34}^{2} \end{cases} \qquad (5)$$
Formula (5) is a system of four equations in three unknowns; to reduce the influence of noise and improve computational accuracy it is solved by the least squares method;
Step 8: Compute the particle weights and normalize them;
Computing the particle weight means computing the probability of each particle hi;
Since the error distributions of the calibration feature points Pi are mutually independent, the probability of a sampled calibration-parameter particle is computed as follows:
$$p(P \mid h_i) = \prod_{i} p(P_i \mid h_i) \qquad (6)$$
From formula (6), if the influence of each feature point Pi on the sampled calibration-parameter particle, namely p(Pi | hi), can be obtained, then the probability of the particle under the influence of all feature points is easily found from formula (6); let the observation of a feature point be $\bar{P}_i$ and its predicted value be $\bar{m}_i$; for computational simplicity, p(Pi | hi) is assumed to follow a Gaussian distribution whose mean u is the difference between $\bar{P}_i$ and $\bar{m}_i$ and whose variance is the sum of the observation variance $\Sigma_{\bar{P}_i}$ and the prediction variance $\Sigma_{\bar{m}_i}$;
$$p(P_i \mid h_i) = N\!\left(\bar{P}_i - \bar{m}_i,\; \Sigma_{\bar{P}_i} + \Sigma_{\bar{m}_i}\right) = \left(2\pi\left|\Sigma_{\bar{P}_i} + \Sigma_{\bar{m}_i}\right|\right)^{-\frac{1}{2}} \exp\!\left\{-\frac{1}{2}\left(\bar{P}_i - \bar{m}_i\right)^{T}\left(\Sigma_{\bar{P}_i} + \Sigma_{\bar{m}_i}\right)^{-1}\left(\bar{P}_i - \bar{m}_i\right)\right\} \qquad (7)$$
The observation is obtained from the checkerboard, so its error can be ignored, while the predicted value is obtained from stereo vision;
The landmark error is computed as follows:
$$X = (c - c_0)\frac{b}{d};\qquad Y = (r - r_0)\frac{b}{d};\qquad Z = f\frac{b}{d} \qquad (8)$$
In formula (8), r and c are the projection coordinates of the feature point in the left image and r0, c0 are the centre coordinates of the left image; the variables c, r and d are regarded as zero-mean Gaussian random variables, and according to the covariance forward-propagation theorem,
$$\Sigma_X \approx J\,\mathrm{diag}\!\left(\sigma_c^{2}, \sigma_r^{2}, \sigma_d^{2}\right)J^{T} \qquad (9)$$
J is the Jacobian matrix in formula (9), and $\sigma_c^{2}$, $\sigma_r^{2}$, $\sigma_d^{2}$ are the variances of the corresponding variables, for which typical values are taken; evaluating formula (9) gives:
$$\Sigma_X = \left(\frac{b}{d}\right)^{2} \times \begin{pmatrix} \sigma_c^{2} + \dfrac{\sigma_d^{2}(c - c_0)^{2}}{d^{2}} & \dfrac{\sigma_d^{2}(c - c_0)(r - r_0)}{d^{2}} & \dfrac{\sigma_d^{2}(c - c_0)f}{d^{2}} \\ \dfrac{\sigma_d^{2}(c - c_0)(r - r_0)}{d^{2}} & \sigma_r^{2} + \dfrac{\sigma_r^{2}(r - r_0)^{2}}{d^{2}} & \dfrac{\sigma_d^{2}(r - r_0)f}{d^{2}} \\ \dfrac{\sigma_d^{2}(c - c_0)f}{d^{2}} & \dfrac{\sigma_d^{2}(r - r_0)^{2}}{d^{2}} & \dfrac{\sigma_d^{2}f^{2}}{d^{2}} \end{pmatrix} \qquad (10)$$
Normalized weight:
$$\bar{\omega}^{\,i} = \frac{\omega^{i}}{\sum_{i=1} \omega^{i}} \qquad (11)$$
Step 9: Compute the refined estimate of the calibration parameters and its covariance; the refined estimate is obtained by multiplying each sampled particle by its normalized weight and summing, as in formula (12); the covariance of the refined calibration estimate is given by formula (13):
$$P = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{h} - h_i\right)\left(\hat{h} - h_i\right)^{T} \qquad (13)$$
The particle filter used here performs the stereo vision calibration optimization and better suits the specific application of stereo vision navigation: all parameters are optimized globally by minimizing the 3D coordinate reprojection error.
CN201710269150.6A 2017-04-24 2017-04-24 Stereoscopic vision mapping model building method Pending CN107240133A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710269150.6A CN107240133A (en) 2017-04-24 2017-04-24 Stereoscopic vision mapping model building method

Publications (1)

Publication Number Publication Date
CN107240133A true CN107240133A (en) 2017-10-10

Family

ID=59984066

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710269150.6A Pending CN107240133A (en) 2017-04-24 2017-04-24 Stereoscopic vision mapping model building method

Country Status (1)

Country Link
CN (1) CN107240133A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654476A (en) * 2015-12-25 2016-06-08 江南大学 Binocular calibration method based on chaotic particle swarm optimization algorithm
CN106447763A (en) * 2016-07-27 2017-02-22 扬州大学 Face image three-dimensional reconstruction method for fusion of sparse deformation model and principal component regression algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JUNMIN LI et al.: "SLAM BASED ON INFORMATION FUSION OF STEREO VISION AND ELECTRONIC COMPASS", International Journal of Robotics and Automation *
LI JUNMIN et al.: "Robot Pose Estimation and Accuracy Analysis Based on Stereo Vision", 2013 IEEE 9th International Conference on Mobile Ad-hoc and Sensor Networks *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109726853A (en) * 2018-12-04 2019-05-07 东莞理工学院 Industrial collaboration Robot Path Planning Algorithm based on machine vision
CN110333513A (en) * 2019-07-10 2019-10-15 国网四川省电力公司电力科学研究院 A kind of particle filter SLAM method merging least square method
CN110333513B (en) * 2019-07-10 2023-01-10 国网四川省电力公司电力科学研究院 Particle filter SLAM method fusing least square method
CN112465918A (en) * 2020-12-06 2021-03-09 西安交通大学 Microscopic vision calibration method based on Tsai calibration
CN112465918B (en) * 2020-12-06 2024-04-02 西安交通大学 Microscopic vision calibration method based on Tsai calibration


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20171010)