CN101826206A - Camera self-calibration method - Google Patents

Camera self-calibration method

Info

Publication number
CN101826206A
Authority
CN
China
Legal status
Granted
Application number
CN 201010137334
Other languages
Chinese (zh)
Other versions
CN101826206B (en)
Inventor
苗振江 (Miao Zhenjiang)
万艳丽 (Wan Yanli)
唐振 (Tang Zhen)
Current Assignee
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date
Filing date
Publication date
Application filed by Beijing Jiaotong University
Priority to CN2010101373345A
Publication of CN101826206A
Application granted
Publication of CN101826206B
Status: Expired - Fee Related

Abstract

The invention provides a camera self-calibration method comprising the following steps. Step A1: from multiple images of the same scene shot at multiple angles, extract two-dimensional quasi-dense points with a region-growing algorithm based on affine transformation, and obtain their two-dimensional matching points on neighboring images by tracking. Step A2: using the two-dimensional quasi-dense points and their matching points on the neighboring images, obtain the three-dimensional positions of the quasi-dense points with a structure-from-motion (SFM) algorithm, recovering the camera parameters and the three-dimensional scene structure. Step A3: carry out the iterative optimization of the three-dimensional scene structure and the camera parameters. Step A4: combining the camera parameters, select the neighboring-image set corresponding to each image. Step A5: resample the quasi-dense points and the corresponding matching points, then return to step A2. Steps A2 to A5 are executed in a loop several times, the final pass ending after step A3. Through the invention, the accuracy and robustness of the calibration results are improved.

Description

Camera self-calibration method
Technical field
The present invention relates to the fields of digital image processing and computer vision, and in particular to a camera self-calibration method.
Background technology
Camera calibration is an important research topic in computer vision. It is a necessary step for extracting three-dimensional spatial information from two-dimensional images, and is widely used in fields such as three-dimensional reconstruction, navigation and visual surveillance. In the pinhole camera model, the process of solving for the projection matrix P is called camera calibration; correspondingly, solving for the intrinsic-parameter matrix K is called intrinsic calibration, and solving for the extrinsic parameters R and t is called extrinsic calibration.
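The pinhole decomposition P = K[R | t] described above can be illustrated with a short numeric sketch; all numeric values here are illustrative, not taken from the patent:

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3-D point X into the image via P = K [R | t].

    Returns pixel coordinates (u, v). A minimal sketch of the pinhole
    model; the parameter values below are illustrative.
    """
    P = K @ np.hstack([R, t.reshape(3, 1)])   # 3x4 projection matrix
    x = P @ np.append(X, 1.0)                 # homogeneous image point
    return x[:2] / x[2]

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])               # assumed intrinsics
R = np.eye(3)                                 # identity rotation
t = np.zeros(3)                               # camera at the origin
u, v = project(K, R, t, np.array([0.0, 0.0, 4.0]))
```

A point on the optical axis projects to the principal point (320, 240), which is a quick sanity check on the decomposition.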
Broadly speaking, camera calibration methods fall into three categories: traditional calibration, calibration based on active vision, and camera self-calibration. Traditional methods require a precisely machined calibration object; the intrinsic and extrinsic camera parameters are computed by establishing correspondences between points with known three-dimensional coordinates on the calibration object and their image points. This calibration process is time-consuming and labor-intensive, and is unsuitable for online calibration or for occasions where a calibration object cannot be used. Methods based on active vision require the camera to perform certain special controlled motions, exploiting the particularity of these motions to compute the intrinsic parameters linearly; they cannot be applied when the camera motion is unknown or uncontrollable. Both kinds of method rely on information about the scene or the camera motion, so neither suits the most general situation of an arbitrary scene with unknown camera motion, and neither can satisfy most practical applications.
In 1992, Maybank, Faugeras and colleagues first proposed the notion of self-calibration [O.D. Faugeras, Q.T. Luong, and S.J. Maybank. Camera self-calibration: Theory and experiments. In European Conference on Computer Vision, pp. 321-334, 1992.], making calibration possible in the general situation of an unknown scene and arbitrary camera motion. After more than a decade of sustained effort, the theoretical problems of self-calibration are essentially solved. In practice, however, existing self-calibration algorithms regularly run into numerical instability caused by differing scenes; and even when calibration succeeds, the precision is hard to compare with that of traditional calibration algorithms.
Many factors influence a self-calibration algorithm. Two of the main ones are the accurate extraction and tracking of matching points between images, and a more reasonable optimization algorithm for the calibration result.
First, since camera self-calibration estimates the intrinsic and extrinsic camera parameters entirely from the matching points (also called corresponding points) tracked across the images, the quality of the match extraction and tracking algorithm is one of the key factors affecting calibration precision and robustness. The input images, however, often exhibit relatively free shooting paths, illumination changes, occlusion, and scene texture that is sparse or repetitive, all of which make accurate extraction and tracking of matches very difficult. Most calibration algorithms are based on sparse feature points. In 2005, Lhuillier and Quan proposed the notion of quasi-dense points [M. Lhuillier and L. Quan, A quasi-dense approach to surface reconstruction from uncalibrated images, IEEE Trans. PAMI, Vol. 27, No. 3, pp. 418-433, 2005.]: starting from feature matches, a comparatively dense set of correspondences is obtained through expansion and resampling, trading off the deficiencies of sparse and dense points. Compared with sparse points, quasi-dense points are more significant for three-dimensional reconstruction from uncalibrated images. However, this expansion algorithm adopts a greedy matching strategy without a strict filtering mechanism, so a large number of false matches inevitably remain, degrading the precision of the quasi-dense points. Moreover, the seed-point expansion it employs is not well suited to the wide-baseline case, so deficiencies remain in practical applications.
Second, bundle adjustment is an optimization algorithm widely used in calibration and three-dimensional modeling: it optimizes the camera parameters and the scene structure by minimizing the reprojection error as the objective function. But when the number of input images and three-dimensional points grows large, its cost becomes very high and the optimization may even fail. The camera parameters and the scene structure can of course be optimized separately, but with an identical strategy, still minimizing the reprojection error as the objective.
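The reprojection-error objective that bundle adjustment minimizes can be sketched as follows; the data layout (dictionaries keyed by image index i and point index j) is an illustrative assumption, not the patent's actual structure:

```python
import numpy as np

def reprojection_error(points_3d, cameras, observations):
    """Sum of squared reprojection errors -- the bundle-adjustment
    objective described above.

    cameras[i] is a 3x4 projection matrix; observations[(i, j)] is the
    tracked 2-D position of 3-D point j in image i. Names and layout
    are illustrative assumptions.
    """
    err = 0.0
    for (i, j), xy in observations.items():
        X = np.append(points_3d[j], 1.0)      # homogeneous 3-D point
        x = cameras[i] @ X                    # project into image i
        err += np.sum((x[:2] / x[2] - xy) ** 2)
    return err
```

A bundle adjuster would minimize this quantity jointly over all cameras and points, which is exactly why the cost grows with 3n + 11m parameters.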
The above analysis shows that, in self-calibration, the robustness and the precision of the calibration algorithm are two rather prominent problems in practical applications.
Summary of the invention
The technical problem to be solved by this invention is to provide a camera self-calibration method that improves the accuracy and robustness of calibration.
To address the above problem, the invention discloses a camera self-calibration method, comprising:
Step A1: for multiple images of the same scene shot at multiple angles, extract two-dimensional quasi-dense points with a region-growing algorithm based on affine transformation, and obtain by tracking the two-dimensional matching points of the quasi-dense points on neighboring images;
Step A2: using the two-dimensional quasi-dense points and their two-dimensional matching points on neighboring images, obtain the three-dimensional positions of the quasi-dense points with the SFM algorithm, recovering the camera parameters and the three-dimensional scene structure;
Step A3: carry out the iterative optimization of the three-dimensional scene structure and the camera parameters;
Step A4: combining the camera parameters, select the neighboring-image set corresponding to each image;
Step A5: resample the quasi-dense points and the corresponding matching points;
Return to step A2, executing steps A2 to A5 in a loop several times, the last pass ending after step A3.
Specifically, step A1 comprises:
extracting two-dimensional feature points, and obtaining the two-dimensional matching points between pairs of images from these feature points;
taking as seed points the two-dimensional matches whose matching correlation coefficient exceeds a preset threshold, and arranging the seed points in a seed queue in descending order of the coefficient;
growing a region around each seed point, inserting the newly grown points into the seed queue according to their matching correlation coefficients as new seed points for further expansion, until the region growing of all seed points is finished;
resampling and filtering the points obtained after expansion to yield the two-dimensional quasi-dense points and their two-dimensional matching points on neighboring images.
Preferably,
an affine-invariant feature detector is used to extract the two-dimensional feature points;
the SIFT descriptor is used to describe the features and obtain the two-dimensional matching points corresponding to the feature points.
Preferably, the condition for region growing is: the initial affine parameters of the four-neighborhood points of a seed point are the same as those of the seed point, and within a preset number of optimization iterations the matching correlation coefficient of the affine parameters exceeds the preset threshold.
Preferably, resampling the points obtained after expansion is specifically:
dividing each image into β×β pixel cells, and taking the center of each cell as a new sample point;
obtaining the two-dimensional matching points of the sample points on neighboring images by an adaptive RANSAC algorithm.
Preferably, filtering the points obtained after expansion is specifically:
first filtering: reject sample points, together with their matches, whose matching correlation coefficient is below the preset threshold;
second filtering: reject sample points, together with their matches, whose symmetric epipolar distance between sample point and match exceeds a preset distance threshold;
third filtering: reject sample points, together with their matches, whose number of corresponding matching points is below a preset count threshold;
the sample points remaining after filtering are then the two-dimensional quasi-dense points, and the remaining matches are their two-dimensional matching points on neighboring images.
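The three filtering stages can be sketched as below. The data layout (per-sample dictionaries with per-match `zncc` and `epipolar_dist` fields) and the field names are illustrative assumptions; the default thresholds 0.8, 1.5 and 3 follow the values given in the description:

```python
def filter_samples(samples, zncc_thresh=0.8, ed_thresh=1.5, min_matches=3):
    """Three-stage filtering of resampled points: a sketch of the
    strategy described above, under assumed field names.

    Each sample is {"point": (x, y), "matches": [{"zncc": ...,
    "epipolar_dist": ...}, ...]}.
    """
    kept = []
    for s in samples:
        # stages 1 and 2: drop matches with low ZNCC score or with a
        # large symmetric epipolar distance
        matches = [m for m in s["matches"]
                   if m["zncc"] >= zncc_thresh
                   and m["epipolar_dist"] <= ed_thresh]
        # stage 3: keep the sample only if enough matches survive
        if len(matches) >= min_matches:
            kept.append({"point": s["point"], "matches": matches})
    return kept
```

Surviving samples play the role of quasi-dense points, and their surviving matches the role of the tracked correspondences on neighboring images.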
Further, step A2 specifically comprises:
selecting two images as the initial reference images, and reconstructing in three dimensions the quasi-dense points on these two images;
introducing the remaining images one by one, estimating the camera parameters of each introduced image from the already reconstructed three-dimensional quasi-dense points;
using the obtained camera parameters to reconstruct the points not yet reconstructed on each image, continuously updating the camera parameters and the three-dimensional scene structure; one optimization is performed per introduced image.
Preferably, the iterative optimization of the three-dimensional scene structure and the camera parameters in step A3 is specifically:
keeping the camera parameters fixed, optimize the three-dimensional scene structure; keeping the three-dimensional scene structure fixed, optimize the camera parameters;
wherein the optimization of the scene structure is specifically: use a color-consistency strategy to optimize the three-dimensional positions of the quasi-dense points, simultaneously optimizing the normal vector of the tangent plane at each position;
and the camera parameters are optimized through a global objective function.
Preferably, the strategy for choosing the neighboring-image set of each image in step A4 is:
the number of matching points corresponding to the sample points exceeds a preset count threshold;
and the angle between the lines joining a three-dimensional point to the optical centers of the two images is greater than or equal to a preset angle threshold.
Preferably, the strategy for resampling the quasi-dense points and their corresponding matches in step A5 is:
using the reprojection error as a weight, compute the weight of each quasi-dense point by a preset criterion; when the weight is greater than zero, keep the quasi-dense point and its corresponding matching points.
Compared with the prior art, the present invention has the following advantages.
First, the invention introduces an optimization model based on affine transformation into the expansion process to optimize the positions of the newly grown matches. This algorithm suits dense matching in the wide-baseline case and matters greatly for the accurate localization of comparatively dense points. In addition, resampling and filtering the points obtained after expansion rejects false matches, providing more accurate candidate matches for calibration.
Second, the invention improves the precision and robustness of existing self-calibration algorithms through a two-pass iterative optimization scheme. In each pass, the local color consistency of the scene structure and the global optimality of the camera parameters are optimized alternately. After the first pass, the neighboring-view selection strategy and the quasi-dense correspondence sampling strategy provide more reliable initial conditions for the second pass.
In short, compared with the original self-calibration algorithms, the method of the invention gives more accurate results with higher robustness, effectively reducing the influence of factors such as illumination changes and repetitive or sparse texture.
Description of drawings
Fig. 1 is the flow chart of an embodiment of the camera self-calibration method of the present invention;
Fig. 2 is a schematic diagram of recovering the camera parameters and the three-dimensional scene structure in the embodiment of the invention;
Fig. 3 is a schematic diagram of a three-dimensional quasi-dense point and its corresponding two-dimensional matching points on multiple images in the embodiment of the invention;
Fig. 4(a) is a schematic diagram of the initial feature points of two images in the embodiment of the invention;
Fig. 4(b) is a schematic diagram of the initial matching points of two images in the embodiment of the invention;
Fig. 4(c) is a schematic diagram of the matching points of two images after expansion, sampling and filtering in the embodiment of the invention;
Fig. 5 compares, before and after the iterative optimization of the three-dimensional points, the reprojected points on different images with the originally tracked matching points in the embodiment of the invention.
Embodiment
To make the above objects, features and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings and specific embodiments.
Referring to Fig. 1, which shows the flow chart of an embodiment of the camera self-calibration method of the present invention, the method comprises:
Step 101: for multiple images of the same scene shot at multiple angles, extract two-dimensional quasi-dense points with a region-growing algorithm based on affine transformation, and obtain by tracking their two-dimensional matching points on neighboring images.
Specifically, step 101 comprises the following sub-steps.
Sub-step 1011: extract two-dimensional feature points, and obtain the two-dimensional matching points between pairs of images from these feature points.
First, multiple images of the same scene are collected from different angles, and the Hessian affine-invariant detector is used to extract two-dimensional feature points on each image. The features are then described with the SIFT descriptor, and for each feature point the corresponding matches are obtained on pairs of images, giving the initial matches. That is, this step yields the feature points on each image together with their corresponding matching points on the other images; at this stage both the feature points and the matching points are two-dimensional.
Sub-step 1012: take as seed points the two-dimensional matches whose matching correlation coefficient exceeds the preset threshold, and arrange the seed points in a seed queue in descending order of the coefficient.
In the embodiment of the invention, the fundamental matrix F is estimated rather robustly from the initial feature matches using an adaptive RANSAC algorithm. Note that epipolar geometry is the intrinsic projective geometry between two views; its algebraic expression is F, which is estimated from the matches between the two views. Because a robust estimator is used, a large number of false matches are rejected at the same time. The RANSAC algorithm estimates F from N groups of sample data, then uses how closely each pair of matches satisfies the epipolar geometry, measured by the Sampson distance under the estimated F, to distinguish outliers (false matches) from inliers (the matches that remain after rejection). Within a certain number of samples, the F with the most inliers is selected, and F is re-estimated from all inliers. Setting the number of samples manually is unsuitable in many practical situations, so the embodiment of the invention uses an adaptive RANSAC method that adjusts the number of samples automatically during the sampling process.
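The Sampson distance used to separate inliers from outliers can be sketched as follows; the RANSAC sampling loop itself is omitted, and the example fundamental matrix in the usage check is an illustrative pure-translation case, not data from the patent:

```python
import numpy as np

def sampson_distance(F, x1, x2):
    """Sampson distance of a correspondence (x1, x2) with respect to a
    fundamental matrix F. A small distance means the pair nearly
    satisfies the epipolar constraint x2^T F x1 = 0; RANSAC thresholds
    this value to classify inliers. A minimal sketch.
    """
    x1h = np.append(x1, 1.0)
    x2h = np.append(x2, 1.0)
    Fx1 = F @ x1h                 # epipolar line of x1 in image 2
    Ftx2 = F.T @ x2h              # epipolar line of x2 in image 1
    num = (x2h @ F @ x1h) ** 2
    den = Fx1[0] ** 2 + Fx1[1] ** 2 + Ftx2[0] ** 2 + Ftx2[1] ** 2
    return num / den
```

For a camera translating along the x-axis, F = [[0,0,0],[0,0,-1],[0,1,0]] and matching points share the same y coordinate, so such a pair gives distance zero while a vertically offset pair does not.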
Thus, a large number of false matches are rejected, and the set of all remaining matches is used as the seed points for the region growing of the next step; each seed point records the positions {x, x'} of the corresponding match. Note that x here is only a symbol, not a coordinate in the mathematical sense. In addition, each seed point records the affine parameters {A₀, d₀, μ₀, δ₀, S_ZNCC} of the corresponding affine region, where the subscript 0 denotes the parameters of a point not yet expanded (i.e. a seed point).
To understand the meaning of these parameters, consider first the following model:

μ I₂(Ax + d) + δ = I₁(x)

where I₁(x) denotes the pixel brightness value of image I₁ at x.
First, the notion of baseline. The baseline generally arises with stereo pairs and refers to the line joining the two camera centers. Put simply, a narrow baseline means the parallax between the two cameras is small, so the two images of the stereo pair look much alike; a wide baseline means the parallax is large, so the two images differ considerably. Wide-baseline stereo matching is therefore harder than narrow-baseline matching, and the calibration accuracy is poorer.
Because illumination changes exist between images in the wide-baseline case, the corresponding points between images are described by the model above in order to reduce the influence of illumination on matching precision. In this model, A is the affine transformation matrix between corresponding neighborhood windows, μ is the scale factor of the brightness change between the windows, δ is a locally constant additive noise, and d is a small offset introduced for accurate localization. S_ZNCC is the zero-mean normalized cross-correlation coefficient of the corresponding matches, which serves as the matching correlation coefficient. All seed points are arranged in a seed queue Q in descending order of S_ZNCC; if S_ZNCC is below a preset threshold z, the point is rejected from the queue, so the most correlated matches are used first in the subsequent expansion. In a preferred embodiment of the invention, the preset threshold z is 0.8.
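The S_ZNCC score can be sketched directly from its definition; note that zero-mean normalization makes it invariant to exactly the affine brightness change μ·I + δ of the model above:

```python
import numpy as np

def zncc(w1, w2):
    """Zero-mean normalized cross-correlation of two equal-sized image
    windows -- the matching score S_ZNCC described above. Returns a
    value in [-1, 1]; invariant to an affine brightness change
    w2 = mu * w1 + delta. A minimal sketch.
    """
    a = w1 - w1.mean()
    b = w2 - w2.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

A window compared against a brightness-scaled and offset copy of itself scores 1, against its negative scores -1, so thresholding at z = 0.8 keeps only strongly correlated matches.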
Sub-step 1013: grow a region around each seed point, inserting the newly grown points into the seed queue according to their matching correlation coefficients as new seed points for further expansion, until the region growing of all seed points is finished.
To make the expansion algorithm applicable to the wide-baseline case, and to obtain more accurate sub-pixel dense matches, we introduce the affine-transformation result of the above step into the expansion process.
Preferably, the condition for region growing is: the initial affine parameters of the four-neighborhood points of a seed point are the same as those of the seed point, and within the preset number of optimization iterations the matching correlation coefficient of the affine parameters exceeds the preset threshold.
In the expansion process, the initial parameters of the four-neighborhood of each seed point (i.e. the points above, below, left and right of the seed) are the same as those of the seed; the optimal parameters of a newly grown point are then obtained by minimizing

ε = Σ_{x∈W} [(μ I₂(Ax + d) + δ) − I₁(x)]²

where W is the neighborhood window of the point to be expanded.
If, within the specified number of iterations, ε reaches its minimum and the S_ZNCC of the corresponding affine parameters exceeds the preset threshold z, the expansion is considered successful, and the sub-pixel matches obtained by these new expansions are inserted into the seed queue according to their S_ZNCC, to be used as new seeds for the next expansion. Expansion is repeated until every seed point has been fully grown.
Sub-step 1014: resample and filter the points obtained after expansion to obtain the two-dimensional quasi-dense points and their two-dimensional matching points on neighboring images.
After region growing, the quasi-dense points are determined, and tracked across the multiple images, through a resampling strategy and a filtering strategy.
Preferably, resampling the points obtained after expansion is specifically:
dividing each image into β×β pixel cells, and taking the center of each cell as a new sample point; obtaining the two-dimensional matching points of the sample points on neighboring images by the adaptive RANSAC algorithm.
That is, each image, taken as reference image R, is divided into β×β pixel cells (generally 8×8), and the center of each cell is taken as a new sample point x_i. The match of x_i on each image of the neighboring-image set V_R of R is then obtained as H_j x_i, where the transformation H_j is fitted by the robust adaptive RANSAC algorithm to the comparatively dense corresponding sub-pixel matches within the pixel cell (i.e. the sampling unit). The initial neighboring-image set V_R of each reference image is determined by thresholding the number of matches between two images.
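The regular grid of sample points can be sketched as follows; the default β = 8 follows the "generally 8×8" cell size mentioned above, and the function name is an illustrative assumption:

```python
def grid_sample_points(width, height, beta=8):
    """Centers of beta x beta pixel cells of a width x height image,
    used as the new sample points in the resampling step described
    above. A minimal sketch.
    """
    points = []
    for y in range(0, height - beta + 1, beta):
        for x in range(0, width - beta + 1, beta):
            # take the center of each cell as the new sample point
            points.append((x + beta / 2.0, y + beta / 2.0))
    return points
```

For a 64×64 image this yields an 8×8 grid of 64 sample points, each sitting at the center of its cell.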
Preferably, filtering the points obtained after expansion is specifically:
first filtering: reject sample points, together with their matches, whose matching correlation coefficient is below the preset threshold;
that is, for each new sample point, compute the zero-mean normalized cross-correlation coefficient S_ZNCC (i.e. the matching correlation coefficient) with its match; if S_ZNCC is below the preset threshold z (for example 0.8), the sample point and its match are rejected.
second filtering: reject sample points, together with their matches, whose symmetric epipolar distance between sample point and match exceeds a preset distance threshold;
that is, the points are further filtered with the given symmetric epipolar distance ED:

ED = d(x_i, F^T x'_i) + d(x'_i, F x_i)

where the epipolar constraint between the two views, i.e. the fundamental matrix F, is robustly re-estimated from the sample points and the initial seed points; if the symmetric epipolar distance exceeds the preset distance threshold (for example 1.5), the match of that sample point is rejected.
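The symmetric epipolar distance ED sums each point's distance to the epipolar line induced by its match; it can be sketched as below, with the same illustrative pure-translation F used in the usage check (not data from the patent):

```python
import numpy as np

def point_line_distance(x, line):
    """Distance of 2-D point x to the line a*u + b*v + c = 0."""
    a, b, c = line
    return abs(a * x[0] + b * x[1] + c) / np.hypot(a, b)

def symmetric_epipolar_distance(F, x1, x2):
    """ED = d(x1, F^T x2) + d(x2, F x1): each point's distance to the
    epipolar line of its match, as in the second filtering stage
    described above. A minimal sketch.
    """
    x1h = np.append(x1, 1.0)
    x2h = np.append(x2, 1.0)
    return (point_line_distance(x1, F.T @ x2h)
            + point_line_distance(x2, F @ x1h))
```

A correspondence lying exactly on its epipolar lines gives ED = 0; a point one pixel off each line gives ED = 2, which the threshold of 1.5 would reject.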
third filtering: reject sample points, together with their matches, whose number of corresponding matching points is below a preset count threshold;
for example, in a preferred embodiment of the invention, a sample point and its corresponding matches are kept, and used for the subsequent self-calibration, only when the number of matches of that sample point is greater than 3.
The sample points remaining after filtering are then the two-dimensional quasi-dense points, and the remaining matches are their two-dimensional matching points on neighboring images.
Step 102: using the two-dimensional quasi-dense points and their two-dimensional matches on neighboring images, obtain the three-dimensional positions of the quasi-dense points with the SFM algorithm, recovering the camera parameters and the three-dimensional scene structure.
The structure-from-motion (SFM) algorithm is applied to the two-dimensional points tracked in step 101 to obtain the three-dimensional points and the camera parameters. The concrete steps comprise:
selecting two images as the initial reference images, and reconstructing the quasi-dense points on the two images in three dimensions; introducing the remaining images one by one, estimating the camera parameters of each introduced image from the already reconstructed three-dimensional quasi-dense points; using the obtained camera parameters to reconstruct the points not yet reconstructed on each image, continuously updating the camera parameters and the three-dimensional scene structure; one optimization is performed per introduced image.
Fig. 2 is a schematic diagram of recovering the camera parameters and the three-dimensional scene structure in the embodiment of the invention. Two of the images are selected as initial reference images, the world coordinate system is made identical to the coordinate system of the first image, their camera matrices are normalized, and the camera parameters, the projection matrices P_i and P_j, are obtained. The triangulation principle is then used to reconstruct in three dimensions all quasi-dense matches on the two images, and bundle adjustment is applied to the camera parameters and the quasi-dense three-dimensional points. As shown in Fig. 2, the 3D point M is reconstructed from the known P_i and P_j.
The remaining images are introduced one by one; the camera parameters of each newly introduced image are estimated from the already reconstructed 3D quasi-dense points, i.e. P_k is solved from the reconstructed 3D points, and the points not yet reconstructed on that image are then reconstructed with the obtained camera matrix, so that the camera parameters and the scene structure are continuously updated until the reconstruction of all images is finished; one bundle adjustment is carried out for every one or few images introduced.
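The triangulation of a 3D point from two known projection matrices, as in the reconstruction of M from P_i and P_j, can be sketched with the standard linear (DLT) method; this is a sketch of the general principle, not necessarily the exact solver used in the patent:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3-D point from its projections
    x1, x2 under cameras P1, P2 (3x4 matrices). Each image point
    contributes two rows to a homogeneous system A X = 0, solved by
    SVD. A minimal sketch.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # null vector = homogeneous 3-D point
    return X[:3] / X[3]
```

With P1 = [I | 0] and a second camera translated along the x-axis, the point (0, 0, 4) is recovered exactly from its two noise-free projections.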
Step 103: carry out the iterative optimization of the three-dimensional scene structure and the camera parameters.
After the three-dimensional scene structure is recovered, an initial estimate of the camera parameters and the scene structure is available. Suppose there are m images and n reconstructed three-dimensional points; in a global optimization, since each camera has 11 degrees of freedom and each three-dimensional point has 3, 3n + 11m parameters must be optimized. As m and n grow, the cost becomes very large, and the optimization may even become infeasible.
The embodiment of the invention therefore optimizes the camera parameters and the scene structure separately, with different strategies, in alternating iterations. The iterative optimization is specifically: keeping the camera parameters fixed, optimize the three-dimensional scene structure; keeping the three-dimensional scene structure fixed, optimize the camera parameters. The optimization of the scene structure is specifically: use a color-consistency strategy to optimize the three-dimensional positions of the quasi-dense points, simultaneously optimizing the normal vector of the tangent plane at each position; the camera parameters are optimized through a global objective function.
The optimization of the scene structure, i.e., of the positions of the reconstructed 3D points: the colour-consistency strategy is used to optimize the position of each 3D point, and the normal vector of the tangent plane at that position is optimized simultaneously. The position and normal of a 3D point are written p(x, y, z, n). The objective function is the mean C(p) of the normalized cross-correlation NCC:

C(p) = ( Σ_{I ∈ V_p^v} NCC(p, R, I) ) / |V_p^v|

V_p^v = { I | I ∈ V_p, NCC(p, R, I) > ξ }

where V_p is the image set corresponding to each group of resampled, tracked match points after the final filtering (comprising the reference image and the new neighbourhood image set), and V_p^v is the subset of images in which the point is judged visible, with |V_p^v| its size. After the iterative optimization of the position and normal of the 3D point, the parameters with the highest C(p) are taken as the optimized parameters of that point.
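The photoconsistency score C(p) can be sketched directly from the two formulas above. The sketch assumes the patch around the point's projection in each image has already been extracted; the patch values and the threshold ξ = 0.4 are invented for illustration:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = (a - a.mean()) / (a.std() + 1e-12)   # eps guards constant patches
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def photoconsistency(ref_patch, patches, xi=0.4):
    """C(p): mean NCC over the visible set V_p^v = {I : NCC(p, R, I) > xi}.
    Returns (C(p), |V_p^v|)."""
    scores = [ncc(ref_patch, q) for q in patches]
    visible = [s for s in scores if s > xi]
    if not visible:
        return 0.0, 0
    return sum(visible) / len(visible), len(visible)

ref = np.arange(9.0).reshape(3, 3)
good = ref * 2 + 1          # affinely related patch: NCC ≈ 1
bad = -ref                  # inverted patch: NCC ≈ -1, excluded from V_p^v
score, nvis = photoconsistency(ref, [good, bad], xi=0.4)
print(round(score, 6), nvis)  # → 1.0 1
```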
The camera parameters are optimized through the following global objective function:

f = Σ_{j=1}^{n} Σ_{i=1}^{m} || x_ji − x̂(P_i, X_j) ||

Here the 3D points X_j are treated as known parameters, and only the camera parameters P_i are optimized. The two types of parameters are optimized by alternating iteration.
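The global objective f above is the total reprojection error. A minimal sketch of its evaluation (not the patent's optimizer — only the cost function it minimizes), with invented toy data:

```python
import numpy as np

def project(P, X):
    """x̂(P, X): project homogeneous 3D point X with 3x4 camera matrix P."""
    x = P @ X
    return x[:2] / x[2]

def global_objective(points3d, cameras, obs):
    """f = Σ_j Σ_i ||x_ji − x̂(P_i, X_j)||, summed over observed (j, i) pairs.
    obs maps (j, i) -> measured 2D point x_ji."""
    return sum(np.linalg.norm(x - project(cameras[i], points3d[j]))
               for (j, i), x in obs.items())

# Toy check: a perfect observation gives zero reprojection error.
X = np.array([0.0, 0.0, 5.0, 1.0])
P = np.hstack([np.eye(3), np.zeros((3, 1))])
obs = {(0, 0): project(P, X)}
print(global_objective([X], [P], obs))  # → 0.0
```

An optimizer for step 103 would hold `points3d` fixed and adjust the entries of each camera matrix to reduce this sum.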
Step 104: select the neighbourhood image set of each image, using the camera parameters.
At this point the camera parameters have been obtained; however, to improve the precision and robustness of the self-calibration, steps 104 and 105 are introduced to provide better starting conditions for further optimization.
The strategy for choosing the neighbourhood images of each image is: the number of match points corresponding to the sampled points is greater than a preset number threshold; and the angle subtended at the 3D point by the optical centres of the two images is greater than or equal to a preset angle threshold.
Although the number of match points is a very important factor in choosing neighbourhood views, guaranteeing a sufficiently wide baseline between images is also one of the important factors affecting the final calibration precision. Measuring the baseline width before calibration is rather difficult, but it becomes easier afterwards. Therefore, to obtain comparatively stable matching features, the neighbourhood image set is chosen by the following strategy:
S_p(V) = Σ_{i=1}^{N} w(f_i, R, V)

w(f_i, R, V) = Π_{V1*, V2* ∈ R ∪ V_R} φ(f_i, V1*, V2*)

φ(f_i, V1*, V2*) = 1 if α ≥ α_min; α/α_min if α < α_min

where f_i is a pair of match points in images V1* and V2* that is also visible in the reference image R, and α is the angle subtended at the 3D point by the lines to the optical centres of the two images. A very small angle indicates that the baseline between the two images is narrow. This criterion guarantees that the reference image R and the images of its neighbourhood V_R pairwise have sufficiently wide baselines. For each reference image R, the λ images with the highest scores S_p(V) are taken as its new neighbourhood image set, denoted V'_R.
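The angle α and the penalty φ can be sketched as follows. The geometry (point and camera centres) and the threshold α_min = 10° are invented for illustration; the full score S_p(V) would sum the product of φ over all image pairs per the formulas above:

```python
import numpy as np

def baseline_angle(X, c1, c2):
    """alpha: angle (degrees) subtended at 3D point X by camera centres c1, c2."""
    u, v = c1 - X, c2 - X
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def phi(alpha, alpha_min=10.0):
    """phi: full credit for a wide enough baseline, linear penalty otherwise."""
    return 1.0 if alpha >= alpha_min else alpha / alpha_min

X = np.array([0.0, 0.0, 5.0])
wide = baseline_angle(X, np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
narrow = baseline_angle(X, np.array([0.0, 0.0, 0.0]), np.array([0.2, 0.0, 0.0]))
print(phi(wide), phi(narrow) < 1.0)  # wide pair gets full credit → 1.0 True
```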
Step 105: resample the quasi-dense points and their corresponding match points.
Return to step 102, and execute steps 102 to 105 repeatedly in a loop; the last loop ends after step 103.
The purpose of step 105 is to choose more reliable points for SFM. Step 104 has determined a new neighbourhood set V'_R for each reference image, so the image set of every 3D point associated with that reference image correspondingly becomes V'_p (where V'_p = V_p − (V_R − V'_R)). Let X_j be a quasi-dense point in 3D space, P_i the camera parameters, and x_ji the projection of the point on image i (i ∈ V'_p). Figure 3 is a schematic diagram of a 3D quasi-dense point and its corresponding 2D match points on multiple images.
The strategy for resampling the quasi-dense points and corresponding match points is: use the reprojection error as a weight, compute the weight of each quasi-dense point by a preset criterion, and keep the quasi-dense point and its corresponding match points when the weight is greater than zero.
Concretely, all quasi-dense points are assigned weights by the following criterion:

w_ji = || x_ji − x̂(P_i, X_j) || / ( Σ_{j=1}^{n} Σ_{i=1}^{m} || x_ji − x̂(P_i, X_j) || ) if || x_ji − x̂(P_i, X_j) || < δ; −1 if || x_ji − x̂(P_i, X_j) || ≥ δ

Only when the weight is greater than 0 is the corresponding point used for the new round of SFM.
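The weighting criterion above can be sketched as a small vectorized function. The reprojection errors and the threshold δ = 2 pixels below are invented for illustration:

```python
import numpy as np

def resample_weights(errors, delta=2.0):
    """w_ji: error normalized by the total error when below delta, else -1
    (marking the projection as an outlier to be discarded).
    errors: ||x_ji − x̂(P_i, X_j)|| for all tracked projections."""
    errors = np.asarray(errors, dtype=float)
    total = errors.sum()
    w = errors / total if total > 0 else np.zeros_like(errors)
    w[errors >= delta] = -1.0
    return w

w = resample_weights([0.1, 0.5, 3.0])
print(w[2] == -1.0, bool((w[:2] > 0).all()))  # outlier rejected, inliers kept → True True
```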
The weights w_ji above are also used to guide the first two steps of the SFM algorithm, namely selecting two of the multiple images as initial reference images, and introducing the remaining images one by one to obtain and optimize the 3D positions of the quasi-dense points.
First, in the selection of the initial reference images, a sufficiently wide baseline between the two images must again be guaranteed:

S(I_i, I_t) = Σ_{j=1}^{n} w(x_ji, x_jt)

w(x_ji, x_jt) = 1 if α > α_min and w_ji, w_jt ≠ −1; 0 otherwise

Second, when the other images are introduced one by one, the not-yet-reconstructed (not-yet-introduced) image with the largest number of matches to already reconstructed (introduced) points is searched for and chosen as the image to add; the match count is taken only over points whose weights are greater than 0.
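The initial-pair score S(I_i, I_t) then counts, per the formulas above, the tracks seen in both images with a wide enough angle and no outlier weight. A minimal sketch with invented angles and weights:

```python
import numpy as np

def pair_score(alphas, w_i, w_t, alpha_min=10.0):
    """S(I_i, I_t) = Σ_j w(x_ji, x_jt): number of tracks visible in both images
    whose baseline angle exceeds alpha_min and whose weights are not -1."""
    a = np.asarray(alphas, dtype=float)
    wi = np.asarray(w_i, dtype=float)
    wt = np.asarray(w_t, dtype=float)
    return int(((a > alpha_min) & (wi != -1) & (wt != -1)).sum())

# Track 0 qualifies; track 1 has a narrow baseline; track 2 is an outlier in I_i.
print(pair_score([15.0, 3.0, 20.0], [0.1, 0.2, -1.0], [0.3, 0.4, 0.5]))  # → 1
```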
Then, return step 102, the execution in step that circulates repeatedly 102 to step 105 is carried out iteration optimization, carries out step 103 for the last time and finishes, and promptly carries out exporting the result after the last iteration optimization of three-dimensional scene structure and camera parameter.General round-robin number of times is elected the degree of accuracy and the robustness that can reach calibration for four times as.
The optimization process above can be divided into two layers of iterative optimization (inner and outer). The inner-layer iterative optimization is the iterative optimization of the scene structure and camera parameters performed first; the outer-layer iterative optimization is, after the inner layer, choosing the neighbourhood images of the quasi-dense points and resampling their match points, executing a new round of SFM, and then performing the iterative optimization of the scene structure and camera parameters once more. The outer-layer iteration thus contains the inner-layer iteration.
In embodiments of the present invention, affine transformation is introduced into region growing. This is of great significance for the accurate localization of match points under wide-baseline conditions. Under a wide baseline, the larger perspective distortion between images prevents a similarity measure from being established directly between the neighbourhood windows of corresponding feature points; a geometric-distortion model between corresponding neighbourhood windows must therefore be established. A model based on affine transformation, widely used for the accurate localization of feature points, is introduced into the expansion process to optimize the positions of newly expanded match points. In addition, a strict filtering policy is adopted to reject false matches, providing more accurate candidate match points for calibration. Prior-art methods establish the similarity measure directly between feature-point neighbourhood windows and are therefore unsuitable for wide baselines; moreover, the prior art has no extra filtering policy, so a large number of false matches remain after expansion.
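The affine-distortion model amounts to comparing a window in the reference image against an affinely warped window in the other image, instead of an axis-aligned one. A minimal sketch (assuming nearest-neighbour sampling; the gradient image, the centre, and the affine matrix are invented for illustration — the patent's method would additionally optimize the affine parameters during expansion):

```python
import numpy as np

def warp_patch(img, center, A, half=4):
    """Sample a (2*half+1)^2 window around `center`, with the window's local
    coordinates mapped through the 2x2 affine matrix A (nearest neighbour)."""
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    coords = A @ np.vstack([xs.ravel(), ys.ravel()])   # distorted neighbourhood
    cx, cy = center
    ix = np.clip(np.round(coords[0] + cx).astype(int), 0, img.shape[1] - 1)
    iy = np.clip(np.round(coords[1] + cy).astype(int), 0, img.shape[0] - 1)
    return img[iy, ix].reshape(2 * half + 1, 2 * half + 1)

# With A = identity, the affine window degenerates to the ordinary window.
img = np.arange(100.0).reshape(10, 10)
patch = warp_patch(img, (5, 5), np.eye(2), half=2)
print(np.array_equal(patch, img[3:8, 3:8]))  # → True
```

An NCC computed between the reference window and such a warped window is what allows the similarity measure to survive wide-baseline perspective distortion, to first order.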
To illustrate the precision of the matching algorithm, a comparison experiment was carried out using the average epipolar distance: the smaller the average epipolar distance, the higher the matching precision.
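The epipolar distance used in this experiment can be sketched as the symmetric point-to-epipolar-line distance under a fundamental matrix F. The toy F below (pure translation between the two normalized cameras, so F = [t]_x) and the points are invented for illustration:

```python
import numpy as np

def symmetric_epipolar_distance(F, x1, x2):
    """Average point-to-epipolar-line distance of the match (x1, x2) under the
    fundamental matrix F, measured symmetrically in both images."""
    p1 = np.append(x1, 1.0)
    p2 = np.append(x2, 1.0)
    l2 = F @ p1           # epipolar line of x1 in image 2
    l1 = F.T @ p2         # epipolar line of x2 in image 1
    d2 = abs(p2 @ l2) / np.hypot(l2[0], l2[1])
    d1 = abs(p1 @ l1) / np.hypot(l1[0], l1[1])
    return 0.5 * (d1 + d2)

# F for pure translation t = (-1, 0, 0) between normalized cameras: F = [t]_x.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -1.0, 0.0]])
x1 = np.array([0.0, 0.0])
x2 = np.array([-0.2, 0.0])                       # a geometrically consistent match
print(symmetric_epipolar_distance(F, x1, x2) < 1e-12,
      symmetric_epipolar_distance(F, x1, x2 + [0.0, 0.5]) > 0.1)  # → True True
```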
Figure 4 is a schematic diagram of feature points and match points on two images in an embodiment of the invention. Figure 4(a) shows the initial feature points of the two images; Figure 4(b) shows their initial match points; Figure 4(c) shows the match points after expansion, sampling, and filtering. For ease of display, the sampling-unit size in this embodiment is 32 × 32; in practical applications the sampling-unit size can be chosen flexibly, and is generally set to 8 × 8.
Table 1: comparison of expansion algorithms
As Table 1 shows, in the embodiment of the invention, expansion and filtering yield not only a larger number of match points but also a lower average epipolar distance.
The two-layer iterative optimization of the invention was carried out: for the inner layer, different optimization strategies are adopted, and the camera parameters and scene structure are optimized separately and iteratively; for the outer layer, the resampling of quasi-dense points and the neighbourhood-view selection scheme provide more accurately tracked match points for subsequent calibration iterations, reducing the influence of false matches caused by illumination changes, occlusion, and similar factors. To prove the validity of the algorithm, the average reprojection errors before and after iteration were compared in embodiments of the invention; the result shows that a smaller average reprojection error is obtained after iteration.
Figure 5 compares, before and after the iterative optimization of the 3D points in an embodiment of the invention, the reprojected points on different images with the originally tracked match points. "+" marks a reprojected point on an image; the centre of "o" marks the originally tracked match point. The first column shows the distributions before iteration; the second column shows enlarged crops around these points before iteration; the third column shows the enlarged distributions after iteration.
Figure 5 readily shows that the reprojection error decreases after iteration. Comparing all 3D points, a smaller average reprojection error is obtained after iteration. The camera calibration method proposed by the embodiment of the invention therefore has stronger robustness and precision.
The camera self-calibration method provided by the present invention has been described above in detail, with specific cases used to set forth the principle and embodiments of the invention; the description of the embodiments above is only meant to help in understanding the method of the invention and its core concept. Meanwhile, for those of ordinary skill in the art, changes can be made in the specific embodiments and the scope of application according to the idea of the invention. In summary, this description should not be construed as limiting the invention.

Claims (10)

1. A camera self-calibration method, characterized in that it comprises:
Step A1: for multiple images of the same scene taken from multiple angles, extracting two-dimensional quasi-dense points by a region-growing algorithm based on affine transformation, and tracking to obtain the two-dimensional match points of the two-dimensional quasi-dense points on neighbourhood images;
Step A2: using the two-dimensional quasi-dense points and their two-dimensional match points on the neighbourhood images, obtaining the three-dimensional positions of the quasi-dense points by an SFM algorithm, and recovering the camera parameters and the three-dimensional scene structure;
Step A3: performing iterative optimization of the three-dimensional scene structure and the camera parameters;
Step A4: selecting the neighbourhood image set of each image, using the camera parameters;
Step A5: resampling the quasi-dense points and their corresponding match points;
returning to step A2, and executing steps A2 to A5 repeatedly in a loop, wherein the last loop ends after step A3.
2. The method of claim 1, characterized in that step A1 comprises:
extracting two-dimensional feature points, and obtaining the two-dimensional match points between pairs of images from the feature points;
taking the two-dimensional match points whose matching correlation coefficient is greater than a preset coefficient threshold as seed points, and arranging the seed points in a seed-point queue in descending order of matching correlation coefficient;
performing region growing at each seed point, and inserting newly expanded points into the seed-point queue as new seed points, according to the size of their matching correlation coefficients, for further expansion, until the region growing of all seed points is finished;
obtaining the two-dimensional quasi-dense points, and their two-dimensional match points on the neighbourhood images, by resampling and filtering the points obtained after the expansion.
3. The method of claim 2, characterized in that:
an affine-invariant feature detection operator is adopted to extract the two-dimensional feature points;
a SIFT feature descriptor is used to describe the features and obtain the two-dimensional match points corresponding to the two-dimensional feature points.
4. The method of claim 3, characterized in that:
the condition satisfied by the region growing is: the initial affine parameters of a neighbourhood point of a seed point are identical to those of the seed point, and within a preset number of optimization iterations the matching correlation coefficient corresponding to the affine parameters is greater than the preset coefficient threshold.
5. The method of claim 2, characterized in that resampling the points obtained after the expansion is specifically:
dividing each image into a plurality of β × β pixel cells, and taking the central point of each pixel cell as a new sampled point;
obtaining the two-dimensional match points of the sampled points on the neighbourhood images by an adaptive RANSAC algorithm.
6. The method of claim 5, characterized in that filtering the points obtained after the expansion is specifically:
a first filtering, rejecting sampled points, and their match points, whose matching correlation coefficient is less than the preset coefficient threshold;
a second filtering, rejecting sampled points, and their match points, for which the symmetric epipolar distance between the sampled point and the match point is greater than a preset distance threshold;
a third filtering, rejecting sampled points, and their match points, for which the number of match points corresponding to the sampled point is less than a preset number threshold;
the sampled points remaining after filtering are then the two-dimensional quasi-dense points, and the match points remaining after filtering are the two-dimensional match points of the corresponding two-dimensional quasi-dense points on the neighbourhood images.
7. The method of claim 1, characterized in that step A2 specifically comprises:
selecting two images as initial reference images, and performing three-dimensional reconstruction of the quasi-dense points on the two images;
introducing the remaining images one by one, and estimating the camera parameters of each introduced image from the three-dimensional quasi-dense points already reconstructed;
using the obtained camera parameters to reconstruct the points on each image not yet reconstructed, continuously updating the camera parameters and the three-dimensional scene structure; wherein an optimization is performed each time an image is introduced.
8. The method of claim 1, characterized in that the iterative optimization of the three-dimensional scene structure and camera parameters in step A3 is specifically:
holding the camera parameters constant and optimizing the three-dimensional scene structure; holding the three-dimensional scene structure constant and optimizing the camera parameters;
wherein the optimization of the three-dimensional scene structure is specifically: using a colour-consistency strategy to optimize the three-dimensional positions of the quasi-dense points, while simultaneously optimizing the normal vector of the tangent plane at each position;
and the camera parameters are optimized through a global objective function.
9. The method of claim 1, characterized in that the strategy for choosing the neighbourhood image set of each image in step A4 is:
the number of match points corresponding to the sampled points is greater than a preset number threshold;
and the angle subtended at the three-dimensional point by the optical centres of the two images is greater than or equal to a preset angle threshold.
10. The method of claim 1, characterized in that the strategy for resampling the quasi-dense points and corresponding match points in step A5 is:
using the reprojection error as a weight, computing the weight of each quasi-dense point by a preset criterion, and keeping the quasi-dense point and its corresponding match points when the weight is greater than zero.
CN2010101373345A 2010-03-31 2010-03-31 Camera self-calibration method Expired - Fee Related CN101826206B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101373345A CN101826206B (en) 2010-03-31 2010-03-31 Camera self-calibration method


Publications (2)

Publication Number Publication Date
CN101826206A true CN101826206A (en) 2010-09-08
CN101826206B CN101826206B (en) 2011-12-28

Family

ID=42690112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101373345A Expired - Fee Related CN101826206B (en) 2010-03-31 2010-03-31 Camera self-calibration method

Country Status (1)

Country Link
CN (1) CN101826206B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000078055A1 (en) * 1999-06-11 2000-12-21 Emile Hendriks Acquisition of 3-d scenes with a single hand held camera
CN101320473A (en) * 2008-07-01 2008-12-10 上海大学 Free multi-vision angle, real-time three-dimensional reconstruction system and method
CN101419705A (en) * 2007-10-24 2009-04-29 深圳华为通信技术有限公司 Video camera demarcating method and device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Yanli Wan et al., "Reconstruction of Dense Point Cloud from Uncalibrated Wide-baseline Images", IEEE International Conference on Acoustics, Speech and Signal Processing, 2010-03-19, pp. 1231-1233, sections 3-4. Relevant to claims 1-3, 5, 7-9.
Maxime Lhuillier et al., "A Quasi-Dense Approach to Surface Reconstruction from Uncalibrated Images", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, 2005-03, p. 419, section 2.1, paragraph 2. Relevant to claim 2.
Noah Snavely et al., "Modeling the World from Internet Photo Collections", International Journal of Computer Vision, vol. 80, no. 2, 2008-11, pp. 194-195, section 4.2, paragraphs 1-5. Relevant to claim 7.

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104541304B (en) * 2012-08-23 2017-09-12 微软技术许可有限责任公司 Use the destination object angle-determining of multiple cameras
CN102938229A (en) * 2012-09-18 2013-02-20 中国人民解放军装甲兵工程学院 Three-dimensional digital holography photon map
CN103759670A (en) * 2014-01-06 2014-04-30 四川虹微技术有限公司 Object three-dimensional information acquisition method based on digital close range photography
CN103759670B (en) * 2014-01-06 2016-09-28 四川虹微技术有限公司 A kind of object dimensional information getting method based on numeral up short
CN106709899A (en) * 2015-07-15 2017-05-24 华为终端(东莞)有限公司 Dual-camera relative position calculation method, device and equipment
WO2017008516A1 (en) * 2015-07-15 2017-01-19 华为技术有限公司 Two-camera relative position calculation system, device and apparatus
US10559090B2 (en) 2015-07-15 2020-02-11 Huawei Technologies Co., Ltd. Method and apparatus for calculating dual-camera relative position, and device
CN106709899B (en) * 2015-07-15 2020-06-02 华为终端有限公司 Method, device and equipment for calculating relative positions of two cameras
CN105279789B (en) * 2015-11-18 2016-11-30 中国兵器工业计算机应用技术研究所 A kind of three-dimensional rebuilding method based on image sequence
CN105279789A (en) * 2015-11-18 2016-01-27 中国兵器工业计算机应用技术研究所 A three-dimensional reconstruction method based on image sequences
WO2018153374A1 (en) * 2017-02-27 2018-08-30 安徽华米信息科技有限公司 Camera calibration
US11222442B2 (en) 2017-07-31 2022-01-11 Tencent Technology (Shenzhen) Company Limited Method for augmented reality display, method for determining pose information, and apparatuses
WO2019024793A1 (en) * 2017-07-31 2019-02-07 腾讯科技(深圳)有限公司 Method for displaying augmented reality and method and device for determining pose information
US11763487B2 (en) 2017-07-31 2023-09-19 Tencent Technology (Shenzhen) Company Limited Method for augmented reality display, method for determining pose information, and apparatuses
CN110163909A (en) * 2018-02-12 2019-08-23 北京三星通信技术研究有限公司 For obtaining the method, apparatus and storage medium of equipment pose
CN109829502A (en) * 2019-02-01 2019-05-31 辽宁工程技术大学 It is a kind of towards repeating the picture of texture and non-rigid shape deformations to efficient dense matching method
CN109829502B (en) * 2019-02-01 2023-02-07 辽宁工程技术大学 Image pair efficient dense matching method facing repeated textures and non-rigid deformation
CN110070610B (en) * 2019-04-17 2023-04-18 精伦电子股份有限公司 Feature point matching method, and feature point matching method and device in three-dimensional reconstruction process
CN110070610A (en) * 2019-04-17 2019-07-30 精伦电子股份有限公司 The characteristic point matching method and device of characteristic point matching method, three-dimensionalreconstruction process
CN111899305A (en) * 2020-07-08 2020-11-06 深圳市瑞立视多媒体科技有限公司 Camera automatic calibration optimization method and related system and equipment
CN111862352A (en) * 2020-08-03 2020-10-30 字节跳动有限公司 Positioning model optimization method, positioning method and positioning equipment

Also Published As

Publication number Publication date
CN101826206B (en) 2011-12-28

Similar Documents

Publication Publication Date Title
CN101826206B (en) Camera self-calibration method
CN111815757B (en) Large member three-dimensional reconstruction method based on image sequence
CN111028277B (en) SAR and optical remote sensing image registration method based on pseudo-twin convolution neural network
CN103218783B (en) Satellite remote sensing images fast geometric correcting method based on control point image database
Wu Towards linear-time incremental structure from motion
US10438366B2 (en) Method for fast camera pose refinement for wide area motion imagery
Deng et al. Noisy depth maps fusion for multiview stereo via matrix completion
CN103822616A (en) Remote-sensing image matching method with combination of characteristic segmentation with topographic inequality constraint
CN106485690A (en) Cloud data based on a feature and the autoregistration fusion method of optical image
CN112765095B (en) Method and system for filing image data of stereo mapping satellite
Antone et al. Scalable extrinsic calibration of omni-directional image networks
CN101882308A (en) Method for improving accuracy and stability of image mosaic
Schönberger et al. Structure-from-motion for MAV image sequence analysis with photogrammetric applications
CN108759788B (en) Unmanned aerial vehicle image positioning and attitude determining method and unmanned aerial vehicle
CN103235810B (en) Remote sensing image reference mark data intelligence search method
CN107220996B (en) One kind is based on the consistent unmanned plane linear array of three-legged structure and face battle array image matching method
Yao et al. Relative camera refinement for accurate dense reconstruction
JP2023530449A (en) Systems and methods for air and ground alignment
Jiang et al. Learned local features for structure from motion of uav images: A comparative evaluation
Cui et al. Tracks selection for robust, efficient and scalable large-scale structure from motion
CN114399547B (en) Monocular SLAM robust initialization method based on multiframe
Magri et al. Bending the doming effect in structure from motion reconstructions through bundle adjustment
US20230032712A1 (en) Method For RPC Refinement By Means of a Corrective 3D Rotation
Bartelsen et al. Orientation and dense reconstruction from unordered wide baseline image sets
Wang et al. Fast and accurate satellite multi-view stereo using edge-aware interpolation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111228

Termination date: 20120331