CN103218820A - Camera calibration error compensation method based on multi-dimensional characteristics - Google Patents


Info

Publication number
CN103218820A (application CN201310140445, granted as CN103218820B)
Authority
CN
China
Prior art keywords: file, feature, key point, gabor, camera calibration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013101404455A
Other languages
Chinese (zh)
Other versions
CN103218820B (en)
Inventor
吴宏杰 (Wu Hongjie)
奚雪峰 (Xi Xuefeng)
陆卫忠 (Lu Weizhong)
胡伏原 (Hu Fuyuan)
付保川 (Fu Baochuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University of Science and Technology
Original Assignee
Suzhou University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University of Science and Technology
Priority to CN201310140445.5A (granted as CN103218820B)
Publication of CN103218820A
Application granted
Publication of CN103218820B
Legal status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Processing (AREA)

Abstract

A camera calibration error compensation method based on multi-dimensional characteristics includes the following steps: (1) data preparation: collect p images of a standard target, obtaining p images containing errors, and select q key points in each image, giving p*q key points in total; (2) key-point feature extraction: extract the features of each key point; (3) compute the actual errors (Δx, Δy)p×q of the p*q key points; (4) training: train a support vector regression model with the SVMLight tool; (5) error estimation: obtain the actual positions (x, y)q of the q key points in a new image, extract their features as in step (2), store them in the feature file to be regressed, and compute the compensation value (Δx, Δy) of each key point. The method uses association characteristics of the scene image and estimates the compensation value of each collected image by support vector regression, so that the compensated light-target center lies close to the ideal light-target center.

Description

Camera calibration error compensation method based on multi-dimensional features
Technical field
The invention belongs to the field of image processing, and specifically relates to a camera calibration error compensation method based on multi-dimensional features.
Background technology
Camera calibration is one of the major problems in machine vision. For a pinhole-model camera, calibration mainly consists of solving for the camera's intrinsic parameters and extrinsic parameters; these parameters realize the transformation from object pixel coordinates (image coordinate system) to scene coordinates (world coordinate system). Calibration can be performed with the direct linear method, the Tsai method, the Zhang Zhengyou method, and so on. Whichever method is used, lens radial distortion, tilt geometric deformation, and changes in the site environment introduce an error between the actually computed target coordinates and the ideal coordinates. The quality of the calibration error correction not only directly affects the accuracy of the coordinate transformation, but also has a far-reaching effect on the accuracy of subsequent high-level image understanding.
Camera calibration error compensation has already been studied extensively at home and abroad. Bukhari et al. estimate radial error from a single image using the plumb-line method and a homogeneous equation model. For the radial distortion problem, Wang et al. observe that distorted points lie on concentric circles and compensate the distortion by computing the circle center of the distorted points, obtaining good results. Lucchese et al. proposed a method that corrects radial and tilt distortion simultaneously, but it requires at least a fifth-order polynomial for the radial distortion together with a binomial for the tilt. Zhang Jiacheng et al. combined multiple compensation estimation models: a classical model first corrects the distorted image, a polyhedral function fitting method performs a second fine correction, and a cubic B-spline function reconstructs the gray levels; compared with a single lens distortion calibration model, precision improves and robustness increases, with a radial root-mean-square error of 0.3 pixel after correction. Liu Tang et al. built a joint model of radial and tilt distortion, solved for the distortion parameters of a standard grid with least squares and an optimization algorithm, and finally compensated the target.
The above compensation methods all achieve a certain effect, but two aspects can still be improved. First, when the offset is modeled with a high-order polynomial, the amount of computation grows exponentially as the order increases, making it difficult to balance computation time against compensation quality. Second, camera distortion not only displaces pixels in a local region but also affects the imaging of the whole picture, so the choice of image features must take both local and global features into account.
The camera calibration error compensation model commonly used at present: camera calibration solves the intrinsic and extrinsic parameters to realize the transformation from object pixel coordinates (image coordinate system) to scene coordinates (world coordinate system). The camera calibration error model is as follows:

s · [u, v, 1]^T = A · [R|T] · [X, Y, Z, 1]^T

Let

A = [ f_x  0    c_x
      0    f_y  c_y
      0    0    1  ]

In the formula, (X, Y, Z) are the world coordinates of a point and (u, v) are the pixel coordinates of that point on the image; A is the intrinsic parameter matrix of the camera, (c_x, c_y) is the principal point, and f_x and f_y are the camera focal lengths, all in units of pixels. The rotation-translation matrix [R|T] is called the extrinsic parameters; it describes the pose of the camera with respect to the scene. φ_R denotes the camera error compensation function, where R is the feature vector (r1, r2, …, rn) of the current environment, whose components can be any features characterizing the current environment, such as average illumination, average gray level, and gray-level variance. So-called compensation is precisely finding the optimal compensation function φ*_R that depends on R.
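A minimal Python sketch of this projection-plus-compensation pipeline (all numeric values for A, R, T, and the compensation offsets below are illustrative assumptions, not taken from the patent):

import numpy as np

# Hypothetical intrinsic matrix A: focal lengths f_x, f_y and principal
# point (c_x, c_y), all in pixels.
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical extrinsics [R|T]: identity rotation, translation along Z.
R = np.eye(3)
T = np.array([[0.0], [0.0], [1000.0]])
RT = np.hstack([R, T])                          # 3x4 rotation-translation matrix

Xw = np.array([[100.0], [50.0], [0.0], [1.0]])  # world point (X, Y, Z, 1)

uvw = A @ RT @ Xw                               # projective image coordinates
u, v = uvw[:2, 0] / uvw[2, 0]

# phi_R stands for the error compensation function of the text; here it is a
# stub returning a fixed offset where the trained regressor would be queried.
def phi_R(u, v):
    return 0.8, -0.5                            # assumed compensation in pixels

du, dv = phi_R(u, v)
print("raw:", (u, v), "compensated:", (u + du, v + dv))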
Summary of the invention
The technical problem to be solved by the invention is to provide a camera calibration error compensation method based on multi-dimensional features.
To solve the above technical problem, the invention adopts the following technical scheme:
A camera calibration error compensation method based on multi-dimensional features, comprising the following steps:
(1) Data preparation: first collect p images of a standard target, obtaining p images containing errors; then choose q key points from each image, giving p × q key points;
(2) Key-point feature extraction: extract the features of each key point; the features comprise color features, local Gabor features, and global association features;
(3) Compute the actual errors (Δx, Δy)p×q of the p × q key points: for each key point in the p images, compute the actual error between its coordinates and its ideal position coordinates;
(4) Training: use the SVMLight tool to train a support vector regression model, taking as input the features of the p × q key points obtained in step (2) and the errors (Δx, Δy)p×q obtained in step (3), and finally obtain the model files;
(5) Error estimation: when a new picture is taken, obtain the actual positions (x, y)q of the q key points, then extract the features of these q key points as in step (2), store them in the feature file to be regressed, and compute the compensation value (Δx, Δy) of each key point.
Further, the color features extracted in step (2) are the means and variances of the color components of 4 commonly used color spaces in the N × N area around each key point; the 4 color spaces are RGB, CMYK, HSV, and HSI, where N ≤ 50.
Further, the local Gabor feature extraction in step (2) comprises the following steps:
A. Normalize the color image to a gray-scale image with 256 gray levels;
B. Divide the area around the current key point into L sub-windows of size N × N, where L ≤ 50;
C. Apply the Gabor transform to each window. The original image is F(x, y), and the Gabor transform yields the new image Q(x, y), where

Q(x, y) = [(Gabor_R(x, y)*F(x, y))² + (Gabor_I(x, y)*F(x, y))²]^(1/2)   (1)

In formula (1), * denotes convolution, and

Gabor_R(x, y) = (1/(2πσ²)) · e^(−(x² + y²)/(2σ)) · cos(2π(x·cosθ + y·sinθ)/l)   (2)

Gabor_I(x, y) = (1/(2πσ²)) · e^(−(x² + y²)/(2σ)) · sin(2π(x·cosθ + y·sinθ)/l)   (3)

In formulas (2) and (3), σ = π.

After the transform, the mean gray level of each window is denoted by μ, the standard deviation of each window by δ, and the variation coefficient of each window by k (k = δ/μ). This finally yields the local Gabor features of the L windows: μ1, μ2, …, μi, …, μL; δ1, δ2, …, δi, …, δL; and k1, k2, …, ki, …, kL, 3L dimensions in total, where i is the window index.
Further, the global association feature extraction in step (2) comprises the following steps:
A. Regard the local Gabor features as three one-dimensional sequences: μ1, μ2, …, μi, …, μL; δ1, δ2, …, δi, …, δL; and k1, k2, …, ki, …, kL, where i is the window index;
B. Compute the association features with the autocorrelation function given in formula (4), where m is the correlation order, L is the number of Gabor sub-windows, and m < L:

Rμn = (1/(L − n)) · Σ(i = 1 to L − n) μi · μ(i+n),   n = 1, 2, …, m   (4)

From the sequence μ1, μ2, …, μi, …, μL, m features Rμ1, Rμ2, …, Rμm can then be computed; likewise Rδ1, Rδ2, …, Rδm and Rk1, Rk2, …, Rkm, for 3m global association features in total.
Further, the method for computing the actual errors (Δx, Δy)p×q of the p × q key points in step (3) comprises the following steps:
A. Take the laser-measured centers of the q key points on the standard target image as the ideal position coordinates (x, y)s, where s = 1, 2, …, q;
B. Obtain the actual positions (x, y)t×s of the q key points in the p images with a point recognition technique, where t = 1, 2, …, p and s = 1, 2, …, q;
C. Compute: (Δx, Δy)p×q = |(x, y)s − (x, y)t×s|.
Further, the training in step (4) uses the SVMLight tool; its steps are as follows:
A. Enter the training command; the training command is: svm_learn -z r -t 2 -g 0.12 example_file model_file, where example_file is the input file of training data and model_file is the output model file. The parameter -z is r, selecting the regression model; the parameter -t is 2, selecting the Gaussian radial basis function (RBF) kernel; the parameter -g is 0.12, setting the RBF kernel parameter gamma to 0.12.
B. Generate the model files: two different model files must be generated, one for x and one for y.
Further, the error estimation in step (5) proceeds as follows:
A. Enter the regression command svm_classify example_file model_file output_file, where example_file is the feature file to be regressed, model_file is the model file generated in step (4), and output_file is the output file;
B. Generate the output file: each line of the output file corresponds one-to-one with a line of the feature file to be regressed and gives the regressed value of that line.
With the above technical scheme, the invention has the following advantages over the prior art:
The image features extracted by the invention take both local and global features into account. Using the association features of the image scene, support vector regression estimates the compensation value of each collected image in real time; with the method of the invention, the compensated light-target center is closer to the ideal light-target center.
Compared with the traditional approach of modeling the offset with a high-order polynomial, the multi-feature extraction method of the invention improves prediction precision by increasing the number of training images, and the computation time of error compensation is independent of the training time.
Description of drawings
Fig. 1 is the image of the standard target;
Fig. 2 shows the 9 sub-windows;
Fig. 3 shows the distribution of center points within different distance ranges before and after compensation;
Fig. 4 is the original image of the simulated bridge;
Fig. 5 compares the compensated coordinates with the uncompensated coordinates.
Embodiment
The invention is further described below with reference to the accompanying drawings.
A camera calibration error compensation method based on multi-dimensional features specifically comprises the following steps:
(1) Data preparation: first collect p images of the standard target shown in Fig. 1, obtaining p images containing errors; then choose q key points from each image, giving p × q key points, where p and q are positive integers;
(2) Key-point feature extraction:
Extract the features of each key point; each key point has three categories of features, namely color features, local Gabor features, and global association features, and there are p × q such key points. The three categories of features are extracted as follows:
(21) The color features are extracted with a common color feature extraction method: in the N × N area around each key point, the mean and variance of each color component of the 4 commonly used color spaces RGB, CMYK, HSV, and HSI are taken as features. The RGB color space has three color components, giving 3 × 2 feature dimensions; the CMYK color space has four color components, giving 4 × 2 dimensions; the HSV color space has three color components, giving 3 × 2 dimensions; the HSI color space has three color components, giving 3 × 2 dimensions. There are thus 26 color feature dimensions in total. In this example, N = 8.
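A minimal Python sketch of this 26-dimensional color feature, assuming OpenCV/NumPy and a key point far enough from the image border for the full patch. OpenCV provides no CMYK or HSI conversion, so those components are computed by hand here, and the HSI hue simply reuses the HSV hue; these are assumptions of the sketch, not details from the patent:

import cv2
import numpy as np

def color_features(img_bgr, x, y, N=8):
    # N x N patch around the key point (x, y), scaled to [0, 1].
    patch = img_bgr[y - N//2:y + N//2, x - N//2:x + N//2].astype(np.float32) / 255.0
    b, g, r = cv2.split(patch)

    # CMYK from RGB: K = 1 - max(R, G, B), then C, M, Y relative to 1 - K.
    k = 1.0 - np.max(patch, axis=2)
    denom = np.maximum(1.0 - k, 1e-6)
    c, m, ylw = (1 - r - k) / denom, (1 - g - k) / denom, (1 - b - k) / denom

    # HSV via OpenCV (H in [0, 179], S and V in [0, 255] for uint8 input).
    h, s, v = cv2.split(cv2.cvtColor((patch * 255).astype(np.uint8),
                                     cv2.COLOR_BGR2HSV).astype(np.float32))

    # HSI intensity and saturation; the HSI hue reuses h for brevity.
    i = (r + g + b) / 3.0
    s_i = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-6)

    comps = [r, g, b, c, m, ylw, k, h, s, v, h, s_i, i]   # 13 components
    return np.array([f(ch) for ch in comps for f in (np.mean, np.var)])  # 26-dim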
(22) The local Gabor features are extracted as follows:
A. Normalize the color image to a gray-scale image with 256 gray levels;
B. Divide the area around the current key point M into L sub-windows of size 8 × 8; in this example, L = 9, as shown in Fig. 2;
C. Apply the Gabor transform to each window. The original image is F(x, y), and the Gabor transform yields the new image Q(x, y), where

Q(x, y) = [(Gabor_R(x, y)*F(x, y))² + (Gabor_I(x, y)*F(x, y))²]^(1/2)   (1)

In formula (1), * denotes convolution, and

Gabor_R(x, y) = (1/(2πσ²)) · e^(−(x² + y²)/(2σ)) · cos(2π(x·cosθ + y·sinθ)/l)   (2)

Gabor_I(x, y) = (1/(2πσ²)) · e^(−(x² + y²)/(2σ)) · sin(2π(x·cosθ + y·sinθ)/l)   (3)

In formulas (2) and (3), σ = π.

After the transform, the mean gray level of each window is denoted by μ, the standard deviation of each window by δ, and the variation coefficient of each window by k (k = δ/μ). This yields the local Gabor features of the 9 windows: μ1, μ2, …, μi, …, μ9; δ1, δ2, …, δi, …, δ9; and k1, k2, …, ki, …, k9, 27 dimensions in total, where i is the window index.
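A minimal Python sketch of steps A to C and the window statistics. The values of θ and the wavelength l did not survive extraction from the original, so θ = 0 and l = 8 are placeholder assumptions, as is the layout of the L = 9 sub-windows as a 3 × 3 tiling of 8 × 8 blocks centered on the key point; σ = π follows the text, and the Gaussian exponent follows the reconstructed formulas (2) and (3):

import numpy as np
import cv2

def gabor_kernels(size=8, sigma=np.pi, theta=0.0, wavelength=8.0):
    # Real and imaginary kernels per formulas (2) and (3); theta and
    # wavelength are assumed values, sigma = pi as stated in the text.
    half = size // 2
    ys, xs = np.mgrid[-half:half, -half:half].astype(np.float64)
    env = np.exp(-(xs**2 + ys**2) / (2.0 * sigma)) / (2.0 * np.pi * sigma**2)
    phase = 2.0 * np.pi * (xs * np.cos(theta) + ys * np.sin(theta)) / wavelength
    return env * np.cos(phase), env * np.sin(phase)

def local_gabor_features(gray, kx, ky, L=9, N=8):
    g_re, g_im = gabor_kernels(size=N)
    re = cv2.filter2D(gray.astype(np.float64), -1, g_re)
    im = cv2.filter2D(gray.astype(np.float64), -1, g_im)
    q = np.sqrt(re**2 + im**2)                    # Q(x, y), formula (1)

    mus, deltas, ks = [], [], []
    grid = int(round(np.sqrt(L)))                 # 3 x 3 tiling for L = 9
    for row in range(grid):
        for col in range(grid):
            y0 = ky + (row - grid // 2) * N - N // 2
            x0 = kx + (col - grid // 2) * N - N // 2
            win = q[y0:y0 + N, x0:x0 + N]
            mu, delta = float(win.mean()), float(win.std())
            mus.append(mu); deltas.append(delta)
            ks.append(delta / max(mu, 1e-9))      # k = delta / mu
    return mus, deltas, ks                        # 3L values in total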
(23) The global association features are extracted as follows:
A. Regard the local Gabor features as three one-dimensional sequences: μ1, μ2, …, μi, …, μ9; δ1, δ2, …, δi, …, δ9; and k1, k2, …, ki, …, k9, where i is the window index;
B. Compute the association features with the autocorrelation function given in formula (4), where m is the correlation order, L is the number of Gabor sub-windows, and m < L:

Rμn = (1/(L − n)) · Σ(i = 1 to L − n) μi · μ(i+n),   n = 1, 2, …, m   (4)

In this example, m = 3. From the sequence μ1, μ2, …, μi, …, μ9, three features Rμ1, Rμ2, Rμ3 can be computed; likewise Rδ1, Rδ2, Rδ3 and Rk1, Rk2, Rk3, for 9 global association features in total.
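A short Python sketch of formula (4), applied to the three sequences produced by the local Gabor step; with L = 9 and m = 3 it returns the 9 global association features:

def autocorrelation_features(seq, m=3):
    # R_n = (1/(L - n)) * sum over i of seq[i] * seq[i + n], formula (4).
    L = len(seq)
    return [sum(seq[i] * seq[i + n] for i in range(L - n)) / (L - n)
            for n in range(1, m + 1)]

# mus, deltas, ks are the three sequences from the local Gabor step:
# global_feats = (autocorrelation_features(mus)
#                 + autocorrelation_features(deltas)
#                 + autocorrelation_features(ks))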
(3) Compute the actual errors (Δx, Δy)p×q of the p × q key points, i.e., the actual error between each key point's coordinates in the p images and its ideal position coordinates, as follows:
A. Take the laser-measured centers of the q key points in Fig. 1 as the ideal position coordinates (x, y)s, where s = 1, 2, …, q;
B. Obtain the actual positions (x, y)t×s of the q key points in the p images with a point recognition technique, where t = 1, 2, …, p and s = 1, 2, …, q;
C. Compute: (Δx, Δy)p×q = |(x, y)s − (x, y)t×s|.
(4) Training:
Use the SVMLight tool to train the support vector regression (SVR) model, taking as input the features of the p × q key points obtained in step (2) and the errors (Δx, Δy)p×q obtained in step (3), finally obtaining the model file model_file. The steps are as follows:
A. Enter the training command
The training command is: svm_learn -z r -t 2 -g 0.12 example_file model_file
Here, example_file is the input file of training data and model_file is the output model file. The parameter -z is r, selecting the regression model; the parameter -t is 2, selecting the Gaussian radial basis function (RBF) kernel; the parameter -g is 0.12, setting the RBF kernel parameter gamma to 0.12.
B. Form the example_file file (the following shows the format only):
Each line represents one training sample, consisting of the compensation distance to be regressed followed by the 62-dimensional feature vector in 62 index:value columns, formatted as follows:
Δx 1:0.65 2:0.78 … 62:0.32
and likewise for Δy:
Δy 1:0.65 2:0.78 … 62:0.32
C. Generate the model file model_file
In this example, two different model files must be generated, one for x and one for y.
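The sparse 'target index:value' line format above is SVMLight's native input format, so the training files can be written with a few lines of Python. The following sketch uses placeholder feature vectors and errors (the real ones come from steps (2) and (3), 26 + 27 + 9 = 62 dimensions per key point) and assumes svm_learn is on the PATH:

import subprocess

def write_svmlight_file(path, targets, feature_rows):
    # One line per sample: "<target> 1:<f1> 2:<f2> ... 62:<f62>".
    with open(path, "w") as f:
        for target, feats in zip(targets, feature_rows):
            pairs = " ".join(f"{j + 1}:{v:.6f}" for j, v in enumerate(feats))
            f.write(f"{target} {pairs}\n")

features = [[0.65, 0.78] + [0.0] * 60]   # placeholder 62-dim vectors, step (2)
dx, dy = [0.8], [-0.5]                   # placeholder errors, step (3)

# One regressor per axis: x and y get separate training and model files.
write_svmlight_file("example_x.dat", dx, features)
write_svmlight_file("example_y.dat", dy, features)
for axis in ("x", "y"):
    subprocess.run(["svm_learn", "-z", "r", "-t", "2", "-g", "0.12",
                    f"example_{axis}.dat", f"model_{axis}.dat"], check=True)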
(5) Error estimation: when a new picture is taken, obtain the actual positions (x, y)q of the q key points, then extract the features of these q key points as in step (2), store them in the feature file to be regressed, and compute the compensation value (Δx, Δy) of each key point. The steps are as follows:
A. Enter the regression command
The regression command is: svm_classify example_file model_file output_file
Here, example_file is the feature file to be regressed; model_file is the model file generated in step (4); output_file is the output file.
B. Form the example_file file
Each line represents one feature vector to be regressed, comprising the 62 feature dimensions, formatted as follows (the following shows the format only):
1:0.65 2:0.78 … 62:0.32
C. Generate the output_file file
In the output_file file, each line corresponds one-to-one with a line of example_file and gives the regressed value of that line.
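A matching Python sketch of step (5), assuming the model files from the training sketch above and svm_classify on the PATH; svm_classify expects the same sparse line format, here with a dummy leading target of 0 for unlabeled data, and the feature row and measured position are hypothetical:

import subprocess

# Hypothetical 62-dim feature row for one detected key point, step (2).
feats = [0.65, 0.78] + [0.0] * 60
with open("regress.dat", "w") as f:
    f.write("0 " + " ".join(f"{j + 1}:{v:.6f}" for j, v in enumerate(feats)) + "\n")

# One pass per axis; each output line is the regressed compensation for
# the corresponding input line.
subprocess.run(["svm_classify", "regress.dat", "model_x.dat", "out_x.dat"], check=True)
subprocess.run(["svm_classify", "regress.dat", "model_y.dat", "out_y.dat"], check=True)

with open("out_x.dat") as fx, open("out_y.dat") as fy:
    deltas = [(float(a), float(b)) for a, b in zip(fx, fy)]

measured = [(105.2, 87.4)]               # hypothetical detected position (x, y)
compensated = [(x + dx, y + dy) for (x, y), (dx, dy) in zip(measured, deltas)]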
(6) Simulation experiment
In this experiment, 50 error compensation trials were performed on 10 monitoring points on a simulated bridge. Fig. 4 is the original image of the simulated bridge, and the black dots 1 to 10 in Fig. 4 are the 10 monitoring points. Over the 50 trials, the distances between the ideal light-target center and the light-target centers before and after compensation were first computed, and the distribution of center points within different distance ranges before and after compensation was then tallied; the distribution is shown in Fig. 3. Here the laser-measured center is taken as the ideal light-target center position. In Fig. 3, the horizontal-axis labels 1 and 1_SVR denote the center-point distributions before and after compensation, respectively; black indicates the number of points within 10 pixels of the ideal center point; gray indicates the number within 11 to 20 pixels; dark gray indicates the number within 21 to 30 pixels; white indicates the number more than 30 pixels away.
As can be seen from Fig. 3, after compensation, monitoring points 1, 4, 5, 7, and 9 have more points close to the ideal point than before compensation; monitoring points 2, 3, and 6 perform comparably before and after; monitoring points 8 and 10 are slightly worse after compensation. All in all, the dynamic compensation method of the invention compensates the light-target center effectively.
Fig. 5 compares the compensated coordinates with the uncompensated coordinates; as can be seen from the figure, most compensated target-spot centers are closer to the laser-measured target position on the bridge than the uncompensated ones.
The foregoing embodiment only illustrates the technical concept and characteristics of the invention; its purpose is to let those familiar with the art understand and implement the invention accordingly, and it does not limit the scope of protection of the invention. All equivalent changes or modifications made according to the spirit of the invention shall be covered by the scope of protection of the invention.

Claims (7)

1. A camera calibration error compensation method based on multi-dimensional features, characterized in that it comprises the following steps:
(1) Data preparation: first collect p images of a standard target, obtaining p images containing errors; then choose q key points from each image, giving p × q key points;
(2) Key-point feature extraction: extract the features of each key point; the features comprise color features, local Gabor features, and global association features;
(3) Compute the actual errors (Δx, Δy)p×q of the p × q key points: for each key point in the p images, compute the actual error between its coordinates and its ideal position coordinates;
(4) Training: use the SVMLight tool to train a support vector regression model, taking as input the features of the p × q key points obtained in step (2) and the errors (Δx, Δy)p×q obtained in step (3), and finally obtain the model files;
(5) Error estimation: when a new picture is taken, obtain the actual positions (x, y)q of the q key points, then extract the features of these q key points as in step (2), store them in the feature file to be regressed, and compute the compensation value (Δx, Δy) of each key point.
2. The camera calibration error compensation method based on multi-dimensional features according to claim 1, characterized in that: the color features extracted in step (2) are the means and variances of the color components of 4 commonly used color spaces in the N × N area around each key point; the 4 color spaces are RGB, CMYK, HSV, and HSI, where N ≤ 50.
3. The camera calibration error compensation method based on multi-dimensional features according to claim 2, characterized in that the local Gabor feature extraction in step (2) comprises the following steps:
A. Normalize the color image to a gray-scale image with 256 gray levels;
B. Divide the area around the current key point into L sub-windows of size N × N, where L ≤ 50;
C. Apply the Gabor transform to each window. The original image is F(x, y), and the Gabor transform yields the new image Q(x, y), where

Q(x, y) = [(Gabor_R(x, y)*F(x, y))² + (Gabor_I(x, y)*F(x, y))²]^(1/2)   (1)

In formula (1), * denotes convolution, and

Gabor_R(x, y) = (1/(2πσ²)) · e^(−(x² + y²)/(2σ)) · cos(2π(x·cosθ + y·sinθ)/l)   (2)

Gabor_I(x, y) = (1/(2πσ²)) · e^(−(x² + y²)/(2σ)) · sin(2π(x·cosθ + y·sinθ)/l)   (3)

In formulas (2) and (3), σ = π.

After the transform, the mean gray level of each window is denoted by μ, the standard deviation of each window by δ, and the variation coefficient of each window by k (k = δ/μ). This finally yields the local Gabor features of the L windows: μ1, μ2, …, μi, …, μL; δ1, δ2, …, δi, …, δL; and k1, k2, …, ki, …, kL, 3L dimensions in total, where i is the window index.
4. The camera calibration error compensation method based on multi-dimensional features according to claim 3, characterized in that the global association feature extraction in step (2) comprises the following steps:
A. Regard the local Gabor features as three one-dimensional sequences: μ1, μ2, …, μi, …, μL; δ1, δ2, …, δi, …, δL; and k1, k2, …, ki, …, kL, where i is the window index;
B. Compute the association features with the autocorrelation function given in formula (4), where m is the correlation order, L is the number of Gabor sub-windows, and m < L:

Rμn = (1/(L − n)) · Σ(i = 1 to L − n) μi · μ(i+n),   n = 1, 2, …, m   (4)

From the sequence μ1, μ2, …, μi, …, μL, m features Rμ1, Rμ2, …, Rμm can then be computed; likewise Rδ1, Rδ2, …, Rδm and Rk1, Rk2, …, Rkm, for 3m global association features in total.
5. The camera calibration error compensation method based on multi-dimensional features according to claim 1, characterized in that the method for computing the actual errors (Δx, Δy)p×q of the p × q key points in step (3) comprises the following steps:
A. Take the laser-measured centers of the q key points on the standard target image as the ideal position coordinates (x, y)s, where s = 1, 2, …, q;
B. Obtain the actual positions (x, y)t×s of the q key points in the p images with a point recognition technique, where t = 1, 2, …, p and s = 1, 2, …, q;
C. Compute: (Δx, Δy)p×q = |(x, y)s − (x, y)t×s|.
6. The camera calibration error compensation method based on multi-dimensional features according to claim 1, characterized in that the training in step (4) uses the SVMLight tool as follows:
A. Enter the training command; the training command is: svm_learn -z r -t 2 -g 0.12 example_file model_file, where example_file is the input file of training data and model_file is the output model file. The parameter -z is r, selecting the regression model; the parameter -t is 2, selecting the Gaussian radial basis function (RBF) kernel; the parameter -g is 0.12, setting the RBF kernel parameter gamma to 0.12;
B. Generate the model files: two different model files must be generated, one for x and one for y.
7. The camera calibration error compensation method based on multi-dimensional features according to claim 1, characterized in that the error estimation in step (5) is as follows:
A. Enter the regression command svm_classify example_file model_file output_file, where example_file is the feature file to be regressed, model_file is the model file generated in step (4), and output_file is the output file;
B. Generate the output file: each line of the output file corresponds one-to-one with a line of the feature file to be regressed and gives the regressed value of that line.
CN201310140445.5A 2013-04-22 2013-04-22 A kind of camera calibration error compensating method based on multidimensional characteristic Expired - Fee Related CN103218820B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310140445.5A CN103218820B (en) 2013-04-22 2013-04-22 A kind of camera calibration error compensating method based on multidimensional characteristic

Publications (2)

Publication Number Publication Date
CN103218820A true CN103218820A (en) 2013-07-24
CN103218820B CN103218820B (en) 2016-02-10

Family

ID=48816563

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310140445.5A Expired - Fee Related CN103218820B (en) 2013-04-22 2013-04-22 A kind of camera calibration error compensating method based on multidimensional characteristic

Country Status (1)

Country Link
CN (1) CN103218820B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030076980A1 * 2001-10-04 2003-04-24 Siemens Corporate Research, Inc. Coded visual markers for tracking and camera calibration in mobile computing systems
CN101425181A (en) * 2008-12-15 2009-05-06 浙江大学 Panoramic view vision auxiliary parking system demarcating method
US20110254923A1 (en) * 2010-04-19 2011-10-20 Samsung Electronics Co., Ltd. Image processing apparatus, method and computer-readable medium
CN102507598A (en) * 2011-11-02 2012-06-20 苏州科技学院 High-speed unordered capsule defect detecting system
CN102750704A (en) * 2012-06-29 2012-10-24 吉林大学 Step-by-step video camera self-calibration method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
孙曦: "A stereo camera calibration algorithm based on rectification error" (一种基于校正误差的立体相机标定算法), Journal of Chinese Computer Systems (《小型微型计算机系统》), vol. 33, no. 4, 30 April 2012 (2012-04-30), pages 869-972 *
曾喆昭 et al.: "A sensor error compensation method based on an orthogonal basis neural network algorithm" (一种基于正交基神经网络算法的传感器误差补偿方法), Chinese Journal of Sensors and Actuators (《传感技术学报》), 31 March 2007 (2007-03-31), pages 536-539 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107703513A (en) * 2017-08-15 2018-02-16 株洲嘉成科技发展有限公司 A kind of novel non-contact contact net relative position detection method based on image procossing
CN107703513B (en) * 2017-08-15 2021-05-14 株洲嘉成科技发展有限公司 Non-contact net relative position detection method based on image processing
CN107797517A (en) * 2017-09-30 2018-03-13 湖南文理学院 The method and system detected using realizing of Robot Vision steel band punching processing
CN107797517B (en) * 2017-09-30 2020-09-11 湖南文理学院 Method and system for realizing steel belt punching processing detection by adopting machine vision
CN113167606A (en) * 2018-12-21 2021-07-23 欧姆龙株式会社 Method for correcting detection value of linear scale
CN113167606B (en) * 2018-12-21 2022-12-20 欧姆龙株式会社 Method for correcting detection value of linear scale
CN110136209A (en) * 2019-05-21 2019-08-16 Oppo广东移动通信有限公司 A kind of camera calibration method, device and computer readable storage medium
CN111886982A (en) * 2020-08-21 2020-11-06 农业农村部南京农业机械化研究所 Real-time detection system and detection method for dry land planting operation quality
CN111886982B (en) * 2020-08-21 2022-03-22 农业农村部南京农业机械化研究所 Detection method of dry land planting operation quality real-time detection system

Also Published As

Publication number Publication date
CN103218820B (en) 2016-02-10

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20160210
Termination date: 20210422