CN110969668A - Stereoscopic calibration algorithm of long-focus binocular camera - Google Patents
Stereoscopic calibration algorithm of long-focus binocular camera
- Publication number: CN110969668A
- Application number: CN201911152607.0A
- Authority: CN (China)
- Legal status: Granted
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85 — Stereo camera calibration
- G06T2207/30204 — Marker; G06T2207/30208 — Marker matrix
Abstract
The invention discloses a stereo calibration algorithm for a long-focus (telephoto) binocular camera, belonging to the fields of image processing and computer vision. Because the focal length of a telephoto lens is large, the calibration object must be placed relatively far from the camera, and its limited size then prevents it from covering the whole field of view. The method extracts and matches feature points in the region the calibration plate cannot cover, and calibrates the camera using both the matched feature points and the corner coordinates of the calibration plate. It overcomes the limitation that the calibration plate is too small to cover the full field angle, and the procedure is simple and easy to implement. By making full use of both the checkerboard corners of conventional calibration and the scene feature points of self-calibration, the method can correct the intrinsic and extrinsic parameters of a telephoto camera simultaneously, with simple operation and accurate results.
Description
Technical Field
The invention belongs to the fields of image processing and computer vision. It relates to a method that extracts and matches feature points in the region a calibration plate cannot cover and calibrates the camera using both the matched feature points and the calibration-plate corner coordinates, thereby solving the problem that the size of the calibration plate prevents feature points from covering the whole field angle.
Background
Stereo vision is an important topic in computer vision whose aim is to reconstruct the three-dimensional geometry of a scene. Binocular stereo vision, an important branch of stereo vision, uses a left and a right camera to simulate two eyes and computes a depth image from the difference between the two views; this difference is called disparity (parallax). It can be applied in many fields such as three-dimensional reconstruction and object detection. With its high efficiency, adequate accuracy, simple system structure and low cost, binocular stereo vision is widely used in robot vision, vehicle navigation, medical imaging and other fields.
Binocular stereo vision must match the image points of the same scene point in the left and right images, so the focal lengths and imaging centre points of the two lenses and the relative pose between the left and right lenses must be known. In addition, limitations of the lens manufacturing process distort the captured image; this is called camera distortion. To obtain these parameters and remove the distortion, the camera must be calibrated.
Calibration yields the parameters of the two lenses and their relative pose, but these parameters are not stable. With changes in temperature, humidity and so on, the intrinsic parameters of the lenses drift; an accidental knock may change the positional relationship between the two lenses. The intrinsic and extrinsic parameters therefore need to be corrected after the camera has been in use for some time, i.e. the camera must be self-calibrated.
When calibrating the internal and external parameters of the camera lens, the relationship between the image coordinate system and the three-dimensional world coordinate system needs to be calculated. Therefore, we need to know the physical dimensions of the calibration object. Further, it is also necessary to obtain coordinates of matching feature points on the left and right images covering the entire field angle (FOV).
A telephoto digital camera is a digital camera fitted with a telephoto (long-focus) lens. The focal length of a lens is usually expressed in millimetres, e.g. a 35 mm, 50 mm or 135 mm lens. Lenses are classified as wide-angle, standard, telephoto and so on according to focal length, which in effect classifies them by field angle. The thin-lens imaging law states that the sum of the reciprocals of the object distance and the image distance equals the reciprocal of the focal length, i.e.

1/u + 1/v = 1/f

where u and v are the object distance and image distance respectively, and f is the focal length. Zhejiang University proposed a long-focus camera calibration method based on a polynomial projection model (patent publication CN102622744A), which maps image points onto a sphere and optimizes them using the projection properties of straight lines on the sphere. Because the focal length of a telephoto lens is large, the calibration object must be placed far away, and its limited size makes it impossible to cover the entire FOV. To resolve this contradiction, the invention uses both the calibration plate and feature points in the scene to correct the intrinsic and extrinsic parameters of the camera. Since the intrinsic parameters of the lenses are known in advance, it is acceptable for the calibration plate not to cover the FOV: the image and world coordinates of the feature points on the calibration plate are used to correct the intrinsics, while the image coordinates of all feature points are used to correct the extrinsics.
Disclosure of Invention
The invention aims to overcome the limitation that the calibration plate is too small to cover the whole field angle. It provides a scheme that detects and matches feature points in the region the calibration plate does not cover and corrects the original calibration result using these feature points. The flow is shown in FIG. 1.
The technical scheme of the invention is as follows:
A stereo calibration algorithm for a telephoto binocular camera comprises the following steps:
1) Checkerboard corner detection: photograph the checkerboard calibration-plate image with the telephoto binocular camera, detect and match the checkerboard corners, and determine the region not covered by the checkerboard.
2) Scene feature point detection: photograph several groups of scene images with the telephoto binocular camera, then detect and extract feature points for further screening.
3) Feature point matching: match the feature points extracted from the left and right images using feature description values within a matching window, and delete duplicate matches.
4) Parabolic-fitting optimization of matches: refine the matching result with a one-dimensional quadratic (parabolic) fit.
5) Feature coverage check: divide the image into m × n grid cells; if the feature points cover every cell, proceed to the next step; otherwise continue shooting images and repeat steps 2) to 4).
6) Intrinsic correction: correct the intrinsic parameters using the image and world coordinates of the feature points on the calibration plate.
7) Extrinsic correction: correct the extrinsic calibration result using the matched checkerboard corner coordinates and scene feature point coordinates.
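The grid coverage test of step 5) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the default grid size m = n = 8 are our own choices.

```python
import numpy as np

def grid_coverage_ok(points, img_w, img_h, m=8, n=8):
    """Return True if the (x, y) feature points fall in every cell
    of an m x n grid laid over an img_w x img_h image (step 5)."""
    covered = np.zeros((m, n), dtype=bool)
    for x, y in points:
        col = min(int(x / img_w * n), n - 1)   # clamp points on the right edge
        row = min(int(y / img_h * m), m - 1)   # clamp points on the bottom edge
        covered[row, col] = True
    return bool(covered.all())
```

If the check fails, more scene images are captured and steps 2) to 4) are repeated, as the text describes.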
Feature point extraction in step 2) comprises the following steps:
2-1) Construct a single-scale difference-of-Gaussians (DoG) pyramid. A DoG pyramid is obtained from the difference of adjacent scale spaces and is commonly used in the scale-invariant feature transform (SIFT). The scale space of an image is defined as the convolution of a Gaussian kernel with the image, parameterized by the kernel's scale σ. Specifically, the scale space of the scene image I(x, y) is:
L(x,y,σ)=G(x,y,σ)*I(x,y)
where G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)) is the Gaussian kernel, σ is the scale factor, and * denotes convolution. The value of σ determines the degree of smoothing: a large scale corresponds to the coarse, overall appearance of the image (low resolution), while a small scale corresponds to its fine detail (high resolution). L(x, y, σ) is the scale space of image I(x, y), as shown in FIG. 2. Differencing scale spaces of adjacent scales yields one layer of the difference-of-Gaussians pyramid, multiplied by a normalization factor λ so that the maximum value of the DoG image is 255:
D(x,y,σ)=λ(L(x,y,kσ)-L(x,y,σ))
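The single-scale DoG above, including the normalization factor λ that scales the peak of the DoG image to 255, can be sketched as follows. This is a minimal illustration using `scipy`; the values σ = 1.6 and k = √2 are assumptions, not values given in the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_dog(image, sigma=1.6, k=np.sqrt(2.0)):
    """D(x, y, sigma) = lambda * (L(x, y, k*sigma) - L(x, y, sigma)),
    with lambda chosen so the peak magnitude of the DoG image is 255."""
    img = image.astype(np.float64)
    l_fine = gaussian_filter(img, sigma)        # L(x, y, sigma)
    l_coarse = gaussian_filter(img, k * sigma)  # L(x, y, k*sigma)
    dog = l_coarse - l_fine
    peak = np.abs(dog).max()
    if peak > 0:
        dog *= 255.0 / peak                     # normalization factor lambda
    return dog
```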
Unlike SIFT, we compute features at only one scale, for two reasons: first, computing multiple scale spaces is too expensive to be practical; second, the accuracy of SIFT features obtained from a multi-scale space is insufficient for the calibration requirements.
2-2) Compare each point of the resulting DoG with the pixels in its neighbourhood to decide whether it is a local extremum.
2-2-1) Denote the DoG obtained above by D. Apply a dilation to D and denote the result D1. Compare each pixel of D1 with the points in its 8-neighbourhood; if it is a local maximum, add it to the candidate point set P1.
2-2-2) Negate D, apply a dilation, and denote the result D2. Compare each pixel of D2 with the points in its 8-neighbourhood; if it is a local minimum, add it to the candidate point set P2.
2-2-3) Take the intersection P3 = P1 ∩ P2. The points of P3 whose DoG grey value exceeds 15 form the feature point set {P}.
2-3) Feature points selected only by the Gaussian response may include noise, so the Gaussian feature points must be denoised. A common filter can be used to remove noise points and edge points.
Feature point matching comprises the following steps:
3-1) Divide the image into m × n blocks. For each feature point p_l of the left image, find the block B_l that contains it; the corresponding search range in the right image is denoted B_r, as shown in FIG. 3. Choose a variable s that describes the degree of similarity between feature points and evaluate the similarity between p_l and every candidate point in B_r; if the maximum similarity exceeds a threshold t1, the maximizing point p_r is taken as a coarse match of p_l.
3-2) If the largest similarity value s_first and the second largest value s_second between p_l and B_r satisfy

F(s_first, s_second) ≥ t2

the match is retained, where t2 is a threshold and F(s_first, s_second) describes the relationship between s_first and s_second.
After screening by this rule, match p_r back to its corresponding feature point p_l' in the left image by the same procedure; if p_l' = p_l, the match (p_l, p_r) is retained.
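As an illustration only: the patent fixes neither the similarity measure s nor the function F(s_first, s_second), so the sketch below assumes normalized cross-correlation for s and a ratio test s_first ≥ t2 · s_second for F. Both choices are our own assumptions.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation in [-1, 1]: one common choice for
    the similarity measure s of step 3-1) (an assumption, not stated
    in the patent)."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def ratio_test(scores, t1=0.8, t2=1.2):
    """Keep a match only if the best similarity exceeds t1 and clearly
    dominates the runner-up: s_first >= t2 * s_second, one plausible
    form of F(s_first, s_second)."""
    s = sorted(scores, reverse=True)
    if len(s) < 2:
        return len(s) == 1 and s[0] > t1
    return s[0] > t1 and s[0] >= t2 * s[1]
```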
Parabolic-fitting optimization of the matching result comprises the following steps:
4-1) Taking the left-image feature point p_l as reference, refine the corresponding integer-pixel feature point p_r of the right image by parabolic fitting to obtain the sub-pixel feature point (x_r + δx, y_r + δy) of the right image, where δx is the sub-pixel offset in the x direction and δy is the sub-pixel offset in the y direction.
4-2) Taking the integer-pixel feature point p_r of the right image as reference, compute the sub-pixel feature point of the corresponding left image by the method of 4-1), where δx and δy are defined analogously.
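Fitting a one-dimensional quadratic (parabola) through three similarity samples at offsets −1, 0 and +1 pixels gives a closed-form sub-pixel offset of the peak. The patent does not spell out the fit; this is a minimal sketch of the standard formula, applied once per axis.

```python
def parabolic_offset(s_minus, s0, s_plus):
    """Sub-pixel offset (in [-0.5, 0.5] for a well-behaved peak) of the
    vertex of the parabola through (-1, s_minus), (0, s0), (1, s_plus)."""
    denom = s_minus - 2.0 * s0 + s_plus
    if denom == 0:
        return 0.0          # flat samples: no refinement possible
    return 0.5 * (s_minus - s_plus) / denom
```

For an exact parabola with vertex at x = d, the formula returns d, so applying it to the similarity scores around the integer-pixel match yields δx (and, with vertical neighbours, δy).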
Correction of the intrinsic calibration result comprises the following steps:
6-1) Obtain the original calibration result: the parameters of the two lenses are read from the telephoto binocular camera itself, including the focal lengths fx and fy (in pixels) in the x and y directions and the lens principal point position (u0, v0).
6-2) Update the intrinsic parameters.
For each checkerboard corner image Pi, the following steps are carried out:
(1) Obtain the rotation matrix R and translation vector T between the image coordinates and the calibration-plate coordinates for the checkerboard corner image Pi. Following the camera model above, let a point in three-dimensional world coordinates be M = [X, Y, Z, 1]^T and the corresponding two-dimensional pixel coordinate on the camera plane be m = [u, v, 1]^T. The homography from the calibration checkerboard plane to the image plane is then

s·m = K [R, T] M

where s is a scale factor, K is the camera intrinsic matrix, and R and T are the rotation matrix and translation vector between the image coordinates and the calibration-plate coordinates. Taking the checkerboard plane as Z = 0 gives

s·m = K [r1, r2, t] [X, Y, 1]^T

where r1 and r2 are the first two columns of the rotation matrix.
We denote λ·K [r1, r2, t] as the homography matrix H, i.e.

H = [h1, h2, h3] = λ·K [r1, r2, t]
Compute the homography matrix H between the checkerboard coordinate system and the image coordinate system. Once H is known, the rotation matrix R and translation vector T between the image coordinates and the calibration-plate coordinates follow from:

r1 = λ·K⁻¹·h1
r2 = λ·K⁻¹·h2
r3 = r1 × r2
t = λ·K⁻¹·h3

where λ = 1 / ||K⁻¹·h1|| is the normalization coefficient.
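The recovery of R and t from the homography H and intrinsic matrix K described above can be sketched as follows. This is our own minimal illustration; the SVD re-orthogonalization at the end is a common practical addition for noisy homographies, not a step stated in the patent.

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover (R, t) from a plane-to-image homography
    H = lambda * K [r1 r2 t] with the checkerboard plane at Z = 0."""
    Kinv = np.linalg.inv(K)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    lam = 1.0 / np.linalg.norm(Kinv @ h1)   # normalization coefficient
    r1 = lam * (Kinv @ h1)
    r2 = lam * (Kinv @ h2)
    r3 = np.cross(r1, r2)                   # r3 = r1 x r2
    t = lam * (Kinv @ h3)
    R = np.column_stack([r1, r2, r3])
    # project R back onto the rotation group, since a noisy H
    # gives only an approximately orthogonal matrix
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```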
(2) Suppose n' sets of images containing the checkerboard have been collected, with m' checkerboard corners in each image. Let the projection of the corner M_ij of the i-th image under the camera matrix computed above be

m̂(K + ΔK, R_i + ΔR_i, t_i + Δt_i, M_ij)

where R_i and t_i are the rotation matrix and translation vector of the i-th image, ΔR_i and Δt_i are the corrections to R_i and t_i, and K is the intrinsic matrix with correction ΔK. Assuming the detected corner m_ij is perturbed by independent Gaussian noise, its probability density function is a Gaussian centred on this projection. Constructing the likelihood function over all corners, maximizing L is equivalent to minimizing

Σ_{i=1..n'} Σ_{j=1..m'} || m_ij − m̂(K + ΔK, R_i + ΔR_i, t_i + Δt_i, M_ij) ||²

which is solved iteratively with the Levenberg–Marquardt algorithm for multi-parameter nonlinear optimization.
Correction of the extrinsic calibration result comprises the following steps:
7-1) Compute the coordinates of the matched left and right feature points in the normal coordinate system. The pixel coordinate system takes the top-left corner of the image as its origin, with x and y axes parallel to those of the image coordinate system; its unit is the pixel, the basic indivisible unit of image display. The normal coordinate system takes the camera's optical centre as the origin of the image coordinate system, with the distance from the optical centre to the image plane scaled to 1. Pixel coordinates and normal coordinates are related by

u = K X

where u = [u, v, 1]^T is the pixel coordinate; K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] is the camera intrinsic matrix, with fx and fy the focal lengths (in pixels) in the x and y directions of the image and (cx, cy) the principal point; and X = [x, y, 1]^T is the coordinate in the normal coordinate system. Given the pixel coordinates of the image and the camera intrinsics, the corresponding normal coordinates of a pixel follow from

X = K⁻¹ u

This is applied to the matched left and right feature points with the respective intrinsic matrices K_l and K_r of the left and right cameras.
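The mapping X = K⁻¹u from pixel to normal coordinates can be sketched as follows; this is a minimal illustration, and the function name and batch interface are our own.

```python
import numpy as np

def to_normal_coords(pixel_pts, K):
    """Map (u, v) pixel coordinates to the normal (normalized image)
    coordinate system via X = K^{-1} u."""
    Kinv = np.linalg.inv(K)
    pts = np.asarray(pixel_pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    u = np.hstack([pts, ones])      # homogeneous pixel coordinates
    X = (Kinv @ u.T).T
    return X[:, :2]                 # (x, y) components; z is 1
```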
7-2) removing image distortion: the normal coordinates of the left and right image feature points after distortion removal are calculated from the normal coordinates of the left and right image feature points and the distortion coefficients of the left and right cameras.
Due to limitations of the lens production process, real lenses exhibit some distortion, causing nonlinear deformation that can be roughly divided into radial distortion and tangential distortion.
Radial distortion is the displacement of image pixels along the radial direction about the distortion centre, which deforms the image. It is approximately modelled as:
xd = x·(1 + k1·r² + k2·r⁴ + k3·r⁶)
yd = y·(1 + k1·r² + k2·r⁴ + k3·r⁶)

where r² = x² + y², and k1, k2, k3 are the radial distortion parameters.
Tangential distortion arises because manufacturing imperfections leave the lens not perfectly parallel to the image plane; it is quantitatively described as:

xd = x + (2·p1·x·y + p2·(r² + 2·x²))
yd = y + (p1·(r² + 2·y²) + 2·p2·x·y)

where p1 and p2 are the tangential distortion coefficients.
In summary, the coordinates before and after distortion are related as:

xd = x·(1 + k1·r² + k2·r⁴ + k3·r⁶) + (2·p1·x·y + p2·(r² + 2·x²))
yd = y·(1 + k1·r² + k2·r⁴ + k3·r⁶) + (p1·(r² + 2·y²) + 2·p2·x·y)

where (x, y) are the ideal (undistorted) normal coordinates and (xd, yd) are the actual normal coordinates with distortion. Taking (xd, yd) as the initial value of (x, y), the true (x, y) is obtained by iterating the relation several times (for example, 20 times).
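The fixed-point iteration just described, starting from (x, y) = (xd, yd), can be sketched as a minimal illustration of the stated scheme:

```python
def undistort_normal(xd, yd, k1, k2, k3, p1, p2, iters=20):
    """Invert the radial + tangential distortion model by fixed-point
    iteration, using the distorted point as the initial estimate."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
        # tangential contributions at the current estimate
        dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        x = (xd - dx) / radial
        y = (yd - dy) / radial
    return x, y
```

For the small distortions typical of calibrated lenses the iteration converges well within 20 steps.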
7-3) Rotate the left and right images according to the original rotation between the two cameras. The rotation matrix R and translation vector t between the original left and right cameras are known, so that

Xr = R·Xl + t

where Xl and Xr are the normal coordinates of the left and right cameras respectively. The left image is rotated by half the angle of R in the positive direction, and the right image by half the angle in the negative direction.
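Splitting R into two half-rotations can be done through the axis-angle (rotation-vector) representation; the following sketch uses `scipy` and is our own illustration, not the patent's code.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def half_rotations(R):
    """Split the left-to-right rotation R into two half rotations:
    the left image is rotated by R^(1/2), the right by R^(-1/2)."""
    rotvec = Rotation.from_matrix(R).as_rotvec()   # axis * angle
    R_half = Rotation.from_rotvec(0.5 * rotvec).as_matrix()
    return R_half, R_half.T                        # R^(1/2) and its inverse
```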
This rotation is applied to the normal coordinates of every undistorted left-right feature point pair obtained in the previous step.
7-4) Map the undistorted, rotated points back to the pixel coordinate system using u = K·X: from the updated normal coordinates of the left and right feature points, compute the distortion-corrected, rectified image coordinates.
7-5) Solve for the fundamental matrix F and the essential matrix E from the corrected feature point pairs of the left and right images and the intrinsic matrices of the left and right cameras. Corresponding left and right pixel points u_l, u_r are related to the fundamental matrix F by

u_r^T F u_l = 0

The point pairs are further screened with random sample consensus (RANSAC), and the coordinates of the surviving correspondences are substituted into the formula above to build a homogeneous linear system that is solved for F. The fundamental and essential matrices are related by

E = K_r^T F K_l

where K_l and K_r are the intrinsic matrices of the left and right cameras.
7-6) Decompose the corrected rotation and translation between the left and right cameras from the essential matrix. The essential matrix E is related to the rotation R and translation t by

E = [t]× R

where [t]× is the cross-product (skew-symmetric) matrix of t.
Performing a singular value decomposition on E gives

E = U Σ V^T

Define the two matrices

W = [[0, −1, 0], [1, 0, 0], [0, 0, 1]]
Z = [[0, 1, 0], [−1, 0, 0], [0, 0, 0]]

so that E can be written in the following two forms:
(1) E = (U Z U^T)(U W V^T), giving [t]× = U Z U^T and R = U W V^T;
(2) E = (−U Z U^T)(U W^T V^T), giving [t]× = −U Z U^T and R = U W^T V^T.
Four pairs of R and t are thus obtained, and the physically meaningful solution (the one placing the reconstructed points in front of both cameras) is selected.
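The four candidate (R, t) pairs from the SVD of E can be sketched as follows. This is a minimal illustration following the standard Hartley-Zisserman construction; here t is read off as the last column of U rather than assembled from [t]× = UZUᵀ, which gives the same direction up to scale and sign.

```python
import numpy as np

# rotation by 90 degrees about z, used in the two factorizations of E
W = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

def decompose_essential(E):
    """Return the four candidate (R, t) pairs of E = [t]x R."""
    U, _, Vt = np.linalg.svd(E)
    # force proper rotations; flipping a factor's sign only flips E's
    # overall sign, which the +/- t candidates absorb
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]                 # translation direction, up to scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

In practice the candidate that triangulates the matched points with positive depth in both cameras is kept.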
7-7) Compose the decomposed rotation and translation into the original extrinsics.
Denote the rotation matrix before rectification by R and the translation vector by t = (tx, ty, tz)^T, and denote the rotation matrix and translation vector obtained from the decomposition in the previous step by R' and t' = (t'x, t'y, t'z)^T. The new rotation Rnew is then

Rnew = R^(1/2) R' R^(1/2)
The beneficial effects of the invention are as follows: it overcomes the limitation that the calibration plate is too small to cover the whole field angle by adding steps that detect and match feature points in the region the plate does not cover, and it corrects the original calibration result from these feature points. The method makes full use of both the checkerboard corners of conventional calibration and the scene feature points of self-calibration, can correct the intrinsic and extrinsic parameters of a telephoto camera simultaneously, and is simple to operate with accurate results.
Drawings
FIG. 1 is a schematic flow chart.
Fig. 2 shows a gaussian differential pyramid (DoG).
Fig. 3 is a schematic diagram of block matching. Wherein, (a) is a left figure, and (b) is a right figure.
Detailed Description
The invention provides a method for extracting calibration points covering the full field angle, described in detail below in conjunction with the drawings and embodiments:
1) Checkerboard corner detection: photograph the checkerboard calibration-plate image, detect and match the checkerboard corners, and determine the region not covered by the checkerboard.
2) Scene feature point detection: photograph several groups of outdoor scene images, then detect and match feature points for further screening.
2-1) Construct a single-scale difference-of-Gaussians (DoG) pyramid. A DoG pyramid is obtained from the difference of adjacent scale spaces and is commonly used in the scale-invariant feature transform (SIFT). The scale space of an image is defined as the convolution of a Gaussian kernel with the image, parameterized by the kernel's scale σ. Specifically, the scale space of the image I(x, y) is:
L(x,y,σ)=G(x,y,σ)*I(x,y)
where G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)) is the Gaussian kernel, σ is the scale factor, and * denotes convolution. The value of σ determines the degree of smoothing: a large scale corresponds to the coarse, overall appearance of the image (low resolution), while a small scale corresponds to its fine detail (high resolution). L(x, y, σ) is the scale space of the image I(x, y). Differencing scale spaces of adjacent scales yields one layer of the difference-of-Gaussians pyramid, multiplied by a normalization factor λ so that the maximum value of the DoG image is 255:
D(x,y,σ)=λ(L(x,y,kσ)-L(x,y,σ))
To increase the computation speed, we compute the DoG at just one scale, as shown in FIG. 2.
2-2) Compare each point of the resulting DoG with the 8 pixels in its 3 × 3 neighbourhood to decide whether it is a local extremum.
2-2-1) Denote the DoG obtained above by D. Apply a dilation to D and denote the result D1. Compare each pixel of D1 with the points in its 8-neighbourhood; if it is a local maximum, add it to the candidate point set P1.
2-2-2) Negate D, apply a dilation, and denote the result D2. Compare each pixel of D2 with the points in its 8-neighbourhood; if it is a local minimum, add it to the candidate point set P2.
2-2-3) Take the intersection P3 = P1 ∩ P2. The points of P3 whose DoG grey value exceeds 15 form the feature point set {P}.
2-3) Feature points selected only by the Gaussian response may include noise, so the Gaussian feature points must be denoised. A common filter can be used to remove noise points and edge points.
3) Feature point matching: match the detected feature points of the left and right images using feature description values within the matching window, and delete duplicate matches.
3-1) Divide the image into m × n blocks. For each feature point p_l of the left image, find the block B_l that contains it; the corresponding search range in the right image is denoted B_r, as shown in FIG. 3. Choose a variable s that describes the degree of similarity between feature points and evaluate the similarity between p_l and every candidate point in B_r; if the maximum similarity exceeds a threshold t1, the maximizing point p_r is taken as a coarse match of p_l.
3-2) If the largest similarity value s_first and the second largest value s_second between p_l and B_r satisfy

F(s_first, s_second) ≥ t2

the match is retained, where t2 is a threshold and F(s_first, s_second) describes the relationship between s_first and s_second.
After screening by this rule, match p_r back to its corresponding feature point p_l' in the left image by the same procedure; if p_l' = p_l, the match (p_l, p_r) is retained.
4) Parabolic-fitting optimization of matches: refine the matching result with a one-dimensional quadratic (parabolic) fit, as follows:
4-1) Taking the left-image feature point p_l as reference, refine the corresponding integer-pixel feature point p_r of the right image by parabolic fitting to obtain the sub-pixel feature point of the right image.
4-2) Taking the integer-pixel feature point p_r of the right image as reference, compute the sub-pixel feature point of the corresponding left image by the method of 4-1).
5) Feature coverage check: if the corner points extracted in the preceding steps completely cover the region that the calibration plate cannot cover, camera calibration can proceed from the checkerboard corner coordinates and the scene feature point coordinates.
6) And (3) correcting an internal reference calibration result: the image coordinates and world coordinates of the feature points on the calibration plate are used to correct the references.
6-1) Obtain the original calibration result: the parameters of the two lenses are read from the camera itself, including the focal lengths fx and fy (in pixels) in the x and y directions and the lens principal point position (u0, v0).
6-2) Update the intrinsic parameters.
For each checkerboard corner image Pi, the following steps are carried out:
(1) Obtain the rotation matrix R and translation vector T between the image coordinates and the calibration-plate coordinates for the checkerboard corner image Pi. Following the camera model above, let a point in three-dimensional world coordinates be M = [X, Y, Z, 1]^T and the corresponding two-dimensional pixel coordinate on the camera plane be m = [u, v, 1]^T. The homography from the calibration checkerboard plane to the image plane is then

s·m = K [R, T] M

where s is a scale factor, K is the camera intrinsic matrix, and R and T are the rotation matrix and translation vector between the image coordinates and the calibration-plate coordinates. Taking the checkerboard plane as Z = 0 gives

s·m = K [r1, r2, t] [X, Y, 1]^T

where r1 and r2 are the first two columns of the rotation matrix.
We denote λ·K [r1, r2, t] as the homography matrix H, i.e.

H = [h1, h2, h3] = λ·K [r1, r2, t]
A homography matrix H between the checkerboard coordinate system and the image coordinate system is calculated. After the calculation of the homography matrix is completed, the rotation matrix R and translation vector T between the image coordinates and the calibration plate coordinates are calculated by:
r1 = λK^-1 h1
r2 = λK^-1 h2
r3 = r1 × r2
t = λK^-1 h3
where λ = 1/||K^-1 h1|| is a normalization coefficient.
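The column-wise recovery above can be sketched in numpy as follows (illustrative only; the final SVD re-orthogonalization is an addition not stated in the text, used because noise makes the recovered R only approximately a rotation):

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover rotation R and translation t from a plane-to-image homography
    H = lambda * K [r1 r2 t], following the column formulas above."""
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])   # normalization coefficient
    r1 = lam * A[:, 0]
    r2 = lam * A[:, 1]
    r3 = np.cross(r1, r2)                 # r3 = r1 x r2
    t = lam * A[:, 2]
    R = np.column_stack([r1, r2, r3])
    # Re-orthogonalize R via SVD (nearest rotation matrix).
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```

On a noise-free synthetic homography this returns the exact pose up to floating-point precision.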
(2) Assume that n′ sets of images containing the checkerboard have been collected, with m′ checkerboard corners in each image. Let m̂(K + ΔK, R_i + ΔR_i, t_i + Δt_i, M_ij) denote the projection point of the corner M_ij of the i-th image on the image, under the camera matrix obtained by the calculation above.
where R_i and t_i are the rotation matrix and translation vector of the i-th image, and ΔR_i and Δt_i are the variations of R_i and t_i. K is the internal parameter matrix, with variation ΔK. The probability density function of the corner m_ij is then:
constructing a likelihood function:
L is maximized, i.e. the following reprojection error is minimized:
Σ_i Σ_j ||m_ij − m̂(K + ΔK, R_i + ΔR_i, t_i + Δt_i, M_ij)||²
The Levenberg-Marquardt algorithm for multi-parameter nonlinear system optimization problems is used to iterate to the optimal solution.
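A generic damped Gauss-Newton (Levenberg-Marquardt) loop of the kind referred to here, shown as an illustrative sketch on an arbitrary residual function with a numeric Jacobian rather than the full camera model:

```python
import numpy as np

def levenberg_marquardt(residual, x0, iters=60, lam=1e-3):
    """Minimize sum(residual(x)**2) with a damped Gauss-Newton (LM) loop."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        # Forward-difference Jacobian of the residual vector.
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += 1e-7
            J[:, j] = (residual(xp) - r) / 1e-7
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.5   # accept the step, relax damping
        else:
            lam *= 10.0                    # reject the step, damp harder
    return x
```

In the patent's setting the residual would stack all reprojection errors m_ij − m̂(·) and x would collect the intrinsic and per-image pose parameters.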
7) Correcting the external reference calibration result: the image coordinates of all the feature points are used to correct the external parameters.
7-1) Calculate the coordinates of the matched left and right feature points in the corresponding normal coordinate system. The pixel coordinate system takes the upper-left corner of the picture as its origin, with its x and y axes parallel to the x and y axes of the image coordinate system, respectively; its unit is the pixel, the basic and indivisible unit of image display. The normal (normalized) coordinate system takes the optical center of the camera as the origin of the image coordinate system, with the distance from the optical center to the image plane scaled to 1. The relationship between pixel coordinates and normal coordinates is as follows:
u=KX
where u = [u, v, 1]^T represents the pixel coordinates of an image point; K = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]] represents the internal reference matrix of the camera, with f_x and f_y the focal lengths (in pixels) in the x and y directions of the image and (c_x, c_y) the position of the camera principal point; X = [x, y, 1]^T is the coordinate in the normal coordinate system. Knowing the pixel coordinates of the image and the internal parameters of the camera, the corresponding normal coordinates of the pixel point can be calculated, i.e.
X = K^-1 u
where u_l and u_r are the pixel coordinates of the matched left and right feature points, X_l = K_l^-1 u_l and X_r = K_r^-1 u_r are the corresponding normal coordinates, and K_l and K_r are the internal reference matrices of the left and right cameras, respectively.
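A minimal sketch of the pixel-to-normal conversion u = KX and its inverse X = K^-1 u (the intrinsic values below are illustrative, not from the patent):

```python
import numpy as np

# Example intrinsic matrix; the numbers are made-up placeholders.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1180.0, 360.0],
              [0.0, 0.0, 1.0]])

def to_normal(u, v, K):
    """Map a pixel (u, v) to normal (normalized) coordinates via X = K^-1 u."""
    x = (u - K[0, 2]) / K[0, 0]
    y = (v - K[1, 2]) / K[1, 1]
    return x, y

def to_pixel(x, y, K):
    """Inverse map, u = K X."""
    return K[0, 0] * x + K[0, 2], K[1, 1] * y + K[1, 2]
```

The closed form exploits the upper-triangular structure of K, avoiding an explicit matrix inverse.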
7-2) removing image distortion: the normal coordinates of the left and right image feature points after distortion removal are calculated from the normal coordinates of the left and right image feature points and the distortion coefficients of the left and right cameras.
Due to limitations of the lens production process, a real lens exhibits some distortion, causing nonlinear deformation that can be roughly divided into radial distortion and tangential distortion.
Radial distortion is the positional deviation of image pixels along the radial direction, with the distortion center as the central point, which deforms the image. The radial distortion is roughly expressed as follows:
x_d = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
y_d = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
where r^2 = x^2 + y^2, and k_1, k_2, k_3 are the radial distortion parameters.
Tangential distortion is due to imperfections in camera fabrication that leave the lens itself not parallel to the image plane, and can be quantitatively described as:
x_d = x + (2 p_1 x y + p_2 (r^2 + 2 x^2))
y_d = y + (p_1 (r^2 + 2 y^2) + 2 p_2 x y)
where p_1, p_2 are the tangential distortion coefficients.
In summary, the coordinate relationship before and after distortion is as follows:
x_d = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + (2 p_1 x y + p_2 (r^2 + 2 x^2))
y_d = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + (p_1 (r^2 + 2 y^2) + 2 p_2 x y)
where (x, y) are the ideal (undistorted) normal coordinates and (x_d, y_d) are the actual distorted normal coordinates. We take (x_d, y_d) as the initial value of (x, y) and obtain the actual (x, y) by iterating several times (for example, 20 times).
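The fixed-point iteration described above can be sketched as follows (a minimal version assuming the distortion model just given; the coefficient values used in any test are illustrative):

```python
import numpy as np

def distort(x, y, k1, k2, k3, p1, p2):
    """Apply the radial + tangential model to ideal normal coordinates."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

def undistort(xd, yd, k1, k2, k3, p1, p2, iters=20):
    """Invert the model by fixed-point iteration, starting from (xd, yd)."""
    x, y = xd, yd                      # initial guess: the distorted point
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        x = (xd - dx) / radial         # solve xd = x*radial + dx for x
        y = (yd - dy) / radial
    return x, y
```

For realistic (small) distortion coefficients the iteration converges to the ideal coordinates to machine precision well within 20 iterations.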
7-3) rotating the left and right images according to the original rotation relationship of the two cameras: the rotation matrix R and translation vector t between the original left and right cameras are known, so that
Xr=RXl+t
where X_l represents the normal coordinates of the left camera and X_r the normal coordinates of the right camera. The left picture is rotated by half the angle of R in the positive direction, and the right picture by half the angle of R in the negative direction.
This rotation is applied to the normal coordinates of each pair of distortion-free left and right feature points obtained in the previous step.
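Rotating each view by half of R, as described in step 7-3), amounts to applying R^(1/2) and R^(-1/2); a numpy-only sketch via the axis-angle (Rodrigues) representation, assuming the rotation angle is below π:

```python
import numpy as np

def rotvec_from_matrix(R):
    """Axis-angle vector of a rotation matrix (assumes angle < pi)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * axis / (2.0 * np.sin(theta))

def matrix_from_rotvec(v):
    """Rodrigues' formula: rotation matrix from an axis-angle vector."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3)
    k = v / theta
    Kx = np.array([[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * Kx + (1.0 - np.cos(theta)) * (Kx @ Kx)

def half_rotations(R):
    """Half rotation in the positive and negative direction of R."""
    v = rotvec_from_matrix(R)
    return matrix_from_rotvec(0.5 * v), matrix_from_rotvec(-0.5 * v)
```

Composing the positive half rotation with itself recovers R, which is exactly the property the rectification step relies on.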
7-4) Restore the distortion-free, rotated points to the pixel coordinate system according to the formula u = KX. From the updated normal coordinates of the left and right feature points of the previous step, calculate the distortion-corrected image coordinates.
7-5) Solve the basic matrix F and the essential matrix E from the distortion-corrected feature point pair coordinates of the left and right images and the internal reference matrices of the left and right cameras. The left and right corresponding pixel point pairs u_l, u_r are related to the basic matrix F by
u_r^T F u_l = 0
The point pairs are further screened with random sample consensus (RANSAC), and the coordinates of the remaining corresponding points are substituted into the formula above to construct a homogeneous linear equation set from which F is solved.
The relationship between the basic matrix and the essential matrix is:
E = K_r^T F K_l
where K_l, K_r are the internal reference matrices of the left and right cameras, respectively.
7-6) decomposing the left and right camera rotation and translation relationships after correction from the essence matrix: the relationship of the essential matrix E to the rotation R and translation t is as follows:
E=[t]×R
where [t]_× represents the cross-product (skew-symmetric) matrix of t.
Performing singular value decomposition on E gives
E = U Σ V^T
Define the two matrices
W = [[0, -1, 0], [1, 0, 0], [0, 0, 1]], Z = [[0, 1, 0], [-1, 0, 0], [0, 0, 0]]
so that E can be written in the following two forms:
(1) E = UZU^T UWV^T, with [t]_× = UZU^T and R = UWV^T;
(2) E = -UZU^T UW^T V^T, with [t]_× = -UZU^T and R = UW^T V^T.
Four pairs of R and t are obtained, and the solution that is physically meaningful in three dimensions (reconstructed points lying in front of both cameras) is selected.
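The two factorizations yield four (R, t) candidates; a numpy sketch of the enumeration (the cheirality selection, i.e. checking that triangulated points lie in front of both cameras, is left out):

```python
import numpy as np

def decompose_essential(E):
    """Return the four (R, t) candidates from an essential matrix."""
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (det = +1) before building the candidates.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]                        # translation up to scale and sign
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

For an essential matrix built from a known pose, the true rotation and the unit translation (up to sign) appear among the four candidates.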
7-7) The decomposed rotation and translation relationship is superposed onto the original external parameters.
The rotation matrix before distortion removal is recorded as R, and the translation vector as t = (t_x, t_y, t_z)^T; the rotation matrix decomposed in step 7-6) is R′, and the translation vector is t′ = (t′_x, t′_y, t′_z)^T. The new R_new and t_new are then
R_new = R^(1/2) R′ R^(1/2)
Claims (10)
1. The stereo calibration algorithm of the tele binocular camera is characterized by comprising the following steps of:
1) checkerboard corner detection: shooting an image of the checkerboard calibration board using a long-focus binocular camera, detecting and matching the checkerboard corners, and acquiring the area not covered by the checkerboard;
2) scene feature point detection: shooting a scene image by using a long-focus binocular camera, and detecting and extracting feature points;
3) matching the feature points: matching the feature points extracted from the left and right images according to the feature description values in the matching window, and deleting repeated matches;
4) optimizing the matching result by parabolic fitting: optimizing the matching result with a one-dimensional quadratic parabola fit;
5) judging the coverage area of the feature points: dividing the image into m × n grids, and when the feature points cover all the grids, performing the next step, otherwise, continuously shooting the scene image, and repeating the steps 2) to 4);
6) correcting the internal reference calibration result: correcting the internal reference by using the image coordinates and world coordinates of the feature points on the calibration plate;
7) correcting the external reference calibration result: correcting the calibration result according to the matched checkerboard corner coordinates and scene feature point coordinates.
2. The stereo calibration algorithm of the tele binocular camera according to claim 1, wherein the feature point extraction in the step 2) specifically comprises the following steps:
2-1) constructing a single-scale difference Gaussian pyramid (DoG), wherein the scale space of a scene image I (x, y) is as follows:
L(x,y,σ)=G(x,y,σ)*I(x,y)
where G(x, y, σ) = (1/(2πσ^2)) exp(-(x^2 + y^2)/(2σ^2)) is a Gaussian kernel, σ is a scale factor, and * represents the convolution operation; L(x, y, σ) is the scale space of the image I(x, y); the scale spaces of different scales are differenced to obtain one layer of the difference-of-Gaussian pyramid, multiplied by a normalized scale factor λ so that the maximum value of the DoG image is 255:
D(x,y,σ)=λ(L(x,y,kσ)-L(x,y,σ));
2-2) comparing each point in the obtained DoG with the pixel points in its neighborhood to judge whether it is a local extreme point;
2-2-1) recording the obtained DoG as D and the result of a dilation operation on D as D_1; comparing each pixel point of D_1 with the points in its 8-neighborhood, and if the pixel point is a local maximum, adding it to the candidate point set P_1;
2-2-2) inverting D and then performing the dilation operation, recording the result as D_2; comparing each pixel point of D_2 with the points in its 8-neighborhood, and if the pixel point is a local minimum, adding it to the candidate point set P_2;
2-2-3) taking the intersection P_3 = P_1 ∩ P_2 of P_1 and P_2, and taking the points of P_3 whose DoG gray value is larger than 15 as the feature point set {P};
2-3) denoising the Gaussian characteristic points by using a filter, and filtering noise points and edge points.
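A compact numpy sketch of the extremum screening in step 2-2). Note that this illustrative version combines strict maxima and minima of D with a union and thresholds |D| (the claim's own screening, which intersects the candidate sets P_1 and P_2, differs in detail):

```python
import numpy as np

def local_max_mask(D):
    """Boolean mask of pixels that are strict maxima of their 8-neighborhood,
    implemented by comparing against a max over shifted copies (a dilation)."""
    P = np.pad(D, 1, mode="constant", constant_values=-np.inf)
    h, w = D.shape
    shifts = [P[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    return D > np.max(np.stack(shifts), axis=0)

def dog_candidates(D, threshold=15):
    """Local extrema of a DoG image D whose magnitude exceeds the threshold."""
    maxima = local_max_mask(D)     # analogous to the set P_1
    minima = local_max_mask(-D)    # analogous to the set P_2
    return (maxima | minima) & (np.abs(D) > threshold)
```

Padding with -inf makes border pixels comparable without special cases.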
3. The stereo calibration algorithm of the tele binocular camera according to claim 1 or 2, wherein the feature point matching in step 3) specifically comprises the following steps:
3-1) dividing the image into m × n blocks; for each feature point of the left image, finding the block containing it in the left image and recording the corresponding search range in the right image; when the similarity, measured by the evaluation variable, between the left feature point and a point in the search range is larger than the threshold t_1, that point is regarded as a coarse matching point;
3-2) when the maximum similarity value s_first and the second largest value s_second within the search range satisfy
F(s_first, s_second) ≥ t_2
the match is retained, where t_2 is a threshold and F(s_first, s_second) describes the relationship between s_first and s_second;
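The claim leaves F(s_first, s_second) unspecified; one common instantiation (an assumption here, not fixed by the patent) is a ratio test in the spirit of Lowe's SIFT matching, keeping a match only when the best similarity clearly beats the runner-up:

```python
def keep_match(s_first, s_second, t2=1.25):
    """Retain the match when s_first / s_second >= t2 (larger = more similar).
    The threshold value t2 is an illustrative choice."""
    if s_second <= 0.0:
        return True            # runner-up has no support; keep the best match
    return s_first / s_second >= t2
```

A clearly dominant best match passes; two near-equal candidates are rejected as ambiguous.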
4. The stereo calibration algorithm of the tele binocular camera according to claim 1 or 2, wherein the parabolic fitting in the step 4) optimizes the matching result, specifically comprising the steps of:
4-1) with a feature point of the left image as reference, optimizing the corresponding integer-pixel feature point of the right image by parabolic fitting to obtain the corresponding sub-pixel feature point of the right image, where the fit yields the sub-pixel offsets in the x and y directions;
4-2) with the corresponding integer-pixel feature point of the right image as reference, calculating the corresponding sub-pixel feature point of the left image according to the method of 4-1), again obtaining the sub-pixel offsets in the x and y directions;
5. The stereo calibration algorithm of the tele binocular camera according to claim 3, wherein the parabolic fitting in the step 4) optimizes the matching result, specifically comprising the steps of:
4-1) with a feature point of the left image as reference, optimizing the corresponding integer-pixel feature point of the right image by parabolic fitting to obtain the corresponding sub-pixel feature point of the right image, where the fit yields the sub-pixel offsets in the x and y directions;
4-2) with the corresponding integer-pixel feature point of the right image as reference, calculating the corresponding sub-pixel feature point of the left image according to the method of 4-1), again obtaining the sub-pixel offsets in the x and y directions;
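The sub-pixel offsets of claims 4 and 5 can be computed per axis by fitting a one-dimensional parabola through the similarity scores at integer offsets -1, 0, +1 and taking its vertex. The three-point vertex formula below is a minimal sketch (the claims do not spell out the exact fit):

```python
def parabola_offset(s_minus, s_zero, s_plus):
    """Vertex of the parabola through (-1, s_minus), (0, s_zero), (1, s_plus).
    Returns the sub-pixel offset in [-1, 1]; 0.0 if the fit is degenerate."""
    denom = s_minus - 2.0 * s_zero + s_plus
    if abs(denom) < 1e-12:
        return 0.0
    return 0.5 * (s_minus - s_plus) / denom
```

Applied once along x and once along y, this converts an integer-pixel match into a sub-pixel one.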
6. The stereoscopic calibration algorithm for the tele binocular camera according to claim 1, 2 or 5, wherein the step 6) of correcting the intra-reference calibration result specifically comprises the following steps:
6-1) obtaining the original calibration result: the parameters of the two lenses are acquired from the tele binocular camera hardware itself, including the focal lengths f_x and f_y in the x and y directions (unit: pixels) and the lens principal point position u_0 and v_0;
6-2) for each checkerboard corner point image Pi, carrying out the following steps:
(1) acquiring the rotation matrix R and translation vector T between the image coordinates and the calibration plate coordinates under the checkerboard corner image Pi; according to the camera model, a point in three-dimensional world coordinates is set as M = [X, Y, Z, 1]^T and the two-dimensional camera-plane pixel coordinate as m = [u, v, 1]^T; the homography relationship from the checkerboard plane used for calibration to the image plane is then:
sm = K[R, T]M
where s is a scale factor and K is the camera internal parameter matrix; taking the checkerboard plane as Z = 0 gives
sm = K[r1, r2, t][X, Y, 1]^T
K[r1, r2, t] is the homography matrix H, i.e.
H = [h1, h2, h3] = λK[r1, r2, t]
calculating the homography matrix H between the checkerboard coordinate system and the image coordinate system; after the calculation of the homography matrix is completed, the rotation matrix R and translation vector T between the image coordinates and the calibration plate coordinates are calculated by:
r1 = λK^-1 h1
r2 = λK^-1 h2
r3 = r1 × r2
t = λK^-1 h3
where λ represents a normalization coefficient;
(2) assuming that n′ sets of images containing the checkerboard have been collected, each image containing m′ checkerboard corners; let m̂(K + ΔK, R_i + ΔR_i, t_i + Δt_i, M_ij) denote the projection point of the corner M_ij of the i-th image on the obtained image under the camera matrix,
where R_i and t_i are the rotation matrix and translation vector of the i-th image, ΔR_i and Δt_i are the variations of R_i and t_i, and K is the internal parameter matrix with variation ΔK; the probability density function of the corner m_ij is then:
constructing a likelihood function:
maximizing L, i.e. minimizing the following reprojection error, solved iteratively with the Levenberg-Marquardt algorithm for the multi-parameter nonlinear optimization problem:
Σ_i Σ_j ||m_ij − m̂(K + ΔK, R_i + ΔR_i, t_i + Δt_i, M_ij)||²
7. the stereo calibration algorithm of the tele binocular camera according to claim 3, wherein the step 6) of correcting the internal reference calibration result specifically comprises the following steps:
6-1) obtaining the original calibration result: the parameters of the two lenses are acquired from the tele binocular camera hardware itself, including the focal lengths f_x and f_y in the x and y directions (unit: pixels) and the lens principal point position u_0 and v_0;
6-2) for each checkerboard corner point image Pi, carrying out the following steps:
(1) acquiring the rotation matrix R and translation vector T between the image coordinates and the calibration plate coordinates under the checkerboard corner image Pi; according to the camera model, a point in three-dimensional world coordinates is set as M = [X, Y, Z, 1]^T and the two-dimensional camera-plane pixel coordinate as m = [u, v, 1]^T; the homography relationship from the checkerboard plane used for calibration to the image plane is then:
sm = K[R, T]M
where s is a scale factor and K is the camera internal parameter matrix; taking the checkerboard plane as Z = 0 gives
sm = K[r1, r2, t][X, Y, 1]^T
K[r1, r2, t] is the homography matrix H, i.e.
H = [h1, h2, h3] = λK[r1, r2, t]
calculating the homography matrix H between the checkerboard coordinate system and the image coordinate system; after the calculation of the homography matrix is completed, the rotation matrix R and translation vector T between the image coordinates and the calibration plate coordinates are calculated by:
r1 = λK^-1 h1
r2 = λK^-1 h2
r3 = r1 × r2
t = λK^-1 h3
where λ represents a normalization coefficient;
(2) n′ sets of images containing the checkerboard are collected, each image containing m′ checkerboard corners; let m̂(K + ΔK, R_i + ΔR_i, t_i + Δt_i, M_ij) denote the projection point of the corner M_ij of the i-th image on the obtained image under the camera matrix,
where R_i and t_i are the rotation matrix and translation vector of the i-th image, ΔR_i and Δt_i are the variations of R_i and t_i, and K is the internal parameter matrix with variation ΔK; the probability density function of the corner m_ij is then:
constructing a likelihood function:
maximizing L, i.e. minimizing the following reprojection error, solved iteratively with the Levenberg-Marquardt algorithm for the multi-parameter nonlinear optimization problem:
Σ_i Σ_j ||m_ij − m̂(K + ΔK, R_i + ΔR_i, t_i + Δt_i, M_ij)||²
8. the stereo calibration algorithm of the tele binocular camera according to claim 4, wherein the step 6) of correcting the internal reference calibration result specifically comprises the following steps:
6-1) obtaining the original calibration result: the parameters of the two lenses are acquired from the tele binocular camera hardware itself, including the focal lengths f_x and f_y in the x and y directions (unit: pixels) and the lens principal point position u_0 and v_0;
6-2) for each checkerboard corner point image Pi, carrying out the following steps:
(1) acquiring the rotation matrix R and translation vector T between the image coordinates and the calibration plate coordinates under the checkerboard corner image Pi; according to the camera model, a point in three-dimensional world coordinates is set as M = [X, Y, Z, 1]^T and the two-dimensional camera-plane pixel coordinate as m = [u, v, 1]^T; the homography relationship from the checkerboard plane used for calibration to the image plane is then:
sm = K[R, T]M
where s is a scale factor and K is the camera internal parameter matrix; taking the checkerboard plane as Z = 0 gives
sm = K[r1, r2, t][X, Y, 1]^T
K[r1, r2, t] is the homography matrix H, i.e.
H = [h1, h2, h3] = λK[r1, r2, t]
calculating the homography matrix H between the checkerboard coordinate system and the image coordinate system; after the calculation of the homography matrix is completed, the rotation matrix R and translation vector T between the image coordinates and the calibration plate coordinates are calculated by:
r1 = λK^-1 h1
r2 = λK^-1 h2
r3 = r1 × r2
t = λK^-1 h3
where λ represents a normalization coefficient;
(2) assuming that n′ sets of images containing the checkerboard have been collected, each image containing m′ checkerboard corners; let m̂(K + ΔK, R_i + ΔR_i, t_i + Δt_i, M_ij) denote the projection point of the corner M_ij of the i-th image on the obtained image under the camera matrix,
where R_i and t_i are the rotation matrix and translation vector of the i-th image, ΔR_i and Δt_i are the variations of R_i and t_i, and K is the internal parameter matrix with variation ΔK; the probability density function of the corner m_ij is then:
constructing a likelihood function:
maximizing L, i.e. minimizing the following reprojection error, solved iteratively with the Levenberg-Marquardt algorithm for the multi-parameter nonlinear optimization problem:
Σ_i Σ_j ||m_ij − m̂(K + ΔK, R_i + ΔR_i, t_i + Δt_i, M_ij)||²
9. the stereoscopic calibration algorithm of the tele binocular camera according to claim 1, 2, 5, 7 or 8, wherein the step of correcting the external reference calibration result comprises the following steps:
7-1) calculating the coordinates of the matched left and right feature points in the corresponding normal coordinate system;
The pixel coordinate system takes the upper left corner of the picture as an origin, and the x axis and the y axis of the pixel coordinate system are respectively parallel to the x axis and the y axis of the image coordinate system; the unit of the pixel coordinate system is a pixel; the normal coordinate system takes the optical center of the camera as the origin of the image coordinate system, and the distance from the optical center to the image plane is scaled to 1; the relationship of pixel coordinates to normal coordinates is as follows:
u=KX
where u = [u, v, 1]^T represents the pixel coordinates of an image point; K = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]] represents the internal reference matrix of the camera, with f_x and f_y the focal lengths of the image in the x and y directions (in pixels) and (c_x, c_y) the position of the camera principal point; X = [x, y, 1]^T is the coordinate in the normal coordinate system; from the pixel coordinate system of the known image and the internal parameters of the camera, the corresponding normal coordinates of the pixel points are calculated, i.e.
X = K^-1 u
where u_l and u_r are the pixel coordinates of the matched left and right feature points, X_l = K_l^-1 u_l and X_r = K_r^-1 u_r are the corresponding normal coordinates, and K_l and K_r are the internal reference matrices of the left and right cameras, respectively;
7-2) removing image distortion
Calculating the undistorted normal coordinates of the left and right image feature points according to the normal coordinates of the left and right image feature points and respective distortion coefficients of the left and right cameras;
the image radial distortion is the positional deviation of image pixels along the radial direction, with the distortion center as the central point, which deforms the image; the radial distortion is expressed as follows:
x_d = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
y_d = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
where r^2 = x^2 + y^2, and k_1, k_2, k_3 are radial distortion parameters;
tangential distortion is due to imperfections in the camera fabrication, causing the lens itself to be non-parallel to the image plane, quantitatively described as:
x_d = x + (2 p_1 x y + p_2 (r^2 + 2 x^2))
y_d = y + (p_1 (r^2 + 2 y^2) + 2 p_2 x y)
where p_1, p_2 are tangential distortion coefficients;
in summary, the coordinate relationship before and after distortion is as follows:
x_d = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + (2 p_1 x y + p_2 (r^2 + 2 x^2))
y_d = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + (p_1 (r^2 + 2 y^2) + 2 p_2 x y)
where (x, y) is the normal coordinate in the ideal state and (x_d, y_d) is the actual distorted normal coordinate; with (x_d, y_d) as the initial value of (x, y), the actual (x, y) is obtained through iterative calculation;
7-3) rotating the left and right images according to the original rotation relationship of the two cameras: the rotation matrix R and translation vector t between the original left and right cameras are known, so that
Xr=RXl+t
where X_l represents the normal coordinates of the left camera and X_r the normal coordinates of the right camera; the left image is rotated by half the angle of R in the positive direction, and the right image by half the angle in the negative direction; this rotation is applied to the normal coordinates of each pair of distortion-free left and right feature points obtained in the previous step;
7-4) restoring the distortion-free rotated points to the pixel coordinate system according to the formula u = KX; from the updated normal coordinates of the left and right feature points of the previous step, calculating the distortion-corrected image coordinates;
7-5) solving the basic matrix F and the essential matrix E from the distortion-corrected feature point pair coordinates of the left and right images and the internal reference matrices of the left and right cameras: the left and right corresponding pixel point pairs u_l, u_r are related to the basic matrix F by
u_r^T F u_l = 0
the point pairs are further screened with random sample consensus (RANSAC), and the coordinates of the remaining corresponding points are substituted into the above formula to construct a homogeneous linear equation set from which F is solved;
the relationship between the basic matrix and the essential matrix is:
E = K_r^T F K_l
where K_l, K_r are the internal reference matrices of the left and right cameras, respectively;
7-6) decomposing the left and right camera rotation and translation relationships after correction from the essence matrix: the relationship of the essential matrix E to the rotation R and translation t is as follows:
E=[t]×R
where [t]_× represents the cross-product (skew-symmetric) matrix of t;
performing singular value decomposition on E gives:
E = U Σ V^T
two matrices are defined:
W = [[0, -1, 0], [1, 0, 0], [0, 0, 1]], Z = [[0, 1, 0], [-1, 0, 0], [0, 0, 0]]
so E is written in the following two forms:
(1) E = UZU^T UWV^T, with [t]_× = UZU^T and R = UWV^T;
(2) E = -UZU^T UW^T V^T, with [t]_× = -UZU^T and R = UW^T V^T;
obtaining four pairs of R and t, and selecting the solution that is physically meaningful in three dimensions (reconstructed points in front of both cameras);
7-7) superposing the decomposed rotation and translation relationship onto the original external parameters:
the rotation matrix before distortion removal is recorded as R, and the translation vector as t = (t_x, t_y, t_z)^T; the rotation matrix decomposed in step 7-6) is R′, and the translation vector is t′ = (t′_x, t′_y, t′_z)^T; the new R_new and t_new are then
R_new = R^(1/2) R′ R^(1/2)
10. The stereo calibration algorithm of the tele binocular camera of claim 6, wherein the step of correcting the external reference calibration result specifically comprises the steps of:
7-1) calculating the coordinates of the matched left and right feature points in the corresponding normal coordinate system;
The pixel coordinate system takes the upper left corner of the picture as an origin, and the x axis and the y axis of the pixel coordinate system are respectively parallel to the x axis and the y axis of the image coordinate system; the unit of the pixel coordinate system is a pixel; the normal coordinate system takes the optical center of the camera as the origin of the image coordinate system, and the distance from the optical center to the image plane is scaled to 1; the relationship of pixel coordinates to normal coordinates is as follows:
u=KX
where u = [u, v, 1]^T represents the pixel coordinates of an image point; K = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]] represents the internal reference matrix of the camera, with f_x and f_y the focal lengths of the image in the x and y directions (in pixels) and (c_x, c_y) the position of the camera principal point; X = [x, y, 1]^T is the coordinate in the normal coordinate system; from the pixel coordinate system of the known image and the internal parameters of the camera, the corresponding normal coordinates of the pixel points are calculated, i.e.
X = K^-1 u
where u_l and u_r are the pixel coordinates of the matched left and right feature points, X_l = K_l^-1 u_l and X_r = K_r^-1 u_r are the corresponding normal coordinates, and K_l and K_r are the internal reference matrices of the left and right cameras, respectively;
7-2) removing image distortion
Calculating the undistorted normal coordinates of the left and right image feature points according to the normal coordinates of the left and right image feature points and respective distortion coefficients of the left and right cameras;
the image radial distortion is the positional deviation of image pixels along the radial direction, with the distortion center as the central point, which deforms the image; the radial distortion is expressed as follows:
x_d = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
y_d = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
where r^2 = x^2 + y^2, and k_1, k_2, k_3 are radial distortion parameters;
tangential distortion is due to imperfections in the camera fabrication, causing the lens itself to be non-parallel to the image plane, quantitatively described as:
x_d = x + (2 p_1 x y + p_2 (r^2 + 2 x^2))
y_d = y + (p_1 (r^2 + 2 y^2) + 2 p_2 x y)
where p_1, p_2 are tangential distortion coefficients;
in summary, the coordinate relationship before and after distortion is as follows:
x_d = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + (2 p_1 x y + p_2 (r^2 + 2 x^2))
y_d = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + (p_1 (r^2 + 2 y^2) + 2 p_2 x y)
where (x, y) is the normal coordinate in the ideal state and (x_d, y_d) is the actual distorted normal coordinate; with (x_d, y_d) as the initial value of (x, y), the actual (x, y) is obtained through iterative calculation;
7-3) rotating the left and right images according to the original rotation relationship of the two cameras: the rotation matrix R and translation vector t between the original left and right cameras are known, so that
Xr=RXl+t
where X_l represents the normal coordinates of the left camera and X_r the normal coordinates of the right camera; the left image is rotated by half the angle of R in the positive direction, and the right image by half the angle in the negative direction; this rotation is applied to the normal coordinates of each pair of distortion-free left and right feature points obtained in the previous step;
7-4) restoring the distortion-free rotated points to the pixel coordinate system according to the formula u = KX; from the updated normal coordinates of the left and right feature points of the previous step, calculating the distortion-corrected image coordinates;
7-5) solving the basic matrix F and the essential matrix E from the distortion-corrected feature point pair coordinates of the left and right images and the internal reference matrices of the left and right cameras: the left and right corresponding pixel point pairs u_l, u_r are related to the basic matrix F by
u_r^T F u_l = 0
the point pairs are further screened with random sample consensus (RANSAC), and the coordinates of the remaining corresponding points are substituted into the above formula to construct a homogeneous linear equation set from which F is solved;
the relationship between the basic matrix and the essential matrix is:
E = K_r^T F K_l
where K_l, K_r are the internal reference matrices of the left and right cameras, respectively;
7-6) decomposing the left and right camera rotation and translation relationships after correction from the essence matrix: the relationship of the essential matrix E to the rotation R and translation t is as follows:
E=[t]×R
where [t]_× represents the cross-product (skew-symmetric) matrix of t;
performing singular value decomposition on E gives:
E = U Σ V^T
two matrices are defined:
W = [[0, -1, 0], [1, 0, 0], [0, 0, 1]], Z = [[0, 1, 0], [-1, 0, 0], [0, 0, 0]]
so E is written in the following two forms:
(1) E = UZU^T UWV^T, with [t]_× = UZU^T and R = UWV^T;
(2) E = -UZU^T UW^T V^T, with [t]_× = -UZU^T and R = UW^T V^T;
obtaining four pairs of R and t, and selecting the solution that is physically meaningful in three dimensions (reconstructed points in front of both cameras);
7-7) superposing the decomposed rotation and translation relationship onto the original external parameters:
the rotation matrix before distortion removal is recorded as R, and the translation vector as t = (t_x, t_y, t_z)^T; the rotation matrix decomposed in step 7-6) is R′, and the translation vector is t′ = (t′_x, t′_y, t′_z)^T; the new R_new and t_new are then
R_new = R^(1/2) R′ R^(1/2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911152607.0A CN110969668B (en) | 2019-11-22 | 2019-11-22 | Stereo calibration algorithm of long-focus binocular camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110969668A true CN110969668A (en) | 2020-04-07 |
CN110969668B CN110969668B (en) | 2023-05-02 |
Family
ID=70031189
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911152607.0A Active CN110969668B (en) | 2019-11-22 | 2019-11-22 | Stereo calibration algorithm of long-focus binocular camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110969668B (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111862229A (en) * | 2020-06-05 | 2020-10-30 | 北京中科慧眼科技有限公司 | Binocular camera adjusting method and device |
CN112066876A (en) * | 2020-08-27 | 2020-12-11 | 武汉大学 | Method for rapidly measuring object size by using mobile phone |
CN112465913A (en) * | 2020-11-18 | 2021-03-09 | 广东博智林机器人有限公司 | Binocular camera-based correction method and device |
CN112465916A (en) * | 2020-11-27 | 2021-03-09 | 浙江光珀智能科技有限公司 | RGBD binocular calibration method and system based on full-view-field plane calibration plate |
CN113012238A (en) * | 2021-04-09 | 2021-06-22 | 南京星顿医疗科技有限公司 | Method for rapid calibration and data fusion of multi-depth camera |
CN113077519A (en) * | 2021-03-18 | 2021-07-06 | 中国电子科技集团公司第五十四研究所 | Multi-phase external parameter automatic calibration method based on human skeleton extraction |
CN113409399A (en) * | 2021-06-10 | 2021-09-17 | 武汉库柏特科技有限公司 | Dual-camera combined calibration method, system and device |
CN113450416A (en) * | 2020-06-15 | 2021-09-28 | 天津工业大学 | TCSC (thyristor controlled series) method applied to three-dimensional calibration of three-view camera |
CN113676696A (en) * | 2020-05-14 | 2021-11-19 | 杭州萤石软件有限公司 | Target area monitoring method and system |
CN113706635A (en) * | 2021-10-28 | 2021-11-26 | 中国测绘科学研究院 | Long-focus camera calibration method based on point feature and line feature fusion |
CN113884017A (en) * | 2021-08-19 | 2022-01-04 | 中国电力科学研究院有限公司 | Insulator non-contact deformation detection method and system based on trinocular vision |
CN113983934A (en) * | 2021-11-15 | 2022-01-28 | 西安交通大学 | Copper-clad plate online high-speed dimension measurement method and device based on double-line-array camera |
CN114862969A (en) * | 2022-05-27 | 2022-08-05 | 国网江苏省电力有限公司电力科学研究院 | Onboard holder camera angle self-adaptive adjusting method and device of intelligent inspection robot |
WO2023045147A1 (en) * | 2021-09-24 | 2023-03-30 | 上海闻泰电子科技有限公司 | Method and system for calibrating binocular camera, and electronic device and storage medium |
CN116704047A (en) * | 2023-08-01 | 2023-09-05 | 安徽云森物联网科技有限公司 | Pedestrian ReID-based calibration method for monitoring camera equipment position |
CN117036506A (en) * | 2023-08-25 | 2023-11-10 | 浙江大学海南研究院 | Binocular camera calibration method |
WO2024002291A1 (en) * | 2022-07-01 | 2024-01-04 | 华为技术有限公司 | Vein image collection apparatus and thinning design method therefor, and terminal device |
CN117036506B (en) * | 2023-08-25 | 2024-05-10 | 浙江大学海南研究院 | Binocular camera calibration method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150093042A1 (en) * | 2012-06-08 | 2015-04-02 | Huawei Technologies Co., Ltd. | Parameter calibration method and apparatus |
CN104574419A (en) * | 2015-01-28 | 2015-04-29 | 深圳市安健科技有限公司 | Lens distortion parameter calibration method and system |
CN105118055A (en) * | 2015-08-11 | 2015-12-02 | 北京电影学院 | Camera positioning correction calibration method and system |
CN108876749A (en) * | 2018-07-02 | 2018-11-23 | 南京汇川工业视觉技术开发有限公司 | A kind of lens distortion calibration method of robust |
Non-Patent Citations (1)
Title |
---|
刘俸材 (Liu Fengcai) et al.: "Stereo calibration method for binocular vision" * |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113676696A (en) * | 2020-05-14 | 2021-11-19 | 杭州萤石软件有限公司 | Target area monitoring method and system |
CN111862229A (en) * | 2020-06-05 | 2020-10-30 | 北京中科慧眼科技有限公司 | Binocular camera adjusting method and device |
CN113450416B (en) * | 2020-06-15 | 2024-03-15 | 天津工业大学 | TCSC method applied to three-dimensional calibration of three-dimensional camera |
CN113450416A (en) * | 2020-06-15 | 2021-09-28 | 天津工业大学 | TCSC (thyristor controlled series) method applied to three-dimensional calibration of three-view camera |
CN112066876A (en) * | 2020-08-27 | 2020-12-11 | 武汉大学 | Method for rapidly measuring object size by using mobile phone |
CN112465913A (en) * | 2020-11-18 | 2021-03-09 | 广东博智林机器人有限公司 | Binocular camera-based correction method and device |
CN112465916A (en) * | 2020-11-27 | 2021-03-09 | 浙江光珀智能科技有限公司 | RGBD binocular calibration method and system based on full-view-field plane calibration plate |
CN113077519A (en) * | 2021-03-18 | 2021-07-06 | 中国电子科技集团公司第五十四研究所 | Multi-phase external parameter automatic calibration method based on human skeleton extraction |
CN113012238B (en) * | 2021-04-09 | 2024-04-16 | 南京星顿医疗科技有限公司 | Method for quick calibration and data fusion of multi-depth camera |
CN113012238A (en) * | 2021-04-09 | 2021-06-22 | 南京星顿医疗科技有限公司 | Method for rapid calibration and data fusion of multi-depth camera |
CN113409399A (en) * | 2021-06-10 | 2021-09-17 | 武汉库柏特科技有限公司 | Dual-camera combined calibration method, system and device |
CN113884017A (en) * | 2021-08-19 | 2022-01-04 | 中国电力科学研究院有限公司 | Insulator non-contact deformation detection method and system based on trinocular vision |
CN113884017B (en) * | 2021-08-19 | 2024-04-05 | 中国电力科学研究院有限公司 | Non-contact deformation detection method and system for insulator based on three-eye vision |
WO2023045147A1 (en) * | 2021-09-24 | 2023-03-30 | 上海闻泰电子科技有限公司 | Method and system for calibrating binocular camera, and electronic device and storage medium |
CN113706635B (en) * | 2021-10-28 | 2022-04-01 | 中国测绘科学研究院 | Long-focus camera calibration method based on point feature and line feature fusion |
CN113706635A (en) * | 2021-10-28 | 2021-11-26 | 中国测绘科学研究院 | Long-focus camera calibration method based on point feature and line feature fusion |
CN113983934B (en) * | 2021-11-15 | 2022-11-01 | 西安交通大学 | Copper-clad plate online high-speed dimension measurement method and device based on double-line-array camera |
CN113983934A (en) * | 2021-11-15 | 2022-01-28 | 西安交通大学 | Copper-clad plate online high-speed dimension measurement method and device based on double-line-array camera |
CN114862969A (en) * | 2022-05-27 | 2022-08-05 | 国网江苏省电力有限公司电力科学研究院 | Onboard holder camera angle self-adaptive adjusting method and device of intelligent inspection robot |
WO2024002291A1 (en) * | 2022-07-01 | 2024-01-04 | 华为技术有限公司 | Vein image collection apparatus and thinning design method therefor, and terminal device |
CN116704047A (en) * | 2023-08-01 | 2023-09-05 | 安徽云森物联网科技有限公司 | Pedestrian ReID-based calibration method for monitoring camera equipment position |
CN116704047B (en) * | 2023-08-01 | 2023-10-27 | 安徽云森物联网科技有限公司 | Pedestrian ReID-based calibration method for monitoring camera equipment position |
CN117036506A (en) * | 2023-08-25 | 2023-11-10 | 浙江大学海南研究院 | Binocular camera calibration method |
CN117036506B (en) * | 2023-08-25 | 2024-05-10 | 浙江大学海南研究院 | Binocular camera calibration method |
Also Published As
Publication number | Publication date |
---|---|
CN110969668B (en) | 2023-05-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110969668B (en) | Stereo calibration algorithm of long-focus binocular camera | |
CN110969670B (en) | Multispectral camera dynamic three-dimensional calibration method based on significant features | |
CN110969669B (en) | Visible light and infrared camera combined calibration method based on mutual information registration | |
CN110969667B (en) | Multispectral camera external parameter self-correction algorithm based on edge characteristics | |
CN112802124B (en) | Calibration method and device for multiple stereo cameras, electronic equipment and storage medium | |
CN111429533B (en) | Camera lens distortion parameter estimation device and method | |
CN109118544B (en) | Synthetic aperture imaging method based on perspective transformation | |
CN111080709B (en) | Multispectral stereo camera self-calibration algorithm based on track feature registration | |
CN109727278B (en) | Automatic registration method for airborne LiDAR point cloud data and aerial image | |
CN110992409B (en) | Multispectral stereo camera dynamic registration method based on Fourier transform registration | |
CN110910456B (en) | Three-dimensional camera dynamic calibration method based on Harris angular point mutual information matching | |
CN113920205B (en) | Calibration method of non-coaxial camera | |
CN110956661A (en) | Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix | |
CN113012234B (en) | High-precision camera calibration method based on plane transformation | |
CN109974618B (en) | Global calibration method of multi-sensor vision measurement system | |
CN116433737A (en) | Method and device for registering laser radar point cloud and image and intelligent terminal | |
CN110136048B (en) | Image registration method and system, storage medium and terminal | |
CN110880191B (en) | Infrared stereo camera dynamic external parameter calculation method based on histogram equalization | |
Perdigoto et al. | Calibration of mirror position and extrinsic parameters in axial non-central catadioptric systems | |
CN113793266A (en) | Multi-view machine vision image splicing method, system and storage medium | |
CN112929626A (en) | Three-dimensional information extraction method based on smartphone image | |
CN114998448A (en) | Method for calibrating multi-constraint binocular fisheye camera and positioning space point | |
CN110910457B (en) | Multispectral three-dimensional camera external parameter calculation method based on angular point characteristics | |
CN109754435B (en) | Camera online calibration method based on small target fuzzy image | |
CN115409898A (en) | High-precision camera calibration method and device based on special annular calibration plate |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |