CN103530880B - Camera calibration method based on projected Gaussian grid pattern - Google Patents

Camera calibration method based on projected Gaussian grid pattern

Info

Publication number
CN103530880B
CN103530880B (application CN201310482789.4A)
Authority
CN
China
Prior art keywords
camera
points
gaussian
point
prime
Prior art date
Legal status
Active
Application number
CN201310482789.4A
Other languages
Chinese (zh)
Other versions
CN103530880A (en)
Inventor
Jia Zhenyuan (贾振元)
Liu Wei (刘巍)
Li Mingxing (李明星)
Liu Yang (刘阳)
Yang Jinghao (杨景豪)
Zhang Chi (张驰)
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201310482789.4A
Publication of CN103530880A
Application granted
Publication of CN103530880B
Status: Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)

Abstract

The camera calibration method of the present invention, based on a projected Gaussian grid pattern, belongs to the fields of image processing and computer vision detection, and in particular concerns field calibration of the internal and external parameters of cameras in a large-forging dimension measurement system. The method exploits the property that the gray levels of the horizontal and vertical light bars of the Gaussian grid pattern follow a Gaussian distribution across the bar width: by fitting Gaussian curves, the image coordinates of points on each light-bar center line are obtained accurately; the center-line equations of the horizontal and vertical bars are then fitted, and the intersections of the center lines serve as calibration feature points. From the image coordinates of the calibration feature points in captured images of the Gaussian grid pattern, the internal and external camera parameters are obtained step by step. The invention offers good real-time performance, robustness and high calibration accuracy; the step-by-step calibration yields high-precision camera parameters and avoids the coupling problem that arises when all camera parameters are solved simultaneously, making the method suitable for on-line camera calibration at a forging site.

Description

Camera calibration method based on a projected Gaussian grid pattern
Technical Field
The invention belongs to the field of image processing and computer vision detection, and particularly relates to a field calibration method for internal and external parameters of a camera in a large forging dimension measurement system.
Background
One of the basic tasks of computer vision is to recover the three-dimensional geometric information of an object from two-dimensional image information. To relate image points to the corresponding points on the surface of a spatial object, a geometric model of camera imaging must be determined; the parameters of this model are called camera parameters. Camera parameters are divided into internal and external parameters: the internal parameters describe the geometric and optical characteristics of the camera itself, while the external parameters give the three-dimensional position and orientation of the camera relative to a chosen world coordinate system. The process of determining the internal and external parameters is called camera calibration, and the precision of the calibration method directly influences the precision of computer vision measurement. Research on fast, simple and accurate camera calibration is therefore of great significance.
According to the calibration reference used, traditional camera calibration methods can be classified into 3D stereo-target methods, 2D planar-target methods (represented by the checkerboard method proposed by Zhang Zhengyou) and 1D target methods. All of them require a calibration reference object, and for large-field-of-view cameras the calibration precision depends directly on whether the feature points of the reference are distributed uniformly over the whole field of view. On one hand, manufacturing a high-precision, large-size calibration target is expensive, and such a target is difficult to maintain. On the other hand, these methods are unsuitable for on-line use and for occasions where a calibration reference cannot be placed: in the high-temperature environment of a forging workshop, calibration blocks, calibration plates and pasted target points cannot be used. The traditional methods therefore cannot meet the requirement of on-line dimensional measurement of large forgings. Self-calibration methods, by contrast, use no calibration object at all and estimate the intrinsic parameters solely from the constraints among them and the correspondences of image points between images. Such methods are flexible, but their precision is low and their robustness insufficient.
Projecting a target with a projector can address this problem. A projected feature pattern theoretically has a step change at its edges, but in practice, owing to light diffusion, the edges change gradually and shift toward the dark background. Take a projected array of circular light spots as an example, where the spot centers are the calibration feature points: because each spot diffuses outward to a different degree, the centroid method applied to the binarized image cannot give the exact spot center, and extracting the spot boundary and fitting a circle (or ellipse) with a suitable algorithm likewise fails to recover the center with high precision. Similarly, when a projector projects an ordinary light-bar pattern and the bar centers are extracted as feature lines, the accuracy is hard to guarantee.
Disclosure of Invention
The invention addresses the technical shortcomings of the prior art: at a forging site, traditional calibration methods have low precision, cannot run in real time, and in some cases cannot be applied at all, while self-calibration methods suffer from low precision and insufficient robustness.
The technical solution adopted by the invention is a camera calibration method based on a projected Gaussian grid pattern. The method exploits the property that the gray levels of the horizontal and vertical light bars of the Gaussian grid pattern follow a Gaussian distribution across the bar width: by fitting Gaussian curves, the image coordinates of points on each light-bar center line are obtained with high precision; the center-line equations of the horizontal and vertical bars are then fitted, and the intersections of the center lines serve as calibration feature points. From the image coordinates of the calibration feature points in the captured images of the Gaussian grid pattern, the internal and external camera parameters are obtained step by step. The specific steps are as follows:
Step 1: build the camera calibration system. A left four-dimensional electrically controlled platform 2a, a right four-dimensional electrically controlled platform 2b and a projector 3 are installed on the table top of the platform 1; a left camera 4a is fixed on the left platform 2a and a right camera 4b on the right platform 2b.
Step 2: and projecting a Gaussian grid pattern, shooting and acquiring intersection point coordinates. Projecting a Gaussian grid pattern 6 consisting of a plurality of parallel transverse light bars and a plurality of parallel longitudinal light bars onto a smooth flat plate or wall surface 5 in a factory building through a projector 3, wherein the gray scales of all the light bars in the width direction are in Gaussian distribution, and the intersection point A of the transverse light bars and the longitudinal light bars isi,jTo mark the feature points, i is the number of the horizontal light bars in the order from top to bottom, and j is the number of the vertical light bars in the order from left to right. Due to the superposition of light intensity at the intersection of the horizontal and vertical light bars, after the binarization processing is performed on the images shot by the left camera 4a and the right camera 4b, only bright spots at the intersection of the grids, namely isolated connected regions, are left in the obtained images. Centroid coordinates of connected regions can be obtained by centroid method (u 0)i,j,v0i,j) As a feature point Ai,jThe coarse position of (2). A circular area with the rough position as the center and a radius of delta pixels is used as a search range, and then the search range is [ u0 ]i,j-Δ,u0i,j+Δ]The transverse light bars are searched once in the width direction every delta/n within the range, fitting is carried out according to Gaussian distribution characteristics, and Gaussian distribution peak points are used as points on the central lines of the transverse light bars, so that points P on 2n +1 central lines can be obtainedi,j,sSubscript s is 1,2,3, …,2n +1, and a straight line l is fittedh,i,j. Similarly, in [ v0i,j-Δ,v0i,j+Δ]The longitudinal light bars are searched once in the width direction every delta/n within the range, fitting is carried out according to Gaussian distribution characteristics, Gaussian distribution peak points are used as points on the central line of the longitudinal light bars, and points Q on 2n +1 central lines can be obtainedi,j,tSubscript t is 1,2,3, …,2n +1, and then a straight line l is fittedv,i,j. Finally, the intersection point of two intersecting straight lines in the same search range is obtained as a calibration characteristic point Ai,jThe coordinates of which are (u)i,j,vi,j)。
And step 3: the rough coordinates of the principal point are obtained. Shooting the same projected Gaussian grid pattern 6, characteristic point A, by using the left camera 4a or the right camera 4b under two different focal lengthsi,jRespectively, are (u 1)i,j,v1i,j) And (u 2)i,j,v2i,j) The principal point coordinate is (u)0,v0) Then, there are:
u 2 i , j - u 0 u 1 i , j - u 0 = v 2 i , j - v 0 v 1 i , j - v 0 - - - ( 2 )
the rough position of the principal point being determined by the above equationCoordinates (u)0,v0)。
Step 4: solve the distortion coefficients and optimize the principal point coordinates. From the distortion model, the relationship between the actually captured intersection coordinates p_{i,j} = (u_{i,j}, v_{i,j}, 1)^T and the ideal intersection coordinates q_{i,j} = (u'_{i,j}, v'_{i,j}, 1)^T is:

$$\begin{pmatrix} u'_{i,j} \\ v'_{i,j} \end{pmatrix} = \begin{pmatrix} u_{i,j} \\ v_{i,j} \end{pmatrix} + \begin{pmatrix} \tilde{u}_{i,j}(k_1 r_{i,j}^2 + k_2 r_{i,j}^4) + 2p_1 \tilde{u}_{i,j}\tilde{v}_{i,j} + p_2(r_{i,j}^2 + 2\tilde{u}_{i,j}^2) \\ \tilde{v}_{i,j}(k_1 r_{i,j}^2 + k_2 r_{i,j}^4) + p_1(r_{i,j}^2 + 2\tilde{v}_{i,j}^2) + 2p_2 \tilde{u}_{i,j}\tilde{v}_{i,j} \end{pmatrix} \qquad (3)$$

where $\tilde{u}_{i,j} = u_{i,j} - u_0$, $\tilde{v}_{i,j} = v_{i,j} - v_0$, $r_{i,j} = \sqrt{\tilde{u}_{i,j}^2 + \tilde{v}_{i,j}^2}$; k_1 and k_2 are the radial distortion coefficients and p_1 and p_2 are the tangential distortion coefficients.
Further, taking a grid pattern with equal numbers of horizontal and vertical bars as an example, let the total number of intersections be num, so that each row contains √num intersection points. From straight-line preservation, i.e. the property that points on the same light bar are collinear, combined with the necessary condition for three points to be collinear, the optimization objective function can be written as:

$$\min \sum_{i} \sum_{j=1}^{\sqrt{num}-2} \left| q_{i,j+1}^{T}\,[q_{i,j}]_{\times}\,q_{i,j+2} \right| \qquad (4)$$

where i runs over all light bars, $\left| q_{i,j+1}^{T}[q_{i,j}]_{\times} q_{i,j+2} \right| = 0$ is the condition for the three points A_{i,j}, A_{i,j+1}, A_{i,j+2} to be collinear, and [q_{i,j}]_× denotes the antisymmetric matrix of q_{i,j}, namely:

$$[q_{i,j}]_{\times} = \begin{bmatrix} 0 & -1 & v'_{i,j} \\ 1 & 0 & -u'_{i,j} \\ -v'_{i,j} & u'_{i,j} & 0 \end{bmatrix} \qquad (5)$$
The Levenberg-Marquardt nonlinear optimization algorithm is used to minimize the objective function of equation (4), yielding the distortion coefficients k_1, k_2, p_1 and p_2 and the optimized principal point coordinates (u'_0, v'_0). All intersection coordinates are then corrected to ideal coordinates with equation (3).
And 5: the remaining internal parameters of the camera are evaluated. And (3) taking the corrected ideal intersection point as a characteristic point, driving the left camera 4a to do two groups of orthogonal motions by using the left four-dimensional electronic control platform 2a by adopting an active vision method, respectively shooting an image of a projected Gaussian grid pattern 6 at three start and end positions of each group of orthogonal motions, and finally shooting by using the left camera 4a to obtain 6 images.
Parallel straight lines meet the plane at infinity at the same point at infinity, whose image is the vanishing point. A group of orthogonal motions comprises two translations: the lines connecting corresponding intersections in the two images captured at the start and end of one translation form a set of spatially parallel lines, and the two translations are mutually perpendicular, so each group yields a pair of orthogonal vanishing points e_{i1}(u_{i1}, v_{i1}) and e_{i2}(u_{i2}, v_{i2}), where i = 1, 2 denotes the order of the orthogonal motions and Oe_{i1} · Oe_{i2} = 0, with O = (u'_0, v'_0) the principal point. Using the two orthogonal vanishing-point pairs, the scale factors α_x and α_y in the intrinsic parameter matrix K of the left camera 4a are obtained by solving the following system of equations:

$$\begin{cases} (u_{11} - u'_0)(u_{12} - u'_0)/\alpha_x^2 + (v_{11} - v'_0)(v_{12} - v'_0)/\alpha_y^2 + 1 = 0 \\ (u_{21} - u'_0)(u_{22} - u'_0)/\alpha_x^2 + (v_{21} - v'_0)(v_{22} - v'_0)/\alpha_y^2 + 1 = 0 \end{cases} \qquad (6)$$
Likewise, the scale factors α_x and α_y in the intrinsic parameter matrix K of the right camera 4b can be obtained.
Step 6: external parameters of the camera are acquired. The left camera 4a and the right camera 4b capture the same projected gaussian grid pattern 6, a world coordinate system is established on the camera coordinate system of the left camera 4a, and the basic matrix F is calculated using the corrected matching points of the images captured by the left and right cameras 4a, 4 b. The essential matrix E can be calculated with the internal parameters and the basic matrix being determined with a difference of a scaling factor s. After decomposition of the intrinsic matrix E, the extrinsic parameters (rotation matrix R 'and translation vector t') can be determined with a difference of one scaling factor.
Parallel Gaussian light bars are projected by the projector 3 onto a gauge block whose actual length L_0 has been measured accurately. Using the Gaussian property of the light bars, sub-pixel center lines are fitted, points of abrupt gray-level change are taken as the boundary points of the gauge block, and the gauge-block length L'_0 is reconstructed from the obtained internal and external parameters. The scale factor is then s = L_0/L'_0, which gives the actual camera external parameters (rotation matrix R' and translation vector t = s·t'). This completes the camera calibration.
The advantage of the method is that the intersections of the Gaussian grid pattern projected by the projector serve as calibration feature points, so calibration blocks, calibration plates and pasted marker points are not needed, which makes real-time camera calibration practical in complex environments such as a forging site. Exploiting the Gaussian distribution of gray levels across the light-bar width, the method determines the positions of the calibration feature points with high precision and high robustness; the step-by-step calibration yields high-precision camera parameters while avoiding the coupling problem that arises when all camera parameters are solved simultaneously.
Drawings
FIG. 1 is a schematic diagram of the calibration system of the present invention, wherein: 1 is the vibration-isolation platform, 2a the left four-dimensional electrically controlled platform, 2b the right four-dimensional electrically controlled platform, 3 the projector, 4a the left camera, 4b the right camera, 5 a smooth flat plate or wall surface, and 6 the Gaussian grid pattern.
Fig. 2 is an image of a grid pattern taken by a camera according to the present invention.
Fig. 3 illustrates the binarization of the grid-pattern image and the acquisition of the rough positions of the intersections.
FIG. 4 illustrates recovering the scale factor by reconstructing the gauge-block length from the projected parallel Gaussian light bars.
Detailed Description
The following describes the embodiments of the present invention in further detail with reference to the drawings and technical solutions.
Camera calibration typically employs the classical pinhole imaging model:

$$\rho_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,[R \mid t] \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} \alpha_x & 0 & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (1)$$

where (X_w, Y_w, Z_w, 1)^T are the homogeneous coordinates of a space point in the world coordinate system, (u, v, 1)^T are the homogeneous coordinates of the corresponding point in the image pixel coordinate system o_0-uv, α_x = f/dx is the scale factor on the u axis and α_y = f/dy the scale factor on the v axis, f is the focal length of the camera lens, dx and dy are the horizontal and vertical physical sizes of a pixel, (u_0, v_0) are the principal point coordinates, ρ_c is a scale factor, K is the camera intrinsic parameter matrix, and [R | t] is the camera extrinsic parameter matrix, with R the rotation matrix and t the translation vector.
The camera internal parameters comprise the principal point coordinates (u_0, v_0), the scale factors α_x and α_y, the radial distortion coefficients k_1 and k_2, and the tangential distortion coefficients p_1 and p_2. The external parameters give the orientation of the camera coordinate system relative to the world coordinate system and comprise the rotation matrix R and the translation vector t.
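As an illustrative sketch (not code from the patent), the pinhole projection of equation (1) can be written in a few lines of Python; the numeric values of α_x, α_y, u_0 and v_0 below are placeholders:

```python
import numpy as np

alpha_x, alpha_y = 2400.0, 2400.0   # scale factors f/dx, f/dy (assumed values)
u0, v0 = 640.0, 512.0               # principal point (assumed values)
K = np.array([[alpha_x, 0.0, u0],
              [0.0, alpha_y, v0],
              [0.0, 0.0, 1.0]])     # intrinsic parameter matrix of equation (1)

def project(Xw, R, t):
    """Pinhole model: rho_c*(u, v, 1)^T = K [R | t] (Xw, Yw, Zw, 1)^T."""
    xc = R @ Xw + t          # world frame -> camera frame (extrinsic parameters)
    uvw = K @ xc             # camera frame -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]  # divide out the scale factor rho_c

# Example: a point 2 m in front of a camera placed at the world origin
print(project(np.array([0.1, 0.05, 2.0]), np.eye(3), np.zeros(3)))
```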
Step 1: and (5) building a camera calibration system. A left four-dimensional electric control platform 2a, a right four-dimensional electric control platform 2b and a projector 3 are installed on the table top of the platform 1, a left camera 4a is fixed on the left four-dimensional electric control platform 2a, and a right camera 4b is fixed on the right four-dimensional electric control platform 2b, as shown in fig. 1.
Step 2: and projecting a Gaussian grid pattern, shooting and acquiring intersection point coordinates. Projecting a Gaussian grid pattern 6 on a smooth flat plate or a wall surface 5 in a factory building through a projector 3, wherein the Gaussian grid pattern 6 consists of a plurality of parallel transverse light bars and a plurality of parallel longitudinal light bars, the gray scale of each light bar is in Gaussian distribution in the width direction, and the intersection point A of the transverse light bar and the longitudinal light bari,jTo mark the feature points, i is the number of the horizontal light bars in the order from top to bottom, and j is the number of the vertical light bars in the order from left to right. An image of the projected gaussian grid pattern captured by the left camera 4a or the right camera 4b is shown in fig. 2. Due to the superposition of light intensity at the intersection of the horizontal and vertical light bars, after the binarization processing is performed on the images captured by the left camera 4a and the right camera 4b, only bright spots at the intersection of the grids, i.e., isolated connected regions, remain in the obtained images, as shown in fig. 3. Centroid coordinates of connected regions can be obtained by centroid method (u 0)i,j,v0i,j) As a feature point Ai,jThe coarse position of (2). A circular area with the rough position as the center and a radius of delta pixels is used as a search range, and then the search range is [ u0 ]i,j-Δ,u0i,j+Δ]The transverse light bars are searched once in the width direction every delta/n within the range, fitting is carried out according to Gaussian distribution characteristics, and Gaussian distribution peak points are used as points on the central lines of the transverse light bars, so that points P on 2n +1 central lines can be obtainedi,j,sSubscript s is 1,2,3, …,2n +1, and a straight line l is fittedh,i,j. Similarly, in [ v0i,j-Δ,v0i,j+Δ]Searching the longitudinal light bars once every delta/n along the width direction within the range, fitting according to the Gaussian distribution characteristic, and fitting the Gaussian distribution characteristicThe peak point of the distribution is used as the point on the central line of the longitudinal light bar, and the points Q on 2n +1 central lines can be obtainedi,j,tSubscript t is 1,2,3, …,2n +1, and then a straight line l is fittedv,i,j. Finally, the intersection point of two intersecting straight lines in the same search range is obtained as a calibration characteristic point Ai,jThe coordinates of which are (u)i,j,vi,j)。
And step 3: the rough coordinates of the principal point are obtained. The principal point is obtained by using a zoom method, and the left side camera 4a or the right side camera 4b shoots the same projection Gaussian grid pattern 6 under two different focal lengths, namely a characteristic point Ai,jRespectively, are (u 1)i,j,v1i,j) And (u 2)i,j,v2i,j) The principal point coordinate is (u)0,v0) Then, there are:
u 2 i , j - u 0 u 1 i , j - u 0 = v 2 i , j - v 0 v 1 i , j - v 0 - - - ( 2 )
once the zoom center of the lens is regarded as the principal point, the coordinate (u) of the rough position of the principal point can be obtained by the above expression0,v0)。
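A minimal sketch of the zoom method of equation (2): cross-multiplying gives one linear equation in (u_0, v_0) per feature point, which can be solved in a least-squares sense over all intersections. The function name and array layout are assumptions made here:

```python
import numpy as np

def rough_principal_point(pts_f1, pts_f2):
    """Each feature point imaged at two focal lengths lies on a ray through
    the zoom center (u0, v0), equation (2). Cross-multiplying
    (u2-u0)(v1-v0) = (v2-v0)(u1-u0) yields the linear equation
        (v2 - v1)*u0 + (u1 - u2)*v0 = u1*v2 - u2*v1
    for every intersection; pts_f1 and pts_f2 are (N, 2) arrays of matched
    feature-point coordinates at the two focal lengths."""
    u1, v1 = pts_f1[:, 0], pts_f1[:, 1]
    u2, v2 = pts_f2[:, 0], pts_f2[:, 1]
    A = np.column_stack([v2 - v1, u1 - u2])
    b = u1 * v2 - u2 * v1
    (u0, v0), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u0, v0
```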
Step 4: solve the distortion coefficients and optimize the principal point coordinates. From the distortion model, the relationship between the actually captured intersection coordinates p_{i,j} = (u_{i,j}, v_{i,j}, 1)^T and the ideal intersection coordinates q_{i,j} = (u'_{i,j}, v'_{i,j}, 1)^T is:

$$\begin{pmatrix} u'_{i,j} \\ v'_{i,j} \end{pmatrix} = \begin{pmatrix} u_{i,j} \\ v_{i,j} \end{pmatrix} + \begin{pmatrix} \tilde{u}_{i,j}(k_1 r_{i,j}^2 + k_2 r_{i,j}^4) + 2p_1 \tilde{u}_{i,j}\tilde{v}_{i,j} + p_2(r_{i,j}^2 + 2\tilde{u}_{i,j}^2) \\ \tilde{v}_{i,j}(k_1 r_{i,j}^2 + k_2 r_{i,j}^4) + p_1(r_{i,j}^2 + 2\tilde{v}_{i,j}^2) + 2p_2 \tilde{u}_{i,j}\tilde{v}_{i,j} \end{pmatrix} \qquad (3)$$

where $\tilde{u}_{i,j} = u_{i,j} - u_0$, $\tilde{v}_{i,j} = v_{i,j} - v_0$, $r_{i,j} = \sqrt{\tilde{u}_{i,j}^2 + \tilde{v}_{i,j}^2}$; k_1 and k_2 are the radial distortion coefficients and p_1 and p_2 are the tangential distortion coefficients.
Further, taking a grid pattern with equal numbers of horizontal and vertical bars as an example, let the total number of intersections be num, so that each row contains √num intersection points. From straight-line preservation, i.e. the property that points on the same light bar are collinear, combined with the necessary condition for three points to be collinear, the optimization objective function can be written as:

$$\min \sum_{i} \sum_{j=1}^{\sqrt{num}-2} \left| q_{i,j+1}^{T}\,[q_{i,j}]_{\times}\,q_{i,j+2} \right| \qquad (4)$$

where i runs over all light bars, $\left| q_{i,j+1}^{T}[q_{i,j}]_{\times} q_{i,j+2} \right| = 0$ is the condition for the three points A_{i,j}, A_{i,j+1}, A_{i,j+2} to be collinear, and [q_{i,j}]_× denotes the antisymmetric matrix of q_{i,j}, namely:

$$[q_{i,j}]_{\times} = \begin{bmatrix} 0 & -1 & v'_{i,j} \\ 1 & 0 & -u'_{i,j} \\ -v'_{i,j} & u'_{i,j} & 0 \end{bmatrix} \qquad (5)$$
The Levenberg-Marquardt nonlinear optimization algorithm is used to minimize the objective function of equation (4), yielding the distortion coefficients k_1, k_2, p_1 and p_2 and the optimized principal point coordinates (u'_0, v'_0). All intersection coordinates are then corrected to ideal coordinates with equation (3).
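A hedged sketch of this optimization, assuming SciPy: equation (3) is applied as the undistortion map and the collinearity residuals of equation (4) are minimized with the Levenberg-Marquardt solver. The names `bars`, `u0_rough` and `v0_rough` are assumed inputs (points grouped by light bar, and the rough principal point from step 3):

```python
import numpy as np
from scipy.optimize import least_squares

def undistort(pts, params):
    """Equation (3): map observed intersection coordinates to ideal ones."""
    k1, k2, p1, p2, u0, v0 = params
    ut = pts[:, 0] - u0                 # u-tilde
    vt = pts[:, 1] - v0                 # v-tilde
    r2 = ut ** 2 + vt ** 2
    rad = k1 * r2 + k2 * r2 ** 2        # k1*r^2 + k2*r^4
    u_id = pts[:, 0] + ut * rad + 2 * p1 * ut * vt + p2 * (r2 + 2 * ut ** 2)
    v_id = pts[:, 1] + vt * rad + p1 * (r2 + 2 * vt ** 2) + 2 * p2 * ut * vt
    return np.column_stack([u_id, v_id])

def residuals(params, bars):
    """Equation (4): one residual q_(i,j+1)^T [q_(i,j)]x q_(i,j+2) per
    consecutive triple of corrected points on each light bar."""
    out = []
    for bar in bars:                    # bar: (m, 2) observed points on one bar
        q = np.hstack([undistort(bar, params), np.ones((len(bar), 1))])
        out.extend(q[j + 1] @ np.cross(q[j], q[j + 2]) for j in range(len(q) - 2))
    return np.asarray(out)

def calibrate_distortion(bars, u0_rough, v0_rough):
    """Levenberg-Marquardt refinement of k1, k2, p1, p2, u0, v0."""
    x0 = np.array([0.0, 0.0, 0.0, 0.0, u0_rough, v0_rough])
    sol = least_squares(residuals, x0, args=(bars,), method="lm")
    return sol.x                        # k1, k2, p1, p2, u0_opt, v0_opt
```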
And 5: the remaining internal parameters of the camera are evaluated. And (3) taking the corrected ideal intersection point as a characteristic point, driving the left camera 4a to do two groups of orthogonal motions by using the left four-dimensional electronic control platform 2a by adopting an active vision method, respectively shooting an image of a projection Gaussian grid pattern 6 at three start and end positions of each group of orthogonal motions, and finally shooting by using the left camera 4a to obtain 6 images. The specific process is as follows: (1) adjusting the left four-dimensional electric control platform 2a to a proper position, starting a first group of orthogonal motions, and respectively shooting an image of a projection Gaussian grid pattern 6 at three initial and final positions of the orthogonal motions; (2) and (3) the left side camera 4a is enabled to look down at a certain angle by using the left side four-dimensional electric control platform 2a, a second group of orthogonal motions are started, and images of the projected Gaussian grid pattern 6 are respectively shot at three start and end positions of the orthogonal motions.
The image of the point at infinity of a straight line is called the vanishing point of the line; parallel straight lines meet the plane at infinity at the same point at infinity and therefore share a vanishing point. A group of orthogonal motions comprises two translations: the lines connecting corresponding intersections in the two images captured at the start and end of one translation form a set of spatially parallel lines, and the two translations are mutually perpendicular, so each group yields a pair of orthogonal vanishing points e_{i1}(u_{i1}, v_{i1}) and e_{i2}(u_{i2}, v_{i2}), where the index i = 1, 2 denotes the order of the orthogonal motions and Oe_{i1} · Oe_{i2} = 0, with O = (u'_0, v'_0) the principal point of the camera. Using the two orthogonal vanishing-point pairs, the scale factors α_x and α_y in the intrinsic parameter matrix K of the left camera 4a are obtained by solving the following system of equations:

$$\begin{cases} (u_{11} - u'_0)(u_{12} - u'_0)/\alpha_x^2 + (v_{11} - v'_0)(v_{12} - v'_0)/\alpha_y^2 + 1 = 0 \\ (u_{21} - u'_0)(u_{22} - u'_0)/\alpha_x^2 + (v_{21} - v'_0)(v_{22} - v'_0)/\alpha_y^2 + 1 = 0 \end{cases} \qquad (6)$$
Likewise, the scale factors α_x and α_y in the intrinsic parameter matrix K of the right camera 4b can be obtained.
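Equation (6) is linear in X = 1/α_x² and Y = 1/α_y², so the two vanishing-point pairs give a 2×2 linear system; a minimal sketch, with names assumed here:

```python
import numpy as np

def scale_factors(vp_pairs, principal):
    """Solve equation (6) for alpha_x and alpha_y. vp_pairs holds the two
    orthogonal vanishing-point pairs ((e11, e12), (e21, e22)) as (u, v)
    tuples; the system is linear in X = 1/alpha_x^2 and Y = 1/alpha_y^2."""
    u0, v0 = principal
    A = [[(e1[0] - u0) * (e2[0] - u0), (e1[1] - v0) * (e2[1] - v0)]
         for e1, e2 in vp_pairs]
    X, Y = np.linalg.solve(np.array(A), np.array([-1.0, -1.0]))
    return 1.0 / np.sqrt(X), 1.0 / np.sqrt(Y)   # alpha_x, alpha_y
```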
Step 6: and acquiring external parameters. The left camera 4a and the right camera 4b capture the same projected gaussian grid pattern 6, a world coordinate system is established on the camera coordinate system of the left camera 4a, and the fundamental matrix F is calculated using the corrected matching points of the images captured by the left and right cameras 4a, 4 b. The intrinsic matrix E can be calculated with the use of the intrinsic parameters and the fundamental matrix with a difference of a scale factor s. After decomposition of the intrinsic matrix E, the extrinsic parameters (rotation matrix R 'and translation vector t') can be determined with a difference of one scaling factor.
As shown in fig. 4, parallel Gaussian light bars are projected by the projector 3 onto a gauge block whose actual length L_0 has been measured accurately. Using the Gaussian property of the light bars, sub-pixel center lines are fitted, points of abrupt gray-level change are taken as the boundary points of the gauge block, and the gauge-block length L'_0 is reconstructed from the obtained internal and external parameters. The scale factor is then s = L_0/L'_0, which gives the actual camera external parameters (rotation matrix R' and translation vector t = s·t'). This completes the camera calibration.
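Step 6 can be sketched with OpenCV's standard two-view routines; this is an illustrative outline, not the patent's implementation, and `pts_l`, `pts_r`, `K_l`, `K_r`, `L0` and `L0_rec` are assumed inputs:

```python
import cv2
import numpy as np

def stereo_extrinsics(pts_l, pts_r, K_l, K_r, L0, L0_rec):
    """Two-view extrinsics for step 6. pts_l/pts_r are the corrected matching
    intersections (Nx2 float32 arrays), K_l/K_r the intrinsic matrices from
    steps 3-5, L0 the measured gauge-block length and L0_rec the block length
    reconstructed with the unit-scale pose (all assumed inputs)."""
    F, _ = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_8POINT)
    E = K_r.T @ F @ K_l                    # essential matrix, known up to scale
    R1, R2, t_unit = cv2.decomposeEssentialMat(E)
    # Of the candidate (R, t) decompositions, keep the pair that places the
    # reconstructed points in front of both cameras (cheirality check).
    s = L0 / L0_rec                        # scale factor from the gauge block
    return R1, R2, s * t_unit              # metric translation t = s * t'
```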
The camera calibration method provided by the invention has good real-time performance and robustness and high calibration accuracy, and can be used for on-line calibration of large-field-of-view cameras in complex environments such as a forging site.

Claims (1)

1. A camera calibration method based on a projected Gaussian grid pattern, characterized in that the method exploits the property that the gray levels of the horizontal and vertical light bars of the Gaussian grid pattern follow a Gaussian distribution across the bar width: by fitting Gaussian curves, the image coordinates of points on each light-bar center line are obtained with high precision; the center-line equations of the horizontal and vertical bars are then fitted, the intersections of the center lines serve as calibration feature points, and the internal and external parameters of the camera are obtained step by step from the image coordinates of the calibration feature points in the captured images of the Gaussian grid pattern; the method comprises the following specific steps:
step 1: building a camera calibration system; a left four-dimensional electric control platform (2a), a right four-dimensional electric control platform (2b) and a projector (3) are arranged on the table top of a platform (1), a left camera (4a) is fixed on the left four-dimensional electric control platform (2a), and a right camera (4b) is fixed on the right four-dimensional electric control platform (2 b);
step 2: project the Gaussian grid pattern, capture images and obtain the intersection coordinates; a Gaussian grid pattern (6), consisting of several parallel horizontal light bars and several parallel vertical light bars, is projected by the projector (3) onto a smooth flat plate or wall surface (5) in the factory building; the gray level of every light bar follows a Gaussian distribution across its width, and the intersections A_{i,j} of the horizontal and vertical bars are the calibration feature points, where i numbers the horizontal bars from top to bottom and j numbers the vertical bars from left to right; because the light intensities superpose where the horizontal and vertical bars cross, after binarizing the images captured by the left camera (4a) and the right camera (4b) only the bright spots at the grid intersections, i.e. isolated connected regions, remain; the centroid coordinates (u0_{i,j}, v0_{i,j}) of each connected region, obtained by the centroid method, serve as the rough position of feature point A_{i,j}; a circular area centered at this rough position with a radius of Δ pixels is taken as the search range; within [u0_{i,j}-Δ, u0_{i,j}+Δ] the horizontal light bar is scanned across its width every Δ/n, each scan profile is fitted to a Gaussian distribution and the fitted peak is taken as a point on the bar's center line, giving 2n+1 center-line points P_{i,j,s}, s = 1, 2, 3, …, 2n+1, through which a straight line l_{h,i,j} is fitted; similarly, within [v0_{i,j}-Δ, v0_{i,j}+Δ] the vertical light bar is scanned across its width every Δ/n, Gaussian fitting yields 2n+1 center-line points Q_{i,j,t}, t = 1, 2, 3, …, 2n+1, and a straight line l_{v,i,j} is fitted; finally, the intersection of the two fitted lines within the same search range is taken as the calibration feature point A_{i,j}, with coordinates (u_{i,j}, v_{i,j});
step 3: obtain rough coordinates of the principal point; the left camera (4a) or the right camera (4b) captures the same projected Gaussian grid pattern (6) at two different focal lengths; the image coordinates of feature point A_{i,j} in the two images are (u1_{i,j}, v1_{i,j}) and (u2_{i,j}, v2_{i,j}) respectively, and the principal point coordinates are (u_0, v_0); then:

$$\frac{u2_{i,j} - u_0}{u1_{i,j} - u_0} = \frac{v2_{i,j} - v_0}{v1_{i,j} - v_0} \qquad (2)$$

the rough position coordinates (u_0, v_0) of the principal point are obtained from the above equation;
step 4: solve the distortion coefficients and optimize the principal point coordinates; from the distortion model, the relationship between the actually captured intersection coordinates p_{i,j} = (u_{i,j}, v_{i,j}, 1)^T and the ideal intersection coordinates q_{i,j} = (u'_{i,j}, v'_{i,j}, 1)^T is:

$$\begin{pmatrix} u'_{i,j} \\ v'_{i,j} \end{pmatrix} = \begin{pmatrix} u_{i,j} \\ v_{i,j} \end{pmatrix} + \begin{pmatrix} \tilde{u}_{i,j}(k_1 r_{i,j}^2 + k_2 r_{i,j}^4) + 2p_1 \tilde{u}_{i,j}\tilde{v}_{i,j} + p_2(r_{i,j}^2 + 2\tilde{u}_{i,j}^2) \\ \tilde{v}_{i,j}(k_1 r_{i,j}^2 + k_2 r_{i,j}^4) + p_1(r_{i,j}^2 + 2\tilde{v}_{i,j}^2) + 2p_2 \tilde{u}_{i,j}\tilde{v}_{i,j} \end{pmatrix} \qquad (3)$$

where $\tilde{u}_{i,j} = u_{i,j} - u_0$, $\tilde{v}_{i,j} = v_{i,j} - v_0$, $r_{i,j} = \sqrt{\tilde{u}_{i,j}^2 + \tilde{v}_{i,j}^2}$; k_1 and k_2 are the radial distortion coefficients and p_1 and p_2 are the tangential distortion coefficients;
in addition, taking the grid pattern with the same number of horizontal and vertical bars as an example, the total number of intersections is num, and then there is one row for each rowThe intersection points, according to the linear fidelity, that is, the property of collinear points on the same light bar, and combining the essential conditions of collinear three points, can list the optimization objective function as follows:
min Σ i = 1 n u m ( Σ j = n u m ( i - 1 ) + 1 n u m ( i - 1 ) + n u m - 2 | q i , j + 1 T [ q i , j ] × q i , j + 2 | ) - - - ( 4 )
wherein,is three points Ai,j,Ai,j+1,Ai,j+2A sufficient condition for co-linearity; [ q ] ofi,j]×Represents qi,jThe antisymmetric matrix of (a), namely:
[ q i , j ] × = 0 - 1 v ′ i , j 1 0 - u ′ i , j - v ′ i , j u ′ i , j 0 - - - ( 5 )
the Levenberg-Marquardt nonlinear optimization algorithm is used to minimize the objective function of equation (4), yielding the distortion coefficients k_1, k_2, p_1 and p_2 and the optimized principal point coordinates (u'_0, v'_0); all intersection coordinates are then corrected to ideal coordinates with equation (3);
step 5: obtain the remaining internal parameters of the camera; taking the corrected ideal intersections as feature points and adopting an active vision approach, the left four-dimensional electrically controlled platform (2a) drives the left camera (4a) through two groups of orthogonal motions; an image of the projected Gaussian grid pattern (6) is captured at each of the three start and end positions of each group, so the left camera (4a) captures 6 images in total;
parallel straight lines meet the plane at infinity at the same point at infinity, whose image is the vanishing point; a group of orthogonal motions comprises two translations: the lines connecting corresponding intersections in the two images captured at the start and end of one translation form a set of spatially parallel lines, and the two translations are mutually perpendicular, so each group yields a pair of orthogonal vanishing points e_{i1}(u_{i1}, v_{i1}) and e_{i2}(u_{i2}, v_{i2}), where the index i = 1, 2 denotes the order of the orthogonal motions and Oe_{i1} · Oe_{i2} = 0, with O = (u'_0, v'_0) the principal point of the camera; using the two orthogonal vanishing-point pairs, the scale factors α_x and α_y in the intrinsic parameter matrix K of the left camera (4a) are obtained by solving the following system of equations:

$$\begin{cases} (u_{11} - u'_0)(u_{12} - u'_0)/\alpha_x^2 + (v_{11} - v'_0)(v_{12} - v'_0)/\alpha_y^2 + 1 = 0 \\ (u_{21} - u'_0)(u_{22} - u'_0)/\alpha_x^2 + (v_{21} - v'_0)(v_{22} - v'_0)/\alpha_y^2 + 1 = 0 \end{cases} \qquad (6)$$
likewise, the scale factors α_x and α_y in the intrinsic parameter matrix K of the right camera (4b) can be obtained;
step 6: acquire the external parameters of the camera; the left camera (4a) and the right camera (4b) capture the same projected Gaussian grid pattern (6), a world coordinate system is established on the camera coordinate system of the left camera (4a), and the fundamental matrix F is computed from the corrected matching points of the images captured by the left and right cameras (4a, 4b); the essential matrix E is computed from the solved internal parameters and the fundamental matrix up to a scale factor s; decomposing E determines the external parameters (rotation matrix R' and translation vector t') up to the scale factor;
projecting parallel Gaussian light bars to actual length L by using projector (3)0On the accurately measured gauge block, the central line of sub-pixel light bar is fitted by using the Gaussian characteristic of the light bar, the point with abrupt gray scale change is used as the boundary point of the gauge block, and the length L 'of the gauge block is reconstructed according to the obtained internal and external parameters'0The scale factor can be obtained as: s ═ L0/L'0(ii) a Therefore, the actual external parameters of the camera (the rotation matrix R 'and the translation vector t s t') can be obtained; thus, the calibration process of the camera is completed.
CN201310482789.4A 2013-10-16 2013-10-16 Camera calibration method based on projected Gaussian grid pattern Active CN103530880B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310482789.4A CN103530880B (en) 2013-10-16 2013-10-16 Camera calibration method based on projected Gaussian grid pattern


Publications (2)

Publication Number Publication Date
CN103530880A CN103530880A (en) 2014-01-22
CN103530880B true CN103530880B (en) 2016-04-06

Family

ID=49932859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310482789.4A Active CN103530880B (en) 2013-10-16 2013-10-16 Camera calibration method based on projected Gaussian grid pattern

Country Status (1)

Country Link
CN (1) CN103530880B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104167001B (en) * 2014-08-27 2017-02-15 大连理工大学 Large-visual-field camera calibration method based on orthogonal compensation
CN104156974A (en) * 2014-09-05 2014-11-19 大连理工大学 Camera distortion calibration method on basis of multiple constraints
CN105758337B (en) * 2014-12-19 2018-09-04 宁波舜宇光电信息有限公司 A method of obtaining angle between lens plane and image sensor plane
CN104777327B (en) * 2015-03-17 2018-03-20 河海大学 Time-space image velocity-measuring system and method based on laser assisted demarcation
CN104820973B (en) * 2015-05-07 2017-10-03 河海大学 The method for correcting image of distortion curve radian detection template
CN104933717B (en) * 2015-06-17 2017-08-11 合肥工业大学 The camera interior and exterior parameter automatic calibration method of target is demarcated based on directionality
CN105716539B (en) * 2016-01-26 2017-11-07 大连理工大学 A kind of three-dimentioned shape measurement method of quick high accuracy
CN107464263A (en) * 2016-06-02 2017-12-12 维森软件技术(上海)有限公司 Automobile calibration system and its scaling method
CN107464218A (en) * 2016-06-02 2017-12-12 维森软件技术(上海)有限公司 Automobile calibration system and its scaling method
CN107580203B (en) * 2017-07-18 2019-01-15 长春理工大学 Immersion active stereo projective perspective transformation matrix solving method
CN108198219B (en) * 2017-11-21 2022-05-13 合肥工业大学 Error compensation method for camera calibration parameters for photogrammetry
CN108805936B (en) * 2018-05-24 2021-03-26 北京地平线机器人技术研发有限公司 Camera external parameter calibration method and device and electronic equipment
CN109993799B (en) * 2019-03-08 2023-03-24 贵州电网有限责任公司 Ultraviolet camera calibration method and calibration device
JP7243510B2 (en) * 2019-07-29 2023-03-22 セイコーエプソン株式会社 Projector control method and projector
CN110415299B (en) * 2019-08-02 2023-02-24 山东大学 Vehicle position estimation method based on set guideboard under motion constraint
CN112427487A (en) * 2019-08-26 2021-03-02 北京机电研究所有限公司 Device for measuring size of thermal state free forging by utilizing optical image and display grid
CN111579220B (en) * 2020-05-29 2023-02-10 江苏迪盛智能科技有限公司 Resolution ratio board
CN111968183B (en) * 2020-08-17 2022-04-05 西安交通大学 Gauge block calibration method for calibrating monocular line laser three-dimensional measurement module
CN114705266A (en) * 2022-04-02 2022-07-05 北京智科车联科技有限公司 Oil tank oil quantity detection method and device, oil tank, T-box and vehicle

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8248476B2 (en) * 2008-09-03 2012-08-21 University Of South Carolina Robust stereo calibration system and method for accurate digital image correlation measurements

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5159361A (en) * 1989-03-09 1992-10-27 Par Technology Corporation Method and apparatus for obtaining the topography of an object
US7232990B2 (en) * 2004-06-30 2007-06-19 Siemens Medical Solutions Usa, Inc. Peak detection calibration for gamma camera using non-uniform pinhole aperture grid mask
CN101776437A (en) * 2009-09-30 2010-07-14 江南大学 Calibration technology for vision sub-pixel of embedded type machine with optical path adjustment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Jie et al., "Camera calibration method based on a stereo target," Journal of Southeast University (Natural Science Edition), vol. 41, no. 3, May 2011, pp. 543-548 *

Also Published As

Publication number Publication date
CN103530880A (en) 2014-01-22

Similar Documents

Publication Publication Date Title
CN103530880B Camera calibration method based on projected Gaussian grid pattern
Chen et al. High-accuracy multi-camera reconstruction enhanced by adaptive point cloud correction algorithm
CN102376089B (en) Target correction method and system
CN102034238B (en) Multi-camera system calibrating method based on optical imaging probe and visual graph structure
CN104266608B (en) Field calibration device for visual sensor and calibration method
CN104331896A (en) System calibration method based on depth information
CN103278138A (en) Method for measuring three-dimensional position and posture of thin component with complex structure
CN104596439A (en) Speckle matching and three-dimensional measuring method based on phase information aiding
CN109712232B (en) Object surface contour three-dimensional imaging method based on light field
CN109579695B (en) Part measuring method based on heterogeneous stereoscopic vision
CN102155923A (en) Splicing measuring method and system based on three-dimensional target
CN105716542A (en) Method for three-dimensional data registration based on flexible feature points
CN103106661B (en) Two, space intersecting straight lines linear solution parabolic catadioptric camera intrinsic parameter
CN109272555B (en) External parameter obtaining and calibrating method for RGB-D camera
CN105931222A (en) High-precision camera calibration method via low-precision 2D planar target
CN104677277B (en) A kind of method and system for measuring object geometric attribute or distance
CN115457147A (en) Camera calibration method, electronic device and storage medium
CN104463969B (en) A kind of method for building up of the model of geographical photo to aviation tilt
CN105139411A (en) Large visual field camera calibration method based on four sets of collinear constraint calibration rulers
CN105631844A (en) Image camera calibration method
CN114998448B (en) Multi-constraint binocular fisheye camera calibration and space point positioning method
CN102693543A (en) Method for automatically calibrating Pan-Tilt-Zoom in outdoor environments
CN104807405A (en) Three-dimensional coordinate measurement method based on light ray angle calibration
CN115082538A (en) System and method for three-dimensional reconstruction of surface of multi-view vision balance ring part based on line structure light projection
CN104123726B (en) Heavy forging measuring system scaling method based on vanishing point

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant