CN113920205A - Calibration method of non-coaxial camera - Google Patents
Calibration method of non-coaxial camera
- Publication number: CN113920205A (application CN202111526560.7A)
- Authority: CN (China)
- Legal status: Granted
Classifications
- G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/10004: Still image; Photographic image
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
Abstract
A calibration method for a non-coaxial camera comprises the following steps: acquiring a calibration plate image shot by the non-coaxial camera; extracting feature points in the calibration plate image together with their image coordinates and world coordinates; calculating a homography matrix; decomposing the homography matrix according to a preset conversion model from world coordinates to image coordinates to obtain the internal and external parameters of the non-coaxial camera, wherein the internal parameters include a tilt matrix representing the transformation from the tilted image plane coordinate system to the non-tilted image plane coordinate system, the tilted image plane being the image plane perpendicular to the optical axis of the lens and the non-tilted image plane being the actual image plane of the non-coaxial camera; and carrying out nonlinear optimization on the distortion coefficients together with the decomposed internal and external parameters to obtain the final internal parameters, external parameters and distortion coefficients. Because the tilted and non-tilted image planes are introduced and a tilt matrix is added to describe the transformation between their coordinate systems, the calibration accuracy of the non-coaxial camera is effectively improved.
Description
Technical Field
The invention relates to the technical field of camera calibration, in particular to a calibration method of a non-coaxial camera.
Background
In image measurement and machine vision applications, determining the three-dimensional geometric position of a point on the surface of an object in space requires a geometric model of camera imaging, that is, the correspondence between a three-dimensional point on the object surface and its projection in the image. Once this model is known, the three-dimensional space coordinates corresponding to image coordinates in a photograph taken by the camera can be inferred. The parameters of this geometric model are the camera parameters, and the process of determining them is called camera calibration. Calibration is a critical step: the accuracy of the calibration result and the stability of the calibration algorithm directly affect the accuracy of everything the camera subsequently produces, so good calibration is a prerequisite for all follow-up work. Calibration is usually carried out with a calibration plate, which is widely used in machine vision, image measurement, photogrammetry, three-dimensional reconstruction, and similar fields. The camera photographs a calibration plate carrying a pattern array with fixed spacing, and the geometric imaging model is obtained through the calculations of a calibration algorithm, yielding high-precision measurement and reconstruction results.
At present, camera calibration usually uses a calibration plate with a checkerboard pattern or a solid circular array pattern. A checkerboard calibration plate provides feature points by locating checkerboard corner points, while a circular array calibration plate provides them by locating dot centers; once the coordinates of the feature points and their correspondence with world coordinates are determined, the subsequent calibration work can proceed.
Disclosure of Invention
The application provides a calibration method of a non-coaxial camera, which can be used for calibrating the non-coaxial camera.
According to a first aspect, an embodiment provides a calibration method for a non-coaxial camera, where the non-coaxial camera includes an image plane and a lens, and a normal vector of the image plane and an optical axis of the lens are not coaxial, the calibration method including:
acquiring a calibration plate image shot by a non-coaxial camera;
acquiring feature points in the calibration plate image, and image coordinates and corresponding world coordinates of the feature points;
calculating a homography matrix H according to the image coordinates of the feature points and the corresponding world coordinates;
decomposing the homography matrix according to a preset conversion model from world coordinates to image coordinates to obtain the internal and external parameters of the non-coaxial camera, wherein the internal parameters include a tilt matrix H_tilt representing the transformation from the tilted image plane coordinate system to the non-tilted image plane coordinate system, the tilted image plane being the image plane perpendicular to the optical axis of the lens and the non-tilted image plane being the image plane of the non-coaxial camera;
and carrying out nonlinear optimization on the distortion coefficient of the non-coaxial camera and the internal parameter and the external parameter obtained by decomposition to obtain the final internal parameter, external parameter and distortion coefficient of the non-coaxial camera.
In one embodiment, the conversion model is:

$$ z_c \begin{pmatrix} c \\ r \\ 1 \end{pmatrix} = \begin{pmatrix} 1/s_x & 0 & c_x \\ 0 & 1/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix} H_{tilt} \begin{pmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{pmatrix} {}^{c}H_{w} \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix} $$

wherein (r, c)^T are the image coordinates of a feature point and (x_w, y_w, z_w)^T its world coordinates; ^cH_w = [R | t] is the transformation matrix from the world coordinate system to the camera coordinate system, with rotation matrix R and translation matrix t; z_c is the z coordinate of the feature point in the camera coordinate system; the matrix diag(f, f, 1) is the transformation from the camera coordinate system to the tilted image plane coordinate system, with f the focal length of the non-coaxial camera; the leftmost matrix K_ics is the transformation from the non-tilted image plane coordinate system to the image coordinate system, with s_x and s_y the pixel sizes in the horizontal and vertical directions of the non-coaxial camera and (c_x, c_y) the principal point. The product A = K_ics · H_tilt · diag(f, f, 1) is the internal parameter part, and [R | t] is the external parameter part.
In one embodiment, the lens of the non-coaxial camera is a non-telecentric lens or an object-side telecentric lens, and the tilt matrix is

$$ H_{tilt} = \begin{pmatrix} q_{11} & q_{12} & 0 \\ q_{21} & q_{22} & 0 \\ q_{31}/d & q_{32}/d & 1 \end{pmatrix} $$

wherein d is the translation distance from the tilted image plane to the non-tilted image plane, and q_11, q_12, q_13, q_21, q_22, q_23, q_31, q_32, q_33 are the elements of the rotation matrix Q, which represents the rotational transformation of the tilted image plane with respect to the original coordinate system:

$$ Q = R_z(\rho)\, R_x(\tau) = \begin{pmatrix} \cos\rho & -\sin\rho\cos\tau & \sin\rho\sin\tau \\ \sin\rho & \cos\rho\cos\tau & -\cos\rho\sin\tau \\ 0 & \sin\tau & \cos\tau \end{pmatrix} $$

wherein ρ denotes the angle of rotation about the Z axis and τ the angle of rotation about the X axis; the X axis of the original coordinate system is the horizontal direction of the non-tilted image plane, the Y axis is its vertical direction, and the Z axis is its normal.
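As an illustration, the rotation matrix Q described above can be built from the two elementary rotations; the sketch below (numpy, with the composition order R_z(ρ)·R_x(τ) taken as an assumption, since the patent only names the two rotation angles) constructs Q and confirms it is a proper rotation:

```python
import numpy as np

def rotation_z(rho):
    """Elementary rotation about the Z axis by angle rho."""
    c, s = np.cos(rho), np.sin(rho)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rotation_x(tau):
    """Elementary rotation about the X axis by angle tau."""
    c, s = np.cos(tau), np.sin(tau)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def tilt_rotation(rho, tau):
    """Rotation of the tilted image plane: rho about Z, tau about X (assumed order)."""
    return rotation_z(rho) @ rotation_x(tau)

Q = tilt_rotation(np.deg2rad(30), np.deg2rad(5))
```

With τ = 0 the plane is not tilted and Q degenerates to an in-plane rotation, which the tilt matrix later absorbs.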
In one embodiment, obtaining the internal and external parameters of the non-coaxial camera by decomposing the homography matrix comprises:

calculating the parameter matrix A (the internal parameter part) according to the following constraint conditions:

$$ h_1^T A^{-T} A^{-1} h_2 = 0, \qquad h_1^T A^{-T} A^{-1} h_1 = h_2^T A^{-T} A^{-1} h_2 $$

wherein h_1, h_2, h_3 are the first, second and third column vectors of the homography matrix H, and r_1, r_2 are the first and second column vectors of the rotation matrix R;

calculating the matrix [r_1 r_2 t] according to r_1 = A^{-1} h_1, r_2 = A^{-1} h_2, t = A^{-1} h_3 (up to the common scale factor λ = 1/||A^{-1} h_1||), and then calculating the tilt matrix H_tilt from A = K_ics · H_tilt · diag(f, f, 1), i.e. H_tilt = K_ics^{-1} A diag(1/f, 1/f, 1), wherein K_ics is the transformation matrix from the non-tilted image plane coordinate system to the image coordinate system.
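The decomposition step can be illustrated on synthetic data; in this sketch the parameter matrix A is an arbitrary invertible stand-in for the internal parameter part, and the pose is recovered from the homography by the A^{-1} relations described in this embodiment:

```python
import numpy as np

# Hypothetical internal parameter part A (any invertible 3x3 serves for the demo).
A = np.array([[800.0, 2.0, 320.0],
              [0.0, 780.0, 240.0],
              [0.0, 0.0, 1.0]])

# Ground-truth pose: rotation about an arbitrary unit axis plus a translation.
angle = 0.4
axis = np.array([0.2, 0.9, 0.4]) / np.linalg.norm([0.2, 0.9, 0.4])
K = np.array([[0, -axis[2], axis[1]],
              [axis[2], 0, -axis[0]],
              [-axis[1], axis[0], 0]])
R = np.cos(angle) * np.eye(3) + np.sin(angle) * K \
    + (1 - np.cos(angle)) * np.outer(axis, axis)
t = np.array([0.1, -0.2, 1.5])

# Homography of the calibration plate plane z_w = 0: H = A [r1 r2 t].
H = A @ np.column_stack([R[:, 0], R[:, 1], t])

# Decomposition: r1 = A^-1 h1, r2 = A^-1 h2, t = A^-1 h3,
# with the scale fixed by ||r1|| = 1.
A_inv = np.linalg.inv(A)
lam = 1.0 / np.linalg.norm(A_inv @ H[:, 0])
r1 = lam * A_inv @ H[:, 0]
r2 = lam * A_inv @ H[:, 1]
t_rec = lam * A_inv @ H[:, 2]
r3 = np.cross(r1, r2)  # third rotation column completes R
```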
In one embodiment, the external parameters of the non-coaxial camera comprise an equivalent rotation axis k and an equivalent rotation angle θ, and obtaining the internal and external parameters by decomposing the homography matrix comprises:

calculating the parameter matrix A according to the following constraint conditions:

$$ h_1^T A^{-T} A^{-1} h_2 = 0, \qquad h_1^T A^{-T} A^{-1} h_1 = h_2^T A^{-T} A^{-1} h_2 $$

calculating the matrix [r_1 r_2 t] according to r_1 = A^{-1} h_1, r_2 = A^{-1} h_2, t = A^{-1} h_3;

wherein h_1, h_2, h_3 are the first, second and third column vectors of the homography matrix H, and r_1, r_2 are the first and second column vectors of the rotation matrix R;

and obtaining the equivalent rotation axis k and the equivalent rotation angle θ from the calculated rotation matrix R, wherein the transformation relationship between R, k and θ is the Rodrigues formula

$$ R(k, \theta) = \cos\theta\, I + (1 - \cos\theta)\, k k^T + \sin\theta\, [k]_\times, \qquad [k]_\times = \begin{pmatrix} 0 & -k_z & k_y \\ k_z & 0 & -k_x \\ -k_y & k_x & 0 \end{pmatrix} $$

wherein k_x, k_y, k_z are the three components of the equivalent rotation axis k.
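A minimal numpy sketch of the axis-angle relationship in both directions (function names are illustrative, and the inverse assumes 0 &lt; θ &lt; π):

```python
import numpy as np

def rodrigues(k, theta):
    """R(k, theta) = cos(theta) I + sin(theta) [k]x + (1 - cos(theta)) k k^T."""
    k = np.asarray(k, dtype=float)
    k = k / np.linalg.norm(k)
    Kx = np.array([[0, -k[2], k[1]],
                   [k[2], 0, -k[0]],
                   [-k[1], k[0], 0]])
    return np.cos(theta) * np.eye(3) + np.sin(theta) * Kx \
        + (1 - np.cos(theta)) * np.outer(k, k)

def axis_angle(R):
    """Recover the equivalent rotation axis k and angle theta from R."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    k = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2 * np.sin(theta))
    return k, theta
```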
In one embodiment, performing nonlinear optimization on the distortion coefficients of the non-coaxial camera and the decomposed internal and external parameters to obtain the final internal parameters, external parameters and distortion coefficients comprises:

presetting initial values of the distortion coefficients, taking the decomposed internal and external parameters as the initial values of the internal and external parameters, and iteratively solving the optimal solution of the following loss function to obtain the final internal parameters, external parameters and distortion coefficients of the non-coaxial camera:

$$ \varepsilon = \sum_{k=1}^{n_c} \sum_{l=1}^{n_o} \sum_{j=1}^{n_m} v_{jkl}\, \big\| \pi^{-1}(p_{jkl}, i_k) - \pi(p_j, c_k \circ b_l, i_k) \big\|^2 $$

wherein n_m is the number of feature points in a calibration plate image, n_c is the number of cameras, and n_o is the number of calibration plate images taken by each camera; p_j is the coordinate of the j-th feature point in the world coordinate system; b_l represents the pose of the l-th calibration plate image in the reference camera; c_k represents the transformation of the k-th camera with respect to the reference camera; i_k represents the parameters of the k-th camera; p_jkl is the image coordinate of the j-th feature point in the l-th calibration plate image taken by the k-th camera; v_jkl takes the value 0 or 1, being 1 if the j-th feature point is visible in the l-th calibration plate image taken by the k-th camera and 0 otherwise. The function π^{-1} represents the transformation from the image coordinate system to the tilted image plane coordinate system, which includes transforming image coordinates to the tilted image plane coordinate system using the internal parameters and undistorting the tilted image plane coordinates using the distortion coefficients. The function π represents the transformation from the world coordinate system to the tilted image plane coordinate system, which includes transforming world coordinates to the camera coordinate system using the external parameters.
In one embodiment, the calibration method of the non-coaxial camera further includes: and before each iteration, distortion correction is carried out on the inclined image plane coordinates of the characteristic points by using the calculated distortion coefficient.
In one embodiment, the optimal solution is solved iteratively according to the formula q_{k+1} = q_k + δ, wherein q_k denotes the vector composed of the internal parameters, external parameters and distortion coefficients of the non-coaxial camera at the k-th iteration, and δ is determined by the formula

$$ (J^T J + \lambda I)\, \delta = -J^T \varepsilon $$

wherein ε is the vector formed by the differences, over all feature points in each calibration plate image taken by each camera at the current iteration, between the tilted image plane coordinates obtained from the image coordinates and those obtained from the world coordinates; J is the Jacobian matrix, composed of the Jacobian matrices of the individual cameras, the Jacobian matrix of the i-th camera being

$$ J_i = \big[\, \partial \varepsilon_i / \partial i_i \;\;\; \partial \varepsilon_i / \partial e_{ij} \,\big] $$

wherein ∂ε_i/∂i_i and ∂ε_i/∂e_{ij} respectively denote the partial derivatives with respect to the internal parameters and the external parameters corresponding to the j-th calibration plate image taken by the i-th camera.
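The iteration can be illustrated on a toy least-squares problem; the sketch below implements the damped update (J^T J + λI)δ = -J^T ε with a finite-difference Jacobian, standing in for the full reprojection loss (helper name and damping schedule are illustrative):

```python
import numpy as np

def lm_fit(residual, q0, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt: q_{k+1} = q_k + delta, where
    (J^T J + lam I) delta = -J^T eps and J is a finite-difference Jacobian."""
    q = np.asarray(q0, dtype=float)
    for _ in range(n_iter):
        eps = residual(q)
        J = np.empty((eps.size, q.size))
        for i in range(q.size):
            dq = np.zeros_like(q)
            dq[i] = 1e-7
            J[:, i] = (residual(q + dq) - eps) / 1e-7
        delta = np.linalg.solve(J.T @ J + lam * np.eye(q.size), -J.T @ eps)
        if np.linalg.norm(residual(q + delta)) < np.linalg.norm(eps):
            q = q + delta   # accept the step and relax the damping
            lam *= 0.5
        else:
            lam *= 10.0     # reject the step and increase the damping
    return q

# Toy problem standing in for the reprojection loss: fit y = exp(a * x).
x = np.linspace(0.0, 1.0, 20)
y = np.exp(1.3 * x)
a_hat = lm_fit(lambda q: np.exp(q[0] * x) - y, np.array([0.0]))
```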
In one embodiment, the calibration plate image is a circular array calibration plate image; acquiring the characteristic points in the calibration board image and the image coordinates of the characteristic points by the following method:
carrying out image processing on the calibration plate image to obtain circular feature points in the calibration plate image;
performing edge extraction on the circular feature points to obtain edge points of the circular feature points, and performing ellipse fitting by using the edge points to obtain image coordinates of the circular feature points, wherein the image coordinates of the circular feature points refer to image coordinates of the circle centers of the circular feature points;
determining the corresponding relation between the image coordinates of the circular feature points and world coordinates;
and carrying out error correction on the image coordinates of the circular feature points by using an ellipse equation to obtain the final image coordinates of the circular feature points.
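The ellipse-fitting part of the steps above can be sketched as follows: a general conic is fitted to the edge points by least squares and the ellipse center is read off from its coefficients. This is a simplified stand-in for the high-precision extraction described here (noiseless synthetic edge points; the function name is illustrative):

```python
import numpy as np

def fit_ellipse_center(xs, ys):
    """Fit the conic a x^2 + b x y + c y^2 + d x + e y + f = 0 to edge points
    (least squares via the SVD null vector) and return the ellipse center."""
    D = np.column_stack([xs**2, xs * ys, ys**2, xs, ys, np.ones_like(xs)])
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, _ = Vt[-1]
    # The center zeroes the conic gradient: [2a b; b 2c] (x0, y0)^T = (-d, -e)^T
    x0, y0 = np.linalg.solve(np.array([[2 * a, b], [b, 2 * c]]),
                             np.array([-d, -e]))
    return x0, y0

# A circle viewed obliquely images as an ellipse; synthesize its edge points.
phi = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
cx, cy, a_len, b_len, ang = 12.0, 7.0, 8.0, 5.0, 0.3
xs = cx + a_len * np.cos(phi) * np.cos(ang) - b_len * np.sin(phi) * np.sin(ang)
ys = cy + a_len * np.cos(phi) * np.sin(ang) + b_len * np.sin(phi) * np.cos(ang)
```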
According to a second aspect, an embodiment provides a computer-readable storage medium having a program stored thereon, the program being executable by a processor to implement the method for calibrating a non-coaxial camera according to the first aspect.
According to the calibration method of the non-coaxial camera and the computer-readable storage medium of the above embodiments, when the conversion model from world coordinates to image coordinates is established for a non-coaxial camera, the fact that the optical axis of the lens is not coaxial with the normal vector of the imaging plane is taken into account by introducing the concepts of the tilted image plane and the non-tilted image plane, the tilted image plane being the image plane perpendicular to the optical axis of the lens and the non-tilted image plane being the actual imaging plane of the non-coaxial camera. The tilt matrix H_tilt is added as a parameter describing the transformation from the tilted image plane coordinate system to the non-tilted image plane coordinate system and is treated as part of the camera's internal parameters. This modification of the mathematical model used in existing methods describes the coordinate system conversions inside the non-coaxial camera well, improves the calibration accuracy of the non-coaxial camera, and reduces the errors produced when the non-coaxial camera is put to work.
Drawings
FIG. 1 is a schematic diagram of an optical configuration of a coaxial camera;
FIG. 2 is a schematic diagram of an optical configuration of a non-coaxial camera;
FIG. 3 is a schematic diagram of the transformation of each coordinate system in the pinhole camera model;
FIG. 4 is a flow diagram of a method for calibrating a non-coaxial camera in one embodiment;
FIG. 5 is a schematic diagram of a tilt transformation;
FIG. 6 is a flowchart of a method for extracting feature point high precision coordinates of a circular array calibration plate according to an embodiment;
FIG. 7 is a flow diagram of image processing of a calibration plate image to obtain circular feature points therein in one embodiment;
FIG. 8 is a flow chart of image processing of a calibration plate image to obtain circular feature points therein in another embodiment;
FIG. 9 is a schematic view of a circular array calibration plate with triangular markers;
FIG. 10 is a schematic view of a circular array calibration plate with hollow dots;
FIG. 11 is a flowchart of determining the correspondence between the image coordinates of the circular feature points and the world coordinates in the circular array calibration plate with the triangular markers;
FIG. 12 is a flowchart of determining the correspondence between the image coordinates and world coordinates of circular feature points in a circular array calibration plate having hollow points;
FIG. 13 is a schematic diagram of the transformation of coordinate systems in a line scan camera model;
FIG. 14 is a flowchart of a calibration method for a line-scan camera in an embodiment.
Detailed Description
The present invention will be described in further detail with reference to the following detailed description and the accompanying drawings, in which like elements in different embodiments share like reference numerals. In the following description, numerous details are set forth to provide a better understanding of the present application. However, those skilled in the art will readily recognize that some of these features may be omitted, or replaced by other elements, materials, or methods, in different instances. In some instances, certain operations related to the present application are not shown or described in detail in order to avoid obscuring the core of the application with excessive description; a detailed account of such operations is unnecessary, as they can be fully understood from the specification together with the general knowledge in the art.
Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments, and the steps or actions in the method descriptions may be reordered or transposed in ways apparent to those skilled in the art. The various sequences in the specification and drawings are therefore only for describing particular embodiments and do not imply a required order unless it is otherwise stated that a certain order must be followed.
The ordinal numbering of components herein, such as "first" and "second", is used only to distinguish the described objects and carries no sequential or technical meaning. Unless otherwise indicated, the terms "connected" and "coupled" in this application include both direct and indirect connections (couplings). The terms "image plane" and "imaging plane" refer to the same concept herein.
In most cameras currently in use, the optical axis of the lens and the normal vector of the imaging plane are coaxial. In the field of machine vision, however, for manufacturing or design reasons the lens of some cameras is not parallel to the imaging plane, that is, the optical axis of the lens is not coaxial with the normal of the imaging plane. If this is ignored and calibration is performed by the traditional method, a non-negligible error is introduced.
The main camera calibration methods currently in use are designed and computed following Zhang Zhengyou's calibration method (Zhang's method), which mainly comprises the following calculation steps:
(1) acquiring a homography matrix according to the corresponding relation between the world coordinates and the image coordinates of the feature points in the calibration plate;
(2) decomposing the homography matrix, and calculating to obtain initial parameters of the internal parameters or the external parameters;
(3) performing nonlinear optimization on the initial parameters with the LM (Levenberg-Marquardt) algorithm, iteratively computing the internal parameters, external parameters and distortion coefficients to obtain the final calibration result.
However, Zhang's method mainly considers the case in which the optical axis of the lens and the normal of the imaging plane are coaxial, and gives no processing method or mathematical model for the non-coaxial case; a calibration method usable for non-coaxial cameras is therefore needed.
In the coaxial case, the optical structure of the camera is shown in FIG. 1, in which the optical axis of the image plane (i.e., the normal vector of the image plane) coincides with the optical axis of the lens. In the non-coaxial case, the optical structure is shown in FIG. 2, in which the optical axis of the image plane and the optical axis of the lens do not coincide but form a certain angle θ.
In the coaxial case, the projective transformation relationships between the coordinate systems during camera imaging can be represented by the pinhole camera model shown in FIG. 3. A point P_w in the world coordinate system (WCS) is projected through the projection center of the lens onto a point P on the imaging plane. To obtain the image coordinates q_i of P_w projected onto the imaging plane, P_w must first be converted into the camera coordinate system (CCS). The x and y axes of the camera coordinate system are parallel to the c and r axes of the image, respectively, and the z axis is perpendicular to the imaging plane, arranged so that all points in front of the camera have positive z coordinates; here the c axis of the image is its horizontal direction and the r axis its vertical direction. In FIG. 3, the x_c, y_c and z_c axes denote the x, y and z axes of the camera coordinate system. The transformation from the world coordinate system to the camera coordinate system can be expressed by the formula

$$ P_c = {}^{c}H_{w} P_w $$

wherein P_c = (x_c, y_c, z_c)^T are the coordinates in the camera coordinate system, P_w = (x_w, y_w, z_w)^T are the coordinates in the world coordinate system, and ^cH_w can be expressed by a rotation matrix R and a translation matrix t, i.e. P_c = R P_w + t.
After the world coordinate system has been converted into the camera coordinate system, the camera coordinates must be converted into the image plane coordinate system, a conversion of 3D coordinates into 2D coordinates. For non-telecentric lenses, such as CCTV (Closed Circuit Television) lenses, this transformation can be expressed as:

$$ u = f\,\frac{x_c}{z_c}, \qquad v = f\,\frac{y_c}{z_c} $$

wherein f denotes the focal length of the camera lens and (u, v)^T the coordinates in the image plane coordinate system.
For a telecentric lens, this transformation can be expressed as:

$$ u = m\, x_c, \qquad v = m\, y_c $$

wherein m denotes the magnification of the lens.
Lens distortion after projection onto the imaging plane changes the coordinates q_c = (u, v)^T, so that distorted coordinates (ũ, ṽ)^T are formed on the imaging plane. This change can be modeled on the imaging plane alone, that is, no three-dimensional information is required here. For most lenses, the distortion can be sufficiently approximated as radial distortion, and two models are commonly used to describe it: a division model and a polynomial model. The division model is:

$$ \tilde{u} = \frac{2u}{1 + \sqrt{1 - 4\kappa(u^2 + v^2)}}, \qquad \tilde{v} = \frac{2v}{1 + \sqrt{1 - 4\kappa(u^2 + v^2)}} $$

wherein the parameter κ indicates the magnitude of the radial distortion: if κ is negative the distortion is barrel-shaped, and if κ is positive it is pincushion-shaped. The distortion can be corrected by:

$$ u = \frac{\tilde{u}}{1 + \kappa(\tilde{u}^2 + \tilde{v}^2)}, \qquad v = \frac{\tilde{v}}{1 + \kappa(\tilde{u}^2 + \tilde{v}^2)} $$
the polynomial model is as follows:
wherein,k 1、k 2、k 3、p 1、p 2Are model coefficients. According to the model, the solution can be obtained by using a Newton methodu、vThe initial value of iteration is the undistorted initial value itself.
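A sketch of the polynomial model and its inversion by Newton's method, using the distorted coordinates as the initial iterate (coefficient values are arbitrary, and the Jacobian is formed by finite differences for brevity):

```python
import numpy as np

def distort(u, v, k1, k2, k3, p1, p2):
    """Polynomial model: radial terms (k1, k2, k3) plus tangential terms (p1, p2)."""
    r2 = u * u + v * v
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    ud = u * radial + 2 * p1 * u * v + p2 * (r2 + 2 * u * u)
    vd = v * radial + p1 * (r2 + 2 * v * v) + 2 * p2 * u * v
    return ud, vd

def undistort(ud, vd, coeffs, n_iter=20):
    """Invert the model by Newton iteration, starting from the distorted point."""
    u, v = ud, vd
    h = 1e-7
    for _ in range(n_iter):
        fu, fv = distort(u, v, *coeffs)
        ru, rv = fu - ud, fv - vd  # residual of the forward model
        fu_u, fv_u = distort(u + h, v, *coeffs)
        fu_v, fv_v = distort(u, v + h, *coeffs)
        J = np.array([[(fu_u - fu) / h, (fu_v - fu) / h],
                      [(fv_u - fv) / h, (fv_v - fv) / h]])
        du, dv = np.linalg.solve(J, [-ru, -rv])
        u, v = u + du, v + dv
    return u, v
```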
Finally, the image plane coordinate system is converted into the image coordinate system (ICS), expressed by the formula:

$$ c = \frac{u}{s_x} + c_x, \qquad r = \frac{v}{s_y} + c_y $$

wherein s_x and s_y are the pixel sizes in the horizontal and vertical directions of the camera, and (c_x, c_y) is the principal point, typically near the center of the image.
The entire transformation described above can therefore be expressed as:

$$ z_c \begin{pmatrix} c \\ r \\ 1 \end{pmatrix} = \begin{pmatrix} 1/s_x & 0 & c_x \\ 0 & 1/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{pmatrix} \big[\, R \;\big|\; t \,\big] \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix} $$

This is the mathematical model on which conventional camera calibration is based.
In a non-coaxial camera, the optical axis of the image plane and the optical axis of the lens do not coincide but form an angle θ, so the above model does not apply to the conversion from the camera coordinate system to the image coordinate system. For this reason, the present application introduces the concept of a tilted image plane, which is the image plane perpendicular to the optical axis of the lens of the non-coaxial camera, and a non-tilted image plane, which is the actual image plane of the non-coaxial camera. In converting the camera coordinate system to the image coordinate system, the camera coordinate system is first converted to the tilted image plane coordinate system, the tilted image plane coordinate system is then converted to the non-tilted image plane coordinate system, and finally the non-tilted image plane coordinate system is converted to the image coordinate system. The tilt matrix H_tilt is used for the transformation from the tilted image plane coordinate system to the non-tilted image plane coordinate system, so that, distortion aside, the whole transformation can be expressed as:

$$ z_c \begin{pmatrix} c \\ r \\ 1 \end{pmatrix} = \begin{pmatrix} 1/s_x & 0 & c_x \\ 0 & 1/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix} H_{tilt} \begin{pmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{pmatrix} \big[\, R \;\big|\; t \,\big] \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix} $$
on the basis of the above conversion model, the present application provides a calibration method for a non-coaxial camera, please refer to fig. 4, in which the method includes steps 110 to 150, which will be described in detail below.
Step 110: and acquiring a calibration plate image shot by the non-coaxial camera.
The calibration plate can be a checkerboard calibration plate, a circular array calibration plate, etc. During calibration, the non-coaxial cameras can be placed in a plurality of poses (i.e. positions and angles of the non-coaxial cameras relative to the calibration plate) according to experience, and the calibration plate is shot when the non-coaxial cameras are in each pose, so that a plurality of different calibration plate images are obtained for calibration.
Step 120: and acquiring the characteristic points in the calibration plate image, and the image coordinates and the corresponding world coordinates of the characteristic points.
For the checkerboard calibration plate, the feature points are the corner points of the checkerboard; for the circular array calibration plate, the feature points are the centers of the circular feature points in the circular array, the circular feature points being the circular patterns on the plate.
The world coordinate system can be constructed from the parameter information of the calibration plate to obtain the world coordinates corresponding to the feature points, the parameter information including the size of the calibration plate, the size of the checkerboard squares, the radius of the circular feature points, the spacing between feature points, and so on. For the circular array calibration plate, the present application provides a method for extracting high-precision coordinates of the feature points: the image coordinates of the feature points are error-corrected using an ellipse equation, which effectively improves their precision and thereby the precision of the camera calibration. This method is elaborated in detail hereinafter.
Step 130: calculating a homography matrix H according to the image coordinates of the feature points and the corresponding world coordinates. The homography matrix H can be calculated from the image coordinates of a number of feature points and their corresponding world coordinates.
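Step 130 is commonly carried out with the direct linear transform (DLT); the sketch below recovers a known homography from synthetic plane-point correspondences (the helper name is illustrative, and in practice coordinate normalization improves conditioning):

```python
import numpy as np

def estimate_homography(world_xy, image_xy):
    """DLT: solve for H (up to scale) from four or more plane-point pairs."""
    rows = []
    for (X, Y), (x, y) in zip(world_xy, image_xy):
        rows.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y, -x])
        rows.append([0, 0, 0, X, Y, 1, -y * X, -y * Y, -y])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)     # right null vector gives vec(H)
    return H / H[2, 2]           # fix the projective scale

# Synthetic check: map plate points with a known homography, then recover it.
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
world = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1), (1, 2)]
image = []
for X, Y in world:
    w = H_true @ np.array([X, Y, 1.0])
    image.append((w[0] / w[2], w[1] / w[2]))
H_est = estimate_homography(world, image)
```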
Step 140: according to a preset conversion model from world coordinates to image coordinates, corresponding to the homography matrixHPerforming decomposition calculation to obtain internal reference and external reference of the non-coaxial camera, wherein the internal reference comprises a tilt matrixH tilt 。
It will be appreciated that the homography matrixAnd thus to the homography matrixHAnd decomposing to obtain the internal reference and the external reference of the non-coaxial camera. Will tilt the matrixH tilt Considered as part of the internal reference, the internal reference part of the non-coaxial camera isThe external ginseng part is。
Referring to FIG. 5, the present application introduces three parameters to represent the tilt matrix H_tilt, namely the image plane distance d and the rotation angles τ and ρ: the tilted image plane can be regarded as translated a distance d with respect to the non-tilted image plane (i.e., the image plane of the non-coaxial camera), rotated about the X axis of the original coordinate system by τ, and rotated about the Z axis by ρ, the X axis of the original coordinate system being the horizontal direction of the non-tilted image plane, the Y axis its vertical direction, and the Z axis its normal. The rotational transformation of the tilted image plane with respect to the original coordinate system is represented by the rotation matrix Q, which can be calculated from the geometric transformation relationship as

$$ Q = R_z(\rho)\, R_x(\tau) = \begin{pmatrix} \cos\rho & -\sin\rho\cos\tau & \sin\rho\sin\tau \\ \sin\rho & \cos\rho\cos\tau & -\cos\rho\sin\tau \\ 0 & \sin\tau & \cos\tau \end{pmatrix} $$
Then, according to the geometric transformation relationship, for non-telecentric lenses such as CCTV lenses and for object-side telecentric lenses the tilt matrix H_tilt is

$$ H_{tilt} = \begin{pmatrix} q_{11} & q_{12} & 0 \\ q_{21} & q_{22} & 0 \\ q_{31}/d & q_{32}/d & 1 \end{pmatrix} $$
For image-side telecentric lenses and bilateral telecentric lenses the tilt matrix H_tilt is

$$ H_{tilt} = \begin{pmatrix} q_{11} & q_{12} & 0 \\ q_{21} & q_{22} & 0 \\ 0 & 0 & 1 \end{pmatrix} $$
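Both tilt-matrix variants can be sketched in one helper. The matrix forms used here assume a central projection of the tilted plane onto the non-tilted plane for non-telecentric and object-side telecentric lenses, H_tilt = [[q11, q12, 0], [q21, q22, 0], [q31/d, q32/d, 1]], and a parallel projection for image-side and bilateral telecentric lenses; treat these forms as an assumption rather than the patent's verbatim formula:

```python
import numpy as np

def tilt_homography(Q, d, telecentric_image_side=False):
    """Tilt matrix mapping tilted-plane to non-tilted-plane coordinates."""
    H = np.array([[Q[0, 0], Q[0, 1], 0.0],
                  [Q[1, 0], Q[1, 1], 0.0],
                  [0.0, 0.0, 1.0]])
    if not telecentric_image_side:
        # Central projection: the image-plane distance d enters the last row.
        H[2, 0] = Q[2, 0] / d
        H[2, 1] = Q[2, 1] / d
    return H
```

With no tilt (Q = I) both variants reduce to the identity, as expected.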
When the homography matrix H is decomposed to calculate the internal and external parameters of the non-coaxial camera, H can be written as

$$ H = A\, [\, r_1 \;\; r_2 \;\; t \,] $$

wherein h_1, h_2, h_3 are the first, second and third column vectors of the homography matrix H, and r_1, r_2 are the first and second column vectors of the rotation matrix R. The parameter matrix A can be calculated according to the following constraint conditions:

$$ h_1^T A^{-T} A^{-1} h_2 = 0, \qquad h_1^T A^{-T} A^{-1} h_1 = h_2^T A^{-T} A^{-1} h_2 $$

Then the matrix [r_1 r_2 t] is obtained according to r_1 = A^{-1} h_1, r_2 = A^{-1} h_2, t = A^{-1} h_3, which gives the external parameter part. In the internal parameter part, the focal length f, the pixel sizes s_x and s_y, and the principal point (c_x, c_y) can be known in advance, and the tilt matrix can then be calculated from A = K_ics · H_tilt · diag(f, f, 1) as H_tilt = K_ics^{-1} A diag(1/f, 1/f, 1).
In one embodiment, the rotation matrix R can be represented by an equivalent rotation axis k and an equivalent rotation angle θ, which can then be regarded as part of the extrinsic parameters. The transformation relationship between the rotation matrix R, the equivalent rotation axis k and the equivalent rotation angle θ is as follows:

k_x, k_y and k_z are the three components of the equivalent rotation axis k.

After the matrix [r_1 r_2 t] is obtained from r_1 = A^(-1) h_1, r_2 = A^(-1) h_2 and t = A^(-1) h_3, the rotation matrix R is known, and the equivalent rotation axis k and the equivalent rotation angle θ can be calculated from the above formula.
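The conversion between the rotation matrix R and the equivalent axis–angle pair (k, θ) is the standard Rodrigues relation; a sketch:

```python
import numpy as np

def R_from_axis_angle(k, theta):
    """Rodrigues formula: R = I + sin(theta)*K + (1 - cos(theta))*K@K, K = [k]_x."""
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def axis_angle_from_R(R):
    """Equivalent rotation axis k and angle theta (0 < theta < pi assumed)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    k = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return k, theta
```

The degenerate cases θ = 0 and θ = π need special handling in a full implementation; they are left out of this sketch.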
Step 150: and carrying out nonlinear optimization on the distortion coefficient of the non-coaxial camera and the internal parameter and the external parameter obtained by decomposition to obtain the final internal parameter, external parameter and distortion coefficient of the non-coaxial camera.
In this step, the distortion coefficients of the non-coaxial camera and the intrinsic and extrinsic parameters obtained by decomposition are optimized nonlinearly through a set loss function. The initial values of the intrinsic and extrinsic parameters in the iterative process can be those obtained by decomposition, and the initial values of the distortion coefficients can be preset empirically. Since distortion occurs when points are projected through the lens onto the tilted image plane, and distortion is a nonlinear change, the loss function can be established on the tilted image plane, dividing the whole transformation into two parts: one from the image coordinate system to the tilted image plane coordinate system, and one from the world coordinate system to the tilted image plane. The closer the two results are, the better the calibration result, so the loss function can be constructed as follows:
the loss function can adapt to the calibration of a plurality of cameras, wherein one camera is determined as a reference camera, and other cameras can be transformed into a coordinate system of the reference camera to be uniformly calculated. Wherein the content of the first and second substances,n m to scale the number of feature points in the plate image,n c in order to be the number of cameras,n 0the number of calibration plate images taken for the camera,p j is the coordinate of the characteristic point in the world coordinate system,representing the appearance of the calibration plate image in the reference camera,is shown askThe transformation of the camera with respect to the reference camera,i k indicates that the transformation is inkThe transformation under the camera is carried out,p jkl is as followskThe first shot by the cameralFirst in the calibration plate imagejThe image coordinates of the individual feature points,v jkl take a value of 0 or 1 whenjA characteristic point iskThe first shot by the cameralThe number of the calibration plate images is 1 when visible, and 0 otherwise. Function(s)Representing a transformation from an image coordinate system to an inclined image plane coordinate systemFrom the above, this includes the process of transforming the image coordinate system to the non-tilted image plane coordinate system by using the internal reference, transforming the non-tilted image plane coordinate system to the tilted image plane coordinate system by using the non-tilted image plane coordinate system, and performing the inverse distortion on the tilted image plane coordinate by using the distortion coefficient. 
The second function represents the transformation from the world coordinate system to the tilted image plane coordinate system; as described above, this includes converting the world coordinate system to the camera coordinate system using the extrinsic parameters, and converting the camera coordinate system to the tilted image plane coordinate system.
Iterative computation can be performed using the LM (Levenberg–Marquardt) algorithm, and the parameter update in the iterative process can be expressed as q_(k+1) = q_k + δ, where q_k is the vector composed of the intrinsic parameters, extrinsic parameters and distortion coefficients of the non-coaxial camera at the k-th iteration, and δ is determined by the formula (J^T J + μI) δ = −J^T ε, where ε is the vector formed by the differences, at the current iteration, between the two transformation results for all feature points in each calibration plate image taken by each camera, and J is the Jacobian matrix, composed of the Jacobian matrices of the individual cameras; the Jacobian matrix of the i-th camera is
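A toy sketch of the LM update described here, with a generic residual and Jacobian; the μ scheduling (halve on accepted steps, increase tenfold on rejected ones) is a common heuristic and not taken from the patent:

```python
import numpy as np

def lm_optimize(q0, residual_fn, jacobian_fn, mu=1e-3, iters=50):
    """Levenberg-Marquardt: delta solves (J^T J + mu*I) delta = -J^T eps."""
    q = np.asarray(q0, dtype=float)
    for _ in range(iters):
        eps = residual_fn(q)
        J = jacobian_fn(q)
        delta = -np.linalg.solve(J.T @ J + mu * np.eye(q.size), J.T @ eps)
        q_new = q + delta
        if np.sum(residual_fn(q_new) ** 2) < np.sum(eps ** 2):
            q, mu = q_new, mu * 0.5   # accept: behave more like Gauss-Newton
        else:
            mu *= 10.0                # reject: behave more like gradient descent
    return q
```

In the calibration setting, q would stack the intrinsic parameters, extrinsic parameters and distortion coefficients, and the residual would be the per-feature-point difference between the two tilted-image-plane transformations.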
where the two partial-derivative terms respectively denote the partial derivatives with respect to the intrinsic and extrinsic parameters corresponding to the j-th calibration plate image taken by the i-th camera. The solution of these partial derivatives is explained below.
For the extrinsic part, the rotation matrix R is represented by the equivalent rotation axis and the equivalent rotation angle. Denote the rotation vector as [r_x, r_y, r_z]^T; then the equivalent rotation angle is θ, the unit rotation vector is [l_x, l_y, l_z]^T, and the rotation matrix R can be expressed as:
a_0 = −sinθ · l_i, a_1 = [sinθ − 2(1 − cosθ)θ′] · l_i,

a_2 = 2(1 − cosθ)θ′, a_3 = [cosθ − θ′ sinθ] · l_i, a_4 = θ′ sinθ,

where l_i = l_x when i = 0, l_i = l_y when i = 1, and l_i = l_z when i = 2.
Define the following vectors:
dr 0=[2l x ,l y ,l z ,l y ,0,0,l z ,0,0],dr 1=[0,l x ,0,l x ,2l y ,l z ,0,l z ,0],dr 2=[0,0,l x ,0,0,l y ,l x ,l y ,2l z ],q x =[0,-r z ,r y ,r z ,0,-r x ,- r y ,r x ,0],
dq 0=[0,0,0,0,0,-1,0,1,0],dq 1=[0,0,1,0,0,0,-1,0,0],dq 2=[0,-1,0,1,0,0,0,0,0],
Then the partial derivatives with respect to the extrinsic parameters of the i-th camera can be expressed as:
As for the intrinsic part, as can be seen from equation (2), the solution can be divided into three parts. The first part is the transformation of equation (3) from the image coordinate system to the non-tilted image plane coordinate system; differentiating it gives
As can be seen from equation (1), without considering distortion,

u = (c − c_x) s_x, v = (r − c_y) s_y,

Thus the partial derivatives can be found:
The second part is the tilt matrix H_tilt, which likewise needs to be differentiated; the partial derivatives of H_tilt can be obtained from the partial derivatives of the rotation matrix Q. For the rotation matrix Q, if it is computed directly from τ and ρ, then for reasons of rotational-angle ambiguity τ and ρ are not unique, and two admissible solutions may occur. To eliminate the ambiguity, a constraint is added here:
order tot 2=tan2()=S 2+C 2,c 2=2cos2()==. Then the rotation matrixQThe respective parameters in (a) may be expressed as:
Thus one can obtain

According to the formula

the partial derivatives of t_2 and c_2 with respect to S and C can be found; substituting them into the rotation matrix Q gives the partial derivatives of Q with respect to S and C, and hence the partial derivatives of the tilt matrix H_tilt. The remaining partial derivatives then follow from the differentiation formula for the inverse of a matrix.
The third part likewise needs to be differentiated. For a CCTV lens, this part of the transformation can be expressed as:
For the distortion coefficient part, with the division model one has
Combining the above parts, the partial derivative of the whole intrinsic part is obtained according to the chain rule.
Since distortion is not taken into account when calculating the intrinsic and extrinsic parameters in step 140, distortion correction can be added during the nonlinear optimization. In one embodiment, during the nonlinear optimization, the calculated distortion coefficients are used to correct the tilted image plane coordinates of the feature points before each iteration. Specifically, for the division model, an invertible solution can be obtained directly, so the calculation can be performed directly according to the following formula:
for polynomial models, the assumed distortion model can be expressed asWhereinIs the distorted tilted image plane coordinates,as coordinates, vectors, in the camera coordinate systemf d Byf u Andf v is composed of two parts, and
The corrected coordinates can thus be expressed accordingly. To compute f_d, a Taylor expansion can be applied, keeping only the linear part:
Thus one obtains
The distortion correction for the polynomial model therefore proceeds as follows:
Iterative computation is carried out to eliminate the distortion:

Initialize the variables, compute the required terms, and then update x and y according to the following formula, iterating:
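A sketch of such fixed-point distortion removal, assuming a simple radial polynomial model x_d = x·(1 + k₁r² + k₂r⁴) rather than the patent's exact model:

```python
def undistort_iterative(xd, yd, k1, k2, iters=20):
    """Fixed-point removal of radial distortion x_d = x*(1 + k1*r^2 + k2*r^4).

    Each pass re-estimates the distortion factor at the current undistorted
    guess and divides it out of the observed (distorted) coordinates.
    """
    x, y = xd, yd                  # initialize with the distorted coordinates
    for _ in range(iters):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / factor, yd / factor
    return x, y
```

For the moderate distortion typical of calibrated lenses, this iteration converges in a handful of passes.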
According to the calibration method of the non-coaxial camera in this embodiment, when the transformation model from world coordinates to image coordinates is established, the fact that the optical axis of the lens of the non-coaxial camera is not coaxial with the normal of the imaging plane is taken into account, and the concepts of the tilted image plane and the non-tilted image plane are introduced, where the tilted image plane is the image plane perpendicular to the optical axis of the lens and the non-tilted image plane is the image plane of the non-coaxial camera. The tilt matrix H_tilt, which describes the transformation from the tilted image plane coordinate system to the non-tilted image plane coordinate system, is added and regarded as part of the camera's intrinsic parameters, modifying the mathematical model of existing methods. The rotation angle τ about the X axis of the original coordinate system and the rotation angle ρ about the Z axis are introduced to express the transformation between the tilted and non-tilted image planes, which describes the coordinate system conversion inside the non-coaxial camera well. In one embodiment, distortion correction is also performed on the tilted image plane coordinates during the nonlinear optimization. In summary, the calibration method for the non-coaxial camera provided by the present application improves the calibration precision of the non-coaxial camera and reduces the errors produced when the non-coaxial camera is applied in practice.
Referring to fig. 6, a method for extracting high-precision coordinates of feature points of a circular array calibration plate in step 120 is described, wherein the method includes steps 210 to 250.
Step 210: a calibration plate image is acquired. The acquired calibration board image may be captured by a coaxial camera or a non-coaxial camera.
Step 220: the calibration plate image is image-processed to obtain circular feature points therein. The image processing comprises binarization, filtering, feature screening and the like. Referring to FIG. 7, an exemplary process for obtaining circular feature points includes steps 310-340, which are described in detail below.
Step 310: and performing edge extraction on the calibration plate image to obtain a calibration plate boundary frame so as to obtain the position of the calibration plate in the calibration plate image.
Step 320: and constructing an image pyramid for the calibration plate area to obtain pyramid images of each layer, wherein the calibration plate area is an area positioned in a boundary frame of the calibration plate in the calibration plate image. In the image pyramid, the image resolution of the upper layer is small, and the image resolution of the lower layer is large. The specific number of layers of the image pyramid can be set empirically.
Step 330: and carrying out binarization processing on the pyramid image of the current layer to search for circular feature points. The initial value of the pyramid image of the current layer is the pyramid image of the topmost layer.
In one embodiment, the binarization can be performed iteratively with a gray-value step. Specifically, within a preset threshold interval, gray thresholds are selected from small to large at a preset spacing. Each time a gray threshold is selected, threshold segmentation is performed on the current-layer pyramid image with it to obtain circular regions. When the number of circular regions equals the preset number, it is judged that circular feature points meeting the preset conditions have been found in the current-layer pyramid image, and no further threshold is selected; otherwise, the next gray threshold is selected and threshold segmentation is repeated until the preset threshold interval has been traversed. For example, with a preset threshold interval of 50–90 and a step size of 10 (i.e., the preset spacing is 10), the values 50, 60, 70, 80 and 90 are selected in turn as gray thresholds for threshold segmentation of the current-layer pyramid image until the number of circular regions equals the preset number. The preset threshold interval can be set empirically. After the circular regions are obtained by threshold segmentation, some morphological processing and area screening can be performed to obtain more accurate results.
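The stepped threshold sweep can be sketched as follows; the connected-component counter and the assumption that feature points are darker than the background are illustrative choices, not the patent's implementation:

```python
import numpy as np
from collections import deque

def count_regions(binary):
    """Count 4-connected foreground components in a boolean image."""
    seen = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    n = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                n += 1
                q = deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return n

def sweep_thresholds(img, lo, hi, step, expected):
    """Try gray thresholds lo, lo+step, ..., hi until the expected number of
    dark regions appears; return (threshold, binary) or (None, None)."""
    for t in range(lo, hi + 1, step):
        binary = img < t          # feature points assumed darker than background
        if count_regions(binary) == expected:
            return t, binary
    return None, None
```

A production implementation would add the morphological processing and area screening mentioned above between segmentation and counting.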
Step 340: and judging whether the circular feature points meeting the preset conditions are searched in the pyramid image of the current layer. If yes, the process is ended, otherwise, the next layer pyramid image is used as the current layer pyramid image, and the step 330 is returned.
Referring to fig. 8, another embodiment of a process for obtaining circular feature points includes steps 410-430, which are described in detail below.
Step 410: and constructing an image pyramid for the calibration plate image to obtain pyramid images of all layers.
Step 420: and performing circular feature point search on the pyramid image of the current layer. The initial value of the pyramid image of the current layer is the pyramid image of the topmost layer.
The circular feature point search may be performed as follows: the method comprises the steps of conducting binarization processing on a pyramid image of a current layer to obtain a circular area, conducting statistical analysis processing on the area of the circular area to obtain the area with the maximum occurrence frequency, calculating a radius according to the area with the maximum occurrence frequency, multiplying the radius by the corresponding multiplying power of the pyramid image of the current layer to obtain an estimated radius of a circular feature point, and searching the circular feature point in a calibration plate image according to the estimated radius.
The estimated radius of the circular feature points may be obtained by histogram statistics. After binarization processing is carried out on a pyramid image of a current layer to obtain a circular region, the circular region is firstly screened according to a preset roundness range and/or an area range, histogram statistics of the area is carried out on the screened circular region, a function mapping relation of the area and the occurrence frequency is established, and the area with the maximum occurrence frequency is obtained; then, calculating the radius according to the area with the maximum occurrence frequency, and multiplying the radius by the multiplying power corresponding to the pyramid image of the current layer to obtain the estimated radius of the circular feature point; and finally, filtering the calibration plate image according to the estimated radius, then performing threshold segmentation to obtain a characteristic point estimation region, and removing the characteristic point estimation region with the area larger than a preset area threshold. And calculating the number of the characteristic point estimation areas, and judging that circular characteristic points meeting preset conditions are searched in the pyramid image of the current layer when the number of the characteristic point estimation areas is equal to the preset number.
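A sketch of the histogram-based radius estimate, assuming each pyramid level halves the resolution (so the magnification back to the original image at level n is 2^n) and taking the mode over rounded region areas:

```python
import numpy as np

def estimate_radius(region_areas, pyramid_level):
    """Radius estimate from the most frequent region area on a pyramid level.

    Assumes level 0 is full resolution and each level halves it, so the
    magnification back to the original image is 2**pyramid_level.
    """
    values, counts = np.unique(np.round(region_areas).astype(int),
                               return_counts=True)
    area_mode = values[np.argmax(counts)]     # area occurring most often
    radius = np.sqrt(area_mode / np.pi)       # invert disc area: A = pi * r^2
    return radius * (2 ** pyramid_level)
```

Outlier regions (noise blobs, merged dots) barely affect the result because only the most frequent area is used.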
Step 430: and judging whether the circular feature points meeting the preset conditions are searched in the pyramid image of the current layer. If yes, the process is ended, otherwise, the next layer pyramid image is used as the current layer pyramid image, and the process returns to step 420.
In the method for obtaining the circular feature points in the calibration plate image in the embodiment, the circular feature points are searched by constructing the image pyramid, the searching is performed layer by layer from the top layer of the image pyramid to the lower layer, and the searching can be stopped when the circular feature points meeting the preset conditions are searched in a certain layer. Because the image resolution of the upper layer of the image pyramid is small and the image is small, the method is favorable for improving the searching efficiency of the circular feature points. In some embodiments, the binarization processing is a process performed iteratively according to a gray value step length, and is not divided by using a single gray threshold, which is beneficial to more accurately extracting the circular feature points.
The following steps 230 to 250 are described.
Step 230: and performing edge extraction on the circular feature points to obtain edge points of the circular feature points, and performing ellipse fitting by using the edge points to obtain image coordinates of the circular feature points, wherein the image coordinates of the circular feature points refer to image coordinates of the circle centers of the circular feature points.
Step 240: and determining the corresponding relation between the image coordinates of the circular feature points and the world coordinates.
Commonly used calibration plates have no fiducial reference, so a worker must manually select a reference to match the image coordinates of the feature points with the world coordinates, which is cumbersome. The present application provides two circular array calibration plates with references, together with methods for determining the correspondence between the image coordinates of the circular feature points and the world coordinates. One is a circular array calibration plate with a triangular marker, as shown in fig. 9: one corner of the calibration plate carries a triangular marker, an isosceles right triangle whose right-angle vertex is one of the vertices of the circular array calibration plate and whose other two vertices lie on the two sides of the calibration plate adjacent to that vertex. The other is a circular array calibration plate with hollow dots, as shown in fig. 10, in which the hollow dots are grouped into clusters (5 clusters in fig. 10).
Referring to fig. 11, in the circular array calibration plate with triangular markers, determining the correspondence between the image coordinates of the circular feature points and the world coordinates includes the following steps:
step 510: and detecting the triangular marker in the calibration plate image, and determining the relative position relation between the circular feature point and the triangular marker. Triangular markers can be detected by detecting the hypotenuse of the triangle.
Step 520: and establishing a reference coordinate system by taking the triangular marker as a reference, and determining the one-to-one correspondence between the image coordinates of the circular feature points and the world coordinates according to the parameter information of the circular array calibration plate. Based on step 510, a reference coordinate system is established, and then the position of the circular feature point in the reference coordinate system is obtained, the reference coordinate corresponds to the world coordinate, and the one-to-one correspondence between the image coordinate of the circular feature point and the world coordinate can be determined by using the parameter information of the circular array calibration plate.
Referring to fig. 12, in the circular array calibration plate with a hollow point, determining the correspondence between the image coordinates of the circular feature point and the world coordinates includes the following steps:
step 610: and extracting hollow points from the obtained circular feature points, and dividing the hollow points into different clusters by using a clustering algorithm.
Step 620: and calculating to obtain a hollow point with the shortest sum of the distances from all other hollow points in the cluster, taking the hollow point as the center point of the cluster, and classifying the non-hollow points into the cluster with the closest distance.
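The center-point rule of step 620 is the medoid of the cluster; a minimal sketch (function names are illustrative):

```python
import numpy as np

def cluster_medoid(points):
    """Point with the smallest summed distance to all other points in the cluster."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    return pts[np.argmin(d.sum(axis=1))]

def assign_to_nearest(point, medoids):
    """Index of the cluster whose medoid is closest to the given point."""
    m = np.asarray(medoids, dtype=float)
    return int(np.argmin(np.linalg.norm(m - np.asarray(point, dtype=float), axis=1)))
```

Unlike the mean, the medoid is always one of the hollow points themselves, which matches the description of taking a hollow point as the cluster's center.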
Step 630: and determining the position of the cluster in the circular array calibration plate according to the arrangement mode of hollow points in the cluster. For example, in fig. 10, it can be seen that the arrangement of the hollow dots in 5 clusters is different, and the position of the cluster in the circular array calibration plate can be determined accordingly.
Step 640: and determining the relative position relationship between the other clusters and the reference cluster by taking one cluster as the reference cluster, so that the relative position relationship between the circular characteristic points in the other clusters and the reference cluster can be determined.
Step 650: and establishing a reference coordinate system by taking the central point of the reference cluster as an origin, and determining the one-to-one correspondence between the image coordinates of the circular feature points and the world coordinates according to the parameter information of the circular array calibration plate. On the basis of step 640, the position of the circular feature point in the reference coordinate system can be obtained after the reference coordinate system is established, the reference coordinate corresponds to the world coordinate, and the one-to-one correspondence relationship between the image coordinate of the circular feature point and the world coordinate can be determined by using the parameter information of the circular array calibration plate.
In one embodiment, after the corresponding relationship between the image coordinates of the circular feature points and the world coordinates is obtained, sub-pixel edge extraction may also be performed. Specifically, a homography matrix is calculated according to the one-to-one correspondence between the image coordinates and the world coordinates of the hollow points in the clusters, and the homography matrix is utilized to map the world coordinates of other circular mark points to the image to obtain mapping points; and then, acquiring circular feature points containing mapping points, and performing sub-pixel edge extraction and ellipse fitting on the circular feature points to obtain new edge points and image coordinates of the circular feature points. The accuracy is further improved by using the edge points and the image coordinates obtained by extracting the sub-pixel edges.
Step 250: and carrying out error correction on the image coordinates of the circular feature points by using an ellipse equation to obtain the final image coordinates of the circular feature points.
Let F be the ellipse equation matrix. The center of the circle can then be expressed in terms of F. The transformation from the world coordinate system to the image coordinate system can be expressed as P_i = H_t P_w; thus the transformed circle, and likewise the transformed center, can be expressed in terms of F and H_t.
The above transformation from the world coordinate system to the image coordinate system is expressed without considering distortion; if distortion is considered, the relationship between the distorted and undistorted coordinates needs to be established. In an embodiment of the present application, the ellipse equation matrix is used, an objective function is established on the principle that the difference between the observed value and the expected value is minimized, and the undistorted image coordinates are solved for, thereby achieving error correction of the image coordinates of the circular feature points.
The ellipse before error correction is a distorted ellipse, expressed in terms of the image coordinates of the circular feature points before error correction and a distorted ellipse equation matrix. If distortion is not considered, the curve is a standard conic, whose ellipse equation matrix can be denoted D, with p the image coordinates of the error-corrected circular feature points in its curve equation. The transformation from D to the distorted ellipse equation matrix can be represented by a transformation matrix H_D:
where λ_1, λ_2 and λ_3 are the eigenvalues of the ellipse equation matrix D, U is the matrix of the corresponding eigenvectors, and the remaining quantities are the eigenvalues and eigenvector matrix of the distorted ellipse equation matrix.
After H_D is obtained, the image coordinates p_i of the error-corrected circular feature points can be solved from the following objective function:
where the subscript i denotes the i-th point.
In another embodiment, the ellipse equation can be solved by ellipse fitting to obtain the image coordinates of the ellipse center; these are compared with the image coordinates of the unfitted ellipse center, and the difference between the two is used to directly correct the image coordinates of the circular feature points. The ellipse fitting can be performed according to the following objective function:
where a, b, c, d, e and f are the coefficients of the ellipse equation, (x_i, y_i) are the image coordinates of the edge points of the circular feature points, w_i is the weight, and n is the number of edge points.
Then, the image coordinates of the center point of the ellipse are calculated according to the following formula:
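A hedged sketch of computing the ellipse center from the fitted conic coefficients, assuming the ellipse equation a x² + b x y + c y² + d x + e y + f = 0 (the center solves the linear system [2a b; b 2c][x; y] = [−d; −e]):

```python
import numpy as np

def ellipse_center(a, b, c, d, e):
    """Center of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.

    Setting the gradient of the conic to zero gives
    [2a b; b 2c] @ [x; y] = [-d; -e], which is solved here directly.
    """
    M = np.array([[2.0 * a, b], [b, 2.0 * c]])
    return np.linalg.solve(M, np.array([-d, -e]))
```

Note that the constant coefficient f does not enter the center computation.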
The image coordinates of the center point of the unfitted ellipse are calculated according to the following formula:
where F is the set of points in the region of the circular feature point, i.e., all points within the whole circle, p_i is a point in the set, I(p_i) is the gray value of the point p_i, (x_i, y_i) are the image coordinates of the point p_i, and the subscript i denotes the i-th point.
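The unfitted center described here is a gray-value-weighted centroid over the set F; a minimal sketch (weighting directly by I(p_i), as stated):

```python
import numpy as np

def gray_centroid(gray, mask):
    """Gray-value-weighted centroid (x, y) of the pixels selected by mask."""
    ys, xs = np.nonzero(mask)
    w = gray[ys, xs].astype(float)
    return np.array([(xs * w).sum() / w.sum(),   # weighted mean column (x)
                     (ys * w).sum() / w.sum()])  # weighted mean row (y)
```

For dark dots on a bright background, one would typically invert the gray values before weighting, so that the dot itself dominates the centroid.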
The deviation between the two computed center coordinates is then used to compensate the image coordinates of the circular feature points, completing the error correction.
According to the method for extracting the high-precision coordinates of the feature points of the circular array calibration plate, the calibration plate image is subjected to image processing to obtain the circular feature points, then edge extraction and ellipse fitting are carried out on the circular feature points to obtain the image coordinates of the circular feature points, after the image coordinates are obtained, an ellipse equation is used for carrying out error correction on the image coordinates, and the error correction can be realized through ellipse fitting, error compensation and the like. In the process of obtaining the circular feature points, searching the circular feature points by constructing an image pyramid, searching layer by layer from the top layer of the image pyramid to the lower layer, and stopping searching when the circular feature points meeting preset conditions are searched in a certain layer. Because the image resolution of the upper layer of the image pyramid is small and the image is small, the method is favorable for improving the searching efficiency of the circular feature points. In some embodiments, the binarization processing is a process performed iteratively according to a gray value step length, and is not divided by using a single gray threshold, which is beneficial to more accurately extracting the circular feature points. In summary, the method for extracting the feature point high-precision coordinates of the circular array calibration plate effectively improves the precision and efficiency of extracting the feature point coordinates, thereby improving the precision of camera calibration.
For a line scan camera, the calibration work is also important, and the application also provides a calibration method of the line scan camera. According to the method for calibrating the line scan camera, the line scan camera is modeled, a line scan camera model is provided, parameters in the model are solved, calibration of the line scan camera is completed, and the line scan camera model is introduced firstly below.
Referring to fig. 13, the line scan camera model represents the coordinate transformation relationship from world coordinates to image coordinates in a line scan camera. Because a line scan camera has only a single row of photosensitive units, the object to be photographed must move for a complete image to be captured: during the object's motion, the line scan camera continuously scans it line by line and stitches the individual line images into a complete image. The motion of the object can be expressed as the vector V = (V_x, V_y, V_z)^T, where V_x, V_y and V_z respectively represent the object's speed of movement in the x, y and z directions. The coordinate transformation relationship in a line scan camera can be divided into two parts: the transformation among the world coordinate system, the camera coordinate system and the image plane coordinate system, and the transformation between the image plane coordinate system and the image coordinate system. Since the object is moving throughout, world coordinates can be represented using the motion vector.
The transformation relationship among the world, camera and image plane coordinate systems is expressed by the first transformation equation; for a non-telecentric lens such as a CCTV lens, the first transformation equation is as follows:
where t represents time, (x_c, y_c, z_c)^T represents the coordinates in the camera coordinate system, λ is a coefficient, f denotes the focal length of the line scan camera, the distorted image plane coordinates supply the abscissa, c_y is the ordinate of the principal point (c_x, c_y), s_y is the extension length in the y direction, i.e., the distance the object moves between two successive line scans, and the undistorted image plane coordinates are those computed from the distorted image plane coordinates using the distortion model.
For a telecentric lens:
whereinmIs the magnification of the lens.
The transformation relation between the image plane coordinate system and the image coordinate system can be expressed by a second transformation equation, which is specifically as follows:
where s_x is the pixel size of the line scan camera in the x direction, and (c, r)^T represents the image coordinates.
When the transformation relationship among the world, camera and image plane coordinate systems is established, lens distortion is not considered and undistorted image plane coordinates are used; however, the actually obtained image plane coordinates are necessarily distorted, so the distortion model must be applied to the distorted coordinates to compute the undistorted ones. The distortion model may be a division model or a polynomial model.
The first transformation equation, the second transformation equation and the distortion model form the line scan camera model of the present application. Referring to fig. 14, a calibration method of a line-scan camera in an embodiment includes steps 710 to 730, which are described in detail below.
Step 710: a calibration plate image is acquired. As mentioned above, the calibration plate image is obtained by continuously scanning the moving calibration plate with the line scan camera. The calibration plate can be a checkerboard calibration plate, a circular array calibration plate, etc.
Step 720: acquire the feature points in the calibration plate image and their image coordinates. For a checkerboard calibration plate, the feature points are the corner points of the checkerboard; for a circular array calibration plate, the feature points are the circle centers of the circular feature points, i.e. the circular patterns on the calibration plate. The image coordinates of the feature points may be obtained by image processing of the calibration plate image, and the image coordinates of the i-th feature point can be expressed as (c i , r i ) T .
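As a rough illustration of step 720, the sketch below locates the center of one circular feature point by taking the centroid of its foreground pixels. This is a simplification and an assumption on my part: the patent's actual pipeline uses sub-pixel edge extraction and ellipse fitting (see claim 9), and the function name `blob_centroid` is hypothetical.

```python
import numpy as np

def blob_centroid(binary_img):
    # Centroid of all foreground pixels, returned in (c, r)^T order
    # to match the image-coordinate convention used in the text.
    rows, cols = np.nonzero(binary_img)
    return np.array([cols.mean(), rows.mean()])

# Synthetic 11 x 11 image containing one filled disc centred at (5, 5).
yy, xx = np.mgrid[0:11, 0:11]
img = (xx - 5) ** 2 + (yy - 5) ** 2 <= 9
center = blob_centroid(img)
```

Because the synthetic disc is symmetric, the centroid coincides exactly with the true center; on real images the centroid is only a coarse estimate, which is why ellipse fitting is preferred.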
Step 730: the initial values of the parameters in the line scan camera model are preset, nonlinear optimization is carried out according to a preset loss function, the parameters of the line scan camera model are obtained, and therefore calibration of the line scan camera is completed.
The parameters of the line scan camera model to be solved comprise the focal length f, the principal point (c x , c y ), the pixel size s x of the line scan camera in the x direction, the extension length s y in the y direction, the motion vector V = (V x , V y , V z ) T , and the distortion coefficients of the distortion model. Initial values for these parameters may be set empirically in advance; for the motion vector, a suitable initial value may be found by an initialization parameter search. A preset input of V x , V y , V z is first received from the user, and a search is then performed over a spatial range containing this value, here a 3 × 3 space; the candidate with the smallest error is selected as V x , V y , V z , where the smallest error means that the values on the two sides of the first transformation equation are closest.
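The motion-vector initialization can be sketched as a small grid search. Assumptions: the source's "3 × 3" search space is taken here to mean a 3 × 3 × 3 grid (three candidates per component of V), the step size is arbitrary, and `error_fn` is a placeholder for the residual of the first transformation equation, which is not reproduced in the source.

```python
import numpy as np
from itertools import product

def search_motion_vector(v0, error_fn, step=0.1):
    # Enumerate a 3 x 3 x 3 grid of candidates around the preset
    # motion vector and keep the candidate with the smallest error.
    candidates = (np.asarray(v0, float) + step * np.array(d)
                  for d in product((-1, 0, 1), repeat=3))
    return min(candidates, key=error_fn)

# Toy error function standing in for the first-transformation-equation
# residual: distance to a known optimum.
target = np.array([1.0, 0.0, 0.1])
err = lambda v: float(np.linalg.norm(v - target))
v = search_motion_vector([0.9, 0.1, 0.1], err)
```

With the toy error, the grid contains the optimum itself, so the search returns it exactly; in practice the selected candidate only seeds the nonlinear optimization of step 730.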
The loss function is established in the image coordinate system, set according to the image coordinates of the feature points, and defined as:
wherein one set of coordinates represents the distorted image coordinates and q i represents the undistorted image coordinates, n denotes the total number of feature points, and the subscript i denotes the i-th feature point.
The image coordinates obtained in step 720 are distorted image coordinates; the undistorted image coordinates can be calculated according to the distortion model and the second transformation equation. The loss function obtained after substitution is
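A loss of the kind defined above, summing squared distances between observed (undistorted) and model-predicted image coordinates over all feature points, can be sketched as follows; the function name and argument layout are assumptions, since the source formula is rendered as an image.

```python
import numpy as np

def reprojection_loss(q_obs, q_model):
    # Sum over the n feature points of the squared distance between the
    # undistorted observed image coordinates and the coordinates
    # predicted by the line scan camera model.
    d = np.asarray(q_obs, float) - np.asarray(q_model, float)
    return float(np.sum(d * d))

loss = reprojection_loss([[0.0, 0.0], [1.0, 2.0]],
                         [[0.0, 1.0], [1.0, 2.0]])
```

Here one point matches exactly and the other is off by one pixel in r, so the loss is 1.0.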
If a division model is used, the undistorted image plane coordinates can be expressed as:
wherein κ is the distortion coefficient.
If a polynomial model is used, the assumed distortion model expresses the distorted image plane coordinates in terms of a function f d , wherein p c is the coordinate vector in the camera coordinate system and f d is composed of two parts.
The undistorted coordinates can therefore be expressed accordingly. To calculate f d , it can be subjected to a Taylor expansion, considering only the linear part:
thus, it is possible to obtain:
For a non-telecentric lens, an expression can be obtained from the first transformation equation and substituted into the second transformation equation to obtain:
According to the above formula, the coordinates can be expressed in terms of the parameters to be solved; substituting this expression into the preceding formulas yields an expression involving only the parameters to be solved, which can be substituted into the loss function, and the parameters of the line scan camera model are obtained by solving the loss function.
Similarly, for telecentric lenses:
According to the above formula, the coordinates can likewise be expressed in terms of the parameters to be solved and substituted into the first transformation equation in the same way, yielding an expression involving only the parameters to be solved; this can be substituted into the loss function, and the parameters of the line scan camera model are obtained by solving the loss function.
The solution process may adopt the LM (Levenberg-Marquardt) algorithm for iterative computation. In the parameter update of each iteration, q k denotes the vector composed of the line scan camera model parameters at the k-th iteration, and the update δ is determined by a formula in which J is the Jacobian matrix and ε is the vector formed from the values corresponding to all feature points. When the lens of the line scan camera is a non-telecentric lens and the distortion model is a polynomial model,
when the lens of the line scan camera is a non-telecentric lens and the distortion model is a division model,
when the lens of the line scan camera is telecentric and the distortion model is a polynomial model,
when the lens of the line scan camera is telecentric and the distortion model is a division model,
The Jacobian matrix J consists of the partial derivatives with respect to the parameters to be solved. For the polynomial model, the distortion coefficients are recorded as a vector, whose partial derivative is
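When the analytic partial derivatives that make up J are unavailable or need checking, a central-finite-difference approximation is a common substitute; the helper below is a generic sketch, not part of the patent's method.

```python
import numpy as np

def numerical_jacobian(residual_fn, q, h=1e-6):
    # J[i, j] = d residual_i / d q_j, approximated by central differences.
    q = np.asarray(q, float)
    r0 = np.asarray(residual_fn(q), float)
    J = np.zeros((r0.size, q.size))
    for j in range(q.size):
        dq = np.zeros_like(q)
        dq[j] = h
        J[:, j] = (residual_fn(q + dq) - residual_fn(q - dq)) / (2 * h)
    return J

# Residual eps(q) = (q0^2, q0*q1); analytic Jacobian at (1, 2) is
# [[2, 0], [2, 1]].
J = numerical_jacobian(lambda q: np.array([q[0] ** 2, q[0] * q[1]]),
                       [1.0, 2.0])
```

Central differences are exact for these low-order polynomials up to rounding, which makes the toy case a good sanity check for hand-derived Jacobians.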
According to the calibration method of the line scan camera in this embodiment, a line scan camera model is first pre-established to represent the coordinate transformation from world coordinates to image coordinates in the line scan camera, and initial values of its parameters are set. During calibration, the line scan camera continuously scans the moving calibration plate to obtain a calibration plate image; the feature points and their image coordinates are then extracted from the image, and the parameters of the line scan camera model are nonlinearly optimized according to a preset loss function set from the image coordinates of the feature points. The parameters of the line scan camera model are finally obtained, completing the calibration of the line scan camera.
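The optimization loop of step 730 can be summarized as repeated damped updates until the parameter vector converges. The residual and Jacobian callables below are placeholders for the model-specific formulas in the text, and the damping and iteration count are assumptions.

```python
import numpy as np

def calibrate(q0, residual_fn, jacobian_fn, iters=20, mu=1e-3):
    # Starting from the preset initial parameter vector q0, repeat
    # damped Gauss-Newton / LM updates (sketch of step 730).
    q = np.asarray(q0, float)
    for _ in range(iters):
        eps = residual_fn(q)
        J = jacobian_fn(q)
        q = q + np.linalg.solve(J.T @ J + mu * np.eye(len(q)), -J.T @ eps)
    return q

# Toy problem: fit q to minimise ||A q - b||^2.
A = np.array([[1.0, 1.0], [0.0, 2.0], [3.0, 0.0]])
b = np.array([2.0, 4.0, 3.0])
q = calibrate(np.zeros(2), lambda q: A @ q - b, lambda q: A)
```

For a linear residual the loop converges to the ordinary least-squares solution; with the actual line scan camera model, the residual is nonlinear in the parameters and convergence depends on the initial values discussed above.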
Reference is made herein to various exemplary embodiments. However, those skilled in the art will recognize that changes and modifications may be made to the exemplary embodiments without departing from the scope hereof. For example, the various operational steps, as well as the components used to perform the operational steps, may be implemented in differing ways depending upon the particular application or consideration of any number of cost functions associated with operation of the system (e.g., one or more steps may be deleted, modified or incorporated into other steps).
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. Additionally, as will be appreciated by one skilled in the art, the principles herein may be reflected in a computer program product on a computer-readable storage medium pre-loaded with computer-readable program code. Any tangible, non-transitory computer-readable storage medium may be used, including magnetic storage devices (hard disks, floppy disks, etc.), optical storage devices (CD-ROM, DVD, Blu-ray discs, etc.), flash memory, and/or the like. These computer program instructions may be loaded onto a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions executed on the computer or other programmable data processing apparatus create means for implementing the specified functions. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including means for implementing the specified functions. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed so as to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the specified functions.
Those skilled in the art will recognize that many changes may be made to the details of the above-described embodiments without departing from the underlying principles of the invention. Accordingly, the scope of the invention should be determined only by the claims.
Claims (10)
1. A calibration method for a non-coaxial camera, the non-coaxial camera comprising an image plane and a lens, wherein a normal vector of the image plane is not coaxial with an optical axis of the lens, the calibration method comprising:
acquiring a calibration plate image shot by a non-coaxial camera;
acquiring feature points in the calibration plate image, and image coordinates and corresponding world coordinates of the feature points;
calculating a homography matrix H according to the image coordinates of the feature points and the corresponding world coordinates;
according to a preset conversion model from world coordinates to image coordinates, performing decomposition calculation on the homography matrix to obtain internal parameters and external parameters of the non-coaxial camera, wherein the internal parameters comprise a tilt matrix H tilt , the tilt matrix H tilt representing a transformation from a tilted image plane coordinate system to a non-tilted image plane coordinate system, the tilted image plane being an image plane perpendicular to the optical axis of the lens, and the non-tilted image plane being the image plane of the non-coaxial camera;
and carrying out nonlinear optimization on the distortion coefficient of the non-coaxial camera and the internal parameter and the external parameter obtained by decomposition to obtain the final internal parameter, external parameter and distortion coefficient of the non-coaxial camera.
2. A calibration method according to claim 1, wherein the conversion model is:
wherein H is the homography matrix, (r, c) T are the image coordinates of the feature point, (x w , y w , z w ) T are the world coordinates of the feature point, [R t] is the transformation matrix from the world coordinate system to the camera coordinate system, R being the rotation matrix and t the displacement matrix, z c is the z coordinate of the feature point in the camera coordinate system, the matrix containing the focal length f of the non-coaxial camera is the transformation from the camera coordinate system to the tilted image plane coordinate system, and the matrix containing the pixel sizes s x and s y of the non-coaxial camera in the horizontal and vertical directions and the principal point (c x , c y ) is the transformation from the non-tilted image plane coordinate system to the image coordinate system; these matrices form the internal parameter part, and [R t] forms the external parameter part.
3. The calibration method according to claim 2, wherein the lens of the non-coaxial camera is a non-telecentric lens or an object-side telecentric lens, and the tilt matrix is
wherein d is the translation distance from the tilted image plane to the non-tilted image plane, and q 11 , q 12 , q 13 , q 21 , q 22 , q 23 , q 31 , q 32 , q 33 are elements of a rotation matrix Q, the rotation matrix Q representing the rotational transformation of the tilted image plane with respect to an original coordinate system, and
wherein ρ denotes the angle of rotation about the Z axis and τ the angle of rotation about the X axis; the X axis of the original coordinate system is the horizontal direction of the non-tilted image plane, the Y axis is the vertical direction of the non-tilted image plane, and the Z axis is the normal of the non-tilted image plane.
4. A calibration method according to claim 2 or 3, wherein the obtaining of the internal reference and the external reference of the non-coaxial camera by performing the decomposition calculation on the homography matrix comprises:
calculating the parameter matrix A according to the following constraint conditions:
Wherein
wherein h 1 is the first column vector of the homography matrix H, h 2 is the second column vector of the homography matrix H, h 3 is the third column vector of the homography matrix H, r 1 is the first column vector of the rotation matrix R, and r 2 is the second column vector of the rotation matrix R;
5. A calibration method according to claim 2 or 3, wherein the external parameters of the non-coaxial camera further comprise an equivalent rotation axis k and an equivalent axis angle θ, and the obtaining of the internal parameters and the external parameters of the non-coaxial camera by performing decomposition calculation on the homography matrix comprises:
calculating the parameter matrix A according to the following constraint conditions:
wherein
calculating the matrix [r 1 r 2 t] according to r 1 = A -1 h 1 , r 2 = A -1 h 2 , t = A -1 h 3 ;
wherein h 1 is the first column vector of the homography matrix H, h 2 is the second column vector of the homography matrix H, h 3 is the third column vector of the homography matrix H, r 1 is the first column vector of the rotation matrix R, and r 2 is the second column vector of the rotation matrix R;
obtaining the equivalent rotation axis k and the equivalent axis angle θ according to the calculated rotation matrix R, wherein the transformation relation among the rotation matrix R, the equivalent rotation axis k and the equivalent axis angle θ is:
6. The calibration method according to claim 2, wherein the performing nonlinear optimization on the distortion coefficients of the non-coaxial camera and the decomposed internal parameters and external parameters to obtain final internal parameters, external parameters and distortion coefficients of the non-coaxial camera comprises:
presetting an initial value of a distortion coefficient, taking the internal parameter and the external parameter obtained by decomposition as the initial values of the internal parameter and the external parameter, and iteratively solving an optimal solution according to the following loss functions to obtain the final internal parameter, external parameter and distortion coefficient of the non-coaxial camera:
wherein n m is the number of feature points in a calibration plate image, n c is the number of cameras, n 0 is the number of calibration plate images taken by each camera, p j are the coordinates of a feature point in the world coordinate system, one transformation represents the pose of the calibration plate image in the reference camera, another represents the transformation of the k-th camera with respect to the reference camera, i k denotes the transformation under the k-th camera, p jkl are the image coordinates of the j-th feature point in the l-th calibration plate image taken by the k-th camera, and v jkl takes the value 0 or 1, being 1 if the j-th feature point is visible in the l-th calibration plate image taken by the k-th camera and 0 otherwise; one function represents the transformation from the image coordinate system to the tilted image plane coordinate system, comprising transforming from the image coordinate system to the tilted image plane coordinate system using the internal parameters and undistorting the tilted image plane coordinates using the distortion coefficients; the other function represents the transformation from the world coordinate system to the tilted image plane coordinate system, comprising transforming the world coordinate system to the camera coordinate system using the external parameters.
7. The calibration method according to claim 6, further comprising: before each iteration, performing distortion correction on the tilted image plane coordinates of the feature points using the currently calculated distortion coefficients.
8. The calibration method according to claim 6, wherein the optimal solution is solved iteratively, wherein q k denotes the vector composed of the internal parameters, external parameters and distortion coefficients of the non-coaxial camera at the k-th iteration, δ is determined by a formula in which ε is the vector formed from the differences, at the current iteration, of the corresponding values for all feature points in each calibration plate image taken by each camera, and J is the Jacobian matrix, composed of the Jacobian matrices of the individual cameras, the Jacobian matrix of the i-th camera being
9. The calibration method according to claim 1, wherein the calibration plate image is a circular array calibration plate image, and the feature points in the calibration plate image and their image coordinates are acquired as follows:
carrying out image processing on the calibration plate image to obtain circular feature points in the calibration plate image;
performing edge extraction on the circular feature points to obtain edge points of the circular feature points, and performing ellipse fitting by using the edge points to obtain image coordinates of the circular feature points, wherein the image coordinates of the circular feature points refer to image coordinates of the circle centers of the circular feature points;
determining the corresponding relation between the image coordinates of the circular feature points and world coordinates;
and carrying out error correction on the image coordinates of the circular feature points by using an ellipse equation to obtain the final image coordinates of the circular feature points.
10. A computer-readable storage medium, characterized in that the medium has stored thereon a program which is executable by a processor for implementing a calibration method as claimed in any one of claims 1 to 9.
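The homography computation in claim 1 (H from the feature points' image and world coordinates) is commonly carried out with a direct linear transform. The sketch below is a textbook DLT under the assumption that the calibration plate lies in the z_w = 0 plane; it is not necessarily the patent's exact procedure, and the function name is hypothetical.

```python
import numpy as np

def homography_dlt(world_xy, image_cr):
    # Standard DLT: each correspondence (x, y) -> (c, r) contributes two
    # linear constraints on the 9 entries of H; the solution is the
    # right singular vector for the smallest singular value.
    rows = []
    for (x, y), (c, r) in zip(world_xy, image_cr):
        rows.append([x, y, 1, 0, 0, 0, -c * x, -c * y, -c])
        rows.append([0, 0, 0, x, y, 1, -r * x, -r * y, -r])
    _, _, Vt = np.linalg.svd(np.array(rows, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the overall scale

# Points related by a pure translation homography: (c, r) = (x + 2, y + 3).
world = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1)]
image = [(x + 2, y + 3) for x, y in world]
H = homography_dlt(world, image)
```

With exact correspondences the recovered H matches the generating translation homography; with noisy feature points, the SVD gives the algebraic least-squares estimate, which the patent then refines by nonlinear optimization.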
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210129725.5A CN114494464A (en) | 2021-12-15 | 2021-12-15 | Calibration method of line scanning camera |
CN202210131436.9A CN114463442A (en) | 2021-12-15 | 2021-12-15 | Calibration method of non-coaxial camera |
CN202111526560.7A CN113920205B (en) | 2021-12-15 | 2021-12-15 | Calibration method of non-coaxial camera |
CN202210130372.0A CN114529613A (en) | 2021-12-15 | 2021-12-15 | Method for extracting characteristic point high-precision coordinates of circular array calibration plate |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111526560.7A CN113920205B (en) | 2021-12-15 | 2021-12-15 | Calibration method of non-coaxial camera |
Related Child Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210130372.0A Division CN114529613A (en) | 2021-12-15 | 2021-12-15 | Method for extracting characteristic point high-precision coordinates of circular array calibration plate |
CN202210131436.9A Division CN114463442A (en) | 2021-12-15 | 2021-12-15 | Calibration method of non-coaxial camera |
CN202210129725.5A Division CN114494464A (en) | 2021-12-15 | 2021-12-15 | Calibration method of line scanning camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113920205A true CN113920205A (en) | 2022-01-11 |
CN113920205B CN113920205B (en) | 2022-03-18 |
Family
ID=79249214
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210130372.0A Pending CN114529613A (en) | 2021-12-15 | 2021-12-15 | Method for extracting characteristic point high-precision coordinates of circular array calibration plate |
CN202111526560.7A Active CN113920205B (en) | 2021-12-15 | 2021-12-15 | Calibration method of non-coaxial camera |
CN202210129725.5A Pending CN114494464A (en) | 2021-12-15 | 2021-12-15 | Calibration method of line scanning camera |
CN202210131436.9A Pending CN114463442A (en) | 2021-12-15 | 2021-12-15 | Calibration method of non-coaxial camera |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210130372.0A Pending CN114529613A (en) | 2021-12-15 | 2021-12-15 | Method for extracting characteristic point high-precision coordinates of circular array calibration plate |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210129725.5A Pending CN114494464A (en) | 2021-12-15 | 2021-12-15 | Calibration method of line scanning camera |
CN202210131436.9A Pending CN114463442A (en) | 2021-12-15 | 2021-12-15 | Calibration method of non-coaxial camera |
Country Status (1)
Country | Link |
---|---|
CN (4) | CN114529613A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115930784A (en) * | 2023-01-09 | 2023-04-07 | 广州市易鸿智能装备有限公司 | Point inspection method of visual inspection system |
CN116188594A (en) * | 2022-12-31 | 2023-05-30 | 梅卡曼德(北京)机器人科技有限公司 | Calibration method, calibration system, calibration device and electronic equipment of camera |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114862866B (en) * | 2022-07-11 | 2022-09-20 | 深圳思谋信息科技有限公司 | Calibration plate detection method and device, computer equipment and storage medium |
CN117135454A (en) * | 2023-01-13 | 2023-11-28 | 荣耀终端有限公司 | Image processing method, device and storage medium |
CN116878388B (en) * | 2023-09-07 | 2023-11-14 | 东莞市兆丰精密仪器有限公司 | Line scanning measurement method, device and system and computer readable storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009017480A (en) * | 2007-07-09 | 2009-01-22 | Nippon Hoso Kyokai <Nhk> | Camera calibration device and program thereof |
US20120133779A1 (en) * | 2010-11-29 | 2012-05-31 | Microsoft Corporation | Robust recovery of transform invariant low-rank textures |
CN107680139A (en) * | 2017-10-17 | 2018-02-09 | 中国人民解放军国防科技大学 | Universality calibration method of telecentric binocular stereo vision measurement system |
CN108447098A (en) * | 2018-03-13 | 2018-08-24 | 深圳大学 | A kind of telecentricity moves camera shaft scaling method and system |
CN110298888A (en) * | 2019-06-12 | 2019-10-01 | 上海智能制造功能平台有限公司 | Camera calibration method based on uniaxial high precision displacement platform |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107123146A (en) * | 2017-03-20 | 2017-09-01 | 深圳市华汉伟业科技有限公司 | The mark localization method and system of a kind of scaling board image |
CN107274454B (en) * | 2017-06-14 | 2020-12-15 | 昆明理工大学 | Method for extracting characteristic points of circular array calibration plate |
CN109816733B (en) * | 2019-01-14 | 2023-08-18 | 京东方科技集团股份有限公司 | Camera parameter initialization method and device, camera parameter calibration method and device and image acquisition system |
KR102297683B1 (en) * | 2019-07-01 | 2021-09-07 | (주)베이다스 | Method and apparatus for calibrating a plurality of cameras |
CN111145238B (en) * | 2019-12-12 | 2023-09-22 | 中国科学院深圳先进技术研究院 | Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment |
CN113012234B (en) * | 2021-03-16 | 2022-09-02 | 中国人民解放军火箭军工程大学 | High-precision camera calibration method based on plane transformation |
CN113610917A (en) * | 2021-08-09 | 2021-11-05 | 河南工业大学 | Circular array target center image point positioning method based on blanking points |
- 2021-12-15 CN CN202210130372.0A patent/CN114529613A/en active Pending
- 2021-12-15 CN CN202111526560.7A patent/CN113920205B/en active Active
- 2021-12-15 CN CN202210129725.5A patent/CN114494464A/en active Pending
- 2021-12-15 CN CN202210131436.9A patent/CN114463442A/en active Pending
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116188594A (en) * | 2022-12-31 | 2023-05-30 | 梅卡曼德(北京)机器人科技有限公司 | Calibration method, calibration system, calibration device and electronic equipment of camera |
CN116188594B (en) * | 2022-12-31 | 2023-11-03 | 梅卡曼德(北京)机器人科技有限公司 | Calibration method, calibration system, calibration device and electronic equipment of camera |
CN115930784A (en) * | 2023-01-09 | 2023-04-07 | 广州市易鸿智能装备有限公司 | Point inspection method of visual inspection system |
CN115930784B (en) * | 2023-01-09 | 2023-08-25 | 广州市易鸿智能装备有限公司 | Point inspection method of visual inspection system |
Also Published As
Publication number | Publication date |
---|---|
CN114529613A (en) | 2022-05-24 |
CN114494464A (en) | 2022-05-13 |
CN114463442A (en) | 2022-05-10 |
CN113920205B (en) | 2022-03-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113920205B (en) | Calibration method of non-coaxial camera | |
CN108648240B (en) | Non-overlapping view field camera attitude calibration method based on point cloud feature map registration | |
CN110969668B (en) | Stereo calibration algorithm of long-focus binocular camera | |
CN109598762B (en) | High-precision binocular camera calibration method | |
Tang et al. | A precision analysis of camera distortion models | |
CN102376089B (en) | Target correction method and system | |
CN109272574B (en) | Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation | |
CN112465912B (en) | Stereo camera calibration method and device | |
Von Gioi et al. | Towards high-precision lens distortion correction | |
WO2018201677A1 (en) | Bundle adjustment-based calibration method and device for telecentric lens-containing three-dimensional imaging system | |
CN112802124A (en) | Calibration method and device for multiple stereo cameras, electronic equipment and storage medium | |
CN110738608B (en) | Plane image correction method and system | |
CN112929626B (en) | Three-dimensional information extraction method based on smartphone image | |
JP6641729B2 (en) | Line sensor camera calibration apparatus and method | |
CN105118086A (en) | 3D point cloud data registering method and system in 3D-AOI device | |
CN112258588A (en) | Calibration method and system of binocular camera and storage medium | |
JP2004317245A (en) | Distance detection device, distance detection method and distance detection program | |
CN115457147A (en) | Camera calibration method, electronic device and storage medium | |
CN113920206A (en) | Calibration method of perspective tilt-shift camera | |
CN111462246B (en) | Equipment calibration method of structured light measurement system | |
CN113793266A (en) | Multi-view machine vision image splicing method, system and storage medium | |
JP5998532B2 (en) | Correction formula calculation method, correction method, correction apparatus, and imaging apparatus | |
CN116625258A (en) | Chain spacing measuring system and chain spacing measuring method | |
CN113962853B (en) | Automatic precise resolving method for rotary linear array scanning image pose | |
CN116071433A (en) | Camera calibration method and system, and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |