CN113920205A - Calibration method of non-coaxial camera - Google Patents

Calibration method of non-coaxial camera

Info

Publication number
CN113920205A
CN113920205A (application CN202111526560.7A)
Authority
CN
China
Prior art keywords
image
camera
matrix
coordinate system
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111526560.7A
Other languages
Chinese (zh)
Other versions
CN113920205B (en)
Inventor
魏宇明
杨洋
黄涛
黄淦
吴创廷
翟爱亭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huahan Weiye Technology Co ltd
Original Assignee
Shenzhen Huahan Weiye Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huahan Weiye Technology Co ltd filed Critical Shenzhen Huahan Weiye Technology Co ltd
Priority to CN202210129725.5A priority Critical patent/CN114494464A/en
Priority to CN202210131436.9A priority patent/CN114463442A/en
Priority to CN202111526560.7A priority patent/CN113920205B/en
Priority to CN202210130372.0A priority patent/CN114529613A/en
Publication of CN113920205A publication Critical patent/CN113920205A/en
Application granted granted Critical
Publication of CN113920205B publication Critical patent/CN113920205B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

A calibration method for a non-coaxial camera comprises the following steps: acquiring calibration plate images shot by the non-coaxial camera; acquiring feature points in the calibration plate images together with their image coordinates and world coordinates; calculating a homography matrix; decomposing the homography matrix according to a preset conversion model from world coordinates to image coordinates to obtain the intrinsic and extrinsic parameters of the non-coaxial camera, wherein the intrinsic parameters include a tilt matrix representing the transformation from the tilted image plane coordinate system to the non-tilted image plane coordinate system, the tilted image plane being the image plane perpendicular to the optical axis of the lens and the non-tilted image plane being the image plane of the non-coaxial camera; and performing nonlinear optimization on the distortion coefficients together with the intrinsic and extrinsic parameters obtained by decomposition to obtain the final intrinsic parameters, extrinsic parameters and distortion coefficients. Because the tilted and non-tilted image planes are introduced and the tilt matrix is added to describe the transformation from the tilted image plane coordinate system to the non-tilted image plane coordinate system, the calibration accuracy of the non-coaxial camera can be effectively improved.

Description

Calibration method of non-coaxial camera
Technical Field
The invention relates to the technical field of camera calibration, in particular to a calibration method of a non-coaxial camera.
Background
In image measurement and machine vision applications, determining the three-dimensional geometric position of a point on the surface of an object in space requires a geometric model of camera imaging, i.e., the correspondence between that three-dimensional position and the corresponding point in the image. Once this model is established, the three-dimensional space coordinates can be inferred from the image coordinates of an image shot by the camera. The parameters of this geometric model are the camera parameters, and the process of determining them is called camera calibration. Calibration is a critical step: the accuracy of the calibration result and the stability of the calibration algorithm directly affect the accuracy of everything the camera subsequently produces, so good calibration is a precondition for all follow-up work. Calibration is usually performed with a calibration plate, which is widely applied in machine vision, image measurement, photogrammetry, three-dimensional reconstruction and so on: the camera shoots images of a calibration plate carrying a pattern array with fixed spacing, and the geometric model of camera imaging is obtained through a calibration algorithm, yielding high-precision measurement and reconstruction results.
At present, a calibration board with a checkerboard pattern or a solid circular array pattern is usually used for camera calibration, wherein the checkerboard calibration board acquires feature points by positioning checkerboard corner points, the circular array calibration board acquires the feature points by positioning dot centers, and subsequent calibration work can be performed after the coordinates of the feature points and the corresponding relation between the feature points and world coordinates are determined.
Disclosure of Invention
The application provides a calibration method of a non-coaxial camera, which can be used for calibrating the non-coaxial camera.
According to a first aspect, an embodiment provides a calibration method for a non-coaxial camera, where the non-coaxial camera includes an image plane and a lens, and a normal vector of the image plane and an optical axis of the lens are not coaxial, the calibration method including:
acquiring a calibration plate image shot by a non-coaxial camera;
acquiring feature points in the calibration plate image, and image coordinates and corresponding world coordinates of the feature points;
calculating a homography matrix H from the image coordinates of the feature points and the corresponding world coordinates;
decomposing the homography matrix according to a preset conversion model from world coordinates to image coordinates to obtain the intrinsic and extrinsic parameters of the non-coaxial camera, wherein the intrinsic parameters include a tilt matrix H_tilt representing the transformation from the tilted image plane coordinate system to the non-tilted image plane coordinate system, the tilted image plane being the image plane perpendicular to the optical axis of the lens and the non-tilted image plane being the image plane of the non-coaxial camera;
and performing nonlinear optimization on the distortion coefficients of the non-coaxial camera together with the intrinsic and extrinsic parameters obtained by decomposition, to obtain the final intrinsic parameters, extrinsic parameters and distortion coefficients of the non-coaxial camera.
In one embodiment, the conversion model is:

z_c · (c, r, 1)^T = K · H_tilt · P · [R t] · (x_w, y_w, z_w, 1)^T

so that the homography matrix is H = K · H_tilt · P · [R t], where (c, r)^T are the image coordinates of a feature point and (x_w, y_w, z_w)^T its world coordinates;

[R t] is the transformation matrix from the world coordinate system to the camera coordinate system, R being the rotation matrix, t the translation vector, and z_c the z coordinate of the feature point in the camera coordinate system;

P = [[f, 0, 0], [0, f, 0], [0, 0, 1]] is the transformation matrix from the camera coordinate system to the tilted image plane coordinate system, f being the focal length of the non-coaxial camera;

K = [[1/s_x, 0, c_x], [0, 1/s_y, c_y], [0, 0, 1]] is the transformation matrix from the non-tilted image plane coordinate system to the image coordinate system, s_x and s_y being the pixel sizes of the non-coaxial camera in the horizontal and vertical directions and (c_x, c_y) the principal point;

K · H_tilt · P is the intrinsic part and [R t] is the extrinsic part.
In one embodiment, the lens of the non-coaxial camera is a non-telecentric lens or an object-side telecentric lens, and the tilt matrix is the central-projection homography between the tilted and the non-tilted image plane,

H_tilt = [[d·q11, d·q12, d²·q13], [d·q21, d·q22, d²·q23], [q31, q32, d·q33]]

where d is the translation distance from the tilted image plane to the non-tilted image plane, and q11, q12, q13, q21, q22, q23, q31, q32, q33 are the elements of the rotation matrix Q, which represents the rotational transformation of the tilted image plane with respect to the original coordinate system:

Q = [[cos ρ, −sin ρ·cos τ, sin ρ·sin τ], [sin ρ, cos ρ·cos τ, −cos ρ·sin τ], [0, sin τ, cos τ]]

where ρ denotes the angle of rotation about the Z axis and τ the angle of rotation about the X axis; the X axis of the original coordinate system is the horizontal direction of the non-tilted image plane, the Y axis its vertical direction, and the Z axis its normal.
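Assuming the rotation about the X axis is applied first and the rotation about the Z axis second (the composition order is not explicit in the text), the rotation matrix Q can be sketched as follows; the function name is illustrative:

```python
import numpy as np

def tilt_rotation(rho, tau):
    """Rotation of the tilted image plane relative to the original
    coordinate system: rotate by tau about the X axis, then by rho
    about the Z axis, i.e. Q = R_z(rho) @ R_x(tau)."""
    Rz = np.array([[np.cos(rho), -np.sin(rho), 0.0],
                   [np.sin(rho),  np.cos(rho), 0.0],
                   [0.0,          0.0,         1.0]])
    Rx = np.array([[1.0, 0.0,          0.0],
                   [0.0, np.cos(tau), -np.sin(tau)],
                   [0.0, np.sin(tau),  np.cos(tau)]])
    return Rz @ Rx

Q = tilt_rotation(np.deg2rad(30.0), np.deg2rad(8.0))
# a proper rotation: orthonormal, determinant +1
print(np.allclose(Q @ Q.T, np.eye(3)), np.isclose(np.linalg.det(Q), 1.0))
```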
In one embodiment, performing the decomposition calculation on the homography matrix to obtain the intrinsic and extrinsic parameters of the non-coaxial camera includes:

calculating the parameter matrix A (the intrinsic part K · H_tilt · P of the conversion model) according to the following constraints:

h1^T · A^(−T) · A^(−1) · h2 = 0
h1^T · A^(−T) · A^(−1) · h1 = h2^T · A^(−T) · A^(−1) · h2

where h1, h2 and h3 are the first, second and third column vectors of the homography matrix H, and r1 and r2 are the first and second column vectors of the rotation matrix R;

computing the matrix [r1 r2 t] according to r1 = A^(−1)·h1, r2 = A^(−1)·h2, t = A^(−1)·h3; and computing the tilt matrix according to

H_tilt = K^(−1) · A · P^(−1)

with K = [[1/s_x, 0, c_x], [0, 1/s_y, c_y], [0, 0, 1]] and P = [[f, 0, 0], [0, f, 0], [0, 0, 1]].
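The decomposition step can be illustrated with a Zhang-style sketch: given the intrinsic part A and a homography H built from a known pose, [r1 r2 t] is recovered from the columns of A^(−1)·H. All numeric values are synthetic and purely illustrative:

```python
import numpy as np

def decompose_homography(H, A):
    """Recover [r1 r2 t] from H = s * A @ [r1 r2 t] (Zhang-style).
    A is the intrinsic part (in the patent it would include H_tilt)."""
    Ainv = np.linalg.inv(A)
    lam = 1.0 / np.linalg.norm(Ainv @ H[:, 0])   # scale fixed by ||r1|| = 1
    r1 = lam * (Ainv @ H[:, 0])
    r2 = lam * (Ainv @ H[:, 1])
    t  = lam * (Ainv @ H[:, 2])
    r3 = np.cross(r1, r2)                        # complete the rotation
    return np.column_stack([r1, r2, r3]), t

# synthetic check: build H from a known pose, then recover it
A = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
c, s = np.cos(0.2), np.sin(0.2)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t = np.array([0.1, -0.2, 1.5])
H = A @ np.column_stack([R[:, 0], R[:, 1], t])
R_rec, t_rec = decompose_homography(H, A)
print(np.allclose(R_rec, R), np.allclose(t_rec, t))
```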
In one embodiment, the extrinsic parameters of the non-coaxial camera comprise an equivalent rotation axis k and an equivalent rotation angle θ, and performing the decomposition calculation on the homography matrix to obtain the intrinsic and extrinsic parameters includes:

calculating the parameter matrix A according to the following constraints:

h1^T · A^(−T) · A^(−1) · h2 = 0
h1^T · A^(−T) · A^(−1) · h1 = h2^T · A^(−T) · A^(−1) · h2

computing the matrix [r1 r2 t] according to r1 = A^(−1)·h1, r2 = A^(−1)·h2, t = A^(−1)·h3;

where h1, h2 and h3 are the first, second and third column vectors of the homography matrix H, and r1 and r2 are the first and second column vectors of the rotation matrix R;

obtaining the equivalent rotation axis k and equivalent rotation angle θ from the computed rotation matrix R, where the transformation relationship between R, k and θ is the Rodrigues formula

R(k, θ) = cos θ · I + (1 − cos θ) · k·k^T + sin θ · [k]×

with [k]× = [[0, −k_z, k_y], [k_z, 0, −k_x], [−k_y, k_x, 0]], where k_x, k_y and k_z are the three components of the equivalent rotation axis k;

and computing the tilt matrix according to H_tilt = K^(−1) · A · P^(−1), with K = [[1/s_x, 0, c_x], [0, 1/s_y, c_y], [0, 0, 1]] and P = [[f, 0, 0], [0, f, 0], [0, 0, 1]].
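The conversion between a rotation matrix and its equivalent axis/angle can be sketched with the generic Rodrigues relations (valid for 0 < θ < π; not specific to the patent):

```python
import numpy as np

def rotation_to_axis_angle(R):
    """Equivalent rotation axis k (unit vector) and angle theta from a
    rotation matrix R, for the generic case 0 < theta < pi."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    k = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return k, theta

def axis_angle_to_rotation(k, theta):
    """Rodrigues formula: R = cos(t) I + (1 - cos(t)) k k^T + sin(t) [k]x."""
    K = np.array([[0.0,  -k[2],  k[1]],
                  [k[2],  0.0,  -k[0]],
                  [-k[1], k[0],  0.0]])
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(k, k)
            + np.sin(theta) * K)

k0 = np.array([1.0, 2.0, 2.0]) / 3.0      # unit axis
R = axis_angle_to_rotation(k0, 0.7)
k, theta = rotation_to_axis_angle(R)
print(np.allclose(k, k0), abs(theta - 0.7) < 1e-9)
```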
In an embodiment, performing nonlinear optimization on the distortion coefficients of the non-coaxial camera and the decomposed intrinsic and extrinsic parameters to obtain the final intrinsic parameters, extrinsic parameters and distortion coefficients includes:

presetting initial values of the distortion coefficients, taking the intrinsic and extrinsic parameters obtained by decomposition as their initial values, and iteratively solving the optimal solution of the following loss function to obtain the final intrinsic parameters, extrinsic parameters and distortion coefficients of the non-coaxial camera:

min Σ_{j=1..n_m} Σ_{k=1..n_c} Σ_{l=1..n_0} v_jkl · || π⁻¹(p_jkl) − π(p_j, T_l, T_k) ||²

where n_m is the number of feature points in a calibration plate image, n_c the number of cameras, n_0 the number of calibration plate images taken by each camera, and p_j the coordinates of the j-th feature point in the world coordinate system; T_l denotes the pose of the l-th calibration plate image relative to the reference camera and T_k the transformation of the k-th camera with respect to the reference camera; p_jkl are the image coordinates of the j-th feature point in the l-th calibration plate image shot by the k-th camera; v_jkl takes the value 0 or 1, being 1 if the j-th feature point is visible in the l-th calibration plate image shot by the k-th camera and 0 otherwise. The function π⁻¹(·) denotes the transformation from the image coordinate system to the tilted image plane coordinate system, which comprises transforming the image coordinates to the tilted image plane using the intrinsic parameters and inverse-distorting the tilted image plane coordinates using the distortion coefficients; the function π(·) denotes the transformation from the world coordinate system to the tilted image plane coordinate system, which comprises transforming the world coordinates to the camera coordinate system using the extrinsic parameters.
In one embodiment, the calibration method of the non-coaxial camera further includes: before each iteration, performing distortion correction on the tilted image plane coordinates of the feature points using the currently computed distortion coefficients.
In one embodiment, the optimal solution is solved iteratively according to the formula

q_{k+1} = q_k + δ

where q_k is the vector formed by the intrinsic parameters, extrinsic parameters and distortion coefficients of the non-coaxial camera at the k-th iteration, and δ is determined by

(J^T·J + λ·I) · δ = J^T·ε

where ε is the vector formed, at the current iteration, by the differences between the observed feature point coordinates transformed to the tilted image plane coordinate system and the corresponding world coordinates transformed to the tilted image plane coordinate system, over all feature points in every calibration plate image taken by every camera; J is the Jacobian matrix, composed of the Jacobian matrices of the individual cameras, the Jacobian matrix of the i-th camera being

J_i = [ ∂ε_ij/∂Θ_int , ∂ε_ij/∂Θ_ext ]

i.e., the partial derivatives of the residuals of the j-th calibration plate image shot by the i-th camera with respect to the corresponding intrinsic and extrinsic parameters.
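The update q_{k+1} = q_k + δ with (J^T J + λI)δ = J^T ε is the standard Levenberg-Marquardt step. A minimal sketch on a toy one-parameter least-squares problem (not the calibration loss itself; all names are illustrative):

```python
import numpy as np

def lm_fit(f, jac, q0, y, n_iter=50, mu=1e-3):
    """Minimal Levenberg-Marquardt loop: q <- q + delta with
    (J^T J + mu I) delta = J^T eps, eps = y - f(q)."""
    q = q0.astype(float)
    for _ in range(n_iter):
        eps = y - f(q)
        J = jac(q)
        delta = np.linalg.solve(J.T @ J + mu * np.eye(q.size), J.T @ eps)
        q_new = q + delta
        if np.sum((y - f(q_new))**2) < np.sum(eps**2):
            q, mu = q_new, mu * 0.5      # accept step, reduce damping
        else:
            mu *= 10.0                   # reject step, increase damping
    return q

# toy problem: recover a in y = exp(a * x)
x = np.linspace(0.0, 1.0, 20)
a_true = 1.3
y = np.exp(a_true * x)
f = lambda q: np.exp(q[0] * x)
jac = lambda q: (x * np.exp(q[0] * x)).reshape(-1, 1)
a_hat = lm_fit(f, jac, np.array([0.5]), y)
print(abs(a_hat[0] - a_true) < 1e-6)
```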
In one embodiment, the calibration plate image is a circular array calibration plate image, and the feature points in the calibration plate image and their image coordinates are acquired by:
carrying out image processing on the calibration plate image to obtain circular feature points in the calibration plate image;
performing edge extraction on the circular feature points to obtain edge points of the circular feature points, and performing ellipse fitting by using the edge points to obtain image coordinates of the circular feature points, wherein the image coordinates of the circular feature points refer to image coordinates of the circle centers of the circular feature points;
determining the corresponding relation between the image coordinates of the circular feature points and world coordinates;
and carrying out error correction on the image coordinates of the circular feature points by using an ellipse equation to obtain the final image coordinates of the circular feature points.
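The ellipse-fitting step can be sketched with a plain algebraic least-squares conic fit; the patent's actual edge extraction and error-correction procedure is more involved, so this is only a stand-in with synthetic edge points:

```python
import numpy as np

def fit_ellipse_center(xs, ys):
    """Least-squares algebraic conic fit a x^2 + b x y + c y^2 + d x + e y + f = 0
    to edge points, returning the ellipse center (a simple stand-in for
    subpixel circle-center localization)."""
    D = np.column_stack([xs**2, xs*ys, ys**2, xs, ys, np.ones_like(xs)])
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, f = Vt[-1]            # null-space vector minimizes ||D theta||
    # the gradient of the conic vanishes at the center
    cx, cy = np.linalg.solve([[2*a, b], [b, 2*c]], [-d, -e])
    return cx, cy

# synthetic edge points on an ellipse centered at (3.0, -1.5)
t = np.linspace(0.0, 2.0*np.pi, 40, endpoint=False)
xs = 3.0 + 2.0*np.cos(t) + 0.5*np.sin(t)
ys = -1.5 + 1.0*np.sin(t)
cx, cy = fit_ellipse_center(xs, ys)
print(abs(cx - 3.0) < 1e-6, abs(cy + 1.5) < 1e-6)
```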
According to a second aspect, an embodiment provides a computer-readable storage medium having a program stored thereon, the program being executable by a processor to implement the method for calibrating a non-coaxial camera according to the first aspect.
According to the calibration method of the non-coaxial camera and the computer-readable storage medium of the above embodiments, when the conversion model from world coordinates to image coordinates is established for a non-coaxial camera, the fact that the optical axis of the lens is not coaxial with the normal of the imaging plane is taken into account by introducing the concepts of the tilted image plane and the non-tilted image plane, where the tilted image plane is the image plane perpendicular to the optical axis of the lens and the non-tilted image plane is the imaging plane of the non-coaxial camera. A tilt matrix H_tilt is added to describe the transformation from the tilted image plane coordinate system to the non-tilted image plane coordinate system and is treated as part of the camera's intrinsic parameters. This modifies the mathematical model of existing methods so that the chain of coordinate transformations inside a non-coaxial camera is described well, improving the calibration accuracy of the non-coaxial camera and reducing the errors of the results produced when the non-coaxial camera is applied.
Drawings
FIG. 1 is a schematic diagram of an optical configuration of a coaxial camera;
FIG. 2 is a schematic diagram of an optical configuration of a non-coaxial camera;
FIG. 3 is a schematic diagram of the transformation of each coordinate system in the pinhole camera model;
FIG. 4 is a flow diagram of a method for calibrating a non-coaxial camera in one embodiment;
FIG. 5 is a schematic diagram of a tilt transformation;
FIG. 6 is a flowchart of a method for extracting feature point high precision coordinates of a circular array calibration plate according to an embodiment;
FIG. 7 is a flow diagram of image processing of a calibration plate image to obtain circular feature points therein in one embodiment;
FIG. 8 is a flow chart of image processing of a calibration plate image to obtain circular feature points therein in another embodiment;
FIG. 9 is a schematic view of a circular array calibration plate with triangular markers;
FIG. 10 is a schematic view of a circular array calibration plate with hollow dots;
FIG. 11 is a flowchart of determining the correspondence between the image coordinates of the circular feature points and the world coordinates in the circular array calibration plate with the triangular markers;
FIG. 12 is a flowchart of determining the correspondence between the image coordinates and world coordinates of circular feature points in a circular array calibration plate having hollow points;
FIG. 13 is a schematic diagram of the transformation of coordinate systems in a line scan camera model;
FIG. 14 is a flowchart of a calibration method for a line-scan camera in an embodiment.
Detailed Description
The present invention will be described in further detail with reference to the following detailed description and accompanying drawings. Wherein like elements in different embodiments are numbered with like associated elements. In the following description, numerous details are set forth in order to provide a better understanding of the present application. However, those skilled in the art will readily recognize that some of the features may be omitted or replaced with other elements, materials, methods in different instances. In some instances, certain operations related to the present application have not been shown or described in detail in order to avoid obscuring the core of the present application from excessive description, and it is not necessary for those skilled in the art to describe these operations in detail, so that they may be fully understood from the description in the specification and the general knowledge in the art.
Furthermore, the features, operations or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Likewise, the steps or actions in the method descriptions may be reordered or rearranged in ways apparent to those skilled in the art. The sequences in the specification and drawings are therefore only for describing particular embodiments and do not imply a required order unless it is otherwise stated that a certain sequence must be followed.
Ordinal numbering such as "first" and "second" is used herein only to distinguish the objects described and carries no sequential or technical meaning. "Connected" and "coupled", when used in this application and unless otherwise indicated, include both direct and indirect connection (coupling). References herein to "image plane" and "imaging plane" denote the same concept.
In most cameras currently in use, the optical axis of the lens and the normal vector of the imaging plane are coaxial. In the field of machine vision, however, due to manufacturing or design reasons the lens of some cameras is not parallel to the imaging plane, that is, the optical axis of the camera lens is not coaxial with the normal of the imaging plane. If this is ignored and calibration is performed by the traditional method, a non-negligible error is introduced.
The mainstream camera calibration methods are designed and computed according to Zhang Zhengyou's calibration method ("Zhang's method"), with the following main calculation steps:
(1) acquiring a homography matrix according to the corresponding relation between the world coordinates and the image coordinates of the feature points in the calibration plate;
(2) decomposing the homography matrix, and calculating to obtain initial parameters of the internal parameters or the external parameters;
(3) performing nonlinear optimization on the initial parameters with the LM (Levenberg-Marquardt) algorithm, iteratively computing the intrinsic parameters, extrinsic parameters and distortion coefficients to obtain the final calibration result.
However, Zhang's calibration method mainly considers the case where the optical axis of the lens and the normal of the imaging plane are coaxial; it gives no processing scheme or mathematical model for the non-coaxial case, so a calibration method that can be used for non-coaxial cameras is needed.
In the coaxial case, the optical structure of the camera is as shown in fig. 1: the normal of the image plane and the optical axis of the lens coincide. In the non-coaxial case, the optical structure is as shown in fig. 2: the normal of the image plane and the optical axis of the lens do not coincide but form an angle θ.
In the coaxial case, the projective transformation between the coordinate systems involved in camera imaging can be represented by the pinhole camera model shown in fig. 3. A point P_w in the World Coordinate System (WCS) is projected through the projection center of the lens onto the imaging plane as a point P. To obtain the image coordinates q_i of P_w on the imaging plane, P_w must first be transformed into the Camera Coordinate System (CCS). The x and y axes of the camera coordinate system are parallel to the c and r axes of the image respectively, where the c axis is the horizontal direction of the image and the r axis its vertical direction; the z axis is perpendicular to the imaging plane and oriented so that all points in front of the camera have positive z coordinates. In fig. 3, the x_c, y_c and z_c axes denote the x, y and z axes of the camera coordinate system. The transformation from the world coordinate system to the camera coordinate system can be written as P_c = ᶜH_w · P_w, where P_c = (x_c, y_c, z_c)^T are the coordinates in the camera coordinate system, P_w = (x_w, y_w, z_w)^T the coordinates in the world coordinate system, and ᶜH_w can be expressed with a rotation matrix R and a translation vector t.
After the transformation into the camera coordinate system, the coordinates must be transformed into the image plane coordinate system, a projection from 3D to 2D coordinates. For non-telecentric lenses, such as CCTV (Closed Circuit Television) lenses, this transformation can be expressed as:

(u, v)^T = (f / z_c) · (x_c, y_c)^T

where f is the focal length of the camera lens and (u, v)^T are the coordinates in the image plane coordinate system.
For a telecentric lens, this transformation can be expressed as:

(u, v)^T = m · (x_c, y_c)^T

where m is the magnification of the lens.
The distortion of the lens changes the coordinates q_c = (u, v)^T after projection onto the imaging plane, producing distorted coordinates (ũ, ṽ)^T.
This change can be modeled on the imaging plane alone, that is, no three-dimensional information is required. For most lenses the distortion is sufficiently well approximated as radial distortion, and two models are commonly used to describe it: a division model and a polynomial model. The division model is:

ũ = 2u / (1 + √(1 − 4κ(u² + v²))),  ṽ = 2v / (1 + √(1 − 4κ(u² + v²)))

where the parameter κ indicates the magnitude of the radial distortion: if κ is negative the distortion is barrel-shaped, and if κ is positive it is pincushion-shaped. The distortion can be corrected by:

u = ũ / (1 + κ(ũ² + ṽ²)),  v = ṽ / (1 + κ(ũ² + ṽ²))
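The division model and its closed-form correction can be sketched directly from the two formulas above; the round trip distort → correct returns the original coordinates:

```python
import math

def distort_division(u, v, kappa):
    """Division-model distortion of undistorted image-plane coords (u, v)."""
    r2 = u*u + v*v
    s = 2.0 / (1.0 + math.sqrt(1.0 - 4.0*kappa*r2))
    return s*u, s*v

def undistort_division(ud, vd, kappa):
    """Closed-form correction: u = ud / (1 + kappa*(ud^2 + vd^2))."""
    s = 1.0 / (1.0 + kappa*(ud*ud + vd*vd))
    return s*ud, s*vd

u, v = 0.3, 0.4
ud, vd = distort_division(u, v, -0.1)      # kappa < 0: barrel distortion
u2, v2 = undistort_division(ud, vd, -0.1)
print(abs(u2 - u) < 1e-12 and abs(v2 - v) < 1e-12)
```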
the polynomial model is as follows:
Figure DEST_PATH_IMAGE033
wherein
Figure 845413DEST_PATH_IMAGE034
k 1k 2k 3p 1p 2Are model coefficients. According to the model, the solution can be obtained by using a Newton methoduvThe initial value of iteration is the undistorted initial value itself.
Finally, the image plane coordinate system is converted into the Image Coordinate System (ICS), expressed by the formula:

c = u/s_x + c_x,  r = v/s_y + c_y , (1)

where s_x and s_y are the pixel sizes of the camera in the horizontal and vertical directions and (c_x, c_y) is the principal point, typically the center of the image.
The entire transformation described above can therefore be expressed as:

z_c · (c, r, 1)^T = [[1/s_x, 0, c_x], [0, 1/s_y, c_y], [0, 0, 1]] · [[f, 0, 0], [0, f, 0], [0, 0, 1]] · [R t] · (x_w, y_w, z_w, 1)^T

This is the mathematical model on which camera calibration is based.
In a non-coaxial camera, the normal of the image plane and the optical axis of the lens do not coincide but form an angle θ, so the above model does not apply to the conversion from the camera coordinate system to the image coordinate system. The present application therefore introduces the concepts of a tilted image plane, which is the image plane perpendicular to the optical axis of the lens of the non-coaxial camera, and a non-tilted image plane, which is the image plane of the non-coaxial camera. In converting from the camera coordinate system to the image coordinate system, the camera coordinates are first converted to the tilted image plane coordinate system, then the tilted image plane coordinates are converted to the non-tilted image plane coordinate system, and finally the non-tilted image plane coordinates are converted to the image coordinate system. A tilt matrix H_tilt is used for the transformation from the tilted to the non-tilted image plane coordinate system, so the whole transformation process, distortion aside, can be expressed as:

z_c · (c, r, 1)^T = [[1/s_x, 0, c_x], [0, 1/s_y, c_y], [0, 0, 1]] · H_tilt · [[f, 0, 0], [0, f, 0], [0, 0, 1]] · [R t] · (x_w, y_w, z_w, 1)^T
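The full transformation chain can be sketched as follows; with H_tilt equal to the identity it reduces to the ordinary pinhole model (all numeric values are illustrative):

```python
import numpy as np

def project_point(p_w, R, t, f, H_tilt, sx, sy, cx, cy):
    """World point -> pixel (c, r) through the chain
    world -> camera -> tilted plane -> non-tilted plane -> image
    (distortion omitted)."""
    p_c = R @ p_w + t                              # world -> camera
    uvw = np.array([f*p_c[0], f*p_c[1], p_c[2]])   # camera -> tilted plane (homogeneous)
    uvw = H_tilt @ uvw                             # tilted -> non-tilted plane
    u, v = uvw[0]/uvw[2], uvw[1]/uvw[2]
    return u/sx + cx, v/sy + cy                    # plane -> pixel coordinates

c_px, r_px = project_point(np.array([0.01, -0.02, 0.0]),
                           np.eye(3), np.array([0.0, 0.0, 0.5]),
                           f=0.016, H_tilt=np.eye(3),
                           sx=5e-6, sy=5e-6, cx=320.0, cy=240.0)
print(c_px, r_px)
```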
on the basis of the above conversion model, the present application provides a calibration method for a non-coaxial camera, please refer to fig. 4, in which the method includes steps 110 to 150, which will be described in detail below.
Step 110: and acquiring a calibration plate image shot by the non-coaxial camera.
The calibration plate can be a checkerboard calibration plate, a circular array calibration plate, or the like. During calibration, the non-coaxial camera can be placed, according to experience, in a plurality of poses (i.e., positions and angles of the non-coaxial camera relative to the calibration plate), and the calibration plate is shot in each pose, so that a plurality of different calibration plate images are obtained for calibration.
Step 120: and acquiring the characteristic points in the calibration plate image, and the image coordinates and the corresponding world coordinates of the characteristic points.
For the checkerboard calibration plate, the feature points are the corner points of the checkerboard pattern; for the circular array calibration plate, the feature points are the centers of the circular feature points in the array, the circular feature points being the circular patterns on the circular array calibration plate.
The world coordinate system can be constructed from the parameter information of the calibration plate to obtain the world coordinates corresponding to the feature points; the parameter information includes the size of the calibration plate, the size of the checkerboard squares, the radius of the circular feature points, the spacing between feature points, and so on. For the circular array calibration plate, this application provides a method for extracting high-precision coordinates of the feature points: the image coordinates of the feature points are error-corrected using an ellipse equation, which effectively improves their accuracy and therefore the accuracy of camera calibration, as elaborated in detail hereinafter.
Step 130: calculating a homography matrix H from the image coordinates of the feature points and the corresponding world coordinates. The homography matrix H can be calculated from the image coordinates of a number of feature points and their corresponding world coordinates.
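Step 130 can be sketched with the standard direct linear transform (DLT); the correspondences below are synthetic and purely illustrative:

```python
import numpy as np

def homography_dlt(world_xy, img_pts):
    """Direct linear transform: estimate H (up to scale) from >= 4
    planar correspondences (x_w, y_w) <-> image points."""
    rows = []
    for (x, y), (u, v) in zip(world_xy, img_pts):
        rows.append([x, y, 1, 0, 0, 0, -u*x, -u*y, -u])
        rows.append([0, 0, 0, x, y, 1, -v*x, -v*y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)                # null-space vector
    return H / H[2, 2]                      # fix the free scale

# synthetic check against a known homography
H_true = np.array([[1.2, 0.1, 5.0], [-0.2, 0.9, 3.0], [1e-3, 2e-3, 1.0]])
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.5, 0.3)]
img = []
for x, y in pts:
    w = H_true @ np.array([x, y, 1.0])
    img.append((w[0]/w[2], w[1]/w[2]))
H_est = homography_dlt(pts, img)
print(np.allclose(H_est, H_true, atol=1e-6))
```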
Step 140: decompose the homography matrix H according to the preset conversion model from world coordinates to image coordinates to obtain the internal parameters and external parameters of the non-coaxial camera, the internal parameters including the tilt matrix H_tilt.
It will be appreciated that the homography matrix H is the product of the internal parameters and the external parameters, so decomposing the homography matrix H yields the internal and external parameters of the non-coaxial camera. With the tilt matrix H_tilt treated as part of the internal parameters, the internal part of the non-coaxial camera consists of the camera matrix together with H_tilt, and the external part is the rotation and translation [R t].
Referring to FIG. 5, the present application introduces three parameters to represent the tilt matrix H_tilt, namely the image plane distance d and the rotation angles τ and ρ: the tilted image plane can be regarded as translated by the distance d relative to the non-tilted image plane (i.e. the image plane of a camera that is not tilted), rotated by the angle τ about the X axis of the original coordinate system and by the angle ρ about its Z axis, where the X axis of the original coordinate system is the horizontal direction of the non-tilted image plane, the Y axis is its vertical direction, and the Z axis is its normal. The rotational transformation of the tilted image plane with respect to the original coordinate system is represented by a rotation matrix Q, which can be calculated from the geometric transformation relationship as

Q = R_z(ρ)·R_x(τ) = [[cosρ, −sinρ·cosτ, sinρ·sinτ], [sinρ, cosρ·cosτ, −cosρ·sinτ], [0, sinτ, cosτ]].
Then, according to the geometric transformation relationship, for non-telecentric lenses such as CCTV lenses and for object-side telecentric lenses, the tilt matrix H_tilt is the plane homography determined by the entries of the rotation matrix Q and the image plane distance d; for image-side telecentric lenses and bilateral telecentric lenses, H_tilt is the corresponding matrix determined by the entries of Q.
When the internal and external parameters of the non-coaxial camera are calculated by decomposing the homography matrix H, the transformation can be written as a whole as

H = A·[r1 r2 t],  (2)

where the parameter matrix A collects the internal parameters (including the tilt matrix H_tilt).
From the orthogonality of the rotation columns, one can obtain:

h1^T A^{-T} A^{-1} h2 = 0,  h1^T A^{-T} A^{-1} h1 = h2^T A^{-T} A^{-1} h2,
where h1, h2 and h3 are the first, second and third column vectors of the homography matrix H, and r1 and r2 are the first and second column vectors of the rotation matrix R. The parameter matrix A can be calculated from these constraints.
Then, according to r1 = A^{-1}h1, r2 = A^{-1}h2 and t = A^{-1}h3, the matrix [r1 r2 t] is calculated, giving the external part. For the internal part, the camera focal length f, the pixel sizes s_x and s_y and the principal point (c_x, c_y) can be known in advance, from which the tilt matrix H_tilt can then be calculated.
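Under the relation H = A·[r1 r2 t], the extraction of the external part can be sketched as follows. This is a minimal illustration: the scale normalization (so that r1 has unit norm) and the completion of the third rotation column by a cross product are standard steps assumed here, not quoted from the text:

```python
import numpy as np

def decompose_homography(H, A):
    """Recover R = [r1 r2 r3] and t from H = s * A @ [r1 r2 t]."""
    Ainv = np.linalg.inv(A)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    s = 1.0 / np.linalg.norm(Ainv @ h1)   # scale so that r1 is unit-norm
    r1 = s * (Ainv @ h1)
    r2 = s * (Ainv @ h2)
    r3 = np.cross(r1, r2)                 # complete the rotation matrix
    t = s * (Ainv @ h3)
    return np.column_stack([r1, r2, r3]), t
```

With noisy data the recovered [r1 r2 r3] is only approximately orthonormal and is usually re-orthogonalized (e.g. by SVD) before the nonlinear refinement of step 150.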
In one embodiment, the rotation matrix R can be represented by an equivalent rotation axis k and an equivalent rotation angle θ, which can be regarded as part of the external parameters. The transformation relationship between R, k and θ is the Rodrigues formula:

R = cosθ·I + (1 − cosθ)·k k^T + sinθ·[k]_x,

where I is the identity matrix, [k]_x is the antisymmetric (cross-product) matrix of k, and k_x, k_y and k_z are the three components of the equivalent rotation axis k.
After the matrix [r1 r2 t] has been calculated according to r1 = A^{-1}h1, r2 = A^{-1}h2 and t = A^{-1}h3, the rotation matrix R is available, and the equivalent rotation axis k and equivalent rotation angle θ can be calculated from the above formula.
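The conversions between R and the axis-angle pair can be sketched as follows (standard Rodrigues formulas, not specific to the patent's implementation; the inverse conversion assumes 0 < θ < π):

```python
import numpy as np

def axis_angle_to_R(k, theta):
    """Rodrigues: R = cos(t)*I + (1-cos(t))*k k^T + sin(t)*[k]_x."""
    k = np.asarray(k, float)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(k, k)
            + np.sin(theta) * K)

def R_to_axis_angle(R):
    """Inverse conversion, valid for 0 < theta < pi."""
    theta = np.arccos((np.trace(R) - 1.0) / 2.0)
    k = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return k, theta
```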
Step 150: perform nonlinear optimization on the distortion coefficients of the non-coaxial camera and the internal and external parameters obtained by the decomposition, to obtain the final internal parameters, external parameters and distortion coefficients of the non-coaxial camera.
In this step, the distortion coefficients of the non-coaxial camera and the internal and external parameters obtained by the decomposition are optimized nonlinearly through a set loss function; the initial values of the internal and external parameters in the iteration can be those obtained by the decomposition, and the initial values of the distortion coefficients can be preset empirically. Since distortion occurs while points are projected through the lens onto the tilted image plane, and distortion is a nonlinear change, the loss function can be built on the tilted image plane, dividing the whole transformation into two parts: one from the image coordinate system to the tilted image plane coordinate system, and one from the world coordinate system to the tilted image plane. The closer the two results are to each other, the better the calibration result, so the loss function can be constructed as follows:
sum_{k=1}^{n_c} sum_{l=1}^{n_0} sum_{j=1}^{n_m} v_jkl · || π_img(p_jkl) − π_world(T_k·T_l, p_j) ||^2,  (3)

The loss function can accommodate the calibration of a plurality of cameras: one camera is designated as the reference camera, and the other cameras are transformed into the coordinate system of the reference camera so that everything is computed uniformly. Here n_m is the number of feature points in a calibration plate image, n_c is the number of cameras, n_0 is the number of calibration plate images taken by each camera, p_j is the coordinate of the j-th feature point in the world coordinate system, T_l represents the pose of the l-th calibration plate image in the reference camera, T_k represents the transformation of the k-th camera with respect to the reference camera, p_jkl is the image coordinate of the j-th feature point in the l-th calibration plate image taken by the k-th camera, and v_jkl takes the value 1 when the j-th feature point is visible in the l-th calibration plate image taken by the k-th camera and 0 otherwise. The function π_img represents the transformation from the image coordinate system to the tilted image plane coordinate system; as described above, this comprises transforming the image coordinates to the non-tilted image plane coordinate system using the internal parameters, transforming from the non-tilted image plane coordinate system to the tilted image plane coordinate system, and removing the distortion of the tilted image plane coordinates using the distortion coefficients. The function π_world represents the transformation from the world coordinate system to the tilted image plane coordinate system; as described above, this comprises transforming the world coordinates to the camera coordinate system using the external parameters and transforming from the camera coordinate system to the tilted image plane.
Iterative computation can be performed with the Levenberg-Marquardt (LM) algorithm. The parameter update in the iterative process can be expressed as

q_{k+1} = q_k + δ,

where q_k is the vector composed of the internal parameters, external parameters and distortion coefficients of the non-coaxial camera at the k-th iteration, and δ is determined by the damped normal equations

(J^T J + μI)·δ = −J^T ε,

where ε is the vector formed by the differences, over all feature points in each calibration plate image taken by each camera at the current iteration, between the image-side transformation (image coordinate system to tilted image plane) and the world-side transformation (world coordinate system to tilted image plane), and J is the Jacobian matrix, composed of the Jacobian matrices of the individual cameras. The Jacobian matrix of the i-th camera is formed from the partial derivatives with respect to the internal and external parameters corresponding to each calibration plate image taken by the i-th camera. The solution of these partial derivatives is explained below.
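Before the partial derivatives are developed, the LM update above can be illustrated on a generic least-squares problem. This is only a sketch: it uses a numeric forward-difference Jacobian in place of the analytic one derived below, and a simple accept/reject damping schedule:

```python
import numpy as np

def levenberg_marquardt(residual, q0, n_iter=100, mu=1e-3):
    """Minimize ||residual(q)||^2 with the damped normal equations
    (J^T J + mu*I) delta = -J^T eps, updating q <- q + delta."""
    q = np.asarray(q0, float)
    h = 1e-7                              # finite-difference step
    for _ in range(n_iter):
        r = residual(q)
        J = np.empty((r.size, q.size))    # forward-difference Jacobian
        for j in range(q.size):
            dq = np.zeros_like(q)
            dq[j] = h
            J[:, j] = (residual(q + dq) - r) / h
        delta = np.linalg.solve(J.T @ J + mu * np.eye(q.size), -J.T @ r)
        if np.linalg.norm(residual(q + delta)) < np.linalg.norm(r):
            q = q + delta                 # accept step, relax damping
            mu *= 0.5
        else:
            mu *= 10.0                    # reject step, increase damping
    return q
```

In the calibration itself, q would stack the internal parameters, external parameters and distortion coefficients of all cameras, and the residuals would be the per-feature-point differences of equation (3).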
For the external parameter part, write the rotation matrix R in terms of the rotation vector [r_x r_y r_z]^T; the equivalent rotation angle then equals

θ = sqrt(r_x^2 + r_y^2 + r_z^2),

so the unit rotation vector is obtained as

l = [l_x l_y l_z]^T = [r_x r_y r_z]^T / θ,

and the rotation matrix R can be expressed as:

R = cosθ·I + (1 − cosθ)·l l^T + sinθ·L,

where I is the identity matrix and L is the antisymmetric matrix

L = [[0, −l_z, l_y], [l_z, 0, −l_x], [−l_y, l_x, 0]].
definition of
Figure DEST_PATH_IMAGE059
Is obtained by
a 0=-sinθl i a 1=[sinθ-2(1-cosθ)θ’]l i
a 2=2(1-cosθ)θ’a 3=[cosθ-θ’sinθ] l i a 4=θ’sinθ
When in usei=0,l i = l x i=1,l i = l y i=2,l i = l z
Defining the vectors (each a 3×3 matrix written row-wise as a 9-vector):

i = [1, 0, 0, 0, 1, 0, 0, 0, 1],
r_t = [l_x^2, l_x l_y, l_x l_z, l_x l_y, l_y^2, l_y l_z, l_x l_z, l_y l_z, l_z^2],
dr_0 = [2l_x, l_y, l_z, l_y, 0, 0, l_z, 0, 0],
dr_1 = [0, l_x, 0, l_x, 2l_y, l_z, 0, l_z, 0],
dr_2 = [0, 0, l_x, 0, 0, l_y, l_x, l_y, 2l_z],
q_x = [0, −r_z, r_y, r_z, 0, −r_x, −r_y, r_x, 0],
dq_0 = [0, 0, 0, 0, 0, −1, 0, 1, 0],
dq_1 = [0, 0, 1, 0, 0, 0, −1, 0, 0],
dq_2 = [0, −1, 0, 1, 0, 0, 0, 0, 0],
then the partial derivatives with respect to the external parameters of the i-th camera follow by the chain rule from the derivative of the rotation matrix, written row-wise as

∂R/∂r_i = a_0·i + a_1·r_t + a_2·dr_i + a_3·q_x + a_4·dq_i  (i = 0, 1, 2).
For the internal parameter part, as can be seen from equation (2), the solution can be divided into three parts. The first part is the transformation from the image coordinate system to the non-tilted image plane coordinate system involved in formula (3), so the derivation is performed with respect to the parameters c_x, c_y, S_x and S_y. As can be seen from equation (1), when distortion is not considered,

u = (c − c_x)·S_x,  v = (r − c_y)·S_y,

and thus the partial derivatives can be found:

∂u/∂c_x = −S_x,  ∂u/∂S_x = c − c_x,  ∂v/∂c_y = −S_y,  ∂v/∂S_y = r − c_y.
the second part is a tilting matrixH tilt The same is required to
Figure 472492DEST_PATH_IMAGE069
The derivation can be carried out by rotating the matrixQObtaining a partial derivativeH tilt The partial derivatives of (1). For rotation matrixQIf it is directly according toτAndρcalculated, for rotational angle ambiguity reasonsτAndρnot the only solution, but two eligible solutions may occur. To eliminate ambiguity, a constraint is added here:
Figure DEST_PATH_IMAGE070
order tot 2=tan2(
Figure 71839DEST_PATH_IMAGE071
)=S 2+C 2c 2=2cos2(
Figure 926662DEST_PATH_IMAGE071
)=
Figure 175241DEST_PATH_IMAGE072
=
Figure DEST_PATH_IMAGE073
. Then the rotation matrixQThe respective parameters in (a) may be expressed as:
Figure 416123DEST_PATH_IMAGE074
thus, it is possible to obtain
Figure 433757DEST_PATH_IMAGE075
According to the formula
Figure DEST_PATH_IMAGE076
Figure 193903DEST_PATH_IMAGE077
Can find outt 2c 2AboutSCPartial derivatives of (3), into the rotation matrixQCan find the rotation matrixQAboutSCSo that the tilt matrix can be obtainedH tilt According to a differential matrix formula of an inverse matrix
Figure DEST_PATH_IMAGE078
Can find out
Figure 867461DEST_PATH_IMAGE079
The partial derivatives of (1).
The third part is the projection from the camera coordinate system to the image plane, which likewise must be differentiated. For a CCTV lens, this partial transformation is the perspective projection

u = f·x/z,  v = f·y/z,

from which one obtains

∂u/∂f = x/z,  ∂v/∂f = y/z.

For a telecentric lens, this partial transformation is the parallel projection

u = m·x,  v = m·y,

where m is the magnification, from which one obtains

∂u/∂m = x,  ∂v/∂m = y.
for the distortion coefficient part, for the division model, then
Figure DEST_PATH_IMAGE088
For a polynomial model, then
Figure 49306DEST_PATH_IMAGE089
Thus, it is possible to obtain:
Figure DEST_PATH_IMAGE090
Figure 244795DEST_PATH_IMAGE091
writing derived variables into vector representations
Figure DEST_PATH_IMAGE092
Then the above formula can be expressed as:
Figure 845278DEST_PATH_IMAGE093
and (4) synthesizing the above parts, and obtaining the partial derivative of the whole internal reference part according to a chain rule.
Since no distortion is taken into account when the internal and external parameters are calculated in step 140, distortion correction can be added during the nonlinear optimization. In one embodiment, before each iteration of the nonlinear optimization, the calculated distortion coefficients are used to perform distortion correction on the tilted image plane coordinates of the feature points. Specifically, for the division model the inverse can be obtained directly, so the correction is computed in closed form:

ũ = u/(1 + κ(u^2 + v^2)),  ṽ = v/(1 + κ(u^2 + v^2)).
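The closed-form invertibility of the division model can be sketched as follows, assuming the common single-coefficient division model ũ = u/(1 + κr^2); the patent's exact coefficient convention may differ:

```python
import math

def undistort_division(u, v, kappa):
    """Distorted -> undistorted: direct closed form of the division model."""
    r2 = u * u + v * v
    s = 1.0 + kappa * r2
    return u / s, v / s

def distort_division(uu, vv, kappa):
    """Undistorted -> distorted: solve kappa*ru*rd^2 - rd + ru = 0 for rd,
    using the numerically stable form of the small root."""
    ru = math.hypot(uu, vv)
    if ru == 0.0 or kappa == 0.0:
        return uu, vv
    disc = 1.0 - 4.0 * kappa * ru * ru
    rd = 2.0 * ru / (1.0 + math.sqrt(disc))
    scale = rd / ru
    return uu * scale, vv * scale
```

Both directions are closed-form, which is why the division model needs no iteration in the correction step.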
for polynomial models, the assumed distortion model can be expressed as
Figure DEST_PATH_IMAGE094
Wherein
Figure 237393DEST_PATH_IMAGE095
Is the distorted tilted image plane coordinates,
Figure DEST_PATH_IMAGE096
as coordinates, vectors, in the camera coordinate systemf d Byf u Andf v is composed of two parts, and
Figure 54433DEST_PATH_IMAGE097
the corrected coordinates can thus be expressed as
Figure DEST_PATH_IMAGE098
To calculatef d It can be subjected to taylor expansion, considering only the linear part:
Figure 61703DEST_PATH_IMAGE099
thus, it is possible to obtain
Figure 993887DEST_PATH_IMAGE100
The distortion correction for the polynomial model therefore proceeds as follows:
(1) transform the image coordinates (r, c)^T to tilted image plane coordinates (x_d, y_d)^T;
(2) iteratively eliminate the distortion: initialize (x, y) = (x_d, y_d); compute f_d(x, y) and its Jacobian J_fd(x, y); then update (x, y) according to

(x, y)^T ← (x_d, y_d)^T − (I + J_fd(x, y))^{-1}·f_d(x, y)^T,

and iterate until convergence.
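The iterative elimination can be sketched with a purely radial polynomial f_d and a plain fixed-point update, a simplification that omits the Jacobian term and is adequate for small distortion; the coefficients are illustrative:

```python
import numpy as np

def f_d(p, k1, k2):
    """Radial polynomial distortion term: p * (k1*r^2 + k2*r^4)."""
    r2 = p[0] ** 2 + p[1] ** 2
    return p * (k1 * r2 + k2 * r2 ** 2)

def undistort_iterative(p_d, k1, k2, n_iter=20):
    """Solve p_u + f_d(p_u) = p_d by fixed-point iteration
    p <- p_d - f_d(p), starting from p = p_d."""
    p_d = np.asarray(p_d, float)
    p = p_d.copy()
    for _ in range(n_iter):
        p = p_d - f_d(p, k1, k2)
    return p
```

For stronger distortion, the Jacobian-based update of step (2) converges in fewer iterations than this fixed-point variant.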
according to the calibration method of the non-coaxial camera in the embodiment, for the non-coaxial camera, when a conversion model from world coordinates to image coordinates is established, the condition that the optical axis of the lens of the non-coaxial camera is not coaxial with the optical axis of the imaging plane is considered, and the concepts of the inclined imaging plane and the non-inclined imaging plane are introduced, wherein the inclined imaging plane is the imaging plane vertical to the optical axis of the lens of the non-coaxial camera, the non-inclined imaging plane is the imaging plane of the non-coaxial camera, and the inclined matrix is addedH tilt This parameter, which describes the transformation from the tilted image plane coordinate system to the non-tilted image plane coordinate system, will tilt the matrixH tilt The method is regarded as a part of camera internal parameters, and a mathematical model in the existing method is modified; and introduces a rotation angle around the X-axis of the original coordinate systemτAngle of rotation about Z axisρThe transformation between the inclined image plane and the non-inclined image plane is expressed, and the process of converting the coordinate system in the non-coaxial camera can be well described; in one embodiment, the tilted image plane coordinates are also entered during the non-linear optimization processAnd correcting line distortion. In summary, the calibration method of the non-coaxial camera provided by the application improves the calibration precision of the non-coaxial camera, and reduces the error of the working result generated in the process of applying the non-coaxial camera.
Referring to fig. 6, a method for extracting high-precision coordinates of feature points of a circular array calibration plate in step 120 is described, wherein the method includes steps 210 to 250.
Step 210: a calibration plate image is acquired. The acquired calibration board image may be captured by a coaxial camera or a non-coaxial camera.
Step 220: the calibration plate image is image-processed to obtain circular feature points therein. The image processing comprises binarization, filtering, feature screening and the like. Referring to FIG. 7, an exemplary process for obtaining circular feature points includes steps 310-340, which are described in detail below.
Step 310: perform edge extraction on the calibration plate image to obtain the calibration plate bounding box, and thereby the position of the calibration plate in the calibration plate image.
Step 320: construct an image pyramid for the calibration plate region, i.e. the region inside the calibration plate bounding box, to obtain the pyramid image of each layer. In the image pyramid, upper layers have low image resolution and lower layers have high image resolution. The specific number of pyramid layers can be set empirically.
Step 330: binarize the pyramid image of the current layer to search for circular feature points. The current layer is initially the topmost pyramid layer.
In one embodiment, the binarization can be performed iteratively with a gray-value step. Specifically, within a preset threshold interval, gray thresholds are selected from small to large at a preset spacing. Each time a gray threshold is selected, it is used to threshold the current-layer pyramid image to obtain circular regions; when the number of circular regions equals the preset number, it is judged that circular feature points satisfying the preset condition have been found in the current-layer pyramid image and no further threshold is selected; otherwise, the next gray threshold is selected and thresholding continues until the preset threshold interval has been traversed. For example, with a preset threshold interval of 50-90 and a step of 10 (i.e. a preset spacing of 10), the values 50, 60, 70, 80 and 90 are selected in turn as gray thresholds for thresholding the current-layer pyramid image until the number of circular regions equals the preset number. The preset threshold interval can be set empirically. After thresholding yields the circular regions, morphological processing and area screening can be applied to obtain more accurate results.
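The threshold sweep can be sketched as follows; a simple flood-fill blob count stands in for the patent's connected-region extraction, morphological processing and screening, and all parameter values are illustrative:

```python
def count_blobs(img, thresh):
    """Count 4-connected regions whose gray value is >= thresh."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] >= thresh and not seen[y][x]:
                blobs += 1
                stack = [(y, x)]          # iterative flood fill
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny][nx] >= thresh and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return blobs

def sweep_threshold(img, lo, hi, step, expected):
    """Try thresholds lo, lo+step, ... hi until the expected number of
    regions is found; return that threshold, or None if none matches."""
    t = lo
    while t <= hi:
        if count_blobs(img, t) == expected:
            return t
        t += step
    return None
```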
Step 340: judge whether circular feature points satisfying the preset condition have been found in the current-layer pyramid image. If so, the process ends; otherwise, the next-layer pyramid image is taken as the current-layer pyramid image and the process returns to step 330.
Referring to fig. 8, another embodiment of the process for obtaining the circular feature points includes steps 410-430, which are described in detail below.
Step 410: construct an image pyramid for the calibration plate image to obtain the pyramid image of each layer.
Step 420: search the pyramid image of the current layer for circular feature points. The current layer is initially the topmost pyramid layer.
The circular feature point search can be performed as follows: binarize the current-layer pyramid image to obtain circular regions; statistically analyze the areas of the circular regions to obtain the most frequently occurring area; calculate a radius from that area and multiply it by the magnification corresponding to the current pyramid layer to obtain an estimated radius of the circular feature points; then search the calibration plate image for circular feature points according to the estimated radius.
The estimated radius of the circular feature points can be obtained by histogram statistics. After the current-layer pyramid image has been binarized to obtain circular regions, the circular regions are first screened according to a preset roundness range and/or area range; a histogram of the areas of the screened circular regions is computed, establishing the mapping from area to frequency of occurrence, and the most frequent area is taken. A radius is then calculated from this area and multiplied by the magnification corresponding to the current pyramid layer, giving the estimated radius of the circular feature points. Finally, the calibration plate image is filtered according to the estimated radius and then thresholded to obtain feature point estimation regions, and regions whose area exceeds a preset area threshold are removed. The number of feature point estimation regions is counted; when it equals the preset number, it is judged that circular feature points satisfying the preset condition have been found in the current-layer pyramid image.
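The area-statistics step can be sketched as follows; the histogram bucket width and the way the pyramid magnification is applied are illustrative assumptions:

```python
import math
from collections import Counter

def estimate_feature_radius(areas, pyramid_scale, bucket=5):
    """Take the most frequent (bucketed) blob area, convert it to a
    radius via area = pi * r^2, and scale back to full resolution."""
    buckets = Counter(round(a / bucket) for a in areas)
    mode_bucket, _ = buckets.most_common(1)[0]
    mode_area = mode_bucket * bucket
    radius = math.sqrt(mode_area / math.pi)
    return radius * pyramid_scale
```

Taking the histogram mode rather than the mean makes the estimate robust to a few merged or fragmented regions.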
Step 430: judge whether circular feature points satisfying the preset condition have been found in the current-layer pyramid image. If so, the process ends; otherwise, the next-layer pyramid image is taken as the current-layer pyramid image and the process returns to step 420.
In the method of this embodiment for obtaining the circular feature points in the calibration plate image, the circular feature points are searched for by constructing an image pyramid, proceeding layer by layer from the top of the pyramid downward, and the search can stop as soon as circular feature points satisfying the preset condition are found in some layer. Because the upper pyramid layers have low resolution and small images, this improves the efficiency of the circular feature point search. In some embodiments, the binarization is performed iteratively with a gray-value step rather than with a single gray threshold, which helps extract the circular feature points more accurately.
Steps 230 to 250 are described below.
Step 230: perform edge extraction on the circular feature points to obtain their edge points, and perform ellipse fitting with the edge points to obtain the image coordinates of the circular feature points, where the image coordinates of a circular feature point are the image coordinates of its center.
Step 240: determine the correspondence between the image coordinates of the circular feature points and the world coordinates.
A commonly used calibration plate carries no reference marker, so a worker has to select a reference manually and compare the image coordinates of the feature points with the world coordinates to determine their correspondence, which is cumbersome. The present application provides two circular-array calibration plates with references, together with methods for determining the correspondence between the image coordinates of the circular feature points and the world coordinates. One is a circular-array calibration plate with a triangular marker, as shown in fig. 9: one corner of the plate carries a triangular marker, an isosceles right triangle whose right-angle vertex is one of the vertices of the circular-array calibration plate and whose other two vertices lie on the two sides of the plate adjacent to that vertex. The other is a circular-array calibration plate with hollow dots, as shown in fig. 10; the hollow dots are grouped into clusters, five clusters in fig. 10.
Referring to fig. 11, for the circular-array calibration plate with a triangular marker, determining the correspondence between the image coordinates of the circular feature points and the world coordinates includes the following steps:
Step 510: detect the triangular marker in the calibration plate image and determine the relative position of the circular feature points with respect to the triangular marker. The triangular marker can be detected by detecting the hypotenuse of the triangle.
Step 520: establish a reference coordinate system with the triangular marker as reference, and determine the one-to-one correspondence between the image coordinates of the circular feature points and the world coordinates from the parameter information of the circular-array calibration plate. On the basis of step 510, once the reference coordinate system is established, the positions of the circular feature points in the reference coordinate system are obtained; the reference coordinates correspond to the world coordinates, and with the parameter information of the circular-array calibration plate the one-to-one correspondence between the image coordinates of the circular feature points and the world coordinates can be determined.
Referring to fig. 12, for the circular-array calibration plate with hollow dots, determining the correspondence between the image coordinates of the circular feature points and the world coordinates includes the following steps:
Step 610: extract the hollow dots from the obtained circular feature points and divide them into different clusters with a clustering algorithm.
Step 620: for each cluster, find the hollow dot whose summed distance to all other hollow dots in the cluster is smallest and take it as the center point of the cluster; classify the non-hollow dots into the cluster whose center is closest.
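The center selection of step 620 (the hollow dot minimizing the summed distance to the others, i.e. the medoid) and the nearest-cluster assignment can be sketched as:

```python
import math

def cluster_center(points):
    """Return the point whose summed Euclidean distance to all other
    points in the cluster is smallest (the medoid)."""
    def total_dist(p):
        return sum(math.dist(p, q) for q in points)
    return min(points, key=total_dist)

def assign_to_nearest(point, centers):
    """Classify a (non-hollow) point into the cluster whose center
    point is closest; returns the cluster index."""
    return min(range(len(centers)),
               key=lambda i: math.dist(point, centers[i]))
```

Unlike a mean, the medoid is always one of the actual hollow dots, which is what step 620 requires.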
Step 630: determine the position of each cluster in the circular-array calibration plate from the arrangement of the hollow dots within the cluster. For example, in fig. 10 the hollow dots of the five clusters are arranged differently, and the position of each cluster in the circular-array calibration plate can be determined accordingly.
Step 640: take one cluster as the reference cluster and determine the relative position of the other clusters with respect to it, so that the relative positions of the circular feature points in the other clusters with respect to the reference cluster can be determined.
Step 650: establish a reference coordinate system with the center point of the reference cluster as origin, and determine the one-to-one correspondence between the image coordinates of the circular feature points and the world coordinates from the parameter information of the circular-array calibration plate. On the basis of step 640, once the reference coordinate system is established, the positions of the circular feature points in the reference coordinate system are obtained; the reference coordinates correspond to the world coordinates, and with the parameter information of the circular-array calibration plate the one-to-one correspondence between the image coordinates of the circular feature points and the world coordinates can be determined.
In one embodiment, after the correspondence between the image coordinates of the circular feature points and the world coordinates has been obtained, sub-pixel edge extraction can additionally be performed. Specifically, a homography matrix is calculated from the one-to-one correspondence between the image coordinates and world coordinates of the hollow dots in the clusters, and this homography is used to map the world coordinates of the other circular mark points into the image, giving mapping points. The circular feature points containing mapping points are then taken, and sub-pixel edge extraction and ellipse fitting are performed on them to obtain new edge points and new image coordinates of the circular feature points. Using the edge points and image coordinates obtained by sub-pixel edge extraction further improves the accuracy.
Step 250: perform error correction on the image coordinates of the circular feature points using the ellipse equation to obtain the final image coordinates of the circular feature points.
For a circle of radius r with center (x_0, y_0), the equation can be expressed in homogeneous coordinates P = (x, y, 1)^T as P^T F P = 0, with

F = [[1, 0, −x_0], [0, 1, −y_0], [−x_0, −y_0, x_0^2 + y_0^2 − r^2]];

F is called the ellipse equation matrix, and the center of the circle can be expressed as the homogeneous point c satisfying Fc ∝ (0, 0, 1)^T. The transformation from the world coordinate system to the image coordinate system can be expressed as P_i = H_t P_w; thus the transformed circle can be expressed as P_i^T F′ P_i = 0 with

F′ = H_t^{-T} F H_t^{-1},

and the transformed center can be expressed as the image H_t c of the circle center.
the transformation of the world coordinate system to the image coordinate system is expressed without considering distortion, and if the distortion is considered, the relation between the distortion and the non-distortion needs to be established. In an embodiment of the application, the elliptic equation matrix is used, a target function is established according to the idea that the difference between an observed value and an expected value is minimum, an undistorted image coordinate is solved, and error correction of the image coordinate of the circular feature point is achieved.
The ellipse before error correction is a distorted ellipse and can be expressed as:

p̃^T F̃ p̃ = 0,

where p̃ is the image coordinate of a circular feature point before error correction and F̃ is the distorted ellipse equation matrix. If distortion is not considered, the curve is a standard conic whose ellipse equation matrix can be written D, with curve equation

p^T D p = 0,

where p is the image coordinate of the error-corrected circular feature point. The transformation from D to F̃ can be represented by a transformation matrix H_D:

F̃ = H_D^{-T} D H_D^{-1}.
Diagonalizing the ellipse equation matrices yields:

D = U Λ U^T,  F̃ = Ũ Λ̃ Ũ^T,

where Λ = diag(λ_1, λ_2, λ_3); λ_1, λ_2 and λ_3 are the eigenvalues of the ellipse equation matrix D and U is the matrix of corresponding eigenvectors, while Λ̃ and Ũ are the eigenvalue and eigenvector matrices of the distorted ellipse equation matrix F̃. Letting

H_D = Ũ Λ̃^{-1/2} Λ^{1/2} U^T,

one verifies that H_D^{-T} D H_D^{-1} = Ũ Λ̃^{1/2} Λ^{-1/2} U^T · U Λ U^T · U Λ^{-1/2} Λ̃^{1/2} Ũ^T = Ũ Λ̃ Ũ^T = F̃, as required.
After $H_D$ is obtained, the image coordinates $p_i$ of the error-corrected circular feature points can be solved according to the following objective function:

$\min_{p_i} \left\| \tilde p_i - H_D\, p_i \right\|^2$

wherein the subscript $i$ denotes the $i$-th point.
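The eigen-decomposition step above can be sketched numerically. The sketch assumes the construction $H_D = \tilde U\,\tilde\Lambda^{-1/2}\Lambda^{1/2}U^{\top}$ with absolute values of the eigenvalues (an ellipse matrix is indefinite) and consistently scaled conic matrices; `conic_transfer_matrix` is a hypothetical helper name, not from the source.

```python
import numpy as np

def conic_transfer_matrix(D, D_tilde):
    """Recover H_D such that D_tilde = H_D^{-T} D H_D^{-1}.

    Eigenvalues are paired by ascending order (eigh), which keeps the
    signs aligned for congruent conics of consistent scale."""
    lam, U = np.linalg.eigh(D)            # D symmetric: D = U diag(lam) U^T
    lam_t, U_t = np.linalg.eigh(D_tilde)
    H_D = (U_t
           @ np.diag(np.abs(lam_t) ** -0.5)
           @ np.diag(np.abs(lam) ** 0.5)
           @ U.T)
    return H_D

# synthetic check: distort the unit circle x^2 + y^2 - 1 = 0 by a known map
D = np.diag([1.0, 1.0, -1.0])
H_true = np.array([[1.2, 0.1, 0.3],
                   [0.0, 0.9, -0.2],
                   [0.0, 0.0, 1.0]])
Hinv = np.linalg.inv(H_true)
D_tilde = Hinv.T @ D @ Hinv

H_D = conic_transfer_matrix(D, D_tilde)
HDinv = np.linalg.inv(H_D)
residual = np.linalg.norm(HDinv.T @ D @ HDinv - D_tilde)
```

The recovered $H_D$ need not equal the original map (eigenvector ambiguity), but it reproduces the same conic relation, which is all the correction step requires.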
In another embodiment, the ellipse equation can be solved by means of ellipse fitting to obtain the image coordinates of the center point of the fitted ellipse; these are compared with the image coordinates of the center point of the unfitted ellipse, and the deviation between the two is used directly for error correction of the image coordinates of the circular feature points. The ellipse fitting may be performed according to the following objective function:

$\min_{a,b,c,d,e,f}\ \sum_{i=1}^{n} w_i \left( a x_i^2 + b x_i y_i + c y_i^2 + d x_i + e y_i + f \right)^2$

for the ellipse equation

$a x^2 + b x y + c y^2 + d x + e y + f = 0$

wherein $a$, $b$, $c$, $d$, $e$, $f$ are the coefficients of the ellipse equation, $(x_i, y_i)$ are the image coordinates of the edge points of the circular feature points, $w_i$ is the weight, and $n$ is the number of edge points.
Then, the image coordinates of the center point of the fitted ellipse are calculated according to the following formula:

$x_c = \dfrac{b e - 2 c d}{4 a c - b^2}, \qquad y_c = \dfrac{b d - 2 a e}{4 a c - b^2}$
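The fitting step and the center formula can be sketched as follows; `fit_ellipse` and `ellipse_center` are illustrative names, and the fit assumes the standard smallest-singular-vector solution under a unit-norm constraint on the coefficient vector.

```python
import numpy as np

def fit_ellipse(xs, ys, w=None):
    """Weighted algebraic fit of a x^2 + b x y + c y^2 + d x + e y + f = 0.

    Returns (a, b, c, d, e, f) up to scale (||coeffs|| = 1)."""
    w = np.ones_like(xs) if w is None else w
    M = np.column_stack([xs**2, xs*ys, ys**2, xs, ys, np.ones_like(xs)])
    M = M * w[:, None]
    # minimizer of ||M q||^2 subject to ||q|| = 1: last right singular vector
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]

def ellipse_center(coeffs):
    """Center of the conic: gradient of the quadratic form set to zero."""
    a, b, c, d, e, _ = coeffs
    den = 4.0*a*c - b*b
    return (b*e - 2.0*c*d) / den, (b*d - 2.0*a*e) / den

# edge points sampled from a known ellipse centred at (2, 3)
t = np.linspace(0.0, 2.0*np.pi, 60, endpoint=False)
xs = 2.0 + 3.0*np.cos(t)
ys = 3.0 + 1.0*np.sin(t)
cx, cy = ellipse_center(fit_ellipse(xs, ys))
```

On noiseless edge points the fitted center matches the true center; with real edge data the weights $w_i$ can down-weight unreliable edge points.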
The image coordinates of the center point of the unfitted ellipse are calculated as the gray-weighted centroid:

$\bar x = \dfrac{\sum_{p_i \in F} I(p_i)\, x_i}{\sum_{p_i \in F} I(p_i)}, \qquad \bar y = \dfrac{\sum_{p_i \in F} I(p_i)\, y_i}{\sum_{p_i \in F} I(p_i)}$

wherein $F$ is the set of points in the area of the circular feature point, i.e. all points in the whole circle, $p_i$ is a point in the set, $I(p_i)$ is the gray value of point $p_i$, $(x_i, y_i)$ are the image coordinates of point $p_i$, and the subscript $i$ denotes the $i$-th point.
The deviation between the image coordinates $(x_c, y_c)$ and $(\bar x, \bar y)$ is then computed and used to compensate the image coordinates of the circular feature points, completing the error correction.
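A minimal sketch of the gray-weighted centroid and the compensation step, assuming the feature region $F$ is given as a rectangular intensity patch; the patch, the fitted center value and the function name are illustrative.

```python
import numpy as np

def gray_centroid(patch):
    """Intensity-weighted centroid over all pixels of a feature patch F:
    x_bar = sum I(p) x / sum I(p), and likewise for y."""
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    s = patch.sum()
    return (patch * xs).sum() / s, (patch * ys).sum() / s

# toy patch: a uniform 3x3 blob centred at (x, y) = (2, 1) on a dark background
patch = np.zeros((4, 5))
patch[0:3, 1:4] = 1.0
x_bar, y_bar = gray_centroid(patch)

# compensation: shift the fitted centre by the deviation (centroid - fit)
fitted = np.array([2.2, 1.1])              # hypothetical fitted ellipse centre
deviation = np.array([x_bar, y_bar]) - fitted
corrected = fitted + deviation
```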
According to the above method for extracting high-precision feature point coordinates from a circular array calibration plate, the calibration plate image is processed to obtain the circular feature points; edge extraction and ellipse fitting are then performed on the circular feature points to obtain their image coordinates; after the image coordinates are obtained, an ellipse equation is used for error correction of the image coordinates, which can be realized through ellipse fitting, error compensation and the like. In the process of obtaining the circular feature points, an image pyramid is constructed and searched layer by layer from the top downward, and the search stops as soon as circular feature points satisfying the preset conditions are found in some layer. Because the upper layers of the image pyramid have lower resolution and smaller images, this improves the efficiency of the search for circular feature points. In some embodiments, the binarization is performed iteratively according to a gray-value step rather than with a single gray threshold, which helps extract the circular feature points more accurately. In summary, the method effectively improves the precision and efficiency of feature point coordinate extraction, and thereby the precision of camera calibration.
Calibration is equally important for a line scan camera, and the application therefore also provides a calibration method for line scan cameras. In this method, the line scan camera is modeled, a line scan camera model is proposed, and the parameters of the model are solved to complete the calibration. The line scan camera model is introduced first.
Referring to fig. 13, the line scan camera model represents the coordinate transformation from world coordinates to image coordinates in a line scan camera. Because a line scan camera has only a single row of photosensitive elements, the object to be photographed must move for a complete image to be captured: as the object moves, the camera continuously scans it line by line and stitches the lines into a complete image. The motion of the object can be expressed as a vector $V = (V_x, V_y, V_z)^{\top}$, wherein $V_x$, $V_y$, $V_z$ denote the speeds of the object in the $x$, $y$ and $z$ directions respectively. The coordinate transformation in a line scan camera can be divided into two parts: the transformation among the world coordinate system, the camera coordinate system and the image plane coordinate system, and the transformation between the image plane coordinate system and the image coordinate system. Since the object is always moving, the world coordinates can be expressed in terms of the motion vector.
The transformation among the world coordinate system, the camera coordinate system and the image plane coordinate system is expressed by a first transformation equation. For a non-telecentric lens such as a CCTV lens, the first transformation equation is as follows [equation not legible in source], wherein $t$ denotes time, $(x_c, y_c, z_c)^{\top}$ denotes the coordinates in the camera coordinate system, $\lambda$ is a preset coefficient, $f$ denotes the focal length of the line scan camera, $\tilde u$ denotes the abscissa of the distorted image plane coordinates $\tilde p_v$, $c_y$ is the ordinate of the principal point $(c_x, c_y)$, $s_y$ is the extension length in the $y$ direction, indicating the distance the object moves for each line scanned by the camera, and $p_v$ denotes the undistorted image plane coordinates calculated from $\tilde p_v$ using the distortion model.
For a telecentric lens, the first transformation equation takes the analogous form [equation not legible in source], wherein $m$ is the magnification of the lens.
The transformation between the image plane coordinate system and the image coordinate system can be expressed by a second transformation equation [equation not legible in source], wherein $s_x$ is the pixel size of the line scan camera in the $x$ direction and $(c, r)^{\top}$ denotes the image coordinates.
When the transformation among the world coordinate system, the camera coordinate system and the image plane coordinate system was established, lens distortion was not considered and the undistorted image plane coordinates were used; however, the actually obtained image plane coordinates are necessarily the distorted coordinates $\tilde p_v$. It is therefore necessary to apply the distortion model to $\tilde p_v$ to obtain the undistorted coordinates, denoted here as $p_v$. The distortion model may be a division model or a polynomial model.
The first transformation equation, the second transformation equation and the distortion model form the line scan camera model of the present application. Referring to fig. 14, a calibration method of a line-scan camera in an embodiment includes steps 710 to 730, which are described in detail below.
Step 710: a calibration plate image is acquired. As mentioned above, the calibration plate image is obtained by continuously scanning the moving calibration plate with the line scan camera. The calibration plate can be a checkerboard calibration plate, a circular array calibration plate, etc.
Step 720: acquire the feature points in the calibration plate image and their image coordinates. For a checkerboard calibration plate the feature points are the corner points of the checkerboard; for a circular array calibration plate the feature points are the centers of the circular feature points, i.e. the circular patterns on the calibration plate. The image coordinates of the feature points may be obtained by image processing of the calibration plate image; the image coordinates of the $i$-th feature point can be expressed as $(c_i, r_i)^{\top}$.
Step 730: the initial values of the parameters in the line scan camera model are preset, nonlinear optimization is carried out according to a preset loss function, the parameters of the line scan camera model are obtained, and therefore calibration of the line scan camera is completed.
The parameters of the line scan camera model to be solved comprise the focal length $f$, the principal point $(c_x, c_y)$, the pixel size $s_x$ of the line scan camera in the $x$ direction, the extension length $s_y$ in the $y$ direction, the motion vector $V = (V_x, V_y, V_z)^{\top}$, and the distortion coefficients of the distortion model. Initial values for these parameters may be set in advance according to experience, and a suitable initial value of the motion vector may be found by an initialization parameter search: a preset $(V_x, V_y, V_z)$ input by the user is first received, then a 3 × 3 space containing that value is searched, and the candidate with the smallest error is selected as the initial $(V_x, V_y, V_z)$, where smallest error means that the two sides of the first transformation equation are closest.
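The initialization search might be sketched as below, assuming the 3 × 3 search is carried out per component (a 3 × 3 × 3 grid around the user-supplied value) and that a `residual` callable measuring the first-transformation-equation mismatch is available; both assumptions and the function name are illustrative.

```python
import itertools
import numpy as np

def init_motion_vector(v0, residual, step=0.1):
    """Grid-search a 3x3x3 neighbourhood of the user-supplied guess v0 and
    keep the candidate whose model residual is smallest."""
    best_v, best_err = None, np.inf
    for dx, dy, dz in itertools.product((-step, 0.0, step), repeat=3):
        v = (v0[0] + dx, v0[1] + dy, v0[2] + dz)
        err = residual(v)
        if err < best_err:
            best_v, best_err = v, err
    return best_v

# toy residual with a known minimum at (1.0, 0.1, 0.0)
target = np.array([1.0, 0.1, 0.0])
v = init_motion_vector((0.9, 0.0, 0.1),
                       lambda c: float(np.sum((np.array(c) - target) ** 2)))
```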
The loss function is established in the image coordinate system and is set according to the image coordinates of the feature points; it is defined as:

$d = \sum_{i=1}^{n} \left\| \tilde p_i - q_i \right\|^2$

wherein $\tilde p_i$ denotes the distorted image coordinates, $q_i$ denotes the undistorted image coordinates, $n$ denotes the total number of feature points, and the subscript $i$ denotes the $i$-th feature point.
The image coordinates obtained in step 720 are distorted image coordinates; the corresponding undistorted image coordinates can be calculated according to the distortion model and the second transformation equation and substituted into the loss function [substituted expression not legible in source].
For the undistorted image plane coordinates, if a division model is used, they can be expressed as:

$u = \dfrac{\tilde u}{1 + \kappa\,(\tilde u^2 + \tilde v^2)}, \qquad v = \dfrac{\tilde v}{1 + \kappa\,(\tilde u^2 + \tilde v^2)}$

wherein $\kappa$ is the distortion coefficient and $(\tilde u, \tilde v)$ are the distorted image plane coordinates.
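The source shows the division model only as an image, so the sketch below assumes the conventional single-coefficient form in which the distorted coordinates are divided by $1 + \kappa r_d^2$:

```python
def undistort_division(u_d, v_d, kappa):
    """Division model: undistorted = distorted / (1 + kappa * r_d^2)."""
    s = 1.0 + kappa * (u_d * u_d + v_d * v_d)
    return u_d / s, v_d / s

# kappa = 0 must be the identity mapping
u0, v0 = undistort_division(0.3, -0.2, 0.0)
# a positive kappa pulls points toward the principal point
u1, v1 = undistort_division(0.3, -0.2, 0.5)
```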
If a polynomial model is used, the distortion model can be expressed through a vector function $f_d$ that maps the undistorted image plane coordinates to the distorted image plane coordinates; $f_d$ is composed of two parts, a radial part and a tangential part, and in the conventional form

$f_d(u, v) = \begin{pmatrix} u\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 u v + p_2\,(r^2 + 2 u^2) \\ v\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1\,(r^2 + 2 v^2) + 2 p_2 u v \end{pmatrix}, \qquad r^2 = u^2 + v^2$

wherein $k_1$, $k_2$, $k_3$, $p_1$, $p_2$ are the distortion coefficients.
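Since the polynomial model equation appears only as an image in the source, the sketch below uses the conventional radial-plus-tangential formulation with the five coefficients named in the text; treat the exact form as an assumption.

```python
def distort_polynomial(u, v, k1, k2, k3, p1, p2):
    """Map undistorted image-plane (u, v) to distorted coordinates using a
    standard radial (k1..k3) + tangential (p1, p2) polynomial model."""
    r2 = u * u + v * v
    radial = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
    u_d = u * radial + 2.0 * p1 * u * v + p2 * (r2 + 2.0 * u * u)
    v_d = v * radial + p1 * (r2 + 2.0 * v * v) + 2.0 * p2 * u * v
    return u_d, v_d

# all coefficients zero must give the identity mapping
u_d, v_d = distort_polynomial(0.2, -0.1, 0.0, 0.0, 0.0, 0.0, 0.0)
```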
The undistorted coordinates can therefore be expressed through the inverse of $f_d$. To calculate it, $f_d$ can be expanded in a Taylor series, considering only the linear part, which yields approximate expressions for the undistorted coordinates [expanded expressions not legible in source].
For a non-telecentric lens, an expression for the camera coordinates can be obtained from the first transformation equation and substituted into the second transformation equation [intermediate expressions not legible in source]. The resulting expression for the predicted image coordinates is an expression in the parameters to be solved, so it can be substituted into the loss function, and the parameters of the line scan camera model are obtained by solving the loss function.
Similarly, for a telecentric lens, the corresponding expression is obtained from the first transformation equation in the same way [intermediate expressions not legible in source]; it is likewise an expression in the parameters to be solved, it is substituted into the loss function, and the parameters of the line scan camera model are obtained by solving the loss function.
The solution process can adopt the LM algorithm for iterative computation, and the parameter update in each iteration can be expressed as

$q_{k+1} = q_k + \delta$

wherein $q_k$ denotes the vector of line scan camera model parameters at the $k$-th iteration, and $\delta$ is determined by the formula

$(J^{\top} J + \mu I)\, \delta = -J^{\top} \varepsilon$

wherein $\mu$ is the damping factor, $J$ is the Jacobian matrix, $\varepsilon$ is the vector formed by the residuals of all feature points, and $\delta$ is the update to the vector composed of the parameters of the line scan camera model. The parameter vector depends on the lens and the distortion model: when the lens of the line scan camera is a non-telecentric lens and the distortion model is a polynomial model, $q = (f, c_x, c_y, s_x, s_y, V_x, V_y, V_z, k_1, k_2, k_3, p_1, p_2)^{\top}$; when the lens is a non-telecentric lens and the distortion model is a division model, $q = (f, c_x, c_y, s_x, s_y, V_x, V_y, V_z, \kappa)^{\top}$; when the lens is telecentric and the distortion model is a polynomial model, $q = (m, c_x, c_y, s_x, s_y, V_x, V_y, V_z, k_1, k_2, k_3, p_1, p_2)^{\top}$; when the lens is telecentric and the distortion model is a division model, $q = (m, c_x, c_y, s_x, s_y, V_x, V_y, V_z, \kappa)^{\top}$.

The Jacobian matrix $J$ consists of the partial derivatives of the residuals with respect to the parameters to be determined. For the polynomial model, the distortion coefficients are collected in a vector $(k_1, k_2, k_3, p_1, p_2)^{\top}$ and the corresponding partial derivatives are computed accordingly [explicit expressions not legible in source].
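The LM iteration can be sketched generically. The sketch assumes the damped normal equations $(J^{\top}J+\mu I)\delta=-J^{\top}\varepsilon$ with a forward-difference Jacobian and a simple accept/reject rule, and is demonstrated on a toy linear residual rather than the actual line scan camera model.

```python
import numpy as np

def levenberg_marquardt(residual, q0, iters=50, mu=1e-3):
    """Minimal LM loop: q_{k+1} = q_k + delta,
    with (J^T J + mu I) delta = -J^T eps and J by forward differences."""
    q = np.asarray(q0, dtype=float)
    for _ in range(iters):
        eps = residual(q)
        J = np.empty((eps.size, q.size))
        h = 1e-6
        for j in range(q.size):
            dq = q.copy()
            dq[j] += h
            J[:, j] = (residual(dq) - eps) / h
        delta = np.linalg.solve(J.T @ J + mu * np.eye(q.size), -J.T @ eps)
        q_new = q + delta
        if np.sum(residual(q_new) ** 2) < np.sum(eps ** 2):
            q, mu = q_new, mu * 0.5       # accept step, relax damping
        else:
            mu *= 10.0                    # reject step, increase damping
    return q

# toy problem: fit (a, b) in y = a*x + b to noiseless data
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + 0.5
q_hat = levenberg_marquardt(lambda q: q[0] * x + q[1] - y, [0.0, 0.0])
```

In the calibration itself, `residual` would evaluate the difference between observed and model-predicted image coordinates for all feature points, and analytic partial derivatives would replace the numeric Jacobian.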
According to the calibration method of the line scan camera in this embodiment, a line scan camera model representing the coordinate transformation from world coordinates to image coordinates is pre-established, and initial values of its parameters are set. During calibration, the line scan camera continuously scans the moving calibration plate to obtain a calibration plate image; the feature points and their image coordinates are then extracted from the image, and the parameters of the line scan camera model are nonlinearly optimized according to a preset loss function, which is set according to the image coordinates of the feature points. The parameters of the line scan camera model are finally obtained, completing the calibration of the line scan camera.
Reference is made herein to various exemplary embodiments. However, those skilled in the art will recognize that changes and modifications may be made to the exemplary embodiments without departing from the scope hereof. For example, the various operational steps, as well as the components used to perform the operational steps, may be implemented in differing ways depending upon the particular application or consideration of any number of cost functions associated with operation of the system (e.g., one or more steps may be deleted, modified or incorporated into other steps).
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. Additionally, as will be appreciated by one skilled in the art, the principles herein may be reflected in a computer program product on a computer-readable storage medium pre-loaded with computer-readable program code. Any tangible, non-transitory computer-readable storage medium may be used, including magnetic storage devices (hard disks, floppy disks, etc.), optical storage devices (CD-ROM, DVD, Blu-ray discs, etc.), flash memory, and/or the like. These computer program instructions may be loaded onto a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions executed on the computer or other programmable data processing apparatus create means for implementing the specified functions. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including means for implementing the specified functions. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus, producing a computer-implemented process such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the specified functions.
Those skilled in the art will recognize that many changes may be made to the details of the above-described embodiments without departing from the underlying principles of the invention. Accordingly, the scope of the invention should be determined only by the claims.

Claims (10)

1. A calibration method for a non-coaxial camera, the non-coaxial camera comprising an image plane and a lens, wherein a normal vector of the image plane is not coaxial with an optical axis of the lens, the calibration method comprising:
acquiring a calibration plate image shot by a non-coaxial camera;
acquiring feature points in the calibration plate image, and image coordinates and corresponding world coordinates of the feature points;
calculating a homography matrix according to the image coordinates of the feature points and the corresponding world coordinatesH
according to a preset conversion model from world coordinates to image coordinates, carrying out decomposition calculation on the homography matrix to obtain internal parameters and external parameters of the non-coaxial camera, wherein the internal parameters comprise a tilt matrix H tilt, the tilt matrix H tilt representing a transformation from the tilted image plane coordinate system to the non-tilted image plane coordinate system, the non-tilted image plane being an image plane perpendicular to the optical axis of the lens and the tilted image plane being the actual image plane of the non-coaxial camera;
and carrying out nonlinear optimization on the distortion coefficient of the non-coaxial camera and the internal parameter and the external parameter obtained by decomposition to obtain the final internal parameter, external parameter and distortion coefficient of the non-coaxial camera.
2. A calibration method according to claim 1, wherein the conversion model is:

$z_c \begin{pmatrix} r \\ c \\ 1 \end{pmatrix} = \underbrace{\begin{pmatrix} 1/s_y & 0 & c_y \\ 0 & 1/s_x & c_x \\ 0 & 0 & 1 \end{pmatrix} H_{tilt} \begin{pmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{pmatrix}}_{\text{internal reference part}} \underbrace{\begin{pmatrix} R & t \end{pmatrix}}_{\text{external reference part}} \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix}$

wherein $(r, c)^{\top}$ is the image coordinates of the feature point and $(x_w, y_w, z_w)^{\top}$ is the world coordinates of the feature point; $(R\ \ t)$ is the transformation matrix from the world coordinate system to the camera coordinate system, $R$ being the rotation matrix and $t$ the displacement matrix; $z_c$ is the $z$ coordinate of the feature point in the camera coordinate system; the matrix containing the focal length $f$ of the non-coaxial camera is the transformation matrix from the camera coordinate system to the tilted image plane coordinate system; the matrix containing $s_x$ and $s_y$, the pixel sizes in the horizontal and vertical directions of the non-coaxial camera, and the principal point $(c_x, c_y)$ is the transformation matrix from the non-tilted image plane coordinate system to the image coordinate system; and the homography matrix $H$ is formed from the product of the internal reference part and the external reference part.
3. The calibration method according to claim 2, wherein the lens of the non-coaxial camera is a non-telecentric lens or an object-side telecentric lens, and the tilt matrix $H_{tilt}$ is expressed [matrix not legible in source] in terms of the translation distance $d$ from the tilted image plane to the non-tilted image plane and the elements $q_{11}, q_{12}, q_{13}, q_{21}, q_{22}, q_{23}, q_{31}, q_{32}, q_{33}$ of a rotation matrix $Q$, the rotation matrix $Q$ representing the rotational transformation of the tilted image plane with respect to an original coordinate system and being composed of a rotation by the angle $\rho$ about the $Z$ axis and a rotation by the angle $\tau$ about the $X$ axis [matrix not legible in source], wherein the $X$ axis of the original coordinate system is the horizontal direction of the non-tilted image plane, the $Y$ axis is the vertical direction of the non-tilted image plane, and the $Z$ axis is the normal of the non-tilted image plane.
4. A calibration method according to claim 2 or 3, wherein performing the decomposition calculation on the homography matrix to obtain the internal reference and the external reference of the non-coaxial camera comprises:

calculating the parameter matrix $A$ according to the following constraint conditions:

$h_1^{\top} A^{-\top} A^{-1} h_2 = 0, \qquad h_1^{\top} A^{-\top} A^{-1} h_1 = h_2^{\top} A^{-\top} A^{-1} h_2$

wherein $h_1$ is the first column vector of the homography matrix $H$, $h_2$ is the second column vector of the homography matrix $H$, $h_3$ is the third column vector of the homography matrix $H$, $r_1$ is the first column vector of the rotation matrix $R$, and $r_2$ is the second column vector of the rotation matrix $R$;

calculating the matrix $[r_1\ r_2\ t]$ according to $r_1 = A^{-1} h_1$, $r_2 = A^{-1} h_2$, $t = A^{-1} h_3$; and

calculating the tilt matrix $H_{tilt}$ from the parameter matrix $A$ [equation not legible in source].
5. A calibration method according to claim 2 or 3, wherein the external reference of the non-coaxial camera further comprises an equivalent rotation axis $k$ and an equivalent axis angle $\theta$, and performing the decomposition calculation on the homography matrix to obtain the internal reference and the external reference of the non-coaxial camera comprises:

calculating the parameter matrix $A$ according to the following constraint conditions:

$h_1^{\top} A^{-\top} A^{-1} h_2 = 0, \qquad h_1^{\top} A^{-\top} A^{-1} h_1 = h_2^{\top} A^{-\top} A^{-1} h_2$

calculating the matrix $[r_1\ r_2\ t]$ according to $r_1 = A^{-1} h_1$, $r_2 = A^{-1} h_2$, $t = A^{-1} h_3$;

wherein $h_1$ is the first column vector of the homography matrix $H$, $h_2$ is the second column vector of the homography matrix $H$, $h_3$ is the third column vector of the homography matrix $H$, $r_1$ is the first column vector of the rotation matrix $R$, and $r_2$ is the second column vector of the rotation matrix $R$;

obtaining the equivalent rotation axis $k$ and the equivalent axis angle $\theta$ from the calculated rotation matrix $R$, wherein the transformation relationship between the rotation matrix $R$, the equivalent rotation axis $k$ and the equivalent axis angle $\theta$ is:

$R = \cos\theta\, I + (1 - \cos\theta)\, k k^{\top} + \sin\theta \begin{pmatrix} 0 & -k_z & k_y \\ k_z & 0 & -k_x \\ -k_y & k_x & 0 \end{pmatrix}$

wherein $k_x$, $k_y$, $k_z$ are the three components of the equivalent rotation axis $k$; and

calculating the tilt matrix $H_{tilt}$ from the equivalent rotation axis $k$ and the equivalent axis angle $\theta$ [equation not legible in source].
6. The calibration method according to claim 2, wherein performing nonlinear optimization on the distortion coefficients of the non-coaxial camera and the decomposed internal and external parameters to obtain the final internal parameters, external parameters and distortion coefficients of the non-coaxial camera comprises:

presetting initial values of the distortion coefficients, taking the internal and external parameters obtained by decomposition as their initial values, and iteratively solving the optimal solution of the following loss function to obtain the final internal parameters, external parameters and distortion coefficients of the non-coaxial camera:

$d = \sum_{k=1}^{n_c} \sum_{l=1}^{n_0} \sum_{j=1}^{n_m} v_{jkl} \left\| \pi(p_{jkl}) - \rho(\Delta T_k\, T_l,\ p_j) \right\|^2$

wherein $n_m$ is the number of feature points in a calibration plate image, $n_c$ is the number of cameras, $n_0$ is the number of calibration plate images taken by each camera, $p_j$ is the coordinate of the feature point in the world coordinate system, $T_l$ denotes the pose of the calibration plate image in the reference camera, $\Delta T_k$ denotes the transformation of the $k$-th camera with respect to the reference camera, $p_{jkl}$ is the image coordinates of the $j$-th feature point in the $l$-th calibration plate image taken by the $k$-th camera, and $v_{jkl}$ takes the value 0 or 1: it is 1 if the $j$-th feature point is visible in the $l$-th calibration plate image taken by the $k$-th camera, and 0 otherwise; the function $\pi(\cdot)$ represents the transformation from the image coordinate system to the tilted image plane coordinate system, comprising transforming from the image coordinate system to the tilted image plane coordinate system using the internal reference and inversely distorting the tilted image plane coordinates using the distortion coefficients; the function $\rho(\cdot)$ represents the transformation from the world coordinate system to the tilted image plane coordinate system, comprising transforming the world coordinate system to the camera coordinate system using the external reference.
7. The calibration method according to claim 6, further comprising: before each iteration, performing distortion correction on the tilted image plane coordinates of the feature points using the currently calculated distortion coefficients.
8. Calibration method according to claim 6, characterized in that the optimal solution is solved iteratively according to the formula

$q_{k+1} = q_k + \delta$

wherein $q_k$ denotes the vector composed of the internal parameters, external parameters and distortion coefficients of the non-coaxial camera at the $k$-th iteration, and $\delta$ is determined by the formula

$(J^{\top} J + \mu I)\, \delta = -J^{\top} \varepsilon$

wherein $\mu$ is the damping factor, $\varepsilon$ is the vector formed, at the current iteration, by the differences between the transformed observed image coordinates and the projected world coordinates of all feature points in each calibration plate image taken by each camera, and $J$ is the Jacobian matrix, which is composed of the Jacobian matrices of the individual cameras; the Jacobian matrix of the $i$-th camera [matrix not legible in source] is composed of the partial derivatives with respect to the internal reference and the external reference corresponding to the $j$-th calibration plate image taken by the $i$-th camera.
9. The calibration method according to claim 1, wherein the calibration plate image is a circular array calibration plate image; acquiring the characteristic points in the calibration board image and the image coordinates of the characteristic points by the following method:
carrying out image processing on the calibration plate image to obtain circular feature points in the calibration plate image;
performing edge extraction on the circular feature points to obtain edge points of the circular feature points, and performing ellipse fitting by using the edge points to obtain image coordinates of the circular feature points, wherein the image coordinates of the circular feature points refer to image coordinates of the circle centers of the circular feature points;
determining the corresponding relation between the image coordinates of the circular feature points and world coordinates;
and carrying out error correction on the image coordinates of the circular feature points by using an ellipse equation to obtain the final image coordinates of the circular feature points.
10. A computer-readable storage medium, characterized in that the medium has stored thereon a program which is executable by a processor for implementing a calibration method as claimed in any one of claims 1 to 9.
CN202111526560.7A 2021-12-15 2021-12-15 Calibration method of non-coaxial camera Active CN113920205B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202210129725.5A CN114494464A (en) 2021-12-15 2021-12-15 Calibration method of line scanning camera
CN202210131436.9A CN114463442A (en) 2021-12-15 2021-12-15 Calibration method of non-coaxial camera
CN202111526560.7A CN113920205B (en) 2021-12-15 2021-12-15 Calibration method of non-coaxial camera
CN202210130372.0A CN114529613A (en) 2021-12-15 2021-12-15 Method for extracting characteristic point high-precision coordinates of circular array calibration plate

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111526560.7A CN113920205B (en) 2021-12-15 2021-12-15 Calibration method of non-coaxial camera

Related Child Applications (3)

Application Number Title Priority Date Filing Date
CN202210130372.0A Division CN114529613A (en) 2021-12-15 2021-12-15 Method for extracting characteristic point high-precision coordinates of circular array calibration plate
CN202210131436.9A Division CN114463442A (en) 2021-12-15 2021-12-15 Calibration method of non-coaxial camera
CN202210129725.5A Division CN114494464A (en) 2021-12-15 2021-12-15 Calibration method of line scanning camera

Publications (2)

Publication Number Publication Date
CN113920205A true CN113920205A (en) 2022-01-11
CN113920205B CN113920205B (en) 2022-03-18

Family

ID=79249214

Family Applications (4)

Application Number Title Priority Date Filing Date
CN202210130372.0A Pending CN114529613A (en) 2021-12-15 2021-12-15 Method for extracting characteristic point high-precision coordinates of circular array calibration plate
CN202111526560.7A Active CN113920205B (en) 2021-12-15 2021-12-15 Calibration method of non-coaxial camera
CN202210129725.5A Pending CN114494464A (en) 2021-12-15 2021-12-15 Calibration method of line scanning camera
CN202210131436.9A Pending CN114463442A (en) 2021-12-15 2021-12-15 Calibration method of non-coaxial camera

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210130372.0A Pending CN114529613A (en) 2021-12-15 2021-12-15 Method for extracting characteristic point high-precision coordinates of circular array calibration plate

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202210129725.5A Pending CN114494464A (en) 2021-12-15 2021-12-15 Calibration method of line scanning camera
CN202210131436.9A Pending CN114463442A (en) 2021-12-15 2021-12-15 Calibration method of non-coaxial camera

Country Status (1)

Country Link
CN (4) CN114529613A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115930784A (en) * 2023-01-09 2023-04-07 广州市易鸿智能装备有限公司 Point inspection method of visual inspection system
CN116188594A (en) * 2022-12-31 2023-05-30 梅卡曼德(北京)机器人科技有限公司 Calibration method, calibration system, calibration device and electronic equipment of camera

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114862866B (en) * 2022-07-11 2022-09-20 深圳思谋信息科技有限公司 Calibration plate detection method and device, computer equipment and storage medium
CN117135454A (en) * 2023-01-13 2023-11-28 荣耀终端有限公司 Image processing method, device and storage medium
CN116878388B (en) * 2023-09-07 2023-11-14 东莞市兆丰精密仪器有限公司 Line scanning measurement method, device and system and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009017480A (en) * 2007-07-09 2009-01-22 Nippon Hoso Kyokai <Nhk> Camera calibration device and program thereof
US20120133779A1 (en) * 2010-11-29 2012-05-31 Microsoft Corporation Robust recovery of transform invariant low-rank textures
CN107680139A (en) * 2017-10-17 2018-02-09 中国人民解放军国防科技大学 Universal calibration method for a telecentric binocular stereo vision measurement system
CN108447098A (en) * 2018-03-13 2018-08-24 深圳大学 Telecentric tilt-shift camera calibration method and system
CN110298888A (en) * 2019-06-12 2019-10-01 上海智能制造功能平台有限公司 Camera calibration method based on uniaxial high precision displacement platform

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123146A (en) * 2017-03-20 2017-09-01 深圳市华汉伟业科技有限公司 Mark positioning method and system for a calibration plate image
CN107274454B (en) * 2017-06-14 2020-12-15 昆明理工大学 Method for extracting feature points of a circular array calibration plate
CN109816733B (en) * 2019-01-14 2023-08-18 京东方科技集团股份有限公司 Camera parameter initialization method and device, camera parameter calibration method and device and image acquisition system
KR102297683B1 (en) * 2019-07-01 2021-09-07 (주)베이다스 Method and apparatus for calibrating a plurality of cameras
CN111145238B (en) * 2019-12-12 2023-09-22 中国科学院深圳先进技术研究院 Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
CN113012234B (en) * 2021-03-16 2022-09-02 中国人民解放军火箭军工程大学 High-precision camera calibration method based on plane transformation
CN113610917A (en) * 2021-08-09 2021-11-05 河南工业大学 Circular array target center image point positioning method based on vanishing points

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116188594A (en) * 2022-12-31 2023-05-30 梅卡曼德(北京)机器人科技有限公司 Calibration method, calibration system, calibration device and electronic equipment of camera
CN116188594B (en) * 2022-12-31 2023-11-03 梅卡曼德(北京)机器人科技有限公司 Calibration method, calibration system, calibration device and electronic equipment of camera
CN115930784A (en) * 2023-01-09 2023-04-07 广州市易鸿智能装备有限公司 Point inspection method of visual inspection system
CN115930784B (en) * 2023-01-09 2023-08-25 广州市易鸿智能装备有限公司 Point inspection method of visual inspection system

Also Published As

Publication number Publication date
CN114529613A (en) 2022-05-24
CN114494464A (en) 2022-05-13
CN114463442A (en) 2022-05-10
CN113920205B (en) 2022-03-18

Similar Documents

Publication Publication Date Title
CN113920205B (en) Calibration method of non-coaxial camera
CN108648240B (en) Non-overlapping field-of-view camera attitude calibration method based on point cloud feature map registration
CN110969668B (en) Stereo calibration algorithm of long-focus binocular camera
CN109598762B (en) High-precision binocular camera calibration method
Tang et al. A precision analysis of camera distortion models
CN102376089B (en) Target correction method and system
CN109272574B (en) Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation
CN112465912B (en) Stereo camera calibration method and device
Von Gioi et al. Towards high-precision lens distortion correction
WO2018201677A1 (en) Bundle adjustment-based calibration method and device for telecentric lens-containing three-dimensional imaging system
CN112802124A (en) Calibration method and device for multiple stereo cameras, electronic equipment and storage medium
CN110738608B (en) Plane image correction method and system
CN112929626B (en) Three-dimensional information extraction method based on smartphone image
JP6641729B2 (en) Line sensor camera calibration apparatus and method
CN105118086A (en) 3D point cloud data registering method and system in 3D-AOI device
CN112258588A (en) Calibration method and system of binocular camera and storage medium
JP2004317245A (en) Distance detection device, distance detection method and distance detection program
CN115457147A (en) Camera calibration method, electronic device and storage medium
CN113920206A (en) Calibration method of perspective tilt-shift camera
CN111462246B (en) Equipment calibration method of structured light measurement system
CN113793266A (en) Multi-view machine vision image splicing method, system and storage medium
JP5998532B2 (en) Correction formula calculation method, correction method, correction apparatus, and imaging apparatus
CN116625258A (en) Chain spacing measuring system and chain spacing measuring method
CN113962853B (en) Automatic precise resolving method for rotary linear array scanning image pose
CN116071433A (en) Camera calibration method and system, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant