CN116385557A - Camera calibration method, camera calibration device, computer equipment and storage medium - Google Patents

Camera calibration method, camera calibration device, computer equipment and storage medium

Info

Publication number
CN116385557A
Authority
CN
China
Prior art keywords
camera
point
matching
points
image
Prior art date
Legal status
Pending
Application number
CN202310293380.1A
Other languages
Chinese (zh)
Inventor
Name withheld at the inventor's request
Current Assignee
Shenzhen Xhorse Electronics Co Ltd
Original Assignee
Shenzhen Xhorse Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Xhorse Electronics Co Ltd filed Critical Shenzhen Xhorse Electronics Co Ltd
Priority to CN202310293380.1A priority Critical patent/CN116385557A/en
Publication of CN116385557A publication Critical patent/CN116385557A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to a camera calibration method, a camera calibration device, computer equipment and a storage medium. The method comprises the following steps: for each camera of a binocular camera, acquiring a first matching point pair in which a mark point on the epipolar corrected image is matched with a mark point on the calibration object; mapping the mark points on the epipolar corrected image to the original image based on the epipolar correction reverse mapping relation to obtain original image mapping points, the original image being the image shot by the camera; matching the original image mapping points with the mark points on the original image to obtain a second matching point pair, which comprises a mark point on the epipolar corrected image and the matched mark point on the original image; determining a target matching point pair corresponding to each camera based on the first matching point pair and the second matching point pair, the target matching point pair comprising a mark point on the original image and the matched mark point on the calibration object; and calibrating the camera based on the target matching point pairs. With this method, the equipment parameters can be detected and corrected in real time.

Description

Camera calibration method, camera calibration device, computer equipment and storage medium
Technical Field
The application relates to the technical field of computers, in particular to a camera calibration method, a camera calibration device, computer equipment and a storage medium.
Background
Factory-set parameters matter to anyone who works with electronic devices: they include the device's parameter values, displayed variables, operating logic, and so on. For a handheld 3D binocular scanning device, the factory calibration parameters of the binocular system are especially important.
Camera calibration algorithms have been continuously updated and developed over the past decades. Zhang Zhengyou improved on traditional calibration approaches such as etched patterns and manual recording: a digital camera can be calibrated by simple computation from just a few pictures containing identification points, so the method has been used and refined by enthusiasts and researchers in many related fields.
Feedback and research from many practitioners of the Zhang Zhengyou calibration method show that, although simple to use, it serves many everyday purposes, such as correcting captured camera images and three-dimensional reconstruction with low precision requirements. Notably, its accuracy cannot be improved further unless each step of the algorithm is modified.
To achieve binocular camera calibration under more complex conditions and higher precision requirements, a traditional approach combines the iteration values and iteration residuals of each step of the calibration equations to provide a higher-precision calibration algorithm; on the residual-optimization side, this improves the iteration accuracy of the whole system of equations and yields more precise calibration.
However, complicated environments and improper operation may cause slight structural movement of the binocular scanning device, that is, poor device stability: the relative positional relationship between the binocular cameras and the scanning light source often shifts slightly, so the factory-calibrated parameters may no longer fit the device well.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a camera calibration method, apparatus, computer device and storage medium capable of detecting and correcting device parameters in real time.
A method of mark point matching, the method comprising:
acquiring a scanning point cloud obtained by shooting a calibration object under the current scanning visual angle; the calibration object includes mark points;
matching the calibration object point cloud with the scanning point cloud to obtain a first matching point pair; the first matching point pair comprises a calibration object matching point and a matched scanning matching point;
projecting the scanning matching points to an image coordinate system of a camera to obtain image mapping points;
acquiring a calibration object image obtained by shooting the calibration object with the camera;
matching the image mapping points with the mark points on the calibration object image to obtain a second matching point pair; the second matching point pair comprises a scanning matching point and the matched mark point on the calibration object image;
and for the camera, determining, based on the first matching point pair and the second matching point pair, a target matching point pair in which a mark point on the calibration object image is matched with a mark point on the calibration object.
A mark point matching apparatus, the apparatus comprising:
a scanning point cloud acquisition module, configured to acquire a scanning point cloud obtained by shooting a calibration object under the current scanning visual angle; the calibration object includes mark points;
a first matching module, configured to match the calibration object point cloud with the scanning point cloud to obtain a first matching point pair; the first matching point pair comprises a calibration object matching point and a matched scanning matching point;
a projection module, configured to project the scanning matching points to an image coordinate system of the camera to obtain image mapping points;
a calibration object image acquisition module, configured to acquire a calibration object image obtained by shooting the calibration object with the camera;
a second matching module, configured to match the image mapping points with the mark points on the calibration object image to obtain a second matching point pair; the second matching point pair comprises a scanning matching point and the matched mark point on the calibration object image;
and a target matching module, configured to determine, for the camera, a target matching point pair in which a mark point on the calibration object image is matched with a mark point on the calibration object, based on the first matching point pair and the second matching point pair.
A computer device comprising a memory storing a computer program and a processor that implements the steps of the camera calibration method embodiments when executing the computer program.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the camera calibration method embodiments.
With the above camera calibration method, apparatus, computer device and storage medium, for each camera a first matching point pair is obtained in which a mark point on the epipolar corrected image is matched with a mark point on the calibration object; the mark points on the epipolar corrected image are then mapped to the original image based on the epipolar correction reverse mapping relation to determine a second matching point pair; and a target matching point pair corresponding to each camera is determined based on the first matching point pair and the second matching point pair. By matching the calibration object, the epipolar corrected image and the original image separately, as many point pairs as possible are obtained; the multiple rounds of matching yield more accurate target mark point pairs, and the equipment parameters can be detected and corrected in real time, so that better scanning results are obtained.
Drawings
FIG. 1 is an application environment diagram of a camera calibration method in one embodiment;
FIG. 2 is a flow chart of a camera calibration method according to an embodiment;
FIG. 3 is a flow chart of camera calibration based on target matching point pairs in one embodiment;
FIG. 4 is a schematic flow chart of matching a calibration object point cloud with a scanning point cloud in one embodiment;
FIG. 5 is a schematic diagram of marker features in one embodiment;
FIG. 6 is a schematic representation of features between three pairs of points in one embodiment;
FIG. 7 is a schematic diagram of an image coordinate system of projecting scan matching points to a camera in one embodiment;
FIG. 8 is a block diagram of a camera calibration apparatus in one embodiment;
FIG. 9 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without undue burden, are within the scope of the present application.
It should be noted that all directional indicators (such as up, down, left, right, front and rear) in the embodiments of the present application are merely used to explain the relative positional relationships, movement conditions, etc. between components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicators change accordingly. A connection may be a direct connection or an indirect connection.
In addition, descriptions involving "first," "second," and the like are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features concerned; a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. The technical solutions of the embodiments may be combined with each other, provided the combination can be realized by a person skilled in the art; when technical solutions are contradictory or cannot be realized, their combination should be regarded as nonexistent and outside the protection scope of the present application.
The terms "first," "second," and the like, as used herein, may describe various elements, but these elements are not limited by these terms; the terms only distinguish one element from another. For example, a first matching point pair could be termed a second matching point pair, and similarly a second matching point pair could be termed a first matching point pair, without departing from the scope of the present application. The first matching point pair and the second matching point pair are both matching point pairs, but they are not the same matching point pair.
It is to be understood that in the following embodiments, "connected" means "electrically connected," "communicatively connected," etc., when the connected circuits, modules, units, etc. transfer electrical signals or data between one another.
The mark point matching method can be applied to the application environment shown in FIG. 1. FIG. 1 is a diagram of the application environment of the mark point matching method in one embodiment. The terminal device 110 may be, but is not limited to, a personal computer, notebook computer, smart phone, tablet computer, or portable wearable device. The number of cameras 120 may be 2; in the following embodiments, the multi-camera setup is exemplified by a binocular camera, and the description is given in terms of the captured images. The calibration object 130 is used for calibration and has preset mark points on its surface. The calibration object may be any object, such as a calibration plate, a cube, or a stereo head portrait, and the positions of the preset mark points can be changed as required.
In one embodiment, as shown in FIG. 2, the flow of the camera calibration method includes the following steps:
step 202, for each camera in the dual cameras, a first matching point pair is obtained, wherein the marking point on the polar correction image is matched with the marking point on the calibration object.
The polar line correction is to take a certain target coordinate system as a main point, rotate, translate and shrink the respective camera coordinate system and focal plane to the target coordinate system to form a focal plane with a new common focal length, so that the polar line pair is collinear and parallel to a certain coordinate axis of the focal plane. The epipolar line corrected image refers to an image captured by a camera and subjected to epipolar line correction. The first matching point pair comprises a marking point on the polar correction image and a marking point on a marker matched with the marking point on the polar correction image.
Specifically, each camera has a corresponding first matching point pair. The terminal equipment can carry out a first matching point pair of the marking point on the polar correction image and the marking point on the calibration object through a marking point matching algorithm.
Step 204: map the mark points on the epipolar corrected image to the original image based on the epipolar correction reverse mapping relation to obtain original image mapping points; the original image is the image shot by the camera.
The original image is the image captured by the camera without epipolar correction. The epipolar correction mapping relation converts the original image into the epipolar corrected image; the epipolar correction reverse mapping relation converts the epipolar corrected image back into the original image.
Specifically, epipolar correction generates four lookup tables giving, for each pixel of the corrected image, the corresponding sub-pixel coordinates in the originally captured image. A reverse mapping table can therefore be built from the epipolar correction logic: the sub-pixel mark point coordinates of the original image are mapped into the epipolar corrected image coordinate system through two-dimensional bilinear interpolation, so that the correspondence between each pixel in the epipolar corrected image and a pixel in the original image can be found. The terminal device maps the mark points on the epipolar corrected image to the original image based on the epipolar correction reverse mapping relation to obtain the original image mapping points. In epipolar correction, a matrix C describes the transformation from the original image coordinate system to the corrected image coordinate system, and the matrix C' describes the transformation from the corrected image coordinate system back to the original image coordinate system.
P_ori = C' P_rect

[The formula image in the original publication defines this mapping in terms of P_l, P_o and f.]

where P_l is the sub-pixel coordinate of a point after epipolar correction, P_o is the center coordinate of the epipolar corrected image (the camera principal point in the new coordinate system), and f is the focal length. P_rect denotes a point in the epipolar corrected image, and P_ori the corresponding point in the original image.
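As a minimal sketch of the table-based reverse lookup described above (not the patent's implementation): map_x and map_y are hypothetical remap tables, each giving, for the integer pixels of the corrected image, the sub-pixel source coordinate in the original image; the interpolation is the two-dimensional bilinear interpolation mentioned above.

```python
import numpy as np

def rectified_to_original(pt_rect, map_x, map_y):
    """Map a sub-pixel point on the epipolar corrected image back to the
    original image. map_x/map_y are (H, W) numpy arrays holding, for each
    integer pixel of the corrected image, the sub-pixel source coordinate
    in the original image. Boundary handling is omitted in this sketch."""
    u, v = pt_rect
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0

    def interp(tab):
        # Two-dimensional bilinear interpolation of the four surrounding entries.
        return ((1 - du) * (1 - dv) * tab[v0, u0] +
                du * (1 - dv) * tab[v0, u0 + 1] +
                (1 - du) * dv * tab[v0 + 1, u0] +
                du * dv * tab[v0 + 1, u0 + 1])

    return interp(map_x), interp(map_y)
```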
Step 206: match the original image mapping points with the mark points on the original image to obtain a second matching point pair; the second matching point pair contains a mark point on the epipolar corrected image and the matched mark point on the original image.
Here the second matching point pair comprises a mark point on the epipolar corrected image and the mark point on the original image matched with it.
Specifically, the terminal device matches the original image mapping points with the mark points on the original image to obtain the mark points on the epipolar corrected image and the matched mark points on the original image.
Step 208: determine a target matching point pair corresponding to each camera based on the first matching point pair and the second matching point pair; the target matching point pair comprises a mark point on the original image and the matched mark point on the calibration object.
Specifically, the first matching point pair contains a mark point on the epipolar corrected image and the matched mark point on the calibration object, and the second matching point pair contains a mark point on the epipolar corrected image and the matched mark point on the original image. The terminal device may therefore take the mark point on the epipolar corrected image in the first matching point pair, determine the matched mark point on the original image from the second matching point pair, and obtain the target matching point pair. Alternatively, the terminal device may take the mark point on the epipolar corrected image in the second matching point pair, determine the matched mark point on the calibration object from the first matching point pair, and obtain the target matching point pair.
Step 210: calibrate the camera based on the target matching point pairs.
Specifically, the terminal device substitutes the target matching point pairs into the reference collinearity equations to obtain the target collinearity equations, and determines the camera parameters of each camera, such as the camera intrinsic parameters and the relative pose parameters, based on the target collinearity equations.
In this embodiment, for each camera, a first matching point pair in which a mark point on the epipolar corrected image is matched with a mark point on the calibration object is obtained; the mark points on the epipolar corrected image are then mapped to the original image based on the epipolar correction reverse mapping relation to determine a second matching point pair; and the target matching point pair corresponding to each camera is determined from the first and second matching point pairs. Matching the calibration object, the epipolar corrected image and the original image separately yields as many point pairs as possible; the multiple rounds of matching produce more accurate target mark point pairs, allow the equipment parameters to be detected and corrected in real time, and compensate the unstable factors of each device to obtain better scanning results.
In one embodiment, matching the original image mapping points with the mark points on the original image to obtain the second matching point pair includes: adding a distortion coefficient to each original image mapping point and adjusting it until the original image mapping point matches an adjacent mark point on the original image, thereby obtaining the second matching point pair.
Specifically, for each original image mapping point, the terminal device applies a distortion coefficient to the mapping point and adjusts it until the mapping point matches an adjacent mark point on the original image; the mark point on the epipolar corrected image and the matched mark point on the original image then form a second matching point pair.
Distortion is added to P_ori because the epipolar corrected image has been undistorted; to recover the relation between points in the original image and the corrected image, the distortion must be re-applied, giving

[The formula image in the original publication gives the distortion model used.]

where x_0, y_0 are the pixel coordinates of the principal point, i.e. the projection of the camera optical center on the image plane.
P_ori is computed iteratively with the above equation; when it hardly changes any more, a fixed point is reached, and the correspondence between the mark point pixel coordinates of the original image and the epipolar corrected image can then be obtained from the coordinate deviation.
In this embodiment, since epipolar correction not only makes the images row-aligned but also removes image distortion, when mapping from the epipolar corrected image back to the original image it is necessary to apply the distortion coefficient to each original image mapping point until the point matches its adjacent mark point; the correct matching point pair can then be found, and the distortion coefficient obtained in the process can be used in subsequent processing.
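An illustrative sketch of the fixed-point iteration just described; the patent's actual distortion model is in the formula image above, so a two-coefficient radial model and the parameter names (k1, k2, fx, fy) are assumptions made here for concreteness.

```python
import numpy as np

def add_distortion(p_undist, k1, k2, fx, fy, x0, y0, n_iter=20, tol=1e-8):
    """Fixed-point iteration that re-applies an (assumed) radial distortion
    to an undistorted pixel so it lands on the original, distorted image."""
    # Normalized, undistorted coordinates.
    xu = (p_undist[0] - x0) / fx
    yu = (p_undist[1] - y0) / fy
    xd, yd = xu, yu                      # initial guess for distorted coords
    for _ in range(n_iter):
        r2 = xd * xd + yd * yd           # radius of the current estimate
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        xd_new, yd_new = xu * factor, yu * factor
        if abs(xd_new - xd) < tol and abs(yd_new - yd) < tol:
            break                        # "hardly changes any more"
        xd, yd = xd_new, yd_new
    return xd * fx + x0, yd * fy + y0    # back to pixel coordinates
```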
In one embodiment, calibrating the camera based on the target matching point pairs includes: for each camera, calibrating the camera based on the target matching point pairs to obtain calibrated camera intrinsic parameters; comparing the calibrated camera intrinsic parameters with the reference camera intrinsic parameters, determining target camera intrinsic parameters based on the comparison result, and calibrating the camera based on the target camera intrinsic parameters and the target matching point pairs.
The calibrated camera intrinsic parameters are computed from the target matching point pairs. The reference camera intrinsic parameters are the intrinsic parameters recorded when the camera left the factory.
Specifically, for each camera, the terminal device substitutes the target matching point pairs into the reference collinearity equations to obtain the target collinearity equations, determines the calibrated camera intrinsic parameters from the target collinearity equations, compares them with the reference camera intrinsic parameters, and selects the target camera intrinsic parameters from the two based on the comparison result.
In this embodiment, comparing the calibrated and reference camera intrinsic parameters and selecting the target camera intrinsic parameters from them based on the comparison allows the more accurate intrinsic parameters to be chosen for subsequent camera calibration, thereby improving calibration accuracy.
In one embodiment, determining the target camera intrinsic parameters from the calibrated camera intrinsic parameters and the reference camera intrinsic parameters based on the comparison result, and calibrating the camera based on the target camera intrinsic parameters and the target matching point pairs, comprises:
when the difference between the calibrated camera intrinsic parameters and the reference camera intrinsic parameters is within a preset range, taking the reference camera intrinsic parameters as the target camera intrinsic parameters, and calibrating the camera based on the reference camera intrinsic parameters and the target matching point pairs.
Specifically, the difference being within a preset range means, for example, that the absolute difference between the calibrated and reference camera intrinsic parameters is at most a preset difference, or that their relative difference is at most a preset ratio; the difference being outside the preset range is the opposite case and is not repeated here.
When the difference is within the preset range, the calibrated intrinsic parameters deviate little from the reference intrinsic parameters, the camera has remained stable, and the factory reference intrinsic parameters are taken to be accurate; the relative pose parameters of the cameras, i.e. the camera extrinsic parameters, are then determined based on the reference camera intrinsic parameters and the target matching point pairs.
When the difference is not within the preset range, the calibrated intrinsic parameters deviate too much from the reference intrinsic parameters, the camera itself may be faulty, returning the device to the factory for single-camera recalibration is recommended, and subsequent camera calibration is not performed.
In this embodiment, when the difference between the calibrated and reference camera intrinsic parameters is within the preset range, the reference camera intrinsic parameters are accurate, so camera calibration can be performed accurately based on the reference camera intrinsic parameters and the target matching point pairs.
In one embodiment, it was found in the research of the present application that the whole binocular calibration serves the subsequent epipolar correction. During this process most practitioners prefer libraries such as OpenCV, i.e. the Zhang Zhengyou calibration method, but during epipolar correction that method applies large rotations and translations to both cameras, so the requirements on the binocular calibration result are high. The embodiments of the present application note, however, that the principle of epipolar correction is to pull the distorted original images onto the same epipolar lines through undistortion and transformation, forming distortion-free corrected images in which each pair of corresponding points lies on one epipolar line. How the camera planes are rotated and transformed can therefore be chosen freely in the model; afterwards only translation and cropping are needed. On this basis, if the corrected left camera plane is kept almost identical to the left camera coordinate system, the effect of the left camera's extrinsic error on epipolar correction is almost zero. Thus, to simplify the binocular calibration model and accelerate computation, the embodiments of the present application fix the left camera parameters and treat the relative pose of the two cameras and the right camera extrinsic parameters in the adjustment model. Since the right camera extrinsic parameters can be replaced by the left camera extrinsic parameters together with the relative pose of the cameras, the whole model only needs to handle the relative pose between the cameras, which greatly reduces computational complexity. Because epipolar correction is driven mainly by the left camera, the final error is very small and comparable to the effect of iterating over all parameters. It will be appreciated that, depending on the specific needs of the right camera, the left camera plane may also be mapped into the right camera coordinate system.
FIG. 3 is a schematic flow chart of camera calibration based on the target matching point pairs in one embodiment, which includes:
Step 302: substitute the target matching point pairs into the reference collinearity equations to obtain the target collinearity equations.
The collinearity equations are the mathematical expression of the fact that an object point, its image point and the projection center lie on one straight line. For single-camera calibration, the bundle adjustment model studies the collinearity equations, and the parameters of each camera must satisfy the following reference collinearity equations:

x - x_0 + Δx = -f · [a_1(X - X_s) + b_1(Y - Y_s) + c_1(Z - Z_s)] / [a_3(X - X_s) + b_3(Y - Y_s) + c_3(Z - Z_s)]

y - y_0 + Δy = -f · [a_2(X - X_s) + b_2(Y - Y_s) + c_2(Z - Z_s)] / [a_3(X - X_s) + b_3(Y - Y_s) + c_3(Z - Z_s)]

where Δx and Δy are the distortion terms, (x, y) are the pixel coordinates of the image point, and (X, Y, Z) are the true coordinates of the mark point on the calibration plate.
The target matching point pairs contain the pixel coordinates (x, y) of the original image together with the coordinates (X, Y, Z) of the matched mark points on the calibration object, so the target collinearity equations are obtained after substitution.
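A minimal sketch evaluating the collinearity equations above (the distortion terms are omitted here, and the argument names are illustrative, not from the patent):

```python
import numpy as np

def collinearity_project(Xw, R, camera_pos, f, x0, y0):
    """Project a 3D mark point (X, Y, Z) to pixel coordinates with the
    collinearity equations. R is a 3x3 numpy rotation matrix whose rows are
    (a1, b1, c1), (a2, b2, c2), (a3, b3, c3); camera_pos is (Xs, Ys, Zs)."""
    d = np.asarray(Xw, float) - np.asarray(camera_pos, float)  # (X-Xs, Y-Ys, Z-Zs)
    den = R[2] @ d                       # common denominator
    x = x0 - f * (R[0] @ d) / den
    y = y0 - f * (R[1] @ d) / den
    return x, y
```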
Step 304: take the partial derivative of the target collinearity equations with respect to each matrix element of the first camera extrinsic matrix to obtain a first relation.
Here x_0, y_0 are the pixel coordinates of the principal point, i.e. the projection of the camera optical center on the image plane, and f is the focal length of the camera. The extrinsic parameters correspond to the a_i, b_i, c_i, which are determined by the camera attitude angles φ, ω, κ and the camera position X_s, Y_s, Z_s. It will be appreciated that the distortion terms may be included in or omitted from the collinearity equations.
The a_i, b_i, c_i above, the attitude angles φ, ω, κ, and the camera position X_s, Y_s, Z_s are all unknowns. The pixel coordinates of the image points can be read from the image, and the true coordinates of the calibration points on the calibration plate can be obtained by other calibration means.
The first camera extrinsic matrix is determined by the first camera extrinsic parameters; specifically, the first camera extrinsic matrix R_r is

R_r = [ a_r1  b_r1  c_r1 ;  a_r2  b_r2  c_r2 ;  a_r3  b_r3  c_r3 ]

where the subscript r denotes the first camera; a_r1 in R_r corresponds to a_1 in the collinearity equations, and so on, which is not elaborated here. The elements of the first camera extrinsic matrix are likewise unknowns.
The collinearity equations thus contain the matrix elements of the first camera extrinsic matrix, so taking partial derivatives with respect to these elements is easier and faster.
Step 306: take the partial derivative of the relative pose parameter matrix with respect to each relative pose parameter, and determine the product of this partial derivative and the preset second camera extrinsic matrix to obtain a second relation.
Specifically, the relative pose parameter matrix is determined by the relative pose parameters φ_c, ω_c, κ_c; the relative pose parameter matrix R_c is

R_c = [ a_c1  b_c1  c_c1 ;  a_c2  b_c2  c_c2 ;  a_c3  b_c3  c_c3 ]

The preset second camera extrinsic matrix is the known camera extrinsic matrix pre-stored in the terminal device:

R_l = [ a_l1  b_l1  c_l1 ;  a_l2  b_l2  c_l2 ;  a_l3  b_l3  c_l3 ]

where the parameter values a, b and c of R_l are known.
The second relation expresses the partial derivative of each matrix element of the first camera extrinsic matrix R_r with respect to a relative pose parameter, where R_r is unknown. Then

∂R_r / ∂ξ_c = (∂R_c / ∂ξ_c)' · R_l

where ξ_c stands for one of the relative pose parameters.
Step 308: determine the partial derivative function relation of the collinearity equations with respect to the relative pose parameters based on the product of the first relation and the second relation.
Specifically, the collinearity equations are rewritten as

x = F,  y = G

where F and G denote the right-hand projection functions of the x- and y-component collinearity equations. The partial derivative function relation of the collinearity equation with respect to a relative pose parameter is then, by the chain rule,

∂F / ∂ξ_c = (∂F / ∂R_r) · (∂R_r / ∂ξ_c)

that is, the partial derivative of the collinearity equation with respect to a relative pose parameter decomposes into the partial derivative of the collinearity equation with respect to the first camera extrinsic matrix, multiplied by the partial derivative of the first camera extrinsic matrix with respect to the relative pose parameter.
Step 310: obtain the pixel deviation relation between the true image coordinates and the target collinearity equations based on the product of the increments of the relative pose parameters and the partial derivative function relation.
The pixel deviation relation can be written for the x direction or for the y direction; the two are computed in the same way.
Specifically, based on a first-order Taylor expansion, the collinearity equation can be linearized with respect to each extrinsic parameter:

x - F ≈ Σ (∂F / ∂ξ_r) Δξ_r
y - G ≈ Σ (∂G / ∂ξ_r) Δξ_r

where ξ_r denotes a first camera extrinsic parameter and Δξ_r its increment. Since the relative pose parameter matrix relates the first camera extrinsic matrix to the preset second camera extrinsic matrix by

R_c = R_l R'_r

the partial derivatives with respect to the first camera extrinsic parameters in the above relations can be converted into partial derivatives with respect to the relative pose parameters between the two cameras, and rewritten as

x - F ≈ Σ (∂F / ∂ξ_c) Δξ_c
y - G ≈ Σ (∂G / ∂ξ_c) Δξ_c

The left-hand sides are then the pixel deviation between the true image x-coordinate and the collinearity equation F, and the pixel deviation between the true image y-coordinate and the collinearity equation G.
Step 312: iterate the pixel deviation relation; when the iteration completes, the relative pose parameter values of the first camera are obtained.
Specifically, the terminal device iterates the relative pose parameters in the pixel deviation relation; when the iteration reaches a preset number of rounds, or when the error of the pixel deviation relation is at most a preset error, the relative pose parameter values of the first camera are obtained.
In this embodiment, analysis shows that the coefficients in the pixel deviation relation can be decomposed into the first relation and the second relation; the collinearity equations contain the matrix elements of the first camera extrinsic matrix, and the relative pose parameter matrix contains the relative pose parameters, so the problem is solved in matrix form: the cumbersome partial derivatives with respect to individual variables in the collinearity equations are replaced by partial derivatives with respect to matrices. The model form is simple, programming is very convenient, the computational complexity is greatly reduced, the relative pose parameter values can be computed quickly, and the binocular calibration result is consistent with that of the traditional approach.
In one embodiment, iterating the pixel deviation relation to obtain the relative pose parameter values of the first camera comprises:
adjusting the values of the relative pose parameters in the pixel deviation relation, and obtaining the relative pose parameter values of the first camera when the pixel deviation relation satisfies the error condition.
The error condition may be, for example, that the value of the pixel deviation relation is at most a preset error.
Specifically, the terminal device adjusts the values of the relative pose parameters in the pixel deviation relation; when the value of the pixel deviation relation, or its square, is at most the preset error, the adjusted values at that moment are taken as the relative pose parameter values of the first camera.
In this embodiment, only the following problem needs to be optimized over the pixel deviation relation:

min Σ (x - F)²

The expression above takes the square of the x-direction pixel deviation as an example; the y-coordinate is handled in the same way. It will be appreciated that the x- and y-direction pixel deviation relations may also be iterated separately.
In this embodiment, the values of the relative pose parameters in the pixel deviation relation are adjusted, and the relative pose parameter values of the first camera are obtained once the pixel deviation relation satisfies the error condition. Only one model needs to be iterated, which greatly reduces computational complexity, improves the efficiency of determining the camera relative pose parameter values, and thus improves calibration efficiency.
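A Gauss-Newton sketch of the iteration in steps 310-312; residual_fn and jacobian_fn are hypothetical callables (not names from the patent) that assemble the stacked pixel deviations x - F, y - G and their chain-rule partial derivatives described above.

```python
import numpy as np

def refine_relative_pose(xi0, residual_fn, jacobian_fn, n_iter=50, tol=1e-10):
    """Minimize the squared pixel deviation over the relative pose
    parameters xi by Gauss-Newton iteration.
    residual_fn(xi) -> (m,) array of pixel deviations;
    jacobian_fn(xi) -> (m, n) array of their derivatives w.r.t. xi."""
    xi = np.asarray(xi0, dtype=float)
    for _ in range(n_iter):
        r = residual_fn(xi)              # pixel deviation relation
        J = jacobian_fn(xi)              # d(residual)/d(xi)
        # Solve J @ delta = -r in the least-squares sense.
        delta = np.linalg.lstsq(J, -r, rcond=None)[0]
        xi = xi + delta
        if np.dot(r, r) <= tol:          # error condition met
            break
    return xi
```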
In one embodiment, the method further comprises: substituting the relative pose parameter values into the relative pose parameter matrix to obtain the target relative pose matrix; and obtaining the target first camera extrinsic matrix from the product of the transpose of the target relative pose matrix and the preset second camera extrinsic matrix.
Specifically, the values in the target first camera extrinsic matrix are all known. The first camera extrinsic matrix and the relative pose matrix can be converted into each other, namely:

R_2 = R'_c R_1

The target relative pose matrix is thus obtained by substituting the relative pose parameter values into the relative pose parameter matrix, and the target first camera extrinsic matrix is obtained from the product of the transpose of the target relative pose matrix and the preset second camera extrinsic matrix.
In this embodiment, substituting the relative pose parameter values into the relative pose parameter matrix yields the target relative pose matrix; the target first camera extrinsic matrix is then obtained from the product of its transpose and the preset second camera extrinsic matrix. The extrinsic matrices of both cameras are thereby available, and combined with the known intrinsic parameters of the first and second cameras, all calibration parameters of the binocular camera are obtained.
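In code, the conversion R_2 = R'_c R_1 above reduces to a single matrix product; a minimal sketch assuming the rotations are 3×3 numpy arrays:

```python
import numpy as np

def target_extrinsic(R_c, R_l):
    """Target first-camera extrinsic rotation from the optimized relative
    pose matrix R_c and the preset (known) second-camera rotation R_l,
    per the relation above: R_target = R_c^T @ R_l."""
    return R_c.T @ R_l
```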
In one embodiment, the relative pose parameter matrix contains the relative pose parameters, and the collinearity equations contain the matrix elements of the first camera extrinsic matrix; the partial derivative function relation is determined from the product of two partial derivative functions decomposed by the chain rule, one being the partial derivative of the collinearity equation with respect to the first camera extrinsic matrix, the other being the partial derivative of the first camera extrinsic matrix with respect to the relative pose parameters.
Specifically, directly computing the partial derivatives of the collinearity equations with respect to the relative pose parameters is computationally complex. Analysis shows that the two partial derivative functions obtained by the chain-rule decomposition are both very simple to compute, so the relative pose parameter values can be obtained quickly.
In one embodiment, as shown in FIG. 4, the flow of matching the calibration object point cloud with the scanning point cloud includes:
Step 402: acquire the scanning point cloud obtained by shooting the calibration object under the current scanning visual angle; the calibration object includes mark points.
The current scanning visual angle is the visual angle at which the camera to be calibrated scans. The scanning point cloud consists of the three-dimensional coordinates of the mark points in the current camera coordinate system; it also carries the position information and vector information of the mark points at the current scanning visual angle.
Specifically, the binocular camera shoots the calibration object under the current scanning visual angle and synthesizes the images to obtain the scanning point cloud. The terminal device acquires, in real time, the scanning point cloud obtained by shooting the calibration object under the current scanning visual angle.
Step 404: match the calibration object point cloud with the scanning point cloud to obtain reference matching point pairs; a reference matching point pair comprises a calibration object matching point and the matched scanning matching point.
The calibration object point cloud consists of the three-dimensional coordinates of the mark points, computed from the binocular vision parameters from images of the calibration object taken by precisely calibrated left and right cameras. Similarly, the calibration object point cloud carries the position information and vector information of each mark point on the calibration object; the vector information may be the normal vectors of the mark points.
The calibration object matching points are the mark points on the calibration object matched with points in the scanning point cloud; the scanning matching points are the mark points in the scanning point cloud matched with mark points on the calibration object.
Specifically, the computer device matches the calibration object point cloud with the scanning point cloud using a mark point feature matching algorithm; when the matching succeeds, the successfully matched reference matching point pairs are obtained. Possible feature matching algorithms include the ICP (Iterative Closest Point) algorithm, the fast-ICP algorithm, energy-minimization feature matching, topological-structure-based point cloud matching, and the like.
Step 406: project the scanning matching points to the image coordinate system of the camera to obtain image mapping points.
Here projection means projecting the point cloud into an image coordinate system; the points obtained by projecting the scanning matching points into the image coordinate system are the image mapping points. The image coordinate system is the coordinate system of the image captured by the camera, i.e. the coordinate system corresponding to the calibration object image.
Specifically, the coordinates of the three-dimensional points are usually computed by laser triangulation: the ray through a pixel and the left camera optical center intersects, in three-dimensional space, the ray through the corresponding pixel and the right camera optical center, and the intersection point is the three-dimensional coordinate point corresponding to the two matched pixels. The scanning matching points in all the reference matching point pairs can be projected to the image coordinate system of the camera, for example by the bundle adjustment method, to obtain the image mapping points. If the scanning matching points are projected to the image coordinate system of the left camera, the first matching point pairs corresponding to the left camera are finally obtained; projecting them to the image coordinate system of the right camera finally yields the first matching point pairs corresponding to the right camera.
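A minimal pinhole-projection sketch of step 406, assuming the scanning matching points are already expressed in the camera coordinate system and ignoring lens distortion; K is a hypothetical 3×3 intrinsic matrix.

```python
import numpy as np

def project_points(points_cam, K):
    """Project (N, 3) 3D points in the camera coordinate system to (N, 2)
    pixel coordinates with the pinhole model; K is the intrinsic matrix."""
    pts = np.asarray(points_cam, dtype=float)
    uvw = (K @ pts.T).T                  # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]      # perspective division
```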
Step 408: acquire the calibration object image obtained by shooting the calibration object with the camera.
Specifically, for a binocular camera, the calibration object image shot by each camera is acquired.
Step 410: match the image mapping points with the mark points on the calibration object image to obtain image matching point pairs; an image matching point pair comprises a scanning matching point and the matched mark point on the calibration object image.
Specifically, the terminal device matches the image mapping points with the mark points on the calibration object image and screens out the point pairs that fail to match, obtaining the image matching point pairs.
Step 412: for the camera, determine the first matching point pairs in which mark points on the calibration object image are matched with mark points on the calibration object, based on the reference matching point pairs and the image matching point pairs.
Here a first matching point pair comprises a mark point on the calibration object image and the matched mark point on the calibration object.
Specifically, a reference matching point pair contains a scanning matching point and the matched calibration object matching point, and an image matching point pair contains a scanning matching point and the matched mark point on the calibration object image. The terminal device may therefore take the scanning matching point in an image matching point pair, determine the matched calibration object matching point from the reference matching point pairs, and obtain a first matching point pair in which the mark point on the calibration object image is matched with the mark point on the calibration object. Alternatively, the terminal device may take the scanning matching point in a reference matching point pair, determine the matched mark point on the calibration object image from the image matching point pairs, and obtain the first matching point pair.
In this embodiment, the calibration object point cloud is matched with the scanning point cloud to obtain the successfully matched reference matching point pairs; the scanning matching points in the reference matching point pairs are then projected to the corresponding camera, and the image mapping points are matched with the mark points on the calibration object image to determine the image matching point pairs; finally, the first matching point pairs in which mark points on the calibration object image are matched with mark points on the calibration object are determined from the reference matching point pairs and the image matching point pairs. Matching the calibration object point cloud with the scanning point cloud, and the scanning point cloud with the calibration object image, separately, yields as many point pairs as possible; the multiple rounds of matching produce more accurate target mark point pairs and compensate the unstable factors of each device, so that better scanning results are obtained subsequently.
In one embodiment, matching the calibration object point cloud with the scanning point cloud to obtain the reference matching point pairs includes: performing feature matching between the calibration object point cloud and the scanning point cloud to obtain the point pairs whose features match successfully; determining a first conversion relation from the current scanning visual angle to the calibration object coordinate system based on those point pairs; mapping the mark points under the current scanning visual angle to the calibration object coordinate system based on the first conversion relation to obtain calibration object mapping points; and matching the calibration object mapping points with the mark points on the calibration object to obtain the reference matching point pairs.
The features used in feature matching may include, for example, the distances between adjacent points and the included angles between the normal vectors of adjacent points. The first conversion relation converts from the current scanning visual angle to the calibration object coordinate system and may include a rotation matrix, a translation matrix and a conversion vector.
Specifically, the terminal device acquires the features of the mark points in the calibration object point cloud and of the mark points in the scanning point cloud and performs feature matching; when the matching succeeds, the point pairs whose features match successfully are obtained. At this stage the number of such point pairs is large and contains many spurious points. Several non-collinear point pairs can then be selected from them, and the first conversion relation from the current scanning visual angle to the calibration object coordinate system is computed. Based on the first conversion relation, the terminal device can map all mark points under the current scanning visual angle to the calibration object coordinate system, that is, convert their coordinates into coordinates in the calibration object coordinate system, obtaining the calibration object mapping points. The terminal device then matches the calibration object mapping points with the mark points on the calibration object, and when the matching succeeds, the successfully matched reference matching point pairs are obtained.
In this embodiment, feature matching between the calibration object point cloud and the scanning point cloud yields the point pairs whose features match successfully, i.e. one screening pass is performed during matching, removing spurious points whose features do not match. The first conversion relation is determined from the successfully feature-matched point pairs and the mapping is performed again: the mark points under the current visual angle are mapped to the calibration object coordinate system, and the calibration object mapping points are matched with the mark points on the calibration object to obtain the successfully matched reference matching point pairs. In this way, mark points that previously failed to match can be matched with their corresponding mark points, retaining as many valid points as possible; and because the matching is performed through the first conversion relation, more accurate matching point pairs are obtained, greatly improving the accuracy of subsequent scanning.
In one embodiment, matching the calibration object mapping points with the mark points on the calibration object to obtain the reference matching point pairs includes:
matching the calibration object mapping points with the mark points on the calibration object to obtain valid point pairs; a valid point pair comprises a mark point under the current scanning visual angle and the matched mark point on the calibration object;
determining a second conversion relation from the current scanning visual angle to the calibration object coordinate system based on the valid point pairs;
and mapping the mark points under the current scanning visual angle to the calibration object coordinate system based on the second conversion relation, and obtaining the reference matching point pairs when the mapped mark points match the mark points on the calibration object.
Here a valid point pair comprises a mark point under the current scanning visual angle and the mark point on the calibration object matched with it. The second conversion relation may differ from the first conversion relation: the first conversion relation is computed from the points whose features matched successfully, whereas the second conversion relation is computed from the successfully matched valid point pairs, so its value is more accurate than that of the first conversion relation.
Specifically, the terminal device matches the calibration object mapping points with the mark points on the calibration object and, when the matching succeeds, obtains the successfully matched valid point pairs. From these valid point pairs the terminal device computes the rotation-translation transformation between the two point clouds, i.e. the second conversion relation, through the SVD (Singular Value Decomposition) algorithm, as sketched below. Based on the second conversion relation, the terminal device maps the mark points under the current visual angle to the calibration object coordinate system; the positions of the mapped mark points in the calibration object coordinate system are then more accurate than those mapped with the first conversion relation, and matching the mapped mark points with the mark points on the calibration object yields more reference matching point pairs.
In this embodiment, by setting appropriate matching conditions and procedures, the accuracy of the matching point pairs is improved while their number is maintained.
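A minimal sketch of the SVD step, assuming the matched points are stacked in (N, 3) numpy arrays; this is the classic Kabsch construction under those assumptions, not necessarily the patent's exact procedure.

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Estimate rotation R and translation t with R @ P_i + t ≈ Q_i from
    matched point pairs. P: (N, 3) points under the current scanning view;
    Q: (N, 3) matched points in the calibration object coordinate system."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t
```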
In one embodiment, mapping the mark points under the current scanning visual angle to the calibration object coordinate system based on the second conversion relation, and obtaining the reference matching point pairs when the mapped mark points match the mark points on the calibration object, includes:
mapping the mark points under the current scanning visual angle to the calibration object coordinate system based on the second conversion relation, and retaining the mapped mark points that lie on the same plane;
and obtaining the reference matching point pairs when the mapped mark points on the same plane match the mark points on the calibration object.
Specifically, once a mark point under the current scanning visual angle is matched with a mark point on the calibration object, it lies almost on the plane of the calibration object, so a mark point under the current scanning visual angle should, after rotation and translation, be very close to the calibration object plane. The terminal device maps the mark points under the current scanning visual angle to the calibration object coordinate system based on the second conversion relation, eliminates the mapped mark points that do not lie on the calibration object plane, and retains the mapped mark points that lie on the same plane. The mapped mark points on the same plane are then matched with the mark points on the calibration object, and when they match, the terminal device obtains the reference matching point pairs.
In this embodiment, screening the mapped mark points removes the spurious points that do not lie on the same plane, so the obtained mark point pairs are more accurate and the subsequent camera calibration is also more accurate.
In one embodiment, performing feature matching on the calibration object point cloud and the scanning point cloud to obtain a point pair with successful feature matching, including:
the method comprises the steps of obtaining a normal vector of a calibration object point cloud and a first distance between each point in the calibration object point cloud and an adjacent point;
determining a normal vector of each point in the scanning point cloud and a second distance between each point and an adjacent point;
and matching the normal vector of the calibration object point cloud with the normal vector of each point in the scanning point cloud, and matching the first distance with the second distance, and obtaining the point pair with successful feature matching when the normal vector and the distance are both successfully matched.
Specifically, the first distance is a distance between each point in the object point cloud and an adjacent point. The second distance refers to the distance between each point in the scanning point cloud and the adjacent point. The characteristic of the calibration object point cloud can comprise a normal vector of the calibration object point cloud and the distance between each point in the calibration object point cloud and the adjacent point; and the method can also comprise calibrating the normal vector included angle between each point and the adjacent point in the object point cloud.
Based on the normal vector of the calibration object point cloud and the normal vector in the scanning point cloud, matching can comprise matching the values of the normal vector, and can also comprise matching the normal vector included angles between the matching points and the adjacent points. In general, normal vectors of all points in the calibration object point cloud are consistent. Alternatively, normal vectors of points in the calibration object point cloud may be inconsistent, and then the normal vector of the calibration object point cloud is the normal vector of the points. The normal vector of the calibration object point cloud and the distance between each point in the calibration object point cloud and the adjacent point are preset and stored in the terminal equipment.
In this embodiment, the normal vector of the calibration object point cloud is matched with the normal vector of each point in the scanning point cloud, and the first distance and the second distance are matched, when both the two distances are successfully matched, the point pairs with successful feature matching are obtained, and the point pairs meeting the primary features can be rapidly screened.
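A rough sketch of this primary feature screening is given below; it compares each scan point's normal against the calibration object's normal, and each point's nearest-neighbour distance against the stored first distances. The tolerances and the single-shared-normal assumption are illustrative, not values from the patent.

```python
import numpy as np

def feature_match(scan_pts, scan_normals, calib_pts, calib_normal,
                  dist_tol=0.3, ang_tol_deg=5.0):
    """Match scan points against calibration object marking points by the
    primary features: normal direction and nearest-neighbour distance.
    Returns candidate index pairs (scan_i, calib_j)."""
    def nn_dist(pts):
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return d.min(axis=1)

    scan_d = nn_dist(np.asarray(scan_pts))
    calib_d = nn_dist(np.asarray(calib_pts))
    cos_tol = np.cos(np.deg2rad(ang_tol_deg))
    pairs = []
    for i, (di, ni) in enumerate(zip(scan_d, scan_normals)):
        if abs(np.dot(ni, calib_normal)) < cos_tol:   # normal vector check
            continue
        j = int(np.argmin(np.abs(calib_d - di)))      # first/second distance check
        if abs(calib_d[j] - di) < dist_tol:
            pairs.append((i, j))
    return pairs
```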
In one embodiment, determining a first conversion relationship from the current scan perspective to the calibration object coordinate system based on the pairs of points for which feature matching was successful includes:
selecting three feature matching point pairs from the feature matching point pairs;
determining three mark points in the same point cloud from the three feature matching point pairs, and determining whether the three feature matching point pairs meet the congruent triangle condition when the three mark points in the same point cloud are not collinear;
When the three feature matching point pairs meet the congruent triangle condition, a first conversion relation converted from the current scanning visual angle to the calibration object coordinate system is determined based on the three feature matching point pairs.
Specifically, from the feature-matched point pairs, the terminal equipment randomly selects three pairs. Each feature-matched point pair comprises a point in the scanning point cloud and the matched marking point on the calibration object, so the three marking points located in the same point cloud can be three points in the scanning point cloud or three points in the calibration object point cloud. When the three marking points located in the same point cloud are not collinear, they can form a triangle. The terminal equipment then determines whether the triangle formed by the three points in the scanning point cloud is congruent with the triangle formed by the three points in the calibration object point cloud. When the two triangles are congruent, that is, the three feature-matched point pairs meet the congruent triangle condition, a first conversion relation from the current scanning visual angle to the calibration object coordinate system is determined based on the three feature-matched point pairs.
In this embodiment, the point pairs with successful feature matching may further include many miscellaneous points, i.e., non-marker points, so that further screening is required; the three points on the common line have fewer characteristics, so that non-collinear points are required to be selected for judging the congruent triangle, and when the three point pairs matched with the characteristics meet the congruent triangle condition, the accuracy of the three point pairs matched with the characteristics is higher, so that the accuracy of the calculated first conversion relation is also higher.
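The two geometric tests in this embodiment, non-collinearity and triangle congruence, can be sketched as follows; the tolerances are assumed values.

```python
import numpy as np

def collinear(p1, p2, p3, eps=1e-6):
    """Three points are collinear when the cross product of two edge
    vectors (twice the triangle area) vanishes."""
    return np.linalg.norm(np.cross(p2 - p1, p3 - p1)) < eps

def congruent(tri_a, tri_b, tol=0.2):
    """With an ordered point correspondence, SSS congruence reduces to
    comparing corresponding side lengths pairwise."""
    for i, j in ((0, 1), (1, 2), (2, 0)):
        da = np.linalg.norm(tri_a[i] - tri_a[j])
        db = np.linalg.norm(tri_b[i] - tri_b[j])
        if abs(da - db) > tol:
            return False
    return True
```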
In one embodiment, the calibration object is taken as a calibration plate, and the number of the marking points is 15 as an example. And performing feature matching on the calibration object point cloud and the scanning point cloud, wherein 18 pairs of successfully matched point pairs can be obtained. Then any 3 pairs that are not collinear are selected from the 18 pairs, and the first conversion relation is calculated to be mapped and matched, so as to obtain 10 pairs of effective point pairs. Based on the 10 pairs of effective point pairs, calculating a second conversion relation, mapping the whole scanning point cloud to a calibration plate coordinate system, and discarding if some points are not on the same calibration plate plane after mapping; and matching the mapping points on the same calibration plate plane with the marking points on the calibration plate, and finally obtaining 14-15 pairs of reference matching point pairs. In this embodiment, more matching point pairs can be obtained as much as possible while ensuring the accuracy of the matching of the marker points.
In one embodiment, a method for landmark matching includes:
step (a 1), acquiring a scanning point cloud obtained by shooting a calibration object under a current scanning visual angle; the marker comprises a mark point.
And (a 2) obtaining a normal vector of the calibration object point cloud and a first distance between each point in the calibration object point cloud and an adjacent point.
And (a 3) determining the normal vector of each point in the scanning point cloud and the second distance between each point and the adjacent point.
And (a 4) matching the normal vector of the calibration object point cloud with the normal vector of each point in the scanning point cloud, and matching the first distance with the second distance, and obtaining the point pair with successfully matched characteristics when the normal vector and the distance are successfully matched.
And (a 5) selecting three feature matching point pairs from the feature matching point pairs.
And (a 6) determining three mark points in the same point cloud from the three feature matching point pairs, and determining whether the three feature matching point pairs meet the congruent triangle condition when the three mark points in the same point cloud are not collinear.
And (a 7) determining a first conversion relation converted from the current scanning visual angle to the calibration object coordinate system based on the three feature matching point pairs when the three feature matching point pairs meet the congruent triangle condition.
And (a 8) mapping the mark point under the current scanning visual angle to a calibration object coordinate system based on the first conversion relation to obtain a calibration object mapping point.
And (a 9) matching the mapping points of the calibration object with the marking points on the calibration object to obtain effective point pairs. The effective point pair comprises a marking point under the current scanning visual angle and a matching marking point on the calibration object.
And (a 10) determining a second conversion relation for converting the current scanning visual angle into the coordinate system of the calibration object based on the effective point pairs.
And (a 11) mapping the mark points under the current scanning visual angle to a calibration object coordinate system based on the second conversion relation, and reserving the mapped mark points on the same plane.
Step (a 12), when the mapped mark points on the same plane are matched with the mark points on the calibration object, a reference matching point pair is obtained; the reference matching point pair comprises a calibration object matching point and a matched scanning matching point.
And (a 13) projecting the scanning matching points to an image coordinate system of the camera to obtain image mapping points.
And (a 14) obtaining a calibration object image obtained by shooting the calibration object by the camera.
And (a 15) matching the image mapping points with the mark points on the marker image to obtain image matching point pairs. The image matching point pair comprises a scanning matching point and a mark point on the matched calibration object image.
And (a 16) determining, for each camera in the dual cameras, a first matching point pair that matches the marking point on the calibration object image with the marking point on the calibration object, based on the reference matching point pair and the image matching point pair.
Step (a 17), mapping the mark points on the polar line correction image to the original image based on the epipolar correction reverse mapping relation to obtain original image mapping points; the original image is an image photographed by the camera.
Step (a 18), adding a distortion coefficient to the original image mapping point, and adjusting the distortion coefficient until the original image mapping point is matched with the adjacent mark point on the original image, to obtain a second matching point pair; the second matching point pair comprises a marking point on the polar correction image and the matched marking point on the original image.
A step (a 19) of determining a target matching point pair corresponding to each camera based on the first matching point pair and the second matching point pair; the target matching point pair comprises the marking point on the original image and the marking point on the matched calibration object.
And (a 20) for each camera, performing single-camera calibration on the camera based on the target matching point pair, to obtain calibrated camera internal parameters.
Step (a 21) comparing the calibration camera reference with the reference camera reference.
And (a 22) taking the reference camera internal parameter as the target camera internal parameter when the difference value between the calibrated camera internal parameter and the reference camera internal parameter is within a preset range.
And (a 23) taking the calibrated camera internal parameter as the target camera internal parameter when the difference value between the calibrated camera internal parameter and the reference camera internal parameter is not within the preset range.
And (a 24) substituting the target matching point pair and the target camera internal parameters into a reference collineation equation to obtain a target collineation equation.
And (a 25) taking the partial derivative of the target collineation equation with respect to each matrix element in the first camera external parameter matrix, to obtain a first relation.
And (a 26) taking the partial derivative of the relative pose parameter matrix with respect to each relative pose parameter, and determining the product of this partial derivative and a preset second camera external parameter matrix, to obtain a second relation.
And (a 27) determining, based on the product of the first relation and the second relation, the partial derivative functional relation of the collineation equation with respect to the relative pose parameters.
And (a 28) obtaining a pixel deviation relation between the real coordinates of the image and the target collineation equation based on the product of the increment of the relative pose parameter and the partial derivative function relation.
And (a 29) iterating the pixel deviation relation, and obtaining each relative pose parameter value of the first camera when the iteration is completed.
1. 3D point cloud matching
Specifically, in scanning, matching of the 3D point cloud is an indispensable step: a rigid transformation matrix of the scanner has to be calculated between every two frames (with multiple frames, the position of the first frame is generally taken as the reference, and the remaining frames are rotated and translated towards the first frame). Many methods exist for 3D point cloud matching; currently the best-known and most widely used are the ICP algorithm and the Fast-ICP algorithm, which focus on feature matching between two unknown point clouds, such as distance features, normal vector features, local divergence features of the point cloud, matching rate and matching error. These algorithms perform well at locally exact matching, while SVD and quaternion algorithms are more efficient for a single coarse match, so many scanning devices combine the two approaches: SVD or quaternion methods are first used for coarse matching, and then ICP iterative matching is applied locally to improve the matching accuracy, a scheme widely used in many papers. However, this embodiment of the application matches data obtained in post-calibration processing and does not need an ICP algorithm at the scanning stage, so the matching of marker points with unknown correspondence is performed with the SVD method.
Before the marker point matching, preparation work needs to be done on the data; in other words, useful features need to be calculated for matching. FIG. 5 is a schematic diagram of marker point features in one embodiment. The appearance of the marker points used in the method is shown in fig. 5: each marker point is a circular surface, with a white circle inside and a black ring outside. Evidently the white circular area of each marker point has a centre and a radius, and the whole marker circle has a fixed normal vector, such as n1 and n2 in fig. 5. The angle between n1 and n2 can also serve as a feature of a point.
In addition to the features of each marker point itself, the feature matching in this embodiment also focuses on the relative features between points. As shown in fig. 5, the distance between points is an essential attribute, and the normal vector angle between two marker points also affects the correspondence of matching points. According to the SVD algorithm, at least three non-collinear points are needed to calculate the eigenvalues and eigenvectors of the covariance matrix, so every combination of three points in one point cloud must not only be non-collinear, but must also form a congruent triangle with the combination of the three corresponding points. FIG. 6 shows a schematic diagram of the features between three point pairs in one embodiment.
i) Calculate the coordinates and normal vector of the marking points in each frame, then calculate the distance and normal vector angle between each marking point and its adjacent points, and store all calculated features.
ii) Screen and match the features of the scanning point cloud and the calibration object marking point cloud, and select the point pairs that satisfy the conditions and whose features match successfully.
iii) From the point pairs whose features match successfully, select three pairs; first judge whether the three points are collinear, and if so, select again. If they are not collinear, judge whether the corresponding groups of three points satisfy the congruent triangle condition, and if they do, calculate the rotation-translation matrix and vector through SVD.
iv) Transform all scanning points into the calibration object space through the calculated rotation-translation matrix and vector, and screen all effective point pairs.
v) After screening all the effective points, calculate the rotation-translation matrix again with all effective points through the SVD algorithm, then transform the scanning points once more to screen out as many matching point pairs as possible, completing the method.
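Putting steps i) to v) together, a compact sketch of the whole marker matching loop might look like the following; it reuses the helper sketches above (rigid_transform_svd, collinear, congruent), and the retry count and thresholds are assumptions.

```python
import random
import numpy as np

def match_markers(scan_pts, calib_pts, feature_pairs, match_tol=0.5, tries=100):
    """Steps i)-v): pick three non-collinear feature-matched pairs, compute
    a coarse R, t by SVD, map the scan into the calibration object frame,
    collect effective pairs, then refine R, t from all of them."""
    for _ in range(tries):
        trio = random.sample(feature_pairs, 3)
        a = np.array([scan_pts[i] for i, _ in trio])
        b = np.array([calib_pts[j] for _, j in trio])
        if collinear(*a) or not congruent(a, b):
            continue
        R, t = rigid_transform_svd(a, b)              # coarse transform
        mapped = scan_pts @ R.T + t
        valid = []
        for i, m in enumerate(mapped):                # effective point pairs
            d = np.linalg.norm(calib_pts - m, axis=1)
            j = int(np.argmin(d))
            if d[j] < match_tol:
                valid.append((i, j))
        if len(valid) >= 4:
            src = np.array([scan_pts[i] for i, _ in valid])
            dst = np.array([calib_pts[j] for _, j in valid])
            R, t = rigid_transform_svd(src, dst)      # refined transform
            return R, t, valid
    return None
```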
There is some screening of points before and during the matching process of the 3D point cloud. Some points are discarded in the epipolar correction process, some points cannot well meet the collineation equation, and some points are not on the calibration object plane after being matched.
For the binocular system after epipolar correction, the coordinates of three-dimensional points are generally calculated by laser triangulation: the ray from the optical centre of the left camera through a pixel point and the ray from the optical centre of the right camera through the corresponding pixel point intersect in three-dimensional space, and the intersection is the three-dimensional coordinate point corresponding to the two matched pixels. FIG. 7 is a schematic diagram of scan matching points projected onto the image coordinate system of a camera in one embodiment. After the matched point cloud has been rotated, each point is projected back to a pixel of the left camera (l) and a pixel of the right camera (r), and these two pixels must satisfy the strong constraint of epipolar correction. On this basis, points with large deviation, or points that appear identical only because of the shooting angle, can be screened out to a large extent (i.e. two points that project to almost the same pixel in the left camera but deviate strongly in the projected pixel of the right camera). After the scanning points are matched with the points of the calibration object, note that the marking points lie almost on the plane of the calibration object, so a matched point after rotation and translation should be very close to the calibration object plane; on this basis, points with larger deviations and mistakenly extracted stray points can be removed.
In short, two conditions are: the first condition is to rotationally translate the current view angle scanning point cloud to the calibration object coordinate system, and all matching points should be very close to the calibration object plane. The second condition is that all possible matching points are obtained by feature matching, then a rotation translation relation is obtained, the point of the calibration object is related to the current scanning visual angle through the transformation, the matching points are projected back to the pixel points of the left camera and the right camera through a laser triangulation method, and the pixel coordinates of the corresponding matching points in the scanning point cloud of the current visual angle are required to be quite close to each other.
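The two conditions can be expressed as a small predicate, sketched below. Here point3d is assumed to be already rotated into the calibration object frame, proj_l and proj_r stand for assumed 3x4 projection matrices of the rectified left and right cameras, and the tolerances are illustrative.

```python
import numpy as np

def passes_screening(point3d, plane, proj_l, pix_l, proj_r, pix_r,
                     plane_tol=0.5, pix_tol=1.5):
    """Condition 1: the point lies near the calibration plane
    a*x + b*y + c*z + d = 0. Condition 2: reprojecting it through the
    rectified left/right cameras lands near the observed pixels."""
    a, b, c, d = plane
    n = np.array([a, b, c])
    if abs(n @ point3d + d) / np.linalg.norm(n) > plane_tol:
        return False
    for P, pix in ((proj_l, pix_l), (proj_r, pix_r)):
        uvw = P @ np.append(point3d, 1.0)             # homogeneous projection
        if np.linalg.norm(uvw[:2] / uvw[2] - np.asarray(pix)) > pix_tol:
            return False
    return True
```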
The scanning points $m_1, m_2, m_3$ are calculated from the matched pixel pairs $(l_1, r_1), (l_2, r_2), (l_3, r_3)$ and match the three marker points $M_1, M_2, M_3$ on the calibration object. Obviously, the pair $(l_4, r_4)$ in the figure will compute a point $P$ (the $P$ in the formula below) that with great probability matches $M_4$ of the calibration object, but this point is discarded because after rotation and translation it deviates too far from the plane. From all matching points $m_i$, a plane $S:\ ax + by + cz + d = 0$ can be calculated; that is, the meaning of $= 0$ in the formula below is that the points lie on the same plane.
According to the above description, all matching points should be close to this plane and, after rotation back to the calibration object coordinate system, should be close to the calibration object plane. For this plane, the epipolar correction procedure yields the left camera reference matrix $P_L$ of the corrected coordinate system, and by the geometric relationship the following can be obtained:

[two projection relations, shown as images in the original]

where $l_1$ refers to a marking point on the calibration object image captured by the left camera, and $w_1$ is the conversion factor at projection.
The above procedure generates sufficiently many correspondences between the pixel points of the epipolar correction image captured at the current visual angle and the marking points of the calibration object:

$$\{Pair_L\} = (P_{l1}, M_{j_1}) \cup (P_{l2}, M_{j_2}) \cup \cdots \cup (P_{li}, M_{j_i}) \cup \cdots,\quad i = 1, 2, \ldots, n_1,\ j_i \in \{1, 2, 3, \ldots, num\}$$

and

$$\{Pair_R\} = (P_{r1}, M_{k_1}) \cup (P_{r2}, M_{k_2}) \cup \cdots \cup (P_{ri}, M_{k_i}) \cup \cdots,\quad i = 1, 2, \ldots, n_2,\ k_i \in \{1, 2, 3, \ldots, num\}$$
These two correspondences are important in the subsequent processing; the correspondence between the pixel points of the original image captured by the current visual angle camera and the marking points of the calibration object is then sought by epipolar correction reverse mapping, forming the calibration data for the secondary calibration.
2. Polar correction mapping
Epipolar correction generates four tables, which record, for each pixel of the corrected image, the corresponding sub-pixel coordinate in the original captured image. According to the epipolar correction logic, a mapping table can therefore be established in reverse, i.e. from pixel points of the original image to sub-pixels after epipolar correction. The sub-pixel marking point coordinates of the original image can then be mapped into the epipolar-corrected image coordinate system through two-dimensional bilinear interpolation, so that the correspondence between each pixel point of the epipolar correction image and a pixel point of the original image can be found.
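As a sketch of this reverse lookup, suppose reverse_table[v, u] stores the epipolar-corrected coordinate of original pixel (u, v); a sub-pixel marker coordinate is then mapped by two-dimensional bilinear interpolation. The table layout and names are assumptions.

```python
import numpy as np

def map_original_to_rectified(reverse_table, x, y):
    """Bilinearly interpolate the reverse mapping table at the sub-pixel
    marker location (x, y) of the original image; reverse_table has shape
    (H, W, 2) and stores rectified coordinates per original pixel."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    v00 = reverse_table[y0, x0]
    v01 = reverse_table[y0, x0 + 1]
    v10 = reverse_table[y0 + 1, x0]
    v11 = reverse_table[y0 + 1, x0 + 1]
    return ((1 - fx) * (1 - fy) * v00 + fx * (1 - fy) * v01
            + (1 - fx) * fy * v10 + fx * fy * v11)
```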
In epipolar correction, the matrix $C'$ describes the transformation from the corrected image coordinate system to the original image coordinate system, i.e.

$$P_{ori} = C' P_{rect}$$

[the matrix $C'$, shown as an image in the original]

Here $P_l$ is the sub-pixel coordinate of the epipolar-corrected point, $P_o$ is the centre coordinate of the image after epipolar correction (the camera principal point coordinates in the new coordinate system), $f$ is the focal length, and $\Delta x$ and $\Delta y$ are distortions.

Distortion must then be added to $P_{ori}$, because the epipolar-corrected image has undergone a de-distortion operation; if the relation between marking points in the original image and in the epipolar-corrected image is to be found, the distortion has to be added back. Combining the above formula with the following formula (2) gives

[equation (11), shown as an image in the original]
Iteratively calculating $P_{ori}$ using equation (11) until it is almost unchanged reaches a limiting value, denoted $P_{final}$; the correspondence between the point pixel coordinates of the original image and of the epipolar correction image can then be obtained from the coordinate deviations. Combining this with the matching point pairs of marking points on the epipolar correction image and marking points on the calibration object, obtained in the 3D point cloud matching, yields the target matching point pairs of marking points on the original image and marking points on the calibration object.
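The iteration around equation (11) is a fixed-point scheme: the distorted point satisfies p_d = p_u + delta(p_d), so one starts from the undistorted point and re-evaluates the distortion until the update is negligible. The sketch below assumes a user-supplied distort(x, y) implementing the distortion model (2) given later.

```python
def add_distortion(x_u, y_u, distort, iters=20, eps=1e-8):
    """Fixed-point iteration: the distorted point satisfies
    (x_d, y_d) = (x_u, y_u) + distort(x_d, y_d), so start from the
    undistorted point and iterate until the update is negligible;
    the limit is the P_final of the text."""
    x_d, y_d = x_u, y_u
    for _ in range(iters):
        dx, dy = distort(x_d, y_d)
        x_new, y_new = x_u + dx, y_u + dy
        if abs(x_new - x_d) < eps and abs(y_new - y_d) < eps:
            return x_new, y_new
        x_d, y_d = x_new, y_new
    return x_d, y_d
```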
3. Calibration analysis
In binocular calibration based on the adjustment model, many papers take the approach of fixing the internal parameters of the cameras and treating the external parameters of the two cameras and the relative pose between them as the unknown variables; all the variables are substituted into the adjustment model and iterated, and finally the set of variables minimizing the optimization residual is taken as the final values. Three iterative models are optimized together, so the whole process is complicated.
In this embodiment, the purpose of the whole binocular calibration is to serve the subsequent epipolar correction. In epipolar correction, most practitioners prefer libraries such as OpenCV and choose the Zhang Zhengyou calibration method, but that method applies large rotations and translations to both cameras during epipolar correction, which places high demands on the binocular calibration result. Note that the principle of epipolar correction is to pull the distorted original images onto the same epipolar lines by removing distortion and applying a change of coordinates, forming distortion-free corrected images in which each pair of corresponding points lies on one epipolar line. Therefore, how the camera planes are rotated and changed can be chosen freely on top of the model; afterwards the images only need to be shifted and cropped. On this basis, if the left camera plane is carried into the left camera coordinate system with hardly any change, the influence of the left camera extrinsic error on the epipolar correction is almost 0. Thus, to simplify the binocular calibration model and accelerate the calculation, the internal and external parameters of the left camera are fixed herein, and the relative pose of the two cameras and the external parameters of the right camera are processed in the adjustment model. Note moreover that the right camera external parameters can be expressed through the left camera external parameters and the relative pose of the cameras, so the whole model only has to process the relative pose between the cameras, which greatly reduces the computational complexity. Since the epipolar correction is performed mainly about the left camera, the final error is very small and comparable with the effect of iterating over all parameters.
The following detailed discussion begins with the collinear equations for the camera, and is described in detail in connection with the two-dimensional DLT algorithm.
1. Global minimum re-projection residual optimization of full parameters
For single-camera calibration, the bundle adjustment model studies the collinearity equation, and the parameters of each camera must satisfy the following model:

$$x - x_0 + \Delta x = -f\,\frac{a_1(X - X_s) + b_1(Y - Y_s) + c_1(Z - Z_s)}{a_3(X - X_s) + b_3(Y - Y_s) + c_3(Z - Z_s)}$$
$$y - y_0 + \Delta y = -f\,\frac{a_2(X - X_s) + b_2(Y - Y_s) + c_2(Z - Z_s)}{a_3(X - X_s) + b_3(Y - Y_s) + c_3(Z - Z_s)} \qquad (1)$$

where $\Delta x$ and $\Delta y$ are distortions, $(x, y)$ are the pixel coordinates of the image, $(X, Y, Z)$ are the true coordinates of the marker point on the calibration plate, and the distortion model is chosen as follows:

$$\Delta x = (x - x_0)(K_1 r^2 + K_2 r^4) + P_1\left[r^2 + 2(x - x_0)^2\right] + 2P_2(x - x_0)(y - y_0) + P_3(x - x_0) + P_4(y - y_0)$$
$$\Delta y = (y - y_0)(K_1 r^2 + K_2 r^4) + P_2\left[r^2 + 2(y - y_0)^2\right] + 2P_1(x - x_0)(y - y_0) \qquad (2)$$

$K_1, K_2, P_1, P_2, P_3, P_4$ are the camera distortion coefficients, $(x_0, y_0)$ are the pixel coordinates of the principal point, the projection of the camera optical centre on the image plane, and $f$ is the focal length of the camera; these 9 parameters are the internal parameters of the camera model employed herein. Based on model (1), the external parameters are the attitude angles $\varphi$, $\omega$, $\kappa$ of the camera, which determine the direction cosines $a_i, b_i, c_i$, together with the camera position $X_s, Y_s, Z_s$.
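For reference, the distortion model (2) evaluates directly as follows; taking r as the radial distance from the principal point is the usual convention and an assumption here.

```python
def distortion(x, y, x0, y0, K1, K2, P1, P2, P3, P4):
    """Evaluate (dx, dy) of distortion model (2) at pixel (x, y)."""
    xc, yc = x - x0, y - y0
    r2 = xc * xc + yc * yc                    # r^2: squared radial distance
    radial = K1 * r2 + K2 * r2 * r2
    dx = (xc * radial + P1 * (r2 + 2 * xc * xc)
          + 2 * P2 * xc * yc + P3 * xc + P4 * yc)
    dy = yc * radial + P2 * (r2 + 2 * yc * yc) + 2 * P1 * xc * yc
    return dx, dy
```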
For a full-parameter optimized binocular calibration model, the aim is to make both cameras meet not only the model (1), but also the conditions
$$R_c = R_l R_r', \qquad T_c = R_l (T_r - T_l) \qquad (3)$$
Let the two equations in model (1) be $F(x)$ and $G(y)$ respectively. Leaving the camera internal parameters out of consideration, the variables are therefore

$$(\varphi_l, \omega_l, \kappa_l, X_{sl}, Y_{sl}, Z_{sl}) \quad\text{and}\quad (\varphi_r, \omega_r, \kappa_r, X_{sr}, Y_{sr}, Z_{sr}),$$

the external parameters of the left camera and of the right camera respectively. In this embodiment, $l$ denotes the left camera, which corresponds to the second camera; $r$ denotes the right camera, which corresponds to the first camera. In the optimization process, taking the partial derivatives of model (1) with respect to the external parameters yields an iterative equation (first-order Taylor expansion) for each point:

[equation (4), shown as an image in the original]
combining equation (3) and equation (4), the Lagrangian multiplier optimization model for the global parameters is as follows:
[model (5), shown as an image in the original]
the traditional model (5) has high computational complexity, requires multiple iterations, and generates a large number of computations. The embodiment of the application adopts another mode to solve the same problem.
2. Binocular residual optimization based on monocular in the embodiment of the application
Following the above description, this embodiment provides a binocular residual optimization model centred on a single camera, transforming the external parameters of one camera to form a model that depends only on the relative pose of the two cameras. Note that this operation is tied to the subsequent epipolar correction step: if the reference plane selected for epipolar correction is the pixel plane of the left camera, the result is mathematically almost indistinguishable from method 1 above, but faster and less complex. If epipolar correction selects some other plane, then having both cameras participate in the optimization is an aspect that needs to be considered in the pursuit of precision.
Based on model (1), the partial derivative of the right camera's adjustment model with respect to each external parameter can be obtained:

[equation (6-1), shown as an image in the original]

where the bracketed quantity, shown as an image in the original, is a simple rearrangement of model (1).
Recalling equation (3), the partial derivatives with respect to the right camera external parameters in equation (6-1) can be converted into partial derivatives of model (1) with respect to the relative pose between the two cameras, namely

[the conversion relation, shown as an image in the original]

Therefore equation (6-1) can be rewritten as

[equation (6-2), shown as an image in the original]
With equation (6-2), the complexity problem of model (5) does not need to be dealt with; only the following problem needs to be optimized:

[problem (7), shown as an image in the original]

This problem is solved very simply: in this embodiment, equation (7) only needs to be iterated continuously, and the relative pose $(\varphi_c, \omega_c, \kappa_c, X_c, Y_c, Z_c)$ satisfying the very-small-error condition is obtained when the iteration completes.
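Iterating equation (7) amounts to a small nonlinear least-squares loop over the six relative pose variables. A generic Gauss-Newton sketch is shown below, where residual and jacobian are assumed callbacks assembling the reprojection residuals of equation (6-2) and their coefficients.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=50, tol=1e-10):
    """Iterate equation (7): at each step solve the normal equations
    J'J dx = -J'r for the update of the six relative pose variables
    (phi_c, omega_c, kappa_c, X_c, Y_c, Z_c) until the update is tiny."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```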
3. Binocular model calculation in traditional manner
The above has shown that optimizing model (7) is equivalent to iteratively driving equation (6-2) towards the true value. For equation (6-2), model (1) must be differentiated with respect to each component of the relative pose $(\varphi_c, \omega_c, \kappa_c, X_c, Y_c, Z_c)$ between the cameras. Note that solving this problem requires knowing $R_c$ and $T_c$ in equation (3), where the attitude angles $\varphi_c$, $\omega_c$ and $\kappa_c$ determine the rotation $R_c$, and $T_c$ is the translation vector:

[equation (8), shown as an image in the original]

The known external parameters of the right camera $(\varphi_r, \omega_r, \kappa_r, X_{sr}, Y_{sr}, Z_{sr})$ give the corresponding $R_r$ and $T_r$:

[equation (9-1), shown as an image in the original]

The external parameters of the left camera $(\varphi_l, \omega_l, \kappa_l, X_{sl}, Y_{sl}, Z_{sl})$ give the corresponding $R_l$ and $T_l$:

[equation (9-2), shown as an image in the original]

which, as stated in the preconditions above, are fixed at constant values.
Combining formula (3), formula (8) and formula (9-2) in the conventional manner, each element of the right camera external parameter matrix and vector can be obtained, satisfying the following system of equations:

[equation system (10), shown as an image in the original]

Substituting equation (10) into model (1) in the conventional manner, all of $a_r, b_r, c_r, X_r, Y_r, Z_r$ can be expressed as nonlinear combinations of the relative pose variables; combining these expressions with equation (6-2) yields the partial derivative equations of the collinearity model (1) with respect to all the relative pose variables, and a numerical value close to the true value can then be calculated by iteration.
In this process, the computational complexity of taking the partial derivative of the collinearity equation with respect to a single relative pose variable is quite high: as equation (10) shows, the variables are numerous and the combined equations are all nonlinear.
4. Optimization method adopted in embodiment of the application
Thus, in actual programming, a way must be found to reduce the computational complexity. Observing the model, the kernel of equation (6-2) is the coefficient in front of the increment of each variable at each iteration, i.e. each $\frac{\partial F}{\partial \varphi_c}$ (and likewise for the other relative pose variables) must be obtained. By the chain rule of differentiation, it can be derived that

$$\frac{\partial F}{\partial \varphi_c} = \sum_{i,j} \frac{\partial F}{\partial R_{r,ij}} \cdot \frac{\partial R_{r,ij}}{\partial \varphi_c} \qquad (11)$$

Note that $R_{r,ij}$ on the right of the equals sign of equation (11) is the element in row $i$, column $j$ of the right camera external parameter matrix $R_r$, so to obtain $\frac{\partial R_{r,ij}}{\partial \varphi_c}$ on the right of equation (11), it suffices to take the partial derivative of the whole matrix $R_r$ with respect to each variable. Based on the transformation relation (3), it is possible to obtain

$$R_r = R_c' R_l \qquad (12)$$

Then $\frac{\partial R_{r,ij}}{\partial \varphi_c}$ in equation (11) becomes

$$\frac{\partial R_{r,ij}}{\partial \varphi_c} = \left[\frac{\partial R_c'}{\partial \varphi_c}\, R_l\right]_{ij} \qquad (13)$$

That is, $R_c'$ is differentiated with respect to each relative pose variable, the resulting partial derivative matrix is multiplied on the right by the left camera external parameter matrix $R_l$, and the element in row $i$, column $j$ is taken out; this is the value of $\frac{\partial R_{r,ij}}{\partial \varphi_c}$.

After solving for $\frac{\partial R_{r,ij}}{\partial \varphi_c}$ in equation (11), the values of $\frac{\partial F}{\partial R_{r,ij}}$ are obtained by taking the partial derivative of the first equation of the collinearity model (1) with respect to the element in row $i$, column $j$ of the right camera external parameter matrix. The value of every coefficient of equation (11) is thereby obtained without solving the nonlinear system formed by combining model (1) with equation (10); only the matrix elements need to be computed based on equations (11) and (13). Note that each matrix partial derivative only requires the derivatives of very simple trigonometric functions, so the complexity is very low and the model is very intuitive. At this point the model is complete: the matrix-form model consists of equation (11), equation (13) and equation (6-2) together, based on the collinearity equation, i.e. model (1). In this embodiment, by analysing the model and combining the chain rule with simple partial differentiation, the computational complexity of binocular calibration is greatly reduced and the calibration efficiency improved, realizing fast binocular calibration; as the above analysis shows, only the implementation is improved, and the calibration result obtained is no different from that of the traditional approach.
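The matrix chain rule of equations (11) and (13) can be sketched as follows: differentiate R_c with respect to one attitude angle, transpose, and right-multiply by R_l to obtain every coefficient ∂R_r,ij/∂φ_c at once. The angle convention of rot_from_angles is an assumption, and a numerical derivative stands in for the trivial trigonometric derivatives for brevity.

```python
import numpy as np

def rot_from_angles(phi, omega, kappa):
    """Compose a rotation from the three attitude angles (convention assumed)."""
    cp, sp = np.cos(phi), np.sin(phi)
    co, so = np.cos(omega), np.sin(omega)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rz = np.array([[ck, -sk, 0.0], [sk, ck, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, co, -so], [0.0, so, co]])
    return Rz @ Ry @ Rx

def dRr_dangle(angles, which, R_l, h=1e-7):
    """Equation (13): dR_r/dtheta = (dR_c/dtheta)' R_l, using R_r = R_c' R_l
    from relation (3). Entry (i, j) is the coefficient dR_r,ij/dtheta
    needed in equation (11)."""
    plus, minus = list(angles), list(angles)
    plus[which] += h
    minus[which] -= h
    dRc = (rot_from_angles(*plus) - rot_from_angles(*minus)) / (2 * h)
    return dRc.T @ R_l
```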
In this embodiment, the calibration object point cloud and the scanning point cloud are matched to obtain successfully matched reference matching point pairs; the scanning matching points in the reference matching point pairs are then projected to the corresponding camera, and the image mapping points are matched with the marking points on the calibration object image to determine the image matching point pairs; the first matching point pairs between the marking points on the calibration object image and the marking points on the calibration object are determined based on the reference matching point pairs and the image matching point pairs. Matching the calibration object against the scanning point cloud and matching the scanning point cloud against the calibration object image are performed separately, so that as many point pairs as possible can be obtained; through multiple rounds of matching, more accurate target marker point pairs are obtained, the unstable factors of each piece of equipment are optimized, and a better scanning result is obtained subsequently.
It should be understood that, although the respective steps in the flowcharts of fig. 2 to 4 described above are sequentially shown as indicated by arrows, and the respective steps in step (a 1) to step (a 29) are sequentially shown as indicated by numerals, these steps are not necessarily sequentially performed in the order indicated by the arrows or numerals. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in fig. 2 to 4 may include a plurality of steps or stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the execution of the steps or stages is not necessarily sequential, but may be performed in turn or alternately with at least some of the other steps or stages.
In one embodiment, as shown in fig. 8, which is a block diagram of a camera calibration device in one embodiment, a camera calibration apparatus is provided that may employ software modules or hardware modules, or a combination of both, as part of a computer device. The apparatus specifically comprises: a first matching point pair acquisition module 802, an epipolar correction mapping module 804, a second matching point pair determination module 806, a target matching point pair acquisition module 808, and a calibration module 810, wherein:
a first matching point pair obtaining module 802, configured to obtain, for each of the two cameras, a first matching point pair in which a marker point on the epipolar correction image is matched with a marker point on the calibration object;
the polar correction mapping module 804 is configured to map the marker point on the polar correction image to the original image based on the polar correction mapping relationship, so as to obtain an original image mapping point; the original image is an image photographed by a camera;
a second matching point pair determining module 806, configured to match the mapping point of the original image with the mark point on the original image, to obtain a second matching point pair; the second matching point pair comprises a marking point on the polar correction image and a marking point on the matched original image;
A target matching point pair obtaining module 808, configured to determine a target matching point pair corresponding to each camera based on the first matching point pair and the second matching point pair; the target matching point pair comprises a marking point on the original image and a marking point on the matched calibration object;
the calibration module 810 is configured to perform camera calibration based on the target matching point pair.
In this embodiment, for each camera, a first matching point pair is obtained in which a marker point on the polar correction image is matched with a marker point on the calibration object; then, based on the epipolar correction reverse mapping relation, the marker points on the polar correction image are mapped to the original image to determine a second matching point pair; a target matching point pair corresponding to each camera is determined based on the first matching point pair and the second matching point pair. The calibration object, the polar correction image and the original image are matched separately, so that as many point pairs as possible can be obtained; through multiple rounds of matching, more accurate target marker point pairs are obtained, the equipment parameters are detected and corrected in real time, the unstable factors of each piece of equipment are optimized, and a better scanning result is obtained subsequently.
In one embodiment, the second matching point pair determining module 806 is configured to add a distortion coefficient to the mapping point of the original image, and adjust the distortion coefficient until the mapping point of the original image matches the mark point on the adjacent original image, so as to obtain a second matching point pair.
In this embodiment, since epipolar correction not only makes the images collinear but also removes image distortion, when the epipolar correction image is mapped back to the original image a distortion coefficient must be added to the original image mapping point until it matches its adjacent marking point; the correct matching point pair can then be found, and the distortion coefficient obtained in the process can be used for subsequent processing.
In one embodiment, the calibration module 810 is configured to perform, for each camera, single-camera calibration on the camera based on the target matching point pair, to obtain calibrated camera internal parameters; and to compare the calibrated camera internal parameters with the reference camera internal parameters, determine target camera internal parameters based on the comparison result, and calibrate the camera based on the target camera internal parameters and the target matching point pairs.
In this embodiment, the calibrated camera internal parameters are compared with the reference camera internal parameters, and the target camera internal parameters are determined from the two based on the comparison result, so that the more accurate internal parameters can be selected for the subsequent camera calibration, improving the accuracy of the camera calibration.
In one embodiment, the calibration module 810 is configured to perform camera calibration based on the reference camera reference and the target matching point pair by using the reference camera reference as the target camera reference when the difference between the calibration camera reference and the reference camera reference is within the preset range.
In this embodiment, when the difference between the calibrated camera internal parameter and the reference camera internal parameter is within the preset range, the reference camera internal parameter is accurate, so that camera calibration can be accurately performed based on the reference camera internal parameter and the target matching point pair.
In one embodiment, the calibration module 810 is configured to substitute the target matching point pair into a reference collineation equation to obtain a target collineation equation; take the partial derivative of the target collineation equation with respect to each matrix element in the first camera external parameter matrix to obtain a first relational expression; take the partial derivative of the relative pose parameter matrix with respect to each relative pose parameter, and determine the product of this partial derivative and a preset second camera external parameter matrix to obtain a second relational expression; determine the partial derivative functional relation of the collineation equation with respect to the relative pose parameters based on the product of the first relation and the second relation; obtain the pixel deviation relation between the real coordinates of the image and the target collineation equation based on the product of the increment of the relative pose parameters and the partial derivative functional relation; and iterate the pixel deviation relation, obtaining each relative pose parameter value of the first camera when the iteration is completed.
In this embodiment, analysis shows that the coefficients in the pixel deviation relation can be decomposed into the first relation and the second relation; since the collineation equation contains each matrix element of the first camera external parameter matrix, and the relative pose parameter matrix contains the relative pose parameters, the problem is solved in matrix form, replacing the complicated partial derivatives of variables in the collineation equation with partial derivatives of matrices. The model form is simple and very convenient to program, the computational complexity is greatly reduced, the calculation of the relative pose parameter values can be completed quickly, and the binocular calibration result is consistent with that of the traditional approach.
In one embodiment, the calibration module 810 is configured to adjust values of the relative pose parameters in the pixel deviation relationship, and obtain values of the relative pose parameters of the first camera when the pixel deviation relationship satisfies the error condition.
In this embodiment, the values of the relative pose parameters in the pixel deviation relation are adjusted, and when the pixel deviation relation satisfies the error condition, the values of the relative pose parameters of the first camera are obtained, and only one model is needed to be iterated to optimize the values, so that the complexity of calculation can be greatly reduced, the determination efficiency of the relative pose parameter values of the camera is improved, and the calibration efficiency is further improved.
In one embodiment, the calibration module 810 is further configured to substitute the relative pose parameter values into a relative pose parameter matrix to obtain a target relative pose matrix; a target first camera outlier matrix is obtained based on a product of a transpose matrix of the target relative pose matrix and a preset second camera outlier matrix.
In the embodiment, substituting the relative pose parameter value into the relative pose parameter matrix to obtain a target relative pose matrix; obtaining a target first camera external parameter matrix based on the product of a transposed matrix of the target relative pose matrix and a preset second camera external parameter matrix, obtaining external parameter matrices of the first camera and the second camera, and obtaining all calibration parameters of the binocular camera by combining known internal parameters of the first camera and the second camera.
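Inverting relation (3) for the first (right) camera extrinsics is a one-liner; the sketch below assumes the left camera extrinsics are the fixed reference described earlier.

```python
import numpy as np

def right_camera_extrinsics(R_c, T_c, R_l, T_l):
    """Invert relation (3): R_c = R_l R_r'  =>  R_r = R_c' R_l, and
    T_c = R_l (T_r - T_l)  =>  T_r = R_l' T_c + T_l."""
    R_r = R_c.T @ R_l
    T_r = R_l.T @ T_c + T_l
    return R_r, T_r
```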
In one embodiment, a first matching point pair obtaining module 802 is configured to obtain a scanning point cloud obtained by photographing a calibration object at a current scanning view angle; the marker comprises a mark point; matching the calibrated object point cloud with the scanning point cloud to obtain a reference matching point pair; the reference matching point pair comprises a calibration object matching point and a matched scanning matching point; projecting the scanning matching points to an image coordinate system of the camera to obtain image mapping points; acquiring a calibration object image obtained by shooting a calibration object by a camera; matching the image mapping points with mark points on the marker image to obtain image matching point pairs; the image matching point pair comprises a scanning matching point and a mark point on the matched calibration object image; for the camera, a first matching point pair is determined, wherein the first matching point pair is used for matching the marking point on the image of the calibration object with the marking point on the calibration object, based on the reference matching point pair and the image matching point pair.
In this embodiment, the calibration object point cloud and the scanning point cloud are matched to obtain successfully matched reference matching point pairs; the scanning matching points in the reference matching point pairs are then projected to the corresponding camera, and the image mapping points are matched with the marking points on the calibration object image to determine the image matching point pairs; the first matching point pairs between the marking points on the calibration object image and the marking points on the calibration object are determined based on the reference matching point pairs and the image matching point pairs. Matching the calibration object against the scanning point cloud and matching the scanning point cloud against the calibration object image are performed separately, so that as many point pairs as possible can be obtained; through multiple rounds of matching, more accurate target marker point pairs are obtained, the unstable factors of each piece of equipment are optimized, and a better scanning result is obtained subsequently.
In one embodiment, a first matching point pair obtaining module 802 is configured to perform feature matching on the calibration object point cloud and the scanning point cloud to obtain a point pair with successful feature matching; determining a first conversion relation from the current scanning visual angle to a calibration object coordinate system based on the point pairs successfully matched with the features; mapping the mark point under the current scanning view angle to a calibration object coordinate system based on the first conversion relation to obtain a calibration object mapping point; and matching the mapping points of the calibration object with the marking points on the calibration object to obtain a reference matching point pair.
In the embodiment, the characteristic matching is performed on the calibration object point cloud and the scanning point cloud to obtain the point pairs with successful characteristic matching, namely, the point pairs are screened once in the matching process, so that some impurity points with unmatched characteristics can be removed; determining a first conversion relation based on the point pairs with successful feature matching, mapping again, mapping the marker points under the current view angle to a marker coordinate system, matching the marker mapping points with the marker points on the marker to obtain a reference matching point pair with successful matching, so that the marker points which are not matched before can be matched with the corresponding marker points, and retaining effective points as much as possible; and because the matching is carried out through the first conversion relation, more accurate matching point pairs are obtained, and the accuracy of the subsequent scanning is greatly improved.
In one embodiment, a first matching point pair obtaining module 802 is configured to match the calibration object mapping point with a mark point on the calibration object to obtain a valid point pair; the effective point pair successfully matched comprises a marking point under the current scanning visual angle and a marking point on a matched calibration object; determining a second conversion relation from the current scanning visual angle to the calibration object coordinate system based on the effective point pairs; and mapping the mark point under the current scanning visual angle to a coordinate system of the calibration object based on the second conversion relation, and obtaining a reference matching point pair when the mapped mark point is matched with the mark point on the calibration object.
In this embodiment, by setting appropriate matching conditions and procedures, the accuracy of matching point pairs is improved while the number of point pairs is maintained.
In one embodiment, the first matching point pair obtaining module 802 is configured to map the marker point under the current scanning view angle to the calibration object coordinate system based on the second conversion relationship, and keep the mapped marker point on the same plane; and when the mapped mark points on the same plane are matched with the mark points on the calibration object, obtaining a reference matching point pair.
In this embodiment, by screening the mapped marker points, the impurity points which are not located on the same plane are removed, so that the obtained marker point pairs are more accurate, and the subsequent camera calibration is also more accurate.
In one embodiment, a first matching point pair obtaining module 802 is configured to obtain a normal vector of the calibration object point cloud and a first distance between each point in the calibration object point cloud and an adjacent point; determining a normal vector of each point in the scanning point cloud and a second distance between each point and an adjacent point; and matching the normal vector of the calibration object point cloud with the normal vector of each point in the scanning point cloud, and matching the first distance with the second distance, and obtaining the point pair with successful feature matching when the normal vector and the distance are both successfully matched.
In this embodiment, the normal vector of the calibration object point cloud is matched with the normal vector of each point in the scanning point cloud, and the first distance and the second distance are matched, when both the two distances are successfully matched, the point pairs with successful feature matching are obtained, and the point pairs meeting the primary features can be rapidly screened.
In one embodiment, a first matching point pair obtaining module 802 is configured to select three feature matching point pairs from the feature matching point pairs;
determining three mark points in the same point cloud from the three feature matching point pairs, and determining whether the three feature matching point pairs meet the congruent triangle condition when the three mark points in the same point cloud are not collinear;
When the three feature matching point pairs meet the congruent triangle condition, a first conversion relation converted from the current scanning visual angle to the calibration object coordinate system is determined based on the three feature matching point pairs.
In this embodiment, the point pairs with successful feature matching may further include many miscellaneous points, i.e., non-marker points, so that further screening is required; the three points on the common line have fewer characteristics, so that non-collinear points are required to be selected for judging the congruent triangle, and when the three point pairs matched with the characteristics meet the congruent triangle condition, the accuracy of the three point pairs matched with the characteristics is higher, so that the accuracy of the calculated first conversion relation is also higher.
For specific limitations of the camera calibration device, reference may be made to the above limitations of the camera calibration method, and no further description is given here. The modules in the camera calibration device can be implemented in whole or in part by software, hardware and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal device, and an internal structure diagram thereof may be as shown in fig. 9. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a camera calibration method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 9 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the steps of the method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the flows of the above method embodiments may be accomplished by a computer program stored in a non-volatile computer-readable storage medium; the computer program, when executed, may include the flows of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. The volatile memory may include random access memory (RAM) or an external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The foregoing description covers only preferred embodiments of the present application and is not intended to limit the scope of the claims; any equivalent structure or equivalent process transformation made using the description and drawings of the present application, or any direct or indirect application thereof in other related technical fields, is likewise included within the scope of the claims of the present application.

Claims (10)

1. A method for calibrating a camera, the method comprising:
for each camera in the dual-camera, acquiring a first matching point pair of the marking point on the polar correction image and the marking point on the calibration object;
mapping the mark points on the polar line correction image to an original image based on a polar line correction inverse mapping relation to obtain original image mapping points; the original image is an image shot by the camera;
matching the original image mapping points with the mark points on the original image to obtain a second matching point pair; the second matching point pair comprises a marking point on the polar correction image and a marking point on the original image which is matched with the marking point on the polar correction image;
determining a target matching point pair corresponding to each camera based on the first matching point pair and the second matching point pair; the target matching point pair comprises a marking point on the original image and a marking point on a matched calibration object;
and calibrating the camera based on the target matching point pair.
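As an illustrative sketch of the inverse mapping step above, assuming the rectification came from OpenCV's cv2.stereoRectify (rotation R_rect, projection P_rect) and that the original distortion coefficients are already known; in the method itself the distortion is recovered by the adjustment of claim 2, so `dist` here is an assumption:

```python
import cv2
import numpy as np

def rectified_to_original(pts_rect, K, dist, R_rect, P_rect):
    """Map pixels from the epipolar-rectified image back onto the original
    (distorted) image: undo the rectifying projection, rotate the rays back,
    then reproject with the original intrinsics and distortion."""
    pts = pts_rect.reshape(-1, 1, 2).astype(np.float64)
    # Rectified pixels -> normalized rays in the rectified camera frame.
    norm = cv2.undistortPoints(pts, P_rect[:, :3], None)
    rays = cv2.convertPointsToHomogeneous(norm).reshape(-1, 3)
    # Rotate the rays back into the original camera frame.
    rays = (R_rect.T @ rays.T).T
    # Project with the original intrinsics plus lens distortion.
    mapped, _ = cv2.projectPoints(rays, np.zeros(3), np.zeros(3), K, dist)
    return mapped.reshape(-1, 2)
```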
2. The method of claim 1, wherein the matching the original image mapping points with the mark points on the original image to obtain a second matching point pair comprises:
adding a distortion coefficient to the original image mapping points, and adjusting the distortion coefficient until the original image mapping points match the adjacent mark points on the original image, to obtain the second matching point pair.
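One plausible reading of this adjustment is a single radial coefficient k1, tuned until the mapped points land on their nearest mark points; the one-parameter model x(1 + k1 r^2), the SciPy optimizer, and the names below are all assumptions of this sketch:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.spatial import cKDTree

def fit_distortion_k1(pts_norm, markers_px, K):
    """Tune one radial coefficient so the distorted mapping points fall onto
    their nearest mark points in the original image.

    pts_norm  : (N, 2) normalized coordinates of the original image mapping points.
    markers_px: (M, 2) detected mark-point pixels in the original image.
    K         : 3x3 camera intrinsic matrix.
    """
    tree = cKDTree(markers_px)

    def mean_sq_residual(k1):
        r2 = (pts_norm ** 2).sum(axis=1, keepdims=True)
        distorted = pts_norm * (1.0 + k1 * r2)            # x(1 + k1 r^2)
        px = distorted @ K[:2, :2].T + K[:2, 2]           # to pixel coordinates
        dists, _ = tree.query(px)                         # nearest mark points
        return float((dists ** 2).mean())

    res = minimize_scalar(mean_sq_residual, bounds=(-1.0, 1.0), method="bounded")
    return res.x
```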
3. The method of claim 1, wherein the calibrating the camera based on the target matching point pair comprises:
performing single-camera calibration for the camera based on the target matching point pair to obtain calibrated camera intrinsic parameters;
and comparing the calibrated camera intrinsic parameters with reference camera intrinsic parameters, determining target camera intrinsic parameters from the calibrated camera intrinsic parameters and the reference camera intrinsic parameters based on the comparison result, and calibrating the camera based on the target camera intrinsic parameters and the target matching point pair.
4. The method according to claim 3, wherein the determining target camera intrinsic parameters from the calibrated camera intrinsic parameters and the reference camera intrinsic parameters based on the comparison result, and calibrating the camera based on the target camera intrinsic parameters and the target matching point pair, comprises:
when the difference between the calibrated camera intrinsic parameters and the reference camera intrinsic parameters is within a preset range, taking the reference camera intrinsic parameters as the target camera intrinsic parameters, and calibrating the camera based on the reference camera intrinsic parameters and the target matching point pair.
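A toy illustration of this gate (the tolerance value and the fallback to the fresh calibration when the range is exceeded are assumptions; the claim itself only fixes the in-range branch):

```python
import numpy as np

def choose_intrinsics(K_calib, K_ref, tol=2.0):
    """Keep the reference intrinsics when the fresh single-camera calibration
    agrees with them elementwise within `tol`; otherwise use the fresh values."""
    if np.all(np.abs(K_calib - K_ref) < tol):
        return K_ref        # within the preset range: trust the reference
    return K_calib          # drifted: fall back to the new calibration
```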
5. The method of claim 1, wherein the calibrating the camera based on the target matching point pair comprises:
substituting the target matching point pair into a reference collinearity equation to obtain a target collinearity equation;
taking partial derivatives of the target collinearity equation with respect to each matrix element in a first camera extrinsic parameter matrix to obtain a first relational expression;
taking partial derivatives of the relative pose parameter matrix with respect to each relative pose parameter, and determining the product of these partial derivatives and a preset second camera extrinsic parameter matrix to obtain a second relational expression;
determining, based on the product of the first relational expression and the second relational expression, a partial derivative function relation of the target collinearity equation with respect to the relative pose parameters;
obtaining a pixel deviation relation between the true image coordinates and the target collinearity equation based on the product of the increments of the relative pose parameters and the partial derivative function relation;
and iterating on the pixel deviation relation, and obtaining the value of each relative pose parameter of the first camera when the iteration is completed.
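The iteration above is, in effect, Gauss-Newton on the reprojection (collinearity) residual. A minimal sketch follows, substituting the Jacobian returned by OpenCV's projectPoints for the analytic first and second relational expressions; the loop structure and termination rule are illustrative:

```python
import cv2
import numpy as np

def refine_relative_pose(obj_pts, img_pts, K, dist, rvec0, tvec0, iters=20):
    """Gauss-Newton refinement of the six relative pose parameters (rvec, tvec)
    from the pixel deviation between observed mark-point pixels and the
    collinearity (projection) equation.

    obj_pts: (N, 3) mark points on the calibration object.
    img_pts: (N, 2) their observed pixels in the original image.
    """
    obj_pts = np.asarray(obj_pts, dtype=np.float64)
    rvec = np.asarray(rvec0, dtype=np.float64).reshape(3)
    tvec = np.asarray(tvec0, dtype=np.float64).reshape(3)
    for _ in range(iters):
        proj, J = cv2.projectPoints(obj_pts, rvec, tvec, K, dist)
        r = (img_pts - proj.reshape(-1, 2)).ravel()     # pixel deviation
        Jp = J[:, :6]                                   # d(pixel)/d(rvec, tvec)
        delta, *_ = np.linalg.lstsq(Jp, r, rcond=None)  # Gauss-Newton step
        rvec += delta[:3]
        tvec += delta[3:]
        if np.linalg.norm(delta) < 1e-8:                # iteration completed
            break
    return rvec, tvec
```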
6. The method of claim 1, wherein the acquiring a first matching point pair of which the marking points on the polar correction image match the marking points on the calibration object comprises:
acquiring a scanning point cloud obtained by shooting the calibration object at the current scanning view angle, the calibration object comprising mark points;
matching the calibration object point cloud with the scanning point cloud to obtain a reference matching point pair; the reference matching point pair comprises a calibration object matching point and a matched scanning matching point;
projecting the scanning matching points to an image coordinate system of the camera to obtain image mapping points;
acquiring a calibration object image obtained by shooting the calibration object by the camera;
matching the image mapping points with the mark points on the calibration object image to obtain image matching point pairs; the image matching point pair comprises a scanning matching point and a matched mark point on the calibration object image;
and determining, based on the reference matching point pair and the image matching point pair, a first matching point pair of which the mark points on the calibration object image match the mark points on the calibration object.
7. The method of claim 6, wherein the matching the calibration object point cloud with the scanning point cloud to obtain a reference matching point pair comprises:
performing feature matching on the calibration object point cloud and the scanning point cloud to obtain successfully feature-matched point pairs;
determining, based on the successfully feature-matched point pairs, a first conversion relation for converting from the current scanning view angle to a calibration object coordinate system;
mapping the mark point under the current scanning view angle to the calibration object coordinate system based on the first conversion relation to obtain a calibration object mapping point;
and matching the mapping points of the calibration object with the marking points on the calibration object to obtain a reference matching point pair.
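A small sketch of this final matching step, assuming the first conversion relation is available as a rigid motion (R, t) and using a SciPy KD-tree nearest-neighbour query; the distance gate is illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def reference_matches(scan_marks, calib_marks, R, t, max_dist=0.5):
    """Map mark points seen at the current scanning view angle into the
    calibration object frame via (R, t), then pair each mapped point with
    its nearest calibration object mark point within `max_dist`."""
    mapped = scan_marks @ R.T + t             # apply first conversion relation
    tree = cKDTree(calib_marks)
    dists, idx = tree.query(mapped)
    return [(i, j) for i, (d, j) in enumerate(zip(dists, idx)) if d < max_dist]
```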
8. A camera calibration apparatus, the apparatus comprising:
the first matching point pair acquisition module is used for acquiring, for each camera in the dual-camera, a first matching point pair of which the marking points on the polar correction image match the marking points on the calibration object;
the polar line correction inverse mapping module is used for mapping the mark points on the polar line correction image to the original image based on the polar line correction inverse mapping relation to obtain original image mapping points; the original image is an image shot by the camera;
the second matching point pair determining module is used for matching the original image mapping points with the marking points on the original image to obtain a second matching point pair; the second matching point pair comprises a marking point on the polar correction image and a marking point on the original image which is matched with the marking point on the polar correction image;
the target matching point pair acquisition module is used for determining a target matching point pair corresponding to each camera based on the first matching point pair and the second matching point pair; the target matching point pair comprises a marking point on the original image and a marking point on a matched calibration object;
and the calibration module is used for calibrating the camera based on the target matching point pair.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202310293380.1A 2023-03-14 2023-03-14 Camera calibration method, camera calibration device, computer equipment and storage medium Pending CN116385557A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310293380.1A CN116385557A (en) 2023-03-14 2023-03-14 Camera calibration method, camera calibration device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116385557A 2023-07-04

Family

ID=86974325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310293380.1A Pending CN116385557A (en) 2023-03-14 2023-03-14 Camera calibration method, camera calibration device, computer equipment and storage medium

Country Status (1)

CN: CN116385557A

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination