Disclosure of Invention
The embodiments of the present application provide a method and an apparatus for correcting calibration parameters of a lidar and a camera based on image registration. They address the problem that when the relative pose between the camera and the lidar changes, the external reference matrix must be re-calibrated, and the re-calibration process is cumbersome. The technical solution is as follows:
in one aspect, a method for correcting calibration parameters of a lidar and a camera based on image registration is provided, the method including:
after calibration of the calibration parameters between a camera and a lidar is completed, acquiring a reference camera image and a reference point cloud projection map of a predetermined scene, wherein the reference point cloud projection map is obtained from the reference camera image and the calibration parameters;
detecting whether the relative pose between the camera and the lidar has changed;
and if the relative pose has changed, correcting the calibration parameters according to a homography matrix obtained by registering the reference camera image and the reference point cloud projection map.
In one possible implementation, the acquiring a reference camera image and a reference point cloud projection map of a predetermined scene includes:
capturing the predetermined scene with the camera to obtain the reference camera image;
generating, by the lidar, reference point cloud data for the predetermined scene;
and generating the reference point cloud projection map according to the reference camera image, the reference point cloud data and the calibration parameters.
In one possible implementation, when the calibration parameters include an internal reference matrix and an external reference matrix, the generating the reference point cloud projection map according to the reference camera image, the reference point cloud data and the calibration parameters includes:
inputting the reference camera image, the reference point cloud data and the calibration parameters into a first formula and a second formula to obtain the reference point cloud projection map;
the first formula is
The second formula is
Wherein (X Y Z) is the coordinate of a data point in the reference point cloud data, I is the reflectivity of the data point, (X Y Z) is the coordinate of the corresponding data point in the camera coordinate system, (u v) is the pixel coordinate of the corresponding pixel point in the pixel coordinate system,
is the reference matrix of the said device,
is the external reference matrix and z is the depth of the data point in the camera coordinate system.
In one possible implementation, the method further includes:
filling pixel values of the pixel points corresponding to the data points according to the depth of each data point, and mapping the filled pixel values with a color space to obtain the reference point cloud projection map; or,
inputting the coordinates of each data point in the camera coordinate system into a third formula, filling the pixel values of the pixel points corresponding to the data points according to the obtained calculation result, and mapping the filled pixel values with a color space to obtain the reference point cloud projection map; or,
acquiring the reflectivity value of each data point, filling the pixel value of the pixel point corresponding to each data point according to its reflectivity value, and mapping the filled pixel values with a color space to obtain the reference point cloud projection map.
In one possible implementation, the predetermined scene includes a predetermined number of objects.
In one possible implementation, the correcting the calibration parameters according to a homography matrix obtained by registering the reference camera image and the reference point cloud projection map includes:
capturing the predetermined scene with the camera to obtain an image to be corrected;
generating a transformed point cloud projection map according to the reference camera image, the image to be corrected and the reference point cloud projection map;
and correcting the calibration parameters according to the point cloud projection map to be corrected and the transformed point cloud projection map.
In one possible implementation, the method further includes:
and adjusting the position of the camera such that the similarity between the image to be corrected, obtained by capturing the predetermined scene with the adjusted camera, and the reference camera image exceeds a predetermined threshold.
In one possible implementation, the generating a transformed point cloud projection map according to the reference camera image, the image to be corrected and the reference point cloud projection map includes:
registering the reference camera image and the image to be corrected to obtain a first homography matrix from the reference camera image to the image to be corrected;
and performing perspective transformation on the reference point cloud projection map according to the first homography matrix to obtain the transformed point cloud projection map.
In one possible implementation, the correcting the calibration parameters according to the point cloud projection map to be corrected and the transformed point cloud projection map includes:
registering the point cloud projection map to be corrected and the transformed point cloud projection map to obtain a second homography matrix from the point cloud projection map to be corrected to the transformed point cloud projection map;
and multiplying the second homography matrix, the internal reference matrix and the external reference matrix to obtain the corrected calibration parameters.
In one aspect, an apparatus for correcting calibration parameters of a lidar and a camera based on image registration is provided, the apparatus comprising:
an acquisition module, configured to acquire a reference camera image and a reference point cloud projection map of a predetermined scene after calibration of the calibration parameters between the camera and the lidar is completed, wherein the reference point cloud projection map is calculated from the reference camera image and the calibration parameters;
a detection module, configured to detect whether the relative pose between the camera and the lidar has changed;
and a correction module, configured to correct the calibration parameters according to a homography matrix obtained by registering the reference camera image and the reference point cloud projection map if the relative pose has changed.
The technical scheme provided by the embodiment of the application has the beneficial effects that at least:
after calibration of the calibration parameters between the camera and the lidar is completed, a reference camera image and a reference point cloud projection map can be obtained for a predetermined scene, wherein the reference point cloud projection map is obtained from the reference camera image and the calibration parameters; whether the relative pose between the camera and the lidar has changed is detected; and if the relative pose has changed, the calibration parameters are corrected according to a homography matrix obtained by registering the reference camera image and the reference point cloud projection map. By combining image registration with the original calibration parameters, a homography matrix is obtained and the calibration parameters are corrected according to it. The original calibration parameters are fully reused, so no re-calibration is needed, and feature points are found automatically to complete the correction, so no dedicated calibration test environment needs to be arranged. This simplifies the correction process and improves calibration efficiency. In addition, because the correction combines registration of the reference camera image and the reference point cloud projection map, the error of the corrected calibration parameters is small, which improves correction precision.
In addition, the predetermined scene contains a predetermined number of objects, which ensures that a sufficient number of key points can be detected with the feature descriptors during image registration, improving the accuracy of the registration.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
Please refer to fig. 1, which illustrates a flowchart of a method for correcting calibration parameters of a lidar and a camera based on image registration according to an embodiment of the present application. The method may be applied to an intelligent driving system or a robot system and may include the following steps:
Step 101, after calibration of the calibration parameters between a camera and a lidar is completed, a reference camera image and a reference point cloud projection map are obtained for a predetermined scene, wherein the reference point cloud projection map is obtained from the reference camera image and the calibration parameters.
In this embodiment, any calibration method may be used to calibrate the calibration parameters between the camera and the lidar to obtain the original calibration parameters; this embodiment does not limit the calibration method. To ensure that enough feature points are detected, the lidar may be one with 32 or more beams.
After calibration is completed, the relative pose between the camera and the lidar may change due to vibration and impact generated while the vehicle is driving, so that the original calibration parameters are no longer applicable and need to be corrected.
In this embodiment, after the original calibration parameters are obtained, reference images are acquired for a predetermined scene, and the original calibration parameters may later be corrected against them. The predetermined scene may be a fixed location, such as a parking lot or a fixed spot on the road, or a scene with a deployed calibration board; this embodiment does not limit the choice. It should be noted that the predetermined scene needs to contain a predetermined number of objects, which ensures that a sufficient number of key points can be detected with the feature descriptors during image registration, improving the accuracy of the registration.
The reference images in this embodiment include the reference camera image and the reference point cloud projection map; the reference point cloud projection map is calculated from the reference camera image and the calibration parameters, as described below.
In one possible implementation, acquiring the reference camera image and the reference point cloud projection map for the predetermined scene may include: capturing the predetermined scene with the camera to obtain the reference camera image; generating reference point cloud data for the predetermined scene with the lidar; and generating the reference point cloud projection map according to the reference camera image, the reference point cloud data and the calibration parameters.
When the reference camera image and the reference point cloud data have been acquired and the calibration parameters include an internal reference matrix and an external reference matrix, generating the reference point cloud projection map according to the reference camera image, the reference point cloud data and the calibration parameters may include: inputting the reference camera image, the reference point cloud data and the calibration parameters into a first formula and a second formula to obtain the reference point cloud projection map;
the first formula is
The second formula is
Wherein (X Y Z) is the coordinate of a data point in the reference point cloud data, I is the reflectivity of the data point, (xy Z) is the coordinate of the data point corresponding to (X Y Z) in the camera coordinate system, (u v) is the pixel coordinate of the corresponding pixel point in the pixel coordinate system,
is an internal reference matrix, and the reference matrix is,
is the external reference matrix and z is the depth of the data point in the camera coordinate system.
The reference point cloud data can be projected onto the camera plane through the first formula and the second formula to obtain the reference point cloud projection map. This embodiment provides three conversion modes for filling the map, described below.
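As an illustration, the first and second formulas can be sketched in NumPy as follows. The matrix shapes (a 3x4 external reference matrix and a 3x3 internal reference matrix) follow the usual pinhole model, and the sample intrinsic values and points are assumptions for this sketch, not values from the source:

```python
import numpy as np

def project_points(points_xyz, M_i, M_e):
    """Project N x 3 lidar points into pixel coordinates.

    First formula:  (x y z)^T = Me * (X Y Z 1)^T   (lidar -> camera frame).
    Second formula: z * (u v 1)^T = Mi * (x y z)^T (camera frame -> pixels).
    Returns the (u, v) pixel coordinates and the depth z of each point.
    """
    n = points_xyz.shape[0]
    homogeneous = np.hstack([points_xyz, np.ones((n, 1))])  # (X Y Z 1)
    cam = (M_e @ homogeneous.T).T                           # first formula
    uvw = (M_i @ cam.T).T                                   # second formula
    z = uvw[:, 2]
    uv = uvw[:, :2] / z[:, None]
    return uv, z

# toy example: identity extrinsics and an assumed pinhole intrinsic matrix
M_e = np.hstack([np.eye(3), np.zeros((3, 1))])   # external reference matrix
M_i = np.array([[500.0, 0.0, 320.0],
                [0.0, 500.0, 240.0],
                [0.0, 0.0, 1.0]])                # internal reference matrix
pts = np.array([[0.0, 0.0, 10.0],
                [1.0, 0.0, 10.0]])
uv, z = project_points(pts, M_i, M_e)
```

A point on the optical axis lands on the principal point (320, 240), and the depth z is carried through unchanged, matching the role of z in the second formula.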
In the first conversion mode, the pixel values of the pixel points corresponding to the data points are filled according to the depth of each data point, and the filled pixel values are mapped with a color space to obtain the reference point cloud projection map.
In the second conversion mode, the coordinates of each data point in the camera coordinate system are input into a third formula, the pixel values of the pixel points corresponding to the data points are filled according to the obtained calculation result, and the filled pixel values are mapped with a color space to obtain the reference point cloud projection map.
In the third conversion mode, the reflectivity value of each data point is obtained, the pixel value of the pixel point corresponding to each data point is filled according to its reflectivity value, and the filled pixel values are mapped with a color space to obtain the reference point cloud projection map. This embodiment does not limit how the reflectivity values are obtained.
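A minimal sketch of the fill-and-map step shared by the three conversion modes. The image size and the linear blue-to-red ramp standing in for a full color-space mapping are assumptions of this sketch; `values` may be per-point depth (first mode), the result of the third formula (second mode), or reflectivity (third mode):

```python
import numpy as np

def fill_projection_map(uv, values, shape=(480, 640)):
    """Fill the pixel values of points projected at integer pixel
    coordinates `uv` (N x 2, as (u, v)) and map them through a color
    ramp to build a point cloud projection map.
    """
    img = np.zeros((shape[0], shape[1], 3), dtype=np.uint8)
    v = np.asarray(values, dtype=float)
    rng = np.ptp(v)
    norm = (v - v.min()) / rng if rng > 0 else np.zeros_like(v)
    colors = np.stack([255 * norm,            # red grows with the value
                       np.zeros_like(norm),   # green unused in this ramp
                       255 * (1 - norm)], axis=1).astype(np.uint8)
    for (u, w), c in zip(np.asarray(uv, dtype=int), colors):
        if 0 <= w < shape[0] and 0 <= u < shape[1]:
            img[w, u] = c                     # rows index v, columns index u
    return img

# two points: the smaller value renders blue, the larger one red
proj = fill_projection_map(np.array([[10, 20], [100, 200]]), [1.0, 5.0])
```

A real implementation might splat each point over several pixels or use a standard color map, but the fill-then-map structure is the same.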
Step 102, detecting whether the relative pose between the camera and the lidar has changed.
When a significant shift of the data points projected onto the image by the lidar is observed, it is determined that the relative pose has changed and the original calibration parameters need to be corrected.
Step 103, if the relative pose has changed, the calibration parameters are corrected according to a homography matrix obtained by registering the reference camera image and the reference point cloud projection map.
If it is determined that the relative pose between the camera and the lidar has changed, the camera and the lidar need to be placed in the predetermined scene again, where the original calibration parameters are corrected as described below.
Specifically, correcting the calibration parameters according to the homography matrix obtained by registering the reference camera image and the reference point cloud projection map may include the following substeps.
Substep 1031: capture the predetermined scene with the camera to obtain the image to be corrected.
Before the image to be corrected is obtained, the position of the camera can be adjusted so that the image to be corrected is as consistent as possible with the reference camera image. That is, the method further includes: adjusting the position of the camera such that the similarity between the image to be corrected, obtained by capturing the predetermined scene with the adjusted camera, and the reference camera image exceeds a predetermined threshold.
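The source leaves the similarity measure open; as one hypothetical choice, zero-mean normalized cross-correlation between the two images could serve as the gate, with an assumed threshold of 0.8:

```python
import numpy as np

def images_similar(img_a, img_b, threshold=0.8):
    """Decide whether the image to be corrected is close enough to the
    reference camera image after re-positioning the camera.

    Zero-mean normalised cross-correlation is a stand-in similarity
    measure; the 0.8 threshold is an assumed value, not from the source.
    """
    a = np.array(img_a, dtype=float).ravel()  # copies, so inputs are untouched
    b = np.array(img_b, dtype=float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:                            # flat images: fall back to equality
        return bool(np.array_equal(img_a, img_b))
    return float(a @ b) / denom >= threshold
```

An identical pair scores 1.0 and passes; a strongly dissimilar pair (for example, an image against its vertical flip) scores low or negative and fails.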
It should be noted that when the vehicle is placed in the predetermined scene again, the objects in the scene do not need to be identical to those present initially, as long as some of the objects are the same.
Substep 1032: generate a transformed point cloud projection map according to the reference camera image, the image to be corrected and the reference point cloud projection map.
This may include: registering the reference camera image and the image to be corrected to obtain a first homography matrix H1 from the reference camera image to the image to be corrected; and performing perspective transformation on the reference point cloud projection map according to the first homography matrix H1 to obtain the transformed point cloud projection map.
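Assuming matched key point pairs between the two images are already available (for example from feature descriptor matching, which the source does not spell out), the homography estimation and the perspective transformation of point coordinates can be sketched with a plain direct linear transform in NumPy; a practical implementation would add robust estimation against mismatched pairs:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst by the direct
    linear transform. src and dst are N x 2 corresponding points, N >= 4.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        rows.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    # the homography is the right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Perspective-transform N x 2 points with H, as warping the
    reference point cloud projection map would move its pixels."""
    homogeneous = np.hstack([pts, np.ones((pts.shape[0], 1))])
    out = (H @ homogeneous.T).T
    return out[:, :2] / out[:, 2:3]

# example: recover a known transform from five exact correspondences
H_true = np.array([[1.2, 0.0, 5.0],
                   [0.0, 0.8, -3.0],
                   [0.0, 0.0, 1.0]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0],
                [100.0, 100.0], [30.0, 70.0]])
dst = apply_homography(H_true, src)
H_est = estimate_homography(src, dst)
```

With exact correspondences the known transform is recovered up to numerical precision; with noisy feature matches, a RANSAC-style loop around the same estimator is the usual design choice.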
Substep 1033: correct the calibration parameters according to the point cloud projection map to be corrected and the transformed point cloud projection map.
This may include: registering the point cloud projection map to be corrected and the transformed point cloud projection map to obtain a second homography matrix H2 from the point cloud projection map to be corrected to the transformed point cloud projection map; and multiplying the second homography matrix, the internal reference matrix and the external reference matrix to obtain the corrected calibration parameters.
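The multiplication step can be sketched as follows; the matrices are toy assumed values, used only to show that projecting with the corrected parameter H2·Mi·Me gives the same pixel as projecting with the original Mi·Me and then applying H2:

```python
import numpy as np

def correct_calibration(H2, M_i, M_e):
    """Corrected calibration parameter: the 3x4 product H2 * Mi * Me,
    so that z * (u v 1)^T = H2 * Mi * Me * (X Y Z 1)^T."""
    return H2 @ M_i @ M_e

# toy assumed matrices
M_i = np.array([[500.0, 0.0, 320.0],
                [0.0, 500.0, 240.0],
                [0.0, 0.0, 1.0]])                     # internal reference matrix
M_e = np.hstack([np.eye(3), [[0.1], [0.2], [0.0]]])   # external reference matrix
H2 = np.array([[1.0, 0.02, 3.0],
               [-0.01, 1.0, -2.0],
               [0.0, 0.0, 1.0]])                      # second homography matrix

X = np.array([0.5, -0.3, 8.0, 1.0])                   # homogeneous lidar point

# original projection, then warped by H2 ...
p = M_i @ M_e @ X
warped = H2 @ (p / p[2])
uv_then_H2 = warped[:2] / warped[2]

# ... equals projecting directly with the corrected parameter
q = correct_calibration(H2, M_i, M_e) @ X
uv_corrected = q[:2] / q[2]
```

The equality holds because homogeneous coordinates are scale-invariant: H2 · (Mi · Me · X) and H2 applied to the normalized pixel differ only by the depth factor, which normalization removes.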
Taking the second homography matrix as H2, the internal reference matrix as Mi and the external reference matrix as Me, the corrected calibration parameter is H2 · Mi · Me. Subsequently, the camera image and the point cloud projection map should satisfy a fourth formula:

z · (u v 1)^T = H2 · Mi · Me · (X Y Z 1)^T
Referring to fig. 2, fig. 2 shows the conversion relationships between the images during correction of the calibration parameters: the first homography matrix from the reference camera image to the camera image to be corrected is calculated; perspective transformation is performed on the reference point cloud projection map according to the first homography matrix to obtain the transformed point cloud projection map; and finally the second homography matrix from the point cloud projection map to be corrected to the transformed point cloud projection map is calculated to obtain the corrected calibration parameters.
Fig. 3 shows the intermediate image of each step in fig. 2, taking a real-scene image containing a vehicle as an example.
Fig. 4 shows the key point matching result when the point cloud projection map to be corrected is registered with the transformed point cloud projection map.
In fig. 5, the lower image shows the reference camera image superimposed with the point cloud projection map, the middle image shows the image to be corrected superimposed with the point cloud projection map, and the upper image shows the corrected camera image superimposed with the point cloud projection map.
In summary, according to the image-registration-based lidar and camera calibration parameter correction method provided in the embodiment of the present application, after calibration of the calibration parameters between the camera and the lidar is completed, a reference camera image and a reference point cloud projection map are obtained for a predetermined scene, wherein the reference point cloud projection map is obtained from the reference camera image and the calibration parameters; whether the relative pose between the camera and the lidar has changed is detected; and if the relative pose has changed, the calibration parameters are corrected according to a homography matrix obtained by registering the reference camera image and the reference point cloud projection map. By combining image registration with the original calibration parameters, a homography matrix is obtained and the calibration parameters are corrected according to it. The original calibration parameters are fully reused, so no re-calibration is needed, and feature points are found automatically to complete the correction, so no dedicated calibration test environment needs to be arranged. This simplifies the correction process and improves calibration efficiency. In addition, because the correction combines registration of the camera image and the point cloud projection map, the error of the corrected calibration parameters is small, which improves correction precision.
In addition, the predetermined scene contains a predetermined number of objects, which ensures that a sufficient number of key points can be detected with the feature descriptors during image registration, improving the accuracy of the registration.
Please refer to fig. 6, which illustrates a block diagram of an image-registration-based lidar and camera calibration parameter correction apparatus according to an embodiment of the present disclosure. The apparatus may be applied to an intelligent driving system or a robot system and may include:
an acquiring module 610, configured to acquire a reference camera image and a reference point cloud projection map of a predetermined scene after calibration of the calibration parameters between the camera and the lidar is completed, wherein the reference point cloud projection map is calculated from the reference camera image and the calibration parameters;
a detection module 620, configured to detect whether the relative pose between the camera and the lidar has changed;
and a correcting module 630, configured to correct the calibration parameters according to a homography matrix obtained by registering the reference camera image and the reference point cloud projection map if the relative pose has changed.
In an optional embodiment, the obtaining module 610 is further configured to:
capture the predetermined scene with the camera to obtain the reference camera image;
generate reference point cloud data for the predetermined scene with the lidar;
and generate the reference point cloud projection map according to the reference camera image, the reference point cloud data and the calibration parameters.
In an alternative embodiment, when the calibration parameters include an internal reference matrix and an external reference matrix, the obtaining module 610 is further configured to:
inputting the reference camera image, the reference point cloud data and the calibration parameters into a first formula and a second formula to obtain the reference point cloud projection map;
the first formula is:

(x y z)^T = Me · (X Y Z 1)^T

the second formula is:

z · (u v 1)^T = Mi · (x y z)^T

wherein (X Y Z) are the coordinates of a data point in the reference point cloud data, I is the reflectivity of the data point, (x y z) are the coordinates of the corresponding data point in the camera coordinate system, (u v) are the pixel coordinates of the corresponding pixel point in the pixel coordinate system, Mi is the internal reference matrix, Me is the external reference matrix, and z is the depth of the data point in the camera coordinate system.
In an optional embodiment, the apparatus further comprises:
a first filling module, configured to fill pixel values of the pixel points corresponding to the data points according to the depth of each data point, and map the filled pixel values with a color space to obtain the reference point cloud projection map; or,
a second filling module, configured to input the coordinates of each data point in the camera coordinate system into a third formula, fill the pixel values of the pixel points corresponding to the data points according to the obtained calculation result, and map the filled pixel values with a color space to obtain the reference point cloud projection map; or,
a third filling module, configured to acquire the reflectivity value of each data point, fill the pixel value of the pixel point corresponding to each data point according to its reflectivity value, and map the filled pixel values with a color space to obtain the reference point cloud projection map.
In an alternative embodiment, the predetermined scene contains a predetermined number of objects.
In an alternative embodiment, the calibration module 630 is further configured to:
capture the predetermined scene with the camera to obtain an image to be corrected;
generate a transformed point cloud projection map according to the reference camera image, the image to be corrected and the reference point cloud projection map;
and correct the calibration parameters according to the point cloud projection map to be corrected and the transformed point cloud projection map.
In an optional embodiment, the apparatus further comprises:
an adjusting module, configured to adjust the position of the camera such that the similarity between the image to be corrected, obtained by capturing the predetermined scene with the adjusted camera, and the reference camera image exceeds a predetermined threshold.
In an alternative embodiment, the calibration module 630 is further configured to:
register the reference camera image and the image to be corrected to obtain a first homography matrix from the reference camera image to the image to be corrected;
and perform perspective transformation on the reference point cloud projection map according to the first homography matrix to obtain the transformed point cloud projection map.
In an alternative embodiment, the calibration module 630 is further configured to:
register the point cloud projection map to be corrected and the transformed point cloud projection map to obtain a second homography matrix from the point cloud projection map to be corrected to the transformed point cloud projection map;
and multiply the second homography matrix, the internal reference matrix and the external reference matrix to obtain the corrected calibration parameters.
In summary, the image-registration-based lidar and camera calibration parameter correction apparatus provided in the embodiment of the present application may obtain a reference camera image and a reference point cloud projection map for a predetermined scene after calibration of the calibration parameters between the camera and the lidar is completed, wherein the reference point cloud projection map is obtained from the reference camera image and the calibration parameters; detect whether the relative pose between the camera and the lidar has changed; and, if the relative pose has changed, correct the calibration parameters according to a homography matrix obtained by registering the reference camera image and the reference point cloud projection map. By combining image registration with the original calibration parameters, a homography matrix is obtained and the calibration parameters are corrected according to it. The original calibration parameters are fully reused, so no re-calibration is needed, and feature points are found automatically to complete the correction, so no dedicated calibration test environment needs to be arranged. This simplifies the correction process and improves calibration efficiency. In addition, because the correction combines registration of the camera image and the point cloud projection map, the error of the corrected calibration parameters is small, which improves correction precision.
In addition, the predetermined scene contains a predetermined number of objects, which ensures that a sufficient number of key points can be detected with the feature descriptors during image registration, improving the accuracy of the registration.
An embodiment of the present application provides a computer-readable storage medium, having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, which are loaded and executed by a processor to implement the method for image registration based lidar and camera calibration parameter correction as described above.
One embodiment of the present application provides an intelligent driving system or a robot system, which includes a processor and a memory, where the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the laser radar and camera calibration parameter correction method based on image registration.
It should be noted that the division into the above functional modules in the image-registration-based lidar and camera calibration parameter correction apparatus provided in the above embodiment is only used for illustration; in practical applications, the above functions may be distributed among different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus provided in the above embodiment and the embodiment of the image-registration-based lidar and camera calibration parameter correction method belong to the same concept; the specific implementation process is detailed in the method embodiment and is not described again here.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description should not be taken as limiting the embodiments of the present application, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the embodiments of the present application should be included in the scope of the embodiments of the present application.