Disclosure of Invention
The invention aims to provide a method for automatically calibrating the spatial positions of multiple lidars and multiple camera sensors.
In order to achieve the above object, the present invention provides a method for automatically calibrating the spatial positions of multiple lidars and multiple camera sensors, the method comprising:
step 1) extracting line features from the point cloud data acquired by each lidar and from the RGB images acquired by each camera sensor, respectively; computing grayscale scores by projection to obtain a total score over all lidar points; and selecting the highest total score by an adaptive optimization method, thereby obtaining the positional relation between each lidar and a given camera sensor;
step 2) registering the point cloud data of each pair of lidars to obtain the positional relations between the lidars;
step 3) obtaining the positional relations between the camera sensors from the camera image data by using the epipolar geometric constraints of the multiple cameras;
and step 4) performing global optimization on the positional relations obtained in steps 1), 2), and 3) to complete the calibration of the spatial positions of the multiple lidars and multiple camera sensors.
As an improvement of the above method, step 1) specifically comprises:
step 1-1) screening data that conform to line features from the point cloud data in image form;
step 1-2) merging three frames of line-feature point cloud data into one frame by a local mapping method to obtain denser point cloud data;
step 1-3) converting the RGB image into a grayscale image and extracting the line features of the grayscale image with a line detection algorithm;
step 1-4) mapping the point cloud data obtained in step 1-2) onto the grayscale image obtained in step 1-3) through an inverse distance transform; computing the post-projection grayscale value of each line-feature lidar point as its score; accumulating the scores of all lidar points as a total score; and selecting the highest total score by an adaptive optimization method to obtain the positional relation between one lidar and a given camera sensor.
As an improvement of the above method, step 1-1) specifically comprises:
filtering the point cloud data in image form to remove outliers and the points belonging to the ground, and removing points with few neighbors by a point cloud clustering algorithm, thereby screening out the data that conform to line features.
As an improvement of the above method, step 1-3) specifically comprises:
converting the RGB image into a grayscale image; performing a coarse search with wide image line features, small grayscale variation, and a relatively large step size; performing a fine search with thin image line features, large grayscale variation, and a relatively small step size; and extracting the line features of the grayscale image with the LSD line detection algorithm.
As an improvement of the above method, step 2) specifically comprises:
dividing the point cloud data acquired by each lidar into grid cells, representing each cell by a normal distribution, computing the mean and variance of the data points in each cell, matching the point sets through these normal distributions, and obtaining the spatial positional relation between each pair of lidars by registering the two point clouds.
As an improvement of the above method, step 3) specifically comprises:
obtaining the spatial positional relations between the camera sensors by using the epipolar geometric constraints of the multiple cameras together with an iterative algorithm.
As an improvement of the above method, step 4) specifically comprises:
taking as vertices the positional relation between the i-th lidar and the i-th camera sensor and the positional relation between the j-th lidar and the j-th camera sensor obtained in step 1), together with the positional relation between the i-th lidar and the j-th lidar obtained in step 2); forming an edge over each group of lidars and camera sensors; taking the total grayscale scores, the fitting degree between the lidars, and the epipolar constraint between the camera sensors as the calibration error; and reducing the calibration error through global optimization to complete the calibration of the spatial positions of the multiple lidars and multiple camera sensors.
Compared with the prior art, the invention has the following advantages:
1. the method completes feature extraction automatically, without a manually designed calibration board and without depending on a specific environment; this removes the limitation of the calibration scene and improves the applicability and convenience of the calibration method;
2. the method performs real-time online calibration, automatically correcting errors during use and avoiding the influence of accumulated errors on the system;
3. the screening of point cloud line features and image line features filters out non-line features and improves the automatic calibration accuracy;
4. the method jointly optimizes the multiple lidars and multiple cameras, reducing the overall error and producing an accurate calibration result that is closer to practical application scenarios.
Detailed Description
A method for automatically calibrating the spatial positions of multiple lidars and multiple camera sensors comprises:
extracting features of the images and the point clouds with a series of preprocessing methods: line features are extracted from the images and the point clouds respectively and refined by feature filtering;
screening data that conform to line features from the lidar point cloud data; screening data that conform to line features from the camera sensor image data; filtering the features so that the remaining features can be identified as line features; and obtaining the final optimized extrinsic parameters with an adaptive optimization method based on search optimization and graph optimization.
Among the multiple total scores, the spatial positional relation of the lidars and camera sensors corresponding to the highest total score is selected as the calibrated positional relation of the multiple lidars and multiple camera sensors.
Point cloud registration is performed between the two lidars to obtain the extrinsic positional relation between lidar 1 and lidar 2.
The extrinsic parameters between the cameras are solved from the images of the two cameras by using the epipolar geometric constraint of the multiple cameras.
Global optimization is then performed over the extrinsic parameters among the sensors to reduce errors and obtain a more accurate result.
Line feature extraction from the image specifically comprises:
in the image processing, the RGB image is first converted into a grayscale image, and a line detection algorithm is then used to extract the line features in the grayscale image. A distance transform model is then applied to the images containing the line features.
First, a coarse search with wider image lines, smaller grayscale changes, and a relatively larger step size is employed, so that the regions that may contain the best solution are found quickly and the search is prevented from falling into local optima. Then, finer image line features with larger grayscale changes and smaller step sizes are applied to obtain a more accurate calibration result.
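The sketch below illustrates how such a line feature score map could be built, assuming OpenCV's LSD detector (cv2.createLineSegmentDetector) is available; the line width, the decay factor, and the function name are illustrative assumptions rather than the patent's exact formulation.

```python
import cv2
import numpy as np

def line_feature_map(bgr, line_width=1, decay=10.0):
    """Build a grayscale map that is bright near detected line segments."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    lsd = cv2.createLineSegmentDetector()
    lines, _, _, _ = lsd.detect(gray)                  # N x 1 x 4 segments
    mask = np.zeros_like(gray)
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            cv2.line(mask, (int(x1), int(y1)), (int(x2), int(y2)),
                     255, line_width)
    # Inverse distance transform: pixel value falls off with distance to
    # the nearest line, so projected lidar points can be scored by lookup.
    dist = cv2.distanceTransform(255 - mask, cv2.DIST_L2, 3)
    return np.clip(255.0 - decay * dist, 0, 255).astype(np.uint8)
```

A wider line_width and a gentler decay correspond to the coarse search described above; a thin line and a steep decay correspond to the fine search.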
Further preprocessing and line feature extraction of the radar point cloud, and its projection onto the image, specifically comprise:
the principle of the lidar processing is to use distance discontinuities to obtain more edge line features. To this end, three frames of point clouds are merged into one frame by a local mapping method, so that more points appear in a single frame and the line features extracted from the point cloud are denser. Specifically, the transformation matrices between the current frame and the previous two frames are computed by the Normal Distributions Transform (NDT) method. Merging the three frames into one yields a denser point cloud, which may improve line feature extraction, especially when a low-beam lidar is used.
The dense point cloud is converted into image form, with each pixel storing the range of the corresponding lidar point. By comparing the range of the current point with those of its neighbors, outliers that lie too far from their neighbors are removed, and more accurate line features can be extracted. Note that instead of processing each beam separately, the method considers the range information across multiple beams. This allows the proposed method to extract horizontal features as well, so that line features minimize both horizontal and vertical errors. The horizontal line features and the vertical line features are stored in two separate point clouds. Features on the intersection lines of planes, which rarely occur, are ignored, which helps computational efficiency.
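The following sketch illustrates one way such range discontinuities could be detected on a range image whose rows correspond to beams and whose columns correspond to azimuth bins; the 0.5 m jump threshold is an illustrative assumption, not a value from the patent.

```python
import numpy as np

def edge_points(range_img, jump=0.5):
    """range_img: (beams, azimuth_bins) array of ranges in meters."""
    # Jumps between neighboring azimuth columns mark vertical structures.
    dh = np.abs(np.diff(range_img, axis=1))
    # Jumps across beams (multi-beam comparison) mark horizontal structures.
    dv = np.abs(np.diff(range_img, axis=0))
    vert = np.argwhere(dh > jump)   # candidate vertical line feature pixels
    horiz = np.argwhere(dv > jump)  # candidate horizontal line feature pixels
    return vert, horiz              # kept in two separate point sets
```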
The point cloud line features extracted from the lidar are mapped onto the grayscale image through the inverse distance transform, and two filtering methods are used to remove outliers and the points belonging to the ground, leaving the line feature point cloud.
After filtering, a point cloud clustering algorithm removes line features with few neighboring points. These two filtering steps produce better-organized point cloud line features, yield better-organized line features in the image, and ensure better optimization results in the subsequent steps.
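A sketch of the cluster-based filter is given below, using DBSCAN as a stand-in for the unspecified point cloud clustering algorithm; eps, min_samples, and the minimum cluster size are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def drop_sparse_clusters(points, eps=0.3, min_cluster=10):
    """points: (N,3) line feature candidates; keeps only dense clusters."""
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(points)
    keep = np.zeros(len(points), dtype=bool)
    for lbl in set(labels) - {-1}:              # -1 marks DBSCAN noise
        idx = labels == lbl
        if idx.sum() >= min_cluster:            # drop features with few neighbors
            keep |= idx
    return points[keep]
```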
The adaptive optimization of the mapping from the line feature point cloud to the line feature image specifically comprises:
an adaptive optimization method is adopted, using a search method and a graph optimization method in sequence. The search optimization balances computational accuracy and efficiency. Before optimization, the proposed method projects the lidar point cloud line features onto the image and computes the proportion of lidar points that fall into the grayscale region.
Two search steps are used to solve accurately. To prevent the search from getting trapped in a local solution, a coarse search with wide image lines, small grayscale variation, and a relatively large step size quickly locates the region that may contain the best solution. Thinner image line features with larger grayscale variation and a smaller step size are then used to obtain a more accurate calibration result. When the proportion of lidar points projected onto the grayscale region exceeds a certain threshold, the method switches between these two steps and their grayscale settings. The higher the grayscale score of the point cloud line features falling on the white line features, the more accurate the calibration result.
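The following sketch shows how the total grayscale score of one candidate extrinsic hypothesis could be computed, assuming intrinsics K and a score map built as above; all names are illustrative.

```python
import numpy as np

def projection_score(points, K, R, t, score_map):
    """Sum the grayscale values under the projected lidar line feature points."""
    cam = points @ R.T + t                        # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0]                      # keep points in front of camera
    uv = cam @ K.T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)     # perspective division to pixels
    h, w = score_map.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return score_map[uv[ok, 1], uv[ok, 0]].sum()  # total grayscale score
```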
For one spatial positional relation, data conforming to line features are screened from the lidar point cloud data, and data conforming to line features are screened from the camera sensor image data. The line-feature lidar data are projected into the pixel coordinate system of the camera sensor; the post-projection grayscale value of each line-feature lidar point is computed as its score, and the scores of all lidar points are accumulated as the total score. All spatial positional relations near the extrinsic matrices of the current frame and the two past frames are traversed; when the best value is found within the three frames of data, the search near the current extrinsic matrix stops, and the scores of the spatial positions around the best value are searched again until the optimum is reached, which is taken as the calibrated positional relation between the lidar and the camera sensor.
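A minimal sketch of the neighborhood search follows: each pose parameter is perturbed by one step in each direction and the best-scoring neighbor is kept until no neighbor improves. The hill-climbing form and the step sizes are illustrative assumptions about the unspecified search method.

```python
import numpy as np

def local_search(x0, step, score_fn):
    """x0: [rx, ry, rz, tx, ty, tz]; score_fn returns the total grayscale score."""
    best = np.asarray(x0, dtype=float)
    best_score = score_fn(best)
    improved = True
    while improved:
        improved = False
        for i in range(6):
            for sign in (-1.0, 1.0):
                cand = best.copy()
                cand[i] += sign * step[i]
                s = score_fn(cand)
                if s > best_score:              # hill-climb on the score
                    best, best_score, improved = cand, s, True
    return best, best_score
```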
An adaptive optimization method drives the optimization in the correct direction. The cost function is first optimized by a search method that searches and compares the surrounding grayscale scores. The solution obtained by the search optimization is then further refined with a graph optimization method: the optimization vertex is the transformation matrix between the camera and the lidar, the edge error is the grayscale value on the image corresponding to each point cloud line feature point, a g2o graph optimizer is constructed, an optimization solver and gradient descent method are configured, and finally the optimization function is called.
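The patent names g2o as the graph optimizer. As a stand-in, the sketch below refines the six extrinsic parameters with SciPy's least_squares; in practice the score map would need to be smoothed or interpolated so that the residual is differentiable. All names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_extrinsics(points, K, score_map, x0):
    """x0: initial [rx, ry, rz, tx, ty, tz] from the search stage."""
    h, w = score_map.shape

    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        cam = points @ R.T + x[3:]                 # assumes points in front of camera
        uv = cam @ K.T
        uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)
        u = np.clip(uv[:, 0].astype(int), 0, w - 1)
        v = np.clip(uv[:, 1].astype(int), 0, h - 1)
        # One residual per line feature point: darker pixel -> larger error.
        return 255.0 - score_map[v, u].astype(float)

    return least_squares(residuals, x0, method="lm").x
```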
The extrinsic calibration between lidar and lidar and between camera and camera specifically comprises:
calibrating lidar to lidar with a lidar point cloud registration method:
first, the two point clouds are preprocessed: the space is divided into grid cells and the points falling into each cell are counted; the mean and covariance of each cell are computed from its points to construct a Gaussian distribution; and the joint probability under a predicted pose is computed to solve for R and t, giving the extrinsic transformation matrix. The fitness score of the projection between the two point cloud line features is also computed.
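A minimal sketch of the NDT representation is given below: each occupied cell is summarized by the mean and covariance of its points, and a candidate pose can be scored by the likelihood of the other cloud under these Gaussians. The cell size and minimum point count are illustrative assumptions.

```python
import numpy as np

def ndt_cells(points, cell=1.0):
    """Group points into cubic cells and fit a Gaussian to each cell."""
    keys = np.floor(points / cell).astype(int)
    cells = {}
    for k, p in zip(map(tuple, keys), points):
        cells.setdefault(k, []).append(p)
    stats = {}
    for k, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) >= 5:                        # need enough points for covariance
            stats[k] = (pts.mean(axis=0), np.cov(pts.T) + 1e-6 * np.eye(3))
    return stats

def ndt_score(points, stats, cell=1.0):
    """Sum of Gaussian likelihood terms for an already-transformed cloud."""
    score = 0.0
    for p in points:
        k = tuple(np.floor(p / cell).astype(int))
        if k in stats:
            mu, cov = stats[k]
            d = p - mu
            score += np.exp(-0.5 * d @ np.linalg.solve(cov, d))
    return score
```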
The extrinsic parameters between the cameras are then further refined with an iterative Levenberg-Marquardt (LM) algorithm, using the epipolar geometric constraints of the multiple cameras.
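The sketch below shows the camera-camera step via OpenCV's essential matrix estimation, as a stand-in for the iterative LM refinement described above; pts1 and pts2 are assumed to be matched feature points in the two images, and the recovered translation is only determined up to scale.

```python
import cv2
import numpy as np

def camera_camera_extrinsics(pts1, pts2, K):
    """Estimate the relative pose (R, t) between two cameras from matches."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Decompose E into the relative pose; t has unit norm (scale unknown).
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```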
Through the above steps, the extrinsic parameters of the multiple radars and multiple cameras are preliminarily optimized; finally, the radar-camera, radar-radar, and camera-camera extrinsic parameters are optimized globally to reduce the overall error.
Global optimization is constructed from the extrinsic parameters between the groups of lidars and cameras calibrated by line features, the extrinsic parameters between the radars obtained by registration, and the extrinsic parameters between the cameras obtained by epipolar constraints, so as to reduce the overall error and improve accuracy. In the global optimization, the vertices are the extrinsic transformation matrices of radar i-camera i, radar j-camera j, and radar i-radar j; the edges are ternary edges, one edge being formed for each group of two lidars and two cameras; and the error of an edge consists of four parts, which provide the gradient direction for the optimization of the vertices and guide the iterations. The first part is the sum of the grayscale values on the image at the projected line feature points between camera i and lidar i; the second part is the corresponding sum between camera j and lidar j; the third part is the fitting degree of lidar i and lidar j after NDT registration; and the fourth part is the epipolar constraint error between camera i and camera j. Finally, the calibration error is reduced through global optimization to obtain a higher-precision result.
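The following sketch expresses the four-part ternary-edge error described above as a single residual vector for a generic least-squares optimizer. The four callables stand for the earlier stages (grayscale projection scoring, NDT fitness, epipolar error) and, together with the weights, are illustrative assumptions rather than the patent's exact forms.

```python
import numpy as np

def ternary_edge_error(T_ii, T_jj, T_ij, gray_i, gray_j, fitness, epipolar,
                       w=(1.0, 1.0, 1.0, 1.0)):
    """T_ii: lidar i -> camera i; T_jj: lidar j -> camera j; T_ij: lidar i -> lidar j.

    gray_i(T), gray_j(T): grayscale-score residuals (lower is better);
    fitness(T): NDT fitness residual; epipolar(...): epipolar residual.
    """
    return np.array([
        w[0] * gray_i(T_ii),                 # part 1: camera i vs lidar i
        w[1] * gray_j(T_jj),                 # part 2: camera j vs lidar j
        w[2] * fitness(T_ij),                # part 3: lidar i vs lidar j (NDT)
        w[3] * epipolar(T_ii, T_jj, T_ij),   # part 4: camera i vs camera j
    ])
```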
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 1, embodiment 1 of the present invention proposes a method for automatically calibrating the spatial positions of multiple lidars and multiple camera sensors. This embodiment is described with two lidars and two camera sensors, numbered as shown in fig. 2, although the method is not limited to these numbers. The method comprises the following steps:
obtaining the spatial position of each radar relative to a camera sensor, the spatial position between the two cameras, and the spatial position between the two lidars, thereby obtaining the spatial positional relations among the groups of lidars and camera sensors;
fig. 3(a) and fig. 3(b) show the images acquired by the two cameras and the point cloud data acquired by the two lidars.
For the lidar-camera spatial positional relation: data conforming to line features are screened from the lidar point cloud data; data conforming to line features are screened from the camera sensor image data; the line-feature lidar data are projected into the pixel coordinate system of the camera sensor; the post-projection grayscale value of each line-feature lidar point is computed as its score, and the scores of all lidar points are accumulated as the total score; all spatial positional relations are traversed to obtain multiple total scores; and the lidar-camera spatial positional relation corresponding to the highest total score is selected as the calibrated positional relation between the lidar and the camera sensor.
For the spatial positional relation between the two lidars, the statistics of local points are used: the point cloud is divided into grid cells, and each cell can be represented by a normal distribution. The mean and variance of the points in each cell are computed, and the point sets are matched by matching these normal distributions. The spatial positional relation of the two lidars is obtained by registering the two point clouds.
For the spatial positional relation between the two cameras, the extrinsic parameters are further refined by using the epipolar geometric constraint of the multiple cameras with an iterative Levenberg-Marquardt (LM) algorithm, yielding the spatial positional relation between the two cameras.
The adaptive optimization used for calibrating the spatial position between a lidar and a camera specifically comprises:
matching multiple frames of lidar data to obtain the transformation matrices among them, and superimposing the multiple frames of point cloud data onto one frame to obtain denser point cloud data;
optimizing with a search method to obtain the highest-scoring result of the nearby-position point cloud projected onto the grayscale image;
further refining the extrinsic result with graph optimization.
For the spatial positional relation of the whole sensor system, the calibration errors among the multiple sensors are reduced through global optimization, which specifically comprises:
obtaining the extrinsic parameters between the two lidars through point cloud registration;
obtaining the extrinsic parameters between the two cameras through the epipolar geometric constraint of the two images;
performing global optimization with the extrinsic transformation matrices of radar i-camera i, radar j-camera j, and radar i-radar j as vertices, the four error parts described above being grouped into the error of a ternary edge, as shown in fig. 4. Here P is the position of an image point, K is the camera intrinsic matrix, R and t are the relative pose of the extrinsic parameters, T_ij (i, j = 1, 2, 3, ...) is the transformation matrix, the superscript T denotes the matrix transpose, and FitnessScore is the fitting degree of the two point clouds; in this notation, the epipolar part of the edge error takes the standard form e = P_j^T K^{-T} [t]_x R K^{-1} P_i.
The method can judge in real time whether the current spatial positions of the lidars and cameras are accurate, and correct the spatial position errors online when accuracy is low. It requires no artificially designed calibration object and can automatically acquire information from the environment to calibrate the spatial positions of multiple groups of lidars and cameras.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the embodiments, those skilled in the art will understand that various changes may be made and equivalents substituted without departing from the spirit and scope of the invention as defined in the appended claims.