CN113256696B - External parameter calibration method of laser radar and camera based on natural scene - Google Patents
- Publication number
- CN113256696B (application CN202110716414.4A)
- Authority
- CN
- China
- Prior art keywords
- stage
- particles
- particle
- representing
- particle swarm
- Prior art date: 2021-06-28
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/497—Means for monitoring or calibrating
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Abstract
The invention discloses a natural-scene-based method for calibrating the external parameters of a laser radar and a camera. Based on simultaneously acquired radar point clouds and camera images, pedestrians in the natural scene are taken as targets: point-cloud pedestrian detection and image pedestrian instance segmentation extract the center point and head vertex of each corresponding pedestrian in the point cloud and the image, yielding matched key points between the two. Taking the key points as the initial particles, a particle swarm algorithm initializes the translation vector and rotation matrix; a stochastic particle swarm algorithm then iterates a limited number of times on the key-point positions, after which it directly optimizes the rotation and translation, finally converging to a stable rotation matrix and translation vector. This overcomes the problem that the point cloud and the image cannot be put in strict correspondence and yields more accurate key-point positions, hence more accurate external parameters, with stronger interpretability.
Description
Technical Field
The invention relates to the technical field of computer vision and robotics, in particular to a natural-scene-based method for calibrating the external parameters of a laser radar and a camera.
Background
As laser radars and cameras are applied ever more widely in unmanned systems, multi-sensor fusion for raising the perception level of such systems has attracted growing attention. A camera provides high-resolution RGB images with rich color and texture, but lacks depth information and is sensitive to lighting. A radar provides three-dimensional spatial information about a target, but at low resolution and without color. Combining the two compensates for their respective shortcomings and characterizes a target more completely. Because the camera and the radar sit in different coordinate systems, fusing their information requires the coordinate transformation matrix between them; accurate external-parameter calibration is therefore the key to accurate multi-sensor fusion.
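Writing out the camera projection model clarifies what is being calibrated (standard pinhole notation, not taken verbatim from the patent):

$$
s\begin{bmatrix}u\\ v\\ 1\end{bmatrix}
= K\left(R\,P_{L}+\mathbf{t}\right),
\qquad
K=\begin{bmatrix}f_x & 0 & c_x\\ 0 & f_y & c_y\\ 0 & 0 & 1\end{bmatrix}
$$

Here $P_{L}$ is a 3D point in the LiDAR frame, $(R,\mathbf{t})$ is the extrinsic rotation and translation sought by calibration, $K$ is the camera intrinsic matrix, and $s$ is the projective depth.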
For external-parameter calibration of cameras and radars, traditional methods use a chessboard or another dedicated calibration object and typically require extracting its lines or surfaces. Such artificial-target methods need long preparation and processing before calibration and require a dedicated calibration scene. Target-free methods instead use geometric features of a natural scene, such as planes, line segments and the vanishing points of parallel lines, but they place high demands on accurate line and surface extraction from images and point clouds, and such features mostly exist in man-made environments such as indoor walls and buildings, which limits the applicable calibration scenes. To reduce the dependence on scene features, some methods compute the external parameters not from feature matching but from ego-motion estimation. Motion-based methods require accurate and sufficient ego-motion estimates, together with three-dimensional geometric features and trackable visual features in the scene; a lack of these features causes position drift in the LiDAR odometry estimate and degrades the calibration accuracy. Moreover, motion-based methods suit robotic platforms that can move and calibrate easily, but not vehicle platforms such as unmanned cars, because a vehicle cannot easily perform sufficient motion in a short time. Finally, with the recent success of deep learning, methods applying deep networks to camera-radar external-parameter estimation have appeared; however, network-based methods need ground-truth depth as supervision, which is hard to obtain in real scenes, and deep networks suffer from poor interpretability and limited generalization.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a natural-scene-based laser radar and camera external-parameter calibration method that overcomes the problem that the point cloud and the image cannot be put in strict correspondence, obtaining more accurate key-point positions and hence more accurate external parameters.
In order to achieve the above object, the present invention provides a method for calibrating external parameters of a laser radar and a camera based on a natural scene, comprising the following steps:
step 1, acquiring a synchronized laser point cloud and camera image that both contain pedestrians, and obtaining the detection frames of the different individual pedestrians in the laser point cloud and the masks of the different individual pedestrians in the camera image;
step 2, establishing mapping between the laser point cloud and pedestrians in the camera image based on the detection frame and the mask to obtain a plurality of 2D-3D matching point pairs of the camera image and the laser point cloud;
step 3, inputting the 2D-3D matching point pairs serving as first-stage particles into a first-stage particle swarm optimization model, iteratively optimizing the 2D-3D matching point pairs, and obtaining initial external parameters based on the optimized 2D-3D matching point pairs;
and step 4, inputting the initial external parameters as second-stage particles into a second-stage particle swarm optimization model for iterative optimization, obtaining the actual external parameters between the camera and the laser radar (the overall flow is sketched below).
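A minimal Python skeleton of this four-step flow, assuming the stage functions sketched after steps 3.7 and 4.6 below; argument names and array shapes are illustrative assumptions, not part of the patent:

```python
import numpy as np

def calibrate_extrinsics(boxes3d_points, masks2d, pairs2d, pairs3d, K):
    """Skeleton of the two-stage optimization, given the upstream module outputs.

    boxes3d_points : list of (Ni, 3) arrays, LiDAR points inside each pedestrian box
    masks2d        : list of (H, W) boolean pedestrian masks, same pedestrian order
    pairs2d/pairs3d: (N, 2) and (N, 3) matched keypoints from step 2
    K              : 3x3 camera intrinsic matrix
    """
    # Stage 1 (step 3): PSO over the keypoint positions -> initial extrinsic
    T_init = first_stage_pso(pairs2d, pairs3d, K, boxes3d_points, masks2d)
    # Stage 2 (step 4): PSO over the 6-DoF extrinsic itself -> final extrinsic
    return second_stage_pso(T_init, K, boxes3d_points, masks2d)
```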
In one embodiment, in step 1:

the laser point cloud is input into a PointPillars point-cloud detection network, which outputs detection frames for the different individual pedestrians;

the camera image is input into the instance segmentation network Mask R-CNN (MRCNN), which outputs masks for the different individual pedestrians (a sketch of the image side follows).
In one embodiment, the specific process of step 2 is:
step 2.1, converting the laser point cloud into a depth map, and determining which radar points belong to each detection frame according to the pixel values inside the detection frame in the depth map (a minimal sketch of this conversion and of the keypoint extraction follows step 2.4);

step 2.2, associating, by manual marking, each pedestrian in the depth map with the corresponding pedestrian in the camera image to obtain the pedestrian mapping between the laser point cloud and the camera image, and then extracting the center point and the head vertex of each pedestrian in the laser point cloud and in the camera image respectively, obtaining several groups of matching point pairs, each group comprising a 2D point in the camera image and a 3D point in the laser point cloud;

step 2.3, for each group of matching point pairs from step 2.2, extracting several further groups of 2D and 3D points with a mapping relation from around the corresponding 2D point in the camera image and around the corresponding 3D point in the laser point cloud, obtaining several new matching point pairs;

and step 2.4, merging the matching point pairs obtained in steps 2.2 and 2.3 to obtain the 2D-3D matching point pairs of step 2.
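Step 2.1's point-cloud-to-depth-map conversion is commonly done by spherical projection, and step 2.2's keypoints are easy to compute per box. A minimal numpy sketch, with image resolution and vertical field of view as illustrative assumptions:

```python
import numpy as np

def pointcloud_to_depth_map(points, h=64, w=1024,
                            fov_up=np.deg2rad(2.0), fov_down=np.deg2rad(-24.8)):
    """Spherical (range-image) projection of an (N, 3) LiDAR point cloud."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                        # azimuth angle
    pitch = np.arcsin(z / np.maximum(r, 1e-6))    # elevation angle
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).clip(0, h - 1).astype(int)
    depth = np.zeros((h, w), dtype=np.float32)
    # On collisions the last point wins; a real implementation
    # would keep the nearest range per pixel
    depth[v, u] = r
    return depth

def pedestrian_keypoints_3d(box_points):
    """Center point and head vertex (step 2.2) of the points in one 3D box."""
    center = box_points.mean(axis=0)
    head = box_points[np.argmax(box_points[:, 2])]   # highest point = head top
    return center, head
```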
In one embodiment, in step 3, the iterative optimization of the first-stage particle swarm optimization model specifically comprises the following steps:

step 3.1, constructing the first-stage particles based on all the 2D-3D matching point pairs;

step 3.2, calculating the first-stage evaluation function of each first-stage particle;

step 3.3, taking the first-stage particle with the lowest first-stage evaluation function as the optimal first-stage particle in the current population, and calculating the update speed of each first-stage particle at each iteration;

step 3.4, updating the first-stage particles based on the update speed;

step 3.5, calculating the second-stage evaluation function of each first-stage particle after the update;

step 3.6, extracting the first-stage particle with the lowest second-stage evaluation function and expanding around it to obtain the particle swarm of the next iteration;

and step 3.7, judging whether the maximum iteration count has been reached; if so, outputting the external parameter corresponding to the optimal particle in the current population as the initial external parameters; otherwise returning to step 3.2 after one more iteration.
In one embodiment, the iterative optimization of the first-stage particle swarm optimization model specifically comprises the following steps:

step 3.1, constructing the first-stage particles based on all the 2D-3D matching point pairs:

$$X=\{x_1,x_2,\cdots,x_N\},\qquad x_i=(u_i,\,v_i,\,P_i)$$

where $X$ denotes the particle swarm of the first-stage particle swarm optimization model, $(u_i,v_i)$ denotes the coordinates of the 2D point in the $i$-th group of 2D-3D matching point pairs, $P_i$ denotes the coordinates of the 3D point in the $i$-th group of 2D-3D matching point pairs, $x_i$ denotes the $i$-th first-stage particle in the model, and $N$ denotes the total number of 2D-3D matching point pairs, which equals the total number of first-stage particles in the first-stage particle swarm optimization model;

step 3.2, calculating the first-stage evaluation function of each first-stage particle (a code sketch of this mask-projection test follows step 3.7):

$$f_1(x_i)=\sum_{P\in\mathcal{P}}\mathrm{HSS}\left(K\,T_i\,P\right)$$

where $f_1(x_i)$ denotes the first-stage evaluation function of the $i$-th first-stage particle, $K$ is the camera intrinsic parameter, $T_i$ is the external parameter corresponding to the $i$-th first-stage particle, $P$ denotes the coordinates of any 3D point in the detection frames, $\mathcal{P}$ denotes the set of all 3D points in all detection frames, and $\mathrm{HSS}$ is a binary function:

$$\mathrm{HSS}(p)=\begin{cases}0,& p\in M\\ 1,& p\notin M\end{cases}$$

where $M$ denotes the mask in the camera image corresponding to the detection frame, $p\in M$ indicates that the 3D point $P$ falls within the mask after projection, and $p\notin M$ indicates that it does not fall within the mask after projection;

step 3.3, taking the first-stage particle with the lowest first-stage evaluation function as the optimal first-stage particle in the current population, and calculating the update speed of each first-stage particle at each iteration:

$$v_i^{\,t}=\omega\,v_i^{\,t-1}+c_1 r_1\left(g-x_i\right)+c_2 r_2\left(h-x_i\right)$$

where $v_i^{\,t}$ denotes the update speed of the $i$-th first-stage particle at the $t$-th iteration, $g$ denotes the optimal first-stage particle in the current population, $h$ denotes the historically optimal first-stage particle, $\omega$ is the inertia weight, $c_1$ and $c_2$ are step sizes, $r_1$ and $r_2$ are random numbers with $r_1,r_2\in(0,1)$, $t=1,2,\cdots,T$, $T$ is the maximum iteration number of the first-stage particle swarm optimization model, and $v_i^{\,0}=0$;

step 3.4, updating the first-stage particles based on the update speed:

$$x_i\leftarrow x_i+v_i^{\,t}$$

step 3.5, calculating the second-stage evaluation function of each first-stage particle after the update:

$$f_2(x_i)=\sum_{P\in\mathcal{P}}\mathrm{HSS}\left(K\,T_i'\,P\right)$$

where $f_2(x_i)$ denotes the second-stage evaluation function of the $i$-th first-stage particle, $T_i'$ is the external parameter corresponding to the updated $i$-th first-stage particle, and the remaining symbols are as in step 3.2;

step 3.6, extracting the first-stage particle $x_A$ with the lowest second-stage evaluation function, $1\le A\le N$, and expanding around $x_A$ to obtain the particle swarm of the next iteration;

and step 3.7, judging whether $t=T$; if so, outputting the external parameter corresponding to the optimal first-stage particle in the current population as the initial external parameter; otherwise letting $t=t+1$, taking the optimal first-stage particle in the current population as the historically optimal first-stage particle, and returning to step 3.2.
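As a concrete reading of steps 3.2-3.6 (referenced in step 3.2 above), the sketch below implements the HSS-style cost — project every detection-frame 3D point through a candidate extrinsic and count the projections that miss the corresponding mask — inside a simplified optimization loop: the velocity update and the expand-around-best step 3.6 are collapsed into random perturbation around the best keypoint set, and PnP turns the matched pairs into an extrinsic. Step sizes and iteration counts are illustrative assumptions, not the patent's values.

```python
import numpy as np
import cv2

def hss_cost(T, K, points3d, mask):
    """Sum of HSS over one pedestrian: 0 for a 3D point projecting into the
    mask, 1 otherwise (points behind the camera or off-image count as misses)."""
    R, t = T[:3, :3], T[:3, 3]
    cam = (R @ points3d.T).T + t              # LiDAR frame -> camera frame
    front = cam[cam[:, 2] > 0]
    uv = (K @ front.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
    h, w = mask.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    hits = mask[uv[ok, 1], uv[ok, 0]].sum()
    return len(points3d) - int(hits)

def extrinsic_from_pairs(pts2d, pts3d, K):
    """Extrinsic from the current particle positions via PnP (needs >= 4 pairs)."""
    _, rvec, tvec = cv2.solvePnP(pts3d.astype(np.float64),
                                 pts2d.astype(np.float64), K, None)
    T = np.eye(4)
    T[:3, :3] = cv2.Rodrigues(rvec)[0]
    T[:3, 3] = tvec.ravel()
    return T

def first_stage_pso(pairs2d, pairs3d, K, boxes3d_points, masks,
                    iters=30, step=2.0):
    """Steps 3.2-3.6, simplified: perturb the 2D keypoints around the current
    best (standing in for the velocity update and the expand-around-best step),
    score each candidate with the mask-projection cost, and keep the best."""
    def total_cost(T):
        return sum(hss_cost(T, K, p, m) for p, m in zip(boxes3d_points, masks))
    best_pts = pairs2d
    best_T = extrinsic_from_pairs(best_pts, pairs3d, K)
    best_cost = total_cost(best_T)
    for _ in range(iters):
        cand = best_pts + step * np.random.randn(*best_pts.shape)  # pixels
        T = extrinsic_from_pairs(cand, pairs3d, K)
        c = total_cost(T)
        if c < best_cost:
            best_pts, best_cost, best_T = cand, c, T
    return best_T
```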
In one embodiment, in step 4, the iterative optimization of the second-stage particle swarm optimization model specifically comprises the following steps:

step 4.1, expanding, guided by the gradient of each variable in the initial external parameters, to obtain the particle swarm of the second-stage particle swarm optimization model;

step 4.2, calculating the first-stage evaluation function of each second-stage particle;

step 4.3, taking the second-stage particle with the lowest first-stage evaluation function as the optimal second-stage particle in the current population, and calculating the update speed of each second-stage particle at each iteration;

step 4.4, updating the second-stage particles based on the update speed;

step 4.5, calculating the second-stage evaluation function of each second-stage particle after the update;

and step 4.6, extracting the second-stage particle with the lowest second-stage evaluation function and judging whether the iteration count has reached the maximum; if so, outputting that particle as the actual external parameters between the camera and the laser radar; otherwise iterating once, updating the historically optimal second-stage particle, and returning to step 4.2.
In one embodiment, the iterative optimization of the second-stage particle swarm optimization model specifically comprises the following steps:

step 4.1, expanding, guided by the gradient of each variable in the initial external parameters, to obtain the particle swarm of the second-stage particle swarm optimization model (a code sketch of this stage follows step 4.6):

$$Y=\{y_1,y_2,\cdots,y_Q\},\qquad y_j=(\theta_j,\,\phi_j,\,\psi_j,\,\mathbf{t}_j)$$

where $Y$ denotes the particle swarm of the second-stage particle swarm optimization model, $y_j$ denotes the $j$-th second-stage particle, $\theta_j$ denotes the azimuth (yaw) angle, $\phi_j$ the pitch angle, $\psi_j$ the roll angle and $\mathbf{t}_j$ the translation vector of the $j$-th second-stage particle, and $Q$ denotes the total number of second-stage particles in the model; the particles are expanded around the azimuth angle $\theta_0$, pitch angle $\phi_0$, roll angle $\psi_0$ and translation vector $\mathbf{t}_0$ of the initial external parameters, each variable being offset by its step size $\Delta_\theta$, $\Delta_\phi$, $\Delta_\psi$, $\Delta_x$, $\Delta_y$, $\Delta_z$;

step 4.2, calculating the first-stage evaluation function of each second-stage particle:

$$f_1(y_j)=\sum_{P\in\mathcal{P}}\mathrm{HSS}\left(K\,T(y_j)\,P\right)$$

where $f_1(y_j)$ denotes the first-stage evaluation function of the $j$-th second-stage particle, $K$ is the camera intrinsic parameter, $T(y_j)$ is the external parameter assembled from the $j$-th second-stage particle, $P$ denotes the coordinates of any 3D point in the detection frames, $\mathcal{P}$ denotes the set of all 3D points in all detection frames, and $\mathrm{HSS}$ is the binary function of step 3.2, equal to 0 if the projected point falls within the corresponding mask $M$ and 1 otherwise;

step 4.3, taking the second-stage particle with the lowest first-stage evaluation function as the optimal second-stage particle in the current population, and calculating the update speed of each second-stage particle at each iteration:

$$v_j^{\,t}=\omega\,v_j^{\,t-1}+c_1 r_1\left(g'-y_j\right)+c_2 r_2\left(h'-y_j\right)$$

where $v_j^{\,t}$ denotes the update speed of the $j$-th second-stage particle at the $t$-th iteration, $g'$ denotes the optimal second-stage particle in the current population, $h'$ denotes the historically optimal second-stage particle, $\omega$ is the inertia weight, $c_1$ and $c_2$ are step sizes, $r_1$ and $r_2$ are random numbers with $r_1,r_2\in(0,1)$, $t=1,2,\cdots,T'$, $T'$ is the maximum iteration number of the second-stage particle swarm optimization model, and $v_j^{\,0}=0$;

step 4.4, updating the second-stage particles based on the update speed:

$$y_j\leftarrow y_j+v_j^{\,t}$$

step 4.5, calculating the second-stage evaluation function of each second-stage particle after the update:

$$f_2(y_j)=\sum_{P\in\mathcal{P}}\mathrm{HSS}\left(K\,T(y_j')\,P\right)$$

where $y_j'$ denotes the updated $j$-th second-stage particle and the remaining symbols are as in step 4.2;

and step 4.6, extracting the second-stage particle $y_C$ with the lowest second-stage evaluation function, $1\le C\le Q$, and judging whether $t=T'$; if so, outputting the external parameter assembled from $y_C$ as the actual external parameter between the camera and the laser radar; otherwise letting $t=t+1$, taking $y_C$ as the historically optimal second-stage particle, and returning to step 4.2.
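Steps 4.1-4.6 (referenced in step 4.1 above) optimize the six extrinsic degrees of freedom directly. The sketch below seeds (yaw, pitch, roll, translation) particles around the initial extrinsic, scores them with hss_cost from the first-stage sketch, and re-expands around the best particle each iteration with a slowly shrinking radius; the Euler convention, step sizes and shrink factor are illustrative assumptions.

```python
import numpy as np

def euler_to_matrix(yaw, pitch, roll):
    """Z-Y-X Euler angles to a rotation matrix (one common convention)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def second_stage_pso(T_init, K, boxes3d_points, masks, n=50, iters=100,
                     ang_step=np.deg2rad(1.0), trans_step=0.05):
    """Steps 4.1-4.6, simplified: expand particles around the initial 6-DoF
    extrinsic, score them with hss_cost, re-expand around the best."""
    def assemble(p):
        T = np.eye(4)
        T[:3, :3] = euler_to_matrix(*p[:3])
        T[:3, 3] = p[3:]
        return T
    def total_cost(T):
        return sum(hss_cost(T, K, pts, m)
                   for pts, m in zip(boxes3d_points, masks))
    # 6-DoF seed (yaw, pitch, roll, tx, ty, tz) recovered from T_init
    yaw = np.arctan2(T_init[1, 0], T_init[0, 0])
    pitch = -np.arcsin(np.clip(T_init[2, 0], -1.0, 1.0))
    roll = np.arctan2(T_init[2, 1], T_init[2, 2])
    best = np.array([yaw, pitch, roll, *T_init[:3, 3]])
    scale = np.array([ang_step] * 3 + [trans_step] * 3)
    for _ in range(iters):
        swarm = best + scale * np.random.randn(n, 6)   # step 4.1 expansion
        costs = [total_cost(assemble(p)) for p in swarm]
        best = swarm[int(np.argmin(costs))]            # steps 4.3 / 4.6
        scale *= 0.97                                  # shrink the radius
    return assemble(best)
```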
The beneficial technical effects of the invention are as follows:

1. compared with calibration methods that use artificial calibration objects such as a chessboard, the method takes pedestrians in a natural scene as targets, does not depend on a specific artificial calibration object, saves the complex preparation work before calibration, and is simple and convenient to operate;

2. compared with calibration methods that use specific geometric features such as parallel lines, planes and V-shaped angles, the method applies to more scenes:

such geometric features usually exist only in man-made environments such as indoor walls and buildings, which restricts the scenes where geometry-based calibration algorithms can be applied; the method instead targets pedestrians, which are visible almost everywhere, and is therefore more universal;

3. compared with calibration methods that estimate the external parameters with a deep neural network, the method is not end-to-end but is composed of connected functional modules, and has high interpretability:

the method comprises four modules: point-cloud pedestrian detection, image pedestrian instance segmentation, feature-point selection, and two-stage stochastic particle swarm optimization; pedestrians serve as the targets, the matched pedestrian key points serve as the initial point pairs of the particle swarm model, and the particle swarm subsequently performs two-stage optimization, first on the key-point positions and then on the external parameters; compared with a deep network that takes the point cloud and the image as input and outputs an external-parameter result, this is far more interpretable;

4. compared with a calibration method that feeds the extracted key-point pairs directly into the particle swarm model and takes the output as the final external parameters, the method first optimizes the key-point positions with the particle swarm algorithm after extracting the key-point pairs, so the final external parameters are more accurate:

the conventional approach obtains the external parameters by applying a PnP algorithm directly to the matched point pairs (as sketched below); however, owing to the sparsity of the point cloud, points in the point cloud and points in the image have no strict one-to-one mapping, so the initially extracted matching point pairs are not positioned exactly; optimizing the positions of the selected corresponding point pairs with the particle swarm model overcomes the impossibility of strict point cloud-image correspondence and yields more accurate key-point positions, and hence more accurate external parameters.
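The conventional one-shot baseline referred to in item 4 is a single PnP solve on the raw matched pairs; with sparse LiDAR keypoints this inherits the matching error that the two-stage particle swarm is designed to remove. A minimal sketch, where pts3d, pts2d and K are assumed inputs:

```python
import cv2
import numpy as np

# pts3d: (N, 3) LiDAR keypoints; pts2d: (N, 2) image keypoints; K: 3x3 intrinsics
ok, rvec, tvec = cv2.solvePnP(pts3d.astype(np.float64),
                              pts2d.astype(np.float64), K, None)
R = cv2.Rodrigues(rvec)[0]   # extrinsic rotation; tvec is the translation
```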
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a method for calibrating external parameters of a laser radar and a camera based on a natural scene according to an embodiment of the present invention;
FIG. 2 is a simulation diagram illustrating the detection frame and mask segmentation according to an embodiment of the present invention;
FIG. 3 is a simulation diagram illustrating the mapping of a laser point cloud to a pedestrian in a camera image according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart illustrating the mapping of the laser point cloud and the pedestrian in the camera image according to the embodiment of the present invention;
FIG. 5 is a schematic diagram of an iterative optimization process of a first-stage particle swarm optimization model according to an embodiment of the present invention;
FIG. 6 is a schematic view of an iterative optimization process of a second-stage particle swarm optimization model according to an embodiment of the present disclosure;
fig. 7 is a simulation diagram of a calibration result in the embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that all the directional indicators (such as up, down, left, right, front and rear) in the embodiments of the present invention are only used to explain the relative position relationship, movement situation, etc. between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicator changes accordingly.
In addition, the descriptions related to "first", "second", etc. in the present invention are only for descriptive purposes and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "connected," "secured," and the like are to be construed broadly, and for example, "secured" may be a fixed connection, a removable connection, or an integral part; the connection can be mechanical connection, electrical connection, physical connection or wireless communication connection; they may be directly connected or indirectly connected through intervening media, or they may be connected internally or in any other suitable relationship, unless expressly stated otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In addition, the technical solutions in the embodiments of the present invention may be combined with each other, but it must be based on the realization of those skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination of technical solutions should not be considered to exist, and is not within the protection scope of the present invention.
At present, 3D point-cloud detection and image instance segmentation of pedestrians have reached high accuracy. To achieve calibration without artificial calibration objects in a natural scene, a non-end-to-end calibration method with high interpretability is designed, based on simultaneously acquired radar point clouds and camera images and taking pedestrians in the natural scene as targets. Through point-cloud pedestrian detection and image pedestrian instance segmentation, the method extracts the center point and head vertex of each corresponding pedestrian in the point cloud and the image, thereby obtaining matched key points between the point cloud and the image. Taking these key points as the initial particles, the particle swarm algorithm initializes the translation vector and rotation matrix and iterates a limited number of times on the key-point positions with a stochastic particle swarm algorithm; the stochastic particle swarm algorithm then directly optimizes the rotation and translation, finally converging to a stable rotation matrix and translation vector.
Referring to fig. 1, the method for calibrating external parameters of a laser radar and a camera based on a natural scene disclosed in this embodiment specifically includes the following steps:
Step 1, acquiring a synchronized laser point cloud and camera image that both contain pedestrians; inputting the laser point cloud into a PointPillars point-cloud detection network, which outputs detection frames for the different individual pedestrians; and inputting the camera image into the instance segmentation network Mask R-CNN (MRCNN), which outputs masks for the different individual pedestrians, as shown in fig. 2, where fig. 2(a) is a simulation diagram of the masks and fig. 2(b) is a simulation diagram of the detection frames.

Step 2, establishing the mapping between the laser point cloud and the pedestrians in the camera image based on the detection frames and the masks, obtaining several 2D-3D matching point pairs between the camera image and the laser point cloud. For the pedestrian mapping between the point cloud and the image, reference is made to figs. 3-4, where fig. 3(a) is the depth map converted from the laser point cloud and fig. 3(b) is the camera image. The specific process of establishing the mapping in this embodiment is as follows:
step 2.1, converting the laser point cloud into a depth map, and determining which radar points belong to each detection frame according to the pixel values inside the detection frame in the depth map;

step 2.2, associating, by manual marking, each pedestrian in the depth map with the corresponding pedestrian in the camera image to obtain the pedestrian mapping between the laser point cloud and the camera image, and then extracting the center point and the head vertex of each pedestrian in the laser point cloud and in the camera image respectively, obtaining several groups of matching point pairs, each group comprising a 2D point in the camera image and a 3D point in the laser point cloud;

step 2.3, for each group of matching point pairs from step 2.2, extracting several further groups of 2D and 3D points with a mapping relation from around the corresponding 2D point in the camera image and around the corresponding 3D point in the laser point cloud, obtaining several new matching point pairs;

and step 2.4, merging the matching point pairs obtained in steps 2.2 and 2.3 to obtain the 2D-3D matching point pairs of step 2.
And 3, inputting the 2D-3D matching point pairs serving as first-stage particles into a first-stage particle swarm optimization model, iteratively optimizing the 2D-3D matching point pairs, and obtaining initial external parameters based on the optimized 2D-3D matching point pairs. Referring to fig. 5, the iterative optimization process of the first-stage particle swarm optimization model is as follows:
step 3.1, constructing the first-stage particles based on all the 2D-3D matching point pairs:

$$X=\{x_1,x_2,\cdots,x_N\},\qquad x_i=(u_i,\,v_i,\,P_i)$$

where $X$ denotes the particle swarm of the first-stage particle swarm optimization model, $(u_i,v_i)$ denotes the coordinates of the 2D point in the $i$-th group of 2D-3D matching point pairs, $P_i$ denotes the coordinates of the 3D point in the $i$-th group, $x_i$ denotes the $i$-th first-stage particle, and $N$ denotes the total number of 2D-3D matching point pairs, equal to the total number of first-stage particles in the model;

step 3.2, calculating the first-stage evaluation function of each first-stage particle:

$$f_1(x_i)=\sum_{P\in\mathcal{P}}\mathrm{HSS}\left(K\,T_i\,P\right)$$

where $f_1(x_i)$ denotes the first-stage evaluation function of the $i$-th first-stage particle, $K$ is the camera intrinsic parameter, $T_i$ is the external parameter corresponding to the $i$-th first-stage particle, $P$ denotes the coordinates of any 3D point in the detection frames, $\mathcal{P}$ denotes the set of all 3D points in all detection frames, and $\mathrm{HSS}$ is a binary function:

$$\mathrm{HSS}(p)=\begin{cases}0,& p\in M\\ 1,& p\notin M\end{cases}$$

where $M$ denotes the mask in the camera image corresponding to the detection frame, $p\in M$ indicates that the 3D point $P$ falls within the mask after projection, and $p\notin M$ indicates that it does not;

step 3.3, taking the first-stage particle with the lowest first-stage evaluation function as the optimal first-stage particle in the current population, and calculating the update speed of each first-stage particle at each iteration:

$$v_i^{\,t}=\omega\,v_i^{\,t-1}+c_1 r_1\left(g-x_i\right)+c_2 r_2\left(h-x_i\right)$$

where $v_i^{\,t}$ denotes the update speed of the $i$-th first-stage particle at the $t$-th iteration, $g$ denotes the optimal first-stage particle in the current population, $h$ denotes the historically optimal first-stage particle, $\omega$ is the inertia weight, $c_1$ and $c_2$ are step sizes, $r_1$ and $r_2$ are random numbers with $r_1,r_2\in(0,1)$, $t=1,2,\cdots,T$, $T$ is the maximum iteration number of the first-stage particle swarm optimization model, and $v_i^{\,0}=0$; at the first iteration, the historically optimal first-stage particle is the corresponding first-stage particle itself;

step 3.4, updating the first-stage particles based on the update speed:

$$x_i\leftarrow x_i+v_i^{\,t}$$

step 3.5, calculating the second-stage evaluation function of each first-stage particle after the update:

$$f_2(x_i)=\sum_{P\in\mathcal{P}}\mathrm{HSS}\left(K\,T_i'\,P\right)$$

where $f_2(x_i)$ denotes the second-stage evaluation function of the $i$-th first-stage particle, $T_i'$ is the external parameter corresponding to the updated $i$-th first-stage particle, and the remaining symbols are as in step 3.2;

step 3.6, extracting the first-stage particle $x_A$ with the lowest second-stage evaluation function, $1\le A\le N$, and expanding around $x_A$ to obtain the particle swarm of the next iteration;

and step 3.7, judging whether $t=T$; if so, outputting the external parameter corresponding to the optimal first-stage particle in the current population as the initial external parameter; otherwise letting $t=t+1$, taking the optimal first-stage particle in the current population as the historically optimal first-stage particle, and returning to step 3.2.
And 4, inputting the initial external parameters serving as second-stage particles into a second-stage particle swarm optimization model for iterative optimization to obtain actual external parameters between the camera and the laser radar. Referring to fig. 6, the iterative optimization process of the second stage particle swarm optimization model is as follows:
step 4.1, expanding, guided by the gradient of each variable in the initial external parameters, to obtain the particle swarm of the second-stage particle swarm optimization model:

$$Y=\{y_1,y_2,\cdots,y_Q\},\qquad y_j=(\theta_j,\,\phi_j,\,\psi_j,\,\mathbf{t}_j)$$

where $Y$ denotes the particle swarm of the second-stage particle swarm optimization model, $y_j$ denotes the $j$-th second-stage particle, $\theta_j$ denotes the azimuth (yaw) angle, $\phi_j$ the pitch angle, $\psi_j$ the roll angle and $\mathbf{t}_j$ the translation vector of the $j$-th second-stage particle, and $Q$ denotes the total number of second-stage particles in the model; the particles are expanded around the azimuth angle $\theta_0$, pitch angle $\phi_0$, roll angle $\psi_0$ and translation vector $\mathbf{t}_0$ of the initial external parameters, each variable being offset by its step size $\Delta_\theta$, $\Delta_\phi$, $\Delta_\psi$, $\Delta_x$, $\Delta_y$, $\Delta_z$;

step 4.2, calculating the first-stage evaluation function of each second-stage particle:

$$f_1(y_j)=\sum_{P\in\mathcal{P}}\mathrm{HSS}\left(K\,T(y_j)\,P\right)$$

where $f_1(y_j)$ denotes the first-stage evaluation function of the $j$-th second-stage particle, $K$ is the camera intrinsic parameter, $T(y_j)$ is the external parameter assembled from the $j$-th second-stage particle, $P$ denotes the coordinates of any 3D point in the detection frames, $\mathcal{P}$ denotes the set of all 3D points in all detection frames, and $\mathrm{HSS}$ is the binary function of step 3.2, equal to 0 if the projected point falls within the corresponding mask $M$ and 1 otherwise;

step 4.3, taking the second-stage particle with the lowest first-stage evaluation function as the optimal second-stage particle in the current population, and calculating the update speed of each second-stage particle at each iteration:

$$v_j^{\,t}=\omega\,v_j^{\,t-1}+c_1 r_1\left(g'-y_j\right)+c_2 r_2\left(h'-y_j\right)$$

where $v_j^{\,t}$ denotes the update speed of the $j$-th second-stage particle at the $t$-th iteration, $g'$ denotes the optimal second-stage particle in the current population, $h'$ denotes the historically optimal second-stage particle, $\omega$ is the inertia weight, $c_1$ and $c_2$ are step sizes, $r_1$ and $r_2$ are random numbers with $r_1,r_2\in(0,1)$, $t=1,2,\cdots,T'$, $T'$ is the maximum iteration number of the second-stage particle swarm optimization model, and $v_j^{\,0}=0$; at the first iteration, the historically optimal second-stage particle is the corresponding second-stage particle itself;

step 4.4, updating the second-stage particles based on the update speed:

$$y_j\leftarrow y_j+v_j^{\,t}$$

step 4.5, calculating the second-stage evaluation function of each second-stage particle after the update:

$$f_2(y_j)=\sum_{P\in\mathcal{P}}\mathrm{HSS}\left(K\,T(y_j')\,P\right)$$

where $y_j'$ denotes the updated $j$-th second-stage particle and the remaining symbols are as in step 4.2;

and step 4.6, extracting the second-stage particle $y_C$ with the lowest second-stage evaluation function, $1\le C\le Q$, and judging whether $t=T'$; if so, outputting the external parameter assembled from $y_C$ as the actual external parameter between the camera and the laser radar; otherwise letting $t=t+1$, taking $y_C$ as the historically optimal second-stage particle, and returning to step 4.2.
Fig. 7 is a schematic diagram of the effect of projecting the point cloud onto the image using the final external parameters obtained after the two-stage particle swarm convergence in this embodiment. The first and second rows of images are respectively the calibration results of two scenes: figs. 7(a), 7(b) and 7(c) show the evolution of the point cloud projected onto the image while the first scene is optimized, and figs. 7(d), 7(e) and 7(f) show the same evolution for the second scene.
The calibration method disclosed in this embodiment takes pedestrians, visible almost everywhere in natural scenes, as targets; it needs neither a calibration plate or other special calibration object nor a scene with a special geometric structure such as a building, which greatly broadens the scenes in which the method can be used. Because of the sparsity of the radar point cloud, the point cloud and the camera image have no one-to-one mapping, so the feature-point positions obtained by feature extraction are inexact; the calibration is therefore optimized through two-stage particle swarm convergence, which overcomes the impossibility of strict point cloud-image correspondence and yields more accurate key-point positions, and hence more accurate external parameters.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (5)
1. A method for calibrating the external parameters of a laser radar and a camera based on a natural scene, characterized by comprising the following steps:
step 1, acquiring a laser point cloud and a camera image which are synchronous and have pedestrians, and acquiring detection frames of different individual pedestrians in the laser point cloud and masks of the different individual pedestrians in the camera image;
step 2, establishing mapping between the laser point cloud and pedestrians in the camera image based on the detection frame and the mask to obtain a plurality of 2D-3D matching point pairs of the camera image and the laser point cloud;
step 3, inputting the 2D-3D matching point pairs serving as first-stage particles into a first-stage particle swarm optimization model, iteratively optimizing the 2D-3D matching point pairs, and obtaining initial external parameters based on the optimized 2D-3D matching point pairs;
step 4, inputting the initial external parameters serving as second-stage particles into a second-stage particle swarm optimization model for iterative optimization to obtain actual external parameters between the camera and the laser radar;
in step 3, the iterative optimization of the first-stage particle swarm optimization model specifically includes the following steps:
step 3.1, constructing the first-stage particles based on all the 2D-3D matching point pairs:

$$X=\{x_1,x_2,\cdots,x_N\},\qquad x_i=(u_i,\,v_i,\,P_i)$$

where $X$ denotes the particle swarm of the first-stage particle swarm optimization model, $(u_i,v_i)$ denotes the coordinates of the 2D point in the $i$-th group of 2D-3D matching point pairs, $P_i$ denotes the coordinates of the 3D point in the $i$-th group, $x_i$ denotes the $i$-th first-stage particle, and $N$ denotes the total number of 2D-3D matching point pairs, equal to the total number of first-stage particles in the first-stage particle swarm optimization model;

step 3.2, calculating the first-stage evaluation function of each first-stage particle:

$$f_1(x_i)=\sum_{P\in\mathcal{P}}\mathrm{HSS}\left(K\,T_i\,P\right)$$

where $f_1(x_i)$ denotes the first-stage evaluation function of the $i$-th first-stage particle, $K$ is the camera intrinsic parameter, $T_i$ is the external parameter corresponding to the $i$-th first-stage particle, $P$ denotes the coordinates of any 3D point in the detection frames, $\mathcal{P}$ denotes the set of all 3D points in all detection frames, and $\mathrm{HSS}$ is a binary function:

$$\mathrm{HSS}(p)=\begin{cases}0,& p\in M\\ 1,& p\notin M\end{cases}$$

where $M$ denotes the mask in the camera image corresponding to the detection frame, $p\in M$ indicates that the 3D point $P$ falls within the mask after projection, and $p\notin M$ indicates that it does not;

step 3.3, taking the first-stage particle with the lowest first-stage evaluation function as the optimal first-stage particle in the current population, and calculating the update speed of each first-stage particle at each iteration:

$$v_i^{\,t}=\omega\,v_i^{\,t-1}+c_1 r_1\left(g-x_i\right)+c_2 r_2\left(h-x_i\right)$$

where $v_i^{\,t}$ denotes the update speed of the $i$-th first-stage particle at the $t$-th iteration, $g$ denotes the optimal first-stage particle in the current population, $h$ denotes the historically optimal first-stage particle, $\omega$ is the inertia weight, $c_1$ and $c_2$ are step sizes, $r_1$ and $r_2$ are random numbers with $r_1,r_2\in(0,1)$, $t=1,2,\cdots,T$, $T$ is the maximum iteration number of the first-stage particle swarm optimization model, and $v_i^{\,0}=0$;

step 3.4, updating the first-stage particles based on the update speed:

$$x_i\leftarrow x_i+v_i^{\,t}$$

step 3.5, calculating the second-stage evaluation function of each first-stage particle after the update:

$$f_2(x_i)=\sum_{P\in\mathcal{P}}\mathrm{HSS}\left(K\,T_i'\,P\right)$$

where $f_2(x_i)$ denotes the second-stage evaluation function of the $i$-th first-stage particle, $T_i'$ is the external parameter corresponding to the updated $i$-th first-stage particle, and the remaining symbols are as in step 3.2;

step 3.6, extracting the first-stage particle $x_A$ with the lowest second-stage evaluation function, $1\le A\le N$, and expanding around $x_A$ to obtain the particle swarm of the next iteration;

and step 3.7, judging whether $t=T$; if so, outputting the external parameter corresponding to the optimal first-stage particle in the current population as the initial external parameter; otherwise letting $t=t+1$, taking the optimal first-stage particle in the current population as the historically optimal first-stage particle, and returning to step 3.2.
2. The method for calibrating the external parameters of the laser radar and the camera based on the natural scene as claimed in claim 1, wherein in step 1:
the laser point cloud is input into a PointPillars point-cloud detection network, which outputs detection frames for the different individual pedestrians;

the camera image is input into the instance segmentation network Mask R-CNN (MRCNN), which outputs masks for the different individual pedestrians.
3. The method for calibrating the external parameters of the laser radar and the camera based on the natural scene as claimed in claim 1, wherein the specific process of the step 2 is as follows:
step 2.1, converting the laser point cloud into a depth map, and determining which radar points belong to each detection frame according to the pixel values inside the detection frame in the depth map;

step 2.2, associating, by manual marking, each pedestrian in the depth map with the corresponding pedestrian in the camera image to obtain the pedestrian mapping between the laser point cloud and the camera image, and then extracting the center point and the head vertex of each pedestrian in the laser point cloud and in the camera image respectively, obtaining several groups of matching point pairs, each group comprising a 2D point in the camera image and a 3D point in the laser point cloud;

step 2.3, for each group of matching point pairs from step 2.2, extracting several further groups of 2D and 3D points with a mapping relation from around the corresponding 2D point in the camera image and around the corresponding 3D point in the laser point cloud, obtaining several new matching point pairs;

and step 2.4, merging the matching point pairs obtained in steps 2.2 and 2.3 to obtain the 2D-3D matching point pairs of step 2.
4. The method for calibrating the external parameters of the laser radar and the camera based on the natural scene as claimed in claim 1, 2 or 3, wherein in step 4 the iterative optimization of the second-stage particle swarm optimization model specifically comprises the following steps:

step 4.1, expanding, guided by the gradient of each variable in the initial external parameters, to obtain the particle swarm of the second-stage particle swarm optimization model;

step 4.2, calculating the first-stage evaluation function of each second-stage particle;

step 4.3, taking the second-stage particle with the lowest first-stage evaluation function as the optimal second-stage particle in the current population, and calculating the update speed of each second-stage particle at each iteration;

step 4.4, updating the second-stage particles based on the update speed;

step 4.5, calculating the second-stage evaluation function of each second-stage particle after the update;

and step 4.6, extracting the second-stage particle with the lowest second-stage evaluation function and judging whether the iteration count has reached the maximum; if so, outputting that particle as the actual external parameters between the camera and the laser radar; otherwise iterating once, updating the historically optimal second-stage particle, and returning to step 4.2.
5. The method for calibrating the external parameters of the laser radar and the camera based on the natural scene as claimed in claim 4, wherein the iterative optimization of the second-stage particle swarm optimization model specifically comprises:
step 4.1, expanding, guided by the gradient of each variable in the initial external parameters, to obtain the particle swarm of the second-stage particle swarm optimization model:

$$Y=\{y_1,y_2,\cdots,y_Q\},\qquad y_j=(\theta_j,\,\phi_j,\,\psi_j,\,\mathbf{t}_j)$$

where $Y$ denotes the particle swarm of the second-stage particle swarm optimization model, $y_j$ denotes the $j$-th second-stage particle, $\theta_j$ denotes the azimuth (yaw) angle, $\phi_j$ the pitch angle, $\psi_j$ the roll angle and $\mathbf{t}_j$ the translation vector of the $j$-th second-stage particle, and $Q$ denotes the total number of second-stage particles in the model; the particles are expanded around the azimuth angle $\theta_0$, pitch angle $\phi_0$, roll angle $\psi_0$ and translation vector $\mathbf{t}_0$ of the initial external parameters, each variable being offset by its step size $\Delta_\theta$, $\Delta_\phi$, $\Delta_\psi$, $\Delta_x$, $\Delta_y$, $\Delta_z$;

step 4.2, calculating the first-stage evaluation function of each second-stage particle:

$$f_1(y_j)=\sum_{P\in\mathcal{P}}\mathrm{HSS}\left(K\,T(y_j)\,P\right)$$

where $f_1(y_j)$ denotes the first-stage evaluation function of the $j$-th second-stage particle, $K$ is the camera intrinsic parameter, $T(y_j)$ is the external parameter assembled from the $j$-th second-stage particle, $P$ denotes the coordinates of any 3D point in the detection frames, $\mathcal{P}$ denotes the set of all 3D points in all detection frames, and $\mathrm{HSS}$ is the binary function of step 3.2, equal to 0 if the projected point falls within the corresponding mask $M$ and 1 otherwise;

step 4.3, taking the second-stage particle with the lowest first-stage evaluation function as the optimal second-stage particle in the current population, and calculating the update speed of each second-stage particle at each iteration:

$$v_j^{\,t}=\omega\,v_j^{\,t-1}+c_1 r_1\left(g'-y_j\right)+c_2 r_2\left(h'-y_j\right)$$

where $v_j^{\,t}$ denotes the update speed of the $j$-th second-stage particle at the $t$-th iteration, $g'$ denotes the optimal second-stage particle in the current population, $h'$ denotes the historically optimal second-stage particle, $\omega$ is the inertia weight, $c_1$ and $c_2$ are step sizes, $r_1$ and $r_2$ are random numbers with $r_1,r_2\in(0,1)$, $t=1,2,\cdots,T'$, $T'$ is the maximum iteration number of the second-stage particle swarm optimization model, and $v_j^{\,0}=0$;

step 4.4, updating the second-stage particles based on the update speed:

$$y_j\leftarrow y_j+v_j^{\,t}$$

step 4.5, calculating the second-stage evaluation function of each second-stage particle after the update:

$$f_2(y_j)=\sum_{P\in\mathcal{P}}\mathrm{HSS}\left(K\,T(y_j')\,P\right)$$

where $y_j'$ denotes the updated $j$-th second-stage particle and the remaining symbols are as in step 4.2;

and step 4.6, extracting the second-stage particle $y_C$ with the lowest second-stage evaluation function, $1\le C\le Q$, and judging whether $t=T'$; if so, outputting the external parameter assembled from $y_C$ as the actual external parameter between the camera and the laser radar; otherwise letting $t=t+1$, taking $y_C$ as the historically optimal second-stage particle, and returning to step 4.2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110716414.4A | 2021-06-28 | 2021-06-28 | External parameter calibration method of laser radar and camera based on natural scene
Publications (2)
Publication Number | Publication Date |
---|---|
CN113256696A CN113256696A (en) | 2021-08-13 |
CN113256696B true CN113256696B (en) | 2021-09-24 |
Family
ID=77189798
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110716414.4A | External parameter calibration method of laser radar and camera based on natural scene | 2021-06-28 | 2021-06-28
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113256696B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115184909B (en) * | 2022-07-11 | 2023-04-07 | 中国人民解放军国防科技大学 | Vehicle-mounted multi-spectral laser radar calibration system and method based on target detection |
CN115471574B (en) * | 2022-11-02 | 2023-02-03 | 北京闪马智建科技有限公司 | External parameter determination method and device, storage medium and electronic device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108416811A (en) * | 2018-03-08 | 2018-08-17 | 云南电网有限责任公司电力科学研究院 | A kind of video camera self-calibrating method and device |
CN112785702A (en) * | 2020-12-31 | 2021-05-11 | 华南理工大学 | SLAM method based on tight coupling of 2D laser radar and binocular camera |
CN112907681A (en) * | 2021-02-26 | 2021-06-04 | 北京中科慧眼科技有限公司 | Combined calibration method and system based on millimeter wave radar and binocular camera |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105447869B (en) * | 2015-11-30 | 2019-02-12 | 四川华雁信息产业股份有限公司 | Camera self-calibration method and device based on particle swarm optimization algorithm |
EP3477334A1 (en) * | 2017-10-24 | 2019-05-01 | Veoneer Sweden AB | Automatic radar sensor calibration |
CN107977997B (en) * | 2017-11-29 | 2020-01-17 | 北京航空航天大学 | Camera self-calibration method combined with laser radar three-dimensional point cloud data |
CN108509918B (en) * | 2018-04-03 | 2021-01-08 | 中国人民解放军国防科技大学 | Target detection and tracking method fusing laser point cloud and image |
CN110456330B (en) * | 2019-08-27 | 2021-07-09 | 中国人民解放军国防科技大学 | Method and system for automatically calibrating external parameter without target between camera and laser radar |
CN110543727B (en) * | 2019-09-05 | 2023-01-03 | 北京工业大学 | Improved particle swarm algorithm-based omnidirectional mobile intelligent wheelchair robot parameter identification method |
- 2021-06-28: application CN202110716414.4A filed in China; granted as CN113256696B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN113256696A (en) | 2021-08-13 |
Similar Documents

Publication | Title
---|---
CN108242079B | VSLAM method based on multi-feature visual odometer and graph optimization model
CN110853075B | Visual tracking positioning method based on dense point cloud and synthetic view
CN106940704B | Positioning method and device based on grid map
Bazin et al. | 3-line RANSAC for orthogonal vanishing point detection
CN109272537B | Panoramic point cloud registration method based on structured light
CA2826534C | Backfilling points in a point cloud
CN112001958B | Virtual point cloud three-dimensional target detection method based on supervised monocular depth estimation
CN111899301A | Workpiece 6D pose estimation method based on deep learning
CN113256696B | External parameter calibration method of laser radar and camera based on natural scene
CN112819903A | Camera and laser radar combined calibration method based on L-shaped calibration plate
CN111998862B | BNN-based dense binocular SLAM method
CN111144349A | Indoor visual relocation method and system
CN113050074B | Camera and laser radar calibration system and calibration method in unmanned environment perception
CN113361365B | Positioning method, positioning device, positioning equipment and storage medium
Bybee et al. | Method for 3-D scene reconstruction using fused LiDAR and imagery from a texel camera
CN112785724A | Visual color matching method for ancient buildings based on LiDAR point cloud and two-dimensional image
CN117197333A | Space target reconstruction and pose estimation method and system based on multi-view vision
CN109636897B | Octmap optimization method based on improved RGB-D SLAM
Yoon et al. | Targetless multiple camera-LiDAR extrinsic calibration using object pose estimation
CN112258631B | Three-dimensional target detection method and system based on deep neural network
CN111198563B | Terrain identification method and system for dynamic motion of foot type robot
Le Besnerais et al. | Dense height map estimation from oblique aerial image sequences
Buck et al. | Capturing uncertainty in monocular depth estimation: Towards fuzzy voxel maps
CN116630953A | Monocular image 3D target detection method based on nerve volume rendering
CN115830116A | Robust visual odometer method
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant