CN116993836A - Road-end laser radar camera calibration method and system

Road-end laser radar camera calibration method and system

Info

Publication number
CN116993836A
Authority
CN
China
Prior art keywords
flow
point cloud
camera
scene
laser radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310978865.4A
Other languages
Chinese (zh)
Inventor
陈仕韬
海仁伟
沈艳晴
高润钦
辛景民
郑南宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Shun'an Artificial Intelligence Research Institute
Xian Jiaotong University
Original Assignee
Ningbo Shun'an Artificial Intelligence Research Institute
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Shun'an Artificial Intelligence Research Institute, Xian Jiaotong University filed Critical Ningbo Shun'an Artificial Intelligence Research Institute
Priority to CN202310978865.4A priority Critical patent/CN116993836A/en
Publication of CN116993836A publication Critical patent/CN116993836A/en
Pending legal-status Critical Current

Classifications

    All under G: PHYSICS; G06: COMPUTING; CALCULATING OR COUNTING; G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/215: Motion-based segmentation
    • G06T 7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/30244: Camera pose

Abstract

The application discloses a road-end laser radar camera calibration method and system. The method comprises the following steps: for two consecutive frames of point cloud, estimating the scene flow between them and setting a threshold to separate dynamic points from static points, obtaining the point cloud scene flow of the moving target; for two consecutive frames of images, separating dynamic objects by image optical flow, obtaining the image optical flow of the moving target; coarsely calibrating the laser radar and camera extrinsic parameters based on the image optical flow and the point cloud scene flow of the moving target, obtaining an initial estimate of the extrinsic parameters; and optimizing the initial estimate according to the image optical flow and the point cloud scene flow of the moving target, obtaining an accurate estimate of the extrinsic parameters. The method removes the inconvenience that existing calibration imposes on road-end sensor deployment: it extracts the traffic flow in the traffic scene with optical flow and scene flow, and realizes extrinsic calibration of the laser radar and the camera automatically.

Description

Road-end laser radar camera calibration method and system
Technical Field
The application belongs to the technical field of sensor parameter calibration, and particularly relates to a road-end laser radar and camera calibration method and system.
Background
Automatic driving systems face great safety challenges, and vehicle-road cooperative sensing is considered an effective way to improve their safety. Roadside equipment provides an ultra-wide viewing angle for an automatic driving system and overcomes the limited perception range of a single vehicle. As infrastructure, roadside perception can provide drivers with real-time information about the road environment, such as collision warnings for pedestrians and motor vehicles, ahead-congestion reminders and traffic accident alerts; it can also provide traffic management departments with monitoring and prediction of the road traffic environment, such as traffic flow statistics, parked-vehicle detection and average speed measurement over road sections.
Lidar and cameras are common sensor configurations in roadside perception. Cameras provide images with rich semantic information but no depth, while laser radars provide accurate but sparse 3D point clouds from which detailed semantics are hard to perceive. Perception algorithms that fuse the laser radar and the camera therefore often perform better, and extrinsic calibration between the two is the basis of such fusion. Unlike laser radar camera calibration on a conventional robot, road-end sensor calibration places higher demands on the degree of automation. Existing methods calibrate the laser radar and the camera with a checkerboard; deploying the checkerboard disturbs normal traffic, and once the target is removed, recalibration is difficult. Meanwhile, some methods based on semantic features depend on specific semantic cues, rely heavily on the training data set, and have low generality. With the development of road-end sensing systems, research on automatic laser radar camera extrinsic calibration methods is highly necessary.
Most existing laser radar camera calibration techniques depend on a checkerboard that must be installed for calibration. For road-end equipment in particular, the laser radar and the camera are usually mounted high, so the required checkerboard is often large, seriously affects normal traffic, and cannot support recalibration once removed. Because road-end equipment is usually stationary after installation, motion-based calibration methods are not applicable. Learning-based end-to-end calibration methods rely on calibrated data as ground truth for training, and their generalization across scenes is hard to guarantee. Methods based on semantic feature alignment can extract relatively general semantic elements such as vehicles, pedestrians and the ground; however, both point cloud semantic segmentation and image semantic segmentation depend on large numbers of data labels, the segmentation quality also depends on the data set, and performance when deployed in different scenes is hard to guarantee.
Disclosure of Invention
In order to solve the problems in the prior art, the application provides an automatic laser radar camera extrinsic calibration algorithm based on optical flow and scene flow. It solves the inconvenience that existing calibration imposes on road-end sensor deployment: it extracts the traffic flow (vehicles and pedestrians) in the traffic scene using optical flow and scene flow, and realizes extrinsic calibration of the laser radar and the camera automatically.
In order to achieve the above purpose, the application adopts the following technical scheme: a road-end laser radar camera calibration method comprising the following steps:
for two consecutive frames of point cloud, estimating the scene flow between them and setting a threshold to separate dynamic points from static points, obtaining the point cloud scene flow of the moving target;
for two consecutive frames of images, separating dynamic objects by image optical flow, obtaining the image optical flow of the moving target;
coarsely calibrating the laser radar and camera extrinsic parameters based on the image optical flow and the point cloud scene flow of the moving target, obtaining an initial estimate of the extrinsic parameters;
and optimizing the initial estimate of the extrinsic parameters according to the image optical flow and the point cloud scene flow of the moving target, obtaining an accurate estimate of the laser radar and camera extrinsic parameters.
Further, for two consecutive frames of point cloud, estimating the scene flow between them and setting a threshold to separate dynamic points from static points, obtaining the point cloud scene flow of the moving target, specifically comprises:
for the point cloud data, estimating and filtering ground points with a point cloud ground estimation method, and removing noise points with an outlier removal algorithm, obtaining a relatively clean point cloud of above-ground objects;
estimating the scene flow of the relatively clean above-ground point cloud over consecutive point cloud frames with a neural-prior scene flow method, obtaining the point cloud scene flow; filtering out stationary objects and objects moving slower than a speed threshold by setting that threshold, obtaining the point cloud mask $M_L$ of the moving target and thus the scene flow and point cloud of the moving target.
Further, for two consecutive frames of images, separating dynamic objects by image optical flow to obtain the image optical flow of the moving object, specifically:
for consecutive frame images, segmenting out the moving objects by optical flow estimation, obtaining the mask $M_I$ of the moving objects in the image, which gives the motion of the moving objects in the image coordinate system and the pixel-wise representation of their image optical flow.
Further, coarsely calibrating the laser radar and camera extrinsic parameters based on the image optical flow and the point cloud scene flow of the moving target to obtain an initial estimate of the extrinsic parameters comprises the following steps:
estimating an initial value of the extrinsic parameters by sampling: the extrinsic parameters of the laser radar and the camera comprise a rotation parameter and a translation parameter; when the relative translation of the laser radar and the camera is smaller than a set amount, only the attitude is sampled and the translation parameter is set to $[0,0,0]^T$; when the displacement between the laser radar and the camera is not smaller than the set amount, an approximate range of the relative displacement is given and the relative translation is sampled as well;
estimating the main direction and centroid of the optical flow and scene flow of each moving object: each moving object is described by its centroid; for the image optical flow, Euclidean distance clustering of the moving-object pixels yields the image optical flow set $f_{k,j}$ of each moving object; for the point cloud scene flow, Euclidean distance clustering of the moving-object points yields the point cloud and scene flow set $F_{k,j}$ of each moving object; the optical flow centroid and main direction and the scene flow centroid and main direction of each moving object are then calculated;
projecting the point cloud scene flow centroid and scene flow main direction into the camera pixel coordinate system through the camera intrinsics and the sampled extrinsics; evaluating each sampled pose by the distance between the projected scene flow centroid and main direction and the nearest optical flow centroid and main direction, obtaining a distance score for each sampled extrinsic; sorting the scores of all sampled extrinsics, the extrinsic with the lowest score finally being the initial estimate of the laser radar and camera extrinsic parameters.
Further, the pixel centroid and optical flow main direction and the point cloud centroid and scene flow main direction of each moving object are calculated with the following formulas:

$$\bar{p}_{k,j}=\frac{1}{|f_{k,j}|}\sum_{i}p_{k,j,i},\quad \bar{v}_{k,j}=\frac{\sum_{i}v_{k,j,i}}{\left\|\sum_{i}v_{k,j,i}\right\|},\quad \bar{P}_{k,j}=\frac{1}{|F_{k,j}|}\sum_{i}P_{k,j,i},\quad \bar{V}_{k,j}=\frac{\sum_{i}V_{k,j,i}}{\left\|\sum_{i}V_{k,j,i}\right\|}$$

where $f_{k,j}$ denotes the set of pixels and optical flows of moving object $j$ in image $I_k$; an element of the set, $f_{k,j,i}=\{p_{k,j,i},v_{k,j,i}\}$, consists of the image coordinates $p_{k,j,i}$ of a moving-object pixel and the optical flow $v_{k,j,i}$ of that pixel. $F_{k,j}$ denotes the point cloud of moving object $j$ in point cloud $P_k$ and its scene flow set; an element of the set, $F_{k,j,i}=\{P_{k,j,i},V_{k,j,i}\}$, consists of a point $P_{k,j,i}$ of the moving-object point cloud and the scene flow $V_{k,j,i}$ of that point.
Further, the scene flow projection formulas are as follows:

$$Z_c\begin{bmatrix}p_{k,i}\\1\end{bmatrix}=K\left(R\,P_{k,i}+t\right),\qquad \tilde{V}_{k,i}\approx\frac{1}{Z_c}\left(K\,R\,V_{k,i}-V^{c}_{z}\begin{bmatrix}p_{k,i}\\1\end{bmatrix}\right)$$

where a point $P_{k,i}$ of the point cloud is projected to pixel coordinates $p_{k,i}$ through the camera intrinsics $K$ and the camera-laser radar extrinsics $\{R,t\}$, $Z_c$ is the Z coordinate of the point in the camera coordinate system, the projection of the point's scene flow estimate $V_{k,i}$ into the pixel coordinate system is denoted $\tilde{V}_{k,i}$, and $V^{c}_{z}$ is the Z component of the point's scene flow expressed in the camera coordinate system;
the evaluation function is as follows:

$$D=\sum_{j}\left(\alpha\left\|\tilde{p}\!\left(\bar{P}_{k,j}\right)-\bar{p}^{*}_{k,j}\right\|+\beta\left\|\tilde{V}\!\left(\bar{V}_{k,j}\right)-\bar{v}^{*}_{k,j}\right\|\right)$$

where $\bar{p}^{*}_{k,j}$ is the image centroid of the moving object closest to the projection $\tilde{p}(\bar{P}_{k,j})$ of the point cloud centroid $\bar{P}_{k,j}$ of moving object $j$, $\bar{v}^{*}_{k,j}$ is that object's optical flow main direction, $\tilde{V}(\bar{V}_{k,j})$ is the projection of the object's scene flow main direction $\bar{V}_{k,j}$ into the image, and $\alpha$ and $\beta$ are adjustment factors for adjusting the weights of the point projections and scene flow projections.
Further, optimizing the initial estimate of the laser radar and camera extrinsic parameters according to the image optical flow and the point cloud scene flow of the moving target to obtain an accurate estimate is specifically:
constructing an optimization equation and solving it by nonlinear optimization, obtaining an accurate extrinsic estimate through iterative optimization; the optimization objective has two aspects: the projection of the moving-object point cloud obtained by the laser radar should be completely covered by the corresponding optical flow mask, and the projected scene flow of the moving-object point cloud should be as close as possible to the optical flow in the pixel coordinate system;
when constructing the optimization equation, the moving-object point cloud is projected into the pixel coordinate system, the moving-object pixel closest to each projected point is found by nearest neighbor search, and the pixel distance together with the difference between the projected scene flow and the estimated optical flow is computed; the optimization equation is as follows:

$$E=\sum_{m}\sum_{i}\left(\mu\left\|\tilde{p}\!\left(P_{k,m,i}\right)-p^{*}_{k,m,i}\right\|+\gamma\left\|\tilde{V}\!\left(V_{k,m,i}\right)-v^{*}_{k,m,i}\right\|\right)$$

where $p^{*}_{k,m,i}$ is the moving-object pixel closest to the projection $\tilde{p}(P_{k,m,i})$ of point $P_{k,m,i}$ of moving object $m$, $v^{*}_{k,m,i}$ is the optical flow estimate at that pixel, $\tilde{V}(V_{k,m,i})$ is the projection of the scene flow $V_{k,m,i}$ of point $P_{k,m,i}$ into the image, and $\mu$ and $\gamma$ are adjustment factors for adjusting the weights of the point projections and scene flow projections in the loss function; during optimization, each time a local optimum is obtained it is used as the initial value of the next iteration, and iterative optimization yields the global optimum, finally obtaining an accurate estimate of the laser radar and camera extrinsic parameters.
Based on the above conception, the application also provides a road-end laser radar camera calibration system, which comprises a point cloud scene flow estimation module, an image optical flow estimation module, a coarse calibration module and a fine calibration module;
the point cloud scene flow estimation module estimates the scene flow between two consecutive frames of point cloud and sets a threshold to separate dynamic points from static points, obtaining the point cloud scene flow of the moving target;
the image optical flow estimation module separates dynamic objects by image optical flow from two consecutive frames of images, obtaining the image optical flow of the moving target;
the coarse calibration module coarsely calibrates the laser radar and camera extrinsic parameters based on the image optical flow and the point cloud scene flow of the moving target, obtaining an initial estimate of the extrinsic parameters;
the fine calibration module optimizes the initial estimate of the extrinsic parameters according to the image optical flow and the point cloud scene flow of the moving target, obtaining an accurate estimate of the laser radar and camera extrinsic parameters.
The application also provides a computer device comprising a processor and a memory, the memory storing a computer-executable program; the processor reads the program from the memory and executes it, and when executing it can implement the road-end laser radar camera calibration method.
A computer-readable storage medium is also provided, storing a computer program which, when executed by a processor, can implement the road-end laser radar camera calibration method.
Compared with the prior art, the application has at least the following beneficial effects:
the method adopts a low-level general characteristic optical flow and scene flow as calibration characteristics, at road end equipment, a laser radar and a camera are basically motionless, a large number of moving targets such as pedestrians and vehicles exist, the segmentation of moving objects can be easily realized by using the optical flow and scene flow estimation, the dependence of the optical flow and the scene flow on a data set is not strong, and the generalization of a deep learning method in different scenes can be ensured; on the one hand, the method provided by the application has more general required characteristics, the moving objects are ubiquitous in traffic scenes, and the dependence of the optical flow scene flow estimation on a data set is smaller.
Drawings
Fig. 1 is a flowchart of the automatic road-end laser radar camera calibration method based on image optical flow and point cloud scene flow.
Fig. 2 shows the calibration process in the pixel coordinate system using image optical flow and point cloud scene flow.
Fig. 3 is a flowchart of coarse calibration using image optical flow and point cloud scene flow.
Fig. 4 shows the effect of the coarse calibration method on simulated data and real data.
Fig. 5 is a flowchart of fine calibration using image optical flow and point cloud scene flow.
Fig. 6 shows calibration results on real data using image optical flow and point cloud scene flow.
Fig. 7 shows the consistency evaluation results of the method in a real scene.
Detailed Description
Exemplary embodiments of the application are set forth in detail below with reference to the drawings and the detailed description, including various details of the embodiments to facilitate understanding. These examples are for illustration only and are not to be construed as limiting the scope of the application; modifications that are equivalent in various ways, made by those skilled in the art after reading the application, fall within the scope defined by the appended claims.
FIG. 1 is a flowchart of the automatic road-end laser radar camera calibration method based on image optical flow and point cloud scene flow. The method is suitable for calibrating road-end laser radar and camera sensors and comprises three parts: data preprocessing and optical flow/scene flow estimation, coarse calibration, and fine calibration. The inputs of the proposed algorithm are two consecutive frames of point cloud $P_k,P_{k+1}\in\mathbb{R}^{N_p\times 3}$ and two consecutive RGB images $I_k,I_{k+1}\in\mathbb{R}^{N_h\times N_w\times 3}$, where $N_p$ is the number of points in the point cloud and $N_h,N_w$ are the image dimensions; the camera intrinsics, denoted $K\in\mathbb{R}^{3\times 3}$, must also be known. The objective is to estimate the 6-DOF extrinsics $R$ and $t$ between the laser radar and the camera.
Because the motion of road-end equipment is very small and its speed is almost zero, segmentation of dynamic objects such as pedestrians and cars is easily achieved with the point cloud scene flow and the image optical flow. For two consecutive frames of point cloud, a point cloud scene flow estimation algorithm first estimates the scene flow $F_{P_k}=\{P_k,V_k\}$ between them; a threshold then separates dynamic points from static points, yielding a dynamic point cloud mask through which the dynamic-object point cloud is extracted from the laser radar data. For two consecutive images, an optical flow estimation method computes the image optical flow, and the dynamic objects are separated using the optical flow to obtain the image mask $M_I$. With the images and point clouds motion-segmented into moving objects by the image optical flow and the point cloud scene flow, the laser radar camera extrinsics are then solved based on the images and point clouds.
Solving the laser radar camera extrinsics comprises coarse calibration and fine calibration. The first step computes the principal vector and geometric center of each dynamic object in the point cloud scene flow and in the image optical flow; the extrinsics are then sampled, the principal velocity and geometric center of the point cloud scene flow are projected into the image, each sampled pose is scored based on the projection, and the best-scoring pose is taken as the initial estimate $\{R_{init},t_{init}\}$. The second part is fine calibration: with the initial estimate from the first step as the initial value, the point cloud and scene flow are projected into the pixel coordinate system, points are matched to pixels by nearest neighbor search, and a reprojection error over pixels and optical flow is constructed and iteratively optimized, yielding accurate radar-camera extrinsics. Finally, the application was tested on the simulation platform Carla and in real scenes, verifying its effectiveness. The details of the application are as follows:
step 1: processing data collected by a sensor, filtering data noise, and then estimating image optical flow and point cloud scene flow by using an optical flow scene flow estimation algorithm, wherein the specific details are as follows:
For consecutive frame images, a typical image optical flow algorithm such as RAFT, trained on common data sets (e.g., FlyingChairs, FlyingThings3D, Sintel), performs well on real data without fine-tuning, because optical flow is a relatively low-level and general feature.
Since the road-end lidar and camera are almost stationary, a mask $M_I$ of the dynamic objects (vehicles and pedestrians) in the image is obtained by segmenting them via optical flow estimation, which also gives the motion of the dynamic objects in the image coordinate system; the pixels and optical flow estimates of the dynamic objects in the image are extracted through the mask and denoted $f_{i,k}=\{p_{i,k},v_{i,k}\}\in f_k$.
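As a concrete sketch of this step (assuming a pretrained RAFT model from torchvision and an illustrative magnitude threshold; neither the model variant nor the threshold value is specified by the application), the dynamic-object mask can be obtained by thresholding the per-pixel flow magnitude, since the sensor itself is static:

```python
# Sketch: image optical flow + dynamic-object mask via a magnitude threshold.
# Assumptions: torchvision's pretrained RAFT; tau_px is illustrative.
import torch
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

def moving_object_mask(img1, img2, tau_px=2.0):
    """img1, img2: (1, 3, H, W) float tensors in [0, 1], H and W divisible by 8."""
    weights = Raft_Large_Weights.DEFAULT
    model = raft_large(weights=weights).eval()
    img1, img2 = weights.transforms()(img1, img2)   # RAFT's expected normalization
    with torch.no_grad():
        flow = model(img1, img2)[-1][0]             # (2, H, W), final refinement
    mag = torch.linalg.norm(flow, dim=0)            # per-pixel displacement in px
    mask_I = mag > tau_px                           # static sensor: large flow => dynamic object
    return mask_I, flow
```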
For point cloud data, unlike image data, there are many noise points and useless ground points. First, a point cloud ground estimation algorithm (such as Patchwork) estimates and filters the ground points; an outlier removal algorithm then removes the noise points, yielding a relatively clean point cloud of the objects above the ground.
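A sketch of this preprocessing follows; Open3D's RANSAC plane segmentation stands in for the Patchwork ground estimator named above, and all thresholds are illustrative assumptions:

```python
# Sketch: ground removal (RANSAC plane fit in place of Patchwork) followed by
# statistical outlier removal. Thresholds are illustrative, not from the patent.
import numpy as np
import open3d as o3d

def filter_ground_and_outliers(points: np.ndarray) -> o3d.geometry.PointCloud:
    """points: (N, 3) raw laser radar scan."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    # Estimate the dominant plane (the ground) and drop its inliers.
    _, ground_idx = pcd.segment_plane(distance_threshold=0.15,
                                      ransac_n=3, num_iterations=1000)
    pcd = pcd.select_by_index(ground_idx, invert=True)
    # Remove isolated noise points.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return pcd  # relatively clean above-ground object points
```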
The scene flow of the relatively clean above-ground point cloud is estimated over consecutive point cloud frames using a neural prior (Neural Scene Flow Prior), which estimates the point cloud scene flow online; the result is denoted $F_{P_k}=\{P_k,V_k\}$. By setting a speed threshold to filter out stationary or slow objects, the point cloud mask $M_L$ of the moving target is obtained, giving the point cloud and scene flow estimates of the moving target.
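The dynamic/static separation itself reduces to a per-point speed test; a minimal sketch, with the frame interval and speed threshold as illustrative parameters:

```python
# Sketch: dynamic/static separation of the point cloud by scene-flow speed.
import numpy as np

def moving_point_mask(V: np.ndarray, dt: float = 0.1, v_min: float = 0.5):
    """V: (N, 3) per-point scene flow (meters per frame); dt: frame interval (s).
    Returns the boolean mask M_L selecting points faster than v_min (m/s)."""
    speed = np.linalg.norm(V, axis=1) / dt
    return speed > v_min
```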
Step 2: coarse calibration using the image optical flow and point cloud scene flow of the moving target obtained in Step 1. The algorithm flow is shown in Fig. 3, and the details are as follows:
1) Extrinsic sampling: the purpose of coarse calibration is to find an initial value close to the true extrinsics. To make the estimation fully automatic, the application estimates the initial extrinsic value by sampling. The extrinsics of the laser radar and the camera comprise a rotation parameter $R$ and a translation parameter $t$. The range of possible values of the rotation parameter $R$ is bounded, whereas without installation parameters the relative translation $t$ can range over an unbounded set. Therefore, the application samples only the attitude and sets the translation parameter to $[0,0,0]^T$, which is effective when the relative displacement between the laser radar and the camera is small (displacement < 50 cm). In contrast, for large relative translations between the laser radar and the camera (displacement > 50 cm), an approximate range of the relative displacement must be given and the relative translation sampled as well.
2) Estimating the main direction and centroid of each moving object's optical flow and scene flow: to reduce computation, each moving object is described only by its centroid during initial value estimation. For the image optical flow, Euclidean distance clustering applied to the moving-object mask yields the image optical flow set $f_{k,j}$ of each moving object; for the point cloud scene flow, Euclidean distance clustering applied to the moving-object point cloud yields the point cloud and scene flow set $F_{k,j}$ of each moving object, as sketched below.
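A sketch of the clustering step, using DBSCAN with a pure distance criterion as a stand-in for Euclidean distance clustering (eps and min_samples are illustrative):

```python
# Sketch: group moving points into per-object sets F_{k,j} by Euclidean
# clustering; DBSCAN stands in, with illustrative parameters.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_moving_objects(P: np.ndarray, V: np.ndarray, eps: float = 0.8):
    """P: (N, 3) moving points; V: (N, 3) their scene flow."""
    labels = DBSCAN(eps=eps, min_samples=10).fit_predict(P)
    objects = []
    for j in sorted(set(labels) - {-1}):            # label -1 is DBSCAN noise
        sel = labels == j
        objects.append({"P": P[sel], "V": V[sel]})  # one set F_{k,j} per object
    return objects
```

The same clustering applied to the moving-object pixels and their optical flow yields the image-side sets $f_{k,j}$.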
The optical flow centroid and optical flow main direction and the scene flow centroid and scene flow main direction of each moving object are then calculated with the following formulas:

$$\bar{p}_{k,j}=\frac{1}{|f_{k,j}|}\sum_{i}p_{k,j,i},\quad \bar{v}_{k,j}=\frac{\sum_{i}v_{k,j,i}}{\left\|\sum_{i}v_{k,j,i}\right\|},\quad \bar{P}_{k,j}=\frac{1}{|F_{k,j}|}\sum_{i}P_{k,j,i},\quad \bar{V}_{k,j}=\frac{\sum_{i}V_{k,j,i}}{\left\|\sum_{i}V_{k,j,i}\right\|}$$

where $f_{k,j}$ denotes the set of pixels and optical flows of moving object $j$ in image $I_k$; an element of the set, $f_{k,j,i}=\{p_{k,j,i},v_{k,j,i}\}$, consists of the image coordinates $p_{k,j,i}$ of a moving-object pixel and the optical flow $v_{k,j,i}$ of that pixel. $F_{k,j}$ denotes the point cloud of moving object $j$ in point cloud $P_k$ and its scene flow set; an element of the set, $F_{k,j,i}=\{P_{k,j,i},V_{k,j,i}\}$, consists of a point $P_{k,j,i}$ of the moving-object point cloud and the scene flow $V_{k,j,i}$ of that point.
This yields the optical flow centroid and optical flow main direction of each moving object in the image, and the scene flow centroid and scene flow main direction of each moving object in the point cloud; a sketch follows.
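Both statistics follow directly from the formula above; a minimal sketch that serves pixels/optical flow (D = 2) and points/scene flow (D = 3) alike:

```python
# Sketch: per-object centroid (mean position) and main direction
# (normalized mean flow), as in the formula above.
import numpy as np

def centroid_and_main_direction(pos: np.ndarray, flow: np.ndarray):
    """pos: (N, D) positions; flow: (N, D) flow vectors; D = 2 or 3."""
    centroid = pos.mean(axis=0)
    mean_flow = flow.sum(axis=0)
    main_dir = mean_flow / (np.linalg.norm(mean_flow) + 1e-9)  # unit vector
    return centroid, main_dir
```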
3) After the optical flow and scene flow main directions are obtained, the centroid of each moving-object point cloud and its scene flow main direction are projected into the camera pixel coordinate system through the camera intrinsics $K$ and the sampled extrinsics $R$ and $t$. The scene flow projection formulas are as follows:

$$Z_c\begin{bmatrix}p_{k,i}\\1\end{bmatrix}=K\left(R\,P_{k,i}+t\right),\qquad \tilde{V}_{k,i}\approx\frac{1}{Z_c}\left(K\,R\,V_{k,i}-V^{c}_{z}\begin{bmatrix}p_{k,i}\\1\end{bmatrix}\right)$$

where a point $P_{k,i}$ of the point cloud is projected to pixel coordinates $p_{k,i}$ through the camera intrinsics $K$ and the camera-laser radar extrinsics $\{R,t\}$, and $Z_c$ is the Z coordinate of the point in the camera coordinate system; the projection of the point's scene flow estimate $V_{k,i}$ into the pixel coordinate system is denoted $\tilde{V}_{k,i}$, and $V^{c}_{z}$ is the Z component of the point's scene flow expressed in the camera coordinate system.
The scene flow projection formula omits a higher-order small quantity. Each sampled pose is evaluated by projecting the scene flow centroid and main direction of each object and computing their distances to the nearest optical flow centroid and main direction; the evaluation function is as follows:

$$D=\sum_{j}\left(\alpha\left\|\tilde{p}\!\left(\bar{P}_{k,j}\right)-\bar{p}^{*}_{k,j}\right\|+\beta\left\|\tilde{V}\!\left(\bar{V}_{k,j}\right)-\bar{v}^{*}_{k,j}\right\|\right)$$

where $\bar{p}^{*}_{k,j}$ is the image centroid of the moving object closest to the projection $\tilde{p}(\bar{P}_{k,j})$ of the point cloud centroid $\bar{P}_{k,j}$ of moving object $j$, $\bar{v}^{*}_{k,j}$ is that object's optical flow main direction, $\tilde{V}(\bar{V}_{k,j})$ is the projection of the object's scene flow main direction $\bar{V}_{k,j}$ into the image, and $\alpha$ and $\beta$ are adjustment factors weighting the point projection and scene flow projection terms.
The evaluation function scores each sampled pose by the consistency of the point cloud scene flow with the optical flow; the scores $D$ of all sampled extrinsics are sorted, and the extrinsic with the lowest score $D$ is the initial estimate of the laser radar and camera extrinsics, as sketched below.
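A condensed sketch of this search: only the attitude is sampled (translation fixed at $t_0$, per the small-offset case), and for brevity the score keeps only the centroid term of $D$, omitting the main-direction term and behind-camera checks; the grid spacing and these simplifications are illustrative, not choices made by the application.

```python
# Sketch: coarse extrinsic search over sampled attitudes, scored by the
# distance from projected scene-flow centroids to nearest image centroids.
import numpy as np
from scipy.spatial.transform import Rotation

def project_points(K, R, t, P):
    """Pinhole projection of (N, 3) laser radar points to (N, 2) pixels."""
    Pc = P @ R.T + t                                  # camera-frame points
    uv = Pc @ K.T
    return uv[:, :2] / uv[:, 2:3]

def coarse_search(K, pc_centroids, img_centroids, t0=np.zeros(3), step_deg=10.0):
    """pc_centroids: (J, 3) point cloud centroids; img_centroids: (M, 2)."""
    best_score, best_R = np.inf, None
    grid = np.arange(-180.0, 180.0, step_deg)
    for yaw in grid:
        for pitch in grid:
            for roll in grid:
                R = Rotation.from_euler("zyx", [yaw, pitch, roll],
                                        degrees=True).as_matrix()
                proj = project_points(K, R, t0, pc_centroids)   # (J, 2)
                # Nearest image centroid for each projected centroid.
                d = np.linalg.norm(proj[:, None, :] - img_centroids[None, :, :],
                                   axis=2).min(axis=1)
                score = d.sum()                                 # centroid term of D
                if score < best_score:
                    best_score, best_R = score, R
    return best_R, best_score    # {R_init, t0} and its score
```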
To verify the validity of the initial value estimation algorithm, data were collected in the simulation environment Carla. Specifically, a laser radar and a camera were placed at a typical traffic intersection at a height of 9 m with an elevation angle of -30 degrees from the horizontal; the rotation in the laser radar camera extrinsics is expressed as the quaternion $q=[0.500,-0.500,0.500,0.500]$ and the displacement as $t=[-0.600,0.354,-0.354]^T$.
In the simulation environment, the ground-truth extrinsic values are available; an initial value estimate is counted as successful when its rotation error with respect to the ground truth is no more than 20 degrees.
In the experiment, 156 frames of point cloud and image data were tested, and the results demonstrate the effectiveness of the method. The experiment also shows the influence of the attitude sampling interval on the success rate and running time, so the sampling interval should be chosen according to the actual situation in practical use. The initial value estimation results in the simulation environment are shown in Fig. 4.
Step 3: using the initial extrinsic value obtained in Step 2 and the optical flow and scene flow obtained in Step 1, optimize the extrinsics to obtain a more accurate estimate. This part of the flow is shown in Fig. 5 and its principle in Fig. 2; the details are as follows:
and (3) projecting the point cloud and the scene flow estimation of the moving object to an image coordinate system by utilizing the initial value of the external parameter estimation obtained in the previous step (2), wherein the point cloud obtained by projection is not completely overlapped with the moving object due to a certain gap between the external parameter obtained by initial estimation and the real external parameter, as shown in fig. 6 (a), and when the laser radar and the camera are calibrated, the moving object point cloud mask in the image should completely cover the moving object point cloud, as shown in fig. 6 (c).
To achieve accurate calibration of the laser radar and camera extrinsics, an optimization equation is constructed and solved by nonlinear optimization, iterating to obtain an accurate extrinsic estimate. The optimization objective has two aspects: the projection of the moving-object point cloud obtained by the laser radar should be completely covered by the corresponding optical flow mask, and the projected scene flow of the moving-object point cloud should be as close as possible to the optical flow in the pixel coordinate system.
When constructing the optimization equation, the moving-object point cloud is projected into the pixel coordinate system, the moving-object pixel closest to each projected point is found by nearest neighbor search, and the pixel distance together with the difference between the projected scene flow and the estimated optical flow is computed; the matching process is shown in Fig. 2. The optimization equation is as follows:

$$E=\sum_{m}\sum_{i}\left(\mu\left\|\tilde{p}\!\left(P_{k,m,i}\right)-p^{*}_{k,m,i}\right\|+\gamma\left\|\tilde{V}\!\left(V_{k,m,i}\right)-v^{*}_{k,m,i}\right\|\right)$$

where $p^{*}_{k,m,i}$ is the moving-object pixel closest to the projection $\tilde{p}(P_{k,m,i})$ of point $P_{k,m,i}$ of moving object $m$, $v^{*}_{k,m,i}$ is the optical flow estimate at that pixel, $\tilde{V}(V_{k,m,i})$ is the projection of the scene flow $V_{k,m,i}$ of point $P_{k,m,i}$ into the image, and $\mu$ and $\gamma$ are adjustment factors for adjusting the weights of the point projections and scene flow projections in the loss function.
Because the optical flow and scene flow estimates contain errors, data with large errors must be filtered out when constructing the scene flow projection error. Moreover, since matching is done by nearest neighbor, a single association does not fully reflect the true data correspondence. Therefore, after each optimization converges to a local optimum, that optimum is used as the initial value for the next round of data association and re-optimization; continuous iterative optimization yields the global optimum and finally an accurate estimate of the laser radar and camera extrinsics, with the result shown in Fig. 6(b). A sketch of this loop follows.
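A minimal sketch of the refinement loop, under stated assumptions: an axis-angle parameterization, SciPy's least_squares as the nonlinear solver, and a robust (Huber) loss standing in for the explicit error filtering described above; the weights and round count are illustrative.

```python
# Sketch: iterative fine calibration. Each round re-associates projected
# moving points with their nearest moving-object pixels, then minimizes the
# weighted pixel and flow reprojection residuals of the optimization equation.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def refine_extrinsics(K, P, V, pix, flow, R0, t0, mu=1.0, gamma=0.5, rounds=5):
    """P, V: (N, 3) moving points and scene flow; pix, flow: (M, 2) moving
    pixels and optical flow; R0, t0: coarse initial extrinsics."""
    x = np.concatenate([Rotation.from_matrix(R0).as_rotvec(), t0])
    tree = cKDTree(pix)
    for _ in range(rounds):
        # Data association with the current estimate (fixed within the round).
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        Pc = P @ R.T + x[3:]
        uv0 = (Pc @ K.T)[:, :2] / Pc[:, 2:3]
        _, nn = tree.query(uv0)
        p_star, v_star = pix[nn], flow[nn]

        def residual(x):
            R = Rotation.from_rotvec(x[:3]).as_matrix()
            Pc = P @ R.T + x[3:]
            z = Pc[:, 2:3]
            uv = (Pc @ K.T)[:, :2] / z
            Vc = V @ R.T                          # scene flow in the camera frame
            # First-order scene-flow projection, as in the formula above.
            duv = ((Vc @ K.T)[:, :2] - Vc[:, 2:3] * uv) / z
            return np.concatenate([(mu * (uv - p_star)).ravel(),
                                   (gamma * (duv - v_star)).ravel()])

        x = least_squares(residual, x, loss="huber").x   # robust to bad matches
    return Rotation.from_rotvec(x[:3]).as_matrix(), x[3:]
```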
The application was tested in detail in the simulation environment and in real scenes. In the simulation environment, where the true laser radar camera extrinsics are known, tests on 125 frames of laser-camera data show an average translation error of 6.24 cm and an average rotation error of 1.73 degrees. On real data the true extrinsics cannot be obtained, so the consistency of the algorithm was tested and compared against the traditional manual calibration method; the comparison is shown in Fig. 6. Tests on about 300 frames of data show good consistency, with the consistency results shown in Fig. 7, where both translation and rotation are evaluated by their error relative to the manual calibration results.
Based on the conception of the method, the application provides a road-end laser radar camera calibration system comprising a point cloud scene flow estimation module, an image optical flow estimation module, a coarse calibration module and a fine calibration module;
the point cloud scene flow estimation module estimates the scene flow between two consecutive frames of point cloud and sets a threshold to separate dynamic points from static points, obtaining the point cloud scene flow of the moving target;
the image optical flow estimation module separates dynamic objects by image optical flow from two consecutive frames of images, obtaining the image optical flow of the moving target;
the coarse calibration module coarsely calibrates the laser radar and camera extrinsic parameters based on the image optical flow and the point cloud scene flow of the moving target, obtaining an initial estimate of the extrinsic parameters;
the fine calibration module optimizes the initial estimate of the extrinsic parameters according to the image optical flow and the point cloud scene flow of the moving target, obtaining an accurate estimate of the laser radar and camera extrinsic parameters.
The application also provides a computer device comprising a processor and a memory, the memory storing a computer-executable program; the processor reads part or all of the program from the memory and executes it, and when executing part or all of it can implement the road-end laser radar camera calibration method.
In another aspect, the application provides a computer-readable storage medium storing a computer program which, when executed by a processor, can implement the road-end laser radar camera calibration method of the application.
The computer device may be a notebook computer, a desktop computer, a vehicle computer, or a workstation.
The processor of the present application may be a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA).
The memory may be an internal storage unit of a notebook computer, desktop computer, vehicle-mounted computer or workstation, such as memory or a hard disk; external storage units such as a removable hard disk or a flash memory card may also be used.
Computer-readable storage media may include computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. A computer-readable storage medium may include read-only memory (ROM), random access memory (RAM), solid state drives (SSD), optical disks, and the like. Random access memory may include resistive random access memory (ReRAM) and dynamic random access memory (DRAM).
In summary, in a first aspect, the application provides a general moving-target extraction method for traffic scenes, comprising the following steps: estimate the moving vehicles and pedestrians across consecutive frame images with optical flow, obtaining the moving-object mask in the image and the objects' motion; remove the ground points with a point cloud ground segmentation algorithm and remove noise and useless points with an outlier removal method, obtaining the point cloud of above-ground objects; estimate the scene flow of the filtered consecutive point cloud frames, obtaining the point cloud motion; and set a speed threshold to filter out static points, obtaining the dynamic-object point cloud mask and the motion of the moving-object point cloud.
In a second aspect, the application provides a coarse calibration method based on optical flow and scene flow: from the moving-object image mask and the moving-object point cloud mask acquired via optical flow and scene flow, each moving object in the image and in the radar point cloud is obtained, and the main motion direction and centroid of each are estimated; the possible laser radar camera extrinsics are sampled to obtain all coarse extrinsic candidates; all sampled poses are evaluated by the alignment and distance of the point cloud moving objects' centroids and main directions, and the best candidate extrinsic is selected as the initial estimate of the laser radar camera extrinsics.
In a third aspect, the application provides a laser radar camera fine calibration method based on optical flow and scene flow: exploiting the inherent correlation between the point cloud scene flow and the image optical flow as expressions of the same moving objects, the moving-object point cloud and scene flow are projected onto the image, and the pixel distance and optical flow distance of the moving objects are optimized, obtaining an accurate extrinsic estimate.
The method acquires the moving objects in the traffic scene through optical flow and scene flow, thereby realizing automatic laser-camera calibration. Unlike traditional calibration methods, the features it requires are more general: moving objects are ubiquitous in traffic scenes, and optical flow and scene flow estimation depends little on the data set. The method achieves fully automatic calibration without human participation and is therefore better suited to the deployment of road-end perception systems.
While the application has been described with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the application as defined by the appended claims.

Claims (10)

1. A road-end laser radar camera calibration method, characterized by comprising the following steps:
for two consecutive frames of point cloud, estimating the scene flow between them and setting a threshold to separate dynamic points from static points, obtaining the point cloud scene flow of the moving target;
for two consecutive frames of images, separating dynamic objects by image optical flow, obtaining the image optical flow of the moving target;
coarsely calibrating the laser radar and camera extrinsic parameters based on the image optical flow and the point cloud scene flow of the moving target, obtaining an initial estimate of the extrinsic parameters;
and optimizing the initial estimate of the extrinsic parameters according to the image optical flow and the point cloud scene flow of the moving target, obtaining an accurate estimate of the laser radar and camera extrinsic parameters.
2. The road-end laser radar camera calibration method according to claim 1, characterized in that, for two consecutive frames of point cloud, estimating the scene flow between them and setting a threshold to separate dynamic points from static points, obtaining the point cloud scene flow of the moving target, is specifically:
for the point cloud data, estimating and filtering ground points with a point cloud ground estimation method, and removing noise points with an outlier removal algorithm, obtaining a relatively clean point cloud of above-ground objects;
estimating the scene flow of the relatively clean above-ground point cloud over consecutive point cloud frames with a neural-prior scene flow method, obtaining the point cloud scene flow; filtering out stationary objects and objects moving slower than a speed threshold by setting that threshold, obtaining the point cloud mask of the moving target and thus the scene flow and point cloud of the moving target.
3. The road-end laser radar camera calibration method according to claim 1, characterized in that, for two consecutive frames of images, separating dynamic objects by image optical flow to obtain the image optical flow of the moving object is specifically:
for consecutive frame images, segmenting out the moving objects by optical flow estimation, obtaining the mask $M_I$ of the moving objects in the image, which gives the motion of the moving objects in the image coordinate system and the pixel-wise representation of their image optical flow.
4. The road-end laser radar camera calibration method according to claim 1, characterized in that coarsely calibrating the laser radar and camera extrinsic parameters based on the image optical flow and the point cloud scene flow of the moving target to obtain an initial estimate of the extrinsic parameters comprises the following steps:
estimating an initial value of the extrinsic parameters by sampling: the extrinsic parameters of the laser radar and the camera comprise a rotation parameter and a translation parameter; when the relative translation of the laser radar and the camera is smaller than a set amount, only the attitude is sampled and the translation parameter is set to $[0,0,0]^T$; when the displacement between the laser radar and the camera is not smaller than the set amount, an approximate range of the relative displacement is given and the relative translation is sampled as well;
estimating the main direction and centroid of the optical flow and scene flow of each moving object: each moving object is described by its centroid; for the image optical flow, Euclidean distance clustering of the moving-object pixels yields the image optical flow set $f_{k,j}$ of each moving object; for the point cloud scene flow, Euclidean distance clustering of the moving-object points yields the point cloud and scene flow set $F_{k,j}$ of each moving object; the optical flow centroid and main direction and the scene flow centroid and main direction of each moving object are then calculated;
projecting the point cloud scene flow centroid and scene flow main direction into the camera pixel coordinate system through the camera intrinsics and the sampled extrinsics; evaluating each sampled pose by the distance between the projected scene flow centroid and main direction and the nearest optical flow centroid and main direction, obtaining a distance score for each sampled extrinsic; sorting the scores of all sampled extrinsics, the extrinsic with the lowest score finally being the initial estimate of the laser radar and camera extrinsic parameters.
5. The road-end laser radar camera calibration method according to claim 4, characterized in that the pixel centroid and optical flow main direction and the point cloud centroid and scene flow main direction of each moving object are calculated with the following formulas:

$$\bar{p}_{k,j}=\frac{1}{|f_{k,j}|}\sum_{i}p_{k,j,i},\quad \bar{v}_{k,j}=\frac{\sum_{i}v_{k,j,i}}{\left\|\sum_{i}v_{k,j,i}\right\|},\quad \bar{P}_{k,j}=\frac{1}{|F_{k,j}|}\sum_{i}P_{k,j,i},\quad \bar{V}_{k,j}=\frac{\sum_{i}V_{k,j,i}}{\left\|\sum_{i}V_{k,j,i}\right\|}$$

where $f_{k,j}$ denotes the set of pixels and optical flows of moving object $j$ in image $I_k$; an element of the set, $f_{k,j,i}=\{p_{k,j,i},v_{k,j,i}\}$, consists of the image coordinates $p_{k,j,i}$ of a moving-object pixel and the optical flow $v_{k,j,i}$ of that pixel; $F_{k,j}$ denotes the point cloud of moving object $j$ in point cloud $P_k$ and its scene flow set; an element of the set, $F_{k,j,i}=\{P_{k,j,i},V_{k,j,i}\}$, consists of a point $P_{k,j,i}$ of the moving-object point cloud and the scene flow $V_{k,j,i}$ of that point.
6. The road-end laser radar camera calibration method according to claim 4, characterized in that the scene flow projection formulas are as follows:

$$Z_c\begin{bmatrix}p_{k,i}\\1\end{bmatrix}=K\left(R\,P_{k,i}+t\right),\qquad \tilde{V}_{k,i}\approx\frac{1}{Z_c}\left(K\,R\,V_{k,i}-V^{c}_{z}\begin{bmatrix}p_{k,i}\\1\end{bmatrix}\right)$$

where a point $P_{k,i}$ of the point cloud is projected to pixel coordinates $p_{k,i}$ through the camera intrinsics $K$ and the camera-laser radar extrinsics $\{R,t\}$, $Z_c$ is the Z coordinate of the point in the camera coordinate system, the projection of the point's scene flow estimate $V_{k,i}$ into the pixel coordinate system is denoted $\tilde{V}_{k,i}$, and $V^{c}_{z}$ is the Z component of the point's scene flow expressed in the camera coordinate system;

the evaluation function is as follows:

$$D=\sum_{j}\left(\alpha\left\|\tilde{p}\!\left(\bar{P}_{k,j}\right)-\bar{p}^{*}_{k,j}\right\|+\beta\left\|\tilde{V}\!\left(\bar{V}_{k,j}\right)-\bar{v}^{*}_{k,j}\right\|\right)$$

where $\bar{p}^{*}_{k,j}$ is the image centroid of the moving object closest to the projection $\tilde{p}(\bar{P}_{k,j})$ of the point cloud centroid $\bar{P}_{k,j}$ of moving object $j$, $\bar{v}^{*}_{k,j}$ is that object's optical flow main direction, $\tilde{V}(\bar{V}_{k,j})$ is the projection of the object's scene flow main direction $\bar{V}_{k,j}$ into the image, and $\alpha$ and $\beta$ are adjustment factors for adjusting the weights of the point projections and scene flow projections.
7. The road-end laser radar camera calibration method according to claim 1, characterized in that optimizing the initial estimate of the laser radar and camera extrinsic parameters according to the image optical flow and the point cloud scene flow of the moving target to obtain an accurate estimate is specifically:
constructing an optimization equation and solving it by nonlinear optimization, obtaining an accurate extrinsic estimate through iterative optimization; the optimization objective has two aspects: the projection of the moving-object point cloud obtained by the laser radar should be completely covered by the corresponding optical flow mask, and the projected scene flow of the moving-object point cloud should be as close as possible to the optical flow in the pixel coordinate system;
when constructing the optimization equation, the moving-object point cloud is projected into the pixel coordinate system, the moving-object pixel closest to each projected point is found by nearest neighbor search, and the pixel distance together with the difference between the projected scene flow and the estimated optical flow is computed; the optimization equation is as follows:

$$E=\sum_{m}\sum_{i}\left(\mu\left\|\tilde{p}\!\left(P_{k,m,i}\right)-p^{*}_{k,m,i}\right\|+\gamma\left\|\tilde{V}\!\left(V_{k,m,i}\right)-v^{*}_{k,m,i}\right\|\right)$$

where $p^{*}_{k,m,i}$ is the moving-object pixel closest to the projection $\tilde{p}(P_{k,m,i})$ of point $P_{k,m,i}$ of moving object $m$, $v^{*}_{k,m,i}$ is the optical flow estimate at that pixel, $\tilde{V}(V_{k,m,i})$ is the projection of the scene flow $V_{k,m,i}$ of point $P_{k,m,i}$ into the image, and $\mu$ and $\gamma$ are adjustment factors for adjusting the weights of the point projections and scene flow projections in the loss function; during optimization, each time a local optimum is obtained it is used as the initial value of the next iteration, and iterative optimization yields the global optimum, finally obtaining an accurate estimate of the laser radar and camera extrinsic parameters.
8. A road-end laser radar camera calibration system, characterized by comprising a point cloud scene flow estimation module, an image optical flow estimation module, a coarse calibration module and a fine calibration module;
the point cloud scene flow estimation module estimates the scene flow between two consecutive frames of point cloud and sets a threshold to separate dynamic points from static points, obtaining the point cloud scene flow of the moving target;
the image optical flow estimation module separates dynamic objects by image optical flow from two consecutive frames of images, obtaining the image optical flow of the moving target;
the coarse calibration module coarsely calibrates the laser radar and camera extrinsic parameters based on the image optical flow and the point cloud scene flow of the moving target, obtaining an initial estimate of the extrinsic parameters;
the fine calibration module optimizes the initial estimate of the extrinsic parameters according to the image optical flow and the point cloud scene flow of the moving target, obtaining an accurate estimate of the laser radar and camera extrinsic parameters.
9. A computer device comprising a processor and a memory, the memory storing a computer-executable program; the processor reads the program from the memory and executes it, and when executing it implements the road-end laser radar camera calibration method of any of claims 1-7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the road-end laser radar camera calibration method of any of claims 1-7.
CN202310978865.4A 2023-08-04 2023-08-04 Road-end laser radar camera calibration method and system Pending CN116993836A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310978865.4A CN116993836A (en) 2023-08-04 2023-08-04 Road-end laser radar camera calibration method and system

Publications (1)

Publication Number Publication Date
CN116993836A true CN116993836A (en) 2023-11-03

Family

ID=88533560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310978865.4A Pending CN116993836A (en) 2023-08-04 2023-08-04 Road-end laser radar camera calibration method and system

Country Status (1)

Country Link
CN (1) CN116993836A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117392241A (en) * 2023-12-11 2024-01-12 新石器中研(上海)科技有限公司 Sensor calibration method and device in automatic driving and electronic equipment
CN117392241B (en) * 2023-12-11 2024-03-05 新石器中研(上海)科技有限公司 Sensor calibration method and device in automatic driving and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination