CN114578328A - Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors - Google Patents

Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors

Info

Publication number: CN114578328A
Application number: CN202210174586.8A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN114578328B (granted)
Inventors: Zhang Xinyu, Wang Jipeng, Bao Zefeng, Xiong Yijin, Gao Hanwen, Guo Shichun
Current and original assignee: Suzhou Jiashibao Intelligent Technology Co., Ltd.
Application filed 2022-02-24 by Suzhou Jiashibao Intelligent Technology Co., Ltd.; priority to CN202210174586.8A
Published as CN114578328A on 2022-06-03; granted as CN114578328B on 2023-03-17
Legal status: Granted; Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00, of systems according to group G01S17/00
    • G01S7/497: Means for monitoring or calibrating
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a method for automatically calibrating the spatial positions of multiple laser radars and multiple camera sensors, comprising the following steps: step 1) extracting straight-line features from the point cloud data acquired by each laser radar and from the RGB image acquired by each camera sensor, computing gray-value scores by projection and accumulating them into a total score over all laser radar points, and selecting the highest total score with an adaptive optimization method to obtain the positional relation between each laser radar and a given camera sensor; step 2) registering the point cloud data of every pair of laser radars to obtain the positional relations between the laser radars; step 3) obtaining the positional relations between the camera sensors from the camera image data, using the epipolar geometric constraints of the multiple cameras; and step 4) performing global optimization on the positional relations obtained in steps 1), 2) and 3) to complete the spatial-position calibration of the multiple laser radars and camera sensors.

Description

Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors
Technical Field
The invention relates to the field of multi-sensor calibration, and in particular to an automatic calibration method for the spatial positions of multiple laser radars and multiple camera sensors.
Background
Information fusion is a key technology for autonomous driving safety, and multi-modal perception plays a central role in achieving robust environmental perception for unmanned vehicles. Multi-modal fusion is a precondition for robust perception: a single sensor can hardly cope with all-weather, all-scene perception and carries the inherent limitations of its hardware materials. Multi-sensor data exhibit heterogeneity, modality differences and unbalanced sampling, so dedicated fusion models must be designed to exploit them. In the sensor configurations of the autonomous vehicles of mainstream vehicle manufacturers, laser radars and cameras are standard equipment; multi-modal fusion is an important way to improve accuracy, and reliable extrinsic parameters between the laser radars and the camera sensors are a precondition for multi-modal data fusion. Multi-sensor calibration is therefore one of the key research topics, and difficulties, in the field of unmanned driving. Most current fusion research addresses a single camera and a single laser radar, whereas an autonomous vehicle usually carries several cameras and several laser radars. In practical application scenarios that use multiple groups of camera and laser radar data, single-camera-radar fusion cannot guarantee the reliability of the calibration result. Moreover, the spatial position relations between the laser radars and the cameras drift gradually during operation, so an automatic calibration method is needed that can monitor the spatial position relations among the sensors and correct the accumulated errors in time.
Disclosure of Invention
The invention aims to provide a method for automatically calibrating the spatial positions of multiple laser radars and multiple camera sensors.
In order to achieve the above object, the present invention provides a method for automatically calibrating the spatial positions of multiple laser radars and multiple camera sensors, the method comprising:
step 1) extracting straight-line features from the point cloud data acquired by each laser radar and from the RGB image acquired by each camera sensor, computing gray-value scores by projection and accumulating them into a total score over all laser radar points, and selecting the highest total score with an adaptive optimization method to obtain the positional relation between each laser radar and a given camera sensor;
step 2) registering the point cloud data of every pair of laser radars to obtain the positional relations between the laser radars;
step 3) obtaining the positional relations between the camera sensors from the camera image data, using the epipolar geometric constraints of the multiple cameras;
step 4) performing global optimization on the positional relations obtained in steps 1), 2) and 3) to complete the spatial-position calibration of the multiple laser radars and camera sensors.
As an improvement of the above method, the step 1) specifically includes:
step 1-1) screening, from the point cloud data in image form, the data that conform to straight-line features;
step 1-2) merging three frames of point cloud data that conform to straight-line features into one frame by a local mapping method, to obtain denser point cloud data;
step 1-3) converting the RGB image into a gray image, and extracting the parts of the gray image that conform to straight-line features with a line detection algorithm;
step 1-4) mapping the point cloud data obtained in step 1-2) onto the gray image obtained in step 1-3) through the inverse distance transform, taking the post-projection gray value of each laser radar point that conforms to the straight-line features as its score, accumulating the scores of all laser radar points into a total score, and selecting the highest total score with an adaptive optimization method to obtain the positional relation between one laser radar and a given camera sensor.
As an improvement of the above method, the step 1-1) specifically comprises:
and filtering the point cloud data in the form of an image to remove outliers and point cloud data points corresponding to the ground, and removing the point cloud data points with fewer adjacent points by adopting a point cloud clustering algorithm, thereby screening out the data conforming to the linear characteristics.
As an improvement of the above method, the step 1-3) specifically comprises:
Converting the RGB image into a gray image; performing a coarse search with wide image lines, small gray-value changes and a relatively large step length, and a fine search with thin image lines, large gray-value changes and a relatively small step length; and extracting the parts of the gray image that conform to straight-line features with the LSD line detection algorithm.
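A minimal sketch of this step with OpenCV's LSD detector (one possible implementation; the line widths and the blur used for the gentle gray falloff of the coarse stage are assumptions):

    import cv2
    import numpy as np

    def line_feature_images(bgr: np.ndarray):
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        lsd = cv2.createLineSegmentDetector()
        lines = lsd.detect(gray)[0]                # N x 1 x 4: x1, y1, x2, y2
        if lines is None:
            lines = np.empty((0, 1, 4))
        coarse = np.zeros_like(gray)
        fine = np.zeros_like(gray)
        for x1, y1, x2, y2 in lines.reshape(-1, 4).astype(int):
            cv2.line(coarse, (x1, y1), (x2, y2), 255, thickness=9)  # wide lines
            cv2.line(fine, (x1, y1), (x2, y2), 255, thickness=2)    # thin lines
        # Smooth the coarse image so its gray value changes slowly near a line.
        coarse = cv2.GaussianBlur(coarse, (15, 15), 0)
        return coarse, fine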
As an improvement of the above method, the step 2) specifically includes:
Dividing the point cloud data acquired by each laser radar into grids, representing each grid by a normal distribution, computing the mean and variance of the data points in each grid, matching the point sets through the normal distributions, and obtaining the spatial position relation between every pair of laser radars by registering the two point clouds.
As an improvement of the above method, the step 3) specifically includes:
and obtaining the spatial position relation between the camera sensors by using the epipolar geometric constraint of the multiple cameras and adopting an iterative algorithm.
As an improvement of the above method, the step 4) specifically includes:
Taking as vertices the positional relation between the ith laser radar and the ith camera sensor and the positional relation between the jth laser radar and the jth camera sensor obtained in step 1), together with the positional relation between the ith laser radar and the jth laser radar obtained in step 2); forming an edge from each such group of laser radars and camera sensors; taking the total scores, the fitting degree between the laser radars and the epipolar constraint between the camera sensors as the calibration error; and reducing the calibration error through global optimization to complete the spatial-position calibration of the multiple laser radars and camera sensors.
Compared with the prior art, the invention has the following advantages:
1. the method completes feature extraction automatically, without a manually designed calibration board or dependence on a specific environment; it removes the limitation of the calibration scene and improves the applicability and convenience of the calibration method;
2. the method performs real-time online calibration and can correct errors automatically during use, avoiding the influence of accumulated errors on the system;
3. the screening of point cloud line features and image line features filters out non-line features and improves the automatic calibration accuracy;
4. the method optimizes the multiple laser radars and multiple cameras jointly, reducing the overall error; the calibration result is accurate and closer to the practical application scenario.
Drawings
FIG. 1 is a flow chart of the method for automatically calibrating the spatial positions of multiple lidar and multiple camera sensors according to the present invention;
FIG. 2 is a schematic diagram of the positions of the on-vehicle lidar sensors and cameras of embodiment 1;
FIG. 3(a) shows the images captured by the two cameras of embodiment 1 at a certain moment;
FIG. 3(b) shows the point cloud data collected by the two laser radars of embodiment 1 at the same moment;
FIG. 4 is a schematic diagram of the global optimization steps of embodiment 1.
Detailed Description
A method for automatically calibrating spatial positions of multiple laser radars and multiple camera sensors, the method comprising:
extracting features from the image and the point cloud with a series of preprocessing methods: straight-line features are extracted from the image and the point cloud separately and refined by feature filtering;
screening the data that conform to straight-line features out of the laser radar point cloud data and out of the camera sensor image data; after the features are filtered, the remaining features can be regarded as line features; and obtaining the finally optimized extrinsic parameters with an adaptive optimization method based on search optimization and graph optimization.
From the multiple total scores, the spatial position relation of the laser radars and camera sensors corresponding to the highest total score is selected as the calibrated positional relation of the multiple laser radars and camera sensors.
Point cloud registration is performed between the two laser radars to obtain the extrinsic parameters, i.e. the positional relation, of laser radar 1 and laser radar 2.
The extrinsic parameters between the cameras are solved from the images of the two cameras using the epipolar geometric constraint of the multiple cameras.
Global optimization is performed with the extrinsic parameters among the sensors, reducing the errors and yielding a more accurate result.
The line feature extraction from the image specifically comprises:
in the image processing, the RGB image is first converted into a gray image, and a line detection algorithm then extracts the straight-line features in the gray image. A distance transformation model is then applied to these images containing straight-line features.
First, a coarse search with wider image lines, smaller gray-value changes and a relatively larger step length is used, so that the region that may contain the best solution is found quickly and the search is kept from falling into a local optimum. Then finer image line features with larger gray-value changes and smaller step lengths are applied to obtain a more accurate calibration result.
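Both search stages score a candidate extrinsic pose the same way: project the LiDAR line points into the image and sum the gray values they land on. A sketch under assumed names (R, t the candidate rotation and translation, K the camera intrinsic matrix, gray_img a 2-D gray image such as the one above):

    import numpy as np

    def gray_score(points, R, t, K, gray_img) -> float:
        cam = points @ R.T + t                     # LiDAR frame -> camera frame
        cam = cam[cam[:, 2] > 0]                   # keep points in front of camera
        uv = cam @ K.T
        uv = (uv[:, :2] / uv[:, 2:3]).astype(int)  # perspective projection
        h, w = gray_img.shape
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        return float(gray_img[uv[ok, 1], uv[ok, 0]].sum())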
The preprocessing and line feature extraction of the radar point cloud, and the projection of the radar point cloud onto the image, specifically comprise:
the principle of the lidar processing is to use distance discontinuities to obtain more boundary features. To this end, three frames of point cloud are merged into one frame by a local mapping method, so that more points appear in a single frame and the line features extracted from the point cloud are denser. Specifically, the transformation matrices between the current frame and the two previous frames are computed by the Normal Distributions Transform (NDT) method. Converting three frames of point cloud into one frame yields a denser point cloud, which can improve line feature extraction, especially when a low-beam lidar is used.
The denser point cloud is converted into image form, each pixel storing the distance of the corresponding LiDAR point. By comparing the distance of the current point with those of its neighbors, outliers that lie too far from their neighbors are removed, so that more accurate line features can be extracted. Note that instead of considering each beam separately, the distance information between multiple beams is considered; this allows the method to extract horizontal features as well and to minimize both horizontal and vertical errors with line features. The horizontal line features and the vertical line features are stored in two separate point clouds. The rarely occurring features on the intersection lines of planes are ignored, which benefits the computational efficiency.
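A sketch of that discontinuity test on the range image (the 0.5 m jump threshold and the mask layout are assumptions): a range jump along a beam marks a vertical structure, a jump between adjacent beams marks a horizontal one, and the two masks stay separate, mirroring the two point clouds above.

    import numpy as np

    def discontinuity_masks(range_img, jump=0.5):
        vertical = np.zeros(range_img.shape, dtype=bool)
        horizontal = np.zeros(range_img.shape, dtype=bool)
        # Jump between consecutive points of the same beam (along a row).
        vertical[:, 1:] = np.abs(np.diff(range_img, axis=1)) > jump
        # Jump between the same column of adjacent beams (across rows).
        horizontal[1:, :] = np.abs(np.diff(range_img, axis=0)) > jump
        return horizontal, vertical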
The point cloud line features extracted from the laser radar are mapped onto the gray image through the inverse distance transform, and two filtering methods remove the outliers and the points corresponding to the ground, leaving the line-feature point cloud.
After filtering, a point cloud clustering algorithm removes line features with few neighboring points. These two filtering steps provide better organized point cloud line features, yield better organized line features in the image, and ensure better optimization results in the subsequent steps.
The adaptive optimization of the mapping from the line-feature point cloud to the line-feature image specifically comprises:
an adaptive optimization method that uses a search method and a graph optimization method in sequence. The search optimization balances computational accuracy and efficiency. Before the optimization, the method projects the lidar point cloud line features onto the image and computes the proportion of the lidar points that fall into the gray region.
Two search steps are used to obtain an accurate solution. To keep the search from being trapped in a local solution, a coarse search with wide image lines, small gray-value changes and a relatively large step length first finds the region that may contain the best solution quickly; thinner image line features with larger gray-value changes and smaller step lengths are then used to obtain a more accurate calibration result. When the proportion of lidar points projected onto the gray region exceeds a certain threshold, the method switches from the coarse step to the fine step and to the corresponding gray-value profile. The higher the gray-value score of the point cloud line features falling on the white line features, the more accurate the calibration result.
For one spatial position relation, the data that conform to straight-line features are screened out of the laser radar point cloud data, and the data that conform to straight-line features are screened out of the camera sensor image data; the conforming laser radar data are projected into the pixel coordinate system of the camera sensor; the post-projection gray value of each conforming laser radar point is taken as its score, and the scores of all laser radar points are accumulated into a total score. All spatial position relations near the extrinsic matrices of the current frame and the two past frames are traversed; when the best value within the three frames of data is found, the search near the current extrinsic matrix stops, and the scores of all spatial positions around the best value are searched again until the optimum is reached; the result is taken as the calibrated positional relation between the laser radar and the camera sensor.
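A sketch of that neighborhood search, reusing the gray_score sketch above; the step sizes, the shrink factor and the rotation-vector parameterization are assumptions:

    import numpy as np
    from itertools import product
    from scipy.spatial.transform import Rotation

    def neighborhood_search(points, R, t, K, gray_img,
                            step_r=0.02, step_t=0.05, shrink=0.5, rounds=5):
        for _ in range(rounds):
            best = (gray_score(points, R, t, K, gray_img), R, t)
            for dr in product((-step_r, 0.0, step_r), repeat=3):
                for dt in product((-step_t, 0.0, step_t), repeat=3):
                    Rc = Rotation.from_rotvec(dr).as_matrix() @ R
                    tc = t + np.asarray(dt)
                    s = gray_score(points, Rc, tc, K, gray_img)
                    if s > best[0]:
                        best = (s, Rc, tc)
            if best[1] is R:          # centre already optimal: refine the grid
                step_r *= shrink
                step_t *= shrink
            else:                     # move to the best neighbour and rescan
                _, R, t = best
        return R, t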
An adaptive optimization method keeps the optimization moving in the correct direction. The cost function is optimized with a search method that evaluates and compares the surrounding gray-value scores. The solution obtained by the search optimization is then refined further with graph optimization: the optimization vertex is the camera-lidar transformation matrix, and the edge errors are the gray values corresponding to each point cloud line feature point on the image; a g2o graph optimizer is constructed, an optimization solver and a gradient descent method are configured, and the optimization function is finally called.
The extrinsic calibration of laser radar to laser radar and camera to camera specifically comprises:
calibrating laser radar to laser radar with a lidar point cloud registration method:
first, the two point clouds are preprocessed, the space is divided into grids, and the points falling in each grid are counted; the mean and covariance of each grid are computed from its points, and a Gaussian distribution is constructed; the joint probability is computed from the predicted pose, and R and t are solved to obtain the extrinsic transformation matrix; the fitness score of the projection between the two point cloud line features is also recorded.
Then, using the epipolar geometric constraint of the multiple cameras, the extrinsic parameters are further refined with the iterative Levenberg-Marquardt (LM) algorithm.
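A sketch of the camera-camera estimate with OpenCV (feature matching is assumed to be done upstream; recoverPose returns the translation only up to scale, one reason the result is then refined with LM and re-optimized globally):

    import cv2

    def camera_camera_pose(pts_i, pts_j, K):
        # Essential matrix from the epipolar constraint, robust to outliers.
        E, inliers = cv2.findEssentialMat(pts_i, pts_j, K,
                                          method=cv2.RANSAC, threshold=1.0)
        # Decompose E into the relative rotation and unit-scale translation.
        _, R, t, _ = cv2.recoverPose(E, pts_i, pts_j, K, mask=inliers)
        return R, t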
Through the above steps, the extrinsic parameters of the multiple radars and cameras are preliminarily optimized; finally, the radar-camera, radar-radar and camera-camera extrinsics are globally optimized to reduce the overall error.
Global optimization is constructed from the extrinsics between the groups of laser radars and cameras calibrated by line features and the extrinsics obtained from the epipolar constraints, reducing the overall error and improving the accuracy. In the global optimization, the vertices are the extrinsic transformation matrices of radar i-camera i, radar j-camera j and radar i-radar j; the edges are ternary edges, one edge formed between each two groups of laser radars and two cameras. The error of an edge consists of four parts, which provide the gradient direction for the optimization of the four vertices and guide the optimization iterations: the first part is the sum of the gray values of the image pixels onto which the line feature points between camera i and laser radar i are mapped; the second part is the corresponding sum between camera j and laser radar j; the third part is the fitness of laser radar i and laser radar j after NDT registration; and the fourth part is the error of the epipolar constraint between camera i and camera j. Finally, the calibration error is reduced through global optimization to obtain a more accurate result.
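In symbols, the four-part error of one ternary edge can be written as follows (a hedged formulation in our own notation, not the patent's; L_i is the set of line-feature points of laser radar i, G_i the line-feature gray image of camera i, pi(K, X) the pixel projection of point X with intrinsics K, and the T's the respective extrinsic transforms; since the two gray-value sums are scores to be maximized, a minimizing optimizer would enter them with a negative sign or as 255 minus the gray value):

    e_{ij} = \sum_{P \in L_i} G_i\big(\pi(K_i, T_{l_i}^{c_i} P)\big)
           + \sum_{P \in L_j} G_j\big(\pi(K_j, T_{l_j}^{c_j} P)\big)
           + \mathrm{FitnessScore}\big(T_{l_i}^{l_j}\big)
           + e_{\mathrm{epi}}\big(T_{c_i}^{c_j}\big)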
The technical solution of the present invention will be described in detail below with reference to the accompanying drawings and examples.
Example 1
As shown in FIG. 1, embodiment 1 of the present invention provides an automatic calibration method for the spatial positions of multiple lidar and multiple camera sensors. The embodiment is described with two lidars and two camera sensors, arranged as shown in FIG. 2, but the method is not limited to these numbers. It comprises the following steps:
obtaining the spatial position of each radar relative to a camera sensor, the spatial position between the two cameras, and the spatial position between the two laser radars, so as to obtain the spatial position relations between the groups of laser radar and camera sensors;
FIG. 3(a) and FIG. 3(b) show the images acquired by the two cameras and the point cloud data acquired by the two laser radars.
For the spatial position relation between a laser radar and a camera: the data that conform to straight-line features are screened out of the laser radar point cloud data and out of the camera sensor image data; the conforming laser radar data are projected into the pixel coordinate system of the camera sensor; the post-projection gray value of each conforming laser radar point is taken as its score, and the scores of all laser radar points are accumulated into a total score; all spatial position relations are traversed to obtain multiple total scores; and the spatial position relation corresponding to the highest total score is selected as the calibrated positional relation of the laser radar and the camera sensor.
For the spatial position relation between the two laser radars, the statistics of local points are represented: the point cloud is divided into grids, each grid is represented by a normal distribution, the mean and variance of the points in each grid are computed, and the point sets are matched through these normal distributions; the spatial position relation of the two laser radars is obtained by registering the two point clouds.
For the spatial position relation between the two cameras, the extrinsic parameters are refined using the epipolar geometric constraint of the multiple cameras with the iterative Levenberg-Marquardt (LM) algorithm, yielding the spatial position relation between the two cameras.
The calibration of the spatial position between a laser radar and a camera uses the adaptive optimization, specifically:
matching multiple frames of laser radar data to obtain the transformation matrices among them, and superimposing the multiple frames of point cloud data onto one frame to obtain denser point cloud data;
optimizing with the search method to obtain the highest-scoring projection of the point cloud at nearby positions onto the gray image;
refining the extrinsic result further with graph optimization.
For the spatial position relations of the whole sensor suite, the calibration errors among the sensors are reduced through global optimization, specifically:
obtaining external parameters between the two laser radars through point cloud registration;
obtaining external parameters between two cameras through epipolar geometric constraint of the two images;
the global optimization is performed by grouping the four parts into the error of the edge by using the external parameter transformation matrix with the vertex of the radar i-camera i, the vertex of the radar j-camera j and the vertex of the radar i-radar j as the ternary edge, as shown in fig. 4. P is the position of the image point, K is the camera internal reference, R, t is the relative pose of the external reference,
Figure BDA0003518568820000071
for the transform matrix (i, j ═ 1, 2, 3 … …), superscript T is the transpose matrix and FitnessScore is the two-point cloud fitness.
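With the symbols just listed, the epipolar relation between camera i and camera j that the fourth error term enforces can be written in the standard form (a hedged reconstruction in our notation, not copied from the patent; [t]_x denotes the skew-symmetric matrix of t):

    P_j^{\top} K_j^{-\top} \, [t]_{\times} R \, K_i^{-1} P_i = 0

The residual of the fourth part of the edge error measures how far matched image points deviate from this equality.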
The method can judge in real time whether the current spatial positions of the laser radars and cameras are accurate, and correct the spatial position errors online when the accuracy is low. No artificial calibration object is needed: the method automatically acquires information from the environment and calibrates the spatial positions of several groups of laser radars and cameras.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention and are not limiting. Although the present invention has been described in detail with reference to the embodiments, those skilled in the art will understand that various changes may be made and equivalents substituted without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (7)

1. A method for automatically calibrating spatial positions of multiple laser radars and multiple camera sensors, the method comprising:
step 1) extracting straight-line features from the point cloud data acquired by each laser radar and from the RGB image acquired by each camera sensor, computing gray-value scores by projection and accumulating them into a total score over all laser radar points, and selecting the highest total score with an adaptive optimization method to obtain the positional relation between each laser radar and a given camera sensor;
step 2) registering the point cloud data of every pair of laser radars to obtain the positional relations between the laser radars;
step 3) obtaining the positional relations between the camera sensors from the camera image data, using the epipolar geometric constraints of the multiple cameras;
step 4) performing global optimization on the positional relations obtained in steps 1), 2) and 3) to complete the spatial-position calibration of the multiple laser radars and camera sensors.
2. The method for automatically calibrating the spatial positions of multiple lidar and multiple camera sensors according to claim 1, wherein the step 1) specifically comprises:
step 1-1) screening, from the point cloud data in image form, the data that conform to straight-line features;
step 1-2) merging three frames of point cloud data that conform to straight-line features into one frame by a local mapping method, to obtain denser point cloud data;
step 1-3) converting the RGB image into a gray image, and extracting the parts of the gray image that conform to straight-line features with a line detection algorithm;
step 1-4) mapping the point cloud data obtained in step 1-2) onto the gray image obtained in step 1-3) through the inverse distance transform, taking the post-projection gray value of each laser radar point that conforms to the straight-line features as its score, accumulating the scores of all laser radar points into a total score, and selecting the highest total score with an adaptive optimization method to obtain the positional relation between one laser radar and a given camera sensor.
3. The method for automatically calibrating the spatial positions of multiple lidar and multiple camera sensors according to claim 2, wherein the step 1-1) comprises:
and filtering the point cloud data in the form of an image to remove outliers and point cloud data points corresponding to the ground, and removing the point cloud data points with fewer adjacent points by adopting a point cloud clustering algorithm, thereby screening out the data conforming to the linear characteristics.
4. The method for automatically calibrating the spatial positions of multiple lidar and multiple camera sensors according to claim 2, wherein the step 1-3) comprises:
Converting the RGB image into a gray image; performing a coarse search with wide image lines, small gray-value changes and a relatively large step length, and a fine search with thin image lines, large gray-value changes and a relatively small step length; and extracting the parts of the gray image that conform to straight-line features with the LSD line detection algorithm.
5. The method for automatically calibrating the spatial positions of multiple lidar and multiple camera sensors according to claim 1, wherein the step 2) comprises:
Dividing the point cloud data acquired by each laser radar into grids, representing each grid by a normal distribution, computing the mean and variance of the data points in each grid, matching the point sets through the normal distributions, and obtaining the spatial position relation between every pair of laser radars by registering the two point clouds.
6. The method for automatically calibrating the spatial positions of multiple lidar and multiple camera sensors according to claim 1, wherein the step 3) comprises:
and obtaining the spatial position relation between the camera sensors by utilizing the epipolar geometric constraint of the multiple cameras and adopting an iterative algorithm.
7. The method for automatically calibrating the spatial positions of multiple lidar and multiple camera sensors according to claim 1, wherein the step 4) comprises:
Taking as vertices the positional relation between the ith laser radar and the ith camera sensor and the positional relation between the jth laser radar and the jth camera sensor obtained in step 1), together with the positional relation between the ith laser radar and the jth laser radar obtained in step 2); forming an edge from each such group of laser radars and camera sensors; taking the total scores, the fitting degree between the laser radars and the epipolar constraint between the camera sensors as the calibration error; and reducing the calibration error through global optimization to complete the spatial-position calibration of the multiple laser radars and camera sensors.
Application CN202210174586.8A, priority date 2022-02-24, filing date 2022-02-24: Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors. Status: Active; granted as CN114578328B.

Priority Applications (1)

Application CN202210174586.8A, priority date 2022-02-24, filing date 2022-02-24: Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors

Publications (2)

CN114578328A (application publication), 2022-06-03
CN114578328B (granted publication), 2023-03-17

Family

ID: 81775220

Family Applications (1)

Application CN202210174586.8A (Active), priority date 2022-02-24, filing date 2022-02-24: Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors

Country Status (1)

CN: CN114578328B (granted)



Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358633A (en) * 2017-07-12 2017-11-17 北京轻威科技有限责任公司 Join scaling method inside and outside a kind of polyphaser based on 3 points of demarcation things
CN108288291A (en) * 2018-06-07 2018-07-17 北京轻威科技有限责任公司 Polyphaser calibration based on single-point calibration object
CN109300162A (en) * 2018-08-17 2019-02-01 浙江工业大学 A kind of multi-line laser radar and camera combined calibrating method based on fining radar scanning marginal point
US20200088858A1 (en) * 2018-09-19 2020-03-19 Baidu Online Network Technology (Beijing) Co., Ltd. Multi-sensor calibration method, multi-sensor calibration device, computer device, medium and vehicle
CN109345596A (en) * 2018-09-19 2019-02-15 百度在线网络技术(北京)有限公司 Multisensor scaling method, device, computer equipment, medium and vehicle
US20200116866A1 (en) * 2018-10-11 2020-04-16 Baidu Usa Llc Automatic lidar calibration based on cross validation for autonomous driving
CN109598765A (en) * 2018-12-21 2019-04-09 浙江大学 Join combined calibrating method outside monocular camera and millimetre-wave radar based on spherical calibration object
CN111429521A (en) * 2020-03-05 2020-07-17 深圳市镭神智能系统有限公司 External parameter calibration method, device, medium and electronic equipment for camera and laser radar
CN111505606A (en) * 2020-04-14 2020-08-07 武汉大学 Method and device for calibrating relative pose of multi-camera and laser radar system
EP3929872A1 (en) * 2020-06-28 2021-12-29 TuSimple, Inc. Multi-sensor calibration system
CN112017205A (en) * 2020-07-27 2020-12-01 清华大学 Automatic calibration method and system for space positions of laser radar and camera sensor
CN113589263A (en) * 2021-08-06 2021-11-02 北京易航远智科技有限公司 Multi-homologous sensor combined calibration method and system
CN113592957A (en) * 2021-08-06 2021-11-02 北京易航远智科技有限公司 Multi-laser radar and multi-camera combined calibration method and system
CN113884104A (en) * 2021-09-27 2022-01-04 苏州挚途科技有限公司 Multi-sensor combined calibration method and device and electronic equipment
CN113988197A (en) * 2021-11-03 2022-01-28 山东新一代信息产业技术研究院有限公司 Multi-camera and multi-laser radar based combined calibration and target fusion detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party

Xinyu Zhang et al., "Line-based Automatic Extrinsic Calibration of LiDAR and Camera", 2021 IEEE International Conference on Robotics and Automation (ICRA) *
Zhang Xinyu et al., "Deep multi-modal fusion technology for object detection in autonomous driving" (in Chinese), CAAI Transactions on Intelligent Systems *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115588051A (en) * 2022-09-29 2023-01-10 中国矿业大学(北京) Automatic calibration method for space positions of laser radar and camera in ore processing link
CN116843747A (en) * 2023-08-31 2023-10-03 北京路凯智行科技有限公司 Calibration method and calibration system for camera and laser radar
CN116843747B (en) * 2023-08-31 2024-01-26 北京路凯智行科技有限公司 Calibration method and calibration system for camera and laser radar

Also Published As

CN114578328B (en), published 2023-03-17

Similar Documents

Publication Publication Date Title
WO2022022694A1 (en) Method and system for sensing automated driving environment
CN114578328B (en) Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors
CN112417926B (en) Parking space identification method and device, computer equipment and readable storage medium
CN113111887B (en) Semantic segmentation method and system based on information fusion of camera and laser radar
CN112149550B (en) Automatic driving vehicle 3D target detection method based on multi-sensor fusion
WO2022188663A1 (en) Target detection method and apparatus
CN111699410B (en) Processing method, equipment and computer readable storage medium of point cloud
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
KR102490521B1 (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
CN113327296B (en) Laser radar and camera online combined calibration method based on depth weighting
CN115546202B (en) Tray detection and positioning method for unmanned forklift
CN114280626A (en) Laser radar SLAM method and system based on local structure information expansion
CN115187737A (en) Semantic map construction method based on laser and vision fusion
CN117115784A (en) Vehicle detection method and device for target data fusion
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN117590362B (en) Multi-laser radar external parameter calibration method, device and equipment
Sun et al. Automatic targetless calibration for LiDAR and camera based on instance segmentation
CN114608522B (en) Obstacle recognition and distance measurement method based on vision
CN115082532B (en) Ship collision prevention method for river-crossing transmission line based on laser radar
CN116309817A (en) Tray detection and positioning method based on RGB-D camera
CN116403186A (en) Automatic driving three-dimensional target detection method based on FPN Swin Transformer and Pointernet++
CN115690711A (en) Target detection method and device and intelligent vehicle
CN115236643A (en) Sensor calibration method, system, device, electronic equipment and medium
CN114814827A (en) Pedestrian classification method and system based on 4D millimeter wave radar and vision fusion
CN113932712A (en) Melon and fruit vegetable size measuring method based on depth camera and key points

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
CB03: Change of inventor or designer information
  Inventors after: Wang Jipeng, Bao Zefeng, Xiong Yijin, Gao Hanwen, Guo Shichun
  Inventors before: Zhang Xinyu, Wang Jipeng, Bao Zefeng, Xiong Yijin, Gao Hanwen, Guo Shichun