CN114578328B - Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors - Google Patents

Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors

Info

Publication number
CN114578328B
CN114578328B
Authority
CN
China
Prior art keywords
laser radar
point cloud
position relation
image
camera
Prior art date
Legal status
Active
Application number
CN202210174586.8A
Other languages
Chinese (zh)
Other versions
CN114578328A (en)
Inventor
Zhang Xinyu
Wang Jipeng
Bao Zefeng
Xiong Yijin
Gao Hanwen
Guo Shichun
Current Assignee
Suzhou Jiashibao Intelligent Technology Co ltd
Original Assignee
Suzhou Jiashibao Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Jiashibao Intelligent Technology Co ltd
Priority to CN202210174586.8A
Publication of CN114578328A
Application granted
Publication of CN114578328B


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S 7/497 Means for monitoring or calibrating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a method for automatically calibrating the spatial positions of multiple laser radars and multiple camera sensors, comprising the following steps: step 1) extracting straight-line features from the point cloud data acquired by each laser radar and the RGB images acquired by each camera sensor, calculating gray-value scores by projection to obtain a total score over all laser radar points, and selecting the highest total score with an adaptive optimization method, thereby obtaining the position relation between each laser radar and a given camera sensor; step 2) registering the point cloud data of each pair of laser radars to obtain the position relation between the laser radars; step 3) obtaining the position relation between the camera sensors from the camera image data, using the epipolar geometric constraints of the multiple cameras; and step 4) performing global optimization on the position relations obtained in steps 1), 2) and 3) to complete the calibration of the spatial positions of the multiple laser radars and multiple camera sensors.

Description

Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors
Technical Field
The invention relates to the field of multi-sensor calibration, and in particular to an automatic calibration method for the spatial positions of multiple laser radars and multiple camera sensors.
Background
Information fusion is a key technology for autonomous-driving safety, and multi-modal perception plays a central role in achieving robust environmental perception for unmanned vehicles. Multi-modal fusion is a precondition for robust perception: a single sensor can hardly cope with all-weather, all-scene perception, and it suffers from the inherent limitations of its hardware. Multi-sensor data modalities also exhibit data heterogeneity, modality differences, and sampling imbalance, so dedicated fusion models must be designed to exploit them. In the sensor configurations of autonomous vehicles from mainstream manufacturers, laser radars and cameras are standard equipment; multi-modal fusion is an important way to improve accuracy, and reliable extrinsic parameters between the laser radars and camera sensors are a precondition for multi-modal data fusion. Multi-sensor calibration is therefore one of the key research topics and difficulties in the unmanned-driving field. Most current fusion research addresses a single camera and a single laser radar, whereas an autonomous vehicle usually carries several cameras and several laser radars; in practical scenarios that use multiple groups of camera and laser radar data, single-camera, single-radar fusion cannot guarantee the reliability of the calibration result. Moreover, the spatial position relations between the laser radars and the cameras drift gradually during operation, so an automatic calibration method is needed that can monitor the spatial position relations among the sensors and correct accumulated errors in time.
Disclosure of Invention
The invention aims to provide a method for automatically calibrating the spatial positions of multiple laser radars and multiple camera sensors.
To achieve this object, the invention provides a method for automatically calibrating the spatial positions of multiple laser radars and multiple camera sensors, the method comprising:
step 1) extracting straight-line features from the point cloud data acquired by each laser radar and the RGB images acquired by each camera sensor, calculating gray-value scores by projection to obtain a total score over all laser radar points, and selecting the highest total score with an adaptive optimization method, thereby obtaining the position relation between each laser radar and a given camera sensor;
step 2) registering the point cloud data of each pair of laser radars to obtain the position relation between the laser radars;
step 3) obtaining the position relation between the camera sensors from the camera image data, using the epipolar geometric constraints of the multiple cameras;
and step 4) performing global optimization on the position relations obtained in steps 1), 2) and 3) to complete the calibration of the spatial positions of the multiple laser radars and multiple camera sensors.
As an improvement of the above method, step 1) specifically comprises:
step 1-1) screening data that conform to straight-line features out of the point cloud data in image form;
step 1-2) merging three frames of point cloud data that conform to the straight-line features into one frame by a local mapping method, obtaining denser point cloud data;
step 1-3) converting the RGB image into a gray image and extracting the parts of the gray image that conform to straight-line features with a line detection algorithm;
step 1-4) mapping the point cloud data obtained in step 1-2) onto the gray image obtained in step 1-3) through an inverse distance transform, computing the post-projection gray value of each laser radar point that conforms to the straight-line features as its score, accumulating the scores of all laser radar points as a total score, and selecting the highest total score with an adaptive optimization method to obtain the position relation between one laser radar and a given camera sensor.
As an improvement of the above method, step 1-1) specifically comprises:
filtering the point cloud data in image form to remove outliers and the points corresponding to the ground, and removing points with few neighbors with a point cloud clustering algorithm, thereby screening out the data that conform to straight-line features.
As an improvement of the above method, step 1-3) specifically comprises:
converting the RGB image into a gray image; performing a coarse search with wide image lines, small gray-level changes, and a relatively large step length, then a fine search with thin image lines, large gray-level changes, and a relatively small step length; and extracting the parts of the gray image that conform to straight-line features with the LSD line detection algorithm.
As an improvement of the above method, step 2) specifically comprises:
dividing the point cloud data acquired by each laser radar into grids, representing each grid by a normal distribution, computing the mean and variance of the data points in each grid, matching point sets against the normal distributions, and obtaining the spatial position relation between every two laser radars by registering the two groups of point clouds.
As an improvement of the above method, step 3) specifically comprises:
obtaining the spatial position relation between the camera sensors with an iterative algorithm, using the epipolar geometric constraints of the multiple cameras.
As an improvement of the above method, step 4) specifically comprises:
taking as vertices the position relation between the i-th laser radar and the i-th camera sensor and between the j-th laser radar and the j-th camera sensor obtained in step 1), and the position relation between the i-th laser radar and the j-th laser radar obtained in step 2), with every two laser radars and two camera sensors forming an edge; taking the total scores, the fitting degree between the laser radars, and the epipolar constraint between the camera sensors as the calibration error; and reducing the calibration error through global optimization to complete the calibration of the spatial positions of the multiple laser radars and multiple camera sensors.
Compared with the prior art, the invention has the following advantages:
1. the method completes feature extraction automatically, without a manually designed calibration board or dependence on a specific environment; it removes the limitation of calibration scenes and improves the applicability and convenience of the calibration method;
2. the method performs real-time online calibration and can correct errors automatically during use, preventing accumulated errors from affecting the system;
3. the screening of point cloud line features and image line features filters out non-line features and improves the automatic calibration accuracy;
4. the method optimizes multiple laser radars and multiple cameras jointly, reducing the overall error, producing accurate calibration results, and coming closer to practical application scenarios.
Drawings
FIG. 1 is a flow chart of the method for automatic spatial position calibration of multiple laser radars and multiple camera sensors according to the present invention;
FIG. 2 is a schematic diagram of the positions of the on-vehicle laser radars and cameras of embodiment 1;
FIG. 3 (a) shows the images captured by the two cameras of embodiment 1 at a certain time;
FIG. 3 (b) shows the point cloud data collected by the two laser radars of embodiment 1 at the same time;
FIG. 4 is a schematic diagram of the global optimization step of embodiment 1.
Detailed Description
A method for automatically calibrating the spatial positions of multiple laser radars and multiple camera sensors, the method comprising:
extracting features from the images and the point clouds with a series of preprocessing methods: straight-line features are extracted from the images and the point clouds separately and refined by feature filtering;
screening data that conform to straight-line features out of the laser radar point cloud data and out of the camera sensor image data; after feature filtering, the remaining features can be regarded as line features; and obtaining the final optimized extrinsic parameters with an adaptive optimization method based on search optimization and graph optimization.
From the multiple total scores, the spatial position relation of the laser radars and camera sensors corresponding to the highest total score is selected as the calibrated position relation of the multiple laser radars and multiple camera sensors.
Point cloud registration is performed on the two laser radars to obtain the extrinsic position relation between laser radar 1 and laser radar 2.
The extrinsic parameters between the cameras are solved from the images of the two cameras, using the epipolar geometric constraints of the multiple cameras.
Global optimization is performed over the extrinsic parameters among the sensors to reduce errors and obtain a more accurate result.
Further, the line feature extraction from the image specifically comprises:
in the image processing, the RGB image is first converted into a gray image, and a line detection algorithm then extracts the straight-line features in the gray image. A distance transformation model is then applied to these images containing straight-line features; a minimal sketch of this image branch follows.
First, a coarse search with wider image lines, smaller gray-level changes, and a relatively larger step length is employed, so that regions that may contain the best solution are found quickly and the search is prevented from falling into local optima. Thinner image line features with larger gray-level changes and smaller step lengths are then applied to obtain a more accurate calibration result.
Further, the preprocessing and line feature extraction of the radar point cloud, and its projection onto the image, specifically comprise:
the principle of the laser radar processing is to use distance discontinuities to obtain more boundary-line features. To this end, three frames of point cloud are merged into one frame by a local mapping method, so that more points appear in a single frame and the line features extracted from the point cloud become denser. Specifically, the transformation matrices between the current frame and the previous two frames are computed with the normal distributions transform (NDT). Converting three frames of point cloud into one frame yields a denser point cloud, which can improve the extraction of line features, especially when a low-beam laser radar is used.
The dense point cloud is converted into an image form, and each pixel stores the distance information of the corresponding LiDAR point. By comparing the distances between the current point and its neighbors, outliers that are too far from their neighbors are removed, and more accurate line features can be extracted. Notably, instead of considering each beam separately, the distance information between multiple beams is considered. This allows the proposed method to extract horizontal features, using line features to minimize both horizontal and vertical errors. The horizontal and vertical line features are stored in two separate point clouds. Features on the intersection lines of planes, which rarely occur, are ignored, which improves computational efficiency.
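The sketch below illustrates one plausible form of this range-image step, assuming a 64-row grid and a 0.5 m range-jump threshold; the patent does not fix these values.

```python
import numpy as np

def range_image_edges(points, rows=64, cols=1024, jump=0.5):
    """Keep points whose range differs sharply from horizontal neighbours."""
    rng = np.linalg.norm(points, axis=1)
    azim = np.arctan2(points[:, 1], points[:, 0])
    elev = np.arcsin(points[:, 2] / np.maximum(rng, 1e-6))

    col = ((azim + np.pi) / (2 * np.pi) * (cols - 1)).astype(int)
    row = ((elev - elev.min()) / (np.ptp(elev) + 1e-9) * (rows - 1)).astype(int)

    img = np.full((rows, cols), np.nan)      # per-pixel range
    idx = np.full((rows, cols), -1)          # per-pixel source point index
    img[row, col], idx[row, col] = rng, np.arange(len(points))

    # A range jump relative to the left/right neighbour marks an edge feature.
    left = np.abs(img - np.roll(img, 1, axis=1)) > jump
    right = np.abs(img - np.roll(img, -1, axis=1)) > jump
    keep = idx[(left | right) & ~np.isnan(img)]
    return points[keep[keep >= 0]]
```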
The point cloud line features extracted by the laser radar are mapped onto the gray image through the inverse distance transform, and two filtering methods remove outliers and the points corresponding to the ground, leaving the line-feature point cloud.
After filtering, a point cloud clustering algorithm removes line features with few neighboring points. These two filtering steps yield a more organized point cloud of line features and more organized line features in the image, ensuring better optimization results in the subsequent steps.
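A compact Open3D sketch of these two filtering steps (statistical outlier removal plus RANSAC ground removal, then DBSCAN clustering to drop sparse clusters) might look as follows; every threshold is an illustrative assumption.

```python
import numpy as np
import open3d as o3d

def filter_line_candidates(pcd: o3d.geometry.PointCloud) -> o3d.geometry.PointCloud:
    # Remove statistical outliers (points far from their neighbours).
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Remove the dominant ground plane with RANSAC plane segmentation.
    _, ground_idx = pcd.segment_plane(distance_threshold=0.1,
                                      ransac_n=3, num_iterations=1000)
    pcd = pcd.select_by_index(ground_idx, invert=True)

    # Cluster the remaining points and drop sparse clusters
    # (features with few neighbouring points, per the description above).
    labels = np.asarray(pcd.cluster_dbscan(eps=0.5, min_points=10))
    keep = np.where(labels >= 0)[0]          # label -1 marks noise points
    return pcd.select_by_index(keep)
```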
The adaptive optimization of the mapping from the line-feature point cloud to the line-feature image specifically comprises:
an adaptive optimization method that uses a search method and a graph optimization method in sequence. The search optimization balances computational accuracy and efficiency. Before optimization, the proposed method projects the laser radar's point cloud line features onto the image and computes the proportion of laser radar points projected into the gray region.
Two search steps are used to solve accurately. To avoid the search becoming trapped in a local optimum, a coarse search with wide image lines, small gray-level changes, and a relatively large step length quickly finds the region that may contain the best solution. Thinner image line features with larger gray-level changes and smaller step lengths are then used to obtain a more accurate calibration result. When the proportion of laser radar points projected onto the gray region exceeds a certain threshold, the method switches between these two steps and the corresponding gray-level settings. The higher the gray score of the point cloud line features falling on the white line features, the more accurate the calibration result.
For one spatial position relation: data that conform to straight-line features are screened out of the laser radar point cloud data and out of the camera sensor image data; the laser radar data that conform to the straight-line features are projected into the camera sensor's pixel coordinate system; the post-projection gray value of each such laser radar point is computed as its score, and the scores of all laser radar points are accumulated as a total score. All spatial position relations near the extrinsic matrices of the current frame and the previous two frames are traversed; when an optimum is found within the three frames of data, the search near the current extrinsic matrix stops, and the scores of all spatial positions around that optimum are searched again until the optimum converges. The result is taken as the calibrated position relation between the laser radar and the camera sensor, as sketched below.
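A sketch of the projection score and the greedy search follows. The projection model is the standard pinhole K[R|t]; the perturbation step sizes and the hill-climbing schedule are assumptions standing in for the traversal described above.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def total_score(points, R, t, K, score_map):
    """Project lidar line points with extrinsics (R, t) and sum gray scores."""
    cam = points @ R.T + t                       # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0.1]                   # keep points in front of the camera
    uv = cam @ K.T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)    # pinhole projection to pixels
    h, w = score_map.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return score_map[uv[ok, 1], uv[ok, 0]].sum()

def greedy_search(points, R, t, K, score_map, step):
    """Hill-climb over +/- perturbations of the six extrinsic parameters."""
    best_R, best_t = R, t
    best = total_score(points, best_R, best_t, K, score_map)
    improved = True
    while improved:
        improved = False
        for d in np.vstack((np.eye(6), -np.eye(6))) * step:
            cand_R = Rotation.from_rotvec(d[:3]).as_matrix() @ best_R
            cand_t = best_t + d[3:]
            s = total_score(points, cand_R, cand_t, K, score_map)
            if s > best:
                best_R, best_t, best, improved = cand_R, cand_t, s, True
    return best_R, best_t

# Coarse-to-fine usage (step sizes are assumptions):
# R, t = greedy_search(pts, R0, t0, K, coarse_map, step=0.05)
# R, t = greedy_search(pts, R, t, K, fine_map, step=0.01)
```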
An adaptive optimization method keeps the optimization moving in the correct direction. The cost function is first optimized with the search method, which searches and compares the surrounding gray scores. The solution obtained by the search is then further refined by graph optimization: the optimization vertex is the camera-laser radar transformation matrix, the edge error is the gray level of each point cloud line-feature point on the image, a g2o graph optimizer is constructed, an optimization solver and a gradient descent method are configured, and finally the optimization function is called.
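The patent names g2o for this refinement. As a self-contained stand-in, the sketch below minimizes the same gray-score cost with SciPy's Levenberg-Marquardt least squares, sampling the score map bilinearly so the cost is smooth; the parameterization and scales are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.ndimage import map_coordinates
from scipy.spatial.transform import Rotation

def refine_extrinsics(points, R0, t0, K, score_map):
    def residuals(x):
        # x = (rotation-vector delta, translation delta) around (R0, t0).
        R = Rotation.from_rotvec(x[:3]).as_matrix() @ R0
        t = t0 + x[3:]
        cam = points @ R.T + t
        uv = cam @ K.T
        uv = uv[:, :2] / np.maximum(uv[:, 2:3], 1e-6)
        # Bilinear sampling of the gray score map (row, col order).
        vals = map_coordinates(score_map, [uv[:, 1], uv[:, 0]],
                               order=1, mode='constant', cval=0.0)
        return 255.0 - vals          # bright line pixels -> small residuals

    x = least_squares(residuals, np.zeros(6), method='lm').x
    return Rotation.from_rotvec(x[:3]).as_matrix() @ R0, t0 + x[3:]
```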
The extrinsic calibration of laser radar to laser radar and of camera to camera specifically comprises:
laser radar to laser radar calibration with a point cloud registration method:
first, the two groups of point clouds are preprocessed; the space is divided into grids and the points falling in each grid are counted. The mean and covariance of each grid are computed from its points, constructing a Gaussian distribution; the joint probability is computed from the predicted pose, and R and t are solved to obtain the extrinsic transformation matrix. The fitness score of the projection between the two point clouds' line features is also computed; a sketch of the NDT cell construction follows.
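This sketch covers the NDT map-building half of the step, i.e. the per-voxel Gaussians; the voxel size and the minimum point count are assumptions, and the pose optimization against these Gaussians is omitted.

```python
import numpy as np
from collections import defaultdict

def ndt_cells(points: np.ndarray, voxel: float = 1.0):
    """Voxelize a cloud and store each cell's mean and covariance."""
    cells = defaultdict(list)
    for p, key in zip(points, map(tuple, np.floor(points / voxel).astype(int))):
        cells[key].append(p)

    stats = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) >= 5:                    # enough points for a stable covariance
            stats[key] = (pts.mean(axis=0), np.cov(pts.T))
    # Registration would score a query point q against its cell's Gaussian,
    # roughly exp(-0.5 * (q - mu)^T Sigma^-1 (q - mu)), and solve for R, t.
    return stats
```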
The extrinsic parameters between the cameras are then further refined with the iterative Levenberg-Marquardt (LM) algorithm, using the epipolar geometric constraints of the multiple cameras.
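A sketch of this camera-camera step with OpenCV's standard epipolar pipeline: ORB matching is an assumed stand-in for whatever feature matching is used in practice, and the recovered translation is defined only up to scale.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate relative (R, t) between two cameras from one image pair."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Essential matrix from the epipolar constraint, robust to mismatches.
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t   # LM refinement on reprojection error would follow
```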
Through the above steps, the extrinsic parameters of the multiple radars and multiple cameras are preliminarily optimized; finally, the radar-camera, radar-radar, and camera-camera extrinsic parameters are globally optimized to reduce the overall error.
Global optimization is constructed from the extrinsic parameters between the groups of laser radars and cameras calibrated via line features, between the laser radars, and between the cameras obtained from the epipolar constraints, reducing the overall error and improving accuracy. In the global optimization, the vertices are the extrinsic transformation matrices of radar i-camera i, radar j-camera j, and radar i-radar j; the edges are ternary edges, one formed for every two laser radars and two cameras. Each edge's error consists of four parts, which provide the gradient direction for optimizing the vertices and guide the optimization iterations. The first part is the sum of the gray values of the line-feature points of camera i and laser radar i mapped onto the image; the second is the corresponding sum for camera j and laser radar j; the third is the fitness of laser radar i and laser radar j after NDT registration; and the fourth is the error of the epipolar constraint between camera i and camera j. Finally, the calibration error is reduced through global optimization, yielding a higher-precision result.
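To make the structure concrete, the sketch below stacks the four error parts of one ternary edge into a single SciPy cost over the three vertex transforms. The helper callables gray_cost_i, gray_cost_j, ndt_fitness, and epipolar_cost, and the weights, are hypothetical placeholders for the terms described above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def unpack(x):
    """6-vector (rotation vector, translation) -> (R, t)."""
    return Rotation.from_rotvec(x[:3]).as_matrix(), x[3:]

def global_cost(x, gray_cost_i, gray_cost_j, ndt_fitness, epipolar_cost,
                w=(1.0, 1.0, 1.0, 1.0)):
    # Vertices: radar i-camera i, radar j-camera j, radar i-radar j.
    Ti, Tj, Tij = unpack(x[0:6]), unpack(x[6:12]), unpack(x[12:18])
    return (w[0] * gray_cost_i(Ti)            # line-feature gray cost, pair i
            + w[1] * gray_cost_j(Tj)          # line-feature gray cost, pair j
            + w[2] * ndt_fitness(Tij)         # NDT fitness, radar i to radar j
            + w[3] * epipolar_cost(Ti, Tj, Tij))  # camera i-camera j loop term

# x0 stacks the three pairwise estimates from the earlier steps:
# res = minimize(global_cost, x0, args=(gci, gcj, ndt, epi), method="Nelder-Mead")
```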
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and examples.
Embodiment 1
As shown in fig. 1, embodiment 1 of the present invention proposes a method for automatically calibrating the spatial positions of multiple laser radars and multiple camera sensors. This embodiment is described with two laser radars and two camera sensors, arranged as shown in fig. 2, but the method is not limited to this number of sensors. The method comprises the following steps:
obtaining the spatial position of each radar relative to a camera sensor, the spatial position between the two cameras, and the spatial position between the two laser radars, giving the spatial position relations among the groups of laser radars and camera sensors;
figs. 3 (a) and 3 (b) show the images acquired by the two cameras and the point cloud data acquired by the two laser radars.
For the spatial position relation between a laser radar and a camera: data that conform to straight-line features are screened out of the laser radar point cloud data and out of the camera sensor image data; the laser radar data conforming to the straight-line features are projected into the camera sensor's pixel coordinate system; the post-projection gray value of each such laser radar point is computed as its score, and the scores of all laser radar points are accumulated as a total score. All spatial position relations are traversed to obtain multiple total scores, and the spatial position relation corresponding to the highest total score is selected as the calibrated position relation of the laser radar and camera sensor.
For the spatial position relation between the two laser radars, the statistics of local points are modeled: the point cloud is divided into grids, and each grid is represented by a normal distribution. The mean and variance of the points in each grid are computed, and point sets are matched against the normal distributions. The spatial position relation of the two laser radars is obtained by registering the two groups of point clouds.
For the spatial position relation between the two cameras, the extrinsic parameters are refined with the iterative Levenberg-Marquardt (LM) algorithm, using the epipolar geometric constraints of the multiple cameras, to obtain the spatial position relation between the two cameras.
The adaptive optimization used for calibrating the spatial position between a laser radar and a camera specifically comprises:
matching multiple frames of laser radar data to obtain the transformation matrices among them, and superimposing the multiple frames onto one frame to obtain denser point cloud data;
optimizing with the search method to find the highest-scoring projection of the nearby-position point clouds onto the gray image;
refining the extrinsic result further with graph optimization.
For the spatial position relation of the whole sensor suite, the calibration errors among the sensors are reduced through global optimization, which specifically comprises:
obtaining the extrinsic parameters between the two laser radars through point cloud registration;
obtaining the extrinsic parameters between the two cameras through the epipolar geometric constraint of the two images;
the global optimization is performed by grouping the four parts into the error of the edge by using the external parameter transformation matrix with the vertex of the radar i-camera i, the vertex of the radar j-camera j and the vertex of the radar i-radar j as the ternary edge, as shown in fig. 4. P is the position of the image point, K is the camera internal reference, R and t are the relative poses of the external reference,
Figure GDA0004059500930000071
for the transform matrix (i, j =1,2,3 \ 8230; \8230;) the superscript T is the transpose matrix and the FitnessScore is the two-point cloud fitting.
The method can judge in real time whether the current spatial positions of the laser radars and cameras are accurate, and correct the spatial position errors online when the accuracy is low. It requires no artificially designed calibration object and can automatically acquire information from the environment to calibrate the spatial positions of multiple groups of laser radars and cameras.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the embodiments, those skilled in the art will understand that various changes may be made and equivalents substituted without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (6)

1. A method for automatically calibrating the spatial positions of multiple laser radars and multiple camera sensors, the method comprising:
step 1) extracting straight-line features from the point cloud data acquired by each laser radar and the RGB images acquired by each camera sensor, calculating gray-value scores by projection to obtain a total score over all laser radar points, and selecting the highest total score with an adaptive optimization method, thereby obtaining the position relation between each laser radar and a given camera sensor;
step 2) registering the point cloud data of each pair of laser radars to obtain the position relation between the laser radars;
step 3) obtaining the position relation between the camera sensors from the image data sensed by the cameras, using the epipolar geometric constraints of the cameras;
step 4) performing global optimization on the position relations obtained in steps 1), 2) and 3) to complete the calibration of the spatial positions of the multiple laser radars and multiple camera sensors;
wherein step 4) specifically comprises:
taking as vertices the position relation between the i-th laser radar and the i-th camera sensor and between the j-th laser radar and the j-th camera sensor obtained in step 1), and the position relation between the i-th laser radar and the j-th laser radar obtained in step 2), with every two laser radars and two camera sensors forming an edge; taking the total scores, the fitting degree between the laser radars, and the epipolar constraint between the camera sensors as the calibration error; and reducing the calibration error through global optimization to complete the calibration of the spatial positions of the multiple laser radars and multiple camera sensors.
2. The method for automatically calibrating the spatial positions of multiple laser radars and multiple camera sensors according to claim 1, wherein step 1) specifically comprises:
step 1-1) screening data that conform to straight-line features out of the point cloud data in image form;
step 1-2) merging three frames of point cloud data that conform to the straight-line features into one frame by a local mapping method, obtaining denser point cloud data;
step 1-3) converting the RGB image into a gray image and extracting the parts of the gray image that conform to straight-line features with a line detection algorithm;
step 1-4) mapping the point cloud data obtained in step 1-2) onto the gray image obtained in step 1-3) through an inverse distance transform, computing the post-projection gray value of each laser radar point that conforms to the straight-line features as its score, accumulating the scores of all laser radar points as a total score, and selecting the highest total score with an adaptive optimization method to obtain the position relation between one laser radar and a given camera sensor.
3. The method for automatically calibrating the spatial positions of multiple laser radars and multiple camera sensors according to claim 2, wherein step 1-1) comprises:
filtering the point cloud data in image form to remove outliers and the points corresponding to the ground, and removing points with few neighbors with a point cloud clustering algorithm, thereby screening out the data that conform to straight-line features.
4. The method for automatically calibrating the spatial positions of multiple laser radars and multiple camera sensors according to claim 2, wherein step 1-3) comprises:
converting the RGB image into a gray image; performing a coarse search with wide image lines, small gray-level changes, and a relatively large step length, then a fine search with thin image lines, large gray-level changes, and a relatively small step length; and extracting the parts of the gray image that conform to straight-line features with the LSD line detection algorithm.
5. The method for automatically calibrating the spatial positions of multiple laser radars and multiple camera sensors according to claim 1, wherein step 2) comprises:
dividing the point cloud data acquired by each laser radar into grids, representing each grid by a normal distribution, computing the mean and variance of the data points in each grid, matching point sets against the normal distributions, and obtaining the spatial position relation between every two laser radars by registering the two groups of point clouds.
6. The method for automatically calibrating the spatial positions of multiple laser radars and multiple camera sensors according to claim 1, wherein step 3) comprises:
obtaining the spatial position relation between the camera sensors with an iterative algorithm, using the epipolar geometric constraints of the multiple cameras.
CN202210174586.8A 2022-02-24 2022-02-24 Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors Active CN114578328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210174586.8A CN114578328B (en) 2022-02-24 2022-02-24 Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210174586.8A CN114578328B (en) 2022-02-24 2022-02-24 Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors

Publications (2)

Publication Number Publication Date
CN114578328A (en) 2022-06-03
CN114578328B (en) 2023-03-17

Family

ID=81775220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210174586.8A Active CN114578328B (en) 2022-02-24 2022-02-24 Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors

Country Status (1)

Country Link
CN (1) CN114578328B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115588051B * 2022-09-29 2023-06-13 China University of Mining and Technology (Beijing) Automatic calibration method for laser radar and camera spatial position in the ore processing stage
CN116843747B * 2023-08-31 2024-01-26 Beijing Lukai Zhixing Technology Co., Ltd. Calibration method and calibration system for camera and laser radar

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598765A (en) * 2018-12-21 2019-04-09 Zhejiang University Extrinsic joint calibration method for a monocular camera and millimetre-wave radar based on a spherical calibration object
CN113884104A (en) * 2021-09-27 2022-01-04 Suzhou Zhitu Technology Co., Ltd. Multi-sensor combined calibration method and device and electronic equipment

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358633A (en) * 2017-07-12 2017-11-17 Beijing Qingwei Technology Co., Ltd. Intrinsic and extrinsic calibration method for multiple cameras based on three-point calibration objects
CN108288291A (en) * 2018-06-07 2018-07-17 Beijing Qingwei Technology Co., Ltd. Multi-camera calibration based on a single-point calibration object
CN109300162B (en) * 2018-08-17 2021-08-03 Zhejiang University of Technology Multi-line laser radar and camera combined calibration method based on refined radar scanning edge points
CN109345596B (en) * 2018-09-19 2024-07-12 Apollo Intelligent Technology (Beijing) Co., Ltd. Multi-sensor calibration method, device, computer equipment, medium and vehicle
US11428817B2 (en) * 2018-10-11 2022-08-30 Baidu Usa Llc Automatic LIDAR calibration based on cross validation for autonomous driving
CN111429521B (en) * 2020-03-05 2021-12-21 Shenzhen LeiShen Intelligent System Co., Ltd. External parameter calibration method, device, medium and electronic equipment for camera and laser radar
CN111505606B (en) * 2020-04-14 2022-08-05 Wuhan University Method and device for calibrating relative pose of multi-camera and laser radar system
AU2021204030A1 (en) * 2020-06-28 2022-01-20 Beijing Tusen Weilai Technology Co., Ltd. Multi-sensor calibration system
CN112017205B (en) * 2020-07-27 2021-06-25 Tsinghua University Automatic calibration method and system for spatial positions of laser radar and camera sensor
CN113589263B (en) * 2021-08-06 2023-10-31 Beijing Yihang Yuanzhi Technology Co., Ltd. Method and system for jointly calibrating multiple homologous sensors
CN113592957B (en) * 2021-08-06 2023-07-11 Beijing Yihang Yuanzhi Technology Co., Ltd. Multi-laser radar and multi-camera combined calibration method and system
CN113988197B (en) * 2021-11-03 2024-08-23 Shandong New Generation Information Industry Technology Research Institute Co., Ltd. Multi-camera and multi-laser radar based combined calibration and target fusion detection method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598765A (en) * 2018-12-21 2019-04-09 Zhejiang University Extrinsic joint calibration method for a monocular camera and millimetre-wave radar based on a spherical calibration object
CN113884104A (en) * 2021-09-27 2022-01-04 Suzhou Zhitu Technology Co., Ltd. Multi-sensor combined calibration method and device and electronic equipment

Also Published As

Publication number Publication date
CN114578328A (en) 2022-06-03

Similar Documents

Publication Publication Date Title
WO2022022694A1 (en) Method and system for sensing automated driving environment
CN114578328B (en) Automatic calibration method for spatial positions of multiple laser radars and multiple camera sensors
CN112417926B (en) Parking space identification method and device, computer equipment and readable storage medium
WO2022188663A1 (en) Target detection method and apparatus
CN111144213B (en) Object detection method and related equipment
CN112836633A (en) Parking space detection method and parking space detection system
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
KR102490521B1 (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
CN113327296B (en) Laser radar and camera online combined calibration method based on depth weighting
CN115187737A (en) Semantic map construction method based on laser and vision fusion
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN117115784A (en) Vehicle detection method and device for target data fusion
CN116309817A (en) Tray detection and positioning method based on RGB-D camera
CN114280626A (en) Laser radar SLAM method and system based on local structure information expansion
CN113947724A (en) Automatic line icing thickness measuring method based on binocular vision
CN114608522B (en) Obstacle recognition and distance measurement method based on vision
CN115546202A (en) Tray detection and positioning method for unmanned forklift
CN115097419A (en) External parameter calibration method and device for laser radar IMU
CN114910891A (en) Multi-laser radar external parameter calibration method based on non-overlapping fields of view
CN114814827A (en) Pedestrian classification method and system based on 4D millimeter wave radar and vision fusion
CN114037968A (en) Lane line detection method based on depth radar point cloud and image data fusion
CN117029870A (en) Laser odometer based on road surface point cloud
CN115236643B (en) Sensor calibration method, system, device, electronic equipment and medium
CN115690711A (en) Target detection method and device and intelligent vehicle
CN114581346A (en) Multispectral image fusion method for urban low-altitude remote sensing monitoring target

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Wang Jipeng

Inventor after: Bao Zefeng

Inventor after: Xiong Yijin

Inventor after: Gao Hanwen

Inventor after: Guo Shichun

Inventor before: Zhang Xinyu

Inventor before: Wang Jipeng

Inventor before: Bao Zefeng

Inventor before: Xiong Yijin

Inventor before: Gao Hanwen

Inventor before: Guo Shichun
