CN114935748A - Large-baseline multi-laser-radar calibration method and system based on detected object - Google Patents
- Publication number
- CN114935748A (application number CN202210523924.4A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- laser radar
- matching
- laser
- target object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/497—Means for monitoring or calibrating
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10044—Radar image
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses a large-baseline multi-lidar calibration method based on detected objects, comprising the following steps: detect target objects with each individual lidar and determine each object's center point and three-dimensional coordinates; randomly select three detected objects and their center points in each lidar; perform nearest-point cloud registration to obtain a preliminary coarse matching result, and repeat the computation iteratively until a defined convergence condition is met, completing N rounds of coarse matching and obtaining the corresponding matching parameters; using the coarse matching result as an initial value, run an iterative closest point (ICP) matching algorithm on the full point clouds and regress accurate matching parameters for the multiple lidars; finally, fuse the multi-lidar point clouds with these parameters to complete the calibration. The method is suitable for outdoor, large-baseline scenes, compensates for the perception limitations of a single lidar, and yields accurate, dense three-dimensional environmental information.
Description
Technical Field
The invention relates to the technical field of vision-based measurement, and in particular to a large-baseline multi-lidar calibration method and system based on detected objects.
Background
Currently, in fields such as robotics, autonomous driving, security monitoring, and intelligent transportation, a variety of sensors is employed to enhance a device's perception capability, including monocular cameras, binocular cameras, thermal cameras, and lidars. Among these, lidar is widely used because it provides accurate, dense three-dimensional information about the environment together with reflected-intensity information.
However, existing single-lidar solutions have several drawbacks. First, most fixedly mounted lidars have a limited field of view and cannot achieve full 360-degree coverage. Second, although most mechanical lidars cover 360 degrees horizontally, their scanning beams are limited in number, so the vertical resolution over that field of view is also limited. Improving both the sensing range and the resolution therefore requires deploying multiple lidars, and the most critical step in a multi-lidar system is calibrating the lidars against one another.
Although many lidar calibration methods and systems already exist, most are limited to indoor, short-baseline scenes and cannot be applied to outdoor, large-baseline scenarios such as autonomous driving and intelligent transportation. A multi-lidar calibration and sensing system applicable to outdoor, large-baseline scenes therefore has significant practical value.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing a method and system for large-baseline multi-lidar calibration based on detected objects. The method is suitable for outdoor, large-baseline scenes, compensates for the perception limitations of a single lidar, and yields accurate, dense three-dimensional environmental information.
To achieve this purpose, the invention provides a large-baseline multi-lidar calibration method based on detected objects, comprising the following steps:
Step S1: detect target objects in each of the individual lidars using deep learning, and determine each object's center point and its three-dimensional coordinates in the lidar coordinate system.
Step S2: randomly select three detected objects and their center points in each lidar; perform nearest-point cloud registration to obtain a preliminary coarse matching result, computing the rotation matrix R and translation vector T of the extrinsic matrix.
Step S3: repeat step S2, iterating until a defined convergence condition is met; after N rounds of coarse matching, obtain the corresponding matching parameters, namely the rotation matrix R and translation vector T of the extrinsic matrix.
Step S4: using the coarse matching result from step S3 as an initial value, run an iterative closest point (ICP) matching algorithm on the full point clouds and regress accurate matching parameters for the multiple lidars, namely the rotation matrix R and translation vector T of the extrinsic matrix; then fuse the multi-lidar point clouds with these parameters to complete the calibration.
Preferably, step S1 further comprises:
Step S11: determine the target objects relevant to the operating task and their categories.
Step S12: capture the target objects with each lidar to obtain radar point cloud data from all individual lidars.
Step S13: feed the point cloud data into a deep convolutional neural network, and obtain from its output each target object's center point and three-dimensional coordinates in the lidar coordinate system.
Preferably, step S2 further comprises: pairing the three selected center points across two lidars under an arbitrarily assigned matching relationship, then performing singular value decomposition (SVD) to solve for the extrinsic matrix between them, yielding the rotation matrix R and translation vector T.
Preferably, step S3 further comprises:
Step S31: iterate repeatedly over all detected object center points, compute the matching error of each trial, and score its matching accuracy.
Step S32: output the highest-scoring result as the optimal coarse-matching solution.
Preferably, step S4 further comprises: using the coarse optimal solution from step S32 as an initial value, globally matching all point clouds and performing point cloud registration with the iterative closest point algorithm to obtain the optimized final solution, namely the rotation matrix R and translation vector T of the extrinsic matrix; then fusing the multi-lidar point clouds with these parameters to complete the calibration.
The invention also provides a system employing the above method for large-baseline multi-lidar calibration based on detected objects, comprising: target objects, multiple lidars, a point cloud computing module, a parameter output module, and a calibration module.
The target objects are placed within the common field of view of the individual lidars.
The multiple lidars capture the target objects in the field of view to obtain radar point cloud data from all individual lidars.
The point cloud computing module is communicatively connected to the lidars; it feeds the point cloud data into a deep convolutional neural network and obtains from its output each target object's center point and three-dimensional coordinates in the lidar coordinate system.
The parameter output module is communicatively connected to the point cloud computing module. It randomly selects three target objects and their center points under the multiple lidars and performs nearest-point cloud registration to obtain a preliminary coarse matching result; it iterates until the algorithm meets a defined convergence condition, completing N rounds of coarse matching and obtaining the corresponding matching parameters; it then uses the coarse matching result as an initial value, runs an iterative closest point matching algorithm on the full point clouds, and regresses accurate matching parameters for the multiple lidars, namely the rotation matrix R and translation vector T of the extrinsic matrix.
The calibration module is communicatively connected to the parameter output module and fuses the multi-lidar point clouds using the matching parameters, completing the calibration.
Compared with the prior art, the invention has the following beneficial effects:
The method and system detect the target objects' center points and three-dimensional coordinates in each lidar, iterate over those coordinates to convergence for coarse matching, then use the coarse result as an initial value to run an iterative closest point matching algorithm on the full point clouds, regressing accurate matching parameters for the multiple lidars, namely the rotation matrix R and translation vector T of the extrinsic matrix; fusing the multi-lidar point clouds with these parameters completes the calibration. The method is suitable for outdoor, large-baseline scenes, compensates for the perception limitations of a single lidar, and yields accurate, dense three-dimensional environmental information. It enables the fusion of multiple lidar point clouds into a more reliable sensing system, on which three-dimensional visual perception tasks can achieve stable performance, for example stable tracking, velocity prediction, trajectory prediction, and abnormal-behavior detection of target objects in three-dimensional space.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in their description are briefly introduced below. The drawings described below show some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of the large-baseline multi-lidar calibration method based on detected objects according to the invention;
FIG. 2 is a block diagram of the large-baseline multi-lidar calibration system based on detected objects according to the invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention; all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the invention.
Example one
Referring to fig. 1, an embodiment of the invention provides a large-baseline multi-lidar calibration method based on detected objects.
First, an overview of the method: it performs calibration and fusion of multiple lidars from detected target objects, enabling the construction of a multi-lidar system and ensuring reliable, stable performance of subsequent perception tasks.
The method is described in detail below; fig. 1 shows its flowchart.
Referring to fig. 1, the method mainly comprises the following steps.
Step S1: detect target objects from the point clouds. Specifically, first determine the target objects relevant to the operating task and their categories, for example pedestrians and automobiles. Second, capture the target objects with each lidar to obtain radar point cloud data from all individual lidars. Third, detect the target objects in each lidar using deep learning. Finally, feed the point cloud data into a deep convolutional neural network and obtain from its output each target object's center point and three-dimensional coordinates in the lidar coordinate system.
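Step S1 reduces each detected object to a single 3-D center point. As a minimal illustration, assuming a detector has already assigned a set of lidar points to an object, its center can be taken as the centroid of those points; the patent instead reads the center directly from the network's output, so this stand-in is for illustration only:

```python
import numpy as np

def object_center(points):
    """Return the 3-D center of one detected object's lidar points.

    `points` is an (N, 3) array of the points a detector assigned to the
    object, in the lidar's own coordinate system. The patent obtains the
    center from a deep convolutional network's output; taking the centroid
    of the object's points is a simple stand-in assumption used here.
    """
    pts = np.asarray(points, dtype=float)
    return pts.mean(axis=0)
```

One such center per detected object, per lidar, is the input to the coarse-matching steps that follow.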
Step S2: randomly select three detected objects and their center points in each lidar, and perform nearest-point cloud registration to obtain a preliminary coarse matching result, computing the rotation matrix R and translation vector T of the extrinsic matrix. Specifically, the three selected center points are paired across two lidars under an arbitrarily assigned matching relationship, and singular value decomposition (SVD) is used to solve for the extrinsic matrix between them, yielding R and T.
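The SVD step in S2 has a standard closed-form solution (the Kabsch method). A minimal sketch under the assumption that the three center points are already paired; the function name is illustrative and the patent does not spell out its exact SVD variant:

```python
import numpy as np

def solve_extrinsics(src, dst):
    """Solve R, T such that dst_i ≈ R @ src_i + T, via SVD (Kabsch method).

    src, dst: (N, 3) arrays of corresponding center points (N >= 3,
    non-collinear). Standard closed-form rigid registration; a sketch,
    not the patent's disclosed implementation.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)        # centroids
    H = (src - sc).T @ (dst - dc)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # diag(1, 1, det) guards against a reflection solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = dc - R @ sc
    return R, T
```

With noiseless correspondences the recovered (R, T) is exact; with noisy centers it is the least-squares rigid fit.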
Step S3: repeat step S2 over the point clouds, iterating until a defined convergence condition is met; after N rounds of coarse matching, obtain the corresponding matching parameters, namely the rotation matrix R and translation vector T of the extrinsic matrix.
Because the center-point pairs in step S2 are selected at random and their matching relationship is assigned arbitrarily, any single trial's parameters are not guaranteed to be optimal. All detected object center points are therefore iterated over repeatedly; the matching error of each trial is computed and scored for accuracy, and the highest-scoring result is output as the optimal coarse-matching solution.
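Steps S2/S3 together amount to a RANSAC-style loop: sample triples of centers from each lidar, try candidate correspondences, solve a transform, score it, and keep the best. A sketch under assumed scoring details (the patent specifies only that a matching error is computed and scored; the inlier threshold and count-based score below are assumptions):

```python
import itertools
import numpy as np

def _rigid(src, dst):
    # Kabsch closed-form solve: dst ≈ R @ src + T
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

def coarse_match(centers_a, centers_b, n_trials=200, inlier_dist=0.5, seed=0):
    """RANSAC-style coarse matching over detected object centers.

    centers_a, centers_b: (N, 3) / (M, 3) center points from two lidars.
    Each trial samples three centers per lidar, tries every correspondence
    permutation, solves (R, T), and scores it by how many centers of A land
    within `inlier_dist` of some center of B after the transform.
    """
    centers_a = np.asarray(centers_a, float)
    centers_b = np.asarray(centers_b, float)
    rng = np.random.default_rng(seed)
    best_R, best_T, best_score = None, None, -1
    for _ in range(n_trials):
        ia = rng.choice(len(centers_a), size=3, replace=False)
        ib = rng.choice(len(centers_b), size=3, replace=False)
        for perm in itertools.permutations(range(3)):
            R, T = _rigid(centers_a[ia], centers_b[ib[list(perm)]])
            moved = centers_a @ R.T + T
            # distance from each transformed A-center to its nearest B-center
            d = np.linalg.norm(moved[:, None, :] - centers_b[None, :, :], axis=2)
            score = int((d.min(axis=1) < inlier_dist).sum())
            if score > best_score:
                best_R, best_T, best_score = R, T, score
    return best_R, best_T, best_score
```

The highest-scoring (R, T) plays the role of the "optimal coarse-matching solution" of step S32.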
Step S4: global point cloud matching. Using the coarse optimal solution from step S3 as an initial value, globally match all point clouds and perform point cloud registration with the iterative closest point algorithm, regressing accurate matching parameters for the multiple lidars, namely the rotation matrix R and translation vector T of the extrinsic matrix. Fusing the multi-lidar point clouds with these parameters completes the calibration and realizes a large-baseline multi-lidar sensing system, providing a foundation for reliable, stable performance of subsequent tasks.
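The refinement of step S4 can be sketched as plain point-to-point ICP initialized with the coarse result. The brute-force nearest-neighbour search and the convergence test below are illustrative assumptions (a KD-tree and a production ICP variant would be used on real clouds):

```python
import numpy as np

def icp_refine(src, dst, R0, T0, iters=50, tol=1e-9):
    """Refine a coarse (R0, T0) by point-to-point ICP over full clouds.

    src, dst: (N, 3) / (M, 3) point clouds from two lidars. Each iteration
    transforms src, matches every point to its nearest neighbour in dst,
    and re-solves the rigid transform in closed form. A minimal sketch of
    step S4; the patent's exact ICP variant is not given.
    """
    def rigid(a, b):  # Kabsch closed-form solve: b ≈ R @ a + T
        ac, bc = a.mean(axis=0), b.mean(axis=0)
        U, _, Vt = np.linalg.svd((a - ac).T @ (b - bc))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        return R, bc - R @ ac

    src, dst = np.asarray(src, float), np.asarray(dst, float)
    R, T = np.asarray(R0, float), np.asarray(T0, float)
    prev = np.inf
    for _ in range(iters):
        moved = src @ R.T + T
        # brute-force nearest-neighbour correspondences
        d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        err = d[np.arange(len(src)), nn].mean()
        if prev - err < tol:  # converged
            break
        prev = err
        R, T = rigid(src, dst[nn])
    return R, T
```

The returned (R, T) are the "accurate matching parameters" used to fuse the two clouds into a common frame.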
In summary, the hardware platform of the invention comprises two or more lidars, which may be placed freely, with baselines up to 30 m and viewpoint changes up to 90 degrees.
The key innovation is that calibration of multiple lidars is completed automatically from the target objects relevant to the task: the corresponding extrinsic matrices are computed, the multi-lidar point clouds are fused, and a multi-radar sensing system is realized, improving the performance of subsequent tasks.
Example two
Referring to fig. 2, a second embodiment of the invention provides a system employing the method for large-baseline multi-lidar calibration based on detected objects described in the first embodiment.
The system comprises target objects, multiple lidars, a point cloud computing module, a parameter output module, and a calibration module.
The target objects are placed within the common field of view of the individual lidars.
The multiple lidars capture the target objects in the field of view to obtain radar point cloud data from all individual lidars.
The point cloud computing module is communicatively connected to the lidars; it feeds the point cloud data into a deep convolutional neural network and obtains from its output each target object's center point and three-dimensional coordinates in the lidar coordinate system.
The parameter output module is communicatively connected to the point cloud computing module. It randomly selects three target objects and their center points under the multiple lidars and performs nearest-point cloud registration to obtain a preliminary coarse matching result; it iterates until the algorithm meets a defined convergence condition, completing N rounds of coarse matching and obtaining the corresponding matching parameters; it then uses the coarse matching result as an initial value, runs an iterative closest point matching algorithm on the full point clouds, and regresses accurate matching parameters, namely the rotation matrix R and translation vector T of the extrinsic matrix.
The calibration module is communicatively connected to the parameter output module and fuses the multi-lidar point clouds using the matching parameters, completing the calibration and realizing a large-baseline multi-lidar sensing system that underpins reliable, stable performance of subsequent tasks.
Existing large-baseline multi-radar calibration schemes depend on purpose-built external calibration targets. The method and system of the invention remove this dependence: calibration of the multiple lidars is completed online from objects detected in the scene, such as automobiles and pedestrians.
The above embodiments are preferred embodiments of the invention, but the invention is not limited to them. Any change, modification, substitution, combination, or simplification that does not depart from the spirit and principle of the invention shall be regarded as an equivalent and is included within the protection scope of the invention.
Claims (6)
1. A large-baseline multi-lidar calibration method based on detected objects, characterized by comprising the following steps:
step S1: detecting target objects in a plurality of individual lidars using deep learning, and determining each object's center point and its three-dimensional coordinates in the lidar coordinate system;
step S2: randomly selecting three detected objects and their center points in each lidar, performing nearest-point cloud registration to obtain a preliminary coarse matching result, and computing the rotation matrix R and translation vector T of the extrinsic matrix;
step S3: repeating step S2, iterating until a defined convergence condition is met, and completing N rounds of coarse matching to obtain the corresponding matching parameters, namely the rotation matrix R and translation vector T of the extrinsic matrix;
step S4: using the coarse matching result from step S3 as an initial value, performing an iterative closest point matching algorithm on the full point clouds and regressing accurate matching parameters for the multiple lidars, namely the rotation matrix R and translation vector T of the extrinsic matrix; and fusing the multi-lidar point clouds with these parameters to complete the calibration.
2. The method of claim 1, characterized in that step S1 further comprises:
step S11: determining the target objects relevant to the operating task and their categories;
step S12: capturing the target objects with each lidar to obtain radar point cloud data from all individual lidars;
step S13: feeding the point cloud data into a deep convolutional neural network, and obtaining from its output each target object's center point and three-dimensional coordinates in the lidar coordinate system.
3. The method of claim 1, characterized in that step S2 further comprises: pairing the three selected center points across two lidars under an arbitrarily assigned matching relationship; and performing singular value decomposition (SVD) to solve for the extrinsic matrix between them, obtaining the rotation matrix R and translation vector T.
4. The method of claim 1, characterized in that step S3 further comprises:
step S31: iterating repeatedly over all detected object center points, computing the matching error of each trial, and scoring its matching accuracy;
step S32: outputting the highest-scoring result as the optimal coarse-matching solution.
5. The method of claim 4, characterized in that step S4 further comprises: using the coarse optimal solution from step S32 as an initial value, globally matching all point clouds and performing point cloud registration with the iterative closest point algorithm to obtain the optimized final solution, namely the rotation matrix R and translation vector T of the extrinsic matrix; and fusing the multi-lidar point clouds with these parameters to complete the calibration.
6. A system employing the method of any one of claims 1 to 5 for large-baseline multi-lidar calibration based on detected objects, characterized by comprising: target objects, multiple lidars, a point cloud computing module, a parameter output module, and a calibration module;
the target objects being placed within the common field of view of the individual lidars;
the multiple lidars capturing the target objects in the field of view to obtain radar point cloud data from all individual lidars;
the point cloud computing module being communicatively connected to the lidars, feeding the point cloud data into a deep convolutional neural network, and obtaining from its output each target object's center point and three-dimensional coordinates in the lidar coordinate system;
the parameter output module being communicatively connected to the point cloud computing module, randomly selecting three target objects and their center points under the multiple lidars and performing nearest-point cloud registration to obtain a preliminary coarse matching result, iterating until the algorithm meets a defined convergence condition to complete N rounds of coarse matching and obtain the corresponding matching parameters, and then using the coarse matching result as an initial value to perform an iterative closest point matching algorithm on the full point clouds, regressing accurate matching parameters for the multiple lidars, namely the rotation matrix R and translation vector T of the extrinsic matrix;
and the calibration module being communicatively connected to the parameter output module and fusing the multi-lidar point clouds using the matching parameters to complete the calibration.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210523924.4A CN114935748B (en) | 2022-05-14 | 2022-05-14 | Method and system for calibrating large baseline multi-laser radar based on detected object |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210523924.4A CN114935748B (en) | 2022-05-14 | 2022-05-14 | Method and system for calibrating large baseline multi-laser radar based on detected object |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114935748A true CN114935748A (en) | 2022-08-23 |
CN114935748B CN114935748B (en) | 2024-09-10 |
Family
ID=82863591
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210523924.4A Active CN114935748B (en) | 2022-05-14 | 2022-05-14 | Method and system for calibrating large baseline multi-laser radar based on detected object |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114935748B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115097426A (en) * | 2022-08-24 | 2022-09-23 | 盟识科技(苏州)有限公司 | Automatic calibration method after vehicle-mounted laser radar replacement, storage medium and vehicle |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110658530A (en) * | 2019-08-01 | 2020-01-07 | 北京联合大学 | Map construction method and system based on double-laser-radar data fusion and map |
WO2021253193A1 (en) * | 2020-06-15 | 2021-12-23 | 深圳市大疆创新科技有限公司 | Calibration method and calibration apparatus for external parameters of multiple groups of laser radars, and computer storage medium |
CN114114312A (en) * | 2021-11-24 | 2022-03-01 | 重庆邮电大学 | Three-dimensional target detection method based on fusion of multi-focal-length camera and laser radar |
Non-Patent Citations (1)
Title |
---|
Han Dongbin; Xu Youchun; Wang Rendong; Qi Yao; Li Hua: "Extrinsic calibration of three-dimensional lidar based on multi-pair point cloud matching", Laser & Optoelectronics Progress, no. 02, 24 September 2017 (2017-09-24) *
Also Published As
Publication number | Publication date |
---|---|
CN114935748B (en) | 2024-09-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |