CN116051650A - Laser radar and camera combined external parameter calibration method and device - Google Patents

Laser radar and camera combined external parameter calibration method and device

Info

Publication number
CN116051650A
CN116051650A (application CN202211644585.1A)
Authority
CN
China
Prior art keywords
target
point cloud
coordinate system
frame
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211644585.1A
Other languages
Chinese (zh)
Inventor
李平
郭交通
叶茂
潘力澜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Network Technology Shanghai Co Ltd
Original Assignee
International Network Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Network Technology Shanghai Co Ltd filed Critical International Network Technology Shanghai Co Ltd
Priority to CN202211644585.1A priority Critical patent/CN116051650A/en
Publication of CN116051650A publication Critical patent/CN116051650A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention provides a method and device for jointly calibrating the external parameters of a laser radar and a camera. A camera photographs a target to obtain pictures containing the target pattern, and the laser radar acquires point cloud data containing target information. First target information is detected in the pictures; from the first target information and second target information in the point cloud data, the initial pose of each target and of each frame of picture relative to the first frame of picture, and the initial pose of each target and of each frame of point cloud relative to the first frame of laser radar point cloud, are estimated. Initial values of the conversion parameters between the radar coordinate system and the vehicle body coordinate system are then calculated. Finally, a nonlinear optimization algorithm yields the relative pose of each target and the first-frame laser radar coordinate system, the pose of each frame of point cloud in the radar coordinate system, and the calibrated conversion parameters between the different coordinate systems. This avoids the inconsistency caused by calibrating the sensors separately, improves calibration precision, reduces calibration cost, and lowers the requirements on the calibration site.

Description

Laser radar and camera combined external parameter calibration method and device
Technical Field
The invention relates to the technical field of automatic driving, in particular to a laser radar and camera combined external parameter calibration method and device.
Background
LiDAR (Light Detection and Ranging) provides environmental depth information by direct measurement, but at relatively low resolution. A camera provides rich environmental information but lacks depth and is susceptible to illumination changes. The two sensors are therefore highly complementary, and their combination is widely used for perception, positioning, and other key tasks in robotics and autonomous driving. In the field of autonomous driving, the results of perception and positioning are ultimately expressed in the vehicle body coordinate system, which requires high-precision calibration of the sensors' external parameters so that the data they acquire can be converted into that coordinate system. Most existing calibration methods proceed step by step: the laser radar and the camera are first calibrated to each other using a target, then one of the sensors is calibrated to the vehicle body coordinate system by other means, for example hand-eye calibration, which requires the vehicle to move at high speed while its own parameters are calibrated at the same time. In addition, the sensor to be calibrated (camera or laser radar) must produce a continuous trajectory estimate, which in turn requires expensive equipment such as an integrated navigation system. As a result, calibration cost is high, some calibration actions must be completed in advance, the procedure involves many steps and considerable technical difficulty, and a step-by-step scheme makes it hard to guarantee the consistency of the calibration results.
Disclosure of Invention
The invention provides a laser radar and camera combined external parameter calibration method and device, which are used for solving the defects that the traditional laser radar and camera combined external parameter calibration method is high in calibration cost, high in technical difficulty and incapable of guaranteeing the consistency of calibration results.
The invention provides a laser radar and camera combined external parameter calibration method, which comprises the following steps:
shooting a target through a camera to obtain a picture with a target pattern, and obtaining point cloud data with target information through a laser radar;
detecting first target information in a picture, and detecting second target information in point cloud data according to the first target information and the point cloud data;
estimating initial postures of each target and each frame of picture in the picture relative to the first frame of picture according to the first target information, calculating initial postures of each target and each frame of point cloud relative to the first frame of laser radar point cloud according to the second target information and design values of a radar coordinate system and a camera coordinate system, and calculating conversion parameter initial values of the radar coordinate system and a vehicle body coordinate system according to the laser radar installation design values and observed ground point cloud;
and obtaining the relative posture of each target and the first frame of laser radar coordinate system in the point cloud data, the posture of each frame of point cloud under the first frame of laser radar coordinate system, the conversion parameter calibration value of the radar coordinate system and the camera coordinate system and the conversion parameter calibration value of the radar coordinate system and the vehicle body coordinate system through a nonlinear optimization algorithm.
The invention provides a laser radar and camera combined external parameter calibration method, wherein the first target information includes: the 2D coordinates of the target corner points and target ID information.
The invention provides a laser radar and camera combined external parameter calibration method, which estimates the initial gesture of each target and each frame of picture in a picture relative to a first frame of picture according to first target information, and comprises the following steps:
and estimating the initial posture of each target and each frame of picture relative to the first frame of picture by using a PnP algorithm according to the known target size information, the 2D coordinates of the target angular points and the target ID information.
The invention provides a laser radar and camera combined external parameter calibration method, wherein detecting the second target information in the point cloud data comprises the following steps:
screening the point cloud data to obtain non-ground point cloud data;
clustering the non-ground point cloud data to obtain a plurality of point cloud clusters;
filtering the plurality of point cloud clusters according to the known target size information to obtain first suspected target point cloud clusters;
projecting the suspected target point cloud clusters onto the picture according to design parameters of the camera and the laser radar, and comparing the target positions in the picture with the positions of the projected suspected target point cloud clusters to obtain second suspected target point cloud clusters;
and carrying out flatness screening on the second suspected target point cloud clusters to obtain the second target information.
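The clustering and size-filtering steps above can be sketched as follows. This is a minimal illustration (greedy Euclidean clustering plus a bounding-box diagonal check against the known target size); the function names, radius, and tolerance values are assumptions, not taken from the patent.

```python
import numpy as np

def euclidean_cluster(points, radius=0.3, min_size=5):
    """Greedy Euclidean clustering: grow each cluster by breadth-first
    search over points within `radius` of any member."""
    pts = np.asarray(points, dtype=float)
    unvisited = set(range(len(pts)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, members = [seed], [seed]
        while queue:
            i = queue.pop()
            idx = np.fromiter(unvisited, dtype=int)
            if idx.size:
                near = idx[np.linalg.norm(pts[idx] - pts[i], axis=1) <= radius]
                for j in near:
                    unvisited.remove(int(j))
                queue.extend(int(j) for j in near)
                members.extend(int(j) for j in near)
        if len(members) >= min_size:
            clusters.append(pts[members])
    return clusters

def filter_by_size(clusters, target_diag, tol=0.2):
    """Keep clusters whose bounding-box diagonal matches the known target size."""
    keep = []
    for c in clusters:
        diag = np.linalg.norm(c.max(axis=0) - c.min(axis=0))
        if abs(diag - target_diag) <= tol * target_diag:
            keep.append(c)
    return keep
```

A real pipeline would use a k-d tree for the neighbor search; the brute-force version keeps the sketch short.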
The invention provides a laser radar and camera combined external parameter calibration method, which screens the point cloud data to obtain non-ground point cloud data, and comprises the following steps:
dividing out the ground point cloud data in the point cloud data by using the random sample consensus (RANSAC) method;
fitting a plane by using the ground point cloud data to obtain a ground point cloud parameter plane equation under a radar coordinate system;
and screening out non-ground point cloud data according to the ground point cloud parameter plane equation.
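The three ground-screening steps above can be sketched with a RANSAC-style plane fit; this is an illustrative implementation under assumed thresholds, not the patent's exact procedure.

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.05, rng=None):
    """Fit a plane n.p + d = 0 by RANSAC; return (n, d, inlier_mask)."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float)
    best_count, best_mask = -1, None
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        mask = np.abs(pts @ n + d) < thresh
        if mask.sum() > best_count:
            best_count, best_mask = mask.sum(), mask
    # refine the plane parameters by SVD over the inliers
    inl = pts[best_mask]
    c = inl.mean(axis=0)
    n = np.linalg.svd(inl - c)[2][-1]
    return n, -n @ c, best_mask

def split_ground(points, n, d, thresh=0.05):
    """Separate points into (ground, non-ground) by distance to the plane."""
    pts = np.asarray(points, dtype=float)
    dist = np.abs(pts @ n + d)
    return pts[dist < thresh], pts[dist >= thresh]
```

The refined `(n, d)` is the parametric plane equation in the radar coordinate system, and `split_ground` yields the non-ground points passed on to clustering.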
The invention provides a laser radar and camera combined external parameter calibration method, which estimates the initial gesture of each target and each frame of point cloud relative to a first frame of laser radar point cloud according to the first target information and the second target information, and comprises the following steps:
according to the same targets in the two continuous frames of pictures, the mapping relation from the 3D coordinates to the 2D coordinates is obtained according to the initial posture of each target and each frame of picture relative to the first frame of picture and the known target size information;
calculating the 3D coordinates of the target corner under a first frame of camera coordinate system according to the mapping relation from the 3D coordinates to the 2D coordinates and the 2D coordinates of the target corner;
calculating the difference between the 2D coordinates obtained by projecting the 3D coordinates of the target corner points in the first-frame camera coordinate system onto the picture and the 2D coordinates of the target pattern in the picture;
optimizing an initial pose of each target and each frame of picture relative to a first frame of picture by minimizing the difference;
and calculating the initial pose of each target and each frame of point cloud relative to the first frame of laser radar point cloud according to the optimized initial pose of each target and each frame of picture relative to the first frame of picture and the parameter design values of the camera and the laser radar.
The invention provides a laser radar and camera combined external parameter calibration method, which calculates a 3D coordinate of a target corner under a first frame camera coordinate system according to a mapping relation from the 3D coordinate to a 2D coordinate and the 2D coordinate of the target corner, and comprises the following steps:
calculating the relative motion relation between frames according to the mapping relation;
fixing the camera pose of the first frame, and obtaining the camera poses of all frames under a first frame camera coordinate system according to the relative motion relation among frames;
calculating the pose of the target under the first frame camera coordinate system according to the camera poses of all frames under the first frame camera coordinate system and the relative pose of each target pattern and the camera;
And calculating the 3D coordinates of the corner points of the target under the first frame of camera coordinate system according to the posture of the target under the first frame of camera coordinate system and the known target size information.
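The last two steps amount to rigid-transform bookkeeping and can be sketched as a pair of helpers (names and the convention that target corners lie at z = 0 in the target's own frame are assumptions):

```python
import numpy as np

def board_corners(rows, cols, square):
    """3D corner coordinates of a planar target in its own frame (z = 0)."""
    xs, ys = np.meshgrid(np.arange(cols), np.arange(rows))
    return np.stack([xs.ravel() * square, ys.ravel() * square,
                     np.zeros(rows * cols)], axis=1)

def corners_in_cam0(R_c0_board, t_c0_board, corners_board):
    """Map target-frame corners into the first-frame camera coordinate system
    given the target pose (R, t) in that system."""
    return corners_board @ R_c0_board.T + t_c0_board
```

Chaining this with the per-frame camera poses gives each target corner's 3D position in the first-frame camera coordinate system, as the step above describes.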
The invention provides a laser radar and camera combined external parameter calibration method, which calculates a conversion parameter initial value of a radar coordinate system and a vehicle body coordinate system according to a laser radar installation design value and an observed ground point cloud, and comprises the following steps:
screening out point cloud data of static time periods according to an SFM (Structure from Motion) method;
and estimating initial values of conversion parameters of a radar coordinate system and a vehicle body coordinate system according to a ground point cloud parameter plane equation and a laser radar installation design value corresponding to the point cloud data of the static time period, wherein the conversion parameters of the radar coordinate system and the vehicle body coordinate system comprise roll and pitch angles of the radar coordinate system relative to the vehicle body coordinate system.
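A sketch of how the roll and pitch initial values could be recovered from the fitted ground-plane normal follows. It assumes a ZYX (yaw-pitch-roll) Euler convention with the vehicle body z-axis pointing up; both the convention and the function name are assumptions for illustration.

```python
import numpy as np

def roll_pitch_from_ground(normal):
    """Roll and pitch of the lidar w.r.t. a level body frame, from the fitted
    ground-plane normal expressed in the lidar frame (body z-axis assumed up)."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:                       # make the normal point upward
        n = -n
    # For R = Rz(yaw) Ry(pitch) Rx(roll), the third row of R (the body z-axis
    # expressed in lidar coordinates, i.e. the ground normal) is
    # [-sin(pitch), cos(pitch) sin(roll), cos(pitch) cos(roll)].
    pitch = -np.arcsin(np.clip(n[0], -1.0, 1.0))
    roll = np.arctan2(n[1], n[2])
    return roll, pitch
```

Yaw and the translation are unobservable from a single ground plane, which is why the patent takes them from the installation design values.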
The invention provides a laser radar and camera combined external parameter calibration method, wherein the nonlinear optimization algorithm comprises the following steps:
minimizing at least one of a re-projection error model, a point cloud association error model, a non-holonomic constraint error model, an installation prior error model, and a point cloud projection error model.
The invention provides a laser radar and camera combined external parameter calibration method, which further comprises the following steps:
And solving the nonlinear optimization algorithm by using an LM method to obtain the relative posture of each target and the first frame of laser radar coordinate system, the posture of each frame of point cloud under the first frame of laser radar coordinate system, the conversion parameter calibration value of the radar coordinate system and the camera coordinate system and the conversion parameter calibration value of the radar coordinate system and the vehicle body coordinate system in the point cloud data.
The invention provides a laser radar and camera combined external parameter calibration method, which comprises the following steps:
and constructing a re-projection error model according to the difference between the 2D coordinates obtained by projecting the 3D coordinates of the target corner points in the first-frame camera coordinate system onto the picture and the 2D coordinates of the target pattern in the picture.
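The re-projection error can be illustrated with a minimal pinhole-camera sketch; this is a generic residual of the kind described, not the patent's exact cost function, and the names are assumptions.

```python
import numpy as np

def project(K, R, t, pts3d):
    """Pinhole projection of 3D points given a camera pose (world -> camera)."""
    cam = pts3d @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def reprojection_error(K, R, t, pts3d, detected_uv):
    """Sum of squared pixel residuals between projections and detections."""
    r = project(K, R, t, pts3d) - detected_uv
    return float(np.sum(r ** 2))
```

In the joint optimization, this residual couples the target poses, the per-frame poses, and the radar-camera conversion parameters through the projected corner positions.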
The invention provides a laser radar and camera combined external parameter calibration method, which comprises the following steps:
and calculating a covariance matrix of points of the same plane in the point cloud data, wherein the minimum eigenvalue of the covariance matrix is the matching error of the plane, and the sum of the matching errors of all the planes is used as a point cloud association error model.
The invention provides a laser radar and camera combined external parameter calibration method, which comprises the following steps:
calculating the motion trail of the vehicle body according to the initial pose of each target and each frame of point cloud relative to the first frame of laser radar point cloud and the initial values of the conversion parameters of the radar coordinate system and the vehicle body coordinate system;
and projecting the vehicle body pose of any frame onto the previous frame according to the motion trail of the vehicle body, calculating the vehicle body velocity from each frame of projected data under a uniform-motion assumption, and constructing a non-holonomic constraint error model from the assumption that the lateral velocity of the vehicle body is zero.
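The zero-lateral-velocity residual above can be illustrated with a minimal sketch. The axis convention (x forward, y left, z up in the body frame) and the function name are assumptions.

```python
import numpy as np

def lateral_velocity_residual(R_prev, t_prev, t_curr, dt):
    """Velocity of the body between two consecutive poses, expressed in the
    previous body frame; for a non-holonomic ground vehicle the lateral (y)
    component should vanish, so it is returned as the residual."""
    dp = R_prev.T @ (t_curr - t_prev)   # relative translation in the previous frame
    v = dp / dt
    return v[1]                         # lateral component (x forward, y left)
```

Summing the squares of this residual over all consecutive frame pairs gives one term of the joint cost; it constrains the radar-to-body conversion parameters because the body trajectory is derived through them.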
The invention provides a laser radar and camera combined external parameter calibration method, which comprises the following steps:
and constructing an installation priori error model according to the design position coordinate constraint of the radar coordinate system under the vehicle body coordinate system and the roll and pitch angle constraint.
The invention provides a laser radar and camera combined external parameter calibration method, which comprises the following steps:
and projecting the suspected target point cloud clusters onto the picture according to design parameters of the camera and the laser radar, and constructing a point cloud projection error model according to the sum of the position errors between the target positions in the picture and the projected suspected target point cloud clusters.
The invention provides a laser radar and camera combined external parameter calibration method, wherein shooting a target through a camera to obtain a picture with a target pattern and obtaining point cloud data with target information through the laser radar comprises the following steps:
and acquiring the picture and the point cloud data of the target stationary time period, and acquiring the picture and the point cloud data of the target motion time period.
The invention also provides an external parameter calibration device combining the laser radar and the camera, which comprises:
the acquisition module is used for shooting a target through a camera to acquire a picture with a target pattern and acquiring point cloud data with target information through a laser radar;
the detection module is used for detecting first target information in the picture and detecting second target information in the point cloud data according to the first target information and the point cloud data;
the initial estimation module is used for estimating the initial gesture of each target and each frame of picture in the picture relative to the first frame of picture according to the first target information, calculating the initial gesture of each target and each frame of point cloud relative to the first frame of laser radar point cloud according to the second target information and the design values of the radar coordinate system and the camera coordinate system, and calculating the initial value of conversion parameters of the radar coordinate system and the vehicle body coordinate system according to the laser radar installation design value and the observed ground point cloud;
The optimization module is used for obtaining the relative posture of each target and the first frame of laser radar coordinate system, the posture of each frame of point cloud under the first frame of laser radar coordinate system, the conversion parameter calibration value of the radar coordinate system and the camera coordinate system and the conversion parameter calibration value of the radar coordinate system and the vehicle body coordinate system in the point cloud data through a nonlinear optimization algorithm.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the external parameter calibration method of the laser radar and the camera combination when executing the program.
The invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for calibrating external parameters of a lidar and camera combination as described in any of the above.
The invention provides a laser radar and camera combined external parameter calibration method and device. A camera photographs a target to obtain pictures with the target pattern, and the laser radar acquires point cloud data with target information; first target information is detected in the pictures, and second target information is detected in the point cloud data according to the first target information and the point cloud data; the initial pose of each target and each frame of picture relative to the first frame of picture is estimated from the first target information, the initial pose of each target and each frame of point cloud relative to the first frame of laser radar point cloud is calculated from the second target information and the design values of the radar and camera coordinate systems, and initial values of the conversion parameters between the radar coordinate system and the vehicle body coordinate system are calculated from the laser radar installation design values and the observed ground point cloud; a nonlinear optimization algorithm then yields the relative pose of each target and the first-frame laser radar coordinate system, the pose of each frame of point cloud in the first-frame laser radar coordinate system, and the calibrated conversion parameters between the radar and camera coordinate systems and between the radar and vehicle body coordinate systems. This avoids the inconsistency that arises when the camera and laser radar are calibrated separately and improves calibration precision. Only targets of known size are used in the calibration process, no other hardware equipment is relied upon, calibration cost is reduced, operation is simple, the vehicle is not required to move at high speed or with high dynamics, and the requirements on the calibration site are further reduced.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a first schematic flow chart of the method for calibrating external parameters by combining a laser radar and a camera provided by the present invention;
FIG. 2 is a second schematic flow chart of the method for calibrating external parameters by combining a laser radar and a camera provided by the present invention;
FIG. 3 is a third schematic flow chart of the method for calibrating external parameters by combining a laser radar and a camera provided by the present invention;
FIG. 4 is a fourth schematic flow chart of the method for calibrating external parameters by combining a laser radar and a camera provided by the present invention;
FIG. 5 is a fifth schematic flow chart of the method for calibrating external parameters by combining a laser radar and a camera provided by the present invention;
FIG. 6 is a schematic structural diagram of an external parameter calibration device combining a laser radar and a camera;
fig. 7 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a flowchart of a method for calibrating external parameters of a laser radar and a camera combination according to an embodiment of the present invention, as shown in fig. 1, where the method for calibrating external parameters of a laser radar and a camera combination according to an embodiment of the present invention includes:
step 101, shooting a target through a camera to obtain a picture with a target pattern, and obtaining point cloud data with target information through a laser radar;
in an embodiment of the present invention, capturing a target by a camera to obtain a picture having a target pattern, and capturing point cloud data having target information by a laser radar, includes:
and acquiring the picture and the point cloud data of the target stationary time period, and acquiring the picture and the point cloud data of the target motion time period.
In the embodiment of the invention, the target is controlled to be stationary and then moves, and the image and the point cloud data of the stationary time period and the image and the point cloud data of the target moving time period are respectively acquired.
Step 102, detecting first target information in a picture, and detecting second target information in point cloud data according to the first target information and the point cloud data;
in an embodiment of the present invention, the first target information includes target corner 2D coordinates and target ID information.
Step 103, estimating initial postures of each target and each frame of picture in the picture relative to the first frame of picture according to the first target information, calculating initial postures of each target and each frame of point cloud relative to the first frame of laser radar point cloud according to the second target information and design values of a radar coordinate system and a camera coordinate system, and calculating conversion parameter initial values of the radar coordinate system and a vehicle body coordinate system according to the laser radar installation design values and observed ground point cloud;
In the embodiment of the invention, according to the known target size information, the 2D coordinates of the target corner points and the target ID information, the initial relative pose of each target and the camera is estimated by using a PnP (Perspective-n-Point) algorithm.
The PnP (Perspective-n-Point) algorithm solves for camera motion from correspondences between 3D points and their 2D image projections.
Step 104, obtaining the relative posture of each target and the first frame of laser radar coordinate system, the posture of each frame of point cloud under the first frame of laser radar coordinate system, the conversion parameter calibration value of the radar coordinate system and the camera coordinate system and the conversion parameter calibration value of the radar coordinate system and the vehicle body coordinate system in the point cloud data through a nonlinear optimization algorithm.
In an embodiment of the present invention, the method further includes:
and solving the nonlinear optimization algorithm by using an LM (Levenberg-Marquardt) method to obtain the relative posture of each target and a first frame of laser radar coordinate system in the point cloud data, the posture of each frame of point cloud under the first frame of laser radar coordinate system, the conversion parameter calibration value of the radar coordinate system and a camera coordinate system and the conversion parameter calibration value of the radar coordinate system and a vehicle body coordinate system.
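The LM (Levenberg-Marquardt) method can be sketched on a toy least-squares problem as follows. This is a generic damped Gauss-Newton loop with a simple damping update, shown for illustration only; it is not the solver actually used, and all names are assumptions.

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, iters=100, lam=1e-3):
    """Minimal Levenberg-Marquardt loop: damped Gauss-Newton steps, with the
    damping factor adapted according to whether the cost decreased."""
    x = np.asarray(x0, dtype=float)
    cost = float(np.sum(residual(x) ** 2))
    for _ in range(iters):
        r, J = residual(x), jac(x)
        H = J.T @ J + lam * np.eye(len(x))        # damped normal equations
        step = np.linalg.solve(H, -J.T @ r)
        x_new = x + step
        cost_new = float(np.sum(residual(x_new) ** 2))
        if cost_new < cost:
            x, cost, lam = x_new, cost_new, lam * 0.5   # accept, trust more
        else:
            lam *= 10.0                                  # reject, damp harder
    return x
```

In the patent's setting the parameter vector would collect the target poses, per-frame point cloud poses, and the two sets of conversion parameters, with the residual stacking all of the error models listed above.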
Most traditional methods for calibrating the external parameters of a camera and a laser radar proceed step by step: the laser radar and the camera are first calibrated to each other using a target, then one of the sensors is calibrated to the vehicle body coordinate system by other means, for example hand-eye calibration, which requires the vehicle to move at high speed while its own parameters are calibrated at the same time. In addition, the sensor to be calibrated (camera or laser radar) must produce a continuous trajectory estimate, which in turn requires expensive equipment such as an integrated navigation system. Calibration cost is therefore high, some calibration actions must be completed in advance, the procedure involves many steps and considerable technical difficulty, and a step-by-step scheme makes it hard to guarantee the consistency of the calibration results.
The external parameter calibration method combining the laser radar and the camera photographs a target through the camera to obtain pictures with the target pattern and acquires point cloud data with target information through the laser radar; detects first target information in the pictures and second target information in the point cloud data according to the first target information and the point cloud data; estimates the initial pose of each target and each frame of picture relative to the first frame of picture from the first target information, calculates the initial pose of each target and each frame of point cloud relative to the first frame of laser radar point cloud from the second target information and the design values of the radar and camera coordinate systems, and calculates initial values of the conversion parameters between the radar coordinate system and the vehicle body coordinate system from the laser radar installation design values and the observed ground point cloud; and obtains, through a nonlinear optimization algorithm, the relative pose of each target and the first-frame laser radar coordinate system, the pose of each frame of point cloud in the first-frame laser radar coordinate system, and the calibrated conversion parameters between the radar and camera coordinate systems and between the radar and vehicle body coordinate systems. This avoids the inconsistency that arises when the camera and laser radar are calibrated separately and improves calibration precision. Only targets of known size are used, no other hardware equipment is relied upon, calibration cost is reduced, operation is simple, the vehicle is not required to move at high speed or with high dynamics, and the requirements on the calibration site are further reduced.
Based on any of the above embodiments, as shown in fig. 2, detecting the second target information in the point cloud data according to the first target information and the point cloud data includes:
step 201, screening point cloud data to obtain non-ground point cloud data;
in the embodiment of the invention, screening the point cloud data to obtain non-ground point cloud data comprises the following steps:
step 2011, segmenting the ground point cloud data in the point cloud data by using a random sample consensus (RANSAC) method;
step 2012, fitting a plane by using the ground point cloud data to obtain a ground point cloud parameter plane equation under the radar coordinate system;
step 2013, screening out non-ground point cloud data according to the ground point cloud parameter plane equation.
Step 202, clustering non-ground point cloud data to obtain a plurality of point cloud clusters;
step 203, filtering the plurality of point cloud clusters according to the known target size information to obtain first suspected target point cloud clusters;
step 204, projecting the suspected target point cloud clusters onto the picture according to design parameters of the camera and the laser radar, and comparing the target positions in the picture with the positions of the projected suspected target point cloud clusters to obtain second suspected target point cloud clusters;
step 205, performing flatness screening on the second suspected target point cloud clusters to obtain the second target information.
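The screening flow above (ground removal by plane fitting, then cluster filtering) can be sketched in a few lines of plain Python. The helper names, the toy distance threshold, and the minimal RANSAC loop below are illustrative assumptions, not the patented implementation:

```python
import random
from math import sqrt

def plane_from_points(p1, p2, p3):
    # Plane through three points: unit normal n = (p2-p1) x (p3-p1), offset d so n.p + d = 0.
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sqrt(sum(c * c for c in n))
    if norm < 1e-9:          # degenerate (collinear) sample
        return None
    n = [c / norm for c in n]
    d = -sum(n[i] * p1[i] for i in range(3))
    return n + [d]

def ransac_ground(points, iters=100, threshold=0.05, seed=0):
    """Fit the dominant plane (assumed to be the ground) with RANSAC and
    return (plane, indices of non-ground points)."""
    rng = random.Random(seed)
    best_plane, best_inliers = None, []
    for _ in range(iters):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        a, b, c, d = plane
        inliers = [i for i, p in enumerate(points)
                   if abs(a*p[0] + b*p[1] + c*p[2] + d) < threshold]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = plane, inliers
    ground = set(best_inliers)
    non_ground = [i for i in range(len(points)) if i not in ground]
    return best_plane, non_ground
```

With a flat grid of ground points plus a few elevated points, the elevated points come back as the non-ground set, ready for clustering and size filtering.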
Based on any of the above embodiments, as shown in fig. 3, estimating the initial relative pose between each target in the point cloud data and the laser radar according to the first target information and the second target information includes:
step 301, associating the same targets in two continuous frames of pictures according to the target ID, and obtaining the mapping relation from 3D coordinates to 2D coordinates according to the initial relative pose of each target and the camera and the known target size information;
step 302, calculating 3D coordinates of the target corner under a first frame of camera coordinate system according to the mapping relation from the 3D coordinates to the 2D coordinates and the 2D coordinates of the target corner;
in the embodiment of the invention, calculating the 3D coordinates of the target corner under the first frame of camera coordinate system according to the mapping relation from the 3D coordinates to the 2D coordinates and the 2D coordinates of the target corner comprises:
step 3021, calculating a relative motion relationship between frames according to the mapping relationship;
step 3022, fixing the first frame camera pose, and obtaining camera poses of all frames under a first frame camera coordinate system according to the relative motion relationship between frames;
step 3023, calculating the pose of the target under the first frame camera coordinate system according to the camera poses of all frames under the first frame camera coordinate system and the relative pose of each target pattern and the camera;
Step 3024, calculating 3D coordinates of the target corner under the first frame camera coordinate system according to the pose of the target under the first frame camera coordinate system and the known target size information.
Step 303, calculating the difference between the 2D coordinates obtained by projecting the 3D coordinates of the target corner points under the first frame camera coordinate system onto the picture and the 2D coordinates of the target pattern in the picture;
step 304, optimizing the initial pose of each target and each frame of picture relative to the first frame of picture by minimizing the difference value;
step 305, calculating the initial pose of each target and each frame of point cloud relative to the first frame of laser radar point cloud according to the optimized initial pose of each target and each frame of picture relative to the first frame of picture and the parameter design values of the camera and the laser radar.
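A minimal pinhole sketch of the projection used in steps 302 to 304 may help; the function names and the simplified intrinsics model (no lens distortion) are assumptions for illustration, not the patent's implementation:

```python
def project_point(K, R, t, X):
    """Project 3D point X (in the first frame camera coordinate system) into a
    camera with rotation R, translation t (world -> camera) and intrinsics K."""
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]   # u = fx * x/z + cx
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]   # v = fy * y/z + cy
    return u, v

def reprojection_residual(K, R, t, X, observed_uv):
    """Difference between the projected corner and the detected 2D corner;
    this is the quantity minimized in step 304."""
    u, v = project_point(K, R, t, X)
    return u - observed_uv[0], v - observed_uv[1]
```

Minimizing the sum of squared residuals over all frames and target corners refines both the camera poses and the target poses.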
Based on any of the above embodiments, as shown in fig. 4, estimating initial values of conversion parameters of a radar coordinate system and a vehicle body coordinate system according to first target information and second target information includes:
step 401, screening out point cloud data of a static time period according to an SFM method;
and step 402, estimating initial values of conversion parameters of a radar coordinate system and a vehicle body coordinate system according to a ground point cloud parameter plane equation and a laser radar installation design value corresponding to point cloud data of a static time period, wherein the conversion parameters of the radar coordinate system and the vehicle body coordinate system comprise roll and pitch angles of the radar coordinate system relative to the vehicle body coordinate system.
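The roll and pitch initialization in step 402 can be illustrated by leveling the fitted ground normal: choose the two angles that rotate the plane's upward unit normal onto the body z-axis. The closed-form angles below are a standard construction assumed here for illustration, not quoted from the patent:

```python
from math import atan2, sqrt, sin, cos

def roll_pitch_from_ground_normal(n):
    """Given the upward unit normal (a, b, c) of the fitted ground plane in the
    radar frame, return (roll, pitch) of the radar relative to a level body frame."""
    a, b, c = n
    roll = atan2(b, c)                       # rotation about x that zeroes the y-component
    pitch = atan2(-a, sqrt(b * b + c * c))   # rotation about y that zeroes the x-component
    return roll, pitch

def level(n, roll, pitch):
    """Apply Ry(pitch) @ Rx(roll) to n; for the angles above the result is (0, 0, 1)."""
    a, b, c = n
    # Rx(roll)
    y = b * cos(roll) - c * sin(roll)
    z = b * sin(roll) + c * cos(roll)
    # Ry(pitch)
    x2 = a * cos(pitch) + z * sin(pitch)
    z2 = -a * sin(pitch) + z * cos(pitch)
    return x2, y, z2
```

Averaging the angles over the multi-frame stationary-period plane equations gives a robust initial value; yaw and translation are left to the joint optimization.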
Based on any of the above embodiments, the nonlinear optimization algorithm includes:
minimizing at least one of a re-projection error model, a point cloud association error model, a non-holonomic constraint error model, an installation prior error model, and a point cloud projection error model.
In the embodiment of the invention, the method for constructing the minimized re-projection error model comprises the following steps:
and constructing the re-projection error model according to the difference between the 2D coordinates obtained by projecting the 3D coordinates of the target corner points under the first frame camera coordinate system onto the picture and the 2D coordinates of the target pattern in the picture.
In the embodiment of the invention, the method for constructing the point cloud association error model comprises the following steps:
and calculating a covariance matrix of points of the same plane in the point cloud data, wherein the minimum eigenvalue of the covariance matrix is the matching error of the plane, and the sum of the matching errors of all the planes is used as a point cloud association error model.
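As a sketch of this error term: the smallest eigenvalue of the covariance of the points associated to one plane is zero exactly when the points are coplanar and grows with their out-of-plane spread. The function names below are illustrative, and numpy is assumed to be available:

```python
import numpy as np

def plane_match_error(points):
    """Smallest eigenvalue of the 3x3 covariance of one plane's points:
    zero for perfectly coplanar points, larger for out-of-plane spread."""
    cov = np.cov(np.asarray(points, dtype=float).T)  # rows of input are points
    return float(np.linalg.eigvalsh(cov)[0])         # eigvalsh sorts ascending

def point_cloud_association_error(planes):
    # The model is the sum of the per-plane matching errors.
    return sum(plane_match_error(pts) for pts in planes)
```

During optimization, the points are first transformed by the current pose estimates, so this error couples the per-frame poses with the plane associations.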
In the embodiment of the invention, the method for constructing the non-holonomic constraint error model comprises the following steps:
calculating the motion trail of the vehicle body according to the initial pose of each target and each frame of point cloud relative to the first frame of laser radar point cloud and the initial values of the conversion parameters between the radar coordinate system and the vehicle body coordinate system;
and projecting the vehicle body pose of any frame into the previous frame according to the motion trail, calculating the vehicle body velocity from the projected data of each frame under a uniform-motion assumption, and constructing the non-holonomic constraint error model from the assumption that the lateral velocity of the vehicle body is zero.
In the embodiment of the invention, the construction method for the installation priori error model comprises the following steps:
and constructing an installation priori error model according to the design position coordinate constraint of the radar coordinate system under the vehicle body coordinate system and the roll and pitch angle constraint.
In the embodiment of the invention, the method for constructing the point cloud projection error model comprises the following steps:
and projecting the suspected target point cloud clusters onto the picture according to design parameters of the camera and the laser radar, and constructing the point cloud projection error model from the sum of the errors between the target positions in the picture and the positions of the projected suspected target point cloud clusters.
As shown in fig. 5, the specific flow of the external parameter calibration method combining the laser radar and the camera comprises the following steps: the method comprises a data acquisition stage, a characteristic detection stage, an initialization stage and an external parameter joint optimization stage.
In the data acquisition stage, the raw data of the laser radar and the camera are collected, first with the vehicle stationary and then in motion; the data comprise the original pictures acquired by the camera and the environment point clouds acquired by laser radar scanning.
In the feature detection stage, the target patterns (AprilTag) in the pictures are detected from the collected raw data based on an existing image target detection algorithm to obtain the 4 corner coordinates and the pattern ID information of each pattern. Because the target size information is known, the relative pose of each target pattern and the camera is estimated by using the classical computer vision Perspective-n-Point (PnP) algorithm; the ground point cloud is segmented by a random sample consensus (RANSAC) method, and a plane is fitted to the ground point cloud data to obtain the parameter plane equation of the ground under the radar coordinate system.
On the basis of the screened-out ground point cloud, the non-ground point cloud is clustered, and point cloud clusters whose volume is too small or too large relative to the target size information are removed. The clustered point clouds are projected onto the picture by using the design parameters of the camera and the radar, the nearest target pattern to each projection is found among the target positions in the picture, and the corresponding ID is assigned to the point cloud cluster. Flatness screening is then performed on the clusters that received an ID, and the clusters with better flatness are retained as target point clouds.
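The ID-assignment step (nearest detected target pattern to each projected cluster) might look like the following sketch; the centroid-based matching and the pixel distance gate are illustrative assumptions:

```python
def assign_cluster_ids(cluster_uv, target_uv_by_id, max_dist=50.0):
    """Match each projected cluster centroid (u, v) to the nearest detected
    target pattern in the image and assign its ID (None if nothing is close
    enough, i.e. the cluster is probably not a target)."""
    ids = []
    for cu, cv in cluster_uv:
        best_id, best_d2 = None, max_dist * max_dist
        for tid, (tu, tv) in target_uv_by_id.items():
            d2 = (cu - tu) ** 2 + (cv - tv) ** 2
            if d2 < best_d2:
                best_id, best_d2 = tid, d2
        ids.append(best_id)
    return ids
```

Clusters that receive no ID are dropped before the flatness screening.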
In the initialization stage, based on the target information detected by the camera, the variables to be solved are initialized by an SFM (Structure from Motion, three-dimensional reconstruction from two-dimensional moving images) method: the poses of all data frames and the poses of all target patterns in the first frame radar coordinate system are calculated. Stationary and moving time periods are distinguished according to the SFM results, and the Roll and Pitch of the radar coordinate system relative to the vehicle body are calculated by using the stationary-period ground point cloud, specifically comprising:
(1) According to the same targets in two continuous frames of pictures associated by target ID, a series of 3D-to-2D matching relations is obtained by using the relative poses of the targets and the camera, the image coordinates of the target corner points, and the actual physical dimensions of the targets, and the inter-frame relative motion is calculated by the PnP method.
(2) The pose of the first frame is fixed, and the poses of all frames under the first frame camera coordinate system are obtained according to the relative motion relations.
(3) The poses of all targets under the first frame camera coordinate system are calculated according to the poses of all frames under the first frame camera coordinate system and the relative poses of the targets and the camera.
(4) The 3D coordinates of the target corner points under the first frame camera coordinate system are calculated by using the poses of the targets under the first frame camera coordinate system and the actual physical sizes of the targets. Each point is projected into all frames in which the target can be observed to construct a re-projection error, and all camera poses and target poses are optimized by minimizing that error. The re-projection error is the difference between the image coordinates of the calculated 3D coordinates projected onto the picture and the detected image coordinates.
(5) The poses of all data frames relative to the first frame radar coordinate system and the poses of all targets relative to the first frame radar coordinate system are calculated by using the optimized camera poses and target poses and the parameter design values of the camera and the laser radar.
(6) The data of the stationary time periods and the data of the motion time periods are distinguished according to the SFM results, and the roll and pitch angles of the radar coordinate system relative to the vehicle body coordinate system are estimated by using the multi-frame point cloud ground parameter equations detected in the stationary-period data.
In the external parameter joint optimization stage, a graph optimization problem is built that minimizes the re-projection error, the point cloud association error, the non-holonomic constraint error, the installation prior error, and the point cloud projection error, while simultaneously estimating the external parameters between the camera and the laser radar and the conversion parameters from the radar coordinate system to the vehicle body coordinate system. The error model is solved by the Levenberg-Marquardt (LM) method: starting from the initial relative pose of each target pattern and the camera in the pictures, the initial relative pose of each target and the laser radar in the point cloud data, and the initial values of the conversion parameters between the radar coordinate system and the vehicle body coordinate system, the variables to be solved are iteratively updated until the algorithm converges.
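The LM iteration itself can be illustrated on a scalar toy problem. The real optimizer works on the full graph of poses and extrinsics; the one-parameter exponential fit below is only a sketch of the damped normal-equation update, with all names chosen for illustration:

```python
from math import exp

def lm_fit_exponential(xs, ys, a0=0.0, lam=1e-3, iters=50):
    """Recover a in y = exp(a * x) by damped Gauss-Newton (scalar LM) steps."""
    a = a0
    for _ in range(iters):
        r = [exp(a * x) - y for x, y in zip(xs, ys)]   # residuals at current a
        J = [x * exp(a * x) for x in xs]               # Jacobian entries dr/da
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * ri for j, ri in zip(J, r))
        a -= Jtr / (JtJ + lam)                         # damped normal-equation step
    return a
```

In the full problem the scalar `a` becomes the stacked poses and extrinsics, `J` becomes a sparse Jacobian of all five error models, and the damping factor is adapted per iteration, but the update has the same shape.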
According to the laser radar and camera combined external parameter calibration method provided by the embodiment of the invention, there is no need to calibrate the camera, the laser radar, and the vehicle body external parameters step by step, no need to calibrate the vehicle parameters, and no dependence on other sensors or expensive equipment or systems; the camera and laser radar external parameters can be calibrated quickly, at low cost, and with high precision using only the raw data of the camera and the laser radar and low-cost square targets, and the operation is convenient.
The external parameter calibration device for the laser radar and camera combination provided by the invention is described below; the device described below and the external parameter calibration method described above may be referred to in correspondence with each other.
Fig. 6 is a schematic diagram of an external parameter calibration device for combining a laser radar with a camera according to an embodiment of the present invention, as shown in fig. 6, where the external parameter calibration device for combining a laser radar with a camera according to an embodiment of the present invention includes:
an acquisition module 601, configured to acquire a picture with a target pattern by capturing a target with a camera, and acquire point cloud data with target information with a laser radar;
the detection module 602 is configured to detect first target information in the picture, and detect second target information in the point cloud data according to the first target information and the point cloud data;
the initial estimating module 603 is configured to estimate an initial pose of each target and each frame of picture in the picture relative to the first frame of picture according to the first target information, calculate an initial pose of each target and each frame of point cloud relative to the first frame of laser radar point cloud according to the second target information and design values of the radar coordinate system and the camera coordinate system, and calculate initial values of conversion parameters of the radar coordinate system and the vehicle body coordinate system according to the laser radar installation design values and the observed ground point cloud;
The optimizing module 604 is configured to obtain, by using a nonlinear optimizing algorithm, a relative pose of each target in the point cloud data and the first frame of laser radar coordinate system, a pose of each frame of point cloud in the first frame of laser radar coordinate system, a calibration value of a conversion parameter of the radar coordinate system and the camera coordinate system, and a calibration value of a conversion parameter of the radar coordinate system and the vehicle body coordinate system.
According to the external parameter calibration device combining the laser radar and the camera provided by the embodiment of the invention, pictures with target patterns are obtained by shooting targets with a camera, and point cloud data with target information is obtained through a laser radar; first target information is detected in the pictures, and second target information is detected in the point cloud data according to the first target information and the point cloud data; the initial pose of each target and each frame of picture relative to the first frame of picture is estimated according to the first target information, the initial pose of each target and each frame of point cloud relative to the first frame of laser radar point cloud is calculated according to the second target information and the design values of the radar coordinate system and the camera coordinate system, and initial values of the conversion parameters between the radar coordinate system and the vehicle body coordinate system are calculated according to the laser radar installation design values and the observed ground point cloud; and the relative pose of each target with respect to the first frame laser radar coordinate system, the pose of each frame of point cloud under the first frame laser radar coordinate system, the calibrated conversion parameters between the radar coordinate system and the camera coordinate system, and the calibrated conversion parameters between the radar coordinate system and the vehicle body coordinate system are obtained through a nonlinear optimization algorithm. This avoids the inconsistent calibration results caused by calibrating the cameras and the laser radars separately and improves the calibration precision; only targets of known size are used in the calibration process, the method does not depend on other hardware equipment, the calibration cost is reduced, the operation is simple and convenient, the vehicle is not required to move at high speed or with high dynamics, and the requirements on the calibration site are further reduced.
Fig. 7 illustrates a physical schematic diagram of an electronic device. As shown in fig. 7, the electronic device may include: processor 710, communication interface 720, memory 730, and communication bus 740, wherein processor 710, communication interface 720, and memory 730 communicate with each other via communication bus 740. Processor 710 may invoke logic instructions in memory 730 to perform a method of external parameter calibration for a laser radar and camera combination, the method comprising: shooting targets with a camera to obtain pictures with target patterns, and obtaining point cloud data with target information through a laser radar; detecting first target information in the pictures, and detecting second target information in the point cloud data according to the first target information and the point cloud data; estimating the initial pose of each target and each frame of picture relative to the first frame of picture according to the first target information, calculating the initial pose of each target and each frame of point cloud relative to the first frame of laser radar point cloud according to the second target information and the design values of the radar coordinate system and the camera coordinate system, and calculating initial values of the conversion parameters between the radar coordinate system and the vehicle body coordinate system according to the laser radar installation design values and the observed ground point cloud; and obtaining, through a nonlinear optimization algorithm, the relative pose of each target with respect to the first frame laser radar coordinate system, the pose of each frame of point cloud under the first frame laser radar coordinate system, the calibrated conversion parameters between the radar coordinate system and the camera coordinate system, and the calibrated conversion parameters between the radar coordinate system and the vehicle body coordinate system.
Further, the logic instructions in the memory 730 described above may be implemented in the form of software functional units and may be stored in a computer readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention may be embodied, essentially or in the part contributing to the prior art or in part, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
In another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method for calibrating external parameters of a laser radar and camera combination provided by the above methods, the method comprising: shooting targets with a camera to obtain pictures with target patterns, and obtaining point cloud data with target information through a laser radar; detecting first target information in the pictures, and detecting second target information in the point cloud data according to the first target information and the point cloud data; estimating the initial pose of each target and each frame of picture relative to the first frame of picture according to the first target information, calculating the initial pose of each target and each frame of point cloud relative to the first frame of laser radar point cloud according to the second target information and the design values of the radar coordinate system and the camera coordinate system, and calculating initial values of the conversion parameters between the radar coordinate system and the vehicle body coordinate system according to the laser radar installation design values and the observed ground point cloud; and obtaining, through a nonlinear optimization algorithm, the relative pose of each target with respect to the first frame laser radar coordinate system, the pose of each frame of point cloud under the first frame laser radar coordinate system, the calibrated conversion parameters between the radar coordinate system and the camera coordinate system, and the calibrated conversion parameters between the radar coordinate system and the vehicle body coordinate system.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (19)

1. The external parameter calibration method combining the laser radar and the camera is characterized by comprising the following steps of:
shooting a target through a camera to obtain a picture with a target pattern, and obtaining point cloud data with target information through a laser radar;
detecting first target information in a picture, and detecting second target information in point cloud data according to the first target information and the point cloud data;
estimating the initial pose of each target and each frame of picture in the picture relative to the first frame of picture according to the first target information, calculating the initial pose of each target and each frame of point cloud relative to the first frame of laser radar point cloud according to the second target information and the design values of the radar coordinate system and the camera coordinate system, and calculating initial values of the conversion parameters between the radar coordinate system and the vehicle body coordinate system according to the laser radar installation design values and the observed ground point cloud;
and obtaining, through a nonlinear optimization algorithm, the relative pose of each target in the point cloud data with respect to the first frame laser radar coordinate system, the pose of each frame of point cloud under the first frame laser radar coordinate system, the conversion parameter calibration values of the radar coordinate system and the camera coordinate system, and the conversion parameter calibration values of the radar coordinate system and the vehicle body coordinate system.
2. The method of claim 1, wherein the first target information comprises: target corner 2D coordinates and target ID information.
3. The method for calibrating external parameters by combining a laser radar and a camera according to claim 2, wherein the estimating the initial pose of each target and each frame of picture in the picture relative to the first frame of picture according to the first target information comprises:
and estimating the initial pose of each target and each frame of picture relative to the first frame of picture by using a PnP algorithm according to the known target size information, the 2D coordinates of the target corner points, and the target ID information.
4. The method for calibrating external parameters of a combination of a lidar and a camera according to claim 3, wherein the detecting the second target information in the point cloud data according to the first target information and the point cloud data comprises:
Screening the point cloud data to obtain non-ground point cloud data;
clustering the non-ground point cloud data to obtain a plurality of point cloud clusters;
filtering the plurality of point cloud clusters according to the known target size information to obtain first suspected target point cloud clusters;
projecting the suspected target point cloud clusters onto the picture according to design parameters of the camera and the laser radar, and comparing the target positions in the picture with the positions of the projected suspected target point cloud clusters to obtain second suspected target point cloud clusters;
and performing flatness screening on the second suspected target point cloud clusters to obtain the second target information.
5. The method for calibrating external parameters by combining a laser radar and a camera according to claim 4, wherein the step of screening the point cloud data to obtain non-ground point cloud data comprises the steps of:
segmenting the ground point cloud data in the point cloud data by using a random sample consensus (RANSAC) method;
fitting a plane by using the ground point cloud data to obtain a ground point cloud parameter plane equation under a radar coordinate system;
and screening out non-ground point cloud data according to the ground point cloud parameter plane equation.
6. The method for calibrating external parameters of a combination of a laser radar and a camera according to claim 4, wherein calculating the initial pose of each target and each frame of point cloud relative to the first frame of laser radar point cloud according to the second target information and the design values of the radar coordinate system and the camera coordinate system comprises:
associating the same targets in two continuous frames of pictures, and obtaining the mapping relation from 3D coordinates to 2D coordinates according to the initial pose of each target and each frame of picture relative to the first frame of picture and the known target size information;
calculating the 3D coordinates of the target corner under a first frame of camera coordinate system according to the mapping relation from the 3D coordinates to the 2D coordinates and the 2D coordinates of the target corner;
calculating a difference value between a projection 2D coordinate of the 3D coordinate of the target corner point under the first frame camera coordinate system projected onto a picture and a 2D coordinate of a target pattern in the picture;
optimizing an initial pose of each target and each frame of picture relative to a first frame of picture by minimizing the difference;
and calculating the initial pose of each target and each frame of point cloud relative to the first frame of laser radar point cloud according to the optimized initial pose of each target and each frame of picture relative to the first frame of picture and the parameter design values of the camera and the laser radar.
7. The method for calibrating external parameters of a combination of a lidar and a camera according to claim 6, wherein the calculating the 3D coordinates of the target corner point in the first frame of the camera coordinate system according to the mapping relationship between the 3D coordinates and the 2D coordinates of the target corner point comprises:
Calculating the relative motion relation between frames according to the mapping relation;
fixing the camera pose of the first frame, and obtaining the camera poses of all frames under a first frame camera coordinate system according to the relative motion relation among frames;
calculating the pose of the target under the first frame camera coordinate system according to the camera poses of all frames under the first frame camera coordinate system and the relative pose of each target pattern and the camera;
and calculating the 3D coordinates of the target corner points under the first frame camera coordinate system according to the pose of the target under the first frame camera coordinate system and the known target size information.
8. The laser radar and camera combined external parameter calibration method according to claim 6, wherein calculating the initial values of the conversion parameters between the radar coordinate system and the vehicle body coordinate system according to the laser radar installation design values and the observed ground point cloud comprises:
screening out the point cloud data of the stationary time periods using an SFM (structure-from-motion) method;
and estimating the initial values of the conversion parameters between the radar coordinate system and the vehicle body coordinate system according to the plane equation fitted to the ground point cloud of the stationary time periods and the laser radar installation design values, wherein the conversion parameters between the radar coordinate system and the vehicle body coordinate system comprise the roll and pitch angles of the radar coordinate system relative to the vehicle body coordinate system.
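The roll/pitch initialization in claim 8 amounts to reading the tilt of the fitted ground-plane normal. A minimal sketch (illustrative only, not the patent's implementation; it assumes the Euler convention n = Rx(roll) · Ry(pitch) · [0, 0, 1], and the name `ground_roll_pitch` is hypothetical):

```python
import numpy as np

def ground_roll_pitch(normal):
    """Estimate roll and pitch of the lidar relative to the vehicle
    body frame from the fitted ground-plane unit normal expressed in
    the lidar frame. Convention assumed here:
        n = Rx(roll) @ Ry(pitch) @ [0, 0, 1]."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:              # make the normal point "up"
        n = -n
    pitch = np.arcsin(n[0])
    roll = np.arctan2(-n[1], n[2])
    return roll, pitch
```

Yaw and the horizontal translation are unobservable from a single ground plane, which is why the claim combines the plane fit with the installation design values.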
9. The laser radar and camera combined external parameter calibration method according to claim 1, wherein the nonlinear optimization algorithm comprises:
minimizing at least one of a re-projection error model, a point cloud association error model, an incomplete constraint error model, an installation prior error model, and a point cloud projection error model.
10. The laser radar and camera combined external parameter calibration method according to claim 1 or 9, further comprising:
and solving the nonlinear optimization algorithm with the Levenberg-Marquardt (LM) method to obtain the relative pose of each target with respect to the first-frame laser radar coordinate system, the pose of each frame of point cloud in the first-frame laser radar coordinate system, the calibrated values of the conversion parameters between the radar coordinate system and the camera coordinate system, and the calibrated values of the conversion parameters between the radar coordinate system and the vehicle body coordinate system.
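The LM method referred to in claim 10 iterates damped normal equations: a Gauss-Newton step when the cost decreases, falling back toward gradient descent when it does not. A minimal self-contained sketch on a toy residual (illustrative only; the real solver operates on the stacked calibration parameters, and `levenberg_marquardt` is a hypothetical name):

```python
import numpy as np

def levenberg_marquardt(residual_fn, x0, iters=50, lam=1e-3):
    """Tiny LM loop with a forward-difference Jacobian, showing the
    damped normal equations (J^T J + lam I) step = -J^T r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual_fn(x)
        eps = 1e-6
        J = np.stack([(residual_fn(x + eps * np.eye(len(x))[i]) - r) / eps
                      for i in range(len(x))], axis=1)
        H = J.T @ J
        g = J.T @ r
        step = np.linalg.solve(H + lam * np.eye(len(x)), -g)
        if np.sum(residual_fn(x + step) ** 2) < np.sum(r ** 2):
            x = x + step
            lam *= 0.5    # step accepted: trust Gauss-Newton more
        else:
            lam *= 5.0    # step rejected: increase damping
    return x
```

In the calibration itself, `residual_fn` would stack the re-projection, point cloud association, incomplete constraint, installation prior, and point cloud projection residuals of claim 9.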
11. The laser radar and camera combined external parameter calibration method according to claim 9, wherein constructing the minimized re-projection error model comprises:
constructing the re-projection error model according to the difference between the projected 2D coordinates, obtained by projecting the 3D coordinates of the target corner points in the first-frame camera coordinate system onto the picture, and the 2D coordinates of the target pattern in the picture.
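The re-projection residual of claim 11 is the standard pinhole projection difference. A minimal sketch, assuming a pinhole intrinsic matrix K and per-frame extrinsics (R, t) from the first-frame camera system (the function name `reprojection_error` is hypothetical):

```python
import numpy as np

def reprojection_error(K, R, t, corners_3d, corners_2d):
    """Residuals between target corners projected through the pinhole
    model and the detected 2D corners.
    corners_3d: Nx3 points in the first-frame camera coordinate system.
    corners_2d: Nx2 detected target-pattern coordinates in the picture."""
    p_cam = (R @ corners_3d.T).T + t      # into this frame's camera system
    uvw = (K @ p_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]         # perspective division
    return uv - corners_2d
```

Summing the squared rows of this residual over all frames and corners gives the re-projection error model that the optimizer minimizes.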
12. The laser radar and camera combined external parameter calibration method according to claim 9, wherein constructing the point cloud association error model comprises:
calculating a covariance matrix of the points belonging to the same plane in the point cloud data, wherein the minimum eigenvalue of the covariance matrix is the matching error of that plane, and the sum of the matching errors of all planes is taken as the point cloud association error model.
13. The laser radar and camera combined external parameter calibration method according to claim 9, wherein constructing the incomplete constraint error model comprises:
calculating the motion trajectory of the vehicle body according to the initial pose of each target and each frame of point cloud relative to the first frame of laser radar point cloud and the initial values of the conversion parameters between the radar coordinate system and the vehicle body coordinate system;
and projecting the vehicle body pose of any frame into the previous frame according to the motion trajectory, calculating the vehicle body velocity from each frame of projected data under a uniform motion assumption, and constructing the incomplete constraint error model from the assumption that the lateral velocity of the vehicle body is zero.
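The incomplete (nonholonomic) constraint of claim 13 can be sketched as follows: express each body translation in the previous body frame, divide by the frame interval to get a velocity, and penalize its lateral component, which is near zero for a vehicle that cannot slide sideways. This is an illustrative sketch, not the patent's implementation; it assumes the body y-axis points laterally, and `lateral_velocity_residuals` is a hypothetical name:

```python
import numpy as np

def lateral_velocity_residuals(poses, dt):
    """Incomplete-constraint residuals. poses is a list of (R, t)
    vehicle body poses in a common frame; each step's translation is
    projected into the previous body frame, converted to a velocity
    under a uniform-motion assumption, and its lateral (y) component
    is returned as the residual to be driven toward zero."""
    residuals = []
    for (R_prev, t_prev), (_, t_cur) in zip(poses[:-1], poses[1:]):
        delta = R_prev.T @ (t_cur - t_prev)   # step in previous body frame
        v = delta / dt                        # uniform-motion velocity
        residuals.append(v[1])                # lateral component
    return np.array(residuals)
```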
14. The laser radar and camera combined external parameter calibration method according to claim 9, wherein constructing the installation prior error model comprises:
constructing the installation prior error model according to the constraints on the design position coordinates of the radar coordinate system in the vehicle body coordinate system and on its roll and pitch angles.
15. The laser radar and camera combined external parameter calibration method according to claim 9, wherein constructing the point cloud projection error model comprises:
projecting the suspected target point cloud clusters onto the picture using the design parameters of the camera and the laser radar, and constructing the point cloud projection error model from the sum of the position errors between the target point positions in the picture and the projected suspected target point cloud clusters.
16. The laser radar and camera combined external parameter calibration method according to claim 1, wherein capturing a picture with a target pattern by the camera and collecting point cloud data with target information by the laser radar comprises:
acquiring the picture and the point cloud data during the time periods in which the targets are stationary, and acquiring the picture and the point cloud data during the time periods in which the targets are moving.
17. A laser radar and camera combined external parameter calibration device, comprising:
an acquisition module, configured to capture a picture with a target pattern by photographing targets with a camera, and to collect point cloud data with target information through a laser radar;
a detection module, configured to detect first target information in the picture, and to detect second target information in the point cloud data according to the first target information and the point cloud data;
an initial estimation module, configured to estimate the initial pose of each target and of each frame of picture relative to the first frame of picture according to the first target information, to calculate the initial pose of each target and of each frame of point cloud relative to the first frame of laser radar point cloud according to the second target information and the design values of the radar coordinate system and the camera coordinate system, and to calculate the initial values of the conversion parameters between the radar coordinate system and the vehicle body coordinate system according to the laser radar installation design values and the observed ground point cloud;
and an optimization module, configured to obtain, through a nonlinear optimization algorithm, the relative pose of each target with respect to the first-frame laser radar coordinate system, the pose of each frame of point cloud in the first-frame laser radar coordinate system, the calibrated values of the conversion parameters between the radar coordinate system and the camera coordinate system, and the calibrated values of the conversion parameters between the radar coordinate system and the vehicle body coordinate system.
18. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the laser radar and camera combined external parameter calibration method according to any one of claims 1 to 16.
19. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the laser radar and camera combined external parameter calibration method according to any one of claims 1 to 16.
CN202211644585.1A 2022-12-20 2022-12-20 Laser radar and camera combined external parameter calibration method and device Pending CN116051650A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211644585.1A CN116051650A (en) 2022-12-20 2022-12-20 Laser radar and camera combined external parameter calibration method and device

Publications (1)

Publication Number Publication Date
CN116051650A true CN116051650A (en) 2023-05-02

Family

ID=86121440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211644585.1A Pending CN116051650A (en) 2022-12-20 2022-12-20 Laser radar and camera combined external parameter calibration method and device

Country Status (1)

Country Link
CN (1) CN116051650A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315018A (en) * 2023-08-31 2023-12-29 上海理工大学 User plane pose detection method, equipment and medium based on improved PnP
CN117315018B (en) * 2023-08-31 2024-04-26 上海理工大学 User plane pose detection method, equipment and medium based on improved PnP
CN117115362A (en) * 2023-10-20 2023-11-24 成都量芯集成科技有限公司 Three-dimensional reconstruction method for indoor structured scene
CN117115362B (en) * 2023-10-20 2024-04-26 成都量芯集成科技有限公司 Three-dimensional reconstruction method for indoor structured scene

Similar Documents

Publication Publication Date Title
Wasenmüller et al. Comparison of kinect v1 and v2 depth images in terms of accuracy and precision
CN107025663B (en) Clutter scoring system and method for 3D point cloud matching in vision system
Weber et al. Automatic registration of unordered point clouds acquired by Kinect sensors using an overlap heuristic
CN116051650A (en) Laser radar and camera combined external parameter calibration method and device
US7616807B2 (en) System and method for using texture landmarks for improved markerless tracking in augmented reality applications
US9519968B2 (en) Calibrating visual sensors using homography operators
Chien et al. Visual odometry driven online calibration for monocular lidar-camera systems
Muñoz-Bañón et al. Targetless camera-LiDAR calibration in unstructured environments
CN112750168B (en) Calibration method and device for internal parameters of event camera, computer equipment and storage medium
KR102169309B1 (en) Information processing apparatus and method of controlling the same
CN113034612B (en) Calibration device, method and depth camera
US10991105B2 (en) Image processing device
Cvišić et al. Recalibrating the KITTI dataset camera setup for improved odometry accuracy
CN114217665A (en) Camera and laser radar time synchronization method, device and storage medium
EP3175312A2 (en) Video-assisted landing guidance system and method
Nagamatsu et al. Self-calibrated dense 3D sensor using multiple cross line-lasers based on light sectioning method and visual odometry
CN114119652A (en) Method and device for three-dimensional reconstruction and electronic equipment
Liu et al. Outdoor camera calibration method for a GPS & camera based surveillance system
KR100933304B1 (en) An object information estimator using the single camera, a method thereof, a multimedia device and a computer device including the estimator, and a computer-readable recording medium storing a program for performing the method.
Fuersattel et al. Geometric primitive refinement for structured light cameras
JP6670712B2 (en) Self-position estimation device, moving object and self-position estimation method
CN112750205B (en) Plane dynamic detection system and detection method
TWI730482B (en) Plane dynamic detection system and detection method
KR20240055102A (en) Characterizing and improving image processing
KR20240076344A (en) SLAM that can be implemented at low cost and uses sensor fusion technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination