CN114140534A - Combined calibration method for laser radar and camera - Google Patents

Combined calibration method for laser radar and camera

Info

Publication number: CN114140534A
Application number: CN202111350523.5A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Prior art keywords: camera, calibration, calibration plate, image, lidar
Inventors: 徐嘉骏, 刘吉成, 辛绍杰
Original and current assignee: SHANGHAI UNIVERSITY
Application filed by SHANGHAI UNIVERSITY
Priority to CN202111350523.5A


Classifications

    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration (under G06T 7/00, Image analysis)
    • G01S 7/497: Means for monitoring or calibrating (under G01S 7/48, details of systems according to group G01S 17/00)
    • G06T 2207/10024: Color image (image acquisition modality)
    • G06T 2207/10044: Radar image (image acquisition modality; remote sensing)
    • G06T 2207/30208: Marker matrix (subject of image: marker)


Abstract

The invention discloses a joint calibration method for a lidar and a camera, comprising the following steps: 1) a camera inspection module detects and adjusts the camera's exposure and focal-length parameters so that both fall within preset threshold ranges, ensuring the accuracy of the calibration result; 2) a still-frame detection module performs still-frame detection on the calibration plate while it is in motion and selects several qualifying camera image frames to form a still-frame set {S}, in preparation for the subsequent lidar and camera data acquisition; 3) a joint calibration module first calibrates the camera intrinsics and then, after acquiring the 3D space coordinates and 2D pixel coordinates of the calibration plate, obtains the final joint calibration result from the conversion relationship between the two coordinate types. The invention selects a square monochrome plate as the calibration plate and can automatically adjust the equipment parameters, select data frames, and compute the calibration result throughout, without manual intervention.

Description

A Joint Calibration Method for Lidar and Camera

Technical Field

The present invention relates to the field of multi-sensor fusion calibration, and in particular to a joint calibration method for a lidar and a camera.

Background

With the rapid development of computer and machine vision technology, research on mobile robots has increasingly become a focus of, and a challenge in, the field of robotics. A mobile robot must perceive its external environment through sensors while in motion; among these, cameras and lidars are the two most frequently used. A camera provides rich color information at high resolution but is highly sensitive to illumination, whereas a lidar is unaffected by lighting conditions and provides accurate geometric information but has low resolution and a low refresh rate. Data from a single sensor therefore often cannot give a mobile robot sufficiently clear and accurate environmental information, so a multi-sensor fusion scheme is required; the prerequisite for fusion is parameter calibration between the sensors.

Most existing joint calibration methods do not clearly describe the inspection and adjustment of the vision equipment before data acquisition, ignoring the significant influence of hardware parameters on the calibration result. During calibration, the calibration plate is usually placed statically at several predetermined positions for data acquisition, which is inefficient and yields results that generalize poorly. Some methods further require complex calibration rigs or calibration plates with special geometric shapes to assist the calibration, and their computation is complicated and hard to use.

Summary of the Invention

To address the above technical defects, the present invention proposes a joint calibration method for a lidar and a camera. Through a modular design, it provides a concrete procedure for camera parameter inspection, ensuring the accuracy of the image data; it replaces the traditional static-plate data acquisition with real-time data screening and acquisition while the plate is in motion; finally, it solves for the transformation parameter matrix between the lidar and the camera using the coordinate conversion formula, achieving joint calibration of the two. The whole calibration method is practical and efficient, achieves high calibration accuracy, and better matches real-world needs.

To achieve the above object, the present invention adopts the following technical solution:

A joint calibration method for a lidar and a camera, comprising the following steps:

1) Using a camera inspection module, detect and adjust the camera's exposure and focal-length parameters so that both fall within predetermined threshold ranges, ensuring the accuracy of the calibration result;

2) Using a still-frame detection module, perform still-frame detection on the calibration plate while it is in motion, and select several camera image frames that meet the requirements to form a still-frame set {S}, in preparation for the subsequent lidar and camera data acquisition;

3) Using a joint calibration module, first calibrate the camera intrinsics; then, after acquiring the 3D space coordinates and 2D pixel coordinates of the calibration plate, obtain the final joint calibration result from the conversion relationship between the two.

Further, step 1) comprises the following steps:

1-1) Use the camera to capture an initial image containing the calibration plate;

1-2) Perform corner detection on the acquired camera image to obtain each corner's position in the pixel coordinate system; connect the corner coordinates to form the polygonal outline of the calibration plate, thereby obtaining the region occupied by the plate in the initial camera image;

1-3) Compute the average pixel value RGB within the calibration-plate region of the initial image and compare it with a preset pixel threshold TH. If the average is within the threshold range, go to step 1-4); otherwise, send a signal to the camera to automatically adjust its exposure parameter and re-acquire the image;
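The exposure check of step 1-3) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the plate region is assumed to be given as a boolean mask from the corner detection of step 1-2), and the example values TH = 128 with a tolerance of ±5 are taken from the detailed embodiment below.

```python
import numpy as np

def exposure_ok(image, plate_mask, th=128, tol=5):
    """Return (ok, mean): whether the mean pixel value inside the
    calibration-plate region lies within th +/- tol (pixels in [0, 255])."""
    mean_val = float(image[plate_mask].mean())
    return abs(mean_val - th) <= tol, mean_val

# Synthetic example: a mid-gray plate on a black background.
img = np.zeros((100, 100), dtype=np.uint8)
mask = np.zeros((100, 100), dtype=bool)
mask[20:80, 20:80] = True
img[mask] = 130
ok, mean_val = exposure_ok(img, mask)   # plate mean 130 is within 128 +/- 5
```

If `ok` is false, the controller would push a new exposure value to the camera and re-capture, looping until the region mean falls inside the band.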

1-4) Capture a new image containing the calibration plate with the exposure-adjusted camera;

1-5) Perform corner detection again on the acquired camera image to obtain the region of the calibration plate in the image;

1-6) Score the sharpness of the calibration-plate region of the camera image with the Tenengrad gradient method, which computes gradient values in the horizontal and vertical directions with the Sobel operator and uses the processed average gray value as the measure of image sharpness. If the average gray value lies within the preset threshold range [T1, T2], the camera is well focused and the exposure and sharpness inspection and adjustment are complete; otherwise, send a signal to the camera to adjust its focal length and re-acquire the image.
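A numpy-only sketch of the Tenengrad score from step 1-6). The Sobel kernels are the standard 3x3 ones; the pass band [T1, T2] is left as parameters since the patent gives no concrete values for it.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def tenengrad(gray):
    """Mean Sobel gradient magnitude over a float grayscale region."""
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):                 # 3x3 correlation via shifted slices
        for j in range(3):
            patch = gray[i:h - 2 + i, j:w - 2 + j]
            gx += SOBEL_X[i, j] * patch
            gy += SOBEL_Y[i, j] * patch
    return float(np.sqrt(gx ** 2 + gy ** 2).mean())

def focus_ok(gray, t1, t2):
    """Step 1-6): the camera counts as well focused when the score
    falls inside the preset band [t1, t2]."""
    return t1 <= tenengrad(gray) <= t2

# A sharp step edge scores high; a featureless region scores zero.
sharp = np.zeros((32, 32)); sharp[:, 16:] = 255.0
flat = np.full((32, 32), 128.0)
```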

Further, step 2) comprises the following steps:

2-1) Fix the lidar and the camera on a common base to form the sensor module to be calibrated; ensure that the base is fixed and that the relative positions of the three (lidar, camera, and base) do not change, and that the lidar's detection range and the camera's field of view share an overlap of more than 60%;

2-2) Select a square monochrome plate as the calibration plate and move it within the overlap of the lidar's and the camera's fields of view; the motion includes both translation and rotation;

2-3) Define the calibration-plate coordinate system, denoted O_B, which also serves as the world coordinate system O_W. Select the upper-left corner of the plate as the origin, take the two perpendicular edges as the X and Y axis directions, and determine the Z axis by the right-hand rule. From the actual dimensions of the square calibration plate, compute the coordinates P_B (equivalently P_W) of each plate corner in O_B;

2-4) From the coordinates P_B and the corner coordinates P_px in the pixel coordinate system, use the EPnP algorithm to obtain the transformation matrix T_B^C from the calibration-plate coordinate system to the camera coordinate system; process every image frame captured by the camera during the motion to obtain the list of plate-to-camera transformation matrices {T_Bi^C};
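Step 2-4) can be illustrated with a small pose solver. The patent specifies EPnP; the sketch below substitutes a homography-based planar pose solve (valid because the plate corners are coplanar, with Z = 0 in the plate frame) so that it needs only numpy, and the intrinsic matrix K is a made-up example, not a calibrated one.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])            # hypothetical intrinsics
# Square plate, 0.5 m side; plate frame origin at the upper-left corner.
corners_B = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5], [0.0, 0.5]])

def plate_pose(corners_px):
    """Recover (R, t), the plate-to-camera transform T_B^C of step 2-4).
    The patent uses EPnP; here a homography-based planar pose solve
    stands in so the sketch needs only numpy."""
    A = []
    for (X, Y), (u, v) in zip(corners_B, corners_px):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    H = np.linalg.svd(np.asarray(A))[2][-1].reshape(3, 3)  # DLT homography
    M = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(M[:, 0])
    r1, r2, t = lam * M[:, 0], lam * M[:, 1], lam * M[:, 2]
    if t[2] < 0:                            # choose the plate-in-front solution
        r1, r2, t = -r1, -r2, -t
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    return R, t

# Synthetic check: plate facing the camera at t = (0.1, -0.2, 2.0).
t_true = np.array([0.1, -0.2, 2.0])
pts_c = np.hstack([corners_B, np.zeros((4, 1))]) + t_true
px = (K @ pts_c.T).T
px = px[:, :2] / px[:, 2:3]
R_est, t_est = plate_pose(px)
```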

2-5) For any camera image frame, determine how much the calibration plate moved between the two adjacent instants. First select an arbitrary point P_Bi in the calibration-plate coordinate system of the i-th camera frame and use the equation

P'_Bj = (T_Bj^C)^(-1) T_Bi^C P_Bi

to obtain the projected position P'_Bj of the selected point P_Bi in the j-th camera frame. If the plate underwent no relative motion between the two instants, then T_i = T_j and hence P'_Bj = P_Bi; otherwise, the difference

d_ij = || P'_Bj - P_Bj ||

between P'_Bj and its actual position P_Bj in the j-th camera frame is the displacement of the plate between the two instants;

2-6) Compare adjacent data frames and select those whose motion change meets the requirement as still frames. For a calibration plate with corner list C = {X_n}, the overall displacement between frames i and j is determined by m(C, i, j) = max_n d(X_ni, X_nj). Traverse every camera frame according to m(C, i-1, i) < x && m(C, i, i+1) < x: a frame is selected as a still frame if and only if its motion relative to both adjacent frames is below the threshold displacement. The selected still frames must also be spread across the overlapping region of the lidar's and camera's fields of view, i.e. m(C, s_i, s_j) > y for s_i, s_j ∈ {S}, where x = 1 cm, y = 300 cm, and S is the set of already-selected still frames.
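The selection rule of step 2-6) in compact form. This is a sketch under the assumption that each frame's plate corners are already expressed in a common frame; the thresholds x = 1 cm and y = 300 cm come from the text (written in metres here).

```python
import numpy as np

def frame_motion(corners_i, corners_j):
    """m(C, i, j): the largest displacement among the plate's corners
    between frames i and j (step 2-6))."""
    return np.linalg.norm(corners_i - corners_j, axis=1).max()

def select_still_frames(corner_track, x=0.01, y=3.0):
    """Pick frames whose motion w.r.t. both neighbours is below x (1 cm),
    keeping only frames more than y (300 cm) from already-selected ones
    so the set spreads over the shared field of view."""
    selected = []
    for i in range(1, len(corner_track) - 1):
        still = (frame_motion(corner_track[i - 1], corner_track[i]) < x and
                 frame_motion(corner_track[i], corner_track[i + 1]) < x)
        spread = all(frame_motion(corner_track[s], corner_track[i]) > y
                     for s in selected)
        if still and spread:
            selected.append(i)
    return selected

# Toy trajectory: the plate rests, jumps 5 m in two moves, rests again.
base = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0],
                 [0.5, 0.5, 0.0], [0.0, 0.5, 0.0]])
track = [base.copy() for _ in range(3)]
track.append(base + np.array([2.5, 0.0, 0.0]))
track += [base + np.array([5.0, 0.0, 0.0]) for _ in range(3)]
selected = select_still_frames(track)      # frames 1 and 5 qualify
```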

Further, step 3) comprises the following steps:

3-1) Calibrate the camera intrinsics to obtain

K = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]

where the matrix K contains the camera's intrinsic parameters, f_x and f_y denote the focal lengths, and c_x and c_y denote the principal-point offsets;

3-2) While selecting each still frame, acquire that frame's lidar 3D point cloud data and camera image data. If the acquired frame data describes the calibration-plate information clearly and completely, add the frame to the still-frame set {S}; otherwise, skip it, select the next still frame, and acquire data again;

3-3) Process the point cloud acquired by the lidar to obtain the spatial coordinates of the calibration plate's corners. First fit a plane to the laser point cloud with the RANSAC algorithm to extract the plane of the calibration plate in the lidar coordinate system; then project all points onto the fitted plane and extract the boundary point set of the point cloud by grid partitioning; then fit lines to the boundary points separately with the least-squares method to obtain the outline of the plate's outer edges and their line equations. The pairwise intersections of the four outer-edge lines form the four corners of the calibration plate; solving the line equations gives each corner's coordinates, which serve as the plate's 3D space coordinates at that position;
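A compact sketch of the geometry in step 3-3): RANSAC plane fitting, least-squares edge lines, and corner intersection. The grid-based boundary extraction is omitted, the thresholds and point counts are illustrative, and the line fit assumes no edge is vertical (the y = a + b*x form the text uses).

```python
import numpy as np

def ransac_plane(points, thresh=0.02, iters=200, seed=0):
    """Step 3-3): pick 3 random points, form the plane through them,
    count points closer than `thresh`, and keep the best plane.
    Returns the unit normal n and offset D (n . p + D = 0), plus a mask."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                        # degenerate (collinear) sample
        n = n / norm
        dist = np.abs((points - p0) @ n)    # point-to-plane distances
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best, best_inliers = (n, -n @ p0), inliers
    return best[0], best[1], best_inliers

def fit_line(pts):
    """Least-squares fit of y = a + b*x to 2-D boundary points."""
    b, a = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return a, b

def intersect(l1, l2):
    """A plate corner is the intersection of two fitted edge lines."""
    (a1, b1), (a2, b2) = l1, l2
    x = (a2 - a1) / (b1 - b2)
    return np.array([x, a1 + b1 * x])

# Synthetic plate cloud on the plane z = 1, plus off-plane outliers.
rng = np.random.default_rng(1)
plate = np.column_stack([rng.uniform(0.0, 0.5, (200, 2)), np.full(200, 1.0)])
outliers = np.column_stack([rng.uniform(-1, 1, (20, 2)),
                            rng.uniform(-1.0, 0.5, 20)])
n, d, inliers = ransac_plane(np.vstack([plate, outliers]))
```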

3-4) Perform corner detection on the image of each still frame. First convert the acquired camera image to grayscale, then locate the strong corners in the image with the Shi-Tomasi algorithm, and extract their sub-pixel coordinates as the 2D pixel coordinates of the plate's corners at that position;
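A numpy-only sketch of the Shi-Tomasi (minimum-eigenvalue) response used in step 3-4). A real pipeline would typically use OpenCV's `goodFeaturesToTrack` plus `cornerSubPix` for the sub-pixel step, which is omitted here; the window size and the synthetic image are illustrative.

```python
import numpy as np

def shi_tomasi_response(gray, win=3):
    """Smaller eigenvalue of the gradient structure tensor summed over a
    (2*win+1)^2 window: large at corners, near zero on edges and flat areas."""
    gy, gx = np.gradient(gray.astype(float))
    Ixx, Iyy, Ixy = gx * gx, gy * gy, gx * gy

    def boxsum(a):
        h, w = a.shape
        k = 2 * win + 1
        out = np.zeros((h - k + 1, w - k + 1))
        for i in range(k):                 # box filter via shifted slices
            for j in range(k):
                out += a[i:h - k + 1 + i, j:w - k + 1 + j]
        return out

    sxx, syy, sxy = boxsum(Ixx), boxsum(Iyy), boxsum(Ixy)
    tr = (sxx + syy) / 2.0
    root = np.sqrt(((sxx - syy) / 2.0) ** 2 + sxy ** 2)
    return tr - root                       # lambda_min per window position

# Synthetic check: a bright square's corner responds, its edge does not.
img = np.zeros((64, 64))
img[16:48, 16:48] = 255.0
resp = shi_tomasi_response(img)            # resp[r, c] maps to pixel (r+3, c+3)
corner_score = resp[13, 13]                # pixel (16, 16): a square corner
edge_score = resp[29, 13]                  # pixel (32, 16): edge midpoint
```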

3-5) From the conversion formula between the world coordinate system and the camera coordinate system:

s [u, v, 1]^T = K [R | t] [X_W, Y_W, Z_W, 1]^T

where [X_W, Y_W, Z_W, 1]^T denotes the 3D space coordinates of a calibration-plate corner obtained in step 3-3), [u, v, 1]^T denotes that corner's 2D pixel coordinates in the pixel coordinate system obtained in step 3-4), [R | t] denotes the camera's extrinsic matrix, and K denotes the camera intrinsic matrix obtained in step 3-1). From these correspondences, the EPnP algorithm solves for the rotation matrix R and translation vector t between the lidar coordinate system and the camera coordinate system, thereby achieving joint calibration of the lidar and the camera.
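The conversion in step 3-5) can be checked numerically. The K, R, and t below are made-up example values (the real ones come from step 3-1) and the EPnP solve); the sketch simply applies s[u, v, 1]^T = K [R | t] [X_W, Y_W, Z_W, 1]^T to a lidar-frame point.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])            # hypothetical intrinsics
# Hypothetical extrinsics: lidar frame rotated 90 deg about Z, then shifted.
R = np.array([[0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.05, -0.10, 0.20])

def lidar_to_pixel(p_w):
    """Apply s*[u, v, 1]^T = K [R|t] [X_W, Y_W, Z_W, 1]^T and dehomogenize."""
    p_c = R @ p_w + t                      # lidar frame -> camera frame
    uvw = K @ p_c
    return uvw[:2] / uvw[2]

p = np.array([0.3, -0.2, 2.0])             # a plate corner in the lidar frame
uv = lidar_to_pixel(p)
```

In practice this projection is also how the calibration is visually verified: lidar points reprojected with the solved (R, t) should land on the plate in the image.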

The above calibration plate is a square monochrome plate with a side length of 0.5 m and a thickness of 5 mm, made of a diffusely reflective material, which facilitates lidar detection.

Compared with existing calibration methods, the advantages and beneficial effects of the present invention are:

1. The present invention adopts a modular design, dividing the whole calibration process into three steps: hardware parameter tuning, dynamic data acquisition, and joint sensor calibration. On the basis of ensuring the fused information is correct, corresponding modules can be added or removed as needed, and the method can handle the joint calibration of at least one lidar and one camera.

2. The present invention provides a concrete procedure for camera parameter inspection: before formal calibration, the camera's exposure and focal-length parameters are automatically detected and tuned, ensuring the completeness and accuracy of the image data.

3. Instead of placing the calibration plate statically at predetermined positions for data acquisition, the present invention innovatively performs real-time data screening and acquisition while the plate is in motion, selecting qualifying data frames by comparing the displacement between adjacent frames; the whole process runs automatically without manual intervention.

4. The present invention selects a square monochrome plate as the calibration plate, with the four corners of the square as the points to be detected; the plate is made of a diffusely reflective material, which gives better detection results for the lidar.

The proposed calibration method does not require manually identifying and matching corresponding detection points between the lidar and the camera; instead, it uses the conversion relationship between the two coordinate systems and solves by a data-fitting optimization, ensuring a rigorous and accurate calibration process and avoiding randomness in the results.

Brief Description of the Drawings

Fig. 1 is a flowchart of the calibration method of the present invention.

Fig. 2 is a flowchart of the camera inspection module in the calibration method of the present invention.

Fig. 3 is a flowchart of the still-frame detection steps in the calibration method of the present invention.

Fig. 4 is a schematic structural diagram of the calibration method of the present invention.

Fig. 5 is a flowchart of the joint lidar-camera calibration steps in the calibration method of the present invention.

Fig. 6 is a schematic diagram of the calibration-plate plane fitted from the plate's laser point cloud.

Fig. 7 is a schematic diagram of the fitted outer-edge outline and corners of the calibration plate.

Detailed Description

The present invention is described in detail below with reference to the accompanying drawings and specific embodiments. The following embodiments serve only to illustrate the technical solution of the present invention more clearly and do not limit its scope of protection.

As shown in Fig. 1, a joint calibration method for a lidar and a camera comprises the following steps:

1) Using a camera inspection module, detect and adjust the camera's exposure and focal-length parameters so that both fall within predetermined threshold ranges, ensuring the accuracy of the calibration result.

As shown in Fig. 2, step 1) comprises the following steps:

1-1) Use the camera to capture an initial image containing the calibration plate;

1-2) Perform corner detection on the acquired camera image to obtain each corner's position in the pixel coordinate system; connect the corner coordinates to form the polygonal outline of the calibration plate, thereby obtaining the region occupied by the plate in the initial camera image;

1-3) Compute the average pixel value (RGB) within the calibration-plate region of the initial image and compare it with a preset pixel threshold TH (for example, TH = 128 or close to 128; each pixel takes values in [0, 255]). If the pixel average is within the threshold range (RGB = TH ± 5), go to step 1-4); otherwise, send a signal to the camera to automatically adjust its exposure parameter and re-acquire the image;

1-4) Capture a new image containing the calibration plate with the exposure-adjusted camera;

1-5) Perform corner detection again on the acquired camera image to obtain the region of the calibration plate in the image;

1-6) Score the sharpness of the calibration-plate region of the camera image with the Tenengrad gradient method, which computes gradient values in the horizontal and vertical directions with the Sobel operator and uses the processed average gray value as the measure of image sharpness. If the average gray value lies within the preset threshold range [T1, T2], the camera is well focused and the exposure and sharpness inspection and adjustment are complete; otherwise, send a signal to the camera to adjust its focal length and re-acquire the image.

2) Using the still-frame detection module, perform still-frame detection on the calibration plate while it is in motion, and select several camera image frames that meet the requirements to form a still-frame set {S}, in preparation for the subsequent lidar and camera data acquisition.

As shown in Fig. 3, step 2) comprises the following steps:

2-1) Fix the lidar and the camera on a common base to form the sensor module to be calibrated; ensure that the base is fixed and that the relative positions of the three (lidar, camera, and base) do not change, and that the lidar's detection range and the camera's field of view share an overlap of more than 60%;

2-2) Select a square monochrome plate as the calibration plate and move it within the overlap of the lidar's and the camera's fields of view; the motion includes both translation and rotation. The calibration plate is a square with a fixed side length of 0.5 m and a thickness of 0.5 mm, made of a diffusely reflective material, which aids the lidar's reflection; its position and correspondence relative to the sensor module are shown in Fig. 4.

2-3) Define the calibration-plate coordinate system, denoted O_B (which also serves as the world coordinate system O_W). Select the upper-left corner of the plate as the origin, take the two perpendicular edges as the X and Y axis directions, and determine the Z axis by the right-hand rule. From the actual dimensions of the square calibration plate, compute the coordinates P_B (also P_W) of each plate corner in O_B;

2-4) From the coordinates P_B and the corner coordinates P_px in the pixel coordinate system, use the EPnP algorithm to obtain the transformation matrix T_B^C from the calibration-plate coordinate system to the camera coordinate system; process every image frame captured by the camera during the motion to obtain the list of plate-to-camera transformation matrices {T_Bi^C};

2-5) For any camera image frame, determine how much the calibration plate moved between the two adjacent instants. First select an arbitrary point P_Bi in the calibration-plate coordinate system of the i-th camera frame and use the equation

P'_Bj = (T_Bj^C)^(-1) T_Bi^C P_Bi

to obtain the projected position P'_Bj of the selected point P_Bi in the j-th camera frame. If the plate underwent no relative motion between the two instants, then T_i = T_j and hence P'_Bj = P_Bi; otherwise, the difference

d_ij = || P'_Bj - P_Bj ||

between P'_Bj and its actual position P_Bj in the j-th camera frame is the displacement of the plate between the two instants;

2-6) Compare adjacent data frames and select those whose motion change meets the requirement as still frames. For a calibration plate with corner list C = {X_n}, the overall displacement between frames i and j is determined by m(C, i, j) = max_n d(X_ni, X_nj). Traverse every camera frame according to m(C, i-1, i) < x && m(C, i, i+1) < x: a frame is selected as a still frame if and only if its motion relative to both adjacent frames is below the threshold displacement. The selected still frames must also be spread across the overlapping region of the lidar's and camera's fields of view, i.e. m(C, s_i, s_j) > y for s_i, s_j ∈ {S}, where x = 1 cm, y = 300 cm, and S is the set of already-selected still frames.

3) Using the joint calibration module, first calibrate the camera intrinsics; then, after acquiring the 3D space coordinates and 2D pixel coordinates of the calibration plate, obtain the final joint calibration result from the conversion relationship between the two.

As shown in Fig. 5, step 3) comprises the following steps:

3-1) Calibrate the intrinsic parameters of the camera to obtain

    K = | fx  0  cx |
        |  0  fy cy |
        |  0  0   1 |

where the matrix K holds the camera intrinsics: fx, fy are the focal lengths (the distance from the optical centre of the lens to the imaging plane) and cx, cy are the principal point offsets (the actual intersection of the camera's principal axis with the image), all expressed in pixels;
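For illustration, building K and projecting a camera-frame point to pixel coordinates (the numeric values below are invented for the example, not calibration results):

```python
import numpy as np

fx, fy = 800.0, 800.0   # focal lengths in pixels (example values)
cx, cy = 320.0, 240.0   # principal point in pixels (example values)

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(K, point_cam):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    uvw = K @ point_cam
    return uvw[:2] / uvw[2]   # perspective division by depth
```

A point on the optical axis lands on the principal point (cx, cy); points off-axis are shifted in proportion to focal length over depth.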

3-2) When a static frame is selected, simultaneously collect the lidar 3D point cloud data and the camera image data of that frame. If the collected data describe the calibration plate clearly and completely, add the frame to the static frame set {S}; otherwise, skip it, select the next static frame, and collect data again;

3-3) Process the point cloud data collected by the lidar to obtain the spatial coordinates of the calibration plate corners. First, fit a plane to the laser point cloud with the RANSAC algorithm and extract the plane of the calibration plate in the lidar coordinate system. With the space plane equation written as Ax + By + Cz + D = 0, the plane normal vector is n = (A, B, C). In each iteration, three points are randomly drawn from the point cloud to define a plane, and the distance from every other point (x0, y0, z0) to that plane is computed as

    d = |A·x0 + B·y0 + C·z0 + D| / sqrt(A² + B² + C²)

If the distance d is smaller than a threshold T, the point is considered to lie on the plane and is called an inlier. After multiple iterations, the plane with the most inliers is kept as the best-fit plane, as shown in Figure 6. All points are then projected onto the fitted plane, and the boundary point set of the point cloud is extracted by grid partitioning. Straight lines are then fitted to the boundary points by least squares: with the line written as y = a + bx, the n boundary points (xi, yi) yield the normal equations

    Σyi   = a·n   + b·Σxi
    Σxiyi = a·Σxi + b·Σxi²

Solving for a and b gives the outer edge lines of the calibration plate and their equations. The pairwise intersections of the four outer edge lines form the four corners of the calibration plate; solving the line equations gives the coordinates of each corner, which serve as the 3D space coordinates of the calibration plate at that position. The outer edge contour and corners of the calibration plate are shown in Figure 7.
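The plane-fitting part of step 3-3) can be sketched as follows (a simplified numpy illustration, not the patent's exact implementation; the inlier threshold T and the iteration count are assumptions):

```python
import numpy as np

def point_plane_distance(points, plane):
    """d = |Ax0 + By0 + Cz0 + D| / sqrt(A^2 + B^2 + C^2) for each point."""
    A, B, C, D = plane
    return np.abs(points @ np.array([A, B, C]) + D) / np.sqrt(A*A + B*B + C*C)

def ransac_plane(points, T=0.02, iters=200, rng=None):
    """Fit Ax + By + Cz + D = 0 by keeping the 3-point plane with the most inliers."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_plane, best_inliers = None, -1
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)      # plane normal (A, B, C)
        if np.linalg.norm(n) < 1e-9:        # degenerate (collinear) sample
            continue
        plane = (*n, -n @ p1)               # D = -n . p1
        inliers = np.sum(point_plane_distance(points, plane) < T)
        if inliers > best_inliers:
            best_plane, best_inliers = plane, inliers
    return np.array(best_plane), best_inliers
```

On a cloud containing 100 points on the z = 0 plane plus a handful of elevated outliers, the recovered normal is (0, 0, ±1) and only the planar points count as inliers.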

3-4) Perform corner detection on the image associated with each static frame. First convert the captured camera image to grayscale, then locate the strong corners in the image with the Shi-Tomasi algorithm, and finally extract their sub-pixel coordinates, which serve as the 2D pixel coordinates of the calibration plate corners at that position;
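The Shi-Tomasi criterion keeps pixels where the smaller eigenvalue of the local structure tensor is large; in practice one would use a library detector (e.g. OpenCV's goodFeaturesToTrack followed by cornerSubPix), but the response itself can be sketched directly (the window size and gradient scheme below are assumptions):

```python
import numpy as np

def shi_tomasi_response(gray, win=3):
    """Min-eigenvalue corner response of the structure tensor over a win x win window."""
    gray = gray.astype(float)
    Ix = np.gradient(gray, axis=1)          # horizontal image gradient
    Iy = np.gradient(gray, axis=0)          # vertical image gradient
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    resp = np.zeros_like(gray)
    r = win // 2
    for y in range(r, gray.shape[0] - r):
        for x in range(r, gray.shape[1] - r):
            sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
            a, b, c = Ixx[sl].sum(), Iyy[sl].sum(), Ixy[sl].sum()
            # eigenvalues of [[a, c], [c, b]]; Shi-Tomasi keeps the smaller one
            resp[y, x] = 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4 * c * c))
    return resp
```

Edges score near zero (one dominant gradient direction) while true corners, where both gradient directions are present, produce the strongest response.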

3-5) From the transformation between the world coordinate system and the camera coordinate system:

    s · [u, v, 1]^T = K · [R t] · [Xw, Yw, Zw, 1]^T

where s is the projective scale factor (the depth Zc),

(Xw, Yw, Zw) are the 3D space coordinates of the calibration plate corners obtained in step 3-3), (u, v) are the 2D pixel coordinates of the calibration plate corners obtained in step 3-4), [R t] is the extrinsic matrix of the camera, and K is the camera intrinsic matrix obtained in step 3-1). From these correspondences, the rotation matrix R and translation vector t between the lidar coordinate system and the camera coordinate system are solved with the EPnP algorithm, completing the joint calibration of the lidar and the camera. In this embodiment a square monochrome plate is used as the calibration plate, and device parameter adjustment, data frame selection, and computation of the calibration result are all performed automatically, without manual intervention.
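Step 3-5) is a standard PnP problem. As a self-contained stand-in for EPnP (which is more involved), the sketch below recovers R and t by a direct linear transform (DLT) from the 3D-2D correspondences and the known K; this is an illustrative simplification, not the patent's algorithm:

```python
import numpy as np

def dlt_pose(pts3d, pts2d, K):
    """Recover R, t by solving for the projection matrix P = K [R t] linearly."""
    n = len(pts3d)
    A = np.zeros((2 * n, 12))
    for i, ((X, Y, Z), (u, v)) in enumerate(zip(pts3d, pts2d)):
        A[2 * i]     = [X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u]
        A[2 * i + 1] = [0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v]
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)                  # null vector = projection matrix
    M = np.linalg.inv(K) @ P                  # proportional to [R t]
    scale = np.cbrt(np.linalg.det(M[:, :3]))  # fix scale and sign (det R = 1)
    M /= scale
    U, _, Vt2 = np.linalg.svd(M[:, :3])       # project onto the rotation group
    R = U @ Vt2
    t = M[:, 3]
    return R, t
```

With noiseless correspondences and at least six non-coplanar points this recovers the exact pose; with real data one would refine the result, e.g. by minimizing the reprojection error.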

The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to these embodiments; various changes may also be made in accordance with the purpose of the invention. Any change, modification, substitution, combination, or simplification made within the spirit and principle of the technical solution of the present invention shall be regarded as an equivalent substitution and, as long as it conforms to the purpose of the invention and does not depart from its technical principles and inventive concept, falls within the protection scope of the present invention.

Claims (5)

1. A joint calibration method for a lidar and a camera, comprising the following steps:

1) through a camera detection module, detecting and adjusting the exposure and focal length parameters of the camera so that both fall within predetermined threshold ranges, to ensure the accuracy of the calibration result;

2) through a static frame detection module, performing static frame detection on the calibration plate while it is in motion, and selecting a number of camera frames that satisfy the requirements to form a static frame set {S}, in preparation for the subsequent lidar and camera data collection;

3) through a joint calibration module, first calibrating the camera intrinsics and then, after collecting the 3D space coordinates and 2D pixel coordinates of the calibration plate, obtaining the final joint calibration result from the transformation between the two.

2. The joint calibration method for a lidar and a camera according to claim 1, wherein step 1) comprises the following steps:

1-1) taking an initial image containing the calibration plate with the camera;

1-2) performing corner detection on the acquired camera image to obtain the position of each corner in the pixel coordinate system, and connecting the corner coordinates to form the polygonal pattern of the calibration plate, thereby obtaining the region occupied by the calibration plate in the initial camera image;

1-3) computing the average pixel value RGB within the region of the calibration plate in the initial image and comparing it with a preset pixel threshold TH; if the average value is within the threshold range, proceeding to step 1-4); otherwise, sending a signal to the camera to automatically adjust its exposure parameter and re-acquiring the image;

1-4) taking a new image containing the calibration plate with the exposure-adjusted camera;

1-5) performing corner detection on the acquired camera image again to obtain the region information of the calibration plate in the camera image;

1-6) scoring the sharpness of the calibration plate region in the camera image with the Tenengrad gradient method, which computes gradient values in the horizontal and vertical directions with the Sobel operator and uses the processed average gray value as the sharpness measure; if the average gray value is within the preset threshold range [T1, T2], the camera is well focused and the detection and adjustment of camera exposure and sharpness are complete; otherwise, sending a signal to the camera to adjust its focal length and re-acquiring the image.

3. The joint calibration method for a lidar and a camera according to claim 1, wherein step 2) comprises the following steps:

2-1) fixing the lidar and the camera on the same base to form the sensor module to be calibrated, ensuring that the base is fixed, that the relative positions of the three (lidar, camera, and base) do not change, and that the detection range of the lidar and the field of view of the camera overlap by more than 60%;

2-2) selecting a square monochrome plate as the calibration plate and moving it within the overlapping region of the lidar and camera fields of view, the motion including both translation and rotation;

2-3) defining the calibration plate coordinate system O_B, which is also the world coordinate system O_W: the upper-left corner of the calibration plate is chosen as the origin, the two perpendicular edges point along the X/Y axes, and the Z axis follows the right-hand rule; from the actual size of the square calibration plate, computing the coordinates P_B of each corner in O_B, which are also P_W;

2-4) from the coordinates P_B and the corner coordinates P_px in the pixel coordinate system, obtaining the transformation matrix T from the calibration plate coordinate system to the camera coordinate system with the EPnP algorithm, and processing every image frame collected by the camera during the motion to obtain the list {T_i} of plate-to-camera transformation matrices;

2-5) for any camera frame, determining the motion of the calibration plate between the two moments: first selecting any point P_Bi in the calibration plate coordinate system of the i-th camera frame and using the equation P'_Bj = T_j^(-1) · T_i · P_Bi to obtain its projected position P'_Bj in the j-th camera frame; if the calibration plate has not moved between the two moments, then T_i = T_j and hence P'_Bj = P_Bi; otherwise, the difference ΔP = ‖P'_Bj − P_Bj‖ between P'_Bj and the actual position P_Bj in the j-th camera frame is the displacement of the calibration plate between the two moments;

2-6) comparing adjacent data frames and selecting as static frames those whose motion satisfies the requirements: for a calibration plate with corner list C = {X_n}, the overall displacement is measured by m(C, i, j) = max_n d(X_ni, X_nj); every camera frame is traversed, and a frame is selected as a static frame if and only if its motion relative to both neighbouring frames is below the displacement threshold, i.e. m(C, i−1, i) < x && m(C, i, i+1) < x; the selected static frames must also be spread across the overlapping region of the lidar and camera fields of view, i.e. m(C, s_i, s_j) > y for s_i, s_j ∈ {S}, where x = 1 cm, y = 300 cm, and S is the set of static frames already selected.
4. The joint calibration method for a lidar and a camera according to claim 1, wherein step 3) comprises the following steps:

3-1) calibrating the intrinsic parameters of the camera to obtain

    K = | fx  0  cx |
        |  0  fy cy |
        |  0  0   1 |

where the matrix K holds the camera intrinsics, fx, fy are the focal lengths, and cx, cy are the principal point offsets;

3-2) when a static frame is selected, simultaneously collecting the lidar 3D point cloud data and camera image data of that frame; if the collected data describe the calibration plate clearly and completely, adding the frame to the static frame set {S}; otherwise, skipping it, selecting the next static frame, and collecting data again;

3-3) processing the point cloud data collected by the lidar to obtain the spatial coordinates of the calibration plate corners: first fitting a plane to the laser point cloud with the RANSAC algorithm and extracting the plane of the calibration plate in the lidar coordinate system; then projecting all points onto the fitted plane and extracting the boundary point set of the point cloud by grid partitioning; then fitting straight lines to the boundary points by least squares to obtain the outer edge lines of the calibration plate and their equations; the pairwise intersections of the four outer edge lines form the four corners of the calibration plate, and solving the line equations gives the coordinates of each corner as the 3D space coordinates of the calibration plate at that position;

3-4) performing corner detection on the image associated with each static frame: first converting the captured camera image to grayscale, then locating the strong corners with the Shi-Tomasi algorithm, and finally extracting their sub-pixel coordinates as the 2D pixel coordinates of the calibration plate corners at that position;

3-5) from the transformation between the world coordinate system and the camera coordinate system:

    s · [u, v, 1]^T = K · [R t] · [Xw, Yw, Zw, 1]^T

where (Xw, Yw, Zw) are the 3D space coordinates of the calibration plate corners obtained in step 3-3), (u, v) are the 2D pixel coordinates of the calibration plate corners in the pixel coordinate system obtained in step 3-4), [R t] is the extrinsic matrix of the camera, and K is the camera intrinsic matrix obtained in step 3-1); the rotation matrix R and translation vector t between the lidar coordinate system and the camera coordinate system are then solved with the EPnP algorithm, completing the joint calibration of the lidar and the camera.
5. The joint calibration method for a lidar and a camera according to any one of claims 1 to 4, wherein the calibration plate is a square monochrome plate with a side length of 0.5 m and a thickness of 5 mm, made of a diffusely reflective material.
CN202111350523.5A 2021-11-15 2021-11-15 Combined calibration method for laser radar and camera Pending CN114140534A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111350523.5A CN114140534A (en) 2021-11-15 2021-11-15 Combined calibration method for laser radar and camera

Publications (1)

Publication Number Publication Date
CN114140534A true CN114140534A (en) 2022-03-04

Family

ID=80393209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111350523.5A Pending CN114140534A (en) 2021-11-15 2021-11-15 Combined calibration method for laser radar and camera

Country Status (1)

Country Link
CN (1) CN114140534A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758005A (en) * 2022-03-23 2022-07-15 中国科学院自动化研究所 Laser radar and camera external parameter calibration method and device
CN115049743A (en) * 2022-07-08 2022-09-13 炬像光电技术(上海)有限公司 High-precision camera module calibration method, device and storage medium
CN115147495A (en) * 2022-06-01 2022-10-04 魔视智能科技(上海)有限公司 Calibration method, device and system for vehicle-mounted system
CN115994955A (en) * 2023-03-23 2023-04-21 深圳佑驾创新科技有限公司 Camera external parameter calibration method and device and vehicle

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111369630A (en) * 2020-02-27 2020-07-03 河海大学常州校区 A method of multi-line lidar and camera calibration
WO2020233443A1 (en) * 2019-05-21 2020-11-26 菜鸟智能物流控股有限公司 Method and device for performing calibration between lidar and camera
US20210003712A1 (en) * 2019-07-05 2021-01-07 DeepMap Inc. Lidar-to-camera transformation during sensor calibration for autonomous vehicles
CN112669393A (en) * 2020-12-31 2021-04-16 中国矿业大学 Laser radar and camera combined calibration method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KANG Guohua; ZHANG Qi; ZHANG Han; XU Weizheng; ZHANG Wenhao: "Research on joint calibration method of lidar and camera based on point cloud center", Chinese Journal of Scientific Instrument (仪器仪表学报), no. 12, 15 December 2019 (2019-12-15) *

Similar Documents

Publication Publication Date Title
CN114140534A (en) Combined calibration method for laser radar and camera
CN105894499B (en) A kind of space object three-dimensional information rapid detection method based on binocular vision
WO2021259151A1 (en) Calibration method and apparatus for laser calibration system, and laser calibration system
CN104851104B (en) Using the flexible big view calibration method of target high speed camera close shot
CN102155923B (en) Splicing measuring method and system based on three-dimensional target
CN105913439B (en) A kind of large-field shooting machine scaling method based on laser tracker
CN104990515B (en) Large-sized object three-dimensional shape measure system and its measuring method
CN107144241B (en) A kind of binocular vision high-precision measuring method based on depth of field compensation
CN109018591A (en) A kind of automatic labeling localization method based on computer vision
CN115830103A (en) Monocular color-based transparent object positioning method, device and storage medium
CN103530880A (en) Camera calibration method based on projected Gaussian grid pattern
CN103729837A (en) Rapid calibration method of single road condition video camera
KR101589167B1 (en) System and Method for Correcting Perspective Distortion Image Using Depth Information
CN113281723B (en) AR tag-based calibration method for structural parameters between 3D laser radar and camera
CN106971408A (en) A kind of camera marking method based on space-time conversion thought
CN109187637B (en) Method and system for workpiece defect measurement based on infrared thermal imager
CN107084680A (en) Target depth measuring method based on machine monocular vision
CN112258583A (en) Distortion calibration method for close-range images based on equal-distortion variable partitioning
CN109523539A (en) Large-sized industrial plate on-line measurement system and method based on polyphaser array
CN105374067A (en) Three-dimensional reconstruction method based on PAL cameras and reconstruction system thereof
CN115082538A (en) 3D reconstruction system and method of multi-vision gimbal parts surface based on line structured light projection
CN117115272A (en) Telecentric camera calibration and three-dimensional reconstruction method for precipitation particle multi-angle imaging
CN112361982B (en) Method and system for extracting three-dimensional data of large-breadth workpiece
WO2025026099A1 (en) Rail vehicle limit detection method and system based on three-dimensional point cloud data, and electronic device
CN111028298B (en) A converging binocular system for space transformation calibration of rigid body coordinate system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination