WO2022142759A1 - Lidar and camera joint calibration method - Google Patents
Lidar and camera joint calibration method
- Publication number
- WO2022142759A1 (PCT/CN2021/129942)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- lidar
- camera
- point cloud
- dimensional
- image
- Prior art date: 2020-12-31
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/20—Special algorithmic details
          - G06T2207/20112—Image segmentation details
            - G06T2207/20164—Salient point detection; Corner detection
Abstract
A lidar and camera joint calibration method in which the lidar scans in a non-repetitive mode: the scan trajectory differs on every pass, so after a few seconds of static capture the point-cloud coverage of the field of view approaches 100%. Exploiting this characteristic, a self-made large checkerboard calibration board is placed in turn at different positions in the overlapping field of view of the lidar and the camera. At each position the camera captures an image while the lidar collects three-dimensional point-cloud data over a relatively long integration time. The collected point cloud is converted into a two-dimensional normalized grayscale image according to point-cloud intensity, and corner detection is performed on both this grayscale image and the camera image to obtain corresponding corner pairs. The precise three-dimensional point-cloud corner coordinates are then recovered by backtracking from each corner detected in the two-dimensional grayscale image, and the joint calibration result is finally computed from the corresponding three-dimensional corners and camera-image corner coordinates. The method is more precise than traditional approaches.
Description
The invention belongs to the field of multi-sensor data fusion, and in particular relates to a joint calibration method for a lidar and a camera.

Lidar and cameras are widely used in autonomous driving, intelligent robotics, and other fields.

The strength of lidar is that it accurately captures the three-dimensional spatial structure of the environment, but it is weak at describing fine detail. A camera, conversely, cannot capture three-dimensional spatial structure but excels at describing detail and color. In an unmanned system the two sensors are therefore used together so that each contributes its strengths. Fusion, however, presupposes a joint calibration that unifies the two sensors in a common spatial coordinate frame. Most existing lidar-camera joint calibration methods target multi-line repetitive-scanning lidar; because non-repetitive-scanning lidar products appeared only recently, little work has addressed them.

The open-source autonomous-driving framework Autoware provides a lidar-camera joint calibration method, packaged in the Autoware_Camera_Lidar_Calibrator toolkit. The method requires manually circling the calibration board in the three-dimensional point cloud to determine the plane and distance of the board; the lidar pose is then inferred from the lidar's angle relative to that plane and compared against the camera to obtain the joint calibration result. Manual circling is imprecise, however, so the board's plane and distance in the point cloud cannot be determined accurately, and the calibration result suffers.

Chinese patent application 201910498286.3 discloses "A multi-camera system and lidar combined system and its joint calibration method". There, calibration software selects the point-cloud points falling on a checkerboard calibration board, projects the selected points into the camera coordinate system, checks visually whether they lie at the board center, adjusts the selection until every projected point sits at the center of the checkerboard, and then outputs the lidar calibration result. The method depends on manually picking the points at the board center; human judgment introduces error, the number of samples is small, and the joint calibration result is hard to make precise.
SUMMARY OF THE INVENTION
The purpose of the present invention is to provide a lidar-camera joint calibration method that exploits a property of non-repetitive-scanning lidar: the longer the scan integration time, the higher the point-cloud coverage. This solves the low-precision problem of traditional joint calibration methods.

The technical solution that realizes this purpose is a lidar-camera joint calibration method with the following steps:
Step 1: Fix the lidar and the camera on the same base, keep their relative pose unchanged, and ensure that the overlapping field of view of the lidar and the camera covers more than 50% of the camera's field of view; go to Step 2.
Step 2: Calibrate the camera to obtain its intrinsic parameters, where f_x and f_y denote the camera focal lengths and c_x and c_y denote the offsets of the camera's optical axis in the image coordinate system; go to Step 3.
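Step 2 is standard monocular intrinsic calibration. The following is a minimal sketch using OpenCV; the image file names and the checkerboard pattern size are illustrative assumptions, not details from the patent:

```python
import cv2
import numpy as np

PATTERN = (3, 4)   # inner corners per row/column (assumed board layout)
SQUARE = 0.20      # square edge length in meters (assumed)

# Planar 3D template of the board corners in board coordinates.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts, size = [], [], None
for path in ["cam_00.png", "cam_01.png", "cam_02.png"]:  # hypothetical files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the 3x3 intrinsic matrix holding f_x, f_y, c_x, c_y.
_, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
```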
Step 3: Place the checkerboard calibration board in turn at different positions within the overlapping field of view of the lidar and the camera, and collect data from both sensors. At each position the camera captures one frame of image data and the lidar collects 20-30 seconds of three-dimensional point-cloud data; go to Step 4.

Step 4: Screen the lidar point-cloud data and camera image data collected at each position. If any data fails to reflect the checkerboard calibration board fully and clearly, discard it, fine-tune the board's pose, re-collect data at that position, and go to Step 5; otherwise go directly to Step 5.

Step 5: Normalize the collected lidar three-dimensional point-cloud data along the x-axis direction to generate a lidar two-dimensional normalized grayscale image, where the pixel gray levels are determined from the point-cloud intensity in the lidar data; go to Step 6.

Step 6: Perform corner detection on the lidar two-dimensional normalized grayscale image and on the camera image to obtain the checkerboard corner coordinates in each; go to Step 7.

Step 7: Backtrack from the corner coordinates in each lidar two-dimensional normalized grayscale image to the corresponding checkerboard corner coordinates in the lidar three-dimensional point cloud; go to Step 8.

Step 8: Return to Step 4 and traverse the lidar point clouds and camera images collected at every position to obtain a series of matched lidar three-dimensional point-cloud and camera-image checkerboard corner coordinate pairs. The transformation between camera-image corner coordinates and three-dimensional laser point-cloud coordinates is:
[u_c, v_c] = K(R*P(x, y, z) + T)

where [u_c, v_c] are the checkerboard corner coordinates in the camera image, K is the camera intrinsic matrix, R is the joint-calibration rotation matrix, P(x, y, z) are the lidar three-dimensional point-cloud coordinates of a checkerboard corner, and T is the joint-calibration displacement matrix. Inputting every group of matched lidar point-cloud and camera-image checkerboard corner coordinates finally yields the rotation matrix R and displacement matrix T, completing the joint calibration of a non-repetitive-scanning lidar and a camera.
Compared with the prior art, the present invention has the following significant advantages:

(1) Existing methods mostly pair cameras with traditional multi-line lidar, whose scan trajectory never changes and whose point-cloud coverage of the field of view is low; such data struggles to reflect the calibration board accurately, which degrades calibration accuracy. The lidar used in the present invention scans non-repetitively, and data is collected from the checkerboard calibration board for 20-30 seconds while the lidar is static. The point-cloud coverage of the field of view then approaches 100%, so the corner information on the checkerboard can be clearly resolved and accurate corner coordinates extracted.

(2) Unlike prior art that estimates the plane and distance of the checkerboard from manually selected point-cloud data falling on the board, the invention proposes a 3D-2D-3D method to obtain precise three-dimensional point-cloud coordinates of the checkerboard corners. The lidar point cloud is first normalized along the x-axis to build a two-dimensional normalized grayscale image whose pixel gray levels come from the per-point intensity of the lidar point cloud. Corner detection on this grayscale image yields the board corner coordinates in the image, and backtracking from these detected corners recovers the corresponding precise lidar three-dimensional point-cloud coordinates on the checkerboard corners, improving joint calibration accuracy.
FIG. 1 is a flowchart of the lidar-camera joint calibration method of the present invention.

FIG. 2 is a schematic diagram of the black-and-white checkerboard calibration board used in the present invention.

FIG. 3 is a schematic diagram defining the lidar point-cloud coordinate system.

FIG. 4 is a schematic structural diagram of the present invention.
The invention performs joint calibration with a non-repetitive-scanning lidar and a camera. Such a lidar never repeats a scan trajectory, so as scan time increases the field-of-view coverage of the output three-dimensional point cloud grows continuously; after a few seconds of static scanning the coverage approaches 100%, fully capturing precise environmental detail. Combined with a large self-made checkerboard calibration board, this greatly improves joint calibration accuracy.

With reference to FIG. 1, the specific steps of the lidar-camera joint calibration method of the present invention are as follows:
Step 1: Fix the lidar and the camera on the same base, keep their relative pose unchanged, and ensure that the overlapping field of view of the lidar and the camera covers more than 50% of the camera's field of view.

The lidar scans non-repetitively: its scan trajectory never repeats, and after a few seconds of static scanning the field-of-view coverage approaches 100%, i.e., almost every region of the field of view is covered.
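Because the trajectory never repeats, a dense cloud is obtained simply by concatenating frames while the sensor stays still. A minimal sketch, assuming a hypothetical `read_lidar_frame()` driver call that returns one (N, 4) array of (x, y, z, intensity) per sweep:

```python
import time
import numpy as np

def accumulate_cloud(duration_s=25.0):
    """Stack non-repetitive scan frames for duration_s seconds."""
    frames = []
    t_end = time.time() + duration_s
    while time.time() < t_end:
        frames.append(read_lidar_frame())  # hypothetical driver call, (N, 4)
    return np.vstack(frames)  # field-of-view coverage grows toward 100%
```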
Step 2: Calibrate the camera to obtain its intrinsic parameters, where f_x and f_y denote the camera focal lengths and c_x and c_y denote the offsets of the camera's optical axis in the image coordinate system.
Step 3: Place the checkerboard calibration board in turn at different positions (9 to 20, depending on the size of the overlapping field of view) within the overlapping field of view of the lidar and the camera, and collect data from both sensors. At each position the camera captures one frame of image data and the lidar collects 20-30 seconds of three-dimensional point-cloud data, which ensures that the lidar point-cloud coverage of the field of view approaches 100% and fully reflects the checkerboard corner information.

To capture camera images and lidar point-cloud data at sufficiently varied positions in the overlapping field of view, and provided both sensors can fully capture the entire board, the board positions should cover the near, far, left, right, and central regions of the overlapping field of view, with adjacent positions spaced 3-5 meters apart.
Step 4: Screen the lidar point-cloud data and camera image data collected at each position. If any data fails to reflect the checkerboard calibration board fully and clearly, discard it, fine-tune the board's pose, re-collect data at that position, and go to Step 5; otherwise go directly to Step 5.
Step 5: As shown in FIG. 3, in the lidar point-cloud coordinate system the x-axis points straight ahead of the lidar. To fully capture every corner of the checkerboard in the field of view, the collected lidar three-dimensional point-cloud data is therefore normalized along the x-axis direction to generate a lidar two-dimensional normalized grayscale image, with the pixel gray levels determined from the point-cloud intensity in the lidar data.
In building the lidar normalized grayscale image, the three-dimensional point-cloud coordinates are normalized first. Let a lidar three-dimensional point-cloud data point be P_0(x_0, y_0, z_0, i_0), where i_0 is the intensity of that point, output directly by the lidar.
Next, set the resolution of the lidar two-dimensional normalized grayscale image to u_0 × v_0, with gain K_0. The pixel coordinates of the lidar two-dimensional normalized grayscale image converted from the lidar three-dimensional point cloud are [u, v] = K_0 * P_1(x_1, y_1, z_1).
Meanwhile, record the maximum point-cloud intensity i_max across all lidar three-dimensional point-cloud data collected at each position; the gray value at pixel coordinates [u, v] of the lidar two-dimensional normalized grayscale image is then obtained by normalizing that point's intensity i_0 by i_max.
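A sketch of Step 5 follows. The patent does not spell out the normalization formula, so this sketch assumes it divides y and z by the forward x coordinate (a pinhole-style projection along the lidar's forward axis) and folds the gain K_0 into a focal-length-like scale f plus the image center; all of these are illustrative assumptions:

```python
import numpy as np

def cloud_to_grayscale(points, u0=1024, v0=768, f=500.0):
    """points: (N, 4) rows of (x, y, z, intensity) -> (v0, u0) gray image."""
    x, y, z, i = points.T
    keep = x > 0.1                                 # points ahead of the sensor
    yn, zn = y[keep] / x[keep], z[keep] / x[keep]  # x-axis normalization
    u = np.clip((f * yn + u0 / 2).astype(int), 0, u0 - 1)
    v = np.clip((-f * zn + v0 / 2).astype(int), 0, v0 - 1)  # image v points down
    img = np.zeros((v0, u0), np.float32)
    img[v, u] = i[keep] / i[keep].max()            # gray = intensity / i_max
    return img
```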
Step 6: Perform corner detection on the lidar two-dimensional normalized grayscale image and on the camera image to obtain the checkerboard corner coordinates in each.
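Both images can go through an ordinary checkerboard detector. A sketch using OpenCV, where `lidar_gray` and `cam_gray` are assumed to be the grayscale images from the previous steps and the pattern size matches the 5 × 4 board of Example 1 (3 × 4 inner corners):

```python
import cv2
import numpy as np

pattern = (3, 4)                              # inner corners of a 5x4 board
lidar8 = (lidar_gray * 255).astype(np.uint8)  # Step 5 output, scaled to 8 bit
ok_l, c_lidar = cv2.findChessboardCorners(lidar8, pattern)
ok_c, c_cam = cv2.findChessboardCorners(cam_gray, pattern)
if ok_l and ok_c:                             # refine to sub-pixel accuracy
    crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    c_lidar = cv2.cornerSubPix(lidar8, c_lidar, (5, 5), (-1, -1), crit)
    c_cam = cv2.cornerSubPix(cam_gray, c_cam, (5, 5), (-1, -1), crit)
```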
Step 7: Backtrack from the corner coordinates in the lidar two-dimensional normalized grayscale image to the corresponding lidar three-dimensional point-cloud coordinates.
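A sketch of the backtracking, assuming Step 5 also kept the per-point pixel coordinates `proj_uv`. Since many cloud points can land on the same pixel, every point projecting within a small radius of the detected corner is gathered and the coordinates are averaged, as the embodiment later describes:

```python
import numpy as np

def corner_to_cloud(points, proj_uv, corner_uv, radius=1.5):
    """points: (N, 4) cloud; proj_uv: (N, 2) pixel coords from Step 5;
    corner_uv: one detected grayscale-image corner -> mean 3D coordinate."""
    d = np.linalg.norm(proj_uv - np.asarray(corner_uv, np.float32), axis=1)
    hits = points[d <= radius, :3]
    return hits.mean(axis=0) if len(hits) else None  # average to cut error
```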
Step 8: Return to Step 4 and traverse the lidar point clouds and camera images collected at every position to obtain a series of matched lidar three-dimensional point-cloud and camera-image checkerboard corner coordinate pairs. The transformation between camera-image corner coordinates and three-dimensional laser point-cloud coordinates is:

[u_c, v_c] = K(R*P(x, y, z) + T)

where [u_c, v_c] are the checkerboard corner coordinates in the camera image, K is the camera intrinsic matrix, R is the joint-calibration rotation matrix, P(x, y, z) are the lidar three-dimensional point-cloud coordinates of a checkerboard corner, and T is the joint-calibration displacement matrix. Inputting every group of matched corner coordinates finally yields R and T, completing the joint calibration of a non-repetitive-scanning lidar and a camera.
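With matched corner pairs in hand, the equation above is a standard perspective-n-point problem, so R and T can be estimated with an off-the-shelf solver. A minimal sketch using OpenCV, where `corners_3d`, `corners_cam`, and `K` are assumed outputs of the earlier steps and the image is assumed undistorted:

```python
import cv2
import numpy as np

obj = np.asarray(corners_3d, np.float32)   # P(x, y, z): lidar corner coords
img = np.asarray(corners_cam, np.float32)  # [u_c, v_c]: camera corner coords
ok, rvec, T = cv2.solvePnP(obj, img, K, None)  # None: no distortion assumed
R, _ = cv2.Rodrigues(rvec)                 # rotation vector -> 3x3 matrix R
```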
Example 1

With reference to FIG. 1, a lidar-camera joint calibration method proceeds as follows:
Step 1: Fix the lidar and the camera side by side on the same base, facing the same direction, with the overlapping field of view covering more than 50% of the camera's field of view.
Step 2: Calibrate the camera to obtain its intrinsic parameters, where f_x and f_y denote the camera focal lengths and c_x and c_y denote the offsets of the camera's optical axis in the image coordinate system.
Step 3: To capture camera images and lidar three-dimensional point-cloud data at sufficiently varied positions in the overlapping field of view, nine positions a, b, c, d, e, f, g, h, i were selected this time (arranged as concentric circles of different radii, as shown in FIG. 4). At each position the camera captures one frame of image data and the lidar collects 20 seconds of three-dimensional point-cloud data. The checkerboard calibration board used is shown in FIG. 2; to ensure that the camera and lidar can clearly capture the board even at a distance, each square is 20 cm, with 20 squares arranged alternately in five rows and four columns.
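For reference, the corner-count arithmetic this board implies: five rows by four columns of 20 cm squares give a 4 × 3 grid of interior corners, which is the pattern size a checkerboard detector expects:

```python
SQUARE_M = 0.20                  # each square is 20 cm
ROWS, COLS = 5, 4                # 20 squares in five rows, four columns
PATTERN = (COLS - 1, ROWS - 1)   # (3, 4) interior corners for detection
```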
Step 4: Screen the lidar point-cloud data and camera image data collected at each position. If any data fails to reflect the checkerboard calibration board fully and clearly, discard it, fine-tune the board's pose, re-collect data at that position, and go to Step 5; otherwise go directly to Step 5.
Step 5: Normalize the lidar three-dimensional point-cloud data collected at each position along the x-axis direction to generate a lidar two-dimensional normalized grayscale image, where the image pixel gray levels are determined from the point-cloud intensity in the lidar data.
In building the lidar normalized grayscale image, the three-dimensional point-cloud coordinates are normalized first. Let a lidar three-dimensional point-cloud data point be P_0(x_0, y_0, z_0, i_0), where i_0 is the intensity of that point, output directly by the lidar.
Next, set the resolution of the lidar two-dimensional normalized grayscale image to u_0 × v_0, with gain K_0. The pixel coordinates of the lidar two-dimensional normalized grayscale image converted from the lidar three-dimensional point cloud are expressed as [u, v] = K_0 * P_1(x_1, y_1, z_1).
Meanwhile, record the maximum point-cloud intensity i_max across all lidar three-dimensional point-cloud data collected at each position; the gray value at pixel coordinates [u, v] of the lidar two-dimensional normalized grayscale image is then obtained by normalizing that point's intensity i_0 by i_max.
Step 6: Perform corner detection on the lidar two-dimensional normalized grayscale image and the camera image obtained at each position to obtain the checkerboard corner coordinates in each.
Step 7: Backtrack from the corner coordinates in each lidar two-dimensional normalized grayscale image to the corresponding lidar three-dimensional point-cloud coordinates. Because each acquisition lasts 20 seconds, the point-cloud coverage of the field of view approaches 100% and the amount of point-cloud data is large, so the reverse search for the lidar three-dimensional points corresponding to a grayscale-image corner returns multiple results; the three-axis coordinates of the several matching points are averaged to reduce error.
Step 8: Return to Step 4 and traverse the lidar point clouds and camera images collected at all positions to obtain a series of matched lidar three-dimensional point-cloud and camera-image corner coordinate pairs. The transformation between camera-image corner coordinates and three-dimensional laser point-cloud coordinates is:
[u_c, v_c] = K(R*P(x, y, z) + T)

where [u_c, v_c] are the checkerboard corner coordinates in the camera image, K is the camera intrinsic matrix, R is the joint-calibration rotation matrix, P(x, y, z) are the lidar three-dimensional point-cloud coordinates of a checkerboard corner, and T is the joint-calibration displacement matrix. Inputting every group of matched lidar point-cloud and camera-image checkerboard corner coordinates finally yields the rotation matrix R and the displacement matrix T, completing the joint calibration of a non-repetitive-scanning lidar and a camera.
Claims (5)
- 1. A lidar-camera joint calibration method, characterized in that the calibration steps are as follows:
  Step 1: fix the lidar and the camera on the same base, keep their relative pose unchanged, and ensure that the overlapping field of view of the lidar and the camera covers more than 50% of the camera's field of view; go to Step 2;
  Step 2: calibrate the camera to obtain its intrinsic parameters, where f_x and f_y denote the camera focal lengths and c_x and c_y denote the offsets of the camera's optical axis in the image coordinate system; go to Step 3;
  Step 3: place the checkerboard calibration board in turn at different positions within the overlapping field of view of the lidar and the camera and collect data from both sensors, the camera capturing one frame of image data and the lidar collecting 20-30 seconds of three-dimensional point-cloud data at each position; go to Step 4;
  Step 4: screen the lidar point-cloud data and camera image data collected at each position; if any data fails to reflect the checkerboard calibration board fully and clearly, discard it, fine-tune the board's pose, re-collect data at that position, and go to Step 5; otherwise go directly to Step 5;
  Step 5: normalize the collected lidar three-dimensional point-cloud data along the x-axis direction to generate a lidar two-dimensional normalized grayscale image, the pixel gray levels being determined from the point-cloud intensity in the lidar data; go to Step 6;
  Step 6: perform corner detection on the lidar two-dimensional normalized grayscale image and on the camera image to obtain the checkerboard corner coordinates in each; go to Step 7;
  Step 7: backtrack from the corner coordinates in each lidar two-dimensional normalized grayscale image to the corresponding checkerboard corner coordinates in the lidar three-dimensional point cloud; go to Step 8;
  Step 8: return to Step 4 and traverse the lidar point clouds and camera images collected at every position to obtain a series of matched lidar three-dimensional point-cloud and camera-image checkerboard corner coordinate pairs, the transformation between camera-image corner coordinates and three-dimensional laser point-cloud coordinates being:
  [u_c, v_c] = K(R*P(x, y, z) + T)
  where [u_c, v_c] are the checkerboard corner coordinates in the camera image, K is the camera intrinsic matrix, R is the joint-calibration rotation matrix, P(x, y, z) are the lidar three-dimensional point-cloud coordinates of a checkerboard corner, and T is the joint-calibration displacement matrix; inputting every group of matched lidar point-cloud and camera-image checkerboard corner coordinates finally yields the rotation matrix R and displacement matrix T, completing the joint calibration of a non-repetitive-scanning lidar and a camera.
- 2. The lidar-camera joint calibration method according to claim 1, characterized in that the lidar scans non-repetitively: the lidar scan trajectory never repeats, and after several seconds of static scanning the field-of-view coverage approaches 100%.
- 3. The lidar-camera joint calibration method according to claim 1, characterized in that in Step 3 the number of different positions is 9 to 20, depending on the size of the overlapping field-of-view region.
- 4. The lidar-camera joint calibration method according to claim 1, characterized in that in Step 3, to capture camera images and lidar three-dimensional point-cloud data at sufficiently varied positions in the overlapping field of view, and provided the lidar and camera can fully capture all checkerboard calibration board data, the board positions cover the near, far, left, right, and central regions of the overlapping field of view, with adjacent positions spaced 3-5 meters apart.
- 5. The lidar-camera joint calibration method according to claim 1, characterized in that in Step 5, in building the lidar normalized grayscale image, the three-dimensional point-cloud coordinates are normalized first, a lidar three-dimensional point-cloud data point being P_0(x_0, y_0, z_0, i_0), where i_0 is the intensity of that point, output directly by the lidar; next, the resolution of the lidar two-dimensional normalized grayscale image is set to u_0 × v_0 with gain K_0, and the pixel coordinates of the lidar two-dimensional normalized grayscale image converted from the lidar three-dimensional point cloud are [u, v] = K_0 * P_1(x_1, y_1, z_1); meanwhile the maximum point-cloud intensity i_max over all lidar three-dimensional point-cloud data collected at each position is recorded, and the gray value at pixel coordinates [u, v] of the lidar two-dimensional normalized grayscale image is obtained by normalizing that point's intensity i_0 by i_max.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011632288.6 | 2020-12-31 | ||
CN202011632288.6A CN112669393B (en) | 2020-12-31 | 2020-12-31 | Laser radar and camera combined calibration method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022142759A1 (en) | 2022-07-07 |
Family
ID=75413055
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/129942 WO2022142759A1 (en) | 2020-12-31 | 2021-11-11 | Lidar and camera joint calibration method |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112669393B (en) |
WO (1) | WO2022142759A1 (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113985445A (en) * | 2021-08-24 | 2022-01-28 | 中国北方车辆研究所 | 3D target detection algorithm based on data fusion of camera and laser radar |
CN115131344A (en) * | 2022-08-25 | 2022-09-30 | 泉州华中科技大学智能制造研究院 | Method for extracting shoe-making molding rubber thread through light intensity data |
CN115170675A (en) * | 2022-07-22 | 2022-10-11 | 信利光电股份有限公司 | Method for expanding camera view |
CN115236689A (en) * | 2022-09-23 | 2022-10-25 | 北京小马易行科技有限公司 | Method and device for determining relative positions of laser radar and image acquisition equipment |
CN115343299A (en) * | 2022-10-18 | 2022-11-15 | 山东大学 | Lightweight highway tunnel integrated detection system and method |
CN115561730A (en) * | 2022-11-11 | 2023-01-03 | 湖北工业大学 | Positioning navigation method based on laser radar feature recognition |
CN115810078A (en) * | 2022-11-22 | 2023-03-17 | 武汉际上导航科技有限公司 | Method for coloring laser point cloud based on POS data and airborne visible light image |
CN116027269A (en) * | 2023-03-29 | 2023-04-28 | 成都量芯集成科技有限公司 | Plane scene positioning method |
CN116152333A (en) * | 2023-04-17 | 2023-05-23 | 天翼交通科技有限公司 | Method, device, equipment and medium for calibrating camera external parameters |
CN116543091A (en) * | 2023-07-07 | 2023-08-04 | 长沙能川信息科技有限公司 | Visualization method, system, computer equipment and storage medium for power transmission line |
CN116538996A (en) * | 2023-07-04 | 2023-08-04 | 云南超图地理信息有限公司 | Laser radar-based topographic mapping system and method |
CN116563391A (en) * | 2023-05-16 | 2023-08-08 | 深圳市高素科技有限公司 | Automatic laser structure calibration method based on machine vision |
CN117268350A (en) * | 2023-09-18 | 2023-12-22 | 广东省核工业地质局测绘院 | Mobile intelligent mapping system based on point cloud data fusion |
CN117607829A (en) * | 2023-12-01 | 2024-02-27 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Ordered reconstruction method of laser radar point cloud and computer readable storage medium |
CN117630892A (en) * | 2024-01-25 | 2024-03-01 | 北京科技大学 | Combined calibration method and system for visible light camera, infrared camera and laser radar |
CN117830438A (en) * | 2024-03-04 | 2024-04-05 | 数据堂(北京)科技股份有限公司 | Laser radar and camera combined calibration method based on specific marker |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112669393B (en) * | 2020-12-31 | 2021-10-22 | 中国矿业大学 | Laser radar and camera combined calibration method |
CN113177988B (en) * | 2021-04-30 | 2023-12-05 | 中德(珠海)人工智能研究院有限公司 | Spherical screen camera and laser calibration method, device, equipment and storage medium |
CN113391299B (en) * | 2021-04-30 | 2023-09-22 | 深圳市安思疆科技有限公司 | Parameter calibration method and device for scanning area array laser radar |
CN113281723B (en) * | 2021-05-07 | 2022-07-22 | 北京航空航天大学 | AR tag-based calibration method for structural parameters between 3D laser radar and camera |
CN113205555B (en) * | 2021-05-28 | 2023-09-19 | 上海扩博智能技术有限公司 | Method, system, apparatus and storage medium for maintaining a blade centered in a camera field of view |
CN113447948B (en) * | 2021-05-28 | 2023-03-21 | 淮阴工学院 | Camera and multi-laser-radar fusion method based on ROS robot |
CN113256740A (en) * | 2021-06-29 | 2021-08-13 | 湖北亿咖通科技有限公司 | Calibration method of radar and camera, electronic device and storage medium |
CN113702999A (en) * | 2021-07-08 | 2021-11-26 | 中国矿业大学 | Expressway side slope crack detection method based on laser radar |
CN113838141B (en) * | 2021-09-02 | 2023-07-25 | 中南大学 | External parameter calibration method and system for single-line laser radar and visible light camera |
CN114022566A (en) * | 2021-11-04 | 2022-02-08 | 安徽省爱夫卡电子科技有限公司 | Combined calibration method for single line laser radar and camera |
WO2023077827A1 (en) | 2021-11-08 | 2023-05-11 | 南京理工大学 | Three-dimensional tower-type checkerboard for multi-sensor calibration, and lidar-camera joint calibration method based on checkerboard |
CN116091610B (en) * | 2021-11-08 | 2023-11-10 | 南京理工大学 | Combined calibration method of radar and camera based on three-dimensional tower type checkerboard |
CN114241298A (en) * | 2021-11-22 | 2022-03-25 | 腾晖科技建筑智能(深圳)有限公司 | Tower crane environment target detection method and system based on laser radar and image fusion |
CN113838213A (en) * | 2021-11-23 | 2021-12-24 | 深圳市其域创新科技有限公司 | Three-dimensional model generation method and system based on laser and camera sensor |
CN114782651A (en) * | 2022-05-14 | 2022-07-22 | 中新国际联合研究院 | External parameter automatic calibration method for non-repetitive scanning 3D laser radar and thermal camera |
CN115082570B (en) * | 2022-07-01 | 2024-03-19 | 中国科学院宁波材料技术与工程研究所 | Calibration method for laser radar and panoramic camera |
CN117388831B (en) * | 2023-12-13 | 2024-03-15 | 中科视语(北京)科技有限公司 | Camera and laser radar combined calibration method and device, electronic equipment and medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108509918B (en) * | 2018-04-03 | 2021-01-08 | 中国人民解放军国防科技大学 | Target detection and tracking method fusing laser point cloud and image |
US10739462B2 (en) * | 2018-05-25 | 2020-08-11 | Lyft, Inc. | Image sensor processing using a combined image and range measurement system |
US11393097B2 (en) * | 2019-01-08 | 2022-07-19 | Qualcomm Incorporated | Using light detection and ranging (LIDAR) to train camera and imaging radar deep learning networks |
CN111311689B (en) * | 2020-02-10 | 2020-10-30 | 清华大学 | Method and system for calibrating relative external parameters of laser radar and camera |
CN111369630A (en) * | 2020-02-27 | 2020-07-03 | 河海大学常州校区 | Method for calibrating multi-line laser radar and camera |
- 2020-12-31: CN application CN202011632288.6A filed, published as CN112669393B (legal status: Active)
- 2021-11-11: WO application PCT/CN2021/129942 filed, published as WO2022142759A1 (legal status: Application Filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111754578A (en) * | 2019-03-26 | 2020-10-09 | 舜宇光学(浙江)研究院有限公司 | Combined calibration method and system for laser radar and camera and electronic equipment |
US10859684B1 (en) * | 2019-11-12 | 2020-12-08 | Huawei Technologies Co., Ltd. | Method and system for camera-lidar calibration |
CN111192331A (en) * | 2020-04-09 | 2020-05-22 | 浙江欣奕华智能科技有限公司 | External parameter calibration method and device for laser radar and camera |
CN111612845A (en) * | 2020-04-13 | 2020-09-01 | 江苏大学 | Laser radar and camera combined calibration method based on mobile calibration plate |
CN112669393A (en) * | 2020-12-31 | 2021-04-16 | 中国矿业大学 | Laser radar and camera combined calibration method |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113985445A (en) * | 2021-08-24 | 2022-01-28 | 中国北方车辆研究所 | 3D target detection algorithm based on data fusion of camera and laser radar |
CN113985445B (en) * | 2021-08-24 | 2024-08-09 | 中国北方车辆研究所 | 3D target detection algorithm based on camera and laser radar data fusion |
CN115170675A (en) * | 2022-07-22 | 2022-10-11 | 信利光电股份有限公司 | Method for expanding camera view |
CN115170675B (en) * | 2022-07-22 | 2023-10-03 | 信利光电股份有限公司 | Method for expanding camera vision |
CN115131344A (en) * | 2022-08-25 | 2022-09-30 | 泉州华中科技大学智能制造研究院 | Method for extracting shoe-making molding rubber thread through light intensity data |
CN115131344B (en) * | 2022-08-25 | 2022-11-08 | 泉州华中科技大学智能制造研究院 | Method for extracting shoe-making molding rubber thread through light intensity data |
CN115236689A (en) * | 2022-09-23 | 2022-10-25 | 北京小马易行科技有限公司 | Method and device for determining relative positions of laser radar and image acquisition equipment |
CN115343299A (en) * | 2022-10-18 | 2022-11-15 | 山东大学 | Lightweight highway tunnel integrated detection system and method |
CN115561730A (en) * | 2022-11-11 | 2023-01-03 | 湖北工业大学 | Positioning navigation method based on laser radar feature recognition |
CN115561730B (en) * | 2022-11-11 | 2023-03-17 | 湖北工业大学 | Positioning navigation method based on laser radar feature recognition |
CN115810078A (en) * | 2022-11-22 | 2023-03-17 | 武汉际上导航科技有限公司 | Method for coloring laser point cloud based on POS data and airborne visible light image |
CN116027269A (en) * | 2023-03-29 | 2023-04-28 | 成都量芯集成科技有限公司 | Plane scene positioning method |
CN116152333A (en) * | 2023-04-17 | 2023-05-23 | 天翼交通科技有限公司 | Method, device, equipment and medium for calibrating camera external parameters |
CN116152333B (en) * | 2023-04-17 | 2023-09-01 | 天翼交通科技有限公司 | Method, device, equipment and medium for calibrating camera external parameters |
CN116563391A (en) * | 2023-05-16 | 2023-08-08 | 深圳市高素科技有限公司 | Automatic laser structure calibration method based on machine vision |
CN116563391B (en) * | 2023-05-16 | 2024-02-02 | 深圳市高素科技有限公司 | Automatic laser structure calibration method based on machine vision |
CN116538996A (en) * | 2023-07-04 | 2023-08-04 | 云南超图地理信息有限公司 | Laser radar-based topographic mapping system and method |
CN116538996B (en) * | 2023-07-04 | 2023-09-29 | 云南超图地理信息有限公司 | Laser radar-based topographic mapping system and method |
CN116543091A (en) * | 2023-07-07 | 2023-08-04 | 长沙能川信息科技有限公司 | Visualization method, system, computer equipment and storage medium for power transmission line |
CN116543091B (en) * | 2023-07-07 | 2023-09-26 | 长沙能川信息科技有限公司 | Visualization method, system, computer equipment and storage medium for power transmission line |
CN117268350A (en) * | 2023-09-18 | 2023-12-22 | 广东省核工业地质局测绘院 | Mobile intelligent mapping system based on point cloud data fusion |
CN117607829A (en) * | 2023-12-01 | 2024-02-27 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Ordered reconstruction method of laser radar point cloud and computer readable storage medium |
CN117630892A (en) * | 2024-01-25 | 2024-03-01 | 北京科技大学 | Combined calibration method and system for visible light camera, infrared camera and laser radar |
CN117630892B (en) * | 2024-01-25 | 2024-03-29 | 北京科技大学 | Combined calibration method and system for visible light camera, infrared camera and laser radar |
CN117830438A (en) * | 2024-03-04 | 2024-04-05 | 数据堂(北京)科技股份有限公司 | Laser radar and camera combined calibration method based on specific marker |
Also Published As
Publication number | Publication date |
---|---|
CN112669393B (en) | 2021-10-22 |
CN112669393A (en) | 2021-04-16 |
Similar Documents
Publication | Title |
---|---|
WO2022142759A1 (en) | Lidar and camera joint calibration method |
CN112396664B (en) | Monocular camera and three-dimensional laser radar combined calibration and online optimization method | |
CN110148169B (en) | Vehicle target three-dimensional information acquisition method based on PTZ (pan-tilt-zoom) camera |
CN111369630A (en) | Method for calibrating multi-line laser radar and camera | |
CN111612845A (en) | Laser radar and camera combined calibration method based on mobile calibration plate | |
CN108555908A (en) | Stacked workpiece posture identification and picking method based on RGBD cameras |
CN110823252B (en) | Automatic calibration method for multi-line laser radar and monocular vision | |
CN111325801B (en) | Combined calibration method for laser radar and camera | |
CN105716542B (en) | Three-dimensional data stitching method based on flexible feature points |
CN110842940A (en) | Building surveying robot multi-sensor fusion three-dimensional modeling method and system | |
CN110349221A (en) | Joint calibration method for three-dimensional laser radar and binocular visible light sensor |
CN108389233B (en) | Laser scanner and camera calibration method based on boundary constraint and mean value approximation | |
CN111311689A (en) | Method and system for calibrating relative external parameters of laser radar and camera | |
CN111486864B (en) | Multi-source sensor combined calibration method based on three-dimensional regular octagon structure | |
CN108154536A (en) | Camera calibration method based on two-dimensional plane iteration |
CN114998448B (en) | Multi-constraint binocular fisheye camera calibration and space point positioning method | |
An et al. | Building an omnidirectional 3-D color laser ranging system through a novel calibration method | |
CN114283203A (en) | Calibration method and system of multi-camera system | |
CN113793270A (en) | Aerial image geometric correction method based on unmanned aerial vehicle attitude information | |
CN112305557B (en) | Panoramic camera and multi-line laser radar external parameter calibration system | |
CN114413958A (en) | Monocular vision distance and speed measurement method of unmanned logistics vehicle | |
CN115272474A (en) | Three-dimensional calibration plate for combined calibration of laser radar and camera and calibration method | |
CN114677531B (en) | Multi-mode information fusion method for detecting and positioning targets of unmanned surface vehicle | |
CN115880369A (en) | Device, system and method for jointly calibrating line structured light 3D camera and line array camera | |
CN115187612A (en) | Plane area measuring method, device and system based on machine vision |
Legal Events
Code | Title | Description |
---|---|---|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 21913502; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | EP: PCT application non-entry in European phase | Ref document number: 21913502; Country of ref document: EP; Kind code of ref document: A1 |
WWE | WIPO information: entry into national phase | Ref document number: 2024121216; Country of ref document: RU |