CN104574406B - A joint calibration method between a 360-degree panoramic laser and multiple vision systems


Info

Publication number
CN104574406B
CN104574406B CN201510021270.5A
Authority
CN
China
Prior art keywords: laser, calibration, dimensional, point, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510021270.5A
Other languages
Chinese (zh)
Other versions
CN104574406A (en
Inventor
闫飞
庄严
金鑫彤
王伟
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201510021270.5A priority Critical patent/CN104574406B/en
Publication of CN104574406A publication Critical patent/CN104574406A/en
Application granted granted Critical
Publication of CN104574406B publication Critical patent/CN104574406B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a joint calibration method for a 360-degree panoramic laser and multiple vision systems, belonging to the technical field of autonomous environment perception for mobile robots. Its main innovation is the use of simple black cardboard as the calibration device, enabling fast, simultaneous calibration of multiple vision systems against the panoramic laser. The method exploits the fact that black cardboard gives laser beams striking its surface a low reflectivity: through generation of a reflection value map, range filtering, binarization, and point clustering, the feature points belonging to the calibration device are extracted from the collected 3D laser data, and a plane extraction method filters out noise objects in the environment. An iterative optimization method then solves for the correspondence between the 3D feature points from the laser data and the 2D feature points from the image data, yielding the rotation and translation matrices between the sensors. The invention lays a foundation for multi-sensor information fusion and can be used in fields such as mobile robot scene reconstruction.

Description

A joint calibration method between a 360-degree panoramic laser and multiple vision systems

Technical field

The invention belongs to the technical field of environment perception. It relates to data fusion between a 3D laser ranging system and multiple vision systems, and in particular to a joint calibration method between the 3D laser ranging system and multiple vision systems.

Background

In complex scenes a single sensor cannot meet the demands of tasks such as environment perception and scene understanding, so matching and fusing data from multiple sensors is a necessary means of improving perception and understanding performance, and joint calibration between the sensors is the key step. A variety of extrinsic calibration methods for 3D laser and monocular vision already exist. The most common approach uses a black-and-white calibration board to calibrate the extrinsic parameters between the 3D laser and the vision system (Joung J H, An K H, Kang J W, et al. 3D environment reconstruction using modified color ICP algorithm by fusion of a camera and a 3D laser range finder [C], IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2009: 3082-3088). Because this kind of board is itself designed for intrinsic calibration of vision systems, it is convenient for the vision system to process; for the 3D laser system, however, the required scanning quality of the board is very demanding, the method is only suitable for calibrating nearby scenes, and the scanning angle is restricted. To make it easier for the 3D laser system to acquire calibration features, García-Moreno et al. (García-Moreno A I, Gonzalez-Barbosa J J, Ornelas-Rodriguez F J, et al. LIDAR and panoramic camera extrinsic calibration approach using a pattern plane [M], Pattern Recognition, Springer Berlin Heidelberg, 2013: 104-113) use the edge corners of multiple diamond-shaped holes as features to calibrate a 3D lidar against a panoramic camera, but because of the edge effect of laser points the corner features are not acquired very accurately and the stability of the calibration cannot be guaranteed. Osgood and Huang (Thomas J Osgood, Yingping Huang. Calibration of laser scanner and camera fusion system for intelligent vehicles using Nelder-Mead optimization [J], Measurement Science & Technology, 2013, vol. 24, no. 3, pp. 1-10) use a circular white card suspended in the air as the calibration object. Although this makes it easy to locate the feature object in both the 3D laser data and the image, the calibration requires an open experimental environment, and because the calibration object is small, the influence of the laser-point edge effect on its 3D coordinates is amplified, reducing calibration accuracy. A patent (Zhuang Yan; Wang Wei; Chen Dong; Yang Shengpeng, Dalian University of Technology, an automatic calibration method between 3D laser and monocular vision, patent number 200910187344) proposes a black-and-white calibration board with holes for automatic calibration between a 3D laser and monocular vision. The board carries two kinds of information: color information and hole information. The color information is acquired by the camera, and the intersections of the black and white squares (the feature points) are detected with the corner detection algorithms provided in OpenCV; the hole information is acquired by the laser rangefinder, and the same intersections are located in the laser data by a corresponding algorithm. Matching the feature points across the two views completes the registration. This method has limitations, however: it requires the camera and the laser rangefinder to be close together, is unsuitable when they are far apart, and only applies to calibrating the laser against a single vision system.

Summary of the invention

The problem to be solved by the present invention is the automatic joint calibration between a 3D laser ranging system and multiple vision systems. The extrinsic parameters of multiple systems can be computed conveniently from a single data collection; the limitation that the laser ranging system and the vision system cannot be placed far apart is removed; the requirements placed on the calibration object and the calibration environment are lowered; the influence of the laser-point edge effect on the calibration result is reduced; and the practicality and accuracy of the calibration method are enhanced.

The technical scheme of the present invention is as follows:

1. Analysis of the 3D panoramic laser characteristics and design of the calibration device

The 3D panoramic laser system used in the present invention consists of a 2D laser sensor and a rotating pan/tilt platform driven by a stepper motor. The platform rotates in the horizontal plane; the 2D laser sensor scans a fan-shaped plane perpendicular to the platform's plane of rotation. Each set of laser data contains both ranging data and reflection value data, in one-to-one correspondence. Because the laser sensor has a certain measurement error, and laser data at object edges are affected by the edge effect, it is difficult to separate foreground from background objects using the ranging data alone when the two are close together. Reflection value data, by contrast, depend on object properties such as material and color and are not limited by the distance between objects, which makes it convenient to separate data belonging to different classes of objects.

Based on these characteristics of the 3D panoramic laser, a joint calibration device for the 3D panoramic laser and multiple vision systems was designed. The chosen material is black cardboard, cut to a uniform shape and size (Figure 1). Experiments verified that black cardboard gives the laser beams striking its surface a low reflectivity; the uniform shape of the calibration device facilitates the subsequent extraction of feature points and ensures the robustness of the calibration algorithm and the accuracy of the calibration result. For the best calibration effect, the size of the calibration black paper can be adjusted according to the field of view of the cameras and the density of the laser data.

2. Extraction of the laser-data feature points required for calibration

The laser-data feature point to be extracted for calibration is the center point of each piece of calibration black paper, so the 3D laser points projected onto the black paper must be extracted first. Extraction of the calibration black-paper data is divided into a pre-detection stage and a denoising stage. The pre-detection stage exploits the reflection value characteristics of the black cardboard to find the indices of the laser points on the calibration black paper in the 2D reflection value map and to determine the 3D coordinates of those points. The denoising stage analyzes the acquired 3D laser data and removes falsely detected points to ensure calibration accuracy.

The pre-detection steps for the calibration black paper, based on the laser data, are as follows:

1) Generate a binarized reflection value map. Using the total number of data sets and the number of laser points per set as the pixel counts in the x and y directions, generate a 2D image in which each laser point corresponds to one image pixel. The gray value of each pixel is computed from the reflection value of its laser point with formula (1), giving the 2D reflection value map that corresponds to the 3D laser point cloud (Figure 5(a)).

where d_i and g_i are the reflection value and gray value of laser point i, respectively, and d_max and d_min are the maximum and minimum reflection values over all laser points.
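As a concrete illustration, the mapping from reflectance readings to the 2D gray image can be sketched as follows. Formula (1) itself is not reproduced in this text, so the linear normalization to [0, 255] is an assumption consistent with the variables defined above, and the function name `reflectance_image` is ours:

```python
import numpy as np

def reflectance_image(reflect, n_scans, pts_per_scan):
    """Map each laser point's reflectance to an 8-bit gray value.

    `reflect` is a flat sequence of reflectance readings, ordered scan by
    scan, so one laser point maps to one pixel of an
    (n_scans x pts_per_scan) image.
    """
    d = np.asarray(reflect, dtype=float).reshape(n_scans, pts_per_scan)
    d_min, d_max = d.min(), d.max()
    # Assumed linear normalization of reflectance into [0, 255];
    # low-reflectance points (e.g. the black card) come out dark.
    g = (d - d_min) / (d_max - d_min) * 255.0
    return g.astype(np.uint8)
```

With four readings arranged as a 2x2 image, the minimum reflectance maps to gray 0 and the maximum to 255.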

To simplify processing and avoid excessive interference, only the laser data within a certain range of the 3D panoramic laser, where the calibration black paper is placed, are selected; this range can be adjusted according to where the black paper is positioned (Figure 5(b)). The reflection value map is then binarized:

where g_i′ is the gray value of pixel i after binarization, ḡ is the mean gray value over all pixels, and k_g is a gray-level adjustment threshold that controls the binarization effect. After binarization the gray values in the black-paper regions are all 0 and only black and white remain in the image, making the black-paper regions clearer (Figure 5(c)).
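A minimal sketch of this thresholding step follows. Formula (2) is not reproduced in the text, so comparing each gray value against k_g times the mean gray value, with dark pixels mapped to 0, is an assumption from the surrounding description; `binarize` is a hypothetical helper name:

```python
import numpy as np

def binarize(gray, k_g=0.8):
    """Threshold the reflectance image against k_g times the mean gray
    value: dark (low-reflectance) pixels such as the black card become 0,
    everything else becomes 255."""
    g = np.asarray(gray, dtype=float)
    out = np.where(g < k_g * g.mean(), 0, 255)
    return out.astype(np.uint8)
```

For example, in an image whose mean gray value is 165 and k_g = 0.8, every pixel below 132 is set to 0 and the rest to 255.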

2) Cluster the black pixels.

The binary image produced above contains some stray black pixels. To determine which laser points belong to each piece of calibration black paper and to exclude interference, the black pixels of the binarized gray image are clustered by neighborhood search. The clustering proceeds as follows:

Clustering algorithm:

Through this clustering, the black pixels are grouped into multiple pixel clusters. Because the calibration black paper occupies a comparatively large area in the binary image, whether a cluster corresponds to calibration black paper can be decided from its size: if the number of points in a cluster is below a certain bound, it is judged to be a noise cluster, and the gray values of all its points are set to 255. After this processing, only the calibration black paper remains in the reflection value map (Figure 5(d)).

Since the laser ranging data and the reflection value data correspond one-to-one in order, the index of the ranging datum corresponding to a pixel with image index (x_i, y_i) in a cluster is m × x_i + y_i, where m is the number of laser points per set of laser data; every pixel cluster can therefore be mapped to a corresponding laser point cluster.

The algorithm above completes the preliminary detection of the laser points on the calibration black paper. Its advantages are that, because the laser reflection value is used as the detection basis, the calibration black paper need not be far from the background, lowering the requirements on the calibration environment; and that clustering on the 2D reflection value map is one dimension easier than clustering the 3D laser points directly.
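The patent does not print the clustering algorithm itself, so the sketch below assumes a 4-neighborhood breadth-first flood fill, which matches the neighborhood-search description above, together with the m × x_i + y_i index mapping. All function names are ours:

```python
from collections import deque

def cluster_black_pixels(binary, min_size=20):
    """4-neighborhood flood fill over black (0) pixels of a binary image.

    Returns the clusters with at least `min_size` pixels; smaller
    clusters are treated as noise and discarded.
    """
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    clusters = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] != 0 or seen[sy][sx]:
                continue
            comp, queue = [], deque([(sy, sx)])
            seen[sy][sx] = True
            while queue:
                y, x = queue.popleft()
                comp.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and binary[ny][nx] == 0 and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(comp) >= min_size:
                clusters.append(comp)
    return clusters

def pixel_to_laser_index(x_i, y_i, m):
    """Pixel (x_i, y_i) -> serial number of the range reading, where m
    is the number of laser points per scan (the m * x_i + y_i mapping)."""
    return m * x_i + y_i
```

On a small binary grid, a connected patch of zeros survives while an isolated zero is dropped as noise, and each retained pixel can then be traced back to its range reading.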

3) Denoising algorithm for the 3D laser feature points

Because of interfering objects in the environment, not all detected laser point clusters correspond to calibration black paper. The algorithm given below therefore verifies the preliminarily detected clusters to remove environmental noise and ensure calibration accuracy.

Since the laser reflection value is the detection basis, objects in the environment with color and material similar to the calibration black paper are the main source of interference. A flatness evaluation is used to distinguish calibration objects from such non-calibration objects: the laser points on a calibration object lie entirely in one plane, so non-planar objects can be rejected by checking whether the points of a cluster form a flat plane. Suppose a laser point cluster contains N laser points p = [p_1, p_2, ..., p_N]; the coordinate covariance matrix of the laser points is then:

where p̄ is the centroid of the laser points. Compute the three eigenvalues λ_0, λ_1, λ_2 of this covariance matrix; when the smallest of them satisfies λ_2 << λ_1 ≈ λ_0, the laser points form a plane. In that case the flatness of the plane can be evaluated by the ratio of the two eigenvalues λ_2 and λ_0: when the ratio is below a given threshold, the cluster is considered to belong to the calibration object:

where k_λ is the flatness evaluation threshold.

In addition, when a laser beam strikes a glass or mirror surface, the combination of reflection and refraction makes the returned reflection value relatively small, and such surfaces are then hard to distinguish from the calibration black paper in the gray image. This interference can be handled using a property of the laser sensor: after the beam passes through the glass it is reflected back by obstacles behind it, so the measured distance is much greater than the distance from the sensor to the glass. Therefore, when the distance of the center point of a laser point cluster is below a certain threshold, the cluster is considered to be a calibration object:

where the distance is computed from the 3D coordinates of the cluster centroid, and k_d is the distance threshold.
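The two denoising tests above, flatness via the covariance eigenvalues and the centroid-distance check against glass or mirror returns, can be sketched together as follows. The threshold values k_lambda and k_d below are illustrative only, and the function name is ours:

```python
import numpy as np

def is_calibration_cluster(points, k_lambda=0.05, k_d=10.0):
    """Validate a candidate laser-point cluster.

    1) Flatness: the smallest eigenvalue of the coordinate covariance
       matrix divided by the largest must stay below k_lambda, so the
       cluster lies (nearly) in a plane.
    2) Range: the cluster centroid must lie closer than k_d, rejecting
       beams that actually passed through glass and returned from far
       obstacles.
    """
    p = np.asarray(points, dtype=float)          # N x 3 coordinates
    centered = p - p.mean(axis=0)
    cov = centered.T @ centered / len(p)
    lam = np.sort(np.linalg.eigvalsh(cov))       # ascending: lam2 <= lam1 <= lam0
    flat = lam[0] / lam[-1] < k_lambda
    near = np.linalg.norm(p.mean(axis=0)) < k_d
    return bool(flat and near)
```

A coplanar cluster near the sensor passes both tests, while a non-planar cluster far away is rejected.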

3. Iterative computation of the transformation between 3D laser data and 2D visual data

Because the field of view of a vision sensor is limited and far smaller than the measurement range of the 3D laser sensor, a background with a color contrast to the calibration black paper can be arranged during calibration. The pixels corresponding to the black cardboard are then extracted from the image using pixel gray-level information; borrowing the black-pixel clustering method above, the pixels are segmented into clusters, and the mean image coordinates of all pixels in each cluster are taken as the image feature point.

Given the matched pairs of laser-data feature center points and visual-data feature points, an iterative optimization method is used to solve the projective transformation from 3D space to 2D space; here the Gauss-Newton iteration is used to optimize the extrinsic parameters. For convenience, both image feature points and laser feature points are expressed in homogeneous coordinates. Define the 2D homogeneous coordinate vector of an image feature point, and let p_i = [x_i, y_i, z_i, 1] be the homogeneous coordinate vector of a 3D laser point; after transformation, the laser point corresponds to a pixel coordinate vector in the image. The goal of calibration is to compute a set of transformation parameters such that:

where f_x and f_y are the focal lengths of the vision sensor in the x and y directions, and (u_x, u_y) is the offset of the center of the photosensitive element relative to the image center. f_x, f_y, u_x, u_y are intrinsic parameters of the vision system and can be obtained by traditional intrinsic calibration methods, so they are known quantities here. r_1, r_2, r_3 are the rotation angles about the x, y, z axes, and t_x, t_y, t_z are the translations along the three coordinate axes.

Following the Gauss-Newton iterative method, given the transformation parameters at iteration k, in their neighborhood we have:

where the Jacobian matrix is evaluated at the current estimate. We seek the next iterate such that:

from which the normal equations are obtained:

Substituting back into the previous expression gives the iteration scheme of the Gauss-Newton method:

This iteration yields the transformation parameters satisfying formula (6), i.e. the calibration result.
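A minimal numerical sketch of this extrinsic optimization follows: a pinhole projection with Euler-angle rotations and a translation, refined by a Gauss-Newton loop that builds a forward-difference Jacobian and solves the normal equations. The exact parameterization and Jacobian of the patent are not reproduced in this text, so this is an assumption-level reconstruction, and all names are ours:

```python
import numpy as np

def project(theta, P, fx, fy, ux, uy):
    """Project 3-D laser points P (N x 3) into the image with extrinsics
    theta = (r1, r2, r3, tx, ty, tz): rotations about the x, y, z axes
    followed by a translation, then the pinhole intrinsics fx, fy, ux, uy."""
    r1, r2, r3, tx, ty, tz = theta
    c, s = np.cos, np.sin
    Rx = np.array([[1, 0, 0], [0, c(r1), -s(r1)], [0, s(r1), c(r1)]])
    Ry = np.array([[c(r2), 0, s(r2)], [0, 1, 0], [-s(r2), 0, c(r2)]])
    Rz = np.array([[c(r3), -s(r3), 0], [s(r3), c(r3), 0], [0, 0, 1]])
    C = (Rz @ Ry @ Rx @ P.T).T + np.array([tx, ty, tz])
    return np.stack([fx * C[:, 0] / C[:, 2] + ux,
                     fy * C[:, 1] / C[:, 2] + uy], axis=1)

def calibrate(theta0, P, uv, fx, fy, ux, uy, iters=30):
    """Gauss-Newton refinement of the extrinsics: at each step form the
    residual r(theta) = project(theta) - uv, a forward-difference
    Jacobian J, and solve the normal equations J^T J delta = -J^T r."""
    theta = np.array(theta0, dtype=float)
    eps = 1e-6
    for _ in range(iters):
        r = (project(theta, P, fx, fy, ux, uy) - uv).ravel()
        J = np.empty((r.size, 6))
        for j in range(6):
            d = np.zeros(6)
            d[j] = eps
            J[:, j] = ((project(theta + d, P, fx, fy, ux, uy) - uv).ravel() - r) / eps
        theta += np.linalg.solve(J.T @ J, -J.T @ r)
    return theta
```

On synthetic data, projecting a point grid with known extrinsics and then starting the solver from a rough initial guess recovers a parameter set whose reprojection error is negligible.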

The effect and benefit of the present invention are that interference from noise, distance, incidence angle, and so on in the calibration between the 3D laser and the vision systems is effectively reduced; the calibration is stable and can calibrate the 3D panoramic laser against multiple vision systems simultaneously, so its practical applicability is effectively improved. The method is simple in design, and the calibration device is light and easy to carry; fast and correct calibration of the 3D panoramic laser and vision systems can be achieved not only indoors but also in complex outdoor environments. This in turn enables the fusion of 3D laser data and visual data and scene reconstruction in complex environments, laying a solid foundation for the development of machine intelligence technology based on multi-sensor data fusion.

Description of the drawings

Figure 1: the black cardboard used for calibration.

Figure 2: layout of the equipment, including four vision systems and one panoramic laser.

Figure 3: images containing the calibration black paper captured by the four vision systems.

Figure 4: 3D laser data collected by the panoramic laser sensor.

Figure 5: the process of extracting the calibration black paper from the laser data. (a) The 2D reflection value map corresponding to the 3D laser point cloud; (b) the reflection value map within the selected range; (c) the binarized reflection value map; (d) the reflection value map after clustering.

Figure 6: the 3D laser points projected onto a person's body, extracted from the 3D laser data.

Figure 7: result of back-projecting the laser points of Figure 6 into the four images using the calibration results.

Detailed description

To verify the effectiveness of the method, the sensor system shown in Figure 2 was built. The panoramic laser sensor consists of a Hokuyo UTM-30LX laser sensor and a rotating pan/tilt platform; the laser sensor's planar scanning angle is 0-270 degrees, and the platform's stepper motor operates at frequencies of 500-2500 Hz. The motor drives the laser sensor to acquire 3D laser ranging data of the scene. The four vision systems are ordinary ANC FULL HD 1080P network cameras with USB 2.0 interfaces, a 60-degree field of view, and a resolution of 1280×960. The calibration device consists of nine 300 mm × 100 mm pieces of black paper placed at different positions in the scene.

Images of the calibration device in the scene are captured by each of the four vision systems (Figure 3), and the calibration device is extracted from each image with the corresponding image processing method. The panoramic laser sensor acquires the 3D point cloud of the whole space (Figure 4) together with the reflection value of every laser point; after generating the reflection value map, range filtering, binarization, and point clustering, the laser points of the calibration device are extracted from the laser data (Figure 5). Iteratively matching the 2D calibration-device feature points in the images with the 3D calibration-device feature points in the laser data yields the rotation and translation matrices.

The resulting extrinsic parameters between the 3D panoramic laser ranging system and the four vision systems are:

For a qualitative analysis, the calibration result is verified visually by projecting laser points into the images. With a person as the reference object, corresponding images and 3D laser data were collected again, the laser points belonging to the body were extracted from the 3D data (Figure 6), and the rotation and translation matrices obtained by calibration were used to project these laser points into the four images. The white dots on the person's body in the four images of Figure 7 are the projected laser points; it can be seen that they project accurately onto the corresponding image regions.

Claims (1)

1. A joint calibration method between a 360-degree panoramic laser and multiple vision systems, characterized in that plain black cardboard is used as the calibration device: because black cardboard gives the laser beams striking its surface a low reflectance, the laser points belonging to the cardboard are extracted from the two-dimensional reflectance image and their midpoints taken as feature points; the corresponding feature points are then matched in the images of the different vision systems, and the transformation between the systems is computed iteratively, completing the joint calibration of the three-dimensional panoramic laser and the multiple vision systems. The specific steps are as follows:

a) First cut out several rectangular pieces of black cardboard, each no less than 30 cm long and 10 cm wide, and place them at random on different planes in the indoor environment.

b) Acquire three-dimensional data and reflectance data of the environment with the panoramic laser, and convert the reflectance to gray level by g_i = 255·(d_i − d_min)/(d_max − d_min), obtaining the two-dimensional reflectance image corresponding to the three-dimensional laser point cloud; here d_i and g_i are the reflectance and gray value of laser point i, and d_max and d_min are the maximum and minimum reflectance over all laser points. Select the laser data lying within a given range of the three-dimensional panoramic laser that contains the calibration cardboard, and binarize the corresponding pixels so that only black and white remain and the cardboard regions stand out clearly; then cluster the binarized black pixels by neighborhood search to eliminate interference.

c) Distinguish the calibration targets from non-target interfering objects with a flatness evaluation. Suppose a laser-point cluster contains N points p = [p_1, p_2, ..., p_N]; its coordinate covariance matrix is C = (1/N) Σ_{i=1}^{N} (p_i − p̄)(p_i − p̄)^T, where p̄ is the centroid of the cluster. Compute the three eigenvalues λ_0, λ_1, λ_2 of this covariance matrix; when λ_2 ≪ λ_1 ≈ λ_0 and the eigenvalue ratio λ_2/(λ_0 + λ_1 + λ_2) is smaller than a given threshold k_λ, where k_λ is the flatness evaluation threshold, the cluster is considered to belong to a calibration target.

d) Define u_i = [u_i, v_i, 1]^T as an image feature point and p_i = [x_i, y_i, z_i, 1]^T as a three-dimensional laser point; after transformation, the pixel coordinates in the image corresponding to the laser point are û_i = f(θ; p_i). The purpose of calibration is to compute a set of transformation parameters θ = [r_1, r_2, r_3, t_x, t_y, t_z]^T satisfying û_i = u_i, where f_x and f_y are the focal lengths of the vision sensor along the x and y directions, and (u_x, u_y) is the offset of the sensor-element center relative to the image center, all known quantities; r_1, r_2, r_3 are the rotation angles about the x, y, z axes, and t_x, t_y, t_z are the translations along the three coordinate axes. Near the transformation parameters θ_k at iteration k, f(θ) ≈ f(θ_k) + J·(θ − θ_k), where J = ∂f/∂θ evaluated at θ_k is the Jacobian matrix. Seeking the next iterate θ_{k+1} that minimizes the residual r_k = f(θ_k) − u yields the normal equations J^T J (θ_{k+1} − θ_k) = −J^T r_k, and thus θ_{k+1} = θ_k − (J^T J)^{−1} J^T r_k. Applying this iteration until convergence yields the transformation parameters and hence the calibration result.
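Step b's reflectance-to-gray mapping and binarization can be sketched as follows. This is a minimal sketch: the binarization threshold of 60 and the sample reflectance values are illustrative assumptions, not values stated in the patent.

```python
def reflectance_to_gray(d, d_min=None, d_max=None):
    """Map raw laser reflectance values d_i to 8-bit gray levels via
    g_i = 255 * (d_i - d_min) / (d_max - d_min), as in claim step b."""
    d_min = min(d) if d_min is None else d_min
    d_max = max(d) if d_max is None else d_max
    span = (d_max - d_min) or 1  # guard against a constant-reflectance scan
    return [round(255 * (v - d_min) / span) for v in d]

def binarize(gray, threshold=60):
    """Keep only dark (black-cardboard) pixels: 0 = black, 255 = white.
    The threshold value is an assumption for illustration."""
    return [0 if g <= threshold else 255 for g in gray]

refl = [10, 12, 95, 100, 11]          # low reflectance = black cardboard
gray = reflectance_to_gray(refl)      # [0, 6, 241, 255, 3]
mask = binarize(gray)                 # [0, 0, 255, 255, 0]
```

The surviving black pixels (value 0) would then be grouped by neighborhood search into candidate cardboard clusters.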
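Step c's flatness test can be sketched with the eigenvalues of the cluster's 3×3 coordinate covariance. The ratio λ_2/(λ_0+λ_1+λ_2) and the threshold value 0.01 are assumptions for illustration; the claim only fixes the qualitative criterion λ_2 ≪ λ_1 ≈ λ_0 with threshold k_λ.

```python
import numpy as np

def is_planar_cluster(points, k_lambda=0.01):
    """Flatness evaluation of claim step c: a planar patch has one
    covariance eigenvalue much smaller than the other two."""
    p = np.asarray(points, dtype=float)
    centered = p - p.mean(axis=0)
    cov = centered.T @ centered / len(p)          # (1/N) * sum of outer products
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]  # lambda0 >= lambda1 >= lambda2
    return bool(lam[2] / lam.sum() < k_lambda)

rng = np.random.default_rng(0)
# A flat rectangular patch (z = 0) versus a volumetric blob of points.
flat = np.column_stack([rng.uniform(0, 0.3, 200),
                        rng.uniform(0, 0.1, 200),
                        np.zeros(200)])
blob = rng.uniform(0, 0.2, (200, 3))
```

Here `is_planar_cluster(flat)` accepts the planar patch while the blob is rejected as a non-calibration interfering object.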
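Step d's Gauss-Newton iteration θ_{k+1} = θ_k − (J^T J)^{−1} J^T r can be sketched on a deliberately simplified problem: only the translation (t_x, t_y, t_z) is estimated, the rotation is fixed to the identity, and the Jacobian is taken numerically. The intrinsics FX, FY, UX, UY and the sample points are illustrative assumptions.

```python
import numpy as np

FX, FY, UX, UY = 500.0, 500.0, 320.0, 240.0  # assumed known intrinsics

def project(points, t):
    """Pinhole projection: u = fx*(x+tx)/(z+tz) + ux, v analogous."""
    p = points + t
    return np.column_stack([FX * p[:, 0] / p[:, 2] + UX,
                            FY * p[:, 1] / p[:, 2] + UY])

def gauss_newton_translation(points, pixels, t0, iters=20):
    """Iterate t_{k+1} = t_k - (J^T J)^{-1} J^T r with a
    central-difference Jacobian of the projection residual."""
    t = np.asarray(t0, dtype=float)
    eps = 1e-6
    for _ in range(iters):
        r = (project(points, t) - pixels).ravel()
        J = np.empty((r.size, 3))
        for j in range(3):
            dt = np.zeros(3); dt[j] = eps
            J[:, j] = ((project(points, t + dt) -
                        project(points, t - dt)) / (2 * eps)).ravel()
        t -= np.linalg.solve(J.T @ J, J.T @ r)   # normal equations
    return t

pts = np.array([[0.5, 0.2, 3.0], [-0.4, 0.6, 4.0],
                [1.0, -0.3, 5.0], [-0.8, -0.5, 6.0]])
t_true = np.array([0.1, -0.2, 0.3])
pix = project(pts, t_true)                       # synthetic feature matches
t_est = gauss_newton_translation(pts, pix, t0=[0.0, 0.0, 0.0])
```

The full method additionally linearizes over the three rotation angles, giving a 6-column Jacobian, but the normal-equation update is the same.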
CN201510021270.5A 2015-01-16 2015-01-16 A kind of combined calibrating method between 360 degree of panorama laser and multiple vision systems Active CN104574406B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510021270.5A CN104574406B (en) 2015-01-16 2015-01-16 A kind of combined calibrating method between 360 degree of panorama laser and multiple vision systems

Publications (2)

Publication Number Publication Date
CN104574406A CN104574406A (en) 2015-04-29
CN104574406B true CN104574406B (en) 2017-06-23

Family

ID=53090378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510021270.5A Active CN104574406B (en) 2015-01-16 2015-01-16 A kind of combined calibrating method between 360 degree of panorama laser and multiple vision systems

Country Status (1)

Country Link
CN (1) CN104574406B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105067023B (en) * 2015-08-31 2017-11-14 中国科学院沈阳自动化研究所 A kind of panorama three-dimensional laser sensing data calibration method and device
CN107024687B (en) * 2016-02-01 2020-07-24 北京自动化控制设备研究所 Method for quickly calibrating installation error of POS/laser radar in offline manner
CN105798909B (en) * 2016-04-29 2018-08-03 上海交通大学 Robot Zero positioning System and method for based on laser and vision
CN105844658B (en) * 2016-06-06 2018-08-17 南昌航空大学 The visible light and laser sensor extrinsic calibration method detected automatically based on straight line
CN106097348B (en) * 2016-06-13 2019-03-05 大连理工大学 A fusion method of 3D laser point cloud and 2D image
CN106446378B (en) * 2016-09-13 2019-12-03 中国科学院计算技术研究所 A method and system for describing room shape geometric features
CN108120447B (en) * 2016-11-28 2021-08-31 沈阳新松机器人自动化股份有限公司 Multi-laser equipment data fusion method
CN106679671B (en) * 2017-01-05 2019-10-11 大连理工大学 A Navigation Marking Map Recognition Method Based on Laser Data
CN109589179B (en) * 2017-10-02 2023-01-17 吕孝宇 Mixed reality system and method for determining spatial coordinates of a dental instrument
CN109993801A (en) * 2019-03-22 2019-07-09 上海交通大学 A calibration device and calibration method for a two-dimensional camera and a three-dimensional sensor
CN110189381B (en) * 2019-05-30 2021-12-03 北京眸视科技有限公司 External parameter calibration system, method, terminal and readable storage medium
CN110823252B (en) * 2019-11-06 2022-11-18 大连理工大学 An automatic calibration method for multi-line lidar and monocular vision
CN111462251B (en) * 2020-04-07 2021-05-11 深圳金三立视频科技股份有限公司 Camera calibration method and terminal
CN112258517A (en) * 2020-09-30 2021-01-22 无锡太机脑智能科技有限公司 Automatic map repairing method and device for laser radar grid map
CN113759346B (en) * 2020-10-10 2024-06-18 北京京东乾石科技有限公司 Laser radar calibration method and device, electronic equipment and storage medium
CN117953082B (en) * 2024-03-26 2024-07-19 深圳市其域创新科技有限公司 Laser radar and camera combined calibration method, system and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080091891A (en) * 2007-04-10 2008-10-15 삼성중공업 주식회사 Automatic calibration method of robot-based multi-laser vision system
CN101493318A (en) * 2008-09-16 2009-07-29 北京航空航天大学 Rudder deflection angle synchronization dynamic measurement system and implementing method thereof
CN101698303A (en) * 2009-09-11 2010-04-28 大连理工大学 Automatic calibration method between three-dimensional laser and monocular vision
CN101799271A (en) * 2010-04-01 2010-08-11 哈尔滨工业大学 Method for obtaining camera calibration point under large viewing field condition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhuang Yan et al., "Automatic extrinsic self-calibration for fusing data from monocular vision and 3-D laser scanner," IEEE Transactions on Instrumentation and Measurement, vol. 63, no. 7, pp. 1874–1876, Jul. 2014. *

Similar Documents

Publication Publication Date Title
CN104574406B (en) A kind of combined calibrating method between 360 degree of panorama laser and multiple vision systems
Kang et al. Automatic targetless camera–lidar calibration by aligning edge with gaussian mixture model
CN109961468B (en) Volume measurement method and device based on binocular vision and storage medium
Scaramuzza et al. Extrinsic self calibration of a camera and a 3D laser range finder from natural scenes
CN109564690B (en) Estimating the size of an enclosed space using a multi-directional camera
Liang et al. Image based localization in indoor environments
Alismail et al. Automatic calibration of a range sensor and camera system
Treible et al. Cats: A color and thermal stereo benchmark
Alismail et al. Automatic calibration of spinning actuated lidar internal parameters
WO2018227576A1 (en) Method and system for detecting ground shape, method for drone landing, and drone
CN109934230A (en) A Radar Point Cloud Segmentation Method Based on Visual Aid
CN102713671A (en) Point cloud data processing device, point cloud data processing method, and point cloud data processing program
CN110230979B (en) A three-dimensional target and a three-dimensional color digital system calibration method thereof
CN109035207B (en) Density-adaptive laser point cloud feature detection method
CN110243307A (en) An automatic three-dimensional color imaging and measurement system
CN114140539B (en) Method and device for acquiring position of indoor object
TWI755765B (en) System for calibrating visual coordinate system and depth coordinate system, calibration method and calibration device
CN109752855A (en) A kind of method of hot spot emitter and detection geometry hot spot
Hoang et al. Simple and efficient method for calibration of a camera and 2D laser rangefinder
CN114004894A (en) A method for determining the spatial relationship between lidar and binocular camera based on three calibration boards
CN111060006A (en) A Viewpoint Planning Method Based on 3D Model
CN114137564A (en) Method and device for automatic identification and positioning of indoor objects
CN112802114B (en) Multi-vision sensor fusion device, method thereof and electronic equipment
CN115359130A (en) Radar and camera combined calibration method and device, electronic equipment and storage medium
Sui et al. Extrinsic calibration of camera and 3D laser sensor system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant