CN117934831A - A 3D semantic segmentation method based on camera and laser fusion - Google Patents
A 3D semantic segmentation method based on camera and laser fusion
- Publication number
- CN117934831A (application CN202311872786.1A)
- Authority
- CN
- China
- Prior art keywords
- camera
- point cloud
- cloud data
- laser
- laser point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 230000011218 segmentation Effects 0.000 title claims abstract description 86
- 230000004927 fusion Effects 0.000 title claims abstract description 84
- 238000000034 method Methods 0.000 title claims abstract description 52
- 238000013527 convolutional neural network Methods 0.000 claims description 14
- 238000004364 calculation method Methods 0.000 claims description 11
- 238000010606 normalization Methods 0.000 claims description 8
- 238000011176 pooling Methods 0.000 claims description 7
- 230000007423 decrease Effects 0.000 claims description 4
- 239000011159 matrix material Substances 0.000 claims description 4
- 238000011478 gradient descent method Methods 0.000 claims description 3
- 238000012544 monitoring process Methods 0.000 abstract 1
- 230000006870 function Effects 0.000 description 19
- 238000010586 diagram Methods 0.000 description 9
- 238000000605 extraction Methods 0.000 description 2
- 238000003709 image segmentation Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000007812 deficiency Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000001788 irregular Effects 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 230000000644 propagated effect Effects 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Molecular Biology (AREA)
- Mathematical Physics (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Biomedical Technology (AREA)
- General Engineering & Computer Science (AREA)
- Electromagnetism (AREA)
- Radar, Positioning & Navigation (AREA)
- Computer Networks & Wireless Communication (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Remote Sensing (AREA)
- Image Processing (AREA)
Abstract
Description
Technical Field
The present invention relates to the technical field of autonomous driving and semantic segmentation, and in particular to a three-dimensional semantic segmentation method based on camera and laser fusion.
Background
In the field of autonomous driving, semantic segmentation is essential for scene understanding. The semantic segmentation task assigns a semantic label to every input camera pixel and laser point. Two main families of methods currently exist: camera-based and lidar-based methods.
A camera image contains three channels of color data and therefore carries richer appearance information, such as color and texture. However, as a passive sensor the camera is easily affected by lighting conditions and weather, and because it is a 2D sensor without depth information, accurate distance information about the surrounding environment is usually difficult to obtain. Lidar is an active sensor: it emits laser pulses and receives their reflections to compute accurate distances, and its performance is almost unaffected by lighting conditions. However, because the point cloud is sparse, irregularly distributed, and lacks texture, segmentation quality is poor for small objects, distant objects, and structurally similar scenes.
Existing fusion schemes that combine camera images and laser point clouds exploit the advantages of both camera-based and lidar-based methods, using image texture together with laser range to achieve 3D semantic segmentation. However, these methods still suffer from the lack of texture features on the lidar side and the lack of range information on the image side.
Summary of the Invention
In view of the above deficiencies in the prior art, the present invention provides a three-dimensional semantic segmentation method based on camera and laser fusion.
To achieve the above object, the present invention adopts the following technical solution:
A three-dimensional semantic segmentation method based on camera and laser fusion, comprising the following steps:
S1. Input the camera image into the camera module of the 3D semantic segmentation network to extract image features and obtain a camera image feature map of the original size;
S2. Input the laser point cloud data into the laser module of the 3D semantic segmentation network to extract point cloud features and obtain a laser point cloud feature map of the original size;
S3. Input the camera image feature map from step S1 and the laser point cloud feature map from step S2 into the fusion module of the 3D semantic segmentation network for feature fusion, obtaining fused camera image features and fused laser point cloud features;
S4. Feed the fused image features and fused laser point cloud features obtained in step S3 back into the camera module and the laser module, respectively, to obtain a camera image feature map and a laser point cloud feature map;
S5. Input the camera image feature map and the laser point cloud feature map obtained in step S4 into the supervision module of the 3D semantic segmentation network, and compute the loss function in either a self-supervised or a supervised mode;
S6. According to the loss function computed in step S5, compute the gradients of the camera module, laser module, fusion module, and supervision module, and update the parameter weights of the 3D semantic segmentation network by gradient descent to obtain a trained 3D semantic segmentation network;
S7. Acquire camera images and laser point cloud data, input them into the 3D semantic segmentation network trained in step S6, and obtain the semantic segmentation results for the laser point cloud and the camera image.
Further, the camera module and the laser module each consist of an encoder and a decoder, where the feature map size in the encoder decreases layer by layer, the feature map size in the decoder increases layer by layer, and skip connections are added between encoder and decoder layers whose feature maps have the same size.
Further, step S1 specifically comprises:
S11. Acquire the camera image captured by the camera, input it into the encoder, and extract local features of the camera image with a convolutional neural network to obtain a camera image feature map;
S12. Using the camera image feature map obtained in step S11, reduce its size layer by layer with pooling layers;
S13. Input the reduced-size camera image feature map into the decoder, and restore its size layer by layer using convolutional layers and bilinear upsampling to obtain a camera image feature map of the original size.
Further, step S2 specifically comprises:
S21. Acquire the laser point cloud data collected by the lidar and project it onto the camera plane to obtain two-dimensional laser point cloud data;
S22. Input the two-dimensional laser point cloud data obtained in step S21 into the encoder, and extract its local features with a convolutional neural network to obtain a laser point cloud feature map;
S23. Using the laser point cloud feature map obtained in step S22, reduce its size layer by layer with pooling layers;
S24. Input the reduced-size laser point cloud feature map into the decoder, and restore its size layer by layer using convolutional layers and bilinear upsampling to obtain a laser point cloud feature map of the original size.
Further, the projection of the laser point cloud onto the camera plane is computed as:
[x'_i, y'_i, z'_i]^T = K × T_r × [x_i, y_i, z_i, 1]^T
M_l[u_i][v_i] = 1
where x'_i, y'_i, z'_i denote the position of the i-th laser point in the camera coordinate system, T denotes the transpose, K denotes the camera intrinsic matrix, T_r denotes the lidar-to-camera transformation matrix, x_i, y_i, z_i denote the position of the i-th laser point on the x, y, and z axes, u_i and v_i denote the vertical and horizontal indices of the i-th laser point on the camera plane, and M_l denotes the lidar projection mask.
Further, the fusion module consists of a concatenation module, a convolutional layer, and a sliding-window attention module, where the sliding-window attention module consists of a first sliding-window attention layer and a second sliding-window attention layer; the first sliding-window attention layer consists of a layer normalization module, a W-MSA module, and a multilayer perceptron module, and the second sliding-window attention layer consists of a layer normalization module, an SW-MSA module, and a multilayer perceptron module.
Further, step S3 specifically comprises:
S31. Input the camera image feature map from step S1 and the laser point cloud feature map from step S2 into the concatenation module of the fusion module to obtain the concatenated camera-lidar features;
S32. Input the concatenated camera-lidar features obtained in step S31 into the convolutional layer to obtain the fused camera-lidar features;
S33. Input the fused camera-lidar features obtained in step S32 into the sliding-window attention module to obtain the camera-lidar fused attention features;
S34. Blend the fused attention features from step S33 and the fused features from step S32, weighted by scale factors, into the image feature map from step S1 and the laser point cloud feature map from step S2 to obtain the fused camera image features and fused laser point cloud features.
Further, the fused image features and fused laser point cloud features in step S34 are computed as:
C_fusion = C_origin + a_1 × SelfAttention × FusionFeature
L_fusion = L_origin + a_2 × SelfAttention × FusionFeature
where C_fusion denotes the fused camera image features, C_origin denotes the original-size camera image features, a_1 and a_2 denote fusion scale factors, SelfAttention denotes the camera-lidar fused attention features, FusionFeature denotes the camera-lidar fused features, L_fusion denotes the fused laser point cloud features, and L_origin denotes the original-size laser point cloud features.
Further, when the camera image feature map and the laser point cloud feature map obtained in step S4 are input into the supervision module and the loss function is computed in self-supervised mode, the specific process is:
The supervision module generates pseudo-labels with a confidence-augmented PIDNet network while retaining only high-confidence pixels and laser points; by applying a camera mask and a lidar mask, the self-supervised loss function is obtained as:
L_self-supervised = L_foc1 + L_lov1 + L_foc2 + L_lov2 + L_kl
where L_self-supervised denotes the self-supervised loss, L_kl denotes the one-way KL divergence, L_foc1 and L_lov1 denote the focal loss and Lovász loss between the camera-branch prediction and the pseudo-labels, L_foc2 and L_lov2 denote the focal loss and Lovász loss between the lidar-branch prediction and the pseudo-labels, u and v denote the length and width of the prediction feature map, C denotes the confidence, focalloss(·) denotes the focal loss function, Pred_camera denotes the camera-branch prediction, Pred_Lidar denotes the lidar-branch prediction, label denotes the pseudo-label value, M_θ1 denotes the camera confidence mask, M_θ2 denotes the lidar confidence mask, and M_l denotes the lidar projection mask.
Further, when the camera image feature map and the laser point cloud feature map obtained in step S4 are input into the supervision module and the loss function is computed in supervised mode, the specific process is:
The supervision module adjusts the parameter weights using the focal loss and the Lovász loss, giving the supervised loss function:
L_supervised = L_foc1 + L_lov1 + L_foc2 + L_lov2
where L_supervised denotes the supervised loss, L_foc1 and L_lov1 denote the focal loss and Lovász loss between the camera-branch prediction and the ground-truth labels, and L_foc2 and L_lov2 denote the focal loss and Lovász loss between the lidar-branch prediction and the ground-truth labels.
The present invention has the following beneficial effects:
1. The proposed 3D semantic segmentation method based on camera and laser fusion improves segmentation accuracy by effectively combining the texture information of camera images with the range information of lidar; for small objects, whose laser point clouds are sparse, introducing camera image information makes the predictions more accurate.
2. The fusion module uses a sliding-window attention mechanism, giving the 3D semantic segmentation network stronger robustness under drastic changes in lighting and color.
3. Pseudo-labels are generated by a confidence-augmented PIDNet network, so the 3D semantic segmentation network can still be trained across datasets and modalities without any manually annotated point cloud labels, improving its prediction accuracy.
Brief Description of the Drawings
Fig. 1 is a flow chart of the proposed 3D semantic segmentation method based on camera and laser fusion;
Fig. 2 is a schematic diagram of the 3D semantic segmentation network structure;
Fig. 3 is a schematic diagram of the fusion module structure;
Fig. 4 is a schematic diagram of the confidence-augmented PIDNet network in the supervision module;
Fig. 5 is a schematic diagram of the segmentation results of the 3D semantic segmentation network.
Detailed Description
Specific embodiments of the present invention are described below so that those skilled in the art can understand the invention. It should be clear, however, that the invention is not limited to the scope of these specific embodiments. For those of ordinary skill in the art, various changes are obvious as long as they fall within the spirit and scope of the invention as defined by the appended claims, and all inventions and creations using the concept of the present invention are protected.
As shown in Fig. 1, a 3D semantic segmentation method based on camera and laser fusion comprises the following steps S1-S7:
As shown in Fig. 2, the 3D semantic segmentation network comprises a camera module, a laser module, a fusion module, and a supervision module. The network adopts a two-stream architecture with two inputs: the camera image is fed into the camera module and the lidar point cloud data into the laser module. The camera module and the laser module have similar structures; each consists of an encoder whose feature map size decreases layer by layer and a decoder whose feature map size increases layer by layer. In the encoder stage, a convolutional neural network extracts local features of the camera image or the laser point cloud data, and pooling layers reduce the feature map size; in the decoder stage, convolutional layers and bilinear upsampling restore the feature map size layer by layer until it matches the size of the original input camera image or projected point cloud image. In addition, this embodiment adds skip connections between encoder and decoder feature maps of the same size; as shown in Fig. 2, within the camera module or laser module, features propagate directly from the encoder to the decoder along the paths indicated by the dashed lines.
S1. Input the camera image into the camera module of the 3D semantic segmentation network to extract image features and obtain a camera image feature map of the original size.
S2. Input the laser point cloud data into the laser module of the 3D semantic segmentation network to extract point cloud features and obtain a laser point cloud feature map of the original size.
Specifically, the camera module and the laser module each consist of an encoder and a decoder, where the feature map size in the encoder decreases layer by layer, the feature map size in the decoder increases layer by layer, and skip connections are added between encoder and decoder layers whose feature maps have the same size.
In this embodiment, the layer-by-layer design enlarges the receptive field of the convolutions and improves segmentation performance for objects of different sizes, while the skip connections preserve the edge and position information of the original image features, making the semantic segmentation more precise. The number of encoder and decoder layers can be set freely and fine-tuned according to the input image size; four layers are typical.
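As an illustration of such a branch, the sketch below builds a small four-level encoder-decoder with skip connections in PyTorch; the channel widths, class names, and normalization choices are assumptions for illustration, not the patent's exact implementation.

```python
# Minimal sketch of one branch (camera or laser module), assuming a 4-level
# U-Net-style layout; channel widths and module names are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    # Two 3x3 convolutions used as the per-level local feature extractor.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class EncoderDecoderBranch(nn.Module):
    def __init__(self, in_ch, num_classes, widths=(32, 64, 128, 256)):
        super().__init__()
        self.encoders = nn.ModuleList()
        c_prev = in_ch
        for w in widths:
            self.encoders.append(conv_block(c_prev, w))
            c_prev = w
        self.pool = nn.MaxPool2d(2)
        self.decoders = nn.ModuleList()
        for w_skip, w_up in zip(widths[-2::-1], widths[:0:-1]):
            # Decoder input: upsampled features concatenated with the same-size skip features.
            self.decoders.append(conv_block(w_up + w_skip, w_skip))
        self.head = nn.Conv2d(widths[0], num_classes, 1)

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.encoders):
            x = enc(x)
            if i < len(self.encoders) - 1:
                skips.append(x)          # keep same-size features for the skip path
                x = self.pool(x)         # pooling shrinks the feature map layer by layer
        for dec, skip in zip(self.decoders, reversed(skips)):
            x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)
            x = dec(torch.cat([x, skip], dim=1))   # skip connection restores edge detail
        return self.head(x)              # feature map back at the original input size
```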
Specifically, step S1 comprises S11-S13:
S11. Acquire the camera image captured by the camera, input it into the encoder, and extract local features of the camera image with a convolutional neural network to obtain a camera image feature map.
S12. Using the camera image feature map obtained in step S11, reduce its size layer by layer with pooling layers.
S13. Input the reduced-size camera image feature map into the decoder, and restore its size layer by layer using convolutional layers and bilinear upsampling to obtain a camera image feature map of the original size.
Specifically, step S2 comprises S21-S24:
S21. Acquire the laser point cloud data collected by the lidar and project it onto the camera plane to obtain two-dimensional laser point cloud data.
Specifically, the projection of the laser point cloud onto the camera plane is computed as:
[x'_i, y'_i, z'_i]^T = K × T_r × [x_i, y_i, z_i, 1]^T
M_l[u_i][v_i] = 1
where x'_i, y'_i, z'_i denote the position of the i-th laser point in the camera coordinate system, T denotes the transpose, K denotes the camera intrinsic matrix, T_r denotes the lidar-to-camera transformation matrix, x_i, y_i, z_i denote the position of the i-th laser point on the x, y, and z axes, u_i and v_i denote the vertical and horizontal indices of the i-th laser point on the camera plane, and M_l denotes the lidar projection mask.
In this embodiment, since the camera data is two-dimensional and the laser point cloud data is three-dimensional, the two spaces are not aligned; moreover, the field of view of a mechanically rotating lidar is usually larger than that of the camera. The laser point cloud is therefore projected onto the camera plane and converted into two-dimensional laser point cloud data. The formula [x'_i, y'_i, z'_i]^T = K × T_r × [x_i, y_i, z_i, 1]^T projects a laser point onto the camera image: the laser point is P_i = {x_i, y_i, z_i}^T, where x_i, y_i, z_i are its coordinates in 3D space; T_r is the lidar-to-camera transformation matrix, i.e., it describes the spatial relationship between the lidar and the camera; K is the camera intrinsic matrix, i.e., the mapping from 3D space to the 2D image plane; and [x'_i, y'_i, z'_i]^T is the 3D point in the camera coordinate system. Scaling by z'_i then yields [u_i, v_i, 1]^T, the 2D coordinates of the laser point on the camera plane. Because the laser point cloud is usually sparse, not every image pixel has a corresponding laser point, so M_l[u_i][v_i] = 1 marks the positions onto which a laser point is mapped.
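A minimal NumPy sketch of this projection step is shown below; the argument names, the rounding to integer pixel indices, and the bounds check against an assumed image size are illustrative choices, while K and T_r play the roles of the camera intrinsics and the lidar-to-camera transformation defined above.

```python
# Sketch of projecting lidar points onto the camera plane, assuming K (3x3 intrinsics)
# and Tr (3x4 lidar-to-camera extrinsics) are given; names and shapes are illustrative.
import numpy as np

def project_to_camera(points, K, Tr, img_h, img_w):
    """points: (N, 3) lidar points; returns pixel indices and the lidar mask M_l."""
    N = points.shape[0]
    pts_h = np.hstack([points, np.ones((N, 1))])   # homogeneous coordinates, shape (N, 4)
    cam = (K @ Tr @ pts_h.T).T                     # [x'_i, y'_i, z'_i] per point
    keep = cam[:, 2] > 0                           # keep points in front of the camera
    cam = cam[keep]
    uv = cam[:, :2] / cam[:, 2:3]                  # scale by z'_i -> [u_i, v_i]
    u = np.round(uv[:, 1]).astype(int)             # vertical index (patent's u_i convention)
    v = np.round(uv[:, 0]).astype(int)             # horizontal index (patent's v_i convention)
    inside = (u >= 0) & (u < img_h) & (v >= 0) & (v < img_w)
    M_l = np.zeros((img_h, img_w), dtype=np.uint8)
    M_l[u[inside], v[inside]] = 1                  # M_l[u_i][v_i] = 1 where a laser point maps
    return u[inside], v[inside], M_l
```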
S22. Input the two-dimensional laser point cloud data obtained in step S21 into the encoder, and extract its local features with a convolutional neural network to obtain a laser point cloud feature map.
S23. Using the laser point cloud feature map obtained in step S22, reduce its size layer by layer with pooling layers.
S24. Input the reduced-size laser point cloud feature map into the decoder, and restore its size layer by layer using convolutional layers and bilinear upsampling to obtain a laser point cloud feature map of the original size.
S3. Input the camera image feature map from step S1 and the laser point cloud feature map from step S2 into the fusion module of the 3D semantic segmentation network for feature fusion to obtain the fused camera image features and fused laser point cloud features.
As shown in Fig. 3, which is a schematic diagram of the fusion module structure: in Fig. 3(a), the fusion module comprises a concatenation module (C), a convolutional layer (Conv), and a sliding-window attention module (Swin Transformer). In this embodiment, the original-size image features (C_origin) from the camera image feature map and the original-size laser point cloud features (L_origin) from the laser point cloud feature map are input into the concatenation module to obtain the concatenated features (Concat Feature); the concatenated features are then input into the convolutional layer to obtain the fused features (Fusion Feature); the fused features are input into the sliding-window attention module to obtain the fused attention features (Self-Attention Feature); finally, the fused attention features are blended, weighted by scale factors, with the original-size image features and the original-size laser point cloud features to obtain the fused camera image features (C_fusion) and the fused laser point cloud features (L_fusion). In Fig. 3(b), the sliding-window attention module contains a first and a second sliding-window attention layer: the first consists of a layer normalization module (LN), a W-MSA module (Window Multi-Head Self-Attention), and a multilayer perceptron (MLP); the second consists of a layer normalization module (LN), an SW-MSA module (Shifted Window Multi-Head Self-Attention), and a multilayer perceptron (MLP). The fused features are first flattened into patch features (Patch Feature) and then pass through the first and second sliding-window attention layers. The only difference between the two layers is that the first uses the W-MSA structure, which reduces computation by computing self-attention only within each window, while the second uses the SW-MSA structure, which provides information exchange between windows by shifting the windows.
Specifically, the fusion module consists of a concatenation module, a convolutional layer, and a sliding-window attention module, where the sliding-window attention module consists of a first sliding-window attention layer and a second sliding-window attention layer; the first sliding-window attention layer consists of a layer normalization module, a W-MSA module, and a multilayer perceptron module, and the second sliding-window attention layer consists of a layer normalization module, an SW-MSA module, and a multilayer perceptron module.
Specifically, step S3 comprises S31-S34:
S31. Input the camera image feature map from step S1 and the laser point cloud feature map from step S2 into the concatenation module of the fusion module to obtain the concatenated camera-lidar features.
S32. Input the concatenated camera-lidar features obtained in step S31 into the convolutional layer to obtain the fused camera-lidar features.
S33. Input the fused camera-lidar features obtained in step S32 into the sliding-window attention module to obtain the camera-lidar fused attention features.
S34. Blend the fused attention features from step S33 and the fused features from step S32, weighted by scale factors, into the image feature map from step S1 and the laser point cloud feature map from step S2 to obtain the fused camera image features and fused laser point cloud features.
Specifically, the fused image features and fused laser point cloud features in step S34 are computed as:
C_fusion = C_origin + a_1 × SelfAttention × FusionFeature
L_fusion = L_origin + a_2 × SelfAttention × FusionFeature
where C_fusion denotes the fused camera image features, C_origin denotes the original-size camera image features, a_1 and a_2 denote fusion scale factors, SelfAttention denotes the camera-lidar fused attention features, FusionFeature denotes the camera-lidar fused features, L_fusion denotes the fused laser point cloud features, and L_origin denotes the original-size laser point cloud features.
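The sketch below shows one possible reading of this fusion step in PyTorch, with a single standard multi-head self-attention block standing in for the W-MSA/SW-MSA pair; the class name, default scale factors, and the use of nn.MultiheadAttention are assumptions for illustration rather than the patent's exact Swin-Transformer implementation.

```python
# Sketch of C_fusion = C_origin + a1 * SelfAttention * FusionFeature (and likewise for
# L_fusion), using plain multi-head attention as a stand-in for the W-MSA/SW-MSA layers.
import torch
import torch.nn as nn

class FusionModule(nn.Module):
    def __init__(self, channels, a1=0.5, a2=0.5, heads=4):
        super().__init__()
        self.a1, self.a2 = a1, a2                       # fusion scale factors (assumed values)
        self.conv = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.LayerNorm(channels),
                                 nn.Linear(channels, channels), nn.GELU(),
                                 nn.Linear(channels, channels))

    def forward(self, c_origin, l_origin):
        # 1) concatenate camera and lidar features, 2) fuse them with a convolution
        fusion = self.conv(torch.cat([c_origin, l_origin], dim=1))      # FusionFeature
        b, c, h, w = fusion.shape
        tokens = fusion.flatten(2).transpose(1, 2)                      # flatten to patch features
        x = self.norm(tokens)
        attn_out, _ = self.attn(x, x, x)                                # window-free self-attention stand-in
        tokens = tokens + attn_out
        tokens = tokens + self.mlp(tokens)
        self_attention = tokens.transpose(1, 2).reshape(b, c, h, w)     # Self-Attention Feature
        # 3) re-inject the attention-weighted fusion into each original branch
        c_fusion = c_origin + self.a1 * self_attention * fusion
        l_fusion = l_origin + self.a2 * self_attention * fusion
        return c_fusion, l_fusion
```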
S4. Feed the fused image features and fused laser point cloud features obtained in step S3 back into the camera module and the laser module, respectively, to obtain a camera image feature map and a laser point cloud feature map.
In this embodiment, the image features and laser point cloud features fused by the fusion module are fed back into the camera module and the laser module, respectively, where feature extraction continues. The purpose of the fusion module is to enable effective data exchange between the two branches. The fusion module adopts a sliding-window attention structure; compared with a purely convolutional approach, the global attention mechanism not only selects features more effectively, but also weights the fused image and point cloud features back onto the original image and laser features, ready for the next stage of feature extraction.
S5. Input the camera image feature map and the laser point cloud feature map obtained in step S4 into the supervision module of the 3D semantic segmentation network, and compute the loss function in either self-supervised or supervised mode.
In this embodiment, the supervision module operates in two modes: a supervised mode and a self-supervised mode. The supervised mode uses ground-truth labels, i.e., it trains on the true labels of the laser point cloud in the original dataset and supervises both the camera predictions and the lidar predictions. The self-supervised mode uses pseudo-labels: when no ground-truth point cloud labels are available, a 2D image segmentation network pre-trained on other datasets runs inference on the images, and the pseudo-label results are kept together with their confidences; the pseudo-labels and confidences jointly supervise network convergence, and the pseudo-labels supervise both the camera predictions and the lidar predictions.
Specifically, when the camera image feature map and the laser point cloud feature map obtained in step S4 are input into the supervision module and the loss function is computed in self-supervised mode, the process is:
The supervision module generates pseudo-labels with a confidence-augmented PIDNet network while retaining only high-confidence pixels and laser points; by applying a camera mask and a lidar mask, the self-supervised loss function is obtained as:
L_self-supervised = L_foc1 + L_lov1 + L_foc2 + L_lov2 + L_kl
where L_self-supervised denotes the self-supervised loss, L_kl denotes the one-way KL divergence, L_foc1 and L_lov1 denote the focal loss and Lovász loss between the camera-branch prediction and the pseudo-labels, L_foc2 and L_lov2 denote the focal loss and Lovász loss between the lidar-branch prediction and the pseudo-labels, u and v denote the length and width of the prediction feature map, C denotes the confidence, focalloss(·) denotes the focal loss function, Pred_camera denotes the camera-branch prediction, Pred_Lidar denotes the lidar-branch prediction, label denotes the pseudo-label value, M_θ1 denotes the camera confidence mask, M_θ2 denotes the lidar confidence mask, and M_l denotes the lidar projection mask.
In this embodiment, the camera mask or the lidar mask is activated only when the confidence exceeds the threshold θ1 or θ2, respectively.
As shown in Fig. 4, which illustrates the structure of the confidence-augmented PIDNet network in the supervision module: in this embodiment, pseudo-labels generated in the self-supervised mode are used to train the network and to supervise the camera predictions and the lidar predictions. The confidence-augmented PIDNet network shown in Fig. 4 computes both the confidences and the pseudo-labels; it is used only to produce pseudo-labels and does not take part in training the 3D semantic segmentation network. The confidence C is defined in terms of the entropy E, which is computed from the predicted probabilities p: the more concentrated the network output is on a single class, the smaller the entropy E and the closer the confidence C is to 1. In the confidence formula, n denotes the number of semantic segmentation classes.
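A hedged sketch of this confidence-based pseudo-labelling is given below; the entropy-normalized form C = 1 - E/log(n), the threshold value, and the function name are assumptions consistent with the behaviour described above (the patent's exact confidence formula is not reproduced in this text), and the PIDNet itself is treated as a black-box network that produces the logits.

```python
# Hedged sketch of pseudo-label generation with confidence masks; the confidence
# formula C = 1 - E / log(n) is an assumption matching the described behaviour
# (more concentrated output -> smaller entropy -> C closer to 1).
import torch
import torch.nn.functional as F

def pseudo_labels_with_confidence(logits, theta=0.8):
    """logits: (B, n, H, W) output of the pre-trained pseudo-label network (e.g. PIDNet)."""
    n = logits.shape[1]
    p = F.softmax(logits, dim=1)
    entropy = -(p * torch.log(p.clamp_min(1e-8))).sum(dim=1)        # E, shape (B, H, W)
    confidence = 1.0 - entropy / torch.log(torch.tensor(float(n)))  # C in [0, 1] (assumed form)
    labels = p.argmax(dim=1)                                        # pseudo-label per pixel
    mask = (confidence > theta).float()                             # M_theta: keep high-confidence pixels
    return labels, confidence, mask
```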
Specifically, when the camera image feature map and the laser point cloud feature map obtained in step S4 are input into the supervision module and the loss function is computed in supervised mode, the process is:
The supervision module adjusts the parameter weights using the focal loss and the Lovász loss, giving the supervised loss function:
L_supervised = L_foc1 + L_lov1 + L_foc2 + L_lov2
where L_supervised denotes the supervised loss, L_foc1 and L_lov1 denote the focal loss and Lovász loss between the camera-branch prediction and the ground-truth labels, and L_foc2 and L_lov2 denote the focal loss and Lovász loss between the lidar-branch prediction and the ground-truth labels.
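The sketch below assembles the two loss modes from per-branch terms; focal_loss and lovasz_loss are placeholder callables standing in for the patent's focal and Lovász losses, the weight argument and the direction of the one-way KL term are assumptions, and only the overall structure of L_supervised and L_self-supervised follows the formulas above.

```python
# Sketch of assembling the supervised and self-supervised losses; focal_loss and
# lovasz_loss are placeholder callables (their exact formulations are not shown here).
import torch.nn.functional as F

def supervised_loss(pred_cam, pred_lidar, gt_labels, focal_loss, lovasz_loss):
    # L_supervised = L_foc1 + L_lov1 + L_foc2 + L_lov2 (ground-truth labels)
    return (focal_loss(pred_cam, gt_labels) + lovasz_loss(pred_cam, gt_labels)
            + focal_loss(pred_lidar, gt_labels) + lovasz_loss(pred_lidar, gt_labels))

def self_supervised_loss(pred_cam, pred_lidar, pseudo_labels, mask_cam, mask_lidar,
                         focal_loss, lovasz_loss):
    # Pseudo-label terms, applied only where the confidence masks are active; mask_lidar
    # is assumed to already combine the confidence mask with the projection mask M_l.
    l_cam = focal_loss(pred_cam, pseudo_labels, weight=mask_cam) \
            + lovasz_loss(pred_cam, pseudo_labels, weight=mask_cam)
    l_lidar = focal_loss(pred_lidar, pseudo_labels, weight=mask_lidar) \
              + lovasz_loss(pred_lidar, pseudo_labels, weight=mask_lidar)
    # One-way KL divergence between the two branches (direction assumed: lidar -> camera).
    l_kl = F.kl_div(F.log_softmax(pred_lidar, dim=1),
                    F.softmax(pred_cam, dim=1), reduction="batchmean")
    return l_cam + l_lidar + l_kl
```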
S6. According to the loss function computed in step S5, compute the gradients of the camera module, laser module, fusion module, and supervision module, and update the parameter weights of the 3D semantic segmentation network by gradient descent to obtain a trained 3D semantic segmentation network.
In this embodiment, the computed loss is back-propagated layer by layer through the camera module, laser module, fusion module, and supervision module to obtain their gradients, and the parameter weights of the 3D semantic segmentation network are updated until the network converges, yielding the trained 3D semantic segmentation network.
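A minimal training step consistent with this description might look as follows; the optimizer choice, learning rate, and module names are illustrative assumptions rather than details from the patent.

```python
# Sketch of one gradient-descent update over the whole network; optimizer settings
# and variable names are assumed for illustration.
import torch

def train_step(model, optimizer, camera_image, lidar_input, labels_or_pseudo, loss_fn):
    optimizer.zero_grad()
    pred_cam, pred_lidar = model(camera_image, lidar_input)     # two-stream forward pass
    loss = loss_fn(pred_cam, pred_lidar, labels_or_pseudo)      # supervised or self-supervised loss
    loss.backward()                                             # gradients for all four modules
    optimizer.step()                                            # gradient-descent weight update
    return loss.item()

# Example wiring (assumed):
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
```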
S7. Acquire camera images and laser point cloud data, input them into the 3D semantic segmentation network trained in step S6, and obtain the semantic segmentation results for the laser point cloud and the camera image.
In this embodiment, the proposed 3D semantic segmentation method was compared experimentally with a lidar-only segmentation method. The proposed method improves performance by 2.6% over the lidar-only method, and the introduction of images is particularly advantageous for small-object segmentation. The detailed segmentation results are shown in Table 1:
Table 1. Results on the SemanticKITTI dataset
* Implemented by us; the front-view lidar data comes from this implementation. + Other results are taken from the benchmark. Bold marks the best result; underline marks the second best.
As shown in Fig. 5, which illustrates the segmentation results of the 3D semantic segmentation network: Fig. 5 shows the 3D semantic segmentation results obtained by applying the proposed camera-laser fusion method to camera images and laser point cloud data. In Fig. 5, tree shadows on the left side of the road make the ground hard to distinguish, yet the proposed method still identifies the road boundary accurately; likewise, a distant cyclist is accurately predicted by the network even though the reflected point cloud is sparse.
Specific embodiments have been used herein to explain the principles and implementations of the present invention; the above description of the embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application based on the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
Those of ordinary skill in the art will appreciate that the embodiments described here are intended to help readers understand the principles of the present invention, and it should be understood that the scope of protection is not limited to such specific statements and embodiments. Based on the technical teachings disclosed herein, those of ordinary skill in the art can make various other specific variations and combinations without departing from the essence of the present invention, and such variations and combinations remain within the scope of protection of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311872786.1A CN117934831A (en) | 2023-12-29 | 2023-12-29 | A 3D semantic segmentation method based on camera and laser fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311872786.1A CN117934831A (en) | 2023-12-29 | 2023-12-29 | A 3D semantic segmentation method based on camera and laser fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117934831A true CN117934831A (en) | 2024-04-26 |
Family
ID=90762311
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311872786.1A Pending CN117934831A (en) | 2023-12-29 | 2023-12-29 | A 3D semantic segmentation method based on camera and laser fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117934831A (en) |
- 2023-12-29: CN application CN202311872786.1A published as CN117934831A (status: active, pending)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118262385A (en) * | 2024-05-30 | 2024-06-28 | 齐鲁工业大学(山东省科学院) | Person Re-ID Method Based on Dispatching Sequence and Training Based on Camera Difference |
CN119665840A (en) * | 2024-12-09 | 2025-03-21 | 成都运达科技股份有限公司 | A method, system and medium for detecting the distance from the center line of a coupler to the rail surface |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021233029A1 (en) | Simultaneous localization and mapping method, device, system and storage medium | |
CN117934831A (en) | A 3D semantic segmentation method based on camera and laser fusion | |
CN111508013B (en) | Stereo matching method | |
WO2024217115A1 (en) | Three-dimensional object detection method based on multi-modal fusion and deep attention mechanism | |
CN114066960B (en) | Three-dimensional reconstruction method, point cloud fusion method, device, equipment and storage medium | |
CN113409459A (en) | Method, device and equipment for producing high-precision map and computer storage medium | |
CN113256698B (en) | Monocular 3D reconstruction method with depth prediction | |
CN101908230A (en) | A 3D Reconstruction Method Based on Region Depth Edge Detection and Binocular Stereo Matching | |
CN107170042A (en) | A kind of many three-dimensional rebuilding methods regarding Stereo matching of unordered graph picture | |
CN117496312A (en) | Three-dimensional multi-target detection method based on multi-mode fusion algorithm | |
CN113112547A (en) | Robot, repositioning method thereof, positioning device and storage medium | |
US20250139874A1 (en) | Learning method, information processing device, and recording medium | |
CN113361447A (en) | Lane line detection method and system based on sliding window self-attention mechanism | |
CN116071721A (en) | Transformer-based high-precision map real-time prediction method and system | |
CN115511759A (en) | A Point Cloud Image Depth Completion Method Based on Cascade Feature Interaction | |
CN114494589A (en) | Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and computer-readable storage medium | |
CN113269689B (en) | A depth image completion method and system based on normal vector and Gaussian weight constraints | |
CN116222577B (en) | Closed-loop detection method, training method, system, electronic device and storage medium | |
CN116128966A (en) | A Semantic Localization Method Based on Environmental Objects | |
CN118608692A (en) | A method for constructing a four-dimensional base for digital twin cities based on four-dimensional space-time increments | |
CN115393712B (en) | SAR image road extraction method and system based on dynamic hybrid pooling strategy | |
CN116468769A (en) | An Image-Based Depth Information Estimation Method | |
CN114663298B (en) | Disparity map inpainting method and system based on semi-supervised deep learning | |
CN118172422B (en) | Method and device for positioning and imaging target of interest using vision, inertia and laser collaboration | |
CN118447167A (en) | NeRF three-dimensional reconstruction method and system based on 3D point cloud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||