CN113034512B - Weld joint tracking method based on feature segmentation - Google Patents

Weld joint tracking method based on feature segmentation

Info

Publication number
CN113034512B
CN113034512B (application CN202110277763.0A)
Authority
CN
China
Prior art keywords
feature
layer
segmentation
erfnet
loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110277763.0A
Other languages
Chinese (zh)
Other versions
CN113034512A (en)
Inventor
柏连发
王业宇
赵壮
韩静
张毅
罗隽
郭卓然
杨傲东
王兴国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202110277763.0A priority Critical patent/CN113034512B/en
Publication of CN113034512A publication Critical patent/CN113034512A/en
Application granted granted Critical
Publication of CN113034512B publication Critical patent/CN113034512B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20092 - Interactive image processing based on input by user
    • G06T2207/20104 - Interactive definition of region of interest [ROI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G06T2207/30152 - Solder

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Laser Beam Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a weld joint tracking method based on feature segmentation, which comprises the following steps: 1. Acquire a molten pool image and perform ROI selection on it. 2. Segment the molten pool image with ERFNet's Encoder-Decoder segmentation network structure, using the cross-entropy loss function common in semantic segmentation, fusing global feature information, and adding a pyramid pooling module to the backbone network. The invention improves the network structure and the loss function of the existing ERFNet: the network structure performs multi-scale fusion of high-level and low-level features with reference to UNet, and Focal Loss replaces the cross-entropy loss function. Problems such as breaks in the laser stripe centerline and excessive deviation of the weld feature points are thereby avoided. The performance of the feature extraction algorithm is effectively improved, the laser stripe centerline and weld feature points of every pass of every layer are extracted accurately, no algorithmic efficiency is lost, and the real-time requirement of seam tracking is met.

Description

Seam Tracking Method Based on Feature Segmentation

Technical Field

The invention relates to a seam tracking method based on feature segmentation and belongs to the technical field of welding automation.

Background Art

Additive welding is used not only for groove-filling tasks but also, more broadly, for the rapid prototyping of solid parts by layer-by-layer deposition. Its short processing cycle, fast forming rate and high material utilization enable the rapid manufacture of parts with complex geometries and structures. As additive manufacturing is applied ever more widely in important fields such as aerospace, national defense, automobile manufacturing, electronics and biomedicine, the demand for rapid prototyping of metal parts keeps growing, so applying seam tracking technology to the rapid prototyping of metal parts is of great strategic significance.

Large-groove additive seam tracking experiments show that, in the laser stripe images acquired by the seam tracking system, only the laser stripe contour carries useful information, so positive and negative samples are imbalanced during ERFNet training. Moreover, in flat-plate additive welding the numbers of welding layers and passes far exceed those of the large-groove filling experiment, so the laser stripe features of the upper-layer welds are even less distinct. Applying ERFNet directly to flat-plate additive weld feature extraction therefore produces breaks in the laser stripe centerline, excessive deviation of the weld feature points and similar problems. These problems can be avoided by improving ERFNet's network structure and loss function: performing multi-scale fusion of high-level and low-level features in the manner of UNet on the structure side, and replacing the cross-entropy loss function with Focal Loss on the loss side.

In a deep convolutional neural network, receptive-field theory holds that low-level feature maps have higher resolution and contain more positional and detail information but, having passed through fewer convolutions, carry weaker semantics and more noise, while high-level feature maps carry stronger semantic information but, having passed through more convolutions, have low resolution and perceive image detail poorly. Given this property of feature maps, multi-scale feature fusion can be used to improve the network's ability to capture the feature information in an image.

Multi-scale feature fusion merges low-level feature maps rich in spatial features with high-level feature maps rich in semantic information to obtain new feature maps that combine high resolution with strong semantics; the technique is widely used in object detection and semantic segmentation.

On the network-structure side, the present invention improves ERFNet with reference to the UNet network, adopting multi-scale feature fusion to improve the extraction of the laser stripe centerline and the weld feature points. Like ERFNet, UNet is an algorithm based on the Encoder-Decoder structure, but UNet uses the Concat operation to splice encoder and decoder feature maps of the same size, so that the network obtains more spatial and semantic information during upsampling and segmentation accuracy improves.

The present invention realizes multi-scale feature fusion by adding the corresponding feature maps directly. Stacking features with the Concat operation would increase the number of feature channels, increasing the computational load and slowing the extraction of the laser centerline and weld feature points; to guarantee the real-time performance and reliability of the method, the corresponding feature maps are therefore added directly.

Summary of the Invention

To solve the above technical problems, the present invention provides a seam tracking method based on feature segmentation, whose specific technical scheme is as follows:

A seam tracking method based on feature segmentation, characterized by comprising the following steps:

Step S1: acquire a molten pool image and perform ROI selection on the molten pool image;

Step S2: segment the molten pool image with ERFNet's Encoder-Decoder segmentation network structure, using the cross-entropy loss function common in semantic segmentation, fusing global feature information, and adding a pyramid pooling module to the backbone network, where the cross-entropy loss function with balancing factor $\alpha_t$ is

$L_{CE} = -\sum_{c=1}^{M} \alpha_t\, y_c \log(p_c)$

in which $\alpha_t = \alpha \in [0,1]$ when the sample is positive and $\alpha_t = 1 - \alpha$ when the sample is negative, $y_c$ denotes the label of sample c, $p_c$ denotes the probability that sample c is predicted as the positive class, and M is the number of classes;
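
As a minimal sketch of the balanced cross entropy above (not taken from the patent itself; the use of PyTorch, the tensor layout and the value of alpha are assumptions, with the stripe/background task treated as per-pixel binary labels):

```python
import torch

def balanced_cross_entropy(pred, target, alpha=0.75, eps=1e-8):
    """Pixel-wise cross entropy with balancing factor alpha_t (step S2).

    pred:   (N, H, W) probabilities for the positive (laser stripe) class.
    target: (N, H, W) tensor of 0/1 labels (1 = stripe, 0 = background).
    alpha_t equals alpha for positive pixels and 1 - alpha for negatives.
    """
    pos = target == 1
    alpha_t = torch.where(pos, torch.full_like(pred, alpha),
                          torch.full_like(pred, 1.0 - alpha))
    # p_t: probability assigned to the true class of each pixel
    p_t = torch.where(pos, pred, 1.0 - pred)
    return (-alpha_t * torch.log(p_t + eps)).mean()
```

In use, `pred` would be the sigmoid output of the segmentation head; everything else in the snippet is illustrative.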

Further, in step S1 the ROI is a fixed region containing the molten pool contour, and the ROI is 512 pixels × 512 pixels in size.

Further, in step S2 the network structure is based on SegNet and ENet; the whole model comprises 23 layers, of which layers 1-16 form the Encoder and layers 17-23 the Decoder.

Further, in step S2 the network structure adopts multi-scale fusion of high-level and low-level features: the feature maps of layer 8 in ERFNet are fused with those of layer 16, layer 3 with layer 17, and layer 2 with layer 20.

Further, in step S2 the ERFNet runs at more than 120 FPS.

Further, in step S2 the mean RMSE and PDE of the upper-layer welds trend upward relative to the lower-layer welds, and the offset between the weld feature points and the manually marked feature points is 1~2 pixels.

The beneficial effects of the invention are as follows: the weld feature extraction algorithm is optimized and improved by introducing a multi-scale feature fusion strategy and Focal Loss into ERFNet, and a flat-plate additive weld feature extraction experiment is designed to verify the feasibility of the improved algorithm. The improvement not only effectively raises the performance of the feature extraction algorithm and accurately extracts the laser stripe centerline and weld feature points of every pass of every layer, but also does so without losing algorithmic efficiency, preserving the real-time performance that seam tracking requires. A flat-plate additive welding experiment verifies the adaptability of the improved seam tracking scheme under actual working conditions: comparing cubes welded manually with cubes formed by the system shows, over repeated experiments, that the two forming results differ only marginally, proving that the improved seam tracking scheme is highly reliable for rapid prototyping of solid parts.

Brief Description of the Drawings

Fig. 1 shows additive-manufactured part forming according to the invention;

Fig. 2 is a schematic diagram of the UNet network structure referred to by the invention;

Fig. 3 is the network structure diagram of the improved ERFNet of the invention;

Fig. 4 shows the flat-plate additive weld data set of the invention, in which:

(a1) laser stripe image of layer 1, pass 3; (a2) weld feature map of layer 1, pass 3;

(b1) laser stripe image of layer 2, pass 2; (b2) weld feature map of layer 2, pass 2;

(c1) laser stripe image of layer 2, pass 4; (c2) weld feature map of layer 2, pass 4;

(d1) laser stripe image of layer 3, pass 3; (d2) weld feature map of layer 3, pass 3;

(e1) laser stripe image of layer 4, pass 2; (e2) weld feature map of layer 4, pass 2;

(f1) laser stripe image of layer 5, pass 2; (f2) weld feature map of layer 5, pass 2;

Fig. 5 shows the segmentation results for every pass of every layer according to the invention, in which:

a denotes the input test image; b denotes the output result;

(1-1)~(1-4) laser stripe images and feature segmentation maps of the four passes of layer 1;

(2-1)~(2-4) laser stripe images and feature segmentation maps of the four passes of layer 2;

(3-1)~(3-4) laser stripe images and feature segmentation maps of the four passes of layer 3;

(4-1)~(4-4) laser stripe images and feature segmentation maps of the four passes of layer 4;

(5-1)~(5-4) laser stripe images and feature segmentation maps of the four passes of layer 5.

Fig. 6 shows the weld bead planning and bead numbering of the flat-plate additive welding experiment of the invention;

Fig. 7 compares the cube surface topography after each layer of the invention is welded, in which:

(a1) first-layer surface topography of the standard workpiece; (b1) first-layer surface topography of the experimental workpiece;

(a2) second-layer surface topography of the standard workpiece; (b2) second-layer surface topography of the experimental workpiece;

(a3) third-layer surface topography of the standard workpiece; (b3) third-layer surface topography of the experimental workpiece;

(a4) fourth-layer surface topography of the standard workpiece; (b4) fourth-layer surface topography of the experimental workpiece;

(a5) fifth-layer surface topography of the standard workpiece; (b5) fifth-layer surface topography of the experimental workpiece.

Detailed Description of the Embodiments

The invention is now described in further detail with reference to the accompanying drawings. These drawings are simplified schematic diagrams that illustrate the basic structure of the invention only schematically, so they show only the components related to the invention.

As shown in Figs. 1~3, the seam tracking method based on feature segmentation of the present invention is as follows:

A seam tracking method based on feature segmentation comprises the following steps:

Step S1: acquire a molten pool image and perform ROI selection on the molten pool image; the ROI is a fixed region containing the molten pool contour, and the ROI is 512 pixels × 512 pixels in size.
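
A sketch of how the fixed-ROI crop of step S1 might look, assuming OpenCV image handling; the corner coordinates and file name are illustrative, not taken from the patent:

```python
import cv2

ROI_X, ROI_Y = 384, 256  # assumed top-left corner of the fixed ROI
ROI_SIZE = 512           # the ROI is 512 x 512 pixels, as stated in step S1

def select_roi(frame):
    """Crop the fixed region that contains the molten pool contour."""
    return frame[ROI_Y:ROI_Y + ROI_SIZE, ROI_X:ROI_X + ROI_SIZE]

frame = cv2.imread("pool_frame.png", cv2.IMREAD_GRAYSCALE)  # illustrative path
roi = select_roi(frame)
assert roi.shape == (ROI_SIZE, ROI_SIZE)
```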

Step S2: segment the molten pool image with ERFNet's Encoder-Decoder segmentation network structure, using the cross-entropy loss function common in semantic segmentation, fusing global feature information, and adding a pyramid pooling module to the backbone network, where the cross-entropy loss function with balancing factor $\alpha_t$ is

$L_{CE} = -\sum_{c=1}^{M} \alpha_t\, y_c \log(p_c)$

in which $\alpha_t = \alpha \in [0,1]$ when the sample is positive and $\alpha_t = 1 - \alpha$ when the sample is negative, $y_c$ denotes the label of sample c, $p_c$ denotes the probability that sample c is predicted as the positive class, and M is the number of classes. The network structure is based on SegNet and ENet; the whole model comprises 23 layers, of which layers 1-16 form the Encoder and layers 17-23 the Decoder. The network structure adopts multi-scale fusion of high-level and low-level features: the feature maps of layer 8 in ERFNet are fused with those of layer 16, layer 3 with layer 17, and layer 2 with layer 20. The ERFNet runs at more than 120 FPS. The mean RMSE and PDE of the upper-layer welds trend upward relative to the lower-layer welds, and the offset between the weld feature points and the manually marked feature points is 1~2 pixels.
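
The following skeleton illustrates, under stated assumptions, how the additive skip connections of step S2 could be wired around an Encoder-Decoder. The `Block` modules merely stand in for ERFNet's actual downsampler and non-bottleneck-1D stages, which the patent does not spell out; the channel counts, the single-channel input and the three output classes are likewise assumptions. Only the additions marked in the comments correspond to the layer 8-16, 3-17 and 2-20 fusions named above:

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """Placeholder for an ERFNet stage (downsampler / non-bt-1D group)."""
    def __init__(self, c_in, c_out, down=False, up=False):
        super().__init__()
        if down:
            self.op = nn.Conv2d(c_in, c_out, 3, stride=2, padding=1)
        elif up:
            self.op = nn.ConvTranspose2d(c_in, c_out, 3, stride=2,
                                         padding=1, output_padding=1)
        else:
            self.op = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.op(x))

class FusedERFNetSketch(nn.Module):
    def __init__(self, n_classes=3):  # e.g. background / stripe / feature point
        super().__init__()
        self.l1_2 = Block(1, 16, down=True)     # layers 1-2   (1/2 scale)
        self.l3 = Block(16, 64, down=True)      # layer 3      (1/4 scale)
        self.l4_8 = Block(64, 128, down=True)   # layers 4-8   (1/8 scale)
        self.l9_16 = Block(128, 128)            # layers 9-16  (1/8 scale)
        self.l17 = Block(128, 64, up=True)      # layer 17     (1/4 scale)
        self.l18_20 = Block(64, 16, up=True)    # layers 18-20 (1/2 scale)
        self.head = nn.ConvTranspose2d(16, n_classes, 3, stride=2,
                                       padding=1, output_padding=1)

    def forward(self, x):
        f2 = self.l1_2(x)
        f3 = self.l3(f2)
        f8 = self.l4_8(f3)
        f16 = self.l9_16(f8) + f8    # fuse layer 8 into layer 16 (same size)
        f17 = self.l17(f16) + f3     # fuse layer 3 into layer 17
        f20 = self.l18_20(f17) + f2  # fuse layer 2 into layer 20
        return self.head(f20)

x = torch.randn(1, 1, 512, 512)            # one 512 x 512 ROI crop (assumed)
print(FusedERFNetSketch()(x).shape)        # torch.Size([1, 3, 512, 512])
```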

The technical effects of the invention are verified by the following examples:

(I) Flat-plate additive weld feature extraction algorithm based on the improved ERFNet

Large-groove additive seam tracking experiments show that, in the laser stripe images acquired by the seam tracking system, only the laser stripe contour carries useful information, so positive and negative samples are imbalanced during ERFNet training. Moreover, in flat-plate additive welding the numbers of welding layers and passes far exceed those of the large-groove filling experiment, so the laser stripe features of the upper-layer welds are even less distinct. Applying ERFNet directly to flat-plate additive weld feature extraction therefore produces breaks in the laser stripe centerline, excessive deviation of the weld feature points and similar problems. To avoid these problems, the invention improves ERFNet's network structure and loss function: on the structure side, multi-scale fusion of high-level and low-level features is performed with reference to UNet; on the loss side, Focal Loss replaces the cross-entropy loss function.

(1) Multi-scale feature fusion

In a deep convolutional neural network, receptive-field theory holds that low-level feature maps have higher resolution and contain more positional and detail information but, having passed through fewer convolutions, carry weaker semantics and more noise, while high-level feature maps carry stronger semantic information but, having passed through more convolutions, have low resolution and perceive image detail poorly. Given this property of feature maps, multi-scale feature fusion can be used to improve the network's ability to capture the feature information in an image.

Multi-scale feature fusion merges low-level feature maps rich in spatial features with high-level feature maps rich in semantic information to obtain new feature maps that combine high resolution with strong semantics; the technique is widely used in object detection and semantic segmentation.

On the network-structure side, this method improves ERFNet with reference to the UNet network, adopting multi-scale feature fusion to improve the extraction of the laser stripe centerline and the weld feature points. The UNet network structure is shown in Fig. 2. Like ERFNet, UNet is an algorithm based on the Encoder-Decoder structure, but UNet uses the Concat operation to splice encoder and decoder feature maps of the same size, so that the network obtains more spatial and semantic information during upsampling and segmentation accuracy improves.

Following UNet's fusion of same-size feature maps by splicing, the feature maps of layer 8 in ERFNet are fused with those of layer 16, layer 3 with layer 17, and layer 2 with layer 20; the network structure is shown in Fig. 3. Unlike UNet, which fuses feature maps with the Concat operation, the present invention adds the corresponding feature maps directly: stacking features with Concat would increase the number of feature channels, increasing the computational load and slowing the extraction of the laser centerline and weld feature points. To guarantee the real-time performance and reliability of the algorithm, multi-scale feature fusion is therefore realized by adding the corresponding feature maps directly.
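
The channel-count argument can be checked directly; in this illustrative snippet (shapes assumed), concatenation doubles the channels that any following convolution must process, while element-wise addition leaves them unchanged:

```python
import torch

low = torch.randn(1, 64, 128, 128)   # low-level feature map (assumed shape)
high = torch.randn(1, 64, 128, 128)  # high-level map upsampled to the same size

fused_cat = torch.cat([low, high], dim=1)  # UNet-style splice: 128 channels
fused_add = low + high                     # this method: still 64 channels

print(fused_cat.shape)  # torch.Size([1, 128, 128, 128])
print(fused_add.shape)  # torch.Size([1, 64, 128, 128])
```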

(2) Focal Loss

Focal Loss originated in the field of object detection, where it addresses the extreme imbalance between positive and negative samples. The seam tracking task of this method has the same imbalance: in the images acquired by the laser vision system, only the laser stripes are positive samples. To improve the algorithm's ability to extract laser stripe features, Focal Loss can therefore replace the cross-entropy loss function adopted in ERFNet.

Focal Loss is an improvement on the traditional cross-entropy loss function. With the traditional cross entropy, the larger the output probability for a positive sample the smaller the loss, and the smaller the output probability for a negative sample the smaller the loss; but when positive and negative samples are extremely imbalanced, the iterative optimization of the cross-entropy loss may fail to reach the optimum. To solve this, a balancing factor $\alpha_t$ is first introduced, with $\alpha_t = \alpha \in [0,1]$ for positive samples and $\alpha_t = 1 - \alpha$ for negative samples; the cross-entropy loss with balancing factor $\alpha_t$ is then

$L_{CE} = -\sum_{c=1}^{M} \alpha_t\, y_c \log(p_c)$

The balancing factor $\alpha_t$ evens out the uneven ratio of positive to negative samples but cannot distinguish easy samples from hard ones, so a parameter $\gamma$ is introduced on top of this loss to make the model concentrate on hard, misclassified samples. The mathematical expression of Focal Loss is therefore

$L_{FL} = -\sum_{c=1}^{M} \alpha_t (1 - p_c)^{\gamma}\, y_c \log(p_c)$

where $\gamma$ is a hyperparameter that controls how quickly the weight of easy samples decays: when $\gamma = 0$, Focal Loss reduces to the traditional cross-entropy loss function, and as $\gamma$ grows the influence of the modulating factor grows, shrinking the loss of easily classified samples.
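
A minimal Focal Loss sketch consistent with the formula above, under the same binary stripe/background assumptions as the balanced cross-entropy sketch earlier (the alpha and gamma values are illustrative, not taken from the patent); setting gamma = 0 recovers the balanced cross entropy:

```python
import torch

def focal_loss(pred, target, alpha=0.75, gamma=2.0, eps=1e-8):
    """Focal Loss: -alpha_t * (1 - p_t)^gamma * log(p_t), averaged over pixels.

    pred:   (N, H, W) probabilities for the positive (laser stripe) class.
    target: (N, H, W) tensor of 0/1 labels.
    """
    pos = target == 1
    p_t = torch.where(pos, pred, 1.0 - pred)
    alpha_t = torch.where(pos, torch.full_like(pred, alpha),
                          torch.full_like(pred, 1.0 - alpha))
    # (1 - p_t)^gamma down-weights easy, well-classified pixels
    loss = -alpha_t * (1.0 - p_t).pow(gamma) * torch.log(p_t + eps)
    return loss.mean()
```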

Training ERFNet with Focal Loss reduces the weight that the large number of easy negative samples carries during training, which is very important for the application scenario of the invention.

(II) Flat-plate additive weld feature extraction experiment

This experiment uses the equipment and procedure of the ERFNet-based laser vision seam tracking system to acquire and build a flat-plate additive weld laser data set. Some images from the data set are shown in Fig. 4.

Comparing the flat-plate additive weld data set with the large-groove additive weld data set shows that the laser stripe profile in flat-plate additive welding is flatter; as the numbers of welding layers and passes grow, the stripe intensity weakens toward the image edges, and the weld feature points become fewer and less distinct, placing higher demands on the feature extraction algorithm. To test the performance of the improved ERFNet, the laser stripe images of every pass of every layer are verified; the test environment is configured identically to the ERFNet-based laser vision seam tracking system. Some test set results are shown in Fig. 5.

Regarding the segmentation of the laser stripe centerline, Fig. 5 shows that the improved ERFNet achieves close-to-ideal segmentation of the laser stripes for every pass of every layer, with no breaks at the key positions. Next is the core task of seam tracking, the segmentation of the weld feature points: although this experiment has fewer feature points than the large-groove experiment and a more imbalanced positive-negative sample distribution, the improved ERFNet achieves good segmentation for every pass of every layer, and the extracted feature-point regions match the actual results. This shows that the improvements to ERFNet's network structure and loss function raise the performance of the feature extraction algorithm; meanwhile the improved ERFNet still reaches more than 120 FPS, losing no efficiency to the introduction of multi-scale feature fusion, and continues to satisfy the real-time requirement of seam tracking. The improved feature extraction algorithm therefore fully meets the accuracy and timing requirements of intelligent welding.

To evaluate the improved ERFNet accurately, RMSE is used to evaluate the extraction of the laser stripe centerline and PDE to evaluate the accuracy of the weld feature points. The error values for every pass of every layer are shown in Table 1.

Table 1: RMSE and PDE between the segmentation maps of the improved ERFNet and the annotation maps (per-layer means; the per-pass values of the original table are not reproduced here)

Layer       1        2        3        4        5
Mean RMSE   0.6181   0.6273   0.6199   0.6623   0.6557
Mean PDE    2.2035   2.6789   2.2414   2.5516   2.4439

Table 1 shows that the per-layer mean RMSE values are 0.6181, 0.6273, 0.6199, 0.6623 and 0.6557, and the mean PDE values are 2.2035, 2.6789, 2.2414, 2.5516 and 2.4439. Relative to the first layer, the mean RMSE and PDE of the upper-layer welds increase to varying degrees, showing that segmentation error for the laser stripe images grows as the number of welding layers grows. The mean PDE of the first layer is 2.2035, meaning the offset between the weld feature points obtained by the improved algorithm and the manually marked feature points is about two pixels, so the improved feature extraction algorithm can be considered to locate the first-layer weld feature points accurately. The table also shows that the RMSE and PDE of the upper-layer welds exceed those of the first layer by less than 0.0442 and 0.4754 respectively: although segmentation accuracy drops slightly as layers accumulate, the additional offset of the weld feature points stays below one pixel, so the improved algorithm can be considered to reach ideal segmentation accuracy for the upper-layer welds as well. In summary, the invention's improvements to ERFNet raise the performance of the feature extraction algorithm.
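
The patent does not state how RMSE and PDE are computed; the sketch below is one plausible reading under that assumption, taking RMSE over per-column centerline row positions and PDE as the mean Euclidean distance between matched predicted and manually marked feature points:

```python
import numpy as np

def centerline_rmse(pred_rows, gt_rows):
    """RMSE between predicted and annotated centerline row positions.

    pred_rows, gt_rows: 1-D arrays holding one centerline row coordinate per
    image column (an assumed representation of the extracted centerline).
    """
    diff = np.asarray(pred_rows, float) - np.asarray(gt_rows, float)
    return float(np.sqrt(np.mean(diff ** 2)))

def point_distance_error(pred_pts, gt_pts):
    """Mean Euclidean distance between matched feature points (assumed PDE)."""
    pred = np.asarray(pred_pts, float)
    gt = np.asarray(gt_pts, float)
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)))

# Illustrative values only:
print(centerline_rmse([100.2, 100.8, 101.1], [100.0, 101.0, 101.0]))
print(point_distance_error([[120, 260]], [[121, 261]]))
```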

To verify the feasibility of the improved seam tracking scheme in the field of rapid prototyping of solid parts, a flat-plate additive welding experiment is carried out below: the offset between the actual welding points and the manually taught welding points is analyzed and the final forming results are compared to judge whether the seam tracking scheme meets the requirements of intelligent additive manufacturing.

(III) Flat-plate additive seam tracking experiment

The experimental platform is based on the seam tracking system and the seam tracking software; the plate material is Q235 stainless steel. Since the feasibility of the seam tracking system for rapid prototyping of solid parts needs to be verified, part forming is simulated by building up a cube through additive welding. A stainless steel plate serves as the base plate; there are 5 welding layers with 5 beads per layer, 25 beads in total. The specific bead planning and bead numbering are shown in Fig. 6.

According to the above experimental scheme, the flat-plate additive seam tracking experiment is carried out with the seam tracking system and the seam tracking software. As in the large-groove welding experiment, the reliability of the improved algorithm and system for rapid part prototyping is verified by comparing a standard workpiece with the experimental workpiece.

Attention is paid first to the error between the taught welding points and the welding points computed by the welding system. According to the welding scheme shown in Fig. 6, the three-dimensional coordinates of the starting and ending welding points of some beads are selected and their errors analyzed. The measurements are shown in Tables 2 and 3.

Table 2: Three-dimensional coordinates and errors of the starting welding points in flat-plate additive welding (tabulated values not reproduced; per the analysis below, the per-axis deviations do not exceed 1.00 mm)

Table 3: Three-dimensional coordinates and errors of the ending welding points in flat-plate additive welding (tabulated values not reproduced)

The experimental data recorded in the tables show that, although flat-plate additive welding involves more layers and more beads, the deviation of each coordinate between the taught welding points and the welding points obtained by the algorithm never exceeds 1.00 mm, showing that the improved seam tracking scheme meets the high-accuracy requirement of rapid part prototyping.

Attention is paid next to the forming of the cube during welding. Following the welding scheme of Fig. 6, the cube height is measured after each layer is completed and compared between the experimental workpiece and the standard workpiece; the measurements are shown in Table 4.

Table 4: Workpiece height after each layer is welded in flat-plate additive welding (tabulated values not reproduced; per the analysis below, the maximum height error does not exceed 0.90 mm)

The measured data show that the error accumulates somewhat as the numbers of welding layers and beads grow, but the maximum error does not exceed 0.90 mm, showing that an ideal forming result is achieved in the actual welding process.

While recording the cube height after each layer is welded, the surface topography of each layer of the experimental workpiece and the standard workpiece is also compared, as shown in Fig. 7. The comparison shows intuitively that there is no obvious difference in the forming of any layer between the experimental workpiece and the standard workpiece; combined with the data and analysis of Table 4, this shows that the improvements to the seam tracking system satisfy the high-accuracy, real-time requirements of rapid prototyping of solid parts.

To verify the feasibility of the seam tracking scheme of the invention for rapid prototyping of solid parts, the flat-plate additive seam tracking experiment was carried out. To address the more complex laser stripe profiles, the imbalanced ratio of positive to negative samples, and the larger numbers of welding layers and passes in this experiment, the invention optimizes and improves the weld feature extraction algorithm by introducing a multi-scale feature fusion strategy and Focal Loss into ERFNet, and designs a flat-plate additive weld feature extraction experiment to verify the improved algorithm's feasibility. The results show that the improvements not only effectively raise the performance of the feature extraction algorithm and accurately extract the laser stripe centerline and weld feature points of every pass of every layer, but do so without losing algorithmic efficiency, preserving the real-time performance seam tracking requires. A follow-up flat-plate additive welding experiment verifies the improved scheme's adaptability under actual working conditions: comparing manually welded cubes with cubes formed by the system shows, over repeated experiments, only marginal differences in forming, proving the improved seam tracking scheme highly reliable for rapid prototyping of solid parts.

Taking the above ideal embodiments of the invention as guidance, skilled workers can, through the above description, make various changes and modifications without departing from the technical idea of the invention. The technical scope of the invention is not limited to the content of the specification and must be determined according to the scope of the claims.

Claims (3)

1. A seam tracking method based on feature segmentation, characterized by comprising the following steps:

Step S1: acquire a molten pool image and perform ROI selection on the molten pool image;

Step S2: segment the molten pool image with ERFNet's Encoder-Decoder segmentation network structure, the ERFNet running at more than 120 FPS, the network structure adopting multi-scale fusion of high-level and low-level features, the mean RMSE and PDE of the upper-layer welds trending upward relative to the lower-layer welds, the offset between the weld feature points and the manually marked feature points being 1~2 pixels, the network using the cross-entropy loss function common in semantic segmentation, the feature maps of layer 8 in ERFNet being fused with those of layer 16, layer 3 with layer 17, and layer 2 with layer 20, and a pyramid pooling module being added to the backbone network, where the cross-entropy loss function with balancing factor $\alpha_t$ is

$L_{CE} = -\sum_{c=1}^{M} \alpha_t\, y_c \log(p_c)$

in which $\alpha_t = \alpha \in [0,1]$ when the sample is positive and $\alpha_t = 1 - \alpha$ when the sample is negative, $y_c$ denotes the label of sample c, $p_c$ denotes the probability that sample c is predicted as the positive class, and M is the number of classes.

2. The seam tracking method based on feature segmentation according to claim 1, characterized in that in step S1 the ROI is a fixed region containing the molten pool contour, and the ROI is 512 pixels × 512 pixels in size.

3. The seam tracking method based on feature segmentation according to claim 1, characterized in that in step S2 the network structure is based on SegNet and ENet, the whole model comprising 23 layers, of which layers 1-16 form the Encoder and layers 17-23 the Decoder.
CN202110277763.0A 2021-03-15 2021-03-15 Weld joint tracking method based on feature segmentation Active CN113034512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110277763.0A CN113034512B (en) 2021-03-15 2021-03-15 Weld joint tracking method based on feature segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110277763.0A CN113034512B (en) 2021-03-15 2021-03-15 Weld joint tracking method based on feature segmentation

Publications (2)

Publication Number Publication Date
CN113034512A (en) 2021-06-25
CN113034512B (en) 2022-11-11

Family

ID=76470691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110277763.0A Active CN113034512B (en) 2021-03-15 2021-03-15 Weld joint tracking method based on feature segmentation

Country Status (1)

Country Link
CN (1) CN113034512B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102021117714A1 (en) 2021-07-08 2023-01-12 Endress+Hauser SE+Co. KG Automatic seam detection for a welding process
CN115121913B (en) * 2022-08-30 2023-01-10 北京博清科技有限公司 Method for extracting laser central line

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10452980B1 (en) * 2019-01-25 2019-10-22 StradVision, Inc. Learning method and learning device for extracting feature from input image by using convolutional layers in multiple blocks in CNN, resulting in hardware optimization which allows key performance index to be satisfied, and testing method and testing device using the same
CN111985274A (en) * 2019-05-23 2020-11-24 中国科学院沈阳自动化研究所 Remote sensing image segmentation algorithm based on convolutional neural network
CN112381095A (en) * 2021-01-15 2021-02-19 南京理工大学 Electric arc additive manufacturing layer width active disturbance rejection control method based on deep learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10452980B1 (en) * 2019-01-25 2019-10-22 StradVision, Inc. Learning method and learning device for extracting feature from input image by using convolutional layers in multiple blocks in CNN, resulting in hardware optimization which allows key performance index to be satisfied, and testing method and testing device using the same
CN111985274A (en) * 2019-05-23 2020-11-24 中国科学院沈阳自动化研究所 Remote sensing image segmentation algorithm based on convolutional neural network
CN112381095A (en) * 2021-01-15 2021-02-19 南京理工大学 Electric arc additive manufacturing layer width active disturbance rejection control method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Multi-scale adversarial network image semantic segmentation algorithm based on weighted loss function; Zhang Hongzhao et al.; Computer Applications and Software; 2020-01-12 (No. 01); full text *

Also Published As

Publication number Publication date
CN113034512A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN108759667B (en) Distance measurement method of front vehicle based on monocular vision and image segmentation under vehicle camera
CN113034512B (en) Weld joint tracking method based on feature segmentation
CN103870623B (en) Preprocessing simulated analysis template for vehicle model
CN109447033A (en) Vehicle front obstacle detection method based on YOLO
CN109584281B (en) A Layered Counting Method of Overlapping Particles Based on Color Image and Depth Image
CN100507937C (en) A Vision Method for Automatic Weld Seam Recognition Based on Local Image Texture Feature Matching
CN109932730A (en) LiDAR target detection method based on multi-scale unipolar 3D detection network
CN104463241A (en) Vehicle type recognition method in intelligent transportation monitoring system
CN102279190A (en) Image detection method for weld seam surface defects of laser welded plates of unequal thickness
CN104462646B (en) A kind of quality evaluating method of ship flame forming plate
CN108106627A (en) A kind of monocular vision vehicle positioning method of the online dynamic calibration of distinguished point based
CN104400265A (en) Feature extraction method applicable to corner weld of laser vision guided welding robot
CN114140439A (en) Method and device for feature point recognition of laser welding seam based on deep learning
CN104050640A (en) Multi-view dense point cloud data fusion method
CN114473309A (en) Welding position identification method for automatic welding system and automatic welding system
CN115937736A (en) Small target detection method based on attention and context awareness
CN114648549A (en) A traffic scene target detection and localization method integrating vision and lidar
CN114119539A (en) Online bow net running state detection method based on key point detection
CN116824543A (en) Automatic driving target detection method based on OD-YOLO
CN104036096B (en) Method for mapping bump features on inclined face to manufacturing feature bodies
CN106529391A (en) Robust speed-limit traffic sign detection and recognition method
CN113256620A (en) Vehicle body welding quality information judging method based on difference convolution neural network
CN104850869A (en) Automatic identification system and method for high-speed railway CPIII marker
CN116385432B (en) Light-weight decoupling wheat scab spore detection method
CN115861957B (en) Novel dynamic object segmentation method based on sensor fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant