CN117415501A - Real-time welding feature extraction and penetration monitoring method based on machine vision - Google Patents
- Publication number: CN117415501A
- Application number: CN202311442613.6A
- Authority
- CN
- China
- Prior art keywords
- penetration
- molten pool
- model
- training
- data set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B23K31/125 — Weld quality monitoring
- B23K37/00 — Auxiliary devices or processes for welding, not specially adapted to a procedure covered by only one of the preceding main groups
- G06N3/0455 — Auto-encoder networks; Encoder-decoder networks
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/08 — Learning methods
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/26 — Segmentation of patterns in the image field
- G06V10/40 — Extraction of image or video features
- G06V10/806 — Fusion of extracted features
- G06V10/82 — Recognition using neural networks
- G06V20/52 — Surveillance or monitoring of activities
Abstract
The invention relates to the technical field of welding process monitoring, and discloses a real-time welding feature extraction and penetration monitoring method based on machine vision, which comprises the following steps: S1, acquiring molten pool images under three conditions, incomplete penetration, moderate penetration, and excessive penetration, with a CCD camera; S2, labeling the molten pool images of the three penetration conditions at the pixel level to generate corresponding annotation maps; and S3, applying data augmentation to the generated annotation maps and original images simultaneously to expand the data set, and building a molten pool feature extraction model data set and a molten pool penetration state monitoring model data set. The method extracts weld pool features in real time and predicts the weld penetration state. It offers strong anti-interference capability, few computational parameters, and high accuracy, and meets the requirements of welding process monitoring under different working conditions.
Description
Technical Field
The invention relates to the technical field of welding process monitoring, and in particular to a machine-vision-based method for real-time welding feature extraction and penetration monitoring.
Background
Welding is a key part of modern manufacturing technology and is widely used in industrial fields such as machinery manufacturing, construction engineering, rail transit, ocean engineering, and aerospace. With the rapid development of advanced manufacturing, intelligent welding technology, represented by numerical control, computer, power electronics, and robotics, is gradually replacing traditional manual welding in order to improve product quality and welding efficiency while easing the labor intensity and working environment of welders. Modern welding production thus continues to move toward mechanization, automation, and intelligence, which is of great significance for improving labor conditions, productivity, and welding precision, ensuring welding quality, and reducing operating costs.
Quality control of the welding process is an important component of intelligent welding technology. Welding quality generally refers to whether a product produced by a welding process meets its design requirements, and typically covers mechanical properties, post-weld geometric dimensions, and internal and external defects. In actual welding, geometric parameters of the weld such as penetration depth, weld width, and reinforcement directly affect weld quality, so studying molten pool information and controlling molten pool parameters during welding is of great practical significance. Maintaining a stable and appropriate penetration state is a precondition for ensuring welding quality; improper penetration states, such as incomplete penetration, excessive penetration, or burn-through, seriously reduce the joint strength of the weld and the reliability of the connection, leading to scrapped workpieces and, in severe cases, serious accidents. The penetration state is jointly influenced by welding parameters, joint form, workpiece material, heat dissipation conditions, and other factors. In robotic welding, welding specifications obtained from a large number of welding tests are set for different working conditions in order to achieve an appropriate penetration state; however, disturbances during welding can still drive the penetration state away from the desired one. A skilled welder continuously observes the molten pool and adjusts the process accordingly, whereas a welding robot lacks such visual perception, making it difficult to keep the penetration state under control. Real-time monitoring of the penetration state is therefore required.
Disclosure of Invention
(I) Technical problems to be solved
The invention aims to provide a method for extracting weld pool features in real time and predicting the weld penetration state. The method offers strong anti-interference capability, few computational parameters, and high accuracy, and meets the requirements of welding process monitoring under different working conditions; to this end, a machine-vision-based method for real-time welding feature extraction and penetration monitoring is provided.
(II) technical scheme
The technical scheme for solving the technical problems is as follows:
a real-time welding feature extraction and penetration monitoring method based on machine vision comprises the following steps:
s1, acquiring molten pool images under three conditions, incomplete penetration, moderate penetration, and excessive penetration, with a CCD camera;
s2, labeling the molten pool images of the three penetration conditions at the pixel level to generate corresponding annotation maps;
s3, applying data augmentation to the generated annotation maps and original images simultaneously to expand the data set, and building a molten pool feature extraction model data set and a molten pool penetration state monitoring model data set;
s4, building a molten pool feature extraction model and training it with the training data set;
s5, building a molten pool penetration state monitoring model and training it with the training data set;
s6, during actual welding, capturing real-time images of the welding area with the CCD camera, feeding them into the lightweight molten pool image semantic segmentation model, and extracting the morphological features of the molten pool region;
s7, feeding the binary molten pool image produced by the lightweight semantic segmentation model into the molten pool penetration state monitoring model to predict the penetration state.
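As an illustration of step S6, the sketch below extracts simple morphological features from a binary molten pool mask. It is a minimal assumption-laden example: the patent does not enumerate its exact feature set, so the features shown (area, length along the assumed welding direction, width) and the 255-valued pool convention are illustrative only.

```python
import numpy as np

def pool_features(mask):
    # mask: binary (H, W) array with molten pool pixels == 255, background == 0
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return {"area": 0, "length": 0, "width": 0}
    return {
        "area": int(len(ys)),                    # pool area in pixels
        "length": int(xs.max() - xs.min() + 1),  # extent along x (assumed welding direction)
        "width": int(ys.max() - ys.min() + 1),   # extent across the weld
    }
```

In a full pipeline, a feature dictionary like this (or the binary mask itself, as in step S7) would be the input to the penetration state monitoring model.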
On the basis of the technical scheme, the invention can be improved as follows.
Further, the step S1 specifically includes the following steps:
s101, using a CCD camera to capture video clips of the welding process under the conditions of incomplete penetration, moderate penetration, and excessive penetration respectively;
s102, dividing video files with three conditions of lack of penetration, moderate penetration and penetration into picture data according to frames respectively, storing the picture data in different folders, and finally cutting out an interested region by taking a molten pool as a center;
s103, processing the image by using median filtering, suppressing noise and strengthening molten pool characteristics.
Further, the step S2 specifically includes the following steps:
s201, annotating the molten pool images of the three conditions (incomplete penetration, moderate penetration, and excessive penetration) with the Labelme annotation tool;
s202, using the polygon tool in Labelme to mark the edge contour of the molten pool pixel by pixel, generating a json file corresponding to the original image, and generating a mask image with the same name via a python script;
s203, after annotation is completed, converting the generated file into a binary image, in which the black part is the background with pixel value 0 and the white part is the molten pool feature with pixel value 255.
Further, the step S3 specifically includes the following steps:
s301, applying data augmentation to the mask images obtained in step S2 and the original images using five modes (rotation, flipping, scaling, brightness change, and contrast change) to expand the data set, then building the molten pool feature extraction model data set and the molten pool penetration state monitoring model data set from the expanded data;
s302, the molten pool semantic segmentation model data set comprises the molten pool images of the three conditions (incomplete penetration, moderate penetration, and excessive penetration) and their corresponding annotation maps, stored in one folder;
s303, the molten pool penetration state monitoring model data set comprises only the annotation maps of the three conditions, stored separately in three folders;
s304, dividing the data set into a training set, a test set, and a verification set.
Further, the step S4 specifically includes the following steps:
s401, constructing a lightweight molten pool image semantic segmentation model;
s402, training the model with the training set, then evaluating it with the test set and the verification set;
s403, plotting the curves of the training loss, mIoU, and mPA values against the number of training epochs;
s404, observing from the validation loss whether the model converges, to judge whether training is sufficient;
s405, recording and saving the optimal model according to the training loss, mPA, and mIoU values.
Further, the step S5 specifically includes the following steps:
s501, constructing a molten pool penetration state monitoring model;
s502, the penetration state is divided into incomplete penetration, moderate penetration, and excessive penetration; the morphological features of the molten pool are used as input and the penetration state as output to train the molten pool penetration state monitoring model.
(III) beneficial effects
Compared with the prior art, the technical scheme of the application has the following beneficial technical effects:
the invention provides a molten pool feature extraction model which is based on an improved UNet deep learning network model and an ECANet attention mechanism module, wherein the left side of the network is a coding structure, namely a feature extraction module, and the method comprises 4 times of downsampling, wherein each downsampling adopts a pooling mode to reduce the dimension of a feature map, enlarge the perception range of the model, obtain more global information, adopt convolution layer blocks to extract features before each downsampling, each convolution layer block comprises two convolution layers, perform normalization processing and correct a Relu function after each convolution, and the right side of the network is a decoding module, comprises four times of upsampling, gradually restores the size of a feature image extracted by an encoder to the size of an original input picture, and finally outputs segmented laser stripes, and in the continuous convolution process of the image, the model can learn the feature of the image of a deeper layer, but the detail of the image can be lost, so that the feature is fused with the feature map in a corresponding encoder after each upsampling in a decoder, so as to improve the edge accuracy of a segmentation structure.
Drawings
FIG. 1 is a flow chart of a real-time welding feature extraction and penetration monitoring method based on machine vision according to the present invention;
FIG. 2 is a schematic diagram of a molten pool feature extraction model structure of a machine vision-based real-time welding feature extraction and penetration monitoring method of the present invention;
FIG. 3 is a schematic diagram of ECANet attention mechanism module in a molten pool feature extraction model and a molten pool penetration state monitoring model of a real-time welding feature extraction and penetration monitoring method based on machine vision;
FIG. 4 is a schematic diagram of a molten pool penetration state monitoring model structure of a real-time welding feature extraction and penetration monitoring method based on machine vision;
fig. 5 is a schematic diagram of the two unit structures of the ShuffleNet network in the molten pool penetration state monitoring model of the machine-vision-based real-time welding feature extraction and penetration monitoring method of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the arc welding process, the welding wire is heated by the high-temperature arc to form droplets; the droplets fall and melt the base metal, and the molten fluid, under the combined action of arc pressure, droplet gravity, and electromagnetic force, forms the molten pool. The penetration state of the molten pool determines welding quality; studying the penetration state helps improve weld appearance and joint performance, and such research and analysis cannot be separated from the extraction of molten pool features. However, the original molten pool image suffers interference from the welding wire, the arc, solidified weld beads, metal accumulation, spatter, smoke, and the like, which makes extracting complete molten pool feature signals very difficult.
Referring to fig. 1-5, the method for real-time welding feature extraction and penetration monitoring based on machine vision of the invention comprises the following steps:
s1, acquiring molten pool images under three conditions, incomplete penetration, moderate penetration, and excessive penetration, with a CCD camera;
the method comprises the following specific steps:
s101, using a CCD camera to capture video clips of the welding process under the conditions of incomplete penetration, moderate penetration, and excessive penetration respectively;
s102, splitting the video files of the three penetration conditions into picture data frame by frame, storing them in different folders, and finally cropping a region of interest centered on the molten pool;
s103, since the welding environment is quite complex and interference from the welding wire, arc, solidified weld beads, metal accumulation, spatter, smoke, and the like leaves the molten pool features indistinct in the original image, processing the image with median filtering to suppress noise and enhance the molten pool features;
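Steps S102 and S103 can be sketched as follows. This is a minimal NumPy illustration: a pure-NumPy 3x3 median filter stands in for an OpenCV `cv2.medianBlur` call, and the ROI center and size are illustrative values, not parameters stated in the patent.

```python
import numpy as np

def median_filter3(img):
    # 3x3 median filter: stack the nine shifted neighborhoods and take the median
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0).astype(img.dtype)

def crop_roi(img, center, size):
    # crop a (h, w) region of interest centered on the molten pool
    cy, cx = center
    h, w = size
    y0 = max(cy - h // 2, 0)
    x0 = max(cx - w // 2, 0)
    return img[y0:y0 + h, x0:x0 + w]
```

A median filter is a good fit here because it removes impulse-like noise (e.g. spatter specks) while preserving the molten pool edges better than a mean filter would.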
s2, labeling the molten pool images of the three penetration conditions at the pixel level to generate corresponding annotation maps;
the method comprises the following specific steps:
s201, annotating the molten pool images of the three conditions (incomplete penetration, moderate penetration, and excessive penetration) with the Labelme annotation tool;
s202, using the polygon tool in Labelme to mark the edge contour of the molten pool pixel by pixel, generating a json file corresponding to the original image, and generating a mask image with the same name via a python script;
s203, after annotation is completed, converting the generated file into a binary image, in which the black part is the background with pixel value 0 and the white part is the molten pool feature with pixel value 255;
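A minimal sketch of rasterizing a Labelme polygon into such a 0/255 binary mask, using a ray-casting point-in-polygon test in NumPy. In practice the script would read the vertex list from the `points` entry of the Labelme json file; the polygon below is illustrative.

```python
import numpy as np

def polygon_mask(shape, polygon):
    # polygon: list of (x, y) vertices, e.g. from a Labelme json "points" entry
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = np.zeros(shape, dtype=bool)
    n = len(polygon)
    j = n - 1
    for i in range(n):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        if yi != yj:  # skip horizontal edges (they never toggle the crossing count)
            cross = ((yi > ys) != (yj > ys)) & (xs < (xj - xi) * (ys - yi) / (yj - yi) + xi)
            inside ^= cross  # each ray crossing flips inside/outside
        j = i
    return np.where(inside, 255, 0).astype(np.uint8)  # white pool, black background
```

The same conversion is commonly done with `labelme_json_to_dataset` or an OpenCV `fillPoly` call; the explicit loop above just makes the geometry visible.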
s3, applying data augmentation to the generated annotation maps and original images simultaneously to expand the data set, and building a molten pool feature extraction model data set and a molten pool penetration state monitoring model data set;
the method comprises the following steps:
s301, applying data augmentation to the mask images obtained in step S2 and the original images using five modes (rotation, flipping, scaling, brightness change, and contrast change) to expand the data set, then building the molten pool feature extraction model data set and the molten pool penetration state monitoring model data set from the expanded data;
s302, the molten pool semantic segmentation model data set comprises the molten pool images of the three conditions (incomplete penetration, moderate penetration, and excessive penetration) and their corresponding annotation maps, stored in one folder;
s303, the molten pool penetration state monitoring model data set comprises only the annotation maps of the three conditions, stored separately in three folders;
s304, dividing the data set into a training set, a test set, and a verification set;
The molten pool feature extraction model data set is obtained by mixing all the expanded image data and label data and serves as the input of the molten pool feature extraction model; the pictures do not need additional classification. The molten pool penetration state monitoring model data set takes the label image data of the three conditions (incomplete penetration, moderate penetration, and excessive penetration) as its input; it does not need the original picture data, but must be strictly classified. The molten pool feature extraction model data set is divided into a training set, a verification set, and a test set in the ratio 8:1:1; the training set is used for training, the verification set guards against overfitting, and the test set verifies the training effect;
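The 8:1:1 split described above can be sketched as follows; the fixed seed is an illustrative choice for reproducibility, not something the patent specifies.

```python
import random

def split_dataset(files, ratios=(0.8, 0.1, 0.1), seed=42):
    # shuffle deterministically, then cut into train / verification / test partitions
    files = list(files)
    random.Random(seed).shuffle(files)
    n = len(files)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (files[:n_train],
            files[n_train:n_train + n_val],
            files[n_train + n_val:])
```

Shuffling before splitting matters here because consecutive video frames are highly correlated; without it, the verification set would be nearly identical to adjacent training frames.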
s4, building a molten pool feature extraction model, and training the model by adopting a training data set;
the method comprises the following specific steps:
s401, constructing a lightweight molten pool image semantic segmentation model;
s402, training the model with the training set, then evaluating it with the test set and the verification set;
s403, plotting the curves of the training loss, mIoU, and mPA values against the number of training epochs;
s404, observing from the validation loss whether the model converges, to judge whether training is sufficient;
s405, recording and saving the optimal model according to the training loss, mPA, and mIoU values;
The molten pool feature extraction model obtains a deep feature map by encoding the image and recovers a feature extraction map of the original image by decoding it, which requires a very large amount of computation; yet the intelligent welding robot must extract molten pool features in real time while monitoring the penetration state. The parameter count of a convolutional neural network depends on the image resolution and the number of channels in each convolution layer. First, before a picture is fed to the convolution layers, it is scaled to 512x512 with opencv without losing picture detail. Second, the number of convolution kernels in each encoder block is set to 1/4 of that of the native UNet, i.e. 16, 32, 64, 128, and 256 per block; likewise, the corresponding kernel counts in the decoder are reduced to 1/4 of the native UNet. Finally, each convolution block contains two convolution layers, each using a 3x3 kernel followed by one ReLU; two stacked 3x3 convolutions obtain a larger receptive field with fewer parameters and extract more image features with less loss, while the two ReLUs in each block fit nonlinearities better. These improvements yield a lightweight UNet that greatly reduces inference time but may lose accuracy; therefore an ECA-Net channel attention module is added in the downsampling stage, the BCELoss function is replaced by the Focal Loss function (denoted L_FL), and the Dice Loss function (denoted L_DL) is introduced as a correction;
L = L_FL + L_DL
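A minimal NumPy sketch of this combined loss for a binary molten pool mask, assuming the predictions are already sigmoid probabilities. The alpha/gamma values and the smoothing constant are common illustrative defaults, not values stated in the patent.

```python
import numpy as np

def focal_loss(pred, target, gamma=2.0, alpha=0.25, eps=1e-7):
    # focal loss: down-weights easy pixels so training focuses on hard ones
    pred = np.clip(pred, eps, 1 - eps)
    pt = np.where(target == 1, pred, 1 - pred)   # probability of the true class
    a = np.where(target == 1, alpha, 1 - alpha)  # class balancing weight
    return float(np.mean(-a * (1 - pt) ** gamma * np.log(pt)))

def dice_loss(pred, target, smooth=1.0):
    # dice loss: penalizes low overlap between predicted and labeled pool regions
    inter = np.sum(pred * target)
    return float(1 - (2 * inter + smooth) / (np.sum(pred) + np.sum(target) + smooth))

def total_loss(pred, target):
    # L = L_FL + L_DL, as in the equation above
    return focal_loss(pred, target) + dice_loss(pred, target)
```

The combination is a common remedy for class imbalance in segmentation: the background dominates a molten pool image, so plain BCE tends to under-segment, while the dice term directly rewards region overlap.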
After the lightweight molten pool feature extraction model is built, the molten pool feature extraction model data set produced in step S3 is fed into it for training; the training loss, mIoU, mPA, and epoch number are recorded, and curves of the loss, mIoU, and mPA values against the number of training epochs are plotted;
It should be noted that each training epoch is immediately followed by validation on the verification set. The model learns the molten pool features from the training set, which determines parameters such as its weights and biases; the verification set monitors the learning effect of each epoch without changing the model parameters, while the loss, mIoU, and mPA values are recorded to comprehensively analyze model performance. Training is stopped when the loss value has not changed for 5 consecutive epochs, since overfitting may then occur; after training, model performance is evaluated on the test set;
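The stopping rule above ("no improvement for 5 consecutive epochs") can be sketched as a small early-stopping helper; the class name and the `min_delta` parameter are illustrative additions, not terms from the patent.

```python
class EarlyStopper:
    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience    # epochs without improvement before stopping
        self.min_delta = min_delta  # minimum change that counts as improvement
        self.best = float("inf")
        self.count = 0

    def step(self, val_loss):
        # call once per epoch with the validation loss; returns True when training should stop
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.count = 0
        else:
            self.count += 1
        return self.count >= self.patience
```

In the training loop, the checkpoint with the best recorded loss/mPA/mIoU would be the one saved as the optimal model, not the final-epoch weights.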
s5, building a molten pool penetration state monitoring model, and training the model by adopting a training data set;
the method comprises the following specific steps:
s501, constructing a molten pool penetration state monitoring model;
s502, the penetration state is divided into non-penetration, moderate penetration and penetration, the morphological characteristics of a molten pool are used as input, the penetration state is used as output, and a molten pool penetration state monitoring model is trained;
the molten pool penetration state monitoring model is constructed based on a ShuffleNet deep convolutional neural network and ECA-Net attention mechanism modules; the ShuffleNet network is mainly formed by stacking two kinds of units, shown in fig. 5 (a) and (b); in ShuffleNet unit (a), the input features are first split in two along the channel dimension, i.e., a grouping operation is performed on the channels, and each group feeds one branch: the left branch is an identity mapping with no operation, while the right branch performs two 1×1 point-wise convolutions and one 3×3 depth-wise separable convolution, with the same number of input and output channels on both sides; the outputs of the two branches are concatenated and fused along the channel dimension, and channel shuffling is used to ensure that the features of the two sides are fully fused; unit (b) mainly doubles the channel count and halves the width and height, and thus serves as a downsampling module; relative to the right side of unit (a), the depth-wise separable convolution in the right branch of unit (b) uses a stride of 2, completing 2× downsampling of the features, while the left branch performs one 3×3 depth-wise separable convolution with stride 2 and one 1×1 point-wise convolution; the two branches are concatenated along the channel dimension, doubling the feature channels; finally, to improve the accuracy of the model, an ECA-Net attention mechanism module is added after the concatenation in both unit (a) and unit (b);
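The channel split, concatenation, and channel-shuffle data flow of unit (a) can be sketched with NumPy arrays; the right branch's convolutions are replaced by an identity placeholder, since only the tensor plumbing is illustrated here:

```python
import numpy as np

def channel_shuffle(x, groups=2):
    # x: (N, C, H, W).  The reshape-transpose-flatten trick interleaves
    # channels from the two branches so information mixes between them.
    n, c, h, w = x.shape
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))

def shufflenet_unit_a(x):
    # Unit (a): split channels in half; the left branch is an identity
    # mapping, the right branch's 1x1 -> 3x3 DW -> 1x1 convolutions are
    # mocked as identity in this data-flow sketch.
    n, c, h, w = x.shape
    left, right = x[:, :c // 2], x[:, c // 2:]
    out = np.concatenate([left, right], axis=1)
    return channel_shuffle(out, groups=2)
```

After shuffling, channels from the identity branch and the convolution branch alternate, which is what lets the next unit's split see features from both paths.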
the overall molten pool penetration state monitoring model is shown in fig. 4; a molten pool image is input into the convolution layer and pooling layer of the first stage of the model to obtain a feature map of size 64×56×56; then, according to the repetition counts of the network units specified for the second, third, and fourth stages of the model, a feature map of size 192×7×7 is obtained; finally, a serial convolution layer and a global max pooling layer respectively increase the depth of the feature map to 1024 and reduce its spatial size to 1;
after the network model is built, the penetration state is divided into non-penetration, moderate penetration, and penetration; with the morphological characteristics of the molten pool as input and the penetration state as output, the molten pool penetration state monitoring model is trained;
s6, in the actual welding process, shooting a real-time image of a welding area by using a CCD camera, inputting the real-time image into a lightweight molten pool image semantic segmentation model, and extracting morphological characteristics of the molten pool area;
s7, the lightweight molten pool image semantic segmentation model generates a binary molten pool image, which is input into the molten pool penetration state monitoring model to predict the penetration state.
The invention provides a molten pool feature extraction model based on an improved UNet deep learning network model and an ECA-Net attention mechanism module. The left side of the network is the encoding structure, i.e., the feature extraction module, comprising 4 downsampling steps. Each downsampling step uses pooling to reduce the dimensions of the feature map, enlarging the model's receptive range to obtain more global information; before each downsampling, features are extracted by convolution blocks, each containing two convolution layers, with normalization applied after each convolution followed by a ReLU activation. The right side of the network is the decoding module, comprising four upsampling steps that gradually restore the feature maps extracted by the encoder to the size of the original input picture and finally output the segmented laser stripes. As the image is repeatedly convolved, the model learns deeper image features but loses image detail, so after each upsampling in the decoder the features are fused with the corresponding feature map in the encoder to improve the edge accuracy of the segmentation result.
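The 1/4 channel reduction described earlier shrinks each convolution layer's weight count roughly 16-fold, since a conv layer holds about C_in·C_out·k² weights. A small sketch, assuming the standard native-UNet encoder widths (64…1024) and a single-channel grayscale input from the CCD camera (both assumptions, not values stated in the patent):

```python
def conv_params(c_in, c_out, k=3):
    # Weight count of one conv layer, ignoring bias terms.
    return c_in * c_out * k * k

# Native UNet encoder channel widths vs the lightweight 1/4 variant.
native = [64, 128, 256, 512, 1024]
light = [c // 4 for c in native]  # 16, 32, 64, 128, 256

def encoder_weights(chs, in_ch=1):
    # Two 3x3 convs per block, as described in the text.
    total, prev = 0, in_ch
    for c in chs:
        total += conv_params(prev, c) + conv_params(c, c)
        prev = c
    return total

ratio = encoder_weights(native) / encoder_weights(light)
```

The ratio comes out just under 16 (the very first layer's input channel count is not scaled), which is the main source of the inference-time reduction claimed for the lightweight UNet.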
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (6)
1. The real-time welding characteristic extraction and penetration monitoring method based on machine vision is characterized by comprising the following steps of:
s1, acquiring molten pool images for the three conditions of lack of penetration, moderate penetration, and penetration by using a CCD camera;
s2, respectively labeling, at pixel level, the molten pool images for the three conditions of lack of penetration, moderate penetration, and penetration, to generate corresponding labeling diagrams;
s3, carrying out data enhancement on the generated annotation graph and the original graph simultaneously, expanding a data set, and manufacturing a molten pool characteristic extraction model data set and a molten pool penetration state monitoring model data set;
s4, building a molten pool feature extraction model, and training the model by adopting a training data set;
s5, building a molten pool penetration state monitoring model, and training the model by adopting a training data set;
s6, in the actual welding process, shooting a real-time image of a welding area by using a CCD camera, inputting the real-time image into a lightweight molten pool image semantic segmentation model, and extracting morphological characteristics of the molten pool area;
s7, generating a binary molten pool image from the light molten pool image semantic segmentation model, inputting the binary molten pool image into a molten pool penetration state monitoring model, and predicting the penetration state.
2. The method for real-time welding feature extraction and penetration monitoring based on machine vision according to claim 1, wherein in step S1, the method specifically comprises the following steps:
s101, shooting video clips of lack of penetration, moderate penetration, and penetration in the welding process by using a CCD camera, respectively;
s102, dividing video files with three conditions of lack of penetration, moderate penetration and penetration into picture data according to frames respectively, storing the picture data in different folders, and finally cutting out an interested region by taking a molten pool as a center;
s103, processing the image by using median filtering, suppressing noise and strengthening molten pool characteristics.
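The median filtering of step S103 can be sketched as a plain-NumPy 3×3 filter (in practice `cv2.medianBlur` would typically be used; this hand-rolled version just illustrates the operation):

```python
import numpy as np

def median_filter3(img):
    # 3x3 median filter with edge replication: suppresses the
    # salt-and-pepper noise typical of arc-light-polluted weld images
    # while preserving the molten-pool edge better than mean filtering.
    padded = np.pad(img, 1, mode='edge')
    h, w = img.shape
    # Stack the nine shifted views of the image, then take the
    # per-pixel median across the stack.
    stack = [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack, axis=0), axis=0)
```

A single bright impulse (e.g., a spatter reflection) is removed entirely, while uniform regions pass through unchanged.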
3. The method for real-time welding feature extraction and penetration monitoring based on machine vision according to claim 2, wherein in step S2, the method specifically comprises the following steps:
s201, marking the molten pool images for the three conditions of lack of penetration, moderate penetration, and penetration by using the Labelme marking tool;
s202, selecting the polygon function in the Labelme software, marking the edge contour of the molten pool pixel by pixel, generating a json file corresponding to the original image, and generating a mask image with the same name by using a python script;
s203, after marking is completed, converting the generated file into a binary image, wherein the black part serves as the background with pixel value 0, and the white part represents the molten pool feature with pixel value 255.
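The mask-to-binary conversion of step S203 is a simple threshold over the label image; a minimal sketch:

```python
import numpy as np

def to_binary_mask(label_img, threshold=0):
    # Background (pixel value 0) stays 0; any labeled molten-pool
    # pixel becomes 255, matching the convention in step S203.
    return np.where(label_img > threshold, 255, 0).astype(np.uint8)
```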
4. The method for real-time welding feature extraction and penetration monitoring based on machine vision according to claim 3, wherein in step S3, the method specifically comprises the following steps:
s301, carrying out data enhancement on the mask image obtained in the step S2 and an original image by using 5 modes of rotation, inversion, scaling, brightness change and contrast change, expanding a data set, and manufacturing a molten pool characteristic extraction model data set and a molten pool penetration state monitoring model data set after the data set is expanded;
s302, a molten pool semantic segmentation model data set comprises molten pool images and corresponding annotation drawings of three conditions of non-penetration, moderate penetration and penetration, and the molten pool images and the corresponding annotation drawings are stored in a folder;
s303, a molten pool penetration state monitoring model only comprises labeling diagrams of three conditions of lack-penetration, moderate penetration and penetration, and the labeling diagrams are respectively stored in three folders;
s304, dividing the data set into a training set, a testing set and a verification set.
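Steps S301 and S304 can be sketched as follows; the augmentation parameters (rotation angle, scale factor, brightness/contrast gains) and the 70/15/15 split ratio are illustrative assumptions, not values from the patent:

```python
import random
import numpy as np

AUGMENTATIONS = {
    # The five enhancement modes of step S301; the same transform must
    # be applied to the original image and its mask in lockstep.
    'rotate': lambda img: np.rot90(img),
    'flip': lambda img: np.fliplr(img),
    'scale': lambda img: np.kron(img, np.ones((2, 2))),  # crude 2x upscale
    'brightness': lambda img: np.clip(img * 1.2, 0, 255),
    'contrast': lambda img: np.clip((img - 128) * 1.5 + 128, 0, 255),
}

def split_dataset(samples, train=0.7, val=0.15, seed=0):
    # Shuffle, then divide into training / verification / test subsets
    # as required by step S304.
    rng = random.Random(seed)
    samples = samples[:]
    rng.shuffle(samples)
    n = len(samples)
    n_tr, n_val = int(n * train), int(n * val)
    return samples[:n_tr], samples[n_tr:n_tr + n_val], samples[n_tr + n_val:]
```

Note that brightness and contrast changes would be applied to the original image only, never to the mask, since the mask encodes geometry rather than intensity.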
5. The method for real-time welding feature extraction and penetration monitoring based on machine vision according to claim 1, wherein in step S4, the method specifically comprises the following steps:
s401, constructing a lightweight molten pool image semantic segmentation model;
s402, training the model with the training set, then testing the model with the test set and the verification set;
s403, drawing a change curve of a loss value and the number of training wheels, an mIoU value and the number of training wheels and an mPA value and the number of training wheels of model training;
s404, observing whether the model converges according to the validation loss value, and judging whether the training is sufficient;
s405, recording and storing an optimal model according to the loss value, the mPA and the mIoU value of model training.
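The mIoU and mPA values recorded in steps S403–S405 can be computed from a confusion matrix; a minimal sketch for the binary (background / molten pool) case:

```python
import numpy as np

def miou_mpa(pred, target, num_classes=2):
    # Build a confusion matrix, then average per-class IoU (mIoU)
    # and per-class pixel accuracy (mPA) over the present classes.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(target.ravel(), pred.ravel()):
        cm[t, p] += 1
    ious, pas = [], []
    for c in range(num_classes):
        tp = cm[c, c]
        union = cm[c, :].sum() + cm[:, c].sum() - tp
        if cm[c, :].sum() > 0:  # skip classes absent from the labels
            pas.append(tp / cm[c, :].sum())
            ious.append(tp / union if union else 0.0)
    return float(np.mean(ious)), float(np.mean(pas))
```

Tracking both metrics per epoch, together with the loss value, is what allows step S405 to select and save the best-performing model.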
6. The method for real-time welding feature extraction and penetration monitoring based on machine vision according to claim 1, wherein in step S5, the method specifically comprises the following steps:
s501, constructing a molten pool penetration state monitoring model;
s502, the penetration state is divided into non-penetration, moderate penetration and penetration, the morphological characteristics of a molten pool are used as input, the penetration state is used as output, and a molten pool penetration state monitoring model is trained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311442613.6A CN117415501A (en) | 2023-11-01 | 2023-11-01 | Real-time welding feature extraction and penetration monitoring method based on machine vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311442613.6A CN117415501A (en) | 2023-11-01 | 2023-11-01 | Real-time welding feature extraction and penetration monitoring method based on machine vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117415501A true CN117415501A (en) | 2024-01-19 |
Family
ID=89530917
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311442613.6A Pending CN117415501A (en) | 2023-11-01 | 2023-11-01 | Real-time welding feature extraction and penetration monitoring method based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117415501A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117600702A (en) * | 2024-01-23 | 2024-02-27 | 厦门锋元机器人有限公司 | Aluminum welding production line supervision method and system based on artificial intelligence |
CN118314431A (en) * | 2024-06-07 | 2024-07-09 | 临沂临工重托机械有限公司 | Plasma arc welding penetration prediction method and system based on multi-view molten pool image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117415501A (en) | Real-time welding feature extraction and penetration monitoring method based on machine vision | |
CN110363781B (en) | Molten pool contour detection method based on deep neural network | |
CN114155372A (en) | Deep learning-based structured light weld curve identification and fitting method | |
CN112381095B (en) | Electric arc additive manufacturing layer width active disturbance rejection control method based on deep learning | |
CN108624880A (en) | A kind of Laser Cladding Quality intelligence control system and its intelligent control method | |
Liu et al. | 3DSMDA-Net: An improved 3DCNN with separable structure and multi-dimensional attention for welding status recognition | |
CN113379740A (en) | VPPAW fusion in-situ real-time monitoring system based on perforation molten pool image and deep learning | |
Cai et al. | Real-time identification of molten pool and keyhole using a deep learning-based semantic segmentation approach in penetration status monitoring | |
CN116597391B (en) | Synchronous on-line monitoring method for weld surface morphology and penetration state | |
CN110210497A (en) | A kind of real-time characteristics of weld seam detection method of robust | |
CN112381783B (en) | Weld track extraction method based on red line laser | |
CN111754507A (en) | Light-weight industrial defect image classification method based on strong attention machine mechanism | |
CN115272204A (en) | Bearing surface scratch detection method based on machine vision | |
CN114170176A (en) | Automatic detection method for steel grating welding seam based on point cloud | |
CN113076989A (en) | Chip defect image classification method based on ResNet network | |
CN114119504A (en) | Automatic steel part welding line detection method based on cascade convolution neural network | |
Luo et al. | Waterdrop removal from hot-rolled steel strip surfaces based on progressive recurrent generative adversarial networks | |
CN117611571A (en) | Strip steel surface defect detection method based on improved YOLO model | |
CN115609110B (en) | Electric arc composite additive penetration prediction method based on multimode fusion | |
CN116843657A (en) | Welding defect detection method and device based on attention fusion | |
CN113435670B (en) | Prediction method for deviation quantification of additive manufacturing cladding layer | |
CN114022750A (en) | Welding spot appearance image identification method and system based on aggregation-calibration CNN | |
CN116246065A (en) | Refined image semantic segmentation method based on progressive neighbor aggregation | |
CN112733934B (en) | Multi-mode feature fusion road scene semantic segmentation method in complex environment | |
CN114682879A (en) | Weld joint tracking method based on target tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||