CN114119504A - Automatic steel part welding line detection method based on cascade convolution neural network - Google Patents
- Publication number
- CN114119504A (application number CN202111312070.7A)
- Authority
- CN
- China
- Prior art keywords
- layer
- neural network
- weld joint
- welding
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30152—Solder
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Biology (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention provides an automatic steel weld seam detection method based on a cascaded convolutional neural network, comprising the following steps: S1, scan the steel part with an industrial camera and an infrared laser to acquire weld data; S2, annotate the weld region and the center line in each image to produce a weld reference image set and a center-line reference image set; S3, train the cascaded convolutional neural network on the weld data and the weld reference image set, and train the center-line extraction network on the weld data and the center-line reference image set, using a training algorithm; S4, scan the steel piece to be detected with the industrial camera and infrared laser to acquire the weld data to be detected; S5, feed the weld data to be detected into the trained detection model and output the detection result. The invention accurately extracts the weld position, greatly improves interference resistance, ensures welding quality, and improves the adaptive capability of the automatic welding system.
Description
Technical Field
The invention belongs to the technical field of weld joint detection, and particularly relates to an automatic steel part weld joint detection method based on a cascade convolution neural network.
Background
Welding has become one of the most widely used joining methods, with applications in aerospace, electronics manufacturing, machinery manufacturing, shipbuilding, and other fields. However, the welding site environment is harsh: the harmful gases and dazzling arc light generated by the welding torch endanger the safety of welders. With an aging population, the number of welders decreases year by year while welding demand rises, and traditional manual welding can no longer meet society's needs.
The development of the artificial-intelligence industry and electronics manufacturing technology has brought new ideas and techniques to the welding industry. Artificial intelligence provides strong technical support for welding automation and intelligence and can meet the welding demands of a new era; weld seam detection is a key technology for realizing such automation. Efficient and accurate weld seam identification and detection is of great practical engineering significance for realizing third-generation autonomous intelligent welding robots.
With the rapid automation of manufacturing, metal welding has become an essential link in production. To meet the requirements of automated welding, position information such as the width and center line of the weld must be obtained in real time during welding. Laser-vision-based seam tracking has therefore become a hotspot in the field of automated welding. Laser vision sensing projects a laser onto the weld surface to form a stripe image containing the weld profile, which is then analyzed for features. During welding, interference from strong arc light, spatter, and arc noise seriously pollutes the captured images with noise, making accurate localization difficult and directly affecting welding quality. Accurately obtaining weld position information during automated welding is therefore critical.
Many methods have been applied to weld laser-line feature extraction, including thresholding and gray-scale centroid methods that separate target from background by gray level, edge methods that extract the target's edges, geometric-center methods that extract the image center, and methods that locate the light-stripe pixels with the Steger algorithm. A weld detection method designed for a specific steel part can achieve high-precision detection; however, detection accuracy is low for steel parts of other types and under strong arc-light interference, making it difficult to meet the requirements of real welding tracking.
Disclosure of Invention
The invention aims to overcome the shortcomings of the prior art by providing an automatic steel part weld seam detection method based on a cascaded convolutional neural network, addressing the technical problem that weld detection methods designed for a specific steel part achieve high precision but detect poorly on other steel-part types and under strong arc-light interference.
To achieve this purpose, the invention provides the following technical scheme: an automatic steel weld seam detection method based on a cascaded convolutional neural network, comprising the following steps:
step S1, acquiring weld data: scanning the steel part with an industrial camera and an infrared laser to obtain the weld data;
step S2, data labeling: labeling the weld region and the center line in each image to produce the weld reference image set and the center-line reference image set;
step S3, training the cascaded convolutional neural network and the center-line extraction network: training the cascaded convolutional neural network on the weld data and the weld reference image set with a training algorithm, and training the center-line extraction network on the weld data and the center-line reference image set;
step S4, acquiring the weld data to be detected: scanning the steel piece to be detected with an industrial camera and an infrared laser to obtain the weld data to be detected;
step S5, computing the detection result: feeding the weld data to be detected into the trained detection model and outputting the detection result.
Further, the step S2 specifically includes:
step S2-1, converting the weld data images into PNG format;
step S2-2, labeling the weld images processed in step S2-1, the labels comprising the position and size of the weld region and the position and size of the center line.
Further, in step S3, the training algorithm is one of: stochastic gradient descent (SGD), Adam, RMSProp, AdaGrad, AdaDelta, or Adamax.
Further, in step S3, the cascaded convolutional neural network comprises, in order, an input layer, a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, a fifth deconvolution layer, a sixth convolutional layer, a fourth deconvolution layer, a seventh convolutional layer, a third deconvolution layer, an eighth convolutional layer, a second deconvolution layer, a first deconvolution layer, a ninth convolutional layer, an output layer, and a Softmax layer. Lower convolutional layers extract finer details, including laser-stripe edge information, while higher layers extract more positional information; the two are fused by addition, and this feature fusion supplements local detail features. In the cascaded convolutional neural network, feature fusion is applied to the outputs of the first convolutional layer and the second deconvolution layer, the second convolutional layer and the third deconvolution layer, the third convolutional layer and the fourth deconvolution layer, and the fourth convolutional layer and the fifth deconvolution layer.
Further, in step S3, the cascaded convolutional neural network is trained with a cross-entropy loss, computed as shown in formula one:

$$\mathrm{loss}_{seg}\big(y, f(x), \theta_1\big) = -\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\sum_{k=1}^{K} 1\big(y_i^j = k\big)\,\log p_k\big(x_i^j\big) \qquad \text{(formula one)}$$

where $\theta_1$ denotes the parameters of the cascaded convolutional neural network, $M$ the mini-batch size, $N$ the number of pixels per block, and $K$ the number of classes; $1(y = k)$ is the indicator function, equal to 1 when $y = k$ and 0 otherwise; $x_i^j$ denotes the $j$-th pixel of the $i$-th block, $f(x_i^j)$ the output of the last deconvolution layer at that pixel, and $y_i^j$ the corresponding pixel in the ground-truth label.

The probability that a pixel belongs to the $k$-th class is $p_k(x_i^j)$, computed as shown in formula two:

$$p_k\big(x_i^j\big) = \frac{\exp\big(f_k(x_i^j)\big)}{\sum_{l=1}^{K}\exp\big(f_l(x_i^j)\big)} \qquad \text{(formula two)}$$
further, in step S3, the centerline extraction network sequentially includes an input layer, a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a third convolution layer, a third pooling layer, a third upper pooling layer, a third deconvolution layer, a second upper pooling layer, a second deconvolution layer, a first upper pooling layer, a first deconvolution layer, an output layer, and a Softmaxc layer; compared with a cascaded convolutional neural network, the central line extraction network also comprises an encoding layer and a decoding layer, but the central line extraction network has a smaller structure; on one hand, the feature graph output by the last layer of convolutional layer of the cascaded convolutional neural network contains less interference information than the original graph, and the centerline extraction network can take the feature graph as the input of the network; on the other hand, compared with the weld detection, fewer centerline pixels are used to train the centerline extraction network.
Further, in step S3, the center-line extraction network is trained with a cross-entropy loss, computed as shown in formula three:

$$\mathrm{loss}_{cen}\big(z, h(x), \theta_2\big) = -\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\sum_{k=1}^{K} 1\big(z_i^j = k\big)\,\log q_k\big(x_i^j\big) \qquad \text{(formula three)}$$

where $\theta_2$ denotes the parameters of the center-line extraction network; $1(z = k)$ is the indicator function, equal to 1 when $z = k$ and 0 otherwise; $h(x_i^j)$ denotes the output of the last deconvolution layer of the center-line extraction network at pixel $x_i^j$, and $z_i^j$ the corresponding pixel in the ground-truth center-line label.

The probability that a pixel belongs to the $k$-th class is $q_k(x_i^j)$, computed as shown in formula four:

$$q_k\big(x_i^j\big) = \frac{\exp\big(h_k(x_i^j)\big)}{\sum_{l=1}^{K}\exp\big(h_l(x_i^j)\big)} \qquad \text{(formula four)}$$
further, in step S3, the overall network loss calculation mode is shown as formula five;
Loss(θ1,θ2)=lossseg(y,f(x),θ1)+losscen(z,h(x),θ2) And a fifth expression.
Further, the step S5 specifically includes:
step S5-1, thinning the center line with the gray-scale centroid method;
step S5-2, for single-pass welding, fitting the center line with the least-squares method (LSM), the intersection of the fitted lines being the feature-point position; for multi-pass welding, fitting the center line with a non-uniform rational B-spline (NURBS) curve and obtaining the feature-point positions by differentiating the fitted curve.
Further, in step S5-2, the NURBS curve is computed as shown in formula six:

$$C(u) = \frac{\sum_{i=0}^{n} N_{i,p}(u)\, w_i\, P_i}{\sum_{i=0}^{n} N_{i,p}(u)\, w_i}, \qquad a \le u \le b \qquad \text{(formula six)}$$

where $P_i$ denotes the control vertices, $w_i$ the weight factors, $a = 0$ and $b = 1$, and $N_{i,p}(u)$ is the $p$-th degree B-spline basis function defined on the knot vector $U$;

the knot vector $U$ is computed as shown in formula seven.
compared with the prior art, the invention has the beneficial effects that:
(1) The method accurately extracts the weld position under interference from strong arc light, spatter, and arc noise, greatly improving interference resistance, ensuring welding quality, and improving the adaptive capability of the automatic welding system.
(2) Local detail features are supplemented by feature fusion, without any pre- or post-processing, making the laser-stripe boundary more accurate and clearer.
(3) The cascaded convolutional neural network has deep-learning capability: by continuously training on the feature information of the laser line, it integrates the multi-level features of the weld image and accurately captures the features of the whole weld, while suppressing noise well in the details and achieving better extraction quality and precision.
Drawings
FIG. 1 is a general flow diagram of the present invention;
FIG. 2 is a schematic diagram of a network structure of a cascaded convolutional neural network and a centerline extraction network of the present invention;
FIG. 3 is a schematic diagram of the positions of the feature points of a single pass weld;
fig. 4 is a schematic diagram of the positions of the characteristic points of the multi-pass welding.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. The embodiments described here only explain the technical solution of the invention and do not limit it.
A method for automatically detecting a steel welding seam based on a cascade convolution neural network is shown in figure 1 and comprises the following steps:
step S1, acquiring weld data: scanning the steel part with an industrial camera and an infrared laser to obtain the weld data;
step S2, data labeling: labeling the weld region and the center line in each image to produce the weld reference image set and the center-line reference image set;
step S2 specifically includes:
step S2-1, converting the weld data images into PNG format;
step S2-2, labeling the weld images processed in step S2-1, the labels comprising the position and size of the weld region and the position and size of the center line.
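The reference-mask construction implied by steps S2-1 and S2-2 can be sketched as follows. This is a minimal illustration in which the function name, the bounding-box annotation format, and the point-list format are assumptions for the sketch, not details given in the patent:

```python
import numpy as np

def make_reference_masks(shape, weld_box, centerline_pts):
    """Build binary reference masks from annotations (hypothetical format).

    shape          -- (height, width) of the weld image
    weld_box       -- (y0, x0, y1, x1) bounding the weld region (half-open)
    centerline_pts -- iterable of (y, x) pixels on the annotated center line
    """
    weld_mask = np.zeros(shape, dtype=np.uint8)
    y0, x0, y1, x1 = weld_box
    weld_mask[y0:y1, x0:x1] = 1          # weld-region reference label

    center_mask = np.zeros(shape, dtype=np.uint8)
    for y, x in centerline_pts:          # one-pixel-wide center-line label
        center_mask[y, x] = 1
    return weld_mask, center_mask

region, center = make_reference_masks((64, 64), (20, 10, 40, 50),
                                      [(30, x) for x in range(10, 50)])
```

In practice one mask pair per image would be saved alongside the PNG-converted weld images to form the two reference image sets.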
Step S3, training the cascaded convolutional neural network and the center-line extraction network: training the cascaded convolutional neural network on the weld data and the weld reference image set with a training algorithm, and training the center-line extraction network on the weld data and the center-line reference image set;
Step S4, acquiring the weld data to be detected: scanning the steel piece to be detected with an industrial camera and an infrared laser to obtain the weld data to be detected;
Step S5, computing the detection result: feeding the weld data to be detected into the trained detection model and outputting the detection result.
Step S5 specifically includes:
step S5-1, thinning the center line with the gray-scale centroid method;
step S5-2, for single-pass welding, fitting the center line with the least-squares method (LSM), the intersection of the fitted lines being the feature-point position; for multi-pass welding, fitting the center line with a NURBS curve and obtaining the feature-point positions by differentiating the fitted curve.
In step S1, to improve data-acquisition accuracy, the industrial camera has a resolution of 656 × 492 and the laser emitter a wavelength of 650 nm.
In step S3, the training algorithm is one of: stochastic gradient descent (SGD), Adam, RMSProp, AdaGrad, AdaDelta, or Adamax.
In step S3, the trainable parameters of the network are randomly initialized from a Gaussian distribution with a standard deviation of 0.01. The learning rate and the weight decay are set to 5 × 10⁻⁵ and 5 × 10⁻⁴, respectively. 3200 original weld images with their weld reference images are selected, 2400 as the training set and 800 as the test set, to train the cascaded convolutional neural network. Likewise, 3200 original weld images with their center-line reference images are selected, 2400 for training and 800 for testing, to train the center-line extraction network.
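The initialization and data split described above can be sketched as follows; the random seed and the kernel shape are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian initialization with standard deviation 0.01 (illustrative 3x3 kernel bank)
conv1_weights = rng.normal(loc=0.0, scale=0.01, size=(32, 3, 3, 3))

# Learning rate and weight decay from the embodiment
learning_rate = 5e-5
weight_decay = 5e-4

# 3200 images: 2400 for training, 800 for testing
indices = rng.permutation(3200)
train_idx, test_idx = indices[:2400], indices[2400:]
```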
In step S3, the model of the cascaded convolutional neural network is shown in fig. 2. The network comprises, in order, an input layer, a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, a fifth deconvolution layer, a sixth convolutional layer, a fourth deconvolution layer, a seventh convolutional layer, a third deconvolution layer, an eighth convolutional layer, a second deconvolution layer, a first deconvolution layer, a ninth convolutional layer, an output layer, and a Softmax layer. Repeated convolution loses image feature information such as edge information and target position, whereas weld detection must provide accurate position information as well as the edge information of the laser stripe. Lower convolutional layers extract finer details, including laser-stripe edges, while higher layers extract more positional information, so the method fuses the lower- and higher-layer outputs by addition, supplementing local detail features. In the cascaded convolutional neural network, feature fusion is applied to the outputs of the first convolutional layer and the second deconvolution layer, the second convolutional layer and the third deconvolution layer, the third convolutional layer and the fourth deconvolution layer, and the fourth convolutional layer and the fifth deconvolution layer.
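The layer sequence can be cross-checked against Table 1 with a small pure-Python sketch: under the convention that stride-2 convolutions halve the spatial size, stride-2 deconvolutions double it, and stride-1 layers preserve it, tracing the resolutions reproduces the "Output size" column. Layer names follow Table 1; the sketch itself is not part of the patent:

```python
# (name, kind, stride) for each layer of the cascaded network, per Table 1
LAYERS = [
    ("Conv1-1", "conv", 2), ("Conv1-2", "conv", 1),
    ("Conv2-1", "conv", 2), ("Conv2-2", "conv", 1),
    ("Conv3-1", "conv", 2), ("Conv3-2", "conv", 1),
    ("Conv4", "conv", 2),
    ("Conv5-1", "conv", 2), ("Conv5-2", "conv", 1),
    ("Conv5-3", "conv", 1), ("Conv5-4", "conv", 1),
    ("DConv5", "deconv", 2), ("Conv6", "conv", 1),
    ("DConv4", "deconv", 2), ("Conv7", "conv", 1),
    ("DConv3", "deconv", 2), ("Conv8", "conv", 1),
    ("DConv2", "deconv", 2), ("DConv1", "deconv", 2),
    ("Conv9", "conv", 1), ("Output", "conv", 1),
]

def spatial_sizes(input_size=64):
    """Trace the spatial resolution through the network layer by layer."""
    sizes, size = {}, input_size
    for name, kind, stride in LAYERS:
        if stride == 2:
            size = size // 2 if kind == "conv" else size * 2
        sizes[name] = size
    return sizes

sizes = spatial_sizes()
```

Note that each fused pair (e.g. Conv1-1 and DConv2) ends up at the same spatial size, which is what makes fusion by addition possible.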
In step S3, the cascaded convolutional neural network is trained with a cross-entropy loss, computed as shown in formula one.

$$\mathrm{loss}_{seg}\big(y, f(x), \theta_1\big) = -\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\sum_{k=1}^{K} 1\big(y_i^j = k\big)\,\log p_k\big(x_i^j\big) \qquad \text{(formula one)}$$

where $\theta_1$ denotes the parameters of the cascaded convolutional neural network, listed in table 1; $M$ denotes the mini-batch size, $N$ the number of pixels per block, and $K$ the number of classes; $1(y = k)$ is the indicator function, equal to 1 when $y = k$ and 0 otherwise; $x_i^j$ denotes the $j$-th pixel of the $i$-th block, $f(x_i^j)$ the output of the last deconvolution layer at that pixel, and $y_i^j$ the corresponding pixel in the ground-truth label.
Table 1. Parameters of the cascaded convolutional neural network
Layer | Kernel size | Channels (In, Out) | Stride | Padding | Output size |
---|---|---|---|---|---|
Input | — | — | — | — | 64×64×3 |
Conv1-1 | 3×3 | 3,32 | 2 | 0.5 | 32×32×32 |
Conv1-2 | 3×3 | 32,32 | 1 | 1 | 32×32×32 |
Conv2-1 | 3×3 | 32,32 | 2 | 0.5 | 16×16×32 |
Conv2-2 | 3×3 | 32,32 | 1 | 1 | 16×16×32 |
Conv3-1 | 3×3 | 32,64 | 2 | 0.5 | 8×8×64 |
Conv3-2 | 3×3 | 64,64 | 1 | 1 | 8×8×64 |
Conv4 | 3×3 | 64,128 | 2 | 0.5 | 4×4×128 |
Conv5-1 | 3×3 | 128,128 | 2 | 0.5 | 2×2×128 |
Conv5-2 | 3×3 | 128,128 | 1 | 1 | 2×2×128 |
Conv5-3 | 3×3 | 128,128 | 1 | 1 | 2×2×128 |
Conv5-4 | 3×3 | 128,128 | 1 | 1 | 2×2×128 |
DConv5 | 3×3 | 128,128 | 2 | 0.5 | 4×4×128 |
Conv6 | 3×3 | 256,128 | 1 | 1 | 4×4×128 |
DConv4 | 3×3 | 128,64 | 2 | 0.5 | 8×8×64 |
Conv7 | 3×3 | 128,64 | 1 | 1 | 8×8×64 |
DConv3 | 3×3 | 64,32 | 2 | 0.5 | 16×16×32 |
Conv8 | 3×3 | 64,32 | 1 | 1 | 16×16×32 |
DConv2 | 3×3 | 32,32 | 2 | 0.5 | 32×32×32 |
DConv1 | 3×3 | 64,32 | 2 | 0.5 | 64×64×32 |
Conv9 | 3×3 | 32,32 | 1 | 1 | 64×64×32 |
Output | 3×3 | 32,2 | 1 | 1 | 64×64×2 |
Softmax | — | — | — | — | 64×64×2 |
The probability that a pixel belongs to the $k$-th class is $p_k(x_i^j)$, computed as shown in formula two.

$$p_k\big(x_i^j\big) = \frac{\exp\big(f_k(x_i^j)\big)}{\sum_{l=1}^{K}\exp\big(f_l(x_i^j)\big)} \qquad \text{(formula two)}$$
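Formulas one and two together amount to the standard softmax cross-entropy; a minimal numpy sketch, with illustrative variable names, is:

```python
import numpy as np

def softmax(logits):
    """Per-pixel class probabilities p_k (formula two), numerically stabilized."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels, num_classes=2):
    """Mean cross-entropy over the M x N pixels (formula one).

    logits -- array (M, N, K) of last-layer outputs f(x_i^j)
    labels -- integer array (M, N) of ground-truth classes y_i^j
    """
    p = softmax(logits)
    one_hot = np.eye(num_classes)[labels]        # the indicator 1(y = k)
    return float(-(one_hot * np.log(p)).sum(axis=-1).mean())

logits = np.zeros((1, 4, 2))                     # uniform logits -> p = 0.5
labels = np.array([[0, 1, 0, 1]])
loss = cross_entropy(logits, labels)             # -log(0.5) ~ 0.6931
```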
In step S3, the model of the center-line extraction network is shown in fig. 2. The network comprises, in order, an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a third convolutional layer, a third pooling layer, a third unpooling layer, a third deconvolution layer, a second unpooling layer, a second deconvolution layer, a first unpooling layer, a first deconvolution layer, an output layer, and a Softmax layer. Like the cascaded convolutional neural network, the center-line extraction network consists of an encoding stage and a decoding stage, but its structure is smaller. On the one hand, the feature map output by the last convolutional layer of the cascaded network contains less interference (strong arc light, spatter, arc noise, and the like) than the original image, so the center-line extraction network can take that feature map as its input. On the other hand, compared with weld detection, fewer center-line pixels are available for training. Deeper networks also tend to overfit, so a smaller network is better suited to center-line extraction.
In step S3, the center-line extraction network is trained with a cross-entropy loss, computed as shown in formula three.

$$\mathrm{loss}_{cen}\big(z, h(x), \theta_2\big) = -\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\sum_{k=1}^{K} 1\big(z_i^j = k\big)\,\log q_k\big(x_i^j\big) \qquad \text{(formula three)}$$

where $\theta_2$ denotes the parameters of the center-line extraction network, listed in table 2; $1(z = k)$ is the indicator function, equal to 1 when $z = k$ and 0 otherwise; $h(x_i^j)$ denotes the output of the last deconvolution layer of the center-line extraction network at pixel $x_i^j$, and $z_i^j$ the corresponding pixel in the ground-truth center-line label.
Table 2. Parameters of the center-line extraction network
The probability that a pixel belongs to the $k$-th class is $q_k(x_i^j)$, computed as shown in formula four.

$$q_k\big(x_i^j\big) = \frac{\exp\big(h_k(x_i^j)\big)}{\sum_{l=1}^{K}\exp\big(h_l(x_i^j)\big)} \qquad \text{(formula four)}$$
In step S3, the overall network loss is computed as shown in formula five.

$$Loss(\theta_1, \theta_2) = \mathrm{loss}_{seg}\big(y, f(x), \theta_1\big) + \mathrm{loss}_{cen}\big(z, h(x), \theta_2\big) \qquad \text{(formula five)}$$
In step S5-2, the NURBS curve is computed as shown in formula six.

$$C(u) = \frac{\sum_{i=0}^{n} N_{i,p}(u)\, w_i\, P_i}{\sum_{i=0}^{n} N_{i,p}(u)\, w_i}, \qquad a \le u \le b \qquad \text{(formula six)}$$

where $P_i$ denotes the control vertices, $w_i$ the weight factors, $a = 0$ and $b = 1$, and $N_{i,p}(u)$ is the $p$-th degree B-spline basis function defined on the knot vector $U$.
The node vector U is calculated as shown in equation seven.
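Formula six can be evaluated with the Cox-de Boor recursion. The sketch below is a hypothetical illustration: since formula seven is not reproduced in this text, a clamped knot vector is assumed in its place:

```python
def bspline_basis(i, p, u, U):
    """Cox-de Boor recursion for the basis function N_{i,p}(u) on knot vector U."""
    if p == 0:
        # half-open span [U_i, U_{i+1}), closed at the right end of the curve
        if U[i] <= u < U[i + 1] or (u == U[-1] and U[i] < U[i + 1] == u):
            return 1.0
        return 0.0
    left = right = 0.0
    if U[i + p] != U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    if U[i + p + 1] != U[i + 1]:
        right = (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) \
                * bspline_basis(i + 1, p - 1, u, U)
    return left + right

def nurbs_point(u, ctrl, weights, p, U):
    """C(u) = sum N_{i,p}(u) w_i P_i / sum N_{i,p}(u) w_i  (formula six)."""
    num_x = num_y = den = 0.0
    for i, ((px, py), w) in enumerate(zip(ctrl, weights)):
        b = bspline_basis(i, p, u, U) * w
        num_x, num_y, den = num_x + b * px, num_y + b * py, den + b
    return num_x / den, num_y / den

# degree-2 example with unit weights on a clamped knot vector
# (the clamped vector is an assumption standing in for formula seven)
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
weights = [1.0, 1.0, 1.0]
U = [0, 0, 0, 1, 1, 1]
pt = nurbs_point(0.5, ctrl, weights, 2, U)       # midpoint of the arc
```

With unit weights the curve reduces to an ordinary B-spline, and a clamped knot vector makes it interpolate the first and last control vertices.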
In step S5-2, for single-pass welding, as shown in fig. 3, the intersections of the fitted center lines are the left boundary point, the lowest point, and the right boundary point, the lowest point being the position of the next welding pass; for multi-pass welding, as shown in fig. 4, the minimum extreme point is the position of the next welding pass, and the minimum and maximum extreme points bound the fluctuation range of the weld.
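Steps S5-1 and S5-2 for the single-pass case can be sketched with numpy: a synthetic V-shaped stripe (an illustrative stand-in for a real weld image) is thinned column by column with the gray-scale centroid, and the two legs of the center line are fitted by least squares, their intersection giving the feature point:

```python
import numpy as np

def column_centroids(img):
    """Gray-scale centroid of each column: y_c = sum(y * I) / sum(I) (step S5-1)."""
    rows = np.arange(img.shape[0], dtype=float)
    weight = img.sum(axis=0)
    ys = (rows[:, None] * img).sum(axis=0) / np.where(weight > 0, weight, 1.0)
    return ys, weight > 0

def v_feature_point(xs, ys, split):
    """Least-squares fit of the two center-line legs; their intersection is the
    feature point (single-pass case of step S5-2). `split` is an assumed column
    separating the two legs."""
    m1, b1 = np.polyfit(xs[xs < split], ys[xs < split], 1)
    m2, b2 = np.polyfit(xs[xs > split], ys[xs > split], 1)
    xi = (b2 - b1) / (m1 - m2)
    return xi, m1 * xi + b1

# synthetic V-shaped laser stripe with its vertex (lowest point) at (x=32, y=10)
img = np.zeros((64, 64))
for x in range(64):
    img[abs(x - 32) + 10, x] = 1.0

ys, valid = column_centroids(img)
xs = np.arange(64)[valid]
fx, fy = v_feature_point(xs, ys[valid], split=32)
```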
The foregoing describes only preferred embodiments of the invention in some detail and should not therefore be construed as limiting its scope. Those skilled in the art can make various changes, modifications, and substitutions without departing from the spirit of the invention, all of which fall within its scope of protection. The protection scope of this patent is therefore defined by the appended claims.
Claims (10)
1. An automatic steel part weld seam detection method based on a cascaded convolutional neural network, characterized by comprising the following steps:
step S1, acquiring weld data: scanning the steel part with an industrial camera and an infrared laser to obtain the weld data;
step S2, data labeling: labeling the weld region and the center line in each image to produce the weld reference image set and the center-line reference image set;
step S3, training the cascaded convolutional neural network and the center-line extraction network: training the cascaded convolutional neural network on the weld data and the weld reference image set with a training algorithm, and training the center-line extraction network on the weld data and the center-line reference image set;
step S4, acquiring the weld data to be detected: scanning the steel piece to be detected with an industrial camera and an infrared laser to obtain the weld data to be detected;
step S5, computing the detection result: feeding the weld data to be detected into the trained detection model and outputting the detection result.
2. The automatic detection method for the steel part weld joint based on the cascaded convolutional neural network as claimed in claim 1, characterized in that: the step S2 specifically includes:
step S2-1, converting the weld data images into PNG format;
step S2-2, labeling the weld images processed in step S2-1, the labels comprising the position and size of the weld region and the position and size of the center line.
3. The automatic detection method for the steel part weld joint based on the cascaded convolutional neural network as claimed in claim 1, characterized in that: in step S3, the training algorithm is one of: stochastic gradient descent (SGD), Adam, RMSProp, AdaGrad, AdaDelta, or Adamax.
4. The automatic detection method for the steel part weld joint based on the cascaded convolutional neural network as claimed in claim 1, characterized in that: in step S3, the cascaded convolutional neural network comprises, in order, an input layer, a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, a fifth deconvolution layer, a sixth convolutional layer, a fourth deconvolution layer, a seventh convolutional layer, a third deconvolution layer, an eighth convolutional layer, a second deconvolution layer, a first deconvolution layer, a ninth convolutional layer, an output layer, and a Softmax layer; lower convolutional layers extract finer details, including laser-stripe edge information, while higher layers extract more positional information; the two are fused by addition, and this feature fusion supplements local detail features; in the cascaded convolutional neural network, feature fusion is applied to the outputs of the first convolutional layer and the second deconvolution layer, the second convolutional layer and the third deconvolution layer, the third convolutional layer and the fourth deconvolution layer, and the fourth convolutional layer and the fifth deconvolution layer.
5. The automatic detection method for the steel part weld joint based on the cascaded convolutional neural network as claimed in claim 4, characterized in that: in step S3, the cascaded convolutional neural network is trained with a cross-entropy loss, computed as shown in formula one;

$$\mathrm{loss}_{seg}\big(y, f(x), \theta_1\big) = -\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\sum_{k=1}^{K} 1\big(y_i^j = k\big)\,\log p_k\big(x_i^j\big) \qquad \text{(formula one)}$$

where $\theta_1$ denotes the parameters of the cascaded convolutional neural network, $M$ the mini-batch size, $N$ the number of pixels per block, and $K$ the number of classes; $1(y = k)$ is the indicator function, equal to 1 when $y = k$ and 0 otherwise; $x_i^j$ denotes the $j$-th pixel of the $i$-th block, $f(x_i^j)$ the output of the last deconvolution layer at that pixel, and $y_i^j$ the corresponding pixel in the ground-truth label;

the probability that a pixel belongs to the $k$-th class is $p_k(x_i^j)$, computed as shown in formula two;

$$p_k\big(x_i^j\big) = \frac{\exp\big(f_k(x_i^j)\big)}{\sum_{l=1}^{K}\exp\big(f_l(x_i^j)\big)} \qquad \text{(formula two)}$$
6. The automatic detection method for the steel part weld joint based on the cascaded convolutional neural network as claimed in claim 5, characterized in that: in step S3, the center-line extraction network comprises, in order, an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a third convolutional layer, a third pooling layer, a third unpooling layer, a third deconvolution layer, a second unpooling layer, a second deconvolution layer, a first unpooling layer, a first deconvolution layer, an output layer, and a Softmax layer; like the cascaded convolutional neural network, the center-line extraction network consists of an encoding stage and a decoding stage, but its structure is smaller; on the one hand, the feature map output by the last convolutional layer of the cascaded network contains less interference than the original image, so the center-line extraction network can take that feature map as its input; on the other hand, compared with weld detection, fewer center-line pixels are available to train the center-line extraction network.
7. The automatic detection method for the steel part weld joint based on the cascaded convolutional neural network as claimed in claim 6, characterized in that: in step S3, the center-line extraction network is trained with a cross-entropy loss, computed as shown in formula three;

$$\mathrm{loss}_{cen}\big(z, h(x), \theta_2\big) = -\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\sum_{k=1}^{K} 1\big(z_i^j = k\big)\,\log q_k\big(x_i^j\big) \qquad \text{(formula three)}$$

where $\theta_2$ denotes the parameters of the center-line extraction network; $1(z = k)$ is the indicator function, equal to 1 when $z = k$ and 0 otherwise; $h(x_i^j)$ denotes the output of the last deconvolution layer of the center-line extraction network at pixel $x_i^j$, and $z_i^j$ the corresponding pixel in the ground-truth center-line label;

the probability that a pixel belongs to the $k$-th class is $q_k(x_i^j)$, computed as shown in formula four;

$$q_k\big(x_i^j\big) = \frac{\exp\big(h_k(x_i^j)\big)}{\sum_{l=1}^{K}\exp\big(h_l(x_i^j)\big)} \qquad \text{(formula four)}$$
8. The automatic detection method for the steel part weld joint based on the cascaded convolutional neural network as claimed in claim 7, characterized in that: in step S3, the overall network loss is calculated as shown in formula five:

$$Loss(\theta_1,\theta_2)=loss_{seg}(y,f(x),\theta_1)+loss_{cen}(z,h(x),\theta_2) \quad \text{(formula five)}$$
9. The automatic detection method for the steel part weld joint based on the cascaded convolutional neural network as claimed in claim 1, characterized in that: the step S5 specifically includes:
step S5-1, thinning the centerline using the gray-scale centroid method;
step S5-2, for a single-pass weld, fitting the centerline with the least squares method (LSM), the intersection point of the fitted centerlines being the feature point position; for a multi-pass weld, fitting the centerline with a NURBS curve and obtaining the feature point positions by differentiating the fitted curve.
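A minimal sketch of steps S5-1 and S5-2 for the single-pass case; scanning the image column by column and parameterising each fitted line as y = a·x + b are assumptions made for illustration:

```python
import numpy as np

def gray_centroid_centerline(img):
    """Step S5-1: sub-pixel centerline by the gray-scale centroid method --
    for each column, the intensity-weighted mean row index."""
    rows = np.arange(img.shape[0], dtype=float)[:, None]
    w = img.astype(float)
    with np.errstate(invalid="ignore", divide="ignore"):
        return (rows * w).sum(axis=0) / w.sum(axis=0)  # NaN for empty columns

def fit_line_lsm(xs, ys):
    """Step S5-2 (single-pass): least-squares fit of y = a*x + b."""
    a, b = np.polyfit(xs, ys, 1)
    return a, b

def line_intersection(l1, l2):
    """Feature point = intersection of the two fitted centerline segments."""
    (a1, b1), (a2, b2) = l1, l2
    x = (b2 - b1) / (a1 - a2)
    return x, a1 * x + b1
```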
10. The automatic detection method for the steel part weld joint based on the cascaded convolutional neural network as claimed in claim 9, characterized in that: in step S5-2, the NURBS curve is calculated as shown in formula six:

$$C(u)=\frac{\sum_{i=0}^{n}N_{i,p}(u)\,w_i\,P_i}{\sum_{i=0}^{n}N_{i,p}(u)\,w_i},\quad a\le u\le b \quad \text{(formula six)}$$

wherein $P_i$ represents the $i$th control vertex, $w_i$ represents the corresponding weight factor, $a$ and $b$ take 0 and 1 respectively, and $N_{i,p}(u)$ is the $p$th-degree B-spline basis function defined on the knot vector $U$;
the knot vector $U$ is calculated as shown in formula seven, with the end knots $a$ and $b$ each repeated $p+1$ times (clamped form):

$$U=\{\underbrace{a,\ldots,a}_{p+1},\,u_{p+1},\ldots,u_{m-p-1},\,\underbrace{b,\ldots,b}_{p+1}\} \quad \text{(formula seven)}$$
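Formula six can be evaluated with the Cox-de Boor recursion for the basis functions N_{i,p}(u); the clamped knot vector and the sample control polygon used below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def bspline_basis(i, p, u, U):
    """Cox-de Boor recursion for the B-spline basis N_{i,p}(u) on knot vector U."""
    if p == 0:
        if U[i] <= u < U[i + 1]:
            return 1.0
        # close the last non-empty span so u = b is included
        if u == U[-1] and U[i] < U[i + 1] and U[i + 1] == U[-1]:
            return 1.0
        return 0.0
    left = 0.0
    if U[i + p] != U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    right = 0.0
    if U[i + p + 1] != U[i + 1]:
        right = (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) \
            * bspline_basis(i + 1, p - 1, u, U)
    return left + right

def nurbs_point(u, P, w, p, U):
    """Formula six: rational combination of control vertices P with weights w."""
    num = np.zeros(len(P[0]))
    den = 0.0
    for i in range(len(P)):
        b = bspline_basis(i, p, u, U) * w[i]
        num = num + b * np.array(P[i], dtype=float)
        den += b
    return num / den
```

With all weights equal to 1 the curve reduces to an ordinary B-spline, so the rational form of formula six only changes the shape when the weights differ.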
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111312070.7A CN114119504A (en) | 2021-11-08 | 2021-11-08 | Automatic steel part welding line detection method based on cascade convolution neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114119504A true CN114119504A (en) | 2022-03-01 |
Family
ID=80381085
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111312070.7A Pending CN114119504A (en) | 2021-11-08 | 2021-11-08 | Automatic steel part welding line detection method based on cascade convolution neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114119504A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114936517A (en) * | 2022-04-28 | 2022-08-23 | 上海波士内智能科技有限公司 | Metal welding signal characteristic curve characteristic modeling method based on deep learning |
CN117564533A (en) * | 2024-01-15 | 2024-02-20 | 苏州德星云智能装备有限公司 | Metal mesh bearing object welding method and device based on machine vision and storage medium |
CN117564533B (en) * | 2024-01-15 | 2024-03-22 | 苏州德星云智能装备有限公司 | Metal mesh bearing object welding method and device based on machine vision and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109741347B (en) | Iterative learning image segmentation method based on convolutional neural network | |
CN114119504A (en) | Automatic steel part welding line detection method based on cascade convolution neural network | |
CN113628178B (en) | Steel product surface defect detection method with balanced speed and precision | |
CN111913435B (en) | Single/multi-target key point positioning method based on stacked hourglass network | |
CN112070727B (en) | Metal surface defect detection method based on machine learning | |
CN114354639B (en) | Weld defect real-time detection method and system based on 3D point cloud | |
CN112329860A (en) | Hybrid deep learning visual detection method, device, equipment and storage medium | |
CN112381095B (en) | Electric arc additive manufacturing layer width active disturbance rejection control method based on deep learning | |
CN116524062B (en) | Diffusion model-based 2D human body posture estimation method | |
CN114473309A (en) | Welding position identification method for automatic welding system and automatic welding system | |
CN114170176A (en) | Automatic detection method for steel grating welding seam based on point cloud | |
Yu et al. | The centerline extraction algorithm of weld line structured light stripe based on pyramid scene parsing network | |
CN114022586A (en) | Defect image generation method based on countermeasure generation network | |
CN105321166A (en) | Annular weld joint edge extraction method based on GAP predictor and self-adaptive genetic algorithm | |
Li et al. | Weld image recognition algorithm based on deep learning | |
Xu et al. | A new welding path planning method based on point cloud and deep learning | |
Moon et al. | Extraction of line objects from piping and instrumentation diagrams using an improved continuous line detection algorithm | |
CN117415501A (en) | Real-time welding feature extraction and penetration monitoring method based on machine vision | |
CN113034494A (en) | Rubber seal ring defect detection method based on deep learning | |
CN117611571A (en) | Strip steel surface defect detection method based on improved YOLO model | |
CN116342542A (en) | Lightweight neural network-based steel product surface defect detection method | |
CN113435670B (en) | Prediction method for deviation quantification of additive manufacturing cladding layer | |
CN115953387A (en) | Radiographic image weld defect detection method based on deep learning | |
CN115457077A (en) | Passive visual weld joint tracking method based on deep learning semantic segmentation | |
CN113159278A (en) | Partitioned network system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||