CN112232400B - Stainless steel weld ultrasonic defect detection method based on depth feature fusion - Google Patents


Info

Publication number
CN112232400B
CN112232400B (application CN202011083232.XA)
Authority
CN
China
Prior art keywords: defect, stainless steel, layer, network, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011083232.XA
Other languages
Chinese (zh)
Other versions
CN112232400A (en)
Inventor
张睿
赵娜
张永梅
闫耀东
傅留虎
陈立潮
王建安
潘理虎
谢刚
Current Assignee
Taiyuan University of Science and Technology
Original Assignee
Taiyuan University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Taiyuan University of Science and Technology filed Critical Taiyuan University of Science and Technology
Priority to CN202011083232.XA priority Critical patent/CN112232400B/en
Publication of CN112232400A publication Critical patent/CN112232400A/en
Application granted granted Critical
Publication of CN112232400B publication Critical patent/CN112232400B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F 18/2411 Pattern recognition; classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06F 18/253 Pattern recognition; fusion techniques of extracted features
    • G06N 3/045 Neural networks; combinations of networks
    • G06N 3/084 Learning methods; backpropagation, e.g. using gradient descent
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Abstract

The invention provides a multi-type defect detection method for stainless steel weld seams based on depth feature fusion. Experiments show that the method achieves high-precision classification of five types of stainless steel weld defects (unfused, slag inclusion, incomplete penetration, air holes and cracks) with an average accuracy of 97.7%. The constructed recognition network is a lightweight model, which improves operating efficiency and addresses the long runtimes otherwise required for deep feature extraction. Compared with existing intelligent weld detection methods, the proposed ultrasonic detection method is simple and convenient, meets the accuracy and efficiency requirements of practical engineering applications, and provides a new technical reference for the field of intelligent weld detection.

Description

Stainless steel weld ultrasonic defect detection method based on depth feature fusion
Technical Field
The invention relates to the technical field of stainless steel weld joint detection, in particular to an ultrasonic defect detection (identification) method for (austenitic) stainless steel weld joints based on depth feature fusion.
Background
Welding defect detection is critical for construction and industrial activities and is an important task in industrial production. Austenitic stainless steel is a strategically important material, so detecting defects in its welds matters correspondingly, and building a reliable austenitic stainless steel weld defect detection model is an urgent engineering problem. At present, traditional weld defect detection is performed manually: a professional inspects the weld, analyses the severity of any defect, and judges it qualitatively. This approach is highly subjective, time- and labour-consuming, and costly. With the development of artificial intelligence, computer vision and machine learning have been successfully applied to austenitic stainless steel weld defect detection.
Ma Xiaolei studied two typical defect types, cracks and air holes, modelling them in the finite-element simulation software COMSOL and extracting defect A-scan echo signals for pattern recognition, achieving a 92% recognition rate for the two defect classes. Starting from time-domain and frequency-domain characteristics, Xiaokaki Wang et al. used a PSO-SVM algorithm with 9 extracted feature values to reach a 95% recognition rate for good, qualified and disqualified spot-welded joints. Hu Hongwei et al. proposed feature extraction from weld defect echo signal data based on a one-dimensional local binary pattern (1-D LBP) algorithm combined with kernel principal component analysis (KPCA), and with a support vector machine finally achieved 96.7% classification accuracy on three weld defect classes: slag inclusion, air holes and incomplete penetration.
These signal-processing-plus-machine-learning methods are effective but largely limited to detecting three defect types: air holes, slag inclusion and incomplete penetration. The main reason is that traditional machine learning is highly task-specific: defect features must be designed manually, so detection is still affected by the inspector's experience, the environment and the defect type, and requires rich domain expertise. Manual feature selection easily introduces human error, so the extracted features carry strong subjective bias and insufficient expressive power, preventing truly intelligent detection and identification; the high computational cost also makes it difficult to meet real-time detection requirements for multiple fault types.
In recent years, deep learning has achieved good performance in various machine vision recognition tasks. Unlike machine learning methods, deep learning does not require manual feature analysis to design a dedicated feature extractor; it automatically extracts features from the original image and has a clear advantage in image classification over conventional recognition algorithms. This offers a new route to automatic detection of austenitic stainless steel weld defects.
Xiong Zhang et al. proposed a solar cell surface defect detection algorithm fusing multi-channel convolutional neural networks, reaching an average recognition rate of 98.3% for broken-grid, crack and unwelded-area defects on solar panel surfaces. Jiaze Shang et al. proposed an 8-layer RayNet for X-ray weld defect classification in oil pipelines, with 96.5% classification accuracy on concave, undercut, rod, circular, lack-of-penetration and incomplete-penetration defects. Liu Han et al. applied a clustering algorithm to segment defects in X-ray images of petroleum steel-pipe welds, then used a 2-layer CNN to recognise and locate three weld defect classes, finally reaching 97.44% average recognition accuracy. Fernando Roca Barceló et al. applied wavelet transforms to remove speckle noise from TOFD two-dimensional ultrasonic weld defect images and detected three weld defect classes with a convolutional neural network, obtaining a best classification accuracy of 99.28%. Zhifen Zhang et al. designed a novel 11-layer CNN classification network for weld defect images; by developing an image acquisition system to collect welding images under different welding currents, feed speeds and angles, they recognised three weld states of aluminium alloy sheet (non-penetration, normal penetration and burn-through) with accuracy up to 99.38%.
Huang Huandong et al. analysed ultrasonic TOFD D-scan images of five weld defect types (cracks, air holes, slag inclusion, incomplete penetration and unfused) and, after defining the relation between image features and defect contours, realised automatic identification of D-scan defect types with a Faster R-CNN network, reaching recognition accuracy above 97%.
As deep learning has been applied and developed in weld detection, considerable results have been achieved, but problems remain. Existing methods build sample sets using optical sensing, X-ray imaging or TOFD two-dimensional imaging. In practical engineering, optical sensing cannot properly capture defect characteristics inside the weld structure; X-ray detection raises safety concerns and is difficult to run in real time; and TOFD two-dimensional imaging places high demands on the transverse and longitudinal resolution and real-time performance of the ultrasonic probe. For these reasons, ultrasonic A-scan echo detection is still the method chosen for real-time inspection in engineering: compared with the alternatives it is safer and faster, and the detection equipment is relatively cheap. However, research on intelligent decision algorithms for the ultrasonic A-scan method remains scarce, mainly because of the material characteristics of austenitic stainless steel welds: under ultrasonic inspection, their coarse, anisotropic microstructure causes beam deflection, distortion and attenuation of the propagating ultrasound, making defect judgement very difficult. The detection signals have a low signal-to-noise ratio, missed detections and misjudgements occur easily, and the reliability of the results is hard to guarantee, all of which makes intelligent detection of austenitic stainless steel welds technically challenging.
Disclosure of Invention
Aiming at the problems in austenitic stainless steel weld inspection of limited discriminability of manually extracted features, time-consuming deep convolutional neural network training, and overfitting caused by small sample sets, the invention further optimises austenitic stainless steel weld defect identification. It provides a multi-type defect detection method based on the fusion of shallow statistical features and deep time-frequency features of ultrasonic A-scan one-dimensional detection signals (i.e. based on depth feature fusion), which effectively improves classification and recognition of five weld defect types (unfused, slag inclusion, incomplete penetration, air holes and cracks) while keeping the computational cost low.
The invention is realized by adopting the following technical scheme:
an ultrasonic defect detection and identification method for (austenitic) stainless steel weld joints based on depth feature fusion comprises the following steps:
a. and respectively carrying out A-type echo data acquisition and label marking on the defect samples by using an ultrasonic flaw detector and an ultrasonic oblique probe on the defects of five austenitic stainless steel weld joints, such as unfused, slag inclusion, incomplete penetration, air holes and cracks, so as to construct a training data set I.
b. The method comprises the steps of constructing a depth feature fusion network model, realizing defect feature extraction after double-channel depth feature fusion through training calculation, and outputting a defect classification recognition result through a support vector machine, wherein the method comprises the following steps of:
(1) Constructing a shallow defect-sensitive characteristic index model based on statistical analysis, taken as channel 1 and comprising 25 features: root mean square value X_rms, square root amplitude X_r, absolute mean value, standard deviation sigma, maximum value X_max, minimum value X_min, peak-to-peak value V_pp, kurtosis beta, kurtosis factor, degree of deviation f_x, eighth-order moment coefficient r_8, sixteenth-order moment coefficient r_16, waveform index K, peak index C, pulse index I_f, margin index L, kurtosis index x_q, deviation index K_3, mean square spectrum M_s, spectrum centre of gravity F_D, frequency-domain variance V_F, correlation factor F_R, harmonic factor H, spectral origin moment M_n, and power spectrum sum G_p(k). These 25 features serve as the stainless steel weld defect statistical feature indexes;
(2) Constructing a defect detection network LackNet with 6 layers of convolution kernels and fully connected layers as channel 2: in the LackNet network, after a picture is input into the network model, feature extraction is first performed by a 7×7×3 convolution kernel with stride 4, using the tanh activation function, tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x)); a 3×3 max pooling with stride 2 then follows, completing the first layer of the network. The second- and third-layer convolution kernels are 3×3 with tanh activation and stride 1, and a 3×3 max pooling with stride 2 is used again after the third layer. The last three layers of the network are fully connected: the third-layer output is fully connected to 4096 neurons of the fourth layer, passed through tanh to produce 4096 values and through dropout to output 4096 values; likewise, the 4096 outputs of the fourth layer are fully connected to 4096 neurons of the fifth layer, processed by tanh and dropout to output 4096 values; finally, the 4096 outputs of the fifth layer are fully connected to 5 neurons of the sixth layer, and the trained values are output after training;
Deep time-frequency feature extraction: convert the weld detection echo signals into time-frequency diagrams by short-time Fourier transform, i.e. convert training data set I obtained in step a into two-dimensional time-frequency diagrams, feed them as input to the channel-2 LackNet convolutional neural network for deep time-frequency feature extraction training, and output the optimal weights W and biases b;
(3) Construction of depth feature fusion network model
Connect the 25-dimensional shallow features of channel 1 with the deep features of channel 2 (the 4096-dimensional feature parameters of the second fully connected layer) by cascade fusion. Cascade fusion directly concatenates the feature quantities of two or more feature maps; all elements are retained, the fused feature dimension is the sum of the feature quantities of the two feature maps, and the fused feature is therefore 4121-dimensional;
(4) Use the fused 4121-dimensional feature vectors and the corresponding defect type labels to construct the classifier input data set. Adopting a one-versus-one algorithm, combine the 5 classes of weld defect feature vector training data in pairs to construct 5(5-1)/2 = 10 support vector machines; set the kernel function to an RBF kernel; select the penalty factor c and kernel parameter g with the function SVMcgForclass; run a classification test on the training data with the obtained optimal c and g; and output the corresponding defect type label after training.
c. Repeat step a on new weld defects, collecting data and marking labels, to construct A-scan one-dimensional detection signals not included in training data set I as test data set II, which covers the five austenitic stainless steel weld defect types: unfused, slag inclusion, incomplete penetration, air holes and cracks. Input test data set II into the trained depth feature fusion network model constructed in step b; when the accuracy of the evaluated output reaches a preset value, the trained model can be used for multi-type defect classification, detection and identification of austenitic stainless steel welds.
The method is a study of stainless steel weld defect recognition based on feature fusion, comprising shallow feature extraction based on statistical analysis, deep time-frequency feature extraction based on CNN, and a stainless steel weld defect classification model based on feature fusion.
1. Shallow feature extraction based on statistical analysis: statistical analysis explores and uncovers the statistical regularities of a random process using probability and statistics, grounded in random process theory. In ultrasonic inspection of austenitic stainless steel welds, the test environment and the weld material introduce uncertainty, so the collected signals contain random components that are difficult to describe with exact mathematical expressions. Random signal data have an important property: repeated tests under identical conditions differ somewhat from run to run, yet over many repetitions the results exhibit a definite statistical regularity. The invention therefore extracts features from the weld detection signals by statistical analysis, balancing sensitivity and stability, and combines 25 features (peak index, root mean square value, pulse index, mean square spectrum, square root amplitude, correlation factor, bias index, absolute mean value, kurtosis index, frequency-domain variance, standard deviation, maximum value, sixteenth-order moment coefficient, spectrum centre of gravity, minimum value, margin index, harmonic factor, peak-to-peak value, kurtosis, spectral origin moment, kurtosis factor, power spectrum sum, eighth-order moment coefficient, waveform index and degree of deviation) as the stainless steel defect statistical feature indexes.
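A minimal numpy sketch of a few of the indicators named above can make the channel-1 computation concrete. The formulas used here follow common definitions in vibration/NDT signal analysis; the patent's exact formula table is not reproduced on this page, so treat the definitions (and the function name `shallow_features`) as illustrative assumptions rather than the patented formulas.

```python
import numpy as np

def shallow_features(x):
    """Compute a subset of the shallow statistical indicators for a 1-D signal."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))                    # root mean square X_rms
    sra = np.mean(np.sqrt(np.abs(x))) ** 2            # square root amplitude X_r
    abs_mean = np.mean(np.abs(x))                     # absolute mean value
    std = np.std(x)                                   # standard deviation sigma
    p2p = np.max(x) - np.min(x)                       # peak-to-peak value V_pp
    kurt = np.mean((x - np.mean(x)) ** 4) / std ** 4  # kurtosis beta
    crest = np.max(np.abs(x)) / rms                   # peak (crest) index C
    impulse = np.max(np.abs(x)) / abs_mean            # pulse index I_f
    return {"rms": rms, "sra": sra, "abs_mean": abs_mean, "std": std,
            "p2p": p2p, "kurtosis": kurt, "crest": crest, "impulse": impulse}
```

In the full method, 25 such scalars per echo signal form the channel-1 feature vector.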
2. Deep time-frequency feature extraction based on CNN
Convolutional neural networks are widely applied to image recognition problems in computer vision and belong to the family of deep feedforward artificial neural networks. The network receives the detection images, and the training results of each round on the training and validation sets are propagated backwards to adjust the network parameters. The convolution layers extract shallow image features layer by layer with convolution kernels, and the convolution results pass through an activation function to form that layer's feature map, which encodes relative position information in addition to feature values. The pooling layers screen and filter the feature maps produced by the convolution layers, reducing the number of network parameters; by compressing the features they retain the principal features and discard redundant ones. Finally, the fully connected layers integrate the information extracted by the convolution and pooling layers and feed it to a classifier for image classification.
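The convolution-then-pool pipeline described above can be sketched with plain numpy. This is a didactic toy (loops instead of vectorised ops, single-channel, valid padding, hypothetical function names), not the LackNet implementation.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """2-D valid (no-padding) cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fm, size=2, stride=2):
    """Max pooling: keep the strongest response in each window."""
    H, W = fm.shape
    oh, ow = (H - size) // stride + 1, (W - size) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = fm[i * stride:i * stride + size,
                           j * stride:j * stride + size].max()
    return out
```

For example, a horizontal-difference kernel `[[1, -1]]` responds only at a vertical edge; applying `np.tanh` to the convolution output plays the role of the activation function, and pooling then compresses the feature map.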
To construct a two-dimensional image sample set that fits the input requirements of a convolutional neural network while matching the characteristics of ultrasonic weld detection echo signals, the invention converts the echo signals into time-frequency diagrams by short-time Fourier transform, i.e. converts the one-dimensional time-domain detection signals into two-dimensional time-frequency diagrams. This preserves both the time-domain and frequency-domain characteristics of the signals, reflects how frequency content varies with time, and turns the problem of extracting and clustering one-dimensional signal features into an image processing problem.
As the number of training iterations grows, the classification performance of a convolutional neural network generally keeps improving, and deeper networks can extract more effective features. However, on the limited data scale of the invention, early experiments with deep networks such as GoogLeNet and ResNet50 readily fell into overfitting. Deeper models have more parameters, so with a limited training set they overfit easily, train slowly, and become harder to optimise. To design an optimal network structure for the available data size, LeNet, AlexNet, VGG16 and GoogLeNet, of different depths, were tested. Repeated experiments showed that the AlexNet model classified best on the invention's data set, while accuracy dropped severely as the network deepened from AlexNet to VGG16 and then GoogLeNet, indicating that the classification task does not need a deeper structure. Compared with LeNet, AlexNet is not only deeper but also uses larger convolution kernels. Therefore, taking an AlexNet network as the prototype and after a large number of experiments, the defect detection network LackNet with 6 layers of convolution kernels and fully connected layers was constructed.
The network structure and parameters are shown in figure 2. On the basis of AlexNet, the first layer of the network uses a 7×7×3 convolution kernel for feature extraction, followed by small 3×3 convolution kernels, which removes a large number of parameters compared with the AlexNet network; a tanh activation function is used after each convolution layer; and finally three fully connected layers integrate the features, with the parameters of the second fully connected layer extracted for the subsequent feature fusion. The LackNet network constructed in this way is better suited to time-frequency image classification tasks with smaller data scales.
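The layer sizes implied by the description (7×7 conv stride 4, 3×3 pool stride 2, two 3×3 convs, 3×3 pool stride 2, then 4096-4096-5 fully connected) can be checked with the standard output-size formula. The 227×227 input resolution is an assumption carried over from AlexNet convention; the patent text here does not state the input size.

```python
def out_size(n, k, s):
    """Spatial size after a valid (no-padding) conv/pool: floor((n - k) / s) + 1."""
    return (n - k) // s + 1

# Walk a hypothetical 227x227 time-frequency image through the described stack.
n = 227
n = out_size(n, 7, 4)   # layer 1: 7x7x3 convolution, stride 4  -> 56
n = out_size(n, 3, 2)   # layer 1: 3x3 max pooling, stride 2    -> 27
n = out_size(n, 3, 1)   # layer 2: 3x3 convolution, stride 1    -> 25
n = out_size(n, 3, 1)   # layer 3: 3x3 convolution, stride 1    -> 23
n = out_size(n, 3, 2)   # layer 3: 3x3 max pooling, stride 2    -> 11
spatial = n             # 11x11 feature maps feed FC layers 4096 -> 4096 -> 5
```

Replacing AlexNet's 11×11 first-layer kernel with 7×7 alone cuts that layer's weights per filter from 11·11·3 = 363 to 7·7·3 = 147, consistent with the lightweight-model claim.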
3. Stainless steel weld defect classification model based on feature fusion
Commonly used feature fusion methods include additive fusion, maximum fusion and cascade fusion. Additive fusion adds the values of corresponding elements of the feature maps, so the number of channels after fusion equals the number before; it suits two feature maps of the same dimension. Maximum fusion is similar, taking the larger of the two corresponding elements as the fused value, and likewise applies only to feature maps of the same dimension. Cascade fusion directly concatenates the two feature maps: all elements are retained and the channel count after fusion is the sum of the two inputs. Cascade fusion applies to feature maps of any dimensions, and either raw data or a chosen layer of the neural network can supply the features. The invention therefore adopts cascade fusion to handle the differently sized feature vectors delivered by the two front-end channels.
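The three fusion variants reduce to one-liners on vectors, which makes the dimensional argument above concrete. The random vectors stand in for real features and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
shallow = rng.standard_normal(25)    # channel 1: 25 statistical indicators
deep = rng.standard_normal(4096)    # channel 2: LackNet second FC-layer activations

# Cascade fusion: concatenate, keep every element; fused dim = 25 + 4096 = 4121.
fused = np.concatenate([shallow, deep])

# Additive and maximum fusion require equal dimensions, so they could not
# combine the 25-dim and 4096-dim channels directly:
a = np.array([1.0, 4.0])
b = np.array([3.0, 2.0])
additive = a + b               # element-wise sum, same dimension as inputs
maximum = np.maximum(a, b)     # element-wise max, same dimension as inputs
```

This is exactly why cascade fusion is the variant chosen in the method: it is the only one of the three that tolerates mismatched channel dimensions.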
In order to achieve the purpose of feature fusion, the invention extracts the second-layer full-connection layer features of the LackNet network, and performs cascade fusion with the shallow features extracted by the statistical method to obtain 4121-dimension features. And finally, inputting the fused characteristics into a support vector machine to identify the defect image of the stainless steel weld.
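The final classification stage (one-versus-one RBF SVMs with grid-searched penalty factor c and kernel parameter g, as SVMcgForclass does in the MATLAB/libsvm ecosystem) can be sketched with scikit-learn as a stand-in. The 8-dimensional synthetic data below is a tiny proxy for the fused 4121-dimensional vectors; all names and parameter grids are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
# 5 defect classes, 20 samples each, 8 features (proxy for the 4121-dim input).
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(20, 8)) for c in range(5)])
y = np.repeat(np.arange(5), 20)

# One-vs-one RBF SVM; GridSearchCV plays the role of SVMcgForclass in
# selecting the penalty factor C and the kernel width gamma.
grid = GridSearchCV(
    SVC(kernel="rbf", decision_function_shape="ovo"),
    {"C": [1, 10, 100], "gamma": ["scale", 0.1]},
    cv=3,
)
grid.fit(X, y)
pred = grid.predict(X)
```

With 5 classes, the one-versus-one scheme internally trains 5(5-1)/2 = 10 pairwise machines, which is visible in the shape of the decision function.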
In summary, the proposed detection algorithm cascades shallow statistical features with deep time-frequency features from the LackNet convolutional network, and finally performs defect classification and identification with a support vector machine (SVM). Experiments show that the method achieves high-precision classification of five types of stainless steel weld defects (unfused, slag inclusion, incomplete penetration, air holes and cracks) with an average accuracy of 97.7%. The constructed recognition network is a lightweight model, which improves operating efficiency and addresses the long runtimes otherwise required for deep feature extraction. Compared with existing intelligent weld detection methods, the proposed ultrasonic detection method is simple and convenient, meets the accuracy and efficiency requirements of practical engineering applications, and provides a new technical reference for the field of intelligent weld detection.
Drawings
FIG. 1 shows a stainless steel weld defect classification model based on feature fusion.
Fig. 2 shows LackNet network structure and parameters.
Fig. 3 shows the eighth order moment coefficients.
Fig. 4 shows the absolute average amplitude in the time domain.
Fig. 5 shows the peak-to-peak value.
Fig. 6 shows a pulse index.
Fig. 7 shows the correlation factor.
Fig. 8 shows the spectrum origin moment.
Fig. 9 shows the iteration curves of different depths and activation function accuracy.
Fig. 10 shows a confusion matrix.
Fig. 11 shows five defective time domains and corresponding time-frequency diagrams.
Detailed Description
Specific embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
1. An ultrasonic defect detection and identification method for (austenitic) stainless steel weld joints based on depth feature fusion adopts a lightweight network with double-channel shallow statistical features and deep time-frequency feature fusion to carry out intelligent detection and identification on multi-type defects of austenitic stainless steel weld joints; the method comprises the following specific steps:
a. and respectively carrying out A-type echo data acquisition and label marking on the defect samples by using an ultrasonic flaw detector and an ultrasonic oblique probe on the defects of five austenitic stainless steel weld joints, such as unfused, slag inclusion, incomplete penetration, air holes and cracks, so as to construct a training data set I.
To ensure the diversity and validity of the data, A-scan echo acquisition uses a K2.5 angle probe (2.5 MHz, 9×9 mm wafer) with the A-scan primary reflection wave method (engine oil as couplant), scanning from one side and from both sides of the weld while keeping the scan at 90 degrees to the weld centreline. Five A-scan patterns are performed: zigzag, front-back, left-right, 10-15 degree corner turning, and surrounding. Wavelet-transform noise reduction and label marking are then applied to the one-dimensional defect echo signals Xi of each scan type;
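The wavelet-transform noise reduction step can be illustrated with a one-level Haar transform and soft thresholding in plain numpy. The patent does not specify the wavelet family, decomposition depth or threshold rule, so this single-level Haar version is a minimal stand-in, not the patented procedure.

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet soft-threshold denoising of an even-length signal."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (low-pass) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (high-pass) coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold details
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2.0)         # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y
```

Smooth content passes through unchanged (the detail coefficients are zero), while low-amplitude high-frequency content, the noise model here, is suppressed.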
labeling: the collected data are divided and stored in 5 folders according to the corresponding defect types, and the labels of the files are sequentially set to be cracks, air holes, slag inclusion, unfused and incomplete penetration.
b. The method comprises the steps of constructing a depth feature fusion network model, realizing defect feature extraction after double-channel depth feature fusion through training calculation, and outputting a defect classification recognition result (shown in fig. 1) through a support vector machine (Support Vector Machine, SVM), wherein the method comprises the following steps:
the depth feature fusion network model is formed by a shallow defect sensitive feature network model (channel 1) and a LackNet convolution network model (channel 2) which are constructed based on statistical analysis in a two-channel cascade mode.
(1) Constructing a model of shallow defect-sensitive characteristic indexes based on statistical analysis as channel 1, comprising 25 features: root mean square value X_rms, square root amplitude X_r, absolute mean value, standard deviation sigma, maximum value X_max, minimum value X_min, peak-to-peak value V_pp, kurtosis beta, kurtosis factor, degree of deviation f_x, eighth-order moment coefficient r_8, sixteenth-order moment coefficient r_16, waveform index K, peak index C, pulse index I_f, margin index L, kurtosis index x_q, deviation index K_3, mean square spectrum M_s, spectrum centre of gravity F_D, frequency-domain variance V_F, correlation factor F_R, harmonic factor H, spectral origin moment M_n, and power spectrum sum G_p(k). These 25 features serve as the stainless steel weld defect statistical feature indexes; the statistical feature parameters are shown in the table below, corresponding to formulas (1) to (25).
Figure BDA0002719430860000083
Figure BDA0002719430860000091
Wherein: mu (mu) x X is the mean value of the number of x, x=1, 2, …, N; x is X R X is the real part of the Fourier transform I K=0, 1, …, N/2-1。
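A few of these time-domain indexes can be sketched directly in NumPy. This is a minimal illustration using the standard definitions of these statistics; the variable names and the synthetic test signal are not from the patent:

```python
import numpy as np

def time_domain_features(x: np.ndarray) -> dict:
    """Compute a subset of the 25 statistical indexes for a 1-D echo signal."""
    abs_x = np.abs(x)
    x_rms = np.sqrt(np.mean(x ** 2))        # root mean square value X_rms
    x_r = np.mean(np.sqrt(abs_x)) ** 2      # square root amplitude X_r
    abs_mean = np.mean(abs_x)               # absolute mean value
    sigma = np.std(x)                       # standard deviation
    v_pp = abs_x.max() - abs_x.min()        # peak-to-peak value V_pp, as in formula (7)
    return {
        "X_rms": x_rms,
        "X_r": x_r,
        "abs_mean": abs_mean,
        "sigma": sigma,
        "V_pp": v_pp,
        "K": x_rms / abs_mean,              # waveform index
        "C": abs_x.max() / x_rms,           # peak index
    }

# stand-in for a denoised A-scan echo: 4 periods of a sine wave
signal = np.sin(np.linspace(0, 8 * np.pi, 1000))
feats = time_domain_features(signal)
```

For a pure sine wave, X_rms ≈ 1/√2 and the waveform index K ≈ π/(2√2), which is a quick sanity check on the implementation.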
Because the sample signals in the data set contain random components (caused by the test environment and the austenitic stainless steel weld material), the shallow feature index system (channel 1) constructed here with the statistical analysis method can reflect the essential feature quantities of the defect signals while balancing defect sensitivity and stability.
(2) To address the problems that deep convolutional neural network training is time-consuming and that the small number of samples pushes a convolutional network into overfitting when classifying austenitic stainless steel weld defects, a defect detection network LackNet with 6 layers of convolution kernels and fully connected layers is constructed as channel 2 to extract deep time-frequency features.
The network structure and parameters are shown in fig. 2. In the LackNet network, after the picture (two-dimensional time-frequency diagram) is input into the network model, feature extraction is first carried out through a 7×7×3 convolution kernel with step length 4; the activation function is tanh(x) = (e^x − e^−x)/(e^x + e^−x). Then a 3×3 maximum pooling with step length 2 follows; together these form the first layer of the network. The convolution kernels of the second and third layers are 3×3 in size to reduce the model parameters, the activation function is tanh, and the step length is 1; the third layer again uses 3×3 maximum pooling with step length 2. The 3×3 maximum pooling structure not only reduces the offset error of the estimated mean caused by parameter errors but also reduces the loss of texture information, while the tanh activation function enhances the feature response and accelerates training. The last three layers of the network are fully connected: the data output by the third layer are fully connected with 4096 neurons of the fourth layer, generating 4096 values after tanh processing and 4096 values after dropout processing; similarly, the 4096 values output by the fourth layer are fully connected with 4096 neurons of the fifth layer, then processed by tanh to generate 4096 values, and after dropout 4096 values are output (the parameters of this fully connected layer are also extracted for the subsequent feature fusion step); finally, the 4096 values output by the fifth layer are fully connected with the 5 neurons of the sixth layer, and the trained values are output after training.
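The spatial dimensions of each layer follow the usual valid-convolution/pooling size rule, out = ⌊(in − k)/s⌋ + 1. A small sketch of that arithmetic for the layer sequence described above; the 227×227 input side length is an assumption for illustration only (the patent's fig. 2, which would give the real input size, is not reproduced here):

```python
def out_size(n: int, k: int, s: int) -> int:
    """Spatial output size of a valid convolution or pooling layer."""
    return (n - k) // s + 1

n = 227                   # assumed input side length (not stated in the text)
n = out_size(n, 7, 4)     # layer 1: 7x7 conv, stride 4
n = out_size(n, 3, 2)     # layer 1: 3x3 max pool, stride 2
n = out_size(n, 3, 1)     # layer 2: 3x3 conv, stride 1
n = out_size(n, 3, 1)     # layer 3: 3x3 conv, stride 1
n = out_size(n, 3, 2)     # layer 3: 3x3 max pool, stride 2
```

With the assumed 227-pixel input, the feature map shrinks 227 → 56 → 27 → 25 → 23 → 11 before the three fully connected layers.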
Deep time-frequency feature extraction: to satisfy the input requirements of the channel-2 LackNet convolutional neural network while exploiting the characteristics of ultrasonic weld detection echo signals, the invention converts the weld detection echo signals into time-frequency diagrams through short-time Fourier transform; that is, the training data set I (one-dimensional time-domain signals) obtained in step a is converted into two-dimensional time-frequency diagrams, which are input to the channel-2 LackNet convolutional neural network for deep time-frequency feature extraction training calculation, outputting the optimal weights W and bias values b. During this training, Adam is used for network optimization; its advantage is an adaptive learning rate that accelerates network convergence. MiniBatchSize is set to 128, the initial learning rate is 1e-4, and the learning rate decays by 10% every 30 epochs.
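The one-dimensional-signal → two-dimensional-time-frequency-image conversion can be sketched with a plain NumPy short-time Fourier transform. The window length, hop size, sampling rate, and synthetic test signal below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def stft_magnitude(x, win_len=128, hop=32):
    """STFT magnitude image: rows = frequency bins, columns = time frames."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack(
        [x[i * hop : i * hop + win_len] * window for i in range(n_frames)]
    )
    # rfft along each windowed frame, then transpose to (freq, time)
    return np.abs(np.fft.rfft(frames, axis=1)).T

fs = 5000.0
t = np.arange(2048) / fs
echo = np.sin(2 * np.pi * 400 * t)   # stand-in for a denoised A-scan echo
tf_image = stft_magnitude(echo)      # 2-D array usable as a network input image
```

The resulting 2-D array retains both time-domain and frequency-domain structure, which is exactly what makes it suitable as CNN input.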
The Adam optimization algorithm combines Momentum and RMSprop. First initialize v_dW = 0, S_dW = 0, v_db = 0, S_db = 0; in the t-th iteration, compute dW and db on the current mini-batch. Next, compute the Momentum exponentially weighted averages, formulas (26) to (27):
v_dW = β1·v_dW + (1 − β1)·dW (26)
v_db = β1·v_db + (1 − β1)·db (27)
Then update with RMSprop, using a different hyperparameter β2, formulas (28) to (29):
S_dW = β2·S_dW + (1 − β2)·(dW)² (28)
S_db = β2·S_db + (1 − β2)·(db)² (29)
These steps amount to Momentum updating with hyperparameter β1 and RMSprop updating with hyperparameter β2. Next, compute the bias corrections, formulas (30) to (33):
v_dW^corrected = v_dW / (1 − β1^t) (30)
v_db^corrected = v_db / (1 − β1^t) (31)
S_dW^corrected = S_dW / (1 − β2^t) (32)
S_db^corrected = S_db / (1 − β2^t) (33)
Finally update the weights, formulas (34) to (35):
W = W − α·v_dW^corrected / (√(S_dW^corrected) + ε) (34)
b = b − α·v_db^corrected / (√(S_db^corrected) + ε) (35)
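Formulas (26) to (35) together form one Adam update step, which can be sketched as follows. The values β1 = 0.9, β2 = 0.999, ε = 1e-8 are the common defaults and are assumptions here; the text specifies only the initial learning rate of 1e-4:

```python
import numpy as np

def adam_step(W, dW, v, S, t, alpha=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameter W with gradient dW; v, S are running moments."""
    v = beta1 * v + (1 - beta1) * dW            # (26)/(27): Momentum average
    S = beta2 * S + (1 - beta2) * dW ** 2       # (28)/(29): RMSprop average
    v_hat = v / (1 - beta1 ** t)                # (30)/(31): bias correction
    S_hat = S / (1 - beta2 ** t)                # (32)/(33): bias correction
    W = W - alpha * v_hat / (np.sqrt(S_hat) + eps)  # (34)/(35): update
    return W, v, S

# minimise f(W) = W^2 on a scalar "weight" as a smoke test
W, v, S = 5.0, 0.0, 0.0
for t in range(1, 2001):
    W, v, S = adam_step(W, 2 * W, v, S, t, alpha=1e-2)
```

Because the bias correction divides by (1 − β^t), the effective step is full-sized even in the first iterations, when v and S are still close to their zero initialization.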
(3) Construction of depth feature fusion network model
The deep and shallow feature fusion network model adopts cascade fusion, connecting the statistics-based shallow features of channel 1 (25 dimensions) with the LackNet-based deep features of channel 2 (the 4096-dimensional feature parameters of the second fully connected layer). Cascade fusion directly concatenates the feature quantities of two or more feature maps, retaining all elements, so the fused feature dimension is the sum of the feature quantities of the two feature maps; the fused feature is 4121-dimensional.
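The cascade fusion itself is a plain concatenation; a one-line sketch with randomly generated stand-in features:

```python
import numpy as np

shallow = np.random.rand(25)    # channel 1: statistical features
deep = np.random.rand(4096)     # channel 2: second fully connected layer of LackNet
fused = np.concatenate([shallow, deep])  # 25 + 4096 = 4121 dimensions
```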
(4) The fused 4121-dimensional feature vectors and the corresponding defect type labels are constructed into a data set as classifier input. A one-versus-one algorithm combines the 5 classes of weld defect feature vector training data in pairs, constructing 5×(5−1)/2 = 10 support vector machines; an RBF (radial basis) kernel is set as the kernel function type, the function SVMcgForclass selects the penalty factor c and kernel parameter g, the obtained optimal c and g are used to train the data, and after the classification test the corresponding defect type label results are output after training calculation.
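The one-versus-one scheme over 5 classes trains C(5,2) = 10 binary classifiers and predicts by majority vote. A minimal sketch of the pairing and voting logic; the binary classifiers are stubbed out here (the patent trains RBF-kernel SVMs via libsvm for each pair):

```python
from itertools import combinations
from collections import Counter

classes = ["crack", "porosity", "slag", "lack_of_fusion", "lack_of_penetration"]
pairs = list(combinations(classes, 2))   # 5*(5-1)/2 = 10 binary problems

def predict_ovo(x, binary_classifiers):
    """binary_classifiers maps each class pair to a function x -> winning class."""
    votes = Counter(binary_classifiers[pair](x) for pair in pairs)
    return votes.most_common(1)[0][0]

# stub classifiers that always vote for the first class of each pair
stubs = {pair: (lambda x, p=pair: p[0]) for pair in pairs}
```

With these stubs, "crack" appears first in four of the ten pairs and therefore collects the most votes, which exercises the voting path end to end.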
In a specific implementation, the classification detection method was run on an Intel Core i5-8300H CPU @ 2.3 GHz with 8 GB memory, under a 64-bit Windows operating system with Matlab 2019a, libsvm 3.12, and deep learning toolbox 12.0, using the data set constructed in step a.
In cooperation with the Shanxi Institute of Electrical Design Co., Ltd., an experimental platform was set up on site to prepare samples containing five defect types: unfused, slag inclusion, lack of penetration, porosity, and cracks. Data were collected from the defect samples using a KARL DEUTSCH (ECHOGRAPH) flaw detector, a Tektronix DPO 2024B oscilloscope, and a 2.5P 9×9 K2.5 oblique probe. To ensure data diversity and validity, the oblique probe was kept at 90 degrees to the weld centerline during acquisition, and five scanning patterns (zigzag, front-back, left-right, 10-15 degree corner, and surrounding) were performed while ensuring the probe traversed the cross section of the welded joint. In total, 544 ultrasonic A-scan defect records covering unfused, slag inclusion, lack of penetration, cracks, and air holes were collected. Because the raw signals contain many irrelevant components, the raw A-scan data were denoised by wavelet transform, statistical features were extracted from the time-domain waveforms, and the denoised data were converted to time-frequency diagrams by short-time Fourier transform and input to the neural network for deep feature extraction. Fig. 11 shows the time-domain and time-frequency diagrams corresponding to the five defects after denoising.
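The wavelet-transform denoising step can be sketched with a one-level Haar transform plus soft thresholding of the detail coefficients. This is a minimal stand-in: the patent does not specify the wavelet family, decomposition level, or threshold, so all of those are assumptions here:

```python
import numpy as np

def haar_soft_denoise(x, threshold=0.4):
    """One-level Haar wavelet denoising: soft-threshold the detail coefficients."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                         # need an even length for pairing
        x = x[:-1]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation) band
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail) band
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    even = (approx + detail) / np.sqrt(2)       # inverse Haar transform
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(len(even) + len(odd))
    out[0::2], out[1::2] = even, odd
    return out

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 512))
noisy = clean + 0.3 * rng.standard_normal(512)
denoised = haar_soft_denoise(noisy)
```

Soft thresholding suppresses the noise concentrated in the detail band while the slowly varying echo shape survives in the approximation band, so the mean squared error against the clean signal drops.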
c. Repeat step a for new weld defect detection, acquisition, and label marking, and construct A-type scanning one-dimensional detection signals not included in training data set I as a test data set II, which covers the five austenitic stainless steel weld defect types of unfused, slag inclusion, incomplete penetration, air holes, and cracks. Input the test data set II into the trained deep and shallow feature fusion network model constructed in step b; when the accuracy of the evaluated output result reaches a preset value, the trained deep and shallow feature fusion network model can be used for classifying and identifying multiple types of defects in austenitic stainless steel welds.
The invention can identify and classify the weld defects of austenitic stainless steel into five categories of unfused, slag inclusion, incomplete penetration, air holes and cracks.
2. The key points of the application of the method are as follows:
(1) In ultrasonic defect detection of austenitic stainless steel weld seams, interference signals are random, and the internal structure of the material can cause beam deflection, beam distortion, and similar effects during ultrasonic testing. To find feature quantities that reflect the essence of the defect signals while keeping their number appropriate, a statistical analysis method is adopted to extract features of the weld detection signals, balancing sensitivity and stability; combined with practical conditions, 25 features are selected as the stainless steel weld defect statistical feature indexes.
(2) In order to construct a two-dimensional image sample set which is suitable for the input requirement of a convolutional neural network and combine the characteristics of ultrasonic weld detection echo signals, the invention converts the weld detection echo signals into a time-frequency diagram through short-time Fourier transform, converts one-dimensional time-domain detection signals into a two-dimensional time-frequency diagram, and converts the problem of extracting and clustering the characteristics of the detected one-dimensional signals into the problem of image processing while retaining the time-domain characteristics and the frequency-domain characteristics of the signals and reflecting the time-frequency characteristics of the frequency-time mutual change relationship.
(3) Aiming at the problem of limited data scale, the invention innovatively constructs a defect detection network LackNet with 6 layers of convolution kernels and fully connected layers. The first layer uses a 7×7×3 convolution kernel for feature extraction, followed by small 3×3×7 convolution kernels, which reduces the parameter count considerably compared with other classical networks; a tanh activation function is used after the convolution layers; finally, three fully connected layers integrate the features. The constructed LackNet network is therefore better suited to time-frequency image classification tasks with smaller data scales.
(4) To address the limited feature discrimination in austenitic stainless steel weld inspection and the problems that deep convolutional neural network training is time-consuming and that a small number of samples pushes the network into overfitting, the invention designs a lightweight stainless steel weld defect classification network model based on feature fusion. By cascade-fusing the LackNet deep features with the statistical-analysis shallow features and combining them with a support vector machine, high-precision classification recognition of the five stainless steel weld defects of unfused, slag inclusion, incomplete penetration, air holes, and cracks is finally realized. Compared with existing methods, it offers certain advantages in efficiency, precision, and the number of identifiable defect types.
3. The beneficial effects of the invention are as follows:
(1) Statistical feature analysis: to verify the validity of the selected statistical features, the statistical features of the wavelet-denoised detection signals were analyzed. Figs. 3 to 8 show six feature indexes with clearly visible effects. Although the statistical feature indexes of the various defects do not achieve complete separation, the five defect classes are partially layered. In fig. 3, incomplete penetration, unfused defects, and air holes are clearly layered in the eighth-order moment coefficient; in fig. 4, slag inclusion and unfused defects are layered in the time-domain absolute mean; in fig. 5, air holes and unfused defects separate from the other three classes in the peak-to-peak value; in fig. 6, cracks are clearly layered from the other four classes in the frequency-domain pulse index, while incomplete penetration and unfused defects overlap; in fig. 7, air holes in the correlation factor are clearly layered from unfused and incomplete-penetration defects respectively; in fig. 8, the air-hole statistics in the spectral origin moment are clearly distinguished from the other four defect classes. Although the selected statistical features cannot fully capture all characteristic differences of the defects, each statistical feature index shows a certain sensitivity to different defect signal characteristics; fusing the 25 statistical features lets them act jointly, which favors separation of the defect signal characteristics and forms a sound basis for weld defect classification.
(2) Deep feature analysis: since the classification task targets a small-sample data set, experiments were performed with relatively shallow CNNs. Fig. 9 shows, as a function of iteration count, the experimental accuracy of the constructed CNN networks with 5 layers, 6 layers, and 7 layers using the ReLU activation function, and of the CNN using the tanh activation function (LackNet). At a network depth of 6, the model classification accuracy reaches 92.86%; with a 5-layer structure, accuracy drops by 3.57%; with a 7-layer structure, accuracy drops by 1.79%. This shows that for this classification task the network is most suitable at depth 6, so a CNN with 6 layers was selected as the deep feature extraction network structure. The tanh activation function yields a pronounced feature response during training and accelerates training: in the experiments, the network using tanh (LackNet) improved classification accuracy by 1.78% over the 6-layer network using ReLU. Therefore tanh was selected as the activation function, further improving the feature extraction capability of the LackNet network. Fig. 10 shows the confusion matrix obtained after LackNet classification: the average classification accuracy of the model reaches 94.6%, and the recognition accuracy for cracks, air holes, and unfused defects reaches 100%, showing that the LackNet network has reached a good level in feature-learning ability and model robustness.
(3) Experimental model visualization and comparison of different models: the extracted 25-dimensional statistical features and the 4096-dimensional deep features automatically extracted by LackNet are cascade-fused into 4121-dimensional features and input into a support vector machine for one-versus-one classification. Experiments show that the feature fusion model achieves 97.77% classification accuracy on the small-sample weld defect data set for the five defect classes of unfused, slag inclusion, incomplete penetration, air holes, and cracks. To verify the adaptive feature-learning capability of the model for the weld defect classification task, the fused features were visualized with t-distributed stochastic neighbour embedding (T-Distribution Stochastic Neighbour Embedding, t-SNE); the result is shown in fig. 9. The proposed feature fusion model can learn effective features from the weld defect data set to classify weld defects.
(4) To verify the effectiveness of the constructed classification model, it was compared on the same training and test sets with the PSO-optimized SVM from machine learning, a recent CNN network for weld defects, and the currently advanced deep learning VGG16 model. Classification precision (P) and recall (R) for each weld defect type, as well as the overall average precision (Average Precision, AP), average recall (AR), and accuracy (Accuracy), were selected to measure model performance. The experimental results of each model are shown in table 1. Precision, recall, and accuracy are computed by formulas (36) to (38):
Precision=TP/(TP+FP) (36)
Recall=TP/(TP+FN) (37)
Accuracy=(TP+TN)/(TP+FP+TN+FN) (38)
Wherein: TP (True Positive) means the image is labeled as a positive sample and the classification result is also positive; FN (False Negative) means the image is labeled as a positive sample but the classification result is negative; FP (False Positive) means the image is labeled as a negative sample but the classification result is positive; TN (True Negative) means the image is labeled as a negative sample and the classification result is also negative.
Table 1 comparison of precision and recall results for various models on the dataset of the present invention
[Table 1 is reproduced only as an image in the original publication.]
From table 1, the feature fusion method of the invention outperforms the other models on all three indexes, AP, AR, and Accuracy. Compared with the PSO-SVM model, fusing the statistical and deep features before SVM training avoids a laborious SVM optimization process, improves the feature extraction capability and robustness of the model, and saves considerable time. Compared with CWTCNN and VGG16, the model still converges when the input data are insufficient; meanwhile, CWTCNN also achieves better classification accuracy than VGG16 on this data set, indicating that a deeper network structure is unnecessary for this data set and would otherwise cause network degradation and training difficulty. The experiments show that, compared with existing advanced methods, the proposed method achieves a good effect on classification accuracy.
Table 1 also shows that the feature fusion model achieves good classification results for cracks, air holes, unfused, and incomplete-penetration defects, and that the feature fusion method greatly improves the robustness of the model. The recognition accuracy on slag inclusion is only 90.9%, indicating that the model cannot yet effectively distinguish slag inclusion from the other four defects and leaves room for further optimization.
The foregoing description is only exemplary embodiments of the present invention and is not intended to limit the scope of the present invention, and all equivalent structures or equivalent flow modifications made by the present invention and the accompanying drawings, or direct or indirect application in other related technical fields, are included in the scope of the present invention.

Claims (4)

1. A stainless steel weld ultrasonic defect detection method based on depth feature fusion is characterized in that: the method comprises the following steps:
a. for the five austenitic stainless steel weld defects of unfused, slag inclusion, incomplete penetration, air holes, and cracks, respectively carrying out A-type echo data acquisition and label marking on the defect samples by using an ultrasonic flaw detector and an ultrasonic oblique probe, so as to construct a training data set I;
b. the method comprises the steps of constructing a depth feature fusion network model, realizing defect feature extraction after double-channel depth feature fusion through training calculation, and outputting a defect classification recognition result through a support vector machine, wherein the method comprises the following steps of:
(1) Constructing a shallow defect-sensitive feature index model based on statistical analysis as channel 1, comprising the root mean square value X_rms, square root amplitude X_r, absolute mean value |X̄|, standard deviation σ, maximum value X_max, minimum value X_min, peak-to-peak value V_pp, kurtosis β, skewness, degree of deviation f_x, eighth-order moment coefficient r_8, sixteenth-order moment coefficient r_16, waveform index K, peak index C, pulse index I_f, margin index L, kurtosis index x_q, deviation index K_3, mean square spectrum M_s, spectrum center of gravity F_D, frequency-domain variance V_F, correlation factor F_R, harmonic factor H, spectral origin moment M_n, and power spectrum sum G_p(k); these 25 features are used as the stainless steel weld defect statistical feature indexes;
(2) Constructing a defect detection network LackNet with 6 layers of convolution kernels and fully connected layers as channel 2: in the LackNet network, after a picture is input into the network model, feature extraction is first carried out through a 7×7×3 convolution kernel with step length 4, the activation function being tanh(x) = (e^x − e^−x)/(e^x + e^−x); then a 3×3 maximum pooling with step length 2 is applied, forming the first layer of the network; the convolution kernels of the second and third layers are 3×3, the activation function is tanh, the step length is 1, and the third layer again uses 3×3 maximum pooling with step length 2; the last three layers of the network are fully connected layers: the data output by the third layer are fully connected with 4096 neurons of the fourth layer, generating 4096 values after tanh processing and outputting 4096 values after dropout processing; similarly, the 4096 values output by the fourth layer are fully connected with 4096 neurons of the fifth layer, then processed by tanh to generate 4096 values, with 4096 values output after dropout processing; finally, the 4096 values output by the fifth layer are fully connected with the 5 neurons of the sixth layer, and trained values are output after training;
deep time-frequency features are extracted: the weld detection echo signals are converted into time-frequency diagrams through short-time Fourier transform, namely the training data set I obtained in the step a is converted into two-dimensional time-frequency diagrams, which are input to the channel-2 LackNet convolutional neural network for deep time-frequency feature extraction training calculation, outputting the optimal weight W and bias value b;
(3) Construction of depth feature fusion network model
Adopting cascade fusion to connect the shallow layer characteristics of the channel 1 and the deep layer characteristics of the channel 2, wherein the fused characteristics are 4121-dimension;
(4) Constructing a data set from the fused 4121-dimensional feature vectors and the corresponding defect type labels as classifier input, combining the 5 classes of weld defect feature vector training data in pairs by a one-versus-one algorithm, constructing 5×(5−1)/2 = 10 support vector machines, setting an RBF kernel as the kernel function type, selecting the penalty factor c and kernel parameter g with the function SVMcgForclass, training with the obtained optimal c and g, performing the classification test, and outputting the corresponding defect type label results after training calculation;
c. repeating the step a to detect new weld defects, performing acquisition and label marking, and constructing A-type scanning one-dimensional detection signals not included in training data set I as a test data set II, wherein the test data set II comprises the five austenitic stainless steel weld defect types of unfused, slag inclusion, incomplete penetration, air holes, and cracks; and inputting the test data set II into the trained deep and shallow feature fusion network model constructed in the step b, wherein when the accuracy of the evaluated output result reaches a preset value, the trained deep and shallow feature fusion network model is used for classifying and identifying multiple types of defects in austenitic stainless steel welds.
2. The stainless steel weld ultrasonic defect detection method based on depth feature fusion of claim 1, characterized in that: in the step a, during echo data acquisition, a K2.5 oblique probe is adopted to detect on one side and both sides of the weld joint by using an A-type ultrasonic scanning primary-reflection-wave method; meanwhile, the scanning is kept at 90 degrees to the center line of the weld joint, and five types of A-type scanning are respectively carried out: zigzag, front-back, left-right, 10-15 degree corner, and surrounding; the defect detection echo one-dimensional signals X_i obtained by scanning are subjected to wavelet transformation noise reduction and label labeling treatment;
labeling: the collected data are divided and stored in 5 folders according to the corresponding defect types, and the labels of the folders are sequentially set to cracks, air holes, slag inclusion, unfused, and incomplete penetration.
3. The ultrasonic defect detection method for the stainless steel weld joint based on depth feature fusion of claim 1, which is characterized by comprising the following steps: in the step (1), the corresponding formulas are shown as formulas (1) to (25);
X_rms = √((1/N)·Σ x_i²) (1)
X_r = ((1/N)·Σ √|x_i|)² (2)
|X̄| = (1/N)·Σ |x_i| (3)
σ = √((1/N)·Σ (x_i − μ_x)²) (4)
X_max = max|x_i| (5)
X_min = min|x_i| (6)
V_pp = max|x_i| − min|x_i| (7)
[Formulas (8) to (25), defining the remaining statistical feature indexes from kurtosis β through the power spectrum sum G_p(k), are reproduced only as images in the original publication.]
Wherein: μ_x is the mean of x_i, i = 1, 2, …, N; X_R and X_I are the real and imaginary parts of the Fourier transform, respectively; k = 0, 1, …, N/2−1.
4. The stainless steel weld ultrasonic defect detection method based on depth feature fusion of claim 1, characterized in that: in the step (2), when deep time-frequency feature extraction training calculation is carried out on the channel-2 LackNet convolutional neural network, Adam is used for network optimization, MiniBatchSize is set to 128, the initial learning rate is 1e-4, and the learning rate decays by 10% every 30 epochs;
the Adam optimization algorithm combines Momentum and RMSprop: first initialize v_dW = 0, S_dW = 0, v_db = 0, S_db = 0, and in the t-th iteration compute dW and db on the current mini-batch; next, compute the Momentum exponentially weighted averages, formulas (26) to (27):
v_dW = β1·v_dW + (1 − β1)·dW (26)
v_db = β1·v_db + (1 − β1)·db (27)
then update with RMSprop, using a different hyperparameter β2, formulas (28) to (29):
S_dW = β2·S_dW + (1 − β2)·(dW)² (28)
S_db = β2·S_db + (1 − β2)·(db)² (29)
these steps amount to Momentum updating with hyperparameter β1 and RMSprop updating with hyperparameter β2; next, compute the bias corrections, formulas (30) to (33):
v_dW^corrected = v_dW / (1 − β1^t) (30)
v_db^corrected = v_db / (1 − β1^t) (31)
S_dW^corrected = S_dW / (1 − β2^t) (32)
S_db^corrected = S_db / (1 − β2^t) (33)
finally update the weights, formulas (34) to (35):
W = W − α·v_dW^corrected / (√(S_dW^corrected) + ε) (34)
b = b − α·v_db^corrected / (√(S_db^corrected) + ε) (35)
CN202011083232.XA 2020-10-12 2020-10-12 Stainless steel weld ultrasonic defect detection method based on depth feature fusion Active CN112232400B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011083232.XA CN112232400B (en) 2020-10-12 2020-10-12 Stainless steel weld ultrasonic defect detection method based on depth feature fusion


Publications (2)

Publication Number Publication Date
CN112232400A CN112232400A (en) 2021-01-15
CN112232400B true CN112232400B (en) 2023-06-20


Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113157556A (en) * 2021-03-09 2021-07-23 金陵科技学院 Industry building software defect management method based on selected principal component identification
CN113129266B (en) * 2021-03-22 2022-03-29 太原科技大学 Stainless steel weld defect detection method based on multi-domain expression data enhancement and model self-optimization
CN113221666A (en) * 2021-04-20 2021-08-06 武汉工程大学 Intelligent identification method for seam defects of metal plate
CN113256566A (en) * 2021-04-29 2021-08-13 广州杰赛科技股份有限公司 Pipeline weld defect identification method
CN113340997B (en) * 2021-05-11 2024-01-12 西安交通大学 Online detection method for laser shock peening defect based on acoustic emission double-channel range
CN113298190B (en) * 2021-07-05 2023-04-07 四川大学 Weld image recognition and classification algorithm based on large-size unbalanced samples
CN113642220B (en) * 2021-08-26 2023-09-22 江苏科技大学 Ship welding process optimization method based on RBF and MOPSO
CN113538433B (en) * 2021-09-17 2021-11-26 海门市创睿机械有限公司 Mechanical casting defect detection method and system based on artificial intelligence
CN114088817B (en) * 2021-10-28 2023-10-24 扬州大学 Deep learning flat ceramic membrane ultrasonic defect detection method based on deep features
CN114519792B (en) * 2022-02-16 2023-04-07 无锡雪浪数制科技有限公司 Welding seam ultrasonic image defect identification method based on machine and depth vision fusion
CN114897889B (en) * 2022-06-27 2023-01-31 浙江旭派动力科技有限公司 Automatic full-inspection method and system for spot welding of battery pack
CN114878582B (en) * 2022-07-01 2022-10-18 苏州翔楼新材料股份有限公司 Defect detection and analysis method and system for special steel
CN115082835B (en) * 2022-07-22 2023-02-17 东南大学溧阳研究院 Transformer substation fault identification method based on wavelet video decomposition and fast RCNN
CN115588166B (en) * 2022-11-10 2023-02-17 新乡市诚德能源科技装备有限公司 Leak-proof marine LNG fuel tank
CN116776103B (en) * 2023-08-18 2023-10-13 江苏省特种设备安全监督检验研究院 Intelligent welding line detection regulation and control system and method based on machine vision
CN116990298B (en) * 2023-09-28 2023-12-08 南通中奥车用新材料有限公司 Finished product quality evaluation system for artificial leather production equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451997A (en) * 2017-07-31 2017-12-08 南昌航空大学 Automatic identification method for weld ultrasonic TOFD D-scan defect types based on deep learning
CN108345911A (en) * 2018-04-16 2018-07-31 东北大学 Surface Defects in Steel Plate detection method based on convolutional neural networks multi-stage characteristics
CN110826630A (en) * 2019-11-08 2020-02-21 哈尔滨工业大学 Radar interference signal feature level fusion identification method based on deep convolutional neural network
CN111107675A (en) * 2020-01-21 2020-05-05 山东科华电力技术有限公司 Cable channel edge Internet of things terminal and method based on ubiquitous power Internet of things

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI653605B (en) * 2017-12-25 2019-03-11 由田新技股份有限公司 Automatic optical inspection method, apparatus, computer program, computer-readable recording medium, and deep learning system using deep learning

Also Published As

Publication number Publication date
CN112232400A (en) 2021-01-15

Similar Documents

Publication Publication Date Title
CN112232400B (en) Stainless steel weld ultrasonic defect detection method based on depth feature fusion
CN112258496A (en) Underground drainage pipeline defect segmentation method based on a fully convolutional neural network
Posilović et al. Flaw detection from ultrasonic images using YOLO and SSD
CN107121501A (en) Turbine rotor defect classification method
CN112946081B (en) Ultrasonic imaging method based on intelligent extraction and fusion of defect multiple features
Cheng et al. Automatic defect depth estimation for ultrasonic testing in carbon fiber reinforced composites using deep learning
CN112668527A (en) Ultrasonic guided wave semi-supervised imaging detection method
CN107727749A (en) Ultrasonic quantitative detection method based on a wavelet-packet fusion feature extraction algorithm
Lawson et al. Automatic detection of defects in industrial ultrasound images using a neural network
CN112302061A (en) Intelligent rapid interpretation method for integrity detection signal of low-strain foundation pile
Zhang et al. Ultrasonic lamination defects detection of carbon fiber composite plates based on multilevel LSTM
McKnight et al. GANs and alternative methods of synthetic noise generation for domain adaption of defect classification of Non-destructive ultrasonic testing
Zhao et al. Automated quantification of small defects in ultrasonic phased array imaging using AWGA-gcForest algorithm
Qin et al. A novel physically interpretable end-to-end network for stress monitoring in laser shock peening
Medak et al. Detection of Defective Bolts from Rotational Ultrasonic Scans Using Convolutional Neural Networks
CN113256566A (en) Pipeline weld defect identification method
Maalmi et al. Towards automatic analysis of ultrasonic time-of-flight diffraction data using genetic-based inverse Hough transform
Qidwai Autonomous corrosion detection in gas pipelines: a hybrid-fuzzy classifier approach using ultrasonic nondestructive evaluation protocols
CN116953196B (en) Defect detection and safety state assessment method for steel tee joint
Wallace et al. Experience, testing and future development of an ultrasonic inspection analysis defect decision support tool for CANDU reactors
CN117233347B (en) Carbon steel spheroidization grade measuring method, system and equipment
de Moura et al. Welding defect pattern recognition in TOFD signals Part 2. Non-linear classifiers
Cheng et al. Classification of Welding Defects Based on Convolutional Neural Networks and Electromagnetic Eddy Current Testing
Le Berre et al. Simulation and processing tools for the design and performance evaluation of FMC-TFM techniques
Meksen et al. Defects clustering using Kohonen networks during ultrasonic inspection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant