CN113674247A - X-ray weld defect detection method based on convolutional neural network - Google Patents

X-ray weld defect detection method based on convolutional neural network

Info

Publication number
CN113674247A
CN113674247A
Authority
CN
China
Prior art keywords
module
convolution
layer
network
weld
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110965549.4A
Other languages
Chinese (zh)
Other versions
CN113674247B (en)
Inventor
刘卫朋
山圣旗
王睿
陈海永
孙嘉明
崔晓锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University of Technology filed Critical Hebei University of Technology
Priority to CN202110965549.4A priority Critical patent/CN113674247B/en
Publication of CN113674247A publication Critical patent/CN113674247A/en
Application granted granted Critical
Publication of CN113674247B publication Critical patent/CN113674247B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30152 Solder
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an X-ray weld defect detection method based on a convolutional neural network, which comprises the following steps: establishing a weld image data set containing different types of weld defects, and labeling weld labels on all weld pictures in the data set; establishing an AF-RCNN model comprising a backbone network module, a region generation module, and a target classification and position regression module, wherein the backbone network module adopts a structure combining a residual network (ResNet) and a feature pyramid network (FPN), an efficient convolution attention module is introduced between the residual network (ResNet) and the feature pyramid network (FPN) to strengthen the network's learning of inconspicuous defects and small-target features, and a CIOU loss function is introduced to strengthen the positioning ability of the aiming frame; and training the AF-RCNN model with the established data set for classifying and positioning weld defects. The accuracy for every defect type exceeds 94%, and the detection speed is 11.65 FPS.

Description

X-ray weld defect detection method based on convolutional neural network
Technical Field
The invention belongs to the field of welding defect detection, and particularly relates to a weld defect detection method based on a convolutional neural network.
Background
Welded structures are widely used in many fields, such as construction, vehicles, aerospace, rail, petrochemicals, and electromechanical machinery. Since welding defects inevitably occur during welding owing to differences in environmental conditions and welding techniques, checking weld quality is important for ensuring the reliability and safety of a structure. X-ray weld defect detection is one of the most common methods of inspecting welding quality, and researchers have carried out extensive work on automatic X-ray weld defect detection and obtained many important results.
In the field of weld defect detection, the traditional method requires inspectors with sufficient experience to make the judgment, and the result is easily influenced by their subjectivity. It also requires a large amount of labeling work, and interference from factors such as the on-site inspection environment readily causes problems such as false detections and missed detections. To judge weld quality accurately, researchers have introduced models such as convolutional neural networks (CNN) for intelligent weld defect detection.
Current deep-learning target detection algorithms come in two forms: single-stage and two-stage. The SSD and YOLO algorithms are the most widely applied and most rapidly developing single-stage target detectors at present; single-stage detectors have few parameters and train and detect quickly, but their detection precision is low. The Faster-RCNN algorithm, as the representative two-stage target detector, is widely applied in industry for its high precision and continues to be developed.
The traditional Faster-RCNN algorithm takes the VGG network model as its backbone network module. Its convolution depth is shallow and feature information is learned insufficiently, and because weld data sets involve small defect target areas, low defect-background contrast and similar factors, its detection precision is relatively low. Conversely, if the convolution depth is set too deep, the feature information of small-target defects is lost and the detection of crack defects with low gray values is poor.
Disclosure of Invention
Aiming at characteristics of weld defects such as small target areas and low contrast between defect and background, the invention provides a weld defect detection method based on a convolutional neural network.
The invention provides an X-ray weld defect detection method based on the AF-RCNN model. To reduce the loss of defect information during convolution and to strengthen information learning, the invention adopts the combination of a residual network (ResNet) and a feature pyramid network (FPN) as the backbone network module, introduces an efficient convolution attention module to strengthen the network's learning of inconspicuous defects and small-target features, and introduces a CIOU loss function to strengthen the positioning ability of the aiming frame.
The technical scheme of the invention is as follows:
an X-ray weld defect detection method based on a convolutional neural network is characterized by comprising the following contents:
establishing a weld image data set containing different types of weld defects, and labeling weld labels on all weld pictures in the data set;
establishing an AF-RCNN model, wherein the AF-RCNN model comprises a backbone network module, a region generation module and a target classification and position regression module; the backbone network module adopts a structure combining a residual network (ResNet) and a feature pyramid network (FPN), an efficient convolution attention module is introduced between the residual network (ResNet) and the feature pyramid network (FPN) to strengthen the network's learning of inconspicuous defects and small-target features, and a CIOU loss function is introduced to strengthen the positioning ability of the aiming frame;
and training an AF-RCNN model by using the established data set for classifying and positioning the weld defects.
The establishing process of the data set comprises the following steps:
collecting original X-ray pictures of weld defects, the original pictures being larger than 3000 × 1000 pixels and numbering 10-30, each containing different weld defects; dividing each original picture into a number of small pictures in a sliding-window manner at three different pixel sizes, 160 × 160, 240 × 240 and 320 × 320, and unifying their size to 160 × 160 pixels to obtain a small-picture set; selecting the pictures with defect characteristics from the small-picture set and classifying them by defect type to form the final weld image data set, the data set including pictures that contain several kinds of defects at once, with the same defect appearing at different sizes and the defects distributed at different positions in the data-set images, so as to ensure the diversity of the data set;
manually marking the weld labels with the labelImg software and storing the labels in the format of a Pascal VOC data set to obtain a new weld image data set of uniform size; randomly dividing all marked weld pictures into a training set, a verification set and a test set in the quantity ratio 4:3:3, the weld defects comprising six types: air holes p, slag inclusion s, incomplete fusion lof, incomplete penetration lop, cracks c and undercut u.
The residual network ResNet has five layers, C1, C2, C3, C4 and C5, containing 16 residual modules in total. Each residual module comprises three sequentially connected convolution layers: the input x passes through the first convolution layer and a Relu activation function into the second and third convolution layers, and the output of the third convolution layer is combined with the original input x as the residual to give the module output; each residual module only needs to learn the residual between input and output, and finally outputs F(x) + x as the input of the next residual module;
wherein the C1 layer comprises a 160 × 160 input layer, a convolution layer with 7 × 7 convolution kernels, and a pooling layer;
there are three residual modules in the C2 layer, and the three convolution layers of each residual module are: 1 × 1Conv,64,1,1; 3 × 3Conv,64,3,1; 1 × 1Conv,256,1,1; a convolution layer of 1 × 1Conv,256,1,1 is arranged between the input and the output of the first residual module of the C2 layer;
there are four residual modules in the C3 layer, and the three convolution layers of each residual module are: 1 × 1Conv,128,1,1; 3 × 3Conv,128,3,2; 1 × 1Conv,512,1,1; a convolution layer of 1 × 1Conv,512,1,2 is arranged between the input and the output of the first residual module of the C3 layer;
there are six residual modules in the C4 layer, and the three convolution layers of each residual module are: 1 × 1Conv,256,1,1; 3 × 3Conv,256,3,2; 1 × 1Conv,1024,1,1; a convolution layer of 1 × 1Conv,1024,1,2 is arranged between the input and the output of the first residual module of the C4 layer;
there are three residual modules in the C5 layer, and the three convolution layers of each residual module are: 1 × 1Conv,512,1,1; 3 × 3Conv,512,3,2; 1 × 1Conv,2048,1,1; a convolution layer of 1 × 1Conv,2048,1,2 is arranged between the input and the output of the first residual module of the C5 layer;
wherein 1 × 1Conv,64,1,1 represents a convolution operation with a convolution kernel size of 1, a number of 64, and a step size of 1; 3 × 3Conv,512,3,2 represents convolution operations with a convolution kernel size of 3, a number of 512, and a step size of 2;
inputting the weld image into the C1 layer, where it is downsampled through convolution, pooling and Relu activation into the C2 layer; the output features of the C2 layer then undergo three further rounds of learning and downsampling through the C3, C4 and C5 layers in turn, fully learning the feature and semantic information and outputting a feature map that contains deep defect information at the lowest resolution;
an efficient convolution attention module is introduced after the output feature map F of the C5 layer; it is divided into a channel attention module C and a spatial attention module S; after feature refinement learning by the channel attention model, the weld defect features generate a channel attention feature map M_C; M_C is fused with the feature map F to generate F', which serves as the spatial attention input feature map; after F' passes through the spatial attention module, a spatial attention feature map M_S is generated, and M_S is fused with F' to generate the final attention feature map F'';
the feature map F'' serves as the input feature map of the P4 layer of the feature pyramid network (FPN), which comprises layers P1-P4; after the convolution operations and activation of 1 × 1Conv,256,1,1 and 3 × 3Conv,256,3,1, the output of the P4 layer is input directly into the RPN module; at the same time the P4 output is upsampled to the P3 layer, fused by 1 × 1 convolution and added with the feature information of the C4 layer of the residual network (ResNet), and then upsampled in turn to the P2 and P1 layers; the feature map after each upsampling is fused by 1 × 1 convolution with the corresponding feature layer of the residual network (ResNet), adding the learning of shallow feature information on the basis of the deep feature information; finally the feature information of each layer is input into the RPN module to generate regions of interest;
the RPN network generates a prediction frame and a plurality of interested areas, and obtains the real position information of the defect target to train the approximate position of the interested areas of the network; positioning the prediction frame through the ROIAlign layer to obtain an accurate candidate frame; and finally, classifying the target defect aiming frame through a classification network, comparing the target defect aiming frame with the position information of the real aiming frame, and calculating the position loss and the classification loss.
The accuracy for every defect type exceeds 94%, and the detection speed is 11.65 FPS.
Compared with the prior art, the invention has the following advantages:
the invention adds the high-efficiency convolution attention module in the backbone network module, combines the channel attention and the space attention and enhances the network learning ability. A high-efficiency convolution attention mechanism is introduced, feature information is enhanced and fused in two dimensions of a channel and a space in a deep layer of a convolutional neural network, the gradient disappearance phenomenon of shallow target information is improved, meanwhile, dimension reduction operation is not carried out, and the capability of network cross-channel information interaction is enhanced. In the invention, the depth of the convolutional neural network in the main network module is improved by considering the characteristics of low gray value of the weld defect image, small target defect and difficulty in distinguishing the target from the background, the residual error network is composed of 16 residual error modules, the features are also overlapped in the residual error calculation process, and the feature extraction capability is enhanced. Meanwhile, the output of the residual error module is fused with the corresponding FPN through a convolution layer, so that the organic combination of a residual error network and the FPN is realized, shallow feature information and deep feature information are subjected to up-sampling fusion by utilizing a Feature Pyramid Network (FPN), a more detailed feature map of the feature information is generated, and the attention to small target information is stronger.
In the aiming-frame positioning stage, the CIOU loss function adopted by the invention considers three factors of prediction-frame regression: the overlap area, the center-point distance and the aspect ratio. This solves aiming-frame positioning and regression in special cases such as when the prediction frame and the real frame do not overlap or when the real frame covers the prediction frame. At the same time, the IOU loss calculation of the position regression in the ROI Head module is improved: ROIAlign replaces the ROI Pooling operation, so no rounding to the nearest integer is needed and the prediction frame can be generated more accurately, reducing the training loss. ROIAlign computes the position coordinates by bilinear interpolation, eliminating the quantized rounding of the prediction-frame position coordinates during aiming-frame position regression, reducing the positioning deviation of the prediction frame, improving accuracy and lowering the training loss.
Compared with a single-stage target detection network, the two-stage AF-RCNN target detection network adopted as the detection model has higher accuracy; meanwhile, the efficient convolution attention module introduced by the invention is a lightweight model that adds very few training parameters. Compared with the traditional Faster-RCNN algorithm, the detection mAP of the invention reaches 85.4%, an improvement of 2.3%; the small-target detection result AP_S reaches 36.3%, an improvement of 4.8%; and the detection speed of 11.65 FPS (FPS being the average number of images detected per second) is equivalent to the Faster-RCNN speed of 11.62 FPS. The invention thus markedly increases detection accuracy and achieves a better detection effect without reducing the detection speed.
The method is particularly suitable for detecting crack defects with low gray values (the X-ray background is gray and some defects are light gray, so the similar gray values make them hard to detect). With a large data set (more than 1000 pictures, particularly 2000-4000 pictures containing images of the various defects), the learning of defect target information is stronger, and the network model's ability to extract small-target information and its positioning accuracy are improved.
Drawings
FIG. 1 is a diagram of raw weld defects collected by the present invention;
FIG. 2 is a diagram of various weld defects in a new data set after being trimmed by the method of the present invention;
FIG. 3 is a schematic diagram of the network structure of the AF-RCNN model of the present invention;
FIG. 4 is a diagram illustrating a backbone network of the AF-RCNN model according to the present invention;
FIG. 5 is a diagram of a residual module according to the present invention;
FIG. 6 is a structure diagram of the ResNet network C1-C5 layers;
FIG. 7 is a diagram of the FPN network architecture of the present invention;
FIG. 8 is a block diagram of a high efficiency convolution attention module in accordance with the present invention;
FIG. 9 is a comparison graph of PR curves;
FIG. 10 is a comparison graph of small target PR curves;
FIG. 11 is a visual comparison chart of partial defect detection results;
fig. 12 is a training loss graph.
Detailed Description
The embodiments of the present invention will be more fully and more clearly described below with reference to the accompanying drawings of examples of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without any inventive step, are within the scope of the present invention.
The invention relates to an X-ray weld defect detection method based on a convolutional neural network, which is used for detecting and identifying weld defects and mainly comprises the following steps:
(1) weld data set preparation
As shown in fig. 1, the original pictures of the weld defect data set used in the invention are unprocessed; their size is 3000 × 1000 and, in this embodiment, there are 10 of them, each possibly containing different types of weld defects. Their size and resolution are unsuitable for the target detection task, so the pictures need to be processed. To ensure the sufficiency of the weld data set and the diversity of each defect type, each original picture is first divided in a sliding-window manner at three different pixel sizes, 160 × 160, 240 × 240 and 320 × 320, yielding about 3000 small pictures whose sizes are then unified to 160 × 160 pixels; 2714 pictures with defect characteristics are selected from them and classified by defect type, and the data set includes pictures containing several kinds of defects at once. Sliding the windows at three different sizes ensures that the network's input images share the same resolution while the same defect appears at different positions and at different scales in the data-set images, guaranteeing the diversity of the data set and effectively expanding it. A minimal sketch of this sliding-window cropping follows.
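The sketch below illustrates the cropping described above, assuming grayscale source images read with OpenCV; file paths, the non-overlapping stride, and the naming scheme are illustrative assumptions, not taken from the patent.

```python
import os
import cv2

WINDOW_SIZES = [160, 240, 320]   # the three crop scales named in the text
TARGET_SIZE = (160, 160)         # unified output resolution

def slide_crop(image, win, stride=None):
    """Yield square crops of side `win` over a grayscale image."""
    stride = stride or win       # non-overlapping windows (an assumption)
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            yield image[y:y + win, x:x + win]

def build_patches(src_path, dst_dir):
    os.makedirs(dst_dir, exist_ok=True)
    image = cv2.imread(src_path, cv2.IMREAD_GRAYSCALE)
    n = 0
    for win in WINDOW_SIZES:
        for crop in slide_crop(image, win):
            patch = cv2.resize(crop, TARGET_SIZE)   # unify to 160x160
            cv2.imwrite(os.path.join(dst_dir, f"patch_{win}_{n}.png"), patch)
            n += 1
    return n
```

Pictures with defect characteristics would then be selected from the output directory and labeled, as described next.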
The weld labels are then marked manually with the labelImg software and stored in the format of a Pascal VOC data set, giving a new weld image data set of uniform size. All marked weld pictures are randomly divided into a training set, a verification set and a test set in the quantity ratio 4:3:3; the specific numbers are shown in Table 1, and the data set is marked with the English abbreviations of the defects. The weld defects mainly comprise six types: air holes p, slag inclusion s, incomplete fusion lof, incomplete penetration lop, cracks c and undercut u. FIG. 2 shows the various types of defects in the data set.
TABLE 1 weld defect data set
[Table 1 is presented as an image in the original publication.]
(2) Training AF-RCNN algorithm network model
FIG. 3 shows the structure of the AF-RCNN model proposed by the invention; AF-RCNN is short for Attention Faster-RCNN. As shown in FIG. 3, the AF-RCNN model mainly consists of three modules: a backbone network (Backbone) module, a region generation (RPN) module, and a target classification and position regression (ROI Head) module. The backbone network module adopts a network architecture combining a residual network (ResNet) and a feature pyramid network (FPN) and introduces an efficient convolution attention module; it receives the input image and extracts features. The region generation (RPN) module acts directly on the feature map generated by the backbone network module and generates a number of region candidate frames through anchor points, which are input into the ROI Head module for target classification and position regression.
The ROI Head module mainly performs target classification and position correction on the regions generated by the RPN module. The RPN module screens out candidate target regions with a low-threshold classifier; since the many candidate regions it generates include numerous background regions, and the same object may be selected by several different candidate frames, each candidate region must be classified by the ROI Head module to determine its target category, while non-maximum suppression resolves the problem of the same target being framed repeatedly. ROIAlign (bilinear interpolation) is used in the model for the position regression of the prediction frame. A small illustration of the ROIAlign operation follows.
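The snippet below illustrates ROIAlign with torchvision's operator, which performs the bilinear interpolation described here; the feature-map shape, box coordinates, and scale are invented for the example and are not taken from the patent.

```python
import torch
from torchvision.ops import roi_align

features = torch.randn(1, 256, 40, 40)          # one FPN level (assumed shape)
# boxes in (batch_index, x1, y1, x2, y2) format, in input-image coordinates
rois = torch.tensor([[0, 10.3, 12.7, 85.1, 90.4]])
pooled = roi_align(
    features, rois,
    output_size=(7, 7),      # fixed-size output fed to the ROI Head
    spatial_scale=40 / 160,  # feature stride relative to a 160x160 input
    sampling_ratio=2,        # bilinear sample points per bin
    aligned=True,            # no coordinate rounding, unlike ROI Pooling
)
print(pooled.shape)          # torch.Size([1, 256, 7, 7])
```

With `aligned=True`, the fractional box coordinates are kept as-is, which is exactly the point of replacing ROI Pooling's quantized rounding.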
FIG. 4 shows the backbone network module of the AF-RCNN model. The backbone network module adopts a network architecture combining a residual network (ResNet) and a feature pyramid network (FPN) and introduces an efficient convolution attention module; it receives the input image and extracts features. The weld image first undergoes batch normalization, converting the picture information into gray-value digital information, and deep convolutional information extraction is then performed through the residual network (ResNet).
The efficient convolution attention module is combined with the ResNet and FPN networks to form the improved backbone network module. Considering that small-target weld defect information can be lost during deep convolutional learning, the module is placed between the ResNet convolution layers and the FPN layer so that small-target feature information is emphasized in the deep network, giving a better detection effect. The feature information is then input into the FPN network, which further fuses the deep and shallow feature information so that it is learned more fully.
As shown in fig. 4, ResNet has five layers, C1 to C5, containing 16 residual modules in total. The residual module structure is shown in fig. 5: it comprises three sequentially connected convolution layers, where x is the input of the residual module, F(x) is the function fitted by the residual module, and Relu is the activation function; the input x passes through the first convolution layer and the Relu activation function into the second and third convolution layers, the output of the third convolution layer and the original input x form the residual, and finally F(x) + x is output. Each residual module only needs to learn the residual between input and output and finally outputs F(x) + x as the input of the next residual module, which improves learning sufficiency and reduces the complexity of network learning. Fig. 6 shows the structure of each of the layers C1-C5, where 1 × 1Conv,64,1,1 denotes a convolution operation with kernel size 1, number 64 and stride 1, ×1 denotes one residual module and ×2 denotes two residual modules; the residual modules within a layer share the same structure, all residual modules are connected in sequence, and the 5 layers contain 16 residual modules. The weld image is input into the C1 layer for preliminary learning; its feature map there has the highest resolution and retains the target feature information. It is then downsampled through convolution, pooling and Relu activation into the C2 layer, which outputs a feature map of reduced resolution from which more semantic information can be learned. The feature map then undergoes three rounds of learning and downsampling through the C3, C4 and C5 layers in turn, fully learning the feature and semantic information and outputting a feature map that contains deep defect information at the lowest resolution. A sketch of such a bottleneck residual module appears below.
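The sketch below shows a bottleneck residual module matching the three-convolution structure described above, with a 1 × 1 projection on the shortcut of a stage's first module; it is a minimal illustration in PyTorch, not the patent's exact implementation, and the batch-normalization placement is an assumption.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    def __init__(self, in_ch, mid_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, mid_ch, 1, 1, bias=False)
        self.conv2 = nn.Conv2d(mid_ch, mid_ch, 3, stride, padding=1, bias=False)
        self.conv3 = nn.Conv2d(mid_ch, out_ch, 1, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(mid_ch)
        self.bn2 = nn.BatchNorm2d(mid_ch)
        self.bn3 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 projection on the shortcut of the first module of each stage
        self.down = None
        if stride != 1 or in_ch != out_ch:
            self.down = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
                nn.BatchNorm2d(out_ch))

    def forward(self, x):
        identity = x if self.down is None else self.down(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return self.relu(out + identity)   # learn F(x), output F(x) + x

# e.g. the first module of the C2 layer: Bottleneck(64, 64, 256, stride=1)
```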
This feature map contains deep weld defect feature information. To strengthen its feature learning for small-target defects and small-gray-value defects, the invention introduces an efficient convolution attention module after the C5-layer output. The efficient convolution attention module is an attention module acting on a feed-forward convolutional neural network. Given a feature map F ∈ R^(C×H×W) as input, the module infers attention maps along the channel dimension, ω_C ∈ R^(1×1×C), and the spatial dimension, M_S ∈ R^(1×H×W), which are multiplied with the input feature map for adaptive feature refinement:

F' = ω_C(F) ⊗ F

F'' = M_S(F') ⊗ F'

where ⊗ denotes element-by-element multiplication, F' denotes the channel attention feature map, F'' denotes the final attention feature output, and ω_C(·) and M_S(·) denote the computation of the feature map along the channel and spatial dimensions respectively. The efficient convolution attention module is divided into a channel attention module and a spatial attention module, denoted C and S in fig. 4. After feature refinement learning by the channel attention model, the weld defects generate a channel attention feature map M_C; M_C is fused with the feature map F to generate F', which serves as the spatial attention input feature map; after F' passes through the spatial attention module, a spatial attention feature map M_S is generated, and M_S is fused with F' to generate the final attention feature map F''. The specific embodiments are described below.
FIG. 8 shows the computation structure of the efficient convolution attention module. The channel attention feature map is generated as follows: the feature map F output by the C5 layer in fig. 4 first undergoes average pooling AvgPool to generate a 1 × 1 × C feature vector; one-dimensional convolution then performs cross-channel information interaction, and Relu activation yields the channel attention M_C, which is fused with the feature map F to generate the channel attention feature map F'. Concretely, the learned channel attention is represented by a matrix W_k containing k × C parameters, where C is the number of feature-map channels and k is the number of channels adjacent to each channel; this avoids treating the information of different channels as mutually independent. The matrix W_k has C feature channels in total, and each row holds the weights of one channel: for example, the first row represents the weight of the first channel, which is associated only with its k adjacent channels, so only the first to the k-th elements of that row are non-zero and the weights of the remaining, unassociated channels are zero. The matrix form of W_k is shown in equation (1.6):

W_k = [ w^(1,1)  …  w^(1,k)      0         0      …      0
        0     w^(2,2)  …   w^(2,k+1)       0      …      0
        ⋮        ⋮                 ⋱                      ⋮
        0        …        0     w^(C,C−k+1)   …    w^(C,C) ]   (1.6)

Equation (1.8) defines a 2-power nonlinear mapping between k and C:

C = φ(k) = 2^(γ×k − b)   (1.8)

ψ(·) is the inverse function of φ(·). This exponential form is adopted because a linear mapping C = 2×k − 1 can express only a relatively limited channel relationship; the coefficients are left unchanged, so γ is set to 2 and b to 1. Given C, the value of k is then calculated from equation (1.9):

k = ψ(C) = | log2(C)/γ + b/γ |_odd   (1.9)

where |t|_odd denotes the odd number nearest to t. Thus k is not equal to C; k is calculated from C.
The channel attention can be realized by a one-dimensional convolution, as in equations (1.10) and (1.11):

ω_C = σ(C1D_k(y))   (1.10)

ω_i = σ( Σ_(j=1)^(k) w^j y_i^j ),  y_i^j ∈ Ω_i^k   (1.11)

where C1D denotes a one-dimensional convolution and σ denotes the Relu activation function. i takes values from 1 to C and indexes the i-th channel, and ω_i is the weight of the i-th channel after the one-dimensional convolution. y denotes the 1 × 1 × C input feature vector, i.e. the channel weight vector; its i-th component y_i considers only the mutual information between channel i and its k adjacent channels, and Ω_i^k denotes the set of k channels adjacent to y_i. The channel attention feature map is then F' = ω_C ⊗ F, where ⊗ denotes element-by-element multiplication. A sketch of this channel attention computation follows.
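The sketch below implements the computation of equations (1.9)-(1.11): average pooling, a one-dimensional convolution over k adjacent channels with k derived from C, activation, then channel-wise reweighting. It is a minimal illustration; Relu is used as σ because the text specifies it, and the even-to-odd rounding of k follows the common convention for |t|_odd.

```python
import math
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        t = math.log2(channels) / gamma + b / gamma
        k = int(t)
        k = k if k % 2 == 1 else k + 1        # |t|_odd per equation (1.9)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.act = nn.ReLU(inplace=True)      # sigma in equation (1.10)

    def forward(self, x):                     # x: (N, C, H, W), the map F
        y = self.avg_pool(x)                  # (N, C, 1, 1) channel vector
        y = y.squeeze(-1).transpose(1, 2)     # (N, 1, C) for the 1D conv
        y = self.act(self.conv(y))            # C1D_k: cross-channel interaction
        w = y.transpose(1, 2).unsqueeze(-1)   # back to (N, C, 1, 1)
        return x * w                          # M_C fused with F gives F'
```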
The spatial attention feature map shown in fig. 8 is generated as follows: the channel attention feature map F' first undergoes max pooling MaxPool and average pooling AvgPool to generate F'_max ∈ R^(1×H×W) and F'_avg ∈ R^(1×H×W), two-dimensional feature maps representing the maximum-pool features and the average-pool features respectively; a convolution layer reduces them to a single channel, and Sigmoid activation generates the spatial attention feature map M_S; finally this feature map is multiplied with the initially input feature map F' to output the final efficient convolution attention feature map F''. The mathematical expressions are:

M_S(F') = σ( f^(7×7)( [AvgPool(F'); MaxPool(F')] ) ) = σ( f^(7×7)( [F'_avg; F'_max] ) )   (1.12)

F'' = M_S(F') ⊗ F'   (1.13)

where σ denotes the Sigmoid activation function, ⊗ denotes element-by-element multiplication, and f^(7×7) denotes a convolution operation with a 7 × 7 convolution kernel. A sketch of this computation follows.
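The sketch below implements equations (1.12)-(1.13): channel-wise max and average pooling, a 7 × 7 convolution down to one channel, Sigmoid activation, and element-wise reweighting of F'. It is a minimal illustration of the described operation.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2,
                              bias=False)     # f^(7x7) in equation (1.12)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                     # x: F', shape (N, C, H, W)
        avg = torch.mean(x, dim=1, keepdim=True)      # F'_avg
        mx, _ = torch.max(x, dim=1, keepdim=True)     # F'_max
        m_s = self.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * m_s                        # F'' = M_S(F') (x) F'
```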
The spatial attention model is generated from the spatial relationships between elements; unlike channel attention, spatial attention attends to the position of the feature information and thus complements channel attention. Channel attention and spatial attention attend respectively to the semantic information and the position information of the target features; the two are complementary, and placing them in sequence achieves a better learning effect, as the short composition snippet below illustrates.
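The snippet below composes the two sketches above in the stated order, channel attention first and spatial attention second; `c5_feature_map` is a placeholder input with C5-like channel count, not a value from the patent.

```python
import torch
import torch.nn as nn

# reuses ChannelAttention and SpatialAttention from the sketches above
attention = nn.Sequential(ChannelAttention(2048), SpatialAttention())
c5_feature_map = torch.randn(1, 2048, 5, 5)   # toy stand-in for F
refined = attention(c5_feature_map)           # F -> F' -> F''
```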
As shown in fig. 4, the efficient convolution attention feature map F'' is the input feature map of the P4 layer of the feature pyramid network (FPN); fig. 7 shows the specific convolution structures and parameters of layers P1-P4. The P4 feature map has the smallest resolution and receptive field and contains the deepest feature information. After convolution and activation it is upsampled to the P3 layer, enlarging the feature map and the receptive field and enhancing the semantic information, fused by 1 × 1 convolution and added with the feature information of the C4 layer of the residual network (ResNet), and then upsampled in turn to the P2 and P1 layers. The feature map after each upsampling is fused by 1 × 1 convolution with the corresponding feature layer of the residual network (ResNet), adding the learning of shallow feature information on the basis of the deep feature information; finally the feature information of each layer is input into the RPN module to generate regions of interest. A minimal sketch of this top-down fusion follows.
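The sketch below shows the top-down FPN fusion described above: the deepest map is repeatedly upsampled and added to a 1 × 1-convolved lateral feature from the corresponding ResNet stage. The channel counts, the nearest-neighbor upsampling, and the stage pairing (C4 to P3, C3 to P2, C2 to P1) are assumptions drawn from the text, not a verified reproduction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNFuse(nn.Module):
    def __init__(self, c_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        # 1x1 lateral convolutions for the C2, C3, C4 stages (assumed widths)
        self.laterals = nn.ModuleList(
            nn.Conv2d(c, out_channels, 1) for c in c_channels)

    def forward(self, c_feats, p_top):
        """c_feats: [C2, C3, C4] feature maps; p_top: the P4 map built from F''."""
        outputs = [p_top]
        p = p_top
        for lateral, c in zip(list(self.laterals)[::-1], c_feats[::-1]):
            p = F.interpolate(p, size=c.shape[-2:], mode="nearest")  # upsample
            p = p + lateral(c)              # fuse with the lateral feature
            outputs.insert(0, p)            # collect P3, then P2, then P1
        return outputs                      # [P1, P2, P3, P4] for the RPN
```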
The RPN network generates prediction frames and a number of regions of interest, and the real position information of the defect targets is obtained to train the approximate positions of the network's regions of interest. The prediction frames are located through the ROIAlign layer to obtain accurate candidate frames. Finally, the target defect aiming frames are classified through a classification network, compared with the position information of the real aiming frames, and the position loss and the classification loss are calculated.
The CIOU loss function is adopted as the loss function for prediction-frame regression and positioning. IOU is the intersection-over-union of the defect target's real frame B^gt and the prediction frame B. The IOU loss used by the traditional Faster-RCNN considers only the size of this ratio: when the overlap between the prediction frame and the real frame is zero, the prediction frame cannot be regressed and the loss is large. The improved CIOU loss of the invention introduces corresponding penalty terms and simultaneously considers the three elements of prediction-frame regression, namely the overlap area, the center-point distance and the aspect ratio, improving the inaccurate positioning and large loss of the network's regression prediction frame. It is expressed as follows:

L_CIOU = 1 − IOU + ρ^2(b, b^gt)/c^2 + α·v

IOU = |B ∩ B^gt| / |B ∪ B^gt|

v = (4/π^2) · ( arctan(ω^gt/h^gt) − arctan(ω/h) )^2

α = v / ( (1 − IOU) + v )

where B and B^gt denote the areas of the prediction frame and the real frame respectively; b and b^gt denote the center-point position coordinates of the prediction frame and the real frame, ρ(·) denotes the Euclidean distance function, and c is the diagonal length of the smallest rectangle enclosing both the prediction frame and the real frame; α is a weight coefficient; v measures the similarity of the aspect ratios, where ω and ω^gt denote the widths of the prediction frame and the real frame, and h and h^gt denote their heights. A sketch of this loss follows.
Setting the training parameters: the ImageNet pre-trained weights are used as the pre-training weights to accelerate model training. The initial learning rate is set to 0.005 and the total number of training iterations to 50; the network model automatically reduces the learning rate to one third of its previous value after every 5 iterations of training; the initial momentum is set to 0.9 and the Batch_size to 4. A sketch of this schedule follows.
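The sketch below reproduces the stated schedule; SGD is an assumption (the text does not name the optimizer), and the model and data are trivial placeholders so the loop runs standalone.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(1, 6, 3, padding=1)   # placeholder for the AF-RCNN model
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
# learning rate reduced to one third after every 5 training iterations
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=1 / 3)

for epoch in range(50):                      # 50 training iterations in total
    images = torch.randn(4, 1, 160, 160)     # one dummy batch, Batch_size = 4
    loss = model(images).mean()              # placeholder for class + CIOU loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```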
Starting training: the marked VOC-format data set is input into the backbone network module, where a series of ResNet convolution operations generates a feature map; the channel and spatial attention models further extract features to generate an attention feature map, which is fused with the original feature map to enhance the target information. The fused attention feature map is input into the FPN network for upsampling and is feature-fused with the intermediate-layer features of the ResNet network, so that deep and shallow feature information are considered simultaneously. The feature map produced by this further FPN fusion is fed into the RPN to generate prediction frames and a number of regions of interest, while the real position information of the defect targets is obtained to train the approximate positions of the network's regions of interest. The prediction frames are located through the ROIAlign layer to obtain accurate candidate frames. Finally, the target defect aiming frames are classified through a classification network to achieve the detection effect. After the 50 training iterations are finished, the network model stores the parameter information of each stage and generates the final optimal weight model, which is used to test the network's effect and detection capability.
(3) Testing AF-RCNN model detection effect
The weld defect pictures of the test set are input into the AF-RCNN model trained in step (2), their size unified to 160 × 160 pixels. The pictures propagate only forward through the network; convolution and defect feature extraction are performed with the trained weight parameters, the backbone network module outputs feature maps, the RPN outputs regions of interest and marks aiming frames, and non-maximum suppression discards redundant aiming frames so that the same target is not detected several times. Finally the classification network is entered: the fully connected layer outputs the aiming-frame classification confidences to classify the targets, and the detected weld image is output with its aiming frames, classification information and confidences. A small non-maximum-suppression illustration follows.
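The snippet below illustrates non-maximum suppression with torchvision's operator; the boxes, scores, and threshold are toy values chosen for the example.

```python
import torch
from torchvision.ops import nms

boxes = torch.tensor([[20., 20., 80., 80.],
                      [22., 21., 79., 83.],     # near-duplicate of the first
                      [100., 90., 150., 140.]])
scores = torch.tensor([0.95, 0.90, 0.80])
keep = nms(boxes, scores, iou_threshold=0.5)    # indices of boxes to keep
print(keep)                                     # tensor([0, 2])
```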
To demonstrate the advancement of the method, the VOC data set prepared as described above is used to train, test and compare different network models for weld defect detection; the experimental results and comparison are shown in Table 2:
TABLE 2 comparison of the experimental results of the models
[Table 2 is presented as an image in the original publication.]
As can be seen from Table 2, the SSD network detects the whole data set with an mAP of 75.1%, and its small-target detection result AP_S is only 24.9%. After the efficient convolution attention module is introduced and the backbone network module improved, the detection effect of the final AF-RCNN network model improves markedly: the mAP reaches 85.4%, 2.3% higher than that of the Faster-RCNN algorithm. The detection of small and medium targets also improves clearly, by 4.8% and 2.1% respectively. The experimental results show that the AF-RCNN model pays more attention to small targets, has a stronger detection capability, and raises the overall detection level significantly.
Fig. 9 and fig. 10 show the P-R curves for all categories and for the small-target detection results respectively. The abscissa is the recall R, the ordinate the precision P, and the area enclosed by a curve and the abscissa represents the mAP value of the algorithm; a larger area means a larger value and a better detection effect. The black dotted line shows the detection effect of the Faster-RCNN model, and the black curve is the P-R curve of the AF-RCNN algorithm with the efficient convolution attention module. As the figures show, the AF-RCNN model proposed by the invention achieves the best detection effect both overall and on small targets.
TABLE 3 Defect detection accuracy
[Table 3 is presented as an image in the original publication.]
Table 3 shows the accuracy of the various defect detections by the different algorithms. The comparison shows that the detection accuracy of the invention's algorithm is the highest: crack (c) defects, whose gray values are close to the background and whose defect targets are small and relatively hard to detect, are detected with 94% accuracy, clearly higher than the 88% of Faster-RCNN, and the accuracy for the remaining defects exceeds 95%. The accuracy for all defects reaches 94% or more.
FIG. 11 is a visual comparison of some defect detection results: the traditional Faster-RCNN algorithm misses and repeatedly detects small targets and low-gray-value defects, whereas the AF-RCNN model of the application detects small targets with higher precision, positions the anchor frames more accurately, and clearly reduces repeated detections. FIG. 12 compares the loss curves of the traditional Faster-RCNN algorithm and the AF-RCNN model; the AF-RCNN algorithm of the invention converges faster with less training loss, further verifying the effectiveness of the invention.
What is not described in detail in this specification belongs to the prior art known to those skilled in the art.

Claims (7)

1. An X-ray weld defect detection method based on a convolutional neural network is characterized by comprising the following steps:
establishing a weld image data set containing different types of weld defects, and labeling weld labels on all weld pictures in the data set;
establishing an AF-RCNN model, wherein the AF-RCNN model comprises a backbone network module, an area generation module and a target classification and position regression module; the main network module adopts a residual error network (ResNet) and characteristic pyramid network (FPN) structure, and a high-efficiency convolution attention module is introduced between the residual error network (ResNet) and the characteristic pyramid network (FPN) so as to enhance the learning capability of the network on unobvious defects and small target characteristics, and simultaneously introduce a CIOU loss function and enhance the positioning capability of the aiming frame;
and training an AF-RCNN model by using the established data set for classifying and positioning the weld defects.
2. The detection method according to claim 1, wherein the data set is established by:
collecting original X-ray pictures of weld defects, the original pictures being larger than 3000 × 1000 pixels and numbering 10-30, each containing different weld defects; dividing each original picture into a number of small pictures in a sliding-window manner at three different pixel sizes, 160 × 160, 240 × 240 and 320 × 320, and unifying their size to 160 × 160 pixels to obtain a small-picture set; selecting the pictures with defect characteristics from the small-picture set and classifying them by defect type to form the final weld image data set, the data set including pictures that contain several kinds of defects at once, with the same defect appearing at different sizes and the defects distributed at different positions in the data-set pictures, so as to ensure the diversity of the data set;
manually marking the weld labels with the labelImg software and storing the labels in the format of a Pascal VOC data set to obtain a new weld image data set of uniform size; randomly dividing all marked weld pictures into a training set, a verification set and a test set in the quantity ratio 4:3:3, the weld defects comprising six types: air holes p, slag inclusion s, incomplete fusion lof, incomplete penetration lop, cracks c and undercut u.
3. The detection method according to claim 1,
the residual network ResNet has five layers, C1, C2, C3, C4 and C5, containing 16 residual modules in total; each residual module comprises three sequentially connected convolution layers, the input x passes through the first convolution layer and a Relu activation function into the second and third convolution layers, and the output of the third convolution layer is combined with the original input x as the residual to give the module output; each residual module only needs to learn the residual between input and output, and finally outputs F(x) + x as the input of the next residual module;
wherein the C1 layer comprises a 160 × 160 input layer, a convolution layer with 7 × 7 convolution kernels, and a pooling layer;
there are three residual modules in the C2 layer, and the three convolution layers of each residual module are: 1 × 1Conv,64,1,1; 3 × 3Conv,64,3,1; 1 × 1Conv,256,1,1; a convolution layer of 1 × 1Conv,256,1,1 is arranged between the input and the output of the first residual module of the C2 layer;
there are four residual modules in the C3 layer, and the three convolution layers of each residual module are: 1 × 1Conv,128,1,1; 3 × 3Conv,128,3,2; 1 × 1Conv,512,1,1; a convolution layer of 1 × 1Conv,512,1,2 is arranged between the input and the output of the first residual module of the C3 layer;
there are six residual modules in the C4 layer, and the three convolution layers of each residual module are: 1 × 1Conv,256,1,1; 3 × 3Conv,256,3,2; 1 × 1Conv,1024,1,1; a convolution layer of 1 × 1Conv,1024,1,2 is arranged between the input and the output of the first residual module of the C4 layer;
there are three residual modules in the C5 layer, and the three convolution layers of each residual module are: 1 × 1Conv,512,1,1; 3 × 3Conv,512,3,2; 1 × 1Conv,2048,1,1; a convolution layer of 1 × 1Conv,2048,1,2 is arranged between the input and the output of the first residual module of the C5 layer;
wherein 1 × 1Conv,64,1,1 represents a convolution operation with a convolution kernel size of 1, a number of 64, and a step size of 1; 3 × 3Conv,512,3,2 represents convolution operations with a convolution kernel size of 3, a number of 512, and a step size of 2;
inputting the weld image into the C1 layer, where it is downsampled through convolution, pooling and Relu activation into the C2 layer; the output features of the C2 layer then undergo three rounds of learning and downsampling through the C3, C4 and C5 layers in turn, fully learning the feature and semantic information and outputting a feature map that contains deep defect information at the lowest resolution;
an efficient convolution attention module is introduced after the output feature map F of the C5 layer; it is divided into a channel attention module C and a spatial attention module S; after feature refinement learning by the channel attention model, the weld defect features generate a channel attention feature map M_C; M_C is fused with the feature map F to generate F', which serves as the spatial attention input feature map; after F' passes through the spatial attention module, a spatial attention feature map M_S is generated, and M_S is fused with F' to generate the final attention feature map F'';
the feature map F'' serves as the input feature map of the P4 layer of the feature pyramid network (FPN), which comprises layers P1-P4; after the convolution operations and activation of 1 × 1Conv,256,1,1 and 3 × 3Conv,256,3,1, the output of the P4 layer is input directly into the RPN module; at the same time the P4 output is upsampled to the P3 layer, fused by 1 × 1 convolution and added with the feature information of the C4 layer of the residual network (ResNet), and then upsampled in turn to the P2 and P1 layers; the feature map after each upsampling is fused by 1 × 1 convolution with the corresponding feature layer of the residual network (ResNet), adding the learning of shallow feature information on the basis of the deep feature information; finally the feature information of each layer is input into the RPN module to generate regions of interest;
the RPN network generates prediction frames and a number of regions of interest, and the real position information of the defect targets is obtained to train the approximate positions of the network's regions of interest; the prediction frames are located through the ROIAlign layer to obtain accurate candidate frames; finally, the target defect aiming frames are classified through a classification network, compared with the position information of the real aiming frames, and the position loss and classification loss are calculated.
4. The detection method according to claim 1, wherein the efficient convolution attention module comprises a channel attention model and a spatial attention model connected in sequence; the channel attention model is: the input feature map F is the feature map output by the residual network C5 layer; it first undergoes average pooling AvgPool to generate a 1 × 1 × C feature vector, one-dimensional convolution then performs cross-channel information interaction, and Relu activation yields the channel attention feature map M_C; M_C is then fused with the feature map F to generate the channel attention feature map F'; the specific calculation is as follows: a matrix W_k represents the learned channel attention, W_k containing k × C parameters, where C denotes the number of feature-map channels and k denotes the number of channels adjacent to each channel:

W_k = [ w^(1,1)  …  w^(1,k)      0         0      …      0
        0     w^(2,2)  …   w^(2,k+1)       0      …      0
        ⋮        ⋮                 ⋱                      ⋮
        0        …        0     w^(C,C−k+1)   …    w^(C,C) ]   (1.6)

ω_C = σ(C1D_k(y)),  ω_i = σ( Σ_(j=1)^(k) w^j y_i^j ),  y_i^j ∈ Ω_i^k   (1.7)

C = φ(k) = 2^(γ×k − b)   (1.8)

k = ψ(C) = | log2(C)/γ + b/γ |_odd   (1.9)

in which φ(·) defines a 2-power nonlinear mapping between k and C and ψ(·) is its inverse function; γ is 2 and b is 1; σ denotes the Relu activation function; i takes values from 1 to C and indexes the i-th channel, and ω_i is the weight of the i-th channel after the one-dimensional convolution; y denotes the 1 × 1 × C input feature vector, i.e. the channel weight vector, y_i is an element of y representing the weight of the i-th channel, and Ω_i^k denotes the set of k channels adjacent to y_i; |t|_odd denotes the odd number nearest to t;
the spatial attention model is: the channel attention feature map F' first undergoes max pooling MaxPool and average pooling AvgPool to generate F'_max ∈ R^(1×H×W) and F'_avg ∈ R^(1×H×W), two-dimensional feature maps representing the maximum-pool features and the average-pool features respectively; a convolution layer reduces them to one channel and Sigmoid activation generates the spatial attention feature map M_S; finally the feature map M_S is multiplied with the originally input channel attention feature map F' to output the final efficient convolution attention feature map F''; the mathematical expressions are formula (1.12) and formula (1.13):

M_S(F') = σ( f^(7×7)( [AvgPool(F'); MaxPool(F')] ) ) = σ( f^(7×7)( [F'_avg; F'_max] ) )   (1.12)

F'' = M_S(F') ⊗ F'   (1.13)

in which σ denotes the Sigmoid activation function, ⊗ denotes element-by-element multiplication, and f^(7×7) denotes a convolution operation with a convolution kernel size of 7 × 7.
5. The detection method according to claim 1, wherein the CIOU loss function is adopted as the loss function for prediction-box regression and localization. The CIOU loss function L_CIOU is expressed as:

$$L_{CIOU}=1-IoU+\frac{\rho^{2}(b,\,b^{gt})}{c^{2}}+\alpha v$$

$$IoU=\frac{|B\cap B^{gt}|}{|B\cup B^{gt}|}$$

$$\alpha=\frac{v}{(1-IoU)+v}$$

$$v=\frac{4}{\pi^{2}}\Big(\arctan\frac{\omega^{gt}}{h^{gt}}-\arctan\frac{\omega}{h}\Big)^{2}$$

where B and B^gt denote the areas of the prediction box and the ground-truth box respectively; b and b^gt denote the center-point coordinates of the prediction box and the ground-truth box; ρ(·) denotes the Euclidean distance function; c is the diagonal length of the smallest rectangle enclosing both the prediction box and the ground-truth box; α is a weight coefficient; v measures the similarity of the aspect ratios, where ω and ω^gt denote the widths of the prediction box and the ground-truth box, and h and h^gt denote their heights respectively.
6. The detection method according to claim 1, wherein the training parameters are set as follows: pre-trained weights are used to initialize training; the initial learning rate is set to 0.005; the total number of training iterations is 50; the AF-RCNN model automatically reduces the learning rate to one third of its current value after every 5 iterations of training; the initial momentum is set to 0.9; and Batch_size is set to 4.
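These hyperparameters map directly onto a standard PyTorch optimizer/scheduler pairing; the sketch below uses a placeholder module, since the AF-RCNN network itself is defined by the preceding claims.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 8, kernel_size=3)  # placeholder for the AF-RCNN model
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
# after every 5 iterations of training, the learning rate drops to one third
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=1 / 3)

BATCH_SIZE = 4                 # Batch_size = 4
for iteration in range(50):    # total number of training iterations: 50
    # ... one pass over the weld-defect training set:
    # optimizer.zero_grad(); loss.backward(); optimizer.step()
    scheduler.step()
```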
7. The detection method according to claim 1, wherein the detection accuracy for all defect types reaches 94% or higher and the detection speed reaches 11.65 FPS.
CN202110965549.4A 2021-08-23 2021-08-23 X-ray weld defect detection method based on convolutional neural network Active CN113674247B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110965549.4A CN113674247B (en) 2021-08-23 2021-08-23 X-ray weld defect detection method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN113674247A true CN113674247A (en) 2021-11-19
CN113674247B (en) 2023-09-01

Family

ID=78544815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110965549.4A Active CN113674247B (en) 2021-08-23 2021-08-23 X-ray weld defect detection method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN113674247B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108345911A (en) * 2018-04-16 2018-07-31 东北大学 Surface Defects in Steel Plate detection method based on convolutional neural networks multi-stage characteristics
CN110570410A (en) * 2019-09-05 2019-12-13 河北工业大学 Detection method for automatically identifying and detecting weld defects
US20210089807A1 (en) * 2019-09-25 2021-03-25 Samsung Electronics Co., Ltd. System and method for boundary aware semantic segmentation
CN112149720A (en) * 2020-09-09 2020-12-29 南京信息工程大学 Fine-grained vehicle type identification method
CN112927217A (en) * 2021-03-23 2021-06-08 内蒙古大学 Thyroid nodule invasiveness prediction method based on target detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
谷静; 张可帅; 朱漪曼: "Research on weld defect image classification based on convolutional neural networks" (基于卷积神经网络的焊缝缺陷图像分类研究), 应用光学 (Applied Optics), No. 03 *

Cited By (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114187256A (en) * 2021-11-24 2022-03-15 南京芯谱视觉科技有限公司 Method for detecting defects of welding seam X-ray photograph
CN113850241A (en) * 2021-11-30 2021-12-28 城云科技(中国)有限公司 Vehicle window parabolic detection method and device, computer program product and electronic device
CN113888542A (en) * 2021-12-08 2022-01-04 常州微亿智造科技有限公司 Product defect detection method and device
CN114359680A (en) * 2021-12-17 2022-04-15 中国人民解放军海军工程大学 Panoramic vision water surface target detection method based on deep learning
CN114519693A (en) * 2021-12-17 2022-05-20 江苏康缘药业股份有限公司 Method for detecting surface defects, model construction method and device and electronic equipment
CN114240923A (en) * 2021-12-27 2022-03-25 青岛科技大学 Full-automatic BLDC motor winding machine product defect detection method based on machine vision
CN114428877B (en) * 2022-01-27 2023-09-15 西南石油大学 Intelligent clothing matching method and system
CN114428877A (en) * 2022-01-27 2022-05-03 西南石油大学 Intelligent clothing matching method and system
CN114565822A (en) * 2022-01-27 2022-05-31 国网浙江省电力有限公司湖州供电公司 High-voltage power grid hanging object detection method based on deep learning
CN114663366B (en) * 2022-03-07 2024-11-05 南京工业大学 Hotel suit door detection method based on YOLOv5s neural network
CN114663366A (en) * 2022-03-07 2022-06-24 南京工业大学 Hotel suit door detection method based on YOLOv5s neural network
WO2023173598A1 (en) * 2022-03-15 2023-09-21 中国华能集团清洁能源技术研究院有限公司 Fan blade defect detection method and system based on improved ssd model
CN114757904A (en) * 2022-04-07 2022-07-15 河南大学 Surface defect detection method based on AI deep learning algorithm
CN114757904B (en) * 2022-04-07 2024-08-02 河南大学 Surface defect detection method based on AI deep learning algorithm
CN114693670A (en) * 2022-04-24 2022-07-01 西京学院 Ultrasonic detection method for weld defects of longitudinal submerged arc welded pipe based on multi-scale U-Net
CN114693670B (en) * 2022-04-24 2023-05-23 西京学院 Ultrasonic detection method for weld defects of longitudinal submerged arc welded pipe based on multi-scale U-Net
CN115035290A (en) * 2022-05-07 2022-09-09 上海工程技术大学 Target detection method based on improved fast RCNN
CN114627383A (en) * 2022-05-13 2022-06-14 南京航空航天大学 Small sample defect detection method based on metric learning
US11823425B2 (en) 2022-05-13 2023-11-21 Nanjing University Of Aeronautics And Astronautics Few-shot defect detection method based on metric learning
CN114943903A (en) * 2022-05-25 2022-08-26 广西财经学院 Self-adaptive clustering target detection method for aerial image of unmanned aerial vehicle
CN115100460A (en) * 2022-06-13 2022-09-23 广州丽芳园林生态科技股份有限公司 A method, apparatus, device and storage medium for plant classification and identification based on deep learning and vector retrieval
CN114782538B (en) * 2022-06-16 2022-09-16 长春融成智能设备制造股份有限公司 Visual positioning method compatible with different barrel shapes applied to filling field
CN114782538A (en) * 2022-06-16 2022-07-22 长春融成智能设备制造股份有限公司 Visual positioning method compatible with different barrel shapes and applied to filling field
CN114821246B (en) * 2022-06-28 2022-10-14 山东省人工智能研究院 Small target detection method based on multi-level residual error network perception and attention mechanism
CN114821246A (en) * 2022-06-28 2022-07-29 山东省人工智能研究院 Small target detection method based on multi-level residual error network perception and attention mechanism
CN114998570A (en) * 2022-07-19 2022-09-02 上海闪马智能科技有限公司 Method and device for determining object detection frame, storage medium and electronic device
CN115147711A (en) * 2022-07-23 2022-10-04 河南大学 Underwater target detection network and method based on improved RetinaNet
CN115147711B (en) * 2022-07-23 2024-07-16 河南大学 Underwater target detection network and method based on improved RetinaNet
CN115122005A (en) * 2022-07-27 2022-09-30 广东省源天工程有限公司 Ultra-large type miter gate door body welding device
CN115115895A (en) * 2022-07-28 2022-09-27 吉林大学 Explosive mobile phone X-ray image classification method based on attention mechanism
CN115330740B (en) * 2022-08-22 2023-08-08 河海大学 MDCN-based lightweight crack identification method
CN115330740A (en) * 2022-08-22 2022-11-11 河海大学 Lightweight crack identification method based on MDCN
CN115082482A (en) * 2022-08-23 2022-09-20 山东优奭趸泵业科技有限公司 Metal surface defect detection method
CN115082482B (en) * 2022-08-23 2022-11-22 山东优奭趸泵业科技有限公司 Metal surface defect detection method
CN115187595A (en) * 2022-09-08 2022-10-14 北京东方国信科技股份有限公司 End plug weld defect detection model training method, detection method and electronic equipment
CN115375677B (en) * 2022-10-24 2023-04-18 山东省计算中心(国家超级计算济南中心) Wine bottle defect detection method and system based on multi-path and multi-scale feature fusion
CN115375677A (en) * 2022-10-24 2022-11-22 山东省计算中心(国家超级计算济南中心) Wine bottle defect detection method and system based on multi-path and multi-scale feature fusion
CN115439483A (en) * 2022-11-09 2022-12-06 四川川锅环保工程有限公司 High-quality welding seam and welding seam defect identification system, method and storage medium
CN115601357A (en) * 2022-11-29 2023-01-13 南京航空航天大学(Cn) Stamping part surface defect detection method based on small sample
CN115861772A (en) * 2023-02-22 2023-03-28 杭州电子科技大学 Multi-scale single-stage target detection method based on RetinaNet
CN116342531A (en) * 2023-03-27 2023-06-27 中国十七冶集团有限公司 Light-weight large-scale building high-altitude steel structure weld defect identification model, weld quality detection device and method
CN116342531B (en) * 2023-03-27 2024-01-19 中国十七冶集团有限公司 Device and method for detecting quality of welding seam of high-altitude steel structure of lightweight large-scale building
CN116152226A (en) * 2023-04-04 2023-05-23 东莞职业技术学院 Method for detecting defects of image on inner side of commutator based on fusible feature pyramid
CN116091496B (en) * 2023-04-07 2023-11-24 菲特(天津)检测技术有限公司 Defect detection method and device based on improved Faster-RCNN
CN116091496A (en) * 2023-04-07 2023-05-09 菲特(天津)检测技术有限公司 Defect detection method and device based on improved Faster-RCNN
CN116206248B (en) * 2023-04-28 2023-07-18 江西省水利科学院(江西省大坝安全管理中心、江西省水资源管理中心) Target detection method based on machine learning guide deep learning
CN116206248A (en) * 2023-04-28 2023-06-02 江西省水利科学院(江西省大坝安全管理中心、江西省水资源管理中心) Target detection method based on machine learning guide deep learning
CN117152139A (en) * 2023-10-30 2023-12-01 华东交通大学 Patch inductance defect detection method based on example segmentation technology
CN118014947A (en) * 2024-01-30 2024-05-10 瑄立(无锡)智能科技有限公司 Rapid diagnostic system for identifying morphology of acute promyelocytic leukemia
CN118014947B (en) * 2024-01-30 2024-08-27 瑄立(无锡)智能科技有限公司 Rapid diagnostic system for identifying morphology of acute promyelocytic leukemia
CN117828406A (en) * 2024-03-01 2024-04-05 山东大学 Multi-layer single-pass welding seam size prediction method and system based on deep learning
CN117911840A (en) * 2024-03-20 2024-04-19 河南科技学院 Deep learning method for detecting surface defects of filter screen
CN117934820B (en) * 2024-03-22 2024-06-14 中国人民解放军海军航空大学 Infrared target identification method based on difficult sample enhancement loss
CN117934820A (en) * 2024-03-22 2024-04-26 中国人民解放军海军航空大学 Infrared target identification method based on difficult sample enhancement loss
CN118674985A (en) * 2024-06-21 2024-09-20 山东大学 Noise-resistant weld joint feature recognition method and system based on lightweight neural network
CN118840337A (en) * 2024-07-03 2024-10-25 江苏省特种设备安全监督检验研究院 Crane track defect identification method based on convolutional neural network
CN118429355A (en) * 2024-07-05 2024-08-02 浙江伟臻成套柜体有限公司 Lightweight power distribution cabinet shell defect detection method based on feature enhancement
CN119090877A (en) * 2024-11-06 2024-12-06 江苏中车数字科技有限公司 A method and system for visually monitoring welding operation process and quality

Also Published As

Publication number Publication date
CN113674247B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN113674247A (en) X-ray weld defect detection method based on convolutional neural network
US20230316702A1 (en) Explainable artificial intelligence (ai) based image analytic, automatic damage detection and estimation system
CN112036447A (en) Zero-sample target detection system and learnable semantic and fixed semantic fusion method
CN113111875B (en) Seamless steel rail weld defect recognition device and method based on deep learning
CN112818969A (en) Knowledge distillation-based face pose estimation method and system
CN115147418B (en) Compression training method and device for defect detection model
CN113971764A (en) Remote sensing image small target detection method based on improved YOLOv3
CN116258664A (en) Deep learning-based intelligent defect detection method for photovoltaic cell
CN113421304A (en) Intelligent positioning method for industrial radiographic negative image weld bead area
CN114757904A (en) Surface defect detection method based on AI deep learning algorithm
CN111444916A (en) License plate positioning and identifying method and system under unconstrained condition
CN114972759A (en) Remote sensing image semantic segmentation method based on hierarchical contour cost function
CN116958073A (en) Small sample steel defect detection method based on attention feature pyramid mechanism
CN116363610A (en) Improved YOLOv 5-based aerial vehicle rotating target detection method
CN114612658A (en) Image semantic segmentation method based on dual-class-level confrontation network
CN118644466A (en) Weld quality defect real-time detection method based on deep learning
CN112287895A (en) Model construction method, recognition method and system for river drain outlet detection
CN115830514B (en) Whole river reach surface flow velocity calculation method and system suitable for curved river channel
CN116934737A (en) Weld joint combination defect identification and classification method
CN117671452A (en) Construction method and system of broken gate detection model of lightweight up-sampling YOLOX
CN117218457A (en) Self-supervision industrial anomaly detection method based on double-layer two-dimensional normalized flow
CN115908379A (en) Electric cooker liner image data enhancement method based on mask generation countermeasure network
CN113642662A (en) Lightweight classification model-based classification detection method and device
CN113052799A (en) Osteosarcoma and osteochondroma prediction method based on Mask RCNN network
CN118823614B (en) Low-altitude UAV target detection algorithm based on improved SSD

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant