CN113034478A - Weld defect identification and positioning method and system based on deep learning network

Info

Publication number
CN113034478A
CN113034478A
Authority
CN
China
Prior art keywords
image
network
target
defect
positioning
Prior art date
Legal status
Granted
Application number
CN202110349482.1A
Other languages
Chinese (zh)
Other versions
CN113034478B (en)
Inventor
李砚峰
朱彦军
孙前来
李晔
Current Assignee
Taiyuan University of Science and Technology
Original Assignee
Taiyuan University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Taiyuan University of Science and Technology
Priority to CN202110349482.1A
Publication of CN113034478A
Application granted
Publication of CN113034478B
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G06F 18/232 - Non-hierarchical techniques
    • G06F 18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 - Non-hierarchical techniques with fixed number of clusters, e.g. K-means clustering
    • G06F 18/25 - Fusion techniques
    • G06F 18/253 - Fusion techniques of extracted features
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

The invention relates to the field of image recognition, and in particular to a weld defect identification and positioning method and system based on a deep learning network. The method comprises the following steps. S1: acquire radiographic images of weld defects, using part of the images as a training data set and the rest as a test data set; S2: apply normalization preprocessing to the images in the data sets; S3: construct an identification and positioning network model for identifying and locating defects in weld images; S4: detect and process the weld defect images with the identification and positioning network model; S5: train the identification and positioning network model until the required training termination condition is reached; S6: on a newly acquired test data set, identify and locate weld defects with the trained network and evaluate its detection performance. The method overcomes the low accuracy, insufficient identification and positioning precision, and low detection efficiency of traditional weld defect identification methods.

Description

Weld defect identification and positioning method and system based on deep learning network
Technical Field
The invention relates to the field of image recognition, in particular to a weld defect recognition and positioning method and system based on a deep learning network.
Background
Radiographic defect image identification is an important nondestructive inspection method. In radiographic weld defect image identification, manual online inspection is affected by the subjective experience of quality inspectors and, when the inspection workload is large, is prone to missed detections and false detections, which compromises the accuracy of the results.
To address these problems, researchers have explored artificial intelligence algorithms for automatically identifying welding defects in radiographic weld images. Current methods for identifying and locating defects in radiographic weld images fall into two main classes: those based on traditional neural network algorithms and those based on deep learning convolutional neural network (CNN) algorithms. Traditional neural network approaches first segment the image to separate the weld region, then extract and screen features such as the geometric dimensions and texture of the weld defects based on manual experience, and finally feed these feature parameters into a traditional neural network to identify and locate the weld defects in the radiographic image. Typical algorithms of this class include SVM, BP, ANFIS, AdaBoost, RBF, PCA, ANN, and the multilayer perceptron (MLP). Their main problems are that radiographic images are complex and hard to segment accurately, feature extraction and screening are affected by human factors, the rich information contained in the image cannot be fully exploited, and the diversity of the images cannot be comprehensively represented.
Deep learning CNN methods take the image directly as input, require no manual feature extraction, and automatically learn the complex deep features in radiographic weld defect images. Based on these deep features, and thanks to their good fault tolerance, parallelism, and generalization capability, CNNs can locate defect targets and classify each of them, achieving true end-to-end identification and positioning from the raw radiographic weld image to the output of weld defect classes and defect positions.
Current deep learning target identification and positioning algorithms divide into one-stage and two-stage methods. One-stage methods directly classify and localize the generated anchors, whereas two-stage methods first generate candidate regions (region proposals) and then map them onto the feature map for classification and localization. Two-stage methods are therefore time-consuming, and their performance struggles to meet real-time detection requirements. One-stage methods, represented by the YOLO series and SSD algorithms, are markedly more efficient than two-stage methods, but still suffer from low accuracy and recall and insufficient identification and positioning precision, making it difficult to meet the accuracy and real-time requirements of weld defect detection.
Disclosure of Invention
To overcome the problems in the prior art, the invention provides a weld defect identification and positioning method and system based on a deep learning network, which overcome the low accuracy, insufficient identification and positioning precision, and low detection efficiency of traditional weld defect identification methods.
The technical scheme provided by the invention is as follows:
a weld defect identification and positioning method based on a deep learning network comprises the following steps:
s1: acquiring radiographic images of weld defects covering porosity, slag inclusion, crack, lack-of-fusion, and incomplete-penetration defects, with part of the radiographic images used as a training data set and the rest as a test data set;
s2: applying normalization preprocessing to the images in the test or training data set, so that weld defect image blocks of uniform resolution are obtained after preprocessing;
s3: constructing an identification and positioning network model for identifying and positioning the welding seam defect image; the identification and positioning network model comprises a feature extraction module, a target detection module and an output module;
s4: detecting and processing the preprocessed weld defect images with the identification and positioning network model and outputting a prediction result; the process comprises the following steps:
s41: extracting the shallow and deep features from the weld defect image with the feature extraction module of the identification and positioning network;
s42: reconstructing the original image from the features extracted at different layers with the target detection module and an image gradient ascent method, obtaining low-level features rich in detail information and high-level features rich in semantic information, and realizing transverse and longitudinal short connections through the FPN backbone and bottom-up path augmentation in the target detection module;
s43: extracting features from the CSPDens_block in the feature extraction module, applying a two-fold upsampling operation to branches of different resolutions, concatenating each upsampled feature layer with a shallow feature layer, and performing independent detection on the fused feature maps at multiple scales;
s44: introducing an anchor mechanism into the YOLO layer of the output module and obtaining anchor values with the K-means clustering method, so that parameters better matched to the objects to be detected are available in the initialization stage of network training; finally, combining the target positions and class information extracted at the different scales with a non-maximum suppression (NMS) algorithm to obtain the final detection result;
s5: adjusting the parameters of network model training, and training the identification and positioning network model of step S3 with the method of step S4 and the preprocessed training data set obtained in step S1 until the required training termination condition is reached;
s6: on a newly acquired test data set, identifying and locating weld defects with the network trained in step S5 and evaluating the detection performance of the network module.
Further, in step S2, the image preprocessing method is as follows: the acquired original image is cropped into 320×320-pixel blocks that serve as input image blocks; for original images whose width or height is not an integer multiple of 320, cropping is completed by partially retaining overlapping regions in the image so that all cropped blocks keep the same size; finally, all image blocks belonging to the same original image are numbered sequentially.
Furthermore, the identification and positioning network model takes the whole image as input data; the input image is divided into N×N grids, each grid cell is responsible for detecting targets whose center points fall within it, and the generated anchors are directly classified and localized; wherein
the feature extraction module adopts a CSPDens_block module that combines the CSPNet and DenseNet networks and applies it in the backbone network to extract features from the radiographic weld defect image; the target detection module adopts the FPN backbone and bottom-up path augmentation of PANet to fuse shallow and deep features; the output module adopts the YOLO layer of YOLOv4 to classify and regress multi-scale targets; and NMS processing is applied to the computed high-confidence bounding boxes to obtain the final detection result.
Further, in step S41, the processing procedure of the CSPDens_block module in the feature extraction stage is as follows:
S411: the CSPDens_block divides the feature map produced by the previous convolution layer into 2 parts; one part passes through a Dense module, and the other part is directly concatenated (Concat) with the Dense output, expanding the feature map connections so that gradient flow propagates along different network paths;
S412: the Dense module realizes cross-layer feature information transfer: feature information skips part of the network layers and is passed directly to subsequent layers, letting the network learn more inter-layer feature relations; further, building on ResNet, DenseNet establishes dense connections between each layer and all preceding layers, realizing feature reuse;
the channel connection is computed as

x_l = H_l([x_0, x_1, ..., x_{l-1}])

where [x_0, x_1, ..., x_{l-1}] denotes the concatenated output feature maps of layers 0, ..., l-1, and H_l denotes the channel merge operation, comprising a 3×3 convolution, BN, and Leaky ReLU;
S413: separable convolution replaces conventional convolution; it decomposes a complete convolution into two steps, depthwise convolution and pointwise convolution. The depthwise convolution operates entirely in the two-dimensional plane, with the number of filters equal to the depth of the previous layer, while the pointwise convolution uses a 1×1 convolution kernel to weight and combine the depthwise output feature maps along the depth direction.
Further, in step S43, the scale fusion process of the target detection stage is as follows:
S431: the multi-scale detection module of YOLOv4 is improved by expanding the original 3 scales to 4 scales;
S432: the original input size is 320×320; the resolution and number of convolution kernels of the Dense module operations in the CSPDens_block are, in sequence, 160×160 with 32 kernels, 80×80 with 64, 40×40 with 128, and 20×20 with 256; each branch of the target detection module performs detection on the feature maps after CSPDens_block multi-scale fusion;
S433: the Dense module operations and convolution kernels in layers 2, 3, 4, and 5 of the CSPDens_block are reduced by 1/2 relative to YOLO; a two-fold upsampling operation is applied to the branches with resolutions 10×10, 20×20, and 40×40, each upsampled feature layer is concatenated with a shallow feature layer, and independent detection is performed on the fused feature maps at the 4 scales;
S434: the improved multi-scale fusion is extended to predict the targets to be detected on the four scale feature maps of 10×10, 20×20, 40×40, and 80×80, learning position features from the shallow feature layers and performing accurate fine-grained detection on the deep features after fusion and upsampling.
Further, in step S44, when dimension clustering is performed again with the K-means algorithm, the IOU values between the anchor boxes and the ground truth must be made as large as possible; the distance-measure objective therefore uses DIOU, the ratio of the intersection and union of the predicted and real bounding boxes, as the metric:

d(targ_box, cent) = 1 - DIOU(targ_box, cent)

where targ_box is the target box of the sample label, cent is the clustering center, d is the metric distance, and DIOU denotes the ratio of the intersection and union of the predicted bounding box and the real bounding box.
Further, in the target detection stage, the position in the original image of a defect detected in an image block is determined through coordinate conversion. Any image block is divided into S×S grids, and each grid predicts B rectangular bounding boxes containing target defects and C probability values of belonging to particular classes; each rectangular bounding box contains 5 data values, namely (x, y, w, h, confidence), where (x, y) is the offset of the center of the rectangular bounding box relative to its grid cell, (w, h) are the width and height of the bounding box, and confidence is the confidence that the target in the grid belongs to a certain defect class.
Then, for the S×S grids into which an image of width W and height H is divided, let the coordinates of a grid in the image be (x_i, y_j), where x_i and y_j range over [0, S-1], and let the predicted bounding box center be (x_c, y_c); the final predicted position (x, y) is normalized as

x = x_c · S / W - x_i
y = y_c · S / H - y_j

The confidence value represents the probability that the bounding box contains a target together with the degree of overlap between the current bounding box and the real bounding box, computed as

confidence = P_r(obj) × DIOU

where P_r(obj) indicates whether a target defect exists in the grid: P_r(obj) = 1 if it does, and P_r(obj) = 0 if it does not; DIOU denotes the ratio of the intersection and union of the predicted and real bounding boxes.
The output probability P of each grid prediction is

P = P_r(class_i | obj) × P_r(obj) × DIOU = P_r(class_i) × DIOU

where P_r(obj) is the probability that a target defect is present in the grid, P_r(class_i | obj) is the conditional probability that the grid contains a defect of target class i, and P_r(class_i) is the probability of a class-i target defect; DIOU denotes the ratio of the intersection and union of the predicted and real bounding boxes.
Further, in step S5, during network model training, Leaky ReLU is used as the activation function, and its coefficient for x ≤ 0 is adjusted to 0.01 according to the characteristics of the detected targets:

f(x) = x for x > 0; f(x) = 0.01x for x ≤ 0

The loss function of the training network comprises three parts: bounding box loss, confidence loss, and classification loss:

loss = loss_coord + loss_conf + loss_class

where loss_coord, the bounding box loss, is

loss_coord = λ_coord · Σ_{i=0}^{S×S} Σ_{j=0}^{B} I_{ij}^{obj} · (2 - w_i·h_i) · [(x_i - x̂_i)² + (y_i - ŷ_i)² + (w_i - ŵ_i)² + (h_i - ĥ_i)²]

in which x̂_i, ŷ_i, ŵ_i, ĥ_i are the abscissa, ordinate, width, and height of the center of the real target bounding box; x_i, y_i, w_i, h_i are the abscissa, ordinate, width, and height of the predicted target bounding box; S×S is the number of grids the image is divided into; and B is the number of bounding boxes predicted per grid. I_{ij}^{obj} judges whether the i-th grid in which the j-th bounding box lies is responsible for detecting the defect; if so, the grid whose DIOU value with the real bounding box is largest is chosen to detect the defect. λ_coord is the coordinate prediction penalty coefficient: when the network traverses the whole image, not every grid contains a target defect, and the confidence of a grid without one is 0, so the training gradients span a large range and the final model becomes unstable; to solve this problem, the hyperparameter λ_coord is set in the loss function to control the loss of the predicted target box positions. The factor (2 - w_i·h_i) is an adjustment parameter for the convergence rate of network training.
loss_conf, the confidence loss, is

loss_conf = Σ_{i=0}^{S×S} Σ_{j=0}^{B} I_{ij}^{obj} (c_i - ĉ_i)² + λ_noobj · Σ_{i=0}^{S×S} Σ_{j=0}^{B} I_{ij}^{noobj} (c_i - ĉ_i)²

where ĉ_i is the true confidence that the target defect in the i-th grid belongs to a certain class, c_i is the predicted confidence, I_{ij}^{noobj} indicates that the j-th bounding box of the i-th grid contains no target defect, and λ_noobj is the penalty coefficient for confidence when a grid contains no detection target.
loss_class, the classification loss, is

loss_class = Σ_{i=0}^{S×S} I_i^{obj} Σ_{c∈classes} (p̂_i(c) - p_i(c))²

where c is the predicted target defect class, p̂_i(c) is the true probability that the object in the i-th grid belongs to defect class c, p_i(c) is the predicted probability, and I_i^{obj} indicates whether the i-th grid is responsible for the target defect.
The invention also comprises a weld defect identification and positioning system based on a deep learning network, which adopts the above weld defect identification and positioning method to identify and locate weld defects in weld images and give prediction results. The system comprises an image acquisition module, an image preprocessing module, and an identification and positioning network module.
The image acquisition module acquires radiographic images of weld defects covering porosity, slag inclusion, crack, lack-of-fusion, and incomplete-penetration defects, uses the images as a training set or test set, and on that basis supports training the system or completing the task of identifying and locating weld defects in the images.
The image preprocessing module applies normalization preprocessing to the images in the training or test set, so that weld defect image blocks of uniform resolution are obtained after preprocessing.
The identification and positioning network module takes the processed image as input data, divides the input image into N×N grids, makes each grid cell responsible for detecting targets whose center points fall within it, and directly classifies and localizes the generated anchors. The module comprises a feature extraction submodule, a target detection submodule, and an output submodule. The feature extraction submodule adopts a CSPDens_block module combining the CSPNet and DenseNet networks, applied in the backbone network to extract features from the radiographic weld defect image; the target detection submodule adopts the FPN backbone and bottom-up path augmentation of PANet to fuse shallow and deep features; the output submodule adopts the YOLO layer of YOLOv4 to classify and regress multi-scale targets; and NMS processing is applied to the computed high-confidence bounding boxes to obtain the final detection result.
The invention also comprises a weld defect identification and positioning terminal based on a deep learning network, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the above weld defect identification and positioning method based on the deep learning network.
The welding seam defect identification and positioning method based on the deep learning network has the following beneficial effects:
1. The invention provides a one-stage weld defect identification and positioning method based on a deep learning network, which feeds the whole image into the network and directly marks the position and class of each target defect on the output image. The YOLO network is improved by means of a feature pyramid, reduced network depth, skip-connection convolution blocks, the K-means algorithm, and other measures, improving the accuracy and speed of weld defect identification and positioning. The method is capable of online, real-time identification and positioning of weld defects, which increases its engineering application value.
2. The method improves the original YOLO network and raises the algorithm's identification and positioning precision for weld defect targets. The CSPDense block and separable convolution effectively improve the feature extraction capability; upward concatenation operations make full use of shallow feature information and fuse deep semantic features, strengthening the representational capability of the feature pyramid while removing unnecessary convolutions and modules and greatly reducing the network's computational load.
3. Compared with two-stage target detection algorithms, the proposed method improves weld defect detection accuracy and recall to a certain extent; compared with the original YOLO algorithm, it improves detection speed and identification precision, and it can meet the accuracy and real-time requirements of weld defect detection.
Drawings
FIG. 1 is a flowchart of the weld defect identification and positioning method based on a deep learning network in embodiment 1;
FIG. 2 is a schematic structural diagram of the identification and positioning network model in embodiment 1;
FIG. 3 is a schematic structural diagram of the CSPDens_block module in embodiment 1, where part (a) shows the processing flow of the CSPDens_block, part (b) shows the processing flow of separable convolution, and part (c) shows the Dense module realizing cross-layer feature information transfer;
FIG. 4 shows partial image samples of the weld defect radiographic images acquired in embodiment 2;
FIG. 5 is the distribution curve of the K-means cluster analysis results in embodiment 2;
FIG. 6 shows the curves of loss function value versus iteration count for the proposed method and the control group during network training in embodiment 2;
FIG. 7 shows the average intersection-over-union curves of the proposed method and the control group during network training in embodiment 2;
FIG. 8 compares the defect detection results of the proposed method and the control group on the same input images in embodiment 2;
FIG. 9 is a block diagram of the weld defect identification and positioning system based on a deep learning network provided in embodiment 3;
Reference numerals:
1. image acquisition module; 2. image preprocessing module; 3. identification and positioning network module; 31. feature extraction submodule; 32. target detection submodule; 33. output submodule.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
As shown in FIG. 1, this embodiment provides a weld defect identification and positioning method based on a deep learning network, comprising the following steps:
S1: acquire radiographic images of weld defects covering porosity, slag inclusion, crack, lack-of-fusion, and incomplete-penetration defects, using part of the radiographic images as a training data set and the rest as a test data set.
S2: apply normalization preprocessing to the images in the test or training data set, obtaining weld defect image blocks of uniform resolution after preprocessing.
The image preprocessing method is as follows: the acquired original image is cropped into 320×320-pixel blocks that serve as input image blocks; for original images whose width or height is not an integer multiple of 320, cropping is completed by partially retaining overlapping regions in the image so that all cropped blocks keep the same size; finally, all image blocks belonging to the same original image are numbered sequentially.
S3: and constructing an identification and positioning network model for identifying and positioning the welding seam defect image.
The convolutional neural network based on deep learning can learn and extract shallow features such as rich edges and textures in an image through convolution and downsampling operations, and can also learn and extract deep features such as structures and semantics. However, when the network depth is large and the down-sampling operation is too many, the detail information of the image is lost, so that part of the small target features disappear, and the identification and positioning effects on the small target are poor. The objects identified and positioned in the invention not only comprise small target weld defects such as air holes, slag inclusion, fine cracks and the like, but also comprise larger weld defects such as incomplete fusion, incomplete penetration and the like. Therefore, the design of the identification positioning network model fully considers the large difference of the defect scale.
Based on the reasons, the welding seam defect image identification and positioning network constructed in the embodiment is composed of a feature extraction module, a target detection module and an output module.
In the deep learning neural network, CSPNet has the advantages of less parameters, small calculated amount, strong generalization performance, capability of effectively improving the characteristic learning capability of the network and the like, and DenSnNet can effectively relieve gradient dispersion so that information is spread more smoothly in the front-back direction.
The target detection module adopts FPN background and Bottom-up path augmentation in the PANet to realize the fusion of shallow features and deep features; so as to improve the accuracy of restoring the characteristic image pixels when identifying and positioning.
The output module adopts a YOLO layer in YOLOv4 to realize the classification and regression of the multi-scale target; and performing NMS processing on the boundary box with higher confidence coefficient obtained by calculation to obtain a final detection result.
The overall structure of the network model is shown in FIG. 2, in which Conv represents Convolution, Concat represents feature map connection, and CBL represents Convolume, Batch Normalization, and Leaky ReLU.
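For reference, the NMS step applied to the high-confidence boxes can be sketched as a generic greedy suppression. The patent scores overlap with DIOU, so the plain IoU used in this NumPy sketch is a simplifying assumption.

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.45):
    """Greedy non-maximum suppression.
    boxes: (N, 4) array of (x1, y1, x2, y2); scores: (N,) confidences.
    Returns indices of the boxes kept, highest confidence first."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the top-scoring box with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        # Drop every remaining box that overlaps the kept box too strongly
        order = rest[iou <= iou_thresh]
    return keep
```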
S4: detect and process the preprocessed weld defect images with the identification and positioning network model and output a prediction result; the process comprises the following steps:
S41: extract the shallow and deep features from the weld defect image with the feature extraction module of the identification and positioning network.
In general, deep learning convolutional neural networks treat the input picture as a superposition of many different feature maps: multiple filters scan the features of the whole input picture, and structured deep semantic features are extracted through downsampling, so more filters and deeper networks can extract more image features. In practice, however, ever deeper networks accumulate errors, gradients dissipate, and network performance drops.
The processing procedure of the improved CSPDens_block module in the feature extraction stage of this embodiment is therefore as follows:
S411: the CSPDens_block divides the feature map produced by the previous convolution layer into 2 parts; one part passes through the Dense module, and the other part is directly concatenated (Concat) with the Dense output, expanding the feature map connections so that gradient flow propagates along different network paths. This reduces the network's computational load while obtaining richer gradient fusion information, improving both inference speed and accuracy. The structure of the CSPDens_block is shown in part (a) of FIG. 3.
S412: as shown in part (c) of FIG. 3, the Dense module realizes cross-layer feature information transfer: feature information skips part of the network layers and is passed directly to subsequent layers, letting the network learn more inter-layer feature relations. Building on ResNet, DenseNet establishes dense connections between each layer and all preceding layers, realizing feature reuse; the Dense operation lets the network learn relations between features at more levels, reducing the loss of feature information and gradient vanishing during layer-by-layer transfer and accelerating feature propagation through the network.
The Dense module does not add feature map pixels but connects channels, computed as

x_l = H_l([x_0, x_1, ..., x_{l-1}])

where [x_0, x_1, ..., x_{l-1}] denotes the concatenated output feature maps of layers 0, ..., l-1, and H_l denotes the channel merge operation, comprising a 3×3 convolution, BN, and Leaky ReLU.
S413: as the number of network layers grows, conventional convolution over a multi-channel input image causes a sharp increase in computation. To further improve network performance and control the inter-channel connection tensor, this embodiment therefore replaces conventional convolution with separable convolution for feature extraction. Separable convolution decomposes a complete convolution into two steps, shown in part (b) of FIG. 3: depthwise convolution and pointwise convolution. The depthwise convolution operates entirely in the two-dimensional plane, with the number of filters equal to the depth of the previous layer, while the pointwise convolution uses a 1×1 convolution kernel to weight and combine the depthwise output feature maps along the depth direction. Feature transfer is thus preserved while the computational load is reduced.
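A minimal PyTorch sketch of the CSPDens_block idea in S411-S413 follows: the incoming feature map is split in two, one half flows through densely connected layers built from depthwise separable convolutions, and the bypass half is concatenated back in. The layer count, growth rate, and channel widths are illustrative assumptions, not the patent's exact configuration.

```python
import torch
import torch.nn as nn

class SeparableConv(nn.Module):
    """Depthwise 3x3 conv followed by pointwise 1x1 conv,
    each with BN + Leaky ReLU (slope 0.01, as in the patent)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.LeakyReLU(0.01, inplace=True),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.01, inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class CSPDenseBlock(nn.Module):
    """Split -> dense path -> concat with the bypass half (CSP-style).
    Assumes an even channel count so the input splits cleanly in two."""
    def __init__(self, channels: int, growth: int = 32, layers: int = 2):
        super().__init__()
        half = channels // 2
        self.layers = nn.ModuleList()
        ch = half
        for _ in range(layers):
            self.layers.append(SeparableConv(ch, growth))
            ch += growth                      # dense connectivity: inputs accumulate
        # 1x1 transition back to the block's output width
        self.transition = nn.Conv2d(half + ch, channels, 1, bias=False)

    def forward(self, x):
        bypass, dense = torch.chunk(x, 2, dim=1)
        feats = [dense]
        for layer in self.layers:
            # each layer sees the concatenation of all previous outputs
            feats.append(layer(torch.cat(feats, dim=1)))
        out = torch.cat([bypass] + feats, dim=1)
        return self.transition(out)
```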
S42: reconstruct the original image from the features extracted at different layers with the target detection module and an image gradient ascent method, obtaining low-level features rich in detail information and high-level features rich in semantic information, and realize transverse and longitudinal short connections through the FPN backbone and bottom-up path augmentation in the target detection module.
Because deep neurons are activated by structural features with wider receptive fields, while shallow neurons produce feature maps activated by local edges, textures, and similar features, the short transverse and longitudinal connections realized through the FPN backbone and bottom-up path augmentation let information flow faster between deep and shallow features, further improving the network's ability to locate and detect feature structures. After this improvement, more feature information is fused, strengthening the feature pyramid's ability to represent detail, improving detection precision for small defect targets, and reducing the missed detection rate.
S43: extract features from the CSPDens_block in the feature extraction module, apply a two-fold upsampling operation to branches of different resolutions, concatenate each upsampled feature layer with a shallow feature layer, and perform independent detection on the fused feature maps at multiple scales.
The scale fusion process of the target detection stage is as follows:
S431: in radiographic weld defect image detection, small target defects often occupy only tens of pixels or fewer, and the semantic information the network can extract from so few pixels is very limited. During feature extraction, shallow features have higher resolution and carry stronger position information, while deep features carry strong semantic information but coarse position information. Reconstructing the original image from features extracted at different layers by the image gradient ascent method leads to the conclusion that low-level features rich in detail and high-level features rich in semantics jointly assist target detection best.
The traditional YOLOv4 network predicts targets on 3 feature maps of different scales, upsampling the feature maps output by the last two Residual_block stages and fusing them with same-sized shallow feature maps into effective information for prediction.
To exploit shallow features and position information more fully, this embodiment improves the multi-scale detection module of YOLOv4 by expanding the original 3 scales to 4.
S432: the original input size is 320×320; the resolution and number of convolution kernels of the Dense module operations in the CSPDens_block are, in sequence, 160×160 with 32 kernels, 80×80 with 64, 40×40 with 128, and 20×20 with 256; each branch of the target detection module performs detection on the feature maps after CSPDens_block multi-scale fusion.
S433: to reduce computation and increase detection speed, this embodiment reduces the Dense module operations and convolution kernels in layers 2, 3, 4, and 5 of the CSPDens_block by 1/2 relative to YOLO; a two-fold upsampling operation is applied to the branches with resolutions 10×10, 20×20, and 40×40, each upsampled feature layer is concatenated with a shallow feature layer, and independent detection is performed on the fused feature maps at the 4 scales.
S434: the improved multi-scale fusion is extended to predict the targets to be detected on the four scale feature maps of 10×10, 20×20, 40×40, and 80×80, learning position features from the shallow feature layers and performing accurate fine-grained detection on the deep features after fusion and upsampling.
This embodiment fuses shallow feature information at more scales, strengthening the feature pyramid's representational capability, improving detection precision for small target defects, and reducing the missed detection rate; finally, redundant boxes are removed by the non-maximum suppression algorithm.
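The four-scale fusion of S431-S434 can be sketched as the top-down half of the scheme (the patent additionally uses PANet's bottom-up path augmentation). Channel widths and the use of nearest-neighbour upsampling are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FourScaleFusion(nn.Module):
    """Top-down fusion over four backbone stages: each deeper feature map is
    2x-upsampled and concatenated with the next shallower one, yielding fused
    maps at 4 detection scales (80, 40, 20, 10 for a 320x320 input).
    Channel widths here are illustrative."""
    def __init__(self, chs=(64, 128, 256, 512)):
        super().__init__()
        # 1x1 convs to realign channel counts after each concatenation
        self.reduce = nn.ModuleList(
            [nn.Conv2d(chs[i] + chs[i + 1], chs[i], 1) for i in range(3)]
        )

    def forward(self, c2, c3, c4, c5):
        # c2..c5: shallow -> deep feature maps from the backbone
        p5 = c5
        p4 = self.reduce[2](torch.cat([c4, F.interpolate(p5, scale_factor=2)], 1))
        p3 = self.reduce[1](torch.cat([c3, F.interpolate(p4, scale_factor=2)], 1))
        p2 = self.reduce[0](torch.cat([c2, F.interpolate(p3, scale_factor=2)], 1))
        return p2, p3, p4, p5   # independent detection runs on each scale
```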
S44: an anchor mechanism is introduced into YOLO of an output module, and an anchor value is obtained by using a K-means clustering method, so that parameters more conforming to an object to be detected are obtained in an initialization stage of network training; and finally, combining the target positions and the category information extracted on different scales by adopting a maximum suppression algorithm to obtain a final detection result. The method comprises the following steps that dimension clustering is carried out again through a K-means algorithm, the IOU value of an anchor box and a ground channel needs to be made as large as possible, therefore, the ratio DIOU of the intersection and the union of a prediction boundary box and a real boundary box is used as a measurement standard for a distance measurement target function, and the formula of the measurement function is as follows:
Figure BDA0003002013620000101
in the above formula, targ _ box is the target box of the sample label, cent is the clustering center, d represents the metric distance, and DIOU represents the ratio of the intersection and union of the prediction bounding box and the real bounding box.
S5: and (4) adjusting parameters of network model training, and training the recognition positioning network model in the step S1 by adopting the method in the step S4 and the preprocessed training data set obtained in the step S3 until a required training termination condition is reached.
During network model training, Leaky ReLU is used as the activation function, and its coefficient for x ≤ 0 is adjusted to 0.01 according to the characteristics of the detected targets:

f(x) = x for x > 0; f(x) = 0.01x for x ≤ 0

The loss function of the training network comprises three parts: bounding box loss, confidence loss, and classification loss:

loss = loss_coord + loss_conf + loss_class

where loss_coord, the bounding box loss, is

loss_coord = λ_coord · Σ_{i=0}^{S×S} Σ_{j=0}^{B} I_{ij}^{obj} · (2 - w_i·h_i) · [(x_i - x̂_i)² + (y_i - ŷ_i)² + (w_i - ŵ_i)² + (h_i - ĥ_i)²]

in which x̂_i, ŷ_i, ŵ_i, ĥ_i are the abscissa, ordinate, width, and height of the center of the real target bounding box; x_i, y_i, w_i, h_i are the abscissa, ordinate, width, and height of the predicted target bounding box; S×S is the number of grids the image is divided into; and B is the number of bounding boxes predicted per grid. I_{ij}^{obj} judges whether the i-th grid in which the j-th bounding box lies is responsible for detecting the defect; if so, the grid whose DIOU value with the real bounding box is largest is chosen to detect the defect. λ_coord is the coordinate prediction penalty coefficient: when the network traverses the whole image, not every grid contains a target defect, and the confidence of a grid without one is 0, so the training gradients span a large range and the final model becomes unstable; to solve this problem, the hyperparameter λ_coord is set in the loss function to control the loss of the predicted target box positions. The factor (2 - w_i·h_i) is an adjustment parameter for the convergence rate of network training.
loss_conf, the confidence loss, is

loss_conf = Σ_{i=0}^{S×S} Σ_{j=0}^{B} I_{ij}^{obj} (c_i - ĉ_i)² + λ_noobj · Σ_{i=0}^{S×S} Σ_{j=0}^{B} I_{ij}^{noobj} (c_i - ĉ_i)²

where ĉ_i is the true confidence that the target defect in the i-th grid belongs to a certain class, c_i is the predicted confidence, I_{ij}^{noobj} indicates that the j-th bounding box of the i-th grid contains no target defect, and λ_noobj is the penalty coefficient for confidence when a grid contains no detection target.
loss_class, the classification loss, is

loss_class = Σ_{i=0}^{S×S} I_i^{obj} Σ_{c∈classes} (p̂_i(c) - p_i(c))²

where c is the predicted target defect class, p̂_i(c) is the true probability that the object in the i-th grid belongs to defect class c, p_i(c) is the predicted probability, and I_i^{obj} indicates whether the i-th grid is responsible for the target defect.
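A condensed PyTorch sketch of this three-part loss for one detection scale is given below. It assumes predictions and targets have already been matched into responsible/non-responsible masks, and it approximates the DIOU-based box terms with squared errors, so it illustrates the structure of the loss rather than the patent's exact formulation; the default penalty coefficients are conventional YOLO values, not values stated in the patent.

```python
import torch

def yolo_style_loss(pred, target, obj_mask, noobj_mask,
                    lambda_coord: float = 5.0, lambda_noobj: float = 0.5):
    """pred/target: (N, S, S, B, 5 + C) tensors laid out as
    (x, y, w, h, conf, class probs). obj_mask (float, (N, S, S, B)) selects
    boxes responsible for a ground-truth defect; noobj_mask the rest.
    Returns loss = loss_coord + loss_conf + loss_class."""
    # Bounding-box loss, weighted by the (2 - w*h) convergence factor
    scale = 2.0 - target[..., 2] * target[..., 3]
    coord_err = ((pred[..., :4] - target[..., :4]) ** 2).sum(dim=-1)
    loss_coord = lambda_coord * (obj_mask * scale * coord_err).sum()

    # Confidence loss, with a separate penalty for grids without targets
    conf_err = (pred[..., 4] - target[..., 4]) ** 2
    loss_conf = (obj_mask * conf_err).sum() \
              + lambda_noobj * (noobj_mask * conf_err).sum()

    # Classification loss on responsible grids only
    class_err = ((pred[..., 5:] - target[..., 5:]) ** 2).sum(dim=-1)
    loss_class = (obj_mask * class_err).sum()

    return loss_coord + loss_conf + loss_class
```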
S6: and based on the acquired new test data set, adopting the trained recognition and positioning network in the step S5 to recognize and position the weld defect result, and evaluating the detection performance of the network module.
In this embodiment, the identification and positioning network model takes the whole image as input data; the input image is divided into N×N grids, each grid cell is responsible for detecting targets whose center points fall within it, and the generated anchors are directly classified and localized.
Weld defect images contain target defects of different scales and types, and different target defects have different features; fusing deep and shallow feature information and spatial information reflects the true characteristics of weld defects more accurately. During feature extraction, target features are extracted from the image blocks through multi-channel convolution and downsampling operations, and the network then accurately identifies and locates both simple and complex targets of various kinds. The network of this embodiment therefore realizes feature fusion by upsampling and concatenating feature maps from different layers.
Accordingly, in the target detection stage, the position in the original image of a defect detected in an image block is determined through coordinate conversion. Any image block is divided into S×S grids, and each grid predicts B rectangular bounding boxes containing target defects and C probability values of belonging to particular classes; each rectangular bounding box contains 5 data values, namely (x, y, w, h, confidence), where (x, y) is the offset of the center of the rectangular bounding box relative to its grid cell, (w, h) are the width and height of the bounding box, and confidence is the confidence that the target in the grid belongs to a certain defect class.
Then, for the S×S grids into which an image of width W and height H is divided, let the coordinates of a grid in the image be (x_i, y_j), where x_i and y_j range over [0, S-1], and let the predicted bounding box center be (x_c, y_c); the final predicted position (x, y) is normalized as

x = x_c · S / W - x_i
y = y_c · S / H - y_j

The confidence value represents the probability that the bounding box contains a target together with the degree of overlap between the current bounding box and the real bounding box:

confidence = P_r(obj) × DIOU

where P_r(obj) indicates whether a target defect exists in the grid: P_r(obj) = 1 if it does and P_r(obj) = 0 if it does not, and DIOU denotes the ratio of the intersection and union of the predicted and real bounding boxes.
The output probability P of each grid prediction is

P = P_r(class_i | obj) × P_r(obj) × DIOU = P_r(class_i) × DIOU

where P_r(obj) is the probability that a target defect is present in the grid, P_r(class_i | obj) is the conditional probability that the grid contains a defect of target class i, and P_r(class_i) is the probability of a class-i target defect; DIOU denotes the ratio of the intersection and union of the predicted and real bounding boxes.
Example 2
This embodiment provides a simulation experiment for the weld defect identification and positioning method based on the deep learning network of embodiment 1. (In other embodiments the simulation experiment may be omitted, or other experimental schemes may be used to determine how the network model and its parameters affect the method's weld identification and positioning performance.)
(I) Experimental conditions
In this embodiment, the detection experiments use the Windows 10 operating system, an Intel Core i7-8700K CPU, a GTX 1080 Ti GPU, 16 GB of memory, and TensorFlow as the deep learning framework.
The initialization parameters of network training are set as follows: maximum iterations 50000, learning rate 0.001, batch_size 32, weight decay coefficient 0.0005, and momentum constant 0.9; the learning rate and batch_size are adjusted appropriately according to the downward trend of the loss, and training stops when the loss function value falls to or below the empirical threshold.
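These initialization values map onto a standard training loop as sketched below; the model, dataset, and loss function are caller-supplied placeholders, and the default stopping threshold is an illustrative stand-in for the empirical threshold mentioned above.

```python
import torch
from torch.utils.data import DataLoader

def train(model, train_set, compute_loss, loss_threshold=0.02, max_iters=50000):
    """Training loop with the stated initialization: lr 0.001, batch 32,
    weight decay 0.0005, momentum 0.9, at most 50000 iterations."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                                momentum=0.9, weight_decay=0.0005)
    loader = DataLoader(train_set, batch_size=32, shuffle=True)
    it, done = 0, False
    while not done:
        for images, targets in loader:
            loss = compute_loss(model(images), targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            it += 1
            # stop once the loss reaches the empirical threshold
            # or the iteration budget is exhausted
            if loss.item() <= loss_threshold or it >= max_iters:
                done = True
                break
    return model
```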
(II) data set acquisition
In this embodiment, the experimental image data come from an industry-university-research cooperative enterprise. Radiographic images of the 5 common weld defects (porosity, slag inclusion, crack, lack of fusion, and incomplete penetration) were collected, 920 images of each type; for each defect, 800 images were randomly drawn for network training and the remaining 120 were used as test set images.
In addition, to keep the small training set from overfitting the network, this embodiment further applies cropping, flipping, translation, contrast adjustment, and noise perturbation to the original images, expanding the defect maps into 50162 generated images: 10076 containing porosity defects, 9847 containing slag inclusion defects, 10150 containing crack defects, 10326 containing lack-of-fusion defects, and 9763 containing incomplete-penetration defects. Part of the experimental weld X-ray image samples are shown in FIG. 4.
(III) Cluster analysis
In the one-stage target identification and positioning network of this embodiment, YOLO introduces an anchor mechanism and obtains anchor values with the K-means clustering method, so that parameters matched to the objects to be detected are available in the initialization stage of network training, reducing the deviation between initialization parameters and optimized parameters.
The number and sizes of the anchor boxes directly affect the accuracy and speed of defect target identification and positioning, so setting appropriate anchor parameters is especially important. The anchor values of the stock YOLO algorithm, however, were obtained by training on the COCO and VOC data sets and do not suit the weld defect detection studied in this embodiment. Dimension clustering is therefore redone with the K-means algorithm; the result of the cluster analysis on the labels is shown in FIG. 5.
From the analysis, the first 12 anchor values are chosen: (7, 9), (13, 17), (21, 37), (36, 52), (69, 48), (12, 48), (96, 18), (24, 265), (180, 22), (57, 258), (168, 63), (132, 265). They are assigned to the feature maps of the 4 scales according to area, larger-scale feature maps using smaller anchors, and each grid computes 3 prediction boxes.
(IV) Model test results
The defect image training set is called from the weld X-ray image database, and the YOLO network module and the CSPDensNet network module are trained; training takes 12.6 h and 13.8 h respectively. The test set images are then fed into the YOLO and CSPDensNet network modules for detection.
The loss function value (loss) curves and average intersection-over-union (IoU) curves of the two network models are compared. FIG. 6 shows the loss function value versus iteration count during network training; the average intersection-over-union curves of training are shown in FIG. 7.
The loss function values of the YOLO model over iterations on the training and test sets are shown by the curves YOLO_train and YOLO_test in FIG. 6; those of CSPDensNet are shown by the curves CSPDnet_train and CSPDnet_test.
Analysis of FIG. 6 shows that the loss function value of the CSPDensNet network model, which reflects identification accuracy on the validation set, is better than that of YOLO. The loss of the CSPDensNet model stabilizes gradually, finally reaching about 2%; the loss of the YOLO model falls quickly, but after reaching its minimum at about 4500 iterations it oscillates and rises, finally settling at about 4%.
Meanwhile, FIG. 7 shows that the average intersection-over-union between the anchor boxes and the ground truth boxes of the CSPDensNet network model is also clearly higher than that of YOLO during training.
(V) evaluation of Performance
In the field of target detection, accuracy (Precision) and Recall (Recall) are important criteria for judging and detecting the quality of a network model, in the embodiment, the two indexes and detection time are adopted to evaluate an experimental result, and a comparison experiment is performed by taking a traditional YOLOv4 network as a comparison group, so that the performances of the test network provided by the embodiment and the traditional YOLOv4 network are compared.
Precision (Precision) and recall (Recall) are defined as follows:

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)
In the above formulas, TP is the number of positive samples detected as positive, i.e., defects correctly detected and classified; FP is the number of negative samples detected as positive, i.e., detections with a wrong defect classification; and FN is the number of positive samples detected as negative, i.e., samples that actually contain defects but were not detected.
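A minimal sketch of these two metrics, assuming the counts TP, FP and FN have already been accumulated over the test set:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: 95 defects correctly detected, 5 false alarms, 4 missed defects
# precision_recall(95, 5, 4) -> (0.95, 0.9595...)
```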
The test results obtained by statistics in the simulation experiment are shown in table 1:
table 1: statistical table of weld defect detection results of network and comparison group in the embodiment
Analyzing the above test results shows that, compared with the conventional YOLOv4 algorithm, this embodiment significantly improves precision or recall in the detection of the 5 common weld defects: porosity, slag inclusion, crack, lack of fusion and incomplete penetration.
FIG. 8 shows part of the test result images from the simulation experiment in this embodiment; rows (a) to (e) are images of pore (Pore), slag inclusion (Slag), crack (Crack), lack-of-fusion (LOF) and incomplete-penetration (LOP) defects, respectively. In fig. 8, column (1) shows the cropped partial radiographic weld defect images to be detected, column (2) the detection results of the YOLOv4 algorithm, and column (3) the detection results of this embodiment.
Comparing the identification and positioning results for the pore (Pore), slag inclusion (Slag) and crack (Crack) defects in columns (2) and (3) of fig. 8 shows that YOLOv4 misses some detections, while the method of this embodiment identifies and positions the various defects with higher accuracy.
In addition, on the same data set, the method of this embodiment is compared with several classic candidate-region-based CNN target detection algorithms, using the mean average precision (mAP) over the defect classes as the evaluation index: the higher the mAP, the better the algorithm identifies and positions the various weld defects.
The mean-average-precision results of the method in this embodiment and the control algorithms are shown in table 2:

Table 2: statistics of test results of the method in this embodiment and the comparison algorithms

Algorithm | mAP (%) | Recall (%) | Detection time (ms)
R-CNN | 70.6 | 70.9 | 29500
Fast R-CNN | 80.9 | 81.7 | 2380
Faster R-CNN | 93.1 | 93.6 | 1650
YOLOv4 | 87.7 | 88.5 | 24.89
This embodiment | 94.9 | 95.7 | 19.58
As can be seen from table 2, compared with the two-stage R-CNN and Fast R-CNN algorithms, the one-stage method of this embodiment and YOLOv4 have obvious advantages in both detection speed and precision for weld defect identification and positioning. Compared with Faster R-CNN, YOLOv4 is inferior in precision and recall but far superior in detection speed.
The method of this embodiment improves clearly on YOLOv4 in precision and recall while also being faster; and compared with Faster R-CNN it retains an obvious speed advantage together with higher detection precision.
Example 3
This embodiment provides a weld defect identification and positioning system based on a deep learning network, which uses the weld defect identification and positioning method of embodiment 1 to identify and position the weld defects in a weld image and output a prediction result. The system comprises an image acquisition module 1, an image preprocessing module 2 and an identification and positioning network module 3.
The image acquisition module 1 is used for acquiring a weld defect radiographic image containing air holes, slag inclusions, cracks, unfused and incomplete penetration defects, taking the image as a training set or a test set, and completing training of a system or completing a task of identifying and positioning the weld defects in the image based on the image in the training set or the test set.
The image preprocessing module 2 is used for carrying out normalization preprocessing on the images in the training set or the testing set, so that image blocks with uniform resolution of the welding seam defect images are obtained after preprocessing.
The identification and positioning network module 3 takes the processed image as input data, divides the input image into N×N grids, makes each grid cell responsible for detecting targets whose center points fall in it, and directly classifies and positions the generated anchors. It comprises a feature extraction submodule 31, a target detection submodule 32 and an output submodule 33. The feature extraction submodule 31 adopts a CSPDens_block module combining the CSPNet and DenseNet networks, applied in the backbone network to extract features from the radiographic weld defect image; the target detection submodule 32 adopts the FPN backbone and Bottom-up path augmentation of PANet to fuse shallow and deep features; the output submodule 33 uses the YOLO layer of YOLOv4 to classify and regress the multi-scale targets, and applies NMS to the computed high-confidence bounding boxes to obtain the final detection result.
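By way of illustration only, the three modules could be chained as in the following sketch; the module objects and their method names (read, tile, predict) are hypothetical stand-ins, not the patented implementation:

```python
class WeldDefectSystem:
    """Illustrative wiring of the three modules described above."""

    def __init__(self, acquisition, preprocessor, detector):
        self.acquisition = acquisition    # image acquisition module (1)
        self.preprocessor = preprocessor  # normalization / tiling module (2)
        self.detector = detector          # identification and positioning network (3)

    def run(self, source):
        image = self.acquisition.read(source)   # radiographic weld image
        tiles = self.preprocessor.tile(image)   # uniform-resolution image blocks
        # each tile is detected independently; results keep the tile index so
        # box coordinates can later be mapped back into the original image
        return [(i, self.detector.predict(t)) for i, t in tiles]
```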
Example 4
The embodiment provides a weld defect identification and positioning terminal based on a deep learning network, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the weld defect identification and positioning method based on the deep learning network as in embodiment 1.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent substitutions and improvements made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A weld defect identification and positioning method based on a deep learning network is characterized by comprising the following steps:
s1: acquiring a weld defect radiographic image containing pores, slag inclusion, cracks, unfused and incomplete penetration defects, wherein part of the radiographic image is used as a training data set, and the rest of the radiographic image is used as a test data set;
s2: carrying out normalization preprocessing on the images in the test data set or the training data set to obtain image blocks of uniform resolution of the weld defect images after preprocessing;
s3: constructing an identification and positioning network model for identifying and positioning the welding seam defect image; the identification positioning network model comprises a feature extraction module, a target detection module and an output module;
s4: detecting and processing the preprocessed welding seam defect image by using the recognition positioning network model, and outputting a prediction conclusion; the process comprises the following steps:
s41: extracting shallow features and deep features in the welding seam defect image by using a feature extraction module in the identification positioning network;
s42: reconstructing the extracted features of different layers and the original image by using an object detection module and an image gradient ascent method to obtain low-layer features rich in detail information and high-layer features rich in semantic information, and realizing transverse and longitudinal short connection through FPN backbone and Bottom-up path augmentation in the object detection module;
s43: extracting features from the CSPDens_block in the feature extraction module, performing two-fold up-sampling on the branches of different resolutions, cascading the up-sampled feature layers with the shallow feature layers, and independently detecting the fused feature maps of the multiple scales;
s44: introducing an anchor mechanism into the YOLO of the output module and obtaining the anchor values by the K-means clustering method, so that parameters better matching the objects to be detected are obtained in the initialization stage of network training; finally, combining the target positions and category information extracted at the different scales by a non-maximum suppression (NMS) algorithm to obtain the final detection result;
s5: adjusting parameters of network model training, and training the recognition positioning network model in the step S3 by adopting the method in the step S4 and the preprocessed training data set obtained in the step S1 until a required training termination condition is reached;
s6: and based on the acquired new test data set, adopting the trained recognition and positioning network in the step S5 to recognize and position the weld defect result, and evaluating the detection performance of the network module.
2. The weld defect identification and positioning method based on the deep learning network as claimed in claim 1, wherein: in step S2, the image preprocessing method is: the acquired original images are cut into 320×320-pixel image blocks as input; for original images whose width or height is not a multiple of 320, the cutting is completed by partially retaining an overlapped region in the image, so that all cut image blocks keep the same specification; finally, all image blocks belonging to the same original image are numbered in sequence.
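A minimal sketch of such a tiling scheme, assuming the original image is an array of at least 320 pixels in each dimension; the helper name is illustrative:

```python
import numpy as np

def tile_image(img, size=320):
    """Cut an H x W (x C) array into size x size blocks; the last row/column of
    tiles is shifted inward so it overlaps its neighbour instead of running past
    the border. Returns (index, tile) pairs, numbered in sequence."""
    h, w = img.shape[:2]
    ys = list(range(0, h - size + 1, size))
    xs = list(range(0, w - size + 1, size))
    if ys[-1] + size < h:
        ys.append(h - size)  # overlapping final row of tiles
    if xs[-1] + size < w:
        xs.append(w - size)  # overlapping final column of tiles
    tiles = [img[y:y + size, x:x + size] for y in ys for x in xs]
    return list(enumerate(tiles))

# e.g. a 700 x 900 image yields tiles at rows 0, 320, 380 and cols 0, 320, 580
# tiles = tile_image(np.zeros((700, 900)))
```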
3. The weld defect identification and positioning method based on the deep learning network as claimed in claim 2, characterized in that: the identification and positioning network model takes the whole image as input data, divides the input image into N×N grids, makes each grid cell responsible for detecting targets whose center points fall in it, and directly classifies and positions the generated anchors; the feature extraction module adopts a CSPDens_block module combining the CSPNet and DenseNet networks, applied in the backbone network to extract features from the radiographic weld defect image; the target detection module adopts the FPN backbone and Bottom-up path augmentation of PANet to fuse shallow and deep features; the output module uses the YOLO layer of YOLOv4 to classify and regress the multi-scale targets; and NMS is applied to the computed high-confidence bounding boxes to obtain the final detection result.
4. The weld defect identification and positioning method based on the deep learning network as claimed in claim 3, wherein: in step S41, the processing procedure of the CSPDens_block module in the feature extraction stage is as follows:
s411: the CSPDens_block divides the feature map obtained by the convolution of the previous layer into 2 parts, one part passing through the Dense module and the other being connected directly to the output of the Dense module, realizing a connection expansion of the feature map so that the gradient flow propagates along different network paths;
s412: cross-layer feature information transmission is realized through the Dense module, which skips part of the network layers and passes feature information directly to subsequent layers, so that the network learns more inter-layer feature relations; further, on the basis of ResNet, DenseNet establishes dense connections between each layer and all preceding layers, realizing feature reuse;
the channel connection is calculated as:

x_l = H_l([x_0, x_1, ..., x_{l-1}])

in the above formula, [x_0, x_1, ..., x_{l-1}] is the concatenation of the output feature maps of layers 0, ..., l-1, and H_l denotes the channel-merging operation, comprising a 3×3 convolution, BN and Leaky ReLU;
s413: traditional convolution is replaced by separable convolution, which decomposes a complete convolution operation into two steps, Depthwise Convolution and Pointwise Convolution; Depthwise Convolution is performed entirely within a two-dimensional plane, with the number of filters equal to the depth of the previous layer, and Pointwise Convolution then uses 1×1 convolution kernels to weight and combine the feature maps output by the Depthwise Convolution in the depth direction.
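The following PyTorch sketch illustrates the two constructions of this claim, a depthwise-separable convolution and a CSP-style split around a densely connected sub-block; the layer count and channel numbers are placeholders, not the patented configuration:

```python
import torch
import torch.nn as nn

class SeparableConv(nn.Module):
    """Depthwise convolution over each channel plane, then a 1x1 pointwise
    convolution that recombines the planes in the depth direction."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.LeakyReLU(0.01)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class CSPDenseBlock(nn.Module):
    """CSP split: one half of the channels passes through a small densely
    connected stack, the other half is concatenated straight to its output."""
    def __init__(self, channels, growth, layers=2):
        super().__init__()
        half = channels // 2
        self.convs = nn.ModuleList()
        c = half
        for _ in range(layers):
            self.convs.append(SeparableConv(c, growth))
            c += growth  # dense connectivity: later layers see all earlier outputs
        self.out_channels = (channels - half) + c

    def forward(self, x):
        part1, part2 = torch.chunk(x, 2, dim=1)
        feats = [part2]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat([part1] + feats, dim=1)  # cross-stage concatenation
```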
5. The weld defect identification and positioning method based on the deep learning network as claimed in claim 4, wherein: in step S43, the scale fusion process in the target detection stage is as follows:
s431: the multi-scale detection module in YOLOv4 is improved, expanding the original 3 scales to 4;
s432: with an original input size of 320×320, the resolutions and numbers of convolution kernels of the Dense-module operations in the CSPDens_block are, in sequence, 160×160 with 32; 80×80 with 64; 40×40 with 128; and 20×20 with 256; each branch of the target detection module detects the feature maps after CSPDens_block multi-scale fusion;
s433: relative to YOLO, the operations and convolution kernels of the Dense module in the CSPDens_block at layers 2, 3, 4 and 5 are reduced by 1/2; the branches with resolutions of 10×10, 20×20 and 40×40 are up-sampled by a factor of two, the up-sampled feature layers are cascaded with the shallow feature layers, and the fused feature maps of the 4 scales are detected independently;
s434: the improved multi-scale fusion is thereby expanded to predict the targets to be detected on the four scale feature maps of 10×10, 20×20, 40×40 and 80×80, learning position features from the shallow feature layers and performing accurate fine-grained detection on the deep features after fusion and up-sampling.
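One such fusion step could be sketched as follows, assuming deep and shallow are feature maps whose spatial resolutions differ by a factor of two:

```python
import torch
import torch.nn.functional as F

def fuse_scales(deep, shallow):
    """Two-fold upsample the deeper (lower-resolution) feature map and
    concatenate it with the shallow map along the channel axis (sketch)."""
    up = F.interpolate(deep, scale_factor=2, mode="nearest")
    return torch.cat([up, shallow], dim=1)

# For a 320x320 input: fusing the 10x10 branch into the 20x20 one, the
# 20x20 into the 40x40 one, and the 40x40 into the 80x80 one yields the
# four fused maps that are detected independently.
```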
6. The weld defect identification and positioning method based on the deep learning network as claimed in claim 5, wherein: in step S44, dimension clustering is performed anew with the K-means algorithm, and the IOU values of the anchor boxes and the ground truth boxes should be as large as possible; the distance-measurement objective therefore uses DIOU, the ratio of the intersection and union of the predicted bounding box and the real bounding box, as the measurement standard, with the metric function:

d(targ_box, cent) = 1 - DIOU(targ_box, cent)

in the above formula, targ_box is the target box of the sample label, cent is the clustering center, d is the metric distance, and DIOU is the ratio of the intersection and union of the predicted and real bounding boxes.
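Expressed as code, the metric of this claim could look like the following sketch, with boxes given as (w, h) pairs anchored at a shared corner (in which case the intersection-over-union ratio is all that remains of DIOU):

```python
def diou_distance(targ_box, cent):
    """d(targ_box, cent) = 1 - DIOU(targ_box, cent) for (w, h) pairs."""
    inter = min(targ_box[0], cent[0]) * min(targ_box[1], cent[1])
    union = targ_box[0] * targ_box[1] + cent[0] * cent[1] - inter
    return 1.0 - inter / union
```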
7. The weld defect identification and positioning method based on the deep learning network as claimed in claim 6, wherein: in the target detection stage, the position in the original image of a defect detected in an image block is determined by coordinate conversion; any image block is divided into S×S grids, and each grid predicts B rectangular bounding boxes containing the target defect and C probability values of belonging to a certain category; each rectangular bounding box contains 5 data values, namely (x, y, w, h, confidence), where (x, y) is the offset of the center of the rectangular bounding box relative to the grid cell, (w, h) are the width and height of the rectangular bounding box, and confidence is the confidence that the target in the grid belongs to a certain class of defects;
then, for the S×S grids into which an image of width W and height H is divided, let the coordinates of a grid cell in the image be (x_i, y_j), where x_i and y_j take values in 0, ..., S-1, and let the coordinates of the center point of the predicted bounding box be (x_c, y_c); the final predicted position (x, y) is then normalized as follows:

x = (x_c / W) × S - x_i

y = (y_c / H) × S - y_j
the confidence value represents both the probability that the bounding box contains a target and the degree of coincidence between the current bounding box and the real bounding box, and is calculated as:

confidence = Pr(obj) × DIOU

in the above formula, Pr(obj) indicates whether a target defect exists in the grid: Pr(obj) = 1 if it does, and Pr(obj) = 0 if not; DIOU is the ratio of the intersection and union of the predicted bounding box and the real bounding box;
the output probability P of each grid prediction is:

P = Pr(class_i | obj) × Pr(obj) × DIOU = Pr(class_i) × DIOU

in the above formula, Pr(obj) is the probability that a target defect exists in the grid, Pr(class_i | obj) is the conditional probability that the grid contains a defect of the i-th target class, and Pr(class_i) is the probability of the i-th class of target defect; DIOU is the ratio of the intersection and union of the predicted bounding box and the real bounding box.
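A small sketch of the decoding formulas of this claim, with variable names following the notation above:

```python
def decode_center(xc, yc, W, H, S, xi, yj):
    """Normalized offsets (x, y) of a predicted center (xc, yc) relative to
    grid cell (xi, yj) of the S x S grid over a W x H image."""
    return xc / W * S - xi, yc / H * S - yj

def box_confidence(pr_obj, diou):
    """confidence = Pr(obj) * DIOU; pr_obj is 1 if the cell holds a target, else 0."""
    return pr_obj * diou

def class_score(pr_class_given_obj, pr_obj, diou):
    """P = Pr(class_i | obj) * Pr(obj) * DIOU."""
    return pr_class_given_obj * pr_obj * diou
```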
8. The weld defect identification and positioning method based on the deep learning network as claimed in claim 7, wherein:
in step S5, Leaky ReLU is used as the activation function during network model training, with the coefficient for x ≤ 0 adjusted to 0.01 according to the detected target features:

f(x) = x,      x > 0
f(x) = 0.01x,  x ≤ 0
the loss function of the training network is defined to comprise three parts, the bounding-box loss, the confidence loss and the classification loss:

loss = loss_coord + loss_conf + loss_class
where loss_coord, the bounding-box loss function, is expressed as:

loss_coord = λ_coord · Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} · γ(θ1, θ2) · [ (x_i - x̂_i)² + (y_i - ŷ_i)² + (√w_i - √ŵ_i)² + (√h_i - √ĥ_i)² ]

in the above formula, x̂_i, ŷ_i, ŵ_i, ĥ_i are the abscissa, ordinate, width and height of the center of the real target bounding box, and x_i, y_i, w_i, h_i are those of the predicted target bounding box; S×S is the number of divided grids and B the number of predicted bounding boxes per grid; 1_{ij}^{obj} judges whether the i-th grid containing the j-th bounding box is responsible for detecting the defect, the grid whose bounding box has the largest DIOU value with the real bounding box being selected as responsible; λ_coord is the coordinate-prediction penalty coefficient: when the network traverses the whole image, not every grid contains a target defect, and the confidence of a grid without one is 0, so the training gradient spans too widely and the final model becomes unstable; to solve this problem, the hyperparameter λ_coord is set in the loss function to control the loss of the predicted target-box positions; γ(θ1, θ2) is an adjustment parameter for the convergence speed of network training, where θ1 and θ2 are initial parameters set during network training;
the lossconfThe confidence loss function is expressed by the following calculation formula:
Figure FDA0003002013610000047
in the above formula:
Figure FDA0003002013610000048
representing the true confidence that the target defect in the ith mesh belongs to a certain class, ciIn order to predict the degree of confidence,
Figure FDA0003002013610000049
the jth bounding box of the ith grid does not contain the target defect, lambdanoobjA penalty coefficient for representing confidence when the grid does not contain the detection target;
the lossclassThe classification loss function is expressed by the formula:
Figure FDA0003002013610000051
in the above formula: c represents a predicted target defect class,
Figure FDA0003002013610000052
representing the true probability value, p, that the object in the ith grid belongs to a certain class of defectsi(c) Indicating the predicted probability values for objects in the ith mesh belonging to a certain class of defects,
Figure FDA0003002013610000053
indicating whether the ith mesh is responsible for the target defect.
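Putting the three terms together, a NumPy sketch of the composite loss might look as follows; predictions and labels are assumed stored in dicts of equally shaped arrays, gamma stands in for the θ1/θ2 convergence-speed factor whose exact form the claim leaves open, and the λ values are illustrative:

```python
import numpy as np

def yolo_style_loss(pred, true, box_mask, noobj_mask, cell_mask,
                    lambda_coord=5.0, lambda_noobj=0.5, gamma=1.0):
    """Sketch of loss = loss_coord + loss_conf + loss_class as defined above.
    box_mask   (S, S, B): 1 where a box is responsible for a target (1_ij^obj)
    noobj_mask (S, S, B): 1 where a box holds no target             (1_ij^noobj)
    cell_mask  (S, S)   : 1 where a grid cell holds a target        (1_i^obj)
    gamma: placeholder for the theta1/theta2 convergence-speed factor."""
    coord = lambda_coord * np.sum(box_mask * gamma * (
        (pred["x"] - true["x"]) ** 2 + (pred["y"] - true["y"]) ** 2 +
        (np.sqrt(pred["w"]) - np.sqrt(true["w"])) ** 2 +
        (np.sqrt(pred["h"]) - np.sqrt(true["h"])) ** 2))
    conf = np.sum(box_mask * (pred["c"] - true["c"]) ** 2) + \
           lambda_noobj * np.sum(noobj_mask * (pred["c"] - true["c"]) ** 2)
    cls = np.sum(cell_mask[..., None] * (pred["p"] - true["p"]) ** 2)
    return coord + conf + cls
```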
9. A weld defect identification and positioning system based on a deep learning network is characterized in that the weld defect identification and positioning method based on the deep learning network as claimed in any one of claims 1 to 8 is adopted to complete the identification and positioning of weld defects in a weld ray image and give a prediction result; the system comprises:
the image acquisition module is used for acquiring a welding seam defect radiographic image containing air holes, slag inclusion, cracks, unfused and incomplete penetration defects, taking the image as a training set or a test set, and finishing the training of a system or finishing the task of identifying and positioning the welding seam defects in the image based on the image in the training set or the test set;
the image preprocessing module is used for carrying out normalization preprocessing on the images in the training set or the testing set so as to obtain image blocks with uniform resolution on the welding seam defect images after preprocessing; and
the identification and positioning network module, which takes the processed image as input data, divides the input image into N×N grids, makes each grid cell responsible for detecting targets whose center points fall in it, and directly classifies and positions the generated anchors; the identification and positioning network comprises a feature extraction submodule, a target detection submodule and an output submodule, wherein the feature extraction submodule adopts a CSPDens_block module combining the CSPNet and DenseNet networks, applied in the backbone network to extract features from the radiographic weld defect image; the target detection submodule adopts the FPN backbone and Bottom-up path augmentation of PANet to fuse shallow and deep features; the output submodule uses the YOLO layer of YOLOv4 to classify and regress the multi-scale targets; and NMS is applied to the computed high-confidence bounding boxes to obtain the final detection result.
10. A weld defect identification and positioning terminal based on a deep learning network, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, and is characterized in that: the processor executes the weld defect identification and positioning method based on the deep learning network according to any one of claims 1 to 8.
CN202110349482.1A 2021-03-31 2021-03-31 Weld defect identification positioning method and system based on deep learning network Active CN113034478B (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant