CN113723526A - Method for identifying different types of craters - Google Patents

Method for identifying different types of craters

Info

Publication number
CN113723526A
CN113723526A · Application CN202111018165.8A · Granted publication CN113723526B
Authority
CN
China
Prior art keywords
layer
depth
input end
maximum pooling
crater
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111018165.8A
Other languages
Chinese (zh)
Other versions
CN113723526B (en)
Inventor
付波
杨俊
曾金全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202111018165.8A
Publication of CN113723526A
Application granted
Publication of CN113723526B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for identifying different types of craters, comprising the following steps: S1, collecting X-ray crater images and constructing an annotated data set; S2, training a crater positioning and recognition model with the annotated data set to obtain a trained crater positioning and recognition model; S3, using the trained crater positioning and recognition model to recognize images requiring crater recognition, obtaining the position and type of each crater. The invention solves the problem that identifying the position and condition of craters with existing methods is time-consuming and labor-intensive.

Description

Method for identifying different types of craters
Technical Field
The invention relates to the field of image recognition, and in particular to a method for recognizing different types of craters.
Background
When carbon steel, stainless steel, or alloy steel products are manufactured or assembled with other products, they are joined to those products by welding electrodes, and the joint formed where products are welded together is called a weld crater. Traditional crater positioning and identification first locates the crater and then performs identification and detection. Traditional destructive detection methods include mechanical property tests (tensile, hardness, bending, fatigue, and impact tests), chemical analysis tests (chemical composition analysis, corrosion tests, and the like), and metallographic examination (macroscopic inspection, microscopic inspection, and the like). Non-destructive methods include visual inspection (size, geometric shape, and surface scar inspection), pressure-resistance tests (hydrostatic and pneumatic tests), sealing tests (air-tightness, water-carrying, ammonia, sinking, kerosene-leakage, and ammonia-leakage tests), magnetic-particle inspection, dye-penetrant inspection, ultrasonic flaw detection, and radiographic inspection. The destructive methods are time-consuming and labor-intensive and damage the inspected articles to varying degrees. Among the non-destructive methods, macroscopic inspection by direct visual examination places high demands on the inspector; although it requires no instrument or equipment, it cannot see through the workpiece to examine its interior, so its range of application is narrow. Radiographic inspection yields crater pictures that must then be identified and classified manually; while this guarantees a certain detection accuracy, manually identifying and classifying large batches of crater pictures is time-consuming and labor-intensive.
Disclosure of Invention
In view of the above shortcomings of the prior art, the present invention provides a method for identifying different types of craters, which solves the problem that identifying the positions and types of craters with existing methods is time-consuming and labor-intensive.
To achieve this purpose, the invention adopts the following technical scheme: a method of identifying different types of craters, comprising the following steps:
S1, collecting X-ray crater images and constructing an annotated data set;
S2, training the crater positioning and recognition model with the annotated data set to obtain a trained crater positioning and recognition model;
S3, using the trained crater positioning and recognition model to recognize the images requiring crater recognition, obtaining the position and type of each crater.
Further, the step S1 includes the following sub-steps:
S11, collecting X-ray crater images to obtain a crater image data set;
S12, classifying the crater image data set to obtain image data sets of different types;
S13, performing image enhancement on the different types of image data sets to obtain enhanced image data sets;
S14, labeling the abnormal crater images in the enhanced image data set to obtain the annotated data set.
Further, the image enhancement processing in step S13 includes: increasing image contrast, increasing image brightness, horizontal flipping, vertical flipping, rotating 90 degrees clockwise, and rotating 90 degrees counterclockwise.
Further, the labeling in step S14 includes: marking the position of each crater and its type.
Further, the crater positioning and recognition model in step S2 comprises: a first depth-separable convolutional layer, a second depth-separable convolutional layer, a third depth-separable convolutional layer, a fourth depth-separable convolutional layer, a fifth depth-separable convolutional layer, a sixth depth-separable convolutional layer, a seventh depth-separable convolutional layer, an eighth depth-separable convolutional layer, a ninth depth-separable convolutional layer, a tenth depth-separable convolutional layer, an eleventh depth-separable convolutional layer, a first maximum pooling layer, a second maximum pooling layer, a third maximum pooling layer, a fourth maximum pooling layer, a fifth maximum pooling layer, a sixth maximum pooling layer, a seventh maximum pooling layer, an eighth maximum pooling layer, a zero-padded maximum pooling layer, a first concatenation layer, a second concatenation layer, an upsampling layer, and a detector;
the input end of the first depth-separable convolutional layer serves as the input end of the crater positioning and recognition model, and its output end is connected to the input end of the first maximum pooling layer; the output end of the first maximum pooling layer is connected to the input end of the second depth-separable convolutional layer; the output end of the second depth-separable convolutional layer is connected to the input end of the second maximum pooling layer; the output end of the second maximum pooling layer is connected to the input end of the third depth-separable convolutional layer; the output end of the third depth-separable convolutional layer is connected to the input end of the third maximum pooling layer; the output end of the third maximum pooling layer is connected to the input end of the fourth depth-separable convolutional layer; the output end of the fourth depth-separable convolutional layer is connected to the input end of the fourth maximum pooling layer; the output end of the fourth maximum pooling layer is connected to the input end of the fifth depth-separable convolutional layer; the output end of the fifth depth-separable convolutional layer is connected to the input end of the fifth maximum pooling layer; the output end of the fifth maximum pooling layer is connected to the input end of the sixth depth-separable convolutional layer; the output end of the sixth depth-separable convolutional layer is connected to the input end of the zero-padded maximum pooling layer and to the first input end of the second concatenation layer, respectively; the output end of the zero-padded maximum pooling layer is connected to the input ends of the sixth, seventh, and eighth maximum pooling layers and to the first concatenation layer, and the output ends of the sixth, seventh, and eighth maximum pooling layers are likewise connected to the first concatenation layer; the output end of the first concatenation layer is connected to the input end of the seventh depth-separable convolutional layer; the output end of the seventh depth-separable convolutional layer is connected to the input end of the ninth depth-separable convolutional layer; the output end of the ninth depth-separable convolutional layer is connected to the input ends of the eighth and tenth depth-separable convolutional layers, respectively; the output end of the eighth depth-separable convolutional layer is connected to the input end of the upsampling layer; the output end of the upsampling layer is connected to the second input end of the second concatenation layer; the output end of the second concatenation layer is connected to the input end of the eleventh depth-separable convolutional layer; the output end of the eleventh depth-separable convolutional layer is connected to the first input end of the detector; the output end of the tenth depth-separable convolutional layer is connected to the second input end of the detector; and the output end of the detector serves as the output end of the crater positioning and recognition model.
The beneficial effects of the above further scheme are: in the crater positioning and recognition model, the first through seventh depth-separable convolutional layers form the backbone of the model, which extracts the basic image feature information from the input image. This part contains the sixth, seventh, and eighth maximum pooling layers; because these three pooling layers have different sizes, their receptive fields differ, enabling feature fusion across scales and further extraction of feature information at each scale. The rest of the model consists of two feature-detection branches of different scales. The branch that runs from the sixth depth-separable convolutional layer through the ninth is the small-scale detection branch; it further extracts fine-scale features on top of the basic image feature information and plays the main role in crater positioning and recognition. The structure formed by the ninth and tenth depth-separable convolutional layers is the large-scale detection branch; it further extracts large-scale features on top of the basic image feature information; its structure is simple and its convolution effect modest, so it plays an auxiliary role in crater positioning and recognition.
Further, the convolution kernel of the first depth-separable convolutional layer is 3×3 with 16 channels; the second, 3×3 with 32 channels; the third, 3×3 with 64 channels; the fourth, 3×3 with 128 channels; the fifth, 3×3 with 256 channels; the sixth, 3×3 with 512 channels; the seventh, 3×3 with 1024 channels; the eighth, 1×1 with 128 channels; the ninth, 1×1 with 256 channels; the tenth, 1×1 with 64 channels; and the eleventh, 3×3 with 256 channels; the sixth maximum pooling layer has a size of 5×5 with a stride of 1; the seventh, 9×9 with a stride of 1; and the eighth, 13×13 with a stride of 1.
The beneficial effects of the above further scheme are: using depthwise separable convolutions greatly reduces the number of model parameters and floating-point operations during training, reducing the dependence on high-performance graphics hardware and facilitating model development; the trained model is also greatly compressed, improving convenience and operability at recognition time.
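To make the saving concrete: a standard $k \times k$ convolution from $C_{in}$ to $C_{out}$ channels has $k^{2} C_{in} C_{out}$ weights, while a depthwise separable convolution has only $k^{2} C_{in} + C_{in} C_{out}$, a reduction by a factor of roughly $1/C_{out} + 1/k^{2}$. For the sixth layer above (3×3 kernel, 256 input channels from the fifth layer, 512 output channels) this is $3^{2} \cdot 256 \cdot 512 \approx 1.18\,\mathrm{M}$ weights for a standard convolution versus $3^{2} \cdot 256 + 256 \cdot 512 \approx 0.13\,\mathrm{M}$ for the depthwise separable version, roughly a ninefold saving in that single layer.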
Further, in step S2 the crater positioning and recognition model is trained with the following loss function:

$$L=\lambda_{1}\sum_{i=1}^{I}\sum_{j=1}^{J}\mathbb{1}_{ij}^{\mathrm{obj}}\left[(b_{x}-b'_{x})^{2}+(b_{y}-b'_{y})^{2}+(b_{w}-b'_{w})^{2}+(b_{h}-b'_{h})^{2}\right]+\sum_{i=1}^{I}\sum_{j=1}^{J}\mathbb{1}_{ij}^{\mathrm{obj}}\,\mathrm{BCE}(c'_{i},c_{i})+\lambda_{2}\sum_{i=1}^{I}\sum_{j=1}^{J}\mathbb{1}_{ij}^{\mathrm{noobj}}\,\mathrm{BCE}(c'_{i},c_{i})+\sum_{i=1}^{I}\sum_{j=1}^{J}\mathbb{1}_{ij}^{\mathrm{obj}}\,\mathrm{BCE}(P'_{i},P_{i})$$

where $L$ is the loss function; $I$ is the number of grid cells the image is divided into; $J$ is the number of prediction boxes per grid cell; $\mathbb{1}_{ij}^{\mathrm{obj}}$ indicates that the $j$-th prediction box in the $i$-th grid cell contains a crater; $(b_{x},b_{y})$ are the center coordinates of the prediction box and $(b'_{x},b'_{y})$ those of the ground-truth box; $b_{w}$ and $b_{h}$ are the width and height of the prediction box, and $b'_{w}$ and $b'_{h}$ those of the ground-truth box; $\mathbb{1}_{ij}^{\mathrm{noobj}}$ indicates that the $j$-th prediction box in the $i$-th grid cell contains no crater; $P_{i}$ and $P'_{i}$ are the predicted and true classification probabilities; $\mathrm{BCE}(c'_{i},c_{i})$ is the binary cross-entropy function; $c_{i}$ is the confidence of the prediction box in the $i$-th grid cell and $c'_{i}$ that of the ground-truth box; and $\lambda_{1}$, $\lambda_{2}$ are weight parameters.
In conclusion, the beneficial effects of the invention are as follows: because the features that distinguish the different crater types in X-ray crater images differ only at small scales within the image, the invention designs a high-precision, lightweight crater positioning and recognition model for identifying these fine features, solving the problems of low detection accuracy and low efficiency when large numbers of craters of different types are inspected manually.
Drawings
FIG. 1 is a flow chart of the method for identifying different types of craters;
FIG. 2 is a schematic diagram of the structure of the crater positioning and recognition model;
FIGS. 3 and 4 show the detection results of the trained normal/abnormal model on the corresponding original X-ray crater images;
FIGS. 5 and 6 show the detection results of the trained normal/air-hole model on the corresponding original X-ray crater images.
Detailed Description
The following description of the embodiments of the invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of the embodiments. To those skilled in the art, various changes are possible without departing from the spirit and scope of the invention as defined by the appended claims, and everything produced using the inventive concept falls under the protection of the invention.
As shown in FIG. 1, the method of identifying different types of craters comprises the following steps:
S1, collecting X-ray crater images and constructing an annotated data set;
S2, training the crater positioning and recognition model with the annotated data set to obtain a trained crater positioning and recognition model;
S3, using the trained crater positioning and recognition model to recognize the images requiring crater recognition, obtaining the position and type of each crater.
Step S1 includes the following substeps:
S11, collecting X-ray crater images to obtain a crater image data set;
the format of the collected crater image is tif format, and needs to be converted into jpg format for the subsequent crater positioning identification model identification, so the format of the crater image data set image is jpg format.
S12, classifying the crater image data set to obtain image data sets of different types;
In this embodiment, classifying the crater image data set in step S12 yields 15 types of image data sets: 35 poorly-formed images, 19 misalignment images, 21 dross images, 19 flash images, 2 transverse-defect images, 10 slag-inclusion images, 54 parent-metal-defect images, 12 dent images, 188 air-hole images, 19 incomplete-penetration images, 69 unfused images, 13 undercut images, 15 foreign-object images, 1650 normal images, and 3 bead-shrinkage images, 2129 images in total; apart from the normal type, each image contains 3 to 5 craters.
S13, carrying out image enhancement processing on the image data sets of different types to obtain enhanced image data sets;
In this embodiment, some types have too few images, so image enhancement is applied to the under-represented types to enrich their image counts.
For example, this embodiment mainly distinguishes the four types normal, abnormal, air hole, and flash. The flash type has only 19 images, so to ensure sufficient training samples the flash-type images are data-enhanced: raising contrast and brightness, flipping the picture horizontally, flipping it vertically, rotating it 90 degrees clockwise, and rotating it counterclockwise. These operations increase the number of flash crater sample pictures to 114 while preserving the identification and positioning cues as far as possible, in preparation for the subsequent training.
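A minimal sketch of these five enhancement operations, again using Pillow; the enhancement factor of 1.3 is an illustrative assumption:

```python
from PIL import Image, ImageEnhance

def enhance_flash_image(img: Image.Image) -> list[Image.Image]:
    """Produce the five enhanced copies described above for one original image."""
    brighter = ImageEnhance.Brightness(
        ImageEnhance.Contrast(img).enhance(1.3)).enhance(1.3)  # contrast + brightness
    return [
        brighter,
        img.transpose(Image.Transpose.FLIP_LEFT_RIGHT),  # horizontal flip
        img.transpose(Image.Transpose.FLIP_TOP_BOTTOM),  # vertical flip
        img.transpose(Image.Transpose.ROTATE_270),       # 90 degrees clockwise
        img.transpose(Image.Transpose.ROTATE_90),        # 90 degrees counterclockwise
    ]
```

Keeping the 19 originals and adding five enhanced copies of each is consistent with the count reported above: 19 × 6 = 114 flash images.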
S14, labeling the abnormal crater images in the enhanced image data set to obtain the annotated data set.
This embodiment mainly targets four types of X-ray crater images: normal, abnormal, air hole, and flash, where the abnormal type covers all crater images other than the normal type. The images of these types were annotated with an open-source labeling tool; 1211 images were labeled, with roughly 3700 craters boxed. The boxed data are stored in corresponding xml files in the VOC 2007 format. By count, the data set used in the subsequent method contains 600 normal-type, 320 abnormal-type, and 114 flash-type craters, and apart from the normal type each image contains 3 to 5 craters.
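For reference, annotations in this VOC 2007 layout can be read back with the Python standard library alone. The sketch below is not the authors' tooling; the label string and file name are illustrative assumptions, and the labeling tool is assumed to be one such as LabelImg:

```python
import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path: str) -> list[dict]:
    """Return [{'label': str, 'box': (xmin, ymin, xmax, ymax)}, ...] for one image."""
    root = ET.parse(xml_path).getroot()
    objects = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        objects.append({
            "label": obj.findtext("name"),  # crater type, e.g. "normal" (illustrative)
            "box": tuple(int(float(bb.findtext(t)))
                         for t in ("xmin", "ymin", "xmax", "ymax")),
        })
    return objects
```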
As shown in FIG. 2, the crater positioning and recognition model in step S2 comprises: a first depth-separable convolutional layer, a second depth-separable convolutional layer, a third depth-separable convolutional layer, a fourth depth-separable convolutional layer, a fifth depth-separable convolutional layer, a sixth depth-separable convolutional layer, a seventh depth-separable convolutional layer, an eighth depth-separable convolutional layer, a ninth depth-separable convolutional layer, a tenth depth-separable convolutional layer, an eleventh depth-separable convolutional layer, a first maximum pooling layer, a second maximum pooling layer, a third maximum pooling layer, a fourth maximum pooling layer, a fifth maximum pooling layer, a sixth maximum pooling layer, a seventh maximum pooling layer, an eighth maximum pooling layer, a zero-padded maximum pooling layer, a first concatenation layer, a second concatenation layer, an upsampling layer, and a detector;
the input end of the first depth-separable convolutional layer serves as the input end of the crater positioning and recognition model, and its output end is connected to the input end of the first maximum pooling layer; the output end of the first maximum pooling layer is connected to the input end of the second depth-separable convolutional layer; the output end of the second depth-separable convolutional layer is connected to the input end of the second maximum pooling layer; the output end of the second maximum pooling layer is connected to the input end of the third depth-separable convolutional layer; the output end of the third depth-separable convolutional layer is connected to the input end of the third maximum pooling layer; the output end of the third maximum pooling layer is connected to the input end of the fourth depth-separable convolutional layer; the output end of the fourth depth-separable convolutional layer is connected to the input end of the fourth maximum pooling layer; the output end of the fourth maximum pooling layer is connected to the input end of the fifth depth-separable convolutional layer; the output end of the fifth depth-separable convolutional layer is connected to the input end of the fifth maximum pooling layer; the output end of the fifth maximum pooling layer is connected to the input end of the sixth depth-separable convolutional layer; the output end of the sixth depth-separable convolutional layer is connected to the input end of the zero-padded maximum pooling layer and to the first input end of the second concatenation layer, respectively; the output end of the zero-padded maximum pooling layer is connected to the input ends of the sixth, seventh, and eighth maximum pooling layers and to the first concatenation layer, and the output ends of the sixth, seventh, and eighth maximum pooling layers are likewise connected to the first concatenation layer; the output end of the first concatenation layer is connected to the input end of the seventh depth-separable convolutional layer; the output end of the seventh depth-separable convolutional layer is connected to the input end of the ninth depth-separable convolutional layer; the output end of the ninth depth-separable convolutional layer is connected to the input ends of the eighth and tenth depth-separable convolutional layers, respectively; the output end of the eighth depth-separable convolutional layer is connected to the input end of the upsampling layer; the output end of the upsampling layer is connected to the second input end of the second concatenation layer; the output end of the second concatenation layer is connected to the input end of the eleventh depth-separable convolutional layer; the output end of the eleventh depth-separable convolutional layer is connected to the first input end of the detector; the output end of the tenth depth-separable convolutional layer is connected to the second input end of the detector; and the output end of the detector serves as the output end of the crater positioning and recognition model.
The convolution kernel of the first depth-separable convolutional layer is 3×3 with 16 channels; the second, 3×3 with 32 channels; the third, 3×3 with 64 channels; the fourth, 3×3 with 128 channels; the fifth, 3×3 with 256 channels; the sixth, 3×3 with 512 channels; the seventh, 3×3 with 1024 channels; the eighth, 1×1 with 128 channels; the ninth, 1×1 with 256 channels; the tenth, 1×1 with 64 channels; and the eleventh, 3×3 with 256 channels; the sixth maximum pooling layer has a size of 5×5 with a stride of 1; the seventh, 9×9 with a stride of 1; and the eighth, 13×13 with a stride of 1.
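The description fixes the layer inventory, the connections, and the kernel/channel sizes, but leaves some hyperparameters open: the strides of the first five pooling layers and of the zero-padded pooling layer, the activations, and the form of the detector. The following PyTorch sketch instantiates one consistent reading: pooling layers one through five use stride 2, the zero-padded pooling layer is taken as a stride-2 pool so that the 2× upsampling restores the resolution of the sixth convolutional layer's output for the second concatenation, and the detector is modeled as two 1×1 convolutional heads. All of these choices are assumptions for illustration, not details taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DSConv(nn.Module):
    """Depthwise separable convolution: depthwise k×k, then pointwise 1×1."""
    def __init__(self, c_in: int, c_out: int, k: int):
        super().__init__()
        self.dw = nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in, bias=False)
        self.pw = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)

    def forward(self, x):
        return F.leaky_relu(self.bn(self.pw(self.dw(x))), 0.1)

class CraterNet(nn.Module):
    def __init__(self, num_classes: int = 4, anchors: int = 3):
        super().__init__()
        # DSConv 1-6 (3×3; 16, 32, 64, 128, 256, 512 channels), pools 1-5 between them.
        chans, c_prev, convs = [16, 32, 64, 128, 256, 512], 3, []
        for c in chans:
            convs.append(DSConv(c_prev, c, 3))
            c_prev = c
        self.backbone = nn.ModuleList(convs)
        self.pool = nn.MaxPool2d(2, 2)            # maximum pooling layers 1-5
        self.zero_pad_pool = nn.MaxPool2d(2, 2)   # assumed stride-2 "zero-padded" pool
        # Maximum pooling layers 6-8: 5×5, 9×9, 13×13, stride 1, padded to keep size.
        self.pool6 = nn.MaxPool2d(5, 1, padding=2)
        self.pool7 = nn.MaxPool2d(9, 1, padding=4)
        self.pool8 = nn.MaxPool2d(13, 1, padding=6)
        self.conv7 = DSConv(512 * 4, 1024, 3)     # after the first concatenation
        self.conv9 = DSConv(1024, 256, 1)
        self.conv8 = DSConv(256, 128, 1)          # feeds the upsampling layer
        self.conv10 = DSConv(256, 64, 1)          # large-scale branch
        self.conv11 = DSConv(512 + 128, 256, 3)   # after the second concatenation
        out = anchors * (5 + num_classes)         # assumed detector heads
        self.head1 = nn.Conv2d(256, out, 1)
        self.head2 = nn.Conv2d(64, out, 1)

    def forward(self, x):
        for i, conv in enumerate(self.backbone):  # DSConv1-6
            x = conv(x)
            if i < 5:
                x = self.pool(x)
        c6 = x                                    # sixth layer output, 512 channels
        z = self.zero_pad_pool(c6)
        cat1 = torch.cat([z, self.pool6(z), self.pool7(z), self.pool8(z)], 1)
        t = self.conv9(self.conv7(cat1))          # shared 256-channel feature
        up = F.interpolate(self.conv8(t), scale_factor=2, mode="nearest")
        small = self.head1(self.conv11(torch.cat([c6, up], 1)))  # small-scale branch
        large = self.head2(self.conv10(t))                       # large-scale branch
        return small, large

# Shape check: with a 256×256 RGB input, the two outputs are 8×8 and 4×4 grids.
small, large = CraterNet()(torch.randn(1, 3, 256, 256))
```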
In step S2, the crater positioning and recognition model is trained with the following loss function:

$$L=\lambda_{1}\sum_{i=1}^{I}\sum_{j=1}^{J}\mathbb{1}_{ij}^{\mathrm{obj}}\left[(b_{x}-b'_{x})^{2}+(b_{y}-b'_{y})^{2}+(b_{w}-b'_{w})^{2}+(b_{h}-b'_{h})^{2}\right]+\sum_{i=1}^{I}\sum_{j=1}^{J}\mathbb{1}_{ij}^{\mathrm{obj}}\,\mathrm{BCE}(c'_{i},c_{i})+\lambda_{2}\sum_{i=1}^{I}\sum_{j=1}^{J}\mathbb{1}_{ij}^{\mathrm{noobj}}\,\mathrm{BCE}(c'_{i},c_{i})+\sum_{i=1}^{I}\sum_{j=1}^{J}\mathbb{1}_{ij}^{\mathrm{obj}}\,\mathrm{BCE}(P'_{i},P_{i})$$

where $L$ is the loss function; $I$ is the number of grid cells the image is divided into; $J$ is the number of prediction boxes per grid cell; $\mathbb{1}_{ij}^{\mathrm{obj}}$ indicates whether the $j$-th prediction box in the $i$-th grid cell contains a crater, taking the value 1 if the prediction box contains a target and 0 if it does not; $(b_{x},b_{y})$ are the center coordinates of the prediction box and $(b'_{x},b'_{y})$ those of the ground-truth box; $b_{w}$ and $b_{h}$ are the width and height of the prediction box, and $b'_{w}$ and $b'_{h}$ those of the ground-truth box; $\mathbb{1}_{ij}^{\mathrm{noobj}}$ is the complementary indicator, taking the value 0 if the prediction box contains a target and 1 if it does not; $P_{i}$ and $P'_{i}$ are the predicted and true classification probabilities; $\mathrm{BCE}(c'_{i},c_{i})$ is the binary cross-entropy function; $c_{i}$ is the confidence of the prediction box in the $i$-th grid cell and $c'_{i}$ that of the ground-truth box; and $\lambda_{1}$, $\lambda_{2}$ are weight parameters.
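Under the same reading, the loss can be written compactly once predictions have been matched to ground-truth boxes. The sketch below illustrates the formula above and is not the authors' code; the λ values and the use of probability-valued (already-sigmoided) inputs are assumptions:

```python
import torch
import torch.nn.functional as F

def crater_loss(pred_box, true_box, pred_conf, true_conf, pred_cls, true_cls,
                obj_mask, lambda1: float = 5.0, lambda2: float = 0.5):
    """Loss over N candidate boxes already matched to targets.

    pred_box, true_box: (N, 4) tensors holding (bx, by, bw, bh);
    pred_conf, true_conf: (N,) confidences in [0, 1];
    pred_cls, true_cls: (N, C) class probabilities in [0, 1];
    obj_mask: (N,) with 1 where the box contains a crater, else 0.
    """
    noobj_mask = 1.0 - obj_mask
    # Coordinate term: squared error on centers and sizes, object boxes only.
    coord = (obj_mask * ((pred_box - true_box) ** 2).sum(dim=1)).sum()
    # Confidence term: BCE, with no-object boxes weighted by lambda2.
    conf_bce = F.binary_cross_entropy(pred_conf, true_conf, reduction="none")
    conf = (obj_mask * conf_bce).sum() + lambda2 * (noobj_mask * conf_bce).sum()
    # Classification term: BCE over class probabilities, object boxes only.
    cls_bce = F.binary_cross_entropy(pred_cls, true_cls, reduction="none").sum(dim=1)
    cls = (obj_mask * cls_bce).sum()
    return lambda1 * coord + conf + cls
```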
In this embodiment, training the crater positioning and recognition model in step S2 proceeds in two steps. First, a model is trained to detect normal crater images versus abnormal crater images, where the abnormal images are the boxed crater images of all types other than normal; this first step preliminarily distinguishes normal craters from abnormal ones. Second, models are trained to detect the air-hole or flash craters within the normal and abnormal crater images, so that images already flagged as defective can be examined further; the air-hole and flash types are chosen for testing because they are typical representatives of crater defects.
The method first uses the trained normal/abnormal model to detect the corresponding original X-ray crater images; the detection results are shown in FIGS. 3 and 4. The number in FIG. 3 is the confidence, i.e. the model considers that the crater is of the normal type with probability 0.88. The numbers in FIG. 4 are the confidences of the individual craters, i.e. the model assigns each of the 3 craters in the figure its corresponding probability of being of the abnormal type.
The invention then uses the trained normal/air-hole model to detect the corresponding original X-ray crater images, with the results shown in FIGS. 5 and 6. As FIG. 5 shows, the trained model accurately marks the positions of the air holes. The number in FIG. 5 is the confidence, i.e. the model considers that the crater is of the normal type with probability 0.91. The numbers in FIG. 6 are the confidences of the individual craters, i.e. the model assigns each of the 5 craters in the figure its corresponding probability of being of the air-hole type.
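For completeness, per-box confidences like those shown in FIGS. 3 to 6 are typically obtained from the detector outputs by a generic YOLO-style decoding step. The sketch below is not taken from the patent; the channel layout (objectness at index 4, class scores after it) and the threshold are assumptions:

```python
import torch

def read_confidences(head_out: torch.Tensor, num_classes: int = 4,
                     anchors: int = 3, conf_thresh: float = 0.5):
    """Decode one detector head output of shape (B, anchors*(5+C), H, W)
    into per-box confidences and class labels above a threshold."""
    b, _, h, w = head_out.shape
    out = head_out.view(b, anchors, 5 + num_classes, h, w)
    obj = torch.sigmoid(out[:, :, 4])                   # objectness score per box
    cls = torch.sigmoid(out[:, :, 5:])                  # per-class scores
    conf, labels = (obj.unsqueeze(2) * cls).max(dim=2)  # class confidence per box
    keep = conf > conf_thresh
    return conf[keep], labels[keep]
```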

Claims (7)

1. A method of identifying different types of craters, comprising the steps of:
S1, collecting X-ray crater images and constructing an annotated data set;
S2, training a crater positioning and recognition model with the annotated data set to obtain a trained crater positioning and recognition model;
S3, using the trained crater positioning and recognition model to recognize the images requiring crater recognition, obtaining the position and type of each crater.
2. The method for identifying different types of craters according to claim 1, characterized in that step S1 comprises the following sub-steps:
S11, collecting X-ray crater images to obtain a crater image data set;
S12, classifying the crater image data set to obtain image data sets of different types;
S13, performing image enhancement on the different types of image data sets to obtain enhanced image data sets;
S14, labeling the abnormal crater images in the enhanced image data set to obtain the annotated data set.
3. The method for identifying different types of craters according to claim 2, wherein the image enhancement processing in step S13 comprises: increasing image contrast, increasing image brightness, horizontal flipping, vertical flipping, rotating 90 degrees clockwise, and rotating 90 degrees counterclockwise.
4. The method for identifying different types of craters according to claim 2, wherein the labeling in step S14 comprises: marking the position of each crater and its type.
5. The method for identifying different types of craters according to claim 1, wherein the crater positioning and recognition model in step S2 comprises: a first depth-separable convolutional layer, a second depth-separable convolutional layer, a third depth-separable convolutional layer, a fourth depth-separable convolutional layer, a fifth depth-separable convolutional layer, a sixth depth-separable convolutional layer, a seventh depth-separable convolutional layer, an eighth depth-separable convolutional layer, a ninth depth-separable convolutional layer, a tenth depth-separable convolutional layer, an eleventh depth-separable convolutional layer, a first maximum pooling layer, a second maximum pooling layer, a third maximum pooling layer, a fourth maximum pooling layer, a fifth maximum pooling layer, a sixth maximum pooling layer, a seventh maximum pooling layer, an eighth maximum pooling layer, a zero-padded maximum pooling layer, a first concatenation layer, a second concatenation layer, an upsampling layer, and a detector;
the input end of the first depth-separable convolutional layer serves as the input end of the crater positioning and recognition model, and its output end is connected to the input end of the first maximum pooling layer; the output end of the first maximum pooling layer is connected to the input end of the second depth-separable convolutional layer; the output end of the second depth-separable convolutional layer is connected to the input end of the second maximum pooling layer; the output end of the second maximum pooling layer is connected to the input end of the third depth-separable convolutional layer; the output end of the third depth-separable convolutional layer is connected to the input end of the third maximum pooling layer; the output end of the third maximum pooling layer is connected to the input end of the fourth depth-separable convolutional layer; the output end of the fourth depth-separable convolutional layer is connected to the input end of the fourth maximum pooling layer; the output end of the fourth maximum pooling layer is connected to the input end of the fifth depth-separable convolutional layer; the output end of the fifth depth-separable convolutional layer is connected to the input end of the fifth maximum pooling layer; the output end of the fifth maximum pooling layer is connected to the input end of the sixth depth-separable convolutional layer; the output end of the sixth depth-separable convolutional layer is connected to the input end of the zero-padded maximum pooling layer and to the first input end of the second concatenation layer, respectively; the output end of the zero-padded maximum pooling layer is connected to the input ends of the sixth, seventh, and eighth maximum pooling layers and to the first concatenation layer, and the output ends of the sixth, seventh, and eighth maximum pooling layers are likewise connected to the first concatenation layer; the output end of the first concatenation layer is connected to the input end of the seventh depth-separable convolutional layer; the output end of the seventh depth-separable convolutional layer is connected to the input end of the ninth depth-separable convolutional layer; the output end of the ninth depth-separable convolutional layer is connected to the input ends of the eighth and tenth depth-separable convolutional layers, respectively; the output end of the eighth depth-separable convolutional layer is connected to the input end of the upsampling layer; the output end of the upsampling layer is connected to the second input end of the second concatenation layer; the output end of the second concatenation layer is connected to the input end of the eleventh depth-separable convolutional layer; the output end of the eleventh depth-separable convolutional layer is connected to the first input end of the detector; the output end of the tenth depth-separable convolutional layer is connected to the second input end of the detector; and the output end of the detector serves as the output end of the crater positioning and recognition model.
6. The method of identifying different types of craters according to claim 5, wherein the convolution kernel of the first depth-separable convolutional layer is 3×3 with 16 channels; the second, 3×3 with 32 channels; the third, 3×3 with 64 channels; the fourth, 3×3 with 128 channels; the fifth, 3×3 with 256 channels; the sixth, 3×3 with 512 channels; the seventh, 3×3 with 1024 channels; the eighth, 1×1 with 128 channels; the ninth, 1×1 with 256 channels; the tenth, 1×1 with 64 channels; and the eleventh, 3×3 with 256 channels; the sixth maximum pooling layer has a size of 5×5 with a stride of 1; the seventh, 9×9 with a stride of 1; and the eighth, 13×13 with a stride of 1.
7. The method for identifying different types of craters according to claim 1, wherein the crater positioning and recognition model in step S2 is trained with the following loss function:

$$L=\lambda_{1}\sum_{i=1}^{I}\sum_{j=1}^{J}\mathbb{1}_{ij}^{\mathrm{obj}}\left[(b_{x}-b'_{x})^{2}+(b_{y}-b'_{y})^{2}+(b_{w}-b'_{w})^{2}+(b_{h}-b'_{h})^{2}\right]+\sum_{i=1}^{I}\sum_{j=1}^{J}\mathbb{1}_{ij}^{\mathrm{obj}}\,\mathrm{BCE}(c'_{i},c_{i})+\lambda_{2}\sum_{i=1}^{I}\sum_{j=1}^{J}\mathbb{1}_{ij}^{\mathrm{noobj}}\,\mathrm{BCE}(c'_{i},c_{i})+\sum_{i=1}^{I}\sum_{j=1}^{J}\mathbb{1}_{ij}^{\mathrm{obj}}\,\mathrm{BCE}(P'_{i},P_{i})$$

where $L$ is the loss function; $I$ is the number of grid cells the image is divided into; $J$ is the number of prediction boxes per grid cell; $\mathbb{1}_{ij}^{\mathrm{obj}}$ indicates that the $j$-th prediction box in the $i$-th grid cell contains a crater; $(b_{x},b_{y})$ are the center coordinates of the prediction box and $(b'_{x},b'_{y})$ those of the ground-truth box; $b_{w}$ and $b_{h}$ are the width and height of the prediction box, and $b'_{w}$ and $b'_{h}$ those of the ground-truth box; $\mathbb{1}_{ij}^{\mathrm{noobj}}$ indicates that the $j$-th prediction box in the $i$-th grid cell contains no crater; $P_{i}$ and $P'_{i}$ are the predicted and true classification probabilities; $\mathrm{BCE}(c'_{i},c_{i})$ is the binary cross-entropy function; $c_{i}$ is the confidence of the prediction box in the $i$-th grid cell and $c'_{i}$ that of the ground-truth box; and $\lambda_{1}$, $\lambda_{2}$ are weight parameters.
CN202111018165.8A 2021-08-31 2021-08-31 Method for identifying different types of craters Active CN113723526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111018165.8A CN113723526B (en) 2021-08-31 2021-08-31 Method for identifying different types of craters


Publications (2)

Publication Number Publication Date
CN113723526A 2021-11-30
CN113723526B 2023-04-18

Family

ID=78680290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111018165.8A Active CN113723526B (en) 2021-08-31 2021-08-31 Method for identifying different types of craters

Country Status (1)

Country Link
CN (1) CN113723526B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200097775A1 (en) * 2018-09-20 2020-03-26 Cable Television Laboratories, Inc. Systems and methods for detecting and classifying anomalous features in one-dimensional data
CN110321874A (en) * 2019-07-12 2019-10-11 南京航空航天大学 A kind of light-weighted convolutional neural networks pedestrian recognition method
CN110826520A (en) * 2019-11-14 2020-02-21 燕山大学 Port grab bucket detection method based on improved YOLOv3-tiny algorithm
CN111248859A (en) * 2019-12-27 2020-06-09 黄淮学院 Automatic sleep apnea detection method based on convolutional neural network
CN111429441A (en) * 2020-03-31 2020-07-17 电子科技大学 Crater identification and positioning method based on YO L OV3 algorithm
CN113177560A (en) * 2021-04-27 2021-07-27 西安电子科技大学 Universal lightweight deep learning vehicle detection method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
JUN YANG et al.: "YOLO-Xweld: Efficiently Detecting Pipeline Welding Defects in X-Ray Images for Constrained Environments" *
XIAOXIAO XIANG et al.: "A Convolutional Network With Multi-Scale and Attention Mechanisms for End-to-End Single-Channel Speech Enhancement" *
ZHANCHAO HUANG et al.: "DC-SPP-YOLO: Dense Connection and Spatial Pyramid Pooling Based YOLO for Object Detection" *
张文明 et al.: "Deep-learning-based detection method for gantry crane grab buckets" *
李想: "Video vehicle and tail-light signal recognition based on deep learning" *
李明 et al.: "Gate valve opening detection based on improved YOLO-tiny" *

Also Published As

Publication number Publication date
CN113723526B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN111080622B (en) Neural network training method, workpiece surface defect classification and detection method and device
CN111062915B (en) Real-time steel pipe defect detection method based on improved YOLOv3 model
CN106770394B (en) The three-dimensional appearance of metal welding seam internal flaw and the lossless detection method of stress characteristics
Parlak et al. Deep learning-based detection of aluminum casting defects and their types
CN110473173A (en) A kind of defect inspection method based on deep learning semantic segmentation
CN112700444B (en) Bridge bolt detection method based on self-attention and central point regression model
CN110097547B (en) Automatic detection method for welding seam negative film counterfeiting based on deep learning
CN113822889B (en) Method for detecting surface defects of hot-rolled steel plate
CN115601355A (en) Method and device for detecting and classifying product surface defects and storage medium
Naddaf-Sh et al. Defect detection and classification in welding using deep learning and digital radiography
Zuo et al. Application of YOLO object detection network in weld surface defect detection
CN111879972A (en) Workpiece surface defect detection method and system based on SSD network model
CN116524313A (en) Defect detection method and related device based on deep learning and multi-mode image
CN113962951B (en) Training method and device for detecting segmentation model, and target detection method and device
CN116416432A (en) Pipeline weld image segmentation method based on improved UNet
Li et al. Weld image recognition algorithm based on deep learning
Muresan et al. Automatic vision inspection solution for the manufacturing process of automotive components through plastic injection molding
CN111429441B (en) Crater identification and positioning method based on YOLOV3 algorithm
CN114428110A (en) Method and system for detecting defects of fluorescent magnetic powder inspection image of bearing ring
CN113723526B (en) Method for identifying different types of craters
Xue et al. A high efficiency deep learning method for the x-ray image defect detection of casting parts
CN113971676A (en) Automatic segmentation method for connecting piece section image based on deep learning
Zuo et al. An X-ray-based automatic welding defect detection method for special equipment system
CN111861889B (en) Automatic splicing method and system for solar cell images based on semantic segmentation
Halim et al. Automatic laser welding defect detection and classification using sobel-contour shape detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant