CN112150410B - Automatic detection method and system for weld defects - Google Patents


Info

Publication number
CN112150410B
Authority
CN
China
Prior art keywords
detected
digital image
defect
weld
network structure
Prior art date
Legal status
Active
Application number
CN202010856564.0A
Other languages
Chinese (zh)
Other versions
CN112150410A
Inventor
王粤
邓杨
尚玉婷
Current Assignee
Zhejiang Gongshang University
Original Assignee
Zhejiang Gongshang University
Priority date
Filing date
Publication date
Application filed by Zhejiang Gongshang University
Priority claimed from CN202010856564.0A
Publication of CN112150410A
Application granted
Publication of CN112150410B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G06T2207/30152 - Solder

Abstract

The invention provides an automatic detection method and system for weld defects. The method comprises: obtaining a digital image to be detected of a weld to be detected; preprocessing the digital image to be detected to obtain a cut digital image to be detected; inputting the cut digital image to be detected into a first depth semantic segmentation network model to detect first weld defects and obtain a first defect map; inputting the cut digital image to be detected into a second depth semantic segmentation network model to detect second weld defects and obtain a second defect map; fusing the first defect map and the second defect map to obtain a fusion defect map; and processing the fusion defect map to obtain defect data corresponding to each weld defect, combining the defect data with a detection standard table to obtain defect information for each weld defect, and thereby obtaining a detection report comprising all the defect information. The invention has the beneficial effects of improving detection precision and detection efficiency.

Description

Automatic detection method and system for weld defects
Technical Field
The invention relates to the technical field of welding detection, in particular to an automatic detection method and system for weld defects.
Background
At present, welding is one of the most important processing methods in the manufacturing field, and with the continuous improvement of industrialization in China, welding technology is widely applied in key fields such as basic infrastructure construction, aerospace, metallurgy, petrochemicals, national defense, and equipment manufacturing. Welding can be classified by process into gas welding, resistance welding, arc welding, induction welding, laser welding, etc., and by method into automatic, semi-automatic, and manual welding. During welding, appearance defects of the welded parts, such as weld flash, undercut, welding deformation, and dents, can be identified directly by the naked eye, but internal defects such as lack of fusion, incomplete penetration, cracks, strip defects, round defects, and tungsten-clamping/copper-clamping (inclusion) defects must be identified by nondestructive testing techniques. These internal defects reduce the quality of the welded product and greatly affect the service performance of the structure: in mild cases they lead to corrosion and fatigue damage during everyday use, and in severe cases to system breakdown and even disastrous consequences. Therefore, besides external defect inspection, effective nondestructive inspection of the internal welded structure is required to prevent unqualified weldments from entering the market and being put into use, thereby protecting economic benefit and providing a strong guarantee of personal safety.
The main nondestructive testing techniques at present are magnetic particle testing, penetrant testing, eddy current testing, ultrasonic testing, X-ray testing, machine vision inspection, etc. Among them, X-ray testing is currently the dominant method. Its principle is that X-rays of uniform intensity emitted by a radiation source penetrate the weld region and strike an imaging device. Since the density and thickness of a weld defect differ from those of the normal weld area, the extent to which the radiation is absorbed is not uniform. The imaging device accurately senses rays of different intensities and converts the intensity information into brightness variations in a gray-level image, so that the shape, position, size, and type of the defects are accurately reflected in the image. To ensure welding quality, the national radiographic testing standard GB/T 3323-2005, the weld defect classification GB/T 6417.1-2005, the pressure equipment nondestructive testing standard NB/T 47013.2-2005, and related standards specify weld defect detection and assessment criteria.
At present, most large, medium, and small enterprises still use X-ray film radiography when inspecting the internal quality of a weld. The film is single-use and must be developed, which wastes time and pollutes the environment. Quality inspection of the internal weld structure consists of a worker placing the film under an LED film lamp and judging defects by eye; the result often depends on the professional level of the film evaluator, and different inspectors may differ somewhat in how they understand and apply the assessment standard. In addition, long, multi-batch inspection work causes eye fatigue, increasing the probability of missed detections and misjudgments.
Disclosure of Invention
Aiming at the problems in the prior art, an automatic detection method and a system for weld defects are provided.
The specific technical scheme is as follows:
an automatic detection method for weld defects comprises the following steps:
step S1, obtaining a digital image to be detected of a weld joint to be detected;
step S2, preprocessing the digital image to be detected to obtain a cut digital image to be detected;
s3, inputting the cut digital image to be detected into a first depth semantic segmentation network model to detect a first weld defect of the digital image to be detected, and obtaining a first defect map; and
inputting the cut digital image to be detected into a second depth semantic segmentation network model to detect a second weld defect of the digital image to be detected, so as to obtain a second defect map;
s4, carrying out fusion processing on the first defect map and the second defect map to obtain a fusion defect map;
s5, processing the fusion defect map to obtain defect data corresponding to each weld defect, and combining the defect data of each weld defect with a detection standard table to obtain defect information of the weld defect so as to obtain a detection report comprising all the defect information;
Wherein the weld defects include a first weld defect and a second weld defect.
Preferably, the automatic detection method, wherein step S3 specifically includes the following steps:
the step of obtaining a first defect map comprises the following steps:
pre-segmenting a digital image to be detected by adopting a sliding window of a first depth semantic segmentation network model, and respectively inputting the pre-segmented digital image to be detected into a first network structure so as to detect a first weld defect of the pre-segmented digital image to be detected through the first network structure to obtain a first defect map;
the first network structure comprises a first encoder network structure and a first decoder network structure, and the first encoder network structure and the first decoder network structure form an asymmetric structure;
the step of obtaining the second defect map includes:
pre-segmenting the digital image to be detected by adopting a sliding window of a second depth semantic segmentation network model, and respectively inputting the pre-segmented digital image to be detected into a second network structure to detect a second weld defect of the pre-segmented digital image to be detected through the second network structure so as to obtain a second defect map;
the second network structure comprises a second encoder network structure and a second decoder network structure, and the second encoder network structure and the second decoder network structure form an asymmetric structure.
Preferably, the automatic detection method, wherein the first encoder network structure and the second encoder network structure each comprise at least one first convolution block, a plurality of second convolution blocks, and a plurality of inverted residual hole convolution redundant blocks;
wherein the first convolution block comprises at least one common convolution layer;
the second convolution block comprises at least one common convolution layer and at least one depth separable convolution layer;
the inverted residual hole convolution redundant block comprises two common convolution layers and one depth separable convolution layer, and the depth separable convolution layer is arranged between the two common convolution layers;
the first encoder network structure is connected with the first decoder network structure by at least one connection signal;
the second encoder network structure is connected with the second decoder network structure by at least one connection signal;
the depth separable convolution layer in the inverted residual hole convolution redundant block adopts hole convolution with a dilation rate of 2;
the first decoder network architecture and the second decoder network architecture each include a plurality of normal convolutional layers, a plurality of upsampling layers, and an output layer.
Preferably, the method for automatically detecting, wherein the loss function adopted by the first deep semantic segmentation network model is shown in the following formula:
Loss1=λ1·CE1+(1-λ1)·Tversky;
Wherein λ1 is used to represent the ratio of CE1 function and Tversky function in the loss function;
C1 is used to represent the first classification number of the digital image to be detected, the first classifications comprising:
the first classification, namely round defects and strip defects in the first weld defects;
the second classification, namely tungsten-clamping and copper-clamping defects in the first weld defects;
the third classification, namely the weld background of the digital image to be detected;
i is used for representing the abscissa of the pixel point on the digital image to be detected;
j is used for representing the ordinate of the pixel point on the digital image to be detected;
the image is used for representing a digital image to be detected;
k1 is used to represent the current first classification of the digital image to be detected;
p_k1(i, j) is used to represent the predicted probability that a pixel point on the digital image to be detected is the current first classification;
q_k1(i, j) is used to represent the true probability that a pixel point on the digital image to be detected is the current first classification;
TP is used to represent samples that are actually positive and predicted correctly;
FP is used to represent samples that are actually negative but predicted incorrectly (as positive);
FN is used to represent samples that are actually positive but predicted incorrectly (as negative);
alpha is used to represent the weight value.
Preferably, the method for automatically detecting, wherein the loss function adopted by the second deep semantic segmentation network model is shown in the following formula:
Loss2=λ2·CE2+(1-λ2)·Tversky;
Wherein λ2 is used to represent the ratio of CE2 function and Tversky function in the loss function;
m is used for representing different neighborhood points of pixel points on the digital image to be detected;
C2 is used to represent the second classification number of the digital image to be detected, the second classifications comprising:
the fourth classification, namely crack defects in the second weld defects;
the fifth classification, namely unfused (lack-of-fusion) defects in the second weld defects;
the sixth classification, namely incomplete penetration defects in the second weld defects;
the seventh classification, namely the weld background of the digital image to be detected;
k2 is used to represent the current second classification of the digital image to be detected;
i is used for representing the abscissa of the pixel point on the digital image to be detected;
j is used for representing the ordinate of the pixel point on the digital image to be detected;
the image is used for representing a digital image to be detected;
p_(m,k2)(i, j) is used to represent the predicted probability that a neighborhood point of a pixel point on the digital image to be detected is the current second classification;
q_(m,k2)(i, j) is used to represent the true probability that a neighborhood point of a pixel point on the digital image to be detected is the current second classification;
u_k2 is used to represent the weight of the current second classification;
TP is used to represent samples that are actually positive and predicted correctly;
FP is used to represent samples that are actually negative but predicted incorrectly (as positive);
FN is used to represent samples that are actually positive but predicted incorrectly (as negative);
alpha is used to represent the weight value.
Preferably, the automatic detection method comprises the steps that a sliding window adopted by a first depth semantic segmentation network model is 48 x 48 in size, and the step length is 8; and/or
The size of a sliding window adopted by the second depth semantic segmentation network model is 256×256, and the step size is 64.
Preferably, the automatic detection method, wherein step S5 specifically includes the following steps:
step S51, smoothing the defect area of the fusion defect map to eliminate noise points of the fusion defect map, and performing depth-first search on the fusion defect map;
step S52, calculating the area and the length-width ratio information of each weld defect in the fusion defect map, and converting the area and the length-width ratio information of the weld defect into the defect size in the defect data by combining the image calibration parameters;
wherein the defect data further includes a category of each weld defect in the fusion defect map;
and step S53, combining the defect data with the detection standard table to obtain the defect level of the corresponding weld defect so as to obtain a detection report comprising the defect data and the defect level corresponding to each weld defect.
Preferably, the automatic detection method, wherein step S1 specifically includes the following steps:
Step S11, obtaining a film image to be detected of a weld joint to be detected;
step S12, scanning the film image to be detected into a digital image to be detected by adopting a film scanner; or
And S13, acquiring a digital image to be detected of the weld joint to be detected by adopting a radiographic imaging plate and a digital image acquisition card.
Preferably, the automatic detection method, wherein step S2 specifically includes the following steps:
step S21, performing a first preprocessing operation on the digital image to be detected;
the first preprocessing operation includes:
carrying out noise reduction treatment on the digital image to be detected, and adopting median filtering treatment on the digital image to be detected;
step S22, performing a second preprocessing operation on the digital image to be detected;
the second preprocessing operation includes:
performing Otsu automatic segmentation processing on the digital image to be detected to obtain an automatic segmentation threshold;
keeping the image below the automatic segmentation threshold in the digital image to be detected still, and carrying out nonlinear logarithmic transformation on the image above the automatic segmentation threshold in the digital image to be detected so as to improve the contrast of the digital image to be detected;
step S23, performing a third preprocessing operation on the digital image to be detected;
the third preprocessing operation includes the steps of:
Acquiring a gray level histogram of a digital image to be detected, and filtering the gray level histogram by adopting a Gaussian filter to calculate the minimum value of the filtered gray level histogram;
performing binary segmentation on the digital image to be detected according to the minimum value to obtain a foreground region of the digital image to be detected, and performing expansion processing on the foreground region;
calculating the area of each connected domain in the foreground region after expansion treatment, and taking the circumscribed rectangle corresponding to the connected domain with the largest area as a reference mark, so as to cut the digital image to be detected according to the reference mark, thereby obtaining the cut digital image to be detected.
The automatic detection system of the weld defect is also included, wherein the automatic detection system of the weld defect comprises:
the acquisition module is used for acquiring a digital image to be detected of the weld joint to be detected;
the preprocessing module is used for preprocessing the digital image to be detected to obtain a cut digital image to be detected;
the first detection module is used for inputting the cut digital image to be detected into a first depth semantic segmentation network model so as to detect a first weld defect of the digital image to be detected and obtain a first defect map;
the second detection module is used for inputting the cut digital image to be detected into a second depth semantic segmentation network model so as to detect a second weld defect of the digital image to be detected and obtain a second defect map;
The fusion module is used for carrying out fusion processing on the first defect map and the second defect map so as to obtain a fusion defect map;
the processing module is used for processing the fusion defect map to obtain defect data corresponding to each weld defect, and acquiring defect information of the weld defect by combining the defect data of each weld defect with the detection standard table to obtain a detection report comprising all the defect information;
wherein the weld defects include a first weld defect and a second weld defect.
The technical scheme has the following advantages or beneficial effects:
the neural network formed by the first depth semantic segmentation network model, the first depth semantic segmentation network model and the second depth semantic segmentation network model is a lightweight semantic segmentation network and can be transplanted into a mobile device for field detection.
Secondly, a digital image to be detected of the weld to be detected is acquired indirectly by means of a film scanner, or
directly by means of a radiographic imaging plate and a digital image acquisition card; the digital image to be detected can thus be acquired from two different kinds of equipment, improving the compatibility of the acquisition;
Thirdly, preprocessing operations such as noise reduction on the digital image to be detected greatly reduce the influence of image noise, brightness fluctuation, and the like on the accuracy of welding-quality judgment;
Fourthly, the first depth semantic segmentation network model and the second depth semantic segmentation network model are used to detect the first weld defects and the second weld defects of the digital image to be detected respectively, which widens the detection range, improves the detection precision, and improves the detection efficiency;
Fifthly, the first defect map and the second defect map are fused, and the fusion defect map is then processed to obtain a detection report including the defect information of all weld defects, so that a user can intuitively review the defect information of the detected weld defects.
Drawings
Embodiments of the present invention will now be described more fully with reference to the accompanying drawings. The drawings, however, are for illustration and description only and are not intended as a definition of the limits of the invention.
FIG. 1 is a flow chart of an embodiment of an automatic weld defect detection method of the present invention;
FIG. 2 is a flowchart of step S1 of an embodiment of the method for automatically detecting weld defects according to the present invention;
FIG. 3 is a flowchart of step S2 of an embodiment of the method for automatically detecting weld defects according to the present invention;
FIG. 4 is a flowchart of step S3 of an embodiment of the method for automatically detecting weld defects of the present invention;
FIG. 5 is a schematic structural diagram of a first depth semantic segmentation network model according to an embodiment of the method for automatically detecting a weld defect of the present invention;
FIG. 6 is a schematic diagram of a second depth semantic segmentation network model of an embodiment of the automatic weld defect detection method of the present invention;
FIG. 7 is a schematic diagram of a label of a weld defect of an embodiment of an automatic weld defect detection method of the present invention;
FIG. 8 is a schematic diagram of the 8-neighborhood of a pixel (i, j) of an embodiment of the automatic detection method of weld defects according to the present invention;
FIG. 9 is a flow chart of inputting a digital image to be detected into an inverted residual hole convolution block according to an embodiment of the automatic weld defect detection method of the present invention;
FIG. 10 is a schematic block diagram of an automatic weld defect detection system of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The invention is further described below with reference to the drawings and specific examples, which are not intended to be limiting.
The invention comprises an automatic detection method of weld defects, as shown in figure 1, comprising the following steps:
step S1, obtaining a digital image to be detected of a weld joint to be detected;
step S2, preprocessing the digital image to be detected to obtain a cut digital image to be detected;
s3, inputting the cut digital image to be detected into a first depth semantic segmentation network model to detect a first weld defect of the digital image to be detected, and obtaining a first defect map; and
inputting the cut digital image to be detected into a second depth semantic segmentation network model to detect a second weld defect of the digital image to be detected, so as to obtain a second defect map;
s4, carrying out fusion processing on the first defect map and the second defect map to obtain a fusion defect map;
s5, processing the fusion defect map to obtain defect data corresponding to each weld defect, and combining the defect data of each weld defect with a detection standard table to obtain defect information of the weld defect so as to obtain a detection report comprising all the defect information;
Wherein the weld defects include a first weld defect and a second weld defect.
In the embodiment, the first depth semantic segmentation network model and the second depth semantic segmentation network model are adopted to detect the first weld defects and the second weld defects of the digital image to be detected respectively, so that the detection range is increased and the detection capability is improved; the first defect map and the second defect map are fused, and then the fused defect map is processed to obtain a detection report of defect information including all weld defects, so that a user can intuitively review the defect information of the detected weld defects;
it should be noted that the weld defect may be a first weld defect or a second weld defect;
that is, in step S5, the fusion defect map is processed to obtain defect data corresponding to each first weld defect and each second weld defect;
the defect data of each weld defect comprises the type and the defect size of the weld defect;
the defect information for each weld defect includes a defect description and a defect level.
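To make the overall flow concrete, the following minimal Python sketch strings steps S1 to S5 together; it assumes user-supplied callables for each stage, and all function names are illustrative placeholders rather than names from the patent:

```python
def detect_weld_defects(raw_image, preprocess, model1_predict, model2_predict,
                        fuse_defect_maps, build_report, standard_table):
    """Hypothetical top-level flow of steps S1-S5; every callable here is an
    assumed placeholder supplied by the caller, not an API from the patent."""
    image = preprocess(raw_image)               # step S2: denoise, contrast, crop
    map1 = model1_predict(image)                # step S3: first defect map
    map2 = model2_predict(image)                # step S3: second defect map
    fused = fuse_defect_maps(map1, map2)        # step S4: fusion defect map
    return build_report(fused, standard_table)  # step S5: defect data + report
```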
Further, as a preferred embodiment, as shown in fig. 2, step S1 may include the steps of:
Step S11, obtaining a film image to be detected of a weld joint to be detected;
in step S12, the film image to be detected is scanned into a digital image to be detected by a film scanner.
In the above embodiment, the user may acquire a film image of the weld to be detected using conventional equipment and, after developing the film, scan it into a digital image to be detected using a film scanner.
In the above embodiment, the method can be applied when the user does not possess equipment for directly acquiring digital images, without requiring the user to replace their existing equipment, thereby reducing detection cost.
Further, as a preferred embodiment, step S1 may include the steps of:
and S13, acquiring a digital image to be detected of the weld joint to be detected by adopting a radiographic imaging plate and a digital image acquisition card.
In the above embodiment, the user may acquire the digital image to be detected of the weld to be detected by directly acquiring the X-rays by using the radiographic imaging plate and the digital image acquisition card.
Through the two preferred embodiments described above, digital images to be detected of the weld to be detected can be acquired for two different devices.
Further, in the above embodiment, as shown in fig. 3, step S2 specifically includes the following steps:
Step S21, performing a first preprocessing operation on the digital image to be detected;
the first preprocessing operation includes:
carrying out noise reduction treatment on the digital image to be detected, and adopting median filtering treatment on the digital image to be detected;
step S22, performing a second preprocessing operation on the digital image to be detected;
the second preprocessing operation includes:
performing Otsu automatic segmentation (OTSU) processing on the digital image to be detected to obtain an automatic segmentation threshold;
keeping the pixels below the automatic segmentation threshold in the digital image to be detected unchanged, and applying a nonlinear logarithmic transformation to the pixels above the threshold, so as to improve the contrast of the digital image to be detected and thereby the contrast between the weld defects and the weld background;
as a preferred embodiment, with the automatic segmentation threshold denoted t, pixel values between 0 and t are kept unchanged and a nonlinear logarithmic transformation is applied to values between t and 255, improving the contrast between the weld defects and the weld background in the digital image to be detected.
Step S23, performing a third preprocessing operation on the digital image to be detected;
the third preprocessing operation includes the steps of:
Acquiring a gray level histogram of a digital image to be detected, and filtering the gray level histogram by adopting a Gaussian filter to calculate the minimum value of the filtered gray level histogram;
performing binary segmentation on the digital image to be detected according to the minimum value to obtain a foreground region of the digital image to be detected, and performing expansion processing on the foreground region;
in the above embodiment, the foreground region may be subjected to expansion (dilation) processing using an expansion template, where the size of the expansion template may be 21×21;
the minimum value may be used as the threshold for binary segmentation, so that the digital image to be detected is binarily segmented according to the minimum value to extract the foreground region, which is then expanded with the 21×21 template.
Calculating the area of each connected domain in the foreground region after expansion treatment, and taking the circumscribed rectangle corresponding to the connected domain with the largest area as a reference mark, so as to cut the digital image to be detected according to the reference mark, thereby obtaining the cut digital image to be detected.
As a preferred embodiment, the calculated areas of the connected domains may be sorted to obtain the connected domain with the largest area; the circumscribed rectangle corresponding to this largest connected domain is used as the reference mark, and the image at the corresponding position of the digital image to be detected is then cropped according to the reference mark, the cropped image being the cut digital image to be detected.
In the above embodiment, the noise reduction and median filtering of the digital image to be detected are implemented by adopting the first preprocessing operation;
the contrast of the digital image to be detected is improved by adopting a second preprocessing operation;
and obtaining the cut digital image to be detected by adopting a third preprocessing operation.
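A minimal Python sketch of the three preprocessing operations of steps S21 to S23 follows, using OpenCV and SciPy; the median kernel size, the exact log-transform formula, the histogram smoothing sigma, the choice of the first histogram minimum, and the assumption that the weld foreground is brighter are all illustrative choices, with only the 21×21 template taken from the text:

```python
import cv2
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelmin

def preprocess(img):
    """Sketch of steps S21-S23 for a single-channel 8-bit weld image."""
    # S21: noise reduction by median filtering (kernel size 5 is an assumption)
    img = cv2.medianBlur(img, 5)

    # S22: Otsu threshold t; pixels in [0, t] stay unchanged, pixels above t
    # get a nonlinear logarithmic stretch (this exact formula is an assumption)
    t, _ = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    out = img.astype(np.float64)
    high = out > t
    out[high] = t + (255.0 - t) * np.log1p(out[high] - t) / np.log1p(255.0 - t)
    img = out.astype(np.uint8)

    # S23: Gaussian-filter the gray histogram, take a minimum as the
    # binarization threshold, dilate with a 21x21 template, and crop to the
    # bounding rectangle of the largest connected domain
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    smooth = gaussian_filter1d(hist.astype(np.float64), sigma=3)
    minima = argrelmin(smooth)[0]
    thresh = int(minima[0]) if len(minima) else int(t)
    fg = (img > thresh).astype(np.uint8)   # assumes the weld region is brighter
    fg = cv2.dilate(fg, np.ones((21, 21), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)
    if n <= 1:                             # no foreground found: return as-is
        return img
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    x, y, w, h = stats[largest, :4]        # left, top, width, height
    return img[y:y + h, x:x + w]
```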
Further, in the above embodiment, the step S3 specifically includes the steps of:
the step of obtaining a first defect map comprises the following steps:
pre-segmenting a digital image to be detected by adopting a sliding window of a first depth semantic segmentation network model, and respectively inputting the pre-segmented digital image to be detected into a first network structure so as to detect a first weld defect of the pre-segmented digital image to be detected through the first network structure to obtain a first defect map;
the first network structure comprises a first encoder network structure and a first decoder network structure, and the first encoder network structure and the first decoder network structure form an asymmetric structure;
the step of obtaining the second defect map includes:
pre-segmenting the digital image to be detected by adopting a sliding window of a second depth semantic segmentation network model, and respectively inputting the pre-segmented digital image to be detected into a second network structure to detect a second weld defect of the pre-segmented digital image to be detected through the second network structure so as to obtain a second defect map;
The second network structure comprises a second encoder network structure and a second decoder network structure, and the second encoder network structure and the second decoder network structure form an asymmetric structure.
In the above embodiment, the training data set of the weld defect image may be used to train the depth semantic segmentation network model for the first weld defect, so as to obtain the first depth semantic segmentation network model parameter;
training the depth semantic segmentation network model for a second weld defect by using a weld defect image training data set to obtain second depth semantic segmentation network model parameters;
each item of weld defect image training data in the weld defect image training data set is annotated with the LabelMe labeling tool to produce a defect image label at the corresponding pixel-granularity level.
Further, in the above embodiment, the first encoder network structure and the second encoder network structure each include at least one first convolution block 11, a plurality of second convolution blocks 12, and a plurality of inverted residual hole convolution redundant blocks 13;
wherein the first convolution block 11 comprises at least one normal convolution layer;
the second convolution block 12 comprises at least one normal convolution layer and at least one depth separable convolution layer;
The inverted residual hole convolution redundant block 13 comprises two common convolution layers and one depth separable convolution layer, wherein the depth separable convolution layer is arranged between the two common convolution layers; and the depth separable convolution layer in the inverted residual hole convolution redundant block 13 adopts hole convolution with a dilation rate of 2.
As shown in fig. 9, the input of the block first passes through a 1×1 first common convolution layer, is then input to a 3×3 depth separable convolution layer that uses hole convolution with a dilation rate of 2, and is then input to a 1×1 second common convolution layer; the input of the depth separable convolution layer and the output of the second common convolution layer are then superposed (added) and output.
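For illustration, a PyTorch sketch of such a block is given below; the channel counts, the BN/Relu6 placement, and keeping the projection width equal to the expansion width (so that the stated skip connection from the depthwise layer's input is dimensionally valid) are assumptions, not values from Tables 1 and 3:

```python
import torch
import torch.nn as nn

class InvertedResidualDilatedBlock(nn.Module):
    """Sketch of the inverted residual hole convolution redundant block of
    Fig. 9: 1x1 conv -> 3x3 depthwise conv with dilation rate 2 -> 1x1 conv,
    with the depthwise layer's input added to the final output."""

    def __init__(self, in_ch, mid_ch):
        super().__init__()
        self.expand = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU6(inplace=True))
        self.depthwise = nn.Sequential(
            # dilation=2 with padding=2 keeps the spatial size for a 3x3 kernel
            nn.Conv2d(mid_ch, mid_ch, 3, padding=2, dilation=2,
                      groups=mid_ch, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU6(inplace=True))
        self.project = nn.Sequential(
            nn.Conv2d(mid_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch))

    def forward(self, x):
        expanded = self.expand(x)
        out = self.project(self.depthwise(expanded))
        return out + expanded   # residual from the depthwise layer's input
```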
The first encoder network structure is connected with the first decoder network structure by at least one connection signal;
the second encoder network structure is connected with the second decoder network structure by at least one connection signal;
each common convolution layer and each depth separable convolution layer are provided with input, execution operation and execution parameters and output;
the first decoder network structure is connected with the first encoder network structure, and the second decoder network structure is connected with the second encoder network structure;
The first decoder network structure and the second decoder network structure comprise a plurality of common convolution layers, a plurality of up-sampling layers 14 and an output layer 15, wherein each common convolution layer and each up-sampling layer 14 are provided with input, execution operation and execution parameters and output;
wherein the performing operation of the decoding filter of each of the common convolution layers in the first encoder network structure, the first decoder network structure, the second encoder network structure, and the second decoder network structure includes convolution, standard normalization, and activation.
In the above embodiment, the first deep semantic segmentation network model and the second deep semantic segmentation network model have fewer network parameters, so that the neural network formed by the first deep semantic segmentation network model and the second deep semantic segmentation network model belongs to a lightweight semantic segmentation network and can be transplanted into a mobile device for field detection.
Further, in the above embodiment, the first weld defects include round defects, strip defects, tungsten-clamping defects, and copper-clamping defects; and/or
the second weld defects include crack defects, unfused (lack-of-fusion) defects, and incomplete penetration defects;
wherein the first weld defects and the second weld defects do not overlap.
As a preferred embodiment, as shown in fig. 5, the first encoder network structure may include a first convolution block 11 and three second convolution blocks 12, where the first convolution block 11 includes a common convolution layer, and the common convolution layer is correspondingly provided with input, execution operation and execution parameters, and output; each second convolution block 12 includes a depth separable convolution layer and a normal convolution layer, where the depth separable convolution layer and the normal convolution layer are respectively provided with input, execution operation, execution parameters and output;
the inverted residual hole convolution redundant block 13 in the first encoder network structure includes two common convolution layers and one depth separable convolution layer, with the depth separable convolution layer disposed between the two common convolution layers and adopting hole convolution with a dilation rate of 2.
As shown in fig. 9, the input of the block first passes through a 1×1 first common convolution layer, is then input to a 3×3 depth separable convolution layer that uses hole convolution with a dilation rate of 2, and is then input to a 1×1 second common convolution layer; the input of the depth separable convolution layer and the output of the second common convolution layer are then superposed (added) and output.
The depth separable convolution layer and each common convolution layer are correspondingly provided with input, execution operation and execution parameters and output;
the details are shown in table 1 below:
TABLE 1
In table 1 above, conv is used to represent the convolution operation;
BN (BatchNormalization) is used to represent a batch normalization operation;
relu is used to represent the activation function;
s1 is used to represent the step size of the convolution operation as 1;
s2 is used to represent a step size of 2;
Conv dw (Depthwise Convolution) is used to represent a depth separable convolution;
r2 is used to represent that the dilation rate (rate) of the hole convolution is 2;
wherein layer 1 is the common convolution layer in the first convolution block 11; the coding filter in layer 1 has execution parameters of 3×3×32, the input image specification of layer 1 is 48×48×3, and the execution operation of the coding filter of layer 1 is Conv+BN+Relu/s2 in sequence; and so on;
wherein layers 2 and 3 constitute a second convolution block 12;
layers 4 to 6 form an inverted residual hole convolution redundant block 13;
layers 7 to 9 form an inverted residual hole convolution redundant block 13;
layers 10 to 12 form an inverted residual hole convolution redundant block 13;
layers 13 to 15 form an inverted residual hole convolution redundant block 13;
layers 16 to 18 form an inverted residual hole convolution redundant block 13;
layers 19 and 20 constitute a second convolution block 12;
layers 21 and 22 constitute a second convolution block 12.
In table 1 above, the first encoder network structure consists of 9 blocks, and its input image is a 48×48×3 image patch cut by the sliding window. First, a 3×3 common convolution is performed once with batch normalization, using the Relu6 activation function; channel adjustment is then performed by 1 depth separable convolution followed by a 1×1 pointwise convolution (Pointwise Convolution, PW), with batch normalization and the Relu6 activation function. The data then enters 5 inverted residual hole convolution redundant blocks 13; finally, 2 depth separable convolutions are performed.
The first decoder network structure in the first network structure may be as shown in table 2 below:
TABLE 2
In table 2 above, the first encoder network structure is connected to the first decoder network structure using two connection signals; i.e. performing a connect operation before layer 1 in the first decoder network structure such that at this time the input of layer 1 in the first decoder network structure also comprises the output of layer 18 in the first encoder network structure, so the input of layer 1 in the first decoder network structure comprises the output of the first encoder network structure and the output of layer 18 in the first encoder network structure;
Performing a connect operation prior to layer 3 in the first decoder network structure such that at this time the input of layer 3 in the first decoder network structure also includes an output of layer 12 in the first encoder network structure, so the input of layer 3 in the first decoder network structure includes an output of layer 2 in the first decoder network structure and an output of layer 12 in the first encoder network structure;
the layer where the execution operation is Conv is a common convolution layer, and Conv is used for representing convolution operation;
the layer where the execution operation is Upsampling is an Upsampling layer 14, and Upsampling is used for representing the Upsampling operation;
the layer on which the operation Softmax is performed is the output layer 15, softmax being used for classifying the pixels.
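The decoder wiring just described can be sketched as follows in PyTorch; the channel counts and upsampling factors are assumptions, and the caller must supply encoder feature maps whose spatial sizes match at each concatenation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderSketch(nn.Module):
    """Illustrative first-decoder wiring per Table 2's description: encoder
    features are concatenated ("connect" operations) before decoder layers 1
    and 3, each followed by convolution and upsampling; channel counts here
    are assumptions, and feat18/feat12 must spatially match at each concat."""

    def __init__(self, enc_out_ch, skip18_ch, skip12_ch, num_classes=3):
        super().__init__()
        self.conv1 = nn.Conv2d(enc_out_ch + skip18_ch, 128, 3, padding=1)
        self.conv3 = nn.Conv2d(128 + skip12_ch, 64, 3, padding=1)
        self.classify = nn.Conv2d(64, num_classes, 1)

    def forward(self, enc_out, feat18, feat12):
        x = torch.cat([enc_out, feat18], dim=1)       # connect before layer 1
        x = F.relu(self.conv1(x))
        x = F.interpolate(x, scale_factor=2)          # upsampling layer
        x = torch.cat([x, feat12], dim=1)             # connect before layer 3
        x = F.relu(self.conv3(x))
        x = F.interpolate(x, scale_factor=2)
        return torch.softmax(self.classify(x), dim=1) # Softmax output layer
```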
As shown in fig. 6, the second encoder network structure may include a first convolution block 11 and five second convolution blocks 12, where the first convolution block 11 includes a common convolution layer, and the common convolution layer is correspondingly provided with input, execution operation and execution parameters, and output; each second convolution block 12 includes a depth separable convolution layer and a normal convolution layer, where the depth separable convolution layer and the normal convolution layer are respectively provided with input, execution operation, execution parameters and output;
The inverted residual hole convolution redundant block 13 in the second encoder network structure comprises two layers of common convolution layers and one layer of depth separable convolution layer, the depth separable convolution layer is arranged between the two layers of common convolution layers, and the input, the execution operation, the execution parameters and the output are correspondingly arranged on each layer of common convolution layer;
the second encoder network structure in the second network structure may be as shown in table 3 below:
TABLE 3
In table 3 above, conv is used to represent the convolution operation;
BN (BatchNormalization) is used to represent a batch normalization operation;
relu is used to represent the activation function;
s1 is used to represent the step size of the convolution operation as 1;
s2 is used to represent a step size of 2;
Conv dw (Depthwise Convolution) is used to represent a depth separable convolution;
r2 is used to represent that the dilation rate of the hole convolution is 2;
wherein layer 1 is the common convolution layer in the first convolution block 11; the coding filter in layer 1 has execution parameters of 3×3×32, the input image specification of layer 1 is 256×256×3, and the execution operation of layer 1 is Conv+BN+Relu/s2 in sequence; and so on;
wherein layers 2 and 3 constitute a second convolution block 12;
layer 4 and layer 5 form a second convolution block 12;
Layers 6 and 7 constitute a second convolution block 12;
layers 8 and 9 constitute a second convolution block 12;
layers 10 and 11 constitute a second convolution block 12;
layers 12 to 14 form an inverted residual hole convolution redundant block 13;
layers 15 to 17 form an inverted residual hole convolution redundant block 13;
layers 18 to 20 form an inverted residual hole convolution redundant block 13;
layers 21 to 23 form an inverted residual hole convolution redundant block 13;
layers 24 to 26 form an inverted residual hole convolution redundant block 13.
In table 3 above, the second encoder network structure consists of 11 blocks, and the input image of the network is a 256×256×3 image patch cut by the sliding window. First, a 3×3 common convolution is performed once with batch normalization, using the Relu6 activation function; channel adjustment is then performed by 5 depth separable convolutions, each followed by a 1×1 pointwise convolution (Pointwise Convolution, PW), with batch normalization and the Relu6 activation function. Next, the data enters 5 inverted residual hole convolution redundant blocks 13.
The second decoder network structure in the second network structure may be as shown in table 4 below:
TABLE 4
In table 4 above, the second encoder network structure is connected to the second decoder network structure using three connection signals; performing a connect operation prior to layer 1 in the second decoder network structure such that at this time the input of layer 1 in the second decoder network structure also includes an output of layer 11 in the second encoder network structure, so the input of layer 1 in the second decoder network structure includes an output of the second encoder network structure and an output of layer 11 in the second encoder network structure;
Performing a connect operation prior to layer 3 in the second decoder network structure such that at this time the input of layer 3 in the second decoder network structure also includes an output of layer 7 in the second encoder network structure, so the input of layer 3 in the second decoder network structure includes an output of layer 2 in the second decoder network structure and an output of layer 7 in the second encoder network structure;
performing a connect operation prior to layer 5 in the second decoder network structure at which time the input of layer 5 in the second decoder network structure also includes an output of layer 3 in the second encoder network structure, so the input of layer 5 in the second decoder network structure includes an output of layer 4 in the second decoder network structure and an output of layer 3 in the second encoder network structure;
the layer where the execution operation is Conv is a common convolution layer, and Conv is used for representing convolution operation;
the layer where the execution operation is Upsampling is an Upsampling layer 14, and Upsampling is used for representing the Upsampling operation;
the layer on which the operation Softmax is performed is the output layer 15, softmax being used for classifying the pixels.
In table 4 above, layers 1, 3, 5, and 7 are common convolution layers, each of which forms a first convolution block 11.
In the above embodiment, the Loss function (Loss) adopted by the first deep semantic segmentation network model is shown in the following formula:
Loss1 = λ1·CE1 + (1-λ1)·Tversky; (1)
where, per the variable definitions below, CE1 and Tversky take the standard cross-entropy and Tversky-loss forms:
CE1 = -Σ_(i,j)∈image Σ_(k1=1..C1) q_k1(i,j)·log(p_k1(i,j)); (2)
Tversky = 1 - TP/(TP + α·FP + (1-α)·FN); (3)
wherein, in formulas (1)-(3), λ1 is used to represent the respective proportions of the CE1 (cross-entropy loss, CrossEntropy Loss) and Tversky terms in the loss function, and can be set by the user, for example to 0.7;
C1 is used to represent the first classification number of the digital image to be detected, where C1 may be 3, i.e. the first classifications of the digital image to be detected may include:
the first classification, namely round defects and strip defects in the first weld defects;
the second classification, namely tungsten-clamping and copper-clamping defects in the first weld defects;
the third classification, namely the weld background of the digital image to be detected.
i is used for representing the abscissa of the pixel point on the digital image to be detected;
j is used for representing the ordinate of the pixel point on the digital image to be detected;
the image is used for representing a digital image to be detected;
k1 is used to represent the current first classification of the digital image to be detected;
p_k1(i, j) is used to represent the predicted probability that a pixel point on the digital image to be detected is the current first classification (k1);
q_k1(i, j) is used to represent the true probability that a pixel point on the digital image to be detected is the current first classification (k1), taking only the values 0 and 1;
TP (True Positive) is used to represent samples that are actually positive and predicted correctly;
FP (False Positive) is used to represent samples that are actually negative but predicted incorrectly (as positive);
FN (False Negative) is used to represent samples that are actually positive but predicted incorrectly (as negative);
α is used to represent a weight value, where α may be 0.5.
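A compact PyTorch rendition of Loss1 under formulas (1)-(3) is sketched below; the soft (probability-based) counting of TP, FP, and FN in the Tversky term and the mean-over-pixels normalization of the cross-entropy are assumptions:

```python
import torch

def loss1(pred, target, lam1=0.7, alpha=0.5, eps=1e-6):
    """Sketch of Loss1 = lam1*CE1 + (1-lam1)*Tversky.
    pred:   (N, C1, H, W) per-pixel class probabilities p_k1(i, j)
    target: (N, C1, H, W) one-hot true probabilities q_k1(i, j)"""
    ce = -(target * torch.log(pred.clamp_min(eps))).sum(dim=1).mean()
    tp = (pred * target).sum()            # soft true positives
    fp = (pred * (1 - target)).sum()      # soft false positives
    fn = ((1 - pred) * target).sum()      # soft false negatives
    tversky = 1 - tp / (tp + alpha * fp + (1 - alpha) * fn + eps)
    return lam1 * ce + (1 - lam1) * tversky
```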
Specifically, in the above embodiment, the loss function of the second depth semantic segmentation network model may be obtained by weighted summation of the loss values of the current pixel point and the loss values corresponding to the 8 neighborhood pixel points of the current pixel point;
as shown in fig. 7, 1 represents a defect point and 0 represents a background point. For a linear defect, if all 8 neighbors of a pixel are defects of a certain type, the current pixel can confidently be determined to be a defect of that type; if none of the 8 neighbors is, the probability that such an isolated point belongs to that defect type can be taken as 0. Referring to fig. 7 and 8, the pixel valued 1 in the dotted-line frame and the pixel valued 1 in the solid-line frame have different neighborhood information:
for the pixel valued 1 in the dotted-line frame, neighborhoods 1 through 5 are 1, neighborhoods 6 and 7 are 0, and neighborhood 8 is 1, so the pixel in the dotted-line frame can be determined to be a defect point;
for the pixel valued 1 in the solid-line frame, all of neighborhoods 1 through 8 are 0, so the pixel in the solid-line frame can be determined to be a non-defect point.
Thus, its loss function is changed to:
Loss2 = λ2·CE2 + (1-λ2)·Tversky; (4)
where one natural reading of the definitions below extends the cross-entropy over the 8 neighborhood points with per-class weights u_k2, while the Tversky term keeps the form of formula (3):
CE2 = -Σ_(i,j)∈image Σ_(k2=1..C2) u_k2 · Σ_(m=1..8) q_(m,k2)(i,j)·log(p_(m,k2)(i,j)); (5)
Tversky = 1 - TP/(TP + α·FP + (1-α)·FN); (6)
wherein, in formulas (4)-(6), λ2 is used to represent the respective proportions of the CE2 and Tversky terms in the loss function;
m is used for representing different neighborhood points of pixel points on the digital image to be detected, namely the value range of m is 1-8;
C2 is used to represent the second classification number of the digital image to be detected, where C2 may be 4, i.e. the second classifications of the digital image to be detected may comprise:
the fourth classification, namely crack defects in the second weld defects;
the fifth classification, namely unfused (lack-of-fusion) defects in the second weld defects;
the sixth classification, namely incomplete penetration defects in the second weld defects;
the seventh classification, namely the weld background of the digital image to be detected;
k2 is used to represent the current second classification of the digital image to be detected;
i is used for representing the abscissa of the pixel point on the digital image to be detected;
j is used for representing the ordinate of the pixel point on the digital image to be detected;
The image is used for representing a digital image to be detected;
p_(m,k2)(i, j) is used to represent the predicted probability that neighborhood point m of a pixel point on the digital image to be detected is the current second classification (k2), i.e. p_(m,k2)(i, j) takes values from 0 to 1;
q_(m,k2)(i, j) is used to represent the true probability that neighborhood point m of a pixel point on the digital image to be detected is the current second classification (k2), i.e. q_(m,k2)(i, j) takes only the values 0 and 1;
u_k2 is used to represent the weight of the current second classification;
TP is used to represent samples that are actually positive and predicted correctly;
FP is used to represent samples that are actually negative but predicted incorrectly (as positive);
FN is used to represent samples that are actually positive but predicted incorrectly (as negative);
α is used to represent the weight value.
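Under the same caveat that the exact combination of the 8 neighborhood positions is not fully specified, one plausible PyTorch sketch of the neighborhood-weighted CE2 term of Loss2 is the following, where class_w is a (C2,) tensor of the weights u_k2:

```python
import torch
import torch.nn.functional as F

def loss2(pred, target, class_w, lam2=0.7, alpha=0.5, eps=1e-6):
    """Sketch of Loss2: cross-entropy aggregated over each pixel's
    8-neighborhood with per-class weights u_k2, plus the same Tversky term
    as in loss1. pred/target: (N, C2, H, W) probability maps."""
    n, c, h, w = pred.shape
    # unfold 3x3 neighborhoods: (N, C2*9, H*W) -> (N, C2, 9, H, W)
    p9 = F.unfold(pred, 3, padding=1).view(n, c, 9, h, w)
    q9 = F.unfold(target, 3, padding=1).view(n, c, 9, h, w)
    keep = [i for i in range(9) if i != 4]   # drop the center, keep 8 neighbors
    p8, q8 = p9[:, :, keep], q9[:, :, keep]
    w_k2 = class_w.view(1, c, 1, 1, 1)
    ce = -(w_k2 * q8 * torch.log(p8.clamp_min(eps))).sum(dim=(1, 2)).mean()
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    tversky = 1 - tp / (tp + alpha * fp + (1 - alpha) * fn + eps)
    return lam2 * ce + (1 - lam2) * tversky
```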
Further, in the above embodiment, the size of the sliding window adopted by the first depth semantic segmentation network model is 48×48, and the step size is 8; and/or
The size of a sliding window adopted by the second depth semantic segmentation network model is 256×256, and the step size is 64.
In the above embodiment, the first depth semantic segmentation network model and the second depth semantic segmentation network model both use sliding windows to perform window segmentation on the input image, and the step sizes of the sliding windows used by the first depth semantic segmentation network model and the second depth semantic segmentation network model are smaller than the window sizes, so that the first defect map and the second defect map respectively output by the first depth semantic segmentation network model and the second depth semantic segmentation network model have the problem of overlapping pixel parts, and therefore, a weighting algorithm is required when the detection result of each pixel corresponding to the original digital image to be detected is restored, and finally, the classification result of the pixel level is obtained.
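A NumPy sketch of this overlapped sliding-window inference and weighted restoration follows; averaging the accumulated probabilities over the per-pixel visit count is one simple choice of the weighting, since the exact weights are not fixed here, and model_fn is an assumed callable returning per-pixel class probabilities for one window:

```python
import numpy as np

def sliding_window_predict(image, model_fn, win=48, step=8, num_classes=3):
    """Overlapping windows of size win x win every `step` pixels; class
    probabilities are accumulated and overlaps averaged, then argmaxed."""
    h, w = image.shape[:2]
    acc = np.zeros((h, w, num_classes), dtype=np.float64)
    cnt = np.zeros((h, w, 1), dtype=np.float64)
    for y in range(0, max(h - win, 0) + 1, step):
        for x in range(0, max(w - win, 0) + 1, step):
            patch = image[y:y + win, x:x + win]
            probs = model_fn(patch)                 # (win, win, num_classes)
            acc[y:y + win, x:x + win] += probs
            cnt[y:y + win, x:x + win] += 1.0
    probs = acc / np.maximum(cnt, 1.0)
    return probs.argmax(axis=-1)                    # per-pixel class map
```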
In the above embodiment, in the fusion defect map obtained in step S4, each weld defect has a corresponding category, where the categories cover both the first weld defects and the second weld defects; that is, the category corresponding to each weld defect may be any one of the round, strip, tungsten-clamping, and copper-clamping defects among the first weld defects, or any one of the crack, unfused (lack-of-fusion), and incomplete penetration defects among the second weld defects; in other words, each weld defect may be any one of a round defect, a strip defect, a tungsten-clamping defect, a copper-clamping defect, a crack defect, an unfused defect, and an incomplete penetration defect;
and the category corresponding to each weld defect has been obtained in the fusion defect map obtained in step S4.
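As an illustration only, a minimal fusion of the two per-pixel class maps might look as follows; the integer label coding and the rule that the second model's labels overwrite the first's on overlap are assumptions, since the text states only that the two maps are fused:

```python
import numpy as np

def fuse_defect_maps(map1, map2):
    """Assumed coding: 0 = background; 1-4 = round, strip, tungsten-clamping,
    copper-clamping (first model); map2 holds 1-3 for crack, unfused,
    incomplete penetration, which are shifted to 5-7 in the fused map."""
    fused = map1.copy()
    mask = map2 > 0
    fused[mask] = map2[mask] + 4   # shift second-model labels past the first's
    return fused
```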
Further, in the above embodiment, as shown in fig. 4, step S5 specifically includes the following steps:
step S51, smoothing the defect area of the fusion defect map to eliminate noise points of the fusion defect map, and performing depth-first search on the fusion defect map;
step S52, calculating the area and the length-width ratio information of each weld defect in the fusion defect map, and converting the area and the length-width ratio information of the weld defect into the defect size in the defect data by combining the image calibration parameters;
Wherein the defect data further includes a category of each weld defect in the fusion defect map;
and step S53, combining the defect data with the detection standard table to obtain the defect level of the corresponding weld defect so as to obtain a detection report comprising the defect data and the defect level corresponding to each weld defect.
In the above embodiment, since the neural network outputs defect predictions at the pixel level, a depth-first search is performed to determine which pixels belong to the same defect, thereby obtaining the specific pixel region of each weld defect.
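A minimal sketch of this search is given below, assuming the fused pixel-level label map is a 2D integer array with 0 as the welding seam background; the iterative depth-first search, 4-connectivity, and the output record format are illustrative choices. The pixel area and length-width ratio computed here would then be converted to physical defect sizes using the image calibration parameters of step S52.

```python
import numpy as np

def find_defect_regions(label_map, background=0):
    """Group same-category pixels into connected defect regions via iterative DFS."""
    h, w = label_map.shape
    visited = np.zeros((h, w), dtype=bool)
    regions = []
    for sy in range(h):
        for sx in range(w):
            if visited[sy, sx] or label_map[sy, sx] == background:
                continue
            category = label_map[sy, sx]
            stack, pixels = [(sy, sx)], []
            visited[sy, sx] = True
            while stack:  # depth-first search over 4-connected neighbours
                y, x = stack.pop()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not visited[ny, nx]
                            and label_map[ny, nx] == category):
                        visited[ny, nx] = True
                        stack.append((ny, nx))
            ys = [p[0] for p in pixels]
            xs = [p[1] for p in pixels]
            height = max(ys) - min(ys) + 1
            width = max(xs) - min(xs) + 1
            regions.append({
                "category": int(category),
                "area_px": len(pixels),  # defect area in pixels
                "aspect_ratio": max(height, width) / min(height, width),
            })
    return regions
```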
Further, in the above embodiment, the method further comprises:
and step S6, storing the digital image to be detected and a detection report corresponding to the digital image to be detected in a memory.
In the above embodiment, the digital image to be detected and its corresponding detection report may be stored in the memory in one-to-one correspondence; when a user wants to consult either of them, the corresponding detection report or digital image to be detected can be looked up directly, which facilitates retrieval at any time.
In the above embodiment, the memory may be a cloud memory or a local memory.
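The storage scheme is not detailed beyond this one-to-one correspondence, so the following is a minimal sketch assuming a local filesystem store keyed by a shared image identifier; the file layout and function names are hypothetical.

```python
import json
from pathlib import Path

def store_result(image_bytes, report: dict, image_id: str, root="weld_store"):
    base = Path(root)
    base.mkdir(exist_ok=True)
    (base / f"{image_id}.png").write_bytes(image_bytes)         # digital image to be detected
    (base / f"{image_id}.json").write_text(json.dumps(report))  # corresponding detection report

def load_report(image_id: str, root="weld_store") -> dict:
    # One-to-one correspondence: the report is found directly from the image ID.
    return json.loads((Path(root) / f"{image_id}.json").read_text())
```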
Also included is an automatic weld defect detection system, as shown in FIG. 10, comprising:
the acquisition module 1 is used for acquiring a digital image to be detected of the weld joint to be detected;
the preprocessing module 2 is connected with the acquisition module 1 and is used for preprocessing the digital image to be detected to obtain a cut digital image to be detected;
the first detection module 3 is connected with the preprocessing module 2 and is used for inputting the cut digital image to be detected into a first depth semantic segmentation network model so as to detect a first weld defect of the digital image to be detected and obtain a first defect map;
the second detection module 4 is connected with the preprocessing module 2 and is used for inputting the cut digital image to be detected into a second depth semantic segmentation network model so as to detect a second weld defect of the digital image to be detected and obtain a second defect map;
the fusion module 5 is respectively connected with the first detection module 3 and the second detection module 4 and is used for carrying out fusion treatment on the first defect map and the second defect map so as to obtain a fusion defect map;
the processing module 6 is connected with the fusion module 5 and is used for processing the fusion defect map to obtain defect data corresponding to each weld defect, and acquiring defect information of each weld defect by combining the detection standard table to obtain a detection report comprising all the defect information;
Wherein the weld defects include a first weld defect and a second weld defect.
The specific implementation manner of the automatic detection system of the present invention is basically the same as that of the above-mentioned embodiments of the automatic detection method, and will not be repeated here.
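For orientation, the module connections of fig. 10 can be sketched as a simple pipeline in which each module is a callable; this wiring is an illustrative reading of the system description above, not code from the patent.

```python
class WeldInspectionSystem:
    def __init__(self, acquire, preprocess, detect1, detect2, fuse, process):
        # Each argument is a callable standing in for the corresponding module.
        self.acquire, self.preprocess = acquire, preprocess
        self.detect1, self.detect2 = detect1, detect2
        self.fuse, self.process = fuse, process

    def run(self, weld_id):
        image = self.acquire(weld_id)                 # acquisition module 1
        cropped = self.preprocess(image)              # preprocessing module 2
        defect_map1 = self.detect1(cropped)           # first detection module 3
        defect_map2 = self.detect2(cropped)           # second detection module 4
        fused = self.fuse(defect_map1, defect_map2)   # fusion module 5
        return self.process(fused)                    # processing module 6 -> detection report
```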
The foregoing is merely illustrative of the preferred embodiments of the present invention and is not intended to limit the embodiments and scope of the present invention, and it should be appreciated by those skilled in the art that equivalent substitutions and obvious variations may be made using the description and illustrations of the present invention, and are intended to be included in the scope of the present invention.

Claims (10)

1. An automatic detection method for weld defects is characterized by comprising the following steps:
step S1, obtaining a digital image to be detected of a weld joint to be detected;
step S2, preprocessing the digital image to be detected to obtain a cut digital image to be detected;
s3, inputting the cut digital image to be detected into a first depth semantic segmentation network model to detect a first weld defect of the digital image to be detected, and obtaining a first defect map; and
inputting the cut digital image to be detected into a second depth semantic segmentation network model to detect a second weld defect of the digital image to be detected, so as to obtain a second defect map;
S4, carrying out fusion processing on the first defect map and the second defect map to obtain a fusion defect map;
s5, processing the fusion defect map to obtain defect data corresponding to each weld defect, and acquiring defect information of the weld defects by combining the defect data of each weld defect with a detection standard table to obtain a detection report comprising all the defect information;
wherein the weld defects include the first weld defect and the second weld defect.
2. The automatic detection method according to claim 1, wherein the step S3 specifically includes the steps of:
the step of obtaining a first defect map comprises the following steps:
pre-segmenting the digital image to be detected by adopting a sliding window of the first depth semantic segmentation network model, and respectively inputting the pre-segmented digital image to be detected into a first network structure to detect the first weld defects of the pre-segmented digital image to be detected through the first network structure so as to obtain a first defect map;
wherein the first network structure comprises a first encoder network structure and a first decoder network structure, the first encoder network structure and the first decoder network structure forming an asymmetric structure;
The step of obtaining the second defect map includes:
pre-segmenting the digital image to be detected by adopting a sliding window of the second depth semantic segmentation network model, and respectively inputting the pre-segmented digital image to be detected into a second network structure to detect the second weld defects of the pre-segmented digital image to be detected through the second network structure so as to obtain a second defect map;
the second network structure comprises a second encoder network structure and a second decoder network structure, and the second encoder network structure and the second decoder network structure form an asymmetric structure.
3. The automatic detection method of claim 2, wherein the first encoder network structure and the second encoder network structure each comprise at least a first convolution block, a plurality of second convolution blocks, and a plurality of inverted residual hole convolution redundant blocks;
wherein the first convolution block comprises at least one common convolution layer;
the second convolution block comprises at least one common convolution layer and at least one depth separable convolution layer;
the inverted residual hole convolution redundant block comprises two common convolution layers and one depth separable convolution layer, and the depth separable convolution layer is arranged between the two common convolution layers;
the first encoder network structure is connected with the first decoder network structure by at least one connection signal;
the second encoder network structure is connected with the second decoder network structure by at least one connection signal;
the depth separable convolution layer in the inverted residual hole convolution redundant block adopts hole convolution with an expansion coefficient of 2;
the first decoder network structure and the second decoder network structure each include a plurality of normal convolutional layers, a plurality of upsampling layers, and an output layer.
4. The automatic detection method of claim 1, wherein the first depth semantic segmentation network model employs a loss function as shown in the following formula:
Loss1=λ1·CE1+(1-λ1)·Tversky;
wherein λ1 is used to represent the proportion of the CE1 function relative to the Tversky function in the loss function;
c1 is used to represent the number of first classifications of the digital image to be detected, the first classifications comprising:
a first classification, namely a round defect and a strip defect in the first weld defects;
a second classification, namely tungsten-inclusion defects and copper-inclusion defects in the first weld defects;
a third classification, namely, welding seam background of the digital image to be detected;
i is used for representing the abscissa of the pixel point on the digital image to be detected;
j is used for representing the ordinate of the pixel point on the digital image to be detected;
the image is used for representing the digital image to be detected;
k1 is used for representing the current first classification of the digital image to be detected;
p_k1(i,j) is used to represent the prediction probability that a pixel point on the digital image to be detected belongs to the current first classification;
q_k1(i,j) is used to represent the true probability that a pixel point on the digital image to be detected belongs to the current first classification;
TP is used to represent samples that are actually positive and are predicted correctly (true positives);
FP is used to represent samples that are actually negative but are predicted as positive (false positives);
FN is used to represent samples that are actually positive but are predicted as negative (false negatives);
alpha is used to represent the weight value.
5. The automatic detection method of claim 1, wherein the second depth semantic segmentation network model employs a loss function as shown in the following formula:
Loss2=λ2·CE2+(1-λ2)·Tversky;
wherein λ2 is used to represent the proportion of the CE2 function relative to the Tversky function in the loss function;
m is used for representing different neighborhood points of the pixel points on the digital image to be detected;
c2 is used to represent the number of second classifications of the digital image to be detected, the second classifications comprising:
a fourth classification, namely crack defects in the second weld defects;
a fifth classification, namely unfused (lack-of-fusion) defects in the second weld defects;
a sixth classification, namely incomplete-penetration defects in the second weld defects;
a seventh classification, namely the welding seam background of the digital image to be detected;
k2 is used for representing the current second classification of the digital image to be detected;
i is used for representing the abscissa of the pixel point on the digital image to be detected;
j is used for representing the ordinate of the pixel point on the digital image to be detected;
the image is used for representing the digital image to be detected;
p_(m,k2)(i,j) is used to represent the prediction probability that a neighborhood point of a pixel point on the digital image to be detected belongs to the current second classification;
q_(m,k2)(i,j) is used to represent the true probability that a neighborhood point of a pixel point on the digital image to be detected belongs to the current second classification;
u_k2 is used to represent the weight of the current second classification;
TP is used to represent samples that are actually positive and are predicted correctly (true positives);
FP is used to represent samples that are actually negative but are predicted as positive (false positives);
FN is used to represent samples that are actually positive but are predicted as negative (false negatives);
alpha is used to represent the weight value.
6. The automatic detection method according to claim 1, wherein the size of a sliding window adopted by the first depth semantic segmentation network model is 48 x 48, and the step size is 8; and/or
The size of a sliding window adopted by the second depth semantic segmentation network model is 256×256, and the step size is 64.
7. The automatic detection method according to claim 1, wherein the step S5 specifically includes the steps of:
step S51, smoothing the defect area of the fusion defect map to eliminate noise points of the fusion defect map, and performing depth-first search on the fusion defect map;
step S52, calculating the area and the length-width ratio information of each weld defect in the fusion defect map, and converting the area and the length-width ratio information of the weld defects into defect sizes in the defect data by combining with image calibration parameters;
wherein the defect data further includes a category of each of the weld defects in the fused defect map;
step S53, obtaining the defect level of the corresponding weld defect by combining each defect data with the detection standard table, so as to obtain the detection report including the defect data and the defect level corresponding to each weld defect.
8. The automatic detection method according to claim 1, wherein the step S1 specifically includes the steps of:
Step S11, obtaining a film image to be detected of the weld joint to be detected;
step S12, scanning the film image to be detected into the digital image to be detected by adopting a film scanner; or
step S13, acquiring the digital image to be detected of the weld joint to be detected by adopting a radiographic imaging plate and a digital image acquisition card.
9. The automatic detection method according to claim 1, wherein the step S2 specifically includes the steps of:
step S21, performing a first preprocessing operation on the digital image to be detected;
the first preprocessing operation includes:
carrying out noise reduction treatment on the digital image to be detected, and carrying out median filtering treatment on the digital image to be detected;
step S22, performing a second preprocessing operation on the digital image to be detected;
the second preprocessing operation includes:
performing Otsu automatic segmentation processing on the digital image to be detected to obtain an automatic segmentation threshold;
keeping the part of the digital image to be detected below the automatic segmentation threshold unchanged, and performing nonlinear logarithmic transformation on the part above the automatic segmentation threshold, so as to improve the contrast of the digital image to be detected;
Step S23, performing a third preprocessing operation on the digital image to be detected;
the third preprocessing operation includes the steps of:
acquiring a gray level histogram of the digital image to be detected, and filtering the gray level histogram by adopting a Gaussian filter to calculate the minimum value of the gray level histogram after the filtering;
performing binary segmentation on the digital image to be detected according to the minimum value to obtain a foreground region of the digital image to be detected, and performing expansion processing on the foreground region;
calculating the area of each connected domain in the foreground region after expansion treatment, and taking the circumscribed rectangle corresponding to the connected domain with the largest area as a reference mark, so as to cut the digital image to be detected according to the reference mark, thereby obtaining the cut digital image to be detected.
10. An automatic weld defect detection system, comprising:
the acquisition module is used for acquiring a digital image to be detected of the weld joint to be detected;
the preprocessing module is used for preprocessing the digital image to be detected to obtain the cut digital image to be detected;
the first detection module is used for inputting the cut digital image to be detected into a first depth semantic segmentation network model so as to detect a first weld defect of the digital image to be detected and obtain a first defect map;
The second detection module is used for inputting the cut digital image to be detected into a second depth semantic segmentation network model so as to detect a second weld defect of the digital image to be detected and obtain a second defect map;
the fusion module is used for carrying out fusion processing on the first defect map and the second defect map so as to obtain a fusion defect map;
the processing module is used for processing the fusion defect map to obtain defect data corresponding to each weld defect, and acquiring defect information of the weld defect by combining the defect data of each weld defect with a detection standard table to obtain a detection report comprising all the defect information;
wherein the weld defects include the first weld defect and the second weld defect.