CN114511543A - Intelligent defect evaluation system and method for radiographic negative of long-distance pipeline - Google Patents
- Publication number
- CN114511543A (application CN202210126866.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- defect
- loss
- mask
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/0006—Industrial image inspection using a design-rule based approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30152—Solder
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention relates to the technical field of pipeline weld quality evaluation, in particular to an intelligent defect evaluation system and method for long-distance pipeline radiographic negatives. The specific process is as follows: the original image to be evaluated is first corrected, and negative quality analysis is performed with a quality inspection model. Intelligent defect identification is then carried out on qualified negatives, and the outer-layer fusion line and root fusion line are extracted. Wall thickness information and signature location information are then determined by an OCR recognition model. Finally, the intelligently identified defects inside the weld bead are quantitatively measured and comprehensively rated according to the national standard, realizing intelligent weld quality evaluation and greatly improving evaluation efficiency.
Description
Technical Field
The invention relates to the technical field of pipeline weld quality evaluation, in particular to an intelligent defect evaluation system and method for a long-distance pipeline radiographic negative.
Background
The petrochemical industry is a lifeline of the national economy, and oil and gas pipelines are widely used in transportation engineering. Pipeline leaks still occur from time to time, the safety of oil and gas pipelines has drawn close national attention, and weld quality assessment is a key link in maintaining pipeline safety.
In the current weld quality assessment process, defect evaluation relies mainly on human experts. Manual assessment, however, has two major drawbacks: it is subjective and it is inefficient. The design of an intelligent defect assessment system is therefore of great value.
Developing a complete intelligent defect identification system presents several difficulties. The evaluation process is tedious: besides defect identification itself, it involves fusion-line extraction, radiographic image quality inspection, signature extraction and similar operations. The core task of intelligent defect identification faces further challenges: the defect types are diverse, defect shapes vary greatly, defect images are unclear, and the background (e.g. artifacts) is complex and severely interferes with defect qualification. Aiming at these challenges, the invention provides an intelligent comprehensive evaluation system for defects in pipeline radiographic negatives that covers all of these functions and offers important reference value for weld quality evaluation projects.
At present, deep learning is widely applied in industry, and training models in a data-driven way has become mainstream. An intelligent model can effectively avoid the drawbacks of manual evaluation, greatly improve evaluation efficiency, and provide strong technical support for quickly locating damage defects in high-consequence areas and high-risk areas.
Disclosure of Invention
Aiming at the low efficiency of the existing manual evaluation of defects in pipeline radiographs, the invention provides an intelligent defect evaluation system and method for radiographic negatives of long-distance pipelines, covering the whole process of comprehensive weld quality evaluation according to the national standard.
The specific technical scheme is as follows:
an intelligent defect evaluation system and method for a long-distance pipeline radiographic negative specifically comprises the following steps:
step 1: correcting the radiographic image to be evaluated according to a detection flow;
step 1-1: judging the pixel width and height of the radiographic image to determine whether it is a vertical (portrait) or horizontal (landscape) film; horizontal films are passed on as-is, and vertical films are converted into horizontal films before proceeding;
step 1-2: image segmentation: recording the width and height of the original image as (w, h), the cutting procedure is as follows: the height of the cut image is fixed to 648 pixels, and the width is scaled proportionally by bilinear interpolation, where the scaling ratio t satisfies t = 648/h, so the scaled image resolution is (w', h') = (t × w, 648); the scaled image is cut into overlapping squares, the overlap width is set to γ, and the width is zero-padded so that the number of cut subgraphs n is an integer, where n = (w' + ε − γ)/(648 − γ), wherein: ε is the width of the zero padding;
step 2: performing OCR recognition on the weld seam image for signature extraction;
step 3: performing quality inspection on the weld seam image according to the national standard;
step 4: sequentially inputting the divided sub-images into a defect identification model for identification, and outputting defect pixel position information and preliminary qualitative information;
step 5: weld bead identification, namely identifying the root fusion line and the outer-surface fusion line in the negative;
step 6: outputting precise defect qualitative and positioning results;
step 7: defect quantitative rating: performing quantitative defect measurement and comprehensive rating according to the national standard, and outputting the defect hazard level.
The step 2 of performing OCR recognition on the welding seam image for signature extraction specifically comprises the following steps:
step 2-1: firstly establishing a label set for the signature, determining all character classes, and constructing a data set D = {X_1, X_2, …, X_m}, wherein each image X_i contains a plurality of instances, i.e. X_i = {c_1, c_2, …, c_n},
wherein c_j represents a character instance, there are n target characters in total, and the corresponding labels satisfy Y_i = {y_1, y_2, …, y_n}; each target character label y_j is composed of two parts, y_j = (loc_j, cls_j), wherein loc_j is the position information of the target character and cls_j is the category information of the target character;
step 2-2: establishing a deep learning model according to the labels, selecting the cross-stage network CSPDarknet53 as the backbone network, and performing classification and regression based on cross-entropy loss and intersection-over-union (IoU) loss respectively;
step 2-3: updating the weights by gradient descent and stopping training when the loss converges, obtaining the OCR detection model M_ocr;
Step 2-4: logical post-processing: to ensure recognition accuracy, a character-extraction threshold t_ocr is set; after all subgraphs are inferred, the images are integrated and the signature information of the long image is determined; undetected characters are completed and aligned according to the logical order of the surrounding text. The OCR recognition model can extract the scale signature information and the nominal thickness information; if the negative's thickness information is missing, the recognized wall thickness information is preferentially reused, and otherwise the wall thickness entered by default at system initialization is used.
And 3, performing quality inspection on the welding seam image according to the national standard, which specifically comprises the following steps:
step 3-1: establishing image quality indicator (IQI) training labels based on key-point annotation;
step 3-2: because the IQI has distinct features and is easily distinguished from the negative background, rapid training with a lightweight neural network yields an IQI recognition model with fast inference capability;
step 3-3: selecting a threshold and determining the IQI recognition area from the post-inference confidence; since the IQI's shape is approximately fixed and it carries no blackness, pixel-level post-processing by image-processing means is applied to ensure IQI recognition accuracy;
step 3-4: integrating the subgraphs to determine the number of IQI wires identified in the original radiographic image, and checking, in combination with the wall thickness information determined in step 2, whether the wire number meets the national standard; if so, intelligent evaluation proceeds, otherwise it does not.
Step 4, sequentially inputting the divided sub-images into the defect identification model for identification, and outputting the position information of the defect pixels and the preliminary qualitative information, specifically comprising the following steps:
step 4-1: establishing a target defect training set, specifically a defect label database of the following categories: round defects, elongated defects, lack of fusion, burn-through, cracks, lack of penetration, and the like;
step 4-2: training a defect detection model based on a self-attention target detection algorithm; by constructing an attention mechanism, background suppression and defect-position highlighting are well realized. The weight matrices Q, K, V are initialized from a Gaussian distribution N(μ, σ) with mean μ = 0 and standard deviation σ = 1, the resulting vectors are taken as initial weight values, and the value weights are obtained with a Softmax function, so that the output matrix is Attention(Q, K, V) = Softmax(QKᵀ/√d_k)V, where d_k is the key dimension;
step 4-3: performing anchor-point prediction on the attention-weighted features and training the network parameters through a loss to obtain the detection model; the loss comprises the position loss of the detection frame and the class loss of the detection target, and parameter training is performed by optimizing L_det = FL(p̂_i, p_i) + CIoU(b̂_i, b_i),
wherein FL(·) is the focal classification loss function, CIoU(·) is the bounding-box position regression loss function, p̂_i and b̂_i respectively represent the classification probability distribution and the position regression distribution predicted by the network, and p_i and b_i respectively represent the true category label and position label of image y_i;
step 4-4: updating the weights: the weights are updated through back-propagation to minimize the loss; taking the loss as a function of the weight parameters, stochastic gradient descent iteratively updates the weights in the direction of steepest descent until the stopping condition is met, yielding the intelligent defect identification model M_defect;
step 4-5: image inference: after training stops, the position and type of image defects are identified with the defect detection model, and the defect pixel position information and preliminary qualitative information are output.
The weld bead identification in step 5 specifically comprises the following steps:
step 5-1: because the fusion line follows a varying trend, a rectangular frame cannot accurately capture the course of the weld bead; for accurate positioning, a data set is constructed using mask annotation under key points, containing two classes: the outer-layer fusion line and the root fusion line;
step 5-2: establishing a weld bead recognition model based on an instance segmentation model, selecting ResNet50 as the backbone network, and extracting rich position and semantic features by fusing top-down and bottom-up paths in an FPN + PAN neck network;
step 5-3: constructing loss, and in the three branch tasks of fusion line classification, mask generation and boundary box regression, performing reverse transfer through a loss function L to minimize the error between a predicted value and a true value of a target area, so as to obtain the loss function L of the neural network as follows:
L = L_cls + L_mask + L_box
wherein L_cls, the loss function for fusion-line classification, is the logarithmic loss between target and non-target, i.e. cross-entropy loss; L_mask is the mask-generation loss: the neural network generates a mask for each class without competition between classes, and the output mask is selected according to the class label predicted by the fusion-line classification branch; L_box, the bounding-box regression loss, is computed through IoU loss, i.e. by judging the overlap between the predicted box and the ground-truth box;
step 5-4: obtaining the weld bead extraction model M_fusion by training with loss-adjusted parameters, then introducing logical operations for position refinement, specifically: first, weld bead mask position information is obtained by running the weld bead extraction model on the image; the mask is a 0-1 matrix in which 1 denotes the target weld bead region and 0 the background; because some fusion lines may come out discontinuous in model inference, discontinuous weld bead regions in the radiographic negative detection result are connected by cubic spline interpolation; then edge extraction is performed on the mask matrix with a Canny operator and the extracted edge key points are stored; finally, the edge key points are drawn on the original negative image to obtain the final weld bead positioning display image.
Step 6, precise defect characterization and positioning, outputs precise qualitative and positioning results, and specifically comprises the following steps:
step 6-1: removing false detections outside the weld bead heat-affected zone according to the position of the outer-surface fusion line; then positioning the weld defect image under test to obtain the region between the two root fusion lines: Mask_1, Mask_2 = M_fusion(X), wherein Mask_1 is the weld mask region and Mask_2 is the root mask region; the defect position satisfies loc = M_defect(X), and since a defect can only arise inside the weld, the defect region must be contained within the weld zone determined by Mask_1 and Mask_2;
step 6-2: unfused defects further need to be distinguished into root-unfused, interlayer-unfused and outer-surface-unfused according to the fusion-line positioning: the root-unfused defect region should lie between the root fusion-line regions, while the interlayer-unfused region should lie outside the root fusion-line regions;
step 6-3: outputting the actual scale position corresponding to the pixel position according to the scale type and position; if the scale is missing from the negative or is misrecognized, the distance from the defect position to the film head is returned instead.
The lack of fusion in step 4-1 includes root-unfused, interlayer-unfused and outer-surface-unfused defects.
Compared with the prior art, the invention has the following beneficial technical effects:
The invention first corrects the original image to be evaluated and performs film quality analysis with a quality inspection model. Intelligent defect identification is then carried out on qualified negatives, and the outer-layer and root fusion lines are extracted. Wall thickness information and signature location information are then determined by an OCR recognition model. Finally, the intelligently identified defects inside the weld bead are quantitatively measured and comprehensively rated according to the national standard, realizing intelligent weld quality evaluation and greatly improving evaluation efficiency. The method achieves intelligent defect identification, and the detection model is more accurate than the prior art. A comprehensive weld quality evaluation system is completed that is more comprehensive than other inventions.
Drawings
Fig. 1 is a general structure diagram of the intelligent defect evaluation system and method for a long-distance pipeline radiographic negative according to an embodiment of the present invention.
Detailed Description
The present invention is described in detail below with reference to the drawings and examples, but the scope of the present invention is not limited by the drawings and examples.
Example 1:
The invention relates to an intelligent defect evaluation system and method for a long-distance pipeline radiographic negative, mainly comprising the following parts: a data acquisition part, an image quality detection part, a character extraction part, a weld bead extraction part, an intelligent defect identification part, and a quantitative and comprehensive rating part; the general structure is shown in Fig. 1.
Step 1: the radiographic image to be evaluated is corrected according to the detection flow.
Step 1-1: the pixel width and height of the radiographic image are judged to determine whether it is a vertical or horizontal film; horizontal films are passed on as-is, and vertical films are converted into horizontal films before proceeding.
Step 1-2: image segmentation: the width and height of the original image are recorded as (w, h), and the cutting is done as follows: after cutting, the image height is fixed to 648 pixels, and the width is scaled proportionally by bilinear interpolation, where the scaling ratio t satisfies t = 648/h, so the scaled image resolution is (w', h') = (t × w, 648); the scaled image is cut into overlapping squares, the overlap width is set to γ, and the width is zero-padded so that the number of cut subgraphs n is an integer, where n = (w' + ε − γ)/(648 − γ) and ε is the width of the zero padding.
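As a concrete illustration of the cutting procedure in step 1-2, the following sketch (function and parameter names are illustrative, not part of the invention) scales a film image to a height of 648 pixels, zero-pads the width, and cuts overlapping 648 × 648 squares. Nearest-neighbour sampling stands in for the bilinear interpolation named in the text so the sketch stays NumPy-only:

```python
import numpy as np

TILE = 648  # fixed sub-image height/width after scaling, per step 1-2

def cut_subimages(img, overlap=64):
    """Scale a landscape film to height 648, zero-pad the width so that
    overlapping TILE x TILE squares tile it exactly, and return the squares.
    `overlap` plays the role of gamma in step 1-2."""
    h, w = img.shape
    # Equal-ratio scaling: height fixed to TILE, so t = TILE / h and the
    # scaled width is w' = t * w.  (Nearest-neighbour sampling stands in
    # for bilinear interpolation to keep the sketch dependency-free.)
    t = TILE / h
    w_scaled = int(round(w * t))
    rows = (np.arange(TILE) / t).astype(int).clip(0, h - 1)
    cols = (np.arange(w_scaled) / t).astype(int).clip(0, w - 1)
    scaled = img[rows][:, cols]
    # Zero-pad the width by eps so that the number of squares
    # n = (w' + eps - overlap) / (TILE - overlap) is an integer.
    stride = TILE - overlap
    n = int(np.ceil(max(w_scaled - overlap, 1) / stride))
    eps = n * stride + overlap - w_scaled
    padded = np.pad(scaled, ((0, 0), (0, eps)))
    return [padded[:, i * stride:i * stride + TILE] for i in range(n)]
```

For a 324 × 650 film the scaling ratio is t = 2, the scaled width is 1300, and three overlapping squares cover the padded width exactly.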
Step 2: OCR recognition is performed on the weld seam image for signature extraction.
Step 2-1: first establishing a label set for the signature, determining all character classes, and constructing a data set D = {X_1, X_2, …, X_m}, wherein each image contains multiple instance objects, i.e. multiple characters: X_i = {c_1, c_2, …, c_n}, wherein c_j is one character in the data set and there are n target characters in total; the corresponding labels satisfy Y_i = {y_1, y_2, …, y_n}, and each target character label y_j is composed of two parts, y_j = (loc_j, cls_j), wherein loc_j is the position information of the target character and cls_j is the category information of the target character.
Step 2-2: a deep learning model is established according to the labels, the cross-stage network CSPDarknet53 is selected as the backbone network, and classification and regression are performed based on cross-entropy loss and intersection-over-union (IoU) loss respectively.
Step 2-3: the weights are updated by gradient descent and training stops when the loss converges, obtaining the OCR detection model M_ocr.
Step 2-4: logical post-processing: to ensure recognition accuracy, the character-extraction threshold is set to t_ocr = 0.5. After all subgraphs are inferred, the images are integrated and the signature information of the long image is determined. Undetected characters are completed and aligned according to the logical order of the surrounding text. The OCR recognition model can extract the scale signature information and the nominal thickness information; if the negative's thickness information is missing, the recognized wall thickness information is preferentially reused, and otherwise the wall thickness entered by default at system initialization is used.
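The post-processing logic of step 2-4 can be sketched as follows; the detection-tuple format, the token-level granularity, and the "T"-prefixed thickness signature are assumptions made for illustration only:

```python
T_OCR = 0.5  # character-extraction confidence threshold from step 2-4

def postprocess_ocr(detections, default_thickness_mm):
    """detections: list of (token, confidence, x_position) tuples, one per
    recognised token (assumed format).  Returns the signature string and
    the wall thickness to use downstream."""
    kept = [d for d in detections if d[1] >= T_OCR]
    kept.sort(key=lambda d: d[2])              # left-to-right reading order
    signature = " ".join(tok for tok, _, _ in kept)
    # Prefer a recognised nominal thickness such as "T12.5" (hypothetical
    # signature convention); fall back to the wall thickness entered at
    # system initialisation when it is missing, as step 2-4 prescribes.
    thickness = default_thickness_mm
    for token in signature.split():
        if token.startswith("T"):
            try:
                thickness = float(token[1:])
                break
            except ValueError:
                pass
    return signature, thickness
```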
Step 3: the quality of the weld seam image is inspected according to the national standard.
Step 3-1: image quality indicator (IQI) training labels are established based on key-point annotation.
Step 3-2: because the IQI has distinct features and is easily distinguished from the negative background, rapid training with a lightweight neural network yields an IQI recognition model with fast inference capability.
Step 3-3: a threshold is selected and the IQI recognition area is determined from the post-inference confidence; since the IQI's shape is approximately fixed and it carries no blackness, pixel-level post-processing by image-processing means is applied to ensure IQI recognition accuracy.
Step 3-4: the subgraphs are integrated to determine the number of IQI wires identified in the original radiographic image, and, in combination with the wall thickness information determined in step 2, it is checked whether the wire number meets the national standard; if so, intelligent evaluation proceeds, otherwise it does not.
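The wire-number check described above reduces to a table lookup; the thickness-to-wire-number pairs below are placeholders, not the sensitivity tables of the national standard, which a real system must use instead:

```python
# Illustrative wire-number lookup: (upper wall-thickness bound in mm,
# required wire number).  Placeholder values, NOT the values of the
# national standard's sensitivity tables.
REQUIRED_WIRE = [(6.0, 16), (10.0, 15), (16.0, 14), (float("inf"), 13)]

def film_qualified(wall_thickness_mm, finest_visible_wire):
    """Smaller wire numbers denote thicker wires, so resolving wire N
    also satisfies any requirement with a number <= N."""
    for upper, required in REQUIRED_WIRE:
        if wall_thickness_mm <= upper:
            return finest_visible_wire >= required
    return False
```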
Step 4: the divided sub-images are sequentially input into the defect recognition model for recognition, and the defect pixel position information and preliminary qualitative information are output.
Step 4-1: a target defect training set is established, specifically a defect label database of the following categories: round defects, elongated defects, lack of fusion (root, interlayer, or outer surface), burn-through, cracks, lack of penetration, and the like.
Step 4-2: a defect detection model is trained based on a self-attention target detection algorithm; by constructing an attention mechanism, the model better suppresses the background and highlights the defect position. Concretely, each cut subgraph is first fed into a convolutional backbone network to extract multi-level features. The weight matrices Q, K, V are then initialized from a Gaussian distribution N(μ, σ) with mean μ = 0 and standard deviation σ = 1, the resulting vectors are taken as initial weight values, and the value weights are obtained with a Softmax function. The output matrix is therefore Attention(Q, K, V) = Softmax(QKᵀ/√d_k)V, where d_k is the key dimension.
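The output matrix of step 4-2 is standard scaled dot-product attention; a minimal NumPy sketch, with Q, K, V drawn from N(0, 1) as the step describes:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Output matrix of step 4-2: Softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V

rng = np.random.default_rng(0)
# Step 4-2 initialises Q, K, V from a Gaussian N(0, 1)
Q, K, V = (rng.standard_normal((5, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
```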
Step 4-3: anchor-point prediction is performed on the attention-weighted features, and the network parameters are trained through a loss to obtain the detection model; the loss comprises the position loss of the detection frame and the class loss of the detection target. Parameter training is performed by optimizing L_det = FL(p̂_i, p_i) + CIoU(b̂_i, b_i), where FL(·) is the focal classification loss function, CIoU(·) is the bounding-box position regression loss function, p̂_i and b̂_i respectively represent the classification probability distribution and the position regression distribution predicted by the network, and p_i and b_i respectively represent the true category label and position label of image y_i.
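The classification term FL(·) of the detection loss in step 4-3 can be sketched as a binary focal loss; γ = 2 and α = 0.25 are the customary defaults, not values stated in the patent:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss FL(p, y): down-weights easy examples so that the
    rare, hard defect instances dominate the gradient."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    at = np.where(y == 1, alpha, 1 - alpha)  # class-balance factor
    return float(np.mean(-at * (1 - pt) ** gamma * np.log(pt)))
```

A confident correct prediction is penalized far less than an uncertain one, which is the point of the (1 − pt)^γ modulating factor.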
Step 4-4: updating the weights: the purpose of updating the weights by back-propagation is to minimize the loss. Taking the loss as a function of the weight parameters, the CNN computes the partial derivative of the loss with respect to each weight parameter, then stochastic gradient descent (SGD) iteratively updates the weights in the direction of steepest descent until the stopping condition is met, yielding the intelligent defect identification model M_defect.
Step 4-5: image inference: after training stops, the position and type of image defects are identified with the defect detection model, and the defect pixel position information and preliminary qualitative information are output.
Step 5: weld bead identification: the root fusion line and the outer-surface fusion line in the negative are identified.
Step 5-1: because the fusion line follows a varying trend, a rectangular frame cannot accurately capture the course of the weld bead; for accurate positioning, a data set is constructed using mask annotation under key points, containing two classes: the outer-layer fusion line and the root fusion line.
Step 5-2: a weld bead recognition model is established based on an instance segmentation model, ResNet50 is selected as the backbone network, and rich position and semantic features are extracted by fusing top-down and bottom-up paths in an FPN + PAN neck network.
Step 5-3: constructing loss, and in the three branch tasks of fusion line classification, mask generation and boundary box regression, performing reverse transfer through a loss function L to minimize the error between a predicted value and a true value of a target area, so as to obtain the loss function L of the neural network as follows:
L = L_cls + L_mask + L_box
wherein L_cls, the loss function for fusion-line classification, is the logarithmic loss between target and non-target, i.e. cross-entropy loss; L_mask is the mask-generation loss: the neural network generates a mask for each class without competition between classes, and the output mask is selected according to the class label predicted by the fusion-line classification branch; L_box, the bounding-box regression loss, is computed through IoU loss, i.e. by judging the overlap between the predicted box and the ground-truth box.
Step 5-4: the weld bead extraction model M_fusion is obtained by training with loss-adjusted parameters, and logical operations are then introduced for position refinement, specifically: first, weld bead mask position information is obtained by running the weld bead extraction model on the image; the mask is a 0-1 matrix in which 1 denotes the target weld bead region and 0 the background. Because some fusion lines may come out discontinuous in model inference, discontinuous weld bead regions in the radiographic negative detection result are connected by cubic spline interpolation. Edge extraction is then performed on the mask matrix with a Canny operator and the extracted edge key points are stored. Finally, the edge key points are drawn on the original negative image to obtain the final weld bead positioning display image.
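The gap-bridging part of step 5-4 can be sketched as follows. The patent specifies cubic-spline interpolation; this dependency-free sketch substitutes linear interpolation (np.interp) over the per-column centre of the fusion-line mask, which is a simplification, not the invention's exact method:

```python
import numpy as np

def connect_fusion_line(mask):
    """Fill gaps in a 0-1 fusion-line mask.  For each column where the
    line was detected, take the centre row; interpolate across columns
    where it was not; mark one bridging pixel per missing column."""
    h, w = mask.shape
    cols = np.arange(w)
    present = mask.any(axis=0)          # columns where the line exists
    if not present.any():
        return mask
    centres = np.array([mask[:, c].nonzero()[0].mean() for c in cols[present]])
    filled = np.interp(cols, cols[present], centres)  # linear stand-in
    out = mask.copy()
    for c in cols[~present]:
        out[int(round(filled[c])), c] = 1             # bridge the gap
    return out
```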
Step 6: precise defect characterization and positioning. Precise qualitative and positioning results are output.
Step 6-1: false detections outside the weld bead heat-affected zone are removed according to the position of the outer-surface fusion line; then, using the fusion-line model obtained in step 5-4, the weld defect image under test is positioned to obtain the region between the two root fusion lines: Mask_1, Mask_2 = M_fusion(X), wherein Mask_1 is the weld mask region and Mask_2 is the root mask region. The defect position satisfies loc = M_defect(X); since a defect can only arise inside the weld, the defect region must be contained within the weld zone determined by Mask_1 and Mask_2.
step 6-2: for unfused defects, it is further necessary to distinguish between three types of unfused root, unfused interlayer, and unfused outer surface, depending on location. Depending on the weld line positioning, the root unfused defect region should be between the root weld line regions, while the interlayer unfused region should be outside the root weld line region.
Step 6-3: the actual scale position corresponding to the pixel position is output according to the scale type and position; if the scale is missing from the negative or is misrecognized, the distance from the defect position to the film head is returned instead.
Step 7: defect quantitative rating. Quantitative defect measurement and comprehensive rating are carried out according to the national standard, and the defect hazard level is output.
Claims (7)
1. An intelligent defect evaluation system and method for a long-distance pipeline radiographic negative, characterized by comprising the following steps:
step 1: correcting the radiographic image to be evaluated according to a detection flow;
step 1-1: judging the pixel width and height of the radiographic image to determine whether it is a vertical or horizontal film; horizontal films are passed on as-is, and vertical films are converted into horizontal films before proceeding;
step 1-2: image segmentation: the width and height of the original image are recorded as (w, h), and the cutting is done as follows: the height of the cut image is fixed to 648 pixels, and the width is scaled proportionally by bilinear interpolation, where the scaling ratio t satisfies t = 648/h, so the scaled image resolution is (w', h') = (t × w, 648); the scaled image is cut into overlapping squares, the overlap width is set to γ, and the width is zero-padded so that the number of cut subgraphs n is an integer, where n = (w' + ε − γ)/(648 − γ), wherein: ε is the width of the zero padding;
step 2: performing OCR recognition on the weld seam image for signature extraction;
step 3: performing quality inspection of the weld seam image according to the national standard;
step 4: sequentially inputting the divided sub-images into a defect identification model for identification, and outputting defect pixel position information and preliminary qualitative information;
step 5: weld bead identification, namely identifying the root fusion line and the outer-surface fusion line in the negative;
step 6: outputting precise qualitative classification and positioning;
step 7: quantitative defect rating: performing quantitative defect measurement and comprehensive rating according to the national standard, and outputting the defect hazard level.
2. The intelligent defect evaluation system and method for long-distance pipeline radiographic negatives according to claim 1, characterized in that: the OCR recognition of the weld seam image for signature extraction in step 2 specifically comprises the following steps:
step 2-1: first establishing a label set for the signature, determining all character categories, and building a data set D = {X_1, X_2, ..., X_N}, wherein each image X_i contains a plurality of instances, i.e. X_i = {x_i^1, x_i^2, ..., x_i^n},
wherein x_i^j represents a character instance, n target characters are counted in total, and the corresponding labels satisfy Y_i = {y_i^1, y_i^2, ..., y_i^n}; each target character label y_i^j is composed of two parts, y_i^j = (b_i^j, c_i^j), wherein b_i^j is the position information of the target character and c_i^j is the category information of the target character;
step 2-2: establishing a deep learning model according to the labels, selecting the cross-stage network CspDarknet53 as the backbone network, and performing classification and regression respectively based on cross-entropy loss and intersection-over-union loss;
step 2-3: updating the weights by gradient descent and stopping training when the loss converges, obtaining the OCR detection model M_ocr;
Step 2-4: logical post-processing: to ensure recognition accuracy, a character extraction threshold thre_ocr is set; after all sub-images have been inferred, the results are integrated to determine the signature information of the full image, and undetected characters are completed and aligned according to the logical order of the surrounding text. The OCR model can extract scale signature information and nominal thickness information; if the negative's thickness information is missing, the recognized wall thickness information is reused preferentially, and otherwise the wall thickness entered by default at system initialization is used.
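The threshold filtering and sub-image integration described above can be sketched as follows (the detection-tuple format, threshold, and stride values are assumptions for illustration, not the patent's actual data structures):

```python
# Merging per-tile OCR detections into a film-level signature string.
# Detections below thre_ocr are dropped; tile-local x coordinates are
# shifted into the full-film frame so characters can be read in order.
THRE_OCR = 0.5   # character extraction threshold (thre_ocr), illustrative
STRIDE = 584     # horizontal offset between adjacent tiles, illustrative

def merge_signature(per_tile_detections):
    """per_tile_detections: list (one entry per tile) of
    (char, confidence, local_x) tuples from the OCR model."""
    chars = []
    for tile_idx, dets in enumerate(per_tile_detections):
        for char, conf, local_x in dets:
            if conf < THRE_OCR:          # threshold filtering
                continue
            chars.append((tile_idx * STRIDE + local_x, char))
    chars.sort()                         # left-to-right reading order
    return "".join(c for _, c in chars)

dets = [[("A", 0.9, 100), ("B", 0.3, 200)],   # low-confidence "B" is dropped
        [("1", 0.8, 50)]]
print(merge_signature(dets))   # -> "A1"
```

Deduplication of characters detected twice in the overlap region is omitted here for brevity.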
3. The intelligent defect evaluation system and method for long-distance pipeline radiographic negatives according to claim 1, characterized in that: the quality inspection of the weld seam image according to the national standard in step 3 specifically comprises the following steps:
step 3-1: establishing an image quality meter training label based on key point labeling;
step 3-2: because the image quality meter has distinctive features, it is easily distinguished from the negative background, and an image quality meter recognition model with fast inference capability can be obtained by rapid training with a lightweight neural network;
step 3-3: selecting a threshold and determining the image quality meter recognition area based on the post-inference confidence; since the shape of the image quality meter is approximately fixed and it exhibits no blackness variation, pixel-level post-processing is applied by image-processing means to ensure the recognition accuracy of the image quality meter;
step 3-4: integrating the sub-images to determine the number of wires of the image quality meter recognized in the original radiographic image, and checking whether the wire number meets the requirement of the national standard in combination with the wall thickness information determined in step 2; if the requirement is met, intelligent evaluation can proceed; otherwise intelligent evaluation is not performed.
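The wire-number check above can be sketched as a table lookup (the threshold table below is an ILLUSTRATIVE placeholder; the real wall-thickness-to-wire mapping comes from the applicable national standard):

```python
# Checking whether the recognized image-quality-meter wire count meets the
# sensitivity requirement for the wall thickness extracted in step 2.
REQUIRED_WIRE = [           # (max wall thickness in mm, required wire number)
    (6.0, 16),              # HYPOTHETICAL values, not from any standard
    (8.0, 15),
    (12.0, 14),
    (16.0, 13),
]

def film_is_evaluable(wall_thickness_mm: float, detected_wire: int) -> bool:
    for max_t, wire_no in REQUIRED_WIRE:
        if wall_thickness_mm <= max_t:
            # a larger wire number denotes a finer wire, so the film must
            # show wire `wire_no` or finer to pass the sensitivity check
            return detected_wire >= wire_no
    return False            # thickness outside the (truncated) table

print(film_is_evaluable(7.5, 15))   # -> True
print(film_is_evaluable(7.5, 14))   # -> False
```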
4. The intelligent defect evaluation system and method for long-distance pipeline radiographic negatives according to claim 1, characterized in that: the step 4 of sequentially inputting the divided sub-images into the defect identification model for identification and outputting defect pixel position information and preliminary qualitative information specifically comprises the following steps:
step 4-1: establishing a target defect training set, wherein the target defect training set specifically comprises a defect label database of the following categories, such as round, strip, incomplete fusion, burn-through, crack, incomplete penetration and the like;
step 4-2: training a defect detection model based on a self-attention target detection algorithm, suppressing the background and highlighting defect positions by constructing an attention mechanism; the weight matrices Q, K, V are then initialized from a Gaussian distribution, Q, K, V ~ N(μ, σ) with mean μ = 0 and standard deviation σ = 1, the resulting vectors are taken as the initial weight values, and the weights over the values are obtained with a Softmax function, so that the output matrix is: Attention(Q, K, V) = Softmax(QK^T / √d_k)·V;
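A minimal NumPy sketch of the Gaussian-initialized self-attention described above (dimensions are illustrative; the output follows the standard scaled dot-product form, which the garbled claim text is assumed to intend):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

d_model, d_k, n_tokens = 64, 64, 10
# Gaussian initialization N(0, 1) of the projection weights, as in the claim
Wq = rng.normal(0.0, 1.0, (d_model, d_k))
Wk = rng.normal(0.0, 1.0, (d_model, d_k))
Wv = rng.normal(0.0, 1.0, (d_model, d_k))

x = rng.normal(size=(n_tokens, d_model))   # one tile's token features
Q, K, V = x @ Wq, x @ Wk, x @ Wv

# output matrix = Softmax(Q K^T / sqrt(d_k)) V
attn = softmax(Q @ K.T / np.sqrt(d_k))
out = attn @ V
print(out.shape)   # (10, 64); each attention row sums to 1
```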
step 4-3: performing anchor prediction based on the attention-guided features and obtaining the detection model by training the network parameters against a loss comprising the detection-box position loss and the detection-target class loss, performing parameter training by optimizing the following formula:
L = Σ_i [ FL(p̂_i, p_i) + CIoU(b̂_i, b_i) ]
wherein FL(·) is the Focal classification loss function, CIoU(·) is the bounding-box position regression loss function, p̂_i and b̂_i respectively represent the classification probability distribution and the position regression distribution predicted by the network, and p_i and b_i respectively represent the true category label and the true position label of image i;
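A hedged sketch of combining a Focal classification term with a box regression term (plain 1 − IoU is used here as a simplified stand-in for CIoU, which additionally penalizes centre distance and aspect-ratio mismatch):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss FL(p, y); p = predicted probability, y in {0, 1}."""
    pt = np.where(y == 1, p, 1.0 - p)            # prob of the true class
    a = np.where(y == 1, alpha, 1.0 - alpha)     # class balancing factor
    return float(np.mean(-a * (1.0 - pt) ** gamma * np.log(pt + 1e-9)))

def iou_loss(box_p, box_t):
    """1 - IoU as a simplified stand-in for the CIoU regression loss.
    Boxes are (x1, y1, x2, y2)."""
    x1 = max(box_p[0], box_t[0]); y1 = max(box_p[1], box_t[1])
    x2 = min(box_p[2], box_t[2]); y2 = min(box_p[3], box_t[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_p) + area(box_t) - inter
    return 1.0 - inter / union

p = np.array([0.9, 0.2]); y = np.array([1, 0])
total = focal_loss(p, y) + iou_loss([0, 0, 10, 10], [0, 0, 10, 10])
print(total)   # small: confident correct predictions, perfect box overlap
```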
step 4-4: updating the weights: the weights are updated through back-propagation to minimize the loss; treating the loss as a function of the weight parameters, the weights are iteratively updated in the direction of steepest gradient descent using stochastic gradient descent until the stopping condition is met, yielding the intelligent defect identification model M_defect;
step 4-5: image inference: after the training iterations stop, the position and type of image defects are identified based on the defect detection model, and the defect pixel position information and preliminary qualitative information are output.
5. The intelligent defect evaluation system and method for long-distance pipeline radiographic negatives according to claim 1, characterized in that: the weld bead identification in step 5 specifically comprises the following steps:
step 5-1: because the fusion line curves along the weld, a rectangular box cannot accurately capture the weld bead trajectory; for accurate positioning, a data set is constructed using key-point-based mask annotation, covering two categories, namely the outer-layer fusion line and the root fusion line;
step 5-2: establishing a weld bead recognition model based on an instance segmentation model, selecting ResNet50 as the backbone network, and extracting rich positional and semantic features by fusing top-down and bottom-up paths in an FPN + PAN neck network;
step 5-3: constructing the loss: in the three branch tasks of fusion-line classification, mask generation and bounding-box regression, back-propagation through a loss function L minimizes the error between the predicted and true values of the target area, giving the neural network loss function L as:
L = L_cls + L_mask + L_box
wherein L_cls is the loss function for fusion-line classification, namely the logarithmic (cross-entropy) loss between target and non-target; L_mask is the mask-generation loss: the neural network generates a mask for each class without competition among classes, and the output mask is selected according to the class label predicted by the fusion-line classification branch; L_box is the bounding-box regression loss, computed via the IoU loss, i.e. judging the overlap between the predicted box and the ground-truth box;
step 5-4: obtaining the weld bead extraction model M_fusion by training and loss-driven parameter adjustment, then introducing logical operations for position refinement, specifically: first, weld bead mask position information is obtained by inferring the image with the weld bead extraction model, the mask being a 0-1 matrix in which 1 denotes the target weld bead area and 0 the background; because some fusion lines may come out discontinuous in model inference, discontinuous weld bead areas in the radiographic detection result are connected based on cubic spline interpolation; edge extraction is then performed on the mask matrix with a Canny operator, the extracted edge key points are stored, and finally the edge key points are drawn on the original negative image to obtain the final weld bead positioning display image.
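The mask post-processing above can be sketched as follows (to keep the example dependency-free, linear interpolation stands in for the cubic splines and a simple boundary scan stands in for the Canny operator):

```python
import numpy as np

def bridge_and_outline(mask: np.ndarray):
    """Bridge column gaps in a 0-1 weld bead mask and return its outline
    key points. `mask` is a 2-D array where 1 marks the bead region."""
    h, w = mask.shape
    cols = np.where(mask.any(axis=0))[0]            # columns containing bead
    top = mask.argmax(axis=0).astype(float)          # first bead row per column
    bot = (h - 1 - mask[::-1].argmax(axis=0)).astype(float)  # last bead row
    all_cols = np.arange(cols[0], cols[-1] + 1)
    top_i = np.interp(all_cols, cols, top[cols])     # fill discontinuities
    bot_i = np.interp(all_cols, cols, bot[cols])
    filled = np.zeros_like(mask)
    for c, t, b in zip(all_cols, top_i, bot_i):
        filled[int(round(t)):int(round(b)) + 1, c] = 1
    # outline = top and bottom boundary key points, ready to draw on the film
    edge = [(int(round(t)), int(c)) for c, t in zip(all_cols, top_i)] + \
           [(int(round(b)), int(c)) for c, b in zip(all_cols, bot_i)]
    return filled, edge

m = np.zeros((20, 30), dtype=np.uint8)
m[8:12, 0:10] = 1
m[8:12, 20:30] = 1     # gap between columns 10..19 (broken fusion line)
filled, edge = bridge_and_outline(m)
print(int(filled[:, 15].sum()))   # gap columns are bridged
```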
6. The intelligent defect evaluation system and method for long-distance pipeline radiographic negatives according to claim 1, characterized in that: the step 6 of outputting precise qualitative classification and precise positioning of the defect specifically comprises the following steps:
step 6-1: removing false detections outside the weld bead heat-affected zone according to the position of the outer-surface fusion line; then locating the weld defect image under test to obtain the area between the two root fusion lines: Mask1, Mask2 = M_fusion(X), wherein Mask1 is the weld-area mask region and Mask2 is the root mask region; the defect position satisfies loc = M_defect(X); according to the possible locations of defects, the defect area must be contained within the weld area, thus satisfying: loc ∧ Mask1 = loc;
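The containment test can be sketched with binary masks (the box format and mask layout are illustrative):

```python
import numpy as np

def defect_in_weld(weld_mask: np.ndarray, defect_box) -> bool:
    """Containment test: the defect box (x1, y1, x2, y2) from M_defect must
    lie inside the weld-zone mask Mask1 from M_fusion, i.e.
    loc AND Mask1 == loc; otherwise it is discarded as a false detection
    outside the heat-affected zone."""
    x1, y1, x2, y2 = defect_box
    loc = np.zeros_like(weld_mask)
    loc[y1:y2, x1:x2] = 1                       # rasterize the defect box
    return bool(np.array_equal(loc & weld_mask, loc))

mask1 = np.zeros((100, 100), dtype=np.uint8)
mask1[40:60, :] = 1                             # weld zone band
print(defect_in_weld(mask1, (10, 45, 20, 55)))  # inside         -> True
print(defect_in_weld(mask1, (10, 30, 20, 55)))  # partly outside -> False
```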
step 6-2: for unfused defects, it is further necessary to distinguish three subtypes according to position: root unfused, interlayer unfused, and outer-surface unfused; based on the fusion-line positioning, a root unfused defect region should lie between the root fusion-line regions, while an interlayer unfused region should lie outside the root fusion-line region;
step 6-3: outputting the actual scale position corresponding to the pixel position according to the recognized scale position; if the scale on the negative is missing or misrecognized, the distance from the defect position to the film head is returned instead.
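The positional rule of step 6-2 can be sketched in one dimension (the row coordinates are an illustrative simplification of the actual 2-D mask geometry):

```python
def classify_unfused(defect_cy, root_top, root_bottom, outer_top, outer_bottom):
    """Classify an unfused defect by the vertical (row) position of its
    centre relative to the fusion-line bands: between the root fusion
    lines -> root unfused; within the outer-surface band -> outer surface
    unfused; otherwise -> interlayer unfused."""
    if root_top <= defect_cy <= root_bottom:
        return "root unfused"
    if outer_top <= defect_cy <= outer_bottom:
        return "outer surface unfused"
    return "interlayer unfused"

print(classify_unfused(50, 45, 55, 10, 20))   # -> "root unfused"
print(classify_unfused(30, 45, 55, 10, 20))   # -> "interlayer unfused"
```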
7. The intelligent defect evaluation system and method for long-distance pipeline radiographic negatives according to claim 4, characterized in that: the incomplete fusion in step 4-1 includes root unfused, interlayer unfused, or outer-surface unfused.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210126866.1A CN114511543A (en) | 2022-02-11 | 2022-02-11 | Intelligent defect evaluation system and method for radiographic negative of long-distance pipeline |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210126866.1A CN114511543A (en) | 2022-02-11 | 2022-02-11 | Intelligent defect evaluation system and method for radiographic negative of long-distance pipeline |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114511543A true CN114511543A (en) | 2022-05-17 |
Family
ID=81550960
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210126866.1A Pending CN114511543A (en) | 2022-02-11 | 2022-02-11 | Intelligent defect evaluation system and method for radiographic negative of long-distance pipeline |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114511543A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115266774A (en) * | 2022-07-29 | 2022-11-01 | 中国特种设备检测研究院 | Weld ray detection and evaluation method based on artificial intelligence |
CN115266774B (en) * | 2022-07-29 | 2024-02-13 | 中国特种设备检测研究院 | Artificial intelligence-based weld joint ray detection and evaluation method |
CN116630242A (en) * | 2023-04-28 | 2023-08-22 | 广东励图空间信息技术有限公司 | Pipeline defect evaluation method and device based on instance segmentation |
CN116630242B (en) * | 2023-04-28 | 2024-01-12 | 广东励图空间信息技术有限公司 | Pipeline defect evaluation method and device based on instance segmentation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113362326B (en) | Method and device for detecting defects of welding spots of battery | |
TWI729405B (en) | Method and device for optimizing damage detection results | |
CN113469177B (en) | Deep learning-based drainage pipeline defect detection method and system | |
CN114511543A (en) | Intelligent defect evaluation system and method for radiographic negative of long-distance pipeline | |
CN111091538B (en) | Automatic identification and defect detection method and device for pipeline welding seams | |
CN110097547B (en) | Automatic detection method for welding seam negative film counterfeiting based on deep learning | |
CN110992349A (en) | Underground pipeline abnormity automatic positioning and identification method based on deep learning | |
CN113920107A (en) | Insulator damage detection method based on improved yolov5 algorithm | |
WO2020238256A1 (en) | Weak segmentation-based damage detection method and device | |
CN112465746B (en) | Method for detecting small defects in ray film | |
CN113643268A (en) | Industrial product defect quality inspection method and device based on deep learning and storage medium | |
CN112053317A (en) | Workpiece surface defect detection method based on cascade neural network | |
CN113780087A (en) | Postal parcel text detection method and equipment based on deep learning | |
CN116645586A (en) | Port container damage detection method and system based on improved YOLOv5 | |
CN113469950A (en) | Method for diagnosing abnormal heating defect of composite insulator based on deep learning | |
CN113962929A (en) | Photovoltaic cell assembly defect detection method and system and photovoltaic cell assembly production line | |
CN113420694A (en) | Express delivery assembly line blockage identification method and system, electronic device and readable storage medium | |
CN112017154A (en) | Ray defect detection method based on Mask R-CNN model | |
CN113095404A (en) | X-ray contraband detection method based on front and back background partial convolution neural network | |
CN114078106A (en) | Defect detection method based on improved Faster R-CNN | |
CN116630263A (en) | Weld X-ray image defect detection and identification method based on deep neural network | |
CN111738991A (en) | Method for creating digital ray detection model of weld defects | |
Yang et al. | Weld Defect Cascaded Detection Model Based on Bidirectional Multi-scale Feature Fusion and Shape Pre-classification | |
Zhang et al. | Automatic forgery detection for x-ray non-destructive testing of welding | |
Devereux et al. | Automated object detection for visual inspection of nuclear reactor cores |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||