CN113298809A - Composite material ultrasonic image defect detection method based on deep learning and superpixel segmentation - Google Patents
- Publication number
- CN113298809A (application CN202110714603.8A)
- Authority
- CN
- China
- Prior art keywords
- sub
- image
- composite material
- segmentation
- defect
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
Abstract
The invention discloses a composite material ultrasonic image defect detection method based on deep learning and superpixel segmentation, which comprises the following steps: step 1, augmenting composite material ultrasonic detection images to form a training sample set, and, based on the training sample set, detecting and extracting composite material defect features through a YOLOv3 neural network to obtain a defect target detection bounding box; step 2, performing pixel-level segmentation on the composite material ultrasonic detection image by a superpixel segmentation method to obtain a plurality of superpixel segmentation sub-regions; step 3, discarding the parts of the defect-containing superpixel segmentation sub-regions that lie outside the defect target detection bounding box, and retaining and merging the parts that lie inside the bounding box as the final defect region; step 4, fitting the minimum circumscribed rectangle of the final defect region as the final defect detection result. The method has the beneficial effect of ensuring both the efficiency and the accuracy of defect detection and recognition in composite material ultrasonic images.
Description
Technical Field
The invention belongs to the technical field of composite material ultrasonic image defect detection, and particularly relates to a composite material ultrasonic image defect detection method based on deep learning and superpixel segmentation.
Background
With advances in science and technology, carbon fiber composite materials are widely used in aerospace, military, automobile manufacturing, civil construction and other fields. However, defects of various forms can arise during the manufacture or use of carbon fiber composites, degrading the material's performance indexes, and the formation of some defects is unavoidable. It is therefore important to detect whether defects exceed threshold standards in order to judge whether a part is qualified for use. Existing defect detection, however, mostly relies on manual visual inspection of the defect images; its degree of automation and efficiency are low, and it cannot meet the demands of rapid, efficient production. This problem needs to be addressed by combining modern computer vision techniques with conventional image processing methods.
Performing target detection with deep learning methods greatly improves detection efficiency, and the strong feature learning and extraction capability of artificial neural networks also greatly improves the robustness and accuracy of target detection; neural-network-based target detection algorithms, typified by YOLO, are gradually being applied in many areas of production and daily life. Detecting composite material defects through a neural network is efficient and fast, but YOLO-based defect detection and recognition in composite material ultrasonic images still suffers from the problem that the resulting detection bounding box contains non-defect features; that is, the precision of the final detection result still needs improvement.
Image segmentation divides an image into a number of sub-regions, i.e., superpixels: small regions formed by series of adjacent pixels with similar color, brightness, texture and other characteristics. With a good segmentation, these small regions adhere closely to image edges, preserving the information inside the edges and facilitating subsequent processing. In conventional superpixel segmentation, however, the segmentation produces many sub-regions, so processing efficiency needs to be improved.
Therefore, for composite material defect detection with high precision requirements, a defect detection method that ensures both detection precision and detection efficiency is urgently needed.
Disclosure of Invention
The invention aims to provide a composite material ultrasonic image defect detection method based on deep learning and superpixel segmentation, which can efficiently and accurately detect and extract defect features in composite material ultrasonic images, ensuring both the accuracy and the efficiency of composite material defect detection.
The invention is realized by the following technical scheme:
a composite material ultrasonic image defect detection method based on deep learning and superpixel segmentation comprises the following steps:
step 1, augmenting composite material ultrasonic detection images to form a training sample set, and, based on the training sample set, detecting and extracting composite material defect features through a YOLOv3 neural network to obtain a defect target detection bounding box;
step 2, performing pixel-level segmentation on the composite material ultrasonic detection image by a superpixel segmentation method to obtain a plurality of superpixel segmentation sub-regions;
step 3, discarding the parts of the defect-containing superpixel segmentation sub-regions that lie outside the defect target detection bounding box, and retaining and merging the parts that lie inside the bounding box as the final defect region;
and step 4, fitting the minimum circumscribed rectangle of the final defect region as the final defect detection result.
In order to better implement the present invention, further, step 3 specifically includes:
step 3.1, extracting all superpixel segmentation sub-regions whose pixels overlap the defect target detection bounding box, detecting whether each extracted sub-region contains defects, retaining the sub-regions that contain defects, and discarding those that do not;
step 3.2, extracting and merging the parts of the defect-containing sub-regions retained in step 3.1 that lie inside the defect target detection bounding box, and discarding the parts that lie outside it;
and step 3.3, fitting the minimum circumscribed rectangle of the merged sub-regions extracted in step 3.2, and extracting the coordinate position, size parameters and pixel count of the minimum circumscribed rectangle as the final defect detection result.
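The three sub-steps above can be sketched with numpy as follows; the function name, the `(labels, defect_ids, bbox)` interface, and the axis-aligned rectangle are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np

def fuse_superpixels_with_bbox(labels, defect_ids, bbox):
    """Keep only the pixels of defect-containing superpixels that fall
    inside the detection bounding box, merge them, and fit the
    axis-aligned minimum circumscribed rectangle of the merged region.

    labels     : HxW int array of superpixel labels
    defect_ids : labels of the superpixels judged to contain defects
    bbox       : (x, y, w, h) defect target detection bounding box
    Returns ((x, y, w, h) of the fitted rectangle, pixel count).
    """
    x, y, w, h = bbox
    defect_mask = np.isin(labels, list(defect_ids))   # step 3.1 result
    box_mask = np.zeros_like(defect_mask)
    box_mask[y:y + h, x:x + w] = True                 # pixels inside the bbox
    merged = defect_mask & box_mask                   # step 3.2: intersect and merge
    ys, xs = np.nonzero(merged)
    if ys.size == 0:
        return None, 0
    rect = (xs.min(), ys.min(),
            xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)
    return rect, int(ys.size)                         # step 3.3: rect + pixel count
```

Here the sub-regions judged to contain defects (step 3.1) are assumed to be given as `defect_ids`; in the full method that judgment would come from the detection stage of step 1.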
In order to better implement the present invention, further, step 1 specifically includes:
step 1.1, reading a composite material ultrasonic detection image, and manually labeling the defects in it to form labeled target boxes;
step 1.2, randomly cropping a plurality of sub-image regions from the composite material ultrasonic detection image and judging whether each sub-image region contains a labeled target box; if it does, the sub-image region is retained; otherwise it is discarded;
step 1.3, taking the upper left corner point of the composite material ultrasonic detection image as the starting point, sliding-cropping the image from left to right and top to bottom to obtain a plurality of sub-image regions, and judging whether each sub-image region contains a labeled target box; if it does, the sub-image region is retained; otherwise it is discarded;
step 1.4, performing augmentation on the sub-image regions retained in steps 1.2 and 1.3 to form a training sample set, performing defect recognition training on the training sample set with the YOLOv3 algorithm to obtain defect detection boxes, calculating the degree of overlap between the defect detection boxes and the labeled target boxes, and iterating this process until the degree of overlap reaches a set threshold, yielding a trained YOLOv3 neural network model;
and step 1.5, cropping sub-image regions from the composite material ultrasonic detection image with a sliding window, and performing defect recognition on the cropped sub-image regions with the trained YOLOv3 neural network model to obtain the final defect target detection bounding box.
In order to better implement the present invention, further, the step 1.2 specifically includes:
step 1.2.1, establishing the X axis rightward and the Y axis downward with the upper left corner point of the composite material ultrasonic detection image as the origin, and denoting the coordinates of the upper left corner point of a cropped sub-image region as (SX, SY);
step 1.2.2, randomly cropping from the composite material ultrasonic detection image a sub-image region of width SW and height SH, and extracting the coordinates (BX, BY) of the upper left corner point of the labeled target box, the width BW of the target box, and the height BH of the target box;
step 1.2.3, judging whether the current sub-image region contains the labeled target box, the judgment formula being:
BX ≥ SX, BY ≥ SY, BX + BW ≤ SX + SW, and BY + BH ≤ SY + SH;
wherein: BX is the X-axis coordinate of the upper left corner point of the labeled target box; BY is the Y-axis coordinate of the upper left corner point of the labeled target box; SX is the X-axis coordinate of the upper left corner point of the sub-image region; SY is the Y-axis coordinate of the upper left corner point of the sub-image region; BW is the width of the labeled target box along the X axis; BH is the height of the labeled target box along the Y axis; SW is the width along the X axis of the sub-image region randomly cropped from the composite material ultrasonic detection image; and SH is the height along the Y axis of the same sub-image region.
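The containment test can be sketched as follows; treating "contains" as full containment of the labeled target box within the sub-image region is an assumption, and the function name is illustrative:

```python
def subregion_contains_box(sx, sy, sw, sh, bx, by, bw, bh):
    """One plausible reading of the judgment of step 1.2.3: the sub-image
    region retains the labeled target box only when the box lies entirely
    within the region's bounds (an assumed criterion; a partial-overlap
    test would relax the four comparisons)."""
    return (bx >= sx and by >= sy
            and bx + bw <= sx + sw
            and by + bh <= sy + sh)
```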
In order to better implement the present invention, further, step 1.3 specifically includes:
step 1.3.1, establishing the X axis rightward and the Y axis downward with the upper left corner point of the composite material ultrasonic detection image as the origin, and denoting the coordinates of the upper left corner point of a cropped sub-image region as (SX, SY);
step 1.3.2, taking the upper left corner point of the composite material ultrasonic detection image as the starting point, sliding-cropping from the image, in order from left to right and top to bottom, sub-image regions of width SW and height SH, and extracting the coordinates (BX, BY) of the upper left corner point of the labeled target box, the width BW of the target box, and the height BH of the target box;
step 1.3.3, judging whether the current sub-image region contains the labeled target box, the judgment formula being:
BX ≥ SX, BY ≥ SY, BX + BW ≤ SX + SW, and BY + BH ≤ SY + SH;
wherein: BX is the X-axis coordinate of the upper left corner point of the labeled target box; BY is the Y-axis coordinate of the upper left corner point of the labeled target box; SX is the X-axis coordinate of the upper left corner point of the sub-image region; SY is the Y-axis coordinate of the upper left corner point of the sub-image region; BW is the width of the labeled target box along the X axis; BH is the height of the labeled target box along the Y axis; SW is the width along the X axis of the sub-image region cropped from the composite material ultrasonic detection image; and SH is the height along the Y axis of the same sub-image region.
To better implement the present invention, further, the step size of the sliding crop in step 1.3.2 is 0.2 times the width SW in the horizontal direction and 0.2 times the height SH in the vertical direction.
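The sliding crop with a 0.2-window step can be sketched as a generator; the handling of the right and bottom remainders (padding or clamping a final window) is left out, which is an assumption about behaviour the text does not specify:

```python
def sliding_windows(img_w, img_h, sw, sh, step_frac=0.2):
    """Yield (x, y, sw, sh) crop windows left-to-right, top-to-bottom,
    stepping by step_frac * window size (0.2 per the text)."""
    step_x = max(1, int(sw * step_frac))
    step_y = max(1, int(sh * step_frac))
    for y in range(0, img_h - sh + 1, step_y):
        for x in range(0, img_w - sw + 1, step_x):
            yield (x, y, sw, sh)
```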
In order to better implement the present invention, further, for the sub-image regions cropped by the sliding window in step 1.5, an NMS algorithm is used to calculate the IoU values between adjacent detection boxes on the sub-image regions; if an IoU value is greater than 0.9, the adjacent detection boxes are merged, and the above process is iterated until the final defect target detection bounding box is obtained.
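The IoU computation and the merge-above-0.9 rule can be sketched as follows; merging two boxes into their union rectangle is an assumption (classical NMS would instead suppress the lower-confidence box):

```python
def iou(a, b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def merge_boxes(boxes, thresh=0.9):
    """Repeatedly merge any pair of boxes whose IoU exceeds thresh into
    their union rectangle, iterating until no pair qualifies."""
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if iou(boxes[i], boxes[j]) > thresh:
                    a, b = boxes[i], boxes[j]
                    x1, y1 = min(a[0], b[0]), min(a[1], b[1])
                    x2 = max(a[0] + a[2], b[0] + b[2])
                    y2 = max(a[1] + a[3], b[1] + b[3])
                    boxes[j] = (x1, y1, x2 - x1, y2 - y1)
                    del boxes[i]
                    merged = True
                    break
            if merged:
                break
    return boxes
```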
In order to better implement the present invention, further, step 2 specifically includes:
step 2.1, performing superpixel segmentation on the composite material ultrasonic detection image to obtain K superpixel segmentation sub-regions of width SPW and height SPH;
step 2.2, extracting the K central pixel points at the centers of the K superpixel segmentation sub-regions and the 8 surrounding pixel points around each central pixel point, calculating the gradient modulus of each extracted central and surrounding pixel point, and taking the pixel point with the lowest gradient modulus in each sub-region as the clustering center of that superpixel segmentation sub-region;
step 2.3, iteratively calculating, with a K-means algorithm, the distances between the pixel points within the 3SPW × 3SPH range around the clustering center of the current superpixel segmentation sub-region and that clustering center, together with the mean vector of all pixel points in the current sub-region, thereby iterating to obtain a new clustering center;
and step 2.4, taking the new clustering center as reference, searching for the surrounding pixels similar to the clustering center's pixel, assigning the similar pixels to its class, and stopping the iteration when the number of pixels whose class labels change falls below a set threshold, yielding the superpixel segmentation sub-regions.
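Steps 2.1 to 2.4 follow the outline of the SLIC superpixel algorithm; a simplified single-channel sketch is below. The equal weighting of the spatial and intensity distance terms and the absence of a connectivity-enforcement pass are simplifications relative to full SLIC, and the function name is illustrative:

```python
import numpy as np

def slic_like(gray, K, iters=5):
    """Simplified SLIC-style superpixel segmentation of a single-channel
    image. Seeds are placed on a grid, shifted to the lowest-gradient
    pixel in their 3x3 neighbourhood (step 2.2), then refined by local
    K-means iterations (steps 2.3-2.4)."""
    gray = np.asarray(gray, float)
    h, w = gray.shape
    S = int(np.sqrt(h * w / K))                  # nominal superpixel spacing
    gy, gx = np.gradient(gray)
    grad = np.hypot(gx, gy)                      # gradient modulus
    centers = []
    for cy in range(S // 2, h, S):
        for cx in range(S // 2, w, S):
            ys = slice(max(cy - 1, 0), min(cy + 2, h))
            xs = slice(max(cx - 1, 0), min(cx + 2, w))
            dy, dx = np.unravel_index(grad[ys, xs].argmin(),
                                      grad[ys, xs].shape)
            centers.append([ys.start + dy, xs.start + dx])
    centers = np.asarray(centers, float)
    yy, xx = np.mgrid[0:h, 0:w]
    labels = np.zeros((h, w), int)
    for _ in range(iters):
        dist = np.full((h, w), np.inf)
        for k, (cy, cx) in enumerate(centers):
            # combined spatial + intensity distance, restricted to a local
            # window around the cluster center (cf. the 3SPW x 3SPH range)
            d = (yy - cy) ** 2 + (xx - cx) ** 2 \
                + (gray - gray[int(cy), int(cx)]) ** 2
            win = (np.abs(yy - cy) < 1.5 * S) & (np.abs(xx - cx) < 1.5 * S)
            better = win & (d < dist)
            dist[better] = d[better]
            labels[better] = k
        for k in range(len(centers)):            # recompute cluster centers
            mask = labels == k
            if mask.any():
                centers[k] = [yy[mask].mean(), xx[mask].mean()]
    return labels
```

In practice a library implementation such as scikit-image's `slic` would normally be used instead of hand-rolling this loop.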
In order to better implement the method, before the pixel-level segmentation of the composite material ultrasonic detection image in step 2, edge-preserving denoising and region smoothing are applied to the image in advance with a bilateral filter, and the processed image is converted to a single-channel grayscale space.
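A minimal sketch of these two preprocessing operations; the luminance weights in `to_gray` and the filter parameters are illustrative defaults, not values from the patent:

```python
import numpy as np

def to_gray(rgb):
    """Collapse an RGB image to a single channel using common luminance
    weights (an assumption; the text only says 'single-channel gray
    scale space')."""
    return np.asarray(rgb, float) @ np.array([0.299, 0.587, 0.114])

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Edge-preserving denoising: each output pixel is a weighted mean of
    its neighbourhood, with weights decaying over spatial distance
    (sigma_s) and intensity difference (sigma_r), so flat regions are
    smoothed while strong edges survive."""
    img = np.asarray(img, float)
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    ax = np.arange(-radius, radius + 1)
    dy, dx = np.meshgrid(ax, ax, indexing='ij')
    spatial = np.exp(-(dy ** 2 + dx ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            weight = spatial * np.exp(-(patch - img[i, j]) ** 2
                                      / (2 * sigma_r ** 2))
            out[i, j] = (weight * patch).sum() / weight.sum()
    return out
```

A production implementation would typically call an optimized routine such as OpenCV's `bilateralFilter` rather than this per-pixel loop.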
Compared with the prior art, the invention has the following advantages and beneficial effects:
According to the method, defect detection and extraction are performed on the composite material ultrasonic detection image through the YOLOv3 neural network to obtain a defect target detection bounding box containing the defects, ensuring efficient defect detection. Meanwhile, superpixel segmentation of the image yields the superpixel segmentation sub-regions containing defects, and the sub-regions not containing defects are discarded, which effectively improves the efficiency of the subsequent merging of sub-regions. The parts of the sub-regions lying outside the defect target detection bounding box are then discarded, further improving merging efficiency; the defect-containing sub-region parts inside the bounding box are merged and fitted with a minimum circumscribed rectangle, eliminating the influence of the non-defect portion inside the bounding box on the final detection result and effectively improving the accuracy of defect detection and recognition.
Drawings
FIG. 1 is a schematic flow chart illustrating the steps of the present invention;
FIG. 2 is a diagram showing the relationship between the initial positions of the superpixel segmentation sub-regions and the defect target detection bounding box;
FIG. 3 is a diagram illustrating the discarding of superpixel segmentation sub-regions that do not contain defects;
FIG. 4 is a diagram illustrating the extraction and merging of superpixel segmentation sub-regions;
FIG. 5 is a diagram of a fit of a minimum bounding rectangle.
Detailed Description
Example 1:
The method for detecting defects in composite material ultrasonic images based on deep learning and superpixel segmentation of this embodiment, as shown in FIG. 1, comprises the following steps:
Step 1, randomly cropping and completely partitioning a composite material ultrasonic detection image to form a training sample set, and, based on the training sample set, detecting and extracting composite material defect features through a YOLOv3 neural network to obtain a defect target detection bounding box. To address the insufficient number of samples in the training sample set, the samples are augmented by transformations such as rotation, flipping and random stretching; iterative training for composite material defect detection and extraction is then performed through the YOLOv3 neural network until the detection precision reaches the standard, giving a trained YOLOv3 neural network model; defect detection and extraction are then performed on the composite material ultrasonic detection image through the trained model to obtain the final defect target detection bounding box.
Step 2, performing pixel-level segmentation on the composite material ultrasonic detection image by a superpixel segmentation method to obtain a plurality of superpixel segmentation sub-regions, some of which contain no defects while the rest contain defects.
Step 3, discarding the parts of the defect-containing superpixel segmentation sub-regions that lie outside the defect target detection bounding box, which effectively reduces the subsequent computation for merging sub-regions and improves merging efficiency; then retaining and merging the parts of the defect-containing sub-regions that lie inside the bounding box as the final defect region.
Step 4, fitting the minimum circumscribed rectangle of the final defect region as the final defect detection result.
Example 2:
In this embodiment, further optimization is performed on the basis of embodiment 1; step 3 specifically includes:
Step 3.1, as shown in FIG. 2 and FIG. 3, extracting all superpixel segmentation sub-regions whose pixels overlap the defect target detection bounding box, detecting whether each extracted sub-region contains defects, retaining the sub-regions that contain defects and discarding those that do not; this prevents defect-free sub-regions from participating in subsequent merging and effectively improves merging efficiency.
Step 3.2, as shown in FIG. 4, extracting and merging the parts of the defect-containing sub-regions retained in step 3.1 that lie inside the defect target detection bounding box, and discarding the parts that lie outside it; this prevents the sub-region parts outside the bounding box from participating in subsequent merging and further improves merging efficiency.
Step 3.3, as shown in FIG. 5, fitting the minimum circumscribed rectangle of the merged sub-regions extracted in step 3.2, and extracting the coordinate position, size parameters and pixel count of the minimum circumscribed rectangle as the final defect detection result.
Other parts of this embodiment are the same as embodiment 1, and thus are not described again.
Example 3:
this embodiment is further optimized on the basis of the foregoing embodiment 1 or 2, where step 1 specifically includes:
step 1.1, reading a composite material ultrasonic detection image, and manually labeling the defects in it to form labeled target boxes;
step 1.2, random cropping: randomly cropping a plurality of sub-image regions from the composite material ultrasonic detection image and judging whether each sub-image region contains a labeled target box; if it does, the sub-image region is retained; otherwise it is discarded;
step 1.3, complete partitioning: taking the upper left corner point of the composite material ultrasonic detection image as the starting point, sliding-cropping the image from left to right and top to bottom to obtain a plurality of sub-image regions, and judging whether each sub-image region contains a labeled target box; if it does, the sub-image region is retained; otherwise it is discarded;
step 1.4, performing augmentation on the sub-image regions retained in steps 1.2 and 1.3 to form a training sample set, performing defect recognition training on the training sample set with the YOLOv3 algorithm to obtain defect detection boxes, calculating the degree of overlap between the defect detection boxes and the labeled target boxes, and iterating this process until the degree of overlap reaches a set threshold, yielding a trained YOLOv3 neural network model;
the size of the sub-graph region input in the YOLOv3 algorithm is 800 pixels multiplied by 800 pixels, the sub-graph region is subjected to feature calculation by adopting a Darknet-53 architecture, and the Darknet-53 comprises a convolution layer and a pooling layer. The gradient problem of a deep network is solved by adopting a full convolution technology and introducing a residual structure. The network end is trained by adopting a Softmax classifier, the learning rate is set to be 0.001, batchs and subdivisions are determined according to the performance of a training video card, Batch normalization is adopted to carry out iterative training on a weight parameter and a bias parameter, the weight attenuation rate is set to be 0.0005, the momentum gradient decline parameter is set to be 0.8, the processes are iterated until the coincidence degree of a defect detection frame and a labeling target frame reaches a set threshold value, a trained YOLOv3 neural network model is obtained, and the training weight parameter is stored.
And step 1.5, cropping sub-image regions from the composite material ultrasonic detection image with a sliding window, and performing defect recognition on the cropped sub-image regions with the trained YOLOv3 neural network model to obtain the final defect target detection bounding box.
The composite material ultrasonic detection image is input to the trained YOLOv3 neural network model, and defect detection is performed using the weight parameters saved in step 1.4. During defect recognition with the YOLOv3 model, cross-scale prediction with bounding boxes predicted at 3 scales is adopted; the sizes of the defect target detection bounding boxes are obtained by K-means clustering over the sample images in the training sample set, and finally the parameters and confidence of the defect target detection bounding box are output.
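The anchor-size clustering mentioned here can be sketched as follows; using 1 − IoU as the K-means distance on box widths and heights is the usual YOLO recipe and is an assumption, since the text does not name a distance metric:

```python
import numpy as np

def kmeans_anchors(wh, k, iters=20, seed=0):
    """Cluster ground-truth box sizes (w, h) into k anchor sizes with
    K-means, using 1 - IoU (boxes aligned at a common corner) as the
    distance, as in the standard YOLO anchor recipe."""
    wh = np.asarray(wh, float)
    rng = np.random.default_rng(seed)
    anchors = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        # IoU between every box and every anchor, corner-aligned
        inter = np.minimum(wh[:, None, 0], anchors[None, :, 0]) * \
                np.minimum(wh[:, None, 1], anchors[None, :, 1])
        union = (wh[:, 0] * wh[:, 1])[:, None] \
                + anchors[:, 0] * anchors[:, 1] - inter
        assign = (1 - inter / union).argmin(axis=1)
        for j in range(k):                 # move anchors to cluster means
            if (assign == j).any():
                anchors[j] = wh[assign == j].mean(axis=0)
    return anchors
```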
In step 1.5, the sliding window is used to crop sub-regions from the composite material ultrasonic detection image so as to accommodate, for defect detection, composite material ultrasonic detection images of parts of different sizes.
Further, for the sub-image regions cropped by the sliding window in step 1.5, an NMS algorithm is used to calculate the IoU values between adjacent detection boxes on the sub-image regions; if an IoU value is greater than 0.9, the adjacent detection boxes are merged, and the above process is iterated until the final defect target detection bounding box is obtained.
The rest of this embodiment is the same as embodiment 1 or 2, and therefore, the description thereof is omitted.
Example 4:
this embodiment is further optimized on the basis of any one of embodiments 1 to 3, where step 1.2 specifically includes:
step 1.2.1, establishing the X axis rightward and the Y axis downward with the upper left corner point of the composite material ultrasonic detection image as the origin, and denoting the coordinates of the upper left corner point of a cropped sub-image region as (SX, SY);
step 1.2.2, randomly cropping from the composite material ultrasonic detection image a sub-image region of width SW and height SH, and extracting the coordinates (BX, BY) of the upper left corner point of the labeled target box, the width BW of the target box, and the height BH of the target box;
step 1.2.3, judging whether the current sub-image region contains the labeled target box, the judgment formula being:
BX ≥ SX, BY ≥ SY, BX + BW ≤ SX + SW, and BY + BH ≤ SY + SH;
wherein: BX is the X-axis coordinate of the upper left corner point of the labeled target box; BY is the Y-axis coordinate of the upper left corner point of the labeled target box; SX is the X-axis coordinate of the upper left corner point of the sub-image region; SY is the Y-axis coordinate of the upper left corner point of the sub-image region; BW is the width of the labeled target box along the X axis; BH is the height of the labeled target box along the Y axis; SW is the width along the X axis of the sub-image region randomly cropped from the composite material ultrasonic detection image; and SH is the height along the Y axis of the same sub-image region.
Other parts of this embodiment are the same as any of embodiments 1 to 3, and thus are not described again.
Example 5:
this embodiment is further optimized on the basis of any one of embodiments 1 to 4, where step 1.3 specifically includes:
step 1.3.1, establishing the X axis rightward and the Y axis downward with the upper left corner point of the composite material ultrasonic detection image as the origin, and denoting the coordinates of the upper left corner point of a cropped sub-image region as (SX, SY);
step 1.3.2, taking the upper left corner point of the composite material ultrasonic detection image as the starting point, sliding-cropping from the image, in order from left to right and top to bottom, sub-image regions of width SW and height SH, and extracting the coordinates (BX, BY) of the upper left corner point of the labeled target box, the width BW of the target box, and the height BH of the target box;
step 1.3.3, judging whether the current sub-map area contains a labeling target frame, wherein the judgment formula is as follows:
wherein: BX is the coordinate of the upper left corner point of the labeling target frame on the X axis; BY is the coordinate of the upper left corner point of the labeling target frame on the Y axis; SX is the coordinate of the upper left corner point of the composite material ultrasonic detection image on the X axis; SY is a coordinate of an upper left corner point of the composite material ultrasonic detection image on a Y axis; BW is the width of the marked target frame along the X axis; BH is the height of the marking target frame along the Y axis; SW is the width of a sub-image area randomly intercepted from the composite material ultrasonic detection image along the X axis; SH is the height along the Y axis of a sub-image region randomly cut from the composite material ultrasonic detection image.
Further, the step size of the sliding interception in step 1.3.2 is 0.2 times the width SW along the X axis and 0.2 times the height SH along the Y axis.
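A minimal sketch of the left-to-right, top-to-bottom sliding interception with a 0.2 × size step (the generator name is illustrative):

```python
def sliding_window_origins(img_w, img_h, sw, sh, step_frac=0.2):
    """Yield (x, y) upper-left corners of sw-by-sh sub-image regions,
    scanned left to right, top to bottom, with a step of step_frac
    times the window size along each axis."""
    step_x = max(1, int(sw * step_frac))
    step_y = max(1, int(sh * step_frac))
    for y in range(0, img_h - sh + 1, step_y):
        for x in range(0, img_w - sw + 1, step_x):
            yield x, y
```

With a 100 × 100 image and 50 × 50 windows this yields a 6 × 6 grid of overlapping sub-image origins.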
Other parts of this embodiment are the same as any of embodiments 1 to 4, and thus are not described again.
Example 6:
this embodiment is further optimized on the basis of any one of embodiments 1 to 5, where step 2 specifically includes:
step 2.1, performing superpixel segmentation on the composite material ultrasonic detection image to obtain K superpixel segmentation sub-regions with a width of SPW and a height of SPH;
step 2.2, extracting the K central pixel points at the centers of the K superpixel segmentation sub-regions together with the 8 neighboring pixel points around each central pixel point, calculating the gradient magnitudes of the extracted central and neighboring pixel points, and selecting, within each superpixel segmentation sub-region, the pixel point with the lowest gradient magnitude as the clustering center of that sub-region;
step 2.3, using the K-means algorithm, iteratively calculating the distances between the clustering center of the current superpixel segmentation sub-region and the pixel points within the surrounding 3SPW × 3SPH range, together with the mean feature vector of all pixel points in the current sub-region, so as to obtain a new clustering center;
and step 2.4, with the new clustering center as the reference, searching the surrounding pixels for those similar to the clustering-center pixel and assigning the similar pixels to its class, and stopping the iteration when the number of pixels whose class labels change falls below a set threshold, thereby obtaining the final superpixel segmentation sub-regions.
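Steps 2.1 to 2.4 describe a SLIC-style procedure. The following is a minimal single-channel sketch under stated assumptions (a fixed iteration count rather than the label-change threshold, a ±1.5S search window matching the 3SPW × 3SPH range, and an illustrative compactness weight m):

```python
import numpy as np

def slic_gray(img, K, m=10.0, iters=5):
    """SLIC-style superpixels on a grayscale image: grid initialisation,
    gradient-based centre perturbation, local K-means assignment, update."""
    H, W = img.shape
    S = int(np.sqrt(H * W / K))              # nominal superpixel side length
    # steps 2.1/2.2: grid centres, each moved to the lowest-gradient pixel
    # among the centre and its 8 neighbours
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gy, gx)
    centers = []
    for y in range(S // 2, H, S):
        for x in range(S // 2, W, S):
            y0, x0 = max(y - 1, 0), max(x - 1, 0)
            n = grad[y0:y + 2, x0:x + 2]
            dy, dx = np.unravel_index(np.argmin(n), n.shape)
            cy, cx = y0 + dy, x0 + dx
            centers.append([float(img[cy, cx]), float(cy), float(cx)])
    centers = np.array(centers)
    labels = np.zeros((H, W), dtype=int)
    r = int(1.5 * S)                         # 3S x 3S search range around centre
    for _ in range(iters):
        dists = np.full((H, W), np.inf)
        # step 2.3: assign each pixel to the nearest centre in a local window
        for k, (g, cy, cx) in enumerate(centers):
            y0, y1 = max(int(cy) - r, 0), min(int(cy) + r + 1, H)
            x0, x1 = max(int(cx) - r, 0), min(int(cx) + r + 1, W)
            patch = img[y0:y1, x0:x1].astype(float)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            d = np.hypot(patch - g, m / S * np.hypot(yy - cy, xx - cx))
            better = d < dists[y0:y1, x0:x1]
            dists[y0:y1, x0:x1][better] = d[better]
            labels[y0:y1, x0:x1][better] = k
        # step 2.4: recompute each centre as the mean of its assigned pixels
        for k in range(len(centers)):
            mask = labels == k
            if mask.any():
                yy, xx = np.nonzero(mask)
                centers[k] = [img[mask].mean(), yy.mean(), xx.mean()]
    return labels
```

In practice an optimized library routine (e.g. skimage's SLIC) would replace this loop, but the structure mirrors the four steps above.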
Other parts of this embodiment are the same as any of embodiments 1 to 5, and thus are not described again.
Example 7:
In this embodiment, further optimization is performed on the basis of any one of embodiments 1 to 6: before the pixel-level segmentation of the composite material ultrasonic detection image in step 2, the image is first subjected to edge-preserving denoising and region smoothing by a bilateral filter, and the processed image is then converted into a single-channel gray-scale space.
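A naive single-channel bilateral filter sketch (an O(H·W·k²) loop for clarity; the sigma values are illustrative, and a production pipeline would typically call an optimized library routine such as OpenCV's bilateralFilter):

```python
import numpy as np

def bilateral_gray(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Edge-preserving smoothing: each output pixel is a weighted mean
    whose weights decay with both spatial distance (sigma_s) and
    intensity difference (sigma_r), so sharp edges are kept."""
    img = img.astype(float)
    H, W = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    pad = np.pad(img, radius, mode='edge')
    out = np.empty_like(img)
    for y in range(H):
        for x in range(W):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rangew = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))
            w = spatial * rangew
            out[y, x] = (w * patch).sum() / w.sum()
    return out
```

A flat region is smoothed to itself, while a step edge passes through almost unchanged, which is the edge-preserving property the embodiment relies on.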
Other parts of this embodiment are the same as any of embodiments 1 to 6, and thus are not described again.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications and equivalent variations of the above embodiments according to the technical spirit of the present invention are included in the scope of the present invention.
Claims (9)
1. The method for detecting the defects of the composite material ultrasonic image based on deep learning and superpixel segmentation is characterized by comprising the following steps of:
step 1, augmenting the composite material ultrasonic detection images to form a training sample set, and, based on the training sample set, detecting and extracting composite material defect features through a YOLOv3 neural network to obtain a defect target detection bounding frame;
step 2, performing pixel-level segmentation on the composite material ultrasonic detection image by a superpixel segmentation method to obtain a plurality of superpixel segmentation sub-regions;
step 3, discarding the parts of the defect-containing superpixel segmentation sub-regions that lie outside the defect target detection bounding frame, and retaining and merging the parts that lie inside the bounding frame as the final defect region;
and step 4, fitting the minimum circumscribed rectangle of the final defect region as the final defect detection result.
2. The method for detecting defects of a composite ultrasonic image based on deep learning and superpixel segmentation as claimed in claim 1, wherein said step 3 specifically comprises:
step 3.1, extracting all superpixel segmentation sub-regions whose pixels overlap the defect target detection bounding frame, detecting whether each extracted sub-region contains a defect, retaining the sub-regions that contain defects, and discarding those that do not;
step 3.2, extracting and merging the parts of the defect-containing sub-regions retained in step 3.1 that lie inside the defect target detection bounding frame, and discarding the parts that lie outside it;
and step 3.3, fitting the minimum circumscribed rectangle of the merged superpixel region extracted in step 3.2, and extracting the coordinate position, size parameters and pixel count of the minimum circumscribed rectangle as the final defect detection result.
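Steps 3.1 to 3.3 can be sketched as follows, assuming a superpixel label map, a set of sub-region IDs already flagged as defective, and an (x, y, w, h) detection box; for simplicity the sketch fits an axis-aligned bounding rectangle, whereas a rotated minimum-area rectangle could also be used:

```python
import numpy as np

def defect_region_and_rect(labels, defect_ids, box):
    """Keep the defective superpixel sub-regions (defect_ids), clip them
    to the detection bounding frame box = (x, y, w, h), merge them, and
    fit the bounding rectangle of the merged region.
    Returns ((x0, y0), (rect_w, rect_h), pixel_count), or None if empty."""
    x, y, w, h = box
    inside = np.zeros(labels.shape, dtype=bool)
    inside[y:y + h, x:x + w] = True              # pixels inside the frame
    merged = np.isin(labels, list(defect_ids)) & inside
    ys, xs = np.nonzero(merged)
    if ys.size == 0:
        return None
    x0, y0 = int(xs.min()), int(ys.min())
    return ((x0, y0),
            (int(xs.max()) - x0 + 1, int(ys.max()) - y0 + 1),
            int(merged.sum()))
```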
3. The method for detecting defects of a composite ultrasonic image based on deep learning and superpixel segmentation as claimed in claim 2, wherein said step 1 specifically comprises:
step 1.1, reading a composite material ultrasonic detection image, and manually marking defects in the composite material ultrasonic detection image to form a marking target frame;
step 1.2, randomly intercepting a plurality of sub-image regions from the composite material ultrasonic detection image, and judging whether each sub-image region contains the labeling target frame; if it does, the sub-image region is retained; if it does not, the sub-image region is discarded;
step 1.3, with the upper left corner point of the composite material ultrasonic detection image as the starting point, slidingly intercepting a plurality of sub-image regions from the image in order from left to right and from top to bottom, and judging whether each sub-image region contains the labeling target frame; if it does, the sub-image region is retained; if it does not, the sub-image region is discarded;
step 1.4, performing augmentation on the sub-image regions retained in step 1.2 and step 1.3 to form a training sample set; performing defect recognition training on the training sample set with the YOLOv3 algorithm to obtain defect detection frames; calculating the degree of overlap between the defect detection frames and the labeling target frames; and iterating this process until the degree of overlap reaches a set threshold, yielding a trained YOLOv3 neural network model;
and step 1.5, intercepting a sub-image region on the composite material ultrasonic detection image through a sliding window, and performing defect recognition on the intercepted sub-image region by adopting a trained YOLOv3 neural network model to obtain a final defect target detection surrounding frame.
4. The method for detecting defects of a composite ultrasonic image based on deep learning and superpixel segmentation as claimed in claim 3, wherein said step 1.2 specifically comprises:
step 1.2.1, with the upper left corner point of the composite material ultrasonic detection image as the origin, establishing an X axis pointing rightward and a Y axis pointing downward, and extracting the coordinates (SX, SY) of the upper left corner point of an intercepted sub-image region;
step 1.2.2, randomly intercepting, from the composite material ultrasonic detection image, a sub-image region with a width of SW and a height of SH, and extracting the coordinates (BX, BY) of the upper left corner point of the labeling target frame, the width BW of the labeling target frame, and the height BH of the labeling target frame;
step 1.2.3, judging whether the current sub-image region contains the labeling target frame; the sub-image region contains the labeling target frame when all of the following hold:
SX ≤ BX, SY ≤ BY, BX + BW ≤ SX + SW, and BY + BH ≤ SY + SH;
wherein: BX is the coordinate of the upper left corner point of the labeling target frame on the X axis; BY is the coordinate of the upper left corner point of the labeling target frame on the Y axis; SX is the coordinate of the upper left corner point of the sub-image region on the X axis; SY is the coordinate of the upper left corner point of the sub-image region on the Y axis; BW is the width of the labeling target frame along the X axis; BH is the height of the labeling target frame along the Y axis; SW is the width, along the X axis, of the sub-image region randomly intercepted from the composite material ultrasonic detection image; SH is the height, along the Y axis, of the sub-image region randomly intercepted from the composite material ultrasonic detection image.
5. The method for detecting defects of a composite ultrasonic image based on deep learning and superpixel segmentation as claimed in claim 3, wherein said step 1.3 specifically comprises:
step 1.3.1, with the upper left corner point of the composite material ultrasonic detection image as the origin, establishing an X axis pointing rightward and a Y axis pointing downward, and extracting the coordinates (SX, SY) of the upper left corner point of an intercepted sub-image region;
step 1.3.2, with the upper left corner point of the composite material ultrasonic detection image as the starting point, slidingly intercepting, in order from left to right and from top to bottom, sub-image regions with a width of SW and a height of SH from the composite material ultrasonic detection image, and extracting the coordinates (BX, BY) of the upper left corner point of the labeling target frame, the width BW of the labeling target frame, and the height BH of the labeling target frame;
step 1.3.3, judging whether the current sub-image region contains the labeling target frame; the sub-image region contains the labeling target frame when all of the following hold:
SX ≤ BX, SY ≤ BY, BX + BW ≤ SX + SW, and BY + BH ≤ SY + SH;
wherein: BX is the coordinate of the upper left corner point of the labeling target frame on the X axis; BY is the coordinate of the upper left corner point of the labeling target frame on the Y axis; SX is the coordinate of the upper left corner point of the sub-image region on the X axis; SY is the coordinate of the upper left corner point of the sub-image region on the Y axis; BW is the width of the labeling target frame along the X axis; BH is the height of the labeling target frame along the Y axis; SW is the width, along the X axis, of the sub-image region slidingly intercepted from the composite material ultrasonic detection image; SH is the height, along the Y axis, of the sub-image region slidingly intercepted from the composite material ultrasonic detection image.
6. The method for detecting defects of composite material ultrasonic images based on deep learning and superpixel segmentation as claimed in claim 5, wherein the step size of the sliding interception in step 1.3.2 is 0.2 times the width SW along the X axis and 0.2 times the height SH along the Y axis.
7. The method for detecting defects of composite material ultrasonic images based on deep learning and superpixel segmentation as claimed in claim 3, wherein, for the sub-image regions intercepted by the sliding window in step 1.5, the NMS algorithm is used to calculate the IoU values between adjacent detection frames on the sub-image regions; if an IoU value is greater than 0.9, the corresponding adjacent detection frames are merged, and this process is iterated until the final defect target detection bounding frame is obtained.
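The merging rule of claim 7 can be sketched as follows, with boxes given as (x, y, w, h) tuples; the helper names are illustrative, and a full NMS pipeline would also use the detector's confidence scores:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def merge_adjacent_boxes(boxes, thr=0.9):
    """Repeatedly replace any pair of boxes with IoU > thr by their
    union box, iterating until no pair exceeds the threshold."""
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if iou(boxes[i], boxes[j]) > thr:
                    ax, ay, aw, ah = boxes[i]
                    bx, by, bw, bh = boxes[j]
                    x0, y0 = min(ax, bx), min(ay, by)
                    x1 = max(ax + aw, bx + bw)
                    y1 = max(ay + ah, by + bh)
                    boxes[i] = (x0, y0, x1 - x0, y1 - y0)
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes
```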
8. The method for detecting defects of a composite ultrasonic image based on deep learning and superpixel segmentation as claimed in claim 2, wherein said step 2 specifically comprises:
step 2.1, performing superpixel segmentation on the composite material ultrasonic detection image to obtain K superpixel segmentation sub-regions with a width of SPW and a height of SPH;
step 2.2, extracting the K central pixel points at the centers of the K superpixel segmentation sub-regions together with the 8 neighboring pixel points around each central pixel point, calculating the gradient magnitudes of the extracted central and neighboring pixel points, and selecting, within each superpixel segmentation sub-region, the pixel point with the lowest gradient magnitude as the clustering center of that sub-region;
step 2.3, using the K-means algorithm, iteratively calculating the distances between the clustering center of the current superpixel segmentation sub-region and the pixel points within the surrounding 3SPW × 3SPH range, together with the mean feature vector of all pixel points in the current sub-region, so as to obtain a new clustering center;
and step 2.4, with the new clustering center as the reference, searching the surrounding pixels for those similar to the clustering-center pixel and assigning the similar pixels to its class, and stopping the iteration when the number of pixels whose class labels change falls below a set threshold, thereby obtaining the final superpixel segmentation sub-regions.
9. The method for detecting defects of composite material ultrasonic images based on deep learning and superpixel segmentation as claimed in claim 8, wherein, before the pixel-level segmentation of the composite material ultrasonic detection image in step 2, the image is first subjected to edge-preserving denoising and region smoothing by a bilateral filter, and the processed image is then converted into a single-channel gray-scale space.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110714603.8A CN113298809B (en) | 2021-06-25 | 2021-06-25 | Composite material ultrasonic image defect detection method based on deep learning and superpixel segmentation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113298809A true CN113298809A (en) | 2021-08-24 |
CN113298809B CN113298809B (en) | 2022-04-08 |
Family
ID=77329689
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110714603.8A Active CN113298809B (en) | 2021-06-25 | 2021-06-25 | Composite material ultrasonic image defect detection method based on deep learning and superpixel segmentation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113298809B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114387271A (en) * | 2022-03-23 | 2022-04-22 | 武汉铂雅科技有限公司 | Air conditioner plastic water pan grid glue shortage detection method and system based on angular point detection |
CN116012283A (en) * | 2022-09-28 | 2023-04-25 | 逸超医疗科技(北京)有限公司 | Full-automatic ultrasonic image measurement method, equipment and storage medium |
CN116403094A (en) * | 2023-06-08 | 2023-07-07 | 成都菁蓉联创科技有限公司 | Embedded image recognition method and system |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9147255B1 (en) * | 2013-03-14 | 2015-09-29 | Hrl Laboratories, Llc | Rapid object detection by combining structural information from image segmentation with bio-inspired attentional mechanisms |
CN107833220A (en) * | 2017-11-28 | 2018-03-23 | 河海大学常州校区 | Fabric defect detection method based on depth convolutional neural networks and vision significance |
CN108961235A (en) * | 2018-06-29 | 2018-12-07 | 山东大学 | A kind of disordered insulator recognition methods based on YOLOv3 network and particle filter algorithm |
CN109658381A (en) * | 2018-11-16 | 2019-04-19 | 华南理工大学 | A kind of copper face defect inspection method of the flexible IC package substrate based on super-pixel |
CN110310259A (en) * | 2019-06-19 | 2019-10-08 | 江南大学 | It is a kind of that flaw detection method is tied based on the wood for improving YOLOv3 algorithm |
CN110400296A (en) * | 2019-07-19 | 2019-11-01 | 重庆邮电大学 | The scanning of continuous casting blank surface defects binocular and deep learning fusion identification method and system |
CN110827244A (en) * | 2019-10-28 | 2020-02-21 | 上海悦易网络信息技术有限公司 | Method and equipment for detecting appearance flaws of electronic equipment |
CN111210408A (en) * | 2019-12-30 | 2020-05-29 | 南京航空航天大学 | Ray image-based composite material defect identification method |
CN111598084A (en) * | 2020-05-11 | 2020-08-28 | 北京阿丘机器人科技有限公司 | Defect segmentation network training method, device and equipment and readable storage medium |
CN111695482A (en) * | 2020-06-04 | 2020-09-22 | 华油钢管有限公司 | Pipeline defect identification method |
CN112541930A (en) * | 2019-09-23 | 2021-03-23 | 大连民族大学 | Image super-pixel target pedestrian segmentation method based on cascade connection |
CN112819771A (en) * | 2021-01-27 | 2021-05-18 | 东北林业大学 | Wood defect detection method based on improved YOLOv3 model |
Non-Patent Citations (5)
Title |
---|
ABDULKADIR ALBAYRAK et al.: "A Hybrid Method of Superpixel Segmentation Algorithm and Deep Learning Method in Histopathological Image Segmentation", 2018 Innovations in Intelligent Systems and Applications (INISTA) *
YUTU YANG et al.: "Wood Defect Detection Based on Depth Extreme Learning Machine", Applied Sciences *
LIU Ying et al.: "Wood defect detection based on an optimized convolutional neural network", Journal of Forestry Engineering *
JIANG Tao et al.: "Liver segmentation of CT images based on convolutional neural network and superpixels", China Medical Devices *
WANG Can et al.: "Recognition of maize weeds based on multi-scale hierarchical features extracted by convolutional neural network", Transactions of the Chinese Society of Agricultural Engineering *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||