CN113362347A - Image defect region segmentation method and system based on multi-scale superpixel feature enhancement - Google Patents
Image defect region segmentation method and system based on multi-scale superpixel feature enhancement Download PDFInfo
- Publication number
- CN113362347A (application number CN202110801975.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- super
- superpixel
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention provides an image defect region segmentation method based on multi-scale superpixel feature enhancement, which comprises the following steps: S1: acquiring an image containing a workpiece surface defect; S2: preprocessing the image; S3: extracting features from the preprocessed image; S4: inputting the extracted features into an S2pNet network, wherein the S2pNet network outputs superpixel domain association maps at different scales, each representing the relations between pixels inside and outside a superpixel; S5: fusing and splicing the superpixel domain association maps at different scales with the corresponding-scale feature layers of the segmentation network used to segment the image defect region; S6: the segmentation network outputs the segmented defect regions. The invention extracts superpixel prior knowledge at different scales and fuses it at multiple scales with the encoding features of the segmentation network, enriching the feature information and enabling the segmentation network to output a more precise predicted segmentation region.
Description
Technical Field
The invention relates to the field of machine vision deep learning, in particular to an image defect region segmentation method and system based on multi-scale superpixel feature enhancement.
Background
Most current deep-learning-based surface defect detection relies on supervised representation learning. Because its target is fully consistent with generic computer vision tasks, representation-learning-based defect detection can be regarded as an application of the related classical networks to the industrial field.
Surface inspection of industrial products is a key step in determining product quality. Owing to the influence of the processing environment, the processing technology and other factors, defects are irregular in shape, vary in size, and are randomly located, so they cannot be predicted in advance; some defects are unobvious (similar to the background). Traditional vision algorithms struggle to cover such varied defects, and unobvious defects in particular suffer a high miss rate.
Data-driven deep learning can effectively improve the generalization ability of a detection model. Whether a convolutional neural network can effectively extract defect features is the key factor in defect detection. Most existing feature extraction methods use a downsampling-upsampling network structure that extracts defect features through convolution and pooling; the downsampling process inevitably loses feature information, and ordinary convolution and pooling cannot fully extract it. For low-contrast, unobvious defects, extracting defect features is therefore a hard problem in deep learning.
Chinese patent publication No. CN111445471A, published on July 24, 2020, discloses a method and apparatus for detecting product surface defects based on deep learning and machine vision. Its technical scheme is as follows: a surface image of the product to be inspected is acquired by an industrial line-scan camera; defect feature preprocessing is performed on the acquired image in real time to quickly determine whether the surface image contains defects; a trained deep convolutional neural network model is used to identify defect severity and classify the defect type of defective images; the trained model is obtained by transfer learning, modification and training of the classical Inception v3 network. Because that patent likewise cannot adequately extract feature information, it also struggles to detect low-contrast, unobvious defects.
Disclosure of Invention
The invention aims to provide an image defect region segmentation method based on multi-scale superpixel feature enhancement, which is suitable for detecting low contrast and multi-scale defects.
It is a further object of this invention to provide a system for image defect region segmentation based on multi-scale superpixel feature enhancement.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a method for segmenting image defect regions based on multi-scale superpixel feature enhancement comprises the following steps:
s1: acquiring an image comprising a surface defect of a workpiece;
s2: preprocessing the image;
s3: extracting the characteristics of the preprocessed image;
s4: inputting the extracted features into an S2pNet network, wherein the S2pNet network outputs superpixel domain association maps at different scales, each representing the relations between pixels inside and outside a superpixel;
s5: fusing and splicing the superpixel domain association maps at different scales with the corresponding-scale feature layers of the segmentation network used to segment the image defect region;
s6: the segmentation network outputs the segmented defective regions.
Preferably, in the step S2, the image is preprocessed, specifically:
and carrying out image acquisition, preliminary setting of an image detection area and cutting on the image.
Preferably, in step S3, the feature extraction is performed on the preprocessed image, specifically:
extracting characteristic information of each pixel point in the preprocessed image, wherein the characteristic information comprises relative coordinate information (x, y) of the current pixel point, three-dimensional information (l, a, b) of the current pixel point in an LAB color space, gradient information h of the current pixel point and label information t of the current image, and taking a set (x, y, l, a, b, h, t) of all the information as the characteristic for describing each pixel point.
Preferably, in step S4, the extracted features are input into the S2pNet network, and the following processing is further performed on the extracted features:
subjecting the extracted features to mean pooling at different scales to obtain features f_α, f_β and f_η, with downsampling multiples α, β and η and dimensions (H/α)×(W/α), (H/β)×(W/β) and (H/η)×(W/η) respectively, where H and W are the length and width of the image.
Preferably, the dimension of the superpixel domain association map at each scale in step S4 is 9×(H/α)×(W/α) (correspondingly for β and η). The first dimension is fixed at 9 and represents the nine orientations of the current pixel's neighborhood, including the pixel itself: upper-left, upper, upper-right, left, center, right, lower-left, lower and lower-right. The value at each point of the superpixel domain association map represents the correlation between the current superpixel and the nine superpixels in its neighborhood.
Preferably, the S2pNet network trains a super-pixel neighborhood correlation model by using the extracted features, the super-pixel neighborhood correlation model outputs super-pixel neighborhood correlation mapping maps at different scales, and the training process of the super-pixel neighborhood correlation model is as follows:
for the feature with down-sampling multiple of alpha, first initialize oneS _ m matrix of, andthe characteristic region is subjected to weighted dot product calculation to obtainPolymerization characteristic f of0:
In the formula, H, W represents the length and width of an image, s _ m represents a super-pixel domain association mapping map to be learned, f represents an extracted image feature, and alpha represents a down-sampling multiple;
the image is obtained by redistributing the associated mapping map of the super pixel fieldIs reconstructed feature frc:
Defining the similarity degree of the reconstructed features and the original features as the learning quality degree of the super-pixel neighborhood correlation model, and defining the Loss of the target functions_m:
Losss_m=|f0-frc|2
Updating reversely according to the target function until the target function is smaller than a threshold value, and finishing training;
and for the features with the down-sampling multiples of beta and eta, completing training by adopting the same method as the steps.
Preferably, in step S5, the superpixel domain association maps at different scales are fused and spliced with the corresponding-scale feature layers of the network for segmenting the image defect region, the fusion-splicing process being divided into fusion and splicing, the fusion being specifically:

f_si = λ · s_m_i · f_i + (1 − λ) · f_i,  i ∈ {1, 2, 3}

where f_si denotes the feature map after fusing the i-th superpixel association matrix, and λ is a hyperparameter that adjusts the relative importance of the aggregated feature map and the original feature map.

Preferably, the splicing is specifically:

f_out = up(up(up(f_s3) + f_s2) + f_s1)

where up denotes an upsampling process and '+' denotes concatenation of feature maps.
Preferably, training of the segmentation network requires preprocessing of the workpiece image; the preprocessing includes movement of the program-controlled stage with synchronized camera acquisition of images, together with random flipping, contrast stretching and random center cropping of the input image.
An image defect region segmentation system based on multi-scale superpixel feature enhancement, comprising:
an image acquisition module for acquiring an image comprising a surface defect of a workpiece;
a pre-processing module for pre-processing the image;
the characteristic extraction module is used for extracting the characteristics of the preprocessed image;
the S2pNet module is used for inputting the extracted features into an S2pNet network, wherein the S2pNet network outputs superpixel domain association maps at different scales, each representing the relations between pixels inside and outside a superpixel;
the fusion splicing module is used for fusing and splicing the superpixel domain association maps at different scales with the corresponding-scale feature layers of the segmentation network used to segment the image defect region;
and the output module outputs the well-segmented defect area by utilizing the segmentation network.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention utilizes the super-pixel priori knowledge of the S2pNet network learning image to extract the super-pixel priori knowledge under different scales and carries out multi-scale fusion with the coding characteristics of the segmentation network, so that the information of a characteristic layer is more compact, the characteristic points in the characteristic layer are mutually influenced, the characteristic information is enriched, the defect of insufficient supervision information of the weak supervision network is relieved, the information extraction of the weak supervision segmentation network to the image pixel in one order or even multiple orders is realized, and finally the segmentation network outputs a more precise prediction segmentation area.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a block diagram of the system of the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
The embodiment provides an image defect region segmentation method based on multi-scale superpixel feature enhancement, as shown in fig. 1, comprising the following steps:
s1: acquiring an image comprising a surface defect of a workpiece;
s2: preprocessing the image;
s3: extracting the characteristics of the preprocessed image;
s4: inputting the extracted features into an S2pNet network, wherein the S2pNet network outputs superpixel domain association maps at different scales, each representing the relations between pixels inside and outside a superpixel;
s5: fusing and splicing the superpixel domain association maps at different scales with the corresponding-scale feature layers of the segmentation network used to segment the image defect region;
s6: the segmentation network outputs the segmented defective regions.
In step S2, the image is preprocessed, specifically:
and carrying out image acquisition, preliminary setting of an image detection area and cutting on the image.
In step S3, feature extraction is performed on the preprocessed image, specifically:
extracting characteristic information of each pixel point in the preprocessed image, wherein the characteristic information comprises relative coordinate information (x, y) of the current pixel point, three-dimensional information (l, a, b) of the current pixel point in an LAB color space, gradient information h of the current pixel point and label information t of the current image, and taking a set (x, y, l, a, b, h, t) of all the information as the characteristic for describing each pixel point.
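As an illustrative sketch (not part of the patent), the per-pixel descriptor (x, y, l, a, b, h, t) described above might be assembled as follows; the LAB conversion is assumed to happen upstream (e.g. in an image library), and the gradient h is taken here as the gradient magnitude of the lightness channel, one plausible reading of the specification:

```python
import numpy as np

def pixel_features(lab_image, label, eps=1e-8):
    """Build the (x, y, l, a, b, h, t) descriptor for every pixel.

    lab_image: H x W x 3 array, assumed already converted to LAB color
    space; label: scalar image-level label t, broadcast to every pixel.
    """
    H, W, _ = lab_image.shape
    # Relative coordinates (x, y) normalized to [0, 1]
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    xs /= max(W - 1, 1)
    ys /= max(H - 1, 1)
    # Gradient information h: gradient magnitude of the lightness channel
    gy, gx = np.gradient(lab_image[..., 0])
    h = np.sqrt(gx ** 2 + gy ** 2 + eps)
    # Label information t, one copy per pixel
    t = np.full((H, W), float(label))
    # H x W x 7 feature tensor: (x, y, l, a, b, h, t)
    return np.stack([xs, ys,
                     lab_image[..., 0], lab_image[..., 1], lab_image[..., 2],
                     h, t], axis=-1)
```

The resulting H×W×7 tensor is what the later steps pool and feed to the S2pNet network.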
In step S4, the extracted features are input into the S2pNet network, and the following processing is further performed on the extracted features:
subjecting the extracted features to mean pooling at different scales to obtain features f_α, f_β and f_η, with downsampling multiples α, β and η and dimensions (H/α)×(W/α), (H/β)×(W/β) and (H/η)×(W/η) respectively, where H and W are the length and width of the image.
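A minimal sketch of the multi-scale mean pooling step; the downsampling multiples (2, 4, 8) are illustrative stand-ins for α, β and η, which the patent leaves unspecified:

```python
import numpy as np

def mean_pool(x, k):
    """Non-overlapping k x k mean pooling of an H x W x C tensor;
    H and W are assumed divisible by k (the downsampling multiple)."""
    H, W, C = x.shape
    return x.reshape(H // k, k, W // k, k, C).mean(axis=(1, 3))

def multi_scale(features, scales=(2, 4, 8)):
    """Return pooled features f_alpha, f_beta, f_eta for the given
    downsampling multiples (illustrative values for alpha, beta, eta)."""
    return [mean_pool(features, s) for s in scales]
```

Each pooled tensor keeps the 7 feature channels while shrinking the spatial grid by the respective multiple.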
The dimension of the superpixel domain association map at each scale in step S4 is 9×(H/α)×(W/α) (correspondingly for β and η). The multi-scale superpixel domain association model can extract both longitudinal and transverse pixel relations. The first dimension is fixed at 9 and represents the nine orientations of the current pixel's neighborhood, including the pixel itself: upper-left, upper, upper-right, left, center, right, lower-left, lower and lower-right. The value at each point of the superpixel domain association map represents the correlation between the current superpixel and the nine superpixels in its neighborhood: the larger the weight, the greater the probability that the two superpixels belong to the same category; the smaller the weight, the weaker the correlation between them and the greater the probability that they are assigned labels of different categories.
The S2pNet network is a convolutional neural network composed of several convolutional layers. It is the core of the algorithm and is mainly responsible for training the neighborhood correlation model of the pixels inside and outside a superpixel, covering both the definition of the network model's output and the design of the training strategy for the superpixel neighborhood correlation model. S2pNet is an encoder-decoder network structure; its input and output differ in channel count but have the same spatial size.
The S2pNet network trains a super-pixel neighborhood correlation model by using the extracted features, the super-pixel neighborhood correlation model outputs super-pixel neighborhood correlation mapping maps under different scales, and the training process of the super-pixel neighborhood correlation model is as follows:
for the feature with down-sampling multiple of alpha, first initialize oneS _ m matrix of, andthe characteristic region is subjected to weighted dot product calculation to obtainPolymerization characteristic f of0:
In the formula, H, W represents the length and width of an image, s _ m represents a super-pixel domain association mapping map to be learned, f represents an extracted image feature, and alpha represents a down-sampling multiple;
the image is obtained by redistributing the associated mapping map of the super pixel fieldIs reconstructed feature frc:
Defining the similarity degree of the reconstructed features and the original features as the learning quality degree of the super-pixel neighborhood correlation model, and defining the Loss of the target functions_m:
Losss_m=|f0-frc|2
Updating reversely according to the target function until the target function is smaller than a threshold value, and finishing training;
and for the features with the down-sampling multiples of beta and eta, completing training by adopting the same method as the steps.
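The aggregation, redistribution and loss above can be sketched as plain forward computations; the exact neighborhood weighting is an interpretation of the patent's description, and a real implementation would backpropagate through these operations with an autograd framework rather than compute them in numpy:

```python
import numpy as np

# Nine neighborhood offsets: upper-left .. lower-right, center at index 4
OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
           (0, -1),  (0, 0),  (0, 1),
           (1, -1),  (1, 0),  (1, 1)]

def aggregate(s_m, f):
    """Aggregated feature f0: each position is a weighted sum of its
    9-neighborhood, weighted by the association map s_m (9 x h x w);
    f is h x w x c, borders are zero-padded."""
    h, w, c = f.shape
    fp = np.pad(f, ((1, 1), (1, 1), (0, 0)))
    f0 = np.zeros_like(f)
    for d, (dy, dx) in enumerate(OFFSETS):
        f0 += s_m[d][..., None] * fp[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return f0

def reconstruct(s_m, f0):
    """Redistribute f0 back through the association map to get f_rc:
    each neighbor position receives its weighted share."""
    h, w, c = f0.shape
    acc = np.zeros((h + 2, w + 2, c))
    for d, (dy, dx) in enumerate(OFFSETS):
        acc[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] += s_m[d][..., None] * f0
    return acc[1:1 + h, 1:1 + w]

def loss_s_m(f0, f_rc):
    """Objective Loss_s_m = ||f0 - f_rc||^2."""
    return float(((f0 - f_rc) ** 2).sum())
```

With an identity association map (all weight on the center channel), aggregation and reconstruction leave the features unchanged and the loss is zero, which is a useful sanity check.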
In step S5, the superpixel domain association maps at different scales are fused and spliced with the corresponding-scale feature layers of the network for segmenting the image defect region. The fusion-splicing process is divided into fusion and splicing; the fusion is specifically:

f_si = λ · s_m_i · f_i + (1 − λ) · f_i,  i ∈ {1, 2, 3}

where f_si denotes the feature map after fusing the i-th superpixel association matrix, and λ is a hyperparameter that adjusts the relative importance of the aggregated feature map and the original feature map.

The splicing is specifically:

f_out = up(up(up(f_s3) + f_s2) + f_s1)

where up denotes an upsampling process and '+' denotes concatenation of feature maps.
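A small sketch of the fusion and splicing formulas; nearest-neighbour upsampling and channel concatenation are assumptions for `up` and '+', and λ = 0.5 is an illustrative value for the hyperparameter:

```python
import numpy as np

def fuse(agg, f, lam=0.5):
    """f_si = lam * aggregated + (1 - lam) * original; lam balances the
    aggregated and original feature maps (0.5 is illustrative)."""
    return lam * agg + (1.0 - lam) * f

def upsample2(x):
    """Nearest-neighbour x2 upsampling as a stand-in for 'up'."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def splice(fs1, fs2, fs3):
    """f_out = up(up(up(fs3) + fs2) + fs1); '+' in the patent denotes
    splicing of feature maps, modelled here as channel concatenation.

    fs3 is the coarsest map (h x w x c), fs2 twice its size, fs1 four
    times its size, matching downsampling multiples eta > beta > alpha."""
    x = np.concatenate([upsample2(fs3), fs2], axis=-1)
    x = np.concatenate([upsample2(x), fs1], axis=-1)
    return upsample2(x)
```

Each concatenation doubles the spatial size and accumulates channels, so the coarsest prior reaches the full-resolution output.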
Training of the segmentation network requires preprocessing of the workpiece image; the preprocessing includes movement of the program-controlled stage with synchronized camera acquisition of images, together with random flipping, contrast stretching and random center cropping of the input image.
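The image augmentations named above (random flipping, contrast stretching, random center cropping) might look like the following; the percentile bounds and crop fraction are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(img):
    """Randomly flip the image horizontally and/or vertically."""
    if rng.random() < 0.5:
        img = img[:, ::-1]
    if rng.random() < 0.5:
        img = img[::-1, :]
    return img

def contrast_stretch(img, low=2, high=98):
    """Linear stretch between the low/high percentiles (2/98 assumed)."""
    lo, hi = np.percentile(img, [low, high])
    return np.clip((img - lo) / max(hi - lo, 1e-8), 0.0, 1.0)

def random_center_crop(img, min_frac=0.7):
    """Crop a centered window whose side is a random fraction of the
    image size (the 0.7 lower bound is an assumed choice)."""
    h, w = img.shape[:2]
    frac = rng.uniform(min_frac, 1.0)
    ch, cw = max(1, int(h * frac)), max(1, int(w * frac))
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    return img[y0:y0 + ch, x0:x0 + cw]
```

These would typically be composed in a data-loading pipeline before the features of step S3 are extracted.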
Example 2
The embodiment provides an image defect region segmentation system based on multi-scale superpixel feature enhancement, as shown in fig. 2, including:
an image acquisition module for acquiring an image comprising a surface defect of a workpiece;
a pre-processing module for pre-processing the image;
the characteristic extraction module is used for extracting the characteristics of the preprocessed image;
the S2pNet module is used for inputting the extracted features into an S2pNet network, wherein the S2pNet network outputs superpixel domain association maps at different scales, each representing the relations between pixels inside and outside a superpixel;
the fusion splicing module is used for fusing and splicing the superpixel domain association maps at different scales with the corresponding-scale feature layers of the segmentation network used to segment the image defect region;
and the output module outputs the well-segmented defect area by utilizing the segmentation network.
The same or similar reference numerals correspond to the same or similar parts;
the terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.
Claims (10)
1. A method for segmenting image defect regions based on multi-scale superpixel feature enhancement is characterized by comprising the following steps:
s1: acquiring an image comprising a surface defect of a workpiece;
s2: preprocessing the image;
s3: extracting the characteristics of the preprocessed image;
s4: inputting the extracted features into an S2pNet network, wherein the S2pNet network outputs superpixel domain association maps at different scales, each representing the relations between pixels inside and outside a superpixel;
s5: fusing and splicing the superpixel domain association maps at different scales with the corresponding-scale feature layers of the segmentation network used to segment the image defect region;
s6: the segmentation network outputs the segmented defective regions.
2. The method for segmenting image defect regions based on multi-scale superpixel feature enhancement as claimed in claim 1, wherein said step S2 is implemented by preprocessing said image, specifically:
and carrying out image acquisition, preliminary setting of an image detection area and cutting on the image.
3. The method for segmenting image defect regions based on multi-scale superpixel feature enhancement according to claim 1, wherein said step S3 is to perform feature extraction on the preprocessed image, specifically:
extracting characteristic information of each pixel point in the preprocessed image, wherein the characteristic information comprises relative coordinate information (x, y) of the current pixel point, three-dimensional information (l, a, b) of the current pixel point in an LAB color space, gradient information h of the current pixel point and label information t of the current image, and taking a set (x, y, l, a, b, h, t) of all the information as the characteristic for describing each pixel point.
4. The method according to claim 3, wherein the extracted features are input into the S2pNet network in step S4, and the following processing is further performed on the extracted features: subjecting the extracted features to mean pooling at different scales to obtain features f_α, f_β and f_η, with downsampling multiples α, β and η respectively.
5. The method for segmenting image defect regions based on multi-scale superpixel feature enhancement as claimed in claim 4, wherein the dimension of the superpixel domain association map at each scale in step S4 is 9×(H/α)×(W/α) (correspondingly for β and η); the first dimension is fixed at 9 and represents the nine orientations of the current pixel's neighborhood, including the pixel itself: upper-left, upper, upper-right, left, center, right, lower-left, lower and lower-right; and the value at each point of the superpixel domain association map represents the correlation between the current superpixel and the nine superpixels in its neighborhood.
6. The image defect region segmentation method based on multi-scale superpixel feature enhancement according to claim 5, wherein said S2pNet network trains a superpixel neighborhood correlation model by using the extracted features, said superpixel neighborhood correlation model outputs superpixel neighborhood correlation maps at different scales, and the training process of said superpixel neighborhood correlation model is as follows:
for the feature with downsampling multiple α, first initializing a 9×(H/α)×(W/α) s_m matrix and computing a weighted dot product with the 3×3 neighborhood of the (H/α)×(W/α) feature map to obtain the (H/α)×(W/α) aggregated feature f_0:

f_0(p) = Σ_{d=1..9} s_m(d, p) · f(p + Δ_d)

where H and W are the length and width of the image, s_m is the superpixel domain association map to be learned, f is the extracted image feature, α is the downsampling multiple, and Δ_d runs over the nine neighborhood offsets;

obtaining the (H/α)×(W/α) reconstructed feature f_rc by redistributing the aggregated feature through the superpixel domain association map:

f_rc(q) = Σ_{d=1..9} s_m(d, q − Δ_d) · f_0(q − Δ_d)

defining the degree of similarity between the reconstructed feature and the original feature as the learning quality of the superpixel neighborhood correlation model, with objective function Loss_s_m:

Loss_s_m = ‖f_0 − f_rc‖²

updating the model backwards according to the objective function until it falls below a threshold, at which point training is complete;

and for the features with downsampling multiples β and η, completing training by the same method as above.
7. The method for segmenting image defect regions based on multi-scale superpixel feature enhancement according to claim 6, wherein in step S5, the superpixel domain associated maps at different scales and the feature layers at the scales corresponding to the networks for segmenting image defect regions are fused and spliced, the fused and spliced process is divided into fusion and splicing, and the fusion specifically is:
fsi=λ*s_mi*fi+(1-λ)*fi,i∈(1,2,3)
in the formula (f)siAnd (3) representing the characteristic diagram after the ith fused superpixel incidence matrix, wherein lambda is a superparameter for adjusting the importance degree of the aggregated characteristic diagram and the original characteristic diagram.
8. The method for segmenting the image defect region based on the multi-scale superpixel feature enhancement as claimed in claim 7, wherein said stitching specifically comprises:
f_out = up(up(up(f_s3) + f_s2) + f_s1)
where up denotes the up-sampling process and '+' denotes the splicing (concatenation) of feature maps.
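The nested splicing formula above can be sketched as follows, assuming (as the patent does not specify) nearest-neighbour up-sampling by a factor of 2 and channel-axis concatenation for '+':

```python
import numpy as np

def up(x, factor=2):
    # nearest-neighbour up-sampling of a (C, H, W) feature map
    # (the up-sampling mode and factor are assumptions, not from the patent)
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def splice(fs1, fs2, fs3):
    """Claim 8's splicing: f_out = up(up(up(fs3) + fs2) + fs1),
    with '+' read as concatenation along the channel axis."""
    x = np.concatenate([up(fs3), fs2], axis=0)  # up(fs3) + fs2
    x = np.concatenate([up(x), fs1], axis=0)    # up(...) + fs1
    return up(x)                                # final up-sampling
```

With fused maps at 1/8, 1/4, and 1/2 resolution, each up step doubles the spatial size so the concatenations align, and the output returns to full resolution.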
9. The method according to claim 8, wherein training the segmentation network requires preprocessing the workpiece images, said preprocessing comprising program-controlled stage motion with synchronized camera capture of the images, together with random flipping, contrast stretching, and random center cropping of the input images.
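The software-side augmentations of claim 9 (the stage motion and camera capture are hardware steps) can be sketched as below; the crop-jitter range and normalization to [0, 1] are assumptions for illustration:

```python
import numpy as np

def augment(img, crop_hw=(64, 64), rng=None):
    """Hypothetical sketch of claim 9's preprocessing: random flipping,
    contrast stretching, and random centre cropping of a 2-D image."""
    if rng is None:
        rng = np.random.default_rng()
    # random horizontal / vertical flips
    if rng.random() < 0.5:
        img = img[:, ::-1]
    if rng.random() < 0.5:
        img = img[::-1, :]
    # contrast stretching to the full [0, 1] range
    lo, hi = float(img.min()), float(img.max())
    img = (img - lo) / max(hi - lo, 1e-8)
    # random centre crop: jitter the crop centre around the image centre
    H, W = img.shape
    ch, cw = crop_hw
    cy = H // 2 + int(rng.integers(-ch // 4, ch // 4 + 1))
    cx = W // 2 + int(rng.integers(-cw // 4, cw // 4 + 1))
    y0 = int(np.clip(cy - ch // 2, 0, H - ch))
    x0 = int(np.clip(cx - cw // 2, 0, W - cw))
    return img[y0:y0 + ch, x0:x0 + cw]
```

Clipping the crop origin keeps the window inside the image even when the jittered centre falls near a border.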
10. An image defect region segmentation system based on multi-scale superpixel feature enhancement, comprising:
an image acquisition module for acquiring an image comprising a surface defect of a workpiece;
a pre-processing module for pre-processing the image;
a feature extraction module for extracting features from the preprocessed image;
an S2pNet module for inputting the extracted features into the S2pNet network, the S2pNet network outputting superpixel neighborhood association maps at different scales, each map characterizing the relation between pixels inside and outside a superpixel;
a fusion-splicing module for fusing and splicing the superpixel neighborhood association maps at different scales with the feature layers, at the corresponding scales, of the segmentation network used to segment the image defect region;
and an output module for outputting the segmented defect regions by means of the segmentation network.
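The five modules of claim 10 form a linear pipeline; a minimal sketch of that wiring, with each module as a plain callable (all names and interfaces here are illustrative, not the patent's implementation):

```python
class DefectSegmentationSystem:
    """Hypothetical wiring of claim 10's modules: acquisition,
    preprocessing, feature extraction, S2pNet, fusion-splicing, output."""

    def __init__(self, acquire, preprocess, extract, s2pnet, fuse_splice, segment):
        self.acquire = acquire          # image acquisition module
        self.preprocess = preprocess    # pre-processing module
        self.extract = extract          # feature extraction module
        self.s2pnet = s2pnet            # S2pNet module -> maps per scale
        self.fuse_splice = fuse_splice  # fusion-splicing module
        self.segment = segment          # output module (segmentation network)

    def run(self):
        image = self.acquire()
        feats = self.extract(self.preprocess(image))
        maps = self.s2pnet(feats)            # association maps at several scales
        fused = self.fuse_splice(feats, maps)
        return self.segment(fused)           # segmented defect regions
```

Keeping the stages as injected callables makes each module independently replaceable, mirroring how the claim enumerates them as separate components.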
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110801975.4A CN113362347B (en) | 2021-07-15 | 2021-07-15 | Image defect region segmentation method and system based on super-pixel feature enhancement |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113362347A true CN113362347A (en) | 2021-09-07 |
CN113362347B CN113362347B (en) | 2023-05-26 |
Family
ID=77539672
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110801975.4A Active CN113362347B (en) | 2021-07-15 | 2021-07-15 | Image defect region segmentation method and system based on super-pixel feature enhancement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113362347B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180012365A1 (en) * | 2015-03-20 | 2018-01-11 | Ventana Medical Systems, Inc. | System and method for image segmentation |
WO2019104767A1 (en) * | 2017-11-28 | 2019-06-06 | 河海大学常州校区 | Fabric defect detection method based on deep convolutional neural network and visual saliency |
US20200019817A1 (en) * | 2018-07-11 | 2020-01-16 | Harbin Institute Of Technology | Superpixel classification method based on semi-supervised k-svd and multiscale sparse representation |
CN112633416A (en) * | 2021-01-16 | 2021-04-09 | 北京工业大学 | Brain CT image classification method fusing multi-scale superpixels |
CN112927235A (en) * | 2021-02-26 | 2021-06-08 | 南京理工大学 | Brain tumor image segmentation method based on multi-scale superpixel and nuclear low-rank representation |
CN112991302A (en) * | 2021-03-22 | 2021-06-18 | 华南理工大学 | Flexible IC substrate color-changing defect detection method and device based on super-pixels |
Non-Patent Citations (1)
Title |
---|
DU Huiran et al.: "Data augmentation method based on neighborhood difference filtering generative adversarial network", Application Research of Computers *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115063725A (en) * | 2022-06-23 | 2022-09-16 | 中国民航大学 | Airplane skin defect identification system based on multi-scale self-adaptive SSD algorithm |
CN115063725B (en) * | 2022-06-23 | 2024-04-26 | 中国民航大学 | Aircraft skin defect identification system based on multi-scale self-adaptive SSD algorithm |
Also Published As
Publication number | Publication date |
---|---|
CN113362347B (en) | 2023-05-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108562589B (en) | Method for detecting surface defects of magnetic circuit material | |
CN113807355B (en) | Image semantic segmentation method based on coding and decoding structure | |
CN109580630B (en) | Visual inspection method for defects of mechanical parts | |
CN109840483B (en) | Landslide crack detection and identification method and device | |
CN111582294A (en) | Method for constructing convolutional neural network model for surface defect detection and application thereof | |
CN111681273A (en) | Image segmentation method and device, electronic equipment and readable storage medium | |
CN114612472B (en) | SegNet improvement-based leather defect segmentation network algorithm | |
CN116012291A (en) | Industrial part image defect detection method and system, electronic equipment and storage medium | |
CN110728302A (en) | Method for identifying color textile fabric tissue based on HSV (hue, saturation, value) and Lab (Lab) color spaces | |
CN115035097B (en) | Cross-scene strip steel surface defect detection method based on domain adaptation | |
CN117197763A (en) | Road crack detection method and system based on cross attention guide feature alignment network | |
CN114332473A (en) | Object detection method, object detection device, computer equipment, storage medium and program product | |
CN112669300A (en) | Defect detection method and device, computer equipment and storage medium | |
CN112668725A (en) | Metal hand basin defect target training method based on improved features | |
CN111667465A (en) | Metal hand basin defect detection method based on far infrared image | |
CN113362347B (en) | Image defect region segmentation method and system based on super-pixel feature enhancement | |
CN114331961A (en) | Method for defect detection of an object | |
CN110163149A (en) | Acquisition methods, device and the storage medium of LBP feature | |
CN111882545B (en) | Fabric defect detection method based on bidirectional information transmission and feature fusion | |
CN113205136A (en) | Real-time high-precision detection method for appearance defects of power adapter | |
CN108537266A (en) | Fabric texture defect classification method based on a deep convolutional network | |
CN116797602A (en) | Surface defect identification method and device for industrial product detection | |
CN114882252B (en) | Semi-supervised remote sensing image change detection method, device and computer equipment | |
CN113192018B (en) | Water-cooled wall surface defect video identification method based on fast segmentation convolutional neural network | |
CN115937095A (en) | Printing defect detection method and system integrating image processing algorithm and deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 20230512
Address after: 510090 Dongfeng East Road 729, Yuexiu District, Guangzhou City, Guangdong Province
Applicant after: GUANGDONG University OF TECHNOLOGY
Applicant after: Guangzhou Deshidi Intelligent Technology Co.,Ltd.
Address before: 510090 Dongfeng East Road 729, Yuexiu District, Guangzhou City, Guangdong Province
Applicant before: GUANGDONG University OF TECHNOLOGY