CN113362347A - Image defect region segmentation method and system based on multi-scale superpixel feature enhancement - Google Patents


Info

Publication number
CN113362347A
Authority
CN
China
Prior art keywords
image
superpixel
super
pixel
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110801975.4A
Other languages
Chinese (zh)
Other versions
CN113362347B (en)
Inventor
许亮
吴启荣
郑博远
向旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Deshidi Intelligent Technology Co ltd
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202110801975.4A
Publication of CN113362347A
Application granted
Publication of CN113362347B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image defect region segmentation method based on multi-scale superpixel feature enhancement, comprising the following steps: S1: acquiring an image containing workpiece surface defects; S2: preprocessing the image; S3: extracting features from the preprocessed image; S4: feeding the extracted features into the S2pNet network, which outputs superpixel neighborhood association maps at different scales, each map representing the relationship between the pixels inside and outside a superpixel; S5: fusing and splicing the superpixel neighborhood association maps at different scales with the feature layers at the corresponding scales of the segmentation network used to segment the image defect region; S6: the segmentation network outputs the segmented defect regions. The invention extracts superpixel prior knowledge at different scales and fuses it across multiple scales with the encoder features of the segmentation network, enriching the feature information so that the segmentation network outputs more refined predicted segmentation regions.

Description

Image defect region segmentation method and system based on multi-scale superpixel feature enhancement
Technical Field
The invention relates to the field of machine vision deep learning, in particular to an image defect region segmentation method and system based on multi-scale superpixel feature enhancement.
Background
Most current deep-learning-based surface defect detection relies on supervised representation learning. Because its target coincides with standard computer vision tasks, defect detection based on representation learning can be regarded as an application of the relevant classical networks to the industrial field.
Surface inspection of industrial products is a key step in determining product quality. Owing to the influence of the processing environment and the processing technology, defects are irregular in shape, vary in size, and are randomly distributed; they cannot be predicted in advance, and some are inconspicuous (similar to the background). Traditional vision algorithms struggle to cover all defect types, and inconspicuous defects in particular suffer a high miss rate.
Data-driven deep learning can effectively improve the generalization ability of a detection model, and whether a convolutional neural network can effectively extract defect features is the key factor in defect detection. Most existing feature extraction methods adopt a downsampling-upsampling network structure, extracting defect features through convolution and pooling; the downsampling process inevitably loses feature information, and ordinary convolution and pooling cannot extract it fully. For low-contrast, inconspicuous defects, the difficulty of extracting defect features is a hard problem in deep learning.
Chinese patent publication No. CN111445471A, published 24 July 2020, discloses a method and apparatus for detecting product surface defects based on deep learning and machine vision. Its technical scheme is as follows: the surface image of the product to be inspected is acquired by an industrial camera in line-scan mode; defect feature preprocessing is performed on the acquired image in real time to quickly determine whether the surface image contains defects; a trained deep convolutional neural network model identifies the severity of the defect and classifies the defect type of the defective image; the trained model is formed by transfer learning, modification, and training of the classical neural network model Inception v3. Because it cannot fully extract feature information, that patent likewise struggles to detect low-contrast, inconspicuous defects.
Disclosure of Invention
The invention aims to provide an image defect region segmentation method based on multi-scale superpixel feature enhancement that is suitable for detecting low-contrast and multi-scale defects.
It is a further object of this invention to provide a system for image defect region segmentation based on multi-scale superpixel feature enhancement.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a method for segmenting image defect regions based on multi-scale superpixel feature enhancement comprises the following steps:
s1: acquiring an image comprising a surface defect of a workpiece;
s2: preprocessing the image;
s3: extracting the characteristics of the preprocessed image;
s4: inputting the extracted features into an S2pNet network, the S2pNet network outputting superpixel neighborhood association maps at different scales, each map representing the relationship between the pixels inside and outside a superpixel;
s5: fusing and splicing the superpixel neighborhood association maps at different scales with the feature layers at the corresponding scales of the segmentation network used for segmenting the image defect region;
s6: the segmentation network outputs the segmented defective regions.
Preferably, in step S2, the image is preprocessed, specifically:
the preprocessing comprises acquiring the image, preliminarily setting the image detection area, and cropping the image.
Preferably, in step S3, the feature extraction is performed on the preprocessed image, specifically:
extracting characteristic information of each pixel point in the preprocessed image, wherein the characteristic information comprises relative coordinate information (x, y) of the current pixel point, three-dimensional information (l, a, b) of the current pixel point in an LAB color space, gradient information h of the current pixel point and label information t of the current image, and taking a set (x, y, l, a, b, h, t) of all the information as the characteristic for describing each pixel point.
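As a minimal sketch of this per-pixel descriptor (the function name, the use of the L channel for the gradient, and the prior LAB conversion are assumptions for illustration, not taken from the patent), the seven channels can be assembled with NumPy:

```python
import numpy as np

def pixel_features(img_lab, label):
    """Assemble the 7-dim per-pixel descriptor (x, y, l, a, b, h, t).

    img_lab: (H, W, 3) float array assumed already converted to the LAB
    color space (e.g. with cv2.cvtColor beforehand); label: scalar
    image-level tag t, broadcast to every pixel.
    """
    H, W, _ = img_lab.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    xs /= max(W - 1, 1)                      # relative x coordinate in [0, 1]
    ys /= max(H - 1, 1)                      # relative y coordinate in [0, 1]
    l, a, b = img_lab[..., 0], img_lab[..., 1], img_lab[..., 2]
    gy, gx = np.gradient(l)                  # gradient of the L channel
    h = np.sqrt(gx ** 2 + gy ** 2)           # gradient magnitude feature h
    t = np.full((H, W), float(label))        # image label broadcast per pixel
    return np.stack([xs, ys, l, a, b, h, t], axis=-1)   # (H, W, 7)
```

The resulting (H, W, 7) tensor is what the later pooling and S2pNet stages would consume.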
Preferably, in step S4, the extracted features are input into the S2pNet network, and the following processing is further performed on the extracted features:
the extracted features are subjected to mean pooling at different scales to obtain features f_α, f_β, f_η; the downsampling multiples are α, β, η respectively, and the dimensions are (H/α) × (W/α), (H/β) × (W/β), (H/η) × (W/η).
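A small sketch of the multi-scale mean pooling described above (the patent does not fix the multiples α, β, η; the values 2, 4, 8 below are purely illustrative):

```python
import numpy as np

def mean_pool(f, k):
    """Non-overlapping k x k mean pooling of an (H, W, C) feature map;
    H and W are assumed divisible by k (the downsampling multiple)."""
    H, W, C = f.shape
    return f.reshape(H // k, k, W // k, k, C).mean(axis=(1, 3))

# illustrative downsampling multiples alpha, beta, eta
alpha, beta, eta = 2, 4, 8
f = np.random.default_rng(0).random((16, 16, 7))      # per-pixel features
f_a, f_b, f_e = mean_pool(f, alpha), mean_pool(f, beta), mean_pool(f, eta)
```

Each pooled map has spatial size (H/k) × (W/k), matching the dimensions stated in the text.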
Preferably, the dimensions of the superpixel neighborhood association maps at the different scales in step S4 are 9 × (H/α) × (W/α), 9 × (H/β) × (W/β) and 9 × (H/η) × (W/η). The first dimension is fixed at 9 and represents the nine directions of the current pixel's neighborhood, including the pixel itself: upper-left, upper, upper-right, left, center, right, lower-left, lower, lower-right. The value of each element of the superpixel neighborhood association map represents the correlation between the current superpixel and the 9 superpixels in its neighborhood.
Preferably, the S2pNet network trains a superpixel neighborhood association model using the extracted features, and the model outputs superpixel neighborhood association maps at different scales. The training process of the superpixel neighborhood association model is as follows:
For the feature with downsampling multiple α, first initialize a 9 × (H/α) × (W/α) matrix s_m, and compute a weighted dot product with the (H/α) × (W/α) feature region to obtain the (H/α) × (W/α) aggregated feature f_0:
f_0 = Σ_{k=1}^{9} s_m_k * f_k
where H and W denote the height and width of the image, s_m denotes the superpixel neighborhood association map to be learned, f denotes the extracted image feature (f_k being the feature taken in the k-th neighborhood direction), and α denotes the downsampling multiple;
The (H/α) × (W/α) reconstructed feature f_rc is then obtained by redistributing through the superpixel neighborhood association map:
f_rc = Σ_{k=1}^{9} s_m_k * f_0,k
The similarity between the reconstructed feature and the original feature is defined as the learning quality of the superpixel neighborhood association model, and the objective function Loss_s_m is defined as:
Loss_s_m = |f_0 - f_rc|^2
The model is updated by backpropagation according to the objective function until the objective falls below a threshold, at which point training is complete;
For the features with downsampling multiples β and η, training is completed by the same method as above.
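Because the patent's equation images did not survive, the following is a hedged interpretation of the aggregation/reconstruction objective: the 9-channel map s_m mixes each pixel with its nine neighbors (implemented here via shifted sums), and the loss compares the aggregated feature with its re-aggregation. Names and the shift-based reading are assumptions.

```python
import numpy as np

# the nine neighbourhood directions (upper-left ... lower-right), centre included
SHIFTS = [(-1, -1), (-1, 0), (-1, 1),
          (0, -1),  (0, 0),  (0, 1),
          (1, -1),  (1, 0),  (1, 1)]

def aggregate(s_m, f):
    """Weighted dot product of a (9, H, W) association map with an (H, W)
    feature map: each pixel becomes a weighted mix of its 9 neighbours.
    (One plausible reading of the patent's lost equations.)"""
    out = np.zeros_like(f)
    for k, (dy, dx) in enumerate(SHIFTS):
        out += s_m[k] * np.roll(f, shift=(dy, dx), axis=(0, 1))
    return out

def s2p_loss(s_m, f):
    f0 = aggregate(s_m, f)                   # aggregated feature f_0
    f_rc = aggregate(s_m, f0)                # reconstructed feature f_rc
    return float(np.sum((f0 - f_rc) ** 2))   # Loss_s_m = |f_0 - f_rc|^2
```

If s_m puts all weight on the centre direction, f_0 equals f, f_rc equals f_0, and the loss vanishes; training would drive s_m toward weightings that are stable under re-aggregation.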
Preferably, in step S5, the superpixel neighborhood association maps at different scales and the feature layers at the corresponding scales of the network for segmenting the image defect region are fused and spliced. The process comprises fusion followed by splicing; the fusion is specifically:
f_si = λ * s_m_i * f_i + (1 - λ) * f_i,  i ∈ {1, 2, 3}
where f_si denotes the feature map after fusing the i-th superpixel association matrix, and λ is a hyperparameter that adjusts the relative importance of the aggregated feature map and the original feature map.
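The fusion step is elementwise, as a minimal sketch shows (here `agg_i` stands in for the already-aggregated map s_m_i * f_i; the name is assumed for illustration):

```python
import numpy as np

def fuse(agg_i, f_i, lam=0.5):
    """f_si = lam * (s_m_i * f_i) + (1 - lam) * f_i, where lam is the
    hyperparameter weighing the aggregated vs. original feature maps."""
    return lam * agg_i + (1.0 - lam) * f_i

f_i = np.ones((4, 4, 8))              # stand-in original feature map
agg_i = 3.0 * np.ones((4, 4, 8))      # stand-in aggregated feature map
```

With lam = 0 the original features pass through unchanged; with lam = 1 only the aggregated map remains.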
Preferably, the splicing is specifically:
f_out = up(up(up(f_s3) + f_s2) + f_s1)
where up denotes an up-sampling process and '+' denotes the splicing (concatenation) of feature maps.
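Reading '+' as channel concatenation and `up` as 2x nearest-neighbour upsampling (both assumptions; the patent does not fix the upsampling kernel), the splicing formula can be sketched as:

```python
import numpy as np

def up(f):
    """2x nearest-neighbour upsampling of an (H, W, C) feature map."""
    return f.repeat(2, axis=0).repeat(2, axis=1)

def splice(fs1, fs2, fs3):
    """f_out = up(up(up(fs3) + fs2) + fs1), '+' = channel concatenation."""
    x = np.concatenate([up(fs3), fs2], axis=-1)
    x = np.concatenate([up(x), fs1], axis=-1)
    return up(x)

# fused maps at downsampling multiples 2, 4, 8 of an illustrative 16x16 image
fs1 = np.zeros((8, 8, 4))
fs2 = np.zeros((4, 4, 6))
fs3 = np.zeros((2, 2, 8))
```

Each up doubles the spatial size so the next concatenation lines up, and the output returns to full image resolution with the channels of all three scales stacked.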
Preferably, training the segmentation network requires preprocessing the workpiece images; the preprocessing includes movement of a programmable console with synchronized camera image acquisition, and random flipping, contrast stretching and random center cropping of the input images.
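The augmentation part of this preprocessing (flip, contrast stretch, centred random crop) can be sketched as follows; the crop size and jitter amount are illustrative choices, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, crop=96, jitter=4):
    """Random horizontal flip, percentile contrast stretch, and a randomly
    jittered centre crop of an (H, W) grayscale image in [0, 1]."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                                # random flip
    lo, hi = np.percentile(img, (2, 98))
    img = np.clip((img - lo) / max(hi - lo, 1e-8), 0, 1)  # contrast stretch
    H, W = img.shape
    y0 = (H - crop) // 2 + int(rng.integers(-jitter, jitter + 1))
    x0 = (W - crop) // 2 + int(rng.integers(-jitter, jitter + 1))
    return img[y0:y0 + crop, x0:x0 + crop]                # jittered centre crop
```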
An image defect region segmentation system based on multi-scale superpixel feature enhancement, comprising:
an image acquisition module for acquiring an image comprising a surface defect of a workpiece;
a pre-processing module for pre-processing the image;
the characteristic extraction module is used for extracting the characteristics of the preprocessed image;
the S2pNet module, used for inputting the extracted features into an S2pNet network, the S2pNet network outputting superpixel neighborhood association maps at different scales, each map representing the relationship between the pixels inside and outside a superpixel;
the fusion-splicing module, used for fusing and splicing the superpixel neighborhood association maps at different scales with the feature layers at the corresponding scales of the segmentation network for segmenting the image defect region;
and the output module, which outputs the segmented defect region using the segmentation network.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention utilizes the super-pixel priori knowledge of the S2pNet network learning image to extract the super-pixel priori knowledge under different scales and carries out multi-scale fusion with the coding characteristics of the segmentation network, so that the information of a characteristic layer is more compact, the characteristic points in the characteristic layer are mutually influenced, the characteristic information is enriched, the defect of insufficient supervision information of the weak supervision network is relieved, the information extraction of the weak supervision segmentation network to the image pixel in one order or even multiple orders is realized, and finally the segmentation network outputs a more precise prediction segmentation area.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a block diagram of the system of the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
The embodiment provides an image defect region segmentation method based on multi-scale superpixel feature enhancement, as shown in fig. 1, comprising the following steps:
s1: acquiring an image comprising a surface defect of a workpiece;
s2: preprocessing the image;
s3: extracting the characteristics of the preprocessed image;
s4: inputting the extracted features into an S2pNet network, the S2pNet network outputting superpixel neighborhood association maps at different scales, each map representing the relationship between the pixels inside and outside a superpixel;
s5: fusing and splicing the superpixel neighborhood association maps at different scales with the feature layers at the corresponding scales of the segmentation network used for segmenting the image defect region;
s6: the segmentation network outputs the segmented defective regions.
In step S2, the image is preprocessed, specifically:
the preprocessing comprises acquiring the image, preliminarily setting the image detection area, and cropping the image.
In step S3, feature extraction is performed on the preprocessed image, specifically:
extracting characteristic information of each pixel point in the preprocessed image, wherein the characteristic information comprises relative coordinate information (x, y) of the current pixel point, three-dimensional information (l, a, b) of the current pixel point in an LAB color space, gradient information h of the current pixel point and label information t of the current image, and taking a set (x, y, l, a, b, h, t) of all the information as the characteristic for describing each pixel point.
In step S4, the extracted features are input into the S2pNet network, and the following processing is further performed on the extracted features:
the extracted features are subjected to mean pooling at different scales to obtain features f_α, f_β, f_η; the downsampling multiples are α, β, η respectively, and the dimensions are (H/α) × (W/α), (H/β) × (W/β), (H/η) × (W/η).
The dimensions of the superpixel neighborhood association maps at the different scales in step S4 are 9 × (H/α) × (W/α), 9 × (H/β) × (W/β) and 9 × (H/η) × (W/η).
The superpixel neighborhood association model at multiple scales can extract both vertical and horizontal pixel relations. The first dimension is fixed at 9 and represents the nine directions of the current pixel's neighborhood, including the pixel itself: upper-left, upper, upper-right, left, center, right, lower-left, lower, lower-right. The value of each element of the superpixel neighborhood association map represents the correlation between the current superpixel and the 9 superpixels in its neighborhood: the larger the weight, the greater the probability that the two superpixels belong to the same category; the smaller the weight, the weaker the correlation between the two superpixels and the greater the probability that they are assigned labels of different categories.
The S2pNet network is a convolutional neural network consisting of several convolutional layers. It is the core part of the algorithm and is mainly responsible for training the neighborhood association model of the pixels inside and outside each superpixel, covering the definition of the network model's output and the design of the training strategy for the superpixel neighborhood association model. S2pNet is an encoder-decoder network structure; its input and output differ in channel count but have the same spatial size.
The S2pNet network trains a superpixel neighborhood association model using the extracted features, and the model outputs superpixel neighborhood association maps at different scales. The training process of the superpixel neighborhood association model is as follows:
For the feature with downsampling multiple α, first initialize a 9 × (H/α) × (W/α) matrix s_m, and compute a weighted dot product with the (H/α) × (W/α) feature region to obtain the (H/α) × (W/α) aggregated feature f_0:
f_0 = Σ_{k=1}^{9} s_m_k * f_k
where H and W denote the height and width of the image, s_m denotes the superpixel neighborhood association map to be learned, f denotes the extracted image feature (f_k being the feature taken in the k-th neighborhood direction), and α denotes the downsampling multiple;
The (H/α) × (W/α) reconstructed feature f_rc is then obtained by redistributing through the superpixel neighborhood association map:
f_rc = Σ_{k=1}^{9} s_m_k * f_0,k
The similarity between the reconstructed feature and the original feature is defined as the learning quality of the superpixel neighborhood association model, and the objective function Loss_s_m is defined as:
Loss_s_m = |f_0 - f_rc|^2
The model is updated by backpropagation according to the objective function until the objective falls below a threshold, at which point training is complete;
For the features with downsampling multiples β and η, training is completed by the same method as above.
In step S5, the superpixel neighborhood association maps at different scales and the feature layers at the corresponding scales of the network for segmenting the image defect region are fused and spliced. The process comprises fusion followed by splicing; the fusion is specifically:
f_si = λ * s_m_i * f_i + (1 - λ) * f_i,  i ∈ {1, 2, 3}
where f_si denotes the feature map after fusing the i-th superpixel association matrix, and λ is a hyperparameter that adjusts the relative importance of the aggregated feature map and the original feature map.
The splicing is specifically:
f_out = up(up(up(f_s3) + f_s2) + f_s1)
where up denotes an up-sampling process and '+' denotes the splicing (concatenation) of feature maps.
Training the segmentation network requires preprocessing the workpiece images, including movement of a programmable console with synchronized camera image acquisition, and random flipping, contrast stretching and random center cropping of the input images.
Example 2
The embodiment provides an image defect region segmentation system based on multi-scale superpixel feature enhancement, as shown in fig. 2, including:
an image acquisition module for acquiring an image comprising a surface defect of a workpiece;
a pre-processing module for pre-processing the image;
the characteristic extraction module is used for extracting the characteristics of the preprocessed image;
the S2pNet module, used for inputting the extracted features into an S2pNet network, the S2pNet network outputting superpixel neighborhood association maps at different scales, each map representing the relationship between the pixels inside and outside a superpixel;
the fusion-splicing module, used for fusing and splicing the superpixel neighborhood association maps at different scales with the feature layers at the corresponding scales of the segmentation network for segmenting the image defect region;
and the output module, which outputs the segmented defect region using the segmentation network.
The same or similar reference numerals correspond to the same or similar parts;
the terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A method for segmenting image defect regions based on multi-scale superpixel feature enhancement is characterized by comprising the following steps:
s1: acquiring an image comprising a surface defect of a workpiece;
s2: preprocessing the image;
s3: extracting the characteristics of the preprocessed image;
s4: inputting the extracted features into an S2pNet network, the S2pNet network outputting superpixel neighborhood association maps at different scales, each map representing the relationship between the pixels inside and outside a superpixel;
s5: fusing and splicing the superpixel neighborhood association maps at different scales with the feature layers at the corresponding scales of the segmentation network used for segmenting the image defect region;
s6: the segmentation network outputs the segmented defective regions.
2. The method for segmenting image defect regions based on multi-scale superpixel feature enhancement as claimed in claim 1, wherein in said step S2 the image is preprocessed, specifically:
acquiring the image, preliminarily setting the image detection area, and cropping the image.
3. The method for segmenting image defect regions based on multi-scale superpixel feature enhancement according to claim 1, wherein said step S3 is to perform feature extraction on the preprocessed image, specifically:
extracting characteristic information of each pixel point in the preprocessed image, wherein the characteristic information comprises relative coordinate information (x, y) of the current pixel point, three-dimensional information (l, a, b) of the current pixel point in an LAB color space, gradient information h of the current pixel point and label information t of the current image, and taking a set (x, y, l, a, b, h, t) of all the information as the characteristic for describing each pixel point.
4. The method according to claim 3, wherein the extracted features are input into the S2pNet network in step S4, and the following processing is further performed on the extracted features:
subjecting the extracted features to mean pooling at different scales to obtain features f_α, f_β, f_η, the downsampling multiples being α, β, η respectively and the dimensions being (H/α) × (W/α), (H/β) × (W/β), (H/η) × (W/η).
5. The method for segmenting image defect regions based on multi-scale superpixel feature enhancement as claimed in claim 4, wherein the dimensions of the superpixel neighborhood association maps at different scales in step S4 are 9 × (H/α) × (W/α), 9 × (H/β) × (W/β) and 9 × (H/η) × (W/η), the first dimension being fixed at 9 and representing the nine directions of the current pixel's neighborhood, including the pixel itself: upper-left, upper, upper-right, left, center, right, lower-left, lower, lower-right; the value of each element of the superpixel neighborhood association map represents the correlation between the current superpixel and the 9 superpixels in its neighborhood.
6. The image defect region segmentation method based on multi-scale superpixel feature enhancement according to claim 5, wherein the S2pNet network trains a superpixel neighborhood association model using the extracted features, the superpixel neighborhood association model outputs superpixel neighborhood association maps at different scales, and the training process of the superpixel neighborhood association model is as follows:
for the feature with downsampling multiple α, first initialize a 9 × (H/α) × (W/α) matrix s_m, and compute a weighted dot product with the (H/α) × (W/α) feature region to obtain the (H/α) × (W/α) aggregated feature f_0:
f_0 = Σ_{k=1}^{9} s_m_k * f_k
where H and W denote the height and width of the image, s_m denotes the superpixel neighborhood association map to be learned, f denotes the extracted image feature (f_k being the feature taken in the k-th neighborhood direction), and α denotes the downsampling multiple;
the (H/α) × (W/α) reconstructed feature f_rc is obtained by redistributing through the superpixel neighborhood association map:
f_rc = Σ_{k=1}^{9} s_m_k * f_0,k
the similarity between the reconstructed feature and the original feature is defined as the learning quality of the superpixel neighborhood association model, and the objective function Loss_s_m is defined as:
Loss_s_m = |f_0 - f_rc|^2
the model is updated backward according to the objective function until the objective falls below a threshold, completing training;
and for the features with downsampling multiples β and η, training is completed by the same method as above.
7. The method for segmenting image defect regions based on multi-scale superpixel feature enhancement according to claim 6, wherein in step S5 the superpixel neighborhood association maps at different scales and the feature layers at the corresponding scales of the network for segmenting image defect regions are fused and spliced, the process comprising fusion followed by splicing, the fusion being specifically:
f_si = λ * s_m_i * f_i + (1 - λ) * f_i,  i ∈ {1, 2, 3}
where f_si denotes the feature map after fusing the i-th superpixel association matrix, and λ is a hyperparameter that adjusts the relative importance of the aggregated feature map and the original feature map.
8. The image defect region segmentation method based on multi-scale superpixel feature enhancement according to claim 7, wherein the splicing is specifically:

fout = up(up(up(fs3) + fs2) + fs1)

where up denotes an up-sampling operation and '+' denotes concatenation of the feature maps.
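The splicing formula of claim 8 nests up-sampling and concatenation from the coarsest scale outward. A minimal sketch, assuming nearest-neighbour ×2 up-sampling, channel-first (C, H, W) layout, and channel-wise concatenation — none of which the claim pins down:

```python
import numpy as np

def up(x, factor=2):
    """Nearest-neighbour up-sampling of a (C, H, W) array (assumed form
    of the 'up' operation in claim 8)."""
    return x.repeat(factor, axis=-2).repeat(factor, axis=-1)

def splice(fs1, fs2, fs3):
    """f_out = up(up(up(fs3) '+' fs2) '+' fs1), where '+' is channel
    concatenation; fs3 is the coarsest fused feature, fs1 the finest."""
    x = np.concatenate([up(fs3), fs2], axis=0)   # coarse -> mid scale
    x = np.concatenate([up(x), fs1], axis=0)     # mid -> fine scale
    return up(x)                                 # final up-sampling
```

Each up-sampling brings the coarser features to the next scale's resolution so the concatenation shapes line up.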
9. The method of claim 8, wherein training the segmentation network requires preprocessing the workpiece image, the preprocessing comprising programmed console motion with synchronized camera capture of the image, followed by random flipping, contrast stretching, and random center cropping of the input image.
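The image-side augmentations named in claim 9 (the capture step aside) can be sketched as below. The crop size, offset range, and the interpretation of "random center cropping" as a center crop with a small random offset are assumptions:

```python
import numpy as np

def preprocess(img, rng, crop=64):
    """Augmentations from claim 9: random flip, contrast stretch,
    random-offset center crop. img is a 2-D grayscale array."""
    # Random horizontal flip with probability 0.5.
    if rng.random() < 0.5:
        img = img[:, ::-1]
    # Contrast stretch to [0, 1].
    lo, hi = img.min(), img.max()
    img = (img - lo) / (hi - lo + 1e-8)
    # Center crop with a small random offset (assumed +/- 4 px).
    H, W = img.shape
    top = int(np.clip((H - crop) // 2 + rng.integers(-4, 5), 0, H - crop))
    left = int(np.clip((W - crop) // 2 + rng.integers(-4, 5), 0, W - crop))
    return img[top:top + crop, left:left + crop]
```

A seeded `numpy.random.Generator` keeps the augmentation reproducible across training runs.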
10. An image defect region segmentation system based on multi-scale superpixel feature enhancement, comprising:
an image acquisition module for acquiring an image containing a workpiece surface defect;
a preprocessing module for preprocessing the image;
a feature extraction module for extracting features from the preprocessed image;
an S2pNet module for feeding the extracted features into the S2pNet network, which outputs superpixel domain association maps at different scales, each map representing the relationship between pixels inside and outside a superpixel;
a fusion-splicing module for fusing and splicing the superpixel domain association maps at different scales with the feature layers at the corresponding scales of the segmentation network used to segment the image defect region;
and an output module for outputting the segmented defect region using the segmentation network.
CN202110801975.4A 2021-07-15 2021-07-15 Image defect region segmentation method and system based on super-pixel feature enhancement Active CN113362347B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110801975.4A CN113362347B (en) 2021-07-15 2021-07-15 Image defect region segmentation method and system based on super-pixel feature enhancement

Publications (2)

Publication Number Publication Date
CN113362347A true CN113362347A (en) 2021-09-07
CN113362347B CN113362347B (en) 2023-05-26

Family

ID=77539672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110801975.4A Active CN113362347B (en) 2021-07-15 2021-07-15 Image defect region segmentation method and system based on super-pixel feature enhancement

Country Status (1)

Country Link
CN (1) CN113362347B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180012365A1 (en) * 2015-03-20 2018-01-11 Ventana Medical Systems, Inc. System and method for image segmentation
WO2019104767A1 (en) * 2017-11-28 2019-06-06 河海大学常州校区 Fabric defect detection method based on deep convolutional neural network and visual saliency
US20200019817A1 (en) * 2018-07-11 2020-01-16 Harbin Institute Of Technology Superpixel classification method based on semi-supervised k-svd and multiscale sparse representation
CN112633416A (en) * 2021-01-16 2021-04-09 北京工业大学 Brain CT image classification method fusing multi-scale superpixels
CN112927235A (en) * 2021-02-26 2021-06-08 南京理工大学 Brain tumor image segmentation method based on multi-scale superpixel and nuclear low-rank representation
CN112991302A (en) * 2021-03-22 2021-06-18 华南理工大学 Flexible IC substrate color-changing defect detection method and device based on super-pixels

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DU, Huiran et al.: "Data Augmentation Method Based on Neighborhood Difference Filtering Generative Adversarial Network", Application Research of Computers *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114898123A (en) * 2022-06-13 2022-08-12 广东工业大学 Complex texture image color difference detection method
CN115063725A (en) * 2022-06-23 2022-09-16 中国民航大学 Aircraft Skin Defect Recognition System Based on Multi-scale Adaptive SSD Algorithm
CN115063725B (en) * 2022-06-23 2024-04-26 中国民航大学 Aircraft skin defect identification system based on multi-scale self-adaptive SSD algorithm

Also Published As

Publication number Publication date
CN113362347B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
CN112017189B (en) Image segmentation method and device, computer equipment and storage medium
CN114820579B (en) A method and system for detecting composite defects in images based on semantic segmentation
CN114663346B (en) A strip steel surface defect detection method based on improved YOLOv5 network
CN111681273A (en) Image segmentation method and device, electronic equipment and readable storage medium
CN111582294A (en) Method for constructing convolutional neural network model for surface defect detection and application thereof
CN110728302A (en) A method for texture identification of dyed fabrics based on HSV and Lab color space
CN116205876B (en) Unsupervised notebook appearance defect detection method based on multi-scale standardized flow
CN115035097B (en) Cross-scene strip steel surface defect detection method based on domain adaptation
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN116824352A (en) Water surface floater identification method based on semantic segmentation and image anomaly detection
CN114332473A (en) Object detection method, object detection device, computer equipment, storage medium and program product
CN115937518A (en) Pavement disease identification method and system based on multi-source image fusion
CN113362347A (en) Image defect region segmentation method and system based on multi-scale superpixel feature enhancement
CN113205136A (en) Real-time high-precision detection method for appearance defects of power adapter
CN104616321B (en) A kind of luggage image motion behavior description method based on Scale invariant features transform
Shit et al. An encoder‐decoder based CNN architecture using end to end dehaze and detection network for proper image visualization and detection
CN112669300A (en) Defect detection method and device, computer equipment and storage medium
CN113033656A (en) Interactive hole exploration data expansion method based on generation countermeasure network
CN111667465A (en) Metal hand basin defect detection method based on far infrared image
Zhang et al. Automatic detection of surface defects based on deep random chains
CN115731198A (en) An intelligent detection system for leather surface defects
CN114612738A (en) Training method of cell electron microscope image segmentation model and organelle interaction analysis method
CN115880209A (en) Surface defect detection system, method, device and medium suitable for steel plate
CN113192018B (en) Water-cooled wall surface defect video identification method based on fast segmentation convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TA01 Transfer of patent application right

Effective date of registration: 20230512

Address after: 510090 Dongfeng East Road 729, Yuexiu District, Guangzhou City, Guangdong Province

Applicant after: Guangdong University of Technology

Applicant after: Guangzhou Deshidi Intelligent Technology Co.,Ltd.

Address before: 510090 Dongfeng East Road 729, Yuexiu District, Guangzhou City, Guangdong Province

Applicant before: Guangdong University of Technology
