CN115661174A - Surface defect region segmentation method and device based on flow distortion and electronic equipment - Google Patents

Surface defect region segmentation method and device based on flow distortion and electronic equipment Download PDF

Info

Publication number
CN115661174A
CN115661174A (application CN202211416972.XA)
Authority
CN
China
Prior art keywords
features
segmentation
defect region
module
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211416972.XA
Other languages
Chinese (zh)
Inventor
李朋超
杨庆泰
蔡丽蓉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jushi Intelligent Technology Co ltd
Original Assignee
Beijing Jushi Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jushi Intelligent Technology Co ltd filed Critical Beijing Jushi Intelligent Technology Co ltd
Priority to CN202211416972.XA
Publication of CN115661174A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to a surface defect region segmentation method and device based on flow warping, and an electronic device, belonging to the technical field of defect detection. The method comprises: acquiring a target surface image; and performing defect region segmentation on the target surface image using a trained segmentation network model, wherein the network structure of the segmentation network model is provided with a flow warping module that imposes a spatial consistency constraint on the cross-scale features of the image. By configuring the flow warping module in the segmentation network model and imposing a spatial consistency constraint on the cross-scale features of the image during segmentation prediction, the spatial inconsistency between features of different network levels is reduced, better feature fusion is obtained, and the segmentation accuracy of the defect region is improved.

Description

Surface defect region segmentation method and device based on flow distortion and electronic equipment
Technical Field
The application belongs to the technical field of defect detection, and particularly relates to a surface defect region segmentation method and device based on flow distortion and electronic equipment.
Background
In recent years, in the related art of workpiece surface defect detection, segmentation of workpiece surface defect regions based on deep learning semantic segmentation algorithms has developed rapidly. Representative methods adopt encoder-decoder frameworks such as U-Net, or methods such as DeepLabV3: they realize effective fusion of multi-scale features by combining multi-level features of the image, such as low-level spatial detail and high-level discriminative semantics, or aggregate context information over different distance ranges through dilated convolution pyramids with different receptive fields, so as to realize segmentation prediction of the defect region.
Existing methods have made progress in multi-scale feature extraction and fusion, but they ignore a basic and important problem: no spatial consistency constraint is imposed on the cross-scale features of different network levels. The resulting spatial inconsistency between features of different network levels increases the difference between features of different levels and the similarity of features within the same level, which degrades the accuracy of defect region segmentation.
The above is only for the purpose of assisting understanding of the technical solution of the present invention, and does not represent an admission that the above is the prior art.
Disclosure of Invention
In order to overcome the problems in the related art at least to a certain extent, the present application provides a surface defect region segmentation method, apparatus and electronic device based on flow warping, so as to solve the problem of poor defect region segmentation accuracy caused by the lack of a spatial consistency constraint on the cross-scale features of different network levels during region segmentation.
In order to achieve the purpose, the following technical scheme is adopted in the application:
in a first aspect,
the application provides a surface defect region segmentation method based on flow distortion, which comprises the following steps:
acquiring a target surface image;
utilizing the trained segmentation network model to carry out defect region segmentation processing on the target surface image;
the network structure of the segmentation network model is provided with a flow warping module which carries out space consistency constraint on cross-scale features of the image.
Optionally, the flow warping module is configured to:
performing a 1 × 1 convolution on the shallowest feature input to the module to obtain a first convolution operation result, up-sampling each deep feature input to the module, and performing a corresponding K × K convolution on each sampling result to obtain a second convolution operation result for each deep feature;
performing a warp mapping operation on the first convolution operation result and each second convolution operation result, and performing guided upsampling on the deepest of the deep features based on the warp-mapping pixel weights obtained by the operation, to obtain a guided-upsampled feature;
and adding the guided-upsampled feature and the shallowest feature pixel by pixel, and taking the result as the output of the module.
Optionally, the configuration rule of the K × K convolution is:
under different input conditions, the value of K is chosen so that the resolution of the first convolution operation result is the same as that of the corresponding second convolution operation result.
Optionally, the warping mapping operation is performed in a pixel-by-pixel addition manner.
Optionally, the guided upsampling is performed based on the following expression:
P(x, y) = w11·Q11 + w12·Q12 + w21·Q21 + w22·Q22
wherein Q11, Q12, Q21 and Q22 represent known points, (x, y) represents the point to be inserted, and w11, w12, w21 and w22 respectively represent the warp-mapping pixel weights of the corresponding known points.
Optionally, the segmentation network model is constructed based on a HRNet network.
Optionally, the HRNet network of the segmentation network model has an N-level structure;
in the first-level structure of the HRNet network, feature extraction is performed on the input features based on a residual network and the result is output to the next-level structure; resolution reduction is also performed on the extracted features, and the processed features are output to the next-level structure;
in the second-level to (N-1)-level structures of the HRNet network, feature extraction is performed, based on a residual network, on the features of different scales output by the previous-level structure; the extracted features are processed by a feature fusion module and (N-1) flow warping modules respectively, and the processed features are output to the next-level structure;
resolution reduction is also performed on the features processed by the feature fusion module at this level, and the processed features are output to the next-level structure;
in the N-level structure of the HRNet network, feature extraction is performed, based on a residual network, on the features of different scales output by the previous-level structure; the extracted features are processed by a feature fusion module and N flow warping modules respectively, and the processed features are output.
Optionally, during the defect region segmentation process, the method further includes:
performing feature extraction on the target surface image through a convolution block, and taking the extracted features as features input to a first-level structure of the HRNet network;
and performing feature splicing on features of different scales output by the Nth-level structure of the HRNet network along the channel dimension, and performing segmentation prediction based on a splicing result.
In a second aspect,
the application provides a surface defect region segmentation apparatus based on flow warping, comprising:
the acquisition module is used for acquiring a target surface image;
the segmentation processing module is used for carrying out defect region segmentation processing on the target surface image by utilizing the trained segmentation network model;
the network structure of the segmentation network model is provided with a flow warping module which carries out space consistency constraint on the cross-scale features of the image.
In a third aspect,
the application provides an electronic device, including:
a memory having an executable program stored thereon;
a processor for executing the executable program in the memory to implement the steps of the method described above.
By adopting the above technical solutions, the present application has at least the following beneficial effects:
the method for segmenting the surface defect region based on the flow distortion comprises the steps of obtaining a target surface image; carrying out defect region segmentation processing on the target surface image by using the trained segmentation network model; the network structure of the segmentation network model is provided with a flow warping module which carries out space consistency constraint on cross-scale features of the image. According to the technical scheme, the flow distortion module is configured in the segmentation network model, and the cross-scale features of the image in the segmentation prediction processing process are subjected to space consistency constraint, so that the space inconsistency of features of different network levels is reduced, more excellent feature fusion can be obtained, and the improvement of the segmentation precision of the defect area is facilitated.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Drawings
The accompanying drawings are included to provide a further understanding of the technology or prior art of the present application and are incorporated in and constitute a part of this specification. The drawings expressing the embodiments of the present application are used for explaining the technical solutions of the present application, and should not be construed as limiting the technical solutions of the present application.
Fig. 1 is a schematic flowchart of a method for segmenting a surface defect region based on flow warping according to an embodiment of the present application;
FIG. 2 is a schematic illustration of an implementation of a flow warping module in one embodiment of the present application;
FIG. 3 is a schematic illustration of guided upsampling in one embodiment of the present application;
FIG. 4 is a schematic illustration of a network structure of a segmented network model in one embodiment of the present application;
FIG. 5 is a schematic structural diagram of a device for segmenting a surface defect region based on flow distortion according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail below. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without making any creative effort, shall fall within the protection scope of the present application.
As described in the background, in recent years, in the related art of workpiece surface defect detection, segmentation of workpiece surface defect regions based on deep learning semantic segmentation algorithms has developed rapidly. Representative methods adopt encoder-decoder frameworks such as U-Net, or methods such as DeepLabV3: they realize effective fusion of multi-scale features by combining multi-level features of the image, such as low-level spatial detail and high-level discriminative semantics, or aggregate context information over different distance ranges through dilated convolution pyramids with different receptive fields, so as to realize segmentation prediction of the defect region.
Existing methods have made progress in multi-scale feature extraction and fusion, but they ignore a basic and important problem: no spatial consistency constraint is imposed on the cross-scale features of different network levels, so spatial inconsistency exists between the features of different network levels.
Specifically, different network layers extract different features. Lower-layer features have higher resolution and contain more position and detail information, but, having passed through fewer convolutions, they are less semantic and noisier; higher-layer features carry stronger semantic information but have very low resolution and poor perception of details. A major drawback of the feature pyramid is the inconsistency between different scales. When detecting targets with a feature pyramid, heuristic feature selection is adopted: large instances are usually associated with higher-level feature maps and small instances with lower-level feature maps. When an object is assigned as positive in the feature map of one level, the corresponding regions in the feature maps of the other levels receive far less attention. Thus, if an image contains both large and small objects, conflicts between features of different levels tend to dominate the feature pyramid. This inconsistency interferes with gradient computation during training and reduces the effectiveness of the feature pyramid.
That is, the spatial inconsistency increases the difference between features of different network layers and increases the similarity of features in the same network layer, thereby affecting the accuracy of the defective area segmentation.
In view of the above, the present application provides a surface defect region segmentation method based on flow warping, so as to solve the problem that cross-scale features of different network levels lack spatial consistency constraint in a region segmentation process, which results in poor defect region segmentation accuracy.
In one embodiment, as shown in fig. 1, the present application provides a method for segmenting a surface defect region based on flow warping, comprising:
step S110, acquiring a target surface image;
For example, the application scenario of this embodiment is silicon steel strip production, in which surface defect detection must be performed on the silicon steel strip. A camera is installed at the production site, and the surface image of the silicon steel strip (i.e., the target surface image) to be input into the detection processing system is captured by this camera.
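As a minimal illustration of step S110 (the camera interface and preprocessing are not specified in this embodiment, so the file path, RGB conversion and normalisation below are assumptions), a captured surface image could be loaded and converted to the tensor layout a PyTorch segmentation model typically expects:

```python
import cv2
import torch

def acquire_target_surface_image(path):
    """Load one silicon steel strip surface image and convert it to an
    NCHW float tensor in [0, 1] (layout assumed, not taken from the patent)."""
    bgr = cv2.imread(path)                        # frame saved by the line camera
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)    # OpenCV loads BGR by default
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    return tensor.unsqueeze(0)                    # add the batch dimension
```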
Continuing to perform step S120, as shown in fig. 1, performing defect region segmentation processing on the target surface image by using the trained segmentation network model;
Different from the prior art, in the technical solution of the application, the network structure of the segmentation network model is provided with a flow warping module that imposes a spatial consistency constraint on the cross-scale features of the image.
Specifically, FIG. 2 is a schematic illustration of an implementation of the flow warping module in this embodiment. As shown in FIG. 2, the flow warping module is configured to:
perform a 1 × 1 convolution on the shallowest feature input to the module to obtain a first convolution operation result, up-sample each deep feature input to the module (deep feature 1, ..., deepest feature in FIG. 2), and perform a corresponding K × K convolution on each sampling result to obtain a second convolution operation result for each deep feature. In other words, the flow warping module uses receptive fields of different sizes for features of different resolutions: a 1 × 1 convolution for the shallowest feature, and K × K convolution blocks, which provide larger receptive fields, for the deeper features;
Here, the K × K convolution is configured as follows: under different input conditions, the value of K is chosen so that the resolution of the first convolution operation result is the same as that of the corresponding second convolution operation result; equivalently, the convolution kernel size is consistent with the scale ratio between the deep and shallow features, the ratio being such that the spatial extent of one shallow-layer pixel and that of four deep-layer pixels overlap each other;
For example, the convolution kernel sizes at different scales are set to 3, 7 and 15, respectively, where 3 corresponds to 2× upsampling and 15 corresponds to 8× upsampling.
After the convolution operations, a warp mapping operation is performed on the first convolution operation result and each second convolution operation result (i.e., points of one image are mapped to positions in another image, the covered pixel values are changed, and a new image is synthesized), and guided upsampling is performed on the deepest of the deep features based on the warp-mapping pixel weights obtained by this operation, giving the guided-upsampled feature;
Finally, the guided-upsampled feature and the shallowest feature are added pixel by pixel, and the result is taken as the output of the module.
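To make the data flow of the module concrete, the following PyTorch sketch strings the steps above together. It is an illustrative reconstruction under stated assumptions, not the patented implementation: the equal channel count across branches, the kernel-size table {2×: 3, 4×: 7, 8×: 15}, the sigmoid gate used to apply the warp-map weights (the patent's guided upsampling is the weighted interpolation of expression (2) discussed below), and all class and argument names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowWarpModule(nn.Module):
    """Sketch of the flow warping module: 1x1 conv on the shallowest feature,
    upsample + KxK conv on each deep feature, pixel-wise addition to form the
    warp map, guided upsampling of the deepest feature, then addition with the
    shallowest feature as the module output."""

    KERNEL_BY_SCALE = {2: 3, 4: 7, 8: 15}  # kernel size per upsampling factor

    def __init__(self, channels, deep_scales=(2, 4, 8)):
        super().__init__()
        self.deep_scales = deep_scales
        # First convolution: 1x1 on the shallowest feature (pixel_conv).
        self.pixel_conv = nn.Conv2d(channels, channels, kernel_size=1)
        # Second convolutions: KxK on each upsampled deep feature (region_conv).
        self.region_convs = nn.ModuleList(
            nn.Conv2d(channels, channels,
                      kernel_size=self.KERNEL_BY_SCALE[s],
                      padding=self.KERNEL_BY_SCALE[s] // 2)
            for s in deep_scales)

    def forward(self, shallow, deeps):
        # Warp map via pixel-by-pixel addition, as in expression (1).
        warp = self.pixel_conv(shallow)
        for feat, conv, scale in zip(deeps, self.region_convs, self.deep_scales):
            up = F.interpolate(feat, scale_factor=scale,
                               mode="bilinear", align_corners=False)
            warp = warp + conv(up)
        # Guided upsampling of the deepest feature, here simplified to a
        # sigmoid-gated bilinear upsample driven by the warp map.
        deepest_up = F.interpolate(deeps[-1], scale_factor=self.deep_scales[-1],
                                   mode="bilinear", align_corners=False)
        guided = torch.sigmoid(warp) * deepest_up
        # Pixel-by-pixel addition with the shallowest feature is the output.
        return guided + shallow
```

Under these assumptions, `FlowWarpModule(32)(x_shallow, [x2, x4, x8])` would accept a shallow feature of size H × W and deep features of sizes H/2 × W/2, H/4 × W/4 and H/8 × W/8, all with 32 channels.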
In this process, image warp mapping maps each pixel of an image to a new position in another image according to a certain rule; it is essentially a process of solving for new coordinates (x, y). The transformation is a linear transformation from two-dimensional coordinates to two-dimensional coordinates that preserves the straightness and parallelism of a two-dimensional figure;
For example, the warp mapping operation is performed by pixel-by-pixel addition. An image is in essence a two-dimensional matrix, and pixel addition of images is the summation of corresponding points. Taking two inputs as an example, the calculation is given by the following expression:
w = pixel_conv(X_S) + region_conv(X_D)    (1)
In expression (1), pixel_conv denotes the 1 × 1 convolution applied to the shallow feature X_S, and region_conv denotes the K × K convolution applied to the deep feature X_D.
In the above process, for example, the guided upsampling may be performed based on the following expression:
P(x, y) = w11·Q11 + w12·Q12 + w21·Q21 + w22·Q22    (2)
In expression (2), Q11, Q12, Q21 and Q22 represent known points, (x, y) represents the point to be inserted, and w11, w12, w21 and w22 respectively represent the warp-mapping pixel weights of the corresponding known points.
As will be readily understood by those skilled in the art, the convolution operation reduces the resolution of the image, and one of the purposes of upsampling is to restore the original resolution of the image, and the specific operation is to insert new pixel points into the image;
In the guided upsampling characterized by expression (2), as shown in FIG. 3, Q11, Q12, Q21 and Q22 are existing pixels whose weights w11, w12, w21 and w22 have already been calculated by expression (1) (i.e., by adding the deep and shallow features pixel by pixel); point P at (x, y) is the point to be inserted, and its pixel value can be calculated according to expression (2).
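The weighted interpolation of expression (2) could be realized, for the 2× case, roughly as sketched below; the (B, 4, 2H, 2W) weight layout and the neighbour gathering via shifted nearest-neighbour copies are assumptions made purely for illustration.

```python
import torch.nn.functional as F

def guided_upsample_2x(deep, weights):
    """Weighted interpolation of expression (2), assumed 2x case.

    deep:    (B, C, H, W)   deepest feature map (known pixels Q)
    weights: (B, 4, 2H, 2W) per-output-pixel weights w11, w12, w21, w22
             taken from the warp map of expression (1) (layout assumed)
    """
    up = F.interpolate(deep, scale_factor=2, mode="nearest")          # Q11
    q12 = F.pad(up, (0, 1, 0, 0), mode="replicate")[..., :, 1:]       # right neighbour
    q21 = F.pad(up, (0, 0, 0, 1), mode="replicate")[..., 1:, :]       # bottom neighbour
    q22 = F.pad(up, (0, 1, 0, 1), mode="replicate")[..., 1:, 1:]      # diagonal neighbour
    # P(x, y) = w11*Q11 + w12*Q12 + w21*Q21 + w22*Q22
    return (weights[:, 0:1] * up + weights[:, 1:2] * q12 +
            weights[:, 2:3] * q21 + weights[:, 3:4] * q22)
```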
By configuring the flow warping module in the segmentation network model and imposing a spatial consistency constraint on the cross-scale features of the image during segmentation prediction, the technical solution reduces the spatial inconsistency between features of different network levels, obtains better feature fusion, and thereby helps improve the segmentation accuracy of the defect region.
To facilitate understanding of the technical solutions of the present application, the technical solutions of the present application will be described below with reference to another embodiment.
In this embodiment, the segmentation network model used for the defect region segmentation processing is constructed based on an HRNet (High-Resolution Network) network.
In the technical solution of the present application, the HRNet network of the segmentation network model has an N-level structure; specifically, as shown in fig. 4, a four-level structure (Stage1, ..., Stage4) in this embodiment;
as shown in fig. 4, in the first Stage structure Stage1 of the HRNet network, feature extraction is performed on input features based on a residual network (as shown in fig. 4, here, feature extraction is implemented based on a residual connection module in ResNet 101) and output to a next Stage structure, and in order to obtain features with different resolutions, resolution reduction processing is performed on the extracted features (corresponding to the resolution reduction module in Stage1 in fig. 4), and the processed features are output to the next Stage structure.
As shown in fig. 4, in the second-level to (N-1)-level structures of the HRNet network (corresponding to stage2 and stage3 in fig. 4 in this embodiment), feature extraction is performed, based on the residual network, on the features of different scales output by the previous-level structure (e.g., in stage2, the high-resolution and low-resolution features output by stage1); the extracted features are processed by a feature fusion module and (N-1) flow warping modules respectively, and the processed features are output to the next-level structure;
resolution reduction is also performed on the features processed by the feature fusion module at this level, and the processed features are output to the next-level structure.
Based on the implementation of the flow warping module described above, the flow warping module serves as an "adjustment function" in the cross-scale feature fusion process and plays a role of coordinate correction, so the spatial consistency of features during feature fusion can be improved; the feature fusion module is used to fuse context information of different scales;
taking stage2 as an example, in implementation, 1 × 1 convolution and bilinear interpolation upsampling can be used to transfer deep features to the shallow layer for fusion, and 3 × 3 convolution with step size of 2 is used to transfer shallow features to the deep layer for fusion. The features fused by the feature fusion module are subjected to resolution reduction module to obtain second-level low-resolution features;
Similar to stage2, in stage3, as shown in fig. 4, the extracted high-resolution and low-resolution features are respectively input to the feature extraction module (residual connection module) of the third-level structure, feature fusion is performed through two flow warping modules and a feature fusion module, a third-level low-resolution feature is obtained through the resolution reduction module, and finally the four features of different scales obtained are input to the fourth-level structure.
As shown in fig. 4, in the N-level structure of the HRNet network (here, corresponding to stage4), feature extraction is performed, based on the residual network, on the features of different scales output by the previous-level structure; the extracted features are processed by a feature fusion module and N flow warping modules respectively, and the processed features are output.
Specifically, in stage4, the fused features from the two flow warping modules, the fused feature from the feature fusion module, and the low-resolution feature output by stage3 are input to the feature extraction module (residual connection module) of the fourth-level (stage4) structure; feature fusion is performed through the three flow warping modules and the feature fusion module, and four fused features of different scales are finally obtained and output.
In addition, similar to the prior art, in this embodiment, as shown in fig. 4, several convolution blocks are provided before the HRNet network in the model to extract features from the acquired target surface image and reduce the resolution of the features; that is, the defect region segmentation process further includes:
performing feature extraction on the target surface image through the convolution blocks, and taking the extracted features as the features input to the first-level structure stage1 of the HRNet network.
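The convolution blocks in front of the HRNet body could, for instance, take the form of the stem sketched below (two stride-2 3 × 3 convolutions that reduce the resolution by 4×); the channel width of 64 and the exact layer composition are assumptions rather than details given in the patent.

```python
import torch.nn as nn

# Hypothetical stem: extracts features from the target surface image and
# reduces their resolution before Stage1 of the HRNet network.
stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)
```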
Similarly, as in the prior art, in this embodiment, in order to implement a complete segmentation prediction function, splicing and segmentation prediction processing needs to be performed after the HRNet body in the model; that is, the defect region segmentation process further includes:
performing feature splicing on features of different scales output by an Nth-level structure of the HRNet network along channel dimensions, and performing segmentation prediction based on a splicing result;
Specifically, in this embodiment, as shown in fig. 4, the four fused features of different scales extracted by the fourth-level structure are spliced along the channel dimension (the "splicing" block in fig. 4), and finally the prediction result for the defect region is output after passing through a segmentation prediction layer (the "segmentation prediction" block in fig. 4).
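A possible form of this final splicing and prediction step is sketched below; resizing every fused feature to the highest resolution before concatenation, and using a single 1 × 1 convolution as the segmentation prediction layer, are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentationHead(nn.Module):
    """Splice multi-scale features along the channel dimension and predict."""

    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.predict = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, features):
        # Resize every fused feature to the highest resolution, then splice.
        target = features[0].shape[-2:]
        spliced = torch.cat(
            [F.interpolate(f, size=target, mode="bilinear", align_corners=False)
             for f in features], dim=1)
        # Per-pixel class scores for the defect region prediction.
        return self.predict(spliced)
```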
The flow warping module provided by the technical solution of the application can eliminate the differences between the layers of the semantic segmentation network, plays a role of coordinate correction in the multi-level feature fusion process, and improves the spatial consistency of features during feature fusion.
In a specific implementation, HRNet can be used to learn features of different scales, with a multi-branch structure retaining features at different resolutions; its overall structure is shown in fig. 4. The segmentation network model uses HRNet together with the flow warping modules to extract features of different scales and to adjust the features in the spatial dimension, thereby reducing the spatial inconsistency of features of different network levels, obtaining better feature fusion, and ultimately improving the segmentation accuracy of the defect region.
Fig. 5 is a schematic structural diagram of a surface defect region segmentation apparatus based on flow distortion according to an embodiment of the present application, and as shown in fig. 5, the surface defect region segmentation apparatus 300 based on flow distortion includes:
an acquisition module 301, configured to acquire a target surface image;
a segmentation processing module 302, configured to perform defect region segmentation processing on the target surface image by using the trained segmentation network model;
the network structure of the segmentation network model is provided with a flow warping module which carries out space consistency constraint on the cross-scale features of the image.
With respect to the flow-distortion-based surface defect region segmentation apparatus 300 in the above-described related embodiment, the specific manner in which each module performs operations has been described in detail in the embodiment related to the method, and will not be described in detail here.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 6, the electronic device 400 includes:
a memory 401 having an executable program stored thereon;
a processor 402 for executing the executable program in the memory 401 to implement the steps of the above method.
With respect to the electronic device 400 in the above embodiment, the specific manner of executing the program in the memory 401 by the processor 402 thereof has been described in detail in the embodiment related to the method, and will not be elaborated herein.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar contents in other embodiments may be referred to for the contents which are not described in detail in some embodiments.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present application, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A method for segmenting a surface defect region based on flow distortion is characterized by comprising the following steps:
acquiring a target surface image;
utilizing the trained segmentation network model to carry out defect region segmentation processing on the target surface image;
the network structure of the segmentation network model is provided with a flow warping module which carries out space consistency constraint on cross-scale features of the image.
2. The surface defect region segmentation method of claim 1, wherein the flow warping module is configured to:
performing a 1 × 1 convolution on the shallowest feature input to the module to obtain a first convolution operation result, up-sampling each deep feature input to the module, and performing a corresponding K × K convolution on each sampling result to obtain a second convolution operation result for each deep feature;
performing a warp mapping operation on the first convolution operation result and each second convolution operation result, and performing guided upsampling on the deepest of the deep features based on the warp-mapping pixel weights obtained by the operation, to obtain a guided-upsampled feature;
and adding the guided-upsampled feature and the shallowest feature pixel by pixel, and taking the result as the output of the module.
3. The surface defect region segmentation method according to claim 2, wherein the K x K convolution is configured according to a rule that:
under different input conditions, the value of K enables the resolution of the convolution result of the first operation and the resolution of the convolution result of the corresponding second operation to be the same.
4. The surface defect region segmentation method as claimed in claim 3, wherein the warp mapping operation is performed in a pixel-by-pixel addition manner.
5. The surface defect region segmentation method of claim 2, wherein the guided upsampling is performed based on the following expression:
P(x, y) = w11·Q11 + w12·Q12 + w21·Q21 + w22·Q22
wherein Q11, Q12, Q21 and Q22 represent known points, (x, y) represents the point to be inserted, and w11, w12, w21 and w22 respectively represent the warp-mapping pixel weights of the corresponding known points.
6. The surface defect region segmentation method according to any one of claims 1 to 5, wherein the segmentation network model is constructed based on HRNet network.
7. The surface defect region segmentation method as claimed in claim 6, wherein the HRNet network of the segmentation network model has an N-level structure;
in the first-level structure of the HRNet network, performing feature extraction on the input features based on a residual network and outputting the result to the next-level structure, performing resolution reduction on the extracted features, and outputting the processed features to the next-level structure;
in the second-level to (N-1)-level structures of the HRNet network, performing feature extraction, based on a residual network, on the features of different scales output by the previous-level structure, processing the extracted features by a feature fusion module and (N-1) flow warping modules respectively, and outputting the processed features to the next-level structure,
performing resolution reduction on the features processed by the feature fusion module at this level, and outputting the processed features to the next-level structure;
in the N-level structure of the HRNet network, performing feature extraction, based on a residual network, on the features of different scales output by the previous-level structure, processing the extracted features by a feature fusion module and N flow warping modules respectively, and outputting the processed features.
8. The surface defect region segmentation method according to claim 7, further comprising, during the defect region segmentation process:
performing feature extraction on the target surface image through a convolution block, and taking the extracted features as features input to a first-level structure of the HRNet network;
and performing feature splicing on features of different scales output by the Nth-level structure of the HRNet network along the channel dimension, and performing segmentation prediction based on a splicing result.
9. A surface defect region segmentation apparatus based on flow warping, comprising:
the acquisition module is used for acquiring a target surface image;
the segmentation processing module is used for carrying out defect region segmentation processing on the target surface image by utilizing the trained segmentation network model;
the network structure of the segmentation network model is provided with a flow warping module which carries out space consistency constraint on the cross-scale features of the image.
10. An electronic device, comprising:
a memory having an executable program stored thereon;
a processor for executing the executable program in the memory to implement the steps of the method of any one of claims 1-8.
CN202211416972.XA 2022-11-14 2022-11-14 Surface defect region segmentation method and device based on flow distortion and electronic equipment Pending CN115661174A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211416972.XA CN115661174A (en) 2022-11-14 2022-11-14 Surface defect region segmentation method and device based on flow distortion and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211416972.XA CN115661174A (en) 2022-11-14 2022-11-14 Surface defect region segmentation method and device based on flow distortion and electronic equipment

Publications (1)

Publication Number Publication Date
CN115661174A true CN115661174A (en) 2023-01-31

Family

ID=85020996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211416972.XA Pending CN115661174A (en) 2022-11-14 2022-11-14 Surface defect region segmentation method and device based on flow distortion and electronic equipment

Country Status (1)

Country Link
CN (1) CN115661174A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180067909A (en) * 2016-12-13 2018-06-21 한국전자통신연구원 Apparatus and method for segmenting image
CN111696075A (en) * 2020-04-30 2020-09-22 航天图景(北京)科技有限公司 Intelligent fan blade defect detection method based on double-spectrum image
CN114158280A (en) * 2020-07-08 2022-03-08 谷歌有限责任公司 Method and apparatus for determining a learnable cost body corresponding to a pixel
CN112950593A (en) * 2021-03-08 2021-06-11 南京航空航天大学 Semantic segmentation method for rail surface defects
CN113887654A (en) * 2021-10-20 2022-01-04 北京矩视智能科技有限公司 Surface defect region segmentation method, device and equipment based on staggered feature fusion
CN114693930A (en) * 2022-03-31 2022-07-01 福州大学 Example segmentation method and system based on multi-scale features and context attention
CN115272225A (en) * 2022-07-26 2022-11-01 山东大学 Strip steel surface defect detection method and system based on countermeasure learning network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YINJIE ZHANG: "Improved-Flow Warp Module for Remote Sensing Semantic Segmentation" *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20230131)