CN117830226A - Boundary constraint-based polyp segmentation method and system - Google Patents


Info

Publication number
CN117830226A
Authority
CN
China
Prior art keywords
polyp
image
boundary
feature
attention
Prior art date
Legal status (assumed, not a legal conclusion)
Pending
Application number
CN202311662571.7A
Other languages
Chinese (zh)
Inventor
沈俊羽
黄志青
Current Assignee
Guangzhou Hengshayun Technology Co ltd
Original Assignee
Guangzhou Hengshayun Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Hengshayun Technology Co ltd filed Critical Guangzhou Hengshayun Technology Co ltd
Priority to CN202311662571.7A priority Critical patent/CN117830226A/en
Publication of CN117830226A publication Critical patent/CN117830226A/en
Pending legal-status Critical Current

Classifications

    All codes fall under G06T (Physics; Computing; Image data processing or generation, in general):
    • G06T7/0012 Biomedical image inspection (G06T7/00 Image analysis)
    • G06T7/10 Segmentation; Edge detection
    • G06T2207/10068 Endoscopic image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30096 Tumor; Lesion


Abstract

The invention discloses a boundary constraint-based polyp segmentation method and system, wherein the method comprises the following steps: acquiring a polyp image and performing data preprocessing to obtain a preprocessed polyp image; introducing a boundary feature extraction module to construct a polyp segmentation network based on boundary constraint; and performing recognition and segmentation processing on the preprocessed polyp image through the boundary constraint-based polyp segmentation network to obtain a segmented polyp image. The system comprises a preprocessing module, a construction module and a segmentation module. By capturing the boundary details of polyp images, the method accurately segments polyps of different shapes, colors and sizes. The boundary constraint-based polyp segmentation method and system can be widely applied in the technical field of image recognition and segmentation processing.

Description

Boundary constraint-based polyp segmentation method and system
Technical Field
The invention relates to the technical field of image recognition segmentation processing, in particular to a polyp segmentation method and system based on boundary constraint.
Background
Currently, colonoscopy is one of the most common methods for detecting polyps. However, because doctors carry a heavy workload and resources are limited, colonoscopy examinations are often short, so a high proportion of polyps is missed; polyps that are not found in time increase the patient's risk of colorectal cancer. Current machine-learning-based polyp segmentation algorithms avoid traditional manual detection, but they rely on manually extracted features and still have a high miss rate. Current deep-learning-based polyp detection techniques can learn features and segment end to end; however, because continuous downsampling operations lose detail information and polyp boundaries are insufficiently represented, existing deep-learning-based polyp segmentation algorithms generally cannot accurately segment polyps of different shapes, colors and sizes.
Disclosure of Invention
In order to solve the above technical problems, the invention aims to provide a boundary constraint-based polyp segmentation method and system that accurately segment polyps of different shapes, colors and sizes by capturing polyp image boundary details.
The first technical scheme adopted by the invention is as follows: a boundary constraint-based polyp segmentation method, comprising the steps of:
acquiring a polyp image and performing data preprocessing to obtain a preprocessed polyp image;
introducing a boundary feature extraction module to construct a polyp segmentation network based on boundary constraint;
and performing recognition and segmentation processing on the preprocessed polyp image through the polyp segmentation network based on boundary constraint to obtain a segmented polyp image, wherein the segmented polyp image comprises a plurality of polyp segmentation images.
Further, the step of acquiring a polyp image and performing data preprocessing to obtain a preprocessed polyp image specifically includes:
acquiring polyp images;
performing anti-reflection treatment on the polyp image to obtain an anti-reflection polyp image;
and carrying out data enhancement processing on the anti-reflection polyp image to obtain a preprocessed polyp image.
Further, the polyp segmentation network based on boundary constraint comprises a plurality of Res2Net-50 backbone network modules, a plurality of mutual optimization modules, a plurality of convolution attention modules, a boundary feature extraction module and a plurality of output modules, wherein a first output end of the Res2Net-50 backbone network modules is connected with an input end of the mutual optimization modules, a second output end of the Res2Net-50 backbone network modules is connected with an input end of the boundary feature extraction module, an output end of the mutual optimization modules and an output end of the boundary feature extraction module are respectively connected with an input end of the convolution attention modules, and an output end of the convolution attention modules is connected with an input end of the output modules.
Further, the step of performing recognition segmentation processing on the preprocessed polyp image through the polyp segmentation network based on boundary constraint to obtain a segmented polyp image specifically includes:
inputting the preprocessed polyp image to the boundary constraint-based polyp segmentation network;
based on the Res2Net-50 backbone network module, performing feature mapping processing on the preprocessed polyp image to obtain a polyp feature image;
based on the mutual optimization module, performing feature optimization processing on the polyp feature image to obtain a polyp feature image after mutual optimization;
based on the boundary feature extraction module, carrying out edge feature extraction processing on the polyp feature image to obtain a boundary polyp feature image and a boundary polyp prediction image;
performing feature fusion processing on the boundary polyp feature image and the polyp feature image after mutual optimization to obtain a polyp feature image with boundary feature information;
based on the convolution attention module, performing spatial attention feature extraction processing on the polyp feature image with boundary feature information to obtain the polyp attention feature image with boundary feature information;
fusing the polyp attention characteristic image with the boundary characteristic information with the boundary polyp prediction image to obtain a fused boundary polyp attention characteristic image;
and based on the output module, performing feature extraction processing on the fused boundary polyp attention feature image to obtain a segmented polyp image.
Further, the step of performing feature optimization processing on the polyp feature image based on the mutual optimization module to obtain a polyp feature image after mutual optimization specifically includes:
inputting the polyp feature image to the mutual optimization module, wherein the mutual optimization module comprises a first convolution layer, a first batch normalization layer, a first ReLU activation function, an upsampling layer and a downsampling layer;
based on the first convolution layer, carrying out convolution processing on the polyp characteristic image to obtain a convolved polyp characteristic image;
based on the first batch normalization layer, batch normalization processing is carried out on the convolved polyp characteristic images, and batch normalized polyp characteristic images are obtained;
based on the first ReLU activation function, performing nonlinear mapping processing on the batch of normalized polyp feature images to obtain activated polyp feature images;
based on the upsampling layer, upsampling the activated polyp feature image to obtain an upsampled polyp feature image;
and based on the downsampling layer, downsampling the up-sampled polyp feature image to obtain a polyp feature image after mutual optimization.
Further, the step of performing edge feature extraction processing on the polyp feature image based on the boundary feature extraction module to obtain a boundary polyp feature image and a boundary polyp prediction image specifically includes:
inputting the polyp characteristic image to the boundary characteristic extraction module, wherein the boundary characteristic extraction module comprises a first branch convolution layer with a first structure, a second branch convolution layer with a first structure, a first branch convolution layer with a second structure, a second branch convolution layer with a second structure, a first branch convolution layer with a third structure and a second branch convolution layer with a third structure;
based on the first branch convolution layer of the first structure, carrying out convolution processing on the polyp characteristic image to obtain a first transverse axis polyp characteristic image;
based on the first structural second branch convolution layer, carrying out convolution processing on the polyp characteristic image to obtain a polyp characteristic image with a first longitudinal axis;
performing fusion processing on the first transverse axis polyp characteristic image and the first longitudinal axis polyp characteristic image to obtain a first fusion polyp characteristic image;
based on the first branch convolution layer of the second structure, carrying out convolution processing on the first fused polyp feature image to obtain a second transverse axis polyp feature image;
based on the second branch convolution layer of the second structure, carrying out convolution processing on the first fused polyp feature image to obtain a polyp feature image with a second longitudinal axis;
fusing the second transverse axis polyp feature image and the second longitudinal axis polyp feature image to obtain a second fused polyp feature image;
based on the first branch convolution layer of the third structure, carrying out convolution processing on the second fusion polyp characteristic image to obtain a third transverse axis polyp characteristic image;
based on the second branch convolution layer of the third structure, carrying out convolution processing on the second fused polyp feature image to obtain a third longitudinal axis polyp feature image;
and carrying out fusion processing on the third transverse axis polyp characteristic image and the third longitudinal axis polyp characteristic image to obtain a third fusion polyp characteristic image, wherein the third fusion polyp characteristic image comprises a boundary polyp characteristic image and a boundary polyp prediction image.
Further, the step of performing spatial attention feature extraction processing on the polyp feature image with boundary feature information based on the convolution attention module to obtain a polyp attention feature image with boundary feature information specifically includes:
inputting the polyp feature image with boundary feature information to the convolution attention module, wherein the convolution attention module comprises a channel attention sub-module and a space attention sub-module;
based on the channel attention sub-module, carrying out channel attention extraction processing on the polyp characteristic image with boundary characteristic information to obtain a polyp channel attention characteristic image with boundary characteristic information;
performing element-by-element multiplication processing on the polyp channel attention characteristic image with boundary characteristic information and the polyp characteristic image with boundary characteristic information to obtain a preliminary polyp attention characteristic image with boundary characteristic information;
based on the spatial attention sub-module, performing spatial attention extraction processing on the preliminary polyp attention characteristic image with boundary characteristic information to obtain a polyp spatial attention characteristic image with boundary characteristic information;
and carrying out element-by-element multiplication processing on the preliminary polyp attention characteristic image with boundary characteristic information and the polyp space attention characteristic image with boundary characteristic information to obtain the polyp attention characteristic image with boundary characteristic information.
Further, the step of performing feature extraction processing on the fused boundary polyp attention feature image based on the output module to obtain a segmented polyp image specifically includes:
inputting the fused boundary polyp attention feature image to the output module, wherein the output module comprises a second convolution layer, a second batch normalization layer and a second ReLU activation function;
based on the second convolution layer, carrying out convolution processing on the fused boundary polyp attention characteristic image to obtain a convolved boundary polyp attention characteristic image;
based on the second batch normalization layer, carrying out batch normalization processing on the convolved boundary polyp attention feature images to obtain batch normalized boundary polyp attention feature images;
and performing feature mapping processing on the boundary polyp attention feature images after batch normalization based on the second ReLU activation function to obtain segmented polyp images.
Further, the loss functions of the boundary constraint-based polyp segmentation network include a weighted binary cross entropy loss function, a dice loss function, and a consistency enhancement loss function.
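As an illustration of how the first two loss terms combine, the following is a minimal pure-Python sketch over flattened prediction/label lists. The pixel-weighting scheme, the smoothing constant, and the omitted consistency enhancement loss are assumptions for illustration only, since the text does not give their exact formulations.

```python
import math

def weighted_bce(pred, target, weight):
    # Pixel-weighted binary cross entropy; `weight` would emphasize hard
    # (e.g. boundary) pixels. Uniform weights reduce it to plain BCE.
    eps = 1e-7
    total = sum(w * -(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps))
                for p, t, w in zip(pred, target, weight))
    return total / sum(weight)

def dice_loss(pred, target):
    # Soft Dice loss: 1 - 2|P∩G| / (|P| + |G|), with an assumed smoothing
    # constant of 1 to avoid division by zero on empty masks.
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + 1.0) / (sum(pred) + sum(target) + 1.0)

def total_loss(pred, target, weight):
    # The consistency enhancement loss term is omitted here.
    return weighted_bce(pred, target, weight) + dice_loss(pred, target)
```

In practice each term would be computed per pixel over the network's sigmoid outputs and the binary ground-truth mask, and the terms summed into the training objective.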
The second technical scheme adopted by the invention is as follows: a boundary constraint-based polyp segmentation system, comprising:
the preprocessing module is used for acquiring a polyp image and preprocessing data to obtain a preprocessed polyp image;
the construction module is used for introducing the boundary feature extraction module and constructing a polyp segmentation network based on boundary constraint;
the segmentation module is used for carrying out recognition and segmentation processing on the preprocessed polyp image through the polyp segmentation network based on boundary constraint to obtain a segmented polyp image, wherein the segmented polyp image comprises a plurality of polyp segmentation images.
The method and the system have the following beneficial effects: the invention acquires a polyp image and performs data preprocessing, then introduces the boundary feature extraction module as an independent branch to construct a polyp segmentation network based on boundary constraint, and performs recognition and segmentation processing on the preprocessed polyp image. The boundary feature extraction module extracts and fuses features along the horizontal-axis and vertical-axis directions respectively, which improves the network's capture of polyp image boundary details, enriches the detail information of the polyp image, and enables accurate segmentation of polyps of different shapes, colors and sizes.
Drawings
FIG. 1 is a flow chart of steps of a method for polyp segmentation based on boundary constraints in accordance with an embodiment of the present invention;
FIG. 2 is a block diagram of a boundary constraint-based polyp segmentation system in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a polyp segmentation network based on boundary constraints in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of a boundary feature extraction module according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the results output by the boundary constraint-based polyp segmentation network in accordance with an embodiment of the present invention.
Reference numerals: 1. Res2Net-50 backbone network module; 2. mutual optimization module; 3. boundary feature extraction module; 4. convolution attention module; 5. output module.
Detailed Description
The invention will now be described in further detail with reference to the drawings and to specific examples. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
First, the boundary constraint-based polyp segmentation network constructed by the invention is implemented in PyTorch, and training and testing are carried out on an NVIDIA RTX 2080 Ti GPU. Res2Net-50 is used to initialize the parameters of the backbone network, while the rest of the network uses a random initialization strategy. All input images are resized to 352×352, and the channel number of the middle layers of the network is 64. The parameters of the whole network are optimized with the Adam optimization algorithm, with the learning rate set to 1×10⁻⁴. The final output is a network segmentation result with high-level semantic information and minimal loss of detail.
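Sketched in PyTorch, the training configuration stated above (352×352 inputs, 64 mid-layer channels, Adam with learning rate 1×10⁻⁴) might look as follows; a single convolution stands in for the full network, which is not reproduced here.

```python
import torch
import torch.nn as nn

# Stand-in for the full segmentation network (illustration only):
# 3-channel input, 64 channels as in the network's middle layers.
model = nn.Conv2d(3, 64, kernel_size=3, padding=1)

# Adam optimizer with the stated learning rate of 1e-4.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# A batch of inputs resized to 352 x 352, as described above.
x = torch.randn(2, 3, 352, 352)
y = model(x)
```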
Referring to fig. 1, the present invention provides a boundary constraint-based polyp segmentation method comprising the steps of:
s1, acquiring a polyp image and performing data preprocessing to obtain a preprocessed polyp image;
specifically, a polyp image is acquired; performing anti-reflection treatment on the polyp image to obtain a polyp image after anti-reflection; and carrying out data enhancement processing on the polyp image after the light reflection removal to obtain a preprocessed polyp image.
In this embodiment, anti-reflection processing is first performed on the polyp image to eliminate the influence of highlight regions of the polyp image on the network. Data enhancement means such as random flipping, random cropping and image scaling are then applied.
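A toy pure-Python sketch of this preprocessing step is given below; the brightness threshold and the neighbour-mean fill are illustrative assumptions, not the patent's actual anti-reflection algorithm, and images are represented as nested lists of grayscale values.

```python
import random

def remove_highlights(img, thresh=240):
    # Replace specular-highlight pixels (very bright values) with the mean
    # of their non-highlight neighbours -- a crude stand-in for inpainting.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] >= thresh:
                nbrs = [img[ny][nx]
                        for ny in (y - 1, y, y + 1) for nx in (x - 1, x, x + 1)
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] < thresh]
                if nbrs:
                    out[y][x] = sum(nbrs) // len(nbrs)
    return out

def augment(img):
    # One simple enhancement means: a random horizontal flip.
    return [row[::-1] for row in img] if random.random() < 0.5 else img
```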
S2, introducing a boundary feature extraction module to construct a polyp segmentation network based on boundary constraint;
specifically, as shown in fig. 3, the polyp segmentation network based on boundary constraint includes a plurality of Res2Net-50 backbone network modules 1, a plurality of mutual optimization modules 2, a plurality of convolution attention modules 4, a boundary feature extraction module 3 and a plurality of output modules 5, wherein a first output end of each of the plurality of Res2Net-50 backbone network modules is connected with an input end of each of the plurality of mutual optimization modules, a second output end of each of the plurality of Res2Net-50 backbone network modules is connected with an input end of each of the boundary feature extraction modules, an output end of each of the plurality of mutual optimization modules and an output end of each of the boundary feature extraction modules are respectively connected with an input end of each of the plurality of convolution attention modules, and an output end of each of the plurality of convolution attention modules is connected with an input end of each of the plurality of output modules.
It should be noted that, for the Res2Net-50 backbone network module in the embodiment of the present invention, the low-level features have small receptive fields and large resolution, contain a large amount of noise, and require a large amount of computation, so only the features of the last four layers are used, namely the features corresponding to Stage1_L1, Stage2_L2, Stage3_L3 and Stage4_L4 in fig. 3. The "plurality" mentioned above is therefore four in this specific embodiment.
S3, performing recognition and segmentation processing on the preprocessed polyp image through a polyp segmentation network based on boundary constraint to obtain a segmented polyp image, wherein the segmented polyp image comprises a plurality of polyp segmentation images.
S31, inputting the preprocessed polyp image into a polyp segmentation network based on boundary constraint;
S32, performing feature mapping processing on the preprocessed polyp image based on the Res2Net-50 backbone network module to obtain a polyp feature image;
S33, performing feature optimization processing on the polyp feature image based on the mutual optimization module to obtain the polyp feature image after mutual optimization;
specifically, inputting a polyp feature image to a mutual optimization module, wherein the mutual optimization module comprises a first convolution layer, a first batch of normalization layers, a first ReLU activation function, an up-sampling layer and a down-sampling layer;
further, based on the first convolution layer, carrying out convolution processing on the polyp characteristic image to obtain a convolved polyp characteristic image; based on the first batch normalization layer, performing batch normalization processing on the convolved polyp feature images to obtain batch normalized polyp feature images; based on a first ReLU activation function, performing nonlinear mapping processing on the batch normalized polyp feature images to obtain activated polyp feature images; based on the upsampling layer, upsampling the activated polyp feature image to obtain an upsampled polyp feature image; and based on the downsampling layer, downsampling the up-sampled polyp feature image to obtain the polyp feature image after mutual optimization.
In this embodiment, the OOM is a mutual optimization module mainly comprising a convolution layer, a BN (Batch Normalization) layer, a ReLU activation function, and upsampling and downsampling operations. For each layer, further feature extraction is performed by the convolution, BN and ReLU layers; the feature maps of the other three layers are upsampled or downsampled to the same size as the current layer's feature map and then fused through a Concat operation. The mutually optimized feature maps finally obtained contain both rich detail information and high-level semantic information.
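A minimal PyTorch sketch of such a mutual optimization module follows; the concatenate-then-1×1-convolution fusion and the bilinear resizing are assumptions consistent with, but not fixed by, the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OOM(nn.Module):
    """Mutual optimization: refine the current level with conv-BN-ReLU,
    resize the other levels to its spatial size, Concat, then fuse."""
    def __init__(self, ch=64, num_levels=4):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
        )
        # 1x1 conv to merge the concatenated levels back to `ch` channels.
        self.fuse = nn.Conv2d(ch * num_levels, ch, kernel_size=1)

    def forward(self, own, others):
        own = self.refine(own)
        size = own.shape[-2:]
        # Up/down-sample every other level to this level's spatial size.
        aligned = [F.interpolate(o, size=size, mode='bilinear',
                                 align_corners=False) for o in others]
        return self.fuse(torch.cat([own] + aligned, dim=1))
```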
S34, based on a boundary feature extraction module, carrying out edge feature extraction processing on the polyp feature image to obtain a boundary polyp feature image and a boundary polyp prediction image;
specifically, inputting a polyp feature image to a boundary feature extraction module, wherein the boundary feature extraction module comprises a first branch convolution layer with a first structure, a second branch convolution layer with a first structure, a first branch convolution layer with a second structure, a second branch convolution layer with a second structure, a first branch convolution layer with a third structure and a second branch convolution layer with a third structure;
further, based on a first branch convolution layer of the first structure, carrying out convolution processing on the polyp characteristic image to obtain a first transverse axis polyp characteristic image; based on the first structural second branch convolution layer, carrying out convolution processing on the polyp characteristic image to obtain a polyp characteristic image with a first longitudinal axis; fusing the first transverse axis polyp feature image and the first longitudinal axis polyp feature image to obtain a first fused polyp feature image; based on a first branch convolution layer of the second structure, carrying out convolution processing on the first fusion polyp feature image to obtain a second transverse axis polyp feature image; based on a second branch convolution layer of the second structure, carrying out convolution processing on the first fused polyp feature image to obtain a polyp feature image of a second longitudinal axis; fusing the second transverse axis polyp feature image and the second longitudinal axis polyp feature image to obtain a second fused polyp feature image; based on the first branch convolution layer of the third structure, carrying out convolution processing on the second fusion polyp feature image to obtain a third transverse axis polyp feature image; based on a second branch convolution layer of the third structure, carrying out convolution processing on the second fused polyp feature image to obtain a polyp feature image with a third longitudinal axis; and carrying out fusion processing on the third transverse axis polyp characteristic image and the third longitudinal axis polyp characteristic image to obtain a third fusion polyp characteristic image, wherein the third fusion polyp characteristic image comprises a boundary polyp characteristic image and a boundary polyp prediction image.
In this embodiment, a polyp has poor contrast with the surrounding tissue mucosa and a blurred boundary, so segmentation networks segment the polyp boundary poorly. A BFEM (Boundary Feature Extraction Module, i.e., the edge feature extraction module) is therefore proposed, which first determines the boundaries of polyps and then guides the network to segment them. The BFEM can be considered a branch network of the segmentation network that generates a boundary feature map and a boundary prediction map. As shown in fig. 4, a convolution with kernel size 3 is first decomposed into 1×3 and 3×1 convolutions, namely the first branch convolution layer of the first structure and the second branch convolution layer of the first structure, to obtain features along the horizontal and vertical axes, which are then fused. Kernel sizes 3, 5 and 7 are used in this order, namely the second-structure first branch convolution layer 5×1, the second-structure second branch convolution layer 1×5, the third-structure first branch convolution layer 7×1 and the third-structure second branch convolution layer 1×7 in fig. 4. Different kernel sizes yield different receptive fields, and applying them successively and fusing the results yields multi-scale features that contain rich boundary features.
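The decomposed-convolution structure described above can be sketched in PyTorch as follows; the additive fusion of the two axis branches, the 1×1 prediction head, and the mapping of kernel orientation to axis are assumptions based on one reading of fig. 4.

```python
import torch
import torch.nn as nn

class BFEM(nn.Module):
    """Boundary feature extraction: paired 1 x k / k x 1 branches for
    k = 3, 5, 7, fused stage by stage; outputs a boundary feature map
    and a 1-channel boundary prediction map."""
    def __init__(self, ch=64):
        super().__init__()
        def pair(k):
            p = k // 2
            return (nn.Conv2d(ch, ch, (1, k), padding=(0, p)),   # horizontal-axis branch
                    nn.Conv2d(ch, ch, (k, 1), padding=(p, 0)))   # vertical-axis branch
        self.h3, self.v3 = pair(3)
        self.h5, self.v5 = pair(5)
        self.h7, self.v7 = pair(7)
        self.pred = nn.Conv2d(ch, 1, kernel_size=1)  # boundary prediction head (assumed 1x1 conv)

    def forward(self, x):
        f = self.h3(x) + self.v3(x)   # fuse the two axis features (addition assumed)
        f = self.h5(f) + self.v5(f)
        f = self.h7(f) + self.v7(f)
        return f, self.pred(f)        # boundary feature map, boundary prediction map
```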
S35, carrying out feature fusion processing on the boundary polyp feature image and the polyp feature image after mutual optimization to obtain a polyp feature image with boundary feature information;
S36, based on a convolution attention module, performing spatial attention feature extraction processing on the polyp feature image with the boundary feature information to obtain the polyp attention feature image with the boundary feature information;
specifically, polyp feature images with boundary feature information are input to a convolution attention module, which includes a channel attention sub-module and a spatial attention sub-module;
further, based on the channel attention sub-module, channel attention extraction processing is carried out on the polyp feature image with boundary feature information, so as to obtain the polyp channel attention feature image with boundary feature information; performing element-by-element multiplication processing on the polyp channel attention characteristic image with the boundary characteristic information and the polyp characteristic image with the boundary characteristic information to obtain a preliminary polyp attention characteristic image with the boundary characteristic information; based on the space attention sub-module, performing space attention extraction processing on the preliminary polyp attention characteristic image with boundary characteristic information to obtain the polyp space attention characteristic image with boundary characteristic information; and performing element-by-element multiplication processing on the preliminary polyp attention characteristic image with the boundary characteristic information and the polyp space attention characteristic image with the boundary characteristic information to obtain the polyp attention characteristic image with the boundary characteristic information.
In this embodiment, CBAM is the Convolutional Block Attention Module, an attention mechanism combining channel and spatial attention. The feature map first passes through the channel attention sub-module, and its output is multiplied element by element with the original feature map; the result is then input to the spatial attention sub-module; finally, the spatial attention output is multiplied element by element with that intermediate result again to obtain the attention map.
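A compact PyTorch sketch of a CBAM-style block consistent with this description follows; the channel reduction ratio and the 7×7 spatial kernel are conventional CBAM choices assumed here, not values given in the text.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention then spatial attention, each applied by
    element-wise multiplication, as described above."""
    def __init__(self, ch=64, reduction=16):
        super().__init__()
        # Shared MLP (as 1x1 convs) for the channel attention sub-module.
        self.mlp = nn.Sequential(
            nn.Conv2d(ch, ch // reduction, kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(ch // reduction, ch, kernel_size=1),
        )
        # Spatial attention conv over stacked avg/max channel maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention from global avg- and max-pooled descriptors,
        # multiplied element by element with the input feature map.
        avg = x.mean(dim=(2, 3), keepdim=True)
        mx = x.amax(dim=(2, 3), keepdim=True)
        x = x * torch.sigmoid(self.mlp(avg) + self.mlp(mx))
        # Spatial attention from channel-wise avg/max maps, multiplied
        # element by element with the intermediate result.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```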
S37, fusing the polyp attention characteristic image with the boundary characteristic information with the boundary polyp prediction image to obtain a fused boundary polyp attention characteristic image;
S38, based on the output module, performing feature extraction processing on the fused boundary polyp attention feature image to obtain a segmented polyp image.
Specifically, the fused boundary polyp attention feature image is input to an output module, and the output module comprises a second convolution layer, a second batch normalization layer and a second ReLU activation function;
further, based on the second convolution layer, convolution processing is carried out on the fused boundary polyp attention feature image to obtain a convolved boundary polyp attention feature image; based on the second batch normalization layer, batch normalization processing is carried out on the convolved boundary polyp attention feature image to obtain a batch normalized boundary polyp attention feature image; and feature mapping processing is performed on the batch normalized boundary polyp attention feature image based on the second ReLU activation function to obtain the segmented polyp image.
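A minimal sketch of such an output module; the 3×3 kernel and the single-channel output are assumptions for illustration:

```python
import torch
import torch.nn as nn

def output_head(in_ch=64, out_ch=1):
    """Second convolution layer -> second batch normalization layer ->
    second ReLU activation, as in the output module described above."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```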
In summary, in the polyp image segmentation network, the embodiment of the invention uses Res2Net-50 as the backbone network, which yields better segmentation results than VGG, ResNet or ConvNeXt. Res2Net-50 represents multi-scale features at a finer granularity level, generates feature maps with a pyramidal structure, and extracts five levels of features from low-level to high-level. However, the lowest-level features have a small receptive field and high resolution, contain a large amount of noise, and require a large amount of computation; therefore only the features of the last four layers, L = {L1, L2, L3, L4}, are used. The network structure is shown in fig. 3: four feature maps of different sizes are obtained through the backbone network, and their channel numbers are unified to 64 by a U module, which limits the parameter count and computation cost. The first-layer output L1 serves as the input of the boundary feature extraction module, which outputs a boundary feature map and a boundary prediction map, while the other three layers are processed by the mutual optimization module to obtain optimized features. Here OOM denotes the mutual optimization module, CBAM the Convolutional Block Attention Module, and BFEM the boundary feature extraction module.
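The channel-unification step can be illustrated with 1×1 convolutions. The Res2Net-50 stage widths (256/512/1024/2048) and the Conv-BN-ReLU composition of the "U module" are assumptions, since the patent does not specify its internals:

```python
import torch
import torch.nn as nn

# Hypothetical "U module": unify each of the four backbone stage
# outputs to 64 channels with a 1x1 convolution.
unify = nn.ModuleList(
    nn.Sequential(nn.Conv2d(c, 64, kernel_size=1, bias=False),
                  nn.BatchNorm2d(64),
                  nn.ReLU(inplace=True))
    for c in (256, 512, 1024, 2048))  # typical Res2Net-50 stage widths
```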
The feature map of each layer is then scaled or expanded to a designated multiple and fused with the other features to obtain low-level detail features and high-level semantic features (O1, O2, O3 and O4); the low-level features carry detail such as boundaries, shape and color, while the high-level features carry polyp semantics. Next, the boundary feature map output by the boundary feature extraction module is scaled to the designated multiple and fused with the mutually optimized features to obtain feature maps F1, F2, F3 and F4 containing rich feature information. The feature maps are then further refined with the CBAM attention mechanism, so that the network focuses more on the polyp region and suppresses noise in the background and in the feature maps. Each feature map is then fused with the boundary prediction map output by the boundary feature extraction module, so that the boundary prediction can guide and supervise the feature-map prediction. The fused features are further extracted using a convolution layer, a batch normalization (BN) layer and a ReLU activation function. Finally, the network produces four outputs: a primary output Output1 and three auxiliary outputs Output2, Output3 and Output4.
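One plausible reading of the scale-and-fuse step above is to resize each auxiliary feature map to a common spatial size and accumulate it onto the target; bilinear resizing and additive fusion are assumptions, since the patent does not fix the fusion operator:

```python
import torch
import torch.nn.functional as F

def fuse_to(features, target):
    """Resize every auxiliary feature map to `target`'s spatial size
    and fuse it onto the target by element-wise addition."""
    h, w = target.shape[2:]
    out = target
    for f in features:
        out = out + F.interpolate(f, size=(h, w), mode='bilinear',
                                  align_corners=False)
    return out
```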
The auxiliary outputs Output2, Output3 and Output4 capture information at different scales. Objects in the image may exhibit different features at different scales, so introducing multiple output scales in the network allows this information to be captured better and improves the model's ability to delineate object boundaries and details. It also reduces the risk of overfitting, accelerates convergence, and improves the robustness of the network.
It should be noted that, the loss function of the polyp segmentation network based on boundary constraint according to the embodiment of the present invention includes a weighted binary cross entropy loss function, a dice loss function, and a consistency enhancement loss function.
The loss function supervises the training process of the network: the network continuously adjusts its parameters according to the loss value to obtain better performance and learning ability. A weighted binary cross entropy loss (BCE), a Dice loss and a consistency enhancement loss (CEL) are used. The Dice loss amounts to global supervision, whereas the BCE loss supervises pixel by pixel, so the two losses are complementary; the BCE loss also serves as a guide for the Dice loss. CEL allows the model to highlight the foreground object region as smoothly as possible and handles sample imbalance. Deep supervision is performed using the outputs O1, O2, O3, O4 and the boundary prediction output, each prediction map being up-sampled to the same size as the Ground Truth (GT).
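The BCE-plus-Dice pairing can be illustrated with the weighted formulation widely used in polyp segmentation. The boundary-emphasizing weight below and the omission of the consistency enhancement term are simplifying assumptions, not the patent's exact formulation:

```python
import torch
import torch.nn.functional as F

def structure_loss(pred, mask):
    """Weighted BCE + weighted Dice-style loss.

    `pred` is the raw logit map, `mask` the binary ground truth.
    Pixels near boundaries get larger weight (the factor 5 and the
    31x31 pooling window are assumptions for illustration).
    """
    weight = 1 + 5 * torch.abs(
        F.avg_pool2d(mask, 31, stride=1, padding=15) - mask)
    # pixel-wise supervision: weighted binary cross entropy
    bce = F.binary_cross_entropy_with_logits(pred, mask, reduction='none')
    wbce = (weight * bce).sum((2, 3)) / weight.sum((2, 3))
    # global supervision: weighted Dice-style overlap term
    prob = torch.sigmoid(pred)
    inter = (prob * mask * weight).sum((2, 3))
    union = ((prob + mask) * weight).sum((2, 3))
    wdice = 1 - (2 * inter + 1) / (union + 1)
    return (wbce + wdice).mean()
```

Under deep supervision, this loss would be evaluated on each up-sampled output and the results summed.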
Finally, as shown in fig. 5, the boundary-constraint-based polyp segmentation network is constructed, better weights are obtained through training, and the trained weights are used for prediction. First, the image to be detected undergoes the same preprocessing as in training; it is then fed into the boundary-constraint-based polyp segmentation network for detection. The boundary of the resulting prediction is clear and accurate, which solves the problem that traditional polyp segmentation methods segment polyp boundaries and surrounding tissue regions inaccurately. Multi-scale boundary features are extracted with decomposed convolutions, applied separately along the horizontal and vertical axes and then fused, which improves the capture of boundary detail and alleviates insufficient feature extraction, while mutually optimized feature fusion enhances the expression of detail and semantics.
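The decomposed-convolution idea — parallel 1×k horizontal and k×1 vertical branches whose outputs are fused — can be sketched as follows; k = 3 and additive fusion are assumptions:

```python
import torch
import torch.nn as nn

class AxisDecomposedBlock(nn.Module):
    """One stage of a boundary feature extractor: a horizontal (1 x k)
    branch and a vertical (k x 1) branch, fused by addition."""
    def __init__(self, ch=64, k=3):
        super().__init__()
        self.horizontal = nn.Conv2d(ch, ch, (1, k), padding=(0, k // 2))
        self.vertical = nn.Conv2d(ch, ch, (k, 1), padding=(k // 2, 0))

    def forward(self, x):
        # extract along each axis separately, then fuse the two branches
        return self.horizontal(x) + self.vertical(x)

# Three cascaded stages, mirroring the three fusion steps of claim 6.
bfem = nn.Sequential(AxisDecomposedBlock(),
                     AxisDecomposedBlock(),
                     AxisDecomposedBlock())
```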
Referring to fig. 2, a boundary constraint-based polyp segmentation system comprises:
the preprocessing module is used for acquiring a polyp image and preprocessing data to obtain a preprocessed polyp image;
the construction module is used for introducing the boundary feature extraction module and constructing a polyp segmentation network based on boundary constraint;
the segmentation module is used for carrying out recognition segmentation processing on the preprocessed polyp image through a polyp segmentation network based on boundary constraint to obtain a segmented polyp image, wherein the segmented polyp image comprises a plurality of polyp segmentation images.
The content of the method embodiment applies equally to the system embodiment; the functions implemented by the system embodiment are the same as those of the method embodiment, and the beneficial effects achieved are likewise the same as those of the method embodiment.
While the preferred embodiment of the present invention has been described in detail, the invention is not limited to the embodiment, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the invention, and these modifications and substitutions are intended to be included in the scope of the present invention as defined in the appended claims.

Claims (10)

1. A boundary constraint-based polyp segmentation method, comprising the steps of:
acquiring a polyp image and performing data preprocessing to obtain a preprocessed polyp image;
introducing a boundary feature extraction module to construct a polyp segmentation network based on boundary constraint;
and performing recognition and segmentation processing on the preprocessed polyp image through the polyp segmentation network based on boundary constraint to obtain a segmented polyp image, wherein the segmented polyp image comprises a plurality of polyp segmentation images.
2. The method for segmenting polyps based on boundary constraint according to claim 1, wherein the step of acquiring polyp images and performing data preprocessing to obtain preprocessed polyp images specifically comprises:
acquiring polyp images;
performing anti-reflection processing on the polyp image to obtain an anti-reflection polyp image;
and carrying out data enhancement processing on the anti-reflection polyp image to obtain a preprocessed polyp image.
3. The polyp segmentation method based on boundary constraint according to claim 1, wherein the polyp segmentation network based on boundary constraint comprises a plurality of Res2Net-50 backbone network modules, a plurality of mutual optimization modules, a plurality of convolution attention modules, a boundary feature extraction module and a plurality of output modules, wherein a first output end of each Res2Net-50 backbone network module is connected with an input end of each of the plurality of mutual optimization modules, a second output end of each Res2Net-50 backbone network module is connected with an input end of the boundary feature extraction module, an output end of each of the plurality of mutual optimization modules is connected with an input end of each of the plurality of convolution attention modules, and an output end of each of the plurality of convolution attention modules is connected with an input end of each of the plurality of output modules.
4. A boundary constraint-based polyp segmentation method according to claim 3, wherein the step of performing recognition segmentation processing on the preprocessed polyp image through the boundary constraint-based polyp segmentation network to obtain a segmented polyp image specifically comprises:
inputting the preprocessed polyp image to the boundary constraint-based polyp segmentation network;
based on the Res2Net-50 backbone network module, performing feature mapping processing on the preprocessed polyp image to obtain a polyp feature image;
based on the mutual optimization module, performing feature optimization processing on the polyp feature image to obtain a polyp feature image after mutual optimization;
based on the boundary feature extraction module, carrying out edge feature extraction processing on the polyp feature image to obtain a boundary polyp feature image and a boundary polyp prediction image;
performing feature fusion processing on the boundary polyp feature image and the polyp feature image after mutual optimization to obtain a polyp feature image with boundary feature information;
based on the convolution attention module, performing spatial attention feature extraction processing on the polyp feature image with boundary feature information to obtain the polyp attention feature image with boundary feature information;
fusing the polyp attention characteristic image with the boundary characteristic information with the boundary polyp prediction image to obtain a fused boundary polyp attention characteristic image;
and based on the output module, performing feature extraction processing on the fused boundary polyp attention feature image to obtain a segmented polyp image.
5. The polyp segmentation method based on boundary constraint according to claim 4, wherein the step of performing feature optimization processing on the polyp feature image based on the mutual optimization module to obtain a polyp feature image after mutual optimization specifically comprises:
inputting the polyp feature image to the mutual optimization module, wherein the mutual optimization module comprises a first convolution layer, a first batch normalization layer, a first ReLU activation function, an upsampling layer and a downsampling layer;
based on the first convolution layer, carrying out convolution processing on the polyp characteristic image to obtain a convolved polyp characteristic image;
based on the first batch normalization layer, batch normalization processing is carried out on the convolved polyp characteristic images, and batch normalized polyp characteristic images are obtained;
based on the first ReLU activation function, performing nonlinear mapping processing on the batch of normalized polyp feature images to obtain activated polyp feature images;
based on the upsampling layer, upsampling the activated polyp feature image to obtain an upsampled polyp feature image;
and based on the downsampling layer, downsampling the up-sampled polyp feature image to obtain a polyp feature image after mutual optimization.
6. The method for segmenting polyps based on boundary constraint according to claim 4, wherein the step of extracting edge features from the polyp feature image to obtain a boundary polyp feature image and a boundary polyp prediction image based on the boundary feature extraction module specifically comprises:
inputting the polyp characteristic image to the boundary characteristic extraction module, wherein the boundary characteristic extraction module comprises a first branch convolution layer with a first structure, a second branch convolution layer with a first structure, a first branch convolution layer with a second structure, a second branch convolution layer with a second structure, a first branch convolution layer with a third structure and a second branch convolution layer with a third structure;
based on the first branch convolution layer of the first structure, carrying out convolution processing on the polyp characteristic image to obtain a first transverse axis polyp characteristic image;
based on the second branch convolution layer of the first structure, carrying out convolution processing on the polyp characteristic image to obtain a first longitudinal axis polyp characteristic image;
performing fusion processing on the first transverse axis polyp characteristic image and the first longitudinal axis polyp characteristic image to obtain a first fusion polyp characteristic image;
based on the first branch convolution layer of the second structure, carrying out convolution processing on the first fused polyp feature image to obtain a second transverse axis polyp feature image;
based on the second branch convolution layer of the second structure, carrying out convolution processing on the first fused polyp feature image to obtain a second longitudinal axis polyp feature image;
fusing the second transverse axis polyp feature image and the second longitudinal axis polyp feature image to obtain a second fused polyp feature image;
based on the first branch convolution layer of the third structure, carrying out convolution processing on the second fusion polyp characteristic image to obtain a third transverse axis polyp characteristic image;
based on the second branch convolution layer of the third structure, carrying out convolution processing on the second fused polyp characteristic image to obtain a third longitudinal axis polyp characteristic image;
and carrying out fusion processing on the third transverse axis polyp characteristic image and the third longitudinal axis polyp characteristic image to obtain a third fusion polyp characteristic image, wherein the third fusion polyp characteristic image comprises a boundary polyp characteristic image and a boundary polyp prediction image.
7. The method for polyp segmentation based on boundary constraint according to claim 4, wherein the step of performing spatial attention feature extraction processing on the polyp feature image with boundary feature information based on the convolution attention module to obtain the polyp attention feature image with boundary feature information specifically comprises:
inputting the polyp feature image with boundary feature information to the convolution attention module, wherein the convolution attention module comprises a channel attention sub-module and a space attention sub-module;
based on the channel attention sub-module, carrying out channel attention extraction processing on the polyp characteristic image with boundary characteristic information to obtain a polyp channel attention characteristic image with boundary characteristic information;
performing element-by-element multiplication processing on the polyp channel attention characteristic image with boundary characteristic information and the polyp characteristic image with boundary characteristic information to obtain a preliminary polyp attention characteristic image with boundary characteristic information;
based on the spatial attention sub-module, performing spatial attention extraction processing on the preliminary polyp attention characteristic image with boundary characteristic information to obtain a polyp spatial attention characteristic image with boundary characteristic information;
and carrying out element-by-element multiplication processing on the preliminary polyp attention characteristic image with boundary characteristic information and the polyp space attention characteristic image with boundary characteristic information to obtain the polyp attention characteristic image with boundary characteristic information.
8. The polyp segmentation method according to claim 4, wherein the step of performing feature extraction processing on the fused boundary polyp attention feature image based on the output module to obtain a segmented polyp image specifically comprises:
inputting the fused boundary polyp attention feature image to the output module, wherein the output module comprises a second convolution layer, a second batch normalization layer and a second ReLU activation function;
based on the second convolution layer, carrying out convolution processing on the fused boundary polyp attention characteristic image to obtain a convolved boundary polyp attention characteristic image;
based on the second batch normalization layer, carrying out batch normalization processing on the convolved boundary polyp attention feature images to obtain batch normalized boundary polyp attention feature images;
and performing feature mapping processing on the boundary polyp attention feature images after batch normalization based on the second ReLU activation function to obtain segmented polyp images.
9. The boundary constraint based polyp segmentation method according to claim 1, wherein the loss functions of the boundary constraint based polyp segmentation network comprise weighted binary cross entropy loss functions, dice loss functions and consistency enhancement loss functions.
10. A boundary constraint-based polyp segmentation system, comprising the following modules:
the preprocessing module is used for acquiring a polyp image and preprocessing data to obtain a preprocessed polyp image;
the construction module is used for introducing the boundary feature extraction module and constructing a polyp segmentation network based on boundary constraint;
the segmentation module is used for carrying out recognition and segmentation processing on the preprocessed polyp image through the polyp segmentation network based on boundary constraint to obtain a segmented polyp image, wherein the segmented polyp image comprises a plurality of polyp segmentation images.
CN202311662571.7A 2023-12-05 2023-12-05 Boundary constraint-based polyp segmentation method and system Pending CN117830226A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311662571.7A CN117830226A (en) 2023-12-05 2023-12-05 Boundary constraint-based polyp segmentation method and system


Publications (1)

Publication Number Publication Date
CN117830226A true CN117830226A (en) 2024-04-05

Family

ID=90512561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311662571.7A Pending CN117830226A (en) 2023-12-05 2023-12-05 Boundary constraint-based polyp segmentation method and system

Country Status (1)

Country Link
CN (1) CN117830226A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102332088B1 (en) * 2021-01-13 2021-12-01 가천대학교 산학협력단 Apparatus and method for polyp segmentation in colonoscopy images through polyp boundary aware using detailed upsampling encoder-decoder networks
CN114612662A (en) * 2022-03-10 2022-06-10 扬州大学 Polyp image segmentation method based on boundary guidance
CN114926423A (en) * 2022-05-12 2022-08-19 深圳大学 Polyp image segmentation method, device, apparatus and medium based on attention and boundary constraint
WO2023001190A1 (en) * 2021-07-23 2023-01-26 天津御锦人工智能医疗科技有限公司 Colorectal polyp image recognition method, apparatus, and storage medium
CN116503431A (en) * 2023-05-06 2023-07-28 重庆邮电大学 Codec medical image segmentation system and method based on boundary guiding attention
CN116563536A (en) * 2023-04-14 2023-08-08 三峡大学 Polyp image segmentation system for uncertainty enhanced contextual attention network
CN116958535A (en) * 2023-04-14 2023-10-27 三峡大学 Polyp segmentation system and method based on multi-scale residual error reasoning



Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: Room 909A, Building 705, Dayuan West District, No. 135 Xingang West Road, Haizhu District, Guangzhou City, Guangdong Province, China 510000

Applicant after: Guangzhou Zhongyiyong Intelligent Technology Co.,Ltd.

Address before: Room 212, 2nd Floor, No. 5 Lujiang West Street, Haizhu District, Guangzhou City, Guangdong Province, 510000

Applicant before: Guangzhou Hengshayun Technology Co.,Ltd.

Country or region before: China