CN112651981B - Intestinal disease segmentation method using a salient edge feature extraction module to guide the network - Google Patents

Info

Publication number
CN112651981B
CN112651981B (application CN202011537413.5A)
Authority
CN
China
Prior art keywords
information
edge
module
extraction module
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011537413.5A
Other languages
Chinese (zh)
Other versions
CN112651981A (en)
Inventor
李胜
夏瑞瑞
何熊熊
程珊
郝明杰
王栋超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202011537413.5A
Publication of CN112651981A
Application granted
Publication of CN112651981B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10068Endoscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30028Colon; Small intestine
    • G06T2207/30032Colon polyp

Abstract

A method for segmenting intestinal diseases with a network guided by a salient edge feature extraction module. A data set is first input and features are extracted by the Res2Net backbone; GCN modules with different kernel sizes then extract multi-scale context information, and a boundary refinement module further improves the segmentation of object boundaries. Edge information is generated from the second-layer side output features, and the fused high-level semantic information of the three top layers is used to suppress noise in that edge information; the edge map generated from the mask applies salient edge supervision to the generated salient edge features, forming the final salient edge feature extraction module. Finally, the edge information generated by the salient edge feature extraction module and the side output information of each layer are fed together into the RAS module; the model is trained to obtain its parameters, and testing with the trained parameters yields the final result. The invention addresses inaccurate polyp segmentation and localization and blurred boundaries in the gastrointestinal tract.

Description

Intestinal disease segmentation method using a salient edge feature extraction module to guide the network
Technical Field
The invention relates to the technical field of image processing in artificial intelligence, and in particular to an intestinal disease segmentation method in which a salient edge feature extraction module guides the network.
Background
Colorectal cancer is the third most common cause of cancer-related death worldwide and usually develops from the abnormal growth of polyps in the colon. Colonoscopy is the primary method of screening for and preventing polyp canceration; however, it relies on highly skilled endoscopists with a high degree of eye-hand coordination, and studies have shown polyp miss rates of 22%-28% in patients undergoing colonoscopy. Segmenting polyps from normal mucosa can help endoscopists reduce misclassification and subjectivity, and many different methods have been proposed for accurate polyp segmentation. Existing polyp segmentation work falls roughly into three categories: the first is based on traditional image processing, such as threshold-based and region-based segmentation; the second is based on traditional classifiers such as support vector machines, where features are extracted first and a classifier then performs the segmentation; the third performs segmentation with convolutional neural networks (CNNs). With the rapid development of deep learning, convolutional neural networks have broken through the limitations of traditional hand-crafted features. CNN-based methods have greatly advanced almost all widely used benchmarks and, owing to their efficiency and performance, are gradually replacing traditional image segmentation methods; however, most existing FCN-based methods still suffer from rough object boundaries and inaccurate target localization. For example, Fan et al. proposed the polyp segmentation network PraNet at the MICCAI conference, which employs two salient object detection modules, a cascaded partial decoder (CPD) and a reverse attention (RAS) module, to segment polyps. That method discards low-level information and generates a rough saliency map by fusing three layers of semantic features; the RAS module then refines the rough saliency map iteratively, from coarse to fine, to obtain the final segmentation. However, relying only on high-level semantic features does not segment the salient object accurately enough: low-level features contain rich edge information that can localize the object more accurately, and many networks ignore the complementary relation between low-level information and high-level semantic information. To address incorrect segmentation caused by inaccurate target localization, the invention designs a method in which a salient edge feature extraction module guides the network's segmentation.
Disclosure of Invention
To overcome the shortcomings of rough boundaries and inaccurate target localization in segmentation, the invention provides a method for segmenting intestinal diseases with a network guided by a salient edge feature extraction module, studying the complementary relation between edge information and the salient target. The localization accuracy of foreground object boundaries in image segmentation is improved by means of a Global Convolutional Network (GCN) and a Boundary Refinement block (BR). The method solves the erroneous segmentation of images containing light spots caused by the inaccurate target localization and blurred boundaries of the PraNet network, while also improving segmentation precision.
In order to solve the technical problems, the invention adopts the following technical scheme:
A method for segmenting intestinal diseases with a network guided by a salient edge feature extraction module comprises the following steps:
Step 1: input a data set X = {x_1, x_2, ..., x_n}, where x_i denotes an input sample, x_i ∈ R^(352×352), and n is the number of samples; the backbone network Res2Net extracts features, giving the five output features Conv1, Conv2, Conv3, Conv4 and Conv5;
Step 2: because the sizes and shapes of polyps in the colorectal polyp data set vary greatly, GCN modules with different kernel sizes are adopted to extract multi-scale context information, and a boundary refinement module (BR module) is then used to further improve the segmentation of object boundaries; these modules extract information from the features produced by the backbone network, giving five side output features, written simply as:
C = {C^(1), C^(2), C^(3), C^(4), C^(5)}   (1)
Step 3: generation of the salient edge feature extraction module comprises the following steps:
3.1. Low-level information contains rich edge information and can therefore provide accurate localization for target segmentation. C^(1) is too close to the input image and its receptive field is too small, so it is not suitable for generating edge features, while the high-level features C^(3), C^(4), C^(5) contain rich semantic information; C^(2) is therefore finally selected to generate the edge information. To obtain more robust salient target features, C^(2) is passed through three convolution layers to enhance the features, with a ReLU layer added after each convolution layer to ensure nonlinearity;
3.2. Low-level information contains rich edge information but also much noise. To suppress the noise in the low-level information, the three high-level features C^(3), C^(4), C^(5) are fused with the cascaded partial decoder (CPD) module to obtain the fused feature C^(6), which contains the rich semantic information of the three layers. The fused feature C^(6) is propagated to the side feature C^(2) in a top-down manner, and the fused top-level semantic information is used to suppress the non-salient edge features and the noise in the edge information. The fusion feature F^(2) is expressed as:
F^(2) = C̄^(2) + Up(φ(Trans(C̄^(6); θ)); C^(2))   (2)
where C̄^(6) denotes C^(6) enhanced by three convolution layers, C̄^(2) denotes the feature information obtained from C^(2) in step 3.1, Trans(*; θ) denotes a convolution layer with parameter θ used to change the number of channels, φ(*) denotes the ReLU activation function, Up(*; C^(2)) denotes upsampling by bilinear interpolation to the same size as C^(2), and F^(2) denotes the edge guidance feature;
Step 4: enhancing edge guiding features by the F (2) through three convolution layers to obtain final edge information F E, and performing significant edge supervision on the generated significant edge features by using an edge map obtained by using a mask map to obtain a significant edge feature extraction module;
Step 5: the obtained salient edge information and the feature information of each layer are used as inputs to the RAS module of the PraNet network, and the mask is used to apply deep supervision to each layer's RAS output to obtain the final model; intestinal polyp images are used as input to train the designed model and obtain its parameters, and with the trained parameters, test-set images are input to predict the final segmentation result.
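By way of non-limiting illustration, the data flow of steps 1 to 5 can be summarized in the following PyTorch sketch. It is a minimal outline under stated assumptions: the patent specifies only "Res2net" as the backbone, so the concrete variant ('res2net50_26w_4s' from the timm library) is an assumption, and the downstream modules are summarized in comments and sketched in the detailed description below.

    import timm
    import torch

    # Step 1: Res2Net backbone producing five side features Conv1..Conv5.
    # 'res2net50_26w_4s' is timm's Res2Net variant; the exact variant used
    # by the patent is not stated and is assumed here.
    backbone = timm.create_model('res2net50_26w_4s', pretrained=False,
                                 features_only=True)
    x = torch.randn(1, 3, 352, 352)                  # a 352x352 input sample
    conv1, conv2, conv3, conv4, conv5 = backbone(x)  # five output features

    # Step 2: GCN (k = 7, 11, 15) + BR on each side feature -> C(1)..C(5).
    # Step 3: CPD fuses C(3)..C(5) into C(6); equation (2) fuses C(6) into
    #         the enhanced C(2) to give the edge guidance feature F(2).
    # Step 4: three convolutions on F(2) -> edge information F_E, supervised
    #         by the edge map derived from the ground-truth mask.
    # Step 5: F_E and C(1)..C(5) feed PraNet's RAS decoder under deep
    #         supervision; sketches of GCN, BR and the edge fusion follow.
    print([f.shape for f in (conv1, conv2, conv3, conv4, conv5)])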
The beneficial effects of the invention are as follows:
(1) The method of the invention uses low-level network features to generate edge information, studies the complementary relation between edge information and high-level semantic information, and uses the fused high-level semantic information to suppress noise in the edge information, obtaining more accurate and clearer edge information and forming the salient edge feature extraction module.
(2) The salient edge features generated by the salient edge feature extraction module help the salient object features localize the target more accurately.
(3) Because the sizes and shapes of polyps in the colorectal polyp data set vary greatly, the invention adopts GCN modules with different kernel sizes to extract multi-scale context information, and further uses a boundary refinement module (BR module) to improve the segmentation of object boundaries.
Drawings
Fig. 1 is a flow chart of the invention.
Fig. 2 shows the GCN module.
Fig. 3 shows the GCN configuration employed in the invention.
Fig. 4 shows the BR module.
Fig. 5 shows the edge results of the experiment.
Fig. 6 shows the final segmentation results of the invention.
Detailed Description
For the purpose of illustrating the objects, aspects and advantages of the present invention, the present invention will be described in further detail with reference to the following detailed description and the accompanying drawings.
Referring to figs. 1 to 6, a method for segmenting intestinal diseases with a network guided by a salient edge feature extraction module includes the following steps:
Step 1: input a data set X = {x_1, x_2, ..., x_n}, where x_i denotes an input sample, x_i ∈ R^(352×352), and n is the number of samples; the backbone network Res2Net extracts features, giving the five output features Conv1, Conv2, Conv3, Conv4 and Conv5;
Step 2: because the sizes and shapes of polyps in the colorectal polyp data set vary greatly, GCN modules with different kernel sizes are adopted to extract multi-scale context information, and a boundary refinement module (BR module) is then used to further improve the segmentation of object boundaries; these modules extract information from the features produced by the backbone network, giving five side output features, written simply as:
C = {C^(1), C^(2), C^(3), C^(4), C^(5)}   (1)
Step 3: generation of the salient edge feature extraction module comprises the following steps:
3.1. Low-level information contains rich edge information and can therefore provide accurate localization for target segmentation. C^(1) is too close to the input image and its receptive field is too small, so it is not suitable for generating edge features, while the high-level features C^(3), C^(4), C^(5) contain rich semantic information; C^(2) is therefore finally selected to generate the edge information. To obtain more robust salient target features, C^(2) is passed through three convolution layers to enhance the features, with a ReLU layer added after each convolution layer to ensure nonlinearity;
3.2. Low-level information contains rich edge information but also much noise. To suppress the noise in the low-level information, the three high-level features C^(3), C^(4), C^(5) are fused with the CPD module to obtain the fused feature C^(6), which contains the rich semantic information of the three layers. The fused feature C^(6) is propagated to the side feature C^(2) in a top-down manner, and the fused top-level semantic information is used to suppress the non-salient edge features and the noise in the edge information. The fusion feature F^(2) is expressed as:
F^(2) = C̄^(2) + Up(φ(Trans(C̄^(6); θ)); C^(2))   (2)
where C̄^(6) denotes C^(6) enhanced by three convolution layers, C̄^(2) denotes the feature information obtained from C^(2) in step 3.1, Trans(*; θ) denotes a convolution layer with parameter θ used to change the number of channels, φ(*) denotes the ReLU activation function, Up(*; C^(2)) denotes upsampling by bilinear interpolation to the same size as C^(2), and F^(2) denotes the edge guidance feature;
Step 4: enhancing edge guiding features by the F (2) through three convolution layers to obtain final edge information F E, and performing significant edge supervision on the generated significant edge features by using an edge map obtained by using a mask map to obtain a significant edge feature extraction module;
Step 5: the obtained salient edge information and the feature information of each layer are used as inputs to the RAS module of the PraNet network, and the mask is used to apply deep supervision to each layer's RAS output to obtain the final model; intestinal polyp images are used as input to train the designed model and obtain its parameters, and with the trained parameters, test-set images are input to predict the final segmentation result.
The following is a further supplementary explanation of the above content.
In connection with fig. 2, GCN is short for Global Convolutional Network; the GCN module is applied to the multi-layer feature maps to obtain multi-scale context information. To avoid sparse connections and achieve dense connections within a k×k region of the feature map, the GCN uses a combination of k×1 + 1×k and 1×k + k×1 convolutions, effectively realizing a k×k convolution with fewer parameters than an ordinary k×k convolution. In connection with fig. 3, the invention uses GCNs with kernel sizes k = 7, 11, 15, so that multi-scale context information can be learned at multiple levels of abstraction.
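By way of illustration, the GCN block described above admits the following minimal PyTorch sketch; the channel arguments (e.g. 256 in, 32 out) are placeholder assumptions, since the patent does not state the channel configuration.

    import torch
    import torch.nn as nn

    class GCN(nn.Module):
        # Global Convolutional Network block: approximates a dense k x k
        # convolution with two separable branches, (k x 1 then 1 x k) and
        # (1 x k then k x 1), whose outputs are summed.
        def __init__(self, in_ch, out_ch, k=7):
            super().__init__()
            pad = k // 2
            self.branch_a = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, (k, 1), padding=(pad, 0)),
                nn.Conv2d(out_ch, out_ch, (1, k), padding=(0, pad)))
            self.branch_b = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, (1, k), padding=(0, pad)),
                nn.Conv2d(out_ch, out_ch, (k, 1), padding=(pad, 0)))

        def forward(self, x):
            return self.branch_a(x) + self.branch_b(x)

    # One GCN per kernel size, as employed in the invention.
    gcn_k7, gcn_k11, gcn_k15 = GCN(256, 32, 7), GCN(256, 32, 11), GCN(256, 32, 15)

The two-branch factorization keeps the large receptive field of a k×k kernel while using O(k) rather than O(k²) parameters per output channel.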
With reference to fig. 4, the BR module (Boundary Refinement block) is applied after the GCN module. It improves the network's ability to localize boundaries: it is a residual structure, used mainly to refine edges, and it can improve the precision of boundary segmentation.
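A corresponding sketch of the residual BR block follows; the channel count is again a placeholder assumption.

    import torch.nn as nn

    class BR(nn.Module):
        # Boundary Refinement block: identity plus a small conv-ReLU-conv
        # residual branch that refines object boundaries after each GCN.
        def __init__(self, ch):
            super().__init__()
            self.residual = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1))

        def forward(self, x):
            return x + self.residual(x)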
The salient edge feature extraction module generates salient edge features mainly by fusing the local edge information of the second layer with global position information, constructing an explicit edge information module that guides the whole network to localize the salient target more accurately. The rich edge and position information in the salient edges is used to localize salient objects, and especially their edges, more accurately. As can be seen in fig. 5, the edge map of the invention accurately outputs the edges of the salient object, i.e. the polyp, whereas the PraNet edge map detects other, non-salient object edges in addition to the salient object's edges. Map_2 is the output of the RAS module at the second layer, and fig. 5 shows that adding the edge map of the invention improves the edges of the polyp segmentation. In conjunction with fig. 6, it can be seen that the method of the invention greatly improves the polyp segmentation results.
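For concreteness, the edge-guidance fusion of equation (2) and the edge head of step 4 can be sketched as follows; the channel sizes and the 1×1 convolution used as Trans(*; θ) are assumptions consistent with the textual definitions, not values given in the patent.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def three_convs(ch):
        # Three 3x3 convolution layers, each followed by ReLU (steps 3.1/3.2).
        layers = []
        for _ in range(3):
            layers += [nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True)]
        return nn.Sequential(*layers)

    class SalientEdgeModule(nn.Module):
        def __init__(self, c2_ch=32, c6_ch=32):
            super().__init__()
            self.enhance_c2 = three_convs(c2_ch)      # C(2) -> C_bar(2)
            self.enhance_c6 = three_convs(c6_ch)      # C(6) -> C_bar(6)
            self.trans = nn.Conv2d(c6_ch, c2_ch, 1)   # Trans(*; theta)
            self.edge_head = nn.Sequential(           # step 4: F(2) -> F_E
                three_convs(c2_ch), nn.Conv2d(c2_ch, 1, 1))

        def forward(self, c2, c6):
            c2_bar = self.enhance_c2(c2)
            top = F.relu(self.trans(self.enhance_c6(c6)))     # phi(Trans(.))
            top = F.interpolate(top, size=c2.shape[2:],
                                mode='bilinear', align_corners=False)  # Up(.)
            f2 = c2_bar + top                                 # equation (2)
            edge = self.edge_head(f2)  # F_E, supervised by the GT edge map
            return f2, edge

In training, the edge output would be supervised with, for example, binary cross-entropy against the edge map derived from the ground-truth mask.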
1) Simulation conditions
The experiment uses a workstation configured with two Intel(R) Xeon(R) Gold 6161 CPUs @ 2.20 GHz, 64 GB of memory, a Windows operating system and two Nvidia RTX 3080 Ti graphics cards. The model is implemented on the PyTorch deep learning framework, with PyTorch version 1.8.0 and Python version 3.7. Input pictures are uniformly resized to 352×352, and a multi-scale training strategy is adopted. The Adam algorithm is used to optimize all parameters, with the learning rate set to 1e-4. The polyp dataset is CVC-ClinicDB, and the proposed method is compared with four medical image segmentation methods: UNet, BASNet, U-2Net and PraNet.
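A minimal sketch of this training setup follows; the multi-scale set {0.75, 1.0, 1.25} and the toy stand-in model and data are illustrative assumptions (the patent states only that a multi-scale strategy is adopted).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Conv2d(3, 1, 3, padding=1)            # stand-in for the network
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, lr 1e-4

    images = torch.rand(2, 3, 352, 352)              # toy 352x352 batch
    masks = (torch.rand(2, 1, 352, 352) > 0.5).float()

    for scale in (0.75, 1.0, 1.25):                  # multi-scale training
        size = int(352 * scale)
        x = F.interpolate(images, size=(size, size),
                          mode='bilinear', align_corners=False)
        y = F.interpolate(masks, size=(size, size), mode='nearest')
        loss = F.binary_cross_entropy_with_logits(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()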
2) Simulation results
The inventive method was compared with the four medical image segmentation methods UNet, BASNet, U-2Net and PraNet on the CVC-ClinicDB polyp dataset. Four indices commonly used in medical segmentation are evaluated: the weighted Dice metric F_β^w, MAE, the enhanced-alignment measure E_φ^max and the structure measure S_α. F_β^w corrects the defect of "equal importance of all pixels" in the plain Dice score; MAE evaluates pixel-level accuracy; E_φ^max evaluates similarity at both the pixel level and the global level; S_α evaluates the structural similarity between predictions and ground truth, since MAE and F_β^w are both computed pixel-wise and ignore structural similarity.
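Two of these indices are simple enough to state in a few lines; the sketch below shows MAE and, for orientation, the plain (unweighted) Dice coefficient. The weighted F_β^w, E_φ^max and S_α measures are defined in the salient object detection literature and are not reproduced here; the 0.5 binarization threshold is an assumption.

    import numpy as np

    def mae(pred, gt):
        # Mean absolute error between a [0, 1] prediction map and binary GT.
        return float(np.abs(pred - gt).mean())

    def dice(pred, gt, thr=0.5):
        # Plain Dice coefficient; the evaluation above reports the weighted
        # variant F_beta^w, which additionally weights pixel errors.
        p = (pred >= thr).astype(np.float32)
        inter = (p * gt).sum()
        return float(2.0 * inter / (p.sum() + gt.sum() + 1e-8))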
TABLE 1
As can be seen from Table 1, the method of the invention obtains better results than UNet, BASNet, U-2Net and PraNet, greatly improves segmentation performance, can be better applied to polyp segmentation, and has good practical engineering application value.
The embodiments described in this specification are merely illustrative of ways in which the inventive concept may be implemented. The scope of the present invention should not be construed as limited to the specific forms set forth in the embodiments; it also covers equivalents that would occur to those skilled in the art based on the inventive concept.

Claims (1)

1. A method for segmenting intestinal diseases with a network guided by a salient edge feature extraction module, characterized by comprising the following steps:
Step 1: an input data set x= { X 1,x2,...,xn }, wherein X represents samples input in the data set, X n∈R352×352, n represents the number of samples, and a backbone network Res2net extracts characteristics to obtain five output characteristics of Conv1, conv2, conv3, conv4 and Conv 5;
Step 2: because the sizes and shapes of polyps in the colorectal polyp data set vary greatly, GCN modules with different kernel sizes are adopted to extract multi-scale context information, and a boundary refinement module is then used to further improve the segmentation of object boundaries; these modules extract information from the features produced by the backbone network, giving five side output features, written simply as:
C = {C^(1), C^(2), C^(3), C^(4), C^(5)}   (1)
Step 3: generation of the salient edge feature extraction module comprises the following steps:
3.1. Low-level information contains rich edge information and can therefore provide accurate localization for target segmentation. C^(1) is too close to the input image and its receptive field is too small, so it is not suitable for generating edge features, while the high-level features C^(3), C^(4), C^(5) contain rich semantic information; C^(2) is therefore finally selected to generate the edge information. To obtain more robust salient target features, C^(2) is passed through three convolution layers to enhance the features, with a ReLU layer added after each convolution layer to ensure nonlinearity;
3.2. Low-level information contains rich edge information but also much noise. To suppress the noise in the low-level information, the three high-level features C^(3), C^(4), C^(5) are fused with the cascaded partial decoder (CPD) module to obtain the fused feature C^(6), which contains the rich semantic information of the three layers. The fused feature C^(6) is propagated to the side feature C^(2) in a top-down manner, and the fused top-level semantic information is used to suppress the non-salient edge features and the noise in the edge information. The fusion feature F^(2) is expressed as:
F^(2) = C̄^(2) + Up(φ(Trans(C̄^(6); θ)); C^(2))   (2)
where C̄^(6) denotes C^(6) enhanced by three convolution layers, C̄^(2) denotes the feature information obtained from C^(2) in step 3.1, Trans(*; θ) denotes a convolution layer with parameter θ used to change the number of channels, φ(*) denotes the ReLU activation function, and Up(*; C^(2)) denotes upsampling by bilinear interpolation to the same size as C^(2);
Step 4: enhancing edge guiding features by the F (2) through three convolution layers to obtain final edge information F E, and performing significant edge supervision on the generated significant edge features by using an edge map obtained by using a mask map to obtain a significant edge feature extraction module;
Step 5: the obtained salient edge information and the feature information of each layer are used as inputs to the RAS module of the PraNet network, and the mask is used to apply deep supervision to each layer's RAS output to obtain the final model; intestinal polyp images are used as input to train the designed model and obtain its parameters, and with the trained parameters, test-set images are input to predict the final segmentation result.
CN202011537413.5A 2020-12-23 2020-12-23 Intestinal disease segmentation method using a salient edge feature extraction module to guide the network Active CN112651981B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011537413.5A CN112651981B (en) Intestinal disease segmentation method using a salient edge feature extraction module to guide the network


Publications (2)

Publication Number Publication Date
CN112651981A (en) 2021-04-13
CN112651981B (en) 2024-04-19

Family

ID=75359511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011537413.5A Active CN112651981B (en) Intestinal disease segmentation method using a salient edge feature extraction module to guide the network

Country Status (1)

Country Link
CN (1) CN112651981B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113192093B (en) * 2021-05-10 2023-04-18 新疆大学 Quick saliency target detection method based on double-flow network structure
CN113222012A (en) * 2021-05-11 2021-08-06 北京知见生命科技有限公司 Automatic quantitative analysis method and system for lung digital pathological image
CN113538313B (en) * 2021-07-22 2022-03-25 深圳大学 Polyp segmentation method and device, computer equipment and storage medium
CN114972155B (en) * 2021-12-30 2023-04-07 昆明理工大学 Polyp image segmentation method based on context information and reverse attention
CN114445426B (en) * 2022-01-28 2022-08-26 深圳大学 Method and device for segmenting polyp region in endoscope image and related assembly
CN115375917B (en) * 2022-10-25 2023-03-24 杭州华橙软件技术有限公司 Target edge feature extraction method, device, terminal and storage medium


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443818A (en) * 2019-07-02 2019-11-12 中国科学院计算技术研究所 A kind of Weakly supervised semantic segmentation method and system based on scribble
CN111462126A (en) * 2020-04-08 2020-07-28 武汉大学 Semantic image segmentation method and system based on edge enhancement
CN111797841A (en) * 2020-05-10 2020-10-20 浙江工业大学 Visual saliency detection method based on depth residual error network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RGB-D image saliency detection based on multi-modal feature fusion supervision; Liu Zhengyi; Duan Quntao; Shi Song; Zhao Peng; Journal of Electronics & Information Technology (Issue 04); full text *

Also Published As

Publication number Publication date
CN112651981A (en) 2021-04-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant