CN114266794A - Pathological section image cancer region segmentation system based on full convolution neural network - Google Patents

Pathological section image cancer region segmentation system based on full convolution neural network

Info

Publication number
CN114266794A
Authority
CN
China
Prior art keywords
pathological section
image
section image
segmentation
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210183367.6A
Other languages
Chinese (zh)
Other versions
CN114266794B (en)
Inventor
唐杰
肖鸿昭
宋弘健
李清华
胡俊承
王丽萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Guilin Medical University
Original Assignee
South China University of Technology SCUT
Guilin Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT, Guilin Medical University filed Critical South China University of Technology SCUT
Priority to CN202210183367.6A priority Critical patent/CN114266794B/en
Publication of CN114266794A publication Critical patent/CN114266794A/en
Application granted granted Critical
Publication of CN114266794B publication Critical patent/CN114266794B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a pathological section image cancer region segmentation system based on a full convolution neural network, which operates as follows: extract a mask of the tissue region in the pathological section image and remove the blank background; combine the tissue region with the cancer region annotated in the pathological section image and crop the full-field pathological section image to obtain training sample data; expand the sample data; construct a Unet segmentation network with Resnet50 as the encoder, replacing the convolution unit of the first-stage encoder with a combined convolution unit to extract information at different scales from the input image, and introducing a feature fusion module into the decoder to make full use of the information output by each decoder stage; train and tune the segmentation network on the data-augmented dataset; and predict the whole pathological section image with a grid processing algorithm to identify its cancer regions. By introducing multi-scale information in the feature extraction stage, the invention improves the segmentation precision of cancer regions.

Description

Pathological section image cancer region segmentation system based on full convolution neural network
Technical Field
The invention relates to the technical field of image recognition, in particular to a pathological section image cancer region segmentation system based on a full convolution neural network.
Background
Before neural networks became popular, researchers had already adopted image processing techniques for computer-aided diagnosis on pathological section images, mainly extracting features from the statistical, textural, and morphological characteristics of the images to judge whether an image contains certain structures and to separate them. Although these traditional image processing methods can assist doctors' diagnosis, they generally suffer from low precision and unstable performance.
Where neural networks are applied to pathological section images, most research stops at judging whether an image contains cancer and does not actually segment the cancerous regions; the dependence on pathologists therefore remains heavy, and the network plays only a minor auxiliary role.
Disclosure of Invention
In order to overcome the defects and shortcomings in the prior art, the invention provides a pathological section image cancer region segmentation system based on a full convolution neural network.
In order to achieve the purpose, the invention adopts the following technical scheme:
a pathological section image cancer region segmentation system based on a full convolution neural network comprises: the system comprises an organization region extraction module, a sample data construction module, a sample data expansion module, a segmentation network construction module, a network training module and a cancer region identification module;
the tissue area extraction module is used for extracting a tissue area in the pathological section image, obtaining a mask of the tissue area and removing a blank background;
the sample data construction module is used for combining the extracted tissue area with a cancer area marked in the pathological section image, cutting the pathological section image of the full visual field and obtaining sample data for training;
the sample data expansion module is used for expanding sample data by adopting image enhancement;
the segmentation network construction module is used for constructing an Unet segmentation network taking Resnet50 as an encoder, replacing a convolution unit of a first-stage encoder of the Resnet50 with a combined convolution unit, wherein the combined convolution unit is provided with convolutions of convolution kernels with different sizes, extracting information of different scales in an input image, introducing a characteristic fusion module into a decoder part, and fully utilizing information output by each stage of decoder;
performing an up-sampling operation on the feature map output by each decoder stage, applying convolutions with different parameters, and concatenating the results along the channel dimension to obtain an overall feature map T;
the overall feature map T passes through a max pooling layer and an average pooling layer respectively to obtain two vectors V1 and V2; the two vectors V1 and V2 are each passed through a multi-layer perceptron with shared parameters and then added to obtain a vector C;
applying a Sigmoid function to the vector C and multiplying it with the overall feature map T to obtain a feature map T' containing channel attention information;
adding the overall feature map T and the feature map T', and obtaining the final segmentation result through an output convolution module;
the network training module is used for training and adjusting the segmentation network on the data set subjected to data enhancement;
the cancer area identification module is used for predicting the whole pathological section image by adopting a trained model and based on a grid processing algorithm to identify a cancer area in the pathological section image.
As a preferred technical solution, the tissue region extraction module is configured to extract a tissue region in a pathological section image, and the specific steps include:
converting the original pathological section image from an RGB color space to an HSV color space;
obtaining a segmentation threshold value of the S component of the converted HSV image by adopting an Otsu algorithm;
and carrying out binarization processing on the image based on the segmentation threshold value to obtain a mask of the tissue region.
As a preferred technical solution, the cutting of the pathology slice image of the full field of view to obtain sample data for training includes:
performing a bitwise AND on the mask of the tissue region and the mask of the originally annotated cancer region to obtain the final annotation mask of the tissue region of the whole pathological section image;
and simultaneously cutting the original pathological section and the final labeling mask to obtain an image block and a label thereof.
As a preferred technical solution, the sample data expansion module is configured to expand sample data by using image enhancement, and includes the specific steps of:
and carrying out image transformation on the pathological section image and the labeled image thereof, wherein the image transformation comprises any one or more of horizontal turning, random angle rotation, color change, noise interference and blurring.
As a preferred technical solution, the combined convolution unit adopts 4 convolutions of different sizes, with convolution kernels of 3 × 3, 5 × 5, 7 × 7 and 1 × 1; the input image passes through each of the 4 convolutions, and the 4 convolution results are concatenated along the channel dimension as the final output of the whole combined convolution unit.
As a preferred technical solution, the output convolution module includes a convolution by 3 × 3 and Sigmoid activation function.
As a preferred technical solution, the multi-layer perceptron includes a 1 × 1 convolution layer, a ReLU activation function, and another 1 × 1 convolution layer, and is used for compressing information and generating the attention result on the channels.
As a preferred technical solution, the cancer region identification module is configured to predict a whole pathological section image by using a trained model and based on a grid processing algorithm, and identify a cancer region therein, and the method specifically includes:
partitioning the whole pathological section image into grids by adopting a grid algorithm, wherein each grid is used as a prediction sample;
adopting a sliding window to take image blocks in the grid each time and input the image blocks into a segmentation network to obtain segmentation results;
and splicing the segmentation result of each sliding window to obtain the segmentation result of the whole pathological section image.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) The method introduces combined convolution into the network structure to fully extract feature information at different scales from the input image; compared with a network that does not use multi-scale information, it achieves a better segmentation effect, improving the segmentation precision of cancer regions.
(2) When generating the segmentation result, the outputs of the 5 decoder levels are extracted separately and fused by the feature fusion module; this processing makes full use of the spatial position information in the low-level feature maps and the high-level semantic information in the high-level feature maps, making the segmentation result more accurate.
Drawings
FIG. 1 is a schematic diagram of a process for implementing a pathological section image cancer region segmentation system based on a full convolution neural network according to the present invention;
FIG. 2 is a schematic diagram of a full convolutional network structure of the present invention;
FIG. 3 is a schematic diagram of the implementation process of the grid processing algorithm of the present invention;
FIG. 4(a) is a schematic diagram of an original image of a whole pathological section for prediction according to the present invention;
FIG. 4(b) is a schematic diagram of an annotation mask of the entire pathological section image for prediction according to the present invention;
fig. 4(c) is a schematic diagram of the segmentation result of the whole pathological section image according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Examples
The embodiment provides a pathological section image cancer region segmentation system based on a full convolution neural network, which comprises: the system comprises an organization region extraction module, a sample data construction module, a sample data expansion module, a segmentation network construction module, a network training module and a cancer region identification module;
in this embodiment, the tissue region extraction module is configured to extract a tissue region in a pathological section image, obtain a mask of the tissue region, and remove a blank background;
in this embodiment, the sample data construction module is configured to combine the extracted tissue region with a cancer region labeled in a pathological section image, and cut a full-view pathological section image to obtain sample data for training;
in this embodiment, the sample data expansion module is configured to expand sample data by using image enhancement;
in this embodiment, the segmentation network construction module is configured to construct an Unet segmentation network using Resnet50 as an encoder, and replace a convolution unit of a first-stage encoder of Resnet50 with a combined convolution unit, where the combined convolution unit is provided with convolutions of convolution kernels of different sizes, extract information of different scales in an input image, and introduce a feature fusion module in a decoder portion, so as to fully utilize information output by each stage of decoder;
performing an up-sampling operation on the feature map output by each decoder stage, applying convolutions with different parameters, and concatenating the results along the channel dimension to obtain an overall feature map T;
the overall feature map T passes through a max pooling layer and an average pooling layer respectively to obtain two vectors V1 and V2; the two vectors V1 and V2 are each passed through a multi-layer perceptron with shared parameters and then added to obtain a vector C;
applying a Sigmoid function to the vector C and multiplying it with the overall feature map T to obtain a feature map T' containing channel attention information;
adding the overall feature map T and the feature map T', and obtaining the final segmentation result through an output convolution module;
in this embodiment, the network training module is configured to train and tune the segmented network on a data set subjected to data enhancement;
in this embodiment, the cancer region identification module is configured to use the trained model and predict the whole pathological section image based on a grid processing algorithm, so as to identify the cancer region therein.
As shown in fig. 1, this embodiment of the pathological section image cancer region segmentation system based on a full convolution neural network uses the public breast cancer dataset of the Camelyon16 challenge hosted on Grand Challenge. The training set contains 111 breast cancer pathological section images with cancer annotations and 159 normal pathological section images; the test set contains 129 pathological section images (only 48 of which carry tumor region annotations). The implementation comprises the following steps:
s1, extracting a tissue area in the pathological section image by using an Otsu algorithm and color space conversion, and removing a blank background; the specific process of this embodiment is as follows:
s11, converting the original pathological section image from an RGB color space to an HSV color space;
s12, obtaining a segmentation threshold value of the S component of the converted HSV image by applying an Otsu algorithm;
and S13, performing binarization processing on the image by using the threshold value obtained in the step S12 to obtain a mask of the tissue region, wherein the value of the region belonging to the tissue region in the mask is 255, and the value of the region not belonging to the tissue is 0.
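The S-component Otsu thresholding of steps S11-S13 can be sketched in plain NumPy (a minimal illustration under the assumption of an 8-bit S channel; real pipelines would typically use OpenCV's color conversion and thresholding, and the function names here are hypothetical):

```python
import numpy as np

def otsu_threshold(channel: np.ndarray) -> int:
    """Otsu's method: pick the threshold that maximises between-class variance."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]                       # weight of the background class
        if w0 == 0:
            continue
        w1 = total - w0                     # weight of the foreground class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def tissue_mask(s_channel: np.ndarray) -> np.ndarray:
    """Binarize the HSV S component: 255 for tissue, 0 for blank background."""
    t = otsu_threshold(s_channel)
    return np.where(s_channel > t, 255, 0).astype(np.uint8)
```

The S (saturation) channel works well here because blank slide background is nearly unsaturated while stained tissue is strongly saturated.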
S2, combining the extracted tissue area with the cancer area marked in the pathological section image, cutting the pathological section image of the whole visual field to obtain sample data for training, wherein the specific processing process is as follows:
S21, in the originally annotated cancer region mask, cancer pixels have the value 1 and all other pixels 0. So that the tissue mask does not distort the annotation values, the tissue region mask obtained in step S1 and the originally annotated cancer region mask are combined by a bitwise AND, yielding the annotation mask of the tissue region of the whole pathological section image, denoted Mask; a non-zero pixel in Mask indicates that the pixel is both tissue and cancer, and all other pixels of Mask are 0;
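The bitwise AND of step S21 can be illustrated directly with NumPy; since 255 & 1 == 1 and 255 & 0 == 0, combining the 0/255 tissue mask with the 0/1 cancer mask preserves the original annotation values (the helper name is illustrative):

```python
import numpy as np

def tissue_cancer_mask(tissue: np.ndarray, cancer: np.ndarray) -> np.ndarray:
    """Bitwise AND of the 0/255 tissue mask and the 0/1 cancer annotation.

    The result is non-zero only where a pixel is both tissue AND cancer,
    and the surviving values equal the original annotation values.
    """
    return np.bitwise_and(tissue, cancer)
```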
S22, the original pathological section image and the Mask obtained in step S21 are cropped simultaneously to obtain small image blocks and their labels. The block size is denoted W and may take values such as 1024, 512 or 256. In this implementation, W = 1024 is adopted to improve efficiency and reduce computational complexity, but each cropped block is down-sampled to 256; this ensures the block contains sufficient information while speeding up computation;
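Cropping a W = 1024 block and down-sampling it to 256 can be sketched with NumPy block pooling. The choice of resampling filter is an assumption (the patent does not specify one); averaging the image while max-pooling the label keeps thin cancer annotations from being averaged away:

```python
import numpy as np

def crop_and_downsample(image, mask, x, y, w=1024, out=256):
    """Crop a w-by-w block at (x, y) from an HxWxC image and its HxW mask,
    then down-sample both to out-by-out (average pool image, max pool label)."""
    patch = image[y:y + w, x:x + w]
    label = mask[y:y + w, x:x + w]
    f = w // out                                  # pooling factor (4 for 1024 -> 256)
    patch_small = patch.reshape(out, f, out, f, -1).mean(axis=(1, 3)).astype(image.dtype)
    label_small = label.reshape(out, f, out, f).max(axis=(1, 3))
    return patch_small, label_small
```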
s3, expanding the sample data by adopting an image enhancement technology to improve the generalization performance of the model, wherein the specific implementation process is as follows:
Since pathological section images have no fixed orientation, when an image is fed into the network for training, the image and its label are simultaneously subjected, with a certain probability, to image transformations such as horizontal flipping, random-angle rotation, color change, noise interference and blurring, thereby expanding the sample data.
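A minimal sketch of label-synchronized augmentation (flips and 90° rotations only; color change, noise and blurring are omitted for brevity, and the helper name is illustrative):

```python
import numpy as np

def augment(image, label, rng):
    """Apply the SAME random flip / 90-degree rotation to image and label."""
    if rng.random() < 0.5:                       # horizontal flip
        image, label = image[:, ::-1], label[:, ::-1]
    k = int(rng.integers(0, 4))                  # rotate by k * 90 degrees
    image = np.rot90(image, k, axes=(0, 1))
    label = np.rot90(label, k, axes=(0, 1))
    return np.ascontiguousarray(image), np.ascontiguousarray(label)
```

Applying identical geometric transforms to image and label is what keeps the pixel-wise annotation valid after augmentation.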
S4, constructing a segmentation network introducing information of different scales, and training and tuning, wherein the specific process of the embodiment is as follows:
S41, constructing a Unet segmentation network with Resnet50 as the encoder; the structure can be summarized as an encoder followed by a decoder, each with 5 levels: the encoder levels correspond to the 5 stages of Resnet50, and each decoder level consists of a transposed convolution with a 2 × 2 kernel and a stride of 2 followed by two 3 × 3 convolution layers, restoring the image resolution;
S42, replacing the convolution unit of the first-level encoder of the Resnet50 in the segmentation network constructed in step S41 (namely, the first residual module of the Resnet50) with a combined convolution unit, fully extracting information of different scales in the input image and gaining an improvement in segmentation accuracy; the combined convolution adopted in this embodiment is composed of 4 convolutions of different sizes, with kernels of 3 × 3, 5 × 5, 7 × 7 and 1 × 1 respectively (the combination of kernel sizes is not limited to these); the input image passes through the 4 convolutions, and the 4 convolution results are concatenated along the channel dimension as the output of the combined convolution;
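A plausible PyTorch sketch of the combined convolution unit (the per-branch channel split, stride, and the BatchNorm/ReLU placement are assumptions; the patent only fixes the four kernel sizes and the channel concatenation):

```python
import torch
import torch.nn as nn

class CombinedConv(nn.Module):
    """Four parallel convolutions (1x1, 3x3, 5x5, 7x7) whose outputs are
    concatenated along the channel axis, as a drop-in for the first
    encoder-level convolution unit."""
    def __init__(self, in_ch=3, out_ch=64, stride=2):
        super().__init__()
        branch_ch = out_ch // 4                  # equal split is an assumption
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, k, stride=stride, padding=k // 2, bias=False)
            for k in (1, 3, 5, 7)                # same padding keeps spatial sizes equal
        ])
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(torch.cat([b(x) for b in self.branches], dim=1)))
```

Because each branch uses `padding = k // 2`, all four outputs share the same spatial size and can be concatenated directly.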
s43, a feature fusion module is introduced into the decoder of the segmentation network constructed in the step S42, the information of the 5-level decoder is fully utilized, the segmentation effect is better, and the process of introducing the feature fusion module is as follows:
the feature maps output by the 5 decoder stages are all up-sampled so that their sizes match the final target output size; the resulting maps are denoted F1, F2, …, F5;
each of the 5 feature maps F1, …, F5 then passes through a 1 × 1 convolution with its own parameters, whose main role is to reduce the number of channels and thus the amount of computation; the resulting maps are denoted F1', …, F5';
the 5 feature maps F1', …, F5' are concatenated along the channel dimension to obtain the overall feature map T;
the overall feature map T passes through a max pooling layer and an average pooling layer respectively to obtain two vectors V1 and V2; the two vectors V1 and V2 are each passed through a multi-layer perceptron with shared parameters and then added to obtain a vector C; the multi-layer perceptron is a three-layer structure consisting of a 1 × 1 convolution layer, a ReLU activation function and another 1 × 1 convolution layer, and is used to compress information and generate the attention result on the channels;
a Sigmoid function is applied to the vector C, which is then multiplied with the overall feature map T to obtain a feature map T' containing channel attention information;
the overall feature map T and the feature map T' are added, and the final segmentation result is obtained through an output convolution module consisting of a 3 × 3 convolution and a Sigmoid activation function.
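The feature fusion module of step S43 could be sketched in PyTorch as follows (channel counts and the MLP reduction ratio are illustrative assumptions; the structure itself — upsample, per-map 1 × 1 convolution, concatenation into T, shared-MLP channel attention over max/avg pooled vectors, then T + T' through a 3 × 3 convolution with Sigmoid — follows the description above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    """Fuse the 5 decoder outputs with channel attention."""
    def __init__(self, in_chs=(64, 64, 128, 256, 512), mid_ch=16, n_classes=1):
        super().__init__()
        self.reduce = nn.ModuleList([nn.Conv2d(c, mid_ch, 1) for c in in_chs])
        fused = mid_ch * len(in_chs)
        # shared-parameter MLP: 1x1 conv -> ReLU -> 1x1 conv
        self.mlp = nn.Sequential(
            nn.Conv2d(fused, fused // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(fused // 4, fused, 1))
        self.out_conv = nn.Sequential(
            nn.Conv2d(fused, n_classes, 3, padding=1), nn.Sigmoid())

    def forward(self, feats, size):
        maps = [r(F.interpolate(f, size=size, mode='bilinear', align_corners=False))
                for r, f in zip(self.reduce, feats)]      # upsample + 1x1 conv each
        T = torch.cat(maps, dim=1)                        # overall feature map T
        v1 = F.adaptive_max_pool2d(T, 1)                  # vector V1
        v2 = F.adaptive_avg_pool2d(T, 1)                  # vector V2
        c = torch.sigmoid(self.mlp(v1) + self.mlp(v2))    # channel attention C
        T_prime = T * c                                   # T' with attention applied
        return self.out_conv(T + T_prime)                 # output convolution module
```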
S44, as shown in fig. 2 (each encoder in the figure corresponds to a residual module of the original Resnet50), the improved segmentation network is trained and tuned on the data-augmented dataset; the specific implementation adopts the following scheme:
Model parameters are updated with an Adam optimizer, the learning rate is decayed exponentially, the mIoU index on the validation set is monitored during training, and whenever the mIoU improves, the corresponding model parameters are saved for later testing and prediction.
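A minimal sketch of this training scheme (the decay factor, the stand-in model, and the placeholder validation metric are assumptions; only the Adam optimizer, exponential learning-rate decay, and checkpoint-on-mIoU-improvement come from the text):

```python
import torch

model = torch.nn.Conv2d(3, 1, 3, padding=1)            # stand-in for the segmentation net
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.95)  # gamma assumed
loss_fn = torch.nn.BCEWithLogitsLoss()

best_miou, best_state = 0.0, None
for epoch in range(3):
    x = torch.randn(2, 3, 64, 64)                      # dummy batch for illustration
    y = (torch.rand(2, 1, 64, 64) > 0.5).float()
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    sched.step()                                       # exponential learning-rate decay
    val_miou = 0.5 + 0.1 * epoch                       # placeholder validation mIoU
    if val_miou > best_miou:                           # checkpoint on every improvement
        best_miou = val_miou
        best_state = {k: v.clone() for k, v in model.state_dict().items()}
```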
The test results of the trained model are shown in Table 1 below; the mIoU and Dice indices are common evaluation metrics for segmentation tasks.
Table 1: model test results
[Table 1 appears only as an image in the original publication and is not reproduced here.]
S5, predicting the whole pathological section image by using the trained model and the grid processing algorithm, and identifying the cancer region therein, wherein the specific process of this embodiment is as follows:
s51, as shown in fig. 3, dividing the entire pathological section image into small blocks by using a grid algorithm to form a grid, where each small grid is used as a prediction sample, and the specific steps are as follows:
assuming that the size of the sliding window is W and the size of the overlapping area of each sliding window is O, the moving step length S = W-O of the sliding window;
calculating, from the step length S, the numbers n_x and n_y of image blocks obtainable along the length X and the width Y of the original pathological section image:
n_x = ⌈(X − W) / S⌉ + 1
n_y = ⌈(Y − W) / S⌉ + 1
according to n_x and n_y, the length X and the width Y of the whole pathological section image are divided, and the coordinate points are stored in X_a and Y_a, where X_a = [X_1, X_2, …, X_nx] and Y_a = [Y_1, Y_2, …, Y_ny];
according to the starting-point coordinates (X_i, Y_j) of each window and the sliding-window size, small image blocks are taken from the slice image; the top-left vertex coordinate of each grid cell is the coordinate used to index the crop. If an image block would exceed the boundary of the slice image (the right boundary or the lower boundary), its starting point is moved to the left or upward so that the block's edge coincides exactly with the boundary of the slice image.
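The grid construction of step S51, including the boundary adjustment, can be sketched as follows (window and overlap values in the test are illustrative):

```python
def grid_starts(length, window, overlap):
    """Window start coordinates along one axis (assumes length >= window).

    The last window is shifted back so its far edge coincides exactly with
    the image boundary, mirroring the boundary adjustment described above.
    """
    step = window - overlap                        # moving step length S = W - O
    starts = list(range(0, length - window + 1, step))
    if starts[-1] + window < length:               # a strip near the border remains
        starts.append(length - window)             # shift the last window back
    return starts

def grid_tiles(width, height, window, overlap):
    """Top-left coordinates of every prediction tile over the whole slide."""
    return [(x, y)
            for y in grid_starts(height, window, overlap)
            for x in grid_starts(width, window, overlap)]
```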
S52, inputting a small block in the grid into the network by adopting a sliding window each time to obtain a segmentation result;
S53, as shown in fig. 4(c) and in combination with fig. 4(a) and fig. 4(b), the segmentation results of the individual sliding windows are stitched together to obtain the segmentation result of the entire pathological section image.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (8)

1. A pathological section image cancer region segmentation system based on a full convolution neural network is characterized by comprising: the system comprises an organization region extraction module, a sample data construction module, a sample data expansion module, a segmentation network construction module, a network training module and a cancer region identification module;
the tissue area extraction module is used for extracting a tissue area in the pathological section image, obtaining a mask of the tissue area and removing a blank background;
the sample data construction module is used for combining the extracted tissue area with a cancer area marked in the pathological section image, cutting the pathological section image of the full visual field and obtaining sample data for training;
the sample data expansion module is used for expanding sample data by adopting image enhancement;
the segmentation network construction module is used for constructing an Unet segmentation network taking Resnet50 as an encoder, replacing a convolution unit of a first-stage encoder of the Resnet50 with a combined convolution unit, wherein the combined convolution unit is provided with convolutions of convolution kernels with different sizes, extracting information of different scales in an input image, introducing a characteristic fusion module into a decoder part, and fully utilizing information output by each stage of decoder;
performing an up-sampling operation on the feature map output by each decoder stage, applying convolutions with different parameters, and concatenating the results along the channel dimension to obtain an overall feature map T;
the overall feature map T passes through a max pooling layer and an average pooling layer respectively to obtain two vectors V1 and V2; the two vectors V1 and V2 are each passed through a multi-layer perceptron with shared parameters and then added to obtain a vector C;
applying a Sigmoid function to the vector C and multiplying it with the overall feature map T to obtain a feature map T' containing channel attention information;
adding the overall feature map T and the feature map T', and obtaining the final segmentation result through an output convolution module;
the network training module is used for training and adjusting the segmentation network on the data set subjected to data enhancement;
the cancer area identification module is used for predicting the whole pathological section image by adopting a trained model and based on a grid processing algorithm to identify a cancer area in the pathological section image.
2. The system for segmenting the cancer region of the pathological section image based on the full convolution neural network as claimed in claim 1, wherein the tissue region extraction module is used for extracting the tissue region in the pathological section image, and the specific steps include:
converting the original pathological section image from an RGB color space to an HSV color space;
obtaining a segmentation threshold value of the S component of the converted HSV image by adopting an Otsu algorithm;
and carrying out binarization processing on the image based on the segmentation threshold value to obtain a mask of the tissue region.
3. The pathological section image cancer region segmentation system based on the full convolution neural network as claimed in claim 1, wherein the pathological section image of the full field of view is cut to obtain sample data for training, and the specific steps include:
performing a bitwise AND on the mask of the tissue region and the mask of the originally annotated cancer region to obtain the final annotation mask of the tissue region of the whole pathological section image;
and simultaneously cutting the original pathological section and the final labeling mask to obtain an image block and a label thereof.
4. The pathological section image cancer region segmentation system based on the full convolution neural network as claimed in claim 1, wherein the sample data expansion module is configured to expand sample data by image enhancement, and includes:
and carrying out image transformation on the pathological section image and the labeled image thereof, wherein the image transformation comprises any one or more of horizontal turning, random angle rotation, color change, noise interference and blurring.
5. The pathological section image cancer region segmentation system based on full convolution neural network as claimed in claim 1, wherein the combined convolution unit adopts 4 convolutions of different sizes, with convolution kernels of 3 × 3, 5 × 5, 7 × 7 and 1 × 1; the input image passes through each of the 4 convolutions, and the 4 convolution results are concatenated along the channel dimension as the final output of the whole combined convolution unit.
6. The system of claim 1, wherein the output convolution module comprises a 3 x 3 convolution and Sigmoid activation function.
7. The system of claim 1, wherein the multi-layer perceptron includes a 1 × 1 convolution layer, a ReLU activation function, and another 1 × 1 convolution layer, for compressing information and generating the attention result on the channels.
8. The system for segmenting pathological section image cancer region based on full convolution neural network as claimed in claim 1, wherein the cancer region identification module is configured to use a trained model and a mesh processing algorithm to predict the whole pathological section image and identify the cancer region therein, and the specific steps include:
partitioning the whole pathological section image into grids by adopting a grid algorithm, wherein each grid is used as a prediction sample;
adopting a sliding window to take image blocks in the grid each time and input the image blocks into a segmentation network to obtain segmentation results;
and splicing the segmentation result of each sliding window to obtain the segmentation result of the whole pathological section image.
CN202210183367.6A 2022-02-28 2022-02-28 Pathological section image cancer region segmentation system based on full convolution neural network Active CN114266794B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210183367.6A CN114266794B (en) 2022-02-28 2022-02-28 Pathological section image cancer region segmentation system based on full convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210183367.6A CN114266794B (en) 2022-02-28 2022-02-28 Pathological section image cancer region segmentation system based on full convolution neural network

Publications (2)

Publication Number Publication Date
CN114266794A true CN114266794A (en) 2022-04-01
CN114266794B CN114266794B (en) 2022-06-10

Family

ID=80833663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210183367.6A Active CN114266794B (en) 2022-02-28 2022-02-28 Pathological section image cancer region segmentation system based on full convolution neural network

Country Status (1)

Country Link
CN (1) CN114266794B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110322435A (en) * 2019-01-20 2019-10-11 北京工业大学 Gastric cancer pathological image cancerous region segmentation method based on deep learning
CN111340828A (en) * 2020-01-10 2020-06-26 南京航空航天大学 Brain glioma segmentation based on cascaded convolutional neural networks
CN111784671A (en) * 2020-06-30 2020-10-16 天津大学 Pathological image focus region detection method based on multi-scale deep learning
CN111968127A (en) * 2020-07-06 2020-11-20 中国科学院计算技术研究所 Cancer focus area identification method and system based on full-section pathological image
CN112419286A (en) * 2020-11-27 2021-02-26 苏州斯玛维科技有限公司 Method and device for segmenting dermoscopy images
CN112435246A (en) * 2020-11-30 2021-03-02 武汉楚精灵医疗科技有限公司 Artificial intelligence diagnosis method for gastric cancer under narrow-band imaging magnifying gastroscopy


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SARUAR ALAM ET AL.: "Automatic Polyp Segmentation using U-Net-ResNet50", arXiv:2012.15247v1 *
XIANG LI ET AL.: "Selective Kernel Networks", arXiv:1903.06586v2 *
YANG BOXIONG ET AL.: "Research on Deep Learning Theory and Practice Based on High-Performance Computing", 31 December 2019, Wuhan University Press *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549520A (en) * 2022-04-08 2022-05-27 北京端点医药研究开发有限公司 Retina pathological section analysis system based on full convolution attention enhancement network
CN114549520B (en) * 2022-04-08 2024-05-07 北京端点医药研究开发有限公司 Retina pathological section analysis system based on full convolution attention enhancement network
CN114862763A (en) * 2022-04-13 2022-08-05 华南理工大学 Gastric cancer pathological section image segmentation prediction method based on EfficientNet
CN114820502A (en) * 2022-04-21 2022-07-29 济宁医学院附属医院 Coloring detection method for protein kinase CK2 in intestinal mucosa tissue
CN114820502B (en) * 2022-04-21 2023-10-24 济宁医学院附属医院 Coloring detection method for protein kinase CK2 in intestinal mucosa tissue
CN115294126A (en) * 2022-10-08 2022-11-04 南京诺源医疗器械有限公司 Intelligent cancer cell identification method for pathological image
CN115294126B (en) * 2022-10-08 2022-12-16 南京诺源医疗器械有限公司 Cancer cell intelligent identification method for pathological image
CN117952969A (en) * 2024-03-26 2024-04-30 济南大学 Endometrial cancer analysis method and system based on selective attention

Also Published As

Publication number Publication date
CN114266794B (en) 2022-06-10

Similar Documents

Publication Publication Date Title
CN110599448B (en) Migratory learning lung lesion tissue detection system based on MaskScoring R-CNN network
CN114266794B (en) Pathological section image cancer region segmentation system based on full convolution neural network
CN110390251B (en) Image and character semantic segmentation method based on multi-neural-network model fusion processing
CN111882560B (en) Lung parenchyma CT image segmentation method based on weighted full convolution neural network
US20190019055A1 (en) Word segmentation system, method and device
CN110569738B (en) Natural scene text detection method, equipment and medium based on densely connected network
CN110175613A (en) Street view image semantic segmentation method based on Analysis On Multi-scale Features and codec models
CN110942471B (en) Long-term target tracking method based on space-time constraint
CN111461039B (en) Landmark identification method based on multi-scale feature fusion
CN113435240B (en) End-to-end form detection and structure identification method and system
CN110334709B (en) License plate detection method based on end-to-end multi-task deep learning
CN113077419A (en) Information processing method and device for hip joint CT image recognition
CN111353544A (en) Improved Mixed Pooling-Yolov 3-based target detection method
CN115546768A (en) Pavement marking identification method and system based on multi-scale mechanism and attention mechanism
CN115424017B (en) Building inner and outer contour segmentation method, device and storage medium
CN115131797A (en) Scene text detection method based on feature enhancement pyramid network
CN114332473A (en) Object detection method, object detection device, computer equipment, storage medium and program product
CN116012291A (en) Industrial part image defect detection method and system, electronic equipment and storage medium
CN111507337A (en) License plate recognition method based on hybrid neural network
CN110895815A (en) Chest X-ray pneumothorax segmentation method based on deep learning
CN116883650A (en) Image-level weak supervision semantic segmentation method based on attention and local stitching
CN113657225B (en) Target detection method
CN114445620A (en) Target segmentation method for improving Mask R-CNN
CN113313669B (en) Method for enhancing semantic features of top layer of surface defect image of subway tunnel
CN114445665A (en) Hyperspectral image classification method based on Transformer enhanced non-local U-shaped network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant