CN114266794B - Pathological section image cancer region segmentation system based on full convolution neural network - Google Patents
- Publication number: CN114266794B
- Application number: CN202210183367.6A
- Authority: CN (China)
- Prior art keywords: pathological section image, segmentation, convolution
- Prior art date: 2022-02-28
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Classification: Image Analysis (AREA)
Abstract
The invention discloses a pathological section image cancer region segmentation system based on a full convolution neural network. The system: extracts a mask of the tissue region in the pathological section image and removes the blank background; combines the tissue region with the cancer region annotated in the pathological section image and cuts the full-field-of-view slide image to obtain training sample data; expands the sample data; constructs a Unet segmentation network with Resnet50 as the encoder, replacing the convolution unit of the first encoder stage with a combined convolution unit to extract information at different scales from the input image, and introduces a feature fusion module into the decoder to make full use of the information output by every decoder stage; trains and tunes the segmentation network on the augmented dataset; and predicts the whole pathological section image with a grid processing algorithm to identify the cancer region in it. By introducing multi-scale information in the feature extraction stage, the invention improves the segmentation accuracy of the cancer region.
Description
Technical Field
The invention relates to the technical field of image recognition, in particular to a pathological section image cancer area segmentation system based on a full convolution neural network.
Background
Before neural networks became popular, researchers had already begun to use image processing techniques for aided diagnosis on pathological section images, mainly extracting features from the statistical, texture and morphological characteristics of the images, judging whether an image contains certain structures and features, and separating those structures and features. Although these traditional image processing methods can assist a doctor's diagnosis, they generally suffer from low accuracy and unstable performance.
Where neural networks are used to recognize pathological section images, most research stops at judging whether an image contains cancer and does not actually segment the cancerous regions; the dependence on pathologists therefore remains heavy, and such networks provide them only slight assistance.
Disclosure of Invention
In order to overcome the defects and shortcomings in the prior art, the invention provides a pathological section image cancer region segmentation system based on a full convolution neural network.
In order to achieve the purpose, the invention adopts the following technical scheme:
a pathological section image cancer region segmentation system based on a full convolution neural network comprises: a tissue region extraction module, a sample data construction module, a sample data expansion module, a segmentation network construction module, a network training module and a cancer region identification module;
the tissue area extraction module is used for extracting a tissue area in the pathological section image, obtaining a mask of the tissue area and removing a blank background;
the sample data construction module is used for combining the extracted tissue area with a cancer area marked in the pathological section image, cutting the pathological section image of the full visual field and obtaining sample data for training;
the sample data expansion module is used for expanding sample data by adopting image enhancement;
the segmentation network construction module is used for constructing a Unet segmentation network with Resnet50 as the encoder, replacing the convolution unit of the first encoder stage of Resnet50 with a combined convolution unit that contains convolutions with kernels of different sizes, so as to extract information at different scales from the input image, and introducing a feature fusion module into the decoder so as to make full use of the information output by every decoder stage;
the feature map output by each decoder stage is up-sampled, passed through convolutions with different parameters, and spliced along the channel axis to obtain an overall feature map T;
the overall feature map T passes through a maximum pooling layer and an average pooling layer to obtain two vectors V1 and V2; the two vectors V1 and V2 are passed through a multi-layer perceptron with shared parameters and then added to obtain a vector C;
a Sigmoid function is applied to the vector C, which is then multiplied with the overall feature map T to obtain a feature map T' containing channel attention information;
the overall feature map T and the feature map T' are added, and the final segmentation result is obtained through an output convolution module;
the network training module is used for training and adjusting the segmentation network on the data set subjected to data enhancement;
the cancer region identification module is used for predicting the whole pathological section image with the trained model and a grid processing algorithm, identifying the cancer region in the pathological section image.
As a preferred technical solution, the tissue region extraction module is configured to extract a tissue region in a pathological section image, and the specific steps include:
converting the original pathological section image from an RGB color space to an HSV color space;
obtaining a segmentation threshold value of the S component of the converted HSV image by adopting an Otsu algorithm;
And carrying out binarization processing on the image based on the segmentation threshold value to obtain a mask of the tissue region.
As a preferred technical solution, the cutting of the full-field pathological section image to obtain sample data for training includes the following specific steps:
performing a bitwise AND between the mask of the tissue area and the mask of the originally annotated cancer area to obtain the final annotation mask of the tissue region of the whole pathological section image;
and simultaneously cutting the original pathological section and the final labeling mask to obtain an image block and a label thereof.
As a preferred technical solution, the sample data expansion module is configured to expand sample data by using image enhancement, and includes the specific steps of:
and carrying out image transformation on the pathological section image and the labeled image thereof, wherein the image transformation comprises any one or more of horizontal turning, random angle rotation, color change, noise interference and blurring.
As a preferred technical solution, the combined convolution unit adopts 4 convolutions of different sizes, with kernels of 3 × 3, 5 × 5, 7 × 7 and 1 × 1; the input image passes through each of the 4 convolutions, and the 4 convolution results are spliced along the channel axis as the output of the whole combined convolution unit.
As a preferred technical solution, the output convolution module includes a 3 × 3 convolution and Sigmoid activation function.
As a preferred technical solution, the multi-layer perceptron includes a 1 × 1 convolution layer, a ReLU activation function, and another 1 × 1 convolution layer, and is used to compress information and generate an attention result over the channels.
As a preferred technical solution, the cancer region identification module is configured to predict a whole pathological section image by using a trained model and based on a grid processing algorithm, and identify a cancer region therein, and the method specifically includes:
partitioning the whole pathological section image into grids by adopting a grid algorithm, wherein each grid is used as a prediction sample;
using a sliding window to take one image block of the grid at a time and input it into the segmentation network to obtain its segmentation result;
and splicing the segmentation result of each sliding window to obtain the segmentation result of the whole pathological section image.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) The invention introduces combined convolution into the network structure to fully extract feature information at different scales from the input image; compared with a network that does not introduce information at different scales, the segmentation effect is better, and the segmentation accuracy of the cancer region is improved.
(2) When generating the segmentation result, the outputs of the 5 decoder stages are extracted and fused by the feature fusion module; this makes full use of information such as the spatial positions in the low-level feature maps and the high-level semantics in the high-level feature maps, so the segmentation result is more accurate.
Drawings
FIG. 1 is a schematic diagram of a process for implementing a pathological section image cancer region segmentation system based on a full convolution neural network according to the present invention;
FIG. 2 is a schematic diagram of a full convolutional network structure of the present invention;
FIG. 3 is a schematic diagram of the implementation process of the grid processing algorithm of the present invention;
FIG. 4 (a) is a schematic diagram of an original image of a whole pathological section for prediction according to the present invention;
FIG. 4 (b) is a schematic diagram of an annotation mask of the entire pathological section image for prediction according to the present invention;
fig. 4 (c) is a schematic diagram of the segmentation result of the whole pathological section image according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Examples
The embodiment provides a pathological section image cancer region segmentation system based on a full convolution neural network, which comprises: a tissue region extraction module, a sample data construction module, a sample data expansion module, a segmentation network construction module, a network training module and a cancer region identification module;
in this embodiment, the tissue region extraction module is configured to extract a tissue region in a pathological section image, obtain a mask of the tissue region, and remove a blank background;
in this embodiment, the sample data construction module is configured to combine the extracted tissue region with a cancer region labeled in a pathological section image, and cut a full-field pathological section image to obtain sample data for training;
in this embodiment, the sample data expansion module is configured to expand sample data by using image enhancement;
in this embodiment, the segmentation network construction module is configured to construct a Unet segmentation network with Resnet50 as the encoder, replace the convolution unit of the first encoder stage of Resnet50 with a combined convolution unit that contains convolutions with kernels of different sizes, so as to extract information at different scales from the input image, and introduce a feature fusion module into the decoder so as to make full use of the information output by every decoder stage;
the feature map output by each decoder stage is up-sampled, passed through convolutions with different parameters, and spliced along the channel axis to obtain an overall feature map T;
the overall feature map T passes through a maximum pooling layer and an average pooling layer to obtain two vectors V1 and V2; the two vectors V1 and V2 are passed through a multi-layer perceptron with shared parameters and then added to obtain a vector C;
a Sigmoid function is applied to the vector C, which is then multiplied with the overall feature map T to obtain a feature map T' containing channel attention information;
the overall feature map T and the feature map T' are added, and the final segmentation result is obtained through an output convolution module;
in this embodiment, the network training module is configured to train and tune the segmented network on a data set subjected to data enhancement;
in this embodiment, the cancer region identification module is configured to use the trained model and predict the whole pathological section image based on a grid processing algorithm, so as to identify the cancer region therein.
As shown in fig. 1, this embodiment of the pathological section image cancer region segmentation system based on a full convolution neural network uses the public breast cancer dataset of the Grand Challenge Camelyon16 competition; the training data comprise 111 breast cancer pathological section images with cancer annotations and 159 normal pathological section images, and the test set comprises 129 pathological section images (only 48 of which have tumor region annotations). The implementation includes the following steps:
S1, extracting a tissue area in the pathological section image by using an Otsu algorithm and color space conversion, and removing a blank background; the specific process of this embodiment is as follows:
s11, converting the original pathological section image from an RGB color space to an HSV color space;
s12, obtaining a segmentation threshold value of the S component of the converted HSV image by applying an Otsu algorithm;
S13, performing binarization processing on the image with the threshold obtained in step S12 to obtain the mask of the tissue region, in which pixels belonging to tissue have the value 255 and pixels not belonging to tissue have the value 0.
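As a concrete illustration of steps S11–S13, the following is a minimal sketch using OpenCV; the function name and the assumption that the slide fits in memory as an RGB array are illustrative, not taken from the patent.

```python
import cv2
import numpy as np

def extract_tissue_mask(slide_rgb: np.ndarray) -> np.ndarray:
    """Return a binary mask (255 = tissue, 0 = background) for an RGB slide image."""
    hsv = cv2.cvtColor(slide_rgb, cv2.COLOR_RGB2HSV)          # S11: RGB -> HSV
    s_channel = hsv[:, :, 1]                                  # saturation (S) component
    # S12 + S13: Otsu picks the threshold on S automatically; blank background has low saturation.
    _, mask = cv2.threshold(s_channel, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```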
S2, combining the extracted tissue area with the cancer area marked in the pathological section image, cutting the pathological section image of the whole visual field to obtain sample data for training, wherein the specific processing process is as follows:
S21, in the originally annotated cancer region mask, the cancer region has the value 1 and all other regions have the value 0. So that the tissue mask does not disturb the annotated mask values, the tissue region mask obtained in step S1 is bitwise-ANDed with the originally annotated cancer region mask, which yields the annotation mask of the tissue region of the whole pathological section image, denoted Mask; a non-zero region in Mask is both tissue and cancer, and the Mask values of all other regions are 0;
S22, cutting the original pathological section image and the Mask obtained in step S21 simultaneously to obtain small image blocks and their labels. The image block size is denoted W and may be 1024, 512, 256, etc.; to improve efficiency and reduce computational complexity, this implementation uses W = 1024 but downsamples each cut block to 256, which ensures that a block contains enough information while speeding up computation;
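A minimal sketch of S21–S22 under the value conventions stated above (tissue mask 0/255, cancer mask 0/1); the function names and the non-overlapping cutting loop are illustrative assumptions.

```python
import cv2
import numpy as np

def build_annotation_mask(tissue_mask: np.ndarray, cancer_mask: np.ndarray) -> np.ndarray:
    # S21: bitwise AND -- non-zero only where a pixel is both tissue (255) and cancer (1).
    return np.bitwise_and(tissue_mask, cancer_mask)

def cut_patches(image: np.ndarray, mask: np.ndarray, w: int = 1024, out: int = 256):
    """S22: cut aligned (image, label) blocks of size w and downsample them to out x out."""
    h_img, w_img = mask.shape[:2]
    for y in range(0, h_img - w + 1, w):
        for x in range(0, w_img - w + 1, w):
            img_block = cv2.resize(image[y:y + w, x:x + w], (out, out))
            lbl_block = cv2.resize(mask[y:y + w, x:x + w], (out, out),
                                   interpolation=cv2.INTER_NEAREST)  # keep labels discrete
            yield img_block, lbl_block
```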
s3, expanding the sample data by adopting an image enhancement technology to improve the generalization performance of the model, wherein the specific implementation process is as follows:
Because a pathological section image has no fixed orientation, when the images are fed into the network for training, the pathological section image and its label are simultaneously subjected, with a certain probability, to image transformations such as horizontal flipping, random-angle rotation, color change, noise interference and blurring, thereby expanding the sample data.
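A sketch of these joint transformations; the probability p and the noise and blur parameters are illustrative assumptions, not values given by the patent.

```python
import random
import cv2
import numpy as np

def augment(image: np.ndarray, label: np.ndarray, p: float = 0.5):
    """Apply the same geometric transforms to image and label; photometric ones to the image only."""
    if random.random() < p:                                  # horizontal flip
        image, label = np.fliplr(image).copy(), np.fliplr(label).copy()
    if random.random() < p:                                  # random-angle rotation
        h, w = label.shape[:2]
        m = cv2.getRotationMatrix2D((w / 2, h / 2), random.uniform(0, 360), 1.0)
        image = cv2.warpAffine(image, m, (w, h))
        label = cv2.warpAffine(label, m, (w, h), flags=cv2.INTER_NEAREST)
    if random.random() < p:                                  # color change (channel scaling)
        image = np.clip(image * np.random.uniform(0.9, 1.1, 3), 0, 255).astype(np.uint8)
    if random.random() < p:                                  # noise interference
        image = np.clip(image + np.random.normal(0, 8, image.shape), 0, 255).astype(np.uint8)
    if random.random() < p:                                  # blurring
        image = cv2.GaussianBlur(image, (5, 5), 0)
    return image, label
```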
S4, constructing a segmentation network introducing information of different scales, and training and tuning, wherein the specific process of the embodiment is as follows:
S41, constructing a Unet segmentation network with Resnet50 as the encoder. The structure can be summarized as an encoder connected to a decoder, each having 5 stages: the encoder consists of the 5 stages of Resnet50, and each decoder stage consists of a deconvolution with kernel size 2 × 2 and stride 2 followed by two 3 × 3 convolution layers, which restores the image resolution;
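One decoder stage as described in S41 can be sketched in PyTorch as follows; the channel counts are left as parameters, and the usual Unet skip connection (concatenation with the matching encoder feature map) is shown as an assumption, since the patent does not spell it out here.

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """One decoder stage: 2x2 deconvolution (stride 2) followed by two 3x3 convolutions."""
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)                            # double the spatial resolution
        x = torch.cat([x, skip], dim=1)           # Unet skip connection from the encoder
        return self.conv(x)
```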
S42, replacing the convolution unit of the first encoder stage of Resnet50 in the segmentation network constructed in step S41 (namely the first residual module of Resnet50) with a combined convolution unit, which fully extracts information at different scales from the input image and gains an improvement in segmentation accuracy. The combined convolution adopted in this embodiment consists of 4 convolutions of different sizes, with kernels of 3 × 3, 5 × 5, 7 × 7 and 1 × 1, respectively (the combination of kernel sizes is not limited to these); the input image passes through the 4 convolutions, and the 4 convolution results are spliced along the channel axis as the output of the combined convolution;
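A sketch of this combined convolution unit; the per-branch channel count is an illustrative assumption.

```python
import torch
import torch.nn as nn

class CombinedConv(nn.Module):
    """Four parallel convolutions (1x1, 3x3, 5x5, 7x7) concatenated along the channel axis."""
    def __init__(self, in_ch: int, branch_ch: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2)   # 'same' padding per kernel size
            for k in (1, 3, 5, 7)
        ])

    def forward(self, x):
        # Every branch sees the same input; the output has 4 * branch_ch channels.
        return torch.cat([branch(x) for branch in self.branches], dim=1)
```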
S43, introducing a feature fusion module into the decoder of the segmentation network constructed in step S42, so that the information of all 5 decoder stages is fully utilized and the segmentation effect is better. The feature fusion module is introduced as follows:
the feature maps output by the 5 decoder stages are all up-sampled so that their sizes match the final target output size; the up-sampled maps are denoted F1, F2, …, F5;
each of the 5 feature maps F1, …, F5 is passed through a 1 × 1 convolution with its own parameters, whose main purpose is to reduce the number of channels of the feature map and thus the amount of computation; the resulting maps are denoted F1', F2', …, F5';
the 5 feature maps F1', …, F5' are spliced along the channel axis to obtain the overall feature map T;
the overall feature map T passes through a maximum pooling layer and an average pooling layer to obtain two vectors V1 and V2; the two vectors V1 and V2 are passed through a multi-layer perceptron with shared parameters and then added to obtain a vector C; the multi-layer perceptron is a three-layer structure consisting of a 1 × 1 convolution layer, a ReLU activation function and another 1 × 1 convolution layer, used to compress information and generate an attention result over the channels;
a Sigmoid function is applied to the vector C, which is then multiplied with the overall feature map T to obtain a feature map T' containing channel attention information;
the overall feature map T and the feature map T' are added and passed through an output convolution module consisting of a 3 × 3 convolution and a Sigmoid activation function to obtain the final segmentation result.
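The fusion steps above (starting from the already-concatenated T; the up-sampling and 1 × 1 stage is omitted) can be sketched as follows; the channel count of T and the reduction ratio inside the shared MLP are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    """Channel attention over the concatenated decoder outputs T, then the output convolution."""
    def __init__(self, ch: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(                 # shared-parameter MLP: 1x1 conv, ReLU, 1x1 conv
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1),
        )
        self.out = nn.Sequential(nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, t):                         # t: overall feature map T, shape (N, ch, H, W)
        v1 = self.mlp(F.adaptive_max_pool2d(t, 1))    # V1 from max pooling
        v2 = self.mlp(F.adaptive_avg_pool2d(t, 1))    # V2 from average pooling
        c = torch.sigmoid(v1 + v2)                    # Sigmoid(C): per-channel attention weights
        t_prime = t * c                               # T': feature map with channel attention
        return self.out(t + t_prime)                  # 3x3 conv + Sigmoid -> segmentation map
```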
S44, as shown in fig. 2, where each encoder in the figure corresponds to a residual module of the original Resnet50, the improved segmentation network is trained and tuned on the data-enhanced dataset. The specific implementation adopts the following scheme:
model parameters are updated with an Adam optimizer, the learning rate is updated by exponential decay, the mIoU index on the validation set is monitored during training, and whenever the mIoU improves, the corresponding model parameters are saved for later testing and prediction.
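A sketch of this training scheme; the loss function, the decay factor gamma, the epoch count, the learning rate and the evaluate_miou helper are illustrative assumptions, not values specified by the patent.

```python
import torch

def train(model, train_loader, val_loader, epochs=50, device="cuda"):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.95)   # exponential LR update
    loss_fn = torch.nn.BCELoss()          # the network ends in Sigmoid, so BCE on probabilities
    best_miou = 0.0
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
        sched.step()
        miou = evaluate_miou(model, val_loader, device)   # hypothetical validation helper
        if miou > best_miou:                              # save whenever the val mIoU improves
            best_miou = miou
            torch.save(model.state_dict(), "best_model.pth")
```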
The test results of the trained model are shown in Table 1 below; the mIoU and Dice indices in the table are common metrics for evaluating segmentation tasks.
Table 1. Test results of the model
S5, predicting the entire pathological section image by using the trained model and using a mesh processing algorithm, and identifying a cancer region in the pathological section image, wherein the specific process in this embodiment is as follows:
s51, as shown in fig. 3, dividing the whole pathological section image into small blocks by using a grid algorithm to form a grid, where each small grid is used as a prediction sample, and the specific steps are as follows:
assuming that the size of the sliding window is W and the size of the overlapping area of each sliding window is O, the moving step length S = W-O of the sliding window;
according to the step S, the numbers of image blocks obtainable along the length X and the width Y of the original pathological section image, nx and ny, are calculated as:
nx = ⌈(X − O) / S⌉, ny = ⌈(Y − O) / S⌉;
according to nx and ny, the length X and the width Y of the whole pathological section image are divided, and the coordinate points are stored in Xa and Ya, where Xa = [X1, X2, …, Xnx] and Ya = [Y1, Y2, …, Yny];
according to the starting coordinates (Xi, Yi) of each window and the sliding-window size, small image blocks are taken from the slice image; the top-left vertex of each grid cell gives the index coordinates at which a block is cut. If a block would exceed the right or bottom boundary of the slice image, its starting point is moved left or up so that the block exactly aligns with the slice boundary.
S52, using a sliding window, one small block of the grid is input into the network at a time to obtain its segmentation result;
S53, as shown in fig. 4 (c), and in combination with fig. 4 (a) and fig. 4 (b), the segmentation results of all the sliding windows are spliced to obtain the segmentation result of the entire pathological section image.
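The whole prediction pipeline of S51–S53 can be sketched as follows; the window and overlap sizes, the input normalization, and the max-based merging of overlapping windows are illustrative assumptions.

```python
import numpy as np
import torch

def predict_slide(model, slide_rgb, w=256, o=32, device="cuda"):
    """S51: grid coordinates with boundary windows shifted inward; S52: per-window inference;
    S53: stitch the window results into a whole-slide segmentation map."""
    s = w - o                                                  # moving step S = W - O
    H, W = slide_rgb.shape[:2]
    xs = sorted({min(x, W - w) for x in range(0, W - o, s)})   # shift over-boundary windows left
    ys = sorted({min(y, H - w) for y in range(0, H - o, s)})   # ... or up
    result = np.zeros((H, W), dtype=np.float32)
    model.eval()
    with torch.no_grad():
        for y in ys:
            for x in xs:
                block = slide_rgb[y:y + w, x:x + w]
                inp = torch.from_numpy(block).permute(2, 0, 1).float().unsqueeze(0) / 255.0
                pred = model(inp.to(device))[0, 0].cpu().numpy()
                # Keep the maximum probability where windows overlap.
                result[y:y + w, x:x + w] = np.maximum(result[y:y + w, x:x + w], pred)
    return result
```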
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.
Claims (7)
1. A pathological section image cancer region segmentation system based on a full convolution neural network is characterized by comprising: a tissue region extraction module, a sample data construction module, a sample data expansion module, a segmentation network construction module, a network training module and a cancer region identification module;
the tissue area extraction module is used for extracting a tissue area in the pathological section image, obtaining a mask of the tissue area and removing a blank background;
the sample data construction module is used for combining the extracted tissue area with a cancer area marked in the pathological section image, cutting the pathological section image of the full visual field and obtaining sample data for training;
performing a bitwise AND between the mask of the tissue area and the mask of the originally annotated cancer area to obtain the final annotation mask of the tissue region of the whole pathological section image;
simultaneously cutting the original pathological section and the final annotation mask to obtain image blocks and their labels;
the sample data expansion module is used for expanding sample data by adopting image enhancement;
the segmentation network construction module is used for constructing a Unet segmentation network with Resnet50 as the encoder, replacing the convolution unit of the first encoder stage of Resnet50 with a combined convolution unit that contains convolutions with kernels of different sizes, so as to extract information at different scales from the input image, and introducing a feature fusion module into the decoder so as to make full use of the information output by every decoder stage;
the feature map output by each decoder stage is up-sampled, passed through convolutions with different parameters, and spliced along the channel axis to obtain an overall feature map T;
the overall feature map T passes through a maximum pooling layer and an average pooling layer to obtain two vectors V1 and V2; the two vectors V1 and V2 are passed through a multi-layer perceptron with shared parameters and then added to obtain a vector C;
a Sigmoid function is applied to the vector C, which is then multiplied with the overall feature map T to obtain a feature map T' containing channel attention information;
the overall feature map T and the feature map T' are added, and the final segmentation result is obtained through an output convolution module;
the network training module is used for training and adjusting the segmentation network on the data set subjected to data enhancement;
the cancer region identification module is used for predicting the whole pathological section image with the trained model and a grid processing algorithm, identifying the cancer region in the pathological section image.
2. The system for segmenting the cancer region of the pathological section image based on the full convolution neural network as claimed in claim 1, wherein the tissue region extraction module is used for extracting the tissue region in the pathological section image, and the specific steps include:
converting the original pathological section image from an RGB color space to an HSV color space;
obtaining a segmentation threshold value of the S component of the converted HSV image by adopting an Otsu algorithm;
and carrying out binarization processing on the image based on the segmentation threshold value to obtain a mask of the tissue region.
3. The pathological section image cancer region segmentation system based on the full convolution neural network as claimed in claim 1, wherein the sample data expansion module is configured to expand sample data by image enhancement, and includes:
And carrying out image transformation on the pathological section image and the marked image thereof, wherein the image transformation comprises any one or more of horizontal turning, random angle rotation, color change, noise interference and blurring.
4. The pathological section image cancer region segmentation system based on a full convolution neural network as claimed in claim 1, wherein the combined convolution unit adopts 4 convolutions of different sizes, with kernels of 3 × 3, 5 × 5, 7 × 7 and 1 × 1; the input image passes through each of the 4 convolutions, and the 4 convolution results are spliced along the channel axis as the output of the whole combined convolution unit.
5. The system of claim 1, wherein the output convolution module comprises a 3-by-3 convolution and Sigmoid activation function.
6. The system of claim 1, wherein the multi-layer perceptron includes a 1 × 1 convolution layer, a ReLU activation function, and another 1 × 1 convolution layer, and is used to compress information and generate an attention result over the channels.
7. The system for segmenting pathological section image cancer region based on full convolution neural network as claimed in claim 1, wherein the cancer region identification module is configured to use a trained model and a mesh processing algorithm to predict the whole pathological section image and identify the cancer region therein, and the specific steps include:
Partitioning the whole pathological section image into grids by adopting a grid algorithm, wherein each grid is used as a prediction sample;
using a sliding window to take one image block of the grid at a time and input it into the segmentation network to obtain its segmentation result;
and splicing the segmentation result of each sliding window to obtain the segmentation result of the whole pathological section image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210183367.6A CN114266794B (en) | 2022-02-28 | 2022-02-28 | Pathological section image cancer region segmentation system based on full convolution neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114266794A CN114266794A (en) | 2022-04-01 |
CN114266794B true CN114266794B (en) | 2022-06-10 |
Family
ID=80833663
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210183367.6A Active CN114266794B (en) | 2022-02-28 | 2022-02-28 | Pathological section image cancer region segmentation system based on full convolution neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114266794B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114549520B (en) * | 2022-04-08 | 2024-05-07 | 北京端点医药研究开发有限公司 | Retina pathological section analysis system based on full convolution attention enhancement network |
CN114862763B (en) * | 2022-04-13 | 2024-06-21 | 华南理工大学 | EFFICIENTNET-based gastric cancer pathological section image segmentation prediction method |
CN114820502B (en) * | 2022-04-21 | 2023-10-24 | 济宁医学院附属医院 | Coloring detection method for protein kinase CK2 in intestinal mucosa tissue |
CN115294126B (en) * | 2022-10-08 | 2022-12-16 | 南京诺源医疗器械有限公司 | Cancer cell intelligent identification method for pathological image |
CN117952969B (en) * | 2024-03-26 | 2024-06-21 | 济南大学 | Endometrial cancer analysis method and system based on selective attention |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111340828A (en) * | 2020-01-10 | 2020-06-26 | 南京航空航天大学 | Brain glioma segmentation based on cascaded convolutional neural networks |
CN111784671B (en) * | 2020-06-30 | 2022-07-05 | 天津大学 | Pathological image focus region detection method based on multi-scale deep learning |
CN111968127B (en) * | 2020-07-06 | 2021-08-27 | 中国科学院计算技术研究所 | Cancer focus area identification method and system based on full-section pathological image |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110322435A (en) * | 2019-01-20 | 2019-10-11 | 北京工业大学 | A kind of gastric cancer pathological image cancerous region dividing method based on deep learning |
CN112419286A (en) * | 2020-11-27 | 2021-02-26 | 苏州斯玛维科技有限公司 | Method and device for segmenting skin mirror image |
CN112435246A (en) * | 2020-11-30 | 2021-03-02 | 武汉楚精灵医疗科技有限公司 | Artificial intelligent diagnosis method for gastric cancer under narrow-band imaging amplification gastroscope |
Non-Patent Citations (2)

- Saruar Alam et al., "Automatic Polyp Segmentation using U-Net-ResNet50," arXiv:2012.15247v1, 2020-12-30, pp. 1–3. *
- Xiang Li et al., "Selective Kernel Networks," arXiv:1903.06586v2, 2019-03-18, pp. 1–12. *
Also Published As
Publication number | Publication date |
---|---|
CN114266794A (en) | 2022-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114266794B (en) | Pathological section image cancer region segmentation system based on full convolution neural network | |
CN110738207B (en) | Character detection method for fusing character area edge information in character image | |
CN112132156B (en) | Image saliency target detection method and system based on multi-depth feature fusion | |
CN111882560B (en) | Lung parenchyma CT image segmentation method based on weighted full convolution neural network | |
Xu et al. | An improved faster R-CNN algorithm for assisted detection of lung nodules | |
CN111612008A (en) | Image segmentation method based on convolution network | |
CN109840483B (en) | Landslide crack detection and identification method and device | |
US12086989B2 (en) | Medical image segmentation method based on U-network | |
CN114529516B (en) | Lung nodule detection and classification method based on multi-attention and multi-task feature fusion | |
CN113077419A (en) | Information processing method and device for hip joint CT image recognition | |
CN113192076B (en) | MRI brain tumor image segmentation method combining classification prediction and multi-scale feature extraction | |
CN116012291A (en) | Industrial part image defect detection method and system, electronic equipment and storage medium | |
CN111353544A (en) | Improved Mixed Pooling-Yolov 3-based target detection method | |
CN115131797A (en) | Scene text detection method based on feature enhancement pyramid network | |
CN116563285B (en) | Focus characteristic identifying and dividing method and system based on full neural network | |
CN116228792A (en) | Medical image segmentation method, system and electronic device | |
CN110895815A (en) | Chest X-ray pneumothorax segmentation method based on deep learning | |
CN114445620A (en) | Target segmentation method for improving Mask R-CNN | |
CN116645592A (en) | Crack detection method based on image processing and storage medium | |
CN112488996A (en) | Inhomogeneous three-dimensional esophageal cancer energy spectrum CT (computed tomography) weak supervision automatic labeling method and system | |
CN116596966A (en) | Segmentation and tracking method based on attention and feature fusion | |
CN113554656B (en) | Optical remote sensing image example segmentation method and device based on graph neural network | |
CN114445665A (en) | Hyperspectral image classification method based on Transformer enhanced non-local U-shaped network | |
CN117975087A (en) | Casting defect identification method based on ECA-ConvNext | |
CN114708591B (en) | Document image Chinese character detection method based on single word connection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |