CN112102332A - Cancer WSI segmentation method based on local classification neural network - Google Patents
- Publication number
- CN112102332A (application CN202010891178.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- image blocks
- blocks
- cancer
- classification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/11—Region-based segmentation
- G06F18/24—Classification techniques
- G06T5/20—Image enhancement or restoration using local operators
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T2207/20032—Median filtering
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30096—Tumor; Lesion
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to the technical field of intelligent medical image processing, and in particular provides a method for segmenting cancer whole-slide images (WSIs) based on a local classification neural network. The method can quickly delineate the regions of different tissues across an entire pathological slide, effectively assisting physicians in diagnosis and improving diagnostic accuracy and efficiency.
Description
Technical Field
The invention belongs to the technical field of intelligent medical image processing, relates generally to pathological section segmentation methods, and more particularly to a method for segmenting cancer whole-slide images based on a local classification neural network.
Background
With the development of whole-slide scanning technology, large numbers of tissue slides are scanned into whole-slide images (WSIs), stored in digital form, and widely used in cancer pathological diagnosis. Although a trained pathologist can reach a diagnosis by analyzing a WSI, its extremely large size makes it difficult to attend to every detail. At the same time, many factors influence cancer prognosis, and it is difficult for a physician to extract prognosis-relevant information from pathological sections alone.
Applying machine learning to pathological section analysis lets a computer, with its advantage in raw compute, learn from large numbers of WSIs; the trained model then analyzes new WSIs, and its visualized results assist the physician in diagnosis, making full use of the rich image information a WSI contains.
A deep convolutional neural network (DCNN) is a machine learning technique that largely avoids human-dependent, hand-crafted features and automatically learns to extract rich, representative visual features from large amounts of labeled data. The technique uses back-propagation-based optimization, so the machine updates its internal parameters and learns the mapping from input image to label. In recent years, DCNNs have greatly improved performance across computer vision tasks.
In 2012, Krizhevsky et al. [1] were the first to apply a deep convolutional neural network in the ImageNet [2] image classification competition, winning with a Top-5 error rate of 15.3% and triggering a wave of interest in deep learning. In 2015, Simonyan et al. [3] proposed the 16- and 19-layer networks VGG-16 and VGG-19, increasing network capacity and further improving ImageNet classification results. In 2016, He et al. [4] used the 152-layer residual network ResNet to achieve classification accuracy exceeding that of human observers.
DCNNs excel not only at image classification but also at structured output tasks such as object detection [5-7] and semantic segmentation [8,9]. Applied to computer-aided diagnosis (CAD), DCNNs can help physicians make better medical diagnoses, enabling earlier detection and treatment and improving treatment outcomes.
Building on the ResNet proposed by He et al. [4], the invention provides a new method for segmenting cancer whole-slide images based on a local classification neural network; it fully exploits the characteristics of the training slides, extracts rich features, and realizes the analysis of the WSI.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention aims to provide a segmentation method for cancer whole-slide images (WSIs) based on a local classification deep convolutional neural network (DCNN), eliminating the influence of human factors and realizing automatic segmentation of the whole-slide image.
The invention provides a cancer whole-slide image (WSI) segmentation method based on a local classification neural network, comprising the following specific steps:
(1) dividing pathological sections into blocks;
firstly, Otsu's method [10] is used to threshold the G (green) channel of the RGB pathological section image, removing the white background and yielding a mask of the approximate position of the tissue to be analyzed; then, the whole image is partitioned into non-overlapping image blocks of 256 × 256 pixels, and only the image blocks inside the mask are taken as analysis objects;
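Step (1) can be sketched in plain Python: an Otsu threshold computed from the G-channel histogram, plus a grid of non-overlapping 256-pixel tile corners. This is an illustrative sketch, not the patented implementation; the function names and the flat 8-bit input representation are assumptions for the example.

```python
def otsu_threshold(gray):
    """Otsu's threshold for a flat sequence of 8-bit G-channel values."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg, w_bg = 0.0, 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg              # background class mean
        m_fg = (sum_all - sum_bg) / w_fg  # foreground class mean
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:        # maximise between-class variance
            best_var, best_t = var_between, t
    return best_t

def tile_grid(width, height, tile=256):
    """Top-left corners of non-overlapping tile x tile blocks."""
    return [(x, y)
            for y in range(0, height - tile + 1, tile)
            for x in range(0, width - tile + 1, tile)]
```

In practice each corner would be kept only if its tile overlaps the Otsu mask, and the white background (high G values) would fall above the threshold.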
(2) constructing a local image classification network model, and classifying image blocks;
the local image classification network model is built from ResNet18 by removing the last two convolution blocks and the fully connected layer and adding a new fully connected layer; the cut image blocks are classified by this network to obtain the classification output;
in the invention, the model for executing the classification task is divided into two independent parts: the first is a classifier for distinguishing tumor regions and paracancerous regions; the other is a three-classifier for distinguishing a lymph gathering area, a necrosis area and other areas; the output of the second classifier is a value, the activation function is a sigmoid function and represents the confidence coefficient of the cancer-side region, if the confidence coefficient is high, the image block belongs to the cancer-side region, otherwise, the image block belongs to the tumor region; outputting a vector with the length of 3 by a full connection layer of the three classifiers, wherein an activation function is a softmax function, finally outputting confidence coefficients of each element in the obtained 3-dimensional vector corresponding to other regions, a lymph dense region and a necrosis region in sequence, and taking a high-confidence-coefficient as a classification result;
(3) carrying out overall heat map post-processing on the pathological section;
the classification results are assembled into two heat maps according to the coordinates of the image blocks within the pathological section, corresponding to the results of the binary classifier and of the three-way classifier respectively; each heat map is median filtered, and then blocks with small areas in the heat map are removed, the filter size and the area threshold being chosen according to requirements and actual conditions.
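The heat-map post-processing of step (3), a median filter followed by removal of small connected regions, can be sketched on a block-level label grid. This is a minimal pure-Python sketch assuming a 3 × 3 filter and 4-connectivity; the patent deliberately leaves the filter size and area threshold open.

```python
from collections import deque

def median_filter3(grid):
    """3x3 median filter over a 2-D grid of class labels."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            vals = sorted(grid[ny][nx]
                          for ny in range(max(0, y - 1), min(h, y + 2))
                          for nx in range(max(0, x - 1), min(w, x + 2)))
            out[y][x] = vals[len(vals) // 2]
    return out

def remove_small_regions(grid, min_area, background=0):
    """Reset 4-connected components smaller than min_area to background."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if seen[y][x] or grid[y][x] == background:
                continue
            label, comp = grid[y][x], []
            queue = deque([(y, x)])
            seen[y][x] = True
            while queue:                          # BFS over one component
                cy, cx = queue.popleft()
                comp.append((cy, cx))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = cy + dy, cx + dx
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                            and grid[ny][nx] == label):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(comp) < min_area:              # drop too-small regions
                for cy, cx in comp:
                    grid[cy][cx] = background
    return grid
```

A larger filter or a higher `min_area` removes fine and outlying regions more aggressively, matching the trade-off noted in the text.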
In the invention, the training process of the classification network model is as follows:
Firstly, several non-overlapping image blocks of 256 × 256 pixels are cut from the differently labeled regions of each pathological slide and stored as a data set. The binary classifier and the three-way classifier are then trained respectively on the data sets formed from these image blocks;
the binary classifier is trained on at least 240 pathological slides, from which 100,000 tumor-region blocks and 140,000 paracancerous-region blocks are cut; the three-way classifier is trained on at least 80 pathological slides, from which 240,000 lymph-dense, 180,000 necrotic-region, and 380,000 other-region blocks are cut. All image blocks are 256 × 256 pixels.
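The patch cutting described above, grid-aligned 256-pixel blocks that lie entirely inside one labeled region and do not overlap, might be sampled as follows. The function name, the boolean-mask representation of a labeled region, and the fixed RNG seed are assumptions for the example.

```python
import random

def sample_patches(mask, n, tile=256, seed=0):
    """Randomly pick up to n grid-aligned tile corners whose whole tile lies
    inside the labeled mask; grid alignment makes the sampled patches
    non-overlapping by construction."""
    h, w = len(mask), len(mask[0])
    candidates = [(x, y)
                  for y in range(0, h - tile + 1, tile)
                  for x in range(0, w - tile + 1, tile)
                  if all(mask[yy][xx]
                         for yy in range(y, y + tile)
                         for xx in range(x, x + tile))]
    rng = random.Random(seed)
    rng.shuffle(candidates)
    return candidates[:n]
```

With one mask per annotation class, running this per slide and per class would build the per-class patch data sets described in the text.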
The invention also relates to a segmentation system for cancer whole-slide images based on the local classification neural network, corresponding to the above method. The system comprises three modules: a pathological section tiling module, the local image classification network model, and a global heat-map post-processing module, which perform the three steps above respectively.
In the invention, a single classifier processes a WSI in 734.22 seconds on average; the graphics card used in testing is a GeForce GTX 1080 Ti. The segmentation effect is shown in Fig. 2 and Fig. 3: the invention accurately segments lymph-dense and necrotic regions and distinguishes paracancerous from tumor regions, helping a physician quickly locate fine tissue regions, while the distribution and area of each region also aid the physician in analyzing and diagnosing the slide.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
FIG. 2 is a graph showing the effect of segmenting the lymph-dense and necrotic regions of a pathological section.
Fig. 3 is a graph showing the effect of segmenting the paracancerous region of a pathological section.
Detailed Description
The embodiments of the present invention are described in detail below, but the scope of the present invention is not limited to the examples.
Using the pipeline of Fig. 1, 86 pathological slides annotated with lymph-dense and necrotic regions and 241 slides annotated with paracancerous and tumor regions are used to train the two local classification neural networks respectively, yielding an automatic detection and diagnosis model.
The specific process is as follows:
(1) Before training, Otsu's method is used to threshold the green channel of the pathological section to separate the background, giving a mask of the tissue region. The WSI is divided into non-overlapping image blocks of 256 × 256 pixels, and blocks are then randomly sampled from the differently annotated areas inside the mask as training samples. A sampled block must have all its pixels within a single annotated area, and the sampled blocks must not overlap one another;
(2) During training, the initial learning rate is set to 0.00005, and mini-batch stochastic gradient descent is used to minimize the loss function. The batch size is set to 256;
(3) At inference time, Otsu's method again thresholds the green channel of the pathological section to obtain the tissue mask. The WSI is divided into non-overlapping 256 × 256 image blocks, and only the blocks inside the mask are fed into the network for classification. The class of each block is determined from the confidences output by the network, and the block-level results are assembled into a heat map according to the blocks' position coordinates within the slide. The heat map is median filtered and regions of too small an area are removed to obtain the final segmentation result. The larger the median filter and the area threshold, the more easily fine and outlying regions are removed; the specific values are chosen to suit the actual requirements.
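The optimization in step (2), mini-batch SGD with an initial learning rate of 5 × 10⁻⁵, can be illustrated on a toy one-dimensional logistic model. The real networks are ResNet18-derived, so this is only a sketch of the update rule; the function names, the toy data, and the epoch count are all assumptions.

```python
import math
import random

def sgd_logistic(xs, ys, lr=5e-5, batch_size=4, epochs=300, seed=0):
    """Mini-batch SGD minimizing binary cross-entropy for p = sigmoid(w*x + b)."""
    rng = random.Random(seed)
    w = b = 0.0
    order = list(range(len(xs)))
    for _ in range(epochs):
        rng.shuffle(order)                       # new mini-batch split each epoch
        for start in range(0, len(order), batch_size):
            batch = order[start:start + batch_size]
            gw = gb = 0.0
            for i in batch:
                p = 1.0 / (1.0 + math.exp(-(w * xs[i] + b)))
                err = p - ys[i]                  # dBCE/dlogit for one sample
                gw += err * xs[i]
                gb += err
            w -= lr * gw / len(batch)            # descend along the batch gradient
            b -= lr * gb / len(batch)
    return w, b

def bce(xs, ys, w, b):
    """Mean binary cross-entropy of the fitted model."""
    eps = 1e-12
    loss = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        loss -= y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
    return loss / len(xs)
```

With so small a learning rate the loss falls slowly but steadily, which is the trade-off a schedule starting at 5 × 10⁻⁵ accepts for stability.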
FIG. 2 is a graph showing the effect of the invention in segmenting the lymph-dense and necrotic regions on a pathological section: the area within the green line is the part predicted to be lymph-dense; the area within the blue line is the part predicted to be necrotic.
FIG. 3 is a diagram showing the effect of the invention in segmenting the paracancerous region on a pathological section: the part within the red line is predicted to be paracancerous, and the remaining region other than the background can be regarded as the tumor region.
References
[1] Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 1097-1105 (2012).
[2] Russakovsky, O., Deng, J., Su, H. et al. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision 115, 211-252 (2015).
[3] Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations, (2014).
[4] He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition, 770-778 (2016).
[5] Girshick, R., Donahue, J., Darrell, T. & Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. IEEE Conference on Computer Vision and Pattern Recognition, 580-587 (2014).
[6] Girshick, R. Fast R-CNN. IEEE International Conference on Computer Vision, 1440-1448 (2015).
[7] Ren, S., He, K., Girshick, R. & Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Neural Information Processing Systems, (2015).
[8] Long, J., Shelhamer, E. & Darrell, T. Fully convolutional networks for semantic segmentation. IEEE Conference on Computer Vision and Pattern Recognition, 3431-3440 (2015).
[9] Chen, L., Papandreou, G., Kokkinos, I., Murphy, K. & Yuille, A. L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence 40, 834-848 (2018).
[10] Otsu, N. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics 9(1), 62-66 (1979).
Claims (2)
1. A cancer whole-slide image segmentation method based on a local classification neural network, characterized by comprising the following specific steps:
(1) dividing pathological sections into blocks;
firstly, Otsu's method is used to threshold the G channel of the RGB pathological section image, removing the white background and obtaining a mask of the approximate position of the tissue to be analyzed; then, the whole image is partitioned into non-overlapping image blocks of 256 × 256 pixels, and only the image blocks inside the mask are taken as analysis objects;
(2) constructing a local image classification network model, and classifying image blocks;
the local image classification network model is built from ResNet18 by removing the last two convolution blocks and the fully connected layer and adding a new fully connected layer; the cut image blocks are classified by this network to obtain the classification output;
wherein the model performing the classification task is divided into two independent parts: a binary classifier distinguishing tumor regions from paracancerous regions, and a three-way classifier distinguishing lymph-dense regions, necrotic regions, and other regions; the binary classifier outputs a single value through a sigmoid activation, representing the confidence that the image block belongs to the paracancerous region; if the confidence is high, the block is paracancerous, otherwise it belongs to the tumor region; the fully connected layer of the three-way classifier outputs a vector of length 3 through a softmax activation, whose elements give, in order, the confidences of the other, lymph-dense, and necrotic regions, and the class with the highest confidence is taken as the classification result;
(3) carrying out overall heat map post-processing on the pathological section;
the classification results are assembled into two heat maps according to the coordinates of the image blocks within the pathological section, corresponding to the results of the binary classifier and of the three-way classifier respectively; each heat map is median filtered, and then blocks with small areas in the heat map are removed, the filter size and the area threshold being chosen according to requirements and actual conditions.
2. The segmentation method according to claim 1, wherein the training procedure of the classification network model is as follows:
firstly, several non-overlapping image blocks of 256 × 256 pixels are cut from the differently labeled regions of each pathological slide according to the labels and stored as a data set; the binary classifier and the three-way classifier are then trained respectively on the data sets formed from these image blocks;
the binary classifier is trained on at least 240 pathological slides, from which 100,000 tumor-region blocks and 140,000 paracancerous-region blocks are cut; the three-way classifier is trained on at least 80 pathological slides, from which 240,000 lymph-dense, 180,000 necrotic-region, and 380,000 other-region blocks are cut; all image blocks are 256 × 256 pixels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010891178.5A CN112102332A (en) | 2020-08-30 | 2020-08-30 | Cancer WSI segmentation method based on local classification neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112102332A (en) | 2020-12-18
Family
ID=73756651
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010891178.5A Pending CN112102332A (en) | 2020-08-30 | 2020-08-30 | Cancer WSI segmentation method based on local classification neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112102332A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112819768A (en) * | 2021-01-26 | 2021-05-18 | 复旦大学 | DCNN-based cancer full-field digital pathological section survival analysis method |
CN113450381A (en) * | 2021-06-16 | 2021-09-28 | 上海深至信息科技有限公司 | System and method for evaluating accuracy of image segmentation model |
CN114406502A (en) * | 2022-03-14 | 2022-04-29 | 扬州市振东电力器材有限公司 | Laser metal cutting method and system |
CN116596926A (en) * | 2023-07-17 | 2023-08-15 | 成都交大光芒科技股份有限公司 | Tobacco shred parameter uniformity detection method and system |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108334909A (en) * | 2018-03-09 | 2018-07-27 | 南京天数信息科技有限公司 | Cervical carcinoma TCT digital slices data analysing methods based on ResNet |
CN108665454A (en) * | 2018-05-11 | 2018-10-16 | 复旦大学 | A kind of endoscopic image intelligent classification and irregular lesion region detection method |
CN109118485A (en) * | 2018-08-13 | 2019-01-01 | 复旦大学 | Early cancer detection system for digestive endoscopy images based on a multitask neural network |
CN110188767A (en) * | 2019-05-08 | 2019-08-30 | 浙江大学 | Keratonosus image sequence feature extraction and classifying method and device based on deep neural network |
CN110570421A (en) * | 2019-09-18 | 2019-12-13 | 上海鹰瞳医疗科技有限公司 | multitask fundus image classification method and apparatus |
CN110705565A (en) * | 2019-09-09 | 2020-01-17 | 西安电子科技大学 | Lymph node tumor region identification method and device |
CN111079862A (en) * | 2019-12-31 | 2020-04-28 | 西安电子科技大学 | Thyroid papillary carcinoma pathological image classification method based on deep learning |
CN111462076A (en) * | 2020-03-31 | 2020-07-28 | 湖南国科智瞳科技有限公司 | Method and system for detecting fuzzy area of full-slice digital pathological image |
CN111462036A (en) * | 2020-02-18 | 2020-07-28 | 腾讯科技(深圳)有限公司 | Pathological image processing method based on deep learning, model training method and device |
- 2020-08-30: CN application CN202010891178.5A filed (publication CN112102332A); status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111985536B | Gastroscopic pathology image classification method based on weakly supervised learning | |
CN110232383B (en) | Focus image recognition method and focus image recognition system based on deep learning model | |
CN106056595B | Auxiliary diagnosis system for automatic identification of benign and malignant thyroid nodules based on a deep convolutional neural network | |
CN109325942B (en) | Fundus image structure segmentation method based on full convolution neural network | |
CN112102332A (en) | Cancer WSI segmentation method based on local classification neural network | |
CN110288597B (en) | Attention mechanism-based wireless capsule endoscope video saliency detection method | |
CN112529894B (en) | Thyroid nodule diagnosis method based on deep learning network | |
Liu et al. | A framework of wound segmentation based on deep convolutional networks | |
CN109670510A (en) | A kind of gastroscopic biopsy pathological data screening system and method based on deep learning | |
CN109858540B (en) | Medical image recognition system and method based on multi-mode fusion | |
CN110120040A (en) | Sectioning image processing method, device, computer equipment and storage medium | |
WO2022001571A1 (en) | Computing method based on super-pixel image similarity | |
CN112070772A (en) | Blood leukocyte image segmentation method based on UNet + + and ResNet | |
CN111951221A (en) | Glomerular cell image identification method based on deep neural network | |
CN110647875A (en) | Method for segmenting and identifying model structure of blood cells and blood cell identification method | |
CN112819768B (en) | DCNN-based survival analysis method for cancer full-field digital pathological section | |
CN114627067A (en) | Wound area measurement and auxiliary diagnosis and treatment method based on image processing | |
CN110288574A (en) | A kind of adjuvant Ultrasonographic Diagnosis hepatoncus system and method | |
CN110021019A (en) | A kind of thickness distributional analysis method of the AI auxiliary hair of AGA clinical image | |
CN112750132A (en) | White blood cell image segmentation method based on dual-path network and channel attention | |
CN116758336A (en) | Medical image intelligent analysis system based on artificial intelligence | |
CN112419246B (en) | Depth detection network for quantifying esophageal mucosa IPCLs blood vessel morphological distribution | |
CN106960199A | Method for complete extraction of the sclera (white of the eye) region in RGB eye images | |
CN114372962A (en) | Laparoscopic surgery stage identification method and system based on double-particle time convolution | |
CN112102234B (en) | Ear sclerosis focus detection and diagnosis system based on target detection neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20201218 ||