CN112580748B - Method for counting classified cells of stain image - Google Patents

Method for counting classified cells of stain image

Info

Publication number
CN112580748B
Authority
CN
China
Prior art keywords
detection
cells
cell
training
patch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202011608423.3A
Other languages
Chinese (zh)
Other versions
CN112580748A (en
Inventor
仲佳慧
曹永盛
张于凤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202011608423.3A
Publication of CN112580748A
Application granted
Publication of CN112580748B
Legal status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for counting classified cells in a stain image. A finely classified cell data set is first constructed; a model is then trained with a deep-learning object-detection method to identify the first-class and second-class cells in the current stain image, while the detection results are analyzed and the model optimized to obtain a more accurate count of first-class cells. Compared with manual annotation by a pathologist, the F1 score for first-A-type cells in the current stain image reaches about 95%, and the F1 score for first-B-type cells is about 91%.

Description

Method for counting classified cells of stain image
Technical Field
The present invention relates to a deep learning technique and an object detection technique in computer vision.
Background
Object detection, an important part of image understanding, is the task of finding all objects of interest in an image and determining their positions and sizes; it is one of the core problems in computer vision. The convolutional neural network (CNN) is a basic tool of deep learning and is widely used for image analysis. Convolutional networks such as VGG, GoogLeNet and ResNet show excellent performance in object detection and semantic segmentation. Unlike image classification, object detection requires locating objects within an image. Deep-learning detection models can be divided into two stages: one stage generates region proposals, and the other classifies each region and provides a confidence for each object. Related methods include Fast R-CNN and its refinements, Faster R-CNN, SPP-Net, R-FCN and Mask R-CNN. Deep learning has achieved unprecedented performance in a variety of tasks, particularly in the biomedical field. In addition, end-to-end detection methods such as SSD, YOLO and RON predict the size, position and label of objects directly, without intermediate steps, and detect faster than the two-stage Faster R-CNN. Although CNNs have attractive qualities, they still require a sufficiently large training set.
The Ki-67 proliferation index is an important biomarker of cancer-cell proliferation; it is closely related to tumor differentiation, invasion, metastasis and prognosis, so rapidly obtaining an accurate Ki-67 index is of great significance for clinical research. The Ki-67 index is the proportion of positive cancer cells among all cancer cells, but because the nuclei of different cell types in a Ki-67 stained image are extremely similar in shape and color, some traditional methods mistake non-tumor cells for tumor cells, causing large counting errors. Using a GAN (generative adversarial network), Ruihan Zhang et al. of Nanjing University of Aeronautics and Astronautics proposed a model-training approach that performs data augmentation by generating more artificial samples and improves Ki-67 accuracy by combining a CNN with SSD. Dayong Wang et al. divided whole segmented breast-cancer images into patches and classified at the patch level. Saha et al. built an automatic Ki-67 scoring system that uses a Gamma mixture model (GMM) with expectation maximization for seed-point detection, patch selection and deep learning, reaching 93% precision and 88% recall. Practical analysis of Ki-67 suggests that a limited labeled data set may be insufficient to train a CNN, which in turn leads to overfitting of the training set and reduced accuracy.
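In symbols, with N_pos and N_neg denoting the counts of positive and negative cancer cells, the definition above reads:

    Ki-67 index = N_pos / (N_pos + N_neg) × 100%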
Many computerized methods rely on color features to detect and classify cells for Ki-67 scoring. Al-Lahham et al. first applied K-means clustering in a transformed color space, then segmented and counted cells in Ki-67 stained histological images using mathematical morphology and connected-component analysis. Other work quantifies tumor cells with an image-analysis system in which a color-intensity threshold must be chosen appropriately. Markiewicz used a watershed algorithm to separate touching cells and a support vector machine (SVM) classifier to distinguish immuno-positive from immuno-negative cells. However, these methods cannot simultaneously distinguish tumor cells from non-tumor cells accurately and separate touching cells. Ki-67 images belong to immunohistochemically (IHC) stained images, and automated nuclear segmentation of IHC images has recently attracted attention. Most related research focuses on segmentation based on thresholding, edge detection, or machine-learning pixel classification. Pixel-intensity thresholding methods use intensities in the red, green and blue (RGB) color space and apply an intensity transform and a global threshold based on the difference between brown and blue. In supervised and unsupervised learning methods, the single pixel is the object of study, and pixels of the same class together make up each tissue component. Before supervised classification, researchers must select a representative region of each tissue component, covering all cell types, as training samples; performance then depends largely on the quality and comprehensiveness of these predefined samples. Because deep learning is not yet widely used in medicine, no data set can be used directly for Ki-67 stained images. How to effectively improve detection metrics on Ki-67 stained images remains a key research direction.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method, based on fully supervised deep learning, for detecting and counting classified cells in a stain image.
The technical solution adopted by the invention is a method for counting classified cells of a stain image, comprising the following steps:
1) Creating a training set:
1-1) Collect specimen images, manually annotate a portion of them, and cut small image patches from the annotated whole scanned slides. The labels used for annotation fall into four categories: first-A-type cells, first-B-type cells, second-A-type cells and second-B-type cells, where the first-A and first-B types belong to the first cell class and the second-A and second-B types belong to the second cell class;
1-2) Discard patches with a high background ratio or blurred cells; the remaining patches and their corresponding labels form the pre-training set;
1-3) Input the pre-training set into a Libra R-CNN network model to complete pre-training;
1-4) Input a portion of the unlabeled specimen images into the pre-trained network model for testing to obtain its test output;
1-5) Manually correct the test output of the pre-trained model, then discard patches with a high background ratio or blurred cells, and add the corrected and screened results to the pre-training set. If the pre-training termination condition is met, take the current pre-training set as the training set and go to step 2); otherwise return to step 1-3);
2) Input the finally obtained, refined training set into the Libra R-CNN network model for training to obtain the trained model;
3) Detection steps:
3-1) Select a region of interest (ROI) on the input whole scanned slide to be detected; based on the number of first-class cells estimated by machine learning, choose a region rich in first-class cells as the ROI, or let a doctor select the ROI freely;
3-2) Cut patches within the selected ROI;
3-3) Detect and classify all patches to obtain their final detection results; each patch is processed as follows:
3-3-1) Detect each patch with the trained model and de-duplicate repeated detection boxes in the preset overlap region between patches;
3-3-2) Traverse all detection boxes on the current patch and de-duplicate boxes of different classes; after traversal, the final detection result of the patch is obtained. Cross-class de-duplication works as follows: compute the intersection over union (IoU) of every pair of detection boxes of different classes; when the IoU exceeds a preset threshold, delete the box with the lower confidence and keep the box with the higher confidence;
4) Map the coordinates of all patch detection results back onto the whole scanned slide, and compute the number of first-class cells from the counted numbers of first-A-type and first-B-type cells.
The final data set is formed by collecting specimens, sorting and screening, cutting, and screening again. The cells of the data set fall into four types in total: two for the first class and two for the second. Detecting the second class not only allows the first class to be distinguished more accurately but can also support the analysis of other medical indices. The Libra R-CNN network is an improvement of Faster R-CNN and greatly improves detection performance. Because the problem of the same cell carrying detection boxes of different classes had not been solved, the invention specifically provides a method for de-duplicating detection boxes across classes.
The invention detects and classifies accurately, providing a basis for accurately counting the classified cells of a stain image; it supplies precise counts of the classified cells and thus better assistance for clinical medicine.
Drawings
FIG. 1 shows the typical morphology of the four cell types;
FIG. 2 visualizes the detection results on a whole slide;
FIG. 3 compares experimental data.
Detailed Description
The invention is explained in further detail below with reference to the figures and examples.
The embodiment applies the classified-cell counting method of the invention. Among the four label types, first-A-type cells correspond to positive cancer cells, first-B-type cells to negative cancer cells, second-A-type cells to lymphocytes, and second-B-type cells to stromal (mesenchymal) cells.
The invention mainly studies Ki-67 stained images of breast cancer, in which cells appear roughly brown or blue with varying morphology. Positive cancer cells are generally brown, round and large; negative cancer cells are generally blue, round and about the same size as positive cancer cells; normal cells (stromal cells, lymphocytes, etc.) are mostly blue, occasionally brown, with diverse morphologies. We group the cells into four categories in total: positive cancer cells, negative cancer cells, lymphocytes and stromal cells. FIG. 1 shows their typical morphology. Typical lymphocytes and stromal cells are mostly blue and easily distinguished from cancer cells, but some atypical lymphocytes and stromal cells resemble cancer cells (especially negative cancer cells) and are easily confused with them.
Fully supervised object-detection methods such as R-CNN, Fast R-CNN and Faster R-CNN are two-stage: the region of interest (ROI) is first extracted and then classified. One-stage detectors such as SSD, YOLOv2 and RetinaNet were developed later; they are faster than two-stage methods but less accurate. Since the invention focuses on precision, the two-stage network Libra R-CNN is chosen. The overall Libra R-CNN architecture consists of two main modules: (1) a region proposal network (RPN) that returns candidate ROIs in the image; (2) a detection network that classifies the objects within each region while performing bounding-box regression. The RPN anchors have three scales and three aspect ratios. Because the detection targets are irregularly shaped cells, several aspect ratios are needed; the invention uses 1:1, 1:2 and 2:1. A ResNeXt model serves as the backbone convolutional neural network to extract a more complete, powerful feature map from the input image.
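For illustration only, the anchor settings described above might be written in an mmdetection-style configuration roughly as follows; the concrete scale values and FPN strides are assumptions, not the authors' actual settings:

    # Hypothetical RPN anchor configuration (mmdetection-style sketch)
    anchor_generator = dict(
        type='AnchorGenerator',
        scales=[4, 8, 16],           # three scales (values assumed)
        ratios=[0.5, 1.0, 2.0],      # aspect ratios 1:2, 1:1, 2:1 as in the text
        strides=[4, 8, 16, 32, 64])  # typical FPN level strides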
For a fully supervised network the data set is critically important, so the data must be processed carefully. We first selected whole Ki-67 slides with clearly differing proportions of positive cancer cells. Patches (small images) of size 1024 × 1024 were cut within the regions annotated by the pathologist, with a certain overlap set according to the pathologist's needs. To reduce the annotation workload, a doctor annotates three or four WSIs (whole-slide images); patches are cut within the annotated areas, patches with a high background ratio or blurred cells are discarded, and the remaining patches are organized for preliminary training to obtain a model. More WSIs are then tested on this model, the test results are organized and returned to the doctor for correction, the doctor re-labels misjudged and missed cells, the corrections are added to the training set to fine-tune the model, and these steps are repeated.
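A minimal Python sketch of the patch cutting described above, assuming the slide is readable with OpenSlide; the overlap value and function names are illustrative:

    import openslide

    def cut_patches(wsi_path, patch_size=1024, overlap=128):
        """Tile a whole-slide image into patch_size x patch_size patches
        whose origins are patch_size - overlap pixels apart."""
        slide = openslide.OpenSlide(wsi_path)
        width, height = slide.dimensions        # level-0 size in pixels
        stride = patch_size - overlap
        patches = []
        for y in range(0, height - patch_size + 1, stride):
            for x in range(0, width - patch_size + 1, stride):
                # read_region returns an RGBA PIL image at the given level
                patch = slide.read_region((x, y), 0,
                                          (patch_size, patch_size)).convert('RGB')
                patches.append(((x, y), patch))  # keep origin for coordinate mapping
        return patches

Background and blur screening (step 1-2) can then be applied to the returned patches before training.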
A limited training set may cause the model to overfit, reducing accuracy. To increase data diversity, Ki-67-style images are generated from pictures of other staining modes by style transfer, thereby augmenting the data. To achieve this we use CycleGAN, which performs unpaired image-to-image translation, learning the mapping between two image domains X and Y from unpaired examples. The mapping G: X → Y and the inverse mapping F: Y → X are learned jointly using CNNs. We train CycleGAN to learn the mapping between a source domain Xs and a target domain Xt and convert other types of staining images into Ki-67 images; the additional samples generated by CycleGAN serve as data augmentation.
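The core of this scheme is CycleGAN's cycle-consistency objective; the PyTorch sketch below shows only this term (the adversarial losses are omitted), with G and F standing for the two generator networks:

    import torch.nn as nn

    l1 = nn.L1Loss()

    def cycle_consistency_loss(G, F, x_src, x_tgt, lam=10.0):
        """Cycle-consistency term of CycleGAN: F(G(x)) should reconstruct x
        (X -> Y -> X) and G(F(y)) should reconstruct y (Y -> X -> Y).
        G: source -> target, F: target -> source; lam is the usual weight."""
        loss_fwd = l1(F(G(x_src)), x_src)
        loss_bwd = l1(G(F(x_tgt)), x_tgt)
        return lam * (loss_fwd + loss_bwd)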
Model training then begins. As data diversity grows, a series of comparison experiments is run and the best model is selected for detection. Visual analysis of the detection results reveals the detailed cases of false detections and missed cells; the main problems are: 1. missed detection of overlapping cells; 2. two detection boxes on the same cell; 3. wrong cell class. The embodiment replaces NMS with soft-NMS, which alleviates the missed detection of overlapping cells. For the second problem, the detection results are post-processed using the intersection over union (IoU): the confidences of the two boxes on the same cell are compared and the box with the higher confidence is kept. The optimized detection results improve by three to four percentage points. The last problem concerns confusion between classes, roughly of two kinds: 1) confusion between negative cancer cells and normal cells, which is the dominant error; 2) confusion between positive cancer cells and lymphocytes, which is rare. Both improve as the data set improves, though the remaining margin is small; this is also where the performance of the invention can be further raised.
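For reference, a sketch of linear soft-NMS (Bodla et al.): instead of discarding boxes that overlap the current best box, their scores are decayed by 1 - IoU, which helps retain genuinely overlapping cells. The thresholds are illustrative:

    import numpy as np

    def iou_one_to_many(box, boxes):
        """IoU between one box and an (N, 4) array, boxes as [x1, y1, x2, y2]."""
        x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
        x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area = (box[2] - box[0]) * (box[3] - box[1])
        areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        return inter / (area + areas - inter)

    def soft_nms(boxes, scores, iou_thresh=0.3, score_thresh=0.001):
        """Linear soft-NMS: decay, rather than remove, overlapping boxes."""
        scores = scores.astype(float).copy()
        remaining = list(range(len(boxes)))
        keep = []
        while remaining:
            best = max(remaining, key=lambda i: scores[i])
            keep.append(best)
            remaining.remove(best)
            if not remaining:
                break
            rest = np.array(remaining)
            overlaps = iou_one_to_many(boxes[best], boxes[rest])
            decay = np.where(overlaps > iou_thresh, 1.0 - overlaps, 1.0)
            scores[rest] *= decay
            remaining = [int(i) for i in rest if scores[i] >= score_thresh]
        return keep  # indices of retained detections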
The method comprises the following specific steps:
1) Creating a training set:
1-1) Collect specimen images, manually annotate a portion of them, and cut small image patches from the annotated Ki-67 whole scanned slides;
1-2) Discard patches with a high background ratio or blurred cells; the remaining patches and their corresponding labels form the pre-training set;
1-3) Convert images of staining modes other than Ki-67 into Ki-67 images with a CycleGAN network, thereby augmenting the pre-training set;
1-4) Input the pre-training set into a Libra R-CNN network model to complete pre-training;
1-5) Input a portion of the unlabeled specimen images into the pre-trained network model for testing to obtain its test output;
1-6) Manually correct the test output of the pre-trained model, discard patches with a high background ratio or blurred cells, add the corrected and screened results to the pre-training set, and judge whether the pre-training termination condition is met: if so, take the current pre-training set as the training set and go to step 2); otherwise return to step 1-4);
2) Input the finally obtained, refined training set into the Libra R-CNN network model for training to obtain the trained model;
3) Detection steps:
3-1) Select a region of interest (ROI) on the input Ki-67 whole scanned slide to be detected; based on the number of cancer cells estimated by machine learning, choose a region rich in cancer cells as the ROI, or let a doctor select the ROI freely;
3-2) Cut patches within the selected ROI;
3-3) Detect and classify all patches to obtain their final detection results; each patch is processed as follows:
3-3-1) Detect each patch with the trained model and de-duplicate repeated detection boxes on cells in the overlap region between patches;
3-3-2) Traverse all detection boxes on the current patch and de-duplicate pairs of boxes of different classes on the same cell; after traversal, the final detection result of the patch is obtained. Cross-class de-duplication works as follows (see the sketch after this list): compute the IoU of every pair of detection boxes of different classes; when the IoU exceeds a preset threshold, delete the box with the lower confidence and keep the box with the higher confidence;
4) Map the coordinates of all patch detection results back onto the whole Ki-67 slide, and calculate the Ki-67 index from the counted numbers of positive and negative cancer cells.
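As an illustration only and not part of the claimed method, the following minimal Python sketch shows how steps 3-3-2) and 4) could be implemented; the detection-record layout (box, score, label), the label strings 'positive' and 'negative', and the 0.5 threshold are illustrative assumptions:

    def pair_iou(a, b):
        """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    def dedup_cross_class(dets, iou_thresh=0.5):
        """Step 3-3-2): for each pair of boxes of different classes whose IoU
        exceeds the threshold, keep only the higher-confidence box."""
        keep = [True] * len(dets)
        for i in range(len(dets)):
            for j in range(i + 1, len(dets)):
                if not (keep[i] and keep[j]):
                    continue
                (bi, si, li), (bj, sj, lj) = dets[i], dets[j]
                if li != lj and pair_iou(bi, bj) > iou_thresh:
                    keep[j if si >= sj else i] = False
        return [d for d, k in zip(dets, keep) if k]

    def ki67_index(final_dets):
        """Step 4): Ki-67 index from the counts of positive and negative
        cancer cells (label strings are assumptions)."""
        n_pos = sum(1 for _, _, label in final_dets if label == 'positive')
        n_neg = sum(1 for _, _, label in final_dets if label == 'negative')
        return n_pos / (n_pos + n_neg) if (n_pos + n_neg) else 0.0

Mapping patch coordinates back to the slide (step 4) amounts to adding each patch's origin (x, y), recorded when the patch was cut, to its box coordinates before counting.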
FIG. 2 shows the detection results on a Ki-67 whole slide. Since the Ki-67 index depends only on cancer cells, we finally mark only cancer cells; for clarity each detection box is rendered as a dot over the cell, dark black for positive cancer cells and light gray for negative cancer cells. The freely drawn region in the figure is a rectangle (irregular regions are also supported). We thus finally provide the doctor with the Ki-67 index and the detected cancer cells.
The evaluation metric of the embodiment is the F1 score, the harmonic mean of recall and precision, where recall = TP / (TP + FN) and precision = TP / (TP + FP) (TP: number of true positives; FP: number of false positives; FN: number of false negatives).
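A direct transcription of these definitions (guards against empty denominators are omitted for brevity):

    def f1_score(tp, fp, fn):
        """F1 = harmonic mean of precision and recall."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)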
Compared with the pathologist's manual annotation on Ki-67 images of breast cancer, the F1 score is around 95% for positive cancer cells and around 91% for negative cancer cells. The specific metrics are shown in FIG. 3, and the positions of the detection boxes are fairly accurate. The metrics for negative cancer cells are slightly lower than for positive cancer cells because negative cancer cells are more easily confused with normal cells. The test results show that the method can preliminarily assist doctors in clinical judgment.

Claims (2)

1. A method for counting classified cells of a stain image, comprising the steps of:
1) Creating a training set:
1-1) collecting specimen images, manually annotating a portion of them, and cutting small image patches from the annotated whole scanned slides; the labels used for annotation fall into four categories: first-A-type cells, first-B-type cells, second-A-type cells and second-B-type cells, wherein the first-A-type and first-B-type cells belong to a first cell class and the second-A-type and second-B-type cells belong to a second cell class;
1-2) discarding patches with a high background ratio or blurred cells, the remaining patches and their corresponding labels forming a pre-training set;
1-3) inputting the pre-training set into a Libra R-CNN network model to complete pre-training;
1-4) inputting a portion of the unlabeled specimen images into the pre-trained network model for testing to obtain the test results output by the pre-trained model;
1-5) manually correcting the test results output by the pre-trained model, discarding patches with a high background ratio or blurred cells, adding the corrected and screened test results to the pre-training set, and judging whether the pre-training termination condition is met; if so, taking the current pre-training set as the training set and proceeding to step 2); otherwise returning to step 1-3);
2) inputting the finally obtained, refined training set into the Libra R-CNN network model for training to obtain a trained model;
3) detection steps:
3-1) selecting a region of interest (ROI) on the input whole scanned slide to be detected; according to the number of first-class cells estimated by machine learning, selecting a region rich in first-class cells as the ROI, or letting a doctor select the ROI freely;
3-2) cutting patches within the selected ROI;
3-3) detecting and classifying all patches to obtain their final detection results; the detection and classification of each patch proceeds as follows:
3-3-1) detecting each patch with the trained model and de-duplicating repeated detection boxes in the preset overlap region between patches;
3-3-2) traversing all detection boxes on the current patch and de-duplicating detection boxes of different classes, obtaining the final detection result of the patch after traversal; the cross-class de-duplication is performed as follows: computing the intersection over union (IoU) of every pair of detection boxes of different classes, and when the IoU exceeds a preset threshold, deleting the box with the lower confidence and keeping the box with the higher confidence;
4) mapping the coordinates of all patch detection results onto the whole scanned slide, and calculating the number of first-class cells from the counted numbers of first-A-type and first-B-type cells.
2. The method of claim 1, wherein the specimen images include images of the current stain and images of other stain types converted into the current stain, the conversion being performed with a CycleGAN network.
CN202011608423.3A (priority and filing date 2020-12-30): Method for counting classified cells of stain image, granted as CN112580748B (en), Expired - Fee Related

Priority Applications (1)

Application Number: CN202011608423.3A; Priority Date: 2020-12-30; Filing Date: 2020-12-30; Title: Method for counting classified cells of stain image

Applications Claiming Priority (1)

Application Number: CN202011608423.3A; Priority Date: 2020-12-30; Filing Date: 2020-12-30; Title: Method for counting classified cells of stain image

Publications (2)

Publication Number Publication Date
CN112580748A: 2021-03-30
CN112580748B: 2022-10-14

Family

ID=75144427

Family Applications (1)

Application Number: CN202011608423.3A; Priority Date: 2020-12-30; Filing Date: 2020-12-30; Title: Method for counting classified cells of stain image; Status: Expired - Fee Related

Country Status (1)

Country Link
CN (1) CN112580748B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096086B (en) * 2021-04-01 2022-05-17 中南大学 Ki67 index determination method and system
CN113470041B (en) * 2021-05-26 2022-04-22 透彻影像(北京)科技有限公司 Immunohistochemical cell image cell nucleus segmentation and counting method and system
CN113591919B (en) * 2021-06-29 2023-07-21 复旦大学附属中山医院 Analysis method and system for prognosis of early hepatocellular carcinoma postoperative recurrence based on AI
CN113628199B (en) * 2021-08-18 2022-08-16 四川大学华西第二医院 Pathological picture stained tissue area detection method, pathological picture stained tissue area detection system and prognosis state analysis system
CN116309497B (en) * 2023-03-26 2023-10-03 湖南医药学院 Image recognition-based auxiliary analysis method for cancer cell counting and prognosis prediction

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011008262A2 (en) * 2009-07-13 2011-01-20 H. Lee Moffitt Cancer Center & Research Institute Methods and apparatus for diagnosis and/or prognosis of cancer
WO2012043499A1 (en) * 2010-09-30 2012-04-05 日本電気株式会社 Information processing device, information processing system, information processing method, program, and recording medium
WO2019133538A2 (en) * 2017-12-29 2019-07-04 Leica Biosystems Imaging, Inc. Processing of histology images with a convolutional neural network to identify tumors
CN110853005A (en) * 2019-11-06 2020-02-28 杭州迪英加科技有限公司 Immunohistochemical membrane staining section diagnosis method and device
CN111417958A (en) * 2017-12-07 2020-07-14 文塔纳医疗系统公司 Deep learning system and method for joint cell and region classification in biological images
CN111598849A (en) * 2020-04-29 2020-08-28 北京小白世纪网络科技有限公司 Pathological image cell counting method, equipment and medium based on target detection
CN111914937A (en) * 2020-08-05 2020-11-10 湖北工业大学 Lightweight improved target detection method and detection system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011008262A2 (en) * 2009-07-13 2011-01-20 H. Lee Moffitt Cancer Center & Research Institute Methods and apparatus for diagnosis and/or prognosis of cancer
WO2012043499A1 (en) * 2010-09-30 2012-04-05 日本電気株式会社 Information processing device, information processing system, information processing method, program, and recording medium
CN111417958A (en) * 2017-12-07 2020-07-14 文塔纳医疗系统公司 Deep learning system and method for joint cell and region classification in biological images
WO2019133538A2 (en) * 2017-12-29 2019-07-04 Leica Biosystems Imaging, Inc. Processing of histology images with a convolutional neural network to identify tumors
CN111542830A (en) * 2017-12-29 2020-08-14 徕卡生物系统成像股份有限公司 Processing histological images using convolutional neural networks to identify tumors
CN110853005A (en) * 2019-11-06 2020-02-28 杭州迪英加科技有限公司 Immunohistochemical membrane staining section diagnosis method and device
CN111598849A (en) * 2020-04-29 2020-08-28 北京小白世纪网络科技有限公司 Pathological image cell counting method, equipment and medium based on target detection
CN111914937A (en) * 2020-08-05 2020-11-10 湖北工业大学 Lightweight improved target detection method and detection system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Applied research on Ki-67 counting based on optically scanned images; 仲佳慧; China Masters' Theses Full-text Database (Medicine & Health Sciences); 2022-01-31; E072-1780 *

Also Published As

Publication number Publication date
CN112580748A (en) 2021-03-30

Similar Documents

Publication Publication Date Title
CN112580748B (en) Method for counting classified cells of stain image
US10565479B1 (en) Identifying and excluding blurred areas of images of stained tissue to improve cancer scoring
CN110334706B (en) Image target identification method and device
CN106056118B (en) A kind of identification method of counting for cell
US20190087638A1 (en) Analyzing digital holographic microscopy data for hematology applications
US8600143B1 (en) Method and system for hierarchical tissue analysis and classification
CN102682305B (en) Automatic screening system and automatic screening method using thin-prep cytology test
CN109447998B (en) Automatic segmentation method based on PCANet deep learning model
CN108596038B (en) Method for identifying red blood cells in excrement by combining morphological segmentation and neural network
CN113724231B (en) Industrial defect detection method based on semantic segmentation and target detection fusion model
CN107730499A (en) A kind of leucocyte classification method based on nu SVMs
CN113658174A (en) Microkaryotic image detection method based on deep learning and image processing algorithm
CN110987886A (en) Full-automatic microscopic image fluorescence scanning system
CN115170518A (en) Cell detection method and system based on deep learning and machine vision
CN115294377A (en) System and method for identifying road cracks
CN115909006A (en) Mammary tissue image classification method and system based on convolution Transformer
KR20200136004A (en) Method for detecting cells with at least one malformation in a cell sample
CN1226609C (en) Method for analyzing a biological sample
Rege et al. Automatic leukemia identification system using otsu image segmentation and mser approach for microscopic smear image database
Liu et al. Faster r-cnn based robust circulating tumor cells detection with improved sensitivity
CN110364224A (en) A kind of chromosome separation phase positioning sorting method
Schüffler et al. Computational TMA analysis and cell nucleus classification of renal cell carcinoma
CN114897823A (en) Cytology sample image quality control method, system and storage medium
US20220058371A1 (en) Classification of cell nuclei
CN116580011B (en) Endometrial cancer full-slide image detection system of deep learning model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20221014