CN113192047A - Method for automatically interpreting KI67 pathological section based on deep learning - Google Patents
- Publication number
- CN113192047A CN113192047A CN202110529098.XA CN202110529098A CN113192047A CN 113192047 A CN113192047 A CN 113192047A CN 202110529098 A CN202110529098 A CN 202110529098A CN 113192047 A CN113192047 A CN 113192047A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Abstract
The invention discloses a method for automatically interpreting KI67 pathological sections based on deep learning. The method comprises the following steps: first, collecting and annotating data; second, creating the ground truth and dividing all annotated data into a training set, a validation set and a test set; third, designing the model architecture and loss function and training the model until convergence; fourth, performing model prediction and post-processing; fifth, evaluating model performance; and sixth, analyzing the model output and visualizing the results. The main purpose of the invention is to locate and classify all cells in the field of view under a KI67 section microscope, count the positive and negative tumor cells separately, and finally calculate the KI67 value, which serves as the interpretation standard for the KI67 pathological section.
Description
Technical Field
The invention relates to the technical field of machine learning, and in particular to a method for automatically interpreting KI67 pathological sections based on deep learning.
Background
In recent years, with the development of artificial intelligence and machine vision technologies, the clinical use of digital image processing and artificial intelligence to assist doctors in interpreting pathological images has become increasingly widespread. By combining a deep learning model with annotations from experienced doctors, pathological data can be interpreted with high accuracy. KI67 staining is a common immunohistochemical technique in pathology departments; the KI67 index is generally used as an indicator of tumor malignancy and has diagnostic and prognostic value in various cancers. The KI67 index can be obtained by analyzing KI67 immunohistochemical pathology images. In clinical practice, a doctor selects regions with more positive cells on a pathological section by moving the microscope field of view, calculates the ratio of positive tumor cells to all tumor cells in no fewer than ten fields, and takes the average of these ratios as the interpretation standard for the pathological image. However, counting tumor cells in the microscope field is time-consuming and laborious for the clinician, and the tedium of the task can lead to misjudgment due to fatigue.
Disclosure of Invention
The invention aims to provide a method for automatically interpreting KI67 pathological sections based on deep learning, which locates all cells in the field of view under a KI67 section microscope and classifies them, counts positive and negative tumor cells separately, and finally calculates a KI67 value that serves as the interpretation standard for the KI67 pathological section.
In order to achieve the purpose, the invention provides the following technical scheme: a method for automatically interpreting KI67 pathological sections based on deep learning comprises the following steps:
firstly, collecting field-of-view data under a KI67 section microscope, wherein the data volume is not less than 100 images, and having the data annotated by a professional pathologist;
secondly, creating the ground truth: parsing the annotation files of the annotated data into smooth Gaussian masks that serve as the ground truth for the deep learning model, wherein each mask transitions smoothly from the cell center to the edge; and dividing all annotated data into a training set, a validation set and a test set;
thirdly, designing a model architecture and a loss function, and training the model until convergence;
fourthly, performing model prediction and post-processing: inputting the test set into the trained model to obtain prediction heatmaps, and applying Gaussian smoothing and local-peak processing to obtain the model output;
fifthly, evaluating the model performance: matching model-predicted cells with annotated cells using the Hungarian algorithm, and measuring precision, recall and F1-score;
and sixthly, analyzing the model output, and visualizing the result.
Preferably, the labeling in the first step is point labeling.
Preferably, in the first step, all collected data use a uniform magnification, the collected data must not all come from the same case, and no more than ten images are collected from any single case.
Preferably, the mask in the second step is represented by floating-point numbers in the range [0, 1]; the operation is to apply Gaussian smoothing to each annotated point in the image, taking the maximum value where Gaussians overlap to prevent peaks from merging.
Preferably, in the third step, the model architecture is a UNet model with the deep learning model ResNet34 as the encoder; the learning rate is 5e-4, the optimizer is Adam, the batch size is 8, and the maximum number of iterations is 500 epochs, with the learning rate gradually decayed after 100 epochs; the model is trained until convergence. The loss function consists of two parts, one part being a cross-entropy loss function:
some are the Jaccard coefficients:
wherein X and Y are respectively the network output and the ground-truth mask, and the overall loss function is the sum of the two parts.
Preferably, in the fourth step, the trained model is used to predict the test set and output heatmaps for each class;
the heatmap is a three-dimensional array, wherein the first two dimensions represent the height and width of the image and the third dimension is the cell class, i.e., each cell class corresponds to one two-dimensional heatmap;
the model's prediction output is processed to locate all cell centers in the whole image: the cell-class dimension of the three-dimensional heatmap is summed to obtain a two-dimensional heatmap of all cells; Gaussian smoothing is applied to this heatmap to eliminate isolated points predicted by the model; a maximum filter is then used to find local maxima in the smoothed image and return all local-peak coordinates, which correspond to the centers of the cells predicted by the model across the whole image;
the cell centers are then classified: the value of the three-dimensional heatmap at each local-peak coordinate is read, wherein the third dimension, i.e., the number of two-dimensional heatmap layers, corresponds to the number of cell classes, and the class with the highest heatmap value is the cell class of that coordinate.
Preferably, in the fifth step, the Hungarian algorithm is used to match model-predicted cells with annotated cells, so that each annotated cell matches at most one predicted cell and each predicted cell matches at most one annotated cell; the matching range is a circular area centered on the annotated cell's center point with a radius of 16 pixels, and areas beyond this range do not participate in matching; during matching, only whether a cell is validly detected is considered, and the cell class is not used as a matching criterion;
and the difference between the annotated and network-predicted KI67 indices, together with MAE and RMSE, are used as evaluation indices. The formulas for MAE and RMSE are:

MAE = (1/m) Σ_i |pred_i - anno_i|

RMSE = sqrt((1/m) Σ_i (pred_i - anno_i)^2)

where pred represents the model prediction, anno represents the annotation (ground truth), and m represents the number of cells.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, pathological doctor labels are used as gold standards, and the prediction results of the training model are close to the doctor labels, so that manual work can be replaced, counting of various cells in the under-mirror visual field and calculation of KI67 indexes can be completed, and time cost and labor cost are greatly reduced for colleagues meeting clinical interpretation precision.
Drawings
FIG. 1 is a collected image of a field of view under a KI67 microscope;
FIG. 2 is a visualization of the pathologist's annotation results for FIG. 1;
FIG. 3 is a visualization of the ground-truth masks corresponding to FIG. 2;
FIG. 4 is a structural diagram of the UNet network;
FIG. 5 shows the matching of the model prediction results to the ground truth using the Hungarian algorithm;
FIG. 6 is a visualization of the deep-learning-predicted cell localization and classification results;
FIG. 7 is a flow chart of the method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 to 7, the present invention provides a technical solution: a method for automatically interpreting KI67 pathological sections based on deep learning comprises the following steps:
step one, data acquisition and doctor labeling: collecting field data under a KI67 section mirror, wherein the data volume is not less than 100; the acquisition requirements are as follows: all the collected data adopt a unified magnification, such as 20x mirrors or 40x mirrors; the cell contour is clear, and can not be too large to occupy the whole picture and too small to be distinguished by human eyes; in order to ensure the generalization of the model, the collected data can not come from the same case; the size of the collected image and the dyeing style of the pathological section do not need to be set; marking the data of all the acquired images meeting the requirements by a professional pathologist; the marking mode is point marking; the deep learning method takes the label of the pathology department doctor as the gold standard, so the clinical case department doctor is required to label the data; the point labeling method is defined as labeling points at the center of the cell without drawing the outline of the cell; the pathologist needs to exhaust the cells in the visual field under the marking mirror; the cell types are classified into 6 types, which are: negative fibers, lymph, negative tumors, positive fibers, positive tumors, and other cells, and can be stained sequentially with blue, yellow, green, brown, red, purple, with only one label per cell; the label file is exported after the pathologist exhaustively labels the complete picture.
Step two, creating the ground truth. Unlike a common deep learning segmentation task, which uses 0 and 1 to represent background and foreground respectively, the mask in the invention is represented by floating-point numbers in the range [0, 1]. The specific operation is to apply Gaussian smoothing to every annotated point in the image, taking the maximum value where Gaussians overlap to prevent peaks from merging. The Gaussian-smoothed mask has high intensity at the center and low intensity at the edge, which both emphasizes the cell center position and approximates the cell edge. The closer a mask pixel value is to 1, the higher the probability that the pixel is foreground; conversely, the closer it is to 0, the higher the probability that the pixel is background. FIG. 3 shows a total of 6 masks corresponding to the 6 cell classes; the first row, from left to right, shows negative fiber cells, lymphocytes and negative tumor cells, and the second row, from left to right, shows positive fiber cells, positive tumor cells and other cells. The data are divided into a training set, a validation set and a test set; the model is trained on the training set until convergence, the best model is saved according to performance on the validation set, and model accuracy is tested on the test set.
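The Gaussian ground-truth mask described above (one peak per annotated point, combined by element-wise maximum so that neighbouring peaks do not merge) can be sketched as follows; the sigma value is an assumption, since the patent does not state one:

```python
import numpy as np

def make_gaussian_mask(points, shape, sigma=4.0):
    """Render point annotations as a smooth ground-truth mask in [0, 1].

    Each annotated cell centre (x, y) gets a Gaussian peak; overlapping
    Gaussians are combined with an element-wise maximum, not a sum,
    so that nearby peaks stay separate instead of merging.
    """
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape, dtype=np.float32)
    for (cx, cy) in points:
        g = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
        mask = np.maximum(mask, g)  # max keeps each peak's height at 1
    return mask
```

One such mask is built per cell class, giving the 6-channel target described for FIG. 3.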
Step three, the model architecture is a UNet model with the deep learning model ResNet34 as the encoder; the learning rate is 5e-4, the optimizer is Adam, the batch size is 8, and the maximum number of iterations is 500 epochs, with the learning rate gradually decayed after 100 epochs; the model is trained until convergence.
the loss function is composed of two parts, one part is a cross entropy loss function:
some are the Jaccard coefficients:
wherein X and Y are respectively the network output and the true standard mask, and the integral loss function is the sum of the two.
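The two loss formulas appear only as figures in the source, so the sketch below assumes a common formulation: pixel-wise binary cross-entropy plus a soft Jaccard term entered as 1 − J, so that minimizing the summed loss drives the Jaccard coefficient toward 1:

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    # Pixel-wise binary cross entropy between predicted probabilities
    # and the ground-truth mask.
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) +
                          (1 - target) * np.log(1 - pred)))

def jaccard_loss(pred, target, eps=1e-7):
    # Soft Jaccard coefficient J = |X ∩ Y| / |X ∪ Y| on continuous
    # masks, turned into a loss as 1 - J (an assumption; the patent
    # only says the overall loss is the sum of the two parts).
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return float(1.0 - (inter + eps) / (union + eps))

def total_loss(pred, target):
    # Overall loss: cross-entropy part plus Jaccard part.
    return bce_loss(pred, target) + jaccard_loss(pred, target)
```

In training this would be computed per class channel on the network output and the Gaussian ground-truth mask.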
Step four, model prediction and post-processing. The trained model predicts the test set and outputs a heatmap for each class. The heatmap is a three-dimensional array in which the first two dimensions represent the height and width of the image and the third dimension is the cell class, i.e., each cell class corresponds to one two-dimensional heatmap. The model's prediction output is processed to locate all cell centers in the whole image. The cell-class dimension of the three-dimensional heatmap is summed to obtain a two-dimensional heatmap of all cells; Gaussian smoothing is applied to eliminate isolated points predicted by the model; a maximum filter is then used to find local maxima in the smoothed image and return all local-peak coordinates. This step detects all cells predicted by the model in the whole image, with each local-peak coordinate corresponding to a predicted cell center. Subsequently, the cell centers are classified: the value of the three-dimensional heatmap at each local-peak coordinate is read, where the third dimension, i.e., the number of two-dimensional heatmap layers, corresponds to the number of cell classes, and the class with the highest heatmap value is the cell class of that coordinate. In this way the classes of all cells predicted by the model across the whole image are obtained.
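The post-processing chain just described (sum over the class dimension, Gaussian smoothing, local-maximum search with a maximum filter, then argmax classification at each peak) can be sketched with SciPy; the smoothing sigma, peak threshold and filter window size are assumptions, as the patent does not give values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def find_cells(heatmap, sigma=2.0, min_val=0.3, window=7):
    """Post-process an (H, W, C) class heatmap into cell centres + classes.

    Steps mirror the described pipeline: sum over the class dimension,
    Gaussian-smooth to suppress isolated points, locate local maxima
    with a maximum filter, then classify each peak by the channel with
    the highest response at that coordinate.
    """
    combined = heatmap.sum(axis=2)               # all-cell 2D heatmap
    smooth = gaussian_filter(combined, sigma)    # eliminate isolated points
    # A pixel is a peak if it equals the max in its window and exceeds
    # a minimum response (threshold is an assumed cut-off).
    peaks = (smooth == maximum_filter(smooth, size=window)) & (smooth > min_val)
    ys, xs = np.nonzero(peaks)
    # Class = argmax over the third (class) dimension at each peak.
    return [(int(x), int(y), int(np.argmax(heatmap[y, x])))
            for y, x in zip(ys, xs)]
```

Each returned tuple is (x, y, class index), i.e. a predicted cell centre and its cell type.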
Step five, model performance evaluation. The Hungarian algorithm matches model-predicted cells with annotated cells so that each annotated cell matches at most one predicted cell and each predicted cell matches at most one annotated cell. The matching range is a circular area centered on the annotated cell's center point with a radius of 16 pixels; areas beyond this range do not participate in matching. During matching, only whether a cell is validly detected is considered; the cell class is not used as a matching criterion. From the matching result, indices such as precision, recall, Euclidean distance and F1-score can be calculated, and classification indices can further be computed on the matched results. FIG. 5 shows the result of matching the model predictions with the ground truth using the Hungarian algorithm: matched pairs of annotated and predicted cell centers are connected by red lines, whose length represents the distance between the two; green marks false negatives and blue marks false positives. In addition, the difference between the annotated and network-predicted KI67 indices, together with MAE and RMSE, are used as evaluation indices. The formulas for MAE and RMSE are:

MAE = (1/m) Σ_i |pred_i - anno_i|

RMSE = sqrt((1/m) Σ_i (pred_i - anno_i)^2)

where pred represents the model prediction, anno represents the annotation (ground truth), and m represents the number of cells.
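The Hungarian matching with a 16-pixel radius can be sketched with `scipy.optimize.linear_sum_assignment`, disallowing pairs beyond the radius by giving them a prohibitively large cost; precision, recall and F1 then follow from counting matched pairs as true positives:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_cells(pred, anno, radius=16.0):
    """One-to-one Hungarian matching of predicted vs annotated centres.

    Pairs farther apart than `radius` pixels are forbidden; a match
    within the radius counts as a true positive. Returns
    (precision, recall, f1).
    """
    if not pred or not anno:
        return 0.0, 0.0, 0.0
    p = np.asarray(pred, dtype=float)
    a = np.asarray(anno, dtype=float)
    # Pairwise Euclidean distances, shape (num_pred, num_anno).
    dist = np.linalg.norm(p[:, None, :] - a[None, :, :], axis=2)
    big = 1e6                                   # cost for pairs > radius
    cost = np.where(dist <= radius, dist, big)
    rows, cols = linear_sum_assignment(cost)    # optimal assignment
    tp = int(np.sum(cost[rows, cols] < big))    # matches inside the radius
    precision = tp / len(pred)
    recall = tp / len(anno)
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1
```

Because the assignment minimizes total distance, each annotated cell is paired with at most one prediction and vice versa, exactly as the matching rule requires.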
Step six, analyze the model output and visualize the result, as shown in FIG. 6. To demonstrate the model output and facilitate comparison, the image of FIG. 1 is again selected as input to the model; FIG. 6 is a visualization of the deep-learning-predicted cell localization and classification results. The correspondence between color points and cell classes is consistent with FIG. 2: negative fibers (blue), lymphocytes (yellow), negative tumors (green), positive fibers (brown), positive tumors (red) and other cells (purple). It can be seen that the cell localization and classification results predicted by the deep learning model are basically consistent with the doctors' annotations.
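The final readout, the KI67 index computed from the class counts together with the MAE and RMSE evaluation metrics, can be sketched as follows (standard definitions, matching the pred/anno/m glossary above):

```python
import numpy as np

def ki67_index(pos_tumor, neg_tumor):
    # KI67 index = positive tumour cells / all tumour cells.
    total = pos_tumor + neg_tumor
    return pos_tumor / total if total else 0.0

def mae(pred, anno):
    # Mean absolute error: (1/m) * sum(|pred_i - anno_i|).
    p, a = np.asarray(pred, float), np.asarray(anno, float)
    return float(np.mean(np.abs(p - a)))

def rmse(pred, anno):
    # Root mean squared error: sqrt((1/m) * sum((pred_i - anno_i)^2)).
    p, a = np.asarray(pred, float), np.asarray(anno, float)
    return float(np.sqrt(np.mean((p - a) ** 2)))
```

For example, a field with 30 predicted positive and 70 predicted negative tumor cells yields a KI67 index of 0.3, and `mae`/`rmse` compare such predicted indices against the annotated ones.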
Experiment and analysis:
47 images that did not participate in model training were used for testing; the model performed as follows:
evaluation criteria | Detectable Rate (precision) | Recall rate (recall) | F1-Score | European distance (Unit: pixel) |
Cell detection | 0.8686 | 0.8570 | 0.8628 | 5.59±3.17 |
Cell sorting | 0.8668 | 0.8278 | 0.8397 | — |
TABLE 1
TABLE 2
As can be seen from Tables 1 and 2, the model achieves high precision, recall and F1-score, and the Euclidean distance is small, meaning the predicted cell centers are close to the ground truth; the error between the annotated KI67 index and the predicted KI67 index is within 1, and both MAE and RMSE are within 0.1. The model prediction results can meet clinical requirements in both accuracy and stability.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (7)
1. A method for automatically interpreting KI67 pathological sections based on deep learning is characterized by comprising the following steps:
firstly, collecting field-of-view data under a KI67 section microscope, wherein the data volume is not less than 100 images, and manually annotating the data;
secondly, creating the ground truth: parsing the annotation files of the annotated data into smooth Gaussian masks as the ground truth for the deep learning model, wherein each mask transitions smoothly from the cell center to the edge; and dividing all annotated data into a training set, a validation set and a test set;
thirdly, designing a model architecture and a loss function, and training the model until convergence;
fourthly, performing model prediction and post-processing: inputting the test set into the trained model to obtain prediction heatmaps, and applying Gaussian smoothing and local-peak processing to obtain the model output;
fifthly, evaluating the model performance: matching model-predicted cells with annotated cells using the Hungarian algorithm, and measuring precision, recall and F1-score;
and sixthly, analyzing the model output, and visualizing the result.
2. The method for automatically interpreting KI67 pathological sections based on deep learning according to claim 1, wherein: the labeling in the first step is point labeling.
3. The method for automatically interpreting KI67 pathological sections based on deep learning according to claim 1, wherein: in the first step, all collected data use a uniform magnification, the collected data must not all come from the same case, and no more than ten images are collected from any single case.
4. The method for automatically interpreting KI67 pathological sections based on deep learning according to claim 1, wherein: in the second step, the mask is represented by floating-point numbers in the range [0, 1]; the specific operation is to apply Gaussian smoothing to each annotated point in the image, taking the maximum value where Gaussians overlap to prevent peaks from merging.
5. The method for automatically interpreting KI67 pathological sections based on deep learning according to claim 1, wherein: in the third step, the model architecture is a UNet model with the deep learning model ResNet34 as the encoder, the learning rate is 5e-4, the optimizer is Adam, the batch size is 8, and the maximum number of iterations is 500 epochs, with the learning rate gradually decayed after 100 epochs; the model is trained until convergence; the loss function consists of two parts, one part being a cross-entropy loss function:
and the other part being the Jaccard coefficient:
wherein X and Y are respectively the network output and the ground-truth mask, and the overall loss function is the sum of the two parts.
6. The method for automatically interpreting KI67 pathological sections based on deep learning according to claim 1, wherein: in the fourth step, the trained model is used to predict the test set and output heatmaps for each class;
the heatmap is a three-dimensional array, wherein the first two dimensions represent the height and width of the image and the third dimension is the cell class, i.e., each cell class corresponds to one two-dimensional heatmap;
the model's prediction output is processed to locate all cell centers in the whole image: the cell-class dimension of the three-dimensional heatmap is summed to obtain a two-dimensional heatmap of all cells; Gaussian smoothing is applied to this heatmap to eliminate isolated points predicted by the model; a maximum filter is then used to find local maxima in the smoothed image and return all local-peak coordinates, which correspond to the centers of the cells predicted by the model across the whole image;
the cell centers are then classified: the value of the three-dimensional heatmap at each local-peak coordinate is read, wherein the third dimension, i.e., the number of two-dimensional heatmap layers, corresponds to the number of cell classes, and the class with the highest heatmap value is the cell class of that coordinate.
7. The method for automatically interpreting KI67 pathological sections based on deep learning according to claim 1, wherein: in the fifth step, the Hungarian algorithm is used to match model-predicted cells with annotated cells, so that each annotated cell matches at most one predicted cell and each predicted cell matches at most one annotated cell; the matching range is a circular area centered on the annotated cell's center point with a radius of 16 pixels, and areas beyond this range do not participate in matching; during matching, only whether a cell is validly detected is considered, and the cell class is not used as a matching criterion;
and the difference between the annotated and network-predicted KI67 indices, together with MAE and RMSE, are used as evaluation indices; the formulas for MAE and RMSE are:

MAE = (1/m) Σ_i |pred_i - anno_i|

RMSE = sqrt((1/m) Σ_i (pred_i - anno_i)^2)

where pred represents the model prediction, anno represents the annotation (ground truth), and m represents the number of cells.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110529098.XA CN113192047A (en) | 2021-05-14 | 2021-05-14 | Method for automatically interpreting KI67 pathological section based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110529098.XA CN113192047A (en) | 2021-05-14 | 2021-05-14 | Method for automatically interpreting KI67 pathological section based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113192047A true CN113192047A (en) | 2021-07-30 |
Family
ID=76981749
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110529098.XA Pending CN113192047A (en) | 2021-05-14 | 2021-05-14 | Method for automatically interpreting KI67 pathological section based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113192047A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114494204A (en) * | 2022-01-27 | 2022-05-13 | 复旦大学 | Ki67 index calculation method based on deep learning |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108346145A (en) * | 2018-01-31 | 2018-07-31 | Zhejiang University | Method for recognizing atypical cells in pathological sections
CN109389557A (en) * | 2018-10-20 | 2019-02-26 | Nanjing University | Cell image super-resolution method and device based on image priors
US20190147215A1 (en) * | 2017-11-16 | 2019-05-16 | General Electric Company | System and method for single channel whole cell segmentation
CN111369615A (en) * | 2020-02-21 | 2020-07-03 | Suzhou Youna Medical Device Co., Ltd. | Cell nucleus center point detection method based on multi-task convolutional neural network
CN112396621A (en) * | 2020-11-19 | 2021-02-23 | Zhejiang Lab | Nucleus segmentation method for high-resolution microendoscopy images based on deep learning
CN112750106A (en) * | 2020-12-31 | 2021-05-04 | Shandong University | Nucleus-stained cell counting method based on deep learning with incomplete labels, computer device, and storage medium
- 2021-05-14: CN application CN202110529098.XA filed (publication CN113192047A), status: Pending
Non-Patent Citations (3)
Title |
---|
SANTANU PATTANAYAK: "TensorFlow Deep Learning: Mathematical Principles and Advanced Python Practice", 30 April 2020, China Machine Press *
SURESH KUMAR GORAKALA: "Build Your Own Recommendation Engine", 31 January 2020, China Machine Press *
HE LONG: "Understanding XGBoost in Depth: Efficient Machine Learning Algorithms and Advanced Topics", 31 May 2020, China Machine Press *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
El Achi et al. | Automated diagnosis of lymphoma with digital pathology images using deep learning | |
CN107274386B (en) | Artificial-intelligence-assisted cervical liquid-based smear reading system | |
CN106780460B (en) | Automatic lung nodule detection system for chest CT images | |
CN110120056B (en) | Blood leukocyte segmentation method based on adaptive histogram threshold and contour detection | |
CN112101451B (en) | Breast cancer tissue pathological type classification method based on generation of antagonism network screening image block | |
US20190042826A1 (en) | Automatic nuclei segmentation in histopathology images | |
CN112070772A (en) | Blood leukocyte image segmentation method based on UNet + + and ResNet | |
CN106056118A (en) | Recognition and counting method for cells | |
CN110148126B (en) | Blood leukocyte segmentation method based on color component combination and contour fitting | |
CN111079620A (en) | Leukocyte image detection and identification model construction method based on transfer learning and application | |
Jia et al. | Multi-layer segmentation framework for cell nuclei using improved GVF Snake model, Watershed, and ellipse fitting | |
CN113361370B (en) | Abnormal behavior detection method based on deep learning | |
CN111105422A (en) | Method for constructing reticulocyte classification counting model and application | |
CN114600155A (en) | Weakly supervised multitask learning for cell detection and segmentation | |
CN116630971B (en) | Wheat scab spore segmentation method based on CRF_ResUNet++ network | |
CN112750132A (en) | White blood cell image segmentation method based on dual-path network and channel attention | |
Anari et al. | Computer-aided detection of proliferative cells and mitosis index in immunohistochemical images of meningioma | |
CN114972202A (en) | Ki67 pathological cell rapid detection and counting method based on lightweight neural network | |
CN117036288A (en) | Tumor subtype diagnosis method for full-slice pathological image | |
CN110414317B (en) | Full-automatic leukocyte classification counting method based on capsule network | |
He et al. | Progress of machine vision in the detection of cancer cells in histopathology | |
CN113192047A (en) | Method for automatically interpreting KI67 pathological section based on deep learning | |
Marcuzzo et al. | Automated Arabidopsis plant root cell segmentation based on SVM classification and region merging | |
Khamael et al. | Using adapted JSEG algorithm with fuzzy C mean for segmentation and counting of white blood cell and nucleus images | |
CN116468690B (en) | Subtype analysis system of invasive non-mucous lung adenocarcinoma based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 2021-07-30 |