CN116503338A - Thyroid cell pathology whole-slide image analysis method based on target detection - Google Patents
Thyroid cell pathology whole-slide image analysis method based on target detection
- Publication number
- CN116503338A (application CN202310388243.6A)
- Authority
- CN
- China
- Prior art keywords
- target detection
- network
- training
- thyroid
- steps
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The application provides a thyroid cell pathology whole-slide image analysis method based on target detection. First, a target detection network and a classification network are built, used respectively to detect abnormal areas and to refine the detection results. Then, uniformly segmented thyroid WSI images are built into two data sets according to the operators' labeling results, and the target detection network and the classification network are trained on them respectively. The trained target detection network then detects abnormal regions in the thyroid WSI image. Finally, the detected abnormal regions are sent to the trained classification network for further classification, and the classification network outputs the ten abnormal regions most likely to be positive to assist diagnosis. The invention applies target detection to thyroid cell pathology whole-slide image diagnosis for the first time, realizing automatic analysis of thyroid cell pathology whole-slide images and providing diagnosis suggestions, thereby reducing the heavy workload of operators and the misdiagnosis and missed diagnosis caused by subjective experience.
Description
Technical Field
The invention relates to the intersection of deep learning and medicine, in particular to a thyroid cell pathology whole slide image analysis method based on target detection, and belongs to the application of deep learning in the medical field.
Background
Thyroid nodules are a frequently occurring and common disease of the endocrine system. Clinically, benign and malignant thyroid nodules are treated differently, and their impact on patients' quality of life and the associated medical costs also differ markedly. Accurate preoperative evaluation of whether a nodule is benign or malignant can therefore prevent unnecessary surgical resection of benign nodules, and allows an appropriate surgical plan to be prepared before the operation, prognostic risk factors to be stratified, and a personalized, precise treatment plan to be formulated.
Clinically, the nature of thyroid nodules is generally identified by fine needle aspiration biopsy (FNAB). In FNAB, under the guidance of color Doppler ultrasound, a sample of thyroid cells is extracted with a puncture needle and smeared onto a slide, and a professional cytologist examines the prepared specimen under a microscope, judging benign and malignant features according to the morphology, size, and other characteristics of the cells. With the development of high-resolution scanners and digital pathology, samples are now typically scanned as whole slide images (WSI; hereafter thyroid WSI images). Examination by cytology specialists is very time consuming, and the results depend on the specialists' subjective experience. Computer-aided diagnosis systems can greatly accelerate the diagnostic process and provide objective diagnostic suggestions.
Early computer-aided diagnosis systems typically employed conventional machine learning algorithms, requiring complex image preprocessing and feature extraction steps. In recent years, deep learning has made great breakthroughs in computer vision and image processing, and end-to-end image recognition methods are now widely applied to many medical image analysis tasks. Deep learning offers a promising solution for automatic diagnosis of thyroid cancer, and in the past five years a growing number of researchers have studied deep-learning-based thyroid cancer analysis methods. However, these studies have several limitations. First, most of them classify fixed-size patches rather than using thyroid WSI image-level data. Second, the patch-level cytological images used for model training are typically selected manually rather than generated automatically. Since the discriminative features useful for diagnosis occupy only a small part of a thyroid WSI image, finding the critical areas among a large amount of irrelevant background is crucial in clinical practice. These problems greatly hamper the use of end-to-end automated computer-aided diagnosis systems in clinical practice. Chinese patent document CN114743195A describes a training method and an image processing method for a thyroid cell pathology digital image recognizer that can be trained when only the type or content of an image block is labeled, without labeling positive-cell positions, so that the recognizer can both judge whether positive cells exist in an image and locate them. However, the steps of that method are complicated; in contrast, the end-to-end training in the target detection method provided by the present invention requires no manual data processing, since the data are handled entirely within the network model, making operation more convenient.
Disclosure of Invention
The invention discloses a thyroid cell pathology whole-slide image analysis method based on target detection. The method mainly comprises the following steps:
s1, uniformly dividing a thyroid WSI image;
s2, building a target detection network for detecting an abnormal region;
s3, constructing the segmented thyroid WSI image into a data set according to the labeling result of the operator, and training a target detection network; the operator marks a positive cell area by using a marking frame in the data set for training the target detection network;
s4, training a target detection network by using the data set;
s5, detecting abnormal areas in the thyroid WSI image by using the trained target detection network.
In one embodiment, because the thyroid WSI image is too large to process as a whole, it is uniformly cut into 1024 × 1024 patches in step S1.
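The uniform segmentation step can be sketched in a few lines of Python. This is a minimal illustration assuming the slide is addressed by pixel coordinates; a real pipeline would read tiles with a WSI library (e.g. OpenSlide), and the function name and border handling here are assumptions, not taken from the patent.

```python
def tile_wsi(width, height, patch=1024):
    """Return top-left corners of a uniform grid of patch x patch tiles.

    Tiles that would overhang the right/bottom border are dropped;
    how the patent handles the border is not specified.
    """
    return [(x, y)
            for y in range(0, height - patch + 1, patch)
            for x in range(0, width - patch + 1, patch)]
```

Each returned coordinate pair identifies one patch to be cropped from the slide and sent downstream.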
In one embodiment, the step S2 is implemented in a manner of setting up the target detection network:
a YOLOV4 model was constructed as the target detection network and no pre-training weights for YOLOV4 were loaded. The backbone network is CSPDarkNet53, the activation function is Mish activation function, and the formula is as follows:
Mish=x·tanh(ln(1+e x ))
the feature pyramid part uses an SSP structure and a PANet structure.
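The Mish formula can be sanity-checked with a direct Python sketch; this standalone snippet is for illustration and is not part of the patent's implementation.

```python
import math

def mish(x):
    """Mish activation: x * tanh(ln(1 + e^x)), i.e. x * tanh(softplus(x)).

    Note: for large x, exp(x) overflows; production code uses a
    numerically stable softplus instead of this direct transcription.
    """
    return x * math.tanh(math.log1p(math.exp(x)))
```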
In one embodiment, the abnormal regions detected by the target detection network are sent to the trained classification network to further refine the detection result, with the following specific steps:
s6: the classification network is built for further improvement of the detection result of the abnormal region detected by the target detection network;
s7: training the classification network with a training set;
s8: the abnormal region detected by the target detection network is sent into a classification network for further classification;
s9: the classification network outputs the ten abnormal areas with the highest positive probability to assist diagnosis.
In one embodiment, the step S6 is implemented in a manner of building a classification network:
efficient Net is built as a classification network, again without loading the pre-training weights of Efficient Net. The main structure is MBConv, using the Swish activation function, the formula is as follows:
Swish=x·sigmoid(x)
in one embodiment, step S3 further comprises the steps of:
constructing the segmented thyroid WSI images into two data sets according to the operator's labeling results, used to train the target detection network and the classification network respectively; in the data set for training the target detection network, the operator marks positive cell areas with bounding boxes; in the data set for training the classification network, the operator labels a subset of the patches obtained by the segmentation in step S1 as positive or negative.
In one embodiment, the training of the target detection network in step S4 is performed in the following manner:
and (3) taking the patches marked by the operator in the step (S3) by using the marking frame to mark the positive cell area as input to be sent to a target detection network for training. The data enhancement method is a Mosaic data enhancement method, namely four pictures are randomly cut and spliced on one picture to serve as training data. Overfitting is avoided by using a label smoothing idea, learning rate cosine annealing is attenuated, and regression loss of a prediction frame is L CIOU The formula is as follows:
L CIOU =1-IOU(A,B)+ρ 2 (A ctr ,B ctr )/c 2 +α·v
wherein IOU (A, B) refers to the difference of the intersection ratio between the predicted frame A and the real frame B, A ctr And B ctr The coordinates of the center points of the predicted and real frames, ρ (a ctr ,B ctr ) Calculation A ctr And B ctr The Euclidean distance between c is the diagonal length of the smallest bounding box of the predicted box A and the real box B. w (w) gt And h gt W and h are the width and height of the real frame and the width and height of the predicted frame.
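The regression loss above can be sketched in plain Python. The corner-format box representation and the α and v expressions follow the standard published CIoU loss, which the patent's notation matches; this is an illustrative sketch, not the patent's implementation.

```python
import math

def ciou_loss(pred, gt):
    """CIoU loss for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = pred
    bx1, by1, bx2, by2 = gt
    # intersection-over-union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # squared centre distance over squared enclosing-box diagonal
    rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4.0
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term
    v = (4.0 / math.pi ** 2) * (math.atan((bx2 - bx1) / (by2 - by1))
                                - math.atan((ax2 - ax1) / (ay2 - ay1))) ** 2
    alpha = v / ((1.0 - iou) + v) if iou < 1.0 else 0.0
    return 1.0 - iou + rho2 / c2 + alpha * v
```

Identical boxes give a loss of 0, and the loss grows as the predicted box drifts away from the ground truth.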
In one embodiment, the training of the classification network in step S7 is performed in the following manner:
The patches marked positive or negative by the operator in step S3 are fed as input into the classification network for training. Training uses the RMSProp optimizer; the learning rate is first increased from an initial value, then held constant, then decayed exponentially. Data enhancement uses the AutoAugment method, and a stochastic-depth strategy is adopted to improve training performance.
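The learning-rate recipe for the classification network (warm-up, hold, exponential decay) can be sketched as a simple step-to-rate function. All concrete hyper-parameter values below are illustrative assumptions; the patent gives none.

```python
def lr_at(step, base_lr=0.001, warmup=500, hold=1000, gamma=0.97, decay_every=100):
    """Learning rate at a given training step: linear warm-up from 0,
    a constant plateau, then stepwise exponential decay."""
    if step < warmup:                    # linear warm-up
        return base_lr * step / warmup
    if step < warmup + hold:             # hold at base_lr
        return base_lr
    n = (step - warmup - hold) // decay_every
    return base_lr * gamma ** n          # exponential decay
```

In practice this function would be queried once per step (or epoch) to set the RMSProp optimizer's learning rate.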
Compared with prior research, the invention has the following advantages: it classifies thyroid WSI image-level data by uniformly dividing the thyroid WSI image into smaller patches; the image data are trained end to end, requiring no manual data processing and making operation convenient; and the detection results of the target detection network are further refined by the classification network, improving the final result.
Drawings
The invention is further described below with reference to the drawings and examples.
FIG. 1 is an image of a whole thyroid cell pathology slide used in an embodiment of the present invention;
FIG. 2 is a general block diagram of an embodiment of the present invention;
FIG. 3 is a diagram of a target detection network according to an embodiment of the present invention;
FIG. 4 is a diagram of a classification network according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention provides a thyroid cell pathology whole slide image analysis method based on target detection, which comprises the following steps:
s1, uniformly dividing a thyroid WSI image; the method specifically comprises the following steps:
since the thyroid WSI image is too large in size, it is cut uniformly into 1024 × 1024 patches.
And S2, building a target detection network for detecting the abnormal region. The method specifically comprises the following steps:
the YOLOv4 model was constructed as the target detection network and no pre-training weights for YOLOv4 were loaded. The backbone network is CSPDarkNet53, the activation function is Mish activation function, and the formula is as follows:
Mish=x·tanh(ln(1+e x ))
the feature pyramid part uses an SSP structure and a PANet structure.
S3, constructing the segmented thyroid WSI image into a data set according to the labeling result of the operator, and training a target detection network; the method specifically comprises the following steps:
constructing the segmented thyroid WSI images into two data sets according to the operator's labeling results, used to train the target detection network and the classification network respectively; in the data set for training the target detection network, the operator marks positive cell areas with bounding boxes; in the data set for training the classification network, the operator labels a subset of the patches obtained by the segmentation in step S1 as positive or negative;
s4: training a target detection network using the data set; the method specifically comprises the following steps:
and (3) taking the patches marked by the operator in the step (S3) by using the marking frame to mark the positive cell area as input to be sent to a target detection network for training. The data enhancement method is a Mosaic data enhancement method, namely four pictures are randomly cut and spliced on one picture to serve as training data. Overfitting is avoided by using a label smoothing idea, learning rate cosine annealing is attenuated, and regression loss of a prediction frame is L CIOU The formula is as follows:
L CIOU =1-IOU(A,B)+ρ 2 (A ctr ,B ctr )/c 2 +α·v
wherein I0U (A, B) refers to the difference of the intersection ratio between the predicted frame A and the real frame B, A ctr And B ctr The coordinates of the center points of the predicted and real frames, ρ (a ctr ,B ctr ) Calculation A ctr And B ctr The Euclidean distance between c is the diagonal length of the smallest bounding box of the predicted box A and the real box B. w (w) gt And h gt The width and the height of the real frame are the width and the height of the prediction frame;
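The Mosaic data enhancement mentioned above can be illustrated with a minimal, dependency-free sketch. Representing images as nested lists and ignoring bounding-box label remapping are simplifications for illustration, not the patent's implementation.

```python
import random

def mosaic(imgs, size=1024):
    """Paste random crops of four images into the four quadrants of
    one training image of side `size`."""
    assert len(imgs) == 4
    half = size // 2
    out = [[0] * size for _ in range(size)]
    for k, img in enumerate(imgs):
        oy, ox = (k // 2) * half, (k % 2) * half      # quadrant origin
        sy = random.randint(0, len(img) - half)        # random crop origin
        sx = random.randint(0, len(img[0]) - half)
        for y in range(half):
            for x in range(half):
                out[oy + y][ox + x] = img[sy + y][sx + x]
    return out
```

Real implementations operate on arrays or tensors and translate each source image's box annotations into the mosaic's coordinate frame.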
s5: detecting an abnormal region in the thyroid WSI image by using a trained target detection network;
s6: the classification network is built for further improvement of the detection result of the abnormal region detected by the target detection network; the method specifically comprises the following steps:
efficient Net is built as a classification network, again without loading the pre-training weights of Efficient Net. The main structure is MBConv, using the Swish activation function, the formula is as follows:
Swish=x·sigmoid(x)
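Like Mish above, the Swish formula can be checked with a one-line Python sketch; this snippet is illustrative and not part of the patent.

```python
import math

def swish(x):
    """Swish activation: x * sigmoid(x), as used in EfficientNet's MBConv blocks."""
    return x / (1.0 + math.exp(-x))
```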
s7: training the classification network with a training set; the method specifically comprises the following steps:
The patches marked positive or negative by the operator in step S3 are fed as input into the classification network for training. Training uses the RMSProp optimizer; the learning rate is first increased from an initial value, then held constant, then decayed exponentially. Data enhancement uses the AutoAugment method, and a stochastic-depth strategy is adopted to improve training performance;
s8: the abnormal region detected by the target detection network is sent into a classification network for further classification;
s9: the classification network outputs the ten abnormal areas with the highest positive probability to assist diagnosis.
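Step S9's selection of the ten most-likely-positive regions amounts to a top-k sort over the classifier's scores. The (region, probability) pair format below is a hypothetical representation of the classification network's output, chosen only for illustration.

```python
def top_positive_regions(scored_regions, k=10):
    """Return the k regions with the highest positive probability.

    `scored_regions` is a list of (region, probability) pairs.
    """
    return sorted(scored_regions, key=lambda r: r[1], reverse=True)[:k]
```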
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (8)
1. The thyroid cell pathology whole slide image analysis method based on target detection is characterized by comprising the following steps of:
s1: uniformly segmenting a thyroid WSI image;
s2: constructing a target detection network for detecting an abnormal region;
s3: constructing the segmented thyroid WSI image into a data set according to the labeling result of the operator, and training a target detection network;
s4: training a target detection network using the data set;
s5: and detecting abnormal areas in the thyroid WSI image by using a trained target detection network.
2. The method for analyzing thyroid cell pathology whole slide image based on target detection according to claim 1, wherein the method comprises the following steps: the abnormal region detected by the target detection network is put into a trained classification network for further improving the detection result, and the method specifically comprises the following four steps:
s6: the classification network is built for further improvement of the detection result of the abnormal region detected by the target detection network;
s7: training the classification network with a training set;
s8: the abnormal region detected by the target detection network is sent into a classification network for further classification;
s9: the classification network gives the highest possible positive abnormal region auxiliary analysis result.
3. The method for analyzing thyroid cell pathology whole slide image based on target detection according to claim 2, wherein the method comprises the following steps: step S3 further comprises the steps of:
constructing the segmented thyroid WSI images into two data sets according to the operator's labeling results, used to train the target detection network and the classification network respectively; in the data set for training the target detection network, the operator marks positive cell areas with bounding boxes; in the data set for training the classification network, the operator labels a subset of the patches obtained by the segmentation in step S1 as positive or negative.
4. The method for analyzing thyroid cell pathology whole slide image based on target detection according to claim 1, wherein the method comprises the following steps: the step S1 also comprises the following steps:
thyroid WSI images were cut uniformly into 1024X 1024 patches.
5. The method for analyzing thyroid cell pathology whole slide image based on target detection according to claim 1, wherein the method comprises the following steps: in the step S2, the implementation mode of setting up the target detection network is as follows:
constructing a YOLOv4 model as the target detection network, without loading YOLOv4 pre-training weights; the backbone network is CSPDarkNet53 and the activation function is the Mish activation function:

Mish(x) = x·tanh(ln(1 + e^x))

the feature pyramid part uses the SPP (spatial pyramid pooling) structure and the PANet structure.
6. The method for analyzing thyroid cell pathology whole slide image based on target detection according to claim 2, wherein the method comprises the following steps: the implementation mode of setting up the classification network in the step S6 is as follows:
constructing EfficientNet as the classification network, also without loading EfficientNet pre-training weights; the main structure is MBConv, using the Swish activation function:

Swish(x) = x·sigmoid(x)

7. The method for analyzing thyroid cell pathology whole slide image based on target detection according to claim 3, wherein the method comprises the following steps: the step S4 further includes the following steps:
the patches of the positive cell area marked by the operator in the step S3 by using the marking frame is used as input to be sent into a target detection network for training; the data enhancement method is a Mosaic data enhancement method, namely four pictures are randomly cut and spliced on one picture to serve as training data; overfitting is avoided by using a label smoothing idea, learning rate cosine annealing is attenuated, and regression loss of a prediction frame is L CIOU The formula is as follows:
L CIOU =1-IOU(A,B)+ρ 2 (A ctr ,B ctr )/c 2 +α·v
wherein IOU (A, B) refers to pre-preparationMeasuring the difference value A of the intersection ratio between the frame A and the real frame B ctr And B ctr The coordinates of the center points of the predicted and real frames, ρ (a ctr ,B ctr ) Calculation A ctr And B ctr The Euclidean distance between the two, c is the diagonal length of the minimum bounding box of the predicted box A and the real box B, and w gt And h gt W and h are the width and height of the real frame and the width and height of the predicted frame.
8. The method for analyzing thyroid cell pathology whole slide image based on target detection according to claim 3, wherein the method comprises the following steps: the step S7 further includes the steps of:
the patches marked positive or negative by the operator in step S3 are fed as input into the classification network for training; training uses the RMSProp optimizer, with the learning rate first increased from an initial value, then held constant, then decayed exponentially; data enhancement uses the AutoAugment method, and a stochastic-depth strategy is adopted to improve training performance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310388243.6A CN116503338A (en) | 2023-04-12 | 2023-04-12 | Thyroid cell pathology whole-slide image analysis method based on target detection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116503338A true CN116503338A (en) | 2023-07-28 |
Family
ID=87327699
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310388243.6A Pending CN116503338A (en) | 2023-04-12 | 2023-04-12 | Thyroid cell pathology whole-slide image analysis method based on target detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116503338A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||