CN115206495A - Renal cancer pathological image analysis method and system based on CoAtNet deep learning and intelligent microscopic device


Info

Publication number
CN115206495A
CN115206495A (application CN202210671552.XA)
Authority
CN
China
Prior art keywords
coatnet
image
cancer
model
renal cancer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210671552.XA
Other languages
Chinese (zh)
Inventor
许迎科
于佳辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Binjiang Research Institute Of Zhejiang University
Zhejiang University ZJU
Original Assignee
Binjiang Research Institute Of Zhejiang University
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Binjiang Research Institute Of Zhejiang University, Zhejiang University ZJU filed Critical Binjiang Research Institute Of Zhejiang University
Priority to CN202210671552.XA priority Critical patent/CN115206495A/en
Publication of CN115206495A publication Critical patent/CN115206495A/en
Pending legal-status Critical Current

Classifications

    • G16H 30/20: ICT specially adapted for the handling or processing of medical images, e.g. DICOM, HL7 or PACS
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06T 7/0012: Image analysis; biomedical image inspection
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/7715: Feature extraction, e.g. by transforming the feature space; mappings, e.g. subspace methods
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G16H 50/20: ICT specially adapted for medical diagnosis; computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 70/60: ICT specially adapted for the handling or processing of medical references relating to pathologies
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30096: Tumor; lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Biomedical Technology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Pathology (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a renal cancer pathological image analysis method and system based on CoAtNet deep learning, and an intelligent microscopic device. By combining artificial intelligence with the microscope, the invention assists pathologists in completing end-to-end diagnosis of renal cancer, and the pathological image analysis model simultaneously achieves high recognition accuracy, strong real-time performance and complete functionality. This helps alleviate the shortage of pathologists and their long training period in China, and helps reduce the probability of misdiagnosis and the fatigue of pathologists.

Description

Renal cancer pathological image analysis method and system based on CoAtNet deep learning and intelligent microscopic device
Technical Field
The invention belongs to the technical field of digital pathological image processing and auxiliary diagnosis, and particularly relates to a renal cancer pathological image analysis method and system based on CoAtNet deep learning and an intelligent microscopic device.
Background
Statistically, renal cell carcinoma accounts for about 85% of human renal malignancies and has long been one of the most common cancers. According to pathological type, renal cell carcinoma can be classified into clear cell renal cell carcinoma (KIRC), papillary renal cell carcinoma (KIRP), chromophobe renal cell carcinoma (KICH) and other specific types; the three subtypes KIRC, KIRP and KICH together account for approximately 90% of renal cancer cases. At present, diagnosis and analysis of renal cancer mostly rely on experienced pathologists observing pathological images. A pathological image is an image of a specimen taken from a patient, formed after staining and microscopic imaging, and is one of the important bases for diagnosis and analysis in medicine; the pathological report is known as the "gold standard" of disease diagnosis. Based on pathological images, the physician can determine the type of disease and the prognosis and adopt targeted treatment.
In recent years, artificial intelligence has achieved many satisfactory results in the analysis of pathological images. Chinese patent CN113420793A discloses a gastric signet ring cell carcinoma classification method based on an improved convolutional neural network ResNet50, which includes data preprocessing (selecting, labeling, slicing and splitting training data from the gastric carcinoma images), model training and comparison (improving ResNet50, training the network parameters on actual data and comparing with the latest research results), feature map generation and classifier training (generating thermodynamic maps of salient features and selecting important feature maps to train the classifier), and image classification (verifying model performance and outputting classification metrics).
Chinese patent CN110007455B discloses a pathology microscope, a display module, a control method, a device and a storage medium, belonging to the field of microscope imaging. The pathology microscope includes a microscope body, an image acquisition assembly, a control assembly and an augmented reality (AR) projection assembly. The application obtains AI analysis information through the pathology microscope, and the AR projection assembly projects the AI analysis information into the field of view of the microscope body, so that the doctor can observe the pathological section image and the AI analysis information simultaneously without switching views back and forth, making observation more direct and achieving high real-time performance in use. However, the microscope is poorly targeted at renal cancer pathology and cannot complete a comprehensive analysis of renal cancer pathology.
Chinese patent CN113222933A discloses an image recognition system for full-chain diagnosis of renal cell carcinoma. An image segmentation module segments the original pathological images, comprising the cancer genome atlas (TCGA) and an LH data set provided by a local hospital, after the cancer region, cancer subtype and cancer grade have been labeled; the images are then fed to a cancer region detection module for training and prediction, an accuracy improvement module refines the detection output to obtain a more accurate cancer region prediction heat map, the regions predicted as cancer are marked and sent to a cancer region classification module for further classification into subtypes, and a report output module finally outputs an image recognition report. However, the recognition system cannot meet the requirements of convenience and speed and lacks a prognostic analysis function. Moreover, its grading training set relies on physician annotations, and different physicians apply different grading standards, which leads to larger model errors.
Chinese patent CN112992336A discloses an intelligent pathological diagnosis system that integrates deep learning and real-time AI functions to close the gap between AI algorithms and the traditional microscope workflow and achieve intelligent diagnosis of pathological sections; deep learning algorithms were developed and evaluated for both applications to assess their impact in actual clinical workflows and with other microscope models. The system seamlessly integrates AI into the microscope workflow, making cancer diagnosis and microscopic examination of biological samples of other diseases more efficient, accurate and intelligent; it provides image-annotated diagnosis results during the intelligent diagnosis process and outputs a written diagnosis report, thereby speeding up pathological diagnosis and effectively reducing the workload of pathologists. However, that invention cannot display the conclusions of the artificial intelligence algorithm in real time in the eyepiece field observed by the pathologist, and it does not include detection, subtyping, grading or prognosis functions.
Disclosure of Invention
Aiming at the current lack of a system that can assist pathologists in completing end-to-end analysis of renal cancer tissue, the invention provides an intelligent microscopic system for renal cancer pathological image analysis based on deep learning. By combining artificial intelligence with the microscope, the system assists pathologists in completing end-to-end diagnosis of renal cancer.
In order to achieve the purpose, the technical scheme provided by the invention is as follows:
a renal cancer pathological image analysis method based on CoAtNet deep learning is characterized by comprising the following steps:
s1, collecting pathological images of kidney cancer of clinically diagnosed lesion tissues, and randomly dividing the pathological images of kidney cancer into a training set and a testing set according to a proportion;
s2, processing all pathological images of the kidney cancer into a plurality of section images with fixed resolution;
s3, filtering the image background through a pixel threshold method, and keeping the foreground;
s4, performing data enhancement processing on the slice image;
s5, randomly cropping the slice images of the training set processed by S2 and S3, applying S4, and inputting them into a detection model based on the CoAtNet deep learning algorithm for training; during training the detection model learns cancer and non-cancer characteristics and generates a pre-trained model for detecting renal cancer regions; the test set data, after processing by S2 and S3, are input into the pre-trained model for testing to obtain slice images with renal cancer region detection marks, which are stitched into complete renal cancer pathological images bearing the detection marks; the test data with cancer region marks are recorded as the data to be input into the classification model;
s6, processing the training set data through S2, S3 and S4, inputting the training set data into a classification model based on CoAtNet for training and optimization, learning the characteristics of three different subtypes by taking subtype names as labels through the classification model, selecting and storing the optimal classification model, inputting the data to be input in S5 into the stored classification model, and outputting specific subtype information of the kidney cancer;
and S7, inputting the renal cancer regions detected in S5 into a CoAtNet-based clustering model as training data; the clustering model clusters the images into 4 categories according to the learned characteristics, corresponding to grade I to IV carcinomas respectively, and generates a final model for judging the grade of the patient's cancer.
And S8, analyzing the relationship between the survival probability of the renal cancer patient and time according to the detected pathological image data of the renal cancer.
Preferably, in S1, the renal cancer pathological images are randomly divided into the training set and the test set at a ratio of 7:3; the cancer regions in the renal cancer pathological images are accurately annotated by experienced pathologists; the training set is used for training and optimizing the model, and the test set is used for testing the performance of the model.
Preferably, in S2, the renal cancer pathological image is sliced by a sliding-window method, with the following specific steps:
s21, magnifying the renal cancer pathological image 20 times;
s22, sliding a window of 512 × 512 pixels over the image in left-to-right, top-to-bottom order;
and S23, during sliding, overlapping each window with the previous window by 50%.
Preferably, the specific steps of filtering the image background in S3 are:
s31, pixel mapping: performing pixel mapping on each sub-image obtained by image slicing, separately for the R, G and B channels, wherein points with pixel values greater than the threshold 230 in each channel are mapped to 1 and points below 230 are mapped to 0;
s32, background judgment: if, in each of the three channels of the image slice, the sum of the mapped pixel values exceeds 60% of the sum obtained when all pixels are mapped to 1, the slice is regarded as background and filtered out; otherwise it is regarded as containing cells and saved to the specified location.
Preferably, the specific steps of slice data enhancement in S4 include:
s41, horizontal flipping,
s42, random rotation by 45 degrees,
and S43, adding Gaussian noise.
Preferably, in S5, all slice images are re-stitched according to the naming and slicing rules by using an Openslide library function, so as to form a complete, labeled renal cancer pathological image.
Preferably, the CoAtNet used by the classification model in S6 first trains the network parameters on the ImageNet data set until convergence and saves them; the output dimension of the last fully connected layer of CoAtNet is then changed to 3 to classify the three renal cancer subtypes, and the remaining network layers are initialized with the parameters pre-trained on ImageNet. When training the model, the initial learning rate = 0.00001 and decreases by a factor of 5 as training proceeds, the batch size = 64, an Adam optimizer is used, and cross entropy is the loss function; the batch size = 1 when testing.
Preferably, the specific steps in S8 include:
S81, after being processed by S2, S3 and S4, the training set data are input into a CoAtNet- and COX-based prognostic analysis model for training and optimization, wherein the prognostic analysis model takes the survival time of the patient as the label;
s82, the 7 × 7 feature map before the CoAtNet global pooling layer is used as the pathological feature of the patient, and the mean over each feature-map channel forms a feature vector used as the input of the COX model;
and S83, the feature vector is input into the COX model, which predicts the patient's median prognostic risk and forms a survival curve.
A renal cancer pathology image analysis system based on CoAtNet deep learning, comprising:
a data reading module: reading pathological image information of the kidney cancer through a device embedded in an intelligent microscope objective lens accessory;
an image slicing module: the system is used for carrying out slice processing on the read pathological image of the kidney cancer to obtain a slice image;
a background filtering module: for background filtering of the section images to retain the cell foreground images;
the data enhancement module: for data enhancement of the slice;
renal cancer region detection module: for detecting and segmenting cancerous and non-cancerous regions of a pathological image of a kidney cancer;
renal cancer subtype classification module: for identifying the subtype information of the input renal cancer pathological image;
a kidney cancer grading module: for predicting the hazard classification stage of renal cancer subtype cells;
renal cancer prognosis analysis module: used for predicting the change of the survival probability of the patient with time.
An intelligent microscopic device for renal cancer pathology image analysis based on CoAtNet deep learning is characterized by comprising an intelligent microscope, wherein the intelligent microscope comprises the renal cancer pathology image analysis system based on CoAtNet deep learning.
The invention has the beneficial effects that:
(1) The invention is an end-to-end intelligent microscopic system: by combining an artificial intelligence algorithm with the microscope, the pathologist can see the diagnosis result in real time in the eyepiece field of the intelligent microscope, and the whole process from observation to outputting the pathological report can be completed simply and rapidly without switching between different devices;
(2) The artificial intelligence algorithm uses an advanced CoAtNet-based deep learning model with strong generalization capability that can adapt to renal cancer pathological images acquired by different hospitals; the model structure is simple, so real-time diagnosis can be achieved;
(3) The invention has comprehensive functions and covers essentially all items required for diagnosing a renal cancer patient: detection and segmentation of renal cancer regions, subtype identification, tumor cell grading, renal cancer prognosis analysis and output of a diagnostic pathology report;
(4) The invention alleviates the shortage of pathologists and their long training period, and also helps reduce the probability of misdiagnosis and the fatigue of pathologists.
Drawings
FIG. 1 is an overall flow chart of the present invention;
FIG. 2 is a pathological example diagram of a region marked with cancer;
FIG. 3 is a schematic view of an image slice process;
FIG. 4 is a schematic diagram of a background filtering process;
FIG. 5 is a graph showing the prognostic analysis of a patient with renal cancer;
FIG. 6 is an exemplary diagram of a pathological image of a patient with renal cancer observed by the intelligent microscope system of the present invention;
FIG. 7 is a block diagram of the system of the present invention.
Detailed Description
For further understanding of the present invention, the present invention will be described in detail with reference to examples, which are provided for illustration of the present invention but are not intended to limit the scope of the present invention.
Example 1
As shown in fig. 1, the present embodiment relates to a renal cancer pathology image analysis method based on the deep learning of CoAtNet, which is characterized by comprising the following steps:
data reading 100: kidney cancer pathology image information in the slide is read by a device embedded in the accessory of the intelligent microscope objective as input for model training and testing.
Renal cancer pathology data set 101 was collected from a local hospital and stained (102) with hematoxylin-eosin to form digital pathology images for training and testing. The data set contained 330 images: 260 clear cell renal cell carcinoma (KIRC) and 35 each of papillary renal cell carcinoma (KIRP) and chromophobe renal cell carcinoma (KICH), randomly divided into training and test sets at a ratio of 7:3. The training set is used to train and optimize the model and the test set to evaluate it. All data were accurately labeled by experienced pathologists; fig. 2 is an example of a pathological image with an annotated cancer region.
Image slice 107: the renal cancer pathological image is sliced using a sliding-window method and processed into several slice images of fixed resolution. As shown in the flow chart of fig. 3, the steps are: a1, magnifying each renal cancer pathological image 20 times; b1, sliding a window of 512 × 512 pixels over the image in left-to-right, top-to-bottom order; c1, to increase the number of slices and enrich the image views, overlapping each window with the previous window by 50% during sliding.
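As an illustration of this tiling step, the sketch below walks a 512 × 512 window with 50% overlap across a whole-slide image using the OpenSlide Python bindings. The file name, output directory and the choice of pyramid level standing in for the 20× magnification are assumptions for the example, not values fixed by the text.

```python
# Sliding-window tiling sketch: 512x512 window, 50% overlap,
# left-to-right then top-to-bottom. Edge remainders smaller than a
# full window are skipped in this simplified version.
import os
import openslide

TILE = 512          # window size in pixels
STRIDE = TILE // 2  # 50% overlap between consecutive windows

def tile_slide(slide_path, level=0):
    """Yield (x, y, RGB tile) triples covering the slide at the given pyramid level."""
    slide = openslide.OpenSlide(slide_path)
    width, height = slide.level_dimensions[level]
    for y in range(0, height - TILE + 1, STRIDE):        # top to bottom
        for x in range(0, width - TILE + 1, STRIDE):      # left to right
            region = slide.read_region((x, y), level, (TILE, TILE)).convert("RGB")
            yield x, y, region

if __name__ == "__main__":
    os.makedirs("tiles", exist_ok=True)
    # "example_kirc_slide.svs" is a placeholder file name.
    for x, y, tile in tile_slide("example_kirc_slide.svs"):
        tile.save(f"tiles/tile_x{x}_y{y}.png")
```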
Background filtering 108: the image background is filtered and the foreground preserved using a pixel-threshold method, as shown in fig. 4. The steps are: a2, pixel mapping: pixel mapping is performed on each sub-image produced by image slicing 107, separately for the R (red), G (green) and B (blue) channels; points with pixel values greater than the threshold 230 in each channel are mapped to 1 (pure colour) and points below 230 are mapped to 0 (black); b2, background judgment: if, in each of the three channels, the sum of the mapped pixel values exceeds 60% of the sum obtained when all pixels are mapped to 1 (the "filtering threshold"), the slice is regarded as background and filtered out; otherwise it is regarded as containing cells and saved to the specified location. After this processing, approximately 2000 final slice images are obtained for each renal cancer pathological image. The completed slices are used as input for subsequent model training and testing.
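A minimal sketch of this pixel-threshold background filter, assuming the tiles arrive as PIL images; the 230 pixel threshold and the 60% background ratio follow the text, everything else is illustrative.

```python
# Background filter sketch: map each RGB channel to a binary mask
# (pixel value > 230 -> 1), then discard the tile as background when every
# channel's mask sum exceeds 60% of an all-ones mask.
import numpy as np
from PIL import Image

PIXEL_THRESHOLD = 230
BACKGROUND_RATIO = 0.6

def is_background(tile: Image.Image) -> bool:
    rgb = np.asarray(tile.convert("RGB"))
    masks = (rgb > PIXEL_THRESHOLD).astype(np.uint8)        # shape (H, W, 3)
    per_channel_ratio = masks.reshape(-1, 3).mean(axis=0)   # fraction of 1s per channel
    return bool((per_channel_ratio > BACKGROUND_RATIO).all())

def keep_foreground(tiles):
    """Keep only tiles that contain tissue rather than blank background."""
    return [t for t in tiles if not is_background(t)]
```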
Data enhancement 109: to improve the generalization performance of the model, the slice images are augmented as follows: a3, horizontal flipping; b3, random rotation by 45 degrees; c3, adding Gaussian noise.
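The three augmentations could be expressed, for instance, with torchvision-style transforms as sketched below; the Gaussian-noise standard deviation and the reading of "random rotation by 45 degrees" as an angle drawn from ±45° are assumptions.

```python
# Augmentation sketch: horizontal flip, random rotation up to 45 degrees,
# additive Gaussian noise on the normalized tensor.
import torch
from torchvision import transforms

class AddGaussianNoise:
    def __init__(self, std=0.01):
        self.std = std
    def __call__(self, tensor):
        return (tensor + torch.randn_like(tensor) * self.std).clamp(0.0, 1.0)

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=45),   # random angle in [-45, 45]
    transforms.ToTensor(),                   # PIL image -> float tensor in [0, 1]
    AddGaussianNoise(std=0.01),
])
```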
Renal cancer region detection 110: used to detect and segment cancerous and non-cancerous regions of the renal cancer pathological image so that they can be displayed in the eyepiece field of the microscope for reference by the pathologist. The steps are: a4, the training data are sliced using the methods of 107 and 108, and the generated slices are used as input to train and optimize the detection model. The detection model uses a CoAtNet-based deep learning algorithm; during training it learns the characteristics of cancer and non-cancer and then generates a pre-trained model capable of detecting renal cancer regions. b4, the test data are read by 100, sliced using methods 107 and 108, and fed into the detection model generated in step a4, which outputs slice images with detection marks; finally all slice images are re-stitched according to the naming and slicing rules by using an Openslide library function to form a complete, labeled renal cancer pathological image. Based on these steps, the pathologist can observe a complete renal cancer pathological image with segmented cancer regions in the eyepiece field 104 of the intelligent microscope. c4, the test data with cancer region marks are recorded as the data to be input into the classification model.
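The text attributes the re-stitching of marked tiles to an Openslide library function; purely as an illustration of the naming-and-coordinate bookkeeping involved, the sketch below pastes tiles back onto a blank canvas with Pillow, using the coordinates encoded in the tile file names produced by the tiling sketch above.

```python
# Re-stitching sketch: tiles named tile_x{X}_y{Y}.png are pasted back at
# their recorded coordinates. Overlapping tiles simply overwrite earlier
# ones, which is acceptable for an overview image.
import re
from pathlib import Path
from PIL import Image

def stitch_tiles(tile_dir, slide_width, slide_height):
    canvas = Image.new("RGB", (slide_width, slide_height), "white")
    pattern = re.compile(r"tile_x(\d+)_y(\d+)\.png")
    for path in Path(tile_dir).glob("*.png"):
        match = pattern.match(path.name)
        if match:
            x, y = int(match.group(1)), int(match.group(2))
            canvas.paste(Image.open(path), (x, y))
    return canvas
```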
The CoAtNet detection model adopts a CoAtNet + UNet network structure. Specifically, the down-sampling part of the UNet network is replaced by the modules of CoAtNet before its global pooling layer, which extract features from the input slice image and generate a 7 × 7 feature map; the features extracted during down-sampling are then fused into the up-sampling part in the UNet skip-connection manner, and the model outputs a detection-marked image of the same size as the input slice. The inputs to the model are the original slice and the corresponding label slice containing the annotated contour of the cancerous tissue region. The model finalizes the location of the contours by learning the features of normal and cancerous tissue.
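A sketch of one way such a CoAtNet + UNet detection model could be assembled in PyTorch is given below: the pre-pooling stages of a CoAtNet backbone act as the encoder, and a small UNet-style decoder fuses the skip connections back up to a full-resolution cancer/non-cancer mask. The timm model name, the availability of features_only=True for the CoAtNet variants, and the decoder channel sizes are assumptions; the patent does not specify an implementation.

```python
# CoAtNet encoder + UNet-style decoder sketch (assumed implementation).
import torch
import torch.nn as nn
import timm

class CoAtNetUNet(nn.Module):
    def __init__(self, backbone_name="coatnet_0_rw_224", num_classes=1):
        super().__init__()
        # Multi-stage feature maps from the CoAtNet encoder (pre-pooling).
        self.encoder = timm.create_model(backbone_name, pretrained=False,
                                         features_only=True)
        chs = self.encoder.feature_info.channels()   # per-stage channel counts
        self.decoders = nn.ModuleList()
        for i in range(len(chs) - 1, 0, -1):
            # Upsample deep features and fuse with the shallower skip connection.
            self.decoders.append(nn.Sequential(
                nn.Conv2d(chs[i] + chs[i - 1], chs[i - 1], 3, padding=1),
                nn.BatchNorm2d(chs[i - 1]),
                nn.ReLU(inplace=True),
            ))
        self.head = nn.Conv2d(chs[0], num_classes, kernel_size=1)

    def forward(self, x):
        feats = self.encoder(x)                       # shallow -> deep
        out = feats[-1]
        for i, dec in enumerate(self.decoders):
            skip = feats[-(i + 2)]
            out = nn.functional.interpolate(out, size=skip.shape[-2:],
                                            mode="bilinear", align_corners=False)
            out = dec(torch.cat([out, skip], dim=1))
        mask = self.head(out)                         # cancer / non-cancer logits
        return nn.functional.interpolate(mask, size=x.shape[-2:],
                                         mode="bilinear", align_corners=False)
```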
Renal cancer subtype classification 111: used to automatically recognize the subtype of the input image for display in the eyepiece field of the microscope for reference by the pathologist. The steps are: a5, slice images for training are generated from the training data through steps 107 and 108; b5, all slices are randomly cropped to 224 × 224 and passed to step 109 for data enhancement, which expands the data volume and improves the generalization and transfer ability of the model; c5, the data from step b5 are input into a CoAtNet-based classifier to train and optimize the model, which learns the characteristics of the three subtypes with the subtype names as labels, and the best classification model is selected and saved; d5, the cancer slices obtained from the test images through the renal cancer region detection model are input into the saved classification model, which outputs the specific renal cancer subtype information for display in the microscope eyepiece 104.
It should be noted that the CoAtNet used by the classification model first trains the network parameters on the ImageNet data set until convergence and saves them. For the classification task of the invention, the output dimension of the last fully connected layer of CoAtNet is changed to 3 to classify the three renal cancer subtypes, and the remaining network layers are initialized with the parameters pre-trained on ImageNet. When training the model, the initial learning rate = 0.00001 and decreases by a factor of 5 as training proceeds, the batch size = 64, an Adam optimizer is used, and cross entropy is the loss function. This configuration avoids over-fitting and under-fitting and allows the features of each subtype to be learned quickly. At test time the batch size = 1 and data enhancement is not used.
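The training configuration described here (3-class head, ImageNet initialization, Adam with an initial learning rate of 1e-5, batch size 64, cross-entropy loss) could be set up roughly as follows; the timm model name and the exact step-decay schedule behind "decreases by a factor of 5" are assumptions.

```python
# Subtype-classifier training sketch: CoAtNet with a 3-way head
# (KIRC / KIRP / KICH), Adam, cross-entropy, step-decay learning rate.
import timm
import torch
from torch import nn, optim

device = "cuda" if torch.cuda.is_available() else "cpu"

model = timm.create_model("coatnet_0_rw_224", pretrained=True, num_classes=3).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-5)
# "Decreases by a factor of 5 as training proceeds" modelled here as a step
# decay every 10 epochs; the exact schedule is not specified in the text.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.2)

def train_one_epoch(loader):
    model.train()
    for images, labels in loader:        # loader yields 224x224 augmented crops
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```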
Renal cancer grading 112: used to predict which of grades I to IV the renal cancer subtype cells belong to. The higher the grade, the worse the prognosis, i.e. grade I is the least harmful and grade IV the most dangerous. The cancer regions screened by the detection model 110 are used as training data and input into a CoAtNet-based clustering model with 4 cluster categories; the model clusters the images into 4 categories according to the learned features, corresponding to grade I to IV carcinomas respectively. The resulting model is used to determine the grade of the patient's cancer.
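One plausible reading of this grading step, sketched below under stated assumptions: CoAtNet pooled features are extracted from the detected cancer tiles and clustered into four groups with k-means, the clusters being intended to correspond to grades I to IV. How clusters are mapped onto grades is not specified in the text and would need validation against reference labels.

```python
# Grading sketch: feature extraction with a CoAtNet backbone followed by
# k-means clustering into 4 groups. Model name is an assumption.
import numpy as np
import timm
import torch
from sklearn.cluster import KMeans

backbone = timm.create_model("coatnet_0_rw_224", pretrained=False, num_classes=0)
backbone.eval()

@torch.no_grad()
def tile_features(batch):                    # batch: (N, 3, 224, 224) float tensor
    return backbone(batch).cpu().numpy()     # num_classes=0 -> pooled feature vectors

def cluster_grades(all_features):
    kmeans = KMeans(n_clusters=4, random_state=0, n_init=10)
    return kmeans.fit_predict(np.vstack(all_features))   # 4 clusters ~ grades I-IV
```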
Renal cancer prognostic analysis 113: used to predict how the patient's survival probability changes over time. The prognostic analysis model combines CoAtNet and a COX model: CoAtNet extracts cellular features and COX predicts the median risk. The steps are: a6, training data are generated from the pathological images through steps 107, 108 and 109 as input to the model, with the patient's survival time as the label; b6, the 7 × 7 feature map before the CoAtNet global pooling layer is used as the patient's pathological feature, and the mean over each feature-map channel forms a feature vector used as the input of the COX model; c6, the feature vector is input into the COX model, which predicts the patient's median prognostic risk and forms a survival curve; d6, the survival probability curve of the patient at a future time t is printed in the pathology report. Fig. 5 is a schematic prognostic-analysis curve of a renal cancer patient: the two curves show the relationship between survival probability and time under the better (upper) and worse (lower) prognosis conditions predicted by the trained model from the patient's pathological features.
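A minimal sketch of this prognostic step, assuming the lifelines implementation of the Cox proportional hazards model: the per-channel mean of the 7 × 7 pre-pooling feature map forms one feature vector per patient, which is then regressed against survival time. Column names, the event indicator and the regularization strength are illustrative.

```python
# Cox prognostic sketch: per-channel mean of the CoAtNet feature map as
# patient features, fitted with lifelines' CoxPHFitter.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def feature_vector(feature_map):
    """feature_map: (C, 7, 7) array -> per-channel mean, shape (C,)."""
    return feature_map.reshape(feature_map.shape[0], -1).mean(axis=1)

def fit_cox(feature_vectors, survival_months, event_observed):
    df = pd.DataFrame(np.stack(feature_vectors))
    df.columns = [f"f{i}" for i in df.columns]
    df["duration"] = survival_months
    df["event"] = event_observed          # 1 = death observed, 0 = censored
    cph = CoxPHFitter(penalizer=0.1)      # light regularization for many features
    cph.fit(df, duration_col="duration", event_col="event")
    return cph

# Survival curves for new patients can then be drawn with
# cph.predict_survival_function(new_df), matching the curves shown in FIG. 5.
```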
Analysis report printing 105: the data output by the models are visualized with the CellProfiler open-source medical analysis platform, the analysis results of the artificial intelligence algorithm 106 are displayed in real time in the eyepiece, and the diagnosis suggestion is entered by voice and printed as the renal cancer pathology report. Fig. 6 is an example of the view seen in the eyepiece by a pathologist observing the pathological image of a renal cancer patient using the intelligent microscope system of the invention.
Example 2
As shown in fig. 7, the present embodiment relates to a renal cancer pathology image analysis system based on the deep learning of CoAtNet, which includes:
a data reading module: reading pathological image information of the kidney cancer through a device embedded in an intelligent microscope objective lens accessory;
an image slicing module: the system is used for carrying out section processing on the read pathological image of the kidney cancer to obtain a section image;
a background filtering module: for background filtering of the section images to retain the cell foreground images;
the data enhancement module: for data enhancement of the slice;
renal cancer region detection module: for detecting and segmenting cancerous and non-cancerous regions of pathological images of kidney cancer;
renal cancer subtype classification module: for identifying the subtype information of the input renal cancer pathological image;
renal cancer grading module: for predicting the hazard classification stage of renal cancer subtype cells;
renal cancer prognosis analysis module: used for predicting the change of the survival probability of the patient with time.
Although the present invention has been described in detail with reference to the specific embodiments, the present invention is not limited to the above embodiments, and various changes and modifications without inventive changes may be made within the knowledge of those skilled in the art without departing from the spirit of the present invention.

Claims (10)

1. A renal cancer pathological image analysis method based on CoAtNet deep learning is characterized by comprising the following steps:
s1, collecting pathological images of kidney cancer of clinically diagnosed lesion tissues, and randomly dividing the pathological images of kidney cancer into a training set and a testing set according to a proportion;
s2, processing all pathological images of the kidney cancer into a plurality of section images with fixed resolution;
s3, filtering the image background through a pixel threshold method, and keeping the foreground;
s4, performing data enhancement processing on the slice image;
s5, processing the slice images processed by the training set data through S2 and S3, processing the slice images through S4 after random cutting, inputting the slice images into a detection model based on a CoAtNet deep learning algorithm for training, learning cancer and non-cancer characteristics of the detection model during training, generating a pre-training model for detecting a kidney cancer region, inputting the slice images with kidney cancer region detection marks into the pre-training model after the test set data is processed through S2 and S3 for testing to obtain the slice images with the kidney cancer region detection marks, splicing the slice images with the kidney cancer region detection marks to form a complete kidney cancer pathological image with the kidney cancer region detection marks, and recording the test data with the cancer region marks as data to be input of a classification model;
s6, processing the training set data through S2, S3 and S4, inputting the training set data into a classification model based on CoAtNet for training and optimization, learning the characteristics of three different subtypes by taking subtype names as labels through the classification model, selecting and storing the optimal classification model, inputting the data to be input in S5 into the stored classification model, and outputting specific subtype information of the kidney cancer;
and S7, inputting the renal cancer region detected in S5 into a CoAtNet-based clustering model as training data, clustering the images into 4 categories according to the learned characteristics by the clustering model, corresponding to grade I to IV carcinomas respectively, and generating a final model for judging the grade of the patient's cancer.
And S8, analyzing the relationship between the survival probability of the renal cancer patient and time according to the detected renal cancer pathological image data.
2. The renal cancer pathology image analysis method based on CoAtNet deep learning of claim 1, wherein in S1, the renal cancer pathological images are randomly divided into the training set and the test set at a ratio of 7:3, the cancer regions in the renal cancer pathological images are accurately annotated by experienced pathologists, the training set is used for training and optimizing the model, and the test set is used for testing the performance of the model.
3. The renal cancer pathology image analysis method based on CoAtNet deep learning of claim 1, wherein in S2, the renal cancer pathology image is sliced by a sliding-window method, with the following specific steps:
s21, magnifying the renal cancer pathological image 20 times;
s22, sliding a window of 512 × 512 pixels over the image in left-to-right, top-to-bottom order;
and S23, during sliding, overlapping each window with the previous window by 50%.
4. The renal cancer pathology image analysis method based on CoAtNet deep learning of claim 1, wherein the specific steps of filtering the image background in S3 are:
s31, pixel mapping: performing pixel mapping on each sub-image obtained by image slicing, separately for the R, G and B channels, wherein points with pixel values greater than the threshold 230 in each channel are mapped to 1 and points below 230 are mapped to 0;
s32, background judgment: if, in each of the three channels of the image slice, the sum of the mapped pixel values exceeds 60% of the sum obtained when all pixels are mapped to 1, the slice is regarded as background and filtered out; otherwise it is regarded as containing cells and saved to a specified position.
5. The renal cancer pathology image analysis method based on CoAtNet deep learning of claim 1, wherein the specific steps of slice data enhancement in S4 comprise:
s41, horizontal flipping,
s42, random rotation by 45 degrees,
and S43, adding Gaussian noise.
6. The renal cancer pathology image analysis method based on CoAtNet deep learning of claim 1, wherein in S5, all slice images are re-stitched according to the naming and slicing rules by using an Openslide library function to form a complete, labeled renal cancer pathology image.
7. The renal cancer pathology image analysis method based on CoAtNet deep learning of claim 1, wherein in S6, the CoAtNet used by the classification model first trains the network parameters on the ImageNet data set until convergence and saves them, then the output dimension of the last fully connected layer of CoAtNet is changed to 3 to classify the three renal cancer subtypes, and the remaining network layers are initialized with the parameters pre-trained on ImageNet; when training the model, the initial learning rate = 0.00001 and decreases by a factor of 5 as training proceeds, the batch size = 64, an Adam optimizer is used, and cross entropy is the loss function; the batch size = 1 when testing.
8. The renal cancer pathology image analysis method based on CoAtNet deep learning of claim 1, wherein the specific steps in S8 include:
S81, after being processed by S2, S3 and S4, the training set data are input into a CoAtNet- and COX-based prognostic analysis model for training and optimization, wherein the prognostic analysis model takes the survival time of the patient as the label;
s82, the 7 × 7 feature map before the CoAtNet global pooling layer is used as the pathological feature of the patient, and the mean over each feature-map channel forms a feature vector used as the input of the COX model;
and S83, the feature vector is input into the COX model, which predicts the patient's median prognostic risk value and forms a survival curve.
9. A renal cancer pathology image analysis system based on CoAtNet deep learning, for use in the method of any one of claims 1-8, characterized in that it comprises:
a data reading module: reading pathological image information of the kidney cancer through a device embedded in an accessory of an objective lens of the intelligent microscope;
an image slicing module: the system is used for carrying out section processing on the read pathological image of the kidney cancer to obtain a section image;
a background filtering module: used for carrying out background filtration on the section images to keep a cell foreground image;
the data enhancement module: for data enhancement of the slice;
renal cancer region detection module: for detecting and segmenting cancerous and non-cancerous regions of a pathological image of a kidney cancer;
renal cancer subtype classification module: for identifying the subtype information of the input renal cancer pathological image;
renal cancer grading module: for predicting the hazard classification stage of renal cancer subtype cells;
renal cancer prognosis analysis module: used for predicting the change of the survival probability of the patient with time.
10. An intelligent microscope device for renal cancer pathology image analysis based on CoAtNet deep learning, which is characterized by comprising an intelligent microscope, wherein the intelligent microscope comprises the renal cancer pathology image analysis system based on CoAtNet deep learning of claim 9.
CN202210671552.XA 2022-06-15 2022-06-15 Renal cancer pathological image analysis method and system based on CoAtNet deep learning and intelligent microscopic device Pending CN115206495A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210671552.XA CN115206495A (en) 2022-06-15 2022-06-15 Renal cancer pathological image analysis method and system based on CoAtNet deep learning and intelligent microscopic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210671552.XA CN115206495A (en) 2022-06-15 2022-06-15 Renal cancer pathological image analysis method and system based on CoAtNet deep learning and intelligent microscopic device

Publications (1)

Publication Number Publication Date
CN115206495A true CN115206495A (en) 2022-10-18

Family

ID=83575906

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210671552.XA Pending CN115206495A (en) 2022-06-15 2022-06-15 Renal cancer pathological image analysis method and system based on CoAtNet deep learning and intelligent microscopic device

Country Status (1)

Country Link
CN (1) CN115206495A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116881725A (en) * 2023-09-07 2023-10-13 之江实验室 Cancer prognosis prediction model training device, medium and electronic equipment
CN116881725B (en) * 2023-09-07 2024-01-09 之江实验室 Cancer prognosis prediction model training device, medium and electronic equipment
CN117672222A (en) * 2024-01-31 2024-03-08 浙江大学滨江研究院 Large language model driven microscope control method and device and electronic equipment
CN117672222B (en) * 2024-01-31 2024-04-16 浙江大学滨江研究院 Large language model driven microscope control method and device and electronic equipment

Similar Documents

Publication Publication Date Title
JP7496389B2 (en) Image analysis method, device, program, and method for manufacturing trained deep learning algorithm
CN112070772B (en) Blood leukocyte image segmentation method based on UNet++ and ResNet
JP7076698B2 (en) Image analysis method, image analysis device, program, learned deep learning algorithm manufacturing method and learned deep learning algorithm
CN111986150B (en) The method comprises the following steps of: digital number pathological image Interactive annotation refining method
CN111951221B (en) Glomerular cell image recognition method based on deep neural network
US20070019854A1 (en) Method and system for automated digital image analysis of prostrate neoplasms using morphologic patterns
CN115206495A (en) Renal cancer pathological image analysis method and system based on CoAtNet deep learning and intelligent microscopic device
CN110796661B (en) Fungal microscopic image segmentation detection method and system based on convolutional neural network
CN113261012B (en) Method, device and system for processing image
CN111476794B (en) Cervical pathological tissue segmentation method based on UNET
CN112990214A (en) Medical image feature recognition prediction model
CN115909006A (en) Mammary tissue image classification method and system based on convolution Transformer
JP2024027078A (en) Method and system for fusion-extracting whole slide pathology features based on multi-scale, system, electronic apparatus, and storage medium
CN114387596A (en) Automatic interpretation system for cytopathology smear
Teverovskiy et al. Improved prediction of prostate cancer recurrence based on an automated tissue image analysis system
CN109948706B (en) Micro-calcification cluster detection method combining deep learning and feature multi-scale fusion
Saxena et al. Study of Computerized Segmentation & Classification Techniques: An Application to Histopathological Imagery
Kim et al. Nucleus segmentation and recognition of uterine cervical pap-smears
CN113222928B (en) Urine cytology artificial intelligence urothelial cancer identification system
CN114898862A (en) Cervical cancer computer-aided diagnosis method based on convolutional neural network and pathological section image
CN112819042A (en) Method, system and medium for processing esophageal squamous dysplasia image
CN114821046B (en) Method and system for cell detection and cell nucleus segmentation based on cell image
CN116705289B (en) Cervical pathology diagnosis device based on semantic segmentation network
Sreelekshmi et al. SwinCNN: An Integrated Swin Trasformer and CNN for Improved Breast Cancer Grade Classification
CN117557558B (en) Full-slice pathological image classification method based on semi-supervised learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination