CN116524315A - Mask R-CNN-based lung cancer pathological tissue section identification and segmentation method
- Publication number
- CN116524315A (application CN202210051013.6A)
- Authority
- CN
- China
- Prior art keywords
- mask
- pathological tissue
- tissue section
- lung cancer
- cnn
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
- G06T2207/30064—Lung nodule
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention belongs to the technical field of medical image processing and relates to a method for identifying and segmenting diseased regions in lung cancer pathological tissue sections. The Mask R-CNN-based method comprises the following steps: S1, acquiring a scanned image of a patient's lung cancer pathological tissue section and preprocessing it; S2, inputting the preprocessed image into a pre-trained disease classification and segmentation model (an improved Mask R-CNN neural network), determining the disease type of the section, and obtaining a visual activation map for lesion region segmentation; S3, calculating the proportion of the lesion area relative to the whole pathological tissue section from the acquired activation map. The method offers high classification accuracy, smooth region segmentation and accurate quantitative calculation; it analyzes images with multiple indices and assists doctors in making pathological judgments quickly, conveniently and accurately.
Description
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to a method for identifying and segmenting diseased regions in lung cancer pathological tissue sections.
Background
Lung cancer causes more cancer deaths in China than any other malignancy and poses a serious threat to public health. Histopathological examination is the most reliable basis for an accurate diagnosis: examination of H&E-stained pathological tissue sections by a physician is the "gold standard" for diagnosing lung cancer and distinguishing its type and severity. With advances in computer recognition technology, digital pathology now combines digital imaging systems with traditional optical imaging devices, providing higher-resolution, clearer and more stable images for physicians' diagnostic analysis.
Medical image processing is one of the most active research fields worldwide. The emergence of computer-aided diagnosis has reduced physicians' workload, but traditional expert systems still require manual extraction of lesion features and suffer from long development cycles and high costs. Deep learning has brought substantial progress to image processing: in the 2017 Kaggle competition on lung tumor nodule recognition (a CT image dataset), the average recall (AR) reached 89.7%. He Kelei designed an end-to-end multi-instance deep convolutional network based on prototype learning, achieving noise-filtered recognition of lung cancer pathological cell images under weak labeling. Existing research focuses on tumor classification and segmentation but offers poor interpretability. Moreover, given the unique ethics of medicine, it is still too early for machine learning to replace physicians in making conclusive diagnoses. It is therefore valuable to add multidimensional evaluation indices to physician-facing computer-aided diagnosis systems and to design accompanying pathological interpretation functions, giving doctors a more accurate and convenient diagnostic reference.
Pathological changes in conventional histological sections are traditionally graded on a four-level scale (e.g., none, mild, moderate, severe). This classical method of evaluating tissue lesions remains the mainstream approach. With the spread of whole-slide scanning and quantitative analysis concepts, however, quantitative evaluation of tissue lesions is becoming popular: the data obtained can accurately reflect the actual degree and extent of a lesion and also facilitate statistical analysis of differences between groups. Area measurement is an important index in quantitative analysis, yet existing measurement software can calculate areas only after the lesion region has been marked manually. The invention therefore fuses qualitative analysis with quantitative measurement to construct an intelligent medical auxiliary system.
In recent years, convolutional neural networks have become one of the most popular methods in image processing. Integrating them deeply with pathological images, and designing dedicated functions and evaluation indices that reflect pathological characteristics, is a direction well worth further research.
Disclosure of Invention
The invention aims to overcome the shortcomings of existing methods by providing a deep-learning-based method that integrates recognition, segmentation and quantitative calculation for lung cancer pathological tissue sections. A Mask R-CNN image-recognition model is applied to the images in stages and combined with a regression algorithm for quantitative calculation, realizing disease classification and lesion-region localization in lung cancer pathological tissue sections and computing the proportion of the lesion area in the whole pathological tissue section. The method offers high classification accuracy, smooth region segmentation and accurate quantitative calculation; it analyzes images with multiple indices and assists doctors in making pathological judgments quickly, conveniently and accurately.
In order to solve the technical problems, the invention adopts the following technical scheme: a Mask R-CNN-based lung cancer pathological tissue section identification and segmentation method comprises the following steps:
s1, acquiring a lung cancer pathological tissue section scanning image of a patient, and preprocessing;
s2, inputting the preprocessed scanned image into a pre-trained disease classification and segmentation model, determining the disease type of the section, and obtaining a visual activation map for lesion region segmentation; the disease classification and segmentation model is an improved Mask R-CNN neural network;
s3, calculating the proportion of the lesion area relative to the whole pathological tissue section from the acquired visual activation map.
Further, in the step S1, the specific method of the pretreatment is as follows:
the scanned image of the patient's lung cancer pathological tissue section is magnified 20 times and converted from TIFF format to JPEG format.
Further, the improved Mask R-CNN neural network specifically comprises:
a feature extraction network comprising an improved residual network (ResNet), in which a fully connected layer and a dropout layer are added before the final classification layer;
an FPN network added to the feature extraction network to perform multi-scale fusion of the extracted features;
an RPN network for generating target regions from the FPN-fused features and feeding a set number of highest-scoring candidate regions into the Mask R-CNN network;
and a Mask R-CNN network that classifies the input candidate regions and segments the lesion regions, generating segmentation masks for the background and lesion regions.
Further, in the step S3, the method for calculating the proportion of the lesion area relative to the whole pathological tissue section comprises:
(1) Performing Gaussian blur processing on the obtained visual activation map, setting the gray level of the lesion region to 0 and that of the background region to 255;
(2) Traversing the image pixels, counting the lesion-region pixels, and calculating the area and proportion of the lesion region.
Further, the Gaussian blur processing computes the transformation of each pixel in the image using a normal distribution. The two-dimensional spatial normal distribution equation is:

G(u, v) = (1 / (2πσ²)) · e^(−r² / (2σ²)),  with r² = u² + v²

where (u, v) are the two-dimensional coordinates of an image pixel, r is the blur radius (the distance from the kernel center), and σ is the standard deviation of the normal distribution.
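For illustration, the truncated, normalized kernel defined by this distribution can be built in a few lines of numpy. This is only a sketch; the radius and σ below are arbitrary values, not taken from the patent:

```python
import numpy as np

def gaussian_kernel(radius, sigma):
    """Build a (2r+1)x(2r+1) Gaussian kernel from the 2-D normal
    distribution and normalize it so the weights sum to 1."""
    u, v = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g = np.exp(-(u**2 + v**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()   # renormalize after truncating the tails

k = gaussian_kernel(2, 1.0)
print(k.shape, round(float(k.sum()), 6))  # (5, 5) 1.0
```

Convolving an image with this kernel replaces each pixel by a weighted average of its neighbors, which is the blur step used before binarization.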
Further, training data of the lesion recognition model is obtained by adopting the following method:
(1) The whole-slide scanned pathological tissue section image is magnified 20 times, then cut into patches and converted from TIFF format to JPEG format.
(2) All images are classified into four types: normal, lung adenocarcinoma, lung squamous carcinoma and small cell lung carcinoma;
(3) Manually segmenting a lesion area by using labelme software;
(4) All data are split in an 8:1:1 ratio into a training set, a validation set and a test set.
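The 8:1:1 split above can be sketched as follows; this is a minimal illustration, and the file names and random seed are invented:

```python
import random

def split_dataset(paths, seed=42):
    """Shuffle file paths and split them 8:1:1 into train/val/test."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)  # deterministic shuffle
    n = len(paths)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset([f"slice_{i}.jpg" for i in range(100)])
print(len(train), len(val), len(test))  # 80 10 10
```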
The lung cancer pathological tissue section identification and segmentation method provided by the invention is based on an improved Mask R-CNN neural network and has important reference value for improving the accuracy of lung cancer diagnosis. Its beneficial effects are:
Through the application of the improved Mask R-CNN neural network to lung cancer lesion segmentation and disease recognition, classification and segmentation results are learned automatically from the tumor pathological tissue section images of patients in a medical image database, yielding normalized lesion-region images and their corresponding binary mask images. The lesion feature extraction network extracts relevant geometric feature parameters, which serve as the reference basis for the subsequent quantitative calculation of lesion area and proportion, helping pathologists improve the efficiency of lung cancer identification and the accuracy of tumor differentiation assessment. In addition, the invention greatly reduces clinicians' slide-reading time and relieves the shortage of skilled manual resources.
Drawings
FIG. 1 is a schematic diagram of a system architecture of the present invention;
FIG. 2 is a schematic diagram of ResNet structure for improved generalization ability;
FIG. 3 is a schematic diagram of a neural network architecture for condition identification and lesion segmentation;
fig. 4 is a flow chart for calculating the area ratio of the binarized lesion area.
Detailed Description
In order that the invention may be readily understood, a more particular description thereof will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
The method for identifying and segmenting lung cancer pathological tissue section diseases based on Mask R-CNN provided by this embodiment is shown in FIG. 1 and comprises the following specific steps:
1. constructing a dataset
A scanned image of the patient's lung cancer pathological tissue section is acquired from a medical image database, preprocessed and labeled to obtain the preprocessed images.
The preprocessing mainly comprises the following steps:
(1) The whole-slide scanned pathological tissue section image is magnified 20 times, then cut into patches, and converted from TIFF format to JPEG format.
(2) All images are classified, according to the accompanying documentation, as normal, lung adenocarcinoma, lung squamous carcinoma or small cell carcinoma.
(3) The lesion area is manually segmented by a physician using labelme software, and the resulting labeling information is stored in a JSON file.
(4) All data are split in an 8:1:1 ratio into a training set, a validation set and a test set for training and testing the model.
The labeling includes: (1) generating a binary mask map of the lesion area of each image from the JSON file containing the labeling information; and (2) marking the lung cancer classification information.
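As a sketch of step (1), a polygon annotation (the list of (x, y) vertices that labelme stores in the "points" field of each shape) can be rasterized into a binary mask. The even-odd ray-casting fill below is a stand-in for whatever rasterizer is actually used, so boundary pixels may differ slightly:

```python
import numpy as np

def polygon_to_mask(vertices, height, width):
    """Rasterize a polygon (list of (x, y) points) into a binary mask
    using even-odd ray casting, sampled at pixel centers."""
    ys, xs = np.mgrid[0:height, 0:width]
    px = xs.ravel() + 0.5
    py = ys.ravel() + 0.5
    inside = np.zeros(px.shape, dtype=bool)
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # does this edge cross the horizontal ray through (px, py)?
        crosses = (y1 > py) != (y2 > py)
        with np.errstate(divide="ignore", invalid="ignore"):
            x_at_y = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
        inside ^= crosses & (px < x_at_y)
    return inside.reshape(height, width).astype(np.uint8)

# a 6x6 square lesion annotation inside a 10x10 patch
mask = polygon_to_mask([(2, 2), (8, 2), (8, 8), (2, 8)], 10, 10)
```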
2. Construction and training of disease classification and segmentation models
1. Feature extraction network
The feature extraction network is an improved residual network (ResNet): the number of convolution layers is reduced, and a fully connected layer and a dropout layer are added before the final classification layer to improve the generalization ability of the neural network. An FPN network is also added to the feature extraction network to perform multi-scale fusion of the extracted features.
The feature extraction convolutional network, shown in FIG. 2, is a modified ResNet. The first convolution part of the network consists of a convolution layer, a BatchNorm layer, a ReLU activation layer and a max pooling layer, where the convolution kernel size is 7x7 and the stride of the max pooling layer is 2. The second convolution part comprises 3 residual blocks. Each residual block contains one 1x1 convolution layer, one 3x3 convolution layer, 3 BatchNorm layers and 3 ReLU activation layers; the feature map of the first layer of each residual block is deconvolved so that its size matches that of the second convolution layer's feature map.
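The core idea of a residual block is the skip connection: the block learns a residual F(x) and adds the input back before the final activation. The numpy sketch below illustrates only that idea; dense layers stand in for the 1x1/3x3 convolutions and the layer sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = ReLU(x + F(x)): the identity shortcut adds the input back,
    so the block only has to learn the residual F(x)."""
    h = relu(x @ w1)      # stand-in for 1x1 conv + BatchNorm + ReLU
    f = h @ w2            # stand-in for 3x3 conv + BatchNorm
    return relu(x + f)    # identity shortcut, then final ReLU

d = 16
x = rng.standard_normal((4, d))
w1 = rng.standard_normal((d, d)) * 0.1
w2 = rng.standard_normal((d, d)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (4, 16)
```

The shortcut keeps the output the same shape as the input, which is why the patent deconvolves the first feature map to match sizes when they would otherwise differ.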
The FPN-fused features are fed into an RPN (Region Proposal Network) to generate target regions; the highest-scoring candidate regions (their number is set as a hyperparameter) are input into the Mask R-CNN network, and bounding-box regression refines the candidate boxes to obtain the final target boxes.
2. Disease classification and lesion-region segmentation are realized by the Mask R-CNN-based classification and segmentation neural network, as shown in FIG. 3.
3. The constructed pathological tissue image training set is input into the Mask R-CNN neural network for training, and the disease classification and segmentation model is obtained after validation and testing.
3. Pathological tissue section disease identification and segmentation of lung cancer
1. The acquired scanned image of the patient's lung cancer pathological tissue section is magnified 20 times, then cut into patches, and converted from TIFF format to JPEG format.
2. The image is input into the disease classification and segmentation model.
First, the improved ResNet performs preliminary convolutions to extract abstract image features. Second, the FPN feature pyramid network performs multi-scale fusion of the multi-layer abstract feature maps. The fused features are then fed into the RPN to generate target regions; RoIAlign picks the features corresponding to each RoI from the full image feature map, and a fully connected layer performs classification. The segmentation task runs in parallel with the fully connected classification branch. RoIAlign uses bilinear interpolation:

f(x, y) ≈ [ f(Q11)(x2 − x)(y2 − y) + f(Q21)(x − x1)(y2 − y) + f(Q12)(x2 − x)(y − y1) + f(Q22)(x − x1)(y − y1) ] / [ (x2 − x1)(y2 − y1) ]

where Q11 = (x1, y1), Q12 = (x1, y2), Q21 = (x2, y1) and Q22 = (x2, y2) are the four grid points used for interpolation.
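A direct sketch of this interpolation formula, useful for checking the corner weights:

```python
def bilinear(x, y, x1, y1, x2, y2, q11, q12, q21, q22):
    """Interpolate f(x, y) from four corner values, where
    q11 = f(x1, y1), q12 = f(x1, y2), q21 = f(x2, y1), q22 = f(x2, y2)."""
    denom = (x2 - x1) * (y2 - y1)
    return (q11 * (x2 - x) * (y2 - y)
            + q21 * (x - x1) * (y2 - y)
            + q12 * (x2 - x) * (y - y1)
            + q22 * (x - x1) * (y - y1)) / denom

# the midpoint of a unit cell is the mean of the four corner values
v = bilinear(0.5, 0.5, 0, 0, 1, 1, 1.0, 2.0, 3.0, 4.0)
print(v)  # 2.5
```

At a corner the formula reduces to that corner's value, which is why RoIAlign degrades gracefully when a sampling point lands exactly on a grid point.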
Refining the lesion-region detection results: the class score and coordinates of each proposed region are obtained; proposals whose highest-scoring class is the background are deleted, proposals whose highest score does not reach the threshold are removed, and non-maximum suppression (NMS) is applied to candidate boxes of the same class. After NMS, the −1 placeholders are removed from the box indices, the top n boxes are kept, and the information of each box (y1, x1, y2, x2, class_id, score) is returned.
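The per-class NMS step can be sketched as a greedy loop over score-sorted boxes; the IoU threshold below is illustrative, not a value specified by the patent:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over (y1, x1, y2, x2) boxes.
    Returns indices of kept boxes, highest score first."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of the top box with the remaining boxes
        yy1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        xx1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        yy2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        xx2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, yy2 - yy1) * np.maximum(0, xx2 - xx1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavy overlaps
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]
```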
Generating the segmentation mask of the lesion region image: the obtained proposed regions are fed into an FCN network, which outputs a 2-channel mask in which each channel represents a different class; the mask output is binarized by thresholding, generating segmentation masks for the background and lesion regions.
3. Calculation of area ratio of lesion region
(1) And carrying out Gaussian blur processing on the obtained 'background and lesion region segmentation Mask' image, setting the gray level of the lesion region to be 0 and setting the gray level of the background region to be 255.
Gaussian blur is an image blur filter that uses a normal distribution to calculate the transform for each pixel in an image.
The two-dimensional normal distribution equation is:

G(u, v) = (1 / (2πσ²)) · e^(−r² / (2σ²)),  with r² = u² + v²

where (u, v) are the two-dimensional coordinates of an image pixel, r is the blur radius and σ is the standard deviation of the normal distribution. In a two-dimensional image, the concentric contour circles of the normal distribution centered on the kernel origin are convolved with the corresponding pixels of the original image; each pixel value after convolution is a weighted average of the surrounding neighboring pixel values.
(2) The image pixels are traversed, the lesion-region pixels are counted, and the area and proportion of the lesion region are calculated.
The gray image is binarized: lesion pixels have value 0 and background pixels 255. A counting variable count for lesion pixels is initialized to 0; a loop traverses all pixels of the image and checks the gray value of each. The flow is shown in FIG. 4; specifically:
let (h_x, w_x) be the pixel point:
when the pixel value (h_x, w_x) is 0, count=count+1; otherwise, not counting.
Finally, the lesion proportion report is obtained as:
proportion = count / pic_shape
where pic_shape is the total number of pixels in the image.
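The counting loop above is equivalent to the following vectorized numpy sketch (the 10x10 example mask is made up):

```python
import numpy as np

def lesion_proportion(mask):
    """Count lesion pixels (value 0) in a binarized mask whose
    background is 255, and return their share of all pixels.
    Vectorized equivalent of the per-pixel counting loop."""
    count = int(np.count_nonzero(mask == 0))
    pic_shape = mask.size            # total number of pixels
    return count / pic_shape

mask = np.full((10, 10), 255, dtype=np.uint8)
mask[2:6, 2:6] = 0                   # a 4x4 lesion region
print(lesion_proportion(mask))       # 0.16
```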
Claims (7)
1. A Mask R-CNN-based lung cancer pathological tissue section identification and segmentation method is characterized by comprising the following steps of: the method comprises the following steps:
s1, acquiring a lung cancer pathological tissue section scanning image of a patient, and preprocessing;
s2, inputting the preprocessed scanned image into a pre-trained disease classification and segmentation model, determining the disease type of the section, and obtaining a visual activation map for lesion region segmentation; the disease classification and segmentation model is an improved Mask R-CNN neural network;
s3, calculating the proportion of the lesion area relative to the whole pathological tissue section from the acquired visual activation map.
2. The Mask R-CNN-based lung cancer pathological tissue section identification and segmentation method according to claim 1, wherein the method comprises the following steps: in the step S1, the specific method for preprocessing is as follows:
the scanned image of the patient's lung cancer pathological tissue section is magnified 20 times and converted into JPEG format.
3. The Mask R-CNN-based lung cancer pathological tissue section identification and segmentation method according to claim 1, wherein the method comprises the following steps: the improved Mask R-CNN neural network specifically comprises:
a feature extraction network comprising an improved residual network (ResNet), in which a fully connected layer and a dropout layer are added before the final classification layer;
an FPN network added to the feature extraction network to perform multi-scale fusion of the extracted features;
an RPN network for generating target regions from the FPN-fused features and feeding a set number of highest-scoring candidate regions into the Mask R-CNN network;
and a Mask R-CNN network that classifies the input candidate regions and segments the lesion regions, generating segmentation masks for the background and lesion regions.
4. The Mask R-CNN-based lung cancer pathological tissue section recognition and segmentation method according to claim 3, wherein: in the step S3, the method for calculating the proportion of the lesion area relative to the whole pathological tissue section comprises the following steps:
(1) Performing Gaussian blur processing on the obtained visual activation map, setting the gray level of the lesion region to 0 and that of the background region to 255;
(2) Traversing the image pixels, counting the lesion-region pixels, and calculating the area and proportion of the lesion region.
5. The Mask R-CNN-based lung cancer pathological tissue section identification and segmentation method according to claim 4, wherein: the Gaussian blur processing computes the transformation of each pixel in the image using a normal distribution, the two-dimensional spatial normal distribution equation being:

G(u, v) = (1 / (2πσ²)) · e^(−r² / (2σ²)),  with r² = u² + v²

where (u, v) are the two-dimensional coordinates of an image pixel, r is the blur radius and σ is the standard deviation of the normal distribution.
6. The Mask R-CNN-based lung cancer pathological tissue section recognition and segmentation method according to any one of claims 1 to 5, wherein: training data of the lesion recognition model is obtained by adopting the following method:
(1) The whole-slide scanned pathological tissue section image is magnified 20 times, then cut into patches and converted from TIFF format to JPEG format;
(2) All images are classified into four types: normal, lung adenocarcinoma, lung squamous carcinoma and small cell lung carcinoma;
(3) Manually segmenting a lesion area by using labelme software;
(4) All data are split in an 8:1:1 ratio into a training set, a validation set and a test set.
7. The Mask R-CNN-based lung cancer pathological tissue section identification and segmentation method according to claim 6, wherein: the preprocessed images also need to be labeled, including: (1) generating a binary mask map of the lesion area of each image from the JSON file containing the labeling information; and (2) marking the lung cancer classification information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210051013.6A CN116524315A (en) | 2022-01-17 | 2022-01-17 | Mask R-CNN-based lung cancer pathological tissue section identification and segmentation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116524315A true CN116524315A (en) | 2023-08-01 |
Family
ID=87403368
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210051013.6A Pending CN116524315A (en) | 2022-01-17 | 2022-01-17 | Mask R-CNN-based lung cancer pathological tissue section identification and segmentation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116524315A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116883397A (en) * | 2023-09-06 | 2023-10-13 | 佳木斯大学 | Automatic lean method and system applied to anatomic pathology |
CN116883397B (en) * | 2023-09-06 | 2023-12-08 | 佳木斯大学 | Automatic lean method and system applied to anatomic pathology |
CN118154975A (en) * | 2024-03-27 | 2024-06-07 | 广州市中西医结合医院 | Tumor pathological diagnosis image classification method based on big data |
CN118154975B (en) * | 2024-03-27 | 2024-10-01 | 广州市中西医结合医院 | Tumor pathological diagnosis image classification method based on big data |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111985536B (en) | Gastroscopic pathology image classification method based on weakly supervised learning | |
CN110060774B (en) | Thyroid nodule identification method based on generative adversarial network | |
CN108464840B (en) | Automatic detection method and system for breast lumps | |
WO2018120942A1 (en) | System and method for automatically detecting lesions in medical image by means of multi-model fusion | |
CN112529894B (en) | Thyroid nodule diagnosis method based on deep learning network | |
CN111028206A (en) | Prostate cancer automatic detection and classification system based on deep learning | |
CN111862044B (en) | Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium | |
CN112380900A (en) | Deep learning-based cervical fluid-based cell digital image classification method and system | |
CN113223005B (en) | Thyroid nodule automatic segmentation and grading intelligent system | |
CN108062749B (en) | Identification method and device for levator ani fissure hole and electronic equipment | |
CN112270667B (en) | TI-RADS-based integrated deep learning multi-tag identification method | |
CN116524315A (en) | Mask R-CNN-based lung cancer pathological tissue section identification and segmentation method | |
CN112365973B (en) | Pulmonary nodule auxiliary diagnosis system based on adversarial network and Fast R-CNN | |
CN110189293A (en) | Cell image processing method, device, storage medium and computer equipment | |
CN112348059A (en) | Deep learning-based method and system for classifying multiple dyeing pathological images | |
CN116758336A (en) | Medical image intelligent analysis system based on artificial intelligence | |
CN113902738A (en) | Heart MRI segmentation method and system | |
CN107590806B (en) | Detection method and system based on brain medical imaging | |
CN117809030A (en) | Breast cancer CT image identification and segmentation method based on artificial neural network | |
CN117495882A (en) | Liver tumor CT image segmentation method based on AGCH-Net and multi-scale fusion | |
CN117522862A (en) | Image processing method and processing system based on CT image pneumonia recognition | |
de Araújo et al. | Automated detection of segmental glomerulosclerosis in kidney histopathology | |
CN114359279B (en) | Image processing method, image processing device, computer equipment and storage medium | |
Patibandla et al. | CT Image Precise Denoising Model with Edge Based Segmentation with Labeled Pixel Extraction Using CNN Based Feature Extraction for Oral Cancer Detection | |
CN118657756B (en) | Intelligent auxiliary decision making system and method for brain tumor patient nursing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |