GB2620233A - Method for recognizing pancreatic cancer image based on federated transfer learning (FTL) - Google Patents
- Publication number
- GB2620233A (application GB2305577.5A / GB202305577A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- pancreatic cancer
- recognized
- image
- images
- segmented
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/096—Transfer learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/098—Distributed learning, e.g. federated learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
Abstract
A method for recognizing a pancreatic cancer image comprises acquiring (S1) pancreatic cancer pathological images and labelling tissues in the images to obtain labelled images, the tissues comprising fat, small intestine, lymph, muscle and a tumour. A tissue classification model is constructed (S2) from the labelled images based on federated transfer learning (FTL). A to-be-recognized pancreatic cancer image is segmented (S4) to obtain to-be-recognized segmented images of the same size, and the positions of the segmented images in the to-be-recognized pancreatic cancer image are recorded. The to-be-recognized segmented pancreatic cancer images are inputted one by one to the tissue classification model (S5) to obtain tissue classification results, and a corresponding table between the various tissues and the positions is established according to the tissue classification results. The corresponding position and number (count) of tumour entries are acquired from the table, and it is determined whether the number is greater than or equal to 1. If yes, the to-be-recognized pancreatic cancer image is outputted as a diseased image, and the corresponding position of the tumour is framed on the to-be-recognized pancreatic cancer image. If no, the to-be-recognized pancreatic cancer image is outputted as a normal image.
Description
METHOD FOR RECOGNIZING PANCREATIC CANCER IMAGE BASED ON
FEDERATED TRANSFER LEARNING (FTL)
TECHNICAL FIELD
[0001] The present disclosure relates to the technical field of pancreatic cancer image recognition, and in particular to a method for recognizing a pancreatic cancer image based on federated transfer learning (FTL).
BACKGROUND
[0002] Pancreatic cancer is a highly aggressive digestive system tumor, and one of the most malignant tumors, with a low early diagnosis rate and a poor prognosis. Surgical pathological examination is considered the gold standard for diagnosing pancreatic cancer, but pancreatectomy is highly risky. At present, clinical and pathological diagnosis of pancreatic cancer mainly depends on the less invasive endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA). By sampling a pathological section of the pancreas and performing pathological examination, the technique achieves a sensitivity of 85-95% and a specificity of 95-98%. Rapid on-site evaluation (ROSE) is an important factor affecting the sensitivity of EUS-FNA in diagnosing pancreatic cancer: rapidly stained sections sampled with EUS-FNA are evaluated on site by a pathologist, who thereby determines the effectiveness and sufficiency of the tissue sections in real time. Through laborious observation of the large pathological sections, the pathologist manually diagnoses the type and grade of the tumor by virtue of professional knowledge. Nowadays, preparation of pathological sections is gradually becoming automated, and a number of pathological sections have been stored as digital images, which lays a data basis for the development of computer-aided diagnosis. Tissue segmentation of a pathological image plays a vital role in recognition, quantitative analysis and other subsequent operations, and the segmentation quality has a direct impact on the quality of recognition of the pathological image. Hence, accurate automatic tissue segmentation is crucial to the accuracy of computer-aided diagnosis. Automatic segmentation of the various tissues in pathological sections is rarely implemented, because fully scanned pathological images are large and contain many types of tissue.
It is challenging to automatically classify and segment the various types of tissue in fully scanned pathological images. Chinese Patent Application No. CN 201110063144.8 provides a digital image processing and pattern classification method applied to computer-aided diagnosis in endoscopic ultrasonography of pancreatic cancer. By extracting textural features of endoscopic ultrasound images and with the help of a classifier, objective and quantitative diagnostic indicators, as well as correct methods for describing the endoscopic ultrasound images, are established to make early diagnosis of pancreatic cancer by endoscopic ultrasonography more accurate. However, that method is directed at image processing and pattern classification of ultrasound images, and its accuracy on such images is limited. As for the rapidly stained pathological images of pancreatic cells in EUS-FNA, research on classification of pancreatic cancer pathological images based on FTL is still in its infancy. Owing to the lack of labeled high-quality data and the large noise regions in high-resolution pathological images, the classification performance of a model is affected.
SUMMARY
[0003] In view of the prior-art problems that FTL-based classification of a pancreatic cancer pathological image performs no tissue segmentation and gives no feedback on lesion position, the present disclosure provides a method for recognizing a pancreatic cancer image based on FTL. The present disclosure establishes a classification model for the various tissues of the pancreas based on FTL, and diagnoses a lesion according to the classification model.
[0004] To achieve the above objective, the present disclosure provides the following technical solutions:
[0005] A method for recognizing a pancreatic cancer image based on FTL, including the following steps:
[0006] S1: acquiring a plurality of pancreatic cancer pathological images, preprocessing the pancreatic cancer pathological images, and labeling various tissues in the pancreatic cancer pathological images to obtain labeled pancreatic cancer pathological images, the various tissues including fat, small intestine, lymph, muscle and a tumor;
[0007] S2: constructing a tissue classification model with the labeled pancreatic cancer pathological images based on FTL;
[0008] S3: preprocessing a to-be-recognized pancreatic cancer image in the same way as the pancreatic cancer pathological images are preprocessed in step S1, segmenting the preprocessed to-be-recognized pancreatic cancer image to obtain a plurality of to-be-recognized segmented pancreatic cancer images of the same size, and recording the positions of the to-be-recognized segmented pancreatic cancer images in the preprocessed to-be-recognized pancreatic cancer image;
[0009] S4: inputting the to-be-recognized segmented pancreatic cancer images to the tissue classification model to obtain a plurality of tissue classification results, and establishing a corresponding table between the various tissues and the positions according to the plurality of tissue classification results; and
[0010] S5: acquiring the corresponding position and number of a tumor in the corresponding table between the various tissues and the positions, and determining whether the number is greater than or equal to 1; if yes, outputting the to-be-recognized pancreatic cancer image as a diseased image and framing the corresponding position of the tumor on the to-be-recognized pancreatic cancer image; and if no, outputting the to-be-recognized pancreatic cancer image as a normal image.
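The decision logic of steps S4-S5 can be sketched in miniature as follows; the mapping name `tissue_of_block` is illustrative only and assumes block-level tissue predictions are already available:

```python
def recognize(tissue_of_block):
    """Steps S4-S5 in miniature: `tissue_of_block` maps each grid
    position (row, column) to its predicted tissue label. The image
    is reported as diseased if at least one block is a tumor."""
    # Collect the positions of every block classified as tumor.
    tumor_positions = [pos for pos, tissue in tissue_of_block.items()
                       if tissue == "tumor"]
    if len(tumor_positions) >= 1:
        # Diseased image: these positions would be framed on the image.
        return "diseased", tumor_positions
    return "normal", []
```

For example, with the hypothetical predictions `{(1, 1): "fat", (1, 2): "tumor"}`, the function returns a diseased verdict with position (1, 2).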
[0011] The present disclosure constructs the tissue classification model based on FTL to recognize the to-be-recognized pancreatic cancer image, performs image segmentation and localization on the to-be-recognized pancreatic cancer image, and inputs the to-be-recognized segmented pancreatic cancer images to the tissue classification model one by one to obtain a plurality of tissue classification results. The present disclosure can recognize the various tissues in a pancreatic cancer image and localize the tumor when outputting results, which provides auxiliary support for the professional doctor and shortens diagnosis time.
[0012] Preferably, preprocessing the pancreatic cancer pathological images in S1 includes normalization, standardization, and grayscale transformation. Through normalization, the present disclosure accelerates convergence in network training. Through standardization, the present disclosure centers the image by removing the mean, increasing the generalization ability of the model. Through grayscale transformation, the present disclosure removes low-resolution pancreatic cancer pathological images and eliminates blank background regions, erythrocytes and other noise interference in the pathological image, so that the tissue classification model can focus on features such as the morphology, arrangement and heterogeneity of pancreatic cells, improving the interpretability and classification accuracy of the tissue classification model.
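The patent does not specify the exact transforms, but the three preprocessing operations could be sketched as follows; the BT.601 grayscale weights and the [0, 255] RGB input range are assumptions:

```python
import numpy as np

def preprocess(image):
    """Normalize, standardize, and grayscale-transform a pathological
    image, assumed to be an RGB array of shape (H, W, 3) in [0, 255]."""
    img = image.astype(np.float32) / 255.0   # normalization to [0, 1]
    img = img - img.mean()                   # centering by mean removal
    std = img.std()
    if std > 0:
        img = img / std                      # standardization
    # Grayscale transformation: weighted channel sum (BT.601 weights).
    gray = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    return gray
```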
[0013] Preferably, S2 specifically includes:
[0014] S201: extracting the labeled pancreatic cancer pathological images in the form of image blocks to obtain image blocks of the five labeled tissue types, all image blocks forming a classification dataset;
[0015] S202: dividing the classification dataset into a training set and a test set; and
[0016] S203: constructing an initial classification model based on FTL, and training and testing the initial classification model with the training set and the test set to obtain a well-trained tissue classification model.
[0017] Preferably, S203 further includes a step of amplifying the training set: performing 90-degree rotation, 180-degree rotation, horizontal flip and vertical flip on all image blocks in the training set to obtain amplified image blocks, and adding the amplified image blocks to the training set. Because the sample size of medical images is small, the present disclosure performs a series of data amplifications on the training images to relieve the overfitting problem. With the 90-degree rotation, 180-degree rotation, horizontal flip and vertical flip of the image blocks, the present disclosure amplifies the training set five-fold to improve training accuracy.
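The five-fold amplification can be sketched as follows: each original block is kept and its four transformed copies are appended.

```python
import numpy as np

def amplify(blocks):
    """Amplify a list of image blocks five-fold: each original block
    plus its 90-degree rotation, 180-degree rotation, horizontal flip,
    and vertical flip."""
    amplified = []
    for block in blocks:
        amplified.append(block)
        amplified.append(np.rot90(block, k=1))  # 90-degree rotation
        amplified.append(np.rot90(block, k=2))  # 180-degree rotation
        amplified.append(np.fliplr(block))      # horizontal flip
        amplified.append(np.flipud(block))      # vertical flip
    return amplified
```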
[0018] S2 may include one or more of the following steps: training a lightweight ResNet50 detection model according to the local label data of each source domain, and integrating the parameters of the ResNet50 detection models with a federated integration method to obtain a local model; training the local model with an individualized model training method of FTL to obtain the parameters of an individualized detection model of each source domain; weighting and integrating the parameters of the individualized detection models of the source domains to obtain a global model of the source domains, and taking the global model of the source domains as an initial model of a target domain; and training a pseudo-label predictor for the target domain with a federated diffusion network, performing random voting with the individualized detection models of the source domains to obtain a prediction result of the pseudo-label predictor, and training the initial model on the prediction result and unlabeled data of the target domain to obtain an optimal detection model of the target domain.
[0019] Preferably, S3 specifically includes:
[0020] S301: preprocessing the to-be-recognized pancreatic cancer image in the same way as in S1;
[0021] S302: segmenting the preprocessed to-be-recognized pancreatic cancer image to obtain the plurality of to-be-recognized segmented pancreatic cancer images of the same size, the preprocessed to-be-recognized pancreatic cancer image being equally segmented into n rows and m columns of to-be-recognized segmented pancreatic cancer images; and
[0022] S303: recording each position of a to-be-recognized segmented pancreatic cancer image in the preprocessed to-be-recognized pancreatic cancer image as (n, m), where (n, m) indicates that the to-be-recognized segmented pancreatic cancer image is located at the nth row and mth column of the to-be-recognized pancreatic cancer image.
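Steps S302-S303 can be sketched as follows, assuming the image is a NumPy array whose height and width divide evenly by n and m:

```python
import numpy as np  # image blocks are handled as NumPy arrays

def segment_grid(image, n, m):
    """Equally segment a preprocessed image into n rows and m columns.
    Returns a list of (position, block) pairs, where position (i, j)
    means the block sits at the i-th row, j-th column (1-indexed)."""
    h, w = image.shape[:2]
    bh, bw = h // n, w // m  # block height and width
    blocks = []
    for i in range(n):
        for j in range(m):
            sub = image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            blocks.append(((i + 1, j + 1), sub))
    return blocks
```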
[0023] Preferably, S4 specifically includes:
[0024] S401: inputting each of the to-be-recognized segmented pancreatic cancer images to the tissue classification model to obtain the probability that the to-be-recognized segmented pancreatic cancer image belongs to each tissue type, and determining whether its tumor probability is greater than a preset tumor value; if yes, classifying the to-be-recognized segmented pancreatic cancer image as the tumor type, and taking the position of the to-be-recognized segmented pancreatic cancer image in the preprocessed to-be-recognized pancreatic cancer image as a position corresponding to the tumor type; and performing S402 if no;
[0025] S402: classifying the to-be-recognized segmented pancreatic cancer image as the tissue type with the largest probability; and
[0026] S403: establishing, after all of the to-be-recognized segmented pancreatic cancer images have been classified, the corresponding table between the various tissues and the positions, a header in the corresponding table including tissue type, number, and position.
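A minimal sketch of S401-S403 follows, assuming probability vectors ordered as fat, small intestine, lymph, muscle, tumor, and a preset tumor value assumed at 0.5 (within the disclosure's preferred 50%-60% range):

```python
from collections import defaultdict

TISSUES = ["fat", "small intestine", "lymph", "muscle", "tumor"]
TUMOR_THRESHOLD = 0.5  # assumed preset tumor value

def build_table(predictions):
    """S401-S403: `predictions` maps a block position (row, column)
    to a probability vector over TISSUES. A block whose tumor
    probability exceeds the threshold is classified as tumor (S401);
    otherwise it takes its most probable tissue type (S402). Returns
    the tissue/number/position correspondence table of S403."""
    table = defaultdict(lambda: {"number": 0, "positions": []})
    tumor_idx = TISSUES.index("tumor")
    for pos, probs in predictions.items():
        if probs[tumor_idx] > TUMOR_THRESHOLD:
            tissue = "tumor"
        else:
            # Largest-probability tissue type.
            tissue = max(zip(probs, TISSUES))[1]
        table[tissue]["number"] += 1
        table[tissue]["positions"].append(pos)
    return dict(table)
```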
[0027] Preferably, the preset tumor value falls within a range of 50% to 60%.
[0028] The present disclosure has the following beneficial effects: it constructs the tissue classification model based on FTL to recognize the to-be-recognized pancreatic cancer image, performs image segmentation and localization on the to-be-recognized pancreatic cancer image, and inputs the to-be-recognized segmented pancreatic cancer images to the tissue classification model one by one to obtain a plurality of tissue classification results. By implementing the method according to the present disclosure, it may be possible to recognize the various tissues in a pancreatic cancer image and to localize the tumor type based on the output results, which can provide auxiliary support for the professional doctor and shorten diagnosis time.
BRIEF DESCRIPTION OF THE DRAWINGS
[0029] FIG. 1 is a flowchart of a method according to an embodiment of the present disclosure; and
[0030] FIG. 2 is a construction flowchart of a tissue classification model according to an embodiment of the present disclosure.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0031] Embodiment:
[0032] Referring to FIG. 1, this embodiment provides a method for recognizing a pancreatic cancer image based on FTL, including the following steps:
[0033] S1: Acquire a plurality of pancreatic cancer pathological images, preprocess the pancreatic cancer pathological images, and label various tissues in the pancreatic cancer pathological images to obtain labeled pancreatic cancer pathological images, the various tissues including fat, small intestine, lymph, muscle and a tumor. In this embodiment, preprocessing the pancreatic cancer pathological images in S1 includes normalization, standardization, and grayscale transformation. Through normalization, the present disclosure accelerates convergence in network training. Through standardization, the present disclosure centers the image by removing the mean, increasing the generalization ability of the model. Through grayscale transformation, the present disclosure removes low-resolution pancreatic cancer pathological images and eliminates blank background regions, erythrocytes and other noise interference in the pathological image, so that the tissue classification model can focus on features such as the morphology, arrangement and heterogeneity of pancreatic cells, improving the interpretability and classification accuracy of the tissue classification model.
[0034] 25 pancreatic cancer pathological images are selected. After the above five tissues are strictly labeled in the images, numeral-labeled pathological images are obtained, and thus the labeled pancreatic cancer pathological images are formed.
[0035] S2: Construct a tissue classification model with the labeled pancreatic cancer pathological images based on FTL. Referring to FIG. 2, S2 specifically includes:
[0036] S201: Extract the labeled pancreatic cancer pathological images in the form of image blocks to obtain image blocks of the five labeled tissue types, all image blocks forming a classification dataset. With the use of an image block sampling technique, the model is trained with smaller local image blocks, thereby keeping basic local details. The image blocks each have a size of 224*224 pixels, and five tissue classifiers are trained.
[0037] S202: Divide the classification dataset into a training set and a test set.
[0038] S203: Construct an initial classification model based on FTL, and train and test the initial classification model with the training set and the test set to obtain a well-trained tissue classification model.
[0039] S203 further includes a step of amplifying the training set: Perform 90-degree rotation, 180-degree rotation, horizontal flip and vertical flip on all image blocks in the training set to obtain amplified image blocks, and add the amplified image blocks to the training set. Because the sample size of medical images is small, the present disclosure performs a series of data amplifications on the training images to relieve the overfitting problem. With the 90-degree rotation, 180-degree rotation, horizontal flip and vertical flip of the image blocks, the present disclosure amplifies the training set five-fold to improve training accuracy.
[0040] The initial classification model in the present disclosure is improved from an FTL algorithm-based model. The initial classification model includes a hardware-aware layer, a data processing layer, a business logic layer, a cross-system cascaded network layer, and a data storage unit.
[0041] The hardware-aware layer is configured to monitor video to provide hardware support.
[0042] The data processing layer is configured to process input data through a deep neural network (DNN) to obtain support data for decision making by the business logic layer.
[0043] The business logic layer is configured to maintain a model database, compare image data, maintain a submodel, and communicate with a shared model for an Internet layer and a cloud server.
[0044] The cross-system cascaded network layer is configured to acquire data from the business logic layer, perform transfer learning according to the acquired model parameters, and transmit data to the data storage unit upon completion of the transfer learning.
[0045] The data storage unit is configured to store data.
[0046] The business logic layer includes an image data matching module, a database maintenance module, and a model training module.
[0047] The model training module includes the submodel and an encryption parameter submodule. The encryption parameter submodule includes a parameter update element and an encryption algorithm element. The data storage unit includes the shared model, the cloud server, a parameter decoding module, and a parameter aggregation module.
[0048] The model training may proceed as follows: A lightweight ResNet50 detection model is trained according to the local label data of each source domain, and the parameters of the ResNet50 detection models are integrated with a federated integration method to obtain updated parameters of a local submodel. The local model is trained with an individualized model training method of FTL to obtain the parameters of an individualized detection model of each source domain. The parameters of the individualized detection models of the source domains are weighted and integrated to obtain a global model of the source domains, and the global model is taken as an initial model of a target domain.
A pseudo-label predictor is trained for the target domain with a federated diffusion network, random voting is performed with the individualized detection models of the source domains to obtain a prediction result of the pseudo-label predictor, and the initial model is trained with the prediction result and the unlabeled data of the target domain to obtain an optimal detection model of the target domain.

[0049] A number of pancreatic cancer images are acquired at an early stage. Since the images stored in any one hospital are far from enough, transfer learning is essential for image samples from a plurality of hospitals. With the improved model of the FTL algorithm, the present disclosure can integrate parameters of the pancreatic cancer images from various hospitals for training, thereby improving the overall recognition accuracy of the model.

[0050] In order to measure the performance of the model, among the image blocks extracted from the labeled pancreatic cancer pathological images, 5,000 image blocks are kept for each type to serve as the test set, while the remaining image blocks are taken as the training dataset, so that 25,000 image blocks in total (5,000 for each of the five tissue types) are used as the test set. With training on the training dataset, the tissue classification accuracy on the test dataset is evaluated in the embodiment. All images in the set have a size of 224*224 pixels, and are input to the model sequentially for training and testing.
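The weighted integration of source-domain model parameters into a global model, as described in paragraph [0048], resembles federated averaging. A minimal sketch follows, with each hospital's parameters weighted by its sample count; the function `aggregate` and the dictionary-of-tensors representation are our assumptions, not the disclosure's actual implementation:

```python
import numpy as np

def aggregate(params_per_site: list[dict[str, np.ndarray]],
              sample_counts: list[int]) -> dict[str, np.ndarray]:
    """Weighted integration of local model parameters into a global model,
    with each source domain (hospital) weighted by its sample count."""
    total = sum(sample_counts)
    weights = [n / total for n in sample_counts]
    global_params = {}
    for name in params_per_site[0]:
        global_params[name] = sum(
            w * site[name] for w, site in zip(weights, params_per_site)
        )
    return global_params

# Two hospitals with 300 and 100 labeled images: the first contributes
# three quarters of the weight for every parameter tensor.
site_a = {"conv1.weight": np.full((3, 3), 1.0)}
site_b = {"conv1.weight": np.full((3, 3), 5.0)}
global_model = aggregate([site_a, site_b], [300, 100])
```

In a real FTL deployment the parameters would be encrypted before aggregation, as suggested by the encryption parameter submodule of paragraph [0047]; that step is omitted here.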
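Separately, the five-fold amplification of the training set described in paragraph [0039] (the original block plus its 90-degree rotation, 180-degree rotation, horizontal flip and vertical flip) can be sketched with NumPy; the helper name `amplify` is our choice, assuming blocks are held as arrays:

```python
import numpy as np

def amplify(block: np.ndarray) -> list[np.ndarray]:
    """Return the original 224x224 block plus its four augmented copies."""
    return [
        block,
        np.rot90(block, k=1),   # 90-degree rotation
        np.rot90(block, k=2),   # 180-degree rotation
        np.fliplr(block),       # horizontal flip
        np.flipud(block),       # vertical flip
    ]

# A training set of N blocks becomes 5N blocks after amplification.
blocks = [np.zeros((224, 224, 3), dtype=np.uint8) for _ in range(10)]
amplified = [aug for b in blocks for aug in amplify(b)]
```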
[0051] S3: Preprocess a to-be-recognized pancreatic cancer image in the same way as the pancreatic cancer pathological images are preprocessed in step S1 above, segment the preprocessed to-be-recognized pancreatic cancer image to obtain a plurality of to-be-recognized segmented pancreatic cancer images of the same size, and record the positions of the to-be-recognized segmented pancreatic cancer images in the preprocessed to-be-recognized pancreatic cancer image. S3 specifically includes:

[0052] S301: Preprocess the to-be-recognized pancreatic cancer image in the same way as the pancreatic cancer pathological images are preprocessed in step S1 above.
[0053] S302: Segment the preprocessed to-be-recognized pancreatic cancer image to obtain the plurality of to-be-recognized segmented pancreatic cancer images of the same size, the preprocessed to-be-recognized pancreatic cancer image being equally segmented into n rows and m columns of to-be-recognized segmented pancreatic cancer images.
[0054] S303: Record each of the positions of the to-be-recognized segmented pancreatic cancer images in the preprocessed to-be-recognized pancreatic cancer image as (n, m), where (n, m) indicates that a to-be-recognized segmented pancreatic cancer image is located at the nth row and the mth column of the to-be-recognized pancreatic cancer image. In this embodiment, the preprocessed to-be-recognized pancreatic cancer image is segmented into 16 to-be-recognized segmented pancreatic cancer images in four rows and four columns.
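The equal segmentation of S302 and the position recording of S303 can be sketched as follows; this is a minimal illustration assuming the preprocessed image is a NumPy array whose height and width divide evenly into the grid, and the helper name `segment` is ours:

```python
import numpy as np

def segment(image: np.ndarray, rows: int, cols: int):
    """Equally segment a preprocessed image into rows x cols sub-images and
    record each sub-image's position as (row, column), 1-indexed."""
    h, w = image.shape[:2]
    bh, bw = h // rows, w // cols
    pieces = []
    for r in range(rows):
        for c in range(cols):
            block = image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            pieces.append(((r + 1, c + 1), block))
    return pieces

# A 896x896 image split four by four yields sixteen 224x224 blocks,
# matching the block size used by the tissue classifiers.
image = np.zeros((896, 896, 3), dtype=np.uint8)
pieces = segment(image, 4, 4)
```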
[0055] S4: Input the to-be-recognized segmented pancreatic cancer images to the tissue classification model to obtain a plurality of tissue classification results, and establish a corresponding table between the various tissues and the positions according to the plurality of tissue classification results. S4 specifically includes:

[0056] S401: Input each of the to-be-recognized segmented pancreatic cancer images to the tissue classification model to obtain the probability that the to-be-recognized segmented pancreatic cancer image belongs to each of the tissue types, and determine whether the probability that a to-be-recognized segmented pancreatic cancer image belongs to the tumor type is greater than a preset tumor value. The preset tumor value falls within a range of 50% to 60%, and is 50% in this embodiment. If yes, classify the to-be-recognized segmented pancreatic cancer image as the tumor type, and take the position of the to-be-recognized segmented pancreatic cancer image in the preprocessed to-be-recognized pancreatic cancer image as a position corresponding to the tumor type; if no, perform S402.

[0057] S402: Classify the to-be-recognized segmented pancreatic cancer image as the tissue type having the largest probability.
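The decision rule of S401 and S402 can be sketched as follows; this is a minimal illustration, and the function name `classify` and the dictionary representation of the model's per-tissue probabilities are our assumptions:

```python
def classify(probabilities: dict[str, float], tumor_threshold: float = 0.50) -> str:
    """S401/S402 decision rule: label the block as tumor when the tumor
    probability exceeds the preset tumor value (50% in the embodiment),
    otherwise fall back to the tissue type with the largest probability."""
    if probabilities["tumor"] > tumor_threshold:
        return "tumor"
    return max(probabilities, key=probabilities.get)

# Tumor probability 0.35 does not exceed the threshold, so the block is
# assigned to the most probable remaining tissue type.
probs = {"fat": 0.10, "small intestine": 0.05, "lymph": 0.45,
         "muscle": 0.05, "tumor": 0.35}
label = classify(probs)
```

Checking the tumor probability before the argmax biases the classifier toward not missing tumors, which fits the screening purpose of the disclosure.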
[0058] S403: Establish, after all of the to-be-recognized segmented pancreatic cancer images are classified, the corresponding table between the various tissues and the positions, a header in the corresponding table including a tissue type, a number, and a position.
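The construction of the corresponding table in S403 amounts to grouping block positions by predicted tissue type and counting them; a minimal sketch follows, where the helper `build_table` and its tuple layout are our assumptions:

```python
from collections import defaultdict

def build_table(classified_blocks):
    """S403: group block positions by tissue type, yielding rows of
    tissue type -> (number, positions) as in Table 1."""
    positions = defaultdict(list)
    for pos, tissue in classified_blocks:
        positions[tissue].append(pos)
    return {tissue: (len(ps), ps) for tissue, ps in positions.items()}

# A few classified blocks as (position, tissue) pairs.
classified = [((1, 1), "small intestine"), ((2, 3), "tumor"),
              ((3, 2), "tumor"), ((1, 3), "muscle")]
table = build_table(classified)
```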
[0059] Table 1 Corresponding table between the various tissues and the positions

[0060]

Tissue type | Number | Position
---|---|---
Fat | 4 | (1,2), (1,4), (2,1), (4,1)
Small intestine | 1 | (1,1)
Lymph | 6 | (2,4), (3,3), (3,4), (4,2), (4,3), (4,4)
Muscle | 3 | (1,3), (2,2), (3,1)
Tumor | 2 | (2,3), (3,2)

[0061] S5: Acquire the corresponding position and number of the tumor in the corresponding table between the various tissues and the positions, and determine whether the number is greater than or equal to 1. If yes, output the to-be-recognized pancreatic cancer image as a diseased image, and frame the corresponding positions of the tumor on the to-be-recognized pancreatic cancer image. If no, output the to-be-recognized pancreatic cancer image as a normal image.
[0062] As can be seen from Table 1, the number under the tumor type is greater than or equal to 1 in this embodiment. The to-be-recognized pancreatic cancer image is therefore output as a diseased image, and the corresponding positions (2,3) and (3,2) of the tumor are framed on the to-be-recognized pancreatic cancer image.
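The S5 decision can be sketched as follows, using the counts of Table 1; the helper `diagnose` and its table representation (tissue type mapped to a count and a position list) are hypothetical:

```python
def diagnose(table: dict[str, tuple[int, list[tuple[int, int]]]]):
    """S5: if at least one block is classified as tumor, report a diseased
    image together with the tumor positions to frame; otherwise normal."""
    number, positions = table.get("tumor", (0, []))
    if number >= 1:
        return "diseased", positions
    return "normal", []

# Per Table 1: two tumor blocks at positions (2, 3) and (3, 2).
table = {"fat": (4, [(1, 2), (1, 4), (2, 1), (4, 1)]),
         "tumor": (2, [(2, 3), (3, 2)])}
result, framed = diagnose(table)
```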
[0063] The present disclosure constructs the tissue classification model based on the FTL to recognize the to-be-recognized pancreatic cancer image, performs image segmentation and localization on the to-be-recognized pancreatic cancer image, and inputs the to-be-recognized segmented pancreatic cancer images to the tissue classification model one by one to obtain a plurality of tissue classification results. By implementing the method of the present disclosure, it may be possible to recognize the various tissues in the pancreatic cancer image and to localize the tumor type based on the output results, which can provide auxiliary support for professional doctors and shorten diagnosis time.
Claims (7)
- CLAIMS
1. A method for recognizing a pancreatic cancer image based on federated transfer learning (FTL), comprising the following steps: S1: acquiring a plurality of pancreatic cancer pathological images, preprocessing the pancreatic cancer pathological images, and labeling various tissues in the pancreatic cancer pathological images to obtain labeled pancreatic cancer pathological images, the various tissues comprising fat, small intestine, lymph, muscle and a tumor; S2: constructing a tissue classification model with the labeled pancreatic cancer pathological images based on FTL; S3: preprocessing a to-be-recognized pancreatic cancer image in the same way as the pancreatic cancer pathological images are preprocessed in step S1, segmenting a preprocessed to-be-recognized pancreatic cancer image to obtain a plurality of to-be-recognized segmented pancreatic cancer images of the same size, and recording positions of the to-be-recognized segmented pancreatic cancer images in the preprocessed to-be-recognized pancreatic cancer image; S4: inputting the to-be-recognized segmented pancreatic cancer images to the tissue classification model to obtain a plurality of tissue classification results, and establishing a corresponding table between the various tissues and the positions according to the plurality of tissue classification results; and S5: acquiring a corresponding position and number of a tumor in the corresponding table between the various tissues and the positions, and determining whether the number is greater than or equal to 1; outputting, if yes, the to-be-recognized pancreatic cancer image as a diseased image, and framing a corresponding position of the tumor on the to-be-recognized pancreatic cancer image; and outputting, if no, the to-be-recognized pancreatic cancer image as a normal image.
- 2. The method for recognizing a pancreatic cancer image based on FTL according to claim 1, wherein preprocessing the pancreatic cancer pathological images in S1 comprises normalization, standardization, and grayscale transformation.
- 3. The method for recognizing a pancreatic cancer image based on FTL according to claim 1, wherein S2 specifically comprises: S201: extracting the labeled pancreatic cancer pathological images in a form of image blocks to obtain five tissue labeled types of image blocks, all image blocks being formed into a classification dataset; S202: dividing the classification dataset into a training set and a test set; and S203: constructing an initial classification model based on the FTL, and training and testing the initial classification model with the training set and the test set to obtain a well-trained tissue classification model.
- 4. The method for recognizing a pancreatic cancer image based on FTL according to claim 3, wherein S203 further comprises a step of amplifying the training set: performing 90-degree rotation, 180-degree rotation, horizontal flip and vertical flip on all image blocks in the training set to obtain amplified image blocks, and adding the amplified image blocks to the training set.
- 5. The method for recognizing a pancreatic cancer image based on FTL according to claim 1, wherein S3 specifically comprises: S301: preprocessing the to-be-recognized pancreatic cancer image in the same way as in S1; S302: segmenting the preprocessed to-be-recognized pancreatic cancer image to obtain the plurality of to-be-recognized segmented pancreatic cancer images having the same size, the preprocessed to-be-recognized pancreatic cancer image being equally segmented into n rows and m columns of the to-be-recognized segmented pancreatic cancer images; and S303: recording each of the positions of the to-be-recognized segmented pancreatic cancer images in the preprocessed to-be-recognized pancreatic cancer image as (n, m), wherein (n, m) indicates that a to-be-recognized segmented pancreatic cancer image is located at an nth row and an mth column of the to-be-recognized pancreatic cancer image.
- 6. The method for recognizing a pancreatic cancer image based on FTL according to claim 1, wherein S4 specifically comprises: S401: inputting the to-be-recognized segmented pancreatic cancer images to the tissue classification model to obtain the plurality of tissue classification results, inputting each of the to-be-recognized segmented pancreatic cancer images to the tissue classification model to obtain a probability that the to-be-recognized segmented pancreatic cancer image belongs to each of the tissue types, and determining whether a to-be-recognized segmented pancreatic cancer image belongs to a tumor with a probability greater than a preset tumor value; classifying, if yes, the to-be-recognized segmented pancreatic cancer image as a tumor type, and taking a position of the to-be-recognized segmented pancreatic cancer image in the preprocessed to-be-recognized pancreatic cancer image as a position corresponding to the tumor type; and performing S402 if no; S402: classifying the to-be-recognized segmented pancreatic cancer image as a tissue type having a largest probability; and S403: establishing, after all of the to-be-recognized segmented pancreatic cancer images are classified completely, the corresponding table between the various tissues and the positions, a header in the corresponding table between the various tissues and the positions comprising a tissue type, a number, and a position.
- 7. The method for recognizing a pancreatic cancer image based on FTL according to claim 6, wherein the preset tumor value falls within a range of 50% to 60%.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210517997.2A CN115170857A (en) | 2022-05-12 | 2022-05-12 | Pancreatic cancer image identification method based on federal transfer learning |
Publications (3)
Publication Number | Publication Date |
---|---|
GB202305577D0 GB202305577D0 (en) | 2023-05-31 |
GB2620233A true GB2620233A (en) | 2024-01-03 |
GB2620233A8 GB2620233A8 (en) | 2024-02-21 |
Family
ID=83483167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB2305577.5A Pending GB2620233A (en) | 2022-05-12 | 2023-04-17 | Method for recognizing pancreatic cancer image based on federated transfer learning (FTL) |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115170857A (en) |
GB (1) | GB2620233A (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115715994B (en) * | 2022-11-18 | 2023-11-21 | 深圳大学 | Image excitation ultramicro injection method, system and equipment |
CN116109608A (en) * | 2023-02-23 | 2023-05-12 | 智慧眼科技股份有限公司 | Tumor segmentation method, device, equipment and storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120004514A1 (en) * | 2009-03-04 | 2012-01-05 | Atsushi Marugame | Diagnostic imaging support device, diagnostic imaging support method, and storage medium |
- 2022
  - 2022-05-12 CN CN202210517997.2A patent/CN115170857A/en active Pending
- 2023
  - 2023-04-17 GB GB2305577.5A patent/GB2620233A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
GB202305577D0 (en) | 2023-05-31 |
CN115170857A (en) | 2022-10-11 |
GB2620233A8 (en) | 2024-02-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107103187B (en) | Lung nodule detection grading and management method and system based on deep learning | |
GB2620233A (en) | Method for recognizing pancreatic cancer image based on federated transfer learning (FTL) | |
Baumgartner et al. | Real-time standard scan plane detection and localisation in fetal ultrasound using fully convolutional neural networks | |
CN109583440A (en) | It is identified in conjunction with image and reports the medical image aided diagnosis method edited and system | |
CN111798425B (en) | Intelligent detection method for mitotic image in gastrointestinal stromal tumor based on deep learning | |
CN114664413B (en) | System for predicting colorectal cancer treatment resistance and molecular mechanism thereof before treatment | |
CN113538435B (en) | Pancreatic cancer pathological image classification method and system based on deep learning | |
CN110838110A (en) | System for identifying benign and malignant tumor based on ultrasonic imaging | |
Assad et al. | Deep biomedical image classification using diagonal bilinear interpolation and residual network | |
CN113159223A (en) | Carotid artery ultrasonic image identification method based on self-supervision learning | |
Bansal et al. | An improved hybrid classification of brain tumor MRI images based on conglomeration feature extraction techniques | |
Hu et al. | A GLCM embedded CNN strategy for computer-aided diagnosis in intracerebral hemorrhage | |
Tenali et al. | Oral Cancer Detection using Deep Learning Techniques | |
Jing et al. | A comprehensive survey of intestine histopathological image analysis using machine vision approaches | |
Wulaning Ayu et al. | Pixel Classification Based on Local Gray Level Rectangle Window Sampling for Amniotic Fluid Segmentation. | |
Yang et al. | Classification of histopathological images of breast cancer using an improved convolutional neural network model | |
Devi et al. | Brain tumour detection with feature extraction and tumour cell classification model using machine learning–a survey | |
CN115101150A (en) | Specimen collection method for clinical tumor operation in general surgery department | |
CN114445374A (en) | Image feature processing method and system based on diffusion kurtosis imaging MK image | |
Liu et al. | A lung-parenchyma-contrast hybrid network for egfr gene mutation prediction in lung cancer | |
Xie et al. | [Retracted] Analysis of the Diagnosis Model of Peripheral Non‐Small‐Cell Lung Cancer under Computed Tomography Images | |
Machireddy et al. | Robust segmentation of cellular ultrastructure on sparsely labeled 3d electron microscopy images using deep learning | |
CN110664425A (en) | Key CT technology of lung in-situ cancer identification method | |
Reddy et al. | Different algorithms for lung cancer detection and prediction | |
CN116385814B (en) | Ultrasonic screening method, system, device and medium for detection target |