CN113889261B - Pathological feature assistance-based PET/CT automatic lung cancer diagnosis classification model training method
- Publication number
- CN113889261B (application CN202111113534.1A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
Abstract
The invention discloses a PET/CT automatic lung cancer diagnosis classification model training method based on pathological feature assistance, belonging to the field of medical images. A classification network for pathological images is trained first to obtain a good set of pathological classification network parameters; the pathological image feature information obtained with these parameters then guides the feature extraction of the PET/CT image classification network, improving the accuracy of the PET/CT image classification network, facilitating the popularization and application of PET/CT-based early lung cancer diagnosis classification, and assisting clinicians in diagnosis and follow-up. With this method, a lung cancer diagnosis classification close to the pathological diagnosis result can be obtained from a non-invasive PET/CT image alone, before any subsequent invasive pathological examination, which effectively improves the diagnostic efficiency of clinicians and reduces trauma to the patient.
Description
Technical Field
The invention relates to the field of medical images, in particular to a PET/CT automatic lung cancer diagnosis classification model training method based on pathological feature assistance.
Background
With the continuous development of medical technology, more and more imaging modalities are being applied. Multiple studies have shown that PET/CT is of great value in the diagnosis of benign and malignant pulmonary nodules, lung cancer staging, and post-treatment evaluation of lung cancer. The most widely used tracer in current PET/CT scanning is 18F-FDG (a fluorine-18-labelled glucose analogue). Because abnormally proliferating malignant cells require increased glucose uptake and glycolysis to maintain their energy supply, different types of tumours show different degrees of glucose uptake on glucose metabolism images. The tracer decays within the patient and annihilates, producing a pair of 511 keV gamma photons emitted in approximately opposite (180°) directions, and the detector records the position and time at which the gamma photons reach the crystal. The acquired information is reconstructed with an image reconstruction algorithm and post-processed to obtain the distribution of tracer metabolism and uptake in the patient's body. Through PET imaging, physicians can therefore capture this metabolic heterogeneity early and quantitatively. Comprehensive analysis of PET/CT metabolic and structural image texture features has great potential in differential diagnosis, disease staging, and prognosis assessment.
Pathological section analysis is recognized as the gold standard for cancer diagnosis; through cytomorphological and histopathological examination it provides diagnostic information such as the regional localization of a clinical tumor and its classification into benign and malignant stages. However, this requires removing a small piece of tissue from the lesion site of the patient, preparing a pathological section, and then observing the morphological changes of cells and tissues under a microscope to determine the tumor type.
Although existing PET/CT-based automatic lung cancer diagnosis classification models can achieve reasonably good classification accuracy, there is still a gap before clinical use. Existing pathology-based automatic diagnosis classification models can achieve higher accuracy, but pathology requires invasive examination and section results are slow to obtain, so such models are difficult to use in early screening and rapid diagnosis. Therefore, using pathological features to assist the training of a PET/CT automatic lung cancer diagnosis classification model, so that high diagnostic classification accuracy is achieved in clinical use without pathological images, greatly improves the efficiency and accuracy of clinicians' early lung cancer diagnosis classification and better helps patients formulate diagnosis and treatment plans.
Disclosure of Invention
The invention aims to provide a PET/CT automatic lung cancer diagnosis classification model training method based on pathological feature assistance that overcomes the defects of the prior art: pathological features are extracted by a pathological classification network to assist and guide the feature extraction of the PET/CT image classification network, thereby improving the classification accuracy of the PET/CT image classification network.
The purpose of the invention is realized by the following technical scheme:
A PET/CT automatic lung cancer diagnosis classification model training method based on pathological feature assistance comprises the following specific training steps:
Step one: acquiring one-to-one matched image data, pathological image data and corresponding diagnosis label data to construct a data set, wherein the image data comprise a PET (positron emission tomography) image and a CT (computed tomography) image, the pathological image data are color-normalized pathological images, and the tumor region accounts for more than 80% of the PET image, the CT image and the pathological image.
Step two: taking the pathological image as input and lung cancer diagnosis classification as the prediction target, training a classification convolutional neural network with the pathological image data in the data set and the corresponding diagnosis label data to obtain the pathological-image-based automatic lung cancer diagnosis classification network; taking the image obtained by superposing the PET image and the CT image along the channel dimension as input and lung cancer diagnosis classification as the prediction target, training a classification convolutional neural network with the image data in the data set and the corresponding diagnosis label data to obtain the PET/CT-image-based automatic lung cancer diagnosis image classification network;
Step three: establishing a joint loss function and training the pathological-image-based automatic lung cancer diagnosis classification network and the PET/CT-image-based automatic lung cancer diagnosis image classification network a second time through back propagation, finally obtaining the PET/CT-image-based automatic lung cancer diagnosis classification network, i.e. the PET/CT automatic lung cancer diagnosis classification model trained with pathological feature assistance;
the joint loss function is specifically:
Loss = loss_A + loss_B
where loss_A is the classification loss of the pathological-image-based automatic lung cancer diagnosis classification network; loss_B comprises the similarity loss loss_B_1 between the two-dimensional image feature T'_PET-CT and the pathological feature T_PI, and the classification loss loss_B_2 of the PET/CT-image-based automatic lung cancer diagnosis image classification network. The two-dimensional image feature T'_PET-CT is the projection of the features extracted by the feature extraction layer of the PET/CT-image-based automatic lung cancer diagnosis image classification network; the pathological feature T_PI is the feature extracted by the feature extraction layer of the pathological-image-based automatic lung cancer diagnosis classification network.
Further, in step one, the PET images, CT images and pathological images whose tumor region exceeds 80% are obtained by the following method:
A patch-cutting operation is performed on the raw PET, CT and pathological image data according to the corresponding tumor mask data, and patch images whose tumor-mask coverage is greater than 80% are selected, yielding PET images, CT images and pathological images whose tumor region exceeds 80%.
Further, in step one, the color normalization of the pathological image data specifically comprises:
A well-stained pathological image is selected from all the pathological image data as the target pathological image, and the colors of the remaining pathological images are normalized to the color level of the target image.
Further, in step two, the classification convolutional neural network adopts a Resnet-18 or Resnet-50 structure. Preferably, the pathological-image-based automatic lung cancer diagnosis classification network adopts the Resnet-18 structure, and the PET/CT-image-based automatic lung cancer diagnosis image classification network adopts the Resnet-50 structure.
Further, in step three, the two-dimensional image feature T'_PET-CT is the projection of the features extracted by the layer before the fully-connected layer in the PET/CT-image-based automatic lung cancer diagnosis image classification network; the pathological feature T_PI is the feature extracted by the layer before the fully-connected layer in the pathological-image-based automatic lung cancer diagnosis classification network.
Further, in step three, loss_B is expressed as:
loss_B = α × loss_B_1 + (1 - α) × loss_B_2
where α is a weighting value in the interval (0, 1).
Further, the classification loss loss_B_2 of the PET/CT-image-based automatic lung cancer diagnosis image classification network is calculated from the prediction output obtained from the two-dimensional image feature T'_PET-CT, specifically:
where the subscript k is the sample index, N is the number of samples, M is the number of tumor classes, y_kc is the true distribution of sample k, and p_kc is the predicted probability that the input image belongs to class c after the two-dimensional image feature T'_PET-CT passes through the network's softmax function.
Further, in step three, when the pathological-image-based automatic lung cancer diagnosis classification network and the PET/CT-image-based automatic lung cancer diagnosis image classification network are trained a second time through back propagation, the pathological-image-based network is trained by back-propagating loss_A, where loss_A is back-propagated to optimize the pathological-image classification network only if the pathological diagnosis result is wrong; otherwise no back propagation is performed. The PET/CT-image-based automatic lung cancer diagnosis image classification network is trained by back-propagating loss_B.
The beneficial effect of the invention is that pathological features are used to assist in training the PET/CT-image-based automatic lung cancer diagnosis classification network; by referring to pathological features during training, the accuracy of the conventional PET/CT-image-based automatic lung cancer diagnosis classification network is improved. Moreover, no pathological input is required in actual clinical application: the pathological features serve only as prior knowledge for network B' during training, helping network B' reach a better optimum and thereby improving its diagnostic classification accuracy. With this method, a lung cancer diagnosis classification close to the pathological diagnosis result can be obtained from a non-invasive PET/CT image alone, before any invasive pathological examination, which effectively improves the diagnostic efficiency of clinicians and reduces trauma to the patient.
Drawings
FIG. 1 is a flow chart of a PET/CT automatic lung cancer diagnosis classification training method based on pathological feature assistance;
FIG. 2 is a diagram of a neural network based on pathological feature assisted PET/CT automatic lung cancer diagnosis classification.
Detailed Description
The invention relates to a PET/CT automatic lung cancer diagnosis classification model training method based on pathological feature assistance, which first trains a classification network for pathological images to obtain a good set of pathological classification network model parameters; the pathological image feature information obtained with these parameters then guides the feature extraction of the PET/CT image classification network, improving the accuracy of the PET/CT image classification network, facilitating the popularization and application of PET/CT-based early lung cancer diagnosis classification, and assisting clinicians in diagnosis and follow-up.
The invention is explained in detail below with reference to specific embodiments and the accompanying drawings.
The flow of the method of the invention is shown in FIG. 1 and specifically comprises the following steps:
Step one: collect one-to-one matched image data (I_PET / I_CT) and whole-slide pathological images I_PI from the same medical center, together with the corresponding diagnostic label data L_PET / L_CT and L_PI and the corresponding tumor mask data M_PET / M_CT and M_PI; then unify the slice thickness of the image data to the same standard by interpolation.
Step two: perform a color normalization operation on the whole-slide pathological images I_PI, as follows:
Select a well-stained image from all the whole-slide pathological images as the target pathological image I_PI_O, and normalize the colors of the remaining pathological images to the color level of the target image I_PI_O.
step three: mask data M from full scan pathology imagePIFor pathological image IPIPerforming a cut patch operation, each pathological patch image IPI_patchMust be selected to satisfy the tumor mask data MPIThe coverage rate of (a) is 80% or more, namely:
whereinIs represented byPI_patchMiddle mask data MPICovering the area occupied by the doctor marking the tumor, SPI_patchIndicating the area occupied by patch.
Each pathological patch image I_PI_patch has a size of 288 × 288, and each I_PI_patch takes the label data L_PI of its corresponding whole-slide pathological image I_PI as its label L_PI_patch. To keep the data balanced, an overlap-tile strategy is adopted in this example so that the number of patches cut from each whole-slide pathological image I_PI remains essentially the same.
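The patch-selection rule of step three can be sketched as follows (illustrative only): a 288 × 288 window slides over the whole-slide image and only windows whose tumor-mask coverage reaches 80% are kept. The stride used here to realize the overlap-tile idea and the array layout are assumptions.

```python
import numpy as np

PATCH = 288

def cut_patches(image: np.ndarray, mask: np.ndarray, stride: int = 144,
                min_coverage: float = 0.8):
    """Yield 288x288 crops of `image` whose coverage by the binary tumor mask
    M_PI (`mask`) is at least `min_coverage`; an overlapping stride gives the
    overlap-tile behaviour."""
    h, w = mask.shape
    for y in range(0, h - PATCH + 1, stride):
        for x in range(0, w - PATCH + 1, stride):
            m = mask[y:y + PATCH, x:x + PATCH]
            coverage = m.sum() / float(PATCH * PATCH)
            if coverage >= min_coverage:
                yield image[y:y + PATCH, x:x + PATCH], coverage
```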
Step four: establish a single-input, single-output classification convolutional neural network A, whose input is a pathological patch image I_PI_patch obtained in step three and whose output is the pathological diagnosis classification result. Import the pathological patch images I_PI_patch obtained in step three and their corresponding labels L_PI_patch into network A for training and, once a good result is obtained, save the network parameters D_A of network A. The trained network A is the pathological-image-based automatic lung cancer diagnosis classification network. Specifically:
(4.1) Establish the classification convolutional neural network A; in this example network A adopts Resnet-18 as the backbone classification network, with the specific structure shown in Table 1:
Table 1. Resnet-18 network architecture
where num_class represents the number of diagnostic classes;
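Since the contents of Table 1 are not reproduced above, the sketch below simply instantiates the stock torchvision Resnet-18 with num_class outputs as network A; the value of num_class and any deviation of the patented structure from the stock Resnet-18 are assumptions.

```python
import torch
import torchvision.models as models

num_class = 3  # assumed number of diagnostic classes, for illustration only

# Network A: pathological-image classification backbone (stock Resnet-18).
net_A = models.resnet18(num_classes=num_class)

patch_batch = torch.randn(8, 3, 288, 288)  # a batch of RGB I_PI_patch crops
logits = net_A(patch_batch)                # shape (8, num_class), fed to softmax
```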
(4.2) Construct a data set from the patch images I_PI_patch obtained in step three and the corresponding labels L_PI_patch, and divide it into a training set (80%), a validation set (10%) and a test set (10%).
(4.3) Train, validate and test the network A established in (4.1) according to the data split in (4.2), and finally save a set of well-performing network parameters D_A.
Step five: establish a single-input, single-output classification convolutional neural network B, whose input is the paired PET/CT image data I_PET / I_CT and whose output is the image classification result. Import the image data (I_PET / I_CT) collected in step one and the corresponding labels L_PET / L_CT into network B for training and, once a good result is obtained, save the network parameters D_B of network B. The trained network B is the PET/CT-image-based automatic lung cancer diagnosis image classification network. Specifically:
(5.1) Establish the classification convolutional neural network B; in this example network B adopts Resnet-50 as the backbone classification network, with the specific structure shown in Table 2:
Table 2. Resnet-50 network architecture
(5.2) Construct a data set from the image data (I_PET / I_CT) collected in step one and the corresponding labels L_PET / L_CT, and divide it into a training set (80%), a validation set (10%) and a test set (10%).
(5.3) Train, validate and test the network B established in (5.1) according to the data split in (5.2), and finally save a set of well-performing network parameters D_B. A single input of network B consists of 288 × 288 lung cancer slices cut from I_PET / I_CT and stacked along the channel dimension.
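The channel stacking of the PET and CT slices can be sketched as follows. Treating the stacked input as a 2-channel image and widening the first convolution of a stock torchvision Resnet-50 accordingly are assumptions; the patent only states that the PET and CT images are superposed along the channel dimension.

```python
import torch
import torch.nn as nn
import torchvision.models as models

num_class = 3  # assumed number of diagnostic classes

# Network B: stock Resnet-50 whose first convolution is widened to accept a
# 2-channel input (one PET slice + one CT slice stacked along the channel axis).
net_B = models.resnet50(num_classes=num_class)
net_B.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)

pet_slice = torch.randn(8, 1, 288, 288)      # I_PET crops around the lesion
ct_slice = torch.randn(8, 1, 288, 288)       # matching I_CT crops
x = torch.cat([pet_slice, ct_slice], dim=1)  # (8, 2, 288, 288)
logits = net_B(x)                            # (8, num_class)
```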
Step six: integrate network A from step four and network B from step five to construct the pathological-feature-assisted PET/CT automatic lung cancer diagnosis classification network C, as shown in FIG. 2. The construction and training of network C specifically comprise the following steps:
(6.1) Rewrite network A as network A', adding an output, the pathological feature T_PI, to the layer before fully-connected layer 1 of network A, and initialize network A' with the network parameters D_A saved in step four;
rewrite network B as network B', adding an output, the image feature T_PET-CT, to the layer before fully-connected layer 1 of network B; apply a projection operation to the extracted image feature T_PET-CT to obtain the two-dimensional image feature T'_PET-CT as the projection of the three-dimensional image feature T_PET-CT; the diagnosis classification result is then obtained through the subsequent network structure such as the fully-connected layer.
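A minimal sketch of how network B' might expose T_PET-CT and its projection T'_PET-CT (network A' would expose T_PI analogously from the layer before its fully-connected layer). The use of a learnable linear layer as the projection operation and the 512-dimensional target size (matching the Resnet-18 penultimate feature of network A') are assumptions, since the patent does not specify the projection.

```python
import torch
import torch.nn as nn

class NetBPrime(nn.Module):
    """Hypothetical network B': a Resnet-50 backbone that also returns the
    projected feature T'_PET-CT; classification is done from the projection."""
    def __init__(self, resnet50_backbone: nn.Module, num_class: int = 3,
                 path_feat_dim: int = 512):
        super().__init__()
        self.backbone = resnet50_backbone
        self.project = nn.Linear(2048, path_feat_dim)   # T_PET-CT -> T'_PET-CT
        self.fc = nn.Linear(path_feat_dim, num_class)   # diagnosis classification head

    def forward(self, x):
        b = self.backbone
        x = b.conv1(x); x = b.bn1(x); x = b.relu(x); x = b.maxpool(x)
        x = b.layer1(x); x = b.layer2(x); x = b.layer3(x); x = b.layer4(x)
        t_petct = torch.flatten(b.avgpool(x), 1)   # T_PET-CT, shape (N, 2048)
        t_petct_2d = self.project(t_petct)         # T'_PET-CT, shape (N, 512)
        logits = self.fc(t_petct_2d)               # fed to softmax to give p_kc
        return logits, t_petct_2d
```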
(6.2) Link network A' and network B' by computing the similarity between T'_PET-CT and T_PI, obtaining the loss value loss_B_1 of the mapping from image features to pathological features.
Here loss_B_1 adopts an adjusted cosine similarity calculation, computed as follows:
where the subscript k is the sample index, N is the number of samples, and the two mean terms are respectively the mean of the T'_PET-CT features and the mean of the T_PI features over all samples.
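The exact loss_B_1 formula is not reproduced above; the sketch below shows one plausible form of an adjusted cosine similarity loss consistent with the description (batch means are subtracted before computing the per-sample cosine similarity). Taking 1 minus the mean similarity as the loss is an assumption.

```python
import torch
import torch.nn.functional as F

def loss_b1_adjusted_cosine(t_petct_2d: torch.Tensor, t_pi: torch.Tensor) -> torch.Tensor:
    """Illustrative loss_B_1: subtract the batch-wise feature means (the
    'adjustment'), compute per-sample cosine similarity between T'_PET-CT and
    T_PI, and penalize dissimilarity."""
    centered_img = t_petct_2d - t_petct_2d.mean(dim=0, keepdim=True)
    centered_pi = t_pi - t_pi.mean(dim=0, keepdim=True)
    sim = F.cosine_similarity(centered_img, centered_pi, dim=1)  # shape (N,)
    return 1.0 - sim.mean()
```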
(6.3) Calculate the diagnostic loss value loss_A in network A'. If the pathological diagnosis result in network A' is incorrect, back-propagate the loss value loss_A to adjust and optimize network A'; otherwise do not back-propagate. loss_A is calculated as follows:
where M represents the number of tumor classes, y_kc is the true distribution of the sample, and p_kc is the predicted probability that the input belongs to class c after the network's softmax function.
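The loss formula itself is not reproduced above; the description corresponds to the usual multi-class softmax cross-entropy, which can be sketched as follows (averaging over the N samples in the batch is an assumed normalization).

```python
import torch
import torch.nn.functional as F

def classification_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Softmax cross-entropy averaged over the batch:
    loss = -(1/N) * sum_k sum_c y_kc * log(p_kc),
    where p_kc = softmax(logits)[k, c] and y_kc is the one-hot true distribution."""
    return F.cross_entropy(logits, labels)
```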
(6.4) Calculate the diagnostic loss value loss_B_2 in network B'; it is computed in the same way as loss_A. The loss values loss_B_1 and loss_B_2 in network B' are then combined by weighted addition into the network loss value loss_B, which is back-propagated into network B' for optimization; the actual network error is:
loss_B = α × loss_B_1 + (1 - α) × loss_B_2    (4)
where loss_B is the back-propagation error of the final network B' and α is the weight balancing the two errors; in this example α is set to 0.5.
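One hypothetical joint optimization step for network C, combining the pieces sketched above (the loss_b1_adjusted_cosine helper and networks A'/B' that each return a (logits, feature) pair). The per-batch interpretation of "back-propagate loss_A only when the pathological diagnosis is wrong", the detaching of T_PI inside loss_B, and the optimizer settings are assumptions.

```python
import torch
import torch.nn.functional as F

def joint_training_step(net_A_prime, net_B_prime, optim_A, optim_B,
                        patch_img, patch_label, petct_img, petct_label,
                        alpha: float = 0.5):
    """One step: network A' is updated only by loss_A (and only when its
    pathological prediction is wrong); network B' is updated by the weighted
    sum loss_B = alpha * loss_B_1 + (1 - alpha) * loss_B_2."""
    logits_A, t_pi = net_A_prime(patch_img)        # pathological branch: (logits, T_PI)
    loss_A = F.cross_entropy(logits_A, patch_label)

    logits_B, t_petct_2d = net_B_prime(petct_img)  # PET/CT branch: (logits, T'_PET-CT)
    loss_B2 = F.cross_entropy(logits_B, petct_label)
    # Detaching T_PI keeps loss_B from updating network A' (a design assumption,
    # since the description optimizes A' through loss_A only).
    loss_B1 = loss_b1_adjusted_cosine(t_petct_2d, t_pi.detach())
    loss_B = alpha * loss_B1 + (1 - alpha) * loss_B2

    if (logits_A.argmax(dim=1) != patch_label).any():  # pathological diagnosis wrong
        optim_A.zero_grad(); loss_A.backward(); optim_A.step()

    optim_B.zero_grad(); loss_B.backward(); optim_B.step()
    return loss_A.item(), loss_B.item()
```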
step seven: training the network C to obtain the parameters D of the network A' and the network BA’And DB’Because the network A 'and the network B' only have the characteristic link mapping, the network A 'and the network B' can be relatively independently used, and the parameter D obtained by training the network C is obtainedA’And DB’May be used for network a 'and network B'. Thereby using only the network B' and the trained parameters DB’The method can realize the PET/CT automatic lung cancer diagnosis and classification, and the classification precision can reach a more accurate lung cancer diagnosis and classification result which is close to a pathological diagnosis result, and the method has higher diagnosis and classification precision than the traditional diagnosis and classification precision obtained by only using image data training and has more practical clinical significance.
Claims (8)
1. A PET/CT automatic lung cancer diagnosis classification model training method based on pathological feature assistance, characterized in that the pathological-feature-assisted PET/CT automatic lung cancer diagnosis classification model is trained by the following steps:
Step one: acquiring one-to-one matched image data, pathological image data and corresponding diagnosis label data to construct a data set, wherein the image data comprise a PET (positron emission tomography) image and a CT (computed tomography) image, the pathological image data are color-normalized pathological images, and the tumor region accounts for more than 80% of the PET image, the CT image and the pathological image;
Step two: taking the pathological image as input and lung cancer diagnosis classification as the prediction target, training a classification convolutional neural network with the pathological image data in the data set and the corresponding diagnosis label data to obtain the pathological-image-based automatic lung cancer diagnosis classification network; taking the image obtained by superposing the PET image and the CT image along the channel dimension as input and lung cancer diagnosis classification as the prediction target, training a classification convolutional neural network with the image data in the data set and the corresponding diagnosis label data to obtain the PET/CT-image-based automatic lung cancer diagnosis image classification network;
Step three: establishing a joint loss function and training the pathological-image-based automatic lung cancer diagnosis classification network and the PET/CT-image-based automatic lung cancer diagnosis image classification network a second time through back propagation, finally obtaining the PET/CT-image-based automatic lung cancer diagnosis classification network, i.e. the PET/CT automatic lung cancer diagnosis classification model trained with pathological feature assistance;
the joint loss function is specifically:
Loss = loss_A + loss_B
where loss_A is the classification loss of the pathological-image-based automatic lung cancer diagnosis classification network; loss_B comprises the similarity loss loss_B_1 between the two-dimensional image feature T'_PET-CT and the pathological feature T_PI, and the classification loss loss_B_2 of the PET/CT-image-based automatic lung cancer diagnosis image classification network; the two-dimensional image feature T'_PET-CT is the projection of the features extracted by the feature extraction layer of the PET/CT-image-based automatic lung cancer diagnosis image classification network; the pathological feature T_PI is the feature extracted by the feature extraction layer of the pathological-image-based automatic lung cancer diagnosis classification network.
2. The method according to claim 1, wherein in step one the PET images, CT images and pathological images whose tumor region exceeds 80% are obtained by the following method:
a patch-cutting operation is performed on the raw PET, CT and pathological image data according to the corresponding tumor mask data, and patch images whose tumor-mask coverage is greater than 80% are selected, yielding PET images, CT images and pathological images whose tumor region exceeds 80%.
3. The method according to claim 1, wherein in step one the color normalization of the pathological image data specifically comprises:
a well-stained pathological image is selected from all the pathological image data as the target pathological image, and the colors of the remaining pathological images are normalized to the color level of the target image.
4. The method according to claim 1, wherein in step two the classification convolutional neural network adopts a Resnet-18 structure and/or a Resnet-50 structure.
5. The method according to claim 1, wherein in step three the two-dimensional image feature T'_PET-CT is the projection of the features extracted by the layer before the fully-connected layer in the PET/CT-image-based automatic lung cancer diagnosis image classification network, and the pathological feature T_PI is the feature extracted by the layer before the fully-connected layer in the pathological-image-based automatic lung cancer diagnosis classification network.
6. The method of claim 1, wherein in step three loss_B is expressed as:
loss_B = α × loss_B_1 + (1 - α) × loss_B_2
where α is a weighting value in the interval (0, 1).
7. The method of claim 6, wherein the classification loss loss_B_2 of the PET/CT-image-based automatic lung cancer diagnosis image classification network is calculated from the prediction output obtained from the two-dimensional image feature T'_PET-CT, specifically:
where the subscript k is the sample index, N is the number of samples, M is the number of tumor classes, y_kc is the true distribution of sample k, and p_kc is the predicted probability that the input image belongs to class c after the two-dimensional image feature T'_PET-CT passes through the network's softmax function.
8. The method according to claim 1, wherein in step three, when the pathological-image-based automatic lung cancer diagnosis classification network and the PET/CT-image-based automatic lung cancer diagnosis image classification network are trained a second time through back propagation, the pathological-image-based network is trained by back-propagating loss_A, where loss_A is back-propagated to optimize the pathological-image classification network only if the pathological diagnosis result is wrong, and otherwise no back propagation is performed; the PET/CT-image-based automatic lung cancer diagnosis image classification network is trained by back-propagating loss_B.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111113534.1A CN113889261B (en) | 2021-09-23 | 2021-09-23 | Pathological feature assistance-based PET/CT automatic lung cancer diagnosis classification model training method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113889261A CN113889261A (en) | 2022-01-04 |
CN113889261B true CN113889261B (en) | 2022-06-10 |
Family
ID=79010219
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111113534.1A Active CN113889261B (en) | 2021-09-23 | 2021-09-23 | Pathological feature assistance-based PET/CT automatic lung cancer diagnosis classification model training method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113889261B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114638292B (en) * | 2022-03-10 | 2023-05-05 | 中国医学科学院北京协和医院 | Artificial intelligence pathology auxiliary diagnosis system based on multi-scale analysis |
CN117831757B (en) * | 2024-03-05 | 2024-05-28 | 之江实验室 | Pathological CT multi-mode priori knowledge-guided lung cancer diagnosis method and system |
CN118154975B (en) * | 2024-03-27 | 2024-10-01 | 广州市中西医结合医院 | Tumor pathological diagnosis image classification method based on big data |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11200671B2 (en) * | 2019-12-31 | 2021-12-14 | International Business Machines Corporation | Reference image guided object detection in medical image processing |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018156133A1 (en) * | 2017-02-23 | 2018-08-30 | Google Llc | Method and system for assisting pathologist identification of tumor cells in magnified tissue images |
CN108776962A (en) * | 2018-04-11 | 2018-11-09 | 浙江师范大学 | A method of the structure good pernicious prediction model of lung neoplasm |
CN109003672A (en) * | 2018-07-16 | 2018-12-14 | 北京睿客邦科技有限公司 | A kind of early stage of lung cancer detection classification integration apparatus and system based on deep learning |
CN111340768A (en) * | 2020-02-21 | 2020-06-26 | 之江实验室 | Multi-center effect compensation method based on PET/CT intelligent diagnosis system |
CN111445946A (en) * | 2020-03-26 | 2020-07-24 | 北京易康医疗科技有限公司 | Calculation method for calculating lung cancer genotyping by using PET/CT (positron emission tomography/computed tomography) images |
CN112465824A (en) * | 2021-01-28 | 2021-03-09 | 之江实验室 | Lung adenosquamous carcinoma diagnosis device based on PET/CT image subregion image omics characteristics |
Non-Patent Citations (5)
Title |
---|
A convolutional neural network-based system to classify patients using FDG PET/CT examinations; K. Kawauchi et al.; BMC Cancer; 2020; entire document *
Construction of a multimodal lung image classification and diagnosis model based on deep convolutional neural networks; Wu Zhiyuan et al.; Chinese Journal of Health Statistics; 2019, No. 6; entire document *
Research on a computer-aided diagnosis method for lung tumor PET/CT images based on random forest; Liu Jingxia; Journal of Biomedical Engineering Research; 2020, No. 2; entire document *
Application of radiomics in predicting the benign or malignant classification of lung tumors; Zhou Tianqi et al.; Chinese Journal of Medical Instrumentation; 2020, No. 2; entire document *
Application of deep learning in chest CT image segmentation; Mao Kaipeng; China Master's Theses Full-text Database, Medicine and Health Sciences; 2018; entire document *
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |