CN112767393A - Machine learning-based bimodal imaging omics ground glass nodule classification method - Google Patents
- Publication number
- CN112767393A (application number CN202110234903.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- pet
- machine learning
- classification method
- patients
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012: Biomedical image inspection (G06T7/00 Image analysis)
- G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N20/00: Machine learning
- G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections
- G16H30/00: ICT specially adapted for the handling or processing of medical images
- G06T2207/10081: Computed x-ray tomography [CT]
- G06T2207/10104: Positron emission tomography [PET]
- G06T2207/20081: Training; Learning
- G06T2207/30064: Lung nodule
Abstract
The invention belongs to the technical field of medical treatment and discloses a machine learning-based bimodal imaging omics ground glass nodule classification method, which comprises the following steps: step one, case data collection: collecting patients with suspected ground glass nodules (GGNs) who underwent 18F-FDG PET/CT examination; step two, image acquisition and reconstruction: acquiring images with a PET/CT imaging instrument; step three, image feature extraction; step four, data processing and analysis. The method is robust, accurate, simple and feasible; it integrates molecular-level functional metabolic information of the lesion with its anatomical information, effectively improves on the predictive efficiency of conventional CT parameters and single-modality CT imaging omics, and benefits the clinical management of GGNs.
Description
Technical Field
The invention relates to the technical field of medical treatment, in particular to a machine learning-based bimodal imaging omics ground glass nodule classification method.
Background
Lung cancer is the leading cause of cancer-related death worldwide. In China in particular, its incidence is rising rapidly, with lung cancer mortality estimated to increase by about 40% between 2015 and 2030. Early identification and individualized management are key to improving the prognosis of lung cancer patients. With the dramatic increase in the detection of asymptomatic pulmonary nodules and the changing epidemiology of lung cancer in China, the diagnosis and differential diagnosis of ground glass nodules (GGNs) has become a significant challenge for clinicians. HRCT is the generally accepted method for characterizing GGNs, but its efficiency needs improvement, because the radiological features of benign and malignant GGNs overlap considerably and interpretation is influenced by subjective factors. Pathological examination is the gold standard for lesion diagnosis, but GGNs contain few cellular components and puncture biopsy is technically demanding, so the value of bronchoscopy and percutaneous lung puncture in GGNs is limited.
Imaging omics is a popular and promising diagnostic approach: spatial feature data are extracted in a high-throughput manner from the image data of a region of interest by mathematical and statistical methods, capturing valuable lesion information that the naked eye may miss and improving the accuracy of disease diagnosis, with the advantages of being real-time, objective and non-invasive. Previous studies have shown that CT texture features are of great significance for lung cancer identification, prediction of tumor growth, gene expression and efficacy assessment. However, most of these studies concern solid lung nodules, and reports on GGNs are relatively few. In a benign/malignant classification study of 108 GGNs, Digumarthy et al. found that only 2/92 CT imaging omics features could be used for model prediction, with an AUC of 0.624, so diagnostic efficacy still needs to be improved. A non-invasive, objective and highly accurate image analysis method for classifying GGNs is therefore desirable.
PET/CT is an accepted non-invasive bimodal imaging modality in the field of lung cancer, reflecting tumor heterogeneity at a macroscopic level. Earlier studies have also found that PET metabolic parameters help to characterize GGNs: the standardized uptake value (SUV) is an independent factor for predicting the benignity or malignancy of a GGN and shows a nonlinear relationship with it. Developing a PET + HRCT bimodal imaging omics analysis method is therefore expected to improve the specific classification of GGNs. In addition, traditional imaging omics feature dimension reduction mostly uses LASSO regression, which is based on a linear model and does not account for nonlinear factors. Machine learning, the study of computer algorithms that improve automatically through experience, can capture what traditional feature selection methods cannot, and has significantly advanced the medical field in recent years.
Therefore, a machine learning-based bimodal imaging omics ground glass nodule classification method is provided.
Disclosure of Invention
In view of the deficiencies in the prior art, the invention aims to provide a machine learning-based bimodal imaging omics ground glass nodule classification method, which applies machine learning to construct an imaging omics model based on PET (positron emission tomography) and HRCT (high-resolution computed tomography) images, for classifying GGNs into preinvasive lesions, minimally invasive adenocarcinoma, invasive adenocarcinoma and benign lesions.
In order to achieve the purpose, the invention provides the following technical scheme:
a machine learning-based bimodal imaging omics ground glass nodule classification method comprises the following steps:
step one, case data collection: collecting patients with suspected pulmonary GGNs who underwent 18F-FDG PET/CT examination;
step two, image acquisition and reconstruction: acquiring images with a PET/CT imaging instrument;
step three, image feature extraction: the clinical parameters collected include patient age, gender, smoking history and fasting blood glucose level; the image features are obtained as follows: the patient's PET image and chest HRCT image are exported from the workstation in DICOM format and imported into LIFEx software; at least two experienced nuclear medicine physicians semi-automatically delineate the target region of each target lesion on the PET image using a 40% SUVmax threshold, after which the software automatically calculates and extracts the PET texture features; each target lesion is manually delineated on the HRCT image, drawing the ROI (region of interest) layer by layer along the lesion contour, and the CT texture features are automatically calculated and extracted for each GGN;
step four, data processing and analysis: for texture feature selection, features whose inter-reader consistency (ICC) between the two readers is below 0.75 are first removed, and the importance of the clinical and imaging omics features for prediction is then ranked using a machine learning method.
Further, the inclusion criteria for case data collection were: GGN diameter between 0.8 cm and 3 cm; same-session PET/CT scanning and breath-hold chest CT scanning; surgical resection of the lesion within 1 month after PET/CT examination, with complete pathological data;
the exclusion criteria for case data collection were: poor image quality or less than 64 voxels of the lesion in PET imaging; patients who received any anti-tumor therapy within 5 years; patients with stage IB lung cancer; patients with fasting plasma glucose > 11.1 mmol/L; patients with severely impaired liver function.
Further, image acquisition with the PET/CT imaging instrument proceeds as follows: the patient fasts for 4-6 hours, and height, weight and blood glucose are measured; the tracer 18F-FDG is injected intravenously via the back of the hand or the elbow at a dose of 3.70-5.55 MBq/kg, followed by 50-70 minutes of quiet rest before image acquisition; the patient lies supine on the examination bed with both hands above the head; a low-dose whole-body CT scan is first performed from the skull base to the mid-femur, followed by a whole-body PET scan; a high-dose whole-body CT scan from the skull base to the mid-femur is then performed, followed by another whole-body PET scan at 2 min per bed position in 3D mode.
Further, the image reconstruction parameters and method are: TrueX + TOF, 2 iterations, 21 subsets, Gaussian filter with full width at half maximum 2 mm, matrix 200 × 200, magnification 1.0, reconstructed slice thickness 3 mm.
Further, after the PET/CT acquisition, an HRCT scan is performed on each subject under breath hold with the following acquisition and reconstruction parameters: tube voltage 140 kV, tube current automatically adjusted with the CareDose4D technique, rotation time 0.5 s, pitch 0.6, slice thickness 1.0 mm, slice interval 0.5 mm, matrix 512 × 512, reconstruction kernels B70f very sharp and B41f medium+; the images are evaluated with TrueD software at a lung window width of 1200 HU, lung window level of -600 HU, mediastinal window width of 300 HU and mediastinal window level of 40 HU.
In summary, the invention mainly has the following beneficial effects:
the invention constructs an image omics model based on a PET image and an HRCT image by applying a machine learning method, is used for classifying GGNs (gas-plasma) including pre-infiltration lesions, micro-infiltration adenocarcinoma, infiltration adenocarcinoma and benign lesions, and is verified and tested.
Drawings
FIG. 1 is a schematic flow chart of a machine learning-based bimodal imaging omics lung ground glass nodule classification method according to one embodiment;
FIG. 2 is a schematic diagram of the machine learning feature screening and model building process according to an embodiment.
Detailed Description
The present invention is described in further detail below with reference to FIGS. 1-2.
Examples
A machine learning-based bimodal imaging omics ground glass nodule classification method, as shown in FIGS. 1-2, comprising the following steps:
the method comprises the following steps: case data collection
Collect patients with suspected pulmonary GGNs who underwent 18F-FDG PET/CT examination;
Inclusion criteria: GGN diameter between 0.8 cm and 3 cm; same-session PET/CT scanning and breath-hold chest CT scanning; surgical resection of the lesion within 1 month after PET/CT examination, with complete pathological data;
Exclusion criteria: poor image quality, or a lesion of fewer than 64 voxels on PET imaging; patients who received any anti-tumor therapy within 5 years; patients with stage IB lung cancer; patients with fasting blood glucose > 11.1 mmol/L; patients with severely impaired liver function (serum alanine aminotransferase or aspartate aminotransferase above 5 times the upper normal limit);
step two: image acquisition reconstruction
Images are acquired with a PET/CT imaging instrument;
the process is as follows: the patient fasts for 4-6 hours, and measures height, weight and blood sugar; selecting the back of hand or elbow to inject the developer intravenously18F-FDG with the dose of 3.70-5.55MBq/kg, and performing quiet rest for 50-70 minutes for image acquisition; lying on the examination bed with the patient lying on the back, holding the head with the two hands, firstly carrying out low-dose whole-body CT scanning, then carrying out whole-body PET scanning, then carrying out high-dose whole-body CT scanning, ranging from the skull base to the middle femur section, then carrying out whole-body PET scanning, and carrying out 2 min/bed and 3D mode;
image reconstruction parameters and methods: TrueX + TOF (ultra HD-PET), 2 iterations, 21 subsets, gaussian filter function, full width at half maximum 2mm, matrix 200 × 200, magnification factor 1.0, reconstruction layer thickness 3 mm;
after PET/CT acquisition, performing HRCT scanning on each examined person under breath hold, acquiring and reconstructing parameters: the tube voltage is 140kV, the tube current is automatically adjusted by adopting the CareDose4D technology, the rotation time is 0.5s, the thread pitch is 0.6, the layer thickness is 1.0mm, the layer interval is 0.5mm, and the matrix is 512 multiplied by 512. The reconstruction algorithms B70f very sharp and B41f medium + use TrueD software to evaluate the images with a lung window width of 1200HU, a lung window level of-600 HU, a mediastinum window width of 300HU, and a mediastinum window level of 40 HU;
step three: image feature extraction
The clinical parameters collected include patient age, gender, smoking history and fasting blood glucose level. The image features are obtained as follows: the patient's PET image and chest HRCT image are exported from the Siemens workstation in DICOM format and imported into LIFEx software (version 6.30, available at http://www.lifexsoft.org); two experienced nuclear medicine physicians semi-automatically delineate the target region of each target lesion (≥ 64 voxels) on the PET image using a 40% SUVmax threshold, after which the software automatically calculates and extracts the PET texture features; each target lesion is manually delineated on the HRCT image, drawing the ROI layer by layer along the lesion contour, and the CT texture features are automatically calculated and extracted for each GGN;
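The 40% SUVmax delineation and the 64-voxel inclusion check can be sketched as below. This is a minimal illustration on a synthetic SUV volume, not the LIFEx implementation; the function and variable names are hypothetical.

```python
import numpy as np

def delineate_lesion(suv_volume, seed_region, fraction=0.4):
    """Semi-automatic PET lesion delineation: within the reader's rough
    bounding region, keep voxels whose SUV >= fraction * SUVmax."""
    suv_max = float(suv_volume[seed_region].max())
    mask = seed_region & (suv_volume >= fraction * suv_max)
    return mask, suv_max

def meets_voxel_criterion(mask, min_voxels=64):
    """Inclusion criterion from the method: the lesion must cover >= 64 voxels."""
    return int(mask.sum()) >= min_voxels

# Synthetic example: a 4x4x4 hot core (SUV 10) inside a faint halo (SUV 2).
vol = np.zeros((20, 20, 20))
vol[5:15, 5:15, 5:15] = 2.0
vol[8:12, 8:12, 8:12] = 10.0
seed = np.zeros_like(vol, dtype=bool)
seed[4:16, 4:16, 4:16] = True   # reader's rough bounding box

mask, suv_max = delineate_lesion(vol, seed)
```

Here SUVmax is 10, so the 40% threshold of 4.0 keeps only the 64-voxel hot core, which just satisfies the inclusion criterion.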
step four: data processing analysis
For texture feature selection: features whose inter-reader consistency (ICC) between the two readers is below 0.75 are first removed; the importance of the clinical and imaging omics features for prediction is then ranked with the extreme gradient boosting (XGBoost) machine learning method, which has previously been shown to provide state-of-the-art results for a variety of medical applications and performs well in machine learning tasks.
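The ICC reproducibility filter can be sketched as follows. The patent does not specify which ICC form is used, so a two-way random, single-measure ICC(2,1) is assumed here, and the function names are illustrative.

```python
import numpy as np

def icc2_1(ratings):
    """Two-way random, single-measure ICC(2,1).
    ratings: (n_subjects, k_raters) array of one feature's values."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def filter_reproducible(features_r1, features_r2, names, cutoff=0.75):
    """Keep only features whose two-reader ICC is >= cutoff (0.75 here)."""
    kept = []
    for j, name in enumerate(names):
        pair = np.column_stack([features_r1[:, j], features_r2[:, j]])
        if icc2_1(pair) >= cutoff:
            kept.append(name)
    return kept

# Demo: one feature the readers agree on, one that is pure noise.
rng = np.random.default_rng(0)
base = rng.normal(size=30)
reader1 = np.column_stack([base, rng.normal(size=30)])
reader2 = np.column_stack([base + rng.normal(scale=0.05, size=30),
                           rng.normal(size=30)])
kept = filter_reproducible(reader1, reader2, ["stable_feature", "noisy_feature"])
```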
Feature dimensionality is reduced using a ranking-based feature selection method with the Gini coefficient. The Gini coefficient is a measure of distributional inequality, defined as a value between 0 and 1, where 0 means that all elements belong to a single category (or there is only one category) and 1 means that the elements are randomly distributed across categories. The clinical and imaging omics features are ranked by their Gini coefficient scores, derived by evaluating their association with the histological classification.
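A Gini-style importance ranking can be sketched as below. The patent uses XGBoost; as a stand-in under that assumption, this sketch uses scikit-learn's `GradientBoostingClassifier`, whose impurity-based `feature_importances_` plays the same role, and the feature names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def rank_features(X, y, names):
    """Fit a gradient-boosted classifier and rank features by
    impurity-based (Gini-style) importance, highest first."""
    model = GradientBoostingClassifier(random_state=0).fit(X, y)
    order = np.argsort(model.feature_importances_)[::-1]
    return [(names[i], float(model.feature_importances_[i])) for i in order]

# Toy data: only the first "feature" actually drives the label.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
ranking = rank_features(X, y, ["suv_entropy", "ct_glcm_contrast", "age"])
```

The top-ranked entries would then be carried forward into model building, discarding the low-importance tail.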
To compare the predictive performance of the model and feature subsets, a receiver operating characteristic (ROC) curve is plotted and the area under the curve (AUC) is measured. The following performance indicators are calculated: AUC, accuracy, F1 score, precision (also known as positive predictive value) and recall (also known as sensitivity); the F1 score (also known as F-score or F-measure) is the harmonic mean of precision and recall. All analyses used R, version 3.4.3 (available at http://www.R-project.org).
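The performance indicators listed above can be computed as in this sketch. The patent states the analyses were done in R; this Python equivalent, assuming scikit-learn is available, shows the same metrics on a tiny hand-made example.

```python
import numpy as np
from sklearn.metrics import (roc_auc_score, accuracy_score, f1_score,
                             precision_score, recall_score)

def evaluate(y_true, y_prob, threshold=0.5):
    """Binary-classification metrics listed in the method."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    return {
        "AUC": roc_auc_score(y_true, y_prob),
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),  # positive predictive value
        "recall": recall_score(y_true, y_pred),        # sensitivity
        "F1": f1_score(y_true, y_pred),                # harmonic mean of the two above
    }

# Hand-checkable example: predictions [0, 0, 0, 1] against truth [0, 0, 1, 1].
metrics = evaluate([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```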
In summary, the machine learning-based bimodal imaging omics method for classifying pulmonary ground glass nodules provided by the invention applies a machine learning method to construct an imaging omics model based on PET and HRCT images, used to classify GGNs into preinvasive lesions, minimally invasive adenocarcinoma, invasive adenocarcinoma and benign lesions.
Aspects not described herein are the same as, or can be implemented with, the prior art. The embodiment serves only to explain the invention and does not limit it; after reading this specification, those skilled in the art may modify the embodiment as needed without inventive contribution, and such modifications remain protected by patent law within the scope of the claims of the invention.
Claims (5)
1. A machine learning-based bimodal imaging omics ground glass nodule classification method, characterized by comprising the following steps:
step one, case data collection: collecting patients with suspected pulmonary GGNs who underwent 18F-FDG PET/CT examination;
step two, image acquisition and reconstruction: acquiring images with a PET/CT imaging instrument;
step three, image feature extraction: the clinical parameters collected include patient age, gender, smoking history and fasting blood glucose level; the image features are obtained as follows: the patient's PET image and chest HRCT image are exported from the workstation in DICOM format and imported into LIFEx software; at least two experienced nuclear medicine physicians semi-automatically delineate the target region of each target lesion on the PET image using a 40% SUVmax threshold, after which the software automatically calculates and extracts the PET texture features; each target lesion is manually delineated on the HRCT image, drawing the ROI (region of interest) layer by layer along the lesion contour, and the CT texture features are automatically calculated and extracted for each GGN;
step four, data processing and analysis: for texture feature selection, features whose inter-reader consistency (ICC) between the two readers is below 0.75 are first removed, and the importance of the clinical and imaging omics features for prediction is then ranked using a machine learning method.
2. The machine learning-based bimodal imaging omics ground glass nodule classification method of claim 1, wherein the inclusion criteria for case data collection are: GGN diameter between 0.8 cm and 3 cm; same-session PET/CT scanning and breath-hold chest CT scanning; surgical resection of the lesion within 1 month after PET/CT examination, with complete pathological data;
the exclusion criteria for case data collection were: poor image quality or less than 64 voxels of the lesion in PET imaging; patients who received any anti-tumor therapy within 5 years; patients with stage IB lung cancer; patients with fasting plasma glucose > 11.1 mmol/L; patients with severely impaired liver function.
3. The machine learning-based bimodal imaging omics ground glass nodule classification method of claim 1, wherein image acquisition with the PET/CT imaging instrument proceeds as follows: the patient fasts for 4-6 hours, and height, weight and blood glucose are measured; the tracer 18F-FDG is injected intravenously via the back of the hand or the elbow at a dose of 3.70-5.55 MBq/kg, followed by 50-70 minutes of quiet rest before image acquisition; the patient lies supine on the examination bed with both hands above the head; a low-dose whole-body CT scan is first performed from the skull base to the mid-femur, followed by a whole-body PET scan at 2 min per bed position in 3D mode.
4. The machine learning-based bimodal imaging omics ground glass nodule classification method of claim 3, wherein the image reconstruction parameters and method are: TrueX + TOF, 2 iterations, 21 subsets, Gaussian filter with full width at half maximum 2 mm, matrix 200 × 200, magnification 1.0, reconstructed slice thickness 3 mm.
5. The machine learning-based bimodal imaging omics ground glass nodule classification method of claim 4, wherein after the PET/CT acquisition a breath-hold HRCT scan is performed on each subject, with the following acquisition and reconstruction parameters: tube voltage 140 kV, tube current automatically adjusted with the CareDose4D technique, rotation time 0.5 s, pitch 0.6, slice thickness 1.0 mm, slice interval 0.5 mm, matrix 512 × 512, reconstruction kernels B70f very sharp and B41f medium+; the images are evaluated with TrueD software at a lung window width of 1200 HU, lung window level of -600 HU, mediastinal window width of 300 HU and mediastinal window level of 40 HU.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110234903.6A CN112767393A (en) | 2021-03-03 | 2021-03-03 | Machine learning-based bimodal imaging omics ground glass nodule classification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110234903.6A CN112767393A (en) | 2021-03-03 | 2021-03-03 | Machine learning-based bimodal imaging omics ground glass nodule classification method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112767393A true CN112767393A (en) | 2021-05-07 |
Family
ID=75691022
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110234903.6A Pending CN112767393A (en) | 2021-03-03 | 2021-03-03 | Machine learning-based bimodal imaging omics ground glass nodule classification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112767393A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113554663A (en) * | 2021-06-08 | 2021-10-26 | 浙江大学 | System for automatically analyzing dopamine transporter PET image based on CT structural image |
CN114927230A (en) * | 2022-04-11 | 2022-08-19 | 四川大学华西医院 | Machine learning-based severe heart failure patient prognosis decision support system and method |
CN115222805A (en) * | 2022-09-20 | 2022-10-21 | 威海市博华医疗设备有限公司 | Prospective imaging method and device based on lung cancer image |
CN116206756A (en) * | 2023-05-06 | 2023-06-02 | 中国医学科学院北京协和医院 | Lung adenocarcinoma data processing method, system, equipment and computer readable storage medium |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111539918A (en) * | 2020-04-15 | 2020-08-14 | 复旦大学附属肿瘤医院 | Glassy lung nodule risk layered prediction system based on deep learning |
CN111539946A (en) * | 2020-04-30 | 2020-08-14 | 常州市第一人民医院 | Method for identifying early lung adenocarcinoma manifested as frosted glass nodule |
CN112215799A (en) * | 2020-09-14 | 2021-01-12 | 北京航空航天大学 | Automatic classification method and system for grinded glass lung nodules |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113554663A (en) * | 2021-06-08 | 2021-10-26 | 浙江大学 | System for automatically analyzing dopamine transporter PET image based on CT structural image |
CN113554663B (en) * | 2021-06-08 | 2023-10-31 | 浙江大学 | System for automatically analyzing PET (positron emission tomography) images of dopamine transporter based on CT (computed tomography) structural images |
CN114927230A (en) * | 2022-04-11 | 2022-08-19 | 四川大学华西医院 | Machine learning-based severe heart failure patient prognosis decision support system and method |
CN114927230B (en) * | 2022-04-11 | 2023-05-23 | 四川大学华西医院 | Prognosis decision support system and method for severe heart failure patient based on machine learning |
CN115222805A (en) * | 2022-09-20 | 2022-10-21 | 威海市博华医疗设备有限公司 | Prospective imaging method and device based on lung cancer image |
CN115222805B (en) * | 2022-09-20 | 2023-01-13 | 威海市博华医疗设备有限公司 | Prospective imaging method and device based on lung cancer image |
CN116206756A (en) * | 2023-05-06 | 2023-06-02 | 中国医学科学院北京协和医院 | Lung adenocarcinoma data processing method, system, equipment and computer readable storage medium |
CN116206756B (en) * | 2023-05-06 | 2023-10-27 | 中国医学科学院北京协和医院 | Lung adenocarcinoma data processing method, system, equipment and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109493325B (en) | Tumor heterogeneity analysis system based on CT images | |
CN112767393A (en) | Machine learning-based bimodal imaging omics ground glass nodule classification method | |
US6999549B2 (en) | Method and apparatus for quantifying tissue fat content | |
JP5068519B2 (en) | Machine-readable medium and apparatus including routines for automatically characterizing malignant tumors | |
Sambuceti et al. | Estimating the whole bone-marrow asset in humans by a computational approach to integrated PET/CT imaging | |
US7627078B2 (en) | Methods and apparatus for detecting structural, perfusion, and functional abnormalities | |
JP2004174232A (en) | Computer aided diagnosis of image set | |
US9147242B2 (en) | Processing system for medical scan images | |
CA2438479A1 (en) | Computer assisted analysis of tomographic mammography data | |
US9305351B2 (en) | Method of determining the probabilities of suspect nodules being malignant | |
CN113288186A (en) | Deep learning algorithm-based breast tumor tissue detection method and device | |
Li et al. | Application analysis of ai technology combined with spiral CT scanning in early lung cancer screening | |
US11478163B2 (en) | Image processing and emphysema threshold determination | |
Sayed et al. | Automatic classification of breast tumors using features extracted from magnetic resonance images | |
Uhlig et al. | Pre-and post-contrast versus post-contrast cone-beam breast CT: can we reduce radiation exposure while maintaining diagnostic accuracy? | |
JP6078531B2 (en) | Method, apparatus and storage medium executed by digital processing apparatus | |
De Santi et al. | Comparison of Histogram-based Textural Features between Cancerous and Normal Prostatic Tissue in Multiparametric Magnetic Resonance Images | |
Karwoski et al. | Processing of CT images for analysis of diffuse lung disease in the lung tissue research consortium | |
Borguezan et al. | Solid indeterminate nodules with a radiological stability suggesting benignity: a texture analysis of computed tomography images based on the kurtosis and skewness of the nodule volume density histogram | |
Lin et al. | Application of Pet‐CT Fusion Deep Learning Imaging in Precise Radiotherapy of Thyroid Cancer | |
CN113822873A (en) | Bimodal imagery omics image analysis method for lung nodule classification | |
Mertelmeier et al. | 3D breast tomosynthesis–intelligent technology for clear clinical benefits | |
Yang et al. | The automated lung segmentation and tumor extraction algorithm for PET/CT images | |
Wang et al. | Value of the texture feature for solitary pulmonary nodules and mass lesions based on PET/CT |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||