CN112767393A - Machine learning-based bimodal imaging omics ground glass nodule classification method - Google Patents

Machine learning-based bimodal imaging omics ground glass nodule classification method

Info

Publication number
CN112767393A
Authority
CN
China
Prior art keywords
image
pet
machine learning
classification method
patients
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110234903.6A
Other languages
Chinese (zh)
Inventor
牛荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
First Peoples Hospital of Changzhou
Original Assignee
First Peoples Hospital of Changzhou
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by First Peoples Hospital of Changzhou
Priority to CN202110234903.6A
Publication of CN112767393A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung
    • G06T2207/30064Lung nodule

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Epidemiology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)
  • Nuclear Medicine (AREA)

Abstract

The invention belongs to the technical field of medical treatment and discloses a machine learning-based bimodal imaging omics ground glass nodule classification method, which comprises the following steps: step one, case data collection: collecting patients with suspected ground glass nodules (GGNs) who underwent ¹⁸F-FDG PET/CT examination; step two, image acquisition and reconstruction: acquiring images with a PET/CT scanner; step three, image feature extraction; and step four, data processing and analysis. The method of the invention is robust, accurate, simple and feasible; it integrates molecular-level functional metabolic information and anatomical information of the lesion, effectively improves predictive efficiency over conventional CT parameters and single-modality CT imaging omics, and is beneficial to the clinical management of GGNs.

Description

Machine learning-based bimodal imaging omics ground glass nodule classification method
Technical Field
The invention relates to the technical field of medical treatment, in particular to a machine learning-based bimodal imaging omics ground glass nodule classification method.
Background
Lung cancer is the leading cause of cancer-related death worldwide. In China in particular, the incidence of lung cancer is rising rapidly, with lung cancer mortality estimated to increase by about 40% between 2015 and 2030. Early identification and individualized management are key to improving the prognosis of lung cancer patients. With the dramatic increase in the detection of asymptomatic pulmonary nodules and the changing epidemiology of lung cancer in China, the diagnosis and differential diagnosis of ground glass nodules (GGNs) has become a significant challenge for clinicians. HRCT is a generally accepted method for characterizing GGNs, but its discriminative performance needs improvement, because the radiological features of benign and malignant GGNs overlap to some extent and are easily influenced by subjective factors. Pathological examination is the gold standard for lesion diagnosis, but GGNs contain few cellular components and biopsy is technically demanding, so the value of bronchoscopy and percutaneous lung puncture for GGNs is limited.
Radiomics (imaging omics) is a popular and promising diagnostic approach: characteristic spatial data are extracted in a high-throughput manner from image data of a region of interest using mathematical and statistical methods, capturing valuable lesion information that may be missed by the naked eye and improving the accuracy of disease diagnosis, with the advantages of being real-time, objective and non-invasive. Previous studies have shown that CT texture features are of great value for lung cancer identification, tumor growth prediction, gene expression analysis and efficacy assessment. However, most of these studies are based on lung parenchymal nodules, and reports on GGNs are scarce. In a benign-malignant differentiation study of 108 GGNs, Digumarthy et al. found that only 2 of 92 CT imaging omics features could be used for model prediction, with an AUC of 0.624, so diagnostic performance still needs to be improved. A non-invasive, objective and highly accurate image analysis method for classifying GGNs is therefore desirable.
PET/CT, a non-invasive bimodal imaging technique that reflects tumor heterogeneity at the macroscopic level, has been widely accepted in the field of lung cancer. Earlier studies also found that PET metabolic parameters help to characterize GGNs: the standardized uptake value (SUV) is an independent factor for predicting whether a GGN is benign or malignant, and its relationship with GGN malignancy is nonlinear. Developing a PET + HRCT bimodal imaging omics analysis method is therefore expected to improve the specific classification of GGNs. In addition, most traditional imaging omics feature dimension reduction methods use LASSO regression, which is mainly based on a linear model and does not consider nonlinear factors. Machine learning, the study of computer algorithms that improve automatically through experience, can capture relationships that traditional feature-selection methods find difficult, and has significantly advanced the medical field in recent years.
Therefore, a machine learning-based bimodal imaging omics ground glass nodule classification method is provided.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a machine learning-based bimodal imaging omics ground glass nodule classification method, which uses machine learning to construct an imaging omics model based on PET (positron emission tomography) images and HRCT (high-resolution computed tomography) images and is used to classify GGNs into pre-invasive lesions, minimally invasive adenocarcinoma, invasive adenocarcinoma and benign lesions.
In order to achieve the purpose, the invention provides the following technical scheme:
a machine learning-based bimodal imaging omics ground glass nodule classification method comprises the following steps:
step one, case data collection: collecting patients with suspected lung GGNs who underwent ¹⁸F-FDG PET/CT examination;
step two, image acquisition and reconstruction: acquiring images with a PET/CT scanner;
step three, image feature extraction: the clinical parameters collected include patient age, gender, smoking history and fasting blood glucose level; the image features are obtained as follows: the patient's PET images and chest HRCT images are exported from the workstation in DICOM format and imported into LIFEx software; at least two experienced nuclear medicine physicians semi-automatically delineate the target region of each target lesion on the PET images using a 40% SUVmax threshold, after which the software automatically calculates and extracts the PET texture features; each target lesion is then manually delineated on the HRCT images by drawing the ROI (region of interest) slice by slice along the lesion contour, and the CT texture features are automatically calculated and extracted for each GGN;
step four, data processing and analysis: for the selection of texture features, parameters with an inter-reader consistency (ICC) below 0.75 between the two readers are first removed, and the importance of the clinical and imaging omics features for prediction is then ranked using a machine learning method.
Further, the inclusion criteria for case data collection were: GGN diameter between 0.8 cm and 3 cm; PET/CT scanning and breath-hold chest CT scanning performed on the same scanner; surgical resection of the lesion within 1 month after the PET/CT examination, with complete pathological data;
the exclusion criteria for case data collection were: poor image quality or less than 64 voxels of the lesion in PET imaging; patients who received any anti-tumor therapy within 5 years; patients with stage IB lung cancer; patients with fasting plasma glucose > 11.1 mmol/L; patients with severely impaired liver function.
Further, the image acquisition procedure with the PET/CT scanner is as follows: the patient fasts for 4-6 hours, and height, weight and blood glucose are measured; the tracer ¹⁸F-FDG is injected intravenously via the back of the hand or the elbow at a dose of 3.70-5.55 MBq/kg, and images are acquired after 50-70 minutes of quiet rest; the patient lies supine on the examination bed with both hands holding the head; a low-dose whole-body CT scan is first performed, covering the skull base to the mid-femur, followed by a whole-body PET scan; a high-dose whole-body CT scan is then performed over the same range, again followed by a whole-body PET scan, at 2 min per bed position in 3D mode.
Further, the image reconstruction parameters and method are specifically: TrueX + TOF, 2 iterations, 21 subsets, Gaussian filter, full width at half maximum 2 mm, matrix 200 × 200, magnification 1.0, reconstructed slice thickness 3 mm.
Further, after PET/CT acquisition, an HRCT scan is performed on each subject under breath hold, with the following acquisition and reconstruction parameters: tube voltage 140 kV, tube current automatically adjusted using the CareDose 4D technique, rotation time 0.5 s, pitch 0.6, slice thickness 1.0 mm, slice interval 0.5 mm, matrix 512 × 512, reconstruction kernels B70f very sharp and B41f medium+; images are evaluated with TrueD software at a lung window width of 1200 HU, lung window level of -600 HU, mediastinal window width of 300 HU and mediastinal window level of 40 HU.
In summary, the invention mainly has the following beneficial effects:
the invention constructs an image omics model based on a PET image and an HRCT image by applying a machine learning method, is used for classifying GGNs (gas-plasma) including pre-infiltration lesions, micro-infiltration adenocarcinoma, infiltration adenocarcinoma and benign lesions, and is verified and tested.
Drawings
FIG. 1 is a schematic flow chart of the machine learning-based bimodal imaging omics lung ground glass nodule classification method according to one embodiment;
fig. 2 is a schematic diagram of a process of machine learning feature screening and model building according to an embodiment.
Detailed Description
The present invention is described in further detail below with reference to FIGS. 1-2.
Examples
A machine learning based bimodal imaging omics ground glass nodule classification method, as shown in fig. 1-2, comprising the steps of:
the method comprises the following steps: case data collection
Collect patients with suspected lung GGNs who underwent ¹⁸F-FDG PET/CT examination;
Inclusion criteria: GGN diameter between 0.8 cm and 3 cm; PET/CT scanning and breath-hold chest CT scanning performed on the same scanner; surgical resection of the lesion within 1 month after the PET/CT examination, with complete pathological data;
exclusion criteria: poor image quality or less than 64 voxels of the lesion in PET imaging; patients who received any anti-tumor therapy within 5 years; patients with stage IB lung cancer; patients with fasting plasma glucose > 11.1 mmol/L; patients with severely impaired liver function (serum alanine aminotransferase or aspartate aminotransferase above the 5-fold upper normal limit);
step two: image acquisition reconstruction
Images are acquired with a PET/CT scanner;
the process is as follows: the patient fasts for 4-6 hours, and measures height, weight and blood sugar; selecting the back of hand or elbow to inject the developer intravenously18F-FDG with the dose of 3.70-5.55MBq/kg, and performing quiet rest for 50-70 minutes for image acquisition; lying on the examination bed with the patient lying on the back, holding the head with the two hands, firstly carrying out low-dose whole-body CT scanning, then carrying out whole-body PET scanning, then carrying out high-dose whole-body CT scanning, ranging from the skull base to the middle femur section, then carrying out whole-body PET scanning, and carrying out 2 min/bed and 3D mode;
image reconstruction parameters and methods: TrueX + TOF (ultra HD-PET), 2 iterations, 21 subsets, gaussian filter function, full width at half maximum 2mm, matrix 200 × 200, magnification factor 1.0, reconstruction layer thickness 3 mm;
after PET/CT acquisition, performing HRCT scanning on each examined person under breath hold, acquiring and reconstructing parameters: the tube voltage is 140kV, the tube current is automatically adjusted by adopting the CareDose4D technology, the rotation time is 0.5s, the thread pitch is 0.6, the layer thickness is 1.0mm, the layer interval is 0.5mm, and the matrix is 512 multiplied by 512. The reconstruction algorithms B70f very sharp and B41f medium + use TrueD software to evaluate the images with a lung window width of 1200HU, a lung window level of-600 HU, a mediastinum window width of 300HU, and a mediastinum window level of 40 HU;
step three: image feature extraction
The clinical parameters collected include patient age, gender, smoking history and fasting blood glucose level. The image features are obtained as follows: the patient's PET images and chest HRCT images are exported from the Siemens workstation in DICOM format and imported into LIFEx software (version 6.30, available at http://www.lifexsoft.org); two experienced nuclear medicine physicians semi-automatically delineate the target region (≥ 64 voxels) of each target lesion on the PET images using a 40% SUVmax threshold, after which the software automatically calculates and extracts the PET texture features; each target lesion is manually delineated on the HRCT images by drawing the ROI slice by slice along the lesion contour, and the CT texture features are automatically calculated and extracted for each GGN.
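The 40% SUVmax thresholding is performed inside LIFEx; purely as an illustration of the rule, a minimal NumPy sketch is given below, assuming a 3D SUV array and a hypothetical coarse mask drawn by the reader around the lesion (function and variable names are not from the patent):

```python
import numpy as np

def delineate_voi(suv_volume: np.ndarray, coarse_mask: np.ndarray,
                  fraction: float = 0.40, min_voxels: int = 64) -> np.ndarray:
    """Keep voxels inside the reader's coarse region whose SUV is at least
    `fraction` (40% here) of the lesion's SUVmax; reject lesions < 64 voxels."""
    suv_max = suv_volume[coarse_mask].max()                 # SUVmax within the coarse region
    voi = coarse_mask & (suv_volume >= fraction * suv_max)  # 40% SUVmax isocontour
    if voi.sum() < min_voxels:
        raise ValueError("lesion below 64 voxels; excluded per the study protocol")
    return voi

# Hypothetical usage with a random volume standing in for a real PET image:
# pet = np.random.rand(128, 128, 80) * 5.0
# box = np.zeros_like(pet, dtype=bool); box[60:70, 60:70, 30:40] = True
# voi_mask = delineate_voi(pet, box)
```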
step four: data processing analysis
For the selection of texture features, parameters with an inter-reader consistency (ICC) below 0.75 between the two readers are first removed, and the importance of the clinical and imaging omics features for prediction is then ranked using extreme gradient boosting (XGBoost), a machine learning method that has previously been shown to deliver state-of-the-art results in a variety of medical applications.
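As an illustration only (not the patented implementation), the ICC screening and XGBoost ranking might be sketched in Python as follows, assuming a pandas DataFrame of per-lesion features, a per-feature ICC series, and integer-coded histological labels; the hyperparameters are placeholders:

```python
import pandas as pd
from xgboost import XGBClassifier

def rank_features(features: pd.DataFrame, icc: pd.Series, labels: pd.Series) -> pd.Series:
    """Drop features whose inter-reader ICC is below 0.75, then rank the
    remaining clinical + imaging omics features by XGBoost importance."""
    stable = icc[icc >= 0.75].index          # keep only reproducible features
    X = features[stable]

    model = XGBClassifier(
        n_estimators=200, max_depth=3, learning_rate=0.1,
        objective="multi:softprob",          # four GGN classes in this study
    )
    model.fit(X, labels)                     # labels: integers 0..3 coding histology

    importance = pd.Series(model.feature_importances_, index=X.columns)
    return importance.sort_values(ascending=False)
```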
Feature dimensionality is reduced using a ranking-based feature selection method based on the Gini coefficient. The Gini coefficient is a measure of distribution inequality, defined as a value between 0 and 1, where 0 means that all elements belong to a single category (or only one category exists) and 1 means that the elements are randomly distributed across categories. Clinical and imaging omics features are ranked according to their Gini coefficient score, which is derived by evaluating their association with the histological classification.
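The text does not tie the Gini-based ranking to a specific library; one common reading is the mean decrease in Gini impurity reported by a tree ensemble, sketched below under that assumption (estimator choice and the number of retained features are illustrative):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def gini_rank(X: pd.DataFrame, labels: pd.Series, keep: int = 10) -> pd.Index:
    """Rank features by mean decrease in Gini impurity (one possible reading of
    the Gini-coefficient ranking described above) and keep the top `keep`."""
    forest = RandomForestClassifier(n_estimators=500, random_state=0)
    forest.fit(X, labels)
    ranking = pd.Series(forest.feature_importances_, index=X.columns)
    return ranking.sort_values(ascending=False).head(keep).index
```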
To compare the predictive performance of the models and feature subsets, receiver operating characteristic (ROC) curves are plotted and the area under the curve (AUC) is measured. The following performance indicators are calculated: AUC, accuracy, F1 score, precision (also known as positive predictive value) and recall (also known as sensitivity); the F1 score (also known as the F score or F measure) is the harmonic mean of precision and recall. All analyses use R, version 3.4.3 (available at http://www.R-project.org).
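The study performed all analyses in R 3.4.3; purely as an illustrative sketch, the same indicators can be computed in Python with scikit-learn (binary case shown for simplicity; the variable names are placeholders, not from the patent):

```python
import numpy as np
from sklearn.metrics import (roc_auc_score, accuracy_score, f1_score,
                             precision_score, recall_score)

def evaluate(y_true, y_prob, threshold: float = 0.5) -> dict:
    """Compute the performance indicators named in the text for a binary task;
    y_true are ground-truth labels (0/1), y_prob are predicted probabilities."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    return {
        "AUC":       roc_auc_score(y_true, y_prob),    # area under the ROC curve
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),  # positive predictive value
        "recall":    recall_score(y_true, y_pred),     # sensitivity
        "F1":        f1_score(y_true, y_pred),         # harmonic mean of precision and recall
    }
```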
In summary, the machine learning-based bimodal imaging omics method for classifying pulmonary ground glass nodules provided by the invention uses machine learning to construct an imaging omics model based on PET images and HRCT images, which is used to classify GGNs into pre-invasive lesions, minimally invasive adenocarcinoma, invasive adenocarcinoma and benign lesions.
The parts not described in detail in the present invention are the same as, or can be implemented by, the prior art. The embodiment above only explains the present invention and does not limit it; after reading this specification, those skilled in the art may make modifications to the embodiment as needed without inventive contribution, and all such modifications are protected by patent law within the scope of the claims of the present invention.

Claims (5)

1. A machine learning-based bimodal imaging omics ground glass nodule classification method, characterized in that the method comprises the following steps:
step one, case data collection: collecting patients with suspected lung GGNs who underwent ¹⁸F-FDG PET/CT examination;
step two, image acquisition and reconstruction: acquiring images with a PET/CT scanner;
step three, image feature extraction: the clinical parameters collected include patient age, gender, smoking history and fasting blood glucose level; the image features are obtained as follows: the patient's PET images and chest HRCT images are exported from the workstation in DICOM format and imported into LIFEx software; at least two experienced nuclear medicine physicians semi-automatically delineate the target region of each target lesion on the PET images using a 40% SUVmax threshold, after which the software automatically calculates and extracts the PET texture features; each target lesion is then manually delineated on the HRCT images by drawing the ROI (region of interest) slice by slice along the lesion contour, and the CT texture features are automatically calculated and extracted for each GGN;
step four, data processing and analysis: for the selection of texture features, parameters with an inter-reader consistency (ICC) below 0.75 between the two readers are first removed, and the importance of the clinical and imaging omics features for prediction is then ranked using a machine learning method.
2. The machine learning-based bimodal imaging omics ground glass nodule classification method of claim 1, wherein the inclusion criteria for case data collection are: GGN diameter between 0.8 cm and 3 cm; PET/CT scanning and breath-hold chest CT scanning performed on the same scanner; surgical resection of the lesion within 1 month after the PET/CT examination, with complete pathological data;
the exclusion criteria for case data collection were: poor image quality or less than 64 voxels of the lesion in PET imaging; patients who received any anti-tumor therapy within 5 years; patients with stage IB lung cancer; patients with fasting plasma glucose > 11.1 mmol/L; patients with severely impaired liver function.
3. The machine learning-based bimodal imaging omics ground glass nodule classification method of claim 1, wherein the image acquisition procedure with the PET/CT scanner is as follows: the patient fasts for 4-6 hours, and height, weight and blood glucose are measured; the tracer ¹⁸F-FDG is injected intravenously via the back of the hand or the elbow at a dose of 3.70-5.55 MBq/kg, and images are acquired after 50-70 minutes of quiet rest; the patient lies supine on the examination bed with both hands holding the head; a low-dose whole-body CT scan is first performed, covering the skull base to the mid-femur, followed by a whole-body PET scan at 2 min per bed position in 3D mode.
4. The machine learning-based bimodal imaging omics ground glass nodule classification method of claim 3, wherein the image reconstruction parameters and method are: TrueX + TOF, 2 iterations, 21 subsets, Gaussian filter, full width at half maximum 2 mm, matrix 200 × 200, magnification 1.0, reconstructed slice thickness 3 mm.
5. The machine learning-based bimodal imaging omics ground glass nodule classification method of claim 4, wherein after PET/CT acquisition a breath-hold HRCT scan is performed on each subject, with the following acquisition and reconstruction parameters: tube voltage 140 kV, tube current automatically adjusted using the CareDose 4D technique, rotation time 0.5 s, pitch 0.6, slice thickness 1.0 mm, slice interval 0.5 mm, matrix 512 × 512, reconstruction kernels B70f very sharp and B41f medium+; images are evaluated with TrueD software at a lung window width of 1200 HU, lung window level of -600 HU, mediastinal window width of 300 HU and mediastinal window level of 40 HU.
CN202110234903.6A 2021-03-03 2021-03-03 Machine learning-based bimodal imaging omics ground glass nodule classification method Pending CN112767393A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110234903.6A CN112767393A (en) 2021-03-03 2021-03-03 Machine learning-based bimodal imaging omics ground glass nodule classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110234903.6A CN112767393A (en) 2021-03-03 2021-03-03 Machine learning-based bimodal imaging omics ground glass nodule classification method

Publications (1)

Publication Number Publication Date
CN112767393A true CN112767393A (en) 2021-05-07

Family

ID=75691022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110234903.6A Pending CN112767393A (en) 2021-03-03 2021-03-03 Machine learning-based bimodal imaging omics ground glass nodule classification method

Country Status (1)

Country Link
CN (1) CN112767393A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113554663A (en) * 2021-06-08 2021-10-26 浙江大学 System for automatically analyzing dopamine transporter PET image based on CT structural image
CN114927230A (en) * 2022-04-11 2022-08-19 四川大学华西医院 Machine learning-based severe heart failure patient prognosis decision support system and method
CN115222805A (en) * 2022-09-20 2022-10-21 威海市博华医疗设备有限公司 Prospective imaging method and device based on lung cancer image
CN116206756A (en) * 2023-05-06 2023-06-02 中国医学科学院北京协和医院 Lung adenocarcinoma data processing method, system, equipment and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539946A (en) * 2020-04-30 2020-08-14 常州市第一人民医院 Method for identifying early lung adenocarcinoma manifested as frosted glass nodule
CN111539918A (en) * 2020-04-15 2020-08-14 复旦大学附属肿瘤医院 Glassy lung nodule risk layered prediction system based on deep learning
CN112215799A (en) * 2020-09-14 2021-01-12 北京航空航天大学 Automatic classification method and system for grinded glass lung nodules

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539918A (en) * 2020-04-15 2020-08-14 复旦大学附属肿瘤医院 Glassy lung nodule risk layered prediction system based on deep learning
CN111539946A (en) * 2020-04-30 2020-08-14 常州市第一人民医院 Method for identifying early lung adenocarcinoma manifested as frosted glass nodule
CN112215799A (en) * 2020-09-14 2021-01-12 北京航空航天大学 Automatic classification method and system for grinded glass lung nodules

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113554663A (en) * 2021-06-08 2021-10-26 浙江大学 System for automatically analyzing dopamine transporter PET image based on CT structural image
CN113554663B (en) * 2021-06-08 2023-10-31 浙江大学 System for automatically analyzing PET (positron emission tomography) images of dopamine transporter based on CT (computed tomography) structural images
CN114927230A (en) * 2022-04-11 2022-08-19 四川大学华西医院 Machine learning-based severe heart failure patient prognosis decision support system and method
CN114927230B (en) * 2022-04-11 2023-05-23 四川大学华西医院 Prognosis decision support system and method for severe heart failure patient based on machine learning
CN115222805A (en) * 2022-09-20 2022-10-21 威海市博华医疗设备有限公司 Prospective imaging method and device based on lung cancer image
CN115222805B (en) * 2022-09-20 2023-01-13 威海市博华医疗设备有限公司 Prospective imaging method and device based on lung cancer image
CN116206756A (en) * 2023-05-06 2023-06-02 中国医学科学院北京协和医院 Lung adenocarcinoma data processing method, system, equipment and computer readable storage medium
CN116206756B (en) * 2023-05-06 2023-10-27 中国医学科学院北京协和医院 Lung adenocarcinoma data processing method, system, equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN109493325B (en) Tumor heterogeneity analysis system based on CT images
CN112767393A (en) Machine learning-based bimodal imaging omics ground glass nodule classification method
US6999549B2 (en) Method and apparatus for quantifying tissue fat content
JP5068519B2 (en) Machine-readable medium and apparatus including routines for automatically characterizing malignant tumors
Sambuceti et al. Estimating the whole bone-marrow asset in humans by a computational approach to integrated PET/CT imaging
US7627078B2 (en) Methods and apparatus for detecting structural, perfusion, and functional abnormalities
JP2004174232A (en) Computer aided diagnosis of image set
US9147242B2 (en) Processing system for medical scan images
CA2438479A1 (en) Computer assisted analysis of tomographic mammography data
US9305351B2 (en) Method of determining the probabilities of suspect nodules being malignant
CN113288186A (en) Deep learning algorithm-based breast tumor tissue detection method and device
Li et al. Application analysis of ai technology combined with spiral CT scanning in early lung cancer screening
US11478163B2 (en) Image processing and emphysema threshold determination
Sayed et al. Automatic classification of breast tumors using features extracted from magnetic resonance images
Uhlig et al. Pre-and post-contrast versus post-contrast cone-beam breast CT: can we reduce radiation exposure while maintaining diagnostic accuracy?
JP6078531B2 (en) Method, apparatus and storage medium executed by digital processing apparatus
De Santi et al. Comparison of Histogram-based Textural Features between Cancerous and Normal Prostatic Tissue in Multiparametric Magnetic Resonance Images
Karwoski et al. Processing of CT images for analysis of diffuse lung disease in the lung tissue research consortium
Borguezan et al. Solid indeterminate nodules with a radiological stability suggesting benignity: a texture analysis of computed tomography images based on the kurtosis and skewness of the nodule volume density histogram
Lin et al. Application of Pet‐CT Fusion Deep Learning Imaging in Precise Radiotherapy of Thyroid Cancer
CN113822873A (en) Bimodal imagery omics image analysis method for lung nodule classification
Mertelmeier et al. 3D breast tomosynthesis–intelligent technology for clear clinical benefits
Yang et al. The automated lung segmentation and tumor extraction algorithm for PET/CT images
Wang et al. Value of the texture feature for solitary pulmonary nodules and mass lesions based on PET/CT

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination