CN113822873A - Bimodal imagery omics image analysis method for lung nodule classification - Google Patents

Bimodal imagery omics image analysis method for lung nodule classification

Info

Publication number
CN113822873A
CN113822873A (application CN202111171035.8A)
Authority
CN
China
Prior art keywords
image
pet
lung nodule
patient
omics
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111171035.8A
Other languages
Chinese (zh)
Inventor
史云梅 (Shi Yunmei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
First Peoples Hospital of Changzhou
Original Assignee
First Peoples Hospital of Changzhou
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by First Peoples Hospital of Changzhou filed Critical First Peoples Hospital of Changzhou
Priority to CN202111171035.8A
Publication of CN113822873A
Legal status: Withdrawn (current)

Classifications

    • G06T 7/0012: Physics; Computing; Image data processing or generation; Image analysis; Inspection of images, e.g. flaw detection; Biomedical image inspection
    • G06N 20/00: Physics; Computing arrangements based on specific computational models; Machine learning
    • G06T 7/11: Image analysis; Segmentation, edge detection; Region-based segmentation
    • G06T 2207/10081: Indexing scheme for image analysis or image enhancement; Image acquisition modality; Tomographic images; Computed X-ray tomography [CT]
    • G06T 2207/10104: Indexing scheme for image analysis or image enhancement; Image acquisition modality; Tomographic images; Positron emission tomography [PET]
    • G06T 2207/20081: Indexing scheme for image analysis or image enhancement; Special algorithmic details; Training; Learning
    • G06T 2207/30064: Indexing scheme for image analysis or image enhancement; Subject of image; Biomedical image processing; Lung; Lung nodule
    • G06T 2207/30096: Indexing scheme for image analysis or image enhancement; Subject of image; Biomedical image processing; Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Health & Medical Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a bimodal imaging omics (radiomics) image analysis method for lung nodule classification, and relates to the technical field of lung nodule image analysis. Patients with lung nodules who underwent PET/CT examination at the center, whose PET/CT images can be preprocessed, and for whom a definite pathological diagnosis of the lung nodule is available are collected. Using a PET/CT scanner, each patient fasts for more than 6 hours before the examination, and fingertip blood glucose is confirmed to be below 11.1 mmol/L. After intravenous injection of 18F-FDG dosed by body weight, the patient rests for about 1 hour and the body is then scanned from the skull to the upper thigh region; image acquisition starts from the foot end of the bed, each bed position lasts 2 minutes, and each patient is scanned over 9 to 10 bed positions. The images are segmented with 3D-Slicer software: the PET images with a semi-automatic method, and the CT images with NVIDIA AI-Assisted segmentation and a boundary-based CT segmentation model. The invention can provide information for treatment decisions and prognosis assessment in patients with lung nodules and promote personalized treatment in a non-invasive manner.

Description

Bimodal imagery omics image analysis method for lung nodule classification
Technical Field
The invention relates to the technical field of lung nodule image analysis, in particular to a bimodal imaging omics (radiomics) image analysis method for lung nodule classification.
Background
With the wide application of multislice spiral CT and low-dose CT in lung cancer screening, the detection rate of lung nodules keeps rising, and the diagnosis, differential diagnosis, treatment decisions and prognosis assessment of lung nodules have become a major challenge for clinicians. At present, pathological examination is still the gold standard for lesion diagnosis and prognosis assessment, but it places high demands on puncture technique and is difficult to perform, and tumor heterogeneity easily introduces sampling bias, so bronchoscopy and percutaneous lung puncture have certain limitations when applied to lung nodules. An ideal non-invasive examination method is therefore urgently needed to classify lung nodules and thereby provide a reference for clinical treatment decisions and prognosis assessment.
At present, radiomics is a promising and very popular diagnostic approach. By means of mathematical and statistical methods it can quantitatively describe the spatial relationships among voxels, extract feature data from the image of a region of interest at high throughput, and capture potential lesion information beyond the reach of visual assessment, thereby improving the accuracy of disease diagnosis; it also has the advantages of being objective, non-invasive, fast, low-cost and easy to operate. Previous studies have shown that CT texture features are of great significance for lung cancer identification, tumor growth prediction, gene expression and efficacy evaluation, but CT can only provide morphological information about lung nodules; the morphological characteristics of lung nodules overlap to a certain extent and are easily influenced by subjective factors, so the identification efficiency still needs to be improved. There is therefore a need for a non-invasive, objective and highly accurate image analysis method for classifying lung nodules.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a bimodal radiomics image analysis method for lung nodule classification.
In order to achieve the purpose, the invention adopts the following technical scheme:
a bimodal omics image analysis method for lung nodule classification, comprising the steps of:
Step one: collecting patients with lung nodules examined with PET/CT at the center, whose PET/CT images can be preprocessed and for whom a definite pathological diagnosis of the lung nodule is available;
Step two: before the PET/CT examination, each patient fasts for more than 6 h and fingertip blood glucose is confirmed to be below 11.1 mmol/L; after intravenous injection of 18F-FDG dosed by body weight, the patient rests for about 1 hour and the body is then scanned from the skull to the upper thigh region, with image acquisition starting from the foot end of the bed, each bed position lasting 2 minutes and each patient covered by 9 to 10 bed positions; TrueX + TOF (ultra HD-PET) is used for PET image reconstruction;
Step three: the images are segmented using 3D-Slicer (version 4.11.20200930, www.slicer.com); a semi-automatic method is used for the PET images, and NVIDIA AI-Assisted segmentation and a boundary-based CT segmentation model are used for the CT images;
Step four: all images are interpolated to an isotropic voxel spacing so that the extracted features are rotationally invariant and the image data of different samples can be compared; the CT images are resampled to 1 mm × 1 mm × 1 mm and the PET images to 3 mm × 3 mm × 3 mm, and gray levels are discretized with a fixed bin width, the bin widths of the CT and PET images being 25 and 0.313 respectively; this preprocessing, Laplacian-of-Gaussian (LoG) filtering and the wavelet transform generate different feature sets; different sigma values of the LoG filter extract fine, medium and coarse texture features, with sigma ranging from 0.5 to 5 in steps of 0.5, each level of the wavelet transform produces 8 decompositions, and all intensity, histogram and texture features are computed on the preprocessed images;
Step five: a large number of features are extracted from different feature classes using the open-source Python library PyRadiomics, the classes comprising shape and morphological features, first-order statistics, the gray level co-occurrence matrix (GLCM), the gray level dependence matrix (GLDM), the gray level run length matrix (GLRLM), the gray level size zone matrix (GLSZM) and the neighboring gray tone difference matrix (NGTDM);
Step six: feature selection and classification are carried out with an in-house Python framework built on the open-source Python library Scikit-Learn;
Step seven: after boundaries are set manually and the data are discretized, 10-fold cross-validation is performed on the training data set to evaluate model performance, the optimal hyperparameters are determined with grid search, and parameter settings are scored with the area under the ROC curve (AUC), a metric suited to imbalanced classification;
Step eight: a large number of PET/CT radiomics features are extracted from each segmented tumor region, the features in the training set are ranked with 12 feature selection methods, the top-ranked features of each selection method are used to fit 11 machine learning classifiers, a matrix of 132 cross-combined machine learning algorithms is built from the 12 feature selection methods and the 11 classification methods, and the AUC and accuracy of the matrix are then evaluated on the test data set;
Step nine: finally, the performance of the radiomics machine learning classifiers is compared in terms of AUC, accuracy, sensitivity and specificity; the 95% confidence interval of the AUC is computed with an exact binomial test, and ROC curves are compared pairwise with the DeLong method; accuracy is defined as the ratio of the number of samples correctly classified by the classifier to the total number of samples in the test data set, sensitivity as the proportion of correct positive results among all available positive samples in the test, and specificity as the proportion of correct negative results among all available negative samples in the test (these three metrics are written out as formulas below).
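Written in terms of the standard binary confusion matrix (true positives TP, true negatives TN, false positives FP, false negatives FN), which the patent text does not spell out but which matches the definitions above, the three metrics are:

accuracy = (TP + TN) / (TP + TN + FP + FN)
sensitivity = TP / (TP + FN)
specificity = TN / (TN + FP)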
The invention is further configured such that the CT acquisition parameters are a tube voltage of 140 kV, tube current automatically adjusted with the CareDose4D technique and a scanning slice thickness of 3 mm, and attenuation correction of the PET images is performed with the CT data.
The invention is further configured such that the CT segmentation model is validated by experienced nuclear medicine physicians without knowledge of the patient's pathological outcome.
The invention further provides that the radiomics features calculated by the open-source Python library PyRadiomics conform to the feature definitions of the Image Biomarker Standardisation Initiative (IBSI).
The invention further provides that the feature selection comprises filter, wrapper and embedded methods.
The invention further provides that the filter methods comprise variance filtering and correlation filtering, the wrapper methods comprise decision trees, random forests, logistic regression and linear support vector machines, and the embedded methods comprise the least absolute shrinkage and selection operator (LASSO), decision trees, random forests, logistic regression and linear support vector machines.
The invention is further configured such that the correlation filtering includes chi-square filtering, F-test, and mutual information methods.
The invention is further configured that the machine learning classifier comprises a decision tree, a random forest, naive Bayes, a gradient boosting decision tree, Logistic regression, linear discriminant analysis, a support vector machine, K nearest neighbors, Bagging, a multilayer perceptron, and AdaBoost.
The invention has the beneficial effects that:
the invention is based on a bimodal PET/CT image, contains heterogeneity information of the interior of a tumor with two dimensions of anatomy and function, and a constructed supervised machine learning matrix can be used for classification of lung nodules, including benign and malignant properties, infiltrability, growth mode and genotyping.
Drawings
Fig. 1 is a schematic flow diagram of the bimodal radiomics image analysis method for lung nodule classification according to the present invention;
Fig. 2 is a schematic diagram of the feature selection structure of the bimodal radiomics image analysis method for lung nodule classification according to the present invention.
Detailed Description
The technical solution of the present patent will be described in further detail with reference to the following embodiments.
Reference will now be made in detail to embodiments of the present patent, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present patent and are not to be construed as limiting the present patent.
In the description of this patent, it is to be understood that the terms "center," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in the orientations and positional relationships indicated in the drawings for the convenience of describing the patent and for the simplicity of description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and are not to be considered limiting of the patent.
In the description of this patent, it is noted that unless otherwise specifically stated or limited, the terms "mounted," "connected," and "disposed" are to be construed broadly and can include, for example, fixedly connected, disposed, detachably connected, disposed, or integrally connected and disposed. The specific meaning of the above terms in this patent may be understood by those of ordinary skill in the art as appropriate.
Referring to figs. 1-2, a bimodal radiomics image analysis method for lung nodule classification includes the following steps:
Step one: collecting patients with lung nodules examined with PET/CT at the center, whose PET/CT images can be preprocessed and for whom a definite pathological diagnosis of the lung nodule is available;
Step two: before the PET/CT examination, each patient fasts for more than 6 h and fingertip blood glucose is confirmed to be below 11.1 mmol/L; after intravenous injection of 18F-FDG dosed by body weight, the patient rests for about 1 hour and the body is then scanned from the skull to the upper thigh region, with image acquisition starting from the foot end of the bed, each bed position lasting 2 minutes and each patient covered by 9 to 10 bed positions;
Step three: the images are segmented using 3D-Slicer (version 4.11.20200930, www.slicer.com); a semi-automatic method is used for the PET images, and NVIDIA AI-Assisted segmentation and a boundary-based CT segmentation model are used for the CT images;
Step four: all images are interpolated to an isotropic voxel spacing so that the extracted features are rotationally invariant and the image data of different samples can be compared; the CT images are resampled to 1 mm × 1 mm × 1 mm and the PET images to 3 mm × 3 mm × 3 mm, and gray levels are discretized with a fixed bin width, the bin widths of the CT and PET images being 25 and 0.313 respectively; this preprocessing, Laplacian-of-Gaussian (LoG) filtering and the wavelet transform generate different feature sets; different sigma values of the LoG filter extract fine, medium and coarse texture features, with sigma ranging from 0.5 to 5 in steps of 0.5, each level of the wavelet transform produces 8 decompositions, and all intensity, histogram and texture features are computed on the preprocessed images;
Step five: a large number of features are extracted from these feature classes using the open-source Python library PyRadiomics, the classes comprising shape and morphological features, first-order statistics, the gray level co-occurrence matrix (GLCM), the gray level dependence matrix (GLDM), the gray level run length matrix (GLRLM), the gray level size zone matrix (GLSZM) and the neighboring gray tone difference matrix (NGTDM); an illustrative PyRadiomics configuration matching steps four and five is sketched after the step list;
Step six: feature selection and classification are carried out with an in-house Python framework built on the open-source Python library Scikit-Learn;
Step seven: after boundaries are set manually and the data are discretized, 10-fold cross-validation is performed on the training data set to evaluate model performance, the optimal hyperparameters are determined with grid search, and parameter settings are scored with the area under the ROC curve (AUC), a metric suited to imbalanced classification;
Step eight: a large number of PET/CT radiomics features are extracted from each segmented tumor region, the features in the training set are ranked with 12 feature selection methods, the top-ranked features of each selection method are used to fit 11 machine learning classifiers, a matrix of 132 cross-combined machine learning algorithms is built from the 12 feature selection methods and the 11 classification methods, and the AUC and accuracy of the matrix are then evaluated on the test data set (an illustrative outline of this cross-combination is sketched at the end of this embodiment);
Step nine: finally, the performance of the radiomics machine learning classifiers is compared in terms of AUC, accuracy, sensitivity and specificity; the 95% confidence interval of the AUC is computed with an exact binomial test, and ROC curves are compared pairwise with the DeLong method; accuracy is defined as the ratio of the number of samples correctly classified by the classifier to the total number of samples in the test data set, sensitivity as the proportion of correct positive results among all available positive samples in the test, and specificity as the proportion of correct negative results among all available negative samples in the test.
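The following is a minimal sketch, in Python, of how the preprocessing and feature extraction of steps four and five could be configured with the open-source PyRadiomics library. It is an illustration added for clarity rather than the code used in the patent; the file names are placeholders, NIfTI input is assumed, and the interpolator choice is an assumption. The isotropic resampling (1 mm for CT, 3 mm for PET), the fixed bin widths (25 and 0.313), the LoG sigma values from 0.5 to 5 in steps of 0.5, the wavelet decomposition and the seven feature classes follow the settings stated above.

```python
from radiomics import featureextractor

def build_extractor(modality: str) -> featureextractor.RadiomicsFeatureExtractor:
    # Isotropic resampling and fixed-bin-width discretisation, per modality (step four)
    if modality == "CT":
        settings = {"resampledPixelSpacing": [1.0, 1.0, 1.0], "binWidth": 25.0}
    else:  # "PET"
        settings = {"resampledPixelSpacing": [3.0, 3.0, 3.0], "binWidth": 0.313}
    settings["interpolator"] = "sitkBSpline"  # assumed interpolator, not specified in the text
    extractor = featureextractor.RadiomicsFeatureExtractor(**settings)

    # Original image plus LoG-filtered (fine/medium/coarse textures) and wavelet-decomposed images
    extractor.enableImageTypeByName("Original")
    extractor.enableImageTypeByName("LoG", customArgs={"sigma": [0.5 * k for k in range(1, 11)]})
    extractor.enableImageTypeByName("Wavelet")  # eight decompositions per level

    # The seven feature classes listed in step five
    extractor.disableAllFeatures()
    for cls in ("shape", "firstorder", "glcm", "gldm", "glrlm", "glszm", "ngtdm"):
        extractor.enableFeatureClassByName(cls)
    return extractor

# Placeholder file names for one patient's segmented CT and PET volumes
ct_features = build_extractor("CT").execute("patient01_CT.nii.gz", "patient01_CT_mask.nii.gz")
pet_features = build_extractor("PET").execute("patient01_PET.nii.gz", "patient01_PET_mask.nii.gz")
```

Running both extractors on the segmented CT and PET volumes of one patient yields the bimodal feature vector that is passed to the feature selection stage.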
In order to improve the accuracy of the CT and PET scans, the CT acquisition parameters are a tube voltage of 140 kV, tube current automatically adjusted with the CareDose4D technique and a scanning slice thickness of 3 mm, and attenuation correction of the PET images is performed with the CT data.
To improve the accuracy of image segmentation, the CT segmentation model is validated by experienced nuclear medicine physicians who are blinded to the patients' pathological outcomes.
Further, in this embodiment, the radiomics features calculated by the Python library PyRadiomics conform to the feature definitions of the Image Biomarker Standardisation Initiative (IBSI).
Feature selection is implemented with filter, wrapper and embedded methods: the filter methods comprise variance filtering and correlation filtering (chi-square test, F-test and mutual information), the wrapper methods comprise decision trees, random forests, logistic regression and linear support vector machines, and the embedded methods comprise the least absolute shrinkage and selection operator (LASSO), decision trees, random forests, logistic regression and linear support vector machines.
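A minimal Scikit-Learn sketch of these three families is given below. It is illustrative only: the patent does not state the hyperparameters, the number of retained features, or exactly how the twelve selection methods are counted, so those choices (for example k = 20) are assumptions.

```python
from sklearn.feature_selection import (RFE, SelectFromModel, SelectKBest, VarianceThreshold,
                                       chi2, f_classif, mutual_info_classif)
from sklearn.linear_model import Lasso, LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

k = 20  # number of top-ranked features kept per method (assumed, not given in the text)

selectors = {
    # Filter methods: variance filtering plus the three correlation filters
    # (chi2 needs non-negative inputs, so features are min-max scaled upstream)
    "variance": VarianceThreshold(threshold=0.0),
    "chi2": SelectKBest(chi2, k=k),
    "f_test": SelectKBest(f_classif, k=k),
    "mutual_info": SelectKBest(mutual_info_classif, k=k),
    # Wrapper methods: recursive feature elimination around four estimators
    "rfe_tree": RFE(DecisionTreeClassifier(random_state=0), n_features_to_select=k),
    "rfe_forest": RFE(RandomForestClassifier(n_estimators=100, random_state=0), n_features_to_select=k),
    "rfe_logreg": RFE(LogisticRegression(max_iter=5000), n_features_to_select=k),
    "rfe_linsvm": RFE(LinearSVC(dual=False, max_iter=5000), n_features_to_select=k),
    # Embedded methods: model-based selection, including LASSO
    "lasso": SelectFromModel(Lasso(alpha=0.01), max_features=k),
    "embedded_tree": SelectFromModel(DecisionTreeClassifier(random_state=0), max_features=k),
    "embedded_forest": SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0), max_features=k),
    "embedded_logreg": SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear"), max_features=k),
    "embedded_linsvm": SelectFromModel(LinearSVC(penalty="l1", dual=False, max_iter=5000), max_features=k),
}
```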
To handle imbalanced classification, the machine learning classifiers comprise a decision tree, a random forest, naive Bayes, a gradient boosting decision tree, logistic regression, linear discriminant analysis, a support vector machine, K-nearest neighbors, Bagging, a multilayer perceptron and AdaBoost.
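The cross-combination of steps six to nine could then be organised as sketched below. Again this is only an illustrative outline under assumptions: the `selectors` dictionary from the previous sketch is reused, `X_train`, `y_train`, `X_test` and `y_test` are assumed to hold the radiomics feature matrices and pathology labels, and the hyperparameter grids are reduced to trivial examples because the patent does not list the grids actually searched. The exact binomial confidence intervals and the pairwise DeLong comparisons of step nine would be computed afterwards with a separate statistics routine, which is not shown here.

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              GradientBoostingClassifier, RandomForestClassifier)
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# The eleven classifiers named above (default settings; the patent's settings are not given)
classifiers = {
    "decision_tree": DecisionTreeClassifier(),
    "random_forest": RandomForestClassifier(),
    "naive_bayes": GaussianNB(),
    "gbdt": GradientBoostingClassifier(),
    "logistic_regression": LogisticRegression(max_iter=5000),
    "lda": LinearDiscriminantAnalysis(),
    "svm": SVC(probability=True),
    "knn": KNeighborsClassifier(),
    "bagging": BaggingClassifier(),
    "mlp": MLPClassifier(max_iter=2000),
    "adaboost": AdaBoostClassifier(),
}

# Tiny example grids; the real grids searched by grid search would be larger
param_grids = {name: {} for name in classifiers}
param_grids["svm"] = {"clf__C": [0.1, 1, 10]}
param_grids["random_forest"] = {"clf__n_estimators": [100, 300]}

def evaluate_matrix(selectors, X_train, y_train, X_test, y_test):
    """Fit every selector-classifier pair with 10-fold CV grid search and score it on the test set."""
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    results = {}
    for sel_name, selector in selectors.items():
        for clf_name, clf in classifiers.items():
            pipe = Pipeline([("scale", MinMaxScaler()),   # keeps chi2 inputs non-negative
                             ("select", selector),
                             ("clf", clf)])
            search = GridSearchCV(pipe, param_grids[clf_name], scoring="roc_auc", cv=cv)
            search.fit(X_train, y_train)

            prob = search.predict_proba(X_test)[:, 1]
            pred = search.predict(X_test)
            tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
            results[(sel_name, clf_name)] = {
                "auc": roc_auc_score(y_test, prob),
                "accuracy": (tp + tn) / (tp + tn + fp + fn),
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
            }
    return results
```

Sorting the returned dictionary by AUC then reproduces the comparison described in step nine.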
The above description covers only the preferred embodiment of the present invention, and the scope of the present invention is not limited thereto; any equivalent substitution or modification of the technical solution and its inventive concept that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.

Claims (8)

1. A bimodal radiomics image analysis method for lung nodule classification, characterized by comprising the following steps:
Step one: collecting patients with lung nodules examined with PET/CT at the center, whose PET/CT images can be preprocessed and for whom a definite pathological diagnosis of the lung nodule is available;
Step two: before the examination, each patient fasts for more than 6 h and fingertip blood glucose is confirmed to be below 11.1 mmol/L; after intravenous injection of 18F-FDG dosed by body weight, the patient rests for about 1 hour and is then scanned with PET/CT from the head to the upper third of the thigh, with image acquisition starting from the foot end of the bed, each bed position lasting 2 minutes and each patient covered by 9 to 10 bed positions; TrueX + TOF (ultra HD-PET) is used for PET image reconstruction;
Step three: the images are segmented using 3D-Slicer (version 4.11.20200930, www.slicer.com); a semi-automatic method is used for the PET images, and NVIDIA AI-Assisted segmentation and a boundary-based CT segmentation model are used for the CT images;
Step four: all images are interpolated to an isotropic voxel spacing so that the extracted features are rotationally invariant and the image data of different samples can be compared; the CT images are resampled to 1 mm × 1 mm × 1 mm and the PET images to 3 mm × 3 mm × 3 mm, and gray levels are discretized with a fixed bin width, the bin widths of the CT and PET images being 25 and 0.313 respectively; this preprocessing, Laplacian-of-Gaussian (LoG) filtering and the wavelet transform generate different feature sets; different sigma values of the LoG filter extract fine, medium and coarse texture features, with sigma ranging from 0.5 to 5 in steps of 0.5, each level of the wavelet transform produces 8 decompositions, and all intensity, histogram and texture features are computed on the preprocessed images;
Step five: a large number of features are extracted from different feature classes using the open-source Python library PyRadiomics, the classes comprising shape and morphological features, first-order statistics, the gray level co-occurrence matrix (GLCM), the gray level dependence matrix (GLDM), the gray level run length matrix (GLRLM), the gray level size zone matrix (GLSZM) and the neighboring gray tone difference matrix (NGTDM);
Step six: feature selection and classification are carried out with an in-house Python framework built on the open-source Python library Scikit-Learn;
Step seven: after boundaries are set manually and the data are discretized, 10-fold cross-validation is performed on the training data set to evaluate model performance, the optimal hyperparameters are determined with grid search, and parameter settings are scored with the area under the ROC curve (AUC), a metric suited to imbalanced classification;
Step eight: a large number of PET/CT radiomics features are extracted from each segmented tumor region, the features in the training set are ranked with 12 feature selection methods, the top-ranked features of each selection method are used to fit 11 machine learning classifiers, a matrix of 132 cross-combined machine learning algorithms is built from the 12 feature selection methods and the 11 classification methods, and the AUC and accuracy of the matrix are then evaluated on the test data set;
Step nine: finally, the performance of the radiomics machine learning classifiers is compared in terms of AUC, accuracy, sensitivity and specificity; the 95% confidence interval of the AUC is computed with an exact binomial test, and ROC curves are compared pairwise with the DeLong method; accuracy is defined as the ratio of the number of samples correctly classified by the classifier to the total number of samples in the test data set, sensitivity as the proportion of correct positive results among all available positive samples in the test, and specificity as the proportion of correct negative results among all available negative samples in the test.
2. The bimodal radiomics image analysis method for lung nodule classification as set forth in claim 1, wherein the CT acquisition parameters are a tube voltage of 140 kV, tube current automatically adjusted with the CareDose4D technique and a scanning slice thickness of 3 mm, and attenuation correction of the PET images is performed with the CT data.
3. The bimodal radiomics image analysis method for lung nodule classification as set forth in claim 1, wherein the CT segmentation model is validated by an experienced nuclear medicine physician who is blinded to the patient's pathological outcome.
4. The bimodal radiomics image analysis method for lung nodule classification as set forth in claim 1, wherein the radiomics features computed by the open-source Python library PyRadiomics conform to the feature definitions of the Image Biomarker Standardisation Initiative (IBSI).
5. The bimodal radiomics image analysis method for lung nodule classification as set forth in claim 1, wherein the feature selection comprises filter, wrapper and embedded methods.
6. The bimodal radiomics image analysis method for lung nodule classification as set forth in claim 5, wherein the filter methods comprise variance filtering and correlation filtering, the wrapper methods comprise decision trees, random forests, logistic regression and linear support vector machines, and the embedded methods comprise the least absolute shrinkage and selection operator (LASSO), decision trees, random forests, logistic regression and linear support vector machines.
7. The bimodal radiomics image analysis method for lung nodule classification as set forth in claim 6, wherein the correlation filtering comprises the chi-square test, the F-test and mutual information.
8. The bimodal radiomics image analysis method for lung nodule classification as set forth in claim 1, wherein the machine learning classifiers comprise a decision tree, a random forest, naive Bayes, a gradient boosting decision tree, logistic regression, linear discriminant analysis, a support vector machine, K-nearest neighbors, Bagging, a multilayer perceptron and AdaBoost.
CN202111171035.8A 2021-10-08 2021-10-08 Bimodal imagery omics image analysis method for lung nodule classification Withdrawn CN113822873A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111171035.8A CN113822873A (en) 2021-10-08 2021-10-08 Bimodal imagery omics image analysis method for lung nodule classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111171035.8A CN113822873A (en) 2021-10-08 2021-10-08 Bimodal imagery omics image analysis method for lung nodule classification

Publications (1)

Publication Number Publication Date
CN113822873A (en) 2021-12-21

Family

ID=78916198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111171035.8A Withdrawn CN113822873A (en) 2021-10-08 2021-10-08 Bimodal imagery omics image analysis method for lung nodule classification

Country Status (1)

Country Link
CN (1) CN113822873A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116563284A (en) * 2023-07-10 2023-08-08 南京信息工程大学 System for quantitatively describing brain glioma characteristic boundary change evaluation index in MRI
CN116563284B (en) * 2023-07-10 2023-11-07 南京信息工程大学 System for quantitatively describing brain glioma characteristic boundary change evaluation index in MRI

Similar Documents

Publication Publication Date Title
US20210401392A1 (en) Deep convolutional neural networks for tumor segmentation with positron emission tomography
Lee et al. Image based computer aided diagnosis system for cancer detection
US8774479B2 (en) System and method for automated segmentation, characterization, and classification of possibly malignant lesions and stratification of malignant tumors
US9235887B2 (en) Classification of biological tissue by multi-mode data registration, segmentation and characterization
CA2438479C (en) Computer assisted analysis of tomographic mammography data
JP5159242B2 (en) Diagnosis support device, diagnosis support device control method, and program thereof
Tomassini et al. Lung nodule diagnosis and cancer histology classification from computed tomography data by convolutional neural networks: A survey
Fernandes et al. A novel fusion approach for early lung cancer detection using computer aided diagnosis techniques
Azhari et al. Tumor detection in medical imaging: a survey
Chen et al. Improved window adaptive gray level co-occurrence matrix for extraction and analysis of texture characteristics of pulmonary nodules
EP2208183B1 (en) Computer-aided detection (cad) of a disease
JP5456132B2 (en) Diagnosis support device, diagnosis support device control method, and program thereof
WO2014107402A1 (en) Classification of biological tissue by multi-mode data registration, segmentation and characterization
Katiyar et al. A Comparative study of Lung Cancer Detection and Classification approaches in CT images
Cifci SegChaNet: a novel model for lung cancer segmentation in CT scans
Vagenas et al. A decision support system for the identification of metastases of Metastatic Melanoma using whole-body FDG PET/CT images
CN113822873A (en) Bimodal imagery omics image analysis method for lung nodule classification
CN110738649A (en) training method of Faster RCNN network for automatic identification of stomach cancer enhanced CT images
Kanakatte et al. Pulmonary tumor volume detection from positron emission tomography images
US20220375077A1 (en) Method for generating models to automatically classify medical or veterinary images derived from original images into at least one class of interest
Rahman et al. Automated detection of lung cancer using MRI images
Jalalian et al. Machine Learning Techniques for Challenging Tumor Detection and Classification in Breast Cancer
Gillani et al. Classification of pulmonary nodule using new transfer method approach
VENKATESWARLU et al. A CONTEMPORARY TECHNIQUE FOR LUNG DISEASE PREDICTION USING AND DL AND ML
Jahan et al. Automated Breast Tumor Detection Using MRI Images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20211221