CN114332577A - Colorectal cancer image classification method and system combining deep learning and radiomics - Google Patents

Colorectal cancer image classification method and system combining deep learning and radiomics

Info

Publication number
CN114332577A
Authority
CN
China
Prior art keywords
deep learning
image
features
data
omics
Prior art date
Legal status
Pending
Application number
CN202111648121.3A
Other languages
Chinese (zh)
Inventor
黄立勤
何甜
潘林
郑绍华
Current Assignee
Fuzhou University
Original Assignee
Fuzhou University
Priority date
Filing date
Publication date
Application filed by Fuzhou University
Priority to CN202111648121.3A
Publication of CN114332577A

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a colorectal cancer image classification method and system combining deep learning and radiomics. To address the small-sample problem in training deep learning models, the available data are fully exploited through data augmentation (rotation, translation, image transformation, and the like). To avoid the time- and labor-consuming manual annotation by physicians, a deep learning automatic segmentation network is introduced so that regions of interest are labeled automatically in the images. To address the poor interpretability of deep learning features and the incompleteness of the feature information they capture, radiomics features, deep learning features, and clinicopathological information are fused to obtain more comprehensive feature information, further improving the accuracy and reliability of radiomics-based classification.

Description

Colorectal cancer image classification method and system combining deep learning and radiomics
Technical Field
The invention belongs to the technical field of medical image processing and image classification, and particularly relates to a colorectal cancer image classification method and system combining deep learning and radiomics.
Background
1. Radiomics approach: radiomics uses computer software to extract large numbers of high-dimensional quantitative image features from CT, MRI, and PET images in a high-throughput manner. The workflow comprises data acquisition, image segmentation, feature extraction, feature selection, and model building; by mining, predicting, and analyzing massive amounts of image data in depth, it assists in producing the most accurate image classification. In practical applications, the radiomics lesion region must be annotated manually by a physician, which is time-consuming and subject to observer bias. In addition, when target features are computed quantitatively, there is no standardized workflow or quality control system, which limits the performance of the method.
2. Deep learning approach: deep learning learns effective features from large amounts of input data by combining low-level features into more abstract high-level features or categories, and uses these features for classification, regression, and information retrieval. Such models can automatically learn, extract, and select image features and make predictions, so the information in an image can be mined more comprehensively and deeply. Many model types exist, with the convolutional neural network (CNN) the most widely used in medical imaging. However, training and inference with convolutional networks are computationally expensive. Once a model is trained, deep learning allows fully automated image analysis, but this advantage comes at a higher data acquisition cost: deep learning requires far more data to be collected and labeled, often ten to a hundred times the amount needed for radiomics.
As described above, existing colorectal cancer medical image classification has the following disadvantages:
1. Radiomics studies require physicians to annotate data manually, which is time-consuming and subject to observer bias; imaging equipment from different manufacturers lacks unified standards for scanning parameters and reconstruction algorithms; and the extracted features are phenotypic and not deep enough.
2. Deep learning models require large amounts of training data, occupy substantial computing resources, place high demands on hardware, and yield features with poor interpretability.
3. The extracted features are not comprehensive enough: only radiomics features or only deep learning features are used, and the two are not combined.
Disclosure of Invention
To remedy the gaps and shortcomings of the prior art, the invention provides a colorectal cancer image classification method and system combining deep learning and radiomics. The main design comprises:
1. To address the small-sample problem in training deep learning models, the existing data are fully exploited through data augmentation (rotation, translation, image transformation, and the like).
2. To spare physicians the time- and labor-consuming manual annotation, a deep learning automatic segmentation network is introduced to label regions of interest in the images automatically.
3. To address the poor interpretability of deep learning features and the incompleteness of the acquired feature information, radiomics features, deep learning features, and clinicopathological information are fused to obtain more comprehensive feature information, further improving the accuracy and reliability of radiomics-based classification.
The invention specifically adopts the following technical scheme:
a colorectal cancer image classification method combining deep learning and radiomics, characterized by comprising the following steps:
step S1: data preprocessing: augmenting the colorectal cancer image data using data augmentation; training the segmentation network with manually annotated data, and using the trained model to segment regions of interest from the images automatically, thereby obtaining more labeled data;
step S2: feature extraction: extracting radiomics features from abdominal CT with the open-source Python package PyRadiomics; training ResNet models, selecting the best-performing model, and using it to extract deep learning features;
step S3: feature selection: removing correlated features by computing a Pearson correlation matrix and discarding highly correlated features (correlation coefficient > 0.90); ranking the remaining features by predictive power using recursive feature elimination;
step S4: feature fusion: converting the clinicopathological information provided by physicians using natural language processing; fusing the radiomics features, deep learning features, and clinicopathological information to obtain more comprehensive feature information;
step S5: ensemble learning: using classifiers comprising a support vector machine (SVM), a Bayes classifier, a logistic regression classifier, and a Lasso regression, voting on the classification results of all classifiers at prediction time, and selecting the best model to perform the final image classification.
Further, in step S1: data augmentation is first applied to the existing colorectal cancer image data, using image rotation, image translation, and image transformation, to obtain more abdominal images; the augmented data are fed into the deep learning network U-Net for training; after training, the model automatically segments regions of interest, and the lesion regions segmented by the model are fine-tuned manually to obtain more labeled data.
Further, in step S3: feature selection first traverses all features and computes pairwise Pearson correlation coefficients; whenever a coefficient exceeds 0.90, one of the two features is removed at random, so that the reduced feature set contains no highly similar features; the remaining features are then ranked by predictive power using recursive feature elimination.
Further, in step S4: clinicopathological feature selection is implemented by stepwise discriminant regression: all features are introduced in turn and tested one by one; when a previously introduced feature variable is no longer significant after a later variable is introduced, it is removed; this is repeated until no significant variable can be added to the equation and no insignificant variable can be removed from the regression equation.
Also provided is a colorectal cancer image classification system combining deep learning and radiomics, characterized in that it is a computer-based system comprising:
a data preprocessing module, which augments the colorectal cancer image data using data augmentation, trains the segmentation network with manually annotated data, and uses the trained model to segment regions of interest from the images automatically, thereby obtaining more labeled data;
a feature extraction module, which extracts radiomics features from abdominal CT with the open-source Python package PyRadiomics, trains ResNet models, selects the best-performing model, and uses it to extract deep learning features;
a feature selection module, which removes correlated features by computing a Pearson correlation matrix and discarding highly correlated features (correlation coefficient > 0.90), and ranks the remaining features by predictive power using recursive feature elimination;
a feature fusion module, which converts the clinicopathological information provided by physicians using natural language processing and fuses the radiomics features, deep learning features, and clinicopathological information to obtain more comprehensive feature information;
and an ensemble learning module, which uses classifiers comprising a support vector machine (SVM), a Bayes classifier, a logistic regression classifier, and a Lasso regression, votes on the classification results of the classifiers at prediction time, and selects the best model to perform the final image classification.
Further, in the data preprocessing module, data augmentation uses image rotation, image translation, and image transformation, and the augmented data are fed into the deep learning network U-Net for training.
Further, in the feature selection module, all features are first traversed and pairwise Pearson correlation coefficients are computed; whenever a coefficient exceeds 0.90, one of the two features is removed at random, so that the reduced feature set contains no highly similar features; the remaining features are then ranked by predictive power using recursive feature elimination.
Furthermore, in the feature fusion module, clinicopathological feature selection is performed by stepwise discriminant regression: all features are introduced in turn and tested one by one; when a previously introduced feature variable is no longer significant after a later variable is introduced, it is removed; this is repeated until no significant variable can be added to the equation and no insignificant variable can be removed from the regression equation.
Compared with the prior art, the main design points of the invention and its preferred schemes include:
1. For the task of classifying colorectal cancer medical images, a network is designed that fuses radiomics features, deep learning features, and clinical features, so that multiple kinds of features can be integrated: interpretable radiomics features are combined with more deeply abstract deep learning features and with the information provided by physicians and patients.
2. Radiomics and deep learning are combined fully. Deep learning is applied in the data preprocessing stage, including data augmentation and automatic segmentation of regions of interest, effectively alleviating limitations such as small data volume and time-consuming physician annotation. The deep learning model is applied to extract high-dimensional effective features, natural language processing is used to analyze clinicopathological information, and radiomics extracts rich phenotypic features.
3. Ensemble learning with multiple classifiers makes the final result more comprehensive.
Its advantages over the prior art include:
1. Existing methods use radiomics features or deep learning features alone, so the feature information is not exploited comprehensively; radiomics requires physicians to annotate feature regions manually, which is time- and labor-consuming, and the extracted features are not comprehensive enough; deep learning requires large amounts of data, and the extracted features are deep abstractions with weak interpretability.
The invention and its preferred scheme introduce interpretable radiomics features, deep abstract features, and clinicopathological features, overcoming not only the shortcomings of incomplete feature extraction and weak interpretability but also the limitations of large data requirements and time- and labor-consuming manual annotation.
2. The clinical information added in existing work mostly consists of quantitative characteristics such as sex and age, which is limiting.
The invention and its preferred scheme introduce a natural language processing (NLP) method to process clinicopathological information, making the imported information more comprehensive. NLP can process large batches of text data, enabling the machine to understand and use richer textual information.
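As an illustration only (the patent does not name a specific NLP model), the sketch below shows one minimal way to turn free-text clinicopathological reports into numeric vectors that can be fused with the image features; the example report snippets and the TF-IDF settings are assumptions, not part of the disclosed method.

```python
# Illustrative only: TF-IDF is assumed here as a simple NLP front end; the
# patent only states that NLP converts the physicians' free-text reports.
from sklearn.feature_extraction.text import TfidfVectorizer

reports = [  # hypothetical clinicopathological report snippets
    "moderately differentiated adenocarcinoma, serosal invasion, two positive lymph nodes",
    "well differentiated adenocarcinoma, confined to muscularis propria, no nodal involvement",
]

vectorizer = TfidfVectorizer(max_features=64, ngram_range=(1, 2))
clinical_text_features = vectorizer.fit_transform(reports).toarray()
print(clinical_text_features.shape)  # (number of reports, number of text features)
```

Any stronger text-representation model could stand in for the TF-IDF step; the point is only that the report text becomes a fixed-length vector that can be concatenated with the radiomics and deep learning features.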
Drawings
FIG. 1 is a schematic diagram of a classification network framework according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a U-Net network structure adopted in the embodiment of the present invention;
FIG. 3 is a schematic diagram of the ResNet network structure according to an embodiment of the present invention.
Detailed Description
In order to make the features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail as follows:
as shown in FIG. 1, the colorectal cancer image classification method and system combining deep learning and radiomics provided in this embodiment comprise the following scheme:
(1) Data preprocessing: the existing data are fully augmented using data augmentation (rotation, translation, image transformation, and the like); the segmentation network is trained with the existing manually annotated data, and the trained model automatically segments regions of interest from the images to obtain more labeled data.
(2) Feature extraction: radiomics features are extracted from abdominal CT with the open-source Python package PyRadiomics; ResNet models are trained, the best-performing model is selected, and it is used to extract deep learning features.
(3) Feature fusion: natural language processing (NLP) is used to convert the clinicopathological information provided by physicians; the radiomics features, deep learning features, and clinicopathological information are fused to obtain more comprehensive feature information.
(4) Feature selection: correlated features are removed by computing a Pearson correlation matrix and discarding highly correlated features (correlation coefficient > 0.90); the remaining features are ranked by predictive power using recursive feature elimination.
(5) Classifier: ensemble learning is adopted, with classifiers including a support vector machine (SVM), a Bayes classifier, a logistic regression classifier, and a Lasso regression; the classification results of the classifiers are voted on at prediction time to select the best model (a minimal sketch of this ensemble step is given after this list).
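A minimal sketch of this ensemble step with scikit-learn, assuming the fused feature matrix X and the labels y are already prepared; because scikit-learn's Lasso is a regressor, an L1-penalised logistic regression stands in here for the "Lasso regression" member, which is an assumption rather than the patent's exact choice.

```python
# Sketch of step (5): voting over several classifiers on the fused features.
# X and y are random placeholders standing in for the fused feature matrix
# and the image labels.
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((60, 30))
y = rng.integers(0, 2, 60)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf", probability=True)),
        ("bayes", GaussianNB()),
        ("logreg", LogisticRegression(max_iter=1000)),
        # assumption: L1-penalised logistic regression as the "Lasso"-style member
        ("lasso_like", LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    ],
    voting="soft",  # average predicted probabilities, then take the arg-max class
)
print(cross_val_score(ensemble, X, y, cv=5).mean())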
2. Detailed design
(1) Classification network based on combination of deep learning and image omics
As shown in FIG. 1, the classification network framework designed in this embodiment first applies data augmentation to the original data, mainly image rotation, image translation, and image transformation, to obtain more abdominal images. The augmented data are fed into the deep learning network U-Net for training; once trained, the model automatically segments regions of interest, and a specialist physician manually fine-tunes the lesion regions segmented by the model to obtain more labeled data. This greatly reduces the physicians' manual annotation time.
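A minimal sketch of the augmentation step on a single 2-D CT slice is given below; the rotation range, translation range, and optional flip are illustrative assumptions rather than parameters taken from the patent.

```python
# Placeholder 2-D CT slice; in practice this would be a slice of the abdominal scan.
import numpy as np
from scipy import ndimage

def augment_slice(ct_slice: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    angle = rng.uniform(-15, 15)             # assumed rotation range (degrees)
    shift = rng.uniform(-10, 10, size=2)     # assumed translation range (pixels)
    out = ndimage.rotate(ct_slice, angle, reshape=False, order=1, mode="nearest")
    out = ndimage.shift(out, shift, order=1, mode="nearest")
    if rng.random() < 0.5:                   # simple additional image transformation
        out = np.fliplr(out)
    return out

rng = np.random.default_rng(0)
ct_slice = np.zeros((512, 512), dtype=np.float32)
augmented = [augment_slice(ct_slice, rng) for _ in range(4)]  # 4 extra training samples
```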
Radiomics features are extracted from abdominal CT with the open-source Python package PyRadiomics, feature selection is then performed, and ensemble models are built by combining the deep features extracted by the deep learning network with the clinical features processed by NLP. Ensemble learning is adopted, with classifiers including a support vector machine (SVM), a Bayes classifier, a logistic regression classifier, and a Lasso regression; the classification results of the classifiers are voted on at prediction time to select the best model.
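The radiomics part of this step can be sketched with the documented PyRadiomics API as follows; the file paths and the bin-width setting are placeholders.

```python
# PyRadiomics extraction sketch; "abdominal_ct.nii.gz" / "roi_mask.nii.gz" are
# placeholder paths for a CT volume and its segmented region of interest.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor(binWidth=25)  # binWidth is illustrative
extractor.enableAllFeatures()  # first-order, shape, GLCM, GLRLM, GLSZM, ...

features = extractor.execute("abdominal_ct.nii.gz", "roi_mask.nii.gz")

# keep only the numeric feature values, dropping PyRadiomics' diagnostic entries
radiomics_features = {k: v for k, v in features.items() if not k.startswith("diagnostics_")}
print(len(radiomics_features), "radiomics features extracted")
```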
The number of extracted radiomics features may range from hundreds to tens of thousands, and not every feature is relevant to the clinical problem to be solved; moreover, because the number of features is large and the number of samples is small, the subsequent model is prone to overfitting, which degrades its accuracy. Feature selection therefore first traverses all features and computes pairwise Pearson correlation coefficients; whenever a coefficient exceeds 0.90, one of the two features is removed at random, which ensures that the reduced feature set contains no highly similar features. The remaining features are then ranked by predictive power using recursive feature elimination.
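A sketch of this two-stage selection (Pearson filter at 0.90, then recursive feature elimination) using pandas and scikit-learn on placeholder data; the choice of logistic regression as the estimator inside RFE is an assumption.

```python
# Placeholder feature table; in practice the columns would be radiomics and
# deep learning features and y the class labels.
import numpy as np
import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((60, 20)), columns=[f"feat_{i}" for i in range(20)])
y = rng.integers(0, 2, 60)

# 1) Pearson filter: drop one feature from every pair with |r| > 0.90
corr = X.corr(method="pearson").abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.90).any()]
X_reduced = X.drop(columns=to_drop)

# 2) Recursive feature elimination ranks the survivors by predictive power
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)
rfe.fit(X_reduced, y)
ranking = sorted(zip(rfe.ranking_, X_reduced.columns))  # rank 1 = most predictive
print(ranking[:5])
```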
Clinicopathological feature selection uses stepwise discriminant regression: all features are introduced in turn and tested one by one. When a previously introduced feature variable is no longer significant after a later variable is introduced, it is removed. This is repeated until no significant variable can be added to the equation and no insignificant variable can be removed from the regression equation.
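A minimal sketch of such a p-value-based stepwise procedure, built here on a statsmodels logistic model with illustrative entry/removal thresholds (0.05 / 0.10) and placeholder clinical columns; neither the thresholds nor the underlying model are fixed by the patent.

```python
# Placeholder clinical table; column names and thresholds are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def stepwise_select(X: pd.DataFrame, y, p_enter=0.05, p_remove=0.10):
    selected = []
    while True:
        changed = False
        # forward step: add the most significant remaining variable, if any
        remaining = [c for c in X.columns if c not in selected]
        pvals = pd.Series(dtype=float)
        for c in remaining:
            model = sm.Logit(y, sm.add_constant(X[selected + [c]])).fit(disp=0)
            pvals[c] = model.pvalues[c]
        if len(pvals) and pvals.min() < p_enter:
            selected.append(pvals.idxmin())
            changed = True
        # backward step: drop a previously added variable that is no longer significant
        if selected:
            model = sm.Logit(y, sm.add_constant(X[selected])).fit(disp=0)
            pv = model.pvalues.drop("const")
            if pv.max() > p_remove:
                selected.remove(pv.idxmax())
                changed = True
        if not changed:
            return selected

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((80, 5)), columns=["age", "cea", "grade", "t_stage", "n_stage"])
y = pd.Series(rng.integers(0, 2, 80))
print(stepwise_select(X, y))
```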
(2) Automatic segmentation network U-Net
The U-Net network consists of an encoding path that captures context information and a decoding path; as shown in FIG. 2, the encoder feature map at each stage is concatenated with the corresponding upsampled decoder feature map through skip connections, forming a U-shaped structure. Through these per-stage skip connections, the decoder recovers the detail that is lost by pooling in the encoder.
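A compact PyTorch sketch of this encoder-decoder idea with skip connections is given below; the depth and channel widths are illustrative and do not reproduce the exact network of FIG. 2.

```python
# Compact U-Net-style encoder/decoder with skip connections; a sketch of the
# idea in FIG. 2, not the exact configuration used in the embodiment.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)   # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)    # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)               # per-pixel region-of-interest logits

mask_logits = MiniUNet()(torch.randn(1, 1, 256, 256))  # -> (1, 1, 256, 256)
```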
(3) Deep learning network extraction features
The high-dimensional deep learning features are trained and extracted with a ResNet-based framework. Unlike a traditional convolutional neural network, each residual block in a ResNet adds a shortcut connection from its input to its output, which makes the network more accurate and efficient to train. Deep learning features are extracted just before the network's output layer: the output layer is removed, and the high-dimensional features produced by the last hidden layer are taken as the output deep learning features, as shown in FIG. 3.
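A sketch of this feature-extraction step with a torchvision ResNet: the final fully connected (output) layer is replaced by an identity so that the forward pass returns the features of the last hidden layer. The use of ResNet-18, the input size, and the untrained weights are assumptions for illustration.

```python
# Deep-feature extraction sketch (torchvision >= 0.13 API): drop the output
# layer and keep the penultimate, high-dimensional features.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)  # in practice, weights from training on the CT data
backbone.fc = nn.Identity()               # remove the output layer
backbone.eval()

with torch.no_grad():
    # ROI patches replicated to 3 channels to match the ResNet stem
    roi_batch = torch.randn(4, 3, 224, 224)
    deep_features = backbone(roi_batch)   # shape: (4, 512) for ResNet-18
print(deep_features.shape)
```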
The scheme provided by this embodiment can be stored in encoded form on a computer-readable storage medium and implemented as a computer program; the basic parameter information required for the computation is supplied through computer hardware, and the computation result is output.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations of methods, apparatus (devices), and computer program products according to embodiments of the invention. It will be understood that each flow of the flowcharts, and combinations of flows in the flowcharts, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows.
The foregoing is directed to preferred embodiments of the present invention; other and further embodiments may be devised without departing from its basic scope, which is determined by the claims that follow. Any simple modification, equivalent change, or adaptation of the above embodiments according to the technical essence of the present invention remains within the protection scope of the technical solution of the present invention.
The present invention is not limited to the above preferred embodiments; other colorectal cancer image classification methods and systems combining deep learning and radiomics may be derived from its teaching, and all equivalent changes and modifications made according to the claims of the present invention shall fall within the scope of the present invention.

Claims (8)

1. A colorectal cancer image classification method combining deep learning and radiomics, characterized by comprising the following steps:
step S1: data preprocessing: augmenting the colorectal cancer image data using data augmentation; training the segmentation network with manually annotated data, and using the trained model to segment regions of interest from the images automatically, thereby obtaining more labeled data;
step S2: feature extraction: extracting radiomics features from abdominal CT with the open-source Python package PyRadiomics; training ResNet models, selecting the best-performing model, and using it to extract deep learning features;
step S3: feature selection: removing correlated features by computing a Pearson correlation matrix and discarding highly correlated features (correlation coefficient > 0.90); ranking the remaining features by predictive power using recursive feature elimination;
step S4: feature fusion: converting the clinicopathological information provided by physicians using natural language processing; fusing the radiomics features, deep learning features, and clinicopathological information to obtain more comprehensive feature information;
step S5: ensemble learning: using classifiers comprising a support vector machine (SVM), a Bayes classifier, a logistic regression classifier, and a Lasso regression, voting on the classification results of all classifiers at prediction time, and selecting the best model to perform the final image classification.
2. The colorectal cancer image classification method combining deep learning and radiomics according to claim 1, wherein in step S1: data augmentation is first applied to the existing colorectal cancer image data, using image rotation, image translation, and image transformation, to obtain more abdominal images; the augmented data are fed into the deep learning network U-Net for training; after training, the model automatically segments regions of interest, and the lesion regions segmented by the model are fine-tuned manually to obtain more labeled data.
3. The colorectal cancer image classification method combining deep learning and radiomics according to claim 1, wherein in step S3: feature selection first traverses all features and computes pairwise Pearson correlation coefficients; whenever a coefficient exceeds 0.90, one of the two features is removed at random, so that the reduced feature set contains no highly similar features; the remaining features are then ranked by predictive power using recursive feature elimination.
4. The colorectal cancer image classification method combining deep learning and radiomics according to claim 1, wherein in step S4: clinicopathological feature selection is implemented by stepwise discriminant regression: all features are introduced in turn and tested one by one; when a previously introduced feature variable is no longer significant after a later variable is introduced, it is removed; this is repeated until no significant variable can be added to the equation and no insignificant variable can be removed from the regression equation.
5. A colorectal cancer image classification system combining deep learning and radiomics, characterized in that it is a computer-based system comprising:
a data preprocessing module, which augments the colorectal cancer image data using data augmentation, trains the segmentation network with manually annotated data, and uses the trained model to segment regions of interest from the images automatically, thereby obtaining more labeled data;
a feature extraction module, which extracts radiomics features from abdominal CT with the open-source Python package PyRadiomics, trains ResNet models, selects the best-performing model, and uses it to extract deep learning features;
a feature selection module, which removes correlated features by computing a Pearson correlation matrix and discarding highly correlated features (correlation coefficient > 0.90), and ranks the remaining features by predictive power using recursive feature elimination;
a feature fusion module, which converts the clinicopathological information provided by physicians using natural language processing and fuses the radiomics features, deep learning features, and clinicopathological information to obtain more comprehensive feature information;
and an ensemble learning module, which uses classifiers comprising a support vector machine (SVM), a Bayes classifier, a logistic regression classifier, and a Lasso regression, votes on the classification results of the classifiers at prediction time, and selects the best model to perform the final image classification.
6. The colorectal cancer image classification system combining deep learning and radiomics according to claim 5, wherein in the data preprocessing module, data augmentation uses image rotation, image translation, and image transformation, and the augmented data are fed into the deep learning network U-Net for training.
7. The colorectal cancer image classification system combining deep learning and radiomics according to claim 5, wherein in the feature selection module, all features are first traversed and pairwise Pearson correlation coefficients are computed; whenever a coefficient exceeds 0.90, one of the two features is removed at random, so that the reduced feature set contains no highly similar features; the remaining features are then ranked by predictive power using recursive feature elimination.
8. The colorectal cancer image classification system combining deep learning and radiomics according to claim 5, wherein in the feature fusion module, clinicopathological feature selection is performed by stepwise discriminant regression: all features are introduced in turn and tested one by one; when a previously introduced feature variable is no longer significant after a later variable is introduced, it is removed; this is repeated until no significant variable can be added to the equation and no insignificant variable can be removed from the regression equation.
CN202111648121.3A 2021-12-31 2021-12-31 Colorectal cancer image classification method and system combining deep learning and radiomics Pending CN114332577A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111648121.3A CN114332577A (en) 2021-12-31 2021-12-31 Colorectal cancer image classification method and system combining deep learning and radiomics

Publications (1)

Publication Number Publication Date
CN114332577A (en) 2022-04-12

Family

ID=81016270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111648121.3A Pending CN114332577A (en) Colorectal cancer image classification method and system combining deep learning and radiomics

Country Status (1)

Country Link
CN (1) CN114332577A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021022752A1 (en) * 2019-08-07 2021-02-11 深圳先进技术研究院 Multimodal three-dimensional medical image fusion method and system, and electronic device
US20210097682A1 (en) * 2019-09-30 2021-04-01 Case Western Reserve University Disease characterization and response estimation through spatially-invoked radiomics and deep learning fusion
CN111915596A (en) * 2020-08-07 2020-11-10 杭州深睿博联科技有限公司 Method and device for predicting benign and malignant pulmonary nodules
CN113570627A (en) * 2021-07-02 2021-10-29 上海健康医学院 Training method of deep learning segmentation network and medical image segmentation method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
何甜: ""Radiomics approach with deep learning for predicting T4 obstructive colorectal cancer using CT image"", 《ABDOMINAL RADIOLOGY》, 20 March 2023 (2023-03-20) *
郭恩特: ""图像和惯性传感器相结合的摄像机定位和物体三维位置估计"", 《福州大学学报(自然科学版)》, 28 February 2018 (2018-02-28) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115132327A (en) * 2022-05-25 2022-09-30 中国医学科学院肿瘤医院 Microsatellite instability prediction system, construction method thereof, terminal equipment and medium
WO2023226217A1 (en) * 2022-05-25 2023-11-30 中国医学科学院肿瘤医院 Microsatellite instability prediction system and construction method therefor, terminal device, and medium
US12027255B2 (en) 2022-05-25 2024-07-02 Cancer Hospital, Chinese Academy Of Medical Sciences System for predicting microsatellite instability and construction method thereof, terminal device and medium
CN115311302A (en) * 2022-10-12 2022-11-08 四川大学华西医院 Femoral head ischemic necrosis staging characteristic construction method, diagnosis system and storage medium
CN115311302B (en) * 2022-10-12 2022-12-23 四川大学华西医院 Femoral head avascular necrosis staged diagnostic system and storage medium
CN115984193A (en) * 2022-12-15 2023-04-18 东北林业大学 PDL1 expression level detection method fusing histopathology image and CT image
CN116452898A (en) * 2023-06-16 2023-07-18 中国人民大学 Lung adenocarcinoma subtype identification method and device based on image histology and deep learning
CN116452898B (en) * 2023-06-16 2023-10-17 中国人民大学 Lung adenocarcinoma subtype identification method and device based on image histology and deep learning
CN117496277A (en) * 2024-01-02 2024-02-02 达州市中心医院(达州市人民医院) Rectal cancer image data modeling processing method and system based on artificial intelligence
CN117496277B (en) * 2024-01-02 2024-03-12 达州市中心医院(达州市人民医院) Rectal cancer image data modeling processing method and system based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN114332577A (en) Colorectal cancer image classification method and system combining deep learning and radiomics
CN114998220B (en) Tongue image detection and positioning method based on improved Tiny-YOLO v4 natural environment
CN115601602A (en) Cancer tissue pathology image classification method, system, medium, equipment and terminal
Atanbori et al. Convolutional neural net-based cassava storage root counting using real and synthetic images
Khan et al. GLNET: global–local CNN's-based informed model for detection of breast cancer categories from histopathological slides
CN118154969A (en) Bile duct cancer endoscopic image classification method and system based on deep learning and feature fusion
CN114121226B (en) Unet model-based biomarker prediction system, method and equipment
CN113838018B (en) Cnn-former-based liver fibrosis lesion detection model training method and system
CN113177554B (en) Thyroid nodule identification and segmentation method, system, storage medium and equipment
CN114580501A (en) Bone marrow cell classification method, system, computer device and storage medium
Taheri et al. A Comprehensive Study on Classification of Breast Cancer Histopathological Images: Binary Versus Multi-Category and Magnification-Specific Versus Magnification-Independent
CN112070059A (en) Artificial intelligent classification and identification method for blood cell and marrow cell images
CN117174238A (en) Automatic pathology report generation method based on artificial intelligence
Ekman et al. Task based semantic segmentation of soft X-ray CT images using 3D convolutional neural networks
Tsaniya et al. Automatic radiology report generator using transformer with contrast-based image enhancement
Castillo et al. Object detection in digital documents based on machine learning algorithms
CN113409293A (en) Pathology image automatic segmentation system based on deep learning
CN114463320A (en) Magnetic resonance imaging brain glioma IDH gene prediction method and system
CN112086174A (en) Three-dimensional knowledge diagnosis model construction method and system
Shahzad et al. Semantic segmentation of anaemic RBCs using multilevel deep convolutional encoder-decoder network
Ihler et al. A comprehensive study of modern architectures and regularization approaches on chexpert5000
Li et al. A computer-aided diagnosis system based on feature extraction enhanced multiple instance learning
Begum et al. ENHANCED BRAIN DISORDER DETECTION THROUGH YOLOV5 IN MEDICAL IMAGE ANALYSIS
Toka et al. Determination of DL-Based Bone Age Assessment
Fontes et al. Similarity-Based Explanations for Deep Interpretation of Capsule Endoscopy Images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination