CN110910371A - Liver tumor automatic classification method and device based on physiological indexes and image fusion - Google Patents
- Publication number
- CN110910371A (application CN201911159731.XA)
- Authority
- CN
- China
- Prior art keywords
- liver
- image
- physiological
- hepatocellular carcinoma
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012—Biomedical image inspection
- G06F18/24—Classification techniques
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/11—Region-based segmentation
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20221—Image fusion; Image merging
- G06T2207/30056—Liver; Hepatic
- G06T2207/30096—Tumor; Lesion
Abstract
The liver tumor automatic classification method and device based on physiological indexes and image fusion offer good robustness across different patients, require no manually designed feature extraction algorithm, and achieve fully automatic feature learning and extraction. By jointly learning and mining the differences between cholangiocellular carcinoma and hepatocellular carcinoma in both their image features and their physiological indexes, the identification accuracy of the model is improved. The method comprises the following steps: constructing a database of cholangiocellular carcinoma and hepatocellular carcinoma images and their physiological indexes, and collecting abdominal CT images of patients together with the corresponding physiological indexes recorded by physicians; annotating all collected image data by delineating the liver tissue region in each image and labeling it as cholangiocellular carcinoma or hepatocellular carcinoma, to serve as the gold standard for network training; constructing a three-dimensional fully convolutional neural network segmentation model; and constructing a deep convolutional neural network classification model based on the fusion of images and physiological indexes.
Description
Technical Field
The invention relates to the technical field of medical image processing, and in particular to an automatic liver tumor classification method, and a corresponding device, based on physiological indexes and image fusion, mainly applied to computer-aided diagnosis for liver cancer identification.
Background
In recent years, the incidence of cholangiocellular carcinoma in China has been rising year by year. Cholangiocellular carcinoma is a type of liver cancer whose clinical manifestations closely resemble those of hepatocellular carcinoma, so it is often misdiagnosed as hepatocellular carcinoma. The treatment strategies for the two diseases differ considerably, however, so many patients who actually have cholangiocellular carcinoma receive surgical treatment intended for hepatocellular carcinoma, which fails to achieve the expected effect and wastes medical resources. In addition, cholangiocellular carcinoma is difficult to detect at an early stage, and most patients have missed the optimal treatment window by the time of diagnosis; early diagnosis of cholangiocellular carcinoma therefore has important clinical significance.
Generally, to determine whether a patient has cholangiocellular carcinoma, imaging examinations in multiple modalities must be performed on the affected region. Each modality produces many slices, making diagnosis time-consuming and labor-intensive, and because cholangiocellular carcinoma appears similar to hepatocellular carcinoma, a physician often cannot reach a definite conclusion from the images alone. A pathological biopsy is then required for final confirmation, bringing additional risk and pain to the patient. Computer-aided liver cancer identification based on artificial intelligence can help address these clinical problems, and has therefore become a research hotspot in medical image processing in recent years.
To date, research teams have proposed liver cancer identification techniques based on both traditional machine learning and deep learning. Traditional machine learning approaches manually design algorithms to extract liver cancer features and train a classifier on the extracted features to obtain a recognition model. Because the classifier depends on manually extracted features, the quality of the feature extraction algorithm directly affects the final recognition performance; and since different types of liver cancer present very different features, manually designed feature extraction algorithms rarely achieve a satisfactory recognition effect. Traditional machine learning methods also generally suffer from poor noise immunity and low identification accuracy. The most notable difference of deep-learning-based methods is that they extract features automatically from labeled data, with no need to design a complex feature extraction algorithm by hand. Accordingly, more and more research teams are developing artificial-intelligence-based algorithms for distinguishing cholangiocellular carcinoma from hepatocellular carcinoma. However, the artificial intelligence liver cancer identification models reported so far are built on imaging information alone and do not take the patient's physiological indexes into account. In clinical practice, when a physician cannot make an accurate judgment from the image data, he or she typically combines it with information such as the patient's physiological indexes to reach a definite diagnosis.
Therefore, although existing identification methods have received extensive attention and produced certain research results, the following disadvantages remain:
1. the size, shape, and location of tumors vary from patient to patient, which presents challenges to traditional identification methods.
2. Identification methods based on traditional machine learning require manually designed feature extraction methods for the different types of liver cancer, and the quality of that design directly determines the final recognition performance.
3. Existing deep network recognition models are built on image data alone, which reflects only limited tumor characteristics; the patient's physiological indexes are not exploited to improve the recognition rate.
Therefore, an automatic method for identifying cholangiocarcinoma and hepatocellular carcinoma based on the fusion of images and physiological indexes must satisfy the following requirements: (1) good robustness when identifying different patients; (2) fully automatic feature learning and extraction, without manually designing a complex feature extraction algorithm; (3) joint learning and mining of the differences between cholangiocellular carcinoma and hepatocellular carcinoma both in their image features and in physiological indexes such as biochemical index levels and tumor markers, so as to improve the identification accuracy of the model.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides an automatic liver tumor classification method based on physiological indexes and image fusion. The method is robust across different patients, requires no manually designed feature extraction algorithm, achieves fully automatic feature learning and extraction, and jointly learns and mines the differences between cholangiocellular carcinoma and hepatocellular carcinoma both in their image features and in physiological indexes such as biochemical index levels and tumor markers, thereby improving the identification accuracy of the model.
The technical scheme of the invention is as follows. The automatic liver tumor classification method based on physiological indexes and image fusion comprises the following steps:
(1) constructing a database of cholangiocellular carcinoma and hepatocellular carcinoma images and their physiological indexes, and collecting abdominal CT images of patients together with the corresponding physiological indexes recorded by physicians;
(2) annotating all collected image data by delineating the liver tissue region in each image and labeling it as cholangiocellular carcinoma or hepatocellular carcinoma, to serve as the gold standard for network training;
(3) constructing a three-dimensional fully convolutional neural network segmentation model and training it on the cholangiocellular carcinoma and hepatocellular carcinoma image data with the liver regions annotated in step (2), so that the model fully automatically learns and extracts the characteristics of liver tissue and segments the liver from the whole abdominal scan as the region of interest for subsequent liver cancer identification;
(4) constructing a deep convolutional neural network classification model based on the fusion of images and physiological indexes: the model is trained on the image information of the two liver cancers together with the physiological index information that clearly distinguishes them; the physiological indexes of the two liver cancer types are projected into a numeric matrix by corresponding transfer functions, connected to a fully connected layer of the convolutional neural network that performs imaging feature extraction, and trained jointly.
The invention uses convolutional neural network technology from deep learning and feeds the model both the image-level feature differences between cholangiocellular carcinoma and hepatocellular carcinoma and the physician-recorded differences in physiological indexes such as biochemical index levels and tumor markers, fusing them during learning and training. The model can thus fully mine the characteristic expressions of the two cancers in both images and physiological indexes. As a result, the method is robust when identifying different patients, requires no manually designed feature extraction algorithm, achieves fully automatic feature learning and extraction, jointly learns and mines the image-level and physiological-index-level differences between the two cancers, and improves the identification accuracy of the model.
The invention also provides an automatic liver tumor classification device based on physiological indexes and image fusion, which comprises:
a database construction and physiological index acquisition module, which constructs a database of cholangiocellular carcinoma and hepatocellular carcinoma images and their physiological indexes, and collects abdominal CT images of patients together with the corresponding physiological indexes recorded by physicians;
a labeling module, which annotates all acquired image data by delineating the liver tissue region in each image and labeling it as cholangiocellular carcinoma or hepatocellular carcinoma, to serve as the gold standard for network training;
a three-dimensional fully convolutional neural network segmentation model construction module, which constructs a three-dimensional fully convolutional neural network segmentation model and trains it on the cholangiocellular carcinoma and hepatocellular carcinoma image data with annotated liver regions, so that the model fully automatically learns and extracts the characteristics of liver tissue and segments the liver from the whole abdominal scan as the region of interest for subsequent liver cancer identification;
a deep convolutional neural network classification model construction module, which constructs a deep convolutional neural network classification model based on the fusion of images and physiological indexes, trains it on the image information of the two liver cancers together with the physiological index information that clearly distinguishes them, projects the physiological indexes of the two liver cancer types into a numeric matrix by corresponding transfer functions, connects that matrix to a fully connected layer of the convolutional neural network that performs imaging feature extraction, and trains the whole jointly.
Drawings
Fig. 1 is a flowchart of an automatic classification method of liver tumor based on physiological index and image fusion according to the present invention.
FIG. 2 is a flowchart of the establishment of a convolutional neural network liver cancer identification model guided by the fusion of images and physiological indexes according to the present invention.
FIG. 3 is a detailed schematic diagram of the fusion of images and physiological indices according to the present invention.
Detailed Description
As shown in fig. 1, the automatic liver tumor classification method based on physiological indexes and image fusion comprises the following steps:
(1) constructing a database of cholangiocellular carcinoma and hepatocellular carcinoma images and their physiological indexes, and collecting abdominal CT images of patients together with the corresponding physiological indexes recorded by physicians;
(2) annotating all collected image data by delineating the liver tissue region in each image and labeling it as cholangiocellular carcinoma or hepatocellular carcinoma, to serve as the gold standard for network training;
(3) constructing a three-dimensional fully convolutional neural network segmentation model and training it on the cholangiocellular carcinoma and hepatocellular carcinoma image data with the liver regions annotated in step (2), so that the model fully automatically learns and extracts the characteristics of liver tissue and segments the liver from the whole abdominal scan as the region of interest for subsequent liver cancer identification;
(4) constructing a deep convolutional neural network classification model based on the fusion of images and physiological indexes: the model is trained on the image information of the two liver cancers together with the physiological index information that clearly distinguishes them; the physiological indexes of the two liver cancer types are projected into a numeric matrix by corresponding transfer functions, connected to a fully connected layer of the convolutional neural network that performs imaging feature extraction, and trained jointly.
The invention uses convolutional neural network technology from deep learning and feeds the model both the image-level feature differences between cholangiocellular carcinoma and hepatocellular carcinoma and the physician-recorded differences in physiological indexes such as biochemical index levels and tumor markers, fusing them during learning and training. The model can thus fully mine the characteristic expressions of the two cancers in both images and physiological indexes. As a result, the method is robust when identifying different patients, requires no manually designed feature extraction algorithm, achieves fully automatic feature learning and extraction, jointly learns and mines the image-level and physiological-index-level differences between the two cancers, and improves the identification accuracy of the model.
Preferably, in step (3), the abdominal image data acquired in step (1) are fed into the network for training; the network's segmentation result is compared with the provided liver gold standard, the corresponding loss value is computed and fed back to the network to update the weight parameters, and model training is thereby optimized.
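The patent does not specify the loss function used in this feedback step. As a hedged illustration only, the comparison between the network's segmentation output and the gold-standard liver mask could take the form of a soft Dice loss, whose value would be fed back to update the weights:

```python
import numpy as np

def dice_loss(pred, gold, eps=1e-6):
    """Soft Dice loss between the network's liver-probability map and the
    binary gold-standard mask delineated by the physician (lower is better)."""
    inter = np.sum(pred * gold)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(gold) + eps)

# Toy 4x4 slice: the prediction overlaps the gold-standard liver region well.
gold = np.zeros((4, 4))
gold[1:3, 1:3] = 1.0
pred = gold * 0.9                                # confident but soft prediction
loss_good = dice_loss(pred, gold)
loss_bad = dice_loss(np.zeros((4, 4)), gold)     # empty prediction misses the liver
# loss_good is far smaller, so feeding it back drives the weights toward pred.
```

Any overlap-based segmentation loss (Dice, cross-entropy, or a combination) would fill the same role; the choice here is an assumption, not taken from the patent.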
Preferably, in step (4), the liver tissue image output by step (3) is used as the network input for training. The numeric matrix converted from the liver cancer patient's physiological indexes is inserted at a fully connected layer of the network and connected to the image features, realizing fused training of liver cancer image and physiological index characteristics. The network's prediction is then compared with the provided liver cancer category gold standard, the corresponding loss value is computed and fed back to the network to update the weight parameters, and training is optimized, yielding a model that fully automatically distinguishes the two liver cancer types, cholangiocellular carcinoma and hepatocellular carcinoma.
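A minimal sketch of this fused training step, under assumed feature sizes (64 image features, 8 converted physiological indexes) and a simplified cross-entropy update of only the final classification layer (a simplification for illustration, not the patent's actual network):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Assumed sizes for illustration: 64 image features from the CNN trunk,
# 8 physiological indexes converted to a numeric vector, and 2 classes
# (0 = cholangiocellular carcinoma, 1 = hepatocellular carcinoma).
img_feat = rng.standard_normal(64)
phys_vec = rng.standard_normal(8)
fused = np.concatenate([img_feat, phys_vec])   # fusion at the fully connected layer

W = rng.standard_normal((2, 72)) * 0.01        # final classification layer weights
gold_label = 1                                 # physician-provided category gold standard

for _ in range(50):                            # simplified optimization loop
    prob = softmax(W @ fused)
    grad = np.outer(prob - np.eye(2)[gold_label], fused)  # cross-entropy gradient
    W -= 0.1 * grad                            # feed the loss back, update the weights
```

After these updates the network's prediction agrees with the gold-standard category for this sample, mirroring the compare-loss-update cycle described above.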
Preferably, the step (4) comprises the following substeps:
(4.1) inputting the liver region-of-interest image obtained by the fully convolutional network segmentation into one 7×7 convolution layer (convolution layer 1) to extract features characteristic of cholangiocellular carcinoma and hepatocellular carcinoma;
(4.2) inputting the feature map output by step (4.1) into pooling layer 1 for scaling, reducing the size of the feature map without reducing the resolution of the features extracted in step (4.1), and thereby reducing the number of network training parameters;
(4.3) inputting the resulting feature map into fully connected layer 1 and flattening it into a one-dimensional feature vector;
(4.4) converting the patient's physiological index data into a one-dimensional feature vector through a conversion function, and concatenating it with the one-dimensional feature vector output by step (4.3);
(4.5) inputting the one-dimensional feature vector output by step (4.4) into softmax layer 1 for classification, obtaining the classification result (cholangiocellular carcinoma or hepatocellular carcinoma) and completing the construction of the convolutional neural network liver cancer identification model that fuses images and physiological indexes.
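Substeps (4.1) through (4.5) can be sketched as a single forward pass. All sizes, kernels, and weights below are illustrative assumptions rather than values from the patent:

```python
import numpy as np

def conv2d_valid(x, k):
    """Minimal single-channel valid-mode 2-D convolution."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

def max_pool2(x):
    """2x2 max pooling: shrinks the feature map while keeping strong responses."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

rng = np.random.default_rng(1)
roi = rng.standard_normal((16, 16))                    # liver region of interest
feat = conv2d_valid(roi, rng.standard_normal((7, 7)))  # (4.1) 7x7 convolution layer 1
feat = max_pool2(feat)                                 # (4.2) pooling layer 1
vec = feat.ravel()                                     # (4.3) flatten to a 1-D vector
phys = rng.standard_normal(8)                          # (4.4) converted physiological indexes
fused = np.concatenate([vec, phys])                    #       concatenate the two vectors
logits = rng.standard_normal((2, fused.size)) @ fused  # (4.5) input to softmax layer 1
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

The key design point is (4.4): the physiological index vector joins the pipeline only after the image features are flattened, so convolution operates on images alone while classification sees both sources.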
Preferably, the following steps are performed between steps (4.2) and (4.3):
(a) inputting the feature map output by step (4.2) into a 3×3 convolution layer (convolution layer 2) to extract further features;
(b) inputting the resulting feature map into pooling layer 2 for scaling, reducing the size of the feature map without reducing its resolution, and thereby reducing the number of network training parameters.
Preferably, step (a) is followed by inputting its output feature map into several further 3×3 convolution layers for additional feature extraction.
Preferably, step (a) is followed by thirty-one 3×3 convolution layers (convolution layers 3 to 33).
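The thirty-one additional layers need not be written out one by one; a hedged sketch of stacking them in a loop (the zero padding and ReLU activation are assumptions, since the patent does not specify them):

```python
import numpy as np

def conv3x3_same(x, k):
    """3x3 convolution with zero padding, so the feature map keeps its size."""
    p = np.pad(x, 1)
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(p[i:i+3, j:j+3] * k)
    return out

rng = np.random.default_rng(2)
feat = rng.standard_normal((8, 8))                # feature map leaving convolution layer 2
kernels = rng.standard_normal((31, 3, 3)) * 0.1   # convolution layers 3 through 33
for k in kernels:
    feat = np.maximum(conv3x3_same(feat, k), 0)   # convolution + ReLU (assumed activation)
```

Stacking many small 3×3 layers in sequence is the standard way such deep feature extractors are built; the 8×8 map size and random weights here are placeholders.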
As shown in FIG. 2, the convolutional neural network liver cancer identification model guided by the fusion of images and physiological indexes is established through the following steps:
the method comprises the following steps: the liver interested region image obtained by the segmentation of the full convolution neural network is input into 1 convolution layer 1 of 7 × 7 to extract the corresponding characteristics of cholangiocellular carcinoma and hepatocellular carcinoma.
Step two: and inputting the output feature diagram in the step I into a pooling layer 1 for scaling, and reducing the size and parameters of network training under the condition of ensuring that the resolution of the feature diagram extracted in the step I is not reduced.
Step three: and inputting the output feature map in the step two into a convolution layer 2 with 3 x 3 to further extract corresponding liver cancer features.
Step four: and inputting the output feature map of the step three into a convolution layer 3 of 3 x 3 to further extract features.
Step five: inputting the output feature map of the step four into a convolution layer 4 of 3 x 3 to further extract features.
Step six: inputting the output feature map of the step five into a convolution layer 5 of 3 x 3 to further extract features.
Step seven: and inputting the output feature map of the step six into a convolution layer 6 of 3 x 3 to further extract features.
Step eight: and inputting the output feature map of the step seven into a convolution layer 7 of 3 x 3 to further extract features.
Step nine: and inputting the output feature map of the step eight into a convolution layer 8 of 3 x 3 to further extract features.
Step ten: and inputting the output feature map of the step nine into a convolution layer 9 of 3 x 3 to further extract features.
Steps eleven through thirty-five: the output feature map of each preceding step is input, in sequence, into the next 3 × 3 convolution layer (convolution layers 10 through 33) to further extract features.
Step thirty-six: inputting the output feature map of step thirty-five into a pooling layer 2 for scaling, reducing the size of the feature map, and hence the number of network training parameters, without reducing the resolution of the feature map.
Step thirty-seven: inputting the output feature map of step thirty-six into a fully connected layer 1, which flattens the feature map into a one-dimensional feature vector.
Step thirty-eight: converting the physiological index data of the patient into a one-dimensional feature vector through a conversion function, and concatenating this vector with the one-dimensional feature vector output in step thirty-seven.
Step thirty-nine: inputting the one-dimensional feature vector output in step thirty-eight into a softmax layer 1 for classification to obtain the classification result of cholangiocellular carcinoma versus hepatocellular carcinoma, thereby completing the construction of the convolutional neural network liver cancer identification model fusing the image with the physiological indexes.
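The data flow of the steps above can be sketched as follows. This is a purely illustrative toy: a single-channel map, a handful of stand-in convolution layers instead of convolution layers 10 through 33, random untrained weights, and hypothetical physiological values; none of these specifics come from the patent.

```python
import numpy as np

def conv3x3(x, w):
    """Valid 3 x 3 convolution of a single-channel feature map (illustrative stand-in)."""
    h, wd = x.shape
    out = np.empty((h - 2, wd - 2))
    for i in range(h - 2):
        for j in range(wd - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * w)
    return out

def pool2x2(x):
    """2 x 2 max pooling (stand-in for pooling layer 2), halving the feature-map size."""
    h, wd = x.shape
    return x[:h // 2 * 2, :wd // 2 * 2].reshape(h // 2, 2, wd // 2, 2).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
fmap = rng.standard_normal((34, 34))             # liver ROI feature map entering step eleven
for _ in range(4):                               # a few layers standing in for convolution layers 10-33
    fmap = conv3x3(fmap, 0.1 * rng.standard_normal((3, 3)))
fmap = pool2x2(fmap)                             # pooling layer 2 scales the map down
image_vec = fmap.ravel()                         # fully connected layer 1 flattens to a 1-D vector
physio_vec = np.array([35.0, 400.0, 1.2])        # conversion-function output for hypothetical indexes
fused = np.concatenate([image_vec, physio_vec])  # fusion of image and physiological features
W = 0.01 * rng.standard_normal((2, fused.size))  # weights of softmax layer 1 (random, untrained)
probs = softmax(W @ fused)                       # two-class output: cholangiocarcinoma vs. HCC
```

In the real model the concatenated vector is produced inside the network and trained end to end; the sketch only shows how the shapes compose.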
Fig. 3 is a detailed schematic diagram of the fusion of the image and the physiological indexes provided by the present invention; the fusion process comprises the following steps:
Step I: after features are extracted from the patient's image data through a plurality of convolution and pooling operations, a series of feature maps is obtained, each feature map being a matrix of numerical values.
Step II: the series of feature maps output in step I is combined through different weighted summations in the fully connected layer 1 to obtain a one-dimensional vector, which is an abstract representation of the cholangiocellular carcinoma and hepatocellular carcinoma image features extracted by the model.
Step III: the physiological index information that can distinguish the two tumor types, such as biochemical index levels and tumor markers, is converted into a series of numeric matrices through a conversion function, and these matrices are flattened into a one-dimensional vector serving as a feature map.
Step IV: the one-dimensional vector representing the image features output in step II is concatenated with the one-dimensional vector representing the physiological index features output in step III to obtain a new one-dimensional vector for network training, thereby realizing the fusion of the image and physiological index information.
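The conversion function of step III is not specified in the patent; one plausible reading, normalizing each index by a reference range and flattening to a one-dimensional vector, can be sketched as follows (the field names, values, and ranges are all hypothetical):

```python
import numpy as np

# Hypothetical patient record and reference ranges; the actual indexes and the
# conversion function are not specified in the patent.
record = {"AFP": 420.0, "CA19_9": 35.0, "ALT": 66.0, "total_bilirubin": 18.0}
ranges = {"AFP": 500.0, "CA19_9": 100.0, "ALT": 100.0, "total_bilirubin": 30.0}

def convert(record, ranges):
    """Project the physiological indexes into numbers and flatten to a 1-D vector (step III)."""
    return np.array([record[k] / ranges[k] for k in sorted(record)])

image_features = np.linspace(0.0, 1.0, 8)  # stand-in for the fully connected layer 1 output (step II)
physio_features = convert(record, ranges)  # 1-D physiological feature vector (step III)
fused = np.concatenate([image_features, physio_features])  # step IV: one vector for network training
```

Sorting the keys simply fixes a stable ordering of the indexes so the same vector position always carries the same index.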
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium (for example ROM/RAM, a magnetic disk, an optical disk, or a memory card) and, when executed, performs the steps of the methods of the above embodiments. Accordingly, corresponding to the method of the present invention, the present invention also includes an automatic liver tumor classification device based on the fusion of physiological indexes and images, generally expressed as functional modules corresponding to the steps of the method. The device includes:
a database construction and physiological index acquisition module, which constructs an image and physiological index database of cholangiocellular carcinoma and hepatocellular carcinoma, and acquires abdominal CT images of patients and the corresponding physiological indexes recorded by doctors;
a marking module, which marks all the acquired image data, delineates the liver tissue region in the image data, judges whether it belongs to cholangiocellular carcinoma or hepatocellular carcinoma, and labels it accordingly to serve as the gold standard for network training;
a three-dimensional fully convolutional neural network segmentation model construction module, which constructs a three-dimensional fully convolutional neural network segmentation model and takes the labeled cholangiocellular carcinoma and hepatocellular carcinoma image data of the liver region as the model input for learning, so that the model fully automatically learns and extracts the characteristics of liver tissue and thereby segments the liver tissue from the whole abdominal scan image as the region of interest for subsequent liver cancer identification;
a deep convolutional neural network classification model construction module, which constructs a deep convolutional neural network classification model based on the fusion of images and physiological indexes, performs learning and training on the image information of cholangiocellular carcinoma and hepatocellular carcinoma together with the physiological index information that can significantly distinguish the two, projects the physiological indexes of the two types of liver cancer into a numeric matrix using corresponding transfer functions, and performs fusion training after connecting this matrix to a fully connected layer in the convolutional neural network that performs imaging feature extraction.
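These modules are trained by the usual supervised loop described in the claims: the network output is compared with the gold standard, a loss value is calculated, and it is fed back to update the weight parameters. A minimal sketch of one such loss-feedback update, using logistic regression on fused feature vectors as a stand-in for the full network (all data here are random placeholders, not patient data):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = rng.standard_normal((16, 10))         # fused image + physiology vectors (random placeholders)
y = (rng.random(16) > 0.5).astype(float)  # gold-standard labels: 1 = HCC, 0 = cholangiocarcinoma
w = np.zeros(10)                          # weight parameters to be updated

for _ in range(200):
    p = sigmoid(X @ w)                    # network prediction
    grad = X.T @ (p - y) / len(y)         # gradient of the cross-entropy loss
    w -= 0.5 * grad                       # loss fed back: update the weight parameters

loss = -np.mean(y * np.log(p + 1e-12) + (1.0 - y) * np.log(1.0 - p + 1e-12))
```

An untrained model (w = 0) has a cross-entropy loss of about 0.693 on binary labels; the feedback loop drives the loss below that, which is all the sketch is meant to show.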
Compared with existing liver cancer identification methods, the method has the following advantages:
1. The method adopts deep learning technology and can fully automatically learn the characteristics of cholangiocellular carcinoma and hepatocellular carcinoma from the labeled data, without manually designing a complex feature extraction algorithm; this lowers the operational threshold and yields more accurate and comprehensive features.
2. Whereas existing deep learning methods train the model only on liver cancer imaging data, the present method comprehensively utilizes both the liver cancer images and the corresponding physiological indexes for training, so that the model learns the fused feature expressions of cholangiocellular carcinoma and hepatocellular carcinoma in both the images and the physiological indexes, improving the identification accuracy.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent variations and modifications made to the above embodiment according to the technical spirit of the present invention still belong to the protection scope of the technical solution of the present invention.
Claims (8)
1. An automatic liver tumor classification method based on the fusion of physiological indexes and images, characterized by comprising the following steps:
(1) constructing an image and physiological index database of cholangiocellular carcinoma and hepatocellular carcinoma, and collecting abdominal CT images of patients and the corresponding physiological indexes recorded by doctors;
(2) marking all the collected image data, delineating the liver tissue region in the image data, judging whether it belongs to cholangiocellular carcinoma or hepatocellular carcinoma, and labeling it accordingly to serve as the gold standard for network training;
(3) constructing a three-dimensional fully convolutional neural network segmentation model, taking the cholangiocellular carcinoma and hepatocellular carcinoma image data of the liver region marked in step (2) as the model input for learning, so that the model fully automatically learns and extracts the characteristics of liver tissue and thereby segments the liver tissue from the whole abdominal scan image as the region of interest for subsequent liver cancer identification;
(4) constructing a deep convolutional neural network classification model based on the fusion of images and physiological indexes, performing learning and training on the image information of the two liver cancers, cholangiocellular carcinoma and hepatocellular carcinoma, together with the physiological index information that can significantly distinguish them, projecting the physiological indexes of the two types of liver cancer into a numeric matrix using corresponding transfer functions, connecting the matrix to a fully connected layer in the convolutional neural network that performs imaging feature extraction, and performing fusion training.
2. The method for automatically classifying liver tumors based on physiological index and image fusion according to claim 1, wherein: the abdominal image data acquired in step (1) are input into the network for learning and training; the segmentation result of the network is compared with the provided liver gold standard, the corresponding loss value is calculated and fed back to the network to update the weight parameters, and the model training is thereby optimized.
3. The method for automatically classifying liver tumors based on physiological index and image fusion according to claim 2, wherein: in step (4), the liver tissue image output in step (3) is used as the network input for learning and training; a numeric matrix converted from the physiological indexes of the liver cancer patient is inserted into the fully connected layer of the network and concatenated with it, realizing fusion training of the image and physiological index characteristics of the liver cancer; the prediction result of the network is then compared with the provided liver cancer category gold standard, the corresponding loss value is calculated and fed back to the network to update the weight parameters, and the model training is optimized, thereby obtaining a model capable of fully automatically identifying cholangiocellular carcinoma and hepatocellular carcinoma.
4. The method for automatically classifying liver tumors based on physiological index and image fusion according to claim 3, wherein step (4) comprises the following sub-steps:
(4.1) inputting the liver region-of-interest image obtained by the fully convolutional neural network segmentation into a 7 × 7 convolution layer 1 to extract the corresponding characteristics of cholangiocellular carcinoma and hepatocellular carcinoma;
(4.2) inputting the output feature map of step (4.1) into a pooling layer 1 for scaling, reducing the size of the feature map, and hence the number of network training parameters, without reducing the resolution of the features extracted in step (4.1);
(4.3) inputting the output feature map of the preceding step into a fully connected layer 1, which flattens the feature map into a one-dimensional feature vector;
(4.4) converting the physiological index data of the patient into a one-dimensional feature vector through a conversion function, and concatenating this vector with the one-dimensional feature vector output in step (4.3);
(4.5) inputting the one-dimensional feature vector output in step (4.4) into a softmax layer 1 for classification to obtain the classification result of cholangiocellular carcinoma versus hepatocellular carcinoma, thereby completing the construction of the convolutional neural network liver cancer identification model fusing the image with the physiological indexes.
5. The method for automatically classifying liver tumors based on physiological index and image fusion according to claim 4, wherein the following steps are further included between steps (4.2) and (4.3):
(a) inputting the output feature map of step (4.2) into a 3 × 3 convolution layer 2 to further extract features;
(b) inputting the output feature map of the above step into a pooling layer 2 for scaling, reducing the size of the feature map, and hence the number of network training parameters, without reducing the resolution of the feature map.
6. The method for automatically classifying liver tumors based on physiological index and image fusion according to claim 5, wherein step (a) is followed by the step of: inputting the output feature map of step (a) into a plurality of 3 × 3 convolution layers to further extract features.
7. The method for automatically classifying liver tumors based on physiological index and image fusion according to claim 4, wherein step (a) is followed by 31 3 × 3 convolution layers 3 to 33.
8. An automatic liver tumor classification device based on physiological index and image fusion, characterized by comprising:
a database construction and physiological index acquisition module, which constructs an image and physiological index database of cholangiocellular carcinoma and hepatocellular carcinoma, and acquires abdominal CT images of patients and the corresponding physiological indexes recorded by doctors;
a marking module, which marks all the acquired image data, delineates the liver tissue region in the image data, judges whether it belongs to cholangiocellular carcinoma or hepatocellular carcinoma, and labels it accordingly to serve as the gold standard for network training;
a three-dimensional fully convolutional neural network segmentation model construction module, which constructs a three-dimensional fully convolutional neural network segmentation model and takes the labeled cholangiocellular carcinoma and hepatocellular carcinoma image data of the liver region as the model input for learning, so that the model fully automatically learns and extracts the characteristics of liver tissue and thereby segments the liver tissue from the whole abdominal scan image as the region of interest for subsequent liver cancer identification;
a deep convolutional neural network classification model construction module, which constructs a deep convolutional neural network classification model based on the fusion of images and physiological indexes, performs learning and training on the image information of cholangiocellular carcinoma and hepatocellular carcinoma together with the physiological index information that can significantly distinguish the two, projects the physiological indexes of the two types of liver cancer into a numeric matrix using corresponding transfer functions, and performs fusion training after connecting this matrix to a fully connected layer in the convolutional neural network that performs imaging feature extraction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911159731.XA CN110910371A (en) | 2019-11-22 | 2019-11-22 | Liver tumor automatic classification method and device based on physiological indexes and image fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110910371A true CN110910371A (en) | 2020-03-24 |
Family
ID=69818984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911159731.XA Pending CN110910371A (en) | 2019-11-22 | 2019-11-22 | Liver tumor automatic classification method and device based on physiological indexes and image fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110910371A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112700859A (en) * | 2020-12-15 | 2021-04-23 | 贵州小宝健康科技有限公司 | Medical diagnosis assisting method and system based on medical images |
CN112991295A (en) * | 2021-03-12 | 2021-06-18 | 中国科学院自动化研究所 | Lymph node metastasis image analysis system, method and equipment based on deep learning |
CN113011462A (en) * | 2021-02-22 | 2021-06-22 | 广州领拓医疗科技有限公司 | Classification and device of tumor cell images |
CN113657503A (en) * | 2021-08-18 | 2021-11-16 | 上海交通大学 | Malignant liver tumor classification method based on multi-modal data fusion |
CN114677378A (en) * | 2022-05-31 | 2022-06-28 | 四川省医学科学院·四川省人民医院 | Computer-aided diagnosis and treatment system based on ovarian tumor benign and malignant prediction model |
CN115082437A (en) * | 2022-07-22 | 2022-09-20 | 浙江省肿瘤医院 | Tumor prediction system and method based on tongue picture image and tumor marker and application |
CN116741350A (en) * | 2023-08-16 | 2023-09-12 | 枣庄市山亭区妇幼保健院 | File management system for hospital X-ray images |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105653858A (en) * | 2015-12-31 | 2016-06-08 | 中国科学院自动化研究所 | Image omics based lesion tissue auxiliary prognosis system and method |
CN107203989A (en) * | 2017-04-01 | 2017-09-26 | 南京邮电大学 | End-to-end chest CT image dividing method based on full convolutional neural networks |
CN107437092A (en) * | 2017-06-28 | 2017-12-05 | 苏州比格威医疗科技有限公司 | The sorting algorithm of retina OCT image based on Three dimensional convolution neutral net |
CN108257135A (en) * | 2018-02-01 | 2018-07-06 | 浙江德尚韵兴图像科技有限公司 | The assistant diagnosis system of medical image features is understood based on deep learning method |
US20180315193A1 (en) * | 2017-04-27 | 2018-11-01 | Retinopathy Answer Limited | System and method for automated funduscopic image analysis |
CN108766555A (en) * | 2018-04-08 | 2018-11-06 | 深圳大学 | The computer diagnosis method and system of Pancreatic Neuroendocrine Tumors grade malignancy |
CN108846432A (en) * | 2018-06-06 | 2018-11-20 | 深圳神目信息技术有限公司 | It is a kind of based on deep learning to the classification method of chest CT images |
US20190050982A1 (en) * | 2017-08-09 | 2019-02-14 | Shenzhen Keya Medical Technology Corporation | System and method for automatically detecting a physiological condition from a medical image of a patient |
US10304193B1 (en) * | 2018-08-17 | 2019-05-28 | 12 Sigma Technologies | Image segmentation and object detection using fully convolutional neural network |
CN110223300A (en) * | 2019-06-13 | 2019-09-10 | 北京理工大学 | CT image abdominal multivisceral organ dividing method and device |
CN110236543A (en) * | 2019-05-23 | 2019-09-17 | 东华大学 | The more classification diagnosis systems of Alzheimer disease based on deep learning |
CN110265141A (en) * | 2019-05-13 | 2019-09-20 | 上海大学 | A kind of liver neoplasm CT images computer aided diagnosing method |
CN110276407A (en) * | 2019-06-26 | 2019-09-24 | 哈尔滨理工大学 | A kind of Hepatic CT staging system and classification method |
CN110335231A (en) * | 2019-04-01 | 2019-10-15 | 浙江工业大学 | A kind of ultrasonic image chronic kidney disease auxiliary screening method of fusion textural characteristics and depth characteristic |
Non-Patent Citations (5)
Title |
---|
ANDRZEJ SKALSKI ET AL.: "Kidney tumor segmentation and detection on Computed Tomography data", 《2016 IEEE INTERNATIONAL CONFERENCE ON IMAGING SYSTEMS AND TECHNIQUES (IST)》 * |
CHI WANG ET AL.: "Automatic Liver Segmentation Using Multi-plane Integrated Fully Convolutional Neural Networks", 《2018 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM)》 * |
NASRULLAH NASRULLAH ET AL.: "Automated Lung Nodule Detection and Classification Using Deep Learning Combined with Multiple Strategies", 《SENSORS》 * |
LI GUANDONG ET AL.: "3D-CNN Classification Method for Hyperspectral Remote Sensing Images with a Double Convolution-Pooling Structure", 《Journal of Image and Graphics》 (in Chinese) *
LI WEN: "Research on Liver Tumor Segmentation in CT Images Based on Deep Convolutional Neural Networks", 《China Master's Theses Full-text Database, Medicine and Health Sciences》 (in Chinese) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110910371A (en) | Liver tumor automatic classification method and device based on physiological indexes and image fusion | |
CN109166105B (en) | Tumor malignancy risk layered auxiliary diagnosis system based on artificial intelligent medical image | |
CN110929789A (en) | Liver tumor automatic classification method and device based on multi-stage CT image analysis | |
CN110265141B (en) | Computer-aided diagnosis method for liver tumor CT image | |
CN110807764A (en) | Lung cancer screening method based on neural network | |
US20230005140A1 (en) | Automated detection of tumors based on image processing | |
CN111476754B (en) | Bone marrow cell image artificial intelligence auxiliary grading diagnosis system and method | |
CN111553892B (en) | Lung nodule segmentation calculation method, device and system based on deep learning | |
CN110335665A (en) | It is a kind of applied to medical image auxiliary diagnosis analysis to scheme to search drawing method and system | |
CN112101451A (en) | Breast cancer histopathology type classification method based on generation of confrontation network screening image blocks | |
CN102737379A (en) | Captive test (CT) image partitioning method based on adaptive learning | |
CN114171187B (en) | Gastric cancer TNM stage prediction system based on multi-mode deep learning | |
CN107766874B (en) | Measuring method and measuring system for ultrasonic volume biological parameters | |
Bai et al. | Automatic segmentation of cervical region in colposcopic images using K-means | |
CN114782307A (en) | Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning | |
Ghose et al. | A random forest based classification approach to prostate segmentation in MRI | |
CN112263217B (en) | Improved convolutional neural network-based non-melanoma skin cancer pathological image lesion area detection method | |
CN112819747A (en) | Method for automatically diagnosing benign and malignant nodules based on lung tomography image | |
CN111145185A (en) | Lung parenchyma segmentation method for extracting CT image based on clustering key frame | |
CN115471512A (en) | Medical image segmentation method based on self-supervision contrast learning | |
CN112907581A (en) | MRI (magnetic resonance imaging) multi-class spinal cord tumor segmentation method based on deep learning | |
Roy et al. | Heterogeneity of human brain tumor with lesion identification, localization, and analysis from MRI | |
Lu et al. | Breast tumor computer-aided detection system based on magnetic resonance imaging using convolutional neural network | |
CN114445328A (en) | Medical image brain tumor detection method and system based on improved Faster R-CNN | |
CN116779093B (en) | Method and device for generating medical image structured report and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200324 |