CN110738649A - A training method of a Faster RCNN network for automatic identification of stomach cancer enhanced CT images - Google Patents

A training method of a Faster RCNN network for automatic identification of stomach cancer enhanced CT images

Info

Publication number
CN110738649A
Authority
CN
China
Prior art keywords
image
training
images
gastric cancer
training set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910972378.0A
Other languages
Chinese (zh)
Inventor
卢云
吴庆尧
孙品
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao University
Original Assignee
Qingdao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao University
Priority to CN201910972378.0A
Publication of CN110738649A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 - Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52 - Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 - Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5217 - Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 - Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/58 - Testing, adjusting or calibrating thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30092 - Stomach; Gastric
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30096 - Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Optics & Photonics (AREA)
  • Evolutionary Computation (AREA)
  • Surgery (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Software Systems (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Mathematical Physics (AREA)
  • Pathology (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physiology (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a training method of a Faster RCNN network for automatically identifying gastric cancer enhanced CT (computed tomography) images, which comprises the following steps: acquiring advanced gastric cancer images; manually marking the images; extracting the region of interest on each image with the Faster RCNN network; preprocessing the images in the data set; standardizing the preprocessed images; dividing the standardized images into a training set and a test set; inputting the training-set images into the network; verifying the training set through the test set; finishing the training when the prediction effectiveness of the training set reaches a preset value, and reconstructing the training set for training when the prediction effectiveness is lower than the preset value. A Faster RCNN network trained by the method disclosed by the invention can identify advanced gastric cancer tumors in enhanced CT images and perform T staging of the advanced gastric cancer tumors.

Description

A training method of a Faster RCNN network for automatic identification of stomach cancer enhanced CT images
Technical Field
The invention relates to the technical field of image recognition, and in particular to a training method of a Faster RCNN network for automatic recognition of gastric cancer enhanced CT images.
Background
Gastric cancer currently ranks fifth in incidence and third in mortality among cancers worldwide, making it one of the leading threats to human health. Accurate preoperative staging of gastric cancer is crucial for selecting a treatment plan and predicting a patient's postoperative outcome.
Currently, the examinations applied to preoperative staging of gastric cancer include endoscopic ultrasound (EUS), multi-detector computed tomography (CT), magnetic resonance imaging (MRI) and combined positron emission tomography (PET-CT). MRI is not a routine examination for gastric cancer because of its high requirements on examiners and the limitation of long scanning times; PET-CT is generally not used for routine examination because of its cost and radiation exposure; EUS is poorly accepted by patients because it is invasive and cannot be used to detect metastatic disease. CT examination has the advantages of being non-invasive, practical, convenient and stable, and is therefore used routinely for preoperative staging of gastric cancer. Texture analysis of CT images can detect subtle differences that cannot be identified by the human eye, and quantitative information on tumor heterogeneity can be obtained by analyzing the distribution and intensity of pixels in the image, which improves the diagnostic value of CT. Nevertheless, the reported accuracy of CT in assessing the depth to which gastric cancer cells infiltrate the gastric wall and serosa varies considerably between studies, so preoperative T staging based on conventional reading of CT images remains imperfect.
The depth of gastric cancer tumor cell infiltration plays an important guiding role in screening for gastric cancer and formulating treatment plans, so how to accurately predict the depth of tumor cell infiltration into the gastric wall from gastric cancer enhanced CT images is a problem that needs to be solved.
At present, the depth of tumor cell infiltration into the gastric wall must be judged and marked manually by a professional physician. On the one hand, this places high demands on the physician's professional expertise; on the other hand, the physician's workload is very large and the diagnostic process is long.
The Faster RCNN network is a type of artificial neural network; among the various deep learning models, it is a relatively mature algorithm with strong capabilities in image processing and recognition.
Disclosure of Invention
The invention provides a training method of a Faster RCNN network for automatic identification of gastric cancer enhanced CT images, which solves the problem in the prior art that the depth of tumor cell infiltration into the gastric wall must be predicted manually from gastric cancer enhanced CT images.
The technical scheme of the invention is realized as follows:
A training method of a Faster RCNN network for automatic identification of gastric cancer enhanced CT images comprises the following steps:
step one, acquiring advanced gastric cancer images to form a data set;
step two, manually marking the images by using labelImg software, marking the position where gastric cancer tumor cells infiltrate deepest in each image;
step three, extracting the region of interest on each image by using the Faster RCNN network;
step four, preprocessing the images in the data set by applying an image intensity range classification method and a histogram equalization method;
step five, carrying out standardization processing on the preprocessed images;
step six, randomly sampling the standardized images and dividing them into a training set and a test set according to a proportion;
step seven, inputting the images of the training set into the Faster RCNN network, performing multivariate logistic regression analysis, determining the position and shape of the stomach, detecting the position of the gastric cancer tumor, and identifying the position with the deepest infiltration of gastric cancer tumor cells in each image to obtain a segmented tumor result;
step eight, verifying the training set through the test set;
step nine, finishing the training when the prediction effectiveness of the training set reaches a preset value, and reconstructing the training set for training when the prediction effectiveness of the training set is lower than the preset value.
Optionally, in step two, the image is marked manually by using labelImg software, and the distance between the tumor marking frame and the normal gastric wall is kept within 0.5 cm.
Optionally, in step three, after the region of interest is extracted from the image by the Faster RCNN network, the method further includes obtaining more images by using data augmentation algorithms to enlarge the data set.
Optionally, the augmentation algorithm comprises cropping or flipping.
Optionally, in step six, random sampling is performed to divide the standardized images into a training set and a test set at a ratio of 4:1.
Optionally, in step one, upper-abdomen enhanced CT venous-phase images are selected as the data set.
Optionally, in step five, z-score standardization is performed on the preprocessed images.
The invention has the beneficial effects that:
(1) the Faster RCNN network trained by the method can identify advanced gastric cancer tumors in enhanced CT images and accurately locate the tumor region;
(2) it can be used to perform T staging of advanced gastric cancer tumors, with particularly high accuracy for T3 and T4 stage gastric cancer.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flow chart of the training method of the Faster RCNN network for automatic recognition of gastric cancer enhanced CT images according to the present invention;
FIG. 2a is a schematic diagram of the ROC curve of the Faster RCNN network for advanced gastric cancer identification;
FIG. 2b is a schematic diagram of the ROC curve of the Faster RCNN network for stage T2 gastric cancer identification;
FIG. 2c is a schematic diagram of the ROC curve of the Faster RCNN network for stage T3 gastric cancer identification;
FIG. 2d is a schematic diagram of the ROC curve of the Faster RCNN network for stage T4 gastric cancer identification;
FIG. 3a is a schematic diagram of an imaging physician manually marking the location of a T2 stage tumor in an image according to the pathological results;
FIG. 3b is a schematic diagram of the corresponding tumor segmentation and T stage identification by the Faster RCNN network;
FIG. 3c is a schematic diagram of an imaging physician manually marking the location of a T3 stage tumor in an image according to the pathological results;
FIG. 3d is a schematic diagram of the corresponding tumor segmentation and T stage identification by the Faster RCNN network;
FIG. 3e is a schematic diagram of an imaging physician manually marking the location of a T4 stage tumor in an image according to the pathological results;
FIG. 3f is a schematic diagram of the corresponding tumor segmentation and T stage identification by the Faster RCNN network.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention; it is obvious that the described embodiments are only some of the embodiments of the present invention, rather than all of them.
The invention provides a training method of a Faster RCNN network for automatic identification of gastric cancer enhanced CT images, in which a training set is constructed to train the Faster RCNN network and the training effect of the training set is tested through a test set.
As shown in FIG. 1, the training method of the Faster RCNN network for automatically identifying gastric cancer enhanced CT images according to the present invention includes the following steps:
step one, acquiring advanced gastric cancer images to form a data set;
step two, manually marking the images by using labelImg software, marking the position where gastric cancer tumor cells infiltrate deepest in each image;
step three, extracting the region of interest on each image by using the Faster RCNN network;
step four, preprocessing the images in the data set by applying an image intensity range classification method and a histogram equalization method;
step five, carrying out standardization processing on the preprocessed images;
step six, performing random sampling to divide the standardized images into a training set and a test set at a ratio of 4:1;
step seven, inputting the images of the training set into the Faster RCNN network, performing multivariate logistic regression analysis, determining the position and shape of the stomach, detecting the position of the gastric cancer tumor, and identifying the position with the deepest infiltration of gastric cancer tumor cells in each image to obtain a segmented tumor result;
step eight, verifying the training set through the test set;
step nine, finishing the training when the prediction effectiveness of the training set reaches a preset value, and reconstructing the training set for training when the prediction effectiveness of the training set is lower than the preset value.
The Faster RCNN network may adopt an existing network architecture. For example, the Faster RCNN network is a 101-layer deep convolutional neural network comprising a feature extraction network, a region proposal network, and a region-of-interest feature vector network, and each part of the model is trained for 100 epochs respectively, so that image features can be extracted.
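By way of illustration only, a detector of this general kind can be assembled from standard components. The sketch below is not the patented implementation: it builds a torchvision Faster R-CNN with one foreground class (tumor) plus background and runs a single dummy training step. The ResNet-50 FPN backbone, image size, box coordinates and class count are assumptions made for the example, whereas the patent describes a 101-layer network trained for 100 epochs per component.

```python
# Minimal sketch (assumed setup, not the patented implementation): a torchvision
# Faster R-CNN configured for {background, tumor} and one dummy training step.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_model(num_classes: int = 2):  # 1 tumor class + background
    # Pretrained COCO weights; pass weights=None to skip the download.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Replace the box classification head so it predicts {background, tumor}.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = build_model().to(device)
    model.train()
    # Images are 3-channel tensors scaled to [0, 1]; targets carry pixel-space
    # (x1, y1, x2, y2) boxes and integer class labels.
    images = [torch.rand(3, 512, 512, device=device)]
    targets = [{
        "boxes": torch.tensor([[120.0, 140.0, 260.0, 300.0]], device=device),
        "labels": torch.tensor([1], device=device),
    }]
    losses = model(images, targets)          # dict of RPN and ROI-head losses
    total = sum(losses.values())
    total.backward()
    print({k: round(float(v), 4) for k, v in losses.items()})
```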
The diagnostic performance of epigastric enhanced CT venous-phase images is better than that of arterial-phase images; optionally, in step one, epigastric enhanced CT venous-phase images are selected as the data set. For example, 2122 advanced gastric cancer enhanced CT images were acquired, and the basic information of the images is shown in Table 1 below.
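For illustration, one way such a venous-phase series could be read and converted to Hounsfield units is sketched below; the directory layout, file extension and use of pydicom are assumptions for the example, not details taken from the patent.

```python
# Illustrative sketch (assumed I/O workflow): load one CT series with pydicom
# and rescale stored pixel values to Hounsfield units before further processing.
from pathlib import Path
import numpy as np
import pydicom

def load_ct_series(series_dir: str) -> np.ndarray:
    """Return a (slices, H, W) volume in Hounsfield units."""
    slices = [pydicom.dcmread(p) for p in sorted(Path(series_dir).glob("*.dcm"))]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))  # order by z position
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept

# Hypothetical path: volume = load_ct_series("data/patient_0001/venous_phase")
```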
TABLE 1 (basic information of the acquired images; the table is reproduced as an image in the original publication)
In step two, two radiologists (with 8 and 10 years of gastrointestinal imaging experience, respectively) independently interpreted the CT images and marked the tumor lesions without access to the clinical information (including the patient's name, sex, and age). The images were annotated with labelImg software using a tumor segmentation approach: the two radiologists marked only the position where gastric cancer tumor cells infiltrate deepest in each image, and the distance between the tumor marking frame and the normal gastric wall was kept within 0.5 cm.
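labelImg saves each annotation as a Pascal VOC XML file by default, so the marked boxes can be read back as sketched below; the file path and class name used here are hypothetical, not values from the patent.

```python
# Illustrative sketch: parse a Pascal VOC XML annotation of the kind labelImg writes.
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_path: str):
    """Return a list of (label, xmin, ymin, xmax, ymax) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        label = obj.findtext("name")          # e.g. "tumor_T3" (assumed label scheme)
        bb = obj.find("bndbox")
        boxes.append((
            label,
            int(float(bb.findtext("xmin"))),
            int(float(bb.findtext("ymin"))),
            int(float(bb.findtext("xmax"))),
            int(float(bb.findtext("ymax"))),
        ))
    return boxes

# Hypothetical file: boxes = read_voc_boxes("annotations/case_0001_slice_045.xml")
```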
Optionally, in the third step, the region of interest includes at least the position in the image where gastric cancer tumor cells infiltrate deepest.
Optionally, after the region of interest (ROI) is extracted from the image by the Faster RCNN network in step three, the method further includes obtaining more images by means of data augmentation to enlarge the data set and thereby alleviate the overfitting problem that arises when the model processes a small data set. Optionally, the augmentation algorithm includes cropping, flipping, or other data augmentation operations. For example, a total of 5855 advanced gastric cancer images were obtained after augmenting the 2122 epigastric enhanced CT venous-phase images.
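A minimal sketch of box-aware flipping and cropping of this kind is given below; the (x1, y1, x2, y2) box convention and the crop margin are assumptions made for the example.

```python
# Illustrative sketch: flip or crop an image while keeping its bounding box aligned.
import numpy as np

def hflip(image: np.ndarray, box: np.ndarray):
    """Horizontally flip an HxW(xC) image and its (x1, y1, x2, y2) box."""
    w = image.shape[1]
    x1, y1, x2, y2 = box
    return image[:, ::-1].copy(), np.array([w - x2, y1, w - x1, y2])

def random_crop(image: np.ndarray, box: np.ndarray, margin: int = 32, rng=None):
    """Crop randomly while keeping the whole annotated box inside the crop."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    x1, y1, x2, y2 = (int(v) for v in box)
    left = int(rng.integers(0, max(x1 - margin, 0) + 1))
    top = int(rng.integers(0, max(y1 - margin, 0) + 1))
    right = int(rng.integers(min(x2 + margin, w), w + 1))
    bottom = int(rng.integers(min(y2 + margin, h), h + 1))
    cropped = image[top:bottom, left:right]
    return cropped, np.array([x1 - left, y1 - top, x2 - left, y2 - top])
```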
The images are preprocessed before the Faster RCNN network is trained. In the preprocessing step, the images are processed with an image intensity range normalization method and a histogram equalization method to reduce the computation time and improve the image contrast. The preprocessed images are then standardized so that the pixel values of each channel follow a standard normal distribution with a mean of 0 and a variance of 1; optionally, z-score standardization is applied to the preprocessed images.
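A minimal sketch of this preprocessing chain is shown below: clip the CT intensity range, equalize the histogram, then apply z-score standardization. The Hounsfield-unit window and the use of scikit-image are assumptions for the example, not values given in the patent.

```python
# Illustrative preprocessing sketch: intensity-range clipping, histogram
# equalization, then z-score standardization (zero mean, unit variance).
import numpy as np
from skimage import exposure

def preprocess(ct_slice: np.ndarray, hu_min: float = -100.0, hu_max: float = 300.0):
    # 1) Clip to an assumed soft-tissue window and scale to [0, 1].
    clipped = np.clip(ct_slice.astype(np.float32), hu_min, hu_max)
    scaled = (clipped - hu_min) / (hu_max - hu_min)
    # 2) Histogram equalization to improve contrast.
    equalized = exposure.equalize_hist(scaled)
    # 3) z-score standardization.
    return (equalized - equalized.mean()) / (equalized.std() + 1e-8)
```

The 4:1 split of step six could then be obtained with, for example, sklearn.model_selection.train_test_split(images, labels, test_size=0.2).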
In order to study the recognition performance of the Faster RCNN network, its ROC curves were plotted and the area under the curve (AUC) was calculated, and the micro-average, macro-average, and weighted average of the accuracy, recall and F1-score of the whole Faster RCNN network were calculated, as shown in Table 2 below.
TABLE 2 (accuracy, recall, F1-score and their micro, macro and weighted averages; the table is reproduced as an image in the original publication)
Table 2 lists the accuracy, recall and F1-score of the Faster RCNN network together with their micro-average, macro-average and weighted average. The experimental results show that the area under the receiver operating characteristic curve for gastric cancer tumor identification by the Faster RCNN network is 0.93 (95% confidence interval 0.90-0.97), which is higher than that of human imaging physicians, indicating that the Faster RCNN network achieves high accuracy in identifying the T stage of gastric cancer enhanced CT images. After the test was completed, the AUC of the results was 0.93, with an accuracy of 0.93 and a specificity of 0.95. The identification accuracy was 90% for T2 stage gastric cancer, 93% for T3 stage gastric cancer, and 95% for T4 stage gastric cancer, showing that the Faster RCNN network has high identification performance for gastric cancer tumors.
FIG. 2a shows the ROC curve of the Faster RCNN network for advanced gastric cancer identification, with an area under the curve (AUC) of 0.93; FIG. 2b shows the ROC curve of the Faster RCNN network for T2 stage gastric cancer identification, with an AUC of 0.90; FIG. 2c shows the ROC curve of the Faster RCNN network for T3 stage gastric cancer identification, with an AUC of 0.93; FIG. 2d shows the ROC curve of the Faster RCNN network for T4 stage gastric cancer identification, with an AUC of 0.95.
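For reference, per-stage one-vs-rest ROC curves with AUC, together with micro-, macro- and weighted-average precision, recall and F1, can be computed with scikit-learn as sketched below; the arrays used are hypothetical placeholders rather than the patent's data.

```python
# Illustrative evaluation sketch with hypothetical predictions (not patent data).
import numpy as np
from sklearn.metrics import roc_curve, auc, precision_recall_fscore_support
from sklearn.preprocessing import label_binarize

classes = ["T2", "T3", "T4"]
y_true = np.array(["T2", "T3", "T4", "T3", "T2", "T4"])           # ground-truth stages
y_score = np.random.default_rng(0).dirichlet(np.ones(3), size=6)   # predicted probabilities

# One-vs-rest ROC curve and AUC per T stage.
y_bin = label_binarize(y_true, classes=classes)
for i, name in enumerate(classes):
    fpr, tpr, _ = roc_curve(y_bin[:, i], y_score[:, i])
    print(f"{name}: AUC = {auc(fpr, tpr):.2f}")

# Micro / macro / weighted averages of precision, recall and F1.
y_pred = np.array(classes)[y_score.argmax(axis=1)]
for avg in ("micro", "macro", "weighted"):
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=classes, average=avg, zero_division=0)
    print(f"{avg}: precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```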
FIG. 3a shows an imaging physician manually marking a T2 stage tumor position in an image according to the pathological results for training and testing of the Faster RCNN network, and FIG. 3b shows the corresponding tumor segmentation and T stage recognition by the Faster RCNN network; FIG. 3c shows an imaging physician manually marking a T3 stage tumor position in an image according to the pathological results, and FIG. 3d shows the corresponding tumor segmentation and T stage recognition by the Faster RCNN network; FIG. 3e shows an imaging physician manually marking a T4 stage tumor position in an image according to the pathological results, and FIG. 3f shows the corresponding tumor segmentation and T stage recognition by the Faster RCNN network. The results in FIGS. 3a-3f show that the Faster RCNN network has high recognition performance for T3 and T4 stage tumors.
The invention provides a training method of a Faster RCNN network for automatic identification of gastric cancer enhanced CT images. The trained Faster RCNN network can identify advanced gastric cancer tumors in enhanced CT images, accurately locate the tumor region, and perform T staging of advanced gastric cancer tumors, with particularly high accuracy for T3 and T4 stage gastric cancer.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A training method of a Faster RCNN network for automatic identification of stomach cancer enhanced CT images, characterized by comprising the following steps:
step one, acquiring advanced gastric cancer images to form a data set;
step two, manually marking the images by using labelImg software, marking the position where gastric cancer tumor cells infiltrate deepest in each image;
step three, extracting the region of interest on each image by using the Faster RCNN network;
step four, preprocessing the images in the data set by applying an image intensity range classification method and a histogram equalization method;
step five, carrying out standardization processing on the preprocessed images;
step six, randomly sampling the standardized images and dividing them into a training set and a test set according to a proportion;
step seven, inputting the images of the training set into the Faster RCNN network, performing multivariate logistic regression analysis, determining the position and shape of the stomach, detecting the position of the gastric cancer tumor, and identifying the position with the deepest infiltration of gastric cancer tumor cells in each image to obtain a segmented tumor result;
step eight, verifying the training set through the test set;
step nine, finishing the training when the prediction effectiveness of the training set reaches a preset value, and reconstructing the training set for training when the prediction effectiveness of the training set is lower than the preset value.
2. The method of claim 1,
in step two, the image is marked manually by using labelImg software, and the distance between the tumor marking frame and the normal gastric wall is within 0.5 cm.
3. The method of claim 1,
in step three, after the region of interest is extracted from the image by the Faster RCNN network, the method further comprises obtaining more images by using data augmentation algorithms to enlarge the data set.
4. The method of claim 3,
the augmentation algorithm comprises cropping or flipping.
5. The method of claim 1,
in step six, random sampling is performed to divide the standardized images into a training set and a test set at a ratio of 4:1.
6. The method of claim 1,
in step one, upper-abdomen enhanced CT venous-phase images are selected as the data set.
7. The method of claim 1,
in step five, z-score standardization processing is performed on the preprocessed images.
CN201910972378.0A 2019-10-14 2019-10-14 A training method of a Faster RCNN network for automatic identification of stomach cancer enhanced CT images Pending CN110738649A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910972378.0A CN110738649A (en) 2019-10-14 2019-10-14 A training method of a Faster RCNN network for automatic identification of stomach cancer enhanced CT images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910972378.0A CN110738649A (en) 2019-10-14 2019-10-14 A training method of a Faster RCNN network for automatic identification of stomach cancer enhanced CT images

Publications (1)

Publication Number Publication Date
CN110738649A 2020-01-31

Family

ID=69270032

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910972378.0A Pending CN110738649A (en) 2019-10-14 2019-10-14 A training method of a Faster RCNN network for automatic identification of stomach cancer enhanced CT images

Country Status (1)

Country Link
CN (1) CN110738649A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112164028A (en) * 2020-09-02 2021-01-01 陈燕铭 Pituitary adenoma magnetic resonance image positioning diagnosis method and device based on artificial intelligence
CN112419452A (en) * 2020-12-24 2021-02-26 福州大学 Rapid merging system and method for PD-L1 digital pathological section images of stomach cancer

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107368671A (en) * 2017-06-07 2017-11-21 万香波 System and method are supported in benign gastritis pathological diagnosis based on big data deep learning
CN109124660A (en) * 2018-06-25 2019-01-04 南方医科大学南方医院 The postoperative risk checking method of gastrointestinal stromal tumor and system based on deep learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107368671A (en) * 2017-06-07 2017-11-21 万香波 System and method are supported in benign gastritis pathological diagnosis based on big data deep learning
CN109124660A (en) * 2018-06-25 2019-01-04 南方医科大学南方医院 The postoperative risk checking method of gastrointestinal stromal tumor and system based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
冯琦 et al.: "Value of multi-slice spiral CT in determining the depth of gastric wall invasion by gastric cancer", Chinese Journal of Medical Imaging Technology *
吴智德 et al.: "Detection of bladder tumor infiltration depth based on texture features of MRI images", Chinese Journal of Biomedical Engineering *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112164028A (en) * 2020-09-02 2021-01-01 陈燕铭 Pituitary adenoma magnetic resonance image positioning diagnosis method and device based on artificial intelligence
CN112419452A (en) * 2020-12-24 2021-02-26 福州大学 Rapid merging system and method for PD-L1 digital pathological section images of stomach cancer
CN112419452B (en) * 2020-12-24 2022-08-23 福州大学 Rapid merging system and method for PD-L1 digital pathological section images of stomach cancer

Similar Documents

Publication Publication Date Title
EP3432784B1 (en) Deep-learning-based cancer classification using a hierarchical classification framework
CN110728239B (en) Gastric cancer enhanced CT image automatic identification system utilizing deep learning
US7646902B2 (en) Computerized detection of breast cancer on digital tomosynthesis mammograms
US8634610B2 (en) System and method for assessing cancer risk
EP2070045B1 (en) Advanced computer-aided diagnosis of lung nodules
CN111553892B (en) Lung nodule segmentation calculation method, device and system based on deep learning
CN110472629B (en) Pathological image automatic identification system based on deep learning and training method thereof
CN114782307A (en) Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning
US20180053297A1 (en) Methods and Apparatuses for Detection of Abnormalities in Low-Contrast Images
CN112071418B (en) Gastric cancer peritoneal metastasis prediction system and method based on enhanced CT image histology
CN113345576A (en) Rectal cancer lymph node metastasis diagnosis method based on deep learning multi-modal CT
Ghantasala et al. Texture recognization and image smoothing for microcalcification and mass detection in abnormal region
Kumar et al. Mammogram image segmentation using SUSAN corner detection
Songsaeng et al. Multi-scale convolutional neural networks for classification of digital mammograms with breast calcifications
Kaur et al. Computer-aided diagnosis of renal lesions in CT images: a comprehensive survey and future prospects
CN114758175A (en) Method, system, equipment and storage medium for classifying esophagus and stomach junction tumor images
CN110738649A (en) A training method of a Faster RCNN network for automatic identification of stomach cancer enhanced CT images
Mastouri et al. A morphological operation-based approach for Sub-pleural lung nodule detection from CT images
CN113838020B (en) Lesion area quantification method based on molybdenum target image
Liu et al. Application of deep learning-based CT texture analysis in TNM staging of gastric cancer
CN114783517A (en) Prediction of RAS gene status of CRLM patients based on imagery omics and semantic features
CN113822873A (en) Bimodal imagery omics image analysis method for lung nodule classification
WO2022153100A1 (en) A method for detecting breast cancer using artificial neural network
Cristian et al. Lung Cancer Diagnosis based on Ultrasound image processing
US20240242845A1 (en) 2024-07-18 Methods and models for identifying breast lesions

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200131

RJ01 Rejection of invention patent application after publication