CN114758175A - Method, system, equipment and storage medium for classifying esophagus and stomach junction tumor images


Info

Publication number
CN114758175A
CN114758175A (application CN202210398698.1A)
Authority
CN
China
Prior art keywords: image, wavelet, score, dimensional ROI, firstorder
Prior art date
Legal status: Pending
Application number
CN202210398698.1A
Other languages
Chinese (zh)
Inventor
黄文鹏
高剑波
李莉明
刘晨晨
周宇涵
Current Assignee
First Affiliated Hospital of Zhengzhou University
Original Assignee
First Affiliated Hospital of Zhengzhou University
Priority date
Filing date
Publication date
Application filed by First Affiliated Hospital of Zhengzhou University
Priority to CN202210398698.1A
Publication of CN114758175A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/0012: Biomedical image inspection
    • G06T7/11: Region-based segmentation
    • G06T2207/10081: Computed x-ray tomography [CT]
    • G06T2207/20081: Training; Learning
    • G06T2207/20104: Interactive definition of region of interest [ROI]
    • G06T2207/30092: Stomach; Gastric
    • G06T2207/30096: Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Graphics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Geometry (AREA)

Abstract

The invention relates to the technical field of computer vision and medical image analysis, and particularly discloses a method, a system, a device and a storage medium for classifying tumor images of the esophagogastric junction. The classification method comprises the following steps: S1, processing a medical tumor image of the subject's esophagogastric junction to obtain a three-dimensional ROI image of the lesion; S2, extracting radiomics features from the three-dimensional ROI image; S3, inputting the values of the radiomics features extracted in step S2 into a score prediction model and calculating the image score of the three-dimensional ROI image; and S4, qualitatively analyzing the image score obtained in step S3 to predict the image type of the medical tumor image. The classification method can predict preoperatively whether an esophagogastric junction tumor is an adenocarcinoma or a squamous carcinoma, with a predicted AUC above 0.8 and therefore high accuracy; moreover, the method is noninvasive before operation.

Description

Method, system, equipment and storage medium for classifying esophagus and stomach junction tumor images
Technical Field
The invention relates to the technical field of artificial intelligence and medical image analysis, and in particular to a method, a system, a device and a storage medium for classifying tumor images of the esophagogastric junction.
Background
In recent years, the worldwide incidence of gastric cancer has gradually decreased, while the incidence of esophagogastric junction (EGJ) cancer has increased year by year. The EGJ is the virtual anatomic boundary between the tubular esophagus and the saccular stomach. EGJ cancer is defined as a cancer whose tumor center lies within 5 cm above or below this anatomic boundary and that spans or contacts the EGJ. Its biological behavior differs from that of gastric or esophageal cancer; as a special tumor type, it is more prone to lymph node and hematogenous metastasis, is mostly diagnosed at an advanced stage, and has a poorer prognosis. The widely accepted Siewert classification divides EGJ cancers into types I, II and III according to the distance between the tumor center and the EGJ, largely corresponding to lower esophageal cancer, cardia cancer and subcardial cancer in the traditional sense. Siewert type I is predominantly squamous carcinoma, while adenocarcinoma predominates in Siewert types II and III.
Because the EGJ region is short and narrow and the tumor invades both upward and downward, the distance between the tumor center and the EGJ is difficult to measure accurately for invasively growing tumors, so the Siewert type is not easy to determine directly. Distinguishing EGJ squamous carcinoma from adenocarcinoma can suggest the Siewert type, help determine the surgical approach and the extent of lymph node dissection for EGJ patients, and support individualized, precise treatment planning. With the development of neoadjuvant therapy, determining the pathological type of EGJ cancer in the perioperative period is particularly important: it guides the selection of neoadjuvant radiotherapy and chemotherapy regimens, informs clinical treatment, and benefits prognosis. However, pathological tissue sections can only be obtained after surgery. Although invasive, histologically confirmed biopsy is often used in clinical practice, some patients have contraindications or cannot tolerate it, and biopsy samples are limited to the mucosal surface, which may be insufficient for evaluating the whole tumor. Therefore, exploring a reliable, practical and noninvasive preoperative method for predicting EGJ histological type has important clinical significance for perioperative treatment, selection of the surgical approach, and the extent of lymph node dissection in EGJ patients. Clinically, computed tomography (CT) images are highly valuable for the diagnosis and staging of gastric and esophageal cancers and are already widely available in most hospitals in China.
The 8th edition of the American Joint Committee on Cancer (AJCC) staging manual recommends CT as the main means of imaging-based gastric cancer staging and proposes a CT-based cTNM staging system for gastric cancer, and the Chinese Society of Clinical Oncology (CSCO) gastric cancer guidelines recommend CT for evaluating pre-treatment staging and the efficacy of chemoradiotherapy and targeted therapy. However, conventional CT images mostly reflect morphological features, rely mainly on visual evaluation by radiologists, and are of limited use for assessing histological type. If CT could provide more information to distinguish EGJ squamous carcinoma from adenocarcinoma, clinicians could give patients and their families more information about diagnosis, treatment options, prognosis and cost early in the treatment phase, in preparation for subsequent treatment.
Radiomics, through collaboration between medicine and engineering, segments tumors, extracts features and builds models; by transforming the original images and computing feature matrices, it converts traditional images into high-dimensional, deep quantitative features, mining the underlying biological characteristics and heterogeneity of tumor images more thoroughly. It has been widely and noninvasively applied to disease diagnosis, differential diagnosis, efficacy evaluation and prediction of clinical outcomes, with high accuracy, and offers great opportunities for tumor imaging research. Radiomics studies on the differential diagnosis of lung squamous carcinoma and adenocarcinoma already exist. However, research on distinguishing EGJ squamous carcinoma from adenocarcinoma remains limited at home and abroad.
Disclosure of Invention
In view of the problems in the prior art, the present invention is directed to a method, a system, a device and a storage medium for classifying esophagogastric junction tumor images.
To achieve this objective, the invention adopts the following technical scheme:
In a first aspect, the invention provides a method for classifying an esophagogastric junction tumor image, comprising the following steps:
s1, processing the tumor medical image of the esophageal-gastric junction of the subject to obtain a three-dimensional ROI regional image of the tumor medical image focus;
s2, extracting the imagery omics characteristics in the three-dimensional ROI area image;
s3, inputting the values of the image omics characteristics extracted in the step S2 into a score prediction model, and calculating to obtain the image score of the three-dimensional ROI area image;
and S4, qualitatively analyzing the image scores obtained in the step S3, and predicting the image types of the tumor medical images.
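The four steps above can be sketched end to end. The following is a minimal, hypothetical illustration only: the function name, the single placeholder feature, and the toy coefficients and threshold are all stand-ins, not the patent's actual implementation (which uses the specific radiomics features and fitted coefficients given below).

```python
import numpy as np

def classify_egj_tumor(volume, mask, coefs, intercept, threshold):
    """Toy end-to-end sketch of steps S1-S4 (illustrative stand-in only)."""
    roi = volume * mask                                   # S1: isolate the 3D ROI
    feats = {"mean_intensity": roi[mask > 0].mean()}      # S2: placeholder radiomics feature
    score = intercept + sum(coefs[k] * feats[k] for k in coefs)  # S3: linear score model
    return "squamous carcinoma" if score >= threshold else "adenocarcinoma"  # S4

# Toy volume: a bright lesion inside a dark background.
vol = np.zeros((4, 4, 4)); vol[1:3, 1:3, 1:3] = 2.0
msk = np.zeros((4, 4, 4)); msk[1:3, 1:3, 1:3] = 1.0
label = classify_egj_tumor(vol, msk, {"mean_intensity": 1.0}, 0.0, 1.0)
```

The real pipeline replaces the placeholder feature with the selected radiomics features and the toy coefficients with the fitted Rad-score models described in the following paragraphs.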
According to the above method for classifying the esophagogastric junction tumor image, preferably, the three-dimensional ROI image is an arterial-phase three-dimensional ROI image and/or a venous-phase three-dimensional ROI image.
According to the above method for classifying an esophagogastric junction tumor image, preferably, when the three-dimensional ROI image is an arterial-phase three-dimensional ROI image, the radiomics features in step S2 are:
log.sigma.1.0.mm.3D_ngtdm_Busyness,
log.sigma.3.0.mm.3D_gldm_DependenceVariance,
log.sigma.3.0.mm.3D_ngtdm_Busyness,
original_firstorder_Median,
wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis,
wavelet.HLH_ngtdm_Busyness,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_firstorder_Skewness,
wavelet.LLH_glszm_LargeAreaEmphasis,
wavelet.LLL_firstorder_InterquartileRange.
According to the above method for classifying an esophagogastric junction tumor image, preferably, when the three-dimensional ROI image is a venous-phase three-dimensional ROI image, the radiomics features in step S2 are:
log.sigma.1.0.mm.3D_firstorder_90Percentile,
original_firstorder_Median,
wavelet.HLH_glcm_ClusterProminence,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_ngtdm_Busyness,
wavelet.LLH_firstorder_Kurtosis,
wavelet.LLH_gldm_DependenceVariance.
According to the above method for classifying an esophagogastric junction tumor image, preferably, when the three-dimensional ROI images are an arterial-phase three-dimensional ROI image and a venous-phase three-dimensional ROI image, the radiomics features of the two images are extracted separately in step S2.
The radiomics features extracted from the arterial-phase three-dimensional ROI image are:
log.sigma.1.0.mm.3D_ngtdm_Busyness,
log.sigma.3.0.mm.3D_gldm_DependenceVariance,
log.sigma.3.0.mm.3D_ngtdm_Busyness,
original_firstorder_Median,
wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis,
wavelet.HLH_ngtdm_Busyness,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_firstorder_Skewness,
wavelet.LLH_glszm_LargeAreaEmphasis,
wavelet.LLL_firstorder_InterquartileRange;
The radiomics features extracted from the venous-phase three-dimensional ROI image are:
log.sigma.1.0.mm.3D_firstorder_90Percentile,
original_firstorder_Median,
wavelet.HLH_glcm_ClusterProminence,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_ngtdm_Busyness,
wavelet.LLH_firstorder_Kurtosis,
wavelet.LLH_gldm_DependenceVariance.
according to the above method for classifying an esophageal-gastric junction tumor image, preferably, when the three-dimensional ROI region image is an arterial three-dimensional ROI region image, the score prediction model in step S3 is an arterial score prediction model, and a calculation formula of the arterial score prediction model for calculating an image score is as follows:
Rad-scoreAP=0.266-0.852×log.sigma.1.0.mm.3D_ngtdm_Busyness+0.708×log.sigma.3.0.mm.3D_gldm_DependenceVariance+0.360×log.sigma.3.0.mm.3D_ngtdm_Busyness-0.830×original_firstorder_Median-1.160×wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis-1.122×wavelet.HLH_ngtdm_Busyness+0.656×wavelet.HLL_gldm_DependenceVariance+0.715×wavelet.LHH_firstorder_Skewness+2.398×wavelet.LLH_glszm_LargeAreaEmphasis+0.777×wavelet.LLL_firstorder_InterquartileRange;
among them, Rad-scoreAPImage scores representing three-dimensional ROI region images of the arterial phase.
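The arterial-phase score is a plain linear model, so it can be evaluated directly from a dictionary of feature values. The sketch below copies the coefficients from the formula above; note that how the feature values are standardized before entering the model is not specified in this passage, so raw values are used purely for illustration.

```python
# Coefficients of the arterial-phase score model, copied from the patent formula.
AP_INTERCEPT = 0.266
AP_COEFS = {
    "log.sigma.1.0.mm.3D_ngtdm_Busyness": -0.852,
    "log.sigma.3.0.mm.3D_gldm_DependenceVariance": 0.708,
    "log.sigma.3.0.mm.3D_ngtdm_Busyness": 0.360,
    "original_firstorder_Median": -0.830,
    "wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis": -1.160,
    "wavelet.HLH_ngtdm_Busyness": -1.122,
    "wavelet.HLL_gldm_DependenceVariance": 0.656,
    "wavelet.LHH_firstorder_Skewness": 0.715,
    "wavelet.LLH_glszm_LargeAreaEmphasis": 2.398,
    "wavelet.LLL_firstorder_InterquartileRange": 0.777,
}

def rad_score(features, coefs, intercept):
    """Linear radiomics score: intercept plus the coefficient-weighted sum."""
    return intercept + sum(c * features[name] for name, c in coefs.items())

# With every feature at zero the score reduces to the intercept (0.266).
score = rad_score({k: 0.0 for k in AP_COEFS}, AP_COEFS, AP_INTERCEPT)
```

The same helper serves the venous-phase model by swapping in its coefficient table and intercept (0.047).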
According to the above method for classifying an esophagogastric junction tumor image, preferably, when the three-dimensional ROI image is a venous-phase three-dimensional ROI image, the score prediction model in step S3 is a venous-phase score prediction model, and its image score is calculated as:
Rad-score_VP = 0.047 + 0.760 × log.sigma.1.0.mm.3D_firstorder_90Percentile - 1.030 × original_firstorder_Median + 0.395 × wavelet.HLH_glcm_ClusterProminence + 1.333 × wavelet.HLL_gldm_DependenceVariance + 0.746 × wavelet.LHH_ngtdm_Busyness + 0.381 × wavelet.LLH_firstorder_Kurtosis + 0.409 × wavelet.LLH_gldm_DependenceVariance;
where Rad-score_VP represents the image score of the venous-phase three-dimensional ROI image.
According to the above method for classifying an esophagogastric junction tumor image, preferably, when the three-dimensional ROI images are an arterial-phase three-dimensional ROI image and a venous-phase three-dimensional ROI image, the score prediction model in step S3 is a joint score prediction model, and its image score is calculated as:
Rad-score = 0.009 + 0.671 × Rad-score_AP + 0.621 × Rad-score_VP;
where
Rad-score_AP = 0.266 - 0.852 × log.sigma.1.0.mm.3D_ngtdm_Busyness + 0.708 × log.sigma.3.0.mm.3D_gldm_DependenceVariance + 0.360 × log.sigma.3.0.mm.3D_ngtdm_Busyness - 0.830 × original_firstorder_Median - 1.160 × wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis - 1.122 × wavelet.HLH_ngtdm_Busyness + 0.656 × wavelet.HLL_gldm_DependenceVariance + 0.715 × wavelet.LHH_firstorder_Skewness + 2.398 × wavelet.LLH_glszm_LargeAreaEmphasis + 0.777 × wavelet.LLL_firstorder_InterquartileRange;
Rad-score_VP = 0.047 + 0.760 × log.sigma.1.0.mm.3D_firstorder_90Percentile - 1.030 × original_firstorder_Median + 0.395 × wavelet.HLH_glcm_ClusterProminence + 1.333 × wavelet.HLL_gldm_DependenceVariance + 0.746 × wavelet.LHH_ngtdm_Busyness + 0.381 × wavelet.LLH_firstorder_Kurtosis + 0.409 × wavelet.LLH_gldm_DependenceVariance;
and Rad-score represents the joint image score, Rad-score_AP the image score of the arterial-phase three-dimensional ROI image, and Rad-score_VP the image score of the venous-phase three-dimensional ROI image.
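When both phases are available, the joint score is itself just a linear combination of the two per-phase scores. A minimal sketch of that final step, with the per-phase scores taken as already computed:

```python
def rad_score_joint(score_ap, score_vp):
    """Joint radiomics score combining the arterial-phase and venous-phase
    scores with the coefficients from the patent's joint formula."""
    return 0.009 + 0.671 * score_ap + 0.621 * score_vp

# Example: phase scores of +1 and -1 nearly cancel, leaving close to the intercept.
joint = rad_score_joint(1.0, -1.0)  # 0.009 + 0.671 - 0.621 = 0.059
```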
According to the above method for classifying the esophagogastric junction tumor image, preferably, qualitatively analyzing the image score obtained in step S3 to predict the image type of the medical tumor image in step S4 comprises:
comparing the image score obtained in step S3 with a preset threshold, and classifying the medical tumor image as esophagogastric junction squamous carcinoma or esophagogastric junction adenocarcinoma according to whether the score is above or below the threshold.
According to the above method for classifying the esophagogastric junction tumor image, preferably, when the three-dimensional ROI image is an arterial-phase three-dimensional ROI image, the preset threshold is -0.149: when the image score is greater than or equal to -0.149, the medical tumor image is classified as esophagogastric junction squamous carcinoma; otherwise, it is classified as esophagogastric junction adenocarcinoma.
According to the above method for classifying the esophagogastric junction tumor image, preferably, when the three-dimensional ROI image is a venous-phase three-dimensional ROI image, the preset threshold is -0.352: when the image score is greater than or equal to -0.352, the medical tumor image is classified as esophagogastric junction squamous carcinoma; otherwise, it is classified as esophagogastric junction adenocarcinoma.
According to the above method for classifying the esophagogastric junction tumor image, preferably, when the three-dimensional ROI images are an arterial-phase three-dimensional ROI image and a venous-phase three-dimensional ROI image, the preset threshold is 0.043: when the image score is greater than or equal to 0.043, the medical tumor image is classified as esophagogastric junction squamous carcinoma; otherwise, it is classified as esophagogastric junction adenocarcinoma.
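The three decision rules above differ only in their cut-off. A small sketch of the thresholding step (the dictionary keys "AP", "VP" and "joint" are illustrative labels, not terminology from the patent):

```python
# Preset thresholds from the patent: arterial phase, venous phase, joint model.
THRESHOLDS = {"AP": -0.149, "VP": -0.352, "joint": 0.043}

def classify_score(score, model):
    """Scores at or above the preset threshold are called esophagogastric
    junction squamous carcinoma; scores below it, adenocarcinoma."""
    cutoff = THRESHOLDS[model]
    return "squamous carcinoma" if score >= cutoff else "adenocarcinoma"
```

Note that the boundary value itself is classified as squamous carcinoma, matching the "greater than or equal to" wording above.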
According to the above method for classifying an esophageal-gastric junction tumor image, the medical image is preferably a CT image, an enhanced CT image or an MRI image. More preferably, the medical image is an enhanced CT image.
According to the above method for classifying the esophagogastric junction tumor image, preferably, in step S1, processing the medical tumor image of the subject's esophagogastric junction to obtain a three-dimensional ROI image of the lesion comprises: segmenting a two-dimensional ROI image of the lesion in the medical tumor image of the esophagogastric junction, and three-dimensionally reconstructing the segmented two-dimensional ROI images to obtain the three-dimensional ROI image. More preferably, ITK-SNAP software (version 3.6.0, www.itksnap.org) is used to segment the two-dimensional ROI image of the lesion in the medical tumor image of the esophagogastric junction.
According to the above method for classifying the esophagogastric junction tumor image, before ITK-SNAP is used to segment the two-dimensional ROI images of the lesion, the medical tumor images of the esophagogastric junction need to be homogenized. Specifically, the images are isotropically resampled to a voxel size of 1 mm × 1 mm × 1 mm using the linear interpolation algorithm in the Artificial Intelligence Kit software (A.K, version 3.3.0.R, GE Healthcare, USA). The goal of this homogenization is to minimize the effect of variations in scanning protocol or equipment on the heterogeneity of the quantitative radiomics features.
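The patent performs this resampling with the A.K software. Purely to illustrate what isotropic resampling by linear interpolation does, here is a NumPy-only sketch; it is a simplified stand-in, not A.K's algorithm, and it resamples on the voxel-index grid without handling image origin or orientation.

```python
import numpy as np

def resample_isotropic(vol, spacing, new_spacing=(1.0, 1.0, 1.0)):
    """Resample a 3D volume to (near-)isotropic voxels by separable linear
    interpolation along each axis. Simplified illustrative stand-in."""
    old_shape = np.array(vol.shape)
    spacing = np.asarray(spacing, dtype=float)
    new_spacing = np.asarray(new_spacing, dtype=float)
    new_shape = np.maximum(np.round(old_shape * spacing / new_spacing).astype(int), 1)
    out = vol.astype(float)
    for axis in range(3):
        # Map the new voxel indices back onto the old index grid for this axis.
        c = np.linspace(0.0, old_shape[axis] - 1.0, new_shape[axis])
        lo = np.floor(c).astype(int)
        hi = np.minimum(lo + 1, old_shape[axis] - 1)
        w = (c - lo).reshape([-1 if a == axis else 1 for a in range(3)])
        out = np.take(out, lo, axis=axis) * (1 - w) + np.take(out, hi, axis=axis) * w
    return out
```

For example, a volume acquired at 2 mm slice spacing along the first axis doubles its slice count when resampled to 1 mm, with intermediate slices linearly interpolated.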
According to the above method for classifying the esophagogastric junction tumor image, preferably, in step S2, the PyRadiomics platform is used to extract the radiomics features from the three-dimensional ROI image; PyRadiomics is an open-source Python package loaded by the A.K software.
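A PyRadiomics parameter file that enables the image types implied by the feature names above (original image, Laplacian-of-Gaussian at sigma 1.0 mm and 3.0 mm, and wavelet decompositions) might look as follows. This is a hedged sketch: the patent does not publish its extraction settings, so everything beyond the image types and the 1 mm spacing is an assumption.

```yaml
# Hypothetical PyRadiomics parameter file. Only the image types and the
# 1 mm resampling are implied by the patent; the rest is assumed.
imageType:
  Original: {}
  LoG:
    sigma: [1.0, 3.0]   # matches the log.sigma.1.0.mm / log.sigma.3.0.mm features
  Wavelet: {}           # yields the wavelet.HLH, wavelet.LLL, ... decompositions
featureClass:
  firstorder: []        # an empty list enables all features in the class
  glcm: []
  glrlm: []
  glszm: []
  gldm: []
  ngtdm: []
setting:
  resampledPixelSpacing: [1, 1, 1]   # 1 mm isotropic, as in the homogenization step
  interpolator: sitkLinear
```

Such a file would be passed to `radiomics.featureextractor.RadiomicsFeatureExtractor` together with the image and ROI mask to produce the named features.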
In a second aspect, the invention provides a prediction system for classifying esophagogastric junction tumor images, comprising an image processing module, a feature extraction module, a score prediction module and an image classification module. The image processing module processes the medical tumor image of the subject's esophagogastric junction to obtain a three-dimensional ROI image of the lesion; the feature extraction module extracts the radiomics features from the three-dimensional ROI image; the score prediction module, which contains a built-in score prediction model, calculates the image score of the three-dimensional ROI image from the radiomics features extracted by the feature extraction module; and the image classification module qualitatively analyzes the image score to predict the image type of the medical tumor image.
According to the above prediction system, preferably, the ROI region image is an arterial three-dimensional ROI region image and/or a venous three-dimensional ROI region image.
According to the above prediction system, preferably, when the three-dimensional ROI image is an arterial-phase three-dimensional ROI image, the radiomics features extracted by the feature extraction module are:
log.sigma.1.0.mm.3D_ngtdm_Busyness,
log.sigma.3.0.mm.3D_gldm_DependenceVariance,
log.sigma.3.0.mm.3D_ngtdm_Busyness,
original_firstorder_Median,
wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis,
wavelet.HLH_ngtdm_Busyness,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_firstorder_Skewness,
wavelet.LLH_glszm_LargeAreaEmphasis,
wavelet.LLL_firstorder_InterquartileRange.
According to the above prediction system, preferably, when the three-dimensional ROI image is a venous-phase three-dimensional ROI image, the radiomics features extracted by the feature extraction module are:
log.sigma.1.0.mm.3D_firstorder_90Percentile,
original_firstorder_Median,
wavelet.HLH_glcm_ClusterProminence,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_ngtdm_Busyness,
wavelet.LLH_firstorder_Kurtosis,
wavelet.LLH_gldm_DependenceVariance.
According to the above prediction system, preferably, when the three-dimensional ROI images are an arterial-phase three-dimensional ROI image and a venous-phase three-dimensional ROI image, the radiomics features of the two images are extracted separately.
The radiomics features extracted from the arterial-phase three-dimensional ROI image are:
log.sigma.1.0.mm.3D_ngtdm_Busyness,
log.sigma.3.0.mm.3D_gldm_DependenceVariance,
log.sigma.3.0.mm.3D_ngtdm_Busyness,
original_firstorder_Median,
wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis,
wavelet.HLH_ngtdm_Busyness,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_firstorder_Skewness,
wavelet.LLH_glszm_LargeAreaEmphasis,
wavelet.LLL_firstorder_InterquartileRange;
The radiomics features extracted from the venous-phase three-dimensional ROI image are:
log.sigma.1.0.mm.3D_firstorder_90Percentile,
original_firstorder_Median,
wavelet.HLH_glcm_ClusterProminence,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_ngtdm_Busyness,
wavelet.LLH_firstorder_Kurtosis,
wavelet.LLH_gldm_DependenceVariance.
According to the above prediction system, preferably, when the three-dimensional ROI image is an arterial-phase three-dimensional ROI image, the score prediction model is an arterial-phase score prediction model, and its image score is calculated as:
Rad-score_AP = 0.266 - 0.852 × log.sigma.1.0.mm.3D_ngtdm_Busyness + 0.708 × log.sigma.3.0.mm.3D_gldm_DependenceVariance + 0.360 × log.sigma.3.0.mm.3D_ngtdm_Busyness - 0.830 × original_firstorder_Median - 1.160 × wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis - 1.122 × wavelet.HLH_ngtdm_Busyness + 0.656 × wavelet.HLL_gldm_DependenceVariance + 0.715 × wavelet.LHH_firstorder_Skewness + 2.398 × wavelet.LLH_glszm_LargeAreaEmphasis + 0.777 × wavelet.LLL_firstorder_InterquartileRange;
where Rad-score_AP represents the image score of the arterial-phase three-dimensional ROI image.
According to the above prediction system, preferably, when the three-dimensional ROI image is a venous-phase three-dimensional ROI image, the score prediction model is a venous-phase score prediction model, and its image score is calculated as:
Rad-score_VP = 0.047 + 0.760 × log.sigma.1.0.mm.3D_firstorder_90Percentile - 1.030 × original_firstorder_Median + 0.395 × wavelet.HLH_glcm_ClusterProminence + 1.333 × wavelet.HLL_gldm_DependenceVariance + 0.746 × wavelet.LHH_ngtdm_Busyness + 0.381 × wavelet.LLH_firstorder_Kurtosis + 0.409 × wavelet.LLH_gldm_DependenceVariance;
where Rad-score_VP represents the image score of the venous-phase three-dimensional ROI image.
According to the above prediction system, preferably, when the three-dimensional ROI images are an arterial-phase three-dimensional ROI image and a venous-phase three-dimensional ROI image, the score prediction model is a joint score prediction model, and its image score is calculated as:
Rad-score = 0.009 + 0.671 × Rad-score_AP + 0.621 × Rad-score_VP;
where
Rad-score_AP = 0.266 - 0.852 × log.sigma.1.0.mm.3D_ngtdm_Busyness + 0.708 × log.sigma.3.0.mm.3D_gldm_DependenceVariance + 0.360 × log.sigma.3.0.mm.3D_ngtdm_Busyness - 0.830 × original_firstorder_Median - 1.160 × wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis - 1.122 × wavelet.HLH_ngtdm_Busyness + 0.656 × wavelet.HLL_gldm_DependenceVariance + 0.715 × wavelet.LHH_firstorder_Skewness + 2.398 × wavelet.LLH_glszm_LargeAreaEmphasis + 0.777 × wavelet.LLL_firstorder_InterquartileRange;
Rad-score_VP = 0.047 + 0.760 × log.sigma.1.0.mm.3D_firstorder_90Percentile - 1.030 × original_firstorder_Median + 0.395 × wavelet.HLH_glcm_ClusterProminence + 1.333 × wavelet.HLL_gldm_DependenceVariance + 0.746 × wavelet.LHH_ngtdm_Busyness + 0.381 × wavelet.LLH_firstorder_Kurtosis + 0.409 × wavelet.LLH_gldm_DependenceVariance;
and Rad-score represents the joint image score, Rad-score_AP the image score of the arterial-phase three-dimensional ROI image, and Rad-score_VP the image score of the venous-phase three-dimensional ROI image.
According to the above prediction system, preferably, the image classification module predicts the image type of the medical tumor image by comparing the image score of the three-dimensional ROI image with a preset threshold and classifying the image as esophagogastric junction squamous carcinoma or esophagogastric junction adenocarcinoma according to whether the score is above or below the threshold.
According to the above prediction system, preferably, when the three-dimensional ROI image is an arterial-phase three-dimensional ROI image, the preset threshold is -0.149: when the image score is greater than or equal to -0.149, the medical tumor image is classified as esophagogastric junction squamous carcinoma; otherwise, it is classified as esophagogastric junction adenocarcinoma.
According to the above prediction system, preferably, when the three-dimensional ROI image is a venous-phase three-dimensional ROI image, the preset threshold is -0.352: when the image score is greater than or equal to -0.352, the medical tumor image is classified as esophagogastric junction squamous carcinoma; otherwise, it is classified as esophagogastric junction adenocarcinoma.
According to the above prediction system, preferably, when the three-dimensional ROI images are an arterial-phase three-dimensional ROI image and a venous-phase three-dimensional ROI image, the preset threshold is 0.043: when the image score is greater than or equal to 0.043, the medical tumor image is classified as esophagogastric junction squamous carcinoma; otherwise, it is classified as esophagogastric junction adenocarcinoma.
According to the above prediction system, preferably, the medical image is a CT image, an enhanced CT image or an MRI image. More preferably, the medical image is an enhanced CT image.
According to the above prediction system, preferably, the image processing module uses ITK-SNAP software (version 3.6.0, www.itksnap.org) to segment a two-dimensional ROI image of the lesion in the medical tumor image of the esophagogastric junction, and then three-dimensionally reconstructs the segmented two-dimensional ROI images to obtain the three-dimensional ROI image.
According to the above prediction system, preferably, the feature extraction module uses the PyRadiomics platform to extract the radiomics features from the three-dimensional ROI image; PyRadiomics is an open-source Python package loaded by the A.K software.
In a third aspect, the present invention provides a medical image processing apparatus, which includes a memory, at least one processor, and an application program stored in the memory and readable by the processor, wherein the processor is configured to execute the application program to implement the method for classifying an image of a tumor at an esophagogastric junction according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and the computer program is loaded by a processor to execute the method for classifying an image of a tumor at an esophagogastric junction according to the first aspect.
Compared with the prior art, the invention has the following positive beneficial effects:
(1) According to the invention, the enhanced CT image of a tumor lesion at the esophagogastric junction is processed to obtain a three-dimensional ROI region image of the lesion, the image omics features of the three-dimensional ROI region image are extracted with PyRadiomics, and the tumor type at the esophagogastric junction is then predicted from the extracted features. With this method, whether a tumor at the esophagogastric junction is an adenocarcinoma or a squamous carcinoma can be predicted before surgery, with an AUC above 0.8, i.e., with high accuracy. Moreover, the method is non-invasive, offers a new preoperative approach to diagnosing the tumor type at the esophagogastric junction in current clinical practice, and spares patients the pain and economic burden of invasive diagnosis. As an auxiliary image omics diagnostic technique, the invention is non-invasive, intuitive, easy to operate and highly accurate in prediction; it makes up for the shortcomings of current routine diagnostic methods and can provide a non-invasive preoperative prediction scheme for the clinical diagnosis of patients with tumors at the esophagogastric junction, thereby giving clinicians more, and more accurate, decision information before surgery and supporting the formulation of precise preoperative treatment plans.
(2) The image omics features extracted from the three-dimensional ROI region image are all derived from the tumor target lesion and are not affected by individual differences such as age and sex; the method is therefore broadly applicable and gives a stable evaluation in different settings.
(3) The three-dimensional ROI region image presents the feature information of the tumor target lesion comprehensively from a three-dimensional perspective; the method therefore extracts the image omics features of the tumor target lesion directly from the three-dimensional ROI region image and overcomes the failure of conventional methods to account for tumor heterogeneity.
(4) Based on the enhanced CT image of a tumor lesion at the esophagogastric junction, the prediction system can predict before surgery whether the tumor is an adenocarcinoma or a squamous carcinoma, with an AUC above 0.8, i.e., with high accuracy. Moreover, the system is non-invasive, offers a new preoperative approach to diagnosing the tumor type at the esophagogastric junction in current clinical practice, and spares patients the pain and economic burden of invasive diagnosis.
Drawings
FIG. 1 shows images of the tumor site of a patient with squamous carcinoma of the gastroesophageal junction; A: axial CT venous-phase image; B: schematic of the two-dimensional ROI region image delineated in ITK-SNAP software; C: coronal CT venous-phase image after the two-dimensional ROI region images delineated in ITK-SNAP software were fused into a three-dimensional ROI region image; D: postoperative pathological image, confirmed as squamous carcinoma of the gastroesophageal junction (HE × 40);
FIG. 2 shows images of the tumor site of a patient with adenocarcinoma of the gastroesophageal junction; A: axial CT venous-phase image; B: schematic of the two-dimensional ROI region image delineated in ITK-SNAP software; C: coronal CT venous-phase image after the two-dimensional ROI region images delineated in ITK-SNAP software were fused into a three-dimensional ROI region image; D: postoperative pathological image, confirmed as adenocarcinoma of the gastroesophageal junction (HE × 40);
FIG. 3 is a graph of the results of feature screening of arterial phase enhanced CT images using LASSO regression;
FIG. 4 is a graph of the results of feature screening of a venous-phase enhanced CT image using LASSO regression;
FIG. 5 shows ROC curves of the arterial-phase score prediction model Rad-score_AP, the venous-phase score prediction model Rad-score_VP and the combined score prediction model Rad-score for discriminating squamous carcinoma from adenocarcinoma of the gastroesophageal junction; A: training group; B: validation group;
FIG. 6 shows ROC curves of the individual image omics features of the arterial-phase score prediction model Rad-score_AP for predicting squamous carcinoma and adenocarcinoma of the gastroesophageal junction; A: training group; B: validation group;
FIG. 7 shows ROC curves of the individual image omics features of the venous-phase score prediction model Rad-score_VP for predicting squamous carcinoma and adenocarcinoma of the gastroesophageal junction; A: training group; B: validation group;
FIG. 8 shows calibration curves of the arterial-phase score prediction model Rad-score_AP, the venous-phase score prediction model Rad-score_VP and the combined score prediction model Rad-score for discriminating squamous carcinoma from adenocarcinoma of the gastroesophageal junction; A: training set; B: validation set; the 45-degree diagonal represents ideal calibration, and the closer a model's calibration curve lies to this line, the better the agreement between the model's predicted probability and the actual probability;
FIG. 9 shows decision curves of the arterial-phase score prediction model Rad-score_AP, the venous-phase score prediction model Rad-score_VP and the combined score prediction model Rad-score; A: training set; B: validation set; the X-axis is the risk-threshold range and the Y-axis the net benefit; "NONE" assumes that no lesion is squamous carcinoma, and "ALL" that all lesions are squamous carcinoma; the farther a curve lies from both "NONE" and "ALL", the higher the net benefit of the model compared with those two assumptions; when the decision curves of different models are compared at the same threshold probability, a larger area under the curve indicates a higher net benefit of the model.
Detailed Description
The technical solutions of the invention are described clearly and completely below with reference to its embodiments. It should be noted that the described embodiments are only a part of the embodiments of the invention, not all of them. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the invention.
The statistical methods employed in the following examples are as follows. Statistical analysis was performed with R software (version 3.6.3, http://www.r-project.org). The image omics features and the image omics score Rad-score are continuous variables; normality was tested with kurtosis and skewness values, and the two groups were compared with the independent-samples t test (normally distributed data) or the Mann-Whitney U test (non-normally distributed data). Categorical variables were tested with the chi-square test or Fisher's exact test. AUCs of different models were compared with the Delong test. Two-sided P < 0.05 was considered statistically significant. The optimism of the prediction accuracy of the training-set models was checked with 1000 bootstrap resamples. The R toolkits and functions used were mainly: the "ic" package for calculating the intraclass correlation coefficients (ICCs), the "glmnet" package for logistic regression (including LASSO regression), the "pROC" package for ROC analysis, the "rmda" package for decision-curve analysis, the "calibrate" function of the "rms" package for calibration-curve analysis, and the "PredictABEL" package for NRI and IDI analysis.
Example 1:
a method for classifying an esophagus and stomach junction tumor image comprises the following steps:
s1, processing the enhanced CT image of the tumor of the esophageal-gastric junction of the subject to obtain a three-dimensional ROI regional image of the tumor medical image focus; the enhanced CT image is an enhanced CT image in an arterial phase, and the three-dimensional ROI area image is a three-dimensional ROI area image in the arterial phase;
s2, extracting the imagery omics characteristics in the three-dimensional ROI area image;
s3, inputting the values of the image omics characteristics extracted in the step S2 into a score prediction model, and calculating to obtain the image score of the three-dimensional ROI area image;
and S4, carrying out qualitative analysis on the image scores obtained in the step S3, and predicting the image types of the tumor medical images.
In step S1, the tumor medical image of the esophagogastric junction of the subject is processed to acquire a three-dimensional ROI region image of the lesion as follows: a two-dimensional ROI region image of the lesion is segmented in the arterial-phase enhanced CT image of the tumor at the esophagogastric junction, and the segmented two-dimensional ROI region image is then reconstructed in three dimensions to obtain the arterial-phase three-dimensional ROI region image. For the segmentation, 1 abdominal imaging diagnostician (5 years of diagnostic experience) used ITK-SNAP software (version 3.6.0, www.itksnap.org) to select, on the axial arterial-phase enhanced CT image, the slice with the largest cross-sectional area of the lesion and to delineate the ROI along the lesion edge (avoiding the gastric cavity, gastric contents, adipose tissue around the gastric wall, macroscopic blood vessels and the like), thereby segmenting the two-dimensional ROI region image. In addition, to minimize the interference of different scanning protocols or scanners with the acquired enhanced CT images, before ITK-SNAP is used to segment the two-dimensional ROI region image of the lesion, the invention applies a linear-interpolation algorithm in the Artificial Intelligence Kit software (A.K, version 3.3.0.R, GE Healthcare, USA) to resample the tumor medical image isotropically to a voxel size of 1 mm × 1 mm × 1 mm.
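The A.K software's resampling implementation is proprietary, but isotropic resampling by linear interpolation can be sketched with SciPy. The helper name, the synthetic volume and its spacings below are illustrative assumptions, not part of the patent:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_isotropic(volume, spacing, new_spacing=(1.0, 1.0, 1.0)):
    """Resample a CT volume to isotropic voxels using linear interpolation (order=1)."""
    factors = [s / ns for s, ns in zip(spacing, new_spacing)]
    return zoom(volume, factors, order=1)

# synthetic volume with anisotropic voxels of 2.5 x 0.7 x 0.7 mm
vol = np.random.rand(40, 64, 64)
iso = resample_isotropic(vol, spacing=(2.5, 0.7, 0.7))
print(iso.shape)
```

After resampling, every voxel covers 1 mm × 1 mm × 1 mm, so texture features computed on different scanners become comparable.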
In step S2, the image omics features in the three-dimensional ROI region image are extracted on the PyRadiomics computing platform, an open-source Python software package invoked through the A.K software; the image omics features are specifically:
log.sigma.1.0.mm.3D_ngtdm_Busyness,
log.sigma.3.0.mm.3D_gldm_DependenceVariance,
log.sigma.3.0.mm.3D_ngtdm_Busyness,
original_firstorder_Median,
wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis,
wavelet.HLH_ngtdm_Busyness,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_firstorder_Skewness,
wavelet.LLH_glszm_LargeAreaEmphasis,
wavelet.LLL_firstorder_InterquartileRange.
In step S3, the score prediction model is an arterial-phase score prediction model, and its formula for the image score is:
Rad-score_AP = 0.266 - 0.852 × log.sigma.1.0.mm.3D_ngtdm_Busyness + 0.708 × log.sigma.3.0.mm.3D_gldm_DependenceVariance + 0.360 × log.sigma.3.0.mm.3D_ngtdm_Busyness - 0.830 × original_firstorder_Median - 1.160 × wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis - 1.122 × wavelet.HLH_ngtdm_Busyness + 0.656 × wavelet.HLL_gldm_DependenceVariance + 0.715 × wavelet.LHH_firstorder_Skewness + 2.398 × wavelet.LLH_glszm_LargeAreaEmphasis + 0.777 × wavelet.LLL_firstorder_InterquartileRange;
where Rad-score_AP denotes the image score of the arterial-phase three-dimensional ROI region image.
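As a minimal sketch, the linear score formula can be evaluated directly once the ten feature values are available. The helper names below are illustrative assumptions; in the examples' pipeline the feature values entering the model are Z-score standardized:

```python
# Coefficients of the arterial-phase model Rad-score_AP, copied from the formula above.
AP_COEF = {
    "log.sigma.1.0.mm.3D_ngtdm_Busyness": -0.852,
    "log.sigma.3.0.mm.3D_gldm_DependenceVariance": 0.708,
    "log.sigma.3.0.mm.3D_ngtdm_Busyness": 0.360,
    "original_firstorder_Median": -0.830,
    "wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis": -1.160,
    "wavelet.HLH_ngtdm_Busyness": -1.122,
    "wavelet.HLL_gldm_DependenceVariance": 0.656,
    "wavelet.LHH_firstorder_Skewness": 0.715,
    "wavelet.LLH_glszm_LargeAreaEmphasis": 2.398,
    "wavelet.LLL_firstorder_InterquartileRange": 0.777,
}
AP_INTERCEPT = 0.266
AP_CUTOFF = -0.149  # preset threshold from step S4

def rad_score_ap(features):
    """Linear image omics score: intercept plus weighted feature values."""
    return AP_INTERCEPT + sum(c * features[name] for name, c in AP_COEF.items())

def classify_ap(features):
    """Score >= cutoff -> squamous carcinoma; otherwise adenocarcinoma."""
    return "squamous" if rad_score_ap(features) >= AP_CUTOFF else "adenocarcinoma"

# With all (standardized) feature values at zero, the score equals the intercept.
zero = {name: 0.0 for name in AP_COEF}
print(rad_score_ap(zero), classify_ap(zero))
```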
In step S4, the image score obtained in step S3 is analyzed qualitatively to predict the image type of the tumor medical image: the image score is compared with a preset threshold, and the tumor medical image is classified as esophageal-gastric junction squamous carcinoma or esophageal-gastric junction adenocarcinoma according to whether the score reaches the threshold. The preset threshold is -0.149: when the image score is greater than or equal to -0.149, the tumor medical image is classified as esophageal-gastric junction squamous carcinoma; otherwise it is classified as esophageal-gastric junction adenocarcinoma. The classification result for the subject's arterial-phase enhanced CT image is the predicted tumor type at the subject's esophagogastric junction.
Further, the omics features in step S2 are obtained through screening, and the arterial phase score prediction model in step S3 is a linear score prediction model established according to the omics features obtained through screening in step S2. The specific method for screening the characteristics of the image omics in the step S2 and constructing the artery stage score prediction model in the step S3 is as follows:
1. experimental samples:
This study retrospectively collected 130 patients with EGJ squamous carcinoma and 130 patients with EGJ adenocarcinoma confirmed by gastroscopy or surgical pathology at the First Affiliated Hospital of Zhengzhou University from January 2010 to June 2021.
Inclusion criteria: (1) plain and contrast-enhanced abdominal CT performed within 30 days before surgery; (2) complete clinical data and a definite pathological result; (3) a lesion identifiable on the CT cross-section, covering at least 3 slices, with a maximum in-plane diameter ≥ 2.0 cm; (4) no history of radiotherapy or chemotherapy before the CT examination; (5) no contraindication to contrast-enhanced CT examination, such as iodine allergy. Exclusion criteria: (1) history of other malignant tumors; (2) poor CT image quality, with a lesion too indistinct to segment; (3) anti-tumor treatment before the CT examination; (4) dysfunction of vital organs such as the heart or lungs that precluded CT examination.
Of the 130 patients with EGJ squamous carcinoma, 87 were male and 43 female, aged 38-89 years (mean 65.72 ± 8.84 years), with a disease course of 5 days to 4 years. Of the 130 patients with EGJ adenocarcinoma, 93 were male and 37 female, aged 31-83 years (mean 62.95 ± 9.91 years), with a disease course of 5 days to 4 years.
Before CT examination, patients sign an informed consent to know the required cautions and the specific procedures.
The 260 experimental samples were randomly divided at a ratio of 7:3 into a training group (182 cases), used to screen the image omics features, and a validation group (78 cases), used to verify the prediction performance of the score prediction model built on the screened features. The 182 training-group samples comprised 91 patients with EGJ squamous carcinoma (the squamous carcinoma group) and 91 patients with EGJ adenocarcinoma (the adenocarcinoma group); the 78 internal-validation-group samples comprised 39 patients with EGJ squamous carcinoma and 39 patients with EGJ adenocarcinoma.
2. Acquisition of CT images:
A GE Discovery 750 HD 64-slice CT scanner, a GE Revolution 256-slice CT scanner (GE Healthcare, Waukesha, WI, United States) and a Siemens dual-source CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany) were used. Preparation before examination: the patient fasted for more than 8 h before the examination; 10-20 mg of scopolamine (Hangzhou Minsheng Pharmaceutical Group Co., Ltd.; specification 10 mg/ml) was injected intramuscularly 15-20 min before the examination to reduce gastrointestinal peristalsis, and breath-holding was rehearsed. The patient drank 800-1000 ml of warm boiled water 10-15 min before the examination so that the stomach was sufficiently full and the gastric wall distended. An indwelling needle (sealed 20G-Y indwelling needle, BD, USA) was placed in the right antecubital vein.
Scanning parameters: tube voltage 120 kV; tube current set by automatic mAs technology or 220-230 mAs; pitch 1.375/1.1; field of view (FOV) 500 mm; matrix 512 × 512 or 500 × 500; scanning range: at least from the lower esophageal segment to the lower edges of the kidneys. Enhanced scanning: 90-100 mL of the nonionic contrast agent iohexol (iopromide, 370 mg/mL, GE Medical Systems; 1.5 mL/kg) was injected into the antecubital vein with a high-pressure injector at a flow rate of 3 mL/s; with a small-dose bolus-triggering technique, arterial-phase images were acquired with a delay of 10 s after the CT value of the descending aorta reached 100 HU, and venous-phase images were acquired 30 s later.
3. Screening of the characteristics of the image omics and construction of a score prediction model:
(1) image homogenization treatment:
The acquired arterial-phase enhanced CT images were isotropically resampled with a linear-interpolation algorithm in the Artificial Intelligence Kit software (A.K, version 3.3.0.R, GE Healthcare, USA) to a voxel size of 1 mm × 1 mm × 1 mm to minimize the effect of different scanning protocols or scanners on the heterogeneity of the quantitative image omics features.
(2) Acquiring a three-dimensional ROI (region of interest) image:
Based on ITK-SNAP software (version 3.6.0, www.itksnap.org), 1 abdominal imaging diagnostician (5 years of diagnostic experience) selected, on the axial arterial-phase and venous-phase images respectively, the slice with the largest cross-sectional area of the lesion and delineated the ROI along the lesion edge, taking care to avoid the gastric cavity, gastric contents, adipose tissue around the gastric wall, macroscopic blood vessels and the like, thereby segmenting the two-dimensional ROI; the segmented two-dimensional ROI region images were then reconstructed in three dimensions to obtain the three-dimensional ROI region image (see FIGS. 1-2).
After 1 month, the CT images of 30 patients were taken at random and the ROIs were delineated again by the same physician and by another, more senior physician (8 years of diagnostic experience) for the intra- and inter-observer consistency analysis and reproducibility check of the subsequent image omics features.
(3) The method for screening the characteristics of the image omics and constructing the score prediction model comprises the following steps:
Using the open-source Python software package (PyRadiomics) invoked through the A.K software, 1409 image omics features were automatically extracted from the plain-scan, arterial-phase and venous-phase enhanced CT images of each training-set sample. The 1409 features comprise 32 first-order features (18 gray-level statistical features and 14 morphological features), 24 gray-level co-occurrence matrix (GLCM) features, 16 gray-level run-length matrix (GLRLM) features, 16 gray-level size-zone matrix (GLSZM) features, 14 gray-level dependence matrix (GLDM) features and 5 neighborhood gray-tone difference matrix (NGTDM) features. In addition, the same numbers of first-order gray-level statistical features and texture features were extracted from transformed images: 744 features from wavelet-decomposed images over 8 filter channels, 279 features from Laplacian-of-Gaussian (LoG) filtered images (σ parameters of 1.0 mm, 3.0 mm and 5.0 mm), and 279 features from Local Binary Pattern (LBP) filtered images (2nd-order spherical harmonics, radius 1.0, subdivision number 1). During feature extraction, the CT values in the ROI region were discretized with a fixed bin width (25 HU).
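The extraction described above maps naturally onto a PyRadiomics parameter set. The dictionary below is a plausible reconstruction: the key names follow the PyRadiomics documentation, but the text does not state which defaults the authors used, so the exact values are assumptions:

```python
# Plausible PyRadiomics parameters mirroring the extraction described above
# (fixed bin width 25 HU, 1 mm isotropic resampling, wavelet / LoG / LBP3D filters).
params = {
    "setting": {
        "binWidth": 25,                      # fixed-width discretization, in HU
        "resampledPixelSpacing": [1, 1, 1],  # isotropic 1 mm voxels
        "interpolator": "sitkLinear",        # linear interpolation
    },
    "imageType": {
        "Original": {},
        "Wavelet": {},                        # 8 filter channels -> 744 features
        "LoG": {"sigma": [1.0, 3.0, 5.0]},    # 93 features x 3 sigmas = 279
        "LBP3D": {"lbp3DIcosphereRadius": 1.0,
                  "lbp3DIcosphereSubdivision": 1},  # 279 features
    },
}
# With pyradiomics installed, the extractor would be built roughly as:
# from radiomics import featureextractor
# extractor = featureextractor.RadiomicsFeatureExtractor(params)
# features = extractor.execute("ct_arterial.nii.gz", "roi_mask.nii.gz")
print(sorted(params["imageType"]))
```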
After feature extraction, the 1409 extracted image omics features were screened as follows: 1) intra- and inter-observer consistency tests were performed first: intra-/inter-class correlation coefficients (ICCs) were calculated from the image omics features extracted from the ROIs delineated repeatedly by the same physician and by two different physicians, and only features with good consistency both within and between observers (ICCs > 0.750) were retained; 2) after the features lacking good intra- and inter-observer consistency (ICCs > 0.750) were eliminated, median filling was performed on the 714 retained features with variance less than 1.0; 3) the retained feature values were standardized with the Z-score method, and correlation analysis was used to remove features whose average correlation coefficient with the other features exceeded 0.700; 4) the retained features were screened with the Mann-Whitney U test for statistically significant differences between the squamous carcinoma and adenocarcinoma groups (P < 0.05); these features were entered into a LASSO regression model with ten-fold cross-validation, the optimal model parameter λ with the smallest deviance of the LASSO regression model was selected by ten-fold cross-validation, and the independent-variable features with nonzero regression coefficients were kept; the features retained by the LASSO regression were then subjected to stepwise backward multi-factor logistic regression analysis under the minimum Akaike Information Criterion (AIC) to obtain the finally screened image omics features; 5) with the weight coefficients from the logistic regression analysis of the finally screened image omics features, the image omics score formula Rad-score = β0 + β1x1 + β2x2 + … + βnxn (where β0 is a constant term and βi is the logistic regression coefficient of the i-th image omics feature xi) was used to construct the specific score prediction model.
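Steps 3)-4) of this screening pipeline (Z-score standardization, correlation filtering, Mann-Whitney U screening and the LASSO step) can be sketched on synthetic data with SciPy and scikit-learn. The data, random seed and feature counts below are illustrative, and `LogisticRegressionCV` with an L1 penalty stands in for the glmnet LASSO used in the examples:

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
n, p = 182, 40                       # training-group size, synthetic feature count
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, size=n)       # 0 = adenocarcinoma, 1 = squamous carcinoma
X[y == 1, :3] += 1.0                 # make the first 3 features informative

# step 3a: Z-score standardization
X = (X - X.mean(0)) / X.std(0)

# step 3b: drop features whose mean |correlation| with the others exceeds 0.700
corr = np.abs(np.corrcoef(X, rowvar=False))
np.fill_diagonal(corr, np.nan)
keep = np.nanmean(corr, axis=1) <= 0.700
X, idx = X[:, keep], np.flatnonzero(keep)

# step 4a: keep features differing between the two groups (Mann-Whitney U, P < 0.05)
pvals = np.array([mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue
                  for j in range(X.shape[1])])
X, idx = X[:, pvals < 0.05], idx[pvals < 0.05]

# step 4b: L1-penalized logistic regression with ten-fold cross-validation
# (the LASSO step); features with nonzero coefficients are retained
lasso = LogisticRegressionCV(Cs=10, cv=10, penalty="l1",
                             solver="liblinear").fit(X, y)
selected = idx[np.flatnonzero(lasso.coef_[0])]
print(len(selected))
```

The final backward stepwise selection under the minimum AIC has no direct scikit-learn equivalent and is omitted here; in R it corresponds to `step()` on a glm fit.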
(4) And (4) screening results:
The results of feature screening with the LASSO regression model are shown in FIG. 3. The results of the multi-factor logistic regression analysis are shown in Table 1; 10 features were obtained from the screening.
TABLE 1 Multi-factor logistic regression analysis results for omics features in arterial phase enhanced CT images
With the logistic regression weight coefficients of the 10 screened image omics features, the image omics score formula Rad-score = β0 + β1x1 + β2x2 + … + βnxn (where β0 is a constant term and βi is the logistic regression coefficient of the i-th image omics feature xi) yields the specific arterial-phase score prediction model Rad-score_AP:
Rad-score_AP = 0.266 - 0.852 × log.sigma.1.0.mm.3D_ngtdm_Busyness + 0.708 × log.sigma.3.0.mm.3D_gldm_DependenceVariance + 0.360 × log.sigma.3.0.mm.3D_ngtdm_Busyness - 0.830 × original_firstorder_Median - 1.160 × wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis - 1.122 × wavelet.HLH_ngtdm_Busyness + 0.656 × wavelet.HLL_gldm_DependenceVariance + 0.715 × wavelet.LHH_firstorder_Skewness + 2.398 × wavelet.LLH_glszm_LargeAreaEmphasis + 0.777 × wavelet.LLL_firstorder_InterquartileRange.
With the constructed arterial-phase score prediction model Rad-score_AP, the image score of the enhanced CT image of the tumor at the esophagogastric junction was predicted for each patient in the training set; an ROC curve was drawn from the predicted image scores and the corresponding actual tumor types (squamous carcinoma or adenocarcinoma), and the area under the curve (AUC) was obtained. The Rad-score_AP value at which the Youden index is maximal is the optimal cut-off value (i.e., the preset threshold) for distinguishing adenocarcinoma from squamous carcinoma in patients with tumors at the esophagogastric junction; in the present invention, this cut-off of the arterial-phase score prediction model Rad-score_AP is -0.149.
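This cut-off selection can be sketched with scikit-learn: `roc_curve` returns sensitivity and 1 − specificity at every candidate threshold, and the threshold maximizing the Youden index (tpr − fpr) is taken as the optimal cut-off. The synthetic scores below are illustrative, not the patent's data:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# synthetic Rad-score values: adenocarcinoma (label 0) lower, squamous (label 1) higher
scores = np.concatenate([rng.normal(-0.8, 0.6, 91), rng.normal(0.5, 0.6, 91)])
labels = np.concatenate([np.zeros(91), np.ones(91)])

fpr, tpr, thresholds = roc_curve(labels, scores)
auc = roc_auc_score(labels, scores)
# Youden index J = sensitivity + specificity - 1 = tpr - fpr
cutoff = thresholds[np.argmax(tpr - fpr)]
print(round(auc, 3), round(cutoff, 3))
```

Scores at or above `cutoff` would then be classified as squamous carcinoma, and scores below it as adenocarcinoma, exactly as in step S4.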
Example 2:
a method for classifying an esophageal-gastric junction tumor image comprises the following steps:
s1, processing the enhanced CT image of the tumor of the esophageal-gastric junction of the subject to obtain a three-dimensional ROI regional image of the tumor medical image focus; the enhanced CT image is a vein phase enhanced CT image, and the three-dimensional ROI area image is a vein phase three-dimensional ROI area image;
s2, extracting the imagery omics characteristics in the three-dimensional ROI area image;
s3, inputting the values of the imagery omics characteristics extracted in the step S2 into a score prediction model, and calculating to obtain the image score of the three-dimensional ROI area image;
and S4, qualitatively analyzing the image scores obtained in the step S3, and predicting the image types of the tumor medical images.
In step S1, the tumor medical image of the esophagogastric junction of the subject is processed to acquire a three-dimensional ROI region image of the lesion as follows: a two-dimensional ROI region image of the lesion is segmented in the venous-phase enhanced CT image of the tumor at the esophagogastric junction, and the segmented two-dimensional ROI region image is then reconstructed in three dimensions to obtain the venous-phase three-dimensional ROI region image. For the segmentation, 1 abdominal imaging diagnostician (5 years of diagnostic experience) used ITK-SNAP software (version 3.6.0, www.itksnap.org) to select, on the axial venous-phase enhanced CT image, the slice with the largest cross-sectional area of the lesion and to delineate the ROI along the lesion edge (avoiding the gastric cavity, gastric contents, adipose tissue around the gastric wall, macroscopic blood vessels and the like), thereby segmenting the two-dimensional ROI region image. In addition, to minimize the interference of different scanning protocols or scanners with the acquired enhanced CT images, before ITK-SNAP is used to segment the two-dimensional ROI region image of the lesion, the invention applies a linear-interpolation algorithm in the Artificial Intelligence Kit software (A.K, version 3.3.0.R, GE Healthcare, USA) to resample the tumor medical image isotropically to a voxel size of 1 mm × 1 mm × 1 mm.
In step S2, the image omics features in the three-dimensional ROI region image are extracted on the PyRadiomics computing platform, an open-source Python software package invoked through the A.K software; the image omics features are specifically:
log.sigma.1.0.mm.3D_firstorder_90Percentile,
original_firstorder_Median,
wavelet.HLH_glcm_ClusterProminence,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_ngtdm_Busyness,
wavelet.LLH_firstorder_Kurtosis,
wavelet.LLH_gldm_DependenceVariance.
In step S3, the score prediction model is a venous-phase score prediction model, and its formula for the image score is:
Rad-score_VP = 0.047 + 0.760 × log.sigma.1.0.mm.3D_firstorder_90Percentile - 1.030 × original_firstorder_Median + 0.395 × wavelet.HLH_glcm_ClusterProminence + 1.333 × wavelet.HLL_gldm_DependenceVariance + 0.746 × wavelet.LHH_ngtdm_Busyness + 0.381 × wavelet.LLH_firstorder_Kurtosis + 0.409 × wavelet.LLH_gldm_DependenceVariance;
where Rad-score_VP denotes the image score of the venous-phase three-dimensional ROI region image.
In step S4, the image score obtained in step S3 is analyzed qualitatively to predict the image type of the tumor medical image: the image score is compared with a preset threshold, and the tumor medical image is classified as esophageal-gastric junction squamous carcinoma or esophageal-gastric junction adenocarcinoma according to whether the score reaches the threshold. The preset threshold is -0.352: when the image score is greater than or equal to -0.352, the tumor medical image is classified as esophageal-gastric junction squamous carcinoma; otherwise it is classified as esophageal-gastric junction adenocarcinoma. The classification result for the subject's venous-phase enhanced CT image is the predicted tumor type at the subject's esophagogastric junction.
Further, the omics features of step S2 are obtained through screening, and the prediction model of venous phase score of step S3 is a linear score prediction model established according to the omics features of step S2. The specific method for screening the characteristics of the image omics in the step S2 and constructing the score prediction model in the step S3 comprises the following steps:
1. experimental samples:
the experimental sample was the same as in example 1 and will not be described in detail.
2. Acquisition of CT images:
the acquisition equipment and the acquisition method of the CT image are the same as those in embodiment 1, and are not described herein again.
3. Screening of image omics characteristics and construction of a score prediction model:
(1) image homogenization treatment:
the image homogenization process is the same as in embodiment 1, and is not repeated here.
(2) Acquiring a three-dimensional ROI (region of interest) image:
the method for acquiring the three-dimensional ROI area image is the same as that in embodiment 1, and is not described herein again.
(3) The method for screening the characteristics of the image omics and constructing the score prediction model comprises the following steps:
the method for screening the characteristics of the image omics and constructing the score prediction model is the same as that in embodiment 1, and is not repeated herein.
(4) Screening results:
the results of characteristic screening of the LASSO regression model are shown in fig. 4. The results of the multifactorial logistic regression analysis are shown in Table 2, and a total of 7 features were obtained by screening.
TABLE 2 Multi-factor logistic regression analysis results for imaging omics features in venous phase enhanced CT images
According to the weight coefficients from the logistic regression analysis of the 7 screened radiomics features, the radiomics scoring formula Rad-score = β0 + β1x1 + β2x2 + … + βnxn (where β0 is a constant term and βi is the logistic regression coefficient of the i-th radiomics feature xi) is used to obtain the specific venous-phase score prediction model Rad-score_VP:
Rad-score_VP = 0.047 + 0.760 × log.sigma.1.0.mm.3D_firstorder_90Percentile - 1.030 × original_firstorder_Median + 0.395 × wavelet.HLH_glcm_ClusterProminence + 1.333 × wavelet.HLL_gldm_DependenceVariance + 0.746 × wavelet.LHH_ngtdm_Busyness + 0.381 × wavelet.LLH_firstorder_Kurtosis + 0.409 × wavelet.LLH_gldm_DependenceVariance.
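The venous-phase score is a plain linear combination of the screened feature values. A minimal sketch of evaluating it with the coefficients published above; the feature values passed in are hypothetical placeholders for the (standardized) PyRadiomics outputs:

```python
# Venous-phase Rad-score coefficients as stated in the patent text.
VP_COEFFS = {
    "log.sigma.1.0.mm.3D_firstorder_90Percentile": 0.760,
    "original_firstorder_Median": -1.030,
    "wavelet.HLH_glcm_ClusterProminence": 0.395,
    "wavelet.HLL_gldm_DependenceVariance": 1.333,
    "wavelet.LHH_ngtdm_Busyness": 0.746,
    "wavelet.LLH_firstorder_Kurtosis": 0.381,
    "wavelet.LLH_gldm_DependenceVariance": 0.409,
}
VP_INTERCEPT = 0.047

def rad_score_vp(features):
    """Linear Rad-score: intercept plus coefficient-weighted feature values."""
    return VP_INTERCEPT + sum(coef * features[name] for name, coef in VP_COEFFS.items())

# With every feature value at zero the score reduces to the intercept.
baseline = rad_score_vp({name: 0.0 for name in VP_COEFFS})
```

In practice the feature values would first be standardized the same way as during model training; that preprocessing is not shown here.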
Using the constructed venous-phase score prediction model Rad-score_VP, the image score of each training-set patient's esophageal-gastric junction tumor enhanced CT image is predicted, and an ROC curve is drawn from the predicted image scores and the corresponding confirmed tumor types (squamous carcinoma or adenocarcinoma) to obtain the area under the curve (AUC). The Rad-score_VP value at which the Youden index is maximal is the optimal cut-off value (i.e., the preset threshold) for distinguishing adenocarcinoma from squamous carcinoma in esophageal-gastric junction tumor patients; for the venous-phase score prediction model Rad-score_VP of the invention it is -0.352.
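The cut-off selection by maximum Youden index (sensitivity + specificity - 1) can be sketched as follows; the scores and labels below are illustrative toy data, not the patent's training set:

```python
def youden_cutoff(scores, labels):
    """Return (threshold, Youden index) maximizing sensitivity + specificity - 1.

    labels: 1 = squamous carcinoma (predicted when score >= threshold),
            0 = adenocarcinoma.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Illustrative, perfectly separable example.
scores = [-1.2, -0.9, -0.5, -0.352, 0.1, 0.4, 0.8]
labels = [0, 0, 0, 1, 1, 1, 1]
t, j = youden_cutoff(scores, labels)
```

On this toy data the best threshold is -0.352 with Youden index 1.0; real data would of course yield an imperfect maximum.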
Example 3:
a method for classifying an esophagus and stomach junction tumor image comprises the following steps:
S1, processing the enhanced CT images of the subject's esophageal-gastric junction tumor to obtain three-dimensional ROI region images of the tumor medical image lesion. The enhanced CT images comprise an arterial-phase enhanced CT image and a venous-phase enhanced CT image, and the three-dimensional ROI region images comprise an arterial-phase three-dimensional ROI region image and a venous-phase three-dimensional ROI region image: the arterial-phase enhanced CT image is processed to obtain the arterial-phase three-dimensional ROI region image, and the venous-phase enhanced CT image is processed to obtain the venous-phase three-dimensional ROI region image;
S2, extracting the radiomics features in the arterial-phase three-dimensional ROI region image and the venous-phase three-dimensional ROI region image respectively;
S3, inputting the values of the radiomics features extracted in step S2 into a score prediction model and calculating the image scores of the three-dimensional ROI region images;
S4, qualitatively analyzing the image scores obtained in step S3 to predict the image type of the tumor medical image.
In step S1, the specific operation method of processing the artery enhanced CT image of the esophageal-gastric junction of the subject to obtain the artery three-dimensional ROI area image of the tumor medical image lesion is the same as in embodiment 1, and the specific operation method of processing the vein enhanced CT image of the esophageal-gastric junction of the subject to obtain the vein three-dimensional ROI area image of the tumor medical image lesion is the same as in embodiment 2, and is not described herein again.
In step S2, the open-source Python package PyRadiomics, invoked through the A.K. software platform, is used to extract the radiomics features from the arterial-phase three-dimensional ROI region image and the venous-phase three-dimensional ROI region image respectively. The radiomics features extracted from the arterial-phase three-dimensional ROI region image are as follows:
log.sigma.1.0.mm.3D_ngtdm_Busyness、
log.sigma.3.0.mm.3D_gldm_DependenceVariance、
log.sigma.3.0.mm.3D_ngtdm_Busyness、
original_firstorder_Median、
wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis、
wavelet.HLH_ngtdm_Busyness、
wavelet.HLL_gldm_DependenceVariance、
wavelet.LHH_firstorder_Skewness、
wavelet.LLH_glszm_LargeAreaEmphasis、
wavelet.LLL_firstorder_InterquartileRange;
The radiomics features extracted from the venous-phase three-dimensional ROI region image are as follows:
log.sigma.1.0.mm.3D_firstorder_90Percentile、
original_firstorder_Median、
wavelet.HLH_glcm_ClusterProminence、
wavelet.HLL_gldm_DependenceVariance、
wavelet.LHH_ngtdm_Busyness、
wavelet.LLH_firstorder_Kurtosis、
wavelet.LLH_gldm_DependenceVariance。
in step S3, the score prediction model is a joint score prediction model, and a calculation formula of the joint score prediction model for calculating the image score is:
Rad-score = 0.009 + 0.671 × Rad-score_AP + 0.621 × Rad-score_VP
where Rad-score denotes the joint image score, Rad-score_AP denotes the image score of the arterial-phase three-dimensional ROI region image, and Rad-score_VP denotes the image score of the venous-phase three-dimensional ROI region image. First, the values of the radiomics features extracted from the arterial-phase three-dimensional ROI region image are input into the arterial-phase score prediction model Rad-score_AP to calculate the arterial-phase image score; next, the values of the radiomics features extracted from the venous-phase three-dimensional ROI region image are input into the venous-phase score prediction model Rad-score_VP to calculate the venous-phase image score; finally, the calculated Rad-score_AP and Rad-score_VP values are input into the joint score prediction model formula to obtain the final joint image score Rad-score.
Rad-score_AP = 0.266 - 0.852 × log.sigma.1.0.mm.3D_ngtdm_Busyness + 0.708 × log.sigma.3.0.mm.3D_gldm_DependenceVariance + 0.360 × log.sigma.3.0.mm.3D_ngtdm_Busyness - 0.830 × original_firstorder_Median - 1.160 × wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis - 1.122 × wavelet.HLH_ngtdm_Busyness + 0.656 × wavelet.HLL_gldm_DependenceVariance + 0.715 × wavelet.LHH_firstorder_Skewness + 2.398 × wavelet.LLH_glszm_LargeAreaEmphasis + 0.777 × wavelet.LLL_firstorder_InterquartileRange;
Rad-score_VP = 0.047 + 0.760 × log.sigma.1.0.mm.3D_firstorder_90Percentile - 1.030 × original_firstorder_Median + 0.395 × wavelet.HLH_glcm_ClusterProminence + 1.333 × wavelet.HLL_gldm_DependenceVariance + 0.746 × wavelet.LHH_ngtdm_Busyness + 0.381 × wavelet.LLH_firstorder_Kurtosis + 0.409 × wavelet.LLH_gldm_DependenceVariance.
In step S4, qualitatively analyzing the image score obtained in step S3 to predict the image type of the tumor medical image includes: comparing the image score obtained in step S3 with a preset threshold, and classifying the tumor medical image as esophageal-gastric junction squamous carcinoma or esophageal-gastric junction adenocarcinoma according to whether the score reaches the threshold. The preset threshold is 0.043: when the image score is greater than or equal to 0.043, the tumor medical image is classified as esophageal-gastric junction squamous carcinoma; otherwise, it is classified as esophageal-gastric junction adenocarcinoma. The classification result for the subject's enhanced CT images is the prediction of the subject's esophageal-gastric junction tumor type.
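Steps S3 and S4 of this embodiment reduce to a linear combination of the two phase scores followed by a threshold comparison. A minimal sketch using the coefficients and the 0.043 threshold stated above; the phase scores passed in are hypothetical example values:

```python
def rad_score_joint(score_ap, score_vp):
    """Joint Rad-score from the combined-model coefficients given in the patent."""
    return 0.009 + 0.671 * score_ap + 0.621 * score_vp

def classify(score, threshold=0.043):
    """At or above the preset threshold -> squamous carcinoma; below -> adenocarcinoma."""
    return "squamous carcinoma" if score >= threshold else "adenocarcinoma"

# Hypothetical arterial- and venous-phase scores for one subject.
score = rad_score_joint(0.5, -0.2)
label = classify(score)
```

The same `classify` helper applies to the single-phase embodiments with their own thresholds (-0.149 arterial, -0.352 venous).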
Further, in step S2, the radiomics features of the arterial-phase three-dimensional ROI region image are obtained through screening; the screening process and the construction of the arterial-phase score prediction model Rad-score_AP are the same as in Example 1. The radiomics features of the venous-phase three-dimensional ROI region image are likewise obtained through screening; the screening process and the construction of the venous-phase score prediction model Rad-score_VP are the same as in Example 2, and are not repeated here. The arterial-phase and venous-phase scores are then subjected to a further multi-factor logistic regression, and the combined model score is constructed from the respective logistic regression coefficients, giving the combined scoring model Rad-score_AP_VP = 0.009 + 0.671 × Rad-score_AP + 0.621 × Rad-score_VP.
Using the constructed combined score prediction model Rad-score, the image score of each training-set patient's esophageal-gastric junction tumor enhanced CT images is predicted, an ROC curve is drawn from the predicted image scores and the corresponding confirmed tumor types (squamous carcinoma or adenocarcinoma), and the area under the curve (AUC) is obtained. The Rad-score value at which the Youden index is maximal is the optimal cut-off value (i.e., the preset threshold) for distinguishing adenocarcinoma from squamous carcinoma in esophageal-gastric junction tumor patients; the preset threshold of the combined Rad-score prediction model of the invention is 0.043.
Example 4:
A prediction system for classifying esophageal-gastric junction tumor images comprises an image processing module, a feature extraction module, a score prediction module, and an image classification module.
The image processing module is used to process the tumor medical image of the subject's esophageal-gastric junction and obtain a three-dimensional ROI region image of the tumor medical image lesion. Preferably, the image processing module operates as follows: first, the tumor medical image is isotropically resampled to a voxel size of 1 mm × 1 mm × 1 mm using the linear interpolation algorithm in the Artificial Intelligence Kit software (A.K., version 3.3.0.R, GE Healthcare, USA); then, ITK-SNAP software (version 3.6.0, www.itksnap.org) is used to segment the two-dimensional ROI region image of the lesion in the esophageal-gastric junction tumor medical image, and the segmented two-dimensional ROI region images are reconstructed in three dimensions to obtain the three-dimensional ROI region image. The medical image is an arterial-phase enhanced CT image of the esophageal-gastric junction tumor, and the three-dimensional ROI region image is an arterial-phase three-dimensional ROI region image.
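The isotropic resampling step can be sketched as follows, assuming the A.K. software's linear interpolation is equivalent to standard trilinear interpolation; scipy stands in here for the proprietary toolkit, and the volume is a synthetic placeholder:

```python
import numpy as np
from scipy import ndimage

def resample_isotropic(volume, spacing, new_spacing=(1.0, 1.0, 1.0)):
    """Resample a CT volume to isotropic voxels with trilinear interpolation."""
    zoom_factors = [old / new for old, new in zip(spacing, new_spacing)]
    return ndimage.zoom(volume, zoom_factors, order=1)  # order=1 -> linear

volume = np.random.rand(10, 10, 10)                 # placeholder CT sub-volume
iso = resample_isotropic(volume, spacing=(2.0, 0.8, 0.8))  # slice thickness 2 mm
```

A production pipeline would also resample the segmentation mask (typically with nearest-neighbor interpolation so labels stay binary).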
The feature extraction module is used to extract the radiomics features in the arterial-phase three-dimensional ROI region image. Preferably, the feature extraction module uses the open-source Python package PyRadiomics, invoked through the A.K. software platform, to extract the radiomics features of the arterial-phase three-dimensional ROI region image. The specific extracted radiomics features are as follows:
log.sigma.1.0.mm.3D_ngtdm_Busyness、
log.sigma.3.0.mm.3D_gldm_DependenceVariance、
log.sigma.3.0.mm.3D_ngtdm_Busyness、
original_firstorder_Median、
wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis、
wavelet.HLH_ngtdm_Busyness、
wavelet.HLL_gldm_DependenceVariance、
wavelet.LHH_firstorder_Skewness、
wavelet.LLH_glszm_LargeAreaEmphasis、
wavelet.LLL_firstorder_InterquartileRange。
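PyRadiomics typically returns several hundred features per ROI; only the ten screened arterial-phase features listed above enter the score model. A minimal sketch of that selection step, where the full feature dictionary is a hypothetical stand-in for real PyRadiomics output:

```python
# The ten screened arterial-phase features named in this embodiment.
AP_FEATURES = [
    "log.sigma.1.0.mm.3D_ngtdm_Busyness",
    "log.sigma.3.0.mm.3D_gldm_DependenceVariance",
    "log.sigma.3.0.mm.3D_ngtdm_Busyness",
    "original_firstorder_Median",
    "wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis",
    "wavelet.HLH_ngtdm_Busyness",
    "wavelet.HLL_gldm_DependenceVariance",
    "wavelet.LHH_firstorder_Skewness",
    "wavelet.LLH_glszm_LargeAreaEmphasis",
    "wavelet.LLL_firstorder_InterquartileRange",
]

def select_features(all_features, wanted=AP_FEATURES):
    """Keep only the screened features, failing loudly if any are missing."""
    missing = [name for name in wanted if name not in all_features]
    if missing:
        raise KeyError(f"features not extracted: {missing}")
    return {name: all_features[name] for name in wanted}

# Hypothetical stand-in for a full extractor result (values are placeholders).
full_output = {name: 0.5 for name in AP_FEATURES}
full_output["original_shape_VoxelVolume"] = 12345.0  # an unused extra feature
selected = select_features(full_output)
```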
The score prediction module has a built-in arterial-phase score prediction model and is used to calculate the image score of the arterial-phase three-dimensional ROI region image from the radiomics features extracted by the feature extraction module. The arterial-phase score prediction model calculates the image score as:
Rad-score_AP = 0.266 - 0.852 × log.sigma.1.0.mm.3D_ngtdm_Busyness + 0.708 × log.sigma.3.0.mm.3D_gldm_DependenceVariance + 0.360 × log.sigma.3.0.mm.3D_ngtdm_Busyness - 0.830 × original_firstorder_Median - 1.160 × wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis - 1.122 × wavelet.HLH_ngtdm_Busyness + 0.656 × wavelet.HLL_gldm_DependenceVariance + 0.715 × wavelet.LHH_firstorder_Skewness + 2.398 × wavelet.LLH_glszm_LargeAreaEmphasis + 0.777 × wavelet.LLL_firstorder_InterquartileRange;
where Rad-score_AP denotes the image score of the arterial-phase three-dimensional ROI region image.
The image classification module is used to qualitatively analyze the image score predicted by the score prediction module and predict the image type of the tumor medical image. Specifically, the image classification module compares the image score of the arterial-phase three-dimensional ROI region image with a preset threshold and classifies the tumor medical image as esophageal-gastric junction squamous carcinoma or esophageal-gastric junction adenocarcinoma accordingly. The preset threshold is -0.149: when the image score is greater than or equal to -0.149, the tumor medical image is classified as esophageal-gastric junction squamous carcinoma; otherwise, it is classified as esophageal-gastric junction adenocarcinoma.
Example 5:
A prediction system for classifying esophageal-gastric junction tumor images comprises an image processing module, a feature extraction module, a score prediction module, and an image classification module.
The image processing module is used to process the tumor medical image of the subject's esophageal-gastric junction and obtain a three-dimensional ROI region image of the tumor medical image lesion. Preferably, the image processing module operates as follows: first, the tumor medical image is isotropically resampled to a voxel size of 1 mm × 1 mm × 1 mm using the linear interpolation algorithm in the Artificial Intelligence Kit software (A.K., version 3.3.0.R, GE Healthcare, USA); then, ITK-SNAP software (version 3.6.0, www.itksnap.org) is used to segment the two-dimensional ROI region image of the lesion in the esophageal-gastric junction tumor medical image, and the segmented two-dimensional ROI region images are reconstructed in three dimensions to obtain the three-dimensional ROI region image. The medical image is a venous-phase enhanced CT image of the esophageal-gastric junction tumor, and the three-dimensional ROI region image is a venous-phase three-dimensional ROI region image.
The feature extraction module is used to extract the radiomics features in the venous-phase three-dimensional ROI region image. Preferably, the feature extraction module uses the open-source Python package PyRadiomics, invoked through the A.K. software platform, to extract the radiomics features of the venous-phase three-dimensional ROI region image. The specific extracted radiomics features are as follows:
log.sigma.1.0.mm.3D_firstorder_90Percentile、
original_firstorder_Median、
wavelet.HLH_glcm_ClusterProminence、
wavelet.HLL_gldm_DependenceVariance、
wavelet.LHH_ngtdm_Busyness、
wavelet.LLH_firstorder_Kurtosis、
wavelet.LLH_gldm_DependenceVariance。
The score prediction module has a built-in venous-phase score prediction model and is used to calculate the image score of the venous-phase three-dimensional ROI region image from the radiomics features extracted by the feature extraction module. The venous-phase score prediction model calculates the image score as:
Rad-score_VP = 0.047 + 0.760 × log.sigma.1.0.mm.3D_firstorder_90Percentile - 1.030 × original_firstorder_Median + 0.395 × wavelet.HLH_glcm_ClusterProminence + 1.333 × wavelet.HLL_gldm_DependenceVariance + 0.746 × wavelet.LHH_ngtdm_Busyness + 0.381 × wavelet.LLH_firstorder_Kurtosis + 0.409 × wavelet.LLH_gldm_DependenceVariance;
where Rad-score_VP denotes the image score of the venous-phase three-dimensional ROI region image.
The image classification module is used to qualitatively analyze the image score calculated by the score prediction module and predict the image type of the tumor medical image. Specifically, the image classification module compares the image score of the venous-phase three-dimensional ROI region image with a preset threshold and classifies the tumor medical image as esophageal-gastric junction squamous carcinoma or esophageal-gastric junction adenocarcinoma accordingly. The preset threshold is -0.352: when the image score is greater than or equal to -0.352, the tumor medical image is classified as esophageal-gastric junction squamous carcinoma; otherwise, it is classified as esophageal-gastric junction adenocarcinoma.
Example 6:
A prediction system for classifying esophageal-gastric junction tumor images comprises an image processing module, a feature extraction module, a score prediction module, and an image classification module.
The image processing module is used to process the tumor medical images of the subject's esophageal-gastric junction and obtain three-dimensional ROI region images of the tumor medical image lesion. Preferably, the image processing module operates as follows: first, the tumor medical image is isotropically resampled to a voxel size of 1 mm × 1 mm × 1 mm using the linear interpolation algorithm in the Artificial Intelligence Kit software (A.K., version 3.3.0.R, GE Healthcare, USA); then, ITK-SNAP software (version 3.6.0, www.itksnap.org) is used to segment the two-dimensional ROI region image of the lesion in the esophageal-gastric junction tumor medical image, and the segmented two-dimensional ROI region images are reconstructed in three dimensions to obtain the three-dimensional ROI region image. The medical images comprise an arterial-phase enhanced CT image and a venous-phase enhanced CT image of the esophageal-gastric junction tumor; the image processing module processes the subject's arterial-phase and venous-phase enhanced CT images respectively to obtain the arterial-phase and venous-phase three-dimensional ROI region images of the tumor medical image lesion.
The feature extraction module is used to extract the radiomics features in the arterial-phase and venous-phase three-dimensional ROI region images. Preferably, the feature extraction module uses the open-source Python package PyRadiomics, invoked through the A.K. software platform, to extract the radiomics features from the arterial-phase and venous-phase three-dimensional ROI region images respectively.
The radiomics features extracted from the arterial-phase three-dimensional ROI region image are as follows:
log.sigma.1.0.mm.3D_ngtdm_Busyness、
log.sigma.3.0.mm.3D_gldm_DependenceVariance、
log.sigma.3.0.mm.3D_ngtdm_Busyness、
original_firstorder_Median、
wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis、
wavelet.HLH_ngtdm_Busyness、
wavelet.HLL_gldm_DependenceVariance、
wavelet.LHH_firstorder_Skewness、
wavelet.LLH_glszm_LargeAreaEmphasis、
wavelet.LLL_firstorder_InterquartileRange;
The radiomics features extracted from the venous-phase three-dimensional ROI region image are as follows:
log.sigma.1.0.mm.3D_firstorder_90Percentile、
original_firstorder_Median、
wavelet.HLH_glcm_ClusterProminence、
wavelet.HLL_gldm_DependenceVariance、
wavelet.LHH_ngtdm_Busyness、
wavelet.LLH_firstorder_Kurtosis、
wavelet.LLH_gldm_DependenceVariance。
The score prediction module has a built-in joint score prediction model for calculating the joint image score of the arterial-phase and venous-phase three-dimensional ROI region images from the radiomics features extracted by the feature extraction module. The joint score prediction model calculates the image score as:
Rad-score = 0.009 + 0.671 × Rad-score_AP + 0.621 × Rad-score_VP
where Rad-score denotes the joint image score, Rad-score_AP denotes the image score of the arterial-phase three-dimensional ROI region image, and Rad-score_VP denotes the image score of the venous-phase three-dimensional ROI region image. First, the values of the radiomics features extracted from the arterial-phase three-dimensional ROI region image are input into the arterial-phase score prediction model Rad-score_AP to calculate the arterial-phase image score; next, the values of the radiomics features extracted from the venous-phase three-dimensional ROI region image are input into the venous-phase score prediction model Rad-score_VP to calculate the venous-phase image score; finally, the calculated Rad-score_AP and Rad-score_VP values are input into the joint score prediction model formula to obtain the final joint image score Rad-score.
Rad-score_AP = 0.266 - 0.852 × log.sigma.1.0.mm.3D_ngtdm_Busyness + 0.708 × log.sigma.3.0.mm.3D_gldm_DependenceVariance + 0.360 × log.sigma.3.0.mm.3D_ngtdm_Busyness - 0.830 × original_firstorder_Median - 1.160 × wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis - 1.122 × wavelet.HLH_ngtdm_Busyness + 0.656 × wavelet.HLL_gldm_DependenceVariance + 0.715 × wavelet.LHH_firstorder_Skewness + 2.398 × wavelet.LLH_glszm_LargeAreaEmphasis + 0.777 × wavelet.LLL_firstorder_InterquartileRange.
Rad-score_VP = 0.047 + 0.760 × log.sigma.1.0.mm.3D_firstorder_90Percentile - 1.030 × original_firstorder_Median + 0.395 × wavelet.HLH_glcm_ClusterProminence + 1.333 × wavelet.HLL_gldm_DependenceVariance + 0.746 × wavelet.LHH_ngtdm_Busyness + 0.381 × wavelet.LLH_firstorder_Kurtosis + 0.409 × wavelet.LLH_gldm_DependenceVariance.
The image classification module is used to qualitatively analyze the image score calculated by the score prediction module and predict the image type of the tumor medical image. Specifically, the image classification module compares the joint image score calculated by the score prediction module with a preset threshold and classifies the tumor medical image as esophageal-gastric junction squamous carcinoma or esophageal-gastric junction adenocarcinoma accordingly. The preset threshold is 0.043: when the image score is greater than or equal to 0.043, the tumor medical image is classified as esophageal-gastric junction squamous carcinoma; otherwise, it is classified as esophageal-gastric junction adenocarcinoma. The classification result for the subject's enhanced CT images is the prediction of the subject's esophageal-gastric junction tumor type.
Example 7:
A medical image processing apparatus comprises a memory, at least one processor, and an application program stored on the memory and readable by the processor, the processor being configured to execute the application program to implement the method for classifying esophageal-gastric junction tumor images according to any one of embodiments 1 to 3.
Example 8:
a computer readable storage medium having a computer program stored thereon, the computer program being loaded by a processor to perform the method for classifying an esophagogastric junction tumor image according to any one of embodiments 1 to 3.
The performance evaluation of the score prediction model constructed by the invention comprises the following steps:
The performance of the arterial-phase score prediction model Rad-score_AP, the venous-phase score prediction model Rad-score_VP, and the combined score prediction model Rad-score was validated separately on the training-set and validation-set samples (the same training and validation sets as in Example 1).
1. ROC curve (receiver operating characteristic curve) evaluation
The performance of the arterial-phase score prediction model Rad-score_AP (arterial-phase model for short), the venous-phase score prediction model Rad-score_VP (venous-phase model for short), and the combined score prediction model Rad-score (combined model for short) was evaluated mainly by receiver operating characteristic (ROC) curve analysis.
The constructed arterial-phase, venous-phase, and combined models were each used to predict the image score of every training-set patient's esophageal-gastric junction tumor enhanced CT images, and ROC curves were drawn for the three models from the predicted scores and the corresponding confirmed tumor types (squamous carcinoma or adenocarcinoma) (shown as panel A in FIG. 5). The image score corresponding to the maximum Youden index was taken as each score prediction model's optimal cut-off value (i.e., preset threshold) for distinguishing adenocarcinoma from squamous carcinoma in esophageal-gastric junction tumor patients, and the sensitivity, specificity, accuracy, negative predictive value, and positive predictive value at the optimal cut-off were calculated to evaluate each model's discrimination performance (results in Table 3).
The radiomics feature values of the enhanced CT images of the 39 EGJ squamous carcinoma samples and 39 EGJ adenocarcinoma samples in the validation set were then substituted into the constructed arterial-phase, venous-phase, and combined models to obtain each sample's predicted image score under each model; ROC curves for the three models were drawn from the predicted scores (shown as panel B in FIG. 5) to verify their value for distinguishing squamous carcinoma from adenocarcinoma at the esophageal-gastric junction. The corresponding sensitivity, specificity, accuracy, negative predictive value, and positive predictive value were calculated at each model's optimal cut-off (results in Table 3).
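The threshold metrics reported in Table 3 can be sketched as follows; the scores and labels are illustrative toy data (1 = squamous carcinoma), not the patent's patient data:

```python
def cutoff_metrics(scores, labels, threshold):
    """Sensitivity, specificity, accuracy, PPV and NPV at a fixed cut-off."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(labels),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Toy example evaluated at the combined model's 0.043 threshold.
m = cutoff_metrics([0.2, 0.6, -0.1, 0.5], [1, 1, 0, 0], threshold=0.043)
```

A real evaluation would also guard against empty denominators (e.g. no predicted positives), which this sketch omits.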
TABLE 3 efficacy of different scoring prediction models in training and validation groups for identifying EGJ squamous cell carcinoma and adenocarcinoma
As can be seen from FIG. 5 and Table 3, the AUCs of the arterial-phase, venous-phase, and combined models for distinguishing esophageal-gastric junction squamous carcinoma from adenocarcinoma are 0.876, 0.877, and 0.904 in the training set, and 0.824, 0.879, and 0.901 in the validation set, essentially consistent with the training set. Moreover, when the score prediction models are used to distinguish squamous carcinoma from adenocarcinoma of the esophageal-gastric junction in the training set, the AUC exceeds 0.8, the sensitivity exceeds 0.8, and the specificity exceeds 0.78. The arterial-phase, venous-phase, and combined models constructed by the invention can therefore all be used to distinguish squamous carcinoma from adenocarcinoma at the esophageal-gastric junction, and have high diagnostic value.
To compare the AUC values of the three score prediction models constructed by the invention, DeLong tests were performed on the training-set and validation-set samples; specific results are shown in Table 4.
TABLE 4 DeLong test results of the different score prediction models in the training and validation sets
As can be seen from Table 4, the AUC of the combined model is significantly improved over the venous-phase model in the training set, and over the arterial-phase model in the validation set.
Meanwhile, for comparison, ROC curves were drawn for each individual radiomics feature in the arterial-phase and venous-phase models for distinguishing the tumor type (squamous carcinoma or adenocarcinoma) of the esophageal-gastric junction (shown in FIG. 6 and FIG. 7).
As shown in FIG. 6, for the arterial-phase model Rad-score_AP, the AUC of each single radiomics feature for distinguishing esophageal-gastric junction squamous carcinoma from adenocarcinoma ranges from 0.592 to 0.733 in the training set and from 0.569 to 0.714 in the validation set, both markedly lower than the arterial-phase score prediction model Rad-score_AP of the invention. As shown in FIG. 7, for the venous-phase model Rad-score_VP, the AUC of each single radiomics feature ranges from 0.627 to 0.783 in the training set and from 0.634 to 0.795 in the validation set, both markedly lower than the venous-phase score prediction model Rad-score_VP. The arterial-phase score prediction model Rad-score_AP and the venous-phase score prediction model Rad-score_VP of the invention therefore distinguish esophageal-gastric junction squamous carcinoma from adenocarcinoma better than any single radiomics feature within them.
2. Calibration verification:
the arterial phase score prediction model Rad-score of the present invention was evaluated using a Calibration Curve (CCA) in the experimental samples of the training set and the validation setAPVenous phase score prediction model Rad-scoreVPCombined scoring the calibration capability of the predictive model Rad-score.
The specific method is as follows: using the bootstrap method with 1000 resamples, calibration curves were plotted in the training-set and validation-set samples comparing the esophagogastric junction tumor type predicted by the arterial-phase score prediction model Rad-score_AP, the venous-phase score prediction model Rad-score_VP, and the combined score prediction model Rad-score against the actual tumor type. The results are shown in FIG. 8.
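The bootstrap calibration check can be sketched as below. The data are synthetic (predicted probabilities drawn uniformly, outcomes generated to match them, so the model is calibrated by construction), and the binning helper is an illustrative choice, not the patent's exact procedure.

```python
# Sketch of a bootstrap calibration curve: resample with replacement
# 1000 times and compare mean predicted probability with the observed
# event fraction in risk-ordered bins.
import numpy as np

rng = np.random.default_rng(1)
n = 300
p_pred = rng.uniform(0.05, 0.95, size=n)        # predicted probabilities
y = (rng.uniform(size=n) < p_pred).astype(int)  # calibrated-by-construction truth

def calibration_points(p, y, bins=5):
    """Mean predicted probability vs observed event rate per quantile bin."""
    edges = np.quantile(p, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, p, side="right") - 1, 0, bins - 1)
    return np.array([(p[idx == b].mean(), y[idx == b].mean())
                     for b in range(bins)])

boot = []
for _ in range(1000):                            # 1000 bootstrap resamples
    take = rng.integers(0, n, size=n)
    boot.append(calibration_points(p_pred[take], y[take]))
mean_curve = np.mean(boot, axis=0)               # shape (bins, 2): predicted, observed
print(mean_curve)
```

A well-calibrated model keeps the two columns close to each other (points near the diagonal of the calibration plot), which is the pattern FIG. 8 reports for all three models.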
As can be seen from FIG. 8, the stand-alone arterial-phase and venous-phase models and the combined arterial-venous model all show good calibration in both the training and validation sets.
3. Evaluation of net reclassification improvement (NRI) and integrated discrimination improvement (IDI)
In the experimental samples of the training and validation sets, the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI) were used to evaluate how much classification performance improves among the arterial-phase score prediction model Rad-score_AP (arterial-phase model), the venous-phase score prediction model Rad-score_VP (venous-phase model), and the combined score prediction model Rad-score (combined model) of the invention. Specific results are shown in Table 5.
TABLE 5 Continuous NRI and IDI of the different score prediction models for discriminating squamous carcinoma from adenocarcinoma of the esophagogastric junction
(Table 5 is reproduced as an image in the original publication.)
As can be seen from Table 5, neither the venous-phase model nor the arterial-phase model shows a significant gain over the other, but combining the venous and arterial phases yields a positive improvement in performance over either single-phase model.
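The NRI and IDI quantities compared in Table 5 can be sketched as follows, using the usual continuous (Pencina-style) definitions. The probabilities below are synthetic stand-ins for the single-phase ("old") and combined ("new") model outputs; the numbers are not from the patent.

```python
# Sketch of continuous NRI and IDI between an "old" (single-phase) and
# a "new" (combined) model. Synthetic probabilities.
import numpy as np

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=400)
p_old = np.clip(0.5 + 0.15 * (y - 0.5) + rng.normal(0, 0.15, 400), 0.01, 0.99)
p_new = np.clip(0.5 + 0.35 * (y - 0.5) + rng.normal(0, 0.15, 400), 0.01, 0.99)

def continuous_nri(y, p_old, p_new):
    """P(up) - P(down) among events, plus P(down) - P(up) among non-events."""
    up = p_new > p_old
    ev, ne = y == 1, y == 0
    return (up[ev].mean() - (~up)[ev].mean()) + ((~up)[ne].mean() - up[ne].mean())

def idi(y, p_old, p_new):
    """Change in discrimination slope (mean p in events minus non-events)."""
    return ((p_new[y == 1].mean() - p_new[y == 0].mean())
            - (p_old[y == 1].mean() - p_old[y == 0].mean()))

print(f"NRI = {continuous_nri(y, p_old, p_new):+.3f}")
print(f"IDI = {idi(y, p_old, p_new):+.3f}")
```

Positive NRI and IDI, as in this sketch, correspond to the "positive improvement" the combined model shows over the single-phase models in Table 5.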
4. Goodness of fit test
The goodness of fit of the different score prediction models constructed by the invention was tested using the Hosmer-Lemeshow test (H-L test; P > 0.05 indicates an adequate fit). Specific results are shown in Table 6.
TABLE 6 Hosmer-Lemeshow test of the different radiomics models in the training and validation sets
(Table 6 is reproduced as an image in the original publication.)
As can be seen from Table 6, the arterial-phase model, the venous-phase model and the combined model all pass the Hosmer-Lemeshow test, indicating good fit.
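A minimal sketch of the Hosmer-Lemeshow statistic is given below, assuming the standard formulation (risk-ordered groups, chi-squared reference with g − 2 degrees of freedom). The data are synthetic and generated to match the model, so a non-significant P is expected; the grouping into deciles is the conventional choice, not stated in the patent.

```python
# Sketch of the Hosmer-Lemeshow goodness-of-fit test (P > 0.05 taken
# as adequate fit, as in Table 6).
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, g=10):
    order = np.argsort(p)
    groups = np.array_split(order, g)          # g risk-ordered groups
    stat = 0.0
    for gi in groups:
        obs, exp, n = y[gi].sum(), p[gi].sum(), len(gi)
        # (O - E)^2 / (n * pbar * (1 - pbar)), with pbar = E / n
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return stat, chi2.sf(stat, g - 2)

rng = np.random.default_rng(3)
p = rng.uniform(0.1, 0.9, 500)
y = (rng.uniform(size=500) < p).astype(int)    # data consistent with the model
stat, pval = hosmer_lemeshow(y, p)
print(f"H-L statistic = {stat:.2f}, P = {pval:.3f}")
```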
5. Decision curve evaluation
In the experimental samples of the training and validation sets, decision curve analysis (DCA) was used to evaluate the clinical net benefit, i.e. the clinical utility, of the arterial-phase score prediction model Rad-score_AP (arterial-phase model), the venous-phase score prediction model Rad-score_VP (venous-phase model) and the combined score prediction model Rad-score (combined model) of the invention at different threshold probabilities. The specific results are shown in FIG. 9.
As can be seen from FIG. 9, in both the training and validation sets the combined model achieves a higher clinical net benefit than the other models over the threshold-probability interval of 0.3 to 0.9.
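The net-benefit quantity plotted in a decision curve can be sketched as below, using the standard definition NB = TP/n − (FP/n) × pt/(1 − pt) at threshold probability pt. The probabilities are synthetic stand-ins for the combined-model output; the "treat-all" curve is the usual reference strategy.

```python
# Sketch of the net-benefit calculation behind a decision curve.
import numpy as np

def net_benefit(y, p, pt):
    """Net benefit of treating patients with predicted probability >= pt."""
    pred = p >= pt
    tp = np.sum(pred & (y == 1))
    fp = np.sum(pred & (y == 0))
    n = len(y)
    return tp / n - fp / n * pt / (1 - pt)

rng = np.random.default_rng(4)
y = rng.integers(0, 2, 300)
p = np.clip(y * 0.4 + rng.uniform(0.05, 0.55, 300), 0.0, 0.99)  # synthetic scores

for pt in (0.3, 0.5, 0.7):
    print(f"pt={pt:.1f}  model NB={net_benefit(y, p, pt):+.3f}  "
          f"treat-all NB={net_benefit(y, np.ones_like(p), pt):+.3f}")
```

Plotting the model's net benefit over a grid of pt values against the treat-all and treat-none references produces a decision curve of the kind shown in FIG. 9.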
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for classifying esophagogastric junction tumor images, characterized by comprising the following steps:
S1, processing a medical tumor image of the esophagogastric junction of a subject to obtain a three-dimensional ROI image of the lesion in the medical image;
S2, extracting radiomics features from the three-dimensional ROI image;
S3, inputting the values of the radiomics features extracted in step S2 into a score prediction model, and calculating an image score for the three-dimensional ROI image;
S4, qualitatively analyzing the image score obtained in step S3 to predict the image type of the medical tumor image.
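Steps S1-S4 can be sketched as a pipeline as follows. The segmentation and feature-extraction callables are placeholders (a real implementation would delineate the 3D ROI on the CT volume and extract the named radiomics features, e.g. with PyRadiomics); only the score-then-threshold logic of S3-S4 is taken from the claims.

```python
# Minimal sketch of the S1-S4 pipeline with stub components.
from typing import Callable, Dict

def classify_egj_tumor(image,
                       segment_roi: Callable,        # S1 (placeholder)
                       extract_features: Callable,   # S2 (placeholder)
                       score_model: Callable[[Dict[str, float]], float],  # S3
                       threshold: float) -> str:     # S4
    roi = segment_roi(image)
    feats = extract_features(roi)
    score = score_model(feats)
    # Per claims 5-6, a score at or above the preset threshold indicates
    # squamous carcinoma; otherwise adenocarcinoma.
    return "squamous carcinoma" if score >= threshold else "adenocarcinoma"

# Toy run with stub components (threshold 0.043 is the combined-model value):
label = classify_egj_tumor(
    image=None,
    segment_roi=lambda img: img,
    extract_features=lambda roi: {"f": 1.0},
    score_model=lambda f: 0.2,
    threshold=0.043)
print(label)  # squamous carcinoma, since 0.2 >= 0.043
```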
2. The method for classifying esophagogastric junction tumor images according to claim 1, wherein the three-dimensional ROI image is an arterial-phase three-dimensional ROI image and/or a venous-phase three-dimensional ROI image.
3. The method for classifying esophagogastric junction tumor images according to claim 2, wherein, when the three-dimensional ROI image is an arterial-phase three-dimensional ROI image, the radiomics features in step S2 are:
log.sigma.1.0.mm.3D_ngtdm_Busyness,
log.sigma.3.0.mm.3D_gldm_DependenceVariance,
log.sigma.3.0.mm.3D_ngtdm_Busyness,
original_firstorder_Median,
wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis,
wavelet.HLH_ngtdm_Busyness,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_firstorder_Skewness,
wavelet.LLH_glszm_LargeAreaEmphasis,
wavelet.LLL_firstorder_InterquartileRange;
when the three-dimensional ROI image is a venous-phase three-dimensional ROI image, the radiomics features in step S2 are:
log.sigma.1.0.mm.3D_firstorder_90Percentile,
original_firstorder_Median,
wavelet.HLH_glcm_ClusterProminence,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_ngtdm_Busyness,
wavelet.LLH_firstorder_Kurtosis,
wavelet.LLH_gldm_DependenceVariance;
when the three-dimensional ROI images are an arterial-phase three-dimensional ROI image and a venous-phase three-dimensional ROI image, radiomics features are extracted in step S2 from the arterial-phase three-dimensional ROI image and the venous-phase three-dimensional ROI image respectively;
the radiomics features extracted from the arterial-phase three-dimensional ROI image are:
log.sigma.1.0.mm.3D_ngtdm_Busyness,
log.sigma.3.0.mm.3D_gldm_DependenceVariance,
log.sigma.3.0.mm.3D_ngtdm_Busyness,
original_firstorder_Median,
wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis,
wavelet.HLH_ngtdm_Busyness,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_firstorder_Skewness,
wavelet.LLH_glszm_LargeAreaEmphasis,
wavelet.LLL_firstorder_InterquartileRange;
the radiomics features extracted from the venous-phase three-dimensional ROI image are:
log.sigma.1.0.mm.3D_firstorder_90Percentile,
original_firstorder_Median,
wavelet.HLH_glcm_ClusterProminence,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_ngtdm_Busyness,
wavelet.LLH_firstorder_Kurtosis,
wavelet.LLH_gldm_DependenceVariance.
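For implementation purposes, the feature names listed above appear to follow the R-sanitized form of PyRadiomics output keys (dots where PyRadiomics uses hyphens, e.g. `log-sigma-1-0-mm-3D_...`, `wavelet-HLH_...`). That correspondence is an inference, not something the patent states, and the helper below is hypothetical:

```python
# Hypothetical helper mapping the patent's dotted feature names back to
# the hyphenated key style used by PyRadiomics output dictionaries.
AP_FEATURES = [
    "log.sigma.1.0.mm.3D_ngtdm_Busyness",
    "log.sigma.3.0.mm.3D_gldm_DependenceVariance",
    "log.sigma.3.0.mm.3D_ngtdm_Busyness",
    "original_firstorder_Median",
    "wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis",
    "wavelet.HLH_ngtdm_Busyness",
    "wavelet.HLL_gldm_DependenceVariance",
    "wavelet.LHH_firstorder_Skewness",
    "wavelet.LLH_glszm_LargeAreaEmphasis",
    "wavelet.LLL_firstorder_InterquartileRange",
]

def to_pyradiomics_key(name: str) -> str:
    """'log.sigma.1.0.mm.3D_ngtdm_Busyness' -> 'log-sigma-1-0-mm-3D_ngtdm_Busyness';
       'wavelet.HLH_ngtdm_Busyness' -> 'wavelet-HLH_ngtdm_Busyness'."""
    prefix, rest = name.split("_", 1)    # split off the filter prefix
    return prefix.replace(".", "-") + "_" + rest

print(to_pyradiomics_key(AP_FEATURES[0]))  # log-sigma-1-0-mm-3D_ngtdm_Busyness
```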
4. The method for classifying esophagogastric junction tumor images according to claim 3, wherein, when the three-dimensional ROI image is an arterial-phase three-dimensional ROI image, the score prediction model in step S3 is an arterial-phase score prediction model, and the formula by which the arterial-phase score prediction model calculates the image score is:
Rad-score_AP=0.266-0.852×log.sigma.1.0.mm.3D_ngtdm_Busyness+0.708×log.sigma.3.0.mm.3D_gldm_DependenceVariance+0.360×log.sigma.3.0.mm.3D_ngtdm_Busyness-0.830×original_firstorder_Median-1.160×wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis-1.122×wavelet.HLH_ngtdm_Busyness+0.656×wavelet.HLL_gldm_DependenceVariance+0.715×wavelet.LHH_firstorder_Skewness+2.398×wavelet.LLH_glszm_LargeAreaEmphasis+0.777×wavelet.LLL_firstorder_InterquartileRange;
when the three-dimensional ROI image is a venous-phase three-dimensional ROI image, the score prediction model in step S3 is a venous-phase score prediction model, and the formula by which the venous-phase score prediction model calculates the image score is:
Rad-score_VP=0.047+0.760×log.sigma.1.0.mm.3D_firstorder_90Percentile-1.030×original_firstorder_Median+0.395×wavelet.HLH_glcm_ClusterProminence+1.333×wavelet.HLL_gldm_DependenceVariance+0.746×wavelet.LHH_ngtdm_Busyness+0.381×wavelet.LLH_firstorder_Kurtosis+0.409×wavelet.LLH_gldm_DependenceVariance;
when the three-dimensional ROI images are an arterial-phase three-dimensional ROI image and a venous-phase three-dimensional ROI image, the score prediction model in step S3 is a combined score prediction model, and the formula by which the combined score prediction model calculates the image score is:
Rad-score=0.009+0.671×Rad-score_AP+0.621×Rad-score_VP.
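The three scoring formulas of claim 4 transcribe directly into code. The coefficients below are taken verbatim from the claim; the short dictionary keys are illustrative stand-ins for the full feature names, and the inputs are assumed to be the feature values after whatever normalization the training pipeline applied.

```python
# Claim 4's arterial-phase, venous-phase and combined scoring formulas.
from collections import defaultdict

def rad_score_ap(f):
    return (0.266
            - 0.852 * f["log1_ngtdm_Busyness"]
            + 0.708 * f["log3_gldm_DependenceVariance"]
            + 0.360 * f["log3_ngtdm_Busyness"]
            - 0.830 * f["orig_firstorder_Median"]
            - 1.160 * f["HLH_glrlm_LRHGLE"]
            - 1.122 * f["HLH_ngtdm_Busyness"]
            + 0.656 * f["HLL_gldm_DependenceVariance"]
            + 0.715 * f["LHH_firstorder_Skewness"]
            + 2.398 * f["LLH_glszm_LargeAreaEmphasis"]
            + 0.777 * f["LLL_firstorder_InterquartileRange"])

def rad_score_vp(f):
    return (0.047
            + 0.760 * f["log1_firstorder_90Percentile"]
            - 1.030 * f["orig_firstorder_Median"]
            + 0.395 * f["HLH_glcm_ClusterProminence"]
            + 1.333 * f["HLL_gldm_DependenceVariance"]
            + 0.746 * f["LHH_ngtdm_Busyness"]
            + 0.381 * f["LLH_firstorder_Kurtosis"]
            + 0.409 * f["LLH_gldm_DependenceVariance"])

def rad_score_joint(score_ap, score_vp):
    return 0.009 + 0.671 * score_ap + 0.621 * score_vp

# With all features at zero, each score reduces to its intercept:
zero = defaultdict(float)
print(rad_score_ap(zero), rad_score_vp(zero),
      rad_score_joint(rad_score_ap(zero), rad_score_vp(zero)))
```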
5. The method for classifying esophagogastric junction tumor images according to claim 4, wherein qualitatively analyzing the image score obtained in step S3 to predict the image type of the medical tumor image in step S4 comprises:
comparing the image score obtained in step S3 with a preset threshold, and classifying the medical tumor image as esophagogastric junction squamous carcinoma or esophagogastric junction adenocarcinoma according to whether the image score is above or below the preset threshold.
6. The method for classifying esophagogastric junction tumor images according to claim 5, wherein, when the three-dimensional ROI image is an arterial-phase three-dimensional ROI image, the preset threshold is -0.149: when the image score is greater than or equal to -0.149, the medical tumor image is classified as esophagogastric junction squamous carcinoma; otherwise, it is classified as esophagogastric junction adenocarcinoma;
when the three-dimensional ROI image is a venous-phase three-dimensional ROI image, the preset threshold is -0.352: when the image score is greater than or equal to -0.352, the medical tumor image is classified as esophagogastric junction squamous carcinoma; otherwise, it is classified as esophagogastric junction adenocarcinoma;
when the three-dimensional ROI images are an arterial-phase three-dimensional ROI image and a venous-phase three-dimensional ROI image, the preset threshold is 0.043: when the image score is greater than or equal to 0.043, the medical tumor image is classified as esophagogastric junction squamous carcinoma; otherwise, it is classified as esophagogastric junction adenocarcinoma.
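Claim 6's thresholds applied to the scores of claim 4 can be sketched as a small helper; the wrapper function and the phase labels ("AP", "VP", "AP+VP") are illustrative, while the threshold values are taken directly from the claim.

```python
# Threshold-based classification per claim 6: score >= threshold ->
# squamous carcinoma, otherwise adenocarcinoma.
THRESHOLDS = {"AP": -0.149, "VP": -0.352, "AP+VP": 0.043}

def classify(score: float, phases: str) -> str:
    t = THRESHOLDS[phases]
    return ("esophagogastric junction squamous carcinoma"
            if score >= t
            else "esophagogastric junction adenocarcinoma")

print(classify(0.10, "AP+VP"))  # 0.10 >= 0.043 -> squamous carcinoma
print(classify(-0.40, "VP"))    # -0.40 < -0.352 -> adenocarcinoma
```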
7. A prediction system for classifying esophagogastric junction tumor images, characterized by comprising an image processing module, a feature extraction module, a score prediction module and an image classification module; the image processing module is configured to process a medical tumor image of the esophagogastric junction of a subject to obtain a three-dimensional ROI image of the lesion in the medical image; the feature extraction module is configured to extract radiomics features from the three-dimensional ROI image; the score prediction module has a built-in score prediction model and is configured to calculate an image score for the three-dimensional ROI image from the radiomics features extracted by the feature extraction module; and the image classification module is configured to qualitatively analyze the image score and predict the image type of the medical tumor image.
8. The prediction system according to claim 7, wherein the three-dimensional ROI image is an arterial-phase three-dimensional ROI image and/or a venous-phase three-dimensional ROI image;
when the three-dimensional ROI image is an arterial-phase three-dimensional ROI image, the radiomics features extracted by the feature extraction module are:
log.sigma.1.0.mm.3D_ngtdm_Busyness,
log.sigma.3.0.mm.3D_gldm_DependenceVariance,
log.sigma.3.0.mm.3D_ngtdm_Busyness,
original_firstorder_Median,
wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis,
wavelet.HLH_ngtdm_Busyness,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_firstorder_Skewness,
wavelet.LLH_glszm_LargeAreaEmphasis,
wavelet.LLL_firstorder_InterquartileRange;
when the three-dimensional ROI image is a venous-phase three-dimensional ROI image, the radiomics features extracted by the feature extraction module are:
log.sigma.1.0.mm.3D_firstorder_90Percentile,
original_firstorder_Median,
wavelet.HLH_glcm_ClusterProminence,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_ngtdm_Busyness,
wavelet.LLH_firstorder_Kurtosis,
wavelet.LLH_gldm_DependenceVariance;
when the three-dimensional ROI images are an arterial-phase three-dimensional ROI image and a venous-phase three-dimensional ROI image, radiomics features are extracted from the arterial-phase three-dimensional ROI image and the venous-phase three-dimensional ROI image respectively;
the radiomics features extracted from the arterial-phase three-dimensional ROI image are:
log.sigma.1.0.mm.3D_ngtdm_Busyness,
log.sigma.3.0.mm.3D_gldm_DependenceVariance,
log.sigma.3.0.mm.3D_ngtdm_Busyness,
original_firstorder_Median,
wavelet.HLH_glrlm_LongRunHighGrayLevelEmphasis,
wavelet.HLH_ngtdm_Busyness,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_firstorder_Skewness,
wavelet.LLH_glszm_LargeAreaEmphasis,
wavelet.LLL_firstorder_InterquartileRange;
the radiomics features extracted from the venous-phase three-dimensional ROI image are:
log.sigma.1.0.mm.3D_firstorder_90Percentile,
original_firstorder_Median,
wavelet.HLH_glcm_ClusterProminence,
wavelet.HLL_gldm_DependenceVariance,
wavelet.LHH_ngtdm_Busyness,
wavelet.LLH_firstorder_Kurtosis,
wavelet.LLH_gldm_DependenceVariance.
9. A medical image processing device, characterized in that the medical image processing device comprises a memory and at least one processor, the memory storing an application program readable by the processor, and the processor being configured to execute the application program to implement the method for classifying esophagogastric junction tumor images according to any one of claims 1 to 6.
10. A computer-readable storage medium having a computer program stored thereon, the computer program being loaded by a processor to perform the method for classifying esophagogastric junction tumor images according to any one of claims 1 to 6.
CN202210398698.1A 2022-04-15 2022-04-15 Method, system, equipment and storage medium for classifying esophagus and stomach junction tumor images Pending CN114758175A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210398698.1A CN114758175A (en) 2022-04-15 2022-04-15 Method, system, equipment and storage medium for classifying esophagus and stomach junction tumor images

Publications (1)

Publication Number Publication Date
CN114758175A true CN114758175A (en) 2022-07-15

Family

ID=82331815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210398698.1A Pending CN114758175A (en) 2022-04-15 2022-04-15 Method, system, equipment and storage medium for classifying esophagus and stomach junction tumor images

Country Status (1)

Country Link
CN (1) CN114758175A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110148115A (en) * 2019-04-04 2019-08-20 中国科学院深圳先进技术研究院 A kind of screening technique, device and the storage medium of metastasis of cancer prediction image feature
CN110175978A (en) * 2019-04-02 2019-08-27 南方医科大学南方医院 A kind of liver cancer image group data processing method, system, device and storage medium
CN110335259A (en) * 2019-06-25 2019-10-15 腾讯科技(深圳)有限公司 A kind of medical image recognition methods, device and storage medium
CN111128396A (en) * 2019-12-20 2020-05-08 山东大学齐鲁医院 Digestive tract disease auxiliary diagnosis system based on deep learning

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309385A (en) * 2023-02-27 2023-06-23 之江实验室 Abdominal fat and muscle tissue measurement method and system based on weak supervision learning
CN116309385B (en) * 2023-02-27 2023-10-10 之江实验室 Abdominal fat and muscle tissue measurement method and system based on weak supervision learning
CN116740386A (en) * 2023-05-17 2023-09-12 首都医科大学宣武医院 Image processing method, apparatus, device and computer readable storage medium
CN116862858A (en) * 2023-07-04 2023-10-10 浙江大学 Prediction method and system for curative effect of gastric cancer treatment based on image histology
CN118314156A (en) * 2024-05-07 2024-07-09 西南医科大学附属医院 CT image tumor region segmentation method based on image histology scoring feature map

Similar Documents

Publication Publication Date Title
CN114758175A (en) Method, system, equipment and storage medium for classifying esophagus and stomach junction tumor images
CN109493325B (en) Tumor heterogeneity analysis system based on CT images
US10098602B2 (en) Apparatus and method for processing a medical image of a body lumen
US8238637B2 (en) Computer-aided diagnosis of malignancies of suspect regions and false positives in images
EP3528261A1 (en) Prediction model for grouping hepatocellular carcinoma, prediction system thereof, and method for determining hepatocellular carcinoma group
Azhari et al. Tumor detection in medical imaging: a survey
CN114974575A (en) Breast cancer neoadjuvant chemotherapy curative effect prediction device based on multi-feature fusion
CN112767393A (en) Machine learning-based bimodal imaging omics ground glass nodule classification method
CN112767407A (en) CT image kidney tumor segmentation method based on cascade gating 3DUnet model
EP4081952A1 (en) Systems and methods for analyzing two-dimensional and three-dimensional image data
JP2023540284A (en) System and method for virtual pancreatography pipeline
CN114549463A (en) Curative effect prediction method, system, equipment and medium for breast cancer liver metastasis anti-HER-2 treatment
Peng et al. Computed tomography-based radiomics analysis to predict lymphovascular invasion in esophageal squamous cell carcinoma
CN116958151B (en) Method, system and equipment for distinguishing adrenal hyperplasia from fat-free adenoma based on CT image characteristics
JP7539981B2 (en) Automatic classification of liver disease severity from non-invasive radiological imaging
Zhang et al. Multi-input dense convolutional network for classification of hepatocellular carcinoma and intrahepatic cholangiocarcinoma
CN110738649A (en) training method of Faster RCNN network for automatic identification of stomach cancer enhanced CT images
CN114783517A (en) Prediction of RAS gene status of CRLM patients based on imagery omics and semantic features
Ibrahim et al. Liver Multi-class Tumour Segmentation and Detection Based on Hyperion Pre-trained Models.
Cao et al. Deep learning based lesion detection for mammograms
CN115810083A (en) CT image processing method of pancreas-duodenum arterial arch and application thereof
WO2022153100A1 (en) A method for detecting breast cancer using artificial neural network
Karwoski et al. Processing of CT images for analysis of diffuse lung disease in the lung tissue research consortium
CN113850788A (en) System for judging bladder cancer muscle layer infiltration state and application thereof
EP4459544A1 (en) Method for detecting at least one lesion of a pancreas of a patient in at least one medical image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination