CN116228690A - Automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT - Google Patents

Automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT

Info

Publication number
CN116228690A
Authority
CN
China
Prior art keywords
image
features
pet
feature
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310096370.9A
Other languages
Chinese (zh)
Inventor
刘兆邦
魏文婷
王恒
郑健
杨晓冬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Institute of Biomedical Engineering and Technology of CAS
Original Assignee
Suzhou Institute of Biomedical Engineering and Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2023-02-10
Publication date: 2023-06-06
Application filed by Suzhou Institute of Biomedical Engineering and Technology of CAS filed Critical Suzhou Institute of Biomedical Engineering and Technology of CAS
Priority to CN202310096370.9A priority Critical patent/CN116228690A/en
Publication of CN116228690A publication Critical patent/CN116228690A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0012: Image analysis; biomedical image inspection
    • G06N 3/08: Neural networks; learning methods
    • G06T 11/005: Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • G06T 11/008: Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06V 10/52: Extraction of image or video features; scale-space analysis, e.g. wavelet analysis
    • G06V 10/764: Image or video recognition using classification, e.g. of video objects
    • G06V 10/82: Image or video recognition using neural networks
    • G16H 50/20: ICT specially adapted for medical diagnosis; computer-aided diagnosis, e.g. based on medical expert systems
    • G06T 2207/10081: Image acquisition modality, computed x-ray tomography [CT]
    • G06T 2207/10104: Image acquisition modality, positron emission tomography [PET]
    • G06T 2207/20081: Special algorithmic details, training; learning
    • G06T 2207/20084: Special algorithmic details, artificial neural networks [ANN]
    • G06T 2207/30096: Subject of image, tumor; lesion
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention discloses an automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT, which comprises the following steps: performing image preprocessing on the PET image and the CT image respectively; based on the Faster-Rcnn algorithm, constructing an improved detection network by introducing an attention mechanism module into the ResNet50 residual network, adding an FPN to form a feature pyramid network, and replacing the ROI Pooling layer with ROIAlign; performing target detection on the preprocessed PET and CT images respectively and outputting the lesion detection result as an ¹⁸F-FDG PET/CT image; constructing a hybrid classification model that performs radiomics feature extraction and deep learning feature extraction on the ¹⁸F-FDG PET/CT image, followed by multi-modal feature fusion, multi-domain feature fusion and feature correlation analysis, and outputs the lesion classification result. By designing an improved detection network and multi-modal feature fusion, the invention fully automates lesion detection and disease classification without physician intervention.

Description

Automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a PET-CT-based method for the automatic auxiliary diagnosis of pancreatic cancer and autoimmune pancreatitis.
Background
Distinguishing autoimmune pancreatitis (AIP) from pancreatic ductal adenocarcinoma (PDAC) is a diagnostic challenge and can lead to misdiagnosis. Currently, the most common diagnostic approach in the clinic is imaging examination. CT, PET, MR imaging and endoscopic ultrasound (EUS) can reveal subtle pancreatic abnormalities and are often used for detection and staging.
With the continuous development of artificial intelligence, its application in clinical medical diagnosis has shown great value. Linning et al. built a radiomics classification model based on CT images using a random forest algorithm. Marya et al. used convolutional neural networks to identify PDAC and AIP in endoscopic ultrasound (EUS) images, with a sensitivity of 0.88 and a specificity of 0.88. Zhang et al. developed a risk-score-based prognosis model by fusing deep transfer learning and radiomics features from CT images of PDAC patients, showing a significant improvement for resectable PDAC patients, with an AUC (area under the receiver operating characteristic curve) of 0.84. Among studies identifying PDAC and AIP on PET/CT images, Zhang et al. correlated the maximum standardized uptake value (SUVmax) with FDG (fluorodeoxyglucose)-avid lesions and found that PET/CT findings may help distinguish PDAC from AIP. Cheng et al. first proposed identifying PDAC and AIP using a radiomics method on PET/CT images: they extracted texture features from PET images in three-dimensional space and combined them with SUVmax, the number of extra-pancreatic lesions and the lesions' glucose-analog uptake to build a logistic regression model, obtaining an AUC of 0.95. Another study combined CNN features and hand-crafted features to train a classifier and used a support vector machine for the final lung nodule classification; Zhu et al. non-invasively graded meningiomas with a deep learning radiomics (DLR) model based on conventional magnetic resonance imaging; Wang et al. proposed combining radiomics and deep learning (RDL) to predict adenocarcinoma (ADC), with the fused model performing significantly better than radiomics alone. These results show that fusing radiomics and deep learning features can effectively improve the accuracy of classification, diagnosis and prognosis.
However, the conventional Faster-Rcnn algorithm performs poorly on small-target detection, and although some algorithms adopt multi-scale feature fusion, they generally predict only from the fused features. In common two-stage detection frameworks (such as Fast Rcnn and R-FCN), ROI Pooling pools the region corresponding to a proposal's position coordinates in the feature map into a fixed-size feature map for subsequent classification and bounding-box regression. ROI Pooling involves two quantization steps, after which the candidate box deviates from the initially regressed position, and this deviation degrades detection or segmentation accuracy.
Therefore, how to construct a diagnostic auxiliary model integrating radiomics and deep learning, so as to further improve the accuracy of PDAC and AIP identification, is of practical research significance.
Disclosure of Invention
Aiming at the above deficiencies, the invention provides an automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT. By designing an improved detection network and multi-modal feature fusion, it combines the high contrast of PET images with the rich structural information and high spatial resolution of CT images, alleviating the blurred tumor edges of single-modality PET images and the low lesion-to-tissue contrast of CT images. It can effectively mine the spatial and texture features of lesions, fully automates lesion detection and disease classification without physician intervention, and achieves high detection accuracy.
To achieve these objects and other advantages and in accordance with the purpose of the invention, as embodied by the following:
the embodiment of the invention provides a PET-CT-based automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis, which comprises the following steps:
respectively carrying out image preprocessing on the PET image and the CT image;
based on the Faster-Rcnn algorithm, constructing an improved detection network by introducing an attention mechanism module into the ResNet50 residual network, adding an FPN to form a feature pyramid network, and replacing the ROI (region of interest) Pooling layer with ROIAlign; performing target detection on the preprocessed PET image and CT image respectively, and outputting the lesion detection result as an ¹⁸F-FDG PET/CT image;
constructing a hybrid classification model, performing radiomics feature extraction and deep learning feature extraction on the ¹⁸F-FDG PET/CT image, performing multi-modal feature fusion, multi-domain feature fusion and feature correlation analysis on the radiomics features and the deep learning features respectively, and outputting the lesion classification result.
Preferably, the image preprocessing for the PET image and the CT image respectively includes the following steps:
resampling the PET image using bilinear interpolation;
converting pixel values of the CT image into HU values, and converting pixel values of the resampled PET image into SUV values;
and clipping the HU values of the CT image to the range [-10, 100].
Preferably, the attention mechanism module comprises a channel attention module and a space attention module which are connected in sequence;
the channel attention module mainly comprises a global average pooling layer, a first fully connected layer, a first activation function layer, a second fully connected layer and a second activation function layer connected in sequence; the global average pooling layer obtains global information, which then passes in sequence through the first fully connected layer, the first activation function layer, the second fully connected layer and the second activation function layer to produce a new set of weights;
the spatial attention module mainly comprises an average pooling layer, a maximum pooling layer, a concatenation layer, a convolution layer and a third activation function layer; the average pooling layer and the maximum pooling layer are each connected to the second activation function layer, and their outputs pass in sequence through concatenation in the concatenation layer, convolution in the convolution layer, and activation in the third activation function layer.
Preferably, the constructing the hybrid classification model includes the steps of:
extracting the radiomics features using the PyRadiomics open-source package in Python; extracting the deep learning features using a VGG11 network, building a dual-branch network model, and training it on the preprocessed PET images and CT images respectively;
fusing the CT image features and the PET image features of the radiomics features at a fully connected layer, performing multi-scale feature fusion on the CT image features and the PET image features of the deep learning features, and fusing the radiomics features and the deep learning features as multi-domain features at a fully connected layer, thereby forming the multi-modal features of ¹⁸F-FDG PET/CT;
setting a first classification prediction model, a second classification prediction model and a third classification prediction model, wherein the first classification prediction model classifies the CT image features and the PET image features extracted as radiomics features, the second classification prediction model classifies the high-level semantic features extracted from the CT image and the PET image respectively through the VGG11 network, and the third classification prediction model classifies the multi-domain features of ¹⁸F-FDG PET/CT.
Preferably, the initial framework of the VGG11 network comprises 5 convolution modules, and the convolution layers in each convolution module use 3×3 convolution kernels.
Preferably, the radiomics features and the deep learning features respectively undergo multi-modal feature fusion, comprising the following steps:
performing feature fusion on the CT image features and the PET image features of the radiomics features;
and performing multi-scale feature fusion on the CT image features and the PET image features of the deep learning features: extracting the feature map generated by each convolution module of the VGG11 network while the preprocessed PET image and the preprocessed CT image are trained respectively; stacking the PET feature map and the CT feature map and feeding them into the convolution layer at the corresponding position, weighting image features at different positions to form a spatially varying PET/CT fusion map, thereby obtaining hybrid deep learning features integrating multiple scales.
Preferably, multi-domain feature fusion is performed on the extracted radiomics features and deep learning features, comprising the following steps:
fusing the radiomics features extracted from the ¹⁸F-FDG PET/CT image with the deep learning features at a fully connected layer to form the multi-domain features of PET/CT, and feeding the resulting multi-domain features of PET/CT into a linear block for classification.
Preferably, the feature correlation analysis includes the steps of:
grouping the radiomics features by statistical feature type and then permuting and combining them with the deep learning features to obtain a preliminary hybrid feature set comprising preliminary hybrid features;
comparing the differences between the grouped radiomics features and the preliminary hybrid feature set, and assigning weights to the features of the preliminary hybrid feature set according to their contribution to the hybrid classification model based on the comparison results, thereby obtaining an adjusted hybrid feature set comprising the final hybrid features;
wherein the statistical features include texture features, histogram features and morphological features.
The invention at least comprises the following beneficial effects:
1. In the PET-CT-based automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis provided by the invention, an attention mechanism module is introduced into the ResNet50 residual network of the Faster-Rcnn algorithm, an FPN is added to form a feature pyramid network, and ROIAlign replaces the ROI (region of interest) Pooling layer, yielding an improved detection network. Together with the designed multi-modal feature fusion, this combines the high contrast of PET images with the rich structural information and high spatial resolution of CT images, alleviates the blurred tumor edges of single-modality PET and the low lesion-to-tissue contrast of CT, effectively mines the spatial and texture features of lesions, and fully automates lesion detection and disease classification with high detection accuracy and no physician intervention;
2. The invention preprocesses the PET image and the CT image respectively: resampling the PET image with bilinear interpolation, converting the CT pixel values into HU values and the resampled PET pixel values into SUV values, and clipping the CT HU values to the range [-10, 100]. Resampling is needed because the difference in spatial resolution between the PET and CT images hinders lesion localization and local feature extraction; bilinear resampling of the PET image keeps the spatial resolutions of the PET and CT images consistent. After pixel value conversion, the digital image can be further correlated with clinical indicators. Thresholding filters artifacts in the CT image, reducing the interference of fat, bone tissue and other factors with the texture features.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of an automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT;
FIG. 2 is a schematic diagram of a process for image preprocessing of PET and CT images, respectively, according to the present invention;
FIG. 3 is a schematic diagram of the attention mechanism module according to the present invention;
FIG. 4 is a schematic flow chart of the hybrid classification model construction provided by the invention;
FIG. 5 is a schematic flow chart of a method for respectively performing multi-modal feature fusion on image histology features and deep learning features;
fig. 6 is a schematic flow chart of a feature correlation analysis method provided by the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Terms such as "having," "including," and "comprising" used in various embodiments of the invention described below do not exclude the presence or addition of one or more other elements or combinations thereof; the technical features involved can be combined with one another as long as they do not conflict with one another.
As shown in fig. 1, an embodiment of the present invention provides an automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT, which includes steps S10, S20 and S30.
S10, data preprocessing;
s20, target detection;
s30, extracting and classifying the features.
Specifically, the data preprocessing in step S10 includes image preprocessing for the PET image and the CT image, respectively.
The target detection in step S20 includes step S21 and step S22:
S21, based on the Faster-Rcnn algorithm, an improved detection network is constructed by introducing an attention mechanism module into the ResNet50 residual network, adding an FPN to form a feature pyramid network, and replacing the ROI (region of interest) Pooling layer with ROIAlign;
S22, target detection is performed on the preprocessed PET image and CT image respectively, and the lesion detection result is output as an ¹⁸F-FDG PET/CT image.
The feature extraction and classification in step S30 includes the following steps: constructing a hybrid classification model, performing radiomics feature extraction and deep learning feature extraction on the ¹⁸F-FDG PET/CT image, performing multi-modal feature fusion, multi-domain feature fusion and feature correlation analysis on the radiomics features and the deep learning features respectively, and outputting the lesion classification result.
In this embodiment, the data preprocessing in step S10 is needed because the imaging principles of different medical image modalities differ, and their resolution, signal-to-noise ratio and so on differ accordingly, so the images must first be preprocessed. Preprocessing the PET and CT images respectively enhances the detectability of useful information in the image and removes factors adverse to subsequent tasks.
In step S21, an FPN is added to form a feature pyramid network because pancreatic cancer often presents multiple small lesions and the traditional Faster Rcnn algorithm performs poorly on small-target detection; most target detection algorithms predict only from top-level features, giving coarse target positions. In addition, although some algorithms adopt multi-scale feature fusion, they generally predict only from the fused features; FPN differs in that prediction is performed independently on each feature layer, and multi-scale fusion improves the accuracy of small-target detection.
In step S21, ROIAlign replaces the ROI Pooling layer because in common two-stage detection frameworks (such as Fast Rcnn and R-FCN), ROI Pooling pools the region corresponding to a proposal's position coordinates into a fixed-size feature map for subsequent classification and bounding-box regression; ROI Pooling involves two quantization steps, after which the candidate box deviates from the initially regressed position, and this deviation degrades detection or segmentation accuracy. ROIAlign cancels the quantization and uses bilinear interpolation to obtain image values at floating-point coordinates, turning the whole feature aggregation process into a continuous operation and largely avoiding the error.
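The difference can be seen directly with torchvision's pooling operators. The sketch below is illustrative only, with an arbitrary feature map and proposal box, and is not taken from the patent:

```python
import torch
from torchvision.ops import roi_align, roi_pool

feat = torch.randn(1, 256, 50, 50)   # backbone feature map (N, C, H, W)
# One proposal in (batch_index, x1, y1, x2, y2) format, floating-point coords.
boxes = torch.tensor([[0.0, 10.4, 12.7, 38.2, 41.9]])

# roi_pool rounds box coordinates and bin borders to integers (two quantizations).
pooled = roi_pool(feat, boxes, output_size=(7, 7), spatial_scale=1.0)

# roi_align keeps floating-point coordinates and bilinearly interpolates the
# sample points inside each bin, so no quantization error is introduced.
aligned = roi_align(feat, boxes, output_size=(7, 7), spatial_scale=1.0,
                    sampling_ratio=2, aligned=True)
print(pooled.shape, aligned.shape)   # both torch.Size([1, 256, 7, 7])
```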
In step S21, an improved detection network is constructed because imaging principles differ significantly between modalities, images of different modalities have complex relationships, and effectively learning the complementary information between modalities is critical. The improved detection network is based on the Faster-Rcnn algorithm: an attention mechanism module is introduced into the ResNet50 residual network, an FPN is added to form a feature pyramid network, and ROIAlign replaces the ROI Pooling layer, achieving effective feature fusion.
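A minimal sketch of such a detector, assuming a recent torchvision; it builds a Faster-Rcnn on a ResNet-50 + FPN backbone (torchvision's implementation already pools proposals with multi-scale ROIAlign), while the attention modules that the patent inserts into the residual blocks are omitted here:

```python
import torch
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# ResNet-50 backbone with an FPN on top: predictions are made independently
# on each pyramid level, which helps with small pancreatic lesions.
backbone = resnet_fpn_backbone(backbone_name="resnet50", weights=None)
model = FasterRCNN(backbone, num_classes=2)   # lesion vs. background

model.eval()
with torch.no_grad():
    preds = model([torch.randn(3, 512, 512)])  # list of 3-channel images
print(preds[0]["boxes"].shape, preds[0]["scores"].shape)
```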
In step S30, radiomics feature extraction and deep learning feature extraction are performed on the ¹⁸F-FDG PET/CT image, and the radiomics and deep learning features undergo multi-modal and multi-domain feature fusion respectively. This combines the high contrast of PET images with the rich structural information and high spatial resolution of CT images, alleviates the blurred tumor edges of single-modality PET and the low lesion-to-tissue contrast of CT, and effectively mines the spatial and texture features of lesions. The improved detection network provided by the invention localizes lesions in the samples relatively accurately with correspondingly higher detection precision, and its detection time of 0.02 s per image is markedly shorter than an imaging expert's diagnosis time. It can therefore effectively assist physicians in diagnosing lesions, even small ones, improve diagnostic efficiency, compensate to some extent for differences in physician experience, and validate the significance of the multi-modal PET/CT pancreatic lesion detection algorithm.
Summarizing the above, the PET-CT-based automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis provided by the embodiment of the invention, on the one hand, improves the classical Faster-Rcnn by designing a dual feature extraction network and introducing attention, a feature pyramid network and the ROIAlign method, forming an improved detection network that automates lesion detection while improving its accuracy; on the other hand, by designing multi-modal feature fusion, it combines the high contrast of PET images with the rich structural information of CT images, alleviates the blurred edges of single-modality PET images and the low lesion-to-tissue contrast of CT images, effectively mines the spatial and texture features of lesions, and completes the disease classification of lesions, forming an end-to-end automatic auxiliary diagnosis pipeline from lesion detection to final classification, without physician intervention and with high automatic detection accuracy.
As a further preferred embodiment, the PET image and the CT image are preprocessed respectively, as shown in fig. 2, comprising steps S11, S12 and S13: S11, resampling; S12, pixel value conversion; S13, threshold adjustment.
In step S11, the resampling specifically comprises: resampling the PET image using bilinear interpolation. Resampling is needed because the difference in spatial resolution between the PET and CT images hinders lesion localization and local feature extraction; this step keeps the spatial resolutions of the PET and CT images consistent.
In step S12, the pixel value conversion specifically includes the steps of: the pixel values of the CT image are converted to HU values and the pixel values of the resampled PET image are converted to SUV values. After the pixel value conversion, the digital image may be further correlated with a clinical indicator.
In step S13, the threshold adjustment specifically comprises: clipping the HU values of the CT image to the range [-10, 100]. This step filters artifacts in the CT image, reducing the interference of fat, bone tissue and other factors with the texture features.
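A minimal preprocessing sketch under stated assumptions: SimpleITK handles the resampling, the DICOM rescale and dose fields are assumed to be already extracted, and the SUV formula shown is the common body-weight variant, which the patent does not specify:

```python
import numpy as np
import SimpleITK as sitk

def resample_pet_to_ct(pet: sitk.Image, ct: sitk.Image) -> sitk.Image:
    """Resample the PET volume onto the CT grid with linear interpolation."""
    return sitk.Resample(pet, ct, sitk.Transform(), sitk.sitkLinear,
                         0.0, pet.GetPixelID())

def to_hu(ct_pixels: np.ndarray, slope: float, intercept: float) -> np.ndarray:
    """Raw CT values -> Hounsfield units, clipped to the method's window."""
    return np.clip(ct_pixels * slope + intercept, -10, 100)

def to_suv_bw(pet_bq_per_ml: np.ndarray, weight_kg: float,
              decay_corrected_dose_bq: float) -> np.ndarray:
    """Activity concentration -> body-weight SUV (assumed variant)."""
    return pet_bq_per_ml * weight_kg * 1000.0 / decay_corrected_dose_bq
```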
As a further preference of the above embodiment, in order to further ensure the validity of the multi-modal feature information fusion, the attention mechanism module specifically comprises a channel attention module and a spatial attention module connected in sequence, as shown in fig. 3. The channel attention module mainly comprises a global average pooling layer (Global Pooling), a first fully connected layer (FC), a first activation function layer (ReLU), a second fully connected layer (FC) and a second activation function layer (Sigmoid) connected in sequence. Global information is first obtained through the global average pooling layer and then passes in sequence through the first fully connected layer, the first activation function layer, the second fully connected layer and the second activation function layer to produce a new set of weights, which are used to selectively emphasize informative features and suppress less useful ones. The spatial attention module mainly comprises an average pooling layer (Avg Pooling), a maximum pooling layer (Max Pooling), a concatenation layer (Cat), a convolution layer (Conv) and a third activation function layer (Sigmoid); the average pooling layer and the maximum pooling layer are each connected to the second activation function layer, and their outputs pass in sequence through concatenation in the concatenation layer, convolution in the convolution layer, and activation in the third activation function layer. Building on the channel attention module, the spatial attention module searches along the channel direction for the positions carrying the most critical information, complementing the channel attention module. In this embodiment, an attention mechanism module comprising a channel attention module and a spatial attention module connected in sequence is placed in each of the two feature extraction networks, which extract features from the PET and CT images respectively; the features are then concatenated, achieving effective fusion of multi-modal feature information, and lesion detection is completed after the fused information is fed into the subsequent network.
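A PyTorch sketch of this channel + spatial attention module (CBAM-style); the reduction ratio r and the 7×7 spatial convolution are assumed hyperparameters not given in the text:

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        # Channel attention: Global Pooling -> FC -> ReLU -> FC -> Sigmoid.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // r), nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels), nn.Sigmoid())
        # Spatial attention: [avg, max over channels] -> Cat -> Conv -> Sigmoid.
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.channel(x).unsqueeze(-1).unsqueeze(-1)     # (N, C, 1, 1)
        x = x * w                                           # reweight channels
        avg = x.mean(dim=1, keepdim=True)                   # (N, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * self.spatial(torch.cat([avg, mx], dim=1))  # reweight positions
```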
As a further preference of the above embodiment, fusing the high-level semantic information extracted by deep learning with the statistical features obtained by the radiomics method is complementary and can provide useful assistance for the diagnosis and prognosis of related diseases. The feature extraction and classification prediction process is as follows: first, hand-crafted features are extracted from the image and multi-scale spatial features are extracted by deep learning; a new feature set is then constructed with a fusion strategy and fed into a classifier to complete the classification task. Therefore, to better complete classification after multi-modal feature fusion, as shown in fig. 4, constructing the hybrid classification model preferably comprises the following steps:
s31, extracting features;
s32, feature fusion;
s33, feature classification prediction.
The feature extraction in step S31 specifically comprises: extracting the radiomics features using the PyRadiomics open-source package in Python; and extracting the deep learning features using a VGG11 network, building a dual-branch network model, and training it on the preprocessed PET images and CT images respectively.
The feature fusion in step S32 specifically comprises: fusing the CT image features and the PET image features of the radiomics features at a fully connected layer, performing multi-scale feature fusion on the CT image features and the PET image features of the deep learning features, and fusing the radiomics features and the deep learning features as multi-domain features at a fully connected layer, thereby forming the multi-modal features of ¹⁸F-FDG PET/CT;
the feature classification prediction in step S33 specifically includes the steps of: setting a first classification prediction model, a second classification prediction model and a third classification prediction model, wherein the first classification prediction model is used for classifying CT image features and PET image features extracted from image group chemical features, the second classification prediction model is used for respectively extracting high-level semantic features of the CT image and the PET image through a VGG11 network to classify, and the third classification prediction model is used for classifying the CT image features and the PET image features 18 The multi-domain features of F-FDG PET/CT are classified.
In this embodiment, the multi-modal features of the ¹⁸F-FDG PET/CT image comprise two groups of features, radiomics features and deep learning features, extracted in step S31: the radiomics features are extracted with the PyRadiomics open-source package in Python, yielding 102 statistical features from the original image for each of the PET image and the CT image together with the ROI mask; the deep learning features are extracted with a VGG11 network by building a dual-branch network model and training it on the preprocessed PET and CT images respectively. Still more preferably, the VGG11 network is fine-tuned; its initial framework comprises 5 convolution modules, each using 3×3 convolution kernels. Feature extraction with the pretrained, fine-tuned VGG11 network yields 4096 one-dimensional features from each of the CT and PET images.
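A sketch of the two feature branches, assuming the PyRadiomics and torchvision packages; the file names are placeholders, and tapping VGG11 after its first 4096-dimensional fully connected layer is an assumption consistent with the 4096 features mentioned above:

```python
import torch
from radiomics import featureextractor
from torchvision.models import vgg11

# Radiomics branch: statistical features from one image plus its ROI mask.
extractor = featureextractor.RadiomicsFeatureExtractor()
radiomics_feats = extractor.execute("ct_image.nii.gz", "roi_mask.nii.gz")

# Deep branch: a VGG11 trunk truncated after the first 4096-d FC layer + ReLU.
net = vgg11(weights=None)
trunk = torch.nn.Sequential(net.features, net.avgpool, torch.nn.Flatten(),
                            *list(net.classifier[:2]))
with torch.no_grad():
    deep_feats = trunk(torch.randn(1, 3, 224, 224))   # shape (1, 4096)
```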
As a further preferred embodiment, in step S32 the radiomics features and the deep learning features respectively undergo multi-modal feature fusion, as shown in fig. 5, comprising the following steps:
S321, performing feature fusion on the CT image features and the PET image features of the radiomics features;
S322, performing multi-scale feature fusion on the CT image features and the PET image features of the deep learning features: extracting the feature map generated by each convolution module of the VGG11 network while the preprocessed PET image and the preprocessed CT image are trained respectively; stacking the PET feature map and the CT feature map and feeding them into the convolution layer at the corresponding position to form a spatially varying PET/CT fusion map, thereby obtaining hybrid deep learning features integrating multiple scales.
This embodiment mainly provides a deep learning multi-scale feature fusion method: feature maps at different spatial scales express the differences between modalities, and fully exploiting this difference information is expected to improve the analysis performance of the classification model. CT anatomical information is more discriminative and reflects the contour information between the lesion region and surrounding vessels and organs, while the PET image reflects the metabolic level of the lesion. The multi-spatial feature fusion network takes advantage of both CT and PET and fuses image features of different spatial scales; it effectively learns the spatially varying information between modalities while integrating the complementary information of CT and PET, improving the accuracy and generalization ability of the classification model and offering new reference value for disease classification. A multi-spatial feature fusion strategy is therefore adopted to fuse the PET and CT images across spatial layers. Specifically, the feature map generated by each convolution module of the VGG11 network is extracted while the preprocessed PET image and the preprocessed CT image are trained respectively; the PET and CT feature maps are stacked and fed into the convolution layer at the corresponding position, weighting image features at different positions to form a spatially varying PET/CT fusion map and yielding hybrid deep learning features integrating multiple scales.
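A minimal sketch of one stage of this fusion: same-scale PET and CT feature maps are stacked along the channel dimension and a learned convolution produces the position-weighted fusion map; the 1×1 kernel and channel count are assumptions:

```python
import torch
import torch.nn as nn

class StageFusion(nn.Module):
    """Fuse one pair of same-scale PET and CT feature maps."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution learns a position-wise weighting of both modalities.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_pet: torch.Tensor, f_ct: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([f_pet, f_ct], dim=1))

fusion = StageFusion(channels=128)
out = fusion(torch.randn(1, 128, 56, 56), torch.randn(1, 128, 56, 56))
print(out.shape)   # torch.Size([1, 128, 56, 56])
```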
As a further preferred embodiment, in step S32 multi-domain feature fusion is performed on the extracted radiomics features and deep learning features, further comprising the following steps:
The radiomics features extracted from the ¹⁸F-FDG PET/CT image and the deep learning features are fused at a fully connected layer to form the multi-domain features of PET/CT, and the resulting multi-domain features of PET/CT are fed into a linear block for classification.
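A sketch of this fusion head under stated assumptions: the radiomics and deep feature vectors for both modalities are concatenated at the fully connected layer and passed to a linear block for the two-class (PDAC vs. AIP) prediction; the hidden width and dropout rate are illustrative:

```python
import torch
import torch.nn as nn

radiomics_vec = torch.randn(1, 2 * 102)    # PET + CT radiomics features
deep_vec = torch.randn(1, 2 * 4096)        # PET + CT deep features

head = nn.Sequential(                      # the "linear block"
    nn.Linear(2 * 102 + 2 * 4096, 256), nn.ReLU(inplace=True),
    nn.Dropout(0.5), nn.Linear(256, 2))    # -> class logits (PDAC vs. AIP)

logits = head(torch.cat([radiomics_vec, deep_vec], dim=1))
```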
As a further preferred embodiment, in step S32, the feature correlation analysis further comprises the following steps:
S322, grouping the radiomics features by statistical feature type and then permuting and combining them with the deep learning features to obtain a preliminary hybrid feature set comprising preliminary hybrid features;
S323, comparing the differences between the grouped radiomics features and the preliminary hybrid feature set, and assigning weights to the features of the preliminary hybrid feature set according to their contribution to the hybrid classification model based on the comparison results, thereby obtaining an adjusted hybrid feature set comprising the final hybrid features; wherein the statistical features include texture features, histogram features and morphological features.
In this embodiment, after the radiomics features are grouped, each group is fused with the deep learning features to form comparison experiments, and the weight of each feature is assigned according to the differences in experimental results, forming a new hybrid feature set that shows the best classification performance. Meanwhile, deep-learning-extracted features are abstract and lack intuitiveness and interpretability; the information content of the deep learning features can be inferred in reverse from the differences between combinations. If the abstract features extracted by Faster-Rcnn already contain certain classification-related information, fusing everything may introduce redundant or negatively correlated information and degrade the result; if model performance improves after fusion with the deep learning features, the two form a complementary trend, and the added features likely carry information the abstract features lack. In this way, partial properties of the deep learning features can be obtained. Specifically, the statistical features include texture features (further divided into five subclasses), histogram features and morphological features; after grouping, they are permuted and combined with the deep learning features to form feature sets fusing the various radiomics features with the deep learning features (the preliminary hybrid features), and the differences in classification results are analyzed. Weights are assigned according to each feature's contribution to the model, finally yielding the adjusted hybrid feature set (the final hybrid features). Meanwhile, from the feature expression of the different combinations, the correlation between the radiomics and deep learning features can be analyzed at a shallow level, the information attributes within deep learning can be inferred in reverse, and the interpretability of the abstract high-level semantic features can be increased.
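One way to run this comparison experiment, sketched with scikit-learn as an assumed stand-in for the patent's classifier: each radiomics subgroup is fused with the deep features, scored by cross-validated AUC, and the score gains are normalized into subgroup weights; the scoring and weighting rules are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def auc(features: np.ndarray, labels: np.ndarray) -> float:
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, features, labels, cv=5, scoring="roc_auc").mean()

def subgroup_weights(subgroups: dict, deep: np.ndarray, y: np.ndarray) -> dict:
    """Weight each radiomics subgroup by its AUC gain over deep features alone."""
    base = auc(deep, y)
    gains = {name: auc(np.hstack([deep, feats]), y) - base
             for name, feats in subgroups.items()}
    total = sum(max(g, 0.0) for g in gains.values()) or 1.0
    return {name: max(g, 0.0) / total for name, g in gains.items()}
```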
It is apparent that the above examples are given by way of illustration only and are not limiting of the embodiments. Other variations or modifications of the above teachings will be apparent to those of ordinary skill in the art. It is not necessary here nor is it exhaustive of all embodiments. And obvious variations or modifications thereof are contemplated as falling within the scope of the present invention.

Claims (8)

1. An automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT is characterized by comprising the following steps:
respectively carrying out image preprocessing on the PET image and the CT image;
based on the Faster-Rcnn algorithm, constructing an improved detection network by introducing an attention mechanism module into the ResNet50 residual network, adding an FPN to form a feature pyramid network, and replacing the ROI Pooling layer with ROIAlign; performing target detection on the preprocessed PET image and CT image respectively, and outputting the lesion detection result as an ¹⁸F-FDG PET/CT image;
constructing a hybrid classification model, performing radiomics feature extraction and deep learning feature extraction on the ¹⁸F-FDG PET/CT image, performing multi-modal feature fusion, multi-domain feature fusion and feature correlation analysis on the radiomics features and the deep learning features respectively, and outputting the lesion classification result.
2. The automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT according to claim 1, wherein the image preprocessing is performed on the PET image and the CT image, respectively, comprising the steps of:
resampling the PET image using bilinear interpolation;
converting pixel values of the CT image into HU values, and converting pixel values of the resampled PET image into SUV values;
and clipping the HU values of the CT image to the range [-10, 100].
3. The automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT according to claim 1, wherein the attention mechanism module comprises a channel attention module and a spatial attention module which are connected in sequence;
the channel attention module mainly comprises a global average pooling layer, a first fully connected layer, a first activation function layer, a second fully connected layer and a second activation function layer connected in sequence; the global average pooling layer obtains global information, which then passes in sequence through the first fully connected layer, the first activation function layer, the second fully connected layer and the second activation function layer to produce a new set of weights;
the spatial attention module mainly comprises an average pooling layer, a maximum pooling layer, a concatenation layer, a convolution layer and a third activation function layer; the average pooling layer and the maximum pooling layer are each connected to the second activation function layer, and their outputs pass in sequence through concatenation in the concatenation layer, convolution in the convolution layer, and activation in the third activation function layer.
4. The automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT according to claim 1, wherein the construction of the mixed classification model comprises the steps of:
extracting the radiomics features using the PyRadiomics open-source package in Python; extracting the deep learning features using a VGG11 network, building a dual-branch network model, and training it on the preprocessed PET images and CT images respectively;
fusing the CT image features and the PET image features of the radiomics features at a fully connected layer, performing multi-scale feature fusion on the CT image features and the PET image features of the deep learning features, and fusing the radiomics features and the deep learning features as multi-domain features at a fully connected layer, thereby forming the multi-modal features of ¹⁸F-FDG PET/CT;
setting a first classification prediction model, a second classification prediction model and a third classification prediction model, wherein the first classification prediction model classifies the CT image features and the PET image features extracted as radiomics features, the second classification prediction model classifies the high-level semantic features extracted from the CT image and the PET image respectively through the VGG11 network, and the third classification prediction model classifies the multi-domain features of ¹⁸F-FDG PET/CT.
5. The method for automatically aiding diagnosis of pancreatic cancer and autoimmune pancreatitis based on PET-CT as claimed in claim 4, wherein,
the initial framework of the VGG11 network comprises 5 convolution modules, and the convolution layers in each convolution module use 3×3 convolution kernels.
6. The automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT according to claim 5, wherein the radiomics features and the deep learning features respectively undergo multi-modal feature fusion, comprising the following steps:
performing feature fusion on the CT image features and the PET image features of the radiomics features;
and performing multi-scale feature fusion on the CT image features and the PET image features of the deep learning features: extracting the feature map generated by each convolution module of the VGG11 network while the preprocessed PET image and the preprocessed CT image are trained respectively; stacking the PET feature map and the CT feature map and feeding them into the convolution layer at the corresponding position, weighting image features at different positions to form a spatially varying PET/CT fusion map, thereby obtaining hybrid deep learning features integrating multiple scales.
7. The automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT according to claim 5, wherein the multi-domain feature fusion of the extracted radiomics features and the deep learning features comprises the following steps:
fusing the radiomics features extracted from the ¹⁸F-FDG PET/CT image with the deep learning features at a fully connected layer to form the multi-domain features of PET/CT, and feeding the resulting multi-domain features of PET/CT into a linear block for classification.
8. The automatic aided diagnosis method of pancreatic cancer and autoimmune pancreatitis based on PET-CT according to claim 5, characterized in that the feature correlation analysis includes the steps of:
grouping the radiomics features by statistical feature type and then permuting and combining them with the deep learning features to obtain a preliminary hybrid feature set comprising preliminary hybrid features;
comparing the differences between the grouped radiomics features and the preliminary hybrid feature set, and assigning weights to the features of the preliminary hybrid feature set according to their contribution to the hybrid classification model based on the comparison results, thereby obtaining an adjusted hybrid feature set comprising the final hybrid features;
wherein the statistical features include texture features, histogram features and morphological features.
CN202310096370.9A 2023-02-10 2023-02-10 Automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT Pending CN116228690A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310096370.9A CN116228690A (en) 2023-02-10 2023-02-10 Automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT


Publications (1)

Publication Number Publication Date
CN116228690A true CN116228690A (en) 2023-06-06

Family

ID=86574425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310096370.9A Pending CN116228690A (en) 2023-02-10 2023-02-10 Automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT

Country Status (1)

Country Link
CN (1) CN116228690A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132840A (en) * 2023-10-26 2023-11-28 苏州凌影云诺医疗科技有限公司 Peptic ulcer classification method and system based on AHS classification and Forrest classification
CN117132840B (en) * 2023-10-26 2024-01-26 苏州凌影云诺医疗科技有限公司 Peptic ulcer classification method and system based on AHS classification and Forrest classification


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination