CN113706486B - Pancreatic tumor image segmentation method based on dense connection network transfer learning - Google Patents

Pancreatic tumor image segmentation method based on dense connection network transfer learning

Info

Publication number
CN113706486B
CN113706486B (application number CN202110944394.6A)
Authority
CN
China
Prior art keywords
network
segmentation
image
pet
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110944394.6A
Other languages
Chinese (zh)
Other versions
CN113706486A (en)
Inventor
缑水平 (Gou Shuiping)
续溢男 (Xu Yinan)
童诺 (Tong Nuo)
郭璋 (Guo Zhang)
李睿敏 (Li Ruimin)
陈姝喆 (Chen Shuzhe)
刘波 (Liu Bo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202110944394.6A
Publication of CN113706486A
Application granted
Publication of CN113706486B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/0012 — Image analysis; Biomedical image inspection
    • G06F 18/241 — Pattern recognition; Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 — Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N 3/045 — Neural networks; Combinations of networks
    • G06N 3/047 — Probabilistic or stochastic networks
    • G06N 3/048 — Activation functions
    • G06N 3/084 — Learning methods; Backpropagation, e.g. using gradient descent
    • G06T 7/11 — Segmentation; Region-based segmentation
    • G06T 7/136 — Segmentation; Edge detection involving thresholding
    • G06T 7/194 — Segmentation; Edge detection involving foreground-background segmentation
    • G06T 2207/10088 — Image acquisition modality; Magnetic resonance imaging [MRI]
    • G06T 2207/10104 — Image acquisition modality; Positron emission tomography [PET]
    • G06T 2207/20081 — Special algorithmic details; Training; Learning
    • G06T 2207/20084 — Special algorithmic details; Artificial neural networks [ANN]
    • G06T 2207/30096 — Subject of image; Tumor; Lesion


Abstract

The invention discloses a pancreatic tumor image segmentation method based on dense connection network transfer learning, comprising the following steps: acquire positron emission computed tomography (PET) and magnetic resonance imaging (MRI) images, preprocess them, and divide them into a training set and a test set; construct a segmentation network and train it on the PET training set to obtain trained network parameters W1; using a transfer strategy, set the initial parameters of the feature extraction module of the segmentation network to the values of the corresponding module in W1, randomly initialize the parameters of the other modules, and retrain the segmentation network on the MRI training set to obtain secondarily trained network parameters W2; and input the MRI test set into the segmentation network with W2 as its parameters to obtain the segmentation results. The invention improves MRI image segmentation performance and addresses the difficulty in the prior art of training a network on a small data set; it can be used to assist doctors in automatic target region delineation before pancreatic tumor treatment.

Description

Pancreatic tumor image segmentation method based on dense connection network transfer learning
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a pancreatic tumor image segmentation method that can be used to help doctors complete automatic target region delineation before pancreatic tumor treatment.
Background
Currently, pancreatic tumors remain one of the most deadly malignant tumors worldwide, and their incidence tends to increase year by year. According to GLOBOCAN 2020, the cancer burden report issued by the International Agency for Research on Cancer in 2020, there were approximately 495,700 new pancreatic tumor cases and approximately 466,000 deaths in 2020. Because of the poor prognosis of pancreatic tumors, the number of deaths is almost as large as the number of new cases, making pancreatic tumors the seventh leading cause of malignant tumor death in both men and women. According to a study covering 28 European countries, pancreatic tumors are expected to surpass breast cancer by 2025 and become the third leading cause of malignant tumor death. The dose in radiotherapy of pancreatic tumor patients is usually limited by the organs near the tumor, so locating the tumor margin within the pancreas as accurately as possible, without reducing dose coverage, is essential to an optimal radiotherapy plan. Accurate pancreatic tumor lesion segmentation is therefore necessary in radiation therapy.
In medical imaging, multimodal data is widely used because different imaging mechanisms provide complementary information about organs and tumors. The medical images commonly used in tumor diagnosis are CT, MRI and PET images. CT images are suited to diagnosing muscle and bone disease. MRI images provide good soft-tissue contrast; T2-weighted MRI in particular is suitable for diagnosing peritumoral edema. PET images lack anatomical features but provide quantitative metabolic and functional information about lesions; in PET imaging the image intensity of tumor regions is higher than that of normal tissues and organs, so the approximate region of a pancreatic tumor can be located relatively easily. In recent years, multimodal imaging has received increasing attention for its potential application in radiotherapy planning for oncological patients. Fully exploiting and integrating all available imaging data for target segmentation can greatly improve accuracy.
Zhang Guoqing et al., in Chinese patent CN113034461A, disclose a pancreatic tumor CT image segmentation method. The method mainly comprises an image encoding path and an image decoding path. In the encoding path, each layer consists of a variable convolution, batch normalization (BN) and a ReLU function, and the feature map is passed to the next layer through a 2×2 max pooling layer; the last layer of the encoding path comprises a densely connected convolutional network of three blocks. In the decoding path, the feature map of each layer includes two parts combined by a BConvLSTM: the first part is obtained by applying an upsampling function to the feature map of the layer above, and the second part is the feature map of the current decoding layer. The BConvLSTM includes an input gate, an output gate, a forget gate and a memory cell.
A method for segmenting pancreatic tumors in MRI images is described by Liang et al. in "On the Development of MRI-Based Auto-Segmentation of Pancreatic Tumor Using Deep Neural Networks" (International Journal of Radiation Oncology, Biology, Physics). The method crops the original images with a sliding square window to expand the data volume and trains a three-dimensional convolutional neural network with 27 MRI images; the trained network can segment pancreatic tumors in MRI images.
A pancreatic tumor CT image segmentation method is proposed by Zhu et al. in "Multi-Scale Coarse-to-Fine Segmentation for Screening Pancreatic Ductal Adenocarcinoma", published on arXiv in 2018. To handle the variation of pancreatic tumor size across patients, the method trains three corresponding segmentation networks on CT volumes of three sizes: 64³, 32³ and 16³. At test time, the method first crops the original image to size 64³ and segments pancreatic tumors with the network of that scale; after this coarse segmentation, it further crops the image to 32³ according to the segmentation result and segments with the corresponding network, and so on. The three-scale cascade realizes coarse-to-fine segmentation and yields the final pancreatic tumor segmentation result.
Existing pancreatic tumor segmentation methods do not address the small number of labeled images available in medical image segmentation, and they use images of only one modality, i.e., pancreatic tumor CT images or pancreatic tumor MRI images, without fully combining multimodal image information. As a result, their pancreatic tumor segmentation accuracy is low and cannot meet the requirement of automatic delineation of pancreatic tumor regions before radiotherapy.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a pancreatic tumor image segmentation method based on dense connection network transfer learning that improves the segmentation accuracy of pancreatic tumors in magnetic resonance (MRI) images when pancreatic tumor image data is scarce.
The technical idea of the invention is as follows: the residual module of the feature extraction part of the existing Mask-RCNN network structure is replaced with dense connection modules, yielding a segmentation network DM-net based on dense connection modules; a transfer learning strategy then applies the knowledge learned in segmenting pancreatic tumor PET images to segmenting pancreatic tumor MRI images, achieving accurate MRI segmentation with only a small number of pancreatic tumor MRI images.
According to the above idea, the implementation of the invention comprises the following steps:
(1) Acquiring positron emission computed tomography PET data and nuclear magnetic resonance MRI data from a hospital, preprocessing the PET and MRI data, and dividing the preprocessed data into a training set and a test set at a ratio of 8:2;
(2) Constructing a segmentation network DM-net formed by cascading a feature extraction module, a region candidate network, a region of interest alignment module and a three-branch module;
(3) Initializing the parameters of the segmentation network with the He initialization method, and setting the loss function of the segmentation network DM-net as: loss = loss_cls + loss_box + loss_mask,
where loss_cls is the loss of the classification branch, loss_box is the loss of the detection branch, and loss_mask is the loss of the segmentation branch;
(4) Using an Adam optimizer with the above loss function as the optimization target, iteratively learning the parameters of the segmentation network DM-net on the positron emission computed tomography PET training data set until the value of the loss function no longer decreases, obtaining the trained network parameters W1;
(5) Training DM-net with the transfer learning strategy on the nuclear magnetic resonance MRI data:
(5a) Setting the parameter values of the feature extraction module of DM-net to the values of the corresponding module in the trained network parameters W1, and re-initializing the parameters of the region candidate network, the region-of-interest alignment module and the three-branch module with the He initialization method;
(5b) Keeping the loss function of the network unchanged, iteratively learning the parameters of the segmentation network DM-net on the nuclear magnetic resonance MRI training data set until the value of the loss function no longer decreases, obtaining the secondarily trained network parameters W2;
(6) Loading the secondarily trained network parameters W2 into the segmentation network DM-net, and inputting the nuclear magnetic resonance MRI test data set into DM-net to obtain the output probability map;
(7) Setting the probability threshold to 0.5 and comparing each pixel value of the output probability map with the threshold to obtain the final segmentation result:
pixels of the output probability map with values less than 0.5 are set to 0, representing the background region,
pixels of the output probability map with values greater than 0.5 are set to 1, representing pancreatic tumor.
Compared with the prior art, the invention has the following advantages:
1. A segmentation network can be trained with only a few pancreatic tumor MRI images.
The invention transfers the knowledge learned from pancreatic tumor positron emission computed tomography PET image segmentation to pancreatic tumor nuclear magnetic resonance MRI image segmentation with a transfer learning strategy, i.e., the trained pancreatic tumor PET segmentation model serves as a pre-training model. This makes learning the pancreatic tumor MRI segmentation network easier, so a small number of pancreatic tumor MRI images suffices to train the segmentation network.
2. The segmentation performance on pancreatic tumor nuclear magnetic resonance MRI images is improved.
The invention follows the structure of the existing Mask-RCNN, performing multitask learning of instance segmentation, object detection and classification simultaneously, so that the tasks promote each other through feature sharing. Meanwhile, the Mask-RCNN network structure is improved: the feature extraction module is stacked from dense connection modules, so that every earlier layer is densely connected to the later layers, realizing feature reuse. In addition, the invention fuses pancreatic tumor PET images and MRI images through the transfer learning strategy, so the information of the two modalities complements each other during segmentation. These three points together improve pancreatic tumor MRI segmentation performance.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a schematic diagram of a dense connection module based DM-net network constructed in the present invention;
FIG. 3 is an example of a pancreatic tumor magnetic resonance MRI image and a positron emission computed tomography PET image used in the present invention;
FIG. 4 is a graph comparing the segmentation effect of the present invention and three prior-art target segmentation methods on MRI pancreatic tumors.
Detailed Description
The practice and effects of the present invention are described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, the implementation steps of the present invention include the following:
Step 1. Construct the positron emission computed tomography PET and nuclear magnetic resonance MRI data sets and divide them into training and test sets.
(1.1) Acquiring positron emission computed tomography (PET) data and Magnetic Resonance Imaging (MRI) data from a hospital;
(1.2) With the position of the MRI image as reference, adjust the spatial position of the positron emission computed tomography PET image of the same patient using 3D Slicer software so that the PET image overlaps the MRI image; then apply random rotation, horizontal flipping and vertical flipping in turn to expand the data volumes of the PET and MRI images to 8 times the original, respectively;
(1.3) Crop the expanded positron emission computed tomography PET images and nuclear magnetic resonance MRI images from the original 512×512 to 320×320;
(1.4) Normalize the cropped positron emission computed tomography PET images and nuclear magnetic resonance MRI images respectively by the following formula:
Y = (X - X_min) / (X_max - X_min)
where Y is the normalized image, X is the input image, X_min is the minimum pixel gray value of the input image, and X_max is the maximum pixel gray value of the input image;
(1.5) Divide the normalized positron emission computed tomography PET images and nuclear magnetic resonance MRI images into a training set and a test set at a ratio of 8:2 (a preprocessing sketch follows).
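As an illustration of step 1, the following sketch implements the crop, min-max normalization and 8:2 split in NumPy. It is a minimal sketch under stated assumptions: slices are 2D single-channel arrays, the crop is taken around the image center, and the deterministic rotation/flip grid is just one assumed way to reach the stated 8-fold expansion (the patent specifies random rotation plus horizontal and vertical flips); the epsilon guard and all function names are illustrative, not from the patent.

```python
# Illustrative preprocessing sketch for step 1 (function names and the
# deterministic 8x augmentation grid are assumptions, not from the patent).
import numpy as np

def augment_8x(image):
    """Return 8 variants: 4 rotations x optional horizontal flip.
    One assumed way to realize the 8-fold expansion of step (1.2)."""
    variants = []
    for k in range(4):                       # 0/90/180/270 degree rotations
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # horizontally flipped copy
    return variants

def center_crop(image, size=320):
    """Crop a 512x512 slice to 320x320 around its center, as in step (1.3)."""
    h, w = image.shape
    top, left = (h - size) // 2, (w - size) // 2
    return image[top:top + size, left:left + size]

def min_max_normalize(image):
    """Y = (X - X_min) / (X_max - X_min), as in step (1.4)."""
    x_min, x_max = image.min(), image.max()
    return (image - x_min) / (x_max - x_min + 1e-8)  # epsilon guards flat slices

def split_8_2(samples, seed=0):
    """Shuffle and split samples into 8:2 train/test subsets, as in step (1.5)."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(samples))
    cut = int(0.8 * len(samples))
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]
```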
Step 2. Construct the segmentation network DM-net.
Referring to fig. 2, the specific implementation of this step is as follows:
(2.1) Construct the feature extraction module: it consists of four cascaded dense connection modules, in which the output of each earlier dense connection module is concatenated with the output of the following dense connection module along the channel direction, and each dense connection module consists of a linear rectification (ReLU) activation function and a 3×3 two-dimensional convolution layer (see the code sketch after step (2.5));
(2.2) Construct the region candidate network: it consists of a candidate anchor box extraction unit and a binary classification network. The candidate anchor box extraction unit obtains multiple candidate anchor boxes by a sliding-window method; the binary classification network, a cascade of several convolution layers and several fully connected layers, judges whether a candidate anchor box contains a pancreatic tumor region, thereby screening possible candidate regions out of all candidate anchor boxes;
(2.3) Construct the region-of-interest alignment module: it comprises a grid dividing unit, a bilinear interpolation unit and a max pooling unit, in which:
the grid size of the grid dividing unit is L/7 × H/7, where L is the length of the candidate box and H is its height;
the bilinear interpolation unit uses 4 sampling points, i.e., four points are selected in each grid cell and their gray values are obtained by bilinear interpolation;
the max pooling unit has a 2×2 sampling kernel with stride 2;
(2.4) Construct the three-branch module: it connects a classification module, a detection module and a segmentation module in parallel, in which:
the classification module is a fully connected network formed by stacking several fully connected layers, with 2 neurons in the last fully connected layer;
the detection module is a fully connected network formed by stacking several fully connected layers, with 4 neurons in the last fully connected layer;
the segmentation module is a fully convolutional network composed of several upsampling layers and 3×3 two-dimensional convolution layers;
(2.5) Cascade the feature extraction module, the region candidate network, the region-of-interest alignment module and the three-branch module in sequence to form the segmentation network DM-net.
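For concreteness, the sketch below realizes the step (2.1) feature extraction module with the Keras functional API. The patent fixes only the structure: four cascaded dense connection modules, each a ReLU activation followed by a 3×3 convolution, with channel-wise concatenation of all earlier outputs. The filter count, input shape and names used here are assumptions.

```python
# Sketch of the dense connection feature extraction module of step (2.1).
# Filter count, input shape and model/layer names are illustrative assumptions.
from tensorflow.keras import Input, Model, layers

def dense_connection_module(x, filters=32):
    """One dense module: ReLU activation followed by a 3x3 2D convolution."""
    out = layers.Activation("relu")(x)
    return layers.Conv2D(filters, 3, padding="same")(out)

def feature_extractor(input_shape=(320, 320, 1)):
    """Four cascaded dense modules; each module receives the channel-wise
    concatenation of the input and all preceding module outputs."""
    inputs = Input(shape=input_shape)
    features = [inputs]
    for _ in range(4):
        x = features[0] if len(features) == 1 else layers.Concatenate()(features)
        features.append(dense_connection_module(x))
    return Model(inputs, features[-1], name="feature_extractor")

if __name__ == "__main__":
    feature_extractor().summary()  # inspect the dense connectivity pattern
```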
Step 3. Initialize the parameters of the segmentation network with the He initialization method and set the loss function of the segmentation network DM-net.
(3.1) Initialize the network parameters with the He initialization method, under which the initialized parameters W obey the distribution:
W ~ N(0, 2/n_l)
where n_l is the number of neurons in the l-th layer of the segmentation network DM-net, and N(0, 2/n_l) denotes a normal distribution with mathematical expectation 0 and variance 2/n_l.
(3.2) Set the loss function of the segmentation network DM-net as: loss = loss_cls + loss_box + loss_mask,
where loss_cls is the loss of the classification branch, loss_box is the loss of the detection branch, and loss_mask is the loss of the segmentation branch. Their formulas, reconstructed here in the standard Mask-RCNN form, are:
loss_cls = -(1/N_box) Σ_i [ p_i* log(p_i) + (1 - p_i*) log(1 - p_i) ]
loss_box = (1/N_box) Σ_i p_i* · smoothL1(t_i - t_i*)
loss_mask = -(1/N_p) Σ_i [ y_i log(pred_i) + (1 - y_i) log(1 - pred_i) ]
where p_i is the predicted classification probability of the i-th candidate box; p_i* = 1 when the i-th candidate box contains a pancreatic tumor and p_i* = 0 when it does not; t_i is the parameterized coordinates of candidate box i and t_i* the parameterized coordinates of its real label; y_i is the segmentation label of the i-th pixel of the input image (y_i = 0 if pixel i belongs to the background region, y_i = 1 if it belongs to pancreatic tumor); pred_i is the predicted probability that the i-th pixel belongs to pancreatic tumor; N_box is the number of candidate boxes in the image and N_p the number of pixels in the image. A code sketch of these losses follows.
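A minimal NumPy sketch of the three-part loss is given below. The closed forms match the reconstruction above (cross-entropy for the classification and mask branches, smooth L1 gated by p_i* for the box branch); the smooth-L1 gating and the clipping epsilon are assumptions in the Mask-RCNN style, not verbatim from the patent.

```python
# NumPy sketch of loss = loss_cls + loss_box + loss_mask from step (3.2).
import numpy as np

def smooth_l1(diff):
    """Smooth L1: quadratic below 1 in magnitude, linear above."""
    a = np.abs(diff)
    return np.where(a < 1.0, 0.5 * a ** 2, a - 0.5)

def loss_cls(p, p_star):
    """Binary cross-entropy over the N_box candidate boxes."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -np.mean(p_star * np.log(p) + (1 - p_star) * np.log(1 - p))

def loss_box(t, t_star, p_star):
    """Smooth-L1 regression over parameterized coordinates, positive boxes only."""
    per_box = smooth_l1(t - t_star).sum(axis=1)        # t has shape (N_box, 4)
    return np.sum(p_star * per_box) / max(p_star.sum(), 1.0)

def loss_mask(pred, y):
    """Per-pixel binary cross-entropy over the N_p pixels of the mask branch."""
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    return -np.mean(y * np.log(pred) + (1 - y) * np.log(1 - pred))

def total_loss(p, p_star, t, t_star, pred, y):
    return loss_cls(p, p_star) + loss_box(t, t_star, p_star) + loss_mask(pred, y)
```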
Step 4. Train DM-net on the positron emission computed tomography PET training data set to obtain the trained network parameters W1.
(4.1) Take 4 PET images from the PET training data set, input them into the segmentation network DM-net to obtain each image's segmentation, classification and detection results, calculate each image's loss value by the formulas in step 3, and average the loss values of the 4 images to obtain the average loss value of the PET batch;
(4.2) Back-propagate the computed average loss to obtain gradient values and update the network parameters of the segmentation network DM-net with the Adam optimizer;
(4.3) Repeat (4.1)-(4.2) until all the data in the training data set have been learned, completing one iteration;
(4.4) Repeat (4.3) until the computed average loss no longer decreases, obtaining the trained network parameters W1 (a training-loop sketch follows).
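The step 4 loop can be sketched as below in TensorFlow 2 style. The patent's reported environment was Keras 2.1.5 with TensorFlow 1.9, so this is a modernized, assumed rendering; dm_net, pet_batches, compute_loss and the epoch cap are illustrative names and choices.

```python
# Assumed TensorFlow 2 rendering of the step 4 training loop: batches of 4
# PET images, averaged loss, Adam updates, stop when the loss plateaus.
import tensorflow as tf

def train_until_plateau(dm_net, pet_batches, compute_loss, max_epochs=200):
    optimizer = tf.keras.optimizers.Adam()
    best = float("inf")
    for _ in range(max_epochs):
        epoch_losses = []
        for batch in pet_batches:            # each batch: 4 (image, label) pairs
            with tf.GradientTape() as tape:
                loss = tf.reduce_mean([compute_loss(dm_net, img, lab)
                                       for img, lab in batch])
            grads = tape.gradient(loss, dm_net.trainable_variables)
            optimizer.apply_gradients(zip(grads, dm_net.trainable_variables))
            epoch_losses.append(float(loss))
        mean_loss = sum(epoch_losses) / len(epoch_losses)
        if mean_loss >= best:                # average loss no longer decreasing
            break
        best = mean_loss
    return dm_net.get_weights()              # the trained parameters W1
```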
Step 5. Train DM-net with the transfer learning strategy on the nuclear magnetic resonance MRI data set to obtain the secondarily trained network parameters W2.
(5.1) Set the parameter values of the feature extraction module of DM-net to the values of the corresponding module in the trained network parameters W1, and re-initialize the parameters of the region candidate network, the region-of-interest alignment module and the three-branch module with the He initialization method (a weight-transfer sketch follows step (5.5)).
(5.2) Take 4 MRI images from the MRI training data set, input them into the segmentation network DM-net to obtain each image's segmentation, classification and detection results, calculate each image's loss value by the formulas in step 3, and average the loss values of the 4 images to obtain the average loss value of the MRI batch.
(5.3) Back-propagate the computed average loss to obtain gradient values and update the network parameters of the segmentation network DM-net with the Adam optimizer.
(5.4) Repeat (5.2)-(5.3) until all the data in the training data set have been learned, completing one iteration.
(5.5) Repeat (5.4) until the computed average loss no longer decreases, obtaining the secondarily trained network parameters W2.
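The transfer step (5.1) amounts to loading W1 and He-reinitializing everything outside the feature extraction module, e.g. as sketched below; identifying the feature extractor by a layer-name prefix is an assumed convention, and zero-initializing biases is an assumption the patent does not state.

```python
# Sketch of the step (5.1) transfer strategy: keep the feature-extractor
# weights from W1, He-reinitialize the region candidate network, the
# RoI-align head and the three branches.
import tensorflow as tf

def transfer_from_w1(dm_net, w1_weights):
    dm_net.set_weights(w1_weights)                      # start from PET-trained W1
    he = tf.keras.initializers.HeNormal()
    for layer in dm_net.layers:
        if layer.name.startswith("feature_extractor"):  # keep transferred weights
            continue
        new = [he(w.shape).numpy() if w.ndim > 1        # He for kernels,
               else tf.zeros(w.shape).numpy()           # zeros for biases (assumed)
               for w in layer.get_weights()]
        if new:
            layer.set_weights(new)
    return dm_net
```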
Step 6. Test the pancreatic tumor nuclear magnetic resonance MRI test images with the trained DM-net.
Load the secondarily trained network parameters W2 into the segmentation network DM-net to obtain the trained DM-net, and input the nuclear magnetic resonance MRI test data set into DM-net to obtain the output probability map.
Step 7. Obtain the final segmentation result from the output probability map.
Set the probability threshold to 0.5 and compare each pixel value of the output probability map with the threshold:
pixels of the output probability map with values less than 0.5 are set to 0;
pixels of the output probability map with values greater than 0.5 are set to 1;
0 denotes the background region and 1 denotes pancreatic tumor, completing the segmentation of the nuclear magnetic resonance MRI image (a thresholding sketch follows).
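Step 7 is a single comparison per pixel; a minimal NumPy sketch, assuming prob_map is a float array with values in [0, 1]:

```python
# Sketch of the step 7 thresholding (array shape and dtype are assumptions).
import numpy as np

def binarize(prob_map, threshold=0.5):
    """Pixels above the threshold become 1 (pancreatic tumor), others 0 (background)."""
    return (prob_map > threshold).astype(np.uint8)
```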
The effects of the present invention can be further illustrated by the following simulations.
1. Simulation conditions
The experimental simulation platform is a desktop computer with an Intel Core i9-9900K CPU at 3.6 GHz and 32 GB of memory; the neural network model is built and trained with Python 3.6, Keras 2.1.5 and TensorFlow 1.9.0, accelerated with an Nvidia 1080 GPU, CUDA 9.0 and cuDNN.
The images used in the simulation are pancreatic tumor nuclear magnetic resonance MRI images and positron emission computed tomography PET images, all from the same batch of patients and mutually registrable. As shown in fig. 3, the first column shows PET images of a patient and the second column the corresponding MRI images; pancreatic tumor regions are marked with curved contours.
The segmentation performance indices used in the simulation are the Dice coefficient DICE, sensitivity SEN and specificity SPE, computed as:
DICE = 2|A ∩ B| / (|A| + |B|)
SEN = TP / (TP + FN)
SPE = TN / (TN + FP)
where A denotes the real label and B the prediction result; TP is the number of points that are actually positive and segmented as positive; FN is the number of points that are actually positive but segmented as negative; FP is the number of points that are actually negative but segmented as positive; and TN is the number of points that are actually negative and segmented as negative.
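These three indices can be computed directly from binary masks; a minimal NumPy sketch, where the epsilon terms guarding empty masks are an added assumption:

```python
# Sketch of the DICE / SEN / SPE indices for binary masks a (label) and b (prediction).
import numpy as np

def dice(a, b):
    """DICE = 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

def sen_spe(a, b):
    """SEN = TP / (TP + FN); SPE = TN / (TN + FP)."""
    tp = np.logical_and(a == 1, b == 1).sum()
    tn = np.logical_and(a == 0, b == 0).sum()
    fp = np.logical_and(a == 0, b == 1).sum()
    fn = np.logical_and(a == 1, b == 0).sum()
    return tp / (tp + fn + 1e-8), tn / (tn + fp + 1e-8)
```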
The existing image segmentation networks used in the simulation are: the U-shaped network Unet, the residual network ResNet, the mask segmentation-detection network Mask-RCNN, T-Mask-RCNN (Mask-RCNN trained with the transfer learning strategy), and DM-net (the dense-connection-module network without transfer).
2. Simulation content
Simulation 1: train the proposed network and the existing Unet, ResNet, Mask-RCNN, T-Mask-RCNN and DM-net image segmentation networks with the nuclear magnetic resonance MRI and positron emission computed tomography PET training sets, then test each trained network with the MRI test set to obtain each method's test-set segmentation results, shown in fig. 4.
Calculate the Dice coefficient, sensitivity SEN and specificity SPE between each method's test-set segmentation results and the test-set real labels, as shown in Table 1:
TABLE 1
As can be seen from fig. 4, the invention achieves higher segmentation accuracy on pancreatic tumor MRI images than the other image segmentation networks; because Unet and ResNet could not segment the pancreatic tumor, their results are not shown in fig. 4.
From Table 1 the following conclusions can be drawn:
comparing Mask-RCNN with DM-net shows that adding dense connection modules to Mask-RCNN greatly improves sensitivity with little change in the Dice coefficient;
comparing T-Mask-RCNN with the invention shows that adding dense connection modules improves the Dice coefficient, i.e., yields more accurate segmentation;
comparing DM-net with the invention, and Mask-RCNN with T-Mask-RCNN, shows that transfer learning effectively improves segmentation accuracy;
together these comparisons show that the dense connection modules and the transfer strategy used in the invention both improve segmentation performance.
Since Unet and ResNet failed to segment pancreatic tumors, the evaluation index was not reported.
Simulation 2: compare the evaluation indices of the invention's test-set segmentation results with those of the pancreatic tumor segmentation methods proposed by Liang et al. and Zhu et al.; the results are shown in Table 2.
TABLE 2

Algorithm        Dice (%)   SEN (%)   SPE (%)
Liang et al.     72         79        94
Zhu et al.       74.23      77.04     99.31
The invention    76.33      77.08     99.61
From Table 2 it can be concluded that, compared with the pancreatic tumor segmentation methods in the existing literature, the invention improves accuracy, sensitivity and specificity.

Claims (5)

1. A pancreatic tumor segmentation method based on dense connection network transfer learning, characterized by comprising the following steps:
(1) Acquiring positron emission computed tomography PET data and nuclear magnetic resonance MRI data from a hospital, preprocessing the PET and MRI data, and dividing the preprocessed data into a training set and a test set at a ratio of 8:2;
(2) Constructing a segmentation network DM-net formed by cascading a feature extraction module, a region candidate network, a region-of-interest alignment module and a three-branch module, with the following specific structure:
the feature extraction module is formed by cascading four dense connection modules, in which the output of each earlier dense connection module is concatenated with the output of the following dense connection module along the channel direction, and each dense connection module consists of a linear rectification (ReLU) activation function and a 3×3 two-dimensional convolution layer;
the region candidate network consists of a candidate anchor box extraction unit and a binary classification network, the binary classification network being a cascade of several convolution layers and several fully connected layers;
The region of interest alignment module is formed by cascading a grid dividing unit, a bilinear interpolation unit and a maximum pooling unit, wherein:
the grid size of the grid dividing unit is L/7 × H/7, where L is the length of the candidate box and H is its height;
the bilinear interpolation unit uses 4 sampling points, i.e., four points are selected in each grid cell and their gray values are obtained by bilinear interpolation;
the max pooling unit has a 2×2 sampling kernel with stride 2;
the three-branch module connects a classification module, a detection module and a segmentation module in parallel, in which:
the classification module is formed by stacking several fully connected layers, with 2 neurons in the last fully connected layer;
the detection module is formed by stacking several fully connected layers, with 4 neurons in the last fully connected layer;
the segmentation module consists of several upsampling layers and 3×3 two-dimensional convolution layers;
(3) Initializing the parameters of the segmentation network with the He initialization method, and setting the loss function of the segmentation network DM-net as: loss = loss_cls + loss_box + loss_mask,
where loss_cls is the loss of the classification branch, loss_box is the loss of the detection branch, and loss_mask is the loss of the segmentation branch;
(4) Using an Adam optimizer with the above loss function as the optimization target, iteratively learning the parameters of the segmentation network DM-net on the positron emission computed tomography PET training data set until the value of the loss function no longer decreases, obtaining the trained network parameters W1;
(5) Training DM-net with the transfer learning strategy on the nuclear magnetic resonance MRI data:
(5a) Setting the parameter values of the feature extraction module of DM-net to the values of the corresponding module in the trained network parameters W1, and re-initializing the parameters of the region candidate network, the region-of-interest alignment module and the three-branch module with the He initialization method;
(5b) Keeping the loss function of the network unchanged, iteratively learning the parameters of the segmentation network DM-net on the nuclear magnetic resonance MRI training data set until the value of the loss function no longer decreases, obtaining the secondarily trained network parameters W2;
(6) Loading the secondarily trained network parameters W2 into the segmentation network DM-net, and inputting the nuclear magnetic resonance MRI test data set into DM-net to obtain the output probability map;
(7) Setting the probability threshold to 0.5 and comparing each pixel value of the output probability map with the threshold to obtain the final segmentation result:
pixels of the output probability map with values less than 0.5 are set to 0, representing the background region,
pixels of the output probability map with values greater than 0.5 are set to 1, representing pancreatic tumor.
2. The method of claim 1, wherein the preprocessing of positron emission computed tomography (PET) data and Magnetic Resonance Imaging (MRI) data in (1) is accomplished by:
(1a) With the position of the nuclear magnetic resonance MRI image as reference, adjusting the spatial position of the positron emission computed tomography PET image of the same patient using 3D Slicer software so that the PET image overlaps the MRI image, then applying random rotation, horizontal flipping and vertical flipping in turn to expand the data volumes of the PET and MRI images to 8 times the original, respectively;
(1b) Cropping the expanded positron emission computed tomography PET images and nuclear magnetic resonance MRI images from the original 512×512 to 320×320;
(1c) Normalizing the cropped positron emission computed tomography PET images and nuclear magnetic resonance MRI images respectively by:
Y = (X - X_min) / (X_max - X_min)
where Y is the normalized image, X is the input image, X_min is the minimum pixel gray value of the input image, and X_max is the maximum pixel gray value of the input image.
3. The method of claim 1, wherein the loss loss_cls of the classification branch, the loss loss_box of the detection branch, and the loss loss_mask of the segmentation branch in (3) are calculated as follows (reconstructed in the standard Mask-RCNN form):
loss_cls = -(1/N_box) Σ_i [ p_i* log(p_i) + (1 - p_i*) log(1 - p_i) ]
loss_box = (1/N_box) Σ_i p_i* · smoothL1(t_i - t_i*)
loss_mask = -(1/N_p) Σ_i [ y_i log(pred_i) + (1 - y_i) log(1 - pred_i) ]
where p_i is the predicted classification probability of the i-th candidate box; p_i* = 1 when the i-th candidate box contains a pancreatic tumor and p_i* = 0 when it does not; t_i is the parameterized coordinates of candidate box i and t_i* the parameterized coordinates of its real label; y_i is the segmentation label of the i-th pixel of the input image (y_i = 0 if pixel i belongs to the background region, y_i = 1 if it belongs to pancreatic tumor); pred_i is the predicted probability that the i-th pixel belongs to pancreatic tumor; N_box is the number of candidate boxes in the image and N_p the number of pixels in the image.
4. The method of claim 1, wherein the iterative learning of parameters of the segmentation network DM-net using the positron emission computed tomography PET training dataset in (4) is accomplished as follows:
(4a) Taking 4 positron emission computed tomography PET images from the training data set, inputting them into the segmentation network DM-net to obtain each image's segmentation, classification and detection results, calculating each image's loss value by the formulas in (3), and averaging the loss values of the 4 images to obtain the average loss value of the PET batch;
(4b) Back-propagating the computed average loss to obtain gradient values, and updating the network parameters of the segmentation network DM-net with the Adam optimizer;
(4c) Repeating (4a)-(4b) until all the data in the training data set have been learned, completing one iteration;
(4d) Repeating (4c) until the computed average loss no longer decreases, obtaining the trained network parameters W1.
5. The method of claim 1, wherein the iterative learning of the parameters of the segmentation network DM-net using the nuclear magnetic resonance MRI training data set in (5b) is accomplished as follows:
(5b1) Taking 4 nuclear magnetic resonance MRI images from the training data set, inputting them into the segmentation network DM-net to obtain each image's segmentation, classification and detection results, calculating each image's loss value by the formulas in (3), and averaging the loss values of the 4 images to obtain the average loss value of the MRI batch;
(5b2) Back-propagating the computed average loss to obtain gradient values, and updating the network parameters of the segmentation network DM-net with the Adam optimizer;
(5b3) Repeating (5b1)-(5b2) until all the data in the training data set have been learned, completing one iteration;
(5b4) Repeating (5b3) until the computed average loss no longer decreases, obtaining the secondarily trained network parameters W2.
CN202110944394.6A 2021-08-17 2021-08-17 Pancreatic tumor image segmentation method based on dense connection network transfer learning Active CN113706486B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110944394.6A CN113706486B (en) 2021-08-17 2021-08-17 Pancreatic tumor image segmentation method based on dense connection network transfer learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110944394.6A CN113706486B (en) 2021-08-17 2021-08-17 Pancreatic tumor image segmentation method based on dense connection network transfer learning

Publications (2)

Publication Number Publication Date
CN113706486A CN113706486A (en) 2021-11-26
CN113706486B true CN113706486B (en) 2024-08-02

Family

ID=78653084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110944394.6A Active CN113706486B (en) 2021-08-17 2021-08-17 Pancreatic tumor image segmentation method based on dense connection network transfer learning

Country Status (1)

Country Link
CN (1) CN113706486B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114742119A (en) * 2021-12-30 2022-07-12 浙江大华技术股份有限公司 Cross-supervised model training method, image segmentation method and related equipment
CN114937171B (en) * 2022-05-11 2023-06-09 复旦大学 Deep learning-based Alzheimer's classification system
CN115222007B (en) * 2022-05-31 2023-06-20 复旦大学 Improved particle swarm parameter optimization method for colloid rumen multitasking integrated network
CN115527036A (en) * 2022-11-25 2022-12-27 南方电网数字电网研究院有限公司 Power grid scene point cloud semantic segmentation method and device, computer equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109636806A (en) * 2018-11-22 2019-04-16 浙江大学山东工业技术研究院 A kind of three-dimensional NMR pancreas image partition method based on multistep study
AU2020103905A4 (en) * 2020-12-04 2021-02-11 Chongqing Normal University Unsupervised cross-domain self-adaptive medical image segmentation method based on deep adversarial learning

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875787B (en) * 2018-05-23 2020-07-14 北京市商汤科技开发有限公司 Image recognition method and device, computer equipment and storage medium
US10691978B2 (en) * 2018-06-18 2020-06-23 Drvision Technologies Llc Optimal and efficient machine learning method for deep semantic segmentation
EP3696821A1 (en) * 2019-02-14 2020-08-19 Koninklijke Philips N.V. Computer-implemented method for medical image processing
CN110751651B (en) * 2019-09-27 2022-03-04 西安电子科技大学 MRI pancreas image segmentation method based on multi-scale migration learning
CN111476713B (en) * 2020-03-26 2022-07-22 中南大学 Intelligent weather image identification method and system based on multi-depth convolution neural network fusion
CN111640120B (en) * 2020-04-09 2023-08-29 之江实验室 Pancreas CT automatic segmentation method based on significance dense connection expansion convolution network
CN112381787A (en) * 2020-11-12 2021-02-19 福州大学 Steel plate surface defect classification method based on transfer learning
CN113011306A (en) * 2021-03-15 2021-06-22 中南大学 Method, system and medium for automatic identification of bone marrow cell images in continuous maturation stage
CN113034461A (en) * 2021-03-22 2021-06-25 中国科学院上海营养与健康研究所 Pancreas tumor region image segmentation method and device and computer readable storage medium


Also Published As

Publication number Publication date
CN113706486A (en) 2021-11-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant