CN109102512B - A MRI brain tumor image segmentation method based on DBN neural network - Google Patents
- Publication number
- CN109102512B (application CN201810885507.8A)
- Authority
- CN
- China
- Prior art keywords
- training
- image
- network
- samples
- label
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/11 — Region-based segmentation
- G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N3/045 — Combinations of networks
- G06T7/194 — Segmentation or edge detection involving foreground-background segmentation
- G06T2207/10088 — Magnetic resonance imaging [MRI]
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30016 — Brain
- G06T2207/30096 — Tumor; lesion
Abstract
The invention discloses an MRI brain tumor image segmentation method based on a DBN neural network. First, several images are selected from an existing library of patient brain MRI sequence images as training samples, which are preprocessed and used to compute saliency maps. The samples are then downsampled according to the saliency maps and fed to a DBN neural network for unsupervised training followed by supervised training; downsampling the non-tumor region under the extreme class imbalance of the training samples improves the detection rate of positive samples. After training, the test image to be segmented can be fed to the network for segmentation; the introduced visual attention model improves the network's accuracy on hard-to-segment regions, and the segmentation result is finally output.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an MRI brain tumor image segmentation method based on a DBN neural network.
Background
In recent years, brain tumors have become one of the most prevalent tumors. Magnetic Resonance Imaging (MRI) provides high-spatial-resolution, high-contrast imaging of brain soft tissue and is the preferred tool for physicians performing structural analysis of the brain, so it is widely used clinically. In brain MRI image processing, accurate segmentation of the tumor region is a crucial step that underpins the physician's subsequent analysis and judgment. At present this step still depends heavily on manual segmentation, which is time-consuming and unstable, so an accurate automatic segmentation method has high practical value. However, because the shape, position and structure of brain tumors vary greatly, and the gray-scale distribution of the images is strongly affected by differences between patients and between devices, a high-precision segmentation method is difficult to obtain.
Some researchers have already studied this problem, mostly using traditional machine learning algorithms (such as random forests or Markov random field methods) to segment normal brain tissue (such as white matter and gray matter) and abnormal tissue (such as brain tumor). However, these methods usually require features to be extracted manually in advance, demanding domain expertise from the designer, which is impractical in many cases; moreover, hand-crafted features tend to be narrowly targeted and generalize poorly. The emergence and development of deep learning models largely resolve these problems.
A deep learning model is a layered architecture that performs feature learning with a multi-layer neural network (generally more than 3 layers); the original motivation was to imitate how the human brain learns and analyzes. Compared with traditional machine learning algorithms, deep learning models have stronger feature-abstraction capability and can express more complex functions, and they show clear advantages on many tasks such as speech recognition, image recognition and machine translation. The Deep Belief Network (DBN) is a neural network built by stacking Restricted Boltzmann Machines (RBMs); it is an unsupervised probabilistic generative model that fits the probability distribution of the input data. Training the whole network takes two steps: each RBM is first trained layer by layer without supervision using the contrastive divergence algorithm, and the network parameters are then fine-tuned with supervision by the back-propagation algorithm, which brings the parameters closer to the global optimum. DBNs have been shown to perform well in many medical image processing tasks; for example, Tuan et al. combined DBNs with horizontal slices for left-ventricular segmentation of the heart, achieving the best results reported for that task.
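The two-stage training described above can be sketched in code. Below is a minimal numpy sketch of one contrastive-divergence (CD-1) update for a single Bernoulli RBM, the unsupervised building block of the DBN; all function and parameter names are illustrative, not taken from the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b_vis, b_hid, v0, lr=0.01, rng=None):
    """One CD-1 step for a Bernoulli RBM (illustrative parameter names).

    W: (n_visible, n_hidden) weights; b_vis, b_hid: biases; v0: (batch, n_visible) data.
    """
    rng = rng or np.random.default_rng(0)
    p_h0 = sigmoid(v0 @ W + b_hid)                      # positive phase: hidden probabilities
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hidden states
    p_v1 = sigmoid(h0 @ W.T + b_vis)                    # reconstruction of the visible layer
    p_h1 = sigmoid(p_v1 @ W + b_hid)                    # negative phase
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)   # approximate gradient step
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_vis, b_hid
```

Stacking such RBMs and running this update layer by layer is the unsupervised pre-training step; the supervised fine-tuning pass is then ordinary back-propagation over the stacked weights.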
Disclosure of Invention
The technical problem to be solved by the present invention is to provide an MRI brain tumor image segmentation method based on a DBN neural network, which improves on the DBN and applies it to the segmentation of brain MRI tumors.
The invention adopts the following technical scheme:
An MRI brain tumor image segmentation method based on a DBN neural network: first, several images are selected from an existing library of patient brain MRI sequence images as training samples, which are preprocessed and used to compute saliency maps; the samples are then downsampled and fed to a DBN neural network for unsupervised training followed by supervised training; after training, the test image to be segmented is fed to the network for segmentation, and the segmentation result is finally output.
Specifically, the method comprises the following steps:
s1, dividing N frames of images in the brain MRI sequence image into a training set and a test set, and preprocessing data;
s2, calculating a saliency map of each frame of image, and normalizing each saliency map;
s3, downsampling the training set samples according to the saliency map;
s4, sending the training set sample obtained in the step S3 to a DBN network for unsupervised pre-training through a contrast divergence method;
s5, simultaneously sending the training set samples and the labels thereof into a network, and finely adjusting network parameters through an Adam algorithm;
S6, for each pixel of a test-set image, take the 9 × 9 region centered on that pixel and flatten it into an 81-dimensional column vector; feed the vectors into the trained network for testing and output a classification label for each pixel, obtaining the segmented binary image; for pixels on the image border, missing surrounding pixel values are supplemented by symmetric padding.
Further, step S1 is specifically as follows:
S101, selecting the slice with the largest tumor area from the brain MRI sequence images of N brain-tumor patients, and taking t frames as the training set D_Train; the remaining N − t frames serve as the test set D_test;
S102, for each image D_i (1 ≤ i ≤ N) of the training set and test set, perform normalization; the normalization is calculated as follows:
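The normalization formula itself is not reproduced in this text (the equation image is absent). As a hedged illustration, one common choice consistent with the stated goal of reducing illumination differences is per-slice min-max normalization:

```python
import numpy as np

def normalize_slice(img):
    """Min-max normalize one MRI slice to [0, 1].

    The patent's exact formula is not shown in this text; min-max scaling
    is an assumption used here purely for illustration.
    """
    lo, hi = img.min(), img.max()
    if hi == lo:                                  # constant slice: avoid division by zero
        return np.zeros_like(img, dtype=float)
    return (img - lo) / (hi - lo)
```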
further, step S2 is specifically as follows:
S201, denote the pixel value at the m-th row and n-th column of each frame image, and compute the mean value of each frame image;
S202, convolve each frame image with a 5 × 5 Gaussian kernel to obtain the Gaussian-blurred version of each frame image;
S203, solving a saliency map of each frame of image;
S204, normalize S_i to obtain the normalized saliency map, whose entry at the m-th row and n-th column represents the saliency value of that pixel of the i-th frame image.
Further, the saliency map of each frame image in step S203The method comprises the following specific steps:
Further, in step S204, the normalized saliency map S_i is calculated as follows:
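The formula images for S201–S204 are absent from this text. The following numpy sketch therefore follows only the textual description: compute the slice mean, blur the slice with a 5 × 5 Gaussian kernel, take the squared difference between mean and blurred image as the saliency map, and min-max normalize it. The squared difference and the normalization range are assumptions.

```python
import numpy as np

def gaussian_kernel5(sigma=1.0):
    """5x5 normalized Gaussian kernel (sigma is an assumed parameter)."""
    ax = np.arange(-2, 3)
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def saliency_map(img, sigma=1.0):
    """Saliency per S201-S204: squared difference between the slice mean and a
    5x5 Gaussian-blurred slice, then min-max normalized (hedged sketch)."""
    k = gaussian_kernel5(sigma)
    pad = np.pad(img, 2, mode='symmetric')
    blur = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for dy in range(5):                       # direct 5x5 correlation (kernel is symmetric)
        for dx in range(5):
            blur += k[dy, dx] * pad[dy:dy + H, dx:dx + W]
    s = (img.mean() - blur) ** 2              # assumed form of the saliency measure
    s_min, s_max = s.min(), s.max()
    if s_max == s_min:
        return np.zeros_like(s)
    return (s - s_min) / (s_max - s_min)      # normalized saliency map
```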
Further, in step S3, the saliency values of the pixels of each training-set image are sorted in descending order; for each of the h most salient pixels, a 9 × 9 square region centered on that pixel is taken from the training-set image and flattened row-wise into an 81-dimensional column vector as a training sample, yielding t × h training samples in total. Let A_k denote the k-th training sample and L_k its label, where label 0 means the sample belongs to the background region and label 1 means it belongs to the tumor region.
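The downsampling step above can be sketched as follows: sort pixels by saliency, keep the h most salient, and extract a row-flattened 9 × 9 patch around each. Symmetric border padding mirrors the test-time procedure of step S6; the helper name is hypothetical.

```python
import numpy as np

def sample_patches(img, sal, h, patch=9):
    """Extract row-flattened patches around the h most salient pixels.

    img: 2-D slice; sal: saliency map of the same shape; returns (h, patch*patch).
    """
    r = patch // 2
    padded = np.pad(img, r, mode='symmetric')            # symmetric border filling
    top = np.argsort(sal, axis=None)[::-1][:h]           # indices of the h most salient pixels
    rows, cols = np.unravel_index(top, sal.shape)
    out = np.empty((h, patch * patch))
    for k, (m, n) in enumerate(zip(rows, cols)):
        out[k] = padded[m:m + patch, n:n + patch].ravel()  # 9x9 window centered at (m, n)
    return out
```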
Further, step S5 is specifically as follows:
S501, set the number of samples fed per batch during training: batch_size = 1024. Denote the samples labeled 0 by A(label=0), with count n_0, and the samples labeled 1 by A(label=1), with count n_1; let f(·) denote the output of the last layer of the DBN network;
S502, for each batch of training samples, compute the mean of the samples labeled 0 and of those labeled 1, the total within-class variance of the samples on each feature dimension output by the last layer of the DBN network, and the between-class variance on each feature dimension of that output;
s503, calculating the loss function of the network as follows:
where S_k denotes the saliency value of the k-th sample: each point is assigned a loss weight according to its pixel saliency value, which strengthens the network's ability to identify the tumor region; Softmax(·) denotes the softmax classifier function, and λ is a tunable hyperparameter weighting the divergence regularization term. The loss function is then minimized by the Adam algorithm, and the network parameters are updated continuously until convergence.
Further, in step S502, the average value of the output of the sample labeled 0 or 1 in each training sample in the last layer is calculated as follows:
where x ∈ {0,1}, batch_size represents the number of samples fed per batch during training, A(label=x) represents a sample labeled x, f(·) represents the last-layer output of the DBN network, and μ(label=x) represents the mean of the last-layer features of the samples labeled x;
The total within-class variance δ_in of the samples on each feature dimension output by the last layer of the DBN network is calculated as follows:
where n_x represents the number of samples labeled x;
The between-class variance δ_between on each feature dimension of the samples' last-layer DBN output is calculated as follows:
δ_between = (μ(label=0) − μ(label=1))²
where μ(label=0) denotes the mean of the last-layer output of the samples labeled 0, and μ(label=1) the mean of the last-layer output of the samples labeled 1.
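The batch statistics of S502 can be sketched directly in numpy. Summing δ_between over feature dimensions is a simplification of the per-dimension quantity defined above; the helper name is hypothetical.

```python
import numpy as np

def class_statistics(features, labels):
    """Per-class means, total within-class variance, and between-class variance.

    features: (batch, d) last-layer outputs f(.); labels: (batch,) in {0, 1}.
    delta_between is summed over feature dimensions here (a simplification).
    """
    mu0 = features[labels == 0].mean(axis=0)
    mu1 = features[labels == 1].mean(axis=0)
    d_in = (features[labels == 0].var(axis=0).sum()
            + features[labels == 1].var(axis=0).sum())   # total within-class variance
    d_between = ((mu0 - mu1) ** 2).sum()                 # between-class variance
    return mu0, mu1, d_in, d_between
```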
Compared with the prior art, the invention has at least the following beneficial effects:
the invention relates to a DBN neural network-based MRI brain tumor image segmentation method, which comprises the steps of firstly selecting a plurality of images from an existing patient brain MRI sequence image library as training samples, preprocessing the training samples and calculating a significance map; then, downsampling is sent to a DBN neural network to be subjected to unsupervised training and supervised training in sequence, downsampling processing is performed on a non-tumor area under the condition that a training sample is extremely unbalanced, and the detection rate of positive samples is improved; after training is finished, the test image to be segmented can be sent to the network for segmentation, the visual attention model is introduced, the accuracy of the network in segmenting the area difficult to segment is improved, and finally the segmentation result is output.
Furthermore, the normalization operation reduces the interference caused by non-uniform illumination in MRI images.
Furthermore, downsampling the non-tumor-region pixels in order of saliency retains the information of the non-tumor region as much as possible, eliminates the effect of sample imbalance, and reduces the memory and computation required for training.
Furthermore, adding a divergence regularization term to the back-propagated error strengthens the robustness of the features the DBN extracts and further improves network performance; meanwhile, assigning each pixel a loss weight according to its saliency value focuses the network's attention on the tumor region and on regions whose pixel values are close to it, improving the network's ability to identify error-prone regions.
In conclusion, the method reduces the interference caused by non-uniform illumination in MRI images through the normalization operation, eliminates the effect of sample imbalance through the downsampling operation, and improves the detection rate of positive samples. Meanwhile, introducing the divergence regularization term and the visual attention model further improves network performance.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a general flow chart of the present invention;
fig. 2 is a diagram of a DBN network structure used in the present invention;
FIG. 3 is a view of one of the MRI slices used in the test;
FIG. 4 is a graph of the results of the normalization of the saliency map of FIG. 3;
FIG. 5 is a schematic diagram of the segmentation of FIG. 4, wherein (a) is a real label graph and (b) is a graph of the segmentation result of the present invention.
Detailed Description
The invention provides an MRI brain tumor image segmentation method based on a DBN neural network that can assist physicians in diagnosing and segmenting brain tumors. The method proceeds as follows: first, several samples are selected from an existing library of patient brain MRI sequence images as training samples, which are preprocessed and used to compute saliency maps; the samples are then downsampled and fed to a DBN neural network for unsupervised training followed by supervised training; after training, the test image to be segmented can be fed to the network for segmentation, and the segmentation result is finally output. Because the image features are extracted by a deep learning method, the complexity and instability of manual feature extraction are avoided. In addition, downsampling to balance the samples and introducing a visual attention mechanism improve the accuracy of pixel classification, giving better segmentation results on MRI brain tumors.
Referring to fig. 1, the MRI brain tumor image segmentation method based on the DBN neural network of the present invention includes the following steps:
s1, dividing N frames of images in the brain MRI sequence image into a training set and a test set, and preprocessing data;
S101, selecting the slice with the largest tumor area from the brain MRI sequence images of N brain-tumor patients, and taking t frames as the training set D_Train; the remaining N − t frames serve as the test set D_test.
In the embodiment of the present application, both the training and test data come from the FLAIR-modality image data of the BraTS 2015 challenge, and each frame has a resolution of 250 × 250 pixels;
S102, for each image D_i (1 ≤ i ≤ N) of the training set and test set, perform normalization;
s2, calculating a saliency map of each frame of image, and normalizing each saliency map;
S201, denote the pixel value at the m-th row and n-th column of each frame image, and compute the mean value of each frame image;
S202, convolve each frame image with a 5 × 5 Gaussian kernel to obtain the Gaussian-blurred version of each frame image;
S203, finding a saliency map of each frame of image, specifically as follows:
S204, normalize S_i to obtain the normalized saliency map, whose entry at the m-th row and n-th column represents the saliency value of that pixel of the i-th frame image; as can be seen from FIG. 4, the pixel values of the tumor region differ greatly from those of the background region and have higher saliency;
S_i is calculated as follows:
s3, downsampling the training set samples according to the saliency map;
The saliency values of the pixels of each training-set image are sorted in descending order; for each of the h most salient pixels, a 9 × 9 square region centered on that pixel is taken and flattened row-wise into an 81-dimensional column vector as a training sample. This yields t × h training samples; let A_k denote the k-th training sample and L_k its label, where label 0 belongs to the background region and label 1 to the tumor region;
s4, sending the processed training set samples to a DBN network for unsupervised pre-training;
the training set sample obtained in the step S3 is sent to a DBN network for unsupervised pre-training through a contrast divergence method, and the network structure of the DBN refers to FIG. 2;
s5, simultaneously sending the training set samples and the labels thereof into a network, and finely adjusting network parameters through an Adam algorithm;
S501, set the number of samples fed per batch during training: batch_size = 1024. Denote the samples labeled 0 by A(label=0), with count n_0, and the samples labeled 1 by A(label=1), with count n_1; let f(·) denote the output of the last layer of the DBN network;
s502, solving the output average value of the sample with the label of 0 or 1 in each batch of training samples in the last layer as follows:
where x ∈ {0,1}, batch_size represents the number of samples fed per batch during training, A(label=x) represents a sample labeled x, f(·) represents the last-layer output of the DBN network, and μ(label=x) represents the mean of the last-layer features of the samples labeled x;
The total within-class variance of the samples on each feature dimension output by the last layer of the DBN network is calculated as follows:
where n_x represents the number of samples labeled x;
The between-class variance on each feature dimension of the samples' last-layer DBN output is calculated as follows:
δ_between = (μ(label=0) − μ(label=1))²;
where μ(label=0) denotes the mean of the last-layer output of the samples labeled 0, and μ(label=1) the mean of the last-layer output of the samples labeled 1.
S503, calculating a loss function of the network:
where S_k represents the saliency value of the k-th sample: a loss weight is assigned to each point according to its pixel saliency value, which strengthens the network's ability to identify the (highly salient) tumor region; Softmax(·) represents the softmax classifier function, and λ is a tunable hyperparameter representing the weight of the divergence regularization term. The loss function is then minimized by the Adam algorithm, and the network parameters are updated continuously until convergence;
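The loss of S503 is not reproduced as a formula in this text, so the sketch below is one plausible reading of the description: a saliency-weighted softmax cross-entropy plus λ times a divergence regularizer built from the within-class and between-class variances of the last-layer features (here their ratio, which decreases as the classes separate). The helper names and the exact form of the regularizer are assumptions.

```python
import numpy as np

def dbn_loss(logits, labels, sal, lam=0.1):
    """Saliency-weighted cross-entropy plus a within/between-class divergence term.

    logits: (batch, 2) last-layer outputs; labels: (batch,) in {0, 1};
    sal: (batch,) per-sample saliency weights; lam: regularizer weight.
    The batch must contain at least one sample of each class.
    """
    z = logits - logits.max(axis=1, keepdims=True)       # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12)
    mu0 = logits[labels == 0].mean(axis=0)
    mu1 = logits[labels == 1].mean(axis=0)
    d_in = (logits[labels == 0].var(axis=0).sum()
            + logits[labels == 1].var(axis=0).sum())     # total within-class variance
    d_between = ((mu0 - mu1) ** 2).sum()                 # between-class variance
    return (sal * ce).mean() + lam * d_in / (d_between + 1e-12)
```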
and S6, segmenting the test set image by using the trained network model.
For each pixel of a test-set image, take the 9 × 9 region centered on that pixel and flatten it into an 81-dimensional column vector; feed the vectors into the trained network for testing and output a classification label for each pixel, thereby obtaining the segmented binary image. For points on the image border, missing surrounding pixel values are supplemented by symmetric padding.
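The test-time procedure can be sketched as a dense sliding window over the slice; `predict` stands in for the trained DBN's forward pass and is a hypothetical interface.

```python
import numpy as np

def segment_slice(img, predict, patch=9):
    """Classify every pixel of a test slice.

    Takes the 9x9 neighborhood of each pixel (symmetric padding at the border),
    flattens it row-wise to an 81-dim vector, and asks the trained network
    (`predict`, a stand-in for the DBN forward pass) for a 0/1 label.
    """
    r = patch // 2
    padded = np.pad(img, r, mode='symmetric')    # supply missing border values
    H, W = img.shape
    vectors = np.empty((H * W, patch * patch))
    k = 0
    for m in range(H):
        for n in range(W):
            vectors[k] = padded[m:m + patch, n:n + patch].ravel()
            k += 1
    return predict(vectors).reshape(H, W)        # binary segmentation mask
```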
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The experimental contents are as follows:
to illustrate the effectiveness and adaptability of the invention, the training images and test images used in the experiment were each decimated from the 80 th slice data in the flair modality of different patients in the Brats2015 challenge race database. Fig. 3 and 4 show one of the images and the corresponding label map for training and testing, respectively. The training data and the test data are sent to a network for training and testing after being preprocessed according to the method provided by the invention, and the evaluation indexes of the test result comprise three items: dice Similarity Coefficient (DSC), Sensitivity (Sensitivity), Positive Predictive Value (PPV). Wherein DSC is defined as:
DSC = 2|V_seg ∩ V_gt| / (|V_seg| + |V_gt|), where V_seg represents the segmentation result and V_gt the ground-truth label of the image.
Sensitivity and positive predictive value are defined as Sensitivity = TP / (TP + FN) and PPV = TP / (TP + FP), where TP denotes the pixels in the region segmented by the present invention whose true label is 1, FP the pixels whose true label is 0 but that the present invention identifies as 1, and FN the pixels whose true label is 1 but that the present invention identifies as 0. A comparison of the present invention with other segmentation methods is shown in Table 1:
Table 1. Comparison of evaluation indices between the present invention and other segmentation methods in the BraTS 2015 challenge

| Method | DSC | Sensitivity | PPV |
| --- | --- | --- | --- |
| Zhao | 0.79 | 0.85 | 0.77 |
| Festa | 0.72 | 0.72 | 0.77 |
| Doyle | 0.71 | 0.87 | 0.66 |
| The invention | 0.73 | 0.88 | 0.70 |
By comparison, the method segments brain MRI images well, with indices close to or better than those of the other methods, and therefore has practical value.
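The three evaluation indices can be computed from a predicted mask and a ground-truth mask as follows (standard definitions of DSC, sensitivity and PPV, not code from the patent):

```python
import numpy as np

def dsc_sensitivity_ppv(seg, gt):
    """DSC, sensitivity and positive predictive value for binary masks.

    DSC = 2|seg ∩ gt| / (|seg| + |gt|); sensitivity = TP/(TP+FN); PPV = TP/(TP+FP).
    """
    seg, gt = seg.astype(bool), gt.astype(bool)
    tp = np.logical_and(seg, gt).sum()      # predicted 1, true 1
    fp = np.logical_and(seg, ~gt).sum()     # predicted 1, true 0
    fn = np.logical_and(~seg, gt).sum()     # predicted 0, true 1
    dsc = 2 * tp / (seg.sum() + gt.sum())
    return dsc, tp / (tp + fn), tp / (tp + fp)
```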
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.
Claims (1)
1. An MRI brain tumor image segmentation method based on a DBN neural network, characterized in that several images are selected from an existing library of patient brain MRI sequence images as training samples, which are preprocessed and used to compute saliency maps; the samples are then downsampled and fed to a DBN neural network for unsupervised training followed by supervised training; after training, the test image to be segmented is fed to the network for segmentation and the segmentation result is finally output, the method comprising the following steps:
S1, dividing the N frames of the brain MRI sequence images into a training set and a test set, and preprocessing the data, specifically:
S101, selecting the slice with the largest tumor area from the brain MRI sequence diagrams of N patients with brain tumors, taking t frames of images as the training set D_Train and the remaining N−t frames as the test set D_test;
S102, normalizing each image D_i (1 ≤ i ≤ N) of the training set and the test set; the calculation is as follows:
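The normalization formula of step S102 appears in the patent as a figure and is not reproduced in the text; a common choice, assumed here, is min-max scaling of each frame to [0, 1]:

```python
import numpy as np

def normalize_image(img):
    """Rescale one frame to [0, 1] by min-max normalization.

    The patent's exact normalization formula is given only as a figure;
    min-max scaling is a standard assumption, not the confirmed formula.
    """
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    # Guard against constant frames to avoid division by zero
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
```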
S2, calculating a saliency map of each frame of image and normalizing each saliency map, specifically:
S201, taking the pixel value at the mth row and nth column of each frame of image, and computing the mean value of each frame of image;
S202, convolving each frame of image with a 5 × 5 Gaussian kernel to obtain a Gaussian-blurred version of each frame of image;
S203, calculating the saliency map S_i of each frame of image, specifically as follows:
S204, normalizing S_i to obtain the normalized saliency map, whose entry at the mth row and nth column represents the normalized saliency value of that pixel of the ith frame of image; the calculation is as follows:
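Steps S201–S204 (frame mean, 5 × 5 Gaussian blur, saliency, normalization) can be sketched as follows. The exact saliency formula in the patent is a figure, so the absolute difference |blurred − mean| used below is an assumption in the spirit of frequency-tuned saliency, and the function names are illustrative:

```python
import numpy as np

def gaussian_blur5(img, sigma=1.0):
    """Convolve one frame with a normalized 5 x 5 Gaussian kernel (S202)."""
    ax = np.arange(-2, 3)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    k /= k.sum()
    padded = np.pad(img, 2, mode='symmetric')
    out = np.zeros(img.shape, dtype=float)
    for dm in range(5):          # accumulate the 25 shifted, weighted copies
        for dn in range(5):
            out += k[dm, dn] * padded[dm:dm + img.shape[0], dn:dn + img.shape[1]]
    return out

def saliency_map(img):
    """S201-S204: saliency as |blurred - frame mean|, rescaled to [0, 1]."""
    mu = img.mean()                           # S201: mean of the frame
    s = np.abs(gaussian_blur5(img) - mu)      # S202-S203 (assumed formula)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)  # S204
```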
S3, down-sampling the training set samples according to the saliency map: sorting the saliency values of the pixels of each frame of image in the training set from large to small; centering on each of the first h pixels with the largest saliency, taking a 9 × 9 square region of the normalized training image and spreading it row-wise into an 81-dimensional column vector as one training sample, obtaining t × h training samples; letting A_k denote the kth training sample and L_k the label of the kth training sample, where label 0 belongs to the background region and label 1 belongs to the tumor region;
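The saliency-guided down-sampling of step S3 can be sketched as below. The helper name is illustrative, and the symmetric border padding for patches near the edge is an assumption (the patent specifies symmetric filling only for the test phase in step S6):

```python
import numpy as np

def sample_training_patches(img, sal, h, size=9):
    """Take the h most salient pixels as patch centers and extract
    size x size regions, each flattened row-wise (81-dim when size=9)."""
    r = size // 2
    padded = np.pad(img, r, mode='symmetric')         # assumed border handling
    order = np.argsort(sal, axis=None)[::-1][:h]      # saliency, descending
    rows, cols = np.unravel_index(order, sal.shape)
    patches = [padded[m:m + size, n:n + size].reshape(-1)
               for m, n in zip(rows, cols)]
    return np.stack(patches)                          # shape (h, size * size)
```

Applying this to the t training frames yields the t × h samples A_k described above.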
S4, sending the training set samples obtained in step S3 to the DBN network for unsupervised pre-training by the contrastive divergence method;
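A single CD-1 update for one Bernoulli RBM layer, the building block of the unsupervised pre-training in step S4, might look like the following; layer sizes, learning rate, and the function name are illustrative, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.01):
    """One contrastive-divergence (CD-1) step for a Bernoulli RBM layer.

    v0: batch of visible vectors, shape (batch, n_vis).
    W: weights (n_vis, n_hid); b, c: visible and hidden biases.
    """
    # Positive phase: hidden probabilities and a sample driven by the data
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to visible, then hidden again
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    # Gradient estimate: data correlations minus reconstruction correlations
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```

Stacking such layers and training them greedily, bottom-up, gives the pre-trained DBN.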
S5, sending the training set samples together with their labels into the network and fine-tuning the network parameters by the Adam algorithm, specifically:
S501, setting the number of samples fed per batch during training to batch_size = 1024, where the samples labeled 0 are A(label=0), numbering n_0, the samples labeled 1 are A(label=1), numbering n_1, and f(·) denotes the output of the last layer of the DBN network;
S502, for each batch of training samples, computing the mean of the samples labeled 0 or 1, the total within-class variance of the samples on each feature dimension output by the last layer of the DBN network, and the between-class variance of the samples on each feature dimension output by the last layer of the DBN network; the mean of the last-layer outputs of the samples labeled 0 or 1 in each batch of training samples is calculated as follows:
wherein x ∈ {0,1}, batch_size represents the number of samples fed per batch during training, A(label=x) represents a sample labeled x, f(·) represents the last-layer output of the DBN network, and μ(label=x) represents the feature mean at the last layer of the samples labeled x;
The total within-class variance δ_in of the samples on each feature dimension output by the last layer of the DBN network is calculated as follows:
wherein n_x represents the number of samples labeled x;
The between-class variance δ_between of the samples on each feature dimension output by the last layer of the DBN network is calculated as follows:
δ_between = (μ(label=0) − μ(label=1))²
wherein μ(label=0) represents the mean of the last-layer outputs of the samples labeled 0, and μ(label=1) represents the mean of the last-layer outputs of the samples labeled 1;
S503, calculating the loss function of the network as follows:
wherein S_k represents the saliency value of the kth sample and serves as the loss weight assigned to each point according to the saliency value of its pixel, improving the network's ability to identify the tumor region; Softmax(·) represents the Softmax classifier function; and λ is an adjustable hyper-parameter representing the weight of the divergence regularization term; the loss function is then minimized by the Adam algorithm, and the network parameters are continuously updated until convergence;
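The loss of steps S502–S503 appears in the patent as a figure; a plausible NumPy reading, assumed here, combines a saliency-weighted cross-entropy with the ratio δ_in / δ_between as the divergence regularization term:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def batch_loss(feats, logits, labels, saliency, lam=0.1):
    """Sketch of S502-S503 under assumed composition of the terms.

    feats: last-layer features f(A_k), shape (batch, d);
    logits: classifier inputs, shape (batch, 2);
    labels: array in {0, 1}; saliency: per-sample weights S_k.
    """
    mu0 = feats[labels == 0].mean(axis=0)            # mean of class 0
    mu1 = feats[labels == 1].mean(axis=0)            # mean of class 1
    d_in = (((feats[labels == 0] - mu0) ** 2).sum()
            + ((feats[labels == 1] - mu1) ** 2).sum()) / len(feats)
    d_between = ((mu0 - mu1) ** 2).sum()             # summed over dimensions
    p = softmax(logits)[np.arange(len(labels)), labels]
    ce = -(saliency * np.log(p + 1e-12)).mean()      # saliency-weighted CE
    # Small ratio = compact classes far apart; regularizer rewards that
    return ce + lam * d_in / (d_between + 1e-12)
```

Minimizing this with Adam updates the network parameters until convergence, as step S503 describes.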
S6, taking each pixel of the test set image as the center of a 9 × 9 region, spreading each region into an 81-dimensional column vector, feeding the vectors into the trained network for testing, and outputting the classification label of each pixel to obtain the segmented binary image; pixel values missing around border pixels are supplemented with points on the image edge by symmetric padding.
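The test-phase procedure of step S6 can be sketched as follows; `classify` is a placeholder for the trained DBN mapping an 81-dimensional patch to a label in {0, 1}:

```python
import numpy as np

def segment_image(img, classify, size=9):
    """Slide a size x size window over every pixel of the padded test
    image, flatten each patch, and write the predicted label into a
    binary mask. `classify` stands in for the trained network."""
    r = size // 2
    padded = np.pad(img, r, mode='symmetric')   # symmetric filling at borders
    mask = np.zeros(img.shape, dtype=np.uint8)
    for m in range(img.shape[0]):
        for n in range(img.shape[1]):
            patch = padded[m:m + size, n:n + size].reshape(-1)
            mask[m, n] = classify(patch)
    return mask
```

In practice the patches would be batched and sent through the network together rather than classified one at a time.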
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810885507.8A CN109102512B (en) | 2018-08-06 | 2018-08-06 | A MRI brain tumor image segmentation method based on DBN neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109102512A CN109102512A (en) | 2018-12-28 |
CN109102512B true CN109102512B (en) | 2021-03-09 |
Family
ID=64848832
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810885507.8A Active CN109102512B (en) | 2018-08-06 | 2018-08-06 | A MRI brain tumor image segmentation method based on DBN neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109102512B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109754404B (en) * | 2019-01-02 | 2020-09-01 | 清华大学深圳研究生院 | End-to-end tumor segmentation method based on multi-attention mechanism |
CN110689057B (en) * | 2019-09-11 | 2022-07-15 | 哈尔滨工程大学 | A method for reducing the sample size of neural network training based on image segmentation |
CN111445443B (en) * | 2020-03-11 | 2023-09-01 | 北京深睿博联科技有限责任公司 | Early acute cerebral infarction detection method and device |
CN111612764B (en) * | 2020-05-21 | 2023-09-22 | 广州普世医学科技有限公司 | Method, system and storage medium for resolving new coronal pneumonia ground glass focus contrast |
CN112132842A (en) * | 2020-09-28 | 2020-12-25 | 华东师范大学 | A brain image segmentation method based on SEEDS algorithm and GRU network |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102903103A (en) * | 2012-09-11 | 2013-01-30 | 西安电子科技大学 | Migratory active contour model based stomach CT (computerized tomography) sequence image segmentation method |
CN105719303A (en) * | 2016-01-25 | 2016-06-29 | 杭州职业技术学院 | Magnetic resonance imaging prostate 3D image segmentation method based on multi-depth belief network |
CN106296699A (en) * | 2016-08-16 | 2017-01-04 | 电子科技大学 | Cerebral tumor dividing method based on deep neural network and multi-modal MRI image |
CN106780453A (en) * | 2016-12-07 | 2017-05-31 | 电子科技大学 | A kind of method realized based on depth trust network to brain tumor segmentation |
Non-Patent Citations (2)
Title |
---|
ADAM: A Method for Stochastic Optimization; Diederik P. Kingma et al.; ICLR 2015; 2015-07-27; pp. 1-15 *
A left-ventricular wall segmentation method for cardiac magnetic resonance images with myocardial scar; Li Xiaoning; Journal of Sichuan University (Natural Science Edition); 2016-09-30; Vol. 53, No. 5; pp. 1011-1017 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109102512B (en) | A MRI brain tumor image segmentation method based on DBN neural network | |
Khened et al. | Densely connected fully convolutional network for short-axis cardiac cine MR image segmentation and heart diagnosis using random forest | |
CN108648191B (en) | Pest image recognition method based on Bayesian width residual neural network | |
CN107506761B (en) | Brain image segmentation method and system based on saliency learning convolutional neural network | |
CN107016681B (en) | Brain MRI tumor segmentation method based on full convolution network | |
CN107610087B (en) | An automatic segmentation method of tongue coating based on deep learning | |
CN112270660B (en) | Nasopharyngeal carcinoma radiotherapy target area automatic segmentation method based on deep neural network | |
Birenbaum et al. | Longitudinal multiple sclerosis lesion segmentation using multi-view convolutional neural networks | |
CN106408562B (en) | A method and system for retinal blood vessel segmentation in fundus images based on deep learning | |
CN108171232A (en) | The sorting technique of bacillary and viral children Streptococcus based on deep learning algorithm | |
WO2019001208A1 (en) | Segmentation algorithm for choroidal neovascularization in oct image | |
CN104346617B (en) | A kind of cell detection method based on sliding window and depth structure extraction feature | |
CN112270666A (en) | Non-small cell lung cancer pathological section identification method based on deep convolutional neural network | |
CN107169974A (en) | It is a kind of based on the image partition method for supervising full convolutional neural networks more | |
CN107886514A (en) | Breast molybdenum target image lump semantic segmentation method based on depth residual error network | |
CN110930416A (en) | MRI image prostate segmentation method based on U-shaped network | |
CN110084318A (en) | A kind of image-recognizing method of combination convolutional neural networks and gradient boosted tree | |
CN105825509A (en) | Cerebral vessel segmentation method based on 3D convolutional neural network | |
CN108038513A (en) | A kind of tagsort method of liver ultrasonic | |
CN107730542B (en) | Cone beam computed tomography image correspondence and registration method | |
CN104484886B (en) | A kind of dividing method and device of MR images | |
CN111784653B (en) | Multi-scale Network MRI Pancreas Contour Localization Method Based on Shape Constraint | |
CN112598613A (en) | Determination method based on depth image segmentation and recognition for intelligent lung cancer diagnosis | |
Mahapatra et al. | Visual saliency based active learning for prostate mri segmentation | |
CN109934204A (en) | A facial expression recognition method based on convolutional neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||