CN110782427A - Magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution - Google Patents
- Publication number
- CN110782427A (application CN201910761883.0A)
- Authority
- CN
- China
- Prior art keywords
- magnetic resonance
- brain tumor
- convolution
- separable
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
Abstract
The invention belongs to the field of computer-aided medicine, and particularly relates to an automatic segmentation method for magnetic resonance brain tumor images based on separable cavity (dilated/atrous) convolution. The method comprises the following specific steps: firstly, dividing a magnetic resonance brain tumor image data set into a training set and a testing set, and preprocessing the magnetic resonance brain tumor images in the training set; secondly, constructing a magnetic resonance brain tumor image deep segmentation network framework based on separable cavity convolution; thirdly, performing end-to-end training of the constructed separable cavity convolution brain tumor segmentation network on the preprocessed training-set magnetic resonance brain tumor images to obtain an optimized brain tumor segmentation network model; and finally, segmenting the test-set magnetic resonance brain tumor images with the trained brain tumor segmentation network model. By enhancing the extraction of discriminative deep features from magnetic resonance brain tumor images and the integration of spatial multi-scale information, the method obtains better magnetic resonance brain tumor segmentation results.
Description
Technical Field
The invention relates to a medical image segmentation method, in particular to a magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution.
Background
Brain glioma is one of the most common and aggressive primary brain tumors, seriously jeopardizing human health. Brain tumors are treated mainly by surgery, assisted by comprehensive measures such as radiotherapy and chemotherapy. Owing to its non-invasive, harmless, multi-directional and multi-parameter imaging and its clear depiction of soft tissue, magnetic resonance imaging has become a main reference basis for the clinical diagnosis and treatment of brain tumors. Accurate segmentation of brain tumor images is of great significance for medical image analysis and clinical research: it is an indispensable means of extracting quantitative information about particular tissues in an image and a prerequisite for three-dimensional visual reconstruction of brain tissue. From an accurate segmentation result, doctors can obtain the shape, size, position and other properties of a tumor, and quantitatively analyze and track them to follow the development and growth of the lesion. Existing brain tumor segmentation methods fall mainly into two types: traditional machine learning methods and deep learning methods.
The traditional magnetic resonance brain tumor segmentation method is mainly constructed based on models and methods in the fields of image processing, computer graphics, traditional artificial intelligence and the like, and mainly comprises a threshold-based method, a region-based method, a model-based method, a classifier-based method and the like.
In recent years, deep convolutional neural network models have been successfully applied to many computer vision tasks; they automatically extract deep features with high discriminative power and have rapidly spread into medical image processing and analysis. Research on deep learning based magnetic resonance brain image segmentation has produced a series of important results in recent years, with greatly improved performance over traditional brain image segmentation methods. However, current deep learning methods still struggle to extract discriminative features and to integrate information across multiple spatial scales. To solve this problem, an automatic brain tumor segmentation method based on separable cavity convolution is proposed.
For example, Chinese patent application No. 201580001261.8 discloses a fast magnetic resonance imaging method and apparatus based on a deep convolutional neural network. The method includes: step S1, constructing a deep convolutional neural network; step S2, acquiring off-line magnetic resonance image data, training the deep convolutional neural network, and learning the mapping between undersampled magnetic resonance images and fully sampled images; and step S3, reconstructing a magnetic resonance image using the network learned in step S2. The method exploits prior information from a large number of off-line magnetic resonance images, so that the off-line network can recover finer structures and image characteristics from newly acquired magnetic resonance data, improving the sampling acceleration factor and imaging precision. However, that technical scheme does not address automatic brain tumor segmentation, and its information processing capability is very limited.
Disclosure of Invention
In order to solve the problems in the prior art, namely the problem that the current deep learning technology cannot extract accurate features and integrate scale information when brain tumor segmentation is carried out, the invention provides a magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution.
The invention provides a magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution, which comprises the following steps:
step 1: dividing the magnetic resonance image data into a training set and a testing set; the training set is preprocessed to generate processed magnetic resonance images, and the testing set is used for model testing and the magnetic resonance brain tumor image segmentation process, specifically comprising the following steps:
step 11: constructing an image data set containing magnetic resonance image data and labels, and dividing the image data set into a training set and a testing set, wherein the training set is used for model training of the invention, and the testing set is used for a model testing stage of the invention;
acquiring image data and labels: X = [x1, x2, ..., xN] represents the sample set of all images, each case image is denoted xi, and N is the number of image samples; each case image has four representations from different magnetic sequences, namely the Flair, T1, T2 and T1c modalities. Y = [y1, y2, ..., yM] represents the labels corresponding to the image dataset X. The sample set is then divided: one part is selected as the training sample set Xtr and the other part as the test sample set Xte;
Step 12: first remove the 1% highest and 1% lowest intensity regions of the training dataset image, resulting in images with dimensions of 152 x 192 x 1463D, then cut each 3D image into a series of 2D slice images f1, with dimensions of 152 x 192; then, according to the image tumor characteristics, the f1 images are subjected to block processing to obtain an image f2 with the scale of 128 x 128, so that data imbalance is solved;
step 13: the pixel intensities of the images f2, which span different magnitudes, are converted to a uniform scale using z-score normalization to ensure comparability between data. Step 13 specifically comprises:
step 131: compute the mean mu of the overall pixel data of the f2 image; compute the standard deviation sigma of the overall pixel data of the f2 image;
step 132: for each f2 image pixel x, apply the formula z = (x - mu) / sigma to generate the processed magnetic resonance brain tumor image.
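Steps 131 and 132 amount to standard z-score normalization, which can be sketched as:

```python
import numpy as np

def z_score(img):
    """Normalize pixel intensities to zero mean and unit variance,
    as in steps 131-132: z = (x - mu) / sigma."""
    mu = img.mean()        # step 131: overall mean
    sigma = img.std()      # step 131: overall standard deviation
    return (img - mu) / sigma

f2 = np.random.rand(128, 128) * 1000.0   # arbitrary intensity scale
z = z_score(f2)
```

After this step every training image has comparable intensity statistics regardless of the scanner's original intensity range.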
Step 2, constructing a convolution network based on separable cavities for the segmentation of the processed magnetic resonance brain tumor image, and specifically comprising the following steps:
step 21: train the full convolutional neural network using axial slices of the processed magnetic resonance images; sequentially acquire each axial slice of the four modality images of a magnetic resonance scan. Specifically, square blocks centered at a given pixel are taken from the four-modality (Flair, T1, T2 and T1c) magnetic resonance images, and the labels are processed in the same way. The axial slice size is 128 x 128 x 4, where 4 denotes the Flair, T1, T2 and T1c modalities;
step 22: inputting the axial slices obtained in the step 21 into a neural convolution network capable of separating cavity convolution to carry out segmentation of brain tumors;
in step 22, construction of the separable cavity convolution network mainly comprises the following steps:
step S221: the encoder network extracts features through separable cavity convolution blocks. A separable cavity convolution block consists of a 3 x 3 separable convolution and a 3 x 3 convolution with a dilation rate of 2, together with regularization and a nonlinear activation function; a summation via a shortcut connection then realizes extraction of brain tumor features;
the encoder network comprises 3 separable cavity convolution blocks, whose feature channels number 64, 128 and 256 respectively; the encoder repeatedly downsamples to extract higher-order information about the brain tumor and acquire global semantic information;
step S222: 2 residual blocks, each formed from 3 x 3 and 1 x 1 convolutions and a summation operation, are added at the bottom of the separable cavity convolution network; both residual blocks have 512 channels. This structure facilitates perceiving global information while describing more detailed local features;
step S223: in the decoder network, the separable cavity convolution network restores features through upsampling and convolution operations; lost boundary information is compensated by merging the correspondingly mapped encoder network features, improving the network's ability to capture feature information.
In the decoder network, 3 residual blocks perform feature restoration, with 256, 128 and 64 channels respectively. The decoder and encoder networks cooperate to recover the lost boundary information, and pixel-level classification is performed through a softmax layer.
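The core of the encoder block described in step S221 can be sketched in numpy. This is a heavily simplified illustration, not the patent's network: it uses depthwise 3 x 3 convolutions only, omits the pointwise 1 x 1 step of a full separable convolution as well as regularization and channel expansion, and the kernels and tensor sizes are arbitrary.

```python
import numpy as np

def depthwise_conv2d(x, k, dilation=1):
    """Naive 'same'-padded depthwise 2D convolution.
    x: (C, H, W) feature map, k: (C, 3, 3) per-channel kernels."""
    c, h, w = x.shape
    pad = dilation  # for a 3x3 kernel, same padding equals the dilation
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    for ch in range(c):
        for i in range(3):
            for j in range(3):
                out[ch] += k[ch, i, j] * xp[ch,
                    i * dilation:i * dilation + h,
                    j * dilation:j * dilation + w]
    return out

def separable_cavity_block(x, k_sep, k_dil):
    """Sketch of one block: a 3x3 (depthwise) separable convolution, a 3x3
    convolution with dilation rate 2, ReLU activations, and a residual
    shortcut summation, as described in step S221."""
    y = np.maximum(depthwise_conv2d(x, k_sep, dilation=1), 0)  # separable part
    y = np.maximum(depthwise_conv2d(y, k_dil, dilation=2), 0)  # cavity part
    return x + y                                               # shortcut sum

x = np.random.rand(4, 16, 16)
out = separable_cavity_block(x, np.random.rand(4, 3, 3), np.random.rand(4, 3, 3))
print(out.shape)  # (4, 16, 16)
```

The dilation rate of 2 enlarges the receptive field without adding parameters, while the shortcut sum preserves the input features, matching the residual idea the block is built on.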
Step 3: end-to-end training of the separable cavity convolution network model. The processed magnetic resonance images are fed into the constructed separable cavity convolution network to optimize it and improve segmentation precision;
in step 3 the separable convolution network is optimized: during training, the loss function consists of a Dice loss and a Cross Entropy loss, which together measure the network's error; the network is then optimized continuously by stochastic gradient descent until the optimum is reached.
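A minimal sketch of the combined loss follows, shown here for a single binary class with soft probabilities. The equal 0.5/0.5 weighting is an assumption for illustration; the patent only states that the loss consists of a Dice term and a Cross Entropy term.

```python
import numpy as np

def dice_loss(prob, target, eps=1e-6):
    """Soft Dice loss for one class: 1 - 2|P.T| / (|P| + |T|)."""
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def cross_entropy(prob, target, eps=1e-12):
    """Pixel-wise binary cross-entropy."""
    return -np.mean(target * np.log(prob + eps)
                    + (1 - target) * np.log(1 - prob + eps))

def combined_loss(prob, target, w=0.5):
    # w is an assumed weighting between the two terms
    return w * dice_loss(prob, target) + (1 - w) * cross_entropy(prob, target)

target = np.zeros((8, 8)); target[2:6, 2:6] = 1.0
perfect = target.copy()               # a perfect prediction
print(round(combined_loss(perfect, target), 4))  # ~0.0
```

A perfect prediction drives both terms to zero; in a multi-class network the Dice term would typically be averaged over the tumor classes produced by the softmax layer.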
Step 4: the trained separable cavity convolution network model is used to segment the magnetic resonance brain images. The test set is fed into the network slice by slice with a slice size of 240 x 240; the data of all four modalities are sent to the trained model together, and after model processing the output test segmentation result is generated.
In the method, the separable cavity convolution based automatic segmentation method for magnetic resonance brain tumors is a deep convolutional neural network; the feature extraction network is composed of a separable cavity residual network. The network is a cascaded convolutional neural network comprising an encoder network and a decoder network: the encoder extracts image features through operations such as convolution and pooling, the extracted features are sent to the decoder, and the encoder and decoder are connected through corresponding operations to jointly restore the features and obtain the final segmented magnetic resonance image.
After the brain tumor segmentation result is output in step 3, index evaluation of the regional tumors is required to obtain the segmentation performance for the different tumor regions.
The index evaluation is further described as follows: the three pathological states in the brain tumor segmentation result corresponding to the four-modality (Flair, T1, T2, T1c) magnetic resonance images are the complete tumor, the core tumor and the enhancing tumor, which are nested regions. The magnetic resonance brain tumor images have four labels, 0, 1, 2 and 4, which indicate that a voxel (x, y, z) is labeled healthy tissue, necrotic and non-enhancing tumor, edema, or enhancing tumor, respectively. The segmentation result is reported via the Dice evaluation metric.
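The mapping from the labels {0, 1, 2, 4} to the three nested regions and the Dice metric can be sketched as below. The exact region composition (core = labels 1 and 4) is an assumption consistent with the nested-region description above and the usual BraTS convention.

```python
import numpy as np

def region_masks(seg):
    """Map a label map with values {0,1,2,4} to the three nested regions."""
    whole = seg > 0               # complete tumor: labels 1, 2, 4
    core = np.isin(seg, (1, 4))   # core tumor: necrotic/non-enh. + enhancing
    enhancing = seg == 4          # enhancing tumor
    return whole, core, enhancing

def dice(a, b, eps=1e-6):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return (2.0 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

pred = np.array([[0, 1], [2, 4]])   # toy prediction
gt   = np.array([[0, 1], [2, 2]])   # toy ground truth
for p, g in zip(region_masks(pred), region_masks(gt)):
    print(round(dice(p, g), 3))     # whole, core, enhancing
```

On the toy example the whole-tumor Dice is 1.0 while core and enhancing scores drop, illustrating how the three indices measure different parts of the same segmentation.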
The beneficial effects of the separable cavity convolution magnetic resonance brain tumor automatic segmentation method are: a separable cavity convolution network is used for feature extraction and an encoder-decoder structure is constructed, realizing tumor feature extraction and feature fusion and addressing the difficulty deep learning brain tumor segmentation models have in extracting features and integrating information across multiple spatial scales. In addition, validation on the BraTS2018 challenge dataset also achieved good results. The method segments slice by slice in an end-to-end fashion, which reduces segmentation time and hardware requirements, lowers the brain tumor segmentation error rate, and reduces the cost of automatic brain tumor segmentation.
Drawings
The invention is described in further detail below with reference to the figures and the detailed description of the invention
FIG. 1 is a schematic diagram of the segmentation model of the separable cavity convolution magnetic resonance brain tumor automatic segmentation method of the present invention;
FIG. 2 is a flow chart of the separable cavity convolution magnetic resonance brain tumor automatic segmentation method of the present invention;
FIG. 3 is a comparison of results of the separable cavity convolution magnetic resonance brain tumor automatic segmentation method and other methods on the BraTS2017 dataset;
FIG. 4 is a comparison of results of the separable cavity convolution magnetic resonance brain tumor automatic segmentation method and other methods on the BraTS2018 dataset.
Detailed Description
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these examples are only for explaining the technical solutions of the present invention, and are not intended to limit the scope of the present invention. For example, although the steps of the method of the present invention are described herein in a particular order, these orders are not limiting, and one skilled in the art can perform the steps in a different order without departing from the basic principles of the invention, and such variations are within the scope of the present application.
Example 1: referring to fig. 1, the present invention uses a neural network implementing separable cavity convolution fusion to address the difficulty current deep learning brain tumor segmentation methods have in extracting features and integrating information across multiple spatial scales; the method segments quickly and accurately. In the brain tumor segmentation network model of this embodiment, shown in fig. 1, the separable cavity convolutions form a U-shaped convolutional neural network.
Please refer to fig. 2, which includes the following steps:
step 1: magnetic resonance image dataset partitioning and preprocessing of the training dataset: 2D slice processing and the z-score method are used for data-imbalance handling and pixel-intensity normalization, generating the processed magnetic resonance images.
Step 2: brain tumor segmentation: the magnetic resonance brain tumor automatic segmentation network construction based on separable cavity convolution is realized.
Step 3: optimize the parameters of the separable cavity convolution network and segment the processed magnetic resonance images.
Step 4: use the trained separable cavity convolution network model to segment the magnetic resonance brain test set.
The details are as follows:
1. processing of magnetic resonance images
The dataset division and the preprocessing of the magnetic resonance brain tumor training images comprise the following steps:
step 11: the magnetic resonance image data set is divided into a training set and a test set.
Step 12: firstly removing 1% highest and 1% lowest intensity regions of the images from the divided training set of the magnetic resonance brain images to obtain 152 × 192 × 146 3D data, and then cutting each 3D image into a series of 2D slice images f1, wherein the scale of the f1 image is 152 × 192; these f1 images are then passed through a sliding window for block processing, resulting in f2 with an image scale of 128 × 128.
Step 13: the magnetic resonance image f2 is subjected to z-score intensity normalization.
z-score normalization is a common method of data processing. By which data of different magnitudes can be converted into z-score scores of a unified measure for comparison.
In step 13 of this embodiment, pixel intensity normalization is performed by z = (x - mu) / sigma, where mu and sigma are the mean and standard deviation of the pixel data. Two or more groups of data are thereby converted into unitless z-score values, unifying the data standards and improving data comparability, at the cost of weakening the direct interpretability of the raw intensities.
2. Construction of brain tumor segmentation network
Referring to fig. 1, in step 2, the magnetic resonance brain tumor automatic segmentation deep neural network based on separable cavity convolution is a convolutional neural network with a coding and decoding structure, and is formed by connecting a coding network and a decoding network in series. The coding network extracts image characteristics through operations such as separable cavity convolution, pooling and the like, and the decoding network acquires the characteristics of the image through upsampling and convolution operations to realize brain tumor segmentation.
In step 2, drawing on the networks of Chollet, Yu and others, separable convolution and cavity (dilated) convolution decouple the cross-channel correlation and the spatial correlation of the convolution layers and map them separately, which achieves better results. Fusing separable convolution and cavity convolution yields better descriptions of both local and global features. Meanwhile, the network makes full use of the channel and region information of the magnetic resonance brain image, better capturing pixel-level details and spatial information, which benefits the extraction of feature information for magnetic resonance brain segmentation, enhances the feature capture capability and realizes more accurate segmentation.
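The parameter saving from this decoupling can be shown with simple arithmetic. The channel counts below are illustrative, chosen to match one of the encoder stages described earlier; dilation does not change the parameter count, only the receptive field.

```python
def conv_params(c_in, c_out, k=3):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k=3):
    """Depthwise k x k plus pointwise 1 x 1: the channel/spatial decoupling
    described above (bias omitted)."""
    return c_in * k * k + c_in * c_out

print(conv_params(128, 128))            # 147456
print(separable_conv_params(128, 128))  # 17536
```

For a 128-channel 3 x 3 layer the separable form needs roughly 8x fewer parameters, which is why it can be combined with dilation at little extra cost.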
In step 2, specifically, the magnetic resonance brain tumor automatic segmentation method construction process based on separable cavity convolution comprises the following steps:
step S21, training the network using square patches taken from the processed magnetic resonance image axial slices; the small square blocks of the axial slice of the magnetic resonance image after the extraction processing form a training sample in a training stage, specifically, the small square blocks are square blocks on four-mode magnetic resonance images of Flair, T1, T1c and T2 with a specific pixel as a center, the size of the square blocks is 128 × 128 × 4, and 4 channels are axial slices in four modes of Flair, T1, T1c and T2 respectively. The labels are also processed in this manner, corresponding to the label values of the training data.
Step S22: as shown in fig. 1, a schematic diagram of the brain tumor segmentation model of the separable cavity convolution magnetic resonance brain tumor automatic segmentation method of the present invention. In the training stage the network is trained at a scale of 128 x 128 x 4; features are extracted by downsampling and restored by upsampling, and classification is finally performed through a softmax layer.
3. Model optimization
The training predictions are used to compute a modified Dice loss function. In the training stage the network takes an image and outputs an image of the same size containing per-pixel class predictions, namely probability values for the healthy tissue, necrotic, non-enhancing, edema and enhancing classes. The parameters of the whole separable cavity convolution network are then trained with the error back-propagation algorithm.
It should be noted that the training samples are obtained by random sampling of the training data, with data balancing performed through the removal and blocking steps. Training the network on small patches of axial slices both increases the number of training samples and allows the number of samples per class to be controlled, helping to balance the classes. The network is a fully convolutional neural network operating end to end; testing uses the 2D slice mode, which speeds up test-time efficiency.
In step 3, axial slices of the processed magnetic resonance images are used to optimize the parameters of the magnetic resonance brain tumor automatic segmentation method fusing separable cavity convolution; the parameters of the separable cavity convolution segmentation network are optimized in the error back-propagation stage.
In step 4, the trained separable cavity convolution network model is used to segment the magnetic resonance brain test images.
In conclusion, the separable cavity convolution based magnetic resonance brain tumor automatic segmentation network realizes separable cavity convolution fusion and image feature extraction, addresses the difficulty deep learning brain tumor segmentation models have in extracting features and integrating information across multiple spatial scales, and obtains good results on the BraTS2017 and BraTS2018 datasets.
Claims (6)
1. A magnetic resonance brain tumor image automatic segmentation method based on separable cavity convolution is characterized in that: the method comprises the following steps:
step 1: dividing a magnetic resonance brain tumor image data set into a training set and a testing set, wherein the training set is used for model training of the method, the testing data is used for model testing of the method, preprocessing operations are carried out on magnetic resonance brain tumor images in the training set, and a preprocessed magnetic resonance brain tumor image training set is generated;
step 2: constructing a magnetic resonance brain tumor image depth segmentation network framework based on separable cavity convolution;
step 3: performing end-to-end training of the constructed separable cavity convolution brain tumor segmentation network on the preprocessed training-set magnetic resonance brain tumor images to obtain an optimized brain tumor segmentation network model;
step 4: segmenting the test-set magnetic resonance brain tumor images with the trained brain tumor segmentation network model.
2. The method of claim 1, wherein the method comprises: the preprocessing operation of the training set magnetic resonance brain tumor image in the step 1 comprises the following steps:
removing the areas of the invalid highest and lowest intensities of the three-dimensional data of the magnetic resonance brain tumor image, namely removing invalid background pixels, and then cutting the processed three-dimensional image into a series of two-dimensional images f1 along the Z axis; then, the two-dimensional f1 images are sliced in a sliding window mode, the size of the slice data is determined by a tumor area, 3 slice images f2 are obtained by one two-dimensional f1 image slice, and the labels are processed in the same mode to realize block processing so as to solve data imbalance; and finally, converting the intensities of the data pixels with different magnitudes of the image f2 into uniform intensities by utilizing a z-score method for normalization processing so as to ensure comparability between data and generate a processed magnetic resonance image.
3. The method of claim 2, wherein the method comprises: in step 1, the preprocessed magnetic resonance image includes magnetic resonance images of four modalities, Flair, T1, T2, and T1 c.
4. The method of claim 1, wherein the method comprises: and 2, constructing a magnetic resonance brain tumor image depth segmentation network architecture based on separable cavity convolution, wherein the deep learning network architecture is formed by connecting separable convolution blocks, double residual blocks and residual blocks in series. The separable convolution block and the double-residual block have the following specific structures:
s31. Separable convolution block
(1) The separable convolution block comprises two convolution layers: the first consists of a regularization function, an activation function and a separable convolution, and the second consists of a regularization function, an activation function and a cavity convolution; the two layers have the same number of channels;
(2) then, following the residual shortcut-connection idea, the input features are summed with the features processed by the separable convolution layers;
(3) finally, the two parts are combined to give the final separable convolution block;
s32. Double-residual-block module
(1) The double-residual-block module sits at the bottom of the separable cavity convolution network and comprises two residual blocks processed in different ways. The first residual block is constructed from two convolutions to extract features, mainly exploiting the shortcut principle of residual networks;
(2) the second residual block is also constructed from two convolutions; each convolution layer applies regularization and an activation function, then the convolution, and the result is finally summed with the features entering the residual block.
5. The method of claim 1, wherein the method comprises: in step 3, the end-to-end training of the separable cavity convolution network model includes its optimization: in the back-propagation of the separable cavity convolution network, a modified Dice loss function is used as the objective function, mapped to the prediction probability of each label through the softmax activation function; on this basis, back-propagation uses stochastic gradient descent, computing the partial derivative of each parameter layer by layer with the chain rule, determining the back-propagated error and continuously updating the network model parameters to optimize the separable cavity convolution network.
6. The method of claim 1, wherein in step 4 the trained brain tumor segmentation network model is used to segment the test-set magnetic resonance brain tumor images, the testing process being as follows:
the 3D test-set volumes are first converted into 2D data by slicing along the Z axis, and the slice data are then fed into the trained model to segment the test data.
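The slice-and-restack test procedure can be sketched in a few lines of NumPy. The (D, H, W) axis order and the function names are assumptions; `predict_slice` is a hypothetical stand-in for the trained 2-D segmentation model.

```python
import numpy as np

def z_slices(volume):
    """Split a (D, H, W) 3-D volume into D two-dimensional (H, W) slices
    along the Z axis, as claimed for test-time inference."""
    return [volume[z] for z in range(volume.shape[0])]

def segment_volume(volume, predict_slice):
    """Feed each 2-D slice to a slice-wise predictor and restack the
    per-slice outputs into a 3-D segmentation map."""
    return np.stack([predict_slice(s) for s in z_slices(volume)])
```

Restacking with `np.stack` recovers the original volume shape, so the per-slice predictions line up with the input voxels.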
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910761883.0A CN110782427B (en) | 2019-08-19 | 2019-08-19 | Magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110782427A true CN110782427A (en) | 2020-02-11 |
CN110782427B CN110782427B (en) | 2023-06-20 |
Family
ID=69383306
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910761883.0A Active CN110782427B (en) | 2019-08-19 | 2019-08-19 | Magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110782427B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107220980A (en) * | 2017-05-25 | 2017-09-29 | 重庆理工大学 | A kind of MRI image brain tumor automatic division method based on full convolutional network |
CN109919212A (en) * | 2019-02-26 | 2019-06-21 | 中山大学肿瘤防治中心 | The multi-dimension testing method and device of tumour in digestive endoscope image |
CN109949309A (en) * | 2019-03-18 | 2019-06-28 | 安徽紫薇帝星数字科技有限公司 | A kind of CT image for liver dividing method based on deep learning |
CN109949275A (en) * | 2019-02-26 | 2019-06-28 | 中山大学肿瘤防治中心 | A kind of diagnostic method and device of superior gastrointestinal endoscope image |
Non-Patent Citations (1)
Title |
---|
FISHER YU et al.: "Multi-scale context aggregation by dilated convolutions" *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113610742A (en) * | 2020-04-16 | 2021-11-05 | 同心医联科技(北京)有限公司 | Whole brain structure volume measurement method and system based on deep learning |
CN112053342A (en) * | 2020-09-02 | 2020-12-08 | 陈燕铭 | Method and device for extracting and identifying pituitary magnetic resonance image based on artificial intelligence |
CN112200810A (en) * | 2020-09-30 | 2021-01-08 | 深圳市第二人民医院(深圳市转化医学研究院) | Multi-modal automated ventricular segmentation system and method of use thereof |
CN112200810B (en) * | 2020-09-30 | 2023-11-14 | 深圳市第二人民医院(深圳市转化医学研究院) | Multi-modal automated ventricle segmentation system and method of use thereof |
CN112288749A (en) * | 2020-10-20 | 2021-01-29 | 贵州大学 | Skull image segmentation method based on depth iterative fusion depth learning model |
CN112489059A (en) * | 2020-12-03 | 2021-03-12 | 山东承势电子科技有限公司 | Medical tumor segmentation and three-dimensional reconstruction method |
CN113160138A (en) * | 2021-03-24 | 2021-07-23 | 山西大学 | Brain nuclear magnetic resonance image segmentation method and system |
CN113160138B (en) * | 2021-03-24 | 2022-07-19 | 山西大学 | Brain nuclear magnetic resonance image segmentation method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111709953B (en) | Output method and device in lung lobe segment segmentation of CT (computed tomography) image | |
CN110782427B (en) | Magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution | |
CN112150428B (en) | Medical image segmentation method based on deep learning | |
CN108921851B (en) | Medical CT image segmentation method based on 3D countermeasure network | |
Bindhu | Biomedical image analysis using semantic segmentation | |
CN109410219A (en) | A kind of image partition method, device and computer readable storage medium based on pyramid fusion study | |
CN109493347A (en) | The method and system that the object of sparse distribution is split in the picture | |
CN112258530A (en) | Neural network-based computer-aided lung nodule automatic segmentation method | |
CN110310287A (en) | It is neural network based to jeopardize the automatic delineation method of organ, equipment and storage medium | |
CN115496771A (en) | Brain tumor segmentation method based on brain three-dimensional MRI image design | |
CN114693933A (en) | Medical image segmentation device based on generation of confrontation network and multi-scale feature fusion | |
CN112446892A (en) | Cell nucleus segmentation method based on attention learning | |
CN110136133A (en) | A kind of brain tumor dividing method based on convolutional neural networks | |
CN110415253A (en) | A kind of point Interactive medical image dividing method based on deep neural network | |
CN115147600A (en) | GBM multi-mode MR image segmentation method based on classifier weight converter | |
CN114596317A (en) | CT image whole heart segmentation method based on deep learning | |
CN113421240A (en) | Mammary gland classification method and device based on ultrasonic automatic mammary gland full-volume imaging | |
CN115661165A (en) | Glioma fusion segmentation system and method based on attention enhancement coding and decoding network | |
CN117274599A (en) | Brain magnetic resonance segmentation method and system based on combined double-task self-encoder | |
Li et al. | A deeply supervised convolutional neural network for brain tumor segmentation | |
Merati et al. | A New Triplet Convolutional Neural Network for Classification of Lesions on Mammograms. | |
Muthiah et al. | Fusion of MRI and PET images using deep learning neural networks | |
CN115841457A (en) | Three-dimensional medical image segmentation method fusing multi-view information | |
CN114820636A (en) | Three-dimensional medical image segmentation model and training method and application thereof | |
CN110706209B (en) | Method for positioning tumor in brain magnetic resonance image of grid network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
GR01 | Patent grant | ||