CN108492297B - MRI brain tumor positioning and intratumoral segmentation method based on deep cascade convolution network

MRI brain tumor positioning and intratumoral segmentation method based on deep cascade convolution network

Info

Publication number
CN108492297B
CN108492297B
Authority
CN
China
Prior art keywords
tumor
layer
network
convolution
intratumoral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810300057.1A
Other languages
Chinese (zh)
Other versions
CN108492297A (en)
Inventor
崔少国 (Cui Shaoguo)
张建勋 (Zhang Jianxun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Normal University
Original Assignee
Chongqing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Normal University filed Critical Chongqing Normal University
Publication of CN108492297A publication Critical patent/CN108492297A/en
Application granted granted Critical
Publication of CN108492297B publication Critical patent/CN108492297B/en

Classifications

    • G06T7/0012 - Biomedical image inspection
    • G06N3/045 - Combinations of networks
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G06T7/11 - Region-based segmentation
    • G06T2207/10088 - Magnetic resonance imaging [MRI]
    • G06T2207/20021 - Dividing image into blocks, subimages or windows
    • G06T2207/20081 - Training; Learning
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G06T2207/30016 - Brain
    • G06T2207/30096 - Tumor; Lesion

Abstract

The invention provides an MRI brain tumor positioning and intratumoral segmentation method based on a deep cascade convolution network, which comprises the following steps: building a deep cascade convolution neural network segmentation model; model training and parameter optimization; and rapid localization and intratumoral segmentation of multi-modal MRI brain tumors. The method constructs a deep cascade hybrid neural network consisting of a full convolution neural network and a classification convolution neural network, and divides the segmentation process into two stages: localization of the complete tumor region and segmentation of the intratumoral subregions. A full convolution network first localizes the complete tumor region in the MRI images; an image block classification method then further segments the complete tumor into an edema region, a non-enhancing tumor region, an enhancing tumor region and a necrosis region. The method thereby achieves accurate localization of multi-modal MRI brain tumors and rapid, accurate segmentation of the intratumoral subregions.

Description

MRI brain tumor positioning and intratumoral segmentation method based on deep cascade convolution network
Technical Field
The invention relates to the technical field of medical image analysis, in particular to an MRI brain tumor positioning and intratumoral segmentation method based on a deep cascade convolution network.
Background
Brain tumors are a disease that seriously harms human health. Glioma is the main type of malignant brain tumor; although it is not common, its lethality is very high. According to the literature, the average survival time for high-grade glioma is 14 months. Magnetic Resonance Imaging (MRI) is the most common clinical means of brain tumor examination and diagnosis. Accurately segmenting brain tumors and intratumoral structures from MRI images has important value for neuropathological analysis and accurate diagnosis, and can provide important support for surgical planning, radiotherapy and chemotherapy planning, and prognosis evaluation.
Segmentation of MRI brain tumors requires simultaneous reference to images of four modalities (T1, T1c, T2 and FLAIR), and each modality in turn contains several slices of three-dimensional volumetric data. Manual segmentation, while feasible, is very time consuming, depends on physician experience, carries some subjectivity and has poor repeatability. It is therefore necessary to explore artificial intelligence algorithms for fully automatic segmentation of MRI brain tumors.
Conventional machine learning employs manual feature extraction, trains a classifier on the extracted features, and then classifies image pixels with the trained classifier to generate a segmentation map. However, this approach is limited by the feature extraction algorithm, and the extracted features are not necessarily the discriminative features suited to a particular classification task. In contrast, deep learning based on convolutional neural networks can automatically learn hierarchical features suited to a specific task from a data set, greatly improving pixel classification accuracy.
The inventor finds that current deep-learning-based automatic MRI brain tumor segmentation methods fall mainly into two types: image block classification and full convolution network pixel-level classification. The image block classification method uses a sliding window to classify the neighborhood block centered on each pixel, and has the following drawbacks: (1) high computational redundancy and slow segmentation; (2) classification uses only the local features of the image blocks without integrating global image features, which easily produces misclassified points; (3) the model's performance depends directly on how the training image blocks are extracted. The full convolution network pixel-level classification method feeds the whole image into the network and segments every tumor region class of the whole image in a single forward pass, but it has the following drawbacks: (1) the lesion area, particularly the intratumoral subregions, usually occupies only a very small part of a medical image, so the pixel classes are severely imbalanced, and whole-image training cannot resolve this label imbalance; (2) the small regions provide too few samples for sufficient training, so the segmentation boundaries are rough and fine-grained segmentation of small regions cannot be achieved.
Disclosure of Invention
The invention provides an MRI brain tumor localization and intratumoral segmentation method based on a deep cascade convolution network, which aims to solve two problems of existing MRI brain tumor segmentation: the image block classification method does not exploit global context features and is slow, while the severely imbalanced training samples of the full convolution network pixel-level classification method make small-region segmentation boundaries inaccurate.
In order to solve the technical problems, the invention adopts the following technical scheme:
an MRI brain tumor localization and intratumoral segmentation method based on a deep cascade convolution network comprises the following steps:
s1, building a deep cascade convolution neural network segmentation model:
s11, the deep cascade convolution neural network is composed of a tumor localization network and an intratumoral classification network, the tumor localization network is suitable for inputting FLAIR, T1, T1c and T2 four-modality MRI images and outputting a binary image of a tumor candidate region and normal tissues, and the intratumoral classification network is suitable for inputting the tumor candidate region output by the tumor localization network and outputting a segmentation result of intratumoral subareas;
s12, the tumor positioning network is composed of a full convolution network and comprises five convolution layer groups (first to fifth), five pooling layers (first to fifth), a convolution layer six and a convolution layer seven, wherein the first pooling layer follows the first convolution layer group, the second pooling layer follows the second convolution layer group, and so on, with the fifth pooling layer following the fifth convolution layer group, and convolution layer six and convolution layer seven following the fifth pooling layer in sequence;
s13, jumping connection is adopted in the tumor positioning network, high-level semantic features output by the convolutional layer seven are subjected to 2-time upsampling and then are fused with the pooled low-level detail features layer by layer, and the final fusion features are used for accurately predicting the pixel types;
s14, the intratumoral classification network consists of two convolution layer groups, two pooling layers, three full-connection layers and one Softmax classification layer, wherein each convolution layer group is followed by one pooling layer, and the three full-connection layers and the one Softmax classification layer are sequentially arranged behind the last pooling layer;
s2, model training and parameter optimization: carrying out supervised training on the deep cascade convolution neural network segmentation model by using the expanded labeling data, designing objective function optimization network parameters, and generating an optimal segmentation model, wherein the method specifically comprises the following steps:
s21, the standardized and expanded whole image data set is divided into a training set, a verification set and a test set at a ratio of 8:1:1, and the whole images of the four modalities FLAIR, T1, T1c and T2 of the same brain slice are taken as the four-channel input of the tumor positioning network;
s22, adopting a classification cross entropy loss function as an optimization target, wherein the target function is defined as follows:
L = -(1/S) Σ_{s=1}^{S} Σ_{c=1}^{C} Y'_{s,c} log(Y_{s,c})
wherein Y' is a segmentation label, Y is a prediction probability, C is a pixel class number, and S is an image pixel number;
s23, optimizing the target function by adopting a stochastic gradient descent algorithm, and updating the tumor localization network model parameters by adopting an error back propagation algorithm;
s24, the extracted MRI image block data set is divided into a training set, a verification set and a test set at a ratio of 8:1:1, and the FLAIR, T1, T1c and T2 four-modality image blocks of the same brain slice are taken as the four-channel input of the intra-tumor classification network;
s25, adopting the classification cross entropy loss function in the step S22 as an optimization target, wherein C represents the number of tumor classification categories, and S represents the number of image block samples in the batch;
s26, optimizing the target function by the stochastic gradient descent algorithm of step S23 and updating the intra-tumor classification network model parameters by the error back propagation algorithm, wherein, when training the intra-tumor classification network, Dropout regularization with a rate of 0.50 is applied to the first and second of the three full connection layers;
s3, rapid localization and intratumoral segmentation of multi-modal MRI brain tumors, comprising:
s31, inputting the preprocessed standardized four-modality MRI image as a four-channel into the trained and optimized tumor positioning network in the step S2, and automatically positioning and outputting a binary segmentation map comprising a tumor area and a non-tumor area;
s32, inputting the four-mode image blocks with the tumor area pixels as the center into the intra-tumor classification network trained and optimized in the step S2, predicting the classification of the pixels, predicting the tumor pixels one by one in a sliding window mode, and finally obtaining an intra-tumor subregion segmentation map;
and S33, overlapping the segmentation map of the intratumoral subarea on the original MRI image to obtain the final MRI brain tumor positioning and segmentation map.
Further, the method comprises the following steps:
s4, brain tumor multi-modality MRI image preprocessing, the preprocessed image data being suitable for the step S21, S24 and S31 inputs, which includes:
s41, carrying out offset field correction operation on the multi-mode MRI image;
s42, extracting MRI image slices of four modalities, namely FLAIR, T1, T1c and T2, and removing the gray values of the highest 1 percent and the lowest 1 percent in each MRI image slice;
s43, performing data normalization on the gray-level values of each MRI image according to the following formula:
X'(i,j) = (X(i,j) - X̄) / X_s

wherein X(i,j) is the gray value at row i, column j of slice X, X̄ and X_s are respectively the mean and standard deviation of slice X, and X'(i,j) is the normalized gray value of X(i,j);
s44, applying data expansion techniques to the standardized gray level images to increase the training data samples to 10 times the initial amount;
and S45, randomly extracting 33 × 33 image blocks centered on tumor pixels from the expanded data set, with the same number extracted for each tumor class, and dividing the image block data set equally into 10 groups by stratified sampling so that the proportion of each class of image blocks is the same in every group.
Further, in step S12, the first and second convolution layer groups each contain 2 convolution layers, with 64 and 128 convolution kernels respectively; the third, fourth and fifth convolution layer groups each contain 3 convolution layers, with 256, 512 and 512 convolution kernels respectively; all convolution layers use 3 × 3 kernels with stride 1; each pooling layer uses a 2 × 2 kernel with stride 2; and convolution layer six and convolution layer seven each have 4096 kernels of size 1 × 1 with stride 1.
Further, in the method, the output feature map Z_i corresponding to any convolution kernel is calculated using the following formula:

Z_i = f( Σ_{r=1}^{k} W_{ir} ⊗ X_r + b_i )

wherein f is a nonlinear excitation function, b_i is the bias term of the i-th convolution kernel, r is the input channel index, k is the number of input channels, W_ir is the r-th channel weight matrix of the i-th convolution kernel, ⊗ denotes the convolution operation, and X_r is the r-th input channel image.
Further, the tumor localization network further comprises a rectified linear unit ReLU that applies a nonlinear transformation to each value of the output feature map Z_i generated by a convolution kernel, the rectified linear unit ReLU being defined as follows:
f(x)=max(0,x)
where f (x) represents the rectified linear unit function, and x is an input value.
Further, the step S13 specifically includes: upsampling the result of convolution layer seven by a factor of 2, then adding and fusing it with the fourth pooling layer to obtain fusion layer 1; upsampling fusion layer 1 by 2 and fusing it with the third pooling layer to obtain fusion layer 2; upsampling fusion layer 2 by 2 and fusing it with the second pooling layer to obtain fusion layer 3; upsampling fusion layer 3 by 2 and fusing it with the first pooling layer to obtain fusion layer 4; and finally upsampling fusion layer 4 by 2 to obtain a feature map of the same size as the original image. This feature map is used to classify each pixel as tumor or normal tissue, generating 2 pixel class prediction score maps, and the class with the higher prediction value is taken as the final class of the pixel.
Further, in step S14, two convolution layer groups are each provided with 3 convolution layers, the number of convolution kernels of the convolution layers is 64 and 128, respectively, the convolution kernel sizes of all the convolution layers are 3 × 3 and the step size is 1, the kernel size of each pooling layer is 3 × 3 and the step size is 2, the sizes of the three fully-connected layers are 256, 128, and 4, respectively, where 4 represents four categories of an edema area, a non-enhanced tumor area, an enhanced tumor area, and a necrosis area of a tumor.
Further, the intra-tumor classification network further includes a nonlinear excitation unit Leaky ReLU, which applies a nonlinear transformation to each value in the output feature map generated by a convolution kernel, the nonlinear excitation unit Leaky ReLU being defined as follows:
f(z)=max(0,z)+αmin(0,z)
where f (z) represents the nonlinear excitation unit function, z is an input value, and α is the Leaky parameter.
Further, in the step S14, the Softmax function is defined as follows:
Y_k = exp(O_k) / Σ_{c=1}^{C} exp(O_c)

wherein O_k is the value of the k-th neuron output by the intra-tumor classification network, Y_k is the probability that the input image block belongs to the k-th class, and C is the number of classes.
Further, in the steps S22 and S25, an L2 regularization term is added to the objective function, so that the final objective function is obtained as follows:
L_final = L + (λ/(2Q)) Σ_{q=1}^{Q} θ_q^2

wherein λ is a regularization factor, Q is the number of model parameters, and θ_q are the network model parameters.
Further, in the steps S23 and S26, the specific optimization process is as follows:
g_t = ∇_θ L(θ_{t-1})

m_t = μ·m_{t-1} - η_t·g_t

θ_t = θ_{t-1} + m_t

wherein the subscript t is the iteration number, θ is a network model parameter, L(θ_{t-1}) is the loss function evaluated at the network parameters θ_{t-1}, g_t, m_t and μ are respectively the gradient, the momentum and the momentum coefficient, and η_t is the learning rate.

Compared with the prior art, the MRI brain tumor localization and intratumoral segmentation method based on the deep cascade convolution network has the following advantages:
1. constructing a cascade mixed neural network consisting of a full convolution neural network (FCN) and a classification Convolution Neural Network (CNN), realizing automatic positioning of the two-stage hierarchical MRI brain tumor and accurate segmentation of an intratumoral structure, firstly quickly positioning a complete tumor region from the MRI image by adopting a full convolution network method, and then further segmenting the complete tumor into an edema region, a non-enhanced tumor region, an enhanced tumor region and a necrosis region by adopting an image block classification method;
2. in the tumor positioning stage, all the subareas in the tumor are merged and taken as a whole for segmentation, so that the problem of sample imbalance among the subareas of the tumor and between the subareas of the tumor and normal tissues is relieved;
3. when the subareas in the tumor are segmented, only the pixels in the tumor need to be segmented by adopting an image block method, so that the number of block classifications is reduced, and the segmentation speed is improved;
4. when the intra-tumor classification net is trained, the same number of image blocks can be extracted from each class for training, the problem of unbalanced sub-region samples in the tumor can be solved, pixels of different classes can be trained to the same degree, and therefore more accurate intra-tumor segmentation boundaries can be obtained, and the intra-tumor sub-regions can be segmented more accurately;
5. the intratumoral subregion segmentation network adopts a small convolution kernel and has a deeper network structure, the nonlinear conversion capability of the network is improved under the condition of ensuring that the parameter quantity of the model is not increased, and the image block classification characteristics with richer levels and stronger identifiability are generated, so that the image block classification accuracy is improved.
Drawings
FIG. 1 is a schematic flow chart of the MRI brain tumor localization and intratumoral segmentation method based on the deep cascade convolution network provided by the invention.
FIG. 2 is a schematic diagram of a deep cascade convolution neural network segmentation model provided by the present invention.
Fig. 3 is a schematic diagram of an intra-tumor subregion segmentation network model based on image block classification according to the present invention.
Detailed Description
In order to make the technical means, the creation characteristics, the achievement purposes and the effects of the invention easy to understand, the invention is further explained below by combining the specific drawings.
In the description of the present invention, it is to be understood that the terms "longitudinal", "radial", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc. indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Referring to fig. 1 to 3, the present invention provides an MRI brain tumor localization and intratumoral segmentation method based on a deep cascade convolution network, which includes the following steps:
s1, building a deep cascade convolution neural network segmentation model:
s11, the deep cascade convolution neural network is composed of a tumor localization network and an intratumoral classification network, the tumor localization network is suitable for inputting the four-modality MRI images of FLAIR, T1, T1c and T2 and outputting a binary image of a tumor candidate region and normal tissues, and the intratumoral classification network is suitable for inputting the tumor candidate region output by the tumor localization network and outputting a segmentation result of intratumoral subareas.
S12, the tumor positioning network is composed of a full convolution network, which comprises five convolution layer groups (first to fifth), five pooling layers (first to fifth), a convolution layer six and a convolution layer seven, wherein the first pooling layer follows the first convolution layer group, the second pooling layer follows the second convolution layer group, and so on, with the fifth pooling layer following the fifth convolution layer group, and convolution layer six and convolution layer seven following the fifth pooling layer in sequence; the number of input channels of the tumor localization network is 4, representing the four modalities of MRI images.
TABLE 1 tumor location network model structure hyper-parameter table
Layer group        Conv layers  Kernels  Kernel size  Stride
Conv group 1       2            64       3 × 3        1
Conv group 2       2            128      3 × 3        1
Conv group 3       3            256      3 × 3        1
Conv group 4       3            512      3 × 3        1
Conv group 5       3            512      3 × 3        1
Pooling 1 to 5     -            -        2 × 2        2
Conv layer 6       1            4096     1 × 1        1
Conv layer 7       1            4096     1 × 1        1
As a specific example, referring to Table 1 above, in step S12 the first and second convolution layer groups each contain 2 convolution layers, with 64 and 128 convolution kernels respectively; the third, fourth and fifth convolution layer groups each contain 3 convolution layers, with 256, 512 and 512 convolution kernels respectively; all convolution layers use 3 × 3 kernels with stride 1; each pooling layer uses a 2 × 2 kernel with stride 2; and convolution layer six and convolution layer seven each have 4096 kernels of size 1 × 1 with stride 1.
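As a concrete reference, the following PyTorch sketch assembles the Table 1 backbone (a minimal rendering under the stated hyper-parameters; the class and helper names are assumptions, not the patent's code). It keeps the five pooled feature maps that the skip fusion of step S13 consumes.

import torch
import torch.nn as nn

def conv_group(in_ch, out_ch, n_layers):
    """n_layers 3 x 3 convolutions (stride 1, padding 1), each followed by ReLU."""
    layers = []
    for i in range(n_layers):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class LocalizationBackbone(nn.Module):
    """Five VGG-style conv groups (2-2-3-3-3 layers, 64/128/256/512/512 kernels),
    each followed by 2 x 2 max pooling, then 1 x 1 conv6 and conv7 (4096 kernels)."""
    def __init__(self, in_channels=4):           # FLAIR, T1, T1c, T2 as channels
        super().__init__()
        cfg = [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)]
        groups, ch = [], in_channels
        for out_ch, n in cfg:
            groups.append(conv_group(ch, out_ch, n))
            ch = out_ch
        self.groups = nn.ModuleList(groups)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv6 = nn.Sequential(nn.Conv2d(512, 4096, 1), nn.ReLU(inplace=True))
        self.conv7 = nn.Sequential(nn.Conv2d(4096, 4096, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        pooled = []                               # pool1..pool5, kept for skip fusion
        for g in self.groups:
            x = self.pool(g(x))
            pooled.append(x)
        return self.conv7(self.conv6(x)), pooled

# A 240 x 240 four-modality slice yields a 7 x 7 high-level map (five 2x poolings, floored).
feats, skips = LocalizationBackbone()(torch.randn(1, 4, 240, 240))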
As a specific embodiment, the output feature map Z_i corresponding to any convolution kernel is calculated using the following formula:

Z_i = f( Σ_{r=1}^{k} W_{ir} ⊗ X_r + b_i )    (1)

wherein f is a nonlinear excitation function, b_i is the bias term of the i-th convolution kernel, r is the input channel index, k is the number of input channels, W_ir is the r-th channel weight matrix of the i-th convolution kernel, ⊗ denotes the convolution operation, and X_r is the r-th input channel image.
To improve the nonlinear representation capability of the network, a rectified linear unit ReLU (Rectified Linear Unit) is adopted as the function f in formula (1); the ReLU serves as the activation function of the full convolution network and applies a nonlinear transformation to each value of the output feature map Z_i generated by a convolution kernel. The rectified linear unit ReLU is defined as follows:
f(x)=max(0,x) (2)
where f (x) represents the rectified linear unit function, and x is an input value.
Meanwhile, the output feature map of a convolution layer may contain a large amount of redundant information, so a max pooling operation with size 2 × 2 and stride 2 is applied after each convolution layer group to eliminate redundant features; that is, the pooling layer reduces the dimensionality of the convolution output feature map, which shrinks its size, enlarges the receptive field and improves the translation invariance of the image.
S13, jump connections are adopted in the tumor positioning network: the high-level semantic features output by convolution layer seven are upsampled by a factor of 2 and then fused layer by layer with the pooled low-level detail features, and the final fused features are used to accurately predict the pixel classes. Specifically: the result of convolution layer seven is upsampled by 2 and added to the fourth pooling layer to obtain fusion layer 1; fusion layer 1 is upsampled by 2 and added to the third pooling layer to obtain fusion layer 2; fusion layer 2 is upsampled by 2 and added to the second pooling layer to obtain fusion layer 3; fusion layer 3 is upsampled by 2 and added to the first pooling layer to obtain fusion layer 4; finally, fusion layer 4 is upsampled by 2 to obtain a feature map of the same size as the original image. This feature map is used to classify each pixel as tumor or normal tissue, generating 2 (one per class) pixel class prediction score maps, and the class with the higher prediction value is taken as the final class of the pixel.
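A minimal sketch of the layer-by-layer fusion just described. The patent text says the upsampled features are "added and fused" with the fourth through first pooling layers; since the channel counts differ (4096 versus 512/256/128/64), this sketch follows the common FCN convention of first projecting every feature map to the 2 output classes with a 1 × 1 convolution. That projection, the bilinear interpolation mode and all names are assumptions, not patent text.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipFusionHead(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.score7 = nn.Conv2d(4096, n_classes, 1)      # scores conv7's output
        self.score_skips = nn.ModuleList(                # scores pool4..pool1
            [nn.Conv2d(c, n_classes, 1) for c in (512, 256, 128, 64)])

    def forward(self, conv7_out, pooled):
        # pooled = [pool1, pool2, pool3, pool4, pool5] from the backbone
        x = self.score7(conv7_out)
        for score, skip in zip(self.score_skips,
                               (pooled[3], pooled[2], pooled[1], pooled[0])):
            x = F.interpolate(x, size=skip.shape[2:], mode="bilinear",
                              align_corners=False)       # 2x upsampling
            x = x + score(skip)                          # fusion layers 1 to 4
        # final 2x upsampling back to the original image size
        return F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)

head = SkipFusionHead()
conv7 = torch.randn(1, 4096, 7, 7)
pooled = [torch.randn(1, c, s, s) for c, s in
          zip((64, 128, 256, 512, 512), (120, 60, 30, 15, 7))]
binary_map = head(conv7, pooled).argmax(dim=1)           # per-pixel tumor / normal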
S14, where the intra-tumor classification network includes two convolution layer groups, two pooling layers, three full-connection layers, and a Softmax classification layer, and as shown in fig. 3, each convolution layer group is followed by one pooling layer, and the three full-connection layers and the Softmax classification layer are sequentially disposed after the last pooling layer.
As a specific example, please refer to table 2 below, in step S14, each of two convolution layer groups is provided with 3 convolution layers, the number of convolution kernels of the convolution layers is 64 and 128, respectively, the convolution kernel sizes of all the convolution layers are 3 × 3 and the step size is 1, the kernel size of each pooling layer is 3 × 3 and the step size is 2, the sizes of the three fully-connected layers are 256, 128 and 4, respectively, where 4 represents four categories of an edema area, a non-enhanced tumor area, an enhanced tumor area and a necrosis area of a tumor; the intra-tumor classification net inputs a 33 × 33 four-mode image block with a tumor pixel as a center, and outputs a probability vector of the pixel on four classes.
Table 2. Hyper-parameters of the intra-tumor classification network model structure

Layer              Configuration
Conv group 1       3 conv layers, 64 kernels, 3 × 3, stride 1
Conv group 2       3 conv layers, 128 kernels, 3 × 3, stride 1
Pooling 1 and 2    3 × 3, stride 2
FC1                256 neurons
FC2                128 neurons
FC3                4 neurons (edema, non-enhancing tumor, enhancing tumor, necrosis)
Softmax            probability over the 4 intratumoral classes
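A PyTorch sketch of the Table 2 classifier under the stated hyper-parameters (the names and the derived flattened size are assumptions, not patent code): with a 33 × 33 input, the two 3 × 3/stride-2 poolings leave a 7 × 7 × 128 map, i.e. 128 * 7 * 7 = 6272 flattened features; the 0.50 Dropout on FC1 and FC2 anticipates step S26.

import torch
import torch.nn as nn

class IntratumoralNet(nn.Module):
    def __init__(self, in_channels=4, n_classes=4, alpha=0.33):
        super().__init__()
        def group(cin, cout):
            layers = []
            for i in range(3):                    # 3 conv layers per group
                layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                           nn.LeakyReLU(alpha)]
            return nn.Sequential(*layers)
        self.features = nn.Sequential(
            group(in_channels, 64), nn.MaxPool2d(3, stride=2),   # 33 -> 16
            group(64, 128),         nn.MaxPool2d(3, stride=2))   # 16 -> 7
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 7 * 7, 256), nn.LeakyReLU(alpha), nn.Dropout(0.5),
            nn.Linear(256, 128),         nn.LeakyReLU(alpha), nn.Dropout(0.5),
            nn.Linear(128, n_classes))            # logits; softmax applied below

    def forward(self, x):
        return self.classifier(self.features(x))

# Probability vector over {edema, non-enhancing, enhancing, necrosis} for one patch.
probs = torch.softmax(IntratumoralNet()(torch.randn(1, 4, 33, 33)), dim=1)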
As a specific embodiment, the output feature map Z_i corresponding to any convolution kernel in the intra-tumor classification network is also calculated by formula (1), but the nonlinear excitation function is a Leaky ReLU. The intra-tumor classification network therefore further includes a nonlinear excitation unit Leaky ReLU, which applies a nonlinear transformation to each value of the output feature map Z_i generated by a convolution kernel and is defined as follows:

f(z)=max(0,z)+αmin(0,z) (3)

wherein f(z) is the nonlinear excitation unit function, z is an input value, and α is the Leaky parameter; in one embodiment, α is set to 0.33.
As a specific example, in the step S14, in order to solve the multi-classification problem, the output of the third full-connectivity layer FC3 is converted into a probability distribution using a Softmax function, which is defined as follows:
Y_k = exp(O_k) / Σ_{c=1}^{C} exp(O_c)    (4)

wherein O_k is the value of the k-th neuron output by the intra-tumor classification network, Y_k is the probability that the input image block belongs to the k-th class, and C is the number of classes; as described above, the intra-tumor classification network outputs four classes, representing the edema, non-enhancing tumor, enhancing tumor and necrosis regions of the tumor.
S2, model training and parameter optimization: carrying out supervised training on the deep cascade convolution neural network segmentation model by using the expanded labeling data, designing objective function optimization network parameters, and generating an optimal segmentation model, wherein the method specifically comprises the following steps:
s21, the standardized and expanded whole image data set is processed according to the ratio of 8: 1: the proportion of 1 is divided into a training set, a verification set and a test set, and the whole image of FLAIR, T1, T1c and T2 in the same brain section is used as the four-channel input of a tumor localization network, as an implementation mode, the inventor of the present application obtains 274 patient data with segmentation labels in total, each mode comprises 155 sequence images, so that 274 × 155-42470 four-mode image data samples are obtained, and the training set, the verification set and the test set are 339760 samples, 42470 samples respectively after expansion.
S22, adopting a classification cross entropy loss function as an optimization target, wherein the target function is defined as follows:
L = -(1/S) Σ_{s=1}^{S} Σ_{c=1}^{C} Y'_{s,c} log(Y_{s,c})    (5)

wherein Y' is the segmentation label, Y is the prediction probability, C is the number of pixel classes, and S is the number of image pixels; in one embodiment, C = 2 and S = 240 × 240 = 57,600.
As a preferred embodiment, in order to prevent overfitting, in step S22, an L2 regularization term is further added to the objective function, so that the final objective function is obtained as follows:
L_final = L + (λ/(2Q)) Σ_{q=1}^{Q} θ_q^2    (6)

wherein λ is a regularization factor, Q is the number of model parameters, and θ_q are the network model parameters.
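A minimal sketch of the regularized objective of formula (6), assuming the λ/(2Q) scaling reconstructed above and an illustrative λ value; in PyTorch practice the same penalty is more often applied through the optimizer's weight_decay argument.

import torch
import torch.nn.functional as F

def regularized_loss(logits, targets, model, lam=1e-4):
    ce = F.cross_entropy(logits, targets)               # categorical cross entropy (5)
    theta = torch.cat([p.reshape(-1) for p in model.parameters()])
    q = theta.numel()                                   # Q, the number of parameters
    return ce + lam / (2 * q) * theta.pow(2).sum()      # L2 term of formula (6)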
S23, the objective function is optimized by a stochastic gradient descent algorithm, and the tumor localization network model parameters are updated by an error back propagation algorithm; the specific optimization process is as follows:
g_t = ∇_θ L(θ_{t-1})    (7)

m_t = μ·m_{t-1} - η_t·g_t    (8)

θ_t = θ_{t-1} + m_t    (9)

wherein the subscript t is the iteration number; θ is a network model parameter, corresponding to θ in formula (6); L(θ_{t-1}) is the loss function evaluated at the network parameters θ_{t-1}; g_t, m_t and μ are respectively the gradient, the momentum and the momentum coefficient; and η_t is the learning rate. In one embodiment, in step S23 the momentum coefficient μ is 0.99 and the initial learning rate is η_t = 1e-8, decreased by a factor of 10 every 1000 iterations until it reaches 1e-10.
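The schedule above can be approximated as follows (a sketch with a placeholder model and data; note that torch.optim.SGD folds the learning rate into its momentum update slightly differently from formulas (7) to (9), so this reproduces the schedule rather than the exact update rule):

import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 2)                      # stand-in for the localization net
opt = torch.optim.SGD(model.parameters(), lr=1e-8, momentum=0.99)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=1000, gamma=0.1)

x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
for step in range(3000):
    opt.zero_grad()
    F.cross_entropy(model(x), y).backward()
    opt.step()
    if opt.param_groups[0]["lr"] > 1e-10:           # stop decaying at 1e-10
        sched.step()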
S24, the extracted MRI image block data set is divided into a training set, a verification set and a test set at a ratio of 8:1:1, and the FLAIR, T1, T1c and T2 four-modality image blocks of the same brain slice are taken as the four-channel input of the intra-tumor classification network. As an embodiment, the same number of 33 × 33 image blocks is extracted from the expanded data set for each class, 40,000,000 in total, i.e. 10,000,000 samples per class; the data set is divided into 10 equal groups by stratified sampling and then split at a ratio of 8:1:1 into a training set, a verification set and a test set of 32,000,000, 4,000,000 and 4,000,000 samples respectively.
S25, a categorical cross entropy loss function is adopted as the optimization target; the target function is the same as formula (5) in step S22, where C is the number of tumor classification classes and S is the number of image block samples in a batch; in one embodiment, C = 4 and S = 256. Likewise, to prevent overfitting, an L2 regularization term is added to the objective function, giving the final objective function shown in formula (6) of step S22.
S26, the objective function is optimized by the stochastic gradient descent algorithm and the intra-tumor classification network model parameters are updated by the error back propagation algorithm, the optimization process using formulas (7), (8) and (9) of step S23; in one embodiment, the momentum coefficient μ in this step is 0.9 and the initial learning rate is η_t = 1e-6, decreased by a factor of 10 every 1000 iterations until it reaches 1e-8. Meanwhile, to prevent the network from overfitting during intra-tumor classification network training in step S26, Dropout regularization is applied to the first full connection layer FC1 and the second full connection layer FC2 of the three full connection layers, with the Dropout rate set to 0.50.
S3, rapid localization and intratumoral segmentation of multi-modal MRI brain tumors, comprising:
s31, inputting the preprocessed standardized four-modality MRI image as a four-channel into the trained and optimized tumor positioning network in the step S2, and automatically positioning and outputting a binary segmentation map comprising a tumor area and a non-tumor area;
s32, inputting the image block of the four-mode image with neighborhood size of 33 x 33 and taking the tumor area pixel as the center into the classification network in the tumor after training and optimization in the step 3, predicting the classification of the pixel, and predicting the tumor pixels one by one in a sliding window mode to finally obtain a segmentation map of the sub-area in the tumor; the sliding window method is well known to those skilled in the art, and is not described herein;
and S33, overlapping the segmentation map of the intratumoral subarea on the original MRI image to obtain the final MRI brain tumor positioning and segmentation map.
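The sliding-window prediction of step S32 can be sketched as follows (function and variable names are assumptions): every pixel flagged by the localization network yields one 33 × 33 four-channel patch, patches are classified in batches, and each predicted intratumoral label is written back at its center pixel.

import torch

def segment_intratumoral(volume, tumor_mask, net, patch=33, batch=256):
    """volume: (4, H, W) normalized modalities; tumor_mask: (H, W) bool tensor."""
    half = patch // 2
    padded = torch.nn.functional.pad(volume, (half, half, half, half))
    coords = tumor_mask.nonzero()                   # (N, 2) tumor pixel coordinates
    seg = torch.zeros_like(tumor_mask, dtype=torch.long)
    net.eval()
    with torch.no_grad():
        for start in range(0, len(coords), batch):
            chunk = coords[start:start + batch]
            patches = torch.stack([padded[:, i:i + patch, j:j + patch]
                                   for i, j in chunk.tolist()])
            labels = net(patches).argmax(dim=1)     # 4-way intratumoral class
            seg[chunk[:, 0], chunk[:, 1]] = labels
    return seg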
As a preferred embodiment, in order to better provide the image data input support for the related steps, the MRI brain tumor localization and intratumoral segmentation method based on the depth cascade convolution network further comprises the following steps:
s4, brain tumor multi-modality MRI image preprocessing, the preprocessed image data being suitable for the step S21, S24 and S31 inputs, which includes:
s41, performing offset field correction operation on the multi-modality MRI image, specifically performing offset field correction operation by adopting an N4ITK method;
s42, extracting MRI image slices of four modalities, namely FLAIR, T1, T1c and T2, and removing the gray values of the highest 1 percent and the lowest 1 percent in each MRI image slice;
s43, performing data normalization on the gray-level values of each MRI image according to the following formula:
X'(i,j) = (X(i,j) - X̄) / X_s

wherein X(i,j) is the gray value at row i, column j of slice X, X̄ and X_s are respectively the mean and standard deviation of slice X, and X'(i,j) is the normalized gray value of X(i,j); a code sketch of steps S42 and S43 appears after step S45 below;
s44, using horizontal flipping, vertical flipping, cutting after amplifying 1/8, rotating by 45 °, 90 °, 135 °, 180 °, 225 °, 270 °, 315 ° data expansion techniques to increase the training data sample by 10 times of the original value, wherein the aforementioned specific data expansion techniques are well known to those skilled in the art and are not described herein;
and S45, randomly extracting 33 × 33 image blocks centered on tumor pixels from the expanded data set, with the same number extracted for each tumor class, and dividing the image block data set equally into 10 groups by stratified sampling so that the proportion of each class of image blocks is the same in every group.
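A NumPy sketch of steps S42 and S43, reading "removing the highest and lowest 1% of gray values" as percentile clipping (one common interpretation); the N4ITK bias-field correction of step S41 is assumed to have been applied beforehand, for example with SimpleITK's N4 bias field correction filter.

import numpy as np

def normalize_slice(x):
    lo, hi = np.percentile(x, [1, 99])
    x = np.clip(x, lo, hi)                      # clip highest/lowest 1% intensities
    return (x - x.mean()) / (x.std() + 1e-8)    # z-score normalization of step S43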
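The balanced, stratified sampling of step S45 can be sketched as follows (helper names are assumptions; each class is presumed to contain at least per_class labeled pixels): equal numbers of tumor-centered patch coordinates are drawn per class, then split into 10 groups that preserve the class proportions.

import numpy as np

def sample_balanced_patches(label_map, per_class, classes=(1, 2, 3, 4), seed=0):
    rng = np.random.default_rng(seed)
    rows = []
    for c in classes:                           # same number of patches per class
        ys, xs = np.nonzero(label_map == c)
        idx = rng.choice(len(ys), size=per_class, replace=False)
        rows.append(np.stack([ys[idx], xs[idx], np.full(per_class, c)], axis=1))
    data = np.concatenate(rows)
    groups = [[] for _ in range(10)]
    for c in classes:                           # stratified split into 10 groups
        part = data[data[:, 2] == c]
        rng.shuffle(part)
        for g, piece in enumerate(np.array_split(part, 10)):
            groups[g].append(piece)
    return [np.concatenate(g) for g in groups]  # (y, x, class) rows per group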
Compared with the prior art, the MRI brain tumor localization and intratumoral segmentation method based on the deep cascade convolution network has the following advantages:
1. constructing a cascade mixed neural network consisting of a full convolution neural network (FCN) and a classification Convolution Neural Network (CNN), realizing automatic positioning of the two-stage hierarchical MRI brain tumor and accurate segmentation of an intratumoral structure, firstly quickly positioning a complete tumor region from the MRI image by adopting a full convolution network method, and then further segmenting the complete tumor into an edema region, a non-enhanced tumor region, an enhanced tumor region and a necrosis region by adopting an image block classification method;
2. in the tumor positioning stage, all the subareas in the tumor are merged and taken as a whole for segmentation, so that the problem of sample imbalance among the subareas of the tumor and between the subareas of the tumor and normal tissues is relieved;
3. when the subareas in the tumor are segmented, only the pixels in the tumor need to be segmented by adopting an image block method, so that the number of block classifications is reduced, and the segmentation speed is improved;
4. when the intra-tumor classification net is trained, the same number of image blocks can be extracted from each class for training, the problem of unbalanced sub-region samples in the tumor can be solved, pixels of different classes can be trained to the same degree, and therefore more accurate intra-tumor segmentation boundaries can be obtained, and the intra-tumor sub-regions can be segmented more accurately;
5. the intratumoral subregion segmentation network adopts a small convolution kernel and has a deeper network structure, the nonlinear conversion capability of the network is improved under the condition of ensuring that the parameter quantity of the model is not increased, and the image block classification characteristics with richer levels and stronger identifiability are generated, so that the image block classification accuracy is improved.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and all such modifications should be covered by the claims of the present invention.

Claims (10)

1. The MRI brain tumor localization and intratumoral segmentation method based on the deep cascade convolution network is characterized by comprising the following steps of:
s1, building a deep cascade convolution neural network segmentation model:
s11, the deep cascade convolution neural network is composed of a tumor localization network and an intratumoral classification network, the tumor localization network is suitable for inputting FLAIR, T1, T1c and T2 four-modality MRI images and outputting a binary image of a tumor candidate region and normal tissues, and the intratumoral classification network is suitable for inputting the tumor candidate region output by the tumor localization network and outputting a segmentation result of intratumoral subareas;
s12, the tumor positioning network is composed of a full convolution network and comprises five convolution layer groups (first to fifth), five pooling layers (first to fifth), a convolution layer six and a convolution layer seven, wherein the first pooling layer follows the first convolution layer group, the second pooling layer follows the second convolution layer group, and so on, with the fifth pooling layer following the fifth convolution layer group, and convolution layer six and convolution layer seven following the fifth pooling layer in sequence;
s13, adopting jump connection in the tumor positioning network, performing 2 times upsampling on the high-level semantic features output by convolution layer seven, fusing them layer by layer with the pooled low-level detail features, and accurately predicting the pixel class with the final fused features, wherein the jump connection specifically comprises: upsampling the result of convolution layer seven by 2 and adding and fusing it with the fourth pooling layer to obtain fusion layer 1; upsampling fusion layer 1 by 2 and adding and fusing it with the third pooling layer to obtain fusion layer 2; upsampling fusion layer 2 by 2 and adding and fusing it with the second pooling layer to obtain fusion layer 3; upsampling fusion layer 3 by 2 and adding and fusing it with the first pooling layer to obtain fusion layer 4; and finally upsampling fusion layer 4 by 2 to obtain a feature map of the same size as the original image; the feature map is used to classify the pixels, obtaining 2 pixel class prediction score maps, and the class with the higher prediction value is taken as the final class of the pixel;
s14, the intratumoral classification network consists of two convolution layer groups, two pooling layers, three full-connection layers and one Softmax classification layer, wherein each convolution layer group is followed by one pooling layer, and the three full-connection layers and the one Softmax classification layer are sequentially arranged behind the last pooling layer;
s2, model training and parameter optimization: carrying out supervised training on the deep cascade convolution neural network segmentation model by using the expanded labeling data, designing objective function optimization network parameters, and generating an optimal segmentation model, wherein the method specifically comprises the following steps:
s21, the standardized and expanded whole image data set is divided into a training set, a verification set and a test set at a ratio of 8:1:1, and the whole images of the four modalities FLAIR, T1, T1c and T2 of the same brain slice are taken as the four-channel input of the tumor positioning network;
s22, adopting a classification cross entropy loss function as an optimization target, wherein the target function is defined as follows:
L = -(1/S) Σ_{s=1}^{S} Σ_{c=1}^{C} Y'_{s,c} log(Y_{s,c})

wherein Y' is the segmentation label, Y is the prediction probability, C is the number of pixel classes, and S is the number of image pixels;
s23, optimizing the target function by adopting a stochastic gradient descent algorithm, and updating the tumor localization network model parameters by adopting an error back propagation algorithm;
s24, the extracted MRI image block data set is divided into a training set, a verification set and a test set at a ratio of 8:1:1, and the FLAIR, T1, T1c and T2 four-modality image blocks of the same brain slice are taken as the four-channel input of the intra-tumor classification network;
s25, adopting the classification cross entropy loss function in the step S22 as an optimization target, wherein C represents the number of tumor classification categories, and S represents the number of image block samples in the batch;
s26, optimizing the target function by the stochastic gradient descent algorithm of step S23 and updating the intra-tumor classification network model parameters by the error back propagation algorithm, wherein, when training the intra-tumor classification network, Dropout regularization with a rate of 0.50 is applied to the first and second of the three full connection layers;
s3, rapid localization and intratumoral segmentation of multi-modal MRI brain tumors, comprising:
s31, inputting the preprocessed standardized four-modality MRI image as a four-channel into the trained and optimized tumor positioning network in the step S2, and automatically positioning and outputting a binary segmentation map comprising a tumor area and a non-tumor area;
s32, inputting the four-mode image blocks with the tumor area pixels as the center into the intra-tumor classification network trained and optimized in the step S2, predicting the classification of the pixels, predicting the tumor pixels one by one in a sliding window mode, and finally obtaining an intra-tumor subregion segmentation map;
and S33, overlapping the segmentation map of the intratumoral subarea on the original MRI image to obtain the final MRI brain tumor positioning and segmentation map.
2. The deep cascade convolution network based MRI brain tumor localization and intratumoral segmentation method of claim 1, further comprising the steps of:
s4, brain tumor multi-modality MRI image preprocessing, the preprocessed image data being suitable for the step S21, S24 and S31 inputs, which includes:
s41, carrying out offset field correction operation on the multi-mode MRI image;
s42, extracting MRI image slices of four modalities, namely FLAIR, T1, T1c and T2, and removing the gray values of the highest 1 percent and the lowest 1 percent in each MRI image slice;
s43, performing data normalization on the gray-level values of each MRI image according to the following formula:
X'(i,j) = (X(i,j) - X̄) / X_s

wherein X(i,j) is the gray value at row i, column j of slice X, X̄ and X_s are respectively the mean and standard deviation of slice X, and X'(i,j) is the normalized gray value of X(i,j);
s44, applying data expansion techniques to the standardized gray level images to increase the training data samples to 10 times the initial amount;
and S45, randomly extracting 33 × 33 image blocks centered on tumor pixels from the expanded data set, with the same number extracted for each tumor class, and dividing the image block data set equally into 10 groups by stratified sampling so that the proportion of each class of image blocks is the same in every group.
3. The MRI brain tumor localization and intratumoral segmentation method based on the deep cascade convolution network of claim 1 or 2, wherein in step S12, the first and second convolution layer groups each contain 2 convolution layers, with 64 and 128 convolution kernels respectively; the third, fourth and fifth convolution layer groups each contain 3 convolution layers, with 256, 512 and 512 convolution kernels respectively; all convolution layers use 3 × 3 kernels with stride 1; each pooling layer uses a 2 × 2 kernel with stride 2; and convolution layer six and convolution layer seven each have 4096 kernels of size 1 × 1 with stride 1.
4. The MRI brain tumor localization and intratumoral segmentation method based on the deep cascade convolution network of claim 3, wherein the output feature map Z_i corresponding to any convolution kernel in the method is calculated using the following formula:

Z_i = f( Σ_{r=1}^{k} W_{ir} ⊗ X_r + b_i )

wherein f is a nonlinear excitation function, b_i is the bias term of the i-th convolution kernel, r is the input channel index, k is the number of input channels, W_ir is the r-th channel weight matrix of the i-th convolution kernel, ⊗ denotes the convolution operation, and X_r is the r-th input channel image.
5. The MRI brain tumor localization and intratumoral segmentation method based on the deep cascade convolution network of claim 4, wherein the tumor localization network further comprises a rectified linear unit ReLU that applies a nonlinear transformation to each value of the output feature map Z_i generated by a convolution kernel, the rectified linear unit ReLU being defined as follows:
f(x)=max(0,x)
where f (x) represents the rectified linear unit function, and x is an input value.
6. The MRI brain tumor localization and intratumoral segmentation method based on the deep cascade convolution network of claim 1 or 2, wherein in step S14, there are 3 convolution layers in each of the two convolution layer sets, the number of convolution kernels of the convolution layers is 64 and 128 respectively, the convolution kernels of all the convolution layers have a size of 3 x 3 and a step size of 1, the kernel size of each pooling layer is 3 x 3 and a step size of 2, and the sizes of the three fully-connected layers are 256, 128 and 4 respectively, wherein 4 represents four categories of an edema area, a non-enhanced tumor area, an enhanced tumor area and a necrosis area of the tumor.
7. The depth-cascaded convolutional-network-based MRI brain tumor localization and intratumoral segmentation method of claim 6, wherein the intratumoral classification network further comprises a nonlinear excitation unit Leaky ReLU for nonlinear transformation of each value in the output feature map generated by the convolution kernel, the nonlinear excitation unit Leaky ReLU is defined as follows:
f(z)=max(0,z)+αmin(0,z)
where f (z) represents the nonlinear excitation unit function, z is an input value, and α is the Leaky parameter.
8. The deep cascade convolution network based MRI brain tumor localization and intratumoral segmentation method according to claim 6, wherein in the step S14, the Softmax function is defined as follows:
Y_k = exp(O_k) / Σ_{c=1}^{C} exp(O_c)

wherein O_k is the value of the k-th neuron output by the intra-tumor classification network, Y_k is the probability that the input image block belongs to the k-th class, and C is the number of classes.
9. The MRI brain tumor localization and intratumoral segmentation method based on the deep cascade convolution network as claimed in claim 1 or 2, wherein in the steps S22 and S25, an L2 regularization term is added to the objective function to obtain the final objective function as follows:
L_final = L + (λ/(2Q)) Σ_{q=1}^{Q} θ_q^2

wherein λ is a regularization factor, Q is the number of model parameters, and θ_q are the network model parameters.
10. The deep cascade convolution network-based MRI brain tumor localization and intratumoral segmentation method according to claim 1 or 2, wherein the specific optimization procedures in steps S23 and S26 are as follows:
g_t = ∇_θ L(θ_{t-1})

m_t = μ·m_{t-1} - η_t·g_t

θ_t = θ_{t-1} + m_t

wherein the subscript t is the iteration number, θ is a network model parameter, L(θ_{t-1}) is the loss function evaluated at the network parameters θ_{t-1}, g_t, m_t and μ are respectively the gradient, the momentum and the momentum coefficient, and η_t is the learning rate.
CN201810300057.1A 2017-12-25 2018-04-04 MRI brain tumor positioning and intratumoral segmentation method based on deep cascade convolution network Active CN108492297B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711419945 2017-12-25
CN2017114199457 2017-12-25

Publications (2)

Publication Number Publication Date
CN108492297A CN108492297A (en) 2018-09-04
CN108492297B true CN108492297B (en) 2021-11-19

Family

ID=63314778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810300057.1A Active CN108492297B (en) 2017-12-25 2018-04-04 MRI brain tumor positioning and intratumoral segmentation method based on deep cascade convolution network

Country Status (1)

Country Link
CN (1) CN108492297B (en)

Families Citing this family (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110910335B (en) * 2018-09-15 2023-02-24 北京市商汤科技开发有限公司 Image processing method, image processing device and computer readable storage medium
CN109493317B (en) * 2018-09-25 2020-07-07 哈尔滨理工大学 3D multi-vertebra segmentation method based on cascade convolution neural network
CN109360210B (en) 2018-10-16 2019-10-25 腾讯科技(深圳)有限公司 Image segmentation method, device, computer equipment and storage medium
CN109377505B (en) * 2018-10-29 2021-07-06 哈尔滨理工大学 MRI brain tumor image segmentation method based on multi-feature discrimination
CN109472784A (en) * 2018-10-31 2019-03-15 安徽医学高等专科学校 Mitotic cell recognition method for pathological images based on a cascaded fully convolutional network
CN109410289B (en) * 2018-11-09 2021-11-12 中国科学院精密测量科学与技术创新研究院 Deep-learning reconstruction method for highly undersampled hyperpolarized gas lung MRI
CN109598728B (en) 2018-11-30 2019-12-27 腾讯科技(深圳)有限公司 Image segmentation method, image segmentation device, diagnostic system, and storage medium
CN111292289B (en) * 2018-12-07 2023-09-26 中国科学院深圳先进技术研究院 CT lung tumor segmentation method, device, equipment and medium based on segmentation network
CN109886929B (en) * 2019-01-24 2023-07-18 江苏大学 MRI tumor voxel detection method based on convolutional neural network
CN113728335A (en) * 2019-02-08 2021-11-30 新加坡健康服务有限公司 Method and system for classification and visualization of 3D images
EP4276756A3 (en) * 2019-03-01 2024-02-21 Siemens Healthineers AG Tumor tissue characterization using multi-parametric magnetic resonance imaging
CN109902748A (en) * 2019-03-04 2019-06-18 中国计量大学 Image semantic segmentation method based on fully convolutional neural networks fusing multi-layer information
CN110136133A (en) * 2019-03-11 2019-08-16 嘉兴深拓科技有限公司 Brain tumor segmentation method based on convolutional neural networks
CN110120033A (en) * 2019-04-12 2019-08-13 天津大学 Three-dimensional brain tumor image segmentation method based on an improved U-Net neural network
CN110084823A (en) * 2019-04-18 2019-08-02 天津大学 Three-dimensional brain tumor image segmentation method based on cascaded anisotropic FCNNs
CN110097567A (en) * 2019-04-18 2019-08-06 天津大学 Three-dimensional brain tumor image segmentation method combining improved FCNN and level sets
CN110047068A (en) * 2019-04-19 2019-07-23 山东大学 MRI brain tumor segmentation method and system based on pyramid scene parsing network
CN110097550B (en) * 2019-05-05 2021-02-02 电子科技大学 Medical image segmentation method and system based on deep learning
CN111932486A (en) * 2019-05-13 2020-11-13 四川大学 Brain glioma segmentation method based on 3D convolutional neural network
CN110188754B (en) * 2019-05-29 2021-07-13 腾讯科技(深圳)有限公司 Image segmentation method and device and model training method and device
CN110349170B (en) * 2019-07-13 2022-07-08 长春工业大学 Brain tumor segmentation algorithm based on fully connected CRF cascaded with FCN and K-means
CN110427954A (en) * 2019-07-26 2019-11-08 中国科学院自动化研究所 Multi-region radiomics feature extraction method based on tumor imaging
US10937158B1 (en) * 2019-08-13 2021-03-02 Hong Kong Applied Science and Technology Research Institute Company Limited Medical image segmentation based on mixed context CNN model
CN110504032B (en) * 2019-08-23 2022-09-09 元码基因科技(无锡)有限公司 Method for predicting tumor mutational burden based on image processing of hematoxylin-eosin stained slides
CN110507288A (en) * 2019-08-29 2019-11-29 重庆大学 Visually induced motion sickness detection method based on one-dimensional convolutional neural networks
CN110533683B (en) * 2019-08-30 2022-04-29 东南大学 Radiomics analysis method fusing traditional features and deep features
CN110706214B (en) * 2019-09-23 2022-06-17 东南大学 Three-dimensional U-Net brain tumor segmentation method fusing conditional random fields and residuals
CN111008974A (en) * 2019-11-22 2020-04-14 浙江飞图影像科技有限公司 Multi-model fusion femoral neck fracture region positioning and segmentation method and system
CN111126333B (en) * 2019-12-30 2022-07-26 齐齐哈尔大学 Garbage classification method based on a lightweight convolutional neural network
CN111210909A (en) * 2020-01-13 2020-05-29 青岛大学附属医院 Deep neural network-based rectal cancer T stage automatic diagnosis system and construction method thereof
CN111340767B (en) * 2020-02-21 2023-12-12 四川大学华西医院 Brain tumor scalp positioning image processing method and system
CN111289251A (en) * 2020-02-27 2020-06-16 湖北工业大学 Rolling bearing fine-grained fault identification method
CN111477298B (en) * 2020-04-03 2021-06-15 山东省肿瘤防治研究院(山东省肿瘤医院) Method for tracking tumor position change in radiotherapy process
CN111476802B (en) * 2020-04-09 2022-10-11 山东财经大学 Medical image segmentation and tumor detection method, equipment and readable storage medium
CN111612722B (en) * 2020-05-26 2023-04-18 星际(重庆)智能装备技术研究院有限公司 Low-illumination image processing method based on simplified Unet full-convolution neural network
CN111754530B (en) * 2020-07-02 2023-11-28 广东技术师范大学 Prostate ultrasound image segmentation and classification method
CN112037171B (en) * 2020-07-30 2023-08-15 西安电子科技大学 Multi-mode feature fusion-based multi-task MRI brain tumor image segmentation method
CN111973154B (en) * 2020-08-20 2022-05-24 山东大学齐鲁医院 Multi-point accurate material taking system, method and device for brain tumor
CN112634192B (en) * 2020-09-22 2023-10-13 广东工业大学 Cascaded U-Net brain tumor segmentation method combining wavelet transform
CN112200810B (en) * 2020-09-30 2023-11-14 深圳市第二人民医院(深圳市转化医学研究院) Multi-modal automated ventricle segmentation system and method of use thereof
CN112529915B (en) * 2020-12-17 2022-11-01 山东大学 Brain tumor image segmentation method and system
CN112767417B (en) * 2021-01-20 2022-09-13 合肥工业大学 Multi-modal image segmentation method based on cascaded U-Net network
CN112862761B (en) * 2021-01-20 2023-01-17 清华大学深圳国际研究生院 Brain tumor MRI image segmentation method and system based on deep neural network
CN112837276B (en) * 2021-01-20 2023-09-29 重庆邮电大学 Brain glioma segmentation method based on cascade deep neural network model
CN112927203A (en) * 2021-02-25 2021-06-08 西北工业大学深圳研究院 Postoperative survival prediction method for glioma patients based on multi-sequence MRI global information
CN113159171B (en) * 2021-04-20 2022-07-22 复旦大学 Fine-grained plant leaf image classification method based on adversarial learning
CN113361654A (en) * 2021-07-12 2021-09-07 广州天鹏计算机科技有限公司 Image identification method and system based on machine learning
CN114332547B (en) * 2022-03-17 2022-07-08 浙江太美医疗科技股份有限公司 Medical object classification method and apparatus, electronic device, and storage medium
CN116258671B (en) * 2022-12-26 2023-08-29 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) MR image-based intelligent sketching method, system, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447872A (en) * 2015-12-03 2016-03-30 中山大学 Method for automatically identifying liver tumor type in ultrasonic image
CN106296699A (en) * 2016-08-16 2017-01-04 电子科技大学 Brain tumor segmentation method based on deep neural networks and multi-modal MRI images
CN107220980A (en) * 2017-05-25 2017-09-29 重庆理工大学 Automatic brain tumor segmentation method for MRI images based on a fully convolutional network

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326937B (en) * 2016-08-31 2019-08-09 郑州金惠计算机系统工程有限公司 Crowd density distribution estimation method based on convolutional neural networks
CN106780507B (en) * 2016-11-24 2019-05-10 西北工业大学 Fast sliding-window object detection method based on superpixel segmentation
CN106780498A (en) * 2016-11-30 2017-05-31 南京信息工程大学 Pixel-wise automatic segmentation method for epithelium and stroma tissue based on deep convolutional networks
CN107016681B (en) * 2017-03-29 2023-08-25 浙江师范大学 Brain MRI tumor segmentation method based on fully convolutional networks
CN107274451A (en) * 2017-05-17 2017-10-20 北京工业大学 Insulator detection method and device based on shared convolutional neural networks
CN107464250B (en) * 2017-07-03 2020-12-04 深圳市第二人民医院 Automatic breast tumor segmentation method based on three-dimensional MRI images
CN107463906A (en) * 2017-08-08 2017-12-12 深图(厦门)科技有限公司 Face detection method and device
CN108062756B (en) * 2018-01-29 2020-04-14 重庆理工大学 Image semantic segmentation method based on deep full convolution network and conditional random field

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447872A (en) * 2015-12-03 2016-03-30 中山大学 Method for automatically identifying liver tumor type in ultrasonic image
CN106296699A (en) * 2016-08-16 2017-01-04 电子科技大学 Brain tumor segmentation method based on deep neural networks and multi-modal MRI images
CN107220980A (en) * 2017-05-25 2017-09-29 重庆理工大学 Automatic brain tumor segmentation method for MRI images based on a fully convolutional network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automatic Semantic Segmentation of Brain Gliomas from MRI Images Using a Deep Cascaded Neural Network; Shaoguo Cui et al; Journal of Healthcare Engineering; 2018-03-19; 1-14 *
Brain Tumor Automatic Segmentation Using Fully Convolutional Networks; Shaoguo Cui et al; Journal of Medical Imaging and Health Informatics; 2017-10-31; 1641-1647 *

Also Published As

Publication number Publication date
CN108492297A (en) 2018-09-04

Similar Documents

Publication Publication Date Title
CN108492297B (en) MRI brain tumor positioning and intratumoral segmentation method based on deep cascade convolution network
Shah et al. A robust approach for brain tumor detection in magnetic resonance images using finetuned efficientnet
Graziani et al. Concept attribution: Explaining CNN decisions to physicians
CN108268870B (en) Multi-scale feature fusion ultrasound image semantic segmentation method based on adversarial learning
US20200380695A1 (en) Methods, systems, and media for segmenting images
Khojaste-Sarakhsi et al. Deep learning for Alzheimer's disease diagnosis: A survey
Khan et al. Stomach deformities recognition using rank-based deep features selection
CN112101451B (en) Breast cancer histopathological type classification method based on image blocks screened by a generative adversarial network
CN114730463A (en) Multi-instance learner for tissue image classification
Rai et al. 2D MRI image analysis and brain tumor detection using deep learning CNN model LeU-Net
CN112270666A (en) Non-small cell lung cancer pathological section identification method based on deep convolutional neural network
Ypsilantis et al. Recurrent convolutional networks for pulmonary nodule detection in CT imaging
Doan et al. SONNET: A self-guided ordinal regression neural network for segmentation and classification of nuclei in large-scale multi-tissue histology images
WO2022127500A1 (en) Multiple neural networks-based mri image segmentation method and apparatus, and device
CN113408605A (en) Hyperspectral image semi-supervised classification method based on small sample learning
CN106780453A (en) Brain tumor segmentation method based on deep belief networks
Shen et al. Sparse bayesian learning for identifying imaging biomarkers in AD prediction
CN114600155A (en) Weakly supervised multitask learning for cell detection and segmentation
Kondratenko et al. Artificial neural networks for recognition of brain tumors on MRI images
Morkūnas et al. Machine learning based classification of colorectal cancer tumour tissue in whole-slide images
Tian et al. Radiomics and Its Clinical Application: Artificial Intelligence and Medical Big Data
Eltoukhy et al. Classification of multiclass histopathological breast images using residual deep learning
Tyagi et al. LCSCNet: A multi-level approach for lung cancer stage classification using 3D dense convolutional neural networks with concurrent squeeze-and-excitation module
Sailunaz et al. A survey on brain tumor image analysis
CN115985503B (en) Cancer prediction system based on ensemble learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200518

Address after: No. 12 Tianchen Road, Shapingba District, Chongqing 400000

Applicant after: Chongqing Normal University

Address before: No. 69 Hongguang Avenue, Lijiatuo, Banan District, Chongqing 400054

Applicant before: Chongqing University of Technology

GR01 Patent grant