CN114170244A - Brain glioma segmentation method based on cascade neural network structure - Google Patents

Brain glioma segmentation method based on cascade neural network structure

Info

Publication number
CN114170244A
Authority
CN
China
Prior art keywords
segmentation
tumor
network
edge
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111404516.9A
Other languages
Chinese (zh)
Inventor
白相志 (Bai Xiangzhi)
王元元 (Wang Yuanyuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202111404516.9A priority Critical patent/CN114170244A/en
Publication of CN114170244A publication Critical patent/CN114170244A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Abstract

The invention provides a brain glioma segmentation method based on a cascade neural network structure, which comprises the following steps. Step one: generate a high-precision tumor region segmentation result using a brain glioma segmentation network with a cascade neural network structure. Step two: from the multi-scale residual features and global features, on the one hand use a segmentation and edge detection network to generate a whole-tumor segmentation result and its edge detection result; on the other hand, design a cascade network to generate the tumor core region and tumor enhancement region segmentation results conditioned on the preliminary whole-tumor segmentation result. Step three: construct a loss function to train the accurate brain glioma segmentation network. Output: perform tumor region segmentation on the original multi-modal image using the trained brain glioma segmentation network with the cascade neural network structure. The method can be combined with various medical-image-based application systems, helps improve the segmentation quality of multi-modal images, and has broad market prospects and application value.

Description

Brain glioma segmentation method based on cascade neural network structure
Technical Field
The invention relates to a brain glioma segmentation method based on a cascade neural network structure, and belongs to the fields of digital image processing, pattern recognition and computer vision. Medical image segmentation has broad application prospects in various image-guided interventional diagnosis and targeted radiotherapy systems.
Background
Gliomas are the most common primary brain malignancies, with varying degrees of invasiveness, prognosis and heterogeneous sub-regions. Segmentation of brain gliomas generally refers to segmenting the tumor region from a multi-modal magnetic resonance sequence. Glioma segmentation can effectively extract the multiple heterogeneous regions of the tumor (the whole tumor region, the tumor core region and the tumor enhancement region), thereby helping doctors make accurate judgments. Owing to the noise, blur and low contrast introduced by the imaging process, medical image segmentation is more challenging than segmentation of ordinary color images. In addition, the complexity of brain tissue structure and the variability of the spatial position and morphological size of brain tumors make accurate segmentation of brain gliomas difficult.
Medical image segmentation algorithms are generally divided into traditional machine learning methods and deep learning methods. A typical representative of traditional machine learning methods for medical image segmentation is the region-based approach, which uses color discontinuities to represent the boundary between a region and the target object as an edge, and is effective against problems such as under-segmentation, over-segmentation and erroneous edges. Yang et al. proposed an improved gradient-threshold edge detector. The method introduces basic characteristics of the human visual system and accurately determines local mask regions of edges of arbitrary shape according to the image content; the gradient image is masked with the brightness and activity of the local image before the edge markers are determined. Experimental results show that the edge images obtained by this algorithm agree better with perceptual edge images (see: Yang et al., An improved method to the gradient threshold edge detector based on HVS, Computational Intelligence and Security, Springer Berlin Heidelberg, 2005, 1051-). More recently, Su et al. segmented the carpal bones in hand X-ray images using a multi-stage approach. Foreground regions and edge maps are extracted with adaptive local thresholds and adaptive Canny edge detection, and the edge map and foreground region are integrated through an XOR operation. Over-segmentation is resolved by adding background boundaries to the edge map near the carpal boundary; under-segmentation is handled by adding foreground boundaries to the edge map near the carpal boundary, thereby closing foreground lost to under-segmentation; non-closed and false edges in the edge map are supplemented by the carpal regions obtained from local adaptive thresholding (see: Su L, Fu X, Zhang X, Cheng X, Ma Y, Gan Y, Hu Q., Delineation of carpal bones from hand X-ray images through prior model and integration of region-based and boundary-based segmentations, IEEE Access, 2018;6:19993-20008). Among threshold-based methods, Ilhan et al. use thresholding to diagnose brain tumors in MRI grayscale images; the technique identifies edges using morphology (erosion and dilation) and then subtracts the generated image from the original image to obtain the result (see: Ilhan U, Ilhan A., Brain tumor segmentation based on a new threshold approach, Procedia Computer Science, 2017;120:580-587).
In recent years, with the rapid development of deep learning techniques, some methods based on deep learning are applied to the field of medical image segmentation. Such methods overcome the disadvantages of manual functional extraction, making it possible to build large trainable models that can learn the best effect required for a given task.
To adapt convolutional networks to a variety of test images, Wang et al. proposed a fine-tuning algorithm (see: Wang et al., Interactive medical image segmentation using deep learning with image-specific fine tuning, IEEE Transactions on Medical Imaging, 2018, 1562-). Zhang et al. incorporated multi-scale features into the model through auxiliary classification paths, enabling the network to exploit multi-scale information (see: Zhang et al., Automatic segmentation of acute ischemic stroke from DWI using 3-D fully convolutional DenseNets, IEEE Transactions on Medical Imaging, 2018, 2149-). Punn et al. proposed a three-dimensional brain tumor segmentation framework based on 3D U-Net. The proposed architecture is divided into three parts: multi-modal fusion, tumor extractor and tumor segmenter. The structure fuses the magnetic resonance sequences with deep encoded fusion, learns tumor patterns with a 3D inception U-Net model using the fused modality, and finally decodes the multi-scale extracted features into multiple types of tumor regions (see: N. S. Punn and S. Agarwal, Multi-modality encoded fusion with 3D inception U-Net and decoder model for brain tumor segmentation, Multimedia Tools and Applications, 2021, 30305-30320).
However, most current medical image segmentation methods based on convolutional neural networks extract the region of interest from the gray-level information of the image; edge information is under-exploited, and the inter-class spatial relationships in multi-class segmentation tasks are not considered, so the 3D segmentation is not fine enough and the accuracy is limited. The present invention holds that edge information and inter-class spatial relationships are equally valuable: the former can effectively alleviate the blurred-boundary problem in segmentation, and the latter provides important information for multi-class segmentation. On this basis, the invention proposes a novel brain glioma segmentation method: a brain glioma segmentation method based on a cascade neural network structure. In the invention, the novel network structure extracts global depth features and adopts a cascade structure together with a whole-tumor segmentation and edge detection structure, so that the algorithm makes full use of the inter-class spatial relationships and the edge features, effectively improving the segmentation quality of the multiple tumor classes.
Disclosure of Invention
1. Purpose: in view of the above problems, the present invention aims to provide a brain glioma segmentation method based on a cascade neural network structure for analyzing and studying the image characteristic information of gliomas. The method fully extracts global multi-scale attention features from multi-modal medical images, uses two decoding sequences with a cascade relationship, then exploits the extracted whole-tumor edge feature information to effectively improve the segmentation quality and stability for brain glioma, and finally outputs high-quality whole tumor (WT), tumor core (TC) and tumor enhancement (ET) regions corresponding to the input images.
2. Technical scheme: to achieve the above purpose, the overall idea of the technical scheme of the invention is to adopt a three-dimensional feature extraction network to generate global multi-scale depth features, use a WT segmentation and edge detection network to generate the whole-tumor edge and its segmentation result, then use a cascade segmentation network to generate accurate segmentation results for TC and ET, and use the edge loss and content loss to continuously improve the performance of the glioma segmentation network. The technical ideas of the algorithm are mainly embodied in the following four aspects:
1) a residual module is used to improve the expressive capability and convergence speed of the network;
2) a global multi-scale feature generation module is used to generate multi-level residual features with global attention information;
3) a WT segmentation and edge detection network generates the whole-tumor edge and its segmentation result, and effectively uses the edge loss to improve the segmentation accuracy of the network;
4) a cascade neural network model makes full use of the inter-class relationships and edge features of multi-class segmentation, outputs the TC and ET tissue regions, and reconstructs a high-quality multi-class segmentation result.
The invention relates to a brain glioma segmentation method based on a cascade neural network structure, which comprises the following specific steps:
Step one: extract deep global multi-scale features using a convolutional neural network based on residual modules. First, data augmentation is performed on the four input modalities of the brain tumor to form data blocks; second, multi-scale feature extraction is performed with a multi-level three-dimensional residual feature extraction network; third, global features are extracted from the deepest-level features by a global attention module; finally, the global features and the shallow multi-level features are output along multiple paths to the subsequent multi-class segmentation network and edge generation network;
Step two: generate the final multi-class segmentation results and the whole-tumor edge result from the multi-scale residual features and global features. On the one hand, a WT segmentation and edge detection network is used to generate the whole-tumor segmentation result and its edge detection result; on the other hand, a cascade network is designed to generate the tumor core region and tumor enhancement region segmentation results conditioned on the whole-tumor segmentation result;
Step three: construct a loss function for end-to-end training of the accurate brain glioma segmentation network;
Output: perform tumor region segmentation on the original multi-modal image using the trained brain glioma segmentation network with the cascade neural network structure. After sufficient iterative training of the segmentation network with the training data, the trained tumor segmentation network is obtained and used to extract the multiple classes of tumor tissue.
The first step is as follows:
1.1: feature maps of the four modality images are extracted through a three-dimensional residual feature network. The four modalities comprise T1-weighted imaging (T1), contrast-enhanced T1-weighted imaging (T1ce), T2-weighted imaging (T2) and T2 fluid-attenuated inversion recovery imaging (T2-FLAIR). First, random flipping and random block-cropping data augmentation operations are performed on the multi-modal input data; second, multi-modal features are extracted by a multi-level three-dimensional residual feature network composed of residual modules, and multi-scale features are output; finally, the multi-level three-dimensional scale features are output along multiple paths for reconstructing the multi-class segmentation results and the edge extraction result. Extracting the multi-scale features preserves the texture and spatial information of the images;
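As an illustration of the data augmentation just described, the following is a minimal sketch in Python (NumPy), assuming the four modalities are stacked along the first axis of one array; the patch size, array layout and function name are illustrative assumptions rather than values fixed by the invention.

```python
import numpy as np

def augment_block(volume, crop_size=(128, 128, 128), rng=None):
    """Randomly flip and crop a stacked multi-modal volume.

    volume: array of shape (4, D, H, W) holding the T1, T1ce, T2 and
    T2-FLAIR channels; crop_size is an assumed patch size and must not
    exceed the spatial dimensions of the volume.
    """
    rng = rng or np.random.default_rng()
    # Random flips along each spatial axis (axis 0 is the modality axis).
    for axis in (1, 2, 3):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
    # Random crop of a fixed-size block from the flipped volume.
    d, h, w = crop_size
    _, D, H, W = volume.shape
    z = rng.integers(0, D - d + 1)
    y = rng.integers(0, H - h + 1)
    x = rng.integers(0, W - w + 1)
    return np.ascontiguousarray(volume[:, z:z + d, y:y + h, x:x + w])
```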
1.2: the feature maps are reconstructed by the global attention module. The deepest-level abstract features generated by the three-dimensional residual feature network are extracted and first mapped into two spaces, which are fused to obtain fused features; then the relationships between pixels and the relationships between feature layers are extracted respectively; finally the global features are output.
Wherein, the second step is as follows:
2.1: the segmentation result and edge of the whole tumor are extracted through the WT segmentation and edge detection network. The WT segmentation and edge detection network consists of a shared-parameter decoder and two decoding branches; using the global multi-scale features generated in step one, it decodes them uniformly through the shared decoder and finally splits into two branches to obtain the segmentation result and the edge extraction result of the WT respectively;
2.2: the segmentation results of the tumor core region and tumor enhancement region contained in the whole tumor are extracted through the cascade network. Based on the global multi-scale features generated in step one, the cascade network introduces the whole-tumor segmentation result generated in step 2.1 and produces the final accurate segmentation results of the tumor core region and the tumor enhancement region through fused decoding.
Wherein the third step is as follows:
3.1: the loss function of the accurate brain glioma segmentation network consists of two parts: a region loss composed of the Dice loss and the Tversky loss between the segmentation result and the reference segmentation, and an edge loss between the whole-tumor edge detection result and the reference edge. The loss function of the segmentation network is expressed as L = αL_region + βL_edge, where L_region denotes the region loss, L_edge denotes the edge loss, and α and β are their corresponding weighting coefficients. The expression for the region loss L_region is given as a formula image in the original filing; in it, p_0i denotes the probability that voxel i is tumor, p_1i the probability that voxel i is non-tumor, g_0i = 1 indicates a tumor voxel and g_0i = 0 a non-tumor voxel, g_1i is the opposite of g_0i, N is the number of voxels, γ = 0.7, and ε_0 is a non-zero term set to 1 × 10⁻⁷. The expression for the edge loss L_edge is likewise given as a formula image; in it, y_n denotes the whole-tumor edge detection result output by the network and ŷ_n the corresponding ground truth;
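Since the expressions for L_region and L_edge appear only as formula images in the filing, the following LaTeX block is a hedged reconstruction from the variable definitions above; it uses the standard Tversky form for the region term and a voxel-wise binary cross-entropy for the edge term, which may differ from the exact published formulas.

```latex
% Hedged reconstruction of the loss terms (the originals appear only as images).
L = \alpha L_{region} + \beta L_{edge}

% Region term in the standard Tversky form (assumed):
L_{region} = 1 -
  \frac{\sum_{i=1}^{N} p_{0i}\, g_{0i}}
       {\sum_{i=1}^{N} p_{0i}\, g_{0i}
        + \gamma \sum_{i=1}^{N} p_{0i}\, g_{1i}
        + (1-\gamma) \sum_{i=1}^{N} p_{1i}\, g_{0i}
        + \varepsilon_0}

% Edge term as voxel-wise binary cross-entropy (assumed):
L_{edge} = -\frac{1}{N} \sum_{n=1}^{N}
  \left[ \hat{y}_n \log y_n + (1-\hat{y}_n) \log (1-y_n) \right]
```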
3.2: optimization is performed with the Adaptive Moment Estimation (ADAM) optimizer; the initial learning rate of each brain tumor segmentation network is 1.003 × 10⁻⁴, and the network parameters are adjusted by gradient back-propagation to reduce the corresponding loss function.
A brain glioma segmentation system based on a cascade neural network structure, the basic structural framework and workflow of which are shown in Figure 1, is characterized by comprising:
a three-dimensional global feature extraction module for generating multi-scale features with global information; the three-dimensional global feature extraction module further comprises:
a multi-scale residual module for generating multi-scale depth residual features;
a global feature extraction module for extracting global features based on the deepest-level features generated by the multi-scale residual module;
a multi-class segmentation result generation module for generating high-quality, accurate multi-class tumor tissue regions; the multi-class segmentation result generation module further comprises:
a WT segmentation and edge detection network for generating high-quality tissue segmentation and edges of the whole brain glioma tumor, in which a dovetail (two-branch) structure is designed at the tail of the network and split into two paths, generating the segmentation result and extracting the edges simultaneously;
a cascade segmentation network for generating high-quality tissue segmentation of the glioma core region and enhancement region, in which the whole-tumor segmentation result is introduced before decoding and the tumor core region and tumor enhancement region tissue segmentation results are generated from the global multi-scale features;
a loss function calculation module for calculating the loss function of the accurate brain glioma segmentation network;
a network training module for performing sufficient iterative training of the accurate brain glioma segmentation network to obtain the trained network for extracting the segmentation results and the tumor edge.
The workflow mainly consists of outputting multi-scale features with global information through the three-dimensional global feature extraction module, which comprises the multi-scale residual module and the global feature extraction module; taking the multi-scale features as input and outputting the segmentation result and edge of the WT with the WT segmentation and edge detection network; taking the multi-scale features and the WT segmentation result as input and outputting the TC and ET region tissue segmentation results with the cascade segmentation network; and finally fusing them to obtain the multi-class tumor region segmentation. The multi-class tumor segmentation results and the WT edge are used as constraints to iteratively update the whole network and obtain higher accuracy. A schematic sketch of this data flow is given below.
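The following PyTorch sketch illustrates only the data flow between the modules described above; the sub-modules are deliberately tiny stand-ins (single convolutions), the class and attribute names are hypothetical, and the real system uses the much deeper encoder, decoders and attention module described in the detailed description.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadedGliomaNet(nn.Module):
    """Schematic data flow only: encoder -> shared WT decoder with
    segmentation and edge branches -> cascade branch gated by the WT mask.
    The sub-modules are tiny stand-ins for the real network."""
    def __init__(self, in_ch=4, feat=16):
        super().__init__()
        self.encoder = nn.Conv3d(in_ch, feat, 3, padding=1)    # global multi-scale features (stand-in)
        self.wt_decoder = nn.Conv3d(feat, feat, 3, padding=1)  # shared WT decoder (stand-in)
        self.wt_seg_head = nn.Conv3d(feat, 1, 1)               # whole-tumor segmentation branch
        self.wt_edge_head = nn.Conv3d(feat, 1, 1)              # whole-tumor edge branch
        self.cascade_head = nn.Conv3d(feat, 2, 1)              # TC / ET cascade branch

    def forward(self, x):
        f = F.relu(self.encoder(x))
        shared = F.relu(self.wt_decoder(f))
        wt = torch.sigmoid(self.wt_seg_head(shared))           # whole tumor (WT)
        edge = torch.sigmoid(self.wt_edge_head(shared))        # WT edge map
        # Cascade: gate the encoder features with the WT mask before
        # decoding the core (TC) and enhancing (ET) regions.
        tc_et = torch.sigmoid(self.cascade_head(f * wt))
        return wt, edge, tc_et

# Example with a four-modality input block of size 64^3:
if __name__ == "__main__":
    net = CascadedGliomaNet()
    wt, edge, tc_et = net(torch.randn(1, 4, 64, 64, 64))
    print(wt.shape, edge.shape, tc_et.shape)
```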
3. Advantages and effects: the invention provides a brain glioma segmentation method based on a cascade neural network structure. It takes a cascade neural network as the basic framework, extracts global multi-scale depth features through a residual encoding network, and fully injects global attention information into the multi-scale features; it generates the multi-class tumor tissue segmentation results through the designed cascade network and extracts the edge contour of the whole tumor; by generating the whole tumor, the tumor core region and the enhancement region in sequence, the spatial position information among the classes is fully exploited; and by training the network with an edge loss function, the segmentation accuracy in the tumor edge regions is refined, further improving the accuracy of multi-class segmentation. The method can be combined with various medical-image-based application systems, helps improve the segmentation quality of multi-modal images, and has broad market prospects and application value.
Drawings
Fig. 1 is a basic structural framework and a workflow of a glioma segmentation network of a cascaded neural network structure proposed by the present invention.
FIG. 2 is a three-dimensional global feature extraction module.
Fig. 3 is the WT segmentation and edge detection network.
Fig. 4 is the cascade segmentation network.
Fig. 5a-f show the multi-class tumor segmentation effect of the present invention under different disease conditions, wherein 5a, 5c, and 5e are the true values corresponding to the inputted multi-modal images, and 5b, 5d, and 5f are the multi-class tumor segmentation results outputted by the present invention.
Detailed Description
For better understanding of the technical solutions of the present invention, the following further describes embodiments of the present invention with reference to the accompanying drawings.
The invention relates to a brain glioma segmentation method based on a cascade neural network structure; the algorithm framework and network structure are shown in Figure 1, and the specific implementation steps of each part are as follows:
Step one: extract global multi-scale features with the three-dimensional global feature extraction module, whose basic structure is shown in Fig. 2;
Step two: segment the tissue region and edge of the whole tumor with the whole-tumor segmentation and edge detection network, whose basic structure is shown in Fig. 3, and, based on the whole-tumor tissue segmentation result, segment the tissue of the brain glioma core region and enhancement region with the cascade segmentation network, as shown in Fig. 4;
Step three: construct the edge loss function to train the whole network;
Output: extract the various tumor tissues with the trained glioma segmentation network. After sufficient iterative training of the glioma segmentation network with the training data, the trained glioma segmentation network is obtained and used to extract the various tumor tissues;
the first step is as follows:
1.1: feature maps of the four modality images are extracted through the three-dimensional residual feature network; the four modalities comprise T1, contrast-enhanced T1 (T1ce/T1Gd), T2 and T2-FLAIR. The three-dimensional residual feature network is composed of residual modules; its input is an image block formed from the four modality images and its output is multi-scale features. The feature maps are used to reconstruct the whole-tumor edge information and its tissue segmentation result, and extracting the multi-scale features preserves the texture and spatial information of the images. The three-dimensional multi-scale features first pass through a convolution layer with a 3 × 3 × 3 kernel and stride 1, and the first-level scale features are then extracted by two residual blocks with 3 × 3 × 3 kernels; next, a convolution layer with a 3 × 3 × 3 kernel and stride 2 performs downsampling, and the second-level scale features are extracted by two residual blocks; the third-level scale features are extracted in the same way; finally, downsampling is performed by a convolution layer with a 3 × 3 × 3 kernel and stride 2, and the fourth-level scale features are extracted by four residual blocks.
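For illustration, a minimal PyTorch sketch of the four-level residual encoder just described is given below; the channel widths, the normalization layers and the class names are assumptions not specified in the text.

```python
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Two 3x3x3 convolutions with an identity skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.InstanceNorm3d(ch), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.InstanceNorm3d(ch))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))

class ResidualEncoder3D(nn.Module):
    """Four-level multi-scale extractor following the layout in the text;
    the channel widths (base=32 doubling per level) are assumptions."""
    def __init__(self, in_ch=4, base=32):
        super().__init__()
        c1, c2, c3, c4 = base, base * 2, base * 4, base * 8
        self.level1 = nn.Sequential(nn.Conv3d(in_ch, c1, 3, padding=1),
                                    ResBlock3D(c1), ResBlock3D(c1))
        self.level2 = nn.Sequential(nn.Conv3d(c1, c2, 3, stride=2, padding=1),
                                    ResBlock3D(c2), ResBlock3D(c2))
        self.level3 = nn.Sequential(nn.Conv3d(c2, c3, 3, stride=2, padding=1),
                                    ResBlock3D(c3), ResBlock3D(c3))
        self.level4 = nn.Sequential(nn.Conv3d(c3, c4, 3, stride=2, padding=1),
                                    ResBlock3D(c4), ResBlock3D(c4),
                                    ResBlock3D(c4), ResBlock3D(c4))

    def forward(self, x):
        f1 = self.level1(x)          # first-level scale features
        f2 = self.level2(f1)         # 1/2 resolution
        f3 = self.level3(f2)         # 1/4 resolution
        f4 = self.level4(f3)         # 1/8 resolution, deepest abstract features
        return f1, f2, f3, f4
```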
1.2: the feature maps are reconstructed by the global attention module. Inter-layer feature extraction and channel feature extraction are performed on the highest-level abstract features generated by the three-dimensional residual feature network, and global feature reconstruction is carried out through fusion. The highest-level abstract features are first mapped into two spaces to obtain mapped features in different spaces, which are combined by a multiplication operation; the global features are then obtained through pixel-level relation extraction and inter-layer relation extraction in sequence.
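The following PyTorch sketch shows one plausible realization of such a global attention module, combining a voxel-to-voxel (pixel-level) relation term and a channel (inter-layer) relation term; the reduction ratio, the residual fusion with a learnable scale, and the exact arrangement of the two projections are assumptions, since the text does not fix them.

```python
import torch
import torch.nn as nn

class GlobalAttention3D(nn.Module):
    """Voxel-to-voxel (spatial) and channel (inter-layer) relation terms
    fused back into the deepest abstract features."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.query = nn.Conv3d(ch, ch // reduction, 1)   # first mapped space
        self.key = nn.Conv3d(ch, ch // reduction, 1)     # second mapped space
        self.value = nn.Conv3d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))        # learnable fusion weight

    def forward(self, x):
        b, c, d, h, w = x.shape
        n = d * h * w
        q = self.query(x).reshape(b, -1, n).permute(0, 2, 1)        # (b, n, c')
        k = self.key(x).reshape(b, -1, n)                           # (b, c', n)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)               # pixel-level relations
        v = self.value(x).reshape(b, c, n)                          # (b, c, n)
        spatial = torch.bmm(v, attn.permute(0, 2, 1)).reshape(b, c, d, h, w)
        chan = torch.softmax(torch.bmm(v, v.permute(0, 2, 1)), dim=-1)  # inter-layer relations
        channel = torch.bmm(chan, v).reshape(b, c, d, h, w)
        # Fuse both relation terms back into the original abstract features.
        return x + self.gamma * (spatial + channel)
```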
Wherein, the second step is as follows:
2.1: the segmentation result and edge of the whole tumor are extracted through the WT segmentation and edge detection network. The WT segmentation and edge detection network is composed of a shared-parameter decoder and two decoding branches; using the global multi-scale features generated in step one, it decodes them uniformly through the shared decoder and finally splits into two branches to obtain the segmentation and edge extraction of the whole-tumor tissue. The shared-parameter decoder consists of convolution layers and upsampling: the first-stage convolution layer has a 3 × 3 × 3 kernel, stride 1 and an upsampling factor of 2; the second stage consists of a residual block, a convolution layer with a 3 × 3 × 3 kernel and stride 1, and upsampling; the third stage consists of a residual block and a convolution layer with a 3 × 3 × 3 kernel and stride 1. The whole-tumor segmentation branch consists of residual blocks and a convolution layer with a 1 × 1 × 1 kernel and stride 1; the edge extraction branch consists of residual blocks and a convolution layer with a 1 × 1 × 1 kernel and stride 1.
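A minimal PyTorch sketch of this shared decoder with its two output branches follows; the channel widths are assumptions, and the skip connections from the shallower encoder levels are omitted for brevity.

```python
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Minimal residual block used in the decoder sketch."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1))

    def forward(self, x):
        return torch.relu(x + self.body(x))

class WTSegEdgeDecoder(nn.Module):
    """Shared decoder followed by a WT-segmentation branch and a WT-edge branch."""
    def __init__(self, in_ch=256, mid=64):
        super().__init__()
        up = lambda: nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        # Stage 1: 3x3x3 convolution, stride 1, then x2 upsampling.
        self.stage1 = nn.Sequential(nn.Conv3d(in_ch, mid, 3, padding=1), up())
        # Stage 2: residual block, 3x3x3 convolution, then upsampling.
        self.stage2 = nn.Sequential(ResBlock3D(mid), nn.Conv3d(mid, mid, 3, padding=1), up())
        # Stage 3: residual block and a 3x3x3 convolution.
        self.stage3 = nn.Sequential(ResBlock3D(mid), nn.Conv3d(mid, mid, 3, padding=1))
        # Two output branches: a residual block plus a 1x1x1 convolution each.
        self.seg_branch = nn.Sequential(ResBlock3D(mid), nn.Conv3d(mid, 1, 1))
        self.edge_branch = nn.Sequential(ResBlock3D(mid), nn.Conv3d(mid, 1, 1))

    def forward(self, deep_features):
        shared = self.stage3(self.stage2(self.stage1(deep_features)))
        wt_seg = torch.sigmoid(self.seg_branch(shared))    # whole-tumor probability map
        wt_edge = torch.sigmoid(self.edge_branch(shared))  # whole-tumor edge map
        return wt_seg, wt_edge
```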
2.2: the segmentation results of the tumor core region and tumor enhancement region contained in the whole tumor are extracted through the cascade network. Based on the global multi-scale features generated in step one, the cascade network introduces the whole-tumor tissue segmentation result generated in step 2.1 and produces the final accurate segmentation results of the tumor core region and tumor enhancement region through fused decoding. The whole-tumor tissue segmentation result is first downsampled to 1/8 of its original size and multiplied by the highest-level global features generated by the encoder; the tumor core region and tumor enhancement region segmentation results are then generated through convolution, residual blocks and upsampling.
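The gating of the deepest features by the downsampled whole-tumor mask can be sketched as follows in PyTorch; the channel widths, the number of upsampling stages and the class names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock3D(nn.Module):
    """Minimal residual block (same form as in the decoder sketch)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1))

    def forward(self, x):
        return torch.relu(x + self.body(x))

class CascadeTCETDecoder(nn.Module):
    """Gate the deepest global features with the WT mask, then decode the
    tumor core (TC) and tumor enhancement (ET) regions."""
    def __init__(self, deep_ch=256, mid=64):
        super().__init__()
        up = lambda: nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        self.decode = nn.Sequential(
            nn.Conv3d(deep_ch, mid, 3, padding=1), nn.ReLU(inplace=True),
            ResBlock3D(mid), up(),
            ResBlock3D(mid), up(),
            ResBlock3D(mid), up(),
            nn.Conv3d(mid, 2, 1))        # two output channels: TC and ET

    def forward(self, deep_features, wt_mask):
        # Downsample the whole-tumor mask to 1/8 of the original size so that
        # it matches the deepest feature map, then gate the features with it.
        wt_small = F.interpolate(wt_mask, scale_factor=0.125,
                                 mode="trilinear", align_corners=False)
        gated = deep_features * wt_small
        return torch.sigmoid(self.decode(gated))
```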
Wherein the third step is as follows:
3.1: the loss function of the accurate brain glioma segmentation network consists of two parts: a region loss composed of the Dice loss and the Tversky loss between the segmentation result and the reference segmentation, and an edge loss between the whole-tumor edge detection result and the reference edge. The loss function of the segmentation network is expressed as L = αL_region + βL_edge, where L_region denotes the region loss, L_edge denotes the edge loss, and α and β are their corresponding weighting coefficients, set to α = 1 and β = 0.1. The expression for the region loss L_region is given as a formula image in the original filing; in it, p_0i denotes the probability that voxel i is tumor, p_1i the probability that voxel i is non-tumor, N is the number of voxels and γ = 0.7. The expression for the edge loss L_edge is likewise given as a formula image; in it, y_n denotes the whole-tumor edge detection result output by the network and ŷ_n the corresponding ground truth;
3.2: the invention adopts the ADAM optimizer, with a learning rate of 1.003 × 10⁻⁴ for the accurate brain glioma segmentation network; the network parameters are adjusted by gradient back-propagation to reduce the corresponding loss function and better guide the training of the network.
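The following Python sketch shows an assumed implementation of this training objective, using a Tversky-style region term and a binary cross-entropy edge term together with the Adam learning rate stated above; the exact published formulas appear only as images in the filing, the Dice component of the region loss is omitted for brevity, and the tensor and dictionary names are hypothetical.

```python
import torch
import torch.nn.functional as F

def tversky_region_loss(prob_tumor, target, gamma=0.7, eps=1e-7):
    """Assumed Tversky-style region term; prob_tumor and target hold
    voxel-wise tumor probabilities / binary labels of the same shape."""
    p0, p1 = prob_tumor.flatten(), 1.0 - prob_tumor.flatten()
    g0, g1 = target.flatten(), 1.0 - target.flatten()
    tp = (p0 * g0).sum()
    index = tp / (tp + gamma * (p0 * g1).sum() + (1.0 - gamma) * (p1 * g0).sum() + eps)
    return 1.0 - index

def total_loss(wt_seg, wt_edge, tc_et, targets, alpha=1.0, beta=0.1):
    """L = alpha * L_region + beta * L_edge, with the region term summed over
    the WT, TC and ET outputs (the Dice component is omitted here)."""
    l_region = (tversky_region_loss(wt_seg, targets["wt"])
                + tversky_region_loss(tc_et[:, 0:1], targets["tc"])
                + tversky_region_loss(tc_et[:, 1:2], targets["et"]))
    l_edge = F.binary_cross_entropy(wt_edge, targets["wt_edge"])
    return alpha * l_region + beta * l_edge

# One ADAM update at the learning rate stated in the text (illustrative only):
# optimiser = torch.optim.Adam(net.parameters(), lr=1.003e-4)
# loss = total_loss(wt_seg, wt_edge, tc_et, targets)
# optimiser.zero_grad(); loss.backward(); optimiser.step()
```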
To demonstrate the effect of the invention visually, Figs. 5a-f show the multi-class brain glioma segmentation results of the invention for different patients, wherein 5a, 5c and 5e are the ground-truth data corresponding to the input original multi-modal data, and 5b, 5d and 5f are the output multi-class brain glioma segmentations. It can be seen from the figures that the multi-class segmentation output by the invention effectively separates the multiple tissue regions of the glioma under the constraint of the edge information, and the segmentation quality at the tumor edges is clearly improved. The method takes the cascade neural network as its basic framework, exploits the edge information of the whole tumor, and combines the global feature extraction module with residual connections, achieving high-quality multi-class glioma tissue segmentation that can be applied in various targeted radiotherapy systems.

Claims (7)

1. A brain glioma segmentation method based on a cascade neural network structure, characterized by comprising the following steps:
Step one: generate a high-precision tumor region segmentation result using a brain glioma segmentation network with a cascade neural network structure; first, perform multi-scale feature extraction with a three-dimensional residual feature extraction network; then extract global features through a global attention module;
Step two: from the multi-scale residual features and global features, on the one hand use a segmentation and edge detection network to generate the whole tumor (WT) segmentation result and its edge detection result; on the other hand, design a cascade network to generate the segmentation results of the tumor core region (TC) and the tumor enhancement region (ET) conditioned on the preliminary whole-tumor segmentation result;
Step three: construct a loss function to train the accurate brain glioma segmentation network;
Output: perform tumor region segmentation on the original multi-modal image using the trained brain glioma segmentation network with the cascade neural network structure.
2. The glioma segmentation method based on the cascade neural network structure of claim 1, wherein the first step is as follows:
S1.1: feature maps of the four modality images are extracted through a three-dimensional residual feature network; the four modalities include T1-weighted imaging, contrast-enhanced T1-weighted imaging, T2-weighted imaging and T2 fluid-attenuated inversion recovery imaging; the three-dimensional residual feature network is composed of residual modules, its input is an image block formed from the four modality images, and its output is multi-scale features; the feature maps are used to reconstruct the whole-tumor edge information and the tumor region segmentation result, and extracting the multi-scale features preserves the texture and spatial information of the images;
S1.2: the feature maps are reconstructed by the global attention module; inter-layer feature extraction and channel feature extraction are performed on the highest-level abstract features generated by the three-dimensional residual feature network, and global feature reconstruction is carried out through fusion.
3. The glioma segmentation method based on the cascade neural network structure of claim 2, wherein the step S1.2 further comprises: whereas an encoder for medical image segmentation ordinarily extracts features only with convolution and downsampling modules, a global attention module is designed to extract more effective global features while the residual blocks (ResBlock) extract the multi-scale features, and the output feature map is obtained by fusing the attention features with the original highest-level abstract features.
4. The glioma segmentation method based on the cascade neural network structure of claim 1, wherein the second step is as follows:
S2.1: the segmentation result and edge of the whole tumor are extracted through the WT segmentation and edge detection network; the WT segmentation and edge detection network is composed of a shared-parameter decoder and two decoding branches; using the global multi-scale features generated in the first step, it decodes them uniformly through the shared decoder and finally splits into two branches to obtain the segmentation and edge extraction results of the whole tumor respectively;
S2.2: the segmentation results of the tumor core region and tumor enhancement region contained in the whole tumor are extracted through the cascade network; based on the global multi-scale features generated in the first step, the cascade network introduces the whole-tumor segmentation result generated in step S2.1 and produces the final accurate segmentation results of the tumor core region and tumor enhancement region through fused decoding.
5. The brain tumor segmentation method based on the cascade neural network structure as claimed in claim 1, wherein the third step is as follows:
S3.1: the loss function of the accurate brain glioma segmentation network consists of two parts: a region loss composed of the Dice loss and the Tversky loss between the segmentation result and the reference segmentation, and an edge loss between the whole-tumor edge detection result and the reference edge; the loss function of the segmentation network is expressed as L = αL_region + βL_edge, where L_region denotes the region loss, L_edge denotes the edge loss, and α and β are their corresponding weighting coefficients; the expression for the region loss L_region is given as a formula image in the original filing, in which p_0i denotes the probability that voxel i is tumor, p_1i the probability that voxel i is non-tumor, N is the number of voxels and γ = 0.7; the expression for the edge loss L_edge is likewise given as a formula image, in which y_n denotes the whole-tumor edge detection result output by the network and ŷ_n the corresponding ground truth;
S3.2: optimization is performed with the adaptive moment estimation (ADAM) optimizer.
6. A brain tumor segmentation system based on a cascade neural network structure, characterized in that the system comprises:
a global multi-scale feature generation module for generating multi-level residual features with global attention information;
an accurate tumor region segmentation network for generating accurate lesion segmentation results for the whole tumor, the tumor core region and the tumor enhancement region, together with the edge of the whole tumor;
a loss function calculation module for calculating the loss function of the accurate brain glioma segmentation network;
and a network training module for performing sufficient iterative training of the accurate glioma segmentation network to obtain the trained accurate glioma segmentation network for extracting the segmentation results.
7. The brain tumor segmentation system based on the cascaded neural network structure as claimed in claim 6, wherein: the precise tumor region segmentation network module further comprises:
the cascade network is designed by utilizing the relation that the whole tumor comprises a tumor core area and a tumor enhancement area, so that the network can better adapt to the accurate extraction of the tumor core area and the tumor enhancement area with larger morphological difference;
the WT segmentation and edge detection network is characterized in that the whole tumor segmentation branch and the edge detection branch share early-stage coding and decoding parameters, and different generation modules are subsequently arranged, so that the whole tumor segmentation branch and the edge detection branch can generate a more accurate whole tumor segmentation result under the mutual supervision and mutual promotion relationship;
the accurate tumor area segmentation network module obtains a final segmentation result by fusing the output results of the WT segmentation and edge detection network and the cascade network.
CN202111404516.9A 2021-11-24 2021-11-24 Brain glioma segmentation method based on cascade neural network structure Pending CN114170244A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111404516.9A CN114170244A (en) 2021-11-24 2021-11-24 Brain glioma segmentation method based on cascade neural network structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111404516.9A CN114170244A (en) 2021-11-24 2021-11-24 Brain glioma segmentation method based on cascade neural network structure

Publications (1)

Publication Number Publication Date
CN114170244A true CN114170244A (en) 2022-03-11

Family

ID=80480484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111404516.9A Pending CN114170244A (en) 2021-11-24 2021-11-24 Brain glioma segmentation method based on cascade neural network structure

Country Status (1)

Country Link
CN (1) CN114170244A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882047A (en) * 2022-04-19 2022-08-09 厦门大学 Medical image segmentation method and system based on semi-supervision and Transformers
CN114937171A (en) * 2022-05-11 2022-08-23 复旦大学 Alzheimer's classification system based on deep learning
CN115082500A (en) * 2022-05-31 2022-09-20 苏州大学 Corneal nerve fiber segmentation method based on multi-scale and local feature guide network
CN117611806A (en) * 2024-01-24 2024-02-27 北京航空航天大学 Prostate cancer operation incisal margin positive prediction system based on images and clinical characteristics

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084823A (en) * 2019-04-18 2019-08-02 天津大学 Three-dimensional brain tumor image partition method based on cascade anisotropy FCNN
CN110120033A (en) * 2019-04-12 2019-08-13 天津大学 Based on improved U-Net neural network three-dimensional brain tumor image partition method
CN110689543A (en) * 2019-09-19 2020-01-14 天津大学 Improved convolutional neural network brain tumor image segmentation method based on attention mechanism
WO2020108525A1 (en) * 2018-11-30 2020-06-04 腾讯科技(深圳)有限公司 Image segmentation method and apparatus, diagnosis system, storage medium, and computer device
CN112215850A (en) * 2020-08-21 2021-01-12 天津大学 Method for segmenting brain tumor by using cascade void convolution network with attention mechanism
CN112837276A (en) * 2021-01-20 2021-05-25 重庆邮电大学 Brain glioma segmentation method based on cascaded deep neural network model

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020108525A1 (en) * 2018-11-30 2020-06-04 腾讯科技(深圳)有限公司 Image segmentation method and apparatus, diagnosis system, storage medium, and computer device
CN110120033A (en) * 2019-04-12 2019-08-13 天津大学 Based on improved U-Net neural network three-dimensional brain tumor image partition method
CN110084823A (en) * 2019-04-18 2019-08-02 天津大学 Three-dimensional brain tumor image partition method based on cascade anisotropy FCNN
CN110689543A (en) * 2019-09-19 2020-01-14 天津大学 Improved convolutional neural network brain tumor image segmentation method based on attention mechanism
CN112215850A (en) * 2020-08-21 2021-01-12 天津大学 Method for segmenting brain tumor by using cascade void convolution network with attention mechanism
CN112837276A (en) * 2021-01-20 2021-05-25 重庆邮电大学 Brain glioma segmentation method based on cascaded deep neural network model

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882047A (en) * 2022-04-19 2022-08-09 厦门大学 Medical image segmentation method and system based on semi-supervision and Transformers
CN114937171A (en) * 2022-05-11 2022-08-23 复旦大学 Alzheimer's classification system based on deep learning
CN114937171B (en) * 2022-05-11 2023-06-09 复旦大学 Deep learning-based Alzheimer's classification system
CN115082500A (en) * 2022-05-31 2022-09-20 苏州大学 Corneal nerve fiber segmentation method based on multi-scale and local feature guide network
CN117611806A (en) * 2024-01-24 2024-02-27 北京航空航天大学 Prostate cancer operation incisal margin positive prediction system based on images and clinical characteristics
CN117611806B (en) * 2024-01-24 2024-04-12 北京航空航天大学 Prostate cancer operation incisal margin positive prediction system based on images and clinical characteristics

Similar Documents

Publication Publication Date Title
Vakanski et al. Attention-enriched deep learning model for breast tumor segmentation in ultrasound images
Ning et al. SMU-Net: Saliency-guided morphology-aware U-Net for breast lesion segmentation in ultrasound image
Li et al. Brain tumor detection based on multimodal information fusion and convolutional neural network
Masood et al. A survey on medical image segmentation
CN114170244A (en) Brain glioma segmentation method based on cascade neural network structure
CN109685060B (en) Image processing method and device
CN110930416B (en) MRI image prostate segmentation method based on U-shaped network
CN113554669B (en) Unet network brain tumor MRI image segmentation method with improved attention module
CN115496771A (en) Brain tumor segmentation method based on brain three-dimensional MRI image design
CN112446892A (en) Cell nucleus segmentation method based on attention learning
Chen et al. A lung dense deep convolution neural network for robust lung parenchyma segmentation
CN112258514A (en) Segmentation method of pulmonary blood vessels of CT (computed tomography) image
CN114494296A (en) Brain glioma segmentation method and system based on fusion of Unet and Transformer
Liu et al. Automatic segmentation algorithm of ultrasound heart image based on convolutional neural network and image saliency
Kaur et al. Optimized multi threshold brain tumor image segmentation using two dimensional minimum cross entropy based on co-occurrence matrix
Amiri et al. Bayesian Network and Structured Random Forest Cooperative Deep Learning for Automatic Multi-label Brain Tumor Segmentation.
CN114972362A (en) Medical image automatic segmentation method and system based on RMAU-Net network
Akram et al. An automated system for liver ct enhancement and segmentation
Ning et al. CF2-Net: Coarse-to-fine fusion convolutional network for breast ultrasound image segmentation
Shao et al. Application of U-Net and Optimized Clustering in Medical Image Segmentation: A Review.
CN112991365B (en) Coronary artery segmentation method, system and storage medium
Ruan et al. An efficient tongue segmentation model based on u-net framework
Wang et al. Accurate lung nodule segmentation with detailed representation transfer and soft mask supervision
Nanayakkara et al. Automatic breast boundary segmentation of mammograms
Qin et al. Joint dense residual and recurrent attention network for DCE-MRI breast tumor segmentation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination