CN114170244A - A glioma segmentation method based on cascaded neural network structure - Google Patents
A glioma segmentation method based on a cascaded neural network structure

- Publication number: CN114170244A
- Application number: CN202111404516.9A
- Authority: CN (China)
- Prior art keywords: segmentation, tumor, network, region, edge
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/11: Region-based segmentation
- G06N3/045: Combinations of networks
- G06N3/08: Neural network learning methods
- G06T7/13: Edge detection
- G06T2207/20081: Training; Learning
- G06T2207/20084: Artificial neural networks [ANN]
- G06T2207/30016: Biomedical image processing; Brain
- G06T2207/30096: Tumor; Lesion
- Y02T10/40: Engine management systems
Abstract
The invention provides a brain glioma segmentation method based on a cascaded neural network structure, comprising the following steps. Step one: generate a high-precision tumor-region segmentation result with a brain glioma segmentation network built on a cascaded neural network structure. Step two: from the multi-scale residual features and global features, on the one hand use a segmentation and edge-detection network to generate the whole-tumor segmentation result and its edge-detection result; on the other hand, design a cascade network to generate the tumor-core and enhancing-tumor segmentation results conditioned on the preliminary whole-tumor segmentation. Step three: construct a loss function to train the brain glioma segmentation network. Output: perform tumor-region segmentation on the original multi-modal images with the trained brain glioma segmentation network. The method can be combined with various medical-image-based application systems, helps improve the segmentation quality of multi-modal images, and has broad market prospects and application value.
Description
Technical Field
The invention relates to a brain glioma segmentation method based on a cascaded neural network structure, and belongs to the fields of digital image processing, pattern recognition and computer vision. Medical image segmentation has broad application prospects in image-guided interventional diagnosis and targeted radiotherapy systems.
Background
Gliomas are the most common primary brain malignancies, with varying degrees of invasiveness, prognosis and heterogeneous sub-regions. Segmentation of brain gliomas generally refers to delineating the tumor region from a multi-modal magnetic resonance sequence. Glioma segmentation can effectively extract the tumor's heterogeneous regions (the whole tumor region, the tumor core region and the enhancing tumor region), thereby helping physicians make accurate judgments. Medical image segmentation is more challenging than segmentation of ordinary color images because of the noise, blur and low contrast introduced by the imaging process. In addition, the complexity of the brain tissue structure and the variability of brain tumors in spatial position and morphological size make accurate glioma segmentation difficult.
Medical image segmentation algorithms are generally divided into conventional machine-learning methods and deep-learning methods. A typical representative of the conventional machine-learning family is the region-based method, which uses color discontinuities to represent the boundary between a region and the target object as an edge, and is effective against problems such as under-segmentation, over-segmentation and erroneous edges. Yang et al. proposed an improved gradient-threshold edge detector. The method introduces basic characteristics of the human visual system and accurately determines the local mask area for edges of arbitrary shape according to the image content; the gradient image is masked with the brightness and activity of the local image before the edge markers are determined. Experimental results show that the edge images obtained by the algorithm agree better with perceived edge images (see: Yang et al., An improved method for the gradient threshold edge detector based on HVS, Computational Intelligence and Security, Springer, Berlin/Heidelberg, 2005, pp. 1051-). A recent work by Su et al. segments the carpal bones in X-ray images with a multi-stage approach: foreground regions and edge maps are extracted using adaptive local thresholding and adaptive Canny edge detection.
The edge image and the foreground region are then integrated by an XOR operation; over-segmentation is resolved by adding background boundaries to the edge map near the carpal boundary, and under-segmentation is handled by adding foreground boundaries to the edge map near the carpal boundary, thereby closing foreground lost to under-segmentation; non-closed edges and false edges in the edge map are supplemented with the carpal regions from local adaptive thresholding (see: Su L, Fu X, Zhang X, Cheng X, Ma Y, Gan Y, Hu Q., Delineation of carpal bones from hand X-ray images through prior model, and integration of region-based and boundary-based segmentations, IEEE Access, 2018, 6:19993-20008). There are also threshold-based methods: Ilhan et al. use thresholding to detect brain tumors in grayscale MRI. The technique identifies edges using morphology (erosion and dilation) and then subtracts the generated image from the original image to obtain the result (see: Ilhan U, Ilhan A., Brain tumor segmentation based on a new threshold approach, Procedia Computer Science, 2017, 120:580-587).
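The erosion-and-dilation edge step described for the threshold-based method can be sketched in a few lines of NumPy; this is an illustrative reconstruction (the function names and the 3 × 3 structuring element are choices made here, not details taken from the cited work):

```python
import numpy as np

def binary_dilate(mask):
    """3x3 binary dilation: OR of the mask with its eight shifted copies."""
    h, w = mask.shape
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def binary_erode(mask):
    """3x3 binary erosion: AND of the mask with its eight shifted copies."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=True)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def morphological_edge(image, threshold):
    """Threshold, then take the morphological gradient as the edge map."""
    mask = image > threshold
    return binary_dilate(mask) & ~binary_erode(mask)
```

Applied to a bright square on a dark background, this yields the one-pixel ring around the square; subtracting such an edge image from the original is the final step the cited method describes.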
In recent years, with the rapid development of deep learning, deep-learning-based methods have been applied to medical image segmentation. Such methods remove the need for manual feature extraction, making it possible to build large trainable models that learn the representations required for a given task.
To adapt a convolutional network to a variety of test images, Wang et al. proposed a fine-tuning algorithm (see: Wang G. et al., Interactive medical image segmentation using deep learning with image-specific fine-tuning, IEEE Transactions on Medical Imaging, 2018, 37:1562-1573). Zhang et al. incorporated multi-scale features into their model through auxiliary classification paths, enabling the network to exploit multi-scale information (see: Zhang R. et al., Automatic segmentation of acute ischemic stroke from DWI using 3D fully convolutional DenseNets, IEEE Transactions on Medical Imaging, 2018, 37:2149-2160). Punn et al. proposed a three-dimensional brain tumor segmentation framework based on 3D U-Net. The architecture is divided into three parts: multi-modal fusion, tumor extractor and tumor segmenter. It fuses the magnetic resonance sequences with deep encoded fusion, learns tumor patterns with a 3D inception U-Net model on the fused modality, and finally decodes the multi-scale extracted features into multiple types of tumor regions (see: N. S. Punn and S. Agarwal, Multi-modality encoded fusion with 3D inception U-Net and decoder model for brain tumor segmentation, Multimedia Tools and Applications, 2021, 80:30305-30320).
However, most current medical image segmentation methods based on convolutional neural networks extract the region of interest from the gray-level information of the image alone: edge information is under-used, and the spatial relationships between classes in multi-class segmentation tasks are not considered, so 3D segmentation is insufficiently fine and accurate. The invention holds that edge information and inter-class spatial relationships are equally valuable: edge information can effectively resolve blurred boundaries in segmentation tasks, while inter-class spatial relationships provide important cues for multi-class segmentation. On this basis, the invention proposes a novel brain glioma segmentation method based on a cascaded neural network structure. The novel network structure extracts global depth features and adopts a cascade structure together with a whole-tumor segmentation and edge-detection structure, so that the algorithm fully exploits the inter-class spatial relationships and edge features and effectively improves the segmentation quality of multiple tumor classes.
Disclosure of Invention
1. Purpose: in view of the above problems, the invention aims to provide a glioma segmentation method based on a cascaded neural network structure for analyzing and researching the image characteristics of gliomas. The method fully extracts global multi-scale attention features from multi-modal medical images, uses two decoding sequences in a cascade relationship, then exploits the extracted whole-tumor edge features to effectively improve the quality and stability of glioma segmentation, and finally outputs high-quality Whole Tumor (WT), Tumor Core (TC) and Enhancing Tumor (ET) regions corresponding to the input images.
2. Technical scheme: to achieve this purpose, the overall idea is to use a three-dimensional feature-extraction network to generate global multi-scale depth features, use a WT segmentation and edge-detection network to generate the whole-tumor edge and its segmentation result, then use a cascaded segmentation network to generate accurate TC and ET segmentation results, and use an edge loss and a region loss to continuously improve the performance of the glioma segmentation network. The technical idea of the algorithm is embodied in the following four aspects:
1) a residual module is used to improve the expressive power and convergence speed of the network;
2) a global multi-scale feature generation module produces multi-level residual features carrying global attention information;
3) a WT segmentation and edge-detection network generates the whole-tumor edge and its segmentation result, and effectively uses the edge loss to improve segmentation precision;
4) the cascaded neural network model fully exploits the inter-class relationships and edge features of multi-class segmentation, outputs the TC and ET tissue regions, and reconstructs a high-quality multi-class segmentation result.
The invention relates to a brain glioma segmentation method based on a cascaded neural network structure; the specific steps are as follows:
Step one: extract deep global multi-scale features using a convolutional neural network based on residual modules. First, perform data augmentation on the four input modalities of the brain tumor to form data blocks; second, perform multi-scale feature extraction with a multi-level three-dimensional residual feature-extraction network; third, extract global features from the deepest-level features with a global attention module; finally, output the global features and the shallow multi-level features along multiple paths to the subsequent multi-class segmentation network and edge-generation network;
Step two: generate the final multi-class segmentation results and the whole-tumor edge from the multi-scale residual features and the global features. On the one hand, a WT segmentation and edge-detection network generates the whole-tumor segmentation result and its edge-detection result; on the other hand, a cascade network generates the tumor-core and enhancing-tumor segmentation results conditioned on the whole-tumor segmentation;
Step three: construct a loss function for end-to-end training of the accurate brain glioma segmentation network;
Output: perform tumor-region segmentation on the original multi-modal images using the trained brain glioma segmentation network with the cascaded neural network structure. After sufficient iterative training of the segmentation network on the training data, the trained tumor segmentation network is obtained and used to extract the multiple classes of tumor tissue.
The first step is as follows:
1.1: extract feature maps of the four modality images through a three-dimensional residual feature network. The four modalities are T1-weighted imaging (T1), post-contrast T1-weighted imaging (T1ce), T2-weighted imaging (T2) and T2 fluid-attenuated inversion recovery imaging (T2-FLAIR). First, apply random-flip and random-crop data augmentation to the multi-modal input; second, extract multi-modal features with a multi-level three-dimensional residual network built from residual modules and output multi-scale features; finally, output the multi-level three-dimensional scale features along multiple paths for reconstructing the multi-class segmentation results and the edge-extraction results. Extracting multi-scale features preserves the texture information and spatial information of the image;
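The random-flip and random-crop augmentation in step 1.1 can be sketched as follows; the (C, D, H, W) array layout and the patch size are illustrative assumptions, not values fixed by the invention:

```python
import numpy as np

def augment_patch(volumes, patch=(32, 32, 32), rng=None):
    """Random flips plus a random crop applied identically to all modalities.

    volumes: array of shape (C, D, H, W), one channel per co-registered MRI
    modality (e.g. T1, T1ce, T2, T2-FLAIR).
    """
    rng = np.random.default_rng() if rng is None else rng
    # flip each spatial axis independently with probability 0.5
    for axis in (1, 2, 3):
        if rng.random() < 0.5:
            volumes = np.flip(volumes, axis=axis)
    # crop one fixed-size block at a random position, shared across modalities
    starts = [int(rng.integers(0, s - p + 1)) for s, p in zip(volumes.shape[1:], patch)]
    sl = tuple(slice(st, st + p) for st, p in zip(starts, patch))
    return volumes[(slice(None),) + sl]
```

Because the crop window is drawn once and reused for every channel, the four modalities stay voxel-aligned, which the later fusion steps rely on.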
1.2: reconstruct the feature map with a global attention module. First, take the deepest-level abstract features generated by the three-dimensional residual feature network and map them into two spaces, fusing them to obtain fusion features; then extract the relations between pixels and the relations between the pixels of each layer respectively; finally output the global features.
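The "map into two spaces, fuse, then extract pixel relations" description matches the shape of non-local self-attention; the sketch below is one plausible reading, with placeholder projection matrices (wq, wk and wv are assumptions made here, not the patent's parameters):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_attention(feat, wq, wk, wv):
    """Global feature reconstruction over flattened deepest-level features.

    feat: (C, N) features, N = number of voxel positions.
    wq, wk: (E, C) projections into the two embedding spaces; wv: (E, C) values.
    """
    q, k, v = wq @ feat, wk @ feat, wv @ feat   # map into two spaces + values
    affinity = softmax(q.T @ k, axis=-1)        # (N, N) pixel-to-pixel relations
    return v @ affinity.T                       # each voxel aggregates global context
```

Each output column is a convex combination of the value vectors of all voxels, so even the deepest-level features gain access to whole-volume context.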
Wherein, the second step is as follows:
2.1: extract the whole-tumor segmentation result and edge through the WT segmentation and edge-detection network. The network consists of a shared-parameter decoder and two output branches: using the global multi-scale features generated in step one, it decodes them uniformly through the shared decoder and finally splits into two branches that produce the WT segmentation and WT edge-extraction results respectively;
2.2: extract the tumor-core and enhancing-tumor segmentation results contained within the whole tumor through the cascade network. Given the global multi-scale features generated in step one, the cascade network introduces the whole-tumor segmentation result generated in step 2.1 and, by fusion decoding, generates the final accurate segmentation of the tumor core region and the enhancing tumor region.
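One plausible way a cascade network can "introduce" the first-stage WT result before fusion decoding is to gate the shared features with the WT probability map and append that map as an extra input channel; this is a hedged sketch of the idea, not the patent's exact fusion scheme:

```python
import numpy as np

def cascade_inputs(features, wt_prob):
    """Condition the TC/ET decoder on the first-stage whole-tumor output.

    features: (C, D, H, W) global multi-scale features from step one.
    wt_prob:  (D, H, W) whole-tumor probability from the WT network.
    """
    gated = features * wt_prob[None]                       # suppress non-WT voxels
    return np.concatenate([gated, wt_prob[None]], axis=0)  # WT map as extra channel
```

Since TC and ET lie inside WT by definition, suppressing features outside the predicted whole tumor is exactly the kind of inter-class spatial constraint the method exploits.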
Wherein the third step is as follows:
3.1: the loss function of the accurate brain glioma segmentation network consists of two parts: a region loss, composed of the Dice and Tversky losses between the segmentation result and the reference segmentation, and an edge loss between the whole-tumor edge-detection result and the reference edge. The segmentation network loss function is L = αL_region + βL_edge, where L_region denotes the region loss, L_edge the edge loss, and α and β their corresponding weighting coefficients. The region loss L_region takes the Tversky form

L_region = 1 − Σ_i p_0i g_0i / (Σ_i p_0i g_0i + γ Σ_i p_0i g_1i + (1 − γ) Σ_i p_1i g_0i + ε_0), i = 1, …, N,

where p_0i is the probability that voxel i is tumor, p_1i the probability that voxel i is non-tumor, g_0i = 1 marks a tumor voxel and g_0i = 0 a non-tumor voxel, g_1i is the complement of g_0i, N is the number of voxels, γ = 0.7, and ε_0 is a non-zero stabilizing term set to 1 × 10⁻⁷. The edge loss L_edge is expressed as the cross-entropy

L_edge = −(1/N) Σ_n [ŷ_n log(y_n) + (1 − ŷ_n) log(1 − y_n)],

where y_n is the whole-tumor edge-detection result output by the network and ŷ_n is the corresponding ground-truth value;
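The two loss terms of step 3.1 can be written out directly in NumPy; the Tversky expression follows the symbols listed above (γ = 0.7, ε0 = 10⁻⁷), while the binary cross-entropy form of the edge loss is an assumption made here, since the source does not reproduce that expression in full:

```python
import numpy as np

def tversky_loss(p_tumor, g_tumor, gamma=0.7, eps=1e-7):
    """Region loss: 1 minus the Tversky index over all voxels."""
    p0, g0 = p_tumor.ravel(), g_tumor.ravel()
    p1, g1 = 1.0 - p0, 1.0 - g0          # non-tumor probability / label
    tp = (p0 * g0).sum()
    ti = tp / (tp + gamma * (p0 * g1).sum() + (1 - gamma) * (p1 * g0).sum() + eps)
    return 1.0 - ti

def edge_bce_loss(y, y_true, eps=1e-7):
    """Edge loss: binary cross-entropy between predicted and reference edges."""
    y = np.clip(y.ravel(), eps, 1 - eps)
    t = y_true.ravel()
    return float(-(t * np.log(y) + (1 - t) * np.log(1 - y)).mean())

def total_loss(p_tumor, g_tumor, y_edge, edge_true, alpha=1.0, beta=1.0):
    """L = alpha * L_region + beta * L_edge (alpha and beta chosen here as 1)."""
    return alpha * tversky_loss(p_tumor, g_tumor) + beta * edge_bce_loss(y_edge, edge_true)
```

With γ > 0.5, false positives (p_0i g_1i) are penalized more heavily than false negatives, which biases the segmentation toward tighter tumor boundaries.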
3.2: optimize with the Adaptive Moment Estimation (ADAM) optimizer. The initial learning rate of each brain tumor segmentation network is 1.003 × 10⁻⁴; the network parameters are adjusted by gradient back-propagation to reduce the corresponding loss function.
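For reference, a single ADAM update at the quoted initial learning rate of 1.003 × 10⁻⁴ looks as follows; the β1, β2 and ε values are the standard defaults, which the text does not specify:

```python
import numpy as np

def adam_step(w, grad, state, lr=1.003e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM parameter update with bias-corrected moment estimates."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, (m, v, t)
```

Iterating this step on gradients of the combined loss is what "adjusting the network parameters by gradient back-propagation" amounts to in practice.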
A brain glioma segmentation system based on a cascade neural network structure, the basic structural framework and the work flow of which are shown in figure 1, is characterized by comprising:
and the three-dimensional global feature extraction module, used for producing multi-scale features carrying global information; the three-dimensional global feature extraction module further comprises:
a multi-scale residual module for generating multi-scale depth residual features;
and a global feature extraction module for extracting global features from the deepest-level features generated by the multi-scale residual module.
And the multi-class segmentation result generation module, used for generating high-quality, accurate multi-class tumor tissue regions; the multi-class segmentation result generation module further comprises:
a WT segmentation and edge-detection network for generating high-quality tissue segmentation and edges of the glioma whole tumor; a dovetail-structure module at the tail of the network splits into two paths, simultaneously generating the segmentation result and extracting the edges;
and a cascaded segmentation network for generating high-quality tissue segmentation of the glioma core region and enhancing region; the whole-tumor segmentation result is introduced before decoding, and the tumor-core and enhancing-region tissue segmentations are generated from the global multi-scale features.
The loss-function calculation module is used for calculating the loss function of the accurate brain glioma segmentation network;
and the network training module is used for sufficiently and iteratively training the accurate brain glioma segmentation network, obtaining the trained network for extracting the segmentation results and the tumor edge.
The workflow is as follows: the three-dimensional global feature extraction module, comprising the multi-scale residual module and the global feature extraction module, outputs multi-scale features carrying global information; taking these multi-scale features as input, the WT segmentation and edge-detection network outputs the WT segmentation result and edge; taking the multi-scale features and the WT segmentation result as input, the cascaded segmentation network outputs the TC and ET tissue segmentations, which are finally fused into the multi-class tumor-region segmentation. The multi-class segmentation result and the WT edge serve as constraints for iteratively updating the whole network, yielding higher precision.
3. Advantages and effects: the invention provides a brain glioma segmentation method based on a cascaded neural network structure, which takes a cascaded neural network as the basic framework, extracts global multi-scale depth features through a residual encoding network, and fully injects global attention information into the multi-scale features; a cascade network generates the multi-class tumor tissue segmentation and extracts the edge contour of the whole tumor; generating the whole tumor, tumor core region and enhancing region in sequence fully exploits the spatial position information among the classes; and training the network with an edge loss function lets it refine the segmentation precision in tumor edge regions, further improving multi-class accuracy. The method can be combined with various medical-image-based application systems, helps improve the segmentation quality of multi-modal images, and has broad market prospects and application value.
Drawings
Fig. 1 is a basic structural framework and a workflow of a glioma segmentation network of a cascaded neural network structure proposed by the present invention.
FIG. 2 is a three-dimensional global feature extraction module.
Fig. 3 is the WT segmentation and edge-detection network.
Fig. 4 is a cascaded split network.
Fig. 5a-f show the multi-class tumor segmentation effect of the present invention under different disease conditions, wherein 5a, 5c, and 5e are the true values corresponding to the inputted multi-modal images, and 5b, 5d, and 5f are the multi-class tumor segmentation results outputted by the present invention.
Detailed Description
For better understanding of the technical solutions of the present invention, the following further describes embodiments of the present invention with reference to the accompanying drawings.
The invention relates to a brain glioma segmentation method based on a cascade neural network structure, wherein an algorithm framework and a network structure are shown in figure 1, and the specific implementation steps of each part are as follows:
the method comprises the following steps: extracting global multi-scale features by using a three-dimensional global feature extraction module, wherein the basic structure of the three-dimensional global feature extraction module is shown in FIG. 2;
step two: segment the whole-tumor tissue region and edge with the whole-tumor edge segmentation network, whose basic structure is shown in fig. 3; based on the whole-tumor tissue segmentation result, segment the glioma core-region and enhancing-region tissue with the cascaded segmentation network, as shown in fig. 4;
step three: constructing an edge loss function to train the whole network;
Output: extract the various tumor tissues using the trained glioma segmentation network. After sufficient iterative training of the glioma segmentation network with the training data, the trained glioma segmentation network is obtained and used to extract the various tumor tissues;
the first step is as follows:
1.1: extract feature maps of the four modality images through the three-dimensional residual feature network. The four modalities comprise T1, T1Gd, T2 and T2-FLAIR. The three-dimensional residual feature network is composed of residual modules; its input is an image block formed from the four modality images and its output is multi-scale features, which are used to reconstruct the whole-tumor edge information and tissue segmentation results; extracting multi-scale features preserves the texture and spatial information of the images. The input first passes through a convolution layer with kernel size 3 × 3 × 3 and stride 1, after which two residual blocks with kernel size 3 × 3 × 3 extract the first-level scale features; next, a convolution layer with kernel size 3 × 3 × 3 and stride 2 performs downsampling, and two residual blocks extract the second-level scale features; the third-level scale features are extracted in the same way; finally, downsampling through a convolution layer with kernel size 3 × 3 × 3 and stride 2 is followed by four residual blocks that extract the fourth-level scale features.
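With a stride-2 convolution between consecutive levels, the spatial extent of the feature maps halves from one scale to the next; a small helper makes the four scales explicit (the 128³ input size used below is an illustrative choice, not a value from the text):

```python
def encoder_scales(d, h, w, levels=4):
    """Spatial size of each scale level: level 1 is full resolution,
    each later level halves every spatial dimension (stride-2 downsampling)."""
    return [(d // 2 ** i, h // 2 ** i, w // 2 ** i) for i in range(levels)]
```

A 128 × 128 × 128 input block therefore produces feature maps of 128³, 64³, 32³ and 16³ voxels at the four levels.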
1.2: reconstructing the feature maps with a global attention module; inter-layer feature extraction and channel feature extraction are first performed on the highest-level abstract features produced by the three-dimensional residual feature network, and global feature reconstruction is carried out by fusion. Specifically, the highest-level abstract features are first passed through two spatial mappings to obtain mapped features in different spaces, and the mapped features are multiplied together; the global features are then obtained through pixel-level node extraction followed by inter-layer relation extraction.
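Multiplying two spatial mappings of the same features to form pairwise relations is in the spirit of non-local (global) attention; the numpy sketch below is a minimal illustration under that assumption, not the patent's exact module. The weight matrices `w_theta`, `w_phi` and `w_g` stand in for the two learned spatial mappings and a value projection:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_attention(x, w_theta, w_phi, w_g):
    # x: (C, N) feature map flattened over its spatial positions.
    # The two mapped features are multiplied to form a pairwise
    # affinity over positions, as the text describes.
    theta, phi, g = w_theta @ x, w_phi @ x, w_g @ x
    affinity = softmax(theta.T @ phi, axis=-1)   # (N, N) position relations
    return g @ affinity.T                        # re-weighted global features
```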
Wherein, the second step is as follows:
2.1: extracting the whole-tumor segmentation result and edge through a WT segmentation and edge detection network; the WT segmentation and edge detection network is composed of a shared-parameter decoder and two decoding branches: using the global multi-scale features generated in step one, it decodes uniformly through the shared decoder and finally splits into two branches to obtain the whole-tumor tissue segmentation and edge extraction. The shared-parameter decoder consists of convolution layers and upsampling: the first level comprises a convolution layer with kernel size 3 × 3 × 3 and stride 1 and 2× upsampling; the second level consists of a residual block, a convolution layer with kernel size 3 × 3 × 3 and stride 1, and upsampling; the third level consists of a residual block and a convolution layer with kernel size 3 × 3 × 3 and stride 1. The whole-tumor segmentation branch consists of residual blocks and a convolution layer with kernel size 1 × 1 × 1 and stride 1; the edge extraction branch likewise consists of residual blocks and a convolution layer with kernel size 1 × 1 × 1 and stride 1.
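The split into one shared decoded feature map feeding two parallel 1 × 1 × 1 output heads can be sketched as follows (`wt_seg_and_edge` is a hypothetical name); a 1 × 1 × 1 convolution is simply a per-voxel linear map over channels:

```python
import numpy as np

def conv1x1(x, w):
    # a 1x1x1 convolution: per-voxel linear map over channels
    # x: (C, D, H, W), w: (C_out, C) -> (C_out, D, H, W)
    return np.tensordot(w, x, axes=([1], [0]))

def wt_seg_and_edge(shared_features, w_seg, w_edge):
    # one shared decoded feature map feeds two parallel heads:
    # whole-tumor segmentation logits and whole-tumor edge logits
    return conv1x1(shared_features, w_seg), conv1x1(shared_features, w_edge)
```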
2.2: extracting the segmentation results of the tumor core region and tumor enhancing region contained in the whole tumor through a cascade network; on the basis of the global multi-scale features generated in step one, the cascade network introduces the whole-tumor tissue segmentation result generated in step 2.1 and produces the final accurate segmentation of the tumor core region and tumor enhancing region by fusion decoding. The whole-tumor segmentation result is first downsampled to 1/8 of its original size and multiplied with the highest-level global features generated by the encoder; the segmentation results of the tumor core region and tumor enhancing region are finally generated through convolution, residual blocks and upsampling.
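The fusion step just described, downsampling the whole-tumor map to 1/8 size and multiplying it with the deepest encoder features, can be sketched as below; nearest-neighbour downsampling by strided slicing is an assumption, since the patent does not state the interpolation mode:

```python
import numpy as np

def cascade_gate(wt_mask, deep_features):
    # wt_mask: (D, H, W) whole-tumor probability map at full resolution
    # deep_features: (C, D//8, H//8, W//8) highest-level encoder features
    mask_small = wt_mask[::8, ::8, ::8]       # nearest-neighbour 1/8 downsample
    return deep_features * mask_small[None]   # broadcast over channels
```

Multiplying by the mask suppresses deep features outside the whole tumor, so the cascaded decoder only has to distinguish core and enhancing tissue inside it.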
Wherein the third step is as follows:
3.1: the loss function of the accurate brain glioma segmentation network consists of two parts: a region loss, composed of the Dice loss and the Tversky loss between the segmentation result and the reference segmentation, and an edge loss between the whole-tumor edge detection result and the reference edge; the loss function of the segmentation network is expressed as L = αL_region + βL_edge, where L_region denotes the region loss, L_edge denotes the edge loss, and α and β are their corresponding weighting coefficients, with α = 1 and β = 0.1;
in the region loss L_region, p_0i denotes the probability that voxel i is tumor, p_1i denotes the probability that voxel i is non-tumor, N denotes the number of voxels, and γ = 0.7; in the edge loss L_edge, y_n denotes the whole-tumor edge detection result output by the network and ŷ_n denotes the corresponding reference edge;
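The printed expressions for L_region and L_edge do not survive in this text. The sketch below therefore uses the standard Tversky loss, which the definitions of p_0i, p_1i and γ = 0.7 match (it reduces to the Dice loss at γ = 0.5), and assumes a binary cross-entropy form for the edge loss; both choices are assumptions, not the patent's verbatim formulas:

```python
import numpy as np

def tversky_loss(p_tumor, g_tumor, gamma=0.7, eps=1e-6):
    # Standard Tversky loss: p_tumor is p_0 (tumor probability),
    # 1 - p_tumor is p_1; gamma trades off false positives vs false negatives.
    tp = (p_tumor * g_tumor).sum()
    fp = (p_tumor * (1 - g_tumor)).sum()
    fn = ((1 - p_tumor) * g_tumor).sum()
    return 1 - tp / (tp + gamma * fp + (1 - gamma) * fn + eps)

def edge_bce(y, y_true, eps=1e-6):
    # assumed binary cross-entropy between predicted and reference edge maps
    y = np.clip(y, eps, 1 - eps)
    return -np.mean(y_true * np.log(y) + (1 - y_true) * np.log(1 - y))

def total_loss(seg_p, seg_g, edge_p, edge_g, alpha=1.0, beta=0.1):
    # L = alpha * L_region + beta * L_edge, with alpha = 1 and beta = 0.1
    return alpha * tversky_loss(seg_p, seg_g) + beta * edge_bce(edge_p, edge_g)
```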
3.2: the invention adopts an ADAM optimizer for optimization, and the learning rate of the accurate brain glioma segmentation network is 1.003 × 10⁻⁴; the network parameters are adjusted through gradient back-propagation to reduce the corresponding loss functions and thus better guide the network.
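A single ADAM update with the stated learning rate can be sketched as follows; the β₁, β₂ and ε defaults are assumptions, since the patent only states the learning rate:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1.003e-4, b1=0.9, b2=0.999, eps=1e-8):
    # one ADAM update: biased first/second moment estimates, bias correction,
    # then a step scaled by the stated learning rate
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```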
In order to visually demonstrate the effect of the present invention, figs. 5a-f show the multi-class brain glioma segmentation results of the present invention for different patients, wherein 5a, 5c and 5e are the ground-truth data corresponding to the input original multi-modal data, and 5b, 5d and 5f are the output multi-class brain glioma segmentations. As can be seen from the figures, the multi-class segmentation output by the invention effectively segments the multi-class tissue regions of the glioma under the constraint of the edge information, and the edge segmentation of the tumor region is markedly improved. The method takes the cascaded neural network as its basic framework, exploits the edge information of the whole tumor, and combines a global feature extraction module with residual connections to achieve high-quality multi-class glioma tissue segmentation; it can be applied in various targeted-radiotherapy application systems.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111404516.9A CN114170244B (en) | 2021-11-24 | 2021-11-24 | Brain glioma segmentation method based on cascade neural network structure |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114170244A true CN114170244A (en) | 2022-03-11 |
CN114170244B CN114170244B (en) | 2024-05-28 |
Family
ID=80480484
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114882047A (en) * | 2022-04-19 | 2022-08-09 | 厦门大学 | Medical image segmentation method and system based on semi-supervision and Transformers |
CN114937171A (en) * | 2022-05-11 | 2022-08-23 | 复旦大学 | Alzheimer's classification system based on deep learning |
CN115082500A (en) * | 2022-05-31 | 2022-09-20 | 苏州大学 | Corneal nerve fiber segmentation method based on multi-scale and local feature guide network |
CN117611806A (en) * | 2024-01-24 | 2024-02-27 | 北京航空航天大学 | A positive prediction system for prostate cancer surgical margins based on imaging and clinical features |
CN117953027A (en) * | 2024-03-22 | 2024-04-30 | 首都医科大学附属北京天坛医院 | DWI-FLAIR mismatch evaluation method, device, medium and product |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110084823A (en) * | 2019-04-18 | 2019-08-02 | 天津大学 | Three-dimensional brain tumor image partition method based on cascade anisotropy FCNN |
CN110120033A (en) * | 2019-04-12 | 2019-08-13 | 天津大学 | Based on improved U-Net neural network three-dimensional brain tumor image partition method |
CN110689543A (en) * | 2019-09-19 | 2020-01-14 | 天津大学 | Improved convolutional neural network brain tumor image segmentation method based on attention mechanism |
WO2020108525A1 (en) * | 2018-11-30 | 2020-06-04 | 腾讯科技(深圳)有限公司 | Image segmentation method and apparatus, diagnosis system, storage medium, and computer device |
CN112215850A (en) * | 2020-08-21 | 2021-01-12 | 天津大学 | A Cascaded Atrous Convolutional Network for Brain Tumor Segmentation with Attention Mechanism |
CN112837276A (en) * | 2021-01-20 | 2021-05-25 | 重庆邮电大学 | A glioma segmentation method based on cascaded deep neural network model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||