CN114170244A - A glioma segmentation method based on cascaded neural network structure - Google Patents

A glioma segmentation method based on a cascaded neural network structure

Info

Publication number
CN114170244A
CN114170244A
Authority
CN
China
Prior art keywords
segmentation
tumor
network
region
edge
Prior art date
Legal status
Granted
Application number
CN202111404516.9A
Other languages
Chinese (zh)
Other versions
CN114170244B (en)
Inventor
白相志
王元元
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202111404516.9A priority Critical patent/CN114170244B/en
Publication of CN114170244A publication Critical patent/CN114170244A/en
Application granted granted Critical
Publication of CN114170244B publication Critical patent/CN114170244B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention provides a brain glioma segmentation method based on a cascaded neural network structure, which comprises the following steps. Step one: generate a high-precision tumor region segmentation result using a brain glioma segmentation network with a cascaded neural network structure. Step two: from the multi-scale residual features and global features, on the one hand use the segmentation and edge detection network to generate the whole-tumor segmentation result and its edge detection result; on the other hand, design a cascaded network to generate the tumor core region and tumor enhancement region segmentation results under the preliminary whole-tumor segmentation result. Step three: construct a loss function to train the brain glioma segmentation network. Output: perform tumor region segmentation on the original multi-modal images using the trained brain glioma segmentation network with the cascaded neural network structure. The method can be combined with various medical-image-based application systems, helps improve the segmentation quality of multi-modal images, and has broad market prospects and application value.

Description

Brain glioma segmentation method based on cascade neural network structure
Technical Field
The invention relates to a brain glioma segmentation method based on a cascaded neural network structure and belongs to the fields of digital image processing, pattern recognition and computer vision. Medical image segmentation has broad application prospects in systems for image-guided interventional diagnosis and targeted radiotherapy.
Background
Gliomas are the most common primary brain malignancies, with varying degrees of invasiveness, prognosis and heterogeneous regions. Brain glioma segmentation generally refers to segmenting the tumor region from a multi-modal magnetic resonance sequence. Such segmentation can effectively extract the tumor's heterogeneous regions (the whole tumor region, the tumor core region and the tumor enhancement region), helping doctors make accurate judgments. Compared with ordinary color images, medical image segmentation is more challenging because of the noise, blur and low contrast introduced by the imaging process. In addition, the complexity of brain tissue structure and the variability of a brain tumor's spatial position and morphological size make accurate glioma segmentation difficult.
Medical image segmentation algorithms are generally divided into conventional machine learning methods and deep learning methods. A typical representative of conventional machine learning methods for medical image segmentation is the region-based approach, which treats intensity discontinuities between a region and the target object as edges and is effective against problems such as under-segmentation, over-segmentation and false edges. Yang et al. proposed an improved gradient-threshold edge detector. The method incorporates basic characteristics of the human visual system and accurately determines local mask areas for edges of arbitrary shape according to image content; the gradient image is masked with local image brightness and activity before edge markers are determined. Experimental results show that the edge images obtained by the algorithm agree better with perceived edges (see: Yang et al., An improved method for gradient-threshold edge detectors based on HVS, Computational Intelligence and Security, Springer Berlin Heidelberg, 2005, 1051-). A recent work by Su et al. segments the carpal bones in X-ray images with a multi-stage approach: foreground regions and edge maps are extracted with adaptive local thresholding and adaptive Canny edge detection.
The edge map and the foreground region are then integrated through an XOR operation. Over-segmentation is resolved by adding background boundaries to the edge map near the carpal boundary; under-segmentation is handled by adding foreground boundaries to the edge map near the carpal boundary, closing the foreground lost to under-segmentation; non-closed edges and false edges in the edge map are supplemented by the carpal regions obtained from local adaptive thresholding (see: Su L, Fu X, Zhang X, Cheng X, Ma Y, Gan Y, Hu Q., Delineation of carpal bones from hand X-ray images through prior model, and integration of region-based and boundary-based segmentations, IEEE Access, 2018; 6:19993-20008). Among recent threshold-based methods, Ilhan et al. use thresholding to diagnose brain tumors in grayscale MRI images. The technique identifies edges using morphology (erosion and dilation) and then subtracts the generated image from the original image to obtain the result (see: Ilhan U, Ilhan A., Brain tumor segmentation based on a new threshold approach, Procedia Computer Science, 2017; 120:580-587).
In recent years, with the rapid development of deep learning techniques, deep-learning-based methods have been applied to medical image segmentation. Such methods overcome the drawbacks of hand-crafted feature extraction, making it possible to build large trainable models that learn the representations required for a given task.
To adapt a convolutional network to a variety of test images, Wang et al. proposed a fine-tuning algorithm (see: Wang G. et al., Interactive medical image segmentation using deep learning with image-specific fine-tuning, IEEE Transactions on Medical Imaging, 2018, 1562-). Zhang et al. incorporated multi-scale features into the model through auxiliary classification paths, enabling the network to exploit multi-scale information (see: Zhang et al., Automatic segmentation of acute ischemic stroke from DWI using 3D fully convolutional dense networks, IEEE Transactions on Medical Imaging, 2018, 2149-). Punn et al. proposed a three-dimensional brain tumor segmentation framework based on 3D U-Net. The proposed architecture is divided into three parts: multi-modal fusion, tumor extractor and tumor segmenter. The structure fuses the magnetic resonance sequences with deep encoded fusion, learns tumor patterns with a 3D inception U-Net model on the fused modalities, and finally decodes the multi-scale extracted features into multiple types of tumor regions (see: N. S. Punn and S. Agarwal, Multi-modality encoded fusion with 3D inception U-Net and decoder model for brain tumor segmentation, Multimedia Tools and Applications, pp. 1-16, 2020).
However, most current medical image segmentation methods based on convolutional neural networks extract the region of interest from the gray-level information of the image alone: edge information is under-used, and the spatial relationship between classes in multi-class segmentation tasks is not considered, so the 3D segmentation is not fine enough and the precision is low. The invention holds that edge information and inter-class spatial relationships are equally valuable: edge information can effectively alleviate the blurred-boundary problem in segmentation, and the inter-class spatial relationship provides important information for multi-class segmentation. On this basis, the invention proposes a novel approach: a brain glioma segmentation method based on a cascaded neural network structure. The novel network structure extracts global depth features and adopts a cascade structure together with a whole-tumor segmentation and edge detection structure, so the algorithm fully exploits inter-class spatial relationships and edge features and effectively improves the segmentation quality of the multiple tumor classes.
Disclosure of Invention
1. Purpose: in view of the above problems, the invention aims to provide a glioma segmentation method based on a cascaded neural network structure for analyzing and studying the image characteristics of gliomas. The method fully extracts global multi-scale attention features from multi-modal medical images, uses two decoding sequences with a cascade relationship, then exploits the extracted whole-tumor edge feature information to effectively improve the segmentation quality and stability of brain glioma segmentation, and finally outputs high-quality whole tumor regions (WT), tumor core regions (TC) and tumor enhancement regions (ET) corresponding to the input images.
2. Technical scheme: to achieve this purpose, the overall idea of the technical scheme is to use a three-dimensional feature extraction network to generate global multi-scale depth features, use the WT segmentation and edge detection network to generate the whole-tumor edge and its segmentation result, then use the cascaded segmentation network to generate accurate TC and ET segmentation results, and use edge loss and region loss to continuously improve the performance of the glioma segmentation network. The technical idea of the algorithm is mainly embodied in the following four aspects:
1) A residual module is used to improve network expressiveness and convergence speed.
2) A global multi-scale feature generation module produces multi-level residual features with global attention information.
3) A WT segmentation and edge detection network generates the whole-tumor edge and its segmentation result, and effectively uses edge loss to improve segmentation precision.
4) The cascaded neural network model fully exploits the inter-class relationships and edge features of multi-class segmentation, outputs the TC and ET tissue regions, and reconstructs a high-quality multi-class segmentation result.
The invention relates to a brain glioma segmentation method based on a cascade neural network structure, which comprises the following specific steps:
Step one: extract deep global multi-scale features using a convolutional neural network based on residual modules. First, perform data augmentation on the four input modalities of the brain tumor to form data blocks; second, perform multi-scale feature extraction with a multi-level three-dimensional residual feature extraction network; third, extract global features from the deepest-level features with a global attention module; finally, output the global features and the shallow multi-level features along multiple paths to the subsequent multi-class segmentation network and edge generation network;
Step two: generate the final multi-class segmentation results and whole-tumor edge results from the multi-scale residual features and global features. On the one hand, the WT segmentation and edge detection network generates the whole-tumor segmentation result and its edge detection result; on the other hand, a cascaded network is designed to generate the tumor core region and tumor enhancement region segmentation results under the whole-tumor segmentation result;
Step three: construct a loss function to train the brain glioma segmentation network end to end;
Output: perform tumor region segmentation on the original multi-modal images using the trained brain glioma segmentation network with the cascaded neural network structure. After sufficient iterative training of the segmentation network with the training data, the trained tumor segmentation network is obtained and used to extract the multiple types of tumor tissue.
The first step is as follows:
1.1: Extract feature maps of the four modality images with a three-dimensional residual feature network. The four modalities are T1-weighted imaging (T1), post-contrast T1-weighted imaging (T1ce), T2-weighted imaging (T2) and T2 fluid-attenuated inversion recovery imaging (T2-FLAIR). First, apply random-flip and random-crop data augmentation to the multi-modal input data; second, extract multi-modal features with the multi-level three-dimensional residual feature network built from residual modules, outputting multi-scale features. Finally, output the multi-level three-dimensional scale features along multiple paths for reconstructing the multi-class segmentation results and the edge extraction results. Extracting multi-scale features preserves the texture and spatial information of the image;
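Step 1.1 above mentions random flipping and random block cropping as the augmentation operations. A minimal numpy sketch of that augmentation is given below; the patch size is a hypothetical example, since the patent does not state one:

```python
import numpy as np

def augment(volume, patch=(128, 128, 128), rng=None):
    """Random flips plus a random crop of a multi-modal MRI block.

    `volume` has shape (4, D, H, W): the four modalities stacked on the
    first axis.  `patch` is a hypothetical training-patch size.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Random inversion (flip) along each spatial axis with probability 0.5.
    for axis in (1, 2, 3):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
    # Random block cutting: crop a patch at a random position.
    _, d, h, w = volume.shape
    pd, ph, pw = patch
    z = rng.integers(0, d - pd + 1)
    y = rng.integers(0, h - ph + 1)
    x = rng.integers(0, w - pw + 1)
    return volume[:, z:z + pd, y:y + ph, x:x + pw]
```

In practice this would be applied once per training iteration so the network sees a different block each time.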
1.2: Reconstruct the feature map with a global attention module. First take the deepest-level abstract features generated by the three-dimensional residual feature network, map them into two spaces and fuse them to obtain fusion features; then extract the relationships between pixels and the relationships between layers of pixels; finally output the global features.
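The global attention module described above maps the deepest features into two spaces and extracts voxel-to-voxel relations. A simplified non-local-attention sketch in numpy is shown below; the projection matrices `wq` and `wk` are hypothetical stand-ins for the two learned space mappings, as the patent does not give the exact operations:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_attention(feat, wq, wk):
    """Simplified non-local (global) attention over the deepest features.

    feat: (C, N) with N = D*H*W flattened voxels.  wq, wk: (C, C)
    hypothetical projections into the two spaces mentioned in the text.
    Every output voxel aggregates information from every other voxel.
    """
    q = wq @ feat                     # (C, N): first space
    k = wk @ feat                     # (C, N): second space
    attn = softmax(q.T @ k, axis=-1)  # (N, N): voxel-to-voxel relations
    return feat @ attn.T              # (C, N): globally re-weighted features
```

The quadratic (N, N) relation matrix is why such modules are applied only at the deepest, smallest scale level.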
Wherein, the second step is as follows:
2.1: Extract the whole-tumor segmentation result and edge through the WT segmentation and edge detection network. The network consists of a shared-parameter decoder and two decoding branches: using the global multi-scale features generated in step one, it decodes them uniformly through the shared decoder and finally splits into two branches to obtain the WT segmentation and edge extraction results, respectively;
2.2: Extract the segmentation results of the tumor core region and tumor enhancement region contained in the whole tumor through the cascaded network. Starting from the global multi-scale features generated in step one, the cascaded network introduces the whole-tumor segmentation result generated in step 2.1 and produces the final accurate segmentation results of the tumor core region and tumor enhancement region by fused decoding.
Wherein the third step is as follows:
3.1: The loss function of the brain glioma segmentation network consists of two parts: a region loss composed of the Dice loss and the Tversky loss between the segmentation result and the reference segmentation, and an edge loss between the whole-tumor edge detection result and the reference edge. The segmentation network loss function is L = α·L_region + β·L_edge, where L_region denotes the region loss, L_edge denotes the edge loss, and α and β are their corresponding weighting coefficients. The region loss L_region is

L_region = 1 - Σ_i p_0i·g_0i / (Σ_i p_0i·g_0i + γ·Σ_i p_0i·g_1i + (1-γ)·Σ_i p_1i·g_0i + ε_0)

where p_0i is the probability that voxel i is tumor, p_1i the probability that voxel i is non-tumor, g_0i = 1 marks a tumor voxel and g_0i = 0 a non-tumor voxel, g_1i is the complement of g_0i, N is the number of voxels, γ = 0.7, and ε_0 is a non-zero smoothing term set to 1×10⁻⁷. The edge loss L_edge is the binary cross-entropy

L_edge = -(1/N) Σ_n [ŷ_n·log(y_n) + (1-ŷ_n)·log(1-y_n)]

where y_n is the whole-tumor edge detection result output by the network and ŷ_n is the corresponding ground truth;
3.2: Optimize with the Adaptive Moment Estimation (ADAM) optimizer. The initial learning rate of each brain tumor segmentation network is 1.003×10⁻⁴; network parameters are adjusted by gradient back-propagation to reduce the corresponding loss function.
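For reference, a single ADAM update at the quoted initial learning rate can be sketched in numpy as below; the moment coefficients are the standard ADAM defaults, which the patent does not specify:

```python
import numpy as np

def adam_step(theta, grad, state, lr=1.003e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update.  `state` holds running moments and the step count;
    pass {} on the first call."""
    m = state.get("m", np.zeros_like(theta))
    v = state.get("v", np.zeros_like(theta))
    t = state.get("t", 0) + 1
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    state.update(m=m, v=v, t=t)
    return theta, state
```

In a full training loop, `grad` would be the back-propagated gradient of the loss L with respect to the network parameters.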
A brain glioma segmentation system based on a cascaded neural network structure, whose basic structural framework and workflow are shown in figure 1, characterized by comprising:
The three-dimensional global feature extraction module, used to produce multi-scale features with global information. It further comprises:
a multi-scale residual module for generating multi-scale depth residual features;
a global feature extraction module for extracting global features from the deepest-level features generated by the multi-scale residual module.
The multi-class segmentation result generation module, used to generate high-quality, accurate multi-class tumor tissue regions. It further comprises:
the WT segmentation and edge detection network, for generating high-quality tissue segmentation and edges of the whole glioma tumor; a dovetail (two-branch) module at the tail of the network splits into two paths, simultaneously generating the segmentation result and extracting edges;
the cascaded segmentation network, for generating high-quality tissue segmentation of the glioma core and enhancement regions; it introduces the whole-tumor segmentation result before decoding and generates the tumor core region and tumor enhancement region tissue segmentation using the global multi-scale features.
The loss function calculation module, used to calculate the loss function of the brain glioma segmentation network;
The network training module, used to fully and iteratively train the brain glioma segmentation network, obtaining the trained network for extracting the segmentation results and tumor edges.
The workflow is as follows: the three-dimensional global feature extraction module, comprising the multi-scale residual module and the global feature extraction module, outputs multi-scale features with global information; taking these multi-scale features as input, the WT segmentation and edge detection network outputs the WT segmentation result and edge; taking the multi-scale features and the WT segmentation result as input, the cascaded segmentation network outputs the TC and ET tissue segmentation results, which are finally fused into the multi-class tumor region segmentation. The multi-class segmentation results and the WT edge serve as constraints to iteratively update the whole network and obtain higher precision.
3. Advantages and effects: the invention provides a brain glioma segmentation method that takes a cascaded neural network as its basic framework, extracts global multi-scale depth features through a residual encoding network, and fully injects global attention information into the multi-scale features; a cascaded network is designed to generate the multi-class tumor tissue segmentation results and extract the edge contour of the whole tumor; by generating the whole tumor, tumor core region and enhancement region in sequence, the spatial position information among the classes is fully exploited; by training the network with an edge loss function, the segmentation precision of the tumor edge region is refined, further improving multi-class segmentation precision. The method can be combined with various medical-image-based application systems, helps improve the segmentation quality of multi-modal images, and has broad market prospects and application value.
Drawings
Fig. 1 is a basic structural framework and a workflow of a glioma segmentation network of a cascaded neural network structure proposed by the present invention.
FIG. 2 is a three-dimensional global feature extraction module.
Fig. 3 is a WT tumor margin segmentation network.
Fig. 4 is a cascaded split network.
Fig. 5a-f show the multi-class tumor segmentation effect of the present invention under different disease conditions, wherein 5a, 5c, and 5e are the true values corresponding to the inputted multi-modal images, and 5b, 5d, and 5f are the multi-class tumor segmentation results outputted by the present invention.
Detailed Description
For better understanding of the technical solutions of the present invention, the following further describes embodiments of the present invention with reference to the accompanying drawings.
The invention relates to a brain glioma segmentation method based on a cascade neural network structure, wherein an algorithm framework and a network structure are shown in figure 1, and the specific implementation steps of each part are as follows:
the method comprises the following steps: extracting global multi-scale features by using a three-dimensional global feature extraction module, wherein the basic structure of the three-dimensional global feature extraction module is shown in FIG. 2;
Step two: segment the tissue region and edge of the whole tumor with the whole-tumor edge segmentation network, whose basic structure is shown in fig. 3; based on the whole-tumor tissue segmentation result, segment the glioma core region and enhancement region tissue with the cascaded segmentation network, as shown in fig. 4;
step three: constructing an edge loss function to train the whole network;
Output: extract the various tumor tissues using the trained glioma segmentation network. After sufficient iterative training of the glioma segmentation network with the training data, the trained network is obtained and used to extract the various tumor tissues;
the first step is as follows:
1.1: Extract feature maps of the four modality images with the three-dimensional residual feature network; the four modalities are T1, T1Gd, T2 and T2-FLAIR. The network is built from residual modules; its input is an image block composed of the four modality images and its output is multi-scale features, which are used to reconstruct the whole-tumor edge information and tissue segmentation results; extracting multi-scale features preserves the texture and spatial information of the images. The input first passes through a convolution layer with a 3×3×3 kernel and stride 1, then two residual blocks with 3×3×3 kernels extract the first-level scale features; a convolution layer with a 3×3×3 kernel and stride 2 then downsamples, and two residual blocks extract the second-level scale features; the third-level scale features are extracted in the same way; finally, downsampling through a 3×3×3, stride-2 convolution layer followed by four residual blocks yields the fourth-level scale features.
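Given the three stride-2 downsamplings between the four scale levels described above, the spatial size at each level can be checked with a short helper. The 128-voxel patch side used here is a hypothetical example, since the patent does not state an input size:

```python
def encoder_scales(side, levels=4):
    """Spatial side length at each of the encoder's scale levels.

    Stride-1 3x3x3 convolutions and residual blocks preserve the size
    (assuming padding 1); each stride-2 convolution halves it.
    """
    sizes = [side]
    for _ in range(levels - 1):
        side = (side - 3 + 2) // 2 + 1  # kernel 3, stride 2, padding 1
        sizes.append(side)
    return sizes
```

With a side of 128 the levels are [128, 64, 32, 16]: the deepest features sit at 1/8 of the original size, which matches the downsampling factor applied to the whole-tumor mask in step 2.2.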
1.2: Reconstruct the feature map with the global attention module. Inter-layer feature extraction and channel feature extraction are performed on the deepest-level abstract features generated by the three-dimensional residual feature network, and global features are reconstructed by fusion: first the deepest abstract features are mapped into two spaces to obtain mapped features, which are multiplied together; then pixel-level node extraction and inter-layer relation extraction are applied in turn to obtain the global features.
Wherein, the second step is as follows:
2.1: Extract the whole-tumor segmentation result and edge through the WT segmentation and edge detection network. The network consists of a shared-parameter decoder and two decoding branches; using the global multi-scale features from step one, it decodes uniformly through the shared decoder and finally splits into two branches to obtain the whole-tumor tissue segmentation and edge extraction. The shared-parameter decoder consists of convolution layers and upsampling: the first level uses a 3×3×3 convolution kernel with stride 1 and an upsampling factor of 2; the second level consists of a residual block, a 3×3×3, stride-1 convolution layer and upsampling; the third level consists of a residual block and a 3×3×3, stride-1 convolution layer. The whole-tumor segmentation branch consists of residual blocks and a 1×1×1, stride-1 convolution layer; the edge extraction branch likewise consists of residual blocks and a 1×1×1, stride-1 convolution layer.
2.2: extracting the segmentation results of the tumor core region and the tumor enhancement region contained within the whole tumor through the cascade network; based on the global multi-scale features generated in step one, the cascade network introduces the whole-tumor segmentation result generated in step 2.1 and produces the final accurate segmentation results of the tumor core region and tumor enhancement region by fusion decoding. First, the whole-tumor segmentation result is downsampled to 1/8 of its original size and multiplied with the highest-level global features generated by the encoder; the tumor core region and tumor enhancement region segmentation results are then generated through convolution, residual blocks, and upsampling.
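The downsample-and-multiply gating step above can be sketched in NumPy; block max-pooling is an assumed downsampling scheme (the patent only says the mask is reduced to 1/8 of its original size), and the shapes are illustrative.

```python
import numpy as np

def downsample_mask(mask, f=8):
    """Reduce a binary WT mask by a factor f per axis via block max-pooling."""
    d, h, w = mask.shape
    blocks = mask.reshape(d // f, f, h // f, f, w // f, f)
    return blocks.max(axis=(1, 3, 5))    # keep a block if it contains any tumor voxel

def gate_features(feats, wt_mask):
    """Multiply encoder features (C, D/8, H/8, W/8) by the downsampled WT mask."""
    return feats * downsample_mask(wt_mask)[None]   # broadcast over channels
```

This restricts the cascade branch's decoding to regions the whole-tumor stage already marked as tumor, which is how the TC/ET segmentation inherits the coarse WT result.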
Wherein the third step is as follows:
3.1: the loss function of the accurate brain neural tumor segmentation network consists of two parts: a region loss, formed by the Dice loss and the Tversky loss between the segmentation result and the reference segmentation, and an edge loss between the whole-tumor edge detection result and the reference edge. The loss function of the segmentation network is expressed as: L = αL_region + βL_edge, where L_region denotes the region loss, L_edge denotes the edge loss, and α and β are their corresponding weighting coefficients, with α = 1 and β = 0.1. The region loss L_region is expressed as:

L_region = 1 − Σ_{i=1}^{N} p_{0i} g_{0i} / (Σ_{i=1}^{N} p_{0i} g_{0i} + γ Σ_{i=1}^{N} p_{0i} g_{1i} + (1 − γ) Σ_{i=1}^{N} p_{1i} g_{0i})

where p_{0i} denotes the probability that voxel i is tumor, p_{1i} denotes the probability that voxel i is non-tumor, g_{0i} and g_{1i} denote the corresponding reference labels, N denotes the number of voxels, and γ = 0.7. The edge loss L_edge is expressed as:

L_edge = −(1/N) Σ_{n=1}^{N} [ŷ_n log y_n + (1 − ŷ_n) log(1 − y_n)]

where y_n denotes the whole-tumor edge detection result output by the network and ŷ_n denotes the corresponding ground-truth value;
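The two loss terms and their weighted combination can be sketched as follows; the Tversky form and the cross-entropy edge loss are assumptions consistent with the listed symbols (p_{0i}, p_{1i}, γ, y_n) rather than a verbatim transcription of the patent's formula images, while the weights α = 1, β = 0.1 and γ = 0.7 come from the text.

```python
import numpy as np

def tversky_loss(p_tumor, g_tumor, gamma=0.7, eps=1e-7):
    """Tversky region loss; gamma trades off false positives vs. false negatives."""
    p0, g0 = p_tumor.ravel(), g_tumor.ravel()
    p1, g1 = 1.0 - p0, 1.0 - g0              # non-tumor probabilities / labels
    tp = (p0 * g0).sum()                      # true-positive mass
    den = tp + gamma * (p0 * g1).sum() + (1.0 - gamma) * (p1 * g0).sum()
    return 1.0 - tp / (den + eps)

def bce_edge_loss(y_pred, y_true, eps=1e-7):
    """Binary cross-entropy between predicted and reference edge maps."""
    y = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(y) + (1.0 - y_true) * np.log(1.0 - y)))

def total_loss(p, g, y_pred, y_true, alpha=1.0, beta=0.1):
    """L = alpha * L_region + beta * L_edge, with the weights given in the text."""
    return alpha * tversky_loss(p, g) + beta * bce_edge_loss(y_pred, y_true)
```

A perfect segmentation and a perfect edge map drive both terms, and hence the total loss, toward zero.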
3.2: the invention adopts the ADAM optimizer for optimization; the learning rate of the accurate brain neural tumor segmentation network is 1.003×10⁻⁴, and the network parameters are adjusted through gradient back-propagation to reduce the corresponding loss function, so as to better guide the network.
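A single ADAM parameter update, in the standard bias-corrected formulation, can be sketched as follows; the learning rate 1.003×10⁻⁴ is taken from the text, while the moment coefficients `b1`, `b2` and `eps` are the usual defaults and not values stated in the patent.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1.003e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update: bias-corrected first/second moment estimates (t >= 1)."""
    m = b1 * m + (1.0 - b1) * grad           # exponential moving average of gradients
    v = b2 * v + (1.0 - b2) * grad ** 2      # exponential moving average of squared gradients
    m_hat = m / (1.0 - b1 ** t)              # bias correction for the first moment
    v_hat = v / (1.0 - b2 ** t)              # bias correction for the second moment
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

On the first step the bias corrections cancel the moving-average decay, so each parameter moves by approximately lr in the direction opposite its gradient's sign.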
In order to visually demonstrate the effect of the present invention, fig. 5a-f show the multi-class brain glioma segmentation results of different patients obtained by the present invention, wherein 5a, 5c, and 5e are the ground-truth data corresponding to the input original multi-modal data, and 5b, 5d, and 5f are the output multi-class glioma segmentations. It can be seen from the figure that the multi-class segmentation output by the invention effectively segments the multi-class tissue regions of the glioma under the constraint of the edge information, and the edge segmentation of the tumor regions is markedly improved. The method takes the cascaded neural network as its basic framework, exploits the edge information of the whole tumor, and combines a global feature extraction module with residual connections to achieve high-quality multi-class glioma tissue segmentation; it can be applied in various targeted-radiotherapy application systems.

Claims (7)

1. A glioma segmentation method based on a cascaded neural network structure, characterized in that the method specifically comprises:
Step one: generating high-precision tumor region segmentation results with the glioma segmentation network of the cascaded neural network structure; first performing multi-scale feature extraction with a three-dimensional residual feature extraction network, then extracting global features through a global attention module;
Step two: based on the multi-scale residual features and the global features, on the one hand using the segmentation and edge detection network to generate the whole tumor (WT) segmentation result and its edge detection result; on the other hand designing a cascade network to generate the tumor core region (TC) and tumor enhancement region (ET) segmentation results from the preliminary whole-tumor segmentation result;
Step three: constructing a loss function to train the accurate brain neural tumor segmentation network;
Output: performing tumor region segmentation on the original multi-modal images with the trained glioma segmentation network of the cascaded neural network structure.

2. The glioma segmentation method based on a cascaded neural network structure according to claim 1, characterized in that step one is specifically as follows:
S1.1: extracting feature maps of four modal images through a three-dimensional residual feature network; the four modalities comprise T1-weighted imaging, post-contrast T1-weighted imaging, T2-weighted imaging, and T2 fluid-attenuated inversion recovery imaging; the three-dimensional residual feature network is composed of residual modules, its input is an image block formed from the four modal images, and its output is multi-scale features; the feature maps are used to reconstruct the whole-tumor edge information and the tumor region segmentation results, and by extracting multi-scale features the texture information and spatial information of the images are preserved;
S1.2: reconstructing the feature maps with the global attention module; the highest-level abstract features generated by the three-dimensional residual feature network first undergo inter-layer feature extraction and then channel feature extraction, and the global features are reconstructed through fusion.

3. The glioma segmentation method based on a cascaded neural network structure according to claim 2, characterized in that step S1.2 further comprises: an encoder for medical image segmentation extracts features using only convolution and downsampling modules when encoding; here, while ResLock is used to extract multi-scale features, a global attention module is designed in order to extract more effective global features, and the output feature map is obtained by fusing the attention features with the original highest-level abstract features.

4. The glioma segmentation method based on a cascaded neural network structure according to claim 1, characterized in that step two is specifically as follows:
S2.1: extracting the segmentation result and the edge of the whole tumor through the WT segmentation and edge detection network; the WT segmentation and edge detection network is composed of a shared-parameter decoder and two decoding branches; using the global multi-scale features generated in step one, the network decodes them uniformly through the shared decoder and finally splits into two branches to obtain the whole-tumor segmentation and edge extraction results;
S2.2: extracting the segmentation results of the tumor core region and the tumor enhancement region contained within the whole tumor through the cascade network; based on the global multi-scale features generated in step one, the cascade network introduces the whole-tumor segmentation result generated in step S2.1 and produces the final accurate segmentation results of the tumor core region and tumor enhancement region by fusion decoding.

5. The brain tumor segmentation method based on a cascaded neural network structure according to claim 1, characterized in that step three is specifically as follows:
S3.1: the loss function of the accurate brain neural tumor segmentation network consists of two parts: a region loss, formed by the Dice loss and the Tversky loss between the segmentation result and the reference segmentation, and an edge loss between the whole-tumor edge detection result and the reference edge; the loss function of the segmentation network is expressed as: L = αL_region + βL_edge, where L_region denotes the region loss, L_edge denotes the edge loss, and α and β are their corresponding weighting coefficients; the region loss L_region is expressed as:

L_region = 1 − Σ_{i=1}^{N} p_{0i} g_{0i} / (Σ_{i=1}^{N} p_{0i} g_{0i} + γ Σ_{i=1}^{N} p_{0i} g_{1i} + (1 − γ) Σ_{i=1}^{N} p_{1i} g_{0i})

where p_{0i} denotes the probability that voxel i is tumor, p_{1i} denotes the probability that voxel i is non-tumor, g_{0i} and g_{1i} denote the corresponding reference labels, N denotes the number of voxels, and γ = 0.7; the edge loss L_edge is expressed as:

L_edge = −(1/N) Σ_{n=1}^{N} [ŷ_n log y_n + (1 − ŷ_n) log(1 − y_n)]

where y_n denotes the whole-tumor edge detection result output by the network and ŷ_n denotes the corresponding ground-truth value;
S3.2: the adaptive moment estimation (ADAM) optimizer is used for optimization.

6. A brain tumor segmentation system based on a cascaded neural network structure, characterized in that the system comprises:
a global multi-scale feature generation module, for producing multi-level residual features with global attention information;
an accurate tumor region segmentation network, for generating accurate segmentation results of the whole tumor, the tumor core region, and the tumor enhancement region, together with the edge of the whole tumor;
a loss function calculation module, for the loss function of the accurate brain neural tumor segmentation network;
a network training module, for fully and iteratively training the accurate glioma segmentation network to obtain a trained accurate glioma segmentation network for extracting segmentation results.

7. The brain tumor segmentation system based on a cascaded neural network structure according to claim 6, characterized in that the accurate tumor region segmentation network module further comprises:
a cascade network, designed using the relation that the whole tumor contains the tumor core region and the tumor enhancement region, so that the network better adapts to the accurate extraction of the tumor core region and tumor enhancement region, which differ greatly in morphology;
a WT segmentation and edge detection network, in which the whole-tumor segmentation branch and the edge detection branch share the early encoding-decoding parameters and are followed by different generation modules, so that, under mutual supervision and mutual promotion, they generate a more accurate whole-tumor segmentation result;
the accurate tumor region segmentation network module obtains the final segmentation result by fusing the outputs of the WT segmentation and edge detection network and the cascade network.
CN202111404516.9A 2021-11-24 2021-11-24 Brain glioma segmentation method based on cascade neural network structure Active CN114170244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111404516.9A CN114170244B (en) 2021-11-24 2021-11-24 Brain glioma segmentation method based on cascade neural network structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111404516.9A CN114170244B (en) 2021-11-24 2021-11-24 Brain glioma segmentation method based on cascade neural network structure

Publications (2)

Publication Number Publication Date
CN114170244A true CN114170244A (en) 2022-03-11
CN114170244B CN114170244B (en) 2024-05-28

Family

ID=80480484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111404516.9A Active CN114170244B (en) 2021-11-24 2021-11-24 Brain glioma segmentation method based on cascade neural network structure

Country Status (1)

Country Link
CN (1) CN114170244B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084823A (en) * 2019-04-18 2019-08-02 天津大学 Three-dimensional brain tumor image partition method based on cascade anisotropy FCNN
CN110120033A (en) * 2019-04-12 2019-08-13 天津大学 Based on improved U-Net neural network three-dimensional brain tumor image partition method
CN110689543A (en) * 2019-09-19 2020-01-14 天津大学 Improved convolutional neural network brain tumor image segmentation method based on attention mechanism
WO2020108525A1 (en) * 2018-11-30 2020-06-04 腾讯科技(深圳)有限公司 Image segmentation method and apparatus, diagnosis system, storage medium, and computer device
CN112215850A (en) * 2020-08-21 2021-01-12 天津大学 A Cascaded Atrous Convolutional Network for Brain Tumor Segmentation with Attention Mechanism
CN112837276A (en) * 2021-01-20 2021-05-25 重庆邮电大学 A glioma segmentation method based on cascaded deep neural network model


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882047A (en) * 2022-04-19 2022-08-09 厦门大学 Medical image segmentation method and system based on semi-supervision and Transformers
CN114937171A (en) * 2022-05-11 2022-08-23 复旦大学 Alzheimer's classification system based on deep learning
CN114937171B (en) * 2022-05-11 2023-06-09 复旦大学 Alzheimer's classification system based on deep learning
CN115082500A (en) * 2022-05-31 2022-09-20 苏州大学 Corneal nerve fiber segmentation method based on multi-scale and local feature guide network
CN117611806A (en) * 2024-01-24 2024-02-27 北京航空航天大学 A positive prediction system for prostate cancer surgical margins based on imaging and clinical features
CN117611806B (en) * 2024-01-24 2024-04-12 北京航空航天大学 A positive prediction system for surgical margins of prostate cancer based on imaging and clinical features
CN117953027A (en) * 2024-03-22 2024-04-30 首都医科大学附属北京天坛医院 DWI-FLAIR mismatch evaluation method, device, medium and product

Also Published As

Publication number Publication date
CN114170244B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
Ning et al. SMU-Net: Saliency-guided morphology-aware U-Net for breast lesion segmentation in ultrasound image
Punn et al. Modality specific U-Net variants for biomedical image segmentation: a survey
Li et al. Brain tumor detection based on multimodal information fusion and convolutional neural network
CN114170244A (en) A glioma segmentation method based on cascaded neural network structure
CN112465827B (en) Contour perception multi-organ segmentation network construction method based on class-by-class convolution operation
CN110930416B (en) A U-shaped network-based MRI image prostate segmentation method
CN110084823A (en) Three-dimensional brain tumor image partition method based on cascade anisotropy FCNN
CN112258456B (en) A three-dimensional image segmentation method based on convolutional neural network supervision
CN112258514A (en) A segmentation method of pulmonary blood vessels in CT images
CN113888555B (en) Multi-mode brain tumor image segmentation system based on attention mechanism
Chen et al. A lung dense deep convolution neural network for robust lung parenchyma segmentation
CN111275712A (en) A Residual Semantic Network Training Method for Large-scale Image Data
Kaur et al. Optimized multi threshold brain tumor image segmentation using two dimensional minimum cross entropy based on co-occurrence matrix
CN114972362A (en) Medical image automatic segmentation method and system based on RMAU-Net network
Amiri et al. Bayesian Network and Structured Random Forest Cooperative Deep Learning for Automatic Multi-label Brain Tumor Segmentation.
Shao et al. Application of U-Net and Optimized Clustering in Medical Image Segmentation: A Review.
Wu et al. Cascaded fully convolutional DenseNet for automatic kidney segmentation in ultrasound images
Ning et al. CF2-Net: Coarse-to-fine fusion convolutional network for breast ultrasound image segmentation
Anand et al. Residual u-network for breast tumor segmentation from magnetic resonance images
Wang et al. Accurate lung nodule segmentation with detailed representation transfer and soft mask supervision
Khan et al. Attresdu-net: medical image segmentation using attention-based residual double u-net
Huang et al. Skin lesion image segmentation by using backchannel filling CNN and level sets
CN110033455A (en) A method of extracting information on target object from video
CN114387282A (en) Accurate automatic segmentation method and system for medical image organs
Nanayakkara et al. Automatic breast boundary segmentation of mammograms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant