CN113592847A - QSM deep brain nuclei automatic segmentation method based on deep learning - Google Patents

QSM deep brain nuclei automatic segmentation method based on deep learning

Info

Publication number
CN113592847A
CN113592847A
Authority
CN
China
Prior art keywords
qsm
deep
method based
deep learning
nuclei
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110913201.0A
Other languages
Chinese (zh)
Other versions
CN113592847B (en)
Inventor
管晓军
张敏鸣
徐晓俊
张玉瑶
郭涛
吴晶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN202110913201.0A
Publication of CN113592847A
Application granted
Publication of CN113592847B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30016 Brain

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a deep-learning-based automatic segmentation method for deep brain nuclei in QSM, implemented as a fast segmentation tool for the deep gray matter nucleus structures in QSM and named DeepQSMSeg. Built on the end-to-end deep learning model DeepQSMSeg, the method accurately, robustly and rapidly segments 5 pairs of region-of-interest (ROI) structures of the deep gray matter nuclei (the caudate nucleus CN, putamen PUT, globus pallidus GP, substantia nigra SN and red nucleus RN in the left and right hemispheres) from QSM images. Accurate segmentation of the deep gray matter nuclei supports research on brain iron, in particular on neurodegenerative diseases closely related to the deep brain nuclei, such as Parkinson's disease.

Description

QSM deep brain nuclei automatic segmentation method based on deep learning
Technical Field
The invention belongs to the technical field of neuroimaging, and particularly relates to a QSM deep brain nuclei automatic segmentation method based on deep learning.
Background
Magnetic resonance imaging (MRI) provides a non-invasive, highly repeatable way of measuring tissue magnetic susceptibility, in particular important components of brain tissue such as brain iron and myelin. In recent years, with the continuous advancement of MRI technology, quantitative susceptibility mapping (QSM) has become the gold standard for MRI quantification of tissue magnetic susceptibility, owing to its high susceptibility contrast and quantitative tissue characterization. Notably, iron is distributed in various regions of the brain, most prominently the deep subcortical gray matter (DGM), and many studies have shown it to be closely related to human learning, planning and cognition. However, excessive iron loading in the brain is linked to oxidative stress injury of tissues, and its pathological role in neurodegenerative diseases such as Parkinson's disease and Alzheimer's disease is receiving increasing attention. Therefore, quantification of human brain magnetic susceptibility based on QSM has important value in neuroscience and in clinical biomarker research.
How to measure the magnetic susceptibility of the deep brain nuclei with high repeatability, strong objectivity and good stability is an important scientific problem in urgent need of a solution. Little past research has addressed DGM segmentation on QSM; most known work relies on manual or atlas-based segmentation, which suffers from strong subjectivity and is time-consuming and labor-intensive. In recent years, segmentation methods for DGM on QSM have been reported, including atlas-based and deep-learning-based methods. The atlas-based methods essentially comprise the following steps: template generation, manual segmentation of ROI structures, individual registration, ROI extraction and label fusion. However, they require a series of difficult operations and are computationally expensive and time-consuming. Furthermore, atlas-based segmentation is highly dependent on the stability of the atlas and the registration algorithm; data variations between the atlas and individual images (such as contrast variations or scanner parameter settings) can easily degrade performance, and manual corrections are inevitable. In the deep learning field, researchers have proposed a two-dimensional fully convolutional neural network to segment DGM structures on QSM, but a two-dimensional network cannot capture the spatial information between slices, which is crucial in voxel-level volumetric image segmentation. In addition, that approach uses only manually selected slices containing the DGM structures during training and testing, so it is not fully automatic. Developing a fully automated, accurate and robust DGM segmentation tool applicable to QSM therefore remains an open problem.
Disclosure of Invention
The invention aims to provide a deep-learning-based automatic segmentation method for deep brain nuclei in QSM that overcomes the defects of the prior art. The invention trains a single-stage 3D encoder-decoder fully convolutional network on manually segmented target structure labels, realizes automatic segmentation of the deep gray matter nucleus structures in QSM, and thereby advances neuroimaging research related to brain magnetic susceptibility.
The purpose of the invention is realized by the following technical scheme: a QSM deep brain nuclei automatic segmentation method based on deep learning comprises the following steps:
(1) Acquiring a data set: acquire enhanced susceptibility-weighted angiography (ESWAN) images and reconstruct them into QSM images. Five pairs of ROI structures of the deep gray matter nuclei, namely the caudate nucleus, putamen, globus pallidus, substantia nigra and red nucleus in the left and right hemispheres, are selected as segmentation labels for the QSM images.
(2) Constructing a segmentation network: the network body consists of an encoder and a decoder. The encoder consists of an input module and four feature extraction modules; the decoder consists of four feature reconstruction modules and an output module, and its structure is symmetric to that of the encoder. Skip connections are used from the encoder to the decoder. Attention modules are inserted in the last two encoding stages and the first two decoding stages; each attention module is a cascade of a channel-level attention module and a spatial-level attention module.
(3) Training the segmentation network with a loss function: the network is supervised by combining the dice loss with a voxel-level focal loss.
(4) Reconstructing the enhanced susceptibility-weighted angiography image to be processed into a QSM image and inputting it into the trained segmentation network to obtain the segmentation result.
Further, in step (1), a gradient echo sequence is used to acquire the ESWAN images.
Further, in step (1), the STAR-QSM algorithm is used to reconstruct the acquired enhanced susceptibility-weighted angiography data into QSM.
Further, in step (1), the QSM reconstruction includes removing the skull with the brain extraction tool in the FMRIB software library and generating a brain mask from the GRE magnitude image; unwrapping the raw phase with a Laplacian-based phase unwrapping method and removing the normalized background phase with the V-SHARP method; and finally computing the tissue susceptibility map with the STAR-QSM algorithm to obtain the final QSM image.
Further, in step (2), each module of the encoder includes 2 anisotropic convolution (5 × 5 × 3) blocks with a residual structure for feature reuse and fast convergence; between modules, a 2 × 2 × 2 convolution with stride 2 halves the spatial size of the feature map and doubles the number of channels. The basic convolution blocks of the decoder are similar to those of the encoder; between modules, transposed convolutions double the spatial size of the feature map and reduce the number of channels.
Further, in step (2), the network output activated by the soft-max function has 11 channels (corresponding to the 10 ROI structures and the background class), with the same resolution as the original input. Except in the output module, the exponential linear unit (ELU) is used as the activation function.
Further, in step (2), the attention module adopts CBAM and takes an encoding form and a decoding form. In the encoding stage, the attention module takes the output of each stage directly as input. In the decoding stage, the combined feature map from the encoder and decoder is first processed by a convolution block and then serves as the input to the attention module.
Further, in step (2), the channel-level attention module extracts channel features with global max pooling and global average pooling, and computes the channel-level weights with a multi-layer perceptron.
Further, the spatial-level attention module computes the mean and maximum of the features along the channel axis as its input, and computes the spatial attention with a 1 × 1 × 1 convolution.
Further, in step (3):
the dice loss is as follows:
L_dice = 1 - (2·Σ_n p_n·g_n + ε) / (Σ_n p_n + Σ_n g_n + ε)
where p_n is the network prediction after soft-max activation, g_n is the segmentation label, and ε is a smoothing constant.
The focal loss is as follows:
L_focal = -(1/N)·Σ_n (1 - p_n)^γ · log(p_n)
where p_n is the predicted probability of the true class at voxel n, N is the number of voxels, and γ is the focusing parameter.
the total loss of the split network is:
L_total = L_dice + λ·L_focal
where λ is the hyper-parameter used to adjust the weight between dice and focal losses.
The invention has the beneficial effects that: a clinically useful deep learning model, DeepQSMSeg, is obtained. On the test data set, a complete QSM volume can be segmented within 2.600 ± 0.018 s. The performance of the method was evaluated with 5-fold cross-validation: the average Dice similarity coefficient (DSC) over all target DGM structures is 0.872 ± 0.053, and the average Hausdorff distance is 2.644 ± 2.917 mm. The ROI structure volumes (in mm³) and magnetic susceptibility values (in ppm) extracted using DeepQSMSeg correlate significantly with those extracted using manual segmentation: across all target structures, the correlation coefficients between the two methods are 0.985 (p < 0.001) for structure volume and 0.991 (p < 0.001) for magnetic susceptibility. In conclusion, the invention can accurately, robustly, automatically and rapidly segment 5 pairs of DGM structures (CN, PUT, GP, SN and RN) on QSM, facilitating clinical research on brain iron, in particular on neurodegenerative diseases closely related to deep brain nucleus degeneration, such as Parkinson's disease.
Drawings
FIG. 1 is a schematic diagram of the overall network structure of DeepQSMSeg; the left side is the encoder with four down-sampling stages, and the right side is the decoder with four up-sampling stages; encoder features of the corresponding stage are also used at the decoder inputs to add image detail information; attention modules are introduced in the last two stages of the encoder and the first two stages of the decoder to focus on the DGM nuclei;
FIG. 2 is a schematic view of the volume attention module; the volume attention module is a cascade of a channel-level attention module and a spatial-level attention module, which attend to the channel and spatial features of the feature map respectively;
FIG. 3 compares the predictions of DeepQSMSeg with the manual segmentation (gold standard);
FIG. 4 is a graph of the linear regression relationship between DGM volume and gold standard volume predicted by the deep learning model;
FIG. 5 is a graph of the linear regression relationship between DGM susceptibility and gold standard susceptibility predicted by the deep learning model.
Detailed Description
The invention discloses a deep-learning-based automatic segmentation method for deep brain nuclei in QSM, implemented as a fast segmentation tool for the subcortical deep gray matter nuclei in QSM and named DeepQSMSeg. The segmentation backbone is a single-stage 3D encoder-decoder fully convolutional network (FCN). The network body consists of an encoder (left part) and a decoder (right part); the encoder consists of an input module and four feature extraction modules, and the decoder consists of four feature reconstruction modules and an output module, symmetric to the encoder. We manually labeled 5 pairs of subcortical deep gray matter (DGM) structures. All these target structures are very small, so the numbers of foreground and background voxels are highly unbalanced. We therefore adopt attention modules and supervise the training process jointly with the dice loss and the focal loss. Following the experience of previous studies, we attach a combined spatial and channel attention mechanism to the encoder and decoder stages of the network to focus on the target DGM structures. Since in-plane and through-plane resolutions usually differ in QSM, we use anisotropic instead of isotropic convolutions, which reduces the model parameters and speeds up training.
1. Data set
Data acquisition: the study in this example was approved by the ethics committee of the Second Affiliated Hospital of Zhejiang University School of Medicine, and all subjects signed informed consent. Our data set contained a total of 631 subjects, of whom 338 were normal controls and 293 were Parkinson's disease patients. Enhanced susceptibility-weighted angiography (ESWAN) images were acquired with a gradient echo sequence for all subjects: repetition time 33.7 ms; first echo time/echo spacing/eighth echo time 4.556 ms/3.648 ms/30.092 ms; flip angle 20°; field of view 240 × 240 mm²; matrix 416 × 384; slice thickness 2 mm; slice gap 0 mm; 64 slices.
QSM reconstruction: the enhanced susceptibility-weighted angiography (ESWAN) data acquired in the data acquisition step were reconstructed into QSM using the STAR-QSM algorithm in the STI Suite V3.0 software package (https://people.eecs.berkeley.edu/~chunlei.liu/software.html). The skull was removed with the Brain Extraction Tool (BET) in the FMRIB Software Library (FSL), and a brain mask was generated from the GRE magnitude image; the raw phase was unwrapped with a Laplacian-based phase unwrapping method, and the normalized background phase was removed with the V-SHARP method; finally, the tissue susceptibility map was computed with the STAR-QSM algorithm to obtain the final QSM image.
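For illustration only, this pipeline can be sketched as follows. The FSL `bet` command line is real; the `laplacian_unwrap`, `v_sharp` and `star_qsm` functions are hypothetical Python stand-ins for the corresponding STI Suite routines, which in practice run in MATLAB.

```python
import subprocess

def laplacian_unwrap(phase_nii):
    # Hypothetical stand-in for Laplacian-based phase unwrapping (STI Suite, MATLAB).
    raise NotImplementedError("runs in MATLAB via STI Suite V3.0")

def v_sharp(unwrapped_phase, mask):
    # Hypothetical stand-in for V-SHARP background-field removal (STI Suite, MATLAB).
    raise NotImplementedError("runs in MATLAB via STI Suite V3.0")

def star_qsm(tissue_phase, mask):
    # Hypothetical stand-in for STAR-QSM dipole inversion (STI Suite, MATLAB).
    raise NotImplementedError("runs in MATLAB via STI Suite V3.0")

def reconstruct_qsm(magnitude_nii, phase_nii):
    # 1. Skull stripping on the GRE magnitude image with FSL BET;
    #    -m additionally writes a binary brain mask (brain_mask.nii.gz).
    subprocess.run(["bet", magnitude_nii, "brain", "-m"], check=True)
    # 2. Laplacian-based unwrapping of the raw phase.
    unwrapped = laplacian_unwrap(phase_nii)
    # 3. Background-phase removal with V-SHARP inside the brain mask.
    tissue_phase = v_sharp(unwrapped, mask="brain_mask.nii.gz")
    # 4. STAR-QSM dipole inversion yields the tissue susceptibility map.
    return star_qsm(tissue_phase, mask="brain_mask.nii.gz")
```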
Acquisition of training and test sets: we selected 5 pairs of ROI structures of the deep gray matter nuclei, namely the caudate nucleus (CN), putamen (PUT), globus pallidus (GP), substantia nigra (SN) and red nucleus (RN) in the left and right hemispheres. The manual segmentation masks were labeled by an experienced radiologist and double-checked, and served as segmentation labels. All images and labels were resampled to 256 × 256 × 64. The full data set contains 631 pairs of QSM images and labels, divided into five separate parts, and accuracy was trained and validated with 5-fold cross-validation, as sketched below.
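A minimal sketch of this split; the random seed and the use of scikit-learn's KFold are illustrative assumptions, not specified in the text:

```python
import numpy as np
from sklearn.model_selection import KFold

subject_ids = np.arange(631)                              # 631 QSM/label pairs
kfold = KFold(n_splits=5, shuffle=True, random_state=0)   # seed is illustrative

for fold, (train_idx, val_idx) in enumerate(kfold.split(subject_ids)):
    # Each fold trains on ~505 cases and validates on the held-out ~126.
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val")
```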
2. Deep learning network structure
The overall framework of DeepQSMSeg is shown in FIG. 1. The network body consists of an encoder (left part) and a decoder (right part); the encoder extracts semantic features from the QSM image, and the decoder reconstructs the segmentation map from these latent features. The encoder consists of an input module and four feature extraction modules; each module comprises 2 anisotropic convolution (5 × 5 × 3) blocks with a residual structure for feature reuse and fast convergence. Between modules, a 2 × 2 × 2 convolution with stride 2 halves the spatial size of the feature map and doubles the number of channels. The decoder consists of four feature reconstruction modules and an output module and is symmetric to the encoder; its basic convolution blocks are similar, and between modules transposed convolutions double the spatial size of the feature map and reduce the number of channels. Skip connections from the encoder to the decoder provide highly symmetric, high-resolution feature images. The network output, activated by the soft-max function, has 11 channels (corresponding to the 10 ROI structures and the background class) at the same resolution as the original input. Except in the output module, the exponential linear unit (ELU) is used as the activation function.
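A minimal PyTorch sketch of these building blocks; the channel counts, padding and tensor layout are assumptions consistent with, but not specified by, the description:

```python
import torch
import torch.nn as nn

class AnisoResBlock(nn.Module):
    """Two anisotropic 5x5x3 convolutions with a residual connection (sketch)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=(5, 5, 3), padding=(2, 2, 1)),
            nn.ELU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=(5, 5, 3), padding=(2, 2, 1)),
        )
        self.act = nn.ELU(inplace=True)

    def forward(self, x):
        # Residual connection: reuse features and speed up convergence.
        return self.act(self.body(x) + x)

def down(in_ch):
    # Stride-2 convolution between modules: spatial size halved, channels doubled.
    return nn.Conv3d(in_ch, 2 * in_ch, kernel_size=2, stride=2)

def up(in_ch):
    # Transposed convolution between modules: spatial size doubled, channels halved.
    return nn.ConvTranspose3d(in_ch, in_ch // 2, kernel_size=2, stride=2)

# Quick shape check on a QSM-sized patch; the last axis is the slice direction.
x = torch.randn(1, 16, 128, 128, 64)
y = down(16)(AnisoResBlock(16)(x))   # -> (1, 32, 64, 64, 32)
```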
In the last two encoding stages and the first two decoding stages, four attention modules as shown in FIG. 2 are inserted to focus on the small target structures. The attention module uses CBAM (Convolutional Block Attention Module) and takes an encoding form and a decoding form. In the encoding stage, the attention module takes the output of each stage directly as input. In the decoding stage, the combined feature map from the encoder and decoder is first processed by a convolution block and then serves as the input to the attention module.
The attention module can be seen as a cascade of a channel-level attention module and a spatial-level attention module. Channel-level attention determines which features are meaningful through the relationships between channels. In channel-level attention, global max pooling (GMP) and global average pooling (GAP) extract channel features, and a multi-layer perceptron (MLP) computes the channel-level weights. In the spatial-level attention module, the mean and maximum of the features along the channel axis (C axis) are taken as the module input, and the spatial attention is computed with a 1 × 1 × 1 convolution.
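The attention cascade can be sketched in PyTorch as below; the MLP reduction ratio (8) is an assumption, while the GMP/GAP channel statistics and the 1 × 1 × 1 spatial convolution follow the description:

```python
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    """Channel attention: GMP + GAP statistics weighted by a shared MLP (sketch)."""
    def __init__(self, channels, reduction=8):   # reduction ratio is an assumption
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ELU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c = x.shape[:2]
        gap = self.mlp(x.mean(dim=(2, 3, 4)))   # global average pooling
        gmp = self.mlp(x.amax(dim=(2, 3, 4)))   # global max pooling
        w = torch.sigmoid(gap + gmp).view(b, c, 1, 1, 1)
        return x * w

class SpatialAttention3D(nn.Module):
    """Spatial attention: per-voxel mean/max over channels, then a 1x1x1 conv."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(2, 1, kernel_size=1)

    def forward(self, x):
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(stats))

class VolumeAttention3D(nn.Module):
    """Cascade of channel-level then spatial-level attention, as in the text."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention3D(channels)
        self.sa = SpatialAttention3D()

    def forward(self, x):
        return self.sa(self.ca(x))
```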
3. Loss function
In the training process, the dice loss is combined with an auxiliary voxel-level focal loss to alleviate the severe class imbalance and to focus on poorly classified voxels.
The dice loss is one of the most classical region-based loss functions and is derived directly from the Dice similarity coefficient (DSC); its mathematical form is:
L_dice = 1 - (2·Σ_n p_n·g_n + ε) / (Σ_n p_n + Σ_n g_n + ε)    (1)
where p_n is the network prediction after soft-max activation, g_n is the segmentation label, and ε is a smoothing constant.
Although the dice loss can alleviate class imbalance to some extent, it still cannot robustly handle extreme class imbalance. In our segmentation task, the target structures are quite small compared with the whole brain, so they are easily overwhelmed by background voxels and easily classified examples. The focal loss was first proposed to address the classification problem in single-stage object detection; it separates hard examples from easy ones so that training focuses on the hard parts. The focal loss is easily adapted to the voxel level and can be written as:
L_focal = -(1/N)·Σ_n (1 - p_n)^γ · log(p_n)    (2)
where p_n is the predicted probability of the true class at voxel n, N is the number of voxels, and γ is the focusing parameter.
in our model, a method of dice loss and voxel-level focal loss are combined to supervise the network to focus on difficult examples and solve the class imbalance problem.
The total loss is:
L_total = L_dice + λ·L_focal    (3)
where λ is the hyper-parameter used to adjust the weight between dice and focal losses.
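A sketch of this combined supervision follows; the focusing parameter γ = 2 and the default λ = 1 are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def dice_loss(probs, target_onehot, eps=1e-5):
    # Eq. (1): probs are soft-max probabilities (B, C, D, H, W);
    # target_onehot has the same shape. Averaged over classes.
    dims = (0, 2, 3, 4)
    inter = torch.sum(probs * target_onehot, dims)
    denom = torch.sum(probs, dims) + torch.sum(target_onehot, dims)
    return 1.0 - torch.mean((2.0 * inter + eps) / (denom + eps))

def focal_loss(probs, target_onehot, gamma=2.0):   # gamma = 2 is an assumption
    # Eq. (2): down-weights easy voxels by (1 - p)^gamma.
    pt = torch.sum(probs * target_onehot, dim=1).clamp_min(1e-7)
    return torch.mean(-((1.0 - pt) ** gamma) * torch.log(pt))

def total_loss(probs, target_onehot, lam=1.0):     # lambda = 1 is an assumption
    # Eq. (3): L_total = L_dice + lambda * L_focal.
    return dice_loss(probs, target_onehot) + lam * focal_loss(probs, target_onehot)

# usage: probs = F.softmax(logits, dim=1); loss = total_loss(probs, onehot)
```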
4. Training and testing
Preprocessing such as transformation and augmentation incurs no additional overhead. All images were normalized and cut into random blocks of fixed size 128 × 128 × 64. Before training, random spatial transformations (rotation, scaling, translation and flipping) were applied to the data for augmentation.
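A minimal sketch of the patch sampling; the 128 × 128 × 64 block size follows the text, and the full augmentation would add the random rotations, scalings, translations and flips:

```python
import numpy as np

def sample_patch(image, label, patch=(128, 128, 64)):
    # Randomly crop a fixed-size block from the normalized volume; rotation,
    # scaling, translation and flipping would be applied analogously (a flip
    # would additionally require swapping the paired left/right ROI labels).
    starts = [np.random.randint(0, s - p + 1) for s, p in zip(image.shape, patch)]
    sl = tuple(slice(st, st + p) for st, p in zip(starts, patch))
    return image[sl], label[sl]
```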
The trained DeepQSMSeg is used to segment the ESWAN image to be processed. FIG. 3 compares the predictions of DeepQSMSeg with the manual segmentation (gold standard); FIG. 4 shows the linear regression relationship between the DGM volumes predicted by the deep learning model and the gold standard volumes; FIG. 5 shows the linear regression relationship between the DGM susceptibilities predicted by the deep learning model and the gold standard susceptibilities.
To evaluate the segmentation performance of DeepQSMSeg, we calculated the Dice similarity coefficient (DSC) and the Hausdorff distance (HD) between the manual segmentation masks and the predicted masks. The DSC evaluates the overlap of the segmented regions, and the HD measures the distance between the segmentation boundaries.
The Dice Similarity Coefficient (DSC) is defined as:
DSC(G, P) = 2|G ∩ P| / (|G| + |P|)    (4)
where G denotes the manually segmented region and P denotes the region predicted by the segmentation network.
The Hausdorff Distance (HD) is defined as:
H(G,P)=max{h(G,P),h(P,G)} (5)
h(G, P) = max_(g∈G) min_(p∈P) ||g - p||    (6)
where G denotes the boundary of the manually segmented region, P denotes the boundary of the region predicted by the segmentation network, and h(·,·) is the directed Hausdorff distance.
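Both metrics can be computed directly with NumPy and SciPy, as in the following sketch; for an HD in mm, the voxel coordinates must first be scaled by the voxel spacing:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(g_mask, p_mask):
    # Eq. (4): overlap between the manual region G and the predicted region P,
    # both given as boolean volumes.
    inter = np.logical_and(g_mask, p_mask).sum()
    return 2.0 * inter / (g_mask.sum() + p_mask.sum())

def hausdorff_distance(g_boundary, p_boundary):
    # Eqs. (5)-(6): symmetric Hausdorff distance between boundary point sets,
    # each given as an (N, 3) array of voxel coordinates.
    return max(directed_hausdorff(g_boundary, p_boundary)[0],
               directed_hausdorff(p_boundary, g_boundary)[0])
```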

Claims (10)

1. A QSM deep brain nuclei automatic segmentation method based on deep learning is characterized by comprising the following steps:
(1) Acquiring a data set: acquiring enhanced susceptibility-weighted angiography (ESWAN) images and reconstructing them into QSM images, wherein five pairs of ROI structures of the deep gray matter nuclei, namely the caudate nucleus, putamen, globus pallidus, substantia nigra and red nucleus in the left and right hemispheres, are selected as segmentation labels for the QSM images.
(2) Constructing a segmentation network: the network body consists of an encoder and a decoder; the encoder consists of an input module and four feature extraction modules; the decoder consists of four feature reconstruction modules and an output module, and its structure is symmetric to that of the encoder; skip connections are used from the encoder to the decoder; attention modules are inserted in the last two encoding stages and the first two decoding stages, each attention module being a cascade of a channel-level attention module and a spatial-level attention module.
(3) Training the segmentation network with a loss function: the network is supervised by combining the dice loss with a voxel-level focal loss.
(4) Reconstructing the enhanced susceptibility-weighted angiography image to be processed into a QSM image and inputting it into the trained segmentation network to obtain the segmentation result.
2. The QSM deep brain nuclei automatic segmentation method based on deep learning of claim 1, wherein in step (1), the ESWAN images are acquired with a gradient echo sequence.
3. The QSM deep brain nuclei automatic segmentation method based on deep learning of claim 1, wherein in step (1), the STAR-QSM algorithm is used to reconstruct the acquired enhanced susceptibility-weighted angiography data into QSM.
4. The QSM deep brain nuclei automatic segmentation method based on deep learning of claim 3, wherein in step (1), the QSM reconstruction includes removing the skull with the brain extraction tool in the FMRIB software library and generating a brain mask from the GRE magnitude image; unwrapping the raw phase with a Laplacian-based phase unwrapping method and removing the normalized background phase with the V-SHARP method; and finally computing the tissue susceptibility map with the STAR-QSM algorithm to obtain the final QSM image.
5. The QSM deep brain nuclei automatic segmentation method based on deep learning of claim 1, wherein in step (2), each module of the encoder includes 2 anisotropic convolution (5 × 5 × 3) blocks with a residual structure for feature reuse and fast convergence; between modules, a 2 × 2 × 2 convolution with stride 2 halves the spatial size of the feature map and doubles the number of channels; the basic convolution blocks of the decoder are similar to those of the encoder, and between modules transposed convolutions double the spatial size of the feature map and reduce the number of channels.
6. The QSM deep brain nuclei automatic segmentation method based on deep learning of claim 1, wherein in step (2), the network output activated by the soft-max function has 11 channels (corresponding to the 10 ROI structures and the background class), with the same resolution as the original input; except in the output module, the exponential linear unit (ELU) is used as the activation function.
7. The QSM deep brain nuclei automatic segmentation method based on deep learning of claim 1, wherein in step (2), the attention module adopts CBAM and takes an encoding form and a decoding form; in the encoding stage, the attention module takes the output of each stage directly as input; in the decoding stage, the combined feature map from the encoder and decoder is first processed by a convolution block and then serves as the input to the attention module.
8. The QSM deep brain nuclei automatic segmentation method based on deep learning of claim 1, wherein in step (2), the channel-level attention module extracts channel features with global max pooling and global average pooling, and computes the channel-level weights with a multi-layer perceptron.
9. The QSM deep brain nuclei automatic segmentation method based on deep learning of claim 1, wherein the spatial-level attention module computes the mean and maximum of the features along the channel axis as its input and computes the spatial attention with a 1 × 1 × 1 convolution.
10. The QSM deep brain nuclei automatic segmentation method based on deep learning according to claim 1, wherein in the step (3):
the dice loss is as follows:
L_dice = 1 - (2·Σ_n p_n·g_n + ε) / (Σ_n p_n + Σ_n g_n + ε)
where p_n is the network prediction after soft-max activation, g_n is the segmentation label, and ε is a smoothing constant.
The focal loss is as follows:
L_focal = -(1/N)·Σ_n (1 - p_n)^γ · log(p_n)
where p_n is the predicted probability of the true class at voxel n, N is the number of voxels, and γ is the focusing parameter.
the total loss of the split network is:
L_total = L_dice + λ·L_focal
where λ is the hyper-parameter used to adjust the weight between dice and focal losses.
CN202110913201.0A 2021-08-10 2021-08-10 Deep-learning-based QSM deep brain nuclei automatic segmentation method Active CN113592847B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110913201.0A CN113592847B (en) 2021-08-10 2021-08-10 Deep-learning-based QSM deep brain nuclei automatic segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110913201.0A CN113592847B (en) 2021-08-10 2021-08-10 Deep-learning-based QSM deep brain nuclei automatic segmentation method

Publications (2)

Publication Number Publication Date
CN113592847A true CN113592847A (en) 2021-11-02
CN113592847B CN113592847B (en) 2023-10-10

Family

ID=78256775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110913201.0A Active CN113592847B (en) 2021-08-10 2021-08-10 Deep-learning-based QSM deep brain nuclei automatic segmentation method

Country Status (1)

Country Link
CN (1) CN113592847B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897993A (en) * 2017-01-12 2017-06-27 华东师范大学 Construction method of a probability atlas of human brain gray matter nuclei based on quantitative susceptibility imaging
CN111681184A (en) * 2020-06-09 2020-09-18 复旦大学附属华山医院 Neuromelanin image reconstruction method, apparatus, device and storage medium
CN112348779A (en) * 2020-10-23 2021-02-09 南开大学 Convolutional-neural-network-based segmentation method for brain gray matter nuclei in magnetic resonance images
AU2020103905A4 (en) * 2020-12-04 2021-02-11 Chongqing Normal University Unsupervised cross-domain self-adaptive medical image segmentation method based on deep adversarial learning
CN113222915A (en) * 2021-04-28 2021-08-06 浙江大学 Method for establishing a Parkinson's disease (PD) diagnosis model based on multi-modal magnetic resonance radiomics

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈璇; 李咏梅; 罗天友; 欧阳羽; 吕发金; 曾春; 王忠平: "Quantitative comparative ESWAN analysis of iron deposition in deep brain gray matter nuclei in relapsing-remitting multiple sclerosis and relapsing neuromyelitis optica", Chinese Journal of Medical Imaging Technology (中国医学影像技术), vol. 28, no. 4

Also Published As

Publication number Publication date
CN113592847B (en) 2023-10-10

Similar Documents

Publication Publication Date Title
Zhang et al. ME‐Net: multi‐encoder net framework for brain tumor segmentation
Liu et al. Region-to-boundary deep learning model with multi-scale feature fusion for medical image segmentation
Li et al. Alzheimer's disease classification based on combination of multi-model convolutional networks
Gore et al. Comparative study of various techniques using deep Learning for brain tumor detection
Akkus et al. Robust brain extraction tool for CT head images
Li et al. Wavelet-based segmentation of renal compartments in DCE-MRI of human kidney: initial results in patients and healthy volunteers
Wang et al. JointVesselNet: Joint volume-projection convolutional embedding networks for 3D cerebrovascular segmentation
Ma et al. MRI image synthesis with dual discriminator adversarial learning and difficulty-aware attention mechanism for hippocampal subfields segmentation
CN115147600A (en) GBM multi-mode MR image segmentation method based on classifier weight converter
Kim et al. Fat-saturated image generation from multi-contrast MRIs using generative adversarial networks with Bloch equation-based autoencoder regularization
Li et al. Deep attention super-resolution of brain magnetic resonance images acquired under clinical protocols
CN116309524A (en) Method and system for suppressing imaging artifacts of cardiac magnetic resonance movies based on deep learning
Hu et al. Aorta-aware GAN for non-contrast to artery contrasted CT translation and its application to abdominal aortic aneurysm detection
WO2020056196A1 (en) Fully automated personalized body composition profile
Prats-Climent et al. Artificial intelligence on FDG PET images identifies mild cognitive impairment patients with neurodegenerative disease
Guan et al. DeepQSMSeg: a deep learning-based sub-cortical nucleus segmentation tool for quantitative susceptibility mapping
Lee et al. Improved classification of brain-tumor mri images through data augmentation and filter application
Aderghal Classification of multimodal MRI images using Deep Learning: Application to the diagnosis of Alzheimer’s disease.
Albay et al. Diffusion MRI spatial super-resolution using generative adversarial networks
CN113592847A (en) QSM deep brain nuclear mass automatic segmentation method based on deep learning
Brahim et al. A 3D deep learning approach based on Shape Prior for automatic segmentation of myocardial diseases
Wang et al. Semi-automatic segmentation of the fetal brain from magnetic resonance imaging
Shomirov et al. Brain tumor segmentation of HGG and LGG MRI images using WFL-based 3D U-net
Chai et al. CAU-net: A deep learning method for deep gray matter nuclei segmentation
Liang et al. Mouse brain MR super-resolution using a deep learning network trained with optical imaging data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant