CN113496496A - MRI image hippocampus region segmentation method based on multiple losses and multiple scale characteristics - Google Patents

MRI image hippocampus region segmentation method based on multiple losses and multiple scale characteristics

Info

Publication number
CN113496496A
CN113496496A (application CN202110767563.3A)
Authority
CN
China
Prior art keywords
hippocampus
network
segmentation
encoder
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110767563.3A
Other languages
Chinese (zh)
Other versions
CN113496496B (en)
Inventor
王建新
陈雨
匡湖林
刘锦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University
Priority to CN202110767563.3A
Publication of CN113496496A
Application granted
Publication of CN113496496B
Current legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/259Fusion by voting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention provides an MRI image hippocampus region segmentation method based on multiple losses and multi-scale features, which comprises the following steps: step 1, acquiring a plurality of T1-modality brain MRI images and their hippocampus labels; step 2, cropping each T1-modality brain MRI image with the left and right hippocampus labels as reference to obtain a plurality of cropped left and right hippocampus cubes; and step 3, performing 3D dicing on each cropped left and right hippocampus cube, screening out, from the 3D patches of all cropped cubes, the 3D patches containing more hippocampus voxels than a set threshold, and preprocessing the screened 3D patches. The invention can accurately separate the hippocampus and background labels, and by combining multi-scale information with multiple losses it significantly improves the accuracy of hippocampus segmentation in brain images.

Description

MRI image hippocampus region segmentation method based on multiple losses and multiple scale characteristics
Technical Field
The invention relates to the technical field of medical image processing, and in particular to an MRI image hippocampus region segmentation method based on multiple losses and multi-scale features.
Background
In recent years, neuroimaging has played a very important role in the diagnosis of many neurological diseases. Research shows that Alzheimer's disease (AD) is closely related to the shape and volume of the hippocampus, and correctly segmenting the hippocampus in brain magnetic resonance imaging (MRI) can, to a certain extent, help doctors better diagnose this neurological disease. With the rapid development of semi-automatic and automatic hippocampus segmentation techniques, a great deal of research has been carried out on segmenting the hippocampus from structural magnetic resonance imaging, and deep learning models in particular have achieved great success in recent years.
However, current deep learning hippocampus segmentation methods based on brain MRI images generally require a large amount of training data, do not consider the multi-level multi-scale information available during down-sampling, and lose some edge-detail information. It is therefore necessary to improve the segmentation performance of the model by adding multi-scale feature information and by focusing on features that capture edge and structural details.
Disclosure of Invention
The invention provides an MRI image hippocampus region segmentation method based on multiple losses and multi-scale features, and aims to solve the problems of poor segmentation performance and low segmentation accuracy of existing deep learning hippocampus segmentation methods based on brain MRI images.
In order to achieve the above object, an embodiment of the present invention provides an MRI image hippocampus region segmentation method based on multiple losses and multi-scale features, including:
step 1, acquiring a plurality of T1-modality brain MRI images and their hippocampus labels;
step 2, cropping each T1-modality brain MRI image with the left and right hippocampus labels as reference to obtain a plurality of cropped left and right hippocampus cubes;
step 3, performing 3D dicing on each cropped left and right hippocampus cube, screening out, from the 3D patches of all cropped cubes, the 3D patches containing more hippocampus voxels than a set threshold, and preprocessing the screened 3D patches;
step 4, constructing a deep learning segmentation network composed of a main network and an auxiliary network, introducing a multi-scale feature learning module together with a region loss, an edge loss and a structural-similarity loss for optimization, and inputting the preprocessed 3D patches into the network for training to obtain a trained deep learning segmentation network;
and step 5, inputting all 3D patches of the cropped left and right hippocampus cubes into the trained deep learning segmentation network to obtain a segmentation result for each patch, and fusing the voxel labels of the overlapping regions between patches through a voting integration strategy according to the segmentation result of each patch to obtain the final segmentation label of each brain MRI image.
Wherein, the step 2 specifically comprises:
each brain MRI image is cropped into two 64 × 80 × 64 cubes centered on the centers of the left and right hippocampus labels, one cube containing the left hippocampus region and the other containing the right hippocampus region.
Wherein, the step 3 specifically comprises:
step 31, performing 3D dicing on each 64 × 80 × 64 cube along the x, y and z axes with a stride of 8 and a patch size of 32 × 32 × 32, so that each 64 × 80 × 64 cube yields [(64 − 32)/8 + 1] × [(80 − 32)/8 + 1] × [(64 − 32)/8 + 1] = 5 × 7 × 5 = 175 3D patches;
step 32, setting a hippocampus voxel threshold, screening out the 3D patches containing more hippocampus voxels than the threshold, applying N4 bias-field correction to the voxels of each screened 3D patch, and normalizing the pixel values of each screened 3D patch to [−1, 1].
Wherein, the step 4 specifically comprises:
the deep learning segmentation network is composed of one encoder and two decoders; the encoder is shared by the two decoders, the two decoders do not interfere with each other, and they jointly optimize the features learned by the encoder according to the losses each of them is responsible for. The network structure of the deep learning segmentation network is as follows: a 3D network consisting of a main network and an auxiliary network, both of which introduce the multi-scale feature learning module. The main network is composed of the shared 4-layer encoder and a 4-layer decoder; it obtains high-level semantic information through down-sampling, restores the original size of the brain MRI image through up-sampling and convolution, and focuses mainly on the region loss and edge loss of the segmentation network. The auxiliary network is composed of the shared 4-layer encoder and another 4-layer decoder, and it focuses mainly on the structural-similarity difference between the segmentation result and the ground-truth label.
Wherein, the step 4 further comprises:
both the main network and the auxiliary network introduce the multi-scale feature learning module: for each decoder layer, the encoder features of the corresponding layer and of all layers deeper than the current decoder layer are fed into the module for fusion. The module resamples these encoder features to the same size, passes them through convolution, batch normalization and a rectified linear unit, adds them pixel-wise to the features of the corresponding decoder layer, and then concatenates the result along the channel dimension with the encoder features of the same layer, so that the encoder features guide the fused multi-scale semantic features.
Wherein, the step 4 further comprises:
the objective function for the area loss is as follows:
$$L_{region} = 1 - \frac{2\sum_{i=1}^{N} p_i q_i + \varepsilon}{\sum_{i=1}^{N} p_i + \sum_{i=1}^{N} q_i + \varepsilon}$$
where $p_i$ denotes the predicted value at the i-th of the N voxels, $q_i$ denotes the true value at the i-th voxel, and $\varepsilon = 10^{-5}$.
Wherein, the step 4 further comprises:
the objective function of edge loss is as follows:
$$L_{edge} = \sum_{x \in \Omega} D(x)\, s_\theta(x)$$
where $D(x)$ denotes the distance map computed from the ground-truth labels and $s_\theta(x)$ denotes the softmax probability output for the predicted value at voxel $x$.
Wherein, the step 4 further comprises:
the objective function of the structural similarity loss is as follows:
$$L_{ssim} = 1 - \frac{1}{N}\sum_{(p,g)} \frac{(2\mu_p \mu_g)(2\sigma_{pg})}{(\mu_p^2 + \mu_g^2)(\sigma_p^2 + \sigma_g^2)}$$
where $p$ denotes a patch from the prediction set $P$ and $g$ the corresponding patch from the ground-truth set $G$, with $N$ corresponding pairs in total; $\mu_p$ and $\mu_g$ denote the means of $p$ and $g$, $\sigma_p$ and $\sigma_g$ their standard deviations, and $\sigma_{pg}$ the covariance of $p$ and $g$.
Wherein, the step 5 specifically comprises:
the voting integration strategy for the segmentation labels is as follows:
$$y = \begin{cases} 1, & \frac{1}{M}\sum_{i=1}^{M} p_i > 0.5 \\ 0, & \text{otherwise} \end{cases}$$
where $y$ denotes the label assigned to the voxel, $M$ denotes the number of overlapping patch predictions covering that voxel, and $p_i$ denotes the i-th overlapping predicted value; when the mean prediction exceeds 0.5, $y$ is taken as 1, with 1 as the hippocampus label, and otherwise $y$ is taken as 0, with 0 as the background.
The above scheme of the invention has the following beneficial effects.
The MRI image hippocampus region segmentation method based on multiple losses and multi-scale features according to the above embodiments of the present invention acquires T1-modality brain MRI images, crops them, performs 3D dicing and preprocessing, and inputs the preprocessed 3D patches into the deep learning segmentation network for training, validation and testing. The network adopts parallel main and auxiliary branches that concentrate on different objectives to jointly improve the features learned by the encoder, introduces a multi-scale feature learning module to fuse diverse features that guide the segmentation of details, and introduces a region loss, an edge loss and a structural-similarity loss to improve segmentation performance; the segmentation label of each pixel is predicted through the voting integration strategy. The method can accurately separate the hippocampus and background labels, and combining multi-scale information with multiple losses significantly improves the accuracy of hippocampus segmentation in brain images.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of a deep learning segmentation network structure according to the present invention;
FIG. 3 is a structural diagram of a multi-scale feature learning module according to the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
The invention provides an MRI image hippocampus region segmentation method based on multiple losses and multi-scale features, aiming at the problems of poor segmentation performance and low segmentation accuracy of existing deep learning hippocampus segmentation methods based on brain MRI images.
As shown in fig. 1 to 3, an embodiment of the present invention provides a method for segmenting the hippocampus region of an MRI image based on multiple losses and multi-scale features, including: step 1, acquiring a plurality of T1-modality brain MRI images and their hippocampus labels; step 2, cropping each T1-modality brain MRI image with the left and right hippocampus labels as reference to obtain a plurality of cropped left and right hippocampus cubes; step 3, performing 3D dicing on each cropped left and right hippocampus cube, screening out, from the 3D patches of all cropped cubes, the 3D patches containing more hippocampus voxels than a set threshold, and preprocessing the screened 3D patches; step 4, constructing a deep learning segmentation network composed of a main network and an auxiliary network, introducing a multi-scale feature learning module together with a region loss, an edge loss and a structural-similarity loss for optimization, and inputting the preprocessed 3D patches into the network for training to obtain a trained deep learning segmentation network; and step 5, inputting all 3D patches of the cropped left and right hippocampus cubes into the trained deep learning segmentation network to obtain a segmentation result for each patch, and fusing the voxel labels of the overlapping regions between patches through a voting integration strategy according to the segmentation result of each patch to obtain the final segmentation label of each brain MRI image.
In the MRI image hippocampus region segmentation method based on multiple losses and multi-scale features according to the above embodiment of the present invention, T1-modality brain MRI images and manually annotated hippocampus labels provided by the EADC-ADNI database are obtained; the European Alzheimer's Disease Consortium and Alzheimer's Disease Neuroimaging Initiative (EADC-ADNI) database provides a harmonized protocol (HarP) for manual segmentation of the hippocampus.
In the MRI image hippocampus region segmentation method based on multiple losses and multi-scale features according to the above embodiment of the present invention, the T1-modality MRI image is a magnetic resonance image whose main contrast is determined by the T1 difference between tissues or tissue states. It is acquired with a scanning sequence using a short TR (<500 ms) and a short TE (<25 ms). Short-T1 tissues such as fat relax sufficiently within a short TR, whereas long-T1 tissues such as cerebrospinal fluid relax relatively little within the same TR time. The tissues therefore absorb different amounts of energy when the next RF pulse arrives: short-T1 tissue shows a strong signal because it absorbs more energy, while long-T1 tissue is saturated, absorbs little energy, and shows a low signal. This difference in signal intensity between tissues produces enhanced T1 contrast in the image. A T1-modality brain MRI image is clearer than CT and the differences between regions are more obvious, so it is better suited to accurate delineation of the hippocampus in medical diagnosis and can effectively assist the diagnosis and prediction of neurological diseases. The segmentation result of the hippocampus region is obtained by a deep learning network; during training, the difference between the predicted values and the ground-truth labels of the training set is converted into a loss that is back-propagated to optimize the network parameters and improve the final segmentation result.
Because the hippocampus is very small compared with a complete MRI image, the image must be cropped in order to reduce the computational cost of the network and to initially alleviate the imbalance between the hippocampus and the background; the cropped image is guaranteed to contain the hippocampus target, yet the hippocampus still occupies only about 1% of a 64 × 80 × 64 crop. Furthermore, to better alleviate the extreme imbalance between the background and the hippocampus volume, eliminate noise, and increase the number of training samples and features, the crops are further diced into 3D patches, a threshold is set to screen the patches that meet the training standard, and finally preprocessing is performed to obtain a suitable training set. In the design of the deep learning segmentation network, a well-configured parallel main-auxiliary structure maximizes the feature-learning capability of the encoder, the multi-scale feature learning module makes the best use of global information, and the region loss, edge loss and structural-similarity loss reasonably optimize the network and improve its segmentation performance. Effective training with an appropriate data-processing scheme, the parallel network structure, the multi-scale feature learning module and the multiple losses together remedy shortcomings that existing deep learning segmentation networks do not take into account and further improve segmentation performance.
Wherein, the step 2 specifically comprises: each brain MRI image is cropped into two 64 × 80 × 64 cubes centered on the centers of the left and right hippocampus labels, one cube containing the left hippocampus region and the other containing the right hippocampus region.
The MRI image hippocampus region segmentation method based on various losses and multi-scale features according to the above embodiments of the present invention reduces the problem of extreme imbalance between the hippocampus label and the background sample by clipping each brain MRI image, and reduces the amount of computation of the deep learning segmentation network.
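For illustration only, the cropping in step 2 can be sketched roughly as follows. This is not the patented implementation; the label values 1 and 2 for the left and right hippocampus and the use of scipy's center-of-mass routine are assumptions.

```python
import numpy as np
from scipy import ndimage

CROP = (64, 80, 64)  # cube size used in step 2

def crop_around_label(image: np.ndarray, label: np.ndarray, label_value: int) -> np.ndarray:
    """Return a CROP-sized cube of `image` centered on the centroid of `label == label_value`."""
    center = ndimage.center_of_mass(label == label_value)
    slices = []
    for c, size, dim in zip(center, CROP, image.shape):
        start = int(round(c)) - size // 2
        start = max(0, min(start, dim - size))   # keep the cube inside the volume
        slices.append(slice(start, start + size))
    return image[tuple(slices)]

# hypothetical usage:
# left_cube = crop_around_label(mri, seg, 1)
# right_cube = crop_around_label(mri, seg, 2)
```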
Wherein, the step 3 specifically comprises: step 31, performing 3D dicing on each 64 × 80 × 64 cube along the x, y and z axes with a stride of 8 and a patch size of 32 × 32 × 32, so that each 64 × 80 × 64 cube yields [(64 − 32)/8 + 1] × [(80 − 32)/8 + 1] × [(64 − 32)/8 + 1] = 5 × 7 × 5 = 175 3D patches; step 32, setting a hippocampus voxel threshold, screening out the 3D patches containing more hippocampus voxels than the threshold, applying N4 bias-field correction to the voxels of each screened 3D patch, and normalizing the pixel values of each screened 3D patch to [−1, 1].
In the MRI image hippocampus region segmentation method based on multiple losses and multi-scale features according to the above embodiment of the present invention, the threshold is set to 500 and the 3D patches containing more than 500 hippocampus voxels are selected, so that none of the trained patches is useless; this also acts as data augmentation, because in the selected patches the position of the hippocampus may be shifted or the hippocampus target may be truncated. Although the features of one MRI image may be trained repeatedly over many iterations within a training period, the target and its position differ between these repetitions, so the repeated training is still meaningful: the variation of the training target and the offset of its position help improve the segmentation performance of the network. Preprocessing the screened 3D patches further enhances the learning capability and the convergence of the deep learning segmentation network.
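The dicing and screening of step 3 can be sketched as follows; this is an illustrative reimplementation rather than the patented code. The per-patch [−1, 1] min-max normalization is an assumption, and the N4 bias-field correction, normally applied with a dedicated toolkit, is only indicated by a comment.

```python
import numpy as np

PATCH, STRIDE, MIN_HIPPO_VOXELS = 32, 8, 500

def extract_patches(cube: np.ndarray, label_cube: np.ndarray):
    """Slide a 32x32x32 window over a 64x80x64 cube with stride 8 and keep the
    patches containing more than MIN_HIPPO_VOXELS hippocampus voxels."""
    patches, labels, origins = [], [], []
    for x in range(0, cube.shape[0] - PATCH + 1, STRIDE):          # 5 positions
        for y in range(0, cube.shape[1] - PATCH + 1, STRIDE):      # 7 positions
            for z in range(0, cube.shape[2] - PATCH + 1, STRIDE):  # 5 positions
                lab = label_cube[x:x+PATCH, y:y+PATCH, z:z+PATCH]
                if (lab > 0).sum() > MIN_HIPPO_VOXELS:
                    img = cube[x:x+PATCH, y:y+PATCH, z:z+PATCH].astype(np.float32)
                    # N4 bias-field correction would be applied to the patch here.
                    img = 2.0 * (img - img.min()) / (img.max() - img.min() + 1e-8) - 1.0  # [-1, 1]
                    patches.append(img)
                    labels.append(lab)
                    origins.append((x, y, z))
    return np.stack(patches), np.stack(labels), origins
```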
Wherein, the step 4 specifically comprises: the deep learning segmentation network is composed of an encoder and two decoders, wherein the encoder is shared by the two decoders, the two decoders are not interfered with each other, and the two decoders jointly optimize the characteristics learned by the encoder according to the loss respectively responsible for the two decoders; the network structure of the deep learning segmentation network is as follows: the method comprises the steps that a 3D network is adopted, the 3D network adopts a main network and an auxiliary network, a multi-scale feature learning module is introduced into the main network and the auxiliary network, the main network is composed of a shared 4-layer encoder and a 4-layer decoder, high-level semantic information is obtained through downsampling, the initial size of a brain MRI image is restored through upsampling and convolution, and the main network is mainly focused on regional loss and marginal loss of a segmentation network; the auxiliary network is composed of a shared 4-layer encoder and a 4-layer decoder, and the auxiliary network mainly focuses on the structural similarity difference between the segmentation result and the real label.
In the MRI image hippocampus region segmentation method based on multiple losses and multi-scale features according to the above embodiment of the present invention, a multi-scale feature learning module is introduced into both the main network and the auxiliary network; the main network is composed of the shared 4-layer encoder and a 4-layer decoder, obtains high-level semantic information by down-sampling, and then restores the original size of the image by up-sampling and convolution. The whole network consists of one encoder and two decoders: the encoder is shared by the two decoders, the two decoders do not interfere with each other, and the features learned by the encoder are jointly optimized according to the losses each decoder is responsible for.
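A minimal PyTorch sketch of the shared-encoder, dual-decoder topology described above is given below. The channel widths, 3x3x3 convolutions, max-pooling for down-sampling and transposed convolutions for up-sampling are assumptions, and the multi-scale feature learning module is omitted here for brevity; only the overall layout (one 4-level encoder feeding a main decoder and an auxiliary decoder) follows the description.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1),
                         nn.BatchNorm3d(cout), nn.ReLU(inplace=True))

class Decoder(nn.Module):
    def __init__(self, chs=(128, 64, 32, 16)):           # deepest -> shallowest widths
        super().__init__()
        self.ups = nn.ModuleList(nn.ConvTranspose3d(chs[i], chs[i + 1], 2, stride=2)
                                 for i in range(3))
        self.blocks = nn.ModuleList(conv_block(2 * chs[i + 1], chs[i + 1]) for i in range(3))
        self.head = nn.Conv3d(chs[-1], 1, 1)

    def forward(self, feats):                             # encoder features, shallowest first
        x = feats[-1]
        for up, block, skip in zip(self.ups, self.blocks, reversed(feats[:-1])):
            x = block(torch.cat([up(x), skip], dim=1))    # skip connection from shared encoder
        return torch.sigmoid(self.head(x))

class DualDecoderNet(nn.Module):
    def __init__(self):
        super().__init__()
        widths = (1, 16, 32, 64, 128)
        self.enc = nn.ModuleList(conv_block(widths[i], widths[i + 1]) for i in range(4))
        self.pool = nn.MaxPool3d(2)
        self.main_dec = Decoder()   # trained with the region and edge losses
        self.aux_dec = Decoder()    # trained with the structural-similarity loss

    def forward(self, x):
        feats = []
        for i, enc in enumerate(self.enc):
            x = enc(x if i == 0 else self.pool(x))        # 4-level shared encoder
            feats.append(x)
        return self.main_dec(feats), self.aux_dec(feats)

# hypothetical usage: main_prob, aux_prob = DualDecoderNet()(torch.randn(1, 1, 32, 32, 32))
```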
Wherein, the step 4 further comprises: the main network and the auxiliary network introduce a multi-scale feature learning module to respectively input the encoder features of each layer of a decoder, the corresponding layer of the current layer of the decoder and the encoder features of all layers higher than the current layer of the decoder into the multi-scale feature learning module for fusion, the multi-scale feature learning module resamples the encoder features of the corresponding layer of each layer of the decoder and the encoder features of all layers higher than the current layer of the decoder to the same size, pixel point addition is carried out on the encoder features and the corresponding current layer of the decoder after convolution, batch standardization and linear rectification functions, channel fusion is carried out on the encoder features and the encoder features of the same layer of the current layer of the corresponding decoder after the pixel point addition is completed, and the fused multi-scale semantic features are guided through the encoder features.
In the MRI image hippocampus region segmentation method based on multiple losses and multi-scale features according to the above embodiment of the present invention, the structure of the multi-scale feature learning module is defined as follows: each decoder layer is fused with the encoder features of the same layer and of all deeper layers. The module resamples the encoder features of the corresponding layer and of all layers deeper than the current decoder layer to the same size, passes them through convolution, batch normalization (BN) and a rectified linear unit (ReLU), adds them pixel-wise to the decoder features, and concatenates the result along the channel dimension with the encoder features of the same layer, so that the encoder features guide the fused multi-scale semantic features. Combining the convolution, batch normalization and ReLU activation units compensates for the information lost through trilinear interpolation and the preceding concatenation in each decoder layer, thereby enhancing the difference between foreground and background and using multi-scale low-level details to guide fine-grained segmentation such as edges and positions. By adding the multi-scale feature learning module, the deep learning segmentation network can fuse features from more levels and, during training, concentrate on the region, edge and structure losses to obtain the trained deep learning segmentation network.
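The fusion described in this paragraph can be sketched roughly as follows; trilinear interpolation for resampling and 1x1x1 convolutions for channel matching are assumptions, and the exact ordering of operations in the patented module may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Fuse the same-level encoder feature and all deeper encoder features
    into the current decoder feature."""
    def __init__(self, enc_channels, dec_channels):
        super().__init__()
        # one conv + BN + ReLU branch per contributing encoder level
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv3d(c, dec_channels, 1),
                          nn.BatchNorm3d(dec_channels),
                          nn.ReLU(inplace=True))
            for c in enc_channels)

    def forward(self, dec_feat, enc_feats):
        # enc_feats[0] is the same-level encoder feature; the rest are deeper levels
        fused = dec_feat
        for branch, enc in zip(self.branches, enc_feats):
            enc = F.interpolate(enc, size=dec_feat.shape[2:], mode='trilinear',
                                align_corners=False)      # resample to the decoder size
            fused = fused + branch(enc)                    # pixel-wise addition
        # channel fusion with the same-level encoder feature guides the decoder
        return torch.cat([fused, enc_feats[0]], dim=1)

# hypothetical usage for the shallowest decoder layer of the sketch above:
# fuse = MultiScaleFusion(enc_channels=(16, 32, 64, 128), dec_channels=16)
# out = fuse(dec_feat, [f1, f2, f3, f4])   # out has 16 + 16 channels
```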
According to the MRI image hippocampus region segmentation method based on multiple losses and multi-scale features, adding the multi-scale feature learning module allows the deep learning segmentation network to fuse features from more levels and to concentrate on the region, edge and structure losses during training to obtain the trained network; the module enhances the difference between foreground and background and uses multi-scale low-level details to guide fine-grained segmentation, such as edges and positions. Introducing the region loss and edge loss in the main network improves segmentation accuracy by refining detailed edges, and introducing the structural-similarity loss in the auxiliary network improves segmentation accuracy by reducing the difference between the predicted hippocampus and the real hippocampus.
Wherein, the step 4 further comprises: the objective function for the area loss is as follows:
$$L_{region} = 1 - \frac{2\sum_{i=1}^{N} p_i q_i + \varepsilon}{\sum_{i=1}^{N} p_i + \sum_{i=1}^{N} q_i + \varepsilon}$$
where $p_i$ denotes the predicted value at the i-th of the N voxels, $q_i$ denotes the true value at the i-th voxel, and $\varepsilon = 10^{-5}$.
In the MRI image hippocampus region segmentation method based on multiple losses and multi-scale features according to the above embodiment of the present invention, the region loss is mainly used when the numbers of target and background voxels are severely imbalanced, and it serves as the basic loss function for optimizing the segmentation of the hippocampus region.
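A hedged sketch of a Dice-style region loss matching the symbols defined above ($p_i$, $q_i$, ε = 1e-5) is shown below; the exact expression in the patent's equation image may differ slightly.

```python
import torch

def region_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Dice-style region loss; pred and target hold per-voxel values in [0, 1]."""
    p, q = pred.flatten(), target.flatten()
    dice = (2.0 * (p * q).sum() + eps) / (p.sum() + q.sum() + eps)
    return 1.0 - dice
```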
Wherein, the step 4 further comprises: the objective function of edge loss is as follows:
$$L_{edge} = \sum_{x \in \Omega} D(x)\, s_\theta(x)$$
where $D(x)$ denotes the distance map computed from the ground-truth labels and $s_\theta(x)$ denotes the softmax probability output for the predicted value at voxel $x$.
In the MRI image hippocampus region segmentation method based on multiple losses and multi-scale features according to the above embodiment of the present invention, the edge loss is a loss extracted from a distance map; it is an integral formulation that approximates boundary variations, and the distance-map loss is added as a boundary loss to supervise the difference between the distance maps computed from the segmentation and from the ground truth, respectively.
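A sketch of a distance-map based boundary loss in the spirit described above follows; building D(x) as a signed Euclidean distance transform of the ground-truth mask (via scipy) is an assumption about how the distance map is constructed.

```python
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Negative inside the hippocampus, positive outside, zero on the boundary."""
    outside = distance_transform_edt(mask == 0)
    inside = distance_transform_edt(mask > 0)
    return outside - inside

def edge_loss(prob: torch.Tensor, mask: np.ndarray) -> torch.Tensor:
    """prob: foreground probabilities s_theta(x), same spatial shape as mask."""
    dist = torch.from_numpy(signed_distance_map(mask)).to(prob.dtype)
    return (prob * dist).mean()
```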
Wherein, the step 4 further comprises: the objective function of the structural similarity loss is as follows:
$$L_{ssim} = 1 - \frac{1}{N}\sum_{(p,g)} \frac{(2\mu_p \mu_g)(2\sigma_{pg})}{(\mu_p^2 + \mu_g^2)(\sigma_p^2 + \sigma_g^2)}$$
where $p$ denotes a patch from the prediction set $P$ and $g$ the corresponding patch from the ground-truth set $G$, with $N$ corresponding pairs in total; $\mu_p$ and $\mu_g$ denote the means of $p$ and $g$, $\sigma_p$ and $\sigma_g$ their standard deviations, and $\sigma_{pg}$ the covariance of $p$ and $g$.
According to the MRI image hippocampus region segmentation method based on multiple losses and multi-scale features, structural similarity is an index that measures the similarity of two images; the structural-similarity function is introduced to compute the structural-similarity loss between the predicted image and the ground-truth label by comparing their structural similarity, so that the predicted structure becomes more similar to the real one.
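A sketch of a structural-similarity loss over corresponding prediction and ground-truth patches, using only the statistics named above (means, variances, covariance), is given below; the small eps added for numerical stability is an assumption.

```python
import torch

def ssim_loss(p: torch.Tensor, g: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """p, g: batches of corresponding patches with shape (N, ...); returns 1 - mean SSIM."""
    p, g = p.flatten(1).float(), g.flatten(1).float()
    mu_p, mu_g = p.mean(dim=1), g.mean(dim=1)
    var_p = p.var(dim=1, unbiased=False)
    var_g = g.var(dim=1, unbiased=False)
    cov = ((p - mu_p[:, None]) * (g - mu_g[:, None])).mean(dim=1)
    ssim = (2 * mu_p * mu_g) * (2 * cov) / ((mu_p**2 + mu_g**2) * (var_p + var_g) + eps)
    return 1.0 - ssim.mean()
```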
Wherein, the step 5 specifically comprises: the voting integration strategy for the segmentation labels is as follows:
$$y = \begin{cases} 1, & \frac{1}{M}\sum_{i=1}^{M} p_i > 0.5 \\ 0, & \text{otherwise} \end{cases}$$
where $y$ denotes the label assigned to the voxel, $M$ denotes the number of overlapping patch predictions covering that voxel, and $p_i$ denotes the i-th overlapping predicted value; when the mean prediction exceeds 0.5, $y$ is taken as 1, with 1 as the hippocampus label, and otherwise $y$ is taken as 0, with 0 as the background.
In the MRI image hippocampus region segmentation method based on multiple losses and multi-scale features according to the above embodiment of the present invention, when a voxel is covered by M overlapping 3D patches, the average of the M predicted values at that voxel is computed and a threshold of 0.5 is used to decide the final label: when the average exceeds 0.5, y is set to 1, where 1 corresponds to the left or right hippocampus; when the average is below 0.5, y is set to 0, which is used as the background. The voting integration strategy for the segmentation labels effectively reduces the random error of the generated labels and yields an accurate final segmentation result.
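The voting integration over overlapping patches can be sketched as follows; it accumulates each patch's probability map at its origin, averages by the overlap count, and thresholds at 0.5. The `origins` bookkeeping (the (x, y, z) corner of each patch) is an assumption matching the dicing sketch earlier in this description.

```python
import numpy as np

def fuse_patches(pred_patches, origins, cube_shape, patch=32):
    """Average overlapping patch predictions and threshold at 0.5 (1 = hippocampus, 0 = background)."""
    acc = np.zeros(cube_shape, dtype=np.float32)
    cnt = np.zeros(cube_shape, dtype=np.float32)
    for pred, (x, y, z) in zip(pred_patches, origins):
        acc[x:x+patch, y:y+patch, z:z+patch] += pred
        cnt[x:x+patch, y:y+patch, z:z+patch] += 1.0
    mean = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
    return (mean > 0.5).astype(np.uint8)
```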
The MRI image hippocampus region segmentation method based on multiple losses and multi-scale features according to the above embodiments of the present invention acquires T1-modality brain MRI images, performs cropping, 3D dicing and preprocessing, and inputs the preprocessed 3D patches into the deep learning segmentation network for training, validation and testing. The trained network adopts parallel main and auxiliary branches that concentrate on different objectives to jointly improve the features learned by the encoder, introduces a multi-scale feature learning module to fuse diverse features that guide the segmentation of details, and introduces a region loss, an edge loss and a structural-similarity loss to improve segmentation performance; the segmentation label of each pixel of each 3D patch is predicted through the voting integration strategy, so that the hippocampus and background labels can be segmented accurately. The method is applicable to hippocampus segmentation in any brain MRI image data, can segment the hippocampus region in image data of relatively high spatial resolution, such as T1-modality MRI data, and can label the voxels of the left and right hippocampus in the image data.
According to the MRI image hippocampus region segmentation method based on multiple losses and multi-scale features, cropping and 3D dicing of the brain MRI image provide data augmentation and reduce the training computation. On this basis, a deep learning segmentation network with parallel main and auxiliary networks is adopted; adding the multi-scale feature learning module exploits the complementary advantages of feature information from different levels to the greatest extent and improves segmentation performance, and adding the region, edge and structural-similarity losses improves multiple aspects of performance, jointly raising the final segmentation quality. The deep learning segmentation network can accurately separate the hippocampus and background labels, and combining multi-scale information with multiple losses significantly improves the accuracy of hippocampus segmentation in brain images.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

1. An MRI image hippocampus region segmentation method based on multiple losses and multi-scale features is characterized by comprising the following steps:
step 1, acquiring a plurality of T1-modality brain MRI images and their hippocampus labels;
step 2, cropping each T1-modality brain MRI image with the left and right hippocampus labels as reference to obtain a plurality of cropped left and right hippocampus cubes;
step 3, performing 3D dicing on each cropped left and right hippocampus cube, screening out, from the 3D patches of all cropped cubes, the 3D patches containing more hippocampus voxels than a set threshold, and preprocessing the screened 3D patches;
step 4, constructing a deep learning segmentation network composed of a main network and an auxiliary network, introducing a multi-scale feature learning module together with a region loss, an edge loss and a structural-similarity loss for optimization, and inputting the preprocessed 3D patches into the network for training to obtain a trained deep learning segmentation network;
and step 5, inputting all 3D patches of the cropped left and right hippocampus cubes into the trained deep learning segmentation network to obtain a segmentation result for each patch, and fusing the voxel labels of the overlapping regions between patches through a voting integration strategy according to the segmentation result of each patch to obtain the final segmentation label of each brain MRI image.
2. The method for segmenting the hippocampus of an MRI image based on multiple losses and multi-scale features according to claim 1, wherein the step 2 specifically comprises:
each brain MRI image is cropped into two 64 × 80 × 64 cubes centered on the centers of the left and right hippocampus labels, one cube containing the left hippocampus region and the other containing the right hippocampus region.
3. The method for segmenting the hippocampus of MRI images based on multiple losses and multi-scale features as claimed in claim 2, wherein said step 3 specifically comprises:
step 31, performing 3D dicing on each 64 × 80 × 64 cube along the x, y and z axes with a stride of 8 and a patch size of 32 × 32 × 32, so that each 64 × 80 × 64 cube yields [(64 − 32)/8 + 1] × [(80 − 32)/8 + 1] × [(64 − 32)/8 + 1] = 5 × 7 × 5 = 175 3D patches;
step 32, setting a hippocampus voxel threshold, screening out the 3D patches containing more hippocampus voxels than the threshold, applying N4 bias-field correction to the voxels of each screened 3D patch, and normalizing the pixel values of each screened 3D patch to [−1, 1].
4. The method for segmenting the hippocampus of an MRI image based on multiple losses and multi-scale features as claimed in claim 3, wherein the step 4 specifically comprises:
the deep learning segmentation network is composed of one encoder and two decoders; the encoder is shared by the two decoders, the two decoders do not interfere with each other, and they jointly optimize the features learned by the encoder according to the losses each of them is responsible for. The network structure of the deep learning segmentation network is as follows: a 3D network consisting of a main network and an auxiliary network, both of which introduce the multi-scale feature learning module. The main network is composed of the shared 4-layer encoder and a 4-layer decoder; it obtains high-level semantic information through down-sampling, restores the original size of the brain MRI image through up-sampling and convolution, and focuses mainly on the region loss and edge loss of the segmentation network. The auxiliary network is composed of the shared 4-layer encoder and another 4-layer decoder, and it focuses mainly on the structural-similarity difference between the segmentation result and the ground-truth label.
5. The method for segmenting the hippocampus of MRI images based on multiple losses and multi-scale features as claimed in claim 4, wherein said step 4 further comprises:
both the main network and the auxiliary network introduce the multi-scale feature learning module: for each decoder layer, the encoder features of the corresponding layer and of all layers deeper than the current decoder layer are fed into the module for fusion. The module resamples these encoder features to the same size, passes them through convolution, batch normalization and a rectified linear unit, adds them pixel-wise to the features of the corresponding decoder layer, and then concatenates the result along the channel dimension with the encoder features of the same layer, so that the encoder features guide the fused multi-scale semantic features.
6. The method for segmenting the hippocampus of the MRI image based on multiple losses and multiple scale features as claimed in claim 5, wherein the step 4 further comprises:
the objective function for the area loss is as follows:
$$L_{region} = 1 - \frac{2\sum_{i=1}^{N} p_i q_i + \varepsilon}{\sum_{i=1}^{N} p_i + \sum_{i=1}^{N} q_i + \varepsilon}$$
where $p_i$ denotes the predicted value at the i-th of the N voxels, $q_i$ denotes the true value at the i-th voxel, and $\varepsilon = 10^{-5}$.
7. The method for segmenting the hippocampus of MRI images based on multiple losses and multi-scale features as claimed in claim 6, wherein said step 4 further comprises:
the objective function of edge loss is as follows:
$$L_{edge} = \sum_{x \in \Omega} D(x)\, s_\theta(x)$$
where $D(x)$ denotes the distance map computed from the ground-truth labels and $s_\theta(x)$ denotes the softmax probability output for the predicted value at voxel $x$.
8. The method for segmenting the hippocampus of MRI images based on multiple losses and multi-scale features as claimed in claim 7, wherein said step 4 further comprises:
the objective function of the structural similarity loss is as follows:
$$L_{ssim} = 1 - \frac{1}{N}\sum_{(p,g)} \frac{(2\mu_p \mu_g)(2\sigma_{pg})}{(\mu_p^2 + \mu_g^2)(\sigma_p^2 + \sigma_g^2)}$$
where $p$ denotes a patch from the prediction set $P$ and $g$ the corresponding patch from the ground-truth set $G$, with $N$ corresponding pairs in total; $\mu_p$ and $\mu_g$ denote the means of $p$ and $g$, $\sigma_p$ and $\sigma_g$ their standard deviations, and $\sigma_{pg}$ the covariance of $p$ and $g$.
9. The method for segmenting the hippocampus of MRI images based on multiple losses and multi-scale features as claimed in claim 8, wherein said step 5 specifically comprises:
the voting integration strategy for the segmentation labels is as follows:
$$y = \begin{cases} 1, & \frac{1}{M}\sum_{i=1}^{M} p_i > 0.5 \\ 0, & \text{otherwise} \end{cases}$$
where $y$ denotes the label assigned to the voxel, $M$ denotes the number of overlapping patch predictions covering that voxel, and $p_i$ denotes the i-th overlapping predicted value; when the mean prediction exceeds 0.5, $y$ is taken as 1, with 1 as the hippocampus label, and otherwise $y$ is taken as 0, with 0 as the background.
CN202110767563.3A 2021-07-07 2021-07-07 MRI image hippocampus region segmentation method based on multiple losses and multiscale characteristics Active CN113496496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110767563.3A CN113496496B (en) 2021-07-07 2021-07-07 MRI image hippocampus region segmentation method based on multiple losses and multiscale characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110767563.3A CN113496496B (en) 2021-07-07 2021-07-07 MRI image hippocampus region segmentation method based on multiple losses and multiscale characteristics

Publications (2)

Publication Number Publication Date
CN113496496A 2021-10-12
CN113496496B (en) 2023-04-07

Family

ID=77995843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110767563.3A Active CN113496496B (en) 2021-07-07 2021-07-07 MRI image hippocampus region segmentation method based on multiple losses and multiscale characteristics

Country Status (1)

Country Link
CN (1) CN113496496B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937129A (en) * 2022-12-01 2023-04-07 北京邮电大学 Method and device for processing left-right half-brain relation based on multi-modal magnetic resonance image

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060262A (en) * 2019-04-18 2019-07-26 北京市商汤科技开发有限公司 A kind of image partition method and device, electronic equipment and storage medium
CN110097131A (en) * 2019-05-08 2019-08-06 南京大学 A kind of semi-supervised medical image segmentation method based on confrontation coorinated training
CN110969626A (en) * 2019-11-27 2020-04-07 西南交通大学 Method for extracting hippocampus of human brain nuclear magnetic resonance image based on 3D neural network
CN112017191A (en) * 2020-08-12 2020-12-01 西北大学 Method for establishing and segmenting liver pathology image segmentation model based on attention mechanism
US20210133966A1 (en) * 2019-10-02 2021-05-06 Memorial Sloan Kettering Cancer Center Deep multi-magnification networks for multi-class image segmentation
CN113052856A (en) * 2021-03-12 2021-06-29 北京工业大学 Hippocampus three-dimensional semantic network segmentation method based on multi-scale feature multi-path attention fusion mechanism

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060262A (en) * 2019-04-18 2019-07-26 北京市商汤科技开发有限公司 A kind of image partition method and device, electronic equipment and storage medium
CN110097131A (en) * 2019-05-08 2019-08-06 南京大学 A kind of semi-supervised medical image segmentation method based on confrontation coorinated training
US20210133966A1 (en) * 2019-10-02 2021-05-06 Memorial Sloan Kettering Cancer Center Deep multi-magnification networks for multi-class image segmentation
CN110969626A (en) * 2019-11-27 2020-04-07 西南交通大学 Method for extracting hippocampus of human brain nuclear magnetic resonance image based on 3D neural network
CN112017191A (en) * 2020-08-12 2020-12-01 西北大学 Method for establishing and segmenting liver pathology image segmentation model based on attention mechanism
CN113052856A (en) * 2021-03-12 2021-06-29 北京工业大学 Hippocampus three-dimensional semantic network segmentation method based on multi-scale feature multi-path attention fusion mechanism

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KANGLI QIN ET AL.: "Asymmetric Encode-Decode Network with Two Decoding Paths For Skin Lesion Segmentation", 2020 5th International Conference on Biomedical Imaging, Signal Processing *
LE GENG ET AL.: "M2E-Net: Multiscale Morphological Enhancement Network for Retinal Vessel Segmentation", Pattern Recognition and Computer Vision *
唐维: "Research on Deep Learning-Based Liver CT Image Segmentation Methods" (in Chinese), China Master's Theses Full-text Database, Medicine & Health Sciences *
张雪媛: "Convolutional Neural Network-Based Cervical Cell Nucleus Segmentation and Recognition Method" (in Chinese), China Master's Theses Full-text Database, Medicine & Health Sciences *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937129A (en) * 2022-12-01 2023-04-07 北京邮电大学 Method and device for processing left-right half-brain relation based on multi-modal magnetic resonance image
CN115937129B (en) * 2022-12-01 2024-04-02 北京邮电大学 Method and device for processing left and right half brain relations based on multi-mode magnetic resonance image

Also Published As

Publication number Publication date
CN113496496B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
CN112927240B (en) CT image segmentation method based on improved AU-Net network
Kumar et al. U-segnet: fully convolutional neural network based automated brain tissue segmentation tool
CN113012172A (en) AS-UNet-based medical image segmentation method and system
CN112215844A (en) MRI (magnetic resonance imaging) multi-mode image segmentation method and system based on ACU-Net
CN112465754B (en) 3D medical image segmentation method and device based on layered perception fusion and storage medium
CN114494296A (en) Brain glioma segmentation method and system based on fusion of Unet and Transformer
CN112884788B (en) Cup optic disk segmentation method and imaging method based on rich context network
CN111179269A (en) PET image segmentation method based on multi-view and 3-dimensional convolution fusion strategy
CN113496496B (en) MRI image hippocampus region segmentation method based on multiple losses and multiscale characteristics
CN115294029A (en) Brain focus region positioning system and method for multi-modal images
CN114581474A (en) Automatic clinical target area delineation method based on cervical cancer CT image
CN114549394A (en) Deep learning-based tumor focus region semantic segmentation method and system
CN104463885A (en) Partition method for multiple-sclerosis damage area
CN113269764A (en) Automatic segmentation method and system for intracranial aneurysm, sample processing method and model training method
CN110728660B (en) Method and device for lesion segmentation based on ischemic stroke MRI detection mark
CN116664605A (en) Medical image tumor segmentation method based on diffusion model and multi-mode fusion
CN116229074A (en) Progressive boundary region optimized medical image small sample segmentation method
CN113379770B (en) Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
CN113269816A (en) Regional progressive brain image elastic registration method and system
CN110992309A (en) Fundus image segmentation method based on deep information transfer network
CN111210436A (en) Lens segmentation method, device and storage medium
CN113516671B (en) Infant brain tissue image segmentation method based on U-net and attention mechanism
CN114399519B (en) MR image 3D semantic segmentation method and system based on multi-modal fusion
CN117611601B (en) Text-assisted semi-supervised 3D medical image segmentation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant