CN114972382A - Brain tumor segmentation algorithm based on lightweight UNet++ network - Google Patents
- Publication number: CN114972382A
- Application number: CN202210613167.XA
- Authority: CN (China)
- Prior art keywords: lightweight, convolution, segmentation, network, brain tumor
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/11 — Region-based segmentation
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
- G06T2207/10088 — Magnetic resonance imaging [MRI]
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20132 — Image cropping
- G06T2207/20221 — Image fusion; Image merging
- G06T2207/30016 — Brain
Abstract
The invention proposes a lightweight brain tumor segmentation algorithm based on an improved UNet++ network model. For accurate multimodal segmentation of brain tumor magnetic resonance imaging (MRI), the dense long and short skip connections of the UNet++ network model link the semantics of the network structure tightly, but these dense connections also increase the amount of computation and the number of parameters, slowing UNet++ training and placing higher demands on hardware. The lightweight UNet++ network model replaces the double-layer convolution structure of the UNet++ series with a lightweight residual module, reducing the computational complexity and parameter count of the network. The dense connections also mean that the feature map obtained after each concatenation has a large number of channels, and the features in some channels carry no practical significance for the segmentation task; a CBAM attention mechanism is therefore added after the feature map to learn screening parameters, focus on useful information, and improve segmentation accuracy. A lightweight class-residual module is applied in the last downsampling stage; its channel concatenation better preserves and exploits effective deep features, further improving brain tumor segmentation accuracy while reducing training time.
Description
Technical Field
The invention proposes a brain tumor segmentation algorithm based on deep learning, adopting a lightweight segmentation algorithm built on an improved UNet++ network model. The improved lightweight UNet++ network model is applied to brain tumor MRI image segmentation. While maintaining the overall segmentation accuracy, it improves the accuracy of segmenting the internal tissues of brain tumors. The lightweight modules effectively reduce the computational complexity and parameter count of the entire model and increase training speed, solving the problem of slow model training caused by the complex structure of the UNet++ network.
Background Art
At present, brain tumors are common malignant tumors that threaten human life. They are highly invasive and contain various histological subregions. Owing to the inherent spatial heterogeneity of brain tumors and their infiltrative growth, complex pathological changes may occur inside the tumor, altering the grayscale, shape, texture, and histological features of brain tumor MRI images. Multimodal glioma MRI images therefore present great diversity and complexity, making it difficult for radiologists and other clinicians to identify and segment brain tumors. Manual brain tumor segmentation requires highly specialized prior knowledge, is time-consuming and labor-intensive, is prone to error, and depends heavily on the physician's experience. Accurate segmentation of brain tumors remains one of the challenging tasks in medical image analysis.
Deep learning has developed rapidly in recent years and is widely applied to image segmentation. The UNet++ network uses a series of grid-like dense skip paths and an end-to-end encoder-decoder structure, and achieves good results in brain tumor segmentation tasks. However, it is precisely this structure that gives the network a huge number of parameters, posing great challenges for segmentation speed and device memory and making it difficult to put the model into practical use. When segmenting 3D images with large data volumes, UNet++ trains even more slowly and places higher demands on hardware.
Summary of the Invention
The invention mainly addresses the problem that the UNet++ network model, with its complex structure and large parameter count, trains slowly on 3D brain tumor image segmentation tasks. A lightweight 3D UNet++ network model is proposed: the structure of the 3D UNet++ model is optimized while its excellent dense connections are retained; an improved lightweight class-residual module and a lightweight residual module make the model lightweight overall, reducing the parameter count while increasing training speed; and a CBAM attention mechanism is added to the structure so the model learns to focus on effective information, screening out useful information through CBAM and further improving segmentation accuracy.
To achieve the above object, the technical scheme of the present invention is as follows:
A lightweight brain tumor segmentation algorithm based on an improved UNet++ network model, comprising the following steps:
Step 1: Data preprocessing — convert the data set of brain tumor MRI images to a size the network can train on, as required;
Step 2: Build a lightweight 3D UNet++ network model, applying the lightweight class-residual module, the lightweight residual module, and the CBAM attention mechanism;
Step 3: Train the lightweight 3D UNet++ network model and obtain the brain tumor image segmentation results.
The specific process of Step 1 is as follows:
(1) Cross-block the brain tumor MRI image data: divide each 155×160×160 brain tumor image into 7 pixel blocks of size 32×160×160 (the part that cannot be fully divided is filled with background);
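The overlapping slicing above can be sketched as follows. The 8-slice overlap between consecutive blocks is stated later in step (4); the resulting stride of 24 and the zero-valued background padding are assumptions consistent with those figures.

```python
# Split a 155-slice volume into 7 overlapping 32-slice blocks, as described
# in step (1). Stride 24 (= 32 - 8 overlap) is inferred from the 8-slice
# overlap given in step (4); the tail is padded with background zeros.
import numpy as np

def split_into_blocks(volume, depth=32, overlap=8, n_blocks=7):
    """volume: (155, 160, 160) -> list of 7 blocks of shape (32, 160, 160)."""
    stride = depth - overlap                      # 24 slices between block starts
    total = stride * (n_blocks - 1) + depth       # 176 slices needed in total
    pad = total - volume.shape[0]                 # 21 background slices appended
    padded = np.concatenate(
        [volume, np.zeros((pad,) + volume.shape[1:], volume.dtype)], axis=0)
    return [padded[i * stride : i * stride + depth] for i in range(n_blocks)]

blocks = split_into_blocks(np.random.rand(155, 160, 160).astype(np.float32))
print(len(blocks), blocks[0].shape)   # 7 (32, 160, 160)
```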
(2) Because different imaging modes applied to a brain tumor patient in the same period produce four-modality brain tumor images, each case in the BraTS2018 and BraTS2019 datasets has four modalities (t1, t2, flair, t1ce). The imaging methods of the MRI pictures differ between modalities, so the images differ in contrast. First, extreme value suppression is applied to the data to prevent maxima or minima from having an outsized influence on the whole picture; then the Z-score method is used to standardize the images of each modality separately (i.e., subtract the mean from the image and divide by the standard deviation), further resolving the contrast differences;
The Z-score standardization formula can be expressed as:
z = (X − μ) / σ
where X is the input sample, μ is the mean of all sample data, and σ is the standard deviation of all sample data;
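A minimal sketch of this per-modality preprocessing: clip extreme values (the "extreme value suppression"), then z-score normalize each modality independently. The 0.5/99.5 percentile clipping bounds are an assumption; the text does not specify them.

```python
# Per-modality extreme value suppression + Z-score standardization.
# The percentile clipping bounds are illustrative assumptions.
import numpy as np

def normalize_modality(img, lo_pct=0.5, hi_pct=99.5):
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    clipped = np.clip(img, lo, hi)                 # suppress extreme values
    return (clipped - clipped.mean()) / (clipped.std() + 1e-8)   # z = (X - mu) / sigma

# Each of the four modalities (t1, t2, flair, t1ce) is standardized separately.
volume = np.random.rand(4, 32, 160, 160).astype(np.float32)
normalized = np.stack([normalize_modality(m) for m in volume])
```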
(3) Crop the brain tumor MRI image data, adjusting the input to an appropriate scale. Because the background occupies a large proportion of the whole image and the background region is not the segmentation target, it can be treated as an invalid region; cropping it does not reduce the target region;
(4) Dicing and splicing: in a new dimension, splice the 32×160×160 pixel blocks at the same positions of the four modalities to obtain 4×32×160×160 pixel blocks as the final network input. The expert-annotated brain tumor images of each patient are likewise cross-blocked from 155×160×160 into 7 image blocks of size 32×160×160 (the part that cannot be fully divided is filled with background), with consecutive blocks overlapping by 8 slices. Each 32×160×160 image block is copied three times and the following operations are performed respectively: set enhancing tumor, peritumoral edema, and non-enhancing tumor to 1 and the rest to background 0; set enhancing tumor and non-enhancing tumor to 1 and the rest to background 0; set enhancing tumor to 1 and the rest to background 0. These operations yield three image blocks of size 32×160×160, which are concatenated along a new dimension into an image block of size 3×32×160×160; the resulting block serves as the label for the whole network;
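The three-channel label construction above can be sketched as follows. The numeric annotation values (1 = non-enhancing/necrotic core, 2 = peritumoral edema, 4 = enhancing tumor) follow the standard BraTS convention and are an assumption here; the text only names the tissue classes.

```python
# Build the 3-channel binary label block described in step (4), assuming
# BraTS label values: 1 = non-enhancing core, 2 = edema, 4 = enhancing tumor.
import numpy as np

def make_label_channels(seg):
    """seg: (D, H, W) expert annotation -> (3, D, H, W) binary label block."""
    whole_tumor = np.isin(seg, [1, 2, 4]).astype(np.float32)  # all tumor tissue -> 1
    tumor_core = np.isin(seg, [1, 4]).astype(np.float32)      # enhancing + non-enhancing
    enhancing = (seg == 4).astype(np.float32)                 # enhancing tumor only
    return np.stack([whole_tumor, tumor_core, enhancing])     # new dimension, size 3

seg = np.array([[[0, 1], [2, 4]]])          # tiny toy annotation for illustration
label = make_label_channels(seg)
print(label.shape)   # (3, 1, 2, 2)
```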
(5) Data augmentation: apply random cropping and affine transformations such as random rotation, scaling, translation, and shearing to the brain tumor images;
The specific details of Step 2 are as follows:
(1) Apply the lightweight class-residual module and the lightweight residual module to the 3D UNet++ network to form the brain tumor segmentation network model:
① Lightweight improvement of the class-residual module and the residual module.
The specific process of implementing the lightweight class-residual module:
During convolutional feature extraction, the loss of deep feature information is greater than in shallow convolutional feature extraction, and applying a class-residual module in the deep network can reduce this loss. The class-residual module first uses a 1×1 convolution on the main branch to expand the channel domain to 2.5 times its original size, then uses a 3×3 convolution for feature extraction, and finally uses a 1×1 convolution for channel-domain information fusion. After the shortcut branch, the input is not added element-wise to the feature map but concatenated along the channel domain; this makes full use of the feature maps both before and after convolution;
The lightweight class-residual module keeps the segmentation accuracy advantage of the class-residual module while being further lightweight: the ordinary convolution with kernel size 3 is changed to a grouped convolution that retains the structure, with the number of groups equal to the number of input channels of that convolution. Then, to solve the problem that channel-domain information cannot interact, after the module performs channel-domain concatenation a convolution with kernel size 1 is used for inter-channel information interaction while the channel domain is reduced, thereby reducing network parameters and computation;
The lightweight class-residual module can be expressed as:
x_{m+1} = Cat(x_m, F(x_m; W_m))
where x_m is the mapping part, F(x_m; W_m) is the class-residual part, and Cat is feature-map channel-domain concatenation;
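A PyTorch sketch of the lightweight class-residual module as described above (1×1 expansion to 2.5×, grouped 3×3 convolution, 1×1 fusion, concatenation shortcut, then a 1×1 convolution for inter-channel interaction and channel reduction). The channel counts and output width are illustrative assumptions.

```python
# Lightweight class-residual module: x_{m+1} = Cat(x_m, F(x_m; W_m)),
# followed by a 1x1 conv that mixes channels and shrinks the channel domain.
import torch
import torch.nn as nn

class LightweightClassResidual(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        mid = int(in_ch * 2.5)                       # 1x1 expansion to 2.5x channels
        self.branch = nn.Sequential(
            nn.Conv3d(in_ch, mid, kernel_size=1),
            # grouped conv: groups = input channels of the 3x3 conv (depthwise)
            nn.Conv3d(mid, mid, kernel_size=3, padding=1, groups=mid),
            nn.Conv3d(mid, mid, kernel_size=1),      # channel-domain fusion
        )
        # After concatenating the shortcut, a 1x1 conv enables inter-channel
        # interaction and reduces the channel domain.
        self.fuse = nn.Conv3d(in_ch + mid, out_ch, kernel_size=1)

    def forward(self, x):
        y = torch.cat([x, self.branch(x)], dim=1)    # channel-domain concatenation
        return self.fuse(y)

block = LightweightClassResidual(16, 32)
out = block(torch.randn(1, 16, 8, 16, 16))
print(out.shape)   # torch.Size([1, 32, 8, 16, 16])
```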
The specific process of implementing the lightweight residual module:
The lightweight residual module uses a convolution with kernel size 1 to reduce the number of input channels to 1/4 of the original, then uses a convolution with kernel size 3 for feature extraction, and finally uses a convolution with kernel size 1 to expand the number of channels to twice the original input channel count, thereby reducing network parameters and computation;
The lightweight residual module can be expressed as:
x_{l+1} = x_l + F(x_l; W_l)
where x_l is the direct mapping part and F(x_l; W_l) is the residual part;
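A PyTorch sketch of the lightweight residual (bottleneck) module described above. Because the branch doubles the channel count, the shortcut here is projected with a 1×1 convolution so that the element-wise addition x_{l+1} = x_l + F(x_l; W_l) is well-defined; that projection is an assumption not spelled out in the text.

```python
# Lightweight residual module: 1x1 shrink to 1/4, 3x3 extraction, 1x1 expand
# to 2x the input channels. The 1x1 shortcut projection is an assumption
# needed to make the residual addition dimensionally valid.
import torch
import torch.nn as nn

class LightweightResidual(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        mid, out_ch = in_ch // 4, in_ch * 2
        self.branch = nn.Sequential(
            nn.Conv3d(in_ch, mid, kernel_size=1),          # shrink to 1/4
            nn.Conv3d(mid, mid, kernel_size=3, padding=1), # 3x3x3 feature extraction
            nn.Conv3d(mid, out_ch, kernel_size=1),         # expand to 2x input
        )
        self.shortcut = nn.Conv3d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):                                  # x_{l+1} = x_l + F(x_l; W_l)
        return self.shortcut(x) + self.branch(x)

block = LightweightResidual(16)
out = block(torch.randn(1, 16, 8, 16, 16))
print(out.shape)   # torch.Size([1, 32, 8, 16, 16])
```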
② During training, to reduce the impact of the class imbalance problem on segmentation accuracy, a hybrid loss function BCEDiceLoss is used, combining binary cross-entropy (binary_cross_entropy) and the medical-imaging loss Dice Loss:
The specific process of computing the binary cross-entropy:
First, the output of model training is judged. Because the doctor-annotated brain tumor segmentation pictures are preprocessed, with target regions marked 1 and non-target regions marked 0, the loss input is a binary classification problem; each point of the network's training output is a node, and each node is classified according to whether it is greater than 0.5;
The specific process of computing the cross-entropy:
L(p, t) = −[p·log(t) + (1 − p)·log(1 − t)]
where p is the expected output from the preprocessed doctor-annotated segmentation picture and t is the actual output of network model training;
The specific process of computing the medical-imaging loss DiceLoss:
First consider the definition of the Dice coefficient, a metric function used to measure set similarity, usually used to compute the similarity of two samples, with s taking values in [0, 1]:
s = 2|X ∩ Y| / (|X| + |Y|)
where X denotes the ground-truth segmented image and Y denotes the predicted segmented image; |X ∩ Y| is the intersection of X and Y, and the coefficient 2 in the numerator accounts for the double counting of X and Y in the denominator;
The DiceLoss formula is defined as:
DiceLoss = 1 − 2|X ∩ Y| / (|X| + |Y|)
Laplace smoothing is added to Dice Loss; as an adjustment value it is defined here as 1e-5, i.e., 1e-5 is added to both the numerator and the denominator of Dice Loss:
DiceLoss = 1 − (2|X ∩ Y| + 1e-5) / (|X| + |Y| + 1e-5)
Laplace smoothing reduces overfitting and avoids division of the numerator by 0 when both |X| and |Y| are 0;
The final hybrid loss is defined as the combination of the two terms:
BCEDiceLoss = L(p, t) + DiceLoss
In summary, using the hybrid loss function BCEDiceLoss improves the performance of the network model, guarantees the precision of the Dice coefficient, reduces the error between the model's segmentation result and the expert's delineation, and improves segmentation accuracy;
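The hybrid loss above can be sketched in PyTorch as follows. The equal weighting of the two terms is an assumption; the text only says they are combined.

```python
# BCEDiceLoss: binary cross-entropy + (1 - Dice), with Laplace smoothing 1e-5
# added to the numerator and denominator of the Dice term.
import torch
import torch.nn as nn

class BCEDiceLoss(nn.Module):
    def __init__(self, smooth=1e-5):
        super().__init__()
        self.smooth = smooth                       # Laplace smoothing term
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, logits, target):
        prob = torch.sigmoid(logits)
        inter = (prob * target).sum()
        dice = (2 * inter + self.smooth) / (prob.sum() + target.sum() + self.smooth)
        return self.bce(logits, target) + (1 - dice)

loss_fn = BCEDiceLoss()
loss = loss_fn(torch.randn(2, 3, 8, 16, 16),
               torch.randint(0, 2, (2, 3, 8, 16, 16)).float())
```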
③ The constructed network model uses 3 downsampling and 6 upsampling operations and adopts the lightweight residual module in place of the UNet++ series double-layer convolution structure. Although direct replacement makes the network lightweight, compared with the original double-layer convolution structure the residual structure uses only one convolution layer with kernel size 3, the rest being replaced by kernels of size 1, which leads to insufficient feature extraction and may reduce segmentation accuracy. The improvement therefore adds a CBAM attention mechanism, applied to the outermost layer of the U-shaped structure, i.e., the feature map obtained by channel-domain concatenation after upsampling. This feature map is produced by a series of long and short connections, so its semantic gap is already relatively small; however, the repeated concatenations give it a large number of channels, and the features in some channels have no practical significance for the segmentation task. The CBAM attention module therefore learns screening parameters, attends to useful information, and improves segmentation accuracy. Because a certain semantic gap exists between the output feature map after convolutional feature extraction and the input feature map, the lightweight class-residual module is applied in the last downsampling; its channel concatenation better preserves and exploits effective deep features, further improving brain tumor segmentation accuracy.
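A 3D CBAM sketch of the kind applied above to reweight channels after concatenation. The reduction ratio of 16 and the 7×7×7 spatial kernel follow the original CBAM design and are assumptions here; the patent does not give these hyperparameters.

```python
# 3D CBAM: channel attention (avg + max descriptors through a shared MLP),
# then spatial attention (channel-wise avg + max pooled maps through a conv).
import torch
import torch.nn as nn

class CBAM3D(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(                      # shared MLP for channel attention
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
        )
        self.spatial = nn.Conv3d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention: average- and max-pooled descriptors share one MLP.
        avg = self.mlp(x.mean(dim=(2, 3, 4), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3, 4), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: pool across channels, then a 7x7x7 convolution.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

attn = CBAM3D(32)
out = attn(torch.randn(1, 32, 8, 16, 16))
print(out.shape)   # torch.Size([1, 32, 8, 16, 16])
```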
(2) After the above lightweight UNet++ network model, one more 3D convolution is added with kernel size 1, changing the number of channels to 3 so that the output matches the channel count of the preprocessed expert-annotated patient labels.
The specific details of Step 3 are as follows:
(1) Train with the lightweight 3D UNet++ network model to obtain the brain tumor image segmentation result; apply a sigmoid to the segmentation result, judge whether each value is greater than 0.5, convert the results to 0 and 1, splice them, and restore the three channels to a single channel according to the channel definitions, yielding the brain tumor segmentation result map.
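The inference post-processing above can be sketched as follows: sigmoid, threshold at 0.5, then fold the three nested binary channels back into a single-channel label map. The numeric output labels (2 = edema, 1 = non-enhancing core, 4 = enhancing tumor) follow the BraTS convention and are an assumption here.

```python
# Post-process the (whole tumor, tumor core, enhancing) channels into one
# label map. Writing the masks in nested order means the most specific
# tissue class wins at each voxel.
import numpy as np

def postprocess(logits):
    """logits: (3, D, H, W) network output -> (D, H, W) single-channel label map."""
    prob = 1.0 / (1.0 + np.exp(-logits))        # sigmoid
    wt, tc, et = prob > 0.5                     # binary mask per channel
    label = np.zeros(logits.shape[1:], dtype=np.uint8)
    label[wt] = 2                               # peritumoral edema (whole tumor)
    label[tc] = 1                               # non-enhancing tumor core
    label[et] = 4                               # enhancing tumor
    return label

logits = np.full((3, 1, 2, 2), -10.0)           # tiny toy output for illustration
logits[0, 0, 0, 0] = 10.0                       # whole tumor only   -> edema
logits[0:2, 0, 0, 1] = 10.0                     # whole tumor + core -> non-enhancing core
logits[:, 0, 1, 1] = 10.0                       # all three channels -> enhancing tumor
label = postprocess(logits)
```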
Compared with the prior art, the beneficial effects of the technical solution of the present invention are:
(1) The present invention makes a lightweight improvement to the UNet++ network model. While maintaining the overall segmentation accuracy, it improves the accuracy of segmenting the internal tissues of brain tumors; the lightweight modules effectively reduce the computational complexity and parameter count of the entire model and increase training speed, solving the problem of slow network training caused by the complexity of the UNet++ model.
Brief Description of the Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 shows the lightweight class-residual module of the present invention.
FIG. 3 shows the lightweight residual module of the present invention.
FIG. 4 shows the improved lightweight UNet++ network model of the present invention.
Detailed Description
It will be understood by those skilled in the art that some well-known structures and their descriptions may be omitted from the drawings. The technical solutions of the present invention are further described below with reference to the accompanying drawings and embodiments.
The present invention provides a lightweight brain tumor segmentation algorithm based on an improved UNet++ network model. The method can segment the whole brain tumor, the tumor core, and the enhancing tumor core, efficiently obtaining high-precision brain tumor segmentation maps for use in repeatable measurement and assessment of brain tumor MRI images.
FIG. 1 is a flow chart of the method of the present invention. First, the brain tumor MRI images are preprocessed, converting BraTS2018 and BraTS2019 into the input required by the network; then the lightweight UNet++ network model is constructed and used to train on the data, and the best-performing network weights are saved to accomplish the segmentation task.
The specific implementation steps are:
Step1.1将输入的脑肿瘤核磁共振图像数据进行交叉分块处理;Step1.1 Perform cross-block processing on the input brain tumor MRI image data;
Step1.2对数据采用极值抑制后进标准化处理,标准化采用Z-score方法分别标准化每个模态的图像,图像减去均值除以标准差;Step1.2 Use extreme value suppression to standardize the data, and use Z-score method to standardize the images of each modality respectively, subtract the mean and divide the image by the standard deviation;
利用Z-score标准化:Normalize with Z-score:
其中μ为所有样本数据的均值,σ为所有样本数据的标准差;where μ is the mean of all sample data, σ is the standard deviation of all sample data;
Step1.3对脑肿瘤核磁共振图像进行剪裁至合适尺度,去除无效区域;Step1.3 Trim the brain tumor MRI image to an appropriate scale and remove the invalid area;
Step1.4切块拼接,在一个新的维度将四个模态相同位置的32×160×160像素块进行拼接,得到4×32×160×160的像素块作为网络的最终输入。将患者的专家标注的脑肿瘤图像交叉分块,从155×160×160大小的图像分成7份32×160×160大小的图像块(不够分的部分利用背景图进行填充),连续两块交叉通道数为8。将每份32×160×160大小的图像块复制三份分别进行以下操作。将增强型肿瘤、肿瘤周围水肿和非增强性肿瘤置为1,其余为背景0。将增强型肿瘤和非增强性肿瘤置为1,其余为背景0。将增强型肿瘤置为1,其余为背景0。通过以上操作得到三个大小为32×160×160的图像块,在新维度对三个像素块进行连接操作,得到大小为3×32×160×160的图像块,最后将得到的图像块作为整个网络的标签;Step1.4 splicing, splicing 32×160×160 pixel blocks in the same position of the four modalities in a new dimension, and obtaining a 4×32×160×160 pixel block as the final input of the network. The brain tumor images annotated by the patient’s experts were cross-blocked, and the 155×160×160 image was divided into 7 image blocks of 32×160×160 size (the part that was not divided enough was filled with the background image), and two consecutive blocks were crossed. The number of channels is 8. Copy each 32×160×160 image block into three copies and perform the following operations respectively. Enhancing tumors, peritumoral edema, and non-enhancing tumors were set as 1, and the rest were background 0. Enhancing and non-enhancing tumors were set as 1 and the rest as background 0. Enhanced tumors were set to 1 and the rest to background 0. Through the above operations, three image blocks with a size of 32×160×160 are obtained, and the three pixel blocks are connected in a new dimension to obtain an image block with a size of 3×32×160×160. Finally, the obtained image block is used as labels for the entire network;
Step1.5 Data augmentation. Random cropping and affine transformations such as random rotation, scaling, translation, and shear are applied to the brain tumor images;
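A minimal sketch of the augmentation step. Only random cropping and random flips are shown here; the rotation, scaling, translation, and shear transforms mentioned above would be layered on top (e.g. via a full affine warp), so this is illustrative rather than the complete pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray, label: np.ndarray, crop=(32, 128, 128)):
    """Random spatial crop plus random flips, applied identically to the
    image (4, D, H, W) and label (3, D, H, W) blocks."""
    _, d, h, w = image.shape
    sd = rng.integers(0, d - crop[0] + 1)
    sh = rng.integers(0, h - crop[1] + 1)
    sw = rng.integers(0, w - crop[2] + 1)
    image = image[:, sd:sd + crop[0], sh:sh + crop[1], sw:sw + crop[2]]
    label = label[:, sd:sd + crop[0], sh:sh + crop[1], sw:sw + crop[2]]
    for axis in (1, 2, 3):                 # flip each spatial axis with p = 0.5
        if rng.random() < 0.5:
            image = np.flip(image, axis=axis)
            label = np.flip(label, axis=axis)
    return np.ascontiguousarray(image), np.ascontiguousarray(label)

img, lbl = augment(np.zeros((4, 32, 160, 160), np.float32),
                   np.zeros((3, 32, 160, 160), np.float32))
```

Applying the identical crop and flips to image and label keeps the voxel-wise correspondence that segmentation training depends on.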
Step2.1 Apply the lightweight class-residual module and the lightweight residual module in the 3D UNet++ network to form the brain tumor segmentation network model;
Step2.1.1 The constructed network model uses 3 downsampling and 6 upsampling stages, with the lightweight residual module replacing the UNet++ double-layer convolution structure. Although direct replacement makes the network lightweight, compared with the original double-layer convolution the residual structure uses only one convolution layer with kernel size 3, the rest being replaced by kernel-size-1 convolutions; feature extraction is therefore insufficient and segmentation accuracy may drop. In the improvement process, a CBAM attention mechanism is added and applied to the feature map obtained by channel-domain concatenation after upsampling in the outermost layer of the U-shaped structure. This feature map is produced by a series of long and short skip connections, so its semantic gap is already relatively small, but the repeated concatenations make the channel count large, and the features of some channels have no practical value for the segmentation task; the CBAM attention module therefore learns screening parameters to focus on useful information and improve segmentation accuracy. At the same time, the lightweight class-residual module is applied in the last downsampling stage: because a certain semantic gap exists between the input feature map and the output feature map after convolutional feature extraction, the channel concatenation of the lightweight class-residual module better preserves and exploits effective deep features, further improving brain tumor segmentation accuracy;
The specific process of implementing the lightweight class-residual module:
During convolutional feature extraction, deep layers lose more feature information than shallow layers, so applying a class-residual module in the deep network reduces this loss. The original class-residual module first uses a 1×1 convolution on the main branch to expand the channel dimension to 2.5 times the original, then a 3×3 convolution for feature extraction, and finally a 1×1 convolution for channel-domain information fusion. After the shortcut branch, the input is not added pixel-wise to the feature map but concatenated along the channel dimension, so that the feature maps both before and after convolution are fully used;
The lightweight class-residual module is made still lighter while retaining the segmentation-accuracy advantage of the original class-residual module: the ordinary convolution with kernel size 3 is changed to a grouped convolution that preserves the structure, with the number of groups equal to the number of input channels of that convolution. Then, to solve the problem that channel-domain information can no longer interact, after the module's channel-domain concatenation a kernel-size-1 convolution performs inter-channel information exchange and simultaneously reduces the channel dimension, thereby reducing network parameters and computation;
The lightweight class-residual module can be expressed as:
x_{m+1} = Cat(x_m, F(x_m; W_m))
where x_m is the direct mapping part, F(x_m; W_m) is the residual part, and Cat denotes channel-domain concatenation of feature maps;
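The lightweighting effect of the grouped convolution plus 1×1 fusion described above can be checked by counting weights. The channel width of 128 is an illustrative value, not one taken from the patent:

```python
def conv3d_params(c_in, c_out, k, groups=1):
    """Weight count of a 3-D convolution layer (bias terms ignored)."""
    return (c_in // groups) * c_out * k ** 3

c = 128  # illustrative channel width, not a value from the patent

# ordinary 3x3x3 convolution, as in the original class-residual module
standard = conv3d_params(c, c, 3)

# lightweight version: grouped 3x3x3 convolution with groups = input channels,
# followed by the 1x1x1 convolution that restores inter-channel interaction
grouped = conv3d_params(c, c, 3, groups=c)
fusion = conv3d_params(c, c, 1)
lightweight = grouped + fusion
```

With these numbers the grouped-plus-fusion pair uses well under a tenth of the weights of the plain 3×3×3 convolution, which is the source of the parameter and computation savings claimed above.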
The specific process of implementing the lightweight residual module:
The lightweight residual module first reduces the number of input channels to 1/4 of the original with a kernel-size-1 convolution, then uses a kernel-size-3 convolution for feature extraction, and finally uses a kernel-size-1 convolution to expand the channel count to twice the original number of input channels, thereby reducing network parameters and computation;
The lightweight residual module can be expressed as:
x_{l+1} = x_l + F(x_l; W_l)
where x_l is the direct mapping part and F(x_l; W_l) is the residual part;
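The same parameter arithmetic applies to the bottleneck schedule above (1×1 reduce to 1/4, 3×3 extract, 1×1 expand to 2×). The plain double-convolution baseline with a c → 2c → 2c channel schedule is an assumed comparison point, not a figure from the patent:

```python
def conv3d_params(c_in, c_out, k):
    """Weight count of a 3-D convolution layer (bias terms ignored)."""
    return c_in * c_out * k ** 3

c = 128  # illustrative channel width, not a value from the patent

# lightweight residual bottleneck: 1x1 reduce to c/4, 3x3 extract,
# then 1x1 expand to 2c, the channel schedule described above
bottleneck = (conv3d_params(c, c // 4, 1)
              + conv3d_params(c // 4, c // 4, 3)
              + conv3d_params(c // 4, 2 * c, 1))

# a plain double 3x3x3 convolution with the same 2c output, for comparison
# (the c -> 2c -> 2c schedule is an assumed baseline, not from the patent)
double_conv = conv3d_params(c, 2 * c, 3) + conv3d_params(2 * c, 2 * c, 3)
```

Squeezing the expensive 3×3×3 convolution into the reduced 1/4-width channel space is what makes the bottleneck dramatically cheaper than stacking two full-width 3×3×3 convolutions.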
Step2.2 During training, to reduce the impact of the class-imbalance problem on segmentation accuracy, training uses a hybrid loss function BCEDiceLoss that combines the binary cross entropy (binary_cross_entropy) with the medical-image loss Dice Loss;
The specific process of calculating the binary cross entropy:
First, the output of model training is examined. Because the doctor-annotated brain tumor segmentation images have been preprocessed so that the target region is marked 1 and the non-target region 0, the loss input is a binary classification problem: each point of the network model's training output is a node, and each node is classified according to whether its value is greater than 0.5.
The specific process of calculating the cross entropy:
L(p, t) = -[p log(t) + (1 - p) log(1 - t)]
where p is the expected output from the preprocessed doctor-annotated segmentation image and t is the actual output of network model training;
The specific process of calculating the medical-image loss Dice Loss:
First consider the definition of the Dice coefficient, a metric function for measuring set similarity that is commonly used to compute the similarity of two samples; the final value s lies in the range [0, 1]:
s = 2|X∩Y| / (|X| + |Y|)
where X denotes the ground-truth segmentation image and Y the predicted segmentation image, |X∩Y| is the intersection of X and Y, and the coefficient 2 in the numerator compensates for the overlap being counted twice in the denominator;
The Dice Loss formula is defined as:
Dice Loss = 1 - 2|X∩Y| / (|X| + |Y|)
Laplace smoothing is added to the Dice Loss; since this is a small correction value it is defined here as 1e-5, i.e. 1e-5 is added to both the numerator and the denominator of the Dice Loss:
Dice Loss = 1 - (2|X∩Y| + 1e-5) / (|X| + |Y| + 1e-5)
Laplace smoothing reduces overfitting and avoids division by zero when |X| and |Y| are both 0;
The final hybrid loss is defined as:
BCEDiceLoss = L(p, t) + Dice Loss
In summary, using the hybrid loss function BCEDiceLoss improves the performance of the network model, guarantees the precision of the Dice coefficient, reduces the error between the model's segmentation result and the expert delineation, and improves segmentation accuracy;
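The hybrid loss described above can be sketched in numpy. This is an illustrative reimplementation under the definitions given here, not the patent's own code; the clipping epsilon in the cross entropy is a standard numerical-stability detail added here:

```python
import numpy as np

SMOOTH = 1e-5  # the Laplace smoothing value defined above

def dice_loss(pred, target):
    """Dice Loss with Laplace smoothing in numerator and denominator."""
    inter = float(np.sum(pred * target))
    total = float(np.sum(pred) + np.sum(target))
    return 1.0 - (2.0 * inter + SMOOTH) / (total + SMOOTH)

def bce_loss(pred, target, eps=1e-7):
    """Binary cross entropy; pred is clipped to keep log() finite."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(pred)
                           + (1.0 - target) * np.log(1.0 - pred))))

def bce_dice_loss(pred, target):
    """The hybrid BCEDiceLoss: binary cross entropy plus Dice Loss."""
    return bce_loss(pred, target) + dice_loss(pred, target)

target = np.array([1.0, 1.0, 0.0, 0.0])
good = bce_dice_loss(np.array([0.9, 0.9, 0.1, 0.1]), target)
bad = bce_dice_loss(np.array([0.1, 0.1, 0.9, 0.9]), target)
```

A confident correct prediction yields a much smaller loss than a confident wrong one, and the Dice term keeps the loss informative even when foreground voxels are rare.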
Step2.3 After the above network model, add one more 3D convolution so that the number of channels becomes 3, making the output consistent with the processed doctor-annotated images;
Step3.1 Train with the lightweight 3D UNet++ network model to obtain the brain tumor image segmentation results, then apply a sigmoid to the segmentation results, judge whether each value is greater than 0.5, map the results to 0 and 1, concatenate them, and restore the three channels to a single channel according to the three-channel definition, obtaining the final brain tumor segmentation result map.
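The post-processing step can be sketched as follows. The single-channel label values written here (2 for edema, 1 for non-enhancing tumor, 4 for enhancing tumor) follow the BraTS convention and are an assumption, not stated in the patent:

```python
import numpy as np

def logits_to_label(logits: np.ndarray) -> np.ndarray:
    """Collapse the 3-channel network output back into one label map:
    sigmoid, threshold at 0.5, then paint the nested regions from the
    outermost inward so inner regions overwrite outer ones."""
    prob = 1.0 / (1.0 + np.exp(-logits))   # sigmoid
    ch0, ch1, ch2 = prob > 0.5             # whole / core / enhancing masks
    label = np.zeros(ch0.shape, dtype=np.uint8)
    label[ch0] = 2                         # peritumoral edema (assumed value)
    label[ch1] = 1                         # non-enhancing tumor (assumed value)
    label[ch2] = 4                         # enhancing tumor (assumed value)
    return label

logits = np.full((3, 2, 2), -10.0)   # toy output block
logits[:, 0, 0] = 10.0               # all three channels fire -> enhancing
logits[0, 0, 1] = 10.0               # only the outermost channel -> edema
result = logits_to_label(logits)
```

Painting from the outermost mask inward relies on the nesting of the three channels established when the labels were built, so each voxel ends up with the label of the innermost region that claims it.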
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210613167.XA CN114972382A (en) | 2022-06-01 | 2022-06-01 | Brain tumor segmentation algorithm based on lightweight UNet + + network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114972382A true CN114972382A (en) | 2022-08-30 |
Family
ID=82958396
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116915549A (en) * | 2023-05-18 | 2023-10-20 | 重庆邮电大学 | A 1-bit massive MIMO channel estimation method based on lightweight and efficient LEU-Net |
CN117274184A (en) * | 2023-09-19 | 2023-12-22 | 河北大学 | Kidney cancer PET-CT image-specific prediction ki-67 expression method |
CN117274184B (en) * | 2023-09-19 | 2024-05-28 | 河北大学 | Kidney cancer PET-CT image-specific prediction ki-67 expression method |
CN117893499A (en) * | 2024-01-15 | 2024-04-16 | 北京弗莱特智能软件开发有限公司 | Training method of medical image segmentation model and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||