CN117764948A - Liver tumor segmentation method based on mixed attention and multi-scale supervision - Google Patents
Liver tumor segmentation method based on mixed attention and multi-scale supervision
- Publication number
- CN117764948A CN202311787545.7A CN202311787545A
- Authority
- CN
- China
- Prior art keywords
- liver tumor
- convunext
- attention
- tumor segmentation
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
Description
Technical Field
The present invention relates to the field of medical image processing, and in particular to a liver tumor segmentation method based on hybrid attention and multi-scale supervision.
Background Art
Liver cancer is one of the major cancer types with the highest mortality worldwide. CT (Computed Tomography) can help doctors detect liver cancer early. Accurate liver tumor segmentation results make it easier for doctors to observe the state of a liver tumor and can assist them in formulating treatment plans, planning surgery, and monitoring treatment.

Because medical imaging data are complex and voluminous, traditional manual segmentation is time-consuming and error-prone. Automated liver tumor segmentation algorithms can greatly improve efficiency and reduce doctors' workload. With such algorithms, doctors can analyze medical images more quickly and accurately, improving the sensitivity and specificity of diagnosis.

In recent years, deep learning segmentation networks, especially architectures such as UNet, have made significant progress in liver tumor segmentation. Deep learning models can learn features automatically and extract information hierarchically, adapting better to the complex and variable structure of the liver and the morphology of tumors. However, the traditional UNet accumulates considerable redundant information from simple feature-map concatenation and loses many semantic features during upsampling, so its segmentation accuracy for liver tumors is often low.
Summary of the Invention
To solve the technical problem of low accuracy in liver tumor image segmentation, the present invention proposes a liver tumor segmentation method based on hybrid attention and multi-scale supervision.

The present invention provides a liver tumor segmentation method based on hybrid attention and multi-scale supervision, the method comprising:

acquiring a CT image of a liver tumor to be segmented, and preprocessing the CT image of the liver tumor to be segmented;

performing liver tumor segmentation on the CT image of the liver tumor to be segmented according to a pre-trained MAS-ConvUNeXt liver tumor segmentation model, wherein the training method of the MAS-ConvUNeXt liver tumor segmentation model comprises: acquiring an image dataset composed of liver tumor CT images, preprocessing the image dataset, constructing the MAS-ConvUNeXt liver tumor segmentation model, and training the constructed model on the preprocessed image dataset to obtain the trained MAS-ConvUNeXt liver tumor segmentation model; the MAS-ConvUNeXt liver tumor segmentation model comprises a hybrid attention module and a multi-scale supervision module.
Optionally, acquiring the image dataset composed of liver tumor CT images comprises:

processing 3D liver tumor CT images into 2D images containing the liver and liver tumors, which serve as the liver tumor CT images, and forming the image dataset from all the obtained liver tumor CT images.

Optionally, the preprocessing comprises image enhancement.

Optionally, constructing the MAS-ConvUNeXt liver tumor segmentation model comprises:

constructing a ConvUNeXt model, wherein the ConvUNeXt model comprises an encoder, a decoder and an attention gating module;

on the basis of the ConvUNeXt model, replacing its attention gating module with a hybrid attention module and adding a multi-scale supervision module;

using the finally updated ConvUNeXt model as the constructed MAS-ConvUNeXt liver tumor segmentation model before training.
Optionally, the encoder and decoder are obtained from the UNet encoder and decoder by replacing the UNet convolution blocks with ConvUNeXt convolution blocks; the stage ratio of the encoder is 1:1:3:1, with a convolutional layer replacing the pooling layer in the downsampling step; and the attention gating module is located at the skip connections.

Optionally, the hybrid attention module applies a linear transformation to the result of upsampling the output feature map x1 of the lower ConvUNeXt convolution block, splits the resulting feature map along the channel dimension at a ratio of 1:2, feeds the feature map holding 1 part of the channels into a spatial attention module and the feature map holding 2 parts of the channels into a channel attention module, multiplies the attention weights α and β obtained from the two attention modules with the output feature map x2 of the corresponding-level ConvUNeXt convolution block in the encoder, transforms the dimension of the result through a linear layer, and concatenates it with the upsampled input feature map x1.

Optionally, the spatial attention module feeds the input feature map through a linear layer and adds the result to a linear transformation of the output feature map x2 of the corresponding-level ConvUNeXt convolution block in the encoder; the sum is activated with a ReLU activation function, mapped to an intermediate space through a linear layer, and finally activated with a Sigmoid activation function to obtain the attention weight α.

Optionally, the channel attention module applies global average pooling to the input feature map and adds the result to the globally pooled output feature map x2 of the corresponding-level ConvUNeXt convolution block in the encoder; the sum is activated with a ReLU activation function, its dimension is compressed and then expanded through two linear layers, and a Sigmoid activation function is finally applied to obtain the attention weight β.

Optionally, the multi-scale supervision module applies a 1x1 convolution to the feature map after the convolution that follows each upsampling in the decoder and then upsamples it to the original size, obtaining a segmentation map for each upsampling stage; a loss function against the ground-truth labels is established for each map, realizing multi-scale supervision.

Optionally, the loss function used during training of the MAS-ConvUNeXt liver tumor segmentation model is the cross-entropy loss function.
The present invention has the following beneficial effects:

The liver tumor segmentation method based on hybrid attention and multi-scale supervision of the present invention can better solve the liver tumor segmentation problem. The invention uses a hybrid attention module that splits the upsampled result by channel and applies spatial attention and channel attention, enhancing the quality of the skip-connection concatenation, filtering out irrelevant features from the low-level semantic feature maps, and strengthening the model's focus on the region of interest (ROI). A multi-scale supervision module is also added, which introduces a supervision signal at every upsampling stage of the decoder; this helps improve the quality of the feature maps produced by the decoder and optimizes the update of the network parameters so that the loss function converges better, thereby improving the accuracy of liver tumor image segmentation.
Brief Description of the Drawings

To explain the technical solutions and advantages of the embodiments of the present invention or of the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1 is a flow chart of the liver tumor segmentation method based on hybrid attention and multi-scale supervision of the present invention;

Figure 2 is a schematic diagram of the framework of the MAS-ConvUNeXt liver tumor segmentation model of the present invention;

Figure 3 is a schematic diagram of the structure of the ConvUNeXt convolution block of the present invention;

Figure 4 is a schematic diagram of the structure of the hybrid attention module of the present invention;

Figure 5 is a schematic diagram of the structure of the multi-scale supervision module of the present invention;

Figure 6 is another flow chart of the present invention.
Detailed Description of Embodiments

To further explain the technical means and effects adopted by the present invention to achieve its intended purpose, specific implementations, structures, features and effects of the technical solution proposed by the present invention are described in detail below with reference to the drawings and preferred embodiments. In the following description, different occurrences of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, specific features, structures or characteristics of one or more embodiments may be combined in any suitable form.

Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by a person skilled in the technical field to which the present invention belongs.
The present invention provides a liver tumor segmentation method based on hybrid attention and multi-scale supervision, comprising the following steps:

acquiring a CT image of a liver tumor to be segmented, and preprocessing the CT image of the liver tumor to be segmented;

performing liver tumor segmentation on the CT image of the liver tumor to be segmented according to a pre-trained MAS-ConvUNeXt liver tumor segmentation model, wherein the training method of the MAS-ConvUNeXt liver tumor segmentation model comprises: acquiring an image dataset composed of liver tumor CT images, preprocessing the image dataset, constructing the MAS-ConvUNeXt liver tumor segmentation model, and training the constructed model on the preprocessed image dataset to obtain the trained MAS-ConvUNeXt liver tumor segmentation model; the MAS-ConvUNeXt liver tumor segmentation model comprises a hybrid attention module and a multi-scale supervision module.

Each of these steps is described in detail below.

Referring to Figure 1, the flow of some embodiments of the liver tumor segmentation method based on hybrid attention and multi-scale supervision according to the present invention is shown. The method includes the following steps:
Step S1: acquire the CT image of the liver tumor to be segmented and preprocess it.

In some embodiments, a CT image of a liver tumor may be acquired with CT equipment as the CT image of the liver tumor to be segmented, and histogram equalization and contrast stretching may be applied to enhance it, thereby realizing the preprocessing of the CT image to be segmented.

Here, the CT image of the liver tumor to be segmented may be a CT (Computed Tomography) image on which liver tumor segmentation is to be performed. The preprocessing includes image enhancement.
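A minimal sketch of this enhancement step is given below. It assumes the slice has already been converted to an 8-bit grayscale array and uses OpenCV; the percentile range used for the contrast stretching is an illustrative choice, not a value given in the description.

```python
import cv2
import numpy as np

def enhance_slice(slice_8bit: np.ndarray) -> np.ndarray:
    """Histogram equalization followed by contrast stretching on a 2D CT slice.

    Assumes a uint8 single-channel input; the 2nd-98th percentile stretching
    range is an illustrative choice.
    """
    equalized = cv2.equalizeHist(slice_8bit)                 # spread the intensity histogram
    lo, hi = np.percentile(equalized, (2, 98))               # robust lower/upper bounds
    stretched = np.clip((equalized.astype(np.float32) - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return (stretched * 255).astype(np.uint8)                # back to the 8-bit range
```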
Step S2: perform liver tumor segmentation on the CT image of the liver tumor to be segmented according to the pre-trained MAS-ConvUNeXt liver tumor segmentation model.

In some embodiments, the CT image of the liver tumor to be segmented may be input into the pre-trained MAS-ConvUNeXt liver tumor segmentation model, which then performs the liver tumor segmentation. The overall framework of the MAS-ConvUNeXt liver tumor segmentation model can be as shown in Figure 2. MAS-ConvUNeXt stands for Mixed Attention and multi-scale Supervision ConvUNeXt.
Optionally, the training method of the MAS-ConvUNeXt liver tumor segmentation model includes the following steps.

Step 1: acquire an image dataset composed of liver tumor CT images.

For example, 3D liver tumor CT images are processed into 2D images containing the liver and liver tumors, which includes adjusting the window width and window level; the resulting 2D images serve as the liver tumor CT images, and all of them together form the image dataset. A liver tumor CT image may be a CT image in which the liver tumor region has been annotated, for example manually.
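A sketch of how the 3D-to-2D conversion with window adjustment might be done is shown below. It assumes NIfTI volumes read with nibabel and the common labeling convention 0 = background, 1 = liver, 2 = tumor; the window level and width values are illustrative abdominal-CT settings rather than values stated in the description.

```python
import nibabel as nib
import numpy as np

def volume_to_slices(ct_path: str, label_path: str,
                     window_level: float = 60.0, window_width: float = 200.0):
    """Convert a 3D CT volume into 2D slices that contain liver or liver tumor.

    The window level/width and the label convention (0/1/2) are assumptions.
    """
    volume = nib.load(ct_path).get_fdata()        # Hounsfield units, shape (H, W, D)
    labels = nib.load(label_path).get_fdata()     # assumed: 0 background, 1 liver, 2 tumor
    lo = window_level - window_width / 2.0
    hi = window_level + window_width / 2.0
    volume = np.clip(volume, lo, hi)
    volume = ((volume - lo) / (hi - lo) * 255.0).astype(np.uint8)   # map HU window to [0, 255]
    slices = []
    for k in range(volume.shape[2]):
        if labels[..., k].max() > 0:              # keep only slices showing liver / tumor
            slices.append((volume[..., k], labels[..., k].astype(np.uint8)))
    return slices
```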
Step 2: preprocess the image dataset.

For example, histogram equalization and contrast stretching may be applied to the processed 2D images, i.e., the liver tumor CT images in the image dataset, to enhance them and thereby enhance the dataset. Data augmentation is then applied to the enhanced image dataset to expand it, and the resulting dataset is used as the preprocessed image dataset, which may be divided into a training set, a validation set and a test set. The data augmentation may include random horizontal flipping, random scaling and random cropping. Data augmentation generates new data in order to enlarge and diversify the training set, which improves the generalization ability and robustness of the model.
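A paired augmentation of a slice and its label mask along these lines might look as follows; the flip probability, scaling range and crop size are illustrative assumptions, since the description gives no concrete parameters.

```python
import numpy as np

def augment(image: np.ndarray, mask: np.ndarray, crop_size: int = 256):
    """Random horizontal flip, random scaling and random cropping, applied jointly
    to a 2D slice and its label mask (both of shape (H, W))."""
    # Random horizontal flip
    if np.random.rand() < 0.5:
        image, mask = image[:, ::-1], mask[:, ::-1]
    # Random scaling via nearest-neighbour index mapping (keeps label values intact)
    scale = np.random.uniform(0.8, 1.2)
    h, w = image.shape
    nh, nw = int(h * scale), int(w * scale)
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    image, mask = image[np.ix_(rows, cols)], mask[np.ix_(rows, cols)]
    # Pad if scaling made the image smaller than the crop window
    pad_h, pad_w = max(crop_size - nh, 0), max(crop_size - nw, 0)
    if pad_h or pad_w:
        image = np.pad(image, ((0, pad_h), (0, pad_w)))
        mask = np.pad(mask, ((0, pad_h), (0, pad_w)))
    # Random crop
    H, W = image.shape
    top = np.random.randint(0, H - crop_size + 1)
    left = np.random.randint(0, W - crop_size + 1)
    return (image[top:top + crop_size, left:left + crop_size],
            mask[top:top + crop_size, left:left + crop_size])
```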
Step 3: construct the MAS-ConvUNeXt liver tumor segmentation model.

The MAS-ConvUNeXt liver tumor segmentation model includes a hybrid attention module and a multi-scale supervision module.

For example, constructing the MAS-ConvUNeXt liver tumor segmentation model may include the following sub-steps.

Sub-step 1: construct the ConvUNeXt model.

The ConvUNeXt model includes an encoder, a decoder and an attention gating module. The encoder and decoder are obtained from the UNet encoder and decoder by replacing the UNet convolution blocks with ConvUNeXt convolution blocks. The structure of the ConvUNeXt convolution block is shown in Figure 3. The stage ratio of the encoder is 1:1:3:1, and a convolutional layer replaces the pooling layer in the downsampling step. The attention gating module is located at the skip connections.
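For orientation, a ConvNeXt-style convolution block and a strided-convolution downsampling stage are sketched below in PyTorch. The exact layout of the ConvUNeXt block is defined by Figure 3, which is not reproduced here, so the block body (kernel size, normalization, expansion ratio) is an assumption following the usual large-kernel depthwise/pointwise pattern; only the use of a convolution instead of pooling for downsampling and the 1:1:3:1 stage depths come from the text, and the channel widths are illustrative.

```python
import torch
import torch.nn as nn

class ConvUNeXtBlock(nn.Module):
    """ConvNeXt-style residual block (sketch; the patented block in Fig. 3 may differ)."""
    def __init__(self, channels: int):
        super().__init__()
        self.dwconv = nn.Conv2d(channels, channels, kernel_size=7, padding=3, groups=channels)
        self.norm = nn.BatchNorm2d(channels)
        self.pw1 = nn.Conv2d(channels, 4 * channels, kernel_size=1)   # pointwise expansion
        self.act = nn.GELU()
        self.pw2 = nn.Conv2d(4 * channels, channels, kernel_size=1)   # pointwise projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.pw2(self.act(self.pw1(self.norm(self.dwconv(x)))))

class EncoderStage(nn.Module):
    """Downsampling by a strided convolution (replacing pooling) followed by ConvUNeXt blocks."""
    def __init__(self, in_ch: int, out_ch: int, depth: int):
        super().__init__()
        self.down = nn.Conv2d(in_ch, out_ch, kernel_size=2, stride=2)  # conv instead of pooling
        self.blocks = nn.Sequential(*[ConvUNeXtBlock(out_ch) for _ in range(depth)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.blocks(self.down(x))

# Four encoder stages whose depths follow the 1:1:3:1 ratio; widths here are illustrative.
stages = nn.ModuleList([
    EncoderStage(c_in, c_out, d)
    for c_in, c_out, d in zip((16, 32, 64, 128), (32, 64, 128, 256), (1, 1, 3, 1))
])
```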
Sub-step 2: on the basis of the ConvUNeXt model, replace its attention gating module with the hybrid attention module and add the multi-scale supervision module.

The structure of the hybrid attention module can be as shown in Figure 4. The hybrid attention module applies a linear transformation to the result of upsampling the output feature map x1 of the lower ConvUNeXt convolution block and splits the resulting feature map along the channel dimension at a ratio of 1:2; the feature map holding 1 part of the channels is fed into the spatial attention module, and the feature map holding 2 parts of the channels is fed into the channel attention module. Finally, the attention weights α and β obtained from the two attention modules are multiplied with the output feature map x2 of the corresponding-level ConvUNeXt convolution block in the encoder, giving the output of the hybrid attention module; this output is passed through a linear layer to transform its dimension and is then concatenated with the upsampled input feature map x1. The concatenated feature map is fed into the next ConvUNeXt convolution block. Here, feature map x1 is the output feature map of the decoder convolution block located below the hybrid attention module in the structural diagram, and feature map x2 is the output feature map of the encoder convolution block at the same level as the hybrid attention module.

The spatial attention module feeds the input feature map through a linear layer and adds the result to a linear transformation of the output feature map x2 of the corresponding-level ConvUNeXt convolution block in the encoder; the sum is activated with a ReLU activation function, mapped to an intermediate space through a linear layer, and finally activated with a Sigmoid activation function to obtain the attention weight α.

The channel attention module applies global average pooling to the input feature map and adds the result to the globally pooled output feature map x2 of the corresponding-level ConvUNeXt convolution block in the encoder; the sum is activated with a ReLU activation function, its dimension is compressed and then expanded through two linear layers, and a Sigmoid activation function is finally applied to obtain the attention weight β.
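A possible PyTorch sketch of this hybrid attention module is given below. It assumes that each "linear layer" is realized as a 1x1 convolution (or a fully connected layer on pooled vectors), that the upsampled x1 has the same channel count as x2, and that the split part fed to the channel branch is first projected to the channel count of x2 so the pooled features can be added; these details are not fully specified in the text, and Figure 4 is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridAttention(nn.Module):
    """Sketch of the hybrid attention module (hypothetical class name)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.c_sp = channels // 3            # "1" part of the 1:2 split -> spatial attention
        self.c_ch = channels - self.c_sp     # "2" part of the 1:2 split -> channel attention
        self.pre = nn.Conv2d(channels, channels, kernel_size=1)   # linear transform of upsampled x1
        # Spatial attention branch -> weight map alpha of shape (B, 1, H, W)
        self.sp_g = nn.Conv2d(self.c_sp, channels, kernel_size=1)
        self.sp_x = nn.Conv2d(channels, channels, kernel_size=1)
        self.sp_psi = nn.Conv2d(channels, 1, kernel_size=1)       # map to the intermediate space
        # Channel attention branch -> weight vector beta of shape (B, C, 1, 1)
        self.ch_g = nn.Linear(self.c_ch, channels)                # bring split part to C channels
        self.ch_fc1 = nn.Linear(channels, channels // reduction)  # compress
        self.ch_fc2 = nn.Linear(channels // reduction, channels)  # expand
        self.post = nn.Conv2d(channels, channels, kernel_size=1)  # linear layer after weighting

    def forward(self, x1_up: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        # x1_up: upsampled decoder feature map x1; x2: same-level encoder feature map (B, C, H, W)
        g = self.pre(x1_up)
        g_sp, g_ch = torch.split(g, [self.c_sp, self.c_ch], dim=1)
        # Spatial attention weight alpha
        alpha = torch.sigmoid(self.sp_psi(F.relu(self.sp_g(g_sp) + self.sp_x(x2))))
        # Channel attention weight beta: ReLU on the pooled sum, compress/expand, then Sigmoid
        pooled = F.relu(self.ch_g(g_ch.mean(dim=(2, 3))) + x2.mean(dim=(2, 3)))
        beta = torch.sigmoid(self.ch_fc2(self.ch_fc1(pooled)))[:, :, None, None]
        # Weight x2 with both attention maps, transform, and concatenate with the upsampled x1
        weighted = self.post(x2 * alpha * beta)
        return torch.cat([weighted, x1_up], dim=1)   # fed into the next ConvUNeXt block
```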
The structure of the multi-scale supervision module can be as shown in Figure 5. After the convolution that follows each upsampling in the decoder, the multi-scale supervision module applies a 1x1 convolution to the feature map and then upsamples it to the original size, obtaining a segmentation map for each upsampling stage; a loss function against the ground-truth labels is established for each map, realizing multi-scale supervision.
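A minimal sketch of such supervision heads is shown below, assuming bilinear interpolation for the upsampling back to the input resolution (the interpolation mode is not specified in the description). Each returned map is then compared with the ground-truth labels by the loss function described in the next step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSupervision(nn.Module):
    """One 1x1-convolution head per decoder upsampling stage; each head's output is
    upsampled to the original image size to give a full-resolution segmentation map."""
    def __init__(self, stage_channels, num_classes: int):
        super().__init__()
        self.heads = nn.ModuleList([nn.Conv2d(c, num_classes, kernel_size=1)
                                    for c in stage_channels])

    def forward(self, stage_features, out_size):
        maps = []
        for head, feat in zip(self.heads, stage_features):
            logits = head(feat)                                   # 1x1 convolution
            maps.append(F.interpolate(logits, size=out_size,      # upsample to original size
                                      mode="bilinear", align_corners=False))
        return maps    # one segmentation map per upsampling stage

# Illustrative usage: three decoder stages with 128, 64 and 32 channels, 3 classes, 256x256 input.
heads = MultiScaleSupervision(stage_channels=(128, 64, 32), num_classes=3)
feats = [torch.randn(1, c, 256 // s, 256 // s) for c, s in ((128, 8), (64, 4), (32, 2))]
side_maps = heads(feats, out_size=(256, 256))   # each map has shape (1, 3, 256, 256)
```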
Sub-step 3: use the finally updated ConvUNeXt model as the constructed MAS-ConvUNeXt liver tumor segmentation model before training.

Step 4: train the constructed MAS-ConvUNeXt liver tumor segmentation model on the preprocessed image dataset to obtain the trained MAS-ConvUNeXt liver tumor segmentation model.

The loss function used during training of the MAS-ConvUNeXt liver tumor segmentation model is the cross-entropy loss function.
For example, the loss function is constructed and the MAS-ConvUNeXt liver tumor segmentation model is trained on the training set; the parameters of the segmentation model are optimized by gradient descent, and the optimal segmentation model is selected. The loss function is the cross-entropy loss function, given by

$L = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c} y_{i,c} \log \hat{y}_{i,c}$

where L is the loss value, N is the total number of pixels in the segmentation map, c is the class index, y_{i,c} is the true label of class c at pixel i, and \hat{y}_{i,c} is the predicted probability output by the model for class c at pixel i.
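A sketch of how the multi-scale cross-entropy loss could be assembled in PyTorch is shown below; equal weighting of the per-stage losses is an assumption, since the description does not state how the stage losses are combined. During training, this total loss would be minimized over the training set with a gradient-descent optimizer, as stated above.

```python
import torch
import torch.nn as nn

# Pixel-wise cross-entropy over integer class maps, matching the formula above.
criterion = nn.CrossEntropyLoss()

def multiscale_loss(outputs, target):
    """Sum the cross-entropy of every full-resolution segmentation map
    (the final prediction plus the side outputs of the multi-scale supervision
    module). Equal stage weighting is an assumption."""
    return sum(criterion(logits, target) for logits in outputs)

# Dummy example: three output maps, three classes, a batch of two 64x64 slices.
outputs = [torch.randn(2, 3, 64, 64, requires_grad=True) for _ in range(3)]
target = torch.randint(0, 3, (2, 64, 64))
loss = multiscale_loss(outputs, target)
loss.backward()   # gradients reach every stage, i.e. the supervision signal at each scale
```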
Optionally, the images in the test set may be input into the MAS-ConvUNeXt liver tumor segmentation model to obtain the liver tumor segmentation results for the images in the test set.

Another flow chart of the present invention can be as shown in Figure 6.

In summary, the present invention adopts a hybrid attention module as the attention gating module and adds a multi-scale supervision module. The model can enlarge the receptive field of the neural network while reducing the number of parameters, and it enhances the quality of the segmented regions through the hybrid attention module, achieving efficient and fast segmentation of liver tumors.
The above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention, and they shall all fall within the scope of protection of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311787545.7A CN117764948A (en) | 2023-12-23 | 2023-12-23 | Liver tumor segmentation method based on mixed attention and multi-scale supervision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311787545.7A CN117764948A (en) | 2023-12-23 | 2023-12-23 | Liver tumor segmentation method based on mixed attention and multi-scale supervision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117764948A true CN117764948A (en) | 2024-03-26 |
Family
ID=90323176
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311787545.7A Pending CN117764948A (en) | 2023-12-23 | 2023-12-23 | Liver tumor segmentation method based on mixed attention and multi-scale supervision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117764948A (en) |
- 2023-12-23 CN CN202311787545.7A patent/CN117764948A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110287849B (en) | Lightweight depth network image target detection method suitable for raspberry pi | |
CN113362223B (en) | Image Super-Resolution Reconstruction Method Based on Attention Mechanism and Two-Channel Network | |
CN111627019B (en) | Liver tumor segmentation method and system based on convolutional neural network | |
CN113674253B (en) | Automatic segmentation method for rectal cancer CT image based on U-transducer | |
CN108062753B (en) | Unsupervised domain self-adaptive brain tumor semantic segmentation method based on deep counterstudy | |
CN113222124B (en) | SAUNet + + network for image semantic segmentation and image semantic segmentation method | |
CN110889852B (en) | Liver segmentation method based on residual error-attention deep neural network | |
CN115205300B (en) | Fundus blood vessel image segmentation method and system based on cavity convolution and semantic fusion | |
CN111145181B (en) | Skeleton CT image three-dimensional segmentation method based on multi-view separation convolutional neural network | |
CN111145170A (en) | Medical image segmentation method based on deep learning | |
CN115409733A (en) | Low-dose CT image noise reduction method based on image enhancement and diffusion model | |
CN114820491A (en) | Multi-modal stroke lesion segmentation method and system based on small sample learning | |
CN114708255B (en) | A Multicenter Children's X-ray Lung Segmentation Method Based on TransUNet Model | |
CN113223005A (en) | Thyroid nodule automatic segmentation and grading intelligent system | |
CN111709900A (en) | A High Dynamic Range Image Reconstruction Method Based on Global Feature Guidance | |
CN110689525A (en) | Method and device for identifying lymph nodes based on neural network | |
CN110569851A (en) | A Real-time Semantic Segmentation Approach with Gated Multilayer Fusion | |
CN114723669A (en) | Liver tumor two-point five-dimensional deep learning segmentation algorithm based on context information perception | |
CN114037714A (en) | 3D MR and TRUS image segmentation method for prostate system puncture | |
CN115511767A (en) | Self-supervised learning multi-modal image fusion method and application thereof | |
CN116310329A (en) | Skin lesion image segmentation method based on lightweight multi-scale UNet | |
CN117710681A (en) | Semi-supervised medical image segmentation method based on data enhancement strategy | |
CN116757982A (en) | Multi-mode medical image fusion method based on multi-scale codec | |
CN117392082A (en) | Liver CT image segmentation method and system based on full-scale skip connections | |
CN117853730A (en) | U-shaped full convolution medical image segmentation network based on convolution kernel attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |