CN117078705B - A CT image segmentation method based on Bhattacharyya coefficient active contour attention - Google Patents
A CT image segmentation method based on Bhattacharyya coefficient active contour attention
- Publication number
- CN117078705B (application CN202311344404.8A)
- Authority
- CN
- China
- Prior art keywords
- feature map
- convolution
- attention
- block
- active contour
- Prior art date: 2023-10-18
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/776—Validation; Performance evaluation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention relates to the technical field of medical image segmentation, and in particular to a CT image segmentation method based on Bhattacharyya coefficient active contour attention. The steps are as follows: the dataset consists of 3D CT images of kidneys and kidney tumors; the nibabel library is called to convert each volume in the dataset into 2D slices in png format, 10 slices are selected from each volume to obtain dataset D', and the training set in D' is augmented to obtain dataset D; the network comprises an encoding part and a decoding part; the loss is computed with the Dice loss function; the SGD optimizer adjusts the weights and biases of the network through backpropagation; the optimal weights and biases are saved in a newly created file; the images in the test set are read and segmented, and the results are saved as jpg files. The invention can focus on the target structure and obtains good segmentation results even for images with an uneven gray-value distribution.
Description
Technical Field
The present invention relates to the technical field of medical image segmentation, and in particular to a CT image segmentation method based on Bhattacharyya coefficient active contour attention.
Background Art
CT image segmentation is an important task in medical imaging: it aims to delineate target structures in an image so that doctors can diagnose disease and plan treatment. CT image segmentation methods fall into traditional methods and deep learning methods. Traditional methods include classic image-processing techniques such as graph-based segmentation, thresholding, and region growing; their results can be degraded by poor image quality, background noise, and similar factors. Deep learning methods include U-Net and its variants, the DeepLab series, and others; aided by large-scale data and powerful computing, they have achieved good performance in medical image segmentation. In CT images, however, differences in tissue density and in acquisition parameters produce uneven gray levels, which greatly increases the difficulty of segmentation.
To address these problems, a CT image segmentation method based on Bhattacharyya coefficient active contour attention is proposed.
Summary of the Invention
In view of the shortcomings of the prior art, the present invention provides a CT image segmentation method based on Bhattacharyya coefficient active contour attention, which can focus on the target structure and obtain good segmentation results even for images with an uneven gray-value distribution.
The technical solution of the present invention is as follows:
A CT image segmentation method based on Bhattacharyya coefficient active contour attention comprises the following steps:
S1. The dataset consists of 3D CT images of kidneys and kidney tumors from MICCAI KiTS19; its training set contains 210 3D-CT volumes and its test set contains 90 3D-CT volumes.
S2. The nibabel library is called to convert each volume in the dataset into 2D slices in .png format; 10 slices are selected from each volume to obtain dataset D', and the training set in D' is augmented to obtain dataset D (see the slicing sketch after this list).
S3. The network consists of an encoding part and a decoding part. The encoding part uses a pretrained VGG16 network to extract features from the input image; the decoding part uses Bhattacharyya coefficient active contour attention modules and attention depthwise separable convolution blocks.
S4. The loss function Loss is computed with the Dice loss function.
S5. The SGD optimizer adjusts the weights and biases of the network through backpropagation.
S6. During training on dataset D, a variable holding the best evaluation metric is maintained; the corresponding optimal weights and biases are stored and saved to a ph file.
S7. The optimal weights and biases saved in the ph file are loaded into the network, the images of the test set are read and segmented, and the results are saved as jpg files.
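A minimal sketch of the slicing in step S2 follows, assuming KiTS19-style file names (imaging.nii.gz) and an evenly spaced slice-selection rule; the paths, the selection rule, and the intensity normalization are illustrative assumptions, not prescribed by the method.

```python
import os
import numpy as np
import nibabel as nib
from PIL import Image

def volume_to_slices(case_dir, out_dir, n_slices=10):
    """Save n_slices evenly spaced axial slices of one 3D volume as 8-bit png."""
    vol = nib.load(os.path.join(case_dir, "imaging.nii.gz")).get_fdata()
    os.makedirs(out_dir, exist_ok=True)
    # Assume the first axis indexes slices, as in the KiTS19 volumes.
    idx = np.linspace(0, vol.shape[0] - 1, n_slices).astype(int)
    for k, i in enumerate(idx):
        s = vol[i]
        s = (s - s.min()) / (s.max() - s.min() + 1e-8) * 255.0  # scale to 0-255
        Image.fromarray(s.astype(np.uint8)).save(
            os.path.join(out_dir, f"slice_{k:02d}.png"))
```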
On the basis of the above CT image segmentation method based on Bhattacharyya coefficient active contour attention, the first five stages of the pretrained VGG16 network are used, where the fifth stage does not include the max-pooling (Maxpooling) operation; the five stages yield feature maps A1, A2, A3, A4, and A5, respectively.
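The following sketch shows one plausible way to expose the five VGG16 stages in PyTorch, assuming torchvision's pretrained VGG16; the exact stage boundaries (each later stage starting at a max-pool, with the trailing pool after stage five dropped) are an assumption consistent with the description above.

```python
import torch.nn as nn
from torchvision.models import vgg16

class VGG16Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        feats = vgg16(weights="IMAGENET1K_V1").features
        self.stage1 = feats[:4]     # conv-conv            -> A1
        self.stage2 = feats[4:9]    # pool-conv-conv       -> A2
        self.stage3 = feats[9:16]   # pool-conv-conv-conv  -> A3
        self.stage4 = feats[16:23]  # pool-conv-conv-conv  -> A4
        self.stage5 = feats[23:30]  # pool-conv-conv-conv  -> A5, no trailing pool

    def forward(self, x):
        a1 = self.stage1(x)
        a2 = self.stage2(a1)
        a3 = self.stage3(a2)
        a4 = self.stage4(a3)
        a5 = self.stage5(a4)
        return a1, a2, a3, a4, a5
```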
On the basis of the above CT image segmentation method based on Bhattacharyya coefficient active contour attention, the decoding part consists of a first Bhattacharyya coefficient active contour attention module, a first attention depthwise separable convolution block, a second Bhattacharyya coefficient active contour attention module, a second attention depthwise separable convolution block, a third Bhattacharyya coefficient active contour attention module, a third attention depthwise separable convolution block, a fourth Bhattacharyya coefficient active contour attention module, a fourth attention depthwise separable convolution block, and an output module.
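A schematic forward pass through this decoder might look as follows; the module arguments are hypothetical stand-ins for the blocks detailed in the following paragraphs.

```python
def decoder_forward(img, feats, bca, adsc, output_head):
    """feats = (a1, a2, a3, a4, a5) from the encoder; bca and adsc are lists of
    the four attention modules and four conv blocks described below;
    output_head is the final 1x1 conv + BatchNorm + ReLU."""
    a1, a2, a3, a4, a5 = feats
    x, skips = a5, [a4, a3, a2, a1]
    for bca_i, adsc_i, skip in zip(bca, adsc, skips):
        m, k = bca_i(x, skip, img)   # upsampled features M_i + attended skip K_i
        x = adsc_i(m, k)             # fused decoder features P_i
    return output_head(x)            # segmented image
```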
On the basis of the above CT image segmentation method based on Bhattacharyya coefficient active contour attention, the first Bhattacharyya coefficient active contour attention module comprises a first initial contour block, a first image processing block, a first Bhattacharyya coefficient active contour block, and a first attention block.
(1) The first initial contour block is implemented as follows (a sketch of this block follows below): feature map A5 is upsampled to obtain feature map M1; M1 passes through a 1×1 convolution to obtain feature map B1; feature map A4 passes through a 1×1 convolution to obtain feature map C1; B1 and C1 are added to obtain feature map D1; D1 passes through a ReLU activation, a 1×1 convolution, and a Sigmoid activation to obtain feature map N1; N1 is distance-transformed to obtain feature map E1, i.e., the initial contour.
The distance transform is computed as $E_1 = N_1 * \exp(-d/\sigma)$, where $d$ is the Euclidean distance, $*$ denotes convolution, $\exp$ is the exponential function with base $e$, and $\sigma$ is a manually chosen parameter with $\sigma > 0$.
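A minimal PyTorch sketch of this block follows, assuming ×2 bilinear upsampling, externally supplied 1×1 convolution modules (conv_b, conv_c, conv_n), and a truncated exp(−d/σ) kernel for the distance transform; the kernel size and σ are illustrative.

```python
import torch
import torch.nn.functional as F

def initial_contour_block(a5, a4, conv_b, conv_c, conv_n, sigma=4.0, ksize=7):
    """conv_b, conv_c: 1x1 convs on M1 and A4; conv_n: 1x1 conv to one channel."""
    m1 = F.interpolate(a5, scale_factor=2, mode="bilinear", align_corners=False)
    d1 = F.relu(conv_b(m1) + conv_c(a4))   # D1 = B1 + C1, then ReLU
    n1 = torch.sigmoid(conv_n(d1))         # coarse one-channel mask N1
    # Distance-transform step: convolve N1 with a truncated exp(-d/sigma)
    # kernel, d being the Euclidean distance from the kernel center.
    ax = torch.arange(ksize, dtype=torch.float32) - ksize // 2
    yy, xx = torch.meshgrid(ax, ax, indexing="ij")
    kernel = torch.exp(-torch.sqrt(xx ** 2 + yy ** 2) / sigma)
    kernel = (kernel / kernel.sum()).view(1, 1, ksize, ksize)
    e1 = F.conv2d(n1, kernel, padding=ksize // 2)  # initial contour E1
    return m1, n1, e1
```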
(2) The first image processing block is implemented as follows: feature map M1 passes through a 1×1 convolution to obtain feature map F1; the input image passes through a Resize image processing function and a Sigmoid activation to obtain feature map G1; F1 and G1 are added to obtain feature map H1; H1 passes through a 1×1 convolution and a Sigmoid activation to obtain feature map I1.
(3) The first Bhattacharyya coefficient active contour block is implemented as follows: the Bhattacharyya coefficient active contour algorithm is applied to feature map I1 for a finite number of segmentation iterations to obtain feature map J1, $J_1 = \mathrm{BCV}(I_1, E_1, \mu, \nu)$, where $\mu$ and $\nu$ are manually chosen positive parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E1 is the initial contour obtained in (1).
The energy function of the Bhattacharyya coefficient active contour algorithm is
$E(\phi) = \mu \int_{\Omega} \delta(\phi)\,\lvert\nabla\phi\rvert\,dx + \nu \int_{\Omega} H(\phi)\,dx + \int_{Q} \sqrt{P_{in}(q)\,P_{out}(q)}\,dq$,
where the term weighted by $\mu$ measures the length of the contour, the term weighted by $\nu$ measures the area inside the contour, $\delta$ is the Dirac function, $H$ is the Heaviside function, $\Omega$ is the image domain, $Q$ is the RGB color space, and in and out denote the regions inside and outside the initial contour $E_1$. For a given color $q$, the probability distribution functions $P_{in}(q)$ and $P_{out}(q)$ are estimated with a Gaussian-kernel histogram (a sketch of this estimate follows below).
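The Gaussian-kernel histogram estimate of $P_{in}$ and $P_{out}$ and the resulting Bhattacharyya coefficient can be sketched as follows, assuming single-channel intensities in [0, 1]; the bin count and bandwidth are illustrative choices, not values given in the text.

```python
import torch

def bhattacharyya_coefficient(img, mask, bins=32, bandwidth=0.05):
    """img, mask: (H, W) tensors in [0, 1]; mask plays the role of H(phi)."""
    centers = torch.linspace(0.0, 1.0, bins)
    # Gaussian-kernel response of every pixel to every histogram bin.
    k = torch.exp(-0.5 * ((img.reshape(-1, 1) - centers) / bandwidth) ** 2)
    w_in = mask.reshape(-1, 1)
    p_in = (k * w_in).sum(dim=0)
    p_out = (k * (1.0 - w_in)).sum(dim=0)
    p_in = p_in / (p_in.sum() + 1e-8)      # P_in(q), normalized
    p_out = p_out / (p_out.sum() + 1e-8)   # P_out(q), normalized
    # The data term of the energy: small when the two distributions differ.
    return torch.sqrt(p_in * p_out).sum()
```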
(4) The first attention block is implemented as follows: feature maps A4 and J1 are multiplied to obtain feature map K1.
On the basis of the above CT image segmentation method based on Bhattacharyya coefficient active contour attention, the first attention depthwise separable convolution block is implemented as follows: feature maps M1 and K1 are concatenated; the concatenated result passes in sequence through a pointwise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a pointwise convolution, and a GELU activation to obtain feature map L1; L1 passes in sequence through global average pooling, a 1×1 convolution, a ReLU activation, a 1×1 convolution, and a Sigmoid activation, and the result is multiplied with L1 to obtain feature map P1.
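A sketch of this block in PyTorch follows; the channel counts, the reduction ratio of the channel-attention branch, and the placement of the residual connection around the axial depthwise pair are assumptions.

```python
import torch
import torch.nn as nn

class AttnDWSepConvBlock(nn.Module):
    """Attention depthwise separable convolution block (hypothetical layout)."""
    def __init__(self, in_ch, out_ch, reduction=4):
        super().__init__()
        self.pw1 = nn.Conv2d(in_ch, out_ch, kernel_size=1)   # pointwise conv
        self.bn = nn.BatchNorm2d(out_ch)
        # Axial depthwise pair: 1x7 followed by 7x1, each per-channel.
        self.dw_h = nn.Conv2d(out_ch, out_ch, (1, 7), padding=(0, 3), groups=out_ch)
        self.dw_v = nn.Conv2d(out_ch, out_ch, (7, 1), padding=(3, 0), groups=out_ch)
        self.pw2 = nn.Conv2d(out_ch, out_ch, kernel_size=1)  # pointwise conv
        self.act = nn.GELU()
        # Channel attention: GAP -> 1x1 -> ReLU -> 1x1 -> Sigmoid.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, m, k):
        x = self.bn(self.pw1(torch.cat([m, k], dim=1)))      # concat M_i and K_i
        x = self.act(self.pw2(self.dw_v(self.dw_h(x)) + x))  # residual over axial pair
        return x * self.se(x)                                # P_i = L_i * channel weights
```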
On the basis of the above CT image segmentation method based on Bhattacharyya coefficient active contour attention, the second Bhattacharyya coefficient active contour attention module comprises a second initial contour block, a second image processing block, a second Bhattacharyya coefficient active contour block, and a second attention block.
(1) The second initial contour block is implemented as follows: feature map P1 is upsampled to obtain feature map M2; M2 passes through a 1×1 convolution to obtain feature map B2; feature map A3 passes through a 1×1 convolution to obtain feature map C2; B2 and C2 are added to obtain feature map D2; D2 passes through a ReLU activation, a 1×1 convolution, and a Sigmoid activation to obtain feature map N2; N2 is distance-transformed to obtain feature map E2, i.e., the initial contour.
The distance transform is computed as $E_2 = N_2 * \exp(-d/\sigma)$, where $d$ is the Euclidean distance, $*$ denotes convolution, $\exp$ is the exponential function with base $e$, and $\sigma$ is a manually chosen parameter with $\sigma > 0$.
(2) The second image processing block is implemented as follows: feature map M2 passes through a 1×1 convolution to obtain feature map F2; the input image passes through a Resize image processing function and a Sigmoid activation to obtain feature map G2; F2 and G2 are added to obtain feature map H2; H2 passes through a 1×1 convolution and a Sigmoid activation to obtain feature map I2.
(3) The second Bhattacharyya coefficient active contour block is implemented as follows: the Bhattacharyya coefficient active contour algorithm is applied to feature map I2 for a finite number of segmentation iterations to obtain feature map J2, $J_2 = \mathrm{BCV}(I_2, E_2, \mu, \nu)$, where $\mu$ and $\nu$ are manually chosen positive parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E2 is the initial contour obtained in (1).
The energy function of the Bhattacharyya coefficient active contour algorithm is
$E(\phi) = \mu \int_{\Omega} \delta(\phi)\,\lvert\nabla\phi\rvert\,dx + \nu \int_{\Omega} H(\phi)\,dx + \int_{Q} \sqrt{P_{in}(q)\,P_{out}(q)}\,dq$,
where the term weighted by $\mu$ measures the length of the contour, the term weighted by $\nu$ measures the area inside the contour, $\delta$ is the Dirac function, $H$ is the Heaviside function, $\Omega$ is the image domain, $Q$ is the RGB color space, and in and out denote the regions inside and outside the initial contour $E_2$. For a given color $q$, the probability distribution functions $P_{in}(q)$ and $P_{out}(q)$ are estimated with a Gaussian-kernel histogram.
(4) The second attention block is implemented as follows: feature maps A3 and J2 are multiplied to obtain feature map K2.
On the basis of the above CT image segmentation method based on Bhattacharyya coefficient active contour attention, the second attention depthwise separable convolution block is implemented as follows: feature maps M2 and K2 are concatenated; the concatenated result passes in sequence through a pointwise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a pointwise convolution, and a GELU activation to obtain feature map L2; L2 passes in sequence through global average pooling, a 1×1 convolution, a ReLU activation, a 1×1 convolution, and a Sigmoid activation, and the result is multiplied with L2 to obtain feature map P2.
On the basis of the above CT image segmentation method based on Bhattacharyya coefficient active contour attention, the third Bhattacharyya coefficient active contour attention module comprises a third initial contour block, a third image processing block, a third Bhattacharyya coefficient active contour block, and a third attention block.
(1) The third initial contour block is implemented as follows: feature map P2 is upsampled to obtain feature map M3; M3 passes through a 1×1 convolution to obtain feature map B3; feature map A2 passes through a 1×1 convolution to obtain feature map C3; B3 and C3 are added to obtain feature map D3; D3 passes through a ReLU activation, a 1×1 convolution, and a Sigmoid activation to obtain feature map N3; N3 is distance-transformed to obtain feature map E3, i.e., the initial contour.
The distance transform is computed as $E_3 = N_3 * \exp(-d/\sigma)$, where $d$ is the Euclidean distance, $*$ denotes convolution, $\exp$ is the exponential function with base $e$, and $\sigma$ is a manually chosen parameter with $\sigma > 0$.
(2) The third image processing block is implemented as follows: feature map M3 passes through a 1×1 convolution to obtain feature map F3; the input image passes through a Resize image processing function and a Sigmoid activation to obtain feature map G3; F3 and G3 are added to obtain feature map H3; H3 passes through a 1×1 convolution and a Sigmoid activation to obtain feature map I3.
(3) The third Bhattacharyya coefficient active contour block is implemented as follows: the Bhattacharyya coefficient active contour algorithm is applied to feature map I3 for a finite number of segmentation iterations to obtain feature map J3, $J_3 = \mathrm{BCV}(I_3, E_3, \mu, \nu)$, where $\mu$ and $\nu$ are manually chosen positive parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E3 is the initial contour obtained in (1).
The energy function of the Bhattacharyya coefficient active contour algorithm is
$E(\phi) = \mu \int_{\Omega} \delta(\phi)\,\lvert\nabla\phi\rvert\,dx + \nu \int_{\Omega} H(\phi)\,dx + \int_{Q} \sqrt{P_{in}(q)\,P_{out}(q)}\,dq$,
where the term weighted by $\mu$ measures the length of the contour, the term weighted by $\nu$ measures the area inside the contour, $\delta$ is the Dirac function, $H$ is the Heaviside function, $\Omega$ is the image domain, $Q$ is the RGB color space, and in and out denote the regions inside and outside the initial contour $E_3$. For a given color $q$, the probability distribution functions $P_{in}(q)$ and $P_{out}(q)$ are estimated with a Gaussian-kernel histogram.
(4) The third attention block is implemented as follows: feature maps A2 and J3 are multiplied to obtain feature map K3.
On the basis of the above CT image segmentation method based on Bhattacharyya coefficient active contour attention, the third attention depthwise separable convolution block is implemented as follows: feature maps M3 and K3 are concatenated; the concatenated result passes in sequence through a pointwise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a pointwise convolution, and a GELU activation to obtain feature map L3; L3 passes in sequence through global average pooling, a 1×1 convolution, a ReLU activation, a 1×1 convolution, and a Sigmoid activation, and the result is multiplied with L3 to obtain feature map P3.
On the basis of the above CT image segmentation method based on Bhattacharyya coefficient active contour attention, the fourth Bhattacharyya coefficient active contour attention module comprises a fourth initial contour block, a fourth image processing block, a fourth Bhattacharyya coefficient active contour block, and a fourth attention block.
(1) The fourth initial contour block is implemented as follows: feature map P3 is upsampled to obtain feature map M4; M4 passes through a 1×1 convolution to obtain feature map B4; feature map A1 passes through a 1×1 convolution to obtain feature map C4; B4 and C4 are added to obtain feature map D4; D4 passes through a ReLU activation, a 1×1 convolution, and a Sigmoid activation to obtain feature map N4; N4 is distance-transformed to obtain feature map E4, i.e., the initial contour.
The distance transform is computed as $E_4 = N_4 * \exp(-d/\sigma)$, where $d$ is the Euclidean distance, $*$ denotes convolution, $\exp$ is the exponential function with base $e$, and $\sigma$ is a manually chosen parameter with $\sigma > 0$.
(2) The fourth image processing block is implemented as follows: feature map M4 passes through a 1×1 convolution to obtain feature map F4; the input image passes through a Resize image processing function and a Sigmoid activation to obtain feature map G4; F4 and G4 are added to obtain feature map H4; H4 passes through a 1×1 convolution and a Sigmoid activation to obtain feature map I4.
(3) The fourth Bhattacharyya coefficient active contour block is implemented as follows: the Bhattacharyya coefficient active contour algorithm is applied to feature map I4 for a finite number of segmentation iterations to obtain feature map J4, $J_4 = \mathrm{BCV}(I_4, E_4, \mu, \nu)$, where $\mu$ and $\nu$ are manually chosen positive parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E4 is the initial contour obtained in (1).
The energy function of the Bhattacharyya coefficient active contour algorithm is
$E(\phi) = \mu \int_{\Omega} \delta(\phi)\,\lvert\nabla\phi\rvert\,dx + \nu \int_{\Omega} H(\phi)\,dx + \int_{Q} \sqrt{P_{in}(q)\,P_{out}(q)}\,dq$,
where the term weighted by $\mu$ measures the length of the contour, the term weighted by $\nu$ measures the area inside the contour, $\delta$ is the Dirac function, $H$ is the Heaviside function, $\Omega$ is the image domain, $Q$ is the RGB color space, and in and out denote the regions inside and outside the initial contour $E_4$. For a given color $q$, the probability distribution functions $P_{in}(q)$ and $P_{out}(q)$ are estimated with a Gaussian-kernel histogram.
(4) The fourth attention block is implemented as follows: feature maps A1 and J4 are multiplied to obtain feature map K4.
On the basis of the above CT image segmentation method based on Bhattacharyya coefficient active contour attention, the fourth attention depthwise separable convolution block is implemented as follows:
Feature maps M4 and K4 are concatenated; the concatenated result passes in sequence through a pointwise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a pointwise convolution, and a GELU activation to obtain feature map L4.
L4 passes in sequence through global average pooling, a 1×1 convolution, a ReLU activation, a 1×1 convolution, and a Sigmoid activation; the result is multiplied with L4 to obtain feature map P4.
On the basis of the above CT image segmentation method based on Bhattacharyya coefficient active contour attention, the output module is implemented as follows: feature map P4 passes through a 1×1 convolution, a BatchNorm layer (which accelerates the convergence of the deep neural network), and a ReLU activation to obtain the segmented image.
The effects stated in this summary are only those of the embodiments, not all effects of the invention. The above technical solution has the following advantages or beneficial effects:
The present invention proposes a Bhattacharyya coefficient active contour attention structure. By quantifying the similarity between the gray-value distributions of the segmented region and of the image, the structure helps the segmentation method adapt to gray-level differences between regions and concentrate attention on the target structure. In the decoding stage, an attention depthwise separable convolution module is proposed; it has low computational complexity and adaptively adjusts the weight of each channel of the feature map, so that relationships between features are captured in a more targeted way and important features are learned while unimportant ones are suppressed.
Brief Description of the Drawings
The accompanying drawings provide a further understanding of the present invention and form a part of the specification; together with the embodiments they serve to explain the invention and do not limit it.
Fig. 1 is a flowchart of training the network with the training set according to the present invention.
Fig. 2 shows the segmentation results of the network on the test set according to the present invention.
Detailed Description
To make the purpose, technical solution, and advantages of the present invention clearer, the invention is further described below with reference to the accompanying drawings and specific embodiments. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of the present invention.
A CT image segmentation method based on Bhattacharyya coefficient active contour attention comprises the following steps:
S1. The dataset consists of 3D CT images of kidneys and kidney tumors from MICCAI KiTS19; its training set contains 210 3D-CT volumes and its test set contains 90 3D-CT volumes.
S2. The nibabel library is called to convert each volume in the dataset into 2D slices in .png format; 10 slices are selected from each volume to obtain dataset D', and the training set in D' is augmented to obtain dataset D.
S3. The network consists of an encoding part and a decoding part. The encoding part uses a pretrained VGG16 network to extract features from the input image; the decoding part uses Bhattacharyya coefficient active contour attention modules and attention depthwise separable convolution blocks.
S4. The loss function Loss is computed with the Dice loss function (see the training sketch after this list).
S5. The SGD optimizer adjusts the weights and biases of the network through backpropagation.
S6. During training on dataset D, a variable holding the best evaluation metric is maintained, and the optimal weights and biases are saved in a newly created ph file.
S7. The optimal weights and biases are loaded into the network, the images of the test set are read and segmented, and the results are saved as jpg files.
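A minimal training sketch for steps S4–S6 follows, assuming a PyTorch DataLoader over dataset D; the learning rate, momentum, epoch count, and checkpoint name are illustrative.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on sigmoid probabilities; pred, target: (B, 1, H, W)."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    denom = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

def train(model, loader, epochs=100, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    best = float("inf")
    for _ in range(epochs):
        for img, mask in loader:
            loss = dice_loss(torch.sigmoid(model(img)), mask)
            opt.zero_grad()
            loss.backward()              # backpropagation (step S5)
            opt.step()
        if loss.item() < best:           # track the best value (step S6),
            best = loss.item()           # here using the loss for brevity
            torch.save(model.state_dict(), "best.ph")
```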
In this embodiment, the encoding part uses the pretrained VGG16 network to extract features from the input image. The invention directly uses the pretrained VGG16 network and its weight file. The pretrained VGG16 network contains six stages in total, of which only the first five are used; the fifth stage does not include the Maxpooling operation. The five stages yield feature maps A1, A2, A3, A4, and A5, respectively.
In this embodiment, the decoding part uses the Bhattacharyya coefficient active contour attention structure and attention depthwise separable convolution blocks; for the active contour attention structure, see the reference "Chan-Vese Attention U-Net: An attention mechanism for robust segmentation". The decoding part consists of a first Bhattacharyya coefficient active contour attention module, a first attention depthwise separable convolution block, a second Bhattacharyya coefficient active contour attention module, a second attention depthwise separable convolution block, a third Bhattacharyya coefficient active contour attention module, a third attention depthwise separable convolution block, a fourth Bhattacharyya coefficient active contour attention module, a fourth attention depthwise separable convolution block, and an output module.
In this embodiment, the first Bhattacharyya coefficient active contour attention module comprises a first initial contour block, a first image processing block, a first Bhattacharyya coefficient active contour block, and a first attention block.
(1) The first initial contour block is implemented as follows: feature map A5 is upsampled to obtain feature map M1; M1 passes through a 1×1 convolution to obtain feature map B1; feature map A4 passes through a 1×1 convolution to obtain feature map C1; B1 and C1 are added to obtain feature map D1; D1 passes through a ReLU activation, a 1×1 convolution, and a Sigmoid activation to obtain feature map N1; N1 is distance-transformed to obtain feature map E1, i.e., the initial contour.
The distance transform is computed as $E_1 = N_1 * \exp(-d/\sigma)$, where $d$ is the Euclidean distance, $*$ denotes convolution, $\exp$ is the exponential function with base $e$, and $\sigma$ is a manually chosen parameter with $\sigma > 0$.
(2) The first image processing block is implemented as follows: feature map M1 passes through a 1×1 convolution to obtain feature map F1; the input image passes through a Resize image processing function and a Sigmoid activation to obtain feature map G1; F1 and G1 are added to obtain feature map H1; H1 passes through a 1×1 convolution and a Sigmoid activation to obtain feature map I1.
(3) The first Bhattacharyya coefficient active contour block is implemented as follows: the Bhattacharyya coefficient active contour algorithm is applied to feature map I1 for a finite number of segmentation iterations to obtain feature map J1, $J_1 = \mathrm{BCV}(I_1, E_1, \mu, \nu)$, where $\mu$ and $\nu$ are manually chosen positive parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E1 is the initial contour obtained in (1).
The energy function of the Bhattacharyya coefficient active contour algorithm is
$E(\phi) = \mu \int_{\Omega} \delta(\phi)\,\lvert\nabla\phi\rvert\,dx + \nu \int_{\Omega} H(\phi)\,dx + \int_{Q} \sqrt{P_{in}(q)\,P_{out}(q)}\,dq$,
where the term weighted by $\mu$ measures the length of the contour, the term weighted by $\nu$ measures the area inside the contour, $\delta$ is the Dirac function, $H$ is the Heaviside function, $\Omega$ is the image domain, $Q$ is the RGB color space, and in and out denote the regions inside and outside the initial contour $E_1$. For a given color $q$, the probability distribution functions $P_{in}(q)$ and $P_{out}(q)$ are estimated with a Gaussian-kernel histogram.
(4) The first attention block is implemented as follows: feature maps A4 and J1 are multiplied to obtain feature map K1.
In this embodiment, the first attention depthwise separable convolution block is implemented as follows: feature maps M1 and K1 are concatenated; the concatenated result passes in sequence through a pointwise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a pointwise convolution, and a GELU activation to obtain feature map L1; L1 passes in sequence through global average pooling, a 1×1 convolution, a ReLU activation, a 1×1 convolution, and a Sigmoid activation, and the result is multiplied with L1 to obtain feature map P1.
In this embodiment, the second Bhattacharyya coefficient active contour attention module comprises a second initial contour block, a second image processing block, a second Bhattacharyya coefficient active contour block, and a second attention block.
(1) The second initial contour block is implemented as follows: feature map P1 is upsampled to obtain feature map M2; M2 passes through a 1×1 convolution to obtain feature map B2; feature map A3 passes through a 1×1 convolution to obtain feature map C2; B2 and C2 are added to obtain feature map D2; D2 passes through a ReLU activation, a 1×1 convolution, and a Sigmoid activation to obtain feature map N2; N2 is distance-transformed to obtain feature map E2, i.e., the initial contour.
The distance transform is computed as $E_2 = N_2 * \exp(-d/\sigma)$, where $d$ is the Euclidean distance, $*$ denotes convolution, $\exp$ is the exponential function with base $e$, and $\sigma$ is a manually chosen parameter with $\sigma > 0$.
(2) The second image processing block is implemented as follows: feature map M2 passes through a 1×1 convolution to obtain feature map F2; the input image passes through a Resize image processing function and a Sigmoid activation to obtain feature map G2; F2 and G2 are added to obtain feature map H2; H2 passes through a 1×1 convolution and a Sigmoid activation to obtain feature map I2.
(3) The second Bhattacharyya coefficient active contour block is implemented as follows: the Bhattacharyya coefficient active contour algorithm is applied to feature map I2 for a finite number of segmentation iterations to obtain feature map J2, $J_2 = \mathrm{BCV}(I_2, E_2, \mu, \nu)$, where $\mu$ and $\nu$ are manually chosen positive parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E2 is the initial contour obtained in (1).
The energy function of the Bhattacharyya coefficient active contour algorithm is
$E(\phi) = \mu \int_{\Omega} \delta(\phi)\,\lvert\nabla\phi\rvert\,dx + \nu \int_{\Omega} H(\phi)\,dx + \int_{Q} \sqrt{P_{in}(q)\,P_{out}(q)}\,dq$,
where the term weighted by $\mu$ measures the length of the contour, the term weighted by $\nu$ measures the area inside the contour, $\delta$ is the Dirac function, $H$ is the Heaviside function, $\Omega$ is the image domain, $Q$ is the RGB color space, and in and out denote the regions inside and outside the initial contour $E_2$. For a given color $q$, the probability distribution functions $P_{in}(q)$ and $P_{out}(q)$ are estimated with a Gaussian-kernel histogram.
(4) The second attention block is implemented as follows: feature maps A3 and J2 are multiplied to obtain feature map K2.
In this embodiment, the second attention depthwise separable convolution block is implemented as follows: feature maps M2 and K2 are concatenated; the concatenated result passes in sequence through a pointwise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a pointwise convolution, and a GELU activation to obtain feature map L2; L2 passes in sequence through global average pooling, a 1×1 convolution, a ReLU activation, a 1×1 convolution, and a Sigmoid activation, and the result is multiplied with L2 to obtain feature map P2.
In this embodiment, the third Bhattacharyya coefficient active contour attention module comprises a third initial contour block, a third image processing block, a third Bhattacharyya coefficient active contour block, and a third attention block.
(1) The third initial contour block is implemented as follows: feature map P2 is upsampled to obtain feature map M3; M3 passes through a 1×1 convolution to obtain feature map B3; feature map A2 passes through a 1×1 convolution to obtain feature map C3; B3 and C3 are added to obtain feature map D3; D3 passes through a ReLU activation, a 1×1 convolution, and a Sigmoid activation to obtain feature map N3; N3 is distance-transformed to obtain feature map E3, i.e., the initial contour.
The distance transform is computed as $E_3 = N_3 * \exp(-d/\sigma)$, where $d$ is the Euclidean distance, $*$ denotes convolution, $\exp$ is the exponential function with base $e$, and $\sigma$ is a manually chosen parameter with $\sigma > 0$.
(2) The third image processing block is implemented as follows: feature map M3 passes through a 1×1 convolution to obtain feature map F3; the input image passes through a Resize image processing function and a Sigmoid activation to obtain feature map G3; F3 and G3 are added to obtain feature map H3; H3 passes through a 1×1 convolution and a Sigmoid activation to obtain feature map I3.
(3) The third Bhattacharyya coefficient active contour block is implemented as follows: the Bhattacharyya coefficient active contour algorithm is applied to feature map I3 for a finite number of segmentation iterations to obtain feature map J3, $J_3 = \mathrm{BCV}(I_3, E_3, \mu, \nu)$, where $\mu$ and $\nu$ are manually chosen positive parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E3 is the initial contour obtained in (1).
The energy function of the Bhattacharyya coefficient active contour algorithm is
$E(\phi) = \mu \int_{\Omega} \delta(\phi)\,\lvert\nabla\phi\rvert\,dx + \nu \int_{\Omega} H(\phi)\,dx + \int_{Q} \sqrt{P_{in}(q)\,P_{out}(q)}\,dq$,
where the term weighted by $\mu$ measures the length of the contour, the term weighted by $\nu$ measures the area inside the contour, $\delta$ is the Dirac function, $H$ is the Heaviside function, $\Omega$ is the image domain, $Q$ is the RGB color space, and in and out denote the regions inside and outside the initial contour $E_3$. For a given color $q$, the probability distribution functions $P_{in}(q)$ and $P_{out}(q)$ are estimated with a Gaussian-kernel histogram.
(4) The third attention block is implemented as follows: feature maps A2 and J3 are multiplied to obtain feature map K3.
In this embodiment, the third attention depthwise separable convolution block is implemented as follows: feature maps M3 and K3 are concatenated; the concatenated result passes in sequence through a pointwise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a pointwise convolution, and a GELU activation to obtain feature map L3; L3 passes in sequence through global average pooling, a 1×1 convolution, a ReLU activation, a 1×1 convolution, and a Sigmoid activation, and the result is multiplied with L3 to obtain feature map P3.
In this embodiment, the fourth Bhattacharyya coefficient active contour attention module comprises a fourth initial contour block, a fourth image processing block, a fourth Bhattacharyya coefficient active contour block, and a fourth attention block.
(1) The fourth initial contour block is implemented as follows: feature map P3 is upsampled to obtain feature map M4; M4 passes through a 1×1 convolution to obtain feature map B4; feature map A1 passes through a 1×1 convolution to obtain feature map C4; B4 and C4 are added to obtain feature map D4; D4 passes through a ReLU activation, a 1×1 convolution, and a Sigmoid activation to obtain feature map N4; N4 is distance-transformed to obtain feature map E4, i.e., the initial contour.
The distance transform is computed as $E_4 = N_4 * \exp(-d/\sigma)$, where $d$ is the Euclidean distance, $*$ denotes convolution, $\exp$ is the exponential function with base $e$, and $\sigma$ is a manually chosen parameter with $\sigma > 0$.
(2) The fourth image processing block is implemented as follows: feature map M4 passes through a 1×1 convolution to obtain feature map F4; the input image passes through a Resize image processing function and a Sigmoid activation to obtain feature map G4; F4 and G4 are added to obtain feature map H4; H4 passes through a 1×1 convolution and a Sigmoid activation to obtain feature map I4.
(3) The fourth Bhattacharyya coefficient active contour block is implemented as follows: the Bhattacharyya coefficient active contour algorithm is applied to feature map I4 for a finite number of segmentation iterations to obtain feature map J4, $J_4 = \mathrm{BCV}(I_4, E_4, \mu, \nu)$, where $\mu$ and $\nu$ are manually chosen positive parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E4 is the initial contour obtained in (1).
The energy function of the Bhattacharyya coefficient active contour algorithm is
$E(\phi) = \mu \int_{\Omega} \delta(\phi)\,\lvert\nabla\phi\rvert\,dx + \nu \int_{\Omega} H(\phi)\,dx + \int_{Q} \sqrt{P_{in}(q)\,P_{out}(q)}\,dq$,
where the term weighted by $\mu$ measures the length of the contour, the term weighted by $\nu$ measures the area inside the contour, $\delta$ is the Dirac function, $H$ is the Heaviside function, $\Omega$ is the image domain, $Q$ is the RGB color space, and in and out denote the regions inside and outside the initial contour $E_4$. For a given color $q$, the probability distribution functions $P_{in}(q)$ and $P_{out}(q)$ are estimated with a Gaussian-kernel histogram.
(4) The fourth attention block is implemented as follows: feature maps A1 and J4 are multiplied to obtain feature map K4.
In this embodiment, the fourth attention depthwise separable convolution block is implemented as follows: feature maps M4 and K4 are concatenated; the concatenated result passes in sequence through a pointwise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a pointwise convolution, and a GELU activation to obtain feature map L4; L4 passes in sequence through global average pooling, a 1×1 convolution, a ReLU activation, a 1×1 convolution, and a Sigmoid activation, and the result is multiplied with L4 to obtain feature map P4.
In this embodiment, the output module is implemented as follows: feature map P4 passes through a 1×1 convolution, a BatchNorm layer (which accelerates the convergence of the deep neural network), and a ReLU activation to obtain the segmented image.
The present invention reaches 97.97%, 95.56%, 94.04%, and 95.88% on the accuracy, precision, Jaccard similarity coefficient, and Dice similarity coefficient evaluation metrics, respectively (a generic computation of these metrics is sketched below). Compared with advanced models such as U-Net, UNet++, and TransUNet, the invention focuses better on the target structure and obtains good segmentation results even for images with an uneven gray-value distribution.
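For reference, the four reported metrics can be computed from a binary prediction and ground truth as follows; this is a generic formulation of the named metrics, not code from the patent.

```python
import numpy as np

def metrics(pred, gt):
    """pred, gt: binary masks as numpy arrays of 0/1 values."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp + 1e-8)
    jaccard = tp / (tp + fp + fn + 1e-8)
    dice = 2 * tp / (2 * tp + fp + fn + 1e-8)
    return accuracy, precision, jaccard, dice
```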
Although specific embodiments of the invention have been described above with reference to the accompanying drawings, this does not limit the scope of protection of the invention. On the basis of the technical solution of the present invention, any modifications or variations that a person skilled in the art can make without creative effort remain within the scope of protection of the present invention.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311344404.8A CN117078705B (en) | 2023-10-18 | 2023-10-18 | A CT image segmentation method based on Bhattacharyya coefficient active contour attention |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311344404.8A CN117078705B (en) | 2023-10-18 | 2023-10-18 | A CT image segmentation method based on Bhattacharyya coefficient active contour attention |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117078705A (en) | 2023-11-17 |
CN117078705B (en) | 2024-02-13 |
Family
ID=88706521
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311344404.8A Active CN117078705B (en) | 2023-10-18 | 2023-10-18 | A CT image segmentation method based on Bhattacharyya coefficient active contour attention |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117078705B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114723669A (en) * | 2022-03-08 | 2022-07-08 | 同济大学 | Liver tumor two-point five-dimensional deep learning segmentation algorithm based on context information perception |
CN115393293A (en) * | 2022-08-12 | 2022-11-25 | 西南大学 | Segmentation and localization of electron microscope red blood cells based on UNet network and watershed algorithm |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10420523B2 (en) * | 2016-03-21 | 2019-09-24 | The Board Of Trustees Of The Leland Stanford Junior University | Adaptive local window-based methods for characterizing features of interest in digital images and systems for practicing same |
- 2023-10-18: Application CN202311344404.8A filed in China (CN); published as CN117078705B, status active.
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114723669A (en) * | 2022-03-08 | 2022-07-08 | 同济大学 | Liver tumor two-point five-dimensional deep learning segmentation algorithm based on context information perception |
CN115393293A (en) * | 2022-08-12 | 2022-11-25 | 西南大学 | Segmentation and localization of electron microscope red blood cells based on UNet network and watershed algorithm |
Non-Patent Citations (4)
Title |
---|
A-PSPNet: a PSPNet image semantic segmentation model incorporating an attention mechanism; Gao Dan, Chen Jianying, Xie Ying; Journal of China Academy of Electronics and Information Technology, No. 06 *
Chan-Vese Attention U-Net: An attention mechanism for robust segmentation; Nicolas Makaroff et al.; arXiv *
Deep Depthwise Separable Convolutional Network for Change Detection in Optical Aerial Images; Ruochen Liu et al.; IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing *
Research on kidney tumor segmentation in CT images based on a multi-scale convolutional neural network; Ji Hong; China Master's Theses Full-text Database, Medicine & Health Sciences *
Also Published As
Publication number | Publication date |
---|---|
CN117078705A (en) | 2023-11-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108776969B (en) | Tumor segmentation method in breast ultrasound images based on fully convolutional network | |
CN109410219B (en) | Image segmentation method and device based on pyramid fusion learning and computer readable storage medium | |
CN110889853B (en) | Tumor segmentation method based on residual error-attention deep neural network | |
CN110889852B (en) | Liver segmentation method based on residual error-attention deep neural network | |
CN116664605B (en) | Medical image tumor segmentation method based on diffusion model and multi-modal fusion | |
CN117079139B (en) | Remote sensing image target detection method and system based on multi-scale semantic features | |
CN111968138B (en) | Medical image segmentation method based on 3D dynamic edge insensitivity loss function | |
CN109035172B (en) | A deep learning-based non-local mean ultrasound image denoising method | |
CN111640120A (en) | Pancreas CT automatic segmentation method based on significance dense connection expansion convolution network | |
CN111598894B (en) | Retinal Vascular Image Segmentation System Based on Global Information Convolutional Neural Network | |
CN110517272B (en) | Deep learning-based blood cell segmentation method | |
CN113221925A (en) | Target detection method and device based on multi-scale image | |
CN113191968B (en) | Establishment method and application of blind denoising model for 3D ultrasound images | |
CN112651917A (en) | Space satellite low-illumination image enhancement method based on generation countermeasure network | |
CN114022462A (en) | Method, system, device, processor and computer-readable storage medium for realizing lesion segmentation of multi-parameter nuclear magnetic resonance images | |
CN115169533A (en) | Prostate Ultrasound Image Segmentation Method Based on Bidirectional Exponential Weighted Moving Average Algorithm | |
CN117078705B (en) | A CT image segmentation method based on Bhattacharyya coefficient active contour attention | |
CN117456185A (en) | Remote sensing image segmentation method based on adaptive pattern matching and nested modeling | |
CN114078149A (en) | Image estimation method, electronic equipment and storage medium | |
CN116542924A (en) | Method, device and storage medium for detecting prostate lesion area | |
CN113344935B (en) | Image segmentation method and system based on multi-scale difficulty perception | |
CN115018820A (en) | Multi-classification method of breast cancer based on texture enhancement | |
CN116468763A (en) | A Method of Electron Microscope Image Registration Based on Cost Volume | |
CN114742873A (en) | A three-dimensional reconstruction method, device and medium based on adaptive network | |
CN113139964A (en) | Multi-modal image segmentation method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |