CN118071688A - Real-time cerebral angiography quality assessment method - Google Patents
- Publication number
- CN118071688A (application number CN202410114418.9A)
- Authority
- CN
- China
- Prior art keywords
- quality
- quality control
- attention
- angiography
- self
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012—Biomedical image inspection (under G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
- G06N3/0464—Convolutional networks [CNN, ConvNet] (under G06N3/04 Architecture, e.g. interconnection topology)
- G06N3/08—Learning methods (under G06N3/02 Neural networks)
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components (under G06V10/40 Extraction of image or video features)
- G06V10/82—Image or video recognition or understanding using neural networks (under G06V10/70 Using pattern recognition or machine learning)
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular (under G06T2207/30004 Biomedical image processing)
- G06T2207/30168—Image quality inspection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Biophysics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The present invention relates to the field of medical technology and discloses a method for assessing angiography quality in real time during the angiography procedure, which can assist physicians with angiography quality control. The real-time cerebral angiography quality assessment method eliminates the influence of the image background on quality assessment. To address the segmentation difficulties specific to DSA angiography, the network draws on the Transformer's strength in global feature extraction and introduces a window-based local-global self-attention mechanism, which preserves linear computational complexity while attending to the relationships among the parts of the overall vascular structure, effectively improving segmentation accuracy. A feature aggregation module is also designed to filter encoder features by attention, making feature aggregation more efficient. In addition, the method incorporates clinical quality evaluation criteria for angiography: quality classification indicators are determined from an angiographic image quality classification dataset, image quality control points are located automatically, and a random forest performs the quality assessment.
Description
Technical Field
The present invention relates to the field of medical technology, and in particular to a real-time cerebral angiography quality assessment method.
Background Art
Cerebrovascular disease (CVD) is a disease in which brain tissue is damaged by disturbances of the intracranial blood circulation; it has become one of the major diseases endangering human health and life.
For automatic quality assessment of medical images, many studies have introduced deep learning-based image processing methods. Quality assessment is usually built on top of image processing, for example on object detection or segmentation of the main subject. Given the complexity of angiographic images, a two-stage approach of image processing followed by quality classification is adopted here.
For vascular image processing, segmentation networks such as U-Net are usually used to separate vessels from the background and achieve good segmentation performance. The imaging time of a DSA angiography procedure is typically only about 12 s; quality assessment during the procedure requires real-time image analysis, and limited computing resources must be considered when deploying to medical equipment. Therefore, as an essential foundation for quality assessment, the real-time performance and light weight of the segmentation algorithm must be taken into account. However, cerebral angiography images pose particular segmentation difficulties, such as the coexistence of large-diameter arteries and small-diameter capillaries, uneven contrast agent distribution, and uneven X-ray exposure, while common vessel segmentation algorithms suffer from large parameter counts, heavy computation, and slow inference. On the one hand, existing cerebral angiography segmentation methods tailored to the characteristics of DSA achieve good segmentation accuracy, but research on real-time, lightweight segmentation of cerebral angiograms is relatively scarce, and the trade-off between accuracy and real-time performance in cerebrovascular DSA segmentation has not been well resolved.
On the other hand, current approaches to lightweight medical image segmentation, such as lightweight feature extraction networks, attention mechanisms, and feature fusion modules added in place of network depth, improve inference speed to some extent, but because of insufficient feature extraction they struggle to guarantee segmentation quality on hard problems such as cerebrovascular segmentation. In view of these issues, a segmentation network is designed around the characteristics of cerebrovascular DSA; purpose-built feature processing modules and lightweight techniques ensure that the network is both real-time and accurate.
Current image quality assessment methods use manually extracted features combined with machine learning classifiers, and manually extracted features are more interpretable. Deep learning-based classification methods that have emerged in recent years learn sample features autonomously and generalize well.
On the one hand, the angiographic image dataset established here has a small sample size and is not suitable for deep learning classifiers that require large amounts of training data. On the other hand, cerebral angiography quality assessment has relatively clear evaluation criteria, and manually extracted features can emulate the physician's evaluation process and assess quality accurately. Therefore, feature extraction followed by machine learning classification is better suited to angiographic image quality assessment, and an automated feature extraction method is designed to achieve fully automated quality assessment.
Summary of the Invention
(I) Technical Problem to Be Solved
In view of the shortcomings of the prior art, the present invention provides a real-time cerebral angiography quality assessment method that requires little computation and extracts features accurately, solving the problems of heavy computation in image assessment and of extraction efficiency and accuracy in need of improvement.
(II) Technical Solution
To achieve the above objectives of low computational cost and accurate extraction, the present invention provides the following technical solution, a real-time cerebral angiography quality assessment method comprising the following steps:
Step S1: The collected anteroposterior images of the internal carotid artery are annotated for vessel segmentation and for quality classification, an angiographic image segmentation and classification dataset is established, and training and test sets are divided; the specific steps are as follows.
Step S1.1: Collect 110 anteroposterior internal carotid artery images and establish the angiographic image dataset.
Step S1.2: The segmentation and classification datasets share the above angiographic image data; segmentation labels are annotated manually, and the annotated regions are the vascular trunk and its branches.
Step S1.3: The quality classification categories are qualified and unqualified; unqualified images mainly include contrast concentration that is too high or too low, abnormal vascular structure, foreign-body artifacts, and motion artifacts.
Step S2: Input the images into the segmentation model for training, using whole-image training. The segmentation model improves on U-Net and comprises a lightweight feature extraction backbone, a local-global self-attention mechanism, and a feature aggregation module.
Step S2.1, lightweight feature extraction backbone: First, for angiography quality control assessment only the vascular trunk and main branches need to be segmented, and whole-image training helps the network learn the vascular structure. Second, whole-image training needs no pre- or post-processing; since computation grows sharply with high-resolution input, depthwise separable convolutions replace standard convolutions to reduce computation and achieve a lightweight backbone.
Step S2.2, local-global self-attention mechanism: A self-attention mechanism is introduced into the encoder and decoder, which can fully extract vascular structure features from the whole-image input.
Step S2.3, feature aggregation module: Encoder features are weighted by spatial attention to retain useful information.
Step S3: Input the classification dataset and the corresponding indicator calculation results into the quality classification model for training to obtain the final classification model, and design the quality assessment method around clinical quality control indicators. Clinically, angiography quality is judged mainly along several dimensions: vessel opacification gray level, contrast agent uniformity, vascular structure integrity, and abnormal vessel shape. Suitable quality control indicators are designed for these dimensions, quality control regions are selected on the vascular trunk, and specific quality control points are then determined for computing the indicators, comprising the following steps.
Step S3.1: The quality control regions comprise the C2-C3 and C6-C7 segments of the internal carotid artery trunk; these two regions reflect quality problems caused by abnormal contrast agent concentration and uniformity. A lightweight YOLOv7 object detection model automatically locates the quality control regions, and a detection dataset is established for its training.
Step S3.2: The quality control point is the largest circle inscribed in the vessel within the quality control region. It is located as follows: extract the vessel contour, take the vertical midpoint of the quality control region as the y-coordinate of the circle center, and find the quality control point within the region by the maximum inscribed circle radius method.
Step S3.3: The quality control indicators are the overall vessel area, the vessel gray level mean, the gray level mean and variance at the quality control points, and the number of detected quality control regions. The overall vessel area and gray level mean are the pixel count and mean gray level of the segmented vessel region; the quality control point gray level mean and variance are computed over the pixels inside the inscribed circle at the quality control point; and the number of detected quality control regions is the count returned by the object detection model.
Step S3.4: The quality classification model uses the random forest algorithm. A random forest consists of multiple decision trees, and the final prediction is produced by voting over the trees' results, which mitigates randomness and stabilizes the classification. The random forest training and test sets use the quality classification dataset and the corresponding indicator data, where the indicator data come from the indicator calculations on the classification dataset images.
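In practice an off-the-shelf implementation such as scikit-learn's RandomForestClassifier would be trained on the indicator vectors; the toy sketch below only illustrates the majority-voting step that makes the ensemble prediction more stable than any single tree. The per-tree outputs are fabricated for illustration.

```python
import numpy as np

def forest_predict(tree_preds):
    """Majority vote over per-tree predictions.
    tree_preds: (n_trees, n_samples) array of 0/1 labels
    (0 = unqualified image, 1 = qualified image)."""
    votes = tree_preds.sum(axis=0)
    # a sample is labelled 1 only when a strict majority of trees agree
    return (votes * 2 > tree_preds.shape[0]).astype(int)

# five hypothetical trees voting on three angiograms
preds = np.array([[1, 0, 1],
                  [1, 0, 0],
                  [0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 1]])
result = forest_predict(preds)  # array([1, 0, 1])
```

A single mistaken tree (e.g. the third tree on the second image) is outvoted, which is the stabilizing effect the step describes.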
Preferably, in step S1 the training and test sets are divided in a 7:3 ratio; the training set is used for model training and the test set for evaluating model performance. Segmentation evaluation metrics include Accuracy, Precision, and Sensitivity, together with parameter count (Params) and computational cost (FLOPs); classification evaluation metrics include Accuracy, Precision, and Sensitivity; and the average inference time evaluates the real-time performance of the overall system.
Preferably, in step S2 the segmentation model is based on the U-Net encoder-decoder architecture, and encoder feature extraction uses the MBConv depthwise separable convolution blocks from EfficientNet to reduce the number of model parameters.
A local-global self-attention module (L-G Block) is introduced after each MBConv; through multi-scale window sizes across feature layers it models feature information from local to global. As the feature map size decreases, the receptive field expands from local to global, fully extracting both the local continuity and the global structural information of the vessels.
The local-global self-attention modules are distributed symmetrically across the encoder and decoder. A feature aggregation module (FAM) is introduced at each encoder-decoder skip connection: the encoder features and the upsampled features are summed, a spatial attention matrix is computed, and the original encoder features are attention-weighted and concatenated with the upsampled features.
Preferably, in step S2.1 the first 7 layers of EfficientNet serve as the feature extraction backbone, reducing network parameters and computation.
Preferably, in step S2.2, the Transformer originally models global feature interactions with self-attention, but self-attention on two-dimensional images grows quadratically in computation with image resolution. To achieve global information interaction while keeping computation linear, a window-based self-attention mechanism is used.
The features are first divided into P×P windows of equal size; self-attention with a feed-forward network (FFN) is computed within each window to obtain an attention map, which weights the original features to yield globally attended features. Regarding the window-count hyperparameter:
as the network deepens and the feature resolution drops, the number of windows decreases; it is set to 16, 8, 4, 2, and 1 respectively, giving local self-attention in shallow layers and self-attention over the entire feature map in deep layers,
realizing a dynamic local-to-global self-attention mechanism. A symmetric self-attention module is introduced on the decoder side, preserving vascular structural integrity and local continuity while restoring resolution; the decoder-side window counts are set to 2, 4, 8, and 16 respectively.
Preferably, in step S2.3, the two inputs are first channel-reduced and summed; adaptive convolution followed by Sigmoid normalization then yields a spatial probability map. Finally the attention-weighted features are combined with the upsampled features for the subsequent self-attention computation. The feature aggregation modules all use 1×1 convolutions for channel dimension transformation, keeping computation small and lightweight.
(III) Beneficial Effects
Compared with the prior art, the present invention provides a real-time cerebral angiography quality assessment method with the following beneficial effects.
The method locates image quality control points automatically, avoiding reliance on deep learning classifiers that require large amounts of training data. By introducing a lightweight real-time segmentation network based on U-Net, the collected samples are processed with computationally cheaper depthwise separable convolutions. To address the segmentation difficulties of DSA angiography, a window-based local-global self-attention mechanism is introduced that preserves linear complexity while attending to the relationships among the parts of the overall vascular structure, effectively improving segmentation accuracy. In addition, the feature aggregation module filters encoder features by attention, making feature aggregation more efficient, improving the accuracy of feature extraction from the acquired images, and optimizing and enhancing the assessment of cerebral angiography quality.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the cerebral angiography quality assessment method proposed by the present invention;
FIG. 2 is a schematic diagram of the structure of the lightweight vessel segmentation model of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will now be described clearly and completely in conjunction with the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Referring to FIGS. 1-2, a real-time cerebral angiography quality assessment method comprises the following steps:
Step S1: The collected anteroposterior images of the internal carotid artery are annotated for vessel segmentation and for quality classification, an angiographic image segmentation and classification dataset is established, and training and test sets are divided in a 7:3 ratio. The training set is used for model training and the test set for evaluating model performance. Segmentation evaluation metrics include Accuracy, Precision, and Sensitivity, together with parameter count (Params) and computational cost (FLOPs); classification evaluation metrics include Accuracy, Precision, and Sensitivity; and the average inference time evaluates the real-time performance of the overall system. The specific steps are as follows.
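The 7:3 split and the Accuracy/Precision/Sensitivity metrics can be written out concisely; the confusion-matrix counts below are made up purely to exercise the formulas, not taken from the patent.

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, Precision, Sensitivity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)  # a.k.a. recall / true-positive rate
    return accuracy, precision, sensitivity

# 110 collected images split 7:3
n_total = 110
n_train = round(n_total * 0.7)   # 77 training images
n_test = n_total - n_train       # 33 test images

# hypothetical confusion counts on the 33 test images
acc, prec, sens = classification_metrics(tp=25, fp=2, tn=5, fn=1)
```

Params, FLOPs, and average inference time would be measured on the trained segmentation model itself rather than computed from counts like these.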
Step S1.1: Collect 110 anteroposterior internal carotid artery images and establish the angiographic image dataset.
Step S1.2: The segmentation and classification datasets share the above angiographic image data; segmentation labels are annotated manually, and the annotated regions are the vascular trunk and its branches.
Step S1.3: The quality classification categories are qualified and unqualified; unqualified images mainly include contrast concentration that is too high or too low, abnormal vascular structure, foreign-body artifacts, and motion artifacts.
Step S2: Referring to FIG. 2, input the images into the segmentation model for training, using whole-image training. The model comprises a lightweight feature extraction backbone, a local-global self-attention mechanism, and a feature aggregation module. The segmentation model is designed on the U-Net encoder-decoder architecture; encoder feature extraction uses the MBConv depthwise separable convolution blocks from EfficientNet, which significantly reduces the number of model parameters. To compensate for the insufficient feature extraction of the lightweight convolution blocks, a local-global self-attention module (L-G Block) is introduced after each MBConv; through multi-scale window sizes across feature layers it models feature information from local to global, and as the feature map size decreases the receptive field expands from local to global, fully extracting the local continuity and global structural information of the vessels. The local-global self-attention modules are distributed symmetrically across the encoder and decoder. A feature aggregation module (FAM) is introduced at each encoder-decoder skip connection: the encoder features and the upsampled features are summed, a spatial attention matrix is computed, and the original encoder features are attention-weighted and concatenated with the upsampled features. This module extracts the information in the original image features that benefits segmentation, avoiding the information redundancy and feature discrepancy of direct skip connections.
Step S2.1, lightweight feature extraction backbone: First, for angiography quality control assessment only the vascular trunk and main branches need to be segmented, and whole-image training helps the network learn the vascular structure. Second, whole-image training needs no pre- or post-processing; since computation grows sharply with high-resolution input, depthwise separable convolutions replace standard convolutions to reduce computation and achieve a lightweight backbone. The first 7 layers of EfficientNet serve as the feature extraction backbone, reducing network parameters and computation.
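As a rough illustration of why depthwise separable convolutions lighten the backbone, the sketch below compares parameter counts for a standard convolution and its depthwise separable equivalent; the channel and kernel sizes are illustrative, not taken from the patent.

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def ds_conv_params(c_in, c_out, k):
    """Depthwise separable: one k x k depthwise filter per input channel
    plus a 1 x 1 pointwise convolution (bias omitted)."""
    return c_in * k * k + c_in * c_out

# illustrative 3 x 3 layer mapping 64 channels to 128
standard = conv_params(64, 128, 3)       # 73728 parameters
separable = ds_conv_params(64, 128, 3)   # 8768 parameters, roughly 8x fewer
```

The same ratio applies to FLOPs at a given spatial resolution, which is why MBConv-style blocks suit the high-resolution whole-image input described above.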
Step S2.2, local-global self-attention mechanism: A self-attention mechanism is introduced into the encoder and decoder, fully extracting vascular structure features from the whole-image input. The Transformer originally models global feature interactions with self-attention, but self-attention on two-dimensional images grows quadratically in computation with image resolution; to achieve global information interaction while keeping computation linear, a window-based self-attention mechanism is used.
The features are first divided into P×P windows of equal size; self-attention with a feed-forward network (FFN) is computed within each window to obtain an attention map, which weights the original features to yield globally attended features. Regarding the window-count hyperparameter:
as the network deepens and the feature resolution drops, the number of windows decreases; it is set to 16, 8, 4, 2, and 1 respectively, giving local self-attention in shallow layers and self-attention over the entire feature map in deep layers,
realizing a dynamic local-to-global self-attention mechanism. A symmetric self-attention module is introduced on the decoder side, preserving vascular structural integrity and local continuity while restoring resolution; the decoder-side window counts are set to 2, 4, 8, and 16 respectively.
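A minimal NumPy sketch of the window-based self-attention idea follows. It assumes the window count refers to windows per side, uses identity Q/K/V projections in place of learned weights, and omits the FFN; it only shows how restricting attention to windows bounds the cost, with a single window recovering global attention.

```python
import numpy as np

def window_self_attention(x, windows_per_side):
    """Self-attention restricted to square windows of a feature map.
    x: (H, W, C) feature map; H and W must be divisible by
    windows_per_side. Attention is computed independently inside each
    window, so cost stays linear in the number of windows instead of
    quadratic in H * W; windows_per_side == 1 attends over the whole map."""
    H, W, C = x.shape
    wh, ww = H // windows_per_side, W // windows_per_side
    out = np.empty_like(x)
    for i in range(windows_per_side):
        for j in range(windows_per_side):
            win = x[i*wh:(i+1)*wh, j*ww:(j+1)*ww].reshape(-1, C)
            # identity projections stand in for learned Q, K, V weights
            scores = win @ win.T / np.sqrt(C)
            scores -= scores.max(axis=-1, keepdims=True)  # stable softmax
            attn = np.exp(scores)
            attn /= attn.sum(axis=-1, keepdims=True)
            out[i*wh:(i+1)*wh, j*ww:(j+1)*ww] = (attn @ win).reshape(wh, ww, C)
    return out

# schedule from the method: shallow layers use many windows (local
# attention), the deepest layer uses a single window (global attention)
feat = np.random.rand(32, 32, 8)
local_out = window_self_attention(feat, 16)
global_out = window_self_attention(feat, 1)
```

Each output pixel is a convex combination of the values inside its window, so the output stays within the input's value range while mixing information across the window.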
Step S2.3, feature aggregation module: Encoder features are weighted by spatial attention to retain useful information. The two inputs are first channel-reduced and summed; adaptive convolution followed by Sigmoid normalization then yields a spatial probability map. Finally the attention-weighted features are combined with the upsampled features for the subsequent self-attention computation. The feature aggregation modules all use 1×1 convolutions for channel dimension transformation, keeping computation small and lightweight.
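The FAM data flow (channel-reduce, sum, Sigmoid, weight, concatenate) can be sketched in NumPy as below. The random 1×1-convolution weights stand in for learned parameters and the adaptive convolution is omitted, so this demonstrates only the wiring, not trained behavior.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feature_aggregation(enc, up, w_enc, w_up):
    """Sketch of the feature aggregation module (FAM).
    enc, up: (H, W, C) encoder / upsampled decoder features.
    w_enc, w_up: (C, 1) weights standing in for learned 1x1 convolutions
    that reduce each feature to a single channel."""
    s = enc @ w_enc + up @ w_up           # 1x1 reduction, element-wise sum
    attn = sigmoid(s)                     # spatial probability map, (H, W, 1)
    enc_weighted = enc * attn             # attention-weighted encoder feature
    # concatenate with the upsampled feature, as in a filtered skip connection
    return np.concatenate([enc_weighted, up], axis=-1)

H, W, C = 16, 16, 8
enc = np.random.rand(H, W, C)
up = np.random.rand(H, W, C)
out = feature_aggregation(enc, up, np.random.randn(C, 1), np.random.randn(C, 1))
# out has shape (16, 16, 16): weighted encoder half plus untouched decoder half
```

Because only 1×1 reductions and an element-wise gate are involved, the module's cost is negligible next to the attention blocks, matching the lightweight claim above.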
Step S3: feed the classification dataset and the corresponding indicator values into the quality classification model for training, yielding the final classifier. The quality assessment method is designed around clinical quality-control indicators. Clinicians judge angiography quality mainly along four dimensions: vessel-opacification grey level, contrast-agent uniformity, vascular structural integrity, and abnormal vessel shape. Suitable quality-control indicators are designed for these dimensions: a quality-control region is selected on the main vessel trunk, and specific quality-control points are then located within it to compute the indicators, as follows.
Step S3.1: the quality-control regions are the C2-C3 and C6-C7 segments of the internal carotid artery trunk; these two regions reflect quality problems caused by abnormal contrast-agent concentration and uniformity. A lightweight YOLOv7 object-detection model locates the quality-control regions automatically and is trained on a purpose-built detection dataset.
Step S3.2: the quality-control point is the maximum inscribed circle of the vessel within the quality-control region. It is located as follows: extract the vessel contour, take the vertical midpoint of the quality-control region as the y-coordinate of the circle centre, and find the quality-control point within the region by the maximum-inscribed-circle-radius method.
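The maximum-inscribed-circle search of step S3.2 can be sketched with a brute-force distance computation on a binary vessel mask. This toy version fixes the y-coordinate at the region's mid-height, as the patent specifies, and scans that row for the pixel farthest from the background; a real implementation would more likely use a distance transform.

```python
import numpy as np

def max_inscribed_circle_row(mask, y):
    """Within row y of a binary vessel mask, return the x-coordinate and
    radius of the largest circle inscribed in the vessel with its centre
    on that row: the vessel pixel whose distance to the nearest background
    pixel is largest. O(H*W) per candidate; fine for small QC crops."""
    H, W = mask.shape
    bg = np.argwhere(mask == 0)              # background pixel coordinates (row, col)
    best_x, best_r = -1, -1.0
    for x in range(W):
        if mask[y, x] == 0:
            continue                          # centre must lie inside the vessel
        d = np.sqrt(((bg - np.array([y, x])) ** 2).sum(axis=1)).min()
        if d > best_r:
            best_x, best_r = x, d
    return best_x, best_r

# Toy vessel: a vertical band spanning columns 5..9 of an 11x15 mask
mask = np.zeros((11, 15), dtype=np.uint8)
mask[:, 5:10] = 1
cx, r = max_inscribed_circle_row(mask, y=5)
print(cx, r)  # 7 3.0 -- the band's centre column, 3 px from either wall
```

On real angiograms the mask would come from the segmentation network of step S2, and the row y from the mid-height of the YOLOv7-detected quality-control region.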
Step S3.3: the quality-control indicators are the overall vessel area, the vessel grey-level mean, the grey-level mean and variance at the quality-control points, and the number of detected quality-control regions. The overall vessel area and grey-level mean are the pixel count and mean grey level of the segmented vessel region; the quality-control-point mean and variance are computed over the pixels inside the inscribed circle at each quality-control point; and the region count is the number of quality-control regions returned by the detection model.
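Assembling the step S3.3 indicators into one feature vector might look like the sketch below; the function name and argument layout are illustrative, not from the patent.

```python
import numpy as np

def qc_indicators(gray, vessel_mask, qc_circles, n_regions):
    """Build the indicator vector of step S3.3: whole-vessel area (pixel
    count) and grey-level mean, then per-QC-point grey mean and variance
    over the inscribed-circle pixels, then the detected QC-region count."""
    area = int(vessel_mask.sum())
    vessel_mean = float(gray[vessel_mask > 0].mean()) if area else 0.0
    feats = [area, vessel_mean]
    H, W = gray.shape
    yy, xx = np.mgrid[0:H, 0:W]
    for (cy, cx, r) in qc_circles:             # each QC point is an inscribed circle
        inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
        vals = gray[inside]
        feats += [float(vals.mean()), float(vals.var())]
    feats.append(n_regions)                     # count from the detection model
    return np.array(feats)

gray = np.full((20, 20), 100.0)                 # uniform toy image
mask = np.zeros((20, 20)); mask[5:15, 8:12] = 1 # 10x4 vessel band
vec = qc_indicators(gray, mask, [(10, 10, 2)], n_regions=2)
print(vec)  # [ 40. 100. 100.   0.   2.]
```

On a uniform image the circle variance is zero; a blotchy, non-uniform contrast fill would raise it, which is exactly the signal these indicators are meant to capture.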
Step S3.4: the quality classification model is a random forest, an ensemble of decision trees whose final prediction is produced by majority vote over the trees, which suppresses the randomness of any single tree and stabilizes the classification. The random forest is trained and tested on the quality classification dataset together with the corresponding indicator data, the latter computed from the images in the classification dataset.
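A random-forest classifier over the indicator vectors could be trained as sketched below. scikit-learn's `RandomForestClassifier` stands in for the patent's model, and the synthetic "good"/"bad" indicator distributions are purely hypothetical stand-ins for the self-built dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical indicator table: each row is a 5-dim feature vector in the
# step S3.3 order (vessel area, vessel grey mean, QC-point grey mean,
# QC-point grey variance, detected QC-region count); labels are quality
# classes (1 = qualified, 0 = unqualified). Values are invented.
rng = np.random.default_rng(42)
good = rng.normal([4000, 90, 95, 20, 2], 5, size=(50, 5))
bad  = rng.normal([1500, 40, 45, 80, 1], 5, size=(50, 5))
X = np.vstack([good, bad])
y = np.array([1] * 50 + [0] * 50)

# Each tree votes; the ensemble's majority vote is the predicted class.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
pred = clf.predict([[3900, 88, 93, 22, 2]])
print(pred)
```

The voting behaviour is what the patent relies on for stability: individual trees are trained on bootstrap samples and random feature subsets, so no single tree's idiosyncrasy dominates the final label.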
Experimental example: experiments were run on the test split of the self-built cerebral angiography quality classification dataset. The proposed segmentation-based cerebral angiography quality-control method reached a classification accuracy of 84.6% with an average execution time of 0.87 s. Comparative experiments on the test split of the self-built cerebral angiography segmentation dataset showed that the designed lightweight segmentation network reached 98.0% segmentation accuracy, nearly matching state-of-the-art segmentation networks, while its parameter size of only 2.89 MB is far smaller than current advanced models and its inference rate of 27 FPS achieves real-time segmentation. Ablation experiments confirmed that the proposed local-global self-attention mechanism and feature fusion module each bring clear improvements to network performance.
The beneficial effects of the present invention are:
This real-time cerebral angiography quality assessment method locates image quality-control points automatically, making it suitable for deep-learning classification approaches that require large amounts of training data. The lightweight real-time U-Net-based segmentation network processes acquired samples with computationally cheaper separable convolutions; to address the specific difficulty of segmenting DSA angiograms, the window-based local-global self-attention mechanism keeps linear complexity while attending to the relationships among all parts of the vascular structure, effectively improving segmentation accuracy. In addition, the feature aggregation module filters the encoder features by attention, making feature aggregation more efficient and feature extraction more accurate, thereby optimizing and improving the assessment of cerebral angiography quality.
Although embodiments of the present invention have been shown and described, those skilled in the art will appreciate that various changes, modifications, substitutions and variations may be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims and their equivalents.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410114418.9A CN118071688A (en) | 2024-01-29 | 2024-01-29 | Real-time cerebral angiography quality assessment method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118071688A true CN118071688A (en) | 2024-05-24 |
Family
ID=91099831
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410114418.9A Pending CN118071688A (en) | 2024-01-29 | 2024-01-29 | Real-time cerebral angiography quality assessment method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118071688A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118279158A (en) * | 2024-06-03 | 2024-07-02 | 之江实验室 | Quality improvement method and device for magnetic resonance brain image and computer equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111598867B (en) | Method, apparatus, and computer-readable storage medium for detecting specific facial syndrome | |
CN108464840B (en) | Automatic detection method and system for breast lumps | |
CN113689954B (en) | Hypertension risk prediction method, device, equipment and medium | |
CN111667456B (en) | A method and device for detecting vascular stenosis in coronary X-ray sequence angiography | |
CN113420826B (en) | Liver focus image processing system and image processing method | |
CN110751636B (en) | Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network | |
CN113012163A (en) | Retina blood vessel segmentation method, equipment and storage medium based on multi-scale attention network | |
CN113576508A (en) | Cerebral hemorrhage auxiliary diagnosis system based on neural network | |
CN118172614B (en) | Ordered ankylosing spondylitis rating method based on supervised contrast learning | |
Fu et al. | Deep‐Learning‐Based CT Imaging in the Quantitative Evaluation of Chronic Kidney Diseases | |
CN118071688A (en) | Real-time cerebral angiography quality assessment method | |
Miao et al. | Classification of diabetic retinopathy based on multiscale hybrid attention mechanism and residual algorithm | |
CN113408647A (en) | Extraction method of cerebral small vessel structural features | |
CN116309295A (en) | Automatic scoring device for acute ischemic cerebral apoplexy ASPECTS based on DWI (discrete wavelet transform) image | |
CN115661101A (en) | Premature infant retinopathy detection system based on random sampling and deep learning | |
CN111340794A (en) | Method and device for quantifying coronary artery stenosis | |
CN113223704B (en) | Auxiliary diagnosis method for computed tomography aortic aneurysm based on deep learning | |
CN118799283A (en) | A method for constructing an MRI image training set and an Alzheimer's disease prediction model | |
Khudair et al. | Diabetes Diagnosis Using Deep Learning | |
CN118038182A (en) | Classification method of retinal OCT disease images based on improved neural network | |
CN118212411A (en) | A pulmonary embolism segmentation method based on deep learning | |
CN117522693A (en) | Method and system for enhancing machine vision of medical images using super resolution techniques | |
Taş et al. | Detection of retinal diseases from ophthalmological images based on convolutional neural network architecture. | |
CN116863206A (en) | AD classification method and system based on three-dimensional convolution and twin neural network | |
Zhao et al. | Retinal image segmentation algorithm based on hybrid pooling and multi-dimensional attention |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||