CN118071688A - Real-time cerebral angiography quality assessment method - Google Patents


Info

Publication number
CN118071688A
CN118071688A
Authority
CN
China
Prior art keywords: quality, quality control, attention, self, global
Prior art date
Legal status: Pending (assumed; not a legal conclusion)
Application number
CN202410114418.9A
Other languages
Chinese (zh)
Inventor
黄逸凡
陆小锋
刘学锋
孙军
唐嘉吕
Current Assignee
Wenzhou Research Institute Of Shanghai University
University of Shanghai for Science and Technology
Original Assignee
Wenzhou Research Institute Of Shanghai University
University of Shanghai for Science and Technology
Application filed by Wenzhou Research Institute Of Shanghai University, University of Shanghai for Science and Technology filed Critical Wenzhou Research Institute Of Shanghai University
Priority to CN202410114418.9A
Publication of CN118071688A


Classifications

    • G06T 7/0012 - Biomedical image inspection (image analysis; inspection of images, e.g. flaw detection)
    • G06N 3/0464 - Convolutional networks [CNN, ConvNet]
    • G06N 3/08 - Learning methods (neural networks)
    • G06V 10/44 - Local feature extraction (edges, contours, loops, corners, strokes or intersections); connectivity analysis
    • G06V 10/82 - Image or video recognition or understanding using neural networks
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G06T 2207/30101 - Blood vessel; Artery; Vein; Vascular
    • G06T 2207/30168 - Image quality inspection


Abstract

The invention relates to the technical field of medicine and discloses a method for assessing angiographic quality in real time during the contrast procedure, which can assist doctors in performing contrast quality control. The real-time cerebral angiography quality assessment method eliminates the influence of the background on quality assessment. To address the segmentation difficulties of DSA angiography, it draws on the global feature extraction strength of the Transformer and introduces a window-based local-global self-attention mechanism into the network, which attends to the relevance of each part of the overall vascular structure while keeping linear complexity, effectively improving segmentation precision. A feature aggregation module is also designed, which filters the encoder features in an attention-based manner so that feature aggregation is more efficient. Meanwhile, in combination with clinical quality evaluation indices for angiography, the method determines quality classification indices from a contrast image quality classification data set, locates image quality control points with an automated method, and performs quality assessment with a random forest.

Description

Real-time cerebral angiography quality assessment method
Technical Field
The invention relates to the technical field of medicine, in particular to a real-time cerebral angiography quality assessment method.
Background
Cerebrovascular disease (CVD) is a disease in which brain tissue is damaged by intracranial blood circulation disorders, and it has become one of the major diseases endangering human health and life.
Currently, for automatic quality assessment of medical images, many image processing methods that introduce deep learning have been studied; quality assessment is usually performed on top of image processing, for example image quality assessment based on object detection or on segmentation of the main subject. Owing to the complexity of contrast images, a two-stage quality assessment method of image processing plus quality classification is adopted.
For the processing of vessel images, a segmentation network such as U-Net is generally adopted to separate vessels from the background, with good segmentation performance. The imaging time of a DSA contrast procedure is typically only about 12 s, in-procedure quality assessment requires real-time image analysis, and limited computing resources must be considered when deploying to medical devices. Therefore, as an important basis for quality evaluation, the real-time performance and light weight of the segmentation algorithm must be considered. However, cerebral angiography images present special segmentation difficulties, such as the coexistence of large-diameter arteries and small-diameter capillaries, uneven distribution of contrast agent, and uneven X-ray exposure, while conventional vessel segmentation algorithms suffer from large parameter counts, high computation, and slow inference. On the one hand, existing cerebral angiography segmentation methods achieve good accuracy on DSA characteristics, but research on real-time lightweight segmentation of cerebral angiography is relatively scarce, and the trade-off between accuracy and real-time performance in cerebral DSA segmentation has not been well resolved. On the other hand, current lightweight medical image segmentation methods (such as lightening the feature extraction network, attention mechanisms, and adding various feature fusion modules in place of network depth) improve inference speed to some extent, but owing to insufficient feature extraction and similar issues, segmentation quality is hard to guarantee on difficult problems such as cerebral vessel segmentation.
To address these problems, the segmentation network is designed around the characteristics of cerebrovascular DSA, and its real-time performance and accuracy are ensured through specially designed feature processing modules and lightweight techniques.
Existing image quality evaluation methods classify hand-crafted features with machine learning algorithms; hand-crafted features are more intuitive. Deep learning classification methods of recent years perform quality classification by learning sample features autonomously and generalize better.
On the one hand, the established contrast image data set has a small sample size and is not suited to deep learning classification methods that require a large amount of training data. On the other hand, cerebral angiography has relatively clear quality evaluation indices, so a hand-crafted feature method can imitate a doctor's evaluation process and perform quality assessment accurately. Therefore, extracting features and classifying them with machine learning is more suitable for contrast image quality assessment, and an automated feature extraction method is designed to realize automated quality assessment.
Disclosure of Invention
(I) Technical problems to be solved
Aiming at the shortcomings of the prior art, the invention provides a real-time cerebral angiography quality assessment method that has the advantages of small computation and accurate extraction, and solves the problems of large computation and of extraction efficiency and accuracy in need of improvement during image assessment.
(II) Technical scheme
To achieve small computation and accurate extraction, the invention provides the following technical scheme: a real-time cerebral angiography quality assessment method, comprising the following steps:
Step S1: collect frontal (anteroposterior view) images of the internal carotid artery, perform vessel segmentation labeling and quality classification labeling respectively, establish a contrast image segmentation and classification data set, and divide it into a training set and a test set, with the following specific steps:
Step S1.1: collect 110 frontal-view carotid artery images and establish a contrast image data set;
Step S1.2: the segmentation and classification data sets share the same contrast images; segmentation labels are annotated manually, and the labeled regions are the vessel trunk and vessel branches;
Step S1.3: the quality classification categories are quality-qualified and quality-unqualified, where quality-unqualified mainly covers contrast agent concentration that is too high or too low, abnormal vascular structure, foreign-object artifacts, and motion artifacts;
Step S2: input the images into the segmentation model for training, using a full-image training mode. The segmentation model is improved on the basis of U-Net and comprises a lightweight feature extraction backbone, a local-global self-attention mechanism, and a feature aggregation module;
Step S2.1, lightweight feature extraction backbone: first, contrast quality control evaluation only requires segmenting the vessel trunk and main branches, and full-image training helps the network learn the vascular structure; second, the full-image training mode needs no pre- or post-processing, and since computation grows greatly with high-resolution input, depthwise separable convolution replaces conventional convolution to reduce computation and realize a lightweight backbone;
Step S2.2, local-global self-attention mechanism: self-attention is introduced into the encoder and decoder, allowing vascular structure features to be fully extracted from full-image input;
Step S2.3, feature aggregation module: the encoder features are weighted by spatial attention, preserving valid information;
Step S3: input the classification data set and the corresponding index calculation results into the quality classification model for training to obtain the final classification model, and design the quality evaluation method according to clinical quality control indices. Clinically, contrast quality is judged mainly along the dimensions of vessel development gray level, contrast agent uniformity, vascular structural integrity, and vessel shape abnormality; suitable quality control indices are designed for these dimensions, quality control regions are selected on the vessel trunk, and the quality control points used to compute the indices are then determined, as follows:
Step S3.1: the quality control regions comprise the C2-C3 and C6-C7 segments of the carotid artery trunk; these two regions reflect quality problems caused by abnormal contrast agent concentration and uniformity. A lightweight YOLOv7 object detection model is adopted to locate the quality control regions automatically, and a detection data set is established for training;
Step S3.2: the quality control point is the maximum inscribed circle of the vessel within the quality control region, located as follows: extract the vessel contour, take the midpoint of the quality control region's height as the y-coordinate of the quality control point's center, and find the quality control point within the region by the maximum inscribed circle radius method;
Step S3.3: the whole-vessel area and gray mean, the gray mean and variance at the quality control points, and the number of detected quality control regions are selected as quality control indices. The whole-vessel area and gray mean are the pixel count and mean gray level of the segmented vessel region; the gray mean and variance at the quality control points are computed over the pixels inside each quality control point's inscribed circle; and the number of detected quality control regions is the number of regions found by the object detection model;
Step S3.4: the quality classification model adopts a random forest algorithm. A random forest consists of multiple decision trees whose votes produce the final prediction, which reduces randomness and makes the classification result more stable. The random forest training and test data use the quality classification data set and the corresponding index data, where the index data come from the index calculation results on the classification data set's images.
Preferably, in step S1, the training set and the test set are divided in a 7:3 ratio; the training set is used for model training and the test set for evaluating model performance. The segmentation evaluation indices include Accuracy, Precision, Sensitivity, parameter count (Params), and computation (FLOPs); the classification evaluation indices include Accuracy, Precision, and Sensitivity; and the real-time performance of the whole system is evaluated by average inference time.
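As a sketch of how the segmentation evaluation indices above can be computed from pixel-level confusion counts (the averaging convention is not specified in this document, so treating vessel pixels as the positive class is an assumption):

```python
def segmentation_metrics(tp, fp, tn, fn):
    """Pixel-level Accuracy, Precision, and Sensitivity from confusion counts.

    tp/fp/tn/fn are counts of true/false positive/negative pixels,
    with the vessel treated as the positive class (an assumption).
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # also known as recall
    return accuracy, precision, sensitivity

# Toy counts for a 1000-pixel image crop:
acc, prec, sens = segmentation_metrics(tp=80, fp=20, tn=890, fn=10)
```

Params and FLOPs, by contrast, are properties of the model itself, and average inference time is measured over the whole pipeline.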
Preferably, in step S2, the segmentation model is based on a U-Net encoder-decoder architecture, and encoder feature extraction uses the MBConv depthwise separable convolution blocks from EfficientNet to reduce the model parameter count;
After each MBConv, a local-global self-attention module (L-GBlock) is introduced. Local-global feature information modeling is realized through multi-scale window sizes at different feature layers: as the feature map size decreases, the receptive field expands from local to global, so the local continuity information and global structural information of vessels can be fully extracted;
The local-global self-attention modules are distributed symmetrically at the encoder and decoder ends. A feature aggregation module (FAM) is introduced where encoder and decoder features join: it sums the encoder features with the upsampled features, computes a spatial attention matrix, applies the attention weights to the original encoder features, and concatenates the result with the upsampled features.
Preferably, in step S2.1, the first 7 layers of EfficientNet are adopted as the feature extraction backbone to reduce network parameters and computation.
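A quick parameter count illustrates why depthwise separable convolution lightens the backbone; the channel sizes below are illustrative only, not EfficientNet's actual layer widths:

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel) followed
    by a 1 x 1 pointwise convolution, as in MBConv-style blocks."""
    return c_in * k * k + c_in * c_out

# Hypothetical layer: 64 input channels, 128 output channels, 3 x 3 kernel.
std = conv_params(64, 128, 3)
sep = depthwise_separable_params(64, 128, 3)
ratio = std / sep  # roughly 8x fewer weights for this layer
```

The same factoring reduces FLOPs by a similar ratio, which is what makes full-image training at high resolution affordable.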
Preferably, in step S2.2, the Transformer models global feature interactions with self-attention, but for a two-dimensional image the computation of self-attention grows quadratically with image resolution; to realize global information interaction while keeping the computation at linear complexity, a window-based self-attention mechanism is adopted;
The features are first divided into P × P windows of equal size; self-attention (with a feed-forward network, FFN) is computed within each window to obtain an attention map, which weights the original features to give global attention features; the window size is a selectable hyperparameter;
As network depth increases and feature resolution decreases, the number of windows is gradually reduced, set to 16, 8, 4, 2, and 1 respectively, so that local self-attention is performed in shallow layers and self-attention over the entire feature map in deep layers,
implementing a dynamic local-to-global self-attention mechanism. Symmetric self-attention modules are introduced at the decoder end, so that vascular structural integrity and local continuity are preserved while resolution is rebuilt; the decoder-side window counts are set to 2, 4, 8, and 16 respectively.
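A back-of-envelope cost comparison shows why window-based self-attention stays tractable: with a fixed number of tokens per window, the pairwise-interaction count grows linearly rather than quadratically with the token count. The 256 × 256 feature size and the reading of "16" as windows per side are assumptions for illustration:

```python
def global_attention_cost(n_tokens):
    """Full self-attention: every token attends to every token, O(N^2)."""
    return n_tokens * n_tokens

def window_attention_cost(n_tokens, window_tokens):
    """Self-attention restricted to equal windows of M tokens:
    N/M windows, M^2 interactions each, so N * M in total --
    linear in N when the window size M is fixed."""
    n_windows = n_tokens // window_tokens
    return n_windows * window_tokens * window_tokens

# Hypothetical 256 x 256 feature map split into 16 windows per side,
# so each window holds (256 // 16) ** 2 = 256 tokens.
n = 256 * 256
full = global_attention_cost(n)
windowed = window_attention_cost(n, window_tokens=256)
```

The deepest layer, with a single window covering the whole (much smaller) feature map, recovers fully global attention cheaply because the token count has shrunk by then.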
Preferably, in step S2.3, the two inputs are first reduced in channel dimension and summed; a spatial probability map is then obtained by adaptive convolution followed by Sigmoid normalization; finally, the attention-weighted features are combined with the upsampled features for the subsequent self-attention computation. The feature aggregation modules all use 1 × 1 convolutions for channel dimension transformation, so the computation is small and the module is lightweight.
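A toy, single-channel sketch of the feature aggregation idea (sum, Sigmoid spatial map, gating); the 1 × 1 channel reduction and the adaptive convolution are omitted for brevity, and all values are illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def feature_aggregation(enc, up):
    """Single-channel sketch of the FAM: sum the (already dimension-reduced)
    encoder and upsampled features, squash the sum to a spatial probability
    map with a Sigmoid, gate the encoder features with that map, and return
    the gated map alongside the upsampled one (a channel concatenation).
    The adaptive convolution before the Sigmoid is omitted here."""
    h, w = len(enc), len(enc[0])
    attn = [[sigmoid(enc[i][j] + up[i][j]) for j in range(w)] for i in range(h)]
    gated = [[enc[i][j] * attn[i][j] for j in range(w)] for i in range(h)]
    return [gated, up]  # two "channels" handed on to the decoder

channels = feature_aggregation([[2.0, -2.0]], [[0.0, 0.0]])
```

The gating suppresses encoder responses where the summed evidence is weak, which is the stated purpose of filtering skip-connection features before aggregation.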
(III) Beneficial effects
Compared with the prior art, the invention provides a real-time cerebral angiography quality assessment method, which has the following beneficial effects:
The real-time cerebral angiography quality assessment method locates image quality control points with an automated method, suiting the small-sample setting in which deep learning classification, which requires a large amount of training data, is impractical. By introducing a lightweight real-time segmentation network based on U-Net, it adopts separable convolutions with smaller computation; meanwhile, for the segmentation difficulties of DSA angiography, it introduces a window-based local-global self-attention mechanism that guarantees linear complexity while attending to the relevance of each part of the overall vascular structure, effectively improving segmentation precision; furthermore, the feature aggregation module filters the encoder features in an attention-based manner, making feature aggregation more efficient. This improves the accuracy of feature extraction from the acquired images and optimizes the assessment of cerebral angiography quality.
Drawings
FIG. 1 is a schematic diagram of a method for evaluating the quality of cerebral angiography according to the present invention;
fig. 2 is a schematic view of a lightweight vessel segmentation model according to the present invention.
Detailed Description
The technical solutions of the embodiments of the present invention will be clearly and completely described below in conjunction with the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Please refer to fig. 1-2: a method for real-time cerebrovascular angiography quality assessment, comprising the steps of:
In step S1, the collected frontal-view carotid artery images are given vessel segmentation labels and quality classification labels respectively, a contrast image segmentation and classification data set is established, and training and test sets are divided in a 7:3 ratio; the training set is used for model training and the test set for evaluating model performance. The segmentation evaluation indices include Accuracy, Precision, Sensitivity, parameter count (Params), and computation (FLOPs); the classification evaluation indices include Accuracy, Precision, and Sensitivity; and the real-time performance of the whole system is evaluated by average inference time. The specific steps are as follows:
Step S1.1: collect 110 frontal-view carotid artery images and establish a contrast image data set;
Step S1.2: the segmentation and classification data sets share the same contrast images; segmentation labels are annotated manually, and the labeled regions are the vessel trunk and vessel branches;
Step S1.3: the quality classification categories are quality-qualified and quality-unqualified, where quality-unqualified mainly covers contrast agent concentration that is too high or too low, abnormal vascular structure, foreign-object artifacts, and motion artifacts;
Step S2 (see fig. 2): the images are input into the segmentation model for training, using a full-image training mode; the model comprises a lightweight feature extraction backbone, a local-global self-attention mechanism, and a feature aggregation module. The segmentation model is designed on a U-Net encoder-decoder architecture, and encoder feature extraction adopts the MBConv depthwise separable convolution blocks from EfficientNet, markedly reducing the model parameter count. To compensate for the limited feature extraction of the lightweight convolution blocks, a local-global self-attention module (L-GBlock) is introduced after each MBConv; local-global feature information modeling is realized through multi-scale window sizes at different feature layers, and as the feature map size decreases the receptive field expands from local to global, so the local continuity information and global structural information of vessels can be fully extracted. The local-global self-attention modules are distributed symmetrically at the encoder and decoder ends. A feature aggregation module (FAM) is introduced where encoder and decoder features join; it sums the encoder features with the upsampled features, computes a spatial attention matrix, applies the attention weights to the original encoder features, and concatenates the result with the upsampled features. This module extracts the information in the original image features that benefits segmentation and avoids the information redundancy and feature discrepancy of a direct skip connection.
Step S2.1, lightweight feature extraction backbone: first, contrast quality control evaluation only requires segmenting the vessel trunk and main branches, and full-image training helps the network learn the vascular structure; second, the full-image training mode needs no pre- or post-processing, and since computation grows greatly with high-resolution input, depthwise separable convolution replaces conventional convolution to reduce computation and realize a lightweight backbone. The first 7 layers of EfficientNet serve as the feature extraction backbone, reducing network parameters and computation.
Step S2.2, local-global self-attention mechanism: self-attention is introduced into the encoder and decoder, allowing vascular structure features to be fully extracted from full-image input. The Transformer models global feature interactions with self-attention, but for a two-dimensional image its computation grows quadratically with image resolution; to realize global information interaction while keeping the computation at linear complexity, a window-based self-attention mechanism is adopted;
The features are first divided into P × P windows of equal size; self-attention (with a feed-forward network, FFN) is computed within each window to obtain an attention map, which weights the original features to give global attention features; the window size is a selectable hyperparameter;
As network depth increases and feature resolution decreases, the number of windows is gradually reduced, set to 16, 8, 4, 2, and 1 respectively, so that local self-attention is performed in shallow layers and self-attention over the entire feature map in deep layers,
implementing a dynamic local-to-global self-attention mechanism. Symmetric self-attention modules are introduced at the decoder end, so that vascular structural integrity and local continuity are preserved while resolution is rebuilt; the decoder-side window counts are set to 2, 4, 8, and 16 respectively.
Step S2.3, feature aggregation module: the encoder features are weighted by spatial attention, preserving valid information. The two inputs are first reduced in channel dimension and summed; a spatial probability map is then obtained by adaptive convolution followed by Sigmoid normalization; finally, the attention-weighted features are combined with the upsampled features for the subsequent self-attention computation. The feature aggregation modules all use 1 × 1 convolutions for channel dimension transformation, so the computation is small and the module is lightweight.
Step S3: input the classification data set and the corresponding index calculation results into the quality classification model for training to obtain the final classification model, and design the quality evaluation method according to clinical quality control indices. Clinically, contrast quality is judged mainly along the dimensions of vessel development gray level, contrast agent uniformity, vascular structural integrity, and vessel shape abnormality; suitable quality control indices are designed for these dimensions, quality control regions are selected on the vessel trunk, and the quality control points used to compute the indices are then determined, as follows:
Step S3.1: the quality control regions comprise the C2-C3 and C6-C7 segments of the carotid artery trunk; these two regions reflect quality problems caused by abnormal contrast agent concentration and uniformity. A lightweight YOLOv7 object detection model is adopted to locate the quality control regions automatically, and a detection data set is established for training;
Step S3.2: the quality control point is the maximum inscribed circle of the vessel within the quality control region, located as follows: extract the vessel contour, take the midpoint of the quality control region's height as the y-coordinate of the quality control point's center, and find the quality control point within the region by the maximum inscribed circle radius method;
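The localization step can be sketched as follows: with the y-coordinate fixed at the region's height midpoint, the quality control point is the in-vessel pixel on that row farthest from the background. This brute-force pure-Python version runs over a tiny hypothetical binary mask, not real angiography data; a practical implementation would use a distance transform (e.g. OpenCV's) on the segmented crop:

```python
def inscribed_circle_on_row(mask, row):
    """On a fixed row (the midpoint of the QC region's height), find the
    vessel pixel whose distance to the nearest background pixel is largest.
    That pixel approximates the centre of the largest inscribed circle on
    the row, and the distance approximates its radius.
    mask[i][j] == 1 marks vessel pixels."""
    h, w = len(mask), len(mask[0])
    background = [(i, j) for i in range(h) for j in range(w) if mask[i][j] == 0]
    best = (0.0, None)
    for j in range(w):
        if mask[row][j] != 1:
            continue
        d = min(((row - bi) ** 2 + (j - bj) ** 2) ** 0.5 for bi, bj in background)
        if d > best[0]:
            best = (d, (row, j))
    return best  # (radius, (y, x) centre)

# Toy 7 x 9 crop with a 5-pixel-wide vertical vessel band in columns 2..6:
mask = [[1 if 2 <= j <= 6 else 0 for j in range(9)] for _ in range(7)]
radius, centre = inscribed_circle_on_row(mask, row=3)
```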
Step S3.3: the whole-vessel area and gray mean, the gray mean and variance at the quality control points, and the number of detected quality control regions are selected as quality control indices. The whole-vessel area and gray mean are the pixel count and mean gray level of the segmented vessel region; the gray mean and variance at the quality control points are computed over the pixels inside each quality control point's inscribed circle; and the number of detected quality control regions is the number of regions found by the object detection model;
Step S3.4: the quality classification model adopts a random forest algorithm. A random forest consists of multiple decision trees whose votes produce the final prediction, which reduces randomness and makes the classification result more stable. The random forest training and test data use the quality classification data set and the corresponding index data, where the index data come from the index calculation results on the classification data set's images.
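The voting step of the random forest can be sketched with toy decision stumps; the thresholds and feature names below are hypothetical, and a real implementation would train decision trees on the index data (e.g. with scikit-learn's RandomForestClassifier):

```python
from collections import Counter

def forest_predict(trees, features):
    """Majority vote over an ensemble, as in a random-forest classifier.
    Each 'tree' here is just a callable mapping the quality control index
    vector to 'pass' or 'fail'; real trees would be learned from data."""
    votes = Counter(tree(features) for tree in trees)
    return votes.most_common(1)[0][0]

# Three toy stumps, each thresholding one index (hypothetical thresholds):
trees = [
    lambda f: "pass" if f["vessel_area"] > 5000 else "fail",
    lambda f: "pass" if f["qc_gray_mean"] < 120 else "fail",
    lambda f: "pass" if f["qc_regions_found"] == 2 else "fail",
]
label = forest_predict(trees, {"vessel_area": 6200,
                               "qc_gray_mean": 150,
                               "qc_regions_found": 2})
```

Because the final label comes from a vote, a single noisy index (here the gray mean) does not flip the decision, which is the stability property the method relies on.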
Experimental example: experiments are carried out on a test set of a self-built cerebral angiography quality classification data set, the classification accuracy of the cerebral angiography quality control method based on vascular segmentation reaches 84.6%, and the average execution time is 0.87s. A comparison experiment is carried out on a test set of a self-built brain angiography segmentation data set, the segmentation accuracy of a designed lightweight segmentation network reaches 98.0%, and the segmentation accuracy is almost equivalent to the performance of an advanced segmentation network. The number of the segmentation network parameters is only 2.89MB, which is far smaller than the current advanced segmentation model, and the FPS is inferred to be 27, so that the real-time segmentation performance is achieved. The ablation experiment proves that the proposed local-global self-attention mechanism and the feature fusion module have obvious effect on improving the network performance.
The beneficial effects of the invention are as follows:
The real-time cerebral angiography quality assessment method locates image quality control points automatically, making it suitable for deep learning classification methods that require large amounts of training data. It introduces a lightweight real-time segmentation network based on U-Net that uses depthwise separable convolutions with a smaller computational cost. For the segmentation difficulties of DSA angiography, a window-based local-global self-attention mechanism preserves linear complexity while attending to the relationships among all parts of the vascular structure, effectively improving segmentation accuracy. Further, a feature aggregation module filters encoder features with an attention mechanism, making feature aggregation more efficient, improving the accuracy of the extracted image features, and thereby improving the assessment of cerebral angiography quality.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. A real-time cerebral angiography quality assessment method, comprising the steps of:
Step S1, collect anteroposterior internal carotid artery images, perform vessel segmentation annotation and quality classification annotation respectively, build angiography image segmentation and classification datasets, and split them into training and test sets, specifically as follows;
Step S1.1, collect 110 anteroposterior carotid artery images and build an angiography image dataset;
Step S1.2, the segmentation and classification datasets share the same angiography images; segmentation annotation is done manually, and the annotated regions are the vessel trunk and vessel branches;
Step S1.3, the quality classification categories are quality-qualified and quality-unqualified, where unqualified cases mainly include excessively high or low contrast agent concentration, abnormal vascular structure, foreign-object artifacts, and motion artifacts;
And S2, input the images into the segmentation model for training, using a full-image training mode. The segmentation model is improved from U-Net and comprises a lightweight feature extraction backbone, a local-global self-attention mechanism, and a feature aggregation module;
Step S2.1, lightweight feature extraction backbone: first, angiography quality control assessment only requires segmenting the vessel trunk and main branches, and full-image training helps the network learn the vascular structure; second, full-image training needs no pre- or post-processing, and since computation grows sharply with high-resolution input, depthwise separable convolutions replace standard convolutions to reduce computation and realize a lightweight backbone;
Step S2.2, local-global self-attention mechanism: self-attention is introduced in both the encoder and the decoder, so that vascular structure features can be fully extracted from full-image input;
Step S2.3, feature aggregation module: the encoder features are weighted by spatial attention to preserve valid information;
Step S3, input the classification dataset and the corresponding index calculation results into the quality classification model for training to obtain the final classification model. The quality assessment method is designed according to clinical quality control indices: clinically, angiography quality is mainly judged along several dimensions, namely vessel development gray level, contrast agent uniformity, vascular structural integrity, and vessel shape abnormality. Suitable quality control indices are designed for these dimensions; quality control regions are selected on the vessel trunk, and then the quality control points used to compute the indices are determined, as follows;
S3.1, the quality control regions comprise the C2-C3 and C6-C7 segments of the carotid artery trunk; these two regions reflect quality problems caused by abnormal contrast agent concentration and uniformity. A lightweight YOLOv7 object detection model automatically locates the quality control regions, and a detection dataset is built for its training;
Step S3.2, a quality control point is the maximum inscribed circle of the vessel within a quality control region, located as follows: extract the vessel contour, take the midpoint of the quality control region's height as the y-coordinate of the quality control point's center, and find the quality control point within the region by the maximum inscribed-circle radius method;
And S3.3, the overall vessel area and gray-level mean, the gray-level mean and variance at the quality control points, and the number of detected quality control regions are selected as quality control indices. The overall vessel area and gray-level mean are the pixel count and mean gray value of the segmented vessel region; the gray-level mean and variance at the quality control points are computed over the pixels inside each quality control point's inscribed circle; and the number of detected quality control regions is the count of regions returned by the object detection model;
And S3.4, the quality classification model adopts a random forest algorithm. A random forest consists of multiple decision trees, and the final prediction is produced by voting over the trees' results, which reduces the effect of randomness and makes the classification more stable. The random forest training and test sets use the quality classification dataset together with the corresponding index data, where the index data come from the index calculations on the images of the classification dataset.
2. The method according to claim 1, wherein in step S1, the training and test sets are split at a 7:3 ratio; the training set is used for model training and the test set for evaluating model performance. The segmentation evaluation indices comprise Accuracy, Precision, and Sensitivity, together with parameter count Params and computation FLOPs; the classification evaluation indices comprise Accuracy, Precision, and Sensitivity; and the real-time performance of the whole system is evaluated by average inference time.
3. The method according to claim 1, wherein in step S2, the segmentation model is based on a U-Net encoder-decoder architecture, and encoder feature extraction uses the MBConv depthwise separable convolution blocks from EfficientNet to reduce model parameters;
After each MBConv, a local-global self-attention module (L-G Block) is introduced; multi-scale window sizes at different feature levels model local-to-global feature information, and as the feature map shrinks the receptive field grows from local to global, so that both the local continuity information and the global structural information of the vessels are fully extracted;
The local-global self-attention modules are symmetrically placed at the encoder and decoder ends. A feature aggregation module (FAM) is introduced where encoder and decoder features join: the encoder features and the upsampled features are summed and a spatial attention matrix is computed; finally the original encoder features are attention-weighted and concatenated with the upsampled features.
4. The method according to claim 1, wherein in step S2.1, the first 7 layers of EfficientNet are adopted as the feature extraction backbone to reduce network parameters and computation.
5. The method according to claim 1, wherein in step S2.2, the Transformer first used a self-attention mechanism to model global feature interactions, but for two-dimensional images the computation of self-attention grows quadratically with image resolution; to achieve global information interaction while keeping the computation at linear complexity, a window-based self-attention mechanism is adopted;
The features are first divided into P×P windows of equal size; self-attention (followed by an FFN) is computed within each window to obtain an attention map, which weights the original features to produce globally attended features; the window size is a selectable hyperparameter;
As the network deepens and the feature resolution decreases, the number of windows is gradually reduced, set to 16, 8, 4, 2, and 1 respectively, so that shallow layers perform local self-attention and deep layers perform self-attention over the entire feature map,
implementing a dynamic local-to-global self-attention mechanism; symmetric self-attention modules are introduced at the decoder end, ensuring vascular structural integrity and local continuity while the resolution is rebuilt, with the number of windows at the decoder end set to 2, 4, 8, and 16 respectively.
6. The method according to claim 1, wherein in step S2.3, the two feature maps are first channel-reduced and summed; then an adaptive convolution followed by Sigmoid normalization yields a spatial probability map; finally the attention-weighted features are combined with the upsampled features for the subsequent self-attention computation. The feature aggregation modules all use 1×1 convolutions for channel dimension transformation, keeping the computation small and the module lightweight.
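The window-based self-attention of claim 5 can be sketched as follows. This is a simplified single-head version without learned projections or the FFN; shapes and names are illustrative only:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_self_attention(feat, num_windows):
    """Self-attention computed independently inside each of the
    num_windows x num_windows equal windows of an (H, W, C) feature map.

    With num_windows > 1 this is local attention (linear in image size for
    a fixed window size); num_windows == 1 attends over the whole map.
    """
    H, W, C = feat.shape
    h, w = H // num_windows, W // num_windows
    out = np.empty_like(feat)
    for i in range(num_windows):
        for j in range(num_windows):
            win = feat[i*h:(i+1)*h, j*w:(j+1)*w].reshape(-1, C)  # window tokens
            attn = softmax(win @ win.T / np.sqrt(C))             # (hw, hw) attention map
            out[i*h:(i+1)*h, j*w:(j+1)*w] = (attn @ win).reshape(h, w, C)
    return out
```

Applied with num_windows = 16, 8, 4, 2, 1 through the encoder stages and 2, 4, 8, 16 at the decoder, as the claims describe, the mechanism shifts from local to global attention as the feature resolution falls.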
CN202410114418.9A 2024-01-29 2024-01-29 Real-time cerebral angiography quality assessment method Pending CN118071688A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410114418.9A CN118071688A (en) 2024-01-29 2024-01-29 Real-time cerebral angiography quality assessment method


Publications (1)

Publication Number Publication Date
CN118071688A true CN118071688A (en) 2024-05-24

Family

ID=91099831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410114418.9A Pending CN118071688A (en) 2024-01-29 2024-01-29 Real-time cerebral angiography quality assessment method

Country Status (1)

Country Link
CN (1) CN118071688A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118279158A (en) * 2024-06-03 2024-07-02 之江实验室 Quality improvement method and device for magnetic resonance brain image and computer equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination