CN116580194B - Blood vessel segmentation method of soft attention network fused with geometric information - Google Patents

Blood vessel segmentation method of soft attention network fused with geometric information

Info

Publication number
CN116580194B
Authority
CN
China
Prior art keywords
layer
convolution
map
feature map
input
Prior art date
Legal status
Active
Application number
CN202310485605.3A
Other languages
Chinese (zh)
Other versions
CN116580194A (en)
Inventor
陈达
韩孝兴
舒明雷
刘丽
李焕春
Current Assignee
Qilu University of Technology
Shandong Institute of Artificial Intelligence
Original Assignee
Qilu University of Technology
Shandong Institute of Artificial Intelligence
Priority date
Filing date
Publication date
Application filed by Qilu University of Technology and Shandong Institute of Artificial Intelligence
Priority to CN202310485605.3A
Publication of CN116580194A
Application granted
Publication of CN116580194B


Classifications

    • G06V10/26 — Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/806 — Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V10/82 — Image or video recognition or understanding using neural networks
    • G06N3/0455 — Auto-encoder networks; encoder-decoder networks
    • G06N3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N3/048 — Activation functions
    • G06N3/08 — Learning methods


Abstract

A blood vessel segmentation method of a soft attention network fused with geometric information uses a geodesic distance map to obtain vascular structure information, a level-set function to obtain the position of each point on the blood vessel image, and a Heaviside function to generate a weight distribution over the points inside the vessel. The weight distribution and the attention distribution are then combined, fusing the geometric information into a soft attention network module. This blood vessel segmentation method, which fuses geometric information into a soft attention network, suppresses segmentation leakage well, resolves vessel discontinuities that arise during segmentation, and makes vessel segmentation faster and more accurate.

Description

Blood vessel segmentation method of soft attention network fused with geometric information
Technical Field
The invention relates to the technical field of computer vision, and in particular to a blood vessel segmentation method of a soft attention network fused with geometric information.
Background
Vessel segmentation is one of the important tasks in medical image analysis and can assist doctors in disease diagnosis and treatment planning. The task is very challenging because vessels in medical images have complex morphology and varying sizes, and touch or intersect surrounding tissue. Over the past few decades, many conventional image segmentation methods have been applied to vessel segmentation, such as thresholding, region growing and edge detection. However, these conventional methods are limited by image noise, intensity inhomogeneity and complex vessel morphology, leading to unstable segmentation and low accuracy.
Deep learning has strong feature-learning ability and good generalization performance; it can automatically learn the semantic information of an image and accurately classify and segment the target pixels. Olaf Ronneberger et al. (ref: Ronneberger, O., Fischer, P., Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab, N., Hornegger, J., Wells, W., Frangi, A. (eds) Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015. Lecture Notes in Computer Science, vol 9351. Springer, Cham.) employ convolutional neural networks in the encoder portion of the U-Net network to gradually reduce the feature-map size while deepening the features, extracting high-level semantic information from the image. Skip connections effectively combine local and global information, helping the network better understand the context of the input image and improving segmentation accuracy. This design not only avoids information loss and repeated computation, but also improves the robustness and generalization ability of the network, reducing redundancy in the model and improving segmentation precision. Zhou et al. (ref: Zhou, Z., Rahman Siddiquee, M.M., Tajbakhsh, N., Liang, J. (2018). UNet++: A Nested U-Net Architecture for Medical Image Segmentation. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. DLMIA ML-CDS 2018. Lecture Notes in Computer Science, vol 11045. Springer, Cham.) redesigned the skip pathways on a U-Net basis, reducing the semantic gap between the encoder and decoder sub-networks. One of the main remaining problems is that the U-Net network may generate excessive overlapping regions, which increases the computational complexity of the model and produces highly redundant output, reducing the model's efficiency and accuracy. In addition, the U-Net network may run short of storage and computing resources when processing very large images, increasing training time and degrading performance. To address these problems, researchers have proposed improved U-Net structures and algorithms: new convolution operations or batch normalization can reduce the computational complexity and storage requirements of the network, while residual or skip connections can effectively reduce redundancy and improve model performance. However, for images with curved structures, irregular shapes, or multiple objects, the U-Net network still often fails to achieve accurate segmentation.
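To make the encoder-decoder structure discussed above concrete, the following minimal PyTorch sketch shows how a skip connection concatenates a high-resolution encoder feature with an upsampled decoder feature. It is an illustration only; all module names and channel widths are assumptions, not the patent's network.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy U-Net-style block: encode, downsample, upsample, skip-concat, decode."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        # After concatenation the decoder sees encoder + upsampled channels (16 + 16).
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())

    def forward(self, x):
        e = self.enc(x)                    # local, high-resolution features
        m = self.mid(self.pool(e))         # global, low-resolution features
        u = self.up(m)                     # restore the spatial size of e
        return self.dec(torch.cat([e, u], dim=1))  # skip connection by concat
```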
Vessels usually appear as curvilinear structures in medical images. Conventional segmentation methods are limited by image noise, complex vessel morphology and similar problems, so their accuracy is low; U-Net also has limitations on this type of image, such as insufficiently smooth segmentation results and inaccurate edge positions.
Disclosure of Invention
To overcome the shortcomings of the above technology, the invention provides a blood vessel segmentation method of a soft attention network fused with geometric information that improves the precision and speed of image segmentation.
The technical scheme adopted to overcome the above technical problems is as follows:
A blood vessel segmentation method of a soft attention network fused with geometric information, comprising the following steps:
a) Coronary vessel images of n patients are collected to obtain a data set I, I = {I_1, I_2, ..., I_i, ..., I_n}, where I_i is the coronary vessel image of the i-th patient, i ∈ {1, 2, ..., n};
b) The data set I is preprocessed to obtain a preprocessed data set I_f, I_f = {I_f1, I_f2, ..., I_fi, ..., I_fn}, where I_fi is the preprocessed coronary vessel image of the i-th patient;
c) The preprocessed data set I_f is divided into a training set train, a validation set val and a test set test;
d) A network structure composed of an encoder, an intermediate structure layer and a decoder is set up; the preprocessed coronary vessel image I_fi of the i-th patient in the training set train is input into the encoder of the network structure, and the feature map I_fm^4 is output;
e) The feature map I_fm^4 is input into the intermediate structure layer of the network structure, and the feature map I_fc^1 is output;
f) The feature map I_fc^1 is input into the decoder, and the segmented image I_end is output;
g) The loss function L is calculated;
h) Using AdamW as the optimizer, the network structure in step d) is optimized by back propagation according to the loss function L to obtain an optimized network structure;
i) The preprocessed coronary vessel image of the i-th patient in the test set test is input into the optimized network structure to obtain the segmented image I_end.
Further, in step a), coronary vessel images of 200 patients are collected from the public challenge on automatic region-based coronary artery disease diagnostics using X-ray angiography images to obtain the data set I; in step b), the Augmentor package is imported in python and used to apply rotation, elastic deformation, brightness enhancement and contrast enhancement to the data set I in sequence, obtaining an enhanced data set I', I' = {I'_1, I'_2, ..., I'_i, ..., I'_n}; an overlap-tile strategy is then applied to the enhanced data set I' to obtain the preprocessed data set I_f.
Preferably, the preprocessed data set I_f is divided into the training set train, validation set val and test set test in a 6:2:2 ratio.
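A hedged sketch of this preprocessing and split is given below, using the Augmentor package named above. The directory names, probabilities, sample count and enhancement factors are assumptions, and the overlap-tile cropping step is omitted.

```python
import glob
import random
import Augmentor

# Augmentation of the data set I (rotation, elastic deformation,
# brightness and contrast enhancement), per step b).
p = Augmentor.Pipeline("data/coronary_images")   # hypothetical source folder
p.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
p.random_distortion(probability=0.5, grid_width=4, grid_height=4,
                    magnitude=4)                 # elastic deformation
p.random_brightness(probability=0.5, min_factor=0.8, max_factor=1.2)
p.random_contrast(probability=0.5, min_factor=0.8, max_factor=1.2)
p.sample(1000)                                   # augmented images land in ./output

# 6:2:2 split of the preprocessed data set I_f into train/val/test, per step c).
files = sorted(glob.glob("data/coronary_images/output/*.png"))
random.seed(0)
random.shuffle(files)
n = len(files)
train = files[: int(0.6 * n)]
val = files[int(0.6 * n): int(0.8 * n)]
test = files[int(0.8 * n):]
```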
Further, step d) comprises the steps of:
d-1) the encoder is composed of a first convolution unit, a second convolution unit, a first maximum pooling layer, a third convolution unit, a fourth convolution unit, a second maximum pooling layer, a fifth convolution unit, a sixth convolution unit, a third maximum pooling layer, a seventh convolution unit, an eighth convolution unit and a fourth maximum pooling layer;
d-2) The first convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the preprocessed coronary vessel image I_fi of the i-th patient in the training set train is input into the first convolution unit, and the feature map I_fi^{1-1} is output;
d-3) the second convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fi^{1-1} is input into the second convolution unit, and the feature map I_fe^1 is output;
d-4) the feature map I_fe^1 is input into the first maximum pooling layer of the encoder, and the feature map I_fm^1 is output;
d-5) the third convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fm^1 is input into the third convolution unit, and the feature map I_fe^{2-1} is output;
d-6) the fourth convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fe^{2-1} is input into the fourth convolution unit, and the feature map I_fe^2 is output;
d-7) the feature map I_fe^2 is input into the second maximum pooling layer of the encoder, and the feature map I_fm^2 is output;
d-8) the fifth convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fm^2 is input into the fifth convolution unit, and the feature map I_fe^{3-1} is output;
d-9) the sixth convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fe^{3-1} is input into the sixth convolution unit, and the feature map I_fe^3 is output;
d-10) the feature map I_fe^3 is input into the third maximum pooling layer of the encoder, and the feature map I_fm^3 is output;
d-11) the seventh convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fm^3 is input into the seventh convolution unit, and the feature map I_fe^{4-1} is output;
d-12) the eighth convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fe^{4-1} is input into the eighth convolution unit, and the feature map I_fe^4 is output;
d-13) the feature map I_fe^4 is input into the fourth maximum pooling layer of the encoder, and the feature map I_fm^4 is output.
Preferably, the convolution layers of the eight convolution units in steps d-2), d-3), d-5), d-6), d-8), d-9), d-11) and d-12) each have a 3×3 convolution kernel, a stride of 1×1 and a padding of 0; the four maximum pooling layers in steps d-4), d-7), d-10) and d-13) each use a 2×2 pooling window.
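All eight encoder convolution units follow the same pattern, so one stage suffices as an illustration. The sketch below follows the stated configuration (3×3 convolution with stride 1 and padding 0, BatchNorm, Dropout, Relu, then 2×2 max pooling); the channel widths and the dropout rate are assumptions, since the patent does not specify them.

```python
import torch.nn as nn

def conv_unit(in_ch, out_ch, p_drop=0.1):
    """Convolution unit: Conv(3x3, stride 1, padding 0) -> BatchNorm -> Dropout -> Relu."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=0),
        nn.BatchNorm2d(out_ch),
        nn.Dropout2d(p_drop),            # dropout rate is an assumption
        nn.ReLU(inplace=True),
    )

class EncoderStage(nn.Module):
    """Two convolution units followed by a 2x2 max pooling layer, e.g. steps d-2) to d-4)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = conv_unit(in_ch, out_ch)
        self.conv2 = conv_unit(out_ch, out_ch)
        self.pool = nn.MaxPool2d(kernel_size=2)

    def forward(self, x):
        f = self.conv2(self.conv1(x))    # pre-pooling feature, e.g. I_fe^1
        return f, self.pool(f)           # keep f for the skip path; pooled is I_fm^1
```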
Further, step e) comprises the steps of:
e-1) the intermediate structure layer is composed of a first convolution unit and a second convolution unit;
e-2) The first convolution unit of the intermediate structure layer is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fm^4 is input into the first convolution unit, and the feature map I_fc^{1-1} is output;
e-3) the second convolution unit of the intermediate structure layer is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fc^{1-1} is input into the second convolution unit, and the feature map I_fc^1 is output.
Preferably, the convolution layers of both convolution units in steps e-2) and e-3) have a 3×3 convolution kernel, a stride of 1×1 and a padding of 0.
Further, step f) comprises the steps of:
f-1) The decoder is composed of a first soft attention module fusing geometric information, a first upsampling layer, a first feature fusion layer, a first convolution unit, a second convolution unit, a second soft attention module fusing geometric information, a second upsampling layer, a second feature fusion layer, a third convolution unit, a fourth convolution unit, a third soft attention module fusing geometric information, a third upsampling layer, a third feature fusion layer, a fifth convolution unit, a sixth convolution unit, a fourth soft attention module fusing geometric information, a fourth upsampling layer, a fourth feature fusion layer, a seventh convolution unit, an eighth convolution unit, a ninth convolution unit and a tenth convolution unit;
f-2) The first soft attention module of the decoder fusing geometric information is composed of a first convolution layer, a second convolution layer, a Relu activation function, a third convolution layer, a Sigmoid function, a geometric information calculation layer and an upsampling layer. The feature map I_fc^1 is input into the first convolution layer, and the feature map I_fd^{1-1-1} is output; the feature map I_fe^4 is input into the second convolution layer, and the feature map I_fd^{1-1-2} is output; the feature maps I_fd^{1-1-1} and I_fd^{1-1-2} are added to obtain the feature map I_fd^{1-1-3}; the feature map I_fd^{1-1-3} is input into the Relu activation function, and the feature map I_fd^{1-1-4} is output; the feature map I_fd^{1-1-4} is input into the third convolution layer, and the feature map I_fd^{1-1-5} is output; the feature map I_fd^{1-1-5} is input into the Sigmoid function, and the feature map I_fd^{1-1-6} is output. The feature map I_fd^{1-1-6} is input into the geometric information calculation layer, where a threshold method divides the points of I_fd^{1-1-6} into a foreground seed point set S_1 and a background seed point set R_1; for the foreground seed point set S_1 the geodesic distance map D_s(x_1) is computed with a fast marching algorithm, and for the background seed point set R_1 the geodesic distance map U_r(x_1) is computed with a fast marching algorithm, where x_1 is a feature value of the feature map I_fd^{1-1-6}, x_1 ∈ Ω_1, and Ω_1 is the image domain. Subtracting the values of corresponding pixels of the geodesic distance maps D_s(x_1) and U_r(x_1) gives the geodesic distance map M(x_1) = D_s(x_1) − U_r(x_1): when M(x_1) < 0 the point of the image domain Ω_1 is inside the vessel, denoted Ω_1^in; when M(x_1) > 0 the point is outside the vessel, denoted Ω_1^out; when M(x_1) = 0 the point is on the vessel wall, denoted ∂Ω_1. The level-set function is computed by the formula φ(x_1) = min_{y_1 ∈ ∂Ω_1} ||x_1 − y_1||_2 for x_1 ∈ Ω_1^in, φ(x_1) = 0 for x_1 ∈ ∂Ω_1, and φ(x_1) = −min_{y_1 ∈ ∂Ω_1} ||x_1 − y_1||_2 for x_1 ∈ Ω_1^out, where y_1 is a point on the vessel boundary and ||·||_2 is the Euclidean distance. The level-set function φ(x_1) is input into a smooth Heaviside function, and the feature map I_fd^{1-1-7} is computed by the formula I_fd^{1-1-7} = (1/2)(1 + (2/π) arctan(φ(x_1)/k)), where k is a constant. The feature maps I_fd^{1-1-6} and I_fd^{1-1-7} are feature-fused to obtain the feature map I_fd^{1-1-8}; the feature map I_fd^{1-1-8} is input into the upsampling layer, and the feature map I_fd^{1-1-9} is output; the feature maps I_fe^4 and I_fd^{1-1-9} are feature-fused to obtain the feature map I_fd^{1-1};
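A hedged numpy sketch of the geometric information calculation layer follows: threshold the attention map into seed sets, compute the two geodesic distance maps by fast marching, form M = D_s − U_r, convert the resulting partition into a signed level-set function, and apply the smooth Heaviside. The threshold value, the fast-marching speed function, the sign convention of φ and the constant k are all assumptions, and scikit-fmm merely stands in for an unspecified fast marching implementation.

```python
import numpy as np
import skfmm                      # pip install scikit-fmm
from scipy import ndimage

def fast_marching_distance(seed_mask, attention):
    """Geodesic arrival time from the seed set; travel is slower across strong edges."""
    phi = np.where(seed_mask, -1.0, 1.0)       # zero contour at the seed boundary
    gy, gx = np.gradient(attention)
    speed = 1.0 / (1.0 + np.hypot(gx, gy))     # assumed image-dependent speed
    return np.asarray(skfmm.travel_time(phi, speed))

def geometric_weights(attention, tau=0.5, k=1.0):
    """Weight distribution (feature map I_fd^{x-1-7}) from a 2-D attention map."""
    fg = attention >= tau                      # foreground seed point set S
    bg = attention < tau                       # background seed point set R
    D_s = fast_marching_distance(fg, attention)
    U_r = fast_marching_distance(bg, attention)
    M = D_s - U_r                              # < 0 inside, > 0 outside, = 0 on wall
    inside = M < 0
    # Signed distance to the vessel wall {M = 0}; positive inside (assumed sign).
    phi = (ndimage.distance_transform_edt(inside)
           - ndimage.distance_transform_edt(~inside))
    # Smooth Heaviside: weights close to 1 inside the vessel, close to 0 outside.
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / k))
```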
f-3) The feature map I_fc^1 is input into the first upsampling layer of the decoder, and the feature map I_fu^1 is output;
f-4) The feature maps I_fu^1 and I_fd^{1-1} are input into the first feature fusion layer of the decoder, and the feature map I_fd^{1-2} is output;
f-5) The first convolution unit of the decoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fd^{1-2} is input into the first convolution unit, and the feature map I_fd^{1-2-1} is output;
f-6) The second convolution unit of the decoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fd^{1-2-1} is input into the second convolution unit, and the feature map I_fd^1 is output;
f-7) The second soft attention module of the decoder fusing geometric information is composed of a first convolution layer, a second convolution layer, a Relu activation function, a third convolution layer, a Sigmoid function, a geometric information calculation layer and an upsampling layer. The feature map I_fd^1 is input into the first convolution layer, and the feature map I_fd^{2-1-1} is output; the feature map I_fe^3 is input into the second convolution layer, and the feature map I_fd^{2-1-2} is output; the feature maps I_fd^{2-1-1} and I_fd^{2-1-2} are added to obtain the feature map I_fd^{2-1-3}; the feature map I_fd^{2-1-3} is input into the Relu activation function, and the feature map I_fd^{2-1-4} is output; the feature map I_fd^{2-1-4} is input into the third convolution layer, and the feature map I_fd^{2-1-5} is output; the feature map I_fd^{2-1-5} is input into the Sigmoid function, and the feature map I_fd^{2-1-6} is output. The feature map I_fd^{2-1-6} is input into the geometric information calculation layer, where a threshold method divides the points of I_fd^{2-1-6} into a foreground seed point set S_2 and a background seed point set R_2; for the foreground seed point set S_2 the geodesic distance map D_s(x_2) is computed with a fast marching algorithm, and for the background seed point set R_2 the geodesic distance map U_r(x_2) is computed with a fast marching algorithm, where x_2 is a feature value of the feature map I_fd^{2-1-6}, x_2 ∈ Ω_2, and Ω_2 is the image domain. Subtracting the values of corresponding pixels of the geodesic distance maps D_s(x_2) and U_r(x_2) gives the geodesic distance map M(x_2) = D_s(x_2) − U_r(x_2): when M(x_2) < 0 the point of the image domain Ω_2 is inside the vessel, denoted Ω_2^in; when M(x_2) > 0 the point is outside the vessel, denoted Ω_2^out; when M(x_2) = 0 the point is on the vessel wall, denoted ∂Ω_2. The level-set function is computed by the formula φ(x_2) = min_{y_2 ∈ ∂Ω_2} ||x_2 − y_2||_2 for x_2 ∈ Ω_2^in, φ(x_2) = 0 for x_2 ∈ ∂Ω_2, and φ(x_2) = −min_{y_2 ∈ ∂Ω_2} ||x_2 − y_2||_2 for x_2 ∈ Ω_2^out, where y_2 is a point on the vessel boundary. The level-set function φ(x_2) is input into the smooth Heaviside function, and the feature map I_fd^{2-1-7} is computed by the formula I_fd^{2-1-7} = (1/2)(1 + (2/π) arctan(φ(x_2)/k)). The feature maps I_fd^{2-1-6} and I_fd^{2-1-7} are feature-fused to obtain the feature map I_fd^{2-1-8}; the feature map I_fd^{2-1-8} is input into the upsampling layer, and the feature map I_fd^{2-1-9} is output; the feature maps I_fe^3 and I_fd^{2-1-9} are feature-fused to obtain the feature map I_fd^{2-1};
f-8) The feature map I_fd^1 is input into the second upsampling layer of the decoder, and the feature map I_fu^2 is output;
f-9) The feature maps I_fu^2 and I_fd^{2-1} are input into the second feature fusion layer of the decoder, and the feature map I_fd^{2-2} is output;
f-10) The third convolution unit of the decoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fd^{2-2} is input into the third convolution unit, and the feature map I_fd^{2-2-1} is output;
f-11) The fourth convolution unit of the decoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fd^{2-2-1} is input into the fourth convolution unit, and the feature map I_fd^2 is output;
f-12) The third soft attention module of the decoder fusing geometric information is composed of a first convolution layer, a second convolution layer, a Relu activation function, a third convolution layer, a Sigmoid function, a geometric information calculation layer and an upsampling layer. The feature map I_fd^2 is input into the first convolution layer, and the feature map I_fd^{3-1-1} is output; the feature map I_fe^2 is input into the second convolution layer, and the feature map I_fd^{3-1-2} is output; the feature maps I_fd^{3-1-1} and I_fd^{3-1-2} are added to obtain the feature map I_fd^{3-1-3}; the feature map I_fd^{3-1-3} is input into the Relu activation function, and the feature map I_fd^{3-1-4} is output; the feature map I_fd^{3-1-4} is input into the third convolution layer, and the feature map I_fd^{3-1-5} is output; the feature map I_fd^{3-1-5} is input into the Sigmoid function, and the feature map I_fd^{3-1-6} is output. The feature map I_fd^{3-1-6} is input into the geometric information calculation layer, where a threshold method divides the points of I_fd^{3-1-6} into a foreground seed point set S_3 and a background seed point set R_3; for the foreground seed point set S_3 the geodesic distance map D_s(x_3) is computed with a fast marching algorithm, and for the background seed point set R_3 the geodesic distance map U_r(x_3) is computed with a fast marching algorithm, where x_3 is a feature value of the feature map I_fd^{3-1-6}, x_3 ∈ Ω_3, and Ω_3 is the image domain. Subtracting the values of corresponding pixels of the geodesic distance maps D_s(x_3) and U_r(x_3) gives the geodesic distance map M(x_3) = D_s(x_3) − U_r(x_3): when M(x_3) < 0 the point of the image domain Ω_3 is inside the vessel, denoted Ω_3^in; when M(x_3) > 0 the point is outside the vessel, denoted Ω_3^out; when M(x_3) = 0 the point is on the vessel wall, denoted ∂Ω_3. The level-set function is computed by the formula φ(x_3) = min_{y_3 ∈ ∂Ω_3} ||x_3 − y_3||_2 for x_3 ∈ Ω_3^in, φ(x_3) = 0 for x_3 ∈ ∂Ω_3, and φ(x_3) = −min_{y_3 ∈ ∂Ω_3} ||x_3 − y_3||_2 for x_3 ∈ Ω_3^out, where y_3 is a point on the vessel boundary. The level-set function φ(x_3) is input into the smooth Heaviside function, and the feature map I_fd^{3-1-7} is computed by the formula I_fd^{3-1-7} = (1/2)(1 + (2/π) arctan(φ(x_3)/k)). The feature maps I_fd^{3-1-6} and I_fd^{3-1-7} are feature-fused to obtain the feature map I_fd^{3-1-8}; the feature map I_fd^{3-1-8} is input into the upsampling layer, and the feature map I_fd^{3-1-9} is output; the feature maps I_fe^2 and I_fd^{3-1-9} are feature-fused to obtain the feature map I_fd^{3-1};
f-13) The feature map I_fd^2 is input into the third upsampling layer of the decoder, and the feature map I_fu^3 is output;
f-14) The feature maps I_fu^3 and I_fd^{3-1} are input into the third feature fusion layer of the decoder, and the feature map I_fd^{3-2} is output;
f-15) The fifth convolution unit of the decoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fd^{3-2} is input into the fifth convolution unit, and the feature map I_fd^{3-2-1} is output;
f-16) The sixth convolution unit of the decoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fd^{3-2-1} is input into the sixth convolution unit, and the feature map I_fd^3 is output;
f-17) The fourth soft attention module of the decoder fusing geometric information is composed of a first convolution layer, a second convolution layer, a Relu activation function, a third convolution layer, a Sigmoid function, a geometric information calculation layer and an upsampling layer. The feature map I_fd^3 is input into the first convolution layer, and the feature map I_fd^{4-1-1} is output; the feature map I_fe^1 is input into the second convolution layer, and the feature map I_fd^{4-1-2} is output; the feature maps I_fd^{4-1-1} and I_fd^{4-1-2} are added to obtain the feature map I_fd^{4-1-3}; the feature map I_fd^{4-1-3} is input into the Relu activation function, and the feature map I_fd^{4-1-4} is output; the feature map I_fd^{4-1-4} is input into the third convolution layer, and the feature map I_fd^{4-1-5} is output; the feature map I_fd^{4-1-5} is input into the Sigmoid function, and the feature map I_fd^{4-1-6} is output. The feature map I_fd^{4-1-6} is input into the geometric information calculation layer, where a threshold method divides the points of I_fd^{4-1-6} into a foreground seed point set S_4 and a background seed point set R_4; for the foreground seed point set S_4 the geodesic distance map D_s(x_4) is computed with a fast marching algorithm, and for the background seed point set R_4 the geodesic distance map U_r(x_4) is computed with a fast marching algorithm, where x_4 is a feature value of the feature map I_fd^{4-1-6}, x_4 ∈ Ω_4, and Ω_4 is the image domain. Subtracting the values of corresponding pixels of the geodesic distance maps D_s(x_4) and U_r(x_4) gives the geodesic distance map M(x_4) = D_s(x_4) − U_r(x_4): when M(x_4) < 0 the point of the image domain Ω_4 is inside the vessel, denoted Ω_4^in; when M(x_4) > 0 the point is outside the vessel, denoted Ω_4^out; when M(x_4) = 0 the point is on the vessel wall, denoted ∂Ω_4. The level-set function is computed by the formula φ(x_4) = min_{y_4 ∈ ∂Ω_4} ||x_4 − y_4||_2 for x_4 ∈ Ω_4^in, φ(x_4) = 0 for x_4 ∈ ∂Ω_4, and φ(x_4) = −min_{y_4 ∈ ∂Ω_4} ||x_4 − y_4||_2 for x_4 ∈ Ω_4^out, where y_4 is a point on the vessel boundary. The level-set function φ(x_4) is input into the smooth Heaviside function, and the feature map I_fd^{4-1-7} is computed by the formula I_fd^{4-1-7} = (1/2)(1 + (2/π) arctan(φ(x_4)/k)). The feature maps I_fd^{4-1-6} and I_fd^{4-1-7} are feature-fused to obtain the feature map I_fd^{4-1-8}; the feature map I_fd^{4-1-8} is input into the upsampling layer, and the feature map I_fd^{4-1-9} is output; the feature maps I_fe^1 and I_fd^{4-1-9} are feature-fused to obtain the feature map I_fd^{4-1};
f-18) The feature map I_fd^3 is input into the fourth upsampling layer of the decoder, and the feature map I_fu^4 is output;
f-19) The feature maps I_fu^4 and I_fd^{4-1} are input into the fourth feature fusion layer of the decoder, and the feature map I_fd^{4-2} is output;
f-20) The seventh convolution unit of the decoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fd^{4-2} is input into the seventh convolution unit, and the feature map I_fd^{4-2-1} is output;
f-21) The eighth convolution unit of the decoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fd^{4-2-1} is input into the eighth convolution unit, and the feature map I_fd^4 is output;
f-22) The ninth convolution unit of the decoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fd^{4-1-9} is input into the ninth convolution unit, and the feature map Q is output;
f-23) the tenth convolution unit of the decoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the feature map I_fd^4 is input into the tenth convolution unit, and the segmented image I_end is output.
Preferably, in the soft attention modules fusing geometric information in steps f-2), f-7), f-12) and f-17), the first convolution layer has a 1×1 convolution kernel, a stride of 1×1 and a padding of 0; the second convolution layer has a 1×1 convolution kernel, a stride of 2×2 and a padding of 0; the third convolution layer has a 1×1 convolution kernel, a stride of 1×1 and a padding of 0; and the upsampling layer has a 2×2 deconvolution kernel, a stride of 2×2 and a padding of 0. The first to fourth upsampling layers in steps f-3), f-8), f-13) and f-18) each have a 2×2 deconvolution kernel, a stride of 2×2 and a padding of 0. The convolution layers of the first to eighth convolution units in steps f-5), f-6), f-10), f-11), f-15), f-16), f-20) and f-21) each have a 3×3 convolution kernel, a stride of 1×1 and a padding of 0. The convolution layer of the ninth convolution unit in step f-22) has a 5×5 convolution kernel, a stride of 1×1 and a padding of 0; the convolution layer of the tenth convolution unit in step f-23) has a 1×1 convolution kernel, a stride of 1×1 and a padding of 0.
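Putting these pieces together, one soft attention module fusing geometric information can be sketched in PyTorch as below, using the kernel configuration just stated (1×1 convolutions, a stride-2 convolution on the encoder feature, and a 2×2 transposed convolution for upsampling). The channel widths, the element-wise products used for the two fusion steps, and the reuse of geometric_weights() from the earlier sketch are assumptions.

```python
import torch
import torch.nn as nn
# geometric_weights() is the numpy sketch of the geometric information
# calculation layer given earlier in this document.

class GeoSoftAttention(nn.Module):
    """Hedged sketch of one soft attention module fusing geometric information."""
    def __init__(self, gate_ch, skip_ch, inter_ch, k=1.0):
        super().__init__()
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1, stride=1)  # on I_fc/I_fd
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1, stride=2)  # on I_fe
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1, stride=1)
        self.up = nn.ConvTranspose2d(1, 1, kernel_size=2, stride=2)
        self.k = k

    def forward(self, gate, skip):
        # gate: decoder-side feature (e.g. I_fc^1); skip: encoder feature
        # (e.g. I_fe^4) at twice the spatial resolution of gate.
        a = torch.relu(self.w_g(gate) + self.w_x(skip))
        att = torch.sigmoid(self.psi(a))             # attention map, e.g. I_fd^{1-1-6}
        # Geometric weight distribution, computed per image on the detached map.
        geo = torch.stack([
            torch.from_numpy(geometric_weights(m[0].cpu().numpy(), k=self.k))
            for m in att.detach()
        ]).unsqueeze(1).float().to(att.device)
        fused = att * geo                            # assumed fusion of the two maps
        return skip * self.up(fused)                 # re-weighted skip feature I_fd^{1-1}
```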
Further, step g) comprises the steps of:
g-1) Let x_5 be a feature value on the given ground truth, x_5 ∈ Ω_5, where Ω_5 is the image domain, Ω_5^in is the set of points of Ω_5 inside the vessel, ∂Ω_5 is the set of points on the vessel wall, and Ω_5^out is the set of points outside the vessel. The level-set function is computed by the formula φ(x_5) = min_{y_5 ∈ ∂Ω_5} ||x_5 − y_5||_2 for x_5 ∈ Ω_5^in, φ(x_5) = 0 for x_5 ∈ ∂Ω_5, and φ(x_5) = −min_{y_5 ∈ ∂Ω_5} ||x_5 − y_5||_2 for x_5 ∈ Ω_5^out, where y_5 is a point on the vessel boundary. The level-set function φ(x_5) is input into the smooth Heaviside function, and the probability map Q_GT is computed by the formula Q_GT = (1/2)(1 + (2/π) arctan(φ(x_5)/k)). The loss function L_1 is computed by the formula L_1 = ||Q − Q_GT||_1, where ||·||_1 is the L1 norm;
g-2) The loss function L_2 is computed pixel-wise from the feature map Q and the probability map Q_GT, where N is the total number of pixels in the preprocessed coronary vessel image I_fi of the i-th patient, s_j is the pixel value of the j-th pixel point of the feature map Q, and g_j is the pixel value of the j-th pixel point of the probability map Q_GT;
g-3) The loss function L is computed by the formula L = L_1 + L_2 + L_BCE, where L_BCE is the cross-entropy loss.
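A hedged sketch of the combined loss follows. The L1 term and the BCE term follow the text above; the exact formula for L_2 could not be recovered from the source, so a mean squared per-pixel discrepancy over the N pixels is used here as a clearly labeled stand-in.

```python
import torch
import torch.nn.functional as F

def combined_loss(pred, target, q, q_gt):
    """pred: segmented image I_end with values in [0,1]; target: binary GT mask;
    q: feature map Q; q_gt: probability map Q_GT (Heaviside of the GT level set)."""
    l1 = torch.norm(q - q_gt, p=1)                        # L_1 = ||Q - Q_GT||_1
    l2 = torch.mean((q - q_gt) ** 2)                      # assumed stand-in for L_2
    l_bce = F.binary_cross_entropy(pred, target.float())  # L_BCE
    return l1 + l2 + l_bce
```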
The beneficial effects of the invention are as follows: a geodesic distance map is used to obtain vascular structure information, a level-set function is used to obtain the position of each point on the blood vessel image, and a Heaviside function is used to generate a weight distribution over the points inside the vessel. The weight distribution and the attention distribution are then combined, fusing the geometric information into a soft attention network module. By taking the characteristics of the vascular structure into account, region evolution from active contour model theory is integrated into the loss function, and a targeted combined loss function is redesigned. Before training, the data set is enhanced by rotation, flipping, contrast enhancement and other operations, and the enhanced data set is then fed into the network structure. The network is optimized using AdamW as the optimizer. After training is completed, the optimal weights, biases and other parameters of the network structure are saved. During vessel segmentation, the optimal weights and biases are first read from file and loaded into the network, after which the segmentation is performed. This blood vessel segmentation method, which fuses geometric information into a soft attention network, suppresses segmentation leakage well, resolves vessel discontinuities during segmentation, and makes vessel segmentation faster and more accurate.
Drawings
FIG. 1 is a network architecture diagram of the present invention;
FIG. 2 is a block diagram of a soft attention module of the present invention;
FIG. 3 is a segmentation flow chart of the present invention.
Detailed Description
The invention is further described below with reference to FIGS. 1, 2 and 3.
A blood vessel segmentation method of a soft attention network fused with geometric information comprises the following steps:
a) Coronary vessel images of n patients are collected to obtain a data set I, I = {I_1, I_2, ..., I_i, ..., I_n}, where I_i is the coronary vessel image of the i-th patient, i ∈ {1, 2, ..., n}.
b) The data set I is preprocessed to obtain a preprocessed data set I_f, I_f = {I_f1, I_f2, ..., I_fi, ..., I_fn}, where I_fi is the preprocessed coronary vessel image of the i-th patient.
c) The preprocessed data set I_f is divided into a training set train, a validation set val and a test set test.
d) A network structure composed of an encoder, an intermediate structure layer and a decoder is set up; the preprocessed coronary vessel image I_fi of the i-th patient in the training set train is input into the encoder of the network structure, and the feature map I_fm^4 is output.
e) The feature map I_fm^4 is input into the intermediate structure layer of the network structure, and the feature map I_fc^1 is output.
f) The feature map I_fc^1 is input into the decoder, and the segmented image I_end is output.
g) The loss function L is calculated.
h) Using AdamW as the optimizer, the network structure in step d) is optimized by back propagation according to the loss function L to obtain the optimized network structure. Specifically, the weights and biases in the network structure are optimized by back propagation according to the loss function L with an initial learning rate of 0.001. The optimal values of the network weights and biases are saved iteratively, a log of the training parameters is kept, and the weights and biases are stored in a bestResult.pth file.
i) The optimal weights and biases in the bestResult.pth file are loaded into the network, the preprocessed coronary vessel image of the i-th patient in the test set is read and segmented to obtain the segmented image I_end, and the segmentation result is saved as a file in png format.
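Steps h) and i) amount to a standard train/checkpoint/infer loop. The sketch below assumes a model returning both the segmented image and the feature map Q, data loaders yielding (image, ground-truth mask, Q_GT) triples, and an evaluate() validation helper; only the AdamW optimizer, the 0.001 learning rate and the bestResult.pth checkpoint name come from the text above, and combined_loss() is the earlier sketch.

```python
import torch
from torchvision.utils import save_image

def train_and_test(model, train_loader, val_loader, test_loader,
                   evaluate, num_epochs=50):
    # AdamW optimizer with the initial learning rate 0.001 stated above.
    optimizer = torch.optim.AdamW(model.parameters(), lr=0.001)
    best_val = float("inf")
    for epoch in range(num_epochs):
        model.train()
        for img, target, q_gt in train_loader:
            optimizer.zero_grad()
            pred, q = model(img)                   # segmented image and feature map Q
            loss = combined_loss(pred, target, q, q_gt)
            loss.backward()                        # back propagation
            optimizer.step()
        val_loss = evaluate(model, val_loader)     # assumed validation metric
        if val_loss < best_val:                    # checkpoint the optimal weights/biases
            best_val = val_loss
            torch.save(model.state_dict(), "bestResult.pth")
    # Inference: reload the optimal parameters and segment the test set.
    model.load_state_dict(torch.load("bestResult.pth"))
    model.eval()
    with torch.no_grad():
        for idx, (img, _, _) in enumerate(test_loader):
            pred, _ = model(img)
            save_image((pred > 0.5).float(), f"segmentation_{idx}.png")
```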
The invention analyzes the advantages of conventional methods and deep learning methods and combines them, improving the accuracy and speed of image segmentation. Integrating the geometric information of the vessels into the deep learning network makes better use of the regional and morphological characteristics of the vessels and improves segmentation quality. Moreover, deep learning methods for image segmentation usually require a large amount of training data; fusing geometric information reduces the amount of data needed for training and thus improves training efficiency. Overall, fusing vascular geometric information effectively improves the application of deep learning networks to image segmentation, raises segmentation accuracy and efficiency, and addresses the facts that deep learning networks need large training data sets and lack vascular geometric information during feature extraction. The soft attention module fusing geometric information is embedded into the U-Net backbone network and, together with the improved loss function, suppresses segmentation leakage well and resolves problems in the segmentation process such as outliers, discontinuous segmentation, incomplete segmentation and uninformative segmentation results.
Example 1:
In step a), coronary vessel images of 200 patients are collected from the public challenge on automatic region-based coronary artery disease diagnostics using X-ray angiography images to obtain the data set I. In step b), the Augmentor package is imported in python and used to apply rotation, elastic deformation, brightness enhancement and contrast enhancement to the data set I in sequence, obtaining an enhanced data set I', I' = {I'_1, I'_2, ..., I'_i, ..., I'_n}. An overlap-tile strategy is then applied to the enhanced data set I' to obtain the preprocessed data set I_f.
Example 2:
The preprocessed data set I_f is divided into the training set train, validation set val and test set test in a 6:2:2 ratio.
Example 3:
step d) comprises the steps of:
d-1) The encoder section is the feature extraction section, i.e., the contracting path. Specifically, the encoder is composed of a first convolution unit, a second convolution unit, a first maximum pooling layer, a third convolution unit, a fourth convolution unit, a second maximum pooling layer, a fifth convolution unit, a sixth convolution unit, a third maximum pooling layer, a seventh convolution unit, an eighth convolution unit and a fourth maximum pooling layer.
d-2) The first convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. The preprocessed coronary vessel image I_fi of the i-th patient in the training set train is input into the first convolution unit, and the feature map I_fi^{1-1} is output.
d-3) The second convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. The feature map I_fi^{1-1} is input into the second convolution unit, and the feature map I_fe^1 is output. d-4) The feature map I_fe^1 is input into the first maximum pooling layer of the encoder, and the feature map I_fm^1 is output. d-5) The third convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. The feature map I_fm^1 is input into the third convolution unit, and the feature map I_fe^{2-1} is output. d-6) The fourth convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. The feature map I_fe^{2-1} is input into the fourth convolution unit, and the feature map I_fe^2 is output. d-7) The feature map I_fe^2 is input into the second maximum pooling layer of the encoder, and the feature map I_fm^2 is output. d-8) The fifth convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. The feature map I_fm^2 is input into the fifth convolution unit, and the feature map I_fe^{3-1} is output. d-9) The sixth convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. The feature map I_fe^{3-1} is input into the sixth convolution unit, and the feature map I_fe^3 is output. d-10) The feature map I_fe^3 is input into the third maximum pooling layer of the encoder, and the feature map I_fm^3 is output. d-11) The seventh convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. The feature map I_fm^3 is input into the seventh convolution unit, and the feature map I_fe^{4-1} is output. d-12) The eighth convolution unit of the encoder is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. The feature map I_fe^{4-1} is input into the eighth convolution unit, and the feature map I_fe^4 is output. d-13) The feature map I_fe^4 is input into the fourth maximum pooling layer of the encoder, and the feature map I_fm^4 is output.
In this embodiment, it is preferable that the convolution layers of the eight convolution units in steps d-2), d-3), d-5), d-6), d-8), d-9), d-11) and d-12) each have a 3×3 convolution kernel, a stride of 1×1 and a padding of 0, and that the four maximum pooling layers in steps d-4), d-7), d-10) and d-13) each use a 2×2 pooling window.
Example 4:
step e) comprises the steps of:
e-1) the intermediate structure layer is composed of a first convolution unit and a second convolution unit.
e-2) The first convolution unit of the intermediate structure layer is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. The feature map I_fm^4 is input into the first convolution unit, and the feature map I_fc^{1-1} is output. e-3) The second convolution unit of the intermediate structure layer is composed, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. The feature map I_fc^{1-1} is input into the second convolution unit, and the feature map I_fc^1 is output. The convolution layers of both convolution units in steps e-2) and e-3) have a 3×3 convolution kernel, a stride of 1×1 and a padding of 0.
Example 5:
step f) comprises the steps of:
f-1) The decoder section is the expanding path. Specifically, the decoder is composed of a first soft attention module fusing geometric information, a first upsampling layer, a first feature fusion layer, a first convolution unit, a second convolution unit, a second soft attention module fusing geometric information, a second upsampling layer, a second feature fusion layer, a third convolution unit, a fourth convolution unit, a third soft attention module fusing geometric information, a third upsampling layer, a third feature fusion layer, a fifth convolution unit, a sixth convolution unit, a fourth soft attention module fusing geometric information, a fourth upsampling layer, a fourth feature fusion layer, a seventh convolution unit, an eighth convolution unit, a ninth convolution unit and a tenth convolution unit.
f-2) The first soft attention module of the decoder fusing geometric information is composed of a first convolution layer, a second convolution layer, a Relu activation function, a third convolution layer, a Sigmoid function, a geometric information calculation layer and an upsampling layer. The feature map I_fc^1 is input into the first convolution layer, and the feature map I_fd^{1-1-1} is output. The feature map I_fe^4 is input into the second convolution layer, and the feature map I_fd^{1-1-2} is output. The feature maps I_fd^{1-1-1} and I_fd^{1-1-2} are added to obtain the feature map I_fd^{1-1-3}. The feature map I_fd^{1-1-3} is input into the Relu activation function, and the feature map I_fd^{1-1-4} is output. The feature map I_fd^{1-1-4} is input into the third convolution layer, and the feature map I_fd^{1-1-5} is output. The feature map I_fd^{1-1-5} is input into the Sigmoid function, and the feature map I_fd^{1-1-6} is output. The feature map I_fd^{1-1-6} is input into the geometric information calculation layer, where a threshold method divides the points of I_fd^{1-1-6} into a foreground seed point set S_1 and a background seed point set R_1. For the foreground seed point set S_1 the geodesic distance map D_s(x_1) is computed with a fast marching algorithm, and for the background seed point set R_1 the geodesic distance map U_r(x_1) is computed with a fast marching algorithm, where x_1 is a feature value of the feature map I_fd^{1-1-6}, x_1 ∈ Ω_1, and Ω_1 is the image domain. Subtracting the values of corresponding pixels of the geodesic distance maps D_s(x_1) and U_r(x_1) gives the geodesic distance map M(x_1) = D_s(x_1) − U_r(x_1): when M(x_1) < 0 the point of the image domain Ω_1 is inside the vessel, denoted Ω_1^in; when M(x_1) > 0 the point is outside the vessel, denoted Ω_1^out; when M(x_1) = 0 the point is on the vessel wall, denoted ∂Ω_1. The level-set function is computed by the formula φ(x_1) = min_{y_1 ∈ ∂Ω_1} ||x_1 − y_1||_2 for x_1 ∈ Ω_1^in, φ(x_1) = 0 for x_1 ∈ ∂Ω_1, and φ(x_1) = −min_{y_1 ∈ ∂Ω_1} ||x_1 − y_1||_2 for x_1 ∈ Ω_1^out, where y_1 is a point on the vessel boundary and ||·||_2 is the Euclidean distance. The level-set function φ(x_1) is input into a smooth Heaviside function, and the feature map I_fd^{1-1-7} is computed by the formula I_fd^{1-1-7} = (1/2)(1 + (2/π) arctan(φ(x_1)/k)), where k is a constant. The feature maps I_fd^{1-1-6} and I_fd^{1-1-7} are feature-fused to obtain the feature map I_fd^{1-1-8}. The feature map I_fd^{1-1-8} is input into the upsampling layer, and the feature map I_fd^{1-1-9} is output. The feature maps I_fe^4 and I_fd^{1-1-9} are feature-fused to obtain the feature map I_fd^{1-1}.
f-3) Feature map I_fc^1 is input into the first upsampling layer of the decoder to output feature map I_fu^1.
f-4) Feature map I_fu^1 and feature map I_fd^{1-1} are input into the first feature fusion layer of the decoder, which fuses them using the concat method to output feature map I_fd^{1-2}.
f-5) The first convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. Feature map I_fd^{1-2} is input into the first convolution unit to output feature map I_fd^{1-2-1}.
f-6) The second convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. Feature map I_fd^{1-2-1} is input into the second convolution unit to output feature map I_fd^1. A sketch of one convolution unit and one full decoder stage follows.
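Steps f-3) through f-6) form the recurring decoder stage pattern (upsampling, concat fusion, two convolution units). A minimal sketch under stated assumptions: the Dropout rate is not given in the text and is chosen arbitrarily, the channel halving on upsampling is assumed, and because the 3x3 convolutions use padding 0, the gated skip feature is center-cropped before concatenation, a U-Net-style detail the text leaves implicit.

import torch
import torch.nn as nn

def conv_unit(in_ch, out_ch, p_drop=0.1):
    """One convolution unit: convolution layer -> BatchNorm layer -> Dropout layer -> Relu."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=0),
        nn.BatchNorm2d(out_ch),
        nn.Dropout2d(p_drop),  # the dropout rate is an assumption
        nn.ReLU(inplace=True),
    )

def center_crop(t, h, w):
    """Crop a feature map to h x w around its center."""
    dh = (t.shape[-2] - h) // 2
    dw = (t.shape[-1] - w) // 2
    return t[..., dh:dh + h, dw:dw + w]

class DecoderStage(nn.Module):
    """Upsampling layer -> concat feature fusion layer -> two convolution units."""

    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        # halving the channel count on upsampling is an assumption
        self.up = nn.ConvTranspose2d(in_ch, in_ch // 2, kernel_size=2, stride=2)
        self.conv1 = conv_unit(in_ch // 2 + skip_ch, out_ch)
        self.conv2 = conv_unit(out_ch, out_ch)

    def forward(self, x, gated_skip):
        x = self.up(x)                                              # I_fu^m
        gated_skip = center_crop(gated_skip, x.shape[-2], x.shape[-1])
        x = torch.cat([x, gated_skip], dim=1)                       # concat fusion, I_fd^{m-2}
        return self.conv2(self.conv1(x))                            # I_fd^m

Stage 1 then reads I_fd^1 = DecoderStage(...)(I_fc^1, I_fd^{1-1}); stages 2 to 4 repeat the pattern with I_fd^1, I_fd^2 and I_fd^3 as inputs.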
f-7) The second soft attention module fusing geometric information of the decoder consists of a first convolution layer, a second convolution layer, a Relu activation function, a third convolution layer, a Sigmoid function, a geometric information calculation layer and an upsampling layer. Feature map I_fd^1 is input into the first convolution layer to output feature map I_fd^{2-1-1}, and feature map I_fe^3 is input into the second convolution layer to output feature map I_fd^{2-1-2}. Feature map I_fd^{2-1-1} and feature map I_fd^{2-1-2} are added to obtain feature map I_fd^{2-1-3}, which is input into the Relu activation function to output feature map I_fd^{2-1-4}. Feature map I_fd^{2-1-4} is input into the third convolution layer to output feature map I_fd^{2-1-5}, which is input into the Sigmoid function to output feature map I_fd^{2-1-6}. Feature map I_fd^{2-1-6} is input into the geometric information calculation layer, where a threshold method divides I_fd^{2-1-6} into a foreground seed point set S_2 and a background seed point set R_2. A geodesic distance map D_s(x_2) is computed for the foreground seed point set S_2 using the fast marching algorithm, and a geodesic distance map U_r(x_2) is computed for the background seed point set R_2 using the fast marching algorithm, where x_2 is a feature value of feature map I_fd^{2-1-6}, x_2 ∈ Ω_2, and Ω_2 is the image domain. Subtracting the values of corresponding pixels of D_s(x_2) and U_r(x_2) gives the geodesic distance map M(x_2) = D_s(x_2) − U_r(x_2). When M(x_2) < 0 the point of the image domain Ω_2 lies inside the vessel, denoted Ω_2^in; when M(x_2) > 0 the point lies outside the vessel, denoted Ω_2^out; when M(x_2) = 0 the point lies on the vessel wall, denoted Ω_2^wall. The level function set φ(x_2) is calculated by the formula φ(x_2) = −min_{y_2}||x_2 − y_2||_2 for x_2 ∈ Ω_2^in, φ(x_2) = 0 for x_2 ∈ Ω_2^wall, and φ(x_2) = min_{y_2}||x_2 − y_2||_2 for x_2 ∈ Ω_2^out, where y_2 is a point on the vessel boundary. The level function set φ(x_2) is input into the smooth Heaviside function H_k to calculate feature map I_fd^{2-1-7} = H_k(φ(x_2)). Feature map I_fd^{2-1-6} and feature map I_fd^{2-1-7} are fused to obtain feature map I_fd^{2-1-8}, which is input into the upsampling layer to output feature map I_fd^{2-1-9}. Feature map I_fe^3 and feature map I_fd^{2-1-9} are fused to obtain feature map I_fd^{2-1}. The geometric information calculation layer shared by all four modules is sketched below.
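The geometric information calculation layer is identical in all four soft attention modules. The following NumPy/SciPy sketch is a simplified stand-in: the patent computes geodesic distance maps with a fast marching algorithm (a dedicated library such as scikit-fmm would be closer to the text), whereas this sketch substitutes the Euclidean distance transform; the threshold value 0.5 and the arctan form of the smooth Heaviside function are assumptions, since the text states only that a threshold method is used and that k is a constant.

import numpy as np
from scipy import ndimage

def geometric_information_layer(att, thr=0.5, k=1.0):
    """Stand-in for the geometric information calculation layer.

    att is a 2-D soft attention map in [0, 1] (the Sigmoid output I^{m-1-6}).
    Returns the smooth-Heaviside map I^{m-1-7}.
    """
    fg = att >= thr                            # foreground seed point set S (threshold method)
    # distance of every pixel to the nearest foreground / background seed
    # (Euclidean stand-in for the geodesic maps D_s and U_r)
    d_s = ndimage.distance_transform_edt(~fg)  # zero on foreground seeds
    u_r = ndimage.distance_transform_edt(fg)   # zero on background seeds
    m = d_s - u_r                              # M(x): <0 inside, >0 outside, =0 on the wall
    inside = m < 0
    # signed level function: -distance to the vessel boundary inside,
    # +distance to the boundary outside, 0 on the boundary
    dist_to_boundary = (ndimage.distance_transform_edt(inside)
                        + ndimage.distance_transform_edt(~inside))
    phi = np.where(inside, -1.0, 1.0) * dist_to_boundary
    # smooth Heaviside, assumed arctan form with smoothing constant k
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / k))

Wrapped in a small torch-to-numpy adapter, this function could serve as the geo_layer callable of the gate sketch above.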
f-8) Feature map I_fd^1 is input into the second upsampling layer of the decoder to output feature map I_fu^2. f-9) Feature map I_fu^2 and feature map I_fd^{2-1} are input into the second feature fusion layer of the decoder, which fuses them using the concat method to output feature map I_fd^{2-2}.
f-10) The third convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. Feature map I_fd^{2-2} is input into the third convolution unit to output feature map I_fd^{2-2-1}.
f-11) The fourth convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. Feature map I_fd^{2-2-1} is input into the fourth convolution unit to output feature map I_fd^2.
f-12) The third soft attention module fusing geometric information of the decoder consists of a first convolution layer, a second convolution layer, a Relu activation function, a third convolution layer, a Sigmoid function, a geometric information calculation layer and an upsampling layer. Feature map I_fd^2 is input into the first convolution layer to output feature map I_fd^{3-1-1}, and feature map I_fe^2 is input into the second convolution layer to output feature map I_fd^{3-1-2}. Feature map I_fd^{3-1-1} and feature map I_fd^{3-1-2} are added to obtain feature map I_fd^{3-1-3}, which is input into the Relu activation function to output feature map I_fd^{3-1-4}. Feature map I_fd^{3-1-4} is input into the third convolution layer to output feature map I_fd^{3-1-5}, which is input into the Sigmoid function to output feature map I_fd^{3-1-6}. Feature map I_fd^{3-1-6} is input into the geometric information calculation layer, where a threshold method divides I_fd^{3-1-6} into a foreground seed point set S_3 and a background seed point set R_3. A geodesic distance map D_s(x_3) is computed for the foreground seed point set S_3 using the fast marching algorithm, and a geodesic distance map U_r(x_3) is computed for the background seed point set R_3 using the fast marching algorithm, where x_3 is a feature value of feature map I_fd^{3-1-6}, x_3 ∈ Ω_3, and Ω_3 is the image domain. Subtracting the values of corresponding pixels of D_s(x_3) and U_r(x_3) gives the geodesic distance map M(x_3) = D_s(x_3) − U_r(x_3). When M(x_3) < 0 the point of the image domain Ω_3 lies inside the vessel, denoted Ω_3^in; when M(x_3) > 0 the point lies outside the vessel, denoted Ω_3^out; when M(x_3) = 0 the point lies on the vessel wall, denoted Ω_3^wall. The level function set φ(x_3) is calculated by the formula φ(x_3) = −min_{y_3}||x_3 − y_3||_2 for x_3 ∈ Ω_3^in, φ(x_3) = 0 for x_3 ∈ Ω_3^wall, and φ(x_3) = min_{y_3}||x_3 − y_3||_2 for x_3 ∈ Ω_3^out, where y_3 is a point on the vessel boundary. The level function set φ(x_3) is input into the smooth Heaviside function H_k to calculate feature map I_fd^{3-1-7} = H_k(φ(x_3)). Feature map I_fd^{3-1-6} and feature map I_fd^{3-1-7} are fused to obtain feature map I_fd^{3-1-8}, which is input into the upsampling layer to output feature map I_fd^{3-1-9}. Feature map I_fe^2 and feature map I_fd^{3-1-9} are fused to obtain feature map I_fd^{3-1}.
f-13) Feature map I_fd^2 is input into the third upsampling layer of the decoder to output feature map I_fu^3. f-14) Feature map I_fu^3 and feature map I_fd^{3-1} are input into the third feature fusion layer of the decoder, which fuses them using the concat method to output feature map I_fd^{3-2}.
f-15) The fifth convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. Feature map I_fd^{3-2} is input into the fifth convolution unit to output feature map I_fd^{3-2-1}.
f-16) The sixth convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. Feature map I_fd^{3-2-1} is input into the sixth convolution unit to output feature map I_fd^3.
f-17) The fourth soft attention module fusing geometric information of the decoder consists of a first convolution layer, a second convolution layer, a Relu activation function, a third convolution layer, a Sigmoid function, a geometric information calculation layer and an upsampling layer. Feature map I_fd^3 is input into the first convolution layer to output feature map I_fd^{4-1-1}, and feature map I_fe^1 is input into the second convolution layer to output feature map I_fd^{4-1-2}. Feature map I_fd^{4-1-1} and feature map I_fd^{4-1-2} are added to obtain feature map I_fd^{4-1-3}, which is input into the Relu activation function to output feature map I_fd^{4-1-4}. Feature map I_fd^{4-1-4} is input into the third convolution layer to output feature map I_fd^{4-1-5}, which is input into the Sigmoid function to output feature map I_fd^{4-1-6}. Feature map I_fd^{4-1-6} is input into the geometric information calculation layer, where a threshold method divides I_fd^{4-1-6} into a foreground seed point set S_4 and a background seed point set R_4. A geodesic distance map D_s(x_4) is computed for the foreground seed point set S_4 using the fast marching algorithm, and a geodesic distance map U_r(x_4) is computed for the background seed point set R_4 using the fast marching algorithm, where x_4 is a feature value of feature map I_fd^{4-1-6}, x_4 ∈ Ω_4, and Ω_4 is the image domain. Subtracting the values of corresponding pixels of D_s(x_4) and U_r(x_4) gives the geodesic distance map M(x_4) = D_s(x_4) − U_r(x_4). When M(x_4) < 0 the point of the image domain Ω_4 lies inside the vessel, denoted Ω_4^in; when M(x_4) > 0 the point lies outside the vessel, denoted Ω_4^out; when M(x_4) = 0 the point lies on the vessel wall, denoted Ω_4^wall. The level function set φ(x_4) is calculated by the formula φ(x_4) = −min_{y_4}||x_4 − y_4||_2 for x_4 ∈ Ω_4^in, φ(x_4) = 0 for x_4 ∈ Ω_4^wall, and φ(x_4) = min_{y_4}||x_4 − y_4||_2 for x_4 ∈ Ω_4^out, where y_4 is a point on the vessel boundary. The level function set φ(x_4) is input into the smooth Heaviside function H_k to calculate feature map I_fd^{4-1-7} = H_k(φ(x_4)). Feature map I_fd^{4-1-6} and feature map I_fd^{4-1-7} are fused to obtain feature map I_fd^{4-1-8}, which is input into the upsampling layer to output feature map I_fd^{4-1-9}. Feature map I_fe^1 and feature map I_fd^{4-1-9} are fused to obtain feature map I_fd^{4-1}.
f-18) Feature map I_fd^3 is input into the fourth upsampling layer of the decoder to output feature map I_fu^4. f-19) Feature map I_fu^4 and feature map I_fd^{4-1} are input into the fourth feature fusion layer of the decoder, which fuses them using the concat method to output feature map I_fd^{4-2}.
f-20) The seventh convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. Feature map I_fd^{4-2} is input into the seventh convolution unit to output feature map I_fd^{4-2-1}.
f-21) The eighth convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. Feature map I_fd^{4-2-1} is input into the eighth convolution unit to output feature map I_fd^4.
f-22) The ninth convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. Feature map I_fd^{4-1-9} is input into the ninth convolution unit to output the feature map Q. f-23) The tenth convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function. Feature map I_fd^4 is input into the tenth convolution unit to output the segmented image I_end.
In this embodiment, preferably, the first convolution layer of the soft attention module fusing geometric information in step f-2) has a convolution kernel size of 1×1, stride 1×1 and padding 0; the second convolution layer has a convolution kernel size of 1×1, stride 2×2 and padding 0; the third convolution layer has a convolution kernel size of 1×1, stride 1×1 and padding 0; and the deconvolution kernel of its upsampling layer has size 2×2, stride 2×2 and padding 0. The second, third and fourth soft attention modules in steps f-7), f-12) and f-17) use the same sizes. The deconvolution kernels of the first through fourth upsampling layers in steps f-3), f-8), f-13) and f-18) have size 2×2, stride 2×2 and padding 0. The convolution layers of the first through eighth convolution units in steps f-5), f-6), f-10), f-11), f-15), f-16), f-20) and f-21) have kernel size 3×3, stride 1×1 and padding 0. The convolution layer of the ninth convolution unit in step f-22) has kernel size 5×5, stride 1×1 and padding 0, and the convolution layer of the tenth convolution unit in step f-23) has kernel size 1×1, stride 1×1 and padding 0. These hyperparameters are summarized in the sketch below.
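Since the four soft attention modules and the four decoder stages share their hyperparameters, the embodiment's layer settings collapse to the short table below, written here as a Python dict for reference (the key names are illustrative, not from the patent):

# Preferred-embodiment hyperparameters, identical across all four decoder
# stages; values restated from the paragraph above.
DECODER_HPARAMS = {
    "attn_conv_1":       {"kernel": (1, 1), "stride": (1, 1), "padding": 0},
    "attn_conv_2":       {"kernel": (1, 1), "stride": (2, 2), "padding": 0},
    "attn_conv_3":       {"kernel": (1, 1), "stride": (1, 1), "padding": 0},
    "attn_upsample":     {"kernel": (2, 2), "stride": (2, 2), "padding": 0},  # deconvolution
    "stage_upsample":    {"kernel": (2, 2), "stride": (2, 2), "padding": 0},  # deconvolution
    "conv_units_1_to_8": {"kernel": (3, 3), "stride": (1, 1), "padding": 0},
    "conv_unit_9":       {"kernel": (5, 5), "stride": (1, 1), "padding": 0},
    "conv_unit_10":      {"kernel": (1, 1), "stride": (1, 1), "padding": 0},
}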
Example 6:
Step g) comprises the following steps:
g-1) x_5 is a feature value on the given ground truth, x_5 ∈ Ω_5, where Ω_5 is the image domain. The level function set φ(x_5) is calculated by the formula φ(x_5) = −min_{y_5}||x_5 − y_5||_2 for x_5 ∈ Ω_5^in, φ(x_5) = 0 for x_5 ∈ Ω_5^wall, and φ(x_5) = min_{y_5}||x_5 − y_5||_2 for x_5 ∈ Ω_5^out, where Ω_5^in is the set of points of the image domain Ω_5 inside the vessel, Ω_5^wall the set of points on the vessel wall, Ω_5^out the set of points outside the vessel, and y_5 is a point on the vessel boundary. The level function set φ(x_5) is input into the smooth Heaviside function H_k to calculate the probability map Q_GT = H_k(φ(x_5)). The loss function L_1 = ||Q − Q_GT||_1 is then calculated, where ||·||_1 is the L1 norm.
g-2) The loss function L_2 is calculated pixel-wise between the feature map Q and the probability map Q_GT, where N is the total number of pixels in the preprocessed coronary vessel image I_fi of the i-th patient, s_j is the pixel value of the j-th pixel in the feature map Q, and g_j is the pixel value of the j-th pixel in the probability map Q_GT.
g-3) The total loss is calculated by the formula L = L_1 + L_2 + L_BCE, where L_BCE is the cross-entropy loss. A hedged sketch of this loss follows.
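The formula images for L_1 and L_2 are not recoverable from the text, so the following PyTorch sketch makes its assumptions explicit: L_1 is taken as the L1-norm distance ||Q − Q_GT||_1 (consistent with the stated L1 norm), L_2 as a mean per-pixel squared error over the N pixels (consistent with the stated N, s_j and g_j), and L_BCE as binary cross entropy on the segmented image; all three forms are assumptions. Q_GT itself can be produced by applying the signed-distance and smooth-Heaviside construction of the geometric layer sketch above to the ground-truth mask.

import torch
import torch.nn.functional as F

def total_loss(q, q_gt, seg, gt):
    """Sketch of the loss of step g): L = L_1 + L_2 + L_BCE.

    q    : feature map Q from the ninth convolution unit
    q_gt : probability map Q_GT, the smooth Heaviside of the signed
           distance of the ground truth (step g-1)
    seg  : segmented image I_end from the tenth convolution unit,
           assumed to hold probabilities in [0, 1]
    gt   : binary ground-truth mask as a float tensor
    """
    l1 = torch.sum(torch.abs(q - q_gt))      # assumed: L_1 = ||Q - Q_GT||_1
    l2 = torch.mean((q - q_gt) ** 2)         # assumed: L_2 = (1/N) * sum_j (s_j - g_j)^2
    l_bce = F.binary_cross_entropy(seg, gt)  # cross entropy loss L_BCE
    return l1 + l2 + l_bce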
Finally, it should be noted that the foregoing description covers only preferred embodiments of the present invention, and the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or substitute equivalents for some of the technical features. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (7)

1. A method for vessel segmentation of a soft-attention network incorporating geometric information, comprising the following steps:
a) Coronary vessel images of n patients are collected to obtain data set I, I = {I_1, I_2, ..., I_i, ..., I_n}, wherein I_i is the coronary vessel image of the i-th patient, i ∈ {1, 2, ..., n};
b) The data set I is preprocessed to obtain a preprocessed data set I_f, I_f = {I_f1, I_f2, ..., I_fi, ..., I_fn}, wherein I_fi is the preprocessed coronary vessel image of the i-th patient;
c) The preprocessed data set I_f is divided into a training set train, a validation set val and a test set test;
d) A network structure composed of an encoder, an intermediate structure layer and a decoder is set up, and the preprocessed coronary vessel image I_fi of the i-th patient in the training set train is input into the encoder of the network structure to output feature map I_fm^4;
e) Feature map I_fm^4 is input into the intermediate structure layer of the network structure to output feature map I_fc^1;
f) Feature map I_fc^1 is input into the decoder to output the segmented image I_end;
g) A loss function L is calculated;
h) Using AdamW as the optimizer, the network structure in step d) is optimized by back propagation according to the loss function L to obtain an optimized network structure;
i) The preprocessed coronary vessel image of the i-th patient in the test set test is input into the optimized network structure to obtain a segmented image I′_end;
Step d) comprises the following steps:
d-1) the encoder is composed of a first convolution unit, a second convolution unit, a first maximum pooling layer, a third convolution unit, a fourth convolution unit, a second maximum pooling layer, a fifth convolution unit, a sixth convolution unit, a third maximum pooling layer, a seventh convolution unit, an eighth convolution unit and a fourth maximum pooling layer;
d-2) the first convolution unit of the encoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; the preprocessed coronary vessel image I_fi of the i-th patient in the training set train is input into the first convolution unit to output feature map I_fi^{1-1};
d-3) the second convolution unit of the encoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fi^{1-1} is input into the second convolution unit to output feature map I_fe^1;
d-4) feature map I_fe^1 is input into the first maximum pooling layer of the encoder to output feature map I_fm^1;
d-5) the third convolution unit of the encoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fm^1 is input into the third convolution unit to output feature map I_fe^{2-1};
d-6) the fourth convolution unit of the encoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fe^{2-1} is input into the fourth convolution unit to output feature map I_fe^2;
d-7) feature map I_fe^2 is input into the second maximum pooling layer of the encoder to output feature map I_fm^2;
d-8) the fifth convolution unit of the encoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fm^2 is input into the fifth convolution unit to output feature map I_fe^{3-1};
d-9) the sixth convolution unit of the encoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fe^{3-1} is input into the sixth convolution unit to output feature map I_fe^3;
d-10) feature map I_fe^3 is input into the third maximum pooling layer of the encoder to output feature map I_fm^3;
d-11) the seventh convolution unit of the encoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fm^3 is input into the seventh convolution unit to output feature map I_fe^{4-1};
d-12) the eighth convolution unit of the encoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fe^{4-1} is input into the eighth convolution unit to output feature map I_fe^4;
d-13) feature map I_fe^4 is input into the fourth maximum pooling layer of the encoder to output feature map I_fm^4;
Step e) comprises the following steps:
e-1) the intermediate structure layer is composed of a first convolution unit and a second convolution unit;
e-2) the first convolution unit of the intermediate structure layer consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fm^4 is input into the first convolution unit to output feature map I_fc^{1-1};
e-3) the second convolution unit of the intermediate structure layer consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fc^{1-1} is input into the second convolution unit to output feature map I_fc^1;
Step f) comprises the following steps:
f-1) the decoder is composed of a first soft attention module fusing geometric information, a first upsampling layer, a first feature fusion layer, a first convolution unit, a second convolution unit, a second soft attention module fusing geometric information, a second upsampling layer, a second feature fusion layer, a third convolution unit, a fourth convolution unit, a third soft attention module fusing geometric information, a third upsampling layer, a third feature fusion layer, a fifth convolution unit, a sixth convolution unit, a fourth soft attention module fusing geometric information, a fourth upsampling layer, a fourth feature fusion layer, a seventh convolution unit, an eighth convolution unit, a ninth convolution unit and a tenth convolution unit;
f-2) the first soft attention module fusing geometric information of the decoder consists of a first convolution layer, a second convolution layer, a Relu activation function, a third convolution layer, a Sigmoid function, a geometric information calculation layer and an upsampling layer; feature map I_fc^1 is input into the first convolution layer to output feature map I_fd^{1-1-1}; feature map I_fe^4 is input into the second convolution layer to output feature map I_fd^{1-1-2}; feature map I_fd^{1-1-1} and feature map I_fd^{1-1-2} are added to obtain feature map I_fd^{1-1-3}; feature map I_fd^{1-1-3} is input into the Relu activation function to output feature map I_fd^{1-1-4}; feature map I_fd^{1-1-4} is input into the third convolution layer to output feature map I_fd^{1-1-5}; feature map I_fd^{1-1-5} is input into the Sigmoid function to output feature map I_fd^{1-1-6}; feature map I_fd^{1-1-6} is input into the geometric information calculation layer, in which a threshold method divides I_fd^{1-1-6} into a foreground seed point set S_1 and a background seed point set R_1; a geodesic distance map D_s(x_1) is computed for the foreground seed point set S_1 using the fast marching algorithm and a geodesic distance map U_r(x_1) is computed for the background seed point set R_1 using the fast marching algorithm, x_1 being a feature value of feature map I_fd^{1-1-6}, x_1 ∈ Ω_1, Ω_1 being the image domain; the values of corresponding pixels of D_s(x_1) and U_r(x_1) are subtracted to obtain the geodesic distance map M(x_1) = D_s(x_1) − U_r(x_1); when M(x_1) < 0 the point of the image domain Ω_1 lies inside the vessel, denoted Ω_1^in; when M(x_1) > 0 the point lies outside the vessel, denoted Ω_1^out; when M(x_1) = 0 the point lies on the vessel wall, denoted Ω_1^wall; the level function set φ(x_1) is calculated by the formula φ(x_1) = −min_{y_1}||x_1 − y_1||_2 for x_1 ∈ Ω_1^in, φ(x_1) = 0 for x_1 ∈ Ω_1^wall, and φ(x_1) = min_{y_1}||x_1 − y_1||_2 for x_1 ∈ Ω_1^out, wherein y_1 is a point on the vessel boundary and ||·||_2 is the Euclidean distance; the level function set φ(x_1) is input into a smooth Heaviside function H_k to calculate feature map I_fd^{1-1-7} = H_k(φ(x_1)), wherein k is a constant; feature map I_fd^{1-1-6} and feature map I_fd^{1-1-7} are fused to obtain feature map I_fd^{1-1-8}; feature map I_fd^{1-1-8} is input into the upsampling layer to output feature map I_fd^{1-1-9}; feature map I_fe^4 and feature map I_fd^{1-1-9} are fused to obtain feature map I_fd^{1-1};
f-3) feature map I_fc^1 is input into the first upsampling layer of the decoder to output feature map I_fu^1; f-4) feature map I_fu^1 and feature map I_fd^{1-1} are input into the first feature fusion layer of the decoder to output feature map I_fd^{1-2};
f-5) the first convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fd^{1-2} is input into the first convolution unit to output feature map I_fd^{1-2-1};
f-6) the second convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fd^{1-2-1} is input into the second convolution unit to output feature map I_fd^1; f-7) the second soft attention module fusing geometric information of the decoder consists of a first convolution layer, a second convolution layer, a Relu activation function, a third convolution layer, a Sigmoid function, a geometric information calculation layer and an upsampling layer; feature map I_fd^1 is input into the first convolution layer to output feature map I_fd^{2-1-1}; feature map I_fe^3 is input into the second convolution layer to output feature map I_fd^{2-1-2}; feature map I_fd^{2-1-1} and feature map I_fd^{2-1-2} are added to obtain feature map I_fd^{2-1-3}; feature map I_fd^{2-1-3} is input into the Relu activation function to output feature map I_fd^{2-1-4}; feature map I_fd^{2-1-4} is input into the third convolution layer to output feature map I_fd^{2-1-5}; feature map I_fd^{2-1-5} is input into the Sigmoid function to output feature map I_fd^{2-1-6}; feature map I_fd^{2-1-6} is input into the geometric information calculation layer, in which a threshold method divides I_fd^{2-1-6} into a foreground seed point set S_2 and a background seed point set R_2; a geodesic distance map D_s(x_2) is computed for the foreground seed point set S_2 using the fast marching algorithm and a geodesic distance map U_r(x_2) is computed for the background seed point set R_2 using the fast marching algorithm, x_2 being a feature value of feature map I_fd^{2-1-6}, x_2 ∈ Ω_2, Ω_2 being the image domain; the values of corresponding pixels of D_s(x_2) and U_r(x_2) are subtracted to obtain the geodesic distance map M(x_2) = D_s(x_2) − U_r(x_2); when M(x_2) < 0 the point of the image domain Ω_2 lies inside the vessel, denoted Ω_2^in; when M(x_2) > 0 the point lies outside the vessel, denoted Ω_2^out; when M(x_2) = 0 the point lies on the vessel wall, denoted Ω_2^wall; the level function set φ(x_2) is calculated by the formula φ(x_2) = −min_{y_2}||x_2 − y_2||_2 for x_2 ∈ Ω_2^in, φ(x_2) = 0 for x_2 ∈ Ω_2^wall, and φ(x_2) = min_{y_2}||x_2 − y_2||_2 for x_2 ∈ Ω_2^out, wherein y_2 is a point on the vessel boundary; the level function set φ(x_2) is input into the smooth Heaviside function H_k to calculate feature map I_fd^{2-1-7} = H_k(φ(x_2)); feature map I_fd^{2-1-6} and feature map I_fd^{2-1-7} are fused to obtain feature map I_fd^{2-1-8}; feature map I_fd^{2-1-8} is input into the upsampling layer to output feature map I_fd^{2-1-9}; feature map I_fe^3 and feature map I_fd^{2-1-9} are fused to obtain feature map I_fd^{2-1};
f-8) feature map I_fd^1 is input into the second upsampling layer of the decoder to output feature map I_fu^2;
f-9) feature map I_fu^2 and feature map I_fd^{2-1} are input into the second feature fusion layer of the decoder to output feature map I_fd^{2-2};
f-10) the third convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fd^{2-2} is input into the third convolution unit to output feature map I_fd^{2-2-1};
f-11) the fourth convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fd^{2-2-1} is input into the fourth convolution unit to output feature map I_fd^2;
f-12) the third soft attention module fusing geometric information of the decoder consists of a first convolution layer, a second convolution layer, a Relu activation function, a third convolution layer, a Sigmoid function, a geometric information calculation layer and an upsampling layer; feature map I_fd^2 is input into the first convolution layer to output feature map I_fd^{3-1-1}; feature map I_fe^2 is input into the second convolution layer to output feature map I_fd^{3-1-2}; feature map I_fd^{3-1-1} and feature map I_fd^{3-1-2} are added to obtain feature map I_fd^{3-1-3}; feature map I_fd^{3-1-3} is input into the Relu activation function to output feature map I_fd^{3-1-4}; feature map I_fd^{3-1-4} is input into the third convolution layer to output feature map I_fd^{3-1-5}; feature map I_fd^{3-1-5} is input into the Sigmoid function to output feature map I_fd^{3-1-6}; feature map I_fd^{3-1-6} is input into the geometric information calculation layer, in which a threshold method divides I_fd^{3-1-6} into a foreground seed point set S_3 and a background seed point set R_3; a geodesic distance map D_s(x_3) is computed for the foreground seed point set S_3 using the fast marching algorithm and a geodesic distance map U_r(x_3) is computed for the background seed point set R_3 using the fast marching algorithm, x_3 being a feature value of feature map I_fd^{3-1-6}, x_3 ∈ Ω_3, Ω_3 being the image domain; the values of corresponding pixels of D_s(x_3) and U_r(x_3) are subtracted to obtain the geodesic distance map M(x_3) = D_s(x_3) − U_r(x_3); when M(x_3) < 0 the point of the image domain Ω_3 lies inside the vessel, denoted Ω_3^in; when M(x_3) > 0 the point lies outside the vessel, denoted Ω_3^out; when M(x_3) = 0 the point lies on the vessel wall, denoted Ω_3^wall; the level function set φ(x_3) is calculated by the formula φ(x_3) = −min_{y_3}||x_3 − y_3||_2 for x_3 ∈ Ω_3^in, φ(x_3) = 0 for x_3 ∈ Ω_3^wall, and φ(x_3) = min_{y_3}||x_3 − y_3||_2 for x_3 ∈ Ω_3^out, wherein y_3 is a point on the vessel boundary; the level function set φ(x_3) is input into the smooth Heaviside function H_k to calculate feature map I_fd^{3-1-7} = H_k(φ(x_3)); feature map I_fd^{3-1-6} and feature map I_fd^{3-1-7} are fused to obtain feature map I_fd^{3-1-8}; feature map I_fd^{3-1-8} is input into the upsampling layer to output feature map I_fd^{3-1-9}; feature map I_fe^2 and feature map I_fd^{3-1-9} are fused to obtain feature map I_fd^{3-1};
f-13) feature map I_fd^2 is input into the third upsampling layer of the decoder to output feature map I_fu^3;
f-14) feature map I_fu^3 and feature map I_fd^{3-1} are input into the third feature fusion layer of the decoder to output feature map I_fd^{3-2};
f-15) the fifth convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fd^{3-2} is input into the fifth convolution unit to output feature map I_fd^{3-2-1};
f-16) the sixth convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fd^{3-2-1} is input into the sixth convolution unit to output feature map I_fd^3;
f-17) the fourth soft attention module fusing geometric information of the decoder consists of a first convolution layer, a second convolution layer, a Relu activation function, a third convolution layer, a Sigmoid function, a geometric information calculation layer and an upsampling layer; feature map I_fd^3 is input into the first convolution layer to output feature map I_fd^{4-1-1}; feature map I_fe^1 is input into the second convolution layer to output feature map I_fd^{4-1-2}; feature map I_fd^{4-1-1} and feature map I_fd^{4-1-2} are added to obtain feature map I_fd^{4-1-3}; feature map I_fd^{4-1-3} is input into the Relu activation function to output feature map I_fd^{4-1-4}; feature map I_fd^{4-1-4} is input into the third convolution layer to output feature map I_fd^{4-1-5}; feature map I_fd^{4-1-5} is input into the Sigmoid function to output feature map I_fd^{4-1-6}; feature map I_fd^{4-1-6} is input into the geometric information calculation layer, in which a threshold method divides I_fd^{4-1-6} into a foreground seed point set S_4 and a background seed point set R_4; a geodesic distance map D_s(x_4) is computed for the foreground seed point set S_4 using the fast marching algorithm and a geodesic distance map U_r(x_4) is computed for the background seed point set R_4 using the fast marching algorithm, x_4 being a feature value of feature map I_fd^{4-1-6}, x_4 ∈ Ω_4, Ω_4 being the image domain; the values of corresponding pixels of D_s(x_4) and U_r(x_4) are subtracted to obtain the geodesic distance map M(x_4) = D_s(x_4) − U_r(x_4); when M(x_4) < 0 the point of the image domain Ω_4 lies inside the vessel, denoted Ω_4^in; when M(x_4) > 0 the point lies outside the vessel, denoted Ω_4^out; when M(x_4) = 0 the point lies on the vessel wall, denoted Ω_4^wall; the level function set φ(x_4) is calculated by the formula φ(x_4) = −min_{y_4}||x_4 − y_4||_2 for x_4 ∈ Ω_4^in, φ(x_4) = 0 for x_4 ∈ Ω_4^wall, and φ(x_4) = min_{y_4}||x_4 − y_4||_2 for x_4 ∈ Ω_4^out, wherein y_4 is a point on the vessel boundary; the level function set φ(x_4) is input into the smooth Heaviside function H_k to calculate feature map I_fd^{4-1-7} = H_k(φ(x_4)); feature map I_fd^{4-1-6} and feature map I_fd^{4-1-7} are fused to obtain feature map I_fd^{4-1-8}; feature map I_fd^{4-1-8} is input into the upsampling layer to output feature map I_fd^{4-1-9}; feature map I_fe^1 and feature map I_fd^{4-1-9} are fused to obtain feature map I_fd^{4-1};
f-18) feature map I_fd^3 is input into the fourth upsampling layer of the decoder to output feature map I_fu^4;
f-19) feature map I_fu^4 and feature map I_fd^{4-1} are input into the fourth feature fusion layer of the decoder to output feature map I_fd^{4-2};
f-20) the seventh convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fd^{4-2} is input into the seventh convolution unit to output feature map I_fd^{4-2-1};
f-21) the eighth convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fd^{4-2-1} is input into the eighth convolution unit to output feature map I_fd^4;
f-22) the ninth convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fd^{4-1-9} is input into the ninth convolution unit to output the feature map Q; f-23) the tenth convolution unit of the decoder consists, in order, of a convolution layer, a BatchNorm layer, a Dropout layer and a Relu activation function; feature map I_fd^4 is input into the tenth convolution unit to output the segmented image I_end.
2. The method for vessel segmentation of a soft-attention network incorporating geometric information according to claim 1, wherein: in step a), coronary vessel images of 200 patients are collected from the automatic region-based coronary artery disease diagnostics using x-ray angiography images (ARCADE) open challenge to obtain the data set I; in step b), the Augmentor package is imported in python and used to sequentially apply rotation, elastic deformation, brightness enhancement and contrast enhancement to the data set I to obtain an enhanced data set I′, I′ = {I′_1, I′_2, ..., I′_i, ..., I′_n}; an overlap-tile strategy is then performed on the enhanced data set I′ to obtain the preprocessed data set I_f.
3. The method for vessel segmentation of a soft-attention network incorporating geometric information according to claim 1, wherein: the preprocessed data set I_f is divided into the training set train, the validation set val and the test set test according to the ratio of 6:2:2.
4. The method for vessel segmentation of a soft-attention network incorporating geometric information according to claim 1, wherein: the convolution kernel size of the convolution layer of the first convolution unit in step d-2) is 3×3, stride is 1×1, and padding is 0; the convolution kernel size of the convolution layer of the second convolution unit in step d-3) is 3×3, stride is 1×1, and padding is 0; the pooling window of the first maximum pooling layer in step d-4) is set to 2×2; the convolution kernel size of the convolution layer of the third convolution unit in step d-5) is 3×3, stride is 1×1, and padding is 0; the convolution kernel size of the convolution layer of the fourth convolution unit in step d-6) is 3×3, stride is 1×1, and padding is 0; the pooling window of the second maximum pooling layer in step d-7) is set to 2×2; the convolution kernel size of the convolution layer of the fifth convolution unit in step d-8) is 3×3, stride is 1×1, and padding is 0; the convolution kernel size of the convolution layer of the sixth convolution unit in step d-9) is 3×3, stride is 1×1, and padding is 0; the pooling window of the third maximum pooling layer in step d-10) is set to 2×2; the convolution kernel size of the convolution layer of the seventh convolution unit in step d-11) is 3×3, stride is 1×1, and padding is 0; the convolution kernel size of the convolution layer of the eighth convolution unit in step d-12) is 3×3, stride is 1×1, and padding is 0; and the pooling window of the fourth maximum pooling layer in step d-13) is set to 2×2.
5. The method for vessel segmentation of a soft-attention network incorporating geometric information according to claim 1, wherein: the convolution kernel size of the convolution layer of the first convolution unit in step e-2) is 3×3, stride is 1×1, and padding is 0; the convolution kernel size of the convolution layer of the second convolution unit in step e-3) is 3×3, stride is 1×1, and padding is 0.
6. The method for vessel segmentation of a soft-attention network incorporating geometric information according to claim 1, wherein: in the step f-2), the convolution kernel size of the first convolution layer of the soft attention module fused with the geometric information is 1×1, stride is 1×1, padding is 0, the convolution kernel size of the second convolution layer is 1×1, stride is 2×2, and padding is 0; the convolution kernel size of the third convolution layer is 1×1, stride is 1×1, padding is 0, the deconvolution kernel size of the up-sampling layer is 2×2, stride is 2×2, and padding is 0; the deconvolution kernel size of the first upsampling layer in step f-3) is 2 x 2, stride is 2 x 2, and padding is 0; the convolution kernel size of the convolution layer of the first convolution unit in step f-5) is 3×3, stride is 1×1, and padding is 0; the convolution kernel size of the convolution layer of the second convolution unit in step f-6) is 3×3, stride is 1×1, and padding is 0; the convolution kernel size of the first convolution layer of the soft attention module of the second fused geometric information in the step f-7) is 1×1, stride is 1×1, padding is 0, the convolution kernel size of the second convolution layer is 1×1, stride is 2×2, and padding is 0; the convolution kernel size of the third convolution layer is 1×1, stride is 1×1, padding is 0, the deconvolution kernel size of the up-sampling layer is 2×2, stride is 2×2, and padding is 0; the deconvolution kernel size of the second upsampling layer in step f-8) is 2 x 2, stride is 2 x 2, and padding is 0; the convolution kernel size of the convolution layer of the third convolution unit in step f-10) is 3×3, stride is 1×1, and padding is 0; the convolution kernel size of the convolution layer of the fourth convolution unit in step f-11) is 3×3, stride is 1×1, and padding is 0; the convolution kernel size of the first convolution layer of the soft attention module of the third fused geometric information in the step f-12) is 1×1, stride is 1×1, padding is 0, the convolution kernel size of the second convolution layer is 1×1, stride is 2×2, and padding is 0; the convolution kernel size of the third convolution layer is 1×1, stride is 1×1, padding is 0, the deconvolution kernel size of the up-sampling layer is 2×2, stride is 2×2, and padding is 0; the deconvolution kernel size of the third upsampling layer in step f-13) is 2 x 2, stride is 2 x 2, and padding is 0; the convolution kernel size of the convolution layer of the fifth convolution unit in step f-15) is 3×3, stride is 1×1, and padding is 0; the convolution kernel size of the convolution layer of the sixth convolution unit in step f-16) is 3×3, stride is 1×1, and padding is 0; the convolution kernel size of the first convolution layer of the soft attention module of the fourth fusion geometry information in the step f-17) is 1×1, stride is 1×1, padding is 0, the convolution kernel size of the second convolution layer is 1×1, stride is 2×2, and padding is 0; the convolution kernel size of the third convolution layer is 1×1, stride is 1×1, padding is 0, the deconvolution kernel size of the up-sampling layer is 2×2, stride is 2×2, and padding is 0; the deconvolution kernel size of the fourth upsampling layer in step f-18) is 2 x 2, stride is 2 x 2, and padding is 0; the convolution kernel size of the convolution layer of the seventh convolution unit in step f-20) is 3×3, stride is 1×1, and padding is 0; the convolution kernel size of the convolution layer of the eighth 
convolution unit in step f-21) is 3×3, stride is 1×1, and padding is 0; the convolution kernel size of the convolution layer of the ninth convolution unit in step f-22) is 5×5, stride is 1×1, and padding is 0; the convolution kernel size of the convolution layer of the tenth convolution unit in step f-23) is 1 x 1, stride is 1 x 1, and padding is 0.
7. The method for vessel segmentation of a soft-attention network incorporating geometric information according to claim 1, wherein step g) comprises the following steps:
g-1) x_5 is a feature value on the given ground truth, x_5 ∈ Ω_5, wherein Ω_5 is the image domain; the level function set φ(x_5) is calculated by the formula φ(x_5) = −min_{y_5}||x_5 − y_5||_2 for x_5 ∈ Ω_5^in, φ(x_5) = 0 for x_5 ∈ Ω_5^wall, and φ(x_5) = min_{y_5}||x_5 − y_5||_2 for x_5 ∈ Ω_5^out, wherein Ω_5^in is the set of points of the image domain Ω_5 inside the vessel, Ω_5^wall the set of points on the vessel wall, Ω_5^out the set of points outside the vessel, and y_5 is a point on the vessel boundary; the level function set φ(x_5) is input into the smooth Heaviside function H_k to calculate the probability map Q_GT = H_k(φ(x_5)); the loss function L_1 = ||Q − Q_GT||_1 is calculated, wherein ||·||_1 is the L1 norm;
g-2) the loss function L_2 is calculated pixel-wise between the feature map Q and the probability map Q_GT, wherein N is the total number of pixels in the preprocessed coronary vessel image I_fi of the i-th patient, s_j is the pixel value of the j-th pixel in the feature map Q, and g_j is the pixel value of the j-th pixel in the probability map Q_GT;
g-3) the total loss is calculated by the formula L = L_1 + L_2 + L_BCE, wherein L_BCE is the cross-entropy loss.
CN202310485605.3A 2023-05-04 2023-05-04 Blood vessel segmentation method of soft attention network fused with geometric information Active CN116580194B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310485605.3A CN116580194B (en) 2023-05-04 2023-05-04 Blood vessel segmentation method of soft attention network fused with geometric information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310485605.3A CN116580194B (en) 2023-05-04 2023-05-04 Blood vessel segmentation method of soft attention network fused with geometric information

Publications (2)

Publication Number Publication Date
CN116580194A CN116580194A (en) 2023-08-11
CN116580194B true CN116580194B (en) 2024-02-06

Family

ID=87533286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310485605.3A Active CN116580194B (en) 2023-05-04 2023-05-04 Blood vessel segmentation method of soft attention network fused with geometric information

Country Status (1)

Country Link
CN (1) CN116580194B (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013155300A1 (en) * 2012-04-11 2013-10-17 The Trustees Of Columbia University In The City Of New York Techniques for segmentation of organs and tumors and objects
US20220020155A1 (en) * 2020-07-16 2022-01-20 Korea Advanced Institute Of Science And Technology Image segmentation method using neural network based on mumford-shah function and apparatus therefor
CN112598686B (en) * 2021-03-03 2021-06-04 Tencent Technology (Shenzhen) Co., Ltd. Image segmentation method and device, computer equipment and storage medium
US20220335600A1 (en) * 2021-04-14 2022-10-20 Ping An Technology (Shenzhen) Co., Ltd. Method, device, and storage medium for lesion segmentation and recist diameter prediction via click-driven attention and dual-path connection

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205538A (en) * 2021-05-17 2021-08-03 Guangzhou University Blood vessel image segmentation method and device based on CRDNet
CN114219814A (en) * 2021-11-03 2022-03-22 South China University of Technology Cup optic disk segmentation method based on depth level set learning
CN114283158A (en) * 2021-12-08 2022-04-05 Chongqing University of Posts and Telecommunications Retinal blood vessel image segmentation method and device and computer equipment
CN114581392A (en) * 2022-02-28 2022-06-03 Shandong Institute of Artificial Intelligence Image segmentation method based on deep learning and anisotropic active contour
CN115661185A (en) * 2022-08-17 2023-01-31 Qingdao University of Science and Technology Fundus image blood vessel segmentation method and system
CN115546570A (en) * 2022-08-25 2022-12-30 Second Affiliated Hospital of Xi'an Jiaotong University Medical College Blood vessel image segmentation method and system based on three-dimensional depth network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shreshth Saini et al.; (M)SLAe-Net: Multi-Scale Multi-Level Attention embedded Network for Retinal Vessel Segmentation; 2021 IEEE 9th International Conference on Healthcare Informatics (ICHI); pp. 1-5 *
Zeng Yan; Research on Context Information Network Models and Level Set Loss Functions in Image Segmentation; China Master's Theses Full-text Database, Information Science and Technology; I138-1054 *

Also Published As

Publication number Publication date
CN116580194A (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN113077471B (en) Medical image segmentation method based on U-shaped network
CN111091589B (en) Ultrasonic and nuclear magnetic image registration method and device based on multi-scale supervised learning
CN109816661B (en) Tooth CT image segmentation method based on deep learning
CN109949276B (en) Lymph node detection method for improving SegNet segmentation network
CN112150425A (en) Unsupervised intravascular ultrasound image registration method based on neural network
CN111369528B (en) Coronary artery angiography image stenosis region marking method based on deep convolutional network
CN113902761B (en) Knowledge distillation-based unsupervised segmentation method for lung disease focus
CN111598867B (en) Method, apparatus, and computer-readable storage medium for detecting specific facial syndrome
CN111008974A (en) Multi-model fusion femoral neck fracture region positioning and segmentation method and system
CN113223005B (en) Thyroid nodule automatic segmentation and grading intelligent system
CN115147600A (en) GBM multi-mode MR image segmentation method based on classifier weight converter
CN112348059A (en) Deep learning-based method and system for classifying multiple dyeing pathological images
CN113782184A (en) Cerebral apoplexy auxiliary evaluation system based on facial key point and feature pre-learning
CN116563533A (en) Medical image segmentation method and system based on target position priori information
Tang et al. Lumen contour segmentation in IVOCT based on N-type CNN
CN114693622A (en) Plaque erosion automatic detection system based on artificial intelligence
CN116580194B (en) Blood vessel segmentation method of soft attention network fused with geometric information
CN114332278A (en) OCTA image motion correction method based on deep learning
Puri et al. Comparitive Analysis on Neural Networks based on their performance in Pneumonia Detection
Zhao et al. Overlapping region reconstruction in nuclei image segmentation
WO2024098379A1 (en) Fully automatic cardiac magnetic resonance imaging segmentation method based on dilated residual network
Ahmad et al. A Modified Memory-Efficient U-Net for Segmentation of Polyps
CN116630628B (en) Aortic valve calcification segmentation method, system, equipment and storage medium
Gokul et al. Ensembling Framework for Pneumonia Detection in Chest X-ray images
Pawar B-spline Based Image Segmentation, Registration and Modeling Neuron Growth

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant