CN106920227A - Retinal blood vessel segmentation method combining deep learning with traditional methods - Google Patents


Info

Publication number
CN106920227A
CN106920227A (application CN201611228597.0A); granted as CN106920227B
Authority
CN
China
Prior art keywords
retinal
layer
network
image
convolutional layers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611228597.0A
Other languages
Chinese (zh)
Other versions
CN106920227B (en)
Inventor
蔡轶珩
高旭蓉
邱长炎
崔益泽
王雪艳
孔欣然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201611228597.0A priority Critical patent/CN106920227B/en
Publication of CN106920227A publication Critical patent/CN106920227A/en
Application granted granted Critical
Publication of CN106920227B publication Critical patent/CN106920227B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30041 - Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

A retinal blood vessel segmentation method based on the combination of deep learning and traditional methods, relating to the fields of computer vision and pattern recognition. The invention uses two kinds of grayscale images (a CLAHE-enhanced image and a Gaussian matched-filter image) as training samples for the network, and applies data augmentation, including elastic deformation and smoothing filtering, to address the scarcity of retinal image data, which broadens the applicability of the invention. By constructing an FCN-HNED deep network for retinal vessel segmentation, the method realizes autonomous feature learning to a great extent: it shares convolutional features across the whole image, reducing feature redundancy, and recovers the class labels of many pixels at once from abstract features. The CLAHE image and the Gaussian matched-filter image of a retinal image are each fed into the network, and the resulting vessel segmentation maps are weighted and averaged to obtain a better and more complete vessel segmentation probability map. This processing greatly improves the robustness and accuracy of vessel segmentation.

Description

Retinal Vessel Segmentation Method Based on the Combination of Deep Learning and Traditional Methods

Technical Field

The invention relates to the fields of computer vision and pattern recognition, and is a retinal blood vessel segmentation method based on the combination of deep learning and traditional methods.

Background Art

Fundus imaging makes it possible to detect abnormalities through retinal imaging, and the observation of retinal blood vessels is particularly important. Diseases such as glaucoma, cataract, and diabetes can all cause lesions in the retinal fundus vessels. The number of patients with retinopathy increases year by year, and without timely treatment these diseases often cause long-term sufferers great pain and even blindness. At present, however, retinopathy is diagnosed manually by specialists: the specialist first marks the blood vessels on the patient's fundus image by hand, and then measures the required parameters such as vessel caliber and bifurcation angle. Manually marking the vessels alone takes about two hours, so diagnosis consumes a great deal of time. To save manpower and material resources, automated vessel extraction is therefore especially important: it can reduce the burden on specialists and also effectively alleviate the shortage of specialists in remote areas. In view of the importance of retinal vessel segmentation, scholars at home and abroad have carried out much research, which falls roughly into unsupervised and supervised methods.

Unsupervised methods extract vessel targets by fixed rules, including matched filtering, morphological processing, vessel tracking, and multi-scale analysis. Supervised learning, also called pixel feature classification or machine learning, classifies each pixel as vessel or non-vessel through training. It mainly involves two stages: feature extraction and classification. The feature extraction stage typically uses methods such as Gabor filtering, Gaussian matched filtering, and morphological enhancement; the classification stage typically uses classifiers such as naive Bayes and SVM. However, such per-pixel decisions do not adequately consider the relationship between each pixel and the pixels in its neighborhood. CNNs therefore emerged: a CNN judges whether the central pixel of an image patch is vessel or non-vessel from the patch's features, and the abstract features learned automatically by its multi-layer structure benefit this classification. Still, classifying each pixel in isolation rarely involves global information, so classification fails in the presence of local lesions; moreover, each image contains at least hundreds of thousands of pixels, and judging them one by one incurs large storage overhead and low computational efficiency.

Summary of the Invention

Aiming at the shortcomings of existing algorithms, the present invention proposes a retinal vessel segmentation method based on the combination of deep learning and traditional methods. First, targeted preprocessing is performed according to the characteristics of retinal vessels: CLAHE (contrast-limited adaptive histogram equalization) raises the contrast between vessels and background, while the traditional method of Gaussian matched filtering strongly enhances the thin vessels. The invention proposes using both grayscale images as training samples for the network. On this basis, data augmentation, including elastic deformation and smoothing filtering, is applied to address the scarcity of retinal image data. This not only enlarges the data set, which benefits the learning and training of the deep network, but, more importantly, simulates retinal images under a wide variety of conditions, all of which can be segmented well by the processing of the invention, broadening its applicability.

Second, by constructing an FCN-HNED deep network for retinal vessel segmentation, the invention fuses the vessel probability map obtained at the end of the FCN (Fully Convolutional Network) with the vessel probability maps produced from shallow-layer information in the manner of HNED (Holistically-Nested Edge Detection), yielding the required retinal vessel segmentation map. The network realizes autonomous learning to a great extent: it shares convolutional features across the entire image, reducing feature redundancy, and recovers the class of many pixels at once from abstract features, achieving an end-to-end, pixel-to-pixel retinal vessel segmentation in which global input maps to global output, a scheme that is both simple and effective. At detection time, the CLAHE image and the Gaussian matched-filter image of a retinal image are each fed into the network, and the resulting vessel segmentation maps are weighted and averaged to obtain a better and more complete vessel segmentation probability map. This processing greatly improves the robustness and accuracy of vessel segmentation.

The invention adopts the following technical solution:

1. Preprocessing

1) The green channel, which has relatively high contrast among the RGB channels of the color retinal image, is extracted. Because of shooting angle and similar issues, the brightness of the acquired fundus image is often uneven, and lesion regions that are too bright or too dark show low contrast and are hard to distinguish from the background; the image is therefore normalized. CLAHE is then applied to the normalized retinal image to improve fundus image quality and equalize its brightness, making it more suitable for subsequent vessel extraction.
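A minimal NumPy sketch of this preprocessing chain. The function name is illustrative, and a plain global histogram equalization stands in for CLAHE here; a full implementation would use a tiled, clip-limited equalization such as OpenCV's `cv2.createCLAHE`:

```python
import numpy as np

def preprocess(rgb):
    """Green-channel extraction + normalization + histogram equalization.

    A global equalization stands in for CLAHE in this sketch.
    """
    green = rgb[:, :, 1].astype(np.float64)            # green channel: highest vessel contrast
    norm = (green - green.min()) / (green.max() - green.min() + 1e-8)
    g8 = (norm * 255).astype(np.uint8)
    # Global histogram equalization (simplified stand-in for CLAHE):
    hist = np.bincount(g8.ravel(), minlength=256)
    cdf = hist.cumsum() / g8.size                      # cumulative distribution in [0, 1]
    return cdf[g8]                                     # equalized image in [0, 1]
```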

While enhancing the contrast between vessels and background, CLAHE largely preserves the intrinsic characteristics of the retinal vessels. However, the thin vessels remain very similar to the background and cannot be segmented well in subsequent deep learning. To address this, the invention exploits the fact that the cross-sectional intensity profile of a vessel follows a Gaussian shape, and applies Gaussian matched filtering to the CLAHE-processed retinal vessels so that the thin vessels are brought out as much as possible. Since vessel direction is arbitrary, Gaussian kernel templates in 12 different directions are used for matched filtering of the retinal image, and the maximum response is taken as the response value of each pixel. The two-dimensional Gaussian matched filter kernel function k(x, y) can be expressed as:

k(x, y) = -exp(-x²/(2σ²)), for |y| ≤ L/2  (1)

where σ is the standard deviation of the Gaussian profile, L is the length of the retinal vessel segment along the y-axis at which the kernel is truncated, and the width of the filter window is chosen as [-3σ, 3σ], i.e. the value range of x in the kernel function. A small σ, set to 0.5, is chosen so that the thin vessels are enhanced as much as possible.
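The kernel and the 12-direction maximum response can be sketched as follows. This is an illustrative implementation, not the patent's code: the vessel length L = 9 and the zero-mean normalization of the kernel (a common matched-filter convention, so that flat background yields zero response) are assumptions, while σ = 0.5, the [-3σ, 3σ] window, and the 12 orientations follow the text:

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_matched_kernel(sigma=0.5, L=9, theta=0.0):
    """Kernel (1): k(x, y) = -exp(-x^2 / (2 sigma^2)) for |y| <= L/2,
    x in [-3 sigma, 3 sigma], rotated by angle theta (radians)."""
    half = int(np.ceil(max(3 * sigma, L / 2)))
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate frame by theta.
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    k = -np.exp(-xr ** 2 / (2 * sigma ** 2))
    mask = (np.abs(xr) <= 3 * sigma) & (np.abs(yr) <= L / 2)
    k = k * mask
    k[mask] -= k[mask].mean()        # zero mean inside the support (assumption)
    return k

def matched_filter_response(img, n_angles=12, sigma=0.5, L=9):
    """Maximum response over 12 kernel orientations, 15 degrees apart."""
    angles = [i * np.pi / n_angles for i in range(n_angles)]
    responses = [convolve(img.astype(float), gaussian_matched_kernel(sigma, L, a))
                 for a in angles]
    return np.max(responses, axis=0)
```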

To fully account for both the overall characteristics of the retinal image and the characteristics of its thin vessels, both the CLAHE-processed vessel image and the Gaussian matched-filter image are used as training samples, which greatly improves the segmentation performance of the network.

2. Data Augmentation and Construction of Training Samples

Training a deep network requires a large amount of data, and the existing retinal images alone are far from sufficient. The training data are therefore expanded in several ways to enlarge the data set and improve training and detection. The augmentation methods are:

1) The preprocessed images are translated left, right, up, and down by 20 pixels each, so that the network learns translation invariance.

2) The images from 1) are rotated by 45°, 90°, 125°, and 180°, and the largest inscribed rectangle is cropped from each. This transformation not only strengthens the rotation robustness of the training data but also expands the data to 5 times its original size.

3) General data augmentation never considers the blur that may appear in retinal images. The invention, however, accounts for the partial blurring that can be caused in various situations, for example by camera shake or inadvertent patient movement. Therefore, 25% of the image set from 2) is selected and blurred with 3×3 and 5×5 median filters respectively, so that the network applies broadly to retinal images with various degrees of blur.

4) Previous augmentation of retinal image data commonly used only translation, scaling, rotation, and the like, which falls far short of covering the variety of retinal images. In view of this, and considering the diversity of vessel direction and shape in the retina, 25% of the image set from 3) undergoes random elastic deformation. This augmentation is very important for retinal vessel segmentation: it helps the network learn the intricate vessels running in all directions, improving segmentation accuracy in practical applications.

5) Since an FCN accepts images of any size, the images from 4) are scaled to 50% and 75% of their size, further augmenting the data.

Naturally, the expert-annotated ground-truth vessel segmentation maps undergo exactly the same processing, so that they remain in one-to-one correspondence with the samples. Three quarters of the constructed training sample data are used as the training set and one quarter as the validation set.
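Steps 2) to 4) above can be sketched with `scipy.ndimage`. The deformation strength `alpha`, smoothing `sigma`, and `seed` are illustrative values, and in practice the same displacement field must be applied to an image and to its ground-truth map so the pair stays in correspondence:

```python
import numpy as np
from scipy.ndimage import rotate, median_filter, gaussian_filter, map_coordinates

def augment_rotate(img, angle):
    """Step 2): rotate by `angle` degrees; cropping the largest inscribed
    rectangle is omitted for brevity."""
    return rotate(img, angle, reshape=False, mode='nearest')

def augment_median_blur(img, size=3):
    """Step 3): simulate partial blur with a 3x3 (or 5x5) median filter."""
    return median_filter(img, size=size)

def augment_elastic(img, alpha=10.0, sigma=4.0, seed=0):
    """Step 4): random elastic deformation -- smooth a random displacement
    field with a Gaussian, then resample the image along it."""
    rng = np.random.default_rng(seed)
    dx = gaussian_filter(rng.uniform(-1, 1, img.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, img.shape), sigma) * alpha
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return map_coordinates(img, [ys + dy, xs + dx], order=1, mode='reflect')
```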

3. FCN-HNED Network Construction

FCN network: A typical FCN consists mainly of five parts: an input layer, convolutional layers, downsampling layers, upsampling (deconvolution) layers, and an output layer. The network constructed in the invention is:

input layer; two convolutional layers (C1, C2); first downsampling layer (pool1); two convolutional layers (C3, C4); second downsampling layer (pool2); two convolutional layers (C5, C6); third downsampling layer (pool3); two convolutional layers (C7, C8); fourth downsampling layer (pool4); two convolutional layers (C9, C10); first upsampling layer (U1); two convolutional layers (C11, C12); second upsampling layer (U2); two convolutional layers (C13, C14); third upsampling layer (U3); two convolutional layers (C15, C16); fourth upsampling layer (U4); two convolutional layers (C17, C18); target (output) layer. This forms a symmetric U-shaped deep network architecture.
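The layer sequence above can be captured in a small bookkeeping sketch. The patent specifies the layer order but not the filter counts, so only spatial sizes are tracked: with same-padding convolutions, each pooling layer halving and each upsampling layer doubling the resolution, four poolings and four upsamplings return the output to the input size, which is what makes the U shape symmetric:

```python
# Layer sequence of the U-shaped network (names follow the patent).
ENCODER = ["C1", "C2", "pool1", "C3", "C4", "pool2",
           "C5", "C6", "pool3", "C7", "C8", "pool4", "C9", "C10"]
DECODER = ["U1", "C11", "C12", "U2", "C13", "C14",
           "U3", "C15", "C16", "U4", "C17", "C18", "output"]

def spatial_size(h, w, layers):
    """Track the feature-map size: same-padding convs keep it, each pool
    halves it, each upsampling (bilinear) doubles it."""
    for name in layers:
        if name.startswith("pool"):
            h, w = h // 2, w // 2
        elif name.startswith("U"):
            h, w = h * 2, w * 2
        # conv / output layers: zero padding keeps the size unchanged
    return h, w
```

For an illustrative 480×480 input, the encoder bottoms out at 30×30 and the decoder restores 480×480; the input sides must be divisible by 16 for the four poolings to invert cleanly.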

The low layers of the FCN have high feature resolution, while the high-layer information carries stronger semantics and is robust for classifying vessels in partially diseased regions of the retinal image. At the same time, however, the FCN's final output, although the same size as the input, loses many small targets and local details. The invention therefore learns rich multi-layer representations from the shallow vessel information under deep supervision, in the manner of HNED (holistically-nested edge detection), which largely solves the problem of blurred target edges. Specifically, a softmax classifier is added after each of layers C2, C4, C6, and C8, so that the hidden-layer information, with the ground truth as label, learns a retinal vessel probability map; these are called side output 1, side output 2, side output 3, and side output 4. On this basis, the four side outputs are fused with the final output layer, forming the FCN-HNED network structure. The shallow information and the output-layer information complement each other, yielding multi-scale, multi-level fused feature maps closer to the target samples. This contributes so much to the refinement of the segmented vessels that no dedicated refinement step is needed afterwards.

All convolutional layers in the invention use zero padding to produce feature maps of the same size. The pooling layers reduce the number of features and parameters, but that is not their only purpose: max-pooling reduces the shift in the estimated mean caused by convolutional-layer parameter error and preserves more texture information. The max-pooling layers in the invention use a sampling rate of 2. Upsampling is the process of bilinear interpolation.

Throughout the model, the activation function is ReLU except for the softmax classification layers, and the loss function is cross-entropy.
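The activation, per-pixel softmax, and cross-entropy loss can be sketched in NumPy as follows; the array shapes and the two-class (vessel / non-vessel) formulation are illustrative:

```python
import numpy as np

def relu(z):
    """Rectified linear unit activation."""
    return np.maximum(0.0, z)

def softmax(z):
    """Per-pixel softmax over the class axis (axis 0): z has shape
    (classes, H, W)."""
    e = np.exp(z - z.max(axis=0, keepdims=True))   # subtract max for stability
    return e / e.sum(axis=0, keepdims=True)

def cross_entropy(p_vessel, label, eps=1e-12):
    """Mean binary cross-entropy between the predicted vessel probability
    map and the 0/1 ground-truth map."""
    p = np.clip(p_vessel, eps, 1.0 - eps)
    return float(-np.mean(label * np.log(p) + (1 - label) * np.log(1 - p)))
```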

Training: Once the FCN-HNED network is built, it is trained to perform automatic feature extraction and learning on the images, with 128 images input at each iteration, stopping once the network converges.

Testing: The CLAHE image and the Gaussian matched-filter image of each retinal image's green channel are separately input into the trained network for testing, yielding two fused retinal vessel segmentation maps; these are weighted and averaged to obtain the final retinal vessel segmentation probability map.

4. Post-processing

The retinal vessel probability map obtained in testing is binarized to produce the segmentation map.
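The weighted averaging of the two branch probability maps at test time and this binarization can be sketched together. The patent states neither the fusion weights nor the threshold, so an equal weighting and a fixed 0.5 threshold are assumed here:

```python
import numpy as np

def fuse_probability_maps(p_clahe, p_matched, w=0.5):
    """Weighted average of the two network outputs; w is the weight of the
    CLAHE-branch map (equal weighting is an assumption)."""
    return w * np.asarray(p_clahe) + (1.0 - w) * np.asarray(p_matched)

def binarize(prob_map, threshold=0.5):
    """Threshold the fused probability map into a binary vessel map.
    The 0.5 threshold is an assumption; Otsu's method is an alternative."""
    return (np.asarray(prob_map) >= threshold).astype(np.uint8)
```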

Beneficial Effects

1. According to the different characteristics of retinal vessels, the invention adopts targeted data processing. The quality of the training data directly determines whether the trained model is reliable and whether accuracy reaches the required level. By using blurring operations, elastic deformation, and the like, the invention simulates the wide variety of retinal data that may occur, while expanding the data to a quantity sufficient to avoid overfitting during training; this also aids subsequent detection and thereby improves retinal vessel segmentation accuracy.

2. The invention inputs the CLAHE-processed retinal image and the Gaussian matched-filter image into the network separately for training and learning. Not only are the properties of the retinal vessels fully learned at every level of representation, but the matched-filter image also compensates for the CLAHE image's lack of clarity on thin vessels, greatly improving segmentation performance.

3. By constructing the deep learning network FCN-HNED, the invention performs fast automatic feature extraction on retinal images. It extracts features of the retinal fundus image at different levels, learns the relationship between each pixel and its many surrounding neighborhoods, and expresses the mid- and high-level features of the vessel map well, so that the internal features of vessels and non-vessels are well distinguished. It achieves end-to-end, pixel-to-pixel vessel segmentation, many times more efficient than traditional per-pixel classification.

4. The invention deeply fuses the four side outputs of the shallow features with the final output of the FCN, achieving refined and robust vessel segmentation. The resulting segmentation maps agree well with experts' manual segmentations. At the same time, retinal vessel segmentation is automated to a great extent, greatly reducing the consumption of manpower and material resources.

Brief Description of the Drawings

Figure 1 is the overall flowchart of the invention;

Figure 2 shows the gray-level distribution of a vessel cross-section: (a) a vessel segment; (b) its gray levels;

Figure 3 shows the preprocessing results: (a) original image; (b) image after CLAHE; (c) image after Gaussian matched filtering;

Figure 4 is the FCN-HNED network structure;

Figure 5 shows retinal vessel segmentation results: (a) original image; (b) retinal vessel segmentation map; (c) manual segmentation by the first expert.

Detailed Description

The invention is described in detail below with reference to the accompanying drawings:

The technical block diagram of the invention is shown in Figure 1. The specific implementation steps are as follows:

1. Preprocessing

Every retinal fundus image, whether in the training set or the test set, undergoes the same preprocessing.

1) The green channel, which has relatively high contrast among the RGB channels of the color retinal image, is extracted. Because of shooting angle and similar issues, the brightness of the acquired fundus image is often uneven, and lesion regions that are too bright or too dark show low contrast and are hard to distinguish from the background; the image is therefore normalized. CLAHE is then applied to the normalized retinal image to improve fundus image quality and equalize its brightness, making it more suitable for subsequent vessel extraction.

While enhancing the contrast between vessels and background, CLAHE largely preserves the intrinsic characteristics of the retinal vessels. However, the thin vessels remain very similar to the background and cannot be segmented well in subsequent deep learning. To address this, the invention exploits the Gaussian shape of a vessel's cross-sectional intensity profile and applies Gaussian matched filtering to the retinal image. As shown in Figure 2, (a) is a vessel gray-level image and (b) shows the gray values of the vessel cross-section; thin vessels likewise show a Gaussian cross-sectional profile. Therefore, the CLAHE-processed retinal vessels undergo Gaussian matched filtering. Since vessel direction is arbitrary, Gaussian kernel templates in 12 different directions are used for matched filtering of the retinal image, and the corresponding maximum response is taken as each pixel's response value.

To fully account for both the overall characteristics of the retinal image and the characteristics of its thin vessels, both the CLAHE-processed vessel image and the Gaussian matched-filter image are used as training samples, which greatly improves the segmentation performance of the network.

2. Data Augmentation and Construction of Training Samples

Training a deep network requires a large amount of data, and the existing retinal images alone are far from sufficient. The training data are therefore expanded in several ways to enlarge the data set and improve training and detection. The augmentation methods are:

1) The preprocessed images are translated left, right, up, and down by 20 pixels each, so that the network learns translation invariance.

2) The images from 1) are rotated by 45°, 90°, 125°, and 180°, and the largest inscribed rectangle is cropped from each. This transformation not only strengthens the rotation robustness of the training data but also expands the data to 5 times its original size.

3) Median filtering has never been used in general data augmentation. The invention, however, accounts for the partial blurring of the retinal image that can be caused in various situations, for example by camera shake or inadvertent patient movement. Therefore, 25% of the images from 2) are blurred with 3×3 and 5×5 median filters respectively, so that the network applies broadly to retinal images with various degrees of blur.

4) Previous augmentation of retinal image data commonly used only translation, scaling, rotation, and the like, which falls far short of covering the variety of retinal images. In view of this, and considering the diversity of vessel direction and shape in the retina, 25% of the images from 3) undergo random elastic deformation. This augmentation is very important for retinal vessel segmentation: it helps the network learn the intricate vessels running in all directions, improving segmentation accuracy in practical applications.

5) Since an FCN accepts images of any size, the images from step 4) are scaled by 50% and 75% to further augment the data.

The expert ground-truth vessel segmentations are, of course, processed identically so that they remain in one-to-one correspondence with the samples. Three quarters of the constructed training data are used as the training set and one quarter as the validation set.
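The augmentation steps 1)–5) above can be sketched as follows. This is an illustrative pipeline, not the patented implementation: the function name is our own, the random 25% subset selection and the elastic deformation of step 4) are omitted for brevity, and `scipy.ndimage` is assumed as the image-processing backend.

```python
import numpy as np
from scipy.ndimage import shift, rotate, median_filter, zoom

def augment(img, shift_px=20):
    """Return augmented copies of one preprocessed retinal image."""
    out = []
    # 1) 20-pixel translations left/right/up/down (translation invariance)
    for dy, dx in [(0, -shift_px), (0, shift_px), (-shift_px, 0), (shift_px, 0)]:
        out.append(shift(img, (dy, dx), mode='nearest'))
    # 2) rotations by 45, 90, 125, 180 degrees (cropping omitted here)
    for angle in (45, 90, 125, 180):
        out.append(rotate(img, angle, reshape=False, mode='nearest'))
    # 3) 3x3 and 5x5 median-filter blurring (applied to a 25% subset in the text)
    out.append(median_filter(img, size=3))
    out.append(median_filter(img, size=5))
    # 5) 50% and 75% rescaling (an FCN accepts any input size)
    out.append(zoom(img, 0.5))
    out.append(zoom(img, 0.75))
    return out
```

The ground-truth segmentations would be passed through the same transforms to keep the one-to-one correspondence described above.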

3. FCN-HNED network construction, training, and testing

FCN network: a typical FCN consists of five parts: an input layer, convolutional layers, downsampling layers, upsampling (deconvolution) layers, and an output layer. The network constructed here is: input layer; two convolutional layers (C1, C2); first downsampling layer (pool1); two convolutional layers (C3, C4); second downsampling layer (pool2); two convolutional layers (C5, C6); third downsampling layer (pool3); two convolutional layers (C7, C8); fourth downsampling layer (pool4); two convolutional layers (C9, C10); first upsampling layer (U1); two convolutional layers (C11, C12); second upsampling layer (U2); two convolutional layers (C13, C14); third upsampling layer (U3); two convolutional layers (C15, C16); fourth upsampling layer (U4); two convolutional layers (C17, C18); and the target (output) layer. Together these form a symmetric U-shaped deep network architecture.

The convolution process is implemented as follows:

f(X; W, b) = W *s X + b    (2)

Here f(X; W, b) is the output feature map, X is the input feature map from the previous layer, W and b are the convolution kernel and bias, and *s denotes the convolution operation. Unlike a traditional CNN, the FCN replaces all of the final fully connected layers with convolutional layers. However, the series of convolution and downsampling operations makes the feature maps smaller and smaller, so to restore the output to the same size as the input image, the FCN uses upsampling, i.e. deconvolution.
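Equation (2) for a single input and output feature map can be illustrated with a minimal numpy/scipy sketch. The function name and the use of "same" zero padding are assumptions (the zero padding matches what the description states for the intermediate layers), and, as in most CNN frameworks, *s is implemented as cross-correlation.

```python
import numpy as np
from scipy.signal import correlate2d

def conv_layer(X, W, b):
    """Equation (2): f(X; W, b) = W *s X + b, one feature map in and out.
    'same' zero padding keeps the output map the same size as the input."""
    return correlate2d(X, W, mode='same', boundary='fill', fillvalue=0) + b
```

In the full network each output map sums such correlations over all input maps before the bias is added; this sketch shows only the single-channel case.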

The intermediate convolutional layers keep the feature maps the same size by zero padding. Throughout the symmetric U-shaped network, pairs of consecutive 3×3 convolution kernels with stride 1 are applied repeatedly, and each convolutional layer is followed by a ReLU activation. The pooling layers reduce the number of features and parameters, but that is not their only purpose: they also confer a degree of invariance to rotation, translation, and so on. This structure uses max-pooling with a 2×2 kernel and stride 2, which reduces the shift in the estimated mean caused by convolution-layer parameter error and preserves more texture information. The number of feature maps doubles with each downsampling step, and conversely halves with each upsampling step. In addition, a 1×1 convolution kernel in the last layer maps the 64 feature maps onto the target (standard) output for training.
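The layer arrangement described above (C1–C18, pool1–pool4, U1–U4) can be sketched in PyTorch as follows. This is an illustrative reconstruction, not the patented code: class and function names are our own, encoder–decoder skip concatenations are deliberately omitted (the text describes fusion through side outputs rather than U-Net-style concatenation), and a sigmoid stands in for the final classifier on a single vessel-probability map.

```python
import torch
import torch.nn as nn

def double_conv(cin, cout):
    # two 3x3 convolutions, stride 1, zero padding, each followed by ReLU
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class UShapedFCN(nn.Module):
    def __init__(self, in_ch=1, base=64):
        super().__init__()
        # C1..C8 with pool1..pool4; feature maps double at each downsampling
        self.enc = nn.ModuleList([double_conv(in_ch, base),
                                  double_conv(base, base * 2),
                                  double_conv(base * 2, base * 4),
                                  double_conv(base * 4, base * 8)])
        self.pool = nn.MaxPool2d(2, 2)          # 2x2 max-pooling, stride 2
        self.bottom = double_conv(base * 8, base * 16)   # C9, C10
        self.up = nn.Upsample(scale_factor=2, mode='bilinear',
                              align_corners=False)       # U1..U4 (bilinear)
        # C11..C18; feature maps halve at each upsampling
        self.dec = nn.ModuleList([double_conv(base * 16, base * 8),
                                  double_conv(base * 8, base * 4),
                                  double_conv(base * 4, base * 2),
                                  double_conv(base * 2, base)])
        self.head = nn.Conv2d(base, 1, 1)       # final 1x1 convolution

    def forward(self, x):
        for block in self.enc:
            x = self.pool(block(x))
        x = self.bottom(x)
        for block in self.dec:
            x = block(self.up(x))
        return torch.sigmoid(self.head(x))
```

With `base=64` and a 512×512 input, the bottom blocks produce 1024 feature maps of size 32×32, matching the dimensions given for C9 and C10 below.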

Throughout the model, ReLU is used as the activation function except in the final Softmax classification layer, and the loss function is cross-entropy.

HNED structure: vessel segmentation is treated as an edge-detection problem, and a deeply supervised network is used to obtain four vessel probability maps from the shallower layers of the FCN. Specifically, a softmax classifier is added after each of C2, C4, C6, and C8; supervised against the standard segmentation, this exposes the hidden layers' information in the form of retinal vessel probability maps, called side output 1, side output 2, side output 3, and side output 4, realizing the learning of multi-scale feature maps.
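A side output of the kind just described might look like the following PyTorch sketch. The class name is hypothetical; the patent specifies a softmax classifier, which for a single vessel-probability map is written here as its two-class equivalent, a sigmoid, and the upsampling back to input resolution is assumed to be bilinear as stated for the main network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SideOutput(nn.Module):
    """Deeply supervised side output: a 1x1 convolution scores the hidden
    feature maps, which are then bilinearly upsampled to the input size and
    squashed into a vessel probability map."""
    def __init__(self, cin):
        super().__init__()
        self.score = nn.Conv2d(cin, 1, kernel_size=1)

    def forward(self, feat, out_size):
        s = self.score(feat)
        s = F.interpolate(s, size=out_size, mode='bilinear',
                          align_corners=False)
        return torch.sigmoid(s)
```

During training each such map would be compared against the ground truth with the cross-entropy loss, which is what makes the supervision "deep".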

The low-level features of the FCN have relatively high resolution, while the high-level features carry stronger semantic information and are robust for classifying vessels in partially diseased regions of the retinal image. However, the final output, although the same size as the input sample, loses many small targets and local details. Therefore, the shallow retinal vessel information is learned under deep supervision using the HNED edge-detection approach, yielding a rich multi-level representation that largely resolves the problem of blurred target edges. On this basis, the four side outputs are fused with the final output layer to form the FCN-HNED network structure, as shown in Figure 4. For a 512×512 input, C1 and C2 each apply 64 3×3 filters, producing 64 feature maps that are kept at 512×512 by zero padding the original image; each downsampling step doubles the number of feature maps, so that at the bottom (C9, C10) there are 1024 feature maps of size 32×32. The subsequent convolutions proceed as before, and upsampling is implemented by bilinear interpolation.
This network structure complementarily fuses the four shallow side-output vessel probability maps with the output-layer vessel probability map of the FCN; training then yields better feature maps, closer to the target samples, which refines the segmented vessels so effectively that no dedicated subsequent refinement step is needed.

Fusion process: to make direct use of the side-output probability maps and the FCN output probability map after upsampling, they are fused as

Ŷ_fuse = σ( Σ_{m=1}^{4} h_m · A_side^(m) + h · A_out )

where σ(·) denotes the sigmoid function, A_side^(m) denotes the m-th side output, and h_m and h are the fusion weights of the four side outputs and the final FCN output respectively, all initialized to 1/5. The loss function for the weighted fusion is:

L_fuse = Dist(Y, Ŷ_fuse)

where Y denotes the standard vessel segmentation map, i.e. the ground truth, and Dist(·,·) denotes the distance, i.e. the degree of difference, between the fused probability map and the standard vessel segmentation map. The weights are adjusted by learning until the network gradually approaches convergence; the loss function is minimized with SGD (stochastic gradient descent).
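The weighted fusion and its cross-entropy loss can be sketched as follows. This is an assumed reading of the fusion: scalar weights h_1..h_4 and h, all initialized to 1/5 as stated above, applied to the five maps before the sigmoid; the class and function names are our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    """Fuse the four side-output maps and the FCN output map with learnable
    scalar weights, all initialized to 1/5."""
    def __init__(self, n_maps=5):
        super().__init__()
        self.h = nn.Parameter(torch.full((n_maps,), 1.0 / n_maps))

    def forward(self, maps):  # maps: list of (N, 1, H, W) activations
        fused = sum(w * m for w, m in zip(self.h, maps))
        return torch.sigmoid(fused)

def fusion_loss(fused, gt):
    # cross-entropy distance Dist(., .) between the fused probability map
    # and the ground truth, to be minimized with SGD
    return F.binary_cross_entropy(fused, gt)
```

Minimizing `fusion_loss` with an SGD optimizer adjusts both the network parameters and the fusion weights jointly.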

Training: once the FCN-HNED network is built, it can be trained to perform automatic feature extraction and learning on the images, in two steps. First, 1280 relatively clear, straightforward images are manually selected and used to train the model constructed here, with 128 images input per generation; once the model converges, its parameters are saved. Because the content of these 1280 images is intuitive and simple and the vessel/non-vessel semantics are clear, the model converges quickly. Second, the model is retrained on the full training set, but the model parameters are initialized with those obtained in the first step; this greatly reduces training time and speeds up the convergence of the overall model.

Training: each training image is propagated layer by layer through the convolutional neural network to output a fused vessel probability map, and the per-pixel classification error between this map and the corresponding ground-truth map is computed. Following the minimum-error criterion, the error is fed back layer by layer to correct the parameters of every layer of the constructed deep convolutional network. When the error has gradually fallen and stabilized, the network is considered converged, training ends, and the desired detection model is produced.

Testing: the CLAHE image and the Gaussian matched-filter image of each retinal fundus image's green channel are each input to the trained network, yielding two fused retinal vessel segmentation maps. These are weighted-averaged to capture more vessel information, producing the final retinal vessel segmentation probability map.
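The test-time combination of the two network outputs can be sketched as below. The equal weighting is an assumption; the patent states that a weighted average is used but does not give the weight values, and the function name is our own.

```python
import numpy as np

def average_predictions(p_clahe, p_gauss, w=0.5):
    """Weighted average of the two fused vessel probability maps obtained
    from the CLAHE input and the Gaussian matched-filter input."""
    return w * p_clahe + (1.0 - w) * p_gauss
```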

4. Post-processing

The combined retinal vessel probability map is binarized to obtain the segmentation map, a binary image consistent with the expert segmentation. Parameter evaluation of the segmentation results shows an accuracy above 96%, as shown in Figure 5.
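The binarization and a simple accuracy check can be sketched as below. The 0.5 threshold and plain pixel accuracy are assumptions: the patent does not specify the binarization threshold or the exact evaluation parameters behind the 96% figure.

```python
import numpy as np

def binarize(prob_map, threshold=0.5):
    """Threshold the fused vessel probability map into a binary segmentation."""
    return (prob_map >= threshold).astype(np.uint8)

def pixel_accuracy(pred, gt):
    """Fraction of pixels that agree with the expert ground truth."""
    return float((pred == gt).mean())
```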

Claims (1)

1. A retinal blood vessel segmentation method combining deep learning with conventional methods, characterized by comprising the following steps:
(1) Preprocessing
Extract the green channel from the three RGB channels of the color retinal image and normalize it; apply CLAHE to the normalized retinal image, then apply Gaussian matched filtering to the CLAHE-processed retinal vessels; use both the CLAHE-processed retinal vessel image and the Gaussian matched-filter image as training samples.
(2) Data augmentation and training-sample construction
Data augmentation:
1) translate the preprocessed images 20 pixels to the left, right, up, and down respectively, realizing translation invariance of the network's learning;
2) rotate the images from 1) by 45°, 90°, 125°, and 180° respectively and crop the largest inscribed rectangle;
3) apply 3×3 and 5×5 median-filter blurring respectively to 25% of the image set from 2);
4) apply random elastic deformation to 25% of the image set from 3);
5) scale the images from 4) by 50% and 75% to further augment the data.
Process the expert ground-truth vessel segmentations identically, so that they correspond one-to-one with the samples.
(3) FCN-HNED network construction
The constructed network is:
Input layer; two convolutional layers (C1, C2); first downsampling layer (pool1); two convolutional layers (C3, C4); second downsampling layer (pool2); two convolutional layers (C5, C6); third downsampling layer (pool3); two convolutional layers (C7, C8); fourth downsampling layer (pool4); two convolutional layers (C9, C10); first upsampling layer (U1); two convolutional layers (C11, C12); second upsampling layer (U2); two convolutional layers (C13, C14); third upsampling layer (U3); two convolutional layers (C15, C16); fourth upsampling layer (U4); two convolutional layers (C17, C18); and the target (output) layer; forming a symmetric U-shaped deep network architecture.
Add a softmax classifier after each of the C2, C4, C6, and C8 layers, so that the information of the hidden layers is learned, with the ground truth as label, into retinal vessel probability maps, called side output 1, side output 2, side output 3, and side output 4; fuse the four side outputs with the final output layer, thereby forming the FCN-HNED network structure.
Training: after the FCN-HNED network is constructed, train it to perform automatic feature extraction and learning on the images, inputting 128 images per generation and stopping after the network converges.
Testing: input the CLAHE image and the Gaussian matched-filter image of each retinal image's green channel separately into the trained network, obtaining two fused retinal vessel segmentation maps; weighted-average them to obtain the final retinal vessel segmentation probability map.
(4) Post-processing
Binarize the retinal vessel probability map obtained in testing to obtain the segmentation map.
CN201611228597.0A 2016-12-27 2016-12-27 Retinal blood vessel segmentation method combining deep learning with conventional methods. Expired - Fee Related. CN106920227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611228597.0A CN106920227B (en) 2016-12-27 2016-12-27 The Segmentation Method of Retinal Blood Vessels combined based on deep learning with conventional method


Publications (2)

Publication Number Publication Date
CN106920227A true CN106920227A (en) 2017-07-04
CN106920227B CN106920227B (en) 2019-06-07




Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120178099A1 (en) * 2011-01-10 2012-07-12 Indian Association For The Cultivation Of Science Highly fluorescent carbon nanoparticles and methods of preparing the same
CN105825509A (en) * 2016-03-17 2016-08-03 电子科技大学 Cerebral vessel segmentation method based on 3D convolutional neural network
CN106096654A (en) * 2016-06-13 2016-11-09 南京信息工程大学 A kind of cell atypia automatic grading method tactful based on degree of depth study and combination
CN106203327A (en) * 2016-07-08 2016-12-07 清华大学 Lung tumor identification system and method based on convolutional neural networks


Non-Patent Citations (1)

Title
Saining Xie et al., "Holistically-Nested Edge Detection," 2015 IEEE International Conference on Computer Vision *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190607