CN101667289A - Retinal image segmentation method based on NSCT feature extraction and supervised classification - Google Patents


Info

Publication number
CN101667289A
CN101667289A · CN200810232337A (application)
Authority
CN
China
Prior art keywords
retinal
image
layer
segmented
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200810232337A
Other languages
Chinese (zh)
Other versions
CN101667289B (en)
Inventor
钟桦
焦李成
侯鹏
王爽
侯彪
刘芳
马文萍
公茂果
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN200810232337XA priority Critical patent/CN101667289B/en
Publication of CN101667289A publication Critical patent/CN101667289A/en
Application granted granted Critical
Publication of CN101667289B publication Critical patent/CN101667289B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a retinal image segmentation method based on NSCT feature extraction and supervised classification, relating to medical image processing. The steps are: (1) for the retinal training image and the retinal image to be segmented, obtain the region of interest from the red component; (2) iteratively extend the region-of-interest edge of the green component of the retinal training image and of the retinal image to be segmented; (3) apply the NSCT to each extended image, decomposing it into i levels; (4) extract a one-dimensional feature from the j directional subband coefficients of each level, assemble the features level by level into a feature vector, and normalize it; (5) build training samples from the normalized feature vectors of the retinal training image; (6) select a classifier, train it on the training samples, and feed the normalized feature vectors of the image to be segmented into the classifier to segment that image. The invention yields clear segmentation edges and high accuracy, and is applicable to retinal vessel segmentation in medical images.

Figure 200810232337

Description

Retinal Image Segmentation Method Based on NSCT Feature Extraction and Supervised Classification

Technical Field

The invention belongs to the field of image processing technology, relates to applications of retinal detection, and can be used in medicine to extract retinal blood vessels from retinal images.

Background Art

The development of medicine is closely tied to human health, so digital image processing has attracted strong interest from the biomedical community from the start. As early as the late 1970s, surveys of the literature identified medical imaging as one of the most widespread applications of image processing. Medicine, in both basic research and clinical practice, involves an exceptionally wide variety of image processing tasks. However, the technical difficulty of processing medical images has kept many methods from reaching clinical practicality. In recent years, as the cost of digital image processing equipment has fallen, the use of digital image processing to improve the quality of various medical images has become practical.

Cardiovascular and cerebrovascular diseases such as hypertension, cerebral arteriosclerosis, and coronary arteriosclerosis are currently the leading causes of death and disability among the elderly in China, and the tissue damage they cause appears first as changes at the level of the microcirculation and microvasculature. The retinal microvessels of the ocular fundus are the only relatively deep microvessels in the human body that can be observed directly and non-invasively, and the degree of their alteration is closely related to the course, severity, and prognosis of diseases such as hypertension. Examination of the retinal vascular system can thus reveal hypertension, diabetes, arteriosclerosis, and other diseases. In retinal images the vessels are dominant and structurally stable, so reliable vessel extraction is a prerequisite for retinal image analysis and processing.

Existing retinal segmentation methods fall into two main categories. The first enhances the contrast between vessels and background, for example with Gaussian filters or the Hessian matrix, and then applies further operations such as thresholding or region growing. In the 2003 paper "Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images", Jiang et al. proposed an adaptive local thresholding scheme based on verification-driven multithreshold probing: binarization at a hypothesized threshold first yields a candidate object, and a verification procedure then decides whether to accept or reject it.

The second category uses a classifier to segment the vessels; here the key problem is how to construct the feature vector. In the 2004 paper "Ridge based vessel segmentation in color images of the retina", J. Staal et al. proposed obtaining feature vectors by ridge detection and then segmenting the vessels by classification. In the 2006 paper "Retinal Vessel Segmentation Using the 2-D Gabor Wavelet and Supervised Classification", Soares et al. proposed extracting features from the multiscale and multidirectional properties of the Gabor wavelet and then segmenting by supervised classification.

In retinal vessel segmentation the most important goal is to raise the vessel detection rate. However, because the gray level of retinal images is globally uneven, the contrast between vessels and background is poor, and noise and various lesion regions interfere, all of the above methods leave some error in the vessel segmentation results, and their accuracy needs further improvement.

Summary of the Invention

The object of the invention is to provide a retinal image segmentation method based on the Nonsubsampled Contourlet Transform (NSCT) and supervised classification that overcomes the shortcomings of the prior art and further improves segmentation accuracy.

To achieve the above object, the technical solution of the invention comprises the following steps:

I. Feature extraction

(1) For the retinal training image and the retinal image to be segmented, obtain the region of interest (ROI) from the red component;

(2) Iteratively extend the ROI edge of the green component of the retinal training image and of the retinal image to be segmented;

(3) Apply the NSCT to the extended retinal training image and to the extended image to be segmented, decomposing each into i levels with j directional subbands per level;

(4) Extract a one-dimensional feature from the j directional subband coefficients of each level, collect the features level by level into a feature vector, and normalize it.

II. Classifier training and segmentation

1) Build training samples from the normalized feature vectors of the retinal training image;

2) Select a classifier, train it on the training samples, and feed the normalized feature vectors of the retinal image to be segmented into the classifier to segment that image.

In the above retinal image segmentation method, extracting a one-dimensional feature from the j directional subband coefficients of each level and assembling the features level by level into a feature vector, as described in step (3), proceeds as follows:

(3a) For the j directional subband coefficients of each decomposition level, the comparison between vessel gray level and background gray level in the retinal image determines which coefficient is selected: if the vessels are darker than the background, select the minimum of the j coefficients as the feature; if the vessels are brighter than the background, select the maximum;

(3b) Repeat the feature extraction of step (3a) level by level and collect the resulting features into a feature vector;

(3c) Add the gray value of the green component of the retinal training image and of the image to be segmented to the feature vector as one more feature dimension, yielding the final feature vector v.

Building on the multiscale, multidirectional, and translation-invariant properties of the NSCT and on the specific behavior of NSCT coefficients in retinal images, the invention proposes a new feature extraction method: features extracted at different scales describe retinal vessels of different widths, and the features obtained at several scales are combined into a feature vector, so segmentation at vessel edges is comparatively accurate. At the same time, because the invention adopts a supervised train-then-classify scheme, segmentation is performed under supervision and the segmentation error is small. Simulation results show that the invention achieves better segmentation accuracy than existing retinal image segmentation methods.

Brief Description of the Drawings

Figure 1 is a schematic flowchart of the implementation of the invention;

Figure 2 shows the results of the intermediate steps for obtaining the ROI of a retinal image;

Figure 3 shows the result of ROI edge extension on the green component of a retinal image;

Figure 4 illustrates the feature extraction principle;

Figure 5 shows two images selected from the experimental results on the DRIVE database;

Figure 6 shows two images selected from the experimental results on the STARE database;

Figure 7 shows ROC curves for the tests on the DRIVE database;

Figure 8 compares the simulation results of the method of the invention with those of the existing Soares et al. method.

Specific Implementation

With reference to Figure 1, the specific implementation of the invention is as follows:

Step 1: obtain the ROI of the retinal training image and of the retinal image to be segmented from the red component. With reference to Figure 2, the specific steps are:

(1.1) Divide the red component of the retinal image shown in Figure 2(a) by 255, giving Figure 2(b), and apply the Laplacian-of-Gaussian (LoG) filter to Figure 2(b) for edge detection, giving Figure 2(c);

(1.2) Dilate and then erode Figure 2(c) to connect the breaks in the edges, giving Figure 2(d);

(1.3) In Figure 2(d), add a contour along the image border;

(1.4) Determine the exterior region by thresholding: in Figure 2(b), find the maximum gray value max_red and mark every point with gray value below max_red × 0.15 as 1, producing a binary image; then remove the parts marked 1 whose area is below 10 pixels, giving Figure 2(e);

(1.5) In Figure 2(e), remove the contour added in step (1.3), then fill Figure 2(c), starting the fill from the points marked 1 in Figure 2(e), as in Figure 2(f);

(1.6) Invert Figure 2(f) and then erode it, giving Figure 2(g);

(1.7) Invert Figure 2(g), remove the objects marked 1 whose area is below 5000 pixels, and invert again to fill the missing regions; the result is Figure 2(h);

(1.8) Apply an opening to Figure 2(h) to remove spurious contours, then remove the objects marked 1 whose area is below 5000 pixels, finally obtaining the ROI, as in Figure 2(i).
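The thresholding and morphology of the steps above can be sketched as follows. This is a simplified stand-in, not the patent's full procedure: the LoG edge detection and the added/removed border contour of steps (1.1)-(1.3) and (1.5)-(1.6) are omitted, and `extract_roi` with its default parameters is an assumption for illustration.

```python
import numpy as np
from scipy import ndimage

def extract_roi(red, thresh_frac=0.15, min_area=10):
    """Simplified sketch of steps (1.1)-(1.8): normalize the red channel,
    threshold the dark exterior, drop tiny components, fill holes, and
    smooth the mask with a morphological opening."""
    r = red.astype(float) / 255.0                  # step (1.1): normalize
    dark = r < r.max() * thresh_frac               # step (1.4): exterior points
    labels, n = ndimage.label(dark)
    sizes = ndimage.sum(dark, labels, range(1, n + 1))
    for lab in np.where(sizes < min_area)[0] + 1:  # drop components < 10 px
        dark[labels == lab] = False
    roi = ndimage.binary_fill_holes(~dark)         # steps (1.5)-(1.7): fill
    roi = ndimage.binary_opening(roi, iterations=2)  # step (1.8): clean up
    return roi

# toy check on a synthetic "aperture": bright disc on a dark background
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 < 25 ** 2
red = np.where(disc, 200, 5).astype(float)
roi = extract_roi(red)
print(roi[32, 32], roi[0, 0])  # True False
```

On a real fundus photograph the same pipeline recovers the circular camera aperture as the ROI.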

Step 2: extend the ROI edge of the green component of the retinal training image and of the image to be segmented, to suppress the strong contrast between the retinal fundus and the region outside the aperture and avoid many false detections at the aperture edge. The specific steps are:

(2.1) Find the pixels on the outer boundary of the ROI: pixels that lie outside the ROI but are 4-neighbors of pixels inside it;

(2.2) Set the gray value of each outer-boundary pixel from step (2.1) to the mean over the small neighborhood centered on that pixel, restricted to the ROI; the neighborhood size can be chosen as 5 × 5;

(2.3) Add the outer-boundary pixels from step (2.2) to the ROI and iterate a fixed number of times, typically 80 iterations.
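Steps (2.1)-(2.3) can be sketched directly. The function name and the toy inputs below are assumptions for illustration; the per-pixel loop is kept deliberately literal to mirror the description.

```python
import numpy as np
from scipy import ndimage

def extend_roi(green, roi, iterations=80, half=2):
    """Sketch of steps (2.1)-(2.3): in each iteration, find the pixels that
    are 4-adjacent to the ROI from outside, set each one's gray value to the
    mean of the ROI pixels in the 5x5 window around it, then absorb those
    pixels into the ROI."""
    img = green.astype(float).copy()
    mask = roi.copy()
    cross = ndimage.generate_binary_structure(2, 1)     # 4-connectivity
    for _ in range(iterations):
        border = ndimage.binary_dilation(mask, cross) & ~mask
        ys, xs = np.nonzero(border)
        if ys.size == 0:                                # ROI fills the image
            break
        for y, x in zip(ys, xs):
            win = np.s_[max(0, y - half):y + half + 1,
                        max(0, x - half):x + half + 1]
            img[y, x] = img[win][mask[win]].mean()      # mean over ROI pixels
        mask |= border
    return img, mask

# toy example: a constant 4x4 ROI block grown by three iterations
green = np.zeros((16, 16)); green[6:10, 6:10] = 10.0
roi0 = np.zeros((16, 16), dtype=bool); roi0[6:10, 6:10] = True
img, mask = extend_roi(green, roi0, iterations=3)
print(img[5, 8])  # 10.0 — the absorbed border pixel takes the ROI mean
```

Because each new border pixel averages only over pixels already in the ROI, the extension propagates the fundus intensity smoothly outward rather than mixing in the dark exterior.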

Figure 3 shows the result of ROI edge extension on the green component of a retinal image: Figure 3(a) is the green component of the retinal image of Figure 2(a), and Figure 3(b) is the result of ROI edge extension applied to Figure 3(a).

Step 3: apply the Nonsubsampled Contourlet Transform (NSCT) to the extended retinal training image and to the extended image to be segmented.

The NSCT is a multiresolution, local, multidirectional image representation. It inherits the multiresolution time-frequency analysis of the wavelet transform while adding good anisotropy, so it can represent smooth curves with fewer coefficients than wavelets. The NSCT consists of a nonsubsampled multiscale decomposition and nonsubsampled multidirectional filtering, and is translation invariant. The multiscale decomposition is implemented with a nonsubsampled Laplacian pyramid, and the directional filtering with a nonsubsampled directional filter bank. In the invention, the extended retinal training image and the image to be segmented are decomposed into 4 levels with 8 directions per level.
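The NSCT itself is not available in common Python libraries, so the sketch below illustrates only its nonsubsampled multiscale part with an à-trous Laplacian pyramid: every bandpass layer keeps the full image size, mimicking the translation invariance of the nonsubsampled pyramid. The per-level directional filter bank of the real NSCT is omitted; this is a stand-in under that stated assumption, not the transform used by the invention.

```python
import numpy as np
from scipy import ndimage

def atrous_pyramid(img, levels=4):
    """Nonsubsampled (shift-invariant) multiscale decomposition sketch:
    each level's bandpass detail is the difference between successive
    Gaussian smoothings at doubling scales, all at full resolution."""
    bands, low = [], img.astype(float)
    for k in range(levels):
        smooth = ndimage.gaussian_filter(low, sigma=2.0 ** k)
        bands.append(low - smooth)   # bandpass detail at scale k
        low = smooth                 # pass the coarse part down
    return bands, low

img = np.random.default_rng(0).normal(size=(32, 32))
bands, low = atrous_pyramid(img, levels=4)
# the decomposition telescopes, so summing all layers restores the image
print(np.allclose(sum(bands) + low, img))  # True
```

The key property shared with the NSCT is visible here: no subsampling, so each coefficient layer is pixel-aligned with the input and the decomposition is exactly invertible by summation.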

Step 4: extract features from the retinal image. The principle is as follows. In the NSCT domain of the image, consider the coefficient map of any direction at level 3, as in Figure 4(a). In this map, vessel edges fall at zero crossings, i.e. where the sign of the coefficients changes. If vessels were very wide, say hundreds of pixels, the coefficient profile across a vessel would look like Figure 4(b). As the vessel narrows, the two zero crossings approach each other and Figure 4(b) tends toward Figure 4(c). In retinal images vessels are at most a dozen or so pixels wide, so their NSCT coefficients tend toward the form of Figure 4(c).

The feature extraction proceeds as follows:

(4.1) Select levels 2, 3, and 4 of the NSCT decomposition of the retinal image to produce the feature vector;

(4.2) For the 8 directional subband coefficients of each level, the comparison between vessel gray level and background gray level determines which coefficient is selected: if the vessels are darker than the background, select the minimum of the coefficients as the feature, as in Figure 4(d); if the vessels are brighter than the background, select the maximum;

(4.3) Repeat the feature extraction of step (4.2) level by level and collect the resulting features into a feature vector;

(4.4) Add the gray value of the green component of the retinal training image and of the image to be segmented to the feature vector as one more feature dimension, yielding the final feature vector v = {v_i | i = 1, 2, 3, 4}.
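Steps (4.1)-(4.4) can be sketched as a per-pixel feature builder. The function name and the random toy subbands are assumptions for illustration; any translation-invariant decomposition producing `(directions, H, W)` arrays per level could feed it.

```python
import numpy as np

def build_feature_vectors(subbands, green, vessels_darker=True):
    """Sketch of steps (4.1)-(4.4).  `subbands` is a list over the selected
    levels; each entry has shape (directions, H, W).  Per level, the minimum
    over the directional coefficients is kept when vessels are darker than
    the background (the maximum otherwise); the green-channel gray value is
    appended as one more dimension, and each dimension is normalized to
    zero mean and unit variance."""
    pick = np.min if vessels_darker else np.max
    maps = [pick(level, axis=0) for level in subbands]  # one map per level
    maps.append(green.astype(float))
    v = np.stack(maps, axis=-1).reshape(-1, len(maps))  # (H*W, levels + 1)
    return (v - v.mean(axis=0)) / (v.std(axis=0) + 1e-12)

# toy example: 3 levels with 8 directions each, as in the invention
rng = np.random.default_rng(0)
subbands = [rng.normal(size=(8, 16, 16)) for _ in range(3)]
green = rng.uniform(size=(16, 16))
v = build_feature_vectors(subbands, green)
print(v.shape)  # (256, 4)
```

With levels 2-4 plus the green intensity this yields exactly the four-dimensional vector v = {v_i | i = 1, ..., 4} of step (4.4).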

Step 5: select a classifier, train it on the training samples, and feed the normalized feature vectors of the image to be segmented into it to segment that image.

The invention uses a Bayesian classifier whose class-conditional probability density functions are represented by Gaussian mixture models. The pixels of a retinal image are divided into two classes: C1 = {vessel pixels} and C2 = {background pixels}.

The Bayes decision rule is:

if p(v|C1)P(C1) > p(v|C2)P(C2), then decide C1; otherwise C2;

where p(v|Ci) is the class-conditional probability density (the likelihood), P(Ci) is the prior probability of Ci, and v is the feature vector.

The prior is estimated as P(Ci) = Ni/N, the fraction of training samples belonging to Ci. The class-conditional density p(v|Ci) is represented by a Gaussian mixture model, a linear combination of several Gaussian functions:

p(v|Ci) = Σ_{j=1}^{k_i} p(v|j, Ci) · P_ij

where k_i is the number of Gaussian components representing p(v|Ci), p(v|j, Ci) is a multidimensional Gaussian distribution, and P_ij is its weight.

For each class Ci, k_i Gaussian components are given, and the parameters and weights of each component are estimated with the expectation-maximization (EM) algorithm.

When the classifier segments an image, a probability map is obtained first, in which each pixel has the value

P_v = p(C1|v) / (p(C2|v) + p(C1|v));

the threshold p = 0.5 then divides the probability-map pixels into the two classes, completing the classification.
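The decision rule and probability map above can be sketched as follows. To keep the sketch short, each class density is a single Gaussian, i.e. the k_i = 1 special case of the invention's EM-fitted mixture; the function names and toy clusters are assumptions for illustration.

```python
import numpy as np

def fit_gaussian(x):
    """One Gaussian per class: the k = 1 special case of the GMM."""
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])  # regularized
    return mu, cov

def log_gauss(x, mu, cov):
    """Log density of a multivariate Gaussian at the rows of x."""
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    maha = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)
    return -0.5 * (maha + logdet + x.shape[1] * np.log(2 * np.pi))

def classify(x, vessel_train, bg_train):
    """Bayes rule: decide C1 (vessel) when p(v|C1)P(C1) > p(v|C2)P(C2),
    with priors Ni/N from the training set; also returns the probability
    map P_v = p(C1|v) / (p(C1|v) + p(C2|v))."""
    n1, n2 = len(vessel_train), len(bg_train)
    s1 = log_gauss(x, *fit_gaussian(vessel_train)) + np.log(n1 / (n1 + n2))
    s2 = log_gauss(x, *fit_gaussian(bg_train)) + np.log(n2 / (n1 + n2))
    p_v = 1.0 / (1.0 + np.exp(s2 - s1))   # posterior of the vessel class
    return p_v > 0.5, p_v                 # threshold p = 0.5

rng = np.random.default_rng(1)
vessel = rng.normal(0.0, 0.5, size=(300, 2))   # toy 2-D feature clusters
bg = rng.normal(4.0, 0.5, size=(300, 2))
labels, p_v = classify(np.array([[0.1, -0.2], [4.2, 3.9]]), vessel, bg)
print(labels)  # [ True False]
```

Replacing `fit_gaussian`/`log_gauss` with a k-component mixture fitted by EM recovers the classifier the invention actually uses.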

The effect of the invention is further illustrated by the following simulated images and data.

1. Simulation images

The retinal images used in the invention come from two public color image databases, DRIVE and STARE. The DRIVE database consists of 40 color images, 7 of which contain lesions, together with their manual segmentation results. The 40 images are divided into a training set and a test set of 20 images each; the test set contains 3 lesion images. The images were segmented manually by personnel trained by professional ophthalmologists. The segmentations of the training images by the first group of annotators are stored in set A. The segmentations of the test images by the first group are also stored in set A, and the test images were additionally segmented by a second group, whose results are stored in set B. In set A, 12.7% of the pixels are labeled as vessel; in set B, 12.3%.

The STARE database consists of 20 digitized slides, ten of which contain lesions. Two observers segmented the images independently; their results are stored in the ah set and the vk set. In set A (the ah set), the first observer's results, 10.4% of the pixels are labeled as vessel, while in set B (the vk set), the second observer's results, 14.9% are. The two segmentations differ considerably: the second observer marked many more very thin vessels, which shows that the first observer was the more conservative.

For the DRIVE database, training samples are generated from the 20 labeled training images, a classifier is trained on them, and the classifier is used to classify the 20 test images to be segmented.

For the 20 images of the STARE database, the invention is tested with the leave-one-out method: one image serves as the image to be segmented and all the others serve as training images.

Both databases use set A for the training samples. Because the pool of training samples is very large, in all experiments one million samples are drawn at random to train the classifier. In the tests, for the class-conditional densities of the GMM classifier, the numbers of Gaussian components for the vessel and background classes are set to the same value k = k1 = k2.

2. Objective evaluation metrics

Two quantitative metrics are used: the area Az under the ROC curve and the accuracy. The ROC curve plots the true detection rate against the false detection rate as the threshold p on the classifier's probability map Pv varies from 0 to 1. The true detection rate is the fraction of actual vessel pixels that the classifier labels as vessel; the false detection rate is the fraction of actual non-vessel pixels that it labels as vessel. The closer the ROC curve lies to the upper-left corner, the better the method performs; equivalently, the closer Az is to 1, the better. The accuracy is the fraction of all retinal image pixels, vessel or not, that are classified correctly.
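The two metrics can be computed as follows; the function name and the perfect-classifier toy input are assumptions for illustration.

```python
import numpy as np

def roc_az_and_accuracy(prob, truth, n_thresh=101):
    """Az and accuracy as defined above: sweep the threshold p over the
    probability map, compute the true/false detection rates, integrate the
    ROC curve by the trapezoid rule, and take the accuracy at p = 0.5."""
    prob, truth = prob.ravel(), truth.ravel().astype(bool)
    thresholds = np.linspace(1.0, 0.0, n_thresh)    # so fpr is increasing
    tpr = np.array([np.mean(prob[truth] >= t) for t in thresholds])
    fpr = np.array([np.mean(prob[~truth] >= t) for t in thresholds])
    az = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)  # trapezoid rule
    accuracy = np.mean((prob >= 0.5) == truth)
    return az, accuracy

# sanity check: a perfect probability map gives Az = 1 and accuracy = 1
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 3:5] = True
az, acc = roc_az_and_accuracy(truth.astype(float), truth)
print(az, acc)  # 1.0 1.0
```

On real probability maps the sweep traces the full ROC curve, and Az summarizes it independently of the final p = 0.5 operating point.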

3. Simulation results and analysis

Figure 5 shows the results for two images selected from the DRIVE database at k = 20, together with their manual segmentations: Figure 5(a) the probability maps, Figure 5(b) the segmentation results, Figure 5(c) the manual segmentations from set A, and Figure 5(d) the manual segmentations from set B.

Figure 6 shows the experimental results for two images selected from the STARE database at k = 20, together with their manual segmentations. The second row comes from a diseased retinal image and the first row from a normal one: Figure 6(a) the probability maps, Figure 6(b) the segmentation results, Figure 6(c) the manual segmentations from set A, and Figure 6(d) the manual segmentations from set B.

Figure 7(a) shows the ROC curves for the tests on the DRIVE database, and Figure 7(b) those for the STARE database. To compare the two manual segmentations, the true and false detection rates obtained from the probability map at p = 0.5 are reported with set A and set B taken in turn as the reference segmentation.

Figures 6 and 7 show directly that the segmentation method of the invention achieves good results.

Table 1 compares the performance of the method of the invention with that of existing methods.

Experiments were run for k = 10, 15, and 20; the results are listed in Table 1. The results of the invention are also compared with the methods of Jiang et al. and Staal et al.

Table 1: performance comparison of the different methods

[Table 1 appears as an image in the original publication: Figure A20081023233700091]

从表1的结果可以看出,本发明方法对DRIVE和STARE两个数据库都得到了很好的效果。通过比较,我们看出本发明方法的效果和Soares et al.方法相当,优于Jiang et al.Staal et al.的方法。这说明了本发明所给出的利用NSCT提取的特征的有效性,给出了一种新的视网膜特征。As can be seen from the results in Table 1, the method of the present invention has achieved very good results for both DRIVE and STARE databases. By comparison, we find out that the effect of the inventive method is equivalent to that of Soares et al., and is better than the method of Jiang et al.Staal et al. This illustrates the effectiveness of the feature extracted by NSCT given by the present invention, and a new retinal feature is given.

图8是对DRIVE数据库中某幅图像通过本发明分割的结果和Soares et al.的结果进行对比。图8(a)是set A人工分割结果,图8(b)是本发明血管分割结果,图8(c)Soares et al.的结果,图8(d)是Soares et al.的结果减去本文血管分割结果,图8(e)是Soares et al.的结果减去set A人工分割结果,图8(f)本文分割结果减去set A人工分割结果。在图8(d)中,灰色区域表示Soares et al.的结果和本文粗血管检测结果的相同部分,白色区域表示Soares et al.方法检测出来而本文粗血管方法未检测出的区域,黑色区域表示Soares et al..方法未检测出而本文方法检测出的区域。Fig. 8 compares the result of the segmentation of an image in the DRIVE database by the present invention and the result of Soares et al. Fig. 8 (a) is the artificial segmentation result of set A, Fig. 8 (b) is the blood vessel segmentation result of the present invention, Fig. 8 (c) the result of Soares et al., Fig. 8 (d) is the result of Soares et al. minus The results of blood vessel segmentation in this paper, Figure 8(e) is the result of Soares et al. minus the manual segmentation result of set A, and Figure 8(f) is the segmentation result of this paper minus the manual segmentation result of set A. In Figure 8(d), the gray area represents the same part of the results of Soares et al. and the thick blood vessel detection results in this paper, the white area represents the area detected by Soares et al. method but not detected by the thick blood vessel method in this paper, and the black area Indicates the area detected by the method of this paper but not detected by the method of Soares et al..

As can be seen from Fig. 8(e) and Fig. 8(f), the vessels segmented by the existing method of Soares et al. are generally thicker than those in the ground truth, whereas the method of the present invention localizes vessel edges better than the method of Soares et al. This indicates that the method of Soares et al. does not localize the edges of thick vessels accurately. Fig. 8(d) likewise shows that the differences between the segmentation result of the present method and the result of Soares et al. are concentrated mainly at the edges of thick vessels.

Comparing Fig. 8(b) and Fig. 8(c) within the red elliptical region marked in Fig. 8(a), it can be seen that in the result of Soares et al. the gap between two closely adjacent thick vessels is also classified as vessel, whereas this problem does not occur in the result of the method of the present invention.

Claims (2)

1. A retinal image segmentation method based on NSCT feature extraction and supervised classification, comprising the following steps:

I. Feature extraction

(1) For the retinal training image and the retinal image to be segmented, obtain the region of interest from the red component;

(2) For the green components of the retinal training image and the retinal image to be segmented, iteratively extend the edge of the region of interest;

(3) Perform the NSCT transform on the extended retinal training image and the extended retinal image to be segmented, decomposing each into i layers, with subband coefficients in j directions per layer;

(4) Extract a one-dimensional feature from the j direction-subband coefficients of each layer, extract features layer by layer to form a feature vector, and normalize it.

II. Classifier training and segmentation

1) Build training samples from the normalized feature vectors of the retinal training image;

2) Select a classifier and train it with the training samples; input the normalized feature vectors of the retinal image to be segmented into the classifier, and segment the retinal image to be segmented.

2. The method according to claim 1, wherein extracting a one-dimensional feature from the j direction-subband coefficients of each layer and forming the feature vector layer by layer, as described in step (3), is carried out as follows:

(3a) For the j direction-subband coefficients of each decomposition layer, the feature is selected according to how the vessel gray level compares with the background gray level in the retinal image: if the vessel gray level is smaller than the background gray level, the smallest coefficient is selected as the feature; if the vessel gray level is greater than the background gray level, the largest coefficient is selected as the feature;

(3b) Following step (3a), the same feature extraction is performed layer by layer, and the resulting features are assembled into a feature vector;

(3c) The gray value of the green component of the retinal training image and of the retinal image to be segmented is added to the feature vector as a one-dimensional feature, yielding the final feature vector v.
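Steps (3a)-(3c) can be sketched as follows, assuming the NSCT decomposition itself is already available (the transform is not implemented here). The function name, the `vessels_darker` flag, and the zero-mean/unit-variance normalization are illustrative assumptions — the claims call for normalization but do not specify which kind.

```python
import numpy as np

def build_feature_vectors(subbands, green, vessels_darker=True):
    """Per-pixel feature vectors from NSCT subband coefficients.

    subbands: list of i layers; each layer is a list of j direction-
              subband coefficient arrays, all of shape (H, W), as
              produced by an NSCT decomposition (assumed given).
    green:    (H, W) green-channel intensities of the same image.
    vessels_darker: if True (vessel gray level below background),
              take the minimum coefficient over the j directions per
              pixel, as in step (3a); otherwise take the maximum.

    Returns an (H*W, i+1) matrix: one feature per layer (steps
    3a-3b) plus the green-channel gray value (step 3c), each column
    standardized to zero mean / unit variance as one plausible
    normalization.
    """
    feats = []
    for layer in subbands:
        stack = np.stack(layer, axis=0)               # (j, H, W)
        f = stack.min(axis=0) if vessels_darker else stack.max(axis=0)
        feats.append(f.ravel())
    feats.append(np.asarray(green, dtype=float).ravel())
    v = np.stack(feats, axis=1)                       # (H*W, i+1)
    return (v - v.mean(axis=0)) / (v.std(axis=0) + 1e-12)
```

The rows of the returned matrix are what part II of claim 1 would feed to the classifier: rows from the training image (paired with manual labels) form the training samples, and rows from the image to be segmented are classified pixel by pixel.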
CN200810232337XA 2008-11-19 2008-11-19 Retinal Image Segmentation Method Based on NSCT Feature Extraction and Supervised Classification Expired - Fee Related CN101667289B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810232337XA CN101667289B (en) 2008-11-19 2008-11-19 Retinal Image Segmentation Method Based on NSCT Feature Extraction and Supervised Classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200810232337XA CN101667289B (en) 2008-11-19 2008-11-19 Retinal Image Segmentation Method Based on NSCT Feature Extraction and Supervised Classification

Publications (2)

Publication Number Publication Date
CN101667289A true CN101667289A (en) 2010-03-10
CN101667289B CN101667289B (en) 2011-08-24

Family

ID=41803901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810232337XA Expired - Fee Related CN101667289B (en) 2008-11-19 2008-11-19 Retinal Image Segmentation Method Based on NSCT Feature Extraction and Supervised Classification

Country Status (1)

Country Link
CN (1) CN101667289B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102411715A (en) * 2010-09-21 2012-04-11 张云超 Automatic cell image classification method and system with learning monitoring function
CN102567734A (en) * 2012-01-02 2012-07-11 西安电子科技大学 Specific value based retina thin blood vessel segmentation method
CN103069455A (en) * 2010-07-30 2013-04-24 皇家飞利浦电子股份有限公司 Organ-specific enhancement filter for robust segmentation of medical images
CN103514605A (en) * 2013-10-11 2014-01-15 南京理工大学 Choroid layer automatic partitioning method based on HD-OCT retina image
CN103544491A (en) * 2013-11-08 2014-01-29 广州广电运通金融电子股份有限公司 Optical character recognition method and device facing complex background
CN106097340A (en) * 2016-06-12 2016-11-09 山东大学 A kind of method automatically detecting and delineating Lung neoplasm position based on convolution grader
CN106408562A (en) * 2016-09-22 2017-02-15 华南理工大学 Fundus image retinal vessel segmentation method and system based on deep learning
CN107316013A (en) * 2017-06-14 2017-11-03 西安电子科技大学 Hyperspectral image classification method with DCNN is converted based on NSCT
CN108027969A (en) * 2015-09-04 2018-05-11 斯特拉克斯私人有限公司 The method and apparatus for identifying the gap between objects in images
CN108986127A (en) * 2018-06-27 2018-12-11 北京市商汤科技开发有限公司 The training method and image partition method of image segmentation neural network, device
CN109993757A (en) * 2019-04-17 2019-07-09 山东师范大学 A kind of retinal image lesion area automatic segmentation method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100555325C (en) * 2007-08-29 2009-10-28 华中科技大学 A kind of image interfusion method based on wave transform of not sub sampled contour
CN101303764B (en) * 2008-05-16 2010-08-04 西安电子科技大学 Multi-sensor image adaptive fusion method based on non-subsampled contourlet

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103069455A (en) * 2010-07-30 2013-04-24 皇家飞利浦电子股份有限公司 Organ-specific enhancement filter for robust segmentation of medical images
CN102411715A (en) * 2010-09-21 2012-04-11 张云超 Automatic cell image classification method and system with learning monitoring function
CN102567734A (en) * 2012-01-02 2012-07-11 西安电子科技大学 Specific value based retina thin blood vessel segmentation method
CN103514605A (en) * 2013-10-11 2014-01-15 南京理工大学 Choroid layer automatic partitioning method based on HD-OCT retina image
US9613266B2 (en) 2013-11-08 2017-04-04 Grg Banking Equipment Co., Ltd. Complex background-oriented optical character recognition method and device
CN103544491A (en) * 2013-11-08 2014-01-29 广州广电运通金融电子股份有限公司 Optical character recognition method and device facing complex background
CN108027969A (en) * 2015-09-04 2018-05-11 斯特拉克斯私人有限公司 The method and apparatus for identifying the gap between objects in images
CN108027969B (en) * 2015-09-04 2021-11-09 斯特拉克斯私人有限公司 Method and apparatus for identifying gaps between objects in an image
CN106097340A (en) * 2016-06-12 2016-11-09 山东大学 A kind of method automatically detecting and delineating Lung neoplasm position based on convolution grader
CN106408562A (en) * 2016-09-22 2017-02-15 华南理工大学 Fundus image retinal vessel segmentation method and system based on deep learning
CN106408562B (en) * 2016-09-22 2019-04-09 华南理工大学 A method and system for retinal blood vessel segmentation in fundus images based on deep learning
CN107316013A (en) * 2017-06-14 2017-11-03 西安电子科技大学 Hyperspectral image classification method with DCNN is converted based on NSCT
CN107316013B (en) * 2017-06-14 2020-04-07 西安电子科技大学 Hyperspectral image classification method based on NSCT (non-subsampled Contourlet transform) and DCNN (data-to-neural network)
CN108986127A (en) * 2018-06-27 2018-12-11 北京市商汤科技开发有限公司 The training method and image partition method of image segmentation neural network, device
CN109993757A (en) * 2019-04-17 2019-07-09 山东师范大学 A kind of retinal image lesion area automatic segmentation method and system

Also Published As

Publication number Publication date
CN101667289B (en) 2011-08-24

Similar Documents

Publication Publication Date Title
CN101667289B (en) Retinal Image Segmentation Method Based on NSCT Feature Extraction and Supervised Classification
Nida et al. Melanoma lesion detection and segmentation using deep region based convolutional neural network and fuzzy C-means clustering
CN102800089B (en) Main carotid artery blood vessel extraction and thickness measuring method based on neck ultrasound images
CN106651846B (en) Segmentation method of retinal blood vessel images
US10303986B2 (en) Automated measurement of brain injury indices using brain CT images, injury data, and machine learning
Akram et al. Detection of neovascularization in retinal images using multivariate m-Mediods based classifier
Al-Dmour et al. A clustering fusion technique for MR brain tissue segmentation
CN110706225B (en) Tumor identification system based on artificial intelligence
Hashemzadeh et al. Retinal blood vessel extraction employing effective image features and combination of supervised and unsupervised machine learning methods
CN105574859A (en) Liver tumor segmentation method and device based on CT (Computed Tomography) image
CN108122221B (en) Segmentation method and device for cerebral ischemia area in diffusion weighted imaging image
CN110120051A (en) A kind of right ventricle automatic division method based on deep learning
CN104751178A (en) Pulmonary nodule detection device and method based on shape template matching and combining classifier
CN104851101A (en) Brain tumor automatic segmentation method based on deep learning
CN104794708A (en) Atherosclerosis plaque composition dividing method based on multi-feature learning
CN112150477B (en) Full-automatic segmentation method and device for cerebral image artery
CN105809175A (en) Encephaledema segmentation method and system based on support vector machine algorithm
CN106600584A (en) Tsallis entropy selection-based suspected pulmonary nodule detection method
CN108537751A (en) A kind of Thyroid ultrasound image automatic segmentation method based on radial base neural net
CN115147600A (en) GBM Multimodal MR Image Segmentation Method Based on Classifier Weight Converter
Jin et al. White matter hyperintensity segmentation from T1 and FLAIR images using fully convolutional neural networks enhanced with residual connections
Huang et al. Automatic Retinal Vessel Segmentation Based on an Improved U‐Net Approach
CN116206108A (en) A network model and method for choroid segmentation in OCT images based on domain adaptation
Arafat et al. Brain tumor MRI image segmentation and classification based on deep learning techniques
Fan et al. Automated blood vessel segmentation in fundus image based on integral channel features and random forests

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110824

Termination date: 20141119

EXPY Termination of patent right or utility model