CN109829931B - Retinal vessel segmentation method based on region growing PCNN


Info

Publication number
CN109829931B
Authority
CN
China
Prior art keywords
image
blood vessel
filtering
gray level
iteration
Prior art date
Legal status
Active
Application number
CN201910013381.XA
Other languages
Chinese (zh)
Other versions
CN109829931A (en)
Inventor
徐光柱
王亚文
雷帮军
陈鹏
周军
夏平
Current Assignee
China Three Gorges University CTGU
Original Assignee
China Three Gorges University CTGU
Priority date
Filing date
Publication date
Application filed by China Three Gorges University CTGU
Priority to CN201910013381.XA
Publication of CN109829931A
Application granted
Publication of CN109829931B



Abstract

A region growing PCNN-based retinal vessel segmentation method comprising: selecting a seed point from the unlabeled pixels of a target retinal blood vessel image; increasing the connection strength of a PCNN model and, with the seed point as a starting point, extracting the blood vessel features in the target retinal blood vessel image using the PCNN model with the increased connection strength, until the increased connection strength is greater than a first preset threshold; and, if the blood vessel features extracted in the current iteration do not simultaneously meet a first preset condition and a second preset condition, marking the pixels corresponding to the blood vessel features extracted in the current iteration with the same label, until all pixels are labeled. The first preset condition is that the proportion of the number of vessel-edge pixels to the total number of vessel pixels is smaller than or equal to a second preset threshold; the second preset condition is that the ratio of the vessel area to the whole image area is smaller than or equal to a third preset threshold. The invention realizes automatic growth of the blood vessel region and improves the precision of retinal blood vessel segmentation.

Description

Retinal vessel segmentation method based on region growing PCNN
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a retinal vessel segmentation method based on region growing PCNN.
Background
Research shows that various ophthalmic diseases and cardiovascular and cerebrovascular diseases cause varying degrees of deformation, hemorrhage and other changes in the fundus retinal blood vessels. Clinically, medical staff extract the retinal blood vessels from color fundus images acquired with an ophthalmoscope and diagnose diseases by analyzing the morphology of the vessels.
Because of the limitations of fundus image acquisition technology, a great deal of noise is often present in the images, and the structure of the retinal vessels is complex and variable, which makes retinal vessel segmentation difficult and laborious. In the traditional approach the retinal vessels are segmented manually, which involves a huge workload, is strongly affected by subjective factors and is extremely time-consuming. Therefore, using computer technology to find an algorithm that can segment the retinal vessels quickly and accurately, and thus extract the vessel features of the fundus image in real time, plays an important role in assisting medical staff in diagnosing ophthalmic, cardiovascular and cerebrovascular diseases.
The segmentation of blood vessels in retinal images presents the following difficulties:
(1) The structural characteristics of the blood vessels are complex. The retinal vessels bend to different degrees, vary in shape and are distributed in a tree-like pattern, which makes them difficult to segment; the traditional methods therefore lack segmentation precision;
(2) The existing method based on PCNN and region growing [1] segments few tiny vessels, and the segmented vessels break off and cannot continue to grow.
Disclosure of Invention
The invention provides a retinal vessel segmentation method based on region growing PCNN for solving the technical problems.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a retinal vessel segmentation method based on region growing PCNN, comprising:
selecting a seed point from unlabeled pixels of the target retinal vascular image;
increasing the connection strength of a PCNN model and, with the seed point as a starting point, extracting the blood vessel features in the target retinal blood vessel image using the PCNN model with the increased connection strength, until the increased connection strength is greater than a first preset threshold;
if the blood vessel features extracted in the iteration do not meet the first preset condition and the second preset condition at the same time, marking the pixels corresponding to the blood vessel features extracted in the iteration as the same label until all the pixels of the target retina blood vessel image are marked with the labels;
the first preset condition is that the ratio of the number of the blood vessel edge pixels in the blood vessel characteristics extracted in the iteration to the total number of the blood vessel pixels is smaller than or equal to a second preset threshold value;
the second preset condition is that the ratio of the blood vessel area in the blood vessel characteristics extracted in the current iteration to the target retina blood vessel image area is smaller than or equal to a third preset threshold value.
After the step of marking the pixels corresponding to the blood vessel features extracted in the current iteration with the same label until all the pixels of the target retinal blood vessel image are labeled, the method further comprises:
if the blood vessel features extracted in the current iteration simultaneously meet the first preset condition and the second preset condition, deleting the blood vessel features extracted in the current iteration.
The step of selecting a seed point from unlabeled pixels of the target retinal blood vessel image is preceded by the step of:
multiplying each channel of the retinal blood vessel image to be processed by a corresponding preset weight, and adding to obtain a gray level image of the retinal blood vessel image to be processed;
and acquiring the target retinal blood vessel image according to the gray level image.
According to the gray level image, the step of obtaining the target retinal blood vessel image specifically comprises the following steps:
filtering the gray image based on a two-dimensional Gaussian filtering method and a two-dimensional Gabor filtering method respectively;
fusing a filtering result of the two-dimensional Gaussian filtering method with a filtering result of the two-dimensional Gabor filtering method to obtain a fused image;
and subtracting the gray level image from the fused image, performing an inversion operation on the subtraction result, and multiplying the inverted result by a preset mask image to obtain the target retinal blood vessel image.
Before the steps of filtering the gray level image based on a two-dimensional Gaussian filtering method and a two-dimensional Gabor filtering method respectively, the method further comprises:
judging, based on a four-neighborhood method, whether each pixel in the gray level image is a boundary pixel, and if so, performing edge expansion on each boundary pixel in the gray level image;
carrying out contrast enhancement on the gray level image after edge expansion based on a histogram equalization method with limited contrast, and obtaining the preprocessed gray level image;
correspondingly, the steps of filtering the gray image based on the two-dimensional Gaussian filtering method and the two-dimensional Gabor filtering method respectively specifically comprise:
and filtering the preprocessed gray level image based on a two-dimensional Gaussian filtering method and a two-dimensional Gabor filtering method respectively.
The step of judging, based on the four-neighborhood method, whether each pixel in the gray level image is a boundary pixel and, if so, performing edge expansion on each boundary pixel specifically includes:
determining that a pixel in the gray level image is a boundary pixel if at least one, but not all, of the values in its four-neighborhood is 0;
and performing edge expansion on each pixel which is a boundary in the gray level image.
The step of performing contrast enhancement on the edge-expanded gray level image based on the contrast-limited histogram equalization method to obtain the preprocessed gray level image specifically comprises:
dividing the gray image with the expanded edges into a plurality of blocks;
calculating a gray level histogram of each block, and cutting the gray level histogram of each block;
equalizing the gray level histogram of each block after clipping;
and connecting blocks corresponding to the equalized gray level histogram based on a linear interpolation method to obtain the preprocessed gray level image.
The ith Gaussian convolution kernel K′_i(x, y) of the two-dimensional Gaussian filtering method is given by:
K′_i(x, y) = K_i(x, y) − m_i;
K_i(x, y) = −exp(−u²/(2σ²));
m_i = (1/A) Σ_{[u,v]∈N} K_i(x, y);
where K_i(x, y) is a coefficient of the ith Gaussian kernel matrix, m_i is the mean value of the ith Gaussian kernel, [u, v] is a discrete point of the ith (rotated) Gaussian kernel with u and v the values on the x axis and y axis respectively, σ is the spread (standard deviation) of the Gaussian kernel about the x-axis coordinate center, N = {[u, v] : |u| ≤ 3σ, |v| ≤ L/2}, L is the length of the blood vessel segment truncated in the y-axis direction, and A is the number of points in N.
The step of fusing the filtering result of the two-dimensional Gaussian filtering method with the filtering result of the two-dimensional Gabor filtering method to obtain a fused image specifically comprises the following steps:
and multiplying the filtering result of the two-dimensional Gaussian filtering method and the filtering result of the two-dimensional Gabor filtering method by corresponding preset weights respectively, and then adding to obtain a fusion image.
The step of inverting the subtraction result, multiplying it by a preset mask image and obtaining the target retinal blood vessel image specifically comprises:
sequentially performing an opening operation, a closing operation and an erosion operation on a preset mask image;
multiplying the processed preset mask image with the negation result to obtain the target retinal blood vessel image.
The retina blood vessel segmentation method has the following beneficial effects:
(1) Ten images were randomly selected from the DRIVE database, and accuracy (Acc), sensitivity (Sen) and specificity (Spe) were calculated to compare preprocessing using a combination of Gaussian filtering and Gabor filtering with preprocessing using Gabor filtering alone.
(2) On the basis of the preprocessed retinal vessel image, a reliable blood vessel region is selected as the initial seed region; the PCNN with a fast-linking mechanism is then combined with the seed region growing idea. By setting the maximum value of the connection strength coefficient and the stop conditions in the PCNN, and by deleting the pseudo blood vessels whenever a growth is invalid, the problem that break points of the previous method cannot continue to grow is solved, automatic growth of the blood vessel region in the fundus image is realized, and the blood vessels in the retinal image are effectively extracted.
Drawings
Fig. 1 is a flow chart of retinal vascular segmentation.
Fig. 2 is an original image of a color fundus retina.
Fig. 3 is the grayscale image obtained by proportionally combining the three channels of fig. 2.
Fig. 4 is the image of fig. 3 after edge expansion.
Fig. 5 is the image of fig. 4 after CLAHE processing.
Fig. 6 is the result of two-dimensional Gaussian matched filtering of fig. 5.
Fig. 7 is the result of two-dimensional Gabor matched filtering.
Fig. 8 is the result of fusing the two-dimensional Gabor and two-dimensional Gaussian matched filtering results.
Fig. 9 is the result after the subtraction operation.
Fig. 10 is the mask image.
Fig. 11 is the final preprocessed image.
Fig. 12 is the final segmented image.
Fig. 13 is a PCNN neuron model diagram.
Fig. 14 is a blood vessel map extracted from a retinal image by the method of document [1].
Fig. 15 is a blood vessel map extracted from a retinal image by the present invention.
Detailed Description
A retinal vessel segmentation method based on region growing PCNN comprises the following steps, and a flow chart is shown in figure 1:
step 1: selecting a color fundus retina image from a standard image library DRIVE as a retina blood vessel image to be processed, as shown in figure 2;
step 2: Proportionally combining the red, green and blue channels, Y = 0.299R + 0.587G + 0.114B, as shown in fig. 3;
step 3: Judging by the four-neighborhood method whether each pixel is a boundary pixel and, if so, expanding the edge, as shown in fig. 4;
step 4: Enhancing the contrast of the image by contrast-limited adaptive histogram equalization (CLAHE), as shown in fig. 5;
step 5: Enhancing the contrast between the retinal vessels and the background by two-dimensional Gaussian matched filtering, as shown in fig. 6;
step 6: Further enhancing the contrast between the retinal vessels and the background by two-dimensional Gabor filtering, as shown in fig. 7;
step 7: Fusing the two-dimensional Gabor and two-dimensional Gaussian matched filtering results, as shown in fig. 8;
step 8: Subtracting the proportionally combined three-channel image from the image fused by the matched filtering, as shown in fig. 9;
step 9: Performing the inversion operation and multiplying by the mask image (shown in fig. 10) to obtain the preprocessing result, i.e. the target retinal blood vessel image, shown in fig. 11;
step 10: Combining the PCNN with a fast-linking mechanism with the seed region growing idea: a reliable blood vessel region is first selected as the seed by a threshold operation, and automatic growth of the blood vessel region is then realized through the connection strength coefficient and the stop conditions in the PCNN, thereby effectively extracting the retinal blood vessels, as shown in fig. 12.
Further, the step 2 specifically includes: although the green channel contains most of the useful information, the red and blue channels still contain some useful information; therefore the three channels are combined with 29.9% of the red channel, 58.7% of the green channel and 11.4% of the blue channel, as sketched below.
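A minimal sketch of this weighted channel combination, assuming OpenCV and NumPy; the file name is illustrative and not part of the patent:
import cv2
import numpy as np

bgr = cv2.imread("21_training.tif")              # illustrative DRIVE file name; OpenCV loads channels as B, G, R
b, g, r = cv2.split(bgr.astype(np.float64))
gray = 0.299 * r + 0.587 * g + 0.114 * b         # Y = 0.299R + 0.587G + 0.114B, as in step 2
gray = np.clip(gray, 0, 255).astype(np.uint8)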
Further, the step 3 specifically includes:
step 3.1: Judging by the four-neighborhood method: if at least one of the four neighboring values is 0 but not all of them are 0, the pixel lies on the boundary of the image;
step 3.2: If the pixel is a boundary pixel, the boundary is smoothed by the dilation-erosion (expansion-corrosion) method.
Further, the step 4 specifically includes:
step 4.1: dividing the original image into M×N blocks of equal size that are continuous but do not overlap;
step 4.2: calculating a histogram for each block;
step 4.3: performing gray level histogram "clipping" on each sub-region;
step 4.4: performing histogram equalization on each block;
step 4.5: Connecting the blocks into a single image by linear interpolation to obtain the transformed image.
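Steps 4.1-4.5 describe what a standard CLAHE implementation performs internally (tiling, per-tile histogram clipping, equalization and interpolation between tiles). A minimal sketch using OpenCV's built-in CLAHE; the clip limit and 8×8 tile grid are illustrative values, not ones fixed by the patent:
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # illustrative parameters
enhanced = clahe.apply(gray)   # `gray` is the edge-expanded grayscale image from step 3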
Further, the step 5 specifically includes:
step 5.1: Rotating the filter kernel by 15 degrees at a time to obtain 12 matched filters covering 0 to 180 degrees;
The Gaussian kernel of the two-dimensional matched filter is a two-dimensional matrix; q = [x, y] is a discrete point in this Gaussian kernel matrix, and φ_i denotes the rotation angle of the ith Gaussian kernel among the n matched filters. When calculating the coefficients of the rotated Gaussian kernel, the rotation center is assumed to be at the point (0, 0), and the rotation matrix is:
r_i = [cos φ_i  −sin φ_i; sin φ_i  cos φ_i];
where φ_i is the rotation angle of the ith kernel. The point in the rotated coordinate system is:
[u, v] = q · r_i = [x cos φ_i + y sin φ_i, −x sin φ_i + y cos φ_i].
step 5.2: calculating a Gaussian convolution kernel of the matched filter;
because the two ends of the Gaussian curve are infinitely extended along the positive and negative directions of the x-axis, the Gaussian curve needs to be truncated: u= ±3σ, |v|is not more than L/2. Thus, the coefficients in the ith gaussian kernel matrix are:
Figure BDA0001938181360000064
wherein, N= { [ u, v ] |||u| is less than or equal to 3 sigma, and v is less than or equal to L/2.
Where K (x, y) is a kernel function, u, v are values on the x-axis and y-axis, respectively, and σ is the offset of the Gaussian function from the center of the x-axis coordinate. The larger the σ, the more the function value extends toward the x-axis, whereas the smaller the σ, the more the function value contracts toward the x-axis. The length of the vessel truncated in the y-axis direction is denoted by L.
In matched filtering, the mean of the convolution kernel coefficients must be zero so that the original background gray-scale characteristics of the image are not changed; the convolution kernel of the matched filter is therefore:
K′_i(x, y) = K_i(x, y) − m_i;
where
m_i = (1/A) Σ_{[u,v]∈N} K_i(x, y)
is the mean of the ith Gaussian kernel and A is the number of points in N.
Step 5.3: After the matched filters are designed, each filter is convolved with the image and the largest convolution value is selected as the pixel value of the enhanced image. Let f(x, y) be the original image and f′(x, y) the image after matched-filtering enhancement; the formula for enhancing the vessels in the image with n Gaussian matched filters is:
f′(x, y) = max_{1≤i≤n} (f ∗ K′_i)(x, y);
where ∗ denotes two-dimensional convolution and the maximum is taken over the n rotated kernels.
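A sketch of the Gaussian matched-filter bank of steps 5.1-5.3, assuming NumPy and SciPy. The values of σ and the truncation length L, as well as the window size, are illustrative choices consistent with the formulas above; the patent does not fix them:
import numpy as np
from scipy.ndimage import convolve

def gaussian_matched_kernels(sigma=2.0, L=9, n=12):
    # Zero-mean rotated Gaussian matched-filter kernels, one every 180/n degrees.
    half = int(np.ceil(max(3 * sigma, L / 2)))
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    kernels = []
    for i in range(n):
        phi = i * np.pi / n
        u = xs * np.cos(phi) + ys * np.sin(phi)      # rotated coordinates [u, v] = q . r_i
        v = -xs * np.sin(phi) + ys * np.cos(phi)
        inside = (np.abs(u) <= 3 * sigma) & (np.abs(v) <= L / 2)          # neighbourhood N
        k = np.where(inside, -np.exp(-u ** 2 / (2 * sigma ** 2)), 0.0)    # K_i(x, y)
        k[inside] -= k[inside].mean()                # K'_i = K_i - m_i (zero mean over N)
        kernels.append(k)
    return kernels

def matched_filter_response(img, kernels):
    # f'(x, y) = max_i (f * K'_i)(x, y): keep the strongest response per pixel.
    img = img.astype(np.float64)
    return np.max([convolve(img, k) for k in kernels], axis=0)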
Further, step 6 specifically includes:
step 6.1: Rotating the filter kernel by 5 degrees at a time to obtain 36 matched filters covering −90 to 90 degrees;
The gaussian kernel of the two-dimensional gabor filter is also a two-dimensional matrix, and when coefficients in the gaussian kernel rotation matrix are calculated, the rotation matrix is assumed to be:
r_i = [cos φ_i  −sin φ_i; sin φ_i  cos φ_i];
where φ_i is the rotation angle of the ith Gabor kernel. The point in the rotated coordinate system is:
[u, v] = q · r_i.
step 6.2: Calculating the Gabor convolution kernel G_i(x, y) of the matched filter;
[formula image not reproduced in the text]
step 6.3: As in step 5.3, after the matched filters are designed, each filter is convolved with the image and the largest convolution value is selected as the pixel value of the enhanced image. Let f(x, y) be the original image and f′(x, y) the image after matched-filtering enhancement; the formula for enhancing the vessels in the image with the n Gabor matched filters is:
f′(x, y) = max_{1≤i≤n} (f ∗ G_i)(x, y).
the Gaussian filter is only used to have certain advantages in weak vascular area response, but background noise generated by the response is also more, the Gabor filter bank is only used, weak vascular area response is insufficient, false response is easy to generate, and therefore the Gaussian filter bank and the Gabor filter bank are fused to each other to exert own advantages. As shown in fig. 14.
Further, step 7 specifically includes:
The Gaussian filtering result is fused with the Gabor filtering result, that is, the two images are added with a 6:4 weighting of the Gaussian result relative to the Gabor result, as shown in fig. 15.
Further, the step 8 specifically includes:
step 8.1: Subtracting, at each pixel, the value of the image obtained by CLAHE, matched filtering and fusion from the value of the proportionally combined three-channel image;
step 8.2: Performing the inversion operation, namely 1 − F(x, y);
further, the step 9 specifically includes:
step 9.1: An opening operation is performed using a 3×3 structuring element.
step 9.2: A closing operation is performed using a 3×3 structuring element.
step 9.3: An erosion operation is performed using a 3×3 structuring element.
Step 9.4: Multiplying the processed mask by the inverted retinal blood vessel image to obtain the final preprocessed image, as shown in fig. 11.
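A continuation of the sketches above covering the fusion, subtraction, inversion and masking of steps 7-9. Here `gauss_resp` and `gabor_resp` stand for the assumed outputs of the two filter banks sketched earlier, `gray` is the proportionally combined three-channel image and `mask` is the binary field-of-view mask; the 0-1 normalisation is an added assumption to make the arithmetic well defined, the subtraction direction follows the wording of the claims, and the 6:4 weights and 3×3 structuring element follow the text:
import cv2
import numpy as np

def norm01(x):
    x = x.astype(np.float64)
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

fused = 0.6 * norm01(gauss_resp) + 0.4 * norm01(gabor_resp)   # step 7: 6:4 weighting
diff = fused - norm01(gray)    # step 8: "subtracting the gray level image from the fused image"
inv = 1.0 - diff               # step 8.2: inversion

kernel = np.ones((3, 3), np.uint8)
m = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_OPEN, kernel)   # step 9.1: opening
m = cv2.morphologyEx(m, cv2.MORPH_CLOSE, kernel)                      # step 9.2: closing
m = cv2.erode(m, kernel)                                              # step 9.3: erosion
target = inv * (m > 0)                                                # step 9.4: preprocessed image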
Step 10: combining PCNN with Rapid attachment mechanism with region growing idea
Furthermore, the PCNN simplified model in step 10 is a novel neural network which is fundamentally different from the traditional artificial neural network and is formed and developed in the 90 th century, and is proposed according to the phenomenon that neurons in the cerebral cortex of cats synchronously emit pulses, and has wide application in the fields of image segmentation, edge detection, noise reduction, feature extraction and the like. When PCNN is applied to fundus image processing, the number of neurons is consistent with that of pixels of an image, and each neuron corresponds to each pixel one by one. The PCNN model can be divided into three parts: an input domain, a modulation domain, and a pulse generation domain. The input fields can be further divided into: a feedback input field and a connection input field. As shown in fig. 12.
Further, the PCNN is expressed by the following iterative formulas:
F_ij[n] = S_ij (1)
L_ij[n] = V_L Σ_kl W_ijkl Y_kl[n−1] (2)
U_ij[n] = F_ij[n]{1 + β L_ij[n]} (3)
Y_ij[n] = 1 if U_ij[n] > θ_ij[n], otherwise Y_ij[n] = 0 (4)
θ_ij[n] = exp(−α_θ) θ_ij[n−1] + V_θ Y_ij[n] (5)
where S_ij is the external stimulus, i.e. the gray value of the pixel at (i, j); F_ij is the feedback input of the neuron; L_ij is the linking input of the neuron, i.e. a weighted sum of the outputs of the neurons surrounding it; U_ij is the internal activity term; Y_ij is the pulse output of the neuron; θ_ij is the dynamic threshold; V_L is the amplification factor of the linking input field; β is the connection strength coefficient between neurons; V_θ and α_θ are the amplification factor and the decay constant of the dynamic threshold, respectively; and W_ijkl is the connecting weight matrix.
Whether a neuron outputs a pulse is determined by comparing the internal activity term with the dynamic threshold: when U_ij > θ_ij, the neuron fires and outputs a pulse, and its threshold is then raised to prevent it from firing again. The iteration continues until, at some later time, U_ij again exceeds θ_ij, at which point the neuron fires again and outputs a pulse.
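For concreteness, a NumPy sketch of one pass of equations (1)-(5) as reconstructed above; the 3×3 linking-weight matrix and the numeric constants are illustrative assumptions, not values fixed by the patent:
import numpy as np
from scipy.ndimage import convolve

def pcnn_iteration(S, Y, theta, beta, V_L=1.0, V_theta=400.0, alpha_theta=0.2):
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])                                   # assumed linking weights
    F = S                                                             # (1) feedback input = stimulus
    L = V_L * convolve(Y.astype(np.float64), W, mode="constant")      # (2) linking input from fired neighbours
    U = F * (1.0 + beta * L)                                          # (3) internal activity
    Y_new = (U > theta).astype(np.uint8)                              # (4) pulse output
    theta_new = np.exp(-alpha_theta) * theta + V_theta * Y_new        # (5) dynamic threshold update
    return Y_new, theta_new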
The simplified PCNN model is modified into a region-growing PCNN (RG-PCNN): in the coupling part, the fixed connection coefficient β is replaced by a variable coefficient β_n, as shown in formula (6):
U_ij[n] = F_ij[n](1 + β_n(t) L_ij[n]) (6)
in each iteration, the beta value is varied, an initial value is set for beta so that the first-pulsed neuron can capture the next neuron nearby, and then repeatedly with a smaller delta set β To increase its value. And comparing the edge proportion and the area proportion of the pixel area corresponding to the neurons excited before and after the beta change respectively, if the edge proportion and the area proportion are smaller than a certain threshold value, continuing to increase beta, otherwise ending the iterative process.
The invention uses a single-pass condition to handle termination of the algorithm: each neuron is allowed to pulse only once, its state changing from 0 to 1 a single time, and the neurons belonging to one region are fired simultaneously during the algorithm. The iteration number at which a neuron pulses is used as its region label and is stored in a matrix P, all of whose elements are initialized to 0.
P_ij = n if Y_ij[n] = 1 (the neuron at (i, j) fires at iteration n); otherwise P_ij keeps its previous value (7)
To achieve time independence, the invention uses a threshold W_n determined from the neurons that have not yet pulsed, where n denotes the iteration number; the threshold of the other (already fired) neurons is set to a large constant Ω to prevent repeated firing at the same location.
θ_ij[n] = W_n for neurons that have not yet fired, and θ_ij[n] = Ω for neurons that have already fired (8)
W_n is set to the maximum pixel value among the neurons that have not yet fired; in other words, the unfired neuron with the greatest intensity is initially set to the fired state. If several unfired neurons share the same maximum intensity, only one of them is selected to fire.
Since the pulse transmission of the linking field in the PCNN model has a time delay, break points or discontinuous areas may appear in the image. To reduce these effects, a fast-linking model is used, which allows the autowave generated by the already-fired neurons to propagate fully within one iteration, so that all captured neurons can fire and emit pulses in the same iteration.
In order to obtain better vessel segmentation results, the following cases are chosen herein as local iteration and global iteration termination conditions:
(1) All neurons have fired;
(2) β is greater than β_max;
(3) The proportion e of the number of vessel-edge pixels to the total number of vessel pixels in the grown region is smaller than a set threshold;
(4) The ratio m of the vessel area to the whole image area is larger than a set threshold;
(5) The fast-linking process has terminated.
Condition (1) is the global iteration condition, and conditions (2)-(5) are local iteration conditions.
Condition (1) means that every pixel in the image has been assigned to a region, i.e. there are no unassigned pixels; this can be checked by testing whether any element of P is still 0, because the initial values in P are 0 and, at each iteration, the iteration number is written into the pixels corresponding to the newly fired neurons. Condition (2) terminates the process when the connection strength coefficient β of the current iteration exceeds the given value β_max. Compared with the original RG-PCNN, the two termination conditions (3) and (4) are added for the present application scenario. Condition (3): if, during seed growth, the ratio of the number of vessel-edge pixels to the total number of vessel pixels is smaller than the set threshold e, the current region is regarded as pseudo vessel and deleted, the iteration is terminated, and the largest seed point among the remaining unfired pixels is selected to continue the above steps. Condition (4) limits the ratio of the total grown vessel area to the whole image to be smaller than the set threshold m, so as to prevent overgrowth of the vessels. Condition (5) means that no neuron is newly captured: the numbers of neurons fired before and after an iteration are compared; if they are equal the iteration stops, and if the number of fired neurons is still increasing the iteration continues.
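One way to implement the validity test behind conditions (3) and (4) is sketched below; the comparison direction follows the wording of step 10.9 and the claims, and counting edge pixels via a one-pixel erosion is an assumption, since the patent does not define the edge operator:
import numpy as np
from scipy.ndimage import binary_erosion

def growth_is_invalid(region, e_thresh=0.2, m_thresh=0.11):
    # `region` is a boolean mask of the pixels fired in the current round of growth.
    region = region.astype(bool)
    area = int(region.sum())
    if area == 0:
        return True
    edge = area - int(binary_erosion(region).sum())   # pixels removed by a 1-pixel erosion = edge pixels
    e = edge / area                                   # edge pixels / total vessel pixels
    m = area / region.size                            # vessel area / whole-image area
    return e <= e_thresh and m <= m_thresh            # both small: treat as pseudo vessel and delete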
Further, the first iteration is performed by computing equations (1)-(5); the following are then computed repeatedly:
L_ij[n] = V_L Σ_kl W_ijkl Y_kl[n−1] (9)
U_ij[n] = F_ij[n]{1 + β(n) L_ij[n]} (10)
Y_ij[n] = 1 if U_ij[n] > θ_ij[n], otherwise Y_ij[n] = 0 (11)
until Y no longer changes, then the threshold is updated according to equation (12), starting a new iteration:
θ_ij[n+1] = W_{n+1} for neurons that have not yet fired, and θ_ij[n+1] = Ω for neurons that have already fired (12)
the step 10 specifically comprises the following steps:
step 10.1: Setting the initial value β_0 of the connection strength coefficient, the increment δβ of the connection coefficient, the maximum value β_max of the connection strength coefficient, the threshold e on the proportion of vessel-edge pixels to total vessel pixels, the threshold m on the ratio of vessel area to the whole image, and the stop threshold T = 255;
step 10.2: Normalizing the preprocessing result to S_y and using it as the external input of the PCNN, F_ij[n] = S_y;
step 10.3: Selecting the seed point: the pixel with the largest value in S_ij is taken as the seed point and its output is set to 1, Y(ij_seed) = 1;
step 10.4: Setting the PCNN threshold: W_n is taken as the maximum pixel value among the neurons not yet in the fired state, i.e. T_ij[n] = W_n, while the threshold of the other (already fired) neurons is set to a very large constant Ω (500 by default);
step 10.5: Introducing the fast-linking mechanism for iterative segmentation: a matrix Y_0 of the same size as PCNN_Y is defined and set to Y_0 = 1; while β ≤ β_max, if Y_0 and PCNN_Y differ at any position, then Y_0 = PCNN_Y, a fast-linking iteration is performed and PCNN_Y is recalculated;
step 10.6: Repeating the cycle, comparing the PCNN_Y after each iteration with the value Y_0 before the iteration, until the two are equal;
step 10.7: Increasing β by the small constant δβ, resetting Y_0 to 1, and starting a new cycle;
step 10.8: Repeating the above process until β > β_max, at which point the vessel completes its first round of growth;
step 10.9: Finally, when the proportion e of the number of vessel-edge pixels to the total number of vessel pixels is smaller than or equal to 0.2 and the ratio m of the vessel area to the whole image is smaller than or equal to 0.11, the growth is regarded as invalid, that part of pseudo vessels is deleted, a new growth is started, and T = 255 is set as the stop condition.
The entire algorithm is described as follows:
setting beta 0 、δβ、β max 、T、e、m;
Normalizing the pretreatment result to S y As external input to PCNN: f (F) ij [n]=S y
Selected seed point, S ij The largest point of the pixel values corresponds to the pixel value assigned 1, i.e. Y (ij) seed )=1;
Finding the maximum value of the corresponding pixel values without exciting the neuron as a threshold value W n T, i.e ij =W n The threshold of the other neurons is set to a very large constant Ω (default to 500)
θ_ij[n] = W_n for unfired neurons, Ω for fired neurons
while β ≤ β_max
    while PCNN_Y != Y_0
        Y_0 = PCNN_Y
        L_ij[n] = V_L Σ_kl W_ijkl Y_kl[n−1]
        U_ij[n] = F_ij[n]{1 + β(n) L_ij[n]}
        Y_ij[n] = 1 if U_ij[n] > θ_ij[n], else 0
    end
    β = β + δβ
    Y_0 = 1
end
Calculate the proportion e of edge pixels among the grown vessel pixels and the ratio m of the vessel area to the whole image:
if sum(e > 0.2 && m > 0.11)
    [formula image not reproduced in the text]
n=n+1
    count the indices where P == 0 (the remaining unlabeled pixels)
end
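To complement the pseudocode, a runnable Python rendering under stated assumptions: the linking weights, the β schedule and the treatment of rejected regions (marking them with −1 so that a fresh seed is chosen) are illustrative choices, not details fixed by the patent, while e, m and Ω follow the text.
import numpy as np
from scipy.ndimage import convolve, binary_erosion

def rg_pcnn_segment(S, beta0=0.1, dbeta=0.05, beta_max=1.0,
                    e_thresh=0.2, m_thresh=0.11, omega=500.0):
    # S is the normalised preprocessed image (the target retinal vessel image).
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])                  # assumed linking weights
    S = S.astype(np.float64)
    P = np.zeros(S.shape, dtype=int)                 # 0 = not yet labelled
    label = 0
    while (P == 0).any():                            # global condition (1): every pixel labelled
        label += 1
        unlabelled = P == 0
        seed = np.unravel_index(np.argmax(np.where(unlabelled, S, -np.inf)), S.shape)
        Y = np.zeros(S.shape, dtype=np.uint8)
        Y[seed] = 1                                  # Y(ij_seed) = 1
        theta = np.where(unlabelled, S[seed], omega) # W_n for unfired, Omega elsewhere
        beta = beta0
        while beta <= beta_max:
            while True:                              # fast linking: iterate until Y stabilises
                L = convolve(Y.astype(np.float64), W, mode="constant")
                U = S * (1.0 + beta * L)
                Y_new = ((U > theta) | (Y == 1)).astype(np.uint8)  # single pass: once fired, stays fired
                if np.array_equal(Y_new, Y):
                    break
                Y = Y_new
            beta += dbeta                            # relax the linking strength and grow again
        region = (Y == 1) & unlabelled
        area = int(region.sum())
        edge = area - int(binary_erosion(region).sum())
        if area and edge / area <= e_thresh and area / S.size <= m_thresh:
            P[region] = -1                           # invalid growth: discard as pseudo vessel
        else:
            P[region] = label                        # accept: record the region label
    return P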
The beneficial effects of the invention are as follows:
(1): Ten images were randomly selected from the DRIVE database, and accuracy (Acc), sensitivity (Sen) and specificity (Spe) were calculated to compare preprocessing using a combination of Gaussian filtering and Gabor filtering with preprocessing using Gabor filtering alone.
TABLE 1 Results of Gabor filtering
[table data not reproduced in the text]
TABLE 2 Results of Gaussian filtering
[table data not reproduced in the text]
TABLE 3 Results of Gabor filtering fused with Gaussian filtering
[table data not reproduced in the text]
(2): a retinal vessel segmentation method based on region growing PCNN is provided. On the basis of pretreatment of retina blood vessels, a certain blood vessel area is selected as an initial seed area; then, PCNN with a quick connection mechanism is combined with the seed region growing idea, and by setting the maximum value of the connection intensity coefficient and the stop condition in the PCNN, if the growth is invalid, the part of pseudo blood vessels are deleted, so that the problem that the break point of the previous method cannot continue to grow is solved, the automatic growth of the blood vessel region in the fundus image is realized, and the blood vessels in the retina image are effectively extracted.

Claims (9)

1. A retinal vascular segmentation method based on region growing PCNN, characterized by comprising:
selecting a seed point from unlabeled pixels of the target retinal vascular image;
increasing the connection strength of a PCNN model and, with the seed point as a starting point, extracting the blood vessel features in the target retinal blood vessel image using the PCNN model with the increased connection strength, until the increased connection strength is greater than a first preset threshold;
if the blood vessel features extracted in the iteration do not meet the first preset condition and the second preset condition at the same time, marking the pixels corresponding to the blood vessel features extracted in the iteration as the same label until all the pixels of the target retina blood vessel image are marked with the labels;
the first preset condition is that the ratio of the number of the blood vessel edge pixels in the blood vessel characteristics extracted in the iteration to the total number of the blood vessel pixels is smaller than or equal to a second preset threshold value;
the second preset condition is that the ratio of the blood vessel area in the blood vessel characteristics extracted in the iteration to the area of the target retina blood vessel image is smaller than or equal to a third preset threshold value;
wherein, after the step of marking the pixels corresponding to the blood vessel features extracted in the current iteration with the same label until all the pixels of the target retinal blood vessel image are labeled, the method further comprises:
if the blood vessel features extracted in the current iteration simultaneously meet the first preset condition and the second preset condition, deleting the blood vessel features extracted in the current iteration.
2. The method of claim 1, wherein the step of selecting a seed point from unlabeled pixels of the target retinal vascular image is preceded by the step of:
multiplying each channel of the retinal blood vessel image to be processed by a corresponding preset weight, and adding to obtain a gray level image of the retinal blood vessel image to be processed;
and acquiring the target retinal blood vessel image according to the gray level image.
3. The method according to claim 2, wherein the step of acquiring the target retinal vascular image from the gray scale image comprises:
filtering the gray image based on a two-dimensional Gaussian filtering method and a two-dimensional Gabor filtering method respectively;
fusing a filtering result of the two-dimensional Gaussian filtering method with a filtering result of the two-dimensional Gabor filtering method to obtain a fused image;
and subtracting the gray level image from the fused image, performing an inversion operation on the subtraction result, and multiplying the inverted result by a preset mask image to obtain the target retinal blood vessel image.
4. A method according to claim 3, wherein the step of filtering the gray scale image based on a two-dimensional gaussian filtering method and a two-dimensional Gabor filtering method, respectively, further comprises, prior to:
judging whether each pixel in the gray image is a boundary pixel according to a four-neighborhood method, and if so, performing edge expansion on each pixel which is a boundary in the gray image;
carrying out contrast enhancement on the gray level image after edge expansion based on a histogram equalization method with limited contrast, and obtaining the preprocessed gray level image;
correspondingly, the steps of filtering the gray image based on the two-dimensional Gaussian filtering method and the two-dimensional Gabor filtering method respectively specifically comprise:
and filtering the preprocessed gray level image based on a two-dimensional Gaussian filtering method and a two-dimensional Gabor filtering method respectively.
5. The method of claim 4, wherein the step of determining, according to a four-neighborhood method, whether each pixel in the gray scale image is a boundary pixel and, if so, performing edge expansion on each boundary pixel specifically comprises:
determining that a pixel in the gray level image is a boundary pixel if at least one, but not all, of the values in its four-neighborhood is 0;
and performing edge expansion on each pixel which is a boundary in the gray level image.
6. The method according to claim 4, wherein the step of obtaining the preprocessed gray-scale image comprises the steps of:
dividing the gray image with the expanded edges into a plurality of blocks;
calculating a gray level histogram of each block, and cutting the gray level histogram of each block;
equalizing the gray level histogram of each block after clipping;
and connecting blocks corresponding to the equalized gray level histogram based on a linear interpolation method to obtain the preprocessed gray level image.
7. A method according to claim 3, characterized in that the ith Gaussian convolution kernel K′_i(x, y) of the two-dimensional Gaussian filtering method is given by:
K′_i(x, y) = K_i(x, y) − m_i;
K_i(x, y) = −exp(−u²/(2σ²));
m_i = (1/A) Σ_{[u,v]∈N} K_i(x, y);
wherein K_i(x, y) is a coefficient of the ith Gaussian kernel matrix, m_i is the mean value of the ith Gaussian kernel, [u, v] is a discrete point of the ith (rotated) Gaussian kernel with u and v the values on the x axis and y axis respectively, σ is the spread (standard deviation) of the Gaussian kernel about the x-axis coordinate center, N = {[u, v] : |u| ≤ 3σ, |v| ≤ L/2}, L is the length of the blood vessel segment truncated in the y-axis direction, and A is the number of points in N.
8. The method according to claim 3, wherein the step of fusing the filtering result of the two-dimensional gaussian filtering method with the filtering result of the two-dimensional Gabor filtering method to obtain the fused image specifically comprises:
and multiplying the filtering result of the two-dimensional Gaussian filtering method and the filtering result of the two-dimensional Gabor filtering method by corresponding preset weights respectively, and then adding to obtain a fusion image.
9. The method according to claim 3, wherein the step of obtaining the target retinal blood vessel image by multiplying the subtraction result by a preset mask image after the subtraction result is inverted, comprises:
sequentially performing an opening operation, a closing operation and an erosion operation on a preset mask image;
multiplying the processed preset mask image with the negation result to obtain the target retinal blood vessel image.
CN201910013381.XA 2019-01-07 2019-01-07 Retinal vessel segmentation method based on region growing PCNN Active CN109829931B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910013381.XA CN109829931B (en) 2019-01-07 2019-01-07 Retinal vessel segmentation method based on region growing PCNN


Publications (2)

Publication Number Publication Date
CN109829931A CN109829931A (en) 2019-05-31
CN109829931B true CN109829931B (en) 2023-07-11

Family

ID=66860796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910013381.XA Active CN109829931B (en) 2019-01-07 2019-01-07 Retinal vessel segmentation method based on region growing PCNN

Country Status (1)

Country Link
CN (1) CN109829931B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288588A (en) * 2019-07-01 2019-09-27 齐鲁工业大学 Retinal images blood vessel segmentation method and system based on gray variance and standard deviation
CN114155193B (en) * 2021-10-27 2022-07-26 北京医准智能科技有限公司 Blood vessel segmentation method and device based on feature enhancement


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8351669B2 (en) * 2011-02-01 2013-01-08 Universidade Da Coruna-Otri Method, apparatus, and system for retinal image analysis
US9089288B2 (en) * 2011-03-31 2015-07-28 The Hong Kong Polytechnic University Apparatus and method for non-invasive diabetic retinopathy detection and monitoring
CN107016676B (en) * 2017-03-13 2019-11-08 三峡大学 A kind of retinal vascular images dividing method and system based on PCNN
CN108090899A (en) * 2017-12-27 2018-05-29 重庆大学 A kind of vessel extraction and denoising method
CN109003279B (en) * 2018-07-06 2022-05-13 东北大学 Fundus retina blood vessel segmentation method and system based on K-Means clustering labeling and naive Bayes model

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1483727A2 (en) * 2002-03-04 2004-12-08 Isis Innovation Limited Unsupervised data segmentation
WO2005048190A1 (en) * 2003-11-13 2005-05-26 Centre Hospitalier De L'universite De Montreal (Chum) Automatic multi-dimensional intravascular ultrasound image segmentation method
WO2015003225A1 (en) * 2013-07-10 2015-01-15 Commonwealth Scientific And Industrial Research Organisation Quantifying a blood vessel reflection parameter of the retina
WO2015063054A1 (en) * 2013-10-30 2015-05-07 Agfa Healthcare Vessel segmentation method
JP2015202236A (en) * 2014-04-15 2015-11-16 キヤノン株式会社 Eyeground image analysis device and analysis method
CN106815853A (en) * 2016-12-14 2017-06-09 海纳医信(北京)软件科技有限责任公司 To the dividing method and device of retinal vessel in eye fundus image
CN108109159A (en) * 2017-12-21 2018-06-01 东北大学 It is a kind of to increase the retinal vessel segmenting system being combined based on hessian matrixes and region

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
基于区域增长与局部自适应C-V模型的脑血管分割 (Cerebrovascular segmentation based on region growing and a locally adaptive C-V model); 解立志, 周明全, 田沄, 武仲科, 王醒策; 软件学报 (Journal of Software), (08): 237-246 *
视网膜图像中的血管自适应提取 (Adaptive extraction of blood vessels in retinal images); 黄琳, 沈建新, 罗煦; 中国制造业信息化, (01): 68-71 *

Also Published As

Publication number Publication date
CN109829931A (en) 2019-05-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant