CN111899272A - Fundus image blood vessel segmentation method based on coupling neural network and line connector - Google Patents

Fundus image blood vessel segmentation method based on coupling neural network and line connector

Info

Publication number
CN111899272A
CN111899272A (application CN202010804380.XA)
Authority
CN
China
Prior art keywords
image
blood vessel
pixels
neural network
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010804380.XA
Other languages
Chinese (zh)
Other versions
CN111899272B (en)
Inventor
刘锋
黄林媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Maritime University filed Critical Shanghai Maritime University
Priority to CN202010804380.XA priority Critical patent/CN111899272B/en
Publication of CN111899272A publication Critical patent/CN111899272A/en
Application granted granted Critical
Publication of CN111899272B publication Critical patent/CN111899272B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fundus image blood vessel segmentation method based on a coupled neural network and a line connector, belonging to the technical field of medical image and digital image processing. The method comprises the following steps: a simplified pulse coupled neural network model is proposed, and the similarity of adjacent neurons is used to obtain the basic structure of the blood vessels in the image; a new denoising method is proposed, which removes most noise points using pixel connectivity while preserving complete vessel edges; and a line connector is used to solve the vessel breaks that occur during segmentation, so that a complete vessel structure is presented and the accuracy of vessel identification is improved. Tests on the two public retinal databases DRIVE and STARE show that, compared with existing methods, the proposed method achieves excellent average accuracy and sensitivity and has a fast response time.

Description

Fundus image blood vessel segmentation method based on coupling neural network and line connector
Technical Field
The invention relates to the field of medical image processing, in particular to a fundus image blood vessel segmentation method based on a coupling neural network and a line connector.
Background
Changes in the structural characteristics of the fundus image can, to some extent, reflect the presence of certain diseases. Blood vessels are the most prominent structures in the fundus image, and their diameter, color and tortuosity are closely related to disease: cardiovascular disease and coronary artery disease in adults and retinopathy in infants generally cause changes in the shape of the retinal blood vessels. Analysis of fundus images is therefore important for diagnosing ophthalmic problems and other diseases such as diabetes and hypertension. Because manual analysis is time-consuming and labor-intensive, an effective computer-aided diagnosis system that processes the images in advance, for example by automatically segmenting the blood vessels in fundus images, can provide objective and clear diagnostic data to medical workers.
Existing image segmentation methods fall roughly into two categories: supervised and unsupervised. Supervised methods train a model on a labeled data set and can obtain more accurate results, but they require a large amount of training time and place certain demands on machine performance. Unsupervised methods, such as matched filtering, morphological processing and wavelet transforms, can perform segmentation without prior knowledge and are typically faster than supervised methods.
Because fundus retinal images have a complex structure, uneven illumination, weak contrast and noise interference, general image segmentation algorithms do not segment fundus blood vessels well, and finding an effective blood vessel segmentation method is a problem of wide current concern.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a fundus image blood vessel segmentation method based on a coupled neural network and a line connector, comprising the following steps:
Step 1: acquire the fundus image to be segmented and preprocess it as follows: extract the green channel of the fundus image; enhance the image contrast with a contrast-limited adaptive histogram equalization algorithm whose adaptive window size is set to 16 × 16; remove the interfering background with a bottom-hat morphological operation whose structuring element is a 13 × 13 square; finally, extract the region of interest with a mask template to reduce the influence of the non-retinal region on image analysis;
Step 2: improve the pulse coupled neural network model, retaining its basic characteristics while simplifying the computation, and use the similarity of adjacent neurons to segment the blood vessels. The discrete model of the simplified pulse coupled neural network is:
$$F_{ij}[n] = I_{ij}$$
$$L_{ij}[n] = \sum_{kl} W_{ijkl}\, Y_{kl}[n-1]$$
$$U_{ij}[n] = F_{ij}[n]\left(1 + \beta L_{ij}[n]\right)$$
$$E_{ij}[n] = V_E$$
$$Y_{ij}[n] = \begin{cases} 1, & U_{ij}[n] > E_{ij}[n] \\ 0, & \text{otherwise} \end{cases}$$
where the subscript ij labels the neuron and n is the iteration number; F_{ij}[n], L_{ij}[n], U_{ij}[n] and E_{ij}[n] are the feedback input, link input, internal activity and dynamic threshold of the ij-th neuron at the n-th iteration; I_{ij} is the external stimulus of the neuron, here the pixel matrix of the input image; W_{ijkl} is the weight matrix; β is the link coefficient; and Y_{ij}[n] is the output of the neural network. F_{ij}[n] and L_{ij}[n] are combined through the link coefficient β to generate the internal activity U_{ij}[n]; if U_{ij}[n] is greater than E_{ij}[n] the neuron fires, i.e. Y_{ij}[n] = 1, otherwise Y_{ij}[n] = 0.
The parameters in the discrete model are specifically assigned as follows:
V_E is determined by the Otsu method, an adaptive thresholding method that divides the image into background and target by maximizing the between-class variance; since the pixel values of the non-retinal region are essentially 0, the count of pixel value 0 in the histogram is set to 0 before the optimal threshold is computed, so that this region does not bias the threshold;
the link coefficient β is set to 0.2;
the weight matrix W_{ijkl} is a fixed matrix that is given only as an image in the original publication.
Step 3: denoise the segmentation result obtained in step 2 as follows: according to the connectivity principle of image pixels, find all 8-connected regions in the image and count the number of pixels in each region; if the number of pixels in a region is less than a threshold α, regard the region as a noise point and remove it; a large number of experiments show that setting α = 53 gives a good denoising effect.
Step 4: propose a line connector and use it to connect broken blood vessels in the denoised image obtained in step 3, so as to obtain a complete blood vessel image and improve the vessel recognition rate. The basic principle of the line connector is as follows: use a 7 × 7 square sliding window centered on a target pixel and define 12 centrally symmetric directions in the window with an angular resolution of 15°; ignoring background pixels, find the largest 8-connected region in the image, judge the pixel type at the two endpoints of each of the 12 directions in the sliding window, and require that at least one endpoint does not belong to that 8-connected region; judge whether the pixel values at the head and tail of each direction in the window are both 1 and discard the direction if not; then compute the average gray value of the pixels covered by each remaining direction and discard the direction if the average is 1; compare the average gray values of the remaining directions, determine the direction with the largest value, and assign 1 to all pixels along that direction to fill the break; finally slide the window to the next target pixel and repeat until all target pixels have been processed.
Experimental results show that the images segmented by the method are superior to those of most existing methods in indexes such as average accuracy and sensitivity, and the algorithm has a short response time.
Compared with the prior art, the invention has the following advantages:
although the simplified pulse coupling neural network model loses some advantages of a basic pulse coupling neural network, such as dynamic threshold characteristics, automatic wave characteristics and the like, in consideration of the particularity of a blood vessel structure, the improved neural network model still retains the function of capturing adjacent pixels similar to a target pixel value, and greatly reduces the complexity of calculation on the basis of fully meeting the requirement of blood vessel segmentation.
The denoising method provided by the invention avoids the phenomena of image edge blurring and distortion brought by a common denoising method, and retains clear and complete blood vessel edges while removing a large number of noise points by using the pixel connectivity principle.
The wire connector can complete the connection of most of the broken parts of the blood vessels by utilizing the linear structural characteristics of the blood vessels, has small calculation amount and quick response time, and can meet the requirement of real-time processing. The line connector is also applicable to other linear structure problems in the field of image processing.
Drawings
In order to more clearly illustrate the fundus image vessel segmentation method based on the coupled neural network and the line connector, the drawings required in the detailed description are briefly introduced below. The drawings described below are only some examples of the invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1: overall flow chart of the fundus image blood vessel segmentation method based on the coupled neural network and the line connector;
FIG. 2: (a) green channel of the fundus image; (b) preprocessed fundus image;
FIG. 3: schematic diagram of a single neuron in the simplified pulse coupled neural network model;
FIG. 4: (a) result of the first iteration; (b) result of the second iteration; (c) denoising result;
FIG. 5: the 12 directions (angular resolution 15°) centered on the target pixel in the 7 × 7 window of the line connector;
FIG. 6: pixels covered by the line connector in four different directions: (a) 15°; (b) 30°; (c) 60°; (d) 75°; the black squares must belong to vessel pixels;
FIG. 7: example of a vessel break in the 30° direction of the line connector; colored squares represent vessel pixels and hatched squares represent the broken part.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. The invention can be implemented in many ways other than those described herein, and all other embodiments obtained by a person skilled in the art without inventive effort fall within the scope of the invention.
An embodiment of the invention provides a fundus image blood vessel segmentation method based on a coupled neural network and a line connector. The implementation flow is shown in Fig. 1 and mainly comprises the following steps:
1. acquiring a fundus image to be segmented, and performing preprocessing operation on the fundus image.
Due to the complex structure of the fundus image, the problems of uneven illumination, weak contrast, noise interference and the like, the fundus image needs to be preprocessed to eliminate noise, enhance the contrast between the blood vessel and the background and facilitate the subsequent blood vessel segmentation operation.
Since the green channel of the color RGB image can exhibit a better contrast effect than other channels, we select the green channel map of the fundus image for analysis, as shown in fig. 2 (a).
And the contrast of the image is further improved by using a contrast-limited adaptive histogram equalization (CLAHE) algorithm, the CLAHE adopts a sliding window for optimization operation, and experiments show that a better effect can be obtained when the window size is 16 multiplied by 16.
Thirdly, background interference factors such as macula lutea and optic disc exist in the retina image, the diameter of the blood vessel is considered to be small, the large background interference factors can be well removed by using bottom cap operation in morphology, meanwhile, the basic structure of the blood vessel is kept, through experiments, structural elements of the bottom cap operation are selected to be square, the size of a window is 13 x 13, and good effects can be obtained.
Due to the influence of illumination, imaging environments of equipment and the like, the obtained retina image has background noise, and the extraction of a region of interest (ROI) can reduce the influence to a certain extent. The method utilizes the existing mask template in the image database and automatically makes the mask template which does not exist in the database, the pixel values of the ROI in the template are all 1, the pixel value of the ROI is 0, and the original image is multiplied by the mask template to effectively extract the region of interest.
The preprocessed image is shown in fig. 2 (b).
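As a concrete reference for the preprocessing described above, the following is a minimal sketch in Python with OpenCV. The function name, the CLAHE clip limit and the exact handling of the mask are illustrative assumptions; only the window sizes (16 × 16 CLAHE tiles, 13 × 13 square structuring element) come from the description above.

```python
# A minimal preprocessing sketch (not the patent's exact code) using OpenCV/NumPy.
# Assumptions: `bgr` is the colour fundus image and `mask` is the binary ROI
# template described above (1 inside the retina, 0 outside); clipLimit is assumed.
import cv2
import numpy as np

def preprocess(bgr: np.ndarray, mask: np.ndarray) -> np.ndarray:
    green = bgr[:, :, 1]                      # green channel (OpenCV stores images as BGR)
    # Contrast-limited adaptive histogram equalization with a 16x16 tile grid.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(16, 16))
    enhanced = clahe.apply(green)
    # Bottom-hat (black-hat) morphology with a 13x13 square structuring element
    # suppresses large background structures such as the optic disc while the
    # thin, dark vessels come out bright.
    se = cv2.getStructuringElement(cv2.MORPH_RECT, (13, 13))
    bottom_hat = cv2.morphologyEx(enhanced, cv2.MORPH_BLACKHAT, se)
    # Keep only the region of interest defined by the mask template.
    return bottom_hat * (mask > 0).astype(bottom_hat.dtype)
```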
2. Vessel segmentation with the simplified pulse coupled neural network model.
The pulse coupled neural network is known as a third-generation artificial neural network. It has many excellent properties, such as a variable threshold, similarity-based clustering, nonlinear modulation, dynamic pulsing and synchronous pulse emission, and is well suited to image information processing. The network is a two-dimensional single-layer array of pulse-coupled neurons, and its discrete model is:
$$F_{ij}[n] = e^{-a_f} F_{ij}[n-1] + V_F \sum_{kl} M_{ijkl}\, Y_{kl}[n-1] + S_{ij}$$
$$L_{ij}[n] = e^{-a_l} L_{ij}[n-1] + V_L \sum_{kl} W_{ijkl}\, Y_{kl}[n-1]$$
$$U_{ij}[n] = F_{ij}[n]\left(1 + \beta L_{ij}[n]\right)$$
$$E_{ij}[n] = e^{-a_e} E_{ij}[n-1] + V_E\, Y_{ij}[n-1]$$
$$Y_{ij}[n] = \begin{cases} 1, & U_{ij}[n] > E_{ij}[n] \\ 0, & \text{otherwise} \end{cases}$$
where the subscript ij labels the neuron and n is the iteration number; S_{ij} is the external stimulus of the neuron; F_{ij}[n], L_{ij}[n], U_{ij}[n] and E_{ij}[n] are the feedback input, link input, internal activity and dynamic threshold of the ij-th neuron at the n-th iteration; M and W are the link weight matrices; V_F, V_L and V_E are the amplitude constants of F_{ij}[n], L_{ij}[n] and E_{ij}[n]; a_f, a_l and a_e are the corresponding decay coefficients; β is the link coefficient; and Y_{ij}[n] is the output of the neural network.
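For reference, a minimal sketch of one iteration of the standard pulse coupled neural network described above (not the simplified model of the invention). The function name, the default parameter values and the 3 × 3 matrices M and W passed in are all illustrative assumptions; only the update equations follow the discrete model given above.

```python
# One update step of the standard PCNN; parameter values are placeholders.
import numpy as np
from scipy.ndimage import convolve

def pcnn_step(S, F, L, E, Y, M, W, a_f=0.1, a_l=0.3, a_e=0.2,
              V_F=0.5, V_L=0.2, V_E=20.0, beta=0.2):
    F = np.exp(-a_f) * F + V_F * convolve(Y, M, mode='constant') + S  # feedback input F_ij[n]
    L = np.exp(-a_l) * L + V_L * convolve(Y, W, mode='constant')      # link input L_ij[n]
    U = F * (1.0 + beta * L)                                          # internal activity U_ij[n]
    E = np.exp(-a_e) * E + V_E * Y                                    # dynamic threshold (uses Y[n-1])
    Y = (U > E).astype(np.float64)                                    # pulse output Y_ij[n]
    return F, L, E, Y
```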
The invention improves the pulse coupling neural network model, and the improved discrete model is as follows:
$$F_{ij}[n] = I_{ij}$$
$$L_{ij}[n] = \sum_{kl} W_{ijkl}\, Y_{kl}[n-1]$$
$$U_{ij}[n] = F_{ij}[n]\left(1 + \beta L_{ij}[n]\right)$$
$$E_{ij}[n] = V_E$$
$$Y_{ij}[n] = \begin{cases} 1, & U_{ij}[n] > E_{ij}[n] \\ 0, & \text{otherwise} \end{cases}$$
the improved pulse coupling neural network model removes exponential decay coefficient, greatly reduces computational complexity, and simplifies feedback input Fij[n]And a connection input Lij[n]A mathematical expression ofETo dynamic threshold terms Eij[n],VEIs the optimal threshold determined by the Otsu algorithm. FIG. 3 is a schematic diagram of a single neuron in this model.
Although the simplified pulse coupled neural network loses some characteristics of the basic pulse coupled neural network, such as the variable threshold and the dynamic pulsing characteristic, it fully exploits the similarity-based clustering property and can capture neighbouring neurons similar to the target neuron. For the specific problem of image segmentation this means it can capture neighbouring pixels whose values are close to that of the target pixel, which improves the correct recognition rate of pixels on the target edge.
After the preprocessed retinal image is input and the relevant parameters of the model are initialized, a preliminary vessel segmentation result is output after two iterations. In essence, the optimal segmentation threshold between vessels and background is found with the Otsu algorithm; when the histogram is used to count pixels, the count of pixel value 0 is set to 0 so that the threshold is not disturbed by the large non-retinal area. The first segmentation with this threshold yields the basic vessel body, as shown in Fig. 4(a); the neural network is then iterated once more, and the similarity-based clustering property finds neighbouring pixels whose values are close to those of the vessel-body pixels, i.e. it recovers the vessel pixels missing at the vessel edges, which improves the accuracy of vessel pixel identification; the result is shown in Fig. 4(b).
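A minimal sketch of this two-iteration segmentation with the simplified model and the modified Otsu threshold is given below. The function names are illustrative, and since the 3 × 3 weight matrix W is shown only as an image in the patent, any concrete values passed to this function would be an assumption.

```python
# Simplified PCNN segmentation sketch: `img` is the preprocessed uint8 image,
# `W` is the 3x3 link weight matrix (values not reproduced in this text).
import numpy as np
from scipy.ndimage import convolve

def otsu_ignore_zero(img: np.ndarray) -> int:
    """Otsu threshold computed after zeroing the count of pixel value 0,
    so the large non-retinal background does not bias the threshold."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    hist[0] = 0
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def simplified_pcnn(img: np.ndarray, W: np.ndarray, beta: float = 0.2,
                    iterations: int = 2) -> np.ndarray:
    F = img.astype(np.float64)          # feedback input equals the external stimulus I_ij
    V_E = float(otsu_ignore_zero(img))  # static threshold E_ij[n] = V_E
    Y = np.zeros_like(F)                # no neuron has fired before the first iteration
    for _ in range(iterations):
        L = convolve(Y, W, mode='constant')   # link input from neighbouring outputs
        U = F * (1.0 + beta * L)              # internal activity U = F(1 + beta*L)
        Y = (U > V_E).astype(np.float64)      # neuron fires when U exceeds the threshold
    return Y.astype(np.uint8)
```

In the first iteration the link input is zero, so the output is the plain Otsu segmentation (the vessel body of Fig. 4(a)); the second iteration adds the neighbourhood term and captures similar edge pixels.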
3. Denoising.
The segmented image obtained from the above steps contains a lot of noise, especially isolated interference points of various sizes, as shown in Fig. 4(b). Common filtering operations (e.g. mean or median filtering) cause distortion of vessel edges or loss of vessel detail. Compared with the continuity of the line-like vessel structure, noise pixels are typically discrete and sparse. Therefore, the connectivity principle can be used to count the number of pixels in each connected region, a threshold can be set, and connected regions below the threshold, i.e. small isolated points, can be removed without damaging the vessel body. Connected components are used to mark foreground pixels, and their nature depends on the chosen form of pixel adjacency, the most common forms being 4-adjacency and 8-adjacency. Here 8-adjacency is chosen, which is suitable for capturing more noise pixels.
Based on the above analysis, the following noise reduction method is used: find all 8-connected regions in the binary image and count the number of pixels in each; if the number is less than a threshold α, the region is regarded as a noise point and removed. A large number of experiments show that α = 53 is a suitable choice. Fig. 4(c) shows the final denoising result, which effectively avoids blurring and distortion of the target edges.
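A minimal sketch of this connectivity-based denoising, assuming the 0/1 segmentation from the previous step and using OpenCV's connected-component analysis (the patent does not prescribe a particular implementation):

```python
# Remove 8-connected foreground regions smaller than alpha pixels.
import cv2
import numpy as np

def remove_small_components(binary: np.ndarray, alpha: int = 53) -> np.ndarray:
    # Label all 8-connected foreground regions and count their pixels.
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(
        binary.astype(np.uint8), connectivity=8)
    cleaned = np.zeros_like(binary, dtype=np.uint8)
    for label in range(1, n_labels):                 # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= alpha:  # keep regions with >= alpha pixels
            cleaned[labels == label] = 1
    return cleaned
```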
4. Connecting broken blood vessels with the line connector.
The segmented vessel image obtained in the above steps contains many broken vessel segments, which need to be connected to form a complete and accurate vessel map. The invention proposes a line connector that effectively solves the vessel-break problem.
The invention uses a 7 × 7 square window centered on the target pixel and defines 12 directions with an angular resolution of 15°, as shown in Fig. 5. The dark squares in Fig. 6 represent the pixels covered in the 15°, 30°, 60° and 75° directions; reflecting these about the horizontal axis gives the −15°, −30°, −60° and −75° directions, and the 0°, 90° and ±45° directions are obvious. When performing the connection operation, the first and last pixels in each direction must both be vessel pixels, that is, their value in the binary image must be 1; a direction that does not meet this condition is discarded.
The average gray value of the pixels covered by each remaining direction is then computed. If the average gray value is 1, there is no break along that direction and the direction is discarded; if the average gray value is not 1, the direction with the largest average gray value is chosen, which ensures that the gap is indeed a break between vessel segments rather than a spurious connection between a vessel and a noise point.
The 30° direction is used to illustrate the connection scheme, as shown in Fig. 7. The center pixel is a vessel pixel and the hatched squares are the break, which may occur anywhere along the direction. The connection operation converts the pixels at the break into vessel pixels, that is, pixels covered by the 30° direction whose value is 0 are set to 1.
The above procedure connects the broken parts, but other problems can arise, such as erroneous connections between two closely spaced vessel branches, at the start of a vessel bifurcation, or at the vessel edge due to the curvature of the vessel. The implementation is therefore restricted as follows: ignoring background pixels, the largest 8-connected region is found; this region is generally the main vessel body, and at least one of the two endpoints of a broken vessel must not belong to it. This avoids erroneous connections inside the main vessel body and reduces the possibility of misclassifying non-vessel pixels as vessel pixels.
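The following is a rough sketch of the line connector under the rules stated above. The exact pixel coverage of each of the 12 directions is defined by figures in the patent; here each direction is rasterized by simple rounding, which only approximates those figures, and the function and variable names are illustrative.

```python
# Line-connector sketch: `vessel` is the denoised 0/1 vessel map.
import numpy as np
import cv2

def line_offsets(angle_deg: float, radius: int = 3):
    """Offsets (dy, dx) of a line through the window centre at the given angle."""
    theta = np.deg2rad(angle_deg)
    pts = []
    for r in np.linspace(-radius, radius, 2 * radius + 1):
        dy, dx = int(round(-r * np.sin(theta))), int(round(r * np.cos(theta)))
        if (dy, dx) not in pts:
            pts.append((dy, dx))
    return pts

def connect_breaks(vessel: np.ndarray) -> np.ndarray:
    h, w = vessel.shape
    out = vessel.copy()
    # The largest 8-connected component is taken as the main vessel body;
    # at least one endpoint of a candidate direction must lie outside it.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(
        vessel.astype(np.uint8), connectivity=8)
    main = np.argmax(stats[1:, cv2.CC_STAT_AREA]) + 1 if n > 1 else 0
    directions = [line_offsets(a) for a in range(0, 180, 15)]   # 12 directions, 15 deg apart
    ys, xs = np.nonzero(vessel)                                  # target (vessel) pixels
    for y, x in zip(ys, xs):
        if y < 3 or x < 3 or y >= h - 3 or x >= w - 3:
            continue                                             # keep the 7x7 window inside the image
        best_mean, best_dir = -1.0, None
        for offs in directions:
            (dy0, dx0), (dy1, dx1) = offs[0], offs[-1]
            if vessel[y + dy0, x + dx0] != 1 or vessel[y + dy1, x + dx1] != 1:
                continue                      # both endpoints must be vessel pixels
            if labels[y + dy0, x + dx0] == main and labels[y + dy1, x + dx1] == main:
                continue                      # avoid bridging inside the main vessel body
            covered = np.array([vessel[y + dy, x + dx] for dy, dx in offs], float)
            mean = covered.mean()
            if mean < 1.0 and mean > best_mean:
                best_mean, best_dir = mean, offs   # a break exists; keep the best direction
        if best_dir is not None:
            for dy, dx in best_dir:
                out[y + dy, x + dx] = 1            # fill the gap along the chosen direction
    return out
```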
5. Experimental results and analysis
To illustrate the practicality of the invention, experiments were carried out on the two public image databases DRIVE and STARE. The DRIVE database contains 40 color fundus images divided into a training set and a test set of 20 images each, with an image resolution of 768 × 584. The STARE database contains 20 color fundus images with an image resolution of 605 × 700.
To evaluate the effect of the method on retinal images, three evaluation indexes are adopted: Sensitivity, Specificity and Accuracy. Their mathematical expressions are:
$$\mathrm{Sensitivity} = \frac{TP}{TP + FN}$$
$$\mathrm{Specificity} = \frac{TN}{TN + FP}$$
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
where TP (true positive) denotes the number of pixels correctly identified as vessel pixels, TN (true negative) the number of pixels correctly identified as non-vessel pixels, FP (false positive) the number of non-vessel pixels misclassified as vessel pixels, and FN (false negative) the number of vessel pixels misclassified as non-vessel pixels.
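A small sketch of the three indexes, assuming `pred` and `truth` are 0/1 vessel maps restricted to the region of interest:

```python
# Sensitivity, Specificity and Accuracy from two binary vessel maps.
import numpy as np

def evaluate(pred: np.ndarray, truth: np.ndarray):
    tp = np.sum((pred == 1) & (truth == 1))   # vessel pixels correctly identified
    tn = np.sum((pred == 0) & (truth == 0))   # non-vessel pixels correctly identified
    fp = np.sum((pred == 1) & (truth == 0))   # non-vessel pixels marked as vessel
    fn = np.sum((pred == 0) & (truth == 1))   # vessel pixels marked as non-vessel
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```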
The experiments were carried out in Matlab R2019a on a computer with an Intel(R) Core(TM) i5-9300H CPU at 2.4 GHz and 8 GB of RAM. The experimental results are shown in the following table:
(Results table: provided only as an image in the original publication.)
The bold entries in the table are the average performance indexes on the two databases. All three indexes give excellent results; compared with other existing methods, the segmentation performance of the proposed method is superior to most existing algorithms and the response time is short, which verifies the effectiveness and feasibility of the method.
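To show how the pieces fit together, a hypothetical end-to-end sketch chaining the example functions defined above (preprocess, simplified_pcnn, remove_small_components, connect_breaks, evaluate) is given below; the file paths, the weight-matrix values and the ground-truth handling are placeholders, not part of the patent, whose own experiments were run in Matlab.

```python
# Hypothetical pipeline usage; relies on the sketch functions defined earlier.
import cv2
import numpy as np

bgr = cv2.imread('drive_test_01.png')                  # a DRIVE test image (placeholder path)
mask = cv2.imread('drive_test_01_mask.png', 0) > 0     # ROI mask template (placeholder path)
truth = cv2.imread('drive_test_01_manual.png', 0) > 0  # manual annotation (placeholder path)

W = np.array([[0.5, 1.0, 0.5],                         # assumed 3x3 link weights; the patent
              [1.0, 0.0, 1.0],                         # gives its matrix only as a figure
              [0.5, 1.0, 0.5]])

pre = preprocess(bgr, mask.astype(np.uint8))
seg = simplified_pcnn(pre, W, beta=0.2, iterations=2)
seg = remove_small_components(seg, alpha=53)
seg = connect_breaks(seg)
print(evaluate(seg, truth.astype(np.uint8)))           # sensitivity, specificity, accuracy
```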
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (1)

1. The fundus image blood vessel segmentation method based on the coupling neural network and the line connector is characterized by comprising the following steps of:
step 1: acquiring the fundus image to be segmented and preprocessing it;
step 2: proposing a simplified pulse coupled neural network model based on the pulse coupled neural network, and segmenting the preprocessed image with the model;
step 3: denoising the segmentation result obtained in step 2;
step 4: proposing a line connector and performing the connection of broken blood vessels on the denoised image obtained in step 3 to obtain a complete blood vessel image;
step 1 comprises the following steps:
step 1-1: extracting the green channel image of the fundus image to be segmented;
step 1-2: performing contrast-limited adaptive histogram equalization on the green channel image with an adaptive window size of 16 × 16 to improve the contrast of the image;
step 1-3: performing a bottom-hat morphological operation on the image obtained in step 1-2, with the structuring element set to a 13 × 13 square, to eliminate background interference factors such as the macula and the optic disc and facilitate the subsequent vessel segmentation;
step 1-4: extracting the region of interest of the image with a mask to reduce the influence of the non-retinal region on image analysis;
the simplified pulse coupling neural network model in the step 2 is established, and the discrete model is as follows:
$$F_{ij}[n] = I_{ij}$$
$$L_{ij}[n] = \sum_{kl} W_{ijkl}\, Y_{kl}[n-1]$$
$$U_{ij}[n] = F_{ij}[n]\left(1 + \beta L_{ij}[n]\right)$$
$$E_{ij}[n] = V_E$$
$$Y_{ij}[n] = \begin{cases} 1, & U_{ij}[n] > E_{ij}[n] \\ 0, & \text{otherwise} \end{cases}$$
where the subscript ij labels the neuron and n is the iteration number; F_{ij}[n], L_{ij}[n], U_{ij}[n] and E_{ij}[n] are the feedback input, link input, internal activity and dynamic threshold of the ij-th neuron at the n-th iteration; I_{ij} is the external stimulus of the neuron, here the pixel matrix of the input image; W_{ijkl} is the weight matrix; β is the link coefficient; and Y_{ij}[n] is the output of the neural network. F_{ij}[n] and L_{ij}[n] are combined through the link coefficient β to generate the internal activity U_{ij}[n]; if U_{ij}[n] is greater than E_{ij}[n] the neuron fires, i.e. Y_{ij}[n] = 1, otherwise Y_{ij}[n] = 0; the parameters of the discrete model are assigned as follows:
V_E is determined by the Otsu method, an adaptive thresholding method that divides the image into background and target by maximizing the between-class variance; since the pixel values of the non-retinal region are essentially 0, the count of pixel value 0 in the histogram is set to 0 before the optimal threshold is computed, so that this region does not bias the threshold;
the link coefficient β is set to 0.2;
the weight matrix W_{ijkl} is a fixed matrix that is given only as an image in the original publication;
The denoising method of step 3 comprises the following steps:
step 3-1: finding all 8-connected regions in the image according to the connectivity principle of image pixels;
step 3-2: counting the number of pixels in each of these 8-connected regions;
step 3-3: if the number of pixels in a region is less than the threshold α, regarding the region as a noise point and removing it, where a large number of experiments show that setting α = 53 gives a good denoising effect;
the implementation steps of the broken blood vessel connection proposed in the step 4 are as follows:
step 4-1: adopting a square sliding window with the size of 7 multiplied by 7 and taking a target pixel as a center, finding out 12 different directions of the window which are symmetrical with the center, wherein the angular resolution is 15 degrees;
step 4-2: on the premise of not considering background pixels, finding out the largest 8 connection areas in the image, judging pixel types of two ends in 12 different directions in the sliding window, and ensuring that at least one end does not belong to the 8 connection areas;
step 4-3: on the basis of the step 4-2, judging whether the pixel values of the head part and the tail part corresponding to each direction in the sliding window are both 1, if not, abandoning the direction, then calculating the average gray value of the pixels covered by the remaining directions, and if the average gray value is 1, abandoning the direction;
step 4-4: comparing the average gray value in the direction selected in the step 4-3, and determining the direction corresponding to the maximum gray value;
and 4-5: all pixels in the direction corresponding to the maximum gray value are assigned to be 1 so as to fill up the fracture gap;
and 4-6: sliding the window to the next target pixel, and repeating the operations of step 4-1 to step 4-5 until all the target pixels are completed, wherein the target pixels refer to the vessel pixels of the image obtained after the processing of step 1 to step 3 in claim 1.
CN202010804380.XA 2020-08-11 2020-08-11 Fundus image blood vessel segmentation method based on coupling neural network and line connector Active CN111899272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010804380.XA CN111899272B (en) 2020-08-11 2020-08-11 Fundus image blood vessel segmentation method based on coupling neural network and line connector

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010804380.XA CN111899272B (en) 2020-08-11 2020-08-11 Fundus image blood vessel segmentation method based on coupling neural network and line connector

Publications (2)

Publication Number Publication Date
CN111899272A true CN111899272A (en) 2020-11-06
CN111899272B CN111899272B (en) 2023-09-19

Family

ID=73228844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010804380.XA Active CN111899272B (en) 2020-08-11 2020-08-11 Fundus image blood vessel segmentation method based on coupling neural network and line connector

Country Status (1)

Country Link
CN (1) CN111899272B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116363132A (en) * 2023-06-01 2023-06-30 中南大学湘雅医院 Ophthalmic image processing method and system
CN116433666A (en) * 2023-06-14 2023-07-14 江西萤火虫微电子科技有限公司 Board card line defect online identification method, system, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150320313A1 (en) * 2014-05-08 2015-11-12 Universita Della Calabria Portable medical device and method for quantitative retinal image analysis through a smartphone
CN107016676A (en) * 2017-03-13 2017-08-04 三峡大学 A kind of retinal vascular images dividing method and system based on PCNN
CN108986106A (en) * 2017-12-15 2018-12-11 浙江中医药大学 Retinal vessel automatic division method towards glaucoma clinical diagnosis
CN109222990A (en) * 2018-08-09 2019-01-18 复旦大学 PPG based on multilayer time-delay neural network removal motion artifacts monitors system
WO2020103289A1 (en) * 2018-11-23 2020-05-28 福州依影健康科技有限公司 Method and system for analyzing hypertensive retinal blood vessel change feature data

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150320313A1 (en) * 2014-05-08 2015-11-12 Universita Della Calabria Portable medical device and method for quantitative retinal image analysis through a smartphone
CN107016676A (en) * 2017-03-13 2017-08-04 三峡大学 A kind of retinal vascular images dividing method and system based on PCNN
CN108986106A (en) * 2017-12-15 2018-12-11 浙江中医药大学 Retinal vessel automatic division method towards glaucoma clinical diagnosis
CN109222990A (en) * 2018-08-09 2019-01-18 复旦大学 PPG based on multilayer time-delay neural network removal motion artifacts monitors system
WO2020103289A1 (en) * 2018-11-23 2020-05-28 福州依影健康科技有限公司 Method and system for analyzing hypertensive retinal blood vessel change feature data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XU Guangzhu; ZHANG Liu; ZOU Yaobin; XIA Ping; LEI Bangjun: "Retinal vessel segmentation combining an adaptive pulse coupled neural network with matched filtering", Optics and Precision Engineering, no. 03 *
WANG Wentao; LUO Xiaoshu; YAN Chenyang: "Blood vessel segmentation of fundus images based on gray-level iterative threshold PCNN", Computer Engineering and Applications, no. 20 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116363132A (en) * 2023-06-01 2023-06-30 中南大学湘雅医院 Ophthalmic image processing method and system
CN116363132B (en) * 2023-06-01 2023-08-22 中南大学湘雅医院 Ophthalmic image processing method and system
CN116433666A (en) * 2023-06-14 2023-07-14 江西萤火虫微电子科技有限公司 Board card line defect online identification method, system, electronic equipment and storage medium
CN116433666B (en) * 2023-06-14 2023-08-15 江西萤火虫微电子科技有限公司 Board card line defect online identification method, system, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111899272B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
CN109472781B (en) Diabetic retinopathy detection system based on serial structure segmentation
CN108986106A (en) Retinal vessel automatic division method towards glaucoma clinical diagnosis
CN108109159B (en) Retina blood vessel segmentation system based on hessian matrix and region growing combination
CN104268515A (en) Sperm morphology anomaly detection method
CN110555380A (en) Finger vein identification method based on Center Loss function
CN111899272B (en) Fundus image blood vessel segmentation method based on coupling neural network and line connector
CN116758336A (en) Medical image intelligent analysis system based on artificial intelligence
CN112396565A (en) Method and system for enhancing and segmenting blood vessels of images and videos of venipuncture robot
Zhao et al. Attention residual convolution neural network based on U-net (AttentionResU-Net) for retina vessel segmentation
Dikkala et al. A comprehensive analysis of morphological process dependent retinal blood vessel segmentation
CN108665474B (en) B-COSFIRE-based retinal vessel segmentation method for fundus image
Malek et al. Automated optic disc detection in retinal images by applying region-based active aontour model in a variational level set formulation
Gou et al. A novel retinal vessel extraction method based on dynamic scales allocation
Zhu et al. Corn leaf diseases diagnostic techniques based on image recognition
Jana et al. A semi-supervised approach for automatic detection and segmentation of optic disc from retinal fundus image
Zhou et al. Automated detection of red lesions using superpixel multichannel multifeature
CN111292285B (en) Automatic screening method for diabetes mellitus based on naive Bayes and support vector machine
CN108629780B (en) Tongue image segmentation method based on color decomposition and threshold technology
Ali et al. Optic Disc Localization in Retinal Fundus Images Based on You Only Look Once Network (YOLO).
Ali et al. Segmenting retinal blood vessels with gabor filter and automatic binarization
Ashame et al. Abnormality Detection in Eye Fundus Retina
Sathya et al. Contourlet transform and morphological reconstruction based retinal blood vessel segmentation
Honale et al. A review of methods for blood vessel segmentation in retinal images
Athab et al. Disc and Cup Segmentation for Glaucoma Detection
CN114140830A (en) Repeated identification inhibition method based on circulating tumor cell image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant