LU500959B1 - Rough set neural network method for segmentation of fundus retinal blood vessel images


Info

Publication number
LU500959B1
Authority
LU
Luxembourg
Prior art keywords
blood vessel
image
retinal blood
neural network
fundus retinal
Prior art date
Application number
LU500959A
Other languages
French (fr)
Other versions
LU500959A1 (en)
Inventor
Ying Sun
Hengrong Ju
Ming Li
Yi Zhang
Zhihao Feng
Weiping Ding
Jie Wan
Jinxin Cao
Original Assignee
Univ Nantong
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Nantong filed Critical Univ Nantong
Publication of LU500959A1 publication Critical patent/LU500959A1/en
Application granted granted Critical
Publication of LU500959B1 publication Critical patent/LU500959B1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06N 3/008 Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/143 Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic
    • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

Provided is a rough set neural network method for segmentation of fundus retinal blood vessel images, which includes the following steps: S10. image preprocessing: obtaining an enhanced fundus retinal blood vessel image based on the rough set; S20. establishing a U-net neural network model; S30. performing optimization training for the U-net neural network model by means of particle swarm optimization (PSO), to obtain a PSO-U-net neural network model; and S40. after image enhancement preprocessing on a to-be-tested color fundus retinal blood vessel image by means of the rough set theory, segmenting the to-be-tested color fundus retinal blood vessel image by using the PSO-U-net neural network model.

Description

ROUGH SET NEURAL NETWORK METHOD FOR SEGMENTATION OF FUNDUS RETINAL BLOOD VESSEL IMAGES
TECHNICAL FIELD

The present disclosure relates to the field of medical image processing technologies, and more particularly to a rough set neural network method for segmentation of fundus retinal blood vessel images.
BACKGROUND

The health of retinal blood vessels in fundus images is of great significance and value for doctors in the early diagnosis of diabetic cardiovascular disease and various ophthalmic diseases. However, because the retinal blood vessels have a complex structure and are susceptible to lighting conditions in the acquisition environment, manual segmentation of the retinal blood vessels in clinical practice not only imposes a huge workload but also demands a high level of experience and skill from the medical staff. In addition, different medical staff may produce different segmentation results for the same fundus image. Therefore, manual segmentation no longer meets clinical needs.
With the continuous development of computer technology, automatic segmentation of fundus retinal blood vessel images using artificial intelligence can effectively assist early diagnosis and decision making for ophthalmic diseases, and has become a research hotspot for scholars at home and abroad. The convolutional neural network model in deep learning, with its special structure of local perception and parameter sharing, is uniquely well suited to medical image processing. Image information has strong spatial complexity and correlation, and problems such as incompleteness and uncertainty are likely to arise during image processing. Therefore, applying rough set theory to image processing can, in many situations, attain better results than conventional methods.
SUMMARY

To solve the foregoing problems, the present disclosure provides a rough set neural network method for segmentation of fundus retinal blood vessel images, which can reduce the workload of medical staff, avoid differences in the segmentation result of the same fundus image caused by differences in experience and skill between medical staff, and efficiently implement segmentation of a color fundus retinal blood vessel image, thus achieving high segmentation accuracy and efficiency.
To achieve the foregoing objective, the present disclosure adopts the following technical solution. A rough set neural network method for segmentation of fundus retinal blood vessel images is provided, which includes the following steps:

S10. image preprocessing: performing image enhancement preprocessing for each standard-RGB color fundus retinal blood vessel image with a size of M×M×3 by using rough set theory, to obtain an enhanced fundus retinal blood vessel image based on the rough set;

S20. establishing a U-net neural network model to segment the enhanced fundus retinal blood vessel image based on the rough set to obtain a segmented image, and using an error between the segmented image and a standard segmentation tagged image corresponding to the standard-RGB color fundus retinal blood vessel image as an error function of the established U-net neural network, to obtain the U-net neural network model;

S30. performing optimization training for the U-net neural network model by means of particle swarm optimization (PSO): using the enhanced fundus retinal blood vessel images based on the rough set as particles, obtaining an optimal population of particles through continuous iteration of the particle swarm, and adjusting parameters of the U-net neural network by means of gradient descent, to obtain a PSO-U-net neural network model; and

S40. after image enhancement preprocessing on a to-be-tested color fundus retinal blood vessel image by means of rough set theory, segmenting the to-be-tested color fundus retinal blood vessel image by using the PSO-U-net neural network model.
Further, the U-net neural network model includes an input layer, a convolutional layer, a ReLU non-linear layer, a pooling layer, a deconvolution layer, and an output layer.
Further, step S10 includes the following sub-steps: S11. storing each standard-RGB color fundus retinal blood vessel image with a size of M×M×3 as three matrices with a uniform size of M×M, respectively recorded as R*, G* and B*, where each value in the matrices indicates the component value of one color of one pixel point in the three channels; and establishing an HSI model by using the matrices R*, G* and B*, where H denotes hue, S denotes saturation, and I denotes brightness:
$$H=\begin{cases}\theta, & B\le G\\ 360^{\circ}-\theta, & B>G\end{cases},\qquad \theta=\arccos\left\{\frac{\tfrac{1}{2}\left[(R-G)+(R-B)\right]}{\left[(R-G)^{2}+(R-B)(G-B)\right]^{1/2}}\right\} \tag{1}$$

$$S=1-\frac{3}{R+G+B}\,\min(R,G,B) \tag{2}$$

$$I=\frac{1}{3}(R+G+B) \tag{3}$$
S12. the brightness component I being equivalent to a grayscale graph of the fundus retinal blood vessel image and regarded as an image information system, performing image preprocessing by means of rough set theory; using a two-dimensional fundus retinal image with a size of M×M as the universe of discourse U, where each pixel point x in the fundus retinal image is an object in U, and the gray level of the pixel point x is recorded as f(m, n), (m, n) denoting that the pixel point x is located in the mth row and nth column; determining two condition attributes of the fundus retinal blood vessel grayscale graph as c1 and c2, namely C = {c1, c2}, where c1 denotes the gray level attribute of the pixel point, with attribute values c1 = {0, 1}, and c2 is recorded as the noise attribute, indicating the absolute value of the difference between the average gray levels of two adjacent sub-blocks, with attribute values c2 = {0, 1}; and a decision attribute D indicating the classification of the pixel point, D = {d1, d2, d3, d4}, where d1 denotes a relatively bright noise-free region, d2 denotes a bright-area edge noise region, d3 denotes a relatively dark noise-free region, and d4 denotes a dark-area edge noise region, thus constructing a fundus retinal blood vessel image information system (U, C∪D);

S13. determining a grayscale threshold α: denoting the gray level of the pixel point x in the mth row and nth column of U as f_c1(m, n), where if f_c1(m, n) meets R_c1 = {x ∈ U | f_c1(m, n) > α}, then c1 = 1, which indicates that the gray level of the pixel point x falls within [α+1, 255], and the pixel point is classified into the equivalence class of R_c1, meaning it belongs to the bright set of the fundus retinal blood vessel images; or otherwise c1 = 0, which indicates that the gray level of the pixel point x falls within [0, α], and the pixel point is classified into the complement of R_c1, meaning it belongs to the dark set of the fundus retinal blood vessel images;

S14. determining a noise threshold β: dividing the fundus retinal blood vessel image into sub-blocks of 2×2 pixels, where f_c2(m, n) denotes the absolute value of the difference between the average pixel gray levels of adjacent sub-blocks, namely f_c2 = int|avg(S_ij) - avg(S_{i±1, j±1})|, avg(S_ij) denoting the average pixel value of the sub-block S_ij, 1 ≤ i ≤ M/2 - 1, and 1 ≤ j ≤ M/2 - 1; and if f_c2(m, n) meets R_c2 = {x ∈ U | f_c2(m, n) > β}, then c2 = 1, which indicates that the pixel point x is noisy and is classified into the equivalence class of R_c2, that is, the pixel belongs to the edge noise set; or otherwise c2 = 0, which indicates that the pixel point x is noise-free and is classified into the complement of R_c2, that is, the pixel belongs to the noise-free set;

S15. determining the set to which each pixel point belongs according to the foregoing two condition attributes c1 and c2; using the condition attributes as the decision basis, performing decision classification for the pixel points, and dividing the original fundus retinal blood vessel image P into sub-images; based on the gray level attribute c1 and the noise attribute c2, dividing the original image into a relatively bright noise-free sub-image P1, a bright-area edge noise sub-image P2, a relatively dark noise-free sub-image P3, and a dark-area edge noise sub-image P4; completing the relatively bright noise-free sub-image P1, that is, separately using the grayscale threshold α and the noise threshold β to fill in all the relatively dark and noisy pixel positions, to form P1'; and completing the relatively dark noise-free sub-image P3, that is, separately using the grayscale threshold α and the noise threshold β to fill in all the relatively bright and noisy pixel positions, to form P3'; and

S16. performing enhanced transformation for P1' and P3': performing histogram equalization transformation for P1', performing histogram exponential transformation for P3', and superimposing the images obtained after the histogram transformations of P1' and P3', to obtain an enhanced fundus retinal blood vessel image P'; and further performing normalization for the enhanced fundus retinal blood vessel image P' according to the following formula (4):

$$x_{norm}=\frac{x_i-\min(x)}{\max(x)-\min(x)} \tag{4}$$

thus obtaining the enhanced fundus retinal blood vessel image based on the rough set, where x_i denotes the value of the ith pixel point in the fundus retinal blood vessel image, and min(x) and max(x) respectively denote the minimum and maximum pixel values of the fundus retinal blood vessel image.
Further, step S20 includes the following sub-steps:

S21. performing feature extraction for the enhanced fundus retinal blood vessel images based on the rough set by means of downsampling: performing a convolution operation twice on the input fundus retinal blood vessel images by using convolution kernels with a size of 3×3, and performing non-linear transformation by using the ReLU activation function; then performing a 2×2 pooling operation and repeating the cycle four times, where in the first 3×3 convolution operation after each pooling operation, the number of 3×3 convolution kernels increases exponentially; and afterwards performing the 3×3 convolution operation twice, to complete the foregoing downsampling feature extraction, where the calculation for the convolutional layer is as follows:

$$x_j^{n}=f\Big(\sum_{i\in M_j}x_i^{n-1}*K_{ij}^{n}+b_j^{n}\Big) \tag{5}$$

M_j denoting the input feature map set, x_j^n denoting the jth feature map in the nth layer, K_ij^n denoting the convolution kernel function, f() denoting the activation function (the ReLU function being selected as the activation function), and b_j^n denoting an offset parameter; and the calculation for the pooling layer is as follows:

$$x_j^{n}=f\big(\beta_j^{n}\,\mathrm{down}(x_j^{n-1})+b_j^{n}\big) \tag{6}$$

β_j^n denoting a weight constant of the feature map of the downsampling layer, and down() denoting the downsampling function;

S22. performing operations by means of upsampling: first performing a 3×3 deconvolution operation twice, copying and cropping the images in the corresponding maximum pooling layer, and splicing the processed images with the images obtained after deconvolution; then performing the 3×3 convolution operation and repeating the cycle four times, where in the first 3×3 convolution operation after each splicing operation, the number of 3×3 convolution kernels decreases exponentially; and finally performing the 3×3 convolution operation twice and a 1×1 convolution operation once, thus completing the upsampling process; and

S23. after the downsampling and upsampling process, calculating the error between the segmented image obtained by the U-net neural network and the standard segmentation tagged image corresponding to the standard-RGB color fundus retinal blood vessel image by means of forward calculation, where the error function is as follows:

$$E=\frac{1}{2}\sum_{t=1}^{T}\sum_{i}\big(y\_out_t(i)-y\_true_t(i)\big)^{2} \tag{7}$$

where T denotes the number of fundus image samples input into the U-net neural network, y_out_t(i) denotes the gray level of the ith pixel point in the tth fundus retinal image sample output by the U-net neural network, and y_true_t(i) denotes the gray level of the ith pixel point in the tth fundus retinal image tag.
Further, an error threshold equal to 0.1 is set in sub-step S23; when the error is not greater than the error threshold, the required U-net neural network model is obtained; or when the error is greater than the error threshold, backpropagation is performed according to a gradient descent algorithm to adjust the network weights, and steps S21 to S22 are then repeated to perform forward calculation until the error is not greater than the error threshold.
Further, step S30 includes the following sub-steps: S31. randomly selecting H fundus images from a training set of the enhanced fundus retinal blood vessel images based on the rough set as reference images, and denoting a particle swarm Q as Q = (Q1, Q2, ..., QH), where H denotes the number of particles in the particle swarm Q and is consistent with the number of selected fundus images, each bit of each particle represents a link weight or threshold, and the encoding of the ith particle Qi is Qi = {Qi1, Qi2, ..., Qin}, n denoting the total number of link weights and thresholds; initializing the acceleration constants σ1 and σ2 and an initial value of the inertia weight w; and initializing each particle position vector Yi = {yi1, yi2, ..., yin} and particle velocity vector Vi = {vi1, vi2, ..., vin} to random numbers in the interval [0, 1], where n denotes the number of parameters in the U-net model; S32. separately completing the downsampling process and the upsampling process for each particle in the U-net model; calculating the fitness of each particle by using the error function of the U-net neural network as the fitness function of the particle swarm, and arranging the particles in ascending order, to obtain the optimal position pbest of each particle and the optimal position gbest of the whole particle swarm; S33. if a minimal value within the error threshold range is reached, the training having converged, stopping the run; or otherwise continuously updating the position and velocity of each particle according to the following formulas (8) and (9):

$$v'_{in}=w\,v_{in}+\sigma_1\,\mathrm{rand}()\,(pbest_{in}-x_{in})+\sigma_2\,\mathrm{rand}()\,(gbest_{in}-x_{in}) \tag{8}$$

$$x'_{in}=x_{in}+v'_{in} \tag{9}$$

where v_in and x_in respectively denote the current velocity and position of particle i, v'_in and x'_in respectively denote the updated velocity and position of particle i, w denotes the inertia weight, σ1 and σ2 are the acceleration constants, and rand() is a random function over the interval [0, 1]; S34. sending the updated particles back to the U-net neural network to update the link weights to be trained, performing the downsampling and upsampling process again, and recalculating the error; and S35. splitting the obtained optimal position gbest of the particle swarm and mapping it to the weights and thresholds of the U-net neural network model, thus completing the whole PSO-based optimization of the U-net neural network weights.

The foregoing technical solution of the present disclosure has the following advantages over the prior art: the rough set neural network method for segmentation of fundus retinal blood vessel images in the present disclosure reduces the workload of medical staff, avoids differences in the segmentation result of the same fundus image caused by differences in experience and skill between medical staff, and efficiently implements segmentation of a color fundus retinal blood vessel image, thus achieving high segmentation accuracy and efficiency.
BRIEF DESCRIPTION OF THE DRAWINGS

The technical solution of the present disclosure and its beneficial effects will be apparent from the following detailed description of specific embodiments of the present disclosure with reference to the accompanying drawings.
FIG. 1 is a flowchart of a rough set neural network method for segmentation of fundus retinal blood vessel images in an embodiment of the present disclosure; FIG. 2 is a detailed flowchart of a rough set neural network method for segmentation of fundus retinal blood vessel images in an embodiment of the present disclosure; and FIG. 3 is a structural diagram of a U-net neural network model in an embodiment of the present disclosure.
DETAILED DESCRIPTION OF THE EMBODIMENTS

The technical solution in the embodiments of the present disclosure is clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure. Apparently, the described embodiments are some rather than all of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, other embodiments acquired by those of ordinary skill in the art without creative effort all belong to the protection scope of the present disclosure.
This embodiment provides a rough set neural network method for segmentation of fundus retinal blood vessel images, which, as shown in FIGs. 1 and 2, includes the following steps:

S10. Image preprocessing: image enhancement preprocessing is performed for each standard-RGB color fundus retinal blood vessel image with a size of M×M×3 by using rough set theory, to obtain an enhanced fundus retinal blood vessel image based on the rough set.

S20. A U-net neural network model is established to segment the enhanced fundus retinal blood vessel image based on the rough set to obtain a segmented image, and an error between the segmented image and a standard segmentation tagged image corresponding to the standard-RGB color fundus retinal blood vessel image is used as an error function of the established U-net neural network, to obtain the U-net neural network model.

S30. Optimization training is performed for the U-net neural network model by means of PSO: using the enhanced fundus retinal blood vessel images based on the rough set as particles, an optimal population of particles is obtained through continuous iteration of the particle swarm, and parameters of the U-net neural network are adjusted by means of gradient descent, to obtain a PSO-U-net neural network model.

S40. After image enhancement preprocessing on a to-be-tested color fundus retinal blood vessel image by means of rough set theory, the to-be-tested color fundus retinal blood vessel image is segmented by using the PSO-U-net neural network model.
Step S10 includes the following sub-steps:

S11. Each standard-RGB color fundus retinal blood vessel image with a size of M×M×3 is stored as three matrices with a uniform size of M×M, respectively recorded as R*, G* and B*, where each value in the matrices indicates the component value of one color of one pixel point in the three channels; and an HSI model is established from the matrices R*, G* and B*, where H denotes hue, S denotes saturation, and I denotes brightness:

$$H=\begin{cases}\theta, & B\le G\\ 360^{\circ}-\theta, & B>G\end{cases},\qquad \theta=\arccos\left\{\frac{\tfrac{1}{2}\left[(R-G)+(R-B)\right]}{\left[(R-G)^{2}+(R-B)(G-B)\right]^{1/2}}\right\} \tag{1}$$

$$S=1-\frac{3}{R+G+B}\,\min(R,G,B) \tag{2}$$

$$I=\frac{1}{3}(R+G+B) \tag{3}$$
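As a worked example, the following NumPy sketch is one possible reading of formulas (1)-(3). It is illustrative only: the function name rgb_to_hsi and the [0, 1] input range are assumptions, and a small epsilon guards the divisions, which the patent does not specify.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an MxMx3 RGB image (floats in [0, 1]) to H, S, I per
    formulas (1)-(3). H is returned in degrees."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-12  # guards against division by zero on gray pixels

    # Formula (1): hue from the arccos expression, flipped when B > G.
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    H = np.where(B <= G, theta, 360.0 - theta)

    # Formula (2): saturation.
    S = 1.0 - 3.0 * np.minimum(np.minimum(R, G), B) / (R + G + B + eps)

    # Formula (3): brightness, used below as the grayscale graph.
    I = (R + G + B) / 3.0
    return H, S, I
```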
S12. The brightness component I is equivalent to a grayscale graph of the fundus retinal blood vessel image and is regarded as an image information system, and image preprocessing is performed by means of the rough set theory.
A two-dimensional fundus retinal image with a size of MxM is used as a universe U of discourse, where each pixel point x in the fundus retinal image indicates an object in U, and a gray level of the pixel point x is recorded as f(m, n), (m, n) denoting that the pixel point x is located in the mth row and nth column.
Two condition attributes of the fundus retinal blood vessel grayscale graph are determined as c1 and c2, namely C = {c1, c2}, where c1 denotes the gray level attribute of the pixel point, with attribute values c1 = {0, 1}; and c2 is recorded as the noise attribute, which indicates the absolute value of the difference between the average gray levels of two adjacent sub-blocks, with attribute values c2 = {0, 1}. A decision attribute D indicates the classification of the pixel point, D = {d1, d2, d3, d4}, where d1 denotes a relatively bright noise-free region, d2 denotes a bright-area edge noise region, d3 denotes a relatively dark noise-free region, and d4 denotes a dark-area edge noise region, thus constructing a fundus retinal blood vessel image information system (U, C∪D).

S13. A grayscale threshold α is determined.
The gray level of the pixel point x in the mth row and nth column of U is denoted as f_c1(m, n); and if f_c1(m, n) meets R_c1 = {x ∈ U | f_c1(m, n) > α}, then c1 = 1, which indicates that the gray level of the pixel point x falls within [α+1, 255]. Thus, the pixel point is classified into the equivalence class of R_c1, which indicates that the pixel point belongs to the bright set of the fundus retinal blood vessel images.

Otherwise, c1 = 0, which indicates that the gray level of the pixel point x falls within [0, α]. Thus, the pixel point is classified into the complement of R_c1, which indicates that the pixel point belongs to the dark set of the fundus retinal blood vessel images.
S14. A noise threshold β is determined.

The fundus retinal blood vessel image is divided into sub-blocks of 2×2 pixels, where f_c2(m, n) denotes the absolute value of the difference between the average pixel gray levels of adjacent sub-blocks, namely f_c2 = int|avg(S_ij) - avg(S_{i±1, j±1})|, where avg(S_ij) denotes the average pixel value of the sub-block S_ij, 1 ≤ i ≤ M/2 - 1, and 1 ≤ j ≤ M/2 - 1. If f_c2(m, n) meets R_c2 = {x ∈ U | f_c2(m, n) > β}, then c2 = 1, which indicates that the pixel point x is noisy and is classified into the equivalence class of R_c2; that is, the pixel belongs to the edge noise set. Otherwise, c2 = 0, which indicates that the pixel point x is noise-free and is classified into the complement of R_c2; that is, the pixel belongs to the noise-free set.

S15. The set to which each pixel point belongs is determined according to the foregoing two condition attributes c1 and c2; using the condition attributes as the decision basis, decision classification is performed for the pixel points, and the original fundus retinal blood vessel image P is divided into sub-images. Based on the gray level attribute c1 and the noise attribute c2, the original image is divided into a relatively bright noise-free sub-image P1, a bright-area edge noise sub-image P2, a relatively dark noise-free sub-image P3, and a dark-area edge noise sub-image P4. The relatively bright noise-free sub-image P1 is completed; that is, the grayscale threshold α and the noise threshold β are separately used to fill in all the relatively dark and noisy pixel positions, to form P1'. The relatively dark noise-free sub-image P3 is completed; that is, the grayscale threshold α and the noise threshold β are separately used to fill in all the relatively bright and noisy pixel positions, to form P3'.
S16. Enhanced transformation is performed for P1' and P3': histogram equalization transformation is performed for P1', histogram exponential transformation is performed for P3', and the images obtained after the histogram transformations of P1' and P3' are superimposed, to obtain an enhanced fundus retinal blood vessel image P'. Normalization is further performed for the enhanced fundus retinal blood vessel image P' according to the following formula (4):

$$x_{norm}=\frac{x_i-\min(x)}{\max(x)-\min(x)} \tag{4}$$

Thus, the enhanced fundus retinal blood vessel image based on the rough set is obtained, where x_i denotes the value of the ith pixel point in the fundus retinal blood vessel image, and min(x) and max(x) respectively denote the minimum and maximum pixel values of the fundus retinal blood vessel image.
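Sub-steps S13-S16 can be sketched as follows. This is a minimal interpretation, not code from the patent: the thresholds α and β are taken as given inputs, "completing" a sub-image is read as filling the excluded pixel positions with the corresponding threshold value, and the exponent of the histogram exponential transform (gamma = 1.5) is an assumed constant.

```python
import numpy as np

def rough_set_enhance(I, alpha, beta, gamma=1.5):
    """Rough-set enhancement of a grayscale image I (values in [0, 255],
    MxM with M even), following S13-S16 under the stated assumptions."""
    I = I.astype(np.float64)
    M = I.shape[0]

    # S13: condition attribute c1 -- bright set if gray level > alpha.
    bright = I > alpha

    # S14: condition attribute c2 -- a block is noisy if its mean differs
    # from an adjacent 2x2 block's mean by more than beta; each pixel
    # inherits its block's attribute.
    blocks = I.reshape(M // 2, 2, M // 2, 2).mean(axis=(1, 3))
    diff_i = np.abs(np.diff(blocks, axis=0))   # neighbor below
    diff_j = np.abs(np.diff(blocks, axis=1))   # neighbor to the right
    noisy_blocks = np.zeros_like(blocks, dtype=bool)
    noisy_blocks[:-1] |= diff_i > beta
    noisy_blocks[1:] |= diff_i > beta
    noisy_blocks[:, :-1] |= diff_j > beta
    noisy_blocks[:, 1:] |= diff_j > beta
    noisy = np.kron(noisy_blocks.astype(np.uint8),
                    np.ones((2, 2), dtype=np.uint8)).astype(bool)

    # S15: complete the bright and dark noise-free sub-images P1', P3'.
    P1 = np.where(bright & ~noisy, I, float(alpha))  # fill dark/noisy pixels
    P3 = np.where(~bright & ~noisy, I, float(beta))  # fill bright/noisy pixels

    # S16: histogram equalization of P1', exponential transform of P3',
    # superposition, then min-max normalization per formula (4).
    hist, _ = np.histogram(P1, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    P1_eq = cdf[P1.astype(np.uint8)] * 255.0
    P3_exp = 255.0 * (P3 / 255.0) ** gamma
    P = P1_eq + P3_exp
    return (P - P.min()) / (P.max() - P.min() + 1e-12)
```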
As shown in FIG. 3, the U-net neural network model includes an input layer, a convolutional layer, a ReLU non-linear layer, a pooling layer, a deconvolution layer, and an output layer. Step S20 includes the following sub-steps:

S21. Feature extraction is performed for the enhanced fundus retinal blood vessel images based on the rough set by means of downsampling: a convolution operation is performed twice on the input fundus retinal blood vessel images by using convolution kernels with a size of 3×3, and non-linear transformation is performed by using the ReLU activation function. Then, a 2×2 pooling operation is performed, and the cycle is repeated four times, where in the first 3×3 convolution operation after each pooling operation, the number of 3×3 convolution kernels increases exponentially. Afterwards, the 3×3 convolution operation is performed twice, to complete the foregoing downsampling feature extraction. The calculation for the convolutional layer is as follows:

$$x_j^{n}=f\Big(\sum_{i\in M_j}x_i^{n-1}*K_{ij}^{n}+b_j^{n}\Big) \tag{5}$$

where M_j denotes the input feature map set, x_j^n denotes the jth feature map in the nth layer, K_ij^n denotes the convolution kernel function, f() denotes the activation function (the ReLU function is selected), and b_j^n denotes an offset parameter. The calculation for the pooling layer is as follows:

$$x_j^{n}=f\big(\beta_j^{n}\,\mathrm{down}(x_j^{n-1})+b_j^{n}\big) \tag{6}$$

where β_j^n denotes a weight constant of the feature map of the downsampling layer, and down() denotes the downsampling function.
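To make formulas (5) and (6) concrete, the short PyTorch fragment below evaluates one convolutional-layer and one pooling-layer computation on a toy feature map. The shapes, random values, and β = 1.0 are illustrative assumptions, and max pooling is used as the down() function.

```python
import torch
import torch.nn.functional as F

# Toy illustration of formulas (5) and (6), assuming a single input feature
# map and a single output map; shapes and values are made up for the demo.
x_prev = torch.randn(1, 1, 8, 8)   # x^(n-1): previous-layer feature map
K = torch.randn(1, 1, 3, 3)        # K_ij^n : 3x3 convolution kernel
b = torch.zeros(1)                 # b_j^n  : offset (bias) parameter

# Formula (5): x_j^n = f( sum_i x_i^(n-1) * K_ij^n + b_j^n ), f = ReLU.
x_conv = F.relu(F.conv2d(x_prev, K, bias=b, padding=1))

# Formula (6): x_j^n = f( beta_j^n * down(x_j^(n-1)) + b_j^n ), with 2x2
# max pooling as the downsampling function down().
beta = 1.0                         # weight constant of the feature map
x_pool = F.relu(beta * F.max_pool2d(x_conv, kernel_size=2) + b)
```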
S22. Operations are performed by means of upsampling. First, a 3×3 deconvolution operation is performed twice, the images in the corresponding maximum pooling layer are copied and cropped, and the processed images are spliced with the images obtained after deconvolution.

Then, the 3×3 convolution operation is performed and the cycle repeated four times, where in the first 3×3 convolution operation after each splicing operation, the number of 3×3 convolution kernels decreases exponentially.

Finally, the 3×3 convolution operation is performed twice and a 1×1 convolution operation is performed once, thus completing the upsampling process.
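Putting S21 and S22 together, the following PyTorch sketch is one conventional realization of the U-net just described. The channel widths (64 doubling up to 1024), same-padding (which makes the cropping step unnecessary), and the sigmoid output head are standard U-net conventions assumed here, not values fixed by this embodiment.

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two 3x3 convolutions, each followed by ReLU (the repeated block of S21/S22)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class UNet(nn.Module):
    """Minimal U-net: four 2x2 max poolings with channel doubling on the way
    down, four transposed convolutions with skip connections on the way up,
    and a final 1x1 convolution."""
    def __init__(self, c_in=1, c_out=1, base=64):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8, base * 16]
        self.downs = nn.ModuleList()
        c = c_in
        for ch in chs:
            self.downs.append(DoubleConv(c, ch))
            c = ch
        self.pool = nn.MaxPool2d(2)
        self.ups = nn.ModuleList()
        self.up_convs = nn.ModuleList()
        for ch in reversed(chs[:-1]):
            self.ups.append(nn.ConvTranspose2d(c, ch, kernel_size=2, stride=2))
            self.up_convs.append(DoubleConv(ch * 2, ch))  # skip + upsampled
            c = ch
        self.head = nn.Conv2d(c, c_out, 1)

    def forward(self, x):
        skips = []
        for i, down in enumerate(self.downs):
            x = down(x)
            if i < len(self.downs) - 1:
                skips.append(x)      # kept for the splicing in S22
                x = self.pool(x)
        for up, conv, skip in zip(self.ups, self.up_convs, reversed(skips)):
            x = torch.cat([skip, up(x)], dim=1)
            x = conv(x)
        return torch.sigmoid(self.head(x))
```

A forward pass on an enhanced image tensor of shape (1, 1, M, M), with M divisible by 16, then yields the segmentation map that is compared against the tagged image in S23.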
S23. After the downsampling and upsampling process, the error between the segmented image obtained by the U-net neural network and the standard segmentation tagged image corresponding to the standard-RGB color fundus retinal blood vessel image is calculated by means of forward calculation, where the error function is as follows:

$$E=\frac{1}{2}\sum_{t=1}^{T}\sum_{i}\big(y\_out_t(i)-y\_true_t(i)\big)^{2} \tag{7}$$

where T denotes the number of fundus image samples input into the U-net neural network, y_out_t(i) denotes the gray level of the ith pixel point in the tth fundus retinal image sample output by the U-net neural network, and y_true_t(i) denotes the gray level of the ith pixel point in the tth fundus retinal image tag.
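Read as code, formula (7) is a summed squared error over all pixels of all samples; a minimal sketch, assuming the network outputs and tags are arrays of shape (T, M, M) with gray levels scaled to [0, 1]:

```python
import numpy as np

def unet_error(y_out, y_true):
    """Formula (7): E = (1/2) * sum_t sum_i (y_out_t(i) - y_true_t(i))^2,
    where y_out and y_true have shape (T, M, M)."""
    return 0.5 * float(np.sum((y_out - y_true) ** 2))

# Per this disclosure, training stops once the error is not greater than
# the error threshold of 0.1; otherwise backpropagation continues.
ERROR_THRESHOLD = 0.1
```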
Step S30 includes the following sub-steps:

S31. H fundus images are randomly selected from a training set of the enhanced fundus retinal blood vessel images based on the rough set as reference images, and a particle swarm Q is denoted as Q = (Q1, Q2, ..., QH), where H denotes the number of particles in the particle swarm Q and is consistent with the number of selected fundus images; each bit of each particle represents a link weight or threshold, and the encoding of the ith particle Qi is Qi = {Qi1, Qi2, ..., Qin}, n denoting the total number of link weights and thresholds.

The acceleration constants σ1 and σ2 and an initial value of the inertia weight w are initialized; and each particle position vector Yi = {yi1, yi2, ..., yin} and particle velocity vector Vi = {vi1, vi2, ..., vin} are initialized to random numbers in the interval [0, 1], where n denotes the number of parameters in the U-net model.

S32. The downsampling process and the upsampling process are separately completed for each particle in the U-net model; the fitness of each particle is calculated by using the error function of the U-net neural network as the fitness function of the particle swarm, and the particles are arranged in ascending order, to obtain the optimal position pbest of each particle and the optimal position gbest of the whole particle swarm.

S33. If a minimal value within the error threshold range is reached, the training has converged and the run is stopped; otherwise, the position and velocity of each particle are continuously updated according to the following formulas (8) and (9):

$$v'_{in}=w\,v_{in}+\sigma_1\,\mathrm{rand}()\,(pbest_{in}-x_{in})+\sigma_2\,\mathrm{rand}()\,(gbest_{in}-x_{in}) \tag{8}$$

$$x'_{in}=x_{in}+v'_{in} \tag{9}$$

In the foregoing formulas, v_in and x_in respectively denote the current velocity and position of particle i, v'_in and x'_in respectively denote the updated velocity and position of particle i, w denotes the inertia weight, σ1 and σ2 are the acceleration constants, and rand() is a random function over the interval [0, 1].

S34. The updated particles are sent back to the U-net neural network to update the link weights to be trained, the downsampling and upsampling process is performed again, and the error is recalculated.

S35. The obtained optimal position gbest of the particle swarm is split and mapped to the weights and thresholds of the U-net neural network model, thus completing the whole PSO-based optimization of the U-net neural network weights.
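A generic sketch of the PSO weight search in S31-S35, written over a flat parameter vector. The swarm size, iteration count, and the inertia and acceleration values (w = 0.7, σ1 = σ2 = 1.5) are illustrative assumptions, and `fitness` stands in for the U-net error of formula (7) evaluated with the candidate weights on the selected reference images.

```python
import numpy as np

def pso_optimize(fitness, n_params, n_particles=10, n_iters=50,
                 w=0.7, sigma1=1.5, sigma2=1.5, err_threshold=0.1):
    """Particle swarm search over a flat weight vector (S31-S35).
    `fitness` maps a parameter vector to the U-net error of formula (7)."""
    rng = np.random.default_rng(0)
    # S31: positions and velocities initialized to random numbers in [0, 1].
    x = rng.random((n_particles, n_params))
    v = rng.random((n_particles, n_params))
    pbest = x.copy()
    pbest_err = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_err.argmin()].copy()

    for _ in range(n_iters):
        # S33: stop once the error threshold is reached (convergence).
        if pbest_err.min() <= err_threshold:
            break
        # Formulas (8) and (9): velocity update, then position update.
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + sigma1 * r1 * (pbest - x) + sigma2 * r2 * (gbest - x)
        x = x + v
        # S32/S34: re-evaluate fitness and update pbest and gbest.
        err = np.array([fitness(p) for p in x])
        better = err < pbest_err
        pbest[better], pbest_err[better] = x[better], err[better]
        gbest = pbest[pbest_err.argmin()].copy()
    # S35: gbest is split and mapped back onto the U-net weights/thresholds.
    return gbest

# Toy usage with a stand-in quadratic fitness (the real fitness would run
# the U-net forward pass on the selected reference images):
best = pso_optimize(lambda p: np.sum((p - 0.3) ** 2), n_params=8)
```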
The above merely describes exemplary embodiments of the present disclosure, and does not limit the patent protection scope of the present disclosure. Any equivalent structures or process transformations made by using the description and the accompanying drawings of the present disclosure can be applied directly or indirectly in other related technical fields, and all fall within the patent protection scope of the present disclosure.

Claims (6)

1. A rough set neural network method for segmentation of fundus retinal blood vessel images, comprising the following steps: S10. image preprocessing: performing image enhancement preprocessing for each standard-RGB color fundus retinal blood vessel image with a size of M×M×3 by using rough set theory, to obtain an enhanced fundus retinal blood vessel image based on the rough set; S20. establishing a U-net neural network model to segment the enhanced fundus retinal blood vessel image based on the rough set to obtain a segmented image, and using an error between the segmented image and a standard segmentation tagged image corresponding to the standard-RGB color fundus retinal blood vessel image as an error function of the established U-net neural network, to obtain the U-net neural network model; S30. performing optimization training for the U-net neural network model by means of particle swarm optimization (PSO): using the enhanced fundus retinal blood vessel images based on the rough set as particles, obtaining an optimal population of particles through continuous iteration of the particle swarm, and adjusting parameters of the U-net neural network by means of gradient descent, to obtain a PSO-U-net neural network model; and S40. after image enhancement preprocessing on a to-be-tested color fundus retinal blood vessel image by means of rough set theory, segmenting the to-be-tested color fundus retinal blood vessel image by using the PSO-U-net neural network model.
2. The rough set neural network method for segmentation of fundus retinal blood vessel images according to claim 1, wherein the U-net neural network model comprises an input layer, a convolutional layer, a ReLU non-linear layer, a pooling layer, a deconvolution layer, and an output layer.
3. The rough set neural network method for segmentation of fundus retinal blood vessel images according to claim 2, wherein step S10 comprises the following sub-steps:

S11. storing each standard-RGB color fundus retinal blood vessel image with a size of M×M×3 as three matrices with a uniform size of M×M, respectively recorded as R*, G* and B*, wherein each value in the matrices indicates the component value of one color of one pixel point in the three channels; and establishing an HSI model by using the matrices R*, G* and B*, wherein H denotes hue, S denotes saturation, and I denotes brightness:

$$H=\begin{cases}\theta, & B\le G\\ 360^{\circ}-\theta, & B>G\end{cases},\qquad \theta=\arccos\left\{\frac{\tfrac{1}{2}\left[(R-G)+(R-B)\right]}{\left[(R-G)^{2}+(R-B)(G-B)\right]^{1/2}}\right\} \tag{1}$$

$$S=1-\frac{3}{R+G+B}\,\min(R,G,B) \tag{2}$$

$$I=\frac{1}{3}(R+G+B) \tag{3}$$

S12. the brightness component I being equivalent to a grayscale graph of the fundus retinal blood vessel image and regarded as an image information system, performing image preprocessing by means of rough set theory; using a two-dimensional fundus retinal image with a size of M×M as the universe of discourse U, wherein each pixel point x in the fundus retinal image is an object in U, and the gray level of the pixel point x is recorded as f(m, n), (m, n) denoting that the pixel point x is located in the mth row and nth column; determining two condition attributes of the fundus retinal blood vessel grayscale graph as c1 and c2, namely C = {c1, c2}, wherein c1 denotes the gray level attribute of the pixel point, with attribute values c1 = {0, 1}, and c2 is recorded as the noise attribute, which indicates the absolute value of the difference between the average gray levels of two adjacent sub-blocks, with attribute values c2 = {0, 1}; and a decision attribute D indicating the classification of the pixel point, D = {d1, d2, d3, d4}, wherein d1 denotes a relatively bright noise-free region, d2 denotes a bright-area edge noise region, d3 denotes a relatively dark noise-free region, and d4 denotes a dark-area edge noise region, thus constructing a fundus retinal blood vessel image information system (U, C∪D);

S13. determining a grayscale threshold α: denoting the gray level of the pixel point x in the mth row and nth column of U as f_c1(m, n), wherein if f_c1(m, n) meets R_c1 = {x ∈ U | f_c1(m, n) > α}, then c1 = 1, which indicates that the gray level of the pixel point x falls within [α+1, 255], and the pixel point is classified into the equivalence class of R_c1, meaning it belongs to the bright set of the fundus retinal blood vessel images; or otherwise c1 = 0, which indicates that the gray level of the pixel point x falls within [0, α], and the pixel point is classified into the complement of R_c1, meaning it belongs to the dark set of the fundus retinal blood vessel images;
S14. determining a noise threshold β: dividing the fundus retinal blood vessel image into sub-blocks of 2×2 pixels, wherein f_c2(m, n) denotes the absolute value of the difference between the average pixel gray levels of adjacent sub-blocks, namely f_c2 = int|avg(S_ij) - avg(S_{i±1, j±1})|, avg(S_ij) denoting the average pixel value of the sub-block S_ij, 1 ≤ i ≤ M/2 - 1, and 1 ≤ j ≤ M/2 - 1; and if f_c2(m, n) meets R_c2 = {x ∈ U | f_c2(m, n) > β}, then c2 = 1, which indicates that the pixel point x is noisy and is classified into the equivalence class of R_c2, that is, the pixel belongs to the edge noise set; or otherwise c2 = 0, which indicates that the pixel point x is noise-free and is classified into the complement of R_c2, that is, the pixel belongs to the noise-free set;

S15. determining the set to which each pixel point belongs according to the foregoing two condition attributes c1 and c2; using the condition attributes as the decision basis, performing decision classification for the pixel points, and dividing the original fundus retinal blood vessel image P into sub-images; based on the gray level attribute c1 and the noise attribute c2, dividing the original image into a relatively bright noise-free sub-image P1, a bright-area edge noise sub-image P2, a relatively dark noise-free sub-image P3, and a dark-area edge noise sub-image P4; completing the relatively bright noise-free sub-image P1, that is, separately using the grayscale threshold α and the noise threshold β to fill in all the relatively dark and noisy pixel positions, to form P1'; and completing the relatively dark noise-free sub-image P3, that is, separately using the grayscale threshold α and the noise threshold β to fill in all the relatively bright and noisy pixel positions, to form P3'; and

S16. performing enhanced transformation for P1' and P3': performing histogram equalization transformation for P1', performing histogram exponential transformation for P3', and superimposing the images obtained after the histogram transformations of P1' and P3', to obtain an enhanced fundus retinal blood vessel image P'; and further performing normalization for the enhanced fundus retinal blood vessel image P' according to the following formula (4):

$$x_{norm}=\frac{x_i-\min(x)}{\max(x)-\min(x)} \tag{4}$$

thus obtaining the enhanced fundus retinal blood vessel image based on the rough set, wherein x_i denotes the value of the ith pixel point in the fundus retinal blood vessel image, and min(x) and max(x) respectively denote the minimum and maximum pixel values of the fundus retinal blood vessel image.
4. The rough set neural network method for segmentation of fundus retinal blood vessel images according to claim 3, wherein step S20 comprises the following sub-steps:

S21. performing feature extraction for the enhanced fundus retinal blood vessel images based on the rough set by means of downsampling: performing a convolution operation twice on the input fundus retinal blood vessel images by using convolution kernels with a size of 3×3, and performing non-linear transformation by using the ReLU activation function; then performing a 2×2 pooling operation and repeating the cycle four times, wherein in the first 3×3 convolution operation after each pooling operation, the number of 3×3 convolution kernels increases exponentially; and afterwards performing the 3×3 convolution operation twice, to complete the foregoing downsampling feature extraction, wherein the calculation for the convolutional layer is as follows:

$$x_j^{n}=f\Big(\sum_{i\in M_j}x_i^{n-1}*K_{ij}^{n}+b_j^{n}\Big) \tag{5}$$

M_j denoting the input feature map set, x_j^n denoting the jth feature map in the nth layer, K_ij^n denoting the convolution kernel function, f() denoting the activation function (the ReLU function being selected as the activation function), and b_j^n denoting an offset parameter; and the calculation for the pooling layer is as follows:

$$x_j^{n}=f\big(\beta_j^{n}\,\mathrm{down}(x_j^{n-1})+b_j^{n}\big) \tag{6}$$

β_j^n denoting a weight constant of the feature map of the downsampling layer, and down() denoting the downsampling function;

S22. performing operations by means of upsampling: first performing a 3×3 deconvolution operation twice, copying and cropping the images in the corresponding maximum pooling layer, and splicing the processed images with the images obtained after deconvolution; then performing the 3×3 convolution operation and repeating the cycle four times, wherein in the first 3×3 convolution operation after each splicing operation, the number of 3×3 convolution kernels decreases exponentially; and finally performing the 3×3 convolution operation twice and a 1×1 convolution operation once, thus completing the upsampling process; and

S23. after the downsampling and upsampling process, calculating the error between the segmented image obtained by the U-net neural network and the standard segmentation tagged image corresponding to the standard-RGB color fundus retinal blood vessel image by means of forward calculation, wherein the error function is as follows:

$$E=\frac{1}{2}\sum_{t=1}^{T}\sum_{i}\big(y\_out_t(i)-y\_true_t(i)\big)^{2} \tag{7}$$

wherein T denotes the number of fundus image samples input into the U-net neural network, y_out_t(i) denotes the gray level of the ith pixel point in the tth fundus retinal image sample output by the U-net neural network, and y_true_t(i) denotes the gray level of the ith pixel point in the tth fundus retinal image tag.
5. The rough set neural network method for segmentation of fundus retinal blood vessel images according to claim 4, wherein an error threshold equal to 0.1 is set in sub-step S23; when the error is not greater than the error threshold, the required U-net neural network model is obtained; or when the error is greater than the error threshold, backpropagation is performed according to a gradient descent algorithm to adjust the network weights, and steps S21 to S22 are then repeated to perform forward calculation until the error is not greater than the error threshold.
6. The rough set neural network method for segmentation of fundus retinal blood vessel images according to claim 5, wherein step S30 comprises the following sub-steps:

S31. randomly selecting H fundus images from a training set of the enhanced fundus retinal blood vessel images based on the rough set as reference images, and denoting a particle swarm Q as Q = (Q1, Q2, ..., QH), wherein H denotes the number of particles in the particle swarm Q and is consistent with the number of selected fundus images, each bit of each particle represents a link weight or threshold, and the encoding of the ith particle Qi is Qi = {Qi1, Qi2, ..., Qin}, n denoting the total number of link weights and thresholds; initializing the acceleration constants σ1 and σ2 and an initial value of the inertia weight w; and initializing each particle position vector Yi = {yi1, yi2, ..., yin} and particle velocity vector Vi = {vi1, vi2, ..., vin} to random numbers in the interval [0, 1], wherein n denotes the number of parameters in the U-net model;

S32. separately completing the downsampling process and the upsampling process for each particle in the U-net model; calculating the fitness of each particle by using the error function of the U-net neural network as the fitness function of the particle swarm, and arranging the particles in ascending order, to obtain the optimal position pbest of each particle and the optimal position gbest of the whole particle swarm;

S33. if a minimal value within the error threshold range is reached, the training having converged, stopping the run; or otherwise continuously updating the position and velocity of each particle according to the following formulas (8) and (9):

$$v'_{in}=w\,v_{in}+\sigma_1\,\mathrm{rand}()\,(pbest_{in}-x_{in})+\sigma_2\,\mathrm{rand}()\,(gbest_{in}-x_{in}) \tag{8}$$

$$x'_{in}=x_{in}+v'_{in} \tag{9}$$

wherein v_in and x_in respectively denote the current velocity and position of particle i, v'_in and x'_in respectively denote the updated velocity and position of particle i, w denotes the inertia weight, σ1 and σ2 are the acceleration constants, and rand() is a random function over the interval [0, 1];

S34. sending the updated particles back to the U-net neural network to update the link weights to be trained, performing the downsampling and upsampling process again, and recalculating the error; and S35. splitting the obtained optimal position gbest of the particle swarm and mapping it to the weights and thresholds of the U-net neural network model, thus completing the whole PSO-based optimization of the U-net neural network weights.
LU500959A 2020-06-18 2021-04-12 Rough set neural network method for segmentation of fundus retinal blood vessel images LU500959B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010558465.4A CN111815574B (en) 2020-06-18 2020-06-18 Fundus retina blood vessel image segmentation method based on rough set neural network

Publications (2)

Publication Number Publication Date
LU500959A1 LU500959A1 (en) 2022-01-04
LU500959B1 true LU500959B1 (en) 2022-05-04

Family

ID=72844725

Family Applications (1)

Application Number Title Priority Date Filing Date
LU500959A LU500959B1 (en) 2020-06-18 2021-04-12 Rough set neural network method for segmentation of fundus retinal blood vessel images

Country Status (3)

Country Link
CN (1) CN111815574B (en)
LU (1) LU500959B1 (en)
WO (1) WO2021253939A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815574B (en) * 2020-06-18 2022-08-12 南通大学 Fundus retina blood vessel image segmentation method based on rough set neural network
CN115409765B (en) * 2021-05-28 2024-01-09 南京博视医疗科技有限公司 Blood vessel extraction method and device based on fundus retina image
CN114359104B (en) * 2022-01-10 2024-06-11 北京理工大学 Cataract fundus image enhancement method based on hierarchical generation
CN114494196B (en) * 2022-01-26 2023-11-17 南通大学 Retinal diabetes mellitus depth network detection method based on genetic fuzzy tree
CN114612484B (en) * 2022-03-07 2023-07-07 中国科学院苏州生物医学工程技术研究所 Retina OCT image segmentation method based on unsupervised learning
CN115187609A (en) * 2022-09-14 2022-10-14 合肥安杰特光电科技有限公司 Method and system for detecting rice yellow grains
CN115829883B (en) * 2023-02-16 2023-06-16 汶上县恒安钢结构有限公司 Surface image denoising method for special-shaped metal structural member
CN116228545B (en) * 2023-04-04 2023-10-03 深圳市眼科医院(深圳市眼病防治研究所) Fundus color photographic image stitching method and system based on retina characteristic points
CN116523877A (en) * 2023-05-04 2023-08-01 南通大学 Brain MRI image tumor block segmentation method based on convolutional neural network
CN116580008B (en) * 2023-05-16 2024-01-26 山东省人工智能研究院 Biomedical marking method based on local augmentation space geodesic
CN116342588B (en) * 2023-05-22 2023-08-11 徕兄健康科技(威海)有限责任公司 Cerebrovascular image enhancement method
CN116740203B (en) * 2023-08-15 2023-11-28 山东理工职业学院 Safety storage method for fundus camera data
CN117437350B (en) * 2023-09-12 2024-05-03 南京诺源医疗器械有限公司 Three-dimensional reconstruction system and method for preoperative planning
CN117058468B (en) * 2023-10-11 2023-12-19 青岛金诺德科技有限公司 Image recognition and classification system for recycling lithium batteries of new energy automobiles
CN117372284B (en) * 2023-12-04 2024-02-23 江苏富翰医疗产业发展有限公司 Fundus image processing method and fundus image processing system
CN117611599B (en) * 2023-12-28 2024-05-31 齐鲁工业大学(山东省科学院) Blood vessel segmentation method and system integrating centre line diagram and contrast enhancement network
CN117974692B (en) * 2024-03-29 2024-06-07 贵州毅丹恒瑞医药科技有限公司 Ophthalmic medical image processing method based on region growing

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254224A (en) * 2011-07-06 2011-11-23 无锡泛太科技有限公司 Internet of things electric automobile charging station system based on image identification of rough set neural network
EP2847737A4 (en) * 2012-04-11 2016-09-28 Univ Florida System and method for analyzing random patterns
CN108615051B (en) * 2018-04-13 2020-09-15 博众精工科技股份有限公司 Diabetic retina image classification method and system based on deep learning
US11989877B2 (en) * 2018-09-18 2024-05-21 MacuJect Pty Ltd Method and system for analysing images of a retina
CN110232372B (en) * 2019-06-26 2021-09-24 电子科技大学成都学院 Gait recognition method based on particle swarm optimization BP neural network
CN111091916A (en) * 2019-12-24 2020-05-01 郑州科技学院 Data analysis processing method and system based on improved particle swarm optimization in artificial intelligence
CN111815574B (en) * 2020-06-18 2022-08-12 南通大学 Fundus retina blood vessel image segmentation method based on rough set neural network

Also Published As

Publication number Publication date
WO2021253939A1 (en) 2021-12-23
LU500959A1 (en) 2022-01-04
CN111815574B (en) 2022-08-12
CN111815574A (en) 2020-10-23

Similar Documents

Publication Publication Date Title
LU500959B1 (en) Rough set neural network method for segmentation of fundus retinal blood vessel images
Lv et al. Attention guided low-light image enhancement with a large scale low-light simulation dataset
WO2021164234A1 (en) Image processing method and image processing device
CN109754377B (en) Multi-exposure image fusion method
CN106920227A (en) Based on the Segmentation Method of Retinal Blood Vessels that deep learning is combined with conventional method
WO2021164731A1 (en) Image enhancement method and image enhancement apparatus
CN109816666B (en) Symmetrical full convolution neural network model construction method, fundus image blood vessel segmentation device, computer equipment and storage medium
CN109472193A (en) Method for detecting human face and device
US20160155241A1 (en) Target Detection Method and Apparatus Based On Online Training
CN112614072B (en) Image restoration method and device, image restoration equipment and storage medium
CN112991371B (en) Automatic image coloring method and system based on coloring overflow constraint
Steffens et al. Cnn based image restoration: Adjusting ill-exposed srgb images in post-processing
Yuan et al. Single image dehazing via NIN-DehazeNet
CN114627034A (en) Image enhancement method, training method of image enhancement model and related equipment
CN111179196A (en) Multi-resolution depth network image highlight removing method based on divide-and-conquer
CN107871315B (en) Video image motion detection method and device
Li et al. Attention-based adaptive feature selection for multi-stage image dehazing
CN115375986A (en) Model distillation method and device
Zheng et al. Overwater image dehazing via cycle-consistent generative adversarial network
Wu et al. Remote sensing image colorization based on multiscale SEnet GAN
CN111507276A (en) Construction site safety helmet detection method based on hidden layer enhancement features
CN115049901A (en) Small target detection method and device based on feature map weighted attention fusion
CN114821048A (en) Object segmentation method and related device
CN110033422B (en) Fundus OCT image fusion method and device
Chaczko et al. A preliminary investigation on computer vision for telemedicine systems using OpenCV

Legal Events

Date Code Title Description
FG Patent granted

Effective date: 20220504