CN110189320B - Retina blood vessel segmentation method based on middle layer block space structure - Google Patents

Retina blood vessel segmentation method based on middle layer block space structure

Info

Publication number
CN110189320B
CN110189320B (application CN201910471171.5A)
Authority
CN
China
Prior art keywords
block
random forest
image
constructing
blood vessel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910471171.5A
Other languages
Chinese (zh)
Other versions
CN110189320A (en)
Inventor
赵荣昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201910471171.5A priority Critical patent/CN110189320B/en
Publication of CN110189320A publication Critical patent/CN110189320A/en
Application granted granted Critical
Publication of CN110189320B publication Critical patent/CN110189320B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/24323 Tree-organised classifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Abstract

The invention discloses a retinal blood vessel segmentation method based on a middle-layer block spatial structure. The method comprises: constructing a sample set; extracting, for each structure block of the color images in the sample set, the blood vessel structure label value corresponding to its feature expression; constructing a random forest classifier, classifying the structure blocks, and expressing the vascular structure of a new image as a linear combination of the contents of the feature-integration model; and inputting a color fundus image to be analyzed, extracting features, detecting with the random forest classifier the vascular-structure class to which each structure block belongs, expressing the vascular structure in the color image as a sparse linear combination of the model contents, matching the blood vessel label values, and completing the segmentation by computing, at each pixel, the probability over the overlapping structure-block label values. The method segments retinal blood vessels quickly and accurately, with high reliability and a short algorithm running time.

Description

Retina blood vessel segmentation method based on middle layer block space structure
Technical Field
The invention belongs to the field of image processing, and particularly relates to a retinal vessel segmentation method based on a middle-layer block space structure.
Background
As living standards rise, people pay increasing attention to their health. Meanwhile, with the widespread use of smart phones and similar devices, ophthalmic diseases increasingly harm and disrupt people's daily lives. Relevant statistics show that glaucoma, congenital and hereditary eye diseases, and fundus lesions account for about 8.8%, 5.1% and 8.4% of blindness, respectively. Fundus diseases such as diabetic retinopathy and glaucoma are irreversible and have high morbidity, and are therefore highly harmful to the lives of the patients who suffer from them.
The color fundus image is an important basis for the clinical diagnosis of ophthalmic diseases, and analysis of its structure can also serve as an important basis for diagnosing conditions such as hypertension, diabetes, and cardiovascular and cerebrovascular diseases. The retinal blood vessels can likewise provide effective diagnostic evidence for other systemic diseases; for example, changes in the retinal microvasculature suggest that the body may be affected by a circulatory disorder. The retinal vessels are the only deep capillaries in the human body that can be observed directly and non-invasively by imaging, and their structural changes are closely related to the severity and recovery of diseases such as diabetes. Diabetic retinopathy begins with microvascular changes in the body. The structure of the retinal vessels is relatively stable and does not change greatly even as the body ages; apart from diseases such as diabetes, hypertension, and cardiovascular and cerebrovascular diseases, or the action of external force, its structure is rarely affected.
Currently, research on retinal vessel segmentation techniques can be roughly divided into two categories: retinal vessel segmentation based on supervised learning, and retinal vessel segmentation based on unsupervised learning. The unsupervised methods further include vessel-tracking methods, matched-filtering methods, morphological methods, model-based methods, and the like.
Among existing retinal vessel segmentation methods, unsupervised learning generally does not reach the segmentation accuracy of supervised learning; supervised methods perform better in accuracy, timeliness, and similar respects. To obtain a better segmentation result, it is now common to incorporate an unsupervised method into a supervised one. However, existing supervised methods still cannot meet the requirement of real-time segmentation; the segmentation delay is large, which seriously limits the application of retinal vessel segmentation technology.
Disclosure of Invention
The invention aims to provide a retinal vessel segmentation method based on a middle-layer block space structure, which has high reliability and short operation time.
The invention provides a retina blood vessel segmentation method based on a middle layer block space structure, which comprises the following steps:
S1, constructing a sample set;
S2, extracting, from the structure blocks of the color images in the sample set, the blood vessel structure label value corresponding to the feature expression;
S3, constructing a random forest classifier, classifying the structure blocks with the constructed random forest classifier, and expressing the vascular structure of the input image as a linear combination of the contents of the feature-integration model;
and S4, inputting a color fundus image to be analyzed, extracting features, detecting with the random forest classifier constructed in step S3 the blood vessel structure type of each structure block in the image, expressing the vascular structure in the color image as a sparse linear combination of the contents of the trained model, assigning the corresponding blood vessel label value, and computing the final result from the probability over the overlapping structure-block label values to complete the segmentation of the image.
The constructing of the sample set in step S1 specifically comprises the following steps:
A. Z represents an original image, namely a blood vessel label structure block of size Num × Num; A is a model formed from the features of the structure blocks, of size K × Num; X is a sparsely expressed K-dimensional vector; N is the number of structure blocks; and Z ≈ A × X;
B. selecting a plurality of images of size Num1 × Num1 from a database, scanning each pixel of the selected images, and whenever a blood vessel point is encountered, extracting a structure block of size Num × Num centered on that point and recording the position of the central pixel of the extracted structure block;
C. taking the structure blocks extracted in step B as the training content of the model, thereby constructing the sample set.
In step S2, the blood vessel structure label value corresponding to the feature expression is extracted from the structure blocks of the color images in the sample set, specifically by the following steps:
a. calculating multi-feature channel information of each structural block;
b. performing the self-similarity feature calculation based on the multi-feature channel information obtained in step a, and matching it with the vascular structure label value.
In step S3, the random forest classifier is constructed and used to classify the structure blocks, specifically by the following steps:
(1) Constructing a decision tree: a decision tree F recursively passes samples to the label of its left or right sub-tree; each node in F has a splitting function that determines whether a sample goes to the left or the right branch; splitting stops when a leaf node is reached, so that the vascular structure blocks are classified, and the final output is stored at the leaf node (a minimal sketch of this routing follows the list);
(2) Constructing a group of decision trees to form a random forest, ensuring that the randomly selected training samples and features are sufficiently diverse so as to prevent overfitting during the training of the random forest;
(3) The random forest predicts the final output from the voting results of the decision trees, thereby completing the classification of the structure blocks.
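The routing described in step (1) can be made concrete with a minimal sketch of a tree node carrying a splitting function and a leaf label; the class ids and threshold below are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: int = 0                       # index used by the node's splitting function
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    label: Optional[int] = None            # output stored at a leaf node

def route(node: Node, x) -> int:
    """Recursively send a feature vector down the left or right branch until a leaf."""
    if node.label is not None:             # leaf reached: stop splitting, return stored output
        return node.label
    branch = node.left if x[node.feature] < node.threshold else node.right
    return route(branch, x)

# Tiny hand-built tree: one split, two hypothetical vessel-structure classes (3 and 7)
tree = Node(feature=0, threshold=0.5, left=Node(label=3), right=Node(label=7))
print(route(tree, [0.2]), route(tree, [0.9]))   # -> 3 7
```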
In the retinal vessel segmentation method based on the middle-layer block spatial structure provided by the invention, segmentation is achieved with middle-layer image blocks under the assumption, motivated by the sparse structural characteristics of the retinal vessels in color fundus images, that fundus images share small middle-layer image blocks with identical or similar vessel structures. Unlike existing methods, the method does not primarily classify individual pixels but achieves vessel segmentation through multi-class classification of the vessel structure blocks, which shortens the vessel detection time. The selected features are not complex: they are a combination of color and Gaussian features, which require neither complicated computation nor elaborate parameter tuning. To achieve multi-class classification of the vascular structure, a random forest classifier is chosen; it can classify whole structure blocks, trains the features into a model, and offers good classification performance at high speed. When detecting the vascular structure of a fundus image, a step of two pixels is used, so that several predicted structure-block label values are obtained at each pixel, and the segmentation accuracy is improved by computing the probability over the labels at each single pixel. The method therefore has high reliability and a short segmentation time.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
FIG. 2 is a schematic representation of a block of local vascular structures of various shapes according to the method of the present invention.
FIG. 3 is a schematic diagram of a color fundus image and the structure blocks extracted from it according to the method of the present invention.
FIG. 4 is a schematic diagram of one-dimensional and two-dimensional Gaussian function images of the method of the present invention.
FIG. 5 is a schematic representation of the results after Gaussian filtering of a color fundus image according to the method of the present invention.
FIG. 6 is a schematic diagram of the classification of vascular structures according to the method of the present invention.
FIG. 7 is a schematic diagram of the method for extracting the structural blocks from the color fundus image according to the method of the present invention.
Detailed Description
FIG. 1 is a schematic flow chart of the method of the present invention: the invention provides a retina blood vessel segmentation method based on a middle layer block space structure, which comprises the following steps:
S1, constructing a sample set; specifically, the sample set is constructed by the following steps:
A. Z represents an original image, namely a blood vessel label structure block of size Num × Num; A is a model formed from the structure-block features, of size K × Num; X is a sparsely expressed K-dimensional vector; N is the number of structure blocks; and Z ≈ A × X;
B. selecting a plurality of images of size Num1 × Num1 from a database, scanning each pixel of the selected images, and whenever a blood vessel point is encountered, extracting a structure block of size Num × Num centered on that point and recording the position of the central pixel of the extracted structure block;
C. taking the structure blocks extracted in step B as the training content of the model, thereby constructing the sample set;
In a specific implementation, Z represents an original image, that is, a blood vessel label structure block of size 16 × 16; A represents the model formed from the structure-block features, of size K × 16; X is a sparsely represented K-dimensional vector; N is the number of structure blocks; and the relation is as follows:
Z ≈ A × X
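As an illustration only, the relation Z ≈ A × X can be read as: a flattened 16 × 16 label block is approximated by a sparse linear combination of the columns of the model A. A minimal numpy sketch, assuming A is stored as a 256 × K matrix of flattened atoms and X is a K-vector with few non-zero entries (this storage layout is an assumption, not the patent's exact convention):

```python
import numpy as np

NUM, K = 16, 50                          # hypothetical sizes: 16x16 blocks, K atoms
rng = np.random.default_rng(0)

A = rng.random((NUM * NUM, K))           # model built from structure-block features (assumed layout)
X = np.zeros(K)                          # sparse K-dimensional vector
X[rng.choice(K, size=3, replace=False)] = rng.random(3)

Z = A @ X                                # Z is approximated by a linear combination of atoms
block = Z.reshape(NUM, NUM)              # back to a 16x16 label structure block
print(block.shape, np.count_nonzero(X), "non-zero coefficients")
```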
The training set in the database contains color fundus images and their corresponding binary vessel maps, each normalized to a size of 584 × 565. Label structure blocks representing different vessel structures are selected from the binary vessel maps to build the initial matrix of the model, and each structure block represents different vessel information in the color fundus image; the different vessel structures are shown in fig. 2.
Each pixel in the binary vessel map is scanned; whenever a vessel point is found, a 16 × 16 structure block centered on that pixel is extracted to obtain a blood vessel label structure block, and the position D(x, y) of its central pixel is recorded to facilitate later feature extraction. A part of the extracted structure blocks is selected as the training content of the model; fig. 3 shows a color image and structure blocks taken from it.
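A minimal sketch of this block-extraction step, assuming the binary vessel map is a 2-D numpy array with vessel pixels equal to 1; the function and variable names are illustrative, not from the patent:

```python
import numpy as np

def extract_vessel_blocks(vessel_map, num=16):
    """Extract num x num label blocks centered on vessel pixels, with their center positions."""
    half = num // 2
    blocks, centers = [], []
    h, w = vessel_map.shape
    for y in range(half, h - half):
        for x in range(half, w - half):
            if vessel_map[y, x] == 1:                  # a vessel point was scanned
                block = vessel_map[y - half:y + half, x - half:x + half]
                blocks.append(block.copy())
                centers.append((y, x))                 # record D(x, y) for later feature extraction
    return np.stack(blocks), centers

# Toy usage on a 584 x 565 binary map with a fake vessel segment
toy_map = np.zeros((584, 565), dtype=np.uint8)
toy_map[100:110, 200:280] = 1
blocks, centers = extract_vessel_blocks(toy_map)
print(blocks.shape, len(centers))
```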
S2, extracting the blood vessel structure label value corresponding to the feature expression from the structure blocks of the color images in the sample set; specifically, the following steps are used:
a. calculating multi-feature channel information of each structural block;
b. performing the self-similarity feature calculation based on the multi-feature channel information obtained in step a, and matching the vascular structure label values;
In a specific implementation, higher-layer features can be expressed as combinations of lower-layer features, and the features of a color image can be expressed by combining basic features; expressing the corresponding blood vessel label value in the color fundus image as a combination of multiple lower-layer features benefits the vessel segmentation result. Two types of features are selected, namely channel features and self-similarity features, where the channel features include color, Gaussian filtering, and directional channel features.
A point in the CIE-LUV color space is represented by its three components, so a color can be represented as a point, and the distance between two points is computed with the Euclidean formula. Assuming the two colors are $(L_1, U_1, V_1)$ and $(L_2, U_2, V_2)$, the distance is given below, and the color features within a structure block are obtained by evaluating it:

$$d = \sqrt{(L_1 - L_2)^2 + (U_1 - U_2)^2 + (V_1 - V_2)^2}$$
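A small illustrative computation of this distance; the RGB-to-LUV conversion via scikit-image is an assumption for the sketch, since the patent does not name a library:

```python
import numpy as np
from skimage.color import rgb2luv

def luv_distance(rgb1, rgb2):
    """Euclidean distance between two RGB colors measured in CIE-LUV space."""
    luv = rgb2luv(np.array([[rgb1, rgb2]], dtype=np.float64))  # shape (1, 2, 3)
    l1, u1, v1 = luv[0, 0]
    l2, u2, v2 = luv[0, 1]
    return np.sqrt((l1 - l2) ** 2 + (u1 - u2) ** 2 + (v1 - v2) ** 2)

print(luv_distance((0.8, 0.2, 0.2), (0.7, 0.25, 0.2)))  # nearby reds -> small distance
```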
the Gaussian filter is used for extracting local features, noise belongs to a high-frequency part during image detection, and the Gaussian filter has the greatest characteristic of being capable of smoothing, removing noise interference to a great extent and determining the position of a blood vessel by using second-order derivation of Gaussian filtering. The gaussian function is a separable function, and in the two-dimensional gaussian function, convolution operations can be performed on rows and columns respectively, so that the computational complexity can be greatly reduced, the one-dimensional and two-dimensional gaussian functions are shown as follows, x and y are point coordinates, sigma is a standard deviation, and fig. 3 is a distribution diagram of the one-dimensional and two-dimensional gaussian functions
Figure BDA0002080881280000062
Figure BDA0002080881280000063
When the one-dimensional Gaussian function is used, the 16 × 16 structure blocks at the same positions in the corresponding color fundus image are selected for filtering. The choice of the standard deviation σ is critical: if σ is too large, the filter approaches a mean filter and the smoothing effect on the image is strong, but the differences between the template coefficients shrink; if σ is too small, the central coefficient of the template is large and differs greatly from the surrounding coefficients, and the smoothing effect deteriorates. In this method two values of σ, namely 0 and 1.5, are used for the computation; FIG. 5 shows the result of applying two-dimensional Gaussian filtering to an entire fundus image.
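A sketch of the two-scale Gaussian filtering on the green channel; scipy is an illustrative choice, and σ = 0 is interpreted here as the unblurred image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_channels(green, sigmas=(0.0, 1.5)):
    """One channel per sigma; sigma = 0 leaves the image unchanged."""
    img = green.astype(np.float64)
    return [img.copy() if s == 0 else gaussian_filter(img, sigma=s) for s in sigmas]

green = np.random.rand(584, 565)            # stand-in for the green channel of a fundus image
smooth0, smooth15 = gaussian_channels(green)
print(np.allclose(smooth0, green), smooth15.shape)
```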
First, the multi-feature channel information of each structure block is computed. Each channel feature represents different information about the same structure block, and the feature size of each image structure block is 16 × 16 × C, where C is the number of channel features. The channel features consist of color, Gaussian, and directional channel information: three color channel features are computed in the CIE-LUV color space, and the normalized gradient channel features are obtained at two scales by blurring with two Gaussian filters; the gradient magnitude channels for σ = 0 and σ = 1.5 are each split into four channels by direction, forming eight directional channels. The channel features attached to an image structure block therefore comprise 3 color channel features, 2 Gaussian filter features, and 8 directional channel features, 13 channel features in total. Content-based image segmentation refers to using the color, texture, semantic, shape, and other features of an image in an attempt to segment it according to a partition of the image content. The self-similarity features are computed from the color and gradient channel information: cells of 5 × 5 resolution are used within the structure blocks, all structure blocks are sampled, each channel layer contains cells of size 5 × 5 that may overlap one another, and each channel layer yields

$$\binom{25}{2} = \frac{25 \times 24}{2} = 300$$

self-similarity features.
In the invention, the features computed for a structure block are: 3 color features, 2 Gaussian filtering features, and 8 directional channel features, i.e. 13 channel features in total; each channel additionally yields 300 self-similarity features, so a 16 × 16 structure block carries 13 channel features and 13 × 300 self-similarity features.
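A condensed sketch of the per-block feature layout described above (3 color + 2 Gaussian + 8 orientation channels, and 300 pairwise cell differences per channel); scikit-image and scipy are illustrative choices, and the exact channel definitions in the patent may differ:

```python
import numpy as np
from itertools import combinations
from scipy.ndimage import gaussian_filter, sobel, zoom
from skimage.color import rgb2luv

def block_channels(rgb_block):
    """Stack 3 LUV + 2 Gaussian + 8 orientation channels for one 16x16 RGB block."""
    luv = rgb2luv(rgb_block)
    gray = rgb_block.mean(axis=2)
    chans = [luv[..., i] for i in range(3)]            # 3 color channels
    for s in (0.0, 1.5):
        blur = gray if s == 0 else gaussian_filter(gray, s)
        chans.append(blur)                             # Gaussian-filtered channel
        gy, gx = sobel(blur, 0), sobel(blur, 1)
        mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx) % np.pi
        for k in range(4):                             # split magnitude into 4 orientation bins
            mask = (ang >= k * np.pi / 4) & (ang < (k + 1) * np.pi / 4)
            chans.append(mag * mask)
    return np.stack(chans)                             # shape (13, 16, 16)

def self_similarity(channels, grid=5):
    """C(25, 2) = 300 pairwise cell differences per channel layer."""
    feats = []
    for ch in channels:
        cells = zoom(ch, grid / ch.shape[0], order=1).ravel()   # 5x5 = 25 cells
        feats.extend(cells[i] - cells[j] for i, j in combinations(range(grid * grid), 2))
    return np.array(feats)                             # 13 * 300 values

block = np.random.rand(16, 16, 3)
ch = block_channels(block)
print(ch.shape, self_similarity(ch).shape)
```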
S3, constructing a random forest classifier, classifying the structure blocks with it, and expressing the vascular structure of a new image as a linear combination of the contents of the feature-integration model; specifically, the random forest classifier is constructed and used for classification by the following steps:
(1) Constructing a decision tree: a decision tree F recursively passes samples to the label of its left or right sub-tree; each node in F has a splitting function that determines whether a sample goes to the left or the right branch; splitting stops when a leaf node is reached, so that the vascular structure blocks are classified, and the final output is stored at the leaf node;
(2) Constructing a group of decision trees to form a random forest, ensuring that the randomly selected training samples and features are sufficiently diverse so as to prevent overfitting during the training of the random forest;
(3) The random forest predicts the final output from the voting results of the decision trees, thereby completing the classification of the middle-layer blocks;
In a specific implementation, the random forest classifier is used to classify the structure blocks, and a model is formed from the features. The features of the extracted structure blocks are selected for the random forest training process, and a vascular structure is randomly selected as the label during training.
A decision tree is constructed: the decision tree F recursively passes samples to the label of its left or right sub-tree; each node in the tree has a splitting function that determines whether a sample goes to the left or the right branch; splitting stops when a leaf node is reached, so that the vascular structure blocks are classified, and the final output is stored at the leaf node, as in the example of vascular structure classification in fig. 6. The training of each tree is independent and parallel, and the splitting of a tree stops when training reaches the maximum tree depth or the samples at a node are divided down to a specified threshold.
A group of decision trees is constructed to form the random forest, ensuring that the randomly selected training samples and features are sufficiently diverse so as to prevent overfitting during the training of the random forest.
Eight decision trees are trained to form the random forest; training stops when the structures at each leaf node are pure or the number of structure blocks at a leaf node is less than 2, and the maximum depth of each tree is 64.
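For illustration only, the stated hyperparameters (8 trees, maximum depth 64, no further split below 2 samples, random feature sub-sampling) map naturally onto scikit-learn's RandomForestClassifier; this is a sketch under that assumption, not the patent's own implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((500, 4000))                # toy feature vectors, one per structure block
y_train = rng.integers(0, 20, size=500)          # hypothetical vessel-structure class ids

forest = RandomForestClassifier(
    n_estimators=8,          # 8 decision trees
    max_depth=64,            # maximum depth of each tree
    min_samples_split=2,     # do not split nodes with fewer than 2 samples
    max_features="sqrt",     # random sub-sampling of features at each node
    bootstrap=True,          # random sampling of training blocks per tree
    random_state=0,
)
forest.fit(X_train, y_train)
print(forest.predict(X_train[:3]))
```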
The random forest predicts the final output from the voting results of the group of decision trees: given an input value x, each decision tree predicts a value $f_t(x)$, and together they form an ensemble model whose output is built in a top-down recursive manner. To ensure tree diversity, each tree is trained on a random sample of the training data, and at each node the candidate feature attributes are randomly sub-sampled from all F features, roughly

$$\sqrt{F}$$

features per node, so that randomness is injected into every layer of the tree and the trained dictionary model is more complete. The Gini coefficient is used for feature selection and for the decision at each node, and is expressed as follows:

$$\operatorname{Gini}(T) = 1 - \sum_{i} p_i^2, \qquad p_i = \frac{N_i}{S}$$

$$\operatorname{Gini}_{split}(T) = \frac{S_1}{S}\operatorname{Gini}(T_1) + \frac{S_2}{S}\operatorname{Gini}(T_2)$$

where $p_i$ is the frequency of class i in the sample set T, $N_i$ is the number of samples of class i in T, S is the number of samples in T, and $S_1$, $S_2$ are the sample counts of the subsets $T_1$ and $T_2$ produced by the split.
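A small worked example of the Gini computation used for feature selection and node decisions (illustrative labels only):

```python
import numpy as np

def gini(labels):
    """Gini impurity 1 - sum_i p_i^2 of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def gini_split(left, right):
    """Weighted Gini impurity of a candidate split into two subsets."""
    s1, s2 = len(left), len(right)
    s = s1 + s2
    return (s1 / s) * gini(left) + (s2 / s) * gini(right)

T = np.array([0, 0, 1, 1, 1, 2])
print(round(gini(T), 3))                    # impurity of the node, about 0.611
print(round(gini_split(T[:2], T[2:]), 3))   # impurity after one candidate split, 0.25
```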
S4, inputting a color fundus image to be analyzed, extracting features, detecting with the random forest classifier constructed in step S3 the vascular-structure class to which each structure block in the image belongs, expressing the vascular structure in the color image as a sparse linear combination of the contents of the trained model, assigning a blood vessel label value, and completing the segmentation of the image by computing, at each single pixel, the probability over the overlapping structure-block label values.
In a specific implementation, a new color fundus image is input and its pixels are scanned with a step of two pixels in the horizontal and vertical directions; at each scanned pixel, a 16 × 16 structure block centered on that pixel is extracted, as shown in fig. 7.
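A sketch of this stride-2 scan over the test image; the names are illustrative and the patent does not prescribe a particular API:

```python
import numpy as np

def scan_blocks(image, num=16, step=2):
    """Yield (center_y, center_x, block) for num x num blocks on a 2-pixel grid."""
    half = num // 2
    h, w = image.shape[:2]
    for y in range(half, h - half, step):
        for x in range(half, w - half, step):
            yield y, x, image[y - half:y + half, x - half:x + half]

fundus = np.random.rand(584, 565, 3)             # stand-in for a color fundus image
gray = fundus.mean(axis=2)
n_blocks = sum(1 for _ in scan_blocks(gray))
print(n_blocks, "overlapping blocks extracted")
```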
The features of the 16 × 16 structure block in the corresponding color fundus image are computed, and the feature vector is recorded at the central pixel x, expressed as $x \in F^{16 \times 16 \times C}$, where C is the number of feature channels; the features comprise the four types of features used in model training.
The features are fed to the trained random forest classifier, expressed as sparse linear combinations of the model contents, the corresponding blood vessel label values are predicted, and the structure blocks in the color image are thus classified.
When the information in the color fundus image is detected, structure blocks are selected with a step of two pixels and classified by the random forest classifier, so each pixel receives several overlapping structure-block label values; in the actual output, each pixel inside a 16 × 16 structure block obtains on average more than 256 predicted label values.
Through the above steps, every pixel of a color fundus image receives multiple predictions from the neighbouring structure blocks. To obtain the segmentation, a probability is computed over the predicted values at each pixel, yielding a vessel probability map according to the following expression, where P denotes the probability, x is a pixel of the color image, $p_i$ denotes the prediction of the i-th structure block passing through pixel x, I is the predicted probability map, and $p_i \in I$:

$$P(x) = \frac{1}{n}\sum_{i=1}^{n} p_i(x)$$

where n is the number of overlapping structure blocks covering x.
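A sketch of accumulating the overlapping block predictions into a per-pixel vessel probability map by simple averaging; this averaging form is an assumption consistent with the description above, not a verbatim reproduction of the patent's expression:

```python
import numpy as np

def aggregate(pred_blocks, centers, shape, num=16):
    """Average overlapping num x num label predictions into a vessel probability map."""
    half = num // 2
    acc = np.zeros(shape, dtype=np.float64)      # sum of predicted labels per pixel
    cnt = np.zeros(shape, dtype=np.float64)      # number of blocks covering each pixel
    for block, (y, x) in zip(pred_blocks, centers):
        acc[y - half:y + half, x - half:x + half] += block
        cnt[y - half:y + half, x - half:x + half] += 1
    return np.divide(acc, cnt, out=np.zeros(shape), where=cnt > 0)

# Toy usage: two overlapping predicted blocks; their overlap averages to 0.5
preds = [np.ones((16, 16)), np.zeros((16, 16))]
prob = aggregate(preds, [(20, 20), (22, 22)], shape=(64, 64))
print(prob.max(), prob.min())
```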
Because whole structure blocks are used to predict the information of the whole image, the method is remarkably efficient and the amount of computation is greatly reduced; during detection the predictions of the individual trees are independent, a sub-sampling scheme is used, and the inputs and outputs of the decision trees overlap, which also greatly improves the segmentation accuracy.

Claims (4)

1. A retinal vessel segmentation method based on a middle layer block space structure comprises the following steps:
S1, constructing a sample set;
S2, extracting, from the structure blocks of the color images in the sample set, the blood vessel structure label value corresponding to the feature expression;
S3, constructing a random forest classifier, classifying the structure blocks with the constructed random forest classifier, and expressing the vascular structure of a new image as a linear combination of the contents of the feature-integration model;
and S4, inputting a color fundus image to be analyzed, extracting features, detecting with the random forest classifier constructed in step S3 the vascular-structure class to which each structure block in the image belongs, expressing the vascular structure in the color image as a sparse linear combination of the contents of the trained model, assigning a blood vessel label value, and completing the segmentation of the image by computing the probability over the overlapping structure-block label values.
2. The retinal vessel segmentation method based on the middle-layer block spatial structure according to claim 1, wherein the constructing of the sample set in step S1 specifically comprises the following steps:
A. Z represents an original image, namely a blood vessel label structure block of size Num × Num; A is a model formed from the structure-block features, of size K × Num; X is a sparsely expressed K-dimensional vector; N is the number of structure blocks; and Z ≈ A × X;
B. selecting a plurality of images of size Num1 × Num1 from a database, scanning each pixel of the selected images, and whenever a blood vessel point is encountered, extracting a structure block of size Num × Num centered on that point and recording the position of the central pixel of the extracted structure block;
C. taking the structure blocks extracted in step B as the training content of the model, thereby constructing the sample set.
3. The retinal vessel segmentation method based on the middle-layer block spatial structure according to claim 2, wherein in step S2 the vessel structure label value corresponding to the feature expression is extracted from the structure blocks of the color image sample set, specifically by the following steps:
a. calculating multi-feature channel information of each structural block;
b. performing the self-similarity feature calculation based on the multi-feature channel information obtained in step a, so as to obtain the final blood vessel structure label value.
4. The retinal vessel segmentation method based on the middle-layer block spatial structure as claimed in claim 3, wherein in step S3 the random forest classifier is constructed and used to classify the structure blocks, specifically by the following steps of constructing the random forest classifier and classifying:
(1) Constructing a decision tree: a decision tree F recursively passes samples to the label of its left or right sub-tree; each node in F has a splitting function that determines whether a sample goes to the left or the right branch; splitting stops when a leaf node is reached, so that the vascular structure blocks are classified, and the final output is stored at the leaf node;
(2) Constructing a group of decision trees to form a random forest, ensuring that the randomly selected training samples and features are sufficiently diverse so as to prevent overfitting during the training of the random forest;
(3) The random forest predicts the final output from the voting results of the decision trees, thereby completing the classification.
CN201910471171.5A 2019-05-31 2019-05-31 Retina blood vessel segmentation method based on middle layer block space structure Active CN110189320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910471171.5A CN110189320B (en) 2019-05-31 2019-05-31 Retina blood vessel segmentation method based on middle layer block space structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910471171.5A CN110189320B (en) 2019-05-31 2019-05-31 Retina blood vessel segmentation method based on middle layer block space structure

Publications (2)

Publication Number Publication Date
CN110189320A CN110189320A (en) 2019-08-30
CN110189320B true CN110189320B (en) 2023-04-07

Family

ID=67719511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910471171.5A Active CN110189320B (en) 2019-05-31 2019-05-31 Retina blood vessel segmentation method based on middle layer block space structure

Country Status (1)

Country Link
CN (1) CN110189320B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110448267B (en) * 2019-09-06 2021-05-25 重庆贝奥新视野医疗设备有限公司 Multimode fundus dynamic imaging analysis system and method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010138645A2 (en) * 2009-05-29 2010-12-02 University Of Pittsburgh - Of The Commonwealth System Of Higher Education Blood vessel segmentation with three-dimensional spectral domain optical coherence tomography
CN106408562A (en) * 2016-09-22 2017-02-15 华南理工大学 Fundus image retinal vessel segmentation method and system based on deep learning
CN109523524A (en) * 2018-11-07 2019-03-26 电子科技大学 A kind of eye fundus image hard exudate detection method based on integrated study


Also Published As

Publication number Publication date
CN110189320A (en) 2019-08-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant