CN111563890A - Fundus image blood vessel segmentation method and system based on deep forest - Google Patents
Fundus image blood vessel segmentation method and system based on deep forest
- Publication number
- CN111563890A (application CN202010378375.7A)
- Authority
- CN
- China
- Prior art keywords
- forest
- fundus image
- cascade
- blood vessel
- feature vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012 — Biomedical image inspection
- G06F18/24323 — Tree-organised classifiers
- G06T7/11 — Region-based segmentation
- G06T2207/30041 — Eye; Retina; Ophthalmic
- G06T2207/30101 — Blood vessel; Artery; Vein; Vascular
Abstract
The invention discloses a deep-forest-based fundus image blood vessel segmentation method and system. The method comprises the following steps: acquiring a fundus image dataset; preprocessing the fundus image dataset; setting a sampling window so that the pixel matrix it contains completely covers the blood vessel part, and extracting gray-level texture features of the fundus image dataset to form a plurality of feature vector sets f; constructing a random forest and a completely random forest according to the feature vector sets f; constructing a deep forest based on the random forest and the completely random forest, wherein the deep forest comprises multi-granularity scanning and a cascade forest, and each level of the cascade forest contains equal numbers of random forests and completely random forests; and segmenting the fundus image dataset with the deep forest and outputting the segmentation result. The segmentation accuracy of the scheme can reach above 95%, dependence on data is reduced, and the scheme is suitable for industrial application.
Description
Technical Field
The invention relates to the technical field of machine learning and image processing, in particular to a fundus image blood vessel segmentation method and system based on a deep forest.
Background
The retinal blood vessels are important components of the systemic microcirculation system, and changes in their morphological structure are closely related to the severity of cardiovascular and cerebrovascular diseases such as diabetes and hypertension. Extracting the retinal vessels and analyzing features such as vessel diameter and curvature can therefore predict cardiovascular disease to a great extent, enabling scientific preventive intervention and drug treatment.
With the development of artificial intelligence, deep learning has been widely applied to fundus image retinal vessel segmentation. Many researchers at home and abroad have proposed fundus segmentation algorithms based on deep learning, mainly including: the LeNet-5 neural network, deep belief networks, support vector machines (with an RBF kernel), and random forests. However, these algorithms have the following disadvantages:
1. There are too many hyper-parameters, and learning performance depends heavily on the tuning process.
2. The model is overly complex.
3. Deep neural networks require training using large amounts of data. Even in the current big data era, we lack sufficient training data.
4. Due to high labeling costs, it is not suitable for industrial applications.
5. Theoretical analysis is difficult.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide a fundus image blood vessel segmentation method and a fundus image blood vessel segmentation system based on a deep forest.
The invention mainly solves the technical problems through the following technical scheme: a fundus image blood vessel segmentation method based on a deep forest comprises the following steps:
s01, acquiring a fundus image dataset; the fundus image data set comprises a fundus image and an image label corresponding to the position of a blood vessel of the fundus image;
s02, preprocessing the fundus image data set;
s03, setting a sampling window, enabling a pixel matrix contained in the sampling window to completely cover a blood vessel part, extracting gray texture features of the fundus image data set, and forming a plurality of feature vector sets f;
s04, constructing a random forest and a completely random forest according to the feature vector set f;
s05, constructing a deep forest based on the random forest and the completely random forest according to the feature vector sets f, wherein the deep forest comprises multi-granularity scanning and a cascade forest, and each layer of the cascade forest contains equal numbers of random forests and completely random forests;
s06, segmenting the fundus image data set by using the deep forest;
and S07, outputting the segmentation result.
Preferably, before outputting the segmentation result, steps S03-S06 are repeated several times with different sampling windows to build several deep forests; the deep forest with the highest accuracy is selected through model evaluation, the fundus image dataset is segmented with it, and the segmentation result is output.
Preferably, the preprocessing the fundus image data set in step S02 includes:
s201, converting the fundus images of the fundus image dataset and their corresponding image labels and expanding the results into the dataset, wherein the conversion comprises flipping and rotation, and the same parameters are used when converting a fundus image and its label;
s202, brightness normalization is carried out on each fundus image in the fundus image data set.
Preferably, in step S03, setting a sampling window so that the pixel matrix it contains completely covers the blood vessel part, extracting gray-level texture features of the fundus image dataset, and forming a plurality of feature vector sets f specifically comprises:
s301, setting the sampling window to contain an N × N pixel matrix and dividing each fundus image in the dataset into K N × N pixel matrices, wherein N and K are positive integers;
s302, expanding the K pixel matrices into column vectors that serve as the feature vector set f of the fundus image they belong to, forming a plurality of feature vector sets f.
Preferably, in step S04, constructing a random forest and a completely random forest according to the feature vector sets f specifically comprises:
s401, applying the bootstrap method to randomly sample, with replacement, from the feature vector sets f and constructing M decision trees;
s402, letting the feature number of the fundus image be H; randomly extracting L candidate features (L ≤ H) at each node of each decision tree and selecting the candidate feature with the minimum Gini index as the splitting node to split the node; a tree stops growing when its splitting node contains only one feature or fewer features than the minimum splitting number, which is set by the user; the M grown decision trees form the random forest; a feature of the fundus image here means the value of a pixel of the grayscale fundus image;
s403, randomly selecting 1 candidate feature at each node of each decision tree as the splitting node to split the node, stopping growth under the same condition, and forming the completely random forest from the M grown decision trees.
Preferably, in step S05, constructing a deep forest based on a random forest and a completely random forest according to the feature vector set f specifically includes:
s501, taking the feature vector sets f as the input of multi-granularity scanning, i.e., inputting each feature vector set f into the random forest and the completely random forest respectively, so that each feature vector set f yields two class-probability vectors;
s502, concatenating all probability vectors output by multi-granularity scanning and inputting them into the first layer of the cascade forest, each layer of which consists of equal numbers of random forests and completely random forests; concatenation here means stacking several vectors along the y-axis into a single vector;
s503, concatenating the input of the first cascade layer with its output and feeding the result into the second cascade layer;
s504, in each subsequent cascade layer, taking the input of the first cascade layer together with the output of the previous layer as the input of the current layer;
each time a cascade layer is trained, the whole deep forest generated so far is cross-validated on the test set; if the test-set accuracy is lower than that of the previous layer, the deep forest stops growing and no further cascade layers are added; otherwise the number of cascade layers keeps increasing until the test-set accuracy falls below that of the previous layer.
Preferably, the model evaluation specifically includes:
outputting a fundus vessel map for each fundus image vessel segmentation model and calculating the TP value: the number of vessel pixels correctly predicted as vessels;
calculating the FN value: the number of vessel pixels incorrectly predicted as background;
calculating the FP value: the number of background pixels incorrectly predicted as vessels;
calculating the TN value: the number of background pixels correctly predicted as background;
and calculating accuracy, sensitivity, and specificity from the TP, FN, FP, and TN values.
A system for fundus image vessel segmentation, comprising:
a data acquisition unit for acquiring fundus images,
the preprocessing unit is used for preprocessing the fundus image;
the characteristic vector set extraction unit is used for setting a sampling window to enable a pixel matrix contained in the sampling window to completely cover a blood vessel part, extracting gray texture characteristics of the fundus image and forming a plurality of characteristic vector sets f;
a random forest and complete random forest constructing unit, which is used for constructing a random forest and a complete random forest according to the feature vector set f;
the deep forest constructing unit is used for constructing a deep forest based on a random forest and a complete random forest according to the feature vector set f, wherein the deep forest comprises two parts, namely a multi-granularity scanning part and a cascade forest, and each level of cascade forest in the deep forest is composed of the same number of random forests and the complete random forest;
and the data output unit is used for segmenting the image by using the depth forest and outputting a segmentation result.
A computer device for segmentation of vessels in fundus images, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps as described above when executing the computer program.
A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, carries out the steps of the method described above.
Compared with the prior art, the invention has the following beneficial effects:
compared with a deep neural network, the fundus image blood vessel segmentation method disclosed by the invention is easy to train, and the neural network can run out a better effect only by needing a large number of data sets. In the method, only the fundus image data set is needed, then the deep forest is established, cross validation is used for generation of each cascade, a fundus image blood vessel segmentation model with relatively high accuracy is selected, the accuracy of segmentation results can reach more than 95%, dependence on data is reduced, and the method is suitable for industrial application.
Drawings
FIG. 1 is a flow chart of a method of constructing a fundus image vessel segmentation model;
FIG. 2 is a schematic illustration of pre-processing a fundus image dataset;
FIG. 3 is a schematic view of extracting gray scale texture features of a fundus image dataset;
FIG. 4 is a schematic diagram of the construction of a random forest;
FIG. 5 is a schematic diagram of a multi-granularity scan;
FIG. 6 is a schematic diagram of constructing a deep forest;
FIG. 7 is a diagram illustrating an output segmentation result;
FIG. 8 is a comparison graph one of the fundus images before and after segmentation;
fig. 9 is a comparison image two of the fundus images before and after segmentation.
Detailed Description
The technical scheme of the invention is further specifically described by the following embodiments and the accompanying drawings.
Embodiment:
the embodiment discloses a fundus image blood vessel segmentation method based on a deep forest, as shown in fig. 1, comprising the following steps:
s01, acquiring a fundus image dataset to be used as the training set, comprising a plurality of groups of fundus images (as an alternative, each group comprises a fundus image and an outer boundary map); the fundus image dataset comprises the fundus images and image labels corresponding to the blood vessel positions of each fundus image;
s02, as shown in fig. 2, pre-processing the fundus image data set, specifically:
s201, converting the fundus images of the fundus image dataset and their corresponding image labels and expanding the results into the dataset, wherein the conversion comprises flipping and rotation, and the same parameters are used when converting a fundus image and its label;
the selected expansion steps in this embodiment are:
The dataset is expanded by flipping and rotating the fundus images of the fundus image dataset together with their corresponding image labels: horizontal and vertical flips expand the set 4-fold, and one rotation every 10 degrees further expands it to between 2500 and 3500 groups.
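The flip-and-rotate expansion above can be sketched as follows; `scipy.ndimage.rotate` and the exact pairing of transforms are illustrative assumptions, not the patent's own implementation:

```python
import numpy as np

def augment(image: np.ndarray, label: np.ndarray):
    """Expand one (image, label) pair by flips and 10-degree-step rotations.
    The same transform is always applied to image and label together, as
    required by S201. scipy.ndimage.rotate is a hypothetical stand-in."""
    from scipy.ndimage import rotate
    pairs = [(image, label),
             (np.fliplr(image), np.fliplr(label)),       # horizontal flip
             (np.flipud(image), np.flipud(label)),       # vertical flip
             (np.flipud(np.fliplr(image)), np.flipud(np.fliplr(label)))]
    for angle in range(10, 360, 10):                     # one rotation every 10 degrees
        pairs.append((rotate(image, angle, reshape=False),
                      rotate(label, angle, reshape=False)))
    return pairs
```

With `reshape=False` every augmented image keeps the original shape, so the image and its label stay pixel-aligned.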
S202, performing brightness normalization on each fundus image in the fundus image data set, wherein the steps are as follows:
s2021, converting the fundus image into a grayscale image using the formula Gray = 0.2989 × R + 0.5870 × G + 0.1140 × B, where R, G, and B are the red, green, and blue color channels;
s2022, calculating the average gray value of each fundus image, and taking the average gray value as the brightness of the fundus image;
s2023, calculating the average brightness value over all the fundus images and normalizing each image's brightness as B = (B − E(B)) / sqrt(D(B)), where B is the brightness value, E the expectation, and D the variance.
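A minimal sketch of steps S2021-S2023, assuming the normalization shifts each image so that its mean brightness becomes its z-score across the dataset (the patent's formula leaves this detail open):

```python
import numpy as np

# Luminance weights from the embodiment: Gray = 0.2989*R + 0.5870*G + 0.1140*B.
GRAY_WEIGHTS = np.array([0.2989, 0.5870, 0.1140])

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB fundus image to a grayscale map (S2021)."""
    return rgb[..., :3] @ GRAY_WEIGHTS

def normalize_brightness(grays: list) -> list:
    """S2022-S2023 sketch: each image's brightness is its mean gray value;
    brightness is z-scored across the dataset, b <- (b - E(b)) / sqrt(D(b)),
    by shifting every pixel of the image accordingly (an assumption)."""
    means = np.array([g.mean() for g in grays])
    mu, sigma = means.mean(), means.std()
    return [g - m + (m - mu) / sigma for g, m in zip(grays, means)]
```

After normalization the mean brightnesses of the dataset are centered on zero with unit spread, which removes illumination differences between fundus photographs.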
Furthermore, since the non-vessel area in a fundus image is far larger than the vessel area, and the non-vessel part also has corresponding image labels, 40% of the non-vessel points and 100% of the vessel points are taken to form the training sample in order to avoid class imbalance.
S03, as shown in fig. 3, setting a sampling window so that the pixel matrix it contains completely covers the blood vessel part, extracting gray-level texture features of the fundus image dataset, and forming a plurality of feature vector sets f, specifically:
s301, setting the sampling window to contain an N × N pixel matrix and dividing each fundus image in the dataset into K N × N pixel matrices, wherein N and K are positive integers; usually N is 7-20;
s302, expanding the K pixel matrices into column vectors that serve as the feature vector set f of the fundus image they belong to, forming a plurality of feature vector sets f.
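The windowing of S301-S302 can be sketched as follows; non-overlapping windows and the discarding of edge remainders are assumptions of this sketch:

```python
import numpy as np

def window_features(gray: np.ndarray, n: int) -> np.ndarray:
    """Split a grayscale fundus image into K non-overlapping N x N patches
    and flatten each patch into a length-N*N vector (one row per patch).
    Edge pixels that do not fill a whole window are discarded here."""
    h, w = gray.shape
    rows, cols = h // n, w // n
    patches = (gray[:rows * n, :cols * n]
               .reshape(rows, n, cols, n)   # block rows / block cols
               .swapaxes(1, 2)              # group the two block axes
               .reshape(rows * cols, n * n))
    return patches  # shape (K, N*N): K feature vectors per image
```

Each row is one flattened window, i.e. one element of the image's feature vector set f.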
S04, constructing a random forest and a completely random forest according to the feature vector sets f, specifically comprising the following steps:
s401, applying the bootstrap method to randomly sample, with replacement, from the feature vector sets f and constructing M decision trees;
s402, letting the feature number of the fundus image be H; randomly extracting L candidate features (L ≤ H) at each node of each decision tree and selecting the candidate feature with the minimum Gini index as the splitting node to split the node; a tree stops growing when its splitting node contains only one feature or fewer features than the minimum splitting number, which is set by the user; the M grown decision trees form the random forest;
The Gini index is calculated as follows: for a given training set D with classes k = 1, …, K occurring in proportions p_k, Gini(D) = 1 − Σ_{k=1}^{K} p_k².
s403, randomly selecting 1 candidate feature at each node of each decision tree as the splitting node to split the node, stopping growth under the same condition, and forming the completely random forest from the M grown decision trees. Fig. 4 is a schematic diagram of the construction of a random forest.
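As a hedged illustration, scikit-learn's `RandomForestClassifier` approximates the Gini-split forest of S402, and `ExtraTreesClassifier` with `max_features=1` approximates the completely random forest of S403 (each node splits on a single randomly chosen feature); neither is the patent's own implementation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier

# Toy data standing in for 49-dimensional (7 x 7 window) feature vectors.
X, y = make_classification(n_samples=200, n_features=49, random_state=0)

# Gini-split random forest (S402 analogue).
rf = RandomForestClassifier(n_estimators=50, criterion="gini",
                            min_samples_split=2, random_state=0).fit(X, y)
# "Completely random forest" (S403 analogue): one random candidate
# feature per node, with randomized split thresholds.
crf = ExtraTreesClassifier(n_estimators=50, max_features=1,
                           random_state=0).fit(X, y)

# Each forest emits a class-probability vector per sample; the deep forest
# concatenates these as augmented features for the next cascade layer.
proba = np.hstack([rf.predict_proba(X), crf.predict_proba(X)])
```

For two classes, each sample thus yields a 4-dimensional probability vector (two per forest), matching the "two feature probability vectors" of S501.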
S05, as shown in fig. 6, constructing a deep forest based on the random forest and the completely random forest according to the feature vector sets f, where the deep forest comprises multi-granularity scanning (shown in fig. 5) and a cascade forest, and each level of the cascade forest contains equal numbers of random forests and completely random forests, specifically:
s501, taking the feature vector sets f as the input of multi-granularity scanning, i.e., inputting each feature vector set f into the random forest and the completely random forest respectively, so that each feature vector set f yields two class-probability vectors;
s502, concatenating all probability vectors output by multi-granularity scanning and inputting them into the first layer of the cascade forest, each layer of which consists of equal numbers of random forests and completely random forests;
s503, concatenating the input of the first cascade layer with its output and feeding the result into the second cascade layer;
s504, in each subsequent cascade layer, taking the input of the first cascade layer together with the output of the previous layer as the input of the current layer;
each time a cascade layer is trained, the whole deep forest generated so far is cross-validated on the test set; if the test-set accuracy is lower than that of the previous layer, the deep forest stops growing and no further cascade layers are added; otherwise the number of cascade layers keeps increasing until the test-set accuracy falls below that of the previous layer.
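The layer-growing rule of S502-S504 can be sketched as follows, with a single held-out split standing in for the patent's cross-validation and binary classes assumed:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

def grow_cascade(X, y, max_layers=5, seed=0):
    """Grow cascade layers until held-out accuracy stops improving.
    Each layer holds one random forest and one completely random forest;
    its probability outputs are concatenated with the original input to
    feed the next layer (S503/S504)."""
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=seed)
    aug_tr, aug_te = Xtr, Xte
    best_acc, layers = -np.inf, []
    for _ in range(max_layers):
        rf = RandomForestClassifier(n_estimators=30, random_state=seed).fit(aug_tr, ytr)
        crf = ExtraTreesClassifier(n_estimators=30, max_features=1,
                                   random_state=seed).fit(aug_tr, ytr)
        p_tr = np.hstack([rf.predict_proba(aug_tr), crf.predict_proba(aug_tr)])
        p_te = np.hstack([rf.predict_proba(aug_te), crf.predict_proba(aug_te)])
        # Average the two forests' votes to score this depth of cascade.
        acc = (((p_te[:, :2] + p_te[:, 2:]) / 2).argmax(1) == yte).mean()
        if acc <= best_acc:          # accuracy fell: stop adding layers
            break
        best_acc, layers = acc, layers + [(rf, crf)]
        # Next layer's input: original features plus this layer's output.
        aug_tr, aug_te = np.hstack([Xtr, p_tr]), np.hstack([Xte, p_te])
    return layers, best_acc

X, y = make_classification(n_samples=300, n_features=49, random_state=1)
layers, acc = grow_cascade(X, y)
```

The early-stopping check mirrors the patent's rule: the cascade keeps deepening only while held-out accuracy improves over the previous layer.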
S06, segmenting the fundus image data set by using the deep forest;
s07, the segmentation result is output, and the result is shown in fig. 7.
Before outputting the segmentation result, steps S03-S06 are repeated several times with different sampling windows to build several deep forests; the deep forest with the highest accuracy is selected through model evaluation, the fundus image dataset is segmented with it, and the segmentation result is output.
Model evaluation: each fundus image vessel segmentation model outputs a fundus vessel map, from which the following counts are calculated. TP value: the number of vessel pixels correctly predicted as vessels; FN value: the number of vessel pixels incorrectly predicted as background; FP value: the number of background pixels incorrectly predicted as vessels; TN value: the number of background pixels correctly predicted as background.
Accuracy, sensitivity, and specificity are then calculated from the TP, FN, FP, and TN values, where
accuracy = (TP + TN)/(TP + TN + FP + FN), the fraction of all pixel points classified correctly; sensitivity = TP/(TP + FN), the fraction of true vessel points correctly classified; specificity = TN/(TN + FP), the fraction of background (non-vessel) points correctly classified.
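The three metrics follow directly from the confusion-matrix counts:

```python
def vessel_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Standard confusion-matrix metrics for vessel segmentation:
    accuracy    = fraction of all pixels classified correctly,
    sensitivity = fraction of true vessel pixels recovered,
    specificity = fraction of true background pixels recovered."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```

For example, a model with TP = 90, FN = 10, FP = 20, TN = 880 on 1000 pixels has accuracy 0.97, sensitivity 0.90, and specificity about 0.978.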
The following table shows the results of the model evaluations.
Fig. 8 and 9 are comparison images of the fundus image before and after the division.
The present embodiment also provides a system for any one of the fundus image blood vessel segmentation methods described above, including:
a data acquisition unit for acquiring fundus images,
the pre-processing unit is used for pre-processing the fundus image, and comprises the following specific pre-processing units:
the data expansion module is configured to expand the fundus images of the fundus image dataset and their corresponding image labels into the dataset by flipping and rotation, wherein the fundus image dataset comprises the fundus images and image labels corresponding to the blood vessel positions of each fundus image;
and the brightness normalization module is configured for performing brightness normalization on each fundus image in the fundus image data set.
The feature vector set extraction unit is configured to set a sampling window so that the pixel matrix it contains completely covers the blood vessel part, and to extract gray-level texture features of the fundus image to form a plurality of feature vector sets f; specifically, the unit comprises:
a sampling module configured to set a sampling window to include an N × N pixel matrix, and divide a fundus image in a fundus image data set into K N × N pixel matrices;
and the vector set module is configured to expand the K pixel matrixes into column vectors serving as a feature vector set f of the fundus image where the column vectors are located, and form a plurality of feature vector sets f.
And the random forest and completely random forest constructing unit is configured and used for constructing a random forest and a completely random forest according to the feature vector set f, and comprises:
the decision tree building module is configured to randomly sample, with replacement, from the feature vector sets f using the bootstrap method and to build k decision trees from the sampled sets;
the random forest construction module is configured to let the feature number of the fundus image be H, randomly extract L candidate features (L ≤ H) at each node of each decision tree, select the candidate feature with the minimum Gini index as the splitting node to split the node, stop growing a tree when its splitting node contains only one feature or fewer features than the minimum splitting number, and form the random forest from the k grown decision trees;
the completely random forest construction module is configured to randomly select 1 candidate feature at each node of each decision tree as the splitting node to split the node, stopping growth under the same condition, and to form the completely random forest from the k grown decision trees.
The deep forest constructing unit is configured to construct a deep forest based on the random forest and the completely random forest according to the feature vector sets f, wherein the deep forest comprises two parts, multi-granularity scanning and a cascade forest, and each level of the cascade forest consists of equal numbers of random forests and completely random forests. Specifically, the deep forest constructing unit works as follows:
taking the feature vector sets f as the input of multi-granularity scanning and inputting each feature vector set f into the random forest and the completely random forest respectively, so that each set yields two class-probability vectors, which are concatenated into one vector used as the input of the cascade forest;
all characteristic probability vectors output by multi-granularity scanning are connected in series to be input into a first layer of cascade forests, and each layer of cascade forests consists of random forests and complete random forests in the same number;
in a second layer of the cascade forest, taking the input of the first layer of the cascade forest and the output of the first layer of the cascade forest as the input of the second layer of the cascade forest;
taking the input of the cascade forest of the first layer and the output of the cascade forest of the previous layer as the input of the cascade forest of the current layer in each subsequent layer of cascade forests;
meanwhile, each time a cascade layer is trained, the whole deep forest generated so far is cross-validated on the test set; if the test-set accuracy is lower than that of the previous layer, the deep forest stops growing and no further cascade layers are added; otherwise the number of cascade layers keeps increasing until the test-set accuracy falls below that of the previous layer.
And the data output unit is used for segmenting the image by using the deep forest and outputting the segmentation result.
As an alternative, to improve segmentation accuracy, the sampling window is varied before the segmentation result is output: several deep forests are built, the deep forest with the highest accuracy is selected through model evaluation, and it is used to segment the fundus image data set.
A model evaluation unit configured, for each fundus image blood vessel segmentation model, to compute from the output fundus image vessel map: the TP value, the number of vessel pixels correctly predicted as vessels; the FN value, the number of vessel pixels not correctly identified (predicted as background); the FP value, the number of non-vessel pixels incorrectly identified as vessels; and the TN value, the number of non-vessel pixels correctly identified. Accuracy, sensitivity, and specificity are then calculated from the TP, FN, FP, and TN values: accuracy is (TP + TN)/(TP + TN + FP + FN), the proportion of all pixels classified correctly; sensitivity is TP/(TP + FN), the proportion of vessel points classified correctly; specificity is TN/(TN + FP), equivalently 1 - FP/(FP + TN), the proportion of the background (non-vascular part) classified correctly. Compared with a deep neural network, which needs a large data set to achieve good results, the deep forest is easy to train. The method needs only the fundus image data set: a deep forest is built, each cascade layer is generated with cross-validation, and the fundus image blood vessel segmentation model with relatively high accuracy is selected. The accuracy of the segmentation results can exceed 95%, dependence on data is reduced, and the method is suitable for industrial application.
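The four counts and three metrics can be computed directly from a predicted vessel map and its ground-truth label. The sketch below follows the standard definitions used above; the array and function names are illustrative assumptions.

```python
import numpy as np

def evaluate(pred, truth):
    """pred, truth: binary arrays (1 = vessel, 0 = background)."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    tp = np.sum(pred & truth)      # vessel pixels correctly identified
    fn = np.sum(~pred & truth)     # vessel pixels missed
    fp = np.sum(pred & ~truth)     # background pixels marked as vessel
    tn = np.sum(~pred & ~truth)    # background correctly identified
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # fraction of vessel pixels recovered
    specificity = tn / (tn + fp)   # fraction of background recovered
    return accuracy, sensitivity, specificity
```

Note that `1 - fp / (fp + tn)` gives the same specificity value as `tn / (tn + fp)`, matching the two equivalent forms in the text.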
It should be understood that the modules or units described in the above system for constructing a fundus image blood vessel segmentation model correspond to the respective steps of the above method for constructing a fundus image blood vessel segmentation model. The operations and features described for the method therefore apply equally to the system and the units it comprises, and are not repeated here.
As another aspect, the present embodiment also provides an apparatus adapted to implement the embodiments of the present application. The apparatus includes a computer system with a Central Processing Unit (CPU) that can perform the respective steps of the above method of constructing a fundus image blood vessel segmentation model according to a program stored in a Read-Only Memory (ROM) or loaded from a storage section into a Random Access Memory (RAM). The RAM also stores the programs and data necessary for system operation. The CPU, ROM, and RAM are connected to one another via a bus, to which an input/output (I/O) interface is also connected.
The following components are connected to the I/O interface: an input section including a keyboard, a mouse, and the like; an output section including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section including a hard disk and the like; and a communication section including a network interface card such as a LAN card, a modem, or the like. The communication section performs communication processing via a network such as the internet. The drive is also connected to the I/O interface as needed. A removable medium such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive as necessary, so that a computer program read out therefrom is mounted into the storage section as necessary.
In particular, according to an embodiment of the present disclosure, the processes described in the respective steps described in the above fundus image blood vessel segmentation method may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the above-described method of constructing a fundus image vessel segmentation model. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium.
The flowcharts in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software or hardware. The described units or modules may also be provided in a processor. The names of these units or modules do not in some cases constitute a limitation of the unit or module itself.
As another aspect, the present embodiment also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the system in the foregoing embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer-readable storage medium stores one or more programs for use by one or more processors in performing the flow charts of the fundus image blood vessel segmentation methods described in the present application.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. A person skilled in the art will appreciate that the scope of the invention referred to in the present application is not limited to embodiments with the specific combination of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, embodiments in which the above features are replaced by technical features with similar functions disclosed in (but not limited to) this application.
Claims (8)
1. A fundus image blood vessel segmentation method based on a deep forest is characterized by comprising the following steps:
s01, acquiring a fundus image dataset; the fundus image data set comprises a fundus image and an image label corresponding to the position of a blood vessel of the fundus image;
s02, preprocessing the fundus image data set;
s03, setting a sampling window, enabling a pixel matrix contained in the sampling window to completely cover a blood vessel part, extracting gray texture features of the fundus image data set, and forming a plurality of feature vector sets f;
s04, constructing a random forest and a completely random forest according to the feature vector set f;
s05, constructing a deep forest based on the random forest and the completely random forest according to the feature vector set f, wherein the deep forest comprises multi-granularity scanning and a cascade forest, and each layer of the cascade forest comprises the same number of random forests and completely random forests;
s06, segmenting the fundus image data set by using the deep forest;
and S07, outputting the segmentation result.
2. A fundus image blood vessel segmentation method based on a deep forest as claimed in claim 1, wherein, before the segmentation result is output, steps S03-S06 are repeated several times with different sampling windows to create several deep forests; the deep forest with the highest accuracy is selected by model evaluation and used to segment the fundus image data set, and the segmentation result is output.
3. A deep forest based fundus image blood vessel segmentation method according to claim 1 or 2, wherein in step S02, preprocessing the fundus image data set comprises:
s201, expanding the data set by transforming the fundus images of the fundus image data set together with their corresponding image labels, wherein the transformation comprises flipping and rotation, and the same parameters are used when transforming a fundus image and its image label;
s202, brightness normalization is carried out on each fundus image in the fundus image data set.
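A possible rendering of steps S201-S202, under the assumptions that images are NumPy arrays and that rotations are restricted to multiples of 90 degrees so that image and label stay aligned without interpolation; the function names and the zero-mean, unit-variance form of brightness normalization are illustrative, not dictated by the claims.

```python
import numpy as np

def augment(image, label):
    """Apply the same flips/rotations to a fundus image and its vessel label."""
    out = []
    for k in range(4):                          # 0/90/180/270-degree rotations
        img_r, lab_r = np.rot90(image, k), np.rot90(label, k)
        out.append((img_r, lab_r))
        out.append((np.fliplr(img_r), np.fliplr(lab_r)))   # mirrored copy
    return out                                  # 8 image/label pairs per input

def normalize_brightness(image):
    """Scale pixel intensities to zero mean and unit variance."""
    image = np.asarray(image, dtype=float)
    return (image - image.mean()) / (image.std() + 1e-8)
```

Applying identical transform parameters to image and label is the essential point: any mismatch would silently corrupt the supervision signal.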
4. A fundus image blood vessel segmentation method based on deep forest according to claim 3, characterized in that in step S03, a sampling window is set so that a pixel matrix included in the sampling window can completely cover a blood vessel part, and gray texture features of a fundus image data set are extracted to form a plurality of feature vector sets f, specifically comprising:
s301, setting the sampling window to comprise an N × N pixel matrix, and dividing a fundus image in the fundus image data set into K N × N pixel matrices, wherein N and K are positive integers;
s302, unfolding the K pixel matrices into column vectors serving as the feature vector set f of the fundus image to which they belong, thereby forming a plurality of feature vector sets f.
5. A fundus image blood vessel segmentation method based on deep forest according to claim 4 is characterized in that in step S04, constructing a random forest and a completely random forest according to the feature vector set f specifically comprises:
s401, randomly sampling the feature vector set f by the bootstrap method, and constructing M decision trees;
s402, letting the number of features of the fundus image be H, randomly extracting L candidate features (L ≤ H) at each node of each decision tree, and selecting the candidate feature with the minimum Gini index to split the node; a tree stops growing when a splitting node contains only one feature or the number of features in the splitting node is less than the minimum splitting number, which is set by the user; the M grown decision trees form the random forest;
and S403, randomly selecting one candidate feature at each node of each decision tree to split the node, stopping growth under the same conditions as in step S402; the M resulting decision trees form the completely random forest.
6. A fundus image blood vessel segmentation method based on a deep forest according to claim 5, wherein in step S05, constructing the deep forest based on the random forest and the completely random forest according to the feature vector set f specifically comprises:
s501, taking a feature vector set f as input of multi-granularity scanning, namely respectively inputting the feature vector set f into the random forest and the completely random forest, and outputting two feature probability vectors by each feature vector set f;
s502, concatenating all feature probability vectors output by the multi-granularity scanning and inputting them into the first layer of the cascade forest, wherein each layer of the cascade forest consists of the same number of random forests and completely random forests;
s503, inputting the input of the first layer of cascade forest and the output of the first layer of cascade forest into a second layer of cascade forest after being connected in series;
s504, in each subsequent layer of the cascade forest, taking the input of the first layer and the output of the previous layer as the input of the current layer;
performing cross-validation on the whole generated deep forest using the test set each time a layer of the cascade forest is added; if the test-set accuracy is lower than that of the previous layer, the deep forest stops growing and no further layers are added; otherwise, layers continue to be added until the test-set accuracy falls below that of the previous layer.
7. A fundus image blood vessel segmentation method based on deep forest according to claim 2, characterized in that the model evaluation specifically comprises:
outputting a fundus image vessel map for each fundus image blood vessel segmentation model, and calculating the TP value: the number of positive-class (vessel) pixels correctly predicted as positive;
calculating the FN value: the number of vessel objects not correctly identified;
calculating the FP value: the number of non-vessel objects incorrectly identified as vessels;
calculating the TN value: the number of correctly identified objects that are non-vascular;
and calculating accuracy, sensitivity, and specificity based on the TP, FN, FP, and TN values.
8. A fundus image blood vessel segmentation system based on a deep forest is characterized by comprising:
a data acquisition unit for acquiring fundus images,
the preprocessing unit is used for preprocessing the fundus image;
the characteristic vector set extraction unit is used for setting a sampling window to enable a pixel matrix contained in the sampling window to completely cover a blood vessel part, extracting gray texture characteristics of the fundus image and forming a plurality of characteristic vector sets f;
a random forest and complete random forest constructing unit, which is used for constructing a random forest and a complete random forest according to the feature vector set f;
the deep forest constructing unit is used for constructing a deep forest based on the random forest and the completely random forest according to the feature vector set f, wherein the deep forest comprises two parts, multi-granularity scanning and a cascade forest, and each level of the cascade forest consists of the same number of random forests and completely random forests;
and the data output unit is used for segmenting the image by using the deep forest and outputting the segmentation result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010378375.7A CN111563890A (en) | 2020-05-07 | 2020-05-07 | Fundus image blood vessel segmentation method and system based on deep forest |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111563890A true CN111563890A (en) | 2020-08-21 |
Family
ID=72067918
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010378375.7A Pending CN111563890A (en) | 2020-05-07 | 2020-05-07 | Fundus image blood vessel segmentation method and system based on deep forest |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111563890A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102982542A (en) * | 2012-11-14 | 2013-03-20 | 天津工业大学 | Fundus image vascular segmentation method based on phase congruency |
CN106408562A (en) * | 2016-09-22 | 2017-02-15 | 华南理工大学 | Fundus image retinal vessel segmentation method and system based on deep learning |
CN106530283A (en) * | 2016-10-20 | 2017-03-22 | 北京工业大学 | SVM (support vector machine)-based medical image blood vessel recognition method |
CN108319855A (en) * | 2018-02-08 | 2018-07-24 | 中国人民解放军陆军炮兵防空兵学院郑州校区 | A kind of malicious code sorting technique based on depth forest |
CN109003279A (en) * | 2018-07-06 | 2018-12-14 | 东北大学 | Fundus retina blood vessel segmentation method and system based on K-Means clustering labeling and naive Bayes model |
CN109087302A (en) * | 2018-08-06 | 2018-12-25 | 北京大恒普信医疗技术有限公司 | A kind of eye fundus image blood vessel segmentation method and apparatus |
CN110189295A (en) * | 2019-04-16 | 2019-08-30 | 浙江工业大学 | Eye ground blood vessel segmentation method based on random forest and center line |
CN110189327A (en) * | 2019-04-15 | 2019-08-30 | 浙江工业大学 | Eye ground blood vessel segmentation method based on structuring random forest encoder |
Worldwide applications (1): filed 2020-05-07 in CN as CN202010378375.7A, status Pending.
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102982542A (en) * | 2012-11-14 | 2013-03-20 | 天津工业大学 | Fundus image vascular segmentation method based on phase congruency |
CN106408562A (en) * | 2016-09-22 | 2017-02-15 | 华南理工大学 | Fundus image retinal vessel segmentation method and system based on deep learning |
CN106530283A (en) * | 2016-10-20 | 2017-03-22 | 北京工业大学 | SVM (support vector machine)-based medical image blood vessel recognition method |
CN108319855A (en) * | 2018-02-08 | 2018-07-24 | 中国人民解放军陆军炮兵防空兵学院郑州校区 | A kind of malicious code sorting technique based on depth forest |
CN109344618A (en) * | 2018-02-08 | 2019-02-15 | 中国人民解放军陆军炮兵防空兵学院郑州校区 | A kind of malicious code classification method based on depth forest |
CN109003279A (en) * | 2018-07-06 | 2018-12-14 | 东北大学 | Fundus retina blood vessel segmentation method and system based on K-Means clustering labeling and naive Bayes model |
CN109087302A (en) * | 2018-08-06 | 2018-12-25 | 北京大恒普信医疗技术有限公司 | A kind of eye fundus image blood vessel segmentation method and apparatus |
CN110189327A (en) * | 2019-04-15 | 2019-08-30 | 浙江工业大学 | Eye ground blood vessel segmentation method based on structuring random forest encoder |
CN110189295A (en) * | 2019-04-16 | 2019-08-30 | 浙江工业大学 | Eye ground blood vessel segmentation method based on random forest and center line |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Al-Haija et al. | Breast cancer diagnosis in histopathological images using ResNet-50 convolutional neural network | |
EP4145353A1 (en) | Neural network construction method and apparatus | |
CN112365171B (en) | Knowledge graph-based risk prediction method, device, equipment and storage medium | |
WO2021115084A1 (en) | Structural magnetic resonance image-based brain age deep learning prediction system | |
CN111242933B (en) | Retinal image artery and vein classification device, apparatus, and storage medium | |
RU2689818C1 (en) | Method of interpreting artificial neural networks | |
CN112639833A (en) | Adaptable neural network | |
CN109948575B (en) | Eyeball area segmentation method in ultrasonic image | |
Bagheri et al. | Deep neural network based polyp segmentation in colonoscopy images using a combination of color spaces | |
CN114239861A (en) | Model compression method and system based on multi-teacher combined guidance quantification | |
CN112330684A (en) | Object segmentation method and device, computer equipment and storage medium | |
CN111401156A (en) | Image identification method based on Gabor convolution neural network | |
CN109034218B (en) | Model training method, device, equipment and storage medium | |
CN116664930A (en) | Personalized federal learning image classification method and system based on self-supervision contrast learning | |
CN111694954B (en) | Image classification method and device and electronic equipment | |
CN114972759A (en) | Remote sensing image semantic segmentation method based on hierarchical contour cost function | |
AU2022392233A1 (en) | Method and system for analysing medical images to generate a medical report | |
CN111612739B (en) | Deep learning-based cerebral infarction classification method | |
CN111563890A (en) | Fundus image blood vessel segmentation method and system based on deep forest | |
CN116805162A (en) | Transformer model training method based on self-supervision learning | |
CN113177602B (en) | Image classification method, device, electronic equipment and storage medium | |
CN115661185A (en) | Fundus image blood vessel segmentation method and system | |
Vasylieva et al. | Automation Methods for Processing Medical Images Based on the Application of Grids. | |
Montagner et al. | NILC: a two level learning algorithm with operator selection | |
Tuan et al. | Semisupervised fuzzy clustering methods for X-ray image segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200821 |