CN109523524B - Eye fundus image hard exudation detection method based on ensemble learning - Google Patents
- Publication number: CN109523524B (application CN201811317900.3A)
- Authority: CN (China)
- Prior art keywords: image, candidate area, fundus image, detection method, candidate
- Legal status: Active (an assumption, not a legal conclusion; no legal analysis has been performed)
Classifications
- G06T7/0012: Image analysis; inspection of images; biomedical image inspection
- G06N3/045: Neural networks; architecture; combinations of networks
- G06T7/136: Segmentation; edge detection involving thresholding
- G06T7/187: Segmentation; edge detection involving region growing, region merging, or connected component labelling
- G06T2207/10004: Image acquisition modality; still image; photographic image
- G06T2207/20172: Special algorithmic details; image enhancement details
- G06T2207/30041: Subject of image; biomedical image processing; eye; retina; ophthalmic
Abstract
The invention discloses a fundus image hard exudate detection method based on ensemble learning, belonging to the technical field of image processing. The method first performs contrast enhancement, filtering, and morphological reconstruction on an input fundus image; it then uses a trained convolutional neural network to extract deep features of each sample and extracts traditional features of the same sample; the deep and traditional features are cascaded and reduced in dimension by principal component analysis; finally, the reduced features and their labels are fed into a trained random forest classifier for classification, thereby segmenting the hard exudate regions of the fundus image. This addresses the large computational cost, low detection accuracy, and incomplete detection of existing hard exudate detection methods.
Description
Technical Field
The invention belongs to the technical field of computer image processing and relates to a fundus image hard exudate detection method based on ensemble learning.
Background
At present, the positions and number of hard exudates are usually found by manually inspecting a photographed retinal fundus image. Manually locating and counting exudates, however, is a labor-intensive and time-consuming task that largely requires a specialist ophthalmologist, which makes it impractical in remote, under-served areas. Detecting hard exudates in retinal fundus images by computer image processing, so as to direct a physician's attention to the relevant regions and assist observation, is therefore work of practical significance.
Because fundus images contain blood vessels, the optic disc, optic nerve fibers, and other structures whose brightness, color, and contrast resemble those of hard exudates, detection is easily disturbed and a computer may misidentify them, which makes automatic segmentation of hard exudates a challenging task; in recent years, researchers at home and abroad have begun to focus on hard exudate detection and segmentation.
Hard exudate detection methods based on computer image processing mainly fall into four categories: threshold segmentation, region growing, morphology, and classifiers. The threshold mixture model proposed by Sanchez et al. processes the image histogram to segment hard exudates dynamically, but the many blood vessels and the optic disc easily interfere with the detection. Sinthanayothin et al. proposed a recursive region-growing method for automatic detection, but it is computationally expensive and slow. Walter et al. remove the optic disc morphologically, find candidate hard exudate contours from local pixel-value variance, and recover the exudate regions by morphological reconstruction. Similarly, Sopharak et al. proposed a morphology-based segmentation method that first removes vessels and the optic disc with a morphological closing reconstruction operator and then detects exudates from the per-pixel standard deviation and edge-contour statistics on the H and I channels; however, because hard exudates vary irregularly in size and brightness, suitable parameters are hard to choose for morphology-based methods, which often segment only part of the exudates along with non-exudate targets, so the segmentation precision, and hence the detection accuracy, is low. Classifier-based methods extract features from each pixel or candidate connected region and classify it with a support vector machine, random forest, neural network, or similar model to decide whether a hard exudate is present; Giancardo et al. proposed an image-level classification method that extracts traditional features such as color and area from images with and without exudates and classifies input images with a support vector machine, but this method still detects hard exudates incompletely.
Therefore, current hard exudate detection methods based on computer image processing suffer from large computational cost, low detection accuracy, and incomplete detection.
Disclosure of Invention
The invention aims to provide a fundus image hard exudate detection method based on ensemble learning that solves the large computational cost, low detection accuracy, and incomplete detection of existing hard exudate detection methods.
The technical scheme adopted by the invention is as follows:
A fundus image hard exudate detection method based on ensemble learning comprises the following steps:
Step 1: inputting a fundus image and performing contrast enhancement to obtain an enhanced image;
Step 2: extracting the green channel of the enhanced image to obtain a green-channel image, and performing median filtering and a morphological opening on it to obtain a background estimation image;
Step 3: performing morphological reconstruction on the background estimation image to obtain a morphologically reconstructed image, and subtracting it from the green-channel image of step 2 to obtain a normalized background image;
Step 4: performing dynamic threshold segmentation on the normalized background image until the number of connected components meets the preset limit, and removing small-area components to obtain a candidate-region template map;
Step 5: masking the enhanced image of step 1 with the candidate-region template map to obtain candidate-region sample images, forward-propagating them through a convolutional neural network and taking the fully connected layer's vector as the deep feature, while also extracting traditional features of the corresponding regions;
Step 6: simply cascading the deep and traditional features of step 5 and reducing dimensionality with principal component analysis to obtain the final feature vector of each candidate region;
Step 7: feeding the feature vectors into a random forest for judgment and classifying the candidate regions to obtain the final exudate label map.
Further, the specific steps of step 1 are:
Step 1.1: inputting a color fundus image I and applying contrast enhancement to each of its three channels using the enhancement formula
I_i = α·I_i + τ·(Gaussian * I_i) + γ
where I denotes the fundus image, i indexes its R, G, B channels, Gaussian denotes a Gaussian filter, * denotes convolution, and α, τ, and γ are constants;
Step 1.3: meanwhile, performing blood-vessel segmentation on the fundus image I to obtain a vessel segmentation map.
Further, the specific steps of step 2 are:
Step 2.2: filtering the green-channel image with a median filter of size 50-70 pixels to obtain a filtered image;
Step 2.3: performing a morphological opening on the filtered image with a disc structuring element of 10-20 pixels to obtain the background estimation image.
Further, the specific steps of step 3 are:
Step 3.1: taking the background estimation image as the marker and the green-channel image as the mask, performing morphological reconstruction of the background estimation image to obtain a morphologically reconstructed image;
Step 3.2: subtracting the morphologically reconstructed image from the green-channel image of step 2.1 to obtain the normalized background image.
Further, the specific steps of step 4 are:
Step 4.1: computing the maximum t_max and minimum t_min of the pixel values of the normalized background image;
Step 4.2: setting the threshold range from t_max to t_min and binarizing the normalized background image of step 4.1 from high to low until the connected-component count conn_num of the binarized image meets the preset limit K; if the whole range is traversed without finding a binarized image satisfying the connected-component count, binarizing the normalized background image directly with a manually given threshold t_l;
Step 4.3: deleting from the binarized image every connected component whose pixel area is smaller than a set value px, where 1 ≤ px ≤ 10, to obtain the candidate-region template map mask_cand.
Further, the specific steps of step 5 are:
Step 5.1: masking the enhanced image of step 1.2 with the candidate-region template map mask_cand of step 4.3 to obtain the region images of the enhanced image corresponding to the candidate regions in mask_cand, i.e., the candidate-region sample images;
Step 5.2: performing connected-component analysis on the candidate-region sample images; for each component, if the major axis of its fitted ellipse is greater than a preset value L, cropping an L × L rectangle centered on the component's centroid, and otherwise cropping an M × M rectangle centered on the centroid;
Step 5.3: normalizing each L × L rectangle of step 5.2 to M × M to obtain M × M candidate-region samples; in the training stage of the convolutional neural network, inputting the M × M samples and their labels into the network to learn the model parameters; in the testing stage, forward-propagating each M × M sample through the trained model and extracting the fully connected layer's vector as deep feature feature1;
Step 5.4: extracting traditional features from the M × M candidate-region samples of step 5.3, the traditional features comprising pixel-value features computed, for each candidate connected component, on the R, G, B channels of the fundus image of step 1.1, the three channels of the enhanced image of step 1.2, and the vessel segmentation map of step 1.3, together with shape features computed on the binarized image of each candidate connected component from step 4.2; these traditional features are denoted feature2.
Still further, the pixel-value features of step 5.4 comprise the pixel mean, pixel-value sum, standard deviation, contrast, and minimum; the shape features comprise area, perimeter, circularity, eccentricity, compactness, number of non-zero pixels in the DoG response, and Sobel gradient values.
Further, the specific steps of step 6 are:
Step 6.1: connecting the traditional feature2 of step 5.4 after the deep feature1 of step 5.3 by simple cascading to obtain the cascaded feature vector feature3;
Step 6.2: applying principal component analysis to the cascaded feature vector feature3 to reduce dimensionality and remove redundant features, obtaining the final feature vector Feature of each candidate region.
Further, the specific steps of step 7 are:
Step 7.1: in the training stage of the random forest, feeding the feature vectors Feature and their labels into the random forest, which consists of T decision trees; at each node, m features of Feature are randomly selected for the split, and each decision tree yields a probability judgment for each candidate region;
Step 7.2: in the testing stage of the random forest, classifying the feature vectors with the random forest parameters trained in step 7.1 to obtain each decision tree's probability judgment for each candidate region;
Step 7.3: averaging the judgments of step 7.2 and applying the majority rule to obtain the candidate regions' probability map and the exudate segmentation binary map, and thereby the final exudate label map.
In summary, owing to the adoption of the above technical scheme, the invention has the following beneficial effects:
1. The method performs contrast enhancement, filtering, and morphological reconstruction on the input fundus image; extracts deep features of each sample with a trained convolutional neural network along with traditional features; cascades the two and reduces dimensionality by principal component analysis; and classifies the reduced features and labels with a trained random forest classifier, thereby segmenting the hard exudate regions of the fundus image and addressing the large computational cost, low detection accuracy, and incomplete detection of existing hard exudate detection methods.
2. Compared with traditional methods, the method segments better: traditional methods can detect large exudates but easily miss small ones, whereas this method segments both small and large exudate regions well.
3. Cascading the traditional and deep features effectively represents exudates at the image level, and the resulting representation is more discriminative and distinctive than other representations.
4. Because training the neural network and the random forest requires many samples, the method first extracts exudate candidate regions and then classifies them; this detect-then-classify strategy copes well with the scarcity of exudate samples, and using a shallow network for deep feature extraction likewise avoids the insufficient-sample problem.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered limiting in scope; those skilled in the art can obtain other relevant drawings from them without inventive effort, wherein:
FIG. 1 is a flow chart of the fundus image hard exudate detection method based on ensemble learning;
FIG. 2 is a fundus image of step 1.1 in a first embodiment of the present invention;
FIG. 3 is an enhanced image of step 1.2 in one embodiment of the present invention;
FIG. 4 is a morphological reconstructed image of step 3.1 in a first embodiment of the present invention;
FIG. 5 is a template map of the candidate region at step 4.3 according to one embodiment of the present invention;
FIG. 6 is a probability map of step 7.3 in the first embodiment of the present invention;
FIG. 7 is the exudate segmentation binary map at step 7.3 in the first embodiment of the present invention;
FIG. 8 is the exudate label map at step 7.3 in the first embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
A fundus image hard exudate detection method based on ensemble learning solves the large computational cost, low detection accuracy, and incomplete detection of existing hard exudate detection methods;
a fundus image hard exudate detection method based on ensemble learning, as shown in FIG. 1, comprises the following steps:
Step 1: inputting a fundus image and performing contrast enhancement to obtain an enhanced image;
Step 2: extracting the green channel of the enhanced image to obtain a green-channel image, and performing median filtering and a morphological opening on it to obtain a background estimation image;
Step 3: performing morphological reconstruction on the background estimation image to obtain a morphologically reconstructed image, and subtracting it from the green-channel image of step 2 to obtain a normalized background image;
Step 4: performing dynamic threshold segmentation on the normalized background image until the number of connected components meets the preset limit, and removing small-area components to obtain a candidate-region template map;
Step 5: masking the enhanced image of step 1 with the candidate-region template map to obtain candidate-region sample images, forward-propagating them through a convolutional neural network and taking the fully connected layer's vector as the deep feature, while also extracting traditional features of the corresponding regions;
Step 6: simply cascading the deep and traditional features of step 5 and reducing dimensionality with principal component analysis to obtain the final feature vector of each candidate region;
Step 7: feeding the feature vectors into a random forest for judgment and classifying the candidate regions to obtain the final exudate label map.
The invention first performs contrast enhancement, filtering, and morphological reconstruction on the input fundus image; it then extracts deep features of each sample with a trained convolutional neural network and extracts traditional features, cascades the two, and reduces dimensionality by principal component analysis, which effectively represents exudates at the image level with discriminative and distinctive features; finally, the reduced features and their labels are fed into a trained random forest classifier for classification, segmenting the hard exudate regions of the fundus image. From the standpoint of computer image processing, the invention detects the hard exudate regions in the fundus image to a large extent while rarely segmenting non-exudate regions such as the optic disc, achieves high specificity and sensitivity, reduces computation compared with traditional methods, and improves detection accuracy and completeness.
The features and properties of the present invention are described in further detail below with reference to examples.
Example one
The invention provides a fundus image hard exudate detection method based on ensemble learning, comprising the following steps:
Step 1: inputting a fundus image and performing contrast enhancement to obtain an enhanced image;
Step 1.1: inputting a color fundus image I of size 2544 × 1696 pixels, as shown in FIG. 2, and applying contrast enhancement to each of its three channels using the enhancement formula
I_i = α·I_i + τ·(Gaussian * I_i) + γ
where I denotes the fundus image, i indexes its R, G, B channels, and Gaussian denotes a Gaussian filter whose size is one tenth of the input image width, i.e., 169 pixels; α, τ, and γ are constants, with α = 4, τ = -4, and γ = 128;
Step 1.3: meanwhile, performing blood-vessel segmentation on the fundus image I to obtain a vessel segmentation map;
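Step 1.1's enhancement can be sketched as below, assuming `numpy` and `scipy`; the conversion from the patent's kernel width (one tenth of the image width) to a Gaussian sigma, and the clipping back to 8-bit range, are assumptions not stated in the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_contrast(image, alpha=4.0, tau=-4.0, gamma=128.0, sigma=None):
    """Channel-wise enhancement I_i = alpha*I_i + tau*(Gaussian * I_i) + gamma.

    Defaults follow the embodiment (alpha=4, tau=-4, gamma=128). `sigma` is
    derived from the kernel width (one tenth of the image width) by a rough
    width/6 rule; this conversion is an assumption.
    """
    img = image.astype(np.float64)
    if sigma is None:
        sigma = img.shape[1] / 10.0 / 6.0
    out = np.empty_like(img)
    for c in range(img.shape[2]):  # enhance R, G, B independently
        blurred = gaussian_filter(img[..., c], sigma=sigma)
        out[..., c] = alpha * img[..., c] + tau * blurred + gamma
    return np.clip(out, 0, 255).astype(np.uint8)
```

With alpha equal to minus tau, a locally flat region maps to the neutral value gamma = 128, which is what makes bright exudates stand out against the equalized background.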
Step 2: extracting the green channel of the enhanced image to obtain a green-channel image, and performing median filtering and a morphological opening on it to obtain a background estimation image;
Step 2.2: filtering the green-channel image with a median filter of size 50-70 pixels, here 55 pixels, to obtain a filtered image;
Step 2.3: performing a morphological opening on the filtered image with a disc structuring element of 10-20 pixels, here with a radius of 30 pixels, to obtain the background estimation image;
Step 3: performing morphological reconstruction on the background estimation image to obtain a morphologically reconstructed image, and subtracting it from the green-channel image of step 2 to obtain a normalized background image;
Step 3.1: taking the background estimation image as the marker and the green-channel image as the mask, performing morphological reconstruction of the background estimation image to obtain the morphologically reconstructed image shown in FIG. 4;
Step 3.2: subtracting the morphologically reconstructed image from the green-channel image of step 2.1 to obtain the normalized background image;
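Step 3's grayscale reconstruction by dilation (background estimate as marker, green channel as mask) can be sketched with `skimage`; the `np.minimum` clamp is an added safeguard because the routine requires marker ≤ mask, a condition the patent does not discuss.

```python
import numpy as np
from skimage.morphology import reconstruction

def normalize_background(green, background):
    """Reconstruct the background estimate under the green channel, subtract.

    Returns (normalized background image, morphologically reconstructed image).
    """
    green = green.astype(np.float64)
    marker = np.minimum(background.astype(np.float64), green)  # keep marker <= mask
    recon = reconstruction(marker, green, method='dilation')
    return green - recon, recon
```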
Step 4: performing dynamic threshold segmentation on the normalized background image until the number of connected components meets the preset limit, and removing small-area components to obtain a candidate-region template map;
Step 4.1: computing the maximum t_max and minimum t_min of the pixel values of the normalized background image;
Step 4.2: setting the threshold range from t_max to t_min; first setting the threshold to t_u = 0.7 and binarizing the normalized background image of step 4.1 from high to low until the connected-component count conn_num of the binarized image meets the preset limit K = 1500; if the whole range is traversed without finding a binarized image satisfying the connected-component count, binarizing the normalized background image directly with a manually given threshold t_l = 0.05;
Step 4.3: deleting from the binarized image every connected component whose pixel area is smaller than a set value px, where 1 ≤ px ≤ 10 and here px = 5, to obtain the candidate-region template map mask_cand shown in FIG. 5;
Step 5: masking the enhanced image of step 1 with the candidate-region template map to obtain candidate-region sample images, forward-propagating them through a convolutional neural network and taking the fully connected layer's vector as the deep feature, while also extracting traditional features of the corresponding regions;
Step 5.1: masking the enhanced image of step 1.2 with the candidate-region template map mask_cand of step 4.3 to obtain the region images of the enhanced image corresponding to the candidate regions in mask_cand, i.e., the candidate-region sample images;
Step 5.2: performing connected-component analysis on the candidate-region sample images; if the major axis of a component's fitted ellipse is greater than the preset value L, cropping an L × L rectangle centered on the component's centroid, and otherwise cropping an M × M rectangle centered on the centroid, where M = 32 pixels;
Step 5.3: normalizing each L × L rectangle of step 5.2 to M × M to obtain M × M candidate-region samples; in the training stage of the convolutional neural network, inputting the M × M samples and their labels into the network to learn the model parameters; in the testing stage, forward-propagating each M × M sample through the trained model and extracting the fully connected layer's vector as deep feature feature1.
The parameters of the convolutional neural network are shown in Table 1;
TABLE 1 (contents not reproduced)
In the table, the dropout parameter is 0.1 and N is 128;
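Because Table 1's contents are unavailable, the network's real architecture is unknown; the toy numpy forward pass below only illustrates the mechanism of step 5.3: extract the fully connected layer's activation (128-dimensional, matching N = 128) as deep feature feature1. The filter count, 3 × 3 kernels, pooling, and random weights are placeholders, not the patent's model.

```python
import numpy as np

rng = np.random.default_rng(0)  # placeholder weights; a real model would be trained

def conv2d(x, w):
    """Valid 2-D cross-correlation of a single-channel patch with one kernel."""
    kh, kw = w.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def depth_feature(patch, n_filters=8, fc_dim=128):
    """Forward pass of a toy shallow CNN; the FC activation is 'feature1'."""
    filters = rng.standard_normal((n_filters, 3, 3)) * 0.1
    maps = [np.maximum(conv2d(patch, f), 0.0) for f in filters]  # conv + ReLU
    pooled = [m[::2, ::2] for m in maps]                         # crude 2x downsampling
    flat = np.concatenate([p.ravel() for p in pooled])
    w_fc = rng.standard_normal((fc_dim, flat.size)) * 0.01
    return np.maximum(w_fc @ flat, 0.0)                          # FC + ReLU
```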
Step 5.4: extracting traditional features from the M × M candidate-region samples of step 5.3. The traditional features comprise pixel-value features computed, for each candidate connected component, on the R, G, B channels of the fundus image of step 1.1, the three channels of the enhanced image of step 1.2, and the vessel segmentation map of step 1.3, the pixel-value features being the pixel mean, pixel-value sum, standard deviation, contrast, and minimum, together with shape features computed on the binarized image of each candidate connected component from step 4.2, namely area, perimeter, circularity, eccentricity, compactness, number of non-zero pixels in the DoG response, and Sobel gradient values; these traditional features are denoted feature2 and form a 71-dimensional traditional feature vector in this embodiment;
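The traditional features of step 5.4 can be illustrated with `skimage.measure.regionprops`; the 'contrast' definition below (max minus min inside the region) is one plausible reading rather than a confirmed one, and compactness, DoG counts, and Sobel values are omitted for brevity.

```python
import numpy as np
from skimage.measure import label, regionprops

def shape_features(region_mask):
    """Shape features of a single binary candidate region."""
    props = regionprops(label(region_mask.astype(np.uint8)))[0]
    area, perim = props.area, props.perimeter
    return {
        "area": area,
        "perimeter": perim,
        "circularity": 4.0 * np.pi * area / perim ** 2 if perim else 0.0,
        "eccentricity": props.eccentricity,
    }

def intensity_features(channel, region_mask):
    """Pixel-value features of one channel restricted to the region."""
    vals = channel[region_mask]
    return {
        "mean": vals.mean(),
        "sum": vals.sum(),
        "std": vals.std(),
        "min": vals.min(),
        "contrast": vals.max() - vals.min(),  # assumed definition of 'contrast'
    }
```

Running the intensity function on each of the listed channels and concatenating the dictionaries' values per region yields a fixed-length traditional feature vector.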
Step 6: simply cascading the deep and traditional features of step 5 and reducing dimensionality with principal component analysis to obtain the final feature vector of each candidate region;
Step 6.1: connecting the traditional feature2 of step 5.4 after the deep feature1 of step 5.3 by simple cascading to obtain the cascaded feature vector feature3;
Step 6.2: applying principal component analysis (PCA) to the cascaded feature vector feature3 to reduce dimensionality and remove redundant features, obtaining the final feature vector Feature of each candidate region; in this embodiment, PCA retains the first 80% of the features;
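Steps 6.1 and 6.2 in sketch form with scikit-learn; passing a float to `PCA(n_components=...)` keeps the smallest number of components explaining that fraction of the variance, which is one reading of retaining 'the first 80% features'.

```python
import numpy as np
from sklearn.decomposition import PCA

def fuse_features(deep, traditional, variance_kept=0.80):
    """Cascade feature1 and feature2 per candidate (feature3), then PCA-reduce.

    `deep` and `traditional` are (n_candidates, dim) arrays; returns the
    reduced feature matrix and the fitted PCA model.
    """
    cascaded = np.hstack([deep, traditional])   # simple cascade: feature3
    pca = PCA(n_components=variance_kept)
    return pca.fit_transform(cascaded), pca
```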
and 7: sending the feature vectors into a random forest for judgment, and classifying the candidate regions to obtain a final exudation marking map;
step 7.1: in the training stage of the random forest, Feature vectors Feature and labels are sent into the random forest for training, the random forest is composed of T decision trees, T is {100, 120,.., 200}, the best value of the result is taken, m features in the Feature vectors Feature are randomly selected by each node for classification during decision making, and m is 14, and the probability judgment result of each candidate area by each decision tree is obtained;
step 7.2: in the testing stage of the random forest, classifying Feature vectors by using the random forest parameters trained in the step 7.1 to obtain a probability judgment result of each decision tree for each candidate region;
step 7.3: according to the judgment results of the step 7.2, an average value is calculated and the minority-obeys-majority rule is applied to obtain the probability map of the candidate areas shown in fig. 6 and the exudation segmentation binary map shown in fig. 7, thereby obtaining the final exudation marker map shown in fig. 8.
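Steps 7.1-7.3 can be sketched with scikit-learn's random forest, whose `predict_proba` already averages the per-tree votes, matching the averaging-plus-majority rule of the step 7.3. The toy data and the fixed choice T = 200 are assumptions (the embodiment selects the best T in {100, 120, ..., 200}).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))          # toy candidate-area feature vectors
y = (X[:, 0] > 0).astype(int)           # toy exudate / non-exudate labels

# T = 200 trees, m = 14 features tried at each split (per the embodiment)
rf = RandomForestClassifier(n_estimators=200, max_features=14, random_state=0)
rf.fit(X[:200], y[:200])                # step 7.1: training stage

proba = rf.predict_proba(X[200:])[:, 1] # steps 7.2-7.3: per-tree votes averaged
pred = (proba >= 0.5).astype(int)       # majority rule -> exudate marker
```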
Using computer image processing technology, the invention detects hard exudation areas in fundus images as completely as possible while excluding non-hard-exudation areas such as the optic disc as far as possible; it achieves high specificity and sensitivity, reduces the amount of calculation compared with traditional methods, and improves both detection accuracy and detection completeness.
It should be noted that, since the drawings of the specification may not be colored or modified, some portions in which the distinctions of the present invention are obvious are difficult to display; color pictures can be provided if necessary.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents and improvements made by those skilled in the art within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (9)
1. A fundus image hard exudation detection method based on ensemble learning is characterized by comprising the following steps:
step 1: inputting a fundus image, and performing contrast enhancement to obtain an enhanced image;
step 2: extracting a green channel of the enhanced image to obtain a green channel image, and performing median filtering and opening operation on the green channel image to obtain a background estimation image;
step 3: performing morphological reconstruction on the background estimation image to obtain a morphological reconstruction image, and subtracting the morphological reconstruction image from the green channel image of the step 2 to obtain a normalized background image;
step 4: performing dynamic threshold segmentation on the normalized background image until the number of connected domains is larger than the set number of connected domains, and removing the connected domains with small areas to obtain a candidate area template image;
step 5: masking the enhanced image of the step 1 with the candidate area template image to obtain a candidate area sample image, sending the candidate area sample image into a convolutional neural network for forward propagation, taking the vector of a fully connected layer as the depth feature, and simultaneously extracting the traditional features of the corresponding areas in the candidate area sample image;
step 6: simply cascading the depth features and the traditional features in the step 5, and reducing the dimension by using a principal component analysis method to obtain final feature vectors of each candidate region;
step 7: sending the feature vectors into a random forest for judgment, and classifying the candidate areas to obtain a final exudation marker map.
2. The ensemble learning based fundus image hard exudation detection method according to claim 1, wherein the specific steps of the step 1 are as follows:
step 1.1: inputting a color fundus image, and respectively carrying out contrast enhancement on the three channels of the fundus image by using an enhancement formula, wherein the enhancement formula is:

I_i = α·I_i + τ·(Gaussian ∗ I_i) + γ

wherein I represents the fundus image, i indexes the R, G, B three channels of the fundus image, Gaussian represents a Gaussian filter, ∗ denotes convolution, and α, τ and γ are constants;
step 1.2: merging the three enhanced channels into an enhanced image;
step 1.3: meanwhile, the fundus image is subjected to blood vessel segmentation to obtain a blood vessel segmentation image.
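The enhancement formula of the step 1.1 can be sketched per channel with SciPy as follows. The parameter values α = 4, τ = -4, γ = 128 and the Gaussian width σ are illustrative assumptions, not values given in the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_channel(channel, alpha=4.0, tau=-4.0, gamma=128.0, sigma=10.0):
    """I_i = alpha*I_i + tau*(Gaussian * I_i) + gamma for one channel i.
    The default parameter values here are illustrative assumptions."""
    channel = channel.astype(float)
    return alpha * channel + tau * gaussian_filter(channel, sigma) + gamma
```

Applying this to each of the R, G, B channels and merging the results yields the enhanced image of the step 1.2.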
3. The ensemble learning based fundus image hard exudation detection method according to claim 1, wherein the specific steps of the step 2 are:
step 2.1: extracting a green channel of the enhanced image to obtain a green channel image;
step 2.2: filtering the green channel image by adopting a median filter with the size of 50-70 pixels to obtain a filtered image;
step 2.3: opening the filtered image with a disc structural element having an area of 10-20 pixels to obtain a background estimation image.
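Steps 2.2-2.3 can be sketched with SciPy as follows. A square window and a square structuring element are used here for simplicity, whereas the claim specifies a disc structural element; the default sizes fall in the claimed ranges but are otherwise assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter, grey_opening

def estimate_background(green, median_size=61, opening_size=15):
    """Median filtering (50-70 px window per step 2.2) followed by a grey
    opening (step 2.3; square element here as a simplification of the
    claimed disc structural element)."""
    filtered = median_filter(green, size=median_size)
    return grey_opening(filtered, size=(opening_size, opening_size))
```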
4. The ensemble learning based fundus image hard exudation detection method according to claim 1, wherein the specific steps of the step 3 are:
step 3.1: taking the background estimation image as a marker, taking the green channel image as a mask, and performing morphological reconstruction on the background estimation image to obtain a morphological reconstruction image;
step 3.2: subtracting the morphological reconstruction image from the green channel image of the step 2 to obtain a normalized background image.
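The greyscale morphological reconstruction of the step 3.1 can be sketched as iterative geodesic dilation: dilate the marker, clip it under the mask, and repeat until stable. The 3 × 3 flat structuring element and the iteration cap are assumptions.

```python
import numpy as np
from scipy.ndimage import grey_dilation

def reconstruct_by_dilation(marker, mask, max_iter=10000):
    """Greyscale morphological reconstruction of the mask from the marker:
    geodesic dilation iterated to stability (assumes marker <= mask)."""
    prev = marker.astype(float)
    for _ in range(max_iter):
        cur = np.minimum(grey_dilation(prev, size=(3, 3)), mask)
        if np.array_equal(cur, prev):
            break
        prev = cur
    return prev
```

Per the step 3.1 the marker is the background estimation image and the mask is the green channel image; the step 3.2 normalized background is then the green channel image minus this reconstruction.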
5. The ensemble learning based fundus image hard exudation detection method according to claim 2, wherein the specific steps of the step 4 are as follows:
step 4.1: calculating the maximum value t_max and the minimum value t_min of the pixel values of the normalized background image;
step 4.2: setting the range of the threshold value from t_max to t_min, and binarizing the normalized background image of the step 4.1 from high to low until the number of connected domains of the binarized image is less than the preset number K of connected domains, the result being recorded as the binarized image; if the whole range is traversed and no binarized image meeting the connected domain number condition is found, directly binarizing the normalized background image with an artificially given threshold value, the result likewise being recorded as the binarized image.
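The threshold sweep of the step 4.2 can be sketched as follows. The number of sweep steps, the direction of the stopping comparison, and the mid-range fallback threshold are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import label

def dynamic_threshold(norm_bg, K=20, steps=200, fallback_t=None):
    """Sweep the threshold from t_max down to t_min, binarizing until the
    connected-domain count reaches the preset K; otherwise fall back to a
    manually given threshold (mid-range here, an assumption)."""
    t_max, t_min = float(norm_bg.max()), float(norm_bg.min())
    for t in np.linspace(t_max, t_min, steps):
        bw = (norm_bg >= t).astype(np.uint8)
        if label(bw)[1] >= K:          # stopping comparison is an assumption
            return bw
    if fallback_t is None:
        fallback_t = (t_max + t_min) / 2.0
    return (norm_bg >= fallback_t).astype(np.uint8)
```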
6. The ensemble learning based fundus image hard exudation detection method according to claim 5, wherein the specific steps of the step 5 are:
step 5.1: masking the enhanced image of the step 1 with the candidate area template image of the step 4 to obtain the area images of the enhanced image corresponding to the candidate area template image, namely the candidate area sample map;
step 5.2: performing connectivity analysis on the candidate area sample map; if the major axis of the ellipse fitted to a connected domain is greater than a preset value L, a rectangular frame of L × L is determined with the center of the connected domain as the center of the frame, and if the major axis of the ellipse is less than the preset value L, a rectangular frame of M × M is determined with the center of the connected domain as the center of the frame, wherein L represents the preset length of the rectangular frame in pixels and M represents the length of the rectangular frame in pixels;
step 5.3, normalizing the L × L rectangular frame in the step 5.2 into an M × M rectangular frame to obtain an M × M candidate area sample, inputting the M × M candidate area sample and a label corresponding to the M × M candidate area sample into a convolutional neural network for training in a convolutional neural network training stage to obtain a network model parameter, performing forward propagation on the M × M candidate area sample by using the trained model parameter in a convolutional neural network testing stage, and extracting a feature vector of a full connection layer as a depth feature 1;
step 5.4: carrying out traditional feature extraction on the M × M candidate area samples of the step 5.3, wherein the traditional features comprise features based on pixel value size in the R, G, B three channels of the fundus image of the step 1.1, the three channels of the enhanced image of the step 1.2 and the blood vessel segmentation map of the step 1.3 of each candidate area connected domain, and features based on shape in the binarized image of each candidate area connected domain of the step 4.2.
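The forward propagation of the step 5.3 — running a candidate patch through the network and taking a fully connected layer's output as the depth feature — can be sketched with a toy NumPy network (conv → ReLU → 2 × 2 max-pool → fully connected). The architecture and any weights are purely illustrative assumptions, not the patent's trained convolutional neural network.

```python
import numpy as np
from scipy.ndimage import correlate

def depth_feature(patch, conv_kernels, fc_weights):
    """Toy CNN forward pass; the FC output vector plays the role of the
    depth feature feature1.  Architecture and weights are illustrative."""
    # convolution (one 2-D kernel per output channel) followed by ReLU
    maps = np.stack([np.maximum(correlate(patch, k), 0.0) for k in conv_kernels])
    c, h, w = maps.shape
    # 2x2 max pooling (cropping any odd remainder)
    pooled = maps[:, : h // 2 * 2, : w // 2 * 2]
    pooled = pooled.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))
    return fc_weights @ pooled.reshape(-1)   # this vector is the depth feature
```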
7. The ensemble learning based fundus image hard exudation detection method according to claim 6, wherein the features based on pixel value size in the step 5.4 comprise the pixel average value, the sum of pixel values, the standard deviation, the contrast and the minimum value; the features based on shape comprise the area, the perimeter, the circularity, the eccentricity, the compactness, the number of non-zero pixels in the DOG feature and the Sobel gradient value.
8. The ensemble learning based fundus image hard exudation detection method according to claim 1, wherein the specific steps of the step 6 are:
step 6.1: connecting the traditional features of the step 5 behind the depth features, and simply cascading them to obtain a cascade feature vector;
step 6.2: performing dimension reduction and redundant feature removal on the cascade feature vector by using the principal component analysis method to obtain the final feature vector of each candidate area.
9. The ensemble learning based fundus image hard exudation detection method according to claim 1, wherein the specific steps of the step 7 are:
step 7.1: in the training stage of the random forest, the feature vectors and the labels are sent into the random forest for training, the random forest is composed of T decision trees, m features in the feature vectors are randomly selected by each node for classification during decision making, and the probability judgment result of each candidate area by each decision tree is obtained;
step 7.2: in the testing stage of the random forest, classifying the feature vectors by using the random forest parameters trained in the step 7.1 to obtain the probability judgment result of each decision tree for each candidate region;
step 7.3: according to the judgment results of the step 7.2, calculating an average value and applying the minority-obeys-majority rule to obtain the probability map of the candidate areas and the exudation segmentation binary map, thereby obtaining the final exudation marker map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811317900.3A CN109523524B (en) | 2018-11-07 | 2018-11-07 | Eye fundus image hard exudation detection method based on ensemble learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109523524A CN109523524A (en) | 2019-03-26 |
CN109523524B true CN109523524B (en) | 2020-07-03 |
Family
ID=65774464
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811317900.3A Active CN109523524B (en) | 2018-11-07 | 2018-11-07 | Eye fundus image hard exudation detection method based on ensemble learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109523524B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109472781B (en) * | 2018-10-29 | 2022-02-11 | 电子科技大学 | Diabetic retinopathy detection system based on serial structure segmentation |
CN110189320B (en) * | 2019-05-31 | 2023-04-07 | 中南大学 | Retina blood vessel segmentation method based on middle layer block space structure |
CN110288616B (en) * | 2019-07-01 | 2022-12-09 | 电子科技大学 | Method for segmenting hard exudation in fundus image based on fractal and RPCA |
CN110298849A (en) * | 2019-07-02 | 2019-10-01 | 电子科技大学 | Hard exudate dividing method based on eye fundus image |
CN112950737B (en) * | 2021-03-17 | 2024-02-02 | 中国科学院苏州生物医学工程技术研究所 | Fundus fluorescence contrast image generation method based on deep learning |
CN114494196B (en) * | 2022-01-26 | 2023-11-17 | 南通大学 | Retinal diabetes mellitus depth network detection method based on genetic fuzzy tree |
CN116012659B (en) * | 2023-03-23 | 2023-06-30 | 海豚乐智科技(成都)有限责任公司 | Infrared target detection method and device, electronic equipment and storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108596247A (en) * | 2018-04-23 | 2018-09-28 | 南方医科大学 | A method of fusion radiation group and depth convolution feature carry out image classification |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140314288A1 (en) * | 2013-04-17 | 2014-10-23 | Keshab K. Parhi | Method and apparatus to detect lesions of diabetic retinopathy in fundus images |
CN105787927B (en) * | 2016-02-06 | 2018-06-01 | 上海市第一人民医院 | Automatic identification method is oozed out in a kind of color fundus photograph image |
CN106408562B (en) * | 2016-09-22 | 2019-04-09 | 华南理工大学 | Eye fundus image Segmentation Method of Retinal Blood Vessels and system based on deep learning |
CN106570530A (en) * | 2016-11-10 | 2017-04-19 | 西南交通大学 | Extraction method for extracting hard exudates in ophthalmoscopic image |
EP3616120A1 (en) * | 2017-04-27 | 2020-03-04 | Retinascan Limited | System and method for automated funduscopic image analysis |
CN107341265B (en) * | 2017-07-20 | 2020-08-14 | 东北大学 | Mammary gland image retrieval system and method fusing depth features |
2018-11-07: application CN201811317900.3A filed in China; granted as CN109523524B (status: Active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108596247A (en) * | 2018-04-23 | 2018-09-28 | 南方医科大学 | A method of fusion radiation group and depth convolution feature carry out image classification |
Also Published As
Publication number | Publication date |
---|---|
CN109523524A (en) | 2019-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109523524B (en) | Eye fundus image hard exudation detection method based on ensemble learning | |
CN115082683B (en) | Injection molding defect detection method based on image processing | |
Jidong et al. | Recognition of apple fruit in natural environment | |
CN114757900B (en) | Artificial intelligence-based textile defect type identification method | |
CN108073918B (en) | Method for extracting blood vessel arteriovenous cross compression characteristics of fundus retina | |
CN105445277A (en) | Visual and intelligent detection method for surface quality of FPC (Flexible Printed Circuit) | |
CN110189383B (en) | Traditional Chinese medicine tongue color and fur color quantitative analysis method based on machine learning | |
CN103295013A (en) | Pared area based single-image shadow detection method | |
CN109241973B (en) | Full-automatic soft segmentation method for characters under texture background | |
Maji et al. | An automated method for counting and characterizing red blood cells using mathematical morphology | |
CN113706490B (en) | Wafer defect detection method | |
CN108109133B (en) | Silkworm egg automatic counting method based on digital image processing technology | |
CN104794721A (en) | Quick optic disc positioning method based on multi-scale macula detection | |
CN113935666B (en) | Building decoration wall tile abnormity evaluation method based on image processing | |
CN106447673A (en) | Chip pin extraction method under non-uniform illumination condition | |
CN106331746B (en) | Method and apparatus for identifying watermark location in video file | |
CN109409227A (en) | A kind of finger vena plot quality appraisal procedure and its device based on multichannel CNN | |
CN102184404A (en) | Method and device for acquiring palm region in palm image | |
Tang et al. | Leaf extraction from complicated background | |
CN109117837B (en) | Region-of-interest determination method and apparatus | |
CN110189327A (en) | Eye ground blood vessel segmentation method based on structuring random forest encoder | |
CN106372593B (en) | Optic disk area positioning method based on vascular convergence | |
Wang et al. | A fast image segmentation algorithm for detection of pseudo-foreign fibers in lint cotton | |
CN110288616B (en) | Method for segmenting hard exudation in fundus image based on fractal and RPCA | |
CN102938052A (en) | Sugarcane segmentation and recognition method based on computer vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||