CN111507932B - High-specificity diabetic retinopathy characteristic detection method and storage device - Google Patents

Publication number
CN111507932B
CN111507932B (application number CN201910098174.9A)
Authority
CN
China
Prior art keywords
focus
fundus image
image
delineation
blood vessel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910098174.9A
Other languages
Chinese (zh)
Other versions
CN111507932A (en)
Inventor
余轮
林嘉雯
潘林
薛岚燕
曹新容
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou Yiying Health Technology Co ltd
Original Assignee
Fuzhou Yiying Health Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou Yiying Health Technology Co ltd filed Critical Fuzhou Yiying Health Technology Co ltd
Priority to CN201910098174.9A priority Critical patent/CN111507932B/en
Publication of CN111507932A publication Critical patent/CN111507932A/en
Application granted granted Critical
Publication of CN111507932B publication Critical patent/CN111507932B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention relates to the technical field of medical image processing, and in particular to a high-specificity diabetic retinopathy feature detection method and a storage device. The method comprises the following steps: detecting lesion features in the diseased region of a fundus image through a preset step; preprocessing the fundus image processed by the preset step; extracting the main blood vessels of the preprocessed fundus image; performing optic disc delineation and macular fovea delineation on the preprocessed fundus image according to the main blood vessels; and further improving the specificity of lesion feature detection according to the optic disc delineation result, the macular fovea delineation result and the main blood vessels. Compared with fundus image classification methods that operate only at the image level, or lesion feature extraction methods based on deep learning alone, the method directly acquires the positions, types and numbers of red and bright lesions, reducing lesion detection errors and improving the specificity of lesion feature detection.

Description

High-specificity diabetic retinopathy characteristic detection method and storage device
Technical Field
The invention relates to the technical fields of artificial intelligence and medical image processing, and in particular to a high-specificity diabetic retinopathy feature detection method and a storage device.
Background
Diabetic retinopathy (DR) is a chronic disease with an insidious onset and is one of the leading causes of blindness in humans. Early, regular DR screening of the at-risk population is therefore highly desirable. Screening large volumes of fundus images, however, increases the burden on doctors, so automatic lesion detection within a DR screening system is of great significance for reducing doctors' workload and improving efficiency.
Detection of bleeding points and hemangiomas (red lesions) and of exudates (bright lesions) is particularly important in studies of early DR detection. However, the similarity between anatomical structures and lesions in certain image features, together with image-quality problems caused by imaging hardware, makes DR detection difficult.
Disclosure of Invention
Therefore, a high-specificity diabetic retinopathy feature detection method is needed to solve the above technical problems. The specific technical scheme is as follows:
A high-specificity diabetic retinopathy feature detection method comprises the following steps: detecting lesion features in the diseased region of a fundus image through a preset step; preprocessing the fundus image processed by the preset step; extracting the main blood vessels of the preprocessed fundus image; performing optic disc delineation and macular fovea delineation on the preprocessed fundus image according to the main blood vessels; and further improving the specificity of lesion feature detection according to the optic disc delineation result, the macular fovea delineation result and the main blood vessels.
Further, "detecting lesion features in the diseased region of a fundus image through a preset step" further comprises the steps of: grading the fundus image using transfer learning and ensemble learning; locating the diseased region of the fundus image using a weakly supervised learning algorithm; training a convolutional neural network and a support vector machine classifier to obtain bleeding-point, hemangioma and exudate lesion models; and detecting lesion features in the diseased region of the fundus image using the lesion models.
Further, the preprocessing, main-vessel extraction, optic disc delineation and macular fovea delineation further comprise the steps of: the preprocessing comprises green channel selection, median filtering, contrast-limited enhancement, gray-scale equalization and normalization; extracting a binarized blood vessel map from the preprocessed fundus image with the Otsu algorithm, and eroding the binarized blood vessel map morphologically to obtain the main-vessel information; performing a parabolic fitting calculation on the main blood vessels, and locating the optic disc center and delineating the optic disc edge according to the result; constructing two circles around the optic disc center with a first and a second preset radius to form an annular region; and locating the macular fovea and delineating the macular edge within the annular region according to the macular brightness characteristics.
Further, "grading the fundus image using transfer learning and ensemble learning" further comprises the steps of: initializing the VGGNet16 and GoogLeNet model parameters with pre-trained models, and fine-tuning the two models with the DR grading labels and images of the Kaggle DR data set; changing the 1000 outputs of the original fully connected layer into 4 outputs, corresponding to the healthy, mild, moderate and severe DR grades respectively; and fine-tuning the weights of the pre-initialized convolutional filters through back-propagation so that the whole convolutional neural network fits the characteristics of fundus images.
Further, fundus image feature vectors are extracted with the two fine-tuned models to train five classifiers respectively; using an ensemble learning method, the decisions of the classifiers are averaged to jointly determine the category of the fundus image.
Further, "locating the DR diseased region using a weakly supervised learning algorithm" further comprises the steps of: applying the class activation mapping algorithm of a convolutional neural network to the fundus image to generate a heat map, normalizing and threshold-segmenting the heat map, and judging the parts above a preset threshold to be diseased regions.
Further, "training a convolutional neural network and a support vector machine classifier to obtain the bleeding-point, hemangioma and exudate lesion models" further comprises the steps of: according to the lesion annotations in the data set, cropping image blocks from the lesion-labeled and non-lesion-labeled regions with a sliding window, and using them as positive and negative samples to train the convolutional neural network and the support vector machine classifier.
Further, "detecting lesion features in the diseased region of the fundus image using the lesion models" further comprises the steps of: cropping image blocks from the diseased region obtained by weakly supervised learning with a sliding window, extracting features from the image blocks with the lesion models, classifying the blocks, and judging whether each block is a lesion and, if so, its lesion type.
In order to solve the above technical problems, the invention also provides a storage device. The specific technical scheme is as follows:
A storage device in which an instruction set is stored, the instruction set being used to perform any of the steps mentioned above.
The beneficial effects of the invention are as follows: lesion features in the diseased region of the fundus image are detected through a preset step; on this basis, the fundus image processed by the preset step is preprocessed; the main blood vessels of the preprocessed fundus image are extracted; optic disc delineation and macular fovea delineation are performed on the preprocessed fundus image according to the main blood vessels; and the specificity of lesion feature detection is further improved according to the optic disc delineation result, the macular fovea delineation result and the main-vessel extraction result, i.e., the lesion detection results are checked and false detections are removed. Compared with fundus image classification methods that work only at the image level, or lesion feature extraction methods based on deep learning alone, the method directly acquires the positions, types and numbers of red and bright lesions, reducing lesion detection errors and improving the specificity of lesion feature detection.
Drawings
FIG. 1 is a flow chart of the high-specificity diabetic retinopathy feature detection method according to an embodiment;
FIG. 2 is a flow chart of detecting lesion features in the diseased region of a fundus image through a preset step according to the embodiment;
FIG. 3 is a schematic block diagram of the storage device according to the embodiment.
Detailed Description
In order to describe the technical content, constructional features, achieved objects and effects of the technical solution in detail, the following description is made in connection with the specific embodiments in conjunction with the accompanying drawings.
Referring to fig. 1 to 2, in this embodiment the high-specificity diabetic retinopathy feature detection method can be applied to a storage device, which may be a smart phone, tablet computer, desktop computer, notebook computer, cloud server, cloud storage, machine-room server, workstation, and so on.
First, some English abbreviations that may appear in this embodiment are explained:
CNN: convolutional neural network.
SVM: support vector machine.
In this embodiment, the fundus image is specifically a fundus image of a patient diagnosed with diabetes.
In this embodiment, the high-specificity diabetic retinopathy feature detection method can be implemented as follows:
step S101: and detecting focus characteristics of a lesion area of the fundus image through a preset step.
Step S102: and preprocessing the fundus image processed by the preset step.
Step S103: and extracting main blood vessels of the preprocessed fundus image.
Step S104: and performing optic disc delineation and macula fovea delineation on the preprocessed fundus image according to the main blood vessel.
Step S105: and further perfecting the specificity of focus feature detection according to the video disc delineation result, the macula fovea delineation result and the main blood vessel. .
Referring to fig. 2, the step S101 may further include the following steps:
step S201: fundus images are classified by using a method of transfer learning and ensemble learning.
Step S202: and positioning the fundus image lesion area by using a weak supervision learning algorithm.
Step S203: training by using a convolutional neural network and a support vector machine classifier to obtain bleeding points, hemangiomas and exudate focus models.
Step S204: and detecting focus features in the lesion area of the fundus image by using the focus model.
Grading fundus images by using a transfer learning and ensemble learning method; positioning the affected area of the eye bottom image by using a weak supervision learning algorithm; training by using a convolutional neural network and a support vector machine classifier to obtain bleeding points, hemangiomas and exudate focus models; and detecting focus features in the lesion area of the fundus image by using the focus model. Compared with a fundus image classification method which can only show fundus image levels and is based on the image levels, the positions, types and numbers of red and bright focus can be acquired more accurately, the condition of focus detection errors is reduced, and the specificity of lesion detection is improved.
In this embodiment, "grading the fundus image using transfer learning and ensemble learning" further comprises the steps of: initializing the VGGNet16 and GoogLeNet model parameters with pre-trained models, and fine-tuning the two models with the DR grading labels and images of the Kaggle DR data set; changing the 1000 outputs of the original fully connected layer into 4 outputs, corresponding to the healthy, mild, moderate and severe DR grades respectively; and fine-tuning the weights of the pre-initialized convolutional filters through back-propagation so that the whole convolutional neural network fits the characteristics of fundus images.
In this embodiment, "initializing the VGGNet16 and GoogLeNet model parameters with pre-trained models and fine-tuning the two models" further comprises: using the DR grading labels and images of the Kaggle DR data set to change the one thousand outputs of the original fully connected layer into four outputs, corresponding to the healthy, mild, moderate and severe fundus image grades.
Further, the method further comprises the steps of: extracting fundus image feature vectors with the two fine-tuned models to train five classifiers respectively; and, using an ensemble learning method, averaging the decisions of the classifiers to jointly determine the category of the fundus image.
The specific method can be as follows. The data set in this embodiment mainly comes from a free DR screening platform and was published by the data competition platform Kaggle, abbreviated here as the Kaggle DR data set. The VGGNet16 and GoogLeNet model parameters are initialized with models pre-trained on the ImageNet data set and fine-tuned with the DR grading labels and images of the Kaggle DR data set; the 1000 outputs of the original fully connected layer are changed into 4 outputs, corresponding to the healthy, mild, moderate and severe DR grades. The weights of the pre-initialized CNN filters are fine-tuned through back-propagation so that the whole CNN fits the characteristics of DR images.
Fundus image feature vectors are extracted with the models to train 5 classifiers: the Softmax classifier of the VGGNet16 classification layer; the Softmax classifier of the GoogLeNet classification layer; an SVM trained on feature vectors extracted from the fine-tuned VGGNet16; an SVM trained on feature vectors extracted from the fine-tuned GoogLeNet; and an SVM trained on the combined feature vectors of the fine-tuned VGGNet16 and GoogLeNet.
In this process, a PCA (Principal Component Analysis) algorithm is used to reduce the dimensionality of the extracted fundus image feature vectors before training the corresponding classifiers.
The multiple classifiers are combined by ensemble learning: their decisions are averaged to jointly determine the category of the fundus image.
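The averaging step of the ensemble can be sketched as follows. This is a minimal NumPy illustration, not the embodiment's implementation: the five classifiers are represented abstractly by the 4-class probability vectors they output, and the function and grade names are chosen here for clarity.

```python
import numpy as np

# DR grades corresponding to the four-output classification head.
GRADES = ["healthy", "mild", "moderate", "severe"]

def ensemble_predict(prob_list):
    """Average the per-classifier probability vectors (one 4-vector per
    classifier) and return the jointly decided DR grade."""
    avg = np.mean(np.stack(prob_list), axis=0)
    return GRADES[int(np.argmax(avg))]
```

For example, three classifiers voting `[0.1, 0.6, 0.2, 0.1]`, `[0.2, 0.5, 0.2, 0.1]` and `[0.3, 0.3, 0.3, 0.1]` average to a distribution whose maximum is the "mild" grade.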
In this embodiment, "locating the DR diseased region using a weakly supervised learning algorithm" further comprises the steps of: applying the class activation mapping algorithm of a convolutional neural network to the fundus image to generate a heat map, normalizing and threshold-segmenting the heat map, and judging the parts above a preset threshold to be diseased regions. The following method can be adopted. The fundus images to be processed are resized to a uniform 512×512 pixels by subsampling or interpolation; the output of the last convolutional layer consists of 1024 feature maps of 14×14; each feature map is enlarged to the original image size by bilinear interpolation; the weights between the GAP layer and the Softmax layer are extracted; finally, each feature map is multiplied by its corresponding weight and the results are accumulated to obtain the diseased-region heat map of the fundus image. The heat map is normalized and threshold-segmented; parts above 0.65 are judged to be lesions. Here the CAM (Class Activation Mapping) algorithm of a convolutional neural network with GAP (Global Average Pooling) is used. The main difference between GAP and a max pooling layer is that GAP averages within each feature map, i.e., the kernel is set to the same size as the feature map, so each feature map is turned into a single value, which is then fed to the Softmax layer.
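The CAM computation above can be sketched as follows. This is a simplified NumPy sketch: nearest-neighbour upsampling stands in for the bilinear interpolation of the embodiment, the output size must be a multiple of the feature-map size, and the function name is illustrative.

```python
import numpy as np

def cam_lesion_mask(feature_maps, gap_weights, thresh=0.65, out_size=512):
    """Weighted sum of the last conv layer's feature maps (CAM),
    normalised to [0, 1], upsampled to the input size and thresholded."""
    # feature_maps: (C, h, w) activations; gap_weights: (C,) GAP->Softmax weights
    cam = np.tensordot(gap_weights, feature_maps, axes=1)       # (h, w)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # normalise
    scale = out_size // cam.shape[0]
    cam = np.kron(cam, np.ones((scale, scale)))                 # upsample
    return cam > thresh                                         # lesion mask
```

Pixels of the upsampled heat map above the 0.65 threshold form the candidate diseased region.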
In this embodiment, "training a convolutional neural network and a support vector machine classifier to obtain the bleeding-point, hemangioma and exudate lesion models" further comprises the steps of: according to the lesion annotations in the data set, cropping image blocks from the lesion-labeled and non-lesion-labeled regions with a sliding window, and using them as positive and negative samples to train the convolutional neural network and the support vector machine classifier.
Specifically: according to the red-lesion annotations in the ROC and e-ophtha data sets, the bounding-box regions of the lesions are taken as positive samples and lesion-free bounding-box regions as negative samples; according to the bright-lesion annotations in the e-ophtha data set, image blocks are cropped from the lesion-labeled and non-lesion-labeled regions with a sliding window and used as positive and negative samples to train the convolutional neural network and the support vector machine classifier. Both the ROC (Retinopathy Online Challenge) and the e-ophtha data sets already exist.
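The sliding-window sampling of positive and negative patches can be sketched as follows. The patch size, stride and centre-pixel rule are illustrative assumptions; the embodiment's exact window parameters are not specified here.

```python
import numpy as np

def sample_patches(image, lesion_mask, patch=32, stride=32):
    """Slide a window over the image; a patch whose centre pixel carries a
    lesion annotation becomes a positive sample, otherwise a negative one."""
    pos, neg = [], []
    h, w = lesion_mask.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            blk = image[y:y + patch, x:x + patch]
            if lesion_mask[y + patch // 2, x + patch // 2]:
                pos.append(blk)
            else:
                neg.append(blk)
    return pos, neg
```

The resulting positive and negative patch sets are then fed to the CNN and SVM training described above.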
Further, "detecting lesion features in the diseased region of the fundus image using the lesion models" further comprises the steps of: cropping image blocks from the diseased region obtained by weakly supervised learning with a sliding window, extracting features from the image blocks with the lesion models, classifying the blocks, and judging whether each block is a lesion and, if so, its lesion type.
Further, in this embodiment, the preprocessing, optic disc delineation and macular fovea delineation further comprise the steps of: the preprocessing comprises green channel selection, median filtering, contrast-limited enhancement, gray-scale equalization and normalization; extracting a binarized blood vessel map from the preprocessed fundus image with the Otsu algorithm, and eroding the binarized blood vessel map morphologically to obtain the main-vessel information; performing a parabolic fitting calculation on the main blood vessels, and locating the optic disc center and delineating the optic disc edge according to the result; constructing two circles around the optic disc center with a first and a second preset radius to form an annular region; and locating the macular fovea and delineating the macular edge within the annular region according to the macular brightness characteristics.
Preprocessing removes redundant background from the fundus image and effectively removes noise, which benefits the subsequent fundus image analysis.
As one implementation, since in a color fundus image the blue channel is noisy and has lost most useful information, while in the red channel bright spots are more prominent and information such as dark blood vessels, microaneurysms and bleeding points is largely lost, the green channel of the color fundus image to be examined is selected, preserving and highlighting the fundus blood vessels to the greatest extent. Median filtering is applied to the green-channel fundus image for denoising. To obtain a better vessel extraction result, contrast enhancement is applied to the denoised image; to avoid over-brightening the enhanced image, this embodiment uses the contrast-limited method CLAHE. Finally, normalization makes the pixel values of all pixels in an image fall between 0 and 1.
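The preprocessing chain can be sketched as follows. This is a minimal NumPy sketch under stated simplifications: the embodiment's CLAHE is replaced here by global histogram equalisation (which also yields values in [0, 1], covering the normalization step), and the function name is illustrative.

```python
import numpy as np

def preprocess(rgb):
    """Green-channel selection -> 3x3 median filtering -> histogram
    equalisation (a global stand-in for CLAHE) -> values in [0, 1]."""
    g = rgb[..., 1].astype(np.float64)                  # green channel
    p = np.pad(g, 1, mode="edge")                       # 3x3 median filter
    h, w = g.shape
    med = np.median(np.stack([p[dy:dy + h, dx:dx + w]
                              for dy in range(3) for dx in range(3)]), axis=0)
    hist, bins = np.histogram(med, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()                    # equalise + normalise
    return np.interp(med, bins[:-1], cdf)               # output in [0, 1]
```

In practice the median kernel and the CLAHE tile/clip parameters would be tuned to the fundus image resolution.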
After preprocessing the fundus image to be processed, step S103 is executed: a binarized blood vessel map is extracted from the preprocessed fundus image with the Otsu algorithm, and the binarized blood vessel map is eroded morphologically to obtain the main blood vessels.
As one implementation, a threshold T is computed on the preprocessed fundus image I with the Otsu algorithm, and pixels whose gray value exceeds the threshold are identified as vessels according to the following rule (reconstructed here in place of the original formula image):
V(x, y) = 1 if I(x, y) > T, otherwise 0,
where T is the gray level that maximizes the between-class variance of the image histogram.
Structuring elements are chosen on the basis that the optic disc diameter is 1/8 to 1/5 of the image width and the main-vessel width is about 1/4 of the optic disc diameter; the extracted vessels are eroded with these structuring elements to remove the fine vessels and obtain the main blood vessels.
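The Otsu thresholding and morphological erosion can be sketched as follows. This is a NumPy sketch with illustrative function names; the erosion uses a square structuring element and, for brevity, ignores wrap-around at the borders (fundus borders are background anyway).

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: the threshold maximising the between-class
    variance of the gray-level histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    mu_total = np.dot(np.arange(256), p)
    best_t, best_var, w0, mu0 = 0, -1.0, 0.0, 0.0
    for t in range(255):
        w0 += p[t]                      # class-0 probability up to t
        mu0 += t * p[t]                 # class-0 cumulative mean
        if w0 <= 0.0 or w0 >= 1.0:
            continue
        var = (mu_total * w0 - mu0) ** 2 / (w0 * (1.0 - w0))
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element: a pixel
    survives only if its whole neighbourhood is set, which removes thin
    vessels and keeps the main ones."""
    out = mask.copy()
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out &= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out
```

Vessels are then the pixels with `gray > otsu_threshold(gray)`, and `erode` applied to that binary map leaves the main-vessel skeleton.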
After obtaining the main blood vessels, step S104 is performed: a parabolic fitting calculation is performed on the main blood vessels, and the optic disc center location and the optic disc edge delineation are obtained from the result.
The following method can be adopted:
A coordinate system is established with the upper-left corner of the fundus image as the origin, the horizontal direction as the X axis and the vertical direction as the Y axis; each pixel of the main blood vessels is mapped into this coordinate system.
A parabola is fitted to the main blood vessels by least squares, i.e., the parameters a, b and c of
f(x) = a·x² + b·x + c
are chosen to minimize the sum of squared residuals Σᵢ (yᵢ − f(xᵢ))², and the vertex of the parabola, (−b/(2a), c − b²/(4a)), is computed.
It is then judged whether the parabola vertex falls inside the original fundus image; if so, the vertex is taken as the optic disc center with coordinates (ODX, ODY), and the optic disc edge is delineated around it. Further, the optic disc diameter is obtained automatically or semi-automatically, the diameter ODD being expressed in pixels.
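The least-squares parabola fit and vertex computation can be sketched as follows; the function name is illustrative, and `np.polyfit` performs exactly the least-squares minimization described above.

```python
import numpy as np

def locate_disc_center(xs, ys):
    """Fit f(x) = a*x^2 + b*x + c to the main-vessel pixel coordinates
    by least squares; the vertex of the fitted parabola gives the
    optic-disc centre estimate (ODX, ODY)."""
    a, b, c = np.polyfit(xs, ys, 2)      # minimises sum((y_i - f(x_i))^2)
    odx = -b / (2.0 * a)                 # vertex x
    ody = c - b * b / (4.0 * a)          # vertex y = f(odx)
    return odx, ody
```

For vessel pixels lying near a parabola with vertex (100, 50), the function recovers that vertex.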
After the optic disc center location and disc edge delineation are completed, step S105 is performed: locating the macular fovea. The following method can be adopted: two circles are constructed around the optic disc center with a first and a second preset radius, forming an annular region; the macular fovea is located within the annular region according to the macular brightness characteristics. Specifically:
According to the positional relationship between the macula and the optic disc, the distance between the macular fovea and the optic disc center is typically 2 to 3 times the ODD. In this embodiment, therefore, a first circle is preferably constructed with the optic disc center as its center and 2 ODD as its radius, and a second circle with the optic disc center as its center and 3 ODD as its radius; the annular region between the two circles is defined as the mask region. Within the mask region, the fovea, which has the lowest brightness, is located, giving the foveal coordinates (MX, MY). In a preferred mode, a local directional contrast method is used to detect the foveal position. Finally, according to the brightness information, the macular region is fitted as a circle centered on the fovea.
Each pixel in the candidate region is scanned with a sliding window of preset size.
An evaluation formula is constructed based on the facts that the macular region is the darkest region of the fundus image and that the macular fovea contains no blood vessels. Reconstructed here in place of the original formula image, the evaluation value of a window is:
f = f_vessel + f_intensity,
where f_vessel is the score for the number of non-zero vessel pixels of the vessel map inside the window, and f_intensity is the brightness score inside the window; "darkest in the fundus image" corresponds to the brightness score, and "contains no vessels" corresponds to the vessel-pixel count.
In this embodiment, each pixel in the candidate region is scanned with a sliding window of size ODD/4, ODD being the optic disc diameter.
f_vessel is obtained by normalizing each window's vessel-pixel count by the maximum count over all windows.
f_intensity is obtained by averaging the brightness of all pixels in the window and normalizing by 255.
The evaluation value of every sliding window is computed, and the center pixel of the window with the smallest value is selected as the macular fovea; a circle is then defined with the macular fovea as its center and the optic disc diameter as its diameter, and the enclosed area is set as the macular region.
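The annular-mask search for the fovea can be sketched as follows. This NumPy sketch makes two stated simplifications: f_vessel is the raw vessel-pixel count rather than the max-normalized score of the embodiment, and the function name and argument layout are illustrative.

```python
import numpy as np

def locate_fovea(gray, vessel_map, disc_center, odd):
    """Scan a window of size ODD/4 over the 2*ODD..3*ODD annulus around
    the optic-disc centre; return the centre pixel of the window
    minimising f = f_vessel + f_intensity (fewest vessel pixels,
    lowest mean brightness)."""
    h, w = gray.shape
    win = max(odd // 4, 1)
    cy, cx = disc_center
    best, best_score = None, np.inf
    for y in range(h - win + 1):
        for x in range(w - win + 1):
            ry, rx = y + win // 2, x + win // 2
            if not (2 * odd <= np.hypot(ry - cy, rx - cx) <= 3 * odd):
                continue                           # outside the mask ring
            f_vessel = int(vessel_map[y:y + win, x:x + win].sum())
            f_intensity = gray[y:y + win, x:x + win].mean() / 255.0
            score = f_vessel + f_intensity
            if score < best_score:
                best_score, best = score, (ry, rx)
    return best
```

On a synthetic image with a dark, vessel-free patch at about 2.5 ODD from the disc center, the search returns a pixel inside that patch.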
After the optic disc delineation result, the macular fovea delineation result and the main blood vessels are obtained, the specificity of lesion feature detection is further improved according to these three. Specifically: lesion features that lie within, or partly intersect, a ring of a certain proportion of the diameter around the delineated macular fovea are removed; these lesion features include bleeding points and hemangiomas (both red lesions, differing in morphology and brightness). Lesion features within, or intersecting, the rim of the delineated optic disc are marked for later manual analysis to confirm deletion; these lesion features include hemangiomas and bleeding points. Lesion features intersecting the main blood vessels are removed; these likewise include hemangiomas and bleeding points. In addition, white or yellow lesion features (suspected exudates rather than bleeds) within or intersecting the rim of the optic disc should be calibrated and submitted to subsequent manual analysis to confirm deletion.
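The specificity-refinement rules above can be sketched as follows. This is a minimal NumPy sketch: lesion candidates are represented as (row, column, type) points, and the function and parameter names are illustrative; the embodiment works with delineated regions rather than single points.

```python
import numpy as np

def refine_lesions(lesions, vessel_mask, macula_center, macula_r,
                   disc_center, disc_r):
    """Drop lesion candidates that touch the main vessels or fall inside
    the delineated macular region; flag candidates inside the optic-disc
    rim for manual review instead of deleting them outright."""
    kept, flagged = [], []
    for y, x, kind in lesions:
        if vessel_mask[y, x]:
            continue                          # intersects a main vessel
        if np.hypot(y - macula_center[0], x - macula_center[1]) <= macula_r:
            continue                          # inside the macular region
        if np.hypot(y - disc_center[0], x - disc_center[1]) <= disc_r:
            flagged.append((y, x, kind))      # on the disc: manual analysis
        else:
            kept.append((y, x, kind))
    return kept, flagged
```

Candidates surviving both removal rules are reported directly; candidates on the optic disc are returned separately for the manual confirmation step.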
Lesion features in the diseased region of the fundus image are detected through a preset step; on this basis, the fundus image processed by the preset step is preprocessed; the main blood vessels, optic disc and macular fovea of the preprocessed fundus image are extracted by automatic or semi-automatic interactive methods; the automatic or semi-automatic lesion detection results are checked against the optic disc location or delineation result, the macular fovea location or delineation result and the main-vessel extraction result, and false detections are removed. Compared with fundus image classification methods that work only at the image level, or lesion feature extraction methods based on deep learning alone, the method directly acquires the positions, types and numbers of red and bright lesions, reducing lesion detection errors and improving the specificity of lesion feature detection.
Referring to fig. 3, in this embodiment, a specific embodiment of a storage device 300 is as follows:
a storage device 300 having stored therein a set of instructions for performing: any of the steps mentioned above.
It should be noted that, although the foregoing embodiments have been described herein, the scope of the present invention is not limited thereby. In particular, the method for removing or reducing misinterpretation and improving the specificity of lesion-feature detection is also applicable to any other lesion-feature extraction method capable of acquiring the positions, types and numbers of red and bright lesions.
Therefore, alterations and modifications to the embodiments described herein based on the innovative concepts of the present invention, equivalent structures or equivalent process transformations made using the present description and drawings, and direct or indirect applications of the above technical solution to other relevant technical fields, are all included in the scope of protection of the invention.

Claims (7)

1. A high-specificity diabetic retinopathy feature detection method, characterized by comprising the following steps:
detecting lesion features in the lesion area of a fundus image through preset steps;
preprocessing the fundus image processed by the preset steps;
extracting the main blood vessels of the preprocessed fundus image;
performing optic disc delineation and macular fovea delineation on the preprocessed fundus image according to the main blood vessels;
further refining the lesion-feature detection according to the optic disc delineation result, the macular fovea delineation result and the main blood vessels, the refinement comprising: removing lesion features within, or intersecting, an edge ring whose diameter is a certain proportion of that of the delineated macular fovea; marking lesion features within, or intersecting, the edge ring of the delineated optic disc; removing lesion features intersecting the main blood vessels; and marking white or yellow lesion features within, or intersecting, the edge ring of the optic disc; the lesion features intersecting the edge ring comprising: hemangiomas and bleeding points;
the "detecting lesion features in the lesion area of a fundus image through preset steps" further comprises the steps of:
grading the fundus image by a transfer-learning and ensemble-learning method;
positioning the lesion area of the fundus image by a weakly supervised learning algorithm;
training bleeding-point, hemangioma and exudate lesion models by a convolutional neural network and a support vector machine classifier;
performing lesion-feature detection on the lesion area of the fundus image by the lesion models;
the "grading the fundus image by a transfer-learning and ensemble-learning method" further comprises the steps of:
initializing the VGGNet16 and GoogLeNet model parameters with pre-trained models, and fine-tuning the two models with the DR grading labels and images in the Kaggle DR dataset; the 1000 outputs of the original fully connected layer are changed to 4 outputs, corresponding to the four DR grades: healthy, mild, moderate and severe;
and fine-tuning the weights of the pre-initialized convolutional filters through a back-propagation algorithm, so that the whole convolutional neural network fits the characteristics of fundus images.
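The head replacement and fine-tuning in claim 1 can be illustrated with a small NumPy sketch. This is an assumption-laden stand-in, not the patented implementation: a fixed random matrix plays the role of the frozen pretrained backbone, the 1000-way ImageNet head is replaced by a new 4-way layer for the DR grades, and only that new layer is trained by back-propagation of the cross-entropy gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the pretrained feature extractor: frozen weights mapping a
# flattened input to a 512-d feature vector (the VGGNet16 / GoogLeNet
# convolutional stacks play this role in the claim).
W_backbone = rng.standard_normal((512, 64)) * 0.05

# The original 1000-way head is discarded and replaced with a 4-way head
# for the DR grades: healthy, mild, moderate, severe.
W_head = rng.standard_normal((4, 512)) * 0.05
b_head = np.zeros(4)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward(x):
    feat = np.tanh(W_backbone @ x)      # frozen backbone features
    return softmax(W_head @ feat + b_head), feat

def sgd_step(x, label, lr=0.1):
    """One back-propagation step on the replaced head only (a simplified
    analogue of fine-tuning; the backbone stays fixed here)."""
    global W_head, b_head
    p, feat = forward(x)
    grad = p.copy()
    grad[label] -= 1.0                  # d(cross-entropy)/d(logits)
    W_head -= lr * np.outer(grad, feat)
    b_head -= lr * grad
    return -np.log(p[label])            # cross-entropy loss
```

Repeated calls to `sgd_step` on labelled images drive the 4-way output toward the DR grading labels.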
2. The high-specificity diabetic retinopathy feature detection method according to claim 1, wherein
the "preprocessing the fundus image; extracting the main blood vessels of the preprocessed fundus image; performing optic disc delineation and macular fovea delineation on the preprocessed fundus image according to the main blood vessels" further comprises the steps of:
the preprocessing comprises: green-channel selection, median filtering, contrast-limited enhancement, and gray-scale equalization and normalization;
extracting a binarized blood vessel map from the preprocessed fundus image by the Otsu algorithm, and eroding the binarized blood vessel map by a morphological method to obtain the main blood vessel information;
performing parabolic fitting on the main blood vessels, and locating the optic disc center and delineating the optic disc edge according to the fitting result;
constructing circles of a first preset radius and a second preset radius centered on the optic disc center to form an annular region;
and locating the macular fovea and delineating the macular edge according to the macular brightness characteristics in the annular region.
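The binarization and erosion steps of claim 2 can be sketched as follows. These are minimal NumPy re-implementations of the Otsu algorithm and 3x3 morphological erosion for illustration, not the patented code; in practice library routines would be used.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the gray level maximising the between-class
    variance of the histogram (used to binarise the vessel map)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                        # mean of the dark class
        m1 = (sum_all - sum0) / (total - w0)  # mean of the bright class
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def erode(mask, iterations=1):
    """Binary erosion with a 3x3 structuring element: thin side branches
    vanish, so only the thick main vessels survive."""
    H, W = mask.shape
    for _ in range(iterations):
        padded = np.pad(mask, 1, constant_values=False)
        out = np.ones((H, W), dtype=bool)
        for dr in (0, 1, 2):
            for dc in (0, 1, 2):
                out &= padded[dr:dr + H, dc:dc + W]
        mask = out
    return mask
```

Thresholding the green channel at `otsu_threshold(...)` and eroding the result leaves a main-vessel map suitable for the parabolic fitting of the next step.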
3. The high-specificity diabetic retinopathy feature detection method according to claim 1, wherein
fundus-image feature vectors are extracted with the two fine-tuned models and five classifiers are trained: the Softmax classifier of the VGGNet16 classification layer; the Softmax classifier of the GoogLeNet classification layer; an SVM classifier trained on feature vectors extracted from the fine-tuned VGGNet16; an SVM classifier trained on feature vectors extracted from the fine-tuned GoogLeNet; and an SVM classifier trained on feature vectors extracted from both the fine-tuned VGGNet16 and GoogLeNet;
and an ensemble-learning method is adopted, averaging the decisions of the classifiers to jointly determine the category of the fundus image.
4. The high-specificity diabetic retinopathy feature detection method according to claim 1, wherein
the "positioning the lesion area of the fundus image by a weakly supervised learning algorithm" further comprises the steps of:
applying a class-activation-mapping algorithm of the convolutional neural network to the fundus image to generate a heat map, normalizing and threshold-segmenting the heat map, and judging the parts larger than a preset threshold to be the lesion area.
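The heat-map step of claim 4 can be sketched as follows. Illustratively, the class activation map is the class-weighted sum of the final convolutional feature maps, then normalized to [0, 1] and thresholded; the function names, shapes and threshold value are assumptions.

```python
import numpy as np

def class_activation_map(fmaps, class_weights):
    """CAM: weighted sum of the final conv feature maps, using the
    fully-connected weights of the predicted class.
    fmaps: (C, H, W) feature maps; class_weights: (C,) weights."""
    return np.tensordot(class_weights, fmaps, axes=1)

def lesion_region_from_cam(cam, thresh=0.6):
    """Normalise the heat map to [0, 1] and keep the parts above the preset
    threshold as the candidate DR lesion region."""
    cam = cam.astype(float)
    cam = cam - cam.min()
    span = cam.max()
    if span > 0:
        cam = cam / span
    return cam >= thresh
```

Upsampling the resulting boolean mask to the fundus-image resolution yields the lesion area handed to the patch-level detector of claims 5 and 6.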
5. The high-specificity diabetic retinopathy feature detection method according to claim 4, wherein
the "training bleeding-point, hemangioma and exudate lesion models by a convolutional neural network and a support vector machine classifier" further comprises the steps of:
capturing, according to the lesion annotations in the dataset, image blocks in annotated lesion areas and non-lesion areas with a sliding window, and training the convolutional neural network and the support vector machine classifier with them as positive and negative samples.
6. The high-specificity diabetic retinopathy feature detection method according to claim 5, wherein
the "performing lesion-feature detection on the lesion area of the fundus image by the lesion models" further comprises the steps of:
capturing image blocks over the lesion area obtained by weakly supervised learning with a sliding window, extracting features from the image blocks through the lesion models, classifying and detecting the image blocks, and judging whether each image block is a lesion and its lesion type.
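The sliding-window detection of claim 6 can be sketched as follows. The classifier callback stands in for the CNN-feature-plus-SVM stage, and the window size, stride and function names are assumptions for illustration.

```python
import numpy as np

def sliding_window_detect(image, region_mask, classify, win=9, stride=4):
    """Slide a window over the weakly-supervised lesion region, pass each
    patch to a classifier, and collect positive detections with labels.

    classify: callable taking a (win, win) patch and returning a lesion-type
    label ('bleeding point', 'hemangioma', 'exudate', ...) or None.
    """
    H, W = image.shape[:2]
    detections = []
    for r in range(0, H - win + 1, stride):
        for c in range(0, W - win + 1, stride):
            # Only windows overlapping the localised lesion region are tested,
            # which keeps the search off the healthy background.
            if not region_mask[r:r + win, c:c + win].any():
                continue
            label = classify(image[r:r + win, c:c + win])
            if label is not None:        # None = background / not a lesion
                detections.append((r, c, label))
    return detections
```

The returned `(row, col, label)` triples give directly the positions, types and numbers of lesions that the specificity filter of claim 1 then screens.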
7. A storage device having stored therein a set of instructions for performing the method of any one of claims 1 to 6.
CN201910098174.9A 2019-01-31 2019-01-31 High-specificity diabetic retinopathy characteristic detection method and storage device Active CN111507932B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910098174.9A CN111507932B (en) 2019-01-31 2019-01-31 High-specificity diabetic retinopathy characteristic detection method and storage device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910098174.9A CN111507932B (en) 2019-01-31 2019-01-31 High-specificity diabetic retinopathy characteristic detection method and storage device

Publications (2)

Publication Number Publication Date
CN111507932A CN111507932A (en) 2020-08-07
CN111507932B true CN111507932B (en) 2023-05-09

Family

ID=71870833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910098174.9A Active CN111507932B (en) 2019-01-31 2019-01-31 High-specificity diabetic retinopathy characteristic detection method and storage device

Country Status (1)

Country Link
CN (1) CN111507932B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489003A (en) * 2020-11-25 2021-03-12 哈尔滨理工大学 Diabetic retinopathy area positioning detection method based on deep learning
CN112652392A (en) * 2020-12-22 2021-04-13 成都市爱迦科技有限责任公司 Fundus anomaly prediction system based on deep neural network
CN112652394A (en) * 2021-01-14 2021-04-13 浙江工商大学 Multi-focus target detection-based retinopathy of prematurity diagnosis system
CN112883962B (en) * 2021-01-29 2023-07-18 北京百度网讯科技有限公司 Fundus image recognition method, fundus image recognition apparatus, fundus image recognition device, fundus image recognition program, and fundus image recognition program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021916A (en) * 2017-12-31 2018-05-11 南京航空航天大学 Deep learning diabetic retinopathy sorting technique based on notice mechanism
CN109101950A (en) * 2018-08-31 2018-12-28 福州依影健康科技有限公司 A kind of optic disk localization method and storage equipment based on the fitting of main blood vessel
CN109199322A (en) * 2018-08-31 2019-01-15 福州依影健康科技有限公司 A kind of macula lutea detection method and a kind of storage equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6656063B2 (en) * 2016-04-15 2020-03-04 キヤノン株式会社 Image processing apparatus, image processing method, and program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021916A (en) * 2017-12-31 2018-05-11 南京航空航天大学 Deep learning diabetic retinopathy sorting technique based on notice mechanism
CN109101950A (en) * 2018-08-31 2018-12-28 福州依影健康科技有限公司 A kind of optic disk localization method and storage equipment based on the fitting of main blood vessel
CN109199322A (en) * 2018-08-31 2019-01-15 福州依影健康科技有限公司 A kind of macula lutea detection method and a kind of storage equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Classification method for diabetic retinopathy fundus images; Liang Ping et al.; Journal of Shenzhen University (Science and Engineering); 2017-05-31 (No. 03); full text *

Also Published As

Publication number Publication date
CN111507932A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
CN111507932B (en) High-specificity diabetic retinopathy characteristic detection method and storage device
Shen et al. Domain-invariant interpretable fundus image quality assessment
CN110276356B (en) Fundus image microaneurysm identification method based on R-CNN
CN108416344B (en) Method for locating and identifying eyeground color optic disk and yellow spot
CN110033456B (en) Medical image processing method, device, equipment and system
WO2020164493A1 (en) Method and apparatus for filtering medical image area, and storage medium
CN110930416B (en) MRI image prostate segmentation method based on U-shaped network
CN104463140B (en) A kind of colored eye fundus image optic disk automatic positioning method
CN110084803B (en) Fundus image quality evaluation method based on human visual system
JPWO2007029467A1 (en) Image processing method and image processing apparatus
CN111612856B (en) Retina neovascularization detection method and imaging method for color fundus image
Salazar-Gonzalez et al. Optic disc segmentation by incorporating blood vessel compensation
Zou et al. Classified optic disc localization algorithm based on verification model
CN111815563B (en) Retina optic disc segmentation method combining U-Net and region growing PCNN
CN114648806A (en) Multi-mechanism self-adaptive fundus image segmentation method
David et al. Retinal blood vessels and optic disc segmentation using U-net
Kovacs et al. Graph based detection of optic disc and fovea in retinal images
TW201726064A (en) Medical image processing apparatus and breast image processing method thereof
CN115272231A (en) Non-proliferative diabetic retinopathy classification method
CN114638800A (en) Improved Faster-RCNN-based head shadow mark point positioning method
CN113935961A (en) Robust breast molybdenum target MLO (Multi-level object) visual angle image pectoral muscle segmentation method
CN109872337A (en) A kind of eye fundus image optic disk dividing method based on Quick and equal displacement
Jana et al. A semi-supervised approach for automatic detection and segmentation of optic disc from retinal fundus image
CN113139929A (en) Gastrointestinal tract endoscope image preprocessing method comprising information screening and fusion repairing
CN115272333B (en) Cup-disk ratio data storage system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant