CN116843612A - Image processing method for diabetic retinopathy diagnosis - Google Patents


Info

Publication number
CN116843612A
CN116843612A
Authority
CN
China
Prior art keywords
image
blood vessel
fundus image
diabetic retinopathy
fundus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202310425913.7A
Other languages
Chinese (zh)
Inventor
刘雪莲 (Liu Xuelian)
徐勇 (Xu Yong)
马秀梅 (Ma Xiumei)
王鹏 (Wang Peng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Affiliated Hospital of Southwest Medical University
Original Assignee
Affiliated Hospital of Southwest Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Affiliated Hospital of Southwest Medical University filed Critical Affiliated Hospital of Southwest Medical University
Priority to CN202310425913.7A priority Critical patent/CN116843612A/en
Publication of CN116843612A publication Critical patent/CN116843612A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Databases & Information Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The application relates to the technical field of image detection, and in particular to an image processing method for diabetic retinopathy diagnosis, a recognition technology applied to medical images. The method comprises the following steps: acquiring a fundus image to be identified, and performing image segmentation on the fundus image to extract the blood vessel features in the fundus image and obtain a blood vessel image; identifying a first feature in the blood vessel image, acquiring a plurality of second features centered on the first feature, measuring the actual pixel widths of the plurality of second features based on a first condition, comparing the actual pixel widths against a preset pixel width to obtain the actual error, and obtaining a preliminary detection result based on an error threshold. According to the technical scheme provided by the embodiments of the application, a blood vessel image is generated by extracting the blood vessel features of the fundus image, the diameter data associated with the medical features in the blood vessel image are calculated, and the lesion result is obtained from the calculation.

Description

Image processing method for diabetic retinopathy diagnosis
Technical Field
The application relates to the technical field of image detection, and in particular to an image processing method for diabetic retinopathy diagnosis, a recognition technology applied to medical images.
Background
With the development of society, human health awareness has gradually strengthened, and the development of medicine has become a focus of social attention. As living standards improve rapidly, the incidence of diabetes rises year by year; according to WHO statistics, there are currently about 347 million diabetics worldwide. Diabetes causes serious damage to the endocrine and nervous systems of the human body, and is one of the common chronic diseases, with a mortality rate second only to cancer and cardiovascular and cerebrovascular diseases. Diabetic retinopathy (Diabetic Retinopathy, DR) is a vascular disorder caused by diabetes: the disturbance of blood glucose metabolism damages both the large and the micro blood vessels of the whole body and, when severe, causes problems such as vascular ischemia and rupture. The eye is the organ with the richest distribution of micro blood vessels in the human body; as blood glucose rises continuously, changes such as retinal capillary endothelial cell injury, endothelial cell contraction and basement membrane thickening occur, causing microcirculation disturbance, capillary occlusion and ischemia, and accelerating the development of retinopathy. Since the early clinical symptoms of diabetes are not obvious, they are often overlooked, and about half of the patients with a diabetes history of more than ten years show symptoms of retinopathy. Studies have shown that among new cases of blindness each year, up to 12% are diabetic patients, roughly four times the blindness rate of the general population, and that effective early treatment can prevent vision loss or blindness in 90% of DR patients.
Early detection of retinal fundus images for early treatment is an effective means of preventing vision loss and blindness. The retina is the main object of analysis because of its irreplaceable properties: (1) numerous fine blood vessels spread across the retina, and deep micro blood vessels can be observed directly in a non-invasive manner; (2) the inner structure of the retina is closed, and problems such as deformation, aging and abrasion rarely occur at any age in healthy people; (3) the retinal structure is complex and stable, and is not easily affected by the overlap of other diseases; (4) the shape of the retinal blood vessels is highly concealed and cannot be observed by the naked eye, so the probability of forgery, even with specific equipment, is low. At present, the detection of diabetic retinopathy mainly depends on manual judgment, which requires a trained ophthalmologist to evaluate each retinal image in turn, and results usually take several days. The delayed diagnosis can cause the patient to miss timely attention, prevent smooth communication between the doctor, the patient and family members, and delay the optimal treatment window. Existing tests are time-consuming and labor-intensive, and delayed diagnosis may increase patient morbidity.
Diabetic retinopathy is not obvious in its early stage, but tiny vascular changes can be observed in retinal images. In view of this, diabetics need regular eye examinations so that effective treatment can be started when retinopathy first appears, avoiding deterioration of the condition. At the present stage, the diagnosis of diabetic retinopathy mainly consists of a doctor evaluating and analyzing the information in fundus images to reach a conclusion. This approach is limited by the experience and expertise of the physician, is slow and resource-constrained, and may lead to erroneous diagnoses or to the serious consequence of missing the optimal treatment period.
Disclosure of Invention
In order to solve the above technical problems, the application provides an image processing method for diabetic retinopathy diagnosis, in which the blood vessels of the retina are segmented, the vessels are identified according to lesion features, and lesions are detected from the identified features.
In order to achieve the above purpose, the technical scheme adopted by the embodiment of the application is as follows:
in a first aspect, an image processing method for diabetic retinopathy diagnosis comprises: acquiring a fundus image to be identified, performing image segmentation on the fundus image, and extracting the blood vessel features in the fundus image to obtain a blood vessel image; identifying a first feature in the blood vessel image, acquiring a plurality of second features centered on the first feature, measuring the actual pixel widths of the plurality of second features based on a first condition, comparing the actual pixel widths against a preset pixel width to obtain the actual error, and obtaining a preliminary detection result based on an error threshold.
Further, performing image segmentation on the fundus image to extract the blood vessel features and obtain a plurality of blood vessel feature images comprises: inputting the fundus image into a segmentation model for forward propagation to obtain a corresponding segmentation map. The segmentation model comprises five convolution layers; the first convolution layer contains a Focus structure and an inner convolution layer connected to it, the Focus structure converts the width-height information of the fundus image into channels, and the inner convolution layer extracts from the fundus image, through inner convolution, a preliminary feature map containing shallow features.
Further, the convolution structure in the third convolution layer is a deformable convolution.
Further, the method further comprises the following steps: and predicting and outputting the pixels in the extracted preliminary feature map based on a U-Net network through a decoder to obtain a segmented image, namely a blood vessel feature image.
Further, the U-Net network is constructed based on a valid convolution structure.
Further, the first feature is the optic disk in the blood vessel image, and identifying the first feature comprises: extracting the features that fall within the threshold range in the blood vessel image based on a first target detection frame.
Further, acquiring a plurality of second features centered on the first feature, measuring the actual pixel widths of the plurality of second features based on a first condition, comparing the actual pixel widths against a preset pixel width to obtain the actual error, and obtaining a preliminary detection result based on an error threshold, where the second features are blood vessels in the blood vessel image, comprises: taking the center of the optic disk as origin, measuring the actual pixel widths of a plurality of blood vessels within the band extending 1/2 to 1 disk diameter from the disk edge, and comparing the actual pixel widths of the plurality of blood vessels with the preset pixel width to obtain the actual error.
Further, the method further comprises preprocessing the fundus image before the fundus image is subjected to image segmentation extraction, wherein the preprocessing comprises the steps of obtaining the contrast of the fundus image and enhancing the contrast of the fundus image.
Further, obtaining the contrast of the fundus image comprises performing weighted-average graying on the fundus image, specifically: obtaining the pixel values of the R, G and B channels in the fundus image and their corresponding weights, and performing weighted averaging based on the weighting formula Gray_Image = ω_R·R + ω_G·G + ω_B·B, where ω_R, ω_G and ω_B are the weights of the R, G and B channel component pixel values respectively, and ω_G > ω_R > ω_B.
Further, enhancing the contrast of the fundus image comprises computing the histogram of the grayed fundus image, averaging the histogram, calculating a dynamic threshold after averaging, clipping the histogram based on the dynamic threshold, and adjusting the gray values.
In a second aspect, an electronic device is provided, including a memory storing a computer program and a processor implementing any one of the above image processing methods when executing the computer program.
In a third aspect, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the image processing method of any one of the above.
According to the technical scheme provided by the embodiments of the application, a blood vessel image is generated by extracting the blood vessel features of the fundus image, the diameter data associated with the medical features in the blood vessel image are calculated, and the lesion result is obtained from the calculation.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
The methods, systems and/or programs in the accompanying drawings will be further described in terms of exemplary embodiments, which are described in detail with reference to the drawings. These exemplary embodiments are non-limiting; like reference numbers represent like structures throughout the several views of the drawings.
Fig. 1 is a flowchart of an image processing method for diabetic retinopathy diagnosis according to an embodiment of the present application.
Fig. 2 is a block diagram of an image processing apparatus for diabetic retinopathy diagnosis according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an image processing apparatus for diabetic retinopathy diagnosis according to an embodiment of the present application.
Fig. 4 is a fundus image provided by an embodiment of the present application.
Detailed Description
In order to better understand the above technical solutions, the following detailed description of the technical solutions of the present application is made by using the accompanying drawings and specific embodiments, and it should be understood that the specific features of the embodiments and the embodiments of the present application are detailed descriptions of the technical solutions of the present application, and not limiting the technical solutions of the present application, and the technical features of the embodiments and the embodiments of the present application may be combined with each other without conflict.
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. It will be apparent, however, to one skilled in the art that the application can be practiced without these details. In other instances, well known methods, procedures, systems, components, and/or circuits have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present application.
The present application uses flowcharts to illustrate the operations performed by the system according to embodiments of the application. It should be clearly understood that the operations in the flowcharts need not be performed in the order shown; they may instead be performed in reverse order or concurrently. In addition, at least one other operation may be added to a flowchart, and one or more operations may be removed from it.
Before describing embodiments of the present application in further detail, the terms and terminology involved in the embodiments of the present application will be described, and the terms and terminology involved in the embodiments of the present application will be used in the following explanation.
(1) "In response to" indicates the condition or state on which a performed operation depends; when the condition or state is satisfied, the one or more operations performed may occur in real time or with a set delay. Unless otherwise specified, there is no restriction on the execution order of multiple operations performed.
(2) "Based on" indicates the condition or state on which a performed operation relies; when the condition or state is satisfied, the one or more operations performed may occur in real time or with a set delay. Unless otherwise specified, there is no restriction on the execution order of multiple operations performed.
The main application scenario of the technical scheme provided by the embodiments of the application is the diagnosis of diabetic retinopathy from fundus images. In the retina of diabetics, hyperglycemia leads to early changes such as small-vessel endothelial cell injury, thickening of the basement membrane, increased extracellular matrix deposition, venous distention and changes in wall thickness, followed by microaneurysms, arteriolar sclerosis, vitreous hemorrhage and the like. If treatment in the early stage is ineffective, diabetic retinopathy develops further, forming new blood vessels on the retinal surface that are prone to rupture and fundus hemorrhage, leading to serious vision impairment. Diabetic retinopathy (DR) is one of the most common complications of diabetes mellitus and one of the major causes of irreversible blindness. In the medical field it is divided into two main categories according to severity: non-proliferative (NPDR) and proliferative (PDR), with non-proliferative diabetic retinopathy further divided into three sub-categories depending on the severity of the ocular disease. The incidence of diabetic retinopathy among diabetics is high. Patients have no obvious symptoms of discomfort in the early stage; in the middle stage of the disease course vision may be affected, with short episodes of blurred vision from which the patient recovers through the body's own repair functions, so the condition often goes unnoticed. In the later stage, prolonged blurred vision, blood shadows, floaters and other manifestations appear, gradually causing irreversible visual damage and even loss of vision. If diabetics undergo regular fundus screening, rational treatment at the initial stage of the disease can to a great extent eliminate the risk of blindness. It is therefore highly desirable to provide early diabetic retinopathy screening and timely treatment for diabetic patients.
In the diagnosis of diabetic retinopathy, the patient's eyes are first photographed with a non-mydriatic digital fundus camera to obtain fundus retina photographs, and a professional ophthalmologist then evaluates the severity according to the features shown in the fundus images. Because acquiring fundus photographs requires professionals operating specialized machines, while medical resources in remote areas of China are scarce, it cannot be guaranteed that every diabetic receives a clear diagnosis. In addition, diagnosing diabetic retinopathy requires an ophthalmologist with extensive experience to evaluate the fundus image information; the number of specialized ophthalmologists in China is relatively small, so large numbers of diabetic retinopathy patients cannot be screened. Furthermore, the diagnostic process is quite complex and the detection accuracy is not high.
With the development of artificial intelligence technology, image classification is widely applied in many fields, such as face recognition in security, road safety in traffic, and image recognition in medicine. Using a computer to identify and diagnose fundus images is similar to a doctor's diagnosis: the severity is judged from the feature information shown in the patient's fundus image. The image pixel information is converted into feature information in a computer language, and the model judges the severity by learning this feature information. Different classes of diabetic retinopathy exhibit different symptoms. The blood vessels in a normal fundus image are relatively slender. In the early stage of the disease a small number of microaneurysms (Aneurysms) appear in the fundus retina, and the patient has no obvious discomfort. By the middle of the disease course, more aneurysms appear in the vessels; as the vessels become damaged, blood and other fluids infiltrate the retina, causing hemorrhages (Hemorrhages) with the appearance of hard exudates (Hard Exudates) and blurred vision in the patient. In severe cases scar tissue may form due to vessel occlusion. In the proliferative period, cotton wool spots (Cotton Wool) appear in fundus images together with abnormal growth of blood vessels (Abnormal Growth of Blood Vessels); the patient's vision is essentially reduced to perceiving only a little light, and finally the retina detaches and vision is lost.
In recent years, medical image processing has attracted increasing attention due to the rapid development of image processing technology. Its main research objects are medical images produced by different imaging principles; if deep learning is fused with medical image processing, doctors can be assisted in making decisions in clinical diagnosis, greatly improving the accuracy and efficiency of disease diagnosis.
Based on the above technical background, the embodiment of the application provides an image processing method for diagnosing diabetic retinopathy, which comprises the following steps:
s110, acquiring a fundus image to be identified, and performing image segmentation on the fundus image to extract blood vessel features in the fundus image so as to obtain a blood vessel image.
In the embodiments of the application, the fundus image may be obtained using existing medical imaging techniques. Blood vessel features are extracted from the acquired fundus image and fused to obtain a blood vessel image.
Before feature extraction, the fundus image needs to be preprocessed; the preprocessing includes obtaining the contrast of the fundus image and enhancing it. The specific process is as follows:
the method for acquiring the contrast of the fundus image comprises the steps of carrying out weighted average graying treatment on the fundus image, and specifically comprises the following steps: the pixel values of the R channel, the G channel and the B channel in the fundus image and the corresponding weights are obtained, and weighted average processing is carried out based on a weighted formula, wherein the weighted formula is as follows:
Gray_Image=ω R R+ω G G+ω B b, wherein omega R Weights, ω, for R channel component pixel values G Weights, ω, for the G-channel component pixel values B Weights for B-channel component pixel values, where ω G >ω R >ω B
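A minimal sketch of this graying step follows. The numeric weights below are the common ITU-R BT.601 luma coefficients, chosen here only because they satisfy the stated constraint ω_G > ω_R > ω_B; the patent itself does not fix numeric values.

```python
import numpy as np

def weighted_gray(rgb: np.ndarray,
                  w_r: float = 0.299, w_g: float = 0.587, w_b: float = 0.114) -> np.ndarray:
    """Weighted-average graying of an RGB fundus image: Gray = w_R*R + w_G*G + w_B*B."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    gray = w_r * r + w_g * g + w_b * b
    return np.clip(gray, 0, 255).astype(np.uint8)
```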
Enhancing the contrast of the fundus image comprises computing the histogram of the grayed fundus image, averaging the histogram, calculating a dynamic threshold after averaging, clipping the histogram based on the dynamic threshold, and adjusting the gray values. The specific processing steps are as follows (a code sketch of the whole pipeline follows the list):
(1) The fundus image is divided into 4×4 blocks; each pixel block has the same size, and the blocks do not overlap.
(2) The image is converted from the RGB color space to the YUV color space.
(3) The histogram of each block is calculated, together with its average value.
(4) A dynamic threshold is calculated. Taking the average brightness of the pixel block as the boundary, the area of the histogram above the average, i.e. the steepness of the histogram, is calculated, and different thresholds are dynamically assigned according to the steepness of one or more histograms. The threshold is adjusted appropriately according to the concentration of the gray values.
(5) The histogram is clipped according to the threshold; the parts above the threshold are summed and distributed evenly over the pixel positions. Assuming the total number of pixels exceeding the threshold T is S, they are allocated equally to the gray levels.
(6) The cumulative distribution curve of the clipped histogram is calculated: c(k) = Σ_{i=0}^{k} h(i)/N, where h(i) is the clipped histogram and N is the number of pixels in the block.
(7) One-dimensional low-pass filtering enhances the smoothness of the curve, calculated as
Y′(k) = [Y(k−2) + Y(k−1) + Y(k) + Y(k+1)] / 4,
where Y(k) is the current value.
(8) To avoid blocking artifacts, interpolation is used to obtain the pixel values between blocks. Pink blocks are defined as the corner-area pixels of the fundus image, for which the mapping function is computed directly. Green blocks are defined as the edge areas, for which pixel values are computed by linear interpolation using the following quantities:
where f(x, y) is the pixel value sought, the centers of the adjacent pixel blocks are (x₁, y₁) and (x₂, y₂), and f1 and f2 are the mapped values of the sub-blocks.
The blue block is a middle-region pixel block, whose pixel values are obtained by bilinear interpolation,
where f1, f2, f3, f4 are the mapped values of the four surrounding pixel blocks, with corresponding center coordinates (x₁, y₁), (x₁, y₂), (x₂, y₁) and (x₂, y₂).
(9) The original image and the enhanced image are fused with a Gaussian-distributed weight, calculated as
f(x, y) = a·y′ + (1 − a)·y,
where a is the fusion weight of the original and enhanced images, y′ is the enhanced image, and y is the original image. a follows a Gaussian distribution controlled by the gray value: the darker the region, the smaller a, so as to suppress dark-region noise.
(10) The mean contrast-enhancement gain y_gain over all pixels is counted, and the u and v components are adjusted according to it: u_new = 128 ± y_gain × (128 − u), where u_new is the enhanced value of u; the enhanced value of v is obtained in the same way.
(11) The YUV image is converted back into an RGB image.
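A minimal sketch of steps (1)-(11) is given below. It assumes OpenCV's standard CLAHE clip-and-redistribute as a stand-in for the dynamically thresholded per-block equalization of steps (3)-(8), a Gaussian-in-gray fusion weight, and a mean-gain chroma adjustment; the exact threshold formula, fusion weight and sign convention are illustrative choices, not taken verbatim from the patent.

```python
import numpy as np
import cv2

def enhance_contrast(bgr: np.ndarray, clip: float = 2.5, tiles: int = 4) -> np.ndarray:
    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)                  # step (2): RGB -> YUV
    y = yuv[..., 0]
    # steps (1), (3)-(8): 4x4 tiles, clipped histograms, CDF mapping and
    # bilinear interpolation between tiles, all inside OpenCV's CLAHE
    y_enh = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tiles, tiles)).apply(y)
    # step (9): fuse original and enhanced luma; a is Gaussian in the gray
    # value and shrinks in dark regions to suppress dark noise (assumed form)
    yf = y.astype(np.float64)
    a = np.exp(-(((255.0 - yf) / 255.0) ** 2))
    y_fused = a * y_enh.astype(np.float64) + (1.0 - a) * yf
    # step (10): scale u and v around 128 by the mean luma gain y_gain
    y_gain = float(np.mean(y_fused) / max(np.mean(yf), 1e-6))
    for c in (1, 2):
        chroma = yuv[..., c].astype(np.float64)
        yuv[..., c] = np.clip(128.0 + y_gain * (chroma - 128.0), 0, 255).astype(np.uint8)
    yuv[..., 0] = np.clip(y_fused, 0, 255).astype(np.uint8)
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)                 # step (11): YUV -> RGB
```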
Through the above processing, the fundus image is enhanced; image segmentation and blood vessel feature extraction are then performed on the enhanced fundus image to obtain the blood vessel image.
Performing image segmentation on the fundus image and extracting the blood vessel features to obtain a plurality of blood vessel feature images comprises: inputting the fundus image into a segmentation model for forward propagation to obtain a corresponding segmentation map. The segmentation model comprises five convolution layers; the first convolution layer contains a Focus structure and an inner convolution layer connected to it, the Focus structure converts the width-height information of the fundus image into channels, and the inner convolution layer extracts from the fundus image, through inner convolution, a preliminary feature map containing shallow features.
In this embodiment, the feature extraction network comprises several basic convolutional layer structures: five convolution layers corresponding to the processing stages S1-S5. In stage S1, the Focus structure in the convolution layer first converts the width-height information of the fundus image into channels, and the inner convolution then extracts shallow features with greater spatial specificity. In stage S2, the width and height of the feature map are compressed further and the number of channels is expanded further. Stages S3, S4 and S5 each generate a feature map used to construct a feature fusion network with a pyramid structure, which fuses feature information of different scales through bottom-up and top-down connections (a sketch of this fusion follows).
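A minimal sketch of such a pyramid fusion over the S3-S5 outputs, assuming an FPN-style top-down path with 1×1 lateral convolutions; the class name, channel counts and nearest-neighbour upsampling are illustrative choices, not specified by the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidFusion(nn.Module):
    """Top-down fusion of the three feature maps produced by stages S3-S5."""
    def __init__(self, c3: int, c4: int, c5: int, out: int = 128):
        super().__init__()
        self.l3 = nn.Conv2d(c3, out, kernel_size=1)  # lateral 1x1 convolutions
        self.l4 = nn.Conv2d(c4, out, kernel_size=1)
        self.l5 = nn.Conv2d(c5, out, kernel_size=1)

    def forward(self, f3: torch.Tensor, f4: torch.Tensor, f5: torch.Tensor):
        p5 = self.l5(f5)                                           # coarsest scale
        p4 = self.l4(f4) + F.interpolate(p5, size=f4.shape[-2:])   # top-down add
        p3 = self.l3(f3) + F.interpolate(p4, size=f3.shape[-2:])
        return p3, p4, p5   # multi-scale maps for the segmentation head
```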
In this embodiment, the Focus structure slices the input image to reduce the amount of computation. However, fundus images obtained in practice have low resolution, and using the Focus-processed image directly in subsequent convolution operations would impair the extraction of edge information such as lines and textures in the shallow convolution layers. Therefore, to capture more detail of the fundus image, this embodiment introduces an inner convolution layer after the Focus structure. Specifically, for the H×W×4C feature map X generated by the Focus structure, the inner convolution layer first adjusts the feature vector v_ij at each position of X to size 1×1×k², then expands the adjusted vector F into a convolution kernel Z of size k×k×1. The kernel Z is then multiplied with the feature vectors of the k×k region around v_ij to obtain a feature map F_p of size k×k×4C. Finally, the k×k feature vectors of 4C dimensions in F_p are summed to obtain the inner convolution result v′_ij. Through this operation, a convolution kernel can be generated from the channels containing the width-height information and used to aggregate the feature vectors of the region, yielding the preliminary feature map. In this embodiment, the preliminary feature map is a spatially specific feature map Y, which provides richer detail for subsequent convolution operations.
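A minimal sketch of such a position-specific inner convolution, assuming an involution-style design in which a 1×1 convolution turns each position's channel vector into its own k×k kernel that weights that position's neighbourhood; the module and layer names are illustrative, not taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InnerConv(nn.Module):
    """Each pixel's channel vector v_ij generates the k x k kernel Z that
    weights that pixel's own k x k neighbourhood; the window is then summed."""
    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.k = k
        self.kernel_gen = nn.Conv2d(channels, k * k, kernel_size=1)  # v_ij -> kernel Z

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        k = self.k
        kernels = self.kernel_gen(x).view(b, 1, k * k, h, w)   # one kernel per position
        patches = F.unfold(x, k, padding=k // 2).view(b, c, k * k, h, w)
        return (kernels * patches).sum(dim=2)                  # sum over the k*k window
```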
In this embodiment, the semantic segmentation of the fundus feature map is mainly handled by a U-Net network, on which an encoder and a decoder are built: the encoder performs feature extraction of the fundus image based on the U-Net network, while the decoder recovers the details, edges and other information of the feature maps produced by the encoder, giving the overall U-shaped structure. Finally, each pixel of the feature map is predicted and output, yielding the segmented image. By combining the encoder, the decoder and the U-Net structure, this embodiment fuses contextual semantics, trains quickly and needs little data, making it well suited to the pain point of medical images that require real-time segmentation.
In this embodiment, the U-Net network fuses features by skip-connection "stitching": features are concatenated in the channel dimension to form thicker features, which ensures that the final recovered feature map fuses more low-level semantic features as well as features of different scales, so that segmentation recovers the fine edge information that matters most in the medical image field. U-Net also adds multiple feature channels during upsampling, allowing more of the original image texture to be propagated and the lesion contour details to be restored. In addition, because U-Net uses valid convolutions throughout, the spatial-domain context features that must not be lost in the segmentation result are preserved. The valid convolution mode is an existing technique and is not described further in this embodiment.
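A minimal sketch of the valid-convolution building block this refers to, as in the original U-Net: unpadded 3×3 convolutions shrink the map by 2 pixels per dimension, so encoder features must be centre-cropped before the channel-dimension concatenation. Function names are illustrative.

```python
import torch
import torch.nn as nn

def double_conv(cin: int, cout: int) -> nn.Sequential:
    """Two 'valid' (padding=0) 3x3 convolutions; each trims 2 pixels per dimension."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, padding=0), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, kernel_size=3, padding=0), nn.ReLU(inplace=True),
    )

def crop_and_concat(enc: torch.Tensor, dec: torch.Tensor) -> torch.Tensor:
    """Centre-crop the encoder map to the decoder size, then 'stitch' on channels."""
    dh = (enc.shape[-2] - dec.shape[-2]) // 2
    dw = (enc.shape[-1] - dec.shape[-1]) // 2
    enc = enc[..., dh:dh + dec.shape[-2], dw:dw + dec.shape[-1]]
    return torch.cat([enc, dec], dim=1)
```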
S120, identifying a first feature in the blood vessel image, acquiring a plurality of second features centered on the first feature, measuring the actual pixel widths of the second features based on a first condition, comparing the actual pixel widths against a preset pixel width to obtain the actual error, and obtaining a preliminary detection result based on an error threshold.
Referring to fig. 4, in an embodiment of the application the first feature is the optic disk in the blood vessel image, and identifying the first feature comprises: extracting the features that fall within the threshold range in the blood vessel image based on a first target detection frame.
The first target detection frame is generated by machine vision; any existing detection-frame generation technique may be used, and it is not described further in the embodiments of the application.
Acquiring a plurality of second features centered on the first feature, measuring the actual pixel widths of the plurality of second features based on a first condition, comparing the actual pixel widths against a preset pixel width to obtain the actual error, and obtaining a preliminary detection result based on an error threshold, where the second features are blood vessels in the blood vessel image, comprises: taking the center of the optic disk as origin, measuring the actual pixel widths of a plurality of blood vessels within the band extending 1/2 to 1 disk diameter from the disk edge, and comparing the actual pixel widths of the plurality of blood vessels with the preset pixel width to obtain the actual error. Through this processing, the association between the blood vessels and the disease is obtained.
In an embodiment, the first condition is the number of blood vessels selected on the basis of clinical experience; for example, the first condition may be half of the blood vessels in a cross-shaped distribution, six randomly distributed blood vessels in other embodiments, or a plurality of blood vessels in other distribution patterns in further embodiments.
Regarding the correlation between DR stage and retinal vessel diameter, in a series of 210 cases ranging from non-proliferative to proliferative DR, the central retinal vein equivalent (CRVE) was (259.60±22.15) μm, (264.37±24.87) μm, (270.00±21.67) μm and (280.83±26.61) μm for the successively more severe stages, respectively.
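A minimal sketch of the width measurement around the disk follows, assuming vessels are measured where they cross a sampling circle placed inside the 1/2-1 disk-diameter band; the circle radius, sample count and run-length width estimate are illustrative choices, not fixed by the patent.

```python
import numpy as np

def vessel_widths_near_disc(vessel_mask: np.ndarray, cx: float, cy: float,
                            disc_r: float, n_samples: int = 720) -> list:
    # Sampling circle in the middle of the band 1/2-1 disc diameter (DD = 2*disc_r)
    # outside the disk edge; each vessel crossing becomes a run of 'on' samples.
    r = disc_r + 0.75 * (2.0 * disc_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, vessel_mask.shape[1] - 1)
    ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, vessel_mask.shape[0] - 1)
    on_vessel = vessel_mask[ys, xs] > 0
    widths, run = [], 0
    for hit in np.concatenate([on_vessel, on_vessel[:1]]):      # close the circle
        if hit:
            run += 1
        elif run:
            widths.append(run * r * (2.0 * np.pi / n_samples))  # arc length in pixels
            run = 0
    return widths

# A vessel is flagged when |measured width - preset pixel width| exceeds
# the error threshold, giving the preliminary detection result.
```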
Referring to fig. 2, an embodiment of the present application further provides an image processing apparatus 200 for diabetic retinopathy diagnosis, including:
the image acquisition module 210 acquires a fundus image to be identified, and performs image segmentation on the fundus image to extract blood vessel features in the fundus image to obtain a blood vessel image.
The identifying module 220 is configured to identify a first feature in the blood vessel image, acquire a plurality of second features centered on the first feature, measure the actual pixel widths of the plurality of second features based on a first condition, compare the actual pixel widths against a preset pixel width to obtain the actual error, and obtain a preliminary detection result based on an error threshold.
Referring to fig. 3, an image processing device 300 for diabetic retinopathy diagnosis may vary considerably in configuration or performance and may include one or more processors 301 and a memory 302, in which one or more application programs or data may be stored. The memory 302 may be transient or persistent storage. An application program stored in the memory 302 may include one or more modules (not shown), each of which may include a series of computer-executable instructions in the image processing device. Further, the processor 301 may be arranged to communicate with the memory 302 and execute, on the image processing device, a series of computer-executable instructions in the memory 302. The image processing device may also include one or more power supplies 303, one or more wired or wireless network interfaces 304, one or more input/output interfaces 305, one or more keyboards 306, and so on.
In a specific embodiment, an image processing device for diabetic retinopathy diagnosis includes a memory and one or more programs, where the one or more programs are stored in the memory and may include one or more modules, each module possibly comprising a series of computer-executable instructions in the image processing device; the one or more processors are configured to execute the one or more programs, which contain computer-executable instructions for:
acquiring a fundus image to be identified, performing image segmentation on the fundus image, and extracting blood vessel characteristics in the fundus image to obtain a blood vessel image;
identifying a first feature in the blood vessel image, acquiring a plurality of second features centered on the first feature, measuring the actual pixel widths of the plurality of second features based on a first condition, comparing the actual pixel widths against a preset pixel width to obtain the actual error, and obtaining a preliminary detection result based on an error threshold.
The following describes each component of the processor in detail:
In this embodiment, the processor is an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the application, for example one or more digital signal processors (DSPs) or one or more field-programmable gate arrays (FPGAs).
Alternatively, the processor may perform various functions, such as performing the method shown in fig. 2 described above, by running or executing a software program stored in memory, and invoking data stored in memory.
In a particular implementation, the processor may include one or more microprocessors, as one embodiment.
The memory is configured to store a software program for executing the scheme of the present application, and the processor is used to control the execution of the software program, and the specific implementation manner may refer to the above method embodiment, which is not described herein again.
Alternatively, the memory may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store the desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these. The memory may be integrated with the processor, or may exist separately and be coupled to the processing unit through the processor's interface circuit, which is not specifically limited by the embodiments of the application.
It should be noted that the processor structure shown in this embodiment does not limit the apparatus; an actual apparatus may include more or fewer components than shown in the drawings, may combine some components, or may arrange the components differently.
In addition, the technical effects of the processor may refer to the technical effects of the method described in the foregoing method embodiments, which are not described herein.
It should be appreciated that the processor in embodiments of the application may be other general purpose processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), off-the-shelf programmable gate arrays (field programmable gate array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It should also be appreciated that the memory in embodiments of the application may be volatile memory or nonvolatile memory, or may include both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
The above embodiments may be implemented in whole or in part by software, hardware (e.g., circuitry), firmware, or any combination thereof. When implemented in software, the above embodiments may take the form, in whole or in part, of a computer program product comprising one or more computer instructions or computer programs. When the computer instructions or computer program are loaded or executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center containing one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium such as a solid-state disk.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or plural.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method for diabetic retinopathy diagnosis, characterized in that the method comprises:
acquiring a fundus image to be identified, performing image segmentation on the fundus image, and extracting blood vessel characteristics in the fundus image to obtain a blood vessel image;
identifying a first feature in the blood vessel image, acquiring a plurality of second features centered on the first feature, measuring the actual pixel widths of the plurality of second features based on a first condition, comparing the actual pixel widths against a preset pixel width to obtain the actual error, and obtaining a preliminary detection result based on an error threshold.
2. The image processing method for diabetic retinopathy diagnosis according to claim 1, wherein image-segmenting the fundus image to extract the blood vessel features and obtain a plurality of blood vessel feature images comprises: inputting the fundus image into a segmentation model for forward propagation to obtain a corresponding segmentation map; the segmentation model comprises five convolution layers, the first convolution layer contains a Focus structure and an inner convolution layer connected to it, the Focus structure converts the width-height information of the fundus image into channels, and the inner convolution layer extracts from the fundus image, through inner convolution, a preliminary feature map containing shallow features.
3. The image processing method for diabetic retinopathy diagnosis according to claim 2, wherein the convolution structure in the third convolution layer is a deformable convolution.
4. The image processing method for diabetic retinopathy diagnosis according to claim 2, further comprising: and predicting and outputting the pixels in the extracted preliminary feature map based on a U-Net network through a decoder to obtain a segmented image, namely a blood vessel feature image.
5. The image processing method for diabetic retinopathy diagnosis according to claim 4, wherein the U-Net network is constructed based on valid convolution structure.
6. The image processing method for diabetic retinopathy diagnosis according to claim 4, wherein the first feature is a disk in a blood vessel image, and identifying the first feature includes: and extracting the features which accord with the threshold range from the blood vessel image based on the first target detection frame.
7. The image processing method for diabetic retinopathy diagnosis according to claim 5, wherein acquiring a plurality of second features centered on the first feature, measuring the actual pixel widths of the plurality of second features based on a first condition, comparing the actual pixel widths against a preset pixel width to obtain the actual error, and obtaining a preliminary detection result based on an error threshold, where the second features are blood vessels in the blood vessel image, comprises: taking the center of the optic disk as origin, measuring the actual pixel widths of a plurality of blood vessels within the band extending 1/2 to 1 disk diameter from the disk edge, and comparing the actual pixel widths of the plurality of blood vessels with the preset pixel width to obtain the actual error.
8. The image processing method for diabetic retinopathy diagnosis according to claim 7, further comprising preprocessing the fundus image before image division extraction, the preprocessing including acquiring a contrast of the fundus image and enhancing the contrast of the fundus image.
9. The image processing method for diabetic retinopathy diagnosis according to claim 8, wherein acquiring the contrast of the fundus image comprises performing weighted-average graying on the fundus image, specifically comprising: acquiring the pixel values of the R, G and B channels in the fundus image and their corresponding weights, and performing weighted averaging based on a weighting formula, the weighting formula being:
Gray_Image = ω_R·R + ω_G·G + ω_B·B, where ω_R is the weight of the R-channel component pixel value, ω_G is the weight of the G-channel component pixel value, and ω_B is the weight of the B-channel component pixel value, with ω_G > ω_R > ω_B.
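A minimal NumPy sketch of this graying step; the claim fixes only the ordering ω_G > ω_R > ω_B, so the standard luminance coefficients used as defaults below are illustrative, not values taken from the patent.

```python
# Weighted-average grayscale conversion of an RGB fundus image.
import numpy as np

def weighted_gray(rgb: np.ndarray,
                  w_r: float = 0.299, w_g: float = 0.587, w_b: float = 0.114) -> np.ndarray:
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    gray = w_r * r + w_g * g + w_b * b          # satisfies w_g > w_r > w_b
    return np.clip(gray, 0, 255).astype(np.uint8)
```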
10. The image processing method for diabetic retinopathy diagnosis according to claim 9, wherein enhancing the contrast of the fundus image comprises equalizing the histogram of the fundus image after graying, calculating a dynamic threshold after equalization, clipping the histogram based on the dynamic threshold, and adjusting the gray values.
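The equalize-then-clip procedure of claim 10 closely resembles contrast-limited adaptive histogram equalization; assuming that reading, OpenCV's CLAHE gives a one-call approximation, with clipLimit playing the role of the dynamic threshold (the values below are illustrative).

```python
# Contrast enhancement of a grayscale fundus image via CLAHE.
import cv2
import numpy as np

def enhance_contrast(gray: np.ndarray) -> np.ndarray:
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)  # expects an 8-bit single-channel image
```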
CN202310425913.7A 2023-04-20 2023-04-20 Image processing method for diabetic retinopathy diagnosis Withdrawn CN116843612A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310425913.7A CN116843612A (en) 2023-04-20 2023-04-20 Image processing method for diabetic retinopathy diagnosis

Publications (1)

Publication Number Publication Date
CN116843612A true CN116843612A (en) 2023-10-03

Family

ID=88167732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310425913.7A Withdrawn CN116843612A (en) 2023-04-20 2023-04-20 Image processing method for diabetic retinopathy diagnosis

Country Status (1)

Country Link
CN (1) CN116843612A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340789A (en) * 2020-02-29 2020-06-26 平安科技(深圳)有限公司 Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels
CN115082388A (en) * 2022-06-08 2022-09-20 哈尔滨理工大学 Diabetic retinopathy image detection method based on attention mechanism
CN115063425A (en) * 2022-08-18 2022-09-16 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Reading knowledge graph-based structured inspection finding generation method and system

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
HOUWEI FENG ET AL.: "FYU-Net: A Cascading Segmentation Network for Kidney Tumor Medical Imaging", 《COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE》, vol. 2022, 18 October 2022 (2022-10-18), pages 1 - 10 *
JIANBO MAO ET AL.: "Automated diagnosis and quantitative analysis of plus disease in retinopathy of prematurity based on deep convolutional neural networks", 《ACTA OPHTHALMOLOGICA》, pages 339 *
YAGNA SAI KALYAN REBBA ET AL.: "Deep Learning Methods for the Assessment of Vascular Diameters for Diabetic Retinopathy Screening", 《2021 2ND GLOBAL CONFERENCE FOR ADVANCEMENT IN TECHNOLOGY (GCAT)》, 9 November 2021 (2021-11-09), pages 1 - 5 *
ZHOU LIQUN: "Design and Implementation of a Nodule Detection System in Thyroid Ultrasound Images", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, MEDICINE & HEALTH SCIENCES》, no. 02, 15 February 2023 (2023-02-15), pages 17 - 32 *
WEN HAIQIONG ET AL.: "Adaptive Threshold Image Enhancement Algorithm Based on Histogram Equalization", 《CHINA INTEGRATED CIRCUIT》, vol. 31, no. 03, 31 March 2022 (2022-03-31), pages 38 - 42 *
SHEN ZHIJUN ET AL.: "A Preliminary Study of Retinal Vessel Caliber Changes in Early Diabetic Retinopathy", 《OPHTHALMOLOGY》, vol. 31, no. 3, pages 195 - 199 *
ZHAO NING: "Research on an Intelligent Counting Algorithm and System Design for Medicine Blister Packs Based on Machine Vision", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, MEDICINE & HEALTH SCIENCES》, no. 01, pages 7 - 13 *

Similar Documents

Publication Publication Date Title
Rahim et al. Automatic screening and classification of diabetic retinopathy and maculopathy using fuzzy image processing
Reza et al. Automatic tracing of optic disc and exudates from color fundus images using fixed and variable thresholds
Jamal et al. Retinal image preprocessing: background and noise segmentation
Budai et al. Multiscale Blood Vessel Segmentation in Retinal Fundus Images.
Mahendran et al. Identification of exudates for Diabetic Retinopathy based on morphological process and PNN classifier
Adalarasan et al. Automatic detection of blood vessels in digital retinal images using soft computing technique
Soomro et al. Retinal blood vessels extraction of challenging images
Purandare et al. Hybrid system for automatic classification of diabetic retinopathy using fundus images
Güven Automatic detection of age-related macular degeneration pathologies in retinal fundus images
Rahim et al. Automatic detection of microaneurysms for diabetic retinopathy screening using fuzzy image processing
CN115205315A (en) Fundus image enhancement method for maintaining ophthalmologic physical signs
Gour et al. Blood vessel segmentation using hybrid median filtering and morphological transformation
Saravanan et al. Automated red lesion detection in diabetic retinopathy
CN116843612A (en) Image processing method for diabetic retinopathy diagnosis
Mahesh Comparative analysis on U-Net based Retinal Blood Vessel Segmentation
Oommen et al. A research insight toward the significance in extraction of retinal blood vessels from fundus images and its various implementations
Puranik et al. Morphology based approach for microaneurysm detection from retinal image
Al-Fahdawi et al. An automatic corneal subbasal nerve registration system using FFT and phase correlation techniques for an accurate DPN diagnosis
Mahmoudinezhad et al. Deep Learning Estimation of 10-2 Visual Field Map Based on Macular Optical Coherence Tomography Angiography Measurements
Gadriye et al. Neural network based method for the diagnosis of diabetic retinopathy
Janakiraman et al. Reliable IoT-based health-care system for diabetic retinopathy diagnosis to defend the vision of patients
Makala et al. Survey on automatic detection of diabetic retinopathy screening
Ramya et al. Diabetic retinopathy detection through feature aggregated generative adversarial network
Poonkasem et al. Detection of hard exudates in fundus images using convolutional neural networks
Bindhya et al. A Review on Methods of Enhancement And Denoising in Retinal Fundus Images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20231003