WO2018116321A2 - Retinal fundus image processing method - Google Patents


Info

Publication number
WO2018116321A2
WO2018116321A2 (PCT/IN2017/050604)
Authority
WO
WIPO (PCT)
Prior art keywords
image
fundus image
processing
computer processor
retinal
Prior art date
Application number
PCT/IN2017/050604
Other languages
French (fr)
Other versions
WO2018116321A3 (en)
Inventor
SreeHarish MUPPIRISETTY
Sandeep Reddy GUDIMETLA
Original Assignee
Braviithi Technologies Private Limited
Priority date
Filing date
Publication date
Application filed by Braviithi Technologies Private Limited filed Critical Braviithi Technologies Private Limited
Publication of WO2018116321A2 publication Critical patent/WO2018116321A2/en
Publication of WO2018116321A3 publication Critical patent/WO2018116321A3/en

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/0016 Operational features thereof
    • A61B3/0025 Operational features thereof characterised by electronic signal processing, e.g. eye models
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B3/1241 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes specially adapted for observation of ocular blood flow, e.g. by fluorescein angiography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Definitions

  • The present invention relates to the field of retinal image processing.
  • The invention relates to a computer implemented method of processing a retinal fundus image.
  • The images processed by the computer implemented method, in integration with algorithms, return data that is used in automated identification of disease states of the eye, specifically the retina.
  • The present invention relates to a method for diagnosing retinal pathological changes in a subject, which then assesses and grades risk level according to the Early Treatment Diabetic Retinopathy Study (ETDRS) based on the retinal image data acquired.
  • ETDRS: Early Treatment Diabetic Retinopathy Study
  • Imaging of human organs, especially the eye, is critical in the diagnosis of disease conditions, particularly ocular diseases. This is especially true for the human retina, where the presence of a large network of blood vessels and nerves makes it a near-ideal window for exploring the effects of diseases that harm vision, such as the diabetic retinopathy seen in diabetic patients.
  • Advances in computer-aided image processing and analysis technologies are essential to make imaging-based disease diagnosis scalable, cost-effective, and reproducible. Such advances would directly result in effective triage of patients, leading to timely treatment and better quality of life.
  • Image processing systems are used in standard clinical practices for development of a diagnostic method for various diseases. Rapid development of digital imaging and computer vision has increased the potential of using the image processing technologies in ophthalmology.
  • Fundus photography involves capturing a photograph of the back of the eye, i.e. the fundus. These fundus images are captured by special fundus cameras that consist of an intricate microscope attached to a flash-enabled camera. The retinal fundus images provide vital information about the health of the sensory part of the visual system.
  • Diabetic retinopathy is a diabetes complication that affects the eyes.
  • Diabetic retinopathy is caused by damage to the blood vessels of the light-sensitive tissue at the back of the retina.
  • Diabetic retinopathy is conventionally assessed through a dilated eye examination.
  • Additionally, tests such as a fluorescein angiogram and ocular coherence tomography (OCT) can be conducted.
  • DR, or diabetic retinopathy, is categorized based on severity. The earliest visible change to the retina is known as background retinopathy. The capillaries become blocked and bulged, leaking blood or fluid.
  • Mild and moderate non-proliferative retinopathy is the second stage, where the retina may be blocked, decreasing the supply of nutrients and oxygen to certain areas of the retina.
  • Severe non-proliferative retinopathy may lead to blocked oxygen supply, leading to retinal ischemia.
  • Growth of a new supply of blood vessels to compensate for the lack of oxygen is called neovascularization.
  • When DR progresses to proliferative retinopathy, the abnormal blood vessels are fragile and break, causing hemorrhage in the vitreous humor, leading to scarring and retinal detachment.
  • Signs of retinal damage are the formation of hard exudates (HE), cotton wool spots (CWS), microaneurysms (MA) and hemorrhages.
  • To detect these signs, image processing techniques such as thresholding, filtering and morphological operators were used. Recent research is focused on implementing segmentation, edge detection, mathematical modeling, feature extraction, classification, pattern recognition and texture analysis techniques for blood vessel enhancement and detection of HE, CWS, MA and hemorrhages.
  • US8787638 discloses methods and devices for diagnosing and/or predicting the presence, progression and/or treatment effect of a disease characterized by retinal pathological changes in a subject.
  • Retina images are captured from the subject.
  • The images are preprocessed to enhance the image contrast using mathematical morphological operations and wavelet transform.
  • The optic disc and macula in the preprocessed images are located by morphological analysis.
  • The abnormal patterns are analyzed using a wavelet algorithm; the detection results are integrated and the severity of diabetic retinopathy is graded based on the integrated results.
  • US8503749 discloses a method for diagnosing diseases having retinal manifestations, including retinal pathologies, that includes the steps of providing a CBIR system including an archive of stored digital retinal photography images and diagnosed patient data corresponding to the retinal photography images, the stored images each indexed in a CBIR database using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the stored images.
  • The digital retinal image is collected from a patient and automatically characterized to locate the vascular arcades, optic disc and macula regions.
  • US9089288 discloses a fundus camera using infrared light sources, which includes an imaging optical path and an optical path for focusing and positioning, the two optical paths sharing a common set of retina objective lenses, and a computer-assisted method for retinal vessel segmentation in general and diabetic retinopathy (DR) image segmentation in particular.
  • The method is primarily based on the Multiscale Production of the Matched Filter (MPMF) scheme, which, together with the fundus camera, is useful for non-invasive diabetic retinopathy detection and monitoring.
  • MPMF: Multiscale Production of the Matched Filter
  • US8687862 discloses a platform for automated analysis of retinal images, for obtaining from them information characterizing retinal blood vessels which may be useful in forming a diagnosis of a medical condition.
  • One aspect of the invention proposes that a plurality of characteristics of the retina are extracted, in order to provide data useful for enabling an evaluation of cardiovascular risk prediction, or even diagnosis of a cardiovascular condition.
  • US20170020389A1 discloses a system that automatically captures a fundus image with limited user interaction such that the fundus image has one or more desired characteristics.
  • The system can retrieve at least one image from an image capture device, process the at least one image digitally to generate a processed image, detect at least one point of interest in the processed image, determine that each of the at least one point of interest is representative of a corresponding feature of the eye, and obtain a fundus image of the eye based on the at least one image.
  • CA2458815 discloses a robust technique to automatically grade retinal images through detection of lesions that occur early in the course of diabetic retinopathy: dot hemorrhages or microaneurysms, blot and striate hemorrhages, lipid exudates and nerve-fiber layer infarcts.
  • The invention includes methods to extract the optic nerve in the appropriately identified fields, and to track and identify the vessels.
  • WO2010131944 discloses an apparatus for ascertaining the area of foveal avascular zone (FAZ) of the retina based on the digital map of retinal vasculature for reliable determination of the FAZ area to assist in monitoring and grading diabetic retinopathy.
  • The apparatus is able to provide a comprehensive grading for severity of Diabetic Retinopathy and Diabetic Maculopathy.
  • However, the risk levels are not graded according to the Early Treatment Diabetic Retinopathy Study (ETDRS).
  • WO2016032397 discloses a method of assessing the quality of a retinal image that includes selecting at least one region of interest within a retinal image corresponding to a particular structure of the eye; a quality score is calculated in respect of the, or each, region of interest.
  • Each region of interest is typically one associated with pathology, as the optic disc and the macula are. It generates a quality score of the image.
  • JP2010178802 discloses an analysis system comprising an analysis computer provided with a fundus image data input receiving means to receive input of fundus image data, and a filter forming means to form a double ring filter.
  • The conventional methods further fail to detect multiple parameters or anomalies with respect to the retinal pathological changes.
  • DRS: Diabetic Retinopathy Study
  • ETDRS has been a standard for visual acuity testing in most clinical research for more than 15 years.
  • It is an aspect of the invention to provide a computer implemented method, as implemented by one or more computing devices configured with specific executable instructions, for processing a retinal fundus image comprising:
  • pre-processing of the fundus image by a computer processor, obtaining a grey-scale image using a conversion module of the computer processor for extraction of vessels/vasculature using morphological operators;
  • pre-processing is done by computer implemented extraction of the frame border region around the image using a computer processor, and wherein the density of vasculature corresponds to the presence or absence of vitreous hemorrhage in comparison to control values.
  • It is an aspect of the invention to provide a computer implemented method, as implemented by one or more computing devices configured with specific executable instructions, for processing a retinal fundus image for identifying an over-exposed image comprising:
  • a computer implemented method as implemented by one or more computing devices configured with specific executable instructions for processing a retinal fundus image to identify the optic disc (OD), comprising the steps of:
  • subjecting the retinal fundus image to computer implemented image processing techniques to extract and identify components of interest which correspond to ocular anomalies such as hard exudates (HE), hemorrhages, optic disc (OD), fovea and macula, wherein the processing comprises:
  • identification of the optic disc further comprises the steps of:
  • the gray scale image is divided with reference to a centroid of the optic disc.
  • estimation of the location of the avascular region, estimation of the fovea region and hard exudates identification are done by one or more computer processors, wherein the macula candidate region is determined as the region within 2-2.5 times the optic disc radius in the upward and downward directions of the said fundus image,
  • and the fovea region is identified within the macula candidate region.
  • a computer implemented method as implemented by one or more computing devices configured with specific executable instructions for processing retinal fundus image data for foreground candidates comprising:
  • the hue for the foreground object is determined based on the pixel intensity of the dominant color channel;
  • retinal fundus image data is generated by the computer processor;
  • the hues correspond to the presence or absence of cotton wool spots.
  • Figure 1a illustrates the flow chart of the method for retinal fundus image processing to detect vitreous hemorrhage and underexposed image (UE).
  • Figure 2 illustrates the flow chart of the method for retinal fundus image processing to detect Overexposed image (OE).
  • OE: Overexposed image
  • Figure 3 illustrates the flow chart of the method for retinal fundus image processing to detect Optic disc (OD).
  • Figure 4 illustrates the flow chart of the method for retinal fundus image processing to detect Fovea.
  • Figure 5a illustrates the flow chart of the method for retinal fundus image processing to detect Hard Exudates (HE).
  • Figure 5b illustrates the flow chart of the method for retinal fundus image processing to detect Hemorrhages (HM).
  • Figure 6a illustrates the flow chart of the method for retinal fundus image processing to detect Hemorrhages (HM).
  • HM: Hemorrhages
  • Figure 6b illustrates the flow chart of the method for retinal fundus image processing to detect Hemorrhages (HM).
  • Figure 7a illustrates the flow chart of the method for retinal fundus image processing to detect Cotton wool spots (CWS).
  • Figure 7b illustrates the flow chart of the method for retinal fundus image processing to detect Cotton wool spots (CWS).
  • Diabetic retinopathy is damage to the retina caused by complications of diabetes, which can eventually lead to blindness. Diabetic retinopathy does not show any early warning signs, but symptoms include new blood vessels formed at the back of the eye, vitreous hemorrhage, specks of blood or spots floating in a person's visual field, and blurred vision. On funduscopic exam, a physician will see cotton wool spots and hard exudates, because of which the perimeter and spread area of the retinal blood vessels appear to deviate from normal observations.
  • the invention discloses a computer implemented method for retinal fundus image processing of high pliability achieved by a computing device configured with specific executable instructions/ algorithms which are the direct translation of morphological, geometrical, biological and positional properties of the retina as defined in ophthalmology.
  • The translation is based on a computer implemented method applying simple mathematical and geometrical rules by the processor to process the given fundus image as disclosed in this invention, and has the innate ability to process montages.
  • Montages are specialized fundus images which are easily processed by the computer implemented method as disclosed herein. They are compilations and visualizations of fundus images with varying angles and centers of focus, like the optic disc, macula etc. Montages enable doctors to analyze a much wider area of the retina, which is not possible through a single fundus image, for example a posterior-pole fundus image.
  • The present invention handles them without any special tweak.
  • Fundus imaging is prone to many artefacts and human errors.
  • Some of the examples include oil stains on the fundus camera lens, eyelid appearance in the fundus image, suboptimal camera handling technique, and light issues due to wear and tear of the lens.
  • The invention has specific capabilities of judging a given fundus image by its visual quality: underexposed, overexposed or good quality. This capability advises the technician handling the fundus camera that an image should be retaken, avoiding artefacts and delivering a maximum quality report for further validation by the doctor, thereby reducing the overall time in the entire process, i.e. from procurement of the fundus image to analysis and validation by the doctor.
  • Any digital color fundus image is the cumulative effect of combining the three major color channels used in digital photography: red, green and blue, in short RGB. When these channels are separated and viewed individually, various features already present in the color fundus image appear with varying levels of clarity and resolution. For most retinopathy disease feature detection, the green channel gives the best quality of resolution and visualization of different aspects of a color fundus image.
  • An image is organized as W x H x C, where W represents the number of pixel intensities horizontally arranged, H represents the number of pixel intensities vertically arranged, and C represents the colorMode or specific color channel (RGB) information. Typically, a grey channel has the dimensions W x H.
  • The extracted green channel is then converted to a gray scale image using the Grayscale routine of the EBImage package.
  • The conversion module is the Grayscale routine.
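The channel organization and green-channel extraction described above can be sketched in Python with NumPy. This is an illustrative stand-in (the patent itself uses the Grayscale routine of the R EBImage package, and `green_channel_gray` is a hypothetical helper name):

```python
import numpy as np

def green_channel_gray(rgb):
    """Extract the green channel of a W x H x C fundus image and return it
    as a single gray plane of shape W x H (the C dimension is dropped)."""
    green = rgb[:, :, 1]  # index 1 of the color axis holds the green channel
    return green.astype(np.float64)

# Tiny synthetic "fundus" image: 2 x 2 pixels, 3 color channels (R, G, B).
img = np.zeros((2, 2, 3))
img[:, :, 1] = [[0.2, 0.8], [0.5, 0.1]]  # green intensities only
gray = green_channel_gray(img)           # shape (2, 2)
```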
  • Image processing algorithms can be used to help increase the speed of analyzing these images.
  • Masking means that all the pixel intensities corresponding to indices which point to a probable exudate or hemorrhage in the fundus image are set to zero, thereby setting those areas in the fundus image to blank spots which will not be touched in the current processing for cotton wool spot detection.
  • Morphological procedures are procedures for extracting image components and are based on geometry and algebra. Mathematical operators like dilation and erosion are applied on the binary image. The top-hat filter is another morphological operator which, when applied for bright objects, extracts the bright features. These procedures typically facilitate identifying a key feature or a region of interest (ROI) in an image.
  • Pixel intensity based procedures like erosion, dilation etc. employ superimposition of a sliding window (kernel) and reassign pixel intensities of the image underlying the kernel using logic.
  • Geometrical procedures entail drawing geometrical figures and validating the underlying feature in an image by means of pixel intensity scatter, shape, curvature etc.
  • Segmentation is converting a colour image, where pixel intensity values are continuous (confined to a range and able to vary within it), to a binary image where the pixel intensities are either 0 or 1. This conversion is done by simply enforcing a threshold value. Pixel intensities satisfying a particular enforced condition with respect to the threshold are set to 1 and those that do not are set to 0, resulting in a binary or segmented image.
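The thresholding just described can be expressed in a few lines of NumPy (a minimal sketch; the threshold value and function name are illustrative, not taken from the patent):

```python
import numpy as np

def segment(gray, threshold):
    """Binary segmentation: pixel intensities satisfying the enforced
    condition (here: strictly above the threshold) become 1, the rest 0."""
    return (gray > threshold).astype(np.uint8)

gray = np.array([[0.1, 0.6],
                 [0.7, 0.2]])
binary = segment(gray, 0.5)  # only intensities above 0.5 survive
```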
  • Conversion module: converting a color fundus image to a gray scale image follows the steps described below:
  • RGB color channel image
  • The conversion translates fixed sub-ranges of the pixel intensities in the color image and rescales and fixes them to new values. These new values range from 0 to an 8-bit value.
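That rescaling to an 8-bit range can be sketched as follows, assuming a simple linear min-max rescale (the patent does not spell out the exact mapping, so this is one plausible reading):

```python
import numpy as np

def to_8bit(gray):
    """Linearly rescale a continuous-intensity gray plane to 0..255."""
    lo, hi = gray.min(), gray.max()
    scaled = (gray - lo) / (hi - lo)          # map the observed range to 0..1
    return np.round(scaled * 255).astype(np.uint8)

gray = np.array([[0.0, 0.5],
                 [0.25, 1.0]])
out = to_8bit(gray)  # 0 -> 0, 0.25 -> 64, 0.5 -> 128, 1.0 -> 255
```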
  • In an embodiment disclosed herein, there is provided a computer implemented method for resizing the fundus image.
  • An image of the fundus is captured by the fundus camera and processed or analyzed using computer implemented instructions.
  • The computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors in order to cause the computing system to: access the retinal fundus image, extract the green channel, convert the image to grayscale using the Grayscale routine of the EBImage package, and resize the image.
  • Tested threshold: a single value against which the vasculature density parameter is compared and sorted.
  • The data generated by the image processing technique, when compared with the tested threshold, falls either below or above the value. Grading of the image for the presence or absence of the disease state is done based on the tested threshold value.
  • a computer implemented module to extract and identify vasculature.
  • An image of the fundus is captured by the fundus camera and processed or analyzed using computer implemented instructions.
  • The computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors, comprising the steps: acquiring the retinal fundus image, which is resized; applying morphological methods like the white top-hat transformation to enhance the contrast of the image, followed by normalization of the image, which results in a normalized top-hat transformed image; and applying k-means clustering with a number of iterations > 20000.
  • Compute a vasculature tree image by comparing and extracting pixel values in this image greater than the smallest centroid value of all computed k-means clusters. Segregate all foreground objects in the vasculature tree image based on eccentricity 0; circular objects shall be segregated.
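The clustering step can be illustrated with a plain 1-D Lloyd's k-means on the normalized top-hat intensities, keeping every pixel brighter than the smallest centroid. This is a toy sketch: the iteration count, seed and data below are illustrative, not the patented parameters (the text itself uses more than 20000 iterations):

```python
import numpy as np

def kmeans_1d(values, k, iters=100, seed=0):
    """Plain Lloyd's k-means on a 1-D array of pixel intensities."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # assign every intensity to its nearest centroid, then re-average
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = values[labels == j].mean()
    return centroids

# Toy normalized top-hat image: background near 0, vessels near 1.
tophat = np.array([[0.05, 0.90, 0.10],
                   [0.80, 0.07, 0.95]])
centroids = kmeans_1d(tophat.ravel(), k=2)
# Vasculature tree: pixels with values greater than the smallest centroid.
vasc = tophat > centroids.min()
```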
  • a final vasculature image is obtained after merging the circular and minimal area foreground objects.
  • A sliding window majority pixel rule is applied, which results in a final vasculature image without frame borders. All indices of pixels with a pixel value other than zero are gathered; these represent the pixels across the border of the fundus image.
  • The sliding window algorithm, when applied on the final vasculature image and the normalized top-hat transformed image, yields the frame border indices.
  • the vascular matrix and density is computed.
  • The fundus is assumed to be an ellipse and its area is calculated using an analysis module.
  • The analysis module comprises computing foreground_indices_vasculature, the number of foreground pixels in the removed_frame_final_vasculature image, i.e. the image obtained by applying the sliding window operation on the vasculature image.
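Putting the two quantities together, the density computation might look like this (a sketch that assumes the fundus ellipse is inscribed in the image frame; the patent does not fix the ellipse axes, so the semi-axes below are an assumption):

```python
import numpy as np

def vasculature_density(vasc_binary):
    """Foreground (vasculature) pixel count divided by the fundus area,
    approximating the fundus as an ellipse inscribed in the frame."""
    h, w = vasc_binary.shape
    foreground = int(np.count_nonzero(vasc_binary))  # foreground_indices_vasculature
    fundus_area = np.pi * (w / 2.0) * (h / 2.0)      # ellipse with semi-axes w/2, h/2
    return foreground / fundus_area

vasc = np.zeros((4, 4), dtype=np.uint8)
vasc[1:3, 1:3] = 1                     # 4 foreground pixels
density = vasculature_density(vasc)    # 4 / (pi * 2 * 2)
```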
  • A kernel is typically much smaller than the image on which it is to be employed.
  • The kernel traverses across each pixel of the big image. All pixels covered by the kernel in the big image are checked for the minimal value. The minimal value is then reassigned to all the pixels in the big image covered by the kernel. The kernel is then shifted to the next pixel in the image, and the same procedure follows until the last pixel of the image.
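A standard erosion following this sliding-minimum idea can be written directly. Note this sketch uses the common textbook variant that assigns the window minimum to the pixel under the kernel center, rather than to every covered pixel:

```python
import numpy as np

def erode(image, ksize=3):
    """Erosion: slide a ksize x ksize window over the image and replace
    each pixel with the minimal value covered by the window."""
    pad = ksize // 2
    padded = np.pad(image, pad, mode="edge")  # replicate edges at the border
    out = np.empty_like(image)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + ksize, j:j + ksize].min()
    return out

img = np.array([[1, 1, 1],
                [1, 0, 1],
                [1, 1, 1]], dtype=np.uint8)
eroded = erode(img)  # the lone 0 erodes its entire 3 x 3 neighborhood
```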
  • a computer implemented module which is performed by the computer processor to identify the Optic Disc (OD).
  • An image of the fundus is captured by the fundus camera and processed or analyzed using computer implemented instructions.
  • The computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors, comprising the steps: gather all indices of the vasculature tree from the image obtained after application of the sliding window operation; compute the density of white pixels in this image. Further, the circular coordinates in the upward, downward and diagonal directions are computed to identify the number of connections.
  • The k-means cluster analysis is applied on the density-of-white-pixels image as well as on the result for the total number of connections. Indices with values greater than zero for the number of connections and the highest centroid value are removed.
  • a computer implemented module which is performed by the computer processor to identify the Macula candidate region.
  • An image of the fundus is captured by the fundus camera and processed or analyzed using computer implemented instructions.
  • The computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors, comprising the steps:
  • The relative position of the Optic Disc (OD) in the fundus image is computed by computing the distance between the centroid of the OD and the horizontal lower and upper bounds of the fundus image, which are fixed to 1 and 720 respectively. Starting from the edge of the OD in the fundus image, about 2-2.5 OD radii of vertical distance up and down is computed. This is the macula candidate region.
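As a geometric sketch, the vertical bounds of that candidate region follow directly from the OD centroid and radius. The function and parameter names are illustrative; the 1..720 row range is the one stated in the text:

```python
def macula_candidate_region(od_centroid_row, od_radius, image_height=720, factor=2.5):
    """Rows spanning about 2-2.5 OD radii above and below the optic disc,
    clipped to the fundus image rows (fixed to 1..image_height)."""
    span = factor * od_radius
    top = max(1, int(od_centroid_row - span))
    bottom = min(image_height, int(od_centroid_row + span))
    return top, bottom

# OD centered at row 360 with a 60-pixel radius: 150 rows up and down.
top, bottom = macula_candidate_region(od_centroid_row=360, od_radius=60)
```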
  • In an embodiment is disclosed a computer implemented module which is performed by the computer processor to identify the fovea within the macula candidate region.
  • An image of the fundus is captured by the fundus camera and processed or analyzed using computer implemented instructions.
  • The computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors, comprising the steps: computing the start and end indices of the rows and columns of the macula candidate region. Fovea detection is limited to within the boundaries computed. An imaginary circular area of radius equal to the radius of the OD is marked and scanned, with one sweep equal to the diameter of the OD. For each sweep, the mean intensity of the Best Frame resized gray scale image is computed; this is continued until the last bounds of the macula candidate region.
  • HE or hard exudates are the very early signs of Diabetic Retinopathy (DR). Exudates are an abnormality observed in the first phase of DR. They generally appear in the form of clusters which may or may not be adjacent to a group of microaneurysms or near the anatomical area of the fovea. Exudates are yellowish in colour and are deposits in the internal area of the retina. Hard exudates are found in the posterior pole of the fundus. A computer implemented method for digital image processing to detect hard exudates is disclosed in this embodiment. These are automated methods of retinal fundus image analysis.
  • a computer implemented module which is performed by the computer processor to identify hard exudates.
  • An image of the fundus is captured by the fundus camera and processed or analyzed using computer implemented instructions.
  • the computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors comprising the steps:
  • A segmented image is extracted by using a threshold value of bestframe + x, where x is a value preset by rigorously testing images. From the centroid of the macula, a distance of 4 OD radii is marked as the reference exudate scan region within the exudates image. Three channels (red, green and blue) are obtained by extracting the corresponding color channels from the input colour fundus image. The number of foreground objects in the reference exudate scan region is computed, and for each foreground object a hue value is calculated by the given method:
  • 1. hue = (maxgreen - maxblue) / (maxred - minofallchannels), where maxgreen is the maximum value in the green channel for the foreground object considered; maxblue is the maximum value in the blue channel for the foreground object considered; maxred is the maximum value in the red channel for the foreground object considered; and minofallchannels is the smallest pixel value among all three color channels.
  • 2. If the calculated hue is less than zero, it is reset to 360. On the calculated hue values, apply minimum and maximum thresholds and take all the hue values in between; these are computed as the final hue candidates. If no hue values are observed within these defined bounds, it is assumed that the image has no exudates. Also, if there are no foreground objects in the reference exudate scan region binary image, there are no exudates in the fundus image.
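The per-object hue computation in steps 1-2 can be sketched as follows. The patch data is illustrative, `hue_for_object` is a hypothetical helper name, and the min/max hue bounds are left to the caller:

```python
import numpy as np

def hue_for_object(red, green, blue):
    """Hue for one foreground object from its three color-channel patches,
    per the stated formula; a negative hue is reset to 360."""
    maxred, maxgreen, maxblue = red.max(), green.max(), blue.max()
    minofallchannels = min(red.min(), green.min(), blue.min())
    hue = (maxgreen - maxblue) / (maxred - minofallchannels)
    return 360.0 if hue < 0 else hue

# Toy 2 x 2 patches for one yellowish (exudate-like) candidate object.
red   = np.array([[200., 210.], [220., 205.]])
green = np.array([[180., 190.], [195., 185.]])
blue  = np.array([[40., 50.], [45., 55.]])
h = hue_for_object(red, green, blue)  # (195 - 55) / (220 - 40) = 140 / 180
```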
  • a computer implemented module to detect the hemorrhages.
  • An image of the fundus is captured by the fundus camera and processed or analyzed using computer implemented instructions.
  • the computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors comprising the steps:
  • A sliding window operation is performed on the binary image. The resultant image is the final vasculature-extracted binary image.
  • The hard exudates are masked. From the remnant binary image above, four quadrants are assumed, with the OD centroid as reference.
  • the four quadrants are named binary top left quadrant, binary top right quadrant, binary bottom left quadrant, binary bottom right quadrant. The following steps are followed for each of the above quadrants:
  • The filtered candidates in step 2 above are removed from each corresponding quadrant's binary image.
  • A pixel intensity based filter is further applied to the resultant binary image for each quadrant: all foreground objects which have pixel intensities greater than or equal to the mean intensity of the corresponding quadrant are removed.
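A minimal numpy sketch of the quadrant intensity filter, assuming each foreground object is represented by its pixel coordinate arrays (a simplification of connected-component labelling):

```python
import numpy as np

def filter_quadrant_objects(objects, quadrant_gray):
    """Drop foreground objects whose mean pixel intensity is greater
    than or equal to the mean intensity of the quadrant."""
    quad_mean = quadrant_gray.mean()
    kept = []
    for rows, cols in objects:
        # Keep only objects darker than the quadrant average
        if quadrant_gray[rows, cols].mean() < quad_mean:
            kept.append((rows, cols))
    return kept
```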
  • Cotton wool spots occur secondary to ischemia from retinal arteriole obstruction. DR occurs when the increased glucose level in the blood damages the capillaries, which provide nutrition to the retina. Manifestations of blood and fluid leakage from the capillaries are microaneurysms, hemorrhages, hard exudates, cotton wool spots or venous loops.
  • In an embodiment is disclosed a computer implemented module to detect cotton wool spots. An image of the fundus is captured by the fundus camera and processed or analyzed using computer implemented instructions.
  • the computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors comprising the steps:
  • A segmented image is produced by processing an initial set of images using set algorithms. Hard exudates and hemorrhages (if any) are masked. Three main channels, red, blue and green, are extracted (extraction procedure or relevant terminology). The number of foreground objects in the segmented image is computed and the foreground object hue value is calculated using the following method:
  • hue calculated = 4 + (maxred - maxgreen) / (maxblue - minofallchannels), where maxgreen is the maximum value in the green channel for the foreground object considered, maxblue is the maximum value in the blue channel for the foreground object considered, maxred is the maximum value in the red channel for the foreground object considered, and minofallchannels is the smallest pixel value among all three color channels. In steps 1, 2 and 3 above, if hue calculated is less than zero it is reset to 360.
  • Minimum and maximum thresholds are applied to the hue calculated values, and the hue values falling between them are retained. These are the final cotton wool hue candidates. If there are no hue values within these defined bounds, it is assumed that the image has no cotton wool spots. An area filter is then applied: cotton wool candidates under the hue criterion are removed if they have a major axis length greater than or equal to 3 times the OD radius. If there are no foreground objects in the reference segmented_cws_regions binary image, there are no cotton wool spots in the fundus image.
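The cotton wool hue rule and area filter can be sketched similarly (the hue band itself is preset by testing and is not specified in the text):

```python
def cws_hue(max_red, max_green, max_blue, min_all):
    """Hue per the text: 4 + (maxred - maxgreen) / (maxblue -
    minofallchannels); negative results are reset to 360."""
    denom = max_blue - min_all
    if denom == 0:
        return 0.0  # guard, an assumption
    hue = 4 + (max_red - max_green) / denom
    return 360.0 if hue < 0 else hue

def passes_area_filter(major_axis_len, od_radius):
    """Candidates with a major axis length >= 3 times the OD radius
    are removed by the area filter."""
    return major_axis_len < 3 * od_radius
```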


Abstract

A computer implemented retinal fundus image processing method for processing a retinal fundus image. In one embodiment, the method comprises performing computer implemented processing of the fundus images to identify the optic disc, fovea and macula. The method further comprises processing of the retinal fundus image for identification of overexposed images and ocular anomalies such as hemorrhages, hard exudates and cotton wool spots, and its use in assessing and grading the risk of Diabetic retinopathy using ETDRS.

Description

RETINAL FUNDUS IMAGE PROCESSING METHOD
FIELD OF INVENTION
The present invention relates to the field of retinal image processing. Particularly, the invention relates to a computer implemented method of processing a retinal fundus image. The images processed by the computer implemented method, in integration with algorithms, return data that is used in automated identification of disease states of the eye, specifically the retina. Further, the present invention relates to a method for diagnosing retinal pathological changes in a subject, and for assessing and grading the risk level according to the Early Treatment Diabetic Retinopathy Study (ETDRS) based on the retinal image data acquired.
BACKGROUND OF INVENTION
Imaging of human organs, especially the eye, is critical in the diagnosis of disease conditions, particularly ocular diseases. This is especially true for the human retina, where the presence of a large network of blood vessels and nerves makes it a near-ideal window for exploring the effects of diseases that harm vision, such as the diabetic retinopathy seen in diabetic patients. Advances in computer-aided image processing and analysis technologies are essential to make imaging-based disease diagnosis scalable, cost-effective, and reproducible. Such advances would directly result in effective triage of patients, leading to timely treatment and better quality of life. Image processing systems are used in standard clinical practice in the development of diagnostic methods for various diseases. The rapid development of digital imaging and computer vision has increased the potential for using image processing technologies in ophthalmology. Fundus photography involves capturing a photograph of the back of the eye, i.e. the fundus. These fundus images are captured by special fundus cameras that consist of an intricate microscope attached to a flash-enabled camera. The retinal fundus images provide vital information about the health of the sensory part of the visual system.
Diabetic retinopathy is a diabetes complication that affects the eyes. Diabetic retinopathy is caused by damage to the blood vessels of the light-sensitive tissue at the back of the retina. Diabetic retinopathy is conventionally assessed through a dilated eye examination. In addition, tests such as a fluorescein angiogram and optical coherence tomography (OCT) can be conducted. DR or Diabetic retinopathy is categorized based on severity. The earliest visible change to the retina is known as background retinopathy, in which the capillaries become blocked and bulge, leaking blood or fluid. Mild and moderate non-proliferative retinopathy is the second stage, where parts of the retina may be blocked, decreasing the supply of nutrients and oxygen to certain areas of the retina. Severe non-proliferative retinopathy may lead to a blocked oxygen supply, causing retinal ischemia. The growth of a new supply of blood vessels to compensate for the lack of oxygen is neovascularization. When retinal neovascularization occurs, DR progresses to proliferative retinopathy; the abnormal blood vessels are fragile and break, causing hemorrhage into the vitreous humor and leading to scarring and retinal detachment. Signs of retinal damage are the formation of hard exudates (HE), cotton wool spots (CWS), microaneurysms (MA) and hemorrhages. During the early years of research, image processing techniques such as thresholding, filtering and morphological operators were used. Recent research is focused on implementing segmentation, edge detection, mathematical modeling, feature extraction, classification, pattern recognition and texture analysis techniques for blood vessel enhancement and the detection of HE, CWS, MA and hemorrhages.
US8787638 discloses the methods and devices for diagnosing and/or predicting the presence, progression and/or treatment effect of a disease characterized by retinal pathological changes in a subject. The retina images are captured from the subject. The images are preprocessed to enhance the image contrast using mathematical morphological operations and wavelet transform. The optic disc and macula in the preprocessed images are located by morphological analysis. The abnormal patterns are analyzed using wavelet algorithm; and integrating the detection results and grading the severity of diabetic retinopathy based on the integrated results. However the risk levels are not graded according to Early Treatment Diabetic Retinopathy Study (ETDRS).
US8503749 discloses a method for diagnosing diseases having retinal manifestations, including retinal pathologies, comprising the steps of providing a CBIR system that includes an archive of stored digital retinal photography images and diagnosed patient data corresponding to those images, the stored images each indexed in a CBIR database using a plurality of feature vectors corresponding to distinct descriptive characteristics of the stored images. The digital retinal image is collected from a patient and automatically characterized to locate the vascular arcades, optic disc and macula regions. US9089288 discloses a fundus camera using infrared light sources, which includes an imaging optical path and an optical path for focusing and positioning, the two optical paths sharing a common set of retinal objective lenses, and a computer-assisted method for retinal vessel segmentation in general and diabetic retinopathy (DR) image segmentation in particular. The method is primarily based on the Multiscale Production of the Matched Filter (MPMF) scheme, which, together with the fundus camera, is useful for non-invasive diabetic retinopathy detection and monitoring.
US8687862 discloses a platform for automated analysis of retinal images, for obtaining from them information characterizing retinal blood vessels which may be useful in forming a diagnosis of a medical condition. This aspect of the invention proposes that a plurality of characteristics of the retina are extracted, in order to provide data useful for enabling an evaluation of cardiovascular risk prediction, or even diagnosis of a cardiovascular condition. US20170020389A1 discloses a system that automatically captures a fundus image with limited user interaction such that the fundus image has one or more desired characteristics. The system can retrieve at least one image from an image capture device, process the at least one image digitally to generate a processed image, detect at least one point of interest in the processed image, determine that each of the at least one point of interest is representative of a corresponding feature of the eye, and obtain a fundus image of the eye based on the at least one image.
CA2458815 discloses a robust technique to automatically grade retinal images through detection of lesions that occur early in the course of diabetic retinopathy: dot hemorrhages or microaneurysms, blot and striate hemorrhages, lipid exudates and nerve-fiber layer infarcts. In addition, the invention includes methods to extract the optic nerve in the appropriately identified fields, and to track and identify the vessels.
WO2010131944 discloses an apparatus for ascertaining the area of foveal avascular zone (FAZ) of the retina based on the digital map of retinal vasculature for reliable determination of the FAZ area to assist in monitoring and grading diabetic retinopathy. The apparatus is able to provide a comprehensive grading for severity of Diabetic Retinopathy and Diabetic Maculopathy. However the risk levels are not graded according to Early Treatment Diabetic Retinopathy Study (ETDRS).
WO2016032397 discloses a method of assessing the quality of a retinal image that includes selecting at least one region of interest within a retinal image corresponding to a particular structure of the eye, and calculating a quality score in respect of the, or each, region of interest. Each region of interest is typically one associated with pathology, as the optic disc and the macula are. It generates a quality score for the image.
CN103870838 discloses an eye fundus image characteristics extraction method for diabetic retinopathy, utilizing an eye fundus image shot by a digital non-mydriatic eye fundus camera. It addresses the uneven illumination and low blood vessel contrast of such fundus images through digital non-mydriatic fundus photography combined with image processing and pattern recognition technology. JP2010178802 discloses an analysis system comprising an analysis computer provided with a fundus image data input receiving means to receive input of fundus image data and a filter forming means to form a double ring filter.
However, the risk levels in the prior art are not graded by a method based on the Early Treatment Diabetic Retinopathy Study (ETDRS).
The conventional methods further fail to detect multiple parameters or anomalies with respect to retinal pathological changes. There is a significant need to extract components of interest from a given retinal image. A computer executable method is needed that, executed by a computing device using computer executable instructions, renders the images with the requisite components of interest, such as extraction of blood vessels and extraction or estimation of hard exudates (HE) and cotton wool spots (CWS) from the fundus image, that can be extrapolated to the real-time anomalies in the retina of the eye.
Accordingly, there exists a need for a method for processing retinal images, to extract the components of interest of high quality as well as clarity to identify multiple anomalies using a single computer implemented method that can determine retinal fundus anomalies or detect the presence of a retinal anomaly from the generated data. These images find application in clinical ophthalmology to detect diabetic retinopathy, by grading the images according to the Early Treatment Diabetic Retinopathy Study (ETDRS).
Full disease classifications have developed from the classification that was modified by the Diabetic Retinopathy Study (DRS) and developed for the Early Treatment Diabetic Retinopathy Study (ETDRS), and are aimed at grading retinopathy in the context of the overall severity of ophthalmoscopic signs. ETDRS has been a standard for visual acuity testing in most clinical research for more than 15 years. It classifies patients based on the ocular anomalies in the fundus images and grades them by severity: level 10 (no retinopathy); level 20, very mild NPDR or Non-Proliferative Diabetic Retinopathy; levels 35, 43 and 47, Moderate Non-proliferative Diabetic Retinopathy with few microaneurysms; levels 53 A-E, Severe Non-proliferative Diabetic Retinopathy, where the fundus image identifies extensive (i.e. >20) intraretinal hemorrhages in each quadrant or venous beading in 2+ quadrants; and levels 61, 65, 71, 75, 81 and 85, with indications of neovascularization and vitreous or preretinal hemorrhage, where the disease is graded as Proliferative Diabetic Retinopathy.
OBJECTS OF INVENTION
It is the object of the invention to provide a computer implemented retinal fundus image processing method which is compatible and pliable with a wide range of fundus cameras, each of these having innate and unique hardware optic properties.
It is the primary object of the present invention to provide a computer implemented method for processing a retinal fundus image.
It is the object of the invention to provide a computer implemented retinal fundus image processing method that finds application in identification of retinal anomalies like hemorrhages, Cotton wool spots or CWS, hard exudates or HE, identification of optic disc, and fovea for further anomaly analysis.
It is another object of the invention to provide a computer implemented retinal fundus image processing method wherein the image is pre-processed, and applied with mathematical morphological procedures to detect and grade the ocular anomaly based on the generated data.
It is yet another object of the invention to provide a computer implemented retinal fundus image processing method where the data generated finds application in determining the optic disc corresponding to the real time retinal image, determination of the hemorrhagic region, and estimation of the fovea, hard exudates, and cotton wool spots, based on various parameters of the extracted image. It is another object of the present invention to provide a computer implemented retinal fundus image processing method where the data generated finds application in diagnosing retinal pathological changes in a subject, and is used in assessing and grading the risk level according to the Early Treatment Diabetic Retinopathy Study (ETDRS).
It is another object of the present invention, wherein the retinal pathological changes of the corresponding retinal fundus images generated are graded according to standards set by ETDRS to determine the presence/ absence of diabetic retinopathy. These and other objects and advantages of the present invention will become readily apparent from the following detailed description taken in conjunction with the accompanying drawings.
SUMMARY OF INVENTION
One or more of the problems of the conventional prior art may be overcome by various embodiments of the present invention.
It is an aspect of the invention to provide a computer implemented method as implemented by one or more computing devices configured with specific executable instructions for processing retinal fundus image comprising:
pre-processing of the fundus image by a computer processor; obtaining a grey scale image using a conversion module of a computer processor for extraction of vessels/vasculature using morphological operators;
performing segmentation of the fundus image at the computer processor;
obtaining the binary image using a conversion module at the computer processor; computing the matrix and density of the vasculature image using an analysis module at the computer processor, and
grading the image based on presence or absence of the standard density of the vasculature using a tested threshold value by the computer processor;
wherein pre-processing is done by computer implemented extraction of frame border region around the image using a computer processor, and wherein density of vasculature corresponds to presence or absence of vitreous hemorrhage in comparison to control values.
It is an aspect of the invention to provide a computer implemented method, as implemented by one or more computing devices configured with specific executable instructions for processing retinal fundus image for identifying over exposed image comprising:
pre-processing of the fundus image by a computer processor;
obtaining resized grey scale image;
computing the maximum pixel value for each slice using a computer processor; and filtering slice indices;
wherein slices with indices greater than the size criterion indicate an overexposed image. A computer implemented method, as implemented by one or more computing devices configured with specific executable instructions for processing a retinal fundus image to identify the optic disc (OD), comprises the steps of:
obtaining the retinal fundus image;
subjecting the retinal fundus image to computer implemented image processing techniques to extract and identify components of interest which corresponds to ocular anomalies like hard exudates (HE), hemorrhages, optic disc (OD), fovea and macula, wherein the processing comprises:
removing the frame border region around the retinal fundus image of the subject; performing an image morphological process on the retinal fundus image to extract vasculature from the fundus image;
scanning the retinal fundus image containing extracted vasculature and gray scale image of the original fundus image to detect optic disc; and detecting cotton wool spots by high mean intensity and hazy area of the retinal fundus image.
In one aspect, the method after detecting the optic disc further comprises the steps of:
dividing the gray scale image into at least four quadrants; and
detecting hemorrhages in each of the quadrants based on the mean and standard deviation of intensity on the extracted vasculature. In another aspect of the present invention, the gray scale image is divided with reference to a centroid of the optic disc.
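The quadrant division about the OD centroid might look as follows in numpy (array slicing as a stand-in for the geometric split):

```python
import numpy as np

def split_quadrants(gray, od_row, od_col):
    """Divide a grayscale fundus image into four quadrants with the
    optic-disc centroid (od_row, od_col) as the reference point."""
    return {
        "top_left":     gray[:od_row, :od_col],
        "top_right":    gray[:od_row, od_col:],
        "bottom_left":  gray[od_row:, :od_col],
        "bottom_right": gray[od_row:, od_col:],
    }
```

Hemorrhage detection then proceeds per quadrant using the mean and standard deviation of intensity.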
It is another aspect of the present invention, to provide a method wherein the centroid of the optic disc is obtained by fixing its two-dimensional co-ordinates.
It is another aspect of the invention to provide a computer implemented method, as implemented by one or more computing devices configured with specific executable instructions for processing retinal fundus image for detecting optic disc or OD comprising steps of:
processing the image to identify or detect optic disc (OD) region by the computer processor;
defining the temporal arcade from the OD determined image by the computer processor;
estimating the location of the avascular region in the temporal arcade by the analyzing module of the computer processor;
estimating the macula candidate region based on the centroid distance of the OD; estimating the fovea region within the macula candidate region;
identifying hard exudates;
wherein estimation of the location of the avascular region, estimation of the fovea region and identification of hard exudates are done by one or more computer processors, wherein the macula candidate region is determined as the region within 2-2.5 times the optic disc radius in the upward and downward directions of the said fundus image,
wherein the fovea region is identified within the macula candidate region, and
wherein hard exudates are identified based on the centroid distance of the macula, and the absence of foreground objects indicates no exudates in the fundus image.
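The geometric rules above can be sketched as simple helper functions (the exact bounds and distance metric are assumptions based on the stated multiples of the OD radius):

```python
def macula_band(od_row, od_radius):
    """Vertical extent of the macula candidate region: within 2-2.5 OD
    radii above and below the optic-disc centroid row; the outer bound
    of 2.5 radii is used here."""
    return (od_row - 2.5 * od_radius, od_row + 2.5 * od_radius)

def in_exudate_scan_region(dist_from_macula, od_radius):
    """Reference exudate scan region: within 4 OD radii of the
    macula centroid."""
    return dist_from_macula <= 4 * od_radius
```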
A computer implemented method, as implemented by one or more computing devices configured with specific executable instructions for processing retinal fundus image data for foreground candidates comprising:
obtaining the binary image using a conversion module in the computer processor; masking the hard exudates by the computer processor;
extracting vasculature by applying morphological procedures using the processor; obtaining foreground candidates on application of eccentricity and mean intensity filters; and identifying and labeling the foreground candidates of the given image, wherein presence of foreground candidates corresponds to presence of hemorrhages.
It is another aspect of the invention to provide a computer implemented method, as implemented by one or more computing devices configured with specific executable instructions for processing a retinal fundus image, to detect the retinal pathological changes from the detected hard exudates, hemorrhages or cotton wool spots.
It is another aspect of the invention to provide a computer implemented method, as implemented by one or more computing devices configured with specific executable instructions for processing retinal fundus image data for identification of hues comprising: preprocessing or removing the fundus image frame borders;
extracting a segmented binary image;
masking of the hard exudates;
extraction of the colored channels within the segmented image; and
wherein the hue for the foreground object is determined based on the pixel intensity of the dominant color channel,
wherein the retinal fundus image data is generated by the computer processor,
wherein presence or absence of the hues corresponds to presence or absence of cotton wool spots.
BRIEF DESCRIPTION OF THE DRAWING:
So that the manner in which the features, advantages and objects of the invention, as well as others which will become apparent, may be understood in more detail, a more particular description of the invention briefly summarized above may be had by reference to the embodiment thereof which is illustrated in the appended drawings, which form a part of this specification. It is to be noted, however, that the drawings illustrate only a preferred embodiment of the invention and are therefore not to be considered limiting of the invention's scope, as it may admit to other equally effective embodiments.
Figure 1a: illustrates the flow chart of the method for retinal fundus image processing to detect vitreous hemorrhage and underexposed images (UE).
Figure 2: illustrates the flow chart of the method for retinal fundus image processing to detect Overexposed image (OE).
Figure 3: illustrates the flow chart of the method for retinal fundus image processing to detect Optic disc (OD).
Figure 4: illustrates the flow chart of the method for retinal fundus image processing to detect Fovea.
Figure 5 a: illustrates the flow chart of the method for retinal fundus image processing to detect Hard Exudates (HE).
Figure 5b: illustrates the flow chart of the method for retinal fundus image processing to detect Hard Exudates (HE).
Figure 6a: illustrates the flow chart of the method for retinal fundus image processing to detect Hemorrhages (HM).
Figure 6b: illustrates the flow chart of the method for retinal fundus image processing to detect Hemorrhages (HM).
Figure 7a: illustrates the flow chart of the method for retinal fundus image processing to detect Cotton wool spots (CWS). Figure 7b: illustrates the flow chart of the method for retinal fundus image processing to detect Cotton wool spots (CWS).
DETAILED DESCRIPTION OF THE INVENTION
The following descriptions further illustrate one or more embodiments in detail and any section of the descriptions can be solely used and combined in any suitable manner in one or more embodiments.
Diabetic retinopathy (DR) is damage to the retina caused by complications of diabetes, which can eventually lead to blindness. Diabetic retinopathy does not show any early warning signs, but later signs include new blood vessels formed at the back of the eye, vitreous hemorrhage, specks of blood or spots floating in a person's visual field, and blurred vision. On funduscopic exam, a physician will see cotton wool spots and hard exudates, because of which the perimeter and spread area of the retinal blood vessels appear to deviate from normal observations.
Researchers have been working in the area of image processing for early detection of diabetic retinopathy. The earliest forms of retinopathy are identified by irregularity and oozing of the blood vessels. This leads to the formation of hard exudates (HE), cotton wool spots (CWS), microaneurysms (MA) and hemorrhages, with severe DR leading to blindness. During the early years of research, image processing techniques such as thresholding, filtering and morphological operators were used. Recent research is focused on implementing segmentation, edge detection, mathematical modeling, feature extraction, classification, pattern recognition and texture analysis techniques for blood vessel detection and understanding of its structural intricacies, and for detection of HE, CWS, MA and hemorrhages. Recently Hsu W., Lee M. L. and Wong T. Y. have developed a patented platform for automated analysis of a retinal image, including automatically tracing one or more paths of one or more vessels of a retinal image; the information so obtained may be useful in forming a diagnosis of a medical condition.
The invention discloses a computer implemented method for retinal fundus image processing of high pliability, achieved by a computing device configured with specific executable instructions/algorithms which are the direct translation of morphological, geometrical, biological and positional properties of the retina as defined in ophthalmology. The translation is based on a computer implemented method applying simple mathematical and geometrical rules by the processor to process the given fundus image as disclosed in this invention, and it has the innate ability to process montages. Montages are specialized fundus images which are easily processed by the computer implemented method as disclosed herein; they are compilations and visualizations of fundus images with varying angles and centres of focus, such as the optic disc, macula, etc. Montages give doctors a much wider area of the retina for analysis than is possible through a single fundus image, for example a posterior pole fundus image. Though montages cover a wider area of the retina, the innate properties of retinal features are the same; hence, the present invention handles them without any special tweak. Like any digital photography mechanism, fundus imaging is prone to many artefacts and human errors. Some examples include oil stains on the fundus camera lens, eyelid appearance in the fundus image, suboptimal camera handling technique, and light issues due to wear and tear of the lens. The invention has specific capabilities of judging a given fundus image by its visual quality, such as underexposed, overexposed, or good quality. This capability advises the technician handling the fundus camera when an image should be retaken, avoiding artefacts and delivering a maximum quality report for further validation by the doctor, thereby reducing the overall time in the entire process, i.e. from procurement of the fundus image through analysis to validation by the doctor.
Any digital color fundus image is the cumulative effect of combining the three major color channels used in digital photography: red, green and blue, in short RGB. When these channels are separated and viewed individually, the various features present in the color fundus image appear with varying levels of clarity and resolution. For most retinopathy disease feature detection, the green channel gives the best resolution and visualization of the different aspects of a color fundus image. An image is organized as W x H x C, where W represents the number of pixel intensities arranged horizontally, H represents the number of pixel intensities arranged vertically, and C represents the colorMode, i.e. the specific color channel (RGB) information. Typically, a grey channel has the dimensions W x H. Using the 'getFrame' routine of the EBImage image processing R package, with the relevant frame number argument, the green channel is extracted from a given color fundus image.
The extracted green channel is then converted to a gray scale image using the Grayscale routine of the EBImage package. Here the conversion module is the Grayscale routine. Image processing algorithms can be used to help increase the speed of analyzing these images.
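In numpy terms, the channel extraction and 8-bit conversion might be sketched as follows (a simplified stand-in for EBImage's getFrame and Grayscale routines, assuming channel values in [0, 1]):

```python
import numpy as np

def green_channel(rgb):
    """Extract the green channel from a W x H x 3 colour fundus image."""
    return rgb[:, :, 1]

def to_gray8(channel):
    """Rescale a float channel in [0, 1] to 8-bit grey levels 0..255."""
    return np.round(channel * 255).astype(np.uint8)
```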
Masking means that all the pixel intensities corresponding to indices which point to a probable exudate or hemorrhage in the fundus image are set to zero, thereby setting those areas of the fundus image to blank spots which will not be touched in the current processing for cotton wool spots detection.
Morphological procedures are procedures for extracting image components and are based on geometry and algebra. Mathematical operators like dilation and erosion are applied to the binary images. The top-hat filter is another morphological operator which, when applied for bright objects, extracts the bright features. These procedures typically facilitate identifying a key feature or a region of interest (ROI) in an image. Pixel intensity based procedures like erosion and dilation employ superimposition of a sliding window (kernel) and reassign pixel intensities of the image underlying the kernel using logic. Geometrical procedures entail drawing geometrical figures and validating the underlying feature in the image by means of pixel intensity scatter, shape, curvature, etc. Segmentation: converting a colour image, where pixel intensity values are continuous (confined to a range and able to vary within it), to a binary image, where the pixel intensities are either 0 or 1. This conversion is done by simply enforcing a threshold value. Pixel intensities satisfying a particular enforced condition with respect to the threshold are set to 1 and those that do not are set to 0, resulting in a binary or segmented image.
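Threshold-based segmentation as described can be sketched in a line of numpy:

```python
import numpy as np

def segment(gray, threshold):
    """Pixels satisfying the threshold condition are set to 1,
    all others to 0, yielding a binary (segmented) image."""
    return (gray > threshold).astype(np.uint8)
```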
Conversion module: Converting a color fundus image to a gray scale follows steps as described below:
1. Any color channel image (RGB) consists of pixel intensities ranging from 0 up to 24-bit values.
2. The conversion takes fixed sub-ranges of the color image's pixel intensities and rescales them to new values. These new values range from 0 up to an 8-bit value.
3. Finally, the resultant gray scale image has pixel intensities ranging from 0 to 255.
Analysis module:
In an embodiment disclosed herein, there is provided a computer implemented method for resizing the fundus image. An image of the fundus is captured from the fundus camera and processed or analyzed using computer implemented instructions. The computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors in order to cause the computing system to: access the retinal fundus image, extract the green channel, convert the image to gray scale using the Grayscale routine of the EBImage package, and resize the image.
Tested threshold: a single value against which the density-of-vasculature parameter is compared and sorted. The data generated by the image processing technique, when compared with the tested threshold, falls either within or above the value. Grading of the image for the presence or absence of the disease state is done based on the tested threshold value.
In another embodiment is disclosed a computer implemented module to extract and identify vasculature. An image of the fundus is captured from the fundus camera and processed or analyzed using computer implemented instructions. The computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors, comprising the steps: acquire the resized retinal fundus image. Apply morphological methods such as the white top-hat transformation to enhance the contrast of the image, followed by normalization of the image, which results in a normalized top-hat-transformed image. Apply k-means clustering, with the number of iterations > 20000. Compute a vasculature-tree image by extracting the pixels in this image whose values exceed the smallest centroid value of all computed k-means clusters. Segregate all foreground objects in the vasculature-tree image based on eccentricity = 0; circular objects are thereby segregated. A final vasculature image is obtained after merging the circular and minimal-area foreground objects. A sliding-window majority-pixel rule is applied, which results in a final vasculature image without frame borders. All indices of pixels with a pixel value other than zero are gathered; these represent the pixels along the border of the fundus image. The sliding-window algorithm, when applied to the final vasculature image and the normalized top-hat-transformed image, yields the frame-border indices. In the image obtained after application of the sliding-window majority-pixel rule, the frame-border indices are set to 0. The vascular matrix and density are then computed. The fundus is assumed to be an ellipse and its area is calculated using an analysis module.
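The k-means thresholding step above can be sketched minimally in pure Python. This is a simplified illustration: k = 2 and far fewer iterations than the > 20000 mentioned above, the top-hat image is given as nested lists, and the later eccentricity and frame-border filters are omitted:

```python
def kmeans_1d(values, k=2, iters=100):
    """Minimal 1-D k-means over pixel intensities: returns the sorted
    list of cluster centroids."""
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

def vessel_mask(tophat_img, k=2):
    """Keep pixels brighter than the smallest k-means centroid, a crude
    stand-in for the vasculature-tree extraction described above."""
    flat = [v for row in tophat_img for v in row]
    smallest = kmeans_1d(flat, k)[0]
    return [[1 if v > smallest else 0 for v in row] for row in tophat_img]

tophat = [[0.1, 0.9], [0.8, 0.05]]
print(vessel_mask(tophat))  # [[1, 1], [1, 0]]
```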
The analysis module computes:
(length(foreground_indices_vasculature) / total area of the image) * 100;
where foreground_indices_vasculature is obtained by counting the foreground pixels in the removed_frame_final_vasculature image, i.e. the image obtained by applying the sliding-window operation to the vasculature image.
The total area of the image shall be calculated assuming the fundus image is an ellipse, with area A = π·r1·r2, where r1 and r2 are the semi-axes (r1 where dimensions are greater than 720 and r2 where dimensions are greater than 540).
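Putting the density formula and the ellipse-area assumption together gives a short computation. Taking the semi-axes as half the resized image dimensions is an assumption made for this sketch:

```python
import math

def vascular_density(vessel_mask, width=720, height=576):
    """Percentage of the (assumed elliptical) fundus area covered by
    vessel foreground pixels: (foreground count / ellipse area) * 100.
    Semi-axes are taken as half the image dimensions (an assumption)."""
    foreground = sum(v for row in vessel_mask for v in row)
    area = math.pi * (width / 2) * (height / 2)   # A = pi * r1 * r2
    return foreground / area * 100

# Tiny mask with 3 foreground pixels against the default 720 x 576 frame.
print(vascular_density([[1, 0], [1, 1]]))
```

The resulting percentage is what is compared against the tested threshold when grading the image.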
By setting upper and lower boundaries, images beyond the set values are determined to be ones with vascular hemorrhages. A white top-hat operation is applied to the resized image, meaning the result of subtracting from 'x' the image obtained by the 'opening' operation, where x is the green channel of the color fundus image resized to 720 x 576. In image processing, the 'open' operation means erosion followed by dilation. Erosion and dilation are essentially summarizing operations on an image. In these procedures, a relatively small array called a kernel is prepared with binary values [0, 1].
A kernel is typically much smaller than the image on which it is employed. In erosion, the kernel traverses each pixel of the larger image. All pixels covered by the kernel in the larger image are checked for the minimal value. The minimal value is then reassigned to all the pixels covered by the kernel. The kernel is then shifted to the next pixel of the image, and the same procedure follows until the last pixel of the image.
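The erosion just described, together with its dual (dilation) and the 'open' operation, can be sketched in pure Python. This sketch uses the standard centre-pixel reassignment with zero-padded borders; the text above describes reassigning all pixels under the kernel, a closely related variant:

```python
def _slide(img, ksize, pick):
    """Slide a ksize x ksize window over the image and assign each
    centre pixel the min (erosion) or max (dilation) under the window."""
    h, w, r = len(img), len(img[0]), ksize // 2
    return [[pick(img[j][i]
                  for j in range(max(0, y - r), min(h, y + r + 1))
                  for i in range(max(0, x - r), min(w, x + r + 1)))
             for x in range(w)] for y in range(h)]

def erode(img, ksize=3):
    return _slide(img, ksize, min)

def dilate(img, ksize=3):
    return _slide(img, ksize, max)

def opening(img, ksize=3):
    """'Open' = erosion followed by dilation, as described above."""
    return dilate(erode(img, ksize), ksize)

binary = [[1, 1, 0],
          [1, 1, 1],
          [1, 1, 1]]
print(erode(binary))  # [[1, 0, 0], [1, 0, 0], [1, 1, 1]]
```

A white top-hat is then simply the input minus its opening, which keeps small bright details removed by the opening.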
In dilation, the kernel traverses each pixel of the larger image. All pixels covered by the kernel in the larger image are checked for the maximal value. The maximal value is then reassigned to all the pixels covered by the kernel. The kernel is then shifted to the next pixel of the image, and the same procedure follows until the last pixel of the image.
In another aspect of the invention is disclosed a computer implemented method, as implemented by one or more computing devices configured with specific executable instructions, for processing a retinal fundus image to identify an over-exposed image, comprising:
pre-processing of the fundus image by a computer processor;
obtaining resized grey scale image;
computing the maximum pixel value for each slice using a computer processor; applying k-means and, using these values, filtering the slice indices.
If any slice is larger than or equal to
3*(dimensions_Inputimage[2]/dimensions_resized[2])*length(max_intensity_eachslice), or any single slice is horizontally or vertically continuous, merging with its adjacent slice, then the image is deemed overexposed. Otherwise, if the slices are smaller than the size criterion defined above, the image is not overexposed. Slices with indices greater than the size criterion indicate an overexposed image.
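The slice-based over-exposure test can be illustrated with a simplified criterion. The slicing scheme, saturation level and run-length threshold below are assumptions for the sketch, not the document's exact size criterion:

```python
def overexposed(gray, n_slices=8, saturation=250, max_run=2):
    """Illustrative over-exposure test: split the image into vertical
    slices, record each slice's maximum intensity, and deem the image
    overexposed when more than `max_run` adjacent slices all reach the
    saturation level (i.e. the bright region is large and continuous)."""
    w = len(gray[0])
    step = max(1, w // n_slices)
    maxima = [max(row[i] for row in gray for i in range(s, min(w, s + step)))
              for s in range(0, w, step)]
    run = longest = 0
    for m in maxima:
        run = run + 1 if m >= saturation else 0
        longest = max(longest, run)
    return longest > max_run

print(overexposed([[255] * 8]))  # True  (one fully saturated row)
print(overexposed([[10] * 8]))   # False
```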
In another aspect is disclosed a computer implemented module which is performed by the computer processor to identify the Optic Disc (OD). An image of the fundus is captured from the fundus camera and processed or analyzed using computer implemented instructions. The computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors, comprising the steps: gather all indices of the vasculature tree from the image obtained after application of the sliding-window operation; compute the density of white pixels in this image. Further, the circular coordinates in the upward, downward and diagonal directions are computed to identify the number of connections. K-cluster analysis is applied to the white-pixel-density image as well as to the total number of connections. Indices with a number of connections greater than zero and the highest centroid value are removed. On a resized gray scale image an imaginary circle is drawn, and the indices with maximum white pixels, minimum black pixels and maximum intensity are grouped. The final OD indices are based on the indices of intensity-scan regions greater than or equal to the third quartile of the maximum intensity value. Only the indices of the 'maximum circularity' measure with maximum intensity among the final OD indices are computed, and the OD is thereby determined.
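A crude stand-in for this OD search, locating the brightest roughly circular region, might look as follows. The full method above also uses vessel density, connection counts and circularity measures; this sketch uses brightness alone and is purely illustrative:

```python
def brightest_disc(gray, radius=1):
    """Scan a circular window across the image and return the centre
    (row, col) with the highest mean intensity -- a simplified stand-in
    for the optic disc localisation described above (the OD is the
    brightest, most circular structure of the fundus)."""
    h, w = len(gray), len(gray[0])
    best, best_centre = -1.0, None
    for cy in range(radius, h - radius):
        for cx in range(radius, w - radius):
            vals = [gray[y][x]
                    for y in range(cy - radius, cy + radius + 1)
                    for x in range(cx - radius, cx + radius + 1)
                    if (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2]
            mean = sum(vals) / len(vals)
            if mean > best:
                best, best_centre = mean, (cy, cx)
    return best_centre

gray = [[0] * 5 for _ in range(5)]
gray[3][3], gray[2][3], gray[3][2] = 100, 50, 50   # a bright blob
print(brightest_disc(gray))  # (3, 3)
```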
In another aspect is disclosed a computer implemented module which is performed by the computer processor to identify the Macula candidate region. Image from the fundus camera of the fundus is captured and processed or analyzed using computer implemented instructions. The computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors comprising the steps:
The relative position of the Optic Disc (OD) in the fundus image is computed from the distance between the centroid of the OD and the horizontal lower and upper bounds of the fundus image, which are fixed to 1 and 720 respectively. Starting from the edge of the OD in the fundus image, a vertical distance of about 2-2.5 OD radii up and down is computed. This is the macula candidate region.
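The vertical band of the macula candidate region reduces to simple arithmetic. In this sketch the factor 2.5 (the upper end of the 2-2.5 range above) and the clipping to the image are assumptions:

```python
def macula_candidate_rows(od_row, od_radius, img_height, factor=2.5):
    """Vertical extent of the macula candidate region: about `factor`
    (2-2.5) OD radii above and below the OD row, clipped to the image."""
    top = max(0, int(od_row - factor * od_radius))
    bottom = min(img_height - 1, int(od_row + factor * od_radius))
    return top, bottom

print(macula_candidate_rows(300, 40, 576))  # (200, 400)
print(macula_candidate_rows(550, 40, 576))  # (450, 575) -- clipped
```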
In another aspect is disclosed a computer implemented module which is performed by the computer processor to identify the fovea within the macula candidate region. An image of the fundus is captured from the fundus camera and processed or analyzed using computer implemented instructions. The computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors, comprising the steps: compute the start and end indices of the rows and columns, respectively, of the macula candidate region. Fovea detection is limited to within the boundaries computed. An imaginary circular area of radius equal to the radius of the OD is marked and scanned, with one sweep equal to the diameter of the OD. For each sweep, the mean intensity of the best-frame resized gray scale image is computed; this continues until the last bounds of the macula candidate region. A distance criterion from the centroid of the OD is applied to get tight predictions of the fovea: all sweeps that are 'outliers' greater than the distance criterion are filtered out. The sweep index which has the lowest mean and fits the distance criterion is marked as the fovea region of the fundus image.
HE, or hard exudates, are among the very early signs of Diabetic Retinopathy (DR). Exudates are an abnormality observed in the first phase of DR. They generally appear in the form of clusters which may or may not be adjacent to a group of microaneurysms or near the anatomical area of the fovea. Exudates are yellowish in colour and are deposits in the internal area of the retina. Hard exudates occur in the posterior pole of the fundus. A computer implemented method for digital image processing to detect hard exudates is disclosed in this embodiment. These are automated methods of retinal fundus image analysis.
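The sweep-and-filter fovea search described earlier in this section can be sketched as follows. The 3x3 neighbourhood (standing in for the OD-radius circular sweep), candidate list and distance values are assumptions made for the example:

```python
def fovea_sweep(gray, candidates, od_centre, max_dist):
    """Among candidate sweep centres, keep those within `max_dist` of
    the OD centroid, then return the one whose 3x3 neighbourhood has
    the lowest mean intensity (the fovea is the darkest region of the
    macula). Simplified stand-in for the circular sweeps above."""
    def mean3x3(cy, cx):
        return sum(gray[y][x] for y in range(cy - 1, cy + 2)
                              for x in range(cx - 1, cx + 2)) / 9.0

    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    valid = [c for c in candidates if dist(c, od_centre) <= max_dist]
    return min(valid, key=lambda c: mean3x3(*c)) if valid else None

gray = [[9] * 7 for _ in range(5)]
gray[2][5] = 0                       # a dark patch: the fovea candidate
print(fovea_sweep(gray, [(2, 1), (2, 5)], (2, 0), 10))  # (2, 5)
```

Tightening the distance criterion (e.g. `max_dist=2`) excludes the dark patch and the darkest remaining candidate wins instead, mirroring the outlier filtering in the text.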
In an embodiment is disclosed a computer implemented module which is performed by the computer processor to identify hard exudates. Image from the fundus camera of the fundus is captured and processed or analyzed using computer implemented instructions. The computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors comprising the steps:
A segmented image is extracted by using a threshold value of bestframe + x, where x is a value preset by vigorously testing images. From the centroid of the macula, a distance of 4 OD radii is marked as the reference exudate scan region within the exudates image. Three channels, red, blue and green, are obtained by extracting the corresponding color channels from the input colour fundus image. The number of foreground objects in the reference exudate scan region is computed, and for each foreground object a hue value is calculated by the given method: 1. If for a foreground object the maximum pixel intensity value is that of the red channel: hue calculated = (maxgreen - maxblue)/(maxred - minofallchannels), where maxgreen is the maximum value in the green channel for the foreground object considered; maxblue is the maximum value in the blue channel; maxred is the maximum value in the red channel; and minofallchannels is the smallest pixel value among all three color channels. 2. If for a foreground object the maximum pixel intensity value is that of the green channel: hue calculated = 2 + (maxblue - maxred)/(maxgreen - minofallchannels), with the same definitions as above.
3. If for a foreground object the maximum pixel intensity value is that of the blue channel: hue calculated = 4 + (maxred - maxgreen)/(maxblue - minofallchannels), with the same definitions as above.
4. In steps 1, 2 and 3 above, if the hue calculated is less than zero, it is reset to 360. To the hue calculated values apply minimum and maximum thresholds and filter the hue values, keeping those in between; these are the final hue candidates. If no hue values are observed within these defined bounds, it is assumed that the image has no exudates. Also, if there are no foreground objects in the reference exudate scan region binary image, there are no exudates in the fundus image.
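Steps 1-4 above translate directly into code. This is a variant of the standard RGB-to-hue formula (note the text, and hence this sketch, omits the usual x60-degree scaling); it assumes the dominant channel maximum differs from the overall minimum, so the denominator is non-zero:

```python
def hue_of_object(maxred, maxgreen, maxblue):
    """Hue of a foreground object from its per-channel maxima,
    following steps 1-4 above."""
    minall = min(maxred, maxgreen, maxblue)
    if maxred >= maxgreen and maxred >= maxblue:    # step 1: red dominant
        hue = (maxgreen - maxblue) / (maxred - minall)
    elif maxgreen >= maxblue:                       # step 2: green dominant
        hue = 2 + (maxblue - maxred) / (maxgreen - minall)
    else:                                           # step 3: blue dominant
        hue = 4 + (maxred - maxgreen) / (maxblue - minall)
    return 360 if hue < 0 else hue                  # step 4: reset if negative

print(hue_of_object(200, 150, 50))   # red dominant: (150-50)/(200-50)
print(hue_of_object(200, 50, 150))   # negative result, reset to 360
```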
In an embodiment is disclosed a computer implemented module to detect hemorrhages. An image of the fundus is captured from the fundus camera and processed or analyzed using computer implemented instructions. The computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors, comprising the steps: a sliding-window operation is performed on the binary image; the resultant image is the final vasculature-extracted binary image. The hard exudates are masked. From the remnant binary image, four quadrants are assumed with the OD centroid as reference: the binary top-left, top-right, bottom-left and bottom-right quadrants. The following steps are followed for each of these quadrants:
1. Moments of each quadrant are computed using a compute-features routine.
2. Only foreground objects with moments-eccentricity greater than or equal to a certain threshold are grouped.
3. The candidates filtered in step 2 above are removed from each corresponding quadrant's binary image.
4. A pixel intensity based filter is applied further to the resultant binary image for each quadrant.
5. All foreground objects which have pixel intensities greater than or equal to the mean intensity of the corresponding quadrant are removed.
6. Finally, the resultant foreground candidates, after application of the eccentricity and mean intensity filters, are considered hemorrhages.
Artefacts, or salt-and-pepper noise, in images are removed by applying a radius threshold which is typically of a very small size. If there are no foreground objects after step 2 and/or step 6, it is assumed that there are no hemorrhages for that particular quadrant.
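The eccentricity used in step 2 can be computed from an object's second-order central moments, as in the moments-based feature routines the text refers to. Representing a foreground object as a list of (row, col) pixel tuples is an assumption for this sketch:

```python
def eccentricity(pixels):
    """Eccentricity of a foreground object from its second-order
    central moments: 0 for a perfect disc, approaching 1 for elongated
    shapes (elongated objects are likely vessel fragments rather than
    hemorrhages, which is why high-eccentricity objects are removed)."""
    n = len(pixels)
    cy = sum(p[0] for p in pixels) / n
    cx = sum(p[1] for p in pixels) / n
    mu20 = sum((p[1] - cx) ** 2 for p in pixels) / n   # x-variance
    mu02 = sum((p[0] - cy) ** 2 for p in pixels) / n   # y-variance
    mu11 = sum((p[0] - cy) * (p[1] - cx) for p in pixels) / n
    common = ((mu20 - mu02) ** 2 + 4 * mu11 ** 2) ** 0.5
    l1 = (mu20 + mu02 + common) / 2    # major-axis eigenvalue
    l2 = (mu20 + mu02 - common) / 2    # minor-axis eigenvalue
    return (1 - l2 / l1) ** 0.5 if l1 > 0 else 0.0

print(eccentricity([(0, 0), (0, 1), (0, 2), (0, 3)]))  # 1.0 (a line)
print(eccentricity([(0, 0), (0, 1), (1, 0), (1, 1)]))  # 0.0 (a square)
```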
Cotton wool spots occur secondary to ischemia from retinal arteriole obstruction. DR occurs when the increased glucose level in the blood damages the capillaries, which provide nutrition to the retina. Manifestation of the blood and fluid leakage from the capillaries are microaneurysms, hemorrhages, hard exudates, cotton wool spots or venous loops.
In an embodiment is disclosed a computer implemented module to detect cotton wool spots. An image of the fundus is captured from the fundus camera and processed or analyzed using computer implemented instructions. The computing system may include one or more hardware computer processors; and one or more storage devices configured to store software instructions configured for execution by the one or more hardware computer processors, comprising the steps: a segmented image is produced by processing an initial set of images using set algorithms. Hard exudates and hemorrhages (if any) are masked. The three main channels, red, blue and green, are extracted. The number of foreground objects in the segmented image is computed and each foreground object's hue value is calculated using the following method:
1. If for a foreground object the maximum pixel intensity value is that of the red channel: hue calculated = (maxgreen - maxblue)/(maxred - minofallchannels), where maxgreen, maxblue and maxred are the maximum values in the green, blue and red channels respectively for the foreground object considered, and minofallchannels is the smallest pixel value among all three color channels. 2. If the maximum pixel intensity value is that of the green channel: hue calculated = 2 + (maxblue - maxred)/(maxgreen - minofallchannels), with the same definitions. 3. If the maximum pixel intensity value is that of the blue channel: hue calculated = 4 + (maxred - maxgreen)/(maxblue - minofallchannels), with the same definitions. 4. In steps 1, 2 and 3 above, if the hue calculated is less than zero, reset it to 360.
To the hue calculated values apply minimum and maximum thresholds and filter the hue values, keeping those in between; these are the final cotton wool hue candidates. If there are no hue values within these defined bounds, it is assumed that the image has no cotton wool spots. An area filter is then applied: cotton wool spot candidates under the hue criterion are removed if they have a major-axis length greater than or equal to 3 times the OD radius. If there are no foreground objects in the reference segmented_cws_regions binary image, there are no cotton wool spots in the fundus image.
Whereas many alterations and modifications of the present invention will become apparent to a person skilled in the art after reading the foregoing description, it is to be understood that any particular embodiment shown and described by way of illustration is in no way intended to be considered limiting. Therefore, references to details of various embodiments are not intended to limit the scope of the claims, which in themselves recite only those features regarded as essential to the invention.

Claims

We Claim:
1. A computer implemented method, as implemented by one or more computing devices configured with specific executable instructions for processing retinal fundus image comprising:
pre-processing of the fundus image by a computer processor;
obtaining grey scale image using a conversion module of a computer processor for extraction of vessels/ vasculature using morphological operators;
performing segmentation of the fundus image at the computer processor using morphological operators;
obtaining the binary image using a conversion module at the computer processor; computing matrix and density of the vasculature image using an analysis module at the computer processor; and
grading the image based on tested threshold for comparison and sorting density vasculature parameter by the computer processor.
2. A computer implemented method for processing retinal fundus image as claimed in claim 1, wherein pre-processing is done by computer implemented extraction of frame border region around the image using a computer processor.
3. A computer implemented image processing method of retinal fundus image as claimed in claim 1, wherein density of vasculature corresponds to presence or absence of vitreous hemorrhage in comparison to control values.
4. A computer implemented method, as implemented by one or more computing devices configured with specific executable instructions for processing retinal fundus image for identifying over exposed image comprises:
pre-processing of the fundus image by a computer processor;
obtaining resized grey scale image;
computing the maximum pixel value for each slice using a computer processor; and filtering of slice indices,
wherein the slices with indices greater than the size criterion is an overexposed image.
5. A computer implemented method, as implemented by one or more computing devices configured with specific executable instructions for processing retinal fundus image to identify optic disc (OD) comprising:
pre-processing of the retinal fundus image by the computer processor;
obtaining grey scale image using a conversion module of the computer processor; applying image morphological procedures for extraction of vessels/ vasculature by the computer processor;
analyzing indices of the vascular tree using pixel intensity by the computer processor;
determining optic disc based on maximum circularity measured on binary image and maximum pixel intensity;
labeling of the optic disc centroid by the computer processor, wherein preprocessing is extraction of frame border region around image and is applied to all images, and
wherein the pixel intensity used is white pixel intensity.
6. A computer implemented method as implemented by one or more computing devices configured with specific executable instructions for processing retinal fundus image as claimed in claim 5, comprising:
processing the image to identify or detect optic disc (OD) region by the computer processor;
define the temporal arcade from the OD determined image by the computer processor;
estimate the location of avascular region in temporal arcade by the analyzing module of the computer processor;
estimate the macula candidate region based on the centroid distance of the OD; estimate the fovea region within the macula candidate region;
identifying hard exudates,
wherein estimation of the location of avascular region, estimation of fovea region and hard exudates identification is done by one or more computer processor,
wherein the macula candidate region is determined as the region within 2.5 times the optic disc radius in upward and downward direction of the said fundus image, wherein the fovea region is identified within the macula candidate region; and
wherein hard exudates are identified based on the centroid distance of the macula and no foreground objects indicates no exudates in the fundus image.
7. A computer implemented method, as implemented by one or more computing devices configured with specific executable instructions for processing retinal fundus image data for foreground candidates comprising:
obtain the binary image using a conversion module in the computer processor; masking the hard exudates by the computer processor;
extracting vasculature by applying morphological procedures using the processor; obtaining foreground candidates on application of eccentricity and mean intensity filters; and
identifying and labeling the foreground candidates of the given image.
8. A computer implemented method, as implemented by one or more computing devices configured with specific executable instructions for processing retinal fundus image data as claimed in claim 6, wherein the foreground candidates correspond to retinal hemorrhages.
9. A computer implemented method, as implemented by one or more computing devices configured with specific executable instructions for processing retinal fundus image data for identification of hues comprising:
preprocessing or removing the fundus image frame borders;
extracting a segmented binary image;
masking of the hard exudates;
extraction of the colored channels within the segmented image,
wherein the hue for the foreground object is determined based on the pixel intensity of the dominant color channel,
wherein the retinal fundus image data is generated by the computer processor.
10. A computer implemented method, as implemented by one or more computing devices configured with specific executable instructions for processing retinal fundus image data as claimed in claim 9, wherein presence of hues corresponds to cotton wool spots identified by high mean intensities and hazy borders of the images generated.
11. A computer implemented method, as implemented by one or more computing devices configured with specific executable instructions for processing retinal fundus image data as claimed in claim 9, wherein the color channels are red, blue or green channels.
PCT/IN2017/050604 2016-12-21 2017-12-19 Retinal fundus image processing method WO2018116321A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201641043683 2016-12-21
IN201641043683 2016-12-21

Publications (2)

Publication Number Publication Date
WO2018116321A2 true WO2018116321A2 (en) 2018-06-28
WO2018116321A3 WO2018116321A3 (en) 2018-08-09

Family

ID=62626022

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2017/050604 WO2018116321A2 (en) 2016-12-21 2017-12-19 Retinal fundus image processing method

Country Status (1)

Country Link
WO (1) WO2018116321A2 (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658395A (en) * 2018-12-06 2019-04-19 代黎明 Optic disk method for tracing and system and eyeground acquisition device
CN109934823A (en) * 2019-03-25 2019-06-25 天津工业大学 A kind of DR eye fundus image macular edema stage division based on deep learning
CN110298849A (en) * 2019-07-02 2019-10-01 电子科技大学 Hard exudate dividing method based on eye fundus image
CN110974151A (en) * 2019-11-06 2020-04-10 中山大学中山眼科中心 Artificial intelligence system and method for identifying retinal detachment
CN111292296A (en) * 2020-01-20 2020-06-16 京东方科技集团股份有限公司 Training set acquisition method and device based on eye recognition model
CN111461218A (en) * 2020-04-01 2020-07-28 复旦大学 Sample data labeling system for fundus image of diabetes mellitus
CN111583261A (en) * 2020-06-19 2020-08-25 林晨 Fundus super-wide-angle image analysis method and terminal
CN111583248A (en) * 2020-05-12 2020-08-25 上海深至信息科技有限公司 Processing method based on eye ultrasonic image
CN111640090A (en) * 2020-05-12 2020-09-08 宁波蓝明信息科技有限公司 Method for evaluating fundus image quality
CN112508898A (en) * 2020-11-30 2021-03-16 北京百度网讯科技有限公司 Method and device for detecting fundus image and electronic equipment
CN112734828A (en) * 2021-01-28 2021-04-30 依未科技(北京)有限公司 Method, device, medium and equipment for determining center line of fundus blood vessel
US20210228073A1 (en) * 2020-01-29 2021-07-29 XAIMED Co., Ltd. Apparatus and methods for supporting reading of fundus image
CN113768461A (en) * 2021-09-14 2021-12-10 北京鹰瞳科技发展股份有限公司 Fundus image analysis method and system and electronic equipment
CN114627043A (en) * 2020-12-11 2022-06-14 杭州深杨医疗科技有限公司 Method, system, device and readable storage medium for grading proximate macular degeneration
CN116363132A (en) * 2023-06-01 2023-06-30 中南大学湘雅医院 Ophthalmic image processing method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9241622B2 (en) * 2009-02-12 2016-01-26 Alcon Research, Ltd. Method for ocular surface imaging
JP4909377B2 (en) * 2009-06-02 2012-04-04 キヤノン株式会社 Image processing apparatus, control method therefor, and computer program
US20120177262A1 (en) * 2009-08-28 2012-07-12 Centre For Eye Research Australia Feature Detection And Measurement In Retinal Images
TWI578977B (en) * 2011-04-07 2017-04-21 香港中文大學 Device for retinal image analysis
US9808154B2 (en) * 2012-07-13 2017-11-07 Retina Biometrix, Llc Biometric identification via retina scanning with liveness detection

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658395B (en) * 2018-12-06 2022-09-09 代黎明 Optic disc tracking method and system and eye fundus collection device
CN109658395A (en) * 2018-12-06 2019-04-19 代黎明 Optic disk method for tracing and system and eyeground acquisition device
CN109934823A (en) * 2019-03-25 2019-06-25 天津工业大学 A kind of DR eye fundus image macular edema stage division based on deep learning
CN110298849A (en) * 2019-07-02 2019-10-01 电子科技大学 Hard exudate dividing method based on eye fundus image
CN110974151A (en) * 2019-11-06 2020-04-10 中山大学中山眼科中心 Artificial intelligence system and method for identifying retinal detachment
CN111292296A (en) * 2020-01-20 2020-06-16 京东方科技集团股份有限公司 Training set acquisition method and device based on eye recognition model
US20210228073A1 (en) * 2020-01-29 2021-07-29 XAIMED Co., Ltd. Apparatus and methods for supporting reading of fundus image
US11730364B2 (en) * 2020-01-29 2023-08-22 XAIMED Co., Ltd. Apparatus and methods for supporting reading of fundus image
CN111461218B (en) * 2020-04-01 2022-07-29 复旦大学 Sample data labeling system for fundus image of diabetes mellitus
CN111461218A (en) * 2020-04-01 2020-07-28 复旦大学 Sample data labeling system for fundus image of diabetes mellitus
CN111640090A (en) * 2020-05-12 2020-09-08 宁波蓝明信息科技有限公司 Method for evaluating fundus image quality
CN111640090B (en) * 2020-05-12 2022-05-31 宁波蓝明信息科技有限公司 Method for evaluating quality of fundus images
CN111583248A (en) * 2020-05-12 2020-08-25 上海深至信息科技有限公司 Processing method based on eye ultrasonic image
CN111583248B (en) * 2020-05-12 2022-12-30 上海深至信息科技有限公司 Processing method based on eye ultrasonic image
CN111583261B (en) * 2020-06-19 2023-08-18 智眸医疗(深圳)有限公司 Method and terminal for analyzing ultra-wide angle image of eye bottom
CN111583261A (en) * 2020-06-19 2020-08-25 林晨 Fundus super-wide-angle image analysis method and terminal
CN112508898A (en) * 2020-11-30 2021-03-16 北京百度网讯科技有限公司 Method and device for detecting fundus image and electronic equipment
CN114627043A (en) * 2020-12-11 2022-06-14 杭州深杨医疗科技有限公司 Method, system, device and readable storage medium for grading proximate macular degeneration
CN112734828A (en) * 2021-01-28 2021-04-30 依未科技(北京)有限公司 Method, device, medium and equipment for determining center line of fundus blood vessel
CN113768461A (en) * 2021-09-14 2021-12-10 北京鹰瞳科技发展股份有限公司 Fundus image analysis method and system and electronic equipment
CN113768461B (en) * 2021-09-14 2024-03-22 北京鹰瞳科技发展股份有限公司 Fundus image analysis method, fundus image analysis system and electronic equipment
CN116363132B (en) * 2023-06-01 2023-08-22 中南大学湘雅医院 Ophthalmic image processing method and system
CN116363132A (en) * 2023-06-01 2023-06-30 中南大学湘雅医院 Ophthalmic image processing method and system

Also Published As

Publication number Publication date
WO2018116321A3 (en) 2018-08-09

Similar Documents

Publication Publication Date Title
WO2018116321A2 (en) Retinal fundus image processing method
Haleem et al. Automatic extraction of retinal features from colour retinal images for glaucoma diagnosis: a review
Mookiah et al. Computer-aided diagnosis of diabetic retinopathy: A review
US7583827B2 (en) Assessment of lesions in an image
Akram et al. Automated detection of dark and bright lesions in retinal images for early detection of diabetic retinopathy
Winder et al. Algorithms for digital image processing in diabetic retinopathy
Besenczi et al. A review on automatic analysis techniques for color fundus photographs
Medhi et al. An effective fovea detection and automatic assessment of diabetic maculopathy in color fundus images
Siddalingaswamy et al. Automatic grading of diabetic maculopathy severity levels
AU2021202217B2 (en) Methods and systems for ocular imaging, diagnosis and prognosis
WO2020186222A1 (en) Supervised machine learning based multi-task artificial intelligence classification of retinopathies
Rasta et al. Detection of retinal capillary nonperfusion in fundus fluorescein angiogram of diabetic retinopathy
Agrawal et al. A survey on automated microaneurysm detection in diabetic retinopathy retinal images
Abràmoff Image processing
Güven Automatic detection of age-related macular degeneration pathologies in retinal fundus images
Giancardo Automated fundus images analysis techniques to screen retinal diseases in diabetic patients
Brancati et al. Automatic segmentation of pigment deposits in retinal fundus images of Retinitis Pigmentosa
Priya et al. Detection and grading of diabetic retinopathy in retinal images using deep intelligent systems: a comprehensive review
Noronha et al. A review of fundus image analysis for the automated detection of diabetic retinopathy
US20230037424A1 (en) A system and method for classifying images of retina of eyes of subjects
Veras et al. A comparative study of optic disc detection methods on five public available database
Maher et al. Automatic Detection of Microaneurysms in digital fundus images using LRE
Valencia Automatic detection of diabetic related retina disease in fundus color images
Kugamourthy et al. eOphthalmologist--Intelligent Eye Disease Diagnosis System
Odstrčilík Analysis of retinal image data to support glaucoma diagnosis

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17882488

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in the European phase

Ref document number: 17882488

Country of ref document: EP

Kind code of ref document: A2
