EP1427338A2 - System and method for screening patients for diabetic retinopathy - Google Patents

System and method for screening patients for diabetic retinopathy

Info

Publication number
EP1427338A2
Authority
EP
European Patent Office
Prior art keywords
images
retinal
image
diabetic retinopathy
hemorrhages
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP02763573A
Other languages
English (en)
French (fr)
Inventor
Stephen H. Sinclair
Sanjay Bhasin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Philadelphia Opthalmologic Imaging Systems Inc
Original Assignee
Philadelphia Opthalmologic Imaging Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Philadelphia Opthalmologic Imaging Systems Inc filed Critical Philadelphia Opthalmologic Imaging Systems Inc
Publication of EP1427338A2
Legal status: Withdrawn (current)

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B 3/1241 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes specially adapted for observation of ocular blood flow, e.g. by fluorescein angiography
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/145 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B 5/14532 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue for measuring glucose, e.g. by tissue impedance measurement

Definitions

  • diabetic retinopathy One of the complications of diabetes is the gradual degradation of vision known as diabetic retinopathy. It is known that over time, a majority of diabetics will lose some vision to this condition, and the present state of the medical art is to treat the retinal lesions that mark the disease with laser light. Diagnosing diabetic retinopathy, however, requires a trained specialist to view the retina (or a photograph of the retina) at periodic examinations, and to recognize small lesions and changes therein. Because there are many more diabetics than specialists, and because they must be examined at regular intervals, difficulties and deficiencies have arisen in health-care systems that attempt to screen and treat diabetic retinopathy. The present invention provides an efficient computer-implemented screening analysis of retinal photographs to identify the stages of diabetic retinopathy.
  • Diagnosis of diabetic retinopathy is commonly performed by skilled medical personnel (usually ophthalmologists) in direct examination of the fundus, or by evaluation of sets of fundus photographs taken using special-purpose cameras. Such examinations, whether done directly or by review of photographs, are a time-consuming and inexact process, and experts in diagnosis often disagree in their results, particularly in the very earliest stages of the disease. In addition, such diagnosis is expensive and is often performed at intervals that are longer than desired due to the cost and lack of available skilled manpower.
  • the present invention comprises a robust technique to automatically grade retinal images through detection of lesions that occur early in the course of diabetic retinopathy: dot hemorrhages or microaneurysms, blot and striate hemorrhages, and lipid exudates.
  • the present invention includes methods to detect nerve fiber layer infarcts, extract the optic nerve in the appropriately identified fields, and to track and identify the vessels (measuring vessel lumen diameters, tortuosity, and branching angles).
  • the present invention preferably identifies 3 levels: no retinopathy, microaneurysms alone, and lesions additional to microaneurysms, the latter two levels being the earliest detectable forms of the disease.
  • the method may be expanded to 7 levels through the detection of the lesions that occur in advanced stages of the disease and may be utilized, through the evaluation of changes of the normal vasculature (lumen diameter, tortuosity, and branching angle), to evaluate the risk of developing retinopathy.
  • the method of the present invention is particularly suited to overcoming the difficulties in grading retinopathy, which stem from image-to-image variation, low contrast of some of the lesions against the background, and non-uniform lighting and flare that produce variation among the quadrants of a single image.
  • the expert decision system implemented in the present invention bases the retinopathy grade determination upon the results of the lower-level detectors (lesion detection) for each eye photographed.
  • the system may be tuned for individual detectors and for mydriatic and non-mydriatic images using separate parameters based on the camera type, image type, characteristics of patients, and characteristics of the fundus.
  • a data archive is the core of the data management system that acts as the central repository for all patient data: images, demographics as well as reports. This archive will be accessible for storage and retrieval via the Internet and will be of a scalable design to accommodate growth over time.
  • the benefits of a centralized data management architecture include: (1) the ophthalmologist or retina specialist will have access to current as well as prior studies for comparison, regardless of where the historical data was acquired; (2) a central data repository will allow for an objective and quantitative means to evaluate the progression of the disease in individuals or populations over time, in terms of the changes of normal vessels, the change of existing diabetic lesions, and the occurrence of new ones. This database will provide the foundation for regression and serial studies to develop risk-prediction algorithms in the future.
  • Figure 1 depicts histograms of two images of the fundus.
  • Figure 2 depicts an initial fundus image, and the result of retina extraction processing.
  • Figure 3 depicts the result of processing the retina image by coalescent filter [Bhasin and Meystel 94] to obtain the vessels.
  • Figure 4 depicts graphically the problem with locating the retina in a photograph.
  • Figure 5 depicts insetting.
  • Figure 6 (a) depicts a subsampled original retinal image.
  • Figure 6 (b) depicts the image of Figure 6(a) after subtraction of vessels from the image.
  • Figure 7 depicts results of finding the retina using geometry to improve the pickup.
  • Figure 8 depicts an overall flow diagram for the method of the present invention.
  • Figure 9 depicts a flow diagram of the image processing method of the present invention.
  • Figure 10 depicts a flow diagram of the photographer assistance method of the present invention.
  • Figure 11 depicts a flow diagram of the serial (over time) study method of the present invention.

Detailed Description of the Invention
  • images are acquired locally with immediate feedback to the photographer, using a small looping process to assist in optimizing the quality of each image; the images are analyzed locally and stored locally on CD-ROM and magnetic storage. Screening is reported immediately from the system with a printed report. Those eyes that "fall out" of screening, either because a significant number of photographs for one eye cannot be analyzed (at present, too poor despite 3 attempts to photograph) or because they are above the accepted threshold in number/type of lesions/level of retinopathy, will be transmitted to a remote specialist ophthalmologist for review. The specialist will then transmit back a review with grading and recommendations: either the patient is to be advised to make an appointment for further examination, or the photographs are sufficient and the eye needs only repeat screening at a recommended interval. This report is maintained in the database for data mining with regard to the success of, and thresholds for, screening and referral.
  • images may be obtained in one of the following ways:
  • the present invention preferably uses a protocol of 5 photographs of 35° fields per eye.
  • the present invention also provides for cataloging patients (including naming images as well as providing other demographic and clinical data) to enable batch processing and historical tracking of disease as well as identification of risk indicators. It has been determined that cataloging patients is important: keeping digital photos on file is efficient for a doctor's office - for example, images may be maintained in an image-management database which also allows the recording of ancillary patient data and physician notes for each set of photographs.
  • the image-management system of the present invention allows medical professionals to track improvement and determine more accurately what factors in what patient classes dictate degradation rate, which dictates patient recall and reexamination frequency.
  • the system of the present invention uses a file-naming convention for the images that allows the system to automatically locate images of the same patient and the corresponding field.
  • the naming scheme for all images is: PatientID-Camera type.Eye-Field-Field of view.Image type.Processing (a sketch that assembles such a name follows the component list below)
  • Field is a number from 1 through 5.
  • PatientID is a system-generated alphanumeric identifier.
  • Camera type is a number: 1 for a non-mydriatic camera, 2 for a mydriatic camera.
  • Field of view is the angular extent of retina photographed by the camera in each image (e.g. 30°, 35°, or 45°).
  • Image type is a number for the type of image: 1 for a direct digital color image, 2 for a monochromatic image derived by digitization from Ektachrome slides, 3 for a monochromatic image, followed by the notch-filter peak (e.g. 535 for a 535 nm peak filter).
  • Processing is a code that indicates the processing state of the image: RAW (original photograph) or GRA (graded image indicating the lesions present).
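  • As an illustration of the naming convention above, the following sketch assembles a file name from its components. The helper name and the exact separator placement are assumptions made for illustration, not part of the patent.

        #include <cstdio>
        #include <string>

        // Hypothetical helper: builds a name of the form
        // PatientID-CameraType.Eye-Field-FieldOfView.ImageType.Processing
        std::string makeImageName(const std::string& patientId, int cameraType,
                                  const std::string& eye,        // "OD" or "OS"
                                  int field, int fieldOfView, int imageType,
                                  const std::string& processing) // "RAW" or "GRA"
        {
            char buf[128];
            std::snprintf(buf, sizeof(buf), "%s-%d.%s-%d-%d.%d.%s",
                          patientId.c_str(), cameraType, eye.c_str(),
                          field, fieldOfView, imageType, processing.c_str());
            return buf;
        }

        // Example: makeImageName("P000123", 1, "OD", 1, 35, 535, "RAW")
        // yields "P000123-1.OD-1-35.535.RAW"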
  • the method of the present invention includes a software system for assisting the photographer in capturing adequate retinal images. In use, the photographer photographs the eye and indicates the field being photographed. When these images are captured, each photograph is marked with a field identifier; the identifiers currently utilized (though the invention is not restricted to these) are:
  • OD field #1: centered on the fovea, with the disc on the right or left of the field depending upon the eye;
  • OD field #2: disc in lower left (superonasal);
  • OD field #3: disc in upper left (inferonasal);
  • OD field #4: disc in upper right corner, fovea in upper center (inferotemporal);
  • OD field #5: disc in lower right corner, fovea in lower center (superotemporal). The OS fields are mirrored right-to-left.
  • Contrast: the background is compared with the vessels. Loss of contrast is produced by moving the camera too far in or too far out; on the Canon CR6, for example, this is helped by having the photographer align some bright alignment dots.
  • Focus: a de-focused image is detected by examining the edge definition of the central vessels.
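  • One plausible way to score such "edge definition" is sketched below: the mean absolute gradient over the central portion of the grayscale image, which drops sharply when the image is de-focused. The choice of gradient measure and of the central window are assumptions, not the patent's exact criterion.

        #include <cstdlib>
        #include <vector>

        // Focus score: mean absolute gradient over the central half of the image.
        // Larger values indicate sharper vessel edges; the pass/fail cut-off would
        // be tuned per camera.
        double focusScore(const std::vector<unsigned char>& img, int w, int h)
        {
            long long sum = 0, n = 0;
            for (int y = h / 4; y < 3 * h / 4; ++y) {
                for (int x = w / 4; x < 3 * w / 4; ++x) {
                    int gx = std::abs(img[y * w + x + 1] - img[y * w + x - 1]);
                    int gy = std::abs(img[(y + 1) * w + x] - img[(y - 1) * w + x]);
                    sum += gx + gy;
                    ++n;
                }
            }
            return n ? static_cast<double>(sum) / n : 0.0;
        }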
  • Retinopathy grading in the present invention is based on image processing and computer vision software for lesion detection and quantification from digitized monochromatic images, taken either on-line with a 1280x1024-pixel digital monochromatic camera, or using the green channel from a trichromatic color camera with similar single-channel pixel density, or digitized at similar resolution from 30° or 50° slide-film images taken with a non-mydriatic camera.
  • Quantification (for each field): number per field, total area / field area evaluated, histogram of number vs. size, and histogram of number vs. density in each field.
  • the separate detectors are for the following lesions and anatomical features in the retina: A. Hemorrhages: dot, blot, striate
  • IRMA: intra-retinal microvascular anomalies
  • the portion containing the retina is extracted.
  • the portions of bad contrast caused by flare, etc., are removed.
  • An improvement to the present invention removes certain portions of photographs which result from image flare and contain no useful data.
  • An adaptive threshold is used to locate the vessels. Its settings depend on image contrast. This requires locating the retina within the image and measuring its contrast.
  • the ISODATA algorithm is an iterative method based upon the gray-level histogram of the image. The histogram is split into two parts, the foreground pixels and the background pixels, assuming an initial threshold value. The average value of the foreground pixels and of the background pixels is then calculated, and a new threshold value is taken midway between these two average values. This process is repeated with the new threshold estimate until the threshold value no longer changes.
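  • A minimal sketch of the ISODATA iteration described above, operating on a 256-bin gray-level histogram:

        #include <vector>

        // ISODATA (intermeans) threshold: split the histogram at a guess, average
        // the background and foreground means, move the threshold midway between
        // them, and repeat until it no longer changes.
        int isodataThreshold(const std::vector<long>& hist) // 256 bins
        {
            int t = 128;                                     // initial threshold
            for (int iter = 0; iter < 256; ++iter) {
                double loSum = 0, loN = 0, hiSum = 0, hiN = 0;
                for (int g = 0; g < 256; ++g) {
                    if (g <= t) { loSum += double(g) * hist[g]; loN += hist[g]; }
                    else        { hiSum += double(g) * hist[g]; hiN += hist[g]; }
                }
                double loMean = loN ? loSum / loN : 0.0;
                double hiMean = hiN ? hiSum / hiN : 255.0;
                int next = int((loMean + hiMean) / 2.0);     // midway between means
                if (next == t) break;                        // threshold has settled
                t = next;
            }
            return t;
        }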
  • An alternative approach is not to use the convex hull; this creates the need to close off any internal holes within the segmented retina. To do so, generate a binary image using the ISODATA technique and remove all but the largest object. Then invert the binary image and remove all objects that are not contiguous with the boundary of the image, that is, that are not part of the background. Then re-invert the binary image.
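  • A sketch of the hole-closing part of this alternative (the largest-object selection is assumed to have been done already): background pixels are flood-filled inward from the image border, and any pixel not reached is an internal hole and is added to the retina mask.

        #include <queue>
        #include <utility>
        #include <vector>

        // mask: 1 = retina object, 0 = background.  After the call, internal holes
        // (background pixels not connected to the border) are set to 1.
        void fillInternalHoles(std::vector<unsigned char>& mask, int w, int h)
        {
            std::vector<unsigned char> outside(w * h, 0);
            std::queue<std::pair<int, int>> q;
            for (int x = 0; x < w; ++x) {                    // seed from top/bottom rows
                if (!mask[x])               { outside[x] = 1;               q.push({x, 0}); }
                if (!mask[(h - 1) * w + x]) { outside[(h - 1) * w + x] = 1; q.push({x, h - 1}); }
            }
            for (int y = 0; y < h; ++y) {                    // seed from left/right columns
                if (!mask[y * w])           { outside[y * w] = 1;           q.push({0, y}); }
                if (!mask[y * w + w - 1])   { outside[y * w + w - 1] = 1;   q.push({w - 1, y}); }
            }
            const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
            while (!q.empty()) {                             // flood fill the true background
                int x = q.front().first, y = q.front().second;
                q.pop();
                for (int k = 0; k < 4; ++k) {
                    int nx = x + dx[k], ny = y + dy[k];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    int i = ny * w + nx;
                    if (!mask[i] && !outside[i]) { outside[i] = 1; q.push({nx, ny}); }
                }
            }
            for (int i = 0; i < w * h; ++i)                  // unreachable background = hole
                if (!outside[i]) mask[i] = 1;
        }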
  • Fig. 5: An adequate inset distance for the 1K images is 100 pixels. The reason for performing this operation is to avoid including the retinal edge when measuring the retina contrast, since the edge is often subject to deep shadow or bright flare.
  • the retina mask following insetting is used to define the region of the image for which the gray-value histogram is determined. From the histogram the standard deviation is computed, and that is used as a measure of the image contrast.
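  • A small sketch of this contrast measure: the standard deviation of the gray values inside the inset retina mask.

        #include <cmath>
        #include <vector>

        // Population standard deviation of image gray values where insetMask != 0.
        double retinaContrast(const std::vector<unsigned char>& img,
                              const std::vector<unsigned char>& insetMask, int w, int h)
        {
            double sum = 0.0, sumSq = 0.0, n = 0.0;
            for (int i = 0; i < w * h; ++i) {
                if (!insetMask[i]) continue;                 // only the inset retina region
                sum   += img[i];
                sumSq += double(img[i]) * img[i];
                n     += 1.0;
            }
            if (n < 2.0) return 0.0;
            double mean = sum / n;
            return std::sqrt(sumSq / n - mean * mean);
        }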
  • the original image was a subsampled version of the scanned image, to reduce processing time. The subsampling was by a factor of 4 and was done by a straightforward linear method; other techniques such as bilinear sampling might improve the results and will be studied later.
  • the algorithm utilizes an adaptive threshold based on a rank order filter.
  • the routine assigns a value of 0 (background) or 1 (object) to a given pixel depending on its value relative to that of its neighborhood: all the pixels of the neighborhood are rank-ordered on the basis of gray value, a constant offset is added to the value of the central pixel, and the result is compared to the value of the i-th element of the neighborhood. If it is lower, that is, if the central pixel is substantially darker than its surround, then the central pixel is assigned a value of 1; otherwise it is given a value of 0.
  • Three interrelated parameters important to the operation of this adaptive threshold are the offset, the kernel size, and the count threshold (i).
  • the kernel size and offset are constants both for a given image and across images. We have tested both circular and rectangular kernels; as expected, the circular kernels result in fewer artifacts. We settled on a circular filter with a diameter of 17 pixels, and the offset used was 15. The count threshold is kept constant for a given image but is varied from image to image depending on the image contrast (a sketch of this threshold follows below).
  • the standard deviation of the retinal image is used.
  • the standard deviation of the histogram of each of twenty-five images was computed and each image was convolved with the adaptive threshold with the count threshold varied until the best segmentation was achieved as judged visually.
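  • A sketch of the rank-order adaptive threshold described above, with a circular kernel of diameter 17 and a fixed offset; countThreshold corresponds to the contrast-dependent parameter (i). The brute-force sorting per pixel is for clarity only, not efficiency.

        #include <algorithm>
        #include <vector>

        // Returns a binary image: 1 where the central pixel (plus offset) is darker
        // than the countThreshold-th ranked neighbour in a circular window, else 0.
        std::vector<unsigned char> rankOrderThreshold(const std::vector<unsigned char>& img,
                                                      int w, int h,
                                                      int offset,          // e.g. 15
                                                      int countThreshold)  // the "i" parameter
        {
            const int r = 8;                                 // radius: 17-pixel diameter
            std::vector<unsigned char> out(w * h, 0);
            std::vector<int> neigh;
            for (int y = r; y < h - r; ++y) {
                for (int x = r; x < w - r; ++x) {
                    neigh.clear();
                    for (int dy = -r; dy <= r; ++dy)
                        for (int dx = -r; dx <= r; ++dx)
                            if (dx * dx + dy * dy <= r * r)  // circular kernel
                                neigh.push_back(img[(y + dy) * w + (x + dx)]);
                    std::sort(neigh.begin(), neigh.end());   // rank order by gray value
                    int idx = std::min<int>(countThreshold, int(neigh.size()) - 1);
                    // central pixel substantially darker than its surround -> object
                    out[y * w + x] = (img[y * w + x] + offset < neigh[idx]) ? 1 : 0;
                }
            }
            return out;
        }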
  • the present invention can remove objects like the optic nerve head which are very bright to get a clearer representation of the gray value distribution (histogram) in the rest of the image.
  • Much of the processing is dependent on grey values and these techniques along with specialized algorithms like the k-means, modified gradient, signature analysis and morphology allow us to perform adaptive segmentation that leads to a more reliable extraction of the lesions.
  • the present invention applies a smoothing filter with a special configuration that removes noise but does not destroy the structure of the hemorrhages or cause artifacts in the image.
  • the lesions detected by the various detectors are fed into an expert system of known construction.
  • the expert system is constructed out of rules built from consultations with expert ophthalmologists. It embodies the reasoning used by them to make interpretations. Among the features queried of the experts and rule-coded are:
  • the present invention uses a path-searching technique.
  • To find the path, we start at the position in the gray-value image corresponding to a given endpoint and search for the direction in which the mean gray value is maximum.
  • the initial search direction is determined by examining the skeleton and the connectivity of the endpoint and setting the initial direction (d) to the opposite direction.
  • To decide which direction to take, three straight lines are considered: one in the direction (d), another between the directions (d) and (d+1), and a third between the directions (d) and (d-1).
  • Along each line the mean gray value is calculated over a length of <val> pixels.
  • the next pixel is the candidate point that corresponds with the direction with the maximum average.
  • the process is repeated at the new pixel, using the found direction as a new value for (d).
  • the process stops, firstly, if the distance of the candidate pixel from the image boundary is less than a preset value; further, if the maximum of the mean values in the candidate directions is less than the specified value <minval>; next, if a pixel is reached that is already set in the output bitplane (for this the system uses the background outside the eye); and finally, if the number of pixels found reaches a preset value. A sketch of this tracing loop follows.
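  • A hedged sketch of the tracing loop above. The angular half-step, the omission of the already-set-bitplane test, and the parameter names mapping onto <val> and <minval> are simplifications of the description, not the patent's exact procedure.

        #include <cmath>
        #include <vector>

        struct TracedPath { std::vector<int> xs, ys; };

        // Trace from (x0, y0) along the direction of maximum mean gray value,
        // probing the current direction and its two half-step neighbours.
        TracedPath tracePath(const std::vector<unsigned char>& img, int w, int h,
                             int x0, int y0, double d0,      // initial direction (radians)
                             int rayLen,                     // the <val> length
                             double minVal,                  // the <minval> stop value
                             int maxPixels, int border)
        {
            const double halfStep = 3.14159265358979 / 8.0;  // half of a 45-degree step
            TracedPath path;
            double x = x0, y = y0, d = d0;
            while (int(path.xs.size()) < maxPixels) {
                double cands[3] = { d, d + halfStep, d - halfStep };
                double bestMean = -1e18, bestDir = d;
                for (double cand : cands) {
                    double sum = 0.0;
                    for (int k = 1; k <= rayLen; ++k) {      // mean gray along the ray
                        int px = int(std::lround(x + k * std::cos(cand)));
                        int py = int(std::lround(y + k * std::sin(cand)));
                        if (px < 0 || py < 0 || px >= w || py >= h) { sum = -1e18; break; }
                        sum += img[py * w + px];
                    }
                    double mean = sum / rayLen;
                    if (mean > bestMean) { bestMean = mean; bestDir = cand; }
                }
                if (bestMean < minVal) break;                // candidate means too dark
                x += std::cos(bestDir);
                y += std::sin(bestDir);                      // step one pixel
                if (x < border || y < border || x >= w - border || y >= h - border) break;
                path.xs.push_back(int(std::lround(x)));
                path.ys.push_back(int(std::lround(y)));
                d = bestDir;                                 // found direction becomes (d)
            }
            return path;
        }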
  • examination of the gradient normal to the generated curve at a regular spacing may be used to reject the curve.
  • BYTE thr = (BYTE)(stats2.m_mu + k1 * stats2.m_sd); bf.Threshold(thr);
  • vessel segmentation is achieved on the basis of object size and form factor.
  • the objects rejected are in fact classified as hemorrhages. Size is used to distinguish dot from blot hemorrhages (DH and BH, respectively), with the latter being the larger of the two.
  • the RED part should be roughly in the center, as the ideal profile of the heme (hemorrhage) is a crater: low gray values in the center surrounded by concentric rings of larger value.
  • a lipid exudate is like a mound, which has the same characteristics as a heme in an inverted image.
  • Let XMIN, YMIN, XMAX, YMAX be the corners of the full stamp, (ymin, xmin1, xmax1) be the first ival of RED, and (ymax, xmin2, xmax2) be the last ival; these form the actual corners of RED.
  • XMIN is lesionpost->supp.x1, ... so the test is
  • The form factor of RED should be between 0.9 and 1.1, which eliminates trapezoids from vessels and triangles and allows circular objects.
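  • A small sketch of the form-factor test, assuming the common circularity definition 4*pi*area/perimeter^2 (equal to 1.0 for a perfect circle); the patent does not state the exact formula, so this definition is an assumption.

        #include <cmath>

        // True if the object is close enough to circular to remain a RED candidate.
        bool passesFormFactor(double area, double perimeter,
                              double lo = 0.9, double hi = 1.1)
        {
            if (perimeter <= 0.0) return false;
            double ff = 4.0 * 3.14159265358979 * area / (perimeter * perimeter);
            return ff >= lo && ff <= hi;   // rejects elongated vessel fragments and triangles
        }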
  • the CWS and EX are differentiated from the background of the fundus images by first inverting the image gray value so that the exudates are now darker than the background and then applying the same adaptive filter used to segment the hemorrhages. After removal of noise with a median filter the remaining objects are separated into CWS and EX. This is done on the basis of size and gradient. Objects smaller than a fixed constant and with a sharper gradient are classified as EX.
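  • A minimal sketch of that final size-and-gradient split; the area and gradient cut-offs are placeholders, not the patent's constants.

        // Classify a bright-lesion candidate after inversion, adaptive thresholding
        // and median filtering: small and sharp-edged objects become exudates (EX),
        // everything else becomes a cotton-wool spot (CWS).
        enum class BrightLesion { CWS, EX };

        BrightLesion classifyBrightLesion(int areaPixels, double meanEdgeGradient,
                                          int maxExArea = 200,          // placeholder
                                          double minExGradient = 12.0)  // placeholder
        {
            bool smallObject   = areaPixels < maxExArea;
            bool sharpGradient = meanEdgeGradient > minExGradient;
            return (smallObject && sharpGradient) ? BrightLesion::EX : BrightLesion::CWS;
        }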
  • the skeleton of the binary vessel image is found using a distance transform. It differs from the medial-axis transform in that here the skeleton is defined as a set of connected, one-pixel-thick arcs, lying midway between the object boundaries and forming a topological retraction with the same connectedness as the original object; thus, unlike the medial axis, connectivity is preserved. It differs from the Hilditch skeleton in the way it is applied: the distance skeleton is computed using pseudo-Euclidean distances rather than city-block distances (weight 1 for a nearest-neighbor step, 2 for a diagonal step, 3 for a knight move). The skeleton is then used as a mask with the distance map to obtain distances from the skeleton to the edge of the object.
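  • A sketch of a two-pass chamfer ("pseudo-Euclidean") distance transform using the weights quoted above (1 nearest neighbour, 2 diagonal, 3 knight move). Read along the skeleton, the resulting distances approximate half the local vessel width.

        #include <algorithm>
        #include <climits>
        #include <vector>

        // bin: 1 = vessel, 0 = background.  Returns, for every vessel pixel, the
        // chamfer distance to the nearest background pixel.
        std::vector<int> chamferDistance(const std::vector<unsigned char>& bin, int w, int h)
        {
            const int BIG = INT_MAX / 2;
            std::vector<int> dist(w * h);
            for (int i = 0; i < w * h; ++i) dist[i] = bin[i] ? BIG : 0;

            // forward half of the 5x5 mask: offsets and weights (1, 2, 3)
            const int fx[8] = {-1,  0, -1,  1, -2, -1,  1,  2};
            const int fy[8] = { 0, -1, -1, -1, -1, -2, -2, -1};
            const int fw[8] = { 1,  1,  2,  2,  3,  3,  3,  3};

            auto relax = [&](int x, int y, int ox, int oy, int wt) {
                int nx = x + ox, ny = y + oy;
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) return;
                dist[y * w + x] = std::min(dist[y * w + x], dist[ny * w + nx] + wt);
            };

            for (int y = 0; y < h; ++y)                      // forward raster pass
                for (int x = 0; x < w; ++x)
                    for (int k = 0; k < 8; ++k) relax(x, y, fx[k], fy[k], fw[k]);

            for (int y = h - 1; y >= 0; --y)                 // backward pass, mirrored offsets
                for (int x = w - 1; x >= 0; --x)
                    for (int k = 0; k < 8; ++k) relax(x, y, -fx[k], -fy[k], fw[k]);

            return dist;
        }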
  • A filter is applied to smooth noise and generate the vessel bed. The use of filtering is configurable and is determined through experiments, done on images from a specific camera, that measure its relative merit.
  • Test 1: area > minPix. Test 2: compare both xExtent and yExtent to minimum and maximum thresholds, or check whether only either of them meets the minimum constraints. The rule permits specification of AND/OR on the constraint, i.e. either dimension or both dimensions may be constrained by the x & y extent-limit rule.
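  • A small sketch of Test 2, with a flag selecting whether both dimensions (AND) or either dimension (OR) must satisfy the constraints:

        // xExtent/yExtent are the bounding-box dimensions of a candidate object.
        bool passesExtentTest(int xExtent, int yExtent,
                              int minExt, int maxExt, bool requireBoth)
        {
            bool xOk = xExtent >= minExt && xExtent <= maxExt;
            bool yOk = yExtent >= minExt && yExtent <= maxExt;
            return requireBoth ? (xOk && yOk) : (xOk || yOk);
        }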
  • The CLesion SignatureFilterBasedOnRaw function was split 2/27/97 to allow different processing of two types - original (using cf5) or new (using raw) - for analysis, and was modified 1/17/97 to create a diagnostic file with a fixed name in the current results directory. This file contains yellow-filled, passed objects for final analysis; it is overwritten on each pass and is valid during batch processing of a single image.
  • BOOL bNotOnVessel = NewNotOnVessel(&ci, vslCRa, &bPassed, &reason); // 1-pixel overlap
    if( bNotOnVessel
  • The SIGNATURESTATS definition is in CScreenerApp. Parameters are to be added until the accuracy of the system is adequate, or cannot be improved further. The function may be split into multiple hierarchical functions if too much redundant processing becomes necessary with a single function; a single SIGNATURESTATS structure will be kept in any case.
  • m_ss[1] = ((CIArray*)(m_analysis->m_CIArrays[0]))->GetSignatureStatistics( (m_analysis->m_stamp->m_x1 + m_analysis->m_stamp->m_x2)/2, (m_analysis->m_stamp->m_y1 + m_analysis->m_stamp->m_y2)/2 ); // DO RED
  • This function provides a working metric, on a scale of zero to 8, for transitions from each band into the next, e.g. (7, 6, 3), which allows its use in rules. The function requires a start point within the core-color region of the region array. The effect of concave objects has yet to be determined.
  • pFR.m_aveBlotDefinition = 100*(int)(m_doc->m_fs.aveBlotDef12/max(1.0, m_doc->m_fs.nBlots)+0.5)+
  • the best kind of matched filter is a classical neural network: (a) for each lesion, pick an input layer sized to match the lesion size.
  • Comparison of sequential images, to improve the risk prediction for pathologic changes occurring over time, is provided by overlay and comparison of feature differences and by comparison, in the database, of the number and location of lesions (e.g. to detect new lesions or their migration toward reference anatomical features, for instance toward the fovea).
EP02763573A 2001-08-30 2002-08-30 System and method for screening patients for diabetic retinopathy Withdrawn EP1427338A2 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US31595301P 2001-08-30 2001-08-30
US315953P 2001-08-30
PCT/US2002/027586 WO2003020112A2 (en) 2001-08-30 2002-08-30 System and method for screening patients for diabetic retinopathy

Publications (1)

Publication Number Publication Date
EP1427338A2 (de) 2004-06-16

Family

ID=23226811

Family Applications (1)

Application Number Title Priority Date Filing Date
EP02763573A Withdrawn EP1427338A2 (de) 2001-08-30 2002-08-30 System und verfahren zur untersuchung von patienten auf diabetische retinopathie

Country Status (6)

Country Link
EP (1) EP1427338A2 (de)
JP (1) JP2005508215A (de)
AU (1) AU2002327575A1 (de)
CA (1) CA2458815A1 (de)
IL (1) IL160645A0 (de)
WO (1) WO2003020112A2 (de)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8879813B1 (en) 2013-10-22 2014-11-04 Eyenuk, Inc. Systems and methods for automated interest region detection in retinal images

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1487323B1 (de) 2002-03-28 2015-04-22 Heidelberg Engineering GmbH Method for examining the ocular fundus
JP5014593B2 (ja) * 2005-06-01 2012-08-29 Kowa Company, Ltd. Ophthalmic measuring device
JP4958254B2 (ja) * 2005-09-30 2012-06-20 Kowa Company, Ltd. Image analysis system and image analysis program
WO2008062528A1 (fr) * 2006-11-24 2008-05-29 Nidek Co., Ltd. Fundus image analyzer
JP5182689B2 (ja) * 2008-02-14 2013-04-17 NEC Corporation Fundus image analysis method, apparatus, and program
CN102014731A (zh) * 2008-04-08 2011-04-13 National University of Singapore Retinal image analysis system and method
JP5607640B2 (ja) * 2008-10-15 2014-10-15 Optibrand Ltd., LLC Method and apparatus for obtaining images of eye features
GB0902280D0 (en) * 2009-02-12 2009-03-25 Univ Aberdeen Disease determination
US20110129133A1 (en) 2009-12-02 2011-06-02 Ramos Joao Diogo De Oliveira E Methods and systems for detection of retinal changes
US9357916B2 (en) * 2012-05-10 2016-06-07 Carl Zeiss Meditec, Inc. Analysis and visualization of OCT angiography data
US9364147B2 (en) * 2013-02-11 2016-06-14 Lifelens, Llc System, method and device for automatic noninvasive screening for diabetes and pre-diabetes
TWI549649B (zh) * 2013-09-24 2016-09-21 Quanta Computer Inc. Head-mounted system
US20170100030A1 (en) * 2014-06-03 2017-04-13 Socialeyes Corporation Systems and methods for retinopathy workflow, evaluation and grading using mobile devices
US20160278983A1 (en) * 2015-03-23 2016-09-29 Novartis Ag Systems, apparatuses, and methods for the optimization of laser photocoagulation
JP6745496B2 (ja) * 2016-08-19 2020-08-26 Jichi Medical University System for supporting determination of the stage of diabetic retinopathy and method for supporting the determination of the stage of diabetic retinopathy
US10169872B2 (en) 2016-11-02 2019-01-01 International Business Machines Corporation Classification of severity of pathological condition using hybrid image representation
JP2021007017A (ja) * 2020-09-15 2021-01-21 Topcon Corporation Medical image processing method and medical image processing apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6198532B1 (en) * 1991-02-22 2001-03-06 Applied Spectral Imaging Ltd. Spectral bio-imaging of the eye
US5940802A (en) * 1997-03-17 1999-08-17 The Board Of Regents Of The University Of Oklahoma Digital disease management system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO03020112A3 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8879813B1 (en) 2013-10-22 2014-11-04 Eyenuk, Inc. Systems and methods for automated interest region detection in retinal images
US8885901B1 (en) 2013-10-22 2014-11-11 Eyenuk, Inc. Systems and methods for automated enhancement of retinal images
US9002085B1 (en) 2013-10-22 2015-04-07 Eyenuk, Inc. Systems and methods for automatically generating descriptions of retinal images
US9008391B1 (en) 2013-10-22 2015-04-14 Eyenuk, Inc. Systems and methods for processing retinal images for screening of diseases or abnormalities

Also Published As

Publication number Publication date
JP2005508215A (ja) 2005-03-31
WO2003020112A9 (en) 2004-05-06
AU2002327575A1 (en) 2003-03-18
IL160645A0 (en) 2004-07-25
WO2003020112A2 (en) 2003-03-13
CA2458815A1 (en) 2003-03-13
WO2003020112A3 (en) 2003-10-16

Similar Documents

Publication Publication Date Title
Xiong et al. An approach to evaluate blurriness in retinal images with vitreous opacity for cataract diagnosis
Besenczi et al. A review on automatic analysis techniques for color fundus photographs
Niemeijer et al. Automatic detection of red lesions in digital color fundus photographs
Akram et al. Automated detection of dark and bright lesions in retinal images for early detection of diabetic retinopathy
Sánchez et al. Retinal image analysis to detect and quantify lesions associated with diabetic retinopathy
US7474775B2 (en) Automatic detection of red lesions in digital color fundus photographs
WO2003020112A2 (en) System and method for screening patients for diabetic retinopathy
Siddalingaswamy et al. Automatic grading of diabetic maculopathy severity levels
Jan et al. Retinal image analysis aimed at blood vessel tree segmentation and early detection of neural-layer deterioration
Hunter et al. Automated diagnosis of referable maculopathy in diabetic retinopathy screening
Sakthivel et al. An automated detection of glaucoma using histogram features
Agrawal et al. A survey on automated microaneurysm detection in diabetic retinopathy retinal images
Giancardo Automated fundus images analysis techniques to screen retinal diseases in diabetic patients
Kumar et al. Computational intelligence in eye disease diagnosis: a comparative study
Mookiah et al. Computer aided diagnosis of diabetic retinopathy using multi-resolution analysis and feature ranking frame work
Brata Chanda et al. Automatic identification of blood vessels, exaudates and abnormalities in retinal images for diabetic retinopathy analysis
Umamageswari et al. Identifying Diabetics Retinopathy using Deep Learning based Classification
Subramanian et al. Diagnosis of Keratoconus with Corneal Features Obtained through LBP, LDP, LOOP and CSO
Azeroual et al. Convolutional Neural Network for Segmentation and Classification of Glaucoma.
Anand et al. Optic disc analysis in retinal fundus using L 2 norm of contourlet subbands, superimposed edges, and morphological filling
Chalakkal Automatic Retinal Image Analysis to Triage Retinal Pathologies
Odstrčilík Analysis of retinal image data to support glaucoma diagnosis
Çelik Ertuǧrul et al. Decision Support System for Diagnosing Diabetic Retinopathy from Color Fundus Images
Sindhusaranya et al. Hybrid algorithm for retinal blood vessel segmentation using different pattern recognition techniques
Chawla et al. A survey on diabetic retinopathy datasets

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20040318

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20090303