GB2470727A - Processing retinal images using mask data from reference images


Info

Publication number
GB2470727A
Authority
GB
United Kingdom
Prior art keywords
image
candidate
data
retinal
mask data
Prior art date
Legal status
Withdrawn
Application number
GB0909413A
Other versions
GB0909413D0 (en)
Inventor
Alan Duncan Fleming
Current Assignee
University of Aberdeen
Grampian Health Board
Original Assignee
University of Aberdeen
Grampian Health Board
Priority date
Filing date
Publication date
Application filed by University of Aberdeen, Grampian Health Board filed Critical University of Aberdeen
Priority to GB0909413A priority Critical patent/GB2470727A/en
Publication of GB0909413D0 publication Critical patent/GB0909413D0/en
Priority to PCT/GB2010/001026 priority patent/WO2010139929A2/en
Publication of GB2470727A publication Critical patent/GB2470727A/en


Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14 - Arrangements specially adapted for eye photography
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/20 - ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Abstract

Generating output data indicating areas of a retinal image that represent lesions comprises: receiving candidate data associated with a retinal image, the candidate data indicating areas of the image representing candidate lesions; receiving mask data indicating areas of the image that do not represent lesions, where the mask data may be produced using a reference image; and processing the candidate data and mask data to generate output data indicating areas of the retinal image representing lesions.

Description

LESION DETECTION
The present invention relates to methods and apparatus suitable for use in the detection of lesions. More particularly, but not exclusively, the present invention relates to methods for analysing retinal images to mask areas detected as lesions which are attributable to artefacts.
Screening of large populations for early detection of indications of disease is common. The retina of the eye can be used to determine indications of disease, in particular diabetic retinopathy and macular degeneration. Screening for diabetic retinopathy is recognised as a cost-effective means of reducing the incidence of blindness in people with diabetes, and screening for macular degeneration is recognised as an effective way of reducing the incidence of blindness in the population more generally.
Diabetic retinopathy occurs as a result of vascular changes in the retina which cause swellings of capillaries known as microaneurysms and leakages of blood into the retina known as blot haemorrhages. Microaneurysms may eventually become a source of leakage of plasma causing thickening of the retina, known as oedema. If such thickening occurs in the macular region, this can cause loss of high quality vision. Fat deposits known as exudates are associated with retinal thickening, and the presence of exudates may therefore be taken to be an indication of retinal thickening.
A currently recommended examination technique for diabetic retinal screening uses digital fundus photography of the eye. Fundus images are examined by trained specialists to detect indicators of disease such as exudates, blot haemorrhages and microaneurysms as described above. The trained specialists determine whether a patient should be referred to an ophthalmologist based upon detected indicators of disease. This is time consuming and expensive.
Automated image analysis may be used to reduce manual workloads in determining properties of images. Automated image analysis is now used in a variety of different fields. In particular, a variety of image analysis techniques are used to process medical images so as to provide data indicating whether an image includes features indicative of disease. In an automated system, workload reduction can be defined to be the amount of manual work that is avoided as a result of the automated system.
An automated system can be used to process images generated from patients in two ways. The automated system can be used as a stand-alone referral tool whereby the automated system determines whether a patient should be referred to a specialist for further assessment. Alternatively, the automated system can be used as a first processing step to remove patient images without significant indications of disease, and all images which are not discarded by the automated system can be examined by skilled image examination technicians to determine whether the patient should be referred to a specialist for further assessment.
In each case, the number of images that are determined to not indicate disease by the automated system reduces the workload either of the skilled image examination technician or of the specialist to whom the patients are referred. It is important that image analysis techniques for the processing of medical images are reliable both from the point of view of detecting features which are indicative of disease and from the point of view of not incorrectly detecting features which are not indicative of disease. If an automated system incorrectly indicates that a large number of images contain indications of disease when this is in fact not the case, workload will not be significantly reduced.
It is known that the number of microaneurysms occurring on the retina of a patient, amongst other things, can indicate the presence or absence of disease such as diabetic retinopathy in the patient. In general terms, the larger the number of microaneurysms occurring on the retina of a patient, the greater the indication that the patient has disease. The accurate detection of microaneurysms in a retinal image using an automated system is therefore beneficial, but has proved difficult. It is an object of some embodiments of the present invention to obviate or mitigate at least some of the problems set out above.
According to a first aspect of the invention there is provided a method of generating output data indicating areas of a retinal image representing lesions. The method comprises receiving candidate data associated with the retinal image, the candidate data indicating areas of the retinal image identified as representing respective candidate lesions. Mask data is received, the mask data indicating areas of the retinal image determined not to represent lesions, and the candidate data and mask data are processed to generate the output data indicating areas of the retinal image representing lesions.
Applying a mask in this way allows some candidate lesions to be rejected based upon prior knowledge of, for example, a detection process or apparatus.
As indicated above, detection of microaneurysms in retinal images provides clinically valuable information. The inventor of the present invention has surprisingly realised that part of the difficulty in properly identifying microaneurysms arises because small dark regions caused by something other than a microaneurysm are sometimes wrongly detected by an automated microaneurysm detection process as microaneurysms. These incorrect detections of microaneurysms can result in an automated system generating a large number of false positives, that is, determinations that a patient has disease when in fact the patient does not.
Generation of large numbers of false positives reduces the effectiveness of automated systems in reducing workload. Small dark regions in a retinal image wrongly identified as microaneurysms may in fact be caused by dust on a camera lens or damage to a sensor. The mask data described above can indicate areas of a retinal image which, although likely to be detected by a microaneurysm detection process as candidate microaneurysms, do not in fact represent microaneurysms. The described method can exclude such areas in the generation of the output data, which will therefore more accurately represent microaneurysms.
Of course, while the preceding paragraph has explained how dust or damage to a sensor can be incorrectly identified as a microaneurysm, it will be appreciated that other artefacts may be mistakenly identified as representing other lesion types, and such artefacts can also be excluded using mask data of the type described above.
The processing may comprise identifying areas of the retinal image indicated by the candidate data that correspond to areas of the retinal image indicated by the mask data. For each of the areas indicated by the candidate data, the area may be indicated in the output data if and only if the area does not correspond to any of the areas of the retinal image indicated by the mask data.
The mask data may be generated based upon at least one reference image, wherein the at least one reference image and the retinal image are captured using the same camera. The method may further comprise applying a detection process to the or each retinal image to generate the candidate data and applying the same detection process to the reference image to generate the mask data.
Reference to "the same camera" is intended to cover the use of the same image capture device or alternatively, the use of different image capture devices having at least some components in common.
The at least one reference image may be a retinal reference image. The at least one reference image may be a plurality of retinal reference images, and the candidate data may be associated with one of the plurality of retinal reference images.
Alternatively, the or each reference image may be an image other than a retinal image. For example, the or each reference image may be an image of a test object other than a human or animal eye (for example a "test card"), or the or each reference image may be an image of an artificial eye.
The lesions may be selected from the group consisting of microaneurysm, blot haemorrhage, exudate and drusen. Indeed, the lesions may be any lesion that may indicate disease.
The areas of the retinal image determined not to represent lesions as indicated by the mask data may be areas of the retinal image that represent an artefact. The artefact may be an artefact of an image acquisition process. The artefact may be caused by contamination in a camera, such as contamination (e.g. dust) on a camera lens.
Receiving mask data may comprise receiving a plurality of mask data and receiving identification data associated with the retinal image. One of the plurality of received mask data to be used in the processing of the candidate data and the mask data may be selected based upon the received identification data associated with the retinal image.
Generating a plurality of mask data is beneficial because a screening program may capture images from patients at different locations and it may not be practical to always use the same camera to capture each image. In such a case, different artefacts may be associated with each of the cameras used in the screening program. A different mask may be associated with each of the cameras and the associated mask can be used for each image depending upon the camera with which the image was captured, as sketched below.
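By way of illustration only, such per-camera mask selection might look as follows in Python; the array shape, camera identifiers and function name are hypothetical and not taken from the patent:

    import numpy as np

    # One boolean mask per camera: True marks pixel locations whose detections
    # are attributed to artefacts of that camera (e.g. dust on the lens).
    masks = {
        "camera_A": np.zeros((1400, 1400), dtype=bool),
        "camera_B": np.zeros((1400, 1400), dtype=bool),
    }

    def select_mask(camera_id: str) -> np.ndarray:
        # Choose the mask data based on the identification data associated
        # with the retinal image (here, the capturing camera's identifier).
        return masks[camera_id]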
According to a second aspect of the invention there is provided a method of generating mask data for application to retinal images to be processed. The method comprises receiving at least one image and generating mask data from the at least one image, the mask data indicating areas of the retinal images to be processed determined not to represent lesions.
The method may further comprise receiving initial data and receiving candidate data associated with the at least one image, the candidate data indicating areas of the at least one image identified as representing candidate lesions. Generating mask data may comprise updating the initial data based upon the candidate data.
Generating mask data may comprise generating, for each area of the retinal images, a likelihood that the area does not represent a lesion based upon the candidate data associated with the at least one image and the initial data. The initial data may comprise, for each area of the retinal images, an initial likelihood that the area does not represent a lesion and the likelihood for each area of the retinal images may be generated by updating the initial likelihood based upon the candidate data.
The updating may increase the initial likelihood for an area if the candidate data indicates the area represents a candidate lesion and the updating may decrease the initial likelihood for an area if the candidate data indicates the area does not represent a candidate lesion.
Updating the initial likelihood may comprise multiplying the initial likelihood by a factor based upon a predetermined constant α (which may be in the range 0 to 1) and further adding a multiple of the constant α, the multiple being based upon the candidate data.
The predetermined constant a may be determined based upon an expected frequency and a minimum frequency. The expected frequency may be based upon a proportion of images in which an area is expected to be indicated as a candidate lesion in respective received candidate data if the area does not represent a lesion and the minimum frequency may be based upon a minimum proportion of images in which an area is required to be indicated as a candidate lesion if the mask data indicates the area of the retinal images to be processed is determined not to represent a lesion.
The method may further comprise generating the candidate data by receiving an image indicating a plurality of points associated with respective candidate lesions and smoothing the received image to generate an image indicating a plurality of areas associated with the candidate lesions, each of the areas being based upon a respective one of the plurality of points. The smoothing may be Gaussian smoothing.
Smoothing the received image allows candidate lesions in respective images that are not in identical locations to be matched if the respective candidate lesions are located in approximately the same location. That is, by associating each candidate lesion associated with a point with a larger area, it is more likely that candidate lesions detected in slightly different positions in respective images can be identified as a single candidate lesion.

Generating the mask data may further comprise applying a first threshold to the likelihood associated with each of the areas of the retinal images. The method may further comprise processing the likelihood associated with each of the areas of the retinal images with reference to the first threshold, such that the mask data indicates that an area of the retinal images to be processed does not represent a lesion if the likelihood associated with that area exceeds the first threshold. As such, repeated detection of a candidate lesion at a particular location in respective images may cause the mask data to indicate that the candidate lesion does not represent a lesion because such detections may cause the likelihood associated with the particular location to exceed the threshold.
The first threshold may be determined based upon the constant α and an expected frequency, the frequency being based upon a proportion of images in which an area is expected to be indicated as a candidate lesion if the area does not represent a lesion. For example, the first threshold may be determined based upon a value α(1 − α)^n + α, where n is an integer based upon the expected frequency.
The method may further comprise receiving initial mask data, the initial mask data indicating whether each area of a previously processed image was indicated to not represent a lesion and for each area of the images to be processed the mask data may indicate that the respective area does not represent a lesion if and only if the likelihood for the area exceeds a second threshold and the initial mask data indicates that in the previously processed image the area was indicated to not represent a lesion. The second threshold may be lower than the first threshold.
The second threshold may be determined based upon the value a, an expected frequency and a number of images. The frequency may be based upon a proportion of images in which an area is expected to be indicated as a candidate lesion if the area does not represent a lesion. The number of images may be a number of images in which the area is not indicated as a candidate lesion, which can be expected to be processed subsequent to said likelihood for an area exceeding the first threshold, if the area does not represent a lesion.
That is, the aforesaid number of images indicates a number of images which can be processed subsequent to the likelihood for an area exceeding the first threshold, but in which a candidate lesion is not detected in the relevant area, which will still result in the area being considered to represent an artefact and therefore being indicated as masked in the generated mask data. The number of images may be based upon an assumption that the likelihood exceeds the first threshold by a predetermined amount. If the assumption is satisfied and if a number of images greater than the aforesaid number is processed, and each of those images indicates that the relevant area does not indicate a candidate lesion, that area will no longer be indicated as masked in the generated mask data. The number of images may be based upon an assumption that the likelihood has a value prior to the subsequently processed images which is caused by a detection in a first image and a further detection in an (n+1)th image, where n is the integer based upon the expected frequency. In such a case the value of the likelihood may be similar to but exceed the value of the first threshold.
For example, the second threshold may be determined based upon a value (α(1 − α)^n + α)(1 − α)^p, where n is the integer indicating the expected frequency and p is the maximum number of images.
The method may further comprise determining if the generated mask data satisfies a predetermined criterion and if the mask data does not satisfy the predetermined criterion, modifying the mask data such that the mask data satisfies the predetermined criterion.
If more than a predetermined number of areas of said retinal images have a likelihood exceeding the first threshold, the method may further comprise modifying the first threshold such that less than or equal to the predetermined number of areas of the retinal images have a likelihood exceeding the first threshold.
The predetermined number of areas may be selected based upon a maximum proportion of areas of the retinal images determined not to represent lesions relative to the total area of the retinal images.
Receiving candidate data may further comprise determining if the candidate data satisfies a predetermined criterion and if the candidate data does not satisfy the predetermined criterion, modifying the candidate data such that the candidate data satisfies the predetermined criterion.
The predetermined criterion may be based upon a number of areas identified as representing candidate lesions and/or the predetermined criterion may be based upon a sum of the values of each of the areas of the candidate data or a sum of the values of all areas of the candidate data.
A plurality of images may be received, each image having an associated identifier and generating mask data may comprise generating a plurality of mask data, each generated mask data having an associated mask identifier based upon the identifier associated with the image processed to generate the mask data. Each image may have an associated identifier indicating image acquisition equipment (e.g. a camera) used to acquire the image, and the mask data may be generated based upon a plurality of images acquired using particular image acquisition equipment.
If this is the case then mask data indicating artefacts may be associated with each of the cameras used. Using an identifier for each of the cameras allows a different mask to be generated for each camera indicating artefacts associated with a particular camera. The associated mask may then be used for images captured by the respective camera.
The first and second aspects of the invention may be combined to provide a method of generating output data indicating areas of a retinal image representing lesions by using mask data generated as set out above.
According to a third aspect of the invention there is provided a method of identifying indications of disease in a retinal image comprising processing the retinal image according to the first aspect of the invention. The mask data used in the first aspect of the invention may be generated according to the second aspect of the invention.
The disease may be selected from the group consisting of diabetic retinopathy, age-related macular degeneration, cardio-vascular disease, and neurological disorders (for example Alzheimer's disease and stroke), although those skilled in the art will realise that the methods described herein can be used to detect indicators of any disease which are present in retinal images.
Aspects of the invention can be implemented in any convenient form. For example computer programs may be provided to carry out the methods described herein.
Such computer programs may be carried on appropriate computer readable media which term includes appropriate tangible storage devices (e.g. discs). Aspects of the invention can also be implemented by way of appropriately programmed computers.
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 is a schematic illustration of a system for analysis of retinal images according to an embodiment of the present invention;
Figure 2 is a schematic illustration showing part of the system of Figure 1 in further detail;
Figure 3 is an example of a retinal image suitable for processing using the system of Figure 1;
Figure 4 is a flowchart showing processing to filter data indicating possible microaneurysms in an image;
Figure 5 is a flowchart showing processing of a set of images, including the processing of Figure 4;
Figure 6 is a flowchart showing processing to update a mask based on a set of microaneurysms associated with an image;
Figures 7A to 7D show a corresponding part of images used in the processing of Figure 6;
Figure 8 is a flowchart showing alternative processing to the processing of Figure 6 to update a mask based on a set of microaneurysms associated with an image;
Figures 9A to 9E each show a graph indicating change in the value of a pixel in the image of Figure 7C as parameters are varied; and
Figure 10 is a flowchart showing processing to generate a set of microaneurysms from an image.
Referring now to Figure 1, a camera 1 is arranged to capture a digital image 2 of an eye 3. The digital image 2 is a retinal image showing features of the retina of the eye 3. The image 2 is stored in a database 4 for processing by a computer 5. Images such as the image 2 of Figure 1 may be collected from a population for screening for a disease such as, for example, diabetic retinopathy. The camera 1 may be a fundus camera such as a Canon CR5-45NM from Canon Inc. Medical Equipment Business Group, Kanagawa, Japan, or any camera suitable for capturing an image of the retina of an eye.
Referring now to Figure 2, the camera 1 and eye 3 are shown. The camera 1 comprises an objective lens 6, a main body 7 and a sensor 8 arranged to capture an image. The sensor 8 may be a part of a standard digital camera which is attached to the main body 7, or any other suitable sensor. The main body 7 includes a lens arrangement 9 which directs light through the camera 1 from the eye 3 to the sensor 8. A focus adjustment knob 10 allows a user to adjust the focus by moving lenses of the lens arrangement 9.
Referring now to Figure 3, a retinal image 11 acquired using the camera 1 and suitable for processing by the computer 5 of Figure 1 is shown. The image 11 shows a retina 12 upon which can be seen an optic disc 13 and blood vessels 14. Further areas 15 can be seen and these further areas can be classified by human inspection.
Some of these further areas 15 are indicative of disease, and detection and identification of such areas is therefore desirable. Each further area 15 may be, amongst other things, a lesion such as a microaneurysm, a blot haemorrhage, an exudate, or drusen, an anatomical feature such as the optic disc, the macula or the fovea, or an artefact arising from the image acquisition process. Artefacts in a retinal image such as the retinal image 11 may be caused by, amongst other things, dust on an external lens of the camera 1, such as the objective lens 6, or a lens associated with the sensor 8, or a lens of the lens arrangement 9 inside the main body 7 of the camera 1. Damage to the sensor 8 may also cause artefacts in the retinal image.
Artefacts of the type described above may be similar in appearance to lesions, the detection of which is desirable. As such, an automated system intended to detect lesions may incorrectly detect an artefact as a lesion. This is particularly the case for microaneurysms, which are particularly difficult to distinguish from artefacts in retinal images.
Figure 4 shows processing carried out to filter an input set of possible microaneurysms MA1 associated with an image I. In general terms, the processing of Figure 4 uses stored data indicating locations at which detection of a microaneurysm is likely to be caused by something other than a microaneurysm, such as an artefact.
At step S1 the set of candidate microaneurysms MA1 is received. Each of the candidate microaneurysms in the set MA1 is associated with an area of the input image I. The area of the input image I associated with a candidate microaneurysm may be indicated by a single pixel, the single pixel indicating the centre of the area of the image detected as indicating a microaneurysm. The single pixel may be identified using (x,y) coordinates or any other convenient location identifier. The set of candidate microaneurysms may be generated from the image I according to the processing of Figure 10 described below, or by any suitable method. At step S2 the set MA1 is processed together with a mask B. The mask B indicates a set of points in the image that are considered to not represent microaneurysms, irrespective of whether they are detected as candidate microaneurysms by a microaneurysm detection method such as that of Figure 10. The processing of step S2 removes any data points from the set MA1 that are associated with a corresponding point in the mask B. At step S3 the set of points determined to represent microaneurysms after applying the mask B to the input set is output.
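A minimal Python sketch of the Figure 4 filtering is given below; the representation of candidates as (x, y) tuples and of the mask B as a boolean numpy array is an assumption made for illustration:

    import numpy as np

    def filter_candidates(candidates, mask_b):
        # Step S1: candidates is the set MA1 of candidate microaneurysm
        # locations, each given as an (x, y) pixel coordinate.
        # Step S2: remove any candidate whose location corresponds to a point
        # set in the mask B (True means "considered not a microaneurysm").
        # Step S3: return the surviving detections.
        return [(x, y) for (x, y) in candidates if not mask_b[y, x]]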
The processing of Figure 4 to filter an input set of possible microaneurysms may be carried out as a part of the processing of a plurality of images as shown in Figure 5.
The processing of Figure 5 takes as input a series of images and generates a mask from these images. Each time a new image is input the set of microaneurysms detected in that image is used to update the mask based upon the frequency of detection of a microaneurysm at a given location in the processed images. The processing of Figure 5 is based upon the fact that in a given retinal image there are a large number of pixels (1,960,000 in an image of dimensions 1400×1400) and only a very small proportion of those pixels are likely to be identified as indicating the presence of a microaneurysm. As such, if the same pixel (or pixels that are very close to one another) repeatedly indicates the presence of a microaneurysm in different retinal images captured from different patients, then it is likely that microaneurysm detection at the particular pixel is caused by an artefact in the system, on the basis that repeated microaneurysm detection in a single location is unlikely. From the above, it will be appreciated that the processing of Figure 5 generates a mask based upon the frequency of recurrence of microaneurysms at a particular location.
The processing of Figure 5 will now be described. At step S5 a set of images S is input. The set of images S may be taken from a population to be screened for disease. The set S will in general include a single image or multiple retinal images from each eye of each of the patients in the population.
At step S6 a mask B and an image A are initialised. Image A is an image used in the generation of the mask B. The mask B may be initialised to be a blank mask such that initially the mask B does not indicate any points in an image that are not to be considered as microaneurysms if so detected. Image A is initialised to correspond to the mask B. At step S7 an image I is selected from the set S such that the image I has not previously been processed. At step S8 a set of possible microaneurysms MA1 is determined from the image I. The set of possible microaneurysms may be determined by any suitable method such as that described below with reference to Figure 10. At step S9 the mask B is updated in accordance with the set of possible microaneurysms MA1 as is described in further detail below with reference to Figure 6. At step S10 the mask B is applied to the image I according to the processing described above with reference to Figure 4. At step S11 a test is carried out to determine if there are more images to be processed. If it is determined that there are more images to be processed then processing continues at step S7 where a further image is selected from the set S which has not previously been processed. If it is determined at step S11 that there are no more images to be processed then at step S12 the processing ends.
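The loop of Figure 5 might be sketched as follows, reusing filter_candidates from the sketch above; detect() stands in for any candidate detection method (such as that of Figure 10) and update_mask() for the step S9 processing of Figure 6, both assumed rather than specified here:

    import numpy as np

    def process_image_set(images, detect, update_mask, shape=(1400, 1400)):
        # Step S6: initialise the accumulator image A and a blank mask B.
        A = np.zeros(shape)
        B = np.zeros(shape, dtype=bool)
        outputs = []
        for image in images:                      # steps S7 and S11
            candidates = detect(image)            # step S8
            A, B = update_mask(A, B, candidates)  # step S9 (Figure 6)
            outputs.append(filter_candidates(candidates, B))  # step S10
        return B, outputs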
Processing to update the mask B described above with reference to step S9 of Figure 5 will now be described with reference to Figure 6. At step S15 the set of candidate microaneurysms MA1 associated with image I is received and at step S16 an image M1 is generated. The image M1 is a binary image of the same horizontal and vertical dimensions as the image I, and each pixel of the image M1 corresponds to a pixel in the image I. Each pixel identified as representing a candidate microaneurysm in the image I by the processing of step S8 of Figure 5 has value 1 in the image M1 and all other pixels have value 0 in the image M1. A small part of an example image M1 is shown in Figure 7A and described below. The image M1 is in general sparsely non-zero and non-zero locations are separated from one another.
At step S17 an image M1' is generated from the image M1 by convolving M1 with a 2-dimensional scaled Gaussian kernel, the elements of which have values given by a Gaussian function such as that shown in equation (1) below:

G(x, y) = exp(−(x² + y²)/(2σ²))    (1)

where σ is a scalar with value greater than 0. The function G rapidly approaches a value of zero as |x| and |y| become large, and it is therefore possible to limit the size of the kernel, and therefore the size of the area of the image to which the kernel is applied in a single convolution operation, for example to −3σ ≤ x ≤ 3σ and −3σ ≤ y ≤ 3σ, without significantly affecting the result of the convolution. This means that the area that is affected by the convolution may be restricted to an area with radius 3σ centred on a point in M1 that has value 1. The effect of convolution of the image M1 with the kernel of equation (1) is to spread each non-zero point over a limited area in the image M1' (i.e. to smooth the image). In general, pixels of the image M1' have values in the range 0 to 1. However, a pixel in the image M1' may have a value greater than 1 in the case where the area used as a basis for the smoothing operations comprises a plurality of microaneurysms indicated in the image M1. An example image M1' derived from the image M1 of Figure 7A is shown in Figure 7B and described below.
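Step S17 could be realised as below, building the truncated, peak-1 kernel of equation (1) explicitly; the use of scipy's convolve, the constant border mode and the function name are implementation assumptions:

    import numpy as np
    from scipy.ndimage import convolve

    def smooth_detections(m1, sigma):
        # Build a kernel over [-3*sigma, 3*sigma] with values from equation
        # (1); its peak is 1, so an isolated detection gives a peak of 1 in M1'.
        r = int(np.ceil(3 * sigma))
        y, x = np.mgrid[-r:r + 1, -r:r + 1]
        kernel = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
        # Convolving the binary image M1 spreads each non-zero point over a
        # limited area, i.e. smooths the image.
        return convolve(m1.astype(float), kernel, mode="constant")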
At step S18 the image A is updated according to equation (2) below:

A ← (1 − α)A + α·min(M1', 1)    (2)

where α is a scalar in the range 0 to 1 and the function min returns an image whose value at each pixel is equal to the minimum of the corresponding pixel value in the image indicated by its first argument and the value indicated by its second argument.
From equation (2) it can be seen that, given that all pixels of the image A are initialised to zero, the value of A at a particular pixel (x,y) remains zero unless the value of that pixel in the image M1' is non-zero. The function min ensures that the amount that is added does not exceed α, even in the case where the value of a pixel in M1' exceeds 1, as can occur as described above.
At step S19 the mask B is generated by thresholding the image A with respect to a threshold β. The thresholding is such that a pixel B(x,y) takes the value '1' when the corresponding pixel A(x,y) is equal to or exceeds the threshold β, and a pixel B(x,y) takes the value '0' when the corresponding pixel A(x,y) is less than the threshold β.
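Steps S18 and S19 then reduce to a few lines; this sketch assumes numpy arrays and the symbols of equations (2) and (3):

    import numpy as np

    def update_accumulator(A, m1_smoothed, alpha):
        # Equation (2): decay A by (1 - alpha) and add alpha * min(M1', 1).
        return (1 - alpha) * A + alpha * np.minimum(m1_smoothed, 1.0)

    def threshold_mask(A, beta):
        # Step S19: a mask pixel is set where the accumulated value reaches beta.
        return A >= beta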
The processing of Figure 6 has been described above with reference to processing a single image. It should be noted that Figure 6 shows the processing of step S9 of Figure 5 in greater detail, and as such the processing of Figure 6 is a part of a loop for processing a plurality of images. That is, the processing of Figure 6 is repeated for each of a plurality of images. With this in mind, the processing of Figure 6 can be better understood as explained in further detail below.
It has been described above that the processing of step S17 takes a single pixel location indicating a microaneurysm and creates a smoothed area centred upon that single pixel location, the size of the area being determined by the size of the kernel.
The purpose of this smoothing (which is carried out on each processed image in turn) is to allow for slight differences in microaneurysm detections which are attributable to a single artefact to be accommodated.
The image A is updated (at step S18) on the basis of the smoothed image M1' (generated at step S17). Repeated microaneurysm detections at or near a particular location will cause the value of A at that particular location to increase, given the form of equation (2). On the other hand, where a microaneurysm is not detected at or near a particular location the value of A at that location, if already greater than zero, is caused to reduce given the form of equation (2).
In more detail, the pixel value at the location (x,y) in the image A is reduced by a factor of (1 − α) (given the first term of the addition of equation (2)) and is increased by α·min(M1', 1) (given the second term of the addition of equation (2)). As such, where the value at the pixel location (x,y) in the image M1' is greater than the value at the pixel location (x,y) in the image A, the net effect of the updating of the image A according to equation (2) will in general be an increase in the value of the pixel in the image A and may cause the pixel to be masked. Given that pixel values in the image M1' are likely to be greater than corresponding values in the image A in a small area around each non-zero pixel in the image M1, pixels surrounding a microaneurysm detection may become masked.
It follows that the effect of the smoothing of step S17 is to increase the number of pixels in the image M1' having a non-zero value as a result of a single microaneurysm detection, and as such to increase the number of pixels in the image A which have values which exceed the threshold β and become masked.
It will be appreciated that the image M1' produced by the processing of step S17 may be generated in any convenient way that produces a spread of non-zero points over a limited area based upon a single detection. Indeed, it will be appreciated that the exact effect of a single microaneurysm detection on the generation of the image A will depend upon the spatial extent of the smoothing operation (determined by the value σ in the above example).
The value of the threshold β is selected to be greater than the value α so that the value of a pixel A(x,y) becomes greater than β (and hence the value of the corresponding pixel in the mask B becomes 1) only after more than one detection of a possible microaneurysm at similar locations.
Referring now to Figures 7A to 7D, a corresponding part of each of sample images M1, M1', A and B respectively are shown. Each of Figures 7A and 7B are generated from a corresponding part of a retinal image I, Figure 7C is generated from the image of Figure 7B together with stored data relating to previously processed images and Figure 7D is generated from the image of Figure 7C. Each of Figures 7A to 7D is generated according to the processing described above with reference to Figures 5 and 6.
Figure 7A shows a part of the image M1. Circled pixels 21, 22 indicate pixels in the image that have been identified as candidate microaneurysms according to the processing of Figure 10 described below. Figure 7B shows the result of convolving the image of Figure 7A with a kernel having values given by the Gaussian function G of equation (1). Figure 7D shows the mask B after updating a previous mask in accordance with the image of Figure 7C. White areas 22 indicate areas of the image that are considered to be caused by artefacts and as such should not be identified as microaneurysms.
The white areas are determined by the frequency of detections of the area as a microaneurysm in a plurality of images, as indicated by the areas 22b of the image of Figure 7C. Applying the mask B of Figure 7D to the image of Figure 7A results in the three identified microaneurysms 22 being discarded, and only the identified microaneurysms 21 are considered true microaneurysms in this case.
It has been described above, with reference to Figure 6, that values α and β are used to determine the mask B. The values α and β are selected so that effective artefact removal is obtained whilst minimising the masking of true microaneurysm detections.
Given the size of an input image I and the possible number of pixel locations (1,960,000 in an image of dimensions 1400×1400), together with the relative sparsity of microaneurysm detections (the mean number of microaneurysm detections in a retinal image of a person with diabetes is less than 2, and only 1% of images have 25 or more detections), the likelihood of a microaneurysm occurring in the same or similar location in any two images by chance is relatively low. However, artefacts caused by dust may only be detected in a proportion of images captured using the affected apparatus.
Artefacts tend to be stationary in a camera and as such, the locations of candidate lesion detections caused by an artefact in a particular camera may be determined by observing the locations and frequencies of candidate lesion detections in a sequence of processed images captured using the camera. That said, examination of images containing artefacts suggests that an artefact may be detected at or near a certain location in as few as 1 in 20 images. This inconsistent detection may be due to a number of causes such as the effect of refocusing carried out for a new patient causing an artefact to appear or disappear in subsequent images. Similarly, whether an artefact is detected as a microaneurysm may depend on the nature of the surrounding image, for example whether its position in a retinal image lies on a blood vessel.
The value of the threshold β can be calculated based upon the value α and a maximum number of images, n − 1, expected between detections of an artefact.
According to equation (2), given that the image A is initialised by setting all its pixel values to zero, the first time an artefact is detected as a microaneurysm at a location (x,y) the value A(x,y) is assigned the value α. Based on the assumption that an artefact would normally be expected to reappear at location (x,y) at least once in every n sequentially processed images, the value β can be determined according to equation (3):

β = α(1 − α)^n + α − ε    (3)
where ε indicates a small value determined based upon the degree of accuracy used within the system; for example, if pixel values are stored to more than three decimal places, a value of ε of 0.001 would be suitable. The value ε and the degree of accuracy used within the system should be less than the value α. A suitable value for n is 20, as explained above.
Equation (3) is derived based upon equation (2) as is now described. The value α(1 − α)^n + α in equation (3) corresponds to the value A(x,y) if a microaneurysm detection is made in a first image, no microaneurysm detection is made in the subsequent n − 1 images, and a microaneurysm detection is again made in the (n+1)th image. Subtraction of the small value ε sets the threshold β to be just smaller than the value of A(x,y) in the case where a microaneurysm detection is made in a first image and subsequently in the (n+1)th image (the 21st image where n has a value of 20, i.e. a detection rate of 1 in 20). This means that in such a case the value B(x,y) (generated by comparing the value A(x,y) with the threshold β) will be 1 after processing the (n+1)th image and therefore the second microaneurysm detection will be masked as desired.
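As a worked check of equation (3), using the values quoted later in the text (alpha = 0.01, n = 20, epsilon = 0.00001):

    alpha, n, eps = 0.01, 20, 0.00001
    beta = alpha * (1 - alpha)**n + alpha - eps
    print(beta)  # about 0.0182, just below the value A(x,y) reaches on the second detection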
The value β described above with reference to equation (3) is determined based upon an assumption that masking is desired on a second detection, if the second detection is within a predetermined number of processed images of a first detection.
It will be appreciated that the value β may be determined based upon a different number of required detections and equation (3) may be modified accordingly.
The preceding description has included use of the scalar value α. A suitable value of α for effective artefact removal can be determined experimentally based upon a training set of images or, alternatively, the value α can be determined based upon the value n described above together with a minimum frequency at which an artefact is detected for which masking of that artefact is desired, indicated by a number m.
As set out above, the value of the threshold β as determined by equation (3) is such that masking will occur if a microaneurysm detection is made in a first image, no microaneurysm detection is made in the subsequent n − 1 images, and a microaneurysm detection is again made in the (n+1)th image; that is, a value of A(x,y) that is equal to (α(1 − α)^n + α) will cause a pixel B(x,y) to be set to a value '1' as this value will exceed the threshold. Additionally, based upon an assumption that masking will eventually occur with a minimum frequency of microaneurysm detections at a particular pixel of 1 detection in every m images, a value of A(x,y) that is equal to the value shown in equation (4) below will also cause a pixel B(x,y) to be set to a value '1' by exceeding the threshold β.
[...[[α(1 − α)^m + α](1 − α)^m + α](1 − α)^m + α...]    (4)

The term α(1 − α)^m + α in equation (4) corresponds to a first detection at pixel (x,y) followed by m − 1 images without detections at pixel (x,y) and a subsequent detection in the (m+1)th image, and each term of the form [...](1 − α)^m + α in equation (4) corresponds to a subsequent (m − 1) images without detections at pixel (x,y) and a further detection in the subsequent image. The term [[α(1 − α)^m + α](1 − α)^m + α] therefore corresponds to a first detection in image 1, a second detection in image (m+1) and a third detection in image (2m+1). Equation (4) also includes a term based upon a further fourth detection in image (3m+1). Each additional term of the form [...](1 − α)^m + α in equation (4) (not shown) corresponds to a further detection in a further ((r−1)m+1)th image.
Since both of the sequences of microaneurysm detections set out above cause a pixel B(x,y) to be set to a value '1', and the setting of the pixel B(x,y) is dependent upon the value A(x,y) and the threshold β, it can be deduced that the formulae corresponding to the two sequences of microaneurysm detections should be equal, to an acceptable approximation, giving equation (5) below.
[...[[α(1 − α)^m + α](1 − α)^m + α](1 − α)^m + α...] = α(1 − α)^n + α    (5)

The left hand side of equation (5) can be simplified to give equation (6) below:

α(1 + ξ + ξ² + ξ³ + ...) = α(1 − α)^n + α    (6)

where ξ = (1 − α)^m. Dividing both sides by α gives equation (7) below.
1 + ξ + ξ² + ξ³ + ... = (1 − α)^n + 1    (7)

It can be seen that the left hand side of equation (7) is a geometric series. Since the value α is in the range 0 to 1, it can be seen that the value ξ always has a value less than 1 and equation (7) can therefore be simplified as shown in equation (8) given the known simplification of a geometric series:

1/(1 − ξ) = (1 − α)^n + 1    (8)
Substituting ξ = (1 − α)^m back into equation (8) and rearranging gives equation (9) below.
1/(1 − (1 − α)^m) = (1 − α)^n + 1    (9)

Equation (9) may be solved to determine the value α based upon the values m and n in any convenient way. For example, equation (9) can be solved by trial and error, given that determination of the value α is only necessary once, at the design stage of a system implementing the methods herein described.
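Equation (9) has no closed-form solution for alpha, but the trial-and-error search the text mentions is straightforward to automate; this bisection sketch is illustrative only:

    def solve_alpha(n, m, lo=1e-6, hi=0.999):
        # f changes sign at the alpha satisfying equation (9):
        # 1 / (1 - (1 - a)^m) = (1 - a)^n + 1.
        f = lambda a: 1.0 / (1.0 - (1.0 - a)**m) - ((1.0 - a)**n + 1.0)
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    print(solve_alpha(20, 79))  # close to the value of 0.01 quoted in the text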
Experiments have shown that the following values allow artefacts to be effectively masked using the methods described above: n = 20; m = 79; α = 0.01 (as given by equation (9)); and β = 0.01816 (as given by equation (3)).
Whilst it has been described above that the value α can be determined based upon the values m and n, it will be appreciated that the values α and β can be selected without reference to m and n, for example by using experimentation on a test set of images to determine values that give effective artefact removal whilst minimising the masking of true microaneurysm detections.
It has been described above that the mask B is generated with reference to a single thresholding of the image A with a threshold β. In an alternative embodiment, the mask B may be generated with reference to a pair of thresholds, β as set out above and γ.
Figure 8 shows processing to determine the mask B based upon a pair of thresholds.
Steps S15A to S18A of Figure 8 are identical to steps S15 to S18 of Figure 6 and are not described further here.
At step S19A binary images C1 and C2, both having the same horizontal and vertical dimensions as the original retinal image I, are generated. The images C1 and C2 are binary images produced by thresholding the image A with respective thresholds γ and β, which are such that β > γ. Each of the images C1, C2 is generated to have a value of zero for all pixels except those pixels at locations corresponding to locations in the image A having pixels with a value that is greater than or equal to the respective threshold γ or β. That is, for each pixel (x,y) where A(x,y) ≥ γ the pixel C1(x,y) is assigned the value 1 and for each pixel (x,y) where A(x,y) ≥ β the pixel C2(x,y) is assigned the value 1.
At step S20A the mask B is updated according to equation (10):

B ← (B ∧ C1) ∨ C2    (10)

where C1, C2 are the images generated at step S19A and ∧, ∨ are respectively pixel-wise logical AND and OR operations which treat values of '1' as true and values of '0' as false. Equation (10) is such that a pixel B(x,y) takes a value '1' when the corresponding pixel A(x,y) is equal to or exceeds the threshold β. The value of the pixel B(x,y) will also be '1' if it already has a value of '1' and the value A(x,y) exceeds the threshold γ. As such, the threshold γ allows a pixel (x,y) in the image B to have a value '1' when the value A(x,y) is less than the threshold β if the value B(x,y) in the image input to equation (10) is equal to '1'. As such, the threshold γ prolongs the number of images for which masking is carried out (i.e. the number of images processed to update the image B which result in B(x,y) having a value '1') after the value A(x,y) becomes greater than the threshold β.
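Steps S19A and S20A might be sketched as follows, applying both thresholds to the same accumulator image A (boolean numpy arrays assumed):

    def update_mask_hysteresis(A, B, beta, gamma):
        # Step S19A: threshold A with gamma (lower) and beta (upper), beta > gamma.
        c1 = A >= gamma
        c2 = A >= beta
        # Step S20A, equation (10): already-masked pixels stay masked while they
        # remain above gamma; new pixels become masked once they reach beta.
        return (B & c1) | c2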
The value γ can be set such that if an artefact appears exactly twice in (n+1) sequentially processed images, masking will continue if the artefact appears at least once in the next p processed images, according to equation (11):

γ = (α(1 − α)^n + α)(1 − α)^p − ε    (11)
The number p indicates a number of images in which an artefact is not detected which can be processed after an artefact was first identified as such which will still result in the artefact being identified as an artefact; i.e. the number of images in which an artefact is not detected which can be processed after the processing of an image which caused a pixel of interest to exceed the threshold β. The value of p is based upon an assumption that the artefact is first identified as such by the pixel of interest having a value which exceeds the threshold β by a minimum value (referred to as ε in the preceding description of equation (3)). That is, if first identification of the artefact as such is caused by a value of the pixel of interest exceeding the threshold by a value greater than the minimum value, a greater number of images in which the artefact is not detected than that indicated by the value p can be processed while maintaining identification of the artefact; given the definition of β above, this occurs when the value of a pixel exceeding the threshold β is greater than the value that would be caused by a first detection in a first image and a second detection in an (n+1)th image.
The value ((α(1 − α)^n + α)(1 − α)^p) of equation (11) corresponds to the value A(x,y) when a microaneurysm detection is made in a first image, no microaneurysm detection is made in the subsequent (n − 1) images, and a microaneurysm detection is again made in the (n+1)th image (corresponding to the term used in the computation of β), followed by a further p images in which no microaneurysm detection is made.
Given the desire to continue masking (i.e. that a pixel in the image A should exceed the threshold γ) if an artefact reappears at least once in p images following the (n+1) images in which the artefact appeared twice, it can be seen that setting the value γ to be slightly smaller than the value A(x,y) determined by ((α(1 − α)^n + α)(1 − α)^p) (by subtracting ε) ensures that the value C1(x,y) corresponding to the pth image subsequent to the (n+1) images in which an artefact appeared exactly twice will be 1, and since the value B(x,y) is also 1 then according to equation (10) masking is maintained. However, setting the value γ very close to the value ((α(1 − α)^n + α)(1 − α)^p) means that if the (p+1)th image subsequent to the (n+1) images in which an artefact appeared exactly twice does not indicate a microaneurysm at location (x,y) then the value C1(x,y) corresponding to the ((n+1)+(p+1))th image will be 0 (because the value of A(x,y) will be below the threshold γ) and according to equation (10) the value B(x,y) will also be 0.
In some embodiments the values m (described above with reference to the determination of the value a) and p may be equal, although in alternative embodiments it may be preferable to set the values m and p independently.
If the value γ is greater than (β − α)/(1 − α), the term (B ∧ C1) will have no effect in equation (10) on the determination of the image B. That is, if the value γ is greater than (β − α)/(1 − α) then for any detection at a pixel (x,y) in a processed image, the value (B ∧ C1) for the pixel (x,y) will only be '1' if the value C2 for the pixel (x,y) is also 1. In such a case the method of determining the mask B with reference to a single threshold, described above with reference to Figure 6, should be used. The ineffectiveness of the threshold γ in the case set out above is explained by the following description.
For a detection at a pixel (x,y) in an ith processed image, based upon values B(x,y)' and A(x,y)' determined after the (i−1)th image has been processed, there are three possible cases which are set out below.
In the first case, let the value B(x,y)' be equal to '0'. In such a case, the term (B ∧ C1), used to determine the value B(x,y) for the ith processed image in equation (10), will always be equal to '0' irrespective of the value A(x,y)' relative to the thresholds γ and β. The value B(x,y) will therefore be determined based only upon the value of the pixel (x,y) in the image C2.
In the second case, let the value B(x,y)' be equal to '1' and the value A(x,y)' be greater than the threshold β. In such a case the value A(x,y) for the ith processed image will also be greater than the threshold β, as the value A(x,y) always increases for a detection in an image. As such, the value of the pixel (x,y) in the image C2 will be equal to '1' and the value B(x,y) will be '1' irrespective of the term (B ∧ C1).
In the third case, let the value B(x,y)' be equal to '1' and the value A(x,y)' be less than the threshold β. Given that the value B(x,y)' is equal to '1', the value A(x,y)' must be greater than the threshold γ. Setting the value A(x,y)' equal to the threshold γ (that is, just less than the minimum value of A(x,y)') gives a value A(x,y) in the ith image of (1 − α)γ + α, assuming a detection in the ith image. From equation (10), if the value A(x,y) is greater than the threshold β then the term (B ∧ C1) again has no effect, as the value of the pixel (x,y) in the image C2 will be equal to '1' and the value B(x,y) will also be '1'. Therefore if (1 − α)γ + α > β is satisfied it will always be the case that the term (B ∧ C1) has no effect on equation (10). Rearranging the above inequality gives γ > (β − α)/(1 − α), as set out above. That is, if γ > (β − α)/(1 − α), the threshold γ has no effect on equation (10) because a pixel in the image A will never exceed γ without also exceeding β in a case where the pixel has a value '1' in the input image B.

Figures 9A to 9E each show a graph indicating change in the value of a particular pixel at a location (x,y) in the image A based upon different detection frequencies in a set of 350 images. Figures 9A to 9C show change in the value for the pixel for a value of α of 0.01 (determined based upon a value of n of 20 and a value of m of 79 using equation (9) above), and Figures 9D and 9E show change in the value for the pixel for a value of α of 0.1 (determined based upon a value of n of 20 and a value of m of 21 using equation (9) above). Each of Figures 9A to 9C uses a single threshold β to determine the mask B, as described above with reference to Figure 6, and each of Figures 9D and 9E uses a pair of thresholds β, γ, determined as described above with reference to Figure 8.
Referring first to Figures 9A to 9C, a line 25 in each of Figures 9A to 9C indicates the value of the threshold β. The threshold β is based upon a value of α of 0.01. The value of δ used in equation (3) to determine the threshold β is 0.00001, indicating that a high degree of accuracy is used, and the value of the threshold β is therefore 0.01816. A respective line 27a, 27b and 27c in Figures 9A, 9B and 9C indicates the value for a particular pixel A(x,y) in the image A described above.
The line 27a of Figure 9A is based upon a detection of a microaneurysm at a particular pixel in 1 in every 20 images (that is, a detection in image 1, a detection in images 21, 41, etc.). A point 28a in Figure 9A indicates the value A(x,y) after 21 images and two detections and has a value of 0.01817, exceeding the threshold β, and therefore the detection at image 21 is masked. The line 27a exceeds the threshold β for each subsequent detection and therefore each subsequent detection is masked.
The line 27b of Figure 9B is based upon a detection of a microaneurysm at a particular pixel in 1 in every 60 images (that is, a detection at a particular pixel in greater than 1 in m images, given that m has a value of 79 in the example of Figure 9B). A point 28b, indicating the value A(x,y) after 121 images and three detections, has a value of 0.01846 and therefore exceeds the threshold β. A detection rate of 1 in every 60 images will therefore cause masking on the third detection at a given pixel, and subsequent detections at the same pixel will also be masked as long as a detection rate of approximately 1 in every 60 images is maintained. This is because a detection of 1 in every 60 images is a greater frequency than 1 in m images, where m has a value of 79.
The line 27c of Figure 9C is based upon a detection of a microaneurysm in 1 in every 80 images (that is, a detection at a particular pixel in less than 1 in m images, given that m has a value of 79 in the example of Figure 9C). The line 27c can be seen to not exceed the threshold β after 350 processed images. Indeed, if a microaneurysm detection is made at the same pixel in 1 in every 80 images, the maximum value that the pixel A(x,y) will achieve is 0.0181, which is less than the threshold β. Masking will therefore never commence if microaneurysm detections are made at a particular pixel in 1 in every 80 images for the particular values of α and β described, where α is determined based upon a minimum frequency of detection of 1 in 79 images.
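The behaviour shown in Figures 9A to 9C can be checked numerically. The sketch below assumes the per-image update A ← (1 - α)A + α·d (with d = 1 for an image containing a detection at the pixel and d = 0 otherwise), an assumption which reproduces the values quoted above; the function name and loop structure are illustrative only.

```python
def trace_pixel(alpha, period, n_images=350):
    """Trace A(x,y) for one pixel detected once every `period` images
    (detections in images 1, 1 + period, 1 + 2 * period, ...)."""
    values, A = [], 0.0
    for i in range(1, n_images + 1):
        d = 1.0 if (i - 1) % period == 0 else 0.0
        A = (1.0 - alpha) * A + alpha * d  # assumed exponential update
        values.append(A)
    return values

beta = 0.01816                            # threshold quoted for alpha = 0.01
print(trace_pixel(0.01, 20)[20])          # image 21: ~0.0182 > beta (Figure 9A)
print(trace_pixel(0.01, 60)[120])         # image 121: ~0.0185 > beta (Figure 9B)
print(max(trace_pixel(0.01, 80)) < beta)  # True: 1 in 80 never masks (Figure 9C)
```

Under this assumed update, the steady-state peak value for a detection every k images is α/(1 - (1 - α)^k), which for α = 0.01 and k = 80 is approximately 0.0181, matching the maximum value quoted for Figure 9C.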
In the example above, a detection rate of at least 1 image in 79 for a particular pixel will eventually cause masking of detections, as indicated by the value m used in the determination of the value α. That is, if a particular area is identified as a microaneurysm regularly but relatively infrequently, the area is considered not to be a true microaneurysm, although as the frequency of detection decreases, the number of detections required to cause masking increases, within the bounds set out above.
The existence of a minimum detection rate (i.e. 1 in 79 in the example set out above) at which masking occurs is beneficial, as without such a detection rate, there is a danger that multiple detections at any location, no matter how infrequent, could cause masking, thereby increasing the possibility that the method masks true microaneurysm detections.
Referring now to Figures 9D and 9E, a line 30 in each of Figures 9D and 9E indicates the value of the threshold β and a line 31 in each of Figures 9D and 9E indicates the value of the threshold γ. The thresholds β, γ are determined based upon a value of α of 0.1, a value of n of 20 and a value of p of 30. The value of δ used in equations (3) and (11) to determine the thresholds β, γ is 0.001. The values of the thresholds β, γ, determined by equations (3) and (11), are 0.1111 and 0.0047 respectively.
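For reference, threshold forms consistent with the values just quoted, and with the forms given in claims 33 and 36, are set out below; these are an assumption about the underlying equations rather than a statement of them:

```latex
\beta  \approx \alpha(1-\alpha)^{n} + \alpha - \delta,
\qquad
\gamma \approx \bigl(\alpha(1-\alpha)^{n} + \alpha\bigr)(1-\alpha)^{p}
```

With α = 0.1, n = 20, p = 30 and δ = 0.001 these give β ≈ 0.111 and γ ≈ 0.0047, consistent with the figures quoted above.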
The value of α of 0.1 used in Figures 9D and 9E was selected directly, rather than determined based upon a value m indicating a minimum frequency at which masking will ever occur. The selected value of α gives a value of m of 21, as can be determined from equation (10).
A line 32a in Figure 9D indicates the value A(x,y) at a particular pixel based upon a detection of microaneurysms as described below. A point 33a indicating the value A(x,y) after 21 images, and therefore 2 detections, has a value of 0.1121 and exceeds the threshold β. A microaneurysm will therefore be masked in the 21st image. A subsequent microaneurysm detection in the 41st image, indicated by the point 34, increases the value of A(x,y) above the threshold β and so the detection in the 41st image is masked. A subsequent microaneurysm detection in the 71st image, indicated by the point 35, increases the value of A(x,y) to a value below the threshold β. Since a point 35a indicates that the value of A(x,y) remained above the threshold γ for each of the 42nd to 70th processed images, the detection in the 71st image is masked even though the value A(x,y) is below the threshold β.

A line 32b in Figure 9E indicates the value A(x,y) at a particular pixel based upon a detection of a microaneurysm in 1 in 20 images up to the 61st image. Masking is therefore carried out in the same way as for Figure 9D. The line 32b shows a subsequent detection is made at image 120 and then in 1 in 20 images from image 120 onwards. A point 36 corresponding to the detection in the 120th image can be seen to have a value less than the threshold β (as indicated by the point 36 being below line 30). As the value of A(x,y) at image 119 is less than the threshold γ (as indicated by a point 37 being below line 31), the detection in the 120th image is not masked. A point 38 corresponding to a detection in the 140th image has a value greater than the threshold γ and so is masked. Subsequent detections are also masked.
A comparison of Figures 9A to 9C with Figures 9D and 9E shows that varying the value of α, and the values of β and γ correspondingly, varies the minimum frequency of detections at a particular location that causes masking. That is, as the value of α increases, more frequent detections at a particular location are required to cause masking (i.e. a higher frequency of detections is required for masking to occur).
Conversely, as the value of α decreases, less frequent detections at a particular location can be made and masking will still occur (i.e. a lower frequency of detections will cause masking). This can be understood by the relationship between the values α, n and m set out above in equation (9).
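The two-threshold behaviour described for Figures 9D and 9E amounts to a hysteresis rule. The sketch below assumes equation (3) has the form B = (B ∧ C1) ∨ C2, with C1 the thresholding of the image A at γ and C2 the thresholding at β; this is one plausible reading consistent with the behaviour described above, and the names are illustrative.

```python
import numpy as np

def update_mask(B_prev, A, beta, gamma):
    """Hysteresis reading of equation (3): a pixel becomes masked when A
    exceeds beta, and stays masked for as long as A remains above gamma."""
    C2 = A > beta    # strong evidence: start masking
    C1 = A > gamma   # weaker evidence: sufficient only to continue masking
    return (B_prev & C1) | C2
```

For example, with β = 0.1111 and γ = 0.0047 as above, a pixel whose A value dips below β but not below γ between detections remains masked, as described for the 71st image of Figure 9D.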
It has been described above with reference to step S6 of Figure 5 that the images A and B are initialised to set all pixel values equal to zero. In an alternative embodiment a test image may be used to initialise the mask B. A test image may be generated by taking an image of a blank piece of card or, alternatively, by capturing an image of a model eye which is designed to have photographic properties of a real eye and which does not have any microaneurysms. The test image should be captured using the same camera as will be used to subsequently process patient images and so the test image will show any artefacts detected in the camera. The test image may then be processed using a microaneurysm detection algorithm which is identical to the processing used to identify microaneurysms in retinal images of patients. Any locations in the image that are identified as indicating microaneurysms can then be determined to be due to dust or faults in the camera and corresponding areas in the images A and B can be set to a value 1. If those locations are identified as microaneurysms in subsequent patient images, then those identified locations are determined as not indicating a microaneurysm. The images A and B may then be updated according to the processing of Figures 5 and 6 in the usual way.
If a series of images is processed in which a large number of artefacts are incorrectly identified, this may cause a large proportion of the areas of subsequently processed images to be masked. A large number of artefacts may be identified, for example, if a series of poor quality images is received or if a number of images are processed from patients who have severe disease. It is therefore beneficial to include an upper bound on the number of pixels of an image that are masked by the mask B. If the number of pixels in the image B having a value 1 exceeds the upper bound, an alternative thresholding of the image A may be used to generate the mask B. The alternative thresholding may be achieved by determining a threshold such that thresholding the image A with the determined threshold results in no more pixels in the image B having a value 1 than the upper bound, as in the sketch below.
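A minimal sketch of such a bound, assuming the mask is formed by thresholding the accumulator image A; `max_pixels` and the fallback rule (keep only the highest-valued pixels) are illustrative choices rather than values from the text.

```python
import numpy as np

def bounded_mask(A, beta, max_pixels):
    """Threshold A at beta, but never mask more than max_pixels pixels."""
    B = A > beta
    if B.sum() > max_pixels:
        # raise the threshold so that at most max_pixels values exceed it
        t = np.partition(A.ravel(), -max_pixels)[-max_pixels]
        B = A > t
    return B
```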
It may further be beneficial to place an upper bound on the contribution of any set of microaneurysm detections MA1, generated from an image I, to the mask B. This may be achieved by setting an upper bound, U, on the sum of the pixel values of the image M1' generated from the set MA1. If the sum of the pixel values of the image M1' exceeds the upper bound U, the pixel values of the image M1' can be modified by scaling them by U / sum(M1'), where sum(M1') is the sum of the pixel values of the image M1', so that the scaled pixel values sum to U.
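A sketch of this per-image cap, under the reading that the scaling brings the sum of M1' down to exactly U; the function name is illustrative:

```python
def cap_contribution(M, U):
    """Scale one image's smoothed detection map so its total is at most U."""
    s = M.sum()
    return M * (U / s) if s > U else M
```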
In an alternative embodiment for processing a set of images S, a mask may be generated by capturing a test image as described above. The test image may be used as the mask throughout the processing of images without updating the mask based upon processed patient images. That is, the processing of Figure 5 is applied to the set S without the processing of step S9.
It has been described above with reference to step S5 of Figure 5 that the input set of images to be processed, S, will in general include a single image or multiple images from any eye. If multiple images are captured from a single eye then it is unlikely that a microaneurysm in the eye will cause repeated detections at similar locations, since eye movement will normally occur between images and the images may also intentionally show different portions of the same eye.
If it is not the case that each of the images in the set S is generated from the same apparatus, then preferably each image in the set S has data associated with it which indicates the particular apparatus with which the image was captured. If apparatus-specific identifiers are used, a mask can be generated for each of the different apparatuses by examining the identifier associated with each image and selecting the appropriate mask, both to be updated and to be applied to the set of possible microaneurysms identified from the image to perform masking. Identifier data associated with each image could also be used to prevent masking caused by repeated detections from a plurality of images generated from the same eye, although this will generally not be necessary, as set out above.
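As an illustration of apparatus-specific masks, the sketch below keeps one accumulator/mask pair per camera identifier; the dictionary layout and names are assumptions, not part of the described method.

```python
import numpy as np

masks = {}  # camera identifier -> (A, B): accumulator image and binary mask

def state_for(camera_id, shape):
    """Fetch, or lazily create, the mask state for one camera."""
    if camera_id not in masks:
        masks[camera_id] = (np.zeros(shape), np.zeros(shape, dtype=bool))
    return masks[camera_id]
```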
The processing described above uses data indicating the presence of microaneurysms in an image. One method for detecting microaneurysms is described in Fleming et al.: "Automated microaneurysm detection using local contrast normalization and local vessel detection", IEEE Transactions on Medical Imaging, vol. 25, issue 9, September 2006, the contents of which are herein incorporated by reference. The method described in Fleming et al. is now briefly described with reference to Figure 10, although any suitable method could be used to generate the data indicating the presence of microaneurysms in an image.
At step S25 an input image I corresponding to the image 2 of Figure 1 is normalised.
Normalisation may comprise a number of steps which are designed to optimise the image for automated processing. For example, the image may be scaled so that the vertical dimension of the visible fundus is a standard size, for example approximately 1400 pixels for a 45 degree fundus image. The scaled image may be filtered to remove noise, for example by first applying a median filter of small dimensions, which removes non-linear noise from the input image, and second convolving the median-filtered image with a Gaussian filter. A shade-corrected image may be generated by smoothing the scaled image by applying a large median filter and dividing the pixels of the noise-reduced image by the corresponding pixels of the smoothed image. The image may be normalised for global image contrast by dividing the shade-corrected image pixel-wise by the standard deviation of the pixels in the image. The output of the normalisation process will be referred to as J. Although the normalisation steps described above have been found to be effective, any suitable normalisation method could be used.
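A rough sketch of this normalisation pipeline is given below. The filter sizes and the small epsilon guard are illustrative assumptions (the text specifies only "small" and "large" median filters), and the input is assumed to have already been scaled to the standard size.

```python
import numpy as np
from scipy import ndimage

def normalise(image):
    """Step S25 sketch: denoise, shade-correct, then contrast-normalise."""
    denoised = ndimage.median_filter(image, size=3)            # non-linear noise
    denoised = ndimage.gaussian_filter(denoised, sigma=1.0)    # residual noise
    background = ndimage.median_filter(image, size=101)        # slow shading
    shade_corrected = denoised / np.maximum(background, 1e-6)  # avoid divide-by-0
    return shade_corrected / shade_corrected.std()             # global contrast
```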
At step S26 a counter variable n is initialised to the value 0 and at step S27 a linear structuring element L is determined according to equation (6) below:

L = A(p, nπ/8) (6)

where p is the number of pixels in the linear structuring element and A is a function that takes a number of pixels p and an angle and returns a linear structure comprising p pixels which extends at the specified angle. It has been found that a value of p = 15 is effective in the processing described here.
At step S28 an image M is determined, where M is the morphological opening of the inverted image -J with the structuring element L. The morphological opening calculated at step S28 is defined according to equation (7) below:

M = -J ∘ L (7)

where -J is the inversion of the image J, L is the linear structuring element defined in equation (6) and ∘ represents morphological opening.
The morphological opening operator removes structures which are not wholly enclosed by the structuring element. In the image M, therefore, areas that are possible candidate microaneurysms, or vessels not at angle nπ/8, are removed, while areas that correspond to vessels and other linear structures extending approximately at an angle nπ/8 are retained. Since a linear structuring element is used, structures in the image whose dimension in the direction of the structuring element is less than the length of the structuring element are removed, resulting in the removal of areas which are dark in J, excluding vessel structures approximately at angle nπ/8 but including candidate microaneurysms.
At step S29 it is determined whether n is equal to 7. If n is not equal to 7 then at step S30 n is incremented and processing continues at step S27. If it is determined at step S29 that n is equal to 7 then processing continues at step S31 as described below.
The processing of steps S27 to S30 creates eight structuring elements which are arranged at eight equally spaced orientations. Applying these eight structuring elements to the image -J creates eight morphologically opened images M, each image only including vessels extending at a particular orientation, the orientation being dependent upon the value of n. Therefore, the pixel-wise maximum of M over n = 0, ..., 7 includes vessels at all orientations.
At step S31 an image D is generated by subtracting pixel-wise the maximum corresponding pixel across the set of images M, for n in the range 0 to 7, from the inverted image -J. Given that each of the images M contains only linear structures extending in a direction close to one of the eight orientations nπ/8, it can be seen that the subtraction results in the removal from the image of all linear structures extending close to one of the eight orientations, which is generally equivalent to removing linear structures at any orientation. This means that the image D is an enhancement of dark dots present in the original image, with vessels removed and potential microaneurysms retained.
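A sketch of steps S27 to S31 is given below. The way the linear structuring element is rasterised is an illustrative assumption; the text specifies only that it comprises p pixels extending at angle nπ/8.

```python
import numpy as np
from scipy import ndimage

def line_footprint(p, angle):
    """Binary footprint approximating a p-pixel line at the given angle."""
    r = p // 2
    fp = np.zeros((p, p), dtype=bool)
    for t in range(-r, r + 1):
        fp[int(round(r + t * np.sin(angle))),
           int(round(r + t * np.cos(angle)))] = True
    return fp

def enhance_dots(J, p=15):
    """Steps S27-S31: suppress linear structures (vessels) in -J, leaving dots."""
    inv = -J
    openings = [ndimage.grey_opening(inv, footprint=line_footprint(p, n * np.pi / 8))
                for n in range(8)]            # one opening per orientation
    return inv - np.maximum.reduce(openings)  # image D: dark dots enhanced
```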
At step S32 potential microaneurysms are determined by comparing the pixel values of the image D for each pixel of the image and determining whether the value for a particular pixel is above an empirically determined threshold T. A suitable value for T is 5 times the 95th percentile of pixels in D. At step S33 a potential microaneurysm is determined for each connected region consisting entirely of pixels having a pixel value greater than T. For each of these regions, the pixels contained within the region are searched to determine the pixel which is darkest in the shade-corrected image J, and that darkest pixel is added to a set of potential microaneurysms C. A pixel taken to indicate a potential microaneurysm is thus selected for each of the regions. For each pixel for which it is determined at step S32 that the value of the image D is less than T, at step S34 the pixel is determined to not be a potential microaneurysm.
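A sketch of steps S32 to S34, using connected-component labelling; the use of scipy's labelling is an implementation choice, not mandated by the text.

```python
import numpy as np
from scipy import ndimage

def potential_microaneurysms(D, J):
    """Steps S32-S34: threshold D, then keep the darkest J-pixel per region."""
    T = 5 * np.percentile(D, 95)           # empirical threshold from the text
    labels, n = ndimage.label(D > T)       # connected regions above T
    candidates = []
    for region in range(1, n + 1):
        ys, xs = np.nonzero(labels == region)
        k = int(np.argmin(J[ys, xs]))      # darkest pixel in shade-corrected J
        candidates.append((ys[k], xs[k]))
    return candidates                      # one pixel per potential microaneurysm
```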
Each potential microaneurysm, represented by a respective pixel, may be subjected to further processing to determine whether the microaneurysm is a candidate microaneurysm to be included in the set of microaneurysms MA1 input to the processing of Figures 4 and 6. For example, each potential microaneurysm may have region growing performed upon it so as to create an area for each microaneurysm. Region growing iteratively generates a connected area C, with each iteration generating an area containing the darkest pixels that are connected to the potential microaneurysm, until some threshold is exceeded. Watershed region growing may also be carried out to allow characteristics of the background of a potential microaneurysm to be determined. Watershed region growing finds regions of retina that are not vessels or other lesions surrounding the area derived by region growing the potential microaneurysm, and is carried out in a similar way to region growing.
Further detail of the region growing and watershed region growing processes can be found in the Fleming et al. reference cited above.
From the watershed region an estimate of background contrast, denoted BC, can be determined: the standard deviation of the pixels of the normalised image, after high pass filtering, within the region obtained from watershed retinal region growing.
A paraboloid may then be fitted to the 2-dimensional region generated by the region growing process. From the fitted paraboloid, the major- and minor-axis lengths are calculated, as well as the eccentricity of the potential microaneurysm.
Features used to determine whether a potential microaneurysm is in fact a candidate microaneurysm may include:
1. The number of peaks in an energy function E, such as the energy function of equation (8) below.
2. Major and minor axis lengths, determined as described above.
3. The sharpness of the fitted paraboloid (or alternatively the size of the fitted paraboloid at a constant depth relative to its apex, since this is inversely proportional to the sharpness of the paraboloid).
4. Depth (relative intensity) of the potential microaneurysm, using the original image and the background intensity estimated during normalisation.
5. Depth of the potential microaneurysm, using the normalised image and the fitted paraboloid, divided by BC.
6. Energy of the potential microaneurysm, i.e. the mean squared gradient magnitude around the potential microaneurysm boundary, divided by BC.
7. The depth of the potential microaneurysm normalised by its size (depth divided by the geometric mean of the axis lengths), divided by BC.
8. The energy of the potential microaneurysm normalised by the square root of its depth, divided by BC.
The energy function of feature 1 may be written as:

E(t) = mean_{p ∈ boundary(C_t)} grad(p)² (8)

where boundary(C_t) is the set of pixels on the boundary of the region C_t, and grad(p) is the gradient magnitude of the normalised original image at a pixel p.

Using a training set, a K-Nearest Neighbour (KNN) classifier can be used to classify potential microaneurysms. A distance metric is evaluated between a feature vector to be tested and each of the feature vectors evaluated for a training set in which each of the associated potential microaneurysms was hand-annotated as microaneurysm or not microaneurysm. The distance metric can be evaluated, for example, as the sum of the squares of differences between the test and training features. A set is determined consisting of the K nearest, based on the distance metric, training feature vectors to the test feature vector. A potential microaneurysm is considered to be a candidate microaneurysm if L or more members of this set are annotated as microaneurysms. For example, a potential microaneurysm would be considered to be a candidate microaneurysm for L = 5 and K = 15, meaning 5 out of 15 nearest neighbours are annotated as microaneurysms.
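A sketch of the KNN rule just described; the array layout is an assumption (one row per training feature vector, labels 1 for microaneurysm and 0 otherwise):

```python
import numpy as np

def is_candidate(test_vec, train_vecs, train_labels, K=15, L=5):
    """Accept a potential microaneurysm if at least L of its K nearest
    training neighbours (sum-of-squared-differences distance) were
    annotated as microaneurysms."""
    dists = ((train_vecs - test_vec) ** 2).sum(axis=1)
    nearest = np.argsort(dists)[:K]
    return int(train_labels[nearest].sum()) >= L
```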
The set of candidate microaneurysms determined by the K-Nearest Neighbour classifier from the set of potential microaneurysms is the set of microaneurysms MA1 associated with image I that is received at step S1 of Figure 4 or step S15 of Figure 6, and processed at step S16 of Figure 6 to generate the image M1.
Whilst the processing of Figures 4, 5, 6 and 8 has been described above with reference to masking microaneurysms, it will be appreciated that the same technique can be used to mask any lesion type. In particular, the process set out above may be used to mask detections, made by a lesion detection process, of any lesion type that are caused by artefacts in the camera and not by the presence of a lesion.
Although various embodiments of the invention have been described above, it will be appreciated that various modifications can be made to the described embodiments without departing from the spirit and scope of the invention. Indeed, the foregoing description should be considered in all respects illustrative and not limiting.

Claims (49)

CLAIMS:
1. A method of generating output data indicating areas of a retinal image representing lesions comprising: receiving candidate data associated with said retinal image, said candidate data indicating areas of said retinal image identified as representing respective candidate lesions; receiving mask data, said mask data indicating areas of said retinal image determined not to represent lesions; and processing said candidate data and said mask data to generate said output data indicating areas of said retinal image representing lesions.
2. A method according to claim 1, wherein said processing comprises identifying areas of said retinal image indicated by said candidate data that correspond to areas of said retinal image indicated by said mask data.
3. A method according to claim 2, wherein for each of said areas indicated by said candidate data, said area is indicated in said output data if but only if said area does not correspond to any of said areas of said retinal image indicated by said mask data.
4. A method according to any preceding claim, wherein said mask data is generated based upon at least one reference image, wherein said at least one reference image and said retinal image are captured using the same camera.
5. A method according to claim 4, further comprising: applying a detection process to the or each retinal image to generate said candidate data; and applying said detection process to said reference image to generate said mask data.
6. A method according to claim 4 or 5, wherein said at least one reference image is a retinal reference image.
7. A method according to claim 6, wherein said at least one reference image is a plurality of retinal reference images, and said candidate data is associated with one of said plurality of retinal reference images.
8. A method according to claim 4 or 5, wherein the or each reference image is an image other than a retinal image.
9. A method according to claim 8, wherein the or each reference image is an image of a test object other than a human or animal eye.
10. A method according to claim 9, wherein the or each reference image is an image of an artificial eye.
11. A method according to any preceding claim, wherein said lesions are selected from the group consisting of microaneurysm, blot haemorrhage, exudate, and drusen.
12. A method according to any preceding claim, wherein said areas of said retinal image determined not to represent lesions indicated by said mask data are areas of said retinal image that represent an artefact.
13. A method according to claim 12, wherein said artefact is an artefact of an image acquisition process.
14. A method according to claim 13, wherein said artefact is caused by contamination in a camera.
15. A method according to claim 14, wherein said artefact is caused by contamination on a camera lens.
16. A method according to any preceding claim, wherein receiving mask data comprises: receiving a plurality of mask data; receiving identification data associated with said retinal image; and selecting one of said plurality of mask data to be used in said processing of said candidate data and said mask data based upon said received identification data associated with said retinal image.
17. A computer program comprising computer readable instructions configured to cause a computer to carry out a method according to any one of claims 1 to 16.
18. A computer readable medium carrying a computer program according to claim 17.
19. A computer apparatus for generating output data indicating areas of a retinal image representing lesions comprising: a memory storing processor readable instructions; and a processor arranged to read and execute instructions stored in said memory; wherein said processor readable instructions comprise instructions arranged to control the computer to carry out a method according to any one of claims 1 to 16.
20. Apparatus for generating output data indicating areas of a retinal image representing lesions comprising: means for receiving candidate data associated with said retinal image, said candidate data indicating areas of said retinal image identified as representing respective candidate lesions; means for receiving mask data, said mask data indicating areas of said retinal image determined not to represent lesions; and means for processing said candidate data and said mask data to generate said output data indicating areas of said retinal image representing lesions.
21. A method of generating mask data for application to retinal images to be processed, the method comprising: receiving at least one image; and generating mask data from said at least one image, said mask data indicating areas of said retinal images to be processed determined not to represent lesions.
22. A method according to claim 21, further comprising: receiving initial data; and receiving candidate data associated with said at least one image, said candidate data indicating areas of said at least one image identified as representing a candidate lesion; wherein said generating mask data comprises updating said initial data based upon said candidate data.
23. A method according to claim 22, wherein said generating mask data comprises generating, for each area of said retinal images, a likelihood that said area does not represent a lesion based upon said candidate data associated with said at least one image and said initial data.
24. A method according to claim 23, wherein said initial data comprises for each area of said retinal images an initial likelihood that said area does not represent a lesion and wherein said likelihood for each area of said retinal images is generated by updating said initial likelihood based upon said candidate data.
25. A method according to claim 24 wherein said updating increases said initial likelihood for an area if said candidate data indicates said area represents a candidate lesion and said updating decreases said initial likelihood for an area if said candidate data indicates said area does not represent a candidate lesion.
26. A method according to claim 24 or 25 wherein said updating said initial likelihood comprises multiplying said initial likelihood by a factor based upon a predetermined constant α and further adding a multiple of the constant α, said multiple being based upon said candidate data.
27. A method according to claim 26 wherein said predetermined constant α is determined based upon: an expected frequency, said expected frequency being based upon a proportion of images in which an area is expected to be indicated as a candidate lesion in respective received candidate data if said area does not represent a lesion; and a minimum frequency, said minimum frequency being based upon a minimum proportion of images in which an area is required to be indicated as a candidate lesion if said mask data indicates said area of said retinal images to be processed is determined not to represent a lesion.
28. A method according to any one of claims 22 to 27 further comprising generating said candidate data by: receiving an image indicating a plurality of points associated with respective candidate lesions; and smoothing said received image to generate an image indicating a plurality of areas associated with said candidate lesions, each of said areas being based upon a respective one of said plurality of points.
29. A method according to claim 28 wherein said smoothing is Gaussian smoothing.
30. A method according to any one of claims 23 to 29 wherein said generating said mask data further comprises applying a first threshold to the likelihood associated with each of said areas of said retinal images.
31. A method according to claim 30 further comprising: processing the likelihood associated with each of said areas of said retinal images with reference to said first threshold, such that said mask data indicates that an area of said retinal images to be processed does not represent a lesion if the likelihood associated with that area exceeds said first threshold.
32. A method according to claim 31 as dependent upon claim 26 wherein said first threshold is determined based upon the constant α and an expected frequency, said frequency being based upon a proportion of images in which an area is expected to be indicated as a candidate lesion if said area does not represent a lesion.
33. A method according to claim 32 wherein said first threshold is determined based upon the value α(1 - α)^n + α, where n is an integer based upon said expected frequency.
34. A method according to any one of claims 30 to 33, further comprising: receiving initial mask data, said initial mask data indicating whether each area of a previously processed image was indicated to not represent a lesion; and for each area of said images to be processed said mask data indicates that the respective area does not represent a lesion if and only if said likelihood for said area exceeds a second threshold and said initial mask data indicates that in the previously processed image the area was indicated to not represent lesions.
35. A method according to claim 34, wherein said second threshold is determined based upon: the value α; an expected frequency, said frequency being based upon a proportion of images in which an area is expected to be indicated as a candidate lesion if said area does not represent a lesion; and a maximum number of images in which said area is not indicated as a candidate lesion, which can be expected to be processed subsequent to said likelihood for an area exceeding said first threshold, if said area does not represent a lesion.
36. A method according to claim 35, wherein said second threshold is determined based upon the value (α(1 - α)^n + α)(1 - α)^p, where n is said integer indicating said expected frequency and p is said maximum number of images.
37. A method according to any one of claims 21 to 36, the method further comprising: determining if said generated mask data satisfies a predetermined criterion; and if said mask data does not satisfy said predetermined criterion, modifying said mask data such that said mask data satisfies said predetermined criterion.
38. A method according to any one of claims 30 to 37, wherein if more than a predetermined number of areas of said retinal images have a likelihood exceeding said first threshold, the method further comprises modifying said first threshold such that less than or equal to said predetermined number of areas of said retinal images have a likelihood exceeding said first threshold.
39. A method according to any one of claims 22 to 38 wherein receiving candidate data further comprises: determining if said candidate data satisfies a predetermined criterion; and if said candidate data does not satisfy said predetermined criterion, modifying said candidate data such that said candidate data satisfies said predetermined criterion.
40. A method according to claim 39, wherein said predetermined criterion is based upon a number of areas identified as representing a candidate lesion.
41. A method according to any one of claims 21 to 40, wherein a plurality of images are received, each image having an associated identifier and generating mask data comprises generating a plurality of mask data, each generated mask data having an associated mask identifier based upon the identifier associated with the image processed to generate the mask data.
42. A method according to claim 41, wherein each image has an associated identifier indicating image acquisition equipment used to acquire the image, and wherein mask data is generated based upon a plurality of images acquired using particular image acquisition equipment.
43. A method according to any one of claims 1 to 16, wherein the or each mask data is generated by a method according to any one of claims 21 to 42.
44. A method of identifying indications of disease in a retinal image comprising processing said retinal image according to any one of claims 1 to 16 or 43.
45. A method according to claim 44, wherein said disease is selected from the group consisting of diabetic retinopathy and age-related macular degeneration.
46. A computer program comprising computer readable instructions configured to cause a computer to carry out a method according to any one of claims 21 to 45.
47. A computer readable medium carrying a computer program according to claim 46.
48. A computer apparatus for generating mask data for application to retinal images to be processed comprising: a memory storing processor readable instructions; and a processor arranged to read and execute instructions stored in said memory; wherein said processor readable instructions comprise instructions arranged to control the computer to carry out a method according to any one of claims 21 to 45.
49. Apparatus for generating mask data for application to retinal images to be processed comprising: means for receiving at least one image; and means for generating mask data from said at least one image, said mask data indicating areas of said retinal images to be processed determined not to represent lesions.

Amendments to the claims have been filed as follows

CLAIMS:
1. A method of generating output data indicating areas of a retinal image representing lesions comprising: receiving candidate data associated with said retinal image, said candidate data indicating areas of said retinal image identified as representing respective candidate lesions; receiving mask data, said mask data indicating areas of said retinal image determined not to represent lesions, wherein said mask data has been generated based upon at least one reference image; and processing said candidate data and said mask data to generate said output data indicating areas of said retinal image representing lesions.
2. A method according to claim 1, wherein said processing comprises identifying areas of said retinal image indicated by said candidate data that correspond to areas of said retinal image indicated by said mask data.
3. A method according to claim 2, wherein for each of said areas indicated by said candidate data, said area is indicated in said output data if but only if said area does not correspond to any of said areas of said retinal image indicated by said mask data.
4. A method according to any preceding claim, wherein said at least one reference image and said retinal image are captured using the same camera.
5. A method according to any preceding claim, further comprising: applying a detection process to the or each retinal image to generate said candidate data; and applying said detection process to said reference image to generate said mask data.
6. A method according to any preceding claim, wherein said at least one reference image is a retinal reference image.
7. A method according to claim 6, wherein said at least one reference image is a plurality of retinal reference images, and said candidate data is associated with one of said plurality of retinal reference images.
8. A method according to any one of claims 1 to 5, wherein the or each reference image is an image other than a retinal image.
9. A method according to claim 8, wherein the or each reference image is an image of a test object other than a human or animal eye.
10. A method according to claim 9, wherein the or each reference image is an image of an artificial eye.
11. A method according to any preceding claim, wherein said lesions are selected from the group consisting of microaneurysm, blot haemorrhage, exudate, and drusen.
12. A method according to any preceding claim, wherein said areas of said retinal image determined not to represent lesions indicated by said mask data are areas of said retinal image that represent an artefact.
13. A method according to claim 12, wherein said artefact is an artefact of an image acquisition process.
14. A method according to claim 13, wherein said artefact is caused by contamination in a camera.
15. A method according to claim 14, wherein said artefact is caused by contamination on a camera lens.
16. A method according to any preceding claim, wherein receiving mask data comprises: receiving a plurality of mask data; receiving identification data associated with said retinal image; and selecting one of said plurality of mask data to be used in said processing of said candidate data and said mask data based upon said received identification data associated with said retinal image.
17. A computer program comprising computer readable instructions configured to cause a computer to carry out a method according to any one of claims 1 to 16.
18. A computer readable medium carrying a computer program according to claim 17.
19. A computer apparatus for generating output data indicating areas of a retinal image representing lesions comprising: a memory storing processor readable instructions; and a processor arranged to read and execute instructions stored in said memory; wherein said processor readable instructions comprise instructions arranged to control the computer to carry out a method according to any one of claims 1 to 16.
20. Apparatus for generating output data indicating areas of a retinal image representing lesions comprising: means for receiving candidate data associated with said retinal image, said candidate data indicating areas of said retinal image identified as representing respective candidate lesions; means for receiving mask data, said mask data indicating areas of said retinal image determined not to represent lesions, said mask data having been generated based upon at least one reference image; and means for processing said candidate data and said mask data to generate said output data indicating areas of said retinal image representing lesions.
21. A method of generating mask data for application to a retinal image to be processed, the method comprising: receiving at least one reference image; and generating mask data from said at least one reference image, said mask data indicating areas of said retinal image to be processed determined not to represent lesions.
22. A method according to claim 21, further comprising: receiving initial data; and receiving candidate data associated with said at least one reference image, said candidate data indicating areas of said at least one reference image identified as representing a candidate lesion; wherein said generating mask data comprises updating said initial data based upon said candidate data.
23. A method according to claim 22, wherein said generating mask data comprises generating, for each area of said retinal image, a likelihood that said area does not represent a lesion based upon said candidate data associated with said at least one reference image and said initial data.
24. A method according to claim 23, wherein said initial data comprises for each area of said retinal image an initial likelihood that said area does not represent a lesion and wherein said likelihood for each area of said retinal image is generated by updating said initial likelihood based upon said candidate data.
25. A method according to claim 24 wherein said updating increases said initial likelihood for an area if said candidate data indicates said area represents a candidate lesion and said updating decreases said initial likelihood for an area if said candidate data indicates said area does not represent a candidate lesion.
26. A method according to claim 24 or 25 wherein said updating said initial likelihood comprises multiplying said initial likelihood by a factor based upon a predetermined constant α and further adding a multiple of the constant α, said multiple being based upon said candidate data.
27. A method according to claim 26 wherein said predetermined constant α is determined based upon: an expected frequency, said expected frequency being based upon a proportion of images in which an area is expected to be indicated as a candidate lesion in respective received candidate data if said area does not represent a lesion; and a minimum frequency, said minimum frequency being based upon a minimum proportion of images in which an area is required to be indicated as a candidate lesion if said mask data indicates said area of said retinal images to be processed is determined not to represent a lesion.
28. A method according to any one of claims 22 to 27 further comprising generating said candidate data by: receiving an image indicating a plurality of points associated with respective candidate lesions; and smoothing said received image to generate an image indicating a plurality of areas associated with said candidate lesions, each of said areas being based upon a respective one of said plurality of points.
29. A method according to claim 28 wherein said smoothing is Gaussian smoothing.
30. A method according to any one of claims 23 to 29 wherein said generating said mask data further comprises applying a first threshold to the likelihood associated with each of said areas of said retinal image.
31. A method according to claim 30 further comprising: processing the likelihood associated with each of said areas of said retinal image with reference to said first threshold, such that said mask data indicates that an area of said retinal image to be processed does not represent a lesion if the likelihood associated with that area exceeds said first threshold.
32. A method according to claim 31 as dependent upon claim 26 wherein said first threshold is determined based upon the constant α and an expected frequency, said frequency being based upon a proportion of images in which an area is expected to be indicated as a candidate lesion if said area does not represent a lesion.
33. A method according to claim 32 wherein said first threshold is determined based upon the value α(1 - α)^n + α, where n is an integer based upon said expected frequency.
34. A method according to any one of claims 30 to 33, further comprising: receiving initial mask data, said initial mask data indicating whether each area of a previously processed image was indicated to not represent a lesion; and for each area of said images to be processed said mask data indicates that the respective area does not represent a lesion if and only if said likelihood for said area exceeds a second threshold and said initial mask data indicates that in the previously processed image the area was indicated to not represent lesions.
35. A method according to claim 34, wherein said second threshold is determined based upon: the value α; an expected frequency, said frequency being based upon a proportion of images in which an area is expected to be indicated as a candidate lesion if said area does not represent a lesion; and a maximum number of images in which said area is not indicated as a candidate lesion, which can be expected to be processed subsequent to said likelihood for an area exceeding said first threshold, if said area does not represent a lesion.
36. A method according to claim 35, wherein said second threshold is determined based upon the value (α(1 - α)^n + α)(1 - α)^p, where n is said integer indicating said expected frequency and p is said maximum number of images.
37. A method according to any one of claims 21 to 36, the method further comprising: determining if said generated mask data satisfies a predetermined criterion; and if said mask data does not satisfy said predetermined criterion, modifying said mask data such that said mask data satisfies said predetermined criterion.
38. A method according to any one of claims 30 to 37, wherein if more than a predetermined number of areas of said retinal image have a likelihood exceeding said first threshold, the method further comprises modifying said first threshold such that less than or equal to said predetermined number of areas of said retinal image has a likelihood exceeding said first threshold.
39. A method according to any one of claims 22 to 38 wherein receiving candidate data further comprises: determining if said candidate data satisfies a predetermined criterion; and if said candidate data does not satisfy said predetermined criterion, modifying said candidate data such that said candidate data satisfies said predetermined criterion.
40. A method according to claim 39, wherein said predetermined criterion is based upon a number of areas identified as representing a candidate lesion.
41. A method according to any one of claims 21 to 40, wherein a plurality of images are received, each image having an associated identifier and generating mask data comprises generating a plurality of mask data, each generated mask data having an associated mask identifier based upon the identifier associated with the image processed to generate the mask data.
42. A method according to claim 41, wherein each image has an associated identifier indicating image acquisition equipment used to acquire the image, and wherein mask data is generated based upon a plurality of images acquired using particular image acquisition equipment.
43. A method according to any one of claims 1 to 16, wherein the or each mask data is generated by a method according to any one of claims 21 to 42.
44. A method of identifying indications of disease in a retinal image comprising processing said retinal image according to any one of claims 1 to 16 or 43.
45. A method according to claim 44, wherein said disease is selected from the group consisting of diabetic retinopathy and age-related macular degeneration.
46. A computer program comprising computer readable instructions configured to cause a computer to carry out a method according to any one of claims 21 to 45.
47. A computer readable medium carrying a computer program according to claim 46.
48. A computer apparatus for generating mask data for application to retinal images to be processed comprising: a memory storing processor readable instructions; and a processor arranged to read and execute instructions stored in said memory; wherein said processor readable instructions comprise instructions arranged to control the computer to carry out a method according to any one of claims 21 to 45.
49. Apparatus for generating mask data for application to a retinal image to be processed comprising: means for receiving at least one reference image; and means for generating mask data from said at least one reference image, said mask data indicating areas of said retinal images to be processed determined not to represent lesions.
GB0909413A 2009-06-02 2009-06-02 Processing retinal images using mask data from reference images Withdrawn GB2470727A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0909413A GB2470727A (en) 2009-06-02 2009-06-02 Processing retinal images using mask data from reference images
PCT/GB2010/001026 WO2010139929A2 (en) 2009-06-02 2010-05-26 Lesion detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0909413A GB2470727A (en) 2009-06-02 2009-06-02 Processing retinal images using mask data from reference images

Publications (2)

Publication Number Publication Date
GB0909413D0 GB0909413D0 (en) 2009-07-15
GB2470727A true GB2470727A (en) 2010-12-08

Family

ID=40902419

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0909413A Withdrawn GB2470727A (en) 2009-06-02 2009-06-02 Processing retinal images using mask data from reference images

Country Status (2)

Country Link
GB (1) GB2470727A (en)
WO (1) WO2010139929A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8879813B1 (en) 2013-10-22 2014-11-04 Eyenuk, Inc. Systems and methods for automated interest region detection in retinal images

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023229994A1 (en) * 2022-05-23 2023-11-30 Topcon Corporation Automated oct capture
CN117854700A (en) * 2024-01-19 2024-04-09 首都医科大学宣武医院 Postoperative management method and system based on wearable monitoring equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5031632A (en) * 1989-08-10 1991-07-16 Tsuyoshi Watanabe Method for the instrumentation of sizes of retinal vessels in the fundus and apparatus therefor
US5836872A (en) * 1989-04-13 1998-11-17 Vanguard Imaging, Ltd. Digital optical visualization, enhancement, quantification, and classification of surface and subsurface features of body surfaces
WO2003030101A2 (en) * 2001-10-03 2003-04-10 Retinalyze Danmark A/S Detection of vessels in an image
WO2003030073A1 (en) * 2001-10-03 2003-04-10 Retinalyze Danmark A/S Quality measure
WO2003030075A1 (en) * 2001-10-03 2003-04-10 Retinalyze Danmark A/S Detection of optic nerve head in a fundus image
US20040258285A1 (en) * 2001-10-03 2004-12-23 Hansen Johan Dore Assessment of lesions in an image
US20050171974A1 (en) * 2002-06-07 2005-08-04 Axel Doering Method and arrangement for evaluating images taken with a fundus camera
WO2006105473A2 (en) * 2005-03-31 2006-10-05 University Of Iowa Research Foundation Automatic detection of red lesions in digital color fundus photographs
US20070002275A1 (en) * 2005-07-01 2007-01-04 Siemens Corporate Research Inc. Method and System For Local Adaptive Detection Of Microaneurysms In Digital Fundus Images
US20090123044A1 (en) * 2007-11-08 2009-05-14 Topcon Medical Systems, Inc. Retinal Thickness Measurement by Combined Fundus Image and Three-Dimensional Optical Coherence Tomography

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5836872A (en) * 1989-04-13 1998-11-17 Vanguard Imaging, Ltd. Digital optical visualization, enhancement, quantification, and classification of surface and subsurface features of body surfaces
US5031632A (en) * 1989-08-10 1991-07-16 Tsuyoshi Watanabe Method for the instrumentation of sizes of retinal vessels in the fundus and apparatus therefor
WO2003030101A2 (en) * 2001-10-03 2003-04-10 Retinalyze Danmark A/S Detection of vessels in an image
WO2003030073A1 (en) * 2001-10-03 2003-04-10 Retinalyze Danmark A/S Quality measure
WO2003030075A1 (en) * 2001-10-03 2003-04-10 Retinalyze Danmark A/S Detection of optic nerve head in a fundus image
US20040258285A1 (en) * 2001-10-03 2004-12-23 Hansen Johan Dore Assessment of lesions in an image
US20050171974A1 (en) * 2002-06-07 2005-08-04 Axel Doering Method and arrangement for evaluating images taken with a fundus camera
WO2006105473A2 (en) * 2005-03-31 2006-10-05 University Of Iowa Research Foundation Automatic detection of red lesions in digital color fundus photographs
US20060257031A1 (en) * 2005-03-31 2006-11-16 Michael Abramoff Automatic detection of red lesions in digital color fundus photographs
US20070002275A1 (en) * 2005-07-01 2007-01-04 Siemens Corporate Research Inc. Method and System For Local Adaptive Detection Of Microaneurysms In Digital Fundus Images
US20090123044A1 (en) * 2007-11-08 2009-05-14 Topcon Medical Systems, Inc. Retinal Thickness Measurement by Combined Fundus Image and Three-Dimensional Optical Coherence Tomography

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8879813B1 (en) 2013-10-22 2014-11-04 Eyenuk, Inc. Systems and methods for automated interest region detection in retinal images
US8885901B1 (en) 2013-10-22 2014-11-11 Eyenuk, Inc. Systems and methods for automated enhancement of retinal images
US9002085B1 (en) 2013-10-22 2015-04-07 Eyenuk, Inc. Systems and methods for automatically generating descriptions of retinal images
US9008391B1 (en) 2013-10-22 2015-04-14 Eyenuk, Inc. Systems and methods for processing retinal images for screening of diseases or abnormalities

Also Published As

Publication number Publication date
WO2010139929A2 (en) 2010-12-09
GB0909413D0 (en) 2009-07-15

Similar Documents

Publication Publication Date Title
Wang et al. Retinal vessel segmentation using multiwavelet kernels and multiscale hierarchical decomposition
Sopharak et al. Simple hybrid method for fine microaneurysm detection from non-dilated diabetic retinopathy retinal images
Tjandrasa et al. Optic nerve head segmentation using hough transform and active contours
US20100142767A1 (en) Image Analysis
US20120027275A1 (en) Disease determination
Harini et al. Automatic cataract classification system
Solís-Pérez et al. Blood vessel detection based on fractional Hessian matrix with non-singular Mittag–Leffler Gaussian kernel
Almazroa et al. An automatic image processing system for glaucoma screening
Mudassar et al. Extraction of blood vessels in retinal images using four different techniques
Escorcia-Gutierrez et al. Convexity shape constraints for retinal blood vessel segmentation and foveal avascular zone detection
Ding et al. Multi-scale morphological analysis for retinal vessel detection in wide-field fluorescein angiography
Ali et al. Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet
GB2470727A (en) Processing retinal images using mask data from reference images
Vahabi et al. The new approach to automatic detection of optic disc from non-dilated retinal images
Tavakoli et al. Automated optic nerve head detection based on different retinal vasculature segmentation methods and mathematical morphology
Paranjpe et al. Automated diabetic retinopathy severity classification using support vector machine
Kumara et al. Active contour-based segmentation and removal of optic disk from retinal images
Iacoviello et al. Parametric characterization of the form of the human pupil from blurred noisy images
Prakash et al. Comparison of algorithms for segmentation of blood vessels in fundus images
Aggarwal et al. Automatic localization and contour detection of Optic disc
Kar et al. Blood vessel extraction with optic disc removal in retinal images
Patankar et al. Gradient features and optimal thresholding for retinal blood vessel segmentation
Ramasubramanian et al. A novel efficient approach for the screening of new abnormal blood vessels in color fundus images
El-Bendary et al. ARIAS: Automated retinal image analysis system
Karunanayake et al. An Improved Method for Optic Disc Localization

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)