WO2000051080A1 - Computer system for analyzing images and detecting early signs of abnormalities - Google Patents

Publication number
WO2000051080A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixels
abnormalities
images
quality
Prior art date
Application number
PCT/US2000/004736
Other languages
French (fr)
Other versions
WO2000051080A9 (en)
Inventor
Samuel C. Lee
Elisa T. Lee
Original Assignee
The Board Of Regents Of The University Of Oklahoma
Priority date
Application filed by The Board Of Regents Of The University Of Oklahoma filed Critical The Board Of Regents Of The University Of Oklahoma
Priority to AU32438/00A
Publication of WO2000051080A1
Publication of WO2000051080A9

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
        • G06T7/0002 Inspection of images, e.g. flaw detection
            • G06T7/0004 Industrial image inspection
            • G06T7/0012 Biomedical image inspection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
        • G06T2207/10 Image acquisition modality
            • G06T2207/10072 Tomographic images
        • G06T2207/30 Subject of image; Context of image processing
            • G06T2207/30004 Biomedical image processing
                • G06T2207/30041 Eye; Retina; Ophthalmic
                • G06T2207/30096 Tumor; Lesion
                • G06T2207/30101 Blood vessel; Artery; Vein; Vascular
            • G06T2207/30108 Industrial image inspection
                • G06T2207/30164 Workpiece; Machine component

Definitions

  • Diabetes affects 16 million people in the United States and is a leading cause of death and disability. Diabetes costs the United States government approximately $92 billion per year. People of any age can get diabetes. Studies indicate that the prevalence of diabetes among American Indians is particularly high. In fact, Oklahoma has the largest American Indian population (approximately 250,000) in the United States. Over one-third of American Indian adults have type II diabetes. One-third of the diabetic Indian population has developed severe diabetic retinopathy, and 6% of them develop new blindness every year.
  • Diabetic retinopathy has been identified as one of the most common causes of blindness. Unfortunately, at its early stage, patients may not notice diabetic retinopathy. It is, however, treatable if detected early. Many cases of diabetes are first noticed by eye doctors. It was recently discovered that diabetic retinopathy may occur 6.5 years prior to the diagnosis of diabetes.
  • the neural network method is suitable for solving pattern recognition problems involving the recognition of "general patterns", such as the lesion patterns exhibited in fundus images, which reflect the stage of severity of diabetic retinopathy. Therefore, it may be useful in grading diabetic retinopathy, but not in detecting individual early (often tiny and/or vague) retinal lesions. Moreover, the neural network method does not possess the ability to measure precisely (only approximately) the number, the size, or the location of the lesions.
  • Figure I.1 is a schematic diagram of a computer system constructed in accordance with the present invention for locating abnormalities in an image.
  • Figure I.2a is a black and white reproduction of a color photograph of a good quality digital retinal image.
  • Figure I.2b is the intensity histogram of the image of Figure I.2a.
  • Figure I.3 is a diagram of an image coordinate system wherein the origin of the image coordinate system is located at an upper left corner of an image.
  • Figure I.4 is a flow diagram of a computer program running on the computer system depicted in Figure I.1.
  • Figure II.1a is a black and white reproduction of a color photograph illustrating a "dark" image of a retina.
  • Figure II.1b is the intensity histogram of the dark image of Figure II.1a.
  • Figure II.2a is a black and white reproduction of a color photograph illustrating a "bright" image of a retina.
  • Figure II.2b is the intensity histogram of the bright image of Figure II.2a.
  • Figure II.3a is a black and white reproduction of a color photograph illustrating a "color distorted" image of a retina.
  • Figure II.3b is the intensity histogram of the green/red ratio of the color distorted image of Figure II.3a.
  • Figure II.3c is the intensity histogram of the blue/red ratio of the color distorted image of Figure II.3a.
  • Figure II.4a is a black and white reproduction of a color photograph illustrating a "blurry" image of a retina.
  • Figure II.4b is the intensity histogram of the blurry image.
  • Figure II.5a is a black and white reproduction of a color photograph illustrating an image of a retina having light reflections contained therein.
  • Figure II.5b is the intensity histogram of the green/red ratio of the image depicted in Figure II.5a.
  • Figure II.5c is the intensity histogram of the blue/red ratio of the image depicted in Figure II.5a.
  • Figure II.6a is a black and white reproduction of a color photograph of a retina illustrating a defective slide.
  • Figure II.6b is the intensity histogram of the defective slide.
  • Figure II.7a is a black and white reproduction of a color photograph illustrating a "marginally gradable" image of a retina.
  • Figure II.7b is the intensity histogram of the marginally gradable image.
  • Figure III.1 is a 25 x 25 disk-shaped binary matched filter.
  • Figure III.2 is a rectangular coordinate system originated at the center of the macula.
  • Figure III.3 is a logarithmic polar coordinate system originated at the center of the macula.
  • Figure IV.1a is a black and white reproduction of a color photograph of an original red-free image before normalization.
  • Figure IV.1b is the intensity histogram of the image of Figure IV.1a.
  • Figure IV.2a is a black and white reproduction of a color photograph of the image of Figure IV.1a after normalization has been conducted in accordance with the present invention.
  • Figure IV.2b is the intensity histogram of the image of Figure IV.2a.
  • Figure V.1 is a black and white reproduction of a color photograph of an image having a "spider net" constructed in accordance with the present invention interposed thereon.
  • Figure V.3 is a black and white reproduction of a color photograph illustrating the results of an AND operation on the color photographs of Figures V.1 and V.2.
  • Figure V.4 is a black and white reproduction of a color photograph illustrating the color photograph of Figure V.3 after the background pixels ⁇ b have been deleted therefrom.
  • Figure V.5a is a matrix having 15 x 15 pixels and illustrating the green values of a local image that contains a vessel segment.
  • Figure V.5b is a gradient map of the local image.
  • the average background gradient is 47.0.
  • the maximum background gradient is 170.
  • Figure V.5c is a matrix having 15 x 15 pixels, and illustrating the green values of another local image that contains a blot hemorrhage.
  • Figure V.5d is a matrix having 15 x 15 pixels and illustrating a gradient map of the image of Figure V.5c.
  • the average background gradient is 20.0.
  • Figure V.5e is a black and white reproduction of a color photograph after the vessel pixels lc have been deleted.
  • Figure VI.3a is an intensity profile of a vessel segment.
  • Figure VI.3b (on two pages) illustrates twelve binary matched filters, where θ is the angle of the vessels.
  • Figures VI.3c.a-g are black and white reproductions of color photographs showing the detected HMAs from the corresponding binary images in Figures VI.2a-g.
  • Figure VI.4 is a black and white reproduction of a color photograph showing the candidate HMA pixels after the voting.
  • Figure VII.1 is a black and white reproduction of a color photograph showing the combined candidate pixels of HMAs obtained from sections V and VI.
  • Figure VIII.1a is a diagrammatic view of a line detecting algorithm wherein P0 is recognized as an HMA pixel because
  • Figure VIII.1b is a diagrammatic view of a line detecting algorithm wherein P0 is recognized as a vessel pixel because
  • Figure VIII.2 is a black and white reproduction of a color photograph of the final candidate pixels of HMAs after deleting the false detections.
  • Figure IX.1a is a black and white reproduction of a color photograph of a normalized image for exudate detection.
  • Figure IX.1b is the intensity histogram of the normalized image.
  • Figure IX.2 is a binary image showing the result of the global thresholding.
  • Figure IX.3 is a black and white reproduction of a color photograph of non-background pixels of the image shown in Figure IX.2.
  • Figure IX.4a is the green/red ratios of the blue pixels in the image of Figure IX.3.
  • Figure IX.4b is a black and white reproduction of a color photograph showing exudate pixels with green/red ratios higher than Rm + 0.05.
  • Figure IX.5a.1 is a matrix showing the green/red ratios of a local image (15 x 15) that contains an exudate, where R0 = 0.64, Ra = 0.42, and R0 - Ra = 0.22 > Rt.
  • Figure IX.5a.2 is a matrix showing the green/red ratios of a local image (15 x 15) that contains only background, where R0 = 0.51, Ra = 0.45, and R0 - Ra = 0.06 < Rt.
  • Figure IX.5b is a black and white reproduction of a color photograph showing the exudates detected by the local green/red ratios.
  • Figure IX.6 is a black and white reproduction of a color photograph having the combined exudates shown in Figure IX.4 and Figure IX.5.
  • Figure X.1a is a matrix showing a local image (15 x 15) containing a HMA.
  • Figure X.1b is a matrix showing region growing. Pixels with green values Gp < Gb are included in the HMA region (with value 255).
  • Figure X.2 is a black and white reproduction of a color photograph showing the results of the HMA and exudate growing.
  • Figures XI.1a-d depict a diagnosis report produced in accordance with the present invention.
  • Figures XI.2a-2c depict two lesions in differing relationships.
  • Figure XII.1a is a color fundus photograph of a patient's right eye taken at T1.
  • Figure XII.1b is a black and white reproduction of a color photograph showing the result of a computer diagnosis performed with the present invention on the color fundus photograph shown in Figure XII.1a.
  • Figure XII.2a is a color fundus photograph of the same patient's right eye taken at T2.
  • Figure XII.2b is a black and white reproduction of a color photograph showing the result of a computer diagnosis performed with the present invention on the color fundus photograph shown in Figure XII.2a.
  • Figure XII.3 is a black and white reproduction of a color photograph of a computer match of the pair of scanned photos shown in Figures XII.1a and XII.2a.
  • Figures XII.4a-b depict a time series analysis report performed with the present invention on the color photograph of Figure XII.3.
  • Figures XIIIA-XIIID are algorithm flowcharts of a computer system according to this invention.

DETAILED DESCRIPTION OF THE INVENTION
  • "Imagery alike abnormalities" means at least two different types of abnormalities having similar, but not identical, image patterns. For example, a faded laser scar and a soft exudate are imagery alike abnormalities; a pigment and the onset of a small tumor are imagery alike abnormalities.
  • "Vague abnormality" means an abnormality that is faded or fuzzy or has a boundary which is not well defined. Examples of a "vague abnormality" are a hard exudate and a soft exudate.
  • Referring now to Figure I.1, shown therein and designated by the general reference numeral 10, is a computer system programmed for analyzing an image 12 and automatically detecting early signs of abnormalities associated with the image 12.
  • the abnormalities in the image 12 can be very small (i.e. in some instances as small as or smaller than 20 microns), and/or vague.
  • the computer system 10 is programmed to differentiate between at least two types of "imagery alike abnormalities" having similar, but not identical, image patterns.
  • the computer system 10 can differentiate between 1) a faded laser scar and a soft exudate, 2) a hard exudate and a soft exudate, and 3) a pigment and the onset of a small tumor.
  • One particularly effective use of the computer system 10 is for analyzing one or more images 12 associated with an eye 11 of a patient, and detecting abnormalities, such as lesions caused by retinopathy (either diabetic retinopathy or hypertensive retinopathy).
  • abnormalities such as lesions caused by retinopathy (either diabetic retinopathy or hypertensive retinopathy).
  • the operation of the computer system 10 will be generally described herein with respect to specific abnormalities associated with the eye 11 caused by "diabetic retinopathy”. These abnormalities are flame hemorrhages, blot hemorrhages, dot hemorrhages, microaneurysms, hard exudates, soft exudates and combinations thereof. These abnormalities may also be referred to herein as "lesions”.
  • the computer system 10 can be used to detect abnormalities associated with various other categories or types of eye diseases, such as hypertensive retinopathy, macular edema and glaucoma. Furthermore, the computer system 10 can also be used to detect other categories or types of abnormalities in the image 12. For example, the abnormalities could be 1) a tumor in a living organism, 2) a newly appearing blood vessel, 3) a lump in a blood vessel, 4) and/or abnormalities in other parts of a human body or an animal body.
  • the computer system 10 can be used as a clinical aid, for mass screening, epidemiological studies, clinical trials, telemedicine, and medical education.
  • the computer system 10 can be used for analyzing images which are taken of either human or non-human bodies.
  • the computer system 10 can analyze retinal or other images depicting portions of a mouse, a cat, a dog, a cow, a sheep, a rabbit, or a horse. More particularly, when the abnormality is caused by diabetic retinopathy, the computer system 10 can be utilized for the automatic detection and/or quantification of flame, blot and dot hemorrhages, microaneurysms, hard exudates, soft exudates, and combinations thereof in an image 12 depicting at least a portion of the eye 11.
  • the computer system 10 can be a personal digital assistant, a personal computer system (either portable or desktop), a mainframe computer system or any other computer system, microprocessor or programmed logic unit capable of analyzing the image 12 and achieving the functions and purposes discussed herein.
  • the computer system 10 includes a Web site established on a global network, such as the Internet so that images 12 can be transmitted from a remote computer system to the computer system 10 over the global network.
  • the computer system 10 would analyze the image or images 12 and preferably transmit a report indicative of the results of the analysis to the remote computer system which transmitted the image or images to the computer system 10.
  • the image 12 is preferably a color retinal image, which is produced by an image recording system 14 located either remotely from the computer system 10 or adjacent to the computer system 10.
  • the image recording system 14 can be 1) an x-ray machine, 2) an MRI machine, 3) a CT machine, 4) a CAT machine, 5) an ultrasound machine, 6) an infrared machine capable of producing a thermograph, or 7) a fundus camera for example.
  • the computer system 10 could be incorporated into or in communication with the image recording system 14.
  • the image recording system 14 preferably outputs the image 12 in a digital format. However, it should be understood that the image recording system 14 does not necessarily have to output the image 12 in a digital format.
  • the image recording system 14 could output the image 12 in an analog format, and the image 12 could thereafter be scanned and digitized.
  • the computer system 10 is particularly useful for analyzing color images having red components, green components, blue components and combinations thereof. However, it should be understood that the computer system 10 may also be used for analyzing gray-scale images, or unitary color images.
  • the image 12 is shown as a digital retinal image obtained by scanning a standard 35mm color fundus photo slide, taken at a 45 degree field of view by a fundus camera (the image recording system 14), into the computer system 10 and saving the image 12 in "TIFF" format as an RGB full color image, with a fixed size of 512 x 512 pixels and a resolution of 10,000 PPI (pixels per inch). In this example, the diameter of the retina in the image 12 is also required to be 512 pixels.
  • each pixel in the image 12 is made up of 24 bits of data (3 bytes) and has three components: red (R), green (G), and blue (B).
  • the RGB value of each pixel of the image 12 can be read by decoding the image file using proper header and pointer values.
  • the intensity of each pixel can be calculated from the corresponding RGB values, and its value also ranges from 0 (black) to 255 (white).
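The pixel description above can be sketched in code. The patent does not state the exact formula used to derive intensity from the R, G and B components, so a plain channel average is assumed here; the function name and the synthetic test image are illustrative only.

```python
import numpy as np

def pixel_intensity(rgb_image: np.ndarray) -> np.ndarray:
    """Per-pixel intensity (0-255) of a 24-bit RGB image.

    The exact formula is not stated in the text; a plain average of
    the R, G and B components is assumed here.
    """
    # rgb_image has shape (rows, cols, 3) with uint8 R, G, B bytes.
    return rgb_image.astype(np.float64).mean(axis=2)

# A 512 x 512 synthetic "image": every pixel pure red (255, 0, 0).
img = np.zeros((512, 512, 3), dtype=np.uint8)
img[..., 0] = 255
intensity = pixel_intensity(img)  # every value is 255 / 3 = 85.0
```

A real implementation might instead use a weighted luminance formula; the averaging choice here only illustrates that intensity stays in the same 0-255 range as the components.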
  • the coordinate system of the image 12 is shown in Figure I.3.
  • the coordinate system has its origin at the upper left corner of the image 12.
  • the column and row axes indicate the x and y values of the coordinate (x,y), respectively.
  • the column axis is referred to as the x-axis and the row axis as the y-axis.
  • the computer system 10 is not limited to only analyzing the image 12 as set forth in the experiments described above.
  • the computer system 10 can analyze images 12 which are taken at fields of view other than 45 degrees.
  • the image 12 can have more or fewer pixels than the 512 x 512 pixel example described above.
  • the image 12 could have 1024 x 1024 pixels.
  • the image 12 is input into the computer system 10 in a digital format.
  • the image 12 can either be input into the computer system 10 in real-time or a predetermined time, or an arbitrary time after the image 12 is taken.
  • the image 12 can be input into the computer system 10 from the image recording system 14 immediately once the image 12 is taken via a signal path 16.
  • the image 12 can be input into the computer system 10 from the image recording system 14 a predetermined or arbitrary time period, such as a day, week, month or year, after the image 12 was taken.
  • the image 12 can be taken by the image recording system 14 and then stored on an image storage device 18. Subsequently, the image 12 can be input into the computer system 10 from the image storage device 18 as indicated by a signal path 20.
  • the image storage device 18 can be a disk, a hard drive, a memory unit, or any other device capable of storing the image 12 in a digital format.
  • the signal paths 16 and 20 can be any suitable medium that permits electronic or non-electronic communication between two entities. That is, the signal paths 16 and 20 are intended to refer to any suitable communication medium, including extra-computer communication systems and intra-computer communication systems.
  • Examples of such communication mediums include, but are not limited to: internal computer buses, local area networks, wide area networks, point-to-point shared and dedicated communication links, Internet links, infra-red links, microwave links, telephone links, CATV links, satellite links, radio links, ultra-sonic links, fibre-optic links, air, wire, breadboards, circuit boards, and the like.
  • Referring to Figure I.4, shown therein is a flow chart illustrating a computer program 24 running on the computer system 10 for analyzing the image 12 and detecting abnormalities, such as hemorrhages and microaneurysms, in the image 12.
  • the computer program 24 branches to a quality test algorithm 26.
  • the quality test algorithm 26 quantitatively measures and describes the overall quality of the image 12 to determine whether the image 12 is suitable for use with the computer system 10. If the image 12 does not meet some minimum image quality criteria (for example, too dark, too bright, or totally out of focus), the computer system 10 will not process the image 12 further.
  • the quality test algorithm 26 will be discussed in more detail below.
  • the computer program 24 branches to a first detection algorithm 30 as indicated by a line 32 for detecting the macula and the optic disk in the image 12, if the computer system 10 is analyzing an image associated with the eye 11.
  • the computer program 24 branches to an image normalization algorithm 36 as indicated by a line 38.
  • the image normalization algorithm 36 improves the quality of the image 12 so as to enhance the detection and quantification of the early signs of abnormalities, if any, depicted in the image 12.
  • the image normalization algorithm 36 can simultaneously improve the contrast and brightness characteristics of the image 12, reduce the noise content of the image 12, or sharpen the details contained in the image 12.
  • the image normalization algorithm 36 will be discussed in more detail below.
  • the computer program 24 branches to a second detection algorithm 40 (as indicated by a line 42) for detecting relatively larger abnormalities/lesions in the image 12.
  • the second detection algorithm 40 typically detects the location and size of any relatively large lesions, such as blot and flame hemorrhages, which may be depicted in the image 12.
  • the computer program 24 branches to a third detection algorithm 44 (as indicated by a line 46) for detecting any tiny lesions in the image 12 which were not detected by the second detection algorithm 40.
  • the third detection algorithm 44 detects the location and size of microaneurysms and dot and blot hemorrhages.
  • the computer program 24 branches to a fourth detection algorithm 48 (as indicated by a line 50) for detecting predetermined types of lesions in the image 12 which were not detected by the second detection algorithm 40 and the third detection algorithm 44.
  • the fourth detection algorithm 48 can detect imagery alike abnormalities. For example, when the abnormality is caused by diabetic retinopathy, the fourth detection algorithm 48 may detect the location and size of hard exudates and soft exudates.
  • the computer program 24 desirably branches to a report algorithm 52 (as indicated by a line 54) for providing a report of the abnormalities detected by the second detection algorithm 40, the third detection algorithm 44 and the fourth detection algorithm 48.
  • the report generated by the report algorithm 52 desirably includes the coordinate location of the optic disk (X0, Y0) and the macula (Xm, Ym), followed by a list of all hemorrhages and microaneurysms detected. For each hemorrhage and microaneurysm detected, various other information, such as the location and size, is indicated.
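The report contents described above could be represented with a structure along the following lines. The key names and example coordinates are hypothetical, not taken from the patent; only the kinds of fields (optic disk and macula coordinates, plus a lesion list with location and size) follow the text.

```python
# Hypothetical layout of the diagnosis report; the key names and
# example values are illustrative, not taken from the patent.
report = {
    "optic_disk": (320, 250),  # (X0, Y0) in pixel coordinates
    "macula": (210, 260),      # (Xm, Ym) in pixel coordinates
    "lesions": [
        {"type": "microaneurysm", "location": (143, 310), "size_pixels": 4},
        {"type": "blot hemorrhage", "location": (98, 402), "size_pixels": 57},
    ],
}

def summarize(report: dict) -> dict:
    """Count the detected lesions by type."""
    counts = {}
    for lesion in report["lesions"]:
        counts[lesion["type"]] = counts.get(lesion["type"], 0) + 1
    return counts
```

Grouping the lesion list by type, as `summarize` does, is one natural way to produce the per-category counts a screening report would show.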
  • the computer system 10 communicates with a computer guided robotic mechanism 56 via a signal path 58 for helping the physician find tiny abnormalities and avoiding mistakes. For example, if the abnormalities are lesions caused by diabetic retinopathy, laser surgery may be needed to prevent further growth of the lesions.
  • the computer guided robotic mechanism 56 desirably includes a camera on the instrument, such as the laser, so that the computer system 10 can "see" where the instrument is pointed. If activation of the instrument would cause damage to an important part of the patient, or the wrong part of the patient, then the computer system 10 outputs a signal to the computer guided robotic mechanism 56 to temporarily disable the mechanism 56, or to at least give a warning to the physician.
  • the mechanism 56 is a laser used to treat diabetic retinopathy
  • the important part of the patient may be the macula, the optic disk or a blood vessel.
  • the physician can override the computer system 10, if the physician believes that the computer system 10 is wrong.
  • the computer system 10 can also serve as an aid to the physician to reduce the possibility of errors which may occur during the treatment of the patient.
  • the quality of the digitized retinal image 12 is first tested before the diagnosis continues.
  • the computer system 10 classifies the quality of the images 12 into three categories: gradable, marginally gradable, and ungradable.
  • the quality test algorithm 26 can be incorporated into the image recording system 14 so that the quality test algorithm 26 analyzes the image 12 in real time upon capture by the image recording system 14. This provides an immediate indication to the photographer or technician as to the quality of the image 12. If the quality test algorithm 26 indicates that the image 12 is of poor (ungradable) or marginal (marginally gradable) quality, the photographer or technician can then retake the image 12 until a good quality image 12 is captured.
  • the following is an example of an algorithm which can be employed to classify the quality of the image 12 into the three categories in accordance with the present invention.
  • the quality of the image 12 is tested to be ungradable if it meets one of the following conditions:
  • Distorted color: Based on the histograms of the green/red and blue/red ratios, if the green or blue component of the image color overtakes the red component as the major color of the image 12 (i.e., most of the pixels have green/red ratios greater than 1 or blue/red ratios greater than 0.5), the image is considered to be ungradable.
  • Blurry image: If there are two distinct peaks in the histogram of the image 12 (the first peak results from the black background and the second peak results from the retina), and the intensity range of the second peak does not exceed 80, the image is considered to be ungradable (Figures II.4a and II.4b).
  • the light reflection's color could be red, green, blue or any combination thereof, and the reflections could occur anywhere in the image 12.
  • all the light reflections have one thing in common: they all exhibit an abnormal green/red ratio or blue/red ratio compared to a normal retinal image.
  • We consider a green/red ratio greater than 1 or smaller than 0.1, or a blue/red ratio greater than 0.5 or smaller than 0.05, to be abnormal. If the number of pixels in the image with these abnormal ratios exceeds 1/10 of the total pixels of the image 12, the image 12 is considered to be ungradable.
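The color-distortion and light-reflection tests can be sketched as follows. The numeric thresholds come from the text; reading "most of the pixels" as more than half, and the function name itself, are assumptions.

```python
import numpy as np

def is_ungradable(rgb: np.ndarray) -> bool:
    """Sketch of two of the ungradability tests described above.

    Thresholds follow the text: color is distorted when most pixels
    have green/red > 1 or blue/red > 0.5; light reflection is
    excessive when more than 1/10 of the pixels have a green/red
    ratio outside [0.1, 1] or a blue/red ratio outside [0.05, 0.5].
    Interpreting "most" as more than half is an assumption.
    """
    r = rgb[..., 0].astype(np.float64) + 1e-6  # avoid division by zero
    gr = rgb[..., 1] / r  # green/red ratio per pixel
    br = rgb[..., 2] / r  # blue/red ratio per pixel
    n = gr.size

    distorted = (gr > 1).sum() > n / 2 or (br > 0.5).sum() > n / 2
    abnormal = (gr > 1) | (gr < 0.1) | (br > 0.5) | (br < 0.05)
    reflective = abnormal.sum() > n / 10
    return distorted or reflective

# A normal-looking patch: red dominant, ratios well inside the limits.
normal = np.zeros((10, 10, 3), dtype=np.uint8)
normal[..., 0], normal[..., 1], normal[..., 2] = 200, 100, 40
```

The dark/bright and blurry tests would be added on top of this by inspecting the intensity histogram rather than the channel ratios.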
  • Figure I.2a shows a typical good quality retinal image and Figure I.2b shows its intensity histogram. Notice that in this histogram there are two separate regions: the first (lower intensity) region contains mostly the pixels of the dark boundary area outside the circular retinal image, and the second region represents the pixel intensity distribution of the actual retinal image area. The intensity span of the second region for a good quality image should be above a predetermined threshold of approximately 30, for example.
  • histogram templates are created and used in determining the gradability of the image 12. If the histograms of the image 12 match the general profiles of the templates obtained from good images, and do not fall into any of the above categories (1-6) of ungradable images, the image 12 will be considered a good quality image and thus gradable. If the variations of the histograms of the image 12 lie between those of the good quality images and those of the ungradable images, the image 12 will be considered to be marginally gradable.
  • Figure II.7a shows an example of a "marginally gradable" retinal image and its intensity histogram is shown in Figure II.7b.
  • the quality of a retinal image can be determined by using the following parameters:
  • the first detection algorithm 30 locates the optic disk in the original image 12 (which has not been normalized by the image normalization algorithm 36) by finding the highest average intensity area in the image 12.
  • A disk-shaped binary matched filter (an example of which is shown in Figure III.1) is then convolved with the pixels in the area 80 to 150 pixels from the left or right side of the optic disk, depending on whether it is a left or right retina.
  • the size of the matched filter is 25 x 25 pixels.
  • the lowest convolution value indicates the position of the center of the macula because the macula has lower intensity values compared to its surrounding background.
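The matched-filter search for the macula can be sketched as below. The 25 x 25 disk filter and the minimum-response rule come from the text; constructing the 80-150 pixel search region beside the optic disk is left out, so the boolean `search` mask, the function names, and the synthetic demo image are assumptions.

```python
import numpy as np

def disk_filter(size: int = 25) -> np.ndarray:
    """Binary disk-shaped matched filter: 1 inside the inscribed disk,
    0 outside (cf. Figure III.1)."""
    c = size // 2
    y, x = np.ogrid[:size, :size]
    return ((x - c) ** 2 + (y - c) ** 2 <= c ** 2).astype(np.float64)

def find_macula(intensity: np.ndarray, search: np.ndarray):
    """Return the (row, col) in the boolean `search` mask whose
    disk-filtered response is lowest; the macula is darker than its
    surroundings, so the minimum marks its center."""
    f = disk_filter()
    half = f.shape[0] // 2
    best, best_pos = np.inf, None
    for r, c in zip(*np.nonzero(search)):
        patch = intensity[r - half:r + half + 1, c - half:c + half + 1]
        if patch.shape != f.shape:
            continue  # candidate too close to the image border
        score = float((patch * f).sum())
        if score < best:
            best, best_pos = score, (r, c)
    return best_pos

# Synthetic demo: uniform bright retina (200) with a 25 x 25 dark
# patch (50) centered at row 50, column 60.
intensity = np.full((100, 100), 200.0)
intensity[38:63, 48:73] = 50.0
search = np.zeros((100, 100), dtype=bool)
search[40:61, 50:71] = True
macula_pos = find_macula(intensity, search)
```

Because the filter is binary, each response is simply the sum of the intensities under the disk, so the darkest disk-shaped region wins.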
  • the locations of the optic disk and macula are stored in a data file, which may be called "dis&mac.dat", for example.
  • Because the macula is the area that provides a human's primary vision, in many situations it is desirable to describe the location of a lesion relative to its distance from the center of the macula. This may show the severity of the lesion's effect on the patient's vision.
  • Two image coordinate systems use the center of the macula as the origin of the system.
  • One is a rectangular coordinate system and the other a logarithmic polar coordinate system.
  • the rectangular coordinate system and the logarithmic polar coordinate system are shown in Figures III.2 and III.3, respectively.
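The two macula-centered coordinate systems can be sketched as conversions from image coordinates. The text does not give the logarithm base or the angle convention, so the natural logarithm and degrees measured from the positive x-axis are assumptions, as are the function names.

```python
import math

def to_macula_rect(x: float, y: float, xm: float, ym: float):
    """Rectangular coordinates of image point (x, y) relative to the
    macula center (xm, ym), as in Figure III.2."""
    return x - xm, y - ym

def to_macula_log_polar(x: float, y: float, xm: float, ym: float):
    """Logarithmic polar coordinates (log radius, angle in degrees)
    about the macula center, as in Figure III.3. The base of the
    logarithm is not given in the text; natural log is assumed."""
    dx, dy = x - xm, y - ym
    r = math.hypot(dx, dy)
    theta = math.degrees(math.atan2(dy, dx))
    return (math.log(r) if r > 0 else float("-inf")), theta
```

The logarithmic radius compresses distances far from the macula, which matches the intent of grading lesions by how close they sit to the center of vision.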
  • the image normalization algorithm 36 normalizes the image 12, eliminates red from the image 12, reduces noise and enhances the image quality.
  • the image normalization algorithm 36 adopts the technique of image segmentation using global thresholding to enhance the image 12. This technique requires image normalization if the illumination of the image surface is nonuniform.
  • the usual illumination pattern of a retinal image is shown in Figure IV.1a (dark around the border area and bright at the center area). This type of illumination pattern results in the image intensity histogram in Figure IV.1b.
  • Using a single threshold value to segment out the features to be detected (HMAs) is almost impossible (figure IV.1c), and a large amount of background noise is included. To fix this problem, image normalization is needed.
  • Image normalization is the most essential part of the image normalization algorithm 36, as the next two sections of the computer program 24 both use the global thresholding technique, based on the normalized image, to detect microaneurysms and dot, blot and flame hemorrhages. Furthermore, image normalization also acts as a tool for image enhancement and noise removal. This can be verified by comparing the histograms of the images before and after normalization. The histogram of the normalized image exhibits a more uniform pixel distribution (we approximate this distribution as a Gaussian curve; the image normalization algorithm 36 uses this property of the distribution to automatically calculate the global threshold); its mean value increases from 60 to 115, while its spread also increases from 80 to 110.
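The specification does not give the normalization formula itself, so the following is only one plausible sketch: estimate the slowly varying illumination with coarse block averages and rescale so the background mean moves to 115, the post-normalization mean reported above. The block size is an assumption.

```python
import numpy as np

def normalize_illumination(green, block=64, target=115.0):
    """Flatten nonuniform illumination: estimate the slowly varying
    background with coarse block averages, then rescale each pixel so
    every block's mean moves to the target value (115, matching the
    histogram mean reported after normalization)."""
    h, w = green.shape
    bg = np.zeros((h, w), dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            bg[i:i + block, j:j + block] = green[i:i + block, j:j + block].mean()
    out = green.astype(np.float64) * (target / np.maximum(bg, 1.0))
    return np.clip(out, 0.0, 255.0)
```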
  • the second detection algorithm 40 of the computer program 24 is directed to the quick detection of rather large abnormalities, such as obvious hemorrhages with sizes that are easily viewed by the human eye, namely blot and flame hemorrhages.
  • although the second detection algorithm 40 is discussed hereinafter as analyzing the normalized image, it should be understood that the second detection algorithm 40 can be used to analyze images which have not been normalized.
  • the second detection algorithm 40 could be used to locate the macula and the optic disk, as discussed above, in a retinal image.
  • the image in figure IV.1a will be used as the example throughout our illustration of the second detection algorithm 40.
  • the algorithm consists of the following parts:
  • the idea of a "spider net" is to locate those obvious hemorrhages caught by the net.
  • the net is originated at the center of the macula.
  • the angular resolution of the net is 5°, and the radius resolution is 10 pixels.
  • the constructed net appears in the image as white pixels (Figure V.1). We refer to this image as In.
  • the second detection algorithm 40 will detect abnormalities having a longest cross-sectional axis (see Figure XI.2c) of at least about 200 microns.
  • the angular resolution and the radius resolution of the "spider net” can be varied to change the sensitivity of the second detection algorithm 40. For example, as the angular resolution and the radius resolution increase, the sensitivity of the second detection algorithm 40 decreases, and vice-versa.
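A sketch of the net construction (the function name and the ring-sampling details are illustrative):

```python
import math

def spider_net(cx, cy, max_radius, d_theta=5, d_r=10):
    """Pixels of a 'spider net' centered at the macula (cx, cy): radial
    spokes every d_theta degrees plus concentric rings every d_r pixels.
    Denser nets (smaller d_theta and d_r) raise the sensitivity."""
    pixels = set()
    for a in range(0, 360, d_theta):                    # spokes
        rad = math.radians(a)
        for r in range(max_radius + 1):
            pixels.add((cx + round(r * math.cos(rad)),
                        cy + round(r * math.sin(rad))))
    for r in range(d_r, max_radius + 1, d_r):           # rings
        steps = max(8, int(2 * math.pi * r))
        for k in range(steps):
            a = 2.0 * math.pi * k / steps
            pixels.add((cx + round(r * math.cos(a)),
                        cy + round(r * math.sin(a))))
    return pixels
```

These pixels would be written white into the image to form Figure V.1.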
  • Image Ia still contains some background and noise pixels. To delete these pixels, we read the green value Go of each white pixel in Ia, and calculate the average green value Ga of its surrounding background. Since HMAs have lower reflectance compared to their neighboring area, we retain those pixels with Ga - Go ≥ 11.5 and delete the others with Ga - Go < 11.5. The result is shown in image Ib in Figure V.4. As we can see from the image Ib, all the background pixels have been deleted; only pixels of HMAs and vessels are retained.
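The background check just described might be sketched as follows; the 11.5 difference threshold is from the text, while the window size is an assumption.

```python
import numpy as np

def keep_dark_candidates(green, mask, win=7, t=11.5):
    """Keep a white (candidate) pixel only if the average green value Ga
    of its surrounding background exceeds the pixel's own green value Go
    by at least t; HMAs reflect less light than their neighborhood."""
    out = np.zeros_like(mask, dtype=bool)
    h, w = green.shape
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        x0, x1 = max(0, x - win), min(w, x + win + 1)
        patch = green[y0:y1, x0:x1]
        n = patch.size - 1
        ga = (patch.sum() - green[y, x]) / max(n, 1)   # background average Ga
        if ga - green[y, x] >= t:                      # Go = green[y, x]
            out[y, x] = True
    return out
```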
  • a different scheme for detecting the lesions of small sizes is needed.
  • the following scheme can detect lesions having a longest cross-sectional axis greater than about 20 microns, and in some cases the following scheme will detect a lesion having a longest cross-sectional axis less than about 20 microns.
  • Such scheme is described by the following steps:
  • the histogram of a normalized image can be approximated to be a Gaussian curve. More observations of this normalized image show that after a certain threshold, numbers of background pixels increase dramatically. Since HMAs have lower reflectance than their surrounding background, this threshold should be able to separate the background pixels from the HMAs pixels and other objects (vessels). To determine this threshold value, we calculate the first derivatives of the left part of this curve, and the maximum derivative will indicate the point where the background pixels present a dramatic increment. The green value in the histogram corresponding to this maximum derivative will be the global threshold T. Figure IV.2c shows the result of the global thresholding by value T. As we can see, the features and the background in the image are clearly separated.
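A sketch of this automatic threshold selection (histogram, left flank, maximum first derivative):

```python
import numpy as np

def global_threshold(green):
    """Approximate the normalized histogram as Gaussian and return the
    green value T on its left flank where the count rises fastest; the
    maximum first derivative marks where background pixels begin their
    dramatic increase, separating HMA/vessel pixels (below T) from
    background (above T)."""
    hist, _ = np.histogram(green, bins=256, range=(0, 256))
    peak = int(np.argmax(hist))
    left = hist[:peak + 1].astype(np.int64)   # left part of the curve only
    if left.size < 2:
        return peak
    deriv = np.diff(left)                     # deriv[i] = hist[i+1] - hist[i]
    return int(np.argmax(deriv)) + 1
```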
  • T is not an absolutely safe threshold in terms of extracting all the HMAs in the images.
  • the offset may be other than 10 so long as the results discussed herein are achieved.
  • the results of the thresholdings using these 7 thresholds are shown in Figures VI.2a-g.
  • a matched filter uses an optimal filter to maximize the output signal-to-noise ratio.
  • the signal would be either HMAs or vessels.
  • the vessels have the properties of piecewise linear segments with intensity profiles approximated as Gaussian curves (Figure VI.3a).
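One way to realize such oriented binary matched filters (the twelve orientations of Figure VI.3b) is sketched below; the kernel size and strip half-width are assumptions.

```python
import math
import numpy as np

def line_kernel(angle_deg, size=15, half_width=1):
    """Binary matched filter for a line segment at the given angle:
    1 on a strip through the kernel center, 0 elsewhere; the mean is
    subtracted so a flat background yields zero response."""
    k = np.zeros((size, size), dtype=np.float64)
    c = size // 2
    rad = math.radians(angle_deg)
    nx, ny = -math.sin(rad), math.cos(rad)     # unit normal to the line
    for y in range(size):
        for x in range(size):
            if abs((x - c) * nx + (y - c) * ny) <= half_width + 1e-9:
                k[y, x] = 1.0
    return k - k.mean()

# twelve orientations, 15 degrees apart, as in Figure VI.3b
kernels = [line_kernel(a) for a in range(0, 180, 15)]
```

At each pixel, the orientation whose kernel gives the strongest response indicates whether a vessel (a strong linear response) or an HMA (no dominant orientation) is present.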
  • This part of the computer program 24 combines the results obtained from the second detection algorithm 40 (discussed above in Section V) and the third detection algorithm 44 (discussed above in Section VI) to include all the candidate pixels of HMAs in one image. This can be done by ORing images Ic and Id, that is, all the white pixels in both Ic and Id are written into the same image (Figure VII.1). We refer to this image as Ig.
  • Figure VIII.2 shows the image that contains all the candidate pixels of HMAs with all the false detection being deleted.
  • the diabetic retinopathy is classified into three categories: Y category — diabetic retinopathy, Q category — questionable diabetic retinopathy, and N category — no diabetic retinopathy.
  • a retinal image is determined to be a Y case when at least one sure HMA is detected. 20 μm is the smallest size (in diameter) of a microaneurysm that can be viewed by a human expert, and it is equivalent to 1-2 pixels for the computer system 10, given the image 12 having a size of 512 x 512 pixels.
  • this retinal image is considered to have at least one sure HMA, and thus indicates that the patient has diabetic retinopathy. The diagnosis then continues with the exudate detection. Otherwise, if the sizes of all the white pixel groups are smaller than 4 pixels, which means no sure HMAs are detected, we proceed with the detection of the questionable HMAs. Finally, if no white pixel is present in Figure VIII.2, this retinal image is an N case (no diabetic retinopathy).
  • if the resolution of the image 12 is changed from 512 x 512 pixels to some other resolution, such as 1024 x 1024, for example, then the numbers of pixels set forth above may also be adjusted.
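The Y/Q/N decision can be sketched with a simple connected-component pass over the final candidate image; the 4-pixel "sure HMA" size assumes a 512 x 512 image, as noted above.

```python
from collections import deque

def group_sizes(mask):
    """Sizes of 8-connected white pixel groups in a binary image
    (list of lists of 0/1)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                q, n = deque([(sy, sx)]), 0
                seen[sy][sx] = True
                while q:
                    y, x = q.popleft()
                    n += 1
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w and \
                               mask[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                q.append((ny, nx))
                sizes.append(n)
    return sizes

def classify(mask, sure_min=4):
    """Y: at least one group of sure_min pixels (a sure HMA);
    N: no white pixels at all; Q: only groups below sure_min.
    sure_min = 4 assumes 512 x 512 images (20 um is about 1-2 pixels)."""
    sizes = group_sizes(mask)
    if not sizes:
        return "N"
    return "Y" if max(sizes) >= sure_min else "Q"
```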
  • the fourth detection algorithm 48 detects and identifies imagery alike abnormalities in the image 12. Since the example we use in this patent application is diabetic retinopathy, we will explain and discuss the detection of imagery alike abnormalities, such as exudates, first. The detection of imagery alike abnormalities, such as questionable HMAs, will be covered in section X.
  • The following table shows the properties of hard and soft exudates, compared with those of HMAs:
  • the primary differences between HMAs and exudates are the intensity and color.
  • Some of the methods for HMAs detection can be used here for exudate detection, such as background checking, global thresholdings, etc., except we change the term from "lower than a certain threshold" to "higher than a certain threshold".
  • the most distinguishing differences between HEs and SEs are their color, size, and neighborhood.
  • this image shows the regions of the exudates rather than the seed pixels as it usually does. That is because in this part we combine the region growing method to obtain the exudate regions right after the exudate detection. The region growing will be discussed in section X.
  • The region growing technique is used here. It is a procedure that groups pixels into a region, or subregions into larger regions. Single-linkage region growing is the simplest approach to this technique, where we start with a set of seed pixels. For each seed pixel, we try to expand it into a region by including its neighboring pixels that have similar properties, such as intensity, color, etc. To obtain the regions of HMAs, we use those candidate pixels of HMAs (Figure VIII.2) as the seed pixels.
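A minimal single-linkage growth from one seed pixel, using the green values of Figure X.1 (Gs = 74 against a background level Gb = 100); 4-connectivity is an assumption:

```python
from collections import deque

def grow_region(green, seed, gb):
    """Single-linkage region growing: starting from a seed pixel, absorb
    4-connected neighbors whose green value stays below the local
    background level gb (darker pixels belong to the HMA)."""
    h, w = len(green), len(green[0])
    region = {seed}
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region \
               and green[ny][nx] < gb:
                region.add((ny, nx))
                q.append((ny, nx))
    return region
```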
  • the diagnosis report prepared by the report algorithm 52 desirably includes the following results:
  • if the image 12 indicates that the patient has diabetic retinopathy, obtain the numbers, locations and sizes of the HMAs and exudates (if there are any).
  • One is the normal coordinate system, which is shown in Figure I.3 and which is originated at the upper left corner of the image 12; the other is the polar coordinate system, which is originated at the center of the macula.
  • the second coordinate system is used to indicate the severity of the diabetic retinopathy.
  • the report will indicate that the results may contain false detection due to the quality of the image 12.
  • Shown in Figures XI.1a-d is a typical computer diagnosis report for the image 12 which was produced in accordance with the present invention.
  • the report algorithm 52 applies the following rules in counting the lesions in the image 12. Shown in Figure XI.2a is a diagram of two lesions 100 and 102 which are spaced a distance "d" apart. The report algorithm 52 would count the lesions 100 and 102 as two lesions because the lesion 100 is isolated from the lesion 102. In other words, the edges of the lesions 100 and 102 are not connected.
  • Shown in Figure Xl.2b is a diagram of two lesions 104 and 106 which would be counted by the report algorithm 52 as a single lesion because the lesions 104 and 106 are connected to or at least partially overlay each other, i.e. there is not a distinguishing separation between the edges of the lesions 104 and 106.
  • Shown in Figure XI.2c is a diagram of two lesions 108 and 110 which would be counted by the report algorithm 52 as a single lesion.
  • the lesion 110 has a longest cross-sectional axis D2, which is much larger than a longest cross-sectional axis D1 of the lesion 108.
  • the lesion 110 also has an area A2, which is much larger than an area A1 of the lesion 108.
  • the lesions 108 and 110 are separated by a closest distance D3. If the closest distance D3 is less than D1 and D2/D1 > approximately 10, or A2/A1 > approximately 25, then the two lesions 108 and 110 will be counted by the report algorithm 52 as a single lesion.
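The counting rule can be expressed directly; the grouping of the conditions (the distance test combined with either ratio test) is an assumed reading of the sentence above.

```python
def counted_as_single(d1, d2, a1, a2, d3):
    """Merge rule for two disconnected lesions: the small lesion (longest
    axis d1, area a1) is absorbed into the count of the much larger one
    (axis d2, area a2) when their closest distance d3 is less than d1
    and either d2/d1 exceeds about 10 or a2/a1 exceeds about 25."""
    return d3 < d1 and (d2 / d1 > 10.0 or a2 / a1 > 25.0)
```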
  • the method described above can be used in time series analysis of retinal images of the same eye taken at different times.
  • the procedure for two images of the same eye taken at two different times is described. The same procedure can be extended to multiple images taken at different times.
  • Step 1. Determine if the quality of the two images is good enough for computer processing.
  • Step 2. Each image is processed by the computer system 10 to detect the lesions, HMA, HE, and CWS. See Figures XII.1 and XII.2.
  • Step 3. Select three pairs of substantially identical reference points from the two images, three from each image. These reference points should be selected in such a way that (1) they are obvious landmarks of the images, (2) they are present in both images, and (3) they are as far apart as possible.
  • Step 4. Overlay one image on the other by overlapping the vessels, particularly the major vessels (Figure XII.3), using the three pairs of substantially identical reference points from the two images.
  • Step 5. Determine the common retina area of the two images.
  • Step 6. Identify and quantitate changes in lesions in the common area. These changes desirably include:
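Steps 3 and 4 amount to solving for the transform determined by the three reference-point pairs. The affine model and the Cramer's-rule solver below are illustrative assumptions; the specification only requires that the vessels overlap.

```python
def affine_from_points(src, dst):
    """Solve for the affine transform (a, b, tx, c, d, ty) mapping the
    three reference points src[i] -> dst[i]; the same transform then
    overlays one retinal image on the other."""
    def solve3(pts, q):
        # Cramer's rule for  a*x + b*y + t = q  at the three points
        (x1, y1), (x2, y2), (x3, y3) = pts
        q1, q2, q3 = q
        det = x1*(y2 - y3) - y1*(x2 - x3) + (x2*y3 - x3*y2)
        a = (q1*(y2 - y3) - y1*(q2 - q3) + (q2*y3 - q3*y2)) / det
        b = (x1*(q2 - q3) - q1*(x2 - x3) + (x2*q3 - x3*q2)) / det
        t = (x1*(y2*q3 - y3*q2) - y1*(x2*q3 - x3*q2) + q1*(x2*y3 - x3*y2)) / det
        return a, b, t

    a, b, tx = solve3(src, [p[0] for p in dst])
    c, d, ty = solve3(src, [p[1] for p in dst])
    return a, b, tx, c, d, ty

def apply_affine(params, x, y):
    """Map a point of the first image into the second image's frame."""
    a, b, tx, c, d, ty = params
    return a*x + b*y + tx, c*x + d*y + ty
```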
  • Referring to Figures XII.4a-b, shown therein is an example of a report produced in accordance with the present invention.
  • the foregoing example has been set forth to illustrate an example of the computer program 24 which can be employed in carrying out the practice of the present invention.
  • changes may be made in the computer program or other computer programs may be written which can be employed in the practice of the present invention.
  • the present invention is not limited to the example set forth above.
  • Figures XIIIA-XIIID are examples of a computer system, in flow chart form, which is constructed in accordance with the present invention. This system may be used in various fields to detect abnormalities; these fields include, but are not limited to, the fields of material production, bodily analysis, and repair analysis.

Abstract

A computer program for analyzing an image and detecting early signs of abnormalities in the image. The computer program comprises software programmed to analyze the image so as to automatically detect and identify at least one of the type, location and size of at least one of an abnormality having a longest cross-sectional axis of below about 200 microns, a vague abnormality, imagery alike abnormalities and combinations thereof in the image.

Description

COMPUTER SYSTEM FOR ANALYZING IMAGES AND DETECTING EARLY SIGNS OF ABNORMALITIES
CROSS REFERENCE TO RELATED APPLICATIONS
The present application claims priority to the provisional patent application identified by U.S. Serial No. 60/121,512, which was filed on February 23, 1999.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH AND DEVELOPMENT
The research for the present invention was supported, at least partly, by the National Eye Institute (Grant number: U10EY09898).
BACKGROUND OF THE INVENTION
Many instances exist where it would be of great utility to be able to quickly and automatically analyze an image for detecting signs of abnormalities. This occurs in many fields where nondestructive testing is very important. These include, but are not limited to, industrial quality control, air defense, welds, welding, production materials, and medicine. Physicians and others currently manually analyze retinal images, x-ray images, MRI images, CT images, CAT images, ultrasound images, and thermographic images for abnormalities, such as lesions or tumors.
For example, based on the latest survey, diabetes affects 16 million people in the United States and is a leading cause of death and disability. Diabetes costs the United States government approximately $92 billion per year. People of any age can get diabetes. Studies indicate that diabetes among American Indians is particularly high. In fact, Oklahoma has the largest American Indian population (approximately 250,000) in the United States. Over one-third of American Indian adults have type II diabetes. One third of the diabetic Indian population developed severe diabetic retinopathy and 6% of them developed new blindness every year.
Diabetic retinopathy has been identified as one of the most common causes of blindness. Unfortunately, at its early stage patients may not notice diabetic retinopathy. It is, however, treatable if it is detected early. Many cases of diabetes are first noticed by eye doctors. It was recently discovered that diabetic retinopathy may occur 6.5 years prior to diagnosis of diabetes.
An annual retinal examination is highly recommended, especially among those high risk groups, such as Oklahoma American Indians. Two major reasons, high cost and shortage of ophthalmologists, especially in the rural area, make the annual examination of these people very difficult. Therefore, there is a need for developing a low-cost manner in conducting these annual examinations.
The detection and quantification of early and often tiny retinal lesions by human eyes introduces high inter-and intra-observer variability and may require invasive procedures such as a fluorescein angiogram, which involves the injection of dye into the patient's vein, to produce accurate results. Without this invasive procedure, inaccurate diagnosis and grading of diabetic retinopathy is often unavoidable.
Another prior art method for automatic detection and/or diagnosis of diabetic retinopathy lesions was based on an artificial neural network. The neural network method is suitable for solving pattern recognition problems involving the recognition of "general patterns", such as the lesion patterns exhibited in fundus images, which reflect the stage of severity of diabetic retinopathy. Therefore, it may be useful in grading diabetic retinopathy, but not in detecting individual early (often tiny and/or vague) retinal lesions. Moreover, the neural network method does not possess the ability to measure precisely (only approximately) the number, the size, or the location of the lesions.
Thus, a need exists for a low cost manner in conducting analysis and detection of abnormalities in images. It is to such a system for conducting analysis and detection of abnormalities in images that the present invention is directed. This system is usable for medical and non-medical purposes, and while the following description directs itself to the realm of medicine, the invention could as well be used elsewhere. Analyzing welds, welding, material formation, industrial quality control, air defense radar images, biological cell differentiation, cell counts, product defect detection, etc. are just some examples of such use.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
The file of this patent contains at least one drawing executed in color. Copies of this patent with color drawing(s) will be provided in the Patent and Trademark Office upon request and payment of the necessary fee.
Figure 1.1 is a schematic diagram of a computer system constructed in accordance with the present invention for locating abnormalities in an image.
Figure I.2a is a black and white reproduction of a color photograph of a good quality digital retinal image.
Figure I.2b is the intensity histogram of the image of Figure I.2a.
Figure I.3 is a diagram of an image coordinate system wherein the origin of the image coordinate system is located at an upper left corner of an image.
Figure I.4 is a flow diagram of a computer program running on the computer system depicted in Figure I.1.
Figure II.1a is a black and white reproduction of a color photograph illustrating a "dark" image of a retina.
Figure II.1b is the intensity histogram of the dark image of Figure II.1a.
Figure II.2a is a black and white reproduction of a color photograph illustrating a "bright" image of a retina.
Figure II.2b is the intensity histogram of the bright image of Figure II.2a.
Figure II.3a is a black and white reproduction of a color photograph illustrating a "color distorted" image of a retina.
Figure II.3b is the intensity histogram of the green/red ratio of the color distorted image of Figure II.3a.
Figure II.3c is the intensity histogram of the blue/red ratio of the color distorted image of Figure II.3a.
Figure II.4a is a black and white reproduction of a color photograph illustrating a "blurry" image of a retina.
Figure II.4b is the intensity histogram of the blurry image.
Figure II.5a is a black and white reproduction of a color photograph illustrating an image of a retina having light reflections contained therein.
Figure II.5b is the intensity histogram of the green/red ratio of the image depicted in Figure II.5a.
Figure II.5c is the intensity histogram of the blue/red ratio of the image depicted in Figure II.5a.
Figure II.6a is a black and white reproduction of a color photograph of a retina illustrating a defective slide.
Figure II.6b is the intensity histogram of the defective slide.
Figure II.7a is a black and white reproduction of a color photograph illustrating a "marginally gradable" image of a retina.
Figure III.1 is a 25 x 25 disk-shaped binary matched filter.
Figure III.2 is a rectangular coordinate system originated at the center of the macula.
Figure III.3 is a logarithmic polar coordinate system originated at the center of the macula.
Figure IV.1a is a black and white reproduction of a color photograph of an original red-free image before normalization.
Figure IV.1b is the intensity histogram of the image of Figure IV.1a.
Figure IV.1c is a segmented image of Figure IV.1a using threshold value T=50.
Figure IV.2a is a black and white reproduction of a color photograph of the image of Figure IV.1a after normalization has been conducted in accordance with the present invention.
Figure IV.2b is the intensity histogram of the image of Figure IV.2a.
Figure IV.2c is a segmented image of Figure IV.2a using single threshold value T=90.
Figure V.1 is a black and white reproduction of a color photograph of an image having a "spider net" constructed in accordance with the present invention interposed thereon.
Figure V.2 is a black and white reproduction of a color photograph illustrating an image segmentation using thresholds T1 = Gma - 18 and T2 = Gma + 18.
Figure V.3 is a black and white reproduction of a color photograph illustrating the results of an AND operation on the color photographs of Figures V.1 and V.2.
Figure V.4 is a black and white reproduction of a color photograph illustrating the color photograph of Figure V.3 after the background pixels have been deleted therefrom (image Ib).
Figure V.5a is a matrix having 15 x 15 pixels and illustrating the green values of a local image that contains a vessel segment.
Figure V.5b is a gradient map of the local image. The average background gradient is 47.0. The maximum background gradient is 170. The difference is 170-47.0 = 123.
Figure V.5c is a matrix having 15 x 15 pixels, and illustrating the green values of another local image that contains a blot hemorrhage.
Figure V.5d is a matrix having 15 x 15 pixels and illustrating a gradient map of the image of Figure V.5c. The average background gradient is 20.0. The maximum background gradient is 82. The difference is 82 - 20.0 = 62.
Figure V.5e is a black and white reproduction of a color photograph (image Ic) after the vessel pixels have been deleted.
Figures VI.2a-g are binary images thresholded by Tk = T1 + (k-1) x (threshold increment), where k = 1, 2, . . . , 7, n = 7.
Figure VI.3a is an intensity profile of a vessel segment. Figure VI.3b (on two pages) illustrates twelve binary matched filters; θ is the angle of the vessels.
Figures VI.3c.a-g are black and white reproductions of color photographs showing the detected HMAs from the corresponding binary images in Figures VI.2a-g.
Figure VI.4 is a black and white reproduction of a color photograph showing the candidate HMA pixels after the voting.
Figure VII.1 is a black and white reproduction of a color photograph showing the combined candidate pixels of HMAs obtained from sections V and VI.
Figure VIII.1a is a diagrammatic view of a line detecting algorithm wherein P0 is recognized as a HMA pixel because |θ01 - θ02| > 20 degrees and |θ03 - θ04| > 20 degrees.
Figure VIII.1b is a diagrammatic view of a line detecting algorithm wherein P0 is recognized as a vessel pixel because |θ01 - θ02| < 20 degrees and |θ03 - θ04| < 20 degrees.
Figure VIII.2 is a black and white reproduction of a color photograph of the final candidate pixels of HMAs after deleting the false detections.
Figure IX.1a is a black and white reproduction of a color photograph of a normalized image for exudate detection.
Figure IX.1b is the intensity histogram of the normalized image. Figure IX.2 is a binary image showing the result of the global thresholding.
Figure IX.3 is a black and white reproduction of a color photograph of non-background pixels of the image shown in Figure IX.2.
Figure IX.4a is the green/red ratios of the blue pixels in the image of Figure IX.3.
Figure IX.4b is a black and white reproduction of a color photograph showing exudate pixels with green/red ratios higher than Rm + 0.05.
Figure IX.5a.1 is a matrix showing the green/red ratios of a local image (15 x 15) that contains an exudate. R0 = 0.64, Ra = 0.42, R0 - Ra = 0.22 > Rt.
Figure IX.5a.2 is a matrix showing the green/red ratios of a local image (15 x 15) that contains only background. R0 = 0.51, Ra = 0.45, R0 - Ra = 0.06 < Rt.
Figure IX.5b is a black and white reproduction of a color photograph showing the exudates detected by the local green/red ratios.
Figure IX.6 is a black and white reproduction of a color photograph having the combined exudates shown in Figures IX.4 and IX.5.
Figure X.1a is a matrix showing a local image (15 x 15) containing a HMA. Gs = 74, Gb = 100.
Figure X.1b is a matrix showing region growing. Pixels with green values Gp < Gb are included in the HMA region (with value 255). Figure X.2 is a black and white reproduction of a color photograph showing the results of the HMA and exudate growing.
Figures XI.1a-d depict a diagnosis report produced in accordance with the present invention.
Figures XI.2a-2c depict two lesions in differing relationships.
Figure XII.1a is a color fundus photograph of a patient's right eye taken at T1.
Figure XII.1b is a black and white reproduction of a color photograph showing the result of a computer diagnosis performed with the present invention on the color fundus photograph shown in Figure XII.1a.
Figure XII.2a is a color fundus photograph of the same patient's right eye taken at T2.
Figure XII.2b is a black and white reproduction of a color photograph showing the result of a computer diagnosis performed with the present invention on the color fundus photograph shown in Figure XII.2a.
Figure XII.3 is a black and white reproduction of a color photograph of a computer match of the pair of scanned photos shown in Figures XII.1a and XII.2a.
Figures XII.4a-b depict a time series analysis report performed with the present invention on the color photograph of Figure XII.3.
Figures XIIIA-XIIID are algorithm flowcharts of a computer system according to this invention.
DETAILED DESCRIPTION OF THE INVENTION
Definitions
The term "imagery alike abnormalities" as used herein means at least two different types of abnormalities having similar, but not identical, image patterns. For example, a faded laser scar and a soft exudate are imagery alike abnormalities; a pigment and the onset of a small tumor are imagery alike abnormalities.
The term "vague abnormality" as defined herein means an abnormality that is faded or fuzzy or having a boundary which is not very well defined. Examples of a "vague abnormality" are a hard exudate and a soft exudate.
It should be understood that the terms "such as", "inclusive" and "for example" as used herein are illustrative and not limitative.
The terms "computer software" and "computer program" may instead be read as "computer system".
I. General Overview
Referring now to the drawings, and in particular to Figure I.1, shown therein and designated by the general reference numeral 10 is a computer system programmed for analyzing an image 12 and automatically detecting early signs of abnormalities associated with the image 12. The abnormalities in the image 12 can be very small (i.e., in some instances as small as or smaller than 20 microns), and/or vague. Moreover, the computer system 10 is programmed to differentiate between at least two types of "imagery alike abnormalities" having similar, but not identical, image patterns. For example, the computer system 10 can differentiate between 1) a faded laser scar and a soft exudate, 2) a hard exudate and a soft exudate, and 3) a pigment and the onset of a small tumor.
One particularly effective use of the computer system 10 is for analyzing one or more images 12 associated with an eye 11 of a patient, and detecting abnormalities, such as lesions caused by retinopathy (either diabetic retinopathy or hypertensive retinopathy). By way of example, the operation of the computer system 10 will be generally described herein with respect to specific abnormalities associated with the eye 11 caused by "diabetic retinopathy". These abnormalities are flame hemorrhages, blot hemorrhages, dot hemorrhages, microaneurysms, hard exudates, soft exudates and combinations thereof. These abnormalities may also be referred to herein as "lesions".
In addition, it will be understood that the computer system 10 can be used to detect abnormalities associated with various other categories or types of eye diseases, such as hypertensive retinopathy, macular edema and glaucoma. Furthermore, the computer system 10 can also be used to detect other categories or types of abnormalities in the image 12. For example, the abnormalities could be 1) a tumor in a living organism, 2) a newly appearing blood vessel, 3) a lump in a blood vessel, 4) and/or abnormalities in other parts of a human body or an animal body. The computer system 10 can be used as a clinical aid, for mass screening, epidemiological studies, clinical trials, telemedicine, and medical education. The computer system 10 can be used for analyzing images which are taken of either human or non-human bodies. For example, the computer system 10 can analyze retinal or other images depicting portions of a mouse, a cat, a dog, a cow, a sheep, a rabbit, or a horse. More particularly, when the abnormality is caused by diabetic retinopathy, the computer system 10 can be utilized for the automatic detection and/or quantification of flame, blot and dot hemorrhages, microaneurysms, hard exudates, soft exudates, and combinations thereof in an image 12 depicting at least a portion of the eye 11.
The computer system 10 can be a personal digital assistant, a personal computer system (either portable or desktop), a mainframe computer system or any other computer system, microprocessor or programmed logic unit capable of analyzing the image 12 and achieving the functions and purposes discussed herein. In one preferred embodiment, the computer system 10 includes a Web site established on a global network, such as the Internet so that images 12 can be transmitted from a remote computer system to the computer system 10 over the global network. In this example, the computer system 10 would analyze the image or images 12 and preferably transmit a report indicative of the results of the analysis to the remote computer system which transmitted the image or images to the computer system 10.
When using the computer system 10 for detecting early signs of diabetic retinopathy, the image 12 is preferably a color retinal image, which is produced by an image recording system 14 located either remotely from the computer system 10 or adjacent to the computer system 10. The image recording system 14 can be 1) an x-ray machine, 2) an MRI machine, 3) a CT machine, 4) a CAT machine, 5) an ultrasound machine, 6) an infrared machine capable of producing a thermograph, or 7) a fundus camera, for example. The computer system 10 could be incorporated into or in communication with the image recording system 14. The image recording system 14 preferably outputs the image 12 in a digital format. However, it should be understood that the image recording system 14 does not necessarily have to output the image 12 in a digital format. In this regard, it should be understood that the image recording system 14 could output the image 12 in an analog format, and the image 12 could thereafter be scanned and digitized. The computer system 10 is particularly useful for analyzing color images having red components, green components, blue components and combinations thereof. However, it should be understood that the computer system 10 may also be used for analyzing gray-scale images, or unitary color images. Referring to Figure I.2a, the image 12 is shown as a digital retinal image obtained by scanning a standard 35mm color fundus photo slide taken at a 45 degree field of view by a fundus camera (the image recording system 14) into the computer system 10 and saving the image 12 as a "TIFF" data format for a RGB full color image, with fixed size of 512 x 512 pixels and resolution of 10,000 PPI (pixels per inch). In this example, the diameter of the retina in the image 12 is also required to be 512 pixels. For this type of "TIFF" data format, each pixel in the image 12 is made up of 24 bits of data (3 bytes) and has three components: red (R), green (G), and blue (B).
Each component occupies 1 byte (8 bits) of data and its value ranges from 0 to 255, yielding 256 x 256 x 256 = 16.7 million possible colors for each pixel. The RGB value of each pixel of the image 12 can be read by decoding the image file using the proper header and pointer values. The intensity of each pixel can be calculated from the corresponding RGB values, and its value also ranges from 0 (black) to 255 (white).
For purposes of our experiments, the coordinate system of the image 12 is shown in figure I.3. The coordinate system originates at the upper left corner of the image 12. The column and row axes indicate the x and y values of the coordinate (x, y), respectively. The column axis is referred to as the x-axis and the row axis as the y-axis. For example, the expression R(10, 15) = 128 indicates that the red component value of the pixel located at row 15 and column 10 of the image 12 is equal to 128. The computer system 10 is not limited to analyzing only the image 12 as set forth in the experiments described above. For example, the computer system 10 can analyze images 12 which are taken at fields of view other than 45 degrees. Moreover, the image 12 can have more or fewer pixels than the 512 x 512 pixel example described above. For example, the image 12 could have 1024 x 1024 pixels.
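The pixel layout and intensity calculation described above can be sketched as follows. This is an illustrative C fragment, not the patented code; in particular, the plain RGB average used for the intensity is an assumption, since the text does not specify the exact formula.

```c
#include <assert.h>

/* Byte offset of pixel (x, y) in a row-major 512 x 512 x 3 RGB buffer,
 * with the origin at the upper-left corner as in figure I.3. */
static long pixel_offset(int x, int y)
{
    return ((long)y * 512 + x) * 3;
}

/* Intensity of a 24-bit RGB pixel, also in the range 0 (black) to
 * 255 (white).  The plain average used here is an assumption; the
 * text only says the intensity is calculated from the RGB values. */
static int pixel_intensity(int r, int g, int b)
{
    return (r + g + b) / 3;
}
```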
The image 12 is input into the computer system 10 in a digital format. The image 12 can be input into the computer system 10 either in real time, at a predetermined time, or at an arbitrary time after the image 12 is taken. For example, the image 12 can be input into the computer system 10 from the image recording system 14 via a signal path 16 immediately after the image 12 is taken. Alternatively, the image 12 can be input into the computer system 10 from the image recording system 14 a predetermined or arbitrary time period, such as a day, week, month or year, after the image 12 was taken.
As another example, the image 12 can be taken by the image recording system 14 and then stored on an image storage device 18. Subsequently, the image 12 can be input into the computer system 10 from the image storage device 18 as indicated by a signal path 20. The image storage device 18 can be a disk, a hard drive, a memory unit, or any other device capable of storing the image 12 in a digital format. The signal paths 16 and 20 can be any suitable medium that permits electronic or non-electronic communication between two entities. That is, the signal paths 16 and 20 are intended to refer to any suitable communication medium, including extra-computer communication systems and intra-computer communication systems. Examples of such communication mediums include, but are not limited to: internal computer buses, local area networks, wide area networks, point-to-point shared and dedicated communication links, Internet links, infra-red links, microwave links, telephone links, CATV links, satellite links, radio links, ultra-sonic links, fibre-optic links, air, wire, breadboards, circuit boards, and the like.
Referring now to figure I.4, shown therein is a flow chart illustrating a computer program 24 running on the computer system 10 for analyzing the image 12 and detecting abnormalities, such as hemorrhages and microaneurysms, in the image 12. Initially, the computer program 24 branches to a quality test algorithm 26. In general, the quality test algorithm 26 quantitatively measures and describes the overall quality of the image 12 to determine whether the image 12 is suitable for use with the computer system 10. If the image 12 does not meet certain minimum image quality criteria (for example, it is too dark, too bright, or totally out of focus), the computer system 10 will not process the image 12 further. The quality test algorithm 26 will be discussed in more detail below. If the quality of the image 12 is determined to be suitable for use with the computer system 10, the computer program 24 branches to a first detection algorithm 30, as indicated by a line 32, for detecting the macula and the optic disk in the image 12, if the computer system 10 is analyzing an image associated with the eye 11.
Once the macula and the optic disk are located in the image 12, the computer program 24 branches to an image normalization algorithm 36 as indicated by a line 38. In general, the image normalization algorithm 36 improves the quality of the image 12 so as to enhance the detection and quantification of the early signs of abnormalities, if any, depicted in the image 12. For example, the image normalization algorithm 36 can simultaneously improve the contrast and brightness characteristics of the image 12, reduce the noise content of the image 12, or sharpen the details contained in the image 12. The image normalization algorithm 36 will be discussed in more detail below.
Once the quality of the image 12 has been enhanced by the image normalization algorithm 36, the computer program 24 branches to a second detection algorithm 40 (as indicated by a line 42) for detecting relatively large abnormalities/lesions in the image 12. When the abnormality is caused by diabetic retinopathy, the second detection algorithm 40 typically detects the location and size of any relatively large lesions, such as blot and flame hemorrhages, which may be depicted in the image 12. After the relatively large lesions are detected, the computer program 24 branches to a third detection algorithm 44 (as indicated by a line 46) for detecting any tiny lesions in the image 12 which were not detected by the second detection algorithm 40. When the abnormality is caused by diabetic retinopathy, the third detection algorithm 44 detects the location and size of microaneurysms and dot and blot hemorrhages.
After the tiny lesions are detected, the computer program 24 branches to a fourth detection algorithm 48 (as indicated by a line 50) for detecting predetermined types of lesions in the image 12 which were not detected by the second detection algorithm 40 and the third detection algorithm 44. In addition, the fourth detection algorithm 48 can detect abnormality-like features in the imagery. For example, when the abnormality is caused by diabetic retinopathy, the fourth detection algorithm 48 may detect the location and size of hard exudates and soft exudates.
Thereafter, the computer program 24 desirably branches to a report algorithm 52 (as indicated by a line 54) for providing a report of the abnormalities detected by the second detection algorithm 40, the third detection algorithm 44 and the fourth detection algorithm 48. In general, the report generated by the report algorithm 52 desirably includes the coordinate location of the optic disk (X0, Y0) and the macula (Xm, Ym), followed by a list of all hemorrhages and microaneurysms detected. For each hemorrhage and microaneurysm detected, various other information, such as the location and size, is indicated.
Once the location and size of any abnormalities/lesions associated with a patient condition, such as diabetic retinopathy, are determined, the patient may require treatment to correct the abnormalities/lesions or to prevent continued growth thereof. As an optional feature of the present invention, the computer system 10 communicates with a computer guided robotic mechanism 56 via a signal path 58 to help the physician find tiny abnormalities and avoid mistakes. For example, if the abnormalities are lesions caused by diabetic retinopathy, laser surgery may be needed to prevent further growth of the lesions.
The computer guided robotic mechanism 56 desirably includes a camera on the instrument, such as the laser, so that the computer system 10 can "see" where the instrument is pointed. If activation of the instrument would cause damage to an important part of the patient, or the wrong part of the patient, then the computer system 10 outputs a signal to the computer guided robotic mechanism 56 to temporarily disable the mechanism 56, or to at least give a warning to the physician. When the mechanism 56 is a laser used to treat diabetic retinopathy, the important part of the patient may be the macula, the optic disk or a blood vessel. After the computer system 10 has output the signal to the mechanism 56 to disable the mechanism 56 or to warn the physician of a potential mistake, the physician can override the computer system 10, if the physician believes that the computer system 10 is wrong. Thus, the computer system 10 can also serve as an aid to the physician to reduce the possibility of errors which may occur during the treatment of the patient.
Each of the quality test algorithm 26, the image normalization algorithm 36, the first detection algorithm 30, the second detection algorithm 40, the third detection algorithm 44, the fourth detection algorithm 48 and the report algorithm 52 will be discussed in more detail below.
II. Quality Test Algorithm 26
The quality of each digitized retinal image 12 is first tested before the diagnosis continues. The computer system 10 classifies the quality of the images 12 into three categories: gradable, marginally gradable, and ungradable. The quality test algorithm 26 can be incorporated into the image recording system 14 so that the quality test algorithm 26 analyzes the image 12 in real time upon capture by the image recording system 14. This provides an immediate indication to the photographer or technician as to the quality of the image 12. If the quality test algorithm 26 indicates that the image 12 is of poor (ungradable) or marginal (marginally gradable) quality, the photographer or technician can then retake the image 12 until a good quality image 12 is captured. The following is an example of an algorithm which can be employed to classify the quality of the image 12 into the three categories in accordance with the present invention. The image 12 is tested to be ungradable if it meets one of the following conditions:
1) Dark image: Based on the histogram (see figure II.1b) of the image 12, if there is no distinct gap between the black background pixels and the retina pixels, or the number of pixels with intensity values lower than 25 exceeds 1/3 of the total pixels of the image 12, the image 12 is considered to be ungradable. (figures II.1a and II.1b)
2) Bright image: Based on the histogram of the image 12, if the number of pixels with intensity values higher than 200 exceeds 1/5 of the total pixels of the image, the image 12 is considered to be ungradable. (figures II.2a and II.2b)
3) Distorted color: Based on the histograms of the green/red and blue/red ratios, if the green or blue components of the image color overtake the red component as the major color of the image 12, i.e., most of the pixels have green/red ratios greater than 1 or blue/red ratios greater than 0.5, the image is considered to be ungradable. (figures II.3a, II.3b and II.3c)
4) Blurry image: If there are two distinct peaks present in the histogram of the image 12 (the first peak results from the black background and the second peak results from the retina), and the intensity range of the second peak does not exceed 80, the image is considered to be ungradable. (figures II.4a and II.4b)
5) Light reflection: Figure II.5a is a black and white reproduction of a color photograph illustrating an image of a retina having light reflections contained therein. There are different types of light reflections. A light reflection's color could be red, green, blue or any combination thereof, and light reflections could occur anywhere in the image 12. However, all the light reflections have one thing in common: they all exhibit an abnormal green/red ratio or blue/red ratio compared to a normal retinal image. Mathematically, we consider a green/red ratio greater than 1 or smaller than 0.1, or a blue/red ratio greater than 0.5 or smaller than 0.05, to be abnormal. If the number of pixels in the image with these abnormal ratios exceeds 1/10 of the total pixels of the image 12, the image 12 is considered to be ungradable. Figure II.5b is the intensity histogram of the green/red ratio of the image depicted in Figure II.5a. Figure II.5c is the intensity histogram of the blue/red ratio of the image depicted in Figure II.5a.
6) Defective slide: The fundus photo resulting in the image 12 was not successfully taken due to defective film or other damage. (figures II.6a and II.6b)
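Conditions 1) and 2) above can be checked directly from a 256-bin intensity histogram. The following C sketch is illustrative only and assumes a 512 x 512 image, with hist[i] holding the count of pixels having intensity i.

```c
#include <assert.h>

#define TOTAL_PIXELS (512L * 512L)

/* Dark-image test (condition 1): ungradable if more than 1/3 of all
 * pixels have intensity values lower than 25. */
static int is_too_dark(const long hist[256])
{
    long low = 0;
    for (int i = 0; i < 25; i++)
        low += hist[i];
    return low > TOTAL_PIXELS / 3;
}

/* Bright-image test (condition 2): ungradable if more than 1/5 of all
 * pixels have intensity values higher than 200. */
static int is_too_bright(const long hist[256])
{
    long high = 0;
    for (int i = 201; i < 256; i++)
        high += hist[i];
    return high > TOTAL_PIXELS / 5;
}
```

The gap test of condition 1) and the color-ratio tests of conditions 3) and 5) follow the same histogram-counting pattern using the green/red and blue/red ratio histograms.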
Figure I.2a shows a typical good quality retinal image and figure I.2b shows its intensity histogram. Notice that in this histogram there are two separate regions: the first (lower intensity) region contains mostly the pixels of the dark boundary area outside the circular retinal image, and the second region represents the pixel intensity distribution of the actual retinal image area. The intensity span of this second region for a good quality image should be above a predetermined threshold of approximately 30, for example.
Using the distinct histogram patterns of a set of selected prototype ungradable and gradable images, histogram templates are created and used in determining the gradability of the image 12. If the histograms of the image 12 match the general profiles of the templates obtained from the good images, and do not fall into any of the above categories (1-6) of ungradable images, the image 12 will be considered to be a good quality image and thus gradable. If the variations of the histograms of the image 12 lie between those of the good quality images and those of the ungradable images, the image 12 will be considered to be marginally gradable. Figure II.7a shows an example of a "marginally gradable" retinal image and its intensity histogram is shown in Figure II.7b. Alternatively, the quality of a retinal image can be determined by using the following parameters:
1. Mean value (m) of the pixel intensities of the image, m = (∑ I(x, y))/(512 x 512), where I(x, y) is the intensity value of the pixel located at (x, y).
2. Percentage of pixels (P1) with intensity values T1 < I < m.
3. Range of the retinal band, T21 = T2 - T1.
4. Percentage of pixels (Pe) detected as edges by Sobel operators.
5. Percentage of pixels (Pgr) with green/red ratios greater than 1.
6. Percentage of pixels (Pbr) with blue/red ratios greater than 0.5.
The approximate ranges of these parameters describing a good quality image (gradable), poor quality image (marginally gradable), and bad quality image (ungradable) are shown in the table below.
[Table (Figure imgf000027_0001): approximate ranges of the parameters m, P1, T21, Pe, Pgr and Pbr for gradable, marginally gradable, and ungradable images.]
III. First detection algorithm 30
The first detection algorithm 30 locates the optic disk in the original image 12 (which has not been normalized by the image normalization algorithm 36) by finding the highest average intensity area in the image 12. To locate the macula in the image 12, we use a disk-shaped binary matched filter (an example of which is shown in Figure III.1) to convolve with the pixels in the area 80 - 150 pixels from the left or right side of the optic disk, depending on whether it is a left or right retina. The size of the matched filter is 25 x 25 pixels. The lowest convolution value indicates the position of the center of the macula because the macula has lower intensity values compared to its surrounding background. The locations of the optic disk and macula are stored in a data file, which may be called "dis&mac.dat", for example.
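The macula search described above amounts to scanning a window with a disk-shaped mask and keeping the position of the lowest response. The C sketch below computes the response at one candidate centre over a flat w x h green-channel buffer; treating the 25 x 25 binary matched filter as a radius-12 disk, and ignoring out-of-bounds samples, are assumptions made for illustration.

```c
#include <assert.h>

/* Response of a 25 x 25 disk-shaped binary matched filter centred at
 * (cx, cy): the sum of green values inside a radius-12 disk.  The
 * macula centre is taken where this response is lowest, since the
 * macula is darker than its surrounding background. */
static long disk_response(const unsigned char *g, int w, int h,
                          int cx, int cy)
{
    long sum = 0;
    for (int dy = -12; dy <= 12; dy++)
        for (int dx = -12; dx <= 12; dx++) {
            int x = cx + dx, y = cy + dy;
            if (x < 0 || x >= w || y < 0 || y >= h)
                continue;                    /* ignore out-of-bounds */
            if (dx * dx + dy * dy <= 144)    /* inside the disk mask */
                sum += g[(long)y * w + x];
        }
    return sum;
}
```

The caller would evaluate this response at every pixel in the 80 - 150 pixel search band beside the optic disk and record the minimizing position.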
Since the macula is the area that provides a person's primary vision, in many situations it is desirable to describe the location of a lesion relative to its distance from the center of the macula. This may show the severity of the lesion's effect on the patient's vision. Two image coordinate systems use the center of the macula as the origin of the system. One is a rectangular coordinate system and the other is a logarithmic polar coordinate system. The rectangular coordinate system and the logarithmic polar coordinate system are shown in Figures III.2 and III.3, respectively.
IV. Image normalization algorithm 36
The following is indicative of the portion of the computer program bearing the designation <dis&mac.c> which was submitted in the above-referenced provisional patent application upon which this patent application depends for priority.
In general, the image normalization algorithm 36 normalizes the image 12, eliminates red from the image 12, reduces noise and enhances the image quality. The image normalization algorithm 36 adopts the technique of image segmentation using global thresholding to enhance the image 12. This technique requires image normalization if the illumination of the image surface is nonuniform. The usual illumination pattern of a retinal image is shown in figure IV.1a (dark around the border area and bright at the center area). This type of illumination pattern results in the image intensity histogram in figure IV.1b. Using a single threshold value to segment out the features to be detected (HMAs) is almost impossible (figure IV.1c), as a large amount of background noise is included. To fix this problem, image normalization is needed.
We define the illumination pattern of the image to be i(x, y) = g(x, y)/r(x, y), where (x, y) are the coordinates of the image, and g(x, y) and r(x, y) are the green and red values of the pixel (x, y), respectively. Thus, based on the image normalization discussion set forth in Rafael C. Gonzalez and Richard E. Woods, "Digital Image Processing" (which is hereby expressly incorporated herein by reference), we can obtain a normalized green image gn(x, y) = k x i(x, y) = k x g(x, y)/r(x, y), where k is a constant which depends on the image surface. Here we choose k = 255, and gn(x, y) will be the new green value of pixel (x, y) in the normalized image. In particular applications, k may be chosen as a number other than 255 so long as the purposes set forth herein are achieved. Figures IV.2a and IV.2b show the normalized image and its histogram. The normalized image has much more uniform illumination. Figure IV.2c shows the segmentation of the normalized image using a single global threshold.
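On a per-pixel basis, the normalization gn(x, y) = k x g(x, y)/r(x, y) with k = 255 might be sketched as follows. The clipping to 255 and the handling of r = 0 (pure background) are assumptions, as the text does not address those cases.

```c
#include <assert.h>

/* Normalized green value gn = 255 * g / r, clipped to the 8-bit
 * range.  Returning 0 for r == 0 (a pure background pixel) is an
 * assumption for illustration only. */
static int normalize_green(int g, int r)
{
    int gn;
    if (r == 0)
        return 0;
    gn = (255 * g) / r;
    return gn > 255 ? 255 : gn;
}
```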
Image normalization is the most essential part of the image normalization algorithm 36, as both of the next two sections of the computer program 24 use the global thresholding technique based on the normalized image to detect the microaneurysms and dot, blot and flame hemorrhages. Furthermore, image normalization also acts as a tool for image enhancement and noise removal. This can be verified by comparing the histograms of the images before and after the normalization. The histogram of the normalized image exhibits a more uniform pixel distribution (we approximate this distribution as a Gaussian curve; the image normalization algorithm 36 will use this property of the distribution to automatically calculate the global threshold), its mean value increases from 60 to 115, and its spread also increases from 80 to 110. These changes satisfy the conditions for image enhancement set forth in Rafael C. Gonzalez and Peter F. Sharp, "Digital Image Processing", second edition (which is hereby expressly incorporated herein by reference), which will increase the visibility and detectability of the features in images (signal to noise ratio), including, in our case, small microaneurysms.
By using the above image normalization scheme, we have concluded that our method is not only able to detect proliferative diabetic retinopathy, but is also capable of detecting non-proliferative diabetic retinopathy, where lesions are present at their very early stages and are difficult for human experts to detect.
The automatic determination of the global threshold will be discussed in the following parts of the computer program 24.
V. Second Detection Algorithm 40
The second detection algorithm 40 of the computer program 24 is directed to the quick detection of rather large abnormalities, such as obvious hemorrhages with sizes that are easily viewed by the human eye, namely blot and flame hemorrhages. Although the second detection algorithm 40 is discussed hereinafter as analyzing the normalized image, it should be understood that the second detection algorithm 40 can be used to analyze images which have not been normalized. For example, the second detection algorithm 40 could be used to locate the macula and the optic disk, as discussed above, in a retinal image. The image in figure IV.1a will be used as the example throughout our illustration of the second detection algorithm 40. In the following example, we start with the normalized image. The algorithm consists of the following parts:
1. Construction of the "spider net"
The following is indicative of the portion of the computer program bearing the designation <net.c> and submitted in the above-referenced provisional patent application upon which this patent application depends for priority.
The idea of a "spider net" is to catch the obvious hemorrhages in the net. The net originates at the center of the macula. The angular resolution of the net is 5°, and the radius resolution is 10 pixels. The constructed net appears in the image as white pixels (Figure V.1). We refer to this image as In. With the given parameters, we have found that the second detection algorithm 40 will detect abnormalities having a longest cross-sectional axis (see figure XII.3c) of at least about 200 microns. The angular resolution and the radius resolution of the "spider net" can be varied to change the sensitivity of the second detection algorithm 40. For example, as the angular resolution and the radius resolution increase, the sensitivity of the second detection algorithm 40 decreases, and vice-versa.
2. Image segmentation
The following is indicative of the portion of the computer program bearing the designation <mac&ham.c> which was submitted in the above-referenced provisional patent application upon which this patent application depends for priority.
First we read the green value Gma of the center of the macula from the normalized image, then threshold the normalized image into a binary image using threshold values T1 = Gma - 18 and T2 = Gma + 18. Pixels with green values between T1 and T2 are extracted and presented as white pixels in image Im (Figure V.2). Studies of lesions reveal that the green values of blot and flame hemorrhages usually lie between T1 and T2 in the normalized image; thus, the blot and flame hemorrhages are also extracted by the image segmentation.
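This segmentation step is a simple band threshold around the macula-centre green value Gma. A minimal C sketch, with inclusive bounds assumed:

```c
#include <assert.h>

/* Segment the normalized image: mark as white (255) the pixels whose
 * green value lies within +/- 18 of the macula-centre green value
 * gma, i.e. between T1 = gma - 18 and T2 = gma + 18. */
static void segment_band(const unsigned char *g, unsigned char *out,
                         long n, unsigned char gma)
{
    int t1 = (int)gma - 18, t2 = (int)gma + 18;
    for (long i = 0; i < n; i++)
        out[i] = ((int)g[i] >= t1 && (int)g[i] <= t2) ? 255 : 0;
}
```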
3. Image AND operation
The following is indicative of the portion of the computer program bearing the designation <andn&m.c> which was submitted in the above-referenced provisional patent application upon which this patent application depends for priority.
To catch the blot and flame hemorrhages, we need the help of the spider net. This can be done by performing an AND operation on images In and Im. If a pixel is white (with intensity of 255) in both images, this pixel is said to be caught by the net, and is written as a white pixel in image Ia (Figure V.3).
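The AND step is a per-pixel operation over the two binary images; a minimal sketch:

```c
#include <assert.h>

/* Pixel-wise AND of the net image In and the segmented image Im:
 * a pixel is "caught by the net" only if it is white in both. */
static void image_and(const unsigned char *in, const unsigned char *im,
                      unsigned char *ia, long n)
{
    for (long i = 0; i < n; i++)
        ia[i] = (in[i] == 255 && im[i] == 255) ? 255 : 0;
}
```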
4. Deletion of the background pixels
The following is indicative of the portion of the computer program bearing the designation <backgrnd.c> which was submitted in the above-referenced provisional patent application upon which this patent application depends for priority.
Image Ia still contains some background and noise pixels. To delete these pixels, we read the green value Go of each white pixel in Ia, and calculate the average green value Ga of its surrounding background. Since HMAs have lower reflectance compared to their neighboring area, we retain those pixels with Ga - Go ≥ 11.5 and delete the others with Ga - Go < 11.5. The result is shown in image Ib in figure V.4. As we can see from the image Ib, all the background pixels have been deleted; only pixels of HMAs and vessels are retained.
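The background-deletion test compares each candidate pixel's green value Go with the average green value Ga of its surroundings. In the sketch below, the 15 x 15 neighborhood used to estimate the background is an assumption; the text does not specify the window size.

```c
#include <assert.h>

/* Retain a candidate white pixel only if the average green value Ga
 * of its surrounding window exceeds its own green value Go by at
 * least 11.5 (HMAs reflect less light than their background).  The
 * 15 x 15 window is an assumed neighborhood size. */
static int is_lesion_candidate(const unsigned char *g, int w, int h,
                               int x, int y)
{
    double sum = 0.0;
    int count = 0;
    for (int dy = -7; dy <= 7; dy++)
        for (int dx = -7; dx <= 7; dx++) {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || nx >= w || ny < 0 || ny >= h)
                continue;
            if (dx == 0 && dy == 0)
                continue;          /* exclude the pixel itself */
            sum += g[ny * w + nx];
            count++;
        }
    return count > 0 && sum / count - g[y * w + x] >= 11.5;
}
```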
5. Deletion of the vessel pixels <gradnt.c>
The following is indicative of the portion of the computer program bearing the designation <gradnt.c> which was submitted in the above-referenced provisional patent application upon which this patent application depends for priority. Diabetic lesions in retinal images, such as microaneurysms and hemorrhages, usually exhibit irregular round shapes, while blood vessels are approximated to be piecewise linear segments. Thus, to differentiate the lesion pixels and the vessel pixels, we calculate the average and the maximum gradients of the background of each white pixel in image Ib. If the difference between these two gradients is greater than 30, for example, this white pixel is a vessel pixel; otherwise it is a lesion pixel. This is shown in figures V.5a-d. The result is shown in image Ic in figure V.5e, where the white pixels are the potential candidates for the lesion pixels. Because the main purpose of this part is to obtain the candidate pixels of the lesions, speed is one of the major concerns. In the third detection algorithm 44, we will introduce another scheme for differentiating the vessel and lesion pixels, in which matched filters are used. However, that scheme takes much more time, as accuracy is its only concern for the lesion detection.
VI. Third Detection Algorithm 44
Lesions of small sizes, such as microaneurysms and dot and blot hemorrhages, may not be caught by the spider net. A different scheme for detecting lesions of small sizes is needed. The following scheme can detect lesions having a longest cross-sectional axis greater than about 20 microns, and in some cases it will detect a lesion having a longest cross-sectional axis less than about 20 microns. The scheme is described by the following steps:
1. Automatic determination of the global thresholds
The following is indicative of the portion of the computer program bearing the designation <gaussn.c> which was submitted in the above-referenced provisional patent application upon which this patent application depends for priority.
As shown in figure IV.2b, the histogram of a normalized image can be approximated by a Gaussian curve. Further observation of this normalized image shows that after a certain threshold, the number of background pixels increases dramatically. Since HMAs have lower reflectance than their surrounding background, this threshold should be able to separate the background pixels from the HMA pixels and other objects (vessels). To determine this threshold value, we calculate the first derivatives of the left part of this curve, and the maximum derivative will indicate the point where the background pixels present a dramatic increase. The green value in the histogram corresponding to this maximum derivative will be the global threshold T'. Figure IV.2c shows the result of the global thresholding by value T'. As we can see, the features and the background in the image are clearly separated. For some cases, T' is not an absolutely safe threshold in terms of extracting all the HMAs in the images. We need to add an offset value to it. After experiments, we choose this offset to be 10. Thus, a safer threshold, T = T' + 10, is used. However, in particular applications, the offset may be other than 10 so long as the results discussed herein are achieved.
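The automatic threshold selection might be sketched as follows: find the histogram peak, take the bin with the largest first derivative on the left side of the curve as T', and add the offset of 10. Scanning only up to the peak is an assumption about what "the left part of this curve" means.

```c
#include <assert.h>

/* Automatic global threshold: locate the histogram peak, find the bin
 * with the largest first derivative to the left of the peak (where the
 * background pixel count rises most sharply), and add the empirically
 * chosen offset of 10: T = T' + 10. */
static int auto_threshold(const long hist[256])
{
    int peak = 0, tbest = 0;
    long dmax = 0;

    for (int i = 1; i < 256; i++)       /* locate the histogram peak */
        if (hist[i] > hist[peak])
            peak = i;
    for (int i = 1; i <= peak; i++) {   /* max slope left of the peak */
        long d = hist[i] - hist[i - 1];
        if (d > dmax) {
            dmax = d;
            tbest = i;
        }
    }
    return tbest + 10;                  /* safer threshold T = T' + 10 */
}
```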
2. Image segmentation
The following is indicative of the portion of the computer program bearing the designation <tbrhld.c> which was submitted in the above-referenced provisional patent application upon which this patent application depends for priority.
The study of a retinal image will be more easily understandable if we look at the image as a picture of the earth. Objects in the image such as HMAs and vessels are like islands and continents, and the background is like the ocean. Millions of years ago the earth was a huge ocean with little land. People could not see what was under the water. With the lowering of the sea level, people started to see the first islands. Some of the islands later became big enough to form continents, while others remained islands.
For our research, we use the same idea to extract features from the retinal images. We use several linearly increasing global threshold values to segment the image into binary images, with different levels of extraction of the image objects. Over the course of the thresholdings, objects can be recognized because some of them retain the same shapes (HMAs) while the others change from isolated spots to linear features (vessels). The shapes of the vessels can be recognized at certain stages during the thresholdings. Considering both speed and accuracy, after experiments, we choose the number of thresholds to be n = 7. However, in particular applications, the value of n may be other than 7 so long as the results discussed herein are achieved. The starting threshold value T1 is adaptive, and the ending threshold will be T7 = T = T' + 10. To determine T1, we start from 25 on the green value axis of the histogram, and trace the number of pixels N(i), where i = 25, 26, ..., T7. If N(i) >= 10, then T1 = i. The rest of the threshold values can be determined by Tk = T1 + (k-1) x (T7 - T1)/(n-1), where k = 1, 2, ..., 7. The results of the thresholdings using these 7 thresholds are shown in figures VI.2a-g.
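The seven linearly spaced thresholds Tk = T1 + (k-1)(T7 - T1)/(n-1) can be computed directly; a minimal sketch with n = 7 (integer division assumed, as the text does not specify rounding):

```c
#include <assert.h>

/* The seven linearly spaced segmentation thresholds:
 * Tk = T1 + (k - 1) * (T7 - T1) / (n - 1), with n = 7, so that
 * t[0] = T1 and t[6] = T7. */
static void threshold_ladder(int t1, int t7, int t[7])
{
    for (int k = 1; k <= 7; k++)
        t[k - 1] = t1 + (k - 1) * (t7 - t1) / 6;
}
```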
3. Detection of HMAs using binary matched filters
The following is indicative of the portion of the computer program bearing the designation <image1.c, image2.c, image3.c, image4.c, image5.c, image6.c, image7.c> which was submitted in the above-referenced provisional patent application upon which this patent application depends for priority.
The concept of a matched filter is to use an optimal filter to maximize the output signal-to-noise ratio. For our research, the signal would be either HMAs or vessels. According to the discussion in Chaudhuri S. et al., "Detection of blood vessels in retinal images using two-dimensional matched filters," IEEE Trans. on Medical Imaging, vol. 8, no. 3, pp. 263-269, September 1989, and the properties of the vessels, which are piecewise linear segments with intensity profiles approximated by Gaussian curves (Figure VI.3a), we construct 12 two-dimensional matched filters to match the shapes of the vessels in both their profiles and their linear segments. These matched filters also need to match the orientations of the vessels, which can vary anywhere from 0 to π. Their peak response occurs when they match the angles of the vessels. As mentioned in Chaudhuri S. et al., the optimal size of the matched filters is 15 x 15, and their angular resolution is 180°/12 = 15°. Since the images of figures VI.2 are binary, the matched filters need to be binary as well. Examples of the binary matched filters are shown in figure VI.3b.
To recognize the vessel pixels, we start from the binary image of figure VI.2a. First we use the same method as in step 4 of section V to delete the background pixels from among the white pixels in the image, then convolve the 12 matched filters with each of the non-background pixels, obtaining 12 convolution values which represent the responses of the matched filters to the vessels at different angles. Since the matched filters optimize the responses of the vessel pixels and suppress those of HMA pixels, certain thresholds can be determined to separate the vessel pixels from the HMA pixels. We use two criteria to determine the thresholds: the standard deviation σ of these 12 convolution values, and Cmax - Cmin, where Cmax is the maximum response of the filters and Cmin is the minimum response. The following table shows the responses of the matched filters to the vessels and HMAs (the numbers in braces are the responses of the matched filter applied to the example vessel and HMA in figures V.5a and V.5c). Because of the shape difference, Cmax and Cmin for vessel pixels present dramatic changes, while the changes for HMAs are minor. Thus, a higher σ for vessel pixels than for HMA pixels is expected. After experiments, we choose CT = Cmax - Cmin = 2000 and σT = 800 to be the thresholds. Pixels with Cmax - Cmin >= CT and σ >= σT are recognized as vessel pixels; otherwise they are HMA pixels. The detected HMA pixels are written as white pixels in figure VI.3c.a. The same procedure is repeated for the binary images in figures VI.2b-g, and their detected HMA pixels are written as white pixels in the corresponding images in figures VI.3c.b-g.
Figure imgf000040_0001
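The vessel/HMA separation rule can be sketched directly from the two criteria named in the text, using the thresholds CT = 2000 and σT = 800 given there (the 12 response values themselves would come from convolving the filter bank at the pixel in question):

```python
def classify_pixel(responses, c_t=2000, sigma_t=800):
    """Classify a pixel from its 12 matched-filter responses.
    A vessel pixel responds strongly at one orientation only, so
    both the range (Cmax - Cmin) and the standard deviation of
    its responses are large; an HMA pixel responds similarly at
    all orientations. Thresholds follow the values in the text."""
    n = len(responses)
    mean = sum(responses) / n
    sigma = (sum((r - mean) ** 2 for r in responses) / n) ** 0.5
    spread = max(responses) - min(responses)
    return "vessel" if spread >= c_t and sigma >= sigma_t else "HMA"
```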
4. HMA voting and HMA recognition rate

Some of the vessel pixels are recognized as HMA pixels at the early stages of the thresholdings because the thresholds are too low for the vessel segments to appear as lines. Thus the images in figures VI.3c.a-g contain some of the vessel pixels. However, once the vessels' line shape appears at the later stages, they are recognized immediately. HMAs, on the other hand, undergo only minor changes in shape and are recognized during most of the thresholding stages. Thus, to determine the candidate pixels for HMAs from these 7 images, we need a voting system based on the recognition rate. We define the recognition rate to be r = n/7, where n is the number of times that a pixel appears as an HMA pixel in the 7 images. According to the discussion above, higher recognition rates are expected for HMA pixels than for vessel pixels. After experiments, we use the numbers in the following table to determine the candidate pixels for HMAs. Pixels voted as "questionable" or "yes" are written as white pixels in the image in figure VI.4. We refer to this image as image Id.
Figure imgf000041_0001
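The voting mechanism can be sketched as below. Note that the actual vote cut-offs are given only in the table above (reproduced in the published application as an image); the boundaries used here are placeholder assumptions chosen solely to illustrate how a recognition rate r = n/7 maps to a "yes" / "questionable" / "no" vote.

```python
def vote_hma(appearances, total=7):
    """Vote on a candidate pixel from its recognition rate
    r = appearances/total, where appearances counts the
    thresholded images in which it was recognized as an HMA
    pixel. The cut-offs (5/7 and 3/7) are illustrative
    assumptions, not the patent's actual table values."""
    r = appearances / total
    if r >= 5 / 7:
        return "yes"
    if r >= 3 / 7:
        return "questionable"
    return "no"
```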
VII. Image OR Operation
Comparisons of the detection of HMAs using the second detection algorithm 40 and the third detection algorithm 44 (discussed above in sections V and VI, respectively) are shown in the following table:
Figure imgf000042_0001
This part of the computer program 24 combines the results obtained from the second detection algorithm 40 (discussed above in section V) and the third detection algorithm 44 (discussed above in section VI) to include all the candidate pixels of HMAs in one image. This can be done by ORing images Ic and Id; that is, all the white pixels in both Ic and Id are written into the same image (figure VII.1). We refer to this image as Ig.
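For binary images represented as nested lists of 0/1 values, the OR operation above reduces to a pixel-wise logical OR:

```python
def or_images(ic, id_):
    """Pixel-wise OR of two equal-sized binary images: a pixel
    is white (1) in the combined image Ig if it is white in
    either Ic or Id."""
    return [[a | b for a, b in zip(row_c, row_d)]
            for row_c, row_d in zip(ic, id_)]
```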
VIII. Deletion of the False Detection
The following is indicative of the portion of the computer program bearing the designation <final.c, border.c, reds.c, xyvesll.c, xyvesl2.c, hmaf.c> which was submitted in the above-referenced provisional patent application upon which this patent application depends for priority.
There are still four types of false detection in image Ig, which is shown in figure VII.1. The four types are listed in the following table:
Figure imgf000044_0001
To delete the first, second and fourth types of false detection, we apply another scheme, a line detector, to each of the white pixels in image Ig. This method is graphically explained in figures VIII.1a and VIII.1b. We open 2 windows with sizes of 21 x 21 and 15 x 15, both centered at the white pixel P0. Along each side of the edges of the windows we search for the non-background pixels (P1, P2, P3, P4, using the same method as in section V.4) with the lowest green values. If |θ01 - θ02| < 20°, or |θ03 - θ04| < 20°, P0 is recognized as a vessel pixel; otherwise it is an HMA pixel. Also, if no or only one non-background pixel is found along each side of the windows, P0 is considered to be an HMA pixel.
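The collinearity test at the heart of this line detector can be sketched as follows. The pairing of P1/P2 with the 21 x 21 window and P3/P4 with the 15 x 15 window, and the comparison of line orientations modulo 180° (so that two darkest pixels on opposite sides of P0 register as collinear), are interpretive assumptions about figures VIII.1a-b:

```python
import math

def orientation_deg(p0, p):
    """Orientation (0..180 deg) of the line from p0 to p."""
    return math.degrees(math.atan2(p[1] - p0[1], p[0] - p0[0])) % 180.0

def is_vessel(p0, p1, p2, p3, p4, tol=20.0):
    """Line-detector test: p1/p2 are the darkest non-background
    pixels found on the edges of the larger window, p3/p4 on the
    smaller one. p0 lies on a vessel if either pair is roughly
    collinear through p0 (orientation difference below tol)."""
    d1 = abs(orientation_deg(p0, p1) - orientation_deg(p0, p2))
    d2 = abs(orientation_deg(p0, p3) - orientation_deg(p0, p4))
    d1 = min(d1, 180.0 - d1)  # wrap-around for orientations
    d2 = min(d2, 180.0 - d2)
    return d1 < tol or d2 < tol
```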
To delete the third type of false detection, we notice that in the original color image 12, the red components Gr of the red strips are higher than those Gb of their surrounding background, while the red components of HMAs are always lower than those of their surrounding background. So it is easy to differentiate the red strips from HMAs by checking their red components in the original color image 12. That is, if Gb - Gr >= 0, the pixel under investigation is an HMA pixel; otherwise it is a red strip pixel.
Figure VIII.2 shows the image that contains all the candidate pixels of HMAs, with all the false detection deleted. For our research the diabetic retinopathy is classified into three categories: Y category (diabetic retinopathy), Q category (questionable diabetic retinopathy), and N category (no diabetic retinopathy). A retinal image is determined to be a Y case when at least one sure HMA is detected. 20 μm is the smallest size (in diameter) of a microaneurysm that can be viewed by a human expert, and it is equivalent to 1-2 pixels for the computer system 10 given the image 12 having a size of 512 x 512 pixels. Thus, if the size of one of the white pixel groups in image VIII.2 is greater than 4 pixels in area, the retinal image is considered to have at least one sure HMA, which indicates that the patient has diabetic retinopathy. The diagnosis then continues with the exudate detection. Otherwise, if the sizes of all the white pixel groups are smaller than 4 pixels, meaning no sure HMAs are detected, we proceed with the detection of the questionable HMAs. Finally, if no white pixel is present in image VIII.2, the retinal image is an N case, i.e. no diabetic retinopathy. Of course, if the resolution of the image 12 is changed from 512 x 512 pixels to some other resolution, such as 1024 x 1024, for example, then the number of pixels set forth above may also be adjusted.
IX. Fourth Detection Algorithm 48
The fourth detection algorithm 48 detects and identifies imagery alike abnormalities in the image 12. Since the example we use in this patent application is diabetic retinopathy, we will explain and discuss the detection of imagery alike abnormalities such as exudates first. The detection of imagery alike abnormalities such as questionable HMAs will be covered in section X. The following table shows the properties of hard and soft exudates, compared with those of HMAs:
Figure imgf000047_0001
From the table we can see that the most distinguishing differences between HMAs and exudates are the intensity and color. Some of the methods for HMA detection, like background checking, global thresholdings, etc., can be used here for exudate detection, except that we change the term from "lower than a certain threshold" to "higher than a certain threshold". From the table we also see that the most distinguishing differences between HEs and SEs are their color, size, and neighbors. Thus, after we detect the exudates, HEs and SEs can be differentiated based on these three properties. The algorithm for exudate detection consists of the following parts:
1. Image preprocessing
The following is indicative of the portion of the computer program bearing the designation <nor.c> which was submitted in the above- referenced provisional patent application upon which this patent application depends for priority.
Like the HMA detection, we need a preprocessing scheme to enhance the image and gain better detection results. From the histograms in figures IV.1b and IV.2b, the image enhancement for HMA detection essentially raises the brightness level of the image. This will, on some occasions, decrease the local contrasts between the exudates and their surrounding backgrounds, and will eventually lower the accuracy of the exudate detection. Thus, a new enhancement scheme for exudate detection is needed.
Based on the color definitions of the image and the color property of the exudates (yellowish and whitish), the red and green components of an exudate should both be high. To enhance this feature, we use the formula Gn = (R0 x G0) / (R0 - G0), where Gn is the new green value of a pixel in the enhanced image, and R0, G0 are the red and green values of the pixel in the original image. Here we need to make sure that R0 > G0. The result of the image enhancement and its histogram are shown in figures IX.1a and IX.1b respectively. From the image we can see that the exudates are highly enhanced in terms of their increased local contrasts. Blurry soft exudates in the original image can be clearly viewed by the human eye after the image enhancement.
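The per-pixel enhancement formula can be sketched as below. The handling of the R0 <= G0 case (returning the green value unchanged) is an assumption; the text requires R0 > G0 but does not say how the guard is implemented:

```python
def enhance_green(r0, g0):
    """Exudate enhancement Gn = (R0 * G0) / (R0 - G0).
    When the pixel is brightly yellowish/whitish (both R0 and G0
    high and close together), the denominator is small and Gn is
    boosted. Pixels with R0 <= G0 are left unchanged here, an
    assumed guard against division by zero or negative values."""
    if r0 <= g0:
        return g0
    return (r0 * g0) / (r0 - g0)
```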
2. Image segmentation
The following is indicative of the portion of the computer program bearing the designation <thrhldl.c> which was submitted in the above- referenced provisional patent application upon which this patent application depends for priority.
Similar to the HMA detection, we use a global thresholding technique to first extract those objects with green values higher than a certain threshold. But instead of finding the threshold value on the left side of the histogram, we calculate the threshold on the right side of the histogram, based on the same Gaussian characteristic of the histogram. The green value T1 corresponding to the maximum first derivative of the right side of the histogram, plus a certain offset value T0, will be the threshold T for the global thresholding. Thus, T = T1 + T0. After experiments, we chose T0 = -10. However, in particular applications T0 may be varied from -10 so long as the purposes set forth herein are achieved. This negative offset is a safeguard for including all the potential exudates for our future detection. The result of the global thresholding is shown in figure IX.2. Pixels with green values greater than T are written as white pixels.
3. Deletion of the background
The following is indicative of the portion of the computer program bearing the designation <hexd.c> which was submitted in the above- referenced provisional patent application upon which this patent application depends for priority.
As we can see from figure IX.2, unlike the image normalization for HMA detection, this image includes a large amount of background information. This is understandable. Image normalization for HMA detection is targeted at unifying the illumination and removing noise, while image enhancement for exudate detection is targeted at enhancing the red and green components of the image. The red and green components of some of the backgrounds, especially those in the bright area of the retina (the center area of the retina and the area close to the optic disk), are also enhanced during this preprocessing. Hence, those backgrounds are most likely to be included after the thresholding.
These backgrounds can be deleted by using the same method as in section V.4. We locally calculate the difference between the green value G0 of each white pixel (P0) in figure IX.2 and the average green value Ga of its surrounding background. If G0 - Ga ≥ 20, P0 is a non-background pixel and is written as a blue pixel in the image in figure IX.3. Otherwise, P0 is a background pixel and will be deleted. Furthermore, if G0 - Ga > 60, P0 is a sure exudate pixel and will be written as a blue pixel in image Ik in figure IX.6.
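This three-way decision can be sketched as follows. Note one assumption: the local contrast is taken here as pixel minus background (G0 - Ga), since per section IX exudates are brighter than their surroundings, so the thresholds apply in the "higher than" direction:

```python
def label_pixel(g0, ga):
    """Local-contrast test for a thresholded white pixel with
    green value g0 against the average green value ga of its
    surrounding background. Contrast is taken as g0 - ga on the
    assumption that exudates are brighter than their background."""
    diff = g0 - ga
    if diff > 60:
        return "sure exudate"
    if diff >= 20:
        return "non-background"
    return "background"
```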
4. Exudate detection using global green/red ratios
The following is indicative of the portion of the computer program bearing the designation <hexda.c> which was submitted in the above- referenced provisional patent application upon which this patent application depends for priority.
Because the green/red ratios of exudates are higher than those of their surrounding backgrounds, we obtain the green/red ratios Ri of all the blue pixels Pi in image IX.3, and calculate their mean value Rm. This is shown in the chart in figure IX.4a. It is noticed that there are several peaks in this chart whose values are all at least 0.05 greater than the mean value. Experiments show that these peaks actually represent the existence of exudates in the image, especially soft exudates, because they have the highest green/red ratios. Thus, if Ri - Rm ≥ 0.05, Pi is an exudate pixel, and it is written as a blue pixel in the image shown in figure IX.4b.

5. Exudate detection using local green/red ratios
The following is indicative of the portion of the computer program bearing the designation <hexdb.c> which was submitted in the above- referenced provisional patent application upon which this patent application depends for priority.
Based on the color property of the exudates, we need to check their local green/red ratios as well. This can be done by opening a local window of size 15 x 15 centered at each blue pixel P0 in image IX.3, and calculating the green/red ratio R0 of this center pixel and the average background green/red ratio Ra in this window. If the value R0 - Ra is greater than a certain threshold RT, P0 is recognized as an exudate pixel; otherwise it is a background pixel and will be deleted. This process is illustrated in figure IX.5a. After experiments, we choose RT = 0.07. All the exudate pixels are written as blue pixels in the image in figure IX.5b. This image actually shows the regions of the exudates rather than just the seed pixels, because in this part we apply the region growing method to obtain the exudate regions right after the exudate detection. The region growing will be discussed in section X.
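The local ratio test can be sketched as below, with the threshold RT = 0.07 from the text. The representation of pixels as (red, green) pairs and of the window's background as a flat list of such pairs is an illustrative assumption:

```python
def green_red_ratio(pixel):
    """Green/red ratio of a pixel given as a (red, green) pair."""
    r, g = pixel
    return g / r

def is_exudate_local(center, background_pixels, r_t=0.07):
    """Local green/red-ratio test in a window around a candidate
    pixel: the center is an exudate pixel if its ratio exceeds the
    window's average background ratio by more than r_t."""
    r0 = green_red_ratio(center)
    ra = sum(green_red_ratio(p) for p in background_pixels) / len(background_pixels)
    return r0 - ra > r_t
```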
6. Image OR operation
The comparison of exudate detection using global and local green/red ratios is listed in the following table. To include all of the detected exudates, we simply combine the results obtained by these two methods by ORing the images in figures IX.4b and IX.5b. The result is shown in figure IX.6.
Figure imgf000053_0001
X. Single-Linkage Region Growing with Adaptive Thresholds
The following is indicative of the portion of the computer program bearing the designation <region.c> which was submitted in the above- referenced provisional patent application upon which this patent application depends for priority.
If there are HMAs or exudates detected, we need to obtain their regions, which reflect their actual sizes and shapes. A region growing technique is used here. It is a procedure that groups pixels into a region, or subregions into larger regions. Single-linkage region growing is the simplest approach to this technique, where we start with a set of seed pixels. For each seed pixel, we try to expand it into a region by including its neighboring pixels that have similar properties, such as intensity, color, etc. To obtain the regions of HMAs, we use the candidate pixels of HMAs (figure VIII.2) as the seed pixels. To determine the threshold GT that includes the neighboring pixels of a seed pixel into an HMA region, we calculate the local contrast between the seed pixel and its surrounding background, where Gs is the green value of the seed pixel and Gb is the average green value of the background. If a neighboring pixel's local contrast with respect to the seed pixel, Gp - Gs, is lower than GT = Gb - Gs, that is, Gp - Gs < Gb - Gs, or Gp < Gb, we include it into the HMA region corresponding to the seed pixel. Figures X.1a and X.1b graphically explain how this procedure works.
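The single-linkage growth from one HMA seed can be sketched as follows (an illustrative reconstruction, not the <region.c> program referenced above; 4-connectivity and a flood-fill traversal are assumptions):

```python
def grow_region(image, seed, gb):
    """Single-linkage region growing from one HMA seed pixel:
    4-connected neighbors whose green value Gp is below the local
    background average gb (i.e. Gp < Gb, the condition derived in
    the text) are absorbed into the region. `image` is a 2-D list
    of green values, `seed` an (x, y) pair."""
    h, w = len(image), len(image[0])
    region, stack = set(), [seed]
    while stack:
        x, y = stack.pop()
        if (x, y) in region or not (0 <= x < w and 0 <= y < h):
            continue
        if image[y][x] < gb:
            region.add((x, y))
            stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return region
```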
Similarly, to obtain the regions of exudates, if neighboring pixels have green values greater than those of the surrounding backgrounds of the exudate seed pixels (the blue pixels in figure IX.6), that is, Gp > Gb, we include them into the exudate regions. Figure X.2 shows the result of the region growing. These are the final results of our HMA and exudate detection.
XI. Final Diagnosis Report
The following is indicative of the portion of the computer program bearing the designation <result.c> which was submitted in the above- referenced provisional patent application upon which this patent application depends for priority. The diagnosis report prepared by the report algorithm 52 desirably includes the following results:
1. whether the image 12 indicates that the patient has diabetic retinopathy.
2. if the image 12 indicates that the patient has diabetic retinopathy, obtain the numbers, locations and sizes of the HMAs and exudates (if there are any). Preferably, we use two coordinate systems to indicate the locations of the lesions. One is the normal coordinate system, which is shown in Figure 1.3 and which originates at the upper left corner of the image 12; the other is the polar coordinate system, which originates at the center of the macula. The second coordinate system is used to indicate the severity of the diabetic retinopathy.
3. differentiate the hard and soft exudates after we obtain their locations and sizes. According to their properties in section IX, we experimentally determine that if the size of an exudate is smaller than 40 pixels, or there are at least 3 exudates found in its 40 x 40 pixel neighborhood, this exudate is a hard exudate. Otherwise it is a soft exudate. The total numbers of hard and soft exudates are also included in the report.

4. determine whether this retinal image has MD (macular degeneration). If there is at least one exudate found in the 20 x 20 pixel neighborhood of the macula, this retina has MD. Otherwise it does not have MD.
5. if the quality of the image 12 is marginally gradable, the report will indicate that the results may contain false detection due to the quality of the image 12.
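The hard/soft and MD rules from items 3 and 4 above reduce to simple predicates. This sketch assumes the size and neighbor counts have already been computed from the detected exudate regions:

```python
def classify_exudate(size_px, neighbors_in_40x40):
    """Report rule: an exudate smaller than 40 pixels, or one with
    at least 3 other exudates in its 40x40 neighborhood, is hard;
    otherwise it is soft."""
    if size_px < 40 or neighbors_in_40x40 >= 3:
        return "hard"
    return "soft"

def has_md(exudates_near_macula):
    """Report rule: MD is indicated when at least one exudate lies
    in the 20x20 neighborhood of the macula."""
    return exudates_near_macula >= 1
```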
Shown in figures XI.1a-d is a typical computer diagnosis report for the image 12 which was produced in accordance with the present invention.
The report algorithm 52 applies the following rules in counting the lesions in the image 12. Shown in Figure XI.2a is a diagram of two lesions 100 and 102 which are spaced a distance "d" apart. The report algorithm 52 would count the lesions 100 and 102 as two lesions because the lesion 100 is isolated from the lesion 102. In other words, the edges of the lesions 100 and 102 are not connected.
Shown in Figure XI.2b is a diagram of two lesions 104 and 106 which would be counted by the report algorithm 52 as a single lesion because the lesions 104 and 106 are connected to or at least partially overlap each other, i.e. there is no distinguishing separation between the edges of the lesions 104 and 106. Shown in Figure XI.2c is a diagram of two lesions 108 and 110 which would be counted by the report algorithm 52 as a single lesion. In this example, the lesion 110 has a longest cross-sectional axis D2, which is much larger than the longest cross-sectional axis D1 of the lesion 108. The lesion 110 also has an area A2 which is much larger than the area A1 of the lesion 108. The lesions 108 and 110 are separated by a closest distance D3. If the closest distance D3 is less than D1, and D2/D1 > approximately 10 or A2/A1 > approximately 25, then the two lesions 108 and 110 will be counted by the report algorithm 52 as a single lesion.
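The Figure XI.2c counting rule can be sketched as a predicate over the measured quantities (the axes, areas, and closest distance are assumed to be measured beforehand from the lesion regions):

```python
def count_as_one(d1, d2, d3, a1, a2):
    """Counting rule for two disjoint lesions with longest
    cross-sectional axes d1 < d2, areas a1 < a2, separated by
    closest distance d3: they are counted as a single lesion when
    the small one sits very close to a much larger one."""
    return d3 < d1 and (d2 / d1 > 10 or a2 / a1 > 25)
```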
XII. Time Series Analysis of Early Lesions (HMA, HE, and CWS)
In addition to detecting early lesions, HMA, HE, and CWS, the method described above can be used in time series analysis of retinal images of the same eye taken at different times. In the following, the procedure for two images of the same eye taken at two different times is described. The same procedure can be extended to multiple images taken at different times.
Step 1. Determine if the quality of the two images is good enough for computer processing.
Step 2. Each image is processed by the computer system 10 to detect the lesions, HMA, HE, and CWS. See Figures XII.1 and XII.2.

Step 3. Select three pairs of substantially identical reference points from the two images, three from each image. These reference points should be selected in such a way that (1) they are obvious landmarks of the images, (2) they are present in both images, and (3) they are as far apart as possible.
Step 4. Overlay one image on the other by overlapping the vessels, particularly the major vessels (Figure XII.3) using the three pairs of substantially identical reference points from the two images.
Step 5. Determine the common retina area of the two images.
Step 6. Identify and quantitate changes in lesions in the common area. These changes desirably include:
(1) Lesions disappeared
(2) New lesions found
(3) Lesions enlarged
(4) Lesions shrunk
(5) No changes
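Step 4's overlay from three pairs of reference points can be sketched as fitting a 2-D affine transform, which is one standard way to register two images from exactly three point correspondences; the patent itself describes the overlay only as overlapping the vessels, so the affine formulation is an assumption:

```python
def affine_from_3_points(src, dst):
    """Fit the 2-D affine transform mapping three reference points
    in one retinal image onto their counterparts in the other.
    src/dst are lists of three (x, y) pairs; returns a function
    that maps any point of the first image into the second."""
    def solve3(a, b):
        # Gaussian elimination with partial pivoting on a 3x3 system.
        m = [row[:] + [v] for row, v in zip(a, b)]
        for i in range(3):
            p = max(range(i, 3), key=lambda r: abs(m[r][i]))
            m[i], m[p] = m[p], m[i]
            for r in range(3):
                if r != i:
                    f = m[r][i] / m[i][i]
                    m[r] = [mr - f * mi for mr, mi in zip(m[r], m[i])]
        return [m[i][3] / m[i][i] for i in range(3)]

    a = [[x, y, 1.0] for x, y in src]
    coef_x = solve3(a, [x for x, _ in dst])
    coef_y = solve3(a, [y for _, y in dst])

    def transform(p):
        x, y = p
        return (coef_x[0] * x + coef_x[1] * y + coef_x[2],
                coef_y[0] * x + coef_y[1] * y + coef_y[2])
    return transform
```

Once fitted, the transform is applied to every lesion location of the first image so that disappeared, new, enlarged, and shrunken lesions can be compared within the common retina area.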
Referring now to Figures XII.4a-b, shown therein is an example of a report produced in accordance with the present invention. The foregoing example has been set forth to illustrate an example of the computer program 24 which can be employed in carrying out the practice of the present invention. However, it should be understood that changes may be made in the computer program, or other computer programs may be written, which can be employed in the practice of the present invention. Thus, it is intended that the present invention not be limited to the example set forth above. Set forth hereinafter in Figures XIIIA-XIIID are examples of a computer system in flow chart form which is constructed in accordance with the present invention. This system may be used in various fields to detect abnormalities; these fields include, but are not limited to, the fields of material production, bodily analysis, and repair analysis.
Changes may be made in the steps or the sequence of steps of the methods described herein without departing from the spirit and the scope of the invention as defined in the following claims.

Claims

What is claimed is:
1. A computer system for analyzing an image and detecting signs of abnormalities in the image, the computer system comprising: means to analyze the image so as to automatically detect and identify at least one of the type, location and size of at least one of an abnormality having a longest cross-sectional axis less than about 200 microns, a vague abnormality, imagery alike abnormalities and combinations thereof in the image.
2. The system of claim 1 , wherein the image is a retinal image.
3. The system of claim 2, wherein the small abnormality, the vague abnormality and the imagery alike abnormalities are selected from a group of abnormalities comprising flame hemorrhages, blot hemorrhages, dot hemorrhages, microaneurysms, hard exudates and soft exudates.
4. The system of claim 1 , wherein the image is an X-ray image.
5. The system of claim 1 , wherein the image is a CAT scan image.
6. The system of claim 1 , wherein the image is an MRI image.
7. The system of claim 1 , wherein the image is received in real-time.
8. The system of claim 1 , wherein the system analyzes the entire image.
9. The system of claim 1 , wherein the system will present as output a report including the size and location of each abnormality detected.
10. The system of claim 1 , wherein the image is a color image.
11. The system of claim 1 , wherein the imagery alike abnormalities are a hard exudate and a soft exudate.
12. The system of claim 1 , wherein the system includes a plurality of matched filters.
13. A computer system for analyzing an image and detecting early signs of abnormalities in the image, the system quantitatively analyzing the image so as to automatically detect and identify a small abnormality, a vague abnormality, imagery alike abnormalities and combinations thereof in the image.
14. A method for analyzing an image, the method comprising the steps of: quantitatively measuring the quality of the image; and assigning a quality measure to the image based on the quantitative measuring of the image.
15. The method of claim 14, wherein the step of quantitatively measuring the quality of the image comprises the steps of: creating at least one intensity histogram for the image; and comparing the intensity histogram for the image with a plurality of histogram templates of good quality images.
16. The method of claim 15, wherein the step of quantitatively measuring the quality of the image further comprises the step of assigning a gradable quality measure to the image where the intensity histogram for the image substantially matches at least one of the plurality of histogram templates of good quality images.
17. The method of claim 14, wherein the step of quantitatively measuring the quality of the image comprises the steps of: determining whether the image is at least one of a dark image, a bright image, a distorted color image, a blurry image, a light reflection image, and a blurry slide image; and wherein the step of assigning a quality measure to the image comprises the step of: assigning an ungradable quality measure to the image where the image is determined to be at least one of a dark image, a bright image, a distorted color image, a blurry image, a light reflection image, and a blurry slide image.
18. The method of claim 17, wherein the step of quantitatively measuring the quality of the image comprises the steps of: creating at least one intensity histogram for the image; comparing the intensity histogram for the image with a plurality of histogram templates of good quality images; assigning a marginally gradable quality measure to the image where the image is not determined to be at least one of a dark image, a bright image, a distorted color image, a blurry image, a light reflection image, and a blurry slide image, and where the intensity histogram of the image does not substantially match the general profiles of at least one of the plurality of histogram templates of good quality images.
19. A method for improving the quality of an image captured via an image recording apparatus, comprising the steps of: capturing a first image of the object via the image recording apparatus; quantitatively and immediately measuring the quality of the first image; assigning a poor quality measure to the first image based on the quantitative measuring of the image; and capturing a second image of the object in response to the assignment of the poor quality measure to the first image.
20. A method for normalizing an image having a red component, the method comprising the step of:
removing the red component from the image while simultaneously normalizing the image, reducing noise and enhancing the image quality.
21. The method of claim 20, wherein the step of removing comprises the steps of: defining an illumination pattern of the image to be i(x, y) = g(x, y)/r(x, y), where (x, y) are the coordinates of the image, and g(x, y) and r(x, y) are the green and red values of the pixel (x, y), respectively; and obtaining a normalized green image gn(x, y) = k x i(x, y) = k x g(x, y)/r(x, y), where k is a constant.
22. A method for detecting abnormalities in an image, the method comprising the steps of: overlaying a spider net having an origination, an angular resolution and a radius resolution onto the image to create an image In, the origination of the spider net being located within the image In; thresholding the image into a binary image using threshold values T1 and T2 such that pixels having a value between T1 and T2 are extracted from the image, the extracted pixels forming an image Im; ANDing the images Im and In to produce an image Ia containing the abnormalities, background pixels and noise pixels; and deleting the background pixels and noise pixels in the image Ia to produce an image Ib containing the abnormalities.
23. The method of claim 22, wherein the image is a retinal image and the origination of the spider net is positioned at the center of the macula.
24. A method for locating the macula and the optic disk in a retinal image, the method comprising the steps of: locating the highest average intensity area in the retinal image so as to locate the optic disk; and convoluting the pixels in a selected area within a predetermined distance from the optic disk with a disk-shaped binary matched filter, whereby the lowest convolution value obtained from the convolution indicates the position of the center of the macula.
25. A method for conducting a time series analysis on two images taken of the same object at different times, the method comprising the steps of: processing each image to detect an abnormality contained in the images; selecting three reference points in each image such that the three reference points on each image correspond to substantially identical landmarks in each image; overlaying one image onto the other image based on the three reference points in each image; and identifying the abnormality in each image and quantitating changes in the abnormality.
26. The method of claim 25, wherein the images are retinal images depicting blood vessels, and the step of overlaying one image onto the other image is defined further as overlaying one image onto the other image by overlapping blood vessels using the three reference points in each image.
27. The method of claim 26, wherein the method further comprises the step of determining the common retina area of the two images.
28. The method of claim 25, wherein the method further comprises the step of providing a report identifying the abnormalities in each image and the quantitative changes which were identified.
PCT/US2000/004736 1999-02-23 2000-02-23 Computer system for analyzing images and detecting early signs of abnormalities WO2000051080A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU32438/00A AU3243800A (en) 1999-02-23 2000-02-24 Computer system for analyzing images and detecting early signs of abnormalities

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12151299P 1999-02-23 1999-02-23
US60/121,512 1999-02-23

Publications (2)

Publication Number Publication Date
WO2000051080A1 true WO2000051080A1 (en) 2000-08-31
WO2000051080A9 WO2000051080A9 (en) 2001-11-29

Family

ID=22397170

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/004736 WO2000051080A1 (en) 1999-02-23 2000-02-23 Computer system for analyzing images and detecting early signs of abnormalities

Country Status (2)

Country Link
AU (1) AU3243800A (en)
WO (1) WO2000051080A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090259960A1 (en) * 2008-04-09 2009-10-15 Wolfgang Steinle Image-based controlling method for medical apparatuses
US7672491B2 (en) 2004-03-23 2010-03-02 Siemens Medical Solutions Usa, Inc. Systems and methods providing automated decision support and medical imaging
GB2467840A (en) * 2009-02-12 2010-08-18 Univ Aberdeen Detecting disease in retinal images
US7856135B1 (en) 2009-12-02 2010-12-21 Aibili—Association for Innovation and Biomedical Research on Light and Image System for analyzing ocular fundus images
US8041091B2 (en) 2009-12-02 2011-10-18 Critical Health, Sa Methods and systems for detection of retinal changes
EP2821007A4 (en) * 2012-02-29 2016-04-13 Univ Kyoto Fundus oculi observation device and fundus oculi image analysis device
US9384416B1 (en) 2014-02-20 2016-07-05 University Of South Florida Quantitative image analysis applied to the grading of vitreous haze
WO2016132115A1 (en) * 2015-02-16 2016-08-25 University Of Surrey Detection of microaneurysms
CN110786824A (en) * 2019-12-02 2020-02-14 中山大学 Coarse marking fundus oculi illumination bleeding lesion detection method and system based on bounding box correction network
CN112862797A (en) * 2021-02-23 2021-05-28 复旦大学附属华山医院 Liver fibrosis nondestructive prediction method and system
RU2781356C1 (en) * 2021-11-11 2022-10-11 Общество с ограниченной ответственностью "Техтранс" Method and system for detecting a person in a dangerous area and warning them of the danger

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0577085A2 (en) * 1992-06-30 1994-01-05 Eastman Kodak Company Method and apparatus for determining visually perceptible differences between images


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
CHAN, M. K. ET AL.: "Development of a microcomputer-based system for the automatic recognition of E. coli colonies", Laboratory Microcomputer, Northwood, Middlesex, GB, vol. 10, no. 3, 1 January 1991 (1991-01-01), pages 95-101, XP000566584 *
JAGOE, J. R. ET AL.: "Quantification of retinal damage during cardiopulmonary bypass: comparison of computer and human assessment", IEE Proceedings I: Solid-State & Electron Devices, Institution of Electrical Engineers, Stevenage, GB, vol. 137, no. 3, part 01, 1 June 1990 (1990-06-01), pages 170-175, XP000127748, ISSN: 0956-3776 *
LEE, SAMUEL C.; WANG, YIMING: "Automatic retinal image quality assessment and enhancement", Proc. SPIE, Medical Imaging 1999: Image Processing, San Diego, CA, USA, vol. 3661, no. 1-2, 22-25 February 1999, pages 1581-1590, XP002140437 *
LEE, SAMUEL C.; WANG, YIMING: "A general algorithm for recognizing small, vague, and imager-alike objects in a nonuniformly illuminated medical diagnostic image", Conference Record of the 32nd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, vol. 2, 1-4 November 1998, pages 941-943, XP002140433 *
LEISTRITZ, L.; SCHWEITZER, D.: "Automated detection and quantification of exudates in retinal images", Applications of Digital Image Processing XVII, San Diego, CA, USA, 26-29 July 1994, pages 690-696, XP002140435 *
LIANG, JIANMING ET AL.: "Dynamic chest image analysis: evaluation of model-based ventilation study with pyramid images", IEEE International Conference on Intelligent Processing Systems, Beijing, China, 28-31 October 1997, pages 989-993, XP002140436 *
WANG, YIMING; LEE, SAMUEL C.: "A fast method for automated detection of blood vessels in retinal images", Conference Record of the 31st Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, vol. 2, 2-5 November 1997, pages 1700-1704, XP002140434 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7672491B2 (en) 2004-03-23 2010-03-02 Siemens Medical Solutions Usa, Inc. Systems and methods providing automated decision support and medical imaging
US10905517B2 (en) * 2008-04-09 2021-02-02 Brainlab Ag Image-based controlling method for medical apparatuses
US20090259960A1 (en) * 2008-04-09 2009-10-15 Wolfgang Steinle Image-based controlling method for medical apparatuses
GB2467840A (en) * 2009-02-12 2010-08-18 Univ Aberdeen Detecting disease in retinal images
GB2467840B (en) * 2009-02-12 2011-09-07 Univ Aberdeen Disease determination
US7856135B1 (en) 2009-12-02 2010-12-21 AIBILI - Association for Innovation and Biomedical Research on Light and Image System for analyzing ocular fundus images
US8041091B2 (en) 2009-12-02 2011-10-18 Critical Health, Sa Methods and systems for detection of retinal changes
EP2821007A4 (en) * 2012-02-29 2016-04-13 Univ Kyoto Fundus oculi observation device and fundus oculi image analysis device
US9456745B2 (en) 2012-02-29 2016-10-04 Kyoto University Fundus observation apparatus and fundus image analyzing apparatus
US9384416B1 (en) 2014-02-20 2016-07-05 University Of South Florida Quantitative image analysis applied to the grading of vitreous haze
WO2016132115A1 (en) * 2015-02-16 2016-08-25 University Of Surrey Detection of microaneurysms
CN110786824A (en) * 2019-12-02 2020-02-14 中山大学 Method and system for detecting hemorrhage lesions in fundus photographs from coarse annotations based on a bounding-box correction network
CN110786824B (en) * 2019-12-02 2021-06-15 中山大学 Method and system for detecting hemorrhage lesions in fundus photographs from coarse annotations based on a bounding-box correction network
CN112862797A (en) * 2021-02-23 2021-05-28 复旦大学附属华山医院 Method and system for non-invasive prediction of liver fibrosis
CN112862797B (en) * 2021-02-23 2024-03-19 复旦大学附属华山医院 Method and system for non-invasive prediction of liver fibrosis
RU2781356C1 (en) * 2021-11-11 2022-10-11 Общество с ограниченной ответственностью "Техтранс" Method and system for detecting a person in a dangerous area and warning them of the danger

Also Published As

Publication number Publication date
AU3243800A (en) 2000-09-14
WO2000051080A9 (en) 2001-11-29

Similar Documents

Publication Publication Date Title
CN108416344B (en) Method for locating and identifying the optic disc and macula in color fundus images
CN105513077B (en) System for diabetic retinopathy screening
Medhi et al. An effective fovea detection and automatic assessment of diabetic maculopathy in color fundus images
Narasimha-Iyer et al. Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy
Wang et al. An effective approach to detect lesions in color retinal images
US7668351B1 (en) System and method for automation of morphological segmentation of bio-images
US20060257031A1 (en) Automatic detection of red lesions in digital color fundus photographs
Jan et al. Retinal image analysis aimed at blood vessel tree segmentation and early detection of neural-layer deterioration
Rasta et al. Detection of retinal capillary nonperfusion in fundus fluorescein angiogram of diabetic retinopathy
KR102313143B1 (en) Diabetic retinopathy detection and severity classification apparatus Based on Deep Learning and method thereof
JP2005508215A (en) System and method for screening patients with diabetic retinopathy
WO2000051080A1 (en) Computer system for analyzing images and detecting early signs of abnormalities
Zhang et al. Hierarchical detection of red lesions in retinal images by multiscale correlation filtering
Lee et al. Computer algorithm for automated detection and quantification of microaneurysms and hemorrhages (HMAs) in color retinal images
Jaya Krishna et al. Retinal vessel tracking using Gaussian and radon methods
Mangrulkar Retinal image classification technique for diabetes identification
Wihandika et al. Retinal blood vessel segmentation with optic disc pixels exclusion
Mora et al. A template matching technique for artifacts detection in retinal images
Kulkarni et al. Eye gaze–based optic disc detection system
Umamageswari et al. Identifying Diabetics Retinopathy using Deep Learning based Classification
Shankar et al. Glaucoma Detection with Fully Convolutional Neural Network using Optic Disc and Segmentation Methods
Mei et al. Optic disc segmentation method based on low rank matrix recovery theory
Anand et al. Optic disc analysis in retinal fundus using L2 norm of contourlet subbands, superimposed edges, and morphological filling
Yulianti et al. No reference image quality assessment of retinal image for diabetic retinopathy detection based on feature extraction
Firdausy et al. A study on recent developments for detection of neovascularization

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: C2

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: C2

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

COP Corrected version of pamphlet

Free format text: PAGES 6/77, 10/77, 12/77 AND 16/77, DRAWINGS, REPLACED BY NEW PAGES 6/77, 10/77, 12/77 AND 16/77

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase