MXPA99008987A - A system for imaging an ocular fundus semi-automatically at high resolution and wide field - Google Patents

A system for imaging an ocular fundus semi-automatically at high resolution and wide field

Info

Publication number
MXPA99008987A
MXPA99008987A MXPA/A/1999/008987A MX9908987A
Authority
MX
Mexico
Prior art keywords
images
eye
image
pupil
light
Prior art date
Application number
MXPA/A/1999/008987A
Other languages
Spanish (es)
Inventor
Zeimer Ran
Original Assignee
Johns Hopkins University
Zeimer Ran
Priority date
Filing date
Publication date
Application filed by Johns Hopkins University and Zeimer Ran
Publication of MXPA99008987A

Abstract

A system for obtaining images (100) of the fundus of the eye includes an imaging illumination device (130), which directs a light beam onto a portion of the fundus of the eye, and a video camera (166), which records the portion of the light reflected from the fundus of the eye. A plurality of images of different areas of the fundus of the eye are arranged by a computer (116) into a mosaic image representative of a section of the fundus that includes the plurality of areas.

Description

A SYSTEM FOR IMAGING AN OCULAR FUNDUS SEMI-AUTOMATICALLY AT HIGH RESOLUTION AND WIDE FIELD
BACKGROUND OF THE INVENTION
Field of the Invention
The present invention relates to an apparatus and method for photographing the fundus of an eye. More particularly, the present invention relates to an apparatus and corresponding method for obtaining images of a plurality of different portions of the fundus of the eye and arranging those images to create a mosaic image of a section of the fundus comprising the plurality of different portions.
Description of the Related Art
In industrialized countries, the most common diseases of the eye, in addition to cataracts, are diabetic retinopathy, glaucoma and age-related macular degeneration. Although these diseases can lead to severe vision loss, if they are treated at appropriate stages the risk of vision loss is significantly reduced.
In order to detect the onset of treatable disease, people at risk should be examined by an eye care specialist on a regular schedule. Unfortunately, only a portion of these people are routinely examined. For example, on average, half of all diabetic patients do not visit the ophthalmologist as recommended. This lack of surveillance, which is not related to the availability of care, leads to an unnecessarily high incidence of vision loss, accompanied by lower quality of life, high health care costs and lost productivity of the affected people and their caregivers. There is thus an effort to find available resources for screening for common eye diseases; since people at risk of such eye diseases usually make regular visits to primary care physicians, it would be advantageous to perform the screening in these physicians' offices. At present, however, there are no adequate devices with which to carry out such screening. The devices must be easy to use by office staff, fast, sensitive, as accurate as possible and, most importantly, affordable. Photography has been used to aid in the diagnosis of common eye disorders and is often considered superior to ophthalmic examination for the detection of a number of diseases and the diagnosis of numerous eye disorders, such as diabetic retinopathy and glaucoma. For diseases such as diabetic retinopathy, photographs allow eye care specialists to detect the presence of pathologies such as abnormal blood vessels, deposits of components, such as lipids, that have escaped from the vessels, and edema. To detect glaucoma, photographs are useful for examining the optic disc and the surrounding area for loss of nerve fibers. In addition, diagnostic methods are used to assess vision. Typically, losses are detected by psychophysical tests that evaluate the subject's response to visual stimuli. Unfortunately, these tests are currently available only to eye care specialists. There is therefore a need for a system that makes it possible for these tests to be performed in a manner that conforms to the practical needs of a variety of examination environments, such as the offices of primary care physicians, health clinics, optometrists' offices, large work sites and mobile units.
In order to screen efficiently for common eye diseases, the imaging system must provide images with a relatively large field of view (e.g., about 50°), measured as the conical angle that originates at the pupil and extends toward the area of the retina being imaged. Such a field of view is sufficient for the reliable detection of common diseases such as diabetic retinopathy, glaucoma and age-related macular degeneration. The imaging system must also provide a resolution of 60 pixels (image elements) per degree or greater. Conventional photographic images of the fundus are currently acquired on film with a resolution adequate for diagnosis, but the field of view is 30° or less. The resolution of conventional photographs on film is approximately 60 pixels per degree; therefore, a total of approximately 1800 pixels (measured diagonally) is achieved (i.e., 30° x 60 pixels per degree = 1800 pixels). For the desired field of 50°, a total of 3000 pixels, measured diagonally, is required. Stereoscopic images with a constant stereoscopic angle are also desirable in such an imaging system. The detection of macular edema, a common cause of vision loss, is routinely based on the examination of stereoscopic pairs of fundus images to detect retinal thickening. The stereoscopic angle is the angle between the two imaging paths of the two images. By viewing these images, each with a different eye, the examiner obtains a sense of depth. This stereoscopic effect is improved by increasing the stereoscopic angle. Therefore, to obtain a stereoscopic sensation that can be used to compare images, the stereoscopic angle must be constant. The system must also allow efficient operation by personnel not trained in ophthalmology. The most efficient screening for common eye diseases can be achieved by examining the eye during routine visits to the primary care physician, as opposed to visits made to an ophthalmic specialist. Thus, the camera must be designed specifically for operation by non-expert personnel. The system must also provide a cost-efficient examination. For the test to be incorporated into routine health care, the cost needs to be modest in proportion to the financial and medical benefits. Additionally, the system should be able to obtain images without pharmacological dilation of the pupil. Conventional fundus photography requires pharmacological dilation of the pupil by topical instillation of a drug, which dilates the pupil to prevent it from contracting on exposure to the light necessary for photography. Imaging without the instillation of a drug makes the process easier and faster. This requirement is not crucial, because pharmacological dilation of the pupil is a common procedure used in general eye examinations. Conventional fundus cameras provide the desired resolution over a field of view of only about 30°, which is much smaller than the preferred field of view of 50°. To cover the larger desired area, the photographer is required to manually direct the camera to multiple adjacent regions and obtain multiple photographs. Stereoscopic images are acquired by obtaining one photograph through the right side of the pupil and then manually moving the camera to take a second image through the left side of the pupil. This procedure provides images with an unknown stereoscopic base and thus an unknown and variable perception of thickness and depth.
In order to provide images adequate for diagnostic purposes, conventional fundus images and stereoscopic images require operation by an experienced ophthalmic photographer. Pharmacological dilation is also typically necessary. Finally, the cost and inconvenience of film make these cameras unsuitable for screening. Digital cameras that supply images 2000 pixels in diameter have been optically coupled to conventional fundus cameras and have supplied 30° field images with adequate resolution. Because digital cameras record the image electronically and store it in memory, there is no need for film. However, such systems are not suitable, because they share the same drawbacks mentioned for the conventional cameras to which they are coupled. In addition, the high price of these digital cameras adds to the cost of the examination, making it unattractive in locations with a small volume of appropriate individuals. Some fundus cameras provide images or markers to assist the operator in alignment with the pupil. One known camera provides autofocus. Other cameras have been designed to obtain images without pharmacological dilation of the eye. Still other cameras have been designed specifically to obtain simultaneous stereoscopic pairs, but their resolution is reduced, because each frame of the film is shared by the two images that make up the stereoscopic pair.
None of these cameras provides a sufficiently large photographic field with the required resolution. Also, as mentioned before, psychophysical tests, that is, tests performed to evaluate mental perceptions of physical stimuli, are important in ophthalmology. Most of these tests attempt to detect pathology in the retina or the neural pathway. The tests of visual acuity and visual field (perimetry) are examples. Visual acuity evaluates central vision, that is, the ability of the subject to perceive small objects, and the perimetry test is directed at detecting vision losses, mostly in the peripheral region of the fundus. However, when faced with a response below that of normal subjects, vision care specialists may have difficulty determining whether the reduced response is due to optical obstructions in the ocular media or is caused by retinal and neuro-retinal abnormalities. The management of patients depends on this differential diagnosis. For example, a reduced visual field response may be due to insufficient dilation of the pupil or to opacities in the lens. Poor visual acuity can be caused by opacities and optical aberrations. Finally, in tests such as perimetry, it is difficult to evaluate which location in the fundus is responsible for the abnormal response. Therefore, there is a continuing need for a system that is capable of obtaining a photographic image of the fundus with the desired resolution and field of view, as well as a system that is capable of measuring visual acuity and the visual field.
SUMMARY OF THE INVENTION
An object of the present invention is to provide a system that is capable of obtaining a photographic image of the fundus of an eye having a desired resolution, with a desired field of view, for example of 50° or more. A further object of the invention is to provide a fundus imaging system which is also capable of performing visual acuity and visual field tests on the eye from which the fundus image is obtained. A further object of the invention is to provide a system that obtains an image of the fundus of the eye having the desired resolution and field and transmits that image to a remote site for analysis. These and other objects of the present invention are substantially achieved by the provision of a system which includes an optical head, used at the examination site (typically a primary care physician's office) to acquire images digitally and transmit them by electronic lines to a remote central reading module. The optical head uses a conventional, relatively inexpensive video camera to record the fundus image over a small field. A plurality of such images are acquired at different locations on the fundus, and a wide-field, high-resolution image is then constructed automatically by generating a mosaic of these images. Also, owing to the high video frame rate, stereoscopic pairs can be acquired automatically with a time delay too short for eye movements and with a constant stereoscopic angle. To cover a field greater than 30°, preferably a field of 50° or greater, with a resolution of 60 pixels per degree, an image equivalent to 3000 pixels in diameter is required. Conventional video cameras can provide images of 830 pixels or more in diameter. In accordance with the present invention, the necessary area is covered with a mosaic of preferably 9 to 16 images. The invention provides a means for efficiently acquiring such images under the operation of relatively inexperienced personnel.
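As a rough illustration of the sizing described above, the following sketch (a non-authoritative Python calculation; the patent itself gives only the 3000-pixel total and the 9-to-16-image range, and the sketch ignores the overlap a real mosaic would need) checks how many video frames of a given usable diameter are needed to span the 50° field:

```python
# Rough sizing of the mosaic: how many video frames are needed so that a
# 50-degree field at 60 pixels/degree (about 3000 pixels across) is covered
# by frames of a given usable diameter.  This is an illustrative
# back-of-the-envelope check, not the patent's own procedure.

import math

FIELD_DEG = 50      # desired field of view, degrees
PX_PER_DEG = 60     # required resolution, pixels per degree
total_px = FIELD_DEG * PX_PER_DEG   # 3000 pixels across the field

for frame_px in (830, 1000):        # usable frame diameters to try
    per_axis = math.ceil(total_px / frame_px)
    print(f"{frame_px}-pixel frames: {per_axis} x {per_axis} = "
          f"{per_axis ** 2} images")
# 830-pixel frames give a 4 x 4 mosaic (16 images); frames of roughly
# 1000 pixels give a 3 x 3 mosaic (9 images), matching the 9 to 16
# images cited above.
```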
Also, once the eye has been photographed, the sensor plane of the video camera is optimally conjugated to the fundus. By replacing the camera with a plurality of graphic objects, psychophysical tests can be performed in which the subject responds to the presentation of the objects. One such set of objects consists of letters or symbols used to test visual acuity. Another set may consist of grids that vary temporally and spatially and are presented to the peripheral retina, as used for the visual field test. The invention includes a device for monitoring the fixation of the subject and deriving the location of the stimulus on the fundus. At the reading center, a computerized module processes the images, under the supervision of trained staff, and generates a wide-field composite image and stereoscopic images, if necessary. In addition, the computer module analyzes the results of the psychophysical tests and determines the probability of pathologies.
BRIEF DESCRIPTION OF THE DRAWINGS
Referring now to the drawings, which form a part of the original description: Figure 1 is a schematic view of an example of an imaging system according to an embodiment of the present invention, which is used as a data acquisition system to obtain an image that is to be transmitted to a remote data analysis center; Figure 2 is a schematic view of the imaging system shown in Figure 1, which further includes a device for performing a procedure for estimating the refractive error; Figure 3 is an inverted, schematic, cross-sectional view of the system shown in Figure 1, taken along lines 3-3 of Figure 1 and illustrating, in particular, the optical head; Figure 4 is an inverted, schematic, cross-sectional view of the system shown in Figure 1, taken in a direction perpendicular to lines 3-3 of Figure 1 and illustrating, in particular, the optical head; Figure 5 is a schematic view of an imaging system, according to a second embodiment of the invention, which is used as a data acquisition system to obtain an image that is to be transmitted to a remote data reading center; Figure 6 is a schematic view of the imaging system shown in Figure 5, for performing a procedure for estimating the refractive error; Figure 7 is an inverted, schematic, cross-sectional view of the system shown in Figure 5, taken along lines 7-7 of Figure 5 and illustrating, in particular, the optical head; and Figure 8 is an inverted, schematic, cross-sectional view of the system shown in Figure 5, taken in a direction perpendicular to lines 7-7 of Figure 5 and illustrating, in particular, the optical head, and also including an insert that illustrates five positions of the pupil through which the images are taken.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
An example of an imaging system 100, according to one embodiment of the present invention, is shown schematically in Figures 1 and 3. The system includes an imaging head or subassembly 102, mounted on a motorized, computer-controlled XYZ stage assembly comprising the components 104, 106 and 108. These components 104, 106 and 108 are responsible for movement in the X, Y and Z directions, respectively. As described in more detail below, during operation the subject 110 rests the bridge of the nose against a nose pad 112 and looks at one of the target diodes (see Figures 3 and 4) within the imaging subsystem 102, used for fixation. A switch 114 is activated by placing the nose against the nose pad 112, thereby indicating to the computer 116 that the nose has been properly positioned. The electronics and the computer 116, which control the entire system, are placed under the imaging subassembly 102. The subject has access to a button 118, and the operator has access to the monitor 120 and a touch pad 122 which controls the position of a cursor on the monitor 120. Their purpose is described below. As shown in Figure 2, a support 124 is capable of moving from a rest position (shown by dotted lines) to a predetermined distance from the nose pad 112. The subject's chart 126, or any other standardized identification sheet, is placed on an internal surface of the support 124 when this support 124 is at the predetermined distance from the nose pad 112. To estimate the refractive error of the subject and set the initial position of the focusing lens (see Figures 3 and 4) for a particular eye, the eyeglasses 128 of the subject are held by a support (not shown) in front of the camera in the manner shown. The instrument performs a focusing routine, described below in detail, and determines the location of the lens necessary to focus the image of the chart 126. This position of the lens is stored by the computer 116 and is taken into account when the autofocus of the fundus image is performed. Figure 3 is an inverted, schematic, cross-sectional view of the optical head or subsystem 102, as taken along lines 3-3 in Figure 1, in relation to the nose pad 112 and the eye being examined. Figure 4, on the other hand, is an inverted, schematic, cross-sectional view of the optical head or subsystem 102, as taken in a direction perpendicular to lines 3-3 in Figure 1. As shown in Figures 3 and 4, and as described more fully below, the optical head or subsystem 102 includes a halogen bulb 130, which generates light that is focused by the lenses 132 onto an aperture 134 of annular configuration. The infrared light is removed by a filter 136, and the visible spectrum is limited by the filter 138 to green light (around 540 nm). A computer-controlled solenoid 140 turns the light on and off by moving a blade 142 that acts as a shutter. The aperture is imaged and focused by the lenses 144 onto a mirror 146, provided with a central opening 147. The light of the bulb 130 is reflected by the mirror 146, and the aperture 134 is imaged by the lens 148 onto the pupil 150 of the eye 152. From the pupil, the beam expands and illuminates the fundus 154. The amount of illumination light reflected by the surfaces of the lens 148 is minimized by blocking the rays striking the central portion of the lens 148. This is achieved by a small mask 156, which is imaged onto the lens 148.
If necessary, the mask 156 consists of two masks separated by a small distance, so that each mask is imaged onto the front and rear surfaces of the lens 148, respectively. The image of the fundus 154 is thus created on a conjugate plane 158. The lenses 160 and 162 together transfer the image in the conjugate plane 158 to the conjugate plane 164. The detection area of a video camera 166 is placed in this plane. The lens 162 is moved to compensate for the spherical refractive error of the eye and thus optimize the focus on the detection area of the video camera 166. The movement of the focusing lens 162 is achieved by mounting it on a support 168 that slides on two rods 170 and 172. The movement of the support 168 is generated by a computer-controlled linear actuator 174 linked to the support 168. The green, infrared-free light is used to illuminate the fundus 154 and forms the image at the plane 164 on the video camera 166. The use of green light is advantageous for imaging blood vessels and vascular pathologies, because it is strongly absorbed by the blood and thus provides optimal contrast. The output of the camera 166 is fed into an image acquisition board placed in the computer 116 (Figures 1 and 2). To obtain images at different locations on the fundus, the eye is directed to different orientations by presenting a plurality of light-emitting-diode targets 176 and 178 (Figure 4), which are activated in a predetermined order under the control of the computer. The diodes 176 provide the fixation for all locations except the central one (the fovea). The central fixation is provided by the diode 178, which is seen by the observer through the mirrors 180 and 182. In a preferred embodiment, an array of nine (3 x 3), twelve (3 x 4) or sixteen (4 x 4) diode targets 176 and 178 is used for each eye. In Figures 3 and 4, an array of (4 x 3) diodes is included for illustration, but only two diodes are shown in Figure 4. The operation of the system will now be described in view of Figures 1 to 4. The operator enters an identification number for the patient, controlling, through the touch pad 122, the location of the cursor seen on the monitor 120. The eye to be imaged is selected and the session begins by activating the start icon with the cursor. A voice message instructs the subject in the use of the camera. Following the instructions, the subject rests the bridge of the nose on the nose pad 112, thereby activating the switch 114, which verifies the proper position. The subject locates the button 118 and presses it for practice. The voice continues by instructing the subject to look at a light (one of the diodes 176 and 178) and to press the button 118 when it flashes. Before the light flashes, the pupil is centered and focused automatically. That is, as shown in Figures 3 and 4, the optical head or subsystem 102 further includes diodes 184, which illuminate the iris of the eye 152 with infrared light. The infrared light scattered by the pupil is reflected by the infrared mirror 182 and the front-surface mirror 180 and is projected by the lens 186 onto the video camera 188. The mirror 182, which reflects the infrared light, transmits visible light and prevents the infrared light from reaching the optics that image the fundus (for example, the lenses 160 and 162 and the video camera 166). The use of infrared light allows the optical head to image the pupil without constricting it and without the subject noticing it.
The output of the camera 188 is fed into an image acquisition board located in the computer 116. The pupil is imaged by the camera 188 and, even when it is not in focus, it appears as a disc darker than the iris, sclera and eyelids that surround it. The image is digitized by a board placed in the computer 116. A software algorithm thresholds the image, that is, converts the gray-scale image into a black-and-white image. The threshold is pre-established so as to convert the pupil into a black disc and the tissue surrounding it into white. The image is then inverted to generate a white disc for the pupil, surrounded by black. The algorithm calculates, for the pupil, the mass and the components of the center of mass along the horizontal and vertical axes, respectively, according to the following formulas:
X = Σ(x_i · D_i) / Σ(D_i), for the horizontal center of mass;
Z = Σ(z_i · D_i) / Σ(D_i), for the vertical center of mass;
Mass = Σ(D_i), for the mass;
where x_i and z_i are the coordinates of pixel i, with a density D_i equal to 0 or 1.
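A minimal sketch of the center-of-mass computation above, assuming the iris image is available as a NumPy gray-scale array; the array names and the threshold value are illustrative placeholders, not values taken from the patent:

```python
import numpy as np

def pupil_center_and_mass(gray, threshold=60):
    """Threshold a gray-scale iris image so the pupil becomes white (1) and
    the surround black (0), then return the pupil's center of mass and mass,
    as in the formulas above.  The threshold value is an assumption."""
    # Dark pupil pixels become 1 (the threshold and inversion in one step).
    binary = (gray < threshold).astype(np.uint8)
    zs, xs = np.nonzero(binary)        # coordinates of pixels with D_i = 1
    mass = int(binary.sum())           # Mass = sum(D_i)
    if mass == 0:
        return None, None, 0           # no pupil found
    x_c = xs.mean()                    # X = sum(x_i * D_i) / sum(D_i)
    z_c = zs.mean()                    # Z = sum(z_i * D_i) / sum(D_i)
    return x_c, z_c, mass
```

The deviation of (X, Z) from the image center would then be converted into a proportional number of stepping-motor pulses to recenter the optical head, as described in the following paragraph.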
The origin of the coordinates is established at the center of the image from the camera 188 and corresponds to the optical center. When the center of mass coincides with the center of the image, the pupil is centered and aligned with the optics. At other locations, the center of mass indicates the deviation from the center. These values of x and z are used by the computer 116 to generate a proportional number of pulses for the stepping motors, which control the movement of the optical head along the horizontal and vertical positioning stages, respectively. After each movement of the linear stages 104, 106 and 108 (Figure 1), a new image is acquired and the new center of mass is calculated, followed immediately by the displacement of the center of mass of the pupil to the center of the image. The steps are repeated up to a predetermined number of times. If this number is exceeded, the procedure is aborted and the linear stages are returned to their default positions. Before a given image is used to move the stages, a quality check is performed on the image. That is, in order to verify that the subject is looking at the correct fixation light, the iris images with the reflections of the illuminating diodes 184 are analyzed. When the pupil is well centered, the location of the reflections in relation to the center of the pupil is specific to each direction of gaze needed to look at a given target. The reflections are identified by object recognition known in the art, and their location is compared to the expected location stored in a table. Once the pupil has been brought to the center of the image, a series of 7 images is acquired over 210 msec while the linear Y stage moves toward the eye from a default location. For each of the seven images, a density profile is obtained along a horizontal line passing near the center of the pupil and away from the reflections of the illuminating light-emitting diodes 184. The derivative of the density profile is calculated and the maximum and the minimum are recorded. Seven values are obtained for the maximum and seven for the minimum. The algorithm identifies the image for which the absolute values reach their highest value. This image is the best focused. If the maximum and the minimum reach their highest absolute values in frames m and n, (m + n)/2 is taken as the location of the best focus. The identification of the best-focused image is translated into a displacement of the linear stage, and the stage is moved to that location. If necessary, the procedure can be repeated over a narrower range of motion centered on the previously found focus location, thus refining the focus. Alternatively, the best-focused image of the pupil can be determined based on the calculation of the "mass" of the white image representing the pupil. That is, as described above, when the image is digitized and a threshold is applied, the pupil is represented by white pixels, while the area surrounding the pupil is represented by black pixels. If the pupil is out of focus, a region of gray pixels will be present between the white pixels (the pupil) and the black pixels (the area surrounding the pupil). Some of these gray pixels will be converted to black. However, once the pupil comes into focus, the number of gray pixels contributed by the pupil decreases and thus the number of white pixels increases. Therefore, when the apparatus determines that the maximum number of white pixels is present in the image, the apparatus concludes that the pupil is at the optimum focus.
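The derivative-based focus search described above can be sketched as follows, assuming each of the seven frames is available as a NumPy array; the choice of scan line is a placeholder, since the patent states only that the line passes near the pupil center and away from the diode reflections:

```python
import numpy as np

def best_focus_index(frames, row=None):
    """Given a series of pupil images taken at different axial positions,
    return the index of the sharpest one.  For each frame, a density profile
    is taken along one horizontal line, its derivative is computed, and the
    extrema are recorded; the result is (m + n) // 2, where m and n are the
    frames holding the largest positive and negative derivatives, as in the
    description above.  The scan row defaults to mid-height as a placeholder."""
    maxima, minima = [], []
    for frame in frames:
        r = frame.shape[0] // 2 if row is None else row
        profile = frame[r, :].astype(float)   # density profile along the line
        deriv = np.diff(profile)              # derivative of the profile
        maxima.append(deriv.max())
        minima.append(deriv.min())
    m = int(np.argmax(maxima))                # frame with strongest rising edge
    n = int(np.argmin(minima))                # frame with strongest falling edge
    return (m + n) // 2
```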
If the quality check described above is not passed, the procedure is repeated up to a predetermined number of times, after which the procedure is interrupted and the operator is asked to identify the reason, such as obstruction of the pupil or lack of response from the subject. During the centering and focusing of the pupil, the patient sees the light and is free to blink. The procedure takes less than 1 second. With correct centering and focusing of the pupil, the fixation light flashes and the subject presses the button.
This causes the fundus focusing procedure to take place. This procedure takes less than a second. If the subject does not respond within a previously set time delay, the voice supplies further instructions and the procedure is repeated. Once the pupil is centered and in focus, the fundus is automatically focused. This is achieved by presenting the subject with a fixation light such that the region adjacent to the optic disc is imaged. This area is chosen because it always contains large and small blood vessels. The shutter is activated (that is, the blade 142 is moved to allow light from the bulb 130 to pass) for about 240 msec, and eight frames (that is, about 30 msec per frame) are digitized from the camera 166 while the motor 174 moves the lens 162 through a large interval. For each of the eight frames, a region of interest that crosses some large and small vessels is selected. In each of these regions, a frequency transformation, such as a Fast Fourier Transform, is executed, providing a two-dimensional array in the frequency domain. The components in a predetermined region of this domain are summed and eight values are obtained. The frame with the maximum sum is identified and the corresponding location of the lens 162 is derived. The motor moves the lens 162 to the location where it supplies the best-focused image. If necessary, the procedure can be repeated over a narrower range of motion centered on the previously found focus location, thus refining the focus. The images acquired during the fundus focusing procedure are also used to determine the level of illumination. The average pixel density is calculated and compared to the desired range (for example, 80 to 120 for a 256-level, or 8-bit, gray-scale image). The deviation from the desired range is used to change the light output of the bulb 130. In one embodiment, this is accomplished by adjusting the duty cycle of the alternating voltage supplied to the bulb 130. Once the fundus is focused and the light is adjusted, the plurality of targets 176 and 178, which correspond to different areas on the fundus, are presented in sequence, a single diode being activated for each desired location on the fundus. For each location, the image of the pupil is tracked, centered and focused. The blade 142 acting as the shutter is opened for the time needed to digitize, at video rate, up to 8 images. During the procedure, the subject is provided with voice feedback, and with instructions if the quality tests are not passed.
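A minimal sketch of the frequency-domain focus measure and the illumination check described above, assuming NumPy arrays for the digitized frames; the particular band of spatial frequencies summed here is an assumption, since the patent specifies only "a predetermined region" of the frequency domain:

```python
import numpy as np

def fft_focus_score(roi, low_frac=0.1, high_frac=0.5):
    """Sum the magnitude of a mid-frequency band of the 2-D FFT of a region
    of interest crossing retinal vessels.  A sharper image has more energy
    in this band; the band limits are illustrative assumptions."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(roi.astype(float))))
    h, w = spec.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)  # normalized radius
    band = (r >= low_frac) & (r <= high_frac)
    return spec[band].sum()

def choose_focus_and_exposure(frames, target=(80, 120)):
    """Pick the frame with the highest focus score and report whether its
    mean gray level falls inside the desired range (80 to 120 for 8-bit)."""
    scores = [fft_focus_score(f) for f in frames]
    best = int(np.argmax(scores))
    mean_level = float(np.mean(frames[best]))
    in_range = target[0] <= mean_level <= target[1]
    return best, mean_level, in_range
```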
The acquisition of the fundus images can be carried out in several ways; two modes are described here. In each of these modes, the blade 142 that controls the shutter is activated for about 120 msec, during which time four frames are digitized. In the first mode, the four frames are acquired while the lens 162 moves over an interval around the fine focus. This mode ensures that the best-focused image is always acquired, even if the refractive error changes for the particular gaze direction that corresponds to the fixation light. This may occur if the retina is not at the same distance from the lens or in the presence of marked variations in the local corneal power. In the second mode, the four frames are acquired while the optical head 102 moves along the x axis (perpendicular to the optical axis). This mode allows images to be acquired through four well-defined locations across the pupil. A pair of images obtained from two opposite horizontal locations in the pupil generates a stereoscopic image. The pair with the larger separation provides a better stereoscopic base and thus a better stereoscopic effect. The pair with the smaller separation ensures that some stereopsis can be obtained even in the presence of poorly dilated pupils. This mode has an additional advantage in the presence of anterior segment opacities: if a local opacity degrades the fundus image, the acquisition of multiple images through different locations in the pupil increases the probability of obtaining a useful fundus image. After each acquisition of a fundus image, a quality control algorithm is applied. A variety of checks can be applied; one modality is described here. When the pupil is well centered and not obstructed by the eyelids, the fundus image is good (once it has been focused by the previous algorithm). Thus an algorithm checks the mass of the pupil (after the aforementioned thresholding procedure) and compares it to the minimum required mass of the pupil. In addition, the mass of the central 5 mm portion of the pupil is checked and is required to be close to that of a 5 mm disc. These checks are efficient in detecting obstructions by blinking or by eyelashes. The quality of the focus of the pupil can also be checked by comparing the maximum (or minimum) derivative to that obtained during the fine focusing of the pupil, mentioned above. After images have been acquired at all locations, the operator is presented with the images on the monitor and given the option to repeat some of the images if deemed necessary. This is achieved by pointing the cursor at the desired image and clicking on it with the mouse. The voice prompts the operator to decide whether to image the other eye and, if so, the procedure is repeated. At the end of the acquisition, the subject is informed of the completion. All images are stored on the computer along with documentation of the operations that took place during the session. The data representing the images stored in the computer can then be transmitted to another location, for example an image reading center. At this image reading center, the images of the different locations on the fundus are automatically arranged to create a mosaic image of the entire imaged area of the fundus. For example, if nine images of the fundus have been taken (a 3 x 3 fundus image array), then these nine images are arranged to create a single image of the entire imaged area.
This arrangement of the fundus images into the mosaic image can alternatively be performed by the computer 116, if desired, and the data representing the mosaic image can be transmitted to the image reading center. It will further be noted that once the fundus image has been properly focused on the camera 166, the plane 164 is conjugated to the fundus. Objects placed in that plane are imaged onto the fundus under the best optical conditions. The optical conditions are optimized by the focusing and by the optical nature of the optical head. As mentioned, the opening 147 of the mirror 146 is conjugated to the cornea of the subject and its size is small. Image formation thus depends only on the central 2 or 3 mm of the cornea. Because of the small size of the effective corneal aperture, practically only the cylindrical power of the central cornea affects its optical quality. Deviations of the power necessary for optimal image formation are thus corrected by the placement of the lens 162. In other words, image formation does not depend on the peripheral cornea, which may have local variations in optical power (astigmatism). The advantage of imaging under these conditions is well known in the art. Under these conditions, the quality of the image obtained by the screening camera directly reflects the optical conditions of the eye under examination. This image can then be used by an eye care specialist to determine whether the psychophysical response has been influenced by the optics. Likewise, the sharpness of the image can be measured by a plurality of means, such as the analysis in the frequency domain mentioned above. Automatic centering and axial positioning of the optical head in relation to the pupil also ensures that the imaging path is at the center of the pupil. Imaging of the pupil by the camera 188 and the optics before it also allows objective determination of the pupil size. Only eyes with a pupil size that does not interfere with the imaging path are considered for the psychophysical tests. As mentioned, for each target presented to the patient, a single corresponding area on the fundus is imaged. Also, as mentioned, for each of those areas on the fundus there is a unique pattern of reflections on the cornea once the pupil is centered and in focus. By presenting the fixation targets to the subject and monitoring the reflections on the cornea, one can determine the fundus area conjugated to the plane 164. In the end, an object placed at the plane 164 is projected onto a known location on the fundus, thus establishing an objective, documented relationship between the origin of the psychophysical response (or the lack of it) and the anatomy. Therefore, after the fundus imaging has been performed, a plurality of psychophysical tests can be performed, such as a visual acuity test and a perimetry (visual field) test. Other psychophysical tests, such as recognition of color, recognition of contrast, etc., may also be performed. As shown in Figure 4, to perform the visual acuity test, the camera 166 is replaced with the assembly 190 by rotating the plate 192 with the motor 194. This assembly 190 comprises a light source 196 and a screen 198. Letters or graphic objects printed on the screen 198 are projected onto the fundus, and the subject is asked to identify or detect them, in a manner similar to an eye chart. Each correct answer is entered by the operator into the computer with the use of the touch-sensitive pad 122 and the screen 120 (Figures 1 and 2).
Multiple peripheral visual tests can be performed by modifying the assembly 190. A screen, such as a backlit monitor, can replace the light source 196 and the screen 198, and the computer can display multiple graphic objects. One such object consists of a grid that varies in spatial and temporal contrast. While the subject fixates on one of the diodes 176, the grid is turned on and off, and the subject is asked to respond by means of the button 118 (Figures 1 and 2). In this way a map of visual sensitivity, similar to perimetry, is generated.
It will be noted that the psychophysical tests, and in particular the perimetry test, are performed after the fundus images have been obtained. In this case, the visual sensitivity map can be correlated with the fundus areas. That is, as the subject fixates on one of the diodes 176, the portion of the fundus that was imaged when the subject fixated on that diode is the known portion of the fundus that detects the stimulus. Therefore, if the subject has difficulty detecting the stimulus while fixating on that diode, it can be determined that that portion of the fundus, and possibly the corresponding visual pathway, may have a physical abnormality. After the above tests and the fundus imaging have been performed, the results of the tests and the images can be transmitted to a remote location. That is, at night, the computer connects to a communications network, such as the Internet, and automatically transfers the data to a server located in the reading center. During the next connection, the reading center's server acknowledges receipt and the files are deleted from the computer in the optical head. The communication link also allows the reading center to activate periodic diagnostic tests of the optical head, to verify its proper operation and to identify the need for service. At the reading center, the images are archived and handled through a workflow. Multiple operations are performed. For example, quality tests are performed on the images, and the best image is selected for each location on the fundus. The images are presented as a grid, or one by one, to the reading expert for the detection of pathology as necessary. As described above, the fundus images are arranged into a mosaic image of the fundus. Likewise, the responses to the visual acuity test are analyzed by the computer and the visual acuity is determined in the manner used in the art. The analysis of the responses to the visual field test is also performed. The images of the cornea and iris are used by an algorithm to determine the location of the reflection on the cornea in relation to the center of the pupil. This location is compared to that obtained during the imaging, and the location of the stimulus on the fundus is determined. A fundus map of the responses is generated. The quality of fixation by the subject is evaluated. If a sufficient area of the fundus is covered, the statistical calculations used in the perimetry art are performed to determine whether the visual field is normal or not.
If the coverage is not adequate but at least one of the responses is abnormal, the test can be considered positive, justifying care by an eye care specialist. The results are computerized and the report is sent to the health care professional. A system according to another embodiment of the invention is shown in Figures 5 to 8. Specifically, Figure 5 is a schematic illustration of an example of an imaging system 200, similar to the system 100 described above. This system includes an imaging head or subassembly 202, which is mounted on a computer-controlled, motorized XYZ stage comprising components 204, 206 and 208. These components 204, 206 and 208 are responsible for movement in the X, Y and Z directions, respectively. During operation, the subject 210 leans against one of two nose pads 212 (one for imaging each eye) and looks at the target diodes (see Figures 7 and 8) within the imaging subsystem 202, used for fixation. A switch 214 supplies a signal to the computer 216 indicating the presence of the nose on a nose pad 212. The electronics and the computer 216 that control the entire system are placed under the imaging subassembly. The subject is provided with a button 218 and the operator is provided with a monitor 220 and a joystick 222; the use of these items is described below. As shown in Figure 6, a support 224 is able to move from a rest position (shown by dotted lines) to a predetermined distance from the nose pad 212. The subject's chart 226, or any other standardized identification sheet, is placed on an internal surface of the support 224 when this support is at the predetermined distance from the nose pad 212. In order to estimate the refractive error of the subject and to establish the initial position of a focusing lens (see Figures 7 and 8) for a particular eye, the eyeglasses 228 of the subject are retained by a support (not shown) in front of the camera in the manner shown. The instrument performs the aforementioned focusing routine and determines the location of the lens needed to focus the image of the chart 226. This position of the lens is stored by the computer 216 and is taken into account when the autofocus of the fundus image is subsequently performed. After estimating the refractive error, the holder 224 is returned to its stowed position and the subject leans against the instrument and makes contact with one of the nose pads 212 and 213 (Figure 7) with the bridge of the nose. To ensure that the subject does not move away, the microswitch 214 is placed under the nose pad and supplies a signal to the computer 216 if the pressure is relieved and the switch opens. In this case, the fixation light is turned off and an audible warning signal is generated. An example of the fundus imaging head or subsystem 202, according to this embodiment of the present invention, is shown schematically in Figures 7 and 8. This fundus imaging subsystem 202 includes two halogen bulbs, 230 and 232. These bulbs, 230 and 232, generate light from which the infrared light is eliminated by the filter 234. The beams are directed by the lenses 236 and focused by the lens 238 onto a mirror 240. The beams are filtered by the filter 242 to supply green light (around 540 nm). A computer-controlled shutter 244 turns the light beams on and off.
The light from the bulbs 230 and 232 is reflected by the mirror 240, and the filaments of the bulbs are imaged by the lens 246 onto the pupil 248 of the eye 250. Thus, the filaments of the two bulbs are imaged onto the pupil at two locations spaced correspondingly at the 3 o'clock and 9 o'clock positions on the pupil.
From the pupil, the beams expand and illuminate the fundus 252. The image of the fundus 252 is created on a conjugate plane 254. The lenses 256 and 258 work together to transfer the image in the conjugate plane 254 to the detection area of a video camera 260. The lens 258 is moved to compensate for the spherical refractive error of the eye and thus optimize the focus on the detection area of the video camera 260. The movement of the focusing lens 258 is achieved by mounting it on a conventional linear bearing assembly (not shown) and positioning it with a computer-controlled linear actuator. The diodes 262 illuminate the iris of the eye with infrared light. The infrared light scattered by the pupil is collected by the lens 246, reflected by the infrared mirror 264 and projected by the lens 266 to supply an image to a video camera 268. The infrared-reflecting mirror 264 transmits the visible light and prevents the infrared light from reaching the optics that form the fundus image (for example, the lenses 256 and 258 and the video camera 260). Figure 8 illustrates in greater detail the pupil scanning assembly 270 of Figure 7. In Figure 8 it can be seen that different glass plates, such as 272 and 274, are arranged with various inclinations on a mounting disc 276. By rotating the disc 276 with a motor 278, so as to place the various plates of different inclinations in front of the optical elements, the incoming illumination beams can enter the pupil from different locations and supply corresponding reflected imaging paths from the fundus. The insert 280 in Figure 8 illustrates a preferred arrangement of the five locations on the pupil achieved by five different plates. The acquisition of a plurality (preferably five) of images through a corresponding plurality (preferably five) of different locations in the pupil mimics the procedure used by a trained observer to obtain a clear image in the presence of local opacities in the lens of the eye. The present invention thus ensures that a clear image can be acquired automatically in the presence of local opacities at one or more locations, without the need for an experienced observer. Stereoscopic photographs are obtained by acquiring images at the two horizontal locations, which correspond to the 3 o'clock and 9 o'clock positions on the pupil (Figure 8). These two images can be acquired in consecutive video frames, that is, within 60 msec. The stereoscopic base is constant and is determined by the inclination of the glass plates in the pupil scanning assembly 270.
The eye is first coarsely aligned by the operator. The image of the pupil, generated by the camera 268, is seen by the operator on the monitor 220. The operator activates the joystick 222 to move the imaging subassembly 202 in the XYZ directions relative to the eye, until the image of the pupil is close to the center of the monitor 220. The imaging subassembly 202 is also moved axially until the pupil is generally focused and visible on the monitor. At this point, a computer algorithm processes the image of the pupil and determines the center of the pupil. The computer then determines the deviation of the pupil image from the center of the monitor 220. The deviation is used by the computer to drive the XYZ stage automatically to cancel this deviation. Pupil tracking is performed at video rate and the alignment is performed at 15 Hz or more. As a result, the image of the pupil is automatically centered on the display 220 in less than one second. To obtain images at different locations on the fundus, the eye is directed to different orientations by presenting a plurality of light-emitting-diode targets 282 for the companion eye (the eye opposite to the one being imaged) to follow. In the vast majority of cases, the eye being imaged will move substantially in the same way and to the same orientation as the companion eye. Although it is contemplated that the targets can be presented to and seen by the eye being imaged, the use of the companion eye, as mentioned, is preferred. As shown in Figure 7, the companion eye sees light from one of the light-emitting diodes 282, arranged in a grid on the plate 286. The diodes are brought into focus by a Fresnel lens 288. The use of an array of nine (3 x 3), twelve (3 x 4) or sixteen (4 x 4) diode targets for each eye is preferred. In Figure 7, an array of (3 x 3) diodes is used (only 3 diodes are shown for each eye in Figure 7). To acquire a fundus image, the output of the video camera 260 in Figure 7 is digitized by an image digitizing board in the computer 216 (see Figure 5), in synchronization with a video activation signal. This activation signal is produced by the output of the camera 260 or by the electronic timing elements in the computer 216. Each digitized image is first stored in the computer's RAM and then transferred to a storage medium (for example, the hard disk). For each eye to be photographed, an optimal focus of the image is obtained by acquiring a series of images (fields, at a video rate of approximately 60 fields/second, for a total time of 130 msec) through the central one of the five pupil locations (see insert 280 of Figure 8). For each of the images (preferably, eight images are taken), the focusing lens 258 travels a predetermined distance in synchronization with the video camera 260. The eight images stored in the RAM are examined by a computer algorithm to identify the image with the best focus. The location of the lens 258 used to obtain this image is stored and used for subsequent focusing. The manner in which the fundus images are taken is as follows. The main refractive errors are adjusted by the instrument immediately, based on the measurement obtained with the eyeglasses. The first target is then presented to the subject. When this first target is seen by the subject, the eye of the subject is oriented so as to enable an image of the fundus at the upper arcade of the retinal vessels to be recorded by the video camera 260.
The use of this target ensures that the image includes blood vessels, objects with contrast and detail, thus making the focusing optimally reliable. As soon as the centering of the image of the pupil is achieved, the illumination is effected by opening the shutter 244. The aforementioned series of eight images (fields) is acquired at video rate, each at a different position of the focusing lens 258, and the illumination is then terminated by closing the shutter 244. The position of the lens 258 for the best focus is saved by the computer. The plurality of different targets 282, corresponding to the different areas on the fundus, are then presented in sequence. For each desired location on the fundus, a single diode 282 is turned on. For each location, the image of the pupil is tracked and centered on the monitor 220, as mentioned before. Once the image of the pupil is centered, as detected by the computer 216, the computer generates a signal so that the speaker produces a beep (sound signal) to attract the patient's attention, thereby asking the patient to press the button 218 upon detecting that the target diode 282 begins to flash. This flashing starts briefly after the beep. The delay between the start of the flashing and the response of the subject is measured, and a computer software routine evaluates whether it is shorter than a preset standard time. This process is used to ensure that the subject is well fixated on the target. If the response delay is sufficiently short, a series of five images of the particular location on the fundus which corresponds to the particular target 282 is immediately acquired in less than 250 msec, which is the time delay for a voluntary movement of the eye. Each of the five images is acquired through a different location in the pupil, with the use of the previously mentioned pupil scanning protocol. The five digitized images are analyzed by the quality control software algorithm. The optimum image of the five is identified and the corresponding location in the pupil is recorded. The image can be presented to the operator on the display 220 to enable the operator to decide whether to accept or repeat the acquisition, the decision being entered by means of the joystick. The acceptable images are saved. The next target is presented and the acquisition is repeated. It is possible to limit the acquisition to fewer than 5 images by acquiring images through the region of the pupil previously identified as optimal. When a stereoscopic image is needed, the images acquired at the 3 o'clock and 9 o'clock positions are saved. Computer software routines are designed to evaluate the quality of focus and exposure. These quality controls allow the identification of the optimal image within a series of images and provide an indication of whether a blink or eye movement occurred during data acquisition. When such a blink or movement occurs, the fixation light 282 for which the data are to be taken is presented again.
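The fixation-verification step described above (beep, flashing target, timed button press) can be sketched as a simple timing gate; the 0.5-second limit and the button_pressed polling callable are illustrative assumptions, since the patent requires only that the delay be shorter than a preset standard:

```python
import time

def wait_for_fixation_response(button_pressed, max_delay_s=0.5):
    """Flash the fixation target, then require the subject's button press
    within a preset delay before triggering acquisition.  The 0.5 s limit
    is a placeholder, and button_pressed is assumed to be a callable that
    polls the subject's button."""
    flash_start = time.monotonic()
    while time.monotonic() - flash_start < max_delay_s:
        if button_pressed():
            return True      # fixation confirmed: acquire the 5-image burst
        time.sleep(0.005)    # poll briefly between checks
    return False             # no timely response: present the target again
```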
For imaging of the other eye, the subject rests the bridge of the nose on the other nose pad 213 and the previous procedure is repeated. At night, the computer in a remote reading center connects, over a telephone line, to the computer 216, which is placed in "slave" (secondary) mode. Thus, the computer in the reading center can load new data, readjust parameters and delete previous files, as well as download updated software. The reading center is equipped with multi-channel communications hardware and software to receive and store images from multiple optical heads in the same or different installations. The data are decompressed and processed. A computer software routine creates a mosaic of 3 x 3 or 4 x 4 images for the 9 to 16 preferred total images, mentioned above, obtained at the different targets. The mosaic is first laid out based on the location of the fixation target used for each image. A fine adjustment is then made based on the cross-correlation between the overlapping areas of adjacent images. Alternatively, the computer 216 can create the mosaic image and upload the data representing that image to the reading center.
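A minimal sketch of the fine-adjustment step described above, assuming adjacent tiles are first placed by their nominal fixation-target positions and then shifted by the peak of the cross-correlation of their overlapping strips; the FFT-based correlation and the array handling are illustrative choices rather than the patent's specified implementation:

```python
import numpy as np

def overlap_shift(strip_a, strip_b):
    """Estimate the (dy, dx) shift that best aligns two same-sized
    overlapping strips taken from adjacent mosaic tiles, using the peak of
    their FFT-based circular cross-correlation."""
    a = strip_a.astype(float) - strip_a.mean()
    b = strip_b.astype(float) - strip_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the strip into negative offsets.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

The returned (dy, dx) offset would then be applied to the nominal tile position before the tiles are blended into the composite image.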
Basic image enhancements, such as contrast enhancement and intensity equalization, are applied. The images are then stored together with the patient's data. A reader scrolls through the composite image and the data files. The images are displayed on a large, high-resolution monitor, and the reader marks the different pathologies with a pointing device. The location and nature of each pathology are automatically recorded in a file. A grading algorithm analyzes the data according to clinical standards, such as those of the ETDRS, determines the degree of retinopathy and presents recommendations to the reader. The grading and recommendations are reviewed by the reader and, once accepted or modified, the information is stored. A report is issued to the primary care physician and other authorized parties, as determined by the user. Although only a few exemplary embodiments of this invention have been described in detail above, those skilled in the art will readily appreciate that many modifications to the exemplary embodiments are possible without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications are intended to be included within the scope of this invention, as defined in the following claims.

Claims (35)

1. A system for delivering ophthalmic images, which comprises: a device that obtains images, which can be adapted to automatically obtain images of different sites on the fundus of an eye; and a device that arranges the images, which can be adapted to automatically arrange the images of the different fundus sites into an image that represents an area of the fundus comprising the different sites.
2. A system for delivering ophthalmic images, as claimed in claim 1, wherein: the device that obtains images obtains these images each with a field of view less than or equal to 30°, measured as the conical angle which originates at the pupil of the eye and extends to a corresponding location on the fundus to be imaged.
3. A system for delivering ophthalmic images, as claimed in claim 1, wherein: the device that arranges the images arranges the images of the different fundus sites into an image representing the fundus area comprising the different sites, so that the representative image has a field of view equal to or greater than 50°, measured as the conical angle that originates at the pupil of the eye and extends to the area to be imaged.
4. A system for delivering ophthalmic images, as claimed in claim 1, wherein the image-obtaining device comprises: at least one illumination device adapted to emit light onto the different fundus sites; and a light detector adapted to detect the light reflected from the different fundus sites in response to the light emitted thereon by the illumination device.
5. A system for delivering ophthalmic images, as claimed in claim 1, wherein: the image-obtaining device obtains two of these images for each of the different fundus sites; and the image-arranging device arranges the two images of each of the different fundus sites such that the image representing the fundus area is a stereoscopic image.
6. A system for delivering ophthalmic images, as claimed in claim 5, wherein the image-obtaining device comprises: an illumination device adapted to emit light onto each of the different fundus sites; and a light detector adapted to detect the light reflected from each of the different fundus sites in response to the light emitted thereon by the illumination device.
7. A system for delivering ophthalmic images, as claimed in claim 1, wherein: the image-obtaining device comprises an electronic camera adapted to automatically obtain the images of the different sites on the fundus of the eye.
8. A system for delivering ophthalmic images, as claimed in claim 7, wherein: the electronic camera is a video camera.
9. A system for delivering ophthalmic images, as claimed in claim 1, wherein: the image-obtaining device obtains a plurality of images for each of the different sites on the fundus of the eye.
10. A system for delivering ophthalmic images, as claimed in claim 9, wherein: the image-obtaining device obtains the plurality of images for each of the different sites within a time period of less than 250 milliseconds.
11. A system for delivering ophthalmic images, as claimed in claim 1, wherein: the image-obtaining device is adapted to automatically focus the images of the different sites on the fundus of the eye.
12. A system for delivering ophthalmic images, as claimed in claim 1, further comprising: an evaluating device adapted to evaluate each of the images of the different sites on the fundus of the eye obtained by the image-obtaining device, to determine whether each of the images meets a predetermined quality criterion.
13. A system for delivering ophthalmic images, as claimed in claim 1, further comprising: a second image-obtaining device adapted to obtain an image of the pupil of the eye.
14. A system for delivering ophthalmic images, as claimed in claim 13, wherein: the second image-obtaining device is adapted to automatically focus the image of the pupil of the eye.
15. A system for delivering ophthalmic images, as claimed in claim 13, further comprising: an evaluating device adapted to evaluate the image of the pupil of the eye obtained by the second image-obtaining device, to determine whether the pupil image meets a predetermined quality criterion.
16. A system for delivering ophthalmic images, as claimed in claim 15, wherein: the second image-obtaining device is adapted to obtain a plurality of images of the pupil of the eye; and the evaluating device is adapted to distinguish those images of the pupil that were obstructed by an eyelid of the eye from those that were not obstructed.
17. A system for delivering ophthalmic images, as claimed in claim 13, further comprising: an adjusting device adapted to adjust, in relation to the eye, the location of the images obtained by the image-obtaining device, based on the image of the pupil obtained by the second image-obtaining device.
18. A system for delivering ophthalmic images, as claimed in claim 13, wherein the second image-obtaining device comprises: at least one light-emitting device adapted to emit light onto the iris of the eye; and a light-detecting device adapted to detect the light reflected from the iris in response to the light emitted thereon by the at least one light-emitting device.
19. A system for delivering ophthalmic images, as claimed in claim 18, wherein: the at least one light-emitting device is an infrared light-emitting device.
20. A system for delivering ophthalmic images, as claimed in claim 18, wherein: the at least one light-emitting device is adapted to emit light onto different sites on the iris of the eye; and the light-detecting device is adapted to detect the light reflected from the different sites on the iris in response to the light emitted thereon by the at least one light-emitting device.
21. A system for delivering ophthalmic images, as claimed in claim 1, further comprising: a vision-evaluating device adapted to perform at least one psychophysical test on the eye, at the different sites on the fundus of the eye, after their images have been obtained.
22. A system for delivering ophthalmic images, as claimed in claim 21, wherein: the vision-evaluating device evaluates the visual field of the eye as one of the psychophysical tests.
23. A system for delivering ophthalmic images, as claimed in claim 1, further comprising: a data transmitter adapted to transmit data representing the images of the different sites on the fundus of the eye to the image-arranging device, which is located at a site remote from the image-obtaining device.
24. An apparatus for delivering ophthalmic images, comprising: at least one light-emitting device adapted to emit light onto at least one site on the iris of an eye; a light-detecting device adapted to detect the light reflected from the iris in response to the light emitted thereon by the at least one light-emitting device; and an image-obtaining device adapted to automatically focus and center the image of the pupil of the eye, to form the image of the pupil based on the light detected by the light-detecting device.
25. An apparatus for delivering ophthalmic images, as claimed in claim 24, further comprising: an evaluating device adapted to evaluate the image of the pupil of the eye obtained by the image-obtaining device, to determine whether the pupil image meets a predetermined quality criterion.
26. An apparatus for delivering ophthalmic images, as claimed in claim 24, wherein: the image-obtaining device is adapted to obtain a plurality of images of the pupil of the eye; and the evaluating device is adapted to distinguish those images of the pupil that were obstructed by an eyelid of the eye from those that were not obstructed.
27. An apparatus for delivering ophthalmic images, as claimed in claim 24, wherein: the at least one light-emitting device is an infrared light-emitting device.
28. An apparatus for delivering ophthalmic images, as claimed in claim 24, wherein: the at least one light-emitting device is adapted to emit light onto different locations on the iris of the eye; and the light-detecting device is adapted to detect the light reflected from the different locations on the iris in response to the light emitted thereon by the at least one light-emitting device.
29. A system for delivering ophthalmic images, comprising: an image-obtaining device adapted to automatically obtain images of different sites on the fundus of an eye; and at least one of the following: an alignment device adapted to automatically align the image-obtaining device with respect to the eye, to enable the image-obtaining device to obtain images of any site on the fundus of the eye; a focusing device adapted to automatically focus the image-obtaining device with respect to the fundus of the eye, to enable the image-obtaining device to obtain focused images of any site on the fundus of the eye; a transmission device adapted to transmit to a remote location the data representing the images obtained by the image-obtaining device; a testing device adapted to perform at least one psychophysical test on the eye, at the different sites on the fundus of the eye, after their images have been obtained by the image-obtaining device; and a stereoscopic-image-generating device adapted to generate a stereoscopic image of any of the sites on the fundus of the eye, based on the images of that site obtained by the image-obtaining device.
30. A system for delivering ophthalmic images, as claimed in claim 29, comprising all of the devices for alignment, focusing, transmission, testing and generation of stereoscopic images.
31. A system for delivering ophthalmic images, as claimed in claim 29, wherein: the testing device includes a stimulus-generating device adapted to provide optical stimuli that are registered with the locations on the fundus whose images have been obtained by the image-obtaining device.
32. A method for obtaining an ophthalmic image, comprising the steps of: automatically obtaining images of different sites on the fundus of an eye; and automatically arranging the images of the different fundus sites into an image representing an area of the fundus comprising the different sites.
33. A method for obtaining an ophthalmic image, comprising the steps of: emitting light, with at least one light-emitting device, onto at least one location on the iris of an eye; detecting, with a light-detecting device, the light reflected from the iris in response to the light emitted thereon; and automatically focusing and centering the image of the pupil of the eye to form the image of the pupil, based on the light detected by the light-detecting device.
34. A method of using an image-forming device to obtain an ophthalmic image, the method comprising the steps of: controlling the image-forming device to automatically obtain images of different sites on the fundus of an eye; and at least one of the following: automatically aligning the image-forming device with respect to the eye, to enable the device to obtain images of any site on the fundus of the eye; automatically focusing the image-forming device with respect to the fundus of the eye, to enable the image-forming device to obtain focused images of any site on the fundus of the eye; transmitting to a remote location the data representing the images obtained by the image-forming device; performing at least one psychophysical test on the eye, at the different sites on the fundus of the eye, after their images have been obtained by the image-forming device; and generating a stereoscopic image of any of the sites on the fundus of the eye, based on the images of that site obtained by the image-forming device.
35. A system that forms ophthalmic images, as claimed in claim 34, comprising all of the stages of alignment, focusing, transmission, testing and generation of stereoscopic images.
Application MXPA/A/1999/008987A | Priority date: 1997-04-01 | Filing date: 1999-09-30 | Title: A system for imaging an ocular fundus semi-automatically at high resolution and wide field | Publication: MXPA99008987A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US041783 1997-04-01
US60/041783 1997-04-01

Publications (1)

Publication Number Publication Date
MXPA99008987A 2000-07-01
