WO2014127134A1 - Methods and apparatus for retinal imaging - Google Patents

Methods and apparatus for retinal imaging

Info

Publication number
WO2014127134A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye
retina
camera
image
movement
Prior art date
Application number
PCT/US2014/016272
Other languages
English (en)
Inventor
Matthew Lawson
Ramesh Raskar
Original Assignee
Massachusetts Institute Of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/766,751 external-priority patent/US9060718B2/en
Application filed by Massachusetts Institute Of Technology filed Critical Massachusetts Institute Of Technology
Publication of WO2014127134A1

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14 - Arrangements specially adapted for eye photography

Definitions

  • the present invention relates generally to retinal imaging.
  • this invention comprises a device for retinal imaging.
  • the imaging device may be at least partially head-worn, and may allow a user to self-image his or her retina (i.e., use the device to capture images of a retina of the user).
  • the imaging device displays real-time visual feedback to one eye (the stimulus eye) of a user.
  • the visual feedback is indicative of (i) the pupillary axis of the user's eye that is being imaged (the test eye) and (ii) the optical axis of the device's camera.
  • an LCD in the device may display visual feedback that comprises a circle representative of the optic disc of the test eye (which serves as an approximate indication of the pupillary axis) and a square indicative of the center of the camera (which serves as an approximate indication of the optical axis of the camera).
  • This real-time visual feedback guides the user as the user changes direction of gaze in order to self-align the two axes.
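  • The feedback loop just described can be sketched in a few lines of OpenCV. This is a minimal illustration, not the patent's implementation: it assumes the optic disc can be approximated as the brightest blob in the fundus frame, and it overlays a circle at that blob and a square at the frame center (a proxy for the camera's optical axis).

```python
import cv2

def draw_alignment_feedback(frame):
    """Overlay a circle at the (approximate) optic disc and a square at
    the camera's optical axis, so the user can self-align the two."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (51, 51), 0)   # suppress small bright specks
    _, _, _, disc_center = cv2.minMaxLoc(blurred)   # brightest point ~ optic disc
    h, w = gray.shape
    cx, cy = w // 2, h // 2                         # camera's optical axis
    cv2.circle(frame, disc_center, 40, (0, 255, 0), 2)            # optic disc proxy
    cv2.rectangle(frame, (cx - 40, cy - 40), (cx + 40, cy + 40),
                  (255, 0, 0), 2)                                 # camera center
    return frame
```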
  • the imaging device displays a video of moving visual stimuli to the stimulus eye.
  • the user's stimulus eye tracks these moving stimuli.
  • the test eye moves (rotates) in a similar path.
  • a camera in the device captures multiple images of different portions of the retina of the test eye. Each of these images may capture only a small portion of the retina.
  • These images are processed and stitched together to form an image of a large area of the retina. This large field of view (FOV) image of the retina can be displayed to the user in real time.
  • As the test eye rotates (while bi-ocularly coupled to the stimulus eye), the test eye moves into many rotational positions in which the test eye is "off-axis" with respect to the camera.
  • an eye is "off-axis” with respect to a camera if the optical axis of the camera is not pointed at the pupil of the eye; and
  • an eye is "on-axis” with respect to a camera if the optical axis of the camera is pointed at the pupil of the eye.
  • the camera has a wide FOV, and thus can capture an image of at least a small part of the retina, even when the test eye is off-axis.
  • Computational photography techniques are used to process the multiple images and to produce a mosaiced image. These techniques include (i) "Lucky" imaging, in which high-pass filtering is used to identify images that have the highest quality and to discard poorer-quality images; (ii) integrating images locally in time, by clustering similar images and spatially aligning them; and (iii) aligning, blending, and merging processed images into a mosaic.
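  • A minimal sketch of the "Lucky" imaging step follows, assuming sharpness can be scored by the variance of a high-pass (Laplacian) response; the frame format and keep fraction are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def lucky_select(frames, keep_fraction=0.2):
    """'Lucky' imaging: score each frame by its high-frequency content
    (variance of the Laplacian) and keep only the sharpest fraction."""
    scores = []
    for f in frames:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
    order = np.argsort(scores)[::-1]                  # best first
    n_keep = max(1, int(len(frames) * keep_fraction))
    return [frames[i] for i in order[:n_keep]]
```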
  • Indirect, diffuse illumination may be employed, rather than direct illumination of the retina through the pupil.
  • a cool-to-the-touch light source is pressed against the skin near the eye (e.g., against the skin on or adjacent to the eyelid).
  • the light source may be pressed against the skin adjacent to the lateral or medial canthus.
  • the light passes through the skin, other tissues, sclera and choroid to provide diffuse, indirect illumination of the retina.
  • the light source may comprise, for example, one or more light emitting diodes (LEDs).
  • the indirect illumination may be multi-directional.
  • an array of LEDs may be pressed against the skin at different points around the eyelid and sequentially illuminated, while a synchronized camera captures images of the retina. Shadows from light from different angles may highlight different features of the eye's anatomy.
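  • The sequential illumination scheme just described can be sketched as follows. This assumes Raspberry Pi-style GPIO control via the gpiozero library; the pin numbers and camera index are hypothetical.

```python
import cv2
from gpiozero import LED  # assumes Raspberry Pi-style GPIO; pins are hypothetical

leds = [LED(pin) for pin in (17, 27, 22)]  # LEDs at different points around the eyelid
cap = cv2.VideoCapture(0)

captures = []
for led in leds:
    led.on()                    # illuminate from one direction
    ok, frame = cap.read()      # grab a frame synchronized with this LED
    led.off()
    if ok:
        captures.append(frame)  # shadows differ per direction, highlighting anatomy
```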
  • the retina may be illuminated directly through the pupil.
  • the imaging device includes a large lens. As the test eye rotates, different areas of the large lens are used to image different portions of the retina. The narrow aperture of the pupil causes vignetting, so that light from the retina does not reach some portions of the large lens.
  • the indirect illumination and direct illumination embodiments of this invention are similar in at least the following respects: (i) as a preliminary step, visual feedback is provided to help the user self-align the eye and camera; (ii) later, bi-ocular coupling is exploited, by displaying a moving image to the stimulus eye in order to cause the test eye to rotate along a particular trajectory; (iii) multiple images of different areas of the retina are captured as the test eye rotates; (iv) computational photography techniques are used to process the images to create, in real time, a mosaic image of a large area of the retina; and (v) the camera can image a part of the retina while the test eye is off-axis.
  • a light field (“LF") camera may be used to image the retina.
  • either direct or indirect illumination may be employed.
  • in this embodiment, it is preferable that the eye be on-axis with respect to the camera.
  • a light field camera captures multiple images of the retina. These images may be synthetically refocused.
  • this invention (i) comprises an interactive, wearable device for self-imaging, i.e., a device configured for a person to capture and visualize images of the retina of an eye of that person; and (ii) simplifies the constraints imposed by traditional devices, which require a trained operator to precisely align and focus the optics, cameras, and illumination with the human eye.
  • a micro camera captures images close to the eye without complex optics
  • an LCD display on one eye allows the user to self-align the pupil with the camera, and to exploit the natural biological coupling of vision to control the other eye
  • computational photography techniques are applied to a set of images to improve the overall image quality and field-of-view
  • indirect diffuse LED illumination as well as programmable direct illumination allow unique form factors.
  • Figure 1 is a conceptual diagram of a retinal imaging device.
  • the device is configured for (a) displaying a stimulus to a first eye, in order to control the direction of gaze of a second eye, (b) indirectly illuminating the second eye, and (c) capturing an image of the retina of the second eye.
  • Figure 2 shows a (prior art) direct ophthalmoscope.
  • Figure 3 shows a (prior art) indirect ophthalmoscope.
  • Figure 4 shows how rotation of the eye can block the retinal view for a direct ophthalmoscope.
  • Figure 5 shows how rotation of the eye can block the retinal view for an indirect ophthalmoscope.
  • Figure 6 shows a (prior art) ophthalmoscope, after being translated and rotated to compensate for rotation of the eye.
  • Figure 7 shows a micro camera for retinal imaging, where the eye is on-axis with respect to the micro camera.
  • Figure 8 shows a micro camera for retinal imaging, where the eye is off-axis with respect to the micro camera.
  • Figures 9A and 9B show a single-lens, direct illumination imaging device.
  • In Figure 9A, the eye is on-axis with respect to the camera.
  • In Figure 9B, the eye is off-axis with respect to the camera.
  • Figure 10 shows a light field camera for retinal imaging
  • Figure 11 shows a light field camera capturing a defocused image of a retina.
  • Figure 12 is a diagram showing an eye and camera as a compound lens system.
  • Figure 13 is a diagram showing field of view.
  • Figure 14 is a diagram showing indirect illumination of an eye for retinal imaging.
  • Figure 15 is a photograph showing an LED providing indirect illumination of an eye for retinal imaging.
  • Figure 16 shows multiple LEDs for multidirectional, indirect illumination of an eye for retinal imaging.
  • Figure 17 shows a retinal imager with multidirectional, indirect illumination, being held close to an eye.
  • Figures 18A and 18B show visual displays for helping a subject self-align (a) the pupillary axis of the subject's test eye, and (b) the optical axis of a retinal imager.
  • In Figure 18A, the two axes are not aligned; in Figure 18B, they are.
  • Figures 19A and 19B illustrate use of bi-ocular coupling for a retinal imager.
  • In Figure 19A, a stimulus presented to a stimulus eye causes the pupillary axis of the test eye to be aligned with the optical axis of the camera.
  • In Figure 19B, a stimulus presented to a stimulus eye causes the pupillary axis of the test eye to be misaligned with the optical axis of the camera.
  • Figures 20A, 20B, 20C and 20D show trajectories over which stimuli may travel, when the stimuli are presented to a stimulus eye.
  • the trajectories are circles in Figure 20A, an array of dots in Figure 20B, a spiral in Figure 20C, and an infinity symbol in Figure 20D.
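  • These trajectories are easy to generate parametrically. A sketch follows; the spiral's turn count and the use of the lemniscate of Gerono as the "infinity symbol" are illustrative choices, not specified by the patent.

```python
import numpy as np

def trajectory(kind, n=500):
    """Parametric stimulus paths (normalized coordinates) in the spirit
    of Figures 20A-20D."""
    t = np.linspace(0, 2 * np.pi, n)
    if kind == "circle":
        return np.cos(t), np.sin(t)
    if kind == "spiral":
        r = t / (2 * np.pi)                      # radius grows linearly with angle
        return r * np.cos(3 * t), r * np.sin(3 * t)
    if kind == "infinity":
        return np.cos(t), np.sin(t) * np.cos(t)  # lemniscate of Gerono
    raise ValueError(kind)
```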
  • Figures 21A, 21B and 21C illustrate bi-ocular coupling, at a point in time when approximately half of an "infinity symbol” trajectory has been displayed.
  • Figure 21C shows the stimulus eye rotating to follow a stimulus as it traces out an "infinity symbol”.
  • Figure 21B shows the camera imaging the test eye.
  • Figure 21A shows the area of the retina of the test eye that has been imaged as the stimulus moved through approximately half of an "infinity symbol” trajectory.
  • Figures 22A, 22B and 22C illustrate bi-ocular coupling, at a point in time when approximately all of an "infinity symbol” trajectory has been displayed.
  • Figure 22C shows the stimulus eye rotating to follow a stimulus as it traces out an "infinity symbol”.
  • Figure 22B shows the camera imaging the test eye.
  • Figure 22A shows the area of the retina of the test eye that has been imaged as the stimulus moved through approximately all of an "infinity symbol” trajectory.
  • Figure 23 shows a prototype of a retinal imaging device, mounted on eyeglass frames.
  • the device includes an LED for indirect illumination, a camera for imaging the retina of the test eye, and a visual display for displaying stimuli to the stimulus eye.
  • Figure 24 shows another prototype. It is similar to that shown in Figure 23.
  • Figure 25 shows a subject viewing stimuli on a remote screen (on a desk in front of the subject).
  • Figure 26 shows an exploded view of a direct illumination device.
  • Figure 27 shows a perspective view of a direct illumination device.
  • Figure 28 is a flowchart of image processing.
  • Figures 29A and 29B illustrate the use of image integration for retinal imaging.
  • Figure 29A is a set of seven retinal images;
  • Figure 29B is an image produced by integrating that set.
  • Figures 30A, 30B and 30C illustrate multispectral retinal imaging.
  • the images in Figures 30A and 30B were captured with short and long wavelengths, respectively.
  • Figure 30C is a composite of the two.
  • Figures 31A and 31B illustrate the use of multidirectional illumination in retinal imaging. Figures 31A and 31B, respectively, were taken with illumination at different angles.
  • Figures 32A and 32B show a light field camera created by placing a micro lenslet array in front of an optical sensor.
  • the camera is imaging the retina of an eye focused at infinity.
  • the camera is imaging the retina of a near-sighted or far-sighted subject.
  • Figure 33 is an exploded view of a light field camera for retinal imaging.
  • Figure 34 is a perspective view of a light field camera for retinal imaging.
  • Figure 35 shows a light field camera being held close to an eye for retinal imaging.
  • Figures 36A, 36B, 36C show three refocused retinal images from a light field camera.
  • the retinal tissue is an excellent absorber of light.
  • the average reflectivity of the retina (at the fovea) varies from 0.23% (blue) to 3.3% (red).
  • the reflectivity from the surface of the cornea alone can reach 8%, interfering with the imaging process.
  • Sclera: The eye is mostly enclosed within the scleral tissue. The only clear window is a narrow pupil entrance, making the eye difficult to illuminate.
  • Alignment: The pupillary axis of the eye is hard to align with the optical axis of an imaging device. The alignment is easily lost when the eye rotates about its center.
  • Dynamic variation: The eye is a living organ. It can move, change its optical power, and change its pupil aperture, resulting in alignment error and motion blur, focus blur, and illumination variation, respectively.
  • Imaging of the retina is even harder with prior-art technology. Imaging of the retina usually involves dilating the pupils, securing the head position, and using non-trivial mechanical controls to align the camera with the desired viewing axis. These tasks (including, in particular, optical alignment) are very difficult to self-administer with conventional technologies.
  • this invention can be easily used for self-administered retinal imaging: i.e., a subject can use the invention to capture images of the retina of the subject's eye.
  • Programmable stimulus and bi-ocular coupling are used to guide both gaze and focus.
  • the stimulus causes the subject to move eye gaze (rotate the eyes), while images are captured with simple optics.
  • the images are processed with computational techniques.
  • FIG. 1 is a conceptual diagram of a bi-ocular device for retinal imaging, in an exemplary implementation of this invention.
  • the bi-ocular device 101 comprises (a) a display screen 103 for displaying stimuli 104 to a first eye (the "stimulus eye") 105, in order to control the direction of gaze of a second eye (the “test eye") 107, (b) an LED 109 for indirectly illuminating the test eye 107, and (c) a CMOS camera 111 for capturing images of the retina of the test eye 107.
  • the display screen 103, LED 109, and camera 111 are mounted on eyeglasses 115.
  • Figures 2 and 3 show prior art examples of a direct ophthalmoscope and indirect ophthalmoscope, respectively.
  • Traditional direct and indirect ophthalmoscopes use special optics (including, at least in some cases, a light source 201, 301, mirror 203, 303 and lenses 205, 207, 305, 307, 309) to shine light into the eye and image the retina through a small pupil entrance 221, 321.
  • Special care is taken to prevent the illumination from obscuring the image due to reflection from the cornea, iris and sclera.
  • An observer 223, 323 sees the retinal image through the ophthalmoscope.
  • the observer 223, 323 is human (e.g., an ophthalmologist or optometrist); alternately, the observer may be a camera.
  • the light illuminating the retina enters the eye through the pupil.
  • Figures 4 and 5 illustrate how rotation of the eye (i.e., changing direction of gaze) can create a misalignment problem for a traditional ophthalmoscope.
  • In Figures 4 and 5, the eye has rotated, causing the optical axis 403, 503 of the ophthalmoscope to no longer pass through the pupil 221, 321.
  • part of the sclera 405, 505 blocks light from the illumination source 201, 301 and blocks the observer 223, 323 from seeing (or capturing an image of) the retina.
  • When the optical axis of the ophthalmoscope does not go through the pupil of the subject's eye, the ophthalmoscope is blocked from imaging the retina.
  • a prior art solution to the misalignment problem is to move a conventional ophthalmoscope so that its optical axis 403 passes through the pupil 221 again.
  • the ophthalmoscope can successfully image the retina, because the ophthalmoscope has been moved.
  • the ophthalmoscope has been translated and rotated (about an axis that intersects the ophthalmoscope) so that (1) the optical axis 403 of the ophthalmoscope passes through the pupil 221 and (2) light from the illumination source 201 passes through the pupil 221.
  • the ophthalmoscope needs to both translate and rotate in order to keep its optical axis pointed at the pupil. This is a difficult task even for a trained practitioner and pupil dilation is often needed.
  • a conventional fundus camera can use a complex 5-axis motion in order to keep its optical axis pointed at the pupil, to compensate for a change in the eye's direction of gaze.
  • the fundus camera can move in an arc, as it rotates about the center of the pupil.
  • This 5-axis motion can be achieved with a two-axis goniometer plus a 3-axis translation.
  • a conventional fundus camera, with its precise motion control, can be used for non-mydriatic (without pupil dilation) retinal imaging.
  • Conventional ophthalmoscopes can be direct or indirect. Both are afocal systems in which the final image is created by the eye of the observer. The difference between the two is that indirect ophthalmoscopes use relay optics, which provide greater flexibility for imaging and illumination.
  • Conventional retinal cameras are focal systems that create a real image of the retina on the image plane via relay optics. The reasons are partly historical (it allows adding a camera to an afocal ophthalmoscope) and partly technical (it supports greater flexibility to introduce extra imaging and illumination).
  • a user wears a small imaging device that is embedded in a pair of glasses or other head-worn display;
  • a video displays a moving visual stimulus at a predefined focusing distance (e.g., infinity),
  • this stimulus is presented to one eye and the user is asked to track the stimulus,
  • biological bi-ocular coupling causes the second eye to match the gaze and focus of the first eye;
  • an imaging device (which is compatible with eye rotation) is used to capture many images of the retina;
  • lucky imaging selects only the good frames; and (vii) the good frames are mosaiced together.
  • an imager is not added to a traditional ophthalmoscope. Instead: (i) eye-movement is guided to capture a wider field of view; (ii) simplified optics continue to image the retina even when the camera axis is not pointed at the pupil; and (iii) computational tools are used to process the captured images.
  • the retina is illuminated indirectly through the skin, sclera and other tissues, instead of directly through the pupil.
  • a micro-camera with a wide field of view (FOV) captures images of the retina.
  • the retina is exposed to scattered, diffuse, reflected light from the skin and eye tissues. The intensity of this indirect light is less than the intensity of direct illumination, allowing the pupil to naturally dilate (in a dark environment), thereby producing a wider FOV of the retina.
  • this indirect illumination is not limited by the pupil aperture and therefore is not subject to illumination focusing and misalignment problems.
  • the wide FOV of the camera (not to be confused with the FOV of the retina) enables the camera to image the retina even when the camera's optical axis is not pointed at the pupil (as long as the pupil is within the FOV of the camera).
  • the combination of indirect illumination with wide FOV camera results in a system that (i) has relatively low magnification (because of the wide FOV of the camera), (ii) has relatively large FOV, and (iii) can successfully image the retina even when the system's optical axis is not pointed at the pupil and the illumination does not pass through the pupil.
  • both the wide-FOV micro-camera and the indirect illumination are tolerant of eye rotation.
  • a relatively small retina image traverses the sensor plane as the eye moves. These small retinal images are later stitched together into a wide FOV retinal image.
  • Figures 7 and 8 are conceptual diagrams of the first prototype.
  • the retina is not illuminated directly through the pupil. Instead, an LED 701 illuminates the retina indirectly by shining light through the sclera (and other tissues, including skin).
  • a micro camera 703 takes multiple images of the retina, as the eye's direction of gaze changes.
  • the eye is on-axis with respect to the micro camera (i.e., the micro camera's optical axis points at the pupil).
  • the eye is off-axis with respect to the micro camera (i.e., the micro camera's optical axis does not point at the pupil). Even though the eye is off-axis, the micro camera has such a wide field of view that it can successfully image a portion of the retina.
  • the micro camera takes images of different, small parts of the retina. These small images are later mosaiced to form a wide FOV retinal image.
  • Second Prototype: Direct Illumination
  • the retina is illuminated directly through the pupil. Direct illumination is better suited for people with denser skin pigmentation and for some imaging applications such as narrow band multi-spectral imaging.
  • the second prototype uses a simple lens (no relay optics), a beam splitter, and a pair of crossed polarizers. A second focusing lens and a changeable aperture are used for steering the illumination's position and direction.
  • a larger lens (12 mm diameter) is used.
  • When the eye moves off-axis (i.e., when the eye moves so that the optical axis of the second prototype does not pass through the eye's pupil), different parts of the larger lens are used for imaging. This happens automatically as other rays become vignetted.
  • the illumination is preferably focused as much as possible at the pupillary axis to avoid unwanted reflections.
  • the aperture shape and location of the illumination may be changed to match the motion path indicated by the bi-ocular stimulus. This can be done statically, by using an aperture shaped like the motion path (which reduces the unwanted illumination from a 2D patch to a 1D path), or dynamically, by using a pico-projector as the light source.
  • Figures 9A and 9B are conceptual diagrams of the second prototype.
  • the second prototype uses direct illumination for retinal imaging.
  • the second prototype comprises a light source 901, a simple lens 903, a pair of crossed polarizers 905, 907, a beam splitter 909 and a sensor 911.
  • the second prototype also includes a larger lens 921.
  • the eye's direction of gaze changes (e.g., from on-axis to off-axis)
  • different parts of the larger lens 921 are used for imaging.
  • the second prototype can capture an image of part of the retina, even when the eye is off-axis.
  • the eye is on-axis with respect to the camera (the optical axis of the camera is pointed at the pupil 923).
  • the eye is off-axis with respect to the camera (the optical axis of the camera is not pointed at the pupil 923).
  • the light source 901 may move or appear to move (relative to the subject's head as a whole) in order that the illumination continue to enter the eye through the pupil.
  • This movement of the light source 901 may be implemented in different ways, including (i) physical translation of a light source, (ii) changing the direction of light projected from a projector, (iii) turning on and off different light sources in an array of light sources, or (iv) moving or otherwise changing one or more optical elements that guide light from the light source. Further, this movement of the light source may occur while the camera sensor (or camera housing) is motionless relative to the subject's head, as a whole.
  • movement of a light source includes apparent movement of a light source or visual stimuli.
  • bi-ocular coupling may be exploited when controlling the movement of the light source. Due to bi-ocular coupling, the rotation of the test eye may be guided by rotation of the stimulus eye as the latter tracks a moving visual stimulus. Thus, the path of movement of the light source (which compensates for rotation of the test eye) may be guided by or correlated with the trajectory of movement of the visual stimuli presented to the stimulus eye.
  • one or more processors may: (i) control the path of movement of the light source that illuminates the test eye, based on the trajectory of movement of the visual stimuli presented to the stimulus eye; (ii) control the trajectory of movement of the visual stimuli presented to the stimulus eye, based on the path of movement of the light source that illuminates the test eye, or (iii) control both the path of movement of the light source that illuminates the test eye and the trajectory of movement of the visual stimuli presented to the stimulus eye, based on the fact that rotation of the test eye and stimulus eye are correlated due to bi-ocular coupling.
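  • A sketch of such coupled control follows. The callbacks (show_stimulus, capture_frame) and the LED-selection rule are hypothetical stand-ins for the display and camera layers; the point illustrated is only that one trajectory index can drive both the displayed stimulus and the choice of light source.

```python
import numpy as np

def run_coupled_capture(trajectory_xy, leds, show_stimulus, capture_frame):
    """Coupled control: the stimulus position drives the stimulus eye
    directly and, via bi-ocular coupling, the test eye, so the same
    trajectory point also selects which light source best matches the
    test eye's predicted rotation."""
    frames = []
    for x, y in trajectory_xy:
        show_stimulus(x, y)                  # stimulus eye tracks this point
        angle = np.arctan2(y, x)             # predicted gaze direction of test eye
        idx = int((angle + np.pi) / (2 * np.pi) * len(leds)) % len(leds)
        leds[idx].on()                       # light source follows eye rotation
        frames.append(capture_frame())
        leds[idx].off()
    return frames
```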
  • the third prototype uses a light field (LF) camera. LF cameras are also known as plenoptic cameras.
  • a microlens array is positioned in front of the camera sensor plane (at a distance from the sensor plane equal to the focal length of the microlens), for light field imaging.
  • a sensor, including its microlens array, from a Lytro® LF camera is used.
  • the LF camera may capture a defocused image 1107.
  • the defocused image 1107 can be refocused later.
  • LF imaging can also be used to compensate for change in eye-focus.
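  • Synthetic refocusing from a light field can be sketched as a shift-and-add over sub-aperture views; this is the textbook technique, not Lytro's proprietary pipeline, and the integer-pixel shifts are a simplification.

```python
import numpy as np

def refocus(subaperture, alpha):
    """Synthetic refocusing by shift-and-add: each sub-aperture view
    (u, v) is shifted in proportion to its offset from the aperture
    center, and the stack is averaged. 'alpha' selects the focal plane.

    subaperture: array of shape (U, V, H, W) of grayscale views."""
    U, V, H, W = subaperture.shape
    cu, cv_ = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - cu)))   # shift proportional to aperture offset
            dv = int(round(alpha * (v - cv_)))
            out += np.roll(subaperture[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```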
  • the third prototype (light field camera) is most useful where the optical axis of the LF camera and the pupillary axis of the eye are aligned.
  • A CMOS camera with a small aperture is employed.
  • the camera can be easily placed close to the eye, and thus (a) can capture large field of view (FOV) images of the retina without dilating the pupil, and (b) can be easily aligned.
  • magnification and FOV vary depending on the particular implementation of this invention.
  • magnification for this CMOS camera can be computed as follows:
  • the field of view can be calculated as follows. Assume an exit pupil of the eye of 3 mm (which limits the FOV for a camera lens aperture of similar size), as shown in Figure 13. The pupil is located 6.9 mm from the center of projection, which, in this example, provides a field of view 1301 of approximately 19° if measured from the camera's center of projection, or a field of view 1303 above 38° if measured from the center of the eye.
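  • For illustration, the subtended-angle geometry can be checked with a simple pinhole model. Note that this simplified model yields roughly 24.5° rather than the ~19° quoted above, which comes from the patent's fuller optical model; the sketch shows only the basic geometry.

```python
import math

# Pinhole estimate of the angle subtended by the exit pupil, using the
# numbers quoted above (illustrative only).
pupil_diameter_mm = 3.0   # exit pupil of the eye (limits the FOV)
distance_mm = 6.9         # pupil to the camera's center of projection

fov_deg = 2 * math.degrees(math.atan((pupil_diameter_mm / 2) / distance_mm))
print(f"FOV from camera's center of projection: {fov_deg:.1f} degrees")
```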
  • Resolution limit and DOF
  • the just-resolved distance B′ of a diffraction-limited camera can be estimated by B′ = 1.22 λ (f/#) (Rayleigh criterion), where λ is the wavelength of light, and f/# is the f-number of the imaging system.
  • Airy-disk diameter (which is twice the size of B') is commonly used as the Circle of Confusion (CoC) for depth of field estimation.
  • the final derived expression for the depth of field is given by Δz′ = 2 · CoC · (f/#).
  • Using this expression, the depth of field Δz′ for this prototype is computed as approximately 1441 µm, which is about four times the thickness of the retina. Empirical testing of a prototype of this invention demonstrates that the prototype can resolve spatial frequencies up to 2.5 cyc/mm; this corresponds to a resolution of approximately 200 µm.
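  • The diffraction-limit and depth-of-field arithmetic can be reproduced as follows. The wavelength and f-number are assumptions: the f-number here is chosen so that the result lands near the 1441 µm quoted above, since the patent does not state it on this page.

```python
# Diffraction-limited resolution and depth of field, following the
# expressions above (input values are illustrative assumptions).
wavelength_um = 0.55   # green light, in micrometers (assumed)
f_number = 23.0        # assumed; chosen to land near the quoted DOF

b_prime_um = 1.22 * wavelength_um * f_number   # Rayleigh criterion B'
coc_um = 2 * b_prime_um                        # Airy-disk diameter as CoC
dof_um = 2 * coc_um * f_number                 # depth of field estimate

print(f"Just-resolved distance B': {b_prime_um:.1f} um")
print(f"Depth of field:            {dof_um:.0f} um")  # ~1420 um, near 1441 um
```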
  • the retina is illuminated indirectly through the sclera (and skin and other tissues), rather than directly through the pupil.
  • An efficient, cool-to-the-touch, solid-state light source (e.g., one or more LEDs) is used.
  • a 1W dichromatic white LED with a luminous efficacy of 120 lm/W is sufficient for indirect retinal diffuse illumination. This LED can maintain a continuous low energy light source for live streaming video, and high resolution stills.
  • light passes not only through the sclera and onto the retina, but also scatters around the ocular tissue to illuminate the choroid.
  • Figure 14 is a diagram showing indirect illumination of an eye for retinal imaging.
  • An LED 1401 is pressed against the skin 1403.
  • Light from the LED 1401 travels through the skin 1403, other tissue 1405, sclera 1407 and choroid 1409 to provide indirect diffuse illumination of the retina 1411.
  • Figure 15 is a photograph showing an LED providing indirect illumination for an eye for retinal imaging.
  • Figure 16 shows a retinal imager 1601. Multiple LEDs (e.g., 1603, 1605, 1607) provide multidirectional, indirect illumination of an eye during retinal imaging.
  • Figure 17 shows a retinal imager 1701 with multiple LEDs (e.g., 1703, 1705) being held close to an eye. The multiple LEDs provide multidirectional, indirect illumination during retinal imaging.
  • Figures 31A and 31B illustrate the use of multidirectional illumination in retinal imaging.
  • Figures 31A and 31B are photographs that were taken with illumination at different angles.
  • a display screen displays to one eye of a subject (the stimulus eye) a visual indication of both (1) the pupillary axis of the subject's second eye (the test eye) and (2) the optical axis of the camera. This real-time visual feedback helps the subject self-align these two axes.
  • Figures 18A and 18B show a visual display for helping a subject self-align the pupillary axis of the subject's test eye and the optical axis of the retinal imager.
  • the visual display in Figure 18A shows that the two axes are not aligned. This gives a visual cue to the subject. The subject can then change direction of gaze to align the two axes.
  • the visual display in Figure 18B shows the results of such self-alignment: the two axes are now aligned.
  • a circle indicates the optic disc of the eye (and thus the approximate location of the pupillary axis of the eye).
  • the square 1803 indicates the camera's optical axis (which intersects the center of the square).
  • calibration of the optic disc is actuated through user feedback.
  • the subject observes the test eye's optic disc in real time and aligns its position within the wire frame before the stimulus pattern is presented.
  • the optic disc is a simple feature for detection and triggers the automated stimulus pattern.
  • a video of moving visual stimuli is displayed to the stimulus eye.
  • the stimulus eye tracks these moving stimuli.
  • the direction of gaze of the test eye moves in a similar pattern as the stimulus eye.
  • the camera captures multiple, small images of different portions of the retina. These multiple images are later mosaiced together.
  • the test eye is off-axis (i.e., the optical axis of the camera does not point at the pupil).
  • the camera has a wide FOV.
  • the camera can capture multiple images of different small areas of the retina as the test eye rotates, even when the test eye is off axis.
  • a user feedback mechanism dramatically simplifies alignment.
  • coupled gaze control causes the test eye to rotate.
  • a wide field of view of the retina can be reconstructed from many small (narrow field of view) images of the retina taken as the test eye rotates.
  • Exemplary implementations of this invention exploit bi-ocular coupling, which synchronizes accommodation and gaze of the two eyes. Parallel motion in the eye being examined is induced by providing a stimulus to the other eye.
  • the above coupled gaze control can reduce moving parts, complexity, cost or need for calibration equipment.
  • Figures 19A and 19B illustrate the use of bi-ocular coupling for a retinal imager.
  • In Figure 19A, a stimulus 1901 presented to a stimulus eye 1903 causes the pupillary axis 1905 of the test eye 1907 to be aligned with the optical axis of the camera 1911.
  • In Figure 19B, a stimulus 1913 presented to a stimulus eye 1903 causes the pupillary axis 1905 of the test eye 1907 to be misaligned with the optical axis of the camera 1909.
  • a stimulus is presented to one eye (the stimulus eye) as the other eye (the test eye) is imaged.
  • the bi-ocular coupling of the two eyes causes the test eye to rotate, thereby obtaining multiple images of different parts of the retina.
  • Figures 20A, 20B, 20C and 20D show four examples of trajectories over which stimuli may travel, when a video display presents stimuli to a stimulus eye.
  • the trajectories are circles 2001 in Figure 20A, an array of dots 2003 in Figure 20B, a spiral 2005 in Figure 20C, and an infinity symbol 2007 in Figure 20D.
  • Figures 21A, 21B and 21C illustrate bi-ocular coupling, at a point in time when approximately half of an "infinity symbol" trajectory has been displayed.
  • Figure 21C shows the stimulus eye 2101 rotating to follow a visual stimulus 2103 as it traces out an "infinity symbol”.
  • Figure 21B shows the camera 2107 imaging the test eye 2109, while an LED 2111 provides indirect, diffuse illumination of the retina 2113 of the test eye 2109.
  • Figure 21A shows the area 2115 of the retina of the test eye that has been imaged as the stimulus moved through approximately half 2105 of an "infinity symbol" trajectory.
  • Figures 22A, 22B and 22C illustrate bi-ocular coupling, at a point in time when approximately all of an "infinity symbol” trajectory has been displayed.
  • Figure 22C shows the stimulus eye 2201 rotating to follow a visual stimulus 2203 as it traces out an "infinity symbol”.
  • Figure 22B shows the camera 2207 imaging the test eye 2209, while an LED 2211 provides indirect, diffuse illumination of the retina 2213 of the test eye 2209.
  • Figure 22A shows the area 2215 of the retina of the test eye that has been imaged as the stimulus moved through approximately all 2205 of an "infinity symbol" trajectory.
  • a first prototype of this invention comprises eyeglasses that house (i) a micro-camera to capture images of the retina of a test eye, (ii) a visual display for displaying stimuli to a stimulus eye, and (iii) a light source for indirect diffuse illumination of the retina through the sclera (and skin and other tissues).
  • the first prototype is shown in Figures 1, 7, 8, 23 (and a variation of the first prototype is shown in Figures 24, 25).
  • a second prototype of this invention comprises a compact, monocular device which implements direct illumination through the pupil (as shown in Figures 9A, 9B, 26, 27).
  • a third prototype of this invention comprises a light field sensor (as shown in Figures 10, 11, 33, 34, 35) in conjunction with indirect, diffuse illumination.
  • Chassis for the three prototypes were designed in computer-aided design software and 3-D printed in ABS plastic (acrylonitrile butadiene styrene).
  • the first and second prototypes include a Point Grey® Flea® 3 camera with 60 fps at 3.2 MP resolution, as well as a Microsoft® LifeCam Studio.
  • the third prototype, which is an LF (light field) device, uses the sensor of a Lytro® LF camera.
  • Illumination is delivered to the eye in two distinct ways: (1) indirect diffuse illumination through the soft tissue surrounding the eye; and (2) direct illumination delivered through the pupillary aperture.
  • Two illumination drivers are used: a 100 lumen DLP (digital light processing) LED projector (AAXA® P3 Pico Pocket Projector), which can deliver programmable, multispectral illumination in a single form factor; and 100 lumen RGB LEDs, as well as amber and NIR (near infrared) LEDs, coupled to a light-focusing mechanism and chassis that couples to a light pipe for delivery to the imaging device.
  • the light focusing chassis is constructed of ABS (acrylonitrile butadiene styrene) which houses a plastic condensing lens set at one focal length to the end of the fiber bundle.
  • indirect illumination is used in the first prototype, direct illumination in the second prototype, and either direct or indirect illumination in the third prototype.
  • Indirect illumination: In the first prototype and (in some cases) the third prototype, indirect illumination is delivered to the eye via a fiber bundle. The end of the bundle is held to the soft tissue surrounding the eye. The fiber bundle is then connected directly to the pico projector.
  • Direct illumination: In the second prototype and (in some cases) the third prototype, the light is delivered to a 6.0 mm dia. x 12 mm FL, VIS-NIR coated plano-convex lens (Edmund® Stock No. 45-467) at a distance of one focal length away. This lens condenses the light into a narrow beam, which is then polarized with a linear polarizing laminated film. The polarized light is then delivered to a 50R/50T plate beamsplitter (Edmund® Stock No. 46-606) oriented at 45 degrees to the imaging axis, effectively superimposing the illumination and imaging paths.
  • The superimposed light then passes through a 12 mm dia., 12 mm focal length hybrid VIS-coated aspheric lens (Edmund® Stock No. 65-997), which focuses the light onto the pupillary plane (and also serves as the imaging lens).
  • a second linear polarizing film is placed in front of the camera, behind the illumination plane. Cross polarization significantly reduces reflections from the cornea and surrounding eye tissue.
  • Bi-ocular coupling: In the first and second prototypes, a camera and visual display are embedded in a head-worn frame (eyeglasses), to facilitate bi-ocular coupling during imaging of the test eye.
  • An inward-facing visual display (for displaying stimuli to the stimulus eye) was fabricated from (i) an LCD and (ii) a focusing objective removed from a Vuzix® WRAP 920AR display.
  • Figure 23 shows the first prototype. It comprises head-worn apparatus 2300 mounted on eyeglass frames 2301.
  • the device includes an LED 2303 for indirect illumination, a camera 2305 for imaging the retina of a test eye, and a visual display 2307 for displaying stimuli to the stimulus eye.
  • Figure 24 shows an alternate version of the first prototype.
  • the head-worn apparatus 2400 is similar to that shown in Figure 23: an LED 2403 and camera 2405 are mounted on eyeglass frames 2401. However, in this alternate version, one side 2407 of the eyeglasses is transparent, to allow the stimulus eye to view stimuli displayed on a remote screen.
  • Figure 25 portrays a subject wearing the head-worn apparatus 2400 shown in Figure 24.
  • the subject 2501 is viewing stimuli 2503 on a remote screen 2505.
  • The remote screen 2505 is part of a monitor 2507 that rests on a desk 2509 in front of the subject.
  • Figures 26 and 27 show an exploded view and a perspective view, respectively, of the second prototype.
  • the second prototype employs direct illumination (i.e., illumination of the retina directly through the pupil).
  • the second prototype includes a Point Grey® Flea® 3 camera 2601, a Flea® 3 mount 2603, focusing springs 2605, 2606, lens 2607, lens chassis 2609, focusing screws 2611, 2612, beam splitter 2613, cover 2615, eye cup mount 2617, fiber cable adapter 2619 and mask slots 2621.
  • the horizontal mask slots 2621 on the handle are used to enter various aperture shapes and sizes for illumination.
  • Figure 28 is a flowchart of image processing, in exemplary implementations of this invention. The best images are selected from the captured dataset, and similar images are clustered together. Clusters are integrated locally to increase image quality by reducing noise. Finally, corrected images are mosaiced together to create a large field of view panorama.
  • The composition pipeline first uses the 'Lucky' imaging approach, automatically identifying the images that contain the highest-quality features from a video sequence through the use of high-pass filtering. Selected images are then integrated locally in time to reduce noise and improve feature contrast, by clustering similar images and spatially aligning them. Traditional feature-matching algorithms are used for mosaicing. Key steps are shown in Figure 28.
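  • The final mosaicing stage can be sketched with standard feature matching, as the pipeline above describes. This sketch uses ORB features and a RANSAC homography via OpenCV; it assumes sufficient overlap between images, and the crude maximum-composite stands in for proper blending.

```python
import cv2
import numpy as np

def stitch_pair(base, new):
    """Mosaic step: align 'new' onto 'base' with ORB features and a
    RANSAC homography, then composite (blending omitted)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(base, None)
    k2, d2 = orb.detectAndCompute(new, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:100]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(new, H, (base.shape[1], base.shape[0]))
    return np.maximum(base, warped)    # crude composite
```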
  • In exemplary implementations, an eye-worn design, in conjunction with stimulus computation, reduces the number of variables in the computation.
  • Because the interactive computational stimulus is synchronized with the frame rate of the camera, the location and position of every image during capture can be spatially and temporally isolated, thereby reducing the need for a complex registration algorithm.
  • Figures 29A and 29B illustrate the use of image integration for retinal imaging.
  • Figure 29A is a set of seven retinal images;
  • Figure 29B is an image produced by integrating that set.
  • this invention can perform multispectral imaging.
  • Figures 30A, 30B and 30C illustrate such multispectral imaging.
  • the images in Figures 30A and 30B were captured with short and long wavelengths, respectively.
  • Figure 30C is a composite of the two.
  • a high speed camera synchronized with low intensity, multi-spectral LEDs captures a rapid succession of multispectral images.
  • Image registration is performed with a reference frame.
  • the images may be registered (post-capture) to a common "white" frame.
  • sequential capture of multispectral images can be achieved with simple optics.
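  • The registration of sequentially captured channels to a common reference frame can be sketched with phase correlation (a technique the processing description elsewhere in this document also mentions); the pure-translation model here is a simplifying assumption.

```python
import cv2
import numpy as np

def register_channels(reference, channels):
    """Register each spectral channel to a common reference frame using
    phase correlation, then stack into a composite. 'reference' could be
    the common 'white' frame mentioned above."""
    ref = np.float32(reference)
    aligned = []
    for ch in channels:
        (dx, dy), _ = cv2.phaseCorrelate(ref, np.float32(ch))
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])   # undo the measured shift
        aligned.append(cv2.warpAffine(ch, M, (ch.shape[1], ch.shape[0])))
    return np.dstack(aligned)
```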
  • a conventional single shot device uses complicated optics, coupled with beam splitters, filters and polarization (i) to direct the light into the eye, and (ii) to capture and optically demultiplex multiple spectral channels in a single instant.
  • a light field camera (also known as a plenoptic camera) is used to image the retina.
  • Figures 32A and 32B show a light field camera created by placing a micro lenslet array in front of an optical sensor.
  • the LF camera includes a microlens array 3201 and a sensor 3203.
  • the camera is imaging the retina of an eye 3205 focused at infinity.
  • the camera is imaging the retina of a near-sighted eye 3207.
  • Figures 33 and 34 show an exploded view and a perspective view, respectively, of the third prototype.
  • the third prototype comprises a light field camera.
  • the third prototype includes a Lytro® light field sensor 3301, a Lytro® mount 3303, focusing springs 3305, 3306, lens 3307, lens chassis 3309, and focusing screws 3311, 3312.
  • Figure 34 shows a case 3401 for the light field camera.
  • Figure 35 shows a subject 3501 holding a plenoptic camera 3503 up close to a test eye for retinal imaging.
  • multiview capture with a plenoptic camera ensures that nearly all light exiting the pupil is captured, and noise can be reduced computationally.
  • Light field capture can allow for synthetic refocusing. This synthetic refocusing can be used, for example: (i) to give depth perception; (ii) to eliminate the need for an auto-focus assembly; (iii) to allow better focal-stack analysis; and (iv) to adjust for changes in focus of the test eye itself. The fourth use can be very helpful, since the focus of the eye being examined can change frequently.
  • Figures 36A, 36B, 36C show images 1, 2 and 5 of five refocused images from a single light field image captured by the third prototype.
  • Image 2 (Figure 36B) recovers the focused features.
  • this invention can track eye movement.
  • this invention can (by using eye tracking and bi-ocular coupling), examine the steering and motion of the test eye. The results of this examination can be used to identify potential abnormalities in binocular eye function.
  • adaptive stimulus control for optimized steering and mapping of the eye is used, and real-time graphics are generated based on the properties of collected regions.
  • the adaptive stimulus control provides algorithmic control of graphics, and optimizes steering and mapping of the eye, based on collected regions.
  • one or more computer processors are specially adapted: (1) to control the operation of hardware components of the retinal imaging device, including the light source, camera, and visual display; (2) to perform image processing of retinal images, including high-pass filtering to identify the images that have the highest quality and to discard poorer-quality images, integrating images locally in time by clustering similar images and spatially aligning them (using phase correlation), and aligning, blending, and merging processed images into a mosaic; (3) to receive signals indicative of human input; (4) to output signals for controlling transducers for outputting information in human-perceivable format; and (5) to process data, perform computations, and control the read/write of data to and from memory devices.
  • the one or more processors may be located in any position within or outside of the retinal imaging device. For example: (1) at least some of the one or more processors may be embedded within or housed together with other components of the device, such as the camera, electronic display screen or eyeglasses, and (2) at least some of the one or more processors may be remote from other components of the device.
  • the one or more processors may be connected to each other or to other components in the retinal imaging device either: (1) wirelessly, (2) by wired connection, or (3) by a combination of wired and wireless connections. Rectangles 117, 931 and 1031 in Figures 1, 9A, 10 each, respectively, represent one or more of these computer processors.
  • the term "camera” shall be broadly construed.
  • any of the following is a camera: (i) a traditional camera, (ii) an optical sensor, and (iii) a light field camera (also known as a plenoptic camera).
  • if A comprises B, then A includes B and may include other things.
  • an electronic screen shall be construed broadly, and includes any electronic device configured for visual display.
  • an electronic screen may be either flat or not flat.
  • horizontal and vertical shall be construed broadly.
  • horizontal and vertical may refer to two arbitrarily chosen coordinate axes in a Euclidian two dimensional space.
  • the term "indicative" shall be construed broadly.
  • visual feedback is “indicative" of the pupillary axis of an eye if the feedback shows or represents either (a) the pupillary axis, (b) any proxy for the pupillary axis, or (c) any symbol or approximation of the pupillary axis.
  • visual feedback that shows an optic disc of an eye is “indicative" of the pupillary axis of the eye.
  • mosaic shall be construed broadly.
  • mosaic includes any image created by fusing, combining, stitching together or joining a set of multiple images, at least some of which multiple images capture different areas of a scene (or of an object).
  • the images in the set (and the mosaic image created from them) may be of any shape and size.
  • the shape and size of the images may vary from image to image within the set.
  • a panoramic image stitched together from multiple smaller images is a mosaic.
  • An eye is "off-axis" with respect to a camera if the optical axis of the camera is not pointed at the pupil of the eye.
  • An eye is "on-axis" with respect to a camera if the optical axis of the camera is pointed at the pupil of the eye.
  • "A or B" is true if A is true, or B is true, or both A and B are true.
  • a calculation of "A or B” means a calculation of A, or a calculation of B, or a calculation of A and B.
  • a parenthesis is simply to make text easier to read, by indicating a grouping of words. A parenthesis does not mean that the parenthetical material is optional or can be ignored.
  • "Retinal self-imaging” means a human using artificial apparatus to capture an image of a retina in an eye of that human.
  • rotation of an eye means rotation of the eye about a point in the eye, which rotation may occur in more than one plane that intersects the point. Similar terms (e.g., “rotate the eye”) shall be construed in like manner.
  • This invention may be implemented in many different ways. Here are some non-limiting examples.
  • This invention may be implemented as apparatus comprising a plenoptic camera, which camera is configured for imaging of a retina of a human.
  • (1) the apparatus may further comprise one or more light sources configured to provide indirect, diffuse illumination of the retina, which illumination passes through at least skin and sclera of the human before reaching the retina; and (2) the one or more light sources may comprise a plurality of light sources, and each of the plurality of light sources, respectively, may be configured to be positioned at a different location adjacent to an eyelid or other facial skin of the human while providing the illumination.
  • This invention may be implemented as a method of imaging a retina of a first eye of a human, which method comprises, in combination: (a) using a camera to capture multiple images of different areas of the retina during a time period in which the first eye rotates through different rotational positions; and (b) using one or more processors to process the multiple images to create a mosaiced image of a region of the retina; wherein (i) the camera has an optical axis, and (ii) more than one of the multiple images are captured by the camera when the optical axis does not point at the pupil of the first eye.
  • the method may further comprise using one or more light sources to provide indirect, diffuse illumination of the retina, which illumination passes through at least skin and sclera of the human before reaching the retina;
  • the one or more light sources may comprise a plurality of light sources, and the method may further comprise positioning each of the plurality of light sources, respectively, at a different location adjacent to facial skin of the human while providing the illumination;
  • the method may further comprise using an electronic screen to display moving visual stimuli to a second eye of the human, which stimuli (i) induce the second eye to rotate as the second eye tracks the stimuli and (ii) by bi- ocular coupling, induce the first eye to also rotate;
  • the method may further comprise using an electronic screen to display real-time visual feedback to a second eye of the human, which feedback is indicative of the optical axis of the camera and the pupillary axis of the first eye;
  • the method may further comprise using an electronic screen (i) to display real-time visual feedback to the second eye, which feedback is indicative of the optical axis of
  • This invention may be implemented as apparatus comprising, in combination: (a) a camera, which camera is configured to capture multiple images of different areas of the retina of a first eye of a human during a time period in which the first eye rotates through different rotational positions; and (b) one or more processors, which one or more processors are configured to process the multiple images to create a mosaiced image of a region of the retina; wherein (i) the camera has an optical axis, and (ii) more than one of the multiple images are captured by the camera when the optical axis does not point at the pupil of the first eye.
  • the apparatus may be configured for retinal self-imaging; (2) the apparatus may further comprise one or more light sources, which light sources are configured to provide indirect, diffuse illumination of the retina, which illumination passes through at least skin and sclera of the human before reaching the retina; (3) the one or more light sources may comprise a plurality of light sources, and each of the plurality of light sources, respectively, may be configured to be positioned at a different location adjacent to facial skin of the human while providing the illumination; (4) the apparatus may further comprise an electronic screen, and the electronic screen may be configured to display moving visual stimuli to a second eye of the human, which stimuli (i) induce the second eye to rotate as the second eye tracks the stimuli and (ii) by bi-ocular coupling, induce the first eye to also rotate; (5) the apparatus may further comprise an electronic screen, and the electronic screen may be configured to display real-time visual feedback to a second eye of the human, which feedback is indicative of the optical axis of the camera and the pupillary axis of the first eye; (6) the apparatus
  • the light sources may be configured to sequentially illuminate the retina with different wavelengths of light at different times, respectively, while the camera captures synchronized images.
  • This invention may be implemented as a method of imaging a retina of a first eye of a human, which method comprises, in combination: (a) using a camera to capture multiple images of different areas of the retina during a time period in which the first eye rotates through different rotational positions; and (b) using one or more processors to process the multiple images to create a mosaiced image of a region of the retina; wherein (i) the multiple images include a first image and a second image; (ii) the first image is captured when the first eye is in a first rotational position;
  • (iii) the second image is captured when the first eye is in a second rotational position; (iv) the first eye is off-axis with respect to the camera in the first or the second rotational position;
  • the first image is formed by light that travels through a first region of a lens, and not through a second region of the lens; and
  • the second image is formed by light that travels through the second region of the lens, and not through the first region of the lens.
  • (1) the method may further comprise causing a light source that illuminates the retina to undergo a movement, relative to the human's head as a whole, during the time period; (2) the movement may comprise physical translation of the light source; (3) the movement may comprise changing direction of light projected from a projector; (4) the movement may comprise turning on and off different light sources in an array of light sources; (5) the movement may comprise moving or otherwise changing one or more optical elements that guide light from the light source; (6) the movement may occur while a sensor of the camera is motionless relative to the subject's head, as a whole; (7) the movement may occur while a housing of the camera is motionless relative to the subject's head, as a whole; (8) the method may further comprise using the one or more processors to control movement of a light source that illuminates the first eye, based on movement of visual stimuli presented to a second eye of the human; (9) the method may further comprise using the one or more processors to control movement of visual stimuli presented to a second eye of the human, based on movement of a light source that illuminates the first eye.
  • This invention may be implemented as a method of imaging a retina of a first eye of a human, which method comprises, in combination: (a) using a camera to capture multiple images of different areas of the retina during a time period in which the first eye rotates through different rotational positions; and (b) using one or more processors (i) to process the multiple images to create a mosaiced image of a region of the retina; and (ii) to control movement of a light source that illuminates the first eye, based on movement of visual stimuli presented to a second eye of the human; wherein the first eye is off-axis with respect to the camera in at least one of the rotational positions.
  • This invention may be implemented as a method of imaging a retina of a first eye of a human, which method comprises, in combination: (a) using a camera to capture multiple images of different areas of the retina during a time period in which the first eye rotates through different rotational positions; and (b) using one or more processors (i) to process the multiple images to create a mosaiced image of a region of the retina; and (ii) to control movement of visual stimuli presented to a second eye of the human, based on movement of a light source that illuminates the first eye; wherein the first eye is off-axis with respect to the camera in at least one of the rotational positions.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

In illustrative embodiments, the invention comprises an apparatus for retinal self-imaging. Visual stimuli help the user self-align his or her eye with a camera. Bi-ocular coupling induces the test eye to rotate into different positions. As the eye rotates, a video is captured of different areas of the retina. Computational photography methods transform this video into a mosaiced image of a large area of the retina. An LED is pressed against the skin near the eye, to produce indirect, diffuse illumination of the retina. The camera has a wide field of view, and can image part of the retina even when the eye is off-axis (when the pupillary axis of the eye and the optical axis of the camera are not aligned). Alternatively, the retina is illuminated directly through the pupil, and different parts of a large lens are used to image different parts of the retina. Alternatively, a plenoptic camera is used for retinal imaging.
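The bi-ocular coupling summarized above, in which illumination of the test eye follows the visual stimuli shown to the other eye, can be sketched as a simple control loop. The code below is a hypothetical Python sketch assuming the LED-array variant of the methods above (turning different light sources on and off); `NUM_LEDS`, `led_for_stimulus`, and the `set_led_state` driver callback are invented names for illustration only, not the patent's hardware interface.

```python
import math

NUM_LEDS = 8  # hypothetical ring of LEDs around the test eye

def led_for_stimulus(stim_x, stim_y):
    """Map a stimulus position (centered display coordinates) to the index
    of the LED nearest the direction the coupled test eye will rotate."""
    angle = math.atan2(stim_y, stim_x) % (2 * math.pi)
    return round(angle / (2 * math.pi) * NUM_LEDS) % NUM_LEDS

def update_illumination(stim_x, stim_y, set_led_state):
    """Turn on only the LED that matches the current stimulus position.

    `set_led_state(index, on)` stands in for the real LED driver call,
    which this sketch does not model.
    """
    active = led_for_stimulus(stim_x, stim_y)
    for i in range(NUM_LEDS):
        set_led_state(i, i == active)

# Example: as the stimulus moves to the upper right, LED 1 lights up.
update_illumination(10.0, 10.0,
                    lambda i, on: print(f"LED {i}: {'on' if on else 'off'}"))
```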
PCT/US2014/016272 2013-02-13 2014-02-13 Methods and apparatus for retinal imaging WO2014127134A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/766,751 US9060718B2 (en) 2012-02-13 2013-02-13 Methods and apparatus for retinal imaging
US13/766,751 2013-02-13

Publications (1)

Publication Number Publication Date
WO2014127134A1 true WO2014127134A1 (fr) 2014-08-21

Family

ID=51354720

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/016272 WO2014127134A1 (fr) Methods and apparatus for retinal imaging

Country Status (1)

Country Link
WO (1) WO2014127134A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030107643A1 (en) * 2001-08-17 2003-06-12 Byoungyi Yoon Method and system for controlling the motion of stereoscopic cameras based on a viewer's eye motion
US7290878B1 (en) * 2003-10-15 2007-11-06 Albert John Hofeldt Machine for binocular testing and a process of formatting rival and non-rival stimuli
US20100073469A1 (en) * 2006-12-04 2010-03-25 Sina Fateh Methods and systems for amblyopia therapy using modified digital content
US20090161826A1 (en) * 2007-12-23 2009-06-25 Oraya Therapeutics, Inc. Methods and devices for orthovoltage ocular radiotherapy and treatment planning
US20120257166A1 (en) * 2011-04-07 2012-10-11 Raytheon Company Portable self-retinal imaging device

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3223232A4 (fr) * 2014-11-20 2018-07-18 Sony Corporation Control system, information processing device, control method, and program
WO2017025583A1 (fr) * 2015-08-12 2017-02-16 Carl Zeiss Meditec, Inc. Alignment improvements for ophthalmic diagnostic systems
US11672421B2 (en) 2015-08-12 2023-06-13 Carl Zeiss Meditec, Inc. Alignment improvements for ophthalmic diagnostic systems
US10849498B2 (en) 2015-08-12 2020-12-01 Carl Zeiss Meditec, Inc. Alignment improvements for ophthalmic diagnostic systems
CN108572450A (zh) * 2017-03-09 2018-09-25 Acer Incorporated Head-mounted display, field-of-view correction method thereof, and mixed reality display system
CN108572450B (zh) 2017-03-09 2021-01-29 Acer Incorporated Head-mounted display, field-of-view correction method thereof, and mixed reality display system
CN112512401A (zh) 2018-07-16 2021-03-16 Verily Life Sciences Llc Retinal camera with light baffle and dynamic illuminator for expanding eyebox
US11202567B2 (en) 2018-07-16 2021-12-21 Verily Life Sciences Llc Retinal camera with light baffle and dynamic illuminator for expanding eyebox
US11642025B2 (en) 2018-07-16 2023-05-09 Verily Life Sciences Llc Retinal camera with light baffle and dynamic illuminator for expanding eyebox
WO2020018296A1 (fr) * 2018-07-16 2020-01-23 Verily Life Sciences Llc Retinal camera with light baffle and dynamic illuminator for expanding eyebox
CN112512401B (zh) 2018-07-16 2024-05-28 Verily Life Sciences Llc Retinal imaging system and method of imaging a retina
WO2021108458A1 (fr) 2019-11-25 2021-06-03 Broadspot Imaging Corp Choroidal imaging
EP4064958A4 (fr) 2019-11-25 2022-12-21 BroadSpot Imaging Corp Choroidal imaging

Similar Documents

Publication Publication Date Title
US9295388B2 (en) Methods and apparatus for retinal imaging
CA2730720C (fr) Apparatus and method for imaging an eye
US7428001B2 (en) Materials and methods for simulating focal shifts in viewers using large depth of focus displays
US8391567B2 (en) Multimodal ocular biometric system
WO2017053871A2 (fr) Methods and devices for achieving improved visual acuity
US20150223683A1 (en) System For Synchronously Sampled Binocular Video-Oculography Using A Single Head-Mounted Camera
US11483537B2 (en) Stereoscopic mobile retinal imager
CN114222520A (zh) Ophthalmic testing system and method
US11439302B2 (en) System and method for visualization of ocular anatomy
JP2013527775A (ja) Apparatus and method for imaging an eye
KR20160010864A (ko) Ophthalmoscope
US20210335483A1 (en) Surgery visualization theatre
JP7195619B2 (ja) Ophthalmic imaging apparatus and system
WO2014127134A1 (fr) Methods and apparatus for retinal imaging
WO2021226134A1 (fr) Surgery visualization theatre
Kagawa et al. Variable field-of-view visible and near-infrared polarization compound-eye endoscope
EP3668370B1 (fr) Miniaturized indirect ophthalmoscopy for wide-field fundus photography
US20240127931A1 (en) Surgery visualization theatre
Coppin Mathematical modeling of a light field fundus camera and its applications to retinal imaging
EP4146115A1 (fr) Surgery visualization theatre
Boggess Integrated computational system for portable retinal imaging
JP2019103079A (ja) Iris imaging device and iris analysis system using the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14751659

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14751659

Country of ref document: EP

Kind code of ref document: A1